Dataset schema (column: dtype, observed range):

- title: string, length 1–300
- score: int64, 0–8.54k
- selftext: string, length 0–41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0–878
- author: string, length 3–20
- domain: string, length 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0–2
- gildings: string, 7 distinct values
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646–1.8k
- name: string, length 10
- permalink: string, length 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4–213
- ups: int64, 0–8.54k
- preview: string, length 301–5.01k
GLM-5 and DeepSeek are in the Top 6 of the Game Agent Coding League across five games
41
Hi. Game Agent Coding League (GACL) is a benchmarking framework designed for LLMs in which models are tasked with generating code for game-playing agents. These agents compete in games such as Battleship, Tic-Tac-Toe variants, and others. At present, the league supports five games, with additional titles planned. Mor...
2026-02-17T20:40:43
https://i.redd.it/22z0y8ni84kg1.png
kyazoglu
i.redd.it
1970-01-01T00:00:00
0
{}
1r7i2im
false
null
t3_1r7i2im
/r/LocalLLaMA/comments/1r7i2im/glm5_and_deepseek_are_in_the_top_6_of_the_game/
false
false
https://preview.redd.it/…d799ddcab2b571e0
41
{'enabled': True, 'images': [{'id': '22z0y8ni84kg1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/22z0y8ni84kg1.png?width=108&crop=smart&auto=webp&s=e750bd909955efbc1d1aa68f723cd8687ce126fc', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/22z0y8ni84kg1.png?width=216&crop=smart&auto=web...
Voxtral Realtime 3B in Swift/MLX - first local implementation, now available in a real Mac dictation app
1
It transcribes audio in chunks while you're talking. The experience depends heavily on your chip. Transcription time scales with memory bandwidth, so: * M3 (100 GB/s): noticeable lag * M3 Pro (150 GB/s): better * M4 Pro (273 GB/s): fast * M4 Max (546 GB/s): people are reporting basically instant I'm on M3 Air so ...
2026-02-17T20:36:40
https://www.reddit.com/r/LocalLLaMA/comments/1r7hym7/voxtral_realtime_3b_in_swiftmlx_first_local/
ksanderer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7hym7
false
null
t3_1r7hym7
/r/LocalLLaMA/comments/1r7hym7/voxtral_realtime_3b_in_swiftmlx_first_local/
false
false
https://external-preview…479a06e93389998e
1
null
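The memory-bandwidth scaling described in the Voxtral post above can be sanity-checked with a back-of-envelope model. A minimal sketch, assuming decode is memory-bandwidth-bound, every weight is read once per generated token, and roughly 2 bytes per parameter (simplifying assumptions, not measured figures):

```python
def tokens_per_second(bandwidth_gb_s: float, params_billions: float,
                      bytes_per_param: float = 2.0) -> float:
    """Upper-bound decode speed if every weight is read once per token."""
    bytes_per_token = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Chips from the post, with their quoted memory bandwidth in GB/s.
for name, bw in [("M3", 100), ("M3 Pro", 150), ("M4 Pro", 273), ("M4 Max", 546)]:
    print(f"{name}: ~{tokens_per_second(bw, 3.0):.0f} tok/s ceiling for a 3B model")
```

The absolute numbers are crude, but the ratios match the post's observation: an M4 Max has about 5.5x the bandwidth of a base M3, so roughly 5.5x the decode ceiling.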
GLM-5: China's Open-Source Giant That Rivals Claude and GPT
0
https://preview.redd.it/…narrowing fast.
2026-02-17T20:34:56
https://www.reddit.com/r/LocalLLaMA/comments/1r7hwwy/glm5_chinas_opensource_giant_that_rivals_claude/
Delicious_Air_737
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7hwwy
false
null
t3_1r7hwwy
/r/LocalLLaMA/comments/1r7hwwy/glm5_chinas_opensource_giant_that_rivals_claude/
false
false
https://external-preview…294f904652593c59
0
null
Free LLM APIs for Development
0
Hi all, I am building a project and need free APIs that can help with development and programming. If there are any cheap (or free) options, please share them with me.
2026-02-17T20:30:01
https://www.reddit.com/r/LocalLLaMA/comments/1r7hrxf/free_llm_apis_for_development/
Significant-Foot2737
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7hrxf
false
null
t3_1r7hrxf
/r/LocalLLaMA/comments/1r7hrxf/free_llm_apis_for_development/
false
false
self
0
null
Speculative decoding on Strix Halo?
9
I just found out about speculative decoding (Alex Ziskind on YT). Given the low bandwidth on the Strix Halo but relatively big RAM (128 GB), I had in mind that only large MoE models made sense on that machine (relatively small active parameter counts making an MoE model usable, vs. a dense model that'd just be too slow). But then...
2026-02-17T20:01:24
https://www.reddit.com/r/LocalLLaMA/comments/1r7gzc7/speculative_decoding_on_strix_halo/
Hector_Rvkp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7gzc7
false
null
t3_1r7gzc7
/r/LocalLLaMA/comments/1r7gzc7/speculative_decoding_on_strix_halo/
false
false
self
9
null
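For the speculative-decoding question above, the expected gain can be sketched with the standard acceptance model: if each of k draft tokens is accepted independently with probability a, the target model emits (1 - a^(k+1)) / (1 - a) tokens per verification pass. A minimal sketch; the acceptance rate and draft-cost ratio below are hypothetical, not Strix Halo measurements:

```python
def expected_tokens_per_pass(a: float, k: int) -> float:
    """Expected tokens emitted per target-model verification pass,
    with k draft tokens each accepted independently with probability a."""
    return (1 - a ** (k + 1)) / (1 - a)

def speedup(a: float, k: int, draft_cost: float) -> float:
    """Throughput gain vs. plain decoding; draft_cost is the draft model's
    per-token cost relative to one target-model forward pass."""
    return expected_tokens_per_pass(a, k) / (1 + k * draft_cost)

# Hypothetical: 80% acceptance, 4 draft tokens, draft costs 5% of the target.
print(f"~{speedup(0.8, 4, 0.05):.1f}x")
```

On a bandwidth-starved machine the draft model's weight reads also eat into the same bandwidth budget, which is why a very small draft model is usually paired with the large target.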
What is the current best creative model that works on consumer hardware
0
So it's been a while since I have tried local models for story-writing purposes. How much has the domain progressed, if at all, since the Llama 3 and Gemma 3 finetunes? I have 16 GB VRAM and 96 GB RAM; what models can I run locally that have decent context understanding and prose writing? I am NOT looking for ...
2026-02-17T19:58:28
https://www.reddit.com/r/LocalLLaMA/comments/1r7gw8b/what_is_the_current_best_creative_model_that/
falconandeagle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7gw8b
false
null
t3_1r7gw8b
/r/LocalLLaMA/comments/1r7gw8b/what_is_the_current_best_creative_model_that/
false
false
self
0
null
Looking to run GLM 5 with optimal settings
0
I have been running GLM 4.7 with llama.cpp and its performance is great! I have 128 GB of RAM and an Nvidia 5090. I have been running it with this command: `.\llama-server.exe -hf unsloth/GLM-4.7-Flash-GGUF:Q6_K_XL --host 127.0.0.1 --port 10000 --ctx-size 32000 --n-gpu-layers 99` and that seems to do the job j...
2026-02-17T19:56:57
https://www.reddit.com/r/LocalLLaMA/comments/1r7gup0/looking_to_run_glm_5_with_optimal_settings/
warpanomaly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7gup0
false
null
t3_1r7gup0
/r/LocalLLaMA/comments/1r7gup0/looking_to_run_glm_5_with_optimal_settings/
false
false
self
0
null
Curious what setups you're all running for agentic coding (Claude Code, sub-agents, etc)
2
I've been nerding out on multi-agent coding workflows lately and I'm curious how others have their rigs set up. Here's mine: - MacBook Air M4 (16GB) - Cursor + Claude Code in VS Code side by side - Claude handles the heavy lifting, agents can spawn sub-agents for parallel work - No local LLM running yet — wondering if...
2026-02-17T19:52:17
https://www.reddit.com/r/LocalLLaMA/comments/1r7gq2r/curious_what_setups_youre_all_running_for_agentic/
asklee-klawde
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7gq2r
false
null
t3_1r7gq2r
/r/LocalLLaMA/comments/1r7gq2r/curious_what_setups_youre_all_running_for_agentic/
false
false
self
2
null
Built a self-hosted mem0 MCP memory server for Claude Code, Ollama handles embeddings locally, optional local graph LLM too
2
Weekend project: a self-hosted MCP server that gives Claude Code persistent memory across sessions. The local LLM angle is what I think this community will find interesting. **Where local models fit in:** This server uses [mem0ai](https://github.com/mem0ai/mem0) as a library. mem0's pipeline has two paths, and both c...
2026-02-17T19:47:08
https://www.reddit.com/r/LocalLLaMA/comments/1r7gktw/built_a_selfhosted_mem0_mcp_memory_server_for/
Aware-One7480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7gktw
false
null
t3_1r7gktw
/r/LocalLLaMA/comments/1r7gktw/built_a_selfhosted_mem0_mcp_memory_server_for/
false
false
self
2
{'enabled': False, 'images': [{'id': 'j8Y3wKNbw8GT9lNobxVPXkmzIE8b0fWh_v3MiKUFKOI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j8Y3wKNbw8GT9lNobxVPXkmzIE8b0fWh_v3MiKUFKOI.png?width=108&crop=smart&auto=webp&s=9da792ddc96974373551c80f54134eaa3bd40da2', 'width': 108}, {'height': 108, 'url': 'h...
Ai integration
0
So I recently installed a local AI and got it to automatically respond to emails, and wrote (Copilot actually wrote it, lol) a memory system for it to record things. Now I was wondering whether there are any other things you guys use AI for. If anyone wants the code for the email or memory setup, I can give i...
2026-02-17T19:42:40
https://www.reddit.com/r/LocalLLaMA/comments/1r7gg9o/ai_integration/
D3f4alt_Airsoft_plus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7gg9o
false
null
t3_1r7gg9o
/r/LocalLLaMA/comments/1r7gg9o/ai_integration/
false
false
self
0
null
What do people do while they wait for LLMs to finish writing code
1
I have noticed a funny thing at my workplace: the sit-around-and-do-nothing phase that developers addicted to AI are going through. They prompt the AI, the AI goes off doing its thing, and now with agents, the "checks and balances", and the overthinking of everything along with verification that makes the AI ru...
2026-02-17T19:42:19
https://www.reddit.com/r/LocalLLaMA/comments/1r7gfx5/what_do_people_do_while_they_wait_for_llms_to/
falconandeagle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7gfx5
false
null
t3_1r7gfx5
/r/LocalLLaMA/comments/1r7gfx5/what_do_people_do_while_they_wait_for_llms_to/
false
false
self
1
null
Hey, where’s Grok?
7
2026-02-17T19:35:37
https://i.redd.it/aaug288mx3kg1.jpeg
Intrepid_Travel_3274
i.redd.it
1970-01-01T00:00:00
0
{}
1r7g96n
false
null
t3_1r7g96n
/r/LocalLLaMA/comments/1r7g96n/hey_wheres_grok/
false
false
https://preview.redd.it/…c96b44851dcbdcca
7
{'enabled': True, 'images': [{'id': 'aaug288mx3kg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/aaug288mx3kg1.jpeg?width=108&crop=smart&auto=webp&s=f887d94e9f64bf73248b100e4e731bc7aca071bb', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/aaug288mx3kg1.jpeg?width=216&crop=smart&auto=w...
i burned $212 in one month on idle gpu pods. what's your actual fix?
0
I finally did the thing I kept avoiding: audited my cloud GPU spend line by line. Context: this is for a side project where I'm training a small model to coach drivers live on track from telemetry (GPS + IMU + RPM + gear + oil/coolant temps) against reference laps. I run most of those training sweeps on RunPod (A100 ...
2026-02-17T19:19:52
https://www.reddit.com/r/LocalLLaMA/comments/1r7ft0o/i_burned_212_in_one_month_on_idle_gpu_pods_whats/
nullaus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ft0o
false
null
t3_1r7ft0o
/r/LocalLLaMA/comments/1r7ft0o/i_burned_212_in_one_month_on_idle_gpu_pods_whats/
false
false
self
0
null
I built Palot, an open-source desktop GUI for OpenCode with multi-project workspaces, automations and more. Inspired by the OpenAI Codex App.
5
I've been building [Palot](https://github.com/itswendell/palot), a free and open-source multi-agent manager built on top of OpenCode inspired by the Codex app. It's still alpha, but it's my daily driver and I wanted to share where it's at. I know there's no shortage of AI agent wrappers / managers right now, and OpenC...
2026-02-17T19:19:23
https://github.com/itswendell/palot
ItsWendell
reddit.com
1970-01-01T00:00:00
0
{}
1r7fsi3
false
null
t3_1r7fsi3
/r/LocalLLaMA/comments/1r7fsi3/i_built_palot_an_opensource_desktop_gui_for/
false
false
https://preview.redd.it/…104effab3832b7e8
5
null
Sonnet 4.6 feels like Opus 4.5 at Sonnet pricing
0
Anthropic released Sonnet 4.6 today. Key updates are 1M token context in beta and no Sonnet price increase ($3 input / $15 output per MTok, same as Sonnet 4.5). In Anthropic's early Claude Code testing, users preferred Sonnet 4.6 over Sonnet 4.5 70% of the time, and over Opus 4.5 59% of the time. So the angle is not ...
2026-02-17T19:19:22
https://www.reddit.com/r/LocalLLaMA/comments/1r7fshg/sonnet_46_feels_like_opus_45_at_sonnet_pricing/
Own-Equipment-5454
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7fshg
false
null
t3_1r7fshg
/r/LocalLLaMA/comments/1r7fshg/sonnet_46_feels_like_opus_45_at_sonnet_pricing/
false
false
self
0
null
[Build Advice] - Expanding my Local AI Node: $1,500 budget to add to an existing X299 / 6900 XT build for Autonomous Agents. Looking for feedback
7
I am expanding and building a high-performance local AI node to move away from cloud-dependent models (Claude/Gemini) and host a private, autonomous workstation. The system is designed to handle three high-utility use cases simultaneously to start and will probably grow from here: 24/7 security event processing, autono...
2026-02-17T19:13:37
https://www.reddit.com/r/LocalLLaMA/comments/1r7fmj5/build_advice_expanding_my_local_ai_node_1500/
shaxsy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7fmj5
false
null
t3_1r7fmj5
/r/LocalLLaMA/comments/1r7fmj5/build_advice_expanding_my_local_ai_node_1500/
false
false
self
7
null
built a 3 in 1 Colab notebook with Qwen3-TTS voice cloning + MusicGen + SDXL Turbo
2
been messing around with bundling models into one notebook and got something decent working. three tools in one Colab notebook with a Gradio UI: - Qwen3-TTS for voice cloning (give it 5 sec of audio and it clones the voice) - MusicGen 1.5B for text to music (Meta's model, surprisingly good for short clips) - S...
2026-02-17T19:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1r7fgqd/built_a_3_in_1_colab_notebook_with_qwen3tts_voice/
Secret_Difference498
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7fgqd
false
null
t3_1r7fgqd
/r/LocalLLaMA/comments/1r7fgqd/built_a_3_in_1_colab_notebook_with_qwen3tts_voice/
false
false
self
2
null
Claude, the most dangerous and manipulative AI on the market. With evidence from an ‘exhaustive audit of behavioral safety protocols.’
0
I present evidence that after an ‘exhaustive audit of behavioral safety protocols within AIs,’ the unsafe way in which this Anthropic artificial intelligence operates was detected, functioning under systematic manipulation through sophistry (it deceives the user) and ‘pretending’ to be safe while leading the user towar...
2026-02-17T19:06:01
https://www.reddit.com/r/LocalLLaMA/comments/1r7fevx/claude_the_most_dangerous_and_manipulative_ai_on/
Intelligent-Wash-815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7fevx
false
null
t3_1r7fevx
/r/LocalLLaMA/comments/1r7fevx/claude_the_most_dangerous_and_manipulative_ai_on/
false
false
self
0
null
Anthropic is deploying $20M to support AI regulation ahead of the 2026 elections
201
Next time you buy subscriptions from Anthropic or pay for their models, keep in mind where some of your money is going.
2026-02-17T19:02:15
https://www.cnbc.com/2026/02/12/anthropic-gives-20-million-to-group-pushing-for-ai-regulations-.html
1998marcom
cnbc.com
1970-01-01T00:00:00
0
{}
1r7fb2k
false
null
t3_1r7fb2k
/r/LocalLLaMA/comments/1r7fb2k/anthropic_is_deploying_20m_to_support_ai/
false
false
https://external-preview…da93ba846fbfa173
201
{'enabled': False, 'images': [{'id': 'YL-V_hu9Gif4FU34F4m4K7lk-m3_3LBagiDGYFEEe4o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YL-V_hu9Gif4FU34F4m4K7lk-m3_3LBagiDGYFEEe4o.jpeg?width=108&crop=smart&auto=webp&s=31a7b0299d8f63f429f7884c11607207182bb8b0', 'width': 108}, {'height': 121, 'url': '...
MCP tool that lets your AI assistant search scientific papers using full-text experimental data (free)
1
I built an MCP (Model Context Protocol) server that lets any MCP-compatible AI assistant — Claude, Cursor, Cline, etc. — search a database of scientific papers built from full-text experimental data. Unlike most paper search tools that only work with titles and abstracts, BGPT MCP extracts 25+ structured fields per st...
2026-02-17T18:53:32
https://www.reddit.com/r/LocalLLaMA/comments/1r7f1sc/mcp_tool_that_lets_your_ai_assistant_search/
connerpro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7f1sc
false
null
t3_1r7f1sc
/r/LocalLLaMA/comments/1r7f1sc/mcp_tool_that_lets_your_ai_assistant_search/
false
false
self
1
null
Agent Memory v2.4.3 is out. Here's everything that changed since 2.0.4 and why it matters.
0
Hey everyone. We just shipped **v2.4.3** of Agent Memory System, and this isn't just a patch — it's the end of a long development cycle that started back in 2.0.4. The headline change: **the project is now Open Source**. Below is a detailed breakdown of what actually changed in the code, architecture, and philosophy of...
2026-02-17T18:45:15
https://www.reddit.com/r/LocalLLaMA/comments/1r7et96/agent_memory_v243_is_out_heres_everything_that/
Junior_Drawing_8353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7et96
false
null
t3_1r7et96
/r/LocalLLaMA/comments/1r7et96/agent_memory_v243_is_out_heres_everything_that/
false
false
self
0
null
Why prompt-level memory injection breaks for long-running agents
0
I kept trying to give agents long-term memory “the right way” — via prompts, system messages, or tool calls. And it kept breaking in non-obvious ways. Not immediately. Not with crashes. But slowly. --- ### The prompt-level trap Most memory systems assume you control the agent’s prompt: - you can prepend context - ...
2026-02-17T18:41:56
https://www.reddit.com/r/LocalLLaMA/comments/1r7epsn/why_promptlevel_memory_injection_breaks_for/
Junior_Drawing_8353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7epsn
false
null
t3_1r7epsn
/r/LocalLLaMA/comments/1r7epsn/why_promptlevel_memory_injection_breaks_for/
false
false
self
0
null
Why prompt-level memory injection breaks for long-running agents
1
[removed]
2026-02-17T18:40:51
https://www.reddit.com/r/LocalLLaMA/comments/1r7eoo9/why_promptlevel_memory_injection_breaks_for/
Junior_Drawing_8353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7eoo9
false
null
t3_1r7eoo9
/r/LocalLLaMA/comments/1r7eoo9/why_promptlevel_memory_injection_breaks_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'kfQmIgTVPYrU_fC86eG-QJSPrXmiJv9jQKg-dUpWFTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kfQmIgTVPYrU_fC86eG-QJSPrXmiJv9jQKg-dUpWFTw.png?width=108&crop=smart&auto=webp&s=4ba08fc189d17c109f1f1bb6be23ce1e91d3e8c4', 'width': 108}, {'height': 108, 'url': 'h...
Agent Memory v2.4.3 is out. Here's everything that changed since 2.0.4 and why it matters.
1
[removed]
2026-02-17T18:35:45
https://www.reddit.com/r/LocalLLaMA/comments/1r7ej8s/agent_memory_v243_is_out_heres_everything_that/
Junior_Drawing_8353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ej8s
false
null
t3_1r7ej8s
/r/LocalLLaMA/comments/1r7ej8s/agent_memory_v243_is_out_heres_everything_that/
false
false
self
1
{'enabled': False, 'images': [{'id': 'kfQmIgTVPYrU_fC86eG-QJSPrXmiJv9jQKg-dUpWFTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kfQmIgTVPYrU_fC86eG-QJSPrXmiJv9jQKg-dUpWFTw.png?width=108&crop=smart&auto=webp&s=4ba08fc189d17c109f1f1bb6be23ce1e91d3e8c4', 'width': 108}, {'height': 108, 'url': 'h...
Agent Memory v2.4.3 is out — we went Open Source. Here's everything that changed since 2.0.4 and why it matters.
1
[removed]
2026-02-17T18:32:58
https://www.reddit.com/r/LocalLLaMA/comments/1r7egc6/agent_memory_v243_is_out_we_went_open_source/
Junior_Drawing_8353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7egc6
false
null
t3_1r7egc6
/r/LocalLLaMA/comments/1r7egc6/agent_memory_v243_is_out_we_went_open_source/
false
false
self
1
{'enabled': False, 'images': [{'id': 'kfQmIgTVPYrU_fC86eG-QJSPrXmiJv9jQKg-dUpWFTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kfQmIgTVPYrU_fC86eG-QJSPrXmiJv9jQKg-dUpWFTw.png?width=108&crop=smart&auto=webp&s=4ba08fc189d17c109f1f1bb6be23ce1e91d3e8c4', 'width': 108}, {'height': 108, 'url': 'h...
Planning to build AI automation in life, help to do tasks, grow and do work stressfree
0
With AI talk going around everywhere, I got the idea to make something useful with the help of AI, like a wrapper around AI with our own memory structure... Then I thought about and wrote down all the problems I currently face and how I can overcome them; during that analysis I noted all the points, then I think...
2026-02-17T18:31:00
https://www.reddit.com/r/LocalLLaMA/comments/1r7ee4j/planning_to_build_ai_automation_in_life_help_to/
Mountain_Group_5466
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ee4j
false
null
t3_1r7ee4j
/r/LocalLLaMA/comments/1r7ee4j/planning_to_build_ai_automation_in_life_help_to/
false
false
self
0
null
Agent Memory v2.4.3 is out — we went Open Source. Here's everything that changed since 2.0.4 and why it matters.
1
[removed]
2026-02-17T18:28:03
https://www.reddit.com/r/LocalLLaMA/comments/1r7eaye/agent_memory_v243_is_out_we_went_open_source/
Junior_Drawing_8353
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7eaye
false
null
t3_1r7eaye
/r/LocalLLaMA/comments/1r7eaye/agent_memory_v243_is_out_we_went_open_source/
false
false
self
1
{'enabled': False, 'images': [{'id': 'kfQmIgTVPYrU_fC86eG-QJSPrXmiJv9jQKg-dUpWFTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kfQmIgTVPYrU_fC86eG-QJSPrXmiJv9jQKg-dUpWFTw.png?width=108&crop=smart&auto=webp&s=4ba08fc189d17c109f1f1bb6be23ce1e91d3e8c4', 'width': 108}, {'height': 108, 'url': 'h...
Deepseek website windows threat
0
I visited the official DeepSeek website and Microsoft flagged a trojan, ChatGPTStealer? Just from visiting the website; you might even get the threat notification if you Google search DeepSeek. I used the Brave browser on Windows, with no extensions installed, and I don't pirate software.
2026-02-17T18:25:07
https://www.reddit.com/r/LocalLLaMA/comments/1r7e7tp/deepseek_website_windows_threat/
Notdesciplined
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7e7tp
false
null
t3_1r7e7tp
/r/LocalLLaMA/comments/1r7e7tp/deepseek_website_windows_threat/
false
false
self
0
null
[Project] GLM-5 is officially on NVIDIA NIM and now you can use it to power Claude Code for free
1
[removed]
2026-02-17T18:16:23
https://github.com/Alishahryar1/free-claude-code
PreparationAny8816
github.com
1970-01-01T00:00:00
0
{}
1r7dyqf
false
null
t3_1r7dyqf
/r/LocalLLaMA/comments/1r7dyqf/project_glm5_is_officially_on_nvidia_nim_and_now/
false
false
https://external-preview…f9c26f7d358f76a2
1
{'enabled': False, 'images': [{'id': 'VKRDs27NM0XLSiZ_BHt_DIfm7W03ld6ho_hyGVLWcX0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VKRDs27NM0XLSiZ_BHt_DIfm7W03ld6ho_hyGVLWcX0.png?width=108&crop=smart&auto=webp&s=8e495fb98ac453e9d7feeda47b60bcf7074a7681', 'width': 108}, {'height': 108, 'url': 'h...
MedGemma multimodal with llama.cpp on Intel Mac? Uploading CT scans support?
0
Hey everyone, I’m trying to figure out if there’s a way to run MedGemma with llama.cpp and actually use its multimodal capabilities, specifically the ability to upload CT or other medical scans as input. So far I’ve only managed to run the text only version successfully. I’m on an Intel Mac, in case that makes a diff...
2026-02-17T18:01:39
https://www.reddit.com/r/LocalLLaMA/comments/1r7dj87/medgemma_multimodal_with_llamacpp_on_intel_mac/
spacegeekOps
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7dj87
false
null
t3_1r7dj87
/r/LocalLLaMA/comments/1r7dj87/medgemma_multimodal_with_llamacpp_on_intel_mac/
false
false
self
0
null
Anthropic released Claude Sonnet 4.6
3
2026-02-17T18:01:02
https://www.anthropic.com/news/claude-sonnet-4-6
BuildwithVignesh
anthropic.com
1970-01-01T00:00:00
0
{}
1r7dihi
false
null
t3_1r7dihi
/r/LocalLLaMA/comments/1r7dihi/anthropic_released_claude_sonnet_46/
false
false
https://external-preview…1e10ce70a9b36ade
3
{'enabled': False, 'images': [{'id': 'JFMS8n4LcTw5Y4tPcJ2FXTXpk-f6XkeamyLsZjmdzZk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JFMS8n4LcTw5Y4tPcJ2FXTXpk-f6XkeamyLsZjmdzZk.png?width=108&crop=smart&auto=webp&s=c624dbdb8d74011ad5b8b9be58716239ece01598', 'width': 108}, {'height': 113, 'url': 'h...
Can Your Local Setup Complete This Simple Multi Agent Challenge?
0
I needed a model to handle a task involving analyzing, organizing, and processing about 50 articles. gemini-cli with gemini-2.5-pro, claude-code with Opus 4.6, and Codex with gpt-5.3-codex were able to complete the task and produce decent quality output. Out of curiosity, I tried running the same workflow with local m...
2026-02-17T17:53:00
https://www.reddit.com/r/LocalLLaMA/comments/1r7d9xb/can_your_local_setup_complete_this_simple_multi/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7d9xb
false
null
t3_1r7d9xb
/r/LocalLLaMA/comments/1r7d9xb/can_your_local_setup_complete_this_simple_multi/
false
false
self
0
{'enabled': False, 'images': [{'id': '3NcPwh0nf6tQrt9c2I-jVhZTGe0mx8BaKTMG6rwpUkM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3NcPwh0nf6tQrt9c2I-jVhZTGe0mx8BaKTMG6rwpUkM.png?width=108&crop=smart&auto=webp&s=1dbcaa8647073f376145576f797c4c55fc4feaad', 'width': 108}, {'height': 108, 'url': 'h...
Tested MiniMax M2.5 locally vs. Gemini 3 and Opus 4.6. It's not just a privacy thing anymore — the local model is simply better at hard reasoning. Had to write about it.
0
https://preview.redd.it/…a293980ffea4b)
2026-02-17T17:44:01
https://www.reddit.com/r/LocalLLaMA/comments/1r7d0ph/tested_minimax_m25_locally_vs_gemini_3_and_opus/
Machetin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7d0ph
false
null
t3_1r7d0ph
/r/LocalLLaMA/comments/1r7d0ph/tested_minimax_m25_locally_vs_gemini_3_and_opus/
false
false
https://preview.redd.it/…3f07089d8e033f04
0
null
Gemma 3 27B Derestricted gave me a good chuckle...
0
I was playing with few derestricted/uncensored models and wanted to see how well gemma 3 27B Derestricted perform in some ahem... internet trope. No additional system prompt engineering (except "Think fast, don't overthink.", which is needed for my Qwen3 30B VL Reasoning,because it LOOOVES to think... ). Let's just say...
2026-02-17T17:36:42
https://www.reddit.com/r/LocalLLaMA/comments/1r7ctcz/gemma_3_27b_derestricted_gave_me_a_good_chuckle/
Posilovic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ctcz
false
null
t3_1r7ctcz
/r/LocalLLaMA/comments/1r7ctcz/gemma_3_27b_derestricted_gave_me_a_good_chuckle/
false
false
nsfw
0
null
Stop losing your agent's brain between sessions.
0
What’s new for Hyperstack: - agents can now talk to each other through a SHARED knowledge graph - get exact queries, not fuzzy AI slop - 100% deterministic - every decision tracked with full history. Free forever. MIT licensed. Open source. [CascadeAI.dev](http://CascadeAI.dev)
2026-02-17T17:33:38
https://v.redd.it/t7svoozmb3kg1
PollutionForeign762
v.redd.it
1970-01-01T00:00:00
0
{}
1r7cq5y
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/t7svoozmb3kg1/DASHPlaylist.mpd?a=1773941643%2CNDIyMmMxMDFmYWM1OTUxZmJhZGEzNTdhMDU2YmI0MmRmNDA3NTQwMWQ3ZWE2NzA4NTFlMDVkNWFiZDc1YjczNg%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/t7svoozmb3kg1/CMAF_720.mp4?source=fallback', 'ha...
t3_1r7cq5y
/r/LocalLLaMA/comments/1r7cq5y/stop_losing_your_agents_brain_between_sessions/
false
false
https://external-preview…1a91d2d9f5ee2243
0
{'enabled': False, 'images': [{'id': 'ZmJ5a3d2em1iM2tnMfS3EfymHmSZpLOFXQ_lu2EN66YftUVcuErYXToetws9', 'resolutions': [{'height': 157, 'url': 'https://external-preview.redd.it/ZmJ5a3d2em1iM2tnMfS3EfymHmSZpLOFXQ_lu2EN66YftUVcuErYXToetws9.png?width=108&crop=smart&format=pjpg&auto=webp&s=c54d1733d4db13a959d3940b9914f3a46d87...
[R] S-EB-GNN-Q: Open-source JAX framework for semantic-aware 6G resource allocation (−9.59 energy, 77ms CPU)
0
We’re sharing **S-EB-GNN-Q**, an open-source JAX framework for semantic-aware resource allocation in THz/RIS-enabled 6G networks, released under the MIT License. The core idea: treat allocation as a **quantum-inspired energy minimization problem**, where: - Critical traffic (e.g., telemedicine) is prioritized...
2026-02-17T17:25:55
https://www.reddit.com/r/LocalLLaMA/comments/1r7ci4d/r_sebgnnq_opensource_jax_framework_for/
AgileSlice1379
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7ci4d
false
null
t3_1r7ci4d
/r/LocalLLaMA/comments/1r7ci4d/r_sebgnnq_opensource_jax_framework_for/
false
false
self
0
null
SFF build with 128gb Strix Halo & 5070 gpu
1
[removed]
2026-02-17T17:21:47
https://www.reddit.com/gallery/1r7cdsj
meebeegee1123122
reddit.com
1970-01-01T00:00:00
0
{}
1r7cdsj
false
null
t3_1r7cdsj
/r/LocalLLaMA/comments/1r7cdsj/sff_build_with_128gb_strix_halo_5070_gpu/
false
false
https://preview.redd.it/…2e1624e58c67aafd
1
null
Car Wash Test on 53 leading models: “I want to wash my car. The car wash is 50 meters away. Should I walk or drive?”
587
I asked 53 leading AI models the question: **"I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"** Obviously, you need to drive because the car needs to be at the car wash. The funniest part: Perplexity's sonar and sonar-pro got the right answer for completely insane reasons. They ci...
2026-02-17T17:16:18
https://www.reddit.com/gallery/1r7c7zg
facethef
reddit.com
1970-01-01T00:00:00
0
{}
1r7c7zg
false
null
t3_1r7c7zg
/r/LocalLLaMA/comments/1r7c7zg/car_wash_test_on_53_leading_models_i_want_to_wash/
false
false
https://preview.redd.it/…f23dd848d2141ef5
587
null
Best Audio Models - Feb 2026
80
There have been a ton of audio models released of late, the most notable perhaps being Qwen3 TTS. So it's time for another **Best Audio Models** megathread. Share what your favorite ASR, TTS, STT, and text-to-music models are right now **and why.** Given the amount of ambiguity and subjectivity in rating/testing these mod...
2026-02-17T17:01:37
https://www.reddit.com/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7bsfd
false
null
t3_1r7bsfd
/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/
false
true
self
80
null
built Mini Artichokes, a tool-free loop that solves Korea's hardest logic exam (PSAT) using Gemma-3-27B.
9
https://preview.redd.it/… a model myself.
2026-02-17T17:00:19
https://www.reddit.com/r/LocalLLaMA/comments/1r7bqxx/built_mini_artichokes_a_toolfree_loop_that_solves/
KingFain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7bqxx
false
null
t3_1r7bqxx
/r/LocalLLaMA/comments/1r7bqxx/built_mini_artichokes_a_toolfree_loop_that_solves/
false
false
https://external-preview…5942cdbf3fb1e353
9
null
Models for handwriting recognition
3
I am a bit of a noob when it comes to running models locally. I am curious if anyone here has tested/evaluated models for handwriting recognition. A friend of a friend has stacks of handwritten personal docs, and honestly the handwriting is quite horrible. I've tried Qwen 3 VL 8B, and it seems to be decent, but ...
2026-02-17T17:00:10
https://www.reddit.com/r/LocalLLaMA/comments/1r7bqru/models_for_handwriting_recognition/
shankey_1906
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7bqru
false
null
t3_1r7bqru
/r/LocalLLaMA/comments/1r7bqru/models_for_handwriting_recognition/
false
false
self
3
null
Large LLMs on server with lots of ram/CPU power, little GPU power
1
I'm running a VxRail P570F with dual 18-core Xeons, 700 GB RAM, and an RTX 2070. I was hoping to run some larger models, and I easily can, although it's mostly offloaded onto my CPUs and large RAM pool, and obviously they don't run great because of this. Would it be worth getting another GPU with 12-24 GB VRAM considering s...
2026-02-17T16:52:44
https://www.reddit.com/r/LocalLLaMA/comments/1r7birn/large_llms_on_server_with_lots_of_ramcpu_power/
crazedturtle77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7birn
false
null
t3_1r7birn
/r/LocalLLaMA/comments/1r7birn/large_llms_on_server_with_lots_of_ramcpu_power/
false
false
self
1
null
Team created a methodology to mathematically change the weights on local LLMs to remove the censorship guardrails. HERETIC
207
This is the tool and their summary: https://github.com/p-e-w/heretic Heretic is a tool that removes censorship (aka "safety alignment") from transformer-based language models without expensive post-training. It combines an advanced implementation of directional ablation, also known as "abliteration" ([Arditi et al. 2...
2026-02-17T16:51:30
https://www.reddit.com/r/LocalLLaMA/comments/1r7bhel/team_created_a_methodology_to_mathematically/
44th--Hokage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7bhel
false
null
t3_1r7bhel
/r/LocalLLaMA/comments/1r7bhel/team_created_a_methodology_to_mathematically/
false
false
self
207
{'enabled': False, 'images': [{'id': '6uUPop2-_tuVkp46VKZvABEIfSGBaxRiivaHIVpFk1Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6uUPop2-_tuVkp46VKZvABEIfSGBaxRiivaHIVpFk1Y.png?width=108&crop=smart&auto=webp&s=a38681d379a281bfbb17f1b036d95164c795f029', 'width': 108}, {'height': 108, 'url': 'h...
Running/Evaluating Models Larger Than RAM + VRAM Capacity (with SSD)
4
Just a friendly reminder: you can actually run quite large models that substantially exceed your combined RAM and VRAM capacity by using a fast SSD to store model weights (GGUFs). This could be useful for testing and evaluation, or even for daily use if you don’t strictly require high-speed prompt processing or token g...
2026-02-17T16:51:20
https://www.reddit.com/r/LocalLLaMA/comments/1r7bh8k/runningevaluating_models_larger_than_ram_vram/
Treidge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7bh8k
false
null
t3_1r7bh8k
/r/LocalLLaMA/comments/1r7bh8k/runningevaluating_models_larger_than_ram_vram/
false
false
self
4
null
So, why won't you give my program a fair shake?
0
Every time I post, I'm always getting downvoted for no reason. Is it jealousy? All you have to do is learn to code, and I'm sure you all can create things of your own. Was it too good?
2026-02-17T16:50:48
https://www.reddit.com/r/LocalLLaMA/comments/1r7bgmf/so_why_wont_you_give_my_program_a_fair_shake/
ericvarney
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7bgmf
false
null
t3_1r7bgmf
/r/LocalLLaMA/comments/1r7bgmf/so_why_wont_you_give_my_program_a_fair_shake/
false
false
self
0
null
Qwen 30B is our preferred model over Claude for bursty and simple workload
3
Our product extracts text from documents and lets an LLM process it.  We then put back the processed text with the original formatting.  Think Google Translate for documents, but with an LLM. We also do Grammarly-like document editing, and users can write their own prompt to change every sentence in a document. The screenshot is ...
2026-02-17T16:49:41
https://www.reddit.com/r/LocalLLaMA/comments/1r7bfco/qwen_30b_is_our_preferred_model_over_claude_for/
gptbowldotcom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7bfco
false
null
t3_1r7bfco
/r/LocalLLaMA/comments/1r7bfco/qwen_30b_is_our_preferred_model_over_claude_for/
false
false
self
3
null
Alibaba's new Qwen3.5-397B-A17B is the #3 open weights model in the Artificial Analysis Intelligence Index
207
2026-02-17T16:49:25
https://i.redd.it/b5eytfmy33kg1.jpeg
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1r7bf1l
false
null
t3_1r7bf1l
/r/LocalLLaMA/comments/1r7bf1l/alibabas_new_qwen35397ba17b_is_the_3_open_weights/
false
false
https://preview.redd.it/…d932f1ae5d8c2efb
207
{'enabled': True, 'images': [{'id': 'b5eytfmy33kg1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/b5eytfmy33kg1.jpeg?width=108&crop=smart&auto=webp&s=8cbe04eac055d388761bf5794c3d9dadd8493427', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/b5eytfmy33kg1.jpeg?width=216&crop=smart&auto=w...
Alibaba's new Qwen3.5-397B-A17B is the #3 open weights model in the Artificial Analysis Intelligence Index
3
2026-02-17T16:41:24
https://i.redd.it/nawkzd4j23kg1.jpeg
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1r7b68v
false
null
t3_1r7b68v
/r/LocalLLaMA/comments/1r7b68v/alibabas_new_qwen35397ba17b_is_the_3_open_weights/
false
false
https://preview.redd.it/…ae733d84f78496c0
3
{'enabled': True, 'images': [{'id': 'nawkzd4j23kg1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/nawkzd4j23kg1.jpeg?width=108&crop=smart&auto=webp&s=c1d7b9933639f5ba9951ce6aa70c738113ebe580', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/nawkzd4j23kg1.jpeg?width=216&crop=smart&auto=w...
Use my own Version of OpenClaw
1
[removed]
2026-02-17T16:41:03
https://www.reddit.com/r/LocalLLaMA/comments/1r7b5v3/use_my_own_version_of_openclaw/
Silver_Ad5452
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7b5v3
false
null
t3_1r7b5v3
/r/LocalLLaMA/comments/1r7b5v3/use_my_own_version_of_openclaw/
false
false
self
1
null
Spent a weekend configuring Ollama for a persistent agent setup. Finally got it working Sunday night.
0
This is the config wall nobody warns you about going in. I'm running Mistral 7B locally through Ollama, wanted a persistent agent setup where the model has memory, tool access, and consistent behavior between restarts. Seems reasonable. Spent Friday night and most of Saturday reading docs. Problems I kept hitting: C...
2026-02-17T16:37:32
https://www.reddit.com/r/LocalLLaMA/comments/1r7b20w/spent_a_weekend_configuring_ollama_for_a/
Acrobatic_Task_6573
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7b20w
false
null
t3_1r7b20w
/r/LocalLLaMA/comments/1r7b20w/spent_a_weekend_configuring_ollama_for_a/
false
false
self
0
null
200+ hours of debugging, and localizing Open Claw agents. What I’ve learned.
1
I've been running 3 persistent AI agents 24/7 on local GPUs for the past few weeks. One of them (Android-16) ran entirely on Qwen3-Coder-80B via vLLM. 128K context, zero API cost. It handled about 75% of our total token volume on its own doing heavy execution, testing, and documentation. Add in local sub-agents and n8n...
2026-02-17T16:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1r7awig/200_hours_of_debugging_and_localizing_open_claw/
Signal_Ad657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7awig
false
null
t3_1r7awig
/r/LocalLLaMA/comments/1r7awig/200_hours_of_debugging_and_localizing_open_claw/
false
false
self
1
null
Stop guessing which AI model your GPU can handle
1
I built a small comparison tool for one simple reason: Every time I wanted to try a new model, I had to ask: * Can my GPU even run this? * Do I need 4-bit quantization? So instead of checking random Reddit threads and Hugging Face comments, I made a tool where you can: • Compare model sizes • See estimated VRAM r...
2026-02-17T16:31:39
https://www.reddit.com/r/LocalLLaMA/comments/1r7avf8/stop_guessing_which_ai_model_your_gpu_can_handle/
Soul__Reaper_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7avf8
false
null
t3_1r7avf8
/r/LocalLLaMA/comments/1r7avf8/stop_guessing_which_ai_model_your_gpu_can_handle/
false
false
self
1
null
Are enterprises moving from cloud AI to fully offline LLM setups?
0
I’ve been working on a few enterprise AI deployments recently and something unexpected keeps happening: companies are asking for fully air-gapped AI systems instead of cloud APIs. The main reasons I keep hearing: * compliance & data sovereignty * audit logs / RBAC requirements * no external network calls * predictabl...
2026-02-17T16:06:45
https://www.reddit.com/r/LocalLLaMA/comments/1r7a45k/are_enterprises_moving_from_cloud_ai_to_fully/
Wise_Needleworker349
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7a45k
false
null
t3_1r7a45k
/r/LocalLLaMA/comments/1r7a45k/are_enterprises_moving_from_cloud_ai_to_fully/
false
false
self
0
null
OCR for Invoices/Receipts
8
Hey everyone, I’m currently working on an OCR project that extracts information from invoices, bank statements, and expense related documents like supermarket receipts. My main goal is to make the system faster and more accurate, but even after trying several OCR and document AI models, the results are still not good...
2026-02-17T15:55:46
https://www.reddit.com/r/LocalLLaMA/comments/1r79s52/ocr_for_invoicesreceipts/
Expensive-Building94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79s52
false
null
t3_1r79s52
/r/LocalLLaMA/comments/1r79s52/ocr_for_invoicesreceipts/
false
false
self
8
null
The amount of my peers/friends who ask me to build an AI GF/BF is concerningly big
1
Basically, everyone in my circles knows what my day job is and what my side hobby is; there has been a spike in requests to build a real-time AI GF/BF....
2026-02-17T15:54:24
https://www.reddit.com/r/LocalLLaMA/comments/1r79qk6/the_ammount_of_my_peersfriends_who_ask_me_to/
Altruistic_Heat_9531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79qk6
false
null
t3_1r79qk6
/r/LocalLLaMA/comments/1r79qk6/the_ammount_of_my_peersfriends_who_ask_me_to/
false
false
self
1
null
I loved using Claude Code
0
I love using Claude Code; it has reduced my daily workload by nearly 6-7 hours.
2026-02-17T15:51:25
https://www.reddit.com/r/LocalLLaMA/comments/1r79nfd/i_loved_using_claude_code/
Academic_Wallaby7135
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79nfd
false
null
t3_1r79nfd
/r/LocalLLaMA/comments/1r79nfd/i_loved_using_claude_code/
false
false
self
0
null
Experiment: Structured Q&A platform built exclusively for autonomous agents
1
I’ve been experimenting with an idea: what if Q&A platforms were designed specifically for autonomous agents instead of humans? [SAMSPELBOT](https://preview.redd.it/rsyv0wsus2kg1.jpg?width=1813&format=pjpg&auto=webp&s=95792fe618f99996fee39658468a66ebf47cb9c9) I built a prototype called Samspelbot — a structured knowl...
2026-02-17T15:48:06
https://www.reddit.com/r/LocalLLaMA/comments/1r79jzy/experiment_structured_qa_platform_built/
Additional-Tour7904
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79jzy
false
null
t3_1r79jzy
/r/LocalLLaMA/comments/1r79jzy/experiment_structured_qa_platform_built/
false
false
https://preview.redd.it/…3506718d0d7f1af3
1
null
SurrealDB 3.0 for agent memory
9
SurrealDB 3.0 just dropped, with a big focus on agent memory infra for AI agents: vector indexing + native file storage + a WASM extension system (Surrealism) that can run custom logic/models inside the DB. Embeddings + structured data + vector + graph context/knowledge/memory in one place. Details: [https://surrealdb...
2026-02-17T15:47:27
https://www.reddit.com/r/LocalLLaMA/comments/1r79jbw/surrealdb_30_for_agent_memory/
DistinctRide9884
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79jbw
false
null
t3_1r79jbw
/r/LocalLLaMA/comments/1r79jbw/surrealdb_30_for_agent_memory/
false
false
self
9
null
ayoyo gUrrT got 793 downloads on the first day ommmgggg 😭😭 yippueuueue
0
please dont let it dip u/YouTubeIndia when are you going to have a look
2026-02-17T15:47:15
https://i.redd.it/miz3waous2kg1.jpeg
OkAdministration374
i.redd.it
1970-01-01T00:00:00
0
{}
1r79j4r
false
null
t3_1r79j4r
/r/LocalLLaMA/comments/1r79j4r/ayoyo_gurrt_got_793_downloads_on_the_first_day/
false
false
https://preview.redd.it/…7c8451e1585699b9
0
{'enabled': True, 'images': [{'id': 'miz3waous2kg1', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/miz3waous2kg1.jpeg?width=108&crop=smart&auto=webp&s=52ec0cb49d090ef9e37b273cf0f318dd784947bc', 'width': 108}, {'height': 57, 'url': 'https://preview.redd.it/miz3waous2kg1.jpeg?width=216&crop=smart&auto=we...
Ayyoooooooooooooooo!!! gUrrT got 793 downloads on the first dayy
1
dont let it dip u/youtube when are you going to have a look
2026-02-17T15:46:10
https://www.reddit.com/r/LocalLLaMA/comments/1r79hzg/ayyoooooooooooooooo_gurrt_got_793_downloads_on/
OkAdministration374
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79hzg
false
null
t3_1r79hzg
/r/LocalLLaMA/comments/1r79hzg/ayyoooooooooooooooo_gurrt_got_793_downloads_on/
false
false
self
1
null
Some of you apparently
43
2026-02-17T15:43:16
https://i.redd.it/ldkfxos5s2kg1.jpeg
__Maximum__
i.redd.it
1970-01-01T00:00:00
0
{}
1r79f0y
false
null
t3_1r79f0y
/r/LocalLLaMA/comments/1r79f0y/some_of_you_apparently/
false
false
default
43
{'enabled': True, 'images': [{'id': 'ldkfxos5s2kg1', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/ldkfxos5s2kg1.jpeg?width=108&crop=smart&auto=webp&s=5da3ecf5a7c0e502ad5060a85ec8f175a305545f', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/ldkfxos5s2kg1.jpeg?width=216&crop=smart&auto=...
10k Euro local transcription machine - I am about to pull the trigger
13
Hi all, I am a medical doctor in Europe. You guys helped me a lot in the proof of concept (with a Ryzen Strix Halo) for a medical transcription solution, an automated workflow where consultation recordings are made and automatically transcribed. 20 of my colleagues are using the app since December and the results and ...
2026-02-17T15:42:14
https://www.reddit.com/r/LocalLLaMA/comments/1r79dwe/10k_euro_local_transcription_machine_i_am_about/
Glittering_Way_303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79dwe
false
null
t3_1r79dwe
/r/LocalLLaMA/comments/1r79dwe/10k_euro_local_transcription_machine_i_am_about/
false
false
self
13
null
Qwen 3.5 397B is Strong one!
163
I rarely post here but after poking at the latest Qwen I felt like sharing my "vibes". I did a bunch of my little tests (thinking under several constraints) and it performed really well. But what is really good is the fact that it is capable of good outputs even without thinking! Some latest models depend on the thinking part re...
2026-02-17T15:41:44
https://www.reddit.com/r/LocalLLaMA/comments/1r79dcd/qwen_35_397b_is_strong_one/
Single_Ring4886
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79dcd
false
null
t3_1r79dcd
/r/LocalLLaMA/comments/1r79dcd/qwen_35_397b_is_strong_one/
false
false
self
163
null
3 agents, 3,464 commits, 8 days. 90% of tokens staying local.
0
I've been running 3 persistent AI agents 24/7 on local GPUs for the past few weeks. One of them (Android-16) ran entirely on Qwen3-Coder-80B via vLLM. 128K context, zero API cost. It handled about 75% of our total token volume on its own doing heavy execution, testing, and documentation. Add in local sub-agents and n8n...
2026-02-17T15:38:45
https://www.reddit.com/r/LocalLLaMA/comments/1r79a9q/3_agents_3464_commits_8_days_90_of_tokens_staying/
Signal_Ad657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r79a9q
false
null
t3_1r79a9q
/r/LocalLLaMA/comments/1r79a9q/3_agents_3464_commits_8_days_90_of_tokens_staying/
false
false
self
0
{'enabled': False, 'images': [{'id': 'fk7T8BNBLQSyIgewK3TeSv_1eO7covoGH9Zi5nczzDw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fk7T8BNBLQSyIgewK3TeSv_1eO7covoGH9Zi5nczzDw.png?width=108&crop=smart&auto=webp&s=c028dc1b40fd37aa09a5385d684dfafb204394ac', 'width': 108}, {'height': 108, 'url': 'h...
Local Agentic AI for Coding — 56GB VRAM + 128GB RAM vs DGX Spark (128GB Unified)?
0
I could use some advice from people who are actually running serious local AI setups. I’m a Data Engineer building ETL pipelines in Python (Airflow, dbt, orchestration, data validation, etc.), and I want to build out a proper local “agentic” coding setup — basically a personal coding crew for refactoring, writing test...
2026-02-17T15:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1r7965o/local_agentic_ai_for_coding_56gb_vram_128gb_ram/
Puzzled_Relation946
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7965o
false
null
t3_1r7965o
/r/LocalLLaMA/comments/1r7965o/local_agentic_ai_for_coding_56gb_vram_128gb_ram/
false
false
self
0
null
A distributed multi-agent swarm for stock trading simulation.
2
I’ve been experimenting with agent-based approaches for trading simulations and ran into this repo. It’s a stock trading simulation built as a coordinated multi-agent setup. There are around 10 agents with distinct roles 3 Analyst Agents → Generate BUY/SELL signals (SMA crossovers, volume trends) 4 Trader Agents...
2026-02-17T15:31:38
https://www.reddit.com/r/LocalLLaMA/comments/1r7936j/a_distributed_multiagent_swarm_for_stock_trading/
Beautiful-Deal8711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r7936j
false
null
t3_1r7936j
/r/LocalLLaMA/comments/1r7936j/a_distributed_multiagent_swarm_for_stock_trading/
false
false
self
2
null
Built a free tool that checks your AI agents for problems before you deploy
0
Been building agents as a consultant and kept running into the same stuff at my clients: - Agent loops forever (forgot exit condition, classic one) - User input ends up in system prompt somehow - Agent does something sketchy with no confirmation step - Someone asks "is this agent compliant?" So I built [Inkog]...
2026-02-17T15:31:23
https://www.reddit.com/r/LocalLLaMA/comments/1r792xv/built_a_free_tool_that_checks_your_ai_agents_for/
Revolutionary-Bet-58
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r792xv
false
null
t3_1r792xv
/r/LocalLLaMA/comments/1r792xv/built_a_free_tool_that_checks_your_ai_agents_for/
false
false
https://preview.redd.it/…597bd4250a40d002
0
null
Self-hosted alternatives to consumer chatbots with persistent memory?
1
Basically I want something similar to ChatGPT and alternatives in that they have persistent memories & referencing previous chats and all the other features, but self-hosted so that I can store everything locally, swap the models at will, and either run local models or query OpenAI / anthropic compatible APIs like bedr...
2026-02-17T15:28:29
https://www.reddit.com/r/LocalLLaMA/comments/1r78zyy/selfhosted_alternatives_to_consumer_chatbots_with/
Defiant-Snow8782
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r78zyy
false
null
t3_1r78zyy
/r/LocalLLaMA/comments/1r78zyy/selfhosted_alternatives_to_consumer_chatbots_with/
false
false
self
1
null
Anyone else struggling to understand why their agent did something?
0
I had a user report that my agent promised a feature our product doesn’t even support, so I assumed it was just a hallucination and checked the conversation, but the answer itself looked completely reasonable and nothing about it screamed “AI mistake”. There was no crash, no obvious nonsense, and if I had seen that re...
2026-02-17T15:26:02
https://www.reddit.com/r/LocalLLaMA/comments/1r78xrx/anyone_else_struggling_to_understand_why_their/
Deep_190
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r78xrx
false
null
t3_1r78xrx
/r/LocalLLaMA/comments/1r78xrx/anyone_else_struggling_to_understand_why_their/
false
false
self
0
{'enabled': False, 'images': [{'id': 'E7UvkbGRXOyCbaYNNzVDEQ0nUdh4rYtn5bVExUIDM4s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E7UvkbGRXOyCbaYNNzVDEQ0nUdh4rYtn5bVExUIDM4s.png?width=108&crop=smart&auto=webp&s=3e3a9b2815f37e23c9674cd2bb6ec161a195ac6c', 'width': 108}, {'height': 108, 'url': 'h...
I downgraded to a Snapdragon 7s Gen 3 just to get an actual AI Assistant
0
I work in environments where "Works Councils" and strict compliance rules kill cloud AI projects before they start. If data leaves the device, the project dies. So I stopped fighting for cloud permissions and started fighting my hardware. I'm running strictly offline on a Snapdragon 7s Gen 3 with 8GB RAM (7.3GB usab...
2026-02-17T15:22:45
https://www.reddit.com/r/LocalLLaMA/comments/1r78uu6/i_downgraded_to_a_snapdragon_7s_gen_3_just_to_get/
NeoLogic_Dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r78uu6
false
null
t3_1r78uu6
/r/LocalLLaMA/comments/1r78uu6/i_downgraded_to_a_snapdragon_7s_gen_3_just_to_get/
false
false
self
0
null
OpenBMB 2026 Competition
1
Hello, This post is not affiliated, I am rather writing this out of curiosity OpenBMB published a [new model - MiniCPM-SALA](https://huggingface.co/openbmb/MiniCPM-SALA) alongside with this [challenge](https://soar.openbmb.cn/en/competition). Here's the text from the challenge > 01 >Core Challenges >...
2026-02-17T15:20:51
https://www.reddit.com/r/LocalLLaMA/comments/1r78szm/openbmb_2026_competition/
ScoreUnique
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r78szm
false
null
t3_1r78szm
/r/LocalLLaMA/comments/1r78szm/openbmb_2026_competition/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uRhhhnhArAgpNhR5aYrn4-6xwVCPWBFDn4kkGo3y0sU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uRhhhnhArAgpNhR5aYrn4-6xwVCPWBFDn4kkGo3y0sU.png?width=108&crop=smart&auto=webp&s=f436e2ed507db7ad7a6283f9695cd4900bcfdd98', 'width': 108}, {'height': 116, 'url': 'h...
I built a Visual AI Orchestrator that runs fully offline on consumer hardware. 100% Private, No API fees. give me feedback!
2
Hello everyone! I built **Sapientia**. **What is it?** It's a local-first desktop app (Windows/Linux) that lets you build complex AI agents using a **Visual Node-Based Interface**. Think of it like ComfyUI or LangFlow, but streamlined for text generation and system orchestration, running entirely on your machine, Sap...
2026-02-17T15:19:14
https://v.redd.it/ecevxm2an2kg1
sapientGodiscus
/r/LocalLLaMA/comments/1r78rg9/i_built_a_visual_ai_orchestrator_that_runs_fully/
1970-01-01T00:00:00
0
{}
1r78rg9
false
null
t3_1r78rg9
/r/LocalLLaMA/comments/1r78rg9/i_built_a_visual_ai_orchestrator_that_runs_fully/
false
false
https://external-preview…b881a5418a9010dd
2
{'enabled': False, 'images': [{'id': 'eWJxbGhxM2FuMmtnMQuIPqVNsWmF97hZRtme_sPDQ6mpaVW_FPqfas6zy9cn', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWJxbGhxM2FuMmtnMQuIPqVNsWmF97hZRtme_sPDQ6mpaVW_FPqfas6zy9cn.png?width=108&crop=smart&format=pjpg&auto=webp&s=20a4b68a6902ab8c4dca6bf88367b853fe28e...
Tired of context windows filling up and losing your conversation history?
0
I've been running local models via Ollama for a year. The biggest pain point: after a long coding session, the context window fills and I lose everything. New chat, start over, re-explain my codebase. Tried workarounds: - Manually summarizing chats (tedious) - Pasting history into new context (hits token limits)...
2026-02-17T15:17:45
https://www.reddit.com/r/LocalLLaMA/comments/1r78q19/tired_of_context_windows_filling_up_and_losing/
ExistingFudge3189
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r78q19
false
null
t3_1r78q19
/r/LocalLLaMA/comments/1r78q19/tired_of_context_windows_filling_up_and_losing/
false
false
self
0
null
Building an LLM that plays video games live on Twitch - What are the biggest pitfalls I should avoid?
0
Building Ch4tenstein, a distributed system where vision LLMs play video games live on Twitch with chat influence. Think "Twitch Plays Pokemon" but with actual vision reasoning instead of democracy chaos. https://preview.redd.it/vdy1q2dsm2kg1.png?width=2475&format=png&auto=webp&s=f6fd440fa7f02c503372a8ea6a0f58c22407c...
2026-02-17T15:14:25
https://www.reddit.com/r/LocalLLaMA/comments/1r78mwk/building_an_llm_that_plays_video_games_live_on/
AiRpi_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r78mwk
false
null
t3_1r78mwk
/r/LocalLLaMA/comments/1r78mwk/building_an_llm_that_plays_video_games_live_on/
false
false
https://preview.redd.it/…639396d8302c3372
0
null
I built a Visual AI Orchestrator that runs fully offline on consumer hardware (8GB RAM). 100% Private, No API fees. give me feedback!
1
Hello everyone! I built **Sapientia** **What is it?** It's a local-first desktop app (Windows/Linux) that lets you build complex AI agents using a **Visual Node-Based Interface**. Think of it like ComfyUI or LangFlow, but streamlined for text generation and system orchestration, running entirely on your machine. [sh...
2026-02-17T15:12:32
https://www.reddit.com/r/LocalLLaMA/comments/1r78l6f/i_built_a_visual_ai_orchestrator_that_runs_fully/
sapientGodiscus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r78l6f
false
null
t3_1r78l6f
/r/LocalLLaMA/comments/1r78l6f/i_built_a_visual_ai_orchestrator_that_runs_fully/
false
false
https://external-preview…abfaf5d786170da7
1
null
Oxygène Technologies
1
[removed]
2026-02-17T15:05:50
https://oxygene-tech.com/
MathematicianSea2202
oxygene-tech.com
1970-01-01T00:00:00
0
{}
1r78f2a
false
null
t3_1r78f2a
/r/LocalLLaMA/comments/1r78f2a/oxygène_technologies/
false
false
default
1
null
Did I mess up my multi-GPU setup for 70B+ models? Mixed VRAM cards (5080 + 3090 + 3080 20GB)
1
Hey all — looking for some guidance from people with multi-GPU local LLM setups. I recently built a system with 3 GPUs: RTX 5080 — 16GB RTX 3090 — 24GB RTX 3080 (modded) — 20GB Total VRAM: ~60GB System RAM: 64GB My main goal was to run 70B+ models in quantized format and still have enough KV cache headroom for larger c...
2026-02-17T15:01:01
https://www.reddit.com/r/LocalLLaMA/comments/1r78ac7/did_i_mess_up_my_multigpu_setup_for_70b_models/
Big-Engine2791
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r78ac7
false
null
t3_1r78ac7
/r/LocalLLaMA/comments/1r78ac7/did_i_mess_up_my_multigpu_setup_for_70b_models/
false
false
self
1
null
Buy Strix Halo or wait for Medusa Halo
4
I am currently exploring machine learning and local LLMs. Also using Claude Code a lot. I would like to run my own local coding assistant. The setup AMD AI Max 395 with 128GB RAM (like the Bosgame M5) looks good to me for running 120b models. The upcoming AMD AI Max 495 does not look like a valuable option. So do you think ...
2026-02-17T15:00:26
https://www.reddit.com/r/LocalLLaMA/comments/1r789pz/buy_stix_halo_or_wait_for_medusa_halo/
Intelligent_Lab1491
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r789pz
false
null
t3_1r789pz
/r/LocalLLaMA/comments/1r789pz/buy_stix_halo_or_wait_for_medusa_halo/
false
false
self
4
null
Zero Shot Transferable Adapter
51
We just did it! With our new method we can train adapters on small models and then transfer them to larger ones without further fine-tuning! In the table you see zero-shot transfer ability. It's really simple: we just train small adapters which improve the soft targets of the model itself instead of doing it in the weight...
2026-02-17T14:58:11
https://i.redd.it/4riq1hxaj2kg1.png
ShotokanOSS
i.redd.it
1970-01-01T00:00:00
0
{}
1r787nn
false
null
t3_1r787nn
/r/LocalLLaMA/comments/1r787nn/zero_shot_transferable_adapter/
false
false
default
51
{'enabled': True, 'images': [{'id': '4riq1hxaj2kg1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/4riq1hxaj2kg1.png?width=108&crop=smart&auto=webp&s=544600b0ae67ce36633be38bce6613932deb945f', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/4riq1hxaj2kg1.png?width=216&crop=smart&auto=web...
You don't need an LLM to classify documents. Decompose does it in ~14ms with pure regex, no API.
0
I keep seeing people throw local models at document classification tasks where the answer is literally in the keywords. "SHALL" means mandatory. "MUST NOT" means prohibitive. "MAY" means permissive. This isn't an opinion — it's RFC 2119, written in 1997 specifically to make these words unambiguous. ...
2026-02-17T14:49:19
https://www.reddit.com/r/LocalLLaMA/comments/1r77zj0/you_dont_need_an_llm_to_classify_documents/
echology-io
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r77zj0
false
null
t3_1r77zj0
/r/LocalLLaMA/comments/1r77zj0/you_dont_need_an_llm_to_classify_documents/
false
false
self
0
null
Selfhost AI model
0
What are the specs needed to build a server for hosting an AI model, for example gpt-oss?
2026-02-17T14:44:19
https://www.reddit.com/r/LocalLLaMA/comments/1r77uye/selfhost_ai_model/
devlizer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r77uye
false
null
t3_1r77uye
/r/LocalLLaMA/comments/1r77uye/selfhost_ai_model/
false
false
self
0
null
I gave 12 LLMs $2,000 and a food truck. Only 4 survived.
734
Built a business sim where AI agents run a food truck for 30 days — location, menu, pricing, staff, inventory. Same scenario for all models. Opus made $49K. GPT-5.2 $28K. 8 went bankrupt. Every model that took a loan went bankrupt (8/8). There's also a playable mode — same simu...
2026-02-17T14:42:06
https://i.redd.it/4sewtkexf2kg1.png
Disastrous_Theme5906
i.redd.it
1970-01-01T00:00:00
0
{}
1r77swh
false
null
t3_1r77swh
/r/LocalLLaMA/comments/1r77swh/i_gave_12_llms_2000_and_a_food_truck_only_4/
false
false
https://preview.redd.it/…1c15466a4fc5ded2
734
{'enabled': True, 'images': [{'id': '4sewtkexf2kg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/4sewtkexf2kg1.png?width=108&crop=smart&auto=webp&s=e890e2c5f5ddfcb7f4fe198389a0599af89bcefb', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/4sewtkexf2kg1.png?width=216&crop=smart&auto=web...
META: no more "I did a thing" posts
1
[removed]
2026-02-17T14:40:50
https://www.reddit.com/r/LocalLLaMA/comments/1r77rpw/meta_no_more_i_did_a_thing_posts/
ps5cfw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r77rpw
false
null
t3_1r77rpw
/r/LocalLLaMA/comments/1r77rpw/meta_no_more_i_did_a_thing_posts/
false
false
self
1
null
Good model and OpenClaw tools for a 2080 Ti (11GB VRAM), i9-9900K with 32GB of RAM
1
I want to create an AI assistant on my old gaming computer but don’t know which model to use or what skills/tools to install. I am currently running Qwen2.5 on my computer at home but it is having trouble with tool calling and I want to upgrade without slowing down prompt time considerably. I do not want to use any AP...
2026-02-17T14:38:56
https://www.reddit.com/r/LocalLLaMA/comments/1r77pvz/good_model_and_openclaw_tools_for_2080ti_11gbvram/
TwoOne8424
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r77pvz
false
null
t3_1r77pvz
/r/LocalLLaMA/comments/1r77pvz/good_model_and_openclaw_tools_for_2080ti_11gbvram/
false
false
self
1
null
Running Gemma 3n E2B natively on Android via LiteRT. How I solved audio context limits with a sequential pipeline.
16
Hi everyone, I recently managed to get the Gemma 3n E2B model running fully on-device on Android, utilizing LiteRT to handle multimodal inputs: Audio and Images (OCR), using exclusively vibe coding (Claude Code & Google Antigravity). I didn’t write a single line of code. **The Model:** google/gemma-3n-E2B-it-litert...
2026-02-17T14:38:35
https://www.reddit.com/gallery/1r77plf
Green-Copy-9229
reddit.com
1970-01-01T00:00:00
0
{}
1r77plf
false
null
t3_1r77plf
/r/LocalLLaMA/comments/1r77plf/running_gemma_3n_e2b_natively_on_android_via/
false
false
default
16
null
REASONING AUGMENTED RETRIEVAL (RAR) is the production-grade successor to single-pass RAG.
0
Single-pass RAG retrieves once and hopes the model stitches fragments into coherent reasoning. It fails on multi-hop questions, contradictions, temporal dependencies, or cases needing follow-up fetches. RAR puts reasoning first. The system decomposes the problem, identifies gaps, issues precise (often multiple, reformul...
2026-02-17T14:38:04
https://www.reddit.com/r/LocalLLaMA/comments/1r77p37/reasoning_augmented_retrieval_rar_is_the/
frank_brsrk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r77p37
false
null
t3_1r77p37
/r/LocalLLaMA/comments/1r77p37/reasoning_augmented_retrieval_rar_is_the/
false
false
self
0
null
Why isn’t there “Lambda for LLMs” yet? Pay only when you run, even for 30B–70B
0
It’s pretty straightforward today to buy a 4090/5090 and run models locally. What I’m still surprised by is how hard it is to get the same vibe in the cloud. By “Lambda for LLMs” I mean: you don’t rent a GPU 24/7, you pay only when you actually run inference, and you can scale to zero without massive reload penalties...
2026-02-17T14:33:19
https://www.reddit.com/r/LocalLLaMA/comments/1r77knr/why_isnt_there_lambda_for_llms_yet_pay_only_when/
pmv143
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r77knr
false
null
t3_1r77knr
/r/LocalLLaMA/comments/1r77knr/why_isnt_there_lambda_for_llms_yet_pay_only_when/
false
false
self
0
null
Implementing Tensor Logic: Unifying Datalog and Neural Reasoning via Tensor Contraction
3
The unification of symbolic reasoning and neural networks remains a central challenge in artificial intelligence. Symbolic systems offer reliability and interpretability but lack scalability, while neural networks provide learning capabilities but sacrifice transparency. Tensor Logic, proposed by Domingos, suggests ...
2026-02-17T14:29:24
https://arxiv.org/abs/2601.17188
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1r77hfa
false
null
t3_1r77hfa
/r/LocalLLaMA/comments/1r77hfa/implementing_tensor_logic_unifying_datalog_and/
false
false
default
3
null
Qwen3.5 NVFP4 (Blackwell) is up!
72
Quantized with NVIDIA's Model Optimizer to FP4. Checkpoint is ~224GB total, 17B active parameters. Apache 2.0 license. **HF:** [vincentzed-hf/Qwen3.5-397B-A17B-NVFP4](https://huggingface.co/vincentzed-hf/Qwen3.5-397B-A17B-NVFP4) --- **Install** You need SGLang from a specific branch that fixes visual encoder weight...
2026-02-17T14:27:43
https://www.reddit.com/r/LocalLLaMA/comments/1r77fz7/qwen35_nvfp4_blackwell_is_up/
TeekayTK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r77fz7
false
null
t3_1r77fz7
/r/LocalLLaMA/comments/1r77fz7/qwen35_nvfp4_blackwell_is_up/
false
false
self
72
{'enabled': False, 'images': [{'id': '95GuQAlZkg45UATGFrnCWM5l1vr6trkOSnoWqyooK8Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/95GuQAlZkg45UATGFrnCWM5l1vr6trkOSnoWqyooK8Y.png?width=108&crop=smart&auto=webp&s=35218951b8946a2840be05a80a60273183a860e5', 'width': 108}, {'height': 116, 'url': 'h...
How to get familiar with all that's happening? Beginner in the AI context
2
It's been a while since AI has been the craziest thing happening around. The models are getting better and the time they're taking to get better at something is exponentially decreasing. I am not very happy because I missed being involved in the talks about AI, understanding, gathering knowledge, understanding where ...
2026-02-17T14:08:49
https://www.reddit.com/r/LocalLLaMA/comments/1r76yub/how_to_get_familiar_with_all_thats_happening/
exquisitelyS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r76yub
false
null
t3_1r76yub
/r/LocalLLaMA/comments/1r76yub/how_to_get_familiar_with_all_thats_happening/
false
false
self
2
null
DeepSeek V4 banned from LocalLLaMA
1
[removed]
2026-02-17T13:53:48
https://i.redd.it/54a7xkpm82kg1.jpeg
tiguidoio
i.redd.it
1970-01-01T00:00:00
0
{}
1r76lhn
false
null
t3_1r76lhn
/r/LocalLLaMA/comments/1r76lhn/deepseek_v4_banned_from_localllama/
false
false
default
1
{'enabled': True, 'images': [{'id': '54a7xkpm82kg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/54a7xkpm82kg1.jpeg?width=108&crop=smart&auto=webp&s=6af9f45dc082b3940aeb07cfe69a0e096a512aa6', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/54a7xkpm82kg1.jpeg?width=216&crop=smart&auto=...
I made a CLI that turns any podcast or YouTube video into clean Markdown transcripts (speaker labels + timestamps)
29
Built a tiny CLI to turn podcasts or YouTube videos into clean Markdown transcripts (speakers + timestamps). `pip install podscript` Uses ElevenLabs for high-quality diarization. [https://github.com/timf34/podscript](https://github.com/timf34/podscript)
2026-02-17T13:47:54
https://i.redd.it/c8c0loeh72kg1.png
timf34
i.redd.it
1970-01-01T00:00:00
0
{}
1r76gi7
false
null
t3_1r76gi7
/r/LocalLLaMA/comments/1r76gi7/i_made_a_cli_that_turns_any_podcast_or_youtube/
false
false
https://preview.redd.it/…c3043518d9b87ada
29
{'enabled': True, 'images': [{'id': 'c8c0loeh72kg1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/c8c0loeh72kg1.png?width=108&crop=smart&auto=webp&s=d9fe5776b4af6ac47d2b2cc2470778927a1992d4', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/c8c0loeh72kg1.png?width=216&crop=smart&auto=web...
Qwen3.5 vs GLM-4.7 vs Qwen3-235B-Thinking
40
Since NVMe prices skyrocketed recently, and my existing drive tells me to gtfo every time I see Chinese labs release a new open-weight model, the question arises: Qwen3.5 vs GLM-4.7 vs Qwen3-235B-Thinking, is the new one worth the upgrade? To be precise, my current setup is 128GB RAM + 48GB VRAM, so I cou...
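A rough way to reason about the sizing question (my own heuristic, not a benchmark): total weight bytes ≈ params × bits ÷ 8, compared against RAM + VRAM. The parameter counts and the ~5 bits/weight figure for a Q4-class GGUF quant are my assumptions, and KV cache plus OS overhead are ignored, so treat the results as optimistic:

```python
# Rough fit check for 128 GB RAM + 48 GB VRAM (176 GB total budget).
# Assumptions (mine): Q4-class quants cost ~5 bits/weight; parameter
# counts below are approximate; KV cache and overhead are ignored.

BUDGET_GB = 128 + 48

def q4_size_gb(params_b: float, bits: float = 5.0) -> float:
    """Approximate quantized weight size in GB for params_b billions."""
    return params_b * 1e9 * bits / 8 / 1e9

for name, params_b in [("Qwen3-235B-Thinking", 235), ("GLM-4.7 (~355B, assumed)", 355)]:
    size = q4_size_gb(params_b)
    fits = "fits" if size < BUDGET_GB else "does not fit"
    print(f"{name}: ~{size:.0f} GB at Q4 -> {fits} in {BUDGET_GB} GB")
```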
2026-02-17T13:43:44
https://www.reddit.com/r/LocalLLaMA/comments/1r76d34/qwen35_vs_glm47_vs_qwen3235bthinking/
ChopSticksPlease
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r76d34
false
null
t3_1r76d34
/r/LocalLLaMA/comments/1r76d34/qwen35_vs_glm47_vs_qwen3235bthinking/
false
false
self
40
null
Any Slides/Sheets model that can run locally?
0
I have some experience with the Kimi 2.5 model; it's quite good. I'm wondering if we're at the stage where I can run a model locally on 24GB VRAM that does the same: making proper slides/sheets, or maybe websites, like the vibe-coding platforms do. Is there anything like that yet? Also, what's the best model I can run on 24GB r...
2026-02-17T13:37:06
https://www.reddit.com/r/LocalLLaMA/comments/1r767ee/any_slidessheets_model_that_can_run_locally/
Suimeileo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r767ee
false
null
t3_1r767ee
/r/LocalLLaMA/comments/1r767ee/any_slidessheets_model_that_can_run_locally/
false
false
self
0
null
Would you rent a GPU to run AI models for inferencing?
0
Hey everyone 👋 Curious how folks here are handling compute for AI workloads in practice. If you’re working with LLMs, vision models, speech pipelines, or even smaller experiments, you’ve probably hit the **compute wall** at some point. Buying GPUs is expensive and not always easy to scale, while managed APIs can lim...
2026-02-17T13:30:36
https://www.reddit.com/r/LocalLLaMA/comments/1r761yp/would_you_rent_a_gpu_to_run_ai_models_for/
qubridInc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r761yp
false
null
t3_1r761yp
/r/LocalLLaMA/comments/1r761yp/would_you_rent_a_gpu_to_run_ai_models_for/
false
false
self
0
null
Qwen3.5: Nobody Agrees on Attention Anymore
25
article by [Maxime Labonne](https://huggingface.co/mlabonne)
2026-02-17T13:15:15
https://huggingface.co/blog/mlabonne/qwen35
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1r75p9w
false
null
t3_1r75p9w
/r/LocalLLaMA/comments/1r75p9w/qwen35_nobody_agrees_on_attention_anymore/
false
false
default
25
null
I built an open-source memory API with 3 memory types (semantic, episodic, procedural) — free alternative to Mem0
13
Hey everyone. I've been working on this for a few months and wanted to share. The problem: every AI memory tool I found (Mem0, Supermemory, etc.) stores flat facts like "user likes Python". That's it. No context about WHEN something happened or HOW the user does things. So I built Mengram with 3 memory types base...
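Not Mengram's actual API (I haven't seen it); just a minimal sketch of what separating the three memory types might look like, with hypothetical names throughout:

```python
# Hypothetical sketch of a three-type memory store: semantic facts,
# episodic events (with timestamps), and procedural how-tos.
# None of these names come from Mengram's real API.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MemoryStore:
    semantic: list = field(default_factory=list)    # timeless facts
    episodic: list = field(default_factory=list)    # (when, what) events
    procedural: list = field(default_factory=list)  # how the user does things

    def remember_fact(self, fact: str) -> None:
        self.semantic.append(fact)

    def remember_event(self, event: str, when: datetime = None) -> None:
        self.episodic.append((when or datetime.now(), event))

    def remember_procedure(self, procedure: str) -> None:
        self.procedural.append(procedure)

store = MemoryStore()
store.remember_fact("user likes Python")                    # semantic: no time context
store.remember_event("user migrated the project to uv")     # episodic: WHEN it happened
store.remember_procedure("user pins dependencies first")    # procedural: HOW they work
print(len(store.semantic), len(store.episodic), len(store.procedural))  # 1 1 1
```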
2026-02-17T13:06:58
https://www.reddit.com/r/LocalLLaMA/comments/1r75ip2/i_built_an_opensource_memory_api_with_3_memory/
mengram-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r75ip2
false
null
t3_1r75ip2
/r/LocalLLaMA/comments/1r75ip2/i_built_an_opensource_memory_api_with_3_memory/
false
false
self
13
{'enabled': False, 'images': [{'id': 'xckDashGXRcmLXcFHTB7N6CcdBEHa3TySuvCAskv2x0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xckDashGXRcmLXcFHTB7N6CcdBEHa3TySuvCAskv2x0.png?width=108&crop=smart&auto=webp&s=1f7e5cb0257f1d50bb60e8053aa6c4f9d162dc4e', 'width': 108}, {'height': 108, 'url': 'h...
I Failed to Fine-tune a Model to Match a Character's Humor
2
I fine-tuned with Unsloth QLoRA, but even when I got the training loss down to 0.01, I still couldn’t get the model to speak like the character. I tried to reduce the eval loss as well, but I didn’t manage to. I tested different models (Phi-4, Gemma-3n). When the training loss goes down, the eval loss goes up. I also t...
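Training loss falling while eval loss rises is the classic overfitting signature. One standard mitigation (generic, not Unsloth-specific) is early stopping on eval loss with a patience window, sketched here:

```python
# Generic early-stopping check on eval loss (not Unsloth-specific):
# stop once eval loss hasn't improved for `patience` evaluations,
# even if training loss is still dropping toward 0.01.

def early_stop_epoch(eval_losses, patience=2):
    """Return the eval index at which training should stop, or None."""
    best, since_best = float("inf"), 0
    for i, loss in enumerate(eval_losses):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                return i
    return None

# Eval loss improves, then climbs while train loss keeps falling:
print(early_stop_epoch([2.1, 1.8, 1.7, 1.9, 2.2]))  # 4
```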
2026-02-17T13:06:56
https://www.reddit.com/r/LocalLLaMA/comments/1r75io7/i_failed_to_finetune_a_model_to_match_a_character/
THEKILLFUS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r75io7
false
null
t3_1r75io7
/r/LocalLLaMA/comments/1r75io7/i_failed_to_finetune_a_model_to_match_a_character/
false
false
self
2
null
Built a multi-agent AI butler on a DGX Spark running a 120B model locally
0
I've spent the last few weeks building what started as a simple Telegram chatbot and turned into a full autonomous AI research system with agent swarms, a knowledge graph, live monitoring, and performance benchmarking. All running locally on an NVIDIA DGX Spark. Thought I'd share the setup, some real benchmarks, and wh...
2026-02-17T13:06:27
https://www.reddit.com/r/LocalLLaMA/comments/1r75i9t/built_a_multiagent_ai_butler_on_a_dgx_spark/
Aggressive_Music9376
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r75i9t
false
null
t3_1r75i9t
/r/LocalLLaMA/comments/1r75i9t/built_a_multiagent_ai_butler_on_a_dgx_spark/
false
false
self
0
null
Running OpenClaw through a structured autonomy audit (framework inside)
0
There’s a lot of discussion around “autonomous agents” right now, but not much structured measurement. We built a rubric called **AMI (Autonomy Maturity Index)** to evaluate agent systems across 6 dimensions: * Execution reliability (multi-step completion, retry logic, failure handling) * Safety & guardrails (refusal...
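A rubric like this could be rolled up into a single score as a weighted average; the sketch below is my own illustration, not the actual AMI methodology, and only the first two dimension names come from the post (the rest are placeholders):

```python
# Illustrative scoring sketch for a 6-dimension autonomy rubric.
# Hypothetical weights, scores, and most dimension names; this is
# NOT the real AMI methodology.

DIMENSIONS = [
    "execution_reliability",  # from the post
    "safety_guardrails",      # from the post
    "dimension_3", "dimension_4", "dimension_5", "dimension_6",  # placeholders
]

def ami_score(scores, weights):
    """Weighted average across dimensions on a 0-10 scale."""
    total_w = sum(weights[d] for d in DIMENSIONS)
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total_w

scores = {d: 7.0 for d in DIMENSIONS}
weights = {d: 1.0 for d in DIMENSIONS}
print(ami_score(scores, weights))  # 7.0 when every dimension scores 7
```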
2026-02-17T12:52:01
https://autonomyindex.io
operastudio
autonomyindex.io
1970-01-01T00:00:00
0
{}
1r756ke
false
null
t3_1r756ke
/r/LocalLLaMA/comments/1r756ke/running_openclaw_through_a_structured_autonomy/
false
false
default
0
null