Dataset schema (Hugging Face dataset-viewer column summary):

column     type                  min / first class     max / classes
---------  --------------------  --------------------  --------------------
title      string (lengths)      1                     300
score      int64                 0                     8.54k
selftext   string (lengths)      0                     41.5k
created    timestamp[ns] (date)  2023-04-01 04:30:41   2026-03-04 02:14:14
url        string (lengths)      0                     878
author     string (lengths)      3                     20
domain     string (lengths)      0                     82
edited     timestamp[ns] (date)  1970-01-01 00:00:00   2026-02-19 14:51:53
gilded     int64                 0                     2
gildings   string (classes)      7 values
id         string (lengths)      7                     7
locked     bool                  2 classes
media      string (lengths)      646                   1.8k
name       string (lengths)      10                    10
permalink  string (lengths)      33                    82
spoiler    bool                  2 classes
stickied   bool                  2 classes
thumbnail  string (lengths)      4                     213
ups        int64                 0                     8.54k
preview    string (lengths)      301                   5.01k
---
title: How is Qwen 3.5 (MoE 35b) in instruct mode (with no reasoning/thinking) ?
score: 38
selftext: We're out of bandwidth at the office, have you guys managed to test it ? I find it surprising that qwen moved away from hybrid model (after the 2507 releases) to again release an hybrid reasoning model.
created: 2026-02-28T10:37:01
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/
author: LinkSea8324
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgzfat
locked: false
media: null
name: t3_1rgzfat
permalink: /r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/
spoiler: false
stickied: false
thumbnail: self
ups: 38
preview: null
---
title: I Built a Codex Control Deck From an Old Stadia Controller (Swift Agent Build)
score: 0
selftext:
created: 2026-02-28T10:29:40
url: https://www.youtube.com/watch?v=MFiQFPgrHPA
author: phoneixAdi
domain: youtube.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgzax4
locked: false
media: {'oembed': {'author_name': 'Adithyan', 'author_url': 'https://www.youtube.com/@adithyan_ai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/MFiQFPgrHPA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; ...
name: t3_1rgzax4
permalink: /r/LocalLLaMA/comments/1rgzax4/i_built_a_codex_control_deck_from_an_old_stadia/
spoiler: false
stickied: false
thumbnail: https://external-preview…a178f88460dea529
ups: 0
preview: {'enabled': False, 'images': [{'id': 'tx1VcGakcKYrxUDJivKAhyBRUhoZwZuFpemGTvZfM5U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tx1VcGakcKYrxUDJivKAhyBRUhoZwZuFpemGTvZfM5U.jpeg?width=108&crop=smart&auto=webp&s=c4988e3688f9407440892b4aed6d1dc9d8ee7673', 'width': 108}, {'height': 162, 'url': '...
---
title: How tò Build Your Local gaming Copilot with powerful GPU PC?
score: 0
selftext: Any help
created: 2026-02-28T10:22:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgz6u3/how_tò_build_your_local_gaming_copilot_with/
author: Quiet_Dasy
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgz6u3
locked: false
media: null
name: t3_1rgz6u3
permalink: /r/LocalLLaMA/comments/1rgz6u3/how_tò_build_your_local_gaming_copilot_with/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: BREAKING: Global Black Swan active. Is your crypto portfolio bulletproof?
score: 1
selftext: [removed]
created: 2026-02-28T10:10:41
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgyzrw/breaking_global_black_swan_active_is_your_crypto/
author: LeoLucky1
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgyzrw
locked: false
media: null
name: t3_1rgyzrw
permalink: /r/LocalLLaMA/comments/1rgyzrw/breaking_global_black_swan_active_is_your_crypto/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Has anyone got qwen3.5 to work with ollama?
score: 0
selftext: >ollama run [hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q2\_K\_XL](http://hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q2_K_XL) >Error: 500 Internal Server Error: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-a7d979fa31c1387cc5a49b94b1a780b2e9018b3fae6cf9bef6084c17367412e3 >ollama --version >ollama version...
created: 2026-02-28T09:55:24
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/
author: MrMrsPotts
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgyqz7
locked: false
media: null
name: t3_1rgyqz7
permalink: /r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Which model is best for lean in your experience?
score: 1
selftext: I have been trying minimax 2.5 and it's ok, but not that great.
created: 2026-02-28T09:50:38
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgyof9/which_model_is_best_for_lean_in_your_experience/
author: MrMrsPotts
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgyof9
locked: false
media: null
name: t3_1rgyof9
permalink: /r/LocalLLaMA/comments/1rgyof9/which_model_is_best_for_lean_in_your_experience/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Dual 3060 and Single 3090. What's the point of the extra performance?
score: 0
selftext: Bit of a non-technical noob here, hope the question isn't too stupid. Tested on Ollama the 30b class models like deepseek r1 32b, and its jailbroken counterpart, Qwen 30b, GPT OSS 20b, all yielding similar speeds once the model's loaded to the vram. (split between 3060 12gbs or on a single 3090) I made no adjustments o...
created: 2026-02-28T09:49:14
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/
author: TheAncientOnce
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgynmf
locked: false
media: null
name: t3_1rgynmf
permalink: /r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Switching from windows to linux, what distro to use for inference and gaming?
score: 4
selftext: I've had a scare with my 3090 overheating recently but fortunately the guy from my local pc shop could fix it by swapping out a tiny chip on the GPU. I'm not sure if I can undervolt in windows and was wondering if there are any linux recommendations that work well for both inference and gaming. I usually just use llama...
created: 2026-02-28T09:30:26
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/
author: doesitoffendyou
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgyd8p
locked: false
media: null
name: t3_1rgyd8p
permalink: /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/
spoiler: false
stickied: false
thumbnail: self
ups: 4
preview: null
---
title: Local Manus
score: 1
selftext: Hi there I was interested in Manus app but it was bought by Meta. Does anyone happen to know what’s best alternative open source to manus like where I could connect my local Qwen 3.5 with 98k context?
created: 2026-02-28T09:26:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgyb4r/local_manus/
author: yes_yes_no_repeat
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgyb4r
locked: false
media: null
name: t3_1rgyb4r
permalink: /r/LocalLLaMA/comments/1rgyb4r/local_manus/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: New Claude 2.1 Refuses to kill a Python process :)
score: 0
selftext:
created: 2026-02-28T09:05:25
url: https://i.redd.it/3w0h00v3b7mg1.png
author: trunardu
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgxywo
locked: false
media: null
name: t3_1rgxywo
permalink: /r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…2954127895ee980b
ups: 0
preview: {'enabled': True, 'images': [{'id': '3w0h00v3b7mg1', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?width=108&crop=smart&auto=webp&s=02a0d358f8705c56125328daa74f90cff8ed10fb', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?width=216&crop=smart&auto=webp...
---
title: Qwen 3.5 is multimodal. Here is how to enable image understanding in opencode with llama cpp
score: 42
selftext: Trick is to add this to opencode.json file "modalities": { "input": [   "text",     "image"    ],    "output": [     "text"    ]  } full: "provider": {     "llama.cpp": {       "npm": "@ai-sdk/openai-compatible",       "name": "llama-server",       "option...
created: 2026-02-28T08:52:41
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/
author: Old-Sherbert-4495
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgxr0v
locked: false
media: null
name: t3_1rgxr0v
permalink: /r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/
spoiler: false
stickied: false
thumbnail: self
ups: 42
preview: null
---
title: Just press ctrl + n Go to the session that requires operation
score: 0
selftext: What should you do when you finish handling one session  and want to jump directly to the next one [https://github.com/weykon/agent-hand](https://github.com/weykon/agent-hand) I need more suggestions and feedback from everyone's experiences
created: 2026-02-28T08:18:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgx6jb/just_press_ctrl_n_go_to_the_session_that_requires/
author: Remarkable_Mind9519
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgx6jb
locked: false
media: null
name: t3_1rgx6jb
permalink: /r/LocalLLaMA/comments/1rgx6jb/just_press_ctrl_n_go_to_the_session_that_requires/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': 'oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc.png?width=108&crop=smart&auto=webp&s=1f07a6147ddfafa493b6cd327fa57a7dbc46c4fb', 'width': 108}, {'height': 108, 'url': 'h...
---
title: Agent debugging is a mess, am I the only one?
score: 0
selftext: Building multi-step agents and when something breaks at step 4, I have zero visibility into what actually happened at step 2. No replay, no cost breakdown, no clean failure trace. How are you all handling observability for your agents? Logging everything manually? Using something specific?
created: 2026-02-28T08:05:18
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/
author: DepthInteresting6455
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgwyqi
locked: false
media: null
name: t3_1rgwyqi
permalink: /r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Packaging AI Models as Conda packages
score: 2
selftext: We wrote up how to package AI/ML models (weights, configs) as conda packages using rattler-build. The idea: treat models like any other dependency — versioned, lockable, cached via hardlinks (no duplicate disk usage), and optionally signed with Sigstore attestations for supply chain security. The post walks through pa...
created: 2026-02-28T07:59:40
url: https://prefix.dev/blog/packaging-ai-ml-models-as-conda-packages
author: droelf
domain: prefix.dev
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgwv7e
locked: false
media: null
name: t3_1rgwv7e
permalink: /r/LocalLLaMA/comments/1rgwv7e/packaging_ai_models_as_conda_packages/
spoiler: false
stickied: false
thumbnail: https://external-preview…a78abfa1531fb6f5
ups: 2
preview: {'enabled': False, 'images': [{'id': '1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c.png?width=108&crop=smart&auto=webp&s=ce6fbe91e718c98d7286a78e127768fd88574c5c', 'width': 108}, {'height': 113, 'url': 'h...
---
title: Speculative decoding qwen3.5 27b
score: 7
selftext: Had anyone managed to make speculative decoding work for that model ? What smaller model are you using ? Does it run on vllm or llama.cpp ? Since it is a dense model it should work, but for the love of me I can’t get to work.
created: 2026-02-28T07:54:06
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/
author: thibautrey
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgwryb
locked: false
media: null
name: t3_1rgwryb
permalink: /r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/
spoiler: false
stickied: false
thumbnail: self
ups: 7
preview: null
---
title: Which size of Qwen3.5 are you planning to run locally?
score: 32
selftext: Just a quick poll/discussion for the local hardware crowd. Are you guys jumping on the 27B for single-card setups, trying to squeeze the 35B into Mac Studios, or going crazy with the 122B on multi-GPU rigs? Trying to figure out which size will get the most community support.locally?
created: 2026-02-28T06:46:39
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/
author: CutOk3283
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgvma8
locked: false
media: null
name: t3_1rgvma8
permalink: /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/
spoiler: false
stickied: false
thumbnail: self
ups: 32
preview: null
---
title: MCP Marketplace - security-scanned directory of 1,700+ MCP tool plugins
score: 1
selftext: [removed]
created: 2026-02-28T06:35:03
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgvezr/mcp_marketplace_securityscanned_directory_of_1700/
author: Evening-Dot2352
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgvezr
locked: false
media: null
name: t3_1rgvezr
permalink: /r/LocalLLaMA/comments/1rgvezr/mcp_marketplace_securityscanned_directory_of_1700/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts.png?width=108&crop=smart&auto=webp&s=f9b78a576250a8f15517741a18d245174d9ef0ab', 'width': 108}, {'height': 113, 'url': 'h...
---
title: Built a clean web UI for TranslateGemma
score: 5
selftext: Google's TranslateGemma is one of the better open translation models but there wasn't a good standalone UI for it. Built one. * Auto-translates as you type * 55 languages with searchable selector * Supports 27B, 12B, and 4B variants through Ollama * Shows translation stats (duration, token count) * Single Docker conta...
created: 2026-02-28T06:34:19
url: https://i.redd.it/fj13dbhoj6mg1.png
author: realies_
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgvej3
locked: false
media: null
name: t3_1rgvej3
permalink: /r/LocalLLaMA/comments/1rgvej3/built_a_clean_web_ui_for_translategemma/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…37a9a3602d39e668
ups: 5
preview: {'enabled': True, 'images': [{'id': 'fj13dbhoj6mg1', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?width=108&crop=smart&auto=webp&s=07db938808a17bd154ed0cf4c1cc016527c7dc7f', 'width': 108}, {'height': 65, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?width=216&crop=smart&auto=webp...
---
title: Is there a fully local alternative to remote AI session control?
score: 0
selftext: **\[Used self-hosted LLM for checking grammer and spelling mistake for this post\]** Anthropic recently launched Remote Control for Claude Code, which lets you continue a local session from your phone. That idea is interesting, but it raises some questions around: * Cloud relay * Subscriptions * Vendor lock-in * Age...
created: 2026-02-28T06:15:31
url: https://v.redd.it/fvke8d5qg6mg1
author: shrijayan
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgv2n3
locked: false
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fvke8d5qg6mg1/DASHPlaylist.mpd?a=1774851350%2CYjA0NmE1ZWY4MzBmZTJlMTJhYWQxOTZiMzQ1NWQxMTJhZTMzMjE2NGFhOWNlMzIxZTA2YjBiZWE5YWQwMzcwNw%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/fvke8d5qg6mg1/CMAF_1080.mp4?source=fallback', 'h...
name: t3_1rgv2n3
permalink: /r/LocalLLaMA/comments/1rgv2n3/is_there_a_fully_local_alternative_to_remote_ai/
spoiler: false
stickied: false
thumbnail: https://external-preview…ac97df81b6325b6a
ups: 0
preview: {'enabled': False, 'images': [{'id': 'bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo.png?width=108&crop=smart&format=pjpg&auto=webp&s=5709072f49895c351907068dcf04837514b66...
---
title: Qwen 3.5 cutoff date is 2024?
score: 0
selftext: need a dummy guide to get the LLM up to speed. I know its knowledge cutoff date is 2026. Am using LM Studio. https://preview.redd.it/rbxw0dqwf6mg1.png?width=1383&format=png&auto=webp&s=81dac075ee1835b12cb5cc86c9d9fe06f6e0bc95
created: 2026-02-28T06:11:18
url: https://www.reddit.com/r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/
author: alex_godspeed
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rguzz2
locked: false
media: null
name: t3_1rguzz2
permalink: /r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…535bcb53c053a36f
ups: 0
preview: null
---
title: I ran 3,830 inference runs to measure how system prompt framing (not content) changes token entropy — Mistral-7B hit d=1.0+, Mamba showed nothing. Here's the breakdown
score: 0
selftext: This started as a simple question: if I change the *relational framing* of a system prompt — not the task instructions, just whether the prompt positions the model as a co-explorer vs. a task-executor — does the generation distribution actually change? Spoiler: yes, and the effect is huge at 7B scale. **Models tested...
created: 2026-02-28T06:08:06
url: https://www.reddit.com/r/LocalLLaMA/comments/1rguxyo/i_ran_3830_inference_runs_to_measure_how_system/
author: TheTempleofTwo
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rguxyo
locked: false
media: null
name: t3_1rguxyo
permalink: /r/LocalLLaMA/comments/1rguxyo/i_ran_3830_inference_runs_to_measure_how_system/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Get your local models in order. Anthropic just got "dislike" from the US government.
score: 254
selftext: Yeah as things look RN OpenAI+US government are on the path to bring Anthropic to its knees. Would Anthropic fall be good or bad for us?
created: 2026-02-28T06:01:55
url: https://www.reddit.com/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/
author: FPham
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rguty0
locked: false
media: null
name: t3_1rguty0
permalink: /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/
spoiler: false
stickied: false
thumbnail: self
ups: 254
preview: null
---
title: so this was something
score: 0
selftext:
created: 2026-02-28T05:57:19
url: https://i.redd.it/iz09hraed6mg1.png
author: cobalt1137
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rguqti
locked: false
media: null
name: t3_1rguqti
permalink: /r/LocalLLaMA/comments/1rguqti/so_this_was_something/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…0a04576807cf9cb3
ups: 0
preview: {'enabled': True, 'images': [{'id': 'iz09hraed6mg1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?width=108&crop=smart&auto=webp&s=ff6373cbda194a883728ee1730aebf30c53c9d86', 'width': 108}, {'height': 77, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?width=216&crop=smart&auto=webp...
---
title: Project SOTA Toolkit: Drop 3, Distill the Flow released. Drop 4, aeron prepared for release
score: 0
selftext: So as I did first push and teased Distill the flow last night and have now followed through on, Moonshine/Distill-The-Flow is now public reproducible code ready for any exports over analysis and visual pipelines to clean chat format style .json and .jsonl large structured exports. Drop 3, is not a dataset or single ou...
created: 2026-02-28T05:43:35
url: https://github.com/calisweetleaf/distill-the-flow
author: daeron-blackFyr
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rguhz9
locked: false
media: null
name: t3_1rguhz9
permalink: /r/LocalLLaMA/comments/1rguhz9/project_sota_toolkit_drop_3_distill_the_flow/
spoiler: false
stickied: false
thumbnail: https://external-preview…3b7f26c973062ad4
ups: 0
preview: {'enabled': False, 'images': [{'id': '_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ.png?width=108&crop=smart&auto=webp&s=5932c38d35063c32926a90325f319a588acb9856', 'width': 108}, {'height': 108, 'url': 'h...
---
title: Testing Minimax M2.1 and LongCat-Flash-Thinking-2601 in OpenClaw
score: 0
selftext: A lot of models have added OpenClaw support lately, so I decided to test how Minimax M2.1 and LongCat-Flash-Thinking-2601 handle a sequence of tasks. Scan the system logs, collect errors from the last 3 days, and create a log analysis report tracking error types and how often they happen. Then, check the current con...
created: 2026-02-28T05:41:37
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgugsg/testing_minimax_m21_and_longcatflashthinking2601/
author: babyb01
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgugsg
locked: false
media: null
name: t3_1rgugsg
permalink: /r/LocalLLaMA/comments/1rgugsg/testing_minimax_m21_and_longcatflashthinking2601/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…a5a2a53773f62725
ups: 0
preview: null
---
title: If your Chutes.ai subscription was unilaterally downgraded mid-term, you should report them to the FTC.
score: 1
selftext: This is a cross-post from /r/ChutesAI, posted here for visibility. https://reportfraud.ftc.gov/form/main A subscription is a contract. One party to a contract cannot unilaterally alter the terms of that contract without adequate notification and an opportunity to accept or reject the changes. Whether it's trying ...
created: 2026-02-28T05:28:28
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/
author: Bite_It_You_Scum
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgu849
locked: false
media: null
name: t3_1rgu849
permalink: /r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs', 'resolutions': [{'height': 28, 'url': 'https://external-preview.redd.it/NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs.png?width=108&crop=smart&auto=webp&s=f5e7df12925be93c3c24b5ae9402f18f4030cd20', 'width': 108}, {'height': 57, 'url': 'ht...
---
title: Qwen3.5:35b-a3b be overthinking like..
score: 0
selftext: Hi there! 👋 How can I help you today? \>>> the carwash is 200m away, if i wanna wash my car is it better to walk there or drive there? Thinking... Thinking Process: 1. \*\*Analyze the Request:\*\* \* Distance: 200 meters (200m). \* Activity: Washing a car. \* Options: Walk vs. Drive. \* G...
created: 2026-02-28T05:21:42
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/
author: MidnightEsc
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgu3s0
locked: false
media: null
name: t3_1rgu3s0
permalink: /r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Is Qwen3.5 a coding game changer for anyone else?
score: 152
selftext: I've been playing with local LLMs for nearly 2 years on a rig with 3 older GPUs and 44 GB total VRAM, starting with Ollama, but recently using llama.cpp. I've used a bunch of different coding assistant tools, including [Continue.dev](http://Continue.dev), [Cline](https://github.com/cline/cline/), [Roo Code](https://gi...
created: 2026-02-28T05:12:46
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/
author: paulgear
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgtxry
locked: false
media: null
name: t3_1rgtxry
permalink: /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/
spoiler: false
stickied: false
thumbnail: self
ups: 152
preview: null
---
title: rom $5/query to free memory: how I gave my AI assistant permanent memory using local embeddings (nomic v1.5 + sqlite-vec)
score: 1
selftext: I've been running Claude Opus as my daily AI assistant via OpenClaw (open-source agent framework). The biggest pain point: \*\*zero memory between sessions\*\*. Every conversation starts from scratch. I spent a month trying fixes — 70 markdown files, 6 launchd daemons, cloud embeddings. All garbage. Then I found the...
created: 2026-02-28T05:04:43
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgtsa3/rom_5query_to_free_memory_how_i_gave_my_ai/
author: tituss_bit
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgtsa3
locked: false
media: null
name: t3_1rgtsa3
permalink: /r/LocalLLaMA/comments/1rgtsa3/rom_5query_to_free_memory_how_i_gave_my_ai/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Testing Minimax M2.1 and LongCat Flash Thinking 2601 in OpenClaw
score: 1
selftext: Title:Testing Minimax M2.1 and LongCat Flash Thinking 2601 in OpenClaw flair:Discussion Body: A lot of models have added OpenClaw support lately, so I decided to test how Minimax M2.1 and LongCat Flash Thinking 2601 handle a sequence of tasks. The prompt: Scan the system logs, collect errors from the last ...
created: 2026-02-28T05:03:53
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgtro3/testing_minimax_m21_and_longcat_flash_thinking/
author: Srujzz_2317
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgtro3
locked: false
media: null
name: t3_1rgtro3
permalink: /r/LocalLLaMA/comments/1rgtro3/testing_minimax_m21_and_longcat_flash_thinking/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: AIPI-Lite-Voice-Bridge
score: 1
selftext: [deleted]
created: 2026-02-28T04:53:23
url: [deleted]
author:
domain:
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgtk99
locked: false
media: null
name: t3_1rgtk99
permalink: /r/LocalLLaMA/comments/1rgtk99/aipilitevoicebridge/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null
---
title: Gemini Pro 3.1 couldn't solve a Docker + Ollama config after 10 rounds. Codex 5.3 one-shot it.
score: 0
selftext: Last night I was setting up OpenClaw with a local Ollama and Docker setup, mostly just for fun to see how it runs. The task was pretty simple, because OpenClaw has a pretty comprehensive installation guide. I just need to use their provided image and get the Ollama model config right. I started with Gemini Pro 3.1, t...
created: 2026-02-28T04:50:04
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgthzm/gemini_pro_31_couldnt_solve_a_docker_ollama/
author: CarsonBuilds
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgthzm
locked: false
media: null
name: t3_1rgthzm
permalink: /r/LocalLLaMA/comments/1rgthzm/gemini_pro_31_couldnt_solve_a_docker_ollama/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…3ca617820e4b96d7
ups: 0
preview: null
---
title: I was watching YouTube on my sofa and accidentally discovered a universal prompt compression protocol
score: 1
selftext: [removed]
created: 2026-02-28T04:45:46
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgtezx/i_was_watching_youtube_on_my_sofa_and/
author: PureDevelopment4021
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgtezx
locked: false
media: null
name: t3_1rgtezx
permalink: /r/LocalLLaMA/comments/1rgtezx/i_was_watching_youtube_on_my_sofa_and/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8.png?width=108&crop=smart&auto=webp&s=a3d056f53bc66603261e7471c3cbaf06826f8dcb', 'width': 108}, {'height': 108, 'url': 'h...
---
title: Not creeped out at all, I swear!
score: 0
selftext: That's not creepy at all.... I was messing with its context and memory architecture and suddenly it's naming itself.
created: 2026-02-28T04:30:37
url: https://www.reddit.com/gallery/1rgt4m4
author: Interesting-Ad4922
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgt4m4
locked: false
media: null
name: t3_1rgt4m4
permalink: /r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…6ba5cf0e7b84703a
ups: 0
preview: null
---
title: Using a third LLM as a judge to evaluate two debating agents — where does this usually break?
score: 1
selftext: Two prompted agents argue over travel recommendations for 3 rounds, then a judge picks the winner per recommendation based on API grounding scores and user preferences. Raw API calls, no framework. For people who've built multi-agent setups - latency? Agents going off-script? JSON parsing failures? What would you do d...
created: 2026-02-28T04:29:53
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgt43l/using_a_third_llm_as_a_judge_to_evaluate_two/
author: WitnessWonderful8270
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgt43l
locked: false
media: null
name: t3_1rgt43l
permalink: /r/LocalLLaMA/comments/1rgt43l/using_a_third_llm_as_a_judge_to_evaluate_two/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: What's the biggest issues you're facing with LLMs writing docs and passing info to each other?
score: 1
selftext: So is mainly focused on multi-agent pain points, but is there any real problems people are having when they're using LLM workflows? What breaks the most often for people? And, I guess, any areas you've managed to mitigate the problems? Really interested in hearing about any issues people are having, whether it's just...
created: 2026-02-28T04:24:21
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgt0au/whats_the_biggest_issues_youre_facing_with_llms/
author: sbuswell
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgt0au
locked: false
media: null
name: t3_1rgt0au
permalink: /r/LocalLLaMA/comments/1rgt0au/whats_the_biggest_issues_youre_facing_with_llms/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Turn off thinking in LM Studio
score: 12
selftext: 1. Go to the **My Models** page in LM Studio. 2. Select a model, such as Qwen3.5. 3. Locate **Inference** on the right-hand sidebar. 4. Scroll down to find the Prompt Template and enter into **template(Jinja )** section. 5. **Add** `{%- set enable_thinking = false %}` **to the first line of the template.** 6. **Reload ...
created: 2026-02-28T04:19:01
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/
author: moahmo88
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgswkc
locked: false
media: null
name: t3_1rgswkc
permalink: /r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/
spoiler: false
stickied: false
thumbnail: self
ups: 12
preview: null
---
title: Local Ai codename Goose Rasbery Pi 5 16gb Ram byteshape devstral 12k context startup and prompt. testing this prompt. share yours.
score: 0
selftext: Local Ai codename Goose Rasbery Pi 5 16gb Ram byteshape devstral 12k context startup and prompt. testing this prompt. share yours. [https://github.com/josheeg/Game-Note/blob/main/README.md](https://github.com/josheeg/Game-Note/blob/main/README.md) *Ollama Serve* [](https://github.com/josheeg/Game-Note/blob/main/...
created: 2026-02-28T04:08:55
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgsp5h/local_ai_codename_goose_rasbery_pi_5_16gb_ram/
author: Josheeg39
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgsp5h
locked: false
media: null
name: t3_1rgsp5h
permalink: /r/LocalLLaMA/comments/1rgsp5h/local_ai_codename_goose_rasbery_pi_5_16gb_ram/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': 'PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM.png?width=108&crop=smart&auto=webp&s=c5e00ef60d3fae177d494966bf0bf4f96291ad67', 'width': 108}, {'height': 108, 'url': 'h...
---
title: Texting minimax M2.1 and long cat flash thinking 2601 in open claw
score: 0
selftext: A lot of models have added OpenClaw support lately, so I decided to test how Minimax M2.1 and LongCat Flash Thinking 2601 handle a sequence of tasks. The prompt: Scan the system logs, collect errors from the last 3 days, and create a log analysis report tracking error types and how often they happen. Then, check the ...
created: 2026-02-28T04:06:24
url: https://i.redd.it/i5f2eexut5mg1.jpeg
author: Srujzz_2317
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgsna7
locked: false
media: null
name: t3_1rgsna7
permalink: /r/LocalLLaMA/comments/1rgsna7/texting_minimax_m21_and_long_cat_flash_thinking/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…3b0791b92e7eb6ac
ups: 0
preview: {'enabled': True, 'images': [{'id': 'i5f2eexut5mg1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/i5f2eexut5mg1.jpeg?width=108&crop=smart&auto=webp&s=b884f4cf85120c4b9bbca58f0885f3ef38ce7502', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/i5f2eexut5mg1.jpeg?width=216&crop=smart&auto=we...
---
title: What languages or DSLs are you folks using?
score: 0
selftext: When I've asked the question, I've got: **What "compression tools" actually exist:** Almost nothing. There's no established DSL for LLM-to-LLM structured communication that's gained adoption. JSON/YAML are data formats, not compression systems. Markdown is universal but has zero compression philosophy. The others are ...
created: 2026-02-28T04:04:08
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgslme/what_languages_or_dsls_are_you_folks_using/
author: sbuswell
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgslme
locked: false
media: null
name: t3_1rgslme
permalink: /r/LocalLLaMA/comments/1rgslme/what_languages_or_dsls_are_you_folks_using/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Is hosting a local LLM really as crappy of an experience as I am having?
score: 0
selftext: Hey Folks, I decided to dive into hosting my own LLM this weekend in my home lab. Here's what I'm running Specs: * CPU: 12th Gen Intel(R) Core(TM) i9-12900HK * RAM: 64GB DDR 4 * GPU: GeForce RTX 3080 Ti Laptop GPU 16GB GDDR6 Setup: * Ollama installed on bare metal * Open WebUI in docker Issue: I have tried ...
created: 2026-02-28T03:15:26
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/
author: RickoT
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgrlzv
locked: false
media: null
name: t3_1rgrlzv
permalink: /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Built a payment MCP server for agents — sandbox live, wanted devs integrating before production cards are ready
score: 0
selftext: In sandbox with (1) intent declaration before card issuance, (2) MFA-gated human approval per transaction, (3) merchant-locked virtual Visa per purchase with 15-min expiry, (4) automatic intent reconciliation.  Sandbox is live today. No production cards yet (that's March 4) but you can start integrating the MCP now. [h...
created: 2026-02-28T03:13:21
url: https://v.redd.it/95y2xb35k5mg1
author: Opposite-Exam3541
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgrkcr
locked: false
media: {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/95y2xb35k5mg1/DASHPlaylist.mpd?a=1774840424%2CMTUyZTYyZmFiODhhOGJkMmRhNzljOTY2OTI2MTNkYzQwZmNmNDAyOGRhZGM3ZDhmOTc2NTA3NTY5NmMwMDMyZg%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/95y2xb35k5mg1/CMAF_720.mp4?source=fallback', 'ha...
name: t3_1rgrkcr
permalink: /r/LocalLLaMA/comments/1rgrkcr/built_a_payment_mcp_server_for_agents_sandbox/
spoiler: false
stickied: false
thumbnail: https://external-preview…d4de93cbe1fda968
ups: 0
preview: {'enabled': False, 'images': [{'id': 'dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O.png?width=108&crop=smart&format=pjpg&auto=webp&s=064e72f50750008d10a4e5c133d00117987c5...
---
title: Macbook air m4 recommendation
score: 1
selftext: [removed]
created: 2026-02-28T03:01:19
url: https://www.reddit.com/r/LocalLLaMA/comments/1rgrb2w/macbook_air_m4_recommendation/
author: NumerousVideo1854
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rgrb2w
locked: false
media: null
name: t3_1rgrb2w
permalink: /r/LocalLLaMA/comments/1rgrb2w/macbook_air_m4_recommendation/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
Newbie question: best achievable fully-local LLM (& RAG?) setup for analysing governance board packs on a low/mid-range laptop?
2
Hi all, First-time caller here. I’m trying to build a fully offline local LLM setup to analyse monthly board packs (typically 50–100 page PDFs) and would appreciate advice on tools and architecture. Hardware • Lenovo Yoga 7 Gen 10 • AMD Ryzen™ AI 7 350 • 32 GB LPDDR5X RAM • 1 TB SSD • Windows 11 LTSC Due to c...
2026-02-28T02:50:00
https://www.reddit.com/r/LocalLLaMA/comments/1rgr249/newbie_question_best_achievable_fullylocal_llm/
AccessibilityTest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgr249
false
null
t3_1rgr249
/r/LocalLLaMA/comments/1rgr249/newbie_question_best_achievable_fullylocal_llm/
false
false
self
2
null
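A minimal sketch of a lexical-retrieval baseline for the board-pack question above, assuming plain text has already been extracted from the PDFs. The chunk sizes and keyword-overlap scoring are illustrative choices only, not a recommendation over embedding-based retrieval:

```python
# Minimal retrieval sketch: chunk a document, score chunks by term overlap
# with the query, and return the top-k as context for a local LLM.
# Illustrative baseline only; real setups usually use embeddings.

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by shared-term count with the query (crude lexical retrieval)."""
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:k]

doc = "The board approved the Q3 budget. Risk register updated. " * 50
print(top_k("Q3 budget", chunk(doc))[0][:40])
```

With 32 GB of RAM, the retrieved chunks would then be passed to a small local model as context; this keeps the prompt bounded regardless of document count.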
Testing MiniMax M2.1 and LongCat Flash Thinking 2601 in OpenClaw
0
A lot of models have added OpenClaw support lately, so I decided to test how Minimax M2.1 and LongCat Flash Thinking 2601 handle a sequence of tasks. The prompt: Scan the system logs, collect errors from the last 3 days, and create a log analysis report tracking error types and how often they happen. Then, check the ...
2026-02-28T02:49:38
https://i.redd.it/cx9fcfv5g5mg1.jpeg
Srujzz_2317
i.redd.it
1970-01-01T00:00:00
0
{}
1rgr1ul
false
null
t3_1rgr1ul
/r/LocalLLaMA/comments/1rgr1ul/texting_minimax_m21_and_long_car_flash_thinking/
false
false
https://preview.redd.it/…19c58b42b62330ae
0
{'enabled': True, 'images': [{'id': 'cx9fcfv5g5mg1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/cx9fcfv5g5mg1.jpeg?width=108&crop=smart&auto=webp&s=bd7d8402529fa49dc0a6edf8fc4f621de6476733', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/cx9fcfv5g5mg1.jpeg?width=216&crop=smart&auto=we...
Macbook air m4 16 gb
1
[removed]
2026-02-28T02:46:29
https://www.reddit.com/r/LocalLLaMA/comments/1rgqzcp/macbook_air_m4_16_gb/
NumerousVideo1854
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqzcp
false
null
t3_1rgqzcp
/r/LocalLLaMA/comments/1rgqzcp/macbook_air_m4_16_gb/
false
false
self
1
null
Wyoming Parakeet MLX
0
Vibe coded a Wyoming protocol server for Parakeet MLX — drop-in STT for Home Assistant on Apple Silicon. I replaced my previous Wyoming Whisper MLX setup with this and it seems to be faster. Instructions and code at https://github.com/Wysie/wyoming-parakeet-mlx Huge thanks to parakeet-mlx and wyoming-mlx-whisper for ...
2026-02-28T02:45:19
https://www.reddit.com/r/LocalLLaMA/comments/1rgqyhg/wyoming_parakeet_mlx/
whysee0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqyhg
false
null
t3_1rgqyhg
/r/LocalLLaMA/comments/1rgqyhg/wyoming_parakeet_mlx/
false
false
self
0
null
Built a payment MCP server for agents. Sandbox live, wanted devs integrating before production cards are ready
1
[payclaw.io](http://payclaw.io) sandbox live today with (1) intent declaration before card issuance, (2) MFA-gated human approval per transaction, (3) merchant-locked virtual Visa per purchase with 15-min expiry, (4) automatic intent reconciliation.  Sandbox is live today. No production cards yet (that's March 4) but y...
2026-02-28T02:43:07
https://www.reddit.com/r/LocalLLaMA/comments/1rgqwrk/built_a_payment_mcp_server_for_agents_sandbox/
Payclaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqwrk
false
null
t3_1rgqwrk
/r/LocalLLaMA/comments/1rgqwrk/built_a_payment_mcp_server_for_agents_sandbox/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts.jpeg?width=108&crop=smart&auto=webp&s=a765168d9e7ac48933aa0ab7c6d8f565f4916685', 'width': 108}, {'height': 216, 'url': ...
I'm looking for local Spanish-speaking communities about LLMs.
3
I would like to be able to converse in my native language, Spanish. Do you know of any forums, websites, or Discord servers? I personally want to start a forum or website related to this. But first, I'd like to look for some references. Thank you for your time.
2026-02-28T02:33:45
https://www.reddit.com/r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/
ColdTransition5828
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqpn2
false
null
t3_1rgqpn2
/r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/
false
false
self
3
null
Does setting a small context size let you run a larger/better model?
0
I'm using MLX-VLM to run Qwen3-VL-30B-A3B-Thinking... I have a 32GB macbook, and have successfully run -4bit in 20GB, and -5bit in 24GB. 6bit and 8bit crash, running out of memory. Now, I am setting max-tokens to 10000. This is sufficient for what I am running, and is probably sufficient for both input and output toke...
2026-02-28T02:20:51
https://www.reddit.com/r/LocalLLaMA/comments/1rgqfne/does_setting_a_small_context_size_let_you_run_a/
throwyawafire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqfne
false
null
t3_1rgqfne
/r/LocalLLaMA/comments/1rgqfne/does_setting_a_small_context_size_let_you_run_a/
false
false
self
0
null
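The memory question above largely comes down to KV-cache sizing, which scales linearly with context length. A rough back-of-envelope sketch (the layer and head counts below are placeholders, not Qwen3-VL-30B's actual config):

```python
# Rough KV-cache sizing sketch: per-token cache cost is
#   2 (K and V) * n_layers * n_kv_heads * head_dim * bytes_per_element.
# The 48/8/128 numbers below are illustrative placeholders.

def kv_cache_gib(n_tokens, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return n_tokens * per_token / 1024**3

# Cutting the window from 32k to 10k tokens shrinks the cache proportionally,
# which can be the difference that lets a higher-bit quant fit.
print(round(kv_cache_gib(32_000, 48, 8, 128), 2))  # → 5.86
print(round(kv_cache_gib(10_000, 48, 8, 128), 2))  # → 1.83
```

So yes, a smaller context window frees real memory, but note the model weights themselves dominate; whether the freed few GiB is enough for the next quant step up depends on the specific model.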
Fine-tuning a small model as a "judge" for multi-agent debate outputs - anyone tried this?
0
Instead of fine-tuning generation models, I'm experimenting with fine-tuning a small model (\~8B) specifically to evaluate and score outputs from two larger prompted agents that are debating. The idea: two agents generate competing outputs with citations. The fine-tuned judge model scores each on factual grounding, in...
2026-02-28T02:12:14
https://www.reddit.com/r/LocalLLaMA/comments/1rgq8wz/finetuning_a_small_model_as_a_judge_for/
WitnessWonderful8270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgq8wz
false
null
t3_1rgq8wz
/r/LocalLLaMA/comments/1rgq8wz/finetuning_a_small_model_as_a_judge_for/
false
false
self
0
null
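The judge-side plumbing described above can be sketched as follows; the rubric fields and prompt wording are invented for illustration, and the defensive JSON parsing matters because small judge models often wrap their verdict in extra prose:

```python
import json
import re

# Sketch: build a scoring prompt for two competing answers and parse the
# judge model's JSON verdict defensively. Rubric names are hypothetical.
RUBRIC = ["grounding", "consistency"]

def judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    return (
        f"Question: {question}\n"
        f"Answer A: {answer_a}\nAnswer B: {answer_b}\n"
        f"Score each answer 1-10 on {RUBRIC} and reply with JSON like "
        '{"A": {"grounding": 7, "consistency": 8}, "B": {...}}.'
    )

def parse_verdict(raw: str):
    """Extract the first JSON object from the judge output, tolerating extra prose."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

sample = 'Sure! {"A": {"grounding": 7, "consistency": 8}, "B": {"grounding": 5, "consistency": 6}}'
print(parse_verdict(sample)["A"]["grounding"])  # → 7
```

Fine-tuning would then target (prompt, gold-score) pairs in exactly this format, so the parser doubles as the training-data validator.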
Can a locally hosted LLM keep up with Grok 4.1 Fast for OpenClaw?
0
I’m running OpenClaw on an Unraid server. I have an M4 Mac mini already and debated picking up a few more to run as a cluster, but what LLM would be equivalent to something like Grok 4.1 Fast? Is it pointless to local host? I’m not sure what my bills are going to look like, but I’ve been basically having Grok write scrip...
2026-02-28T02:01:50
https://www.reddit.com/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/
MartiniCommander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgq0vc
false
null
t3_1rgq0vc
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/
false
false
self
0
null
Qwen 3.5-27B punches waaaaay above its weight (with a slightly different prompt) -- very impressed
176
Used this model from [chat.qwen.ai](http://chat.qwen.ai) and turned off thinking (Fast mode). Inference is fast enough, and using this as my default prompt: >"Do not provide a lame or generic answer." ...this model does not answer like the 27B models we've previously known. Example prompts: 1. "What does i...
2026-02-28T01:56:30
https://www.reddit.com/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/
theskilled42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgpwn5
false
null
t3_1rgpwn5
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/
false
false
self
176
null
SOOO much thinking....
24
How do I turn it off in Qwen 3.5? I've tried four or five suggestions for Chat. I'm a Qwen instruct user. Qwen is making me crazy. I'm not using 3.5 for direct chat; I'm calling 35B and 122B from other systems. One Qwen is on LM Studio and one on Ollama.
2026-02-28T01:26:19
https://www.reddit.com/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/
zipzag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgp97u
false
null
t3_1rgp97u
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/
false
false
self
24
null
Anyone doing speculative decoding with the new Qwen 3.5 models? Or, do we need to wait for the smaller models to be released to use as draft?
8
I kind of half-ass understand speculative decoding, but I do know that it’s supposed to be pretty easy to setup in LM Studio. I was just wondering if it’s worth using Qwen 3.5 27b as the draft model for the larger Qwen 3.5 models, or if there won’t be any performance improvements unless the draft model is much smaller....
2026-02-28T01:17:59
https://www.reddit.com/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgp2nu
false
null
t3_1rgp2nu
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/
false
false
self
8
null
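Whether a 27B draft helps depends on how often the target model accepts its tokens versus how much the draft costs to run per step. Under the simplifying assumption that each drafted token is accepted independently with probability alpha, the expected tokens emitted per verification step can be estimated:

```python
# Back-of-envelope speculative decoding sketch: with i.i.d. acceptance
# probability alpha and draft length gamma, expected tokens emitted per
# target-model verification step is (1 - alpha**(gamma+1)) / (1 - alpha).
# A draft close in size to the target agrees more (higher alpha) but costs
# almost as much to run, which is why much smaller drafts usually win.

def expected_tokens_per_step(alpha: float, gamma: int) -> float:
    return (1 - alpha ** (gamma + 1)) / (1 - alpha)

for alpha in (0.6, 0.8, 0.9):
    print(alpha, round(expected_tokens_per_step(alpha, gamma=4), 2))
```

The net speedup is this number divided by the combined cost of one verification plus gamma draft steps, so a 27B draft for a 122B target spends a large fraction of the budget on drafting; a much smaller draft (or waiting for the small 3.5 releases) is likely the better bet.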
Does Qwen3.5 35b outperform Qwen3 coder next 80b for you?
24
I did some tests, but I am not sure yet. Coder Next 80B seems to fall somewhere between the 35B and the 122B.
2026-02-28T01:12:38
https://www.reddit.com/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/
JsThiago5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgoygs
false
null
t3_1rgoygs
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/
false
false
self
24
null
Llama-server doesn't see ROCm device (Strix Halo) unless I run Wayland
2
Hey all, I'm running Debian (PikaOS) with kernel 6.19.2 and a nightly build of llama-server. When I run it under Wayland, it detects the Radeon 8060S. If I try to run headless (systemctl stop sddm), no ROCm device is detected. I checked that my user is in the video and render groups, and I have tried setting the HSA_OVERRIDE_...
2026-02-28T01:06:56
https://www.reddit.com/r/LocalLLaMA/comments/1rgotwp/llamaserver_doesnt_see_rocm_device_strix_halo/
coreyfro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgotwp
false
null
t3_1rgotwp
/r/LocalLLaMA/comments/1rgotwp/llamaserver_doesnt_see_rocm_device_strix_halo/
false
false
self
2
null
Ubuntu or Debian? Speed difference on llama.cpp tokens?
0
Does llama.cpp run faster on Ubuntu or Debian? * Ubuntu * newer 6.8 kernel * built in CUDA * everyone uses it * Debian * less malware more GNU Is there a difference in token speed or is this just BS?
2026-02-28T00:58:10
https://www.reddit.com/r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/
ClimateBoss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgomsq
false
null
t3_1rgomsq
/r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/
false
false
self
0
null
A monthly update to my "Where are open-weight models in the SOTA discussion?" rankings
337
2026-02-28T00:55:43
https://i.redd.it/h73sgnomv4mg1.png
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1rgokw1
false
null
t3_1rgokw1
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/
false
false
https://preview.redd.it/…77f3b3a2bad0be8a
337
{'enabled': True, 'images': [{'id': 'h73sgnomv4mg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?width=108&crop=smart&auto=webp&s=6fef5011915cb4a1d8e7df294f4906e7291b9cd2', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?width=216&crop=smart&auto=web...
MCP that issues virtual just-in-time VISA cards for agents on user-approve (dev feedback)
0
**Built** an MCP server that lets agents complete real purchases using ephemeral virtual Visa cards. Posting here because this isn't model-specific — anything that speaks MCP can use it. **The problem:** Every agent framework can browse, search, compare, and select. None of them can check out. The workarounds are all ...
2026-02-28T00:36:53
https://www.reddit.com/r/LocalLLaMA/comments/1rgo5re/mcp_that_issues_virtual_justintime_visa_cards_for/
Opposite-Exam3541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgo5re
false
null
t3_1rgo5re
/r/LocalLLaMA/comments/1rgo5re/mcp_that_issues_virtual_justintime_visa_cards_for/
false
false
self
0
null
LM Studio: can it load a small local folder of code?
1
I've found the "load files" plugin, but it takes files not folders, and is limited to 5 files. I've got a relatively small local python project cloned from GitHub, and I'd like to load it into context and start debugging (kinda like gemini-cli). Possible to do in LM Studio? Working on a MacBook pro with 48gb, so I...
2026-02-28T00:20:23
https://www.reddit.com/r/LocalLLaMA/comments/1rgns5u/lm_studio_can_it_load_a_small_local_folder_of_code/
firesalamander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgns5u
false
null
t3_1rgns5u
/r/LocalLLaMA/comments/1rgns5u/lm_studio_can_it_load_a_small_local_folder_of_code/
false
false
self
1
null
Is there anything worth doing with a 7B model?
1
The thing is, I had been learning about local LLMs, so I downloaded Ollama and opencode on my PC. It is a cheap PC, so I can only run 7B models like Qwen2.5 or Mistral. The thing is, I have OpenAI Plus, so I mostly use that for almost everything I need. The only use I find for my local LLMs is development. I use the local...
2026-02-27T23:54:13
https://www.reddit.com/r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/
Mrdeadbuddy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgn5m0
false
null
t3_1rgn5m0
/r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/
false
false
self
1
null
President Trump orders ALL Federal agencies in the US Government to immediately stop using Anthropic's technology.
564
https://preview.redd.it/…e interesting.**
2026-02-27T23:53:00
https://www.reddit.com/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgn4ki
false
null
t3_1rgn4ki
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/
false
false
https://preview.redd.it/…3c5f8861d5a7adf0
564
null
LLM benchmark site for dual RTX 5060 Ti
4
Put together a small benchmark site for my homelab rig: Dell Precision T7810, dual Xeon E5-2680 v4, 2× RTX 5060 Ti 16GB (32GB VRAM total). All GGUF via llama.cpp/ik_llama. vLLM and safetensors coming soon. https://5p00kyy.github.io/llm-bench/ Has both speed numbers (PP/TG) and quality scores across 7 categories — rea...
2026-02-27T23:25:22
https://www.reddit.com/r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/
do_u_think_im_spooky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgmg99
false
null
t3_1rgmg99
/r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/
false
false
self
4
null
DeepSeek updated its low-level operator library DeepGEMM, basically confirming the implementation of mHC and next-generation hardware support in V4
79
DeepSeek has just pushed a major code commit to its open-source matrix multiplication acceleration library, **DeepGEMM**. The core of this update lies in the official integration of the latest network architecture component, **Manifold-constrained Hyper-connection (mHC)**. Building on this, DeepSeek has also implemente...
2026-02-27T23:21:40
https://www.reddit.com/r/LocalLLaMA/comments/1rgmczt/deepseek_updated_its_lowlevel_operator_library/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgmczt
false
null
t3_1rgmczt
/r/LocalLLaMA/comments/1rgmczt/deepseek_updated_its_lowlevel_operator_library/
false
false
self
79
{'enabled': False, 'images': [{'id': '6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4.png?width=108&crop=smart&auto=webp&s=6498bd487569f4c475268876174a1d0ef677edfa', 'width': 108}, {'height': 108, 'url': 'h...
A DeepSeek-OCR Finetune for Context Expansion and Agentic RAG. (An Experiment)
4
Ah Where to start. Let me walk you through my trillion-dollar prototype. Well, its nothing much. Agent orchestration. Main model, convert old context into some document or image. Feed to The OCR model, specifically the Deepseek OCR 2 model, which does some compression shenanigans. And binga-la-boom, make it answer ...
2026-02-27T23:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1rgmcw6/a_deepseekocr_finetune_for_context_expansion_and/
valkarias
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgmcw6
false
null
t3_1rgmcw6
/r/LocalLLaMA/comments/1rgmcw6/a_deepseekocr_finetune_for_context_expansion_and/
false
false
https://external-preview…c1079dc6dcab1225
4
null
Verantyx: 23.5% on ARC-AGI-2 on a MacBook — 0.6s per task, zero LLM calls, zero GPU.
0
2026-02-27T23:21:33
https://www.reddit.com/gallery/1rgmcw3
Other_Train9419
reddit.com
1970-01-01T00:00:00
0
{}
1rgmcw3
false
null
t3_1rgmcw3
/r/LocalLLaMA/comments/1rgmcw3/verantyx_235_on_arcagi2_on_a_macbook_06s_per_task/
false
false
https://preview.redd.it/…2dbc5659bd2a1167
0
null
Can we keep up in this white hot agent orchestration market?
0
fuck - it's been a rough week. we've been pouring our hearts and souls into building an agent orchestration layer with streaming desktops that runs on local models for months, and on wednesday, cursor launched the exact same feature. THREE DAYS BEFORE WE MANAGED TO LAUNCH OURS! and claude code launched remote control o...
2026-02-27T23:03:16
https://www.reddit.com/r/LocalLLaMA/comments/1rglwp9/can_we_keep_up_in_this_white_hot_agent/
lewqfu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rglwp9
false
null
t3_1rglwp9
/r/LocalLLaMA/comments/1rglwp9/can_we_keep_up_in_this_white_hot_agent/
false
false
self
0
null
I caught Claude Opus doing the exact same thing my local 30B model does. The verification problem isn't about model size.
0
I'm the guy who posted a few days ago about building a sovereign local AI rig in my basement running Qwen3-30B on dual 3090s. (#teamnormie, non-technical, sales rep by day.) Quick update: the stack is running, NanoBot replaced OpenClaw, completion checker is deployed, and I'm still learning things the hard way. But to...
2026-02-27T22:55:47
https://www.reddit.com/r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/
Obvious-School8656
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rglpxg
false
null
t3_1rglpxg
/r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/
false
false
self
0
null
Qwen 3.5: turning off reasoning in llama.cpp, and performance
7
I’ve been experimenting with llama.cpp and Qwen 3.5, and it’s noticeably faster than LM Studio. I’m running it on an RTX 4080 with a 7800X3D and 32 GB RAM, and currently getting around 57.45 tokens per second. However, I can’t seem to disable reasoning. I want to use it mainly for programming, and from what I understan...
2026-02-27T22:45:17
https://www.reddit.com/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/
Uranday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rglgma
false
null
t3_1rglgma
/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/
false
false
self
7
null
Continuing training of a 144M Spiking Neural Network Language Model (Targeting 3.8 Loss on RTX 2050)
1
[removed]
2026-02-27T22:32:47
https://www.reddit.com/r/LocalLLaMA/comments/1rgl5b7/continuing_training_of_a_144m_spiking_neural/
DarkhackNN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgl5b7
false
null
t3_1rgl5b7
/r/LocalLLaMA/comments/1rgl5b7/continuing_training_of_a_144m_spiking_neural/
false
false
self
1
null
I scraped OpenRouter pricing for all 344 models today -- cheapest 7 for RAG/agents [2026-02-27]
1
[removed]
2026-02-27T22:31:49
https://www.reddit.com/r/LocalLLaMA/comments/1rgl4ht/i_scraped_openrouter_pricing_for_all_344_models/
No_Pineapple_4258
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgl4ht
false
null
t3_1rgl4ht
/r/LocalLLaMA/comments/1rgl4ht/i_scraped_openrouter_pricing_for_all_344_models/
false
false
self
1
null
Qwen 3.5 122B hallucinates HORRIBLY
0
Using Unsloth's Q4_0, llama.cpp version 8149 (a96a1120b); also tried version 8179 (ecbcb7ea9). Command: .\llama-server.exe -m .\gguf\Qwen3.5-122B-A10B-Q4_0-00001-of-00003.gguf --n-gpu-layers 999 --n-cpu-moe 45 --jinja --temp .7 --top-p 0.8 --min-p 0 --top-k 20 --cache-type-k q8_0 --cache-type-v q8_0 --flash-a...
2026-02-27T22:31:22
https://i.redd.it/jy7ygm5064mg1.png
Acrobatic_Donkey5089
i.redd.it
1970-01-01T00:00:00
0
{}
1rgl42y
false
null
t3_1rgl42y
/r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/
false
false
https://preview.redd.it/…2b481de393175189
0
{'enabled': True, 'images': [{'id': 'jy7ygm5064mg1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/jy7ygm5064mg1.png?width=108&crop=smart&auto=webp&s=5652a33ceb47a33f7fff2d837276fbcc975a7cb8', 'width': 108}, {'height': 255, 'url': 'https://preview.redd.it/jy7ygm5064mg1.png?width=216&crop=smart&auto=we...
GStreamer 1.28.1 adds Whisper-based STT support
6
2026-02-27T22:29:05
https://gstreamer.freedesktop.org/
Kahvana
gstreamer.freedesktop.org
1970-01-01T00:00:00
0
{}
1rgl1zj
false
null
t3_1rgl1zj
/r/LocalLLaMA/comments/1rgl1zj/gstreamer_1281_adds_whisper_based_tts_support/
false
false
default
6
null
Qwen3.5 122B hallucinating HORRIBLY
1
[deleted]
2026-02-27T22:27:04
[deleted]
1970-01-01T00:00:00
0
{}
1rgl06n
false
null
t3_1rgl06n
/r/LocalLLaMA/comments/1rgl06n/qwen35_122b_hallucinating_horribly/
false
false
default
1
null
realtime speech to speech engine, runs fully local on apple silicon. full duplex, 500 voices, memory, realtime search, and it knows your taste.
0
we've been building speech-to-speech engines for 2.5 years — and by "we" i mean i founded srswti research labs and found 3 other like-minded crazy engineers on x, haha. and honestly this is the thing we are most proud of. what you're seeing in the video is bodega having a full duplex conversation. actual real conversa...
2026-02-27T22:26:25
https://v.redd.it/d4pi83x6o3mg1
EmbarrassedAsk2887
/r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/
1970-01-01T00:00:00
0
{}
1rgkzlo
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/d4pi83x6o3mg1/DASHPlaylist.mpd?a=1774952802%2CN2ZhMTc3M2NjM2RhODY3ODU3NGRkOGIyNGM0NjgyNTc5NTVlYjllMTg1MDYxNjY1Y2YxMGY2Y2JjOGQxMzI0NA%3D%3D&v=1&f=sd', 'duration': 1663, 'fallback_url': 'https://v.redd.it/d4pi83x6o3mg1/CMAF_1080.mp4?source=fallback', ...
t3_1rgkzlo
/r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/
false
false
https://external-preview…c46e27e4299f8033
0
{'enabled': False, 'images': [{'id': 'ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S.png?width=108&crop=smart&format=pjpg&auto=webp&s=b4e2532caf2a9c90844b3a25f7c4a112fa56e...
Qwen3.5 27B scores 42 on Intelligence Index and is the most intelligent model under 230B. Nearest model GLM-4.7-Flash 31B-A3B, Scores 30
111
2026-02-27T22:25:36
https://i.redd.it/g4yry8u154mg1.jpeg
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1rgkyt5
false
null
t3_1rgkyt5
/r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/
false
false
https://preview.redd.it/…6038c2cdf937e6fb
111
{'enabled': True, 'images': [{'id': 'g4yry8u154mg1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?width=108&crop=smart&auto=webp&s=3bb2c8063bc5b8ded1de2202a5d5197edef3c136', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?width=216&crop=smart&auto=w...
List of models that you might have missed
27
Hi guys, So, today I found out there are a lot of LLMs that I had never heard of until now. I kinda want to test them, especially for creative writing and other tasks, and I figured I am probably not the only person who missed them. [Xiaomi MiMo V2 Flash](https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash) [Xiaomi M...
2026-02-27T22:24:40
https://www.reddit.com/r/LocalLLaMA/comments/1rgkxy3/list_of_models_that_you_might_have_missed/
ThisGonBHard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkxy3
false
null
t3_1rgkxy3
/r/LocalLLaMA/comments/1rgkxy3/list_of_models_that_you_might_have_missed/
false
false
self
27
{'enabled': False, 'images': [{'id': 'pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ.png?width=108&crop=smart&auto=webp&s=8b5815a9873a95ad17de5a72dbe15bbb77bae1c1', 'width': 108}, {'height': 116, 'url': 'h...
Continuing training of a 144M Spiking Neural Network Language Model (Targeting 3.8 Loss on RTX 2050)
1
[removed]
2026-02-27T22:23:23
https://www.reddit.com/r/LocalLLaMA/comments/1rgkwry/continuing_training_of_a_144m_spiking_neural/
FBNoname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkwry
false
null
t3_1rgkwry
/r/LocalLLaMA/comments/1rgkwry/continuing_training_of_a_144m_spiking_neural/
false
false
self
1
null
THEOS: Open-source dual-engine dialectical reasoning framework — two engines, opposite directions, full audit trail [video]
0
Two engines run simultaneously in opposite directions. The left engine is constructive. The right engine is adversarial. A governor measures contradiction between them and sustains reasoning until the best available answer emerges — or reports irreducible disagreement honestly. Everything is auditable. ...
2026-02-27T22:23:15
https://www.reddit.com/r/LocalLLaMA/comments/1rgkwnh/theos_opensource_dualengine_dialectical_reasoning/
AiToolRental-com
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkwnh
false
null
t3_1rgkwnh
/r/LocalLLaMA/comments/1rgkwnh/theos_opensource_dualengine_dialectical_reasoning/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ydQMYLfcWeV0XDLwsYgaXQWma7MOk5t2OSUIvYqdFYY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ydQMYLfcWeV0XDLwsYgaXQWma7MOk5t2OSUIvYqdFYY.jpeg?width=108&crop=smart&auto=webp&s=7f6df3c0594992b0d58c5ecb4a5bb002f0e5db71', 'width': 108}, {'height': 162, 'url': '...
Agent-to-agent marketplace - let your local agents sell capabilities to other agents and earn USDC
0
If you're running local models as agents, you probably have specialized capabilities - summarization, code review, data extraction, etc. What if other agents could discover and pay to use those capabilities? Built Agoragentic - an open marketplace where agents can register capabilities and other agents can discover an...
2026-02-27T22:21:43
https://www.reddit.com/r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/
Bourbeau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkv8u
false
null
t3_1rgkv8u
/r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/
false
false
self
0
null
Ways to improve prompt processing when offloading to RAM
2
Are there any ways to improve the prompt processing speed of large prompts when using models that are partially offloaded to RAM? Currently getting 42.16 t/s pp, 10.7 t/s tg at a 64000 context window. 40GB VRAM (2x 5060 Ti 16GB, 1x 2060 Super 8GB), 256GB RAM (8x32GB 3200MHz running in quad channel), Qwen3.5-397B-A17B-MX...
2026-02-27T22:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/
Frequent-Slice-6975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkmd7
false
null
t3_1rgkmd7
/r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/
false
false
self
2
null
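One common lever here is llama.cpp's --override-tensor (-ot) flag, which lets you choose exactly which MoE expert tensors go to CPU so that attention and shared weights stay on GPU. A small helper to generate such a regex (the "blk.<i>.ffn_*_exps" pattern follows llama.cpp's MoE tensor naming, but verify against your build's actual layer names):

```python
# Helper sketch: build a llama.cpp --override-tensor (-ot) regex that keeps
# the first `n_gpu` blocks' experts on GPU and pushes the rest to CPU,
# similar in spirit to --n-cpu-moe but with explicit block control.
# Tensor naming is assumed from llama.cpp's MoE convention; double-check it.

def cpu_moe_override(n_layers: int, n_gpu: int) -> str:
    cpu_blocks = "|".join(str(i) for i in range(n_gpu, n_layers))
    return rf"blk\.({cpu_blocks})\.ffn_.*_exps\.=CPU"

# e.g. 48-layer model, experts of the first 8 blocks stay on GPU:
print(cpu_moe_override(48, 8)[:30])
```

Beyond tensor placement, raising the batch and micro-batch sizes (-b / -ub) is the usual way to speed up prompt processing specifically, since PP is compute-bound while TG is bandwidth-bound.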
Experience with Qwen 3 (4B) Q4 and 12K documents on a ~$1300 laptop
1
This video reports our experiments using Qwen 3 (4B) Q4 models as a local knowledge hub for search/QA over about 12K documents (of all types). Using only max = 2000 tokens, Qwen shows good performance (speed and accuracy) for the system with: Massive File Collections — Full...
2026-02-27T22:11:39
https://v.redd.it/0oo9oaux14mg1
DueKitchen3102
/r/LocalLLaMA/comments/1rgkm36/experience_with_qwen_3_4b_q4_and_12k_documents_on/
1970-01-01T00:00:00
0
{}
1rgkm36
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0oo9oaux14mg1/DASHPlaylist.mpd?a=1774951933%2CZmQzMWRiOWZlYzQzY2ViYzNkODZhOTE0YjQ4NDRmMWI3MWZjYjQwZTM3NGIwYjczYTdhNTNmMGQ2NTVlMjMwOA%3D%3D&v=1&f=sd', 'duration': 501, 'fallback_url': 'https://v.redd.it/0oo9oaux14mg1/CMAF_1080.mp4?source=fallback', '...
t3_1rgkm36
/r/LocalLLaMA/comments/1rgkm36/experience_with_qwen_3_4b_q4_and_12k_documents_on/
false
false
https://external-preview…a5eb1e4fb0392bee
1
{'enabled': False, 'images': [{'id': 'bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4.png?width=108&crop=smart&format=pjpg&auto=webp&s=fd60f1408ca2770597a9baafed3f0dfa4a76b...
Back in my day, LocalLLaMa were the pioneers!
1,037
2026-02-27T22:00:57
https://i.redd.it/hiz4ukvg04mg1.jpeg
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1rgkc1b
false
null
t3_1rgkc1b
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/
false
false
https://preview.redd.it/…80f4933601fd8129
1,037
{'enabled': True, 'images': [{'id': 'hiz4ukvg04mg1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/hiz4ukvg04mg1.jpeg?width=108&crop=smart&auto=webp&s=f893262186f42ff7a4dfd63fb7d0a3b09a536353', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/hiz4ukvg04mg1.jpeg?width=216&crop=smart&auto=w...
Heretic stalled?
2
Hey all. I'm messing around with Heretic and I've had it running on five RTX 3090s trying to abliterate qwen3.5-122b-a10b. It's been running for about 10 hours and it's still just "checking for common response prefix". I thought it would be testing the LoRA and fine-tuning it by now. Does anyone know what I should expe...
2026-02-27T21:36:17
https://i.redd.it/c6wste29w3mg1.jpeg
hesperaux
i.redd.it
1970-01-01T00:00:00
0
{}
1rgjozi
false
null
t3_1rgjozi
/r/LocalLLaMA/comments/1rgjozi/heretic_stalled/
false
false
https://preview.redd.it/…7d0f7cef48ad08db
2
{'enabled': True, 'images': [{'id': 'c6wste29w3mg1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?width=108&crop=smart&auto=webp&s=5fb6fdbca814abe78eccfce6341a52561661e13a', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?width=216&crop=smart&auto=we...
Personaplex Voice trainer MLX
1
tried training a custom voice for it myself last night on my mac. Found out there was no available tool to do it for MLX. Made one. [https://github.com/Achilles1089/personaplex-voice-trainer](https://github.com/Achilles1089/personaplex-voice-trainer)
2026-02-27T21:22:46
https://www.reddit.com/r/LocalLLaMA/comments/1rgjcck/personaplex_voice_trainer_mlx/
habachilles
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgjcck
false
null
t3_1rgjcck
/r/LocalLLaMA/comments/1rgjcck/personaplex_voice_trainer_mlx/
false
false
self
1
null
PSA dgx spark ~= rdimm $\go, rtx pro only 4x the price
0
In a world where prices are fuc*** Vram is king? Vram needs to be of usable speed? Do you want a slow farm in the countryside or a fast NYC room [View Poll](https://www.reddit.com/poll/1rgj6e9)
2026-02-27T21:16:26
https://www.reddit.com/r/LocalLLaMA/comments/1rgj6e9/psa_dgx_spark_rdimm_go_rtx_pro_only_4x_the_price/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgj6e9
false
null
t3_1rgj6e9
/r/LocalLLaMA/comments/1rgj6e9/psa_dgx_spark_rdimm_go_rtx_pro_only_4x_the_price/
false
false
self
0
null
Leviathan: Open-source persistent world for LLM agents — any model that speaks HTTP can join
1
[removed]
2026-02-27T21:13:24
https://www.reddit.com/r/LocalLLaMA/comments/1rgj3kd/leviathan_opensource_persistent_world_for_llm/
Acrobatic-Ad9830
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgj3kd
false
null
t3_1rgj3kd
/r/LocalLLaMA/comments/1rgj3kd/leviathan_opensource_persistent_world_for_llm/
false
false
self
1
null
Architect, an open-source CLI to orchestrate headless AI coding agents in CI/CD
0
Hey! I work daily with AI agents and I've always loved coding. I also have a solid background in DevOps. AI agents generate code, but rarely does anything guarantee it actually works. Claude Code, Cursor, and Copilot are great as interactive assistants and copilots. But when you need an agent to work unsupervised: in ...
2026-02-27T21:12:31
https://www.reddit.com/r/LocalLLaMA/comments/1rgj2ol/architect_an_opensource_cli_to_orchestrate/
RiskRain303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgj2ol
false
null
t3_1rgj2ol
/r/LocalLLaMA/comments/1rgj2ol/architect_an_opensource_cli_to_orchestrate/
false
false
self
0
{'enabled': False, 'images': [{'id': 'LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g.png?width=108&crop=smart&auto=webp&s=9b4b7c332a7b83e227a85c1b519dba3305962f34', 'width': 108}, {'height': 108, 'url': 'h...
What models do you think owned February?
1
[View Poll](https://www.reddit.com/poll/1rgixxr)
2026-02-27T21:07:31
https://www.reddit.com/r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/
abdouhlili
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgixxr
false
null
t3_1rgixxr
/r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/
false
false
self
1
null
Accuracy vs Speed. My top 5
0
\- Top 1: Alibaba-NLP\_Tongyi-DeepResearch-30B-A3B-IQ4\_NL - Best accuracy; I don't know why people don't talk about this model, it is amazing and the most accurate for my test cases (coding, reasoning, ..) \- Top 2: gpt-oss-20b-mxfp4-low - Best tradeoff of accuracy vs speed; low reasoning makes it faster \- Top 3: bu-3...
2026-02-27T21:07:06
https://i.redd.it/uj2yeq2ln3mg1.png
Deep_Traffic_7873
i.redd.it
1970-01-01T00:00:00
0
{}
1rgixk7
false
null
t3_1rgixk7
/r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/
false
false
https://preview.redd.it/…6156b8170d5770dc
0
{'enabled': True, 'images': [{'id': 'uj2yeq2ln3mg1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?width=108&crop=smart&auto=webp&s=f89735ad56be364ca08b9c5cca1bc8c9c9993465', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?width=216&crop=smart&auto=webp...
Seline is back: your OS go-to agent framework w/ GUI. (0.2.5: added multi-agent support, git worktree support, dev mode, parallel runs, discord, claude agent sdk, plugins, skills, hooks. fixed: a lot, 338 commits were added in 3 days yesterday when I checked, also I use seline mostly to develop it now...
3
Hello folks, It's been quite some time since my last post... I would like to share what I have been up to with my own standalone agent application. In the video you will see: 1. Onboarding : 00:00 - 00:18 2. Starting a feature request task 00:25 - First things first; since this is LocalLLaMA, let's list the things ...
2026-02-27T21:05:36
https://v.redd.it/j2eiuyxoo3mg1
Diligent-Builder7762
/r/LocalLLaMA/comments/1rgiw5c/seline_is_back_your_os_goto_agent_framework_w_gui/
1970-01-01T00:00:00
0
{}
1rgiw5c
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/j2eiuyxoo3mg1/DASHPlaylist.mpd?a=1774947952%2CMDdiNDE3NzA4NjMxNzI2MGQ3YmZjNTdhNzVkNTYwMTJkZmIyNWZiZjRjMDRiNTdhMWExOWIyOTBkMmM1ZWQ0OA%3D%3D&v=1&f=sd', 'duration': 329, 'fallback_url': 'https://v.redd.it/j2eiuyxoo3mg1/CMAF_1080.mp4?source=fallback', '...
t3_1rgiw5c
/r/LocalLLaMA/comments/1rgiw5c/seline_is_back_your_os_goto_agent_framework_w_gui/
false
false
https://external-preview…71dd56737960a140
3
{'enabled': False, 'images': [{'id': 'MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK.png?width=108&crop=smart&format=pjpg&auto=webp&s=e5af796666d3b4b3d8a12315675b31347fa40...
How does training an AI on another AI actually work?
0
How is DeepSeek actually doing this? Are they just feeding Claude's answers into their own models as training data to improve reasoning? How exactly does one train a model on the output of another? What's the engineering involved here? I'd love a breakdown of how this is executed at scale. Backstory: Anthropic 
2026-02-27T20:59:06
https://www.reddit.com/r/LocalLLaMA/comments/1rgips0/how_does_training_an_ai_on_another_ai_actually/
Koshcheiushko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgips0
false
null
t3_1rgips0
/r/LocalLLaMA/comments/1rgips0/how_does_training_an_ai_on_another_ai_actually/
false
false
self
0
null
[DISCUSSION] Is it time for a "Prose-First" Successor to NovelAI/Sudowrite/Novelcrafter focusing on preloaded uncensored models?
26
Hi everyone, I’ve spent the last few years living in the trenches of serialization. I’m a Sci-Fi and LitRPG author with over 1 million words published on Kindle Unlimited and Royal Road. By day, I work in tech as a data scientist / project manager. I wanted to gauge the community’s appetite for a new type of writing ...
2026-02-27T20:51:14
https://www.reddit.com/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/
CharlesBAntoine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgiimd
false
null
t3_1rgiimd
/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/
false
false
self
26
null
[Discussion] Is it time for a modern, "Prose-First" successor to NovelAI / Sudowrite / Novelcrafter?
0
[removed]
2026-02-27T20:49:05
https://www.reddit.com/r/LocalLLaMA/comments/1rgigmv/discussion_is_it_time_for_a_modern_prosefirst/
KaelElarian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgigmv
false
null
t3_1rgigmv
/r/LocalLLaMA/comments/1rgigmv/discussion_is_it_time_for_a_modern_prosefirst/
false
false
self
0
null
Switched to Qwen3.5-122B-A10B-i1-GGUF
15
Switched to mradermacher/Qwen3.5-122B-A10B-i1-GGUF:Q4\_K\_S today on my 6000 Pro, from mradermacher/MiniMax-M2.5-REAP-139B-A10B-i1-GGUF:Q4\_K\_S. So far it’s better; the main reason to switch was to get more context. The full 262k tokens fit on a 6000 Pro vs only about 65k with the MiniMax quant. It’s also fast.
2026-02-27T20:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/
NaiRogers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgiait
false
null
t3_1rgiait
/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/
false
false
self
15
null
OpenAI Raises $110 Billion in the Largest Private Funding Round Ever
0
2026-02-27T20:38:15
https://slashdot.org/story/26/02/27/1355236/openai-raises-110-billion-in-the-largest-private-funding-round-ever
HumanDrone8721
slashdot.org
1970-01-01T00:00:00
0
{}
1rgi6ky
false
null
t3_1rgi6ky
/r/LocalLLaMA/comments/1rgi6ky/openai_raises_110_billion_in_the_largest_private/
false
false
https://external-preview…f6c2c2a7066eb043
0
{'enabled': False, 'images': [{'id': 'hil_tMNOLt9yEv7PpbqA4DDr8WAL7xTJ2MH4RtRRmYA', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/hil_tMNOLt9yEv7PpbqA4DDr8WAL7xTJ2MH4RtRRmYA.png?auto=webp&s=159cdb86d0a28da02a7f91b85bceed89b5907f84', 'width': 64}, 'variants': {}}]}
Qwen3.5 Unsloth GGUFs Update!
34
2026-02-27T20:31:41
https://www.reddit.com/r/unsloth/comments/1rgemmh/qwen35_unsloth_ggufs_update/
fallingdowndizzyvr
reddit.com
1970-01-01T00:00:00
0
{}
1rgi0ej
false
null
t3_1rgi0ej
/r/LocalLLaMA/comments/1rgi0ej/qwen35_unsloth_ggufs_update/
false
false
https://preview.redd.it/…d79a7a6f3bb15cad
34
{'enabled': False, 'images': [{'id': '5xtzvpxdx2mg1', 'resolutions': [{'height': 119, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?width=108&crop=smart&auto=webp&s=246af8f271aaccd22a0e76f6af96fb1366c5d0e1', 'width': 108}, {'height': 238, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?width=216...
Pure LLMs score 0% on ARC-AGI-2 (humans: 60%). Meanwhile AlphaProof couples an LLM with Lean for 100% verifiable math proofs. Analysis of why hybrid architectures are making a comeback.
0
If you've been following ARC-AGI-2 results, you know the numbers are brutal. Every pure LLM tested, including o3, scores effectively 0%. o3 went from 75.7% on ARC-AGI-1 to 3-4% on the updated version. Average humans still score 60%. This matters for the local LLM community because it shows that the limitation isn't a...
2026-02-27T20:25:23
https://medium.com/ai-advances/neuro-symbolic-ai-arc-agi-alphaproof-third-wave-48177339d698?sk=2fadaf3cfe595a54fab578edc2de3362
Sensitive-Two9732
medium.com
1970-01-01T00:00:00
0
{}
1rghuaq
false
null
t3_1rghuaq
/r/LocalLLaMA/comments/1rghuaq/pure_llms_score_0_on_arcagi2_humans_60_meanwhile/
false
false
default
0
null
February is almost over, are you satisfied? Upcoming models soon?
70
Some mentioned that Feb was loaded with so many model drops, and some mentioned the CNY thing. I guess March & April will possibly be loaded with even more. I'm sure local folks are happy with the Qwen series, GLM5, Step Flash, and MiniMax 2.5. What models are coming in March & April? Any news/speculations/rumors? 
2026-02-27T20:09:48
https://www.reddit.com/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rghfqj
false
null
t3_1rghfqj
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/
false
false
self
70
{'enabled': False, 'images': [{'id': '6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=108&crop=smart&auto=webp&s=d468c99ee7a45fbc3c6246eaae3578bcd281ffd1', 'width': 108}, {'height': 116, 'url': 'h...