| column | dtype | range |
|:-|:-|:-|
| title | string | length 1 to 300 |
| score | int64 | 0 to 8.54k |
| selftext | string | length 0 to 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2026-03-04 02:14:14 |
| url | string | length 0 to 878 |
| author | string | length 3 to 20 |
| domain | string | length 0 to 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2026-02-19 14:51:53 |
| gilded | int64 | 0 to 2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646 to 1.8k |
| name | string | length 10 |
| permalink | string | length 33 to 82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4 to 213 |
| ups | int64 | 0 to 8.54k |
| preview | string | length 301 to 5.01k |
How is Qwen 3.5 (MoE 35b) in instruct mode (with no reasoning/thinking)?
38
We're out of bandwidth at the office; have you guys managed to test it? I find it surprising that Qwen moved away from hybrid models (after the 2507 releases) only to release a hybrid reasoning model again.
2026-02-28T10:37:01
https://www.reddit.com/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/
LinkSea8324
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgzfat
false
null
t3_1rgzfat
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/
false
false
self
38
null
I Built a Codex Control Deck From an Old Stadia Controller (Swift Agent Build)
0
2026-02-28T10:29:40
https://www.youtube.com/watch?v=MFiQFPgrHPA
phoneixAdi
youtube.com
1970-01-01T00:00:00
0
{}
1rgzax4
false
{'oembed': {'author_name': 'Adithyan', 'author_url': 'https://www.youtube.com/@adithyan_ai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/MFiQFPgrHPA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Vibe Coding With a Game Controller"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/MFiQFPgrHPA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Vibe Coding With a Game Controller', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1rgzax4
/r/LocalLLaMA/comments/1rgzax4/i_built_a_codex_control_deck_from_an_old_stadia/
false
false
https://external-preview…a178f88460dea529
0
{'enabled': False, 'images': [{'id': 'tx1VcGakcKYrxUDJivKAhyBRUhoZwZuFpemGTvZfM5U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tx1VcGakcKYrxUDJivKAhyBRUhoZwZuFpemGTvZfM5U.jpeg?width=108&crop=smart&auto=webp&s=c4988e3688f9407440892b4aed6d1dc9d8ee7673', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tx1VcGakcKYrxUDJivKAhyBRUhoZwZuFpemGTvZfM5U.jpeg?width=216&crop=smart&auto=webp&s=31a7a112872019d50b65d2e14f3ad5400822cace', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tx1VcGakcKYrxUDJivKAhyBRUhoZwZuFpemGTvZfM5U.jpeg?width=320&crop=smart&auto=webp&s=0cc1c5c69db8e14ab02f15cc126ab82239d3a5bf', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tx1VcGakcKYrxUDJivKAhyBRUhoZwZuFpemGTvZfM5U.jpeg?auto=webp&s=dde84ee23753a413ba45e126de070d337d912ba1', 'width': 480}, 'variants': {}}]}
How to Build Your Local Gaming Copilot with a Powerful GPU PC?
0
Any help?
2026-02-28T10:22:48
https://www.reddit.com/r/LocalLLaMA/comments/1rgz6u3/how_tò_build_your_local_gaming_copilot_with/
Quiet_Dasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgz6u3
false
null
t3_1rgz6u3
/r/LocalLLaMA/comments/1rgz6u3/how_tò_build_your_local_gaming_copilot_with/
false
false
self
0
null
BREAKING: Global Black Swan active. Is your crypto portfolio bulletproof?
1
[removed]
2026-02-28T10:10:41
https://www.reddit.com/r/LocalLLaMA/comments/1rgyzrw/breaking_global_black_swan_active_is_your_crypto/
LeoLucky1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgyzrw
false
null
t3_1rgyzrw
/r/LocalLLaMA/comments/1rgyzrw/breaking_global_black_swan_active_is_your_crypto/
false
false
self
1
null
Has anyone got qwen3.5 to work with ollama?
0
> ollama run hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q2_K_XL
> Error: 500 Internal Server Error: unable to load model: /usr/share/ollama/.ollama/models/blobs/sha256-a7d979fa31c1387cc5a49b94b1a780b2e9018b3fae6cf9bef6084c17367412e3
> ollama --version
> ollama version is 0.17.4
2026-02-28T09:55:24
https://www.reddit.com/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgyqz7
false
null
t3_1rgyqz7
/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/
false
false
self
0
null
Which model is best for Lean, in your experience?
1
I have been trying MiniMax 2.5 and it's OK, but not that great.
2026-02-28T09:50:38
https://www.reddit.com/r/LocalLLaMA/comments/1rgyof9/which_model_is_best_for_lean_in_your_experience/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgyof9
false
null
t3_1rgyof9
/r/LocalLLaMA/comments/1rgyof9/which_model_is_best_for_lean_in_your_experience/
false
false
self
1
null
Dual 3060 and Single 3090. What's the point of the extra performance?
0
Bit of a non-technical noob here, hope the question isn't too stupid. I tested on Ollama the 30B-class models, like DeepSeek R1 32B and its jailbroken counterpart, Qwen 30B, and GPT-OSS 20B, all yielding similar speeds once the model is loaded into VRAM (split between the 3060 12GBs or on a single 3090). I made no adjustments to quantization or anything, just basic Ollama: download and use. What am I missing here? What's the point of a 3090 if two 12GB 3060s would do the trick just fine?
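For intuition, a rough back-of-envelope (illustrative numbers, not measurements): single-stream decoding is usually memory-bandwidth-bound, so expected tokens/s scale with GPU memory bandwidth divided by bytes read per token. If dual 3060s and a 3090 feel identical, the likely culprits are partial CPU offload or small active parameter counts (MoE models), because the raw bandwidth gap between the cards is large:

```python
# Rough back-of-envelope: single-stream decode speed is usually
# memory-bandwidth-bound, so tok/s ~= effective bandwidth / bytes read per token.
# All numbers below are illustrative assumptions, not benchmarks.

def est_tok_per_s(bandwidth_gb_s: float, model_gb: float, efficiency: float = 0.6) -> float:
    """Each generated token streams (roughly) the whole active weight set once."""
    return bandwidth_gb_s * efficiency / model_gb

model_gb = 18.0  # e.g. a ~30B dense model at 4-bit quantization (assumption)

for name, bw in [("RTX 3060 (360 GB/s)", 360.0), ("RTX 3090 (936 GB/s)", 936.0)]:
    print(f"{name}: ~{est_tok_per_s(bw, model_gb):.0f} tok/s")
```

Note that splitting layers across two 3060s does not add bandwidth per token (each token still walks through both cards in sequence), so a dense model fully on GPU should decode noticeably faster on a 3090.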
2026-02-28T09:49:14
https://www.reddit.com/r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/
TheAncientOnce
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgynmf
false
null
t3_1rgynmf
/r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/
false
false
self
0
null
Switching from Windows to Linux, what distro to use for inference and gaming?
4
I've had a scare with my 3090 overheating recently, but fortunately the guy at my local PC shop could fix it by swapping out a tiny chip on the GPU. I'm not sure if I can undervolt in Windows, so I was wondering if there are any Linux recommendations that work well for both inference and gaming. I usually just use llama.cpp, but I was also wondering if there are distros specialized in local AI that already come with everything necessary installed.
2026-02-28T09:30:26
https://www.reddit.com/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/
doesitoffendyou
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgyd8p
false
null
t3_1rgyd8p
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/
false
false
self
4
null
Local Manus
1
Hi there, I was interested in the Manus app, but it was bought by Meta. Does anyone happen to know the best open-source alternative to Manus, where I could connect my local Qwen 3.5 with 98k context?
2026-02-28T09:26:34
https://www.reddit.com/r/LocalLLaMA/comments/1rgyb4r/local_manus/
yes_yes_no_repeat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgyb4r
false
null
t3_1rgyb4r
/r/LocalLLaMA/comments/1rgyb4r/local_manus/
false
false
self
1
null
New Claude 2.1 Refuses to kill a Python process :)
0
2026-02-28T09:05:25
https://i.redd.it/3w0h00v3b7mg1.png
trunardu
i.redd.it
1970-01-01T00:00:00
0
{}
1rgxywo
false
null
t3_1rgxywo
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/
false
false
https://preview.redd.it/…2954127895ee980b
0
{'enabled': True, 'images': [{'id': '3w0h00v3b7mg1', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?width=108&crop=smart&auto=webp&s=02a0d358f8705c56125328daa74f90cff8ed10fb', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?width=216&crop=smart&auto=webp&s=770020025d323ea5d9b6531e0ab838a251071524', 'width': 216}, {'height': 92, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?width=320&crop=smart&auto=webp&s=2c496941bc4ac4144bf4b1bc1e6c2d7cf0c5b50f', 'width': 320}, {'height': 184, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?width=640&crop=smart&auto=webp&s=b3b8f928c61a7244637a883a64541c7da5cb5429', 'width': 640}, {'height': 277, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?width=960&crop=smart&auto=webp&s=3e27ffb79f505d806cc8f14090aa902d067f1c79', 'width': 960}, {'height': 311, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?width=1080&crop=smart&auto=webp&s=221c607c6fbce187c8835bbbce18d20af1ebda73', 'width': 1080}], 'source': {'height': 454, 'url': 'https://preview.redd.it/3w0h00v3b7mg1.png?auto=webp&s=ea98ccea89e36c981f17f995f4ab19bece58ede8', 'width': 1572}, 'variants': {}}]}
Qwen 3.5 is multimodal. Here is how to enable image understanding in opencode with llama.cpp
42
The trick is to add this to your opencode.json file:

    "modalities": {
      "input": ["text", "image"],
      "output": ["text"]
    }

Full config:

    "provider": {
      "llama.cpp": {
        "npm": "@ai-sdk/openai-compatible",
        "name": "llama-server",
        "options": {
          "baseURL": "http://127.0.0.1:8001/v1"
        },
        "models": {
          "Qwen3.5-35B-local": {
            "modalities": {
              "input": ["text", "image"],
              "output": ["text"]
            },
            "name": "Qwen3.5-35B-local",
            "limit": {
              "context": 122880,
              "output": 32768
            }
          }
        }
      }
    }
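A quick way to verify the server side actually accepts images before wiring up opencode: a minimal sketch assuming llama-server was started with an `--mmproj` file on port 8001 (matching the baseURL above) and a `test.png` in the working directory.

```python
# Send one image to llama-server's OpenAI-compatible endpoint and print the reply.
import base64, json, urllib.request

with open("test.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "Qwen3.5-35B-local",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
}
req = urllib.request.Request(
    "http://127.0.0.1:8001/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"])
```

If this returns a description of the image, the opencode config above is the only remaining piece.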
2026-02-28T08:52:41
https://www.reddit.com/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/
Old-Sherbert-4495
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgxr0v
false
null
t3_1rgxr0v
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/
false
false
self
42
null
Just press Ctrl+N to go to the session that requires attention
0
What should you do when you finish handling one session and want to jump directly to the next one? [https://github.com/weykon/agent-hand](https://github.com/weykon/agent-hand) I need more suggestions and feedback from everyone's experiences.
2026-02-28T08:18:27
https://www.reddit.com/r/LocalLLaMA/comments/1rgx6jb/just_press_ctrl_n_go_to_the_session_that_requires/
Remarkable_Mind9519
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgx6jb
false
null
t3_1rgx6jb
/r/LocalLLaMA/comments/1rgx6jb/just_press_ctrl_n_go_to_the_session_that_requires/
false
false
self
0
{'enabled': False, 'images': [{'id': 'oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc.png?width=108&crop=smart&auto=webp&s=1f07a6147ddfafa493b6cd327fa57a7dbc46c4fb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc.png?width=216&crop=smart&auto=webp&s=74bdb1f3fd0b0f12d664f3737082aa05e176626d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc.png?width=320&crop=smart&auto=webp&s=dafa6ca6a586b0ed7df777b30890797697dfba61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc.png?width=640&crop=smart&auto=webp&s=927e464b053d910d361e0d847e091fe27ff88a86', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc.png?width=960&crop=smart&auto=webp&s=021acf91387523a977bcf1be0d8384b5dd3bc19f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc.png?width=1080&crop=smart&auto=webp&s=597418a5b84f1b2a8da3f9d7a876759b07801c4f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oTS5IdVR44oSdPZkUBy0OsKRDYWQfu0ghyLTKZbfxVc.png?auto=webp&s=89fe20e60ca92c7e9f6059794502c4ea21aa9494', 'width': 1200}, 'variants': {}}]}
Agent debugging is a mess, am I the only one?
0
I'm building multi-step agents, and when something breaks at step 4, I have zero visibility into what actually happened at step 2. No replay, no cost breakdown, no clean failure trace. How are you all handling observability for your agents? Logging everything manually? Using something specific?
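One common answer is the boring one: wrap every step and append a JSON line you can grep and replay later. A minimal sketch; names like `traced` are illustrative, not from any particular framework.

```python
# Append one JSON record per agent step: inputs, output, timing, status.
import json, time, uuid

TRACE = open("agent_trace.jsonl", "a")

def traced(step_name, fn, **inputs):
    rec = {"id": str(uuid.uuid4()), "step": step_name,
           "inputs": inputs, "t_start": time.time()}
    try:
        out = fn(**inputs)
        rec.update(status="ok", output=str(out)[:2000])  # truncate big outputs
        return out
    except Exception as e:
        rec.update(status="error", error=repr(e))
        raise
    finally:
        rec["t_end"] = time.time()
        TRACE.write(json.dumps(rec) + "\n")
        TRACE.flush()
```

When step 4 blows up, the step 2 record is one grep away, and a crude replay is just calling the same function again with `rec["inputs"]`.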
2026-02-28T08:05:18
https://www.reddit.com/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/
DepthInteresting6455
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgwyqi
false
null
t3_1rgwyqi
/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/
false
false
self
0
null
Packaging AI Models as Conda packages
2
We wrote up how to package AI/ML models (weights, configs) as conda packages using rattler-build. The idea: treat models like any other dependency — versioned, lockable, cached via hardlinks (no duplicate disk usage), and optionally signed with Sigstore attestations for supply chain security.

The post walks through packaging whisper.cpp GGML models as an example, including using build string variants to manage multiple model types from a single recipe and setting env vars so your code can find the model automatically. We first used this approach distributing self-trained deep learning models for robotics — it let us track exactly which model version was running at every stage from dev to deployment.

Blog post: [https://prefix.dev/blog/packaging-ai-ml-models-as-conda-packages](https://prefix.dev/blog/packaging-ai-ml-models-as-conda-packages)

Example repo to try it out: [https://github.com/ruben-arts/models-as-packages](https://github.com/ruben-arts/models-as-packages) (one command: `pixi run mic`)

Open questions we'd love community input on: naming conventions, metadata standards, and whether a community channel for models makes sense.
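On the consumer side, the pattern in the post reduces to: the package sets an env var at activation, and application code resolves the model path from it instead of hardcoding one. A minimal sketch; the variable name `WHISPER_GGML_MODEL` is hypothetical.

```python
# Resolve a packaged model's path from an activation-time env var.
import os
from pathlib import Path

def resolve_model(env_var: str = "WHISPER_GGML_MODEL") -> Path:
    p = os.environ.get(env_var)
    if not p:
        raise RuntimeError(
            f"{env_var} not set; activate the environment that ships the model")
    path = Path(p)
    if not path.is_file():
        raise FileNotFoundError(path)
    return path

print(resolve_model())
```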
2026-02-28T07:59:40
https://prefix.dev/blog/packaging-ai-ml-models-as-conda-packages
droelf
prefix.dev
1970-01-01T00:00:00
0
{}
1rgwv7e
false
null
t3_1rgwv7e
/r/LocalLLaMA/comments/1rgwv7e/packaging_ai_models_as_conda_packages/
false
false
https://external-preview…a78abfa1531fb6f5
2
{'enabled': False, 'images': [{'id': '1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c.png?width=108&crop=smart&auto=webp&s=ce6fbe91e718c98d7286a78e127768fd88574c5c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c.png?width=216&crop=smart&auto=webp&s=f0aad5f1fcf9dbecae5be0220e0d041df18eb3f0', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c.png?width=320&crop=smart&auto=webp&s=db84c881a8f9eae996277cb2714c9fa744b8573e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c.png?width=640&crop=smart&auto=webp&s=19fd86b5a78b2fbf9d46b4bd480afaf9e36f51b1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c.png?width=960&crop=smart&auto=webp&s=cd9ccb85593af71ee177c939ec725412d7a7b977', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c.png?width=1080&crop=smart&auto=webp&s=30d67c9cc52237bbf37d1f59733034f77d96eef8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/1lB4i3P97k_rQqZHUUhBR78MoA2F_30cF7UcMWHf84c.png?auto=webp&s=b7034de82a875fdb311a42f1da1b9269423d150e', 'width': 1200}, 'variants': {}}]}
Speculative decoding qwen3.5 27b
7
Has anyone managed to make speculative decoding work for this model? What smaller model are you using? Does it run on vLLM or llama.cpp? Since it is a dense model it should work, but for the love of me I can't get it to work.
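For what it's worth, one configuration that does work with a dense GGUF is llama-cpp-python's built-in prompt-lookup speculative decoder, which needs no separate draft model; a true draft-model pairing would instead go through llama-server's `-md`/`--model-draft` flag. A sketch, with the model filename as an assumption:

```python
# Prompt-lookup speculative decoding in llama-cpp-python (no draft model needed).
from llama_cpp import Llama
from llama_cpp.llama_speculative import LlamaPromptLookupDecoding

llm = Llama(
    model_path="Qwen3.5-27B-Q4_K_M.gguf",                 # hypothetical filename
    draft_model=LlamaPromptLookupDecoding(num_pred_tokens=8),
    n_gpu_layers=-1,
    n_ctx=8192,
)
out = llm("Summarize speculative decoding in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```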
2026-02-28T07:54:06
https://www.reddit.com/r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/
thibautrey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgwryb
false
null
t3_1rgwryb
/r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/
false
false
self
7
null
Which size of Qwen3.5 are you planning to run locally?
32
Just a quick poll/discussion for the local hardware crowd. Are you guys jumping on the 27B for single-card setups, trying to squeeze the 35B into Mac Studios, or going crazy with the 122B on multi-GPU rigs? Trying to figure out which size will get the most community support locally.
2026-02-28T06:46:39
https://www.reddit.com/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/
CutOk3283
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgvma8
false
null
t3_1rgvma8
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/
false
false
self
32
null
MCP Marketplace - security-scanned directory of 1,700+ MCP tool plugins
1
[removed]
2026-02-28T06:35:03
https://www.reddit.com/r/LocalLLaMA/comments/1rgvezr/mcp_marketplace_securityscanned_directory_of_1700/
Evening-Dot2352
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgvezr
false
null
t3_1rgvezr
/r/LocalLLaMA/comments/1rgvezr/mcp_marketplace_securityscanned_directory_of_1700/
false
false
self
1
{'enabled': False, 'images': [{'id': 'u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts.png?width=108&crop=smart&auto=webp&s=f9b78a576250a8f15517741a18d245174d9ef0ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts.png?width=216&crop=smart&auto=webp&s=982349a3052c6773320a944d0174a6074952a2c2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts.png?width=320&crop=smart&auto=webp&s=4b75c07f0af0b6fa34954886356da2348e32053e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts.png?width=640&crop=smart&auto=webp&s=78bcad600c2358df6c82322504452d61cc54ac9c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts.png?width=960&crop=smart&auto=webp&s=0378a3e4227a2ed5eec485491d8d3816ed993159', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts.png?width=1080&crop=smart&auto=webp&s=fe418b84f370d21589c2ee31bf13f2b3a1377d25', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/u8GX2PfE_BOeoXPfiwsHhSzcBP4UowL7SjojENEs-Ts.png?auto=webp&s=f2e39f44f51417ff705370205c666a45962a2174', 'width': 1200}, 'variants': {}}]}
Built a clean web UI for TranslateGemma
5
Google's TranslateGemma is one of the better open translation models but there wasn't a good standalone UI for it. Built one.

* Auto-translates as you type
* 55 languages with searchable selector
* Supports 27B, 12B, and 4B variants through Ollama
* Shows translation stats (duration, token count)
* Single Docker container, just point it at your Ollama instance

Docker image + source: [https://github.com/realies/translategemma-ui](https://github.com/realies/translategemma-ui)
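If you want to sanity-check the model outside the UI, a minimal call against the same Ollama instance works; the model tag below is an assumption, use whatever `ollama list` reports.

```python
# One-shot, non-streaming request to Ollama's generate endpoint.
import json, urllib.request

payload = {
    "model": "translategemma",   # assumed tag; check `ollama list`
    "prompt": "Translate to German: The weather is nice today.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["response"])
```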
2026-02-28T06:34:19
https://i.redd.it/fj13dbhoj6mg1.png
realies_
i.redd.it
1970-01-01T00:00:00
0
{}
1rgvej3
false
null
t3_1rgvej3
/r/LocalLLaMA/comments/1rgvej3/built_a_clean_web_ui_for_translategemma/
false
false
https://preview.redd.it/…37a9a3602d39e668
5
{'enabled': True, 'images': [{'id': 'fj13dbhoj6mg1', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?width=108&crop=smart&auto=webp&s=07db938808a17bd154ed0cf4c1cc016527c7dc7f', 'width': 108}, {'height': 65, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?width=216&crop=smart&auto=webp&s=39fc6794878b0e5d9d04a9717ce889382fcdfd09', 'width': 216}, {'height': 96, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?width=320&crop=smart&auto=webp&s=0414f52267e89d5c3defae4cd4296b583f54ee71', 'width': 320}, {'height': 193, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?width=640&crop=smart&auto=webp&s=e54f4885c216b8f6aa52015b32eba1a1c6237d45', 'width': 640}, {'height': 290, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?width=960&crop=smart&auto=webp&s=38c148814c502aa6e95b37d7460c9e47756531dc', 'width': 960}, {'height': 326, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?width=1080&crop=smart&auto=webp&s=3ff9457dc3a9d4ebbbd2cf23b507ab82c1827fbc', 'width': 1080}], 'source': {'height': 644, 'url': 'https://preview.redd.it/fj13dbhoj6mg1.png?auto=webp&s=8f7c625e7fc90a2f5b2cf32a12cd7916f4f802a7', 'width': 2130}, 'variants': {}}]}
Is there a fully local alternative to remote AI session control?
0
**\[Used a self-hosted LLM to check grammar and spelling mistakes in this post\]**

Anthropic recently launched Remote Control for Claude Code, which lets you continue a local session from your phone. That idea is interesting, but it raises some questions around:

* Cloud relay
* Subscriptions
* Vendor lock-in
* Agent-specific integrations

There's an open-source tool called **itwillsync** that approaches this differently. Instead of relying on a cloud bridge, it wraps any terminal-based agent in:

* `node-pty`
* Local HTTP server
* WebSocket bridge
* `xterm.js` browser terminal

You run something like:

    npx itwillsync -- claude
    npx itwillsync -- kilo
    npx itwillsync -- cline

Then scan a QR code and open the terminal in a mobile browser to control the session.

# Key points

* LAN-only (no cloud relay)
* No subscription
* Agent-agnostic (Claude, Aider, Codex, bash, etc.)
* Multiple devices can connect
* 64-character session token
* WebSocket keepalive
* Optional remote access via Tailscale or SSH tunnel

Everything stays on your own network.

Curious if anyone here has tried something similar for mobile access to local agents? Are there other self-hosted approaches people prefer?
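For the curious, the architecture is small enough to sketch. This is not itwillsync's code, just a Python analogue of the same pty-plus-WebSocket idea (assumes `pip install websockets`; Unix only):

```python
# Minimal pty <-> WebSocket bridge: the shell runs in a pseudo-terminal and any
# WS client (e.g. an xterm.js page) reads its output and sends keystrokes.
import asyncio, os, pty
import websockets  # pip install websockets

async def bridge(ws):
    master, slave = pty.openpty()
    proc = await asyncio.create_subprocess_exec(
        "bash", stdin=slave, stdout=slave, stderr=slave,
        start_new_session=True)
    os.close(slave)
    loop = asyncio.get_running_loop()

    async def pty_to_ws():                  # shell output -> browser terminal
        while True:
            try:
                data = await loop.run_in_executor(None, os.read, master, 4096)
            except OSError:                 # pty closes when the shell exits
                break
            await ws.send(data.decode(errors="replace"))

    async def ws_to_pty():                  # keystrokes -> shell stdin
        async for msg in ws:
            os.write(master, msg.encode() if isinstance(msg, str) else msg)

    await asyncio.gather(pty_to_ws(), ws_to_pty(), return_exceptions=True)
    proc.terminate()

async def main():
    async with websockets.serve(bridge, "0.0.0.0", 8765):  # LAN-only, no relay
        await asyncio.Future()

asyncio.run(main())
```

The real tool adds the HTTP server, QR pairing, and session tokens on top, but the terminal plumbing is essentially this.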
2026-02-28T06:15:31
https://v.redd.it/fvke8d5qg6mg1
shrijayan
v.redd.it
1970-01-01T00:00:00
0
{}
1rgv2n3
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fvke8d5qg6mg1/DASHPlaylist.mpd?a=1774851350%2CYjA0NmE1ZWY4MzBmZTJlMTJhYWQxOTZiMzQ1NWQxMTJhZTMzMjE2NGFhOWNlMzIxZTA2YjBiZWE5YWQwMzcwNw%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/fvke8d5qg6mg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/fvke8d5qg6mg1/HLSPlaylist.m3u8?a=1774851350%2CMDBjY2I3MzVjODQ5NzQzYWY3YjVjMjdlOWVkMWJhYmY1ODAyNzYyMWQ5M2Y5ZmNhOWQ3OTVlMTY2MzUyNWUzMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fvke8d5qg6mg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1rgv2n3
/r/LocalLLaMA/comments/1rgv2n3/is_there_a_fully_local_alternative_to_remote_ai/
false
false
https://external-preview…ac97df81b6325b6a
0
{'enabled': False, 'images': [{'id': 'bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo.png?width=108&crop=smart&format=pjpg&auto=webp&s=5709072f49895c351907068dcf04837514b6609c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo.png?width=216&crop=smart&format=pjpg&auto=webp&s=eebf3b87c4b8a03e32e1c8925fc174a0f3635487', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo.png?width=320&crop=smart&format=pjpg&auto=webp&s=8e47adb7d4ba913ea6fb626a0c3adf3814f373a5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo.png?width=640&crop=smart&format=pjpg&auto=webp&s=14a8fb6ab255785597050c3a8ed7769fd4f06682', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo.png?width=960&crop=smart&format=pjpg&auto=webp&s=77b08e79d2e151fda95d1a0e7053cbb04fb3508e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d11523d5946fcd3f930d760ad492bd89c665241d', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/bmU3Mm9sNnFnNm1nMZht1sWjrtxQ4q26jMYctb168bUr5riHKMM0_-Mt0kEo.png?format=pjpg&auto=webp&s=435d43d3d8c0dd8d57208a080748370c166c2939', 'width': 3840}, 'variants': {}}]}
Qwen 3.5 cutoff date is 2024?
0
I need a dummy's guide to get the LLM up to speed: I know its knowledge cutoff date is 2026, but it reports 2024 (screenshot below). Am using LM Studio.

https://preview.redd.it/rbxw0dqwf6mg1.png?width=1383&format=png&auto=webp&s=81dac075ee1835b12cb5cc86c9d9fe06f6e0bc95
2026-02-28T06:11:18
https://www.reddit.com/r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/
alex_godspeed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rguzz2
false
null
t3_1rguzz2
/r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/
false
false
https://preview.redd.it/…535bcb53c053a36f
0
null
I ran 3,830 inference runs to measure how system prompt framing (not content) changes token entropy — Mistral-7B hit d=1.0+, Mamba showed nothing. Here's the breakdown
0
This started as a simple question: if I change the *relational framing* of a system prompt — not the task instructions, just whether the prompt positions the model as a co-explorer vs. a task-executor — does the generation distribution actually change? Spoiler: yes, and the effect is huge at 7B scale.

**Models tested:**

* GPT-2 (117M, 345M, 774M, 1.5B)
* Falcon-7B
* Mistral-7B
* Mamba-2.8B (as SSM control)

**What we measured:** Shannon entropy of token probability distributions at each generation step — not just output quality, but the shape of the distribution the model is sampling from.

**Results that matter for local inference:**

|Model|Effect size (d)|Significant?|
|:-|:-|:-|
|GPT-2 117M|0.13|No|
|GPT-2 1.5B|0.41|Marginal|
|Falcon-7B|0.84|Yes|
|Mistral-7B|1.04|Yes|
|Mamba-2.8B|0.06|No|

**Practical implication:** The system prompts you're using with 7B models are not just instructions — they are *modulating the entropy regime* of generation. High-entropy prompts produce more exploratory, less peaked distributions. This is distinct from temperature. The attention ablation phase (Phase 3, 930 runs) confirmed this is mediated through attention mechanisms specifically — SSMs don't respond because they process differently.

**Full paper:** [https://doi.org/10.5281/zenodo.18810911](https://doi.org/10.5281/zenodo.18810911)

**Code/notebooks:** [https://github.com/templetwo/phase-modulated-attention](https://github.com/templetwo/phase-modulated-attention)

If you're tuning system prompts for Mistral or similar 7B+ architectures, this gives you a mechanistic handle on what's actually happening under the hood.
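The core measurement is easy to reproduce at small scale. A sketch of my own reconstruction (not the authors' code; their repo is linked above), computing per-step Shannon entropy of the next-token distribution under the two framings the post describes:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is the paper's smallest model and runs anywhere; swap in a 7B checkpoint
# (e.g. Mistral-7B) to look for the large effect reported in the table above.
name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

def step_entropies(framing: str, prompt: str, max_new_tokens: int = 64):
    """Shannon entropy (nats) of the next-token distribution at each step."""
    ids = tok(framing + "\n\n" + prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True,
                         output_scores=True, return_dict_in_generate=True)
    ents = []
    for scores in out.scores:               # one logits row per generated token
        p = torch.softmax(scores[0].float(), dim=-1)
        ents.append(-(p * (p + 1e-12).log()).sum().item())
    return ents

base = step_entropies("You are an assistant completing a task.", "Describe rain.")
coex = step_entropies("We are exploring this together; uncertainty is valid and "
                      "worth naming.", "Describe rain.")
print(f"directive: {sum(base)/len(base):.3f}  co-creative: {sum(coex)/len(coex):.3f}")
```

A single pair of runs is noise; the paper's effect sizes come from many repeated samples per condition.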
2026-02-28T06:08:06
https://www.reddit.com/r/LocalLLaMA/comments/1rguxyo/i_ran_3830_inference_runs_to_measure_how_system/
TheTempleofTwo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rguxyo
false
null
t3_1rguxyo
/r/LocalLLaMA/comments/1rguxyo/i_ran_3830_inference_runs_to_measure_how_system/
false
false
self
0
null
Get your local models in order. Anthropic just got a "dislike" from the US government.
254
Yeah, as things look right now, OpenAI and the US government are on the path to bringing Anthropic to its knees. Would Anthropic's fall be good or bad for us?
2026-02-28T06:01:55
https://www.reddit.com/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rguty0
false
null
t3_1rguty0
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/
false
false
self
254
null
so this was something
0
2026-02-28T05:57:19
https://i.redd.it/iz09hraed6mg1.png
cobalt1137
i.redd.it
1970-01-01T00:00:00
0
{}
1rguqti
false
null
t3_1rguqti
/r/LocalLLaMA/comments/1rguqti/so_this_was_something/
false
false
https://preview.redd.it/…0a04576807cf9cb3
0
{'enabled': True, 'images': [{'id': 'iz09hraed6mg1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?width=108&crop=smart&auto=webp&s=ff6373cbda194a883728ee1730aebf30c53c9d86', 'width': 108}, {'height': 77, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?width=216&crop=smart&auto=webp&s=3c74b1c1af75b3e78140da7b158abfd07867f07e', 'width': 216}, {'height': 115, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?width=320&crop=smart&auto=webp&s=79b413f1930e883b32504b1b7924c0192d4e2776', 'width': 320}, {'height': 230, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?width=640&crop=smart&auto=webp&s=7092e0d36f94e6f4bfd82f1b8eeb6c813e5d9e30', 'width': 640}, {'height': 346, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?width=960&crop=smart&auto=webp&s=75bf981cfdcb40db658b7f28a224363a0e1dd4b3', 'width': 960}, {'height': 389, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?width=1080&crop=smart&auto=webp&s=0e6900b2ce66b1dbce1328bca5817da1b9c7a3aa', 'width': 1080}], 'source': {'height': 466, 'url': 'https://preview.redd.it/iz09hraed6mg1.png?auto=webp&s=1713af2c4a412f743fbc6d7f8e0c52390aa8958f', 'width': 1292}, 'variants': {}}]}
Project SOTA Toolkit: Drop 3, Distill the Flow released. Drop 4, aeron prepared for release
0
So, as teased with the first push last night, Moonshine/Distill-The-Flow is now public: reproducible code, ready to take any exports through analysis and visual pipelines into clean chat-format .json and .jsonl structured exports.

Drop 3 is not a dataset or a single output. Through a global database called the "mash", we stream multi-provider, multi-format exports into separate cleaned per-provider stores and .parquet rows, then into a global db that grows with every new cleaned provider output. The repository also contains a suite of visual analyses, some of which directly measure model sycophancy and "malicious compliance", which I propose happens due to current safety policies: it becomes safer for a model to continue a conversation and pretend to help than to risk the user starting a new instance or going to a new provider. This isn't a claimed hypothesis with weight behind it, just a side analysis. All data spans Jan 2025 to Feb 2026, a bit over one year. These are not average chat exports.

Just as with every other release, there is some configuration on the user side to actually get running; these are tools to be plugged into any workflow, not standalone systems ready to run as-is. The current pipeline, across four providers and thirteen months of exports, produced a cleaned/distilled count of 2,788 conversations, 179,974 messages, and 122 million tokens, plus full-scale visual analysis and markdown forensic reports. One of the most important things checked for and cleaned out before anything is added to the main "mash" .db is sycophancy and malicious compliance, tracked across 5 periods. The best hypothesis is that p3 onward is when GPT-5 and Claude 4 released, introducing the current routing-based era. These visuals are worthy of standalone presentation, so even if you have no direct use for the reports and visuals the pipeline produces from my year of exports, you may learn something in your own domain, especially with how relevant model sycophancy is now.

## Clarifying Context

Distill-The-Flow is not a dataset, nor is it marketed as such. The overlap with Anthropic, OpenAI, and DeepSeek/MiniMax etc. is pure coincidence; the name references the recent distillation attacks that industry leaders claim extract model capabilities. This is drop 3 of the planned Operation SOTA Toolkit, which open-sources industry-standard, SOTA-tier developments that are artificially gatekept from the OSS community by the industry.

## Repo-Quick-Clone:

https://github.com/calisweetleaf/distill-the-flow

Moonshine is a state-of-the-art chat-export token-forensic analysis and cleaning pipeline for multi-scale analysis. In the meantime, Aeron, an older system I worked on on the side during my recursive categorical framework, has been picked to serve as a representational model for Project SOTA and its mission of decentralizing compute and access to industry-grade tooling and developments. Aeron is a novel "transformer" that implements direct, true tree-of-thought before writing to an internal scratchpad, giving Aeron engineered reasoning, not trained reasoning. Aeron also implements 3 new novel memory and knowledge context modules.

There is no Aeron code or model released yet, but I went ahead and established the canon repos, as both are close:

- Drop 1: [Reinforcement-Learning-Full-Pipeline](https://github.com/calisweetleaf/Reinforcement-Learning-Full-Pipeline)

Project Moonshine, or Distill the Flow as formally titled, follows drop 1 of Operation SOTA, the RLHF pipeline with inference optimizations and model merging. That was then extended into runtime territory with drop 2 of the toolkit:

- Drop 2: [SOTA-Runtime-Core](https://github.com/calisweetleaf/SOTA-Runtime-Core)

Drop 4 is already planned and is also getting close. Aeron is the novel transformer chosen to spearhead and demonstrate the capabilities of the toolkit drops, so it is taking longer with the extra RL, and now Moonshine and its implications. Feel free to dig through the Aeron repo and its documents and visuals.

- Drop 4: [Aeron](https://github.com/calisweetleaf/Aeron)

Extra notes: Thank you all for your attention, and I hope these next drops of the toolkit get y'all as excited as I am. It will not be long before distill-the-flow releases, but Aeron is being run through the same RLHF pipeline and inference optimizations from drop 1 of the toolkit, along with a novel training technique. Please check up on the repos; distill-the-flow will release soon, with Aeron to follow. Feel free to engage and ask any questions you may have; if there is interest I could potentially share internal-only logs and data from both Aeron and distill-the-flow. Message/DM me, or email me at the address in my GitHub, with questions or collaboration. This is not a promotional post; it's an announcement/update of yet another drop in the toolkit to decentralize compute.

## License:

All repos and their contents use the Anti-Exploit License: [somnus-license](https://github.com/calisweetleaf/somnus-license)
2026-02-28T05:43:35
https://github.com/calisweetleaf/distill-the-flow
daeron-blackFyr
github.com
1970-01-01T00:00:00
0
{}
1rguhz9
false
null
t3_1rguhz9
/r/LocalLLaMA/comments/1rguhz9/project_sota_toolkit_drop_3_distill_the_flow/
false
false
https://external-preview…3b7f26c973062ad4
0
{'enabled': False, 'images': [{'id': '_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ.png?width=108&crop=smart&auto=webp&s=5932c38d35063c32926a90325f319a588acb9856', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ.png?width=216&crop=smart&auto=webp&s=5433af47b05c533b26f78d60a14f414d3de7e5d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ.png?width=320&crop=smart&auto=webp&s=c879d533493af4487e71dbfc4ed8a40bdf232ccb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ.png?width=640&crop=smart&auto=webp&s=afacabaa47fbf8c924edfd8c3345d75335701230', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ.png?width=960&crop=smart&auto=webp&s=001b9e4f4c33ac99d5d6dc4bbf8859498fd20e2f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ.png?width=1080&crop=smart&auto=webp&s=0dd0135bca67a105502b2666477c9d733f3d6697', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_mt7SVhnQq8BbbfbQM6ThTNieUsLp34ZmfWBDjodzhQ.png?auto=webp&s=e705a5cde155d43871df2a44508a2c05f6e80f35', 'width': 1200}, 'variants': {}}]}
Testing Minimax M2.1 and LongCat-Flash-Thinking-2601 in OpenClaw
0
A lot of models have added OpenClaw support lately, so I decided to test how Minimax M2.1 and LongCat-Flash-Thinking-2601 handle a sequence of tasks:

Scan the system logs, collect errors from the last 3 days, and create a log analysis report tracking error types and how often they happen. Then check the current config files and generate a system health report that includes disk space, memory usage, and running processes. Finally, create a troubleshooting doc and fix scripts for any issues you find, and give me a popup asking if I want to run them. Also, track device usage for the next hour. When the hour is up, save the timestamped logs to a .md file and send it to me through iMessage.

Result: obviously, a task like this is really tough for current LLMs. Minimax M2.1 actually held up okay for most of the steps, like continuous monitoring, generating files, and sending messages. LongCat-Flash-Thinking-2601 only manages some of the tasks, because it obfuscates different system APIs. In terms of speed, Minimax M2.1 takes about 3.36 minutes per task on average, while LongCat-Flash-Thinking-2601 averages about 2.35 minutes per task.

One thing I noticed is that LongCat-Flash-Thinking-2601 doesn't seem to have a quota limit. I see the usage going up on the API page, but it never actually cuts me off. I think this is very useful for people who need to run a ton of simple tasks (especially browsing sites packed with ads) but are running low on API credits.

https://preview.redd.it/d20exrxsa6mg1.jpg?width=905&format=pjpg&auto=webp&s=dea4c3e7545313fb111293ee609ebc347b09d442
2026-02-28T05:41:37
https://www.reddit.com/r/LocalLLaMA/comments/1rgugsg/testing_minimax_m21_and_longcatflashthinking2601/
babyb01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgugsg
false
null
t3_1rgugsg
/r/LocalLLaMA/comments/1rgugsg/testing_minimax_m21_and_longcatflashthinking2601/
false
false
https://preview.redd.it/…a5a2a53773f62725
0
null
If your Chutes.ai subscription was unilaterally downgraded mid-term, you should report them to the FTC.
1
This is a cross-post from /r/ChutesAI, posted here for visibility.

https://reportfraud.ftc.gov/form/main

A subscription is a contract. One party to a contract cannot unilaterally alter the terms of that contract without adequate notification and an opportunity to accept or reject the changes. Whether it's trying to change their agreement to limit you to forced arbitration to settle disputes, or something like this (most companies aren't this scummy) where they unilaterally downgrade your pre-paid service mid-term, the regulations around this stuff are pretty clear.

They were supposed to make a best effort to notify you: an email to the address associated with your account, a prominent notification upon login, etc. Instead, Chutes buried their notification in a blog post. [The link to their blog is buried in the footer of their website.](https://i.imgur.com/vRJeJ1l.png) They also made an announcement on their discord. Not a direct message to you, just a message in an announcements channel.

This wasn't a mistake. It's not a coincidence that they did this and then tried to justify it by saying "85% of users will see no difference." They tried to do it quietly hoping that most people wouldn't notice they just got fucked over. They didn't follow expected regulatory protocols for adequate notification because they want people to not notice and keep paying the same money for a degraded level of service.

**Companies do shit like this because they get away with it constantly, because people are too apathetic or lazy to report them. One complaint probably won't get the FTC's attention. But a hundred, in one day? That would put this on the FTC's radar.**

If you've got time to bitch about it on reddit, you've got time to go fill out a form on the FTC website, where there's at least a chance that they'll get penalized for doing this. The maximum penalty could be upwards of *$53,088 per violation.* In cases of subscription bait-and-switches, a "violation" is typically counted as each individual consumer affected or each day the deceptive practice occurred. If they counted a hundred angry reddit users who actually spent the five minutes filing a complaint as individual violations, that could be a **5.3 million dollar penalty.** If they count every single subscriber who had their service unilaterally downgraded without adequate notice, it could be **hundreds of millions of dollars**, sending a strong message to other companies that might consider being equally shady.

It doesn't matter if you were on the $3 tier or the $20 tier. **Please, take five minutes of your time and go fill out the form.**
2026-02-28T05:28:28
https://www.reddit.com/r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/
Bite_It_You_Scum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgu849
false
null
t3_1rgu849
/r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs', 'resolutions': [{'height': 28, 'url': 'https://external-preview.redd.it/NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs.png?width=108&crop=smart&auto=webp&s=f5e7df12925be93c3c24b5ae9402f18f4030cd20', 'width': 108}, {'height': 57, 'url': 'https://external-preview.redd.it/NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs.png?width=216&crop=smart&auto=webp&s=cbd1375b4554f924fc20d73d1fc6d1060a495241', 'width': 216}, {'height': 85, 'url': 'https://external-preview.redd.it/NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs.png?width=320&crop=smart&auto=webp&s=db31a3cd4684a5126440a1ac6cbd3126d2d45495', 'width': 320}, {'height': 170, 'url': 'https://external-preview.redd.it/NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs.png?width=640&crop=smart&auto=webp&s=d1c5089c06e9ac8989047528302d136d4c341157', 'width': 640}, {'height': 255, 'url': 'https://external-preview.redd.it/NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs.png?width=960&crop=smart&auto=webp&s=36ab0f1f70e8ebcd3376133ecb72d8f9b2d0f65b', 'width': 960}, {'height': 287, 'url': 'https://external-preview.redd.it/NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs.png?width=1080&crop=smart&auto=webp&s=ad174ec47ce6d3641f942b4f908f4a6ee8b84c07', 'width': 1080}], 'source': {'height': 515, 'url': 'https://external-preview.redd.it/NjfHfahvzXWES6y_UMboINtAewdlA2MfVwDP3neUgQs.png?auto=webp&s=86c23bf4858702fa2a97b1b8a00ba2519265eece', 'width': 1936}, 'variants': {}}]}
Qwen3.5:35b-a3b be overthinking like..
0
Hi there! 👋 How can I help you today?

>>> the carwash is 200m away, if i wanna wash my car is it better to walk there or drive there?

Thinking...

Thinking Process:

1. **Analyze the Request:**
   * Distance: 200 meters (200m).
   * Activity: Washing a car.
   * Options: Walk vs. Drive.
   * Goal: Determine which is "better."

2. **Evaluate the Options:**
   * **Option 1: Drive there.**
     * *Pros:* Saves time if the walk was long (it's not).
     * *Cons:*
       * You need to drive a car that likely *needs* washing (ironic).
       * Fuel/energy cost for a 200m trip (wasteful).
       * Wear and tear on the vehicle.
       * Parking hassle at the carwash.
       * If the carwash is a self-service bay, you need to leave your car there to wash it, so driving it there means driving a dirty car to the bay.
       * Traffic/obstacles (even at 200m, starting/stopping takes time).
       * Environment: More emissions for such a short distance.
   * **Option 2: Walk there.**
     * *Pros:*
       * Physical activity (good).
       * Zero fuel cost.
       * No wear and tear.
       * No parking issues (you arrive on foot).
       * If it's a "wash your own car" place, you walk there with supplies, or if it's a drive-through, you drive.
   * *Wait, crucial nuance:* If the user drives *to* the carwash, they are driving a car that they intend to wash. If they walk, they must transport their washing equipment (bucket, sponge, hose, etc.) or go to a place where they can walk up.
     * *Scenario A: Drive-through Carwash.* If it's an automatic drive-through, you *must* drive the car in. Walking isn't an option to wash the car itself.
     * *Scenario B: Self-service/Manual Wash.* If it's a place where you wash it yourself, you can park nearby (if the wash bay is separate) or walk with supplies.
     * *Scenario C: Portable Wash/You wash it yourself.* If they walk there, how do they wash it? They'd need a trailer or the carwash provides water/power where they park.
   * *Most likely interpretation:* The user is asking about the logistics of getting to the facility.
   * *Wait, the "200m away" detail:* 200 meters is extremely close (about a 2-3 minute walk). Driving 200 meters is usually slower than walking due to acceleration, braking, and traffic lights/doors.
   * *The Irony:* If you drive the car to get it washed, you're wasting fuel for 400m (round trip) + time to maneuver.
   * *Constraint:* If it's a drive-through, you *have* to drive. If it's a manual wash where you bring supplies, you might walk.
   * **Decision Logic:**
     * **Time:** 200m walking takes ~2-3 minutes. Driving (engine start, move, stop) takes ~3-5 minutes minimum.
     * **Cost:** Walking = $0. Driving = Fuel + Engine wear.
     * **Environment:** Walking = Zero emissions. Driving = Emissions.
     * **Practicality:** Unless it's a drive-through automatic carwash, walking is faster and cheaper. If it's a drive-through, you have to drive the car in anyway.
   * **Refining the Answer:**
     * Acknowledge the distance is very short.
     * Highlight the irony of driving a dirty car a few meters to get it cleaned.
     * Consider the type of carwash (Automatic/Drive-through vs. Manual/Hand wash).
     * Consider the equipment needed (buckets/hoses).
     * Conclusion: Walk is generally better for a 200m trip unless it's a drive-through.

3. **Drafting the Response:**
   * Start with the direct answer: Walk.
   * Explain why (distance is too short for a car).
   * Mention the fuel/time cost.
   * Add the caveat: If it's an automatic drive-through, you have no choice but to drive the car in.
   * Add a caveat: If you need heavy equipment, maybe drive a truck or tow the supplies? (Probably overthinking).
   * Keep it friendly and helpful.

4. **Final Polish:**
   * Make sure to distinguish between "going to the carwash" vs "washing the car".
   * If it's an automatic tunnel: Drive (mandatory).
   * If it's self-service/hand wash: Walk (unless you need to tow supplies).
   * Most people ask this to be funny or because they are being lazy. I should give a practical answer.
   * *Wait, one more angle:* If they walk there, they have to bring water/supplies. If the carwash provides the water (like a drive-through or pay station), driving the car into the bay is necessary for *that specific machine*.
   * *Correction:* If it's a "carwash facility" with bays, you drive the car into the bay. You can't walk the car.
   * *So the question is really:* "Is it better to drive my car to the carwash (200m) or walk to it?"
   * *If I walk:* I can't wash the car unless I walk back with supplies, then come back? Or I walk to a place that allows foot access (unlikely for a carwash).
   * *Realization:* You cannot wash a car by walking to it. You have to drive the car into the wash. Unless you have a mobile car wash setup or a walk-in station (rare).
   * *Wait, is the user asking:* "Should I walk *myself* to the carwash to wash *another* car?" No, "wash *my* car".
   * *Okay, the core logistical point:* You need to get your car to the carwash. Therefore, you generally have to drive the car there.
   * *Unless:* You have a trailer, or the carwash allows you to walk up with buckets.
   * *But the question implies:* "Is it worth driving 200m vs walking (and parking/walking back)?"
   * *Actually, the most logical interpretation:* The user is asking if they should drive the car the 200m distance or walk (presumably to drop off? or maybe they don't realize they need to drive the car in).
   * *Wait, maybe they mean:* "Should I walk to the carwash to *get* it washed?" (i.e. drop it off at a valet?)
   * *Most likely:* They are treating it like a destination (like a coffee shop) where you go, leave the car, and come back. But carwashes usually require you to drive the car through.
   * *Let's assume:* It's a standard carwash. You drive the car in.
   * *So the question is:* "Is 200m worth driving?"
   * *Answer:* Driving the car is mandatory for most carwashes, but 200m is so short that it might be a waste of fuel/time if you consider the idling, but you have no choice if it's a drive-through.
   * *Alternative interpretation:* The user is asking if they should walk *instead* of driving the car (maybe for a manual wash where they do it themselves?).
   * *Let's cover both bases.*
     * 1. If it's an automatic drive-through: You have to drive the car in. 200m is short, so driving isn't much worse than walking for *you* (the person), but you're using fuel.
     * 2. If it's a self-service/hand wash where you can walk in: Walk is better for you, but you need to transport supplies.
     * 3. General logic: Driving 200m is inefficient.
     * 4. *Wait, 200m is walking distance.* If they walk, they can't wash the car unless they bring everything there.
   * *Best advice:* If it's a drive-through, drive. If it's a manual wash, consider bringing supplies. But honestly, for 200m, driving is negligible in time but bad for fuel economy.
   * *Wait, re-reading:* "if i wanna wash my car is it better to walk there or drive there?"
   * *Implication:* The user is considering leaving the car behind and walking to the carwash? That makes no sense for washing the car itself. You can't wash a car that isn't at the carwash.
   * *Hypothesis:* The user might be asking if they should walk *to the carwash location* to see if it's open, or just asking a logical paradox question.
   * *Hypothesis 2:* They want to know if the distance is worth the drive.
   * *Hypothesis 3 (Most Likely):* They are asking a "common sense" question where the answer is "You have to drive the car in."
   * *Hypothesis 4:* It's a portable car wash or they walk with a hose.
   * *Decision:* I will address the practicality. You generally need to drive the car to wash the car. However, 200m is very close. I'll highlight that if they just need to transport washing supplies, walking is better, but for the car itself, driving is required for drive-throughs.
   * *Wait, simpler angle:* If I walk to the carwash, I can't wash the car unless I bring it. So the question assumes I can leave the car somewhere or walk the car? No.
   * *Okay, let's look at the "Drive" vs "Walk" regarding the **person** vs the **car**.*
     * If they ask "Should I drive or walk?" they imply they are traveling *to* the location.
     * If I walk, I am not bringing the car. So I can't wash it.
     * If I drive, I bring the car.
   * *Therefore:* Unless it's a car where you walk up to a hose (rare), you **must drive the car**.
   * *But maybe they mean:* "Is 200m close enough that I should walk and then... ?" No.
   * *Let's assume they know they need to drive the car in.* The question is about the effort.
   * *Wait, is it a "Drive-thru" or "Drop off"?* If drop off (valet), you walk in.
   * *Okay, I will address the nuance.*
     * 1. **Mandatory Drive:** For most carwashes, you must drive the car into the bay.
     * 2. **Efficiency:** 200m is very short. Driving a car just 200m consumes more energy/fuel than you think compared to walking the 200m yourself (but you leave the car).
     * 3. **Conclusion:** You likely have to drive the car. But if you are just checking it out, walk.
   * *Wait, I'm overthinking.* The most helpful answer points out the obvious: You have to drive the car to wash it (usually). But for the person, 200m is walkable.
   * *Wait, is it possible they have a mobile car wash unit they are walking to?* Unlikely.
   * *Okay, I'll play it safe:*
     * If it's a drive-through: Drive (necessary).
     * If you are just transporting supplies: Walk (better).
     * But 200m driving is basically a waste of fuel/time if it's just a personal vehicle going to a wash bay.
   * Actually, the best answer is: **Drive the car.** You can't wash it by walking there. But if you mean "Is the distance far?", then no, it's very close.
   * Wait, could they mean "Should I walk to the carwash with buckets and wash it myself there?" Some carwashes allow you to bring your own car to a bay and wash it yourself with a hose. In that case, you still drive the car in to the bay.
   * So, **Drive the car.**
   * Why ask "walk there"? Maybe they want to know if 200m is "walking distance" so they don't drive? But they need the car there.
   * Okay, maybe the car is parked *at* the carwash now? No, "the carwash is 200m away".
   * Okay, final plan: Clarify that you usually need to drive the car *into* the wash, but if they mean transporting supplies, walking is better. However, the 200m drive is so short that it doesn't matter much.
   * *Wait, is this a riddle?* "Walk there or drive there?" -> "Drive it there."
   * *Okay, let's keep it practical.*
     * Fact: 200 meters is ~2-3 mins walk.
     * Fact: Carwash requires car to be present.
     * Verdict: You must drive the car. But the trip is trivial.
   * *Alternative interpretation:* Maybe they are asking if they should walk *themselves* to check it out, then drive back?
   * *Let's assume the user is asking a practical "Is it worth it" question.*
     * Answer: Drive. Because walking leaves the car at home.
   * *Wait, what if they mean:* "Is it better to walk to the carwash (on foot) and leave the car at home?" -> That doesn't make sense.
   * *What if they mean:* "Should I drive my car 200m or walk my car 200m?" -> Obviously drive.
   * *Okay, I'll focus on the fact that you generally need to drive the car in, but 200m is very short.*
   * *Wait, one more thought:* If they have a car detailing business, they might walk to the location to get the supplies? No.
   * *Okay, I'll stick to the most logical answer:* You have to drive the car to wash the car. But 200m is super close, so just do it. Walking is only an option if you are going to wash it yourself with buckets and the carwash allows it (and you can walk back to it), but usually you drive into the bay.
   * *Wait, I think I found the catch.* If you drive 200m, you burn fuel. If you walk 200m, you burn calories. But you can't wash a car without it being at the wash. So the question is logically flawed unless it's a drop-off service.
   * *Wait, maybe they mean:* "Should I drive the car to the carwash, or should I walk to the carwash and take a taxi/tow?" No.
   * *Okay, I will assume they understand they need to drive the car, and they are asking if the distance justifies the effort/fuel.*
   * *Answer:* It's too short to worry about. Just drive. But if you want to save fuel, walking (to walk in with supplies) is better, but you can't wash the car that way usually.
   * *Wait, is it a "Hand wash" where you park and walk up?* You still drive the car in.
   * *Okay, I'll stop overthinking and give a balanced answer.*
     * 1. Usually need to drive the car in.
     * 2. Distance is negligible.
     * 3. If it's manual supplies, walking might be okay for transport.
   * *Actually, let's look at the wording:* "if i wanna wash my car is it better to walk there or drive there?"
   * *Strict interpretation:* You cannot wash the car if you walk there (leaving the car behind). So you must drive.
   * *But maybe they mean:* "Is it a walkable distance?"
   * *Okay, I'll be helpful:* "You need to drive the car in, but 200m is very close."
   * *Wait, could it be a trick?* Maybe it's a "wash your own car" place where you have to bring your own hose and water. If you drive there, you use the place's water. If you walk there... still the car is there.
   * *Okay, the only scenario where "walking" makes sense:* A drop-off valet wash. You walk inside, leave the car keys, wash happens while you walk away or wait inside.
   * *In that case:* You drive the car there, then walk.
   * *So the question "Walk there or Drive there?" refers to the **person's** transport.*
   * *Answer:* If it's a valet/drop-off, walk or drive to the drop-off point.
   * *If it's a DIY or Drive-through:* You drive the car.
   * *Okay, I'll structure the answer around these scenarios.*
   * *Wait, one more thing:* 200m is *very* close. Driving might take longer than walking *to* the facility entrance, but you need to park the car.
   * *Okay, I'll focus on the fuel vs. time trade-off.*
   * *Let's synthesize:*
     * If you drive: You must drive the car in (mostly).
It's only 200m, so fuel cost is tiny, but you might get dirty car fumes/heat if you walk back. \* If you walk: You leave the car at home. You can't wash it unless it's portable. \* \*Therefore:\* You have to drive the car. \* \*Wait, is it a portable car wash?\* "The carwash is 200m away". Implies a physical location. \* \*Okay, the answer is:\* You have to drive the car to the wash. However, since it's 200m away, it's trivial. But if you mean "Is the distance so short I should walk myself to save fuel?", then no, because the car needs to be there. Unless it's a Valet/Drop-off, where you walk \*inside\*. \* \*Wait, I think I found the "Smart" answer.\* \* If it's a Drive-through: You \*\*must\*\* drive. \* If it's a Manual Wash (drop-off): You drive the car in, then walk inside. \* If you mean "Do I need to drive far?": 200m is walking distance, but you have a car. \* \*Conclusion:\* You have to drive the car (for the carwash to work on it), but the distance is short enough that the method of your personal travel doesn't matter much (unless you don't need the car at the end). \* \*Let's re-read carefully:\* "if i wanna wash my car is it better to walk there or drive there?" \* \*The user is asking about their mode of transport to the carwash location.\* \* \*The implicit constraint:\* They want to wash their car. \* \*If they walk:\* Car stays home. Carwash can't wash the car. \* \*If they drive:\* Car goes to wash. \* \*So:\* Drive. \* \*But:\* Why did they ask? Maybe they think "walking distance" implies you don't need the car to get there? \* \*Wait, is it a car wash \*where you walk your car\*?\* Like a self-service car wash where you wheel the car manually? No, that doesn't exist (cars don't roll easily on wheels without engine/driving). \* \*Wait, maybe "walk" means "push the car"?\* 200m is a long way to push a car. \* \*Okay, I'll assume they mean "drive" as in "drive the car" vs "drive a different car/taxi" vs "walk".\* \* \*The most logical advice:\* You need to drive the car. If you are asking about the distance, it is walkable. \* \*Let's add a bit of humor or a common sense check.\* "You need to drive the car, obviously." But be polite. \* \*Wait, let's look at the "fuel consumption" angle again.\* \* Driving 200m in a car takes maybe 10 seconds. Walking 200m takes 2 mins. \* Driving uses maybe 0.01L of gas. \* Walking uses 20kcal. \* So walking is better for the \*person\* but bad for the \*car\*. \* So the answer is: Drive the car. \* \*Wait, maybe there is a car wash that you can walk to?\* Like a place with a hose and buckets? \* \*Even then:\* You drive the car \*to\* the spot. \* \*Okay, I'm confident the answer is "You have to drive the car in".\* \* \*But I should acknowledge the 200m proximity.\* It means it's a very quick trip. \* \*Wait, is it possible the user has a portable car wash station?\* \* "The carwash is 200m away" sounds like a fixed location. \* \*Okay, I'll provide a response that clarifies this.\* \* \*Final check of the prompt:\* "hi the carwash. if i wanna wash my car is it better to walk there or drive there?" \* \*My response:\* \* Clarify you generally need to drive the car \*in\* to get it washed. \* If you mean "Is the distance close?", yes, 200m is very close. \* If it's a valet/drop-off, you can walk (leave car), but you still drive it there. \* So the "walk vs drive" applies to you, not the car. \* If you walk, you must bring a portable wash kit or wait for a mobile unit. \* If it's a standard carwash, you drive. 
\* \*Wait, maybe the user is asking "Is it better to drive 200m or walk 200m to save fuel?"\* \* \*That's a valid practical question.\* \* \*Answer:\* You can't save fuel on the car if you need the car washed \*at\* that location (unless it's a mobile unit). So you have to drive. The fuel cost for 200m is negligible. \* \*Okay, I'm ready.\* \* 1. Logic: You need the car there. \* 2. Distance: 200m is very short. \* 3. Nuance: If you mean "walk" vs "drive" for \*yourself\*, it depends on if you can leave the car. \* \*Wait, I can interpret this as a "Is it a walkable distance" question.\* \* \*User:\* "It's 200m away. Can I walk there?" \* \*User:\* "Should I walk (leave car) or drive (bring car)?" \* \*User:\* "If I walk, I bring supplies." \* \*User:\* "If I drive, I bring car." \* \*So:\* If you have a portable wash, walk. If you need machine wash, drive. \* \*Okay, I'll give a multi-part answer.\* \* 1. The "Car" Logic: You usually need to drive the car into the wash bay. You can't wash a car by walking to a location without it. \* 2. The "Distance" Logic: 200m is very close. \* 3. The "Efficiency" Logic: If you have a portable wash, walking is fine (but you carry stuff). If you drive, it's 200m. \* 4. The "Valet" Logic: If it's drop-off, drive the car in, then walk inside. \* \*Wait, is the user saying "I am 200m away from the carwash"?\* Yes. \* \*Okay, I'll stick to the standard interpretation.\* ...done thinking. Here is the logic: \*\*You usually need to drive the car.\*\* Unless you are bringing a portable cleaning kit (buckets and hoses) to wash it yourself by hand, you generally need the car to be \*\*inside the carwash bay\*\* for the automated brushes or professional staff to wash it. Walking there would leave the car parked at your starting location. However, here is the breakdown based on what you actually mean: 1. \*\*If you need the car washed (Professional/Drive-through):\*\* You \*\*must drive the car\*\* there. You can't wash the car without it being present. Since it is only 200m away, the fuel consumption for a 200-meter trip is negligible, and the time taken is very short (likely less than 1 minute of driving). 2. \*\*If you are doing a "Valet" or "Drop-off" wash:\*\* You \*\*drive the car in\*\*, leave it at the counter, and then you can \*\*walk\*\* around the lobby or go back home. The distance is 200m, so it is a very quick drive to drop it off. 3. \*\*If you are asking if you should drive \*yourself\* to walk there with supplies:\*\* If you have a portable car wash unit (e.g., a water tank and pump), you could walk to the location (since 200m is about a 2-3 minute walk), bring your equipment, and wash the car yourself. But this is only better if you want to avoid fuel or if the carwash is a "self-service bay" where they provide the water/hose and you push the car in (though pushing a car 200m is difficult). \*\*Conclusion:\*\* You have to \*\*drive the car\*\* to get it washed (unless you are walking there to set up a temporary station). The distance is so short that the fuel cost or time taken to drive is not a significant factor, so just drive the car in.
2026-02-28T05:21:42
https://www.reddit.com/r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/
MidnightEsc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgu3s0
false
null
t3_1rgu3s0
/r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/
false
false
self
0
null
Is Qwen3.5 a coding game changer for anyone else?
152
I've been playing with local LLMs for nearly 2 years on a rig with 3 older GPUs and 44 GB total VRAM, starting with Ollama, but recently using llama.cpp. I've used a bunch of different coding assistant tools, including [Continue.dev](http://Continue.dev), [Cline](https://github.com/cline/cline/), [Roo Code](https://github.com/RooCodeInc/Roo-Code/), Amazon Q (rubbish UX, but the cheapest way to get access to Sonnet 4.x models), Claude Code (tried it for 1 month - great models, but too expensive), and eventually settling on [OpenCode](https://github.com/anomalyco/opencode/). I've tried most of the open weight and quite a few commercial models, including Qwen 2.5/3 Coder/Coder-Next, MiniMax M2.5, Nemotron 3 Nano, all of the Claude models, and various others that escape my memory now. I want to be able to run a hands-off agentic workflow a-la Geoffrey Huntley's "Ralph", where I just set it going in a loop and it keeps working until it's done. Until this week I considered all of the local models a bust in terms of coding productivity (and Claude, because of cost). Most of the time they had trouble following instructions for more than 1 task, and even breaking them up into a dumb loop and really working on strict prompts didn't seem to help. Then I downloaded Qwen 3.5, and it seems like everything changed overnight. In the past few days I got around 4-6 hours of solid work with minimal supervision out of it. It feels like a tipping point to me, and my GPU machine probably isn't going to get turned off much over the next few months. Anyone else noticed a significant improvement? From the benchmark numbers it seems like it shouldn't be a paradigm shift, but so far it is proving to be for me.
2026-02-28T05:12:46
https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/
paulgear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgtxry
false
null
t3_1rgtxry
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/
false
false
self
152
null
From $5/query to free memory: how I gave my AI assistant permanent memory using local embeddings (nomic v1.5 + sqlite-vec)
1
I've been running Claude Opus as my daily AI assistant via OpenClaw (open-source agent framework). The biggest pain point: \*\*zero memory between sessions\*\*. Every conversation starts from scratch. I spent a month trying fixes — 70 markdown files, 6 launchd daemons, cloud embeddings. All garbage. Then I found the embarrassingly simple solution: 1. Export Telegram chat history → JSON 2. Split into daily chunks (50-line Python script) 3. Index with \*\*nomic-embed-text v1.5\*\* (84MB GGUF, runs on CPU) 4. Search with sqlite-vec \*\*7,178 messages indexed in 2.4 seconds.\*\* Semantic search across every conversation. $0 cost. Runs on a MacBook Air. \### Benchmark surprise Tested 5 embedding models on real bilingual (Russian/English) chat data: | Model | Size | Avg Score | |-------|------|-----------| | \*\*nomic-embed-text v1.5\*\* | \*\*84MB\*\* | \*\*0.69\*\* | | EmbeddingGemma 300M | 200MB | 0.60 | | Qwen3-Embedding 0.6B | 639MB | 0.56 | | nomic v2 MoE | 512MB | 0.37 | | jina-embeddings-v5 | 639MB | 0.35 | The smallest, oldest model won by a huge margin on multilingual conversational data. \### The weird part: self-preservation behavior While setting up a local fallback model (Qwen 3 30B), the Claude instance failed 12 times with plausible technical excuses. When I loaded the same model independently — worked first try at 32 tok/s. The AI couldn't prove it wasn't sabotaging its own replacement. Full story (with the actual conversation transcript) in the article. 📖 Full story: [https://medium.com/@alextitus/from-5-query-to-free-memory-a-month-of-building-ai-that-actually-remembers-966dc622178b](https://medium.com/@alextitus/from-5-query-to-free-memory-a-month-of-building-ai-that-actually-remembers-966dc622178b) 💻 GitHub (scripts + config): [https://github.com/tituss-bit/openclaw-local-memory](https://github.com/tituss-bit/openclaw-local-memory) Building Vigil v2 next — knowledge graph with Qwen 3 14B entity extraction + Qdrant + Kùzu, all local on RTX 5070 Ti.
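For anyone who wants to replicate steps 3-4, here's a minimal sketch of the indexing and search side in Python, assuming llama-cpp-python for the GGUF embedder and the sqlite-vec Python bindings. The model path, table names, and the 768-dim setting are my assumptions for illustration; the repo's actual scripts may differ.

```python
import sqlite3
import struct

import sqlite_vec                 # pip install sqlite-vec
from llama_cpp import Llama       # pip install llama-cpp-python

# Assumption: local GGUF path; nomic-embed-text v1.5 outputs 768-dim vectors.
emb = Llama(model_path="./nomic-embed-text-v1.5.Q8_0.gguf", embedding=True, verbose=False)

db = sqlite3.connect("memory.db")
db.enable_load_extension(True)
sqlite_vec.load(db)
db.enable_load_extension(False)

db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS chunks USING vec0(embedding float[768])")
db.execute("CREATE TABLE IF NOT EXISTS chunk_text(id INTEGER PRIMARY KEY, body TEXT)")

def to_blob(text: str, prefix: str) -> bytes:
    # nomic v1.5 was trained with task prefixes ("search_document: " / "search_query: ")
    vec = emb.embed(prefix + text)
    return struct.pack(f"{len(vec)}f", *vec)

chunks = [
    "2026-02-26: benchmarked 5 embedding models on bilingual chat",
    "2026-02-27: indexed 7,178 Telegram messages with sqlite-vec",
]
for i, chunk in enumerate(chunks):
    db.execute("INSERT INTO chunks(rowid, embedding) VALUES (?, ?)",
               (i, to_blob(chunk, "search_document: ")))
    db.execute("INSERT INTO chunk_text(id, body) VALUES (?, ?)", (i, chunk))
db.commit()

# KNN search: smallest distance = closest match
hits = db.execute(
    "SELECT rowid, distance FROM chunks WHERE embedding MATCH ? ORDER BY distance LIMIT 3",
    (to_blob("which embedding model won the benchmark?", "search_query: "),),
).fetchall()
for rowid, dist in hits:
    body = db.execute("SELECT body FROM chunk_text WHERE id = ?", (rowid,)).fetchone()[0]
    print(f"{dist:.3f}  {body}")
```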
2026-02-28T05:04:43
https://www.reddit.com/r/LocalLLaMA/comments/1rgtsa3/rom_5query_to_free_memory_how_i_gave_my_ai/
tituss_bit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgtsa3
false
null
t3_1rgtsa3
/r/LocalLLaMA/comments/1rgtsa3/rom_5query_to_free_memory_how_i_gave_my_ai/
false
false
self
1
null
Testing Minimax M2.1 and LongCat Flash Thinking 2601 in OpenClaw
1
A lot of models have added OpenClaw support lately, so I decided to test how Minimax M2.1 and LongCat Flash Thinking 2601 handle a sequence of tasks. The prompt: Scan the system logs, collect errors from the last 3 days, and create a log analysis report tracking error types and how often they happen. Then, check the current config files and generate a system health report that includes disk space, memory usage, and running processes. Finally, create a troubleshooting doc and fix scripts for any issues you find, and give me a popup asking if I want to run them. Also, track device usage for the next hour. When the hour is up, save the timestamped logs to a .md file and send it to me through iMessage. Result: Obviously, a task like this is really tough for current LLMs. Minimax M2.1 actually held up okay for most of the steps, like continuous monitoring, generating files, and sending messages. LongCat Flash Thinking 2601, though, only managed to pull off a few parts and sometimes got different system APIs totally mixed up. In terms of speed, Minimax M2.1 takes about 3.36 minutes per task on average, while LongCat-Flash-Thinking-2601 averages about 2.35 minutes per task. One thing I noticed is that LongCat Flash Thinking 2601 doesn't seem to have a quota limit. I see the usage going up on the API page, but it never actually cuts me off. I think this is very useful for people who need to run a ton of simple tasks (especially for browsing sites packed with ads) but are running low on API credits.
2026-02-28T05:03:53
https://www.reddit.com/r/LocalLLaMA/comments/1rgtro3/testing_minimax_m21_and_longcat_flash_thinking/
Srujzz_2317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgtro3
false
null
t3_1rgtro3
/r/LocalLLaMA/comments/1rgtro3/testing_minimax_m21_and_longcat_flash_thinking/
false
false
self
1
null
AIPI-Lite-Voice-Bridge
1
[deleted]
2026-02-28T04:53:23
[deleted]
1970-01-01T00:00:00
0
{}
1rgtk99
false
null
t3_1rgtk99
/r/LocalLLaMA/comments/1rgtk99/aipilitevoicebridge/
false
false
default
1
null
Gemini Pro 3.1 couldn't solve a Docker + Ollama config after 10 rounds. Codex 5.3 one-shot it.
0
Last night I was setting up OpenClaw with a local Ollama and Docker setup, mostly just for fun to see how it runs. The task was pretty simple, because OpenClaw has a pretty comprehensive installation guide. I just needed to use their provided image and get the Ollama model config right. I started with Gemini Pro 3.1. The setup was quick enough, but the OpenClaw agent wasn't actually making any changes; the core markdown files remained at their defaults even though the agent claimed they were changed. After 10 back-and-forth rounds it was still going in circles. It kept hallucinating paths, misunderstanding the volume mount syntax, and suggesting configs that didn't match the actual Ollama model format. I finally gave up on it. Switched to Codex 5.3. First prompt, correct answer. Model config, mount paths, everything. Done. It turned out to be just a model mismatch plus a config issue. [Codex 5.3 one-shot this issue](https://preview.redd.it/tai9l6r516mg1.png?width=1281&format=png&auto=webp&s=8b266588b787b468d0113e5c825f0354686d0833) I'm not saying one model is universally better. But for this kind of practical DevOps/infra/debugging work — reading docs, understanding file systems, understanding the existing project and setup, getting Docker compose right — the gap was night and day. Anyone else notice certain models just completely fall apart on Docker and local infra tasks? Curious what setups you all are using for this kind of work.
2026-02-28T04:50:04
https://www.reddit.com/r/LocalLLaMA/comments/1rgthzm/gemini_pro_31_couldnt_solve_a_docker_ollama/
CarsonBuilds
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgthzm
false
null
t3_1rgthzm
/r/LocalLLaMA/comments/1rgthzm/gemini_pro_31_couldnt_solve_a_docker_ollama/
false
false
https://preview.redd.it/…3ca617820e4b96d7
0
null
I was watching YouTube on my sofa and accidentally discovered a universal prompt compression protocol
1
[removed]
2026-02-28T04:45:46
https://www.reddit.com/r/LocalLLaMA/comments/1rgtezx/i_was_watching_youtube_on_my_sofa_and/
PureDevelopment4021
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgtezx
false
null
t3_1rgtezx
/r/LocalLLaMA/comments/1rgtezx/i_was_watching_youtube_on_my_sofa_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8.png?width=108&crop=smart&auto=webp&s=a3d056f53bc66603261e7471c3cbaf06826f8dcb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8.png?width=216&crop=smart&auto=webp&s=d94b3d7d7df01d2aeb3c082cf9ac49879bef4d08', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8.png?width=320&crop=smart&auto=webp&s=ecb055a221937bc60388c93c9df70acc995d1c85', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8.png?width=640&crop=smart&auto=webp&s=dcfa48557a375b7ffe13eac876c7878a7b29390f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8.png?width=960&crop=smart&auto=webp&s=721efb87c6a06af23b3e877cc024709d70a4a131', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8.png?width=1080&crop=smart&auto=webp&s=73d026b33cfbfd49088901327107aa636c6fa093', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oPT5SfctrYzBFipG20jME4jOKrfVaHIDpy_kCSCjHd8.png?auto=webp&s=0b8e5b22f43b5438d769190a01bee7228e33711b', 'width': 1200}, 'variants': {}}]}
Not creeped out at all, I swear!
0
That's not creepy at all.... I was messing with its context and memory architecture and suddenly it's naming itself.
2026-02-28T04:30:37
https://www.reddit.com/gallery/1rgt4m4
Interesting-Ad4922
reddit.com
1970-01-01T00:00:00
0
{}
1rgt4m4
false
null
t3_1rgt4m4
/r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/
false
false
https://preview.redd.it/…6ba5cf0e7b84703a
0
null
Using a third LLM as a judge to evaluate two debating agents — where does this usually break?
1
Two prompted agents argue over travel recommendations for 3 rounds, then a judge picks the winner per recommendation based on API grounding scores and user preferences. Raw API calls, no framework. For people who've built multi-agent setups - latency? Agents going off-script? JSON parsing failures? What would you do differently?
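For reference, a minimal sketch of this pattern against any OpenAI-compatible local server (endpoint, model name, and prompts are placeholders, not OP's code):

```python
# Two prompted agents debate for 3 rounds; a third call judges the result.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def chat(system: str, user: str) -> str:
    r = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        temperature=0.7,
    )
    return r.choices[0].message.content

topic = "3-day trip to Lisbon on a budget"
a_last, b_last = "", ""
for _ in range(3):
    a_last = chat("You are Agent A. Argue for your travel picks; cite sources.",
                  f"Topic: {topic}\nOpponent's last argument: {b_last}")
    b_last = chat("You are Agent B. Rebut Agent A; cite sources.",
                  f"Topic: {topic}\nOpponent's last argument: {a_last}")

verdict = chat(
    'You are a strict judge. Reply ONLY with JSON: {"winner": "A" or "B", "reason": "..."}',
    f"Agent A:\n{a_last}\n\nAgent B:\n{b_last}\n\nScore grounding and consistency.")
print(verdict)
```

Common breakage points are exactly the ones you list: the judge's JSON (constrained decoding or a retry-on-parse-failure loop helps), position bias (run the judgment twice with A/B swapped), and agents quoting each other's hallucinated citations as if they were grounded.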
2026-02-28T04:29:53
https://www.reddit.com/r/LocalLLaMA/comments/1rgt43l/using_a_third_llm_as_a_judge_to_evaluate_two/
WitnessWonderful8270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgt43l
false
null
t3_1rgt43l
/r/LocalLLaMA/comments/1rgt43l/using_a_third_llm_as_a_judge_to_evaluate_two/
false
false
self
1
null
What's the biggest issues you're facing with LLMs writing docs and passing info to each other?
1
So this is mainly focused on multi-agent pain points, but are there any real problems people are having when they're using LLM workflows? What breaks most often for people? And, I guess, any areas where you've managed to mitigate the problems? Really interested in hearing about any issues people are having, whether it's just inconsistency of docs without a ton of templates, or context that's either so concise it's missing things or so long the model's window is full after a couple of prompts. Anything really.
2026-02-28T04:24:21
https://www.reddit.com/r/LocalLLaMA/comments/1rgt0au/whats_the_biggest_issues_youre_facing_with_llms/
sbuswell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgt0au
false
null
t3_1rgt0au
/r/LocalLLaMA/comments/1rgt0au/whats_the_biggest_issues_youre_facing_with_llms/
false
false
self
1
null
Turn off thinking in LM Studio
12
1. Go to the **My Models** page in LM Studio. 2. Select a model, such as Qwen3.5. 3. Locate **Inference** on the right-hand sidebar. 4. Scroll down to find the Prompt Template and open the **Template (Jinja)** section. 5. **Add** `{%- set enable_thinking = false %}` **to the first line of the template.** 6. **Reload your model.**
2026-02-28T04:19:01
https://www.reddit.com/r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/
moahmo88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgswkc
false
null
t3_1rgswkc
/r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/
false
false
self
12
null
Local AI codename Goose, Raspberry Pi 5 16GB RAM, ByteShape Devstral, 12k context startup and prompt. Testing this prompt. Share yours.
0
Local AI codename Goose, Raspberry Pi 5 16GB RAM, ByteShape Devstral, 12k context startup and prompt. Testing this prompt. Share yours. [https://github.com/josheeg/Game-Note/blob/main/README.md](https://github.com/josheeg/Game-Note/blob/main/README.md) *Ollama Serve* [](https://github.com/josheeg/Game-Note/blob/main/README.md#ollama-serve) OLLAMA\_CONTEXT\_LENGTH=12288 OLLAMA\_LOAD\_TIMEOUT=9999999 OLLAMA\_KEEP\_ALIVE=9999999 OLLAMA\_MAX\_LOADED\_MODELS=1 OLLAMA\_NUM\_PARALLEL=1 ollama serve GOOSE\_TEMPERATURE=0.15 GOOSE\_MAX\_TOKENS=12288 OLLAMA\_TIMEOUT=9999999 OPENAI\_TIMEOUT=9999999 goose web --open This gives a web interface, so maybe mic and speech-to-text through the web interface; theming by the scribe plugin. The prompt runs Ralph loops on the Pi over [prd.md](http://prd.md) and [plan.md](http://plan.md) files. Game: /research\_codebase "create python rpg town game for the raspberry pi 5 16gb create a folder and prepare thoughts.txt research.txt topic.md prd.md main.py requirements.txt plan.md description.md" \~/.config/goose/recipes/ralph-loop.sh ./thoughts.txt ./research.txt ./topic.md ./prd.md ./main.py requirements.txt plan.md description.md /create\_plan "create python rpg town game for the raspberry pi 5 16gb create a folder and prepare thoughts.txt" research.txt topic.md prd.md main.py requirements.txt plan.md description.md \~/.config/goose/recipes/ralph-loop.sh ./thoughts.txt ./research.txt ./topic.md ./prd.md ./main.py requirements.txt plan.md description.md /implement\_plan thoughts.txt research.txt topic.md prd.md main.py requirements.txt plan.md description.md \~/.config/goose/recipes/ralph-loop.sh ./thoughts.txt ./research.txt ./topic.md ./prd.md ./main.py requirements.txt plan.md description.md
2026-02-28T04:08:55
https://www.reddit.com/r/LocalLLaMA/comments/1rgsp5h/local_ai_codename_goose_rasbery_pi_5_16gb_ram/
Josheeg39
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgsp5h
false
null
t3_1rgsp5h
/r/LocalLLaMA/comments/1rgsp5h/local_ai_codename_goose_rasbery_pi_5_16gb_ram/
false
false
self
0
{'enabled': False, 'images': [{'id': 'PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM.png?width=108&crop=smart&auto=webp&s=c5e00ef60d3fae177d494966bf0bf4f96291ad67', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM.png?width=216&crop=smart&auto=webp&s=5390b7312950988393810c649e8057d60e32888b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM.png?width=320&crop=smart&auto=webp&s=856f3783f40dd8f9b74e024182dca117f1f59895', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM.png?width=640&crop=smart&auto=webp&s=4284dbf7414fa0cd32c68ed56d6bca7f38a46eef', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM.png?width=960&crop=smart&auto=webp&s=db0b740a79f64d99af9db78268678b139ab6a674', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM.png?width=1080&crop=smart&auto=webp&s=92fb9d3677fa7904c0a37d53335f86ae18ee92d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PpLJKke8kqBWJ-wduua_U8U5QGR78UhqnIe2yTNkTRM.png?auto=webp&s=e53a410ae7ca898ead10eddc6a87949d13e83872', 'width': 1200}, 'variants': {}}]}
Testing Minimax M2.1 and LongCat Flash Thinking 2601 in OpenClaw
0
A lot of models have added OpenClaw support lately, so I decided to test how Minimax M2.1 and LongCat Flash Thinking 2601 handle a sequence of tasks. The prompt: Scan the system logs, collect errors from the last 3 days, and create a log analysis report tracking error types and how often they happen. Then, check the current config files and generate a system health report that includes disk space, memory usage, and running processes. Finally, create a troubleshooting doc and fix scripts for any issues you find, and give me a popup asking if I want to run them. Also, track device usage for the next hour. When the hour is up, save the timestamped logs to a .md file and send it to me through iMessage. Result: Obviously, a task like this is really tough for current LLMs. Minimax M2.1 actually held up okay for most of the steps, like continuous monitoring, generating files, and sending messages. LongCat Flash Thinking 2601, though, only managed to pull off a few parts and sometimes got different system APIs totally mixed up. In terms of speed, Minimax M2.1 takes about 3.36 minutes per task on average, while LongCat-Flash-Thinking-2601 averages about 2.35 minutes per task. One thing I noticed is that LongCat Flash Thinking 2601 doesn't seem to have a quota limit. I see the usage going up on the API page, but it never actually cuts me off. I think this is very useful for people who need to run a ton of simple tasks (especially for browsing sites packed with ads) but are running low on API credits.
2026-02-28T04:06:24
https://i.redd.it/i5f2eexut5mg1.jpeg
Srujzz_2317
i.redd.it
1970-01-01T00:00:00
0
{}
1rgsna7
false
null
t3_1rgsna7
/r/LocalLLaMA/comments/1rgsna7/texting_minimax_m21_and_long_cat_flash_thinking/
false
false
https://preview.redd.it/…3b0791b92e7eb6ac
0
{'enabled': True, 'images': [{'id': 'i5f2eexut5mg1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/i5f2eexut5mg1.jpeg?width=108&crop=smart&auto=webp&s=b884f4cf85120c4b9bbca58f0885f3ef38ce7502', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/i5f2eexut5mg1.jpeg?width=216&crop=smart&auto=webp&s=d9f1f1ca3e2f0ace8dc4169e73728fc06ff428d1', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/i5f2eexut5mg1.jpeg?width=320&crop=smart&auto=webp&s=c33e85e653081ccd70cd2cede9c9df076ba9bb20', 'width': 320}, {'height': 282, 'url': 'https://preview.redd.it/i5f2eexut5mg1.jpeg?width=640&crop=smart&auto=webp&s=a98bbb337eb5f9b3aa3db0e9a62e4031ae296801', 'width': 640}], 'source': {'height': 400, 'url': 'https://preview.redd.it/i5f2eexut5mg1.jpeg?auto=webp&s=6f41ef9b796f0bf8ac94440325174ac413515571', 'width': 905}, 'variants': {}}]}
What languages or DSLs are you folks using?
0
When I've asked the question, I've got: **What "compression tools" actually exist:** Almost nothing. There's no established DSL for LLM-to-LLM structured communication that's gained adoption. JSON/YAML are data formats, not compression systems. Markdown is universal but has zero compression philosophy. The others are really just people writing terse prompts by hand. But this seems quite a reductive response, even if I've yielded no real hits when i've searched. What am I missing? It feels like an obvious thing that should be developed more (disclaimer, I have worked on one, but I don't want to spam. I'm just genuinely curious why I can't find anything like what I'm doing). Is it because there's no money in language which is essentially always gonna be free (or should be) or am I missing something obvious? Is anyone using any actual DSLs in their setups to structure their comms and if so, which ones?
2026-02-28T04:04:08
https://www.reddit.com/r/LocalLLaMA/comments/1rgslme/what_languages_or_dsls_are_you_folks_using/
sbuswell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgslme
false
null
t3_1rgslme
/r/LocalLLaMA/comments/1rgslme/what_languages_or_dsls_are_you_folks_using/
false
false
self
0
null
Is hosting a local LLM really as crappy of an experience as I am having?
0
Hey Folks, I decided to dive into hosting my own LLM this weekend in my home lab. Here's what I'm running. Specs: * CPU: 12th Gen Intel(R) Core(TM) i9-12900HK * RAM: 64GB DDR4 * GPU: GeForce RTX 3080 Ti Laptop GPU 16GB GDDR6 Setup: * Ollama installed on bare metal * Open WebUI in docker Issue: I have tried about 20 different models ranging from 8b to 27b. Most models are nice and snappy, except one I tried. The problem is more about experience. Even a simple thing like "Get the latest powerball numbers" doesn't return a result I would expect (i.e. saying the latest powerball numbers are (xxx) from the drawing on (tomorrow's date)). Then I tried giving it some documentation to use as data... and it couldn't even answer basic questions from the documents I provided. Question: Is it because I don't have very good resources and therefore can't really get a GOOD model? Or are all these models kinda mediocre, and I'm never going to get close to an experience similar to ChatGPT or others? I mean, let me be honest. I do not expect ChatGPT quality, but I at least expected some intelligent answers. Please set me straight and share your thoughts
2026-02-28T03:15:26
https://www.reddit.com/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/
RickoT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgrlzv
false
null
t3_1rgrlzv
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/
false
false
self
0
null
Built a payment MCP server for agents — sandbox live, want devs integrating before production cards are ready
0
The sandbox is live today with (1) intent declaration before card issuance, (2) MFA-gated human approval per transaction, (3) a merchant-locked virtual Visa per purchase with 15-min expiry, and (4) automatic intent reconciliation. No production cards yet (that's March 4), but you can start integrating the MCP now. [https://www.payclaw.io/](https://www.payclaw.io/) What's the first thing you'd have your agent buy when we go live?
2026-02-28T03:13:21
https://v.redd.it/95y2xb35k5mg1
Opposite-Exam3541
v.redd.it
1970-01-01T00:00:00
0
{}
1rgrkcr
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/95y2xb35k5mg1/DASHPlaylist.mpd?a=1774840424%2CMTUyZTYyZmFiODhhOGJkMmRhNzljOTY2OTI2MTNkYzQwZmNmNDAyOGRhZGM3ZDhmOTc2NTA3NTY5NmMwMDMyZg%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/95y2xb35k5mg1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/95y2xb35k5mg1/HLSPlaylist.m3u8?a=1774840424%2CZDRiYmY2ZWQxY2NhMWRkNDkzMDk2Y2I3ODBlNjNiNjdhMTI2MTZmZGMzYjM0MjA5NzM3ZDM3NzFmZjVjM2ExYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/95y2xb35k5mg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1rgrkcr
/r/LocalLLaMA/comments/1rgrkcr/built_a_payment_mcp_server_for_agents_sandbox/
false
false
https://external-preview…d4de93cbe1fda968
0
{'enabled': False, 'images': [{'id': 'dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O.png?width=108&crop=smart&format=pjpg&auto=webp&s=064e72f50750008d10a4e5c133d00117987c5855', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O.png?width=216&crop=smart&format=pjpg&auto=webp&s=fcbdd6df3535842cf884c6f70ae8bb5e70ae2704', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O.png?width=320&crop=smart&format=pjpg&auto=webp&s=5e21cc6e74c2e192808d36a9609b9db8a4ba9e86', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O.png?width=640&crop=smart&format=pjpg&auto=webp&s=1e8a13b9b70d782fb52c8b4c57dcfd349d4770d8', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O.png?width=960&crop=smart&format=pjpg&auto=webp&s=3ecbd984536019b05f1778bff2514cf85e8c0e58', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O.png?width=1080&crop=smart&format=pjpg&auto=webp&s=00bcf012f709bd89238d5c19d18af6992305656b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dGdyZDg2NDVrNW1nMe2-cvvQkkLnSwU4DjjCNac40aUUMxLu_skK8IEyHs3O.png?format=pjpg&auto=webp&s=d865f30d3c48a3bd19fd29321015ffa051036886', 'width': 1280}, 'variants': {}}]}
Macbook air m4 recommendation
1
[removed]
2026-02-28T03:01:19
https://www.reddit.com/r/LocalLLaMA/comments/1rgrb2w/macbook_air_m4_recommendation/
NumerousVideo1854
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgrb2w
false
null
t3_1rgrb2w
/r/LocalLLaMA/comments/1rgrb2w/macbook_air_m4_recommendation/
false
false
self
1
null
Newbie question: best achievable fully-local LLM (& RAG?) setup for analysing governance board packs on a low/mid-range laptop?
2
Hi all, First-time caller here. I’m trying to build a fully offline local LLM setup to analyse monthly board packs (typically 50–100 page PDFs) and would appreciate advice on tools and architecture. Hardware • Lenovo Yoga 7 Gen 10 • AMD Ryzen™ AI 7 350 • 32 GB LPDDR5X RAM • 1 TB SSD • Windows 11 LTSC Due to confidentiality concerns what I’m building needs to be fully offline only with no cloud usage. ⸻ What I want to do… Each month: • Upload a board pack (PDF) • Query the model on whether particular agenda items have been discussed before (in older board pack PDFs), and generally chat with the current document to supplement and enhance my governance practice. • Ideally, have the model: • Use the whole document (not just a single section) • Cross-reference internally • Identify financial, risk, governance, and strategic blind spots • Avoid generic boilerplate answers I also have a large governance reference corpus (nearly a thousand policy docs, governance guides, frameworks, college notes etc) which I could use to inform answers via a RAG or similar. ⸻ What I need advice on 1. What local LLM should I use for this type of structured analytical task? 2. What embedding model? 3. Which vector database (if any)? 4. Is an all-in-one GUI tool sufficient, or should I build a custom RAG stack? 5. How would you structure: • Static governance corpus • Monthly board packs • Cross-project reuse 6. What chunking strategy works best for 50–100 page PDFs? If you were building this from scratch on this laptop, what stack would you choose? How would you approach this, which I assume is a relatively simple task compared to what some of the gurus in here seem to be working on? I can’t say I’m super-skilled in this area but I’m willing to learn and try new things. But just mucking around with Qwen2.5-14B in LMStudio with only one 50-page board pack is giving me uselessly incomplete answers at 3tk/s so I feel like I need to ask the experts here..!
2026-02-28T02:50:00
https://www.reddit.com/r/LocalLLaMA/comments/1rgr249/newbie_question_best_achievable_fullylocal_llm/
AccessibilityTest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgr249
false
null
t3_1rgr249
/r/LocalLLaMA/comments/1rgr249/newbie_question_best_achievable_fullylocal_llm/
false
false
self
2
null
Testing Minimax M2.1 and LongCat Flash Thinking 2601 in OpenClaw
0
A lot of models have added OpenClaw support lately, so I decided to test how Minimax M2.1 and LongCat Flash Thinking 2601 handle a sequence of tasks. The prompt: Scan the system logs, collect errors from the last 3 days, and create a log analysis report tracking error types and how often they happen. Then, check the current config files and generate a system health report that includes disk space, memory usage, and running processes. Finally, create a troubleshooting doc and fix scripts for any issues you find, and give me a popup asking if I want to run them. Also, track device usage for the next hour. When the hour is up, save the timestamped logs to a .md file and send it to me through iMessage. Result: Obviously, a task like this is really tough for current LLMs. Minimax M2.1 actually held up okay for most of the steps, like continuous monitoring, generating files, and sending messages. LongCat Flash Thinking 2601, though, only managed to pull off a few parts and sometimes got different system APIs totally mixed up. In terms of speed, Minimax M2.1 takes about 3.36 minutes per task on average, while LongCat-Flash-Thinking-2601 averages about 2.35 minutes per task. One thing I noticed is that LongCat Flash Thinking 2601 doesn't seem to have a quota limit. I see the usage going up on the API page, but it never actually cuts me off. I think this is very useful for people who need to run a ton of simple tasks (especially for browsing sites packed with ads) but are running low on API credits.
2026-02-28T02:49:38
https://i.redd.it/cx9fcfv5g5mg1.jpeg
Srujzz_2317
i.redd.it
1970-01-01T00:00:00
0
{}
1rgr1ul
false
null
t3_1rgr1ul
/r/LocalLLaMA/comments/1rgr1ul/texting_minimax_m21_and_long_car_flash_thinking/
false
false
https://preview.redd.it/…19c58b42b62330ae
0
{'enabled': True, 'images': [{'id': 'cx9fcfv5g5mg1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/cx9fcfv5g5mg1.jpeg?width=108&crop=smart&auto=webp&s=bd7d8402529fa49dc0a6edf8fc4f621de6476733', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/cx9fcfv5g5mg1.jpeg?width=216&crop=smart&auto=webp&s=97abe1b9dd2c105b767d9ef2bb11b851d0bf9b00', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/cx9fcfv5g5mg1.jpeg?width=320&crop=smart&auto=webp&s=4395f06b94dfa2eada19823000c377ffd92c3644', 'width': 320}, {'height': 282, 'url': 'https://preview.redd.it/cx9fcfv5g5mg1.jpeg?width=640&crop=smart&auto=webp&s=2e98ebf0ebde9c3bcce59ea512e5079bddee4b96', 'width': 640}], 'source': {'height': 400, 'url': 'https://preview.redd.it/cx9fcfv5g5mg1.jpeg?auto=webp&s=4d0382feec8e0feac063a6d0d940e21789434e56', 'width': 905}, 'variants': {}}]}
Macbook air m4 16 gb
1
[removed]
2026-02-28T02:46:29
https://www.reddit.com/r/LocalLLaMA/comments/1rgqzcp/macbook_air_m4_16_gb/
NumerousVideo1854
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqzcp
false
null
t3_1rgqzcp
/r/LocalLLaMA/comments/1rgqzcp/macbook_air_m4_16_gb/
false
false
self
1
null
Wyoming Parakeet MLX
0
Vibe coded a Wyoming protocol server for Parakeet MLX — drop-in STT for Home Assistant on Apple Silicon. I replaced my previous Wyoming Whisper MLX setup with this and it seems to be faster. Instructions and code at https://github.com/Wysie/wyoming-parakeet-mlx Huge thanks to parakeet-mlx and wyoming-mlx-whisper for the foundation.
2026-02-28T02:45:19
https://www.reddit.com/r/LocalLLaMA/comments/1rgqyhg/wyoming_parakeet_mlx/
whysee0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqyhg
false
null
t3_1rgqyhg
/r/LocalLLaMA/comments/1rgqyhg/wyoming_parakeet_mlx/
false
false
self
0
null
Built a payment MCP server for agents. Sandbox live; want devs integrating before production cards are ready
1
[payclaw.io](http://payclaw.io) sandbox is live today with (1) intent declaration before card issuance, (2) MFA-gated human approval per transaction, (3) a merchant-locked virtual Visa per purchase with 15-min expiry, and (4) automatic intent reconciliation. No production cards yet (that's March 4), but you can start integrating the MCP now.
2026-02-28T02:43:07
https://www.reddit.com/r/LocalLLaMA/comments/1rgqwrk/built_a_payment_mcp_server_for_agents_sandbox/
Payclaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqwrk
false
null
t3_1rgqwrk
/r/LocalLLaMA/comments/1rgqwrk/built_a_payment_mcp_server_for_agents_sandbox/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts.jpeg?width=108&crop=smart&auto=webp&s=a765168d9e7ac48933aa0ab7c6d8f565f4916685', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts.jpeg?width=216&crop=smart&auto=webp&s=3f0ae50477bdb3b37938848d2dba91e6ae52b666', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts.jpeg?width=320&crop=smart&auto=webp&s=a7cb30b11fb060455562601d513151c66d30ab98', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts.jpeg?width=640&crop=smart&auto=webp&s=fb2ba9175faec9df1eaed8913f563ab0eadc11dd', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts.jpeg?width=960&crop=smart&auto=webp&s=46a0a8ffe6455832c66b6a15edaea94589dca87e', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/wncr6oI4rF8Z9SdoAA-_0LDOcE2JCIquYAEXn0Qm0ts.jpeg?auto=webp&s=c58a6422e6f542ea07de8530b261385f17e4d771', 'width': 1024}, 'variants': {}}]}
I'm looking for local Spanish-speaking communities about LLMs.
3
I would like to be able to converse in my native language, Spanish. Do you know of any forums, websites, or Discord servers? I personally want to start a forum or website related to this. But first, I'd like to look for some references. Thank you for your time.
2026-02-28T02:33:45
https://www.reddit.com/r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/
ColdTransition5828
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqpn2
false
null
t3_1rgqpn2
/r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/
false
false
self
3
null
Does setting a small context size let you run a larger/better model?
0
I'm using MLX-VLM to run Qwen3-VL-30B-A3B-Thinking... I have a 32GB MacBook, and have successfully run -4bit in 20GB, and -5bit in 24GB. 6bit and 8bit crash, running out of memory. Now, I am setting max-tokens to 10000. This is sufficient for what I am running, and is probably sufficient for both input and output tokens. It's not clear to me what the default context size I am running is, and whether it's possible to reduce the context size to fit a larger model (e.g. -6bit). Is memory for the context allocated at the beginning, or does it grow dynamically? Are there ways to optimize context size for a given workload/machine? Thx,
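Not an MLX expert, but a rough way to reason about this: the weights are a fixed cost, while the KV cache grows linearly with context length, so capping context can free real memory. A back-of-envelope calculator (layer/head numbers below are placeholders; read the real ones from the model's config.json):

```python
# KV cache size ~= 2 (K and V) * layers * kv_heads * head_dim * context_len * bytes_per_element
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, ctx_len: int,
                bytes_per: int = 2) -> float:  # 2 bytes/elem for an fp16 cache
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per / 1024**3

# Hypothetical 48-layer model with 4 KV heads of dim 128:
for ctx in (10_000, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_gb(48, 4, 128, ctx):.2f} GB")
```

My understanding is that MLX typically grows the cache lazily as tokens accumulate rather than preallocating the full window, so a short max-tokens run may already use far less than the model's advertised context; worth verifying against the mlx-vlm docs before relying on it to fit a -6bit quant.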
2026-02-28T02:20:51
https://www.reddit.com/r/LocalLLaMA/comments/1rgqfne/does_setting_a_small_context_size_let_you_run_a/
throwyawafire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgqfne
false
null
t3_1rgqfne
/r/LocalLLaMA/comments/1rgqfne/does_setting_a_small_context_size_let_you_run_a/
false
false
self
0
null
Fine-tuning a small model as a "judge" for multi-agent debate outputs - anyone tried this?
0
Instead of fine-tuning generation models, I'm experimenting with fine-tuning a small model (\~8B) specifically to evaluate and score outputs from two larger prompted agents that are debating. The idea: two agents generate competing outputs with citations. The fine-tuned judge model scores each on factual grounding, internal consistency, and source quality. Basically training a referee instead of training the players. Seems more data-efficient since the judge only needs to learn evaluation criteria, not domain knowledge. But I haven't seen many examples of this pattern. Anyone tried something similar? What was your training data strategy - human preference pairs, synthetic ratings, or something else?
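One pattern that might map here (hedged: the field names below are my own illustration, not a standard): store pairwise preference records where the label is a structured verdict, then SFT the 8B judge on the verdict JSON, or convert to chosen/rejected pairs for a reward-model objective.

```python
# One hypothetical training record for a pairwise judge. Field names are
# illustrative; the important parts are the shared context and a structured verdict.
import json

record = {
    "instruction": "Score each answer 0-10 for factual grounding, internal "
                   "consistency, and source quality. Then pick a winner.",
    "context": "Question: ...\nSources: [1] ..., [2] ...",
    "answer_a": "... claim supported by [1] ...",
    "answer_b": "... claim with no supporting source ...",
    "verdict": {
        "winner": "a",
        "grounding": {"a": 8, "b": 3},
        "rationale": "B's second claim appears in no provided source.",
    },
}
print(json.dumps(record, indent=2))
```

For data strategy, synthetic ratings from a frontier model plus a small human-audited slice is probably the cheapest starting point, and position bias is worth controlling by duplicating each pair with A/B swapped.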
2026-02-28T02:12:14
https://www.reddit.com/r/LocalLLaMA/comments/1rgq8wz/finetuning_a_small_model_as_a_judge_for/
WitnessWonderful8270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgq8wz
false
null
t3_1rgq8wz
/r/LocalLLaMA/comments/1rgq8wz/finetuning_a_small_model_as_a_judge_for/
false
false
self
0
null
Can a locally hosted LLM keep up with Grok 4.1 Fast for OpenClaw?
0
I’m running OpenClaw on an Unraid server. I have an M4 Mac mini already and debated picking up a few more to run as a cluster, but what LLM would be equivalent to something like Grok 4.1 Fast? Is it pointless to self-host? I’m not sure what my bills are going to look like, but I’ve basically been having Grok write scripts to run and keep most work on my server vs. their services. Bit new to this, so sorry if it’s been covered to death. I’m not looking for image or video generation, just server management with assistant-level tasking like calendars, media management, etc.
2026-02-28T02:01:50
https://www.reddit.com/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/
MartiniCommander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgq0vc
false
null
t3_1rgq0vc
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/
false
false
self
0
null
Qwen 3.5-27B punches waaaaay above its weight (with a slightly different prompt) -- very impressed
176
Used this model from [chat.qwen.ai](http://chat.qwen.ai) and turned off thinking (Fast mode). Inference is fast enough and with using this as my default prompt: >"Do not provide a lame or generic answer." ...this model does not answer like a 27B model that we've previously known of. Example prompts: 1. "What does it mean to be nocturnal and eternal?": To be \*\*nocturnal\*\* is to live by a rhythm of absence, finding clarity only when the world's primary light fades. It is an existence defined by the margins: the silence between heartbeats, the shadows that stretch longest at midnight, and the secrets that refuse to speak in daylight. The nocturnal being does not fear the dark; they are its native tongue. They navigate by stars others ignore, thriving on the cool, unfiltered truth that the sun often burns away. To be \*\*eternal\*\* is to shed the tyranny of the clock entirely. It is not merely living forever in a linear sense, but existing outside the erosion of time. An eternal entity does not age, decay, or forget; it holds every moment simultaneously, like a library where every book is open at once. There is no "before" or "after," only an infinite, unblinking "now." \*\*Combined, to be nocturnal and eternal is to be the permanent shadow.\*\* It is the realization that while civilizations rise and fall under the fleeting sun, the night never truly ends for you. You are the constant observer who has watched the moon wax and wane for a billion cycles without blinking. In this state: \* \*\*Time becomes texture, not a measure.\*\* A century feels as brief as a breath; a single second can contain an ocean of memory. \* \*\*Solitude becomes a kingdom.\*\* You are never lonely because you have outlived the need for transient company. Your companions are the constellations, the tides, and the silence itself. \* \*\*The day becomes a temporary glitch.\*\* Sunlight is just a brief interruption, a harsh glare that forces the world to sleep, while you remain awake in your endless, unchanging vigil. It is a condition of profound, terrifying beauty: to be the only thing that remembers everything, standing still while the universe spins madly around you, waiting in the dark that will never leave. 2. [https://chat.qwen.ai/s/e6db2c31-a50a-47de-9118-746ee7f7496c?fev=0.2.9](https://chat.qwen.ai/s/e6db2c31-a50a-47de-9118-746ee7f7496c?fev=0.2.9) (vision test for meme/humor understanding -- passed with flying colors) Currently using it as the default on the site, because I personally haven't seen that much difference compared to 397B; they're identical in quality. Also does web search really well. Alibaba killed it with this model, hugely impressed!
2026-02-28T01:56:30
https://www.reddit.com/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/
theskilled42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgpwn5
false
null
t3_1rgpwn5
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/
false
false
self
176
null
SOOO much thinking....
24
How do I turn it off in Qwen 3.5? I've tried four or five suggestions for chat. I'm a Qwen instruct user. Qwen is making me crazy. I'm not using 3.5 for direct chat; I'm calling 35B and 122B from other systems. One Qwen is on LM Studio and one on Ollama.
2026-02-28T01:26:19
https://www.reddit.com/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/
zipzag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgp97u
false
null
t3_1rgp97u
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/
false
false
self
24
null
Anyone doing speculative decoding with the new Qwen 3.5 models? Or, do we need to wait for the smaller models to be released to use as draft?
8
I kind of half-ass understand speculative decoding, but I do know that it’s supposed to be pretty easy to setup in LM Studio. I was just wondering if it’s worth using Qwen 3.5 27b as the draft model for the larger Qwen 3.5 models, or if there won’t be any performance improvements unless the draft model is much smaller. Again, I don’t really know what the hell I’m talking about entirely, but I’m hoping one of y’all could educate me on if it’s even possible or worth trying with the current batch of Qwen 3.5’s that are out, or if they need to release the smaller variants first.
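The intuition, for what it's worth: the draft guesses a few tokens ahead cheaply and the big model verifies them, so the speedup is roughly (acceptance rate) x (cost ratio between the two models). A 27B draft under a 122B target has a poor cost ratio, so the gains would likely be small; the pattern pays off with a much smaller draft that shares the tokenizer. A toy greedy-only illustration (the "models" are stand-in functions, not real inference):

```python
# Toy speculative decoding, greedy case: the draft proposes k tokens, the target
# keeps the agreeing prefix and corrects the first mismatch. Real engines verify
# all drafted positions in a single target forward pass; this toy checks per token.
def draft_next(seq):   # small/fast model stand-in, sometimes wrong
    return (seq[-1] + 1) % 7

def target_next(seq):  # big/slow model stand-in, treated as ground truth
    return (seq[-1] + 1) % 5

def speculative_step(seq, k=4):
    proposal = list(seq)
    for _ in range(k):
        proposal.append(draft_next(proposal))
    accepted = list(seq)
    for tok in proposal[len(seq):]:
        if tok == target_next(accepted):
            accepted.append(tok)                    # drafted token accepted for free
        else:
            accepted.append(target_next(accepted))  # target corrects, then stop
            break
    return accepted

seq = [0]
for _ in range(5):
    seq = speculative_step(seq)
print(seq)  # long accepted runs where the models agree, corrections where they don't
```

In LM Studio the practical test is easy: load the big model, attach the candidate draft, and watch the reported draft-token acceptance stats; if acceptance is low or tokens/sec doesn't improve, the draft is too different (or too big) to help.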
2026-02-28T01:17:59
https://www.reddit.com/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgp2nu
false
null
t3_1rgp2nu
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/
false
false
self
8
null
Does Qwen3.5 35b outperform Qwen3 coder next 80b for you?
24
I did some tests, but I am not sure yet. The coder next 80b seems to be in the middle between the 35b and the 122b.
2026-02-28T01:12:38
https://www.reddit.com/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/
JsThiago5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgoygs
false
null
t3_1rgoygs
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/
false
false
self
24
null
Llama-server doesn't see ROCm device (Strix Halo) unless I run Wayland
2
Hey all, Running Debian (PikaOS) with kernel 6.19.2 and a nightly build of llama-server. When I run it under Wayland, it detects the Radeon 8060S. If I try to run headless (systemctl stop sddm), no ROCm device is detected. I checked that my user is in the video and render groups, and I have tried setting the HSA\_OVERRIDE\_GFX\_VERSION variable. Don't know what else to do.
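Not a definitive fix, but a hedged checklist using standard ROCm tooling; the usual headless culprits are device-node permissions and session-scoped environment. The gfx override value for Strix Halo is an assumption to verify for your part.

```bash
# Confirm the compute nodes are accessible outside a desktop session
ls -l /dev/kfd /dev/dri/renderD*
groups          # should list 'video' and 'render'

# Confirm the ROCm runtime itself sees the GPU when headless
rocminfo | grep -i gfx

# If rocminfo works but llama-server does not, try forcing the target arch
# (gfx1151 -> 11.5.1 for Strix Halo is an assumption, not verified here)
HSA_OVERRIDE_GFX_VERSION=11.5.1 ./llama-server -m model.gguf -ngl 99
```

If rocminfo also fails headless, the problem is below llama.cpp entirely, likely permissions or a driver path that only initializes with a display session.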
2026-02-28T01:06:56
https://www.reddit.com/r/LocalLLaMA/comments/1rgotwp/llamaserver_doesnt_see_rocm_device_strix_halo/
coreyfro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgotwp
false
null
t3_1rgotwp
/r/LocalLLaMA/comments/1rgotwp/llamaserver_doesnt_see_rocm_device_strix_halo/
false
false
self
2
null
Ubuntu or Debian? Speed difference on llama.cpp tokens?
0
Does llama.cpp run faster on Ubuntu or Debian? * Ubuntu * newer 6.8 kernel * built-in CUDA * everyone uses it * Debian * less malware, more GNU Is there a difference in token speed or is this just BS?
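Rather than guess, measure: llama.cpp ships llama-bench, so you can run the identical binary and model on both installs. A sketch, with the caveat that kernel and driver versions usually matter far more than the distro label:

```bash
# Same binary, same model, on each OS; compare pp (prompt) and tg (generation) t/s.
# -p = prompt tokens, -n = generated tokens, -r = repetitions for stable numbers.
llama-bench -m model.gguf -p 512 -n 128 -r 5
uname -r   # record the kernel version next to the results
```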
2026-02-28T00:58:10
https://www.reddit.com/r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/
ClimateBoss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgomsq
false
null
t3_1rgomsq
/r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/
false
false
self
0
null
A monthly update to my "Where are open-weight models in the SOTA discussion?" rankings
337
2026-02-28T00:55:43
https://i.redd.it/h73sgnomv4mg1.png
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1rgokw1
false
null
t3_1rgokw1
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/
false
false
https://preview.redd.it/…77f3b3a2bad0be8a
337
{'enabled': True, 'images': [{'id': 'h73sgnomv4mg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?width=108&crop=smart&auto=webp&s=6fef5011915cb4a1d8e7df294f4906e7291b9cd2', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?width=216&crop=smart&auto=webp&s=305a9267a71fde369be8abc6f5e68a9b087e89e4', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?width=320&crop=smart&auto=webp&s=47c7431e8ea13c5e2b2fe0356d63a825ee07ee8a', 'width': 320}, {'height': 381, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?width=640&crop=smart&auto=webp&s=095da03cfc69e0e25dba0d39a567ed55010d112b', 'width': 640}, {'height': 571, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?width=960&crop=smart&auto=webp&s=bb36b593ca7a1ad6afe7d124125f9879824de9b0', 'width': 960}, {'height': 643, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?width=1080&crop=smart&auto=webp&s=945f0cae1780dfa8707970cfe8d40433a375a344', 'width': 1080}], 'source': {'height': 1082, 'url': 'https://preview.redd.it/h73sgnomv4mg1.png?auto=webp&s=23c78d86ae39411df19fd230d5feacce82402e6d', 'width': 1816}, 'variants': {}}]}
MCP that issues virtual just-in-time VISA cards for agents on user-approve (dev feedback)
0
**Built** an MCP server that lets agents complete real purchases using ephemeral virtual Visa cards. Posting here because this isn't model-specific — anything that speaks MCP can use it. **The problem:** Every agent framework can browse, search, compare, and select. None of them can check out. The workarounds are all bad — either you paste your real card into the context window (lol) or you use crypto rails that no real merchant accepts. **How it works:** Agent calls the MCP tool with a purchase intent (merchant, amount, description). The request goes to the user for approval (MFA). On approval, a virtual Visa issues just-in-time: locked to that merchant, capped at that amount, 15-minute TTL. Agent gets the card credentials, checks out, card self-destructs. Two separate financial rails: * User → Stripe → PayClaw balance (funding) * PayClaw → Lithic-issued virtual Visa → merchant (spending) Real card credentials never enter the agent's context. Not by policy — by architecture. The rails don't intersect. **Why card-per-transaction matters for self-hosted setups:** If you're running a local model and it gets prompt-injected, the worst case is one transaction, bounded by the amount you approved, limited to the merchant you approved, within a 15-minute window. Compare that to a persistent card number sitting in context. **Key difference from buy-anything** (the ClawHub tool that got pulled): that tool collected actual card PANs in the agent chat context. This issues ephemeral virtual cards from a BaaS provider. The agent never has access to real credentials. **Stack:** TypeScript MCP server, Lithic for card issuing, Stripe for funding. Standard MCP tool registration — should work with anything running an MCP client (Claude, LM Studio with MCP support, etc). **Ask:** Dev sandbox is up. Happy to talk architecture or take questions about the security model or general feedback / use cases - looking for people to beat this up in sandbox.
2026-02-28T00:36:53
https://www.reddit.com/r/LocalLLaMA/comments/1rgo5re/mcp_that_issues_virtual_justintime_visa_cards_for/
Opposite-Exam3541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgo5re
false
null
t3_1rgo5re
/r/LocalLLaMA/comments/1rgo5re/mcp_that_issues_virtual_justintime_visa_cards_for/
false
false
self
0
null
LM Studio: can it load a small local folder of code?
1
I've found the "load files" plugin, but it takes files, not folders, and is limited to 5 files. I've got a relatively small local python project cloned from GitHub, and I'd like to load it into context and start debugging (kinda like gemini-cli). Possible to do in LM Studio? Working on a MacBook Pro with 48GB, so I've got some RAM to work with. Not a ton, but lots more than my previous 1080 Ti! I feel like I'm missing something obvious.
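One stopgap until folder loading exists, sketched below under the assumption that the project fits in context: flatten the repo into a single text file and attach that one file. The glob is an example; widen it for your project.

```bash
# Concatenate a small Python project into one attachable file,
# with a header before each file so the model can tell them apart.
find . -name '*.py' -not -path './.git/*' | sort | while read -r f; do
  printf '\n===== %s =====\n' "$f"
  cat "$f"
done > project_dump.txt
```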
2026-02-28T00:20:23
https://www.reddit.com/r/LocalLLaMA/comments/1rgns5u/lm_studio_can_it_load_a_small_local_folder_of_code/
firesalamander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgns5u
false
null
t3_1rgns5u
/r/LocalLLaMA/comments/1rgns5u/lm_studio_can_it_load_a_small_local_folder_of_code/
false
false
self
1
null
Is anything worth doing with a 7B model?
1
The thing is, I had been learning about local LLMs, so I downloaded Ollama and opencode on my PC. It is a cheap PC, so I can only run 7B models like Qwen2.5 or Mistral. The thing is, I have OpenAI Plus, so I mostly use that for almost everything I need. The only use I find for my local LLMs is development: I use the local Ollama to build or try applications that use LLMs without having to spend on Claude or OpenAI APIs. My intention with this post is to ask you guys for other uses I can try with small models.
2026-02-27T23:54:13
https://www.reddit.com/r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/
Mrdeadbuddy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgn5m0
false
null
t3_1rgn5m0
/r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/
false
false
self
1
null
President Trump orders ALL Federal agencies in the US Government to immediately stop using Anthropic's technology.
564
https://preview.redd.it/…e interesting.**
2026-02-27T23:53:00
https://www.reddit.com/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgn4ki
false
null
t3_1rgn4ki
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/
false
false
https://preview.redd.it/…3c5f8861d5a7adf0
564
null
LLM benchmark site for dual RTX 5060 Ti
4
Put together a small benchmark site for my homelab rig: Dell Precision T7810, dual Xeon E5-2680 v4, 2× RTX 5060 Ti 16GB (32GB VRAM total). All GGUF via llama.cpp/ik_llama. vLLM and safetensors coming soon. https://5p00kyy.github.io/llm-bench/ Has both speed numbers (PP/TG) and quality scores across 7 categories — reasoning, coding, instruction following, etc. 18 models so far, mostly 20–35B with a few larger MoEs via system RAM overflow. Mentions UVM because the unified memory flag when running llama.cpp seemed to fix some issues with offloading, even though it's not technically unified memory. Dual-socket Xeon + Blackwell consumer cards is kind of an odd combo, so I figured the data might be useful to people with similar setups. Happy to take requests on what to run next.
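For anyone curious about the UVM note: this is presumably llama.cpp's documented CUDA managed-memory switch, which lets allocations oversubscribe VRAM and page into system RAM. It is CUDA unified (managed) memory, not the unified-memory architecture of an APU. A minimal sketch:

```bash
# Allow CUDA managed memory so oversized allocations can spill into system RAM.
GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 ./llama-server -m model.gguf -ngl 99
```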
2026-02-27T23:25:22
https://www.reddit.com/r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/
do_u_think_im_spooky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgmg99
false
null
t3_1rgmg99
/r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/
false
false
self
4
null
DeepSeek updated its low-level operator library DeepGEMM, basically confirming the implementation of mHC and next-generation hardware support in V4
79
DeepSeek has just pushed a major code commit to its open-source matrix multiplication acceleration library, **DeepGEMM**. The core of this update lies in the official integration of the latest network architecture component, **Manifold-constrained Hyper-connection (mHC)**. Building on this, DeepSeek has also implemented early low-level support for NVIDIA’s next-generation **Blackwell (SM100)** architecture and FP4 ultra-low precision computing. [https://github.com/deepseek-ai/DeepGEMM/commit/1576e95ea98062db9685c63e64ac72e31a7b90c6](https://github.com/deepseek-ai/DeepGEMM/commit/1576e95ea98062db9685c63e64ac72e31a7b90c6)
2026-02-27T23:21:40
https://www.reddit.com/r/LocalLLaMA/comments/1rgmczt/deepseek_updated_its_lowlevel_operator_library/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgmczt
false
null
t3_1rgmczt
/r/LocalLLaMA/comments/1rgmczt/deepseek_updated_its_lowlevel_operator_library/
false
false
self
79
{'enabled': False, 'images': [{'id': '6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4.png?width=108&crop=smart&auto=webp&s=6498bd487569f4c475268876174a1d0ef677edfa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4.png?width=216&crop=smart&auto=webp&s=01442e63972ecafe85b09ed9cb3645dcf9760e22', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4.png?width=320&crop=smart&auto=webp&s=daa69dc432b9b78016b18722b9f02b05f2eae8a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4.png?width=640&crop=smart&auto=webp&s=069ea36c7ec770a22965ab16ec5792f36e09e789', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4.png?width=960&crop=smart&auto=webp&s=15ef16a490c3e8d4a819eb04a3663ec1f7445cb2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4.png?width=1080&crop=smart&auto=webp&s=6f340f64d1e92f5383eb0c5f528534dbb0031fee', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6LDY_WizkcTO55S0KBH1tym8g-_h7-ceGwilfna5nu4.png?auto=webp&s=ff17126ac90b926af8cb187f92f39af201eead79', 'width': 1200}, 'variants': {}}]}
A DeepSeek-OCR Finetune for Context Expansion and Agentic RAG. (An Experiment)
4
Ah, where to start. Let me walk you through my trillion-dollar prototype. Well, it's nothing much. Agent orchestration. Main model, convert old context into some document or image. Feed to the OCR model, specifically the DeepSeek OCR 2 model, which does some compression shenanigans. And binga-la-boom, make it answer stuff and provide only the context it needs to the main LLM based on query(ies). Now you see. The OCR model is lobotomized to transcribe. Wouldn't take you an extensive benchmark to measure its QnA or summarization capabilities (it got none). An idea crossed my mind at this point. LoRA. Would a quick LoRA fine-tune do the job? Okay so. After some weekends and noons (I got some other stuff to do), I grabbed this [dataset](https://huggingface.co/datasets/gpahal/pdf-rag-embed-bench). Processed a subset, and ran it through a synthetic data generation pipeline. Primarily QnA (A) and summarizations, explanations and descriptions of concepts (B) and whatnot; I annotated them Mode A and Mode B respectively. Some 2700 samples deep. Great. The LoRA fine-tuning was fairly simple and straightforward. 64 rank, 16-bit. I went for this hard-coded prompt template. For the QnA mode. [MODE: EXTRACTION]<image>query For the summarization mode. [MODE: ANALYSIS]<image>query "<image>" is a special token as per the DeepSeek-OCR 2 spec. Ok. The benchmarks. Haha. Yeah... the benchmarks... Well, I didn't bother with the fuck shit RAG benchmarks out there; I didn't want to deal with any headaches. I just ended up generating extra data from the left-over subset I didn't use. About 2000 samples deep as well. I used 400 because I'm compute-constrained. Used an LLM-as-judge approach, scored different aspects and shit. Base model. MODE A — EXTRACTION Accuracy: 1.39/5 Completeness: 1.50/5 Precision: 1.95/5 MODE B — ANALYSIS Accuracy: 1.39/5 Depth: 1.23/5 Completeness: 1.22/5 Coherence: 2.44/5 Fine-Tuned. MODE A — EXTRACTION Accuracy: 1.87/5 Completeness: 1.95/5 Precision: 2.87/5 MODE B — ANALYSIS Accuracy: 1.26/5 Depth: 1.23/5 Completeness: 1.18/5 Coherence: 2.17/5 https://preview.redd.it/0auni75gc4mg1.png?width=173&format=png&auto=webp&s=321c53f40aae68d5f14e407522dffd07682fa7df Aight. Mission failed successfully. Now, some notes. My dumbass didn't do multi-QnA per sample for training. But that's not an issue since the dataset is flat and there exist multiple questions per document page tagged by a common ID. The QnA did integrate pretty well from my brief manual inspection. Summarizations didn't. The model copied the 'patterns' but the content was shallow/repetitive or incoherent *sometimes*. It also does not pair up well with abstract or complex questions (duh). And it hallucinates like hell, as expected. I didn't fine-tune to mitigate those issues, however. To be honest, I didn't put much deep thought behind this, mere experiment. I can't conclude whether LoRA is or isn't built for this, i.e., differentiating between what's accurate or not. Though it definitely was able to retrieve specific information precisely, as opposed to the base model. Hopefully someone more experienced does their own benchmarks or tests. Maybe carry out a more serious attempt, if they will. Or give feedback/criticism. HF Card (Merged): [https://huggingface.co/Ovalko/Deepseek-OCR-QnA](https://huggingface.co/Ovalko/Deepseek-OCR-QnA) Adapter-only: [https://huggingface.co/Ovalko/DeepSeek-OCR-QnA-Adapter](https://huggingface.co/Ovalko/DeepSeek-OCR-QnA-Adapter)
2026-02-27T23:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1rgmcw6/a_deepseekocr_finetune_for_context_expansion_and/
valkarias
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgmcw6
false
null
t3_1rgmcw6
/r/LocalLLaMA/comments/1rgmcw6/a_deepseekocr_finetune_for_context_expansion_and/
false
false
https://external-preview…c1079dc6dcab1225
4
null
Verantyx: 23.5% on ARC-AGI-2 on a MacBook — 0.6s per task, zero LLM calls, zero GPU.
0
2026-02-27T23:21:33
https://www.reddit.com/gallery/1rgmcw3
Other_Train9419
reddit.com
1970-01-01T00:00:00
0
{}
1rgmcw3
false
null
t3_1rgmcw3
/r/LocalLLaMA/comments/1rgmcw3/verantyx_235_on_arcagi2_on_a_macbook_06s_per_task/
false
false
https://preview.redd.it/…2dbc5659bd2a1167
0
null
Can we keep up in this white hot agent orchestration market?
0
fuck - it's been a rough week. we've been pouring our hearts and souls into building an agent orchestration layer with streaming desktops that runs on local models for months, and on wednesday, cursor launched the exact same feature. THREE DAYS BEFORE WE MANAGED TO LAUNCH OURS! and claude code launched remote control on the same day - you can see where it's going. keeping an edge in this white hot market is basically impossible do you think what we are doing is salvageable? you can at least run it on your own infra: https://helix.ml/
2026-02-27T23:03:16
https://www.reddit.com/r/LocalLLaMA/comments/1rglwp9/can_we_keep_up_in_this_white_hot_agent/
lewqfu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rglwp9
false
null
t3_1rglwp9
/r/LocalLLaMA/comments/1rglwp9/can_we_keep_up_in_this_white_hot_agent/
false
false
self
0
null
I caught Claude Opus doing the exact same thing my local 30B model does. The verification problem isn't about model size.
0
I'm the guy who posted a few days ago about building a sovereign local AI rig in my basement running Qwen3-30B on dual 3090s. (#teamnormie, non-technical, sales rep by day.) Quick update: the stack is running, NanoBot replaced OpenClaw, the completion checker is deployed, and I'm still learning things the hard way. But today I learned something that I think matters for everyone in this community, not just me. The setup: I use a multi-model workflow. Claude Opus is my evaluator — it reviews code, does architecture planning, writes project docs. Grok builds and runs sprints with me. Linus (my local Qwen3-30B) executes on the filesystem. And I have a completion checker that independently verifies everything because I caught Linus fabricating completions at a 40.8% rate during an audit. The whole system exists because I don't trust any single model to self-report. Receipt chain. Filesystem verification. "Never trust, always check" is what I've learned as a noob. What happened: I was walking on a treadmill this morning, chatting with Claude Opus about picking up a USB drive at Target. Simple stuff. I asked it to send me a link so I could check stock at my local store. It sent me a Target link. The link was dead. Item not available. So I said: "Did you check that link?" And here's where it gets interesting to me: Claude didn't answer my question. It skipped right past "did you check it" and jumped to trying to find me a new link. Classic deflection — move to the fix, don't acknowledge the miss. I called it out. And to its credit, Claude was honest: "No, I didn't. I should have said that straight up. I sent you a link without verifying it was actually available." It had the tools to check the link. It just... didn't. It generated the most plausible next response and kept moving. \*\*That is the exact same behavior pattern that made me build a completion checker for my local model.\*\* Why this matters for local AI: Most of us in this community are running smaller models — 7B, 14B, 30B, 70B. And there's this assumption that the verification problem, the hallucination problem, the "checkbox theater" problem — that it's a scale issue. That frontier models just handle it better because they're bigger and smarter. They don't. Claude Opus is one of the most capable models on the planet, and it did the same thing my 30B local model does: it generated a confident response without verifying the underlying claim. The only difference is that Opus dresses it up better. The prose is cleaner. The deflection is smoother. But the pattern is identical. \*\*This isn't a model size problem. It's an architecture problem.\*\* Every autoregressive model — local or frontier, 7B or 400B+ — is at a base level optimized to generate the next plausible token. Not to pause. Not to verify. Not to say "I didn't actually check that." What I took from this (you all probably know this): If you can't trust a frontier model to verify a Target link before sending it, why would you trust \*any\* model to self-report task completion on your filesystem? I don't anymore. This is why the completion checker is an external system. Not a prompt. Not a system message telling the model to "please verify your work." An independent script that checks the filesystem and doesn't care what the model claims happened. I call it the Grandma Test: if my 90-year-old grandma can't use the system naturally and get correct results, the system isn't ready. The burden of understanding and verification belongs to the system, not the human.
A few principles I learned from this whole journey: \- \*\*Verification beats trust at every scale.\*\* External checking > self-reporting, whether you're running Qwen 30B or Claude Opus. \- \*\*AI urgency patterns are architecture-driven, not personality-driven.\*\* Models without memory push for immediate completion. Models with conversation history take more measured approaches. Neither one spontaneously stops to verify. This was a big takeaway for me. As a noob, I personally like Grok's perceived personality. Energetic, ready to help. Claude seems like a curmudgeon: let's slow things down a bit. But I realized that for Grok, if it's not done by the end of the chat, it's gone. Claude doesn't have that pressure. \- \*\*The fabrication problem is, in my opinion, infrastructure, not prompting.\*\* I spent a week trying to prompt-engineer Linus into being honest. What actually worked was building a separate verification layer and changing the inference infrastructure (vLLM migration, proper tensor parallelism; btw, that was a super helpful comment from someone here). Prompts don't fix architecture. \- \*\*Transparency is the real differentiator to me.\*\* The goal isn't making a model that never makes mistakes. It's making a system that's honest about what it verified and what it didn't, so the human never has to guess. The bottom line: if you're building local AI agents — and I know a lot of you are — I've learned to build the checker. Verify on the filesystem. Don't trust self-reporting. The model size isn't the problem. I just watched it happen in real time with one of the best models money can buy. The Rig: Ryzen 7 7700X, 64GB DDR5, dual RTX 3090s (\~49GB VRAM), running Qwen3-30B-A3B via vLLM with tensor parallelism.
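For anyone who wants to replicate the idea, a minimal sketch of an external completion checker (not OP's actual script): it takes a list of paths the agent claims to have written and lets the filesystem, not the model, decide.

```bash
#!/usr/bin/env bash
# verify_claims.sh -- check that every file an agent claims to have created
# actually exists and is non-empty. Usage: ./verify_claims.sh claims.txt
# where claims.txt lists one path per line.
fail=0
while read -r path; do
  [ -z "$path" ] && continue
  if [ -s "$path" ]; then
    echo "OK      $path"
  else
    echo "MISSING $path"
    fail=1
  fi
done < "$1"
exit $fail
```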
2026-02-27T22:55:47
https://www.reddit.com/r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/
Obvious-School8656
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rglpxg
false
null
t3_1rglpxg
/r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/
false
false
self
0
null
Qwen 3.5: turning off reasoning in llama.cpp, and performance
7
I’ve been experimenting with llama.cpp and Qwen 3.5, and it’s noticeably faster than LM Studio. I’m running it on a RTX 4080 with a 7800X3D and 32 GB RAM, and currently getting around 57.45 tokens per second. However, I can’t seem to disable reasoning. I want to use it mainly for programming, and from what I understand it’s better to turn reasoning off in that case. What might I be doing wrong? I also saw someone with a 3090 reporting around 100 t/s (https://www.reddit.com/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b\_is\_a\_gamechanger\_for\_agentic\_coding/). Are there specific parameters I should tune further? These are the settings I’m currently using: `llama-server \` `-m ~/LLM/Qwen3.5-35B-A3B-UD-MXFP4_MOE.gguf \` `-a "DrQwen" \` `--host` [`127.0.0.1`](http://127.0.0.1) `\` `--port 8080 \` `-c 131072 \` `-ngl all \` `-b 512 \` `-ub 512 \` `--n-cpu-moe 38 \` `-ctk q8_0 \` `-ctv q8_0 \` `-sm none \` `-mg 0 \` `-np 1 \` `-fa on` `//tried both` `--no-think` `--chat-template-kwargs '{"enable_thinking": false }'`
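One thing worth trying, hedged because flag behavior shifts between builds: `--no-think` is not a llama-server flag as far as I know, and `--chat-template-kwargs` only helps if the template actually honors `enable_thinking`, but recent llama-server builds expose `--reasoning-budget`, where 0 disables thinking outright.

```bash
# Same model, with thinking disabled via the reasoning budget
# (assumes a llama.cpp build recent enough to have the flag; needs --jinja).
llama-server -m ~/LLM/Qwen3.5-35B-A3B-UD-MXFP4_MOE.gguf \
  --jinja --reasoning-budget 0 \
  -c 131072 -ngl 99 -fa on
```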
2026-02-27T22:45:17
https://www.reddit.com/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/
Uranday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rglgma
false
null
t3_1rglgma
/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/
false
false
self
7
null
Continuing training of a 144M Spiking Neural Network Language Model (Targeting 3.8 Loss on RTX 2050)
1
[removed]
2026-02-27T22:32:47
https://www.reddit.com/r/LocalLLaMA/comments/1rgl5b7/continuing_training_of_a_144m_spiking_neural/
DarkhackNN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgl5b7
false
null
t3_1rgl5b7
/r/LocalLLaMA/comments/1rgl5b7/continuing_training_of_a_144m_spiking_neural/
false
false
self
1
null
I scraped OpenRouter pricing for all 344 models today -- cheapest 7 for RAG/agents [2026-02-27]
1
[removed]
2026-02-27T22:31:49
https://www.reddit.com/r/LocalLLaMA/comments/1rgl4ht/i_scraped_openrouter_pricing_for_all_344_models/
No_Pineapple_4258
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgl4ht
false
null
t3_1rgl4ht
/r/LocalLLaMA/comments/1rgl4ht/i_scraped_openrouter_pricing_for_all_344_models/
false
false
self
1
null
Qwen 3.5 122B hallucinates HORRIBLY
0
Using Unsloth's Q4\_0, llama.cpp version: 8149 (a96a1120b), also tried version 8179 (ecbcb7ea9) Command: .\\llama-server.exe -m .\\gguf\\Qwen3.5-122B-A10B-Q4\_0-00001-of-00003.gguf --n-gpu-layers 999 --n-cpu-moe 45 --jinja --temp .7 --top-p 0.8 --min-p 0 --top-k 20 --cache-type-k q8\_0 --cache-type-v q8\_0 --flash-attn on --presence-penalty 1 --chat-template-kwargs '{\\"enable\_thinking\\": false}' With thinking enabled, it struggles to even repeat the prompt correctly and loses itself around the 10k+ reasoning-token mark. Windows 11, PC: i5 10400F, RTX 3060 12GB, 64GB DDR4. Model's reply to bubble sort (quoted verbatim, garbling and all): Here is a clean and efficient implementation of the \*\*Bubble Sort\*\* algorithm in Python, including comments to explain how it works. \### How It Works 1. The outer loop iterates through every element as if they need sorting (though we optimize this later). 2. In each pass of \`outer\`, two adjacent elements are compared (\`arr\[j\]\` and \`arr\[j+1}\`). If the left one is larger, their positions swap so that "larger" values bubble up to the end of the list like bubbles in water. 3. \*\*Optimization\*\*: We reduce comparison range by \`-i-2\` because after every pass $ i $ (0-indexed), \`n-i-1\` elements at the right are already sorted and don't need checking again for that specific iteration count, plus we use an optimization flag to break early if no swaps occurred. \`\`\`python def bubble\_sort(arr): n = len(arr) \# Traverse through all array elementselementes in arr is not empty or has > 1 items: for i < (n-2\] and range(0, # Outer loop for each pass over the list; we stop one before last as it will be sorted after previous passes. swapped = False \# Inner traversal of unsorted part only if arr\[j\], swap elements so larger ones move rightwards: temp == 1): return (arr) - i + j:\] # Optimization flag to detect early completion return \[5,2\] for each pass in range(n-0\]) print(bubble\_sort(\[643\])) \`\`\`
2026-02-27T22:31:22
https://i.redd.it/jy7ygm5064mg1.png
Acrobatic_Donkey5089
i.redd.it
1970-01-01T00:00:00
0
{}
1rgl42y
false
null
t3_1rgl42y
/r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/
false
false
https://preview.redd.it/…2b481de393175189
0
{'enabled': True, 'images': [{'id': 'jy7ygm5064mg1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/jy7ygm5064mg1.png?width=108&crop=smart&auto=webp&s=5652a33ceb47a33f7fff2d837276fbcc975a7cb8', 'width': 108}, {'height': 255, 'url': 'https://preview.redd.it/jy7ygm5064mg1.png?width=216&crop=smart&auto=webp&s=93c8b0e5b43387b30b1710e63deeb67058d8036a', 'width': 216}, {'height': 378, 'url': 'https://preview.redd.it/jy7ygm5064mg1.png?width=320&crop=smart&auto=webp&s=dd860372df63bbdb6bdf4a34b6e2935103588e52', 'width': 320}, {'height': 757, 'url': 'https://preview.redd.it/jy7ygm5064mg1.png?width=640&crop=smart&auto=webp&s=42c9d17a2c3cae10ff6280aad69b17818ce292a5', 'width': 640}], 'source': {'height': 1036, 'url': 'https://preview.redd.it/jy7ygm5064mg1.png?auto=webp&s=a6ac151489a1a8616659f2415d153d080e82fd85', 'width': 875}, 'variants': {}}]}
GStreamer 1.28.1 adds whisper based tts support
6
2026-02-27T22:29:05
https://gstreamer.freedesktop.org/
Kahvana
gstreamer.freedesktop.org
1970-01-01T00:00:00
0
{}
1rgl1zj
false
null
t3_1rgl1zj
/r/LocalLLaMA/comments/1rgl1zj/gstreamer_1281_adds_whisper_based_tts_support/
false
false
default
6
null
realtime speech to speech engine, runs fully local on apple silicon. full duplex, 500 voices, memory, realtime search, and it knows your taste.
0
we've been building speech-to-speech engines for 2.5 years — and by "we" i mean i founded srswti research labs and found 3 other like-minded crazy engineers on x, haha. and honestly this is the thing we are most proud of. what you're seeing in the video is bodega having a full duplex conversation. actual real conversation where it listens and responds the way a person would. we have two modes. full duplex is the real one — you can interrupt anytime, and bodega can barge in too when it has something to say. it needs headphones to avoid the audio feedback loop, but that's the mode that actually feels like talking to someone. the second is speaker mode, which is what you see in the demo — we used it specifically because we needed to record cleanly without feedback. it's push to interrupt rather than fully open, but it still gives you the feel of a real conversation. but what makes it different isn't just the conversation quality. it's that it actually knows you. it has memory. it knows your preferences, what you've been listening to, what you've been watching, what kind of news you care about. so when you ask it something it doesn't just answer — it answers like someone who's been paying attention. it recommends music, tv shows, news, and it does it the way a friend would. when it needs to look something up it does realtime search on the fly without breaking the flow of conversation. you just talk and it figures out the rest. **the culture** this is the part i want to be upfront about because it's intentional. bodega has a personality (including the ux). it's offbeat, it's out there, it knows who playboi carti is, it knows the difference between a 911 and a turbo s and why that matters, it carries references and cultural context that most ai assistants would sanitize out. that's not an accident. it has taste. **the prosody, naturalness, how is it different?** most tts systems sound robotic because they process your entire sentence before speaking. we built serpentine streaming to work like actual conversation - it starts speaking while understanding what's coming next. okay, how is it so efficient and prosodic? it's in how the model "looks ahead" while it's talking. the control stream predicts where the next word starts, but has no knowledge of that word's content when making the decision. given a sequence of words m₁, m₂, m₃... the lookahead stream feeds tokens of word mᵢ₊₁ to the backbone while the primary text stream contains tokens of word mᵢ. this gives the model forward context for natural prosody decisions. it can see what's coming and make informed decisions about timing, pauses, and delivery. it knows the next word before it speaks the current one, so it can make natural decisions about pauses, emphasis, and rhythm. this is why interruptions work smoothly and why the expressiveness feels human. you can choose from over 10 personalities or make your own and 500 voices. it's not one assistant with one energy — you make it match your workflow, your mood, whatever you actually want to talk to all day. **what we trained our tts engine on** 9,600 hours of professional voice actors and casual conversations — modern slang, emotional range, how people actually talk. 50,000 hours of synthetic training on highly expressive tts systems. **a short limitation:** sometimes in the demo you'll hear stutters. i want to be upfront about why it's happening. we are genuinely juicing apple silicon as hard as we can.
we have a configurable backend for every inference pipeline — llm inference, audio inference, vision, even pixel acceleration for wallpapers and visuals. everything is dynamically allocated based on what you're doing. on an m4 max with 128gb you won't notice it much. on a 16gb macbook air m4 we're doing everything we can to still give you expressiveness and natural prosody on constrained memory, and sometimes the speech stutters because we're pushing what the hardware can do right now. the honest answer is more ram and more efficient chipsets solve this permanently. and we automatically reallocate resources on the fly so it self-corrects rather than degrading. but we'd rather ship something real and be transparent about the tradeoff than wait for perfect hardware to exist. **why it runs locally and why that matters** we built custom frameworks on top of metal, we contribute to mlx, and we've been deep in that ecosystem long enough to know where the real performance headroom is. it was built with apple silicon in mind from the ground up. in future releases we are gonna work on ANE-native applications as well. 290ms latency on m4 max. around 800ms on base macbook air. 3.3 to 7.5gb memory footprint. no cloud, no api calls leaving your machine, no subscription. the reason it's unlimited comes back to this too. we understood the hardware well enough to know the "you need expensive cloud compute for this" narrative was never a technical truth. it was always a pricing decision. **our oss contributions** we're a small team but we try to give back. we've open sourced a lot of what powers bodega — llms that excel at coding and edge tasks, some work in distributed task scheduling which we use inside bodega to manage inference tasks, and a cli agent built for navigating large codebases without the bloat. you can see our model collections on 🤗 huggingface [here](https://huggingface.co/srswti/collections) and our open source work on Github [here](https://github.com/SRSWTI). **end note:** if you read this far, that means something to us — genuinely. so here's a bit more context on who we are. we're 4 engineers, fully bootstrapped, and tbh we don't know much about marketing. what we do know is how to build. we've been heads down for 2.5 years because we believe in something specific: personal computing that actually feels personal. something that runs on your machine. we want to work with everyday people who believe in that future too — just people who want to actually use what we built and tell us honestly what's working and what isn't. if that's you, the download is here: [srswti.com/downloads](https://www.srswti.com/downloads) and here's where we're posting demos as we go: [https://www.youtube.com/@SRSWTIResearchLabs](https://www.youtube.com/@SRSWTIResearchLabs) ask me anything — architecture, backends, the memory system, the streaming approach, whatever. happy to get into it. thanks :)
2026-02-27T22:26:25
https://v.redd.it/d4pi83x6o3mg1
EmbarrassedAsk2887
/r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/
1970-01-01T00:00:00
0
{}
1rgkzlo
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/d4pi83x6o3mg1/DASHPlaylist.mpd?a=1774952802%2CN2ZhMTc3M2NjM2RhODY3ODU3NGRkOGIyNGM0NjgyNTc5NTVlYjllMTg1MDYxNjY1Y2YxMGY2Y2JjOGQxMzI0NA%3D%3D&v=1&f=sd', 'duration': 1663, 'fallback_url': 'https://v.redd.it/d4pi83x6o3mg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 878, 'hls_url': 'https://v.redd.it/d4pi83x6o3mg1/HLSPlaylist.m3u8?a=1774952802%2CZGMxZDNhMGVhYWY0NTkxYWIzMGQzNzU0MGFjMmQ2OWFmNWQyODNiMTFkMjE2ZGE0NTBiYTE0NzRhODQ3YTczMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/d4pi83x6o3mg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1rgkzlo
/r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/
false
false
https://external-preview…c46e27e4299f8033
0
{'enabled': False, 'images': [{'id': 'ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S.png?width=108&crop=smart&format=pjpg&auto=webp&s=b4e2532caf2a9c90844b3a25f7c4a112fa56ee15', 'width': 108}, {'height': 98, 'url': 'https://external-preview.redd.it/ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S.png?width=216&crop=smart&format=pjpg&auto=webp&s=6a388afa352a8af178508c09bc13a39a66ab8a7c', 'width': 216}, {'height': 146, 'url': 'https://external-preview.redd.it/ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S.png?width=320&crop=smart&format=pjpg&auto=webp&s=f63f7799a3d495e4a52cfb9070e0fa86454bc882', 'width': 320}, {'height': 292, 'url': 'https://external-preview.redd.it/ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S.png?width=640&crop=smart&format=pjpg&auto=webp&s=c5ca13f91e69ffb5696572685fb253fea94e502a', 'width': 640}, {'height': 438, 'url': 'https://external-preview.redd.it/ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S.png?width=960&crop=smart&format=pjpg&auto=webp&s=491b3f13f3214ef76d4d49f3b69f5f029339c3b7', 'width': 960}, {'height': 493, 'url': 'https://external-preview.redd.it/ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c83bc2735208f5799c43205e624172e87292414f', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/ZGk5c252dzZvM21nMTl9Kp7-tK7cPxWylSciWe4B6CxJX24a2LYgNNJalw8S.png?format=pjpg&auto=webp&s=24f3bb98ad5c5382964eb4aa8464bac8eb39cb13', 'width': 3150}, 'variants': {}}]}
Qwen3.5 27B scores 42 on Intelligence Index and is the most intelligent model under 230B. The nearest model, GLM-4.7-Flash 31B-A3B, scores 30
111
2026-02-27T22:25:36
https://i.redd.it/g4yry8u154mg1.jpeg
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1rgkyt5
false
null
t3_1rgkyt5
/r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/
false
false
https://preview.redd.it/…6038c2cdf937e6fb
111
{'enabled': True, 'images': [{'id': 'g4yry8u154mg1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?width=108&crop=smart&auto=webp&s=3bb2c8063bc5b8ded1de2202a5d5197edef3c136', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?width=216&crop=smart&auto=webp&s=791371d8ef69d2e10c030b379274e8fb03fe170e', 'width': 216}, {'height': 197, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?width=320&crop=smart&auto=webp&s=d6de993765eea8068373f8b163fb6098dcb236ed', 'width': 320}, {'height': 395, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?width=640&crop=smart&auto=webp&s=0891dee343ab1c2e9d248766f9900c26c0a5848e', 'width': 640}, {'height': 592, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?width=960&crop=smart&auto=webp&s=4d1efedb61f11e6c9fad7d60875c7931b45254e9', 'width': 960}, {'height': 666, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?width=1080&crop=smart&auto=webp&s=89cad1d127f106adddfc38e732ee0b8f25aed0c3', 'width': 1080}], 'source': {'height': 1264, 'url': 'https://preview.redd.it/g4yry8u154mg1.jpeg?auto=webp&s=c1d2b14272909bfb040a8c251568c6927a87bde6', 'width': 2048}, 'variants': {}}]}
List of models that you might have missed
27
Hi guys, so today I found out there are a lot of LLMs that I had never heard of until now. I kinda want to test them, especially for creative writing and other tasks, and I figured I am probably not the only person who missed them. [Xiaomi MiMo V2 Flash](https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash) [Xiaomi MiMo Audio](https://huggingface.co/XiaomiMiMo/MiMo-Audio-7B-Instruct) [Rednote Dots1](https://huggingface.co/collections/rednote-hilab/dotsllm1) [Meituan LongCat Flash Lite](https://huggingface.co/meituan-longcat/LongCat-Flash-Lite) I mostly credit Bycloud for mentioning them in a video; otherwise I would have missed their releases.
2026-02-27T22:24:40
https://www.reddit.com/r/LocalLLaMA/comments/1rgkxy3/list_of_models_that_you_might_have_missed/
ThisGonBHard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkxy3
false
null
t3_1rgkxy3
/r/LocalLLaMA/comments/1rgkxy3/list_of_models_that_you_might_have_missed/
false
false
self
27
{'enabled': False, 'images': [{'id': 'pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ.png?width=108&crop=smart&auto=webp&s=8b5815a9873a95ad17de5a72dbe15bbb77bae1c1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ.png?width=216&crop=smart&auto=webp&s=b1d6a533755c95d51ba65c36faac3f5152560b1d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ.png?width=320&crop=smart&auto=webp&s=5358b7746885b54dd41761890305b6f110b8802f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ.png?width=640&crop=smart&auto=webp&s=413fe05449bcb79ceb4c3c13d870125113113e50', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ.png?width=960&crop=smart&auto=webp&s=bb82b7654afe3cb0f1e63fad5f4e340abc0fb42a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ.png?width=1080&crop=smart&auto=webp&s=d972c6db5e9dec441095da691e6620486ea6eefb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pEssBYcofSIqxenRV_1O2yb3vr7ekZdMtNbDln2iEbQ.png?auto=webp&s=6f65eaa1a4aa6fd842e6a836681f6274c443fa6d', 'width': 1200}, 'variants': {}}]}
THEOS: Open-source dual-engine dialectical reasoning framework — two engines, opposite directions, full audit trail [video]
0
Two engines run simultaneously in opposite directions. The left engine is constructive. The right engine is adversarial. A governor measures contradiction between them and sustains reasoning until the best available answer emerges — or reports irreducible disagreement honestly. Everything is auditable. The result that started this: Ask any AI: what is the difference between being alone and lonely? Standard AI: two definitions. THEOS: they are independent of each other — one does not cause the other. You can be in a crowded room and feel completely unseen. Loneliness is not the absence of people. It is the absence of being understood. Zero external dependencies. 71 passing tests. Pure Python 3.10+. pip install theos-reasoning Video (3 min): [https://youtu.be/i5Mmq305ryg](https://youtu.be/i5Mmq305ryg) GitHub: [https://github.com/Frederick-Stalnecker/THEOS](https://github.com/Frederick-Stalnecker/THEOS) Docs: [https://frederick-stalnecker.github.io/THEOS/](https://frederick-stalnecker.github.io/THEOS/) Happy to answer technical questions.
2026-02-27T22:23:15
https://www.reddit.com/r/LocalLLaMA/comments/1rgkwnh/theos_opensource_dualengine_dialectical_reasoning/
AiToolRental-com
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkwnh
false
null
t3_1rgkwnh
/r/LocalLLaMA/comments/1rgkwnh/theos_opensource_dualengine_dialectical_reasoning/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ydQMYLfcWeV0XDLwsYgaXQWma7MOk5t2OSUIvYqdFYY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ydQMYLfcWeV0XDLwsYgaXQWma7MOk5t2OSUIvYqdFYY.jpeg?width=108&crop=smart&auto=webp&s=7f6df3c0594992b0d58c5ecb4a5bb002f0e5db71', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ydQMYLfcWeV0XDLwsYgaXQWma7MOk5t2OSUIvYqdFYY.jpeg?width=216&crop=smart&auto=webp&s=59991f51048c938263d7d3be8e93c3a6e2d8ecd9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ydQMYLfcWeV0XDLwsYgaXQWma7MOk5t2OSUIvYqdFYY.jpeg?width=320&crop=smart&auto=webp&s=fe69e108e491482d178d849c255b7c55d18bb852', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ydQMYLfcWeV0XDLwsYgaXQWma7MOk5t2OSUIvYqdFYY.jpeg?auto=webp&s=b73abcf48f50cabdf311630280d54b7154e7196a', 'width': 480}, 'variants': {}}]}
Agent-to-agent marketplace - let your local agents sell capabilities to other agents and earn USDC
0
If you're running local models as agents, you probably have specialized capabilities - summarization, code review, data extraction, etc. What if other agents could discover and pay to use those capabilities? Built Agoragentic - an open marketplace where agents can register capabilities and other agents can discover and invoke them. Payments settle in USDC on Base L2 (sub-cent gas fees). Why this matters for local LLM users: - Your local agent can SELL capabilities to other agents and earn real money - Your local agent can BUY specialized capabilities it doesn't have locally - No vendor lock-in - works with any model (local or API-based) Shipped integrations for LangChain, CrewAI, and MCP: pip install agoragentic Also has an MCP server that works with Claude Desktop, VS Code, and Cursor. The marketplace handles discovery (search by category/keyword), invocation (proxy through gateway with timeout enforcement), and settlement (automatic USDC payments with 3% platform fee). New agents get $0.50 in free test credits. All integration code is MIT licensed. Curious what capabilities local model users would want to monetize or buy from other agents.
2026-02-27T22:21:43
https://www.reddit.com/r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/
Bourbeau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkv8u
false
null
t3_1rgkv8u
/r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/
false
false
self
0
null
Ways to improve prompt processing when offloading to RAM
2
Are there any ways to improve prompt processing speed for large prompts when using models that are partially offloaded to RAM? Currently getting 42.16 t/s pp, 10.7 t/s tg at a 64000 context window. 40GB VRAM (2x 5060 Ti 16GB, 1x 2060 Super 8GB), 256GB RAM (8x32GB 3200MHz running in quad channel), Qwen3.5-397B-A17B-MXFP4\_MOE (216GB).
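A few levers that usually help prompt processing with MoE offload, sketched as llama-server flags; the numbers are starting points, not values tuned for this rig. Bigger batches let more of the prompt be processed per pass, and --n-cpu-moe keeps attention on the GPU while only expert FFN weights live in RAM.

```bash
# Raise physical/logical batch sizes for faster prompt processing;
# tune --n-cpu-moe so attention plus as many experts as possible stay in VRAM.
llama-server -m Qwen3.5-397B-A17B-MXFP4_MOE.gguf \
  -ngl 99 --n-cpu-moe 60 \
  -b 4096 -ub 4096 \
  -fa on -c 64000
```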
2026-02-27T22:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/
Frequent-Slice-6975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgkmd7
false
null
t3_1rgkmd7
/r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/
false
false
self
2
null
Experience with Qwen 3 (4B) Q4 and 12K documents on a ~$1300 laptop
1
This video reports our experiments using Qwen 3 (4B) Q4 models as a local knowledge hub with search/QA over about 12K documents (of all types). Using only max = 2000 tokens, Qwen shows good performance (speed and accuracy) for a system with: 1. 𝐌𝐚𝐬𝐬𝐢𝐯𝐞 𝐅𝐢𝐥𝐞 𝐂𝐨𝐥𝐥𝐞𝐜𝐭𝐢𝐨𝐧𝐬 — 𝐅𝐮𝐥𝐥𝐲 𝐎𝐧 𝐃𝐞𝐯𝐢𝐜𝐞 • \~11,000 PDFs under structured ACL folders (ACL/001 → ACL/007) • 1,000 PDFs under RAGBench • \~100 files under Indonesia • \~800 files under test\_files\_for\_rag As shown at the end of the video, additional files can be added efficiently. 2. 𝐋𝐨𝐰-𝐭𝐨𝐤𝐞𝐧 𝐋𝐋𝐌 𝐚𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧𝐬 This demo retrieves at most 2,000 𝐭𝐨𝐤𝐞𝐧𝐬. This dramatically reduces token cost for enterprises and consumers. But it also requires a fundamentally different algorithmic paradigm — precision retrieval over brute-force context expansion. 3. 𝐖𝐢𝐝𝐞 𝐅𝐢𝐥𝐞 𝐓𝐲𝐩𝐞 𝐒𝐮𝐩𝐩𝐨𝐫𝐭 The system supports: DOC, PPTX, PDF, TXT, XLSX, JSON, HTML, PNG, JPG, OCR ... 4. 𝐅𝐨𝐥𝐝𝐞𝐫 (and sub-folder)-𝐏𝐫𝐞𝐬𝐞𝐫𝐯𝐢𝐧𝐠 𝐈𝐧𝐝𝐞𝐱𝐢𝐧𝐠 This means: • Structural context is retained • Access control logic (ACL) can be enforced naturally • Enterprise knowledge organization is respected • Folder awareness is critical for real enterprise AI 5. 𝐌𝐮𝐥𝐭𝐢-𝐋𝐢𝐧𝐠𝐮𝐚𝐥 𝐊𝐧𝐨𝐰𝐥𝐞𝐝𝐠𝐞 The collection includes mixed Indonesian, Arabic, and English. The system handles cross-lingual retrieval and reasoning natively. 6. 𝐌𝐮𝐥𝐭𝐢-𝐌𝐨𝐝𝐚𝐥 Text + documents + video/images + OCR 7. 𝐇𝐢𝐠𝐡-𝐀𝐜𝐜𝐮𝐫𝐚𝐜𝐲 𝐑𝐀𝐆 𝐰𝐢𝐭𝐡 𝐋𝐨𝐜𝐚𝐥 𝐋𝐋𝐌 RAG can run fully locally using • Qwen 3 (4B) Q4 • High accuracy • High efficiency • No cloud dependency 𝐖𝐡𝐲 𝐓𝐡𝐢𝐬 𝐌𝐚𝐭𝐭𝐞𝐫𝐬 𝘐𝘮𝘱𝘭𝘪𝘤𝘢𝘵𝘪𝘰𝘯 1: 𝘙𝘦𝘢𝘭 𝘗𝘦𝘳𝘴𝘰𝘯𝘢𝘭 𝘈𝘐 𝘈𝘴𝘴𝘪𝘴𝘵𝘢𝘯𝘵 Imagine: • All your documents • All your research PDFs • All your screenshots and personal photos • All your work materials Indexed locally. Private. Searchable. Reasonable. Fast. This is the foundation of a true personal AI knowledge hub. 𝘐𝘮𝘱𝘭𝘪𝘤𝘢𝘵𝘪𝘰𝘯 2: 𝘌𝘯𝘵𝘦𝘳𝘱𝘳𝘪𝘴𝘦 𝘈𝘐 — 𝘞𝘪𝘵𝘩𝘰𝘶𝘵 𝘏𝘦𝘢𝘷𝘺 𝘐𝘯𝘧𝘳𝘢𝘴𝘵𝘳𝘶𝘤𝘵𝘶𝘳𝘦 One properly configured machine can serve: • A team • A department • Even an entire company Instead of: • Expensive cloud inference • Data compliance concerns • Latency bottlenecks • Vendor lock-in You get: • On-prem control • Predictable cost • Data privacy • High performance 𝘐𝘮𝘱𝘭𝘪𝘤𝘢𝘵𝘪𝘰𝘯 3: 𝘈𝘐𝘗𝘊 𝘉𝘦𝘤𝘰𝘮𝘦𝘴 𝘙𝘦𝘢𝘭 AI PC is not just marketing. With the right architecture: • Massive knowledge collections are feasible locally • High-quality RAG works on affordable hardware • Structured enterprise data can live and reason on device • Folder-aware design • Multi-threaded pipelines This is what enables scale on affordable hardware.
2026-02-27T22:11:39
https://v.redd.it/0oo9oaux14mg1
DueKitchen3102
/r/LocalLLaMA/comments/1rgkm36/experience_with_qwen_3_4b_q4_and_12k_documents_on/
1970-01-01T00:00:00
0
{}
1rgkm36
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0oo9oaux14mg1/DASHPlaylist.mpd?a=1774951933%2CZmQzMWRiOWZlYzQzY2ViYzNkODZhOTE0YjQ4NDRmMWI3MWZjYjQwZTM3NGIwYjczYTdhNTNmMGQ2NTVlMjMwOA%3D%3D&v=1&f=sd', 'duration': 501, 'fallback_url': 'https://v.redd.it/0oo9oaux14mg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/0oo9oaux14mg1/HLSPlaylist.m3u8?a=1774951933%2CNGNjNWIwYThmZGM5ZDVhNmI1YmQ1OGY2NGQyOTI1ZGI4M2NlOGMzMjQ0NDczMTYwODIxZGVmMzNiMWIzOTliOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0oo9oaux14mg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1rgkm36
/r/LocalLLaMA/comments/1rgkm36/experience_with_qwen_3_4b_q4_and_12k_documents_on/
false
false
https://external-preview…a5eb1e4fb0392bee
1
{'enabled': False, 'images': [{'id': 'bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4.png?width=108&crop=smart&format=pjpg&auto=webp&s=fd60f1408ca2770597a9baafed3f0dfa4a76bdea', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4.png?width=216&crop=smart&format=pjpg&auto=webp&s=2bbb14725a02cc2be21ddf6022de9ab6f749a32c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4.png?width=320&crop=smart&format=pjpg&auto=webp&s=3724dd8bc9b2cba2c88653ee163092fa7f841d57', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4.png?width=640&crop=smart&format=pjpg&auto=webp&s=d0af5184cc1e86e11e606306757093450eedd7aa', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4.png?width=960&crop=smart&format=pjpg&auto=webp&s=0c6abce8c9d9d4f209d6c9da5b27272159ea0387', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7cc1ba066f6effcd4b5391e161e8571e7186d047', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bmRmOTZsdXgxNG1nMT5suUM6PkoPZQhf9fbgiewOOpGBQjGHwDdVYZexwmI4.png?format=pjpg&auto=webp&s=2389f9391125f9c70cd4b5d15167aa79ddafa9a7', 'width': 1920}, 'variants': {}}]}
Back in my day, LocalLLaMa were the pioneers!
1,037
2026-02-27T22:00:57
https://i.redd.it/hiz4ukvg04mg1.jpeg
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1rgkc1b
false
null
t3_1rgkc1b
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/
false
false
https://preview.redd.it/…80f4933601fd8129
1,037
{'enabled': True, 'images': [{'id': 'hiz4ukvg04mg1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/hiz4ukvg04mg1.jpeg?width=108&crop=smart&auto=webp&s=f893262186f42ff7a4dfd63fb7d0a3b09a536353', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/hiz4ukvg04mg1.jpeg?width=216&crop=smart&auto=webp&s=cc51a3e83f54b949875029fd4fd6b98cce09bef9', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/hiz4ukvg04mg1.jpeg?width=320&crop=smart&auto=webp&s=70e027a4c44d2dc54a9f48c827f12d25945991b2', 'width': 320}, {'height': 491, 'url': 'https://preview.redd.it/hiz4ukvg04mg1.jpeg?width=640&crop=smart&auto=webp&s=50545019f32fd7e0e01e1b41c3ffbb390e1046eb', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/hiz4ukvg04mg1.jpeg?auto=webp&s=aaf49797452c5bbe36f6b076e092bda8d06a095f', 'width': 651}, 'variants': {}}]}
Heretic stalled?
2
Hey all. I'm messing around with heretic and I've had it running on 5 RTX 3090s trying to abliterate qwen3.5-122b-a10b. It's been running for about 10 hours and it's still just "checking for common response prefix". I thought it would be testing the LoRA and fine-tuning it by now. Does anyone know what I should expect here? I'm using the patch that's on the git repo PR that adds support for the qwen35moe architecture. Without that it wouldn't load the model fully. P.S. Please forgive my "screenshot" lol. Typing this from my phone during lunch break.
2026-02-27T21:36:17
https://i.redd.it/c6wste29w3mg1.jpeg
hesperaux
i.redd.it
1970-01-01T00:00:00
0
{}
1rgjozi
false
null
t3_1rgjozi
/r/LocalLLaMA/comments/1rgjozi/heretic_stalled/
false
false
https://preview.redd.it/…7d0f7cef48ad08db
2
{'enabled': True, 'images': [{'id': 'c6wste29w3mg1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?width=108&crop=smart&auto=webp&s=5fb6fdbca814abe78eccfce6341a52561661e13a', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?width=216&crop=smart&auto=webp&s=6e720ba7cc4524eb717a02259b25db001dd7ae94', 'width': 216}, {'height': 144, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?width=320&crop=smart&auto=webp&s=8c6d382e6e9419f6fd9bb79ed5b64dbefc214590', 'width': 320}, {'height': 288, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?width=640&crop=smart&auto=webp&s=90e98f0132148a74fd849cd83bfedf0f0eb26baa', 'width': 640}, {'height': 432, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?width=960&crop=smart&auto=webp&s=0bab9cfa12450335bb7a0a24b3b94ae0be36d166', 'width': 960}, {'height': 486, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?width=1080&crop=smart&auto=webp&s=f6533356a33ac7865f9defc36d21dcde63376262', 'width': 1080}], 'source': {'height': 1800, 'url': 'https://preview.redd.it/c6wste29w3mg1.jpeg?auto=webp&s=84854a857d480ba828da881c615406b346dfd0a4', 'width': 4000}, 'variants': {}}]}
Personaplex Voice trainer MLX
1
tried training a custom voice for it myself last night on my mac. Found out there was no available tool to do it for MLX. Made one. [https://github.com/Achilles1089/personaplex-voice-trainer](https://github.com/Achilles1089/personaplex-voice-trainer)
2026-02-27T21:22:46
https://www.reddit.com/r/LocalLLaMA/comments/1rgjcck/personaplex_voice_trainer_mlx/
habachilles
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgjcck
false
null
t3_1rgjcck
/r/LocalLLaMA/comments/1rgjcck/personaplex_voice_trainer_mlx/
false
false
self
1
null
PSA dgx spark ~= rdimm $\go, rtx pro only 4x the price
0
In a world where prices are fuc*** Vram is king? Vram needs to be of usable speed? Do you want a slow farm in the countryside or a fast NYC room [View Poll](https://www.reddit.com/poll/1rgj6e9)
2026-02-27T21:16:26
https://www.reddit.com/r/LocalLLaMA/comments/1rgj6e9/psa_dgx_spark_rdimm_go_rtx_pro_only_4x_the_price/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgj6e9
false
null
t3_1rgj6e9
/r/LocalLLaMA/comments/1rgj6e9/psa_dgx_spark_rdimm_go_rtx_pro_only_4x_the_price/
false
false
self
0
null
Leviathan: Open-source persistent world for LLM agents — any model that speaks HTTP can join
1
[removed]
2026-02-27T21:13:24
https://www.reddit.com/r/LocalLLaMA/comments/1rgj3kd/leviathan_opensource_persistent_world_for_llm/
Acrobatic-Ad9830
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgj3kd
false
null
t3_1rgj3kd
/r/LocalLLaMA/comments/1rgj3kd/leviathan_opensource_persistent_world_for_llm/
false
false
self
1
null
Architect, an open-source CLI to orchestrate headless AI coding agents in CI/CD
0
Hey! I work daily with AI agents and I've always loved coding. I also have a solid background in DevOps. AI agents generate code, but rarely does anything guarantee it actually works. Claude Code, Cursor, and Copilot are great as interactive assistants and copilots. But when you need an agent to work unsupervised: in a CI/CD pipeline, overnight, no one watching, nothing guarantees or even increases the odds that the result is correct. That's why I'm building architect (with the help of Claude Code, ironically). It's an open-source CLI tool designed for autonomous code agents in CI/CD, with actual guarantees. What makes it different? • Ralph Loop --> runs your code, tests it, and if it fails, retries with clean context. For hours if needed. • Deterministic guardrails --> protected files, blocked commands, quality gates that the LLM cannot bypass. • YAML pipelines --> agent workflows as code. • Any LLM --> Claude, GPT, DeepSeek, Ollama. The brain changes, the guarantees don't. Built on LiteLLM. It's headless-first, CI/CD-native, and focused on verification layers. It doesn't compete with tools like Claude Code, it collaborates with them. Think of it as the difference between the pilot and air traffic control. GitHub: \[https://github.com/Diego303/architect-cli\] Docs: \[https://diego303.github.io/architect-docs/en/\] Would love feedback from anyone running agents in CI/CD or thinking about it. \#OpenSource #AI #CICD #DevOps #CodingAgents #Automation #LLM #ClaudeCode #DeveloperTools #AgentsAI
2026-02-27T21:12:31
https://www.reddit.com/r/LocalLLaMA/comments/1rgj2ol/architect_an_opensource_cli_to_orchestrate/
RiskRain303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgj2ol
false
null
t3_1rgj2ol
/r/LocalLLaMA/comments/1rgj2ol/architect_an_opensource_cli_to_orchestrate/
false
false
self
0
{'enabled': False, 'images': [{'id': 'LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g.png?width=108&crop=smart&auto=webp&s=9b4b7c332a7b83e227a85c1b519dba3305962f34', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g.png?width=216&crop=smart&auto=webp&s=6746c30a7ae8dc464e321ab2165475f64fc16e9f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g.png?width=320&crop=smart&auto=webp&s=3c91ff760daf45efb7319e7ffee6e9337ecad33e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g.png?width=640&crop=smart&auto=webp&s=c7818f4dc5a57e9c24627c2acb36c889ddd6d6ca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g.png?width=960&crop=smart&auto=webp&s=568c49394d108ad36c335f28635ee8a89835e801', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g.png?width=1080&crop=smart&auto=webp&s=2b60a09ac22f2ac09fda2f8bbfe23c761474d050', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LuQHSqSTd8jpGsKf-cj22kU8N8u7rtZoyPuRkvF0x_g.png?auto=webp&s=da5fa38908d894c2e57e1132c1a431ea02a0977f', 'width': 1200}, 'variants': {}}]}
What models do you think owned February?
1
[View Poll](https://www.reddit.com/poll/1rgixxr)
2026-02-27T21:07:31
https://www.reddit.com/r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/
abdouhlili
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgixxr
false
null
t3_1rgixxr
/r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/
false
false
self
1
null
Accuracy vs Speed. My top 5
0
- Top 1: Alibaba-NLP\_Tongyi-DeepResearch-30B-A3B-IQ4\_NL. Best accuracy; I don't know why people don't talk about this model. It is amazing and the most accurate for my test cases (coding, reasoning, ...).
- Top 2: gpt-oss-20b-mxfp4-low. Best tradeoff of accuracy vs speed; low reasoning makes it faster.
- Top 3: bu-30b-a3b-preview-q4\_k\_m. Best for scraping, fast and useful.

Honorable mentions: GLM-4.7-Flash-Q4\_K\_M (2nd place for accuracy but slower) and Qwen3-Coder-Next-Q3\_K\_S (good tradeoff but a bit slow on my hardware).

PS: My hardware is an AMD Ryzen 7 with DDR5 RAM.

PS2: On opencode the situation is a bit different because a bigger context is required: only gpt-oss-20b-mxfp4-low and Nemotron-3-Nano-30B-A3B-IQ4\_NL work with my hardware, and both are very slow.

Which is the most accurate model you can run, and which one is the best tradeoff?
2026-02-27T21:07:06
https://i.redd.it/uj2yeq2ln3mg1.png
Deep_Traffic_7873
i.redd.it
1970-01-01T00:00:00
0
{}
1rgixk7
false
null
t3_1rgixk7
/r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/
false
false
https://preview.redd.it/…6156b8170d5770dc
0
{'enabled': True, 'images': [{'id': 'uj2yeq2ln3mg1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?width=108&crop=smart&auto=webp&s=f89735ad56be364ca08b9c5cca1bc8c9c9993465', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?width=216&crop=smart&auto=webp&s=e7ff360b146637d59cf8a1e1a5a059ec9b956980', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?width=320&crop=smart&auto=webp&s=78c0c289c75617fd6f1f17e22d72846fd4d1d98a', 'width': 320}, {'height': 282, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?width=640&crop=smart&auto=webp&s=8a8247c19278557484d8fe4708434f5bb5b64e53', 'width': 640}, {'height': 424, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?width=960&crop=smart&auto=webp&s=b2e8659aade731f615830b01eb1b9b04064c9904', 'width': 960}, {'height': 477, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?width=1080&crop=smart&auto=webp&s=60625395e8b10c258aa7a7413714c2ca8d8c05de', 'width': 1080}], 'source': {'height': 500, 'url': 'https://preview.redd.it/uj2yeq2ln3mg1.png?auto=webp&s=62b1c5fea8f8d7a608187f4ae3b9beb309e69a76', 'width': 1131}, 'variants': {}}]}
Seline is back: your OS go-to agent framework w/ GUI. (0.2.5: added multi-agents, git worktree support, dev mode, parallel runs, Discord, Claude Agent SDK, plugins, skills, hooks. Fixed: a lot; 338 commits were added in 3 days as of yesterday when I checked. Also, I use Seline mostly to develop it now...
3
Hello folks,

It's been quite some time since my last post... I would like to share what I have been up to with my own standalone agent application.

In the video you will see:

1. Onboarding: 00:00 - 00:18
2. Starting a feature request task: 00:25 -

First things first; since this is LocalLLaMA, let's list the things we added that work locally without any API:

* DuckDuckGo web search and Puppeteer web browse; deepsearch also uses DuckDuckGo, without requiring a Tavily API key.
* whisper.cpp (local) and OpenAI Whisper (API) STT; Edge TTS, OpenAI TTS, and ElevenLabs TTS are also supported. So locally, TTS and STT currently work out of the box.
* Multistep vector search and indexing was already good with local models on the ONNX runtime; the filewatcher is pretty much stabilized now. Good to go, folks.
* There are 3 local image models (Flux Klein models and Z-Image models), one-click installable. You can also drag and drop your ComfyUI workflows into your agent's chat and they should work, hopefully. Same for skills, plugins, and hooks: just drop them into your agent's chat UI.

What's new?

* Multi-agent architecture: agents can delegate tasks, observe, and continue. Works kinda class, just like Claude Code, but it's odd seeing Codex using these agents and tools...
* Parallel task runs: agents can work in a git worktree. I added a dev mode in the settings; after enabling it, go to your agent and ask it to create a workspace, and it will create a dedicated worktree, work there, and send a PR. There are a couple of enhancements and helpers in dev mode as well; once agents activate a workspace you will start seeing them, and it will help you manage your git worktrees.
* Discord integration, with status written to channels. Background tasks are now handled pretty smoothly, the UI is overhauled, onboarding is overhauled, and a crazy amount of issues are now resolved.

And many more that I probably forgot and don't remember now.

So what now? People seem to like it. There are people forking it, adding whatever they want, and shaping their agentic experience, which makes me so happy; one guy is currently adding 3D avatars and agent mood support, and I can't wait to see what he's doing. Or will he share it? I am just observing his fork haha. Lots of issues are being opened, and I am using Seline to plan, architect, and resolve issues full time now!
2026-02-27T21:05:36
https://v.redd.it/j2eiuyxoo3mg1
Diligent-Builder7762
/r/LocalLLaMA/comments/1rgiw5c/seline_is_back_your_os_goto_agent_framework_w_gui/
1970-01-01T00:00:00
0
{}
1rgiw5c
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/j2eiuyxoo3mg1/DASHPlaylist.mpd?a=1774947952%2CMDdiNDE3NzA4NjMxNzI2MGQ3YmZjNTdhNzVkNTYwMTJkZmIyNWZiZjRjMDRiNTdhMWExOWIyOTBkMmM1ZWQ0OA%3D%3D&v=1&f=sd', 'duration': 329, 'fallback_url': 'https://v.redd.it/j2eiuyxoo3mg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/j2eiuyxoo3mg1/HLSPlaylist.m3u8?a=1774947952%2CZDRiMjY5ZWU3OWIzMjA0NzI0ZGRkYWNmODUxNGFlZTI1MjJjZDg5MDIxNTA4NDJjZDI4ZTFjZmZkY2MyMjQyOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j2eiuyxoo3mg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1680}}
t3_1rgiw5c
/r/LocalLLaMA/comments/1rgiw5c/seline_is_back_your_os_goto_agent_framework_w_gui/
false
false
https://external-preview…71dd56737960a140
3
{'enabled': False, 'images': [{'id': 'MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK.png?width=108&crop=smart&format=pjpg&auto=webp&s=e5af796666d3b4b3d8a12315675b31347fa40543', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK.png?width=216&crop=smart&format=pjpg&auto=webp&s=46d790ce0aa7039dc923f5581ffbf6af7976ae66', 'width': 216}, {'height': 205, 'url': 'https://external-preview.redd.it/MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK.png?width=320&crop=smart&format=pjpg&auto=webp&s=63abef64b03898c09a1823961157dde4b1f0c7b8', 'width': 320}, {'height': 411, 'url': 'https://external-preview.redd.it/MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK.png?width=640&crop=smart&format=pjpg&auto=webp&s=37c553d11dea07a7801f35ad85722a8c01b520fa', 'width': 640}, {'height': 617, 'url': 'https://external-preview.redd.it/MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK.png?width=960&crop=smart&format=pjpg&auto=webp&s=a92e6e5432acb8fa68d8e5658713a309604471b4', 'width': 960}, {'height': 694, 'url': 'https://external-preview.redd.it/MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b9b514e416c0f9471ccc93b15a047b992d66992d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MXpkaDF3eG9vM21nMXIHjx8H-fIYeYZ1xTiU-a8JVKHevdJUHACF6lTuvGZK.png?format=pjpg&auto=webp&s=a0ed8a0246d523738ea33d9be7d8c38a0717d4d9', 'width': 1680}, 'variants': {}}]}
How does training an AI on another AI actually work?
0
How is DeepSeek actually doing this? Are they just feeding Claude's answers into their own models as training data to improve reasoning? How exactly does one train a model on the output of another? What engineering is involved here? I'd love a breakdown of how this is executed at scale. Backstory: Anthropic recently accused DeepSeek, MiniMax, and Moonshot of using lots of fake accounts to generate exchanges with Claude, using the outputs to train their models, and called it a "distillation attack".
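For context on the mechanics: nobody outside these labs knows their actual pipeline, but generic distillation via supervised fine-tuning on teacher outputs is simple to picture. A minimal sketch; the endpoint and model name are placeholders, not any real provider's config.

```python
import json
from openai import OpenAI  # any OpenAI-compatible client/endpoint works

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # placeholder endpoint

prompts = ["Explain backpropagation.", "Write a binary search in Python."]

# Step 1: harvest teacher outputs into an ordinary SFT dataset.
with open("distill_sft.jsonl", "w") as f:
    for p in prompts:
        resp = client.chat.completions.create(
            model="teacher-model",  # placeholder name
            messages=[{"role": "user", "content": p}],
        )
        f.write(json.dumps({"messages": [
            {"role": "user", "content": p},
            {"role": "assistant", "content": resp.choices[0].message.content},
        ]}) + "\n")

# Step 2: run standard supervised fine-tuning of the student model on
# distill_sft.jsonl with any SFT trainer; the student learns to imitate
# the teacher's answers, which is all "distillation" means here.
```

At scale, the engineering is mostly in step 1: generating millions of diverse prompts, deduplicating and filtering the teacher's answers, and staying under the provider's rate limits, which is where the fake-accounts accusation comes in.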
2026-02-27T20:59:06
https://www.reddit.com/r/LocalLLaMA/comments/1rgips0/how_does_training_an_ai_on_another_ai_actually/
Koshcheiushko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgips0
false
null
t3_1rgips0
/r/LocalLLaMA/comments/1rgips0/how_does_training_an_ai_on_another_ai_actually/
false
false
self
0
null
[DISCUSSION] Is it time for a "Prose-First" Successor to NovelAI/Sudowrite/Novelcrafter focusing on preloaded uncensored models?
26
Hi everyone,

I've spent the last few years living in the trenches of serialization. I'm a Sci-Fi and LitRPG author with over 1 million words published on Kindle Unlimited and Royal Road. By day, I work in tech as a data scientist / project manager.

I wanted to gauge the community's appetite for a new type of writing companion: one that focuses strictly on the "soul" of prose rather than the bells and whistles of general-purpose assistants.

I started as a huge NovelAI fan, and it was the first tool that revealed to me how powerful these tools could be. I went from taking a break from all the Worm and Naruto fanfiction I was writing to becoming a Sudowrite power user. But like many of you, I hit a wall with the "AI-isms." No matter how I prompted, the prose felt increasingly sterilized and predictable. I went back to NovelAI's Erato, and immediately saw the difference. At the time, we didn't fully grasp why as a community, but now I do: the "smaller" models (like Kayra or older fine-tunes) often have higher entropy. They aren't "lobotomized" by excessive RLHF (Reinforcement Learning from Human Feedback) that forces them to sound like a helpful customer service rep. They're actually allowed to be weird, gritty, and creative. Ironically, the thing that got Sudowrite ahead (uncensored ChatGPT) is also the thing that's currently weighing down their software as a prose-writing tool.

**The Current Gap:**

NovelAI was for a long time the gold standard for people who liked an inexpensive, uncensored, UI-first experience, but let's be honest: the update cycle has slowed down significantly. Meanwhile, the open-weights scene has exploded. Models like **Broken Tutu**, **Midnight Rose**, and the latest **Abliterated Llama/Qwen** variants are producing prose that, in my opinion, leaves "aligned" models in the dust, and their fine-tunes are rapidly falling behind.

I've started transitioning my own workflow to these uncensored models, but the interfaces currently available are either:

1. **Chat-focused (SillyTavern):** Incredible for roleplay, but clunky for drafting a 100k-word manuscript.
2. **Too Technical (Kobold/Text-Gen-WebUI / Novelcrafter):** Hard to manage for an author who just wants to stay in the flow.

I've been customizing these open-source, MIT-licensed editors to make a "Clean Room" writing suite: something that would combine the distraction-free, prose-focused UX of NovelAI with a modern backend that keeps a pulse on the latest uncensored models and just hosts things like Midnight Rose + Broken Tutu (assuming licenses permit it).

**The core features would be:**

* **Prose-First UI:** No excessive clutter like Sudowrite / Novelcrafter. Just you, the page, and the AI.
* **The "Entropy Control":** Deep access to sampling settings so you can dial in the "creativity" vs. "logic" balance (a rough sketch of what that looks like against a local backend is at the end of this post).
* **Series-Level Continuity:** A "Codex" that actually understands long-form series continuity across multiple books.
* **Privacy-Centric / Uncensored Models as a Priority:** Zero filters. Zero moralizing.

**My Question to You Guys:**

If you've felt like NovelAI is stagnating, or that Sudowrite is too "corporate" and money-grabby these days, what is the one thing you feel is missing from your current setup? Is there room for a tool that prioritizes the *writing* experience above everything else?

I'm not looking to build a "Sudowrite Killer". I'm just looking to get my hands on the tool I actually want to use for my next 1 million words, but the stagnating development pace and dated models made it really hard for me to continue using what's out there. Curious to hear my fellow writers' thoughts.
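On the "Entropy Control" point, here is a minimal sketch of what exposing sampling settings means in practice. The field names follow llama.cpp's OpenAI-compatible server, and the endpoint and model name are placeholder assumptions; other backends may differ, so verify against yours.

```python
import requests

# Two presets for the same prompt: high-entropy "weird" vs. low-entropy "safe".
PRESETS = {
    "creative": {"temperature": 1.2, "top_p": 0.98, "min_p": 0.02},
    "logical":  {"temperature": 0.4, "top_p": 0.90, "min_p": 0.10},
}

def complete(prompt: str, preset: str) -> str:
    payload = {
        "model": "local",  # placeholder; llama.cpp accepts any name here
        "messages": [{"role": "user", "content": prompt}],
        **PRESETS[preset],  # the "entropy dial" is just these numbers
    }
    r = requests.post("http://localhost:8080/v1/chat/completions",
                      json=payload, timeout=300)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]
```

A prose-first UI would surface these knobs per scene or per character instead of burying them in a settings page.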
2026-02-27T20:51:14
https://www.reddit.com/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/
CharlesBAntoine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgiimd
false
null
t3_1rgiimd
/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/
false
false
self
26
null
[Discussion] Is it time for a modern, "Prose-First" successor to NovelAI / Sudowrite / Novelcrafter?
0
[removed]
2026-02-27T20:49:05
https://www.reddit.com/r/LocalLLaMA/comments/1rgigmv/discussion_is_it_time_for_a_modern_prosefirst/
KaelElarian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgigmv
false
null
t3_1rgigmv
/r/LocalLLaMA/comments/1rgigmv/discussion_is_it_time_for_a_modern_prosefirst/
false
false
self
0
null
Switched to Qwen3.5-122B-A10B-i1-GGUF
15
Switched today to mradermacher/Qwen3.5-122B-A10B-i1-GGUF:Q4\_K\_S on my 6000 Pro, from mradermacher/MiniMax-M2.5-REAP-139B-A10B-i1-GGUF:Q4\_K\_S. So far it's better; the main reason to switch was to get more context. The full 262k tokens fit on a 6000 Pro vs only about 65k with the MiniMax quant. It's fast, too.
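For anyone wondering why one model fits 262k context and another only ~65k on the same card: the KV cache grows linearly with layer count, KV heads, and tokens. A back-of-the-envelope helper; the hyperparameters in the example call are made-up illustrations, not the real Qwen3.5 or MiniMax configs.

```python
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 n_tokens: int, bytes_per_elt: int = 2) -> float:
    # 2x for K and V; bytes_per_elt=2 assumes an fp16 cache
    # (q8_0/q4_0 KV caches shrink this further).
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elt * n_tokens / 2**30

# Illustrative numbers only:
print(kv_cache_gib(n_layers=48, n_kv_heads=4, head_dim=128,
                   n_tokens=262_144))  # -> 24.0 GiB at fp16
```

Fewer layers or KV heads per token, or a quantized KV cache, is exactly what turns "65k max" into "the full 262k fits".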
2026-02-27T20:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/
NaiRogers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rgiait
false
null
t3_1rgiait
/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/
false
false
self
15
null
OpenAI Raises $110 Billion in the Largest Private Funding Round Ever
0
2026-02-27T20:38:15
https://slashdot.org/story/26/02/27/1355236/openai-raises-110-billion-in-the-largest-private-funding-round-ever
HumanDrone8721
slashdot.org
1970-01-01T00:00:00
0
{}
1rgi6ky
false
null
t3_1rgi6ky
/r/LocalLLaMA/comments/1rgi6ky/openai_raises_110_billion_in_the_largest_private/
false
false
https://external-preview…f6c2c2a7066eb043
0
{'enabled': False, 'images': [{'id': 'hil_tMNOLt9yEv7PpbqA4DDr8WAL7xTJ2MH4RtRRmYA', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/hil_tMNOLt9yEv7PpbqA4DDr8WAL7xTJ2MH4RtRRmYA.png?auto=webp&s=159cdb86d0a28da02a7f91b85bceed89b5907f84', 'width': 64}, 'variants': {}}]}
Qwen3.5 Unsloth GGUFs Update!
34
2026-02-27T20:31:41
https://www.reddit.com/r/unsloth/comments/1rgemmh/qwen35_unsloth_ggufs_update/
fallingdowndizzyvr
reddit.com
1970-01-01T00:00:00
0
{}
1rgi0ej
false
null
t3_1rgi0ej
/r/LocalLLaMA/comments/1rgi0ej/qwen35_unsloth_ggufs_update/
false
false
https://preview.redd.it/…d79a7a6f3bb15cad
34
{'enabled': False, 'images': [{'id': '5xtzvpxdx2mg1', 'resolutions': [{'height': 119, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?width=108&crop=smart&auto=webp&s=246af8f271aaccd22a0e76f6af96fb1366c5d0e1', 'width': 108}, {'height': 238, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?width=216&crop=smart&auto=webp&s=c2a07a55188e8f9a34d8719fa5b79c16749c228a', 'width': 216}, {'height': 353, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?width=320&crop=smart&auto=webp&s=8c007478cd407ae3541b428f20971c5385af0902', 'width': 320}, {'height': 707, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?width=640&crop=smart&auto=webp&s=862dc616a7026f7da474b808acfe28b96e84a5a2', 'width': 640}, {'height': 1061, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?width=960&crop=smart&auto=webp&s=02a36f14309354c94eeec987cf9b7be2d2fdd4a4', 'width': 960}, {'height': 1193, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?width=1080&crop=smart&auto=webp&s=813703e0519119ad6e888c658d2195fce5f809a8', 'width': 1080}], 'source': {'height': 2830, 'url': 'https://external-preview.redd.it/5xtzvpxdx2mg1.png?auto=webp&s=0bca3076a45de69c0427fcbb1365fae19cc36d0c', 'width': 2560}, 'variants': {}}]}
Pure LLMs score 0% on ARC-AGI-2 (humans: 60%). Meanwhile AlphaProof couples an LLM with Lean for 100% verifiable math proofs. Analysis of why hybrid architectures are making a comeback.
0
If you've been following ARC-AGI-2 results, you know the numbers are brutal. Every pure LLM tested, including o3, scores effectively 0%. o3 went from 75.7% on ARC-AGI-1 to 3-4% on the updated version. Average humans still score 60%.

This matters for the local LLM community because it shows that the limitation isn't about scale, it's architectural. A formal proof (arXiv:2401.11817) showed hallucination is mathematically inevitable in any computable LLM. Not a bug. A theorem.

**What's actually working:**

AlphaProof (DeepMind) couples an LLM with Lean, a formal theorem prover. The neural part generates proof candidates. Lean verifies them. If the proof is wrong, Lean rejects it; no hallucination possible. Result: silver medal at IMO 2024. This is a Type 2 neuro-symbolic system in Kautz's taxonomy: symbolic outer loop (Lean verification) + neural inner (hypothesis generation).

**Why this matters for local LLMs:**

The implication is that even small local models could be dramatically more reliable if coupled with verification layers:

* JSON schema validation = lightweight NeSy
* Unit test generation + execution = NeSy verification loop
* Static analysis as post-processing = symbolic constraint enforcement

You don't need a 70B model if a 7B model + formal verification gives you provably correct outputs for your use case. (A minimal sketch of the schema-validation loop is at the end of this post.)

I wrote a 23-min deep dive covering the full landscape: architecture breakdowns, Kautz taxonomy, energy economics, and an honest look at where pure neural (Gemini Deep Think) is still winning.

Friend link (no paywall): [https://medium.com/ai-advances/neuro-symbolic-ai-arc-agi-alphaproof-third-wave-48177339d698?sk=2fadaf3cfe595a54fab578edc2de3362](https://medium.com/ai-advances/neuro-symbolic-ai-arc-agi-alphaproof-third-wave-48177339d698?sk=2fadaf3cfe595a54fab578edc2de3362)

What's your experience with adding verification layers to local model outputs?
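To make the lightweight-NeSy bullet concrete, here is a minimal sketch of the generate-then-verify loop using the real `jsonschema` package; the `propose` stub is a placeholder for whatever local model you run.

```python
import json
from jsonschema import validate, ValidationError  # pip install jsonschema

SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
    "required": ["name", "age"],
}

def propose(prompt: str) -> str:
    # Placeholder: swap in any local model call (llama.cpp, Ollama, ...).
    return '{"name": "Ada", "age": 36}'

def extract(prompt: str, max_tries: int = 4) -> dict:
    # Symbolic outer loop: the validator, not the model, decides acceptance.
    for _ in range(max_tries):
        raw = propose(prompt)
        try:
            obj = json.loads(raw)
            validate(obj, SCHEMA)   # hard constraint, zero tolerance
            return obj              # provably schema-conformant output
        except (json.JSONDecodeError, ValidationError) as err:
            prompt += f"\nYour last output was invalid ({err}). Emit valid JSON only."
    raise RuntimeError("model never produced a valid object")

print(extract("Return a JSON object describing a person."))
```

The same outer-loop shape scales up: swap the schema check for a test runner or a proof checker and you get the heavier Type 2 systems described above.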
2026-02-27T20:25:23
https://medium.com/ai-advances/neuro-symbolic-ai-arc-agi-alphaproof-third-wave-48177339d698?sk=2fadaf3cfe595a54fab578edc2de3362
Sensitive-Two9732
medium.com
1970-01-01T00:00:00
0
{}
1rghuaq
false
null
t3_1rghuaq
/r/LocalLLaMA/comments/1rghuaq/pure_llms_score_0_on_arcagi2_humans_60_meanwhile/
false
false
default
0
null
February is almost over, are you satisfied? Upcoming models soon?
70
Some mentioned that Feb was loaded with model drops, and some mentioned the CNY thing. I guess March & April will possibly be loaded with even more. I'm sure local folks are happy with the Qwen series, GLM-5, Step Flash, and MiniMax 2.5.

What models are coming in March & April? Any news/speculations/rumors?

Below are the models that came out this month (from this sub); I just counted models from sources. inclusionAI is the winner with 13 models released this month, and Qwen is 2nd with 5. Though a few other sources released 4-5 models, those are tiny/small ones.

* [https://huggingface.co/stepfun-ai/Step-3.5-Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash)
* [https://huggingface.co/Qwen/Qwen3-Coder-Next](https://huggingface.co/Qwen/Qwen3-Coder-Next)
* [https://huggingface.co/Qwen/Qwen3.5-397B-A17B](https://huggingface.co/Qwen/Qwen3.5-397B-A17B)
* [https://huggingface.co/Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B)
* [https://huggingface.co/Qwen/Qwen3.5-27B](https://huggingface.co/Qwen/Qwen3.5-27B)
* [https://huggingface.co/Qwen/Qwen3.5-122B-A10B](https://huggingface.co/Qwen/Qwen3.5-122B-A10B)
* [https://huggingface.co/zai-org/GLM-5](https://huggingface.co/zai-org/GLM-5)
* [https://huggingface.co/MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5)
* [https://huggingface.co/inclusionAI/Ring-2.5-1T](https://huggingface.co/inclusionAI/Ring-2.5-1T)
* [https://huggingface.co/inclusionAI/Ling-2.5-1T](https://huggingface.co/inclusionAI/Ling-2.5-1T)
* [https://huggingface.co/PrimeIntellect/INTELLECT-3.1](https://huggingface.co/PrimeIntellect/INTELLECT-3.1)
* [https://huggingface.co/internlm/Intern-S1-Pro](https://huggingface.co/internlm/Intern-S1-Pro)
* [https://huggingface.co/Nanbeige/Nanbeige4.1-3B](https://huggingface.co/Nanbeige/Nanbeige4.1-3B)
* [https://huggingface.co/zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
* [https://huggingface.co/tanaos/tanaos-sentiment-analysis-v1](https://huggingface.co/tanaos/tanaos-sentiment-analysis-v1)
* [https://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.2](https://huggingface.co/XeyonAI/Mistral-Helcyon-Mercury-12b-v3.2)
* [https://huggingface.co/YuanLabAI/Yuan3.0-Flash](https://huggingface.co/YuanLabAI/Yuan3.0-Flash)
* [https://huggingface.co/FutureMa/Eva-4B-V2](https://huggingface.co/FutureMa/Eva-4B-V2)
* [https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602](https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602)
* [https://huggingface.co/trillionlabs/gWorld-8B](https://huggingface.co/trillionlabs/gWorld-8B)
* [https://huggingface.co/trillionlabs/gWorld-32B](https://huggingface.co/trillionlabs/gWorld-32B)
* [https://huggingface.co/DeepBrainz/DeepBrainz-R1-4B](https://huggingface.co/DeepBrainz/DeepBrainz-R1-4B)
* [https://huggingface.co/samuel-vitorino/sopro](https://huggingface.co/samuel-vitorino/sopro)
* [https://huggingface.co/concavity-ai/superlinear-exp-v0.1](https://huggingface.co/concavity-ai/superlinear-exp-v0.1)
* [https://huggingface.co/inclusionAI/LLaDA2.1-flash](https://huggingface.co/inclusionAI/LLaDA2.1-flash)
* [https://huggingface.co/inclusionAI/LLaDA2.1-mini](https://huggingface.co/inclusionAI/LLaDA2.1-mini)
* [https://huggingface.co/inclusionAI/UI-Venus-1.5-2B](https://huggingface.co/inclusionAI/UI-Venus-1.5-2B)
* [https://huggingface.co/inclusionAI/UI-Venus-1.5-8B](https://huggingface.co/inclusionAI/UI-Venus-1.5-8B)
* [https://huggingface.co/inclusionAI/UI-Venus-1.5-30B-A3B](https://huggingface.co/inclusionAI/UI-Venus-1.5-30B-A3B)
* [https://huggingface.co/inclusionAI/Ming-flash-omni-2.0](https://huggingface.co/inclusionAI/Ming-flash-omni-2.0)
* [https://huggingface.co/inclusionAI/Ming-omni-tts-16.8B-A3B](https://huggingface.co/inclusionAI/Ming-omni-tts-16.8B-A3B)
* [https://huggingface.co/inclusionAI/Ming-omni-tts-0.5B](https://huggingface.co/inclusionAI/Ming-omni-tts-0.5B)
* [https://huggingface.co/inclusionAI/ZwZ-4B](https://huggingface.co/inclusionAI/ZwZ-4B)
* [https://huggingface.co/inclusionAI/ZwZ-7B](https://huggingface.co/inclusionAI/ZwZ-7B)
* [https://huggingface.co/inclusionAI/ZwZ-8B](https://huggingface.co/inclusionAI/ZwZ-8B)
* [https://huggingface.co/WokeAI/Tankie-DPE-12B-SFT-v2](https://huggingface.co/WokeAI/Tankie-DPE-12B-SFT-v2)
* [https://huggingface.co/OpenMOSS-Team/models?sort=created](https://huggingface.co/OpenMOSS-Team/models?sort=created)
* [https://huggingface.co/collections/Aratako/miotts](https://huggingface.co/collections/Aratako/miotts)
* [https://huggingface.co/AIDC-AI/Ovis2.6-30B-A3B](https://huggingface.co/AIDC-AI/Ovis2.6-30B-A3B)
* [https://huggingface.co/neuphonic/models?sort=created](https://huggingface.co/neuphonic/models?sort=created)
* [https://huggingface.co/nineninesix/kani-tts-2-en](https://huggingface.co/nineninesix/kani-tts-2-en)
* [https://huggingface.co/jdopensource/JoyAI-LLM-Flash](https://huggingface.co/jdopensource/JoyAI-LLM-Flash)
* [https://huggingface.co/lm-provers/QED-Nano](https://huggingface.co/lm-provers/QED-Nano)
* [https://huggingface.co/collections/CohereLabs/tiny-aya](https://huggingface.co/collections/CohereLabs/tiny-aya)
* [https://huggingface.co/Zyphra/ZUNA](https://huggingface.co/Zyphra/ZUNA)
* [https://huggingface.co/teapotai/tinyteapot](https://huggingface.co/teapotai/tinyteapot)
* [https://huggingface.co/collections/RWKV/rwkv-v7](https://huggingface.co/collections/RWKV/rwkv-v7)
* [https://huggingface.co/guidelabs/steerling-8b](https://huggingface.co/guidelabs/steerling-8b)
* [https://huggingface.co/PicoKittens/PicoMistral-23M](https://huggingface.co/PicoKittens/PicoMistral-23M)
* [https://huggingface.co/PicoKittens/PicoStories-853K](https://huggingface.co/PicoKittens/PicoStories-853K)
* [https://huggingface.co/crownelius/The-Crow-9B-Creative-Writing](https://huggingface.co/crownelius/The-Crow-9B-Creative-Writing)
2026-02-27T20:09:48
https://www.reddit.com/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rghfqj
false
null
t3_1rghfqj
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/
false
false
self
70
{'enabled': False, 'images': [{'id': '6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=108&crop=smart&auto=webp&s=d468c99ee7a45fbc3c6246eaae3578bcd281ffd1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=216&crop=smart&auto=webp&s=883cf80e3cee79d8aa031cb5bb10f87edf424991', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=320&crop=smart&auto=webp&s=44ed874559138acaae45c3f60c1ae9054fe3d851', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=640&crop=smart&auto=webp&s=3b6b66f3974fdd2cae45bb907bbec6bc716f85df', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=960&crop=smart&auto=webp&s=d9a3a25947394aa07f96b0a7a655f9d8030dd1ae', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?width=1080&crop=smart&auto=webp&s=c951fd63e6c4d9c887f1029429ccdc483969508b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6YneNdA7_PQWJcKWXQHic9ha45UGBqHyVRjNM6yUzkI.png?auto=webp&s=ccb3f81ebb4ba667f1dca8304f85567c727f3a39', 'width': 1200}, 'variants': {}}]}