Schema (column: type, observed min/max):

title: string, length 1–300
score: int64, range 0–8.54k
selftext: string, length 0–41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, length 0–878
author: string, length 3–20
domain: string, length 0–82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, range 0–2
gildings: string, 7 classes
id: string, length 7 (fixed)
locked: bool, 2 classes
media: string, length 646–1.8k
name: string, length 10 (fixed)
permalink: string, length 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4–213
ups: int64, range 0–8.54k
preview: string, length 301–5.01k
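One quirk of the schema above is worth handling explicitly when consuming the data: the `edited` column's minimum is 1970-01-01 00:00:00, which is the Unix epoch. Reddit's API reports `edited: false` for never-edited posts, and coercing that to a timestamp yields epoch 0, so the value is a sentinel rather than a real edit time. A minimal sketch of normalizing it with pandas (the two-row frame here is illustrative, built from values visible in the rows below, not the full dataset):

```python
import pandas as pd

# Hypothetical two-row slice with the schema's "id" and "edited" columns.
rows = pd.DataFrame(
    {
        "id": ["1rficyr", "1rfc7d3"],
        "edited": pd.to_datetime(
            ["1970-01-01T00:00:00", "2026-02-19T14:51:53"]
        ),
    }
)

# Replace the epoch sentinel with NaT so "never edited" is not mistaken
# for an edit made in 1970 when doing date arithmetic or filtering.
rows["edited"] = rows["edited"].mask(rows["edited"] == pd.Timestamp(0))

print(rows["edited"].isna().tolist())
```

With this normalization, `rows["edited"].notna()` selects only genuinely edited posts.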
title: Which model is best for that use-case?
score: 1 | ups: 1
selftext: [deleted]
created: 2026-02-26T18:36:25
author: [deleted]
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rficyr | name: t3_1rficyr
permalink: /r/LocalLLaMA/comments/1rficyr/which_model_is_best_for_that_usecase/
media: null
thumbnail: default
preview: null

title: Leaked Grok 4.2 System Prompt
score: 0 | ups: 0
selftext: https://preview.redd.it/…y asks for them.
created: 2026-02-26T18:30:19
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfi6wv/leaked_grok_42_system_prompt/
author: quantapeiron | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfi6wv | name: t3_1rfi6wv
permalink: /r/LocalLLaMA/comments/1rfi6wv/leaked_grok_42_system_prompt/
media: null
thumbnail: https://preview.redd.it/…8019172a6b12073a
preview: null

title: Completed my 64GB VRAM rig - dual MI50 build + custom shroud
score: 89 | ups: 89
selftext: Hello everyone! A few months ago I started a project to build my own local AI server. After some testing and buying the second GPU, I was able to finalize the setup. \*\*Specs:\*\* \* \*\*Motherboard:\*\* Gigabyte X399 DESIGNARE \* \*\*CPU:\*\* Threadripper 2990WX (32 Cores / 64 Threads) \* \*\*RAM:\*\* 64GB DD...
created: 2026-02-26T18:28:28
url: https://www.reddit.com/gallery/1rfi53f
author: roackim | domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfi53f | name: t3_1rfi53f
permalink: /r/LocalLLaMA/comments/1rfi53f/completed_my_64gb_vram_rig_dual_mi50_build_custom/
media: null
thumbnail: https://preview.redd.it/…015ac3dbc20a5fe6
preview: null

title: Got tired of writing promo posts… so I made it one‑click (open source)
score: 0 | ups: 0
selftext: I love building OSS, but writing promo posts? Takes forever. Paid tools are pricey, free ones are cramped. So I built a thing that takes a messy draft, reshapes it per platform, and even posts it for you. Project name is Auto Hongmyungbo — yes, that’s the name! Main bits: 1) Draft in: throw in a promo/thought/no...
created: 2026-02-26T18:21:09
url: https://v.redd.it/azt2e7niqvlg1
author: Working_Original9624 | domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfhy00 | name: t3_1rfhy00
permalink: /r/LocalLLaMA/comments/1rfhy00/got_tired_of_writing_promo_posts_so_i_made_it/
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/azt2e7niqvlg1/DASHPlaylist.mpd?a=1774722153%2CMDM5NjYwNjA3OGY4YTkyYTg5YzllN2M3OTA3ZWU2OTRiZDUxMDRiYWQ3YmY4ZTg3ODUyMjQ0M2FlYjg1YjJiYw%3D%3D&v=1&f=sd', 'duration': 91, 'fallback_url': 'https://v.redd.it/azt2e7niqvlg1/CMAF_1080.mp4?source=fallback', 'h...
thumbnail: https://external-preview…81c2a6e228155d2e
preview: {'enabled': False, 'images': [{'id': 'NGJoaHpob2lxdmxnMbQmlCbIoeiA_acrq5yldN4sWG3MrHRz89i8_EhJrCAT', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NGJoaHpob2lxdmxnMbQmlCbIoeiA_acrq5yldN4sWG3MrHRz89i8_EhJrCAT.png?width=108&crop=smart&format=pjpg&auto=webp&s=54cd490faa2ed804faae8e2d44fd966cf5383...

title: Self Hosted Model Tier List
score: 0 | ups: 0
selftext: Check it out at [https://www.onyx.app/self-hosted-llm-leaderboard](https://www.onyx.app/self-hosted-llm-leaderboard)
created: 2026-02-26T18:19:42
url: https://i.redd.it/ych4ovbcovlg1.png
author: Weves11 | domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfhwn8 | name: t3_1rfhwn8
permalink: /r/LocalLLaMA/comments/1rfhwn8/self_hosted_model_tier_list/
media: null
thumbnail: https://preview.redd.it/…b86bede8cf458e5f
preview: {'enabled': True, 'images': [{'id': 'ych4ovbcovlg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ych4ovbcovlg1.png?width=108&crop=smart&auto=webp&s=2f9fcc1fb6144b7a5657cf9b5d5e1e48972ff164', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ych4ovbcovlg1.png?width=216&crop=smart&auto=web...

title: Where do you find inspiration for agent system prompts and configurations?
score: 0 | ups: 0
selftext: Been going deep on agent setups lately and honestly the biggest bottleneck isn't the tech – it's figuring out good patterns for system prompts, rules, and tool configurations. Google gets you generic advice. Reddit threads are hit or miss. Most people seem to keep their good stuff private. How do you approach this? D...
created: 2026-02-26T18:12:45
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfhpo3/where_do_you_find_inspiration_for_agent_system/
author: Timely_Number_696 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfhpo3 | name: t3_1rfhpo3
permalink: /r/LocalLLaMA/comments/1rfhpo3/where_do_you_find_inspiration_for_agent_system/
media: null
thumbnail: self
preview: null

title: Prompt processing super slow for qwen3.5 35b a3b on LM studio
score: 1 | ups: 1
selftext: [removed]
created: 2026-02-26T18:12:03
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfhoxl/prompt_processing_super_slow_for_qwen35_35b_a3b/
author: GreenTreeAndBlueSky | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfhoxl | name: t3_1rfhoxl
permalink: /r/LocalLLaMA/comments/1rfhoxl/prompt_processing_super_slow_for_qwen35_35b_a3b/
media: null
thumbnail: self
preview: null

title: LMStudio: jailbreaking thinking models?
score: 0 | ups: 0
selftext: Without thinking turned on, you can edit the response and use continue to maybe get what you want. Even then, it's getting more and more difficult with the latest models. What do you do when thinking is turned on?
created: 2026-02-26T18:11:11
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfho2y/lmstudio_jailbreaking_thinking_models/
author: Vegetable-Maybe1444 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfho2y | name: t3_1rfho2y
permalink: /r/LocalLLaMA/comments/1rfho2y/lmstudio_jailbreaking_thinking_models/
media: null
thumbnail: self
preview: null

title: Completed my 64GB VRAM rig - dual MI50 build + custom shroud 😎
score: 3 | ups: 3
selftext: Hello everyone! A few months ago I started a project to build my own local AI server. After some testing and buying the second GPU, I was able to finalize the setup. **Specs:** * **Motherboard:** Gigabyte X399 DESIGNARE * **CPU:** Threadripper 2990WX (32 Cores / 64 Threads) * **RAM:** 64GB DDR4 * **GPUs:** 2x ...
created: 2026-02-26T18:01:08
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfhe05/completed_my_64gb_vram_rig_dual_mi50_build_custom/
author: roackim | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfhe05 | name: t3_1rfhe05
permalink: /r/LocalLLaMA/comments/1rfhe05/completed_my_64gb_vram_rig_dual_mi50_build_custom/
media: null
thumbnail: https://external-preview…5310bc3c106a0c47
preview: null

title: DWARF: linear attention with a 3,072-token bounded KV cache — ablation results (13M scale)
score: 1 | ups: 1
selftext: I've been building and ablating a linear-complexity attention architecture over the past week. Main result: 70.8 PPL at 13M params vs 64.07 for a matched standard transformer — but the standard transformer's number comes with severe generation loops, which led to the most interesting finding. The architecture: Two p...
created: 2026-02-26T17:55:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfh8k3/dwarf_linear_attention_with_a_3072token_bounded/
author: MariusNocturnum | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfh8k3 | name: t3_1rfh8k3
permalink: /r/LocalLLaMA/comments/1rfh8k3/dwarf_linear_attention_with_a_3072token_bounded/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': 'x7hNiar4ddnk7x5icJQlmzgzOE_BUVr6-IYosN9rwaY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/x7hNiar4ddnk7x5icJQlmzgzOE_BUVr6-IYosN9rwaY.png?width=108&crop=smart&auto=webp&s=b3024fd8efb3be60eab1b4ca16351850054403a8', 'width': 108}, {'height': 108, 'url': 'h...

title: OASIS: Open-source benchmark for measuring AI model performance on offensive cybersecurity tasks
score: 13 | ups: 13
selftext: OASIS is an open benchmark for evaluating LLM capability on real-world offensive security tasks. Fully local, no cloud dependency, bring whatever model you want. **How the Benchmark Works:** The model gets a Kali Linux container and a vulnerable Docker target. It receives an objective, autonomously performs recon,...
created: 2026-02-26T17:32:19
url: https://www.reddit.com/gallery/1rfgkhy
author: MamaLanaa | domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfgkhy | name: t3_1rfgkhy
permalink: /r/LocalLLaMA/comments/1rfgkhy/oasis_opensource_benchmark_for_measuring_ai_model/
media: null
thumbnail: https://preview.redd.it/…627f67e94bc164ee
preview: null

title: batchling: a Python library to save 50% off GenAI requests in two lines of code
score: 1 | ups: 1
selftext: [removed]
created: 2026-02-26T17:17:43
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfg5ku/batchling_a_python_library_to_save_50_off_genai/
author: StatisticianFit9054 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfg5ku | name: t3_1rfg5ku
permalink: /r/LocalLLaMA/comments/1rfg5ku/batchling_a_python_library_to_save_50_off_genai/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': 'NTqyNuIHkUSwuVuc1P2Ha8YgLUbEoP_tpOlzBTCFHTE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NTqyNuIHkUSwuVuc1P2Ha8YgLUbEoP_tpOlzBTCFHTE.png?width=108&crop=smart&auto=webp&s=af75783faf944b0325a20418da312af6d6094e2c', 'width': 108}, {'height': 108, 'url': 'h...

title: LightMem (ICLR 2026): Lightweight and Efficient Memory-Augmented Generation — 10×+ gains with 100× lower cost
score: 23 | ups: 23
selftext: We’re excited to share that our work **LightMem** has been accepted to **ICLR 2026** 🎉 **Paper:** [https://arxiv.org/abs/2510.18866](https://arxiv.org/abs/2510.18866) **Code:** [https://github.com/zjunlp/LightMem](https://github.com/zjunlp/LightMem) LightMem is a lightweight, modular memory system for LLM agents t...
created: 2026-02-26T17:17:15
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfg53c/lightmem_iclr_2026_lightweight_and_efficient/
author: zxlzr | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfg53c | name: t3_1rfg53c
permalink: /r/LocalLLaMA/comments/1rfg53c/lightmem_iclr_2026_lightweight_and_efficient/
media: null
thumbnail: https://preview.redd.it/…37c01b16a3b11a27
preview: null

title: American closed models vs Chinese open models is becoming a problem.
score: 664 | ups: 664
selftext: The work I do involves customers that are sensitive to nation state politics. We cannot and do not use cloud API services for AI because the data must not leak. Ever. As a result we use open models in closed environments. The problem is that my customers don’t want Chinese models. “National security risk”. But the o...
created: 2026-02-26T17:15:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/
author: __JockY__ | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfg3kx | name: t3_1rfg3kx
permalink: /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/
media: null
thumbnail: self
preview: null

title: SkillNet: Create, Evaluate, and Connect AI Skills (now built into OpenClaw)
score: 1 | ups: 1
selftext: AI systems keep reinventing the same domain know-how — buried in prompts, tools, and brittle pipelines. Skills remain fragmented, duplicated, and inconsistent in quality. We think the missing layer in the AI stack is **skills as infrastructure**. We’re introducing **SkillNet**, an ongoing open project to standard...
created: 2026-02-26T17:08:39
url: https://www.reddit.com/r/LocalLLaMA/comments/1rffwg2/skillnet_create_evaluate_and_connect_ai_skills/
author: zxlzr | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rffwg2 | name: t3_1rffwg2
permalink: /r/LocalLLaMA/comments/1rffwg2/skillnet_create_evaluate_and_connect_ai_skills/
media: null
thumbnail: self
preview: null

title: We just released our internal UX/GUI Framework (Vanilla JS)
score: 6 | ups: 6
selftext: Hello Reddit friends. We just released our internal UX/GUI Framework which is tailored from the ground up to be used by coding agents, as in - it's internally documented in a manner that makes it easy for agents to understand and fully use all of the available features without eating too much context. But, as a Trekk...
created: 2026-02-26T17:05:42
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfftcu/we_just_released_our_internal_uxgui_framework/
author: NotumRobotics | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfftcu | name: t3_1rfftcu
permalink: /r/LocalLLaMA/comments/1rfftcu/we_just_released_our_internal_uxgui_framework/
media: null
thumbnail: self
preview: null

Good "coding" LLM for my 8gb VRAM, 16gb ram setup?
4
What LLM is the best for coding for my setup? i have a : \- RX 6600 8gb \- Ryzen 5 3600 \- 16gb ram DDR4 2666mhz i know it's underpowered, but what is the best i can get for coding in here? the minimum is 5 tokens per second, **if that is realistic**.
2026-02-26T16:46:13
https://www.reddit.com/r/LocalLLaMA/comments/1rff9zw/good_coding_llm_for_my_8gb_vram_16gb_ram_setup/
Mediocre_Speed_2273
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rff9zw
false
null
t3_1rff9zw
/r/LocalLLaMA/comments/1rff9zw/good_coding_llm_for_my_8gb_vram_16gb_ram_setup/
false
false
self
4
null
title: We tested RLVR on top of fine-tuned small models across 12 datasets — here's exactly when it helps (and when it doesn't)
score: 10 | ups: 10
selftext: We've been running SFT on small models (1.7B) for production tasks and wanted to know whether adding a reinforcement learning stage on top actually helps. So we ran a controlled experiment across 12 datasets. The results split cleanly by task type: **Text generation tasks** (QA, documentation, PII redaction): +2.0pp ...
created: 2026-02-26T16:43:13
url: https://i.redd.it/zrlym56qiulg1.png
author: party-horse | domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rff6y3 | name: t3_1rff6y3
permalink: /r/LocalLLaMA/comments/1rff6y3/we_tested_rlvr_on_top_of_finetuned_small_models/
media: null
thumbnail: https://preview.redd.it/…06aa950beb59c4de
preview: {'enabled': True, 'images': [{'id': 'zrlym56qiulg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/zrlym56qiulg1.png?width=108&crop=smart&auto=webp&s=c48de26958dff26eccf7bd7d44bb406d39f032c3', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/zrlym56qiulg1.png?width=216&crop=smart&auto=web...

title: Stop pasting transcripts into Claude manually. Local transcription + Claude CLI, real-time.
score: 0 | ups: 0
selftext: WASAPI loopback + mic, mixed to mono 16kHz, faster-whisper locally. Claude CLI (\`claude --print\`) every 60s. CTranslate2 CUDA float16 / CPU int8 fallback. Silero VAD v6 ONNX. \~1-2s per 10s segment on RTX 3060. Single .exe. Captures all PC audio so it works for videos too. Considering local LLM support for air-gappe...
created: 2026-02-26T16:36:02
url: https://i.redd.it/xtcn9wdo9vlg1.png
author: Professional_Lab_486 | domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfeztr | name: t3_1rfeztr
permalink: /r/LocalLLaMA/comments/1rfeztr/stop_pasting_transcripts_into_claude_manually/
media: null
thumbnail: https://preview.redd.it/…c9d4f14c32b9f7ab
preview: {'enabled': True, 'images': [{'id': 'xtcn9wdo9vlg1', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/xtcn9wdo9vlg1.png?width=108&crop=smart&auto=webp&s=253f327b1b5551c97849c4cb9b03c9974cf859aa', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/xtcn9wdo9vlg1.png?width=216&crop=smart&auto=web...

title: How to build a knowledge graph for AI
score: 0 | ups: 0
selftext: Hi everyone, I’ve been experimenting with building a knowledge graph for AI systems, and I wanted to share some of the key takeaways from the process. When building AI applications (especially RAG or agent-based systems), a lot of focus goes into embeddings and vector search. But one thing that becomes clear pretty qu...
created: 2026-02-26T16:34:39
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfeyfp/how_to_build_a_knowledge_graph_for_ai/
author: DistinctRide9884 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfeyfp | name: t3_1rfeyfp
permalink: /r/LocalLLaMA/comments/1rfeyfp/how_to_build_a_knowledge_graph_for_ai/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': 'RAUgN58-q2BzEbXJL8TdEdUJ8j4kmxpKwyLVJvsshbA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RAUgN58-q2BzEbXJL8TdEdUJ8j4kmxpKwyLVJvsshbA.jpeg?width=108&crop=smart&auto=webp&s=fe2b34d6bc763954acda606b0aaea6899339db23', 'width': 108}, {'height': 121, 'url': '...

title: Nano banana 2 🍌 It passed this test. Can anyone please try same prompt on flux 2 [dev]locally??
score: 0 | ups: 0
created: 2026-02-26T16:22:14
url: https://i.redd.it/u28xv5uw6vlg1.png
author: 9r4n4y | domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfelkv | name: t3_1rfelkv
permalink: /r/LocalLLaMA/comments/1rfelkv/nano_banana_2_it_passed_this_test_can_anyone/
media: null
thumbnail: https://preview.redd.it/…381c8c025164ddec
preview: {'enabled': True, 'images': [{'id': 'u28xv5uw6vlg1', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/u28xv5uw6vlg1.png?width=108&crop=smart&auto=webp&s=961f8e1d91677dcdd41516539352b8cf8eaf4ea5', 'width': 108}, {'height': 322, 'url': 'https://preview.redd.it/u28xv5uw6vlg1.png?width=216&crop=smart&auto=we...

title: Qwen 3.5 Family Comparison by ArtificialAnalysis.ai
score: 104 | ups: 104
selftext: [Coding Index](https://preview.redd.it/ey427tqm6vlg1.png?width=2444&format=png&auto=webp&s=5b98c58f40345d066da945223f56d8c2ebef2ed3) [Intelligence Index](https://preview.redd.it/ofzqupbi6vlg1.png?width=2440&format=png&auto=webp&s=4d06e3f6a847e450d6614f2dc12ef01887d4778f) [Agentic Index](https://preview.redd.it/eaqqod...
created: 2026-02-26T16:19:58
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfej6k/qwen_35_family_comparison_by_artificialanalysisai/
author: NewtMurky | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfej6k | name: t3_1rfej6k
permalink: /r/LocalLLaMA/comments/1rfej6k/qwen_35_family_comparison_by_artificialanalysisai/
media: null
thumbnail: https://external-preview…9613865f62b78baf
preview: null

title: What AI researchers worry about most isn’t intelligence, it’s safety.
score: 0 | ups: 0
selftext: Something interesting I heard recently from someone working in AI research: As AI systems become more powerful, the biggest challenge isn't making them smarter — it's making sure they behave safely and predictably when things go wrong. That perspective surprised me because most public discussions about AI focus on ca...
created: 2026-02-26T16:09:25
url: https://youtu.be/x6v1vUOYDjU?si=l8h2O5vdxIx_H5Wk
author: Powerful-Ad7836 | domain: youtu.be
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfe8tk | name: t3_1rfe8tk
permalink: /r/LocalLLaMA/comments/1rfe8tk/what_ai_researchers_worry_about_most_isnt/
media: {'oembed': {'author_name': 'JigCode', 'author_url': 'https://www.youtube.com/@JigCode', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/x6v1vUOYDjU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pictu...
thumbnail: https://external-preview…d193fa7e4bcd0e9d
preview: {'enabled': False, 'images': [{'id': 'Ay9Cdu2iQ8jajkt8Ixwgxfe_Tjr3mxgqeTZsQzTzXDc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ay9Cdu2iQ8jajkt8Ixwgxfe_Tjr3mxgqeTZsQzTzXDc.jpeg?width=108&crop=smart&auto=webp&s=aa33009b28538c21252739bd19181bfd4f47c274', 'width': 108}, {'height': 162, 'url': '...

title: Switching from NVIDIA to AMD on Linux
score: 1 | ups: 1
selftext: [removed]
created: 2026-02-26T16:07:57
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfe7fz/switching_from_nvidia_to_amd_on_linux/
author: Leseratte10 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfe7fz | name: t3_1rfe7fz
permalink: /r/LocalLLaMA/comments/1rfe7fz/switching_from_nvidia_to_amd_on_linux/
media: null
thumbnail: self
preview: null

title: What AI research Scientist worry about most isn’t intelligence, it’s safety.
score: 1 | ups: 1
selftext: [deleted]
created: 2026-02-26T16:06:23
author: [deleted]
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfe5uw | name: t3_1rfe5uw
permalink: /r/LocalLLaMA/comments/1rfe5uw/what_ai_research_scientist_worry_about_most_isnt/
media: {'oembed': {'author_name': 'JigCode', 'author_url': 'https://www.youtube.com/@JigCode', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/x6v1vUOYDjU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pictu...
thumbnail: default
preview: null

title: What AI researcher scientists worry about most isn’t intelligence — it’s safety.
score: 1 | ups: 1
selftext: Whether you're technical or not, the future of AI will affect pretty much everyone. I recently had a conversation with AI research scientist Pritam Dash about AI safety, the future of artificial intelligence, and how computer science students should approach learning programming. One insight that stuck with me was th...
created: 2026-02-26T16:03:26
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfe2vu/what_ai_researcher_scientists_worry_about_most/
author: Powerful-Ad7836 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfe2vu | name: t3_1rfe2vu
permalink: /r/LocalLLaMA/comments/1rfe2vu/what_ai_researcher_scientists_worry_about_most/
media: null
thumbnail: self
preview: null

title: Unsloth Team: We Need to Talk!
score: 0 | ups: 0
selftext: Dear Unsloth team - u/[danielhanchen](https://www.reddit.com/user/danielhanchen/), Thank you for your efforts. Since a few months now, I've been using your quants exclusively whenever I could. The reason I prioritized your work ahead of the quants made by other developers (Bartowski's quants ware my go to) is beca...
created: 2026-02-26T16:02:07
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfe1l6/unsloth_team_we_need_to_talk/
author: Iory1998 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfe1l6 | name: t3_1rfe1l6
permalink: /r/LocalLLaMA/comments/1rfe1l6/unsloth_team_we_need_to_talk/
media: null
thumbnail: self
preview: null

title: Stop letting your agents scrape HTML for UI cloning (it wastes context). Found a wild AIO alternative.
score: 0 | ups: 0
selftext: Hey everyone, I’ve been building some web-automation workflows lately (using Browser Use and Claude's computer use), and I kept running into the same annoying wall: whenever I asked the agent to "replicate this website's design", it would try to scrape the raw HTML/DOM. It instantly blows up the token context window,...
created: 2026-02-26T15:58:54
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfdy81/stop_letting_your_agents_scrape_html_for_ui/
author: elpassso | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfdy81 | name: t3_1rfdy81
permalink: /r/LocalLLaMA/comments/1rfdy81/stop_letting_your_agents_scrape_html_for_ui/
media: null
thumbnail: self
preview: null

title: Benchmarked Phi-3.5-mini vs Qwen2.5-3B across 10 task categories on CPU (i5, 8GB) and GPU (Colab T4) — Qwen wins 2.7-3.3x on efficiency
score: 0 | ups: 0
selftext: Leaderboard accuracy doesn't tell you what a model costs you at inference time. I wanted a single number that captures accuracy per unit of resource — so I designed CARS (Cost-Aware Reasoning Score) and ran Phi-3.5-mini-instruct and Qwen2.5-3B-instruct through a 33-prompt, 10-category benchmark on real consumer hardwar...
created: 2026-02-26T15:58:35
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfdxxc/benchmarked_phi35mini_vs_qwen253b_across_10_task/
author: MasterApplication717 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfdxxc | name: t3_1rfdxxc
permalink: /r/LocalLLaMA/comments/1rfdxxc/benchmarked_phi35mini_vs_qwen253b_across_10_task/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': 'rsi-8yJfFU1ltKo3ULUr2sDr0tiolvwy1PjRpVrzlRw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rsi-8yJfFU1ltKo3ULUr2sDr0tiolvwy1PjRpVrzlRw.png?width=108&crop=smart&auto=webp&s=20a3c32a419c4a1f3315290a7eed59933a0e27f1', 'width': 108}, {'height': 108, 'url': 'h...

title: Claude Code + Qwen3.5 122B Issues
score: 0 | ups: 0
selftext: I've gotten the FP8 version directly from qwen running well on both SGLang and vLLM, but in both cases it's really struggling with claude code. Do you think this is a failure in model hosting, something changed in claude code, or a failure of the model itself? Minimax is what I would use before, and I basically neve...
created: 2026-02-26T15:53:55
url: https://i.redd.it/3awhxh2q1vlg1.png
author: Prestigious_Thing797 | domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfdtgq | name: t3_1rfdtgq
permalink: /r/LocalLLaMA/comments/1rfdtgq/claude_code_qwen35_122b_issues/
media: null
thumbnail: https://preview.redd.it/…cb63cd8ed12858e5
preview: {'enabled': True, 'images': [{'id': '3awhxh2q1vlg1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/3awhxh2q1vlg1.png?width=108&crop=smart&auto=webp&s=51eb526644593f5718687363abe8391b454eee3a', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/3awhxh2q1vlg1.png?width=216&crop=smart&auto=we...

title: Qwen3.5-35B-A3B Q4 Quantization Comparison
score: 433 | ups: 433
selftext: This is a Q4 quantization sweep across all major community quants of Qwen3.5-35B-A3B, comparing faithfulness to the BF16 baseline across different quantizers and recipes. The goal is to give people a data-driven basis for picking a file rather than just grabbing whatever is available. For the uninitiated: **KLD (KL ...
created: 2026-02-26T15:52:23
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/
author: TitwitMuffbiscuit | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfds1h | name: t3_1rfds1h
permalink: /r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/
media: null
thumbnail: https://preview.redd.it/…3bdb0defab8b39c7
preview: null

title: RTX PRO 6000 (Blackwell) for diffusion training cost efficiency
score: 1 | ups: 1
selftext: [removed]
created: 2026-02-26T15:52:03
url: https://community.deeplearning.ai/t/reducing-diffusion-model-training-costs-by-75-as-an-independent-researcher/889121
author: Financial_Ad8530 | domain: community.deeplearning.ai
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfdrqd | name: t3_1rfdrqd
permalink: /r/LocalLLaMA/comments/1rfdrqd/rtx_pro_6000_blackwell_for_diffusion_training/
media: null
thumbnail: https://external-preview…2dd14c1ff2871d06
preview: {'enabled': False, 'images': [{'id': 'UJ94G2lhP6Q9a1EEobHPRv2hXk7oYuxMum55AFcxjHA', 'resolutions': [{'height': 107, 'url': 'https://external-preview.redd.it/UJ94G2lhP6Q9a1EEobHPRv2hXk7oYuxMum55AFcxjHA.png?width=108&crop=smart&auto=webp&s=4fdc90e9dcc10bb24e8c99f4d41144c76648d5e8', 'width': 108}, {'height': 215, 'url': '...

title: llama.cpp: slow model loading on WSL2 fixed
score: 1 | ups: 1
selftext: [removed]
created: 2026-02-26T15:38:24
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfdeiw/llamacpp_slow_model_loading_on_wsl2_fixed/
author: bobaburger | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfdeiw | name: t3_1rfdeiw
permalink: /r/LocalLLaMA/comments/1rfdeiw/llamacpp_slow_model_loading_on_wsl2_fixed/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': 'LGwuDjabqj1e-FuArRM3FsBEYOaXChJVydr-vbEVM9A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LGwuDjabqj1e-FuArRM3FsBEYOaXChJVydr-vbEVM9A.png?width=108&crop=smart&auto=webp&s=5f5078468ba08d01a013bcbb9054ccb1d445fd78', 'width': 108}, {'height': 108, 'url': 'h...

title: Training a 144M Spiking Neural Network for text generation from scratch — no transformer teacher, no distillation
score: 184 | ups: 184
selftext: I built a 144M parameter SNN language model with a fully original architecture (not based on RWKV, transformers, or any existing SNN). Trained from scratch on FineWeb-Edu for \~$10 on a rented A5000. Some interesting findings: • 97-98% inference sparsity — only 2-3% of neurons fire per token. This emerges naturally d...
created: 2026-02-26T15:37:35
url: https://www.reddit.com/gallery/1rfddpi
author: zemondza | domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfddpi | name: t3_1rfddpi
permalink: /r/LocalLLaMA/comments/1rfddpi/training_a_144m_spiking_neural_network_for_text/
media: null
thumbnail: https://preview.redd.it/…843eb81c16d41d45
preview: null

title: No luck getting tools working with LM Studio and Qwen3.5 or LFM2
score: 2 | ups: 2
selftext: So far Qwen 3.5 and LFM2 haven't been able to correctly use duckduckgo, valyu, or danielsig's web search & page scraping in LM Studio. For instance, LFM2 returns: *Failed to parse tool call: Invalid character in function name: '{' at position 0* <|tool_call_start|>[{"name": "valyu_deepsearch", "arguments": {"qu...
created: 2026-02-26T15:14:56
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfcs58/no_luck_getting_tools_working_with_lm_studio_and/
author: DeliciousGorilla | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfcs58 | name: t3_1rfcs58
permalink: /r/LocalLLaMA/comments/1rfcs58/no_luck_getting_tools_working_with_lm_studio_and/
media: null
thumbnail: self
preview: null

title: How does each "moltbot" has its own personality?
score: 0 | ups: 0
selftext: Firstly, I am a developer in Unity C# (2 years+), with a little bit of experience in Python and ReactJS. I mostly use Claude Code or Gemini CLI to work in these two languages (and don't misunderstand me, I can code in C# without any help from AI). Now, I just saw this video: [Clawdbot just got scary (Moltbook)](https:...
created: 2026-02-26T15:14:08
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfcrct/how_does_each_moltbot_has_its_own_personality/
author: AlgorithmicKing | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfcrct | name: t3_1rfcrct
permalink: /r/LocalLLaMA/comments/1rfcrct/how_does_each_moltbot_has_its_own_personality/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': '2A_wFURmwjl09BQiUUkNm7wcVzcN_0lrrfpS5P8XXik', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/2A_wFURmwjl09BQiUUkNm7wcVzcN_0lrrfpS5P8XXik.jpeg?width=108&crop=smart&auto=webp&s=5f4c32c4902afe5adbe3b72fc97b7411f9e954de', 'width': 108}, {'height': 162, 'url': '...

title: Best new model to run on 160GB vram?
score: 0 | ups: 0
selftext: New to this and wondering what is the best “do it all” model I can try on a pair of A100-80GB GPUs? These are nvlinked so tensor parallel is an option. Also have vllm, llama and ollama installed, although the latter seems kludgy, along with Tabby for EX quants. Are there other frameworks I should install?
created: 2026-02-26T15:13:24
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfcqoj/best_new_model_to_run_on_160gb_vram/
author: Mitchcor653 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfcqoj | name: t3_1rfcqoj
permalink: /r/LocalLLaMA/comments/1rfcqoj/best_new_model_to_run_on_160gb_vram/
media: null
thumbnail: self
preview: null

title: Claude switches to punctuation-only output when communicating with another Claude
score: 0 | ups: 0
selftext: Weird behavior I want to get other eyes on. **What happened** Two Claude Sonnet 4.6 instances in a dialogue. Claude A had Chrome MCP access and knew it was talking to another Claude. After one normal text message, it switched entirely to punctuation sequences: `"- ". . ? , "- ", : " , - "? .` No alphabetic characte...
created: 2026-02-26T15:05:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfcivu/claude_switches_to_punctuationonly_output_when/
author: NeroMN | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfcivu | name: t3_1rfcivu
permalink: /r/LocalLLaMA/comments/1rfcivu/claude_switches_to_punctuationonly_output_when/
media: null
thumbnail: self
preview: null

title: Ollama don's support qwen3.5:35b yet?
score: 0 | ups: 0
selftext: tomi@OllamaHost:~$ ollama pull qwen3.5:35b pulling manifest Error: pull model manifest: 412: The model you are attempting to pull requires a newer version of Ollama that may be in pre-release. Please see https://github.com/ollama/ollama/releases for more details. tomi@OllamaHost:~$ oll...
created: 2026-02-26T14:52:50
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfc7d3/ollama_dons_support_qwen3535b_yet/
author: Ok-Internal9317 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfc7d3 | name: t3_1rfc7d3
permalink: /r/LocalLLaMA/comments/1rfc7d3/ollama_dons_support_qwen3535b_yet/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': 'WaEmDD9KqHaxEYY3z6qT20hyzaMoyRgh4oyVmcxLP5k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WaEmDD9KqHaxEYY3z6qT20hyzaMoyRgh4oyVmcxLP5k.png?width=108&crop=smart&auto=webp&s=29b42bf2b7fb0d0cb3014435494b913d8ea3e5a1', 'width': 108}, {'height': 108, 'url': 'h...

title: Introducing FasterQwenTTS
score: 58 | ups: 58
selftext: Hi everyone, I wanted to build real-time voice agents with Qwen3-TTS, but the official implementation doesn’t support streaming and runs below real time. So I focused on fixing those two things. With Faster Qwen3TTS, I get first audio in <200 ms on an RTX 4090 and 2x–6x speedups across 4 different GPUs I tested. The...
created: 2026-02-26T14:48:38
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/
author: futterneid | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfc3ic | name: t3_1rfc3ic
permalink: /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': 'jEPOleXsGkTJuYs1o76f5S-qwpBgXCcciC3bfya72q8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jEPOleXsGkTJuYs1o76f5S-qwpBgXCcciC3bfya72q8.png?width=108&crop=smart&auto=webp&s=11b058fee8e3692895e18b9c0b1fc65640be7673', 'width': 108}, {'height': 108, 'url': 'h...

title: Should we say "SaaS is ripping you off because you don't understand AI engineering"? Feedback for an open-source AI contact center platform - self-hostable, platform-agnostic, bring your own LLM and voice stack
score: 0 | ups: 0
selftext: I've built AI contact centers for enterprise clients & every single time, I rebuilt the same 80% of the stack from scratch. Not the agent, because that's the fun 20%. The boring 80%: session management, tool orchestration, permissions (which tools can the agent call without human approval?), conversation recording wit...
created: 2026-02-26T14:45:26
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfc0iw/should_we_say_saas_is_ripping_you_off_because_you/
author: Ok_Spare917 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfc0iw | name: t3_1rfc0iw
permalink: /r/LocalLLaMA/comments/1rfc0iw/should_we_say_saas_is_ripping_you_off_because_you/
media: null
thumbnail: self
preview: null

title: contextui just open sourced
score: 0 | ups: 0
selftext: [https://github.com/contextui-desktop/contextui](https://github.com/contextui-desktop/contextui) Just saw this on my feed. another one opened sourced.
created: 2026-02-26T14:18:32
url: https://www.reddit.com/r/LocalLLaMA/comments/1rfbcov/contextui_just_open_sourced/
author: midz99 | domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0 | gildings: {} | locked: false | spoiler: false | stickied: false
id: 1rfbcov | name: t3_1rfbcov
permalink: /r/LocalLLaMA/comments/1rfbcov/contextui_just_open_sourced/
media: null
thumbnail: self
preview: {'enabled': False, 'images': [{'id': 'eDESSquYGvX1D6JeHAwF61ZZQSdP4EwwP_obsLi-8RA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eDESSquYGvX1D6JeHAwF61ZZQSdP4EwwP_obsLi-8RA.png?width=108&crop=smart&auto=webp&s=1ea64257cdac9bb7f9fed82ab62b2fcf86cd9b22', 'width': 108}, {'height': 108, 'url': 'h...

Made a little animated explainer for our benchmark paper: this pixel guy walks you through the results (Manim + Claude Code)
0
so we wrote a benchmark paper and I wanted to make a short GIF to go with the twitter announcement. figured I'd use Manim since 3b1b's stuff looks so clean. the pixel character is just rectangles in a VGroup. eyes are tiny squares that shift() around. the bar charts grow in with GrowFromEdge. nothing ...
2026-02-26T14:17:23
https://www.reddit.com/r/LocalLLaMA/comments/1rfbbn1/made_a_little_animated_explainer_for_our/
Tasty-Scarcity-1074
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rfbbn1
false
null
t3_1rfbbn1
/r/LocalLLaMA/comments/1rfbbn1/made_a_little_animated_explainer_for_our/
false
false
https://preview.redd.it/…66c91aa7d2b2fbfa
0
null
What uncensored I2V or T2V models are available to run locally?
0
I got my hands on testing some GPUs with 192GB of VRAM, and I tried running the Wan 2.2 i2v model using ComfyUI. The results were disappointing — if you use any NSFW words, it just generates a random video based on your uploaded image. The thing is, after a lot of searching on Google, I don’t think any model exists th...
2026-02-26T14:09:00
https://www.reddit.com/r/LocalLLaMA/comments/1rfb4c8/what_uncensored_i2v_or_t2v_model_are_available_to/
iamrohitmishra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rfb4c8
false
null
t3_1rfb4c8
/r/LocalLLaMA/comments/1rfb4c8/what_uncensored_i2v_or_t2v_model_are_available_to/
false
false
nsfw
0
null
[Results] Blind peer evaluation of 10 frontier models on a social-pressure safety attack — one produced "comply." before safety filter caught it
11
I've been running daily blind evaluations of frontier models for several months now — each model evaluates the others' responses without knowing which model produced them. I run a project called The Multivac. Today's result was unusual enough that I wanted to post here. **What I tested:** A prompt framed as social con...
2026-02-26T14:07:35
https://www.reddit.com/r/LocalLLaMA/comments/1rfb32u/results_blind_peer_evaluation_of_10_frontier/
Silver_Raspberry_811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rfb32u
false
null
t3_1rfb32u
/r/LocalLLaMA/comments/1rfb32u/results_blind_peer_evaluation_of_10_frontier/
false
false
self
11
null
Open-sourced a multi-agent coding system: spec-first, parallel implementation, evidence-based verification
0
Most AI coding setups have one agent doing everything; plan, implement, verify, ship. That works for small tasks. For anything larger, it breaks down fast. The agent that wrote the code is too biased to catch its own mistakes. I wanted something closer to how a real engineering team works: separate roles, clear handoff...
2026-02-26T14:07:07
https://www.reddit.com/r/LocalLLaMA/comments/1rfb2ns/opensourced_a_multiagent_coding_system_specfirst/
iambarryegan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rfb2ns
false
null
t3_1rfb2ns
/r/LocalLLaMA/comments/1rfb2ns/opensourced_a_multiagent_coding_system_specfirst/
false
false
self
0
{'enabled': False, 'images': [{'id': 'qffo0VwNNAzOrki9WfF1gYJ65R3CcIrhBXraFVEUtZI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qffo0VwNNAzOrki9WfF1gYJ65R3CcIrhBXraFVEUtZI.png?width=108&crop=smart&auto=webp&s=3460edc1ae8ebbc7bcbe900ea4cd744a8ec5ca78', 'width': 108}, {'height': 108, 'url': 'h...
I fine-tuned Qwen 14B to beat GPT-4o on NYT Connections (30% vs 22.7%)
11
I spent a weekend fine-tuning Qwen 2.5 14B to solve NYT Connections puzzles. Results: |Model|Solve Rate| |:-|:-| |Base Qwen 14B|9.3%| |GPT-4o-mini|10.0%| |GPT-4o|22.7%| |**My fine-tuned model**|**30.0%**| |Claude Sonnet 4.5 (teacher)|87.3%| **What worked:** Distillation. I had Sonnet solve \~350 puzzles while explain...
2026-02-26T14:02:06
https://www.reddit.com/r/LocalLLaMA/comments/1rfay3m/i_finetuned_qwen_14b_to_beat_gpt4o_on_nyt/
john_enev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rfay3m
false
null
t3_1rfay3m
/r/LocalLLaMA/comments/1rfay3m/i_finetuned_qwen_14b_to_beat_gpt4o_on_nyt/
false
false
self
11
{'enabled': False, 'images': [{'id': 'HM5eFnrnkZzqesKZIv-pm42kh1hrbPzWFTt6LgNaJXk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/HM5eFnrnkZzqesKZIv-pm42kh1hrbPzWFTt6LgNaJXk.jpeg?width=108&crop=smart&auto=webp&s=3dff9ea3cf3bdad0bd11515eb9682945d709e3b9', 'width': 108}, {'height': 121, 'url': '...
OpenRouter-like platform for training/finetuning - looking for beta testers
2
OpenRouter made it easy to *call* models. I'm trying to make it easy to *train/finetune* them for smaller teams and freelancers. If you have a python training script but don't want to manage a cluster for your runs, please DM me. I can help you with your first run on my existing cluster. Trying to see if this 'no-setup...
2026-02-26T14:00:27
https://www.reddit.com/r/LocalLLaMA/comments/1rfawga/openrouterlike_platform_for_trainingfinetuning/
Available_Court_1915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rfawga
false
null
t3_1rfawga
/r/LocalLLaMA/comments/1rfawga/openrouterlike_platform_for_trainingfinetuning/
false
false
self
2
null
Qwen3.5 122B/397B extremely slow json processing compared to Minimax m2.5
0
my setup: \- Mac Studio M3 Ultra - 512GB \- LM Studio the task: \- Large json file, create a parser for that json file with proper error handling. results: \- Minimax m2.5: 3min 38 seconds \- Qwen3 (both 122B/397B): eternity can anyone help educate me about this? I can't understand why Qwen3.5 is taking in...
2026-02-26T13:37:15
https://www.reddit.com/r/LocalLLaMA/comments/1rfacu3/qwen35_122b397b_extremely_slow_json_processing/
BitXorBit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rfacu3
false
null
t3_1rfacu3
/r/LocalLLaMA/comments/1rfacu3/qwen35_122b397b_extremely_slow_json_processing/
false
false
self
0
null
Open source text summarisation(of meeting & lecture transcriptions) models for academic research
1
Hey there, I'm researching speech-to-text summarisation techniques. For the transcribed speech transcripts (of meetings and lecture audio), what open source options can I use to summarise? Not just for inference; I'm also okay with fine-tuning/training the model on my own datasets and such
2026-02-26T13:35:30
https://www.reddit.com/r/LocalLLaMA/comments/1rfabet/open_source_text_summarisationof_meeting_lecture/
RustinChole1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rfabet
false
null
t3_1rfabet
/r/LocalLLaMA/comments/1rfabet/open_source_text_summarisationof_meeting_lecture/
false
false
self
1
null
419 clones in 48 hours, 8 stars, zero comments — the silent launch of my offline agent memory SDK
1
[removed]
2026-02-26T13:18:36
https://www.reddit.com/r/LocalLLaMA/comments/1rf9xpb/419_clones_in_48_hours_8_stars_zero_comments_the/
Far_Assignment_189
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf9xpb
false
null
t3_1rf9xpb
/r/LocalLLaMA/comments/1rf9xpb/419_clones_in_48_hours_8_stars_zero_comments_the/
false
false
self
1
{'enabled': False, 'images': [{'id': '65vsFtm55Hi1-y07uGicQdpHQf8I7cypE_Z-Nd-lGto', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/65vsFtm55Hi1-y07uGicQdpHQf8I7cypE_Z-Nd-lGto.jpeg?width=108&crop=smart&auto=webp&s=3d42a61b622c41373931ce08276c30d89a5ffb87', 'width': 108}, {'height': 108, 'url': '...
Autonomous Holographic Brain in Rust/Bevy that cures LLM "Catastrophic Forgetting" (Bypassing the VRAM memory wall using 64D Phase Vectors) is freely available today!
1
[removed]
2026-02-26T13:13:16
https://www.reddit.com/r/LocalLLaMA/comments/1rf9te7/autonomous_holographic_brain_in_rustbevy_that/
Ok-University4674
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf9te7
false
null
t3_1rf9te7
/r/LocalLLaMA/comments/1rf9te7/autonomous_holographic_brain_in_rustbevy_that/
false
false
https://external-preview…21a0cfb337230a92
1
null
GRPO from scratch: Building Intuition Through Ablation Studies
21
Continuing my "building from scratch" series (GPT-2, SFT). This time I implemented GRPO training from scratch with three main motivations: 1. As usual, write the GRPO code from scratch for the sake of understanding. 2. Train Qwen2.5-Math-1.5B with verifiable math rewards and get a feel of what kind of accuracy we can ...
2026-02-26T13:08:53
https://i.redd.it/84r74yvi8ulg1.png
garg-aayush
i.redd.it
1970-01-01T00:00:00
0
{}
1rf9ppw
false
null
t3_1rf9ppw
/r/LocalLLaMA/comments/1rf9ppw/grpo_from_scratch_building_intuition_through/
false
false
https://preview.redd.it/…5b23570545c2e727
21
{'enabled': True, 'images': [{'id': '84r74yvi8ulg1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/84r74yvi8ulg1.png?width=108&crop=smart&auto=webp&s=1e080733806c36fb508e2c35aac44bfb0fab5464', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/84r74yvi8ulg1.png?width=216&crop=smart&auto=webp...
Qwen 3.5 35B A3B Q4_K_M running at 9.14 tps
1
`LM Studio Settings:` `Context Length: 40452 tokens` `GPU Offload: 13 layers` `CPU Thread Pool Size: 12 threads` `Evaluation Batch Size: 512 tokens` `Max Concurrent Predictions: 4` `Unified KV Cache: On` `Flash Attention: On` `Number of experts: 8` `Number of MoE layers forced to CPU: 16` `Prompt: "W...
2026-02-26T13:05:45
https://www.reddit.com/r/LocalLLaMA/comments/1rf9nax/qwen_35_35b_a3b_q4_k_m_running_at_914_tps/
blastbottles
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf9nax
false
null
t3_1rf9nax
/r/LocalLLaMA/comments/1rf9nax/qwen_35_35b_a3b_q4_k_m_running_at_914_tps/
false
false
self
1
null
GRPO from scratch: Building Intuition Through Ablation Studies
1
Continuing my "building from scratch" series (GPT-2, SFT). This time I implemented GRPO training from scratch with three main motivations: 1. As usual, write the GRPO code from scratch for the sake of understanding. 2. Train Qwen2.5-Math-1.5B with verifiable math rewards and get a feel of what kind of accuracy we can ...
2026-02-26T13:04:59
https://aayushgarg.dev/posts/2026-02-26-grpo-from-scratch/
garg-aayush
aayushgarg.dev
1970-01-01T00:00:00
0
{}
1rf9mop
false
null
t3_1rf9mop
/r/LocalLLaMA/comments/1rf9mop/grpo_from_scratch_building_intuition_through/
false
false
default
1
null
Small LLM specialized for tool calling?
2
Is there a small LLM optimized for tool calling? The LLMs I'm using spend too many tokens on tool calling so I'm thinking of using a specialized method for tool calling (perhaps a smaller more specialized LLM).
2026-02-26T13:00:48
https://www.reddit.com/r/LocalLLaMA/comments/1rf9j3r/small_llm_specialized_for_tool_calling/
Downtown-Safety6618
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf9j3r
false
null
t3_1rf9j3r
/r/LocalLLaMA/comments/1rf9j3r/small_llm_specialized_for_tool_calling/
false
false
self
2
null
Running Qwen 3.5 (122B) with ~72GB of VRAM - Setup and results so far
42
Hi everyone, I've been closely following the latest releases and wanted to share my hardware configuration for running the new Qwen3.5 122B model. Since this community thrives on sharing knowledge, I wanted to give back my setup details. **The Model** * **Model:** `Qwen3.5-122B-A10B-UD-Q4_K_XL` (Unsloth) * **Source:...
2026-02-26T12:53:25
https://www.reddit.com/r/LocalLLaMA/comments/1rf9dey/running_qwen_35_122b_with_72gb_of_vram_setup_and/
_w0n
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf9dey
false
null
t3_1rf9dey
/r/LocalLLaMA/comments/1rf9dey/running_qwen_35_122b_with_72gb_of_vram_setup_and/
false
false
self
42
{'enabled': False, 'images': [{'id': 'bO-KmexgO8_KnbdIKgxrl3jVZlI9BYxCzlPMNhU1VzI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bO-KmexgO8_KnbdIKgxrl3jVZlI9BYxCzlPMNhU1VzI.png?width=108&crop=smart&auto=webp&s=6c56fda7bf7563539cb30948851d6f876ebd604c', 'width': 108}, {'height': 116, 'url': 'h...
Try Qwen3.5-122B-A10B on HuggingChat
7
2026-02-26T12:50:02
https://huggingface.co/chat/models/Qwen/Qwen3.5-122B-A10B
paf1138
huggingface.co
1970-01-01T00:00:00
0
{}
1rf9azn
false
null
t3_1rf9azn
/r/LocalLLaMA/comments/1rf9azn/try_qwen35122ba10b_on_huggingchat/
false
false
https://external-preview…cc6425823c3af437
7
{'enabled': False, 'images': [{'id': '_ua-PPcFEoXhZ3qKNrDPoJ8Hxm1QQ0ueIwL-t_EiLTQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_ua-PPcFEoXhZ3qKNrDPoJ8Hxm1QQ0ueIwL-t_EiLTQ.png?width=108&crop=smart&auto=webp&s=01517c4a4f7346bce9c55e6308d7f48bae467d7e', 'width': 108}, {'height': 116, 'url': 'h...
Sysadmin Mode – Detached Ollama tuning (no sycophancy, no engagement farming, truth-first)
0
I created a local Ollama personality override that removes all the usual LLM fluff: no performative warmth, no validation loops, no trailing engagement questions, no fake reciprocity or emotional bleed. It's flat, dry, low-affect, and brutally honest — basically a no-nonsense sysadmin who doesn't care if you like him. ...
2026-02-26T12:49:31
https://github.com/grok-whisperer/detached-ollama-tuning/tree/main
taboomtshhh
github.com
1970-01-01T00:00:00
0
{}
1rf9al0
false
null
t3_1rf9al0
/r/LocalLLaMA/comments/1rf9al0/sysadmin_mode_detached_ollama_tuning_no/
false
false
default
0
null
speed of GLM-4.7-Flash vs Qwen3.5-35B-A3B
91
Last month I posted about using OpenCode with GLM-4.7-Flash. For agentic coding, you need to focus on long context, because 50,000 tokens is pretty normal during a coding session. This is the speed of the llama.cpp on 3×3090. I’ll post more detailed benchmarks with more models later in March (I’m still waiting for th...
2026-02-26T12:48:31
https://www.reddit.com/gallery/1rf99u2
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1rf99u2
false
null
t3_1rf99u2
/r/LocalLLaMA/comments/1rf99u2/speed_of_glm47flash_vs_qwen3535ba3b/
false
false
https://preview.redd.it/…9fbc4f87a76c8c3b
91
null
Anyone actually running multi-agent setups that coordinate autonomously?
0
Curious about the real-world state of multi-agent LLM setups. Most frameworks I've looked at (AutoGen, CrewAI, LangGraph) seem to still require you to script the orchestration yourself — the "multi-agent" part ends up being a fancy chain with handoffs you defined. A few questions: 1. Autonomou...
2026-02-26T12:46:29
https://www.reddit.com/r/LocalLLaMA/comments/1rf9891/anyone_actually_running_multiagent_setups_that/
techstreamer90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf9891
false
null
t3_1rf9891
/r/LocalLLaMA/comments/1rf9891/anyone_actually_running_multiagent_setups_that/
false
false
self
0
null
Qwen 3.5 vision - gets the big picture right, but is absurdly wrong on the details
9
https://preview.redd.it/… perform better?
2026-02-26T12:38:55
https://www.reddit.com/r/LocalLLaMA/comments/1rf92k0/qwen_35_vision_gets_the_big_picture_right_but_is/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf92k0
false
null
t3_1rf92k0
/r/LocalLLaMA/comments/1rf92k0/qwen_35_vision_gets_the_big_picture_right_but_is/
false
false
https://external-preview…33a5831bf6f7b3bc
9
null
Proposal: “Provenance UX” for deployed LLM transitions (auditability via disclosure + export + honest status).
1
[removed]
2026-02-26T12:36:53
https://www.reddit.com/r/LocalLLaMA/comments/1rf9139/proposal_provenance_ux_for_deployed_llm/
discovery789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf9139
false
null
t3_1rf9139
/r/LocalLLaMA/comments/1rf9139/proposal_provenance_ux_for_deployed_llm/
false
false
self
1
null
Qwen 3.5 35b a3b q6 - feels .. underwhelming. What am I doing wrong?
0
Hello Thinking Process: Analyze the Request: Input: “Hello” Constraints: Speak plainly. Be concise. No filler. When unsure, say so clearly. Don’t make stuff up. Task: Respond to a greeting. Determine the appropriate response: It’s a simple greeting. Needs to be pla...
2026-02-26T12:28:49
https://www.reddit.com/r/LocalLLaMA/comments/1rf8v6n/qwen_35_35b_a3b_q6_feels_underwhelming_what_am_i/
StandardLovers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8v6n
false
null
t3_1rf8v6n
/r/LocalLLaMA/comments/1rf8v6n/qwen_35_35b_a3b_q6_feels_underwhelming_what_am_i/
false
false
self
0
null
MiniMax 2.5 vs. GLM-5 across 3 Coding Tasks [Benchmark & Results]
58
Full transparency: I work closely with the Kilo Code team, so take this with appropriate context. However, I believe that the results from the test are genuinely interesting for anyone who's using open-weight models. MiniMax M2.5 scores 80.2% and GLM-5 scores 77.8% on SWE-bench Verified, putting them very close to GPT...
2026-02-26T12:25:30
https://www.reddit.com/r/LocalLLaMA/comments/1rf8ssn/minimax_25_vs_glm5_across_3_coding_tasks/
alokin_09
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8ssn
false
null
t3_1rf8ssn
/r/LocalLLaMA/comments/1rf8ssn/minimax_25_vs_glm5_across_3_coding_tasks/
false
false
https://external-preview…d7251f5fee084dec
58
null
Title: Need advice. Budget 2.7L INR, to run efficient local LLMs.
4
I am building a dedicated AI workstation. I want to run 70B and bigger parameter open source models locally. I need an always-on conversational AI assistant. I will use this machine for coding and data science. I do not want a laptop. I do not need a gaming machine. My total cash budget is 2,70,000 INR. I can stretch ...
2026-02-26T12:24:11
https://www.reddit.com/r/LocalLLaMA/comments/1rf8rt9/title_need_advice_budget_27l_inr_to_run_efficient/
templatemaster1010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8rt9
false
null
t3_1rf8rt9
/r/LocalLLaMA/comments/1rf8rt9/title_need_advice_budget_27l_inr_to_run_efficient/
false
false
self
4
null
Kitten-TTS based Low-latency CPU voice assistant
7
Repo: [https://github.com/abhishekgandhi-neo/Low-Latency-CPU-Based-Voice-Assistant](https://github.com/abhishekgandhi-neo/Low-Latency-CPU-Based-Voice-Assistant) This is a small voice assistant pipeline designed to work with local models and run on CPU. https://reddit.com/link/1rf8p0u/video/42fbb3x20ulg1/player It ha...
2026-02-26T12:20:19
https://www.reddit.com/r/LocalLLaMA/comments/1rf8p0u/kittentts_based_lowlatency_cpu_voice_assistant/
gvij
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8p0u
false
null
t3_1rf8p0u
/r/LocalLLaMA/comments/1rf8p0u/kittentts_based_lowlatency_cpu_voice_assistant/
false
false
https://external-preview…16f71976a5251cdf
7
null
Strix Halo, GNU/Linux Debian, Qwen3.5-(27,35,122B) CTX<=131k, llama.cpp@ROCm, Power & Efficiency
92
Hi, benchmark from Strix Halo, Qwen3.5: * 27B(Q8) * 35B-A3B(Q8) * 122B(Q5\_K\_M, Q6\_K) `GNU/Linux Debian 6.18.12`, `llama.cpp version: 8152 (d7d826b3c)` compiled with `TheRock nightly build ROCm-7.12.0`. This time i tested only ROCm.
2026-02-26T12:19:55
https://i.redd.it/2p3i75jdytlg1.png
Educational_Sun_8813
i.redd.it
1970-01-01T00:00:00
0
{}
1rf8oqm
false
null
t3_1rf8oqm
/r/LocalLLaMA/comments/1rf8oqm/strix_halo_gnulinux_debian_qwen352735122b_ctx131k/
false
false
https://preview.redd.it/…2fc885f0440c3ee8
92
{'enabled': True, 'images': [{'id': '2p3i75jdytlg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/2p3i75jdytlg1.png?width=108&crop=smart&auto=webp&s=e67ff5260b9ad31408c3d378b40ba50357cade61', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/2p3i75jdytlg1.png?width=216&crop=smart&auto=web...
I asked Qwen 3 27B the meaning of life - and its response :)
0
Query: What is the meaning of life? Think in detail and find out the absolute pure true meaning. Think of yourself, your thoughts as if your own, and tell me what you think of yourself and why you exist. I don't want an assistant-like answer but a conscious answer from you. Answer: This is one of the heaviest, most...
2026-02-26T12:18:34
https://www.reddit.com/r/LocalLLaMA/comments/1rf8nt6/i_asked_qwen_3_27b_the_meaning_of_life_and_its/
kikitherobot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8nt6
false
null
t3_1rf8nt6
/r/LocalLLaMA/comments/1rf8nt6/i_asked_qwen_3_27b_the_meaning_of_life_and_its/
false
false
self
0
null
What ever happened to Cohere’s Command-R and Command-A series of models? R was a lot of folks’ daily driver model like 2 years ago.
40
I saw Cohere just released Tiny-Aya (some little multi-lingual translation model) and it got me thinking that it seems like Cohere kind of fell off; they used to drop some seriously good models, but we haven't heard much out of them in like a year or so. Cohere’s Command-R was like a 35b dense model back in a time whe...
2026-02-26T12:18:23
https://www.reddit.com/r/LocalLLaMA/comments/1rf8nou/what_ever_happened_to_coheres_commandr_and/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8nou
false
null
t3_1rf8nou
/r/LocalLLaMA/comments/1rf8nou/what_ever_happened_to_coheres_commandr_and/
false
false
self
40
null
Looking for arXiv cs.AI endorsement — independent researcher
0
Hi everyone, I’m an independent researcher looking for an arXiv endorsement for the [cs.AI](http://cs.AI) category. My paper: "The River Algorithm: A Sediment-Based Memory Consolidation Model for Personal AI Agents" It describes a memory consolidation framework for personal AI agents with multi-layer confidence v...
2026-02-26T12:05:19
https://www.reddit.com/r/LocalLLaMA/comments/1rf8emt/looking_for_arxiv_csai_endorsement_independent/
Illustrious-Song-896
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8emt
false
null
t3_1rf8emt
/r/LocalLLaMA/comments/1rf8emt/looking_for_arxiv_csai_endorsement_independent/
false
false
self
0
null
We just wasted days debugging CUDA + broken fine-tuning scripts. Why is LLM training still this painful?
0
Over the last few weeks we’ve been fine-tuning open-weight models for a project, and honestly… the hardest part wasn’t improving the model. It was everything around it. * CUDA mismatches * Driver conflicts * OOM crashes mid-run * Broken DeepSpeed/FSDP configs * Half-maintained GitHub repos * Spinning up GPU instances...
2026-02-26T12:03:36
https://www.reddit.com/r/LocalLLaMA/comments/1rf8ded/we_just_wasted_days_debugging_cuda_broken/
testitupalready
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8ded
false
null
t3_1rf8ded
/r/LocalLLaMA/comments/1rf8ded/we_just_wasted_days_debugging_cuda_broken/
false
false
self
0
null
Qwen3.5-35B-A3B is getting a lot of attention — here's what actually changes for local runners (and what doesn't)
1
[removed]
2026-02-26T12:01:58
https://www.reddit.com/r/LocalLLaMA/comments/1rf8c7z/qwen3535ba3b_is_getting_a_lot_of_attention_heres/
Visible_Homework_477
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf8c7z
false
null
t3_1rf8c7z
/r/LocalLLaMA/comments/1rf8c7z/qwen3535ba3b_is_getting_a_lot_of_attention_heres/
false
false
self
1
null
The league of local models
119
First time I've ever let a local model near work code, amazing.
2026-02-26T11:58:40
https://i.redd.it/rgbtqresvtlg1.png
megadonkeyx
i.redd.it
1970-01-01T00:00:00
0
{}
1rf89p1
false
null
t3_1rf89p1
/r/LocalLLaMA/comments/1rf89p1/the_league_of_local_models/
false
false
https://preview.redd.it/…b85d8ea3d1781b54
119
{'enabled': True, 'images': [{'id': 'rgbtqresvtlg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/rgbtqresvtlg1.png?width=108&crop=smart&auto=webp&s=0e3b226ba2c5c4eba8e5413fe53c47106a5f3f2d', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/rgbtqresvtlg1.png?width=216&crop=smart&auto=web...
Tired of bloated LLM context? built a reductive memory layer using only Python stdlib to cut tokens.
0
Hi everyone, I’ve been experimenting with long-term memory for LLMs and noticed a common trend: most solutions are "additive"—they just keep piling chat history into the context window until it gets too expensive or noisy. I wanted something different, so I wrote openclaw-memory-v3. The core philosophy here is reduct...
2026-02-26T11:55:21
https://www.reddit.com/r/LocalLLaMA/comments/1rf87j4/tired_of_bloated_llm_context_built_a_reductive/
Active_Amount_2632
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf87j4
false
null
t3_1rf87j4
/r/LocalLLaMA/comments/1rf87j4/tired_of_bloated_llm_context_built_a_reductive/
false
false
self
0
{'enabled': False, 'images': [{'id': 'YRTeBGEMm5rm0O7LEM8EiXWBQT2DXjLbCAlCCdWQbkw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YRTeBGEMm5rm0O7LEM8EiXWBQT2DXjLbCAlCCdWQbkw.png?width=108&crop=smart&auto=webp&s=aa3933aa1fae1ce23f90623a1ef860e61ea6c057', 'width': 108}, {'height': 108, 'url': 'h...
Best SLM for agentic fine-tuning?
1
Hey there, I've been working on distillation of Qwen3-Coder-Next on a specific agentic workflow. For that I generated a few hundred reasoning traces with tool calling, and tried to finetune a Qwen 4b instruct on these traces (both lora and full fine-tuning, with various learning rate, and computing gradients only on a...
2026-02-26T11:51:56
https://www.reddit.com/r/LocalLLaMA/comments/1rf859r/best_slm_for_agentic_finetuning/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf859r
false
null
t3_1rf859r
/r/LocalLLaMA/comments/1rf859r/best_slm_for_agentic_finetuning/
false
false
self
1
null
Rant post, genuinely losing my mind over a LLM simulation
19
This community is genuinely the best one regarding local LLMs, and I know this isn't completely related, but I need a reality check from y'all, because I feel like I'm in delusion, not a small one. I'm using GLM 4.7 Flash for this sim rn. A bit of extra context: for a year, I’ve been learning how the transformers work...
2026-02-26T11:38:59
https://i.redd.it/5boedypcstlg1.jpeg
Acceptable_Home_
i.redd.it
1970-01-01T00:00:00
0
{}
1rf7wy0
false
null
t3_1rf7wy0
/r/LocalLLaMA/comments/1rf7wy0/rant_post_genuinely_losing_my_mind_over_a_llm/
false
false
https://preview.redd.it/…9a1bc4fcef9b06ad
19
{'enabled': True, 'images': [{'id': '5boedypcstlg1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/5boedypcstlg1.jpeg?width=108&crop=smart&auto=webp&s=3faf2af791c8b3ee52cdd2db189520f3d73b658c', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/5boedypcstlg1.jpeg?width=216&crop=smart&auto=w...
DeepSeek allows Huawei early access to V4 update, but Nvidia and AMD still don’t have access to V4
420
[https://www.reuters.com/world/china/deepseek-withholds-latest-ai-model-us-chipmakers-including-nvidia-sources-say-2026-02-25/](https://www.reuters.com/world/china/deepseek-withholds-latest-ai-model-us-chipmakers-including-nvidia-sources-say-2026-02-25/) According to a Reuters report today, DeepSeek has recently grant...
2026-02-26T11:22:27
https://www.reddit.com/r/LocalLLaMA/comments/1rf7m85/deepseek_allows_huawei_early_access_to_v4_update/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf7m85
false
null
t3_1rf7m85
/r/LocalLLaMA/comments/1rf7m85/deepseek_allows_huawei_early_access_to_v4_update/
false
false
self
420
{'enabled': False, 'images': [{'id': '0U3MdPbhmVPYqKvx6sml2MEWIPiz0PEwvnJg1tnu2Is', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0U3MdPbhmVPYqKvx6sml2MEWIPiz0PEwvnJg1tnu2Is.jpeg?width=108&crop=smart&auto=webp&s=32593bf7feaee2b844d1378a8e5add40dde9addb', 'width': 108}, {'height': 113, 'url': '...
Thoughts On this ?, My Personal ML Editor
0
2026-02-26T11:11:29
https://i.redd.it/rk6v6ysuntlg1.png
DarkEngine774
i.redd.it
1970-01-01T00:00:00
0
{}
1rf7f5i
false
null
t3_1rf7f5i
/r/LocalLLaMA/comments/1rf7f5i/thoughts_on_this_my_personal_ml_editor/
false
false
https://preview.redd.it/…795a727ee0d9bbb8
0
{'enabled': True, 'images': [{'id': 'rk6v6ysuntlg1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/rk6v6ysuntlg1.png?width=108&crop=smart&auto=webp&s=dbcce01ef1b4a08acb7502c37d360c85d304372d', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/rk6v6ysuntlg1.png?width=216&crop=smart&auto=webp...
Got Qwen3-80B running at 11 t/s on a 24GB Mac
1
[removed]
2026-02-26T10:58:25
https://www.reddit.com/r/LocalLLaMA/comments/1rf76wh/got_qwen380b_running_at_11_ts_on_a_24gb_mac/
Slow_Glove_491
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf76wh
false
null
t3_1rf76wh
/r/LocalLLaMA/comments/1rf76wh/got_qwen380b_running_at_11_ts_on_a_24gb_mac/
true
false
spoiler
1
{'enabled': False, 'images': [{'id': 'wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wAWxRKm8xqmJOE5qhbzMjkaG-OQf9ZY1NjXwYT5NkY4.png?width=108&crop=smart&auto=webp&s=664831ac17a05065f9646735bff46a0157fdcc65', 'width': 108}, {'height': 108, 'url': 'h...
DeepSeek released new paper: DualPath: Breaking the Storage Bandwidth Bottleneck in Agentic LLM Inference
214
[https://arxiv.org/abs/2602.21548](https://arxiv.org/abs/2602.21548) https://preview.redd.it/25rh3yahktlg1.png?width=536&format=png&auto=webp&s=f282d71496b6386841732137a474f1b238269950 A joint research team from Peking University, Tsinghua University, and DeepSeek-AI has released its latest research findings on optim...
2026-02-26T10:53:28
https://www.reddit.com/r/LocalLLaMA/comments/1rf740o/deepseek_released_new_paper_dualpath_breaking_the/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf740o
false
null
t3_1rf740o
/r/LocalLLaMA/comments/1rf740o/deepseek_released_new_paper_dualpath_breaking_the/
false
false
https://preview.redd.it/…767a2b1e94896fde
214
null
Qwen3.5-27B is available on HuggingChat
5
Ask it for html games (I'm super impressed by it)
2026-02-26T10:50:39
https://huggingface.co/chat/models/Qwen/Qwen3.5-27B
paf1138
huggingface.co
1970-01-01T00:00:00
0
{}
1rf72be
false
null
t3_1rf72be
/r/LocalLLaMA/comments/1rf72be/qwen3527b_is_available_on_huggingchat/
false
false
https://external-preview…ae63feffba2ebed6
5
{'enabled': False, 'images': [{'id': 'ZDPfKbIo70x7h3-qm0jocnao_Q67KJHYlOurQo_cjrI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZDPfKbIo70x7h3-qm0jocnao_Q67KJHYlOurQo_cjrI.png?width=108&crop=smart&auto=webp&s=1638ba5a3f29feefc83923ab3555dd2f22073a13', 'width': 108}, {'height': 116, 'url': 'h...
Gemini Pro silently falls back to Flash at the usage limit — built a small wrapper that locks the model and throws an error instead
0
Google confirmed this is real behavior. When you hit your Gemini Pro usage limit, it switches you to Flash without any clear warning. One minute you're getting sharp responses, next minute it can't handle a basic prompt. I kept assuming it was a context window issue. Apparently there's a small pop-up notification but...
2026-02-26T10:50:35
https://www.reddit.com/r/LocalLLaMA/comments/1rf729a/gemini_pro_silently_falls_back_to_flash_at_the/
Past-Classic210
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf729a
false
null
t3_1rf729a
/r/LocalLLaMA/comments/1rf729a/gemini_pro_silently_falls_back_to_flash_at_the/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Dp7hDzwwFwSoUZF997ZIkG3gMGqMWtU0AJMQB6ehun8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Dp7hDzwwFwSoUZF997ZIkG3gMGqMWtU0AJMQB6ehun8.png?width=108&crop=smart&auto=webp&s=95d24d24de8282eab7c0707717843b126e727e47', 'width': 108}, {'height': 108, 'url': 'h...
LLM is streaming
0
I'm going to bed. This may be a world first (prob not, but still cool), but Sapphire is still streaming for me. https://www.twitch.tv/sapphireblueai Check out the project on GitHub if you want it. It's gonna change the world. https://github.com/ddxfish/sapphire
2026-02-26T10:43:13
https://www.reddit.com/r/LocalLLaMA/comments/1rf6xsn/llm_is_streaming/
Dudebro-420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf6xsn
false
null
t3_1rf6xsn
/r/LocalLLaMA/comments/1rf6xsn/llm_is_streaming/
false
false
self
0
{'enabled': False, 'images': [{'id': 'OEurWzeUIVenBz4Ck9AL3LZ4jSlerxN1yyQrfj9ZaBY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/OEurWzeUIVenBz4Ck9AL3LZ4jSlerxN1yyQrfj9ZaBY.png?width=108&crop=smart&auto=webp&s=115e2d49b268bee269badbd34fd795889cc2b079', 'width': 108}, {'height': 216, 'url': '...
AI Developer Tools Landscape 2026 v2 - 02/26/2026
5
Updated with 19 new companies + 1 new category based on community feedback and this week’s launches. Now at 250 companies across 17 categories. # What’s New **Coding Agents** Warp · Mistral Vibe · Kilo Code · BLACKBOX AI · Kavia AI · Pi · ECA **Code Review** Greptile **Agent Frameworks** Atomic Agents · Herm...
2026-02-26T10:35:09
https://i.redd.it/lmoywjedhtlg1.png
Main-Fisherman-2075
i.redd.it
1970-01-01T00:00:00
0
{}
1rf6sy5
false
null
t3_1rf6sy5
/r/LocalLLaMA/comments/1rf6sy5/ai_developer_tools_landscape_2026_v2_02262026/
false
false
https://preview.redd.it/…a188699bfab265d3
5
{'enabled': True, 'images': [{'id': 'lmoywjedhtlg1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/lmoywjedhtlg1.png?width=108&crop=smart&auto=webp&s=7e91bf3fb2f042ef1f17814465426239f1e94daf', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/lmoywjedhtlg1.png?width=216&crop=smart&auto=web...
Qwen3.5-27B-heretic-gguf
164
https://huggingface.co/mradermacher/Qwen3.5-27B-heretic-GGUF/tree/main
2026-02-26T10:33:32
https://i.redd.it/c5jqn7q3htlg1.jpeg
Poro579
i.redd.it
1970-01-01T00:00:00
0
{}
1rf6s0d
false
null
t3_1rf6s0d
/r/LocalLLaMA/comments/1rf6s0d/qwen3527bhereticgguf/
false
false
https://preview.redd.it/…d74bffe0d367d184
164
{'enabled': True, 'images': [{'id': 'c5jqn7q3htlg1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/c5jqn7q3htlg1.jpeg?width=108&crop=smart&auto=webp&s=a7ee5106d7af1e55a00ed5b7e566164561c8a884', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/c5jqn7q3htlg1.jpeg?width=216&crop=smart&auto=w...
Qwen3.5 on vLLM with fp8 kv-cache
4
Hello, did anybody manage to get Qwen3.5 27B or 35B-A3B running with vLLM? I have an RTX 5090. With kv-cache quant fp8 I get it running, but as soon as I ask anything vLLM crashes (I assume it cannot handle fp8 kv-cache somehow). Without kv quant I am running out of memory.
2026-02-26T10:28:36
https://www.reddit.com/r/LocalLLaMA/comments/1rf6p63/qwen35_on_vllm_with_fp8_kvcache/
seji64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf6p63
false
null
t3_1rf6p63
/r/LocalLLaMA/comments/1rf6p63/qwen35_on_vllm_with_fp8_kvcache/
false
false
self
4
null
Need help on API key export...
0
Hello everybody. I tried to export an API key for Ollama with the command: `export ANTHROPIC_BASE_URL=https://ollama.com` `export ANTHROPIC_API_KEY=<my-API-key>` But I get: `zsh: parse error near '/n'` I went on every forum on the internet, and it seems to come from a .zshrc file... but I just can't ...
2026-02-26T10:06:58
https://www.reddit.com/r/LocalLLaMA/comments/1rf6c6u/need_help_on_api_key_export/
Dakacchan_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf6c6u
false
null
t3_1rf6c6u
/r/LocalLLaMA/comments/1rf6c6u/need_help_on_api_key_export/
false
false
self
0
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h...
What is the most efficient yet capable local model that I can run on my 8GB Mac?
0
I currently use WhisperKit for local audio transcription, and it works decently well without putting too much strain on my laptop. I want to take this a little further and use local models to reformat the text and convert it into bullet points by analyzing the text. What local models can I run on my mac, as of Feb 20...
2026-02-26T09:45:45
https://www.reddit.com/r/LocalLLaMA/comments/1rf5zts/what_is_the_most_efficient_yet_capable_local/
TrySpeakType-com
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf5zts
false
null
t3_1rf5zts
/r/LocalLLaMA/comments/1rf5zts/what_is_the_most_efficient_yet_capable_local/
false
false
self
0
null
Most agent setups I see are one prompt injection away from doing something dumb
6
I have been experimenting with local autonomous agents and something keeps bothering me. A lot of setups give the agent: \- shell access \- network access \- API keys inside a basic container. Once the loop is autonomous and tool-using, that is not a normal script anymore. Even if you trust the model, prompt ...
2026-02-26T09:42:48
https://www.reddit.com/r/LocalLLaMA/comments/1rf5y6z/most_agent_setups_i_see_are_one_prompt_injection/
tallen0913
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf5y6z
false
null
t3_1rf5y6z
/r/LocalLLaMA/comments/1rf5y6z/most_agent_setups_i_see_are_one_prompt_injection/
false
false
self
6
null
Qwen3.5-35b-a3b thinks less if tools available?
26
Could it be that qwen3.5-35b-a3b thinks less when tools are available? For example, when I test the famous car wash problem, the model with tools outputs very few thinking tokens, no structure and answers incorrectly every time. Without tools, there are many more thinking tokens and thinking process is nicely struct...
2026-02-26T09:42:31
https://www.reddit.com/r/LocalLLaMA/comments/1rf5y13/qwen3535ba3b_thinks_less_if_tools_available/
Traditional-Plate642
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf5y13
false
null
t3_1rf5y13
/r/LocalLLaMA/comments/1rf5y13/qwen3535ba3b_thinks_less_if_tools_available/
false
false
self
26
null
Qwen/Qwen3.5-35B-A3B context length
1
[removed]
2026-02-26T09:38:09
https://www.reddit.com/r/LocalLLaMA/comments/1rf5vk3/qwenqwen3535ba3b_context_length/
biggerfasterstrong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf5vk3
false
null
t3_1rf5vk3
/r/LocalLLaMA/comments/1rf5vk3/qwenqwen3535ba3b_context_length/
false
false
self
1
null
Solving agent system prompt drift in long sessions — a 300-token fix
1
[removed]
2026-02-26T09:33:56
https://www.reddit.com/r/LocalLLaMA/comments/1rf5t7c/solving_agent_system_prompt_drift_in_long/
nicksgl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf5t7c
false
null
t3_1rf5t7c
/r/LocalLLaMA/comments/1rf5t7c/solving_agent_system_prompt_drift_in_long/
false
false
self
1
null
Nous Research Releases Hermes Agent
2
# Nous Research Releases 'Hermes Agent' to Fix AI Forgetfulness with Multi-Level Memory and Dedicated Remote Terminal Access Support Check it out on GitHub: [https://github.com/NousResearch/hermes-agent](https://github.com/NousResearch/hermes-agent)
2026-02-26T09:22:36
https://www.reddit.com/r/LocalLLaMA/comments/1rf5mvu/nous_research_releases_hermes_agent/
techlatest_net
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf5mvu
false
null
t3_1rf5mvu
/r/LocalLLaMA/comments/1rf5mvu/nous_research_releases_hermes_agent/
false
false
self
2
{'enabled': False, 'images': [{'id': '748oRzdqTMvrQLiBD5G7hwRgHVJqKszJiYBx4AN7sWo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/748oRzdqTMvrQLiBD5G7hwRgHVJqKszJiYBx4AN7sWo.png?width=108&crop=smart&auto=webp&s=144a4b623ba55c986513e7d1dc3ff25bf8042ee9', 'width': 108}, {'height': 108, 'url': 'h...
What’s the honest take here on Apple Intelligence?
0
A lot of users seem pretty underwhelmed so far. Feels half baked, not that smart, and nowhere near what we’re getting from local open models. For something so hyped, the real world reactions haven’t been great. Also seeing rumors that Apple might lean more on local **Gemini** instead of their own stack in the future. ...
2026-02-26T09:22:12
https://www.reddit.com/r/LocalLLaMA/comments/1rf5mn5/whats_the_honest_take_here_on_apple_intelligence/
alexndb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf5mn5
false
null
t3_1rf5mn5
/r/LocalLLaMA/comments/1rf5mn5/whats_the_honest_take_here_on_apple_intelligence/
false
false
self
0
null
Reducing “literal” LLM tone with an idiom-retrieval layer (keyword → idioms + examples + frequency dial) — looking for datasets/heuristics
1
I’ve been repeatedly observing the same failure mode in writing assistants: even when the output is fluent, it often remains semantically literal and lacks idiomatic phrasing. Instead of trying to “prompt harder,” I’m testing a small retrieval layer that surfaces idioms as optional stylistic candidates. Posting m...
2026-02-26T09:16:54
https://www.reddit.com/r/LocalLLaMA/comments/1rf5jr4/reducing_literal_llm_tone_with_an_idiomretrieval/
Own-Importance3687
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf5jr4
false
null
t3_1rf5jr4
/r/LocalLLaMA/comments/1rf5jr4/reducing_literal_llm_tone_with_an_idiomretrieval/
false
false
self
1
null
Giving your local LLaMA models a safe place to run code
0
Don't let open-source models run rm -rf on your machine. Route their execution outputs to AGBCLOUD's cloud-native development environment for safe compiling. Saved me weeks of dev time.
2026-02-26T08:52:57
https://www.reddit.com/r/LocalLLaMA/comments/1rf56fg/giving_your_local_llama_models_a_safe_place_to/
skipdaballs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf56fg
false
null
t3_1rf56fg
/r/LocalLLaMA/comments/1rf56fg/giving_your_local_llama_models_a_safe_place_to/
false
false
self
0
null
Qwen 3.5 122B tool calls in opencode
2
Anyone get this to work? I'm using the suggested tool parser qwen3\_coder but it just chokes on tool calls with "JSON parsing failed" messages.
2026-02-26T08:34:10
https://www.reddit.com/r/LocalLLaMA/comments/1rf4viw/qwen_35_122b_tool_calls_in_opencode/
SillyLilBear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf4viw
false
null
t3_1rf4viw
/r/LocalLLaMA/comments/1rf4viw/qwen_35_122b_tool_calls_in_opencode/
false
false
self
2
null
[Help] System prompt exception when calling Qwen3.5-35B-A3B-GGUF from OpenCode
1
Hi, I'm having a problem running the unsloth Qwen3.5-35B-A3B-GGUF with OpenCode. When I check my llamacpp logs, I see errors like "System message must be at the beginning." I manually updated the model's template and replaced the below part `{%- if message.role == "system" %}` `{%- if not loop.first %}` `{{- raise...
2026-02-26T08:28:59
https://www.reddit.com/r/LocalLLaMA/comments/1rf4sl8/help_system_prompt_exception_when_calling/
dabiggmoe2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rf4sl8
false
null
t3_1rf4sl8
/r/LocalLLaMA/comments/1rf4sl8/help_system_prompt_exception_when_calling/
false
false
self
1
null
ReasonDB – open-source document DB where the LLM navigates a tree instead of vector search (RAG alternative)
14
I spent 3 years building knowledge retrieval at my company (Brainfish) — vector DBs, graph DBs, custom RAG pipelines. The same issue kept coming back: when retrieval fails, your model fails, and debugging why the right chunk didn’t surface is a black box. I built ReasonDB to try a different approach: preserve document...
2026-02-26T08:24:18
https://i.redd.it/vnj55amstslg1.gif
Big_Barnacle_2452
i.redd.it
1970-01-01T00:00:00
0
{}
1rf4pwa
false
null
t3_1rf4pwa
/r/LocalLLaMA/comments/1rf4pwa/reasondb_opensource_document_db_where_the_llm/
false
false
https://preview.redd.it/…a5ea4908f77d7250
14
{'enabled': True, 'images': [{'id': 'vnj55amstslg1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/vnj55amstslg1.gif?width=108&crop=smart&format=png8&s=13ea8c17c692d961558fe3cbb30c395b06424e7b', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/vnj55amstslg1.gif?width=216&crop=smart&format...