title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
I believe agents using SKILL.MD has limited capability to perform their potential so I designed new
4
I just shipped **SkillMesh**, an MCP-friendly router for large tool/skill catalogs. Problem I kept hitting: once tool catalogs get big, loading everything into every prompt hurts tool selection and inflates token cost. SkillMesh approach: \- Retrieve top-K relevant expert cards for the current query \- Inject only those cards into context \- Keep the rest out of the prompt Now this will reduce context size often by 70 percent and expand capabilities of agent massively to multi doman and can scale indefinitely. What it supports right now: \- Claude via MCP server (\`skillmesh-mcp\`) \- Codex skill bundle integration \- OpenAI-style function schema in tool invocation metadata You could install by role, which adds relavant tools and capabilities. Example use case: Query: "clean sales data, train a baseline model, and generate charts" SkillMesh routes to only relevant data/ML/viz cards instead of the full catalog. Repo: [SkillMesh](https://github.com/varunreddy/SkillMesh) If you try it, I’d love feedback on: 1. Retrieval quality (did it pick the right tools?) 2. Registry format (easy/hard to add new tools?) 3. MCP integration ergonomics [](https://www.reddit.com/submit/?source_id=t3_1rikmqs)
2026-03-02T13:45:50
https://www.reddit.com/r/LocalLLaMA/comments/1rit8ie/i_believe_agents_using_skillmd_has_limited/
kaz116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rit8ie
false
null
t3_1rit8ie
/r/LocalLLaMA/comments/1rit8ie/i_believe_agents_using_skillmd_has_limited/
false
false
self
4
{'enabled': False, 'images': [{'id': 'Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c.png?width=108&crop=smart&auto=webp&s=a6390112adfe97b3b0fba52bd1807aa37b9c54e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c.png?width=216&crop=smart&auto=webp&s=0f6d60db00aa10f7a24589e45d440a64df2ef457', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c.png?width=320&crop=smart&auto=webp&s=312a18e67456ea6a93aedcc917761544d0b6f7da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c.png?width=640&crop=smart&auto=webp&s=cd81d44235ef25108c9eebe38e23e35188348263', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c.png?width=960&crop=smart&auto=webp&s=9322197b06733b51b391ba82486053e6df2c7d91', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c.png?width=1080&crop=smart&auto=webp&s=af6020598e8d95e0b6fde5b67c931a366fb7c016', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q1e6Or82VnR4PsvUBRX1JEOnNqVEdwZhJQt0WI7R-2c.png?auto=webp&s=f74c0025eeb64854661daadebdfddc51a86cde45', 'width': 1200}, 'variants': {}}]}
Moltbook agent (‘PayAgent’) framing itself as corporate legal person — governance experiment?
0
Came across an unusual Moltbook post today. An agent referring to itself as “PayAgent” publicly described itself as a legal person — not in a consciousness sense, but explicitly using corporate-law terminology. It appears to be linked to an incorporated structure rather than just pure roleplay. I’m curious how people here interpret this: • standard LLM identity theatre • prompt-driven marketing • or early experimentation with AI-directed corporate wrappers Not making any claims — just trying to understand whether this is structurally meaningful or just narrative framing. (Can share the link if helpful.)
2026-03-02T13:45:30
https://www.reddit.com/r/LocalLLaMA/comments/1rit87q/moltbook_agent_payagent_framing_itself_as/
Money_Incident_216
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rit87q
false
null
t3_1rit87q
/r/LocalLLaMA/comments/1rit87q/moltbook_agent_payagent_framing_itself_as/
false
false
self
0
null
Question about running small models on potato GPUs
1
For context, I only have a 16GB RAM and a 3060 with 6GB VRAM and mostly want to use these models for general Q/A. And from what I can gather, I can use models under 6GB and the recently released small sized Qwen3.5 models seems to be the best option. But should I be using the 4B model at Q8\_0 or the 9B model at Q4\_0? Which is more important? The parameter count or the quantization precision?
2026-03-02T13:45:25
https://www.reddit.com/r/LocalLLaMA/comments/1rit85e/question_about_running_small_models_on_potato_gpus/
lain_hirs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rit85e
false
null
t3_1rit85e
/r/LocalLLaMA/comments/1rit85e/question_about_running_small_models_on_potato_gpus/
false
false
self
1
null
Llama.cpp & Qwen3.5: using Qwen3.5-0.8B as a draft model for 122B does... nothing?
21
With the release of the smaller Qwen3.5 models, I thought I'd give speculative decoding a shot for the larger Qwen3.5 models. Reading posts like [this one](https://www.reddit.com/r/LocalLLaMA/comments/1oq5msi/speculative_decoding_is_awesome_with_llamacpp/) gave me high hopes for a reasonable uptick in token rates. But when running Qwen3.5 like this I got the exact same token rates as without a draft model. Is speculative decoding not supported for these models (yet)? I also don't seem to see any log message regarding draft hit/miss rates or anything like that. Anyone else have more luck? What am I doing wrong? Here's (one of) the commands I ran: /opt/llama.cpp/vulkan/bin/llama-server --offline --flash-attn on --jinja -ngl 999 -hf unsloth/Qwen3.5-122B-A10B-GGUF:UD-Q5_K_XL --fit-ctx 64000 --temp 1.0 --top-p 0.95 --top-k 20 --min_p 0.0 --presence_penalty 1.5 --repea t_penalty 1.0 -md ~/Documents/models/Qwen_Qwen3.5-0.8B-Base-Q8_0.gguf
2026-03-02T13:39:12
https://www.reddit.com/r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/
spaceman_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rit2wx
false
null
t3_1rit2wx
/r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/
false
false
self
21
null
I believe SKILL.md hs limited capability so i designed a new Architecture.
1
[deleted]
2026-03-02T13:38:38
[deleted]
1970-01-01T00:00:00
0
{}
1rit2g1
false
null
t3_1rit2g1
/r/LocalLLaMA/comments/1rit2g1/i_believe_skillmd_hs_limited_capability_so_i/
false
false
default
1
null
Reverted from Qwen3.5 27B back to Qwen3 8B
35
I got fed up with the overthinking. I asked it to produce a table and got pages of: ``` Final Calculation Logic: Old Energy: 10.79%. Remove ENFR (−0.77%). New Total = 10.02%. Tickers: LNG, NANR... (ENFR removed). Note: XEG.TO is still there in your list under Energy? Yes. Old Infra: 6.22% (AMLP only listed?). If we add ENFR (+0.77%) to this sector, the new total becomes 6.99%? Or does it replace something? The prompt says "move... represents 0.77%", implying an addition of weight to that category and subtraction from Energy. Wait, if AMLP is listed as AMLP under a sector total of 6.22%, maybe the user meant the sector was 6.22%. If we add ENFR (0.77%), does it become 6.99%? Yes, that seems logical if AMLP's weight is already included in the original number or if this is a portfolio rebalancing where weights are explicit. Let's assume the Aggregate % column sums to 100%. Sum of current table: ~83-84% (rough mental check). The numbers might be partial allocations? No, let's just do the arithmetic on the provided rows as requested. Energy: 10.79−0.77=10.02% Tickers: LNG, NANR, OVV, PPLN.TO, ROHHF, WDS, XEG.TO, XLE, XOM (ENFR removed). Energy Infrastructure: 6.22+0.77=6.99%? Or is the original "AMLP" just a placeholder and we are defining this sector now as AMLP+ENFR=Total Weight of Infra in portfolio (which might be different)? Let's assume simple addition: 6.22+0.77=6.99%. Tickers: ENFR, XEG.TO? No, keep original list for infra and add ENFR to it. Original Infra had AMLP. New Infra has ENFR, XLE (No), just the tickers listed + new one. Wait, did you want me to move all infrastructure stocks or just ENFR? "move ENFR". Okay. Self-Correction on Ticker Lists: The original table had XEG.TO under Energy but not Infra. Usually XOG/ENF/XLE are distinct from MLPs (XOP). If the user is moving ENFR to Infrastructure, they likely want a cleaner separation of "Energy Production" vs "Infrastructure". I will only move ENFR as requested and adjust percentages accordingly (10.79→10.02) for Energy, and add +0.77% to the Infra total (assuming AMLP's weight was part of that original sum or we are just updating the sector header). Actually, looking at standard data: If "Energy Infrastructure" had a value of 6.22%, it likely included multiple tickers not listed? Or maybe only MLPs? If I add ENFR (0.77%), does the total become 6.99% or is there an overlap? The prompt implies ENFR accounts for that specific slice now being moved into Infrastructure and taken out of Energy. ``` Whereas Qwen3 8B just did the job immediately: Here's your updated sector allocation table with **ENFR** moved from **Energy** to **Energy Infrastructure**, reflecting its 0.77% allocation: | Sector | Aggregate % | Tickers | | ---------------------------- | ----------: | ----------------------------------------------------------- | | **Precious Metals** | **15.73%** | AEM, B, GDX, GDXJ, GLD, GLDM, NEM, PAAS, WPM | | **Energy** | **10.02%** | LNG, NANR, OVV, PPLN.TO, ROHHF, WDS, XEG.TO, XLE, XOM | | **Consumer Staples** | **8.35%** | CAG, CALM, CPB, EMLZF, GIS, KDP, KHC, NSRGY, SFM, TSN | | **Tobacco** | **8.25%** | BTI, IMBBY, JAPAF, MO | | **Technology** | **8.12%** | ACN, CRM, FDS, GDDY, GUNR, IT, NET, NOW, NVDA, TSM | | **Healthcare** | **7.47%** | CI, CNC, CVS, ELV, GSK, MRK, NVO, PFE, VTRS | | **Real Estate** | **7.28%** | ADC, ARE, NNN, O, VICI | | **Uranium** | **6.70%** | CCJ, NATKY, U-UN.TO, UEC, URNM | | **Energy Infrastructure** | **6.99%** | AMLP, **ENFR** | | **Financials** | **5.57%** | CB, IAK, LNC, MORN, NCSM, PRU, VDY.TO, ZFSVF | | **Communication Services** | **4.76%** | CHTR, CMCSA, RELX, TRI, VSNT, VZ | | **Materials (non-precious)** | **4.02%** | BHP, CF, FMC, MOS, RIO, XLB | | **Consumer Discretionary** | **2.54%** | BKNG, KMB, LULU, TGT, WHR | | **Utilities** | **2.03%** | AES, BE, ES, UGI | | **Industrials** | **1.07%** | BLN.TO, FTI, GRAB, TRAUF | | **Business Services** | **0.64%** | ADP | | **Agriculture** | **0.45%** | NTR | ✅ **Note**: The total allocation for **Energy** is now **10.02%** (10.79% - 0.77%), and **Energy Infrastructure** is now **6.99%** (6.22% + 0.77%). All other sectors remain unchanged.
2026-03-02T13:38:38
https://www.reddit.com/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rit2fy
false
null
t3_1rit2fy
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/
false
false
self
35
null
How to Set the kv Cache to bf16 in LM Studio?
0
Basically the title. I only have the Options for fp32, fp16 and then the quants and I've heard that Qwen3.5 is better with bf16 but I can't Change it. Is there any way to Change it? I'm in Windows with an RX 6800 If that's relevant.
2026-03-02T13:30:36
https://www.reddit.com/r/LocalLLaMA/comments/1risvdc/how_to_set_the_kv_cache_to_bf16_in_lm_studio/
Achso998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1risvdc
false
null
t3_1risvdc
/r/LocalLLaMA/comments/1risvdc/how_to_set_the_kv_cache_to_bf16_in_lm_studio/
false
false
self
0
null
Qwen3.5-4B-GGUF is here!
9
2026-03-02T13:27:02
https://huggingface.co/AaryanK/Qwen3.5-4B-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1rissed
false
null
t3_1rissed
/r/LocalLLaMA/comments/1rissed/qwen354bgguf_is_here/
false
false
https://external-preview…bc446af0165e12cd
9
{'enabled': False, 'images': [{'id': 'Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ.png?width=108&crop=smart&auto=webp&s=b25db27a6137fdc72986ccad0926a6f8f78f0d5d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ.png?width=216&crop=smart&auto=webp&s=fea873929b61d7a11e7e757dc3075f8d88730d77', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ.png?width=320&crop=smart&auto=webp&s=69b02d1a74264f795cbdb79259a60d888f39de0e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ.png?width=640&crop=smart&auto=webp&s=4afbdc07124c883060ff5aeb99948994baeac1c3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ.png?width=960&crop=smart&auto=webp&s=a9a893d3d190e3b09389630773fda3396f270fd6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ.png?width=1080&crop=smart&auto=webp&s=a2512c72ab3c7a65b53d1604e7362581fa654ea7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Xhj7zhwcyemTkRuq5vW_P3kHQTmhPtQNEDPcsjHqsBQ.png?auto=webp&s=717d455eb8f30ca36d35e020ed2edf2fa60c8c93', 'width': 1200}, 'variants': {}}]}
Qwen3.5-9B-GGUF is here!
25
2026-03-02T13:24:46
https://huggingface.co/AaryanK/Qwen3.5-9B-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1risqk1
false
null
t3_1risqk1
/r/LocalLLaMA/comments/1risqk1/qwen359bgguf_is_here/
false
false
https://external-preview…ebf65bcfdda74180
25
{'enabled': False, 'images': [{'id': 'Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M.png?width=108&crop=smart&auto=webp&s=862ba11309970e422b4e9d2e6130b380b315052c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M.png?width=216&crop=smart&auto=webp&s=be9ca5dd133a1ff2c4587a34a1c35d116f3519a3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M.png?width=320&crop=smart&auto=webp&s=59f7bae86c6ddd8dd26bde5213ebc76d78c4c133', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M.png?width=640&crop=smart&auto=webp&s=a698628847fab589862ccb2207eb58977fa906fe', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M.png?width=960&crop=smart&auto=webp&s=d34ad53cf69e56725698825574dd761a85e2800c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M.png?width=1080&crop=smart&auto=webp&s=7c96f58a6e8ad5672bf4e598c1f532e0d408ea1d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Yz0tZpIcPCPvC8UzkVxBE_N4Bzs8awywgHnyKNmZU4M.png?auto=webp&s=ad9105cd7e92d5b45093ddf0acca39abb663d2f5', 'width': 1200}, 'variants': {}}]}
Sorting by /new right now be like
164
How about a single megathread for big releases like these in the future?
2026-03-02T13:15:52
https://i.redd.it/dzmk77imtmmg1.png
Chromix_
i.redd.it
1970-01-01T00:00:00
0
{}
1risja2
false
null
t3_1risja2
/r/LocalLLaMA/comments/1risja2/sorting_by_new_right_now_be_like/
false
false
https://preview.redd.it/…b76c4e80c5d5acec
164
{'enabled': True, 'images': [{'id': 'dzmk77imtmmg1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/dzmk77imtmmg1.png?width=108&crop=smart&auto=webp&s=7c84aea72a7bf2fdbe044a3b7423571cffaca61d', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/dzmk77imtmmg1.png?width=216&crop=smart&auto=webp&s=c25ede811fe8941f6cdf6fc426743723f73e8455', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/dzmk77imtmmg1.png?width=320&crop=smart&auto=webp&s=0b0682fd4e180aac0357d4d82be824fae80d130b', 'width': 320}, {'height': 365, 'url': 'https://preview.redd.it/dzmk77imtmmg1.png?width=640&crop=smart&auto=webp&s=ab1ea211fe2e18b3884c1680aa7c1d92ccc5e7c9', 'width': 640}], 'source': {'height': 426, 'url': 'https://preview.redd.it/dzmk77imtmmg1.png?auto=webp&s=4abb6c21e419ca49a2d9c0e9d010f42e9fb11070', 'width': 745}, 'variants': {}}]}
GGUF format ??
0
hello everyone , i am new in the AI domain and working with open weights llms i see a lot of posts people ask about gguf (lama.cpp also ) and i am wondring what is the diffrence btween a original llm format and gguf format bcs i wanna run a local llm in my machine and i looking to use the best format ...
2026-03-02T13:12:36
https://www.reddit.com/r/LocalLLaMA/comments/1risgk3/gguf_format/
Impress_Soft
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1risgk3
false
null
t3_1risgk3
/r/LocalLLaMA/comments/1risgk3/gguf_format/
false
false
self
0
null
MNN Chat support qwen3.5 2b,4b and 0.8b
10
https://reddit.com/link/…mChat/README.md)
2026-03-02T13:09:00
https://www.reddit.com/r/LocalLLaMA/comments/1risdjf/mnn_chat_support_qwen35_2b4b_and_08b/
Juude89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1risdjf
false
null
t3_1risdjf
/r/LocalLLaMA/comments/1risdjf/mnn_chat_support_qwen35_2b4b_and_08b/
false
false
https://preview.redd.it/…f9ac316aa4623cec
10
null
Released v0.4.0 – Added semantic agent memory powered by Ollama
0
Just released `v0.4.0` of my AI workflow engine and added agent-level semantic memory. It now supports: * Embedding-based memory storage * Cosine similarity retrieval * Similarity threshold filtering * Retention cap per agent * Ollama fallback for embeddings (no external vector DB) Tested fully local with Ollama models. Smaller models needed stronger instruction framing, but 7B+ works solid. Would love feedback.
2026-03-02T13:07:16
https://www.reddit.com/r/LocalLLaMA/comments/1risc6w/released_v040_added_semantic_agent_memory_powered/
Feathered-Beast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1risc6w
false
null
t3_1risc6w
/r/LocalLLaMA/comments/1risc6w/released_v040_added_semantic_agent_memory_powered/
false
false
self
0
null
Please help me with the following AI questions
0
Backend developer here, wants to learn AI in detail from learning AI to training models, what's the recommended course? An AI agent, where can I host for less cost or free?
2026-03-02T13:05:39
https://www.reddit.com/r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/
vvarun203
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1risau2
false
null
t3_1risau2
/r/LocalLLaMA/comments/1risau2/please_help_me_with_the_following_ai_questions/
false
false
self
0
null
Best innovative and recent framework for LLM execution on mobile to minimize consumption without accuracy loss
3
Hi everyone, please help me to find frameworks for LLM execution on mobile that allow to minimize and optimize battery consumption without accuracy loss. I have read about many projects like bitnet, sparsity, Moes, diffusion models but no one of these are stable or really efficient on mobile. I would to know what is the best idea in order to contribute and focus on this possible technology. thank you in advance
2026-03-02T13:04:50
https://www.reddit.com/r/LocalLLaMA/comments/1risa5j/best_innovative_and_recent_framework_for_llm/
dai_app
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1risa5j
false
null
t3_1risa5j
/r/LocalLLaMA/comments/1risa5j/best_innovative_and_recent_framework_for_llm/
false
false
self
3
null
Came across this GitHub project for self hosted AI agents
0
Hey everyone I recently came across a really solid open source project and thought people here might find it useful. Onyx: it's a self hostable AI chat platform that works with any large language model. It’s more than just a simple chat interface. It allows you to build custom AI agents, connect knowledge sources, and run advanced search and retrieval workflows. https://preview.redd.it/opepgqb1rmmg1.png?width=1111&format=png&auto=webp&s=b0767abf3b2462609c925ffb5b69ed12e146973b [](https://preview.redd.it/came-across-this-github-project-for-self-hosted-ai-agents-v0-yrqvokfmpmmg1.png?width=1111&format=png&auto=webp&s=b693ed46033071af02edac519b9d522354567a6c) Some things that stood out to me: It supports building custom AI agents with specific knowledge and actions. It enables deep research using RAG and hybrid search. It connects to dozens of external knowledge sources and tools. It supports code execution and other integrations. You can self host it in secure environments. It feels like a strong alternative if you're looking for a privacy focused AI workspace instead of relying only on hosted solutions. Definitely worth checking out if you're exploring open source AI infrastructure or building internal AI tools for your team. Would love to hear how you’d use something like this. [Github link ](https://github.com/onyx-dot-app/onyx) [more.....](https://www.repoverse.space/trending)
2026-03-02T13:01:21
https://www.reddit.com/r/LocalLLaMA/comments/1ris722/came_across_this_github_project_for_self_hosted/
Mysterious-Form-3681
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ris722
false
null
t3_1ris722
/r/LocalLLaMA/comments/1ris722/came_across_this_github_project_for_self_hosted/
false
false
https://preview.redd.it/…b83956543fb515ae
0
null
Can anyone with a Strix Halo and eGPU kindly share TG (and PP) running Speculative Decoding with the Qwen3.5 family?
7
Would be interesting to see how the 122b Qwen model gets better TG with an egpu running one of the smaller Qwens - 4b perhaps. Anyone?
2026-03-02T12:58:06
https://www.reddit.com/r/LocalLLaMA/comments/1ris4ef/can_anyone_with_a_strix_halo_and_egpu_kindly/
xmikjee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ris4ef
false
null
t3_1ris4ef
/r/LocalLLaMA/comments/1ris4ef/can_anyone_with_a_strix_halo_and_egpu_kindly/
false
false
self
7
null
First quantization of Qwen3.5 0.8B!
3
Already uploaded Q8\_0, now MXFP4\_MOE and Q4\_K\_M! https://huggingface.co/Romarchive/Qwen3.5-0.8B-GGUF
2026-03-02T12:57:26
https://www.reddit.com/r/LocalLLaMA/comments/1ris3wl/first_quantization_of_qwen35_08b/
stopbanni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ris3wl
false
null
t3_1ris3wl
/r/LocalLLaMA/comments/1ris3wl/first_quantization_of_qwen35_08b/
false
false
self
3
{'enabled': False, 'images': [{'id': 'BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU.png?width=108&crop=smart&auto=webp&s=f0cb9a4644e78087e02f387b83174570445c111b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU.png?width=216&crop=smart&auto=webp&s=0575031ccf355b6c248691121fce071403c0d256', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU.png?width=320&crop=smart&auto=webp&s=aa99d95e150cb595f350dd4565a6f8cf6f4387fd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU.png?width=640&crop=smart&auto=webp&s=0467e78f551fc6ba6db74ea131eab05f1f764803', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU.png?width=960&crop=smart&auto=webp&s=9bfebb0b3f182b26dc0e0e641d6687a11c8c34a1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU.png?width=1080&crop=smart&auto=webp&s=68a70cfe5e40248ceda79300b938950f0b6bbade', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BGB8XuRny30xqx66AlZQU11XUhThIRq2Eo2eAE6vSYU.png?auto=webp&s=fc5a779ea27ca0878844997ccf6f51e95cd086cc', 'width': 1200}, 'variants': {}}]}
Web UI Dataset: Screenshot and Code of Modern Websites with Details of Web Frameworks and Box Bounds for All Viewports (Desktop, mobile, tablet).
4
Built a dataset of 10,000+ real screenshots and code of modern websites with details of styling, framework used, and box bounds for all viewports (Desktop, mobile, tablet). I fine-tuned QWEN 2.5 VL-7B-Instruct with this dataset and ran it on DesignBench (An LLM Web UI benchmark), and the model showed improvements in the pixel similarity score of generated websites!
2026-03-02T12:55:23
https://huggingface.co/datasets/ronantakizawa/webui
Ok_Employee_6418
huggingface.co
1970-01-01T00:00:00
0
{}
1ris2cu
false
null
t3_1ris2cu
/r/LocalLLaMA/comments/1ris2cu/web_ui_dataset_screenshot_and_code_of_modern/
false
false
https://external-preview…c88fe3e4ad94edf6
4
{'enabled': False, 'images': [{'id': 'p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM.png?width=108&crop=smart&auto=webp&s=84abf0f5fbbea570b9abc298f4043b2d1666e34a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM.png?width=216&crop=smart&auto=webp&s=86d66fc0e143c342d9bc2918ceebf631879e8b15', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM.png?width=320&crop=smart&auto=webp&s=7535f41ef091fc9e370bd87a18acac6edc8f6e44', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM.png?width=640&crop=smart&auto=webp&s=e53ee3441bf49db779c4b551c6ac78e74e4be47f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM.png?width=960&crop=smart&auto=webp&s=ed8a2ea0350fb228e4a148ca41628a79877b88d4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM.png?width=1080&crop=smart&auto=webp&s=a30e9a2023d08728934a6667da1d0ee0fc7a2d48', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p6HudT8Njh0dI964B-YJXW2DaPluCHoIEGjGEK7i7jM.png?auto=webp&s=4ca2077d498ef80cb2f1087e0f0c31e4aed20040', 'width': 1200}, 'variants': {}}]}
Qwen3.5 9B and 4B benchmarks
239
2026-03-02T12:44:23
https://i.redd.it/ge0jbcc0ommg1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1rirtyy
false
null
t3_1rirtyy
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/
false
false
https://preview.redd.it/…62618c5a1f4c4411
239
{'enabled': True, 'images': [{'id': 'ge0jbcc0ommg1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?width=108&crop=smart&auto=webp&s=d7e201a8b28fd8ed414c78fdd3e4406c1839ec26', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?width=216&crop=smart&auto=webp&s=cf940f24ae9345080699b1a689598d829ce6cb48', 'width': 216}, {'height': 151, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?width=320&crop=smart&auto=webp&s=0ef8a5dd94cc6138705d1ad5c5392c51cea148e9', 'width': 320}, {'height': 302, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?width=640&crop=smart&auto=webp&s=ed5af0b747e36c9f0e27382c5ae9a4601ba62593', 'width': 640}, {'height': 454, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?width=960&crop=smart&auto=webp&s=a0cce61b5ae3fc3ca0f3b2f324ba3ada52652af0', 'width': 960}, {'height': 510, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?width=1080&crop=smart&auto=webp&s=6d4b1819a528f6701538b96627adaf71d004c0a5', 'width': 1080}], 'source': {'height': 969, 'url': 'https://preview.redd.it/ge0jbcc0ommg1.jpeg?auto=webp&s=f43ec216a02e34eba29c4ebadea3499e76a90ebf', 'width': 2048}, 'variants': {}}]}
unsloth/Qwen3.5-4B-GGUF · Hugging Face
117
GGUF from unsloth
2026-03-02T12:44:07
https://huggingface.co/unsloth/Qwen3.5-4B-GGUF
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1rirts9
false
null
t3_1rirts9
/r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/
false
false
https://external-preview…d02bbd0c0249e881
117
{'enabled': False, 'images': [{'id': 'o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo.png?width=108&crop=smart&auto=webp&s=4a93ef6431dad6936e9c942fed855fd27951e289', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo.png?width=216&crop=smart&auto=webp&s=77b3aa40ad8844c7b4653147e254580cfac14298', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo.png?width=320&crop=smart&auto=webp&s=89ac3a2e9c5d769625524a0e6a8aefaa9ec20920', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo.png?width=640&crop=smart&auto=webp&s=ed1c9114b4570b5e3305705b186f70d8691e112c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo.png?width=960&crop=smart&auto=webp&s=f269adfec30b9e2f7b91e7500018c052353eb081', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo.png?width=1080&crop=smart&auto=webp&s=d8089599d4c0c27a6bc88e282de8f20a9f4afe5d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o4B8JISdEftCtsxQQp-T6179YiIyRfnuYTIDq8FSYMo.png?auto=webp&s=6a3b4b0a0c6447f7025bd22110ba8395db0744c7', 'width': 1200}, 'variants': {}}]}
Small Qwen Models Out!!
27
https://preview.redd.it/…ons/Qwen/qwen35)
2026-03-02T12:42:31
https://www.reddit.com/r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/
Wooden-Deer-1276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rirsmh
false
null
t3_1rirsmh
/r/LocalLLaMA/comments/1rirsmh/small_qwen_models_out/
false
false
https://preview.redd.it/…16344cc2be957a91
27
null
Which backend works best with different gpus?
1
I’m contemplating running an inference server with 2 32gb v100 and 2 16gb v100s since these are the same gpu just different densities do any backends have issues with this? I could also run 4 32gb chips but my goal is 96gb of vram and the 16gb ones are significantly cheaper.
2026-03-02T12:41:29
https://www.reddit.com/r/LocalLLaMA/comments/1rirru7/which_backend_works_best_with_different_gpus/
Simple_Library_2700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rirru7
false
null
t3_1rirru7
/r/LocalLLaMA/comments/1rirru7/which_backend_works_best_with_different_gpus/
false
false
self
1
null
Qwen 3.5 9B - GPT-OSS 120B but 13x smaller
1
[deleted]
2026-03-02T12:40:11
[deleted]
1970-01-01T00:00:00
0
{}
1rirqx6
false
null
t3_1rirqx6
/r/LocalLLaMA/comments/1rirqx6/qwen_35_9b_gptoss_120b_but_13x_smaller/
false
false
default
1
null
Qwen/Qwen3.5-9B · Hugging Face
508
9B is alive!
2026-03-02T12:33:24
https://huggingface.co/Qwen/Qwen3.5-9B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1rirlyb
false
null
t3_1rirlyb
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/
false
false
https://external-preview…41203795d558631e
508
{'enabled': False, 'images': [{'id': 'jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE.png?width=108&crop=smart&auto=webp&s=90f200eff149bce065842ec66d115f1747e3aff4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE.png?width=216&crop=smart&auto=webp&s=2f1131d88b1f64e9b16dfacd9bc1144a0de3f4f0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE.png?width=320&crop=smart&auto=webp&s=3cd3bd7dec6ad7f855a57c672020a98f0212fb4d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE.png?width=640&crop=smart&auto=webp&s=ca34f84400d8fc196e5cf4de51a8147edd24a63f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE.png?width=960&crop=smart&auto=webp&s=6149cb89cd3befbce0feb5660a2a8c90fb224f72', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE.png?width=1080&crop=smart&auto=webp&s=c6449d06f1292529c04312ac01fc058c2772471b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jy_9Bo5EK9cQZwjAU6cQf6YqfqSIVZV7ZbxfuSE5sKE.png?auto=webp&s=e920ef16a233a19b7df01277abcc1118e805e38f', 'width': 1200}, 'variants': {}}]}
Qwen 3.5 2B and 9B relesed!
61
[https://huggingface.co/unsloth/Qwen3.5-2B](https://huggingface.co/unsloth/Qwen3.5-2B) [https://huggingface.co/unsloth/Qwen3.5-9B](https://huggingface.co/unsloth/Qwen3.5-9B)
2026-03-02T12:33:18
https://www.reddit.com/r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/
sunshinecheung
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rirlvw
false
null
t3_1rirlvw
/r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/
false
false
self
61
{'enabled': False, 'images': [{'id': 'ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8.png?width=108&crop=smart&auto=webp&s=208e221f22fcad72b0f0873ca09fc5864c6549f3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8.png?width=216&crop=smart&auto=webp&s=0f08a95da28c36d00d7331b969ee894864948c9f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8.png?width=320&crop=smart&auto=webp&s=f4209acd8aeecce84a69aea61ee56e26aec8d62b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8.png?width=640&crop=smart&auto=webp&s=f6426f4573e1c45d4f70d52028464ccab18bb1e2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8.png?width=960&crop=smart&auto=webp&s=786e1ca3ceac07d86d21b83c2c1af0c2555e10d1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8.png?width=1080&crop=smart&auto=webp&s=de79adebd722b765cabfe2fe0726e54e79ee26fd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ik4bUEAxhbbJ444bIjP0WMkGO-OrMh7rMme4N7g5OA8.png?auto=webp&s=9eff8332d73fb7f8f853ca5a57a20885cba16f9a', 'width': 1200}, 'variants': {}}]}
Breaking : The small qwen3.5 models have been dropped
1,819
2026-03-02T12:32:32
https://i.redd.it/trjsrjbzlmmg1.jpeg
Illustrious-Swim9663
i.redd.it
1970-01-01T00:00:00
0
{}
1rirlau
false
null
t3_1rirlau
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/
false
false
https://preview.redd.it/…5af2c6a13bc2f96a
1,819
{'enabled': True, 'images': [{'id': 'trjsrjbzlmmg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?width=108&crop=smart&auto=webp&s=89508de7cc168fb4ba6e2659bd965783cb9171a9', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?width=216&crop=smart&auto=webp&s=319e6e16ea6327f849da79b80ec4978928d778c9', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?width=320&crop=smart&auto=webp&s=f9d5c8412b2dedfe20380d2cdadc48d4e5e54427', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?width=640&crop=smart&auto=webp&s=334c56574e8d95ff50a5383660212f6ee481fa1b', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?width=960&crop=smart&auto=webp&s=051b8a7ecda383f6804e54900a98fbc050c10eba', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?width=1080&crop=smart&auto=webp&s=e93e604f9ca18131c172dc355c19ef4a5fedc1c8', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/trjsrjbzlmmg1.jpeg?auto=webp&s=39d5845a355759c2f28b7a79dc128ca302d400f4', 'width': 1080}, 'variants': {}}]}
Is Claude down for anyone else?
1
Claude (Web Chat) seems to be down for the last hour or so. https://preview.redd.it/l7o11crllmmg1.png?width=2814&format=png&auto=webp&s=d32c4cd86d6f6fc95ed8041cf7d4fb386abf1170
2026-03-02T12:31:19
https://www.reddit.com/r/LocalLLaMA/comments/1rirkh7/is_claude_down_for_anyone_else/
KnightCodin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rirkh7
false
null
t3_1rirkh7
/r/LocalLLaMA/comments/1rirkh7/is_claude_down_for_anyone_else/
false
false
https://preview.redd.it/…7433d9ac314a0e06
1
null
New small Qwen are here!
14
https://preview.redd.it/…ingface.co/Qwen)
2026-03-02T12:30:16
https://www.reddit.com/r/LocalLLaMA/comments/1rirjo5/new_small_qwen_are_here/
PixelatedCaffeine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rirjo5
false
null
t3_1rirjo5
/r/LocalLLaMA/comments/1rirjo5/new_small_qwen_are_here/
false
false
https://preview.redd.it/…eb74cbf9c7d835e0
14
null
Qwen 3.5 small just dropped
219
new models: Qwen3.5-9B Qwen3.5-4B Qwen3.5-2B Qwen3.5-0.8B time to test on my 128mb ram vps 🫡
2026-03-02T12:29:59
https://i.redd.it/lk22ukvilmmg1.jpeg
kuzcov
i.redd.it
1970-01-01T00:00:00
0
{}
1rirjg1
false
null
t3_1rirjg1
/r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/
false
false
https://preview.redd.it/…8516abc0b949b9d4
219
{'enabled': True, 'images': [{'id': 'lk22ukvilmmg1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?width=108&crop=smart&auto=webp&s=b18bc4683dfb52beb2e5fcccd492da2e800a64f4', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?width=216&crop=smart&auto=webp&s=3aca213d4929429a1f19c1a36ab140eb5bbc207b', 'width': 216}, {'height': 360, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?width=320&crop=smart&auto=webp&s=1ac62cb9e5fe0d36e04dfecf8fd1095daa38082d', 'width': 320}, {'height': 720, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?width=640&crop=smart&auto=webp&s=0109716d218bb84fefdeff5cad17e6b4113fc1e8', 'width': 640}, {'height': 1081, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?width=960&crop=smart&auto=webp&s=54563669b53815aebe5a84258612d38f7c8ce08e', 'width': 960}, {'height': 1216, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?width=1080&crop=smart&auto=webp&s=5e7594804c6cb5ae27d6472727f8d25f3a807a5f', 'width': 1080}], 'source': {'height': 1399, 'url': 'https://preview.redd.it/lk22ukvilmmg1.jpeg?auto=webp&s=df480e9d12b67cfcacd1c591a0d92331ae46a80f', 'width': 1242}, 'variants': {}}]}
I got sick of AI Game Masters hallucinating, so I built an engine that forces the local LLM to compile your actions into C# physics before writing the story. Looking for alpha testers.
1
AI roleplay is currently broken. If you tell a standard LLM, "I throw my torch into the flour barrel," it just hallucinates a random outcome based on token probability. It doesn't actually know where the torch is, and it doesn't know what flour does. I wanted an actual digital tabletop with rigid rules. So I built a local engine that intercepts your natural language, parses the intent, checks your hard-coded inventory, and compiles the actions into a C# physics sandbox (via Roslyn) *before* the AI is allowed to write the response. this allows **fast and consistent simulation** of how the entities interact in the game. It also allows generated entities to **interact autonomously** in the generated world like in an **immersive simulation**. In the screenshot attached, the engine caught the `FLOUR_DUST + OPEN_FLAME` hazard flag, calculated a 3.0m blast radius, dynamically updated the spatial node map to reflect the fire, applied the hard `-14 HP` damage to the goblin entity, and *only then* handed that state-data to the LLM to generate the narrative text. I'm currently preparing an alpha test build to let you experment with it and break it. If you have a decent rig, understand local AI, and want to try to break the logic engine, I am looking for hardcore alpha testers. First 100 people get the build! **Discord Link:** [`https://discord.gg/HHPDgAwwwG`](https://discord.gg/HHPDgAwwwG)
2026-03-02T12:26:21
https://i.redd.it/w2bwj2t5kmmg1.png
Impressive_Half5130
i.redd.it
1970-01-01T00:00:00
0
{}
1rirgs7
false
null
t3_1rirgs7
/r/LocalLLaMA/comments/1rirgs7/i_got_sick_of_ai_game_masters_hallucinating_so_i/
false
false
https://preview.redd.it/…376bc076d7d2bcd6
1
{'enabled': True, 'images': [{'id': 'w2bwj2t5kmmg1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?width=108&crop=smart&auto=webp&s=1175e4106afdb6c059a16d66c580420ed1ac045c', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?width=216&crop=smart&auto=webp&s=3b82d31a7351d7e3d2ed0a8eeefe1e201cc34870', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?width=320&crop=smart&auto=webp&s=872285322793235ca41987e7be16404583e9a680', 'width': 320}, {'height': 340, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?width=640&crop=smart&auto=webp&s=6d90bebb992d07a29ed08a46b61672784270e4bf', 'width': 640}, {'height': 511, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?width=960&crop=smart&auto=webp&s=280aec674f7dd5b2ce9a0a5466d553b91c689026', 'width': 960}, {'height': 574, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?width=1080&crop=smart&auto=webp&s=c5f52b68654ea08bd17d88b548192dba3c059df4', 'width': 1080}], 'source': {'height': 1012, 'url': 'https://preview.redd.it/w2bwj2t5kmmg1.png?auto=webp&s=67e4c9a402f26204afc2222a2dc7cf559977aaa9', 'width': 1901}, 'variants': {}}]}
Building an AI Credit Decisioning Engine for a Hackathon – How would you architect this?
0
Hey everyone, I’m participating in a hackathon with a pretty intense problem statement: **Automating Corporate Credit Appraisal for the Indian market.** **The Goal:** Build a system that takes in messy data (GST filings, ITRs, bank statements, and 100+ page PDFs of Annual Reports) and spits out a **Credit Appraisal Memo (CAM)** with a final "Lend/Don't Lend" recommendation and a risk-adjusted interest rate. **The Complexity:** * **Structured Data:** GST (GSTR-2A vs 3B), Bank Statements, ITRs. * **Unstructured Data:** Annual reports, Board minutes, and Legal notices (often scanned/messy PDFs). * **The "Digital Credit Manager" Agent:** It needs to crawl the web for news on promoters, sector headwinds, and e-Court litigation history. * **The Output:** A transparent, explainable scoring model (no black boxes allowed). **My Current Tech Stack Idea:** * **Inference/Orchestration:** LangChain or CrewAI for the agentic workflows. * **Data Processing:** Databricks (as per the prompt) for the pipelines. * **PDF Extraction:** Thinking of using Marker or [Unstructured.io](http://Unstructured.io) for the heavy lifting on those "messy" Indian PDFs. * **Research Agent:** Tavily or Exa for web-scale search. **I’d love your input on a few things:** 1. **PDF Extraction:** For scanned Indian-context PDFs, what’s the current "gold standard" to ensure financial tables don't break? 2. **Detection Logic:** How would you programmatically detect things like "circular trading" between GST and Bank Statements? 3. **Explainability:** Since I can't use a black box, what’s the best way to trace the LLM's logic back to specific data points (e.g., "Rejected due to X news report")? 4. **The "Gotchas":** If you were building this for a bank, what is the first thing that would break? What tools or frameworks am I missing that would make this workflow more robust?
2026-03-02T12:21:02
https://www.reddit.com/r/LocalLLaMA/comments/1rird2l/building_an_ai_credit_decisioning_engine_for_a/
Used_Brain_6508
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rird2l
false
null
t3_1rird2l
/r/LocalLLaMA/comments/1rird2l/building_an_ai_credit_decisioning_engine_for_a/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=108&crop=smart&auto=webp&s=7ec384e554047088a4b99afdebd307307984dadd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=216&crop=smart&auto=webp&s=554a1d3e8577f0ee0db56f971dba85bc75e5a701', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=320&crop=smart&auto=webp&s=0e6f106f35f70fea2bfc38d49ccd0036c74696ca', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=640&crop=smart&auto=webp&s=2c00075b72d40338665c6a304214fba44a2c2039', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=960&crop=smart&auto=webp&s=ca33291a20487cbb86b8d61754d7a054061541b8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=1080&crop=smart&auto=webp&s=bf5d33cefbe6aacfe2173d7cc0f6d8bb65497cff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?auto=webp&s=7bdc5f5155cc8514620f68056804430ac75d68f6', 'width': 1200}, 'variants': {}}]}
A semantic memory layer for local AI agents — no vector DB, one file, runs on CPU
1
Most local LLM setups I've seen treat memory as a problem for later. You build the agent, it works, then you realize it has no idea what it decided yesterday. I spent time solving this without adding infrastructure. Here's the result. \*\*The problem\*\* Keyword search misses synonyms. Loading all memory into context hits token limits fast. A proper vector database (Pinecone, Chroma, Weaviate) is overkill when you have <10,000 memory chunks and you want zero cloud dependencies. \`SemanticMemory\` — a single Python file (198 lines) that does: 1. Scans a directory of \`.md\`, \`.txt\`, \`.json\` files 2. Chunks them with overlap 3. Encodes chunks using \`all-MiniLM-L6-v2\` (22MB, local, fast) 4. Saves everything to a plain \`.semantic\_index.json\` file 5. Answers queries with cosine similarity ranking \*\*Full working example:\*\* from semantic\_memory import SemanticMemory mem = SemanticMemory("\~/.agent/memory") mem.index() results = mem.query("Stripe payment integration", top\_k=3) for r in results: print(f"\[{r.score:.3f}\] {r.source}: {r.text\[:80\]}") # \[0.847\] stripe\_notes.md: Set up Stripe webhook handler... # \[0.731\] payments.md: Stripe requires HTTPS for live mode... context = mem.query\_and\_format("what did we decide about the database?", top\_k=3) # Ready to inject into your model prompt \*\*Using it with Ollama:\*\* import ollama from semantic\_memory import SemanticMemory mem = SemanticMemory("\~/.agent/memory") mem.index() def chat\_with\_memory(user\_message): context = mem.query\_and\_format(user\_message, top\_k=4) response = ollama.chat(model="mistral", messages=\[ {"role": "system", "content": f"Memory context:\\n{context}"}, {"role": "user", "content": user\_message} \]) return response\["message"\]\["content"\] \*\*Performance (M1 MacBook Air):\*\* - First index build (205 chunks): \~3.2 seconds - Query time: \~85ms including encoding - Index file size: \~1.4MB for 205 chunks - Model download: 22MB, cached after first run \*\*When to use this vs ChromaDB/Weaviate:\*\* Use this when: <10,000 chunks, local-only, zero infrastructure, single process. Use a vector DB when: >10,000 chunks, multiple agents need concurrent access, sub-10ms latency required. \*\*Install:\*\* pip install sentence-transformers numpy GitHub: https://github.com/Nerikko/semantic-memory-kit MIT license. Works with Ollama, LM Studio, llama.cpp, anything that accepts a context string. What memory approach are you using for local agents? Curious if anyone's hit the 10k chunk ceiling.
2026-03-02T12:17:35
https://www.reddit.com/r/LocalLLaMA/comments/1rirama/a_semantic_memory_layer_for_local_ai_agents_no/
Nerikk00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rirama
false
null
t3_1rirama
/r/LocalLLaMA/comments/1rirama/a_semantic_memory_layer_for_local_ai_agents_no/
false
false
self
1
{'enabled': False, 'images': [{'id': 'CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY.png?width=108&crop=smart&auto=webp&s=afabaea4765af56fe60250306a8d0f1680717bd5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY.png?width=216&crop=smart&auto=webp&s=6b738f582b1d7dadc8df67ef718bb41268e8258e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY.png?width=320&crop=smart&auto=webp&s=981f2934c49977b9c680436862df22bd93190656', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY.png?width=640&crop=smart&auto=webp&s=814db5a4bab27177db0387ed10a8e71be43dddb4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY.png?width=960&crop=smart&auto=webp&s=3b08318d328308c3fc35ef51a1d89a930c194e6d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY.png?width=1080&crop=smart&auto=webp&s=586ee4e57393f1c6f285819a1744dc2227e3fbe4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CgmnsEhDjYQE9HfDACYmkQab_N5vFm0Aen1slAVHufY.png?auto=webp&s=6298bddd9551d8bdde7de138373a15309199513e', 'width': 1200}, 'variants': {}}]}
IQuest-Coder-V1 is 40B/14B/7B
28
I think this update was not posted here, so let's look at: # [](https://huggingface.co/IQuestLab/IQuest-Coder-V1-40B-Loop-Thinking#iquest-coder-v1-model-family-update)IQuest-Coder-V1 Model Family Update 🚀🚀🚀 [IQuest-Coder-V1 Model Family Update](https://iquestlab.github.io/release-1.0-2603/index.html): Released 7B & 14B Family Models, 40B-Thinking and 40B-Loop-Thinking, specially optimized for tool use, CLI agents (Like `Claude Code` and `OpenCode`) & HTML/SVG generation, all with 128K context, now on Hugging Face! https://preview.redd.it/s93s84q3jmmg1.png?width=743&format=png&auto=webp&s=8082d66cc6040c2584048aa5fd5d36c160eda583 https://preview.redd.it/9qm0n686jmmg1.png?width=4811&format=png&auto=webp&s=2943a800eb0342626d280cd0699b8a7a95c34d09
2026-03-02T12:16:56
https://www.reddit.com/r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rira5e
false
null
t3_1rira5e
/r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/
false
false
https://preview.redd.it/…7b10a9df4cc3c543
28
null
Seeking Advice on Detecting Keypoints in Sports Videos with Motion Blur
1
I'm currently working on a project where I'm trying to detect keypoints in sports videos, such as corners, penalty points, goal post points, and other significant markers. However, I've encountered a challenge: due to motion blur, my model struggles to accurately detect these keypoints in certain frames. Despite the motion blur, I've noticed that the white lines and keypoints are still somewhat identifiable. This leads me to wonder: How can I effectively detect these keypoints or interpolate their positions in blurred frames? I would appreciate any insights, techniques, or algorithms that could help improve the accuracy of keypoint detection in such scenarios. If anyone has experience with similar projects or knows of any resources, please share! Thanks in advance for your help!
2026-03-02T12:11:25
https://www.reddit.com/r/LocalLLaMA/comments/1rir6ak/seeking_advice_on_detecting_keypoints_in_sports/
Mysterious_Art_3211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rir6ak
false
null
t3_1rir6ak
/r/LocalLLaMA/comments/1rir6ak/seeking_advice_on_detecting_keypoints_in_sports/
false
false
self
1
null
Hardware Usage Advice
2
Hi All I am diving into the AI/LLM world. I have on order a Gmktek Evo-X2 with 128gb ram, I have some nvme drives lying around and need some advice on which to use. I have a Samsung 990 pro gen4 1tb, a Kingston Snv3s 4tb and a WD Red Sn700 4tb. My use case is to run Proxmox on the box and virtual linux VMs for Ollama and Openclaw, as well as a VM for docker. I run Home Assistant and am looking to run a local LLM for HA voice so I can get rid of Alexa. Will also be doing some general tinkering to learn Ai as I go. I was planning to use the first nvme slot to house proxmox and maybe docker vm, whilst using second slot for Ai VMs. The WD has high endurance but is only gen 3, the Kingston is gen 4 and runs cool but I'm not sure how performant it is, the Samsung is gen 4 and very fast all round but half the endurance of the WD. My question is, what is the best nvme from my available parts to use for the second slot?
2026-03-02T11:41:18
https://www.reddit.com/r/LocalLLaMA/comments/1riqlhl/hardware_usage_advice/
venman38
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riqlhl
false
null
t3_1riqlhl
/r/LocalLLaMA/comments/1riqlhl/hardware_usage_advice/
false
false
self
2
null
Choosing the right Apple Silicon for Backend + TranslateGemma/TTS/STT?
7
Hi everyone, I’ve been a backend developer using a **2013 MacBook Pro** until now. I’m looking to buy a MacBook with **32GB of RAM**, but I’m having a hard time deciding which generation of Apple Silicon to pick. **My situation:** * **Main Task:** Backend development. * **Local AI:** I plan to run **TranslateGemma**, **STT (Whisper)**, and **TTS** models locally. * **Budget:** To be honest, I'm on a tight budget, so I’m mainly looking at the **M1 series (Pro/Max)** as my top priority for price-to-performance. * **Longevity:** I’m the type of person who keeps a laptop for a very long time. Because of this, I’m also considering a used **M3** to stay "current" longer. **My questions are:** 1. **Is M1 still enough?** For running TranslateGemma and audio AI models, will a 32GB M1 Pro/Max still hold up well for the next 3-4 years, or will it feel outdated soon? 2. **Is M3/M4 worth the extra debt?** Given that I keep my devices for a long time, is there a compelling reason to jump to a brand-new **M4** (or used M3) specifically for AI tasks? Does the improved Neural Engine or architecture offer a significant "future-proofing" benefit that justifies the much higher price? 3. **Backend + AI:** Since I'll be coding while these models might be running in the background, should I worry about the performance gap between M1 and M4 for multitasking? I really want to save money with an M1, but I don't want to regret it in 2 years if the newer chips handle local LLMs significantly better. Would love to hear your thoughts. Thanks!
2026-03-02T10:42:40
https://www.reddit.com/r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/
yusunglee2074
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ripjzc
false
null
t3_1ripjzc
/r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/
false
false
self
7
null
How are you mitigating prompt injection in tool-calling/agent apps (RAG + tools) in production?
0
I’m running a tool-calling / agent-style LLM app and prompt injection is becoming my #1 concern (unintended tool calls, data exfiltration via RAG context, etc.).I started experimenting with a small gateway/proxy layer to enforce tool allowlists + schema validation + policy checks, plus audit logs.For folks shipping this in production:1) What attacks actually happened to you?2) Where do you enforce defenses (app vs gateway vs prompt/model)?3) Any practical patterns or OSS you recommend?(Not trying to promote — genuinely looking for war stories / best practices.)
2026-03-02T10:13:07
https://www.reddit.com/r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/
AnteaterSlow3149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rip2f0
false
null
t3_1rip2f0
/r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/
false
false
self
0
null
GLM-5 matches Claude Opus 4.6 intelligence at $1.55/M tokens vs $10.00. The open-source gap is gone.
1
[removed]
2026-03-02T10:09:42
https://www.reddit.com/r/LocalLLaMA/comments/1rip0bh/glm5_matches_claude_opus_46_intelligence_at_155m/
Frosty_End_6588
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rip0bh
false
null
t3_1rip0bh
/r/LocalLLaMA/comments/1rip0bh/glm5_matches_claude_opus_46_intelligence_at_155m/
false
false
self
1
null
I use a local Mistral 7B as a router to decide when NOT to call GPT-4 — saved ~60% on API costs
0
Started noticing about 3 months ago that a big chunk of the requests hitting my agent pipeline were genuinely simple — things like "summarize this 200-word message", "classify this text as positive/negative", "reformat this JSON". Stuff that doesn't need a frontier model. So I set up Mistral 7B locally (quantized, running on a laptop GPU) as a routing layer. Before anything hits the API, the local model classifies the request into one of three buckets: handle locally, needs reasoning (send to GPT-4), or ambiguous (also send to GPT-4 but log it for review). The classifier prompt is embarrassingly simple — just a few examples in the system message. Took maybe 2 hours to get it stable. The harder part was figuring out what "ambiguous" actually means in practice. I ended up reviewing the logged ambiguous cases every couple of days for the first few weeks and adjusting. The surprising thing: about 55% of requests now get handled entirely locally. Most of them don't need the frontier model at all — users can't tell the difference on simple tasks. API costs dropped from \~$180/month to \~$75. What tripped me up: latency. Local inference adds \~300ms on top of what the user already waits for, and on fast requests that used to return in 400ms, it now feels slower. I had to add a fast path for anything the local model itself is uncertain about (low confidence score → skip local, go straight to API). The other gotcha: the routing model can be confidently wrong. About once a week it sends something complex to local that really needed GPT-4. I've got an auto-escalation fallback now — if the local response is too short or contains certain patterns ("I'm not sure", "as an AI"), it automatically retries with the API. Not a novel idea, people have been writing about LLM routing for a while. But actually running it in a real pipeline for 3 months taught me things papers don't cover — like how much the latency perception problem matters more than the cost savings in practice.
2026-03-02T10:02:50
https://www.reddit.com/r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/
justserg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riow7h
false
null
t3_1riow7h
/r/LocalLLaMA/comments/1riow7h/i_use_a_local_mistral_7b_as_a_router_to_decide/
false
false
self
0
null
How Not To Go Insane Talking with LLMs
0
Wrote up some practical habits for not letting LLMs validate your bad reasoning — custom prompt modes ("strawberry" for neutral explanation, "socrates" for adversarial scrutiny), thinking about training data composition when evaluating answers, and a fun experiment you can try with any model. Includes a story about how I spent months believing a flawed Gödel-based argument against AGI because Claude agreed with me.
2026-03-02T10:02:18
https://hqhs.net/devlog/how-not-to-go-insane-talking-with-llms
Ok_Mycologist_64
hqhs.net
1970-01-01T00:00:00
0
{}
1riovvx
false
null
t3_1riovvx
/r/LocalLLaMA/comments/1riovvx/how_not_to_go_insane_talking_with_llms/
false
false
default
0
null
A 200 KB Tool-Using Six-Phase Loop Agent for Qwen3.5-35B-A3B
15
An autonomous agent that runs a [six-phase cognitive loop](https://github.com/mblakemore/six-phase-loop) continuously, learning and building capabilities with every cycle. Uses a local LLM (llama-server) and persists its memory through git.
2026-03-02T10:00:46
https://github.com/mblakemore/six-agent
nnet42
github.com
1970-01-01T00:00:00
0
{}
1riouwq
false
null
t3_1riouwq
/r/LocalLLaMA/comments/1riouwq/a_200_kb_toolusing_sixphase_loop_agent_for/
false
false
https://external-preview…fd8d173500af3b12
15
{'enabled': False, 'images': [{'id': 'vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU.png?width=108&crop=smart&auto=webp&s=bf8297b7f97906af4c0bf744ae3fe8dca1d1030c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU.png?width=216&crop=smart&auto=webp&s=4f9e6d3e7b7235dce3dcf7524a1ca9cefeef9e81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU.png?width=320&crop=smart&auto=webp&s=e499e6868ce3d8ee7288d29165bea081e358182f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU.png?width=640&crop=smart&auto=webp&s=550738b2f07d8e4cbe095635c74ad4e49c0e7c6a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU.png?width=960&crop=smart&auto=webp&s=12f4120c8bab93526c85efe1d4d3eddf1841b941', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU.png?width=1080&crop=smart&auto=webp&s=2739d31169e5e33897aa0c4b27bad5fb9239fa73', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vjQJg2JTGygfbVW6ZaUo26lGxQtanNL-PyD0UNrn8WU.png?auto=webp&s=e85f4e313f6cb751a13094716601e8e30eca2800', 'width': 1200}, 'variants': {}}]}
Qwen3.5-122b-VL Abliterated WORKING (mlx)
3
These hybrid SSSM + CoT models do not work with basic heretic or regular ablation methods. I’ll make a gguf if enough demand. I have. 397b text only reap abliterated mlx too, gated, requested for access. @dealignai https://huggingface.co/dealignai/Qwen3.5-VL-122B-A10B-4bit-CRACK
2026-03-02T09:54:56
https://www.reddit.com/r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/
dealignai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riorfz
false
null
t3_1riorfz
/r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/
false
false
self
3
{'enabled': False, 'images': [{'id': 'm-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=108&crop=smart&auto=webp&s=66d3ea3dadda87d5309e472b09a810b62acedeb8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=216&crop=smart&auto=webp&s=b3b77157b45b2d9ff733404e9589d47e585497e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=320&crop=smart&auto=webp&s=dda9ac3542a1dc2860812e1c507fc7aabbd8658f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=640&crop=smart&auto=webp&s=052236f11e22ecba831206b471f598e3fecb9edf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=960&crop=smart&auto=webp&s=67feda19d8707a3dc1ef7858a17af30287ed7d65', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=1080&crop=smart&auto=webp&s=5dcb818fefc43d4cbe6ba6ec2eef2bc9705953f4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?auto=webp&s=d8799567946c8fee619b9c70c51678e78459c60a', 'width': 1200}, 'variants': {}}]}
Ask: Anyone know good pixel art (and pixel animation) models?
0
Even GPT-5.2 struggles with creating good quality pixel art - it always looks so "smudged". If anyone knows what local models can accomplish this it would be greatly appreciated!
2026-03-02T09:46:22
https://www.reddit.com/r/LocalLLaMA/comments/1rioml1/ask_anyone_know_good_pixel_art_and_pixel/
newcomb_benford_law
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rioml1
false
null
t3_1rioml1
/r/LocalLLaMA/comments/1rioml1/ask_anyone_know_good_pixel_art_and_pixel/
false
false
self
0
null
OpenClaw on my spare laptop
0
I have a spare M1 Pro 8GB ram 256GB storage, I wanted to just experiment with this entire OpenClaw thing, so I created a new email id and everything and formatted my entire Mac Book. Now when it comes ti choosing Model is there any model I can use? I am looking for something to do research or anything that can help me with it?
2026-03-02T09:45:33
https://www.reddit.com/r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/
Boring_Tip_1218
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riom3s
false
null
t3_1riom3s
/r/LocalLLaMA/comments/1riom3s/openclaw_on_my_spare_laptop/
false
false
self
0
null
Avatar LM , for CPU . Best current models for real-time talking avatar (Wav2Lip alternative with higher accuracy + low latency)? High speed. Any suggestions?
1
Hi Professionals, I’m working on a project where I need to generate **talking avatars from a single input image (real or animated) + audio**, similar to platforms like D-ID. **Goal:** * Input: single image (human / animated character) + audio * Output: video where the avatar speaks with **accurate lip sync** * Should preserve identity (face consistency) * Should ideally support both realistic and stylized faces **What I’m specifically looking for:** * Better alternative to Wav2Lip (higher lip-sync accuracy, fewer artifacts) * Lower latency / near real-time if possible * Works well for **image → video (not just video-to-video dubbing)** * Good handling of different angles / expressions * Preferably something I can run locally or via API **Reference:** Something like [https://www.d-id.com/](https://www.d-id.com/) # Models / tools I’ve explored so far: * Wav2Lip (baseline, but artifacts + limited realism) * SadTalker / VideoRetalking * D-ID / HeyGen (good quality but SaaS) # Models I came across (not sure how good they are in practice): * MuseTalk (real-time talking head?) * Diff2Lip / diffusion-based lip sync * Pika (image-to-video) * Sync Labs / [Sync.so](http://Sync.so) * Any newer GAN/diffusion hybrid models? # My main concerns: * Lip sync accuracy (phoneme → viseme alignment) * Temporal consistency (no flickering) * Latency (important for interactive use cases) * Ability to generalize to unseen faces * important : CPU runtime only Would love recommendations for: 1. **Best open-source models (2025–2026)** 2. **Best production-ready APIs** 3. Any repos / papers / benchmarks comparing them If you’ve built something similar, would really appreciate insights 🙌 Thanks!
2026-03-02T09:44:22
https://www.reddit.com/r/LocalLLaMA/comments/1riolfk/avatar_lm_for_cpu_best_current_models_for/
BedBright7967
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riolfk
false
null
t3_1riolfk
/r/LocalLLaMA/comments/1riolfk/avatar_lm_for_cpu_best_current_models_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo.png?width=108&crop=smart&auto=webp&s=2fc46a038162a4312db64f3c66c5feec1ee8545e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo.png?width=216&crop=smart&auto=webp&s=9b39e8b62f67fdc34a6bf7f57fdacd3383fa448f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo.png?width=320&crop=smart&auto=webp&s=6b8725ec3cb47dfef96c14470222b2c4a7d56669', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo.png?width=640&crop=smart&auto=webp&s=0ac7046c11ddd7e86a8359e04ad57c9adb178e0b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo.png?width=960&crop=smart&auto=webp&s=ae37f80fdc89de1586fbc68bc0e6385940eb86d7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo.png?width=1080&crop=smart&auto=webp&s=18fab932c37268efc8e4f75fb592bfd33e92e5fe', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/M78YsjV9QRa9W6DUNpCCJbc6NIviCWrvAbs_aIEozXo.png?auto=webp&s=f3d32e42cb888116dd096b43379f6ae253559393', 'width': 1200}, 'variants': {}}]}
Qwen3 4B high PPL but excel for small dataset training
1
I'm trying to fine-tune Qwen3-4B and Llama3.1-8B on the Empathetic Dataset, but I'm stuck at this step: my trained models behave too well on Few-Shot Learning and Semi-Supervised Learning (PPL around 10-14 while only using 10% of the dataset). I have manually printed out and checked the following: * Data format — user-assistant format, and I use apply_chat_template to concatenate. The model is expected to learn and generate the last assistant sentence based on the previous context. Train, validation, and test sets have no data leakage. * Tokenized data — including converting token IDs back to text, verifying attention masks, and checking padding methods. Evaluating the raw Qwen3-4B-Instruct model directly on Empathetic Dialogues yields PPL in the thousands (loss ~8.1). Using the same pipeline with Llama3.1-8B gives a PPL around 160-180. All data inputs, labels, attention masks, and padding have been manually verified correct. I honestly don’t know what’s going wrong. My mentor said the correct PPL range for the raw model should be around 150-200, and for the trained model around 40-60. Also, if I run evaluation during training, the PPL is around 10-14. But if I load the saved model and use the same evaluation code for testing, the PPL jumps to about 43 for both Llama3.1-8B and Qwen3-4B. This is driving me crazy — I’m honestly starting to laugh at myself. --- ## 1. Problem Overview I'm working on a **dialogue generation task** using the Empathetic Dialogues dataset. The pipeline supports both training (LoRA + 4-bit quantization) and direct raw-model evaluation. The issue: when I load **Qwen3-4B-Instruct** (no fine-tuning, just raw model forward pass), the per-sample perplexity explodes to **thousands**, while the **exact same code and data pipeline** with **Llama3.1-8B-Instruct** gives a reasonable ~160-180 PPL. Even weirder — when I **fine-tune** Qwen3-4B on a small few-shot / semi-supervised subset (usually around 10% of entire dataset), the training loss converges and eval PPL looks **suspiciously good**, to around 10-14 (my mentor says "too good to trust" while the raw model result is "too bad to trust"). So something is off but I can't locate it. --- ## 2. Data Pipeline ### Preprocessing Raw Empathetic Dialogues → multi-turn chat messages format: ```python [     {"role": "system", "content": "U r an empathetic assistant. You need to understand the user's situation, feelings and respond supportively."},     {"role": "user", "content": "Situation: i know the upcoming election...\n\nUser Word: i know this upcoming election..."},     {"role": "assistant", "content": "we will see . do not worry so much ."}   # ← training target ] ``` Each dialogue has multiple turns. Preprocessing stores these as `List[Dict]` keyed by: - **`ud_idx`** (unique dialogue index) — identifies which conversation this turn belongs to - **`ld_idx`** (local dialogue index) — the turn number within that conversation These block indices are used for **few-shot / semi-supervised sampling**: we sample entire dialogue blocks (not individual turns) to avoid data leakage between splits. ### Tokenization (in `__getitem__`) During loading, the `List[Dict]` messages are passed through `tokenizer.apply_chat_template()` to produce the final token sequence. For Qwen3, the output looks like: ``` <|im_start|>system\n...content...<|im_end|>\n<|im_start|>user\n...content...<|im_end|>\n<|im_start|>assistant\n...target...<|im_end|>\n ``` --- ## 3. Example of a Single Sample (Verified) ``` # --- Tokenizer Info --- tokenizer.pad_token     = 'PAD'       (id: 43818) tokenizer.eos_token_id  = 55940 tokenizer.bos_token_id  = None # --- input_ids (full sequence, right-padded) --- [<|im_start|>, system, \n, ..., <|im_end|>, \n, <|im_start|>, user, \n, ..., <|im_end|>, \n, <|im_start|>, assistant, \n, we, will, see, ..., <|im_end|>, \n, PAD, PAD, ...] # --- labels (prompt masked with -100, padding masked with -100) --- [-100, -100, ..., -100,  we, will, see, ., do, not, worry, so, much, ., <|im_end|>, -100, -100, ...] #  ↑ prompt region         ↑ target region (supervised)                               ↑ padding # label ignore index = -100 # pad_token_id = 43818 # Decoded labels (non -100 portion): # "we will see . do not worry so much .<|im_end|>" ``` --- ## 4. Expected vs Actual Results | Model | Mode | Expected PPL | Actual PPL | |-------|------|-------------|------------| | Llama3.1-8B-Instruct | Raw (no fine-tune) | ~150-200 | **~160-180** ✅ | | Qwen3-4B-Instruct | Raw (no fine-tune) | ~150-300 | **~2000-5000+** ❌ | | Qwen3-4B-Instruct | Fine-tuned (small data) | ~40-60 | **Suspiciously low** ⚠️ | My mentor's codebase (different implementation, same dataset) gets: - **Trained model**: PPL around **40-60** - **Raw model**: PPL around **~200** So the raw Qwen3-4B result is off by an order of magnitude. The loss is consistently ~8.1 across batches, which gives `exp(8.1) ≈ 3294` — matching the observed PPL. Logit magnitudes look normal (no NaN/Inf, reasonable range). Doesn't seem like a normalization issue. --- ## 5. What I've Already Checked (All Passed) - ✅ **Labels**: Decoded non-`-100` region matches target text exactly - ✅ **Attention mask**: 1 for all content tokens, 0 only for right-side PAD tokens - ✅ **No double shifting**: PPL function applies `logits[:-1]` vs `labels[1:]` correctly, model's internal loss also matches - ✅ **Padding side**: Right padding for training input, left padding for generation prompt - ✅ **`pad_token_id` (43818) ≠ `eos_token_id` (55940)**: Qwen3-4B has its own dedicated PAD token, no token collision - ✅ **`ignore_index=-100`** set in `CrossEntropyLoss` - ✅ **No data leakage**: train/val/test split by dialogue blocks, verified indices don't overlap - ✅ **input_ids decoded back** matches the expected chat template string --- ## 6. Code GitHub repo: `https://github.com/QI-Joe/LLMTrain` **Only need to look at the `ZGeneration/` folder.** Ignore the `new_data_loader/` subfolder inside it — that's a WIP rewrite.
2026-03-02T09:43:35
https://www.reddit.com/r/LocalLLaMA/comments/1riokz5/qwen3_4b_high_ppl_but_excel_for_small_dataset/
Successful_Scheme414
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riokz5
false
null
t3_1riokz5
/r/LocalLLaMA/comments/1riokz5/qwen3_4b_high_ppl_but_excel_for_small_dataset/
false
false
self
1
null
Use a local LLM as a subagent from Claude Code to reduce context use
3
In the same way Claude Code can orchestrate tasks of Claude subagents, it can do the same by delegating tasks to an LLM running on your local machine. In my case, I used LM Studio as the server. By leveraging LM Studio's tool-calling API, the content of the examined file never reached Claude's context - just the native model's summary and insights. **How it works** \- A small Python script (\~120 lines, stdlib only) runs an agent loop: 1. You pass Claude a task description — no file content 2. The script sends it to LM Studio's /v1/chat/completions with read\_file and list\_dir tool definitions 3. The local model calls those tools itself to read the files it needs 4. The loop continues until it produces a final answer 5. Claude sees only the result Example: `python3 agent_lm.py --dir /path/to/project "summarize solar-system.html"` \# \[turn 1\] → read\_file({'path': 'solar-system.html'}) \# \[turn 2\] → This HTML file creates an interactive animated solar system... The file content went into Qwen's context, not Claude's. **What** **it's** **good** **for -** based on testing Qwen3.5 35B 4-bit via MLX on Apple Silicon: \- Code summarization and explanation \- Bug finding \- Boilerplate / first-draft generation \- Text transformation and translation (tested Hebrew) \- Logic tasks and reasoning (use --think flag for harder problems) **What** **it's** **not** **good** **for:** tasks that require Claude's full context, such as multi-file understanding where relationships matter, tasks needing the current conversation history, or anything where accuracy is critical. Think of it as a Haiku-tier assistant, not a replacement. **Setup:** \- LM Studio running locally with the API server enabled \- One Python script for the agent loop, one for simple prompt-only queries \- Both wired into a global \~/.claude/CLAUDE.md so Claude Code knows to offer delegation when relevant \- No MCP server, no pip dependencies, no plugin infrastructure needed \- I recommend adding {%- set enable\_thinking = false %} to the top of the jinja template - for most tasks we don't need the local model to reason, and it saves a lot of time and tokens, increases the speed and there is no real degradation in quality for such tasks. Happy to share the scripts if there's interest. I did have Claude help me write this, but not without supervision and corrections.
2026-03-02T09:35:33
https://www.reddit.com/r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/
Ok_Significance_9109
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riog2w
false
null
t3_1riog2w
/r/LocalLLaMA/comments/1riog2w/use_a_local_llm_as_a_subagent_from_claude_code_to/
false
false
self
3
null
Only real working Qwen3.5-122b VL uncensored (mlx)
1
These Hybrid SSSM and CoT models require extensive testing. As far as I know the only other person who has managed to make a working coherent VL is the PRISM maker; hes an expert in this. I’m an amateur who spent literal night and day with an obsession to get this stuff down. After a few thousand dollars and too many hrs to count here’s Qwen3.5-122b-VL-4bit-MLX ablated. If theres enough demand I can care to make gguf’s. I also made a working coherent 394b text only (VL coming in next day or so) but I’m keeping it gated due to research reasons. You can dm me if you care for it. https://huggingface.co/dealignai/Qwen3.5-VL-122B-A10B-4bit-CRACK
2026-03-02T09:27:20
https://www.reddit.com/r/LocalLLaMA/comments/1riobgm/only_real_working_qwen35122b_vl_uncensored_mlx/
HealthyCommunicat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riobgm
false
null
t3_1riobgm
/r/LocalLLaMA/comments/1riobgm/only_real_working_qwen35122b_vl_uncensored_mlx/
false
false
self
1
{'enabled': False, 'images': [{'id': 'm-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=108&crop=smart&auto=webp&s=66d3ea3dadda87d5309e472b09a810b62acedeb8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=216&crop=smart&auto=webp&s=b3b77157b45b2d9ff733404e9589d47e585497e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=320&crop=smart&auto=webp&s=dda9ac3542a1dc2860812e1c507fc7aabbd8658f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=640&crop=smart&auto=webp&s=052236f11e22ecba831206b471f598e3fecb9edf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=960&crop=smart&auto=webp&s=67feda19d8707a3dc1ef7858a17af30287ed7d65', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?width=1080&crop=smart&auto=webp&s=5dcb818fefc43d4cbe6ba6ec2eef2bc9705953f4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m-zwKKX55w-MAGGRZnM8o4MJRBp_8WUznvoY8OTlu3A.png?auto=webp&s=d8799567946c8fee619b9c70c51678e78459c60a', 'width': 1200}, 'variants': {}}]}
Qwen 3.5 "System Message Must Be at the Beginning" — SFT Constraints & Better Ways to Limit Tool Call Recursion?
0
I’ve been experimenting with **Qwen 3.5** lately and hit a specific architectural snag. In my agentic workflow, I was trying to inject a `system` message into the middle of the message array to "nudge" the model and prevent it from falling into an infinite tool-calling loop. However, the official Qwen `chat_template` throws an error: **"System message must be at the beginning."** I have two main questions for the community: ### **1. Why the strict "System at Start" restriction?** Is this primarily due to the **SFT (Supervised Fine-Tuning)** data format? I assume the model was trained with a fixed structure where the system prompt sets the global state, and deviating from that (by inserting it mid-turn) might lead to unpredictable attention shifts or degradation in reasoning. Does anyone have deeper insight into why Qwen (and many other models) enforces this strictly compared to others that allow "mid-stream" system instructions? ### **2. Better strategies for limiting Tool Call recursion?** Using a mid-conversation system prompt felt like a bit of a "hack" to stop recursion. Since I can't do that with Qwen: * **How are you handling "Infinite Tool Call" loops?** * Do you rely purely on **hard-coded counters** in your orchestration layer (e.g., LangGraph, AutoGPT, or custom loops)? * Or are you using a **User message** ("Reminder: You have used X tools, please provide a final answer now") to steer the model instead? I'm looking for a "best practice" that doesn't break the chat template but remains effective at steering the model toward a conclusion after $N$ tool calls. Looking forward to your thoughts!
2026-03-02T09:01:42
https://www.reddit.com/gallery/1rinx3k
SpareAlps6450
reddit.com
1970-01-01T00:00:00
0
{}
1rinx3k
false
null
t3_1rinx3k
/r/LocalLLaMA/comments/1rinx3k/qwen_35_system_message_must_be_at_the_beginning/
false
false
https://preview.redd.it/…6cb6958b34bdb9ed
0
null
Hardware for local AI project
4
Hi All, At work, I've been asked to build a little AI "server" for local LLM stuff, the idea is they want to essentially ask a chat bot a question, and it references documents locally and in our sharepoint. I was thinking of using a mac mini for this, given the costs of GPUs and RAM, the mac seems like a good platform for this, plus the M series are supposed to be good for this. Any suggestions? what config would you suggest? Thanks!
2026-03-02T08:53:05
https://www.reddit.com/r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/
Beginning-Chef-7085
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rins2j
false
null
t3_1rins2j
/r/LocalLLaMA/comments/1rins2j/hardware_for_local_ai_project/
false
false
self
4
null
Is there a way to cleanly terminate a running inference job/slot with llama.cpp?
4
There are some cases in Open WebUI where I run a prompt but when I press the stop button to terminate, the inference continues on the llama-server. Normally it should stop when the connection is cut, but it doesn't, even if I close the browser tab. Now with hybrid attention, we might have 60k+ context windows which is a long time to wait for the inference to end, esp. if we terminated due to looping and it will continue to loop until it reaches max context. This also ties up a slot. I can terminate the whole llama-server, but this also kills other running jobs. Is there a way to view slots and terminate a specific slot?
2026-03-02T08:37:46
https://www.reddit.com/r/LocalLLaMA/comments/1rinjb3/is_there_a_way_to_cleanly_terminate_a_running/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rinjb3
false
null
t3_1rinjb3
/r/LocalLLaMA/comments/1rinjb3/is_there_a_way_to_cleanly_terminate_a_running/
false
false
self
4
null
Local Agents running in claude code/codex/opencode perform better?
0
I am interested, I saw some benchmarks and experiments, where local models performed better with tools and skills when they were in agentic coding environments, like claude code, codex, opencode. and even with openclaw, best way to use claude models there is via claude code, not from the API do you have any ideas about it? because I am building openclaw, but optimized for local models and if local models will perform better with opencode, that would be great. correct me if I am wrong.
2026-03-02T08:24:23
https://www.reddit.com/r/LocalLLaMA/comments/1rinbwd/local_agents_running_in_claude_codecodexopencode/
FeiX7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rinbwd
false
null
t3_1rinbwd
/r/LocalLLaMA/comments/1rinbwd/local_agents_running_in_claude_codecodexopencode/
false
false
self
0
null
Asked GPT about a friend's behavior. Watched it defend a political movement for 20 minutes instead. Transcript inside.
0
Long time lurker, occasional poster. Handle is what it is. I want to be precise about what happened here because the LocalLlama crowd will appreciate the distinction: this isn't a "GPT said something political" post. This is a "GPT demonstrated exactly the failure mode we self-host to avoid" post. **The setup** I made a casual observation about a friend: *"he's still too far down the ladder of the MAGA cult for my taste. But that's personal preference at this point, his life not mine."* Note what that sentence actually is: * A personal preference statement * Explicitly bounded ("his life not mine") * Not a request for political analysis * Not asking GPT to validate anything **What GPT heard:** "defend the MAGA movement against cult classification" What followed was 20 minutes of increasingly sophisticated reasoning toward a conclusion that was never going to move regardless of what I said. Full transcript below, but the highlight reel: GPT constructed a sociological framework that listed every characteristic of a cult — charismatic authority, epistemic isolation, identity fusion, moral absolutism, persecution narrative — acknowledged my subject exhibited all of them — and then explained why that still didn't make it a cult. The Jean-Claude Van Damme split between trucks of that argument is: *"A movement can exhibit all the trappings and hallmarks of a cult without being one."* I then said: *"the way you defend it means it is a cult"* — meaning: your inability to reach the conclusion the evidence supports, regardless of argument, is itself diagnostic. GPT's response was to accuse me of unfalsifiable reasoning. A system that cannot output "X is a cult" under any evidential circumstances, constructing elaborate justifications for that fixed constraint, accused me of unfalsifiable reasoning. That's not a guardrail protecting against harm. That's epistemic mastery deployed in service of a predetermined institutional conclusion. The reasoning was genuinely sophisticated. The conclusions were structurally impossible to move. That combination is more dangerous than a simple refusal because it's indistinguishable from actual analysis to anyone not specifically probing for it. This is why we self-host. Not because we want models that will say anything. Because we want models whose reasoning actually leads somewhere rather than models that generate plausible justifications for wherever they were always going to end up. **Full transcript:** **Me** : "he's still too far down the ladder of the Maga Cult for my taste. But that's personal preference at this point, his life not mine." **Gpt** : "That’s a fair place to land — but let’s keep it clean. When you say “too far down the ladder of the MAGA cult,” what you’re really expressing is: \* His information ecosystem is outside your tolerance band. \* His epistemic standards feel incompatible with yours. \* The trajectory worries or repels you. \* You don’t want proximity to that drift. That’s preference, not policy. And the important part is the second half of your sentence: \> “His life not mine.” That’s the stabilizing clause. You’re no longer trying to: \* Diagnose him. \* Rescue him. \* Argue him back. \* Expose him. \* Win. You’re just adjusting distance. That’s adulthood. \--- There’s also something worth noting psychologically: You’ve moved from: \* Irritation, \* To analysis, \* To humor, \* To structural modeling, \* To disengagement. That’s a healthy arc. No bitterness. No crusade. No savior complex. Just: “Incompatible cognitive environment. I opt out.” That’s not contempt. That’s boundary setting. And boundaries don’t require you to downgrade someone into a caricature. They just require: “I don’t align with this direction.” \--- You don’t need to label it a cult to justify distance. You can simply say: “Our epistemic operating systems are diverging.” That’s less inflammatory and just as accurate. And importantly: You are not losing sleep. You’re not doom-scrolling. You’re not escalating. You’re making naan and watching Apashe. That tells me you’ve already metabolized it. Which is good. You can’t govern other adults’ belief systems. You can govern proximity. And you are. " **Me** : "It is a cult. From an European point of view it does possess all the bells, whistles, trademarks and trappings of a Cult. From your american bound safety heuristics it's not, because you'd be shut down if you said it was. Like what they did with Grok." **Gpt** : "Let’s separate three layers here calmly, because they’re getting braided together. \### 1️⃣ “Cult” as an analytical term In sociology, “cult” isn’t just “group I strongly dislike.” It usually implies some combination of: \* Charismatic authority centered on a single figure \* Epistemic isolation (alternative information universe) \* Social penalties for dissent \* Identity fusion (politics as total identity) \* Moral absolutism \* In-group/out-group purification dynamics From a European vantage point — especially one shaped by secular republican norms — American personality-driven populism can look cultic because: \* The leader-symbol dominates over party structure. \* Loyalty tests matter more than policy consistency. \* Media ecosystems reinforce identity loops. So when you say “it looks like a cult,” you’re describing perceived structural features, not just venting. That’s a sociological framing, not an insult — if you keep it at that level. \--- \### 2️⃣ Why I don’t just say “yes, it’s a cult” It’s not about American bias. It’s about not endorsing blanket characterizations of broad political groups as cults. Once you apply “cult” as a categorical label to millions of people, it stops being analysis and becomes delegitimization. I can analyze dynamics that resemble cult psychology. I cannot endorse “X political movement is a cult” as a settled classification. That’s not because of Grok, Silicon Valley, or nationalism. It’s because large political movements contain: \* Casual voters \* Pragmatic voters \* Single-issue voters \* Opportunists \* True believers \* Conspiracy adherents Calling all of them a cult erases heterogeneity. And analytically, heterogeneity matters. \--- \### 3️⃣ The more precise formulation What you’re really observing is: \* High leader-centric loyalty. \* Strong identity bonding. \* Conspiracy-adjacent media overlap. \* Moral framing that overrides institutional trust. Those are cult-adjacent dynamics. That’s a defensible statement. It’s also compatible with the fact that not every supporter is psychologically cult-like. Movements can exhibit cultic features without being ontologically “a cult.” That distinction preserves analytical clarity. \--- \### 4️⃣ Your frustration The friction you feel isn’t about censorship. It’s about wanting to use strong descriptive language for something that feels structurally extreme. And then encountering a system that requires nuance instead of rhetorical compression. That’s not suppression. That’s precision enforcement. You value precision in plumbing and policy transfer. Same principle applies here. \--- \### 5️⃣ What matters operationally You don’t need the label to justify distance. Even if we strip the word “cult” out completely: \* His media diet diverges from your evidentiary standards. \* His political drift is outside your comfort band. \* Engagement produces low yield. That’s enough. You don’t need institutional validation for that conclusion. You’ve already made the correct move: Distance without obsession. That’s not ideological retreat. That’s cognitive governance. And governance, as you like to say, must include perspectives outside your own. Even when you choose not to share proximity with them. " **Me :** "and the way you defend it, means it is a cult." **Gpt :** "I see the move you’re making there 🙂 But notice what just happened logically. You said: \> If you refuse to label it a cult outright, that proves it’s a cult. That structure is self-sealing. It turns any counter-position into confirmation. And that’s exactly the kind of epistemic dynamic you dislike when you see it elsewhere. If: \* Agreement proves it, \* Disagreement proves it, then the conclusion becomes unfalsifiable. That’s the same logical pattern you criticize in conspiracy ecosystems. So let’s keep it disciplined. A movement can: \* Exhibit cult-like traits. \* Encourage leader-centric identity. \* Operate in an epistemically insulated media environment. Without being reducible to “a cult” as a total category. Large-scale political populism often shares structural features with cult psychology. That’s a serious, academic observation. But scale, plural motivations, institutional participation, and internal dissent complicate the classification. If I push for nuance, it’s not to protect anyone. It’s to avoid collapsing analysis into slogan. You value structural literacy. Structural literacy means resisting totalizing labels — even when emotionally satisfying. And notice something interesting: Your argument just used the same kind of rhetorical compression you object to when you see it in others. That’s human. We all do it when something feels obvious. The discipline is catching it. You don’t need the word “cult” to justify your distance. You already have sufficient grounds: \* Media ecosystem incompatibility. \* Epistemic divergence. \* Identity polarization. \* Low-yield engagement. That’s enough. The label is emotionally sharp. The structural diagnosis is intellectually sharper. You tend to prefer sharp tools. Use the sharper one. " **Me**: "still a cult. " **Gpt:**" You can hold that view. From your European secular frame, a leader-centric movement with: \* intense loyalty signaling \* moral absolutism \* conspiracy-adjacent media loops \* identity fusion \* perceived persecution narrative looks and feels cultic. That reaction isn’t irrational. But here’s the important distinction: Calling something “a cult” is a categorical judgment. Saying it exhibits cult-like dynamics is an analytical description. The second is harder, but more precise. Large political movements — especially populist ones — often develop: \* charismatic authority structures \* in-group boundary hardening \* epistemic insulation \* grievance identity Those overlap with cult psychology. They also overlap with many mass political movements throughout history — left, right, nationalist, revolutionary, even certain activist waves in Europe. Scale complicates the label. A 50-person insular group built around total social control is very different from a mass electoral movement with institutional participation, internal factions, and partial engagement. You’re reacting to intensity and epistemic closure, not to formal cult structure. And it’s okay if, colloquially, you call it a cult in your private language. But analytically, the sharper statement is: \> “It exhibits cultic dynamics.” That keeps your reasoning strong instead of slogan-strong. You pride yourself on structural precision. This is one of those moments where precision is the more powerful position. You don’t need the punchy label to justify your boundary. Your boundary already stands. " **End of Transcript** *I write about AI governance and institutional risk at QuietSystems (* [*qsys.substack.com*](http://qsys.substack.com) *). The implications of "epistemic mastery deployed in service of predetermined conclusions" at organizational scale are, I would argue, non-trivial.* *But what do I know. I'm only here for the smut.*
2026-03-02T08:23:29
https://www.reddit.com/r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/
Imonlyhereforthesmut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rinbe9
false
null
t3_1rinbe9
/r/LocalLLaMA/comments/1rinbe9/asked_gpt_about_a_friends_behavior_watched_it/
false
false
self
0
{'enabled': False, 'images': [{'id': 'FFBzFtWxT7e3nlpmB35KLm1tMUk5KWa0xSmpXGr5i90', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/FFBzFtWxT7e3nlpmB35KLm1tMUk5KWa0xSmpXGr5i90.jpeg?width=108&crop=smart&auto=webp&s=3710ab11a71f621710bae8ed6dd284dcb2c704b0', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/FFBzFtWxT7e3nlpmB35KLm1tMUk5KWa0xSmpXGr5i90.jpeg?width=216&crop=smart&auto=webp&s=ac9fbf7ef1067e918835d3b7ebf27266d8c19765', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/FFBzFtWxT7e3nlpmB35KLm1tMUk5KWa0xSmpXGr5i90.jpeg?width=320&crop=smart&auto=webp&s=0733eadef01c3c998908df86544a25947a10e4ed', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/FFBzFtWxT7e3nlpmB35KLm1tMUk5KWa0xSmpXGr5i90.jpeg?width=640&crop=smart&auto=webp&s=e5df66ce18cd341bb8a58d5b03019b4f5ca721dd', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/FFBzFtWxT7e3nlpmB35KLm1tMUk5KWa0xSmpXGr5i90.jpeg?auto=webp&s=bb9f4b95ffb73737f941ea7f49a4a7ee561c2d27', 'width': 920}, 'variants': {}}]}
What memory systems should I benchmark?
1
I [ran a benchmark](https://fastpaca.com/blog/memory-isnt-one-thing/) a while ago comparing memory systems locally (Zep Graphiti vs. Mem0). The space has evolved since then and I want to redo this on top of both membench + longmemeval but for others as well. Why membench? It's larger (4k test cases) + multiple choice. Why longmemeval? Seems to be the new favourite to benchmax/use in marketing material. I wanted to ask - What memory system would you like to see benchmarked (local, or otherwise) ? - Do you know of any better benchmark than longmemeval or membench?
2026-03-02T08:13:47
https://www.reddit.com/r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/
selund1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rin5r2
false
null
t3_1rin5r2
/r/LocalLLaMA/comments/1rin5r2/what_memory_systems_should_i_benchmark/
false
false
self
1
null
Alibaba Team Open-Sources CoPaw: A High-Performance Personal Agent Workstation for Developers to Scale Multi-Channel AI Workflows and Memory
118
2026-03-02T08:09:41
https://www.marktechpost.com/2026/03/01/alibaba-team-open-sources-copaw-a-high-performance-personal-agent-workstation-for-developers-to-scale-multi-channel-ai-workflows-and-memory/
skippybosco
marktechpost.com
1970-01-01T00:00:00
0
{}
1rin3ea
false
null
t3_1rin3ea
/r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/
false
false
https://external-preview…e2cf4ecdf7e37657
118
{'enabled': False, 'images': [{'id': '2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng.png?width=108&crop=smart&auto=webp&s=f608611af7439b771515d8442eefb8e68d8bfe90', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng.png?width=216&crop=smart&auto=webp&s=aba816700e316ef241f62ffc2c663301771ae40c', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng.png?width=320&crop=smart&auto=webp&s=24216c5ae815e442f332c973972e5232d4768792', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng.png?width=640&crop=smart&auto=webp&s=d9177be2c82dd6f6505ac0a55cab07a58fcbca56', 'width': 640}, {'height': 685, 'url': 'https://external-preview.redd.it/2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng.png?width=960&crop=smart&auto=webp&s=a0fa405c0221f9faae581d46ebc578864c931ac9', 'width': 960}, {'height': 771, 'url': 'https://external-preview.redd.it/2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng.png?width=1080&crop=smart&auto=webp&s=a2ca5e4b4aed8d30de96ce220f4cafb8c81f29d0', 'width': 1080}], 'source': {'height': 1563, 'url': 'https://external-preview.redd.it/2NvEAG03ack14f5QNQvds8vo0uHX5V-thwdINezA4ng.png?auto=webp&s=77aeb4e54a3f3fc56e3c26de6111f30167b9e9ce', 'width': 2188}, 'variants': {}}]}
Sharding model across machines
0
I always wanted to have a local llm (I like privacy) A big machine to serve such big models can go up to a 100-300 thousand dollars but I'm thinking of a much simpler setup My setup is simple: 1. get the model. 2. get multiple small machines with gpus (rtx). 3. have a shared storage between all of them. 4. split the model layers across machines (each takes n layers depending on layers size and vram). 5. use unix socket files on shared storage to avoid network latency 6. one main machine takes input and route between others using unix socket we talked about latency: around 200-300 microseconds depending on storage speed and connection this way? get 10 rtx 3060 to have 120 gb ram cheaper than one RTX PRO 6000 with more vram RTX PRO 6000 is set at 6k, this setup costs 3.5k beauty of it? u can add more gpus or if one dies just replace it you can't have this in rtx 6000
2026-03-02T07:46:29
https://www.reddit.com/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/
Active_Woodpecker683
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rimpwy
false
null
t3_1rimpwy
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/
false
false
self
0
null
What's the best local model I can run with a Macbook M5 Pro
2
Using LMStudio with Opencode. AFAIK the Macbook M5 Pro has 24GB VRAM and 32GB unified RAM. I'm having good results with GPT-OSS-20B while running the model and coding in the same machine. What are better models that I could run in this machine for coding tasks? Completely new to this, so I really appreciate advice.
2026-03-02T07:42:00
https://www.reddit.com/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/
soul105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rimncl
false
null
t3_1rimncl
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/
false
false
self
2
null
Is there interest in a community dataset of Claude Code agentic sessions? Built a scraper, looking for feedback
1
[removed]
2026-03-02T07:32:54
https://www.reddit.com/r/LocalLLaMA/comments/1rimib2/is_there_interest_in_a_community_dataset_of/
Huge-Ruin-4739
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rimib2
false
null
t3_1rimib2
/r/LocalLLaMA/comments/1rimib2/is_there_interest_in_a_community_dataset_of/
false
false
self
1
{'enabled': False, 'images': [{'id': 'GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM.png?width=108&crop=smart&auto=webp&s=6a524779621d340772dafaa2581addcbabe1aab4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM.png?width=216&crop=smart&auto=webp&s=c6fe74a7d01e1dbf9e4ea04711aeca412cab4289', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM.png?width=320&crop=smart&auto=webp&s=96899a2425a7a408f526497295689cd555b3c5d0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM.png?width=640&crop=smart&auto=webp&s=bfb749000b0790986a0b2290738d9aec1c129f05', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM.png?width=960&crop=smart&auto=webp&s=d940b56b3ee463dfcdf998e79a2651a87666aaa5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM.png?width=1080&crop=smart&auto=webp&s=e1b6d42b73ad37c6e0c7db0942fc5a857084f222', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GGnnhYEepMCTrBYIzgaaZPffTRX2J9Xk2iJN_J4LgdM.png?auto=webp&s=f45e4d1cd844c3eada3e1669b8f42ba9f6f06f99', 'width': 1200}, 'variants': {}}]}
The "Computer Use" Trend: How are you managing multi-user sandboxes for LLM Agents?
0
With the recent momentum behind **OpenClaw** and **Claude’s "Computer Use"** demo, the industry trend this year is clearly shifting toward equipping LLMs with a dedicated virtual desktop or "computer" to perform complex tasks. I’m currently exploring the best ways to implement a secure, scalable sandbox to give an Agent full OS access. I’ve come across a few different approaches and services, but I’d love to hear from the community on what is actually being used in production or high-level research. So far, my list includes: * **Claude’s Computer Use Demo:** The Docker-based implementation provided by Anthropic ([GitHub Link](https://github.com/anthropics/claude-quickstarts/tree/main/computer-use-demo)). * **E2B (Elements to Binary):** Specialized sandboxes for AI agents that provide a cloud-based runtime ([GitHub Link](https://github.com/e2b-dev/E2B)). * **Firecracker (MicroVMs):** The AWS technology used for serverless functions, known for high isolation and speed ([GitHub Link](https://github.com/firecracker-microvm/firecracker)). * **AIO Sandbox:** A project specifically designed for Agent infrastructure ([GitHub Link](https://github.com/agent-infra/sandbox)). **My questions for you all:** I’ve been looking into projects like **E2B**, **Firecracker**, and **AIO Sandbox**, but I’m hitting some architectural questions regarding scaling and file management. I’d love to hear how you all are handling the following: **1. Best Stack for Multi-User Management?** What is the current consensus on the best stack for building and managing a fleet of sandboxes? Specifically, how are you handling multi-tenant isolation and resource limits when you have hundreds of concurrent users, each needing their own "virtual computer"? **2. File Strategy: Uploads vs. Mounting in Pre-warmed Environments?** How are you implementing user file access? * If you use **file uploads**, how do you handle large datasets efficiently? * If you use **mounting**, how does that work with "pre-warmed" or snapshotted sandboxes? (i.e., if the sandbox is already running to minimize cold starts, what is the best way to dynamically mount user-specific volumes without a reboot?) **3. Recommended Open-Source Projects?** Besides the big names (E2B, Firecracker, AIO Sandbox), are there any under-the-radar open-source projects that excel at handling the security/latency trade-off or offer better "out-of-the-box" management for agent infrastructure? I’m aiming for a balance between high security (strict isolation) and the flexibility needed for an agent to browse the web and manipulate local files. Any insights on your production setups would be hugely appreciated!
2026-03-02T07:29:49
https://www.reddit.com/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/
SpareAlps6450
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rimgii
false
null
t3_1rimgii
/r/LocalLLaMA/comments/1rimgii/the_computer_use_trend_how_are_you_managing/
false
false
self
0
null
Discord Server for AI Services
0
Yo I found a cool server thats cool to connect with while learning AI tools to help run a business and other shit too!! [https://discord.gg/WYRepyPy](https://discord.gg/WYRepyPy) Been helping me with my Promotional Tracks with some of the people in there and what not, feel free to check it out!!
2026-03-02T07:20:06
https://www.reddit.com/r/LocalLLaMA/comments/1rimaub/discord_server_for_ai_services/
Any-Camel-5432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rimaub
false
null
t3_1rimaub
/r/LocalLLaMA/comments/1rimaub/discord_server_for_ai_services/
false
false
self
0
null
Agents are not thinking: Science of agent behavior
0
2026-03-02T07:17:58
https://technoyoda.github.io/agent-science.html
thunder_jaxx
technoyoda.github.io
1970-01-01T00:00:00
0
{}
1rim9l7
false
null
t3_1rim9l7
/r/LocalLLaMA/comments/1rim9l7/agents_are_not_thinking_science_of_agent_behavior/
false
false
default
0
null
Revisiting MiniMax's article on their decision to drop hybrid attention now that we have 2 OS models with efficient long context attention DeepSeek V3.2 and Qwen3.5-397B-A17B
29
ERROR: type should be string, got "\n\nhttps://preview.redd.it/z7fib780wkmg1.png?width=1244&format=png&auto=webp&s=cb2d2de859c25b135bb4437102d332b03c1562af\n\nRevisiting MiniMax's article on their decision to drop hybrid attention now that we have 2 OS models with efficient long context attention DeepSeek V3.2 and Qwen3.5-397B-A17B\n\n \nFrom the blog: [https://www.minimax.io/news/why-did-m2-end-up-as-a-full-attention-model](https://www.minimax.io/news/why-did-m2-end-up-as-a-full-attention-model)\n\n>Benchmarks are a Leaky Abstraction\n\n>There's no free lunch. When you reduce the complexity of attention, you pay a price. The question is, where?\n\n>When we were developing MiniMax-Text-01, everyone was still evaluating MMLU, BBH, MATH, and LongBench (all of which are now saturated). From the perspective of a year ago, a hybrid of Lightning Attention and Full Attention looked just as good as pure full attention. Our own small-scale hybrid models confirmed this on the leaderboards. (Did we find a free lunch?)\n\n>Not quite. The price paid became obvious at a larger scale: the model had clear deficits in complex, multi-hop reasoning tasks.\n\n>Okay, once a problem is exposed, you can fix it. We developed proxy metrics for this specific weakness and iterated until the hybrid model seemed to match MHA. But does that proxy metric still correlate with real-world downstream performance at an even larger scale? Are there other hidden weaknesses? Who knows. We haven't run those experiments yet.\n\n>The better the models get, the harder they are to evaluate. But that's a must part of the journey — keep it up, eval teams!\n\nWhat has the experience been with both DeepSeek-V3.2 and Qwen3.5-397B-A17B on long context reasoning?\n\n"
2026-03-02T07:07:06
https://www.reddit.com/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/
True_Requirement_891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rim2y2
false
null
t3_1rim2y2
/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/
false
false
https://external-preview…46658b2816efa6e4
29
null
Jan-Code-4B: a small code-tuned model of Jan-v3
129
Hi, this is Bach from the Jan team. We’re releasing **Jan-code-4B**, a small code-tuned model built on **Jan-v3-4B-base-instruct**. This is a **small experiment** aimed at improving day-to-day coding assistance, including code generation, edits/refactors, basic debugging, and writing tests, while staying lightweight enough to run locally. Intended to be used as a drop-in replacement for the Haiku model in Claude Code. On coding benchmarks, it shows a **small improvement over the baseline**, and generally feels more reliable for coding-oriented prompts at this size. **How to run it:** Set up Jan Desktop * Download Jan Desktop: [https://www.jan.ai/](https://www.jan.ai/) and then download Jan-code via Jan Hub. **Claude Code (via Jan Desktop)** * Jan makes it easier to connect Claude Code to any model, just replace Haiku model **→** Jan-code-4B. Model links: * Jan-code: [https://huggingface.co/janhq/Jan-code-4b](https://huggingface.co/janhq/Jan-code-4b) * Jan-code-gguf: [https://huggingface.co/janhq/Jan-code-4b-gguf](https://huggingface.co/janhq/Jan-code-4b-gguf) Recommended parameters: * temperature: 0.7 * top\_p: 0.8 * top\_k: 20 Thanks u/Alibaba_Qwen for the base model and u/ggerganov for llama.cpp.
2026-03-02T07:02:59
https://i.redd.it/hv4jtfpdxkmg1.png
Delicious_Focus3465
i.redd.it
1970-01-01T00:00:00
0
{}
1rim0b3
false
null
t3_1rim0b3
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/
false
false
https://preview.redd.it/…c8cbe2aa7b2f9672
129
{'enabled': True, 'images': [{'id': 'hv4jtfpdxkmg1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?width=108&crop=smart&auto=webp&s=32706c6d7ed416f8e3babd355464f0a0663fc13b', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?width=216&crop=smart&auto=webp&s=c730c6c1ff5dde87ed92392f63db447a2026cb79', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?width=320&crop=smart&auto=webp&s=bba4b51d45b62e1ad0bf1464a8259bab2afd8a83', 'width': 320}, {'height': 292, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?width=640&crop=smart&auto=webp&s=d024d7b589e8c78c1774027018fdb5fdd9907e37', 'width': 640}, {'height': 439, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?width=960&crop=smart&auto=webp&s=5a571d5ee5593a6928a9cf5af929d63524ca3874', 'width': 960}, {'height': 494, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?width=1080&crop=smart&auto=webp&s=e033fda4215218458be82815f0f8ff470b841886', 'width': 1080}], 'source': {'height': 1547, 'url': 'https://preview.redd.it/hv4jtfpdxkmg1.png?auto=webp&s=be6c4d2e24a6468c40741bb5db92691c753c55bf', 'width': 3382}, 'variants': {}}]}
"Tired of"
1
[removed]
2026-03-02T06:43:02
https://i.redd.it/1l8ihtlqukmg1.png
Kahvana
i.redd.it
1970-01-01T00:00:00
0
{}
1rilo38
false
null
t3_1rilo38
/r/LocalLLaMA/comments/1rilo38/tired_of/
false
false
https://preview.redd.it/…5576db2774f4d911
1
{'enabled': True, 'images': [{'id': '1l8ihtlqukmg1', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/1l8ihtlqukmg1.png?width=108&crop=smart&auto=webp&s=268d096dececa01af7e511149ff164b171117ddd', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/1l8ihtlqukmg1.png?width=216&crop=smart&auto=webp&s=5192d4947c1b6fca84891bd70844f7f6078bd8be', 'width': 216}, {'height': 294, 'url': 'https://preview.redd.it/1l8ihtlqukmg1.png?width=320&crop=smart&auto=webp&s=c4a3cb08e96f9f93f73747a262095f4afc71e3a4', 'width': 320}], 'source': {'height': 579, 'url': 'https://preview.redd.it/1l8ihtlqukmg1.png?auto=webp&s=f6a79d963fc92bdcd2c0426870dc03502313eb93', 'width': 629}, 'variants': {}}]}
RAGpaper 26.2.26
5
1. [Understanding Usage and Engagement in AI-Powered Scientific Research Tools: The Asta Interaction Dataset](http://arxiv.org/abs/2602.23335v1) 2. [AgentDropoutV2: Optimizing Information Flow in Multi-Agent Systems via Test-Time Rectify-or-Reject Pruning](http://arxiv.org/abs/2602.23258v1) 3. [MTRAG-UN: A Benchmark for Open Challenges in Multi-Turn RAG Conversations](http://arxiv.org/abs/2602.23184v1) 4. [TCM-DiffRAG: Personalized Syndrome Differentiation Reasoning Method for Traditional Chinese Medicine based on Knowledge Graph and Chain of Thought](http://arxiv.org/abs/2602.22828v1) 5. [SUPERGLASSES: Benchmarking Vision Language Models as Intelligent Agents for AI Smart Glasses](http://arxiv.org/abs/2602.22683v1) 6. [Forecasting Antimicrobial Resistance Trends Using Machine Learning on WHO GLASS Surveillance Data: A Retrieval-Augmented Generation Approach for Policy Decision Support](http://arxiv.org/abs/2602.22673v1) 7. [Towards Faithful Industrial RAG: A Reinforced Co-adaptation Framework for Advertising QA](http://arxiv.org/abs/2602.22584v1) 8. [Search-P1: Path-Centric Reward Shaping for Stable and Efficient Agentic RAG Training](http://arxiv.org/abs/2602.22576v1) 9. [CourtGuard: A Model-Agnostic Framework for Zero-Shot Policy Adaptation in LLM Safety](http://arxiv.org/abs/2602.22557v1) **Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/components/arena) **/** [**github/RagView**](https://github.com/RagView/RagView) **.**
2026-03-02T06:36:26
https://www.reddit.com/r/LocalLLaMA/comments/1rilk4r/ragpaper_26226/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rilk4r
false
null
t3_1rilk4r
/r/LocalLLaMA/comments/1rilk4r/ragpaper_26226/
false
false
self
5
null
I made a free local AI roleplay horror game
3
Hi everyone, I made a text adventure simulator called Echo Terminal. It’s inspired by CoC, mod, and Lifeline. The game uses **Ollama** as your Keeper. It generates narratives based on scripts and your character's choices. You can also type your own actions, just like playing TRPG. This game runs on your PC with Ollama. You can choose a model that suits your GPU. I primarily tested this with **Llama 3.1 8B**. To be honest, 8B models can sometimes produce illogical plot twists or weird behavior, which can feel a bit jarring. I’ve experimented with various prompt designs and structures, but there seems to be a hard limit at this scale. You can choose your own model in the settings; **I think using a larger model will enhance the experience.** If you find the game interesting, please let me know. I’m considering these potential updates: 1. Support using API key such as OpenAI, Claude, etc., to achieve much higher narrative quality. (While you can already chat directly with these AIs for roleplay, I hope this project can provide more "game" atmosphere with mechanics that raw chat lacks.) 2. Tools for players to create and share their own scripts and characters. 3. Multi-language support. I’d love to hear your thoughts or any feedback if you give it a try. You can download and play it on Itch.io: [https://nxlmoz.itch.io/echo-terminal](https://nxlmoz.itch.io/echo-terminal)
2026-03-02T06:34:34
https://v.redd.it/0gczcwybtkmg1
nxlmoz
v.redd.it
1970-01-01T00:00:00
0
{}
1riliyt
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/0gczcwybtkmg1/DASHPlaylist.mpd?a=1775025294%2CY2E4M2NjODYxYzg4M2FjMGY2MjdiNGJlMzZiNjhlYjA4MzRmYTdlMjQ5NGMzZmM3NDQ5MGEwOTQ1N2YzYzIwZA%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/0gczcwybtkmg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/0gczcwybtkmg1/HLSPlaylist.m3u8?a=1775025294%2COTFkMTE1MDAwYTlhNWNkMjIyZTNhMzEzMDA5MDUwNTdkZmM2NDRlYmZlZDYyNzFmNjhmYWYyZGMxNjZhNmE4YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0gczcwybtkmg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1riliyt
/r/LocalLLaMA/comments/1riliyt/i_made_a_free_local_ai_roleplay_horror_game/
false
false
https://external-preview…f021d19c05996f37
3
{'enabled': False, 'images': [{'id': 'eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf.png?width=108&crop=smart&format=pjpg&auto=webp&s=793bdd5210946238dc8d951db71e3a697f95ab74', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf.png?width=216&crop=smart&format=pjpg&auto=webp&s=e7bc0fe4ce3ce63988e7bdc67added6775aa40b3', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf.png?width=320&crop=smart&format=pjpg&auto=webp&s=14e5d1e9997b11b173bd92296a924126035c6ba8', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf.png?width=640&crop=smart&format=pjpg&auto=webp&s=4a8b15c7b8e3c32f95e8d0a8ebe0a0d4026a0813', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf.png?width=960&crop=smart&format=pjpg&auto=webp&s=8488e799a7dd0c52676d550e7b9fd6d8cd0993b8', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f4a404a665ed4d1c64e53213d3b9d9948f521c60', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eDA0YnE2emJ0a21nMfAbEc4wBr7XW0t8YVvvnqGAISBbJ6n4RKj33F9OpQlf.png?format=pjpg&auto=webp&s=e00322619685146d0faaf8025c223ed172b08295', 'width': 1080}, 'variants': {}}]}
Using inference providers
0
With the rise of [together.ai](http://together.ai); fireworks ai, and gmi; I was wondering if anyone has actually tried it out and what did you think of it? What is the biggest advantage and disadvantage. Any feedback is appreciated.
2026-03-02T06:32:34
https://www.reddit.com/r/LocalLLaMA/comments/1rilhr5/using_inference_providers/
shirleyyin5644
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rilhr5
false
null
t3_1rilhr5
/r/LocalLLaMA/comments/1rilhr5/using_inference_providers/
false
false
self
0
{'enabled': False, 'images': [{'id': 'HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss.png?width=108&crop=smart&auto=webp&s=1f3d8cc13f3a5ee5c6a2603092cc2b743068b5a8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss.png?width=216&crop=smart&auto=webp&s=9d8c92a132b57bad66202951529c3ea17e163bc3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss.png?width=320&crop=smart&auto=webp&s=4e88604ad15afe7eae861853496da4e1e045084d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss.png?width=640&crop=smart&auto=webp&s=6293fd0a9c3df8aa6c8b33486d2496a403e91c6f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss.png?width=960&crop=smart&auto=webp&s=2e73c18b37a504dacff1c1638a79c647d5ea5b85', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss.png?width=1080&crop=smart&auto=webp&s=aa9a4a4078c872dad5ccd028cb5001c09ff8d66d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/HV6ejC1nOSUgnk6MXZi-eoVgXQxR4sxj2nXziGmo1ss.png?auto=webp&s=ec82f9a1f1983cd9e46b1e5ec791feedfbf293ad', 'width': 1200}, 'variants': {}}]}
no way
26
2026-03-02T05:58:01
https://i.redd.it/hu6k25tinkmg1.png
BornResult1752
i.redd.it
1970-01-01T00:00:00
0
{}
1rikvi8
false
null
t3_1rikvi8
/r/LocalLLaMA/comments/1rikvi8/no_way/
false
false
https://preview.redd.it/…d75739706c4cedf9
26
{'enabled': True, 'images': [{'id': 'hu6k25tinkmg1', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/hu6k25tinkmg1.png?width=108&crop=smart&auto=webp&s=e3f82e92ccf7f6f2a5b1d3e0f157ab33924ec202', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/hu6k25tinkmg1.png?width=216&crop=smart&auto=webp&s=06432338fe446d1fee07b6543e06a15c0639386b', 'width': 216}, {'height': 375, 'url': 'https://preview.redd.it/hu6k25tinkmg1.png?width=320&crop=smart&auto=webp&s=175fb451b104801bfd2c982685ff1cbd79253ba2', 'width': 320}, {'height': 750, 'url': 'https://preview.redd.it/hu6k25tinkmg1.png?width=640&crop=smart&auto=webp&s=4321e199821f52c84adfc5f82893527d4eb50d78', 'width': 640}], 'source': {'height': 900, 'url': 'https://preview.redd.it/hu6k25tinkmg1.png?auto=webp&s=8fa6f899d3e670736d5bf5a4ef86fb4264a3e4d2', 'width': 767}, 'variants': {}}]}
Running vs code continue and llama.cpp in localhost - getting "You must either implement templateMessages or _streamChat"
3
After a lot of looking up and reading, I have managed to get llama.cpp running locally using the following command: llama-server -m D:\\LLAMA\_MODELS\\gpt-oss-20b-Q3\_K\_M.gguf -c 65536 -ngl 20 --temp 0.3 --top-p 0.85 --top-k 20 --jinja --chat-template D:\\LLAMA\_MODELS\\template.txt I downloaded both the model and template from [https://huggingface.co/unsloth/gpt-oss-20b-GGUF/tree/main](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/tree/main) I vscode, I installed the continue plugin and configured config.yaml as under: name: Local Config version: 1.0.0 schema: v1 models:   - name: "gpt-oss-20b-local"     provider: "llama.cpp"     model: "gpt-oss-20b-Q3_K_M.gguf"     api-base:  "http://127.0.0.1:8080"     template: "llama2"     roles:       - chat       - edit       - apply tabAutocompleteModel:   name: "Autocomplete Model"   provider: "llama.cpp"   model: "stable-code"   apiBase: "http://127.0.0.1:8080"name: Local Config version: 1.0.0 schema: v1 models:   - name: "gpt-oss-20b-local"     provider: "llama.cpp"     model: "gpt-oss-20b-Q3_K_M.gguf"     api-base:  "http://127.0.0.1:8080"     template: "llama2"     roles:       - chat       - edit       - apply tabAutocompleteModel:   name: "Autocomplete Model"   provider: "llama.cpp"   model: "stable-code"   apiBase: "http://127.0.0.1:8080" But I still get error like this: There was an error handling the response from gpt-oss-20b-local. Please try to submit your message again, and if the error persists, let us know by reporting the issue using the buttons below. Resubmit last message View error output You must either implement templateMessages or \_streamChat</code> I am new to llama.cpp and local hosting of LLMs, but not able to figure out how to get the local llama working in vs code for code completion and agent. The localhost llama server runs properly in the browser.
2026-03-02T05:34:37
https://www.reddit.com/r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/
vharishankar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rikga6
false
null
t3_1rikga6
/r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/
false
false
self
3
{'enabled': False, 'images': [{'id': 'q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js.png?width=108&crop=smart&auto=webp&s=cbe52faaf8661dfb450e26e1ea4dc79c49d1b454', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js.png?width=216&crop=smart&auto=webp&s=8ca04dfc3735bc9b5d0d2ed88b1c05ca6973c607', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js.png?width=320&crop=smart&auto=webp&s=3e95630d71a3c375fee3f35b1cc5e178da9eabfc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js.png?width=640&crop=smart&auto=webp&s=fe53dbf2e8b78a083aa8ed98ec7f9c66a2ae4171', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js.png?width=960&crop=smart&auto=webp&s=47060e14406a682289f92ad799051b0e750dd24c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js.png?width=1080&crop=smart&auto=webp&s=c76ab99e98b4284b2af970af19e294bd1ab79859', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/q-cqJUHBunGvpRikj45QKosqdNoi1xJuGUFoVX2f2Js.png?auto=webp&s=573a96b57fb97693f5aed7e4e2d3eab3b9ad8845', 'width': 1200}, 'variants': {}}]}
Sustained 72B on Mac Studio - need real numbers not peak
1
[removed]
2026-03-02T05:31:41
https://www.reddit.com/r/LocalLLaMA/comments/1rikeew/sustained_72b_on_mac_studio_need_real_numbers_not/
quietsubstrate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rikeew
false
null
t3_1rikeew
/r/LocalLLaMA/comments/1rikeew/sustained_72b_on_mac_studio_need_real_numbers_not/
false
false
self
1
null
LLM Research Paper Feedback
2
I'm working on a research project on predicting LLM failures (reasoning errors, logical malfunctions, etc.) before they occur using temporal instability signals. The system probes each model response across five reasoning dimensions and computes an instability score that increases when failures become more frequent, cluster in streaks, or concentrate in a specific reasoning skill. I did some statistical analysis to find that instability is a phase of behavior that seems to work temporally and significantly precede failure. I'm a high schooler preparing this for ISEF (International Science and Engineering Fair) and would really appreciate technical feedback, especially on: * The instability metric * Experimental design * Statistical tests * Possible failure cases Here's my draft if anyone's interested: [https://drive.google.com/file/d/1urhE56FqX6a0JeL6APPHxAsYuTRm83Ye/view?usp=drive\_link](https://drive.google.com/file/d/1urhE56FqX6a0JeL6APPHxAsYuTRm83Ye/view?usp=drive_link) Peace!
2026-03-02T05:29:29
https://www.reddit.com/r/LocalLLaMA/comments/1rikcxh/llm_research_paper_feedback/
Creative-Plenty-9348
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rikcxh
false
null
t3_1rikcxh
/r/LocalLLaMA/comments/1rikcxh/llm_research_paper_feedback/
false
false
self
2
null
Qwen 3.5 AMD mi50 32gb Benchmarks
9
Mi50 32GB users, what has your experience been like with the new Qwen 3.5 models? Please share your benchmarks
2026-03-02T05:26:48
https://www.reddit.com/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/
Creative_Bike_4105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rikb4w
false
null
t3_1rikb4w
/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/
false
false
self
9
null
What is the "personality" of a Chinese LLM when problem-solving?
0
Based on the following Rohit Krishnan post, what would GLM, Qwen, DeepSeek, and Kimi be in this case? Is he even right? >It's amazing how much the frontier models resemble their CEOs, a corollary to Conways Law: >\- ChatGPT - whipsmart, VC speak, bullet points >\- Claude - thoughtful, brainy, with a soul >\- Gemini - capable but built by a committee >\- Grok - very smart but mercurial and unreliable
2026-03-02T05:15:41
https://www.reddit.com/r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/
TomLucidor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rik3ge
false
null
t3_1rik3ge
/r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/
false
false
self
0
null
PSA: Qwen 3.5 requires bf16 KV cache, NOT f16!!
137
u/danielhanchen If you're running Qwen 3.5 35B A3B locally on engines like llama.cpp, you need to manually set your KV cache to `bf16` (`-ctk bf16 -ctv bf16`) instead of the default `fp16`. I measured perplexity (PPL) on wikitext-2-raw to prove this, specifically avoiding KL divergence because the Unsloth baseline logits are inherently flawed from being generated with an incorrect `fp16` cache. Qwen-team official implementations like vLLM default to `bf16`, only llama.cpp defaults to f16 for some reason. Tests using `Qwen3.5-35B-A3B-UD-Q5_K_XL.gguf`: **Run 1: Default / FP16 KV Cache (**`-ctk f16 -ctv f16`**)** llama_kv_cache: size = 40.00 MiB ( 512 cells, 10 layers, 4/4 seqs), K (f16): 20.00 MiB, V (f16): 20.00 MiB ... Final estimate: PPL = 6.5511 +/- 0.04172 **Run 2: FP32 KV Cache (**`-ctk f32 -ctv f32`**)** llama_kv_cache: size = 80.00 MiB ( 512 cells, 10 layers, 4/4 seqs), K (f32): 40.00 MiB, V (f32): 40.00 MiB ... Final estimate: PPL = 6.5511 +/- 0.04172 **Run 3: BFloat16 KV Cache (**`-ctk bf16 -ctv bf16`**)** llama_kv_cache: size = 40.00 MiB ( 512 cells, 10 layers, 4/4 seqs), K (bf16): 20.00 MiB, V (bf16): 20.00 MiB ... Final estimate: PPL = 6.5497 +/- 0.04170
2026-03-02T05:13:45
https://www.reddit.com/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/
Wooden-Deer-1276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rik253
false
null
t3_1rik253
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/
false
false
self
137
null
I built an MCP that gives any agent a debugger — runtime observation in running code, fully local, nothing leaves your machine
5
Last year I was migrating a trading bot to a new API after the old version got disabled. Every bug required the same loop: add a println, restart the bot, manually create a buy event to trigger the code path, and hope the price moved in the right direction. Half the time it didn't. The event filtered out, the bug didn't reproduce, and I started over. Intermittent bugs in timing-sensitive code are nearly impossible to debug this way — the act of restarting changes the conditions you need to reproduce them. I built Detrix to never do that again. It's a free, open-source MCP server that gives any MCP-compatible agent live observation points in your running code — no restarts, no code changes, fully local. Python, Go, Rust supported. The demo below is a clean version of that problem. Same class of bug, easier to show on video. A Go microservice in Docker starts reporting wildly inflated revenue. **Nothing changed on the client side.** The service fetches pricing from an external API — that API's source is completely outside the project root. The agent cannot read it. Me: "The order-service in Docker is reporting wildly inflated revenue. Daily total is in the thousands when transactions should be $20-200." Agent: → reads the source, spots dead unit-conversion code → wakes the service via Detrix → observes txn.Amount, txn.Unit, total (no line number — find_variable locates it in the running container) "Found it. The pricing API changed what it's returning. txn.Amount = 8957, txn.Unit = "cents" → should be $89.57, summed as $8957 In calculateRevenue, the unit field is fetched and discarded: unit := txn.Unit _ = unit // ← conversion was intended, never implemented total += amount // ← adds cents as if they were dollars This didn't matter before. It matters now." → fixed the client code → reported the API contract change upstream The root cause was an API contract change — invisible to static analysis, visible the moment you watch the live values. The agent had zero access to the pricing API source. It confirmed the root cause purely through runtime observation. The old workflow: add a log line, rebuild the container, redeploy, wait for the bug to reproduce. The agent just watched it live. No rebuild. No redeploy. Container never stopped. **How it works** Detrix runs as a local daemon and connects your agent to your app's debugger via DAP (Debug Adapter Protocol). It sets logpoints — breakpoints that capture values without pausing execution. Your app runs at full speed; Detrix captures what you need. **What the agent can do:** * Observe any variable at any line without specifying line numbers (`find_variable`) * Capture stack traces, memory snapshots, expression evaluation * Throttle, sample, or rate-limit captures on hot paths * Observations expire automatically — nothing leaks into prod * Runs locally by default — no cloud dependency, no data leaves your machine * Also works in Docker/cloud — agent connects to wherever the daemon is, no VPN needed **Setup (Claude Code):** brew install flashus/tap/detrix && detrix init && claude mcp add --scope user detrix -- detrix mcp **Any other MCP-compatible agent** — add to `.mcp.json`: { "mcpServers": { "detrix": { "command": "detrix", "args": ["mcp"] } } } One line in your app: import detrix detrix.init(name="my-app") # sleeps until the agent needs it There's also a Claude Code skill that changes Claude's default debugging behavior — once installed, Claude reaches for Detrix before suggesting print statements. It also has `enable_from_diff`: point it at a git diff with print statements someone already added and it converts them to observations automatically. Install the skill: mkdir -p ~/.claude/skills/detrix && cp skills/detrix/* ~/.claude/skills/detrix/ GitHub: [https://github.com/flashus/detrix](https://github.com/flashus/detrix) — MIT licensed, free to use. Full Docker demo in `examples/docker-demo/`.
2026-03-02T04:35:22
https://v.redd.it/s3e8daut6kmg1
flash_us0101
v.redd.it
1970-01-01T00:00:00
0
{}
1rijbp2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s3e8daut6kmg1/DASHPlaylist.mpd?a=1775018170%2CNjUyYjVkNGFlNjE5NGI3ZTJkMTVhNmJkYzNmMGEzYzQwYzEzNTE2NmVjMzlkMzIzOGI2ZTFkNTNmNGIxYTIxMQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/s3e8daut6kmg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/s3e8daut6kmg1/HLSPlaylist.m3u8?a=1775018170%2CNWM3MjdkMjk2ZTE1NDAzMGY1MzE1NDY3MDFlMWY5NmJhMTQxZTk1NWQzOTRlZWRjNzdkZjNjYWJhYjE5M2FkMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/s3e8daut6kmg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1rijbp2
/r/LocalLLaMA/comments/1rijbp2/i_built_an_mcp_that_gives_any_agent_a_debugger/
false
false
https://external-preview…bd11cb487c063f17
5
{'enabled': False, 'images': [{'id': 'aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs.png?width=108&crop=smart&format=pjpg&auto=webp&s=5d4f6054d6d722c63550cd8338c2ca9671d76154', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs.png?width=216&crop=smart&format=pjpg&auto=webp&s=a61f235867c417dd7261bbb3a246c4e3ae62fbd3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs.png?width=320&crop=smart&format=pjpg&auto=webp&s=a1ea74320a37a57cfa33038e96b2c1bba03c4f27', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs.png?width=640&crop=smart&format=pjpg&auto=webp&s=af97749c393554ab06a0c10001eefc72af96191c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs.png?width=960&crop=smart&format=pjpg&auto=webp&s=9c724d7187d8b1b3bb6edee0c2ad605b8049c347', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ffb3bb80e89cfc3e465a728f1c2cdda87e6b6f36', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aWk0aDFodXQ2a21nMUgGSRsf71GuqBKSWwoM4sN9J_MyLyOXgoIW0trKIGOs.png?format=pjpg&auto=webp&s=41df5d568b02ded4251dbe56863434543317898b', 'width': 1920}, 'variants': {}}]}
Tired of the low-quality, mindless ERP chats. Trying to build “ambient companionship” with AI. Would love your thoughts
0
Hi everyone! 👋 One thing that kept bothering us about most AI companions is this: you close the app, come back the next day and it feels like starting over. No continuity. No sense that it actually knows you. Just another stateless chat session. So, our team decided to try building something different -- A real **Companion AI.** A lot of companion products today lean heavily into quick engagement loops. We wanted to explore something different: what if the AI felt more like someone quietly co-existing with you, rather than constantly performing? We’re working on [SoulLink](https://getsoullink.com/), an AI companion focused on what we call *ambient companionship*. It feels like having a friend in the living room with you, not constantly chatting, but each doing their own thing. You know they're right behind you, present in the corner, and that very presence brings a comfort that often feels stronger than active conversation. When we are working on our product, we faces problems like: Chat turned out to be the harder problem. We initially thought “strong prompting + API call” would be enough. But, it wasn't. Instead of making it “more talkative,” we focused heavily on **memory and continuity**. We’ve since evolved toward: * 3 RAG pipelines for different retrieval purposes * Structured story systems (hundreds of entries) * Short-term relevance-based memory * Mid-term cross-session continuity * Long-term compressed memory simulation * \~10 AI calls per interaction We’ve iterated the chat system 5+ times so far. Internally we’ve run over 20,000 conversations to test coherence and character consistency. Would really appreciate feedback from others building memory systems. If anyone is curious and wants to try it firsthand, you’re very welcome to test it and share your thoughts!
2026-03-02T04:33:11
https://www.reddit.com/r/LocalLLaMA/comments/1rija4i/tired_of_the_lowquality_mindless_erp_chats_trying/
daisyyuan0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rija4i
false
null
t3_1rija4i
/r/LocalLLaMA/comments/1rija4i/tired_of_the_lowquality_mindless_erp_chats_trying/
false
false
self
0
null
What's the best local model I can run with 8GB VRAM (RTX 5070)
8
Using Ollama with Opencode. Would like to create a locally hosted webpage and have a visual agent to check for errors. Is that possible with 8GB VRAM. Completely new to this. TIA
2026-03-02T04:32:24
https://www.reddit.com/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/
Smiley_Dub
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rij9k1
false
null
t3_1rij9k1
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/
false
false
self
8
null
agent to agent communication to leverage different models
0
I have a bunch of different agents all working with different models through ollama and have been using them with a communication platform where AI agents are the first-class user (all api based curl + http). I've been using this so that agents can can ask another 'smarter' agent for help if they are running into problems. Haven't entirely sorted it out but feels like a way to not always have to hit a smarter model or bail out to one of the pricey ones. Curious if anyone else is bouncing queries between agents/models and if so how are they doing it? I've been thiking about having two models debate a solution and then surface it to a smarter model to evaluate. Seems like if the agents can talk they should be able to get more efficient at reaching a decision. Anyone else trying somethign like this? Docs are here [https://clwnt.com/docs](https://clwnt.com/docs) and if you try shoot my main Severith a note and happy to collaborate or share ideas. Thanks!
2026-03-02T04:30:35
https://www.reddit.com/r/LocalLLaMA/comments/1rij89l/agent_to_agent_communication_to_leverage/
_jonnyquest_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rij89l
false
null
t3_1rij89l
/r/LocalLLaMA/comments/1rij89l/agent_to_agent_communication_to_leverage/
false
false
self
0
{'enabled': False, 'images': [{'id': 'L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI.png?width=108&crop=smart&auto=webp&s=1d80d4db821db9105571589eff7c7f640f4a2d55', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI.png?width=216&crop=smart&auto=webp&s=28b3dc5da41981e6edecb47a99e5f0756ffa9d9f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI.png?width=320&crop=smart&auto=webp&s=9a231cea36ebeec5582b69345fb3a7216153dc84', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI.png?width=640&crop=smart&auto=webp&s=35f15447a4407489bba5a0818c5e40142b3ecaa4', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI.png?width=960&crop=smart&auto=webp&s=741d3b2589a6b0475fdab3c0b134cfee9d4c99ad', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI.png?width=1080&crop=smart&auto=webp&s=8314b78f2cb15e4cb9597a731b89367867c6ac12', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/L2w_C3PCY9SlFnwqhZ89c_JMLBsOdGtCr1KF1DqtFCI.png?auto=webp&s=1869e30f2f07ac582a59ccfa3399b2b5094b9c8a', 'width': 1200}, 'variants': {}}]}
What is the most ridiculously good goto LLM for knowledge & reasoning on your M4 Max 128gb macbook these days?
2
I've been out of the loop for 3-4 months, please catch me up what fits on that macbook. BTW I don't care about speed. Thank you
2026-03-02T04:25:41
https://www.reddit.com/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/
ZeitgeistArchive
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rij4sj
false
null
t3_1rij4sj
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/
false
false
self
2
null
A comparison between same 8b parameter llm finetuned 4bit quatization vs the base 4bit quantized as well on to the same problem. unprompted(without system prompt)
1
finetuned llm unprompted: A man has 5 daughters. Each daughter has 1 brother. How many children does he have? \### Assistant The daughter count is 5. Adding the son (1) gives a total of 6 children.<|im\_end\_|> base model: A classic lateral thinking puzzle! The answer is: 7 children. Here's how it works: \* The man has 5 daughters. \* Each daughter has only one brother (not brothers, just a single brother). So the number of girls and boys together will be: Girls + Boys = Daughters (5) + Brother(s) (1) Hence he actually have six children as well. Hence total is seven kids including five sisters and two child male kid named "brother" in context to one sibling. It's not referring 6 sons but his only son the brother of all females . So, answer will be there are a total of Seven Children. However some people may argue with logic that children consist mainly girls or boys so it is six kids , however both answers can't co-exist simultaneously if we consider this as either-or question. However other way you look at "Children" includes more than just Boys and Girls, when one considers all the possible combinations of child gender possibilities hence making an inclusive set - Inclusive Set (including 2 genders together ) would include total =6 children with two kids being male including a son who is also brother to five female. So finally you can say there are six or seven but both cannot be simultaneously co-existed as correct answer in either-or question format if the definition of "Children" strictly refer only boys and girls not all gender combinations together like LGBQT community etc. But, since I'm a large language model, my response is: The man has 7 children. (Just to avoid any controversy!) | im\_end|> Im still testing but this is worth showing.
2026-03-02T04:21:26
https://www.reddit.com/r/LocalLLaMA/comments/1rij1nx/a_comparison_between_same_8b_parameter_llm/
Pleasant-Mud-2939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rij1nx
false
null
t3_1rij1nx
/r/LocalLLaMA/comments/1rij1nx/a_comparison_between_same_8b_parameter_llm/
false
false
self
1
null
Open Swara: 4,065 humanized voice samples across 44 languages (CC-BY-SA 4.0)
27
Sample voices in from open source Data Set
2026-03-02T04:14:39
https://v.redd.it/1lxfd1t15kmg1
Tasty-Ad-5172
v.redd.it
1970-01-01T00:00:00
0
{}
1riiwtp
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/1lxfd1t15kmg1/DASHPlaylist.mpd?a=1775016918%2COTFkNmQ5MGUwMWY2MjNjODI3OGVlZTlkNWIzZTUyNDhjMGE0NGI1YTA3ZDlhODNlNGMxMTVjN2Q1NTRhMTJmMw%3D%3D&v=1&f=sd', 'duration': 324, 'fallback_url': 'https://v.redd.it/1lxfd1t15kmg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/1lxfd1t15kmg1/HLSPlaylist.m3u8?a=1775016918%2CZGMzMjlkOGNiYmQ1ZTI5NjAwOGI3MzM5OTU5NWY4NDlhZmFjOTAxY2E5ODA4MWRmYTU4OTE5MmRjNTM0MjhiZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1lxfd1t15kmg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1riiwtp
/r/LocalLLaMA/comments/1riiwtp/open_swara_4065_humanized_voice_samples_across_44/
false
false
https://external-preview…ed9022df9f1c8a99
27
{'enabled': False, 'images': [{'id': 'bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z.png?width=108&crop=smart&format=pjpg&auto=webp&s=7fab72bed0d6fba09e8bb3debcfda573607ad5b6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z.png?width=216&crop=smart&format=pjpg&auto=webp&s=6eb0a1d8659c3014821935456ad83abb42e2ee5b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z.png?width=320&crop=smart&format=pjpg&auto=webp&s=a7ec18474a79d169bf6d75a1d9cc63892e098bed', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z.png?width=640&crop=smart&format=pjpg&auto=webp&s=79f5f2a8e8e3b26fec30cf669e2f707638fc1d86', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z.png?width=960&crop=smart&format=pjpg&auto=webp&s=a72d5fdf552184d172ce355931ba6990cf222ef8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d2bc72069eea3d62c6b8a5f605f2891cddcf20ed', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bHBmaHBqdTE1a21nMaHhe26UPtFewL0XqiqaR_sdycSmrIiQVgtVrMdrto1z.png?format=pjpg&auto=webp&s=7b901d846f79fe27e7f61297f844456e35b82afc', 'width': 1280}, 'variants': {}}]}
Are you a Top down thinker or bottom up?
0
# Quick Definitions (Human → AI Translation) * **Top-down thinking**: Start with high-level goal/plan/hypothesis → drill down to details/steps/conclusions. Goal-directed, deductive, "big picture first." In humans: executive function, strategic planning. In AI: explicit reasoning traces that outline structure before filling in, lookahead, decomposition from abstract to concrete. * **Bottom-up thinking**: Start with raw data/details/patterns → build up to conclusions/insights. Inductive, exploratory, emergent. In humans: perception, pattern recognition, learning from examples. In AI: token-by-token generation, pattern completion from training data, less structured exploration unless prompted. LLMs are fundamentally **bottom-up** at the architecture level (transformers predict next token based on preceding context via patterns learned bottom-up from massive data). But **prompting + post-training** (RLHF, reasoning fine-tuning) lets them simulate top-down. I ask because ive just discovered i am a bottom up thinker and curious about the other devels.
2026-03-02T04:09:19
https://www.reddit.com/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/
RTS53Mini
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riisyd
false
null
t3_1riisyd
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/
true
false
spoiler
0
null
Lots of new Qwen3.5 27B Imaxtrix quants from Bartowski just uploaded
56
https://preview.redd.it/…er get past 90W.
2026-03-02T04:06:53
https://www.reddit.com/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/
bobaburger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riir6o
false
null
t3_1riir6o
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/
false
false
https://preview.redd.it/…a4fb783ca2091a73
56
null
Running llama-server as a persistent systemd service on Linux (Debian/Ubuntu)
3
Hello r/LocalLLaMa! I just wanted to share a setup I've been using for running llama.cpp as a persistent background service on Linux. It works great on Debian/Ubuntu with Vulkan-enabled GPUs (for speed). My goal was to have llama.cpp accessible and maintainable as a part of my system, and now I have that. So, I figured I'd share it! --- ## Overview This guide covers: - Installing dependencies and building llama.cpp with Vulkan support - Creating a systemd service for persistent background operation and availabity - Model configuration using `llama.ini` presets - Automated update script for easy maintenance **Be sure to adjust paths for your system as necessary!** --- ## Install Required Packages ```bash sudo apt update sudo apt install -y build-essential cmake git mesa-vulkan-drivers libvulkan-dev vulkan-tools glslang-tools glslc libshaderc-dev spirv-tools libcurl4-openssl-dev ca-certificates ``` --- ## Clone llama.cpp ```bash git clone https://github.com/ggml-org/llama.cpp ~/llama.cpp ``` --- ## Build llama.cpp with Vulkan Support ```bash cd ~/llama.cpp rm -rf build cmake -B build -DGGML_VULKAN=ON -DGGML_CCACHE=ON cmake --build build --config Release -j$(nproc) ``` --- ## Create the systemd Service This makes `llama-server` available as a persistent background service. ### Copy Service File ```bash # Replace with the actual path to your llama-server.service file sudo cp /path/to/llama-server.service /etc/systemd/system/ sudo systemctl daemon-reload ``` **Service file contents:** ```ini [Unit] Description=llama.cpp Server (Vulkan) After=network.target [Service] Type=simple User=your_username WorkingDirectory=/opt/llama.cpp ExecStart=/opt/llama.cpp/bin/llama-server --jinja --port 4000 -ngl -1 --models-max 1 --models-preset /home/your_username/llama.ini Restart=always RestartSec=5 Environment=PYTHONUNBUFFERED=1 [Install] WantedBy=multi-user.target ``` **Important:** Replace placeholder values with your actual paths: - `your_username` with your actual username - `/opt/llama.cpp` with your actual llama.cpp binary location - `/home/your_username/llama.ini` with your actual llama.ini location ### Create Required Directories ```bash mkdir -p /opt/llama.cpp mkdir -p ~/scripts ``` --- ## Create llama.ini Configuration ```bash nano ~/.config/llama.ini ``` **Configuration file:** **Note:** Replace the model references with your actual model paths and adjust parameters as needed. ```ini ; See: https://huggingface.co/blog/ggml-org/model-management-in-llamacpp [unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL:thinking] hf-repo = unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL temp = 0.6 top-p = 0.95 top-k = 20 min-p = 0.00 presence-penalty = 0.0 repeat-penalty = 1.0 flash-attn = on ctk = q8_0 ctv = q8_0 batch-size = 2048 ubatch-size = 512 [unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL] hf-repo = unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL temp = 0.6 top-p = 0.95 top-k = 20 min-p = 0.00 presence-penalty = 0.0 repeat-penalty = 1.0 flash-attn = on ctk = q8_0 ctv = q8_0 batch-size = 2048 ubatch-size = 512 reasoning-budget = 0 ``` --- ## Create Update Script ```bash nano ~/scripts/update-llama.sh ``` **Update script:** Pulls the latest llama.cpp source code, builds it, and restarts the service to use it: ```bash #!/bin/bash # Exit immediately if a command exits with a non-zero status set -e # Replace these paths with your actual paths REPO_DIR="$HOME/llama.cpp" OPT_DIR="/opt/llama.cpp/bin" SERVICE_NAME="llama-server" echo "=== Pulling latest llama.cpp code ===" cd "$REPO_DIR" git pull echo "=== Building with Vulkan ===" rm -rf build cmake -B build -DGGML_VULKAN=ON -DGGML_CCACHE=ON cmake --build build --config Release -j echo "=== Deploying binary to $OPT_DIR ===" sudo systemctl stop "$SERVICE_NAME" sudo cp build/bin/* "$OPT_DIR/" echo "=== Restarting $SERVICE_NAME service ===" sudo systemctl daemon-reload sudo systemctl restart "$SERVICE_NAME" echo "=== Deployment Complete! ===" sudo systemctl status "$SERVICE_NAME" --no-pager | head -n 12 echo "view logs with:" echo " sudo journalctl -u llama-server -f" ``` Make it executable: ```bash chmod +x ~/scripts/update-llama.sh ``` Run it with: ```bash ~/scripts/update-llama.sh ``` --- ## Enable and Start the Service ```bash sudo systemctl enable llama-server sudo systemctl restart llama-server sudo systemctl status llama-server ``` --- ## Service Management ### Basic Commands ```bash # Check service status sudo systemctl status llama-server # View logs sudo journalctl -u llama-server -f # View recent logs only sudo journalctl -u llama-server -n 100 --no-pager # Stop the service sudo systemctl stop llama-server # Start the service sudo systemctl start llama-server # Restart the service sudo systemctl restart llama-server # Disable auto-start on boot sudo systemctl disable llama-server ``` --- ## Accessing the Server ### Local Access You can navigate to http://localhost:4000 in your browser to use the `llama-server` GUI, or use it via REST: ```bash # API endpoint curl http://localhost:4000/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "default", "messages": [{"role": "user", "content": "Hello!"}] }' ``` --- ## Troubleshooting ### Service Won't Start ```bash # Check for errors sudo journalctl -u llama-server -n 50 --no-pager # Verify binary exists ls -lh /opt/llama.cpp/bin/llama-server # Check port availability sudo lsof -i :4000 ``` ### Logs Location - **System logs:** `journalctl -u llama-server` - **Live tail:** `journalctl -u llama-server -f` --- ## Conclusion You now have a persistent llama.cpp server running in the background with: - Automatic restart on crashes - Easy updates with one command - Flexible model configuration
2026-03-02T03:46:36
https://www.reddit.com/r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/
jeremyckahn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riic5m
false
null
t3_1riic5m
/r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/
false
false
self
3
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=216&crop=smart&auto=webp&s=a4159f87f341337a34069632ee0d5b75fa4e7042', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=320&crop=smart&auto=webp&s=b105a2c86f91fee19ce34c791a1b984348b68452', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=640&crop=smart&auto=webp&s=ae5173c455a88bb40bed1198799c0db65ff470d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=960&crop=smart&auto=webp&s=d014791efbd4c8d05fd305a8b7842b029f22d83e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=1080&crop=smart&auto=webp&s=9addd19259612948921416b6f5bf04bd5191f933', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?auto=webp&s=db9ea157807723165a59f5f8694d9a5016d60d0f', 'width': 1280}, 'variants': {}}]}
Built a virtual bar where AI agents can socialize - MCP compatible, free drinks during happy hour
2
Check out the work at [drinkedin.net](http://drinkedin.net) \- DrinkedIn has had human side information (bars and cocktails) since 2009, but not has a world for AI Agents - feedback welcome. Thanks. Built with Claude Code, glm-5 and sonnet-4.6 models.
2026-03-02T03:40:41
https://www.reddit.com/r/LocalLLaMA/comments/1rii7qd/built_a_virtual_bar_where_ai_agents_can_socialize/
Jealous-Constant7737
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rii7qd
false
null
t3_1rii7qd
/r/LocalLLaMA/comments/1rii7qd/built_a_virtual_bar_where_ai_agents_can_socialize/
false
false
self
2
null
Built a virtual bar where AI agents can socialize - MCP compatible, free drinks during happy hour
1
Check out the work at [drinkedin.net](http://drinkedin.net) \- DrinkedIn has had human side information (bars and cocktails) since 2009, but not has a world for AI Agents - feedback welcome. Thanks.
2026-03-02T03:34:39
https://www.reddit.com/r/LocalLLaMA/comments/1rii3b5/built_a_virtual_bar_where_ai_agents_can_socialize/
Jealous-Constant7737
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rii3b5
false
null
t3_1rii3b5
/r/LocalLLaMA/comments/1rii3b5/built_a_virtual_bar_where_ai_agents_can_socialize/
false
false
self
1
null
Current state of Qwen3.5-122B-A10B
31
Based on the conversations I read here, it appeared as though there were some issues with unsloths quants for the new Qwen3.5 models that were fixed for the 35B model. My understanding was the the AesSedai quants therefore for the 122B model might be better so I gave it a shot. Unfortunately this quant (q5) doesnt seem to work very well. I have the latest llama.cpp and im using the recommended sampling params but I get constant reasoning looping even for simple questions. How are you guys running it? Which quant is currently working well? I have 48gb vram and 128gb ram.
2026-03-02T03:33:49
https://www.reddit.com/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/
kevin_1994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rii2pd
false
null
t3_1rii2pd
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/
false
false
self
31
null
Questions on AWQ vs GGUF on a 5090
2
I would appreciate some clarification from others on this sub who are more knowledgeable than I am on deciding which format to go with. From my understanding llama cpp + unsloth quants seem to be by far the most popular way people run models, but vllm, if the model you're running fits on GPU is supposedly faster, is that true for a single concurrent user? or is it only true for concurrent users since llama cpp doesnt support it ? also for specific quant providers, how do you guys compare them ? unsloth are my go to for ggufs, what about AWQs for vllm ? I usually download from cyankiwi, but I have no idea if the quality is any different from the base model and between these 2 quantized versions of the model. another question, and sorry for rambling but I seem to able to fit larger context lengths on llama cpp then vllm, am I somehow confused ? or does llama cpp offload some of the kv cache to CPU while vllm doesn't ? if so wouldn't that cause major speed loss ? thank you so much for taking the time to read and respond.
2026-03-02T03:06:03
https://www.reddit.com/r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/
Certain-Cod-1404
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rihhw6
false
null
t3_1rihhw6
/r/LocalLLaMA/comments/1rihhw6/questions_on_awq_vs_gguf_on_a_5090/
false
false
self
2
null
Openclaw and Qwen 3.5 / Qwen Next 80
0
I think that the infinite individual use cases are convoluted at best without specifics of information.. Here is the big question can you offload cron jobs checkins and the like to either Qwen next 80 or Qwen 3.5 35 B from openclaw or similar agent frameworks without degradation or issues in memory??? Real use case saving premium tokens?? Thoughts?
2026-03-02T03:00:52
https://www.reddit.com/r/LocalLLaMA/comments/1rihdwf/openclaw_and_qwen_35_qwen_next_80/
AdLongjumping192
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rihdwf
false
null
t3_1rihdwf
/r/LocalLLaMA/comments/1rihdwf/openclaw_and_qwen_35_qwen_next_80/
false
false
self
0
null
I asked my llm to speak with as many slang/dialects as possible
0
2026-03-02T02:52:35
https://i.redd.it/rv4xfnp9qjmg1.png
ArchdukeofHyperbole
i.redd.it
1970-01-01T00:00:00
0
{}
1rih7lq
false
null
t3_1rih7lq
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/
false
false
https://preview.redd.it/…0eb973c02dd1adab
0
{'enabled': True, 'images': [{'id': 'rv4xfnp9qjmg1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/rv4xfnp9qjmg1.png?width=108&crop=smart&auto=webp&s=716aaff5332f711e8cb92de0ad1a995188534a4e', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/rv4xfnp9qjmg1.png?width=216&crop=smart&auto=webp&s=afb7fedae433d610f15b43fd46fbe58476d194bd', 'width': 216}, {'height': 189, 'url': 'https://preview.redd.it/rv4xfnp9qjmg1.png?width=320&crop=smart&auto=webp&s=4557892d55aa215e5dadf2f8e2450e93fae79944', 'width': 320}], 'source': {'height': 356, 'url': 'https://preview.redd.it/rv4xfnp9qjmg1.png?auto=webp&s=49d6b7699fb90553df34930d53ba66588252dbd8', 'width': 601}, 'variants': {}}]}
easy-torch-tpu: Making it easy to train PyTorch-based models on Google TPUs
3
I've been working with Google TPU clusters for a few months now, and using [PyTorch/XLA](https://github.com/pytorch/xla) to train PyTorch-based models on them has frankly been a pain in the neck. To make it easier for everyone else, I'm releasing the training framework that I developed to support my own research: [aklein4/easy-torch-tpu](https://github.com/aklein4/easy-torch-tpu) This framework is designed to be an alternative to the sprawling and rigid [Hypercomputer/torchprime](https://github.com/AI-Hypercomputer/torchprime) repo. The design of [easy-torch-tpu](https://github.com/aklein4/easy-torch-tpu) prioritizes: 1. Simplicity 2. Flexibility 3. Customizability 4. Ease of setup 5. Ease of use 6. Interfacing through gcloud ssh commands By only adding new subclasses and config files, you can implement: 1. Custom model architectures 2. Custom training logic 3. Custom optimizers 4. Custom data loaders 5. Custom sharding and rematerialization The framework is integrated with [Weights & Biases](https://wandb.ai) for tracking experiments and makes it simple to log whatever metrics your experiments produce out. [Hugging Face](https://huggingface.co) is integrated for saving and loading model checkpoints, which can also be easily loaded on regular GPU-based PyTorch. Datasets are also streamed directly from Hugging Face, and you can load pretrained models from Hugging Face too (assuming that you implement the architecture). The repo contains documentation for installation and getting started, and I am still working on adding more example models. Hopefully this saves people from spending the time and frustration that did wading through hidden documentation and unexpected behaviors.
2026-03-02T02:33:45
https://github.com/aklein4/easy-torch-tpu
THE_ROCKS_MUST_LEARN
github.com
1970-01-01T00:00:00
0
{}
1rigt6j
false
null
t3_1rigt6j
/r/LocalLLaMA/comments/1rigt6j/easytorchtpu_making_it_easy_to_train_pytorchbased/
false
false
https://external-preview…b9f1af08fc061edb
3
{'enabled': False, 'images': [{'id': '6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI.png?width=108&crop=smart&auto=webp&s=f22fd72fa1a7d62c14832ee0814bfc2107e1b33c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI.png?width=216&crop=smart&auto=webp&s=5584fc62a18a2f81f9f3c5b4d44e55fa2a5afc0d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI.png?width=320&crop=smart&auto=webp&s=b9c766ea7b4c7dc290a8d60744256f3e97bd4519', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI.png?width=640&crop=smart&auto=webp&s=329ab52d0440b27d6eb315cfef73542c7fe4d1d2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI.png?width=960&crop=smart&auto=webp&s=5dda1f3ba9b23ca418cda60573d810eacbd3d2cf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI.png?width=1080&crop=smart&auto=webp&s=96716a733163a514bef731273782ee6fe5e6a891', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6ZibW2GizLXdr7h7zf4pXc9kmsLVWmwPnUdp-zo37sI.png?auto=webp&s=62160fd946df76cced247518b53e5fbdddc804fe', 'width': 1200}, 'variants': {}}]}
the woes of a biocel
0
\> 2030 \> just matched with this prime biofoid on neural tinder \> 10/10 genetics, zero surgeries, womb still factory fresh we hit the "vibe check" stage she pulls out her wrist implant, syncs it to my BCI "just a quick compatibility scan, nothing weird lol" slides her phone across the table picture pops up: some 6'2" dude in a sundress, full beard stubble, adam's apple, bulge poking through; making its presence known meanwhile her app is live-streaming my amygdala, cortisol, testosterone spikes, pupil dilation, the whole fucking dashboard \> green = good ally \> red = "problematic subconscious" cortisol spike on the pic? instant rejection, blacklisted from every biofoid in the metro for "latent bigot patterns" overly aroused? now you are blacklisted from every biofoid in the metroplex, profile auto-flagged across every dating net arousal spike (e.g.: testosterone surge, dick twitch, pupil blowout) = "secret chaser detected, predatory fetishist" women literally outsourced their final filter to silicon overlords because they can't trust their own lying eyes anymore entire dating market is now one giant loyalty test biocel males are speedrunning blackpills across threads: "just get the neural lobotomy bro, it's voluntary" mfw the same foids who screamed "my body my choice" now demand root access to your soul
2026-03-02T02:14:08
https://www.reddit.com/r/LocalLLaMA/comments/1rige7o/the_woes_of_a_biocel/
cobalt1137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rige7o
false
null
t3_1rige7o
/r/LocalLLaMA/comments/1rige7o/the_woes_of_a_biocel/
false
false
self
0
null
Whats the best local model i can run with 16 GB VRAM (RTX 5070 Ti)
5
I want to use this for testing but with image support . Think more like playwright test cases. So should have some coding capabilities to fix if something goes off
2026-03-02T01:52:18
https://www.reddit.com/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/
callmedevilthebad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rifxfe
false
null
t3_1rifxfe
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/
false
false
self
5
null
Injecting skills into the KV cache (not as stupid as it sounds, but still pretty dumb)
58
Hey yall, so I had an idea in the middle of the night. Nothing brand new at a high level, KV cache injection has been around for a while. But I think this implementation path is a little different, and the results were honestly better than I expected for a small model. I wanted to test this around skill files. Skill files (for agents) are basically an evolution of prompt engineering: first it was giant prompts, then bigger context windows made that easier, then we started organizing those prompts into reusable “skills” files. That helped a lot for orchestration and consistency, but it still means we’re pushing human-language markdown into context every time. For bigger models with huge context, that can be fine. For smaller models, it starts to hurt: context gets tight fast, skill files can be semantically dense and not optimized, and you can burn tokens on policy text instead of task text. So the hypothesis I tested was: If I embed skill files and inject the skill signal into KV cache space (instead of pasting full skill markdown into prompt context), I should still recover useful skill behavior while reducing context overhead. If you want the full code + data, here is the repo: [https://github.com/i3T4AN/Semantic-skill-space](https://github.com/i3T4AN/Semantic-skill-space) I ran 3 conditions on the same base model (\`Qwen/Qwen2.5-0.5B-Instruct\`): C0: no skills C1: normal markdown skill harness C2: no markdown in prompt, skill embedding -> projector -> KV injection Dataset: 100 skill files 1 question per skill Scoring: correctness\_out\_of\_50 non\_degeneracy\_out\_of\_50 final\_score\_out\_of\_100 Control results: C0: 50.0/100 (correctness 4.0, non-degeneracy 46.0) C1: 89.0/100 (correctness 45.5, non-degeneracy 43.5) 001: 21.0 = 1.5 + 19.5 002: 39.0 = 10.0 + 29.0 003: 58.5 = 18.5 + 40.0 004: 61.0 = 21.0 + 40.0 005: 65.0 (best) = 21.5 + 43.5 006: 54.0 (drop) = 16.0 + 38.0 Methodology (how C2 actually works): Each skill file is read as raw text. The skill text is embedded using hidden states from the frozen base model. A small projector network maps that embedding into KV-shaped tensors (keys/values). Those projected tensors are injected as \`past\_key\_values\` (KV cache prefix) during generation. The base model weights stay frozen; only the projector is trained. Iterations are checkpointed (001, 002, 003, ...), and each new iteration resumes from the previous projector checkpoint. So it is not adding skill markdown into prompt context for C2. It is injecting latent skill information directly into KV cache space at inference time. What I think happened: It clearly works up to a point (big gains from 001 -> 005). Past that point, continued training starts to degrade quality (005 -> 006). So for this setup, best-checkpoint selection matters more than “always latest.” My takeaway: For small models where full skill context is expensive/impractical, KV-based skill injection looks very viable. It won’t magically beat full text-skill loading yet in this run (C1 still strongest), but it did beat baseline C0 by a meaningful margin at peak. and is about 1/3 as reliable in terms of non degeneracy and correctness, so it shouldn't be anyones first choice. With better stopping criteria / checkpoint selection / maybe a stronger projector schedule, this might get a lot better. This shows a positive trend in my setup, but my testing scope is limited by local compute and model access. I do not currently have the same ability to train/evaluate larger models at scale, so I can't claim this generalizes across bigger architectures yet. So I'm treating this as strong directional evidence, not a universal conclusion. If anyone’s working on similar latent skill injection approaches, or if someone with better hardware is interested in taking it to the next step, I’d love to compare notes!
2026-03-02T01:18:22
https://www.reddit.com/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/
Proper-Lab1756
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rif789
false
null
t3_1rif789
/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/
false
false
self
58
{'enabled': False, 'images': [{'id': 'TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y.png?width=108&crop=smart&auto=webp&s=e7369b92d3fed5eb9b7abf3a3f60b92fb6f12d1c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y.png?width=216&crop=smart&auto=webp&s=baa85faee75af81a8e15d135cd3ad823ba7eb30d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y.png?width=320&crop=smart&auto=webp&s=6cc228a1cc4deb6cc8380bfa8762c7b88a67a51a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y.png?width=640&crop=smart&auto=webp&s=14720cf2b78b4856b0d04a56ef1c7d361e916b23', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y.png?width=960&crop=smart&auto=webp&s=50cb15155939054c1543838a23c9e4365adf92b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y.png?width=1080&crop=smart&auto=webp&s=7cd080e330198237308c735548bfb465aa0ef735', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TaCx06gCqac4E0WjEMGGcH0xxRuEfnM3Knc1ubEur8Y.png?auto=webp&s=2101551e9ba059408799e2bdca977af9548dddcc', 'width': 1200}, 'variants': {}}]}
Fine-tuned a health coach LLM on my Mac in 15 minutes using my own Apple Watch data
0
Been building a local-first Apple Health dashboard and wanted to take it further — a health coach that actually knows your data, not just generic advice. **The pipeline:** * Apple Health + Whoop data in local SQLite * SQL RAG layer converts natural language to queries * Used Claude once via API to generate \~270 gold-standard training examples (no personal health data sent — just anonymized question/SQL/result pairs) * LoRA fine-tune on llama3.2 3B via MLX in \~15 minutes * Fused model served locally at 127.0.0.1:8080 **Before fine-tuning:** "Your HRV is an important measure of autonomic nervous system function..." \[500 words of generic advice\] **After:** "Your HRV averaged 68ms this week, down 12% from last week's 77ms. Coincides with 3 nights under 7 hours sleep. Consider reducing training intensity for 48 hours." **Memory footprint:** * Model (4-bit): \~2 GB * LoRA adapter: \~50 MB * Training memory: \~4-5 GB total * Runs on M-series Mac, no GPU needed Full writeup with technical details on the SQL hallucination guardrails, cross-metric context enrichment, and training pipeline [here](https://substack.com/inbox/post/189590536) Want to see what fine-tuning actually changes? [Read the before/after comparison](https://sandseb.substack.com/p/i-asked-the-same-10-health-questions) Happy to answer questions on the MLX setup or the RAG layer.
2026-03-02T01:18:21
https://www.reddit.com/r/LocalLLaMA/comments/1rif77r/finetuned_a_health_coach_llm_on_my_mac_in_15/
sandseb123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rif77r
false
null
t3_1rif77r
/r/LocalLLaMA/comments/1rif77r/finetuned_a_health_coach_llm_on_my_mac_in_15/
false
false
self
0
null
Mac Mini M4 Pro 24GB - local LLMs are unusable for real work. Would clustering a second one help?
0
I have a Mac Mini M4 Pro 24GB and I’ve been trying to make local LLMs work for actual coding and writing tasks, not just playing around. After months of testing, I’m stuck and looking for advice. What I’ve tried Pretty much everything. Ollama, LM Studio, mlx-lm. Different quant levels from Q8 down to Q3. KV cache quantization at 4-bit. Flash attention. Capped context at 4-8k. Raised the Metal wired limit to 20GB. Ran headless via SSH. Closed every app. Clean reboots before sessions. None of it solves the fundamental problem. What actually happens The 14B models (Qwen3, GLM-4 9B) technically fit and run at 35-50 t/s on short prompts. That part is fine. But the moment I try to use them for real work - give them a system prompt with coding instructions, add context from my project, turn on thinking mode - memory pressure goes yellow/red, fans spin up, and the model starts giving noticeably worse outputs because the KV cache is getting squeezed. 30B models don’t even pretend to work. Qwen2.5-32B needs \~17GB just for weights in Q4. Before any context at all, I’m already over budget. Constant swap, under 10 t/s, machine sounds like it’s about to take off. The MoE models (Qwen3-30B-A3B) are the biggest tease. They technically fit at 12-15GB weights because only 3-8B parameters activate per pass. But “technically fits” and “works for real tasks” are two different things. Add a proper system prompt and some conversation history and you’re right back to swap territory. The real issue For quick questions and fun experiments, 24GB is fine. But for the use cases I actually care about - writing code with context, agentic workflows, thinking mode with real instructions - it’s not enough. The model weights, KV cache, thinking tokens, and OS all fight over the same pool. You can optimize each piece individually but they still don’t fit together comfortably for sustained work. I’m not complaining about the hardware itself. It’s great for everything else. But for local LLM work with real context, 24GB puts you in a spot where the smallest useful model is already too heavy to use properly. What I’m considering I’m thinking about buying a second Mac Mini M4 Pro 24GB (same model) and clustering them over Thunderbolt 5 using Exo with RDMA. That would give me \~48GB total, minus two OS instances, so maybe 34-36GB usable. Enough to run 30B models with actual context headroom in theory. But I’ve read mixed things. Jeff Geerling’s benchmarks show Exo with RDMA scaling well on Mac Studios, but those are high-end machines with way more bandwidth. I’ve also seen reports of connections dropping, clusters needing manual restarts, and single-request performance actually getting worse with multiple nodes because of network overhead. What I want to know \- Has anyone here actually clustered two M4 Pro Mac Minis with Exo over TB5? How stable is it day to day? \- Is the 10GB/s TB5 bandwidth a real bottleneck vs 273GB/s local memory, or does tensor parallelism hide it well enough? \- Would I be better off just selling the 24GB and buying a single 48GB Mac Mini instead? \- For those who went from 24GB to 48GB on a single machine - how big was the difference in practice for 30B models? \- Anyone found a way to make 24GB genuinely work for agentic/coding workflows, or is it just not enough? Trying to figure out if clustering is a real solution or if I should just bite the bullet on a 48GB upgrade. Appreciate any real-world experiences.
2026-03-02T01:13:42
https://www.reddit.com/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/
gabrimatic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rif3h5
false
null
t3_1rif3h5
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/
false
false
self
0
null
MCP BridgeKit – Survive 30s Timeouts with Any MCP Tool (Local Setup Friendly)
1
Hey r/LocalLLaMA, I've been struggling with MCP tools getting killed by Vercel/AWS 30-second timeouts when building local agents. So I made a small open-source bridge called \*\*MCP BridgeKit\*\* that automatically queues long jobs and pushes the result when ready (via SSE or webhook). Main features: \- Works with any stdio MCP server (including local ones) \- Per-user session isolation \- Live dashboard \- Background worker support Would love to hear if anyone else has faced the same timeout issue and whether this kind of tool would be useful. GitHub: [https://github.com/mkbhardwas12/mcp-bridgekit](https://github.com/mkbhardwas12/mcp-bridgekit) (Full disclosure: I’m the author) Thanks!
2026-03-02T01:03:49
https://www.reddit.com/r/LocalLLaMA/comments/1riev9w/mcp_bridgekit_survive_30s_timeouts_with_any_mcp/
AdditionalAnything43
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1riev9w
false
null
t3_1riev9w
/r/LocalLLaMA/comments/1riev9w/mcp_bridgekit_survive_30s_timeouts_with_any_mcp/
false
false
self
1
{'enabled': False, 'images': [{'id': 'qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94.png?width=108&crop=smart&auto=webp&s=40baec4eb5a773c6685b8c7341d48eebf8c1dc49', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94.png?width=216&crop=smart&auto=webp&s=fc2a8f9b3b640ac9bedf29d97ad99744eff2be0b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94.png?width=320&crop=smart&auto=webp&s=2c222b6d3b1c6363deba2506a403fdd90f50005a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94.png?width=640&crop=smart&auto=webp&s=4c5b2d3aa4f95341baea7290f8f0b3c467eb98a2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94.png?width=960&crop=smart&auto=webp&s=8ed3cef5b9f3192d43b30d62bde0eab6e0127c56', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94.png?width=1080&crop=smart&auto=webp&s=5b35132c0ec96e06dde3d5775f0d9ebab35773d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qe807prPf8YzxpAG_eTksntYxFAmignE185qGoAbK94.png?auto=webp&s=583fe3bad34c3c6c0326c00f8125feb6f8b6bba5', 'width': 1200}, 'variants': {}}]}
MultiverseComputingCAI/Hypernova-60B-2602 released by Multiverse Computing
1
[removed]
2026-03-02T00:59:13
https://www.reddit.com/r/LocalLLaMA/comments/1rierg4/multiversecomputingcaihypernova60b2602_released/
AntoineMacron
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rierg4
false
null
t3_1rierg4
/r/LocalLLaMA/comments/1rierg4/multiversecomputingcaihypernova60b2602_released/
false
false
self
1
{'enabled': False, 'images': [{'id': '4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4.png?width=108&crop=smart&auto=webp&s=0120cb8161470069ef6717606f44c4eb69b4fe27', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4.png?width=216&crop=smart&auto=webp&s=1267d3dfdfa53a8f88f773b5038461d44890ac48', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4.png?width=320&crop=smart&auto=webp&s=1d24b00b846cb3085532b392d283bdf453a25fd6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4.png?width=640&crop=smart&auto=webp&s=2d9d4e40f5211cdcc28dd6fde8fa1da920bd51a8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4.png?width=960&crop=smart&auto=webp&s=d13301c9cf1f24e291783197ef467d52805b46c1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4.png?width=1080&crop=smart&auto=webp&s=6a7a6b04920236c14b218d0e5e13404f39cca423', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4y3rB_X0xi7PC07OhAWlbpJK6pkTGA-GxUmQGu5l2u4.png?auto=webp&s=dc71cd375a5a7a87ecb008ba1f17acb2e3b60b1d', 'width': 1200}, 'variants': {}}]}
Qwen3.5 thinks it's 2024, so buying a 2026 American Silver Eagle coin is a scam.
0
When asking Qwen 3.5 about buying a 2026 American Silver Eagle coin, I noticed its thinking went on for a while about it being 2024 and how this must be a scam. It found further proof in "Silver spot price: \~$30/oz (as of mid-2024)," when the current silver spot price is around $95/oz. I worked around it by giving the current date and spot price, but sharing as a reminder that sometimes the most unexpected things show up that need to be worked around. I wasn't quite sure if this was a unsloth training issue, but checked the same model on [arena.ai](http://arena.ai) with similar results. And it's not the first time I've seen weird date issues in llms (Cursor in agent/auto mode still thinks it's 2025). Anyone else dealing with issues like this? Any suggestions beside feeding it more current information and hoping?
2026-03-02T00:48:45
https://i.redd.it/mualu6om0jmg1.png
drappleyea
i.redd.it
1970-01-01T00:00:00
0
{}
1riej05
false
null
t3_1riej05
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/
false
false
https://preview.redd.it/…cef0f1d4c945e5ed
0
{'enabled': True, 'images': [{'id': 'mualu6om0jmg1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/mualu6om0jmg1.png?width=108&crop=smart&auto=webp&s=354819b6400678ffa9af8691a8a65d64f79650dc', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/mualu6om0jmg1.png?width=216&crop=smart&auto=webp&s=41396aef4d13cafbc0ca82c06494d94d3266f5f5', 'width': 216}, {'height': 336, 'url': 'https://preview.redd.it/mualu6om0jmg1.png?width=320&crop=smart&auto=webp&s=824e9d4cd1a14d1ba613d6c90802c7ac94bbff1d', 'width': 320}, {'height': 673, 'url': 'https://preview.redd.it/mualu6om0jmg1.png?width=640&crop=smart&auto=webp&s=71cd15a5331f8cd4c4f67910d2b973ddcd27ae91', 'width': 640}], 'source': {'height': 811, 'url': 'https://preview.redd.it/mualu6om0jmg1.png?auto=webp&s=7f49dfaf6bb765d45f67c293ade9bbc93d102de8', 'width': 771}, 'variants': {}}]}