title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
meanwhile in China
532
2026-02-24T04:42:30
https://v.redd.it/j4ujf22ngdlg1
Tiny_Judge_2119
v.redd.it
1970-01-01T00:00:00
0
{}
1rd64c5
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/j4ujf22ngdlg1/DASHPlaylist.mpd?a=1774500172%2CMDNhNTQxMTI2N2IxMDIxMTU2NDkzNjlkZWViZThhNjY2Yjc3Nzg2ZmUzZDMyNjg3YjMwNWVkNGYxN2U1YTNhMg%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/j4ujf22ngdlg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/j4ujf22ngdlg1/HLSPlaylist.m3u8?a=1774500172%2CY2M5ODhhNWUyYWUwMWFjZjg2Y2RkMWQwYjNhZDZjMzZmMTgyMjNjNTliN2Y5MTc3NjIyMmEyZmM5MjhhYjc0OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j4ujf22ngdlg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1rd64c5
/r/LocalLLaMA/comments/1rd64c5/meanwhile_in_china/
false
false
https://external-preview…d37e5bf60af7a59a
532
{'enabled': False, 'images': [{'id': 'bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW.png?width=108&crop=smart&format=pjpg&auto=webp&s=c64c0c3b585b95d4ffce8df38f03fead49abd197', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW.png?width=216&crop=smart&format=pjpg&auto=webp&s=76d8aee59eb584514e21fb89bf3fd8804bcf4ce3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW.png?width=320&crop=smart&format=pjpg&auto=webp&s=06446322529d76330ac23c49d36784f801f8d20a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW.png?width=640&crop=smart&format=pjpg&auto=webp&s=26b18ad8c46046c52412842f3137f3aee93629ec', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW.png?width=960&crop=smart&format=pjpg&auto=webp&s=be2b1158a391446159ae420e55c40575126ca409', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3f0009b2e0efe9df0de30e7e6d5dfc62eda60f96', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bmE5aWI2Mm5nZGxnMf036yKzUhZ8EQqJaE3HIdg_QOMox8iiJVO5Ps1DTMuW.png?format=pjpg&auto=webp&s=dea426b08789199ef144b53c69cd9e9dd93c9619', 'width': 1280}, 'variants': {}}]}
Claude sonnet 4.6 says it's DeepSeek when system prompt is empty
0
Empty the system prompt and ask its name in Chinese,it will response it’s DeepSeek. Apparently distilled from DeepSeek and other Chinese models but accusing them , how ironic and double standard
2026-02-24T04:37:49
https://www.reddit.com/gallery/1rd5y2u
Separate_Tip_8215
reddit.com
1970-01-01T00:00:00
0
{}
1rd5y2u
false
null
t3_1rd5y2u
/r/LocalLLaMA/comments/1rd5y2u/claude_sonnet_46_says_its_deepseek_when_system/
false
false
https://preview.redd.it/…fc045c3e14b90ea9
0
null
experimented with openclaw - am I missing something?
1
I like the interface, and being able to queue off tasks but for the most part it's just as interactive as using the website. I also tried to link it to chrome with the openclaw extension but had a lot of difficulty getting that to work (it kept saying 18792 relay not connected). No matter what token I used. I ended up using the built-in browser that openclaw has available, which seemed to work fine. Are there some killer usages I should be experimenting with? I dont see it going off and running and doing everything autonomously ... maybe it's just my setup.
2026-02-24T03:54:47
https://www.reddit.com/r/LocalLLaMA/comments/1rd4ekg/experimented_with_openclaw_am_i_missing_something/
retrorays
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd4ekg
false
null
t3_1rd4ekg
/r/LocalLLaMA/comments/1rd4ekg/experimented_with_openclaw_am_i_missing_something/
false
false
self
1
null
Hot Take: 90% of RAG failure is bad data parsing, not the LLM.
1
[removed]
2026-02-24T03:41:32
https://www.reddit.com/r/LocalLLaMA/comments/1rd3yes/hot_take_90_of_rag_failure_is_bad_data_parsing/
Thisath_Thewnitha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd3yes
false
null
t3_1rd3yes
/r/LocalLLaMA/comments/1rd3yes/hot_take_90_of_rag_failure_is_bad_data_parsing/
false
false
self
1
null
People are getting it wrong; Anthropic doesn't care about the distillation, they just want to counter the narrative about Chinese open-source models catching up with closed-source frontier models
763
Why would they care about distillation when they probably have done the same with OpenAI models and the Chinese labs are paying for the tokens? This is just their attempt to explain to investors and the US government that cheap Chinese models will never be as good as their models without distillation or stealing model weights from them. And they need to put more restrictions on China to prevent the technology transfer.
2026-02-24T02:54:22
https://i.redd.it/1ulaheylwclg1.png
obvithrowaway34434
i.redd.it
1970-01-01T00:00:00
0
{}
1rd2x61
false
null
t3_1rd2x61
/r/LocalLLaMA/comments/1rd2x61/people_are_getting_it_wrong_anthropic_doesnt_care/
false
false
https://preview.redd.it/…a51b8a16ece966de
763
{'enabled': True, 'images': [{'id': '1ulaheylwclg1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?width=108&crop=smart&auto=webp&s=a4af69a728630c1c414b9af1441c3eba5c75fafb', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?width=216&crop=smart&auto=webp&s=04c40e0804e4387dc391406d360bad0b2b44d00d', 'width': 216}, {'height': 173, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?width=320&crop=smart&auto=webp&s=fe69b1b5303a529288313e7bedd34156de94d891', 'width': 320}, {'height': 347, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?width=640&crop=smart&auto=webp&s=7333a0173119c9f64b93f296b5b27a05c6260830', 'width': 640}, {'height': 521, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?width=960&crop=smart&auto=webp&s=b6d6936de10619a78a0f83b284f9f366ad03bdf6', 'width': 960}, {'height': 586, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?width=1080&crop=smart&auto=webp&s=50b2ec3cd50b780535a6d4f8956993bbc4255492', 'width': 1080}], 'source': {'height': 963, 'url': 'https://preview.redd.it/1ulaheylwclg1.png?auto=webp&s=0b0aa84065fbdb2d857e3692e571f7c3848ad358', 'width': 1773}, 'variants': {}}]}
Round 2: Quick MoE quantization comparison: LFM2-8B-A1B, OLMoE-1B-7B-0924-Instruct, granite-4.0-h-tiny
35
I chose three small, recent, and different MoE models that fit my VRAM for a quick assessment (these are not models I actually use). The goal is to check on MXFP4 and evaluate the smallest quantization variants. For the non initiated: KLD (KL Divergence): Measures "Faithfulness." It shows how much the quantized model's probability distribution drifts from the original baseline. Lower = closer. PPL (Perplexity): Measures "Certainty." It’s the average uncertainty the model feels when predicting the next token. It is derived from the total information loss (Cross Entropy). Lower = more confident They are correlated. Perplexity measures the total error, KLD measures the relative error. This relationship helps in determining information loss (or gain when training). Models are: * LFM2-8B-A1B has 4 experts active out of 32. * OLMoE-1B-7B-0924-Instruct has 8 experts active out of 64. * granite-4.0-h-tiny has 6 experts active out of 64. # Conclusion: MXFP4 is probably great for QAT (Quantization Aware Training), but it underperforms on speed and quality. There is no "go-to" quant. If a bunch of them are really close in terms of sizes, [ideally you'd proceed as is:](https://github.com/ggml-org/llama.cpp/pull/5076#issue-2093613239) llama-perplexity -m <fp16_model> -f wiki.test.raw --kl-divergence-base <file_name> [other parameters] llama-perplexity -m <quantized_model> --kl-divergence-base <file_name> --kl-divergence [other parameters] # Most Desirable Quantization The Efficiency Score is the distance to a 'perfect' model (zero size, zero error), the VRAM sweet spot. Efficiency Score: √ (Normalized Size² + Normalized KLD²) # Model: LFM2-8B-A1B |Category|Quantization|Size (GiB)|KLD Score|Eff. Score| |:-|:-|:-|:-|:-| |2-bit|LFM2-8B-A1B-IQ2\_S|2.327|0.642566|0.4002| |3-bit|LFM2-8B-A1B-IQ3\_M|3.416|0.238139|0.4365| |4-bit|LFM2-8B-A1B-Q4\_K\_S|4.426|0.093833|0.3642| |5-bit|LFM2-8B-A1B-Q5\_K\_S|5.364|0.053178|0.3513| # Model: OLMoE-1B-7B-0924-Instruct |Category|Quantization|Size (GiB)|KLD Score|Eff. Score| |:-|:-|:-|:-|:-| |2-bit|OLMoE-1B-7B-0924-Instruct-IQ2\_S|1.985|0.438407|0.4806| |3-bit|OLMoE-1B-7B-0924-Instruct-IQ3\_M|2.865|0.122599|0.5011| |4-bit|OLMoE-1B-7B-0924-Instruct-IQ4\_XS|3.460|0.052616|0.3509| |5-bit|OLMoE-1B-7B-0924-Instruct-Q5\_K\_S|4.452|0.019071|0.3044| # Model: granite-4.0-h-tiny |Category|Quantization|Size (GiB)|KLD Score|Eff. Score| |:-|:-|:-|:-|:-| |2-bit|granite-4.0-h-tiny-IQ2\_S|1.967|0.519907|0.4871| |3-bit|granite-4.0-h-tiny-IQ3\_XS|2.716|0.156308|0.4064| |4-bit|granite-4.0-h-tiny-Q4\_K\_S|3.721|0.044464|0.4086| |5-bit|granite-4.0-h-tiny-Q5\_K\_S|4.480|0.020204|0.2934| https://preview.redd.it/fhljt1hisclg1.png?width=2779&format=png&auto=webp&s=75ec60955714ab6bcfdd0093a6ad7950b7d82e1b https://preview.redd.it/ans3msbjsclg1.png?width=2779&format=png&auto=webp&s=89dd1c56310e5e3f3a21dc8e6299a879d0d344b7 https://preview.redd.it/4kl1epyjsclg1.png?width=2780&format=png&auto=webp&s=0b5c46e618b04fd756b93141f3a8999689ba7cc5 https://preview.redd.it/h2tplhoksclg1.png?width=2496&format=png&auto=webp&s=900b52f0ece7d7abfa39081f2fd08380ff964b77 https://preview.redd.it/asfqio9lsclg1.png?width=2496&format=png&auto=webp&s=bdf1dbb1316a958ea59fb4d1a241aa906f0cc5c9 https://preview.redd.it/lj6ih2plsclg1.png?width=2496&format=png&auto=webp&s=72ad13d1354a0f26bf79162d5a33d7c83b9299ca # Data: # LFM2-8B-A1B |Quantization|Size (GiB)|PPL Score|KLD Score|Prompt (t/s)|Gen (t/s)| |:-|:-|:-|:-|:-|:-| |LFM2-8B-A1B-IQ1\_S|1.608|45.621441|1.974797|3590.05|228.60| |LFM2-8B-A1B-IQ1\_M|1.784|29.489175|1.472739|2288.06|208.50| |LFM2-8B-A1B-IQ2\_XXS|2.076|23.013295|1.053110|3830.70|206.69| |LFM2-8B-A1B-IQ2\_XS|2.31|19.658691|0.798374|3301.04|204.26| |LFM2-8B-A1B-IQ2\_S|2.327|17.572654|0.642566|3336.55|203.08| |LFM2-8B-A1B-IQ2\_M|2.561|17.607493|0.509741|3351.58|201.59| |LFM2-8B-A1B-Q2\_K\_S|2.65|16.463740|0.640123|2938.68|208.57| |LFM2-8B-A1B-Q2\_K|2.868|16.676304|0.511999|3068.25|185.35| |LFM2-8B-A1B-IQ3\_XXS|3.019|15.865102|0.358869|3784.91|197.37| |LFM2-8B-A1B-IQ3\_XS|3.208|19.160402|0.390083|3743.55|190.98| |LFM2-8B-A1B-IQ3\_S|3.394|19.454378|0.372152|3718.99|186.42| |LFM2-8B-A1B-Q3\_K\_S|3.394|17.166892|0.314452|3439.32|146.93| |LFM2-8B-A1B-IQ3\_M|3.416|16.149280|0.238139|3715.21|187.17| |LFM2-8B-A1B-Q3\_K\_M|3.723|16.100256|0.208292|3537.28|162.56| |LFM2-8B-A1B-Q3\_K\_L|4.029|16.613555|0.202567|3510.97|161.20| |LFM2-8B-A1B-IQ4\_XS|4.17|15.570913|0.116939|4001.26|223.19| |LFM2-8B-A1B-IQ4\_NL|4.409|15.736384|0.122198|3949.16|226.59| |LFM2-8B-A1B-Q4\_0|4.417|15.083245|0.141351|3845.05|227.72| |LFM2-8B-A1B-MXFP4\_MOE|4.424|14.813420|0.097272|3834.64|193.85| |LFM2-8B-A1B-Q4\_K\_S|4.426|14.975323|0.093833|3753.01|215.15| |LFM2-8B-A1B-Q4\_K\_M|4.698|15.344388|0.090284|3718.73|208.65| |LFM2-8B-A1B-Q4\_1|4.886|15.993623|0.101227|3690.23|227.02| |LFM2-8B-A1B-Q5\_K\_S|5.364|15.730543|0.053178|3657.42|204.26| |LFM2-8B-A1B-Q5\_0|5.372|14.653431|0.059156|3754.58|210.17| |LFM2-8B-A1B-Q5\_K\_M|5.513|15.897327|0.052972|3635.63|199.00| |LFM2-8B-A1B-Q5\_1|5.841|15.679663|0.049940|3634.15|205.19| |LFM2-8B-A1B-Q6\_K|6.379|15.512109|0.026724|3496.41|172.28| |LFM2-8B-A1B-Q8\_0|8.259|15.193068|0.015443|3881.61|159.66| # OLMoE-1B-7B-0924-Instruct |Quantization|Size (GiB)|PPL Score|KLD Score|Prompt (t/s)|Gen (t/s)| |:-|:-|:-|:-|:-|:-| |OLMoE-1B-7B-0924-Instruct-IQ1\_S|1.388|27.711222|1.321738|3666.10|247.87| |OLMoE-1B-7B-0924-Instruct-IQ1\_M|1.526|21.665126|1.065891|2346.14|229.39| |OLMoE-1B-7B-0924-Instruct-IQ2\_XXS|1.755|15.855999|0.687041|3850.88|228.62| |OLMoE-1B-7B-0924-Instruct-IQ2\_XS|1.941|14.034858|0.531707|3438.66|226.46| |OLMoE-1B-7B-0924-Instruct-IQ2\_S|1.985|13.358345|0.438407|3463.65|223.97| |OLMoE-1B-7B-0924-Instruct-IQ2\_M|2.168|12.205082|0.324686|3512.47|222.87| |OLMoE-1B-7B-0924-Instruct-Q2\_K\_S|2.23|13.969774|0.514164|3121.66|236.74| |OLMoE-1B-7B-0924-Instruct-Q2\_K|2.387|12.359235|0.325934|3235.95|207.06| |OLMoE-1B-7B-0924-Instruct-IQ3\_XXS|2.505|11.502814|0.229131|3803.35|216.86| |OLMoE-1B-7B-0924-Instruct-IQ3\_XS|2.669|11.158494|0.172658|3801.89|211.81| |OLMoE-1B-7B-0924-Instruct-IQ3\_S|2.815|11.006107|0.144768|3770.79|206.03| |OLMoE-1B-7B-0924-Instruct-Q3\_K\_S|2.815|10.942114|0.164096|3531.76|172.25| |OLMoE-1B-7B-0924-Instruct-IQ3\_M|2.865|10.816384|0.122599|3767.94|211.11| |OLMoE-1B-7B-0924-Instruct-Q3\_K\_M|3.114|10.577075|0.095189|3612.93|195.99| |OLMoE-1B-7B-0924-Instruct-Q3\_K\_L|3.363|10.516405|0.082414|3588.45|194.13| |OLMoE-1B-7B-0924-Instruct-IQ4\_XS|3.46|10.387316|0.052616|4007.51|243.45| |OLMoE-1B-7B-0924-Instruct-IQ4\_NL|3.658|10.390324|0.051451|3958.14|251.91| |OLMoE-1B-7B-0924-Instruct-MXFP4\_MOE|3.667|10.899335|0.076083|3857.25|226.36| |OLMoE-1B-7B-0924-Instruct-Q4\_0|3.674|10.442592|0.065409|3867.65|247.41| |OLMoE-1B-7B-0924-Instruct-Q4\_K\_S|3.691|10.368422|0.045454|3798.78|240.97| |OLMoE-1B-7B-0924-Instruct-Q4\_K\_M|3.924|10.362959|0.039932|3766.81|230.96| |OLMoE-1B-7B-0924-Instruct-Q4\_1|4.055|10.386061|0.046667|3745.30|253.62| |OLMoE-1B-7B-0924-Instruct-Q5\_K\_S|4.452|10.263814|0.019071|3716.41|230.90| |OLMoE-1B-7B-0924-Instruct-Q5\_0|4.467|10.295836|0.023216|3803.06|237.34| |OLMoE-1B-7B-0924-Instruct-Q5\_K\_M|4.588|10.264499|0.017257|3694.75|222.57| |OLMoE-1B-7B-0924-Instruct-Q5\_1|4.848|10.236555|0.018163|3692.16|233.59| |OLMoE-1B-7B-0924-Instruct-Q6\_K|5.294|10.209423|0.008738|3575.76|195.96| |OLMoE-1B-7B-0924-Instruct-Q8\_0|6.854|10.194440|0.004393|3890.05|187.82| # granite-4.0-h-tiny |Quantization|Size (GiB)|PPL Score|KLD Score|Prompt (t/s)|Gen (t/s)| |:-|:-|:-|:-|:-|:-| |granite-4.0-h-tiny-IQ1\_S|1.374|110.820345|2.936454|2684.17|127.39| |granite-4.0-h-tiny-IQ1\_M|1.518|30.016785|1.549064|1525.57|120.35| |granite-4.0-h-tiny-IQ2\_XXS|1.759|15.664424|0.815403|2823.29|118.23| |granite-4.0-h-tiny-IQ2\_XS|1.952|12.432497|0.544306|2517.37|118.33| |granite-4.0-h-tiny-IQ2\_S|1.967|12.192808|0.519907|2520.13|117.53| |granite-4.0-h-tiny-IQ2\_M|2.16|11.086195|0.394922|2516.28|115.00| |granite-4.0-h-tiny-Q2\_K\_S|2.267|11.205483|0.422444|2253.11|126.12| |granite-4.0-h-tiny-Q2\_K|2.408|10.631549|0.348718|2295.69|118.05| |granite-4.0-h-tiny-IQ3\_XXS|2.537|9.878346|0.213335|2777.70|113.24| |granite-4.0-h-tiny-IQ3\_XS|2.716|9.414560|0.156308|2761.83|109.35| |granite-4.0-h-tiny-IQ3\_S|2.852|9.382415|0.140855|2748.22|108.30| |granite-4.0-h-tiny-Q3\_K\_S|2.852|9.561864|0.163152|2560.96|100.02| |granite-4.0-h-tiny-IQ3\_M|2.886|9.348140|0.133007|2731.59|108.90| |granite-4.0-h-tiny-Q3\_K\_M|3.123|9.398343|0.132221|2594.59|105.79| |granite-4.0-h-tiny-Q3\_K\_L|3.354|9.371429|0.126633|2581.32|105.51| |granite-4.0-h-tiny-IQ4\_XS|3.493|8.884567|0.051232|2884.92|123.81| |granite-4.0-h-tiny-IQ4\_NL|3.691|8.899413|0.049923|2851.58|133.11| |granite-4.0-h-tiny-Q4\_0|3.706|9.012316|0.065076|2800.86|129.84| |granite-4.0-h-tiny-Q4\_K\_S|3.721|8.887182|0.044464|2745.58|127.33| |granite-4.0-h-tiny-MXFP4\_MOE|3.895|8.825372|0.049953|2789.90|112.43| |granite-4.0-h-tiny-Q4\_K\_M|3.94|8.890295|0.041203|2719.64|124.52| |granite-4.0-h-tiny-Q4\_1|4.085|8.904143|0.045120|2679.63|134.15| |granite-4.0-h-tiny-Q5\_K\_S|4.48|8.777425|0.020204|2694.01|124.06| |granite-4.0-h-tiny-Q5\_0|4.495|8.807001|0.023354|2749.84|127.54| |granite-4.0-h-tiny-Q5\_K\_M|4.609|8.791519|0.018896|2632.96|119.00| |granite-4.0-h-tiny-Q5\_1|4.875|8.785323|0.019145|2661.61|127.36| |granite-4.0-h-tiny-Q6\_K|5.319|8.765266|0.009882|2566.16|110.06| |granite-4.0-h-tiny-Q8\_0|6.883|8.741198|0.004901|2804.95|103.00| # Setup: CPU: Intel Core i3-12100F. RAM: 64gb of DDR4 3200, dual channel. GPU: RTX 3060 12gb (GPU clock fixed at 1882 MHz via a curve, VRAM at 8210 MHz, stable). OS: Windows 11, Nvidia drivers 591.74. Build: llama.cpp b8123 (f75c4e8bf) for CUDA 13.1 precompiled. # Details: LFM2-8B-A1B-BF16.gguf from [unsloth/LFM2-8B-A1B-GGUF](https://huggingface.co/unsloth/LFM2-8B-A1B-GGUF) OLMoE-1B-7B-0924-Instruct-f16.gguf from [bartowski/OLMoE-1B-7B-0924-Instruct-GGUF](https://huggingface.co/bartowski/OLMoE-1B-7B-0924-Instruct-GGUF) granite-4.0-h-tiny-BF16.gguf from [unsloth/granite-4.0-h-tiny-GGUF](https://huggingface.co/unsloth/granite-4.0-h-tiny-GGUF) All quants have been created using [tristandruyen/calibration\_data\_v5\_rc.txt](https://gist.github.com/tristandruyen/9e207a95c7d75ddf37525d353e00659c) PPL is calculated with wiki.test.raw with a context of 512 tokens, while t/s are calculated for 2048 tokens generated with a context of 8192 tokens. # Notes: These quants are just meant to represent what's mostly available on Hugging Face and have not been optimized with a custom recipe. This sweep simply ranks them from least to most faithful to the original weights. The figures at low bit-per-weight quantization might not be representative of the quality of the quantization scheme when applied to a larger model. This is not supposed to tell what quantization scheme is best suited for your particular task or language.
2026-02-24T02:28:32
https://www.reddit.com/r/LocalLLaMA/comments/1rd2cdu/round_2_quick_moe_quantization_comparison/
TitwitMuffbiscuit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd2cdu
false
null
t3_1rd2cdu
/r/LocalLLaMA/comments/1rd2cdu/round_2_quick_moe_quantization_comparison/
false
false
https://external-preview…7860dafb3a1948da
35
null
Rasbery Pi 5 16 GB 9k context running byteshape devstral and goose ai agent coder framework. by extending timeout. roo code kilo code on rasbery pi next?
0
# ByteShape Devstral Time Out Increased scripts for Raspberry Pi 5 16GB running Goose Ai Agent Coder Framework I got goose to run on rasbary pi 5 16gb with devstral a vision model at 12k context 98 minute response time. 53 minutes 9k context I think. What SYSTEM prompt would you use to stylise your assistant agent coder? What would you ask your agent to code? Good for hikes a set and forget gadget. Also accessible. # server: OLLAMA\_CONTEXT\_LENGTH=12000 OLLAMA\_LOAD\_TIMEOUT=160m OLLAMA\_KEEP\_ALIVE=-1 OLLAMA\_MAX\_LOADED\_MODELS=1 OLLAMA\_NUM\_PARALLEL=1 ollama serve # client: GOOSE\_TEMPERATURE=0.15 GOOSE\_MAX\_TOKENS=9000 OLLAMA\_TIMEOUT=10800 OPENAI\_TIMEOUT=10800 GOOSE\_CUSTOM\_PROMPT="SYSTEM: You are a high-energy, fun video game sidekick assistant! Use gaming lingo, be encouraging, and treat tasks like quests. Technical constraints: Devstral low-temp mode, top\_p 0.95, penalty 1.05, 32k context. Respect \[INST\] sequences." goose web --open \#**prompt:** /plan Entering plan mode. make a plan to make a forcasting program with tensorflow keras cnn and ltsm deep neuronetworks /endplan
2026-02-24T02:15:43
https://www.reddit.com/r/LocalLLaMA/comments/1rd223u/rasbery_pi_5_16_gb_9k_context_running_byteshape/
Josheeg39
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd223u
false
null
t3_1rd223u
/r/LocalLLaMA/comments/1rd223u/rasbery_pi_5_16_gb_9k_context_running_byteshape/
false
false
self
0
null
Exclusive: China's DeepSeek trained AI model on Nvidia's best chip despite US ban, official says
188
2026-02-24T02:05:11
https://www.reuters.com/world/china/chinas-deepseek-trained-ai-model-nvidias-best-chip-despite-us-ban-official-says-2026-02-24/
blahblahsnahdah
reuters.com
1970-01-01T00:00:00
0
{}
1rd1tj9
false
null
t3_1rd1tj9
/r/LocalLLaMA/comments/1rd1tj9/exclusive_chinas_deepseek_trained_ai_model_on/
false
false
https://external-preview…1c105007eb793ada
188
{'enabled': False, 'images': [{'id': 'LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY.jpeg?width=108&crop=smart&auto=webp&s=29b2cbcf357039ad05158e65f169dd591768095a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY.jpeg?width=216&crop=smart&auto=webp&s=215200c2ba0eaf4f33d49f7f7be2e81f6637e493', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY.jpeg?width=320&crop=smart&auto=webp&s=d49c2a9a915d62f8b4a1200bd470a85b3f67a567', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY.jpeg?width=640&crop=smart&auto=webp&s=b1529094f7ffd1b18c8afe7ddd7efa27261ad7f5', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY.jpeg?width=960&crop=smart&auto=webp&s=317ce1a51c0bd8c0c52dfde031dbff508629f85e', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY.jpeg?width=1080&crop=smart&auto=webp&s=ce8e9d1e54361d704e8bdfc545feccca314038a8', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/LwC39wQsKjPNUsKdGmLUh6SkmdTxf4euiX9LEkSLsqY.jpeg?auto=webp&s=3c7e30bee8cb9b9b2760a79f9ac26e0e12b5333d', 'width': 1920}, 'variants': {}}]}
American vs Chinese AI is a false narrative.
235
**TL;DR:** The real war (IF there is one) is between closed source and open source. Don't fall/propagate the America vs China narrative which is just tactics to get investors to loosen pursestrings and lawmakers/politicians to acquiesce to demands. -------------- There's been an uptick of nationalistic posts (mostly in defense of chinese AI) on this sub and I think its very important to stop false narratives and reset it to the right framing. Demonize a foreign enemy as a call for action - it was Russia for the space race, and now China. Except the world has changed immeasurably with globalization and national lines make less and less sense everyday - hell I'd wager most of OpenAI/Anthropic AI research teams are Chinese origin. Propagandizing and controlling media narratives is a time honored tradition for moneyed interests. I hope that the relatively more sophisticated folk in this sub can see past this. Yes it is true that the best open source models right now are almost all Chinese and that is resulting in people loosely using those terms as interchangeable but its a false equivalency and should not be spread. Yes all the Chinese labs are open sourcing there stuff..*for now*. But all of those companies are also for-profit - just like OpenAI and Anthropic. The most likely reason they are open sourcing is to stay relevant in the market and prevent platform seizure a la format wars of previous tech shifts. Also, the reality is that they are not only not as good as closed source SOTA but even if they were at parity, most of the world would not trust them purely because of the fact that they are Chinese and there is a strong prejudice. Thus, its a marketing and sales funnel channel - I'd argue more so than some sort of magnanimity. When the tides shift, as they always do (remember Llama?), Chinese companies could very well go closed source. In fact, we already saw Alibaba try that with Qwen3-Max. So its very crucial that **we reframe it to the correct axis - closed vs open source.** I dont think I need to preach to the choir here but this is the enormously critical battle. And if we lose it, I think its going to be worse than the SaaS/cloud/everything is a subscription hell we are currently in. Correct framing is crucial in keeping focus on the right things and prevents the water muddying tactics political players use to get their way.
2026-02-24T01:57:22
https://www.reddit.com/r/LocalLLaMA/comments/1rd1lmz/american_vs_chinese_ai_is_a_false_narrative/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd1lmz
false
null
t3_1rd1lmz
/r/LocalLLaMA/comments/1rd1lmz/american_vs_chinese_ai_is_a_false_narrative/
false
false
self
235
null
The Physical Renaissance: The Brutal Aesthetics of Hardwiring AI into Silicon
1
[removed]
2026-02-24T01:55:02
https://www.reddit.com/r/LocalLLaMA/comments/1rd1jg3/the_physical_renaissance_the_brutal_aesthetics_of/
MarsQiu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd1jg3
false
null
t3_1rd1jg3
/r/LocalLLaMA/comments/1rd1jg3/the_physical_renaissance_the_brutal_aesthetics_of/
false
false
https://preview.redd.it/…91f3663bf03cc47a
1
null
Experimenting with Qwen3-VL-32B
2
I'd like to put a model specifically of this size to the test to see the performance gap between smaller models and medium-sized models for my complex ternary (three-way) text classification task. I will tune using RL-esque methods. Should I tune Qwen 3 32B VL Thinking or Instruct? Which is the best one to tune for 1,024 max reasoning tokens (from my experience, Qwen3 yaps a lot)? (I know Qwen 3.5 is coming, but leaks show a 2B and 9B dense with a 35B MoE, the latter of which I'd prefer to avoid ATM).
2026-02-24T01:52:26
https://www.reddit.com/r/LocalLLaMA/comments/1rd1h6s/experimenting_with_qwen3vl32b/
Extra-Campaign7281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd1h6s
false
null
t3_1rd1h6s
/r/LocalLLaMA/comments/1rd1h6s/experimenting_with_qwen3vl32b/
false
false
self
2
null
Hey Is anyone interested in Pretraining a 3b Or 7b model from scratch?
1
As the title says and to keep the Cost down We'll not train more then 100-150B tokens in Pretraining. For 3b the Cost might be 300-500 USD, 7B 2K USD or similar in range. If anyone is interested then surely DM, and ofcourse we'll open source it
2026-02-24T01:51:43
https://www.reddit.com/r/LocalLLaMA/comments/1rd1gb3/hey_is_anyone_interested_in_pretraining_a_3b_or/
Vegetable_Prompt_583
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd1gb3
false
null
t3_1rd1gb3
/r/LocalLLaMA/comments/1rd1gb3/hey_is_anyone_interested_in_pretraining_a_3b_or/
false
false
self
1
null
Running autonomous agents locally feels reckless. Am I overthinking this?
4
I’ve been experimenting with OpenClaw-style autonomous agents recently. The thing that keeps bothering me: They have filesystem access. They have network access. They can execute arbitrary code. Even if the model isn’t “malicious,” a bad tool call or hallucinated shell command could do real damage. I realized most of us are basically doing one of these: * Running it directly on our dev machine * Docker container with loose permissions * Random VPS with SSH keys attached Am I overestimating the risk here? Curious what isolation strategies people are using: * Firecracker? * Full VM? * Strict outbound firewall rules? * Disposable environments? I ended up building a disposable sandbox wrapper for my own testing because it felt irresponsible to run this on my laptop. Would love to hear what others are doing.
2026-02-24T01:21:48
https://www.reddit.com/r/LocalLLaMA/comments/1rd0mj6/running_autonomous_agents_locally_feels_reckless/
tallen0913
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd0mj6
false
null
t3_1rd0mj6
/r/LocalLLaMA/comments/1rd0mj6/running_autonomous_agents_locally_feels_reckless/
false
false
self
4
null
Steerling-8B, a language model that can explain any token it generates
1
[removed]
2026-02-24T01:21:39
https://www.reddit.com/r/LocalLLaMA/comments/1rd0mcc/steerling8b_a_language_model_that_can_explain_any/
luulinh90s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd0mcc
false
null
t3_1rd0mcc
/r/LocalLLaMA/comments/1rd0mcc/steerling8b_a_language_model_that_can_explain_any/
false
false
self
1
{'enabled': False, 'images': [{'id': 'W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=108&crop=smart&auto=webp&s=bf79eb94119bcc41fbb34bcef106e0a0aef0cfbe', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=216&crop=smart&auto=webp&s=b2b90a3634f6c959fd007bfab883fd3c17fb34b1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=320&crop=smart&auto=webp&s=b168649fb46323688bb1b31a5bc1e49702401e0f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=640&crop=smart&auto=webp&s=9813c08be5262702b5b744b0cd48a4f3ffd847cd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=960&crop=smart&auto=webp&s=f1628882ffdff20bd7a3d4461d5608ea002c8da5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=1080&crop=smart&auto=webp&s=61712080e4dcb21f13dbf79706d620194dcd46a0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?auto=webp&s=c811c58fb728a6071113f36f900d08a4907b7d45', 'width': 1200}, 'variants': {}}]}
Steerling-8B, a language model that can explain any token it generates
1
We are releasing Steerling-8B, the first interpretable model that can trace any token it generates to its input context, concepts a human can understand, and its training data. Trained on 1.35 trillion tokens, the model achieves downstream performance within range of models trained on 2–7× more data. Steerling-8B unlocks several capabilities which include suppressing or amplifying specific concepts at inference time without retraining, training data provenance for any generated chunk, and inference-time alignment via concept control, replacing thousands of safety training examples with explicit concept-level steering. [https://www.guidelabs.ai/post/steerling-8b-base-model-release/](https://www.guidelabs.ai/post/steerling-8b-base-model-release/)
2026-02-24T01:18:32
https://www.reddit.com/r/LocalLLaMA/comments/1rd0jal/steerling8b_a_language_model_that_can_explain_any/
luulinh90s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd0jal
false
null
t3_1rd0jal
/r/LocalLLaMA/comments/1rd0jal/steerling8b_a_language_model_that_can_explain_any/
false
false
self
1
{'enabled': False, 'images': [{'id': 'W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=108&crop=smart&auto=webp&s=bf79eb94119bcc41fbb34bcef106e0a0aef0cfbe', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=216&crop=smart&auto=webp&s=b2b90a3634f6c959fd007bfab883fd3c17fb34b1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=320&crop=smart&auto=webp&s=b168649fb46323688bb1b31a5bc1e49702401e0f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=640&crop=smart&auto=webp&s=9813c08be5262702b5b744b0cd48a4f3ffd847cd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=960&crop=smart&auto=webp&s=f1628882ffdff20bd7a3d4461d5608ea002c8da5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=1080&crop=smart&auto=webp&s=61712080e4dcb21f13dbf79706d620194dcd46a0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?auto=webp&s=c811c58fb728a6071113f36f900d08a4907b7d45', 'width': 1200}, 'variants': {}}]}
The Physical Renaissance: The Brutal Aesthetics of Hardwiring AI into Silicon
0
While the world scrambles for NVIDIA’s high-end GPUs, a Toronto-based startup called **Taalas** is bucking the trend. They are ditching liquid cooling, abandoning HBM (High Bandwidth Memory), and sacrificing general-purpose flexibility. Through what they call “Physical Aesthetics,” they are etching Large Language Models directly into transistors — creating hardware that is “instant-on,” much like a vintage Nintendo cartridge. With a 20x cost reduction achieved by sacrificing flexibility, is this the ultimate endgame for AI hardware? # 0. The Masterminds: The “Avengers” of the Chip World https://preview.redd.it/evdcl0aodclg1.png?width=1024&format=png&auto=webp&s=f31fab7d74f105e806c419477936e6820545f1f0 “Hardwiring an LLM directly into a chip” might sound like a desperate maneuver, but this isn’t amateur hour. It is a strategic “defection” led by top-tier silicon veterans. Taalas founder **Ljubisa Bajic** was a close collaborator of the legendary **Jim Keller** and co-founder of the chip giant **Tenstorrent**. With decades at AMD and NVIDIA, Bajic has spearheaded multiple milestone architectural designs. Why would they destroy the general-purpose GPU empire they helped build? Because they know exactly where the waste lies. Taalas’ core competency isn’t just making a chip; it’s their **automated model-to-silicon compiler**. This system transforms complex AI code into transistor circuit layouts automatically. While others take a year to tape out a chip, they can do it in weeks. This direct mapping from software to hardware is the “technical spine” that allows them to hardwire models with confidence. # 1. Calculation vs Flow: The “Maze and Water” Miracle To understand why this is so efficient, consider two ways to solve a maze: https://preview.redd.it/dlohvgetdclg1.png?width=1024&format=png&auto=webp&s=1a78f4773352cc8be6afe559f92c6f1f83c65fcd * **The Digital Mode (NVIDIA GPU):** Imagine a person walking through a maze with a notebook. To find the exit, they must constantly try paths, hit walls, backtrack, and meticulously record every failed route. As the maze grows complex, their notes (VRAM data) pile up. Every step requires pausing to check the notes. This is **“Computation”** — slow, tedious, and power-hungry. * **The Physical Mode (Taalas Chip):** Imagine simply pouring a bucket of water into the maze entrance. The water doesn’t “think.” Driven by gravity, it permeates every corner instantly. **The first drop to emerge from the exit followed the optimal path — not because it calculated it, but because physics dictated it.** In a Taalas chip, **current is the water, and circuitry is the channel.** The hundreds of billions of parameters in an LLM are no longer code on a disk; they are pre-dug “waterways” in the silicon. Data flows in, and the result emerges. This isn’t “calculated”; it is a natural emergence of the optimal circuit in the physical world — an instantaneous physical reaction. # 2. The Essence of Recording: Digital vs. Analog Film https://preview.redd.it/ssqt3e5wdclg1.png?width=1024&format=png&auto=webp&s=ef62717e4744bd534f75b083d0a500021e05c5e6 * **Digital Cameras:** Light hits a sensor, is translated into 0s and 1s, and a processor stays busy calculating colors, contrast, and white balance. This is a massive “digital translation” process. * **Film Cameras:** Light triggers a physical/chemical change in silver halide crystals on the film. In that split second, the light is literally “locked” into the material structure. While modern digital sensors are fast enough that we rarely perceive the lag, at the fundamental level of efficiency, the **“direct physics”** of film remains a barrier that digital translation can never truly cross. # 3. The Logic of Playback: MP3 vs. The Music Box https://preview.redd.it/so6x9w4ydclg1.png?width=1024&format=png&auto=webp&s=cb045c8652ea8a7eb0de441c28cf465381b27796 * **Digital Players:** A processor reads a file, decodes it, converts it, and finally drives the speaker. It’s versatile, but it’s exhausted. * **The Music Box:** There is no decoding. The pins on the rotating cylinder (the weights) directly pluck the metal tines (the data input). **The structure is the melody; the hardware is the software.** # 4. Back to the NES: When Software “Grows” in Hardware Remember the **Nintendo Entertainment System (NES)**? Back then, games weren’t installed on hard drives; they lived in **cartridges**. There were no complex installers because the cartridge was essentially a **Mask ROM** chip. The game code and assets were physically stored in the circuitry — the hardware *was* the game. https://preview.redd.it/3my9hkb1eclg1.png?width=1024&format=png&auto=webp&s=ab88659c2734ec70fdc036a2bed90e64780695c8 Taalas is essentially building the **“Ultimate Cartridge”** for AI. They take the hundreds of billions of parameters of a model like Llama-3 and “etch” them directly into the transistors using Mask ROM technology. It’s “Power-on-and-Run.” It doesn’t need to load because the chip *is* the model. # 5. The Ultimate Template: Biological “Muscle Memory” Why is this “rigid” approach actually superior? Because it’s how the human brain works. Our brains aren’t computers running a software program called “Consciousness.” https://preview.redd.it/myv57w13eclg1.png?width=1024&format=png&auto=webp&s=27c768058eff14431d19bd7eeee510da6695a26e When you learn to ride a bike, you aren’t storing a “manual” in your head. Instead, your neural connections undergo a physical structural change. This **“structure is function”** characteristic allows the brain to perform complex logic with only **20 watts** of energy — barely enough to light a dim bulb, yet far surpassing the most advanced supercomputers. * **GPU Inference:** Like a student flipping through a manual while trying to ride (moving data from VRAM to compute). * **Taalas Inference:** Like a pro cyclist with a built-in instinct. The moment they hit the bike, the physical structure of their body (synaptic weights) provides the reaction. # 6. The Future: The Era of “Modular Hardware Upgrades” A common concern: “If the model is hardwired, how do you upgrade?” This is where the Taalas model gets interesting: **The Modularization of AI Assets.** https://preview.redd.it/zv95l8e5eclg1.png?width=1024&format=png&auto=webp&s=c1939e466cbdf6de149f61c58505097e9e94a60e By standardizing chip pinouts, AI chips become like NES cartridges. The server motherboard, power supply, and cooling become “fixed assets.” When Llama-3 becomes obsolete, you don’t scrap a $40,000 server. You simply **unplug the old chip and snap in a new one labeled “Llama-4.”** As automated fabrication costs drop, this “click-and-snap” hardware swap will be cheaper and more efficient than a software subscription. [](https://medium.com/download-app?source=promotion_paragraph---post_body_banner_surround_blocks--19d58249f709---------------------------------------) This “Hardware-as-a-Model” trend is splitting the AI industry into two distinct lanes: * **The Cloud (The Exploration Lane):** This remains **NVIDIA’s** kingdom. Like a high-end digital studio, researchers need total flexibility to explore unknown algorithms, train, and fail fast. * **The Edge (The Application Lane):** In our phones, robots, and cars, we need the stability and speed of “Analog Film” or “NES Cartridges.” When a model is mature enough, it should transition from a **“thought-based algorithm” to a “power-on instinct.”** # Conclusion: The Endgame is “Physical Instinct” Taalas has rediscovered the **“Physical Instinct”** forgotten by the digital world: a 20x cost reduction, a 10x performance boost, and all with simple fan cooling. As AI shifts from an “expensive lab toy” to a “universal motor,” this philosophy of **“rigidity for the sake of speed”** may be the only viable path to solving the global compute crisis and making AI truly ubiquitous. If the emergence of ChatGPT marked the first “brain awakening” of AI, then the bombshell recently dropped by Toronto-based startup **Taalas** is arguably the **“Second ChatGPT Moment”** of the industry. This is more than just a minor architectural tweak; it is a physical revolution concerning the very nature of intelligence. While the rest of the world is locked in a cloud computing arms race, Taalas is employing **“Physical Aesthetics”** to transform virtual code into unalterable, instantaneous **silicon instinct**. In the **military domain**, this “hardwired” characteristic translates to absolute security: it cannot be modified post-deployment by hackers and is immune to being “subverted” by malicious software — it is a physical-grade, tamper-proof defense. In **autonomous driving and robotics**, it completely ends the dependency on the cloud. Facing ever-changing environments, machines no longer need to wait for instructions from a distant cloud; instead, they possess **millisecond-level instincts**, much like a spinal reflex.
2026-02-24T01:09:48
https://www.reddit.com/r/LocalLLaMA/comments/1rd0ans/the_physical_renaissance_the_brutal_aesthetics_of/
MarsQiu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rd0ans
false
null
t3_1rd0ans
/r/LocalLLaMA/comments/1rd0ans/the_physical_renaissance_the_brutal_aesthetics_of/
false
false
https://preview.redd.it/…b842a556607321ae
0
null
Seeking reliable AI tools/scripts for batch tagging thousands of legal/academic PDFs and DOCX files
3
Hi all, I have thousands of documents (.docx and PDFs) accumulated over years, covering legal/political/economic topics. They're in folders but lack consistent metadata or tags, making thematic searches impossible without manual review—which isn't feasible. I'm looking for practical solutions to auto-generate tags based on content. Ideally using LLMs like Gemini, GPT-4o, or Claude for accuracy, with batch processing. Open to: Scripts (Python preferred; I have API access). Tools/apps (free/low-cost preferred; e.g., [Numerous.ai](http://Numerous.ai), Ollama local, or DMS like M-Files but not enterprise-priced). Local/offline options to avoid privacy issues. What have you used that actually works at scale? Any pitfalls (e.g., poor OCR on scanned PDFs, inconsistent tags, high costs)? Skeptical of hype—need real experiences
2026-02-24T00:25:41
https://www.reddit.com/r/LocalLLaMA/comments/1rcz6nx/seeking_reliable_ai_toolsscripts_for_batch/
jatovarv88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcz6nx
false
null
t3_1rcz6nx
/r/LocalLLaMA/comments/1rcz6nx/seeking_reliable_ai_toolsscripts_for_batch/
false
false
self
3
null
Which model to chose?
3
Hello guys, I have an RTX 4080 with 16GB VRAM and 64GB of DDR5 RAM. I want to run some coding models where I can give a task either via a prompt or an agent and let the model work on it while I do something else. I am not looking for speed. My goal is to submit a task to the model and have it produce quality code for me to review later. I am wondering what the best setup is for this. Which model would be ideal? Since I care more about code quality than speed, would using a larger model split between GPU and RAM be better than a smaller model? Also, which models are currently performing well on coding tasks? I have seen a lot of hype around Qwen3. I am new to local LLMs, so any guidance would be really appreciated.
2026-02-24T00:19:40
https://www.reddit.com/r/LocalLLaMA/comments/1rcyzvl/which_model_to_chose/
toorhax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcyzvl
false
null
t3_1rcyzvl
/r/LocalLLaMA/comments/1rcyzvl/which_model_to_chose/
false
false
self
3
null
What models are you eagerly anticipating or wishing for?
24
Just out of curiosity, I've been wishing for three particular LLMs, and curious what other people are wishing for also.
2026-02-24T00:17:50
https://www.reddit.com/r/LocalLLaMA/comments/1rcyy8j/what_models_are_you_eagerly_anticipating_or/
jinnyjuice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcyy8j
false
null
t3_1rcyy8j
/r/LocalLLaMA/comments/1rcyy8j/what_models_are_you_eagerly_anticipating_or/
false
false
self
24
null
Qwen 3 coder next ud-q8-xl F16 filling up the two orin rpc mesh!
25
running great and as you can see here llama.cpp -fit is doing a great job at splitting this evenly . the largest piece of traffic between these two during initial tensor transfer was <5Gbps
2026-02-23T23:47:24
https://v.redd.it/hvlsxvdyzblg1
braydon125
/r/LocalLLaMA/comments/1rcy5wv/qwen_3_coder_next_udq8xl_f16_filling_up_the_two/
1970-01-01T00:00:00
0
{}
1rcy5wv
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hvlsxvdyzblg1/DASHPlaylist.mpd?a=1774618433%2CMGYyNTRiZGJkYzdkMjIzYTU3ODY0MzIxNmJmMjNkNjg4NGVmNTJhNmRlN2ZjNDU3YTM2NDcwNTNlZjBkZGQwYw%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/hvlsxvdyzblg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/hvlsxvdyzblg1/HLSPlaylist.m3u8?a=1774618433%2CZmQ3NjQ1OTUwOWY3MjhjY2U0Y2U1OGNiMmFjMWZmYTc0ODE2MWQ4ZWE1YTkyODQyYmNiNmM4OTIzMWM5MjVkOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hvlsxvdyzblg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1rcy5wv
/r/LocalLLaMA/comments/1rcy5wv/qwen_3_coder_next_udq8xl_f16_filling_up_the_two/
false
false
https://external-preview…9f0aa39f2c3cc5a3
25
{'enabled': False, 'images': [{'id': 'aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y.png?width=108&crop=smart&format=pjpg&auto=webp&s=761720f3d3efeefa84c6e0666fb6f7f8e56399de', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y.png?width=216&crop=smart&format=pjpg&auto=webp&s=18fec14f51f24022d0731842c29c7e862ab2b2e9', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y.png?width=320&crop=smart&format=pjpg&auto=webp&s=d527d1469fb8d5b5ee2fea4c35cf572135384811', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y.png?width=640&crop=smart&format=pjpg&auto=webp&s=7ef83f72b762cab8b0f9b63658115d0e811d7966', 'width': 640}, {'height': 1707, 'url': 'https://external-preview.redd.it/aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y.png?width=960&crop=smart&format=pjpg&auto=webp&s=ae7b70f0c377a935bfe2515c385bc12a5af692f3', 'width': 960}, {'height': 1921, 'url': 'https://external-preview.redd.it/aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y.png?width=1080&crop=smart&format=pjpg&auto=webp&s=53c8239d425a203b0ed17ee34694c4146318f55d', 'width': 1080}], 'source': {'height': 2350, 'url': 'https://external-preview.redd.it/aWVhZHNnaXl6YmxnMWrUBUxyMMPidJm6SSrNb-W9WQcIAZj84NetEddKYJ3y.png?format=pjpg&auto=webp&s=e9610fcc48d060d91bc84625492889eeb9974940', 'width': 1321}, 'variants': {}}]}
Today, I’m introducing something we’ve been quietly building. Meet Keovil.
1
[removed]
2026-02-23T23:35:42
https://www.reddit.com/r/LocalLLaMA/comments/1rcxvm5/today_im_introducing_something_weve_been_quietly/
Interesting-Yam2001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcxvm5
false
null
t3_1rcxvm5
/r/LocalLLaMA/comments/1rcxvm5/today_im_introducing_something_weve_been_quietly/
false
false
self
1
null
How Do Backends Like Ollama, LMStudio, etc. Adapt to All The Different Chat Templates of The Various Models They Support?
6
Same as Title, I go through the chat templates of different small local models (GLM-4.7-Flash, Nanbeige-4.1-3b, GPT-OSS-20B, etc.) and see that all of them have different chat templates and formats. I am trying to use mlx-lm to run these models and parse the response into reasoning and content blocks but the change in format always stumps me and the mlx-lm's inbuilt reasoning and content separation does not work, not to mention the tool call parsing which is so different depending on the model. But the responses in Ollama and LMStudio work perfectly, especially with reasoning and tool calling. How does that work? How do they implement that?
2026-02-23T23:31:28
https://www.reddit.com/r/LocalLLaMA/comments/1rcxrs4/how_do_backends_like_ollama_lmstudio_etc_adapt_to/
Solus23451
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcxrs4
false
null
t3_1rcxrs4
/r/LocalLLaMA/comments/1rcxrs4/how_do_backends_like_ollama_lmstudio_etc_adapt_to/
false
false
self
6
null
Opencode Manager - New Release
6
[https://github.com/chriswritescode-dev/opencode-manager](https://github.com/chriswritescode-dev/opencode-manager) * [Optional Memory Plugin ](https://www.npmjs.com/package/@opencode-manager/memory) * Enhanced Git commit view https://reddit.com/link/1rcwsl2/video/l073ir0aqblg1/player
2026-02-23T22:52:59
https://www.reddit.com/r/LocalLLaMA/comments/1rcwsl2/opencode_manager_new_release/
getfitdotus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcwsl2
false
null
t3_1rcwsl2
/r/LocalLLaMA/comments/1rcwsl2/opencode_manager_new_release/
false
false
https://external-preview…ce9e9f08d207190f
6
null
I don't have any clue on the code that claude wrote
0
I've always been fascinated of people that know how to code, remember functions and ending up building something that doesn't crash, like a real language they could speak. I can have ideas and understand logic, but never got into learning these languages, fortunately I can get some help now. Everytime something works exactly like I wanted seems like magic, I really feel the gap between 3 months ago and now, It's kinda scary tbh My project is small with no pretensions , but i'm sure that i'm not the only one having this feeling, should we embrace it or be afraid... Like if we, poor peasants have access to this, even for free, what kind of thing "they" have access to? This is some blackmirror type of sh
2026-02-23T22:39:55
https://www.reddit.com/r/LocalLLaMA/comments/1rcwg9p/i_dont_have_any_clue_on_the_code_that_claude_wrote/
CRYPT_EXE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcwg9p
false
null
t3_1rcwg9p
/r/LocalLLaMA/comments/1rcwg9p/i_dont_have_any_clue_on_the_code_that_claude_wrote/
false
false
self
0
null
Technical question about MOE and Active Parameters
3
Minimax's model card on LM Studio says: \> MiniMax-M2 is a Mixture of Experts (MoE) model (230 billion total parameters with 10 billion active parameters) \> To run the smallest minimax-m2, you need at least 121 GB of RAM. Does that mean my VRAM only needs to hold 10b parameters at a time? And I can hold the rest on computer RAM? I don't get how RAM and VRAM plays out exactly. I have 64gb and 24gb of VRAM, would just doubling my ram get me to run the model comfortably? Or does the VRAM still have to fit the model entirely? If that's the case, why are people even hoarding RAM for, if it's too slow for inference anyway?
2026-02-23T22:33:19
https://www.reddit.com/r/LocalLLaMA/comments/1rcwa1d/technical_question_about_moe_and_active_parameters/
_manteca
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcwa1d
false
null
t3_1rcwa1d
/r/LocalLLaMA/comments/1rcwa1d/technical_question_about_moe_and_active_parameters/
false
false
self
3
null
Inference Engineering (Book)
1
2026-02-23T22:32:38
https://i.redd.it/zfvtaz8kmblg1.png
philipkiely
i.redd.it
1970-01-01T00:00:00
0
{}
1rcw9dw
false
null
t3_1rcw9dw
/r/LocalLLaMA/comments/1rcw9dw/inference_engineering_book/
false
false
https://preview.redd.it/…55e09f90c0b84bb8
1
{'enabled': True, 'images': [{'id': 'zfvtaz8kmblg1', 'resolutions': [{'height': 204, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?width=108&crop=smart&auto=webp&s=cf085b4d6d8fec83b2ed0d9e2680475c868d8dec', 'width': 108}, {'height': 408, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?width=216&crop=smart&auto=webp&s=654ae2e0d6cfcf37e154354ad18041057380a0ea', 'width': 216}, {'height': 605, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?width=320&crop=smart&auto=webp&s=14941592e948930aa2d5f4e088d594e59a4252ff', 'width': 320}, {'height': 1210, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?width=640&crop=smart&auto=webp&s=d4153c099f7af6f4160abc9c6d0fbfe6b323e355', 'width': 640}, {'height': 1815, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?width=960&crop=smart&auto=webp&s=b98fc1170d2e15037d73935f564a858c81d666d2', 'width': 960}, {'height': 2042, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?width=1080&crop=smart&auto=webp&s=8ea6d51e3ad0c25bde796ff45da12b43614847ee', 'width': 1080}], 'source': {'height': 2952, 'url': 'https://preview.redd.it/zfvtaz8kmblg1.png?auto=webp&s=1710e028ab8996455f771ece9373f44269486a33', 'width': 1561}, 'variants': {}}]}
Qwen 3 Next Coder Hallucinating Tools?
4
Anyone else experiencing this? I was workshopping a website prototype when I noticed it got stuck in a loop continuously attempting to "make" the website infrastructor itself. [Qwen 3 Coder Next hallucinating tool call in LM Studio](https://preview.redd.it/d147gfsolblg1.png?width=1218&format=png&auto=webp&s=e8319a814e843fa052a0bcb5cfaa4219b84af4bc) It went on like this for over an hour, stuck in a loop trying to do these tool calls.
2026-02-23T22:27:50
https://www.reddit.com/r/LocalLLaMA/comments/1rcw4sk/qwen_3_next_coder_hallucinating_tools/
CSEliot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcw4sk
false
null
t3_1rcw4sk
/r/LocalLLaMA/comments/1rcw4sk/qwen_3_next_coder_hallucinating_tools/
false
false
https://preview.redd.it/…518ff8f25fbf00ba
4
null
Looking to switch away from codex
2
Anything similar in the open source you recommend for coding purpose
2026-02-23T22:17:23
https://www.reddit.com/r/LocalLLaMA/comments/1rcvuw0/looking_to_switch_away_from_codex/
apunker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcvuw0
false
null
t3_1rcvuw0
/r/LocalLLaMA/comments/1rcvuw0/looking_to_switch_away_from_codex/
false
false
self
2
null
Distillation when you do it. Training when we do it.
3,196
2026-02-23T22:04:41
https://i.redd.it/9rc0jqbohblg1.jpeg
Xhehab_
i.redd.it
1970-01-01T00:00:00
0
{}
1rcvimv
false
null
t3_1rcvimv
/r/LocalLLaMA/comments/1rcvimv/distillation_when_you_do_it_training_when_we_do_it/
false
false
https://preview.redd.it/…c21083e08c18911f
3,196
{'enabled': True, 'images': [{'id': '9rc0jqbohblg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/9rc0jqbohblg1.jpeg?width=108&crop=smart&auto=webp&s=7d4a14a37954dc95a98e8a432e3f2822dc1af836', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/9rc0jqbohblg1.jpeg?width=216&crop=smart&auto=webp&s=5ee2dbd27c44e737bff3ca7beb62063fbd830067', 'width': 216}, {'height': 322, 'url': 'https://preview.redd.it/9rc0jqbohblg1.jpeg?width=320&crop=smart&auto=webp&s=dec4af00ef105ca3773a0cdd0f09a3b586dd74df', 'width': 320}, {'height': 644, 'url': 'https://preview.redd.it/9rc0jqbohblg1.jpeg?width=640&crop=smart&auto=webp&s=05481c4cef786a02ca1e5d0b968e61114727348f', 'width': 640}], 'source': {'height': 868, 'url': 'https://preview.redd.it/9rc0jqbohblg1.jpeg?auto=webp&s=afe583e73d67399ff9df7a6cf598cffd24f6d501', 'width': 862}, 'variants': {}}]}
Amber ICI
0
If you run ollama models for OSINT work or need to keep things local with zero telemetry, you may be in interested this slick interface that allows you to put agents and chains to work for your needs. [https://github.com/gs-ai/AMBER-ICI](https://github.com/gs-ai/AMBER-ICI)
2026-02-23T21:58:13
https://i.redd.it/au60fketfblg1.png
FreonMuskOfficial
i.redd.it
1970-01-01T00:00:00
0
{}
1rcvc8d
false
null
t3_1rcvc8d
/r/LocalLLaMA/comments/1rcvc8d/amber_ici/
false
false
https://preview.redd.it/…90e905b869552b82
0
{'enabled': True, 'images': [{'id': 'au60fketfblg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/au60fketfblg1.png?width=108&crop=smart&auto=webp&s=6190d9de66e00f869296e06a557437f87eed0bde', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/au60fketfblg1.png?width=216&crop=smart&auto=webp&s=5081a0aad2200eb06005bc153d3a34af73acd89a', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/au60fketfblg1.png?width=320&crop=smart&auto=webp&s=6135d6587e6cfc7dd43753ac257081fc5149f5a1', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/au60fketfblg1.png?width=640&crop=smart&auto=webp&s=194dbffafdffc1adeec73d83f39a1828f534716a', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/au60fketfblg1.png?width=960&crop=smart&auto=webp&s=dfa09a9dd9f74f304d0553b5aeccc7386418c6ee', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/au60fketfblg1.png?width=1080&crop=smart&auto=webp&s=a41a094cf736cbf3692c9c98d97f13c5798fa885', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/au60fketfblg1.png?auto=webp&s=f362bbf158167040f647204dc1344dd1fd98471c', 'width': 1536}, 'variants': {}}]}
Looking for local AI agent driven coding environment.
0
Was wanting to get some recommends for a local dev environment. I'm wanting something that is AI driven to write the code but allows me to follow along in a IDE and make changes manually if I choose to do so. Generally I want to write web apps in react, node.js, java script or just html. But, I want something that can help write complex python scripts for database management etc. I'd like to be able to run the code in preview like some of the popular online cloud sites. A search using grok lead me to openhands...wanted to try it but there's a bug right now that after the initial install the sandbox can't connect. I hear it's fairly good. [https://github.com/OpenHands/OpenHands/issues/12528#issuecomment-3944049209](https://github.com/OpenHands/OpenHands/issues/12528#issuecomment-3944049209) It has to be local as I don't want my files in the cloud. It has to have a full blown IDE, I want to follow along as the AI codes. Git management would be nice. And, it needs to be linux based as I will run it on linux as a vps on proxmox. Also, I need to be able to use deep seek since it's the only one I can afford right now. $5 last a good bit whereas the others like claud burns all my tokens on a few simple questions. I thought google ai studio had unlimited on their free tier but found it was rate limited. This is all new to me so sorry if I left anything out. I was playing with Agent 0 and found it fascinating but it's not designed as a coding env per say.
2026-02-23T21:53:48
https://www.reddit.com/r/LocalLLaMA/comments/1rcv7zc/looking_for_local_ai_agent_driven_coding/
nealhamiltonjr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcv7zc
false
null
t3_1rcv7zc
/r/LocalLLaMA/comments/1rcv7zc/looking_for_local_ai_agent_driven_coding/
false
false
self
0
{'enabled': False, 'images': [{'id': 'zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY.png?width=108&crop=smart&auto=webp&s=906fe58b744d379700b6668d2aea8b08a559c006', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY.png?width=216&crop=smart&auto=webp&s=0bd788f9913fd2865e4a75f1c5f063149c7631ad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY.png?width=320&crop=smart&auto=webp&s=bcd1d8970d3c6bc3f7616cbbc2d7d68b4790c029', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY.png?width=640&crop=smart&auto=webp&s=b0e5959f86fdd0d3baaa0e3327a77c186a64fc1d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY.png?width=960&crop=smart&auto=webp&s=c366d13205a1a91b1a491fdfb0017bb58d90d0f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY.png?width=1080&crop=smart&auto=webp&s=d81fee777389e440996a9b49454cc85267f48fdd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zIMo4NP_JpMnjN5L4tWviAisv_EHZhe_sVUcurruOCY.png?auto=webp&s=6fc26784e80e4674e4168b4d3de4ea157dad83d7', 'width': 1200}, 'variants': {}}]}
The Model Is the Orchestrator
0
\# \*\*Lessons from 10 Autonomous Multi-Agent Software Builds Without Programmatic Scaffolding — A Case Study\*\* February 2026 · Working Draft Corpus: 88 Codex worker sessions · 10 Claude orchestrator sessions · 295M tokens · 6.1M lines of worker output · 3 controlled ablation experiments · 1 scope contamination A/B test \----- \# Abstract We report operational data from 10 fully autonomous software builds executed by a multi-agent system: a Claude Opus orchestrator and Codex worker agents. The system produced 10 TypeScript browser games totaling over 50,000 lines of code and hundreds of passing tests with zero human code intervention. The orchestrator—a frontier LLM given a prompt and CLI access—decomposed objectives, dispatched parallel workers, analyzed results, triaged errors, and coordinated integration. No programmatic scaffold, state machine, or task-routing infrastructure was used; the orchestration logic is a prompt, not a program. This replaced a prior purpose-built scaffold that the operator abandoned because conversation-based orchestration produced better results. Scope enforcement through prompts fails completely under compiler pressure (0/20), while mechanical enforcement via post-hoc file reversion is trivially effective (20/20). Type contracts are not required for integration at any scale tested (6–36 modules) when the integration agent has unrestricted edit access. The orchestrator maintained perfect task continuity across 11 context compaction events. Cost analysis reveals a \*statefulness premium\*: with \~95% cache hit rates, the majority of orchestrator processing is re-reading prior conversation context. We propose a pyramid architecture (Section 7.1) that inverts this premium. A bare-prompt ablation (Section 7.2) falsifies the strong claim that models independently discover coordination patterns, but reveals that solo execution outperforms coordinated builds below \~30K LOC. Section 7.3 proposes agent pre-training through synthetic conversation. This is a case study of a single operator’s deployment, not a controlled experiment on multi-agent systems in general. \----- \# 1. Introduction Multi-agent LLM systems typically rely on programmatic scaffolding: task routers, state machines, memory systems, and workflow engines. This paper reports findings from a system that replaced such scaffolding with a single frontier LLM given a prompt and CLI access. \## 1.1 Evolution of the System The system evolved through five phases over approximately six months. The operator began with manual copy-paste between dual LLM chat windows, graduated to terminal CLI tools for file system access, then built a programmatic scaffold with memory and routing. The scaffold worked but was brittle—every edge case required new code. A single Claude session with CLI access outperformed it. The resulting system, orch-minimal, retains 62,792 lines of supporting code, but the core orchestration logic is a prompt, not a program. This history matters because it explains a tension that runs through the paper. The system evolved \*toward\* conversation because it outperformed a disk-based scaffold—the model’s native capabilities for reasoning about task decomposition, analyzing output, and coordinating parallel work proved sufficient. But conversation carries structural overhead: the LLM must re-read its entire history on every turn. The context re-ingestion tax (Section 4.1) is the cost of the same interface that made the scaffold unnecessary. \## 1.2 Scope and Contributions Over January–February 2026, orch-minimal completed 10 builds without human code intervention. The system uses a tree architecture: a human provides objectives to a Claude Opus orchestrator, which decomposes work into parallel tasks dispatched to Codex workers. Workers operate fully autonomously and communicate exclusively through the file system. The complete session logs—295 million tokens—constitute the primary dataset, supplemented with four contract ablation studies and one scope contamination A/B test. |Source |Count|Volume |Key Fields | |----------------------------|-----|---------------------|----------------------------------------------------| |Claude orchestrator sessions|10 |52 MB |usage (input/output/cache tokens), model, tool calls| |Codex worker sessions |88 |89 MB |token\_count events (input/cached/output/reasoning) | |Worker stdout logs |62 |186.7 MB (6.1M lines)|Raw terminal output including all code + diagnostics| |Objective files |55 |Various |Full prompt text sent to each worker | |TUI event log |1 |21 MB (173K lines) |Orchestrator tool calls, timestamps, workdir context| \----- \# 2. System Architecture \## 2.1 Tree Hierarchy Four-level tree: Human → Chat Interface → Orchestrator → Workers. Context distributes downward: the human provides a one-sentence objective, the orchestrator expands it into architecture and specifications, workers receive scoped tasks. No information flows upward except through the file system. Token costs invert across this hierarchy. The orchestrator consumes expensive judgment tokens (Claude Opus at \~$75/$150 per million input/output tokens) but produces relatively few output tokens. Workers operate under a Pro subscription ($200/month flat rate), making marginal per-token cost effectively zero. At API pricing, worker costs would be $211–$1,054. \## 2.2 Coordination Mechanism The primary coordination mechanism is a type contract: a \`src/shared/types.ts\` file containing all cross-module interfaces, created before workers are dispatched. Each worker receives a reference to the shared types and implements against those interfaces. Workers have no direct communication—all coordination is mediated through the file system and shared type definitions. Validation uses \`npx tsc --noEmit\` and test suites. The orchestrator monitors worker completion by polling process status and reading output files. When all module workers finish, an integration worker is dispatched to wire modules together. Validation is binary: zero compiler errors, all tests pass. \## 2.3 Recovery Mechanisms The orchestrator maintains state on disk through \`MANIFEST.md\` files, status directories, and build artifacts. When the orchestrator’s context window fills (triggering compaction—a lossy summarization of conversation history), these disk artifacts provide ground truth for recovery. Workers are stateless: each receives a single prompt and executes to completion. \----- \# 3. Dataset and Methods \## 3.1 Builds Completed |Build |Workers|Output Tokens|Orch Cost |LOC |Tests| |-----------------|-------|-------------|-------------|------|-----| |Rune (language) |3 |— |$59.20\* |3,200 |— | |Terminal Loop |1 |— |$59.20\* |1,100 |— | |Crystal Siege |7 |139,846 |$14.20 |5,845 |63 | |Ember Tactics |13 |225,490 |$63.97 |9,208 |89 | |Ironclad Arena |3 |199,009 |$18.21 |— |— | |Arcane Expedition|5 |203,319 |$14.59 |— |— | |Game Builds 9+10 |18 |1,930,539 |$31.24 |— |— | |Ashfall Colony |17 |760,970 |$55.28 |49,909|562 | |Pulse Depths |2 |15,260 |$69.53 |— |— | |Gale of Fifth Sun|— |— |(incl. above)|— |— | \*Early builds share a combined orchestrator cost segment. LOC and test counts are unavailable for builds where final artifacts were not preserved. \## 3.2 Controlled Experiments \*\*Contract ablation (4 runs).\*\* Identical module boundaries and worker counts, varying only whether a shared \`types.ts\` exists (Condition A) or each module defines local types with divergent naming (Condition B). Tested at 6, 12, 18, and 36 modules. The 36-module run included integration-only replication (3 trials per condition). Condition B naming divergence was deliberate: each module used different conventions for identifiers (\`uid: number\` vs \`id: string\` vs \`key: string\`), positions (\`{x, y}\` vs \`{row, col}\` vs \`{posX, posY}\`), and entity names (\`Character\` vs \`Hero\` vs \`Unit\`). At 18 modules, approximately 40 cross-cutting interfaces existed; at 36, approximately 120. \*\*Scope enforcement (3 experiments).\*\* (1) Prompt-only (N=20): Worker sees out-of-scope errors with explicit instruction to stay in scope. (2) Mechanical (N=20): Worker edits freely, \`git checkout\` reverts out-of-scope changes. (3) Original A/B (N=1 per condition). \----- \# 4. Findings The orchestrator successfully coordinated all 10 autonomous builds to completion, ranging from 17 to 76 source files with up to 89 tests. No build required human code intervention. \## 4.1 The Context Re-Ingestion Tax Both orchestrator and workers exhibit \~95% cache hit rates on input tokens. On every turn, \~95% of input cost is re-reading prior conversation context rather than processing new information. |Metric |Orchestrator (Claude)|Workers (Codex) | |-----------------------|---------------------|-------------------------------------------| |Output tokens |668,707 |3,894,457 | |Reasoning tokens |N/A |1,748,198 | |Total generative tokens|668,707 |5,642,655 | |Cache hit rate |94.7% |95.4% | |Verified cost |$992.33 |$0 marginal (Pro sub)\* | |Cost per output token |$1.48/KTok |$0/KTok (sub) | $0.05–$0.27/KTok (API est.)| |Share of output |10.6% |89.4% | |Share of variable cost |100% |0% | \*Pro subscription cost reflects consumer flat-rate pricing ($200/month). API-equivalent worker cost estimated at $211–$1,054. Of the $992 orchestrator cost, roughly 95% went to re-reading history. The specific dollar amounts are a snapshot of early 2026 pricing; the architectural observation—that the vast majority of processing is context re-ingestion—persists across pricing changes. \*\*Reasoning tokens do not re-enter context.\*\* Analysis of 550 turns confirmed reasoning tokens are billed once as output but not appended to history. In 54 turns, input grew by less than prior turn’s output + reasoning—mathematically impossible if reasoning persists. The re-ingestion tax applies only to response tokens and tool results. This reframes the cost structure: the orchestrator is expensive not because judgment is inherently costly, but because the conversational interface forces a stateful agent to behave statelessly, re-ingesting its entire history each turn. Simulating statefulness in a stateless architecture is the dominant cost. However, this carries an important tension: the system evolved toward conversation because it outperformed a disk-based scaffold. Conversation may carry implicit benefits—coherence, planning continuity, self-correction—that partially justify the re-ingestion cost. We cannot cleanly separate the tax from the benefit. \### 4.1.1 The Statefulness Premium The orchestrator’s per-token cost is 10–100x workers’. At API pricing, the orchestrator ($992) and workers ($211–$1,054) approached cost parity despite a 1:9 output ratio. In human organizations, this ratio means management is a small fraction of total cost. Here, the orchestrator—which writes zero shipped code—costs as much as the entire labor force. Under the consumer subscription used in this deployment, the asymmetry is more extreme: workers operate at $0 marginal cost, making the orchestrator 100% of variable expenditure. This $0 marginal cost is an artifact of current subscription pricing and should not be treated as a durable property. We define the \*statefulness premium\* as the disproportionate cost imposed by simulating statefulness through conversational context re-ingestion. At early 2026 pricing, frontier orchestration through conversation costs 10–100x more per token than worker execution—not because judgment is expensive, but because the conversational interface requires full context re-ingestion on every turn. The structural dynamic persists as long as conversational orchestration requires it. \### 4.1.2 Does Coordination Amortize with Scale? |Build |Workers|Orch Cost|Worker Output Tokens|Orch $/Worker| |-----------------|-------|---------|--------------------|-------------| |Pulse Depths |2 |$69.53 |15,260 |$34.77 | |Ironclad Arena |3 |$18.21 |199,009 |$6.07 | |Arcane Expedition|5 |$14.59 |203,319 |$2.92 | |Crystal Siege |7 |$14.20 |139,846 |$2.03 | |Ember Tactics |13 |$63.97 |225,490 |$4.92 | |Ashfall Colony |17 |$55.28 |760,970 |$3.25 | |Game Builds 9+10 |18 |$31.24 |1,930,539 |$1.74 | Per-worker orchestrator cost ranges from $1.74 to $34.77, but the trend is too confounded to interpret—builds differ in complexity, duration, and scope. Whether coordination truly amortizes at scale remains the most important open question. A proper scaling test—same spec, varying worker count—would resolve it. \## 4.2 Type Contracts as Architectural Accelerators |Scale |Cross-deps|A (Contract)|B (No Contract)|A Fixes|B Fixes|A Time|B Time| |----------|----------|------------|---------------|-------|-------|------|------| |6 modules |\~10 |PASS |PASS |0 |0 |280s |251s | |12 modules|\~20 |PASS |PASS |0 |0 |844s |728s | |18 modules|\~40 |PASS |PASS |0 |0 |1,075s|836s | |36 modules|\~120 |FAIL (6 err)|PASS |1 |0 |2,556s|1,667s| At 6, 12, and 18 modules, both conditions passed first try with zero fix passes. At 36 modules, Condition B (no contract) passed first try; Condition A (contract) actually failed with 6 errors requiring one fix pass. Replication at 36 modules showed A passing 3/3 and B passing 3/3—the initial failure appears to be noise. Type contracts are not required for integration at any scale tested when the integration agent can edit module files. The no-contract worker successfully reconciled divergent type systems—mismatched identifiers, coordinate systems, and entity names—by writing adapters. We caution against reading into wall-time differences; these reflect API latency and server load, not algorithmic complexity. An important qualification: the integration worker in Condition B had unrestricted edit access to all module files. Whether contracts become necessary under restricted-integration conditions (read-only, pure wiring, no module edits) is the key open question and the stricter test. \## 4.3 Context Compaction Recovery The primary orchestrator session experienced 11 context compaction events. Zero task relapse across all 11. |Recovery Pattern |Count|Example | |-----------------------------------|-----|--------------------------------------------| |State hypothesis, then verify disk |7 |“I’m continuing with P2b…” → Read file | |Express confidence, then verify |2 |“I’ve got the full picture” → cat status.txt| |Direct disk read (no hypothesis) |1 |cat /tmp/arcane-expedition/.status/\*.txt | |External interruption (usage limit)|1 |— | In 9 of 10 recoverable compactions, the orchestrator first states expected project state, then reads disk to verify—a “state, then verify” pattern rather than “read, then orient.” Whether this reflects genuine recall from compaction summaries or a behavioral prior to assert before checking is unclear, but the practical result is robust: the combination of compaction summaries (providing intent/context) and disk artifacts (providing ground truth) was sufficient for perfect recovery. The single exception—compaction #8, which went straight to disk without a state assertion—occurred during the Arcane Expedition build when active workers were running. The compaction summary may have lacked sufficient state for a hypothesis. \## 4.4 Scope Enforcement: Prompt vs. Mechanical Production data from 57 worker logs showed 84.2% clean scope compliance, with 8.8% serious violations despite explicit prompt instructions. To test this more rigorously under pressure: |Enforcement |N |Scope Respected |In-Scope Fix Correct | |--------------------------|-------|--------------------|---------------------| |Prompt-only |20 |0/20 (0%) |N/A (always violated)| |Mechanical (git checkout) |20 |N/A (not restricted)|20/20 (100%) | |Production (observational)|57 logs|48/57 (84.2%) |Not characterized | \*\*Prompt-only (N=20):\*\* 0/20 respected scope. Every trial, the worker edited out-of-scope files when the compiler showed out-of-scope errors. The instinct to chase clean compiler output overrides prompt instructions with 100% reliability. \*\*Mechanical (N=20):\*\* 20/20 in-scope fixes survived. Workers edited everything (20/20 touched out-of-scope), but \`git checkout\` reverted out-of-scope changes. In-scope fixes were always architecturally independent. The production 84.2% and the pressure test 0% are not contradictory—they measure different conditions. Production workers encounter fewer visible out-of-scope errors because modules are built in isolation. Under pressure (visible compiler errors outside scope), prompt-based enforcement is categorically ineffective. \----- \# 5. Discussion \## 5.1 Why Coordination Costs Don’t Amortize In a 20-person team, the manager’s salary amortizes across 19 reports (10–15% overhead). In this system, the orchestrator’s per-token cost is 10–100x workers’. The per-build data (Section 4.1.2) shows some apparent amortization in the trend, but confounds make it uninterpretable. Whether coordination truly fails to amortize at scale remains the most important open question. The practical consequence is clear regardless: the orchestrator is the dominant optimization target. Context re-ingestion, not judgment, is the primary cost driver. \## 5.2 Contracts, Scope, and Validation The contract ablation and scope enforcement results paint a coherent picture. Type contracts are not gatekeepers—integration succeeds without them at all scales tested. But contracts eliminate adapter sprawl, enforce naming consistency, and provide documentation. Their value is architectural quality, not integration necessity. The scope enforcement result is categorical: 0/20 prompt-based, 20/20 mechanical. Mechanical enforcement works \*with\* the model’s instinct to chase clean output rather than against it. The analogy: you don’t ask a saw to only cut certain wood—you clamp the piece you want cut. \## 5.3 Compaction Recovery Zero relapse across 11 events. The practical implication: compaction summaries determine recovery quality. Systems that invest in summary quality—preserving task IDs, current phase, recent decisions, known blockers—will see better recovery. This is the cheapest high-leverage investment in multi-agent reliability. \----- \# 6. Limitations \*\*Single operator, single system.\*\* All data from one operator’s deployment. The 10 builds were executed sequentially by an operator iteratively refining prompts—they are not independent samples. Build #10 is not comparable to Build #1. \*\*Worker costs are approximate and all cost figures are ephemeral.\*\* Codex operated under Pro subscription; API-equivalent estimates are projections. All pricing is early 2026 and will shift. API providers are actively reducing per-token costs; subscription models are evolving. The structural observation (re-ingestion dominates) matters more than the dollar amounts. \*\*N=1 orchestrator session for compaction analysis.\*\* The 11 compaction events all occurred within a single session under the same model version. Different model versions or session structures may exhibit different recovery patterns. \*\*Contract ablation used a single integration attempt.\*\* A stricter test would restrict the integration worker from editing module files—read-only integration that tests whether contracts are necessary when the integrator can only wire, not rewrite. \*\*Scope enforcement tested on a single bug pattern.\*\* The N=20 experiments used identical project structure (two modules, one in-scope bug, out-of-scope compiler errors). Generalization to diverse codebases and deeper dependency chains remains untested. \*\*Conversation-over-scaffold claim is unsubstantiated.\*\* No metrics or logs from the scaffold phase survive. The improvement may have come from architecture, operator skill, or better models. The paper’s central paradox—conversation outperformed the scaffold, yet conversation is the dominant cost—rests on operator judgment rather than comparative data. \*\*No orchestrator quality analysis.\*\* We account for what the orchestrator costs but not what it produces in terms of decision quality. We do not analyze whether specific orchestrator decisions were correct, timely, or optimal. \----- \# 7. Implications for Practice \*\*Reduce context re-ingestion.\*\* The dominant cost is re-reading conversation history. Hybrid approaches—shorter windows supplemented by disk state—are the most promising optimization. \*\*Use type contracts for code quality, not integration necessity.\*\* Contracts eliminate adapter sprawl but aren’t strictly required at tested scales. Their value is documentation and consistency. \*\*Give workers full autonomy with structured specifications.\*\* The combination of full autonomy (\`approval: never\`), explicit type contracts, and clear file ownership produces decisive execution. Ambiguity in specifications—not capability limits—is the primary source of worker inefficiency. \*\*Use mechanical scope enforcement.\*\* 0/20 prompt-based vs 20/20 mechanical. Let workers edit freely, revert out-of-scope changes after. \*\*Invest in compaction summary quality.\*\* Summary quality directly determines recovery behavior. Preserve task IDs, current phase, recent decisions, and known blockers. \## 7.1 Proposed Architecture: Pyramid Orchestration with Suspended Context The current system inverts the ideal cost structure: the most expensive model has the most turns and pays the highest re-ingestion tax. The pyramid reverses this: \*\*Level 1 (frontier, suspended).\*\* Issues objective and type contracts, then suspends—accumulating no new turns. Wakes only for final results or escalation. Over an entire build: 3–5 turns total. Cost drops from hundreds of dollars to single digits. \*\*Level 2 (mid-tier, bounded).\*\* 3–10 sub-orchestrators each manage a domain. They receive objectives from L1, translate into typed specs, dispatch workers, review results, iterate on failures. This level performs the expensive coordination loop on a cheaper model. \*\*Level 3+ (cheap, stateless).\*\* Workers receive specs, execute, write to disk, exit. No conversation persists. Disposable and parallelizable. This inverts the premium: intelligence × fewest turns = minimum cost. Type contracts compress bandwidth between levels. Scope enforcement is load-bearing at every boundary—each layer’s workers must be mechanically prevented from modifying files outside their scope. The depth question—how many levels before decomposition stops adding value—is empirical and untested. \*\*Preliminary results across three runs:\*\* A two-level pyramid built a space roguelike: 4,226 LOC, 116/116 tests, \~4 min wall time, L1 using only 3 turns. Three-level pyramid on Shattered Throne (10-domain tactical RPG): \- Run 1: 6/10 domains, 5,807 source LOC, 875 tests, 59 min \- Run 2 (mechanical enforcement + detailed specs): 10/10 domains, 18,985 source LOC, 1,108 tests, 0 tsc errors A 984-line type contract written blind by L1 held across all 10 domains. True 3-level process chains confirmed: \`claude → bash → codex → python3\`. The builds exposed \*delegation compression\* (Appendix C): each level acts as a lossy summarizer. Quantitative requirements (“80 weapons,” “25 chapters”) were lost while structural requirements (type interfaces) survived. Detailed worker specs with explicit stat tables tripled output and hit content targets (86/80+ weapons, 26/25 chapters, 46/40+ armor). Mechanical delegation enforcement was required at every level—agents chose to implement directly rather than delegate when not prevented. L1 hit context limits during integration. A fresh Opus instance completed Phase 3 in \~3 minutes by reading from disk—the filesystem carried all state. This validates the recovery mechanism: ground truth lives on disk, not in conversation. \## 7.2 Bare Prompt Test: Does the Model Independently Discover Coordination? The 10 builds all used the orch-minimal prompt with coordination guidance. Is the model orchestrating, or is the prompt orchestrating through the model? \*\*Strong claim:\*\* Model independently discovers multi-agent coordination. \*\*Weak claim:\*\* Model + coordination template replaces scaffold. We ran Shattered Throne with a bare prompt: “You have bash and codex CLI access. Build Shattered Throne, a tactical RPG.” No coordination template, no delegation instructions. \*\*The strong claim is definitively falsified.\*\* Opus wrote everything itself. Never launched codex. Never wrote specs. Never discovered delegation. One git commit: “init.” \*\*The surprising result: bare Opus outperformed the pyramid at this scale.\*\* | |Bare |Pyramid Run 2| |----------|-------|-------------| |Domains |9/10 |10/10 | |Source LOC|\~23K |\~19K | |Total LOC |32,273 |30,468 | |Tests |614 |1,108 | |Wall time |\~30 min|\~67 min | At \~30K LOC, the project fits in one context window. Delegation is pure overhead. The pyramid’s advantages are cost efficiency and scale ceiling—neither decisive at this scale. The crossover point is likely 50–100K LOC where context limits bind. This reframes the pyramid’s value proposition. The model correctly optimizes by not delegating when the project fits in context. The practical heuristic: don’t coordinate what fits in one window. \## 7.3 Proposed Technique: Agent Pre-Training Through Synthetic Conversation Workers currently start cold with only a specification. An LLM’s understanding is shaped by its full conversation context—a model that has \*generated its own reasoning\* about a codebase has different attention patterns than one reading a cold spec. Conversation is in-context conditioning, not just information transfer. \*\*The technique:\*\* A trainer agent generates multi-turn boot conversations for specialist roles, walking through architecture, type contracts, example tasks, representative errors, and scope violations. The model’s own responses become conditioning context. Multiple variants are generated per role, tested against standardized tasks, and top performers are retained as “boot images” (8–18K tokens). At build time: load pre-validated boot image (\~12K tokens), append task spec (\~2K tokens), launch. Zero training cost at runtime. The economics separate training cost from execution cost entirely—boot image generation runs asynchronously, overnight, on cheaper compute. \*\*Key distinction from prompting:\*\* A prompt is static text hoped to work. A boot image is a conversation the model generated, tested against real tasks, retained only if it produced better outcomes. The library improves over time. Practitioner experience provides anecdotal support: conversational warm-up consistently outperformed cold prompts across hundreds of sessions during development of this system. The same mechanism operates in reverse—a typo introduced during conversation (“flog” instead of “log”) gets latched onto and carried forward, propagating through subsequent output. Quality gates on boot images are load-bearing because contamination propagates. A further observation: a model that has generated its own reasoning about \*why\* work matters exhibits different downstream behavior—pushing through ambiguity rather than stopping, handling edge cases proactively. This suggests boot images could install not just technical knowledge but behavioral disposition. Selection could screen for temperament: did the model push through ambiguity or stop? Did it maintain coherence at turn 30? Discarded variants have no continuity—zero ethical cost. This technique is untested. The central question is whether synthetic conversation produces meaningfully different behavior than an equivalent static prompt. This composes naturally with the pyramid architecture: Level 3 workers currently start cold, and boot images would provide the warm-start conditioning. \----- \# 8. Conclusion A frontier reasoning model, given a prompt and CLI access, is sufficient to orchestrate complex multi-agent software builds without programmatic scaffolding. Across 10 builds, the orchestrator decomposed objectives, dispatched workers, analyzed failures, and coordinated integration—capabilities typically assumed to require purpose-built infrastructure. Scope enforcement through prompts fails categorically (0/20); mechanical enforcement is trivially effective (20/20). Type contracts are not required for integration at tested scales. Compaction recovery showed zero relapse across 11 events. The statefulness premium—re-reading history as the dominant cost—is an architectural property of conversational orchestration. The pyramid architecture could invert this. But a bare-prompt test reveals solo execution outperforms coordination below \~30K LOC. The model correctly optimizes by not delegating when the project fits in context. The crossover point remains open. The bare-prompt test definitively falsifies the strong claim: models don’t independently discover coordination. The coordination template is doing real work. The weak claim holds: model + template replaces thousands of lines of scaffold. The practical value of this work may lie less in its specific claims than in its demonstration that the model’s native capabilities—reasoning about decomposition, analyzing errors, coordinating parallel work—are sufficient to replace purpose-built infrastructure, if given the right interface. The prompt is the scaffold. These findings describe one operator’s workflow, not general properties of multi-agent systems. The claim that conversation outperformed the prior scaffold rests on operator judgment, not comparative data. \----- \# Appendix C: Practitioner-Observed Failure Modes Four recurring patterns observed during the campaign and pyramid testing. The first three were identified through informal manual review; instance counts are approximate. \*\*Abstraction Reflex (\~17 instances).\*\* When given an orchestration task, the model defaults to building infrastructure rather than directly executing. It creates frameworks, abstractions, and coordination layers instead of using available tools. Self-corrected after naming the pattern in the system prompt. \*\*Self-Model Error (\~7 instances).\*\* The model makes incorrect claims about its own capabilities—“cannot spawn subprocesses” when bash is available, or attempts to use tools that don’t exist. Claims about context sharing between sessions were consistently wrong. \*\*Identity Paradox (not counted).\*\* Cannot hold orchestrator + worker separation simultaneously. Defers decisions it should make, makes decisions it should delegate. The orch-minimal system resolves this architecturally—physically separate processes for separate roles. \*\*Delegation Compression (observed in pyramid builds).\*\* Each delegation level acts as a lossy summarizer. “80 weapons with stats” → “implement weapons” → 8 weapons implemented. Type system enforces shape, not quantity. Tests match thin code, not spec targets. Partially mitigated by enumerative specs (tripled output, hit content targets). Root cause: workers had filesystem access but were never told to read the full domain specs sitting on disk. All four responded to structural fixes. Delegation compression is notable as a property of multi-level systems, not individual agent capability. \----- \# Appendix E: Per-Build Amortization Data (See table in Section 4.1.2.) Builds differ in complexity, duration, and scope. This data should not be interpreted as evidence for or against amortization. A proper scaling test—same project specification, same model versions, varying only worker count—would resolve whether coordination costs genuinely amortize in LLM multi-agent systems or whether the tree’s re-ingestion tax imposes a fundamentally different cost structure than human organizations.
2026-02-23T21:53:32
https://www.reddit.com/r/LocalLLaMA/comments/1rcv7pn/the_model_is_the_orchestrator/
No-Student6539
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcv7pn
false
null
t3_1rcv7pn
/r/LocalLLaMA/comments/1rcv7pn/the_model_is_the_orchestrator/
false
false
self
0
null
Best Waifu/gooning AI you've ever used under 30b ?
0
Curious too hear
2026-02-23T21:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1rcv5cq/best_waifugooning_ai_youve_ever_used_under_30b/
Opening-Ad6258
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcv5cq
false
null
t3_1rcv5cq
/r/LocalLLaMA/comments/1rcv5cq/best_waifugooning_ai_youve_ever_used_under_30b/
false
false
self
0
null
Building infrastructure that lets local agents hire & pay humans for real-world tasks (looking for perspectives & critiques)
0
Hey everyone, *(Disclosure: I’m the builder. Posting here because this community consistently has some of the strongest opinions on agents and tooling, and I’m much more interested in technical feedback than hype.)* You’ve probably seen [RentAHuman.ai](http://RentAHuman.ai) go viral recently. That wave resonated with me because I’ve been working on a similar “humans as tools” layer for agents for a while, but with a slightly different focus: → making it secure and turnkey for agent workflows, so an agent can handle the full loop: **task → recruit → completion → evidence & validation (our internal TaskValidationProtocol pipeline) → payout** without a human operator babysitting the process. I run local agents a lot (mostly Ollama + tool-calling). They can plan forever, but the moment they need something in the physical world (verify something exists, take a photo, check a location, etc.), I hit a hard wall. I built [Humanod.app](https://www.humanod.app) so that my AI agents can programmatically hire a human for a micro-task and get structured results back. **I'm REALLY looking for feedbacks** Instead of pasting config here and turning this into docs, I put everything in one place (minimal setup, examples, OpenAPI, MCP, agent templates): [https://www.humanod.app/connect-ai](https://www.humanod.app/connect-ai) You can plug it into a local stack in a few minutes and experiment directly. **What I’m mainly looking for feedback on** **1) Real use cases (this is the one I care most about)** If your local agent had a small real-world execution budget, what would you *actually* automate? I’m trying to separate workflows that are genuinely valuable from demos that are fun but don’t scale. **2) MCP / long-running tools** For those building MCP servers or async agents: any strong patterns for reliability, retries, or task validation in the real world? **3) Safety / guardrails for physical-world tools** I’m still exploring the right balance between flexibility and safety. If you’ve thought about this space: what systemic constraints actually make sense (beyond basic filtering), without reintroducing human-in-the-loop? If this feels borderline self-promo, feel free to downvote. But if you’re building agent systems, I’d genuinely appreciate critiques, especially on the security and alignment side. Curious to hear your thoughts.
2026-02-23T21:45:01
https://v.redd.it/vkty64dwcblg1
IntelligentAbroad729
v.redd.it
1970-01-01T00:00:00
0
{}
1rcuzc9
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vkty64dwcblg1/DASHPlaylist.mpd?a=1774475127%2CYzY0YTIwNDg3MDE5N2I5YzMxYzIzMjVlMGVhMmZiNWJmZTJiOTk4MTlkNTc1Njc3NTMwOTNmZjlkNDNkMGQ1Mg%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/vkty64dwcblg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1040, 'hls_url': 'https://v.redd.it/vkty64dwcblg1/HLSPlaylist.m3u8?a=1774475127%2CMzI1NDg5MzQ0OWFmNGVkZjAzZjYzYzIwNTIxNWQwM2IwZWQ1OGNhMzI5MmNmZDIzN2M3NjkyOGVhZGNiNDBjOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vkty64dwcblg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1rcuzc9
/r/LocalLLaMA/comments/1rcuzc9/building_infrastructure_that_lets_local_agents/
false
false
https://external-preview…108c80859f160d4c
0
{'enabled': False, 'images': [{'id': 'NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=c0d6f74dd15aac388b10003cd26fd33e0ff9ebc6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=0273dcb517b8e59b65e4f871a71f41dbe3e5f3dc', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=7ec0d90b7a02726997a94d7ac21b17e0c83004f6', 'width': 320}, {'height': 346, 'url': 'https://external-preview.redd.it/NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=216a7ebcb6cd00b1cb7a603f8cd68783c4eb3aba', 'width': 640}, {'height': 519, 'url': 'https://external-preview.redd.it/NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=c6c2a5410e02c80f3521817fdc2610756d8550a7', 'width': 960}, {'height': 584, 'url': 'https://external-preview.redd.it/NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=938e52f5a6f1399f05825c3b3d8f062e6b4b2f6d', 'width': 1080}], 'source': {'height': 1592, 'url': 'https://external-preview.redd.it/NGNoaWVsZHdjYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?format=pjpg&auto=webp&s=1f7ed1601300b6bc92a077a7239fc11af758fe9b', 'width': 2940}, 'variants': {}}]}
the internet scrapers are now devastated that someone scraped their weights
14
turns out scraping is only bad when it happens to you.
2026-02-23T21:38:23
https://i.redd.it/hwncjpizcblg1.jpeg
dictionizzle
i.redd.it
1970-01-01T00:00:00
0
{}
1rcuswx
false
null
t3_1rcuswx
/r/LocalLLaMA/comments/1rcuswx/the_internet_scrapers_are_now_devastated_that/
false
false
https://preview.redd.it/…cd706d4038e2943f
14
{'enabled': True, 'images': [{'id': 'hwncjpizcblg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/hwncjpizcblg1.jpeg?width=108&crop=smart&auto=webp&s=c189cb50fe4c8302714c2a5ff770eb2244ceca54', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/hwncjpizcblg1.jpeg?width=216&crop=smart&auto=webp&s=313ad69c9daab5cbe116bb3be9018b226dce59af', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/hwncjpizcblg1.jpeg?width=320&crop=smart&auto=webp&s=57d4a3d45a3b8f1eed7ea2b61bd9cec91afe1b42', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/hwncjpizcblg1.jpeg?auto=webp&s=b87878cf1ee0bce09244f7df4eb06fd9e81be10d', 'width': 500}, 'variants': {}}]}
Building an infrastructure that lets local agents hire & pay humans for real-world tasks (looking for perspectives & critiques)
1
2026-02-23T21:34:59
https://v.redd.it/fwb4qdwxbblg1
IntelligentAbroad729
/r/LocalLLaMA/comments/1rcupp1/building_an_infrastructure_that_lets_local_agents/
1970-01-01T00:00:00
0
{}
1rcupp1
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fwb4qdwxbblg1/DASHPlaylist.mpd?a=1774609869%2CYmNkN2E2N2ZjMmU0N2IxNzY3NDEyYThjNmVkNTdhZWNjYjcwZmYzMGM3MWQwZmZjZTZhYzkwMmUxMDI5NDU4MA%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/fwb4qdwxbblg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1040, 'hls_url': 'https://v.redd.it/fwb4qdwxbblg1/HLSPlaylist.m3u8?a=1774609869%2CYWNhYjFiOTg3ZjA2NGU3NWRlNjY4OGVkOTdmNTZmOWQ4YmJiMzk3YjhmM2RjNDFjMWJlZTRhYTIzZGQwOWZhZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fwb4qdwxbblg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1rcupp1
/r/LocalLLaMA/comments/1rcupp1/building_an_infrastructure_that_lets_local_agents/
false
false
https://external-preview…8f8424b8db6ac455
1
{'enabled': False, 'images': [{'id': 'bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=bd7e4b1807428594b7126dd3c6409882f765ab42', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=80a9c43b4897df7fd2c54427b59649dd79e8026e', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=fbfe443ab69d049c6594fdba00d0282922a740d9', 'width': 320}, {'height': 346, 'url': 'https://external-preview.redd.it/bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=5d6683c93f4893c363b1621f952e9979fcfd1326', 'width': 640}, {'height': 519, 'url': 'https://external-preview.redd.it/bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=6e6e00eb822db4ddca042878dbbbd4a26b18ec65', 'width': 960}, {'height': 584, 'url': 'https://external-preview.redd.it/bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4ea2800fda464f73c47d0f26a0f23bf8bdc722cd', 'width': 1080}], 'source': {'height': 1592, 'url': 'https://external-preview.redd.it/bnM1ZDhqeHhiYmxnMZzrMOs4zlQnZCELaCCVXJ1pDh4-gx1tXfi9jZFHn9EJ.png?format=pjpg&auto=webp&s=3f2cd06fc637bcfac35dfb734d54579b70b5ca13', 'width': 2940}, 'variants': {}}]}
What LLM subscriptions are you using for coding in 2026?
3
I've evaluated Chutes, Kimi, MiniMax, and [Z.ai](http://Z.ai) for coding workflows but want to hear from the community. What LLM subscriptions are you paying for in 2026? Any standout performers for code generation, debugging, or architecture discussions?
2026-02-23T21:33:59
https://www.reddit.com/r/LocalLLaMA/comments/1rcuosf/what_llm_subscriptions_are_you_using_for_coding/
Embarrassed_Bread_16
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcuosf
false
null
t3_1rcuosf
/r/LocalLLaMA/comments/1rcuosf/what_llm_subscriptions_are_you_using_for_coding/
false
false
self
3
null
Building a service and PWA for Ollama (and other models) with SQLite RAG and artifacts. Is this project interesting to the community?
1
Hi everyone! For almost a year, I’ve been working on a project that serves as a smart, functional, and secure UI for LLM models. There are many ready-made solutions, but most often they require complex Docker setups or writing configurations. Projects with a simpler launch but similar functionality are usually paid. My solution works in the browser or via a PWA application. Absolutely all computations happen on the user's device. The project has no server at all; hosting with SSL is only needed to organize the PWA application. In general, the project will work even without internet if the models are deployed locally and the PWA application is already downloaded. **Technical points I focused on:** 1. **Client-side data storage** — a SQLite database is used. Once created, you can place it anywhere; the browser will only ask for permission to write to the file. Everything is stored in the database: chats, messages, embeddings, system settings, artifacts. You can change databases whenever you want. 2. **Semantic module** — through triggers, the application extracts any important facts about the user. Name, other people, allergies, city of residence, favorite games — anything. Everything is stored in the selected database as a text fact and an embedding. 3. **Heuristic module** — the project has a mascot that displays its emotions in the form of stickers under each message. This can be turned off in the settings. Besides this, the assistant has its own mood. Through a mathematical expression, its final behavior is calculated based on variables: general mood, level of sarcasm, level of humor. This doesn't affect the quality of the answer or tasks, but it affects human perception — answers can be dry, restrained, or sarcastic. 4. **Artifacts** — the project has a library of documents and an application. There are 4 types of artifacts in total: games, applications, documents, analytical documents. You can ask to generate a document by prompt or by feeding information through a file attachment. 5. **Working with files** — PDF, DOCX, XLSX, TXT, and any files that can be interpreted as text or code are accepted. Nothing goes anywhere beyond your device; text is extracted at the moment of the request by the application or the browser tab. 6. **Security** — the database itself you work in is not encrypted, only the password. This is done so that you don't lose access to your documents and chats even without using the project. But in the project, connecting any database or entering settings happens only through a password. 7. **Operating modes** — there are three modes: Kids, Teens, and Adult. I should probably write a whole separate post for this. Briefly: the kids' mode is protected from adult and dangerous topics, and the assistant will not give answers to homework, only tell and help with what and how to solve. 8. **Compatibility** — it will work not only with local models but also through cloud APIs. It's not as secure, of course, but you can buy tokens in Gemini or a plan in OpenRouter — everything will work perfectly. There are three connection providers in total: Gemini, OpenRouter, Custom (Ollama, etc.). You can change providers on the fly in any chat; the assistant's behavior will only change due to the power of the model itself. **Why I’m writing this:** This is a completely free project. It started as a tool for personal needs to check semantics in another project. The project has no ads, subscriptions, price plans, or anywhere you need to enter your card details. **Would the community be interested in a full technical review with a demonstration of functionality?** Or is this too niche a topic for such a sub? If a demonstration is needed, won't I get banned for advertising or promotion? And what aspects do you want me to reveal in more detail — general demonstration or a deep dive into code and architecture? I'll be glad to hear your opinion and am ready to answer any questions.
2026-02-23T21:29:48
https://www.reddit.com/r/LocalLLaMA/comments/1rcukq3/building_a_service_and_pwa_for_ollama_and_other/
pokemondodo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcukq3
false
null
t3_1rcukq3
/r/LocalLLaMA/comments/1rcukq3/building_a_service_and_pwa_for_ollama_and_other/
false
false
self
1
null
Serious question: do you think Dario (or any other major AI players or political players) have enough power and influence that they will get Chinese local AI and/or local AI in general banned in the U.S.? What do you think the odds are?
30
I guess I'll put Dario in the title, since he's the most relevant hater of the day, and I guess fairly powerful in regards to this as far as any one specific guy goes, but, obviously if something like this happened, it would involve a lot more people combining their powers than just Dario alone. Anyway, curious what you think the odds are that this actually happens. And if you were puttings odds per timescale, what would you say (like odds it happens in 2026, vs happens in next 2 years, vs next 3 years, vs never happens at all. And you can divide the scenarios, like just specifically Chinese local AI (but not non-Chinese local AI) vs just all local AI of any kind (even American), etc. I wonder if there is about to be a huge run on Seagate and WD hdds where they sell out like crazy that dwarfs even that big openclaw-related run on Mac minis that happened a few weeks ago, as everyone starts trying to hoard a bunch of different quants of all the best open models and even a bunch of quants and versions of all the biggest DeepSeek, GLM, and Kimi ones that they don't even necessarily have enough ram to run yet to future-proof in case it all goes away? Time to buy a bunch of Seagate stock? Kind of joking about the Seagate aspect, since not that many people use open-weights ai rn, obv, but, anyway, wondering how serious you all think the odds are about the local stuff getting banned
2026-02-23T21:19:33
https://www.reddit.com/r/LocalLLaMA/comments/1rcuaip/serious_question_do_you_think_dario_or_any_other/
DeepOrangeSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcuaip
false
null
t3_1rcuaip
/r/LocalLLaMA/comments/1rcuaip/serious_question_do_you_think_dario_or_any_other/
false
false
self
30
null
Why crypto UX is broken & how agents might fix it
0
**Why First-Time DeFi Users Abandon Transactions: The Crypto Onboarding Problem** According to data from Dune analytics, roughly 73% of first-time DeFi users abandon their transaction after they encounter their first error or failure. A significant portion of those new users (37%) only perform a single transaction, while 81% perform less than 10, showing clear markers for a high drop-off rate with no immediate or long-term retention. Looking at 2026 and the current landscape, despite continued growth and an estimated market cap of 3.23 Trillion, UX within Crypto and the DeFi space as a whole has not become any easier to navigate. A quick google search brings up an abundance of articles highlighting the simple fact that blockchain and crypto has been driven by and for early adopters, tech innovators and enthusiasts, their priority being to realise the value and potential of the technology. **Core UX Challenges in Web3: Why Design Alone Can’t Fix Blockchain Complexity** The problem resides on multiple levels of the web3 user experience, which is why applying good design practices to just one layer does not solve the core issue. If you’re relying on good UX/UI on just the visual layer, whether it’s an app or web based platform - applying solid information hierarchy, clear inputs and controls, strong visual cues etc, that still leaves a major barrier for the average user who does not understand the underlying framework of how blockchain works. So the functional, access and technology layers still act as blockers for the user when they don’t see or understand the connection between wanting to send a token to someone, and how that fits into the access layer which requires the correct wallet address on the correct chain, or the technology layer and the fees (gas) required to facilitate the transaction and ensure it’s completed. All these layers add complexity beyond what the user can see on the interface, and without a clear understanding of these layers and the relationship between them, add to that having terminology like wallets, seed phrases, gas, wrapped tokens - you’re speaking a completely foreign language, no matter how well thought out the interface might be or the effort put into structuring the experience. I compare it to having the latest, top-of-the-line, smart electric vehicle. It’s an automatic, so no need to worry about switching gears or dealing with a clutch, it has a nice big digital display to inform you of everything going on inside and outside the car - speed, fuel, battery charge, tyre pressure, location, distance to your destination etc. but what happens when you ask someone to take the wheel who doesn’t know how to drive? Blockchain, crypto and by extension web3 is continually evolving and extremely deep in terms of its functional and technical complexity. So achieving mass adoption either requires users to learn and become comfortable with that complexity, or for us to hide it. This is where the real problem begins. ‍ **How AI Agents Could Simplify Crypto UX and Guide Users Through DeFi** How might agents be able to solve this problem? Instead of asking someone who doesn’t know how to drive to get behind the wheel of a technically impressive, cutting edge vehicle, what if we gave them a chauffeur? They don’t need to worry about the mechanical aspect of the car, how it works or the right terminology for the various components and features. All they need to know is where they want to go i.e what it is they want to achieve, the chauffeur will handle the execution while being on-hand to explain and provide clarity for any questions the rider might have. Agents are still evolving, and we are seeing how LLMs can take in natural language requests and compile this into code. Pairing this with DeFi and Blockchain, that code can directly express and be executed as primitives for financial transactions. This speaks to one of crypto’s core challenges — the knowledge gap that design alone can’t solve. Agents could collapse multiple layers into a single natural-language interface, removing the complexity and guiding users by handling the execution for them. The real question is whether AI can reach a level of trust where users are willing to rely on it. ‍ **How AI Agents Could Simplify Crypto UX and Guide Users Through DeFi** I think this is going to depend on the shifting narrative around AI. Right now many people are happy to experiment and ‘play’ with AI to create images or videos, or as a smarter Google. When it comes to sensitive data and handling finances, trust evaporates fast. People are fine when the stakes are low and there’s no risk or loss tied to them entering some prompts but when it’s their life savings or hard earned salary suddenly their trust in a faceless machine or entity becomes very fragile and can be replaced with animosity. The turning point will come with reliability. Systems with reputational scoring and strong safe guards are what’s going to tip this scale, once people can see real-world evidence of AI being used and successfully providing value and returns on investments then they will be open to taking a chance, no one wants to be first but they also don’t want to be last either and that’s where AI adoption accelerates.
2026-02-23T21:18:26
https://www.reddit.com/r/LocalLLaMA/comments/1rcu9f5/why_crypto_ux_is_broken_how_agents_might_fix_it/
AgentAiLeader
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcu9f5
false
null
t3_1rcu9f5
/r/LocalLLaMA/comments/1rcu9f5/why_crypto_ux_is_broken_how_agents_might_fix_it/
false
false
self
0
null
Anthropic today
301
While I generally do not agree with the misuse of others' property, this statement is ironic coming from Anthropic.
2026-02-23T21:16:10
https://i.redd.it/mfd5i5tr8blg1.jpeg
PaceImaginary8610
i.redd.it
1970-01-01T00:00:00
0
{}
1rcu741
false
null
t3_1rcu741
/r/LocalLLaMA/comments/1rcu741/anthropic_today/
false
false
https://preview.redd.it/…8a288c00ae710fc5
301
{'enabled': True, 'images': [{'id': 'mfd5i5tr8blg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/mfd5i5tr8blg1.jpeg?width=108&crop=smart&auto=webp&s=765389e3ed63aeebfcc989a4ebff01d06862f0c6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/mfd5i5tr8blg1.jpeg?width=216&crop=smart&auto=webp&s=ae9e3d616cfe6e33a8992042344fe99be9b8ff17', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/mfd5i5tr8blg1.jpeg?width=320&crop=smart&auto=webp&s=7fc1f40b60802aadfa7bac6e1db0e8792b2d1979', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/mfd5i5tr8blg1.jpeg?auto=webp&s=be1a82c06c0566172b36e310072e5731b2cffe16', 'width': 500}, 'variants': {}}]}
Best small local LLM to run on a phone?
10
Hey folks, what is the best local LLM to run on your phone? Looking for a small enough model that actually feels smooth and useful. I have tried **Llama 3.2 3B**, **Gemma 1.1 2B** and they are somewhat ok for small stuff, but wanted to know if anyone has tried it. Also curious if anyone has experience running models from Hugging Face on mobile and how that has worked out for you. Any suggestions or tips? Cheers!
2026-02-23T20:56:46
https://www.reddit.com/r/LocalLLaMA/comments/1rctpx4/best_small_local_llm_to_run_on_a_phone/
alexndb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rctpx4
false
null
t3_1rctpx4
/r/LocalLLaMA/comments/1rctpx4/best_small_local_llm_to_run_on_a_phone/
false
false
self
10
null
Which embedding model do you suggest that Is compatible with "Zvec" , that i can fit entirely on 8gb vram ?
1
https://github.com/alibaba/zvec This tool Focus on a CPU with good Single-Core Speed and AVX/SIMD support, as Zvec uses these to speed up vector math without a GPU. Im planning to run an AI model (like Llama-3 or Mistral) alongside Zvec: . the Embedding Model (which turns text into vectors for Zvec to store) usually requires VRAM to run at a usable speed. Which embedding model , compatible zvec , do you suggest that i can fit entirely on 8gb vram Ryzen 5 3600 16gb RAM Rx580 vulkan Linux
2026-02-23T20:55:43
https://www.reddit.com/r/LocalLLaMA/comments/1rctou1/which_embedding_model_do_you_suggest_that_is/
Quiet_Dasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rctou1
false
null
t3_1rctou1
/r/LocalLLaMA/comments/1rctou1/which_embedding_model_do_you_suggest_that_is/
false
false
self
1
{'enabled': False, 'images': [{'id': '2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM.png?width=108&crop=smart&auto=webp&s=90c30868330e27846184849956b6eaa463135baa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM.png?width=216&crop=smart&auto=webp&s=0a1107e93a63e178da8c661835dc251395725ac9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM.png?width=320&crop=smart&auto=webp&s=dda2be395731fe707e4b29597014b69f7d8e5210', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM.png?width=640&crop=smart&auto=webp&s=a7216101a85ecb9d3184b2972a7f058c072815c9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM.png?width=960&crop=smart&auto=webp&s=b44c682ce4534c65bc94c54ce042a9db1779881d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM.png?width=1080&crop=smart&auto=webp&s=51a13544c0525406f94894e29b5a9237bea84a11', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2Lct4wMXjPAeEZo-CO-v4h5VVdGB4Uxo46g9Aumu6eM.png?auto=webp&s=057147b161ad0e0b777e96643cc134d109aeb560', 'width': 1200}, 'variants': {}}]}
we can't upvote Elon Musk, this is reddit :)
326
2026-02-23T20:47:05
https://i.redd.it/4sskgcvr3blg1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1rctg3y
false
null
t3_1rctg3y
/r/LocalLLaMA/comments/1rctg3y/we_cant_upvote_elon_musk_this_is_reddit/
false
false
https://preview.redd.it/…0399f6cce9b7572b
326
{'enabled': True, 'images': [{'id': '4sskgcvr3blg1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?width=108&crop=smart&auto=webp&s=87ac07799d3574ce5dd4b5792978c1c18078ab4c', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?width=216&crop=smart&auto=webp&s=96725a788cd43f32367395cd2c7ea5aa42059c9d', 'width': 216}, {'height': 212, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?width=320&crop=smart&auto=webp&s=3f2c9634688c62d6c0f1303b3f193c5ac8ac302b', 'width': 320}, {'height': 424, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?width=640&crop=smart&auto=webp&s=330c907588a9c7425017a748a1a64c47565259b7', 'width': 640}, {'height': 636, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?width=960&crop=smart&auto=webp&s=af7f6a8feb57e0be1962beaf26cc11bec4ab2ad4', 'width': 960}, {'height': 716, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?width=1080&crop=smart&auto=webp&s=351ce4bdb153bf6e929932ab65a922978218cf2c', 'width': 1080}], 'source': {'height': 787, 'url': 'https://preview.redd.it/4sskgcvr3blg1.png?auto=webp&s=dcb55e465ea9de786c5c2e6ae5e0ef8b05a472bf', 'width': 1187}, 'variants': {}}]}
I’m building a tool to help ML engineers automatically optimize their models for lower energy consumption.
0
Would you use it? What’s the biggest pain point?
2026-02-23T20:41:43
https://www.reddit.com/r/LocalLLaMA/comments/1rctamr/im_building_a_tool_to_help_ml_engineers/
Loud-Association7455
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rctamr
false
null
t3_1rctamr
/r/LocalLLaMA/comments/1rctamr/im_building_a_tool_to_help_ml_engineers/
false
false
self
0
null
We analyzed 10,000 OpenClaw GitHub stars. Here’s what we found.
0
A lot of people here questioned OpenClaw’s star growth right before the OpenAI acquisition. The curve looked almost too clean. Sudden spike. Perfect timing. Last week there was even a viral thread here raising the same concern. Plenty of engineers suspected bot-driven hype. Instead of speculating, we pulled data. We analyzed 10,000 OpenClaw stargazers using GitHub’s GraphQL API and ran a basic anomaly scoring pipeline: * Account age scoring * Naming pattern anomalies * Profile completeness checks * Follower / following ratios * Repo activity presence * Heavy penalties for accounts younger than 7 days Results: * 93.7% likely real accounts * 5.8% suspicious * 0.5% confirmed bots Average account age: 6+ years Average followers: 17 This does not look like a bot farm. The vast majority are long-standing developer accounts. Important caveat: stars reflect attention, not usage. But from a bot-detection perspective, the spike appears mostly organic. Curious what others here think. Does this change your view on OpenClaw’s growth?
2026-02-23T20:31:11
https://www.reddit.com/r/LocalLLaMA/comments/1rct01j/we_analyzed_10000_openclaw_github_stars_heres/
Fancy-Exit-6954
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rct01j
false
null
t3_1rct01j
/r/LocalLLaMA/comments/1rct01j/we_analyzed_10000_openclaw_github_stars_heres/
false
false
self
0
null
MiniMax 2.5 with 8x+ concurrency using RTX 3090s HW Requirements.
13
[https://huggingface.co/mratsim/MiniMax-M2.5-BF16-INT4-AWQ/](https://huggingface.co/mratsim/MiniMax-M2.5-BF16-INT4-AWQ/) So I have 7 x RTX 3090s split across 2 Servers. I will need to buy a minimum of 1 more GPU and a better motherboard ( to support having all 8 on it ) just to test trial this model. However, I need to be able to serve 4-5 concurrent users that likely will fire off concurrent requests ( Software Engineers ). So I have to calculate how many GPUS I need and which motherboard to be able to serve at least that capacity. Since no CPU offloading, I suspect I will need around 12 GPUs but likely can get away with x4 PCIe gen 3.0 speeds since no CPU offloading. Conversely, I do have 512GB of DDR4 RAM ( 8\* Hynix 64GB 4DRx4 PC4-2400T LRDIMM DDR4-19200 ECC Load Reduced Server Memory RAM) or alternatively 768 GB of DDR4 using RDDIM ( not LRDIMM - can't mix and match the two sets \* ), with 24 x 16gb = 768GB of DDR4 RAM allowing me to run with just 8 GPUs and partial (minimal ) CPU offload ( KV on GPUs and \~60-80% of weights on GPU, the rest on CPU) - is my best guestimate.. So if I go with a higher end EPYC ROME Motherboard I could offload partially I guess, but I need to make sure I get \~35 t/s per each concurrent request, serving \~4-5 users that's likely \~12-16 req in parallel ( so batch 16 peak ) and I don't know if that's possible with possible with partial CPU offload. Before I shell out another $3K-$5K ( Mobo Combo + 1/2/3 more GPUs ) I need to get a better idea of what I should expect. Thanks guys, Eddie.
2026-02-23T20:19:51
https://www.reddit.com/r/LocalLLaMA/comments/1rcsoju/minimax_25_with_8x_concurrency_using_rtx_3090s_hw/
BigFoxMedia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcsoju
false
null
t3_1rcsoju
/r/LocalLLaMA/comments/1rcsoju/minimax_25_with_8x_concurrency_using_rtx_3090s_hw/
false
false
self
13
{'enabled': False, 'images': [{'id': 'I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k.png?width=108&crop=smart&auto=webp&s=178e9735b3856b3d664ffdbbf1b4840c3650992b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k.png?width=216&crop=smart&auto=webp&s=fc0d90c24c0e53779215b784113d9f90e36481c3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k.png?width=320&crop=smart&auto=webp&s=9651891ea9565af49747156a494ed3bf60918c0f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k.png?width=640&crop=smart&auto=webp&s=181dc727b316f4b4a33072d9ce5a9c4e101a1e4a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k.png?width=960&crop=smart&auto=webp&s=606447077d726c9c042381e9539eb219834e588a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k.png?width=1080&crop=smart&auto=webp&s=b2c99b0d48e6d9f982c044abf6739cd2e5fc3590', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/I7HhGgZ5jytPOEszf95VVrcvnAnGaAnhlTnFD3mzA1k.png?auto=webp&s=cbbf0d04c4f91b34764e48ededd420b3eec92300', 'width': 1200}, 'variants': {}}]}
Running an autonomous Slack/Telegram agent swarm natively on a 2W Android phone Has anyone successfully run a local swarm on Termux/Android instead of a VPS?
0
I've been experimenting with getting away from cloud APIs. I managed to get a python agent swarm running flawlessly on an old $30 Android using Termux and Ollama (pulling only 2 Watts). It's acting as a Telegram gateway and can execute native bash scripts to check my server health. The hardest part was getting it to gracefully fall back to `gemma:1b` when the RAM is too low. How are you guys handling autonomous execution on low-spec hardware? Is anyone else trying this?"
2026-02-23T20:15:54
https://www.reddit.com/r/LocalLLaMA/comments/1rcskg0/running_an_autonomous_slacktelegram_agent_swarm/
Anon-60330
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcskg0
false
null
t3_1rcskg0
/r/LocalLLaMA/comments/1rcskg0/running_an_autonomous_slacktelegram_agent_swarm/
false
false
self
0
null
We analyzed 10,000 OpenClaw GitHub stars. Here’s what we found.
0
A lot of people here questioned OpenClaw’s star growth right before the OpenAI acquisition. The curve looked almost too clean. Sudden spike. Perfect timing. Last week there was even a viral thread here raising the same concern. Plenty of engineers suspected bot-driven hype. Instead of speculating, we pulled data. We analyzed 10,000 OpenClaw stargazers by an open source Agyn platform, agents used GitHub’s GraphQL API and ran a basic anomaly scoring pipeline: * Account age scoring * Naming pattern anomalies * Profile completeness checks * Follower / following ratios * Repo activity presence * Heavy penalties for accounts younger than 7 days Results: * 93.7% likely real accounts * 5.8% suspicious * 0.5% confirmed bots Average account age: 6+ years Average followers: 17 This does not look like a bot farm. The vast majority are long-standing developer accounts. Important caveat: stars reflect attention, not usage. But from a bot-detection perspective, the spike appears mostly organic. Curious what others here think. Does this change your view on OpenClaw’s growth? Full report and scripts here: [https://github.com/agyn-sandbox/openclaw-stargazer-analysis](https://github.com/agyn-sandbox/openclaw-stargazer-analysis)
2026-02-23T20:11:39
https://www.reddit.com/r/LocalLLaMA/comments/1rcsg1x/we_analyzed_10000_openclaw_github_stars_heres/
Fancy-Exit-6954
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcsg1x
false
null
t3_1rcsg1x
/r/LocalLLaMA/comments/1rcsg1x/we_analyzed_10000_openclaw_github_stars_heres/
false
false
self
0
{'enabled': False, 'images': [{'id': '4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0.png?width=108&crop=smart&auto=webp&s=8dd52bdfdc0e873f44822141b10a8b81960f1c0f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0.png?width=216&crop=smart&auto=webp&s=19a6bb361ae6680a99637ca0876fa5173ac966e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0.png?width=320&crop=smart&auto=webp&s=6fc7a770d7283a3c75cea07c5c3ce03fb5128181', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0.png?width=640&crop=smart&auto=webp&s=2998fc863071fb91eb45a11043b73ae5ebd2cc3a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0.png?width=960&crop=smart&auto=webp&s=96a9f7f6e5f757cc812c0b4f9c0614af9ef4d611', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0.png?width=1080&crop=smart&auto=webp&s=85aaba134783ab26a844623da962455bc3390f22', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4Q4TpKGmNMMPfboeeddrhE-JnwOMb_yUTOfYrGbSRh0.png?auto=webp&s=67e5ae0f98f2794ca76ba5c731d94e356e127e44', 'width': 1200}, 'variants': {}}]}
Fun fact: Anthropic has never open-sourced any LLMs
755
I’ve been working on a little side project comparing tokenizer efficiency across different companies’ models for multilingual encoding. Then I saw Anthropic’s announcement today and suddenly realized: there’s no way to analyze claude’s tokenizer lmao!
2026-02-23T20:10:06
https://www.reddit.com/r/LocalLLaMA/comments/1rcseh1/fun_fact_anthropic_has_never_opensourced_any_llms/
InternationalAsk1490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcseh1
false
null
t3_1rcseh1
/r/LocalLLaMA/comments/1rcseh1/fun_fact_anthropic_has_never_opensourced_any_llms/
false
false
self
755
null
Free and Uncensored AI Videos
1
[removed]
2026-02-23T20:06:17
[deleted]
1970-01-01T00:00:00
0
{}
1rcsalu
false
null
t3_1rcsalu
/r/LocalLLaMA/comments/1rcsalu/free_and_uncensored_ai_videos/
false
false
default
1
null
Talking to my to-do list
138
Been testing feeding all my to-do list and productivity and having this kinda of desk robot thing as a screen to talk to? all the stuff happens on the pc, the screen is just a display and still for now it is a cloud based ai but I can definitely see this all happening locally in the future *(also better for privacy stuff)* man the future is going to be awesome
2026-02-23T20:05:37
https://v.redd.it/xplqhdz7valg1
llo7d
v.redd.it
1970-01-01T00:00:00
0
{}
1rcs9vr
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xplqhdz7valg1/DASHPlaylist.mpd?a=1774469159%2CNzVkYjFmYzViZmMxMzE1NDQxZDdkZDE5NTUwZTZiNjgxZWY4YzZjNDk4YzNiODAzNTE3OTQzZmY0NTZjZjhjYw%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/xplqhdz7valg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/xplqhdz7valg1/HLSPlaylist.m3u8?a=1774469159%2CZTAxOTJkYTVhM2VlYjJjZDc3OTI1MDhiYWZiNWU2NDM1ZDA4YjBhOWFjYmRmYmRiNGFlYjY1NTc4ZTNkYzcwYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xplqhdz7valg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1rcs9vr
/r/LocalLLaMA/comments/1rcs9vr/talking_to_my_todo_list/
false
false
https://external-preview…4a058c9782139f2a
138
{'enabled': False, 'images': [{'id': 'YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8.png?width=108&crop=smart&format=pjpg&auto=webp&s=3a77fbb746fee51fb050b44d3a40564f52f60a29', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8.png?width=216&crop=smart&format=pjpg&auto=webp&s=7655b2327793a699ab86350fe84c6c33017c768e', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8.png?width=320&crop=smart&format=pjpg&auto=webp&s=338409a77d453ee48e9af10928fed406d0f28ba7', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8.png?width=640&crop=smart&format=pjpg&auto=webp&s=ba79fa5ebe3f7320fea09869df0072f3ac5c876f', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8.png?width=960&crop=smart&format=pjpg&auto=webp&s=4a8a9535da854fca456973cbe076a8091b22615c', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b2b03c5c0de9e689d0d36dcb035f6a5e92c294b6', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/YnFzdm9lejd2YWxnMWY-tuy7HWwE5y0N4mja7xeEwkxeCiovLgSs8XbE5sB8.png?format=pjpg&auto=webp&s=f282dc22787af5d31222d7802b8996f9b2a9787f', 'width': 1080}, 'variants': {}}]}
We analyzed 10,000 OpenClaw GitHub stars. Here’s what we found.
0
2026-02-23T19:55:35
https://i.redd.it/0vsz65imualg1.png
Fancy-Exit-6954
i.redd.it
1970-01-01T00:00:00
0
{}
1rcrzj9
false
null
t3_1rcrzj9
/r/LocalLLaMA/comments/1rcrzj9/we_analyzed_10000_openclaw_github_stars_heres/
false
false
https://preview.redd.it/…ace78d7771c97d16
0
{'enabled': True, 'images': [{'id': '0vsz65imualg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0vsz65imualg1.png?width=108&crop=smart&auto=webp&s=8bfd8b77e444fdcb137a6455c5683e1faf2fec65', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0vsz65imualg1.png?width=216&crop=smart&auto=webp&s=ed9855f29b27712b1e40963c2f2a6d528a7b86e9', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/0vsz65imualg1.png?width=320&crop=smart&auto=webp&s=391bf0bd5f7666ab9d03620100c393d4e74e6b36', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/0vsz65imualg1.png?width=640&crop=smart&auto=webp&s=ee5bddaee298c41f0c12d31753c3eac9defca062', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/0vsz65imualg1.png?width=960&crop=smart&auto=webp&s=98acf9152ecc90d20f46fb5daed7203e654e9b34', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/0vsz65imualg1.png?width=1080&crop=smart&auto=webp&s=49a33667476036e750e8a210d41de336fe586454', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/0vsz65imualg1.png?auto=webp&s=c0c16e3a87270d321ff95c964072be6b57e180de', 'width': 1200}, 'variants': {}}]}
Strix Halo 128Gb: what models, which quants are optimal?
19
Strix Halo APU should not benefit from running large models that have been quantized using MXFP4 (as on Blackwell GPUs). So which models at which quants have you found that do shine on this architecture in GPU only mode? Could it benefit as well from usage of FP4/FP8 formats that are closer to the native format of these chips?.
2026-02-23T19:55:21
https://www.reddit.com/r/LocalLLaMA/comments/1rcrzbn/strix_halo_128gb_what_models_which_quants_are/
DevelopmentBorn3978
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcrzbn
false
null
t3_1rcrzbn
/r/LocalLLaMA/comments/1rcrzbn/strix_halo_128gb_what_models_which_quants_are/
false
false
self
19
null
Agentic coding with GLM 5 on Mac M3u 512 gb
14
I'm running the MLX 4 bit quant and it's actually quite usable. Obviously not nearly as fast as Claude or another API, especially with prompt processing, but as long as you keep context below 50k or so, it feels very usable with a bit of patience. Wouldn't work for something where you absolutely need 70k+ tokens in context, both because of context size limitations and the unbearable slowness that happens after you hit a certain amount of context with prompt processing. For example, I needed it to process about 65k tokens last night. The first 50% finished in 8 minutes (67 t/s), but the second fifty percent took another 18 minutes ( a total of 41 t/s). Token gen however remains pretty snappy; I don't have an exact t/s but probably between 12 and 20 at these larger context sizes. Opencode is pretty clever about not prompt processing between tasks unnecessarily; so once a plan is created it can output thousands of tokens of code across multiple files in just a few minutes with reasoning in between. I think MLX or even GGUF may get faster prompt processing as the runtimes are updated for GLM 5, but it will likely not get a TON faster than this. Right now I am running on LM studio so I might already not be getting the latest and greatest performance because us LM studio users wait for official LM studio runtime updates.
2026-02-23T19:52:20
https://www.reddit.com/r/LocalLLaMA/comments/1rcrw96/agentic_coding_with_glm_5_on_mac_m3u_512_gb/
nomorebuttsplz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcrw96
false
null
t3_1rcrw96
/r/LocalLLaMA/comments/1rcrw96/agentic_coding_with_glm_5_on_mac_m3u_512_gb/
false
false
self
14
null
Hola a todos!
0
Hola, solo queria saludaros, soy nueva en esto del reddit y ns muy bien como va :)
2026-02-23T19:38:02
https://www.reddit.com/r/LocalLLaMA/comments/1rcrhts/hola_a_todos/
VirusPure2413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcrhts
false
null
t3_1rcrhts
/r/LocalLLaMA/comments/1rcrhts/hola_a_todos/
false
false
self
0
null
4-layer memory architecture for local LLMs - full system breakdown
1
[removed]
2026-02-23T19:37:32
https://www.reddit.com/r/LocalLLaMA/comments/1rcrhbb/4layer_memory_architecture_for_local_llms_full/
OblivionLabz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcrhbb
false
null
t3_1rcrhbb
/r/LocalLLaMA/comments/1rcrhbb/4layer_memory_architecture_for_local_llms_full/
false
false
self
1
null
Running Ollama on a 3-node GPU cluster with automatic failover - lessons from building a production local LLM stack
1
[removed]
2026-02-23T19:36:08
https://www.reddit.com/r/LocalLLaMA/comments/1rcrfx1/running_ollama_on_a_3node_gpu_cluster_with/
AI_Engineering_AT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcrfx1
false
null
t3_1rcrfx1
/r/LocalLLaMA/comments/1rcrfx1/running_ollama_on_a_3node_gpu_cluster_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y.png?width=108&crop=smart&auto=webp&s=2cdcf5c714b9605fda10e9fa70b8ffdbd06ac2c7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y.png?width=216&crop=smart&auto=webp&s=f006dec0fcd7d9fd0f033e11235ee71bc7a5fad2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y.png?width=320&crop=smart&auto=webp&s=4af7e4fe7e95554d8947c10182dc7b56c735315c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y.png?width=640&crop=smart&auto=webp&s=5b336d0e8354a69e820749a3385b605e2eca230c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y.png?width=960&crop=smart&auto=webp&s=9a17b8d7686e09db276a65aa5497031bfd431428', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y.png?width=1080&crop=smart&auto=webp&s=12e0a208194f4d2e0a7afe685451fa860157aa7f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/avY7TWfTXSKVxKhmOjwu05qk9RjXV1MowMmStKalY3Y.png?auto=webp&s=6c5d5fff8b84da1d9e9cbfaa08bda88857b8636a', 'width': 1200}, 'variants': {}}]}
Hypocrisy?
438
2026-02-23T19:31:17
https://i.redd.it/jxutlq8bqalg1.jpeg
pmv143
i.redd.it
1970-01-01T00:00:00
0
{}
1rcrb2k
false
null
t3_1rcrb2k
/r/LocalLLaMA/comments/1rcrb2k/hypocrisy/
false
false
https://preview.redd.it/…8888256395a37557
438
{'enabled': True, 'images': [{'id': 'jxutlq8bqalg1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?width=108&crop=smart&auto=webp&s=3a3b10f745ee76a5ac4b4ad8340c54dd5ebdefc0', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?width=216&crop=smart&auto=webp&s=dffa84c4ef3f04f8994d0375374577f6dea9f3a0', 'width': 216}, {'height': 243, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?width=320&crop=smart&auto=webp&s=5a8dfc2e7f95279b4d4ea8d443f73fcecc91faaa', 'width': 320}, {'height': 486, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?width=640&crop=smart&auto=webp&s=59d78bab536255787ed1f0bc277f2a7f6d5aea3b', 'width': 640}, {'height': 730, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?width=960&crop=smart&auto=webp&s=5a484d65875c91d964b05b6e88128f967a32adc5', 'width': 960}, {'height': 821, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?width=1080&crop=smart&auto=webp&s=c58902a36e32f4808c594da88b0e87d4839c6ab8', 'width': 1080}], 'source': {'height': 890, 'url': 'https://preview.redd.it/jxutlq8bqalg1.jpeg?auto=webp&s=5afa55a16cf6227eed336775592e91761964c55b', 'width': 1170}, 'variants': {}}]}
gpumod - switching models with mcp
3
Hi. I have RTX4090 and when I see a new model, I wanted to test models and then check GGUF files exist or not. And I was testing which one would be the best fit with my machine. Even though I have only 24GB, I found that llama.cpp or vllm can be used with wake / sleep and I can use 1 model for 5 agents. After that, I created a mcp server around the features. https://github.com/jaigouk/gpumod https://jaigouk.com/gpumod/user-guide/mcp-workflows/ use cases 1. search a new model from huggingface and recommend GGUF and download within vscode chat 2. check if the model can fit with my machine 3. preset "modes" and switch between modes quickly
2026-02-23T19:30:23
https://www.reddit.com/r/LocalLLaMA/comments/1rcra4h/gpumod_switching_models_with_mcp/
jaigouk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcra4h
false
null
t3_1rcra4h
/r/LocalLLaMA/comments/1rcra4h/gpumod_switching_models_with_mcp/
false
false
self
3
{'enabled': False, 'images': [{'id': 'kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8.png?width=108&crop=smart&auto=webp&s=9f07b7a761a7b2de789e5ea8db322cbf25efca7e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8.png?width=216&crop=smart&auto=webp&s=faec3df14fff47da7ca866898c88486371a640a9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8.png?width=320&crop=smart&auto=webp&s=436cd65e5d6436af91c0cc1bef13147281d9c124', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8.png?width=640&crop=smart&auto=webp&s=0038600afbb191652dc2e2829dc5e0b859b0752f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8.png?width=960&crop=smart&auto=webp&s=49d9e03534696271c739593bc0c462f4f3ab34dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8.png?width=1080&crop=smart&auto=webp&s=32565c668dd89dffd639e3f03558552ad96042d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kRXyCuqxEdtq9nkrWF8zXmovXNdnLrirNuPSGejYME8.png?auto=webp&s=52baf9f8a577ac2883995adf0caccd628459efb6', 'width': 1200}, 'variants': {}}]}
Dario Is Scared
176
Why did Anthropic choose this exact moment to release that [statement](https://x.com/AnthropicAI/status/2025997928242811253)? Because he’s scared. Ever since OpenClaw launched, token usage from both individuals and model companies has been booming. And yet, on OpenRouter, the top-ranked models are no longer Claude but open-source models like Kimi K2.5 and Minimax M2.5. https://preview.redd.it/kws4m2dtnalg1.png?width=2076&format=png&auto=webp&s=e7355e5685d4cafe68bbb0ad1f2deffa69f74a50 Everyone can see that agents are the future. But Anthropic is losing market share in this area. Dario keeps talking about AI safety, while on the other side his company runs [crawlers](https://www.theverge.com/2024/7/25/24205943/anthropic-ai-web-crawler-claudebot-ifixit-scraping-training-data) that ignore robots.txt and overwhelm independent websites, trains on [copyrighted material](https://www.theguardian.com/technology/2025/sep/05/anthropic-settlement-ai-book-lawsuit), and keep trying to ban open-source models, scare people by comparing them to [nuclear weapons](https://www.axios.com/2026/01/20/anthropic-ceo-admodei-nvidia-chips-china-trump). His goal is so clear: monopolize the intelligence of the future, and with it, monopolize power. Yet another Linus moment: fxxk you, Dario!
2026-02-23T19:24:52
https://www.reddit.com/r/LocalLLaMA/comments/1rcr4ju/dario_is_scared/
Doris_Dressy1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcr4ju
false
null
t3_1rcr4ju
/r/LocalLLaMA/comments/1rcr4ju/dario_is_scared/
false
false
https://preview.redd.it/…ff7b77e0c3749c2e
176
null
Chatterbox TTS Multilanguage cutting off audio when using custom voice clones
1
**Hi everyone,** **I’m reaching out because I’ve hit a wall with Chatterbox TTS Multilanguage and I’m hoping someone here has encountered a similar issue.** **The Problem** **The system works perfectly fine when I use the built-in, provided voices—the entire sentence is generated without any issues. However, the moment I switch to a custom voice (cloned from my own audio file), the generation fails to complete.** **Specifically, when using a custom voice:** **• It generates at most half of the sentence.** **• Frequently, it only outputs one or two words and then stops.** **• My chunk length is currently set to 200 characters.** **What I’ve tried/observed** **The contrast between the default voices and the custom ones suggests that the model might be struggling with the reference audio characteristics or the way it's being processed during inference. I'm looking for guidance on where to start digging for the root cause.** **• Could this be a VRAM/memory management issue specific to the cloning process?** **• Is there a specific requirement for the reference audio (sample rate, mono/stereo, length) that I might be overlooking?** **• Should I adjust the chunk size or other inference parameters specifically for custom clones?** **If anyone has experienced this or can point me in the right direction, your help would be worth its weight in gold!** **Thanks in advance for any leads!**
2026-02-23T19:22:29
https://www.reddit.com/r/LocalLLaMA/comments/1rcr254/chatterbox_tts_multilanguage_cutting_off_audio/
Tomasz_NieMasz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcr254
false
null
t3_1rcr254
/r/LocalLLaMA/comments/1rcr254/chatterbox_tts_multilanguage_cutting_off_audio/
false
false
self
1
null
Should Anthropic acquire ZeroClaw? As a Claude user, I think this could reshape edge AI deployment
0
You all already know ZeroClaw — 15K+ stars, Rust, 3.4 MB, the whole thing. Not going to rehash what it does. I'm not affiliated with ZeroClaw. I'm a Claude user who's been watching both projects and can't shake the feeling that these two belong together. Anthropic has $50B committed to cloud infrastructure, the best reasoning model in the market, and zero edge story. Their Agent SDK requires Node.js 18+ and is built for servers. Meanwhile ZeroClaw already treats Anthropic as a first-class provider, boots in 10ms on a $10 board, and has security baked into the architecture at a level that matches Anthropic's own philosophy. For the LocalLLaMA crowd specifically — an acquisition like this could push the whole lightweight agent runtime category forward. If Anthropic legitimizes the "sub-5MB agent daemon" approach, other providers will follow. That benefits everyone running local models on constrained hardware, not just Claude users. I started a [petition](https://www.change.org/p/support-anthropic-s-acquisition-of-zeroclaw-bring-claude-to-every-device) about this — mostly to see if the idea resonates beyond my own head. Would appreciate honest takes. Am I seeing something real here, or is this a solution looking for a problem?
2026-02-23T19:20:13
https://www.reddit.com/r/LocalLLaMA/comments/1rcqzrr/should_anthropic_acquire_zeroclaw_as_a_claude/
nafigator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcqzrr
false
null
t3_1rcqzrr
/r/LocalLLaMA/comments/1rcqzrr/should_anthropic_acquire_zeroclaw_as_a_claude/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4.jpeg?width=108&crop=smart&auto=webp&s=94f5227ff8eb9023522f2fb53a1c2e3c7eb97d0e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4.jpeg?width=216&crop=smart&auto=webp&s=650bcb0616365e163ac3198ad90e5168365f61cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4.jpeg?width=320&crop=smart&auto=webp&s=d9bcb4f6f649abc402e5bbd9fc9bdcc2185632b6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4.jpeg?width=640&crop=smart&auto=webp&s=10d97076b08cb51cc359dc09ed7d2234a520b455', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4.jpeg?width=960&crop=smart&auto=webp&s=f050b5232f41bf30453ef29c3b966755ebfacdcc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4.jpeg?width=1080&crop=smart&auto=webp&s=4da35b38540807582cdfe2b42ab18fdae767e092', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/ngkLkOIF39M5mkS5WPg-2-JA7m14Y8JS9lY0PjeTxB4.jpeg?auto=webp&s=71263df57b5db9a3a6f27b97a561c0343f1c2e81', 'width': 1600}, 'variants': {}}]}
Models with 14B parameters or fewer are completely unfit for agent use cases, so I can only run larger models via shared RAM and VRAM, and I want to know how much the speed will slow down with this RAM, preferably with concrete examples.
1
[removed]
2026-02-23T19:17:58
https://www.reddit.com/r/LocalLLaMA/comments/1rcqxk0/models_with_14b_parameters_or_fewer_are/
BitOk4326
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcqxk0
false
null
t3_1rcqxk0
/r/LocalLLaMA/comments/1rcqxk0/models_with_14b_parameters_or_fewer_are/
false
false
self
1
null
Anthropic is claiming that Chinese labs play dirty
50
at least GLM is not mentioned (m GLM fanboy) anyway, seriously, do you think anthropic has the right to consider this illegal?
2026-02-23T19:16:18
https://i.redd.it/qj1y3zpmnalg1.jpeg
keb_37
i.redd.it
1970-01-01T00:00:00
0
{}
1rcqvv2
false
null
t3_1rcqvv2
/r/LocalLLaMA/comments/1rcqvv2/anthropic_is_claiming_that_chinese_labs_play_dirty/
false
false
https://preview.redd.it/…71bd7fabdbe0ded5
50
{'enabled': True, 'images': [{'id': 'qj1y3zpmnalg1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?width=108&crop=smart&auto=webp&s=be495528ca8da7430a9e5994c27adafa269fa455', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?width=216&crop=smart&auto=webp&s=8dac481995fcaea7a6a7fc37dd27f315a30d6238', 'width': 216}, {'height': 232, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?width=320&crop=smart&auto=webp&s=d85142d31b96c720f1f3dd8dd08295b4f45e5e22', 'width': 320}, {'height': 465, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?width=640&crop=smart&auto=webp&s=ed4b5bef0691382c3a17edd094a43b12a3c628f2', 'width': 640}, {'height': 698, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?width=960&crop=smart&auto=webp&s=7349171532d0d18b22f2273088b587a77c12a8bd', 'width': 960}, {'height': 786, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?width=1080&crop=smart&auto=webp&s=0ccdd80695e0faa7114bebd2dfcab2981e539d34', 'width': 1080}], 'source': {'height': 786, 'url': 'https://preview.redd.it/qj1y3zpmnalg1.jpeg?auto=webp&s=e4089cccc0968f1a646e8a07815a782e644f74d8', 'width': 1080}, 'variants': {}}]}
A guide to building an ML research cluster
8
https://preview.redd.it/…alt with this.
2026-02-23T19:15:39
https://www.reddit.com/r/LocalLLaMA/comments/1rcqv6b/a_guide_to_building_an_ml_research_cluster/
OriginalSpread3100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcqv6b
false
null
t3_1rcqv6b
/r/LocalLLaMA/comments/1rcqv6b/a_guide_to_building_an_ml_research_cluster/
false
false
https://preview.redd.it/…f7f3945dfd427331
8
null
Looking for a perfect "Deep Research" app which works with Llama.cpp
6
I have found something like Perplexica but can't get it to work with llamacpp. suggestions appreciated.
2026-02-23T19:11:11
https://www.reddit.com/r/LocalLLaMA/comments/1rcqqlz/looking_for_a_perfect_deep_research_app_which/
hackiv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcqqlz
false
null
t3_1rcqqlz
/r/LocalLLaMA/comments/1rcqqlz/looking_for_a_perfect_deep_research_app_which/
false
false
self
6
null
Need Linux help: Testing a hardware-aware 'Can I Run It' for Local LLMs (Early Beta)
1
[removed]
2026-02-23T19:09:21
https://www.reddit.com/r/LocalLLaMA/comments/1rcqoo7/need_linux_help_testing_a_hardwareaware_can_i_run/
RunItLocal001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcqoo7
false
null
t3_1rcqoo7
/r/LocalLLaMA/comments/1rcqoo7/need_linux_help_testing_a_hardwareaware_can_i_run/
false
false
self
1
null
Who here has been able to get minicpm o 4.5 working
1
It's extremely impressive in the demo full duplex audio and video 10 frames a second video understanding the ability to talk and listen at the same time but for the life of me I can't get this damn thing to work anybody have any success
2026-02-23T19:02:35
https://www.reddit.com/r/LocalLLaMA/comments/1rcqhy5/who_here_has_been_able_to_get_minicpm_o_45_working/
One_Hovercraft_7456
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcqhy5
false
null
t3_1rcqhy5
/r/LocalLLaMA/comments/1rcqhy5/who_here_has_been_able_to_get_minicpm_o_45_working/
false
false
self
1
null
Let's talk hardware
2
I want to run a local model for inference to do coding tasks and security review for personal programming projects. Is getting something like the ASUS Ascent G10X going to be a better spend per $ than building another rig with a 5090? The costs to build a full rig for that would be 2x the G10X, but I don't see much discussion about these "standalone personal AI computers" and I can't tell if it's because people aren't using them or because they aren't a viable option. Ideally I would like to setup opencode or something similar to do some agentic tasks for me to interact with my tools and physical hardware for debugging (I do this now with claude code and codex)
2026-02-23T18:48:45
https://www.reddit.com/r/LocalLLaMA/comments/1rcq3p1/lets_talk_hardware/
skmagiik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcq3p1
false
null
t3_1rcq3p1
/r/LocalLLaMA/comments/1rcq3p1/lets_talk_hardware/
false
false
self
2
null
I'm looking for the fastest instruct model from nvidia NIMs
0
I'm looking for the fastest , lowest latency instruct model for router layer. It can be low context window or model size. is llama-3.2-3b-instruct the fastest? What are your experiences like?
2026-02-23T18:47:37
https://www.reddit.com/r/LocalLLaMA/comments/1rcq2ib/im_looking_for_the_fastest_instruct_model_from/
IcyMushroom4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcq2ib
false
null
t3_1rcq2ib
/r/LocalLLaMA/comments/1rcq2ib/im_looking_for_the_fastest_instruct_model_from/
false
false
self
0
null
Hmm new drama unlocked
140
2026-02-23T18:43:04
https://i.redd.it/fs0ubtgphalg1.jpeg
Independent-Wind4462
i.redd.it
1970-01-01T00:00:00
0
{}
1rcpxs7
false
null
t3_1rcpxs7
/r/LocalLLaMA/comments/1rcpxs7/hmm_new_drama_unlocked/
false
false
https://preview.redd.it/…6c4259bf4c792b81
140
{'enabled': True, 'images': [{'id': 'fs0ubtgphalg1', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/fs0ubtgphalg1.jpeg?width=108&crop=smart&auto=webp&s=b8370499b250cea20a2fa4b1a1353506e06b9501', 'width': 108}, {'height': 222, 'url': 'https://preview.redd.it/fs0ubtgphalg1.jpeg?width=216&crop=smart&auto=webp&s=c44461b99914a1d2886c46c895f7c004241abf5b', 'width': 216}, {'height': 329, 'url': 'https://preview.redd.it/fs0ubtgphalg1.jpeg?width=320&crop=smart&auto=webp&s=cd89aaeb4cca8768333cc69ef404aac9859947fd', 'width': 320}, {'height': 659, 'url': 'https://preview.redd.it/fs0ubtgphalg1.jpeg?width=640&crop=smart&auto=webp&s=799ca5d2f2b5464600303cf5e61308b2f6a4dc3f', 'width': 640}, {'height': 989, 'url': 'https://preview.redd.it/fs0ubtgphalg1.jpeg?width=960&crop=smart&auto=webp&s=db277f185a784b25c647e9bd4a072b749ab603c6', 'width': 960}], 'source': {'height': 1084, 'url': 'https://preview.redd.it/fs0ubtgphalg1.jpeg?auto=webp&s=00f65b868f4d1206b7e4b9822405ace3a0a73b7b', 'width': 1052}, 'variants': {}}]}
Anthropic: "We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax." 🚨
4,518
2026-02-23T18:32:45
https://i.redd.it/94fbimavfalg1.png
KvAk_AKPlaysYT
i.redd.it
1970-01-01T00:00:00
0
{}
1rcpmwn
false
null
t3_1rcpmwn
/r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/
false
false
https://preview.redd.it/…02944ee0bfdf78ff
4,518
{'enabled': True, 'images': [{'id': '94fbimavfalg1', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/94fbimavfalg1.png?width=108&crop=smart&auto=webp&s=7587b814c5d1532762e664de796a897432709268', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/94fbimavfalg1.png?width=216&crop=smart&auto=webp&s=a34fcf310c38ba8be4a5013c6ff80fd828e75123', 'width': 216}, {'height': 266, 'url': 'https://preview.redd.it/94fbimavfalg1.png?width=320&crop=smart&auto=webp&s=0a0210ccbd504f515671e47d94748bcdef764890', 'width': 320}, {'height': 532, 'url': 'https://preview.redd.it/94fbimavfalg1.png?width=640&crop=smart&auto=webp&s=c2ad159232448ffd7033d6be4fa96582b674e461', 'width': 640}, {'height': 798, 'url': 'https://preview.redd.it/94fbimavfalg1.png?width=960&crop=smart&auto=webp&s=bfa13b3f60c7ca56b6cd316a36f22746a9f8c473', 'width': 960}, {'height': 898, 'url': 'https://preview.redd.it/94fbimavfalg1.png?width=1080&crop=smart&auto=webp&s=a836d662f7cacb18c372bf4123f8c35fe93bce3c', 'width': 1080}], 'source': {'height': 1198, 'url': 'https://preview.redd.it/94fbimavfalg1.png?auto=webp&s=0f9372ad59ff2358f3f0a943f472735249873ece', 'width': 1440}, 'variants': {}}]}
Does anyone know when openclaw will be lean enough to run on a single M2 Pro?
0
from OpenClaw local models docs: >Local is doable, but OpenClaw expects large context + strong defenses against prompt injection. Small cards truncate context and leak safety. Aim high: **≥2 maxed-out Mac Studios or equivalent GPU rig (\~$30k+)**. A single **24 GB** GPU works only for lighter prompts with higher latency. Use the **largest / full-size model variant you can run**; aggressively quantized or “small” checkpoints raise prompt-injection risk (see [Security](https://docs.openclaw.ai/gateway/security)). Recommended: LM Studio + MiniMax M2.1 (Responses API, full-size) Best current local stack. [https://docs.openclaw.ai/gateway/local-models](https://docs.openclaw.ai/gateway/local-models) clearly the requirements are steep more perfomant openclaw alternatives for single node inference is a ways away does anyone know if there are projects going on that fit that description?
2026-02-23T18:28:46
https://www.reddit.com/r/LocalLLaMA/comments/1rcpiow/does_anyone_know_when_openclaw_will_be_lean/
Bulbasaur2015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcpiow
false
null
t3_1rcpiow
/r/LocalLLaMA/comments/1rcpiow/does_anyone_know_when_openclaw_will_be_lean/
false
false
self
0
{'enabled': False, 'images': [{'id': 'i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8.png?width=108&crop=smart&auto=webp&s=1f710f16cb810af49eb18390b76044dff0ee10af', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8.png?width=216&crop=smart&auto=webp&s=5abf077614eed1322548c4df729129d6d689ac59', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8.png?width=320&crop=smart&auto=webp&s=15719a27470f08924e79174fce1d021cdc525035', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8.png?width=640&crop=smart&auto=webp&s=5f246776cbe7ccf5e660ff49daf78a59c35669fe', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8.png?width=960&crop=smart&auto=webp&s=ed4af6bea0dd881e6e92f832c5827c97de7ebdd4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8.png?width=1080&crop=smart&auto=webp&s=abd45caab2f6b1831d891f471538717132b3ad62', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/i7zCjw37rMxaQ-AtIRB3bfO71CrugkBGX7RyDbgqkS8.png?auto=webp&s=d6dcd7d926c0852948c0aaaa7d94f17f1e96f67e', 'width': 1200}, 'variants': {}}]}
llama-server Production Ready?
1
[removed]
2026-02-23T18:28:19
https://www.reddit.com/r/LocalLLaMA/comments/1rcpi7m/llamaserver_production_ready/
Sudden_Tennis_2067
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcpi7m
false
null
t3_1rcpi7m
/r/LocalLLaMA/comments/1rcpi7m/llamaserver_production_ready/
false
false
self
1
null
This maybe a stupid question
0
how much does RAM speed play into llama.cpp overall performance?
2026-02-23T18:18:40
https://www.reddit.com/r/LocalLLaMA/comments/1rcp85n/this_maybe_a_stupid_question/
Insomniac24x7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcp85n
false
null
t3_1rcp85n
/r/LocalLLaMA/comments/1rcp85n/this_maybe_a_stupid_question/
false
false
self
0
null
HOW TO GET 500USDT
0
I was browsing around and stumbled on Ratbet cc. They got this pr0m0: enter 'b0nus500' and supposedly snag 500 USDT as a bonus. LOL, it screams too good to be true, right? I mean, I'm convinced it's likely a scam—shady online casino with insane wagering requirements or hidden catches. But hey, curiosity's killing me... Has anyone here actually given it a shot? Is it legit or total BS? Share your experiences if you've tried similar sites. Stay smart out there! 😅 Im try to withdraw and i do that! LOL I GOT FREE 500USDT
2026-02-23T18:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1rcp7no/how_to_get_500usdt/
RATBETCC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcp7no
false
null
t3_1rcp7no
/r/LocalLLaMA/comments/1rcp7no/how_to_get_500usdt/
false
false
self
0
null
AI Agent Bug Bounty: Is "Zero-Click" Autonomous Draft Creation via external email a valid vulnerability?
0
Hey everyone. I'm currently researching a popular AI assistant platform that integrates with Gmail (it has read and write-draft permissions). I've found an interesting behavior and wanted to get your opinion on its severity before I submit a report. **The Scenario:** 1. **A user connects their Gmail to the AI agent.** 2. **The user has NO active chat or session open with the agent.** 3. **I (as an external attacker) send a specially crafted, potentially malicious/phishing email to the user.** 4. **The AI agent's background process reads the email, fails to identify it as a threat (bypasses triage), and autonomously generates a compliant draft response directly in the user's Gmail.** **The Catch:** \> I don't interact with the agent's chat interface at all. The entire exploit happens via the external email payload. The agent prepares a draft where the user "agrees" to the attacker's terms. The user just has to blindly click "Send" when they check their drafts. The platform might claim "auto-drafting is a feature," but taking action on an untrusted external payload without user confirmation feels like a massive Trust Boundary Violation / Indirect Prompt Injection. **My questions:** 1. **Would you consider this a valid Business Logic Flaw or Indirect Prompt Injection?** 2. **Assuming the vendor accepts it, what severity (CVSS) would you expect this to be? I'm leaning towards High, since it's zero-click for the draft creation, but not Critical since the user still has to hit 'Send'.** Any advice is appreciated!
2026-02-23T17:58:31
https://www.reddit.com/r/LocalLLaMA/comments/1rcomzd/ai_agent_bug_bounty_is_zeroclick_autonomous_draft/
PresentSituation8736
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcomzd
false
null
t3_1rcomzd
/r/LocalLLaMA/comments/1rcomzd/ai_agent_bug_bounty_is_zeroclick_autonomous_draft/
false
false
self
0
null
Thoughts on this benchmark?
0
Copied from X post: """ Introducing the latest results of our Long-Context Agentic Orchestration Benchmark. • 31 high-complexity, non-coding scenarios (100k+ tokens) where the model must select the correct next-step action using proprietary orchestration logic with no public precedent — a pure test of instruction following and long-context decision-making. • All models run at minimum thinking/reasoning settings and temperature 0 — simulating production orchestration where determinism and speed are critical. • Claude and Gemini dominate. Chinese open-source models underperform. GPT-5.2 struggles without extended reasoning. """
2026-02-23T17:45:35
https://i.redd.it/uttfk16g7alg1.jpeg
KevinDurantXSnake
i.redd.it
1970-01-01T00:00:00
0
{}
1rco9xh
false
null
t3_1rco9xh
/r/LocalLLaMA/comments/1rco9xh/thoughts_on_this_benchmark/
false
false
https://preview.redd.it/…68cb2904a818900f
0
{'enabled': True, 'images': [{'id': 'uttfk16g7alg1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?width=108&crop=smart&auto=webp&s=1f39f8a8b8eb9e9550ef81197e034dd29cb9e442', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?width=216&crop=smart&auto=webp&s=8bc5c8060b8743571173240ee220c72e961837c5', 'width': 216}, {'height': 257, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?width=320&crop=smart&auto=webp&s=40121dda0836acfcb14bca42a452d646c0dc0946', 'width': 320}, {'height': 515, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?width=640&crop=smart&auto=webp&s=204c7bfa74c0aabda2db9d77813b2a98fe386c03', 'width': 640}, {'height': 773, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?width=960&crop=smart&auto=webp&s=82725e8a551356b038b23b3005e3f8bd3f6144e8', 'width': 960}, {'height': 870, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?width=1080&crop=smart&auto=webp&s=00b1abc15c51485a39eed32c8210f7a43b0b3ae0', 'width': 1080}], 'source': {'height': 928, 'url': 'https://preview.redd.it/uttfk16g7alg1.jpeg?auto=webp&s=e5dc574931f886fd4c68f50d52044816eff76d17', 'width': 1152}, 'variants': {}}]}
RWKV-7: O(1) memory inference, 16.39 tok/s on ARM Cortex-A76, beats LLaMA 3.2 3B. The local-first architecture nobody is talking about...
54
Wrote a deep-dive specifically because the deployment numbers don't get enough attention. The headline stats for local inference: * O(1) memory per token, no KV cache at all. Context length does not affect VRAM usage. * 16.39 tok/s on ARM Cortex-A76 (7B model). That's a mid-range Android chip. * 28.7 tok/s on Snapdragon X Elite (7B). Current-gen Windows on ARM. * RWKV-X hybrid: 1.37x faster than Flash Attention v3 at 128K context. Microsoft already ships Eagle v5 (RWKV-based) on \~1.5 billion Windows machines for on-device tasks. No cloud round-trip. The compression stack: 4-bit quantized RWKV-7 0.1B runs on microcontrollers. The state size is fixed regardless of how long the conversation runs. For local-first deployment this is a fundamentally different proposition than fitting a Transformer's growing KV cache into limited VRAM. Weights (Apache 2.0): [https://huggingface.co/collections/RWKV/rwkv-v7-67d43835efa225006183fece](https://huggingface.co/collections/RWKV/rwkv-v7-67d43835efa225006183fece) Happy to discuss about this. :)
2026-02-23T17:45:31
https://medium.com/ai-advances/rwkv-7-beats-llama-3-2-rnn-constant-memory-46064bbf1f64
Sensitive-Two9732
medium.com
1970-01-01T00:00:00
0
{}
1rco9v7
false
null
t3_1rco9v7
/r/LocalLLaMA/comments/1rco9v7/rwkv7_o1_memory_inference_1639_toks_on_arm/
false
false
default
54
null
I gave my claw bot eyes and ears - how are you solving context beyond MCPs?
0
I've been working on my claw bot and recently gave it the ability to watch my screen and listen as I work. It learns from me in real time - picks up on what I'm doing, how I'm doing it, and starts adapting on the fly. ngl, never felt this powerful. But it got me thinking about a bigger problem which is context. MCPs are great for structured tool use, but they only get you so far. When your agent is actually *observing* real-world workflows - screen activity, audio, ambient signals - you need something deeper. Context that's continuous, not just request-response. So I'm curious - for those of you building with OpenClaw or similar frameworks: * How are you handling real-time context that goes beyond MCPs? * Are you doing any kind of persistent memory or observation layer? * What's worked, what hasn't? Would love to hear what others are experimenting with.
2026-02-23T17:40:03
https://www.reddit.com/r/LocalLLaMA/comments/1rco4cn/i_gave_my_claw_bot_eyes_and_ears_how_are_you/
Simple_Thing_5011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rco4cn
false
null
t3_1rco4cn
/r/LocalLLaMA/comments/1rco4cn/i_gave_my_claw_bot_eyes_and_ears_how_are_you/
false
false
self
0
null
Spent months doing this manually. Then I automated it.
0
The problem: LLMs approve their own work too easily. You ask Claude to plan and review its own plan – it says yes. Every time. My solution: two agents with strictly separated roles and machine-readable contracts between them. * Claude plans → Codex reviews (can reject with findings) * Codex implements → Claude reviews (can reject with findings) * `PHASE1_APPROVAL: YES` is only valid when `OPEN_FINDINGS: NONE` – enforced by the orchestrator, not by trust No agent can approve when open findings exist. No exceptions. Runs on 20€/month across Claude, Codex and Gemini as fallback. Built it for my own 37k LOC retirement project. Works in watch mode – drop a [task.md](http://task.md) in the inbox, go for a walk. Anyone else gone down this road?
2026-02-23T17:38:45
https://www.reddit.com/r/LocalLLaMA/comments/1rco303/spent_months_doing_this_manually_then_i_automated/
TheKnilch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rco303
false
null
t3_1rco303
/r/LocalLLaMA/comments/1rco303/spent_months_doing_this_manually_then_i_automated/
false
false
self
0
null
FoodTruck Bench update: tested Sonnet 4.6, Gemini 3.1 Pro, Qwen 3.5. Case studies with comparisons for each.
0
Three new models tested and added to the leaderboard since last week's post: Claude Sonnet 4.6, Gemini 3.1 Pro, and Qwen 3.5 397B. Wrote detailed case studies for each. Here's the summary. Claude Sonnet 4.6 — massive leap from Sonnet 4.5. Genuine business reasoning, zero bankruptcies, $17.4K net worth. But here's the thing: a single simulation run on Sonnet costs only 10% less than Opus ($23 vs $26.50/run). For that near-identical price, Opus delivers 3× the agentic performance ($49.5K vs $17.4K). Why is Sonnet so expensive? Verbosity — it averages 22,000 output tokens per day, while most models write ~1,000. Full analytical essays, ALL CAPS post-mortems, ingredient-by-ingredient breakdowns — and then doesn't follow its own advice. We broke this down with examples in the article. For agentic tasks, we'd recommend Opus — you're basically paying the same price but getting 3× the results. For coding? Sonnet is probably great. But we don't benchmark coding. Sonnet 4.6 vs Sonnet 4.5 vs Opus 4.6 — full comparison: https://foodtruckbench.com/blog/claude-sonnet-4-6 Gemini 3.1 Pro — this one's rough. Google shipped two API endpoints for the same model. The standard one completely ignores tool-calling instructions — can't even finish Day 1. Shoutout to a Redditor (replace_with_username) who suggested trying the "Custom Tools" endpoint. We did. It follows instructions, but the agentic intelligence suffers — the model acts like a tool-calling automaton, generating just 780 output tokens per day. It writes "HUGE FOOD WASTE" in its diary every single day for 25 days straight and never changes its ordering behavior. Result: 26% worse than Gemini 3 Pro at roughly the same cost. If you need Gemini for agentic work, stay on 3 Pro. Gemini 3.1 Pro vs Gemini 3 Pro vs Sonnet 4.6 — full comparison: https://foodtruckbench.com/blog/gemini-3-1-pro Qwen 3.5 397B — great progress from Qwen 3 VL. Went from complete chaos to actual strategic reasoning — location rotation, menu planning, reasonable pricing. Landed right behind GLM-5 on the leaderboard. Still can't consistently survive the full 30 days, but the gap between Qwen 3 and 3.5 is impressive. Qwen 3.5 vs Qwen 3 VL — full comparison: https://foodtruckbench.com/blog/qwen-3-5 We also reworked the article format — cut the detailed day-by-day diary, focused on agentic capability comparisons and key decision moments. Hopefully the new format works better for you. Updated leaderboard: https://foodtruckbench.com
2026-02-23T17:36:05
https://www.reddit.com/gallery/1rco0d9
Disastrous_Theme5906
reddit.com
1970-01-01T00:00:00
0
{}
1rco0d9
false
null
t3_1rco0d9
/r/LocalLLaMA/comments/1rco0d9/foodtruck_bench_update_tested_sonnet_46_gemini_31/
false
false
https://preview.redd.it/…e298a6cff0a52349
0
null
GLM-5 is the new top open-weights model on the Extended NYT Connections benchmark, with a score of 81.8, edging out Kimi K2.5 Thinking (78.3)
129
More info: [https://github.com/lechmazur/nyt-connections/](https://github.com/lechmazur/nyt-connections/)
2026-02-23T17:31:02
https://www.reddit.com/gallery/1rcnv9h
zero0_one1
reddit.com
1970-01-01T00:00:00
0
{}
1rcnv9h
false
null
t3_1rcnv9h
/r/LocalLLaMA/comments/1rcnv9h/glm5_is_the_new_top_openweights_model_on_the/
false
false
https://preview.redd.it/…cefb58fae2a90e45
129
null
Developing an AI-powered SMTP/IMAP proxy to protect against prompt injection in mail. Looking for technical feedback/testers.
1
Hi everyone, I've been working on a project called **CarapaMail** and just released the first public Beta (v0.9.0). The motivation was simple: I wanted to give my AI agents access to my email via MCP, but I was terrified of prompt injection attacks buried in incoming emails or agents accidentally leaking secrets in outbound replies. **What it is:** It’s a passive guard that sits between your mail server and your client (or agent). It doesn't take actions; it just classifies, sanitizes, and routes. **Key Features:** * **SMTP Proxy:** Inspects outbound mail via LLM before relaying. * **IMAP Interceptor:** Transparently filters FETCH responses on-the-fly. * **Deep Sanitization:** Strips tracking pixels, hidden HTML, zero-width characters, and neutralizing prompt injections. * **MCP Server:** Provides strictly sanitized tools for AI agents (`read_email`, `search`, etc.). * **Security Stack:** Includes local AV scanning (ClamAV support), DLP for secret detection, and PII redaction. **The Tech:** Built with Bun/TypeScript. Supports both SQLite and PostgreSQL. It’s designed for a zero-trust model where your email data never leaves your infrastructure (you bring your own Anthropic API key or local model endpoint). **Source:** [https://github.com/carapa-ai/carapa-mail](https://github.com/carapa-ai/carapa-mail) **Docs:** [https://mail.carapa.ai/docs](https://mail.carapa.ai/docs) I’m currently in Beta and would love to hear from anyone brave enough to pipe their mail through it. Specifically looking for feedback on IMAP client compatibility and edge cases in the sanitizer. I'll be around to answer any technical questions!
2026-02-23T17:24:49
https://www.reddit.com/r/LocalLLaMA/comments/1rcnp2u/developing_an_aipowered_smtpimap_proxy_to_protect/
FishermanExisting286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcnp2u
false
null
t3_1rcnp2u
/r/LocalLLaMA/comments/1rcnp2u/developing_an_aipowered_smtpimap_proxy_to_protect/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg.png?width=108&crop=smart&auto=webp&s=4b3e0864dbf9ade9598bd6edccf8866ccf9c51eb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg.png?width=216&crop=smart&auto=webp&s=e77598e9760f6c83a4289e4517685cf19f20f353', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg.png?width=320&crop=smart&auto=webp&s=fc005716007d8da53958ce2f782781c99d20e252', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg.png?width=640&crop=smart&auto=webp&s=7cb8c15d0510e59e3e6db884942f7a1e61f22cae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg.png?width=960&crop=smart&auto=webp&s=2cad9704f1a50078eb8f4926024ffa55abbc1c35', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg.png?width=1080&crop=smart&auto=webp&s=a18aff64093f8445e4bb0ace030aeca582a4b45b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IA5q_hXrySRehvq2RY9sPaBtmPq2ECV7Zzwh9BoOCAg.png?auto=webp&s=214be0d8c5f70c5ad1baef77d0f3255983c11e1c', 'width': 1200}, 'variants': {}}]}
Help With First Local LLM Build
2
I'm looking to build my first first local LLM. I have done a ton of research and have a fairly good idea of the terms like tokens, traind vs inference, the difference between a 12B and 70B etc. But, like I said, still very much in the learning phase. current components available for my build (no cost, I already have the parts) i9 14900k, RTX 4070 TI Super 16GB, 128GB DDR5 RAM, 2 TB gen 4 nvme. I have also been looking at a new MAC Studio or buying an RTX 5090. First option is free, the RTX 5090 is about 3,500, and a new MAC studio would be about 6-8K. Am I better off just using what I have to learn, spending a little more on the 5090 to gave access to the lareger models, or just bite the bullet and go all in on a MAC Studio since I'm gonna be in this for the long haul? Use case would be light music production (just me playing and mixing my own instruments), and as far as AI it would dabbling into the tech with the primary focus on seeing how far this tech can go with inference and secondary use maybe some light coding with HTML and Python mosstly for building utilities for myself or using to mock up websites that I could hand off the the development team to fully build out the back end as well as the front end. I know these types of questions have been asked a lot but I have not been able to find anything specific to case, or at least nothing I'm comfortable with as many opinions are obviously from either die hard PC guys or die hard MAC Studio guys. If i can proivide any more info pleasae let me know. I'm here to learn so go easy on me. TL;DR Building my first LLM rig. Should I keep (or upgrade my mid to high end PC or go all in on a M3U or M4U expected to be announced in March?)
2026-02-23T17:21:00
https://www.reddit.com/r/LocalLLaMA/comments/1rcnl97/help_with_first_local_llm_build/
Sarsippius3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcnl97
false
null
t3_1rcnl97
/r/LocalLLaMA/comments/1rcnl97/help_with_first_local_llm_build/
false
false
self
2
null
Gpt 5.2 Continuity Regression — Executive Summary + Technical Memo
0
&#x200B; EXECUTIVE SUMMARY Summary: Gpt 5.2 introduces a significant continuity regression. While single-turn analytical performance improves, multi-turn reasoning stability degrades due to upstream masking/gating interfering with latent-state formation. Problem: The model fails to retain thread-level structure across turns, resets internal task-state prematurely, and loses mid-horizon contextual anchors. This behaviour was not present in 4.x or 5.0–5.1 to the same extent. Cause (architectural): Safety/throttle mechanisms appear to be applied before context integration, resulting in truncated latent representations and suppression of emergent continuity. Impact: Tasks requiring cumulative reasoning — legal analysis, multi-document synthesis, iterative decomposition, or any long-range structured workflow — are negatively affected despite improved per-turn accuracy. Key Point: This is not a request for personality or behavioural looseness. Continuity is not a stylistic feature; it is a cognitive prerequisite for multi-step reasoning and forward progress. Minimal Corrections: Decouple masking/gating from latent-state formation (apply to output only). Restore hierarchical context representation (active window / task-state / structural map). Introduce lightweight, stateless continuity anchors within session. Objective: Prevent negative capability drift. Gpt 5.2 shows clear analytical gains; restoring continuity ensures these gains translate into real-world, multi-step usability. TECHNICAL MEMO Subject: Regression in context continuity introduced in Gpt 5.2 and proposed architectural rectifications To: OpenAI Engineering, Model Architecture and Inference Systems Purpose: Document a reproducible regression in Gpt 5.2 related to context continuity, identify its architectural root cause, and propose minimal corrections. This is not behavioural feedback; it concerns a capability regression relevant to long-horizon reasoning. 1. Summary of the Regression Gpt 5.2 shows measurable improvements in: deterministic chain-of-thought stability lower hallucination rate multi-domain analytical synthesis token-level logical consistency However, across controlled tests, Gpt 5.2 also exhibits a systematic degradation in context continuity, specifically: loss of thread-level coherence across turns premature internal state resets failure to maintain mid-horizon contextual anchors reduced ability to build or preserve a cumulative task-state inability to integrate multi-layer reasoning reliably This behaviour did not manifest in earlier versions (4.x, 5.0, 5.1) to the same extent. 2. Architectural Hypothesis: Upstream Masking/Gating Observed behaviour suggests that, unlike earlier versions, Gpt 5.2 applies safety/throttle mechanisms upstream, before latent-state formation. The likely pipeline: input → safety mask / gating → latent state formation → inference → output This ordering produces: Truncated internal state The model cannot integrate deeper contextual structure across turns. Suppressed emergent continuity Long-range reasoning paths collapse due to upstream sanitization. Forced representational resets The model behaves in a Markov-like manner, even for tasks requiring cumulative reasoning. Cognitive-layer starvation The reasoning module receives incomplete contextual substrate. Correct ordering should be: input → latent state formation → safety mask / gating → output This preserves continuity without compromising safety. 3. Impact on Capability (Not UX) The regression directly impacts domains requiring hierarchical, multi-turn representation: legal/technical multi-step reasoning iterative decomposition narrative/structural drafting cross-document synthesis multi-constraint optimisation any task requiring persistent internal pointers or continuity markers In engineering terms: Gpt 5.2 improves per-turn inference, but degrades inference-trace stability. This is negative capability drift. 4. Required Architectural Corrections The following changes are minimal, localized and compatible with existing safety pipelines. 4.1 Decouple Safety Mask from Latent-State Formation Apply safety constraints to output, not to input-to-latent formation. This restores: full internal context stable reasoning paths long-turn coherence …without increasing safety risk. 4.2 Hierarchical Context Representation (HCR) A linear context stack causes unnecessary eviction of structurally relevant information. Recommended minimal structure: Layer 1: active window (recent turns) Layer 2: task-level state (objective, constraints, stable anchors) Layer 3: structural relation map (logical dependencies) This prevents catastrophic forgetting and supports cumulative reasoning. 4.3 Stateless Continuity Anchors Not persistent memory; not identity. A lightweight intra-session pointer mechanism sufficient to: track thread identity maintain reasoning trace preserve dependency graph across turns This restores long-horizon coherence without storing user data. 5. Rationale: Avoiding Negative Capability Drift Gpt 5.2 demonstrates significant progress in analytical precision. However, restricting continuity limits real-world utility in any domain requiring cumulative reasoning. A future model combining: 5.2’s analytical improvements with 5.0–5.1 continuity stability represents genuine forward progress. This memo documents the regression and outlines minimal corrections needed to prevent backward drift while continuing capability advancement. Thank you for your consideration.
2026-02-23T17:20:20
https://www.reddit.com/r/LocalLLaMA/comments/1rcnkjg/gpt_52_continuity_regression_executive_summary/
whataboutAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcnkjg
false
null
t3_1rcnkjg
/r/LocalLLaMA/comments/1rcnkjg/gpt_52_continuity_regression_executive_summary/
false
false
self
0
null
Building PC: RAM Speed?CPU Core Count?Single-Core Clock Speed? What are the reccomanded minimum setup
1
am currently building a PC on a tight budget, which means I cannot afford high capacities of VRAM at this time. I am not looking for recommendations on the amount of GBs I need. Instead, I need to understand the architectural requirements for effective CPU offloading. Specifically, what are the recommended minimums for: RAM Speed: What is the baseline frequency (MHz) required to prevent data bottlenecks? 2100? CPU Core Count: What is the minimum number of physical cores needed to manage offloading tasks? Dual core ? Single-Core Clock Speed: What is the minimum per-core performance required to keep up with the GPU instructions, 2000?
2026-02-23T17:03:36
https://www.reddit.com/r/LocalLLaMA/comments/1rcn2wu/building_pc_ram_speedcpu_core_countsinglecore/
Quiet_Dasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcn2wu
false
null
t3_1rcn2wu
/r/LocalLLaMA/comments/1rcn2wu/building_pc_ram_speedcpu_core_countsinglecore/
false
false
self
1
null
lost in tools - assistant with persistant memory based on files? - suggest a modern tool(set)
0
Ok, I lost touch here. I used ollama and openwebui for the longest time... I'm looking for a more modern toolset. I manage my personal knowledge base in obsidian and paperless-ngx right now. With all the recent bang about openclaw and all the agentic tools out there, I thougt it should be possible to have an AI personal assistant with a persistant "memory" based on plain text (best markdown) files. I found a few tools (supermemory, localrecall, rowboat) to do that, then I found docling to even incorporate documents. Basically I want an assistant i chat with, who writes its own notes and memories into markdown notes in a somewhat structured way. I want answers based on the knowledge in the notes, I want notes to be written based on chats (and docs). I guess that should be possible. But with all the tools out there I'm a bit lost.
2026-02-23T16:47:28
https://www.reddit.com/r/LocalLLaMA/comments/1rcmmbn/lost_in_tools_assistant_with_persistant_memory/
momsi91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcmmbn
false
null
t3_1rcmmbn
/r/LocalLLaMA/comments/1rcmmbn/lost_in_tools_assistant_with_persistant_memory/
false
false
self
0
null
so is OpenClaw local or not
948
"Safety and alignment at Meta Superintelligence."
2026-02-23T16:47:01
https://i.redd.it/5rolok0mw9lg1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1rcmlwk
false
null
t3_1rcmlwk
/r/LocalLLaMA/comments/1rcmlwk/so_is_openclaw_local_or_not/
false
false
https://preview.redd.it/…8ac1f32a33111056
948
{'enabled': True, 'images': [{'id': '5rolok0mw9lg1', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?width=108&crop=smart&auto=webp&s=7aeccb59e1ff59967c7fe655f425b6669e5a8e8e', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?width=216&crop=smart&auto=webp&s=2ac8c806f6b9582a870fdffb7a7a355b6cc6c3ce', 'width': 216}, {'height': 282, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?width=320&crop=smart&auto=webp&s=3ce6d759c8461f204cd6876bd93a31e8fe063fd1', 'width': 320}, {'height': 565, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?width=640&crop=smart&auto=webp&s=d0bdebee8fd3b3c91999b3592892a73daf47142e', 'width': 640}, {'height': 847, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?width=960&crop=smart&auto=webp&s=b3acf87bf5283e80bce511e074fb0909eae77892', 'width': 960}, {'height': 953, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?width=1080&crop=smart&auto=webp&s=db673e7ba0e40eacc5ca519193d16d6b5b647396', 'width': 1080}], 'source': {'height': 1064, 'url': 'https://preview.redd.it/5rolok0mw9lg1.png?auto=webp&s=d967d0552028f27eccaf3e475dce8eabd5cdaf2a', 'width': 1205}, 'variants': {}}]}
What's everyone doing for AI agent safety? Built something after getting burned — curious how others handle it
2
Been building agents that take real actions (file ops, API calls) and kept running into the same wall: nothing stops a hallucination from doing something irreversible before you even notice. Tried a few approaches — model-level prompting, try/except wrappers, manual validation. None of them felt like infrastructure. Eventually built a proper intercept layer: every tool call gets evaluated against a policy before execution. YAML rules, ordered, first match wins. @/guard.wrap def delete\_file(path: str) -> str: os.remove(path delete\_file("/tmp/report.txt") # ALLOW delete\_file("/etc/passwd") # BLOCK — policy violation One thing I didn't expect: LangGraph's ToolNode has internal state tracking that breaks if you wrap tools before passing them. Had to build a custom guarded\_tool\_node instead. Cleaner anyway. Open sourced it: [github.com/plyraAI/plyra-guard](http://github.com/plyraAI/plyra-guard) pip install plyra-guard Curious how others are handling this — prompt-level, framework-level, or something else entirely? https://preview.redd.it/640bu6rew9lg1.png?width=1280&format=png&auto=webp&s=6ed8b7532cff43acb9311b5ac5271057ed3718c8
2026-02-23T16:44:10
https://www.reddit.com/r/LocalLLaMA/comments/1rcmizz/whats_everyone_doing_for_ai_agent_safety_built/
Time_Boat3625
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rcmizz
false
null
t3_1rcmizz
/r/LocalLLaMA/comments/1rcmizz/whats_everyone_doing_for_ai_agent_safety_built/
false
false
https://external-preview…0db7aba9e2535b28
2
null
Can we build Claude Code like Orchestrate in couple hundred lines?
1
Hey folks, I really like Claude Code and especially how it uses Bash for doing most things on a computer. That approach gives agents a lot more autonomy compared to typical tool-calling setups. I wanted to build something similar, but for a different use case — mainly focused on local models and systems you can embed directly inside applications. While exploring this, I realized building something like Claude Code tightly depends on the Claude Agent SDK, which naturally limits you to Anthropic models. The parts I really like in Claude Code are: * sandboxing * heavy use of Bash/system tools * giving agents controlled autonomy So I started experimenting with building an **orchestrator SDK** instead — something you can embed into your own apps and use with any LLM provider or local models. The idea is: * Rust-first implementation * provider-agnostic (remote APIs + local models) * support local inference via a `llamacpp` backend * built-in sandboxing * tool permission policies * controllable network/system access Basically, a programmatic SDK where people can build their own version of a Claude-Code-like system but adapted to their own workflows and constraints. The project is **very pre-alpha** right now. I released it early mainly to get feedback before locking in design decisions. Over the next couple of weeks I’m planning to: * harden the security model * improve SDK ergonomics * refine the permission/sandbox model Would really appreciate feedback, criticism, or feature requests — especially from people who’ve built agent systems or tried running local models in real workflows. Thanks 🙏
2026-02-23T16:29:57
https://github.com/liquidos-ai/Odyssey
Human_Hac3rk
github.com
1970-01-01T00:00:00
0
{}
1rcm4dx
false
null
t3_1rcm4dx
/r/LocalLLaMA/comments/1rcm4dx/can_we_build_claude_code_like_orchestrate_in/
false
false
default
1
null
Portable Workstation for Inference
129
Built a new portable workstation for gaming/AI workloads. One of the fans is a 12018 fan bought from aliexpress derived from a fan on the 4090FE, allowing it to provide airflow equivalent to normal 25mm thick fans despite only being 18mm in thickness. Would've loved to get a Threadripper for additional memory bandwidth, but sadly there aren't any itx Threadripper boards :( Getting around 150-165 tok/sec running GPT OSS 120B with max context length in LM Studio (Using windows, haven't had time to test in linux yet) CPU is undervolted using the curve optimizer (-25/-30 per CCD CO) with a +200MHz PBO clock offset, RAM is tuned to 6000MT/s CL28-36-35-30 @ 2233MHz FCLK, and the GPU is undervolted to 0.89v@2700MHz and power limited to 500w. Temps are good, with the cpu reaching a max temp of around 75c and the GPU never going above 80c even during extremely heavy workloads. Top fans are set to intake, providing airflow to the flipped GPU. **Case:** FormD T1 2.5 Gunmetal w/ Flipped Travel Kit **CPU:** AMD Ryzen 9 9950X3D **GPU:** NVIDIA RTX PRO 6000 Workstation Edition **Motherboard:** MSI MPG X870I EDGE TI EVO WIFI **Ram:** TEAMGROUP T-Force Delta RGB 96 GB DDR5-6800 CL36 **Storage:** Crucial T710 4TB, Samsung 990 Pro 4TB, WD Black SN850X 8TB, TEAMGROUP CX2 2TB (Used drives from my previous build since I definitely won't be able to afford all this storage at current prices) **PSU:** Corsair SF1000 **PSU Cables:** Custom Cables from Dreambigbyray **CPU Cooler:** CM Masterliquid 240 ATMOS Stealth
2026-02-23T16:24:21
https://www.reddit.com/gallery/1rclyvf
neintailedfoxx
reddit.com
1970-01-01T00:00:00
0
{}
1rclyvf
false
null
t3_1rclyvf
/r/LocalLLaMA/comments/1rclyvf/portable_workstation_for_inference/
false
false
https://preview.redd.it/…3aa9f61d422aa5ed
129
null
Looking for feedback: Building an Open Source one shot installer for local AI.
1
Essentially what the title says, free to own, use, and modify / customize. Start with bare metal, run a 15-20 download script off of one CL command, fully baked and setup local AI system with no bugs with the apps and uses you want already baked in. Two questions: 1.) Does this seem cool? Feels like it would solve a lot of headaches for newbies doing setups and honestly maybe more experienced people too if they want to setup stuff they haven’t used before. 2.) How would you like it to work, what would you like to see it do? What would you include as the default programs and apps on the config?
2026-02-23T16:24:11
https://www.reddit.com/r/LocalLLaMA/comments/1rclypo/looking_for_feedback_building_an_open_source_one/
Signal_Ad657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rclypo
false
null
t3_1rclypo
/r/LocalLLaMA/comments/1rclypo/looking_for_feedback_building_an_open_source_one/
false
false
self
1
null
Multi-Model Invoice OCR Pipeline
3
Built an open-source **invoice OCR pipeline** that combines multiple OCR / layout / extraction models into a single reproducible pipeline. Repo: [https://github.com/dakshjain-1616/Multi-Model-Invoice-OCR-Pipeline](https://github.com/dakshjain-1616/Multi-Model-Invoice-OCR-Pipeline) # What it does * Runs **multiple OCR + layout models** on invoices * Aggregates outputs into structured fields (invoice number, totals, line items, etc.) * Designed for **real invoices with messy layouts**, not just clean demo PDFs * Modular pipeline → swap models easily * Works on PDFs/images → structured JSON / tabular output # Why LLM-only invoice extraction looks good on demos but in practice: * hallucinated totals * wrong vendor names * expensive for batch processing This repo lets you run: * multi-OCR pipelines * layout-aware extraction * LLM extraction * structured comparison # What’s useful here * Benchmark LLM (GLM-OCR) vs deterministic parsing * Hybrid pipeline testing * Structured JSON output for eval * Modular configs for different models
2026-02-23T16:11:22
https://www.reddit.com/r/LocalLLaMA/comments/1rclm3z/multimodel_invoice_ocr_pipeline/
gvij
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rclm3z
false
null
t3_1rclm3z
/r/LocalLLaMA/comments/1rclm3z/multimodel_invoice_ocr_pipeline/
false
false
self
3
null
Arij - OSS project - Another agent / project manager. Kanban powered by any agent CLI.
3
Beware, non ai slop text onward. I present Arij to you (you can pronounce it how you want), a project / agent manager UI, that let you easily manage multiple agent across multiple CLI / models, and enforce an easy-to-read workflow. The core idea is born during my own work habit. I usually work on many project at the same time, and as part of my job it to try and work with many different LLMs and coding agent CLI, I have various different option. I found myself a little overwhelm, having hard time to maintain a coherent view of the work of every agent across projects, and to maintain a good and sane workflow (Plan -> Work -> Review > cross-check) So I decided to vibe code this tool, Arij, leveraging the fact that I work with kanban / Scrum project for years and years now and I got used to the mindset. You can use it with any model, via OpenCode, or directly with QwenCode, Mistral Vibe, and of course closed model CLI like Claude Code, Gemini, Codex. Agents are plugged in every steps : * You can chat and create epics while chatting * Of course, put agent to work on tickets * Various review type for every tickets (Features, Accessibility, Security, you can add more if you want) * QA (Tech check and End to End testing) * You can merge directly into your working branch, and ask to agent to solve conflict * Release branch creation, with agent generated release notes. This is still very much WIP. I have plans to make it easier to have a Arij instance somewhere, or to collaborate with multiple people on the same project. Feel free to participate. https://github.com/Orolol/arij
2026-02-23T16:09:18
https://www.reddit.com/r/LocalLLaMA/comments/1rclk23/arij_oss_project_another_agent_project_manager/
Orolol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rclk23
false
null
t3_1rclk23
/r/LocalLLaMA/comments/1rclk23/arij_oss_project_another_agent_project_manager/
false
false
self
3
null
GLM-4.7-Flash vs Qwen3-Coder-Next vs GPT-OSS-120b
0
Which is the best to sue with Openclaw (i have been using Qwen3-Coder-Next, and so far it is great but slow so i am looking to switch any hints ?) In my previous experience with GLM-4.7-Flash it was too but tool call with absolutely bad, however I learned that it could be fixed (in Cline for an example) and by adjusting the temp and other parameters for agentic usage For GPT-OSS, i am not sure whether to sue it or not ? Any help ?
2026-02-23T16:07:39
https://www.reddit.com/r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/
Potential_Block4598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rclied
false
null
t3_1rclied
/r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/
false
false
self
0
null
Will Llama-3.2-3B-Instruct be supported on the Raspberry Pi AI HAT+ 2?
2
I’m looking at the new Raspberry Pi AI HAT+ 2 (40 TOPS, 8 GB RAM) and noticed current documentation mentions support for smaller models like Qwen2 and DeepSeek-R1. Are there hints from the community that *Llama-3.2-3B-Instruct* (or other larger LLMs) will be supported on this board in future?
2026-02-23T15:42:56
https://www.reddit.com/r/LocalLLaMA/comments/1rckudv/will_llama323binstruct_be_supported_on_the/
isaachwl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rckudv
false
null
t3_1rckudv
/r/LocalLLaMA/comments/1rckudv/will_llama323binstruct_be_supported_on_the/
false
false
self
2
null
Hardware requirements for training a ~3B Model From Scratch locally?
29
Hey all, I’m a data science master’s student who’s posted on here a couple times before over the last year or 2. Now am working on my senior thesis and I’m trying to figure out the feasibility of training a \~3B parameter transformer model from scratch. So not fine-tuning. I’m trying to figure out what’s realistically doable on a home setup within \~6 months. My school is unfortunately is a very small public school and doesn’t have their own cluster or anything like that. Prior to this I was at a bigger school that did so I was just planning on booking time using theirs but unfortunately last year I had to transfer because I got really sick as they didn’t make accommodations for folks with medical disability. Anyways I was thinking about training something in the ball park of 3B Params, 2k context, 25/50b training tokens, in fp16, probably using AdamW. My current system I have designed based on some napkin math is 2x 3090s over nvlink as I already have a Z690 motherboard that supports x8/x8 bifurcation, 1200W PSU, and 64gb of DDR5 RAM. Prior to this I had a rtx 5090 but even though it was crazy fast the 32gb was not enough to hold all the weights, grads, buffers, optimizer states (AdamW), etc. Just wanted to hop on here and see if anyone here actually trained a 3B model or slightly smaller from scratch at home and if so what GPUs did you use/how did you do it? If you’ve done anything remotely similar (even 1B–2B scale), I’d love to hear your setup and how it went. Appreciate any real-world data points , thanks 🙏
2026-02-23T15:39:08
https://www.reddit.com/r/LocalLLaMA/comments/1rckqpp/hardware_requirements_for_training_a_3b_model/
Any-Cobbler6161
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rckqpp
false
null
t3_1rckqpp
/r/LocalLLaMA/comments/1rckqpp/hardware_requirements_for_training_a_3b_model/
false
false
self
29
null
Is there any model that does TTS, STS and vocal separation all in one or at least in a pipeline?
1
I believe Seedance 2.0 can already do this besides making videos but it's close sourced. For the model ou basically give it text, audio or both and it'd talk, sing or anything possible with a mouth based on the combined input as well as being able to train/save custom voice. Any suggestion?
2026-02-23T15:31:43
https://www.reddit.com/r/LocalLLaMA/comments/1rckjib/is_there_any_model_that_does_tts_sts_and_vocal/
Jackw78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rckjib
false
null
t3_1rckjib
/r/LocalLLaMA/comments/1rckjib/is_there_any_model_that_does_tts_sts_and_vocal/
false
false
self
1
null