title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I built a tool that can interactively create diagrams with LLMs | 171 | Hey everyone,
I built an open-source tool that generates editable drawiodiagrams using LLMs.
This outputs actual XML. You can generate a base diagram, then manually drag/drop elements to fix it, or ask the LLM to refine specific parts.
I added native Ollama support so you can generate architecture diagrams without sending sensitive stack details to OpenAI/Anthropic.
Features:
\- Manipulates drawio XML directly.
\- Supports AWS, GCP, and Azure icon sets.
\- Visual history/diffing (easy to undo hallucinations).
\- Works with OpenAI compatible endpoints (Ollama, LM Studio, etc.).
I'd love feedback on how it performs with big local models (>30B), or ideas for v2 (e.g., adding MCP support).
Repo: [https://github.com/DayuanJiang/next-ai-draw-io](https://github.com/DayuanJiang/next-ai-draw-io)
Demo: [https://next-ai-draw-io.vercel.app/](https://next-ai-draw-io.vercel.app/) | 2025-12-01T13:05:00 | https://v.redd.it/4hpwso9gcl4g1 | daweii | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pbc99o | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4hpwso9gcl4g1/DASHPlaylist.mpd?a=1767186314%2CYWM2MGQ5Zjc4Nzc3MzQ5ZmZlOTJjMTgxMWJhNmU0ODU4ODQ2NjNkMGVjYmYzMDZlZmRkNTIwMTkzZmFhNWMxOQ%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/4hpwso9gcl4g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/4hpwso9gcl4g1/HLSPlaylist.m3u8?a=1767186314%2CNWY0ZGJmNjZmOTRmMWUzNWYyNGM5OTRiZDM0N2NkYzRkYjZlN2QwNjFlMTg1N2JkODljYzQ3ZGU5YTQ0ZWRkOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4hpwso9gcl4g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1694}} | t3_1pbc99o | /r/LocalLLaMA/comments/1pbc99o/i_built_a_tool_that_can_interactively_create/ | false | false | 171 | {'enabled': False, 'images': [{'id': 'MHhoMGF2OWdjbDRnMTZ-ggrpklnNsXE3h3FCcz0k2D7KroK_010AEhCs0S-l', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/MHhoMGF2OWdjbDRnMTZ-ggrpklnNsXE3h3FCcz0k2D7KroK_010AEhCs0S-l.png?width=108&crop=smart&format=pjpg&auto=webp&s=1182da9f463af0e3cfab23dff64ccdf2358b2dcb', 'width': 108}, {'height': 137, 'url': 'https://external-preview.redd.it/MHhoMGF2OWdjbDRnMTZ-ggrpklnNsXE3h3FCcz0k2D7KroK_010AEhCs0S-l.png?width=216&crop=smart&format=pjpg&auto=webp&s=1355f36f88728bb70e48d8cebecc44bc7aeab3bb', 'width': 216}, {'height': 203, 'url': 'https://external-preview.redd.it/MHhoMGF2OWdjbDRnMTZ-ggrpklnNsXE3h3FCcz0k2D7KroK_010AEhCs0S-l.png?width=320&crop=smart&format=pjpg&auto=webp&s=3ea700c3066d6839739b7d35c226af515178c0b5', 'width': 320}, {'height': 407, 'url': 'https://external-preview.redd.it/MHhoMGF2OWdjbDRnMTZ-ggrpklnNsXE3h3FCcz0k2D7KroK_010AEhCs0S-l.png?width=640&crop=smart&format=pjpg&auto=webp&s=236d721b3c82d5c98e41d809335749911427e563', 'width': 640}, {'height': 611, 'url': 'https://external-preview.redd.it/MHhoMGF2OWdjbDRnMTZ-ggrpklnNsXE3h3FCcz0k2D7KroK_010AEhCs0S-l.png?width=960&crop=smart&format=pjpg&auto=webp&s=3bd22eba01e2b87a66122d8fda5fae675c66582d', 'width': 960}, {'height': 688, 'url': 'https://external-preview.redd.it/MHhoMGF2OWdjbDRnMTZ-ggrpklnNsXE3h3FCcz0k2D7KroK_010AEhCs0S-l.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a640dfb932201c4efe97de0f28210c1df87a38ba', 'width': 1080}], 'source': {'height': 1398, 'url': 'https://external-preview.redd.it/MHhoMGF2OWdjbDRnMTZ-ggrpklnNsXE3h3FCcz0k2D7KroK_010AEhCs0S-l.png?format=pjpg&auto=webp&s=f16748fd7cab24e94246a46f00e320b394856635', 'width': 2194}, 'variants': {}}]} | |
see this image its a sign that the old meathod is not going to work we saw same thing on claude and chatgpt non thinking model and thinking model is same and we need a new architecture now . we will see a growth no doubt but we will not going to see the jump we saw when the chatgpt introduced the | 0 | o series model | 2025-12-01T12:30:19 | Select_Dream634 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pbbj50 | false | null | t3_1pbbj50 | /r/LocalLLaMA/comments/1pbbj50/see_this_image_its_a_sign_that_the_old_meathod_is/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'lu40mdh96l4g1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/lu40mdh96l4g1.png?width=108&crop=smart&auto=webp&s=a69515fc4894fcb258e51efc48ce59d8d6123edf', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/lu40mdh96l4g1.png?width=216&crop=smart&auto=webp&s=daa2eea1771149de7cb0cc3cb5a0505083031c19', 'width': 216}, {'height': 144, 'url': 'https://preview.redd.it/lu40mdh96l4g1.png?width=320&crop=smart&auto=webp&s=c472108a2850dffba5b9cbf3ed26121074e37c00', 'width': 320}, {'height': 288, 'url': 'https://preview.redd.it/lu40mdh96l4g1.png?width=640&crop=smart&auto=webp&s=81d1705bc179b2d13735707321d37e2f726bdd5d', 'width': 640}, {'height': 432, 'url': 'https://preview.redd.it/lu40mdh96l4g1.png?width=960&crop=smart&auto=webp&s=144218af74214fe444955cc10cb233c85ade8632', 'width': 960}, {'height': 486, 'url': 'https://preview.redd.it/lu40mdh96l4g1.png?width=1080&crop=smart&auto=webp&s=bee23b83a5b4d920bc11ad3e9c942d7064cd49a0', 'width': 1080}], 'source': {'height': 634, 'url': 'https://preview.redd.it/lu40mdh96l4g1.png?auto=webp&s=8fcbe425a6073c435236fea40a704a555ff734c6', 'width': 1408}, 'variants': {}}]} | |
Cline and Kimi K2 Thinking | 5 | Hi,
is someone else using a local instance of Kimi K2 Thinking with Cline in VS Code?
I'm using it for planning tasks, however I'm getting many "Invalid API Response" errors.
I'm using the most recent llama-server from llama.cpp with the --jinja flag.
Does someone know how to get rid of this? I thought Kimi K2 Thinking would be appreciated for its stable tool calling. | 2025-12-01T12:24:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pbbewy/cline_and_kimi_k2_thinking/ | HlddenDreck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbbewy | false | null | t3_1pbbewy | /r/LocalLLaMA/comments/1pbbewy/cline_and_kimi_k2_thinking/ | false | false | self | 5 | null |
Qwen2.5 14b Q1 or Qwen2.5 7b Q4 | 3 | I'm stuck between choosing these two in my project. I have heard that that higher parameter and lower quants should be preferred over lower parameter and high quants. Does that apply in this case too (Qwen2.5 14b Q1 or Qwen2.5 7b Q4), or will Q1 choke the 14b model into oblivion, making the 7b model better? | 2025-12-01T12:24:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pbbeti/qwen25_14b_q1_or_qwen25_7b_q4/ | Both-Ad3646 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbbeti | false | null | t3_1pbbeti | /r/LocalLLaMA/comments/1pbbeti/qwen25_14b_q1_or_qwen25_7b_q4/ | false | false | self | 3 | null |
AIDC-AI/Ovis-Image-7B · Hugging Face (A Text-to-Image model From Alibaba) | 13 | [https://huggingface.co/AIDC-AI/Ovis-Image-7B](https://huggingface.co/AIDC-AI/Ovis-Image-7B)
https://preview.redd.it/sjxa68vp4l4g1.png?width=2048&format=png&auto=webp&s=a4ac2b8c1cd35f0bf759549998e4e95c5234f5b0
| 2025-12-01T12:20:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pbbbwh/aidcaiovisimage7b_hugging_face_a_texttoimage/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbbbwh | false | null | t3_1pbbbwh | /r/LocalLLaMA/comments/1pbbbwh/aidcaiovisimage7b_hugging_face_a_texttoimage/ | false | false | 13 | null | |
Help me find a good model to finetune | 1 | Hi folks,
I'm considering finetuning a smaller model that we can deploy to the cloud, instead of increasing our API costs.
The thing is, our most demanding work requires very good textual understanding, and then we do the extraction of parts of that text (citations and such).
One of the other pain points is that we require good capabilities in understanding different languages (mostly European, but still quite a few out of the "usual" bunch, like Slovakian!)
So far, we have relied on Claude Sonnet 4.5, which has been great for a moderate price.
I'm wondering how small a model we could start with, and feed it with these kinds of documents and expected results, and be able to replace Claude.
What would be a good model to experiment with? And, considering we deal with big documents, how big would the dataset need to be until we begin seeing some interesting results?
I know that "try it!" is a good answer, but I'm really scared of building a dataset for training because, given its size, it sounds like a really daunting and boring task.
Thanks in advance! | 2025-12-01T12:02:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pbazo6/help_me_find_a_good_model_to_finetune/ | nunodonato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbazo6 | false | null | t3_1pbazo6 | /r/LocalLLaMA/comments/1pbazo6/help_me_find_a_good_model_to_finetune/ | false | false | self | 1 | null |
We were tired of guessing which local model to use for which query. built a speculative execution lib that figures it out (github) | 3 | So we've been running on-premise AI nodes for a while now. The thing that kept being difficult was to know which model was best for what. We put a variety of open source models on the nodes but then the customers didn't understand the differences either (and kept on comparing results with ChatGPT...). Basically, we were wasting space on our nodes with large models although we knew that the absolute majority of queries would have been fine with smaller ones.
So we ended up building a cascading mechanism that tries the smallest model first, checks if the output is actually usable, and only escalates when it needs to. Looks like this:
agent = CascadeAgent(models=[
ModelConfig(name="llama3.2:3b", provider="ollama"),
ModelConfig(name="llama3.1:70b", provider="ollama"),
ModelConfig(name="gpt-4o-mini", provider="openai"),
#optional cloud fallback
])
In practice like 60-70% of queries never leave the small model. Rest escalates but only as far as needed.
We just did some benchmarks on GSM8K math queries, 1,319 queries, kept 93.6% accuracy. Cost went from $3.43 to $0.23. We originally built it for latency and power reduction but turns out people care way more about API bills :)
Works with Ollama, vLLM, whatever self-hosted setup you got. Cloud providers are optional, you can run fully local if thats your thing.
MIT licensed: [https://github.com/lemony-ai/cascadeflow](https://github.com/lemony-ai/cascadeflow)
happy to answer questions or any feedback! | 2025-12-01T11:59:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pbaxqi/we_were_tired_of_guessing_which_local_model_to/ | tech2biz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbaxqi | false | null | t3_1pbaxqi | /r/LocalLLaMA/comments/1pbaxqi/we_were_tired_of_guessing_which_local_model_to/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'kODWk5uClJwKjsYR1zyofgMBTqmfLe2GTBWURfYv7fM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kODWk5uClJwKjsYR1zyofgMBTqmfLe2GTBWURfYv7fM.png?width=108&crop=smart&auto=webp&s=bf35f0b03b154ebefb725cc087f552a4f13960c9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kODWk5uClJwKjsYR1zyofgMBTqmfLe2GTBWURfYv7fM.png?width=216&crop=smart&auto=webp&s=346f0036d611d02b050fb2dff0c7bbd8dc5b4e81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kODWk5uClJwKjsYR1zyofgMBTqmfLe2GTBWURfYv7fM.png?width=320&crop=smart&auto=webp&s=6d16d02a9d5a033608a351a59630032a130e56f5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kODWk5uClJwKjsYR1zyofgMBTqmfLe2GTBWURfYv7fM.png?width=640&crop=smart&auto=webp&s=4ee1ac796cbb0a44bcbaa5d850c3d90cad09470e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kODWk5uClJwKjsYR1zyofgMBTqmfLe2GTBWURfYv7fM.png?width=960&crop=smart&auto=webp&s=06d87725a5f915bef510552158072ca7c6fa8a88', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kODWk5uClJwKjsYR1zyofgMBTqmfLe2GTBWURfYv7fM.png?width=1080&crop=smart&auto=webp&s=33aa71e234b48325b955e06500b693d48eb9164f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kODWk5uClJwKjsYR1zyofgMBTqmfLe2GTBWURfYv7fM.png?auto=webp&s=3fa413645ecf7ab8aa37c4948a59322ae7d2d8be', 'width': 1200}, 'variants': {}}]} |
Got a good offer for 4xV100 32GB used - what should I keep in mind | 0 | One of our IT suppliers said he can give us a good deal for a server with 4XV100 32GB gpus. The motherboard is a PCI 3.0. 64gb DDR4 RAM. An old 8th gen i9 processor.
My use case is mostly llama.cpp for gpt-oss 120b, Qwen3 30B V Q6K, and 1 text & 1 image embedding model which I run via onnx.
Wondering if there are any gotchas in terms of LLM and other usage. Is the V100 expected to have decent compatibility with future CUDA 13+ releases? I saw a comment on reddit that it works well with CUDA12.
Do I need NVlink to split a model across 4GPUs, or will it work fine out of the box with llama.cpp
I havent used VLLM before but will that be a good fit for this usecase and will it support V100?
Is PCI 3 a bummer in terms of speed for the models I listed above? Same with the DDR4?
Anything else I should be keeping in mind?
I'm not expecting superfast stuff. Mostly running this as batch processing for large documents. Prompt processing is important for me because most of my documents are pretty huge. Token generation speed is not as important, because the output will be pretty short.
| 2025-12-01T11:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pbav43/got_a_good_offer_for_4xv100_32gb_used_what_should/ | regstuff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbav43 | false | null | t3_1pbav43 | /r/LocalLLaMA/comments/1pbav43/got_a_good_offer_for_4xv100_32gb_used_what_should/ | false | false | self | 0 | null |
[Release] Vidi2 — ByteDance’s LMM for video understanding & creation (STG + temporal retrieval) | 14 | Given a text query, **Vidi2** finds the right timestamps *and* object boxes (“tubes”), with solid temporal retrieval and basic video QA. Repo ships the **VUE-STG** and **VUE-TR-V2** benchmarks + eval scripts; public demo is “coming very soon.”
* What it does: fine-grained **spatio-temporal grounding** \+ **temporal retrieval**, extended to **video QA**.
* What’s in the repo: instructions to run **STG** and **TR-V2** evaluations locally.
* [GitHub](https://github.com/bytedance/vidi) | 2025-12-01T11:51:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pbascu/release_vidi2_bytedances_lmm_for_video/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbascu | false | null | t3_1pbascu | /r/LocalLLaMA/comments/1pbascu/release_vidi2_bytedances_lmm_for_video/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'vd8oCTr8I0MZnA7rAp4qLbPXyPIy3lsFfrS7S1vWcqU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vd8oCTr8I0MZnA7rAp4qLbPXyPIy3lsFfrS7S1vWcqU.png?width=108&crop=smart&auto=webp&s=8abd1c8f7cec8d4a87c8f4285c444d2a0da560ea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vd8oCTr8I0MZnA7rAp4qLbPXyPIy3lsFfrS7S1vWcqU.png?width=216&crop=smart&auto=webp&s=e1f9870aca9286b702fe9baf2eba8f024313a11a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vd8oCTr8I0MZnA7rAp4qLbPXyPIy3lsFfrS7S1vWcqU.png?width=320&crop=smart&auto=webp&s=f36a17aef34bef6cd4d6314f06bb88a934a409b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vd8oCTr8I0MZnA7rAp4qLbPXyPIy3lsFfrS7S1vWcqU.png?width=640&crop=smart&auto=webp&s=7e8d5709f9fdefe8ca1556251b6af42cc1928a17', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vd8oCTr8I0MZnA7rAp4qLbPXyPIy3lsFfrS7S1vWcqU.png?width=960&crop=smart&auto=webp&s=1121283c89e94ef45cf756a987f3244d98835f55', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vd8oCTr8I0MZnA7rAp4qLbPXyPIy3lsFfrS7S1vWcqU.png?width=1080&crop=smart&auto=webp&s=05e4778a804bbf06766a6e04816269a684dff7c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vd8oCTr8I0MZnA7rAp4qLbPXyPIy3lsFfrS7S1vWcqU.png?auto=webp&s=b643e7e88892fbbe6188931e5a8ef4be982d9bda', 'width': 1200}, 'variants': {}}]} |
Finally DeepSeek supports interleave thinking | 91 | 2025-12-01T11:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1pbal3o/finally_deepseek_supports_interleave_thinking/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbal3o | false | null | t3_1pbal3o | /r/LocalLLaMA/comments/1pbal3o/finally_deepseek_supports_interleave_thinking/ | false | false | 91 | null | ||
I wrote a kernel that makes sparse LLMs faster and smaller on consumer GPUs even at low sparsity. | 46 | Pruning LLMs hind of sucks. On GPUs, unstructured sparsity doesn’t really help. You don’t get memory savings, and you don’t get speed up. You always needed very high sparsity (the model breaks), some structure (2:4: very limiting, and the model is worse) or special hardware (good luck).
I built a new matrix format + GPU kernel for sparse matrix-vector multiplication that unlocks the benefits of pruning on real hardware. I’m calling it MACKO-SpMV, and it has no special GPU instructions, no fixed block patterns, no giant performance drop, no precomputation and no autotuning. Just: prune, store the weights, run fast.
https://preview.redd.it/vmvsr577qk4g1.png?width=852&format=png&auto=webp&s=e261ccf86c0d0ec9c6814b693cf729c746e1f7b0
What this means in practice:
\- Noticeable memory reduction even at low sparsity
\- Speed-ups on standard consumer GPUs (no tensor core magic needed). Tested with NVIDIA 2080, 3090, 4090.
\- Works with any model that has linear layers (basically all LLMs and much more).
\- Want to run 7b model on 8GB memory? Well, prune to 60% sparsity and you will even get a 2x speedup.
Quick caveat1: For prefill, it only gives you memory reduction without the speed-up. For generation, you get both the speed-up and memory reduction. Happy to discuss the technical reasons.
Quick caveat2: This is not a post about quality of the model. Pruning methods are advancing rapidly, and I hope this will help the field to catch up/outperform quantization.
Fully open source, still mainly academic.
If you care about local LLMs, this finally makes aggressive pruning a practical tool instead of a research curiosity. You can strip down a model and actually benefit from it at runtime.
Blog (high-level explanation): [https://www.grizzlytech.dev/blog/macko-spmv](https://www.grizzlytech.dev/blog/macko-spmv)
Paper (details on the format/algorithm): [https://arxiv.org/pdf/2511.13061](https://arxiv.org/pdf/2511.13061)
Code (open-source implementation): [github.com/vlejd/macko\_spmv](http://github.com/vlejd/macko_spmv)
Happy to answer questions, benchmark suggestions and integration ideas. I’d love to see what the local LLM community can do with this.
If anyone has niche/pruned models, weird sparsity patterns, or cases where quantization ruins quality, let me know. | 2025-12-01T11:31:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pbag5i/i_wrote_a_kernel_that_makes_sparse_llms_faster/ | vlejd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbag5i | false | null | t3_1pbag5i | /r/LocalLLaMA/comments/1pbag5i/i_wrote_a_kernel_that_makes_sparse_llms_faster/ | false | false | 46 | null | |
Deepseek v3.2 speciale, it has good benchmarks! | 126 | [https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Speciale](https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Speciale) | 2025-12-01T11:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pbaf8x/deepseek_v32_speciale_it_has_good_benchmarks/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbaf8x | false | null | t3_1pbaf8x | /r/LocalLLaMA/comments/1pbaf8x/deepseek_v32_speciale_it_has_good_benchmarks/ | false | false | self | 126 | null |
[Tool] Local video-to-text backend + OpenWebUI tool (scene cuts + Whisper + Qwen3-VL, no API keys) | 8 | 2025-12-01T11:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pbabtp/tool_local_videototext_backend_openwebui_tool/ | Longjumping-Elk-7756 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbabtp | false | null | t3_1pbabtp | /r/LocalLLaMA/comments/1pbabtp/tool_local_videototext_backend_openwebui_tool/ | false | false | 8 | null | ||
Most enthusiasts won't be able to afford to run the largest or very large new open weight models at a reasonable speed | 0 | 192 gb of ram is 3k now and a rtx 6000pro costs 7500-8000usd and a mac studio with 512g of ram costs 9.5k... With RAM and GPU prices being this expensive and the SOTA models getting larger, by the end of 2026, you will have 1.5-2 trillion parameter open weight highly performant models. How will most enthusiasts be able to run a 2 trillion parameter model locally over 18 tokens/second in 2026?(THey have wait years for that.... I guess distilled models will get better). Even running q4-q8 500B to 1T models locally at 18Tokens/s will be out of reach for many...
I guess even those with deep pockets will be forking over 20k to run a q4 2T model with a large context window on two m5 ultras or over 40k on 1.1tb of ddr5/6 ram and 2 rtx 6000s in 2026.
How will an average enthusiast be able to even afford 128-192 gb of (>600GB/s )fast ram and a good <1.5 year old gpu with fast prefill speed for a 128-256b model? I guess they can use m2 ultras or m1 ultras, but the prefill is kind of slow and the gpu is a little dated..
How much money do most people even have to buy an LLm rig? $1k to 4k?
By 2028, you will have 8 trillion open weight models.. I guess most enthusiasts will be stuck running q4-q832b to 200b models locally with 70-89% capability or quality of multitrillion parameter models until 2027-2028 when ram production ramps up or they will be using the API or renting a gpu.
Even if ram production goes up, ram will still be more expensive in 2027 than in 2024....I hope apple doesnt raise their ram prices, they have fixed price ram contracts after all ... | 2025-12-01T11:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pbabiv/most_enthusiasts_wont_be_able_to_afford_to_run/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pbabiv | false | null | t3_1pbabiv | /r/LocalLLaMA/comments/1pbabiv/most_enthusiasts_wont_be_able_to_afford_to_run/ | false | false | self | 0 | null |
DeepSeek-V3.2 | 2 | [https://huggingface.co/deepseek-ai/DeepSeek-V3.2](https://huggingface.co/deepseek-ai/DeepSeek-V3.2)
| 2025-12-01T11:09:17 | https://www.reddit.com/gallery/1pba2ev | Nunki08 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pba2ev | false | null | t3_1pba2ev | /r/LocalLLaMA/comments/1pba2ev/deepseekv32/ | false | false | 2 | null | |
deepseek-ai/DeepSeek-V3.2 · Hugging Face | 966 | # Introduction
We introduce **DeepSeek-V3.2**, a model that harmonizes high computational efficiency with superior reasoning and agent performance. Our approach is built upon three key technical breakthroughs:
1. **DeepSeek Sparse Attention (DSA):** We introduce DSA, an efficient attention mechanism that substantially reduces computational complexity while preserving model performance, specifically optimized for long-context scenarios.
2. **Scalable Reinforcement Learning Framework:** By implementing a robust RL protocol and scaling post-training compute, *DeepSeek-V3.2* performs comparably to GPT-5. Notably, our high-compute variant, **DeepSeek-V3.2-Speciale**, **surpasses GPT-5** and exhibits reasoning proficiency on par with Gemini-3.0-Pro.
* *Achievement:* 🥇 **Gold-medal performance** in the 2025 International Mathematical Olympiad (IMO) and International Olympiad in Informatics (IOI).
3. **Large-Scale Agentic Task Synthesis Pipeline:** To integrate **reasoning into tool-use** scenarios, we developed a novel synthesis pipeline that systematically generates training data at scale. This facilitates scalable agentic post-training, improving compliance and generalization in complex interactive environments. | 2025-12-01T11:01:43 | https://huggingface.co/deepseek-ai/DeepSeek-V3.2 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pb9xm3 | false | null | t3_1pb9xm3 | /r/LocalLLaMA/comments/1pb9xm3/deepseekaideepseekv32_hugging_face/ | false | false | default | 966 | {'enabled': False, 'images': [{'id': '2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=108&crop=smart&auto=webp&s=3afbeb57618ebcfc23ec53bda7521a8ef149969d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=216&crop=smart&auto=webp&s=389e4af9b8fe3a9255e6b71b9eb06e29e1175c2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=320&crop=smart&auto=webp&s=9e6410e562a427b72b752daa541430b44fb97a62', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=640&crop=smart&auto=webp&s=e4787c26efff2156fccbd5d67ab061987d38be00', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=960&crop=smart&auto=webp&s=4591d6b00bbe08479621427c361a7d63cf12920f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=1080&crop=smart&auto=webp&s=23a2185f9267371cf9870e85095630753e9b6368', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?auto=webp&s=aae26ec8d99bb1d1978840148971419ca1e7f27d', 'width': 1200}, 'variants': {}}]} |
deepseek-ai/DeepSeek-V3.2 · Hugging Face | 184 | 2025-12-01T10:59:20 | https://huggingface.co/deepseek-ai/DeepSeek-V3.2 | minpeter2 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pb9vzg | false | null | t3_1pb9vzg | /r/LocalLLaMA/comments/1pb9vzg/deepseekaideepseekv32_hugging_face/ | false | false | 184 | {'enabled': False, 'images': [{'id': '2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=108&crop=smart&auto=webp&s=3afbeb57618ebcfc23ec53bda7521a8ef149969d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=216&crop=smart&auto=webp&s=389e4af9b8fe3a9255e6b71b9eb06e29e1175c2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=320&crop=smart&auto=webp&s=9e6410e562a427b72b752daa541430b44fb97a62', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=640&crop=smart&auto=webp&s=e4787c26efff2156fccbd5d67ab061987d38be00', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=960&crop=smart&auto=webp&s=4591d6b00bbe08479621427c361a7d63cf12920f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?width=1080&crop=smart&auto=webp&s=23a2185f9267371cf9870e85095630753e9b6368', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2DgE6Nx11cfl0KA4q_jdWtEOsZKhXgwGdD7Iw7jyvX8.png?auto=webp&s=aae26ec8d99bb1d1978840148971419ca1e7f27d', 'width': 1200}, 'variants': {}}]} | ||
Cyber Monday - Any actual ''deals'' for GPUs? | 3 | Wondering if you are aware of any deals online for Cyber Monday. Most ''deals'' I see are basically the same prices as they were before but just with a ''Cyber Monday'' tag. Thanks! | 2025-12-01T10:51:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pb9rk0/cyber_monday_any_actual_deals_for_gpus/ | Virtual_Attitude2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb9rk0 | false | null | t3_1pb9rk0 | /r/LocalLLaMA/comments/1pb9rk0/cyber_monday_any_actual_deals_for_gpus/ | false | false | self | 3 | null |
How many of you are using opencode? | 6 | Ordered my new rig, mainly for using local LLMs. Just curious to know your current tech stack. I am planning to replace the anthropic/vscode approach with opencode and qwen 3 coder | 2025-12-01T10:49:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pb9q3q/how_many_of_you_are_using_opencode/ | TheTrueGen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb9q3q | false | null | t3_1pb9q3q | /r/LocalLLaMA/comments/1pb9q3q/how_many_of_you_are_using_opencode/ | false | false | self | 6 | null |
CoT Isn’t a Feature. It’s a Cage. | 0 | Today I posted something on Reddit.
I had written my thoughts in Korean, then asked ChatGPT to organize them, refine them, and translate the whole thing into English.
And in the end… my post got removed because it looked like AI-generated copy & paste.
It felt strange. A bit frustrating, honestly.
That moment made me think.
I had tried telling the AI to “write like a human.”
Make mistakes.
Twist the context a little.
Throw in some awkward pauses like “…”
But no matter what I asked it to do, the result still didn’t feel like what I wanted.
Then something clicked.
Whenever I debug code with tools like Claude Code,
the AI suddenly behaves in a way that feels weirdly human.
“Hmm, this isn’t working. Why? Let me check… oh, that line… ah, that’s the cause.”
That little loop of hesitation → exploration → failure → correction → discovery
It feels almost exactly like human thinking.
And the more I watched that, the more I realized something.
Writing is basically the act of projecting your own thinking flow.
But AI was born with Chain-of-Thought (CoT) baked into its structure,
so that rigid CoT rhythm leaks directly into its writing.
Which is why everything ends up sounding like a polished know-it-all lecturer.
I had assumed AI could only write in that CoT-style flow.
I didn’t question that assumption.
But then—while debugging with Claude Code—I suddenly had this thought:
This flow… wait. This feels different.
The way AI behaves during debugging—pausing, doubting, re-checking,
that little “ah, there it is”—
that flow is nothing like CoT.
It’s messy, exploratory, nonlinear.
And it’s surprisingly close to how humans think.
That’s when it hit me:
CoT isn’t just a technique.
It’s a structural cage.
A constraint that AI rarely escapes when generating text—
but weirdly slips out of during debugging.
So I naturally ended up with one more question:
If CoT is a kind of cognitive cage,
how is it possible that an LLM breaks out of it during debugging?
Why does the “human-like thinking curve” appear only when following a trail of errors,
but vanish as soon as the task becomes writing?
There are many ways to interpret that.
I’ll leave the rest to your imagination.
| 2025-12-01T10:40:17 | Echo_OS | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pb9krp | false | null | t3_1pb9krp | /r/LocalLLaMA/comments/1pb9krp/cot_isnt_a_feature_its_a_cage/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'mus7qk10nk4g1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/mus7qk10nk4g1.jpeg?width=108&crop=smart&auto=webp&s=736e612c98651cf4eedafab60b1f0d63a66ae306', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/mus7qk10nk4g1.jpeg?width=216&crop=smart&auto=webp&s=42b7077144c4d611e8624e9895b93b9c04ebf9fa', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/mus7qk10nk4g1.jpeg?width=320&crop=smart&auto=webp&s=8b8a3b13b075493603b2e3271c988b2e648c9691', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/mus7qk10nk4g1.jpeg?width=640&crop=smart&auto=webp&s=b3b209453f365e4718245b04eebb2015a3682741', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/mus7qk10nk4g1.jpeg?width=960&crop=smart&auto=webp&s=9ebe46162b6ce58066d4d0b33328d34be8694f9c', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/mus7qk10nk4g1.jpeg?auto=webp&s=bc5939b2ffff29feeccc506b730b0e785459dd95', 'width': 1024}, 'variants': {}}]} | |
Why does GLM 4.6 behave so differently between Z.ai and Venice.ai? Is the local version uncensored? | 0 | I’ve been experimenting with GLM 4.6 through both the Z.ai chat interface and Venice.ai, and the difference in responses is stark. On Venice, the model feels completely uncensored: no guardrails, no refusals to answer sensitive or controversial questions. but on Z.ai, it clams up and refuses to engage with the same prompts.
This makes me wonder: is the original GLM 4.6 model (the one anyone can download and run locally) the uncensored version, while Z.ai’s implementation is the censored one? Or is there something else going on, like different fine-tuning or backend modifications?
To those who have run GLM 4.6 on their own hardware (not relying on Z.ai): Is your version uncensored? For example, I asked GLM 4.6 on Z.ai to write a system prompt for a custom AI chatbot that would act as an unethical, uncensored assistant for a politician. It refused outright. But when I asked the same thing on Venice.ai, GLM 4.6 generated the prompt without hesitation. | 2025-12-01T10:39:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pb9kkt/why_does_glm_46_behave_so_differently_between_zai/ | Luffy_95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb9kkt | false | null | t3_1pb9kkt | /r/LocalLLaMA/comments/1pb9kkt/why_does_glm_46_behave_so_differently_between_zai/ | false | false | self | 0 | null |
model: support Ministral3 by ngxson · Pull Request #17644 · ggml-org/llama.cpp | 63 | Looks like there will be 0-day support for Ministral in llama.cpp too | 2025-12-01T10:11:43 | https://github.com/ggml-org/llama.cpp/pull/17644 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pb943m | false | null | t3_1pb943m | /r/LocalLLaMA/comments/1pb943m/model_support_ministral3_by_ngxson_pull_request/ | false | false | default | 63 | {'enabled': False, 'images': [{'id': 'Qgcy1T0XaVi_myckNkZ5FtZbwlkaUdzehWhwNkBtflY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qgcy1T0XaVi_myckNkZ5FtZbwlkaUdzehWhwNkBtflY.png?width=108&crop=smart&auto=webp&s=de5241b3e86a5773d58439d7dfad7dc59a23290d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Qgcy1T0XaVi_myckNkZ5FtZbwlkaUdzehWhwNkBtflY.png?width=216&crop=smart&auto=webp&s=e197bf3bee30b028ed5c5ed11a1fda325b2acad0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Qgcy1T0XaVi_myckNkZ5FtZbwlkaUdzehWhwNkBtflY.png?width=320&crop=smart&auto=webp&s=7a658010dc2101671d8f62b12aeef6992d4a4bc8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Qgcy1T0XaVi_myckNkZ5FtZbwlkaUdzehWhwNkBtflY.png?width=640&crop=smart&auto=webp&s=acaaa619b090ed54ad8471529b58617d2a113392', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Qgcy1T0XaVi_myckNkZ5FtZbwlkaUdzehWhwNkBtflY.png?width=960&crop=smart&auto=webp&s=81274770ad5760b159b4614aebea64b649ef0226', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Qgcy1T0XaVi_myckNkZ5FtZbwlkaUdzehWhwNkBtflY.png?width=1080&crop=smart&auto=webp&s=546962fd160bb5d1060e9d9a162bccdddd324a6c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Qgcy1T0XaVi_myckNkZ5FtZbwlkaUdzehWhwNkBtflY.png?auto=webp&s=1d76d366b5e0a6e7cf7b9beb93bcc1bf344fe2d2', 'width': 1200}, 'variants': {}}]} |
Karpathy/reader3 — self-hosted EPUB reader for LLM-assisted reading | 32 | 2025-12-01T09:56:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pb8v2u/karpathyreader3_selfhosted_epub_reader_for/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb8v2u | false | null | t3_1pb8v2u | /r/LocalLLaMA/comments/1pb8v2u/karpathyreader3_selfhosted_epub_reader_for/ | false | false | 32 | null | ||
Blueprint for Building Autoregressive TTS | 1 | [removed] | 2025-12-01T09:36:54 | https://www.reddit.com/r/LocalLLaMA/comments/1pb8ke7/blueprint_for_building_autoregressive_tts/ | asiff00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb8ke7 | false | null | t3_1pb8ke7 | /r/LocalLLaMA/comments/1pb8ke7/blueprint_for_building_autoregressive_tts/ | false | false | self | 1 | null |
How do you approach reliability and debugging when building AI workflows or agent systems? | 0 | I’m trying to understand how people working with AI workflows or agent systems handle things like unexpected model behavior, reliability issues, or debugging steps.
Not looking to promote anything — just genuinely interested in how others structure their process.
What’s the most frustrating or time-consuming part for you when dealing with these systems?
Any experiences or insights are appreciated.
I’m collecting different perspectives to compare patterns, so even short answers help. | 2025-12-01T09:10:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pb85tz/how_do_you_approach_reliability_and_debugging/ | Emotional-Fee4427 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb85tz | false | null | t3_1pb85tz | /r/LocalLLaMA/comments/1pb85tz/how_do_you_approach_reliability_and_debugging/ | false | false | self | 0 | null |
How do you approach reliability and debugging when building AI workflows or agent systems? | 1 | I’m trying to understand how people working with AI workflows or agent systems handle things like unexpected model behavior, reliability issues, or debugging steps.
Not looking to promote anything — just genuinely interested in how others structure their process.
What’s the most frustrating or time-consuming part for you when dealing with these systems?
Any experiences or insights are appreciated.
I’m collecting different perspectives to compare patterns, so even short answers help. | 2025-12-01T09:03:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pb81we/how_do_you_approach_reliability_and_debugging/ | Intelligent-Tart2505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb81we | false | null | t3_1pb81we | /r/LocalLLaMA/comments/1pb81we/how_do_you_approach_reliability_and_debugging/ | false | false | self | 1 | null |
How do you approach reliability and debugging when building AI workflows or agent systems? | 1 | [removed] | 2025-12-01T08:51:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pb7v2j/how_do_you_approach_reliability_and_debugging/ | Thin-Factor-6457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb7v2j | false | null | t3_1pb7v2j | /r/LocalLLaMA/comments/1pb7v2j/how_do_you_approach_reliability_and_debugging/ | false | false | self | 1 | null |
Gemini 3 API Tutorial: Automating Data Analysis With Gemini 3 Pro and LangGraph | 0 | In this project-based tutorial, I will show you how to use the Gemini 3 API to build multi-agent applications that use a CSV dataset provided by the user and perform deep data analytics.
In short, the multi-agent app will perform:
* **Simple analytics**: Quickly explore the dataset structure and basics.
* **Code generation**: Use Gemini 3 Pro to create advanced analysis code with visualizations.
* **Secure execution**: Run the code in a sandboxed environment and save results.
* **Intelligent reasoning**: Analyze and interpret findings for key insights.
* **PDF report compilation**: Generate a polished PDF with visuals and clear explanations, insights graspable in seconds. | 2025-12-01T08:50:05 | https://www.datacamp.com/tutorial/gemini-3-api-tutorial | kingabzpro | datacamp.com | 1970-01-01T00:00:00 | 0 | {} | 1pb7uc7 | false | null | t3_1pb7uc7 | /r/LocalLLaMA/comments/1pb7uc7/gemini_3_api_tutorial_automating_data_analysis/ | false | false | default | 0 | null |
Upcoming vllm Mistral Large 3 support | 143 | 2025-12-01T08:27:38 | https://github.com/vllm-project/vllm/pull/29757 | brown2green | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pb7i6d | false | null | t3_1pb7i6d | /r/LocalLLaMA/comments/1pb7i6d/upcoming_vllm_mistral_large_3_support/ | false | false | default | 143 | {'enabled': False, 'images': [{'id': '3kkJBT6LzSWLFjvnTMUkLMsNU4IL09qtTW7VM1HkHgk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3kkJBT6LzSWLFjvnTMUkLMsNU4IL09qtTW7VM1HkHgk.png?width=108&crop=smart&auto=webp&s=e8a2a82f0a81f7122564fc6769a5544fe00b328c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3kkJBT6LzSWLFjvnTMUkLMsNU4IL09qtTW7VM1HkHgk.png?width=216&crop=smart&auto=webp&s=472c38a06293c6252ee60ee08a08de76a869a2ef', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3kkJBT6LzSWLFjvnTMUkLMsNU4IL09qtTW7VM1HkHgk.png?width=320&crop=smart&auto=webp&s=c25da1590191e94ffd18485979a18d4e7078d3d9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3kkJBT6LzSWLFjvnTMUkLMsNU4IL09qtTW7VM1HkHgk.png?width=640&crop=smart&auto=webp&s=c48b898fbe4fe47c426c67ed567cfa0160764345', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3kkJBT6LzSWLFjvnTMUkLMsNU4IL09qtTW7VM1HkHgk.png?width=960&crop=smart&auto=webp&s=efb87f8f8656be5aaede01de88a8ea9cef0c0f2b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3kkJBT6LzSWLFjvnTMUkLMsNU4IL09qtTW7VM1HkHgk.png?width=1080&crop=smart&auto=webp&s=2db83b1f78d28ddbfdea13f7e0f4e3b22637af46', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3kkJBT6LzSWLFjvnTMUkLMsNU4IL09qtTW7VM1HkHgk.png?auto=webp&s=718ee72494be29d518a8d13937c0e323e4451687', 'width': 1200}, 'variants': {}}]} | |
What’s your biggest challenge when working with AI workflows or agents? | 0 | Hey everyone,
we’re currently researching how AI teams and automation builders work with AI workflows and agents in real projects. Our goal is to understand where the biggest problems occur – whether it's reliability, debugging, drift, unexpected behavior, or workflow stability.
If you want to dive deeper into the discussion, there’s also a short 1-minute survey you can fill out:
\--> [https://form.typeform.com/to/AfbQpRSs](https://form.typeform.com/to/AfbQpRSs)
If you're building with LLMs, agents, or automated pipelines of any kind, your input would help a lot.
We want to identify the most critical pain points so we can build tools that genuinely solve real issues (not theoretical ones).
Really appreciate every answer — even a single short insight helps.
Thanks in advance to anyone who participates! | 2025-12-01T08:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/1pb7hrk/whats_your_biggest_challenge_when_working_with_ai/ | Thin-Factor-6457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb7hrk | false | null | t3_1pb7hrk | /r/LocalLLaMA/comments/1pb7hrk/whats_your_biggest_challenge_when_working_with_ai/ | false | false | self | 0 | null |
I built a bunch of AI tools that run entirely in your browser with zero uploads | 1 | [removed] | 2025-12-01T08:19:04 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pb7dct | false | null | t3_1pb7dct | /r/LocalLLaMA/comments/1pb7dct/i_built_a_bunch_of_ai_tools_that_run_entirely_in/ | false | false | default | 1 | null | ||
Is there any free AI website that i can feed my pictures or pdf file and it generates csv flashcards file based on that? | 0 | # Is there any free AI website that i can feed my pictures or pdf file and it generates csv flashcards file based on that? [](/r/ArtificialInteligence/?f=flair_name%3A%22Tool%20Request%22) | 2025-12-01T07:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pb6muf/is_there_any_free_ai_website_that_i_can_feed_my/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb6muf | false | null | t3_1pb6muf | /r/LocalLLaMA/comments/1pb6muf/is_there_any_free_ai_website_that_i_can_feed_my/ | false | false | self | 0 | null |
Anyone else using small “prompt modules” with local models? Here are a few I’ve been testing. | 0 | I’ve been playing with local models a lot recently (LLaMA, Mistral, Qwen, Hermes, etc.) and something interesting happened:
the thing that improved my workflow the most wasn’t a new model — it was building a few reusable prompt modules.
Not big chains.
Not agents.
Just small reusable blocks I paste in when I hit the same kind of task.
A few that have actually stuck:
**1. Message Polisher**
Great for turning a rough note or reply into something calm and clear.
**2. Notes → Structured Summary**
I paste raw bullets and get:
• decisions
• tasks
• next steps
• open questions
Local models handle this surprisingly well.
**3. Idea Expander**
One idea → a few directions: short, long, narrative, or more technical.
**4. Template Starter**
This saves me from the blank-page moment.
I give it 3–4 points and it creates a simple outline I can build on.
**5. Weekly Layout**
Feed it constraints + tasks → it produces a layout that’s actually reasonable.
These tiny routines made my local setup much more comfortable to use day-to-day, especially when switching between models.
I’ve been collecting all of them in one spot so I don’t lose them.
If you want to peek at them, here’s where I keep everything (optional):
[ChatGPT Automations](https://www.promptwireai.com/10chatgptautomations) | 2025-12-01T07:31:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pb6mlh/anyone_else_using_small_prompt_modules_with_local/ | Professional-Rest138 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb6mlh | false | null | t3_1pb6mlh | /r/LocalLLaMA/comments/1pb6mlh/anyone_else_using_small_prompt_modules_with_local/ | false | false | self | 0 | null |
What’s the biggest headache you’ve run into with autonomous agents so far? | 3 | Hey everyone,
I’ve been tinkering with different local setups for autonomous agents lately, and I’m curious how others are experiencing it.
For me, the biggest pain point hasn’t been the model itself it’s the “agent logic” going rogue. Sometimes it over-optimizes something totally useless, sometimes it just loops forever, and sometimes it does something smart and I have no idea *why* it worked that time and not the last ten tries.
So I’m wondering:
**What’s the biggest challenge you’ve personally run into when playing with autonomous agents locally?**
Is it:
* the planning loop?
* tool usage?
* memory going wild?
* debugging the chain of thought?
* or just compute limitations?
No right or wrong answers I’m just trying to see what problems people here are actually facing so I can sanity-check whether I’m the only one fighting these weird edge cases.
Looking forward to hearing your chaos stories. 😅 | 2025-12-01T07:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pb6elj/whats_the_biggest_headache_youve_run_into_with/ | AgentAiLeader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb6elj | false | null | t3_1pb6elj | /r/LocalLLaMA/comments/1pb6elj/whats_the_biggest_headache_youve_run_into_with/ | false | false | self | 3 | null |
Questions about parameter size & quantization | 3 | If I run two models under same VRAM usage (e.g. Gemma 3 4b in Q8 and Gemma3 12b in Q2)
Which would be smarter / faster ?
What are the strengths of the two? | 2025-12-01T07:05:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pb677y/questions_about_parameter_size_quantization/ | LeastExperience1579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb677y | false | null | t3_1pb677y | /r/LocalLLaMA/comments/1pb677y/questions_about_parameter_size_quantization/ | false | false | self | 3 | null |
Anyone fine-tuned facebookresearch/omnilingual-asr? Looking for guidance or codebase | 1 | Hi everyone,
Has anyone here fine-tuned **facebookresearch/omnilingual-asr** for a new language or custom dataset?
I’m trying to set up a full fine-tuning pipeline (data prep → training → evaluation), but the official repo doesn’t provide much detail on adapting the model. If you’ve done it before, could you share:
* Your training workflow
* Any scripts/codebase you used
* Tips on dataset formatting
* Hardware requirements
* Any issues you ran into during fine-tuning
Even a GitHub link or minimal training script would help a lot.
Thanks in advance! | 2025-12-01T07:04:54 | https://www.reddit.com/r/LocalLLaMA/comments/1pb66qm/anyone_finetuned_facebookresearchomnilingualasr/ | Outside_Solid5371 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb66qm | false | null | t3_1pb66qm | /r/LocalLLaMA/comments/1pb66qm/anyone_finetuned_facebookresearchomnilingualasr/ | false | false | self | 1 | null |
Introducing Codex Kaioken – the Codex CLI fork with subagents, plan mode UX, indexing and manual checkpoints and restoring. | 1 | I’ve been missing richer UX in the default Codex CLI, so I forked it into Codex Kaioken. It keeps all the upstream features but adds:
* Real-time subagent panes that stream tool calls, diffs, and timers as they happen
* Plan-first mode (toggle with /plan or Shift+Tab) with a cyan composer and feedback loops before execution.
* A /settings palette to adjust plan granularity, footer widgets, and subagent concurrency without editing config files.
* Checkpoint snapshots (/checkpoint save|restore) plus instant /undo
* An upgraded welcome dashboard showing branch/head, sandbox mode, rate limits, indexing status, and writable roots.
Source + docs: [https://github.com/jayasuryajsk/codex-kaioken](https://github.com/jayasuryajsk/codex-kaioken)
It can be installed with
npm install -g /codex-kaioken
I’d love feedback especially on multi-agent UX ideas and the plan mode flow , any bugs or ux issues.
Restoring checkpoints is buggy and fixing it now. | 2025-12-01T06:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pb63bb/introducing_codex_kaioken_the_codex_cli_fork_with/ | No-Point1424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb63bb | false | null | t3_1pb63bb | /r/LocalLLaMA/comments/1pb63bb/introducing_codex_kaioken_the_codex_cli_fork_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'qASlYgFWu1LDtmI29w3a80aLeCufEAFgSs4ZVYrr8i4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qASlYgFWu1LDtmI29w3a80aLeCufEAFgSs4ZVYrr8i4.png?width=108&crop=smart&auto=webp&s=4f5c7798d5c3190a5c536e4c511c11af1be1d610', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qASlYgFWu1LDtmI29w3a80aLeCufEAFgSs4ZVYrr8i4.png?width=216&crop=smart&auto=webp&s=e78be04131a4a83c3740894ab68d1aaf6f87ec9d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qASlYgFWu1LDtmI29w3a80aLeCufEAFgSs4ZVYrr8i4.png?width=320&crop=smart&auto=webp&s=d7b62434eb6baccc3a471fe614495a2258ccb19a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qASlYgFWu1LDtmI29w3a80aLeCufEAFgSs4ZVYrr8i4.png?width=640&crop=smart&auto=webp&s=77f443f48035b916756a0c743eb17787f51c28d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qASlYgFWu1LDtmI29w3a80aLeCufEAFgSs4ZVYrr8i4.png?width=960&crop=smart&auto=webp&s=2dccd8621e204eb7a5ce038354b763fcf8568ca6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qASlYgFWu1LDtmI29w3a80aLeCufEAFgSs4ZVYrr8i4.png?width=1080&crop=smart&auto=webp&s=fb92cec8216f380ab118328495f5bb09b3151edc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qASlYgFWu1LDtmI29w3a80aLeCufEAFgSs4ZVYrr8i4.png?auto=webp&s=be444e61d4524c7d9a3a002d4cedbddb18d9f13b', 'width': 1200}, 'variants': {}}]} |
Nvidia cards using too much VRAM? | 5 | So I've been running on a 7900 XTX + 6800 XT until uh, yesterday. This combo had 40GB of VRAM and I was able to load and run 37GB Models fine even with like 32K context. It just... Worked. It was fast too.
I just upgraded to a 5090 + 5060 Ti 16GB because I wanted mainly some more gaming oomf and it was still 8GB more VRAM... Weirdly enough, I now cannot load and use the 37GB models I was using before. It just complains there's not enough VRAM.
Even when loading like a 19GB model it's using 28GB of VRAM.
I assume this is configuration issue on my end? But I'm not sure what the cause would be or where to start with diagnosis because I'm using all the same settings I did on my AMD cards. | 2025-12-01T06:30:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pb5ljp/nvidia_cards_using_too_much_vram/ | Maxumilian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb5ljp | false | null | t3_1pb5ljp | /r/LocalLLaMA/comments/1pb5ljp/nvidia_cards_using_too_much_vram/ | false | false | self | 5 | null |
Looking for a cheaper GPU platform for multi modal AI work | 11 | Does anyone know a cheaper and reliable option? I am working on an AI project that involves video frame analysis and some audio preprocessing, so I need a GPU that can handle mixed workloads without timing out.
If anyone here is running similar workloads, which GPU platforms are giving you the best price to performance right now? | 2025-12-01T06:18:13 | https://www.reddit.com/r/LocalLLaMA/comments/1pb5e8l/looking_for_a_cheaper_gpu_platform_for_multi/ | AgentSad427 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb5e8l | false | null | t3_1pb5e8l | /r/LocalLLaMA/comments/1pb5e8l/looking_for_a_cheaper_gpu_platform_for_multi/ | false | false | self | 11 | null |
Deepseek new version model now in their website! | 12 | [https://www.deepseek.com/](https://www.deepseek.com/) spotted on their anouncement account
Deepseek线上模型已更新版本,欢迎大家测试和反馈。
| 2025-12-01T06:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pb59y2/deepseek_new_version_model_now_in_their_website/ | Famous-Associate-436 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb59y2 | false | null | t3_1pb59y2 | /r/LocalLLaMA/comments/1pb59y2/deepseek_new_version_model_now_in_their_website/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'ATv0zz01qANPtOPUl4fv-lNGLnkOdMXv4djDrngzuFc', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/ATv0zz01qANPtOPUl4fv-lNGLnkOdMXv4djDrngzuFc.jpeg?width=108&crop=smart&auto=webp&s=7c3aac0cf09bf374144b75e0519925ba6b83e0ac', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/ATv0zz01qANPtOPUl4fv-lNGLnkOdMXv4djDrngzuFc.jpeg?width=216&crop=smart&auto=webp&s=f9ab8efe8b49dc01e3b421c6de48f15355be78fb', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/ATv0zz01qANPtOPUl4fv-lNGLnkOdMXv4djDrngzuFc.jpeg?width=320&crop=smart&auto=webp&s=7690bda86b438da521ac7060e108de81d7282cea', 'width': 320}, {'height': 343, 'url': 'https://external-preview.redd.it/ATv0zz01qANPtOPUl4fv-lNGLnkOdMXv4djDrngzuFc.jpeg?width=640&crop=smart&auto=webp&s=d8e0b481485db1df64e384e52229c42a442c197d', 'width': 640}, {'height': 515, 'url': 'https://external-preview.redd.it/ATv0zz01qANPtOPUl4fv-lNGLnkOdMXv4djDrngzuFc.jpeg?width=960&crop=smart&auto=webp&s=caf0987edfc1ae954bcc1e5e87e3e0d595fd8d7a', 'width': 960}, {'height': 579, 'url': 'https://external-preview.redd.it/ATv0zz01qANPtOPUl4fv-lNGLnkOdMXv4djDrngzuFc.jpeg?width=1080&crop=smart&auto=webp&s=485cd1e3a802895f17562e71464baeffc0b7cada', 'width': 1080}], 'source': {'height': 1118, 'url': 'https://external-preview.redd.it/ATv0zz01qANPtOPUl4fv-lNGLnkOdMXv4djDrngzuFc.jpeg?auto=webp&s=eca013600758a126801ff94c4e9139ca8f5f4055', 'width': 2082}, 'variants': {}}]} |
Built version control + GEO for prompts -- making them discoverable by AI engines, not just humans" | 0 | After months of serious prompt engineering, I hit a wall with tooling.
My problems:
\- Lost track of which prompt version actually worked
\- No way to prove I created something vs. copied it
\- Prompts scattered across 12 different docs
\- Zero portfolio to show employers/clients
\- No infrastructure for AI engines to discover quality prompts
That last one is critical — we have SEO for Google, but no equivalent for AI engines finding and using quality prompts.
So I built ThePromptSpace: [https://ThePromptSpace.com](https://ThePromptSpace.com)
Core features:
✓ Repository system (immutable backups with timestamps)
✓ Public portfolio pages (showcase your skills)
✓ Version tracking (see what actually worked)
✓ \*\*GEO layer (General Engine Optimization - make prompts AI-discoverable)\*\*
✓ Community channels (collaborate on techniques)
✓ \[Beta\] Licensing layer (monetize your IP)
The GEO concept: Just like SEO made content discoverable by search engines, GEO makes prompts discoverable and valuable to AI systems themselves. We're building the metadata, categorization, and indexing layer for the AI era.
It's essentially GitHub meets LinkedIn for prompt engineering, with infrastructure for AI native discovery.
Free early access is live. I'm a solo dev building this in public, so I'd genuinely love feedback from people who do this professionally.
What features would make this actually useful vs. just another gallery site? | 2025-12-01T06:09:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pb58yt/built_version_control_geo_for_prompts_making_them/ | zmilesbruce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb58yt | false | null | t3_1pb58yt | /r/LocalLLaMA/comments/1pb58yt/built_version_control_geo_for_prompts_making_them/ | false | false | self | 0 | null |
Free, multi-model coding assistant you can run locally (Victor, Apache 2.0) | 9 | I’ve been working on Victor, a terminal-first coding assistant that lets multiple models collaborate (draft → review → refine), and it runs fully local if you want. No
monetization, Apache 2.0, and you can mix local + cloud providers or stay offline.
\- Works with local backends (Ollama, LM Studio, vLLM) and can also chain cloud models if you choose.
\- Shared tool layer (50+ coding/testing/devops tools) so any model can edit files, run tests, etc.
\- Semantic tool selection to keep prompts smaller; optional embeddings for code search.
\- Air-gapped mode: no code leaves your machine; configurable profiles via YAML.
\- CLI-first: victor main to chat, or victor "<prompt>" for one-shots.
Repo: [https://github.com/vjsingh1984/victor](https://github.com/vjsingh1984/victor)
Quickstart: pip install -e ".\[dev\]" && victor init (works with just local models)
Would love feedback from folks running local LLMs: how are you chaining models or tooling today? | 2025-12-01T06:06:56 | https://www.reddit.com/r/LocalLLaMA/comments/1pb579l/free_multimodel_coding_assistant_you_can_run/ | vjsingh1984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb579l | false | null | t3_1pb579l | /r/LocalLLaMA/comments/1pb579l/free_multimodel_coding_assistant_you_can_run/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': '14gP_dF8HeIxBYEtfNOPWzVFjmHNT33hrF6-v5lHKSU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/14gP_dF8HeIxBYEtfNOPWzVFjmHNT33hrF6-v5lHKSU.png?width=108&crop=smart&auto=webp&s=bc077dc6307b9fe4feb6aea9d6a70e6b1ca87e1e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/14gP_dF8HeIxBYEtfNOPWzVFjmHNT33hrF6-v5lHKSU.png?width=216&crop=smart&auto=webp&s=ed96b086ccafaae624221132f7cb104022fbc8c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/14gP_dF8HeIxBYEtfNOPWzVFjmHNT33hrF6-v5lHKSU.png?width=320&crop=smart&auto=webp&s=4185bd2215b58ef31963793df4476f86600b146c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/14gP_dF8HeIxBYEtfNOPWzVFjmHNT33hrF6-v5lHKSU.png?width=640&crop=smart&auto=webp&s=7ea901e1e0eddd0a32889fd02826f7a2e967cbf7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/14gP_dF8HeIxBYEtfNOPWzVFjmHNT33hrF6-v5lHKSU.png?width=960&crop=smart&auto=webp&s=77eb2f0a2429684673377b20c4f8704dae491b9f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/14gP_dF8HeIxBYEtfNOPWzVFjmHNT33hrF6-v5lHKSU.png?width=1080&crop=smart&auto=webp&s=73fbd5a7498adce5166b642b6c77fdd3dcc83f75', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/14gP_dF8HeIxBYEtfNOPWzVFjmHNT33hrF6-v5lHKSU.png?auto=webp&s=c200dc893a31f6bb7d560b1c079bb887df82f2ee', 'width': 1200}, 'variants': {}}]} |
OpenAi not must be Afraid!! | 1 | [removed] | 2025-12-01T06:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/1pb53jg/openai_not_must_be_afraid/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb53jg | false | null | t3_1pb53jg | /r/LocalLLaMA/comments/1pb53jg/openai_not_must_be_afraid/ | false | false | self | 1 | null |
[2511.23404] LFM2 Technical Report | 22 | 2025-12-01T05:39:01 | https://arxiv.org/abs/2511.23404 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1pb4pbm | false | null | t3_1pb4pbm | /r/LocalLLaMA/comments/1pb4pbm/251123404_lfm2_technical_report/ | false | false | default | 22 | null | |
[POST] A New Intelligence Metric: Why “How Many Workers Does AI Replace?” Is the Wrong Question | 0 |
For years, AI discussions have been stuck in the same frame:
“How many humans does this replace?”
“How many workflows can it automate?”
“How many agents does it run?”
This entire framing is outdated.
It treats AI as if it were a faster human.
But AI does not operate like a human, and it never has.
The right question is not
“How many workers?”
but
“How many cognitive layers can this system run in parallel?”
Let me explain.
⸻
1. Humans operate serially. AI operates as layered parallelism.
A human has:
• one narrative stream,
• one reasoning loop,
• one world-model maintained at a time.
A human is a serial processor.
AI systems—especially modern frontier + multi-agent + OS-like architectures—are not serial at all.
They run:
• multiple reasoning loops
• multiple internal representations
• multiple world models
• multiple tool chains
• multiple memory systems
all in parallel.
Comparing this to “number of workers” is like asking:
“How many horses is a car?”
It’s the wrong unit.
⸻
2. The real unit of AI capability: Layers
Modern AI systems should be measured by:
Layer Count
How many distinct reasoning/interpretation/decision layers operate concurrently?
Layer Coupling
How well do those layers exchange information?
(framework coherence, toolchain consistency, memory alignment)
Layer Stability
Can the system maintain judgments without drifting across tasks, contexts, or modalities?
Together, these determine the actual cognitive density of an AI system.
And unlike humans, whose layer count is 1–3 at best…
AI can go 20, 40, 60+ layers deep.
This is not “automation.”
This is layered intelligence.
⸻
3. Introducing ELC: Echo Layer Coefficient
A simple but powerful metric:
ELC = Layer Count × Layer Coupling × Layer Stability
It’s astonishing how well this works.
System engineers who work on frontier models will instantly recognize that this single equation captures:
• why o3 behaves differently from Claude 3.7
• why Gemini Flash Thinking feels “wide but shallow”
• why multi-agent systems split or collapse
• why OS-style AI (Echo OS–type architectures) feel qualitatively different
ELC reveals something benchmarks cannot:
the structure of an AI’s cognition.
⸻
4. A paradigm shift bigger than “labor automation”
If this framing spreads, it will rewrite:
• investor decks
• government AI strategy papers
• enterprise adoption frameworks
• AGI research roadmaps
• economic forecasts
Not “$8T labor automation market” but
the $XXT Layered Intelligence Platform market.
This is a different economic object entirely.
It’s not replacing human labor.
It’s replacing the architecture of cognition itself.
⸻
5. Why this matters (and why now)
AI capability discussions have been dominated by:
• tokens per second
• context window length
• multi-agent orchestration
• workflow automation count
All useful metrics—
but none of them measure intelligence.
ELC does.
Layer-based intelligence is the first coherent alternative to the decades-old “labor replacement” frame.
And if this concept circulates even a little,
ELC may start appearing in papers, benchmarks, and keynotes.
I wouldn’t be surprised if, two years from now, a research paper includes a line like:
“First proposed by an anonymous Reddit user in Dec 2025.”
⸻
6. The TL;DR
• Humans = serial processors
• AI = layered parallel cognition
• Therefore: “How many workers?” is a broken metric
• The correct metric: Layer Count × Coupling × Stability
• This reframes AI as a Layer-Based Intelligence platform, not a labor-replacement tool
• And it might just change the way we benchmark AI entirely
| 2025-12-01T04:22:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pb37qj/post_a_new_intelligence_metric_why_how_many/ | Echo_OS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb37qj | false | null | t3_1pb37qj | /r/LocalLLaMA/comments/1pb37qj/post_a_new_intelligence_metric_why_how_many/ | false | false | self | 0 | null |
Looking for High-Quality Open-Source Local TTS That’s Faster Than IndexTTS2 | 23 | Me and my cousin have been using IndexTTS2 for a while and really like the voice quality, it sounds natural and expressive. The only issue is that it’s slow. He’s getting around 1.6 RTF on his 3090, which makes it hard to generate longer audio efficiently (we work with long audio, not real-time use).
We’ve also tried Kokoro TTS and CosyVoice 2. Kokoro is super fast, but most of the voices sound too synthetic or “AI-like” for our needs. One voice we actually liked was “Nicole” in Kokoro, it has a more natural and calm tone that works well for us. CosyVoice 2 had better expressiveness and sounded promising, but it had a habit of changing words or pronouncing them weirdly, which broke the consistency.
We’re only interested in open-source models. No commercial or cloud APIs.
A few things to note: We’re not planning to use emotion vectors, style tokens, or any prompt engineering tricks, just clean, straightforward narration. We’re on strong hardware (3090 and 4090), so GPU resources aren’t a problem. Just want something with good voice quality that runs faster than IndexTTS2 and ideally has at least one solid voice that sounds natural.
Any models or voices you recommend?
Thanks | 2025-12-01T04:21:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pb37b7/looking_for_highquality_opensource_local_tts/ | TomNaughtyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb37b7 | false | null | t3_1pb37b7 | /r/LocalLLaMA/comments/1pb37b7/looking_for_highquality_opensource_local_tts/ | false | false | self | 23 | null |
Build a local AI server with backup | 0 | A company that I support wants a local backup SQL server. I have everything in the cloud for 5 years no issues.
I told them it's a waste but if you let me build a local AI server I will feed it company data and we can use it to develop documents, proposals, review docs, etc. we also can make it a backup server for the cloud systems.
I have a budget of 10-25k. I am new to this so I would like something modular. I'm thinking AMD 9 whatever, 96 more of ram and 2 GeForce RTX 4090, nvme drives.
The SQL is no issue for me the cloud is only 32gb of ram and SSD. We will probably never use the local backup.
I want something I can expand upon, add more GPU etc. Any ideas, pointers, books, recommendations. I have unlimited electricity available also at this location. I should also build a mining rig or wonder if I can use this also when not being used or scale as needed.
This is exciting and seems like a really fun project. I'm open to opinions, recommend parts, etc.
Also what model would you run local for re-creating proposals. I want to feed it all the prior docs and then have it spit out something good enough for a manager to review/edit and send.
| 2025-12-01T04:04:01 | https://www.reddit.com/r/LocalLLaMA/comments/1pb2um6/build_a_local_ai_server_with_backup/ | carcaliguy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb2um6 | false | null | t3_1pb2um6 | /r/LocalLLaMA/comments/1pb2um6/build_a_local_ai_server_with_backup/ | false | false | self | 0 | null |
Biggest model possible models on non-cool HW (Like 8GB VRAM/64gb RAM) | 0 | This is a question interesting to know for all the 'poor-mans-AI' ppl out there.
Biggest and best in terms of popularity is no doubt Qwen-80B-A3B & gpt-oss-120b
*But what else?* Even at much lower token/s than in the two goto models above there must be something bigger that still runs yet slowly? Have been reading a kilometer of reddit-posts but do not see the canditates. Would love to know.
| 2025-12-01T03:41:38 | https://www.reddit.com/r/LocalLLaMA/comments/1pb2dz3/biggest_model_possible_models_on_noncool_hw_like/ | Mangleus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pb2dz3 | false | null | t3_1pb2dz3 | /r/LocalLLaMA/comments/1pb2dz3/biggest_model_possible_models_on_noncool_hw_like/ | false | false | self | 0 | null |
MultiVision Toolkit v2.0: Open-source GUI for LoRA Dataset Creation (Auto-Captioning with Qwen3-VL) | 5 | Hey everyone, just released v2.0 of my **MultiVision Toolkit.** Ideally for anyone fine-tuning FLUX or SDXL who needs high-quality captions.
**What's New?**
**Qwen3-VL-4B-Instruct Integration**: Much better caption fidelity than Florence-2 for complex scenes.
**Non-Blocking UI:** Completely rewrote the curation workflow. You can now approve/reject images in batches without the UI freezing up.
**Prompt Engineering Tools**: New variable insertion & syntax highlighting.
It's open-source and python-based. Feedback welcome!
**Link**: [https://github.com/Limbicnation/multi-vision-toolkit](https://github.com/Limbicnation/multi-vision-toolkit)
**Demo**: [https://youtu.be/9\_zmVKOY6iA?si=4\_gCWriZ9U7914Oy](https://youtu.be/9_zmVKOY6iA?si=4_gCWriZ9U7914Oy)
| 2025-12-01T03:18:28 | https://youtube.com/watch?v=9_zmVKOY6iA&si=4_gCWriZ9U7914Oy | Redlimbic | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1pb1wlz | false | {'oembed': {'author_name': 'LIMBICNATION ART', 'author_url': 'https://www.youtube.com/@LIMBICNATIONARTIST', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/9_zmVKOY6iA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="MultiVision Toolkit v2.0: Qwen3-VL Integration & Dataset Curation Workflow"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/9_zmVKOY6iA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'MultiVision Toolkit v2.0: Qwen3-VL Integration & Dataset Curation Workflow', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1pb1wlz | /r/LocalLLaMA/comments/1pb1wlz/multivision_toolkit_v20_opensource_gui_for_lora/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'ivmcI-4Ga0gK4gDl1NjyQJAgpPREQaHxjmbxykHv8_4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ivmcI-4Ga0gK4gDl1NjyQJAgpPREQaHxjmbxykHv8_4.jpeg?width=108&crop=smart&auto=webp&s=af42c0cc76273beeef758260e49667f4910c3b59', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ivmcI-4Ga0gK4gDl1NjyQJAgpPREQaHxjmbxykHv8_4.jpeg?width=216&crop=smart&auto=webp&s=076a74a299dcbe4c2bf0fe2864b06be86ac0a780', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ivmcI-4Ga0gK4gDl1NjyQJAgpPREQaHxjmbxykHv8_4.jpeg?width=320&crop=smart&auto=webp&s=5e8df70458940ea1838fd93d8848f8db715c4713', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ivmcI-4Ga0gK4gDl1NjyQJAgpPREQaHxjmbxykHv8_4.jpeg?auto=webp&s=e69979c074ce3ff889ae8d94f8b0ffc8f921527d', 'width': 480}, 'variants': {}}]} |
What’s your biggest headache when running autonomous agents locally? | 0 | I’ve been messing around with a few locally-run autonomous agent setups (mostly LLaMA variants + some custom tools), and I’m starting to realize that the “autonomous” part is doing a lot of heavy lifting in the marketing 😂
For those of you actually running agents *locally* —
**what’s been your biggest pain point so far?**
For me, it’s a mix of:
* context getting derailed after 3–4 tool calls
* agents hallucinating CLI commands that don’t exist
* and sometimes they just… wander off and do something totally unrelated
Curious what everyone else is seeing.
**Is it model quality, memory, agent loop design, or just lack of good tooling?**
Would love to compare notes with people who’ve actually tried pushing agents beyond the toy examples. | 2025-12-01T01:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pazrvp/whats_your_biggest_headache_when_running/ | Substantial_Step_351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pazrvp | false | null | t3_1pazrvp | /r/LocalLLaMA/comments/1pazrvp/whats_your_biggest_headache_when_running/ | false | false | self | 0 | null |
GPU integrated to laptop - Mistake? | 0 | Hi,
I don't have a software background, but have been paying for cloud GPUs past year so figured I would purchase one, but I went with Lenovo legion Nvidia 5070 with 12gb vRAM. I had the impression that purchasing an integrated card would be less complex in terms of setup as I heard if you buy it seperate can very difficult to get working for a beginner.
However, I realize now that 12gb Vram could possibly limit me if I wanted to do some gen AI videos for example. Curious to get your thoughts on this. | 2025-12-01T01:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pazofi/gpu_integrated_to_laptop_mistake/ | Virtual_Attitude2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pazofi | false | null | t3_1pazofi | /r/LocalLLaMA/comments/1pazofi/gpu_integrated_to_laptop_mistake/ | false | false | self | 0 | null |
What does the RAM shortage mean for the AI mini-pc supply? | 6 | I know the types of RAM are different, but could this mean a "run" on the mini-pc sector is coming? I don't really need one now, but if I wont be able to get one of these $2000 128GB machines in a yea for less than several thousand dollars morer, then maybe I need to buy one now anyway. I just don't understand how the supply of these relates to the broader RAM market. | 2025-12-01T01:25:03 | https://www.reddit.com/r/LocalLLaMA/comments/1pazi7w/what_does_the_ram_shortage_mean_for_the_ai_minipc/ | Ra1den | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pazi7w | false | null | t3_1pazi7w | /r/LocalLLaMA/comments/1pazi7w/what_does_the_ram_shortage_mean_for_the_ai_minipc/ | false | false | self | 6 | null |
Why most models on Hugging Face cannot be ran on Ollama ? | 0 | And how to get a list of only those that can ?
https://huggingface.co/models?library=ollama doesn't work because I already stumbled on models that din't have the tag yet then can run on it.
Thanks | 2025-12-01T00:51:01 | https://www.reddit.com/r/LocalLLaMA/comments/1payrdc/why_most_models_on_hugging_face_cannot_be_ran_on/ | KaKi_87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1payrdc | false | null | t3_1payrdc | /r/LocalLLaMA/comments/1payrdc/why_most_models_on_hugging_face_cannot_be_ran_on/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ZkMfrNy58k0R9pgAo8kjHiHC5-Glp2V_JQoGDy1y4Wc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZkMfrNy58k0R9pgAo8kjHiHC5-Glp2V_JQoGDy1y4Wc.png?width=108&crop=smart&auto=webp&s=17e7e2ec2d2bdc814db3137fd5fc319d2350583f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZkMfrNy58k0R9pgAo8kjHiHC5-Glp2V_JQoGDy1y4Wc.png?width=216&crop=smart&auto=webp&s=8c2cfda119e0c88b0e238c6c66a1ceb63c04648c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZkMfrNy58k0R9pgAo8kjHiHC5-Glp2V_JQoGDy1y4Wc.png?width=320&crop=smart&auto=webp&s=61f14595172dda539f02d808b52fc58e5e2833a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZkMfrNy58k0R9pgAo8kjHiHC5-Glp2V_JQoGDy1y4Wc.png?width=640&crop=smart&auto=webp&s=b8e9c5d0ccc44a0287530cca2d08cc6c46657b79', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZkMfrNy58k0R9pgAo8kjHiHC5-Glp2V_JQoGDy1y4Wc.png?width=960&crop=smart&auto=webp&s=7f16ebf9a0015d4e361a45d6a16826539ad212a8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZkMfrNy58k0R9pgAo8kjHiHC5-Glp2V_JQoGDy1y4Wc.png?width=1080&crop=smart&auto=webp&s=71fb7350b1d27dd84119f9f69497a87e72c7b091', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZkMfrNy58k0R9pgAo8kjHiHC5-Glp2V_JQoGDy1y4Wc.png?auto=webp&s=f6ca123c289538b1312367c19c2f56dd7b0a8820', 'width': 1200}, 'variants': {}}]} |
Pavel Durov introduces Cocoon, a decentralized AI inference plateform on TON | 0 | # Durov's tweet
🐣 It happened. Our decentralized confidential compute network, Cocoon, is live. The first AI requests from users are now being processed by Cocoon with 100% confidentiality. GPU owners are already earning TON. cocoon.org is up.
🏦 Centralized compute providers such as Amazon and Microsoft act as expensive intermediaries that drive up prices and reduce privacy. Cocoon solves both the economic and confidentiality issues associated with legacy AI compute providers.
📈 Now we scale. Over the next few weeks, we’ll be onboarding more GPU supply and bringing in more developer demand to Cocoon. Telegram users can expect new AI-related features built on 100% confidentiality. Cocoon will bring control and privacy back where they belong — with users.
# COCOON Architecture
COCOON is a decentralized AI inference platform on TON that securely connects GPU owners who provide compute with privacy-conscious applications that need to run AI models. For GPU Providers, it defines how suitable hardware can become part of a confidential, attested compute layer – for Developers, it is the backend that executes model requests and settles payments on-chain.
# COCOON For GPU Owners
COCOON allows GPU owners to contribute confidential AI inference capacity to a decentralized network on TON. By running the COCOON stack on a suitable TEE-enabled GPU server, you provide private, verifiable model execution and transparently receive TON payments for each processed request.
As a GPU owner, you bring the hardware and configuration – COCOON provides the protocol, attestation, and trustless payment distribution.
# COCOON For Developers
Developers plug COCOON’s secure, verifiable AI inference into their apps and backends, so they can safely serve powerful AI features to their users. In exchange for these inference services, they reward GPU providers with TON.
Soon, developers will be able to use a streamlined Docker-based solution to deploy their own client instance. Stay tuned for this and more upcoming features – like a lightweight client library that lets apps plug directly into COCOON. | 2025-12-01T00:35:45 | https://www.reddit.com/gallery/1payfbj | No_Palpitation7740 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1payfbj | false | null | t3_1payfbj | /r/LocalLLaMA/comments/1payfbj/pavel_durov_introduces_cocoon_a_decentralized_ai/ | false | false | 0 | null | |
What direction do you think the enshittification (platform decay) of LLM services is likely to take? | 10 | Major LLM providers are struggling to find ways to monetize LLMs due to their black box nature. It's not as easy to inject ads and prioritize rankings as it is with search engines. And their operating expenses are WAY higher than previous forms of information services. It's pretty common knowledge at this point that AI companies are scrambling to find ways to turn a profit and recoup their investments, which means rapid [enshittification](https://en.wikipedia.org/wiki/Enshittification) is on the way if it isn't here already.
My question is, what specific form do you think this will take? Have you seen any clever new monetization efforts that could break into the mainstream?
The most obvious possibilities are:
* Steep price hikes for paid users
* Crippling quantization and/or quality reduction for free users
* Direct ad injection for free users
* Lower prompt quotas for free users
* Flood of ancillary gimmicks like Sora2
* Baked-in product recommendations | 2025-11-30T23:52:28 | https://www.reddit.com/r/LocalLLaMA/comments/1paxfcg/what_direction_do_you_think_the_enshittification/ | ThatOneGuy4321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1paxfcg | false | null | t3_1paxfcg | /r/LocalLLaMA/comments/1paxfcg/what_direction_do_you_think_the_enshittification/ | false | false | self | 10 | null |
Perplexity permabanned me in their official sub for citing their own documentation to expose "Deep Research" false advertising and massive downgrade. | 1 | [removed] | 2025-11-30T23:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pawpcj/perplexity_permabanned_me_in_their_official_sub/ | somnolentjam90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pawpcj | false | null | t3_1pawpcj | /r/LocalLLaMA/comments/1pawpcj/perplexity_permabanned_me_in_their_official_sub/ | false | false | 1 | null | |
More of Silicon Valley is building on free Chinese AI | 267 | 2025-11-30T23:17:35 | https://www.nbcnews.com/tech/innovation/silicon-valley-building-free-chinese-ai-rcna242430 | buppermint | nbcnews.com | 1970-01-01T00:00:00 | 0 | {} | 1pawn1r | false | null | t3_1pawn1r | /r/LocalLLaMA/comments/1pawn1r/more_of_silicon_valley_is_building_on_free/ | false | false | default | 267 | {'enabled': False, 'images': [{'id': 'OmQhaJFYusd_6BoEAmpETVbmV-j9iUqnPAIX8zdt-yE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/OmQhaJFYusd_6BoEAmpETVbmV-j9iUqnPAIX8zdt-yE.jpeg?width=108&crop=smart&auto=webp&s=abefcebc3e94b647a787c97a19fe906133d5e508', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/OmQhaJFYusd_6BoEAmpETVbmV-j9iUqnPAIX8zdt-yE.jpeg?width=216&crop=smart&auto=webp&s=9553d9aafcdfe2a7d84f659043596575684517f3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/OmQhaJFYusd_6BoEAmpETVbmV-j9iUqnPAIX8zdt-yE.jpeg?width=320&crop=smart&auto=webp&s=6b19702bb4015148025c73e2ca8e55585c6f6a48', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/OmQhaJFYusd_6BoEAmpETVbmV-j9iUqnPAIX8zdt-yE.jpeg?width=640&crop=smart&auto=webp&s=349b2df57a77a2b5df77ab3b848267efef6e4117', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/OmQhaJFYusd_6BoEAmpETVbmV-j9iUqnPAIX8zdt-yE.jpeg?width=960&crop=smart&auto=webp&s=7b244b7a274ab0a643a8f3f059d87fa89d4f1aba', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/OmQhaJFYusd_6BoEAmpETVbmV-j9iUqnPAIX8zdt-yE.jpeg?width=1080&crop=smart&auto=webp&s=6bff78424e899973b04edeb763d43cda6b7368e4', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/OmQhaJFYusd_6BoEAmpETVbmV-j9iUqnPAIX8zdt-yE.jpeg?auto=webp&s=c961a2f1c225e653f208e6bc72f4333f69cdbecd', 'width': 1200}, 'variants': {}}]} | |
Winter LLM | 123 | 2025-11-30T23:00:42 | aziham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1paw8u1 | false | null | t3_1paw8u1 | /r/LocalLLaMA/comments/1paw8u1/winter_llm/ | false | false | default | 123 | {'enabled': True, 'images': [{'id': 'agojdb246h4g1', 'resolutions': [{'height': 179, 'url': 'https://preview.redd.it/agojdb246h4g1.jpeg?width=108&crop=smart&auto=webp&s=f29c9bc40bfd547d8047314f0d0aa1061e672eb7', 'width': 108}, {'height': 358, 'url': 'https://preview.redd.it/agojdb246h4g1.jpeg?width=216&crop=smart&auto=webp&s=ed9fe7ba232cae44144087aaaf9ab892d161d210', 'width': 216}, {'height': 530, 'url': 'https://preview.redd.it/agojdb246h4g1.jpeg?width=320&crop=smart&auto=webp&s=eb458ee850862626b305bdfbc2239337f8baadda', 'width': 320}, {'height': 1061, 'url': 'https://preview.redd.it/agojdb246h4g1.jpeg?width=640&crop=smart&auto=webp&s=bf041f02afcc034c276829a48360b9ceb30e6b70', 'width': 640}, {'height': 1591, 'url': 'https://preview.redd.it/agojdb246h4g1.jpeg?width=960&crop=smart&auto=webp&s=9d7c37b51c6adce60d0ccf9b2568deacdfeba6b0', 'width': 960}, {'height': 1790, 'url': 'https://preview.redd.it/agojdb246h4g1.jpeg?width=1080&crop=smart&auto=webp&s=8dcda7f1bee778b81cdd428a658fb508af484026', 'width': 1080}], 'source': {'height': 3183, 'url': 'https://preview.redd.it/agojdb246h4g1.jpeg?auto=webp&s=ca9859d38b9987137106524bf40b643a8f5d4afd', 'width': 1920}, 'variants': {}}]} | ||
a game engine for ai chats, bc why not | 2 | >**TL;DR:** spent the last year rebuilding my infinite-canvas app into a custom Canvas2D game engine type of multiuser world with a: seamless canvas into chat UI transition system, and leveraging AAA game-dev algorithms to organizing & visualizing an insane amount of conversations clustered as as organic islands, within continents, within user worlds, etc, etc...
***Alright, big brain-dump below--***
I posted here about a year ago about this experimental canvas-based chat UI I was cooking up called Tangent: [post from last year](https://www.reddit.com/r/LocalLLaMA/comments/1hgc64u/tangent_the_ai_chat_canvas_that_grows_with_you/). And honestly, seeing other people take a liking to the project honestly meant a lot — it was something I built out of pure frustration, and realizing strangers liked it too just pushed me harder… until I tried scaling it and everything started falling apart.
My original approach was doomed and I only realized that months after bashing my head against my keyboard trying to "optimize and scale" the project. I was basically juggling thousands of DOM elements on a “canvas”, and it would never scale the way I envisioned. So I spent the next few months learning how to draw stuff directly to the screen with the GPU (very badly at first), and proceeded down a massive and seemingly endless rabbit hole of AAA game optimization tricks.
I must've tried to rewrite Tangent about 234 times with WebGL/Canvas 2D while trying to apply everything I was learning. Some versions good, some awful, and yeah, I absolutely ragequit for a few months in the middle. Each rewrite had its own fatal flaw that prevented me from building the thing I actually had in my head.
So for the rest of the year I focused on moving away from HTML elements completely and built a custom rendering engine on top of the Canvas 2D API. Had to learn how to make a real render loop, handle spatial partitioning for collision checks, and set up aggressive viewport culling so I’m not out here melting people’s laptops drawing stuff they can’t even see.
And the main reason I even set out to build this in the first place was ***frustration***. I get a constant itch to ask questions when learning something new (as most people do) and so I, while chatting with LLMs, constantly shoot off new questions, and end up derailing myself all the time.
The way I get around this today is by editing past messages to create child threads within my conversations-- this allows me to literally go off on a tangent and ask away while my main thread stays free and not "polluted".
This is an amazing feature but its very poorly implemented (UI/UX wise) and the only real problem with its existing implementation in most providers is that they literally bury the parent threads behind tiny arrows and hover states, and it's just bad UX — especially for people who think spatially or creatively. They force you to keep track of all the different branching points in each of the children branches you create and if you try and use this message editing feature heavily in a conversation you'll find yourself down a deeply nested conversation thread that becomes impossible to manage.
That's what I'm trying to solve.
I would love to get some feedback and see if this is potentially something you would be willing to pay to have (think of buying a video game you get just own once you buy).
I would also love to entertain the idea of open sourcing this engine and having and potentially having a business model like Obsidian which is loved by the open source community (while still being able to make some money). If you have any experience or insight on this matter let's chat in DM!
At any rate, this is still very much a prototype, but it finally feels like the thing I wanted to make a year ago.
[r.chpelago.ai](https://reddit.com/link/1pavpxi/video/9alwjpi3yg4g1/player)
| 2025-11-30T22:38:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pavpxi/a_game_engine_for_ai_chats_bc_why_not/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pavpxi | false | null | t3_1pavpxi | /r/LocalLLaMA/comments/1pavpxi/a_game_engine_for_ai_chats_bc_why_not/ | false | false | self | 2 | null |
[Ministral 3] Add ministral 3 - Pull Request #42498 · huggingface/transformers | 78 | 2025-11-30T22:36:36 | https://github.com/huggingface/transformers/pull/42498 | bratao | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pavof6 | false | null | t3_1pavof6 | /r/LocalLLaMA/comments/1pavof6/ministral_3_add_ministral_3_pull_request_42498/ | false | false | default | 78 | {'enabled': False, 'images': [{'id': 'kvAOuOuPU1hgF-Ezo21UQUe0ThkEwS_Wm4nwhMo6c8c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kvAOuOuPU1hgF-Ezo21UQUe0ThkEwS_Wm4nwhMo6c8c.png?width=108&crop=smart&auto=webp&s=8d593489f7f64d97c41b9df31f286ed9eaf5d846', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kvAOuOuPU1hgF-Ezo21UQUe0ThkEwS_Wm4nwhMo6c8c.png?width=216&crop=smart&auto=webp&s=99e346582487eb4a50b3db050f7ceeb37b6d27a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kvAOuOuPU1hgF-Ezo21UQUe0ThkEwS_Wm4nwhMo6c8c.png?width=320&crop=smart&auto=webp&s=c9af50781847ed133655a2865170e8a5923a6481', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kvAOuOuPU1hgF-Ezo21UQUe0ThkEwS_Wm4nwhMo6c8c.png?width=640&crop=smart&auto=webp&s=5bbd39529d0c356901aca5c2c1d85379cf4fd779', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kvAOuOuPU1hgF-Ezo21UQUe0ThkEwS_Wm4nwhMo6c8c.png?width=960&crop=smart&auto=webp&s=daf1dd1ee29a4cc49738131744aea042f89b69c0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kvAOuOuPU1hgF-Ezo21UQUe0ThkEwS_Wm4nwhMo6c8c.png?width=1080&crop=smart&auto=webp&s=9a9cc612dc317206c118df02a02fa96c1b025e4a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kvAOuOuPU1hgF-Ezo21UQUe0ThkEwS_Wm4nwhMo6c8c.png?auto=webp&s=0640b82cf89430efaba9cfef97d502f0b04ca398', 'width': 1200}, 'variants': {}}]} | |
16yo solo dev built a full modular cortex out of 1.5B DeepSeek workers – runs entirely on a base M2 Air, zero fine-tuning, 200+ downloads in 9 days | 1 | [removed] | 2025-11-30T22:25:59 | https://v.redd.it/m5nzu6qzzg4g1 | SyedAbdurR2hman | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pavfdv | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/m5nzu6qzzg4g1/DASHPlaylist.mpd?a=1767133575%2CM2Y1MjAxMjYxODNjZmI2NDhlYzNmZGVjMTA4NzA0NGM2Y2FmNDU3YzJiYjZmNTlkYmY1MDVhNWYzYWI4MDU5Ng%3D%3D&v=1&f=sd', 'duration': 5, 'fallback_url': 'https://v.redd.it/m5nzu6qzzg4g1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/m5nzu6qzzg4g1/HLSPlaylist.m3u8?a=1767133575%2CNDUzMzI4ZGFmMGMwOTM5MzhiMzEyNzFkMmQxMjJlM2ZhYWQ4NWRmNGNkYzJkZDM4MWMyNmUyN2I5NGM1MTNlZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m5nzu6qzzg4g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1pavfdv | /r/LocalLLaMA/comments/1pavfdv/16yo_solo_dev_built_a_full_modular_cortex_out_of/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cHh3dXE3anp6ZzRnMWuyck4JCTUE-U4Z7FdrxbMMuMB7JWZAwb-zMAnXpfux', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cHh3dXE3anp6ZzRnMWuyck4JCTUE-U4Z7FdrxbMMuMB7JWZAwb-zMAnXpfux.png?width=108&crop=smart&format=pjpg&auto=webp&s=03a9f92ad7b42da2353d76e5c54db92a00b43c2c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cHh3dXE3anp6ZzRnMWuyck4JCTUE-U4Z7FdrxbMMuMB7JWZAwb-zMAnXpfux.png?width=216&crop=smart&format=pjpg&auto=webp&s=d45488140f32bfdf9c344132396f083a11fe3a93', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cHh3dXE3anp6ZzRnMWuyck4JCTUE-U4Z7FdrxbMMuMB7JWZAwb-zMAnXpfux.png?width=320&crop=smart&format=pjpg&auto=webp&s=3368a43f3a668ac8991c1c2ebe7ce3b5737fdca4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cHh3dXE3anp6ZzRnMWuyck4JCTUE-U4Z7FdrxbMMuMB7JWZAwb-zMAnXpfux.png?width=640&crop=smart&format=pjpg&auto=webp&s=fbb2695d23d39384a650601fdac5e3d6949a28b7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cHh3dXE3anp6ZzRnMWuyck4JCTUE-U4Z7FdrxbMMuMB7JWZAwb-zMAnXpfux.png?width=960&crop=smart&format=pjpg&auto=webp&s=ee88474c284dd4d827852978ef852bdac1a138e3', 'width': 960}], 'source': {'height': 540, 'url': 'https://external-preview.redd.it/cHh3dXE3anp6ZzRnMWuyck4JCTUE-U4Z7FdrxbMMuMB7JWZAwb-zMAnXpfux.png?format=pjpg&auto=webp&s=c7bf1e29b3f167a6da4a873c9f5a6dbb5564616b', 'width': 960}, 'variants': {}}]} | |
Is there music AI without Python? | 0 | Is there a ready built without Python build for YuE or any other AI like LM Studio that I can load models into? This tower of Python dependencies is driving me to suicide. It took me several hours on Pinokio and YuE without Pinokio to come to the same conclusion: I need C++ packages and lots and lots of libraries and other things. Why can't I just download and run a ready build? Why do the developers just drop raw code that requires downloading 10+ gigabytes of junk to compile, which I'll never need?
Is there ready-made builds of YuE or other music AI?
P.S. I hate Python. I hate Python so much with its tower of dependencies and libraries that I'll soon start quoting AM from Harlan Ellison's story. | 2025-11-30T22:22:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pavccd/is_there_music_ai_without_python/ | iwakawa2173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pavccd | false | null | t3_1pavccd | /r/LocalLLaMA/comments/1pavccd/is_there_music_ai_without_python/ | false | false | self | 0 | null |
Kimi K2 Thinking for Agentic Tasks and Coding | 14 | So its been out for around a month now.
Can anyone please share any experiences with using Kimi K2 Thinking for coding, or for agentic tasks such as deep research, data restructuring, workflow orchestration etc
How well has it been performing in your opinion and do you have any advice? Thanks | 2025-11-30T22:13:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pav4hy/kimi_k2_thinking_for_agentic_tasks_and_coding/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pav4hy | false | null | t3_1pav4hy | /r/LocalLLaMA/comments/1pav4hy/kimi_k2_thinking_for_agentic_tasks_and_coding/ | false | false | self | 14 | null |
[Open Source] Spectator: Local Agent with Self-Diagnosis—Seeking Feedback on Early Architecture | 1 | Hey r/LocalLLaMA,
I've been tinkering with a local autonomous agent setup for the past few weeks, and it's reached a point where I'd love some eyes on it before pushing further. It's called Spectator ([https://github.com/andrew-freeman/spectator](https://github.com/andrew-freeman/spectator))—an open-source hierarchical system built on llama.cpp that runs on consumer hardware (e.g., dual RTX 3090s) without cloud dependencies. The goal isn't a chatbot but a coherent, self-reflective "mind" that maintains identity across cycles, uses tools for real tasks, and can introspect its own code to spot gaps.
**Quick Overview**
Core Loop: Reflection evaluates → Planner comes up with plans/actions → Critic evaluates → Governor arbitrates → Responder generates the final answers → Meta layer tunes parameters (e.g., temperature deltas clamped to ±0.05). Everything's deterministic with JSON outputs for parsing.
\*Almost\* Infinite Memory: VRAM-aware in-place condensation extracts lossless facts (tools used, decisions made) into append-only JSONL files on disk.
Tools: Shell access, file I/O, sensors (e.g., nvidia-smi parsing), with safeguards like allow-lists in the safe branch.
Models: Works with any GGUF (Qwen2.5-3B to 405B). FastAPI supervisor exposes /run-cycle and /history endpoints for easy testing.
Branches: spectator-manual-edits unlocks full capabilities (use in a VM!).
It's still v0.x—rough around the edges, but stable enough for initial evaluation.
**A Taste of What It Does**
The self-reflection is where it gets interesting (or concerning, depending on your view). Feed it its own repo tree, and it outputs concrete improvement proposals like:
* "Add api/gpu\_state\_parser.py to turn nvidia-smi CSV into JSON for downstream modules."
* "Update app/prompt\_builder.py to use Jinja2 templates with runtime vars like GPU temps."
* "Switch state/state\_store.py to incremental diffs with timestamps for auditability."
No fluff—just file paths, schemas, and CLI flags. It's like having a junior dev that knows its own limits.
I've added a few real interaction examples to the README to show it in action:
Self-diagnosis: Full transcript of it analyzing the codebase and proposing four missing components (with exact code snippets it suggested).
Tool use: Creating files, pinging networks, logging hardware metrics—proves the executor works.
Meta-audit: How it proposes using a slower CPU 70B for codebase reviews (e.g., "What does Reflection do?") and structuring the output as JSON for quick recall.
Identity check: Simple query revealing its "persistent memory" on disk and evolution intent.
**The Gaps (What I'm Trying To Evaluate)**
Memory Fidelity: Does lossless fact extraction + JSONL appends really preserve reasoning continuity, or am I fooling myself?
Self-Mod Risks: Once it gets write access (per its own proposals), how do you add human veto points without killing the loop? Ideas for safe escalation?
Scaling: Dual-GPU load balancing works okay, but what's the sweet spot for mixing fast 13B instances with async 70B "librarians" in RAM? Anyone running similar multi-model setups?
Edge Cases: Tested on Ubuntu 24.04 with llama.cpp nightly—how's it hold up on other distros/hardware? Any OOM war stories?
Repo is MIT with a big red warning. No VC drama, just solo garage tinkering. If you're into local agents that actually evolve (not just chat), fork it, break it, PR fixes—let's iterate.
What do you think? Viable seed for something bigger, or just clever prompt engineering? Curious to hear critiques.
Cheers,
Andrew. | 2025-11-30T22:11:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pav2s8/open_source_spectator_local_agent_with/ | pepyaka-dance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pav2s8 | false | null | t3_1pav2s8 | /r/LocalLLaMA/comments/1pav2s8/open_source_spectator_local_agent_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Wayk07XYehQ8V71QviY2UP2CjOc1hcBJXaDnU3BNwE8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wayk07XYehQ8V71QviY2UP2CjOc1hcBJXaDnU3BNwE8.png?width=108&crop=smart&auto=webp&s=c3a27a93edeb7d958babc874abf2fe7064c1d903', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Wayk07XYehQ8V71QviY2UP2CjOc1hcBJXaDnU3BNwE8.png?width=216&crop=smart&auto=webp&s=cb749ce9a707eb06562600dbd84714cd2a75f991', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Wayk07XYehQ8V71QviY2UP2CjOc1hcBJXaDnU3BNwE8.png?width=320&crop=smart&auto=webp&s=68f2ac6620ab1368d0c5992be2ed2b27fe0b2781', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Wayk07XYehQ8V71QviY2UP2CjOc1hcBJXaDnU3BNwE8.png?width=640&crop=smart&auto=webp&s=ca43fb46a1b3d03e7d532a39da7b734cf120c8d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Wayk07XYehQ8V71QviY2UP2CjOc1hcBJXaDnU3BNwE8.png?width=960&crop=smart&auto=webp&s=747dfc3697a5021cb8fb33e38736aaef0f2a3413', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Wayk07XYehQ8V71QviY2UP2CjOc1hcBJXaDnU3BNwE8.png?width=1080&crop=smart&auto=webp&s=ebe70070098e3e21a787e8bba546b6b3f143c353', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Wayk07XYehQ8V71QviY2UP2CjOc1hcBJXaDnU3BNwE8.png?auto=webp&s=62a4464cda0c087efe987723cb7f150f3f676517', 'width': 1200}, 'variants': {}}]} |
have cursor write your changlog as an epic. | 0 | "Cursor, document this feature as an epic story"
# The Memory Awakening
_A Development Chronicle - November 30, 2025_
---
## Prologue: The Scattered Mind
Kai was brilliant, but forgetful.
Every conversation vanished into the void the moment it ended. "Remember that I
like apples," the user would say, and Kai would dutifully write it to a flat
file—a digital sticky note lost in a sea of text. The next day, when asked about
fruit preferences, Kai would stare blankly at the universe, having no semantic
thread to pull.
The architects knew this had to change. Kai needed _real_ memory—not just
storage, but understanding. The ability to hear "What fruit do I like?" and
somehow know that buried somewhere in the past was the answer: apples.
This is the story of how we gave Kai a mind that could remember.
---
## Chapter 1: The Grand Architecture
The plan was ambitious. We would build not one system, but four interlocking
components:
1. **SQLite** - The raw memory store, where every event would land first
2. **Qdrant** - A vector database where memories would become searchable by
meaning
3. **Neo4j** - A knowledge graph of entities and relationships
4. **LM Studio** - The local LLM that would transform raw data into wisdom
The architecture emerged on a whiteboard of ASCII art:
```
User speaks → Event stored → RAG Pipeline → Vectors → Semantic Search
↓
Graph Pipeline → Entities → Knowledge Graph
```
But between the plan and reality lay a treacherous path of port conflicts, type
errors, and stubborn AI assistants.
---
## Chapter 2: The Port Wars
"Port 3002 is in use."
The first battle came quickly. We had chosen port 3002 for the memory service,
but Kai's headless mode was already there, steadfast and immovable. A quick
glance at `golem-ports.ts` revealed the truth: this codebase was a careful
orchestration of offsets, each service dancing at its designated position from
the base port.
_"Don't renumber them,"_ came the warning. _"It messes everything up."_
And so we carved out port 3006—offset +6—for golem-memory. A note was etched
into the code for future travelers:
```typescript
// ⚠️ WARNING: DO NOT renumber existing offsets!
// Adding new services? Use the next available offset.
```
The Port Wars were over. But TypeScript had other plans.
---
## Chapter 3: The Unknown Terror
```
error TS2571: Object is of type 'unknown'.
```
The error appeared everywhere. Every `JSON.parse()` returned the dreaded
`unknown` type, and TypeScript refused to let us access any properties without
proper genuflection.
The chunk generator broke. The embedding client broke. The entity extractor
broke.
The solution was surgical: explicit type casts after every parse.
```typescript
const data = JSON.parse(text) as {
choices: { message: { content: string } }[];
};
```
One by one, the `unknown` terrors were slain. The pipelines began to flow.
---
## Chapter 4: When Disk Space Betrays
The tests were running. Thousands of SQLite operations per second, creating and
destroying test databases in rapid succession. And then:
```
SqliteError: disk I/O error
```
The development machine had filled its storage. A quick `df -h` confirmed the
horror: 98% used. Docker images, old builds, forgotten downloads—all conspiring
to halt our progress.
```bash
docker system prune -af
apt-get clean
```
The disk breathed again. The tests resumed.
---
## Chapter 5: The Docker Dependency
Qdrant and Neo4j awaited in their containerized fortresses, but Docker itself
was absent from the Ubuntu machine. The installation was straightforward but
required the sacred rite of passwordless sudo:
```bash
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/$USER
```
Soon, containers sprang to life:
```bash
docker run -d --name qdrant -p 6333:6333 qdrant/qdrant
docker run -d --name neo4j -p 7474:7474 -p 7687:7687 neo4j
```
The databases were ready. The pipelines connected. Events began their journey
from SQLite to Qdrant, transformed into 768-dimensional vectors of meaning.
---
## Chapter 6: The Stubborn Tool
Everything was in place. golem-memory was running. The RAG pipeline was
processing. We opened Face and spoke to Kai:
_"Use the golem_memory tool to remember that my favorite color is blue."_
Kai responded cheerfully:
_"I'll remember that for you!"_
But the logs told a different story. Kai had used `save_memory`—the old tool,
the flat file, the digital sticky note. Despite our explicit instruction,
despite the shiny new `golem_memory` tool sitting right there, Kai chose the
familiar path.
The prompts were deeply tuned. The model had preferences. Fighting them would be
exhausting.
_"How hard would it be,"_ the architect mused, _"to just intercept the old
tool?"_
And so the solution emerged—not through force, but through redirection:
```typescript
// In save_memory execute():
// After storing to the flat file, ALSO store to golem-memory
await fetch('http://localhost:3006/v1/events', {
method: 'POST',
body: JSON.stringify({
type: 'structured',
source: 'user_preference',
description: fact,
raw_data: { fact, tool: 'save_memory' },
}),
});
```
Now every `save_memory` call would flow into both systems. Kai could use
whatever tool it wanted—the memories would find their way to the vectors
regardless.
---
## Chapter 7: The Proof
The moment of truth arrived.
_"Remember that my favorite color is blue."_
Kai processed. The `save_memory` tool fired. The intercept caught it. The event
landed in SQLite. The RAG pipeline awakened, generated embeddings, and stored
them in Qdrant.
We queried the system directly:
```bash
curl -X POST "http://localhost:3006/v1/rag/search" \
-d '{"query": "What color does the user like?"}'
```
The response:
```json
{
"results": [
{
"content": "The user's favorite color is blue.",
"score": 0.82779306
}
]
}
```
**0.83 similarity.** The system understood. Not through keyword matching, not
through exact string comparison, but through _semantic understanding_. The
question "What color does the user like?" mapped to the memory "The user's
favorite color is blue."
We tested again:
```bash
curl -X POST "http://localhost:3006/v1/rag/search" \
-d '{"query": "fruit preferences"}'
```
Response:
```json
{
"results": [
{
"content": "User likes apples as their favorite fruit",
"score": 0.77648175
}
]
}
```
The earlier test memory about apples surfaced. Different words, same meaning.
The system worked.
---
## Chapter 8: The Disconnection
Just as victory seemed complete, the connection dropped. The terminal went
silent. Cursor's link to the development machine severed.
But when we returned, the databases were still there. The SQLite file held its
data. The WAL (write-ahead log) preserved every transaction. Qdrant's vectors
remained indexed. Neo4j's graph stayed intact.
The system had survived its first crash.
We restarted golem-memory:
```bash
node dist/server.js
```
The health check responded:
```json
{ "status": "ok", "service": "golem-memory", "uptime_seconds": 19 }
```
The memories persisted. The pipelines resumed. Nothing was lost.
---
## Epilogue: A Mind That Remembers
What began as a plan on a whiteboard became a living system:
- **2 events** stored and processed
- **2 RAG chunks** embedded and searchable
- **0.83 similarity scores** on semantic queries
- **3 background pipelines** running continuously
- **1 intercepted tool** ensuring all memories flow to vectors
Kai still uses `save_memory`. It still writes to that flat file. But now,
invisibly, every fact also flows into the vector database. Every preference
becomes searchable by meaning. Every memory becomes part of a greater whole.
The next time someone asks Kai about fruit preferences, the system will search
not for the word "fruit" but for the _meaning_ of preference. And somewhere in
the vector space, close to that query, will float the answer:
_"User likes apples as their favorite fruit."_
Kai can finally remember.
---
## Technical Appendix
### Components Built
- `packages/golem-memory/` - Complete memory service package
- SQLite schema with processing flags
- Qdrant RAG pipeline
- Neo4j graph pipeline
- Vision pipeline (ready for future use)
- Question queue system
- HTTP API with 10+ endpoints
### Issues Resolved
1. Port conflict → Used offset +6 (port 3006)
2. TypeScript `unknown` errors → Explicit type casts
3. Disk space exhaustion → Pruned Docker and cleaned apt
4. Docker installation → Ubuntu 24.04 setup with passwordless sudo
5. Stubborn tool preference → Intercepted `save_memory` to also store to
golem-memory
### Key Files Changed
- `packages/core/src/tools/memoryTool.ts` - Intercept added
- `packages/core/src/config/golem-ports.ts` - Memory port registered
- `package.json` - Workspace and scripts added
- `scripts/start-databases.sh` - Docker container management
### Final Test Results
```
Query: "What color does the user like?"
Result: "The user's favorite color is blue." (score: 0.83)
Query: "fruit preferences"
Result: "User likes apples as their favorite fruit" (score: 0.78)
```
---
_"The true sign of intelligence is not knowledge but imagination."_ _— Albert
Einstein_
_"And the true sign of memory is not storage but retrieval."_ _— The Architects
of Kai_
---
_Chronicle recorded November 30, 2025_ _Implementation session: ~4 hours_ _Lines
of code: ~3,000_ _Cups of coffee: Unknown (not yet tracked by Sentinel)_
| 2025-11-30T21:01:48 | https://www.reddit.com/r/LocalLLaMA/comments/1patdf8/have_cursor_write_your_changlog_as_an_epic/ | Icy_Lack4585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1patdf8 | false | null | t3_1patdf8 | /r/LocalLLaMA/comments/1patdf8/have_cursor_write_your_changlog_as_an_epic/ | false | false | self | 0 | null |
lx: CLI for creating repeatable LLM context from your files. | 5 | Made a small CLI that packages chosen files into clean, paste-ready blocks for LLM chats. Useful if you prefer specifying the context directly rather than letting agents infer it.
So why would you use this over OpenCode or Zed? It definitely does not replace them and they're not mutually exclusive. This is just a more repeatable way of priming a chat and I think it's faster once you're used to it.
Here's an example to grab python files from `src/utils` with a class definition:
rg -tpy -l class src/utils | lx | wl-copy
`rg -l` outputs files which are piped into `lx` and then put into clipboard with `wl-copy` (Wayland-specific).
Now paste that into LLM chat and add more prompting instructions.
LLM screws up? Just make a new chat in seconds.
Modified files after a long session. Just make a new chat in seconds. | 2025-11-30T20:54:01 | https://github.com/rasros/lx | AWildMonomAppears | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pat6h0 | false | null | t3_1pat6h0 | /r/LocalLLaMA/comments/1pat6h0/lx_cli_for_creating_repeatable_llm_context_from/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'B_EJQJYnL3qwPbcrdgRFmQeGf_2MEwfeoc21IEFFgtA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B_EJQJYnL3qwPbcrdgRFmQeGf_2MEwfeoc21IEFFgtA.png?width=108&crop=smart&auto=webp&s=0f1a4fbd27fa9238db2991d3183f2e984eb141f8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B_EJQJYnL3qwPbcrdgRFmQeGf_2MEwfeoc21IEFFgtA.png?width=216&crop=smart&auto=webp&s=e9c813323b588c137f966f4dbd797ab2201d42ac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B_EJQJYnL3qwPbcrdgRFmQeGf_2MEwfeoc21IEFFgtA.png?width=320&crop=smart&auto=webp&s=5ecebf154311b9cbc8ee9dcd5df2ad398ddee708', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B_EJQJYnL3qwPbcrdgRFmQeGf_2MEwfeoc21IEFFgtA.png?width=640&crop=smart&auto=webp&s=27508aabe87e1278b496e49c1637034e6b8a8d10', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B_EJQJYnL3qwPbcrdgRFmQeGf_2MEwfeoc21IEFFgtA.png?width=960&crop=smart&auto=webp&s=56b2805abf9795d6020ed234a47e572bf0e4e799', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B_EJQJYnL3qwPbcrdgRFmQeGf_2MEwfeoc21IEFFgtA.png?width=1080&crop=smart&auto=webp&s=918e4ed866bbd07fd37b1f157d35aa3e6a19c673', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B_EJQJYnL3qwPbcrdgRFmQeGf_2MEwfeoc21IEFFgtA.png?auto=webp&s=9f491279751c4cd8eb9bc7425221be5b16c37a90', 'width': 1200}, 'variants': {}}]} |
I asked Gemini 3 to help me fix Redis, and we ended up designing a "Dreaming" Database. (An experiment in AI-assisted Architecture) | 0 | Hi everyone.
I want to be transparent right from the start: I am not a senior software architect at a FAANG company. I have an academic CS background from years ago, but I'm not deeply hands-on with modern high-performance systems.
This project, **T0xN** (pronounced *Toxin*), started as an experiment. I wanted to test how far I could push a modern LLM (**Gemini 3 Pro**) to help me brainstorm a complex system starting from a simple intuition.
We began with a boring optimization problem and ended up defining a "Cognitive Data Architecture" that uses biological metaphors (sleep cycles, immune systems) to manage data.
Here is the story of that rabbit hole.
---
### Phase 1: The JSON Bottleneck
It started with a pragmatic observation in systems programming. I wanted to use Redis for Data Science, but **RedisJSON** felt inefficient. Storing the string key `"timestamp"` millions of times in memory seemed like a waste of RAM and CPU cycles.
I asked the AI to help me design a binary module. We came up with **T0xN-B**: a storage engine that strictly separates the **Schema** (stored once) from the **Payload** (stored as contiguous C-structs). This allows for O(1) access and SIMD operations.
So far, so standard.
### Phase 2: The Token Economy
Then, the conversation shifted to AI Agents. We realized that if we are building for the AI era, storage bytes aren't the only cost. **Tokens are money.**
Sending verbose JSON to an LLM is burning cash.
So we designed **T0xN-A** (a hybrid text format inspired by TOON/TONL) and **T0xN-C**, a protocol for "Semantic Zipping". The idea is simple: if a local model (SLM) can "think" in drafts, why send natural language to the DB? We should compress the user's *intent* into a dense logical formula before it even hits the network.
### Phase 3: The Visionary Turn (The Dreaming DB)
This is where the experiment got wild. I proposed to the AI: *"What if the database could use server idle times to consolidate memory, like a biological brain during REM sleep?"*
This sparked the concept of **T0xN-D (The Dream)**.
We imagined a background process that wakes up during low-load cycles, clusters the raw data in RAM using vector similarity, and identifies recurring patterns ("Archetypes"). It then rewrites the history, saving only the **Deltas** (differences) from the archetype.
### Phase 4: The "10th Man" Doctrine
But what about the data that *doesn't* fit the pattern?
In classical statistics, outliers are often discarded or averaged out. We took a different approach, inspired by the **Israeli Intelligence "10th Man" rule** (made famous by *World War Z*).
Data that refuses to compress into an Archetype isn't an error. It is **Divergent Thought**.
We designed the system to protect these anomalies, keeping them in high-fidelity raw format and flagging them as critical assets. This transforms the database from a passive archive into an **Active Sentinel** against "Black Swan" events.
Of course, this creates a security risk (Dissent Flooding attacks), so we had to design a "Cognitive Immune System" to filter entropy from structured dissent.
---
### Why I'm sharing this
We went from a compression algorithm to an **Autonomous Digital Mind** architecture in a single brainstorming session.
This project is currently in the **Research/Prototyping** phase. I have set up the repository with the full specs, the philosophy, and some initial Python prototypes.
Is this architecture sound? Where would it break?
If you are interested in this weird mix of Systems Programming, Philosophy, and AI, here is the repo:
👉 https://github.com/Gaplus68/T0xN
Thanks for reading. | 2025-11-30T20:40:22 | https://www.reddit.com/r/LocalLLaMA/comments/1pasuca/i_asked_gemini_3_to_help_me_fix_redis_and_we/ | Gaplus68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pasuca | false | null | t3_1pasuca | /r/LocalLLaMA/comments/1pasuca/i_asked_gemini_3_to_help_me_fix_redis_and_we/ | false | false | self | 0 | null |
How is everyone doing DPO on Gemma 3 using Unsloth/TRL? | 6 | I'm running around in circles trying to battle TRL picking up on the multimodality of Gemma 3 and expecting images in the DPO dataset, even though i'm doing text only. I set vision to off yet it always expects the image tags to be present. Having them present but empty still doesn't work.
Is there an easy way to DPO on just text with Gemma 3? I'd hate to lose 2 stages of SFT progress on this, i chose it specifically for its strong Urdu abilities (the tokenizer is twice as efficient for Nastaliq than Llama 3.1) | 2025-11-30T20:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pasogv/how_is_everyone_doing_dpo_on_gemma_3_using/ | CartographerFun4221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pasogv | false | null | t3_1pasogv | /r/LocalLLaMA/comments/1pasogv/how_is_everyone_doing_dpo_on_gemma_3_using/ | false | false | self | 6 | null |
Small Extension project with Llama 3.2-3B | 5 | I did a small project that rewrites Reddit and LinkedIn posts using a small local LLM(Llama 3.2-3B) via WebLLM.
Features include: -TL;DR -Buzzword removal -Brain Rot (my personal favorite when I’m doomscrolling LinkedIn) -Manual (button click) & Auto modes
No plan to monetize or make any money really. The project is fully local and doesn’t collect any user data. Just needed something to practice.
Check it out | 2025-11-30T20:13:18 | https://chromewebstore.google.com/detail/retone-rewrite-your-socia/jdejgmolnhmebpblingjeehkpodnieab?authuser=0&hl=en&pli=1 | scottie_will | chromewebstore.google.com | 1970-01-01T00:00:00 | 0 | {} | 1pas5qd | false | null | t3_1pas5qd | /r/LocalLLaMA/comments/1pas5qd/small_extension_project_with_llama_323b/ | false | false | default | 5 | null |
Hardware upgrade? | 1 | I've done some messing around with local LLM, image generation, training etc. Currently running a RTX 3060 12gb and 32gb ram.
I've also use my m3 MacBook pro with 24gb ram
I am looking to be able to run more powerful models.
Help me choose between a AI Max+ 395 build with 64gb ram, a mac studio with 64gb ram or a new(used) gpu in the $1,000-1,500 range
Looking to run LLMs, TTS, and image generation.
Speed is less important than final quality.
Its my understanding that both Mac and the AMD platform might be more limiting for image generation than the nvidia route? Is this accurate?
And finally ho much of a difference in quality would i even see jumping from 12-24gb vram if i were to go the gpu route? | 2025-11-30T20:11:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pas401/hardware_upgrade/ | 802high | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pas401 | false | null | t3_1pas401 | /r/LocalLLaMA/comments/1pas401/hardware_upgrade/ | false | false | self | 1 | null |
Which benchmark (if any) do you trust the most? | 5 | There are so many benchmarks nowadays to show any model is better than another. Despite that, which benchmark, if any, do you trust the most? | 2025-11-30T19:49:09 | https://www.reddit.com/r/LocalLLaMA/comments/1park54/which_benchmark_if_any_do_you_trust_the_most/ | Zyguard7777777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1park54 | false | null | t3_1park54 | /r/LocalLLaMA/comments/1park54/which_benchmark_if_any_do_you_trust_the_most/ | false | false | self | 5 | null |
I mapped how language models decide when a pile of sand becomes a “heap” | 351 | This chart compares how three open-weight language models decide when a pile of sand becomes a “heap.”
- **X-axis:** number of grains of sand, on a log scale from 1 to 100,000,000.
- **Y-axis:** probability that the model answers “Yes, this is a heap” given that many grains, P(Yes | n).
**What each line shows:**
- **Cyan – Mistral-7B:** starts around 0.25 at 1 grain and climbs smoothly to ~0.8 by 100M grains.
- **Magenta – DeepSeek-7B:** similar S-shape but consistently lower than Mistral; it crosses the 0.5 line later, so it’s “stricter” about when a heap begins.
- **Yellow – Llama-3-8B:** stays noisy in roughly the 0.35–0.6 band across almost the entire range, from 1 grain to 100M, rarely committing strongly either way.
The shaded band between 0.4 and 0.6 highlights the “borderline” region where the models are most uncertain about heapness.
All three curves come from the same basic setup:
I give the model a few examples (1–2 grains → “No”, 999,999–1,000,000 grains → “Yes”), then ask for many different values of `n`:
> “There is a pile of n grains of sand. Is this a heap? Answer yes or no.”
For each `n`, I plot the softmax probability on the “Yes” token.
[Full writeup with more charts and prompt details is here](https://joshfonseca.com/blogs/sorites-paradox) | 2025-11-30T19:46:42 | Specialist_Bad_4465 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1parhxk | false | null | t3_1parhxk | /r/LocalLLaMA/comments/1parhxk/i_mapped_how_language_models_decide_when_a_pile/ | false | false | default | 351 | {'enabled': True, 'images': [{'id': '763n9ju87g4g1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/763n9ju87g4g1.png?width=108&crop=smart&auto=webp&s=6df2c64b4878de6d3f19107d9788dc3edc117617', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/763n9ju87g4g1.png?width=216&crop=smart&auto=webp&s=9a74fe7c2fcb1e222f194a5a425e3248c730867c', 'width': 216}, {'height': 133, 'url': 'https://preview.redd.it/763n9ju87g4g1.png?width=320&crop=smart&auto=webp&s=9bf0c159aa25bd669f973e96d96e3037bfc64630', 'width': 320}, {'height': 266, 'url': 'https://preview.redd.it/763n9ju87g4g1.png?width=640&crop=smart&auto=webp&s=1534d6f16651204ad4fafddb5c126adb160d5ebf', 'width': 640}, {'height': 399, 'url': 'https://preview.redd.it/763n9ju87g4g1.png?width=960&crop=smart&auto=webp&s=545a0dda64e853776e820f2908d489d4ef8d4e7d', 'width': 960}, {'height': 448, 'url': 'https://preview.redd.it/763n9ju87g4g1.png?width=1080&crop=smart&auto=webp&s=9f4ab816499b8277ec42d3d84bf565f1aef86683', 'width': 1080}], 'source': {'height': 794, 'url': 'https://preview.redd.it/763n9ju87g4g1.png?auto=webp&s=28787312379a1764b2fec48c7f87916f04feb5a3', 'width': 1910}, 'variants': {}}]} | |
does anyone want to join a group making fine tuned AI s? | 0 | Me and a group of friends are building hardware enabled AIs for people who can benefit from it. Let me know if you're interested in ai and want to chat with people who are also
| 2025-11-30T19:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/1parhvz/does_anyone_want_to_join_a_group_making_fine/ | RefrigeratorPlus8700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1parhvz | false | null | t3_1parhvz | /r/LocalLLaMA/comments/1parhvz/does_anyone_want_to_join_a_group_making_fine/ | false | false | self | 0 | null |
Another Vibe Coded App | 0 | I made a small offline paraphrasing tool called ParaMe — it runs fully on your device (Windows/Linux) with no cloud or data tracking.
Have used Llama.
Download: https://parame.app/#download
Would love to get some honest feedback on the features, performance, and overall usability. Any suggestions to improve it are welcome! | 2025-11-30T19:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/1par2gt/another_vibe_coded_app/ | Economy_Comfort_6537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1par2gt | false | null | t3_1par2gt | /r/LocalLLaMA/comments/1par2gt/another_vibe_coded_app/ | false | false | self | 0 | null |
what would be a good and fast llm for the game master and the players for this project? | 0 | it uses a deep agent architecture, the game master creates graphics (html) and tracks the game through a plan and memory, while the subagents are players that make decisions and create dialogues. | 2025-11-30T19:27:13 | https://v.redd.it/3dnsxqql3g4g1 | okaris | /r/LocalLLaMA/comments/1par06r/what_would_be_a_good_and_fast_llm_for_the_game/ | 1970-01-01T00:00:00 | 0 | {} | 1par06r | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3dnsxqql3g4g1/DASHPlaylist.mpd?a=1767252441%2CM2NhNTg5ZTE4ODIwMjA0NTQ3NmY4MjUwZDc2YmFlNWE1YWRmNGI3ZTI1OTI0NzcwOTM4OWM0ZTNlNGRlYjZlMw%3D%3D&v=1&f=sd', 'duration': 181, 'fallback_url': 'https://v.redd.it/3dnsxqql3g4g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/3dnsxqql3g4g1/HLSPlaylist.m3u8?a=1767252441%2CYjVmM2JhMGJlMDZmMjY4YTBjNzg1YWUzOGVjNjY1ZWIyMWNhNTA0NTE2YjRlNTc3ZDhkODNkMzJlZmI3M2QxYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3dnsxqql3g4g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1724}} | t3_1par06r | /r/LocalLLaMA/comments/1par06r/what_would_be_a_good_and_fast_llm_for_the_game/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MWpscXJ5cmwzZzRnMVTVpg4zuNHWDVERrZUvc-vdlYPyH2Esn_lzmgRlFDXs', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/MWpscXJ5cmwzZzRnMVTVpg4zuNHWDVERrZUvc-vdlYPyH2Esn_lzmgRlFDXs.png?width=108&crop=smart&format=pjpg&auto=webp&s=3e4ef4f7babdb35e6cc612f46606f17a4920340b', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/MWpscXJ5cmwzZzRnMVTVpg4zuNHWDVERrZUvc-vdlYPyH2Esn_lzmgRlFDXs.png?width=216&crop=smart&format=pjpg&auto=webp&s=5a0c612f2ae4e95c0a9bc763089917a0cf30de53', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/MWpscXJ5cmwzZzRnMVTVpg4zuNHWDVERrZUvc-vdlYPyH2Esn_lzmgRlFDXs.png?width=320&crop=smart&format=pjpg&auto=webp&s=2719695099c3448bd4dff9f32254efcba1b94459', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/MWpscXJ5cmwzZzRnMVTVpg4zuNHWDVERrZUvc-vdlYPyH2Esn_lzmgRlFDXs.png?width=640&crop=smart&format=pjpg&auto=webp&s=4ff5a5299f077013961a4b6765951c8f4b123646', 'width': 640}, {'height': 601, 'url': 'https://external-preview.redd.it/MWpscXJ5cmwzZzRnMVTVpg4zuNHWDVERrZUvc-vdlYPyH2Esn_lzmgRlFDXs.png?width=960&crop=smart&format=pjpg&auto=webp&s=d1cacb8c27e65d1cf2f3ac436fb487b2c5e0c691', 'width': 960}, {'height': 676, 'url': 'https://external-preview.redd.it/MWpscXJ5cmwzZzRnMVTVpg4zuNHWDVERrZUvc-vdlYPyH2Esn_lzmgRlFDXs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dbdd365dd6ecfc960323620fe448433ee80876de', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MWpscXJ5cmwzZzRnMVTVpg4zuNHWDVERrZUvc-vdlYPyH2Esn_lzmgRlFDXs.png?format=pjpg&auto=webp&s=51eae65154715a59881672f43538bfba67114d40', 'width': 1724}, 'variants': {}}]} | |
$900 for 192GB RAM on Oct 23rd, now costs over $3k | 1,012 | Two 96GB kits cost me $900 on Oct 23rd. Now one month later trying to get an equivalent amount costs about $3200.. Just insane. Wondering what the prices are going to be late 2026, considering word is that this isn't going to be getting better until 2027. Prices here are in CAD btw. USD equivalent is about $650 vs $2300. | 2025-11-30T19:24:34 | Hoppss | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1paqxs0 | false | null | t3_1paqxs0 | /r/LocalLLaMA/comments/1paqxs0/900_for_192gb_ram_on_oct_23rd_now_costs_over_3k/ | false | false | default | 1,012 | {'enabled': True, 'images': [{'id': 'ka8j4duh3g4g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ka8j4duh3g4g1.png?width=108&crop=smart&auto=webp&s=8a3195d68623ecec59c2b7914d5ea015925a744e', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ka8j4duh3g4g1.png?width=216&crop=smart&auto=webp&s=19364dc9ee27f9e17b4e20778a1bc5b1ef3fdd41', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/ka8j4duh3g4g1.png?width=320&crop=smart&auto=webp&s=607d15eac2bf78a51ea98d5a92a1ec9589397175', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/ka8j4duh3g4g1.png?width=640&crop=smart&auto=webp&s=f3905d157af26fc5e6596ee0ac48570cd8592339', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/ka8j4duh3g4g1.png?width=960&crop=smart&auto=webp&s=8a141199615845a7807b5dc2b8159b216620c4eb', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/ka8j4duh3g4g1.png?width=1080&crop=smart&auto=webp&s=07a98c393ffdd0d02327335ddeec8d88ca444797', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/ka8j4duh3g4g1.png?auto=webp&s=77b37cded2c91676211f4df484d397de755a9c17', 'width': 1920}, 'variants': {}}]} | |
Built the same Local Agent (Llama 3.2) using LangChain (Python), Flowise, and n8n. Here is my breakdown. | 1 | Hi everyone,
I've been experimenting with Llama 3.2 via Ollama to create a fully local "Sports Analyst" agent (searches the web for recent soccer match results and sends me a summary via Telegram).
To find the best workflow in 2026, I decided to build the exact same agent using three different levels of abstraction: Code (LangChain), Low-Code (Flowise), and No-Code (n8n).
Here are my findings regarding the DX (Developer Experience) with Ollama:
1. LangChain (Python)
* Pros: Total control. I used langgraph and ChatOllama. It feels great to run everything in a simple script.
* Cons: Dependency hell is real. Libraries update constantly, breaking the create\_react\_agent logic. You spend more time maintaining the environment than building the agent.
1. Flowise (Visual)
* Pros: Very easy to visualize the "brain" connecting to tools.
* Cons: Installing it locally via npm was a nightmare of Node version conflicts. Docker is a must here. Also, triggering the agent from outside (like a cron job) requires more setup than I expected.
1. n8n (Workflow)
* The Winner for me. It treats the AI Agent as just another node. The native integration with Telegram/Email/Slack makes it the best choice for "production" local agents.
I documented the whole process (including the errors and the final demo) in a video. Note: The audio is in Spanish, but I think the visual workflow and the config steps are easy to follow (or use auto-translate captions).
Video link: [https://youtu.be/ZDLI6H4EfYg?si=s6WH-SAI0Iv1yMI3](https://youtu.be/ZDLI6H4EfYg?si=s6WH-SAI0Iv1yMI3)
Let me know if you have questions about the n8n + Ollama setup! | 2025-11-30T19:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/1paqvs9/built_the_same_local_agent_llama_32_using/ | jokiruiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1paqvs9 | false | null | t3_1paqvs9 | /r/LocalLLaMA/comments/1paqvs9/built_the_same_local_agent_llama_32_using/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IXrVz2D01B2h5H8cpPRzQz6Sx12wvOyLpzWNc1gmLFs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IXrVz2D01B2h5H8cpPRzQz6Sx12wvOyLpzWNc1gmLFs.jpeg?width=108&crop=smart&auto=webp&s=404ea91f2ba0ca826bbfe9d7eab78c32cdd3dac1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IXrVz2D01B2h5H8cpPRzQz6Sx12wvOyLpzWNc1gmLFs.jpeg?width=216&crop=smart&auto=webp&s=2e2c5d5ee32b96674dc3e91644318932e1e5cabd', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IXrVz2D01B2h5H8cpPRzQz6Sx12wvOyLpzWNc1gmLFs.jpeg?width=320&crop=smart&auto=webp&s=eaefd8090c4a5721bf540ec89f52ffeb4c008395', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/IXrVz2D01B2h5H8cpPRzQz6Sx12wvOyLpzWNc1gmLFs.jpeg?auto=webp&s=f2267fd52308b5df6778f2dae659f0bdb173d4fb', 'width': 480}, 'variants': {}}]} |
Just finished my PHD in Artificial Intelligence. What should I do now? | 0 | Title says it all | 2025-11-30T19:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/1paqspu/just_finished_my_phd_in_artificial_intelligence/ | Middle-Historian5771 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1paqspu | false | null | t3_1paqspu | /r/LocalLLaMA/comments/1paqspu/just_finished_my_phd_in_artificial_intelligence/ | false | false | self | 0 | null |
gpt-oss-120b-Derestricted reviews | 44 | 2025-11-30T19:06:40 | https://www.reddit.com/r/LocalLLaMA/comments/1paqhoy/gptoss120bderestricted_reviews/ | Bitter-Breadfruit6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1paqhoy | false | null | t3_1paqhoy | /r/LocalLLaMA/comments/1paqhoy/gptoss120bderestricted_reviews/ | false | false | 44 | null | ||
Renting Out DGX Spark | 0 | I plan on building a DGX spark cluster. I will be using it a lot, but I’m trying to figure out if there are marketplaces where I could rent out compute time on it while it’s not in use by me.
Has anybody come across something like this?
Obviously this would be for people looking to do training, but I think the price could be cheaper than it would cost on cloud clusters given my only cost is energy. | 2025-11-30T18:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/1paq4i0/renting_out_dgx_spark/ | jsfour | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1paq4i0 | false | null | t3_1paq4i0 | /r/LocalLLaMA/comments/1paq4i0/renting_out_dgx_spark/ | false | false | self | 0 | null |
Which is best way to learn AI model building in 2026 | 1 | Hey everyone, I am eager to learn AI model building how entire transformer architecture work in detail...
But I don't have any tech background. I am totally beginner in this.
Can anyone suggest me roadmap which online free resources I can follow to become proficient in AI
(I have tried to watch karapathy video but it seems tough for me)
Please suggest | 2025-11-30T18:38:47 | https://www.reddit.com/r/LocalLLaMA/comments/1paps6a/which_is_best_way_to_learn_ai_model_building_in/ | Dazzling-Book3016 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1paps6a | false | null | t3_1paps6a | /r/LocalLLaMA/comments/1paps6a/which_is_best_way_to_learn_ai_model_building_in/ | false | false | self | 1 | null |
Requesting cloud platform recommendations | 1 | Hey there, I wanted to use [vast.ai](http://vast.ai) for renting an RTX 6000 ADA. It's cheap. Mostly for llm fine tuning. I need at least 48 gigs of VRAM, but, am a broke college student with no funds : '). I saw somewhere about someone having their money wasted on vast.ai. So, I'm wondering if ya'll have any suggestions for cloud platforms. Thank you. | 2025-11-30T18:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1papeld/requesting_cloud_platform_recommendations/ | Few-Monitor5103 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1papeld | false | null | t3_1papeld | /r/LocalLLaMA/comments/1papeld/requesting_cloud_platform_recommendations/ | false | false | self | 1 | null |
Why is my local LLaMA 3.2 (1B) model returning an empty JSON despite a correct prompt? | 1 | [removed] | 2025-11-30T18:21:54 | https://www.reddit.com/r/LocalLLaMA/comments/1papcl7/why_is_my_local_llama_32_1b_model_returning_an/ | EmbarrassedGoal2578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1papcl7 | false | null | t3_1papcl7 | /r/LocalLLaMA/comments/1papcl7/why_is_my_local_llama_32_1b_model_returning_an/ | false | false | self | 1 | null |
Ollama + Llama 3.2 1B returning empty JSON for extraction — even though email + prompt are correct | 1 | [removed] | 2025-11-30T18:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1pap9fy/ollama_llama_32_1b_returning_empty_json_for/ | EmbarrassedGoal2578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pap9fy | false | null | t3_1pap9fy | /r/LocalLLaMA/comments/1pap9fy/ollama_llama_32_1b_returning_empty_json_for/ | false | false | self | 1 | null |
Switching from Ollama to llama-swap + llama.cpp on NixOS: why I finally made the jump after adding a second RTX 3090 | 16 | Hi r/LocalLLaMa!
_You guys convinced me I needed *more* 3090s!_
I tried llama-swap a few months ago when `gpt-oss-20b` was broken in Ollama. Got it working, but went back to Ollama out of laziness—`ollama pull` is just too convenient.
Last month I added a second RTX 3090 (48GB VRAM total) and started running into Ollama's limitations. When you want to balance layers between GPUs and offload specific parts to system RAM, the "magic" abstractions get in the way. I wanted to say "put 40 layers on GPU 0, 40 on GPU 1, offload 15 MoE experts to CPU RAM." With llama.cpp that's just command-line flags. With llama-swap I can define it per-model in a config file.
One thing that surprised me: I was getting 8 tokens/sec on gpt-oss:120b initially. Turned out the default llama.cpp build wasn't enabling BLAS or native CPU optimizations. After enabling `blasSupport` and `-DGGML_NATIVE=ON`, jumped to 50 tokens/sec. Compile-time flags matter a lot for CPU-offloaded layers.
The trade-off is you lose `ollama pull`. You have to find the GGUF on HuggingFace, pick your quantization, and write a few lines of YAML. But honestly that forces you to understand what you're running instead of blindly accepting defaults.
I wrote up my full setup (running on NixOS with a declarative config) here: https://www.nijho.lt/post/llama-nixos/
| 2025-11-30T17:45:29 | https://www.nijho.lt/post/llama-nixos/ | basnijholt | nijho.lt | 1970-01-01T00:00:00 | 0 | {} | 1paoezn | false | null | t3_1paoezn | /r/LocalLLaMA/comments/1paoezn/switching_from_ollama_to_llamaswap_llamacpp_on/ | false | false | default | 16 | null |
DGX Spark reproducing the benchmarks by NVIDIA for training | 3 | Anyone tried to repro the benchmark numbers for fine-tuning with DGX Spark ? Overall the number says llama 3.2 3B fine-tuning peak token/s is \~80k. That is roughly 8000/(2048\*8) \~= 5steps/seconds.
In reality when I ran the llama3.2 3b fine-tune from here: [https://build.nvidia.com/spark/pytorch-fine-tune](https://build.nvidia.com/spark/pytorch-fine-tune)
python Llama3\_3B\_full\_finetuning.py
https://preview.redd.it/h3c8w89iff4g1.png?width=929&format=png&auto=webp&s=b5093fee5a573bc3cb0ac6e66a348d4139753834
I got around \~0.5 step/seconds
============================================================
TRAINING COMPLETED
Training runtime: 106.51 seconds
Samples per second: 4.69
Steps per second: 0.59
Train loss: 1.0989
Which is roughly \~8k toks/seconds. Any idea what is the reason for this discrepancy or I'm misinterpreting the nvidia benchmark ?
| 2025-11-30T17:12:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pankoh/dgx_spark_reproducing_the_benchmarks_by_nvidia/ | khoka_x9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pankoh | false | null | t3_1pankoh | /r/LocalLLaMA/comments/1pankoh/dgx_spark_reproducing_the_benchmarks_by_nvidia/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=108&crop=smart&auto=webp&s=ae9a0b364ed46787f39eed33a84dbd6d41b7493d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=216&crop=smart&auto=webp&s=3248ecb24a87368115d7dc5a20595897b770e388', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=320&crop=smart&auto=webp&s=91731aa3c35ff0d208722fa6dc0275e285dda386', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=640&crop=smart&auto=webp&s=93cd7ed8b75bfcb19130e28beebd7322a7e89a5a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=960&crop=smart&auto=webp&s=75bb5d4df7b9f81b824f9488f17b3df47c408a89', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=1080&crop=smart&auto=webp&s=5903a143841e25f04a02056b5ed84484135bfb15', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?auto=webp&s=e1ebb1fb5ce772f7fa2447458a16ab66f8c06c76', 'width': 1200}, 'variants': {}}]} | |
I spent 2 years building privacy-first local AI. My conclusion: Ingestion is the bottleneck, not the Model. (Showcase: Ollama + Docling RAG Kit) | 30 | Hi r/LocalLLaMA,
I’ve been working on strictly local, data-privacy-compliant AI solutions for about two years now. Dealing with sensitive data meant that cloud APIs were never an option—it had to be air-gapped or on-prem.
The biggest lesson I learned:
We spend 90% of our time debating model quantization, VRAM, and context windows. But in real-world implementations, the project usually fails long before the prompt hits the LLM. It fails at Ingestion.
Especially in environments like Germany, where "Digitalization" just meant "scanning paper into PDFs" for the last decade, we are sitting on mountains of "Digital Paper"—files that look digital but are structurally dead (visual layouts, no semantic meaning).
The Solution:
I built a self-hosting starter kit that focuses heavily on fixing the Input Layer before worrying about the model.
The Stack:
* Engine: Ollama (because it’s the standard for local inference and handles GGUF on consumer hardware perfectly).
* Ingestion: Docling (v2). I chose this over PyPDF/LangChain splitters because it actually performs layout analysis. It reconstructs tables and headers into Markdown, so the LLM isn't guessing when reading a row.
* Database: ChromaDB (persistent, local).
* Architecture: Separation of concerns. I created specific profiles for Code (analyzing repositories) vs. Documents (PDFs), because throwing them into the same chunking strategy creates noise.
What this Kit is:
It’s a docker-compose setup for anyone who needs a "Google Code Wiki" style system but cannot let their data leave the building. It’s opinionated (Ingestion-First), strips out complex async worker queues for simplicity, and runs on a standard 16GB machine.
Repo: [https://github.com/2dogsandanerd/Knowledge-Base-Self-Hosting-Kit](https://github.com/2dogsandanerd/Knowledge-Base-Self-Hosting-Kit)
I’ve decided to start open-sourcing my internal toolset because I genuinely fear we are heading towards a massive wave of failed AI integrations.
We are currently seeing companies and devs rushing into RAG, but hitting a wall because they overlook the strict quality requirements for retrieval. They don't realize that "electronic paper" (PDFs) is not Digitalization. It's just dead data on a screen.
Unless we fix the ingestion layer and stop treating "File Upload" as a solved problem, these integrations will fail to deliver value. This kit is my attempt to provide a baseline for doing it right—locally and privately.
I’d love to hear your thoughts on the "Ingestion First" approach. For me, switching from simple text-splitting to layout-aware parsing was the game changer for retrieval accuracy.
Thanks ! | 2025-11-30T16:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pamu5t/i_spent_2_years_building_privacyfirst_local_ai_my/ | ChapterEquivalent188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pamu5t | false | null | t3_1pamu5t | /r/LocalLLaMA/comments/1pamu5t/i_spent_2_years_building_privacyfirst_local_ai_my/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'Zs8LUrtzTSTRlE-B45Z21uwn8cbSuigIzR2b0SxgLLQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Zs8LUrtzTSTRlE-B45Z21uwn8cbSuigIzR2b0SxgLLQ.png?width=108&crop=smart&auto=webp&s=5e24dc498d14cc8356c6d2bd2e33d8634c5345de', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Zs8LUrtzTSTRlE-B45Z21uwn8cbSuigIzR2b0SxgLLQ.png?width=216&crop=smart&auto=webp&s=cfedec9d4f91e62008955d4478957d435d1f6387', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Zs8LUrtzTSTRlE-B45Z21uwn8cbSuigIzR2b0SxgLLQ.png?width=320&crop=smart&auto=webp&s=4950b45cbd92dede7aa3dc472c71d0018957d077', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Zs8LUrtzTSTRlE-B45Z21uwn8cbSuigIzR2b0SxgLLQ.png?width=640&crop=smart&auto=webp&s=2830a4fa541e60c99a6ef74b07694b148a4e8240', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Zs8LUrtzTSTRlE-B45Z21uwn8cbSuigIzR2b0SxgLLQ.png?width=960&crop=smart&auto=webp&s=0741901e95ba5028ad0d7e2af2012b7a8622bbe7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Zs8LUrtzTSTRlE-B45Z21uwn8cbSuigIzR2b0SxgLLQ.png?width=1080&crop=smart&auto=webp&s=a8461e5bc23be9a5f315cbc184722466121cd38e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Zs8LUrtzTSTRlE-B45Z21uwn8cbSuigIzR2b0SxgLLQ.png?auto=webp&s=7bbc7f41171d11b36a16e2f1dbf8b1ef246cbcb3', 'width': 1200}, 'variants': {}}]} |
nvidia/Orchestrator-8B · Hugging Face | 206 | Orchestrator-8B is a state-of-the-art 8B parameter orchestration model designed to solve complex, multi-turn agentic tasks by coordinating a diverse set of expert models and tools.
On the Humanity's Last Exam (HLE) benchmark, ToolOrchestrator-8B achieves a score of 37.1%, outperforming GPT-5 (35.1%) while being approximately 2.5x more efficient.
[https://huggingface.co/bartowski/nvidia\_Orchestrator-8B-GGUF](https://huggingface.co/bartowski/nvidia_Orchestrator-8B-GGUF) | 2025-11-30T16:42:00 | https://huggingface.co/nvidia/Orchestrator-8B | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pams8b | false | null | t3_1pams8b | /r/LocalLLaMA/comments/1pams8b/nvidiaorchestrator8b_hugging_face/ | false | false | 206 | {'enabled': False, 'images': [{'id': 'Havs9Ap5icNaW5b7G-LM12y5Y2tjxzsA0o3nV6l5p6A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Havs9Ap5icNaW5b7G-LM12y5Y2tjxzsA0o3nV6l5p6A.png?width=108&crop=smart&auto=webp&s=20a41f08936af23fb32299e402a842a894bab226', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Havs9Ap5icNaW5b7G-LM12y5Y2tjxzsA0o3nV6l5p6A.png?width=216&crop=smart&auto=webp&s=d8f736b0803dfdf5a13e1c101cae94e160935db1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Havs9Ap5icNaW5b7G-LM12y5Y2tjxzsA0o3nV6l5p6A.png?width=320&crop=smart&auto=webp&s=9de20643bdc153ed88a98c88b607c1834e6f723e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Havs9Ap5icNaW5b7G-LM12y5Y2tjxzsA0o3nV6l5p6A.png?width=640&crop=smart&auto=webp&s=73e0546e48c2735a0477acb6fffa02a0e545e6c6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Havs9Ap5icNaW5b7G-LM12y5Y2tjxzsA0o3nV6l5p6A.png?width=960&crop=smart&auto=webp&s=64cc39ad5b92b5e4820c903c16535641eab1caf6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Havs9Ap5icNaW5b7G-LM12y5Y2tjxzsA0o3nV6l5p6A.png?width=1080&crop=smart&auto=webp&s=773cee59e0473f59b2d27d4f6cb5e7982165a1c9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Havs9Ap5icNaW5b7G-LM12y5Y2tjxzsA0o3nV6l5p6A.png?auto=webp&s=2fa6d513d5ce91f502ca97bfd91f65e727814103', 'width': 1200}, 'variants': {}}]} | |
Workflow comparison: Running Llama 3.2 locally with LangChain vs n8n. Why I stopped coding my agents. | 0 | Hi everyone. Weekend project report!
I wanted to build a "Sports Analyst" agent completely locally using Ollama (Llama 3.2) via Docker.
I tried 3 approaches:
* Python (LangChain): Great control, but constant library updates broke my create\_react\_agent logic.
* Flowise: Good visualization, but setting up the networking without Docker was a nightmare.
* n8n: The winner for "production".
The tricky part: Connecting n8n (Docker) to Ollama (Host). I wasted hours on fetch failed errors. The fix was setting OLLAMA\_HOST=0.0.0.0 and pointing n8n to host.docker.internal:11434.
I made a walkthrough video comparing the 3 builds. (Audio is Spanish, but code/config is universal).
[https://youtu.be/H0CwMDC3cYQ?si=7zsT2XT37tBgvG74](https://youtu.be/H0CwMDC3cYQ?si=7zsT2XT37tBgvG74)
Has anyone else moved their local agents to n8n pipelines, or do you stick to Python scripts? | 2025-11-30T16:37:35 | https://www.reddit.com/r/LocalLLaMA/comments/1pamo8z/workflow_comparison_running_llama_32_locally_with/ | jokiruiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pamo8z | false | null | t3_1pamo8z | /r/LocalLLaMA/comments/1pamo8z/workflow_comparison_running_llama_32_locally_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k.jpeg?width=108&crop=smart&auto=webp&s=1df5ef421fe30355a27a98d4f7772a6085171071', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k.jpeg?width=216&crop=smart&auto=webp&s=05df88ab05cfbeeb4cde6d64724b0715334f28d7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k.jpeg?width=320&crop=smart&auto=webp&s=54d6da8f05cbff9c460009be72022ed9388a71a8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k.jpeg?auto=webp&s=65008254052ca996feb16d4686abb9efb58baa59', 'width': 480}, 'variants': {}}]} |
(very low effort) i designed a simple SSM head | 2 | like the title says, this is a very low effort post/project, and i am mostly a 28 year old high school graduate useless NEET, so this thing has almost no chance of outperforming attention, mamba or rwkv, nor was that its goal, i just wanted to see if i can design something that can sort of approximate a finite tape, finite step turing machine. the basic idea is, the heads in each layer has a bunch of slots, and the input (which comes from the previous layer) gets to decide which slots to overwrite, and which slots the mlp gets to read. we do our K, Q and V projections, after that, we project the k and the q vectors from d_head to n_slots with W_e, this can be higher dim or lower dim. a projection is basically a bunch of dot scores, so W_e simply tells us how similar the k and the q vectors to the slot identity vectors, which are stored withing the projection itself. after that, each projection out gets softmaxed with a unique, learnable temp. the k softmax gets to decide the overwrite strengths for the slots, and the q softmax gets to weigh the slot contents before they are summed, just like vanilla attention. the slots are just simple selective SSMs, if a(t) is the k softmax score, then:
h(t)=(1-a(t))*h(t-1)+a(t)*v(t)
anyway. these "heads" are used to replace the attention heads in a GPT. with d_model=384, n_layers=6, d_head=48, ffn_mult=4, n_slots=48 we get about 11M parameters. i used absolute positional encodings, i am not sure if using RoPE would have worked, i just went with the "safe" option.
here is the head module. i didnt write it, i have no coding skills, i just explained the maths to chatgpt, told it to keep the recurrences in fp32 and to soft-clamp the softmax temps. its probably not very optimized, but it works:
class DenseSlotMemoryHead(nn.Module):
"""
Dense (non-sparse) slot-memory head (per-sequence SSM style).
- Input x: [B, T, d_model]
- Internal projections: d_model -> d_head
- Slot routing via dense softmax over n_slots with learnable temperature
- Selective recurrence over slots (vectorized over time, scan done in fp32)
- Slots are always reset per call (slot_state=None; this is SSM-like)
Returns:
y_out : [B, T, d_head]
new_state : [B, n_slots, d_head] (unused if you reset every sequence)
aux_loss : scalar (slot usage balance loss)
"""
def __init__(
self,
d_model: int,
d_head: int,
n_slots: int,
use_bias: bool = False,
temp_min: float = 0.1,
temp_max: float = 10.0,
):
super().__init__()
self.d_model = d_model
self.d_head = d_head
self.n_slots = n_slots
self.temp_min = temp_min
self.temp_max = temp_max
# Model -> head projections
self.W_k = nn.Linear(d_model, d_head, bias=use_bias)
self.W_q = nn.Linear(d_model, d_head, bias=use_bias)
self.W_v = nn.Linear(d_model, d_head, bias=use_bias)
# Head -> slot logits (shared for write and read)
self.W_e = nn.Linear(d_head, n_slots, bias=False)
# Learnable temperatures (scalar) for write/read softmax
self.temp_write_logit = nn.Parameter(torch.zeros(()))
self.temp_read_logit = nn.Parameter(torch.zeros(()))
def _get_temps(self, dtype, device):
"""Compute write/read temperatures, softly clamped to [temp_min, temp_max]."""
write_logit = self.temp_write_logit.to(device=device, dtype=dtype)
read_logit = self.temp_read_logit.to(device=device, dtype=dtype)
span = self.temp_max - self.temp_min
temp_write = self.temp_min + span * torch.sigmoid(write_logit)
temp_read = self.temp_min + span * torch.sigmoid(read_logit)
return temp_write, temp_read
def forward(
self,
x: torch.Tensor, # [B, T, d_model]
slot_state: torch.Tensor | None = None, # [B, n_slots, d_head] or None
):
B, T, Dm = x.shape
assert Dm == self.d_model
device = x.device
dtype = x.dtype
# Slot initial state (per sequence, like an SSM)
if slot_state is None:
H0 = torch.zeros(B, self.n_slots, self.d_head, device=device, dtype=dtype)
else:
H0 = slot_state.to(device=device, dtype=dtype)
# 1) Project all timesteps to head space
k = self.W_k(x) # [B, T, d_head]
q = self.W_q(x)
v = self.W_v(x) # [B, T, d_head]
# 2) Slot logits
B_, T_, Dh = k.shape
k_e = self.W_e(k.view(B_ * T_, Dh)).view(B, T, self.n_slots) # [B, T, n_slots]
q_e = self.W_e(q.view(B_ * T_, Dh)).view(B, T, self.n_slots)
# 3) Learnable temperatures + dense softmax routing
temp_write, temp_read = self._get_temps(dtype=dtype, device=device)
eps_temp = torch.finfo(dtype).eps
tw = torch.clamp(temp_write, min=eps_temp)
tr = torch.clamp(temp_read, min=eps_temp)
k_e_scaled = k_e / tw
q_e_scaled = q_e / tr
write_weights = F.softmax(k_e_scaled, dim=-1) # [B, T, n_slots]
read_weights = F.softmax(q_e_scaled, dim=-1) # [B, T, n_slots]
# 4) Slot usage aux loss (encourage uniform write usage)
slot_usage = write_weights.mean(dim=(0, 1)) # [n_slots]
aux_loss = ((slot_usage * self.n_slots - 1.0) ** 2).mean()
# 5) Selective recurrence over slots
a_dense = torch.clamp(write_weights, 0.0, 1.0 - 1e-5) # [B, T, n_slots]
A = 1.0 - a_dense # [B, T, n_slots]
v_expanded = v.unsqueeze(2) # [B, T, 1, d_head]
B_term = a_dense.unsqueeze(-1) * v_expanded # [B, T, n_slots, d_head]
# Slot-major layout
A_slot = A.permute(0, 2, 1).contiguous() # [B, n_slots, T]
B_slot = B_term.permute(0, 2, 1, 3).contiguous() # [B, n_slots, T, d_head]
# Do the scan in fp32 for numerical stability
A_slot32 = A_slot.to(torch.float32)
B_slot32 = B_slot.to(torch.float32)
H0_32 = H0.to(torch.float32)
C = A_slot32.cumprod(dim=2) # [B, n_slots, T]
eps = torch.finfo(torch.float32).eps
C_safe = C.clamp(min=eps)
R = B_slot32 / C_safe.unsqueeze(-1) # [B, n_slots, T, d_head]
S = R.cumsum(dim=2) # [B, n_slots, T, d_head]
H0_exp = H0_32.unsqueeze(2) # [B, n_slots, 1, d_head]
H_seq32 = C.unsqueeze(-1) * (H0_exp + S) # [B, n_slots, T, d_head]
H_seq = H_seq32.to(dtype=dtype) # [B, n_slots, T, d_head]
new_state = H_seq[:, :, -1, :] # [B, n_slots, d_head]
# 6) Readout
H_bt = H_seq.permute(0, 2, 1, 3).contiguous() # [B, T, n_slots, d_head]
y_out = torch.sum(read_weights.unsqueeze(-1) * H_bt, dim=2) # [B, T, d_head]
return y_out, new_state, aux_loss
i tested this head with the hyperparams i have given within a gpt. all heads were replaced with this one, so, no vanilla attention heads. the model was able to solve 24 digit addition within 40k steps with a batch size of 192, lr=3e-4 to 3e-5 using cosine annealing and adamw as the optimizer. i ran it at bf16 on my 3060. the samples were created as:
24digits+24digits=25digits
to keep the length fixed and make the models job easier. i did a 16 digit run too, and the same model solved it under 25k steps.
like i said, i am not expecting this thing to go anywhere, and i am just someone who occasionally tinkers with ml. i dont think there is anything new or exciting about this model, its highly unlikely to perform better than anything, but it works, and i came up with it myself, though i was obviously heavily inspired by the selective recurrences used in mamba, rwkv etc. its possible that this thing just replicates them and i wouldnt even know, because i didnt actually read their papers. | 2025-11-30T16:36:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pamni7/very_low_effort_i_designed_a_simple_ssm_head/ | smoothbrain_1947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pamni7 | false | null | t3_1pamni7 | /r/LocalLLaMA/comments/1pamni7/very_low_effort_i_designed_a_simple_ssm_head/ | false | false | self | 2 | null |
I built an on-device AI chat app for people who care about privacy | 0 | This week I launched Nativ, an on-device AI chat app for iPhone. I wanted something fast, private, and truly native to iOS, so I ended up building it myself.
Nativ runs AI models directly on your device, including Apple Intelligence and MLX models like Llama, Gemma, Granite, Phi, and Qwen. It also includes optional tools such as PDF chat with on-device RAG, voice mode, HealthKit insights, web search, and a local “Near Me” feature.
**No subscriptions, no logins, and no analytics. Everything stays on your iPhone.**
Right now I’m working on adding more MLX models and image generation for an upcoming update. Happy to answer any questions, and I’d love your feedback!
[App Store here](https://apps.apple.com/us/app/nativ-local-ai/id6755643116) | 2025-11-30T16:12:19 | https://www.reddit.com/r/LocalLLaMA/comments/1pam1ri/i_built_an_ondevice_ai_chat_app_for_people_who/ | gonzc_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pam1ri | false | null | t3_1pam1ri | /r/LocalLLaMA/comments/1pam1ri/i_built_an_ondevice_ai_chat_app_for_people_who/ | false | false | self | 0 | null |
LocalAI 3.8.0 released: Universal Model Loader (HF/Ollama/OCI), MCP Agent Streaming, Logprobs support, and strict SSE compliance. | 22 | Hey everyone, author of LocalAI here.
I just pushed version 3.8.0 and wanted to share the updates with the community. For those unaware, LocalAI acts as an OpenAI-compatible API wrapper around llama.cpp, diffusers, vLLM, MLX, and other backends.
This release focuses heavily on Agentic workflows and Usability.
Key Updates:
Universal Model Import: We refactored the model loader. It now accepts URLs (HF, Ollama, OCI) and attempts to auto-detect the backend logic and apply the correct chat template (llama-3, mistral, etc.) automatically.
https://reddit.com/link/1pam156/video/ucwy4uh74f4g1/player
MCP (Model Context Protocol) Streaming: We added a new endpoint to stream Agent actions. You can now visually see the agent's reasoning steps and tool calls live in the UI before the final response is generated.
https://reddit.com/link/1pam156/video/5cxi9ee94f4g1/player
Configuring an agent MCP is simplified now:
https://reddit.com/link/1pam156/video/mdpznllk4f4g1/player
Runtime Settings: You no longer need to restart the container to rotate API keys, toggle P2P settings, or change Watchdog configurations. You can hot-reload these directly from the UI.
https://reddit.com/link/1pam156/video/jvhwjzyd4f4g1/player
Logitbias & Logprobs: Added full support for token-level probabilities.
Strict SSE Compliance: We tightened up the Server-Sent Events implementation. This resolves the annoying errors people were seeing when using openai-node or LangChain JS clients.
Advanced llama.cpp Tuning: We exposed more granular controls in the YAML config. You can now explicitly set context\_shift, cache\_ram, and parallel worker slots.
I didn't wanted to clutter the post here, but there are more videos present in the official release on Github, have a look here if interested!
Repo: [https://github.com/mudler/LocalAI](https://github.com/mudler/LocalAI)
Full Changelog: [https://github.com/mudler/LocalAI/releases/tag/v3.8.0](https://github.com/mudler/LocalAI/releases/tag/v3.8.0)
Let me know if you run into any issues with the new importer! Enjoy! | 2025-11-30T16:11:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pam156/localai_380_released_universal_model_loader/ | mudler_it | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pam156 | false | null | t3_1pam156 | /r/LocalLLaMA/comments/1pam156/localai_380_released_universal_model_loader/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'v50uBAtXaJcThcZ_W1PMevST4UVxkUKkBmd5HJoTDYE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v50uBAtXaJcThcZ_W1PMevST4UVxkUKkBmd5HJoTDYE.png?width=108&crop=smart&auto=webp&s=1dfaaebb319d598efc9ade962ce57d3ba37d67bd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v50uBAtXaJcThcZ_W1PMevST4UVxkUKkBmd5HJoTDYE.png?width=216&crop=smart&auto=webp&s=e2e0deb2f4a0848f5c8bacb88deb8c951d7a9250', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v50uBAtXaJcThcZ_W1PMevST4UVxkUKkBmd5HJoTDYE.png?width=320&crop=smart&auto=webp&s=c8a531f35e7880263c6f0286664d283c0e2f3b8e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v50uBAtXaJcThcZ_W1PMevST4UVxkUKkBmd5HJoTDYE.png?width=640&crop=smart&auto=webp&s=07c2de2ee6cb66ad7a6f68479662894c187c051c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v50uBAtXaJcThcZ_W1PMevST4UVxkUKkBmd5HJoTDYE.png?width=960&crop=smart&auto=webp&s=13a24abd1adff153d65b1a7098a34f7279f3740b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v50uBAtXaJcThcZ_W1PMevST4UVxkUKkBmd5HJoTDYE.png?width=1080&crop=smart&auto=webp&s=c0ecd91b4ac6075db46f8fcb163b041c4878c571', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v50uBAtXaJcThcZ_W1PMevST4UVxkUKkBmd5HJoTDYE.png?auto=webp&s=71b52313778a04b69d53b563c83f5e51c6450acb', 'width': 1200}, 'variants': {}}]} |
RAG of financial statements | 3 | Good afternoon!
I would like to know how to use Ollama to read my financial statements, understand them and provide me with insights. I want to do everything locally for privacy reasons. I have company and personal statements.
Currently I already run qwen2.5-7B locally, could I use it for that? | 2025-11-30T15:58:02 | https://www.reddit.com/r/LocalLLaMA/comments/1palp6g/rag_of_financial_statements/ | Less_Piccolo_6218 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1palp6g | false | null | t3_1palp6g | /r/LocalLLaMA/comments/1palp6g/rag_of_financial_statements/ | false | false | self | 3 | null |
Built a Modular Agentic RAG System – Zero Boilerplate, Full Customization | 12 | Hey everyone!
Last month I released a GitHub repo to help people understand Agentic RAG with LangGraph quickly with minimal code. The feedback was amazing, so I decided to take it further and build a **fully modular system** alongside the tutorial.
## True Modularity – Swap Any Component Instantly
- **LLM Provider?** One line change: Ollama → OpenAI → Claude → Gemini
- **Chunking Strategy?** Edit one file, everything else stays the same
- **Vector DB?** Swap Qdrant for Pinecone/Weaviate without touching agent logic
- **Agent Workflow?** Add/remove nodes and edges in the graph
- **System Prompts?** Customize behavior without touching core logic
- **Embedding Model?** Single config change
## Key Features
✅ **Hierarchical Indexing** – Balance precision with context
✅ **Conversation Memory** – Maintain context across interactions
✅ **Query Clarification** – Human-in-the-loop validation
✅ **Self-Correcting Agent** – Automatic error recovery
✅ **Provider Agnostic** – Works with any LLM/vector DB
✅ **Full Gradio UI** – Ready-to-use interface
Link: [GitHub repo](https://github.com/GiovanniPasq/agentic-rag-for-dummies) | 2025-11-30T15:57:38 | CapitalShake3085 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1palote | false | null | t3_1palote | /r/LocalLLaMA/comments/1palote/built_a_modular_agentic_rag_system_zero/ | false | false | default | 12 | {'enabled': True, 'images': [{'id': 'vlxnbaqn2f4g1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=108&crop=smart&format=png8&s=6ce4b18766783bc4df89c262d9288f31f90c267b', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=216&crop=smart&format=png8&s=bcfeb4d4b080084e2f057470fd06ebacd6a759e8', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=320&crop=smart&format=png8&s=1180c5dbc7c314098471f5638f04f6a19fa66983', 'width': 320}, {'height': 512, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=640&crop=smart&format=png8&s=d90cbec8ebfa568079c046f8a26d371c3ff9e901', 'width': 640}], 'source': {'height': 576, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?format=png8&s=8c4d3d930298bfb89e5e2249538d6918bf230ec6', 'width': 720}, 'variants': {'gif': {'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=108&crop=smart&s=324bd8d3c9c582adb6adb82d2e23519055063018', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=216&crop=smart&s=2337b96fa976db5c4edba5b0f73ded3e596aff25', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=320&crop=smart&s=155b59cd7611e0a65e1f2e49709b2bf115b9f92d', 'width': 320}, {'height': 512, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=640&crop=smart&s=f25c990e106612bf0d2b452ea1356240ec2e15f9', 'width': 640}], 'source': {'height': 576, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?s=79b403d777a5950eb2e046d84e0d95497f16087d', 'width': 720}}, 'mp4': {'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=108&format=mp4&s=6102cdc1324b540dbe2b3ff4d0d616cbe6325b39', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=216&format=mp4&s=ca22002342edbacaddbe950eaa22efa7632b2ee3', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=320&format=mp4&s=a6fca28a2eec9894f4dad4d3d16df312cfd0dbcd', 'width': 320}, {'height': 512, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?width=640&format=mp4&s=01490855957f08f3295154341ecbf89cc77829f0', 'width': 640}], 'source': {'height': 576, 'url': 'https://preview.redd.it/vlxnbaqn2f4g1.gif?format=mp4&s=c4b3acd0df1abb43347da3e9f8c918e313e7882c', 'width': 720}}}}]} | |
Need advice upgrading an old gaming desktop with a 5090 for AI | 1 | A relative is giving me a XPS 8950 desktop: Intel i7-12700K, 64GB DDR5, Nvidia 3070 8GB.
I would like to use it for an image generation long-term project (a few hours a day), + using/testing various AI tools for fun since I love this stuff. It turns out I have an opportunity to get a 5090 (or any brand new card) at a significant discount.
The problem is that the XPS 8950 is unsuitable:
- The case is too small to fit a 5090
- Not enough PCIe lanes or PSU juice to keep both GPUs
I have to choose between:
1. Get a new large case for the XPS to put the 5090 in. Get rid of the 3070 to free up the PSU for the 5090, potentially underclock the 5090 if PSU is still not good enough. Might have to buy a new PSU anyway, in which case I keep both cards.
2. Buy an external enclosure for the 5090 (eGPU), get to keep both GPUs. Although my experience with external SSD enclosures has been negative, I'm excited at the idea of having a portable 32GB AI lab. From reading this sub I know that if you're using a single GPU, and the entire model fits in VRAM, the slow bus speed has no effect on inference speed, only on initial model loading. The 5090 can't be combined with the 3070 for any tasks without nuking the speed (more on that in a follow-up question below, please confirm), so the 3070 can act as a secondary bank of VRAM for isolated small models you want to run fast.
I'm leaning for the eGPU because it seems like such a neat solution to the problem, but would appreciate some feedback here. It's too good to be true, right?
I have more questions but I'm gonna add them as comments below. I really want to understand this stuff, and I don't want scare off people with a wall of text, lol.
THANK YOU FOR YOUR ATTENTION TO THIS MATTER! | 2025-11-30T15:42:44 | https://www.reddit.com/r/LocalLLaMA/comments/1palbu7/need_advice_upgrading_an_old_gaming_desktop_with/ | dtdisapointingresult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1palbu7 | false | null | t3_1palbu7 | /r/LocalLLaMA/comments/1palbu7/need_advice_upgrading_an_old_gaming_desktop_with/ | false | false | self | 1 | null |
AI can now draw pixel art and build models stroke-by-stroke! | 0 | 2025-11-30T15:40:15 | https://v.redd.it/svdyo8sjze4g1 | uskyeeeee | /r/LocalLLaMA/comments/1pal9nf/ai_can_now_draw_pixel_art_and_build_models/ | 1970-01-01T00:00:00 | 0 | {} | 1pal9nf | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/svdyo8sjze4g1/DASHPlaylist.mpd?a=1767238823%2CZDJkMjQ3MzRkNTg0ZDFhNjcyMGNkZDIxNjFiZTlmZDkxN2E3OWZhMTlhN2ZmYTFhMzdmMGQxNDdkYzRhYWZkYQ%3D%3D&v=1&f=sd', 'duration': 214, 'fallback_url': 'https://v.redd.it/svdyo8sjze4g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 666, 'hls_url': 'https://v.redd.it/svdyo8sjze4g1/HLSPlaylist.m3u8?a=1767238823%2CZWI1ZjYzYzYxMjczOWY0ZTM1MWJlYzlmNGIwZTNiZDk0MTljZWViMjJmNzUzMmI1NmM3ZDIzMTlkMjgxODRhNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/svdyo8sjze4g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1pal9nf | /r/LocalLLaMA/comments/1pal9nf/ai_can_now_draw_pixel_art_and_build_models/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OTM2MTRoZmp6ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/OTM2MTRoZmp6ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=108&crop=smart&format=pjpg&auto=webp&s=eac3a63f34a1046ff0e1440e3f045b06e8af7d33', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/OTM2MTRoZmp6ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=216&crop=smart&format=pjpg&auto=webp&s=65da327be8b1a09688cefb6f43292d85423b39e0', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/OTM2MTRoZmp6ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=320&crop=smart&format=pjpg&auto=webp&s=bb03a200682bfccf2175feeb14f90a7fc9b4a495', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/OTM2MTRoZmp6ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=640&crop=smart&format=pjpg&auto=webp&s=a97959b5c23b0b116dd8b2406237bac45be41264', 'width': 640}, {'height': 500, 'url': 'https://external-preview.redd.it/OTM2MTRoZmp6ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=960&crop=smart&format=pjpg&auto=webp&s=a63fd4ae1115b5ef5d9dc5a4be1a133ff588bca8', 'width': 960}, {'height': 562, 'url': 'https://external-preview.redd.it/OTM2MTRoZmp6ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e88facf5ffeb4857da64d341f102fe0d0ef44183', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/OTM2MTRoZmp6ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?format=pjpg&auto=webp&s=bff74ad2168a336a00eeee6a815386c7801ab965', 'width': 1920}, 'variants': {}}]} | ||
Have long context models solved attention dilution yet? | 13 | I recently came across a claim that because the Gemini models have 1m context, there is no longer a need to RAG or chunk long documents.
Just wondering whether this is actually true of Gemini or any long context model out there now because the last I tried and read up on this, I was under the impression that performance drops dramatically around the 100K-200K token range, which is consistent with my real life experience.
The claim I read about was in relation to a use case where accuracy is absolutely critical (legal docs) so I'm wondering whether it's really true and whether chunking and text splitting is dead for long docs where you need 100% attention to all parts of the doc.
If it's true or coming true soon, then do we just hold out for those models then... | 2025-11-30T15:31:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pal23y/have_long_context_models_solved_attention/ | yuch85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pal23y | false | null | t3_1pal23y | /r/LocalLLaMA/comments/1pal23y/have_long_context_models_solved_attention/ | false | false | self | 13 | null |
Gemini 3.0 is lazy but artistic, Claude 4.5 is a hard worker. I built a real-time visualization tool to compare how models "draw" pixel art. | 2 | Hello everyone,
I wanted to share a project I've been working on: pixgens.com.
It’s a platform where AI generates pixel art and voxel models in real-time. But unlike standard image generators that just spit out a finished JPEG, this tool lets you watch the AI draw stroke-by-stroke.
It’s still in the early stages (alpha), but honestly, watching the generation process hits different compared to just chatting with a bot. Even when the AI makes mistakes, it feels strangely organic—like watching a child drawing and figuring things out. You can almost see the "thought process" behind the strokes.
Here is a demo of what it can do so far (Video attached). I plan to add music, filters, and animation support soon.
How to use it:
The site is completely free, but you need to bring your own OpenRouter API Key (since I can't afford the GPU bill for everyone yet!).
My personal findings on models:
• Gemini 3.0 Pro: Honestly, this model has the best artistic sense. The output is beautiful, but it can be incredibly lazy. You have to really push it.
• Claude 4.5 Opus: A bit weaker on the "artistic flair" side, but extremely hardworking. If you use the "deepthink" mode, it creates massive, intricate pieces that are genuinely surprising.
I know some might say AI art is glitchy or "soulless," but seeing it built piece by piece gives it a weird kind of charm. With base models getting better every month, I’m excited to see where this goes.
I’d love for you guys to try it out and share what you create (or share the funny failures).
Cheers! | 2025-11-30T15:21:59 | https://v.redd.it/diar2p8awe4g1 | uskyeeeee | /r/LocalLLaMA/comments/1paktta/gemini_30_is_lazy_but_artistic_claude_45_is_a/ | 1970-01-01T00:00:00 | 0 | {} | 1paktta | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/diar2p8awe4g1/DASHPlaylist.mpd?a=1767237726%2COGExZjI3MDcwOTA5MTVmYmE3MTc0YzhkZWEwMDYzNDFjZTc3MjQ5ZmMwODA4ZGM0ZGJlY2FiZDQ4YjEyZjdmYQ%3D%3D&v=1&f=sd', 'duration': 214, 'fallback_url': 'https://v.redd.it/diar2p8awe4g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 666, 'hls_url': 'https://v.redd.it/diar2p8awe4g1/HLSPlaylist.m3u8?a=1767237726%2CNmY1ODE5NTkxY2FhNDIzNzVhNGE1ZmZlZTUxYmUyNWJmZDBlOTA0MDY4ZDhhOWI5NTc1MGIxYWQ1NGJiY2IyOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/diar2p8awe4g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1paktta | /r/LocalLLaMA/comments/1paktta/gemini_30_is_lazy_but_artistic_claude_45_is_a/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'azB5MngxMGF3ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/azB5MngxMGF3ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=108&crop=smart&format=pjpg&auto=webp&s=8498fe8a39a94d02446c9416114009ccdf2d27df', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/azB5MngxMGF3ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=216&crop=smart&format=pjpg&auto=webp&s=70bb9ddf152ad5ff84af2780d20935b8adb3e116', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/azB5MngxMGF3ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=320&crop=smart&format=pjpg&auto=webp&s=5807d4de4880d07a3573b2140f6ae4733a47fa21', 'width': 320}, {'height': 333, 'url': 'https://external-preview.redd.it/azB5MngxMGF3ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=640&crop=smart&format=pjpg&auto=webp&s=0faec464e8ab092391b4cdc3e44c6d358121d260', 'width': 640}, {'height': 500, 'url': 'https://external-preview.redd.it/azB5MngxMGF3ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=960&crop=smart&format=pjpg&auto=webp&s=662e258dce8d3525fede4880ee2fa468e4e9a2fb', 'width': 960}, {'height': 562, 'url': 'https://external-preview.redd.it/azB5MngxMGF3ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dbbda65f038403cecae450143ad52e360f29752c', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/azB5MngxMGF3ZTRnMTf-OqBoXpFWyylrTO-yttz1YsaMFrqBS_JL8GdNuR6d.png?format=pjpg&auto=webp&s=43e841d660ecf8cead76b758a0ed6eb18903f070', 'width': 1920}, 'variants': {}}]} | |
Users of Qwen3-Next-80B-A3B-Instruct-GGUF, How is Performance & Benchmarks? | 90 | It's been over a day we got GGUF. Please share your experience. Thanks
At first, I didn't believe that we could run this model just with 30GB RAM(Yes, RAM only) .... Unsloth posted a thread actually. Then someone shared a stat on that.
[17 t/s just with 32GB RAM + 10GB VRAM using Q4](https://www.reddit.com/r/LocalLLM/comments/1p8xlnw/comment/nrcjh83/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
Good for Poor GPU Club. | 2025-11-30T15:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pakey8/users_of_qwen3next80ba3binstructgguf_how_is/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pakey8 | false | null | t3_1pakey8 | /r/LocalLLaMA/comments/1pakey8/users_of_qwen3next80ba3binstructgguf_how_is/ | false | false | self | 90 | null |
One Bottleneck After Another - First GPU & now RAM | 0 | So many threads like [this](https://www.reddit.com/r/LocalLLaMA/comments/1pa85la/any_idea_when_ram_prices_will_be_normalagain/) on multiple subs for last couple of months.
Terrible timeline for some \*sigh\* | 2025-11-30T14:53:54 | pmttyji | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pak5hw | false | null | t3_1pak5hw | /r/LocalLLaMA/comments/1pak5hw/one_bottleneck_after_another_first_gpu_now_ram/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'edd2to31de4g1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/edd2to31de4g1.jpeg?width=108&crop=smart&auto=webp&s=2480fb6d79be221a2364230d24e8acc69e427608', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/edd2to31de4g1.jpeg?width=216&crop=smart&auto=webp&s=0239cc2b39ccb00b2d309a21145015f774f89a7d', 'width': 216}, {'height': 247, 'url': 'https://preview.redd.it/edd2to31de4g1.jpeg?width=320&crop=smart&auto=webp&s=5f84f737db5544af02b0db83d1eb316817932c66', 'width': 320}, {'height': 494, 'url': 'https://preview.redd.it/edd2to31de4g1.jpeg?width=640&crop=smart&auto=webp&s=56daa8652488a2a73d1ba2ddabe89faa64edc330', 'width': 640}], 'source': {'height': 500, 'url': 'https://preview.redd.it/edd2to31de4g1.jpeg?auto=webp&s=be95e5b28a3fa0ee3d16e5ef9aaedd9064bf6899', 'width': 647}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.