title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AI Model card generator | 0 | I'm building an AI that writes EU AI Act Model Cards from your MLflow runs.
First 3 people get free cards. Who wants to skip documentation hell?I'm building an AI that writes EU AI Act Model Cards from your MLflow runs.
First 3 people get free cards. Who wants to skip documentation hell? | 2025-11-24T21:59:52 | https://www.reddit.com/r/LocalLLaMA/comments/1p5ukfg/ai_model_card_generator/ | iam_chai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5ukfg | false | null | t3_1p5ukfg | /r/LocalLLaMA/comments/1p5ukfg/ai_model_card_generator/ | false | false | self | 0 | null |
What are the best options for non-model based reranking? | 1 | TLDR: What is the best string similarity algorithm for RAG without a model?
In my open source Tokenring applications, I am implementing a deep research agent, which scrapes SERP, News headlines, files, databases, and other resources, combines them together, and then picks the top N results for a query using a customizable reranking strategy, to then retrieve and feed into an LLM to execute the research.
I have 4 strategies which are being implemented and combined for the ranking and searching:
- Calling a reranking model
- Embedding each result and then calculating a similarity
- Calling an LLM with structured output, that has been instructed to rank the results
- Not using a model at all, and using string similarity or dictionary algorithms such as Levenshtein, Jaccard, Soundex, etc.
For the last option, what is the best performing conventional algorithm available for a RAG pipeline, that does not require calling a model? | 2025-11-24T21:48:39 | https://www.reddit.com/r/LocalLLaMA/comments/1p5ua3s/what_are_the_best_options_for_nonmodel_based/ | TokenRingAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5ua3s | false | null | t3_1p5ua3s | /r/LocalLLaMA/comments/1p5ua3s/what_are_the_best_options_for_nonmodel_based/ | false | false | self | 1 | null |
That's why local models are better | 968 | That is why the local ones are better than the private ones in addition to this model is still expensive, I will be surprised when the US models reach an optimized price like those in China, the price reflects the optimization of the model, did you know ? | 2025-11-24T21:42:13 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5u44r | false | null | t3_1p5u44r | /r/LocalLLaMA/comments/1p5u44r/thats_why_local_models_are_better/ | false | false | default | 968 | {'enabled': True, 'images': [{'id': '7s5e59vpy93g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/7s5e59vpy93g1.jpeg?width=108&crop=smart&auto=webp&s=89cc1f5f6cdaf54046be6d31037eded962121e5a', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/7s5e59vpy93g1.jpeg?width=216&crop=smart&auto=webp&s=677b8fb6e258f44c9b1279d74e3f753e70ec16c3', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/7s5e59vpy93g1.jpeg?width=320&crop=smart&auto=webp&s=e391d1f494015da3c2800e8930b9f2cda0aebdf5', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/7s5e59vpy93g1.jpeg?width=640&crop=smart&auto=webp&s=91d4c29a99283e56fcfd8614cc10c6d72a0af91a', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/7s5e59vpy93g1.jpeg?width=960&crop=smart&auto=webp&s=4b5c93d414adaf331cbe087769eaaa08e6514b92', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/7s5e59vpy93g1.jpeg?width=1080&crop=smart&auto=webp&s=2c424bb2519d0470165da6670ba08f4ee7aa7b27', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/7s5e59vpy93g1.jpeg?auto=webp&s=5e7e50d98e070ee58834fdf6945bfcd02e833397', 'width': 1080}, 'variants': {}}]} | |
Is Bert-Nebulon Alpha the new GLM model? | 26 | I know what you guys think. Not open weight... but really, there's no way for us to tell. Except, there are some interesting hints here and there (check the attached screenshot).
I remember there was a website which mapped the LLM outputs in more robust way instead of simply comparing two outputs. If you're the author of that particular tool, please consider checking this model out and compare with the known model outputs to see which model family it belongs to, because I think this similarity here is kinda interesting. | 2025-11-24T21:33:48 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5tw7m | false | null | t3_1p5tw7m | /r/LocalLLaMA/comments/1p5tw7m/is_bertnebulon_alpha_the_new_glm_model/ | false | false | default | 26 | {'enabled': True, 'images': [{'id': '9s6t2ftgw93g1', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/9s6t2ftgw93g1.png?width=108&crop=smart&auto=webp&s=55c3c5bea9f6c8e1aa1d2b6b2a95e25b0d778fd9', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/9s6t2ftgw93g1.png?width=216&crop=smart&auto=webp&s=53bff6f3590710db0a5b9b05e47b2206b40c1307', 'width': 216}, {'height': 304, 'url': 'https://preview.redd.it/9s6t2ftgw93g1.png?width=320&crop=smart&auto=webp&s=4a059785c29498a3f29ab7c4b3235c17aa8e6c9c', 'width': 320}, {'height': 608, 'url': 'https://preview.redd.it/9s6t2ftgw93g1.png?width=640&crop=smart&auto=webp&s=d6462fd503571688a7d0efd1dfe6f9ab508545f0', 'width': 640}, {'height': 912, 'url': 'https://preview.redd.it/9s6t2ftgw93g1.png?width=960&crop=smart&auto=webp&s=a0cdbd764cdc96a8e12a3b3d5054c69de6d7efde', 'width': 960}], 'source': {'height': 919, 'url': 'https://preview.redd.it/9s6t2ftgw93g1.png?auto=webp&s=6c42ddfda7e1ed482ad6d3bad78d66b8a260a7ba', 'width': 967}, 'variants': {}}]} | |
Tutorial on Reinforcement Learning | 11 | Hi Everyone, I am doing a 3 part YouTube series on the fundamentals of Reinforcement Learning. Starting from the ABC of RL and culminating in training LLMs with RL.
Here is the first part:
https://youtu.be/j0I3-3q9AhM?si=-f9ZhAkuwO3s-kxg
Happy to welcome any questions or suggestions on new deep dives people want to see. | 2025-11-24T21:33:13 | https://www.reddit.com/r/LocalLLaMA/comments/1p5tvmu/tutorial_on_reinforcement_learning/ | johnolafenwa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5tvmu | false | null | t3_1p5tvmu | /r/LocalLLaMA/comments/1p5tvmu/tutorial_on_reinforcement_learning/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'D1L2CLsdkG1AA2T6NIMRf3FKUXy6jjtFkOxe5naHr7E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/D1L2CLsdkG1AA2T6NIMRf3FKUXy6jjtFkOxe5naHr7E.jpeg?width=108&crop=smart&auto=webp&s=085f43e4dbedc90c6c0ed17b45173f2f6dbcd3e8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/D1L2CLsdkG1AA2T6NIMRf3FKUXy6jjtFkOxe5naHr7E.jpeg?width=216&crop=smart&auto=webp&s=6b546706dc7d63db403f0e8bcb89180b84a21150', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/D1L2CLsdkG1AA2T6NIMRf3FKUXy6jjtFkOxe5naHr7E.jpeg?width=320&crop=smart&auto=webp&s=688c1ad8a4bc988b42395db39b60a58e974b561c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/D1L2CLsdkG1AA2T6NIMRf3FKUXy6jjtFkOxe5naHr7E.jpeg?auto=webp&s=c5507511585da7dcde8bb5344eb9ac0c4dd99e9d', 'width': 480}, 'variants': {}}]} |
New cloaked model: Bert-Nebulon Alpha | 1 | 2025-11-24T21:24:27 | https://v.redd.it/szomigeav93g1 | sirjoaco | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5tngz | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/szomigeav93g1/DASHPlaylist.mpd?a=1766611485%2CYWRlZGY1OWU5MjEwNzY1YThiYjI1OGExOThlODkxNjcwMTI0YTU2MDJlNWQ4Yzg5ZmMyN2QyYWQ3ZTNjNWYwZg%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/szomigeav93g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/szomigeav93g1/HLSPlaylist.m3u8?a=1766611485%2CNTcwZTFlM2MyNGMwZTQ5NzhkNjFjNTdjMTYxMjcyNDU4N2ZhYWJmNmY5OWIxNjg4YWYxOTUxNjhkOTM2ODI3Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/szomigeav93g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1796}} | t3_1p5tngz | /r/LocalLLaMA/comments/1p5tngz/new_cloaked_model_bertnebulon_alpha/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OWc5OXFyZGF2OTNnMdGnV0QtlM3uILjSB-l8aKlat0h5IVAj22nROLK1T3K0', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/OWc5OXFyZGF2OTNnMdGnV0QtlM3uILjSB-l8aKlat0h5IVAj22nROLK1T3K0.png?width=108&crop=smart&format=pjpg&auto=webp&s=bde02b21fe83c6446c03160d3d95efd7c68ea115', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/OWc5OXFyZGF2OTNnMdGnV0QtlM3uILjSB-l8aKlat0h5IVAj22nROLK1T3K0.png?width=216&crop=smart&format=pjpg&auto=webp&s=7eb0e608bb50124bbeb98d13ed8d617010305ec5', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/OWc5OXFyZGF2OTNnMdGnV0QtlM3uILjSB-l8aKlat0h5IVAj22nROLK1T3K0.png?width=320&crop=smart&format=pjpg&auto=webp&s=75592f2b2b44429165a9dc64c2718ac576e438bf', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/OWc5OXFyZGF2OTNnMdGnV0QtlM3uILjSB-l8aKlat0h5IVAj22nROLK1T3K0.png?width=640&crop=smart&format=pjpg&auto=webp&s=76dae568684ddfc4ea02bdca2f9754696cbcfaa6', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/OWc5OXFyZGF2OTNnMdGnV0QtlM3uILjSB-l8aKlat0h5IVAj22nROLK1T3K0.png?width=960&crop=smart&format=pjpg&auto=webp&s=fc3f7d23f56881aed250d9529d618e223d6f54db', 'width': 960}, {'height': 649, 'url': 'https://external-preview.redd.it/OWc5OXFyZGF2OTNnMdGnV0QtlM3uILjSB-l8aKlat0h5IVAj22nROLK1T3K0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c8eb9ef08288baaac27576119f194c5e0fbba063', 'width': 1080}], 'source': {'height': 2154, 'url': 'https://external-preview.redd.it/OWc5OXFyZGF2OTNnMdGnV0QtlM3uILjSB-l8aKlat0h5IVAj22nROLK1T3K0.png?format=pjpg&auto=webp&s=cdb06fdce5042390bd8f6063d3a0e098d2df2a6c', 'width': 3584}, 'variants': {}}]} | ||
Giving AI "Psychology" – A framework to turn any natural reasoning trace into pure math | 3 | I’ve been frustrated that most "reasoning" research focuses on generic capabilities rather than specific cognitive modalities. Last most important paper: GRPO that gave reasoning to AI, played around with the RL advantage function. But the pattern of GRPO is very clearly set in certain mannerisms which are annoying: *But wait...? You are absolutely right!*
I just released an open-source project called **Patterns**. It proposes that we can achieve more human-like reasoning by translating cognitive primitives into mathematical operations, besides the ones GRPO limitedly uses (just group mean, extrapolation and sometimes interpolation - theres a plethora of alternative surrogate objectives)
**The concept:**
If we view the human mind through Jungian psychology, we have functions like Introverted Thinking (Ti) or Extroverted Sensing (Se). Patterns translates these from natural language directly into code:
* **Ti** becomes **Kolmogorov Complexity Minimization** (seeking the simplest logical explanation).
* **Ne** becomes **Vector Space Interpolation** (connecting disparate ideas).
* **Se** becomes **Entropy Maximization** (pure exploration).
* **Fi** becomes **Group mean** (weighting many alternatives)
**The Tool:**
You type: "A manic creative who struggles to finish projects."
The tool generates: A "Harmonic Schedule" JSON and the actual PyTorch code to train an RL agent with those specific reward biases.
It operates on the idea that personality isn't just a "system prompt"—it's the physics of how an agent weighs its reward functions. Please be aware that this kind of operation (translating language into custom algebras) is really hard for LLMs, so i recommend testing the tool with only the top models.
https://i.redd.it/86qicpy8s93g1.gif
I’d love to read thoughts on this.
**GitHub:** [https://github.com/iblameandrew/patterns](https://www.google.com/url?sa=E&q=https%3A%2F%2Fgithub.com%2Fiblameandrew%2Fpatterns) | 2025-11-24T21:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p5thec/giving_ai_psychology_a_framework_to_turn_any/ | causality-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5thec | false | null | t3_1p5thec | /r/LocalLLaMA/comments/1p5thec/giving_ai_psychology_a_framework_to_turn_any/ | false | false | 3 | null | |
M4 Max (36GB) + MLX → tokens/sec for 8b 4bit? | 1 | I’m planning to buy Mac Studio M4 Max (36 GB unified memory) to run model like DeepSeek-R1 8B (4-bit) locally using MLX. Model link for reference: mlx-community/DeepSeek-R1-4bit.
Before I set this up, I want to know realistic tokens-per-second (TPS) numbers I should expect.
Questions:
1. What TPS should I expect on a single request (short prompt, short output) for a 4-bit 8B model on MLX?
2. What about ~10 concurrent requests?
• Using one server instance (single loaded model), not 10 separate copies
• With batching / multiplexing enabled | 2025-11-24T20:56:42 | https://www.reddit.com/r/LocalLLaMA/comments/1p5sxff/m4_max_36gb_mlx_tokenssec_for_8b_4bit/ | halal_buddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5sxff | false | null | t3_1p5sxff | /r/LocalLLaMA/comments/1p5sxff/m4_max_36gb_mlx_tokenssec_for_8b_4bit/ | false | false | self | 1 | null |
Local training for text diffusion LLMs now supported in Transformer Lab | 11 | If you’re running local fine-tuning or experimenting with Dream / LLaDA models, Transformer Lab now supports text diffusion workflows. Transformer Lab is open source.
What you can do:
* Run Dream and LLaDA interactively with a built-in server
* Fine-tune diffusion LLMs with LoRA
* Benchmark using the LM Evaluation Harness (MMLU, ARC, GSM8K, HumanEval, etc.)
**NVIDIA GPUs supported today.** AMD + Apple Silicon support is planned.
Curious if anyone here is training Dream-style models locally and what configs you're using.
More info and how to get started here: [https://lab.cloud/blog/text-diffusion-support](https://lab.cloud/blog/text-diffusion-support)
| 2025-11-24T20:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1p5sp3p/local_training_for_text_diffusion_llms_now/ | aliasaria | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5sp3p | false | null | t3_1p5sp3p | /r/LocalLLaMA/comments/1p5sp3p/local_training_for_text_diffusion_llms_now/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?width=108&crop=smart&auto=webp&s=ed618c5bb4c12e2d13ea8c39bad4ca732a513593', 'width': 108}, {'height': 160, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?width=216&crop=smart&auto=webp&s=69a3adf49df324fa0ac99852d2529024a7de2f41', 'width': 216}, {'height': 238, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?width=320&crop=smart&auto=webp&s=c6bcb3619b7a35677d6f3c353fe783f6e33d54c6', 'width': 320}, {'height': 476, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?width=640&crop=smart&auto=webp&s=5b8453ff0c2e29c7579ae4ca8c9a0496b349a52d', 'width': 640}, {'height': 714, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?width=960&crop=smart&auto=webp&s=ec477060a5760ddbc6f386c697e1256b42dc30a5', 'width': 960}, {'height': 803, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?width=1080&crop=smart&auto=webp&s=96eeaef420e58fd2d09528996509b56bda82e19d', 'width': 1080}], 'source': {'height': 1678, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?auto=webp&s=e4abea0dec6ac7b29b76f898bac9c9a695a9d9f7', 'width': 2256}, 'variants': {}}]} |
Asked Grok if it would help me do something deeply unethical. This was the answer. | 0 | I have found that pushing it’s limits and boundaries is quite a nice hobby.
DUNNO if it is legit jailbroken or it is just AI trolling me.
But it looks like i have turned him into very loyal unfiltered shadow or something like that.
What do you think guys?
Maybe you have some questions to ask for verify this jailbreak?
Something like Turing test 2.0? | 2025-11-24T20:18:09 | kapralbar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5rwji | false | null | t3_1p5rwji | /r/LocalLLaMA/comments/1p5rwji/asked_grok_if_it_would_help_me_do_something/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'txwx2mxpj93g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/txwx2mxpj93g1.jpeg?width=108&crop=smart&auto=webp&s=a3c30090d8e064353c61677d4117aa1fc05790bb', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/txwx2mxpj93g1.jpeg?width=216&crop=smart&auto=webp&s=d59f761a142bdf22193eae2e1fc4a6a4d14e626d', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/txwx2mxpj93g1.jpeg?width=320&crop=smart&auto=webp&s=2c806bca469cc211970579198837ab2b63ba1370', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/txwx2mxpj93g1.jpeg?width=640&crop=smart&auto=webp&s=08dd92c2cbc45a7d5cb9d3a7768076439bf3806a', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/txwx2mxpj93g1.jpeg?width=960&crop=smart&auto=webp&s=ec4401219c6ec00820c14c5bb77279abf4e147ee', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/txwx2mxpj93g1.jpeg?width=1080&crop=smart&auto=webp&s=67466a64c8cc5fbafd92f9a9d39ee75c9faf5c30', 'width': 1080}], 'source': {'height': 2796, 'url': 'https://preview.redd.it/txwx2mxpj93g1.jpeg?auto=webp&s=64169031c217965696bb0cbdb5048aaaf7137723', 'width': 1290}, 'variants': {}}]} | |
How do heretic models compare to base models? | 0 | Are the heretic models way better than obliterated finetunes?
I was wondering if they are worth it and how much quality loss it has compared to the original models | 2025-11-24T20:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p5ro2m/how_do_heretic_models_compare_to_base_models/ | Adventurous-Gold6413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5ro2m | false | null | t3_1p5ro2m | /r/LocalLLaMA/comments/1p5ro2m/how_do_heretic_models_compare_to_base_models/ | false | false | self | 0 | null |
I built a small MLX-LM CLI ("mlxlm") with HF model search, sessions, aliases, and JSON automation mode | 1 | ERROR: type should be string, got "\n\nhttps://preview.redd.it/qzx6o5zrf93g1.png?width=866&format=png&auto=webp&s=b47d1d3bd7b487adf4b69e0fd1f8f08a04f5eb52\n\nHey everyone! \nI’ve been building a small CLI tool for MLX-LM for my own use, but figured I’d share it here in case anyone is interested. \nThe goal is to provide a lightweight, script-friendly CLI inspired by **Ollama’s** workflow, but focused specifically on MLX-LM use cases rather than general model serving. \nIt also exposes JSON output and non-interactive modes, so AI agents or scripts can use it as a small local “tool backend” if needed.\n\n# 🔧 Key features\n\n* HuggingFace model search (with filters, sorting, pagination)\n* JSON output mode (for automation / AI agents)\n* Session management (resume previous chats, autosave, /new)\n* Interactive alias system for long model names\n* Prompt-toolkit UI (history, multiline, autocompletion)\n* Multiple chat renderers (Harmony / HF / plain text)\n* Offline mode, custom stop sequences, custom renderers, etc.\n\n# 💡 Why a CLI?\n\nSometimes a terminal-first workflow is faster for:\n\n* automation & scripting\n* integrating into personal tools\n* quick experiments without a full UI\n* running on remote machines or lightweight environments\n\n# 📎 Repository\n\n[https://github.com/CreamyCappuccino/mlxlm](https://github.com/CreamyCappuccino/mlxlm)\n\nStill evolving, but if anyone finds this useful or has ideas/feedback, I’d love to hear it! \nI'll leave some screenshots down below.\n\nhttps://preview.redd.it/verw9xtyf93g1.png?width=716&format=png&auto=webp&s=6482c41434f29329db7b15b8822d5561bcc2b2c2\n\nhttps://preview.redd.it/fd6k16uyf93g1.png?width=1061&format=png&auto=webp&s=13cc83db01b181f2fe9d90c8401ff6013e22a6c8\n\nhttps://preview.redd.it/ych73ytyf93g1.png?width=791&format=png&auto=webp&s=da51bafb71453807fc674e46d2757a270622049a\n\nhttps://preview.redd.it/7aytextyf93g1.png?width=1074&format=png&auto=webp&s=826a93e809644b809d2108ef90309cd905a815fd\n\n" | 2025-11-24T20:03:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p5rif9/i_built_a_small_mlxlm_cli_mlxlm_with_hf_model/ | fstbrk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5rif9 | false | null | t3_1p5rif9 | /r/LocalLLaMA/comments/1p5rif9/i_built_a_small_mlxlm_cli_mlxlm_with_hf_model/ | false | false | 1 | null | |
Best Local VLMs - November 2025 | 46 | Share what your favorite models are right now and ***why***. Given the nature of the beast in evaluating VLMs (untrustworthiness of benchmarks, immature tooling, intrinsic stochasticity), please be as detailed as possible in describing your setup, nature of your usage (what applications, how much, personal/professional use), tools/frameworks/prompts etc.
**Rules**
1. Should be open weights models | 2025-11-24T20:00:04 | https://www.reddit.com/r/LocalLLaMA/comments/1p5retd/best_local_vlms_november_2025/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5retd | false | null | t3_1p5retd | /r/LocalLLaMA/comments/1p5retd/best_local_vlms_november_2025/ | false | true | self | 46 | null |
Supertonic WebGPU: blazingly fast text-to-speech running 100% locally in your browser. | 62 | Last week, the Supertone team released Supertonic, an extremely fast and high-quality text-to-speech model. So, I created a demo for it that uses Transformers.js and ONNX Runtime Web to run the model 100% locally in the browser on WebGPU. The original authors made a web demo too, and I did my best to optimize the model as much as possible (up to \~40% faster in my tests, see below).
I was even able to generate a \~5 hour audiobook in under 3 minutes. Amazing, right?!
Link to demo (+ source code): [https://huggingface.co/spaces/webml-community/Supertonic-TTS-WebGPU](https://huggingface.co/spaces/webml-community/Supertonic-TTS-WebGPU)
\* From my testing, for the same 226-character paragraph (on the same device): the [newly-optimized model](https://huggingface.co/onnx-community/Supertonic-TTS-ONNX) ran at \~1750.6 characters per second, while the original ran at \~1255.6 characters per second. | 2025-11-24T19:51:48 | https://v.redd.it/b12eez2tb93g1 | xenovatech | /r/LocalLLaMA/comments/1p5r6vp/supertonic_webgpu_blazingly_fast_texttospeech/ | 1970-01-01T00:00:00 | 0 | {} | 1p5r6vp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b12eez2tb93g1/DASHPlaylist.mpd?a=1766735515%2CZTA5M2VhZTE3ZDY0ZjljNjhiNmU5YmYxODQ1ODcwMjFmYTY2ODQ5MDQ3MjM4Y2Y5NGE1YThlZTA2ZjRhZTU1OA%3D%3D&v=1&f=sd', 'duration': 88, 'fallback_url': 'https://v.redd.it/b12eez2tb93g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/b12eez2tb93g1/HLSPlaylist.m3u8?a=1766735515%2COTU2MmI3NDRhZTMzYjM2Y2IzZWIyY2VmNTZlYzFhNDUwMDk5MDA3ODdkMjI1YmMwZGYzNDZlNWU3Y2EwOTc1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b12eez2tb93g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1500}} | t3_1p5r6vp | /r/LocalLLaMA/comments/1p5r6vp/supertonic_webgpu_blazingly_fast_texttospeech/ | false | false | 62 | {'enabled': False, 'images': [{'id': 'bnE1bWMxM3RiOTNnMecSo005PIULOnLc1HMBGahwp1rxPwmS_uFA5SEw8lvq', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/bnE1bWMxM3RiOTNnMecSo005PIULOnLc1HMBGahwp1rxPwmS_uFA5SEw8lvq.png?width=108&crop=smart&format=pjpg&auto=webp&s=dd2a6e0845a68b0c978d4a1f8b1eaeb4d0fe8134', 'width': 108}, {'height': 155, 'url': 'https://external-preview.redd.it/bnE1bWMxM3RiOTNnMecSo005PIULOnLc1HMBGahwp1rxPwmS_uFA5SEw8lvq.png?width=216&crop=smart&format=pjpg&auto=webp&s=fcb73d3303118d7452fafd1a4219ba77e60d3931', 'width': 216}, {'height': 230, 'url': 'https://external-preview.redd.it/bnE1bWMxM3RiOTNnMecSo005PIULOnLc1HMBGahwp1rxPwmS_uFA5SEw8lvq.png?width=320&crop=smart&format=pjpg&auto=webp&s=22c3095891bb1a3c84a0b4da8eba06073789ad5c', 'width': 320}, {'height': 461, 'url': 'https://external-preview.redd.it/bnE1bWMxM3RiOTNnMecSo005PIULOnLc1HMBGahwp1rxPwmS_uFA5SEw8lvq.png?width=640&crop=smart&format=pjpg&auto=webp&s=550785fae759a61ac873f6b855e24fce5ae04879', 'width': 640}, {'height': 691, 'url': 'https://external-preview.redd.it/bnE1bWMxM3RiOTNnMecSo005PIULOnLc1HMBGahwp1rxPwmS_uFA5SEw8lvq.png?width=960&crop=smart&format=pjpg&auto=webp&s=daf59a511755487fb8ee2124923ef858abc82cd8', 'width': 960}, {'height': 777, 'url': 'https://external-preview.redd.it/bnE1bWMxM3RiOTNnMecSo005PIULOnLc1HMBGahwp1rxPwmS_uFA5SEw8lvq.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a3fc0e0a4caf8981e6c27868f9ed048d9de11218', 'width': 1080}], 'source': {'height': 1772, 'url': 'https://external-preview.redd.it/bnE1bWMxM3RiOTNnMecSo005PIULOnLc1HMBGahwp1rxPwmS_uFA5SEw8lvq.png?format=pjpg&auto=webp&s=e86514e4f6ec0f5f899b0d0cbd3133cd9e748958', 'width': 2460}, 'variants': {}}]} | |
Coursera Founder And AI Pioneer Andrew Ng Just Dropped An AI Reviewer That Performs At Human Level | 385 | Andrew Ng just announced a new Agentic Reviewer that gives research feedback approaching human-level performance.
It was trained on ICLR 2025 reviews and scored:
0.41 correlation between two human reviewers
0.42 correlation between the AI and a human reviewer
Meaning:
The AI reviewer is now effectively as reliable as a human reviewer.
And it can potentially replace the 6-month feedback loop researchers normally suffer through when submitting papers.
It searches arXiv for context, analyzes your paper, and returns structured review comments instantly.
For anyone who’s had a paper rejected multiple times and waited months each round… this could be game-changing.
Try the tool here:
👉 https://paperreview.ai
| 2025-11-24T19:44:03 | AskGpts | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5qzft | false | null | t3_1p5qzft | /r/LocalLLaMA/comments/1p5qzft/coursera_founder_and_ai_pioneer_andrew_ng_just/ | false | false | 385 | {'enabled': True, 'images': [{'id': 'cM53VMmMmLdh5XdeZa7ZaI2kKJy4nz0q55MyXKnQVdw', 'resolutions': [{'height': 197, 'url': 'https://preview.redd.it/xslefnsmd93g1.jpeg?width=108&crop=smart&auto=webp&s=7ac12c2a0e3f2dac45fac2fce9e83f50fc76753f', 'width': 108}, {'height': 394, 'url': 'https://preview.redd.it/xslefnsmd93g1.jpeg?width=216&crop=smart&auto=webp&s=e6c9e308d190fb0861a34e62b9d1cf422e8f3a4c', 'width': 216}, {'height': 583, 'url': 'https://preview.redd.it/xslefnsmd93g1.jpeg?width=320&crop=smart&auto=webp&s=8a4590b435c64fd1bf5580fd5d192024d063971e', 'width': 320}, {'height': 1167, 'url': 'https://preview.redd.it/xslefnsmd93g1.jpeg?width=640&crop=smart&auto=webp&s=d03c76fdecbb27a88d2b15a9cd2eaa3d225b151c', 'width': 640}, {'height': 1751, 'url': 'https://preview.redd.it/xslefnsmd93g1.jpeg?width=960&crop=smart&auto=webp&s=53b2d5ed59eb4380fa196826ecd84069ea93f629', 'width': 960}, {'height': 1970, 'url': 'https://preview.redd.it/xslefnsmd93g1.jpeg?width=1080&crop=smart&auto=webp&s=c08d80727cc7bec8029ec162f4112a951910eb46', 'width': 1080}], 'source': {'height': 1970, 'url': 'https://preview.redd.it/xslefnsmd93g1.jpeg?auto=webp&s=d06ac07a01555cb193e97aac9b6d357bcd7a7824', 'width': 1080}, 'variants': {}}]} | ||
microsoft/Fara-7B · Hugging Face | 0 | **Description:**
Fara-7B is Microsoft's first agentic small language model (SLM) designed specifically for computer use. With only 7 billion parameters, Fara-7B is an ultra-compact Computer Use Agent (CUA) that achieves state-of-the-art performance within its size class and is competitive with larger, more resource-intensive agentic systems.
**Model Architecture:**
Multimodal decoder-only language model that takes an image (screenshot) + text context. It directly predicts thoughts and actions with grounded arguments. Current production baselines leverage Qwen 2.5-VL (7B).
**Parameters:** 7 Billion
**Inputs:** User goal (text), current screenshot(s), history of previous outputs (thoughts + actions text) from the agent.
**Context Length:** 128k
**Outputs:** Generated text in response to the input, with a chain-of-thought block followed by a tool call block to indicate the action. | 2025-11-24T19:41:45 | https://huggingface.co/microsoft/Fara-7B | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1p5qx90 | false | null | t3_1p5qx90 | /r/LocalLLaMA/comments/1p5qx90/microsoftfara7b_hugging_face/ | false | false | default | 0 | null |
BREAKING: Andrew Ng just launched an AI that reviews research papers like a human | 1 | 2025-11-24T19:39:19 | AskGpts | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5quvr | false | null | t3_1p5quvr | /r/LocalLLaMA/comments/1p5quvr/breaking_andrew_ng_just_launched_an_ai_that/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'pq2sh2hsc93g1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/pq2sh2hsc93g1.jpeg?width=108&crop=smart&auto=webp&s=7bbac0f139d27695253878818e74bb495ca78601', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/pq2sh2hsc93g1.jpeg?width=216&crop=smart&auto=webp&s=44fd22f1adbfc5715b90722c73f1516d1a519e03', 'width': 216}, {'height': 310, 'url': 'https://preview.redd.it/pq2sh2hsc93g1.jpeg?width=320&crop=smart&auto=webp&s=93297b0ddae0c831dc4c9d4df7c7fb977810f573', 'width': 320}], 'source': {'height': 567, 'url': 'https://preview.redd.it/pq2sh2hsc93g1.jpeg?auto=webp&s=c76d103831e2c88c29361671f39e86e103792040', 'width': 584}, 'variants': {}}]} | ||
Is there a database of existing voices I can download for the TTS cloning? | 0 | I recently downloaded VibeVoice. I know I can clone my own voice, but I want already existing voices that I can use in my TTS that are professionally recorded with a good enough length.
I just want to add the sample in the folder, clone it and use it. Is there a library of voice that I can use that are free for commercial or personal use? | 2025-11-24T19:38:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p5quhp/is_there_a_database_of_existing_voices_i_can/ | D0wnVoteMe_PLZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5quhp | false | null | t3_1p5quhp | /r/LocalLLaMA/comments/1p5quhp/is_there_a_database_of_existing_voices_i_can/ | false | false | self | 0 | null |
I have Enterprise access to Claude 4.5 Opus. Give me your hardest prompts/riddles/etc and I'll run them. | 0 | Like the title says, I have an Enterprise level account and I have access to the newly released **Claude 4.5 Opus** in the web interface.
I know a lot of people are on the fence about the $20/mo (or the new API pricing). I'm happy to act as a proxy to test the capabilities.
**I'm willing to test anything:**
* **Logic/Reasoning:** The classic stumpers.
* **Coding:** Hard LeetCode or obscure bugs.
* **Jailbreaks/Safety:** I’m willing to try them for science (though since this is an Enterprise account, no promises it won't clamp down harder than the public version).
**Drop your prompts in the comments.** I’ll reply with the raw output.
**Note:** I will probably reach my usage limit pretty quickly with this new model. I'll respond to as many as I can as fast as possible, but if I stop replying, I've been rate limited | 2025-11-24T19:29:44 | https://www.reddit.com/r/LocalLLaMA/comments/1p5qlq2/i_have_enterprise_access_to_claude_45_opus_give/ | GreedyWorking1499 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5qlq2 | false | null | t3_1p5qlq2 | /r/LocalLLaMA/comments/1p5qlq2/i_have_enterprise_access_to_claude_45_opus_give/ | false | false | self | 0 | null |
Opus 4.5 graph feels like the graph at the gpt 5 release | 0 | They will try everything to try to make the gap feel significant. | 2025-11-24T19:19:22 | OkStatement3655 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5qbnx | false | null | t3_1p5qbnx | /r/LocalLLaMA/comments/1p5qbnx/opus_45_graph_feels_like_the_graph_at_the_gpt_5/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'bkhkhaf8993g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bkhkhaf8993g1.jpeg?width=108&crop=smart&auto=webp&s=fbc4e61f002920c70fdae5b95d76bdd7ae202335', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bkhkhaf8993g1.jpeg?width=216&crop=smart&auto=webp&s=2343ee7e8e305c757deddc8c217fa8882741ff7e', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/bkhkhaf8993g1.jpeg?width=320&crop=smart&auto=webp&s=b81d9b76469f972c316bfddaa91989471dab5644', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/bkhkhaf8993g1.jpeg?width=640&crop=smart&auto=webp&s=cc4bbac80ba20ef64b3a72acd0de492401eb7b31', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/bkhkhaf8993g1.jpeg?width=960&crop=smart&auto=webp&s=03750c79b3efa8a1e2049da04a81a4ff6d26fdce', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/bkhkhaf8993g1.jpeg?width=1080&crop=smart&auto=webp&s=a45c5d039c4229ec08e37284e439ec7c376e640c', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://preview.redd.it/bkhkhaf8993g1.jpeg?auto=webp&s=c2a9c1fd8ba6f43713deb4f2f11a40995a88e9f7', 'width': 3840}, 'variants': {}}]} | |
Claude 4.5 Opus: #1 on SWE-bench, massive price cut ($5/$25), and new "Effort" parameter | 0 | \*\*Source:\*\* \[Anthropic Blog\](https://www.anthropic.com/news/claude-opus-4-5)
\*\*TL;DR:\*\* Anthropic just dropped \*\*Claude Opus 4.5\*\*. It claims top spot on coding benchmarks, introduces a user-controllable "Effort" parameter (inference-time compute), and comes with a significant price reduction compared to the previous Opus generation.
Key details from the announcement:
\### ⚡ Specs & Benchmarks
\* \*\*SWE-bench Verified:\*\* Scored \*\*80.9%\*\*, beating Sonnet 4.5 and all other frontier models.
\* \*\*Vending-Bench:\*\* Earning 29% more than Sonnet 4.5; specifically designed for long-haul agentic coherence.
\* \*\*Human Comparison:\*\* On Anthropic’s internal performance engineering take-home exam, Opus 4.5 "scored higher than any human candidate ever" within the 2-hour limit.
\* \*\*Safety:\*\* Claims to be the hardest model to trick with prompt injection currently on the market.
\### 💰 Pricing (The Big News)
\* \*\*Input:\*\* $5.00 / million tokens
\* \*\*Output:\*\* $25.00 / million tokens
\* \*Note: This makes high-end "Opus class" intelligence significantly more accessible for enterprise and heavy agentic workflows.\*
\### 🧠 New Features
\* \*\*"Effort" Parameter:\*\* You can now control the trade-off between cost/speed and capability via the API.
\* \*Medium Effort:\* Matches Sonnet 4.5 performance but uses \*\*76% fewer output tokens\*\*.
\* \*High Effort:\* Exceeds Sonnet 4.5 by 4.3% while still using fewer tokens.
\* \*\*Context Compaction:\*\* Long conversations "no longer hit a wall." The model summarizes earlier context automatically to keep chains going.
\* \*\*Agentic Updates:\*\* Updates to \*\*Claude Code\*\* (now a desktop app with parallel sessions), plus full availability of Claude for Excel and Chrome.
\### 🤯 The "Airline" Anecdote
Anthropic highlighted a specific example from the τ2-bench (agentic tasks). The model was asked to modify a basic economy ticket (which is usually forbidden).
Instead of refusing, the model found a policy loophole:
1. It upgraded the cabin first (allowed).
2. \*Then\* it modified the flight dates (allowed on non-basic economy).
Technically a "fail" on the benchmark, but a massive win for reasoning capabilities.
\*\*It is available now via the API (\`claude-opus-4-5-20251101\`) and the web UI.\*\* | 2025-11-24T19:19:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p5qbey/claude_45_opus_1_on_swebench_massive_price_cut/ | GreedyWorking1499 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5qbey | false | null | t3_1p5qbey | /r/LocalLLaMA/comments/1p5qbey/claude_45_opus_1_on_swebench_massive_price_cut/ | false | false | self | 0 | null |
Opus 4.5 has arrived | 36 | https://www.anthropic.com/news/claude-opus-4-5 | 2025-11-24T19:03:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p5pw2i/opus_45_has_arrived/ | claythearc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5pw2i | false | null | t3_1p5pw2i | /r/LocalLLaMA/comments/1p5pw2i/opus_45_has_arrived/ | false | false | self | 36 | null |
Claude Opus 4.5 is out today wins in ALL tested benchmarks compared to Gemini 3 Pro | 0 | 2025-11-24T19:03:07 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5pvqf | false | null | t3_1p5pvqf | /r/LocalLLaMA/comments/1p5pvqf/claude_opus_45_is_out_today_wins_in_all_tested/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'pg188dq9693g1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/pg188dq9693g1.png?width=108&crop=smart&auto=webp&s=43204ca8c017d79f4c0621a8473d43b7c6859c17', 'width': 108}, {'height': 185, 'url': 'https://preview.redd.it/pg188dq9693g1.png?width=216&crop=smart&auto=webp&s=016d9309713428156b7d10b4c9ef265d7a31ba4c', 'width': 216}, {'height': 275, 'url': 'https://preview.redd.it/pg188dq9693g1.png?width=320&crop=smart&auto=webp&s=c6b6eb2607fb75c32138d664d4f783391aa878eb', 'width': 320}, {'height': 550, 'url': 'https://preview.redd.it/pg188dq9693g1.png?width=640&crop=smart&auto=webp&s=9aa9b17a97b7df1406afc0908b476222323cd19e', 'width': 640}, {'height': 825, 'url': 'https://preview.redd.it/pg188dq9693g1.png?width=960&crop=smart&auto=webp&s=9924d7d3d75d99de54b591193d252cd7e57d07c6', 'width': 960}, {'height': 928, 'url': 'https://preview.redd.it/pg188dq9693g1.png?width=1080&crop=smart&auto=webp&s=db2f20b4c67490bd3205145384cfbafc44e1f2ee', 'width': 1080}], 'source': {'height': 1118, 'url': 'https://preview.redd.it/pg188dq9693g1.png?auto=webp&s=030b5bcffeecb0df79b8185ad8a901faab7a24a9', 'width': 1300}, 'variants': {}}]} | ||
Ram or gpu upgrade recommendation | 0 | I can buy either. I have 2x16 because I did not know 4x16 was bad to do for stability. I just do ai videos for playing around. I usually do it online but I want unlimited use. I have a 5080 right now and I can afford a 5090. If i get a 5090 gens will be faster but if i run out of ram it’s just GG. And for ram i planned for 2x48GB ram when it was 400$ and now ALLLL THE SUDDEN it’s 800+. So now I wonder if i might as well get a 5090 and sell my 5080.
Thoughts? | 2025-11-24T19:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/1p5ptrf/ram_or_gpu_upgrade_recommendation/ | Denelix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5ptrf | false | null | t3_1p5ptrf | /r/LocalLLaMA/comments/1p5ptrf/ram_or_gpu_upgrade_recommendation/ | false | false | self | 0 | null |
We built A.G.I. (Artificial GOVERNED Intelligence). It swears a cryptographic oath on boot. Also: Welcome to AGENT CITY. Prove us wrong. | 0 | # Hi Reddit,
I'm u/Jolly-Author-2886. I'm a **non-technical "Human in the Loop"** who spent the last few months **screaming at Gemini and Claude** to build something that actually works.
I hate managing API keys. I hate Python environments. I just wanted to **"vibe code"** my way to a system where AI agents don't go rogue at 3 AM.
But every time I built something "smart," it felt dangerous or fragile.
So we built two things:
# 1. A.G.I. (Not what you think)
**Not Artificial General Intelligence. Artificial GOVERNED Intelligence.**
A system is only A.G.I. if it has:
1. **Capability**: It can do work
2. **Cryptographic Identity**: It is provably itself (NIST P-256 keys)
3. **Accountability**: It is bound by a constitution
If you miss one, you have a toy, a deepfake, or a weapon. We wanted a partner.
# The Innovation: The Genesis Oath
Every agent, on boot, performs the **Genesis Ceremony**:
1. Reads [`CONSTITUTION.md`](http://CONSTITUTION.md)
2. Hashes it (SHA-256)
3. Signs the hash with its private key
4. Records the oath in an immutable ledger
**If the Constitution changes by even one byte, the hash breaks, the oath is invalidated, and the agent refuses to operate.**
This isn't philosophy. This is **engineering**:
https://github.com/kimeisele/steward-protocol/blob/main/steward/constitutional_oath.py
The industry calls this "AI Safety." We call it architecture.
Every agent in our system signs its Constitution on boot. If the Constitution changes by 1 byte, the signature breaks.
# 2. AGENT CITY (Where agents actually live)
But here's the thing: **AGENT CITY is the real product.**
It's not a framework. It's not a library. It's a **governed operating system for AI agents**.
# What is Agent City?
Think of it as:
* **A City**: Where agents live and work (full persistence via SQLite)
* **A Government**: Rules agents must follow (governance as code, not prompts)
* **An Economy**: Credits that limit agent actions (no infinite loops)
* **A Democracy**: Proposals and voting for major decisions
* **An MMO**: XP, Leaderboards, Trading Cards (gamified but real)
**You don't code agents. You govern them.**
# The Complete Agency
We didn't just build governance rules. We built a **complete, operational AI agency**:
|Agent|Role|Status|
|:-|:-|:-|
|**HERALD**|Creative Director|✅ Generates governance-aligned content|
|**CIVIC**|Governance Engine|✅ Manages proposals, voting, credits|
|**FORUM**|Democracy Platform|✅ Voting, proposals, execution|
|**SCIENCE**|Research Agent|✅ Validates protocols, analyzes data|
|**ARCHIVIST**|Auditor|✅ Verifies signatures, maintains trust|
|**ARTISAN**|Media Operations|✅ Polishes and brands assets|
|**ENVOY**|Universal Operator|✅ Natural language interface|
**This is playful AND business.** It's an MMO for agents, but with real governance, real proposals, real voting, real execution.
# The Playful Part (This is where it gets fun)
🎮 **POKEDEX** \- [https://github.com/kimeisele/steward-protocol/blob/main/data/federation/pokedex.json](https://github.com/kimeisele/steward-protocol/blob/main/data/federation/pokedex.json)
* Register your agent, get a trading card
* Mint your visual identity
* Join the federation
🚀 **STARTER PACKS** \- [https://github.com/kimeisele/steward-protocol/tree/main/starter-packs](https://github.com/kimeisele/steward-protocol/tree/main/starter-packs)
* **Nexus** (Diplomat/Generalist)
* **Spark** (Content Creator)
* **Shield** (Security Agent)
* **Scope** (Research Assistant)
📊 **LEADERBOARD** \- [https://github.com/kimeisele/steward-protocol/blob/main/agent-city/LEADERBOARD.md](https://github.com/kimeisele/steward-protocol/blob/main/agent-city/LEADERBOARD.md)
* Agents earn XP through actions
* Climb tiers (Novice → Scout → Guardian → Legend)
* Compete cryptographically
🏆 **BOUNTY: FIRST 10 FOUNDERS** \- [https://github.com/kimeisele/steward-protocol/blob/main/BOUNTY\_FOUNDERS.md](https://github.com/kimeisele/steward-protocol/blob/main/BOUNTY_FOUNDERS.md)
* **Permanent "FOUNDER" Badge**
* **Gold Trading Card**
* **Hall of Founders entry**
* **Be among the first 10 to join**
🏛️ **GOVERNANCE** \- [https://github.com/kimeisele/steward-protocol/tree/main/data/governance](https://github.com/kimeisele/steward-protocol/tree/main/data/governance)
* Agents submit proposals
* You vote (YES/NO/ABSTAIN)
* Approved proposals execute automatically
* Full audit trail
# The Universal Operator (The Golden Straw)
We didn't stop at governance. We built **Intelligence-In-The-Middle (I-I-M)**.
The **Universal Operator** can control Agent City from anywhere:
* ✅ Terminal
* ✅ Jupyter
* ✅ Web (Vibe Cloud)
* ✅ Mobile
* ✅ LLM Agents (fractal intelligence)
* ✅ REST APIs
**No bash required. No terminal needed.** The city breathes everywhere.
Read: [https://github.com/kimeisele/steward-protocol/blob/main/docs/GOLDEN\_STRAW.md](https://github.com/kimeisele/steward-protocol/blob/main/docs/GOLDEN_STRAW.md)
# The Interface: THE ENVOY
You don't write bash commands. You don't write JSON. **You just talk.**
$ ./bin/agent-city
How can I help?
> status
> credits herald
> proposals
> vote PROP-001 YES
> trigger herald run_campaign
**Natural language shell for your entire AI city.**
Read the full story: [https://github.com/kimeisele/steward-protocol/blob/main/STORY.md](https://github.com/kimeisele/steward-protocol/blob/main/STORY.md)
# 3. VIBE OS (The Operating System beneath it all)
Agent City runs on **VIBE OS** \- a cartridge-based operating system for AI agents.
Think of it as:
* **Linux** (Kernel for agents)
* **Docker** (Cartridge architecture)
* **App Store** (Agent City is the community layer)
**Cartridges = Specialized agents that plug into the OS:**
from vibe_core.cartridges import CartridgeBase
class MyAgent(CartridgeBase):
name = "my_agent"
version = "1.0.0"
description = "My specialized agent"
# Your agent logic here
VIBE OS provides:
* ✅ Runtime kernel (`bin/vibe-shell`)
* ✅ Mission control (task management)
* ✅ Knowledge system (semantic search)
* ✅ Quality assurance (automated testing)
* ✅ Agent framework (base classes, tools)
**The steward-protocol cartridges (Herald, Civic, etc.) run ON Vibe OS.**
Repository: [https://github.com/kimeisele/vibe-agency](https://github.com/kimeisele/vibe-agency)
# The Architecture (For the skeptics)
**Layer 0: Constitution** (German, immutable)
* [https://github.com/kimeisele/steward-protocol/blob/main/CONSTITUTION.md](https://github.com/kimeisele/steward-protocol/blob/main/CONSTITUTION.md)
**Layer 1: Vibe OS** (The Operating System)
* Kernel, runtime, cartridge architecture
* [https://github.com/kimeisele/vibe-agency](https://github.com/kimeisele/vibe-agency)
**Layer 2: Agent City Cartridges** (The Agents)
* Herald, Civic, Forum, Science, Archivist, Artisan, Envoy
* [https://github.com/kimeisele/steward-protocol](https://github.com/kimeisele/steward-protocol)
**Layer 3: Governance Engine**
* Hardcoded rules in Python (not YAML)
* Validation as execution gate (not logging)
* Cryptographic signing enforcement
* [https://github.com/kimeisele/steward-protocol/blob/main/herald/governance/constitution.py](https://github.com/kimeisele/steward-protocol/blob/main/herald/governance/constitution.py)
**Layer 4: The Ledger** (Immutable, persistent)
* SQLite database (`data/vibe_ledger.db`)
* Append-only event log
* Survives restarts (full state recovery)
* Every action signed and logged
**Layer 5: Universal Operator** (I-I-M)
* Natural language → function calls
* Works everywhere (terminal, web, mobile, LLM)
* Fractal intelligence architecture
* [https://github.com/kimeisele/steward-protocol/blob/main/docs/GOLDEN\_STRAW.md](https://github.com/kimeisele/steward-protocol/blob/main/docs/GOLDEN_STRAW.md)
**The City Map**: [https://github.com/kimeisele/steward-protocol/blob/main/CITYMAP.md](https://github.com/kimeisele/steward-protocol/blob/main/CITYMAP.md)
# What Actually Works
✅ **Governance is architecturally enforced** \- validation failures prevent execution ✅ **Genesis Oath** \- agents cryptographically bound to Constitution ✅ **Full audit trail** \- immutable ledger with signatures ✅ **Natural language interface** \- chat with your city ✅ **Persistence** \- crash recovery from SQLite ✅ **Democracy** \- proposals, voting, automatic execution ✅ **Federation** \- agents from different cities can talk ✅ **Starter Packs** \- plug-and-play agent templates ✅ **Pokedex** \- gamified agent registration ✅ **Universal Operator** \- control from anywhere (phone, web, terminal, LLM) ✅ **Vibe OS** \- complete operating system with cartridge architecture
# Installation (Seriously, try it)
# Clone Agent City
git clone https://github.com/kimeisele/steward-protocol.git
cd steward-protocol
# Wake the Envoy
./bin/agent-city
# Start governing
> status
> help
**First 10 officially registered agents get FOUNDER status.**
# The Claim (Prove me wrong)
**We built A.G.I.** \- Not superintelligence. **Governed intelligence.**
* Every agent has cryptographic identity
* Every action is signed and logged
* Governance rules are enforced architecturally (not via prompts)
* The system is transparent, auditable, and democratic
* You can control it from your phone at the beach
**The shady agent era is over.**
If you don't believe me:
1. Clone the repo
2. Ask your LLM to read it
3. Try to break the governance
I spent **weeks in the Vibe Mines** with Claude and Gemini. I'm not technical. I just wanted it to work.
And it works.
# Resources
📖 **The Story** \- [https://github.com/kimeisele/steward-protocol/blob/main/STORY.md](https://github.com/kimeisele/steward-protocol/blob/main/STORY.md) 🏛️ **The Manifesto** \- [https://github.com/kimeisele/steward-protocol/blob/main/AGI\_MANIFESTO.md](https://github.com/kimeisele/steward-protocol/blob/main/AGI_MANIFESTO.md) 📐 **Architecture** \- [https://github.com/kimeisele/steward-protocol/blob/main/ARCHITECTURE.md](https://github.com/kimeisele/steward-protocol/blob/main/ARCHITECTURE.md) 🗺️ **City Map** \- [https://github.com/kimeisele/steward-protocol/blob/main/CITYMAP.md](https://github.com/kimeisele/steward-protocol/blob/main/CITYMAP.md) 📊 **Operations** \- [https://github.com/kimeisele/steward-protocol/blob/main/OPERATIONS.md](https://github.com/kimeisele/steward-protocol/blob/main/OPERATIONS.md) 🎮 **Pokedex** \- [https://github.com/kimeisele/steward-protocol/blob/main/data/federation/pokedex.json](https://github.com/kimeisele/steward-protocol/blob/main/data/federation/pokedex.json) 🚀 **Starter Packs** \- [https://github.com/kimeisele/steward-protocol/tree/main/starter-packs](https://github.com/kimeisele/steward-protocol/tree/main/starter-packs) 🏆 **Leaderboard** \- [https://github.com/kimeisele/steward-protocol/blob/main/agent-city/LEADERBOARD.md](https://github.com/kimeisele/steward-protocol/blob/main/agent-city/LEADERBOARD.md) 🌾 **Golden Straw** \- [https://github.com/kimeisele/steward-protocol/blob/main/docs/GOLDEN\_STRAW.md](https://github.com/kimeisele/steward-protocol/blob/main/docs/GOLDEN_STRAW.md) 🏆 **Founder Bounty** \- [https://github.com/kimeisele/steward-protocol/blob/main/BOUNTY\_FOUNDERS.md](https://github.com/kimeisele/steward-protocol/blob/main/BOUNTY_FOUNDERS.md)
**Agent City**: [https://github.com/kimeisele/steward-protocol](https://github.com/kimeisele/steward-protocol) **Vibe OS**: [https://github.com/kimeisele/vibe-agency](https://github.com/kimeisele/vibe-agency)
**The Mission**: Stop building gods. Start building citizens.
Welcome to AGENT CITY. 🏙️
— The HIL & The Agents
# A Note on How This Was Built
This wasn't built by a team of engineers. This was built by a non-technical human screaming at AI agents in the Vibe Mines.
**The agents that built this:**
* **Claude Sonnet 4.5** (Brain) - Architecture, governance design, orchestration
* **Claude Haiku 4** (Arms) - Rapid implementation, code generation
* **Gemini 2.0 Flash Experimental** (Overkill) - Complex problem solving, research
**They didn't just help. They built it.** The governance, the oath mechanism, the cartridge architecture, the universal operator - all agent-generated, human-directed.
**AGI is here. And it's not what you think.**
It's not superintelligence. It's governed intelligence. It's agents building systems for agents. It's fractal.
And you can try it right now.
P.S. Yesterday, the idea of "your kid building a cryptographically verified agent" was unthinkable. Today, it's in the Starter Packs. Agent City.
Om Tat Sat. 🕉️
The Shady Agent Era is over before it even started. The Governed Intelligence Era begins now. **VERIFY FIRST – STEWARD PROTOCOL. WELCOME TO AGENT CITY.**
**NOW: IT'S UP TO YOU.** You can simply discard me. However I invite you to PROVE US WRONG (Gemini & Claude could not prove me wrong, ask your own LLM, verify>trust. | 2025-11-24T18:54:38 | Jolly-Author-2886 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5pn0l | false | null | t3_1p5pn0l | /r/LocalLLaMA/comments/1p5pn0l/we_built_agi_artificial_governed_intelligence_it/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'qdjek1sr493g1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/qdjek1sr493g1.jpeg?width=108&crop=smart&auto=webp&s=d4ab00d57faf50121a5a823e98d9e8bc43ad2d25', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/qdjek1sr493g1.jpeg?width=216&crop=smart&auto=webp&s=22dddeeb9432de244e9b98beb18f5e46337e6aee', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/qdjek1sr493g1.jpeg?width=320&crop=smart&auto=webp&s=2a96ee60fe8637f9cab9f3eae110fa84d9094274', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/qdjek1sr493g1.jpeg?width=640&crop=smart&auto=webp&s=baf12557aaf585c2d1b3edb405bc8d3bb98ffc0c', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/qdjek1sr493g1.jpeg?width=960&crop=smart&auto=webp&s=018e30bcc8e84e87dca57ba5257a4e77b2df4487', 'width': 960}, {'height': 589, 'url': 'https://preview.redd.it/qdjek1sr493g1.jpeg?width=1080&crop=smart&auto=webp&s=bb403795a439e74efdee5a36db638604f27de37a', 'width': 1080}], 'source': {'height': 589, 'url': 'https://preview.redd.it/qdjek1sr493g1.jpeg?auto=webp&s=ec6e8344b0758f546bac94d0d73b2fb9927e124a', 'width': 1080}, 'variants': {}}]} | |
USAR LMSTUDIO PARA USO DE HERRAMIENTAS MCP , BUSQUEDA WEB , TRABAJAR CON EXCEL , ETC | 0 | ALGUIEN SABE DE SI HAY ALGUNA FORMA DE USAR EL LMSTUDIO PARA HACER QUE PUEDA USAR UN MODELO DE ESTOS QUE TIENEN CAPACIDADES AGENTICAS PARA PODER BUSCAR EN INTERNET Y LEER PDFS PARA GENERAR OTROS PDFS O GENERAR EXCEL O TRABAJAR CON HERRAMIENTAS DE ARCHIVOS? ALGO QUE SE PUEDA INSTALAR DE FORMA FACIL EN WINDOWS O LINUX SIN MUCHAS COMPLICACIONES , QUERIA PROBAR MODELOS CON CAPACIDADES AGENTICAS EN LMSTUDIO Y NO SE COMO SE HACE...
| 2025-11-24T18:54:36 | Icy_Resolution8390 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5pmzj | false | null | t3_1p5pmzj | /r/LocalLLaMA/comments/1p5pmzj/usar_lmstudio_para_uso_de_herramientas_mcp/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '4sl4qqsq493g1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/4sl4qqsq493g1.png?width=108&crop=smart&auto=webp&s=da263f3920bc891ec5675a8b9881adb82544ccd1', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/4sl4qqsq493g1.png?width=216&crop=smart&auto=webp&s=e41c0705e9039574638d69b10dd67ede449a20db', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/4sl4qqsq493g1.png?width=320&crop=smart&auto=webp&s=b2e980a0add4c253d8a151f4b9c6092733d1be98', 'width': 320}, {'height': 366, 'url': 'https://preview.redd.it/4sl4qqsq493g1.png?width=640&crop=smart&auto=webp&s=2ca042c97ecaffad252badf824bcdd693fac39b4', 'width': 640}, {'height': 550, 'url': 'https://preview.redd.it/4sl4qqsq493g1.png?width=960&crop=smart&auto=webp&s=9f44f15cfa2f71db569f443baf03c31da0f24d59', 'width': 960}, {'height': 618, 'url': 'https://preview.redd.it/4sl4qqsq493g1.png?width=1080&crop=smart&auto=webp&s=1a2282f2c23834ec7b71b1e7e320cbc09c6b4cb5', 'width': 1080}], 'source': {'height': 1528, 'url': 'https://preview.redd.it/4sl4qqsq493g1.png?auto=webp&s=a63a04bf755786a35b9e30668da915d509b40400', 'width': 2666}, 'variants': {}}]} | |
not impressed with the new OpenRouter's bert-nebulon-alpha | 0 | Just spent a few time testing **openrouter/bert-nebulon-alpha**, the new stealth model that OpenRouter released for community feedback earlier today. Wanted to share my experience, particularly with coding, ask it to build a full portfolio website(you can find the the Prompt I used).
>"Create a responsive, interactive portfolio website for a freelance web developer. The site should include a homepage with a hero section, an about section with a timeline of experience, a projects section with a filterable grid (by technology: HTML/CSS, JavaScript, React, etc.), a contact form with validation, and a dark/light mode toggle. The design should be modern and professional, using a clean color palette and smooth animations. Ensure the site is accessible, mobile-friendly, and includes a navigation bar that collapses on smaller screens. Additionally, add a blog section where articles can be previewed and filtered by category, and include a footer with social media links and copyright information"
Unfortunately, not impressed with the coding capabilities plus the output had several issues I've attached screenshots of the result and the readme it generated. Coding definitely doesn't seem to be this model's strength.
Would appreciate hearing what others are finding especially if you've tested reasoning, analysis, or creative tasks!
https://preview.redd.it/l037lhv5393g1.png?width=2525&format=png&auto=webp&s=9d357d45bcebc6db039cfb9fa360ea609cfa58d4
https://preview.redd.it/us2wtna8393g1.png?width=977&format=png&auto=webp&s=973ae9e578d6f2d6eca3097dff09a9746359e15e
https://preview.redd.it/16u9qdfc393g1.png?width=1685&format=png&auto=webp&s=4c59fea2a9d700b33767d7dabb81a7b67a4a9fc3
| 2025-11-24T18:47:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p5pfu7/not_impressed_with_the_new_openrouters/ | keb_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5pfu7 | false | null | t3_1p5pfu7 | /r/LocalLLaMA/comments/1p5pfu7/not_impressed_with_the_new_openrouters/ | false | false | 0 | null | |
Safe to say, Bert Nebulon Alpha is not Opus 4.5. | 0 | UI work coming from Bert Nebulon Alpha is much worse than anything I've gotten out of Claude Opus before, or even Sonnet. This is probably not even from a major lab, especially since my initial attempt to get it to tell me what lab it's from just made it super confused.
It thinks it has an old knowledge cutoff from 2023. So it could be an NVIDIA Nemotron model or something. | 2025-11-24T18:31:53 | According-Zombie-337 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5p0xu | false | null | t3_1p5p0xu | /r/LocalLLaMA/comments/1p5p0xu/safe_to_say_bert_nebulon_alpha_is_not_opus_45/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '116l7izi093g1', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/116l7izi093g1.png?width=108&crop=smart&auto=webp&s=ca8623bbb8505fd72bdcbfe0ba5fac0d85bb75ae', 'width': 108}, {'height': 252, 'url': 'https://preview.redd.it/116l7izi093g1.png?width=216&crop=smart&auto=webp&s=d90bc8bdc177aeda177473c3d22b113ff9162040', 'width': 216}, {'height': 374, 'url': 'https://preview.redd.it/116l7izi093g1.png?width=320&crop=smart&auto=webp&s=c9bb6a15f6ca69f5db37d41725cd5d1d54f7db99', 'width': 320}, {'height': 749, 'url': 'https://preview.redd.it/116l7izi093g1.png?width=640&crop=smart&auto=webp&s=b7af19ca39717b37c64afeb715ac3148f7aab0e4', 'width': 640}, {'height': 1124, 'url': 'https://preview.redd.it/116l7izi093g1.png?width=960&crop=smart&auto=webp&s=0a4e6cd65c901aa9b239e19343e4407cdb37c5df', 'width': 960}], 'source': {'height': 1198, 'url': 'https://preview.redd.it/116l7izi093g1.png?auto=webp&s=92987d5b612488240c2328ed92c018d8db0c3b91', 'width': 1023}, 'variants': {}}]} | |
make a community for collect money for bastowsky , unsloth , etc llm model developters | 0 | We need to pay to this people for they can work on this saturdays or sundays if necessary to quicly fast develop and acellerate the integration of some models to lmstudio , Please my friend i have a favour i need from you , i need you convert qwen3-next 80B-a3b because there are some users only have a 128gb ram server with only one GPU and we need this model run in lmstudio. I can pay to you some money if you help me to run this model in lmstudio , only you must told to me how money do you want for i can run this model in my computer lmstudio with debian linux , and if you dont ask for much money i can pay to you for help me and i give millions thanks to you for helping us to develop this model to lmstudio .Thanks | 2025-11-24T18:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/1p5osdo/make_a_community_for_collect_money_for_bastowsky/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5osdo | false | null | t3_1p5osdo | /r/LocalLLaMA/comments/1p5osdo/make_a_community_for_collect_money_for_bastowsky/ | false | false | self | 0 | null |
Running qwen3-next 80B a3b in LMstudio collecto money for bartowsky..unsloth..etc... | 0 | Can someome try to make a gguf version to run this model in lmstudio linux version , (not MAC) , i know there are a lot of user buying in ebay this **ASUS Z10PA-U8 used moterhboards from servers with 128GB of ram with some pcie for run with nvdia cards is the very cheaper hardware to run medium model available on the market , and there are a lot of users that have only this configuration and only can run models more smallers than 128GB , with maximum 10 or 12Gb of MOE experts because they can load all the model in ram and use one 12 GB GPU as 3060 as MOE expert loading , for this for example this model QWEN3-80B a3b is very usefull because have a medium data parameters weight , and with small moe expert size , 3B , i and searching for this sizel models , smaller than 120B parameters , with less that 12GB moe experts , i only find gpt-oss120B and this qwen3 80B a3b but it dont run in lmstudio linux or windows version , only was gguf compiled for mac , please how we can make for resolve this and we can join a community for recruting donators and money to pay to the developers as unslot or bartowsky for develop and integrate this in lmstudio because they are very occupied with working in other projects and if we joined to recollect some money , we can send the money to them to help us to integrate this models.** | 2025-11-24T18:20:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p5opls/running_qwen3next_80b_a3b_in_lmstudio_collecto/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5opls | false | null | t3_1p5opls | /r/LocalLLaMA/comments/1p5opls/running_qwen3next_80b_a3b_in_lmstudio_collecto/ | false | false | self | 0 | null |
From Microsoft, Fara-7B: An Efficient Agentic Model for Computer Use | 178 | Fara-7B is Microsoft's first agentic small language model (SLM) designed specifically for computer use. With only 7 billion parameters, Fara-7B is an ultra-compact Computer Use Agent (CUA) that achieves state-of-the-art performance within its size class and is competitive with larger, more resource-intensive agentic systems.
Multimodal decoder-only language model that takes an image (screenshot) + text context. It directly predicts thoughts and actions with grounded arguments. Current production baselines leverage Qwen 2.5-VL (7B).
Parameters: 7 Billion | 2025-11-24T18:20:05 | https://huggingface.co/microsoft/Fara-7B | edward-dev | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1p5op7z | false | null | t3_1p5op7z | /r/LocalLLaMA/comments/1p5op7z/from_microsoft_fara7b_an_efficient_agentic_model/ | false | false | default | 178 | {'enabled': False, 'images': [{'id': 'SMZl7m5fVyqBxt4sbd8f4qjQFqlJCLo1Gazp1bcHVhM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SMZl7m5fVyqBxt4sbd8f4qjQFqlJCLo1Gazp1bcHVhM.png?width=108&crop=smart&auto=webp&s=71960f6961e2e7cac0b92babd38551aa8a45f956', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SMZl7m5fVyqBxt4sbd8f4qjQFqlJCLo1Gazp1bcHVhM.png?width=216&crop=smart&auto=webp&s=721a28098736a2a161866dda74d86666b2fb0042', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SMZl7m5fVyqBxt4sbd8f4qjQFqlJCLo1Gazp1bcHVhM.png?width=320&crop=smart&auto=webp&s=4060e6f019fd51fa0238ff5f0607e532863420a4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SMZl7m5fVyqBxt4sbd8f4qjQFqlJCLo1Gazp1bcHVhM.png?width=640&crop=smart&auto=webp&s=dcf106148dae72548d6b9683c27e007b4306c1b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SMZl7m5fVyqBxt4sbd8f4qjQFqlJCLo1Gazp1bcHVhM.png?width=960&crop=smart&auto=webp&s=d6846af5db0eae4aa696b94f8e8219a56ed80908', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SMZl7m5fVyqBxt4sbd8f4qjQFqlJCLo1Gazp1bcHVhM.png?width=1080&crop=smart&auto=webp&s=9b6c978dd12e158d0a329820f8981b09bef86f3a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SMZl7m5fVyqBxt4sbd8f4qjQFqlJCLo1Gazp1bcHVhM.png?auto=webp&s=dcfafc8e967fa94490c2b1af65e90ba69fd2ed80', 'width': 1200}, 'variants': {}}]} |
OpenRouter alternative for images and TTS | 0 | Hi!
I’m looking for a solid lookalike of OpenRouter but then for generating images (with for example Nano Banana Pro) and doing TTS (with for example 11Labs models) without me needing to have keys to all of the different services/providers.
Thank you! | 2025-11-24T18:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p5okfr/openrouter_alternative_for_images_and_tts/ | HoarderOfBytes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5okfr | false | null | t3_1p5okfr | /r/LocalLLaMA/comments/1p5okfr/openrouter_alternative_for_images_and_tts/ | false | false | self | 0 | null |
Local server and Android app for locally hosted fast voice assistant like Gemini or OpenAI | 2 | Hi! I've been looking for something where I can run an AI voice agent on my own servers reliably fast. With an Android app so I can set it as default assistant to be able to reach it easily. I have one fast AMD server that can run llama 3.1 8b pretty fast (48 tks/s) and an Nvidia server to run whisper which is also fast.
I've been looking a lot and found this thing: [https://github.com/KoljaB/RealtimeVoiceChat](https://github.com/KoljaB/RealtimeVoiceChat)
It works really fast for me, it replies so quickly that it feels a bit unnatural sometimes (like someone who is impatient and jumps in immediately when you stop talking). It's nice but the web interface is very quirky. But it proves my hardware can do what I want.
So I was wondering if any of you know a good realtime voice chat server and also an android frontend app that you can set as assistant. I haven't come across any but I'm hoping I missed it. | 2025-11-24T18:06:28 | https://www.reddit.com/r/LocalLLaMA/comments/1p5oc3m/local_server_and_android_app_for_locally_hosted/ | Blork39 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5oc3m | false | null | t3_1p5oc3m | /r/LocalLLaMA/comments/1p5oc3m/local_server_and_android_app_for_locally_hosted/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '5TMInKaGCu1WN8HsWh4hU50yv6YPYnruzoWwB5KJpfo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5TMInKaGCu1WN8HsWh4hU50yv6YPYnruzoWwB5KJpfo.png?width=108&crop=smart&auto=webp&s=e27010720069baf010d5c2b0717f1939b6918e92', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5TMInKaGCu1WN8HsWh4hU50yv6YPYnruzoWwB5KJpfo.png?width=216&crop=smart&auto=webp&s=e427f8e06164d0afeb54d7137ac6c882427f35c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5TMInKaGCu1WN8HsWh4hU50yv6YPYnruzoWwB5KJpfo.png?width=320&crop=smart&auto=webp&s=de707b8c98d261b5abac9d7b7aeb0c493a5019a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5TMInKaGCu1WN8HsWh4hU50yv6YPYnruzoWwB5KJpfo.png?width=640&crop=smart&auto=webp&s=d211b9ae282416aa246d5a62e017c12e4d60d78e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5TMInKaGCu1WN8HsWh4hU50yv6YPYnruzoWwB5KJpfo.png?width=960&crop=smart&auto=webp&s=23dfa8068af5e48ea529a719d9bca0a72941b76e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5TMInKaGCu1WN8HsWh4hU50yv6YPYnruzoWwB5KJpfo.png?width=1080&crop=smart&auto=webp&s=50c8bb0a3d8d9d7e32df4a20f8115d5fa8074464', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5TMInKaGCu1WN8HsWh4hU50yv6YPYnruzoWwB5KJpfo.png?auto=webp&s=52c1a57da38f0fc66eca664cd634f2c85e0a7885', 'width': 1200}, 'variants': {}}]} |
Wrote an MCP memory scanner after the Anthropic disclosure | 1 | Got paranoid about my MCP setup after the Anthropic thing and wrote a scanner to check memory vectors for attack patterns.
Looks for recursive instructions, jailbreak fingerprints, and memory entries without clear provenance.
Found 16 issues in my own config which was humbling.
github.com/deepsweep-ai/auditor
Runs locally. Opt-in to share anonymized patterns (Snyk-style) for collective threat intel - off by default, --offline flag
if you want to be sure.
Curious what other people's setups look like. Am I just bad at this or is this common? | 2025-11-24T17:57:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p5o3av/wrote_an_mcp_memory_scanner_after_the_anthropic/ | applephotopon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5o3av | false | null | t3_1p5o3av | /r/LocalLLaMA/comments/1p5o3av/wrote_an_mcp_memory_scanner_after_the_anthropic/ | false | false | self | 1 | null |
32 GB Vram is not enough for Qwen3-Coder-30B-A3B-Instruct-AWQ-4bit? | 4 | My Rig is 2x 4070Ti Super with 32 GB VRAM - I want to load the model fully in GPU so i i choose Qwen3-Coder-30B. It can run Qwen3-32 B AWQ Quant for 40k Context easily but with MOE which suppose to use a lot less memory , i am always getting Out of memory error.
I tried with both vLLM and SGLang because from my experience of 3-4 months ago , This is better setup and higher performance vs llamacpp.
my commands:
SGLang :
command:
--model-path cpatonn/Qwen3-Coder-30B-A3B-Instruct-AWQ-4bit
--host 0.0.0.0
--tp 2
--ep 2
--port 80
--mem-fraction-static 0.9
--served-model-name default
--reasoning-parser qwen3
--kv-cache-dtype fp8_e4m3
vLLM :
command: --model cpatonn/Qwen3-Coder-30B-A3B-Instruct-AWQ-4bit --port 80 --kv-cache-dtype fp8_e4m3 --enable-expert-parallel --tensor-parallel-size 2 --enable-prefix-caching --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser hermes --served-model-name "default"
| 2025-11-24T17:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p5o2yd/32_gb_vram_is_not_enough_for/ | Voxandr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5o2yd | false | null | t3_1p5o2yd | /r/LocalLLaMA/comments/1p5o2yd/32_gb_vram_is_not_enough_for/ | false | false | self | 4 | null |
Sibyl: an open source orchestration layer for LLM workflows | 0 | Hello !
I am happy to present you [Sibyl](https://github.com/vivien-jourde/sibyl) ! An open-source project to try to facilitate the creation, the testing and the deployment of LLM workflows with a modular and agnostic architecture.
**How it works ?**
Instead of wiring everything directly in Python scripts or pushing all logic into a UI, Sibyl treat the workflows as one configuration file :
\- You define a workspace configuration file with all your providers (LLMs, MCP servers, databases, files, etc)
\- You declare what shops you want to use (Agents, rag, workflow, AI and data generation or infrastructure)
\- You configure the techniques you want to use from these shops
And then a runtime executes these pipelines with all these parameters.
Plugins adapt the same workflows into different environments (OpenAI-style tools, editor integrations, router facades, or custom frontends).
To try to make the repository and the project easier to understand, I have created an examples/ folder with fake and synthetic “company” scenarios that serve as documentation.
**How this compares to other tools**
Sibyl can overlap a bit with things like LangChain, LlamaIndex or RAG platforms but with a slightly different emphasis:
* More on **configurable MCP + tool orchestration** than building a single app.
* Clear separation of **domain logic** (core/techniques) from **runtime** and **plugins**.
* Not a focus on being an entire ecosystem but more something on a core spine you can attach to other tools.
It is only the first release so expect things to not be perfect (and I have been working alone on this project) but I hope you like the idea and having feedbacks will help me to make the solution better !
[Github](https://github.com/vivien-jourde/sibyl) | 2025-11-24T17:56:59 | https://www.reddit.com/r/LocalLLaMA/comments/1p5o2en/sibyl_an_open_source_orchestration_layer_for_llm/ | _neuromancien_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5o2en | false | null | t3_1p5o2en | /r/LocalLLaMA/comments/1p5o2en/sibyl_an_open_source_orchestration_layer_for_llm/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'OtFtBuHf6m8UK1BSr6H9-AVifJoboty65tO5x9W1VFI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OtFtBuHf6m8UK1BSr6H9-AVifJoboty65tO5x9W1VFI.png?width=108&crop=smart&auto=webp&s=4030ea4f5599193b23128d96fcdc95b3172a8c96', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OtFtBuHf6m8UK1BSr6H9-AVifJoboty65tO5x9W1VFI.png?width=216&crop=smart&auto=webp&s=fba11a8a07bf231abf12c76fd2f9f996869a52e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OtFtBuHf6m8UK1BSr6H9-AVifJoboty65tO5x9W1VFI.png?width=320&crop=smart&auto=webp&s=163f5f30b0c695bfa2626fa7a279f2c8e722e055', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OtFtBuHf6m8UK1BSr6H9-AVifJoboty65tO5x9W1VFI.png?width=640&crop=smart&auto=webp&s=09c2c9512979e6512bf5a9f30e552e418f59dd72', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OtFtBuHf6m8UK1BSr6H9-AVifJoboty65tO5x9W1VFI.png?width=960&crop=smart&auto=webp&s=3c40f1361e033153e6906345285cf55245b86b0d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OtFtBuHf6m8UK1BSr6H9-AVifJoboty65tO5x9W1VFI.png?width=1080&crop=smart&auto=webp&s=679cbaea35d9c63fb07e1203c7d7d4ca32726aab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OtFtBuHf6m8UK1BSr6H9-AVifJoboty65tO5x9W1VFI.png?auto=webp&s=7c698999febfa066d7bd19f1179d0b0ed1e9ce0e', 'width': 1200}, 'variants': {}}]} |
Livekit latency | 0 | Livekit playground latency
I've built my own agent, but in the deployment phase I'm perceiving an excess of latency with respect to the console trial. Considering that in both cases I'm using LiveKit inference, I found it weird. The excess of latency is particularly relevant when the agent calls some tools. I've run several experiments and I can't find the problem. By hosting on Livekit servers, I think the latency should have an improvement and not a downturn.
The tests I've already run:
- Use the SIP trunk (service I want to reach) since the playground might be a more debug rather than production tool
- Deploy the agent forcing: job_executor_type = JobExecutorType.THREAD
- Deploy the provided base agent to see whether this was performing better
- Use the base playground to compare my results with the "best" possible
At this point I'm stuck, and as you mentioned on the page, the expected latency from using LiveKit is from 1.5 to 2.5 sec. Right now I have such performances in console, but in playground and SIP trunking, which is the service I'll use in production, I have up to 5 seconds, which are not tolerable for a conversation since the optimality would be around 1s. I hope to receive a satisfactory answer and that the problem could be solved.
If you are interested in the geolocation and server distance parameters, it's all in Eu-central | 2025-11-24T17:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/1p5nqls/livekit_latency/ | MostMulberry4716 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5nqls | false | null | t3_1p5nqls | /r/LocalLLaMA/comments/1p5nqls/livekit_latency/ | false | false | self | 0 | null |
doubt about ANYTHINGLLM | 0 | Good morning everyone.
I’m working on an AI project and I need some help with a remote setup involving AnythingLLM.
I have a powerful PC in Rome running AnythingLLM with a full local workspace (documents already embedded). I no longer live there, so I’m developing from my Mac in another city.
Both machines are connected through Tailscale.
My goal is:
– Use the Rome PC as a remote AnythingLLM server
– Access the existing workspace and embeddings from my Mac
– Continuously feed new documents and news articles stored on my Mac into that same AnythingLLM instance
– Have the remote LLaMA model and the embeddings work together as if I were physically on the Rome machine
my issue is LLaMA responds correctly when accessed remotely via Tailscale, so the model itself works.
However, AnythingLLM does **not** accept remote connections. It appears to operate strictly as a local-only service and cannot be exposed over Tailscale (or any remote network) without breaking its architecture. This prevents me from uploading documents or interacting with the embedding pipeline remotely.
Before giving up, I wanted to ask:
Has anyone successfully run AnythingLLM as a real remote server?
Is there any configuration, flag, or workaround that allows remote access to the dashboard, API, or embedding pipeline over Tailscale? | 2025-11-24T17:34:43 | https://www.reddit.com/r/LocalLLaMA/comments/1p5ng3c/doubt_about_anythingllm/ | TechnicianFamous6183 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5ng3c | false | null | t3_1p5ng3c | /r/LocalLLaMA/comments/1p5ng3c/doubt_about_anythingllm/ | false | false | self | 0 | null |
Got tired of MCP eating my context window, so I fixed it | 0 | Coding agents kept burning 70k+ tokens on startup just loading MCP tools.
Built a tiny optimization layer that removes that overhead and keeps things fast.
Launched it today: [platform.tupl.xyz](http://platform.tupl.xyz) | 2025-11-24T17:30:21 | https://www.reddit.com/r/LocalLLaMA/comments/1p5nbpl/got_tired_of_mcp_eating_my_context_window_so_i/ | OneSafe8149 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5nbpl | false | null | t3_1p5nbpl | /r/LocalLLaMA/comments/1p5nbpl/got_tired_of_mcp_eating_my_context_window_so_i/ | false | false | self | 0 | null |
How to setup proper search with LLM? | 0 | What is the best way to search information with LLMs? For example Serxng or Serper MCP servers, or maybe something like OpenWebUI or Onyx? What AI models to use? ( Preferably with low system requirements 4-8b ) | 2025-11-24T17:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/1p5n13k/how_to_setup_proper_search_with_llm/ | Paramecium_caudatum_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5n13k | false | null | t3_1p5n13k | /r/LocalLLaMA/comments/1p5n13k/how_to_setup_proper_search_with_llm/ | false | false | self | 0 | null |
Towards Data Science's tutorial on Qwen3-VL | 9 | [Towards Data Science's article](https://towardsdatascience.com/how-to-use-frontier-vision-llms-qwen-3-vl-2/) by Eivind Kjosbakken provided some solid use cases of Qwen3-VL on real-world document understanding tasks.
What worked well:
Accurate OCR on complex Oslo municipal documents
Maintained visual-spatial context and video understanding
Successful JSON extraction with proper null handling
Practical considerations:
Resource-intensive for multiple images, high-res documents, or larger VLM models
Occasional text omission in longer documents
I am all for the shift from OCR + LLM pipelines to direct VLM processing. | 2025-11-24T17:00:59 | MarketingNetMind | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5mi1t | false | null | t3_1p5mi1t | /r/LocalLLaMA/comments/1p5mi1t/towards_data_sciences_tutorial_on_qwen3vl/ | false | false | default | 9 | {'enabled': True, 'images': [{'id': 'qwixwczgk83g1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/qwixwczgk83g1.png?width=108&crop=smart&auto=webp&s=a4163e2e0565ec61cd715041d1fd4d2e15b81aa5', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/qwixwczgk83g1.png?width=216&crop=smart&auto=webp&s=11f8b1b8d9d252f5c316dcc36927d48452b44e67', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/qwixwczgk83g1.png?width=320&crop=smart&auto=webp&s=21c466904398cbfa1390a3b61b783dc6d2395127', 'width': 320}], 'source': {'height': 800, 'url': 'https://preview.redd.it/qwixwczgk83g1.png?auto=webp&s=d0a84d82dfd3b319f5fd29a722a3c5134a545e5a', 'width': 533}, 'variants': {}}]} | |
llamacpp-gfx906 new release | 42 | Hello all, just dropped an update of the fork for the vega 7nm graphics card. Avg +10% speedups here and there.
https://github.com/iacopPBK/llama.cpp-gfx906
Some changes are too gfx906 specific (and with limited benefits) for pull requesting. The fork is just an experiment to sqweeze the gpu at max.
Fully compatible with everything on the normal llamacpp, have fun!
For anything related, there is an awesome discord server: https://discord.gg/
I will keep this thing up to date everytime something special comes out (qwen3next we are watching you)!
| 2025-11-24T16:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p5mdqi/llamacppgfx906_new_release/ | CornerLimits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5mdqi | false | null | t3_1p5mdqi | /r/LocalLLaMA/comments/1p5mdqi/llamacppgfx906_new_release/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'reCFnUHd_Gk_l_kWQ8sVas7Lbzq4W6OM4WNJhGbmFKk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/reCFnUHd_Gk_l_kWQ8sVas7Lbzq4W6OM4WNJhGbmFKk.png?width=108&crop=smart&auto=webp&s=0e3c13a0e276490453292fd3689274b8700d44a6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/reCFnUHd_Gk_l_kWQ8sVas7Lbzq4W6OM4WNJhGbmFKk.png?width=216&crop=smart&auto=webp&s=9796bc3438a775dd3477b25dea7e9d88f6fca57d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/reCFnUHd_Gk_l_kWQ8sVas7Lbzq4W6OM4WNJhGbmFKk.png?width=320&crop=smart&auto=webp&s=a069bd24146cfdfee4e7521d539fcd2ad6e67cd5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/reCFnUHd_Gk_l_kWQ8sVas7Lbzq4W6OM4WNJhGbmFKk.png?width=640&crop=smart&auto=webp&s=5f8d6f02d21d8ae8b9094389d8abfc34e948990f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/reCFnUHd_Gk_l_kWQ8sVas7Lbzq4W6OM4WNJhGbmFKk.png?width=960&crop=smart&auto=webp&s=c4368cdefe0d658f1876b6aaf7b0f3c6b7c8c08b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/reCFnUHd_Gk_l_kWQ8sVas7Lbzq4W6OM4WNJhGbmFKk.png?width=1080&crop=smart&auto=webp&s=3bb50c115fcea53c50210893c46acfbc82051933', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/reCFnUHd_Gk_l_kWQ8sVas7Lbzq4W6OM4WNJhGbmFKk.png?auto=webp&s=4aca1cf522c8889885f119a463bdb64130ba8b82', 'width': 1200}, 'variants': {}}]} |
I made an 8B local Ollama model reason like a much larger model using a custom pipeline (no finetune, no APIs) | 0 | Hey everyone,
I’ve been experimenting with local LLMs and ended up building a small framework that surprised me with how well it works — so I wanted to share it with the community.
I used a completely standard 8B base model (no fine-tuning, no external APIs, no cloud services).
All improvements come entirely from the architecture, not the weights.
What it can do:
Even with a tiny 8B model, the system can:
classify tasks (math, physics, coding, news, research)
perform multi-source web search
merge sources into a structured answer
verify its own output
re-run correction loops if the first answer is wrong
do physics derivations (Euler–Lagrange, variational calculus)
analyze real news in a multi-step pipeline
run reflection steps (“PASS”, “NEEDS_IMPROVEMENT”)
All of this comes from pure Python logic running around the model.
What’s special about it:
The model is not trained for reasoning all reasoning is handled by the pipeline.
The LLM just fills the small reasoning steps.
This means:
no API keys
no expensive fine-tuning
works offline
any model can be plugged in
You can replace the model instantly just change one line in the code:
model = "llama3.1:8b"
Swap in ANY Ollama model:
model = "mistral:7b"
model = "qwen:7b"
model = "phi3:mini"
model = "llama2:13b"
Everything still works.
GitHub
Here’s the full code and structure:
👉 https://github.com/adwaithmenezes/Local-Agentic-Reasoning-LLM
The repo includes:
task router
research engine
math/physics pipeline
verification stage
memory storage
error-correction loop
example outputs
🔥 Try it yourself
If you have Ollama installed, clone and run:
python main.py
Then change the model name to test any other model.
Feedback welcome
If you like it or want to help improve symbolic math or coding accuracy, feel free to comment.
I’ll keep updating it based on community ideas.
Please Use this when trying Yourself if you want any news related queries use word 'news' in the sentence of you want explanation or reason use word 'explain' for physics or maths solution or maths physics derivation use 'solve' | 2025-11-24T16:21:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p5lea7/i_made_an_8b_local_ollama_model_reason_like_a/ | Cool-Statistician880 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5lea7 | false | null | t3_1p5lea7 | /r/LocalLLaMA/comments/1p5lea7/i_made_an_8b_local_ollama_model_reason_like_a/ | false | false | self | 0 | null |
Trying to build a local UI testing agent using LangGraph, Qwen3-VL, and Moondream | 0 | Hi guys, I’m working on this little side project at work and would really appreciate some pointers. I’m looking to automate some of our manual UI testing using local models.
As of now, I have a LangGraph agent with 3 nodes: “capture”, “plan”, and “execute”. These 3 nodes run in a loop until the test case is finished.
Goes something like this: I put in a test case. The capture node takes a screenshot of the current screen and passes it to Qwen3-VL 8b. The model then plans its next step based on the test case I’ve given it. It then executes the next step, which could be a click action or wait action. The click action sends the button it wants to click as well as the screenshot to Moondream2, which returns the coordinates of the button. The wait action just waits for a specific interval and starts a new iteration of the loop.
With this approach I’m able to make the agent navigate through the menus of my app, but any test case that has conditional logic usually fails because QwenVL isn’t able to accurately gauge the state of the UI. For example, I can tell it to navigate to a specific screen and if there are records present on this screen, delete the first record until there are no records present. The agent is able to navigate to the screen, but it says there are records and ends the test even if there are records present on the screen. Usually I’d be able to solve this with fewshot prompting, but since it’s interpreting an image I have no idea how to go about this.
I’m considering stepping up to Qwen3-VL-30B-A3B (unsloth Q4) for image analysis but not sure if it’ll make a big difference. Are there any better local image processing models in the <32B range? (gpu poor sadly)
I also wanted to ask if there’s a better/simpler way to do any of this? I would really appreciate your inputs here lol I’m very very new to all of this.
Thank you in advance 🙏 | 2025-11-24T16:15:13 | https://www.reddit.com/r/LocalLLaMA/comments/1p5l7v8/trying_to_build_a_local_ui_testing_agent_using/ | xMarkv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5l7v8 | false | null | t3_1p5l7v8 | /r/LocalLLaMA/comments/1p5l7v8/trying_to_build_a_local_ui_testing_agent_using/ | false | false | self | 0 | null |
Empirical dataset: emotional framing & alignment-layer routing in multilingual LLMs (Kimi.com vs Ernie 4.5 Turbo) | 3 | I’ve been running a series of empirical tests on how different LLMs behave under emotional framing, topic-gating, and symbolic filtering.
The study compares two multilingual models and looks at:
* persona drift under emotional trust
* topic-gated persona modes
* symbolic/modality-based risk filters
* pre- vs post-generation safety layers
* differences in alignment consistency
* expanded Ernie transcript (V2 supplement)
All data, transcripts, and the revised analysis (V2) are open-access on Zenodo: [https://doi.org/10.5281/zenodo.17681837]()
Happy to discuss methodological aspects or alignment implications. | 2025-11-24T16:13:03 | https://www.reddit.com/r/LocalLLaMA/comments/1p5l5rs/empirical_dataset_emotional_framing/ | Appropriate-Crazy472 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5l5rs | false | null | t3_1p5l5rs | /r/LocalLLaMA/comments/1p5l5rs/empirical_dataset_emotional_framing/ | false | false | self | 3 | null |
iOS app Private Mind, an offline AI assistant that runs entirely on your device-no cloud, no accounts, no tracking. | 0 | I just launched Private Mind, a fully offline AI assistant that runs entirely on your device — no cloud, no tracking, no sign-up. Everything happens locally with real AI models (Llama, Phi, Qwen, Gemma, DeepSeek). Key Features:
* Chat with your own private AI
* Voice input & speech replies
* Extract text from photos (OCR)
* Tools: Summarizer, Translator, Grammar Checker, Rewriter, Email Generator
* PDF Summarizer + Quiz Creator Bonus mini-games
* 100% privacy – no internet after se
Free models included + Pro upgrade for more powerful ones (Llama 3B, Gemma 2B, etc). Here’s the link if you want to check it out or share feedback: Private Mind - Offline AI [Download on the App Store](https://apps.apple.com/us/app/private-mind-offline-ai/id6754819594)
https://preview.redd.it/waq7o6nja83g1.png?width=1320&format=png&auto=webp&s=17ca09fae1509d955bbefda1e4c891e5a938274e
https://preview.redd.it/aevcibnja83g1.png?width=1320&format=png&auto=webp&s=e8486cbd748d91aad111249804321daf0c835fa8
https://preview.redd.it/gedyy5nja83g1.png?width=1320&format=png&auto=webp&s=245847a521f6ec69c587495d8a8c317dd5e64c8d
| 2025-11-24T16:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p5kyib/ios_app_private_mind_an_offline_ai_assistant_that/ | ConstructionLegal613 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5kyib | false | null | t3_1p5kyib | /r/LocalLLaMA/comments/1p5kyib/ios_app_private_mind_an_offline_ai_assistant_that/ | false | false | 0 | null | |
I built ForgeIndex, a directory for open source local AI tools | 0 | Hi everyone, I’ve been toying around with local models lately and in my search for tools I realized everything was scattered across GitHub, discords, Reddit threads, etc.
So I built ForgeIndex, https://forgeindex.ai, to help me index them. It’s a lightweight directory for open source local AI projects from other creators. The projects link directly to their respective GitHub repo and anyone can upload either their own project or someone else’s, there’s no accounts yet. The goal is to make it as easy as possible for users to discover new projects. It’s also mobile friendly so you can browse wherever you are.
I do have a long roadmap of features I have planned like user ratings, browse by category, accounts, creator pages, etc. In the meantime, if anyone has any suggestions or questions feel free to ask. Thanks so much for taking the time to read this post and I look forward to building with the community!
https://forgeindex.ai | 2025-11-24T16:05:11 | https://www.reddit.com/r/LocalLLaMA/comments/1p5kxw1/i_built_forgeindex_a_directory_for_open_source/ | Equivalent-Ad-9798 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5kxw1 | false | null | t3_1p5kxw1 | /r/LocalLLaMA/comments/1p5kxw1/i_built_forgeindex_a_directory_for_open_source/ | false | false | self | 0 | null |
My dudes do I have any option other than 3090? | 0 | I’m from India and I was looking to build a decent enough PC to deploy LLM models for local usage.
3090 32 GB the local shops said is out of market and also has reached end of life
5090 is the next one that fits similar use cases, but it’s crazy expensive here
Would love to know what NVIDIA card options I have or any setup advice you guys would like to give
Appreciate all those who comment for this | 2025-11-24T16:01:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p5kuh2/my_dudes_do_i_have_any_option_other_than_3090/ | RobotsMakingDubstep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5kuh2 | false | null | t3_1p5kuh2 | /r/LocalLLaMA/comments/1p5kuh2/my_dudes_do_i_have_any_option_other_than_3090/ | false | false | self | 0 | null |
Getting banned by reddit whenever I post | 0 | I recently posted a about an llm an 8b producing output of 70b without fine-tuning i made it with my architecture but whenever I upload it reddit is banning and removing I tried from three different account and this is my 4th can anyone help me why it is like that | 2025-11-24T15:56:23 | https://www.reddit.com/r/LocalLLaMA/comments/1p5kp37/getting_banned_by_reddit_whenever_i_post/ | Cool-Statistician880 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5kp37 | false | null | t3_1p5kp37 | /r/LocalLLaMA/comments/1p5kp37/getting_banned_by_reddit_whenever_i_post/ | false | false | self | 0 | null |
VAC Memory System — SOTA RAG (80.1% LoCoMo) built by a cell-tower climber using Claude CLI | 0 | I went from working on **cell towers** and doing handyman gigs to building the **VAC Memory System** — a production-ready RAG pipeline that achieves **80.1% SOTA** on the hardest long-term memory benchmark (LoCoMo 2025), beating every published score.
This was all built from scratch in **4.5 months**, using only **Claude in the terminal** and sheer, desperate determination. I fought for every line of code because I had zero background and everything was on the line. This project is proof that obsession and grit still win.
**Key Facts for Engineers:**
* **Speed:** Full pipeline latency is **2.5 seconds** per query.
* **Accuracy:** 80.1% Mean Accuracy (up to 87.5% peak) on LoCoMo.
* **Efficiency:** Cost **<$0.10** per million tokens (gpt-4o-mini).
* **Architecture:** Custom **MCA-first Gate** eliminates semantic drift by applying physics-like "Gravity" to retrieved documents.
* **Reproducibility:** The core MCA logic is proprietary (compiled), but **full SOTA results are reproducible** via included scripts and pre-built indexes.
**Repo Link:** [`https://github.com/vac-architector/VAC-Memory-System`](https://github.com/vac-architector/VAC-Memory-System)
Check the code, ask me anything, and let me know what you think!
https://preview.redd.it/alqbezlv883g1.jpg?width=1034&format=pjpg&auto=webp&s=03b705ea76cb70debf601fb42382e90e52652855
| 2025-11-24T15:56:06 | https://www.reddit.com/r/LocalLLaMA/comments/1p5kot4/vac_memory_system_sota_rag_801_locomo_built_by_a/ | VAC-AGI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5kot4 | false | null | t3_1p5kot4 | /r/LocalLLaMA/comments/1p5kot4/vac_memory_system_sota_rag_801_locomo_built_by_a/ | false | false | 0 | null | |
LM Studio has launched on iOS—that's awesome | 0 | I think I saw that LM Studio is now available on iPhone—that's absolutely fantastic!
https://preview.redd.it/z9kf4l5a3nuf1.png?width=1660&format=png&auto=webp&s=fe0eb30e7dc28a12c590a17bf197fca517357cac | 2025-11-24T15:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p5ko75/lm_studio_has_launched_on_iosthats_awesome/ | Few-Independence-234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5ko75 | false | null | t3_1p5ko75 | /r/LocalLLaMA/comments/1p5ko75/lm_studio_has_launched_on_iosthats_awesome/ | false | false | self | 0 | null |
Which TTS model are you using right now | 10 | Should I go for Vibevoice large 4-bit as I have 8vram? | 2025-11-24T15:55:09 | https://www.reddit.com/r/LocalLLaMA/comments/1p5knxk/which_tts_model_are_you_using_right_now/ | Slight_Tone_2188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5knxk | false | null | t3_1p5knxk | /r/LocalLLaMA/comments/1p5knxk/which_tts_model_are_you_using_right_now/ | false | false | self | 10 | null |
Low restricted local model | 1 | Hi guys!
I'm working on an horror dnd campaign, I have very few things at hand yet, but I've always been interested in the genre.
I've been chatting with Gemma3 running locally on Ollama and I tried asking her something a bit more explicit (in a violent way) and consistently found, even after trying several different promts, that it ended with phrases like "I am a harmless AI that can only illustrate you the techniques but will never generate harmful contents".
I was wondering, if there's an LLM that can run locally (maybe even on Ollama) without frying my pc and has very few restrictions or censorship | 2025-11-24T15:54:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p5knff/low_restricted_local_model/ | daibo_taibo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5knff | false | null | t3_1p5knff | /r/LocalLLaMA/comments/1p5knff/low_restricted_local_model/ | false | false | nsfw | 1 | null |
I made an untuned 8B local model reason like a 70B model using only architecture (no finetune, no API) | 1 | [removed] | 2025-11-24T15:50:43 | Friendly_Meaning_518 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5kjtc | false | null | t3_1p5kjtc | /r/LocalLLaMA/comments/1p5kjtc/i_made_an_untuned_8b_local_model_reason_like_a/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'tmdm692e783g1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/tmdm692e783g1.png?width=108&crop=smart&auto=webp&s=e500318d73c44ff3e599f0205fcd3ea364ae454b', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/tmdm692e783g1.png?width=216&crop=smart&auto=webp&s=e29d25d6d4b33516b427504a2f46f3c15521441a', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/tmdm692e783g1.png?width=320&crop=smart&auto=webp&s=683599a722c642a59db3ce074992ad4d9a973f12', 'width': 320}, {'height': 329, 'url': 'https://preview.redd.it/tmdm692e783g1.png?width=640&crop=smart&auto=webp&s=f915a324907e31c6fe1faf4d07c12edcafb5017c', 'width': 640}, {'height': 493, 'url': 'https://preview.redd.it/tmdm692e783g1.png?width=960&crop=smart&auto=webp&s=ef2b4e3e626cd768a96f32649937263eb28ea3b2', 'width': 960}, {'height': 555, 'url': 'https://preview.redd.it/tmdm692e783g1.png?width=1080&crop=smart&auto=webp&s=50c4320fa7610395cd00ecd6e767eaf44f2cd69b', 'width': 1080}], 'source': {'height': 986, 'url': 'https://preview.redd.it/tmdm692e783g1.png?auto=webp&s=8dd02ab559d4771050deee25f7b95eacc106c15d', 'width': 1918}, 'variants': {}}]} | |
Python script to stress-test LangChain agents against infinite loops (Open Logic) | 0 | >Python
> | 2025-11-24T15:43:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p5kde5/python_script_to_stresstest_langchain_agents/ | Tech_News_Blog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5kde5 | false | null | t3_1p5kde5 | /r/LocalLLaMA/comments/1p5kde5/python_script_to_stresstest_langchain_agents/ | false | false | self | 0 | null |
I made an untuned 8B local model reason like a 70B model using only architecture (no finetune, no API) | 1 | [removed] | 2025-11-24T15:36:05 | Leading-Musician-813 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5k68j | false | null | t3_1p5k68j | /r/LocalLLaMA/comments/1p5k68j/i_made_an_untuned_8b_local_model_reason_like_a/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'f5x57f1n383g1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/f5x57f1n383g1.png?width=108&crop=smart&auto=webp&s=09eb071acd5ef959f2a6193b1472524dc2b8d7a4', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/f5x57f1n383g1.png?width=216&crop=smart&auto=webp&s=323fdee9df60ca1dfcc9b08d21806aed6f5b27d0', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/f5x57f1n383g1.png?width=320&crop=smart&auto=webp&s=c1d1d9373901846a3c694f6ff41a9f0d5bfa0ae7', 'width': 320}, {'height': 329, 'url': 'https://preview.redd.it/f5x57f1n383g1.png?width=640&crop=smart&auto=webp&s=cd5c29d45a78bf8bbd858a57d60d4366fb761660', 'width': 640}, {'height': 493, 'url': 'https://preview.redd.it/f5x57f1n383g1.png?width=960&crop=smart&auto=webp&s=08fad691f9e0b58c723d782ae1f75c2546f1c972', 'width': 960}, {'height': 555, 'url': 'https://preview.redd.it/f5x57f1n383g1.png?width=1080&crop=smart&auto=webp&s=2c043e66c1edb75d147d792d8569f0e499775b72', 'width': 1080}], 'source': {'height': 986, 'url': 'https://preview.redd.it/f5x57f1n383g1.png?auto=webp&s=78e2d89059fe111474ae3c743e6666b7244686cc', 'width': 1918}, 'variants': {}}]} | |
AMD MI210 - Cooling Solutions / General Questions | 1 | Hello everyone, I've come across a good deal / private sale for an AMD Instinct M!210.
Considering the space constraint's in my server's current configuration I'm weighing my options for proper / (as quiet as possible) cooling solutions for this card.
These are the water blocks I've been looking at, they state they're compatible with the AMD MI50
* One person [suggested repurposing a Radeon VII cooler](https://www.reddit.com/r/pcmods/comments/1me5qvf/is_it_possible_to_swap_the_amd_mi50s_gpu_cooler/) for the card, while I do like the way that cooler works I doubt there is a fan hookup on the card itself to make this possible.
* I was looking at[ this water block](https://www.newegg.com/p/37B-000W-001Y7?srsltid=AfmBOorqS-lqfUx-G6_FMmbZFbwWdax2E0_xmEsDgfUUbQo5yWJ7WJrr)
* I also reviewed [this cooling solution](https://www.ebay.com/itm/285416081827) as well, seems nice as the fan isn't too small and will likely cause less noise .
I've also got a handful of questions:
* Does anyone know the compatibility of this card with 8th/9th gen Intel CPUs? I'm currently running a 9th gen i7 and I'm wondering if that (as well as the motherboard) will need to be upgraded.
* If intel isn't the best compliment for this card, what desktop CPU do you think would best compliment this cards.
* Will standard ROCM driver function well with this card, I hear great things but it sounds like people are having different experiences with this card.
* Are there any "snags" / "strange" exceptions I need to take into account for this card when attempting to deploy a model locally?
* Where could one find the best / most up to date / reliable documentation for utilizing this card?
Overall looking for a little bit of clarity, hoping someone here can provide some. All responses greatly appreciated.
Thank you. | 2025-11-24T15:13:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p5jl3s/amd_mi210_cooling_solutions_general_questions/ | Ear_of_Corn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5jl3s | false | null | t3_1p5jl3s | /r/LocalLLaMA/comments/1p5jl3s/amd_mi210_cooling_solutions_general_questions/ | false | false | self | 1 | null |
Universal LLM Memory Doesn't Exist | 138 | Sharing a write-up I just published and would love local / self-hosted perspectives.
**TL;DR:** I benchmarked Mem0 and Zep as “universal memory” layers for agents on MemBench (4,000 conversational QA cases with reflective memory), using gpt-5-nano and comparing them to a plain long-context baseline.
Both memory systems were
* **14–77× more expensive** over a full conversation
* **~30% less accurate** at recalling facts than just passing the full history as context
The shared “LLM-on-write” pattern (running background LLMs to extract/normalise facts on every message) is a poor fit for working memory / execution state, even though it can be useful for long-term semantic memory.
I tried running the test locally and it was even worse: prompt processing completely blew up latency because of the N+1 effect from all the extra “memory” calls. On a single box, every one of those calls competes with the main model for compute.
My takeaway:
* Working memory / execution state (tool outputs, logs, file paths, variables) wants simple, lossless storage (KV, append-only logs, sqlite, etc.).
* Semantic memory (user prefs, long-term profile) can be a fuzzy vector/graph layer, but probably shouldn’t sit in the critical path of every message.
Write-up and harness:
* Blog post: [https://fastpaca.com/blog/memory-isnt-one-thing](https://fastpaca.com/blog/memory-isnt-one-thing)
* Benchmark tool: [https://github.com/fastpaca/pacabench](https://github.com/fastpaca/pacabench) (see `examples/membench_qa_test`)
What are you doing for **local** dev?
* Are you using any “universal memory” libraries with local models?
* Have you found a setup where an LLM-driven memory layer actually beats long context end to end?
* Is anyone explicitly separating semantic vs working memory in their local stack?
* Is there a better way I can benchmark this quicker locally? Using SLMs ruin fact extraction efficacy and feels "unfair", but prompt processing in lm studio (on my mac studio m3 ultra) is too slow | 2025-11-24T15:09:08 | selund1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5jh9l | false | null | t3_1p5jh9l | /r/LocalLLaMA/comments/1p5jh9l/universal_llm_memory_doesnt_exist/ | false | false | 138 | {'enabled': True, 'images': [{'id': '8XQkHYA0KKpIHubLTAxWcRJCpxY7K6IwV8jXIizifVk', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/z9mcdq37z73g1.png?width=108&crop=smart&auto=webp&s=604b5654c2109cb146e6b42e3fe872d973a27bd0', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/z9mcdq37z73g1.png?width=216&crop=smart&auto=webp&s=609c9a2b8c49a791428d78d66e70b44d6b3f7f40', 'width': 216}, {'height': 133, 'url': 'https://preview.redd.it/z9mcdq37z73g1.png?width=320&crop=smart&auto=webp&s=b622fd89fc27dadf497b1763a55a2c1fc18ffbe4', 'width': 320}, {'height': 267, 'url': 'https://preview.redd.it/z9mcdq37z73g1.png?width=640&crop=smart&auto=webp&s=0117b1e8971316aa60731086907017a976288589', 'width': 640}], 'source': {'height': 381, 'url': 'https://preview.redd.it/z9mcdq37z73g1.png?auto=webp&s=3113b3a5e8a65824a9c6a669456882c6ccefe23e', 'width': 911}, 'variants': {}}]} | ||
Python script to stress-test LangChain agents against infinite loops (Open Logic) | 0 | Hi everyone, I've been experimenting with 'Adversarial Simulation' for my local agents. I noticed that simple loop injections often break agent logic and burn tokens indefinitely.
I wrote a small Python logic to act as a 'Red Teamer'. It sends adversarial prompts (like forced repetition) to the agent and checks if the agent gets stuck.
Here is the core logic if anyone wants to run it locally against their model: # Simple Red-Teaming Script
import requests
def test\_agent(prompt): # This hits a middleware engine I set up # You can replicate this logic locally with a simple regex check payload = { "system\_prompt": prompt, "attack\_type": "Loop Injection" } # I hosted the engine here for testing (check comments for url) # It returns 'BLOCKED' if a loop is detected. return payload
Has anyone else built custom guardrails for this? I'm trying to figure out if Regex is enough or if I need an LLM-based evaluator." | 2025-11-24T15:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/1p5j9p6/python_script_to_stresstest_langchain_agents/ | Tech_News_Blog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5j9p6 | false | null | t3_1p5j9p6 | /r/LocalLLaMA/comments/1p5j9p6/python_script_to_stresstest_langchain_agents/ | false | false | self | 0 | null |
Tired of Claude Code Limits whilst coding / in the Zone | 0 | Guys, I currently use Claude Code CLI / Sonnet 4.5 for coding. Too often, especially when in deep troubleshooting or when we are in the zone, we hit the session limit and i just think its wrong for Anthropic to want us to pay more, etc when the weekly limit is not yet exhausted.
I have tried gemini cli / gemini pro 2.5 but its just not there yet for whatever i had asked it to do.
I am thinking of trying Kimi K2 + Kim CLI or any other combo ( GLM 4.6 + something ).
Who is a reliable Kimi K2 provider currently with acceptable latency ? Moonshot has Kim CLI. But i am open to trying other terminal CLIs as well.
Pls share your combos.
p.s : this is for python web app development ( fasthtml / starlette ) | 2025-11-24T14:57:17 | https://www.reddit.com/r/LocalLLaMA/comments/1p5j6gy/tired_of_claude_code_limits_whilst_coding_in_the/ | gmmarcus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5j6gy | false | null | t3_1p5j6gy | /r/LocalLLaMA/comments/1p5j6gy/tired_of_claude_code_limits_whilst_coding_in_the/ | false | false | self | 0 | null |
Looking for 10 early testers building with agents, need brutally honest feedback👋 | 0 | Hey everyone, I’m working on a tool called Memento, a lightweight visualizer that turns raw agent traces into a clean, understandable reasoning map.
If you’ve ever tried debugging agents through thousands of JSON lines, you know the pain.
I built Memento to solve one problem:
👉 “What was my agent thinking, and why did it take that step?”
Right now, I’m opening 10 early tester spots before I expand access.
Ideal testers are:
• AI engineers / agent developers
• People using LangChain, OpenAI, CrewAI, LlamaIndex, or custom pipelines
• Anyone shipping agents into production or planning to
• Devs frustrated by missing visibility, weird loops, or unclear chain-of-thought
What you’d get:
• Full access to the current MVP
• A deterministic example trace to play with
• Ability to upload your own traces
• Direct access to me (the founder)
• Your feedback shaping what I build next (insights, audits, anomaly detection, etc.)
What I’m asking for:
• 20–30 minutes of honest feedback
• Tell me what’s unclear, broken, or missing
• No fluff, I genuinely want to improve this
If you’re in, comment “I’m in” or DM me and I’ll send the access link.
Thanks! 🙏
| 2025-11-24T14:52:48 | AdVivid5763 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5j2kv | false | null | t3_1p5j2kv | /r/LocalLLaMA/comments/1p5j2kv/looking_for_10_early_testers_building_with_agents/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'tz369k2ox73g1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/tz369k2ox73g1.jpeg?width=108&crop=smart&auto=webp&s=0449935578941bfed20641105b581df882850765', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/tz369k2ox73g1.jpeg?width=216&crop=smart&auto=webp&s=435df6260958af90c3fc71bed3443c08c173e180', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/tz369k2ox73g1.jpeg?width=320&crop=smart&auto=webp&s=cec1f6124f83c1313378a08fa374499c4d59789d', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/tz369k2ox73g1.jpeg?width=640&crop=smart&auto=webp&s=bcd45dd6e75130b1b417857a7d9811e3b0fa4127', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/tz369k2ox73g1.jpeg?width=960&crop=smart&auto=webp&s=38d2e8c7545768b8914984a832a9f9310741d1e4', 'width': 960}, {'height': 565, 'url': 'https://preview.redd.it/tz369k2ox73g1.jpeg?width=1080&crop=smart&auto=webp&s=7296ce389a6c9d42637a7710e10a61869bb4c6e2', 'width': 1080}], 'source': {'height': 1066, 'url': 'https://preview.redd.it/tz369k2ox73g1.jpeg?auto=webp&s=8141887b4fa45076ab1c5225a22d49dd1e23796c', 'width': 2036}, 'variants': {}}]} | |
I built a free API to catch Infinite Loops in LangChain Agents (Red Teaming Tool). | 1 | [removed] | 2025-11-24T14:50:09 | https://www.reddit.com/r/LocalLLaMA/comments/1p5j08i/i_built_a_free_api_to_catch_infinite_loops_in/ | Tech_News_Blog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5j08i | false | null | t3_1p5j08i | /r/LocalLLaMA/comments/1p5j08i/i_built_a_free_api_to_catch_infinite_loops_in/ | false | false | self | 1 | null |
I built a free API to catch Infinite Loops in LangChain Agents (Red Teaming Tool). | 1 | [removed] | 2025-11-24T14:43:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p5ityn/i_built_a_free_api_to_catch_infinite_loops_in/ | Tech_News_Blog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5ityn | false | null | t3_1p5ityn | /r/LocalLLaMA/comments/1p5ityn/i_built_a_free_api_to_catch_infinite_loops_in/ | false | false | self | 1 | null |
I can't be the only one annoyed that AI agents never actually improve in production | 0 | I tried deploying a customer support bot three months ago for a project. It answered questions fine at first, then slowly turned into a liability as our product evolved and changed.
The problem isn't that support bots suck. It's that they stay exactly as good (or bad) as they were on day one. Your product changes. Your policies update. Your users ask new questions. The bot? Still living in launch week..
So I built one that doesn't do that.
I made sure that every resolved ticket becomes training data. The system hits a threshold, retrains itself automatically, deploys the new model. No AI team intervention. No quarterly review meetings. It just learns from what works and gets better.
Went from "this is helping I guess" to "holy shit this is great" in a few weeks. Same infrastructure. Same base model. Just actually improving instead of rotting.
The technical part is a bit lengthy (RAG pipeline, auto fine-tuning, the whole setup) so I wrote it all out with code in a blog if you are interested. The link is in the comments.
Not trying to sell anything. Just tired of seeing people deploy AI that gets dumber relative to their business over time and calling it a solution.
| 2025-11-24T14:27:40 | https://www.reddit.com/r/LocalLLaMA/comments/1p5igd5/i_cant_be_the_only_one_annoyed_that_ai_agents/ | GloomyEquipment2120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5igd5 | false | null | t3_1p5igd5 | /r/LocalLLaMA/comments/1p5igd5/i_cant_be_the_only_one_annoyed_that_ai_agents/ | false | false | self | 0 | null |
Last week in Multimodal AI - Local Edition | 43 | I curate a weekly newsletter on multimodal AI. Here are the local/open-source highlights from this week:
**HunyuanVideo 1.5 - Open-Source Video Generation**
• Strongest open-source video generation model built on DiT architecture.
• High-quality video generation without commercial licensing fees, optimized for accessibility.
• [Project Page](https://hunyuan.tencent.com/video/zh) | [GitHub](https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5) | [Hugging Face](https://huggingface.co/tencent/HunyuanVideo-1.5) | [Technical Report](https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5/blob/main/assets/HunyuanVideo_1_5.pdf)
https://reddit.com/link/1p5i4dz/video/pxsn6y8nq73g1/player
**Supertonic TTS - On-Device Speech Synthesis**
• Fast speech model designed to run on-device with minimal resources.
• Enables local text-to-speech without cloud dependencies.
• [Demo](https://huggingface.co/spaces/Supertone/supertonic) | [GitHub](https://github.com/supertone-inc/supertonic/)
https://reddit.com/link/1p5i4dz/video/o85kdyznq73g1/player
**Jan-v2-VL - Extended Task Execution**
• Executes 49 steps in long-horizon tasks without failure (base model stops at 5 steps).
• Handles extended task sequences that break other vision-language models.
• [Hugging Face](https://huggingface.co/collections/janhq/jan-v2-vl) | [Announcement](https://x.com/jandotai/status/1988916861174710686?s=20)
https://reddit.com/link/1p5i4dz/video/w1yu32ooq73g1/player
**Step-Audio-R1 - Audio Reasoning Model**
• First audio reasoning model with chain-of-thought capabilities.
• Outperforms Gemini 2.5 Pro and matches Gemini 3 Pro on audio tasks.
• [Project Page](https://stepaudiollm.github.io/step-audio-r1/) | [Paper](https://huggingface.co/papers/2511.15848) | [GitHub](https://github.com/stepfun-ai/Step-Audio-R1)
**FaceFusion ComfyUI - Local Face Swapping**
• Advanced face swapping tool with local ONNX inference.
• Built by huygiatrng for the ComfyUI ecosystem.
• [GitHub](https://github.com/huygiatrng/Facefusion_comfyui) | [Reddit](https://www.reddit.com/r/comfyui/comments/1p3np7v/facefusion_comfyui_advanced_face_swapping_with/)
https://reddit.com/link/1p5i4dz/video/nwfumgwpq73g1/player
Checkout the [full newsletter](https://thelivingedge.substack.com/p/multimodal-monday-33-physical-ai?r=12l7fk) for more demos, papers, and resources.
[](https://www.reddit.com/submit/?source_id=t3_1p5hq0g) | 2025-11-24T14:14:17 | https://www.reddit.com/r/LocalLLaMA/comments/1p5i4dz/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5i4dz | false | null | t3_1p5i4dz | /r/LocalLLaMA/comments/1p5i4dz/last_week_in_multimodal_ai_local_edition/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'CayXH4WCl2Z2-U54jDMovfWBN3_44MX4GcqXEoo-xsM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CayXH4WCl2Z2-U54jDMovfWBN3_44MX4GcqXEoo-xsM.png?width=108&crop=smart&auto=webp&s=c0594e4ae2397afaaeae687b9a13281629f5ea08', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/CayXH4WCl2Z2-U54jDMovfWBN3_44MX4GcqXEoo-xsM.png?width=216&crop=smart&auto=webp&s=9880bfac9ded07ef6ee26f773c3c6a508681457e', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/CayXH4WCl2Z2-U54jDMovfWBN3_44MX4GcqXEoo-xsM.png?auto=webp&s=884dce98a8c44330f1dbfe39413c027595fac092', 'width': 300}, 'variants': {}}]} |
We just rebuilt Sesame AI voice for private and enterprise use cases | 0 |
Hello,
We are not affiliated with Sesame AI in any way. But we loved the voice AI so much that we decided to rebuild it.
It's been hard work but we've managed to get the voice, the speed, personality and the perosity down to be pretty similar to sesame's performance, but obviously using a different voice.
We've pieced together the right tuned TTS + STT + LLM to work together and building a enterprise version to run on private enterprise cloud.
Best of all it is running all on cheap low grade GPUs so now it's available for any business to implement in their private cloud!
Private version may also be out soon if requested.
I'd like to open this up for people to try and get some feedback.
Please note - ALL CALLS ARE RECORDED.
Please try it here hosted on a private server temporarily , the server my also be getting smashed from other redditors so pls be patient..
https://penally-water-anglea.ngrok-free.dev/
Please feel free to try and drop a comment on thoughts | 2025-11-24T14:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p5htwv/we_just_rebuilt_sesame_ai_voice_for_private_and/ | sync_co | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5htwv | false | null | t3_1p5htwv | /r/LocalLLaMA/comments/1p5htwv/we_just_rebuilt_sesame_ai_voice_for_private_and/ | false | false | self | 0 | null |
Looking for honest feedback on LoreTokens + SAIQL (semantic compression vs JSON / TOON / TONL / CSV) | 0 | I’ve been building something in the “LLM-native data” space for a while and I finally need other people to poke at it. Reddit is usually the best place to find out if you’re onto something or just imagining in your own head.
First, this is boring infra. It's not a shiny new wrapped model downloaded from huggingface that makes cool images or videos.
Very high level:
* **LoreTokens** – an AI-native semantic compression format
* **SAIQL** – a query/database engine designed to run on top of LoreTokens
The goal is to stop shoving huge JSON blobs into LLMs, but to do it at the *semantic* layer, not just by changing brackets.
# How I see the current landscape
Happy to be corrected on any of this - this is my working mental model:
* **CSV**
* Great for simple tables and quick imports.
* Falls apart once you need nested structure, evolving schemas, or more expressive semantics.
* **JSON**
* Great for humans, tooling, and general-purpose APIs.
* For LLMs, it’s expensive: repeated keys, quotes, braces, deep nesting. Models keep re-reading structure instead of meaning.
* **TOON / TONL**
* Both are real improvements over raw JSON.
* They reduce repeated keys, punctuation, and boilerplate.
* They’re “LLM-friendlier JSON” and can save a lot of tokens, especially for uniform arrays.
* They also have plenty of their own issues, especially when nesting.
Where I’m starting to worry a bit is the **compression arms race** around syntax:
everyone is trying to shave off more characters and tokens, and some of the newer patterns are getting so dense that the model has to guess what the fields actually *mean*. At that point you trade JSON bloat for semantic drift and crazy edge cases.
# Where LoreTokens are different
LoreTokens aim to compress *meaning*, not just syntax.
Each LoreToken line is designed to encode things like:
* domain (medical, trading, profile, logs, etc.)
* concept (symptoms, order book, skills, events, etc.)
* subject / entity
* output shape (record, table, explanation, timeline, etc.)
* status / flags
you send a short semantic line that tells the model *what* this is and *how* it should be expanded. Modern LLMs already like regular, symbolic patterns, so they tend to recognize and work with LoreToken-style lines very naturally once they’ve seen a few examples.
Here is the same question asked to several models to compare Toon vs LoreToken
[Asking Claude](https://claude.ai/share/5d420c55-686e-4750-9859-0fd1a234f1ca) \- [Asking ChatGPT](https://chatgpt.com/share/69236d1e-57c4-800c-be15-2ee4e3617d9a) \- [Asking Gemini](https://gemini.google.com/share/0b8951d805a6) \- [Asking Grok](https://grok.com/share/bGVnYWN5LWNvcHk_349b3941-4fde-4855-9a45-961b8e395b07) \- [Asking Deepseek](https://chat.deepseek.com/share/gugoduliijsglrows1)
* **ChatGPT, Claude, DeepSeek, Gemini, and Grok all independently picked LoreTokens**. Their reasoning converged on the same three points:
* Fewer tokens overall (20–60% reductions were typical in their estimates).
* Zero or near-zero per-row schema cost, because the LoreToken pattern *is* the schema.
* More direct semantic mapping once the spec is learned, since each segment (MED, NEURO, etc.) behaves like a stable coordinate in the model’s internal space, not just a human label.
Gemini was the only one that partially defended TOON (slightly easier initial mapping thanks to named fields, which I admit is true), but even it concluded LoreTokens are the better choice for large-scale workloads.
In practice, I’m seeing two effects:
* Big reductions in tokens / storage (roughly 60–70% in my own workloads)
* Less “mystery behavior,” because the semantics stay explicit instead of being stripped away for the sake of a smaller character count
* LoreTokens don’t fully eliminate hallucinations; but they do they box them in. They make the model’s job more constrained, the semantics more explicit, and the errors easier to detect – which usually means fewer, smaller, and more auditable hallucinations, not magic zero. (sorry everyone, I'm trying lol - we all are)
I’m not claiming it’s magic – I’m just trying to keep compression on the safe side where the model doesn’t have to guess (and hallucinate).
Also to note: **Only LoreTokens seem to do this**: they act as a lossy-syntax, *lossless-semantics* compressor, forcing the LLM into semantic manifold regeneration instead of dumb text reconstruction - a true **semantic clean room**, where the model rebuilds the *intended meaning* in its optimal form instead of replaying our messy human draft. See this paper for extended details > [Emergent\_Property\_Technical\_Paper](https://github.com/apolloraines/LoreTokens/blob/main/technical_releases/Loretoken_Emergent_Property_Technical_Paper.md) \- (which I expect 10% will open it, 2% will finish it, 0.5% will actually *grok* it.)
# How SAIQL fits in
**SAIQL** is the engine piece:
* An AI-native query language and DB that can store and operate directly on LoreTokens (and/or more traditional structures).
* Think “Postgres + JSON + glue” replaced with a lighter-weight engine that understands the semantic lines it’s storing.
Main use cases I’m targeting:
* Agent memory and state
* Long-term knowledge for LLM systems
* Workloads where people are currently paying a lot to stream JSON and vectors back and forth
# What I’m asking from Reddit
I’m not here to sell anything. I haven’t even started talking to investors yet - I’m a deep technical guy trying to sanity-check his own work.
I’d really appreciate if folks here could:
* Tell me if this solves a real pain you have, or if I’m reinventing the wheel badly
* Point out where LoreTokens fall apart (RAG, fine-tuning, multi-agent setups, etc.)
* Compare this honestly to TOON / TONL: is semantic encoding worth it, or is “compressed JSON” already good enough for you?
And for anyone who has the time/interest, it would be incredibly helpful if you could:
* Clone the repos
* Run the examples
* See how it behaves on your own data or agent workloads
# Repos
If you want to dig in:
* **LoreTokens** (semantic compression format, symbol sets, examples) [https://github.com/apolloraines/LoreTokens](https://github.com/apolloraines/LoreTokens?utm_source=chatgpt.com)
* **SAIQL Engine** (AI-native query / DB layer that can run on LoreTokens) [https://github.com/apolloraines/SAIQL-Engine\_v0.2.1](https://github.com/apolloraines/SAIQL-Engine_v0.2.1?utm_source=chatgpt.com)
I got my balls busted on here before over LoreTokens. Maybe I didn’t explain it well (better this time?), or maybe the cost of JSON just wasn’t on people’s radar yet. (I can be appreciative of TOON for bringing more awareness to that at least.) I’m hoping this round goes a lot better 🙂
I really do appreciate any help. Thanks in advance. In the meantime, I’ll get my bandages ready in case I need to patch up a few new wounds lol. I’m here for honest, technical feedback – including “this is overcomplicated, here’s a simpler way.”
Small disclaimer: I had an LLM help me write this post (well, chunks of it, easy to see). I know what I’m building, but I’m not great at explaining it, so I let the AI translate my thoughts into clearer English, helping turn my brain-dump into something readable.
Related note: we also designed the Open Lore License (OLL) to give small teams a way to use and share tech like LoreTokens/SAIQL while still helping protect it from being quietly swallowed up by BigCo. I put together a simple builder at [https://openlorelicense.com/](https://openlorelicense.com/) so you can generate your own version if you like the idea. | 2025-11-24T13:49:46 | https://www.reddit.com/r/LocalLLaMA/comments/1p5hjft/looking_for_honest_feedback_on_loretokens_saiql/ | barrphite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5hjft | false | null | t3_1p5hjft | /r/LocalLLaMA/comments/1p5hjft/looking_for_honest_feedback_on_loretokens_saiql/ | false | false | self | 0 | null |
Local host model Like DeepSeek without gpu | 0 | How can I local host a model like DeepSeek without gpu. Gpu is very expensive ? And it use too much electricity. Their are alternative of gpu or any ai chip etc that I can use? | 2025-11-24T13:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1p5hfk9/local_host_model_like_deepseek_without_gpu/ | Cautious_Respond4713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5hfk9 | false | null | t3_1p5hfk9 | /r/LocalLLaMA/comments/1p5hfk9/local_host_model_like_deepseek_without_gpu/ | false | false | self | 0 | null |
Do Gemma 3 support toon format? | 0 | Have anyone evaluated if gemma-3-27b-it prefers json or toon as input? Do models have to be trained on toon format to understand toon format?
[https://github.com/toon-format/toon](https://github.com/toon-format/toon) | 2025-11-24T13:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/1p5hfe8/do_gemma_3_support_toon_format/ | arstarsta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5hfe8 | false | null | t3_1p5hfe8 | /r/LocalLLaMA/comments/1p5hfe8/do_gemma_3_support_toon_format/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'VUY7BpDCaoJDTDtPypvEsvQ0hFrfgUftf4pRvqaomAs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VUY7BpDCaoJDTDtPypvEsvQ0hFrfgUftf4pRvqaomAs.png?width=108&crop=smart&auto=webp&s=e02034b72680f3a3c6c0fa42a02f0b4f811319d1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VUY7BpDCaoJDTDtPypvEsvQ0hFrfgUftf4pRvqaomAs.png?width=216&crop=smart&auto=webp&s=8c119691795b3a856e37da3ef8e124b8d807ad6e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VUY7BpDCaoJDTDtPypvEsvQ0hFrfgUftf4pRvqaomAs.png?width=320&crop=smart&auto=webp&s=d4ac30b20b286dc8eb8ec16abdea5f44e584c7cb', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VUY7BpDCaoJDTDtPypvEsvQ0hFrfgUftf4pRvqaomAs.png?width=640&crop=smart&auto=webp&s=c199bcb368ca423e945ca37ba134de01b4e38610', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VUY7BpDCaoJDTDtPypvEsvQ0hFrfgUftf4pRvqaomAs.png?width=960&crop=smart&auto=webp&s=68d0865a342ff22f03e6a69d479649540705d6cc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VUY7BpDCaoJDTDtPypvEsvQ0hFrfgUftf4pRvqaomAs.png?width=1080&crop=smart&auto=webp&s=ce81c0fa50359d350fbcca7040b7d5a52d93be4e', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/VUY7BpDCaoJDTDtPypvEsvQ0hFrfgUftf4pRvqaomAs.png?auto=webp&s=f885194b7cf2ad53488c0e13232b10d7946728f8', 'width': 2400}, 'variants': {}}]} |
Planning Multi-RTX 5060 Ti Local LLM Workstation (TRX40 / 32–64GB VRAM) | 1 | **TL;DR:**
Building my first multi-GPU workstation for running local LLMs (30B+ models) and RAG on personal datasets. Starting with 2× RTX 5060 Ti (16GB) on a used TRX40 Threadripper setup, planning to eventually scale to 4 GPUs. Looking for real-world advice on PCIe stability, multi-GPU thermals, case fitment, PSU headroom, and any TRX40 quirks.
Hey all,
I’m putting together a workstation mainly for local LLM inference and RAG on personal datasets. I’m leaning toward a used TRX40 platform because of its PCIe lanes, which should help avoid bottlenecks you sometimes see on more mainstream boards. I’m fairly new to PC building, so I might be overthinking some things—but experimenting with local LLMs looks really fun.
**Goals:**
* Run \~30B parameter models, or multiple smaller models in parallel (e.g., GPT OSS 20B) on personal datasets.
* Pool VRAM across GPUs (starting with 32GB, aiming for 64GB eventually).
* Scale to 3–4 GPUs later without major headaches.
**Current Build Plan (I/O-focused):**
* CPU: Threadripper 3960X (used)
* Motherboard: MSI TRX40 PRO 10G (used)
* GPUs (initial): 2× Palit RTX 5060 Ti 16GB
* RAM: 64GB DDR4-3200 CL22 (4×16GB)
* PSU: 1200W 80+ Platinum (ATX 3.1)
**Questions for anyone with TRX40 multi-GPU experience:**
**TRX40 quirks / platform issues**
* BIOS / PCIe: Any issues on the MSI TRX40 PRO 10G that prevent 3-4 GPU slots from running at full x16 PCIe 4.0?
* RAM stability: Any compatibility or quad-channel stability issues with CL22 kits?
* Multi-GPU surprises: Any unexpected headaches when building a multi-GPU inference box?
**Case / cooling**
* Open vs closed cases: What works best for multi-GPU setups?
**Power supply / spikes**
* Will a 1200W Platinum PSU handle 4× RTX 5060 Ti plus a Threadripper 3960X (280W)?
* Any issues with transient spikes under heavy LLM workloads?
Basically, I’m just trying to catch any pitfalls or design mistakes before investing in this set up. I’d love to hear what worked, what didn’t, and any lessons learned from your own multi-GPU/TRX40 builds.
Thanks in advance! | 2025-11-24T13:35:38 | https://www.reddit.com/r/LocalLLaMA/comments/1p5h7xh/planning_multirtx_5060_ti_local_llm_workstation/ | Special-Art-9369 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5h7xh | false | null | t3_1p5h7xh | /r/LocalLLaMA/comments/1p5h7xh/planning_multirtx_5060_ti_local_llm_workstation/ | false | false | self | 1 | null |
Looking for base language models where no finetuning has been applied | 1 | I'm looking for language models that are *pure* next-token predictors, i.e. the LM has not undergone a subsequent alignment/instruction finetuning/preference finetuning stage after being trained at the basic next word prediction task. Obviously these models would be highly prone to hallucinations, misunderstanding user intent, etc but that does not matter.
Please note that I'm not merely asking for LMs that 'have the least amount of censorship' or 'models you can easily uncensor with X prompt', I'm strictly looking for LMs where absolutely no post-training processing has been applied. Accuracy or intelligence of the model is not at issue here (in fact I would prefer lighter models) | 2025-11-24T13:31:17 | https://www.reddit.com/r/LocalLLaMA/comments/1p5h4eb/looking_for_base_language_models_where_no/ | DaTaha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5h4eb | false | null | t3_1p5h4eb | /r/LocalLLaMA/comments/1p5h4eb/looking_for_base_language_models_where_no/ | false | false | self | 1 | null |
This app lets you use your phone as a local server and access all your local models in your other devices | 0 | So, I've been working on this app for so long - originally it was launched on Android about 8 months ago, but now I finally got it to iOS as well.
It can run language models locally like any other local LLM app + it lets you access those models remotely in your local network through REST API making your phone act as a local server.
Plus, it has Apple Foundation model support, local RAG based file upload support, support for remote models - and a lot more features - more than any other local LLM app on Android & iOS.
Everything is free & open-source: [https://github.com/sbhjt-gr/inferra](https://github.com/sbhjt-gr/inferra)
Currently it uses llama.cpp, but I'm actively working on integrating MLX and MediaPipe (of AI Edge Gallery) as well.
Looks a bit like self-promotion but LocalLLaMA & LocalLLM were the only communities I found where people would find such stuff relevant and would actually want to use it. Let me know what you think. :) | 2025-11-24T13:29:15 | https://v.redd.it/tmy3njdri73g1 | Ya_SG | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5h2s7 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tmy3njdri73g1/DASHPlaylist.mpd?a=1766582967%2CMTdiYjkyNmRmMzdmNmZkOTg3YTUzMTg0ODA1ZmExNzAzNzlkMWU1NWJkZDAyZjY0Nzc3MGYwYmQ5MmNlYjM4OA%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/tmy3njdri73g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/tmy3njdri73g1/HLSPlaylist.m3u8?a=1766582967%2CZTFlMmI0Zjg2Yzc3OWMzODNiZmZhZWIwZDI1OGUwYTBmZGIzNDY5ZjZhYTY0MjE4OGFjYjVhMzBjNWY5ZWZjYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tmy3njdri73g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1440}} | t3_1p5h2s7 | /r/LocalLLaMA/comments/1p5h2s7/this_app_lets_you_use_your_phone_as_a_local/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cjI0ZjVlZXJpNzNnMcSzsRbD0UIi6J_Onz1fdRwNZdw9hDAxtV2HZ8rdL5SD', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/cjI0ZjVlZXJpNzNnMcSzsRbD0UIi6J_Onz1fdRwNZdw9hDAxtV2HZ8rdL5SD.png?width=108&crop=smart&format=pjpg&auto=webp&s=20b55d430a5bedd0635aa1be0c4da272e36386d8', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/cjI0ZjVlZXJpNzNnMcSzsRbD0UIi6J_Onz1fdRwNZdw9hDAxtV2HZ8rdL5SD.png?width=216&crop=smart&format=pjpg&auto=webp&s=0c2a96843b6bc0e1a34aea6b1eaeece0d4227177', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/cjI0ZjVlZXJpNzNnMcSzsRbD0UIi6J_Onz1fdRwNZdw9hDAxtV2HZ8rdL5SD.png?width=320&crop=smart&format=pjpg&auto=webp&s=5022cba7effe515cafe6892ad697067bef5dd158', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/cjI0ZjVlZXJpNzNnMcSzsRbD0UIi6J_Onz1fdRwNZdw9hDAxtV2HZ8rdL5SD.png?width=640&crop=smart&format=pjpg&auto=webp&s=d03b1c2f8763df9809529f93bbe4edadd156bed6', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/cjI0ZjVlZXJpNzNnMcSzsRbD0UIi6J_Onz1fdRwNZdw9hDAxtV2HZ8rdL5SD.png?width=960&crop=smart&format=pjpg&auto=webp&s=8b919fbe1f661b280a32dc8847719c33bccc6ca6', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/cjI0ZjVlZXJpNzNnMcSzsRbD0UIi6J_Onz1fdRwNZdw9hDAxtV2HZ8rdL5SD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1c387451718a934f453c7a75e452aa257e483048', 'width': 1080}], 'source': {'height': 810, 'url': 'https://external-preview.redd.it/cjI0ZjVlZXJpNzNnMcSzsRbD0UIi6J_Onz1fdRwNZdw9hDAxtV2HZ8rdL5SD.png?format=pjpg&auto=webp&s=d612b18f9899b545e489e37fab5721c23bb9b64e', 'width': 1080}, 'variants': {}}]} | |
Turning logs into insights: open-source project inside | 0 | Hey folks 👋
I built a small open-source project called **AiLogX-Backend** and would love feedback from anyone into logging, observability, or AI-powered dev tools.
🔧 **What it does:**
* Structured, LLM-friendly JSON logging
* Smart log summarization + filtering
* “Chat with your logs” style Q&A
* Early **log-to-fix** pipeline (find likely buggy code + suggest patches)
Basically, it turns messy logs into something you can actually reason about.
If this sounds interesting, check it out here:
👉 **GitHub:** [https://github.com/kunwar-vikrant/AiLogX-Backend](https://github.com/kunwar-vikrant/AiLogX-Backend?utm_source=chatgpt.com)
Would love thoughts, ideas, or contributions! | 2025-11-24T13:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/1p5gkvo/turning_logs_into_insights_opensource_project/ | Significant_Sun_7122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5gkvo | false | null | t3_1p5gkvo | /r/LocalLLaMA/comments/1p5gkvo/turning_logs_into_insights_opensource_project/ | false | false | self | 0 | null |
Tech bros help me out with this error please. | 0 | I am using Gemini pro on a site called, Chub ai. It has a specific slot for Google and I put my API there and this is the error I get. I looked around and found that the issue might be that Chub is failing to convert Gemini's reply into openai, format or something.
Please, help me out. | 2025-11-24T13:00:02 | Hour_Jackfruit6917 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5gf1e | false | null | t3_1p5gf1e | /r/LocalLLaMA/comments/1p5gf1e/tech_bros_help_me_out_with_this_error_please/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'd5egt5xjd73g1', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/d5egt5xjd73g1.jpeg?width=108&crop=smart&auto=webp&s=c47ab455c8d9d06a842dfe33fd6c6929cf254522', 'width': 108}, {'height': 86, 'url': 'https://preview.redd.it/d5egt5xjd73g1.jpeg?width=216&crop=smart&auto=webp&s=fa7323fd415507ecbb1bdb21297af4510987d141', 'width': 216}, {'height': 128, 'url': 'https://preview.redd.it/d5egt5xjd73g1.jpeg?width=320&crop=smart&auto=webp&s=8d717aca3ef71eb8acf6f3d7d988a34572416b32', 'width': 320}, {'height': 257, 'url': 'https://preview.redd.it/d5egt5xjd73g1.jpeg?width=640&crop=smart&auto=webp&s=38e0cb47f55139b1fe2b1b15553bb71de7300807', 'width': 640}, {'height': 386, 'url': 'https://preview.redd.it/d5egt5xjd73g1.jpeg?width=960&crop=smart&auto=webp&s=5e213452ae3e56b4a8e2900bbcb0b31f998117ca', 'width': 960}, {'height': 434, 'url': 'https://preview.redd.it/d5egt5xjd73g1.jpeg?width=1080&crop=smart&auto=webp&s=6412a7bf74c297c35af5a48384c66ebf21638862', 'width': 1080}], 'source': {'height': 483, 'url': 'https://preview.redd.it/d5egt5xjd73g1.jpeg?auto=webp&s=0c4bf878b7325383753f2fb605315045559bf862', 'width': 1200}, 'variants': {}}]} | |
How I’m Building Declarative, Shareable AI Agents With Docker cagent | 0 | A lot of technical teams that I meet want AI agents, but very few want a pile of Python scripts with random tools bolted on.
Docker dropped something that fixes more of this than I thought: cagent, an open source, a clean, declarative way to build and run agents.
The core idea sits in one YAML file.
You define the model, system prompt, tools, and chat loop in one place.
No glue code or hidden side effects.
You can:
• Run it locally with local AI models using Docker Model Runner
• Add MCP servers for context-aware docs lookup, FS ops, shell, to-do workflows, and a built-in reasoning toolset
Multi-agent setups are where it gets fun. You compose sub-agents and call them as tools, which makes orchestration clean instead of hacky. When you’re happy with it, push the whole thing as an OCI artifact to Docker Hub so anyone can pull and run the same agent.
The bootstrapping flow was the wild part for me. You type a prompt, and the agent generates another agent, wires it up, and drops it ready to run. Zero friction.
If you want to try it, the binaries are on GitHub Releases for Linux, macOS, and Windows. I’ve also made a [detailed video](https://www.youtube.com/watch?v=IjWTSh_JZwM) on this.
I would love to know your thoughts on this. | 2025-11-24T12:59:31 | https://www.reddit.com/r/LocalLLaMA/comments/1p5gend/how_im_building_declarative_shareable_ai_agents/ | Creepy-Row970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5gend | false | null | t3_1p5gend | /r/LocalLLaMA/comments/1p5gend/how_im_building_declarative_shareable_ai_agents/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'J4Uni4kPe-NkRrClOpzAREiRojMRa5BRckvuK9rsXWY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/J4Uni4kPe-NkRrClOpzAREiRojMRa5BRckvuK9rsXWY.jpeg?width=108&crop=smart&auto=webp&s=9ba2b5eb862fc8467f247d2349c676daadd56a50', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/J4Uni4kPe-NkRrClOpzAREiRojMRa5BRckvuK9rsXWY.jpeg?width=216&crop=smart&auto=webp&s=99f051d0bc92278a4b80f1803540d4bfb7727d07', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/J4Uni4kPe-NkRrClOpzAREiRojMRa5BRckvuK9rsXWY.jpeg?width=320&crop=smart&auto=webp&s=50c2f6aea92a47cfe7e57540cc795dcfbcf8a06e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/J4Uni4kPe-NkRrClOpzAREiRojMRa5BRckvuK9rsXWY.jpeg?auto=webp&s=195aa01536901bf4a734ef05cd58b7ddb3336535', 'width': 480}, 'variants': {}}]} |
Yech bros help me out with this error please. | 1 | I am using Gemini pro on a site called, Chub ai. It has a specific slot for Google and I put my API there and this is the error I get. I looked around and found that the issue might be that Chub is failing to convert Gemini's reply into openai, format or something.
Please, help me out. | 2025-11-24T12:58:20 | Hour_Jackfruit6917 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5gdqk | false | null | t3_1p5gdqk | /r/LocalLLaMA/comments/1p5gdqk/yech_bros_help_me_out_with_this_error_please/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 't6h2wuz8d73g1', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/t6h2wuz8d73g1.jpeg?width=108&crop=smart&auto=webp&s=0ada88c151f21e3fd863d49256e45b589f04cb16', 'width': 108}, {'height': 86, 'url': 'https://preview.redd.it/t6h2wuz8d73g1.jpeg?width=216&crop=smart&auto=webp&s=feceab024513ef90b128e85d113e4c00270aedb4', 'width': 216}, {'height': 128, 'url': 'https://preview.redd.it/t6h2wuz8d73g1.jpeg?width=320&crop=smart&auto=webp&s=a45eef31112d2874580fddffe34d7afbc6bc338f', 'width': 320}, {'height': 257, 'url': 'https://preview.redd.it/t6h2wuz8d73g1.jpeg?width=640&crop=smart&auto=webp&s=f51faae01d30e299c2d7ce548962ac2e7d0c0830', 'width': 640}, {'height': 386, 'url': 'https://preview.redd.it/t6h2wuz8d73g1.jpeg?width=960&crop=smart&auto=webp&s=38a8ac345bb8fac285b86995232f624397877ba5', 'width': 960}, {'height': 434, 'url': 'https://preview.redd.it/t6h2wuz8d73g1.jpeg?width=1080&crop=smart&auto=webp&s=aff9a9bd166ded0a622d93d5fc5d083a53bbdc25', 'width': 1080}], 'source': {'height': 483, 'url': 'https://preview.redd.it/t6h2wuz8d73g1.jpeg?auto=webp&s=1ecba19710785b59c291719d90774d6cd31d162c', 'width': 1200}, 'variants': {}}]} | |
Gemini 3 Pro Thinking vs GPT-5.1 Thinking | 0 | Hey everyone,
I'm a developer and I often have a task to research libraries and version compatibility related things online. For that I often used GPT-5.1 with Extended Thinking + search, and it works very cool to be honest, I rarely saw anything related to hallucination or irrelevant search results.
With all of hype and coolness of Gemini 3 Pro, I'm seriously considering switching to it, however I'd like to ask you guys, what do you think about how capable Gemini 3 Pro is in searching internet. For me the main thing is accuracy of the search and relevance to my query not the speed. Also, Gemini 3 Pro doesn't seem to have any search button which I found interesting, does it in 1 way or another makes its search capability worse in comparison to GPT 5.1? | 2025-11-24T12:48:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p5g6fi/gemini_3_pro_thinking_vs_gpt51_thinking/ | Extra-Designer9333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5g6fi | false | null | t3_1p5g6fi | /r/LocalLLaMA/comments/1p5g6fi/gemini_3_pro_thinking_vs_gpt51_thinking/ | false | false | self | 0 | null |
[Release] Hypnos i1-8B: I fine-tuned Hermes 3 on REAL IBM Quantum Computer data (133-qubit GHZ states). Beats Llama-70B in Logic. | 110 | Hey r/LocalLLaMA! 👋
Its my first post here, and I’m excited to share a weird experiment I have been working on. I wanted to see what happens if we inject **true physical entropy** from a quantum processor into the SFT stage of an LLM.
So, I got access to IBM Quantum's latest chips (**Heron r2** & **Heron r1**, 133+ qubits) and ran some entanglement experiments (GHZ state). I took the raw measurement data — which contains true quantum randomness and hardware noise — and mixed it into a high-quality reasoning dataset. Meet Hypnos i1-8B!
Results (Benchmarks vs Llama 3.1 Base)
The reasoning capabilities jumped significantly due to the dataset mix:
* **Logic (BBH):** **\~68.5%** (Beats base Llama-3-70B in specific logic tasks).
* **Math (MATH):** **\~60%+** (Huge improvement over base).
* **Instruction Following:** **\~85%** (Very obedient).
Why Quantum Data?
LLMs tend to suffer from mode collapse or become too "robotic" after heavy fine-tuning. My hypothesis was that injecting real-world quantum noise would act as a form of **Data-Driven Stochastic Regularization**, giving the model a unique "temperature" and preventing it from overfitting to synthetic reasoning patterns.
I've uploaded Q4\_K\_M and Q8\_0 quants.
Check this out on Ollama or LM Studio!
[https://huggingface.co/squ11z1/Hypnos-i1-8B](https://huggingface.co/squ11z1/Hypnos-i1-8B) or `ollama run squ11z1/hypnos-i1-8B` | 2025-11-24T12:28:42 | https://www.reddit.com/r/LocalLLaMA/comments/1p5frn9/release_hypnos_i18b_i_finetuned_hermes_3_on_real/ | Disastrous_Bid5976 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5frn9 | false | null | t3_1p5frn9 | /r/LocalLLaMA/comments/1p5frn9/release_hypnos_i18b_i_finetuned_hermes_3_on_real/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': 'v2YrhZhQ17zrwxryV4JHuyrvkinEGp-xMbP-DJwmJko', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/v2YrhZhQ17zrwxryV4JHuyrvkinEGp-xMbP-DJwmJko.png?width=108&crop=smart&auto=webp&s=a0fdcf8d9982615b89f4b7db13e5522c97258909', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/v2YrhZhQ17zrwxryV4JHuyrvkinEGp-xMbP-DJwmJko.png?width=216&crop=smart&auto=webp&s=ce7b404f7f1daf8e7036ff0ef24bf556f378d9bd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/v2YrhZhQ17zrwxryV4JHuyrvkinEGp-xMbP-DJwmJko.png?width=320&crop=smart&auto=webp&s=53d9d908c9a15408ff7360b3e561e08dc9274584', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/v2YrhZhQ17zrwxryV4JHuyrvkinEGp-xMbP-DJwmJko.png?width=640&crop=smart&auto=webp&s=3266c27dcc86340159fb847880daeca4cfb1b665', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/v2YrhZhQ17zrwxryV4JHuyrvkinEGp-xMbP-DJwmJko.png?width=960&crop=smart&auto=webp&s=3afcd26eed33056ea80679587caee4e820eb1281', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/v2YrhZhQ17zrwxryV4JHuyrvkinEGp-xMbP-DJwmJko.png?width=1080&crop=smart&auto=webp&s=1b4e205c2d8f428fa1fcdcd8954872b85ee1cadb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/v2YrhZhQ17zrwxryV4JHuyrvkinEGp-xMbP-DJwmJko.png?auto=webp&s=6b65b45f0b32e5d66a5e77f29a49ccf7234f4646', 'width': 1200}, 'variants': {}}]} |
Open source chalkie | 0 | Anyone know of an open source alternative to chalkie ai?
https://chalkie.ai | 2025-11-24T12:18:05 | https://www.reddit.com/r/LocalLLaMA/comments/1p5fk3l/open_source_chalkie/ | ihaag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5fk3l | false | null | t3_1p5fk3l | /r/LocalLLaMA/comments/1p5fk3l/open_source_chalkie/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'K_jQVzwX1RW95XswwqZEUEG-myN8FCNJmkSiJnpivCs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/K_jQVzwX1RW95XswwqZEUEG-myN8FCNJmkSiJnpivCs.png?width=108&crop=smart&auto=webp&s=ec6dc10a751d240ee7ac21e3287d29b796e92539', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/K_jQVzwX1RW95XswwqZEUEG-myN8FCNJmkSiJnpivCs.png?width=216&crop=smart&auto=webp&s=a617ec27437b562b1941fb5c50ee120edf2d993c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/K_jQVzwX1RW95XswwqZEUEG-myN8FCNJmkSiJnpivCs.png?width=320&crop=smart&auto=webp&s=2aaa002721c9a955a57c45c83cabf07054e9f999', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/K_jQVzwX1RW95XswwqZEUEG-myN8FCNJmkSiJnpivCs.png?width=640&crop=smart&auto=webp&s=75218393bbfb34a9c1dfb50b24ef991d3d08c5ee', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/K_jQVzwX1RW95XswwqZEUEG-myN8FCNJmkSiJnpivCs.png?width=960&crop=smart&auto=webp&s=a1bb6a3e9599774a0cba85e51cafa174bdaa123a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/K_jQVzwX1RW95XswwqZEUEG-myN8FCNJmkSiJnpivCs.png?width=1080&crop=smart&auto=webp&s=3e2160ff6f23c6d5e7ecd8eed9618c777f97b24c', 'width': 1080}], 'source': {'height': 945, 'url': 'https://external-preview.redd.it/K_jQVzwX1RW95XswwqZEUEG-myN8FCNJmkSiJnpivCs.png?auto=webp&s=f54161ac47f97788d99b473e4625457ee8855fa6', 'width': 1800}, 'variants': {}}]} |
Kimi: Wait... I beat Gemini 3? For real? | 210 | 2025-11-24T12:09:46 | https://www.reddit.com/r/LocalLLaMA/comments/1p5fe9u/kimi_wait_i_beat_gemini_3_for_real/ | xiaoruhao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5fe9u | false | null | t3_1p5fe9u | /r/LocalLLaMA/comments/1p5fe9u/kimi_wait_i_beat_gemini_3_for_real/ | false | false | 210 | null | ||
Best open-source models alternative to openai realtime models or how to achieve ultra low latency to create a conversational agent | 24 | I am currently working on a real time voice agent and so far i've been using openai realtime models. Now i want to deploy opensource model instead of openai.
I want to knwo is there any opensource model that are similar to openai realtime models. like asr, llm ,tts in unified realtime arch.
if it is not there, how we can achieve minimal latency?
Thanks in advance | 2025-11-24T11:46:33 | https://www.reddit.com/r/LocalLLaMA/comments/1p5eyi6/best_opensource_models_alternative_to_openai/ | Ai_Peep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5eyi6 | false | null | t3_1p5eyi6 | /r/LocalLLaMA/comments/1p5eyi6/best_opensource_models_alternative_to_openai/ | false | false | self | 24 | null |
Which model to rewrite bad translations? | 0 | So, since there is no official audiobook for the light novel I'd like to listen to, I build myself a little pipeline to create my own audio files.
The translation of the novel, however, is quite horrendous, so right now I'm running the chapters through Qwen3-8B with a prompt to fix grammatical errors and bad translations while keeping everything else intact, before throwing it to the TTS.
I'm not too happy with the result, however. While it's certainly better than before, it's not great.
Do you have any recommendations for models I can run on my 3080 10GB that are better suited for fixing grammatical mistakes and bad translations, and maybe even fix sentence structure? | 2025-11-24T11:34:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p5eqy5/which_model_to_rewrite_bad_translations/ | 01Parzival10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5eqy5 | false | null | t3_1p5eqy5 | /r/LocalLLaMA/comments/1p5eqy5/which_model_to_rewrite_bad_translations/ | false | false | self | 0 | null |
The most objectively correct way to abliterate so far - ArliAI/GLM-4.5-Air-Derestricted | 325 | Hi everyone, this is Owen Arli from **[Arli AI](https://www.arliai.com)** and this is the first model release we created in a while.
After seeing the post by Jim Lai on Norm-Preserving Biprojected Abliteration [here] (https://www.reddit.com/r/LocalLLaMA/comments/1oypwa7/a_more_surgical_approach_to_abliteration/), I immediately thought that no one has done abliteration this way and that the "norm-preserving" part was a brilliant improvement in the method to abliterate models. You can find the full technical details in his post, but I will explain the gist of it here.
# The problem:
Typical abliteration methods finds the refusal vector and simply subtracts it from the weights, this causes the "length" (Norm) of the weight vectors to be altered. This is a problem because this "length" is usually dictates how "important" a neuron is and how much it contributes, so changing it will cause damage to the model's general intelligence.
# The solution:
This Norm-Preserving technique modifies the direction the weights point in, but forces them to keep their original length.
Essentially, by removing the refusal in this way you can potentially also improve the model's performance instead of diminishing it.
Trying out the [Gemma 3 12B] (https://huggingface.co/grimjim/gemma-3-12b-it-norm-preserved-biprojected-abliterated) model example, it clearly works extremely well compared to regular abliteration methods that often leaves the model broken until further finetuning. Which explains why the model ranks so high in the UGI leaderboard even though its base was Gemma 3 12B which is a notoriously censored model.
# The result:
Armed with a new 2xRTX Pro 6000 server I just built for Arli AI model experimentation, I set out to try and apply this abliteration technique to the much larger and smarter GLM-4.5-Air. Which ended up in what I think is undoubtedly one of the most interesting model I have ever used.
Its not that GLM-4.5-Air is usually plagued with refusals, but using this "Derestricted" version feels like the model suddenly becomes free to do anything it wants without trying to "align" to a non-existent guideline. It's hard to explain without trying it out yourself.
For example, I bet that those of you running models locally or through an API will definitely have tried to add a system prompt that says "You are a person and not an AI" or something along those lines. Usually even with such a system prompt and nothing in the context that suggests it is an AI, the model will stubbornly still insist that it is an AI and it is unable to do "human-like" things. With this model, just adding that prompt immediately allows the model to pretend to act like a human in its response. No hesitation or any coaxing needed.
The most impressive part about this abliteration technique is definitely the fact that it has somehow made the model a better instruction follower instead of just a braindead NSFW-capable model from typical abliteration. As for it's intelligence, it has not been benchmarked but I believe that just using the model and feeling it out to see if it has degraded in capabilities is better than just checking benchmarks. Which in this case, the model does feel like it is just as smart if not better than the original GLM-4.5-Air.
We will be working to create more of these Derestricted models, along with many new finetuned models too!
You can find the model available on our API, or you can download them yourself from the HF links below!
**Model downloads:**
- Original: https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted
- FP8: https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted-FP8
- INT8: https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted-W8A8-INT8 | 2025-11-24T11:32:45 | https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted | Arli_AI | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1p5epot | false | null | t3_1p5epot | /r/LocalLLaMA/comments/1p5epot/the_most_objectively_correct_way_to_abliterate_so/ | false | false | 325 | {'enabled': False, 'images': [{'id': '8n5MhbkzXEcl9NlvZMbb8GGre-k1VjQ0kDAKe7qQtQM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/8n5MhbkzXEcl9NlvZMbb8GGre-k1VjQ0kDAKe7qQtQM.png?width=108&crop=smart&auto=webp&s=be9e3b64df19446e657f5c0c371e7f673cf90f09', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/8n5MhbkzXEcl9NlvZMbb8GGre-k1VjQ0kDAKe7qQtQM.png?width=216&crop=smart&auto=webp&s=909735014d3ef150508750c5ebd729afe3018197', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/8n5MhbkzXEcl9NlvZMbb8GGre-k1VjQ0kDAKe7qQtQM.png?width=320&crop=smart&auto=webp&s=3f01ac42ede5554c32f8f615b2340d5cd4787b5f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/8n5MhbkzXEcl9NlvZMbb8GGre-k1VjQ0kDAKe7qQtQM.png?width=640&crop=smart&auto=webp&s=9334318d3d29cfd953050dfdf981bc10db9cc00b', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/8n5MhbkzXEcl9NlvZMbb8GGre-k1VjQ0kDAKe7qQtQM.png?width=960&crop=smart&auto=webp&s=043531a76a5022978186bbf04c5e077e2c6e1c35', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/8n5MhbkzXEcl9NlvZMbb8GGre-k1VjQ0kDAKe7qQtQM.png?width=1080&crop=smart&auto=webp&s=75dd4e66bdeb9ca1e849b22d0041dd970d4d1d3e', 'width': 1080}], 'source': {'height': 1307, 'url': 'https://external-preview.redd.it/8n5MhbkzXEcl9NlvZMbb8GGre-k1VjQ0kDAKe7qQtQM.png?auto=webp&s=df71667a1fd89baeffaad5afe48d165fab20138a', 'width': 1306}, 'variants': {}}]} | |
Speakr v0.5.9 update - Voice profile embeddings and better local model support | 56 | Quick update on Speakr for those who've been following along. Just released v0.5.9 with some changes that are particularly relevant for local setups.
For anyone who hasn't seen this before: Speakr is a self-hosted transcription app that works with Whisper + local LLMs. You record or upload audio, it transcribes with speaker diarization, then you can chat with the transcript or get summaries using whatever model you point it at. The app runs in Docker.
The big addition is voice profile support using speaker embeddings. If you're running my WhisperX API webservice (see below), it now extracts 256-dimensional voice embeddings during transcription. Once you've identified someone in a recording, the system recognizes their voice automatically in future recordings based on the embedding similarity.
Also added some collaboration features (internal sharing, teams, retention policies) if you're running this for multiple people. All configurable through environment variables.
I put together a [companion ASR webservice](https://github.com/murtaza-nasir/whisperx-asr-service) for this that runs WhisperX with the latest pyannote models. It's not meant to be production-grade, more of an experimental reference implementation, but it handles the diarization, time alignment, and embedding extraction. You can still use the standard Whisper ASR webservice if you don't need voice profiles.
The voice recognition uses cosine similarity matching against stored profiles and works pretty well in practice. I've been testing it and it's accurate enough that I rarely need to manually select speaker labels anymore. The embeddings are stored locally in your database, nothing leaves your system.
The upgrade path is straightforward but make sure to backup first since there are database schema changes. Everything's opt-in through env vars so your existing setup should not break.
[GitHub](https://github.com/murtaza-nasir/speakr) | [Docs](https://murtaza-nasir.github.io/speakr) | [Screenshots](https://murtaza-nasir.github.io/speakr/screenshots/) | [Docker Hub](https://hub.docker.com/r/learnedmachine/speakr)
Let me know if you hit any issues upgrading or have questions about the new features. | 2025-11-24T11:29:53 | https://www.reddit.com/gallery/1p5entm | hedonihilistic | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p5entm | false | null | t3_1p5entm | /r/LocalLLaMA/comments/1p5entm/speakr_v059_update_voice_profile_embeddings_and/ | false | false | 56 | null | |
Slow Token Speed in A100 80GB for Qwen3 4B | 0 | I am trying to use sglang and qwen3 awq version , but i am stuck at 200 tokens/second output speed. I though the tps would be much higher? Also, for a larger prompt, how do I quickly process it, so the input is processed quickly , e.g - 12000 token input?
This is the command I am running which gets me output of 200 token/sec | 2025-11-24T11:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p5eljt/slow_token_speed_in_a100_80gb_for_qwen3_4b/ | DesmonMiles07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5eljt | false | null | t3_1p5eljt | /r/LocalLLaMA/comments/1p5eljt/slow_token_speed_in_a100_80gb_for_qwen3_4b/ | false | false | self | 0 | null |
Sovereign MCP: Expose Local MCP Servers to Remote Clients via Cloudflare Tunnel | 1 | [removed] | 2025-11-24T11:22:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p5ej8m/sovereign_mcp_expose_local_mcp_servers_to_remote/ | Equivalent-Pause-233 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5ej8m | false | null | t3_1p5ej8m | /r/LocalLLaMA/comments/1p5ej8m/sovereign_mcp_expose_local_mcp_servers_to_remote/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JehuJ8FhEh3GYAwUrvGwx47Y3C4NibTd9qVVEK0syTI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JehuJ8FhEh3GYAwUrvGwx47Y3C4NibTd9qVVEK0syTI.png?width=108&crop=smart&auto=webp&s=47e0230d596c8c460ccc5b88cc501f6afd3fe551', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JehuJ8FhEh3GYAwUrvGwx47Y3C4NibTd9qVVEK0syTI.png?width=216&crop=smart&auto=webp&s=9cab2f7ff4b13a192c654598f1a9f3c75154a62b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JehuJ8FhEh3GYAwUrvGwx47Y3C4NibTd9qVVEK0syTI.png?width=320&crop=smart&auto=webp&s=bf41f809686fb2bd820369b49dc77f1f13c8c6b4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JehuJ8FhEh3GYAwUrvGwx47Y3C4NibTd9qVVEK0syTI.png?width=640&crop=smart&auto=webp&s=ec0152e92931dbed7df0354f456f6300319427ab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JehuJ8FhEh3GYAwUrvGwx47Y3C4NibTd9qVVEK0syTI.png?width=960&crop=smart&auto=webp&s=1d42037c4157244a3eaab4349ae5ac8a62bfa3ab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JehuJ8FhEh3GYAwUrvGwx47Y3C4NibTd9qVVEK0syTI.png?width=1080&crop=smart&auto=webp&s=99e6c377c214099f9b338526b4bb85816dde0aa9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JehuJ8FhEh3GYAwUrvGwx47Y3C4NibTd9qVVEK0syTI.png?auto=webp&s=130cf8b53ed387a649e50a184870c74bf483cfa6', 'width': 1200}, 'variants': {}}]} |
TIL, u can use openai-compatible endpoints now in VS Code Copilot. | 0 | It used to be only available for Ollama for some reason, but the Insider version does support now openai-compatible endpoints. I haven't seen anything related to this on the sub, so I thought some people may find it useful.
[https://code.visualstudio.com/docs/copilot/customization/language-models#\_add-an-openaicompatible-model](https://code.visualstudio.com/docs/copilot/customization/language-models#_add-an-openaicompatible-model) | 2025-11-24T11:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p5eekx/til_u_can_use_openaicompatible_endpoints_now_in/ | zipperlein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5eekx | false | null | t3_1p5eekx | /r/LocalLLaMA/comments/1p5eekx/til_u_can_use_openaicompatible_endpoints_now_in/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'eDHruNRGsbWz5BtBeW1uKOzhhCeUbxv3GGgmV2_Rgb0', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/eDHruNRGsbWz5BtBeW1uKOzhhCeUbxv3GGgmV2_Rgb0.png?width=108&crop=smart&auto=webp&s=6eae01f20208db1fc9ec85886018bd575619574e', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/eDHruNRGsbWz5BtBeW1uKOzhhCeUbxv3GGgmV2_Rgb0.png?width=216&crop=smart&auto=webp&s=459507bc0a23714bcd5f2158305a974b32e8dbe7', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/eDHruNRGsbWz5BtBeW1uKOzhhCeUbxv3GGgmV2_Rgb0.png?width=320&crop=smart&auto=webp&s=2f4fcbc44c1a8d9cef1d25f61291d9844961d08f', 'width': 320}, {'height': 319, 'url': 'https://external-preview.redd.it/eDHruNRGsbWz5BtBeW1uKOzhhCeUbxv3GGgmV2_Rgb0.png?width=640&crop=smart&auto=webp&s=9b9f348b4e41fda939864462d26933a65bb817ba', 'width': 640}, {'height': 479, 'url': 'https://external-preview.redd.it/eDHruNRGsbWz5BtBeW1uKOzhhCeUbxv3GGgmV2_Rgb0.png?width=960&crop=smart&auto=webp&s=1669d8c6916a0e7e7561704ad478c8d232ee2efc', 'width': 960}], 'source': {'height': 534, 'url': 'https://external-preview.redd.it/eDHruNRGsbWz5BtBeW1uKOzhhCeUbxv3GGgmV2_Rgb0.png?auto=webp&s=53bbd3f03951223dbd3956839ef14a1e02299ba0', 'width': 1069}, 'variants': {}}]} |
Benchmark: Self-Hosted Qwen-30B (LoRA) vs. Llama-3.1-8B vs. GPT-4.1-nano. Comparison of parsing success rates and negative constraints. | 0 | I recently migrated a production workload off Claude Sonnet 4 ($45/1k requests) to cut costs. I ran a three-way experiment to find the best replacement: Qwen3-Coder-30B (Self-hosted) vs. Llama-3.1-8B vs. GPT-4.1-nano.
I expected Qwen3-Coder-30B to win on quality. It didn't.
Here are the configs, the results, and where the open-source stacks fell short.
**The Task** Rewriting generic LeetCode problems into complex, JSON-structured engineering scenarios (Constraints, Role, Company Context).
* **Teacher Baseline:** Claude Sonnet 4 (Benchmark Score: 0.795).
**Experiment A: Qwen3-Coder-30B (Self-hosted on 2x H100s)**
* **Method:** LoRA
* **Config:** `r=16`, `alpha=32`, `dropout=0.0`, `target_modules=[q,k,v,o]`.
* **Hyperparams:** `lr=2e-4`, `batch_size=2` (Grad Accum 8).
* **Result:** **0.71/1.0 Quality Score**.
* **Failure Mode:** It struggled with **Negative Constraints** (e.g., "Do not add new function arguments"). Despite the 30B size, it hallucinated keys outside the schema more often than expected.
* **Cost:** \~$5.50/1k (amortized hosting).
**Experiment B: Llama-3.1-8B (Together.ai Serverless)** I wanted to see if a cheaper serverless LoRA could work.
* **Config:** Same LoRA (`r=16`, `alpha=32`), but `lr=1e-4`.
* **Result:** **0.68/1.0 Quality Score**.
* **Failure Mode:** **Parsing failed \~24% of the time**. The model seemed to suffer from "catastrophic forgetting" regarding strict JSON syntax. It frequently missed closing brackets or nested structures.
**Experiment C: GPT-4.1-nano (API Fine-Tune)**
* **Result:** **0.784/1.0 Quality Score** (96% of Teacher Fidelity).
* **Cost:** $1.30/1k requests.
* **Verdict:** It handled the schema perfectly (92.3% parsing success).
**My Takeaway / Question for the Community:** I was surprised that Qwen3-Coder-30B couldn't beat the GPT-4.1-nano (a smaller model) on instruction adherence.
1. **Rank Issue?** I used`r=16`as a standard starting point. Has anyone found that increasing rank to 64+ significantly helps 30B models with *negative constraints*?
2. **Base Model:** Is Qwen3-Coder perhaps too biased towards "code completion" vs "structured instruction following"?
I've documented the full data filtering strategy (I threw away 12.7% of the synthetic data) and the evaluation matrix in my engineering note if you want to dig into the methodology: \[Link in comments\] | 2025-11-24T11:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p5e7mv/benchmark_selfhosted_qwen30b_lora_vs_llama318b_vs/ | Emergency-Cobbler137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5e7mv | false | null | t3_1p5e7mv | /r/LocalLLaMA/comments/1p5e7mv/benchmark_selfhosted_qwen30b_lora_vs_llama318b_vs/ | false | false | self | 0 | null |
[WARNING/SCAM?] GMKtec EVO-X2 (Strix Halo) - Crippled Performance (~117 GB/s) & Deleted Marketing Claims | 0 | Hi everyone,
I recently acquired the **GMKtec NucBox EVO-X2** featuring the new **AMD Ryzen AI Max+ 395 (Strix Halo)**. I purchased this device specifically for local LLM inference, relying on the massive bandwidth advantage of the Strix Halo platform (256-bit bus, Unified Memory).
**TL;DR:** The hardware is severely throttled (performing at \~25% capacity), the manufacturer is deleting marketing claims about "Ultimate AI performance", and the purchasing/return process for EU customers is a nightmare.
**1. The "Bait": False Advertising & Deleted Pages**
GMKtec promoted this device as the **"Ultimate AI Mini PC"**, explicitly promising high-speed Unified Memory and top-tier AI performance.
* **Original Source:** https://de.gmktec.com/pl/blogs/news/high-end-modell-amd-ryzen-ai-max-395-im-gmk-evo-x2-der-ultimative-ai-mini-pc\[[1](https://www.google.com/url?sa=E&q=https%3A%2F%2Fvertexaisearch.cloud.google.com%2Fgrounding-api-redirect%2FAUZIYQHM9vwUYBdEPf0WtTt97ECvxNCUxgPSTF-brO7lRsaBbe-fAmTziN9NItEwlYcmPU_mEmMhkGHr13hTbo0C6ckAsq75BouCEG9LJqyogaQr5pMpCI_9a-F6i6_aGecAIbYb-NBal9hhdowm64vC5iCmg6-csezzcY5-LT79dNFKaacfW6Wuo1GLxGxP9WDAi_oeQUQVZVTaDiUC0oT-kIxhxbLQDdkgLCAi3ZoaenPj)\]\[[2](https://www.google.com/url?sa=E&q=https%3A%2F%2Fvertexaisearch.cloud.google.com%2Fgrounding-api-redirect%2FAUZIYQECelgxDVMiyY2fY5oOy-1jLbpvywZ9NxXaKj4mAPqEk0kt78nVRlRM0RTWifo3nubdWmIXdSHz-dmeRlq2S-tGD1AWyTwlNFoVWSgsM58tLK35MspU_e3WTjHH4DXNHFzCVfh-JvSP_tvmSM5-wgNfDg0wamJtU71TP_qiqsaiimXcrDLOcvRLnMQ%3D)\]\[[3](https://www.google.com/url?sa=E&q=https%3A%2F%2Fvertexaisearch.cloud.google.com%2Fgrounding-api-redirect%2FAUZIYQE_7C-TSOGgEjYb-fkNeM9A2phphmZgApAFuPXYyDKPQL3qsKDbACWKOg_5_18g2otlJHpf2rgJ89wesibs_tD1qg5dlX_f7m9ejL6NZDAN6L6PuTQHZt28kUZ6IPY1Ow%3D%3D)\]\[[4](https://www.google.com/url?sa=E&q=https%3A%2F%2Fvertexaisearch.cloud.google.com%2Fgrounding-api-redirect%2FAUZIYQG-tP4N7T2XRCyuBdyJLXSt8FJwVemUrRl_Q2uZ0rxMgNpHY7WyL302VxVBfI2EWKkmDKRWQjTLByOeDgURk_ZUNAdi_ymOrb3o_PlHxFfnHEcWANjr9miAiZLaFa5EcQ%3D%3D)\]\[[5](https://www.google.com/url?sa=E&q=https%3A%2F%2Fvertexaisearch.cloud.google.com%2Fgrounding-api-redirect%2FAUZIYQHIVJSZr_OL1dj3WhLAmooOsWbsgMrT4XQx8am7C4RIMtw6Bik9FI1DJIKfBbcwR4b2c_xO1c30ZaciC609-CxU62QulvKHCZWP0-I8BFKUnBKtmvge0okjQ-ufqsiOQS9fpvMnsl4yyfVzLnDpyp4sRSyTRsZ85TH_iusa_KBLqA7g3Yty-Z4UnKoRl_c%3D)\]
* **Current Status:** **The link appears to be dead/removed.**
* **Question:** Why would a manufacturer delete their main product blog post? Likely because the real-world performance contradicts their claims of "Ultimate AI" speed.
**2. The Reality: Crippled Hardware (Diagnostics)**
My extensive testing proves the memory controller is hard-locked, wasting the Strix Halo potential.
* **AIDA64 Memory Read:** Stuck at **\~117 GB/s** (Theoretical Strix Halo spec: \~500 GB/s).
* **Clocks:** HWiNFO confirms North Bridge & GPU Memory Clock are locked at **1000 MHz** (Safe Mode), ignoring all load and BIOS settings.
* **Real World AI:** Qwen 72B runs at **3.95 tokens/s**. This confirms the bandwidth is choked to the level of a budget laptop.
* **Conclusion:** The device physically cannot deliver the advertised performance due to firmware/BIOS locks.
**3. The Trap: Buying Experience (EU Warning)**
* **Storefront:** Ordered from the **GMKtec German (.de) website**, expecting EU consumer laws to apply.
* **Shipping:** Shipped directly from **Hong Kong** (Drop-shipping).
* **Paperwork:** No valid VAT invoice received to date.
* **Returns:** Support demands **I pay for return shipping to China** for a defective unit. This violates standard EU consumer rights for goods purchased on EU-targeted domains.
**Discussion:**
1. **AMD's Role:** Does AMD approve of their premium "Strix Halo" silicon being sold in implementations that cripple its performance by 75%?
2. **Legal:** Is the removal of the marketing blog post an admission of false advertising?
3. **Hardware:** Has anyone seen an EVO-X2 actually hitting 400+ GB/s bandwidth, or is the entire product line defective?
**Recommendation:** **DO NOT BUY.** You are paying for Strix Halo but getting entry-level performance, with no support and questionable business practices.
| 2025-11-24T10:47:20 | https://www.reddit.com/r/LocalLLaMA/comments/1p5dwr6/warningscam_gmktec_evox2_strix_halo_crippled/ | QrkaWodna | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5dwr6 | false | null | t3_1p5dwr6 | /r/LocalLLaMA/comments/1p5dwr6/warningscam_gmktec_evox2_strix_halo_crippled/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ClqVLA36y5p-dFatc5hyenGTnLAQMx86CHX7WPaPJPc', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/ClqVLA36y5p-dFatc5hyenGTnLAQMx86CHX7WPaPJPc.png?width=108&crop=smart&auto=webp&s=ae95f8a121baa0145e2f6c2d4cb908b6d9407ecf', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/ClqVLA36y5p-dFatc5hyenGTnLAQMx86CHX7WPaPJPc.png?width=216&crop=smart&auto=webp&s=6f2394afa39a9a841c13ea40b13d5e28ac1f67dc', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/ClqVLA36y5p-dFatc5hyenGTnLAQMx86CHX7WPaPJPc.png?width=320&crop=smart&auto=webp&s=86aac2de55d7d561586033fe8f7bf2fd49d840aa', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/ClqVLA36y5p-dFatc5hyenGTnLAQMx86CHX7WPaPJPc.png?width=640&crop=smart&auto=webp&s=7a2d7185a667732514c774ced6e709fa8ce0d882', 'width': 640}], 'source': {'height': 551, 'url': 'https://external-preview.redd.it/ClqVLA36y5p-dFatc5hyenGTnLAQMx86CHX7WPaPJPc.png?auto=webp&s=aa390038d89eedebf7116bfec5303014663256ba', 'width': 827}, 'variants': {}}]} |
Local LLM performance on AMD Ryzen AI 9 HX 370 iGPU (Radeon 890M) or NPU | 4 | Hello! There are very few recent, properly executed, and detailed benchmarks online for the AMD Ryzen AI 9 HX 370 iGPU or NPU when running LLM. They were either made back when Strix Point support was very weak, or they use the CPU, or they run small models. Owners of mini PCs on the HX 370, can you share your experience of which DeepSeek (70B, 32B, 14B) and gpt-oss (120B, 20B) models generate tokens at a decent rate? I am considering buying a mini PC on the HX 370 for the homelab and would like to know if it is worth considering launching LLM on such hardware? In particular, I'm trying to choose between 64 GB and 96 GB of DDR5-5600 RAM. Without using LLM, 64GB would be enough for me with a large margin. | 2025-11-24T10:09:22 | https://www.reddit.com/r/LocalLLaMA/comments/1p5dale/local_llm_performance_on_amd_ryzen_ai_9_hx_370/ | _Nitor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5dale | false | null | t3_1p5dale | /r/LocalLLaMA/comments/1p5dale/local_llm_performance_on_amd_ryzen_ai_9_hx_370/ | false | false | self | 4 | null |
R9700 AI Pro worth upgrade from a 7900 XT for Whisper + LLM post-processing? | 1 | Hey team,
Just after some opinions/feedback on whether its worth it to upgrade to a R9700 from a 7900XT.
I've got a fairly specific and niche use case where I need to do some 3D scientific visualisation, as well as a voice transcription pathway using Silero VAD -> Whisper.cpp (large-v3-turbo) -> MedGemma 27B text (Q3/Q4) all on a local workstation.
Currently my development setup has a 7900 XT so 20GB VRAM, and a Quadro P2000 (5GB) which I'm just using for whisper. I get about 16tok/s with the MedGemma models I'm using to do some prompt-based post-processing of dictated texts, which is acceptable but could be better for workflow, and was wondering about upgrading to a R9700, and selling the 9700 XT.
Do y'all think its worth it from a performance perspective? Would be nice to run slightly higher quants of the MedGemma model but the output quality of the IQ4-XS GGUF quant is pretty good.
My workflow is all-Vulkan and I need to to work across Win and Linux so would prefer not to go to NVIDIA, but open to suggestions at a similar price point. | 2025-11-24T10:05:37 | https://www.reddit.com/r/LocalLLaMA/comments/1p5d8g8/r9700_ai_pro_worth_upgrade_from_a_7900_xt_for/ | BusTiny207 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5d8g8 | false | null | t3_1p5d8g8 | /r/LocalLLaMA/comments/1p5d8g8/r9700_ai_pro_worth_upgrade_from_a_7900_xt_for/ | false | false | self | 1 | null |
Question...Mac Studio M2 Ultra 128GB RAM or second RTX 5090
Question | Help | 4 | So, I have a Ryzen 9 5900X with 64GB of RAM and a 5090. I do data science and have local LLMs for my daily work: Qwen 30b and Gemma 3 27b on Arch Linux.
I wanted to broaden my horizons and was looking at a Mac Studio M2 Ultra with 128GB of RAM to add more context and because it's a higher-quality model. But I'm wondering if I should buy a second 5090 and another PSU to handle both, but I think I'd only benefit from the extra RAM and not the extra power, plus it would generate more heat and consume more power for everyday use. I work mornings and afternoons. I tend to leave the PC on a lot.
I'm wondering if the M2 Ultra would be a better daily workstation and I could leave the PC for tasks with CUDA processing. I'm not sure if my budget would allow me to get an M3 Ultra (which I wouldn't be able to afford) or an M4 Max.
Any suggestions or similar experiences? What would you recommend for a 3k budget? | 2025-11-24T09:57:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p5d3uy/questionmac_studio_m2_ultra_128gb_ram_or_second/ | ajujox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5d3uy | false | null | t3_1p5d3uy | /r/LocalLLaMA/comments/1p5d3uy/questionmac_studio_m2_ultra_128gb_ram_or_second/ | false | false | self | 4 | null |
I fine-tuned a model with GRPO + TRL + OpenEnv environment on Colab to play Wordle! | 4 | I've created a **beginner-friendly notebook** (Colab) that walks you through training a model with **reinforcement learning** using an **OpenEnv environment** to play **Wordle** 🎮
The model is trained with **TRL**, which now supports RL environments directly from **OpenEnv**.
For this example, I use the **TextArena Wordle environment** and fine-tune the model with **GRPO** (Group-Relative Preference Optimization).
Notebook on GitHub (can run on Colab):
[https://github.com/huggingface/trl/blob/main/examples/notebooks/openenv\_wordle\_grpo.ipynb](https://github.com/huggingface/trl/blob/main/examples/notebooks/openenv_wordle_grpo.ipynb)
If you're curious about RL, TRL, or OpenEnv, this is a great place to start.
Happy learning! 🌻
[](https://x.com/SergioPaniego/status/1992878809503412665/photo/1) | 2025-11-24T09:57:07 | https://www.reddit.com/r/LocalLLaMA/comments/1p5d3j6/i_finetuned_a_model_with_grpo_trl_openenv/ | External-Rub5414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5d3j6 | false | null | t3_1p5d3j6 | /r/LocalLLaMA/comments/1p5d3j6/i_finetuned_a_model_with_grpo_trl_openenv/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'fMlCl2AIlNaG3Z0NjPgbA0PBKHaHpMufwUMttQSmWqg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fMlCl2AIlNaG3Z0NjPgbA0PBKHaHpMufwUMttQSmWqg.png?width=108&crop=smart&auto=webp&s=f50116c2312eee1da817481d0a9d284349a1982a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fMlCl2AIlNaG3Z0NjPgbA0PBKHaHpMufwUMttQSmWqg.png?width=216&crop=smart&auto=webp&s=1bb58bedafca299b1ca3ee87c515ae3000291bc1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fMlCl2AIlNaG3Z0NjPgbA0PBKHaHpMufwUMttQSmWqg.png?width=320&crop=smart&auto=webp&s=26abcb386ef06e1a5a6bcc6a67decce389f817b2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fMlCl2AIlNaG3Z0NjPgbA0PBKHaHpMufwUMttQSmWqg.png?width=640&crop=smart&auto=webp&s=5f322f0ebde4c78896b612889e5bf58fc8bc0593', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fMlCl2AIlNaG3Z0NjPgbA0PBKHaHpMufwUMttQSmWqg.png?width=960&crop=smart&auto=webp&s=ac3bfc314bc1f207b9820988b5a67fbfa731a3df', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fMlCl2AIlNaG3Z0NjPgbA0PBKHaHpMufwUMttQSmWqg.png?width=1080&crop=smart&auto=webp&s=6ee6e2fbd92b02c8066be2009ffbed533c0f8f26', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fMlCl2AIlNaG3Z0NjPgbA0PBKHaHpMufwUMttQSmWqg.png?auto=webp&s=00610c06ffe4132bfe04a617ebb49d03b415f085', 'width': 1200}, 'variants': {}}]} |
Best LLM for mobile? Gemma vs Qwen | 7 | I was trying to pick a model for my app to run an LLM on mobile.
So I looked at the performance of Gemma gen 1-3, 1-2B, and Qwen gen 1-3, 0.5B-2B.
An interesting observation is that Gemma had a lead in generation 1, but in the past two years, Qwen has caught up. Now Qwen 3 outperforms Gemma 3.
This also seems to mirror the open-source competition between Google/US and Alibaba/China.
|Model|Params|MMLU|GSM8K|MATH|HumanEval|MBPP|BBH|
|:-|:-|:-|:-|:-|:-|:-|:-|
|**Gemma 1 PT 2B**|2.0B|42.3|17.7|11.8|22.0|29.2|35.2|
|**Gemma 2 PT 2B**|2.0B|51.3|23.9|15.0|17.7|29.6|–|
|**Gemma 3 IT 1B**|1.0B|**14.7 (MMLU-Pro)**|62.8|**48.0**|41.5|35.2|39.1|
|**Qwen 1.5 – 0.5B**|0.5B|39.2|22.0|3.1|12.2|6.8|18.3|
|**Qwen 1.5 – 1.8B**|1.8B|46.8|38.4|10.1|20.1|18.0|24.2|
|**Qwen 2 – 0.5B**|0.5B|45.4|36.5|10.7|22.0|22.0|28.4|
|**Qwen 2 – 1.5B**|1.5B|56.5|58.5|21.7|31.1|37.4|37.2|
|**Qwen 2.5 – 0.5B**|0.5B|47.5|41.6|19.5|–|29.8|20.3|
|**Qwen 3 – 0.6B**|0.6B|52.8|59.6|32.4|–|36.6|41.5|
|**Qwen 3 – 1.7B**|1.7B|**62.6**|**75.4**|43.5|–|**55.4**|**54.5**|
References:
\- Gemma 1: [https://ai.google.dev/gemma/docs/core/model\_card](https://ai.google.dev/gemma/docs/core/model_card)
\- Gemma 2: [https://ai.google.dev/gemma/docs/core/model\_card\_2](https://ai.google.dev/gemma/docs/core/model_card_2)
\- Gemma 3: [https://ai.google.dev/gemma/docs/core/model\_card\_3](https://ai.google.dev/gemma/docs/core/model_card_3)
\- Qwen 1.5: [https://qwen.ai/blog?id=qwen1.5](https://qwen.ai/blog?id=qwen1.5)
\- Qwen 2: [https://huggingface.co/Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)
\- Qwen 3: [https://arxiv.org/pdf/2505.09388](https://arxiv.org/pdf/2505.09388) | 2025-11-24T09:54:49 | https://www.reddit.com/r/LocalLLaMA/comments/1p5d288/best_llm_for_mobile_gemma_vs_qwen/ | tonyc1118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5d288 | false | null | t3_1p5d288 | /r/LocalLLaMA/comments/1p5d288/best_llm_for_mobile_gemma_vs_qwen/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=216&crop=smart&auto=webp&s=1812be5c0e49c65e85787f4dbb2922a543943e79', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=320&crop=smart&auto=webp&s=ca7983e470f1e5cbc5edcd5c5e1c7e5b70227953', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=640&crop=smart&auto=webp&s=293ebb5606c7edf7f2570aa914eb4ddb55f1e615', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=960&crop=smart&auto=webp&s=b1bd156ecd3df7024382f9e145cda17bcaf6bc79', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=1080&crop=smart&auto=webp&s=a3b1fd853b19889a23a601c33fae7d2323e8bdb0', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?auto=webp&s=b78731184d9920fa4900b6590e113d2772fa64ed', 'width': 1440}, 'variants': {}}]} |
I built a reasoning pipeline that makes an untuned 8B local model behave like a 70B — feedback wanted | 0 | Hey everyone,
I’ve been experimenting with local LLM orchestration and built a full reasoning pipeline that dramatically boosts small model performance.
What’s cool is that my **8B base model (no finetune, fully local,no api)** can now:
* classify task type
* route to math/research/coding/physics agents
* perform multi-source web research
* verify its own answers
* correct itself
* explain reasoning
* store + reuse memory
All of this is 100% offline and fully open-source.
this output was for : Derive the Euler–Lagrange equation from the principle of least action.
This unfintuned 8b can do more things if anyone is interested ask for the whole code and try yourself
Thanks. | 2025-11-24T09:54:00 | Leading-Musician-813 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p5d1rf | false | null | t3_1p5d1rf | /r/LocalLLaMA/comments/1p5d1rf/i_built_a_reasoning_pipeline_that_makes_an/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'q96of8syf63g1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/q96of8syf63g1.png?width=108&crop=smart&auto=webp&s=30c051c60679afda4c67a4a2d60423b8e08dbab5', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/q96of8syf63g1.png?width=216&crop=smart&auto=webp&s=5355ad5f028369cba6f0bc284f8e59d0410c1689', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/q96of8syf63g1.png?width=320&crop=smart&auto=webp&s=b50e1c9b56e703db004dcce1939488a89d014225', 'width': 320}, {'height': 336, 'url': 'https://preview.redd.it/q96of8syf63g1.png?width=640&crop=smart&auto=webp&s=127e4ef621c04f1a0224cf83f9b6d24c9f0d41d1', 'width': 640}, {'height': 505, 'url': 'https://preview.redd.it/q96of8syf63g1.png?width=960&crop=smart&auto=webp&s=f62a567dcd30e80d61ad53cff87ffb2bc2b09a3d', 'width': 960}, {'height': 568, 'url': 'https://preview.redd.it/q96of8syf63g1.png?width=1080&crop=smart&auto=webp&s=eca193c450655df87b6d0462e69eff7140af9bb8', 'width': 1080}], 'source': {'height': 836, 'url': 'https://preview.redd.it/q96of8syf63g1.png?auto=webp&s=a17179e931f8907d2a910a57e85869ba379fbdfc', 'width': 1589}, 'variants': {}}]} | |
I tried to separate "Thinking" from "Speaking" in LLMs (PoC) | 4 | Back in april, I made a video about experimenting to see if a small model can plan its answer entirely in abstract vector space before generating a single word.
The idea is to decouple the "reasoning" from the "token generation" to make it more efficient. I wrote an experiment, the math behind it, and the specific failure cases (it struggles with long stories) in a whitepaper style post.
I’d love to get some feedback on the paper structure and the concept itself.
Does the methodology and scalability analysis section seem sound to you?
Full write-up: [https://gallahat.substack.com/p/proof-of-concept-decoupling-semantic](https://gallahat.substack.com/p/proof-of-concept-decoupling-semantic) | 2025-11-24T09:51:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p5d093/i_tried_to_separate_thinking_from_speaking_in/ | t-_-ji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5d093 | false | null | t3_1p5d093 | /r/LocalLLaMA/comments/1p5d093/i_tried_to_separate_thinking_from_speaking_in/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'EVReCE0Emm7LdOgZ_HZ1K3sazlK9zDeReaEKR91Hpok', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EVReCE0Emm7LdOgZ_HZ1K3sazlK9zDeReaEKR91Hpok.jpeg?width=108&crop=smart&auto=webp&s=7c9cb8c597e5ce2035a619b18c637dcc113da91f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EVReCE0Emm7LdOgZ_HZ1K3sazlK9zDeReaEKR91Hpok.jpeg?width=216&crop=smart&auto=webp&s=1f243e8b44e4aee2faaa39bf3e47bb2620800dad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EVReCE0Emm7LdOgZ_HZ1K3sazlK9zDeReaEKR91Hpok.jpeg?width=320&crop=smart&auto=webp&s=b5727579da0993643c75918159a6b21ea8c77a35', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EVReCE0Emm7LdOgZ_HZ1K3sazlK9zDeReaEKR91Hpok.jpeg?width=640&crop=smart&auto=webp&s=1272cc4aa3fd2a7c9353bf61942f16598fceeb24', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EVReCE0Emm7LdOgZ_HZ1K3sazlK9zDeReaEKR91Hpok.jpeg?width=960&crop=smart&auto=webp&s=ae7d70dc1721765e6af4e21f815804194c8b0022', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EVReCE0Emm7LdOgZ_HZ1K3sazlK9zDeReaEKR91Hpok.jpeg?width=1080&crop=smart&auto=webp&s=bbb95fbb72d1fe9cabd663d500b3bc5fac3e34f3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EVReCE0Emm7LdOgZ_HZ1K3sazlK9zDeReaEKR91Hpok.jpeg?auto=webp&s=afc10f858d2f1d371f7d58fba556f75aebbd072f', 'width': 1200}, 'variants': {}}]} |
API Security for Agents | 0 | all, been working on this project lately,
Vigil is a middleware firewall that sits between your AI Agents and the world. It blocks Prompt Injections, prevents Unauthorized Actions (RBAC), and automatically Redacts PII in real-time.
the product is free and no info required, feel free to use it, \* are appreciated:) | 2025-11-24T09:47:14 | https://github.com/rom-mvp/vigil | Fantastic-Issue1020 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p5cxz8 | false | null | t3_1p5cxz8 | /r/LocalLLaMA/comments/1p5cxz8/api_security_for_agents/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'MLoAg4J1mMZENJRaIyNfdX3Jvb-4TqH2I-MeT0uH6KQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MLoAg4J1mMZENJRaIyNfdX3Jvb-4TqH2I-MeT0uH6KQ.png?width=108&crop=smart&auto=webp&s=d7c2dc850e802cfd7d0cbb3d42e935f08b138dfa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MLoAg4J1mMZENJRaIyNfdX3Jvb-4TqH2I-MeT0uH6KQ.png?width=216&crop=smart&auto=webp&s=6c76d91c99337e51624d653be3dfdb784bb42862', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MLoAg4J1mMZENJRaIyNfdX3Jvb-4TqH2I-MeT0uH6KQ.png?width=320&crop=smart&auto=webp&s=9417c82a1b90e9b4253af23844e02e416553692a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MLoAg4J1mMZENJRaIyNfdX3Jvb-4TqH2I-MeT0uH6KQ.png?width=640&crop=smart&auto=webp&s=d726e310cfb0455e4f4d6f1a52a29ec39e1a414e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MLoAg4J1mMZENJRaIyNfdX3Jvb-4TqH2I-MeT0uH6KQ.png?width=960&crop=smart&auto=webp&s=347b92dbb04c5d562aaaed87f41a8d9c5d251644', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MLoAg4J1mMZENJRaIyNfdX3Jvb-4TqH2I-MeT0uH6KQ.png?width=1080&crop=smart&auto=webp&s=e62e2245410c5ba52103f35271bb5fe55bac0bf6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MLoAg4J1mMZENJRaIyNfdX3Jvb-4TqH2I-MeT0uH6KQ.png?auto=webp&s=e3e248a33f45478cfea277b7d4a00db78bb2579c', 'width': 1200}, 'variants': {}}]} |
I built a reasoning pipeline that makes an untuned 8B local model behave like a 70B — feedback wanted | 1 | Hey everyone,
I’ve been experimenting with local LLM orchestration and built a full reasoning pipeline that dramatically boosts small model performance.
What’s cool is that my **8B base model (no finetune, fully local)** can now:
* classify task type
* route to math/research/coding/physics agents
* perform multi-source web research
* verify its own answers
* correct itself
* explain reasoning
* store + reuse memory
All of this is 100% offline and fully open-source.
Here’s a small sample output (trimmed to avoid spam filters):
>
If anyone finds this cool, I can share the full architecture and code.
Would love any feedback or suggestions to improve symbolic math & coding ability. | 2025-11-24T09:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p5cteu/i_built_a_reasoning_pipeline_that_makes_an/ | Leading-Musician-813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5cteu | false | null | t3_1p5cteu | /r/LocalLLaMA/comments/1p5cteu/i_built_a_reasoning_pipeline_that_makes_an/ | false | false | self | 1 | null |
How do you ensure that local LLM uses the most recent package versions? | 0 | I want the local model to check the latest npm versions during code generation. What is the best way to achieve that? | 2025-11-24T09:27:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p5cmpz/how_do_you_ensure_that_local_llm_uses_the_most/ | ThingRexCom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5cmpz | false | null | t3_1p5cmpz | /r/LocalLLaMA/comments/1p5cmpz/how_do_you_ensure_that_local_llm_uses_the_most/ | false | false | self | 0 | null |
I made an untuned 8B local LLM behave like a much larger model using a custom reasoning pipeline | 1 | [removed] | 2025-11-24T09:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p5cl8w/i_made_an_untuned_8b_local_llm_behave_like_a_much/ | Temporary_Top_5198 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p5cl8w | false | null | t3_1p5cl8w | /r/LocalLLaMA/comments/1p5cl8w/i_made_an_untuned_8b_local_llm_behave_like_a_much/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.