**Looking for Access to Sentiment Analysis Unveiled** (score 0)

Could anyone share this book? Unfortunately, my university isn’t listed among the institutions that provide access.
**Title:** *Sentiment Analysis Unveiled: Techniques, Applications, and Innovations*
**Editors:** Neha Nandal, Rohit Tanwar, Varun Sapra
Thanks in advance!

*posted by InsectActive95 · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2psb8/looking_for_access_to_sentiment_analysis_unveiled/*
**Help with GraphRAG - How do you select the right entities and relationships?** (score 6)

Hi, I am building a RAG-based document retrieval program for a course I'm doing in mechanical refrigeration. The textbooks are huge, and studying for the licensing exam is a massive pain with the publisher's DRM-infected ebook software.
I built a system that uses traditional RAG: I converted my documents to embeddings, and then I can ask a question in English, convert that to an embedding as well, and compare the cosine similarity between my database and the question. I rank the database by similarity score, then feed the top 5-10 documents into an LLM and get back a fairly trustworthy response.
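For reference, the ranking step described above can be sketched in a few lines of dependency-free Python (the embedding model and vector dimensions are whatever your pipeline already produces):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, doc_vecs, k=5):
    """Return indices of the k document embeddings most similar to the query."""
    scored = [(cosine(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]
```

The indices returned by `top_k` are what gets looked up and stuffed into the LLM prompt.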
The problem with it currently is that the answers are limited and uninspired. There are often times when I ask a question about a specific type of reciprocating compressor, but the answer does not include the crucial details of how the casing should be packed to maintain the longevity of the equipment, stuff like that. It misses the relationships between the concepts.
So that is why I am trying to upgrade my program beyond cosine similarity. I read about GraphRAG and I think it sounds like it fits my needs. If I can process my documents into communities of entities and their relationships, then smarter document retrieval and better answers to my technical questions should be possible.
The problem that I've run into is in selecting the entities and the relationships.
I tried letting Cursor run wild on it, giving control of entity selection to the LLM, but the results were too generic and limited to be useful.
With GraphRAG, do you think it's better to come up with a list of entities and relationships myself? Or should I continue trying to let the LLM generate these on its own?
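One middle ground between hand-writing the graph and letting the LLM free-associate is to fix a closed schema yourself and let the LLM only do the extraction. A rough sketch of what that prompt scaffolding could look like — the entity and relation types here are invented examples for a refrigeration domain, not a recommendation:

```python
# Hypothetical domain schema -- adjust to the concepts your exam actually tests.
ENTITY_TYPES = ["Component", "Refrigerant", "Procedure", "FailureMode", "Specification"]
RELATION_TYPES = ["PART_OF", "MAINTAINED_BY", "CAUSES", "REQUIRES", "RATED_FOR"]

def extraction_prompt(chunk: str) -> str:
    """Build a prompt that forces the LLM to choose from a closed set of types."""
    return (
        "Extract entities and relationships from the passage below.\n"
        f"Allowed entity types: {', '.join(ENTITY_TYPES)}\n"
        f"Allowed relation types: {', '.join(RELATION_TYPES)}\n"
        'Return JSON: {"entities": [{"name": ..., "type": ...}], '
        '"relations": [{"source": ..., "type": ..., "target": ...}]}\n'
        "Use ONLY the allowed types; skip anything that does not fit.\n\n"
        "Passage:\n" + chunk
    )
```

Anything extracted outside the allowed types can be dropped before it ever reaches Neo4j, which keeps the graph from drifting into generic entities.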
I am using Neo4j, Cursor with claude-4-sonnet, and gpt-5-nano as the extraction LLM.

*posted by sh-rink · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2pq9i/help_with_graphrag_how_do_you_select_the_right/*
**olmOCR VRAM usage** (score 0)

Guys, I have a relatively powerful machine with 48GB of VRAM. I need to run olmOCR on it to extract text from some images. However, I can't dedicate it solely to that purpose (when I'm processing an image with olmOCR, the machine becomes practically unusable because it uses so much VRAM). Do you know of any way to reduce this usage? I don't want to use quantized models; I need to maintain translation fidelity.

*posted by Alive-Movie-3418 · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2phtr/olmocr_vram_usage/*
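If the model behind olmOCR is hosted through an inference server such as vLLM (an assumption — check what your stack actually uses), the usual fix is to cap how much VRAM the server preallocates instead of letting it grab nearly the whole card. vLLM exposes this as `gpu_memory_utilization` (CLI flag `--gpu-memory-utilization`). A tiny helper for picking the fraction, assuming a 48GB card:

```python
def vram_fraction(cap_gb: float, total_gb: float = 48.0) -> float:
    """Fraction of total GPU memory to hand to the inference server,
    clamped to a sane range. E.g. capping at 24GB of a 48GB card -> 0.5."""
    frac = cap_gb / total_gb
    return round(min(max(frac, 0.05), 0.95), 2)

# Example: leave half the card free for other work.
# vllm serve <model> --gpu-memory-utilization 0.5 --max-model-len 8192
print(vram_fraction(24.0))  # 0.5
```

Note that a smaller allocation shrinks the KV-cache budget, so throughput drops, but the machine stays usable.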
**I built a self-theorizing AI in 4 weeks (Kaleidoscope E8 Cognitive Engine)** (score 0)

I am not a coder. **I built this in 4 weeks.**
For the past month I have been working on **Kaleidoscope**, a cognitive architecture that tries to understand complex systems from first principles. Unlike assistants that just answer questions, it **builds its own theories.** I just opened the repo under **GPL-3.0** for anyone who wants to look, run, or hack on it.
**Key features:**
* **100% Autonomous Reasoning** → generates its own questions and insights
* **Toy Universe Model** → E8 lattice physics engine as the backbone of thought space
* **Holographic Memory System** → encodes complex data into a simpler, interconnected structure
* **Quasicrystal Memory Indexing** → memory stored on nodes like pixels in a holographic universe
* **Serendipity Engine** → discovers non-linear, creative links between unrelated ideas
* **Evolves and Self Corrects** → RL agent refines its own reasoning over time
* **40,000+ Cognitive Steps** → stable long running simulation, graduated from toy to prototype
* **Emergent Theories** → developed insights about financial markets and even its own consciousness
[Mark 16 Screenshot](https://preview.redd.it/mqrjo9qdytlf1.png?width=3839&format=png&auto=webp&s=1c337027e73defe3d48572ab3f4aa04e2ee5395b)
[Telemetry of dimensional tension during a black hole event.](https://preview.redd.it/bbyo5sxgytlf1.png?width=517&format=png&auto=webp&s=e90dfd44aeb72cebce56ef7cf7b78ca3ec86a7ad)
[Early Version Mark 8](https://preview.redd.it/4c5wybdkytlf1.png?width=2579&format=png&auto=webp&s=fd93891734355d76ba07ec6be497b19a8973ac54)
* **Multi Agent Dialogue** → teacher, explorer, and subconscious agents in recursive loops
* **Quantum and Classical Modes** → flips between probabilistic and deterministic reasoning paths
* **Visualization Hooks** → embedding maps, trajectory graphs, black hole compression events
Repo link: [Howtoimagine/E8-Kaleidescope-AI: E8Mind](https://github.com/Howtoimagine/E8-Kaleidescope-AI)
**Why it matters:**
**Realistic part**
* Building something like this in just 4 weeks is unusual. Even without a coding background, it shows that a motivated person can pull together cognitive scaffolding using existing tools, symbolic structures, and reinforcement loops.
* The emergent properties are the most striking. Once running long enough, the system starts to produce **unexpected connections and theories** that were not programmed directly. That is an early signal that architectures combining symbolic geometry, LLM reasoning, and RL dynamics can move beyond question answering into **self-driven knowledge formation.**
**Speculative part**
* I believe this is the **first example of quasicrystal indexing made public**. Encoding memory in this way means information is stored like pixels in a holographic structure, rather than in a flat sequence or simple graph.
* E8 is one of the most complex and symmetric structures in mathematics. By using it as a lattice for thought, Kaleidoscope is essentially experimenting with a **toy universe of cognition.**
* If memory can be encoded holographically into E8 shells and re-indexed quasicrystallinely, then over time you might see **emergent coherence that mirrors physical law.** That is, the system could begin shaping its memory in ways that resemble a universe discovering its own order.
* This points to a possibility: cognitive engines may not only simulate theories, but **grow new ones out of their own geometry and memory constraints.**
I am curious:
* What is the most surprising emergent behavior you have seen in a complex system?
* If you had an AI that could develop novel theories, what problem would you point it at?
Happy to dive into the architecture or share more from its research logs if people are interested.
— **Skye Malone**

*posted by thesoraspace · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2p7uy/i_built_a_selftheorizing_ai_in_4_weeks/*
**85% of Nvidia's $46.7 billion revenue last quarter came from just 6 companies.** (score 1,062)

*posted by vergogn · 2025-08-28 · /r/LocalLLaMA/comments/1n2p2wi/85_of_nvidias_467_billion_revenue_last_quarter/*
**DeepWiki integration in LM Studio?** (score 3)

I want to get a local AI to know everything about a C# add-on layer I use. It keeps getting updates and is partly tied to visual tooling, and I wonder whether there is a way for a local AI to gather all the useful information the way DeepWiki does. No, I can't get the documentation as one convenient file; it's scattered across around 140 separate pages, and downloading each page individually and then stitching them into one document sounds quite painful. If there is some way to download an entire website, that could help too, though it may contain so much extra material that the AI wastes tokens analyzing it (but at least it would know more). DeepWiki already has all the knowledge I need for some documentation sites, but I have no idea how to get it offline.
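On downloading ~140 documentation pages and stitching them: a small stdlib-only script can do it, assuming you can list the page URLs (from a sitemap, an index page, or by hand). The parser below is a rough text extractor, not a faithful HTML-to-markdown converter:

```python
from html.parser import HTMLParser
import urllib.request

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_to_text(html: str) -> str:
    p = TextExtractor()
    p.feed(html)
    return "\n".join(p.parts)

def stitch(urls, out_path="docs_combined.txt"):
    """Fetch each documentation page and append its text to one file."""
    with open(out_path, "w", encoding="utf-8") as f:
        for url in urls:
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
            f.write(f"\n\n===== {url} =====\n\n")
            f.write(page_to_text(html))
```

The resulting single file can then be fed to the model as context, or chunked for RAG if it is too large for the context window.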
Currently I am using Qwen3-4B-2507 to assist with code; if there is a better option, I would like to hear about it.

*posted by LIVE4MINT · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2p2pa/deepwiki_integration_in_lm_studio/*
**Which local LLM will work best with my Beelink?** (score 8)

I have a Beelink SER8: 32GB (2×16GB) DDR5-5600, AMD Ryzen™ 7 8845HS.

https://www.bee-link.com/products/beelink-ser8-8845hs

I am using it headless via Tailscale and connect to it from my MacBook.

*posted by Amirzezo · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2ovhy/which_local_llm_will_work_best_with_beelink/*
**[Guide + Code] Fine-Tuning a Vision-Language Model on a Single GPU (Yes, With Code)** (score 18)

I wrote a step-by-step guide (with code) on how to fine-tune SmolVLM-256M-Instruct using Hugging Face TRL + PEFT. It covers lazy dataset streaming (no OOM), LoRA/DoRA explained simply, ChartQA for verifiable evaluation, and how to deploy via vLLM. Runs fine on a single consumer GPU like a 3060/4070.
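Since the guide promises "LoRA explained simply": the whole trick is that fine-tuning only learns a low-rank update ΔW = (α/r)·A·B applied on top of the frozen weight W. A toy, dependency-free sketch of the forward pass (real training uses PEFT's `LoraConfig`, not this):

```python
def matmul(A, B):
    """Plain list-of-lists matrix multiply, for illustration only."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha=16, r=2):
    """y = x @ (W + (alpha/r) * A @ B).
    W is frozen (d_in x d_out); only A (d_in x r) and B (r x d_out) are trained."""
    scale = alpha / r
    delta = [[scale * v for v in row] for row in matmul(A, B)]
    W_eff = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
    return matmul(x, W_eff)
```

Because r is tiny compared to d_in and d_out, A and B hold a small fraction of W's parameters, which is why this fits on a single consumer GPU.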
Guide: [https://pavankunchalapk.medium.com/the-definitive-guide-to-fine-tuning-a-vision-language-model-on-a-single-gpu-with-code-79f7aa914fc6](https://pavankunchalapk.medium.com/the-definitive-guide-to-fine-tuning-a-vision-language-model-on-a-single-gpu-with-code-79f7aa914fc6?utm_source=chatgpt.com)
Code: [https://github.com/Pavankunchala/Reinforcement-learning-with-verifable-rewards-Learnings/tree/main/projects/vllm-fine-tuning-smolvlm](https://github.com/Pavankunchala/Reinforcement-learning-with-verifable-rewards-Learnings/tree/main/projects/vllm-fine-tuning-smolvlm?utm_source=chatgpt.com)
Also — I’m open to roles! Hands-on with real-time pose estimation, LLMs, and deep learning architectures. Resume: [https://pavan-portfolio-tawny.vercel.app/](https://pavan-portfolio-tawny.vercel.app/)

*posted by Solid_Woodpecker3635 · 2025-08-28 · /r/LocalLLaMA/comments/1n2oi65/guide_code_finetuning_a_visionlanguage_model_on_a/*
**Memp: Exploring Agent Procedural Memory** (score 0)

Anyone know if this paper has a repo, or if one is planned?
"A new technique from [Zhejiang University](https://www.zju.edu.cn/english/) and [Alibaba Group](https://www.alibabagroup.com/) gives large language model (LLM) agents a dynamic memory, making them more efficient and effective at complex tasks. The technique, called [Memp](https://www.arxiv.org/abs/2508.06433), provides agents with a “procedural memory” that is continuously updated as they gain experience, much like how humans learn from practice.
Memp creates a lifelong learning framework where agents don’t have to start from scratch for every new task. Instead, they become progressively better and more efficient as they encounter new situations in real-world environments, a key requirement for reliable enterprise automation."
[https://venturebeat.com/ai/how-procedural-memory-can-cut-the-cost-and-complexity-of-ai-agents/](https://venturebeat.com/ai/how-procedural-memory-can-cut-the-cost-and-complexity-of-ai-agents/)

Paper: https://www.arxiv.org/pdf/2508.06433

*posted by ChainOfThot · 2025-08-28 · /r/LocalLLaMA/comments/1n2nzlm/memp_exploring_agent_procedural_memory/*
**ollama qwen3:30b-a3b with continue.dev performance issue?** (score 3)

Hello,
I'm trying to configure my local development setup using the continue.dev plugin for Visual Studio and qwen3:30b.

I'm using the a3b variant because it was said that qwen3-coder:30b does not support tools. In any case, I just wanted a proof of concept of what I can expect from the local setup.
my hardware is
https://preview.redd.it/7u39n8fzotlf1.png?width=1305&format=png&auto=webp&s=19ee744c7d4f1fa64f337ada3209a5a1e3396461
with an NVIDIA RTX 5090.

As far as I understand it, the model should be manageable on my setup; however, it is too slow. I was not expecting Usain Bolt kind of results, but I did, to some extent, expect this to be functional, which it is not.

I'm assuming I'm not using the GPU to the full extent
https://preview.redd.it/rnh1b3qdptlf1.png?width=1973&format=png&auto=webp&s=1563d4727f27902370b2ec2caacd543367c32c97
even though I might be wrong. If I am, is there a better setup I can go for, and if I'm not, what am I doing wrong?

*posted by Numerous-Photograph4 · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2ny0r/ollama_qwen330ba3b_with_continuedev_performance/*
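Two things are worth checking before blaming the hardware: `ollama ps` reports how the loaded model is split between CPU and GPU, and if layers have spilled to system RAM you can request full offload via the `num_gpu` option (number of layers to offload; a large value means "all") in Ollama's HTTP API. A sketch — the model tag mirrors the one in the post and is an assumption about your local setup:

```python
import json
import urllib.request

def build_payload(prompt, model="qwen3:30b-a3b", num_gpu_layers=999):
    """Request body for Ollama's /api/generate, asking for full GPU offload."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_gpu": num_gpu_layers, "num_ctx": 8192},
    }

def generate(prompt, host="http://localhost:11434"):
    """Send one non-streaming generation request to a local Ollama server."""
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())["response"]
```

If `ollama ps` still shows a CPU share after this, the model plus context simply does not fit in the 5090's 32GB at the chosen quantization, and a smaller quant or shorter `num_ctx` is the next lever.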
**GLM-4.5 is now leading the Berkeley Function-Calling Leaderboard V4, beating Opus 4** (score 191)

https://gorilla.cs.berkeley.edu/leaderboard.html?s=09

*posted by XMasterrrr · 2025-08-28 · /r/LocalLLaMA/comments/1n2npu9/glm45_is_now_leading_the_berkeley_functioncalling/*
**Are We Really Getting the Best of the Current Models? Discussion!** (score 6)

For a while now, AI models—both diffusion and autoregressive—have been steadily improving without any major changes to their core technology or architecture. That’s great. Still, I wonder if we’re actually getting the best these models can offer, or if we’re just leaning on ever-larger piles of compute.
This year saw a flood of new releases, and most are clear leaps over earlier generations. The one that truly surprised me, though, is Qwen3-4B-thinking. Hardly anyone talks about it—probably because people assume small models can’t be smart or useful—but if you still think that, give it a spin. Gemma-4B and Qwen-3-4B punch way above their weight class.
On the image-generation side, I’ve been using Illustrious, built on the SDXL architecture, and again, it’s impressive for its size—around 3.5 B parameters if memory serves. SDXL was a marvel at launch, yet Illustrious consistently outperforms it. SDXL gave me mangled hands seven times out of ten; Illustrious gets them right eight times out of ten. Its grasp of overall composition just feels sharper. Same architecture, same parameter count—so why the big gap?
That leads me to a worrying conclusion: we may not be squeezing the best out of these models at all, and given how the AI arms race is accelerating, I doubt we ever will. I fear we’ll let hardware advances keep bailing out sloppy software, the same way it’s happened before—just throw more silicon at it and forget about real optimization.
Before the AI boom, an average person owned one GPU. Two GPUs? Probably a graphics or 3-D pro. More than two? Almost certainly a crypto miner. Now I see people stacking four, six, even eight GPUs just to run models locally—mostly for privacy. Meanwhile the models themselves keep ballooning, which means even more GPUs. Here we go again: bloated software pushes us toward ever-heavier hardware.
What do you think?

*posted by Iory1998 · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2nnyi/are_we_really_getting_the_best_of_the_current/*
**dinosaur needing a vendor** (score 0)

I came by a Blackwell 6000 Max-Q. I just had hand surgery, so building it myself is not something I have in me right now. I talked to Maingear, Puget, etc.; the prices seem a bit high and the wait is two weeks. Is there any vendor you can recommend? Bizon Tech's prices and build times seem good, but I've never heard of them. I'm leaning toward just going with Origin PC and waiting. Any advice would be appreciated.

*posted by That-Thanks3889 · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2nnng/dinosaur_needing_a_vendor/*
**We built a Claude-like flat-rate subscription API to run open-source models** (score 1)

[removed]

https://synthetic.new/newsletter/entries/subscriptions

*posted by reissbaker · 2025-08-28 · /r/LocalLLaMA/comments/1n2nkm5/we_built_a_claudelike_flatrate_subscription_api/*
**[Tool][OpenCL 2.0] A 512B-aligned memory encoder backend for local LLM deployments (no CUDA/ROCm required)** (score 8)

Hi all,
I'm sharing a memory optimization backend that may interest those experimenting with **locally hosted LLMs on AMD hardware** — especially if you're trying to avoid CUDA or ROCm dependencies.
---
### 🔧 What is it?
This is a **512B-aligned memory encoder backend**, implemented entirely in:
- 🧠 OpenCL 2.0 + Shared Virtual Memory (SVM)
- 🔌 Standard AMD ICD loader (no ROCm)
- 💾 Custom memory RAID + semantic zero-copy optimizer
It’s designed to **simulate memory tiering** (DDR4/5, L3 cache, SAM) and provide **stable 4MB block encoding** with zero-copy transfer, suitable for workloads like embedding preloading, KV cache streaming, or quantized tensor IO.
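For context on what "512B-aligned" means in practice: OpenCL 2.0's `clSVMAlloc` takes an explicit alignment argument, and the same effect can be illustrated host-side in plain Python by over-allocating and offsetting to the next aligned address. This is an illustration of the idea, not the backend's actual code:

```python
import ctypes

def aligned_view(size: int, alignment: int = 512):
    """Return (keepalive, view): a writable buffer of `size` bytes whose
    start address is a multiple of `alignment`. Keep `keepalive` referenced
    for as long as `view` is in use."""
    raw = ctypes.create_string_buffer(size + alignment)   # over-allocate
    offset = (-ctypes.addressof(raw)) % alignment         # distance to next boundary
    view = (ctypes.c_char * size).from_buffer(raw, offset)
    return raw, view

keepalive, block = aligned_view(4 * 1024 * 1024)  # one 4MB block, as in the benchmarks
assert ctypes.addressof(block) % 512 == 0
```

Alignment like this is what lets a transfer path stay zero-copy: the device can map the host pages directly instead of bouncing through a staging buffer.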
---
### ⚙️ Use Case for Local LLMs
For users who run **LLaMA / Mistral / Qwen / Phi models** locally on:
- AMD APUs / GPUs (gfx10x, 11x)
- Non-CUDA / no-dGPU systems
- Linux *and* Windows environments
This can **speed up memory shuttling and reduce CPU↔GPU copy overhead**, especially for preloaded inference or attention-weight reshaping steps.
---
### 📊 Benchmark Summary (SVM-backed, 512B aligned)
| Size | RS Latency | LRC Latency | RS Efficiency | LRC Efficiency |
|-------|------------|-------------|----------------|----------------|
| 0.1MB | 14.29ms | 5.54ms | 0.007 MB/ms | 0.018 MB/ms |
| 0.2MB | 5.17ms | 5.14ms | 0.039 MB/ms | 0.039 MB/ms |
| 1.0MB | 6.18ms | 7.28ms | 0.162 MB/ms | 0.137 MB/ms |
| 4.0MB | 8.17ms | 7.16ms | 0.49 MB/ms | 0.56 MB/ms |
📈 Graphs:
- Latency vs Size → https://raw.githubusercontent.com/Retryixagi/Demo/main/latency_vs_size.png
- Efficiency vs Size → https://raw.githubusercontent.com/Retryixagi/Demo/main/efficiency_vs_size.png
---
### 🧪 Preview Availability
📁 GitHub: https://github.com/Retryixagi/Demo
📅 Code release on **Aug 30**, dual-licensed:
- Free for academic / personal use (non-derivative)
- Commercial use requires license
---
Feel free to fork, test, or benchmark it under your own local model pipeline.
If you're experimenting with MPS-aware inference or SVM-backed quantized matrix IO — happy to discuss ideas or integrate improvements!
🚀 Happy to answer any questions.
*posted by inhogon · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2mm6k/toolopencl20_a_512baligned_memory_encoder_backend/*
**Radeon RX 9070 / Radeon AI PRO R9700 updated vLLM image** (score 12)

Optimized vLLM for the AMD Radeon RX 9070 (RDNA gfx1201 architecture) and, theoretically, the new Radeon AI PRO R9700 released just this month as well (since it is also gfx1201). This build is only for gfx1201; I do not have the time to build for other targets.
It took me almost a week after stumbling over bugs in ROCm 6.4.1 that caused problems training AI models with Unsloth, but now it works perfectly.

I also updated the base image from Ubuntu 22.04 LTS to 24.04 LTS, with the latest libBlaslt, PyTorch, RCCL, Triton, ROCm 6.4.3, vLLM 0.10.1.1, etc., and removed bloat like CDNA-specific configuration to make the image a lot lighter.
The Docker image can be pulled here: [https://hub.docker.com/r/muhammadn/vllm-rocm](https://hub.docker.com/r/muhammadn/vllm-rocm)
The latest Unsloth works as well; I have been training some models using this Docker image.
Enjoy!
https://preview.redd.it/l82d7su4ftlf1.png?width=2880&format=png&auto=webp&s=ba382bb83f438f73e1b68c412d3cd9aca1754ab5
https://preview.redd.it/rgr4lgx4ftlf1.png?width=2880&format=png&auto=webp&s=5c06b2aaf62bae9e5107137186c135492814d33d
https://preview.redd.it/1ekbtru4ftlf1.png?width=2880&format=png&auto=webp&s=f43eb69f10151ed171c01fb439fdc139582808b0
https://preview.redd.it/uln87ru4ftlf1.png?width=2880&format=png&auto=webp&s=4d2bd4f7f60d9ca36d0ffa10233e12eaa23818b9
https://preview.redd.it/7fdiztu4ftlf1.png?width=2880&format=png&auto=webp&s=e630ffd43be1d7e07049b15aa20d7eef4c95348b
*posted by nuzaihan · 2025-08-28 · https://www.reddit.com/r/LocalLLaMA/comments/1n2mhh2/radeon_rx9070radeon_ai_pro_r9700_updated_vllm/*
**Favorite/Good MoE models collection & leaderboard?** (score 1)

Is there any leaderboard for the best/good MoE models, or any leaderboard covering all models with a MoE column (so we could filter)? I have already checked a bunch of leaderboards from my browser bookmarks, and none has a MoE column. Please share if any leaderboard does. This would be useful for the Poor GPU Club and also the quarter/semi GPU-rich club.

It would be nice to see a leaderboard with just MoE models:
**Model Name** \- **Total params** \- **Activated params**
* Qwen3-Coder-480B-A35B-Instruct - 480B - 35B
* GLM-4.5 - 355B - 32B
* Llama-4-Maverick-17B-Instruct - 400B - 17B
* ERNIE-4.5-300B-A47B-PT - 300B - 47B
* ERNIE-4.5-21B-A3B-PT - 21B - 3B
* Qwen3-30B-A3B - 30B - 3B
* Qwen3-Coder-30B-A3B - 30B - 3B
* SmallThinker-21BA3B - 21B - 3B
* Ling-lite-1.5-2507 - 16.8B - 2.75B
* Gpt-oss-20b - 21B - 3.6B
* Moonlight-16B-A3B - 16B - 3B
* Hunyuan-A13B-Instruct - 80B - 13B
Someone created a [Mixture‑of‑Experts (MoE) Model Speed Calculator](https://www.reddit.com/r/LocalLLaMA/comments/1n1xdvu/i_wrote_a_calculator_to_estimate_token_generation/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button), and I shared the models above for that calculator's dropdown. What other MoE models should be added? Frankly, I would have shared more models if I had gotten more replies (with more models) to my [recent post](https://www.reddit.com/r/LocalLLaMA/comments/1mvfuqn/what_other_moe_models_are_you_using/).
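In the meantime, mocking up the missing "MoE column" takes only a few lines; a toy sketch (parameter figures copied from the list above, in billions):

```python
# Minimal mock of an "MoE column" leaderboard: filter and sort the models
# listed above by total and activated parameter counts (in billions).
MOE_MODELS = [
    {"name": "Qwen3-Coder-480B-A35B-Instruct", "total_b": 480, "active_b": 35},
    {"name": "GLM-4.5", "total_b": 355, "active_b": 32},
    {"name": "Llama-4-Maverick-17B-Instruct", "total_b": 400, "active_b": 17},
    {"name": "ERNIE-4.5-300B-A47B-PT", "total_b": 300, "active_b": 47},
    {"name": "ERNIE-4.5-21B-A3B-PT", "total_b": 21, "active_b": 3},
    {"name": "Qwen3-30B-A3B", "total_b": 30, "active_b": 3},
    {"name": "SmallThinker-21BA3B", "total_b": 21, "active_b": 3},
    {"name": "Ling-lite-1.5-2507", "total_b": 16.8, "active_b": 2.75},
    {"name": "gpt-oss-20b", "total_b": 21, "active_b": 3.6},
    {"name": "Moonlight-16B-A3B", "total_b": 16, "active_b": 3},
    {"name": "Hunyuan-A13B-Instruct", "total_b": 80, "active_b": 13},
]

def fits_budget(models, max_total_b):
    """Models whose total parameter count fits a budget, smallest active first."""
    picks = [m for m in models if m["total_b"] <= max_total_b]
    return sorted(picks, key=lambda m: (m["active_b"], m["total_b"]))

for m in fits_budget(MOE_MODELS, max_total_b=32):
    print(f'{m["name"]}: {m["total_b"]}B total / {m["active_b"]}B active')
```

That's basically all a filterable MoE column needs; the hard part is someone maintaining the benchmark scores next to it.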
Thanks | 2025-08-28T19:53:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n2mes3/favoritegood_moe_models_collection_leaderboard/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2mes3 | false | null | t3_1n2mes3 | /r/LocalLLaMA/comments/1n2mes3/favoritegood_moe_models_collection_leaderboard/ | false | false | self | 1 | null |
Patent application filling for AI space ( on the cheap) | 0 | Hi ,
Looking for some advice and suggestions on filing AI patents for the startup. We are looking to file some patents in the modeling and AI infrastructure space.
1. How good and reliable are self-filed patents? Any experience with this?
2. Any info on how the patent office is scoping AI patent applications to identify novelty?
3. Do VCs consider self-filed patents at the same level as a normal patent?
4. Any recommended patent lawyers who work with startups ( and are reasonably priced) | 2025-08-28T19:17:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n2lh53/patent_application_filling_for_ai_space_on_the/ | Curious_me_too | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2lh53 | false | null | t3_1n2lh53 | /r/LocalLLaMA/comments/1n2lh53/patent_application_filling_for_ai_space_on_the/ | false | false | self | 0 | null |
GPT 1 or 2 | 0 | Is it possible to build a voice assistant using GPT 2? If so how? | 2025-08-28T19:05:40 | https://www.reddit.com/r/LocalLLaMA/comments/1n2l60t/gpt_1_or_2/ | Critical_Dare_2066 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2l60t | false | null | t3_1n2l60t | /r/LocalLLaMA/comments/1n2l60t/gpt_1_or_2/ | false | false | self | 0 | null |
Excuse my ignorance, but what are Uncensored for? | 0 | Do they focus more on diverse cultures?
What do these Uncensored "liberate"? | 2025-08-28T19:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/1n2l4yy/excuse_my_ignorance_but_what_are_uncensored_for/ | 9acca9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2l4yy | false | null | t3_1n2l4yy | /r/LocalLLaMA/comments/1n2l4yy/excuse_my_ignorance_but_what_are_uncensored_for/ | false | false | self | 0 | null |
85% of Nvidia's $46.7 billion revenue last quarter came from just 6 companies. | 1 | 2025-08-28T19:01:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n2l2hv/85_of_nvidias_467_billion_revenue_last_quarter/ | vergogn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2l2hv | false | null | t3_1n2l2hv | /r/LocalLLaMA/comments/1n2l2hv/85_of_nvidias_467_billion_revenue_last_quarter/ | false | false | 1 | null | ||
What is a good spatial 3D boundary box on items in a 2D image | 1 | Gemini flash 2.0 has spatial 3D bounding box on items in 2D image. (https://aistudio.google.com/apps/bundled/spatial-understanding). The detection is not that great and also not sure how long google will keep 2.0 models. What would be a good opensource alternative for this? | 2025-08-28T18:59:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n2l01s/what_is_a_good_spatial_3d_boundary_box_on_items/ | phone_radio_tv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2l01s | false | null | t3_1n2l01s | /r/LocalLLaMA/comments/1n2l01s/what_is_a_good_spatial_3d_boundary_box_on_items/ | false | false | self | 1 | null |
Looking for Open-Source Python Projects (Preferably LangGraph) for a Multi-Source Agent | 2 | Hi everyone,
I’m searching for open-source Python projects—ideally built with **LangGraph**—that implement an agent with the following features:
* A user interface to upload and manage **PDFs and other documents**.
* The ability to add and process **web links** (URLs).
* Integration via **MCP** with **cloud storage** (Google Drive, Google Docs, etc.).
* The agent should be able to **retrieve and synthesize information** from these connected sources.
If you know of any projects or frameworks that fit this description, I’d love to hear your recommendations! Bonus points if it’s actively maintained and has good documentation.
Thanks in advance! | 2025-08-28T18:52:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n2kt96/looking_for_opensource_python_projects_preferably/ | SignatureHuman8057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2kt96 | false | null | t3_1n2kt96 | /r/LocalLLaMA/comments/1n2kt96/looking_for_opensource_python_projects_preferably/ | false | false | self | 2 | null |
When an in-context-learning-guide changes the reasoning strategy of Gpt-Oss:20b | 2 | Hi everyone! I'm new to this community, I'm Serena! I wanted to share a report on how a 15-example in-context-learning guide can affect the reasoning strategy of a 20B model!
In this report I was able to contrast the differences in reasoning strategy, response time, and accuracy between the original model and the modified model!
In the research you'll find the Modelfile used, thinking-process comparisons, full access to all the tests as CSV files, and graphics and statistics!
I think it opens a new door of conversation about the idea that more information = more impact!
Battle of the new Multi-Modal models: MiniCPM-V 4.5 8B vs InternVL3.5 8B | 23 | EDIT - Added GLM-4.1V 9B scores.
New multimodal models based off Qwen3, from the MiniCPM and InternVL teams, were released very recently (just a few days ago), which got me interested and wondering which is better.
Unfortunately, InternVL3.5's model card did not include benchmark results for the 8B model; they only posted results for the 30B-A3B and 240B-A20B models, which makes it hard to compare their 8B model to MiniCPM-V 4.5 8B. Doing a little digging and reading through their paper on arXiv ([https://arxiv.org/html/2508.18265v1](https://arxiv.org/html/2508.18265v1)), I was able to find benchmark results for their 8B model and, better yet, results for their older InternVL3 8B model, which is also available in the MiniCPM model card. This gives me a way to cross-check that I am comparing the correct results from the corresponding tests accurately (although this did end up creating a significant amount of work for me).
*\*MME not included in average or geomean score for obvious reasons (the values are too large and will throw off the weighting)*
*\*\*Mantis not included in GLM-4.1V's average or geomean because GLM-4.1V did not have results for it*
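The Average and Geomean rows at the bottom of the table can be reproduced in a few lines; a quick sketch using the MiniCPM-V 4.5 column (every benchmark except MME; note Mantis is only dropped from the GLM-4.1V column, which has no Mantis score):

```python
import math

# MiniCPM-V 4.5 scores from the table below, in row order, excluding MME
# (its sum is on a different scale and would swamp the weighting).
minicpm_scores = [67.7, 79.9, 86.5, 82.2, 94.7, 89.0, 82.5, 68.3,
                  84.2, 75.5, 72.1, 61.2, 67.9, 73.5, 75.1, 63.9]

def average(xs):
    return sum(xs) / len(xs)

def geomean(xs):
    # exp of the mean of logs: numerically safer than multiplying 16 scores
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

print(f"Average: {average(minicpm_scores):.2f}")   # 76.51, matching the table
print(f"Geomean: {geomean(minicpm_scores):.2f}")
```

The geomean penalizes a model more for one very weak benchmark than a plain average does, which is why I report both.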
|Model|InternVL3.5-8B|MiniCPM-V 4.5-8B|GLM-4.1V-9B|
|:-|:-|:-|:-|
| MMMU (val)|73.4|67.7|68|
| MathVista (mini)|78.4|79.9|80.7|
|AI2D|84|86.5|87.9|
| TextVQA (val)|78.2|82.2|79.6|
| DocVQA (test)|92.3|94.7|93.3|
| OCR Bench|83.2|89|82.3|
| Mantis Eval\*\*|70.5|82.5|\-|
| MMT (val)|66.7|68.3|68.4|
| MME (sum)\*|2380.6|2500|2445.8|
| MMB v1.1 (EN)|79.5|84.2|85.8|
| MMVet (turbo)|83.1|75.5|66.4|
|MMStar|69.3|72.1|72.9|
| HallBench (avg)|54.5|61.2|63.2|
| Video-MME (w/o sub)|66|67.9|68.2|
| Video-MME (w sub)|68.6|73.5|73.6|
| MLVU (M-Avg)|70.2|75.1|71.5|
|LongVideoBench (val total)|62.1|63.9|44|
|Average|73.75|76.51|73.72|
|Geomean|73.15|75.95|72.69| | 2025-08-28T18:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1n2kh2y/battle_of_the_new_multimodal_models_minicpmv_45/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2kh2y | false | null | t3_1n2kh2y | /r/LocalLLaMA/comments/1n2kh2y/battle_of_the_new_multimodal_models_minicpmv_45/ | false | false | self | 23 | null |
Local AI + state machine (yells at Amazon drivers peeing on my house) | 70 | Experimenting with state machines and LLMs in local pipelines. The LLM handles perception fuzziness (natural language, vision, edge cases), while the state machine enforces deterministic control flow. The combo makes agents way more reliable than just letting an LLM run solo.
Motivation for this latest test: Amazon drivers legit keep peeing on my house. So I wired up a workflow where the AI watches a live video feed. If it detects someone urinating in my driveway, the state machine flips the app from passive mode (just watching) into active mode (video + audio ingestion, \~1s TTS out), at which point it verbally shames them in real-time.
Some observations:
* **Conditional state changes:** Instead of always-on chatter, the LLM only activates when the state machine sees a trigger event. This makes it more deterministic and predictable.
* **Division of labor:** LLM handles perception + reasoning on noisy inputs. State machine handles orchestration + gating when/what gets executed.
* **Flexibility:** The detection logic can be swapped out easily, so the same workflow could be used for different scenarios like spotting trespassing, logging deliveries, or recognizing gestures.
* **Weak spots:** Detection can hallucinate/miss under odd angles and lighting. Convo quality is hit-or-miss and depends on the model used.
I used GPT for reasoning in this demo, but it could easily be swapped for Qwen to keep everything 100% local.
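As a rough sketch of the pattern (not the actual app code; the detector and responder callables are placeholders):

```python
from enum import Enum, auto

class Mode(Enum):
    PASSIVE = auto()   # just watching frames
    ACTIVE = auto()    # video + audio ingestion, TTS output enabled

class Agent:
    """The state machine gates when the expensive LLM path may run."""
    def __init__(self, detect, respond):
        self.mode = Mode.PASSIVE
        self.detect = detect     # fuzzy perception: a VLM classifies the frame
        self.respond = respond   # expensive path: reasoning + ~1s TTS out

    def step(self, frame):
        if self.mode is Mode.PASSIVE:
            if self.detect(frame):           # trigger event observed
                self.mode = Mode.ACTIVE
        elif self.mode is Mode.ACTIVE:
            if self.detect(frame):
                self.respond(frame)          # shaming only runs in ACTIVE
            else:
                self.mode = Mode.PASSIVE     # offender gone, go quiet again
        return self.mode
```

Since the model only lives behind the `detect` and `respond` callables, swapping GPT for Qwen touches those two functions and nothing else; the deterministic transitions stay identical.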
TL;DR
AI Urination Detection: not the hero we wanted, but the hero we needed. | 2025-08-28T18:28:33 | https://v.redd.it/257gigswwslf1 | Weary-Wing-6806 | /r/LocalLLaMA/comments/1n2k6st/local_ai_state_machine_yells_at_amazon_drivers/ | 1970-01-01T00:00:00 | 0 | {} | 1n2k6st | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/257gigswwslf1/DASHPlaylist.mpd?a=1759127319%2CNjQwMzVlMzVjMDE1OGQyZWVhMTM5MjA1MGM0NDFiMmM4ODFkNjM2MDQwOWM0MDU2OTE5NjI2MTM3MTEwNzAwNA%3D%3D&v=1&f=sd', 'duration': 117, 'fallback_url': 'https://v.redd.it/257gigswwslf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/257gigswwslf1/HLSPlaylist.m3u8?a=1759127319%2CZmNiODYyZGVhNjU5ZjMwYmE2N2JhZmI1NWJjNzRhZTkyOTA4YTdjZGIzY2RkY2JjNjUyZWViZjE3OWU4NThjYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/257gigswwslf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n2k6st | /r/LocalLLaMA/comments/1n2k6st/local_ai_state_machine_yells_at_amazon_drivers/ | false | false | 70 | {'enabled': False, 'images': [{'id': 'dXM1ZWM1c3d3c2xmMZj5V4nY1VQiFgNlKq8PGxD_fB9khJueOQN3FmEXQ4it', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dXM1ZWM1c3d3c2xmMZj5V4nY1VQiFgNlKq8PGxD_fB9khJueOQN3FmEXQ4it.png?width=108&crop=smart&format=pjpg&auto=webp&s=ebed2bce205860f132bf19ceada29357b61d4304', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dXM1ZWM1c3d3c2xmMZj5V4nY1VQiFgNlKq8PGxD_fB9khJueOQN3FmEXQ4it.png?width=216&crop=smart&format=pjpg&auto=webp&s=66da17fb444beb65eb166343b1959009f8db58ae', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dXM1ZWM1c3d3c2xmMZj5V4nY1VQiFgNlKq8PGxD_fB9khJueOQN3FmEXQ4it.png?width=320&crop=smart&format=pjpg&auto=webp&s=9087878a2332c0e2c19b0cbeb011194f170c46a8', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/dXM1ZWM1c3d3c2xmMZj5V4nY1VQiFgNlKq8PGxD_fB9khJueOQN3FmEXQ4it.png?width=640&crop=smart&format=pjpg&auto=webp&s=f78c3cee3b25edafcac33ad0f2628d58ab367aae', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dXM1ZWM1c3d3c2xmMZj5V4nY1VQiFgNlKq8PGxD_fB9khJueOQN3FmEXQ4it.png?width=960&crop=smart&format=pjpg&auto=webp&s=ab06181c2c76e7359cca81398a78f0f5059c241b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dXM1ZWM1c3d3c2xmMZj5V4nY1VQiFgNlKq8PGxD_fB9khJueOQN3FmEXQ4it.png?width=1080&crop=smart&format=pjpg&auto=webp&s=53fe856df3df1e941532f939592988fff73caaf3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dXM1ZWM1c3d3c2xmMZj5V4nY1VQiFgNlKq8PGxD_fB9khJueOQN3FmEXQ4it.png?format=pjpg&auto=webp&s=6a8a642bceaf1790f3d53fa78f13b51221f1aa43', 'width': 1920}, 'variants': {}}]} | |
Gpt-oss Fine-tuning - now with 60K context length and fits on <13GB VRAM | 541 | Hey guys we've got LOTS of updates for gpt-oss training today! We’re excited to introduce [Unsloth](https://github.com/unslothai/unsloth) Flex Attention support for OpenAI gpt-oss training that enables **>8× longer context lengths, >50% less VRAM usage and >1.5× faster training** vs. all implementations including those using Flash Attention 3 (FA3). Unsloth Flex Attention makes it possible to train with a 60K context length on just 80GB of VRAM for BF16 LoRA. Also:
1. You can now export/save your QLoRA fine-tuned gpt-oss model to llama.cpp, vLLM, Ollama or HF
2. We fixed gpt-oss training losses going to infinity on float16 GPUs (like T4 Colab)
3. We fixed gpt-oss implementation issues irrelevant to Unsloth, most notably ensuring that swiglu_limit = 7.0 is properly applied during MXFP4 inference in transformers
4. Unsloth Flex Attention scales with context, longer sequences yield bigger savings in both VRAM and training time
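Point 4 also checks out with napkin math: a naive kernel materializes an O(n²) attention score matrix, while a Flex/chunked kernel only holds one block of query rows at a time, so its working set grows linearly with context. A rough illustration (the head count, dtype size, and block size below are illustrative, not gpt-oss's actual configuration):

```python
def full_attn_scores_gib(seq_len, n_heads=64, bytes_per=2):
    """Memory to materialize the full [heads, seq, seq] score matrix."""
    return n_heads * seq_len * seq_len * bytes_per / 2**30

def blockwise_attn_scores_gib(seq_len, n_heads=64, bytes_per=2, block=1024):
    """A flex/chunked kernel only holds [heads, block, seq] at once."""
    return n_heads * block * seq_len * bytes_per / 2**30

for n in (8_192, 16_384, 61_440):
    print(f"{n:>6} tokens: full {full_attn_scores_gib(n):8.1f} GiB, "
          f"blockwise {blockwise_attn_scores_gib(n):6.2f} GiB")
```

Quadrupling the score memory every time the context doubles is exactly why the savings in point 4 grow with sequence length.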
🦥 Would highly recommend you guys to read our blog which has all the bug fixes, guides, details, explanations, findings etc. and it'll be really educational: https://docs.unsloth.ai/basics/long-context-gpt-oss-training
We'll likely release our gpt-oss training notebook with direct saving capabilities to GGUF, llama.cpp next week.
And we'll be releasing third-party Aider polyglot benchmarks for DeepSeek-V3.1 next week. You guys will be amazed at how well IQ1_M performs!
And next week we might have a great new update for RL! 😉
Thanks guys for reading and hope you all have a lovely Friday and long weekend,
Daniel! 🦥 | 2025-08-28T18:12:00 | danielhanchen | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n2jraj | false | null | t3_1n2jraj | /r/LocalLLaMA/comments/1n2jraj/gptoss_finetuning_now_with_60k_context_length_and/ | false | false | 541 | {'enabled': True, 'images': [{'id': 'xaGwXH7t-nMjob0Keg_T4WbnrBy-0X2PU0c7gConrG0', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/rwu8gezzwslf1.jpeg?width=108&crop=smart&auto=webp&s=8c80fc0822f0db9a6012ddfcc741a6fabdffdfdb', 'width': 108}, {'height': 236, 'url': 'https://preview.redd.it/rwu8gezzwslf1.jpeg?width=216&crop=smart&auto=webp&s=7dfab4e2755b23e07e7ac29d937e6ec099def422', 'width': 216}, {'height': 350, 'url': 'https://preview.redd.it/rwu8gezzwslf1.jpeg?width=320&crop=smart&auto=webp&s=19b96dc28b9eab653d7cc22deb09274a8bfc1463', 'width': 320}, {'height': 700, 'url': 'https://preview.redd.it/rwu8gezzwslf1.jpeg?width=640&crop=smart&auto=webp&s=01d59299286be897d49e1da4b5b96ae312e88050', 'width': 640}, {'height': 1050, 'url': 'https://preview.redd.it/rwu8gezzwslf1.jpeg?width=960&crop=smart&auto=webp&s=8edd8a433414fd0837bfa4b1de4a26f1193a97e0', 'width': 960}, {'height': 1181, 'url': 'https://preview.redd.it/rwu8gezzwslf1.jpeg?width=1080&crop=smart&auto=webp&s=3e2e9eeb1c1bbb1aa2c53105443e5b68aaf29924', 'width': 1080}], 'source': {'height': 2800, 'url': 'https://preview.redd.it/rwu8gezzwslf1.jpeg?auto=webp&s=466d4fa60db9100b544f5bac1ff94f08148393cc', 'width': 2560}, 'variants': {}}]} | ||
[EMNLP 2025] CCPS: Confidence from Consistency under Perturbation of States — Superior Calibration Performance Across Benchmarks/Models | 8 | Hi everyone,
Our paper **“*****Confidence from Consistency under Perturbation of States (CCPS)*****”** was accepted to the **EMNLP 2025 Main Conference**, placing in the **top 15% of accepted papers** with a **final meta-review rating of 9 (strong accept)**.
# 🔍 Motivation
LLMs don’t just make mistakes, they’re often confidently wrong. That’s fine when asking for trivia, but risky in domains like healthcare and finance. Reliable confidence estimation is critical for safe deployment.
# ✨ What is CCPS?
CCPS looks at the hidden states of an LLM. We apply small perturbations to the final hidden representations and observe how stable the prediction is:
* If the answer remains stable → the model was truly confident.
* If the answer flips → the confidence was unreliable.
This approach is simple, efficient, and does not require fine-tuning the base LLM.
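A toy sketch of the core loop (pure Python with a made-up 2-class linear head; the real method perturbs an LLM's final hidden states, and the noise scale and sample count here are illustrative):

```python
import random

def confidence_from_perturbation(hidden, head_w, n_perturb=200, sigma=0.05, seed=0):
    """Fraction of small hidden-state perturbations that leave the argmax
    prediction unchanged; stable answers imply high confidence."""
    rng = random.Random(seed)

    def predict(h):
        # toy linear head: logits = W @ h, prediction = argmax
        logits = [sum(wi * hi for wi, hi in zip(row, h)) for row in head_w]
        return max(range(len(logits)), key=logits.__getitem__)

    base = predict(hidden)
    stable = 0
    for _ in range(n_perturb):
        noisy = [h + rng.gauss(0.0, sigma) for h in hidden]
        stable += predict(noisy) == base
    return stable / n_perturb

w = [[1.0, 0.0], [0.0, 1.0]]                          # toy 2-class head
print(confidence_from_perturbation([2.0, 0.1], w))    # far from boundary: ~1.0
print(confidence_from_perturbation([1.0, 1.0], w))    # on the boundary: ~0.5
```

The sketch shows why stability under perturbation tracks distance from the decision boundary; in the paper the same signal is extracted from the LLM's own representations rather than a toy head.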
# 📊 Results
Across LLaMA, Mistral, and Qwen on MMLU and MMLU-Pro, CCPS outperformed prior methods like LitCab and Calibration Tuning (CT):
* **Calibration**: Error cut by more than 50%, down to \~4.5% on the toughest benchmarks.
* **Discrimination**: More accurate at telling right vs. wrong answers than prior SOTA (LitCab, CT, etc.).
* **Performance**: Boosts accuracy and robustness, all without fine-tuning the base LLM.
# 💡 Why it matters
CCPS delivers more reliable, better-calibrated LLMs, models that don’t just generate answers but also provide trustworthy confidence signals. This is key for high-stakes AI applications, especially in the medical and finance industries.
# 📎 Resources
* 📄 Paper: [arXiv link](https://arxiv.org/abs/2505.21772)
* 💻 Code: [GitHub repo](https://github.com/ledengary/CCPS)
* 📊 Data: [HF Dataset](https://huggingface.co/datasets/ledengary/CCPS)
Happy to hear feedback, especially from anyone working on calibration, verifiers (for RL), or LLM deployment. | 2025-08-28T18:11:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n2jqym/emnlp_2025_ccps_confidence_from_consistency_under/ | erfan_mhi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2jqym | false | null | t3_1n2jqym | /r/LocalLLaMA/comments/1n2jqym/emnlp_2025_ccps_confidence_from_consistency_under/ | false | false | self | 8 | null |
Gpt-oss Fine-tuning - now with 60K context length and fits on <13GB VRAM | 1 |
Hey guys we've got LOTS of updates for gpt-oss training today! We’re excited to introduce [Unsloth](https://github.com/unslothai/unsloth) Flex Attention support for OpenAI gpt-oss training that enables *>8× longer context lengths, >50% less VRAM usage and >1.5× faster training* vs. all implementations including those using Flash Attention 3 (FA3). Unsloth Flex Attention makes it possible to train with a 60K context length on just 80GB of VRAM for BF16 LoRA. Also:
1. You can now export/save your QLoRA fine-tuned gpt-oss model to llama.cpp, vLLM, Ollama or HF
2. We fixed gpt-oss training losses going to infinity on float16 GPUs (like T4 Colab)
3. We fixed gpt-oss implementation issues irrelevant to Unsloth, most notably ensuring that swiglu_limit = 7.0 is properly applied during MXFP4 inference in transformers
Unsloth Flex Attention scales with context, longer sequences yield bigger savings in both VRAM and training time
🦥
Would highly recommend you guys to read our blog which has all the bug fixes, guides, details, explanations, findings etc. and it'll be really educational: https://docs.unsloth.ai/basics/long-context-gpt-oss-training
We'll likely release our gpt-oss training notebook with direct saving capabilities to GGUF, llama.cpp next week.
And we'll be releasing third-party Aider polyglot benchmarks for DeepSeek-V3.1 next week. You guys will be amazed at how well IQ1_M performs!
And next week we might have a great new update for RL! 😉
Thanks guys for reading and hope you all have a lovely Friday and long weekend,
Daniel! 🦥 | 2025-08-28T18:07:59 | danielhanchen | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n2jngk | false | null | t3_1n2jngk | /r/LocalLLaMA/comments/1n2jngk/gptoss_finetuning_now_with_60k_context_length_and/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'fNGANDGKux4bJag8wLrG6n66jzg_UaneAeAiTxyRnXg', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/dyw3b93awslf1.jpeg?width=108&crop=smart&auto=webp&s=13b46d8bd6f210a7d849dfe331ee5af72aee5e2e', 'width': 108}, {'height': 236, 'url': 'https://preview.redd.it/dyw3b93awslf1.jpeg?width=216&crop=smart&auto=webp&s=1c6d8bd16a126e204379a4bfac2b01b341f7f127', 'width': 216}, {'height': 350, 'url': 'https://preview.redd.it/dyw3b93awslf1.jpeg?width=320&crop=smart&auto=webp&s=61fc5a60a996030dcad0db1b8713efe500987f79', 'width': 320}, {'height': 700, 'url': 'https://preview.redd.it/dyw3b93awslf1.jpeg?width=640&crop=smart&auto=webp&s=2bf5e5a434bd070b0855ebece49d8f9f745a1fd0', 'width': 640}, {'height': 1050, 'url': 'https://preview.redd.it/dyw3b93awslf1.jpeg?width=960&crop=smart&auto=webp&s=1a0c8b9100396798a659c12e164ee65d3cfd8923', 'width': 960}, {'height': 1181, 'url': 'https://preview.redd.it/dyw3b93awslf1.jpeg?width=1080&crop=smart&auto=webp&s=c0e351efcf0ba042db206f656b414ea7a5b76e74', 'width': 1080}], 'source': {'height': 2800, 'url': 'https://preview.redd.it/dyw3b93awslf1.jpeg?auto=webp&s=fcab23afa85191c6901c5978c5fdf81eb80825f2', 'width': 2560}, 'variants': {}}]} | ||
L3.3-Ignition-v0.1-70B - New Model Merge | 19 | Ignition v0.1 is a Llama 3.3-based model merge designed for **creative roleplay** and **fiction writing** purposes. The model underwent a multi-stage merge process designed to optimise for creative writing capability, minimising slop, and improving coherence when compared with its constituent models.
The model shows a preference for **detailed character cards** and is **sensitive to system prompting**. If you want a specific behavior from the model, prompt for it directly.
Inferencing has been tested at fp8 and fp16, and **both are coherent up to \~64k context**.
I'm running the following sampler settings. If you find the model isn't working at all, try these to see if the problem is your settings:
**Prompt Template**: Llama 3
**Temperature**: 0.75 (this model runs pretty hot)
**Min-P**: 0.03
**Rep Pen**: 1.03
**Rep Pen Range**: 1536
High temperature settings (above 0.8) tend to create less coherent responses.
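For anyone driving the GGUF through llama.cpp's HTTP server, the settings above translate to a completion payload along these lines (a sketch: the field names follow llama.cpp's server API as I understand it, and the prompt and endpoint are placeholders):

```python
import json

# Sampler settings from above, expressed as a llama.cpp /completion payload.
payload = {
    "prompt": "<|begin_of_text|>...",  # Llama 3 template goes here
    "temperature": 0.75,               # model runs hot; >0.8 degrades coherence
    "min_p": 0.03,
    "repeat_penalty": 1.03,
    "repeat_last_n": 1536,             # rep pen range
    "n_predict": 512,
}
print(json.dumps(payload, indent=2))
# POST this to e.g. http://localhost:8080/completion
```

Frontends like SillyTavern expose the same four knobs under slightly different labels, so the values carry over directly.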
Huggingface: [https://huggingface.co/invisietch/L3.3-Ignition-v0.1-70B](https://huggingface.co/invisietch/L3.3-Ignition-v0.1-70B)
GGUF: [https://huggingface.co/mradermacher/L3.3-Ignition-v0.1-70B-GGUF](https://huggingface.co/mradermacher/L3.3-Ignition-v0.1-70B-GGUF)
GGUF (iMat): [https://huggingface.co/mradermacher/L3.3-Ignition-v0.1-70B-i1-GGUF](https://huggingface.co/mradermacher/L3.3-Ignition-v0.1-70B-i1-GGUF) (SOON) | 2025-08-28T17:41:28 | https://www.reddit.com/r/LocalLLaMA/comments/1n2ixp0/l33ignitionv0170b_new_model_merge/ | realechelon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2ixp0 | false | null | t3_1n2ixp0 | /r/LocalLLaMA/comments/1n2ixp0/l33ignitionv0170b_new_model_merge/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'f1OwmI4PKn-Oa5JDWenKgB0koCYFz7mFL2sTEfxXDCU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/f1OwmI4PKn-Oa5JDWenKgB0koCYFz7mFL2sTEfxXDCU.png?width=108&crop=smart&auto=webp&s=e388e944415eda9194f5cd4ee04a1b03e8673493', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/f1OwmI4PKn-Oa5JDWenKgB0koCYFz7mFL2sTEfxXDCU.png?width=216&crop=smart&auto=webp&s=2a40870ae444797138bd62646d8a16bc34aad781', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/f1OwmI4PKn-Oa5JDWenKgB0koCYFz7mFL2sTEfxXDCU.png?width=320&crop=smart&auto=webp&s=f75be69e98b6b452db3f536b8780765ac35d977b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/f1OwmI4PKn-Oa5JDWenKgB0koCYFz7mFL2sTEfxXDCU.png?width=640&crop=smart&auto=webp&s=171d908163512d334eab2201bf743e664b75a1be', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/f1OwmI4PKn-Oa5JDWenKgB0koCYFz7mFL2sTEfxXDCU.png?width=960&crop=smart&auto=webp&s=39c2b1529ce9591419b1c478df4aeac47dad7735', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/f1OwmI4PKn-Oa5JDWenKgB0koCYFz7mFL2sTEfxXDCU.png?width=1080&crop=smart&auto=webp&s=d9b9fe6f0aa9ab6a8ee003b8ee0f6af944f9d6a5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/f1OwmI4PKn-Oa5JDWenKgB0koCYFz7mFL2sTEfxXDCU.png?auto=webp&s=d733332a356deca7c1348baa2e0d8081a1f1dadd', 'width': 1200}, 'variants': {}}]} |
Battle of the new Multi-Modal models: MiniCPM-V 4.5 8B vs InternVL3.5 8B | 1 | New multimodal models based off Qwen3, from the MiniCPM and InternVL teams, were released very recently (just a few days ago), which got me interested and wondering which is better.
Unfortunately, InternVL3.5's model card did not include benchmark results for the 8B model; they only posted results for the 30B-A3B and 240B-A20B models, which makes it hard to compare their 8B model to MiniCPM-V 4.5 8B. Doing a little digging and reading through their paper on arXiv ([https://arxiv.org/html/2508.18265v1](https://arxiv.org/html/2508.18265v1)), I was able to find some benchmark results for the 8B model, but only two of them were directly comparable to the results on MiniCPM's model card.
|Model| MMMU (val) | MathVista (mini) |
|:-|:-|:-|
|InternVL3.5-8B|73.4|78.4|
|MiniCPM-V 4.5-8B|67.7|79.9|
Yeah, not really much to go off of. I hope some others can share their experiences with these two models and their testing. And hopefully, InternVL will finally run their wider suite of tests for their 8B model too.
| 2025-08-28T17:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n2i4mj/battle_of_the_new_multimodal_models_minicpmv_45/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2i4mj | false | null | t3_1n2i4mj | /r/LocalLLaMA/comments/1n2i4mj/battle_of_the_new_multimodal_models_minicpmv_45/ | true | false | self | 1 | null |
glm mini will be comming | 329 | 2025-08-28T17:05:29 | untanglled | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n2hyt2 | false | null | t3_1n2hyt2 | /r/LocalLLaMA/comments/1n2hyt2/glm_mini_will_be_comming/ | false | false | default | 329 | {'enabled': True, 'images': [{'id': 'h1ss59p4lslf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/h1ss59p4lslf1.jpeg?width=108&crop=smart&auto=webp&s=89fb953b389e54849945e56bd4b7872edb83ad4e', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/h1ss59p4lslf1.jpeg?width=216&crop=smart&auto=webp&s=c8e9bc69fe9e959f381a5d11b68d14ca488d5d4f', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/h1ss59p4lslf1.jpeg?width=320&crop=smart&auto=webp&s=3b23351735131bc04fb1485da55d471a162ef37d', 'width': 320}, {'height': 355, 'url': 'https://preview.redd.it/h1ss59p4lslf1.jpeg?width=640&crop=smart&auto=webp&s=4d8d73abbfbb1def80b73cdd1845129f4a319098', 'width': 640}, {'height': 533, 'url': 'https://preview.redd.it/h1ss59p4lslf1.jpeg?width=960&crop=smart&auto=webp&s=874102e7ae603908bbfa32d7c1770f926a796e56', 'width': 960}, {'height': 600, 'url': 'https://preview.redd.it/h1ss59p4lslf1.jpeg?width=1080&crop=smart&auto=webp&s=7b88f9a969bd02d76628557705fc0ebc31b8f712', 'width': 1080}], 'source': {'height': 700, 'url': 'https://preview.redd.it/h1ss59p4lslf1.jpeg?auto=webp&s=712a873002cd5a9edb007a8eb6a83bbc2f462a7d', 'width': 1260}, 'variants': {}}]} | ||
Self-Hosting a Writing Assistant Help | 0 | I'm looking to self-host a tool that can help me edit and brainstorm on my writing projects. The ideal solution must be uncensored, as my work often involves heavy topics that are frequently censored.
**Hardware Specifications**
CPU: Ryzen 5 3600
GPU: Radeon RX 5700
RAM: 16 GB
OS: Windows 10
I've tried a few solutions already, including Llama 3 Dolphin, but have encountered issues such as a failure to perform tasks as requested and nonsensical responses when fed rough drafts. I may have been running older models, though, or running them incorrectly, and I've been interested in DeepSeek and the like.
I am willing to run larger, higher-quality models even if it means longer response times, since I'm definitely more interested in quality than speed and willing to sacrifice the latter for the former. If possible, I would also like the AI to have decent memory, so it can recall previous passages and avoid repetition across edited passages, for example not writing out a character's entire title every other page.
Please let me know if you have any suggestions or recommendations! I feel like I have been doing things incorrectly on my own, and so I’m now starting to reach out for advice. | 2025-08-28T16:59:00 | https://www.reddit.com/r/LocalLLaMA/comments/1n2hs7n/selfhosting_a_writing_assistant_help/ | sampanchan4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2hs7n | false | null | t3_1n2hs7n | /r/LocalLLaMA/comments/1n2hs7n/selfhosting_a_writing_assistant_help/ | false | false | self | 0 | null |
Anyone Successfully Converted DotsOCR to GGUF? | 1 | [removed] | 2025-08-28T16:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/1n2hhxy/anyone_successfully_converted_dotsocr_to_gguf/ | NoBlackberry3264 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2hhxy | false | null | t3_1n2hhxy | /r/LocalLLaMA/comments/1n2hhxy/anyone_successfully_converted_dotsocr_to_gguf/ | false | false | self | 1 | null |
Confusion regarding creating chain of thought dataset for fine tuning Qwen 3 | 1 | I'm writing my master's thesis on "Parameter-Efficient Fine Tuning of Small Language Models to Improve Accuracy and Reduce Memory Consumption in Code Smell Detection in Django". I'm using this paper as an anchor: "Fine-Tuning Large Language Models to Improve Accuracy and Comprehensibility of Automated Code Review" - Southern Cross University.
I'm trying to create a Django code smell dataset with chain of thought for fine-tuning Qwen 3 14B to detect code smells in Django code and generate comments on the issue type, position of the issue, description of the issue, and possibly a solution. The anchor paper uses a customized Alpaca format for its dataset, but I'm not sure which dataset format to use for Qwen 3. Based on "Qwen3: How to Run & Fine-tune" in the Unsloth documentation, it supports both ChatML and Alpaca formats. ChatGPT and Gemini both tell me to use ChatML, since Qwen is trained on that format and it's best for chain of thought, but they don't agree on the template: ChatGPT tells me to use "<think>...</think>" tags in the assistant content, while Gemini tells me not to use them.
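For what it's worth, one common convention (an illustration of the general shape, not an authoritative Qwen spec; the field names and the Django snippet are invented) is to store each single-turn example as a messages list, put the chain of thought inside <think> tags in the assistant turn, and let the tokenizer's chat template render the ChatML special tokens:

```python
import json

# One single-turn training example in "messages" form. A chat template
# (e.g. the tokenizer's apply_chat_template) renders this as ChatML-style
# <|im_start|>role ... <|im_end|> text, so the JSON itself stays readable.
example = {
    "messages": [
        {"role": "user",
         "content": "Detect code smells in this Django view:\n"
                    "def orders(request):\n"
                    "    return render(request, 't.html',"
                    " {'o': Order.objects.all()})"},
        {"role": "assistant",
         "content": "<think>The queryset is unbounded and evaluated in the "
                    "template context; that is a potential performance "
                    "smell.</think>\n"
                    "Issue type: unbounded queryset. Position: line 2. "
                    "Suggested fix: paginate or slice the queryset."},
    ]
}
print(json.dumps(example, indent=2))
```

Whether the <think> span belongs in the training target depends on whether you want the model to emit its reasoning at inference time; if you train on it, the model learns to produce it.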
what i want is single turn conversation like anchor paper but i have hard deciding between Alpaca format or ChatML format for Qwen 3 Finetuning. and i don't understand this <think> tag do i have to include it dataset Json file ? | 2025-08-28T16:26:40 | https://www.reddit.com/r/LocalLLaMA/comments/1n2gxm1/confusion_regarding_creating_chain_of_thought/ | Eshimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2gxm1 | false | null | t3_1n2gxm1 | /r/LocalLLaMA/comments/1n2gxm1/confusion_regarding_creating_chain_of_thought/ | false | false | self | 1 | null |
Confusion on creating Chain of thought Dataset for fine tuning Qwen 3 | 1 | [removed] | 2025-08-28T16:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1n2grye/confusion_on_creating_chain_of_thought_dataset/ | Eshimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2grye | false | null | t3_1n2grye | /r/LocalLLaMA/comments/1n2grye/confusion_on_creating_chain_of_thought_dataset/ | false | false | self | 1 | null |
Why Deno Is Fighting Oracle Over JavaScript’s Name | 1 | 2025-08-28T16:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n2gizo/why_deno_is_fighting_oracle_over_javascripts_name/ | bipin_25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2gizo | false | {'oembed': {'author_name': 'Codedigipt', 'author_url': 'https://www.youtube.com/@codedigiptbiplab', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/cHz0rI7yxN0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Why Deno Is Fighting Oracle Over JavaScript’s Name"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/cHz0rI7yxN0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Why Deno Is Fighting Oracle Over JavaScript’s Name', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1n2gizo | /r/LocalLLaMA/comments/1n2gizo/why_deno_is_fighting_oracle_over_javascripts_name/ | false | false | 1 | null | ||
AMA With Z.AI, The Lab Behind GLM Models | 531 | # AMA with Z.AI — The Lab Behind GLM Models. Ask Us Anything!
Hi r/LocalLLaMA 👋
Today we are having **Z.AI**, the research lab behind the **GLM family of models**. We’re excited to have them open up and answer your questions directly.
Our participants today:
* [**u/zixuanlimit**](https://www.reddit.com/user/zixuanlimit/)
* [**u/Maximum_Can9140**](https://www.reddit.com/user/Maximum_Can9140/)
* [**u/zxdu**](https://www.reddit.com/user/zxdu/)
* [**u/Sengxian**](https://www.reddit.com/user/Sengxian/)
| 2025-08-28T16:10:25 | https://www.reddit.com/r/LocalLLaMA/comments/1n2ghx4/ama_with_zai_the_lab_behind_glm_models/ | XMasterrrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2ghx4 | false | null | t3_1n2ghx4 | /r/LocalLLaMA/comments/1n2ghx4/ama_with_zai_the_lab_behind_glm_models/ | false | true | self | 531 | null |
Hardware for 4 x MI50 | 4 | Looking for any suggestions on a cheap workstation tower that can house 4 MI50 or if I am force to use a 4U server. Also what motherboard can accommodate this. | 2025-08-28T16:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n2gg27/hardware_for_4_x_mi50/ | Lost_Cherry6202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2gg27 | false | null | t3_1n2gg27 | /r/LocalLLaMA/comments/1n2gg27/hardware_for_4_x_mi50/ | false | false | self | 4 | null |
Best & Smallest Model For Commentary | 0 | Hi,
Working on a fun side project to generate live commentary. LLM output to TTS. For context I have a 5070ti.
While I have 16GB of VRAM, my main app is using around 11GB. I'm not sure how much Kokoro TTS is using, but I think it's around 1-2GB.
Currently using mistral:7b-instruct-q4_K_M
Is there a better or smaller model with knowledge of racing that you guys would recommend, one that works under my conditions? My main goal is good commentary + low latency.
If I'm asking for too much, I understand; I just wanted to know if there's something better/faster than mistral:7b-instruct-q4_K_M for my use case. Thanks in advance. | 2025-08-28T16:00:03 | https://www.reddit.com/r/LocalLLaMA/comments/1n2g7sc/best_smallest_model_for_commentary/ | Cinicyal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2g7sc | false | null | t3_1n2g7sc | /r/LocalLLaMA/comments/1n2g7sc/best_smallest_model_for_commentary/ | false | false | self | 0 | null |
Confusion regarding creating Chain of thought dataset for Qwen3 | 1 | I'm writing my master thesis on "Parameter-Efficient Fine Tuning of Small Language Models to Improve Accuracy and Reduce Memory Consumption in Code Smell Detection in Django". im using this paper as anchor : [Fine-Tuning Large Language Models to Improve Accuracy and Comprehensibility of Automated Code Review - Southern Cross University](https://researchportal.scu.edu.au/esploro/outputs/journalArticle/Fine-Tuning-Large-Language-Models-to-Improve/991013222313202368)
I'm trying to create a Django code smell dataset with chain of thought for fine-tuning Qwen 3 14B, to detect code smells in Django code and generate comments on issue type, position of the issue, description of the issue, and possibly a solution.
The anchor paper uses a customized Alpaca format for its dataset, but I'm not sure which dataset format to use for Qwen 3. Based on [Qwen3: How to Run & Fine-tune | Unsloth Documentation](https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune), it uses both the ChatML and Alpaca formats. ChatGPT and Gemini tell me to use ChatML, since Qwen is trained on this format, but neither of them agrees on the template format:
ChatGPT : | 2025-08-28T15:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1n2g6ne/confusion_regarding_creating_chain_of_thought/ | Eshimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2g6ne | false | null | t3_1n2g6ne | /r/LocalLLaMA/comments/1n2g6ne/confusion_regarding_creating_chain_of_thought/ | false | false | self | 1 | null |
[ Removed by Reddit ] | 1 | [removed] | 2025-08-28T15:51:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n2g06a/removed_by_reddit/ | Sure_Explorer_6698 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2g06a | false | null | t3_1n2g06a | /r/LocalLLaMA/comments/1n2g06a/removed_by_reddit/ | false | false | self | 1 | null |
Fixed file size | 1 | I fixed the file list to show file size. :)
https://github.com/DroidSpectre/hf-downloader | 2025-08-28T15:48:53 | https://www.reddit.com/gallery/1n2fx8l | Sure_Explorer_6698 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n2fx8l | false | null | t3_1n2fx8l | /r/LocalLLaMA/comments/1n2fx8l/fixed_file_size/ | false | false | 1 | null | |
audio transcription plus speaker identification? | 3 | I'm trying to transcribe and summarize phone calls that are recorded in stereo. All recordings have 1 channel for near side, and 1 channel for the far side, and usually they are just 2 people on the call, so 1 person per channel, sometimes the remote side may have multiple people on the same channel.
I've seen a few diarization projects based on pyannote, https://github.com/m-bain/whisperX and https://github.com/MahmoudAshraf97/whisper-diarization. It seems counterintuitive to me that they want all the audio on a single mono channel; I'm sure it's for the purpose of giving whisper context. The other issue is that neither of them performs well on Apple silicon due to the lack of MPS support in one of the dependency libraries they both share.
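One option I've been considering: since my channels are already separated, split the stereo recording myself and transcribe each channel independently, labeling near/far side by channel. A rough stdlib-only sketch of the split (the whisper step is only indicated in a comment, not part of the split):

```python
import struct
import wave

def split_stereo(path_or_file):
    """Split a 16-bit PCM stereo WAV into (left, right) lists of samples."""
    with wave.open(path_or_file, "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        frames = w.readframes(w.getnframes())
    # Interleaved little-endian 16-bit samples: L, R, L, R, ...
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return list(samples[0::2]), list(samples[1::2])

# Each mono track could then be written back out as its own WAV and fed to
# whisper separately, e.g. (hypothetical call, whisper not imported here):
#   model.transcribe(left_wav_path)   # near side
#   model.transcribe(right_wav_path)  # far side
# then merge the two transcripts by timestamp, with speaker = channel.
```

That would sidestep diarization entirely for the one-speaker-per-channel case; the multi-speaker far side would still need pyannote or similar.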
Wondering if there are any other options for me? | 2025-08-28T15:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/1n2fwms/audio_transcription_plus_speaker_identification/ | flying_unicorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2fwms | false | null | t3_1n2fwms | /r/LocalLLaMA/comments/1n2fwms/audio_transcription_plus_speaker_identification/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'fjunBAlVK359FeGjk3Ddf2pRYuSwd6jryE04peW17pA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fjunBAlVK359FeGjk3Ddf2pRYuSwd6jryE04peW17pA.png?width=108&crop=smart&auto=webp&s=ef751701eaebd132747fc37ae2ee6c4b64102247', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fjunBAlVK359FeGjk3Ddf2pRYuSwd6jryE04peW17pA.png?width=216&crop=smart&auto=webp&s=4fb01887f72b9b38029bb9e299a398b0f12a64bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fjunBAlVK359FeGjk3Ddf2pRYuSwd6jryE04peW17pA.png?width=320&crop=smart&auto=webp&s=b5b1a36d4761ad8e0794cfd1019cfaff95b56870', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fjunBAlVK359FeGjk3Ddf2pRYuSwd6jryE04peW17pA.png?width=640&crop=smart&auto=webp&s=79ab382a8d07ceba87b5927e92482ac4c811a310', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fjunBAlVK359FeGjk3Ddf2pRYuSwd6jryE04peW17pA.png?width=960&crop=smart&auto=webp&s=24f888274ddbd6e9e865982d1da20c9d6cb28a49', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fjunBAlVK359FeGjk3Ddf2pRYuSwd6jryE04peW17pA.png?width=1080&crop=smart&auto=webp&s=654fc39ecd842010775ca2168ffde9deb4cb278c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fjunBAlVK359FeGjk3Ddf2pRYuSwd6jryE04peW17pA.png?auto=webp&s=759ae65a3363cdb5cd810291a6b35fc8e958d35f', 'width': 1200}, 'variants': {}}]} |
Should we have a website to share prompts and compare outputs? | 3 | I'm surprised there isn't one, and if there is, I don't know about it, but I think it would be interesting to have a tool where you can see the outputs of multiple language models for a single input.
The concept would be that you have a collection of "interesting" prompts, i.e. open-ended or creative questions, or hard questions that only a few models can answer, divided into categories, and the website would automatically evaluate several models on each of them, save the responses and show the results side by side.
I think it would also be useful for smaller model creators as a way to showcase their models, particularly for the sake of avoiding single-number "benchmarks" and letting people see how they actually behave (yes, I'm including NSFW models here).
Ideally, people would be able to submit new prompts and "like" both prompts and responses, which would mean "I think this prompt/response is interesting" (interesting probably being more important here than "good"). Plus they could leave comments describing why they thought that prompt was interesting, etc.
An open question would be what and how many models to evaluate on each prompt, since you obviously have limited money and thousands of potentially interesting models, but it could be proportional to a prompt's score (i.e. more popular prompts get evaluated on more models) or just based on user requests.
Just throwing it out there in case someone wants to build it. | 2025-08-28T15:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1n2fwkr/should_we_have_a_website_to_share_prompts_and/ | Mickenfox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2fwkr | false | null | t3_1n2fwkr | /r/LocalLLaMA/comments/1n2fwkr/should_we_have_a_website_to_share_prompts_and/ | false | false | self | 3 | null |
CohereLabs/command-a-translate-08-2025 · Hugging Face | 4 | 2025-08-28T15:46:21 | https://huggingface.co/CohereLabs/command-a-translate-08-2025 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n2fuwf | false | null | t3_1n2fuwf | /r/LocalLLaMA/comments/1n2fuwf/coherelabscommandatranslate082025_hugging_face/ | false | false | default | 4 | null | |
CohereLabs/command-a-reasoning-08-2025 · Hugging Face | 1 | [deleted] | 2025-08-28T15:45:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1n2fu8p | false | null | t3_1n2fu8p | /r/LocalLLaMA/comments/1n2fu8p/coherelabscommandareasoning082025_hugging_face/ | false | false | default | 1 | null | ||
I've open sourced my commercially used e2e dataset creation + SFT/RL pipeline | 10 | There’s a massive gap in AI education.
There's tons of content to show how to fine-tune LLMs on pre-made datasets.
There's also a lot that shows how to make simple BERT classification datasets.
But...
Almost nothing shows how to build a high-quality dataset for LLM fine-tuning in a real, commercial setting.
I’m open-sourcing the exact end-to-end pipeline I used in production. The output is a social media post generation model that captures your unique writing style.
To make it easily reproducible, I've turned it into a manifest-driven pipeline that turns raw social posts into training-ready datasets for LLMs.
This pipeline will guide you from:
→ Raw JSONL
→ Golden dataset
→ SFT/RL splits
→ Fine-tuning via Unsloth
→ RL
And at the end you'll be ready for inference.
It powered my last SaaS, GrowGlad, and fueled my audience growth from 750 to 6,000 followers in 30 days. In the words of Anthony Pierri, it was the first AI-produced content on this platform that he didn't think was AI-produced.
And that's because the unique approach:
1. Generate the “golden dataset” from raw data
2. Label obvious categorical features (tone, bullets, etc.)
3. Extract non-deterministic features (topic, opinions)
4. Encode tacit human style features (pacing, vocabulary richness, punctuation patterns, narrative flow, topic transitions)
5. Assemble a prompt-completion template an LLM can actually learn from
6. Run ablation studies, permutation/correlation analyses to validate feature impact
7. Train with SFT and GRPO, using custom reward functions that mirror the original features so the model learns why a feature matters, not just that it exists
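To make step 7 concrete, here's a toy version of a style reward that mirrors a couple of the extracted features (feature names and targets below are illustrative, not the exact spec from the repo):

```python
import re

def style_reward(text: str, target: dict) -> float:
    """Score a generation against target style features; 1.0 = exact match.
    Each feature scores 1 / (1 + |got - want|), then scores are averaged."""
    feats = {
        "bullets": text.count("\n- "),                                  # bullet count
        "emoji": len(re.findall(r"[\U0001F300-\U0001FAFF]", text)),     # emoji count
        "words": len(text.split()),                                     # length
    }
    scores = [1.0 / (1.0 + abs(feats.get(k, 0) - want)) for k, want in target.items()]
    return sum(scores) / len(scores)
```

The point is the symmetry: the same bullet/emoji/length extractors that labeled the golden dataset also drive the RL reward, so the model is optimized against the exact features it was labeled with.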
Why this is different:
- It combines feature engineering + LLM fine-tuning/RL in one reproducible repo
- Reward design is symmetric with the feature extractors (tone, bullets, emoji, length, structure, coherence), so optimization matches your data spec
- Clear outputs under data/processed/{RUN_ID}/ with a manifest.json for lineage, signatures, and re-runs
- One command to go from raw JSONL to SFT/DPO splits
This approach has been used in a few VC-backed AI-first startups I've consulted with. If you want to make money with AI products you build, this is it.
Repo: https://github.com/jacobwarren/social-media-ai-engineering-etl | 2025-08-28T15:29:46 | https://www.reddit.com/r/LocalLLaMA/comments/1n2ff06/ive_open_sourced_my_commercially_used_e2e_dataset/ | Big-Helicopter-9356 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2ff06 | false | null | t3_1n2ff06 | /r/LocalLLaMA/comments/1n2ff06/ive_open_sourced_my_commercially_used_e2e_dataset/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'oxIR6QQtIpdc0Ya6LDVDqP5sGPhI_A6yooWZtRPjq20', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oxIR6QQtIpdc0Ya6LDVDqP5sGPhI_A6yooWZtRPjq20.png?width=108&crop=smart&auto=webp&s=aa751554c0ea31fd6d57ef22c4f2ceb12c6a1539', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oxIR6QQtIpdc0Ya6LDVDqP5sGPhI_A6yooWZtRPjq20.png?width=216&crop=smart&auto=webp&s=f8a6418f1cb6e4551876ff03d83d4686a0f039be', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oxIR6QQtIpdc0Ya6LDVDqP5sGPhI_A6yooWZtRPjq20.png?width=320&crop=smart&auto=webp&s=0e050ae948f6d126b022451327695dc9e05cd1c0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oxIR6QQtIpdc0Ya6LDVDqP5sGPhI_A6yooWZtRPjq20.png?width=640&crop=smart&auto=webp&s=5702a5b65c3e536197cb6b7f05fec75e209863e5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oxIR6QQtIpdc0Ya6LDVDqP5sGPhI_A6yooWZtRPjq20.png?width=960&crop=smart&auto=webp&s=ef308bd48307114e30b577662e6ca150302ae41d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oxIR6QQtIpdc0Ya6LDVDqP5sGPhI_A6yooWZtRPjq20.png?width=1080&crop=smart&auto=webp&s=d41142d9fe13e64a69c75a0636dadfe996952e9a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oxIR6QQtIpdc0Ya6LDVDqP5sGPhI_A6yooWZtRPjq20.png?auto=webp&s=77e4849a631f4018360cff37e49990e88a24100b', 'width': 1200}, 'variants': {}}]} |
Any project that runs locally for chat with local LLM + infinite memory? | 0 | I kinda like how ChatGPT has a Memory feature and you can talk to it and it will personalize and know more about what you tell.
Is there any cool project that lets you do the same, kinda like some AI assistant/ friend but for you? Locally and offline?
I tried Open WebUI, but its memory feature makes you type memories in yourself, and if you don’t have amazing specs (I’ve only got 16GB VRAM and 64GB RAM) you can’t run the best models at a fast enough speed. (GLM 4.5 Air is too slow (Q3_K_XL UD), and GPT OSS 120B is a bit faster but still not fast enough and uses up most of my memory.)
I guess I could try (Qwen 3 30bA3b)
Models as small as 8b aren’t satisfying enough to chat to.
So is there any really good project that can auto save memory and you can chat to it like a personal Ai assistant/ friend?
(Why local? Privacy.)
| 2025-08-28T15:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/1n2fbq7/any_project_that_runs_locally_for_chat_with_local/ | OrganicApricot77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2fbq7 | false | null | t3_1n2fbq7 | /r/LocalLLaMA/comments/1n2fbq7/any_project_that_runs_locally_for_chat_with_local/ | false | false | self | 0 | null |
Evaluate any computer-use agent with HUD + OSWorld-Verified | 5 | We integrated Cua with HUD so you can run OSWorld-Verified and other computer-/browser-use benchmarks at scale.
Different runners and logs made results hard to compare. Cua × HUD gives you a consistent runner, reliable traces, and comparable metrics across setups.
Bring your stack (OpenAI, Anthropic, Hugging Face) — or Composite Agents (grounder + planner) from Day 3. Pick the dataset and keep the same workflow.
See the notebook for the code: run OSWorld-Verified (~369 tasks) by XLang Labs to benchmark on real desktop apps (Chrome, LibreOffice, VS Code, GIMP).
Heading to Hack the North? Enter our on-site computer-use agent track — the top OSWorld-Verified score earns a guaranteed interview with a YC partner in the next batch.
Links:
Repo: https://github.com/trycua/cua
Blog: https://www.trycua.com/blog/hud-agent-evals
Docs: https://docs.trycua.com/docs/agent-sdk/integrations/hud
Notebook: https://github.com/trycua/cua/blob/main/notebooks/eval_osworld.ipynb | 2025-08-28T15:16:23 | https://www.reddit.com/r/LocalLLaMA/comments/1n2f25o/evaluate_any_computeruse_agent_with_hud/ | Impressive_Half_2819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2f25o | false | null | t3_1n2f25o | /r/LocalLLaMA/comments/1n2f25o/evaluate_any_computeruse_agent_with_hud/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'RQq1IV3s_hZ7c0xKzRYXzxVo0aUEKQFgYOqr0-FB-fs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RQq1IV3s_hZ7c0xKzRYXzxVo0aUEKQFgYOqr0-FB-fs.png?width=108&crop=smart&auto=webp&s=8c23d558022eb88b28403dadcbbc2835c5788233', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RQq1IV3s_hZ7c0xKzRYXzxVo0aUEKQFgYOqr0-FB-fs.png?width=216&crop=smart&auto=webp&s=55b5bee870bdfec56deed1918ecb459cb0fed1cf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RQq1IV3s_hZ7c0xKzRYXzxVo0aUEKQFgYOqr0-FB-fs.png?width=320&crop=smart&auto=webp&s=8fb85e9dbd14138e0af5ad367cd7c7b3a89677ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RQq1IV3s_hZ7c0xKzRYXzxVo0aUEKQFgYOqr0-FB-fs.png?width=640&crop=smart&auto=webp&s=d652c2060576ddc989cad0ac1ebf39e20e5eaa65', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RQq1IV3s_hZ7c0xKzRYXzxVo0aUEKQFgYOqr0-FB-fs.png?width=960&crop=smart&auto=webp&s=63ff18184348e7fee53485af2f009a2f3ca88ee6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RQq1IV3s_hZ7c0xKzRYXzxVo0aUEKQFgYOqr0-FB-fs.png?width=1080&crop=smart&auto=webp&s=e3a41d5116c1ef40759b51d45bc0e586c3916c96', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RQq1IV3s_hZ7c0xKzRYXzxVo0aUEKQFgYOqr0-FB-fs.png?auto=webp&s=2a8bc6ca424808d5f27fc9253afe60d0e83cf529', 'width': 1200}, 'variants': {}}]} |
CohereLabs/command-a-translate-08-2025 · Hugging Face | 109 | Cohere Labs Command A Translate is an open weights research release of a 111 billion parameter model that achieves state-of-the-art performance on translation quality.
Developed by: [Cohere](https://cohere.com/) and [Cohere Labs](https://cohere.com/research)
* Point of Contact: Cohere For AI: [**Cohere Labs**](https://cohere.com/research)
* License: [CC-BY-NC](https://cohere.com/cohere-labs-cc-by-nc-license), requires also adhering to [**Cohere Lab's Acceptable Use Policy**](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
* Model: command-a-translate-08-2025
* Model Size: 111B
* Context length: 8k input, 8k output | 2025-08-28T15:09:18 | https://huggingface.co/CohereLabs/command-a-translate-08-2025 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n2ev3c | false | null | t3_1n2ev3c | /r/LocalLLaMA/comments/1n2ev3c/coherelabscommandatranslate082025_hugging_face/ | false | false | default | 109 | {'enabled': False, 'images': [{'id': 'eR8XbSOhZiSMjrknKTRQhEYtliTvav81RbiIcBJQlDg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eR8XbSOhZiSMjrknKTRQhEYtliTvav81RbiIcBJQlDg.png?width=108&crop=smart&auto=webp&s=3a5a67e8b2be024e953e9e43c0268a6ab83457c7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eR8XbSOhZiSMjrknKTRQhEYtliTvav81RbiIcBJQlDg.png?width=216&crop=smart&auto=webp&s=6d72a363b2b1a010cf22fe08ab4c5e35691f3e1e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eR8XbSOhZiSMjrknKTRQhEYtliTvav81RbiIcBJQlDg.png?width=320&crop=smart&auto=webp&s=f657938b0434dab82bebd6d914dbc7dd060610f3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eR8XbSOhZiSMjrknKTRQhEYtliTvav81RbiIcBJQlDg.png?width=640&crop=smart&auto=webp&s=3193747f5f1f29e1784d71c482e40d0b96413aa8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eR8XbSOhZiSMjrknKTRQhEYtliTvav81RbiIcBJQlDg.png?width=960&crop=smart&auto=webp&s=7adc589b4ded84cbb2e09093f511f9d3069d99d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eR8XbSOhZiSMjrknKTRQhEYtliTvav81RbiIcBJQlDg.png?width=1080&crop=smart&auto=webp&s=740a30ca8645f02f8ec3b89d466ca7dd165ba22d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eR8XbSOhZiSMjrknKTRQhEYtliTvav81RbiIcBJQlDg.png?auto=webp&s=f861ca6a6c1ca025bf38b7317c0a29fd478100fe', 'width': 1200}, 'variants': {}}]} |
Eagle model compatibility with Qwen3 30B-A3B-2507-thinking? | 6 | Hi all!
I want to improve latency for the qwen3 30b-a3b 2507-thinking by applying speculative decoding.
When I checked the supported model checkpoints on the official EAGLE GitHub, I found only Qwen3-30B-A3B.
Is it possible to use the eagle model of Qwen3-30B-A3B as the draft model for qwen3 30b-a3b 2507-thinking?
P.S : Any performance comparison between medusa and eagle, for qwen3 30b-a3b 2507-thinking? | 2025-08-28T14:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n2dz6v/eagle_model_compatibility_with_qwen3/ | lionsheep24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2dz6v | false | null | t3_1n2dz6v | /r/LocalLLaMA/comments/1n2dz6v/eagle_model_compatibility_with_qwen3/ | false | false | self | 6 | null |
Achieving 80% task completion: Training LLMs to actually USE tools | 20 | I recently worked on a LoRA that improves tool use in LLM. Thought the approach might interest folks here.
The issue I have had when trying to use some of the local LLMs with coding agents is this:
Me: "Find all API endpoints with authentication in this codebase"
LLM: "You should look for @app.route decorators and check if they have auth middleware..."
But I often want it to actually search the files and show me, yet the LLM doesn't trigger a tool-use call.
To fine-tune it for tool use I combined two data sources:
1. **Magpie scenarios** - 5000+ diverse tasks (bug hunting, refactoring, security audits)
2. **Real execution** - Ran these on actual repos (FastAPI, Django, React) to get authentic tool responses
This ensures the model learns both breadth (many scenarios) and depth (real tool behavior).
**Tools We Taught**
- `read_file` - Actually read file contents
- `search_files` - Regex/pattern search across codebases
- `find_definition` - Locate classes/functions
- `analyze_imports` - Dependency tracking
- `list_directory` - Explore structure
- `run_tests` - Execute test suites
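For anyone curious what "calling a tool" looks like at runtime, here's a minimal dispatcher over two of the tools above (the JSON call shape and the in-memory file dict are my simplifications for illustration, not the exact training format):

```python
import json
import re

# Toy corpus standing in for a real repo checkout.
FILES = {
    "payment/processor.py": "raise ValueError('invalid amount')",
    "payment/validator.py": "raise ValueError('unsupported currency')",
}

def search_files(pattern):
    """Return paths whose contents match a regex pattern."""
    return [p for p, src in FILES.items() if re.search(pattern, src)]

def read_file(path):
    """Return the contents of a file, or empty string if missing."""
    return FILES.get(path, "")

TOOLS = {"search_files": search_files, "read_file": read_file}

def dispatch(tool_call_json):
    """Execute one model-emitted call like
    {"tool": "search_files", "args": {"pattern": "ValueError"}}."""
    call = json.loads(tool_call_json)
    return TOOLS[call["tool"]](**call["args"])
```

The fine-tuned model's job is to emit well-formed calls in the right order (search, then read); the harness just dispatches them and feeds results back.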
**Improvements**
- Tool calling accuracy: 12% → 80%
- Correct parameters: 8% → 87%
- Multi-step tasks: 3% → 78%
- End-to-end completion: 5% → 80%
- Tools per task: 0.2 → 3.8
**Real Example**
Task: "Find ValueError in payment module"
The model:
1. Calls `search_files` with pattern "ValueError"
2. Gets 4 matches across 3 files
3. Calls `read_file` on each match
4. Analyzes context
5. Reports: "Found 3 ValueError instances: payment/processor.py:47 for invalid amount, payment/validator.py:23 for unsupported currency..."
**Key Insights**
- Synthetic scenarios alone aren't enough - you need real execution feedback
- Models learn tool chaining patterns (search → read is common)
- Error handling is crucial - we trained on timeouts, missing files, etc.
- Tool usage is strategic, not just syntactic
**Training Details**
- Base: Llama-3.2-1B
- LoRA rank: 32
- 50+ real repos for execution
- 3 epochs, 4096 token context
**Resources**
- [Colab notebook](https://colab.research.google.com/github/codelion/ellora/blob/main/Ellora_Recipe_3_Enhanced_Tool_Calling_and_Code_Understanding.ipynb)
- [Model](https://huggingface.co/codelion/llama-3-2-1b-tool-calling-lora)
- [GitHub](https://github.com/codelion/ellora)
The key for this LoRA was combining synthetic diversity with real execution. Pure synthetic data leads to models that format tool calls correctly but use them inappropriately. Real execution teaches actual tool strategy.
What's your experience with tool-calling models? Any tips for handling complex multi-step workflows? | 2025-08-28T14:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/1n2dmku/achieving_80_task_completion_training_llms_to/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2dmku | false | null | t3_1n2dmku | /r/LocalLLaMA/comments/1n2dmku/achieving_80_task_completion_training_llms_to/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]} |
Getting Started with MCP | 8 | I am hosting a few LLMs locally through llama.cpp and using Open-WebUI to interact with them. I have traditionally used the simple chat feature of Open-WebUI, but recently I started exploring some advanced features like the knowledge base and tools. I realised Open-WebUI tools are not getting updated much nowadays; it seems people are moving towards MCP servers.
I want to understand from the community how to get started with MCP servers — I have absolutely zero knowledge of them. I mostly want to use them to automate some of my local work, like summarizing web searches and documents, analysing emails, helping with coding, etc. I tried a web search, but I found lots of MCP servers and it's confusing which one to use. Is there a single server that can help me with all my work? How do I get started? | 2025-08-28T14:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n2dlxw/getting_started_with_mcp/ | No_Pollution2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2dlxw | false | null | t3_1n2dlxw | /r/LocalLLaMA/comments/1n2dlxw/getting_started_with_mcp/ | false | false | self | 8 | null |
I built a local “second brain” AI that actually remembers everything (321 tests passed) | 851 | For the past months I’ve been building **Kai**, a cognitive operating system that acts like a *second brain*. Unlike ChatGPT or Claude, it doesn’t forget what you tell it.
* 100% local – no cloud, no surveillance
* **Graph-based memory** (3D visualization below)
* Spreading activation → memory retrieval works like a brain
* **321 passing tests** → not a toy prototype
* Learns from *everything you do* on your machine
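For those asking how "memory retrieval works like a brain": spreading activation in a nutshell, as a simplified toy (Kai's real graph is weighted, decaying, and much larger — this is just the idea):

```python
def spread_activation(graph, seeds, decay=0.5, steps=2):
    """graph: node -> list of neighbours; seeds: node -> initial activation.
    Each step, a fraction `decay` of every node's activation flows outward,
    so memories near the cue light up strongly and distant ones faintly."""
    act = dict(seeds)
    for _ in range(steps):
        nxt = dict(act)
        for node, a in act.items():
            for nb in graph.get(node, []):
                nxt[nb] = nxt.get(nb, 0.0) + a * decay
        act = nxt
    # Rank memories by final activation, highest first.
    return sorted(act.items(), key=lambda kv: -kv[1])
```

Querying with a seed node then returns related memories ranked by how strongly the activation reached them, instead of a flat similarity search.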
I’m curious:
* What’s the biggest pain you’ve hit with current AI tools?
* Would you actually use a local AI that builds a persistent memory of your knowledge/work?
Happy to dive into the architecture or share more demos if people are interested.
Here’s a shot of the memory graph growing as I feed it data : | 2025-08-28T14:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1n2djpx/i_built_a_local_second_brain_ai_that_actually/ | IntelligentCause2043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2djpx | false | null | t3_1n2djpx | /r/LocalLLaMA/comments/1n2djpx/i_built_a_local_second_brain_ai_that_actually/ | false | false | self | 851 | null |
LatteReview: a low-code Python package designed to automate systematic literature review processes through AI-powered agents. | 14 | I encountered this project (not mine), it looks really cool:
>LatteReview is a powerful Python package designed to automate academic literature review processes through AI-powered agents. Just like enjoying a cup of latte ☕, reviewing numerous research articles should be a pleasant, efficient experience that doesn't consume your entire day!
**Abstract**
>Systematic literature reviews and meta-analyses are essential for synthesizing research insights, but they remain time-intensive and labor-intensive due to the iterative processes of screening, evaluation, and data extraction. This paper introduces and evaluates LatteReview, a Python-based framework that leverages large language models (LLMs) and multi-agent systems to automate key elements of the systematic review process. Designed to streamline workflows while maintaining rigor, LatteReview utilizes modular agents for tasks such as title and abstract screening, relevance scoring, and structured data extraction. These agents operate within orchestrated workflows, supporting sequential and parallel review rounds, dynamic decision-making, and iterative refinement based on user feedback.
LatteReview's architecture integrates LLM providers, enabling compatibility with both cloud-based and locally hosted models. The framework supports features such as Retrieval-Augmented Generation (RAG) for incorporating external context, multimodal reviews, Pydantic-based validation for structured inputs and outputs, and asynchronous programming for handling large-scale datasets. The framework is available on the GitHub repository, with detailed documentation and an installable package.
* Repo: [https://github.com/PouriaRouzrokh/LatteReview](https://github.com/PouriaRouzrokh/LatteReview)
* Paper: [https://arxiv.org/abs/2501.05468](https://arxiv.org/abs/2501.05468) | 2025-08-28T14:16:06 | https://github.com/PouriaRouzrokh/LatteReview | Balance- | github.com | 1970-01-01T00:00:00 | 0 | {} | 1n2dfb9 | false | null | t3_1n2dfb9 | /r/LocalLLaMA/comments/1n2dfb9/lattereview_a_lowcode_python_package_designed_to/ | false | false | default | 14 | {'enabled': False, 'images': [{'id': 'MLrpxjPePecFrnekjys9fFomerrkkYi_NRDQObRezwk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MLrpxjPePecFrnekjys9fFomerrkkYi_NRDQObRezwk.png?width=108&crop=smart&auto=webp&s=bc0d2b8f18ff000dd85c2a40e008074cf81b7955', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MLrpxjPePecFrnekjys9fFomerrkkYi_NRDQObRezwk.png?width=216&crop=smart&auto=webp&s=9fe28db48572162394ecc282d8dad0d5df8677d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MLrpxjPePecFrnekjys9fFomerrkkYi_NRDQObRezwk.png?width=320&crop=smart&auto=webp&s=20896e1a00caad55cc6bd836a5a0b0901ba959cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MLrpxjPePecFrnekjys9fFomerrkkYi_NRDQObRezwk.png?width=640&crop=smart&auto=webp&s=3eebd7c41c26f22f39345117cd35c1a6c56b43a8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MLrpxjPePecFrnekjys9fFomerrkkYi_NRDQObRezwk.png?width=960&crop=smart&auto=webp&s=86e5d90f81afdf03c07e291bd89d4a8ba8be5099', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MLrpxjPePecFrnekjys9fFomerrkkYi_NRDQObRezwk.png?width=1080&crop=smart&auto=webp&s=8c963bee93b19ae09f76b11358464909eca29974', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MLrpxjPePecFrnekjys9fFomerrkkYi_NRDQObRezwk.png?auto=webp&s=a4e5e63dc80be1034ecce809cb71e053f39d4780', 'width': 1200}, 'variants': {}}]} |
BMwebcast Livestream + Real time AI | 1 | [removed] | 2025-08-28T13:58:04 | https://v.redd.it/11vdkt2mnrlf1 | onerookie | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n2cyf0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/11vdkt2mnrlf1/DASHPlaylist.mpd?a=1758981497%2COTVhYzkwOTc0Y2VkYTAyMjg3MGRiYmRkNmVkNWFkN2ZhMjk4MzUzNmFjMWU5YTViODc4YTVmMzJiZmNmYTYxOA%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/11vdkt2mnrlf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/11vdkt2mnrlf1/HLSPlaylist.m3u8?a=1758981497%2CYWY2MTkyNDdjODFjZDMyOGY2NmZmMDE5MjlkOWU0M2E3YWNkZmIwNjk4OTk4ZDIzOTc3NjYyZjg0YjJlOTM1OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/11vdkt2mnrlf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n2cyf0 | /r/LocalLLaMA/comments/1n2cyf0/bmwebcast_livestream_real_time_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dmk1dm90Mm1ucmxmMWtUyxQ3zrxnqc5RT5RTVIli9ygbw_06Uukj3DIhyGbD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dmk1dm90Mm1ucmxmMWtUyxQ3zrxnqc5RT5RTVIli9ygbw_06Uukj3DIhyGbD.png?width=108&crop=smart&format=pjpg&auto=webp&s=cd912f238ba31dd82fcb37a9bd60827b9f6f47ca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dmk1dm90Mm1ucmxmMWtUyxQ3zrxnqc5RT5RTVIli9ygbw_06Uukj3DIhyGbD.png?width=216&crop=smart&format=pjpg&auto=webp&s=6a58302e45606be3ed75ee4a4f7f15a39776a8c1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dmk1dm90Mm1ucmxmMWtUyxQ3zrxnqc5RT5RTVIli9ygbw_06Uukj3DIhyGbD.png?width=320&crop=smart&format=pjpg&auto=webp&s=50f98fb8dc114e5c598a345d9108f78046fd39b9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dmk1dm90Mm1ucmxmMWtUyxQ3zrxnqc5RT5RTVIli9ygbw_06Uukj3DIhyGbD.png?width=640&crop=smart&format=pjpg&auto=webp&s=5ffb05215579704b7c1af008aae319d0a5c06582', 'width': 640}, {'height': 540, 'url': 
'https://external-preview.redd.it/dmk1dm90Mm1ucmxmMWtUyxQ3zrxnqc5RT5RTVIli9ygbw_06Uukj3DIhyGbD.png?width=960&crop=smart&format=pjpg&auto=webp&s=868514ce5a5e8b608e9e8cf99dd4f6a4dce7b759', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dmk1dm90Mm1ucmxmMWtUyxQ3zrxnqc5RT5RTVIli9ygbw_06Uukj3DIhyGbD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=35d8d6b14a5e132e4b71714a2bdf262482fd95fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dmk1dm90Mm1ucmxmMWtUyxQ3zrxnqc5RT5RTVIli9ygbw_06Uukj3DIhyGbD.png?format=pjpg&auto=webp&s=8c53b081a9794659839999209d5645edfa52f660', 'width': 1920}, 'variants': {}}]} | |
AGI moment for Seed OSS 36B | 0 | So, trying out the relatively new Seed model (at Q6): pretty cute, as it got confused but finally got it right!
https://preview.redd.it/sjjvsvvslrlf1.png?width=1841&format=png&auto=webp&s=34842c6d18e471c917a1b37f9d8d2e345eba7de3
| 2025-08-28T13:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n2ctfj/agi_moment_for_seed_oss_36b/ | Mart-McUH | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2ctfj | false | null | t3_1n2ctfj | /r/LocalLLaMA/comments/1n2ctfj/agi_moment_for_seed_oss_36b/ | false | false | 0 | null | |
Again where behemoth and reasoning model from meta ?? | 279 | 2025-08-28T13:39:03 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n2chrm | false | null | t3_1n2chrm | /r/LocalLLaMA/comments/1n2chrm/again_where_behemoth_and_reasoning_model_from_meta/ | false | false | default | 279 | {'enabled': True, 'images': [{'id': 'xma7ru49krlf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/xma7ru49krlf1.png?width=108&crop=smart&auto=webp&s=892199b2014fc92703f6019a5079c608ef001982', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/xma7ru49krlf1.png?width=216&crop=smart&auto=webp&s=82fad2227f924c5bd98fa7868921be8d0f639b7e', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/xma7ru49krlf1.png?width=320&crop=smart&auto=webp&s=288d41ec7b142689538f2432afc02b2d436b1c8a', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/xma7ru49krlf1.png?width=640&crop=smart&auto=webp&s=214fa2574efffdfe39bf57c819059660b5a2a371', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/xma7ru49krlf1.png?width=960&crop=smart&auto=webp&s=bee46c2c553e12ccdec3fb2881467e058bb14097', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/xma7ru49krlf1.png?width=1080&crop=smart&auto=webp&s=9cd376c716ee2e91fec9548bcfbefbe61b88818f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/xma7ru49krlf1.png?auto=webp&s=d379d465e0af62403825806163c9c8cc6be225ab', 'width': 1920}, 'variants': {}}]} | ||
Bulk schema sources for fine-tuning - need thousands of examples | 1 |
anyone know good places to find massive amounts of training schemas? trying to fine-tune some models and need diverse data structures at scale - especially financial and ecommerce but really any domain works. talking thousands of different schema types here. where do you all typically source your training data schemas from when you need huge variety? | 2025-08-28T13:27:35 | https://www.reddit.com/r/LocalLLaMA/comments/1n2c7xw/bulk_schema_sources_for_finetuning_need_thousands/ | Fragrant-Dog-3706 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2c7xw | false | null | t3_1n2c7xw | /r/LocalLLaMA/comments/1n2c7xw/bulk_schema_sources_for_finetuning_need_thousands/ | false | false | self | 1 | null |
The Maze Protocol: A White Paper on Reverse-Engineered Narrative Cognition | 1 | [removed] | 2025-08-28T13:06:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n2bqg6/the_maze_protocol_a_white_paper_on/ | Few_Chip_873 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2bqg6 | false | null | t3_1n2bqg6 | /r/LocalLLaMA/comments/1n2bqg6/the_maze_protocol_a_white_paper_on/ | false | false | self | 1 | null |
Llama.cpp --verbose | 24 | I've noticed something a bit weird?
Qwen coder famously doesn't work in roo. I used --verbose on LCP to try and capture the exact failure but IT NEVER FAILS WHEN VERBOSE IS ON?!
In fact, it works flawlessly. So flawlessly, I believed Devstral had fixed the chat template for me in one prompt.
Now I feel silly.
How exactly is --verbose smoothing over the chat template difficulties? It feels like verbose enables something extra? | 2025-08-28T12:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n2bdal/llamacpp_verbose/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2bdal | false | null | t3_1n2bdal | /r/LocalLLaMA/comments/1n2bdal/llamacpp_verbose/ | false | false | self | 24 | null |
Qwen / Tongyi Lab launches GUI-Owl & Mobile-Agent-v3 | 100 | Github: https://github.com/X-PLUG/MobileAgent
Full Research Paper: https://arxiv.org/abs/2508.15144 | 2025-08-28T12:39:15 | https://www.reddit.com/gallery/1n2b4et | vibedonnie | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n2b4et | false | null | t3_1n2b4et | /r/LocalLLaMA/comments/1n2b4et/qwen_tongyi_lab_launches_guiowl_mobileagentv3/ | false | false | 100 | null | |
Interactive Game for LLM Application Builders - Test Your LLM Knowledge | 1 | Hi folks, I made a small game to help you test your knowledge of building LLM applications:
[https://shir-man.com/llm-master/](https://shir-man.com/llm-master/)
It’s free and, in my opinion, useful
If you encounter any statements in the test that you disagree with, please share them in the comments
This is the first version; I’ll update it later
P.S. Originally I built it for my colleagues at JetBrains; that is why there is Kotlin in the branding :D | 2025-08-28T12:37:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n2b37t/interactive_game_for_llm_application_builders/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2b37t | false | null | t3_1n2b37t | /r/LocalLLaMA/comments/1n2b37t/interactive_game_for_llm_application_builders/ | false | false | self | 1 | null |
Made everything with Ai (tutorial & prompt in comment) | 0 | **More cool prompts on my profile Free** 🆓
Step 1: you need an image, real or AI-generated (I generated mine with just a logo).

The AI will use it as the inspiration frame.

Step 2: upload your image + the prompt to generate the video.
⏺️ **Here's the Prompt** 👇🏻👇🏻👇🏻
```
Begin with the logo [Ο Λούκουμος] on a clean white background. The first letter 'O' slowly pops out of the logo and transforms into a shiny, sugar-coated donut with sprinkles. A small joyful child, around 5 years old, runs into the frame, laughing, and hugs the giant donut 'O' as if it’s too heavy but fun to hold. The child playfully struggles, then lifts it up proudly. Suddenly, the donut gently floats back into its place inside the word [Ο Λούκουμος], completing the logo again in a magical, glowing effect. End with the full logo shining softly, warm and inviting, with a playful bakery vibe."
```
Edit the prompt accordingly with ai. | 2025-08-28T12:35:06 | https://v.redd.it/wfggtxwv8rlf1 | shadow--404 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n2b182 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/wfggtxwv8rlf1/DASHPlaylist.mpd?a=1758976523%2CMDI0OWIyZmRiMDdmY2UwMGNmZjFiYmI3MDNiMDUyYWM3YWVhNDhlZjUxN2Y5YWY3ZGJkNWQxZDk2MTQxZWE4Yg%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/wfggtxwv8rlf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/wfggtxwv8rlf1/HLSPlaylist.m3u8?a=1758976523%2COGZiYmJmOGYyNDQzNDkwOTBlNjAzZjIxYzA1NWUxMjFkM2MwZDZmM2RkNGQ2YmZjNjA4ZGYyY2UwNzJmMGE5MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wfggtxwv8rlf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1n2b182 | /r/LocalLLaMA/comments/1n2b182/made_everything_with_ai_tutorial_prompt_in_comment/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MnR5NmN0eXY4cmxmMcou4MmMgEflldDgaIZ3hCnyJHO9gDlXRfHSsFpKkO6z', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MnR5NmN0eXY4cmxmMcou4MmMgEflldDgaIZ3hCnyJHO9gDlXRfHSsFpKkO6z.png?width=108&crop=smart&format=pjpg&auto=webp&s=3ecf9f038abf2396da703732e1c1cc157d54b09b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MnR5NmN0eXY4cmxmMcou4MmMgEflldDgaIZ3hCnyJHO9gDlXRfHSsFpKkO6z.png?width=216&crop=smart&format=pjpg&auto=webp&s=0b3c4dc33f6b2d685fb7e580ed0587240dbf0bb3', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/MnR5NmN0eXY4cmxmMcou4MmMgEflldDgaIZ3hCnyJHO9gDlXRfHSsFpKkO6z.png?width=320&crop=smart&format=pjpg&auto=webp&s=f7f838d6d0ba246853400b2e3249378436701a0b', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/MnR5NmN0eXY4cmxmMcou4MmMgEflldDgaIZ3hCnyJHO9gDlXRfHSsFpKkO6z.png?width=640&crop=smart&format=pjpg&auto=webp&s=b4f6438d919791aa78aebf7bd5f71d7099ed18ab', 'width': 640}, {'height': 539, 'url': 
'https://external-preview.redd.it/MnR5NmN0eXY4cmxmMcou4MmMgEflldDgaIZ3hCnyJHO9gDlXRfHSsFpKkO6z.png?width=960&crop=smart&format=pjpg&auto=webp&s=6c7ae1cafb6b5cde600f0d8a13e60161e218e212', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MnR5NmN0eXY4cmxmMcou4MmMgEflldDgaIZ3hCnyJHO9gDlXRfHSsFpKkO6z.png?width=1080&crop=smart&format=pjpg&auto=webp&s=61fe5e0a1b1e9cdc04fdcb88a4b5e36e91fdb52e', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/MnR5NmN0eXY4cmxmMcou4MmMgEflldDgaIZ3hCnyJHO9gDlXRfHSsFpKkO6z.png?format=pjpg&auto=webp&s=6fa27e83b083e440caaebfee94ca5f7d4a255aed', 'width': 1080}, 'variants': {}}]} | |
How much does it cost Google to run Genie 3? | 0 | title | 2025-08-28T12:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n2awag/how_much_does_it_cost_google_to_run_genie_3/ | Timely_Smoke324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2awag | false | null | t3_1n2awag | /r/LocalLLaMA/comments/1n2awag/how_much_does_it_cost_google_to_run_genie_3/ | false | false | self | 0 | null |
Looking for feedback on Exosphere: open source runtime to run reliable agent workflows at scale | 0 | Hey r/LocalLLaMA , I am building Exosphere, an open source runtime for agentic workflows. I would love feedback from folks who are shipping agents in production.
**TLDR**
Exosphere lets you run dynamic graphs of agents and tools with autoscaling, fan out and fan in, durable state, retries, and a live tree view of execution. Built for workloads like deep research, data-heavy pipelines, and parallel tool use. Links in comments.
**What it does**
* Define workflows as Python nodes that can branch at runtime
* Run hundreds or thousands of parallel tasks with backpressure and retries
* Persist every step in a durable State Manager for audit and recovery
* Visualize runs as an execution tree with inputs and outputs
* Push the same graph from laptop to Kubernetes with the same APIs
**Why we built it**
We kept hitting limits with static DAGs and single long prompts. Real tasks need branching, partial failures, queueing, and the ability to scale specific nodes when a spike hits. We wanted an infra-first runtime that treats agents like long running compute with state, not just chat.
**How it works**
* Nodes: plain Python functions or small agents with typed inputs and outputs
* Dynamic next nodes: choose the next step based on outputs at run time
* State Manager: stores inputs, outputs, attempts, logs, and lineage
* Scheduler: parallelizes fan out, handles retries and rate limits
* Autoscaling: scale nodes independently based on queue depth and SLAs
* Observability: inspect every node run with timing and artifacts
**Who it is for**
* Teams building research or analysis agents that must branch and retry
* Data pipelines that call models plus tools across large datasets
* LangGraph or custom agent users who need a stronger runtime to execute at scale
**What is already working**
* Python SDK for nodes and graphs
* Dynamic branching and conditional routing
* Durable state with replays and partial restarts
* Parallel fan out and deterministic fan in
* Basic dashboard for run visibility
**What is rough or in progress**
* More first class data types in the SDK
* Iterative outputs for very large result sets
* Signals like SkipState or TryAfter for smarter control flow
**Example project**
We built an agent called WhatPeopleWant that analyzes Hacker News and posts insights on X every few hours. It runs a large parallel scrape and synthesis flow on Exosphere. Links in comments.
**What I want feedback on**
* Does the graph and node model fit your real workflows
* Must have features for parallel runs that we are missing
* How you handle retries, timeouts, and idempotency today
* What would make you comfortable moving a critical workflow over
* Pricing ideas for a hosted State Manager while keeping the runtime open source
**If you want to try it**
I will drop GitHub, docs, and a quickstart in the comments to keep the post clean. Happy to answer questions and share more design notes. | 2025-08-28T12:15:59 | https://www.reddit.com/r/LocalLLaMA/comments/1n2amal/looking_for_feedback_on_exosphere_open_source/ | jain-nivedit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2amal | false | null | t3_1n2amal | /r/LocalLLaMA/comments/1n2amal/looking_for_feedback_on_exosphere_open_source/ | false | false | self | 0 | null |
built a opensource tool that explores your files with deep research like workflow | 16 | [research workflow](https://preview.redd.it/zzg6frcr4rlf1.png?width=1286&format=png&auto=webp&s=acb614d0a76dc0df628ef411b3066eae5f6eebbe)
[Demo](https://i.redd.it/s7dx6h2t4rlf1.gif)
repo - [https://github.com/Datalore-ai/deepdoc](https://github.com/Datalore-ai/deepdoc)
a while back I released a small open source project and the support it got honestly meant a lot. the feedback and love here keep me building more stuff so thank you for that.
recently I have been working on something new called **DeepDoc**. it follows a deep-research-style workflow, but over local resources instead of the internet. the idea is simple. instead of digging through your own files manually, the tool explores them and hands back a clean report.
you just point it at a directory containing local files like PDF, DOCX, etc. it extracts the text, splits it into chunks, runs semantic search, builds a structure based on your instructions and then writes out a markdown report. each section is built step by step by exploring the right pieces, creating research queries, refining with reflection and finally stitching everything into a structured write-up.
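A toy version of the chunk-and-rank step (bag-of-words cosine stands in for real embeddings; chunk size and tokenization are illustrative, not DeepDoc's actual implementation):

```python
import math
import re
from collections import Counter

def tokens(text):
    # Crude tokenizer: lowercase alphanumeric runs.
    return re.findall(r"[a-z0-9]+", text.lower())

def chunk(text, size=40):
    # Fixed-size word windows; real pipelines use smarter boundaries.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def cosine(a, b):
    ca, cb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query, text, k=2):
    # Rank every chunk against the query, keep the best k.
    return sorted(chunk(text), key=lambda c: cosine(query, c), reverse=True)[:k]

doc = "cats are small mammals. " * 10 + "rockets burn fuel to reach orbit. " * 10
best = top_chunks("how do rockets reach orbit", doc, k=1)
print(best[0][:40])
```

Swapping `cosine` for embedding similarity and feeding `best` to an LLM per section gives the report-building loop described above.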
the result is something that feels like a researched report of your own documents, without you having to scroll, skim or copy-paste.
still early but already works nicely on research papers, reports and even scanned files. planning to push it further soon.
if you want to see what the reports look like just drop a comment or dm me. | 2025-08-28T12:15:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n2alye/built_a_opensource_tool_that_explores_your_files/ | Interesting-Area6418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2alye | false | null | t3_1n2alye | /r/LocalLLaMA/comments/1n2alye/built_a_opensource_tool_that_explores_your_files/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'VjhzNXFqseZP5gshgU7f0eCX58wU9q276mVwsHcHWx8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VjhzNXFqseZP5gshgU7f0eCX58wU9q276mVwsHcHWx8.png?width=108&crop=smart&auto=webp&s=c0f8629d7f1b7e7a23205b01a3b1944e831da662', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VjhzNXFqseZP5gshgU7f0eCX58wU9q276mVwsHcHWx8.png?width=216&crop=smart&auto=webp&s=48fea87e09f34e55d9471916dc3ea1b37489ded6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VjhzNXFqseZP5gshgU7f0eCX58wU9q276mVwsHcHWx8.png?width=320&crop=smart&auto=webp&s=e721303766242e6a97f1ae3444ba6703eb9fb1cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VjhzNXFqseZP5gshgU7f0eCX58wU9q276mVwsHcHWx8.png?width=640&crop=smart&auto=webp&s=fae3fa637225d92705332fa1f59996eb6a85201b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VjhzNXFqseZP5gshgU7f0eCX58wU9q276mVwsHcHWx8.png?width=960&crop=smart&auto=webp&s=2e279a9d114f6d02c464575401478f4ea444bb8f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VjhzNXFqseZP5gshgU7f0eCX58wU9q276mVwsHcHWx8.png?width=1080&crop=smart&auto=webp&s=33afff292be5a4089e926f184b5971581eda5433', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VjhzNXFqseZP5gshgU7f0eCX58wU9q276mVwsHcHWx8.png?auto=webp&s=a4ed53c4d6d1561f219c3009f802d013fc55f11c', 'width': 1200}, 'variants': {}}]} | |
How much does it cost to run Genie 3? | 0 | What is an estimated running cost for Google DeepMind's Genie 3? | 2025-08-28T11:59:19 | https://www.reddit.com/r/LocalLLaMA/comments/1n2a9ru/how_much_does_it_cost_to_run_genie_3/ | Timely_Smoke324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2a9ru | false | null | t3_1n2a9ru | /r/LocalLLaMA/comments/1n2a9ru/how_much_does_it_cost_to_run_genie_3/ | false | false | self | 0 | null |
How to host a 120b custom model in GCP for serverless / pay-per-use inference? Or any best advice | 1 | I want to run inference on an LLM like gpt-oss 120b. I know there are lots of places to get a serverless API, but how can I host a custom gpt-oss in a serverless way? Does GCP have anything for this?

Any method in GCP is best for me; otherwise I think there are serverless providers like RunPod.

Due to some circumstances, GCP is the first choice for me. Any advice will be appreciated; I am new to cloud and stuff | 2025-08-28T11:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n29vzj/how_to_host_120b_custom_model_in_gcp_for/ | Zestyclose-Bug-6278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n29vzj | false | null | t3_1n29vzj | /r/LocalLLaMA/comments/1n29vzj/how_to_host_120b_custom_model_in_gcp_for/ | false | false | self | 1 | null |
How to run inference on an LLM around 120b? The thing is, I need a way to inference it because I might finetune it with DPO. Or is this too much to do?? | 1 | I want to run inference on an LLM like gpt-oss 120b. I know there are lots of places to get a serverless API, but how can I host a custom gpt-oss in a serverless way? Does GCP have anything for this?

Any method in GCP is best for me; otherwise I think there are serverless providers like RunPod.

Due to some circumstances, GCP is the first choice for me. Any advice will be appreciated; I am new to cloud and stuff | 2025-08-28T11:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1n29tdo/how_to_inference_llm_around_120b_for_inference/ | Zestyclose-Bug-6278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n29tdo | false | null | t3_1n29tdo | /r/LocalLLaMA/comments/1n29tdo/how_to_inference_llm_around_120b_for_inference/ | false | false | self | 1 | null |
What is your method for creating large datasets? | 7 | When I finetune a model with data from several pdfs, I first use the "unstructured" library to make the data more easily readable for training and then run the gpt5 API to create a dataset out of the newly structured data. However, this way of doing it is more or less self-taught and I am pretty sure there are better and more efficient ways of doing this. So I would be thankful to learn from you guys :) | 2025-08-28T11:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n29o4d/what_is_your_method_for_creating_large_datasets/ | urmel42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n29o4d | false | null | t3_1n29o4d | /r/LocalLLaMA/comments/1n29o4d/what_is_your_method_for_creating_large_datasets/ | false | false | self | 7 | null |
Are there lists of compatible packages for different versions? (torch, vllm, bitsandbytes) | 2 | Trying this example
```
from vllm import LLM
import torch
model_id = "unsloth/tinyllama-bnb-4bit"
llm = LLM(model=model_id, dtype=torch.bfloat16, trust_remote_code=True)
```
installed vllm==0.10.1.1
got `ImportError: Please install bitsandbytes>=0.46.1`
installed `bitsandbytes==0.46.1`
installing it caused lots of Python package changes (lots of NVIDIA ones too), e.g.
```
- torch==2.7.1+cu128
+ torch==2.8.0
- triton==3.3.1
+ triton==3.4.0
```
New error `ModuleNotFoundError: Could not import module 'ProcessorMixin'. Are this object's requirements defined correctly?` | 2025-08-28T11:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/1n29gf3/are_there_lists_of_compatible_packages_for/ | arstarsta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n29gf3 | false | null | t3_1n29gf3 | /r/LocalLLaMA/comments/1n29gf3/are_there_lists_of_compatible_packages_for/ | false | false | self | 2 | null |
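A sketch of one way to keep the stack consistent (versions taken from the log above; whether these exact pins resolve the `ProcessorMixin` error is untested): let vLLM resolve torch/triton first, then stop bitsandbytes from re-resolving them.

```
# constraints.txt — freeze what vLLM 0.10.1.1 installed (from the log above)
torch==2.7.1+cu128
triton==3.3.1

# install bitsandbytes without letting it bump torch:
#   pip install "bitsandbytes>=0.46.1" -c constraints.txt
# or, more bluntly, skip its dependency resolution entirely:
#   pip install "bitsandbytes>=0.46.1" --no-deps
```

Note that `+cu128` builds only resolve against the CUDA wheel index, so the constraints route may also need the extra index URL vLLM used originally.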
Phantom Fragment | 0 | I think now I can finally publish this as a beta project. Basically I just made some minor changes to the overall AI sandbox; I hope it is okay, and I will keep working on it to make it better. If you guys have any reviews or changes you want implemented, tell me. Here's the URL:
https://github.com/Intro0siddiqui/Phantom-Fragment/tree/main
You guys can read how it works and tell me if you think it needs more improvement or is overworked in some areas, so I can trim it down and make it more usable for us | 2025-08-28T11:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n299p9/phantom_fragment/ | Ok_Horror_8567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n299p9 | false | null | t3_1n299p9 | /r/LocalLLaMA/comments/1n299p9/phantom_fragment/ | false | false | self | 0 | null |
OpenRouter - Privacy settings. Disable Logging | 7 | [Disable the loggin option if you are privacy focussed. ](https://preview.redd.it/b071grmirqlf1.png?width=1748&format=png&auto=webp&s=8a8a12515f3330272d5d651afaf2e4df62fd3bd6)
| 2025-08-28T10:58:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n293h7/openrouter_privacy_settings_disable_logging/ | shoeshineboy_99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n293h7 | false | null | t3_1n293h7 | /r/LocalLLaMA/comments/1n293h7/openrouter_privacy_settings_disable_logging/ | false | false | 7 | null | |
Any benchmark for LLM regarding non-code autocomplete? | 2 | I'm hoping to find a good model to use for my app's autocomplete, but failing to find any non-code autocomplete benchmarks.
It doesn't even have to be one with up-to-date scores; I can run them locally. | 2025-08-28T10:52:13 | https://www.reddit.com/r/LocalLLaMA/comments/1n28zir/any_benchmark_for_llm_regarding_noncode/ | Realm__X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n28zir | false | null | t3_1n28zir | /r/LocalLLaMA/comments/1n28zir/any_benchmark_for_llm_regarding_noncode/ | false | false | self | 2 | null |
I'm proud of my iOS LLM Client. It beats ChatGPT and Perplexity in some narrow web searches. | 0 | I’m developing an iOS app that you guys can test with this link:
https://testflight.apple.com/join/N4G1AYFJ
It’s an LLM client like a bunch of others, but since none of the others have a web search functionality I added a custom pipeline that runs on device.
It prompts the LLM iteratively until it thinks it has enough information to answer. It uses Serper.dev for the actual searches, but scrapes the websites locally. A very light RAG avoids filling the context window.
It works way better than the vanilla search&scrape MCPs we all use. In the screenshots here it beats ChatGPT and Perplexity on the latest information regarding a very obscure subject.
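The app's actual pipeline is not public, but the iterate-until-sufficient loop described above can be sketched like this (the search and judgment functions are stubs standing in for the Serper calls and the on-device LLM):

```python
def iterative_search(question, search_fn, enough_fn, max_rounds=4):
    """Keep issuing searches until the model judges the context sufficient."""
    context, query = [], question
    for _ in range(max_rounds):
        context.extend(search_fn(query))
        verdict = enough_fn(question, context)  # stand-in for an LLM judgment
        if verdict["enough"]:
            break
        query = verdict["next_query"]  # model proposes a refined follow-up query
    return context

# Stubs standing in for Serper search + on-device scraping/RAG.
def fake_search(q):
    return [f"snippet about {q}"]

def fake_judge(question, context):
    # Pretend the model wants two snippets before answering.
    if len(context) >= 2:
        return {"enough": True}
    return {"enough": False, "next_query": question + " details"}

ctx = iterative_search("obscure subject", fake_search, fake_judge)
print(len(ctx))  # → 2
```

The `max_rounds` cap is what keeps an over-curious model from looping forever on a vague question.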
Try it out! Any feedback is welcome!
It works with any OpenAI compatible endpoint. For LMStudio and Ollama with tailscale use “tailscale serve” to get an https endpoint (Apple really doesn’t like http).
Since I like voice prompting I added in settings the option of downloading whisper-v3-turbo on iPhone 13 and newer. It works surprisingly well (10x real time transcription speed).
| 2025-08-28T10:36:02 | Valuable-Run2129 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n28pld | false | null | t3_1n28pld | /r/LocalLLaMA/comments/1n28pld/im_proud_of_my_ios_llm_client_it_beats_chatgpt/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'dBhhPJKf_FbAifdMjkIX-LFVbhvG39aksQNJ1oPcT5A', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/31loizcnnqlf1.jpeg?width=108&crop=smart&auto=webp&s=82f20953b1f9d53d33e10998582714246933a314', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/31loizcnnqlf1.jpeg?width=216&crop=smart&auto=webp&s=7a753dcc86d1c5bd36d8ab90cd577e0b640a5973', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/31loizcnnqlf1.jpeg?width=320&crop=smart&auto=webp&s=c130093d6ec810d156dcd0d3dedd535fbe2baf9a', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/31loizcnnqlf1.jpeg?width=640&crop=smart&auto=webp&s=b83fb37aa74f05bf2638e2ab0e0aa98d394c8795', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/31loizcnnqlf1.jpeg?width=960&crop=smart&auto=webp&s=9dc38786d3449a56c76ca2a74a8e8d7a3fddf9c2', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/31loizcnnqlf1.jpeg?width=1080&crop=smart&auto=webp&s=727cecd122239e5fdbf89e765ede702b777864d4', 'width': 1080}], 'source': {'height': 960, 'url': 'https://preview.redd.it/31loizcnnqlf1.jpeg?auto=webp&s=226acde6683a7bc626a3d87514e81676d8fd8411', 'width': 1280}, 'variants': {}}]} | ||
Sparrow: Custom language model architecture for microcontrollers like the ESP32 | 99 | Hey everyone,
Above is a video of Sparrow LM running on 1 core of the ESP32S3 while another core is dedicated to the webserver/webapp, to showcase a ChatGPT-like system, although of course the models can be used for anything from text to sentiment analysis, time-series analysis and more, depending on how each one is trained.
I've been super focused for a while now on bringing Language Models and complex NLP capabilities to microcontrollers, and have finally been able to finish the architecture and an ML Toolkit that enables training models from scratch with this architecture and easy deployment on almost any MCU.
The architecture uses state-of-the-art methods, with many in-depth optimisations tested through over 1700 trained models, to get the most out of every single memory byte and clock cycle, specifically for MCUs, while also enabling extremely fast responses on PC.
The idea is to have domain-specific and task-specific models, using Sparrow's architecture, instead of a general-purpose frontier model like ChatGPT/Llama etc. In the demo I showcase a Biology-only model, made to give straight answers (as per research papers showcasing that's what people want) for a question-answering chat-like system. Anything can be created. And then, because the model is only 50-200KB depending on how it is built (with twice that needed in total when flashed), multiple models could be loaded in memory and a mixture-of-experts system can be designed. Which is what I want to explore with SPARROW 2.
I still have to see exactly how to proceed in terms of making the code open source, the best licensing method, how to create the API, etc. But the idea is that it would be easy to create language models for MCUs, similar to how scikit-learn is used for regular ML.
It supports encoder, decoder, encoder-decoder models, and the fastest model uses linear attention, but I have also been able to deploy dot attention and additive attention on the ESP32.
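Sparrow's implementation isn't shown here, but the core trick of linear attention — replacing the O(n²) score matrix with running sums over a positive feature map, which is what makes it cheap on an MCU — can be sketched in pure Python (the feature map and dimensions below are illustrative, not Sparrow's actual choices):

```python
def phi(v):
    # Positive feature map; ELU(x)+1 is common, ReLU(x)+1 keeps it simple here.
    return [x + 1.0 if x > 0 else 1.0 for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def linear_attention(Q, K, V):
    # Causal linear attention: keep running sums S = Σ phi(k) v^T and
    # z = Σ phi(k) instead of materializing the full attention matrix.
    d_k, d_v = len(K[0]), len(V[0])
    S = [[0.0] * d_v for _ in range(d_k)]
    z = [0.0] * d_k
    out = []
    for q, k, v in zip(Q, K, V):
        fk = phi(k)
        for i in range(d_k):
            z[i] += fk[i]
            for j in range(d_v):
                S[i][j] += fk[i] * v[j]
        fq = phi(q)
        denom = dot(fq, z)
        out.append([dot(fq, [S[i][j] for i in range(d_k)]) / denom
                    for j in range(d_v)])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[2.0, 0.0], [0.0, 2.0]]
O = linear_attention(Q, K, V)
print([[round(x, 3) for x in row] for row in O])
```

Per-token cost is O(d_k·d_v) with constant memory for the state, which is why this form fits a microcontroller where the quadratic softmax score matrix would not.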
Let me know what you think! I have a lot more videos with the models running on PC with full phrases/paragraphs outputs in less than 10 miliseconds, have different versions Small, Main, Large running on the ESP32S3, have the Main flavour running on the ESP32P4 which can process everything 5-6 times faster due to the intrustions available, and outputting a phrase every 50-100ms, compared to ESP32S3's 300-600ms. | 2025-08-28T10:31:54 | https://v.redd.it/pefagkhgkqlf1 | c-f_i | /r/LocalLLaMA/comments/1n28n3v/sparrow_custom_language_model_architecture_for/ | 1970-01-01T00:00:00 | 0 | {} | 1n28n3v | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pefagkhgkqlf1/DASHPlaylist.mpd?a=1759098721%2CZGFjYzA1MjUwZjk1ODFlNzg4MzA0OGJjYzI2Y2I5MTU1YzgxMmQyMGMwNGM0NWU1YjdiYTY5YjkxNjRlYmU0NQ%3D%3D&v=1&f=sd', 'duration': 89, 'fallback_url': 'https://v.redd.it/pefagkhgkqlf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/pefagkhgkqlf1/HLSPlaylist.m3u8?a=1759098721%2CMzg0ZjM1ZDVlZmM3OGU5MDExYmExY2QzZTliMTU1NDgxNGIxMzU4YTcwZGZmY2E3OTBlNzNiZTdlYjI1OTVkOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pefagkhgkqlf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n28n3v | /r/LocalLLaMA/comments/1n28n3v/sparrow_custom_language_model_architecture_for/ | false | false | 99 | {'enabled': False, 'images': [{'id': 'dnB2ZnpraGdrcWxmMXI7s2J-dYT-lngU-7I3sc5b7CKL3t5WhtAsvCq_0YDI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dnB2ZnpraGdrcWxmMXI7s2J-dYT-lngU-7I3sc5b7CKL3t5WhtAsvCq_0YDI.png?width=108&crop=smart&format=pjpg&auto=webp&s=3f0297e3fc623d3099305d23eac2983208c6d9e2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dnB2ZnpraGdrcWxmMXI7s2J-dYT-lngU-7I3sc5b7CKL3t5WhtAsvCq_0YDI.png?width=216&crop=smart&format=pjpg&auto=webp&s=b67ae896c8ecac0f2c8bf780c0f51345b8642628', 'width': 216}, {'height': 180, 'url': 
'https://external-preview.redd.it/dnB2ZnpraGdrcWxmMXI7s2J-dYT-lngU-7I3sc5b7CKL3t5WhtAsvCq_0YDI.png?width=320&crop=smart&format=pjpg&auto=webp&s=da3b0d1d245e81e946a96326ba4293a0d9cd3348', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dnB2ZnpraGdrcWxmMXI7s2J-dYT-lngU-7I3sc5b7CKL3t5WhtAsvCq_0YDI.png?width=640&crop=smart&format=pjpg&auto=webp&s=ffc3433adcfcf01aedbd15b414aed25fde6684d6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dnB2ZnpraGdrcWxmMXI7s2J-dYT-lngU-7I3sc5b7CKL3t5WhtAsvCq_0YDI.png?width=960&crop=smart&format=pjpg&auto=webp&s=3b9636137dd8972ba415bbc0e60ed06c810931dc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dnB2ZnpraGdrcWxmMXI7s2J-dYT-lngU-7I3sc5b7CKL3t5WhtAsvCq_0YDI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=946cba5217443837713d7c51c77d6ee3cf63a36b', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/dnB2ZnpraGdrcWxmMXI7s2J-dYT-lngU-7I3sc5b7CKL3t5WhtAsvCq_0YDI.png?format=pjpg&auto=webp&s=981bf23d74555f0ba3bf27594dacb9012ac492a7', 'width': 3840}, 'variants': {}}]} | |
I get "request timed out after 60 seconds" in vs code for ollama | 0 | Guys, I have installed ollama and vs code and then installed Cline and Continue. Ollama is working very well but when I try to use it in Cline or Continue, I get "request timed out after 60 seconds" error in Cline and an error as you can see in the screenshot. Everything is done as these videos: [https://www.youtube.com/watch?v=aM0sS5TIaVI](https://www.youtube.com/watch?v=aM0sS5TIaVI) and [https://www.youtube.com/watch?v=P5YXTTS8OFk](https://www.youtube.com/watch?v=P5YXTTS8OFk) Then why doesn't it work for me? please keep in mind that I can use [openrouter.ai](http://openrouter.ai) services via API key and without any problem. | 2025-08-28T10:26:05 | https://www.reddit.com/gallery/1n28jf8 | Aggressive_Mix_4258 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n28jf8 | false | null | t3_1n28jf8 | /r/LocalLLaMA/comments/1n28jf8/i_get_request_timed_out_after_60_seconds_in_vs/ | false | false | 0 | null | |
This is a test about how it works | 1 | [deleted] | 2025-08-28T09:53:41 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1n2801d | false | null | t3_1n2801d | /r/LocalLLaMA/comments/1n2801d/this_is_a_test_about_how_it_works/ | false | false | default | 1 | null | ||
ASUS Ascent GX10 (NVIDIA DGX™ Spark): Is this genuine and decent for LLMs? | 0 | 2025-08-28T09:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n27vhg/asus_ascent_gx10_nvidia_dgx_spark_s_this_genuine/ | Chance-Studio-8242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n27vhg | false | null | t3_1n27vhg | /r/LocalLLaMA/comments/1n27vhg/asus_ascent_gx10_nvidia_dgx_spark_s_this_genuine/ | false | false | 0 | null |
Qwen3 rbit rl finetuned for stromger reasoning | 18 | available now on hugging face and ollama adeelahmad/ReasonableQwen3-4B gguf and mlx | 2025-08-28T09:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/1n27p5g/qwen3_rbit_rl_finetuned_for_stromger_reasoning/ | adeelahmadch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n27p5g | false | null | t3_1n27p5g | /r/LocalLLaMA/comments/1n27p5g/qwen3_rbit_rl_finetuned_for_stromger_reasoning/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'Tq5SZ2LhNi4P80gWp7cBK3BxpOP0xxZxJLXSghN_klo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Tq5SZ2LhNi4P80gWp7cBK3BxpOP0xxZxJLXSghN_klo.png?width=108&crop=smart&auto=webp&s=7ddb6d21604ef5a7ecfb1af6493cfed4a990827d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Tq5SZ2LhNi4P80gWp7cBK3BxpOP0xxZxJLXSghN_klo.png?width=216&crop=smart&auto=webp&s=222891d9406bc2b9c9d405e8d102a2feb9cde3d2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Tq5SZ2LhNi4P80gWp7cBK3BxpOP0xxZxJLXSghN_klo.png?width=320&crop=smart&auto=webp&s=ad61892969cd6452e0609a73d7137fe07725e046', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Tq5SZ2LhNi4P80gWp7cBK3BxpOP0xxZxJLXSghN_klo.png?width=640&crop=smart&auto=webp&s=f43d71084674fa5b261e6592dc3fd3be00e2b720', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Tq5SZ2LhNi4P80gWp7cBK3BxpOP0xxZxJLXSghN_klo.png?width=960&crop=smart&auto=webp&s=377ac7a49bb482e14045f32a60dfd7f85e6891e9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Tq5SZ2LhNi4P80gWp7cBK3BxpOP0xxZxJLXSghN_klo.png?width=1080&crop=smart&auto=webp&s=f8d740cd458e85f62e9ee69a529406b4df86589c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Tq5SZ2LhNi4P80gWp7cBK3BxpOP0xxZxJLXSghN_klo.png?auto=webp&s=76295b45ce2cf1c5921bb773e18a393a190effad', 'width': 1200}, 'variants': {}}]} |
Ollama Model Manager | 0 | Different LLMs available via Ollama have differing translation capabilities depending on the language pair. Users have to test the various models to find the best one for their particular translation task. At the request of our customers we have introduced a Model Manager within the Local AI Translator. Users can now download, install and delete LLMs without leaving the application. For more see https://localai.world. | 2025-08-28T09:33:37 | https://v.redd.it/s7ebq4qzaqlf1 | Connect-Flight8490 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n27odm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s7ebq4qzaqlf1/DASHPlaylist.mpd?a=1758965633%2CNTUwMjYzOWQxMzNiOTEzNWU5ZDhkMGIxNjgzYjAyMWY0NDA3NWYyODQxNTUyNTc0YWIwZDVmNjAwMjUzMWRjMQ%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/s7ebq4qzaqlf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/s7ebq4qzaqlf1/HLSPlaylist.m3u8?a=1758965633%2CMDNjZDcxZDA3NjUxOGY1YTI1NWVmMjA2NWZkMWE4MmU5NTgwZDk5MmY5MjhiOGM1YzRmYzIwMGY2ZDIyZThhYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/s7ebq4qzaqlf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n27odm | /r/LocalLLaMA/comments/1n27odm/ollama_model_manager/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'N3RydGU1cXphcWxmMdQHKsQTZnVK-8vHVOkKD8lts3MOnyXrg3sQpIEPdERI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N3RydGU1cXphcWxmMdQHKsQTZnVK-8vHVOkKD8lts3MOnyXrg3sQpIEPdERI.png?width=108&crop=smart&format=pjpg&auto=webp&s=3a969ba7537f75d5e17df087b1321179b2ea4b8d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/N3RydGU1cXphcWxmMdQHKsQTZnVK-8vHVOkKD8lts3MOnyXrg3sQpIEPdERI.png?width=216&crop=smart&format=pjpg&auto=webp&s=57f3534c8afea3e729d59a348d6296bbf4cf27a3', 'width': 216}, {'height': 180, 'url': 
'https://external-preview.redd.it/N3RydGU1cXphcWxmMdQHKsQTZnVK-8vHVOkKD8lts3MOnyXrg3sQpIEPdERI.png?width=320&crop=smart&format=pjpg&auto=webp&s=35d070e9f5604e30d9c66fcd133d7bec83b0dd99', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/N3RydGU1cXphcWxmMdQHKsQTZnVK-8vHVOkKD8lts3MOnyXrg3sQpIEPdERI.png?width=640&crop=smart&format=pjpg&auto=webp&s=7e8359ef593af3e6a8466a00a38a3f4994b0b948', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/N3RydGU1cXphcWxmMdQHKsQTZnVK-8vHVOkKD8lts3MOnyXrg3sQpIEPdERI.png?width=960&crop=smart&format=pjpg&auto=webp&s=ae662148ec65d248010f1e5f21084638ec73077d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/N3RydGU1cXphcWxmMdQHKsQTZnVK-8vHVOkKD8lts3MOnyXrg3sQpIEPdERI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9f62d725a6c7ce3ff5c649720ae800cf671456c9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/N3RydGU1cXphcWxmMdQHKsQTZnVK-8vHVOkKD8lts3MOnyXrg3sQpIEPdERI.png?format=pjpg&auto=webp&s=1fcaa85a49704353d325c0a3b83c7f7c82e3c73b', 'width': 1920}, 'variants': {}}]} | |
Claude Code in VS Code vs. Claude Code in Cursor | 0 | Hey guys, so I am starting my journey with Claude Code and I wanted to know in which instances you would use Claude Code in VS Code vs. Claude Code in Cursor?
I am not sure and I am deciding between the two. Would really appreciate any input on this. Thanks! | 2025-08-28T08:45:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n26xnz/claude_code_in_vs_code_vs_claude_code_in_cursor/ | redd-dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n26xnz | false | null | t3_1n26xnz | /r/LocalLLaMA/comments/1n26xnz/claude_code_in_vs_code_vs_claude_code_in_cursor/ | false | false | self | 0 | null |
Has anyone implemented a concept-based reasoning system? | 7 | Hey everyone,
I'm working on a chatbot right now and I've hit a pretty clear wall with simple keyword-based reasoning. No matter how complex I make the logic, it still feels like the bot's just fixated on a few words. It's not a fundamental solution.
To make an AI that thinks like a living organism, I think we need it to recognize **concepts**, not just keywords.
For example, instead of treating words like 'travel', 'vacation', and 'flight' as separate things, the bot would group them all into a single **'leisure concept'** vector. This way, if the conversation shifts from 'plane' to 'hotel', the AI doesn't lose the essence of the conversation because the core concept of 'leisure' is still active.
This is roughly how I'd approach the implementation, but has anyone here actually built something like this? How did you do it? | 2025-08-28T08:23:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n26ls5/has_anyone_implemented_a_conceptbased_reasoning/ | Patience2277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n26ls5 | false | null | t3_1n26ls5 | /r/LocalLLaMA/comments/1n26ls5/has_anyone_implemented_a_conceptbased_reasoning/ | false | false | self | 7 | null |
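For what it's worth, the grouping-into-a-concept-vector idea can be sketched in a few lines. Everything below is a toy: the two-dimensional word vectors, the concept names, and the thresholds are made up for illustration, and in practice `embed()` would come from a real sentence-embedding model. The mechanic, though, is the one described above: a concept is the centroid of its member words, and a message is routed to whichever concept centroid it is most cosine-similar to.

```python
import math

# Toy word vectors (hypothetical; a real system would use an embedding model).
# Dimensions are roughly: [travel-ness, food-ness]
WORD_VECS = {
    "travel":  [0.9, 0.1], "vacation": [0.8, 0.2], "flight": [0.9, 0.0],
    "hotel":   [0.7, 0.2], "plane":    [0.9, 0.1],
    "recipe":  [0.0, 0.9], "dinner":   [0.1, 0.8], "cooking": [0.0, 0.9],
}

def embed(text):
    """Average the vectors of known words; zero vector if none match."""
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    if not vecs:
        return [0.0, 0.0]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# A "concept" is just the centroid of its member words.
CONCEPTS = {
    "leisure": embed("travel vacation flight hotel plane"),
    "food":    embed("recipe dinner cooking"),
}

def active_concept(message):
    scores = {name: cosine(embed(message), c) for name, c in CONCEPTS.items()}
    return max(scores, key=scores.get)

# The conversation shifts from "plane" to "hotel", but the concept survives.
print(active_concept("I booked a plane ticket"))        # leisure
print(active_concept("now looking for a hotel"))        # leisure
print(active_concept("what should I make for dinner"))  # food
```

Because "plane" and "hotel" both sit near the leisure centroid, the active concept survives the topic shift even though the keywords never repeat.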
Contextual AI Reranker v2 1B; SequenceClassification (single-logit) Converted Model | 21 | Contextual AI’s reranker v2 is a Qwen3-based multilingual reranker that already behaves like a classifier: the score is the last-token logit for vocab id 0 (next\_logits\[:, 0\]), with BF16 numerics and left padding so the final position is aligned across a batch.
[https://huggingface.co/sigridjineth/ctxl-rerank-v2-1b-seq-cls](https://huggingface.co/sigridjineth/ctxl-rerank-v2-1b-seq-cls)
That design is great for clarity, but the causal-LM interface still exposes a full vocab projection, which isn’t ideal for CrossEncoder pipelines or classification-style serving. A small conversion fixes that. The Qwen3 discussion by [Tom Aarsen](https://www.linkedin.com/in/tomaarsen/) on “Converting a reranker model to a single label classification model” showed how to collapse a generative head into a classifier by mapping label-word logits; for reranker v2 it’s even simpler: the score lives in a single channel. I copy lm\_head.weight\[0\] into a 1-logit SequenceClassification head (bias zero or the matching LM bias), propagate pad/eos/bos ids to config, enforce left padding, and verify strict parity by comparing the classifier logit to next\_logits\[:, 0\] under the same prompt, with a BF16→FP32 readout.
[https://www.linkedin.com/posts/sigridjineth\_sigridjinethctxl-rerank-v2-1b-seq-cls-activity-7366726911789629440-a0HT?utm\_source=social\_share\_send&utm\_medium=member\_desktop\_web&rcm=ACoAABRjkPcB873c2QQdGuFf5vmfAJXAqmBQOOQ](https://www.linkedin.com/posts/sigridjineth_sigridjinethctxl-rerank-v2-1b-seq-cls-activity-7366726911789629440-a0HT?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAABRjkPcB873c2QQdGuFf5vmfAJXAqmBQOOQ)
The result is numerically identical scores, lower overhead (1×H instead of V×H), and drop-in compatibility with CrossEncoder and standard classification tooling.If that’s useful, try the converted model. It ships with the conversion and parity scripts; stars, issues, and PRs (including 2B/6B variants) are welcome. | 2025-08-28T07:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/1n25ygg/contextual_ai_reranker_v2_1b/ | Ok_Rub1689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n25ygg | false | null | t3_1n25ygg | /r/LocalLLaMA/comments/1n25ygg/contextual_ai_reranker_v2_1b/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'VL8ujyRgOY-X_dzMC30a6CHhHXeyxwd2H0NL2byLNDc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/VL8ujyRgOY-X_dzMC30a6CHhHXeyxwd2H0NL2byLNDc.png?width=108&crop=smart&auto=webp&s=abb9ec6e30ec8550d4eaa53a31f5d34366352676', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/VL8ujyRgOY-X_dzMC30a6CHhHXeyxwd2H0NL2byLNDc.png?width=216&crop=smart&auto=webp&s=00d111ad50c076de490dba2935a5f62cc735a6f5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/VL8ujyRgOY-X_dzMC30a6CHhHXeyxwd2H0NL2byLNDc.png?width=320&crop=smart&auto=webp&s=4c15c9593455f6a4052d9817e3da7e86e548f9db', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/VL8ujyRgOY-X_dzMC30a6CHhHXeyxwd2H0NL2byLNDc.png?width=640&crop=smart&auto=webp&s=9b59932c5277112026e3b793ebdaeb592a8e7e85', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/VL8ujyRgOY-X_dzMC30a6CHhHXeyxwd2H0NL2byLNDc.png?width=960&crop=smart&auto=webp&s=5c351ed9fc73874ced1535d1b32a2e6c7d3e7b58', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/VL8ujyRgOY-X_dzMC30a6CHhHXeyxwd2H0NL2byLNDc.png?width=1080&crop=smart&auto=webp&s=58a9c8b47abdad792addd06b36cdb9f2f9e9e051', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/VL8ujyRgOY-X_dzMC30a6CHhHXeyxwd2H0NL2byLNDc.png?auto=webp&s=61007e4049faecd6bfeda1cea419700227c8bd67', 'width': 1200}, 'variants': {}}]} |
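The parity claim at the heart of this conversion can be illustrated without the real checkpoint. Below is a pure-Python toy (the shapes, the random weights, and the variable names are stand-ins; the actual conversion copies row 0 of the loaded model's `lm_head.weight` into the classification head): because the 1-logit head is literally the same weight row the causal-LM head uses for vocab id 0, the two scoring paths produce the same number.

```python
import random

random.seed(0)
H, V = 8, 32  # toy hidden size and vocab size

# Toy "LM head": a full V x H vocab projection (stand-in for lm_head.weight).
lm_head = [[random.gauss(0, 1) for _ in range(H)] for _ in range(V)]

# The conversion: keep only row 0, the channel the reranker reads its score
# from, giving a 1 x H SequenceClassification head.
cls_head = [lm_head[0][:]]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# Last-token hidden state from the backbone (left padding keeps this position
# aligned across a batch; here it's just a random vector).
last_hidden = [random.gauss(0, 1) for _ in range(H)]

# Causal-LM scoring: project to the full vocab, read next_logits[0].
next_logits = matvec(lm_head, last_hidden)
score_via_lm = next_logits[0]

# Classifier scoring: one logit, 1/V-th of the projection work.
score_via_cls = matvec(cls_head, last_hidden)[0]

assert score_via_lm == score_via_cls  # exact: it is the very same weight row
print(len(next_logits), score_via_lm == score_via_cls)  # 32 True
```

The win is the compute and the interface: the classifier projects to one logit instead of V, while the score stays numerically identical to the generative readout.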
High/low noise models for image generation? | 0 | Would it be possible to split image generation between two noise-level models, as WAN 2.2 does for video? The goal would be enabling lower-VRAM consumer cards/Macs at the cost of longer generation times. | 2025-08-28T07:09:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n25hk3/highlow_noise_models_for_image_generation/ | lodott1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n25hk3 | false | null | t3_1n25hk3 | /r/LocalLLaMA/comments/1n25hk3/highlow_noise_models_for_image_generation/ | false | false | self | 0 | null |
Use Local MCP through API | 7 | I'm building an app. It connects to my LM Studio instance over the API and works well.
I built my own MCP server. I'm running it in Docker, and I can use it through the LM Studio UI. It works.
But
I want to send an API request to LM Studio and ask it to use my MCP server for additional details.
Is this even possible through the API?
Maybe I need to understand MCP differently.
RELEASED: ComfyUI Wrapper for Microsoft’s new VibeVoice TTS (voice cloning in seconds) | 273 | I created and open-sourced the ComfyUI Wrapper for VibeVoice.
* **Single Speaker Node** to simplify workflow management when using only one voice.
* Ability to load text from a file. This allows you to generate speech for the equivalent of dozens of minutes. The longer the text, the longer the generation time (obviously).
* I tested cloning my real voice. I only provided a 56-second sample, and the results were very positive. You can see them in the video.
* From my tests (not to be considered conclusive): when providing voice samples in a language other than English or Chinese (e.g. Italian), the model can generate speech in that same language (Italian) with a decent success rate. On the other hand, when providing English samples, I couldn’t get valid results when trying to generate speech in another language (e.g. Italian).
* **Multiple Speakers** **Node**, which allows up to 4 speakers (limit set by the Microsoft model). Results are decent only with the 7B model. The valid success rate is still much lower compared to single speaker generation. In short: the model looks very promising but still premature. The wrapper will still be adaptable to future updates of the model. Keep in mind the 7B model is still officially in *Preview*.
* **How much VRAM is needed?** Right now I’m only using the official models (so, maximum quality). The 1.5B model requires about **5GB VRAM**, while the 7B model requires about **17GB VRAM**. I haven’t tested on low-resource machines yet. To reduce resource usage, we’ll have to wait for quantized models or, if I find the time, I’ll try quantizing them myself (no promises).
**My thoughts on this model:**
A big step forward for the Open Weights ecosystem, and I’m really glad Microsoft released it. At its current stage, I see single-speaker generation as very solid, while multi-speaker is still too immature. But take this with a grain of salt. I may not have fully figured out how to get the best out of it yet. The real difference is the success rate between single-speaker and multi-speaker.
This model is *heavily* influenced by the seed. Some seeds produce fantastic results, while others are really bad. With images, such wide variation can be useful. For voice cloning, though, it would be better to have a more deterministic model where the seed matters less.
In practice, this means you have to experiment with several seeds before finding the perfect voice. That can work for some workflows but not for others.
With multi-speaker, the problem gets worse because a single seed drives the entire conversation. You might get one speaker sounding great and another sounding off.
Personally, I think I’ll stick to using single-speaker generation even for multi-speaker conversations unless a future version of the model becomes more deterministic.
That being said, it’s still a *huge* step forward.
**URL to ComfyUI Wrapper:**
[https://github.com/Enemyx-net/VibeVoice-ComfyUI](https://github.com/Enemyx-net/VibeVoice-ComfyUI) | 2025-08-28T06:29:26 | https://v.redd.it/yy7k60z8eplf1 | Fabix84 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n24utb | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/yy7k60z8eplf1/DASHPlaylist.mpd?a=1758954582%2CMjJkMGNiYzllMmEwMDU1Nzk5YjhjYjE1N2FhMmIwZWNmMWQ4ZjRmMTk0M2RjMWU3Y2I0MGU4NTBkY2Q4NmZjNg%3D%3D&v=1&f=sd', 'duration': 165, 'fallback_url': 'https://v.redd.it/yy7k60z8eplf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 644, 'hls_url': 'https://v.redd.it/yy7k60z8eplf1/HLSPlaylist.m3u8?a=1758954582%2CMmI1ODYzNzNiN2FjNDRhOTBjMmE1ZDIzMTM4NGY0NTY1ZDdhNDRmYTEwNjdmNzkxYWFkYmEyYTFlYzk5ODJhYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yy7k60z8eplf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1n24utb | /r/LocalLLaMA/comments/1n24utb/released_comfyui_wrapper_for_microsofts_new/ | false | false | 273 | {'enabled': False, 'images': [{'id': 'eTdoNDByeThlcGxmMX--5rdiQuwxJ4jOINV8QPW9HN9UrvcxZxCYZhm1-TIi', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eTdoNDByeThlcGxmMX--5rdiQuwxJ4jOINV8QPW9HN9UrvcxZxCYZhm1-TIi.png?width=108&crop=smart&format=pjpg&auto=webp&s=7806a59497322dc636bf2103275813ba1cf81603', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eTdoNDByeThlcGxmMX--5rdiQuwxJ4jOINV8QPW9HN9UrvcxZxCYZhm1-TIi.png?width=216&crop=smart&format=pjpg&auto=webp&s=61211209f6d891bca54c46f21d78b453df5f6631', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eTdoNDByeThlcGxmMX--5rdiQuwxJ4jOINV8QPW9HN9UrvcxZxCYZhm1-TIi.png?width=320&crop=smart&format=pjpg&auto=webp&s=f308fb7ca82cd2d2f504304a79e57ee547460aeb', 'width': 320}, {'height': 321, 'url': 
'https://external-preview.redd.it/eTdoNDByeThlcGxmMX--5rdiQuwxJ4jOINV8QPW9HN9UrvcxZxCYZhm1-TIi.png?width=640&crop=smart&format=pjpg&auto=webp&s=e6d8c8dace4c754d035117016bda685309e04014', 'width': 640}, {'height': 482, 'url': 'https://external-preview.redd.it/eTdoNDByeThlcGxmMX--5rdiQuwxJ4jOINV8QPW9HN9UrvcxZxCYZhm1-TIi.png?width=960&crop=smart&format=pjpg&auto=webp&s=a2d306c10aec1e16bdc3cf9726aaf138a2add2ed', 'width': 960}, {'height': 542, 'url': 'https://external-preview.redd.it/eTdoNDByeThlcGxmMX--5rdiQuwxJ4jOINV8QPW9HN9UrvcxZxCYZhm1-TIi.png?width=1080&crop=smart&format=pjpg&auto=webp&s=16c8f5a7644fc5b5216c68efcc681546dbb9a2e0', 'width': 1080}], 'source': {'height': 760, 'url': 'https://external-preview.redd.it/eTdoNDByeThlcGxmMX--5rdiQuwxJ4jOINV8QPW9HN9UrvcxZxCYZhm1-TIi.png?format=pjpg&auto=webp&s=d71d05bec65eda8fef76fd2c68b990df09088d6c', 'width': 1512}, 'variants': {}}]} | |
Need help: llama.cpp CUDA offload is slower than CPU-only (RTX 3080 + dual EPYC) | 1 | [removed] | 2025-08-28T06:26:51 | https://www.reddit.com/r/LocalLLaMA/comments/1n24tbd/need_help_llamacpp_cuda_offload_is_slower_than/ | Powerful_Hand_558 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n24tbd | false | null | t3_1n24tbd | /r/LocalLLaMA/comments/1n24tbd/need_help_llamacpp_cuda_offload_is_slower_than/ | false | false | self | 1 | null |
Did I just make a mistake? | 8 | I just purchased a Jetson Thor
https://share.google/AHYYv9qpp24Eb3htw
On a drunk impulse buy after learning about it moments ago.
Meanwhile, I'm still waiting for both Dell and HP to make any sort of announcement on the preorders for their GB10 Spark mini PCs.
Am I regarded, or does the Thor seem superior to the Spark?
Llama 3.1 output seems alright, but a deeper look reveals it's full of hallucination | 2 | Hi all. I am new to making LLM applications, so I was hoping you guys could point me in the right direction.
I was trying to make a RAG-powered AI agent to analyse research papers. I need the LLM to run locally (on my RTX 3080 8GB GPU). I have made the first version using `llama3.1:8b`. It works well. It can summarise a given paper. But when I look deeper, I notice it misses important details or gives wrong facts.
For example, I have given it a paper, and asked it "From where the samples were collected?" Though the paper very clearly mentioned the city and country names of the data source, the AI cannot see it. It keeps repeating "The paper doesn't mention any specific location". Or, sometimes it says "Greater Washington DC area", though the research was nowhere near this region. Another example, if I tell it to compare two papers, it points out incorrect similarities or differences.
This makes the app basically useless. Now, I don't have much of a clue about what I can do to improve it. Is it because I'm running a smaller model? Is it how I've implemented the RAG? Is it the prompt template? Is it because I'm trying to use it in a specialised domain the model was not trained for? Can you suggest what I can try next to improve its output?
Here is the project [https://github.com/AhsanShihab/research-copilot](https://github.com/AhsanShihab/research-copilot) | 2025-08-28T05:33:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n23ytu/llama_31_output_seems_alright_but_deeper_look/ | Key_Influence_3832 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n23ytu | false | null | t3_1n23ytu | /r/LocalLLaMA/comments/1n23ytu/llama_31_output_seems_alright_but_deeper_look/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'rZy-Qv6qPtl_PYCM_LSJphyoD0KC7s77BItvFrPIk2Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rZy-Qv6qPtl_PYCM_LSJphyoD0KC7s77BItvFrPIk2Q.png?width=108&crop=smart&auto=webp&s=1e69fed495ad77def90048fce666fb9a80f3895d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rZy-Qv6qPtl_PYCM_LSJphyoD0KC7s77BItvFrPIk2Q.png?width=216&crop=smart&auto=webp&s=034116330797b8328f0295b76027d787d7660499', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rZy-Qv6qPtl_PYCM_LSJphyoD0KC7s77BItvFrPIk2Q.png?width=320&crop=smart&auto=webp&s=ba89d38a0cfb40aa2c60d133cf96335d0a4ae838', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rZy-Qv6qPtl_PYCM_LSJphyoD0KC7s77BItvFrPIk2Q.png?width=640&crop=smart&auto=webp&s=75fadd3c5a33577551e15844ff21311c5c779741', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rZy-Qv6qPtl_PYCM_LSJphyoD0KC7s77BItvFrPIk2Q.png?width=960&crop=smart&auto=webp&s=f463355c6b20a1caaa1f2dab6ffa228fd72067c2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rZy-Qv6qPtl_PYCM_LSJphyoD0KC7s77BItvFrPIk2Q.png?width=1080&crop=smart&auto=webp&s=0e153a0176d815f5cfbb893e48638012ebbeeeb7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rZy-Qv6qPtl_PYCM_LSJphyoD0KC7s77BItvFrPIk2Q.png?auto=webp&s=c2eddc3afb618a8a360eacdec9811ff8705d5192', 'width': 1200}, 'variants': {}}]} |
Replicating the Research | 1 | Have any of you ever tried replicating the “Attention Is All You Need” results or the results of any basic AI research papers? | 2025-08-28T05:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n23ylc/replicating_the_research/ | DumbMoneyz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n23ylc | false | null | t3_1n23ylc | /r/LocalLLaMA/comments/1n23ylc/replicating_the_research/ | false | false | self | 1 | null |
Discrete Diffusion VLA: Bringing Discrete Diffusion to Action Decoding in Vision-Language-Action Policies | 5 | **TL;DR.** We introduce **discrete diffusion** as the action decoder **inside a single transformer** for VLA. Two simple components—Adaptive decoding order and Secondary re-masking—yield consistent action refinement and outperform AR and continuous-diffusion heads. Trains with the **same cross-entropy objective** as VLMs, preserving pretrained priors. This design shows better success rates vs AR and continuous diffusion.
**Disclosure:** I’m an author.
**What’s new**
* **First discrete-diffusion action head for VLA** (to our knowledge).
* **Single-transformer, VLM-style training:** keeps the discrete token interface and uses the same CE loss as the VLM backbone → **maximizes retention of pretrained VLM priors**.
* **Adaptive decoding order:** in each refinement round, we **keep easy tokens first** via confidence / confidence-gap scores and a cosine keep schedule; the rest remain masked for the next round.
* **Secondary re-masking:** previously kept tokens are **re-checked** (threshold + residual-drop) and **re-masked** if uncertain/inconsistent, enabling robust cross-round error correction.
**Why it matters**
* For robotics manipulation tasks, unlike continuous diffusion decoders, our formulation keeps action generation inside a unified transformer and trains with the same cross-entropy objective used by VLMs. This **preserves the backbone’s pretrained vision-and-language capability**—akin to extending a vocabulary—while opening a path to **inherit unified transformers’ scaling behavior**, paving the way for **large-scale VLA**. Moreover, Discrete Diffusion VLA **breaks the left-to-right bottleneck** of AR decoders: action chunks are **adaptively decoded in parallel** over a small, fixed number of steps, and uncertain tokens can be revisited via iterative re-masking, leveraging full cross-modal context (including inter-action dependencies) for refinement.
**Links**
* Paper: [https://arxiv.org/abs/2508.20072](https://arxiv.org/abs/2508.20072)
* Demo videos: [https://huggingface.co/papers/2508.20072](https://huggingface.co/papers/2508.20072) | 2025-08-28T05:13:09 | https://www.reddit.com/r/LocalLLaMA/comments/1n23m90/discrete_diffusion_vla_bringing_discrete/ | Lonely-Loquat9638 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n23m90 | false | null | t3_1n23m90 | /r/LocalLLaMA/comments/1n23m90/discrete_diffusion_vla_bringing_discrete/ | false | false | self | 5 | null |
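To make the two components concrete, here is a minimal pure-Python sketch of the refinement loop. The random `predict()` stub stands in for the actual VLA transformer, and the chunk length, round count, threshold, and schedule constants are invented for illustration, not taken from the paper:

```python
import math
import random

MASK = None
random.seed(0)

def predict(tokens):
    """Stand-in for the transformer: propose (token, confidence) for every
    masked slot. The real model conditions on vision, language, and the
    already-committed action tokens."""
    return {i: (random.randrange(256), random.random())
            for i, t in enumerate(tokens) if t is MASK}

def keep_fraction(step, total_steps):
    """Cosine keep schedule: commit few tokens early, everything by the end."""
    return 1.0 - math.cos(math.pi / 2 * (step + 1) / total_steps)

def decode_chunk(chunk_len=8, rounds=4, remask_threshold=0.2):
    tokens = [MASK] * chunk_len
    confidence = [0.0] * chunk_len
    for step in range(rounds):
        proposals = predict(tokens)
        # Adaptive order: commit the easiest (highest-confidence) slots first.
        budget = max(1, round(keep_fraction(step, rounds) * chunk_len))
        ranked = sorted(proposals, key=lambda i: proposals[i][1], reverse=True)
        for i in ranked[:budget]:
            tokens[i], confidence[i] = proposals[i]
        # Secondary re-masking: re-check committed slots; uncertain ones go
        # back to MASK and get another chance next round.
        if step < rounds - 1:
            for i in range(chunk_len):
                if tokens[i] is not MASK and confidence[i] < remask_threshold:
                    tokens[i], confidence[i] = MASK, 0.0
    return tokens

actions = decode_chunk()
print(all(t is not MASK for t in actions))  # True
```

The point of the structure: decoding finishes in a fixed number of parallel rounds regardless of chunk length, easy slots commit first, and a committed slot that looks uncertain on re-inspection gets another chance instead of poisoning the rest of the chunk.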
I think I am missing some categories of AI innovation, so please help me out! 1. Browser automation (Comet) 2. Code agents (Cursor, Windsurf...) 3. AI OS? | 0 |
1. Browser automation (Comet)
2. Code agents (Cursor, Windsurf...)
3. AI OS? | 2025-08-28T04:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n232on/i_think_i_am_missing_some_division_of_ai/ | Immediate-Action5124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n232on | false | null | t3_1n232on | /r/LocalLLaMA/comments/1n232on/i_think_i_am_missing_some_division_of_ai/ | false | false | self | 0 | null |
Sexually explicit content AI | 0 | Hello there, I write novels with sexually explicit content and I'm looking for an AI model that allows sexually explicit content. I'm running Ollama on my PC. Do you know any AI model I can use? Also, is there a generative image AI that allows creating sexually explicit images? | 2025-08-28T04:40:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n231wj/sexual_explicit_content_ai/ | valmonnnn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n231wj | false | null | t3_1n231wj | /r/LocalLLaMA/comments/1n231wj/sexual_explicit_content_ai/ | false | false | nsfw | 0 | null |
HunyuanVideo-Foley is out, an open source text-video-to-audio model | 312 | try HunyuanVideo-Foley: https://hunyuan.tencent.com/video/zh?tabIndex=0
HuggingFace: https://huggingface.co/tencent/HunyuanVideo-Foley
GitHub: https://github.com/Tencent-Hunyuan/HunyuanVideo-Foley
Project Page: https://szczesnys.github.io/hunyuanvideo-foley/
Research report: https://arxiv.org/abs/2508.16930 | 2025-08-28T04:33:24 | https://v.redd.it/jpjpqw2xuolf1 | vibedonnie | /r/LocalLLaMA/comments/1n22xbl/hunyuanvideofoley_is_out_an_open_source/ | 1970-01-01T00:00:00 | 0 | {} | 1n22xbl | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jpjpqw2xuolf1/DASHPlaylist.mpd?a=1759077210%2CMWJkZjE5YTIzZTMwMDk5OThiOWRmNzM2MzkyNTc4NTI0NTVkNTJmYTAzZGM4MzkzZTRkOTNhNGFmMWM5NTA2Zg%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/jpjpqw2xuolf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/jpjpqw2xuolf1/HLSPlaylist.m3u8?a=1759077210%2CNDE3OTE2MDEwYTMxMmY0Y2Q1MGZiZjcxNjg4OTc3ZjI5MGYzNDY0ZTYzZmY3ZWYzOTUzMjg4YTA1YzdjNGY2OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jpjpqw2xuolf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n22xbl | /r/LocalLLaMA/comments/1n22xbl/hunyuanvideofoley_is_out_an_open_source/ | false | false | 312 | {'enabled': False, 'images': [{'id': 'dXU2amRweXd1b2xmMTawZyv5aMEWeESK9yBcqymop7gFK-DtVYY3rCRDUSQp', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dXU2amRweXd1b2xmMTawZyv5aMEWeESK9yBcqymop7gFK-DtVYY3rCRDUSQp.png?width=108&crop=smart&format=pjpg&auto=webp&s=1ad53413191d8412f5c6c74f95489e196db1821e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dXU2amRweXd1b2xmMTawZyv5aMEWeESK9yBcqymop7gFK-DtVYY3rCRDUSQp.png?width=216&crop=smart&format=pjpg&auto=webp&s=e340d4a6d199b76e7d3d209f932a4badd717a00a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dXU2amRweXd1b2xmMTawZyv5aMEWeESK9yBcqymop7gFK-DtVYY3rCRDUSQp.png?width=320&crop=smart&format=pjpg&auto=webp&s=0160e994f2f03bc71ff51b5b3e79261c7f0189cc', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/dXU2amRweXd1b2xmMTawZyv5aMEWeESK9yBcqymop7gFK-DtVYY3rCRDUSQp.png?width=640&crop=smart&format=pjpg&auto=webp&s=1792c78cb441b38e6fb32739506bb6450fd0bfc3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dXU2amRweXd1b2xmMTawZyv5aMEWeESK9yBcqymop7gFK-DtVYY3rCRDUSQp.png?width=960&crop=smart&format=pjpg&auto=webp&s=c74e5556e0a0bbd4c8da5d5703f8d3fc5de92bd5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dXU2amRweXd1b2xmMTawZyv5aMEWeESK9yBcqymop7gFK-DtVYY3rCRDUSQp.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cc4cff64105f787ce4ff174dbdcc02a889ccd267', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dXU2amRweXd1b2xmMTawZyv5aMEWeESK9yBcqymop7gFK-DtVYY3rCRDUSQp.png?format=pjpg&auto=webp&s=663c6ccc70e4ea833a0d72b05f4d262cdb44af35', 'width': 1920}, 'variants': {}}]} | |
Advice on AI PC/Workstation | 1 | Considering buying or building one, primarily to play around with local LLMs, agentic AI, that sort of thing, maybe diffusion models. Gaming is not a priority.
Currently considering a DGX Spark, or 3-4x RTX 4000 Pro with a Milan CPU and DDR4-3200 RAM for now, plus some U.2 NVMe storage (eventually upgrading to an SP5/SP6-based system to support those PCIe 5.0 cards). PCIe lanes I understand; I deal with datacenter equipment, including GPUs, primarily for server virtualization, K8s, that sort of thing.
Gaming, FPS, that sort of thing is nowhere in the picture.
Now... fire away with suggestions, or trash the idea!
| 2025-08-28T03:55:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n2288o/advice_on_ai_pcworkstation/ | No_Night679 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2288o | false | null | t3_1n2288o | /r/LocalLLaMA/comments/1n2288o/advice_on_ai_pcworkstation/ | false | false | self | 1 | null |
Using Qwen to generate 3D assets in Blender | 117 | Working on an AI agent that hooks up to Blender to generate low poly models. So far I'm impressed by Qwen's ability to generate and reason through usable code for this. Inspired by indie game dev, where I constantly needed quick models for placeholders or prototyping. | 2025-08-28T03:33:45 | https://imgur.com/a/qzOMpqr | spacespacespapce | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1n21tb6 | false | null | t3_1n21tb6 | /r/LocalLLaMA/comments/1n21tb6/using_qwen_to_generate_3d_assets_in_blender/ | false | false | 117 | null |
Sexually explicit content | 0 | Hey folks, I wanted to write some erotic short stories with an AI model, but all the ones I tested have filters that block explicit sex scenes, even when running locally on my computer. Can you recommend a writing model that produces sexually explicit content, and an image-generation model that makes sexually explicit images? | 2025-08-28T03:14:10 | https://www.reddit.com/r/LocalLLaMA/comments/1n21feo/conteúdo_sexualmente_explícito/ | valmonnnn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n21feo | false | null | t3_1n21feo | /r/LocalLLaMA/comments/1n21feo/conteúdo_sexualmente_explícito/ | false | false | nsfw | 0 | null |