title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
[Open Source] Ragnrock – generate cited research reports with Gemma 3 4B and internet | 28 | Hey everyone,
I just published a free, open-source research tool called **Ragnrock**. It uses Gemma 3 4B locally to generate cited reports
* Demo Video: [https://www.youtube.com/watch?v=EoM4-k6BV3A](https://www.youtube.com/watch?v=EoM4-k6BV3A) (skip to 0:15)
* GitHub: [https://github.com/netdur/ragnrock](https://github.com/netdur/ragnrock)
* macOS (Apple Silicon) Release: [https://github.com/netdur/ragnrock/releases/tag/v0.1.0](https://github.com/netdur/ragnrock/releases/tag/v0.1.0)
My main motivation: ChatGPT is slow, and most of what I need is already on Wikipedia
This project is my attempt to fix that. It's a small, fast tool for when you just need quick, verifiable answers. It's great for targeted lookups but also has full Google/Brave search support when you need to go deeper
It's an early version, but I think it could be useful. Check it out and let me know what you think | 2025-08-30T01:23:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n3p076/open_source_ragnrock_generate_cited_research/ | adel_b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3p076 | false | null | t3_1n3p076 | /r/LocalLLaMA/comments/1n3p076/open_source_ragnrock_generate_cited_research/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'BP7wEA2t0u58miL1KSIy4UPfMDTGjsn3Oz-Wxmj0NWc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BP7wEA2t0u58miL1KSIy4UPfMDTGjsn3Oz-Wxmj0NWc.jpeg?width=108&crop=smart&auto=webp&s=6703d3eb266d5981405510b69ba2c50033896e91', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/BP7wEA2t0u58miL1KSIy4UPfMDTGjsn3Oz-Wxmj0NWc.jpeg?width=216&crop=smart&auto=webp&s=3fb92308ea9d36702e9f05053fca9727f02a3146', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/BP7wEA2t0u58miL1KSIy4UPfMDTGjsn3Oz-Wxmj0NWc.jpeg?width=320&crop=smart&auto=webp&s=06f5bd558b3d1148700cf63d2621e33038a7068d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/BP7wEA2t0u58miL1KSIy4UPfMDTGjsn3Oz-Wxmj0NWc.jpeg?auto=webp&s=e855a63d9e0f19e68bf392331ff62cdb9dd58ac5', 'width': 480}, 'variants': {}}]} |
Newb looking for 2 semi advanced things which may not exist... | 2 | 1) Is there such a thing as a Multi Disciplinary local interface for AI? What I mean by this is I have a local machine with a decent GPU. Right now I might want to run a text based AI tool, but an hour from now maybe I want to play with Image gen or Image to Video. Later maybe do some text to speech.
So wondering if there is some "front end" which has a way to easily (Web UI) switch the back end between these functions, and then the Front End GUI changes to match the needed inputs of the new tool.
I am assuming I can't just leave like 5 of these open at once on a 16GB GPU considering when I played with llama.cpp the model I pulled down used like 10GB of vRAM.
2) Is there a coding helper (I won't go quite so far as to say Vibe coding, but maybe) which can actually short circuit the CICD type pipeline.
Example: I asked a GPT for help with Javascript and Node.js and wanted it run in a Container. When I ran what it gave me, it bombed hard and I put the errors from the container back into the tool and it was able to tweak a few things and get the process a bit further along. Is there something out there where I can just say "Here's the root login on this Linux box, let me know when the damn thing is actually running" ??
Many newb programmers go through the Code/Error/ThrowErrorIntoGoogle/repeat cycle already. Just looking to have the AI just do the same.
As I said I am a bit new at this.
30 year vet of PC/Server/Windows/Infrastructure/Networking getting into AI a bit more.
My gaming rig isn't getting much love these days so thought I would use the 4080 Super in there as part of my AI learning path. | 2025-08-30T00:53:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n3oe13/newb_looking_for_2_semi_advanced_things_which_may/ | Casper042 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3oe13 | false | null | t3_1n3oe13 | /r/LocalLLaMA/comments/1n3oe13/newb_looking_for_2_semi_advanced_things_which_may/ | false | false | self | 2 | null |
Little SSM (RWKV7 7B) state checkpointing demo. | 4 | Thing I've been experimenting with the past few days -- "diegetic role based prompting" for a local State Space Model ( #RWKV7 currently).
Tiny llama.cpp Python runner for the model and "composer" GUI for stepping and half-stepping through input only or input and generated role specified output, with saving and restoring of KV checkpoints.
Planning to write runners for #XLSTM 7B & #Falcon #MAMBA 7B to compare.
Started 'cause no actual #SSM saving, resuming examples.
https://github.com/stevenaleach/ssmprov/tree/main | 2025-08-30T00:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n3o9q1/little_ssm_rwkv7_7b_state_checkpointing_demo/ | returnstack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3o9q1 | false | null | t3_1n3o9q1 | /r/LocalLLaMA/comments/1n3o9q1/little_ssm_rwkv7_7b_state_checkpointing_demo/ | false | false | self | 4 | null |
Just found out the Ollama version of GPT-OSS has a much higher refusal rate. | 99 | I was wondering why other people seemed to like the model. | 2025-08-30T00:41:32 | https://www.reddit.com/gallery/1n3o55w | soup9999999999999999 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n3o55w | false | null | t3_1n3o55w | /r/LocalLLaMA/comments/1n3o55w/just_found_out_the_ollama_version_of_gptoss_has_a/ | false | false | 99 | null | |
Alternatives to Langfuse | 0 | * **LangSmith**: Purpose-built for LangChain users. It shines with visual trace inspection, prompt comparison tools, and robust capabilities for debugging and evaluating agent workflows—perfect for rapid prototyping and iteration.
* **Maxim AI**: A full-stack platform for agentic workflows. It offers simulated testing, both automated and human-in-the-loop evaluations, prompt versioning, node-by-node tracing, and real-time metrics—ideal for teams needing enterprise-grade observability and production-ready quality control.
* **Braintrust**: Centers on prompt-driven pipelines and RAG (Retrieval-Augmented Generation). You’ll get fast prompt experimentation, benchmarking, dataset tracking, and seamless CI integration for automated experiments and parallel evaluations.
* **Comet (Opik)**: A trusted player in experiment tracking with a dedicated module for prompt logging and evaluation. It integrates across AI/ML frameworks and is available as SaaS or open source.
* **Lunary**: Lightweight and open source, Lunary handles logging, analytics, and prompt versioning with simplicity. It's especially useful for teams building LLM chatbots who want straightforward observability without the overhead.
* **Handit.ai**: Open-source platform offering full observability, LLM-as-Judge evaluation, prompt and dataset optimization, version control, and rollback options. It monitors every request from your AI agents, detects anomalies, automatically diagnoses root causes, generates fixes. Handit goes further by running real-time A/B tests and creating GitHub-style PRs—complete with clear metrics comparing the current version to the proposed fix. | 2025-08-30T00:17:45 | https://www.reddit.com/r/LocalLLaMA/comments/1n3nnif/alternatives_to_langfuse/ | _coder23t8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3nnif | false | null | t3_1n3nnif | /r/LocalLLaMA/comments/1n3nnif/alternatives_to_langfuse/ | false | false | self | 0 | null |
Why did Kimi K2 flop? | 0 | With 1 trillion parameters and 20,000+ synthetic tools in reinforcement post training, you'd think it would destroy GLM 4.5 and the likes. But I see GLM, Qwen, and gpt-oss being the favorites for agentic use cases. Why is this happening? Anyone prefer K2 over GLM? Why? | 2025-08-30T00:00:58 | https://www.reddit.com/r/LocalLLaMA/comments/1n3nax4/why_did_kimi_k2_flop/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3nax4 | false | null | t3_1n3nax4 | /r/LocalLLaMA/comments/1n3nax4/why_did_kimi_k2_flop/ | false | false | self | 0 | null |
Nemotron Nano v2 reasoning + tool call testing (llama.cpp) | 8 | As before, it would be nice if someone could give the model a try:
[https://github.com/ggml-org/llama.cpp/pull/15676](https://github.com/ggml-org/llama.cpp/pull/15676) | 2025-08-29T23:41:37 | https://www.reddit.com/r/LocalLLaMA/comments/1n3mvzm/nemotron_nano_v2_reasoning_tool_call_testing/ | ilintar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3mvzm | false | null | t3_1n3mvzm | /r/LocalLLaMA/comments/1n3mvzm/nemotron_nano_v2_reasoning_tool_call_testing/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'GKx6-bybHofYe_hOrUVtKfrmu1Qq4GenApAYc1bB3qA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GKx6-bybHofYe_hOrUVtKfrmu1Qq4GenApAYc1bB3qA.png?width=108&crop=smart&auto=webp&s=f1c7cef8180b9c6110244df1a69dd998992cfba8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GKx6-bybHofYe_hOrUVtKfrmu1Qq4GenApAYc1bB3qA.png?width=216&crop=smart&auto=webp&s=68b5a01d399acbdb04096871d5b98be9fdc63126', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GKx6-bybHofYe_hOrUVtKfrmu1Qq4GenApAYc1bB3qA.png?width=320&crop=smart&auto=webp&s=80b45d11b74d2ce4cfc8f01927b7652c92d51f7e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GKx6-bybHofYe_hOrUVtKfrmu1Qq4GenApAYc1bB3qA.png?width=640&crop=smart&auto=webp&s=e71dbf328cc19d2c52eae7e6f455c8631dbb8892', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GKx6-bybHofYe_hOrUVtKfrmu1Qq4GenApAYc1bB3qA.png?width=960&crop=smart&auto=webp&s=827d2992b0e32325f830a131e55a6bc96c4a5fb3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GKx6-bybHofYe_hOrUVtKfrmu1Qq4GenApAYc1bB3qA.png?width=1080&crop=smart&auto=webp&s=d790f40f0e38d31a9453c0adfc03f5c83dc01ff9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GKx6-bybHofYe_hOrUVtKfrmu1Qq4GenApAYc1bB3qA.png?auto=webp&s=da19bc47b1a7796224a9129c9994bc38a4a708ff', 'width': 1200}, 'variants': {}}]} |
Best AI TTS with voice cloning (on CPU) | 5 | So far I tested XTTS V2 and it worked quite well creating speaker from recordings. The only problem is that even if the voice is similar (or same) as the recording, the cadence and inflections are different.
Is there a better TTS engine that can clone both the voice both the inflections and is "usable" (even if slowly) on a CPU only setup (like XTTS V2)?
| 2025-08-29T23:02:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n3m08k/best_ai_tts_with_voice_cloning_on_cpu/ | Robert__Sinclair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3m08k | false | null | t3_1n3m08k | /r/LocalLLaMA/comments/1n3m08k/best_ai_tts_with_voice_cloning_on_cpu/ | false | false | self | 5 | null |
LM Studio vs llama.cpp loading times | 3 | Hi,
I just wonder how it's possible that LM Studio loads "Qwen3-14B-Q5\_K\_M.gguf" in about 8 seconds on my pc (full offload to gpu) and in llama\_cpp\_python it takes about 41 seconds, 40,4s of which is loading the tokenizer. (I compiled the library myself to ensure all correct flags where on) At first I thought the issue is python and I tried bare llama\_cpp and it took 20s to load, which is 2x faster, but still miles away from LM Studio. There was a quick thought that LM Studio might be slow on first time and then somehow cache the tokenizer for quicker loading time later, but it's the same even on fresh models. So I am just baffled what does LM Studio do that much differently that it loads the models so much faster than bare llama.cpp | 2025-08-29T22:42:12 | https://www.reddit.com/r/LocalLLaMA/comments/1n3ljbk/lm_studio_vs_llamacpp_loading_times/ | Qbsoon110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3ljbk | false | null | t3_1n3ljbk | /r/LocalLLaMA/comments/1n3ljbk/lm_studio_vs_llamacpp_loading_times/ | false | false | self | 3 | null |
Qwen3-coder is mind blowing on local hardware (tutorial linked) | 932 | Hello hello!
I'm honestly blown away by how far local models have gotten in the past 1-2 months. Six months ago, local models were completely useless in Cline, which tbf is pretty heavyweight in terms of context and tool-calling demands. And then a few months ago I found one of the qwen models to actually be somewhat usable, but not for any real coding.
However, qwen3-coder-30B is really impressive. 256k context and is actually able to complete tool calls and diff edits reliably in Cline. I'm using the 4-bit quantized version on my 36GB RAM Mac.
My machine does turn into a bit of a jet engine after a while, but the performance is genuinely useful. My setup is LM Studio + Qwen3 Coder 30B + Cline (VS Code extension). There are some critical config details that can break it (like disabling KV cache quantization in LM Studio), but once dialed in, it just works.
This feels like the first time local models have crossed the threshold from "interesting experiment" to "actually useful coding tool." I wrote a full technical walkthrough and setup guide: [https://cline.bot/blog/local-models](https://cline.bot/blog/local-models) | 2025-08-29T22:35:27 | https://v.redd.it/75bfhw7sc1mf1 | nick-baumann | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3ldon | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/75bfhw7sc1mf1/DASHPlaylist.mpd?a=1759098944%2CNDg4NTJmNDUzMjU5ZGJhODViM2VlOGJjYTg1Zjc0ODc3MTYwNGI4ZmNjOTQxZjYxZTViZGQyOGUwMWY5NGRlZg%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/75bfhw7sc1mf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/75bfhw7sc1mf1/HLSPlaylist.m3u8?a=1759098944%2COWVhNDAyNWM0MDNjOTg3MjMxMDRkYTRmY2U3NzVjNTkwZWJmZjdmYjE4YmU1YzZjNjhlYzc4OGRjYTVlODJhOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/75bfhw7sc1mf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1548}} | t3_1n3ldon | /r/LocalLLaMA/comments/1n3ldon/qwen3coder_is_mind_blowing_on_local_hardware/ | false | false | 932 | {'enabled': False, 'images': [{'id': 'MHAyYm12N3NjMW1mMWyTIaaq8py0BbLEXek7RrX8ohVlR1FrRoAdOlxuqQ67', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/MHAyYm12N3NjMW1mMWyTIaaq8py0BbLEXek7RrX8ohVlR1FrRoAdOlxuqQ67.png?width=108&crop=smart&format=pjpg&auto=webp&s=db7ac331c2b48f9433b3d014b96d9617d27e8730', 'width': 108}, {'height': 150, 'url': 'https://external-preview.redd.it/MHAyYm12N3NjMW1mMWyTIaaq8py0BbLEXek7RrX8ohVlR1FrRoAdOlxuqQ67.png?width=216&crop=smart&format=pjpg&auto=webp&s=50d2e1eb06974f9f20ddca33a11168b56f957d84', 'width': 216}, {'height': 223, 'url': 'https://external-preview.redd.it/MHAyYm12N3NjMW1mMWyTIaaq8py0BbLEXek7RrX8ohVlR1FrRoAdOlxuqQ67.png?width=320&crop=smart&format=pjpg&auto=webp&s=536b4892a6436f6fb6b906ba7355adc87f695f8b', 'width': 320}, {'height': 446, 'url': 'https://external-preview.redd.it/MHAyYm12N3NjMW1mMWyTIaaq8py0BbLEXek7RrX8ohVlR1FrRoAdOlxuqQ67.png?width=640&crop=smart&format=pjpg&auto=webp&s=029556d17d1fb537c7bae968942133d2d03c460a', 'width': 640}, {'height': 669, 'url': 'https://external-preview.redd.it/MHAyYm12N3NjMW1mMWyTIaaq8py0BbLEXek7RrX8ohVlR1FrRoAdOlxuqQ67.png?width=960&crop=smart&format=pjpg&auto=webp&s=c12d3e93504d79aea7b8eebbd8d987d3b694bc3b', 'width': 960}, {'height': 753, 'url': 'https://external-preview.redd.it/MHAyYm12N3NjMW1mMWyTIaaq8py0BbLEXek7RrX8ohVlR1FrRoAdOlxuqQ67.png?width=1080&crop=smart&format=pjpg&auto=webp&s=746f681c306c727d38bafbc06346851299f7ed02', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/MHAyYm12N3NjMW1mMWyTIaaq8py0BbLEXek7RrX8ohVlR1FrRoAdOlxuqQ67.png?format=pjpg&auto=webp&s=734d6092f741f0c0eed04d7785dba77aaf441fd6', 'width': 3096}, 'variants': {}}]} | |
Nous Research presents Hermes 4 | 13 | 2025-08-29T22:35:18 | https://huggingface.co/collections/NousResearch/hermes-4-collection-68a731bfd452e20816725728 | paranoidray | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n3ldk2 | false | null | t3_1n3ldk2 | /r/LocalLLaMA/comments/1n3ldk2/nous_research_presents_hermes_4/ | false | false | default | 13 | null | |
RIP PHOTOSHOP (nano banana) | 0 | 2025-08-29T22:33:35 | https://www.reddit.com/gallery/1n3lc3x | Comfortable-Smoke672 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n3lc3x | false | null | t3_1n3lc3x | /r/LocalLLaMA/comments/1n3lc3x/rip_photoshop_nano_banana/ | false | false | 0 | null | ||
Using LLMs for Maths/Physics research. | 1 | **TL;DR:** I had success using an LLM for a tedious quantum physics derivation. It seems LLMs excel at this because it's **pattern-matching**, not arithmetic. I want to start a discussion on your opinion and the best technical approach (models, settings, and prompting) to make this reliable.
Hey r/LocalLLaMA! c:
I’ve been playing with local models for a while, but I think I stumbled upon a really powerful use case in my physics research.
It's a **Pattern Recognition** Problem:
I was working on a quantum mechanics problem that involved a lot of mechanical work (listing states, building a matrix, finding eigenvalues, etc.). It's tedious, long and super easy to make a small mistake. Just as a curiosity, I explained the rules to Gemini 2.5 Pro, and it perfectly executed the entire multi-step derivation.
I thought about it and: we often say **"LLMs are bad at math,"** but we usually mean **arithmetic**. This makes sense as using next token prediction for "what's 4892 + 2313?" seems like a bad way to solve that problem; but this was pure symbolic logic and pattern recognition. The LLM wasn't **"calculating,"** it was following a logical structure, which they are very good at.
So i thought about it and i think the best way to use LLMs for research **isn't to ask them to "solve"** a problem from scratch, but to provide them with a logical pattern and ask them to apply it.
Some questions that i had about this:
This is where I'd love your opinions. I'm trying to figure out the most robust, reliable way to do this (preferably locally).
1. **Which models are best at pattern recognition?** For this use case, raw intelligence might be less important than the model's ability to rigidly adhere to a defined logical process. Any good reasoning models for this?
2. **How do you tune for maximum determinism?** To prevent hallucinations, maybe placing creativity at near 0? I'm thinking:
* Temperature ≈ 0
* A very low Top P (e.g., 0.1 - 0.3) to restrict the model to the most logical tokens. Has anyone tried this?
3. **What is the best prompting strategy for this?** It seems logical that in-context learning would be the safest bet. But what do you guys think?
* A) **Few-Shot Prompting:** Provide a complete, worked-out example of a simpler problem first (the "pattern"), and then ask the model to apply the same steps to the new, more complex problem.
* B) **Zero-Shot Chain-of-Thought:** Without an example, just the instructions to "think step-by-step, showing every stage of the derivation, from listing the states to constructing the final matrix." I would guess this would be better with bigger models (like gemini-2.5-pro).
I'm really curious if anyone has tried using models for very logical problems. My goal is to have a model set up that can handle very mechanical steps.
Would love to hear if anyone has tried it for something similar or your thoughts and theories on this!
Cheers c:
Roy | 2025-08-29T22:21:14 | https://www.reddit.com/r/LocalLLaMA/comments/1n3l1pi/using_llms_for_mathsphysics_research/ | Roy3838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3l1pi | false | null | t3_1n3l1pi | /r/LocalLLaMA/comments/1n3l1pi/using_llms_for_mathsphysics_research/ | false | false | self | 1 | null |
My new rig. No GPU needed. Neighbour is very impressed but she reminds me that art is from the soul. | 0 | 2025-08-29T22:02:31 | shokuninstudio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3km5c | false | null | t3_1n3km5c | /r/LocalLLaMA/comments/1n3km5c/my_new_rig_no_gpu_needed_neighbour_is_very/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '6n3urmyu61mf1', 'resolutions': [{'height': 166, 'url': 'https://preview.redd.it/6n3urmyu61mf1.jpeg?width=108&crop=smart&auto=webp&s=3f216919cd67c11c961706acdae9c60df7b1a2ae', 'width': 108}, {'height': 333, 'url': 'https://preview.redd.it/6n3urmyu61mf1.jpeg?width=216&crop=smart&auto=webp&s=d484917b8ef048c6b3bfdc6d61a5fe0f9a3fa236', 'width': 216}, {'height': 494, 'url': 'https://preview.redd.it/6n3urmyu61mf1.jpeg?width=320&crop=smart&auto=webp&s=48e0d6d7dd84affc23545dec7c300616ccb586ac', 'width': 320}, {'height': 989, 'url': 'https://preview.redd.it/6n3urmyu61mf1.jpeg?width=640&crop=smart&auto=webp&s=7fb83bedd7f5cedeaa94f5d9765c27883af608a7', 'width': 640}], 'source': {'height': 1014, 'url': 'https://preview.redd.it/6n3urmyu61mf1.jpeg?auto=webp&s=dedc420faf1da877371b1aa15b939503006f8b2b', 'width': 656}, 'variants': {}}]} | ||
AMD or Nvidia? | 6 | Hi guys, I am relatively new to the "local AI" field and I am interested in hosting my own. I have made a deep research on whether AMD or Nvidia would be a better suite for my model stack, and I have found that Nvidia is better in "ecosystem" for CUDA and other stuff, while AMD is a memory monster and could run a lot of models better than Nvidia but might require configuration and tinkering more than Nvidia since it is not well integrated with Nvidia ecosystem and not well supported by bigger companies.
Do you think Nvidia is definitely better than AMD in case of self-hosting AI model stacks or is the "tinkering" of AMD is a little over-exaggerated and is definitely worth the little to no effort? | 2025-08-29T22:01:01 | https://www.reddit.com/r/LocalLLaMA/comments/1n3kkte/amd_or_nvidia/ | Mustafa_Shazlie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3kkte | false | null | t3_1n3kkte | /r/LocalLLaMA/comments/1n3kkte/amd_or_nvidia/ | false | false | self | 6 | null |
My Solo-Built Chat Platform With Inline Suggestion Agent & Saved Prompts | 0 | Hey r/LocalLLaMA,
I wanted to share some insights from the early days of developing my lightweight, local-friendly LLM chat platform — in case it’s interesting for this community.
✅ **Character Catalog is Live**
https://preview.redd.it/uynuxgyy51mf1.png?width=1307&format=png&auto=webp&s=66ced93d4f90eaf61d543209bfb65eff577ba9b8
Create and browse characters through a simple UI. Selecting a character automatically loads their prompt, scenario, and sample dialogue. Swapping characters is instant.
(It’s a bit rough around the edges, but fully functional.)
✅ **Inline Suggestion Agent**
https://preview.redd.it/1oi4a0j361mf1.png?width=883&format=png&auto=webp&s=34ee4c618741225c2710a9406534dddbd71551e4
A helper agent suggests replies in real-time — just click to insert. Think lightweight autocomplete, but more character-aware.
Suggestions can also be expanded for longer, detailed responses, making conversations flow much smoother.
https://preview.redd.it/d61iob5661mf1.png?width=320&format=png&auto=webp&s=1fc96e4e360850c64f5d1d9d19c861ec7e14b125
✅ **Prompt Library + Setup Saving**
https://preview.redd.it/qe0omeca61mf1.png?width=1495&format=png&auto=webp&s=143b955975d5dc463a03f267d5d8de5cebf04993
Create and save core/system prompts. Save slots let you quickly return to a preferred configuration without redoing everything. | 2025-08-29T22:00:00 | https://www.reddit.com/r/LocalLLaMA/comments/1n3kjtg/my_solobuilt_chat_platform_with_inline_suggestion/ | RIPT1D3_Z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3kjtg | false | null | t3_1n3kjtg | /r/LocalLLaMA/comments/1n3kjtg/my_solobuilt_chat_platform_with_inline_suggestion/ | false | false | 0 | null | |
HELP🙏What am I missing in this RAG pipeline? | 0 | I'll try to explain this without giving out too much information as im not sure if my boss would agree with me sharing it here lmao.
Nevertheless, there is a list of documents that i have (scraped a website, that i shall not name, and structured that data to create a meta and content key. Meta contains info like ID, Category, Created_At etc while content contains the actual html ) stored locally and my purpose is whenever a user asks any question, i pass the user query to an LLM along with the exact document from my list that contains the information about the query that the user asked so that the LLM can respond with full knowledge. ACCURACY IS OF AT MOST IMPORTANCE. The LLM must always return accurate information, it cannot mess it up and since its not trained on that data there is no way it will give me the actual answer UNLESS i provide context. Hence retrieving the relevant document from the list is of atmost importance. I know this works because when i tested the LLM against my questions by providing context using the relevant document, the responses were 100% accurate.
The problem is the retrival part. I have tried a bunch of strategies and so far only one works which i will mention later. Bear in mind, this is my first time doing this.
In our first attempt at this, we took each document from our list, extracted the html from the content key, made embedding of each using MiniLM and stored it in our vector db (using postgres with pgvector extension) along with the actual content, meta and id. Next, in order to retrieve the relevant document, we would take the user input and make embedding of it and perform a vector search using cosine similarity. The document it fetched (the one with the highest similarity score) was not the document which was relevant to the question as the content stored didn't have the information required to answer the document. There were 2 main issues we identified with this approach. First the user input could be a set of multiple questions where one document was not sufficient to answer all so we needed to extract multiple documents. Second was that question and document content are not semantically or logically similar. If we make embeddings of questions then we should search them against embeddings of questions and not content.
These insights gave rise to our second strat. This time we gave each document to an LLM and prompted it to make distinct questions from the provided document (meta + content). On average, against each document I got 35 questions. Now I generated embedding (again using MiniLM) for each question and stored it in the vector database along with the actual question and a documnet ID which was foreign key to the documents table referencing the document against which the question was made. Next when user input comes, i would send it to an LLM asking it to generate sub questions (basically breaking down the problem into smaller chunks) and against each sub question i would generate embedding and perform vector search (cosine similarity). The issue this time was that the documents retrieved only contained specifc keywords in the content from the question but didnt contain enought content to actually answer the question. The thing that went wrong was that when we were initally generating questions against the document using an LLM, the LLM would generate questions like "what is id 5678?", but the id 5678 was only mentioned in that document and never explained or defined. Its actual definition was in a different document. Basically, a correct question ended up mapping to multiple documents instead of the correct one. Semantically, the correct questions were searched but logically that row in which the question is stored, its foreign key referenced an incorrect document. Since accuracy is important therefore this strat failed as well. (Im not sure if i explained this strat correctly for you guys to understand so i apologize in advance)
This brings us to strat three. This time we gave up on embedding and decided we will do keywords based searching. As we recieve user input, i would prompt an LLM to extract keywords from the query relevant to our use case (im sorry but i cant share our use case without hinting into what we are building this RAG pipeline for). Then based on the extracted keywords, i would perform a keyword search in relevant regex from every document's content. Note that every document is unique becuase of the meta key but there is no guarantee that the extracted keywords would contain the words that im looking for in meta hence i had to search in multiple places inside the document that i logically found would distinctly help we find the correct document. And thank god the freaking query worked (special thanks to deepseek and chatGPT, i suck at SQL and would never have done this without em)
However, all these documents are part of one single collection and in time nee collections with new documents will show up requiring me to create new SQL queries for each hence making the only solution that worked non generic (i hate my life).
Now i have another strat in mind. I havent given up on embedding YET simply becuase if i can find the correct approach, i can make the whole process generic for all kinds of collections. So referencing back to our second strat, the process was working. Making sub queries and stroing embedding of questions and referencing it to documents was the right way to go but this recipe is missing the secret ingredient. That ingredient is ensuring that no multiple documents get referenced to semantically similar questions. In other words the questions i save for any document, they must also have the actual answer in that document. This way all questions distincly map to a single document. And semantically similar questions also map to that document. But how do i create these set of questions? One idea was to use the same prompt i used initially to generate questions from the LLM, i resend those questions to the LLM along with the document and ask it to only return me the questions that contain an answer inside the document. But the LLM mostly eleminates all the questions. Leaving 3 or 4 questions out of 35. 3 or 4 questions aren't enough... Maybe they are im not sure ( i dont have the foresight for this anymore)
Now i need this community to help me figure out how to execute my last strat or maybe suggest an entirely new strat. And before you suggest manually making questions for each document note that there are over 2000 documents and this is just for this collection. For other collections the list of document is in millions so no one in their right mind is going to do this manually.
Ohh one last detail, the LLM im referring to is Llama 4 Scout 17B Instruct. Im hosting it on cloud using lambda labs (a story for another time) and the reason to go for this model is its massive context window. Our use case has a requirement for large context window LLMs.
| 2025-08-29T21:50:59 | https://www.reddit.com/r/LocalLLaMA/comments/1n3kc2g/helpwhat_am_i_missing_in_this_rag_pipeline/ | VeterinarianSalty144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3kc2g | false | null | t3_1n3kc2g | /r/LocalLLaMA/comments/1n3kc2g/helpwhat_am_i_missing_in_this_rag_pipeline/ | false | false | self | 0 | null |
Launched Paddle Track on App Store! 🏄♂️ (For Paddle/SUP Lovers) | 0 | After months of frustration with existing paddle tracking apps (you know the struggle either they're bloated with features you don't need, or they're missing the basics that actually matter for paddlers), I decided to build my own.
**The problem I was trying to solve:**
* Most fitness apps are built for runners/cyclists, not paddlers
* Weather integration is either missing or terrible
* Route replay never works properly for water routes
* Privacy concerns with all the data collection
**What I built:**
✅ Apple Watch app
✅ HealthKit integration
✅ Clean GPS tracking optimized for water sports
✅ Real-time weather data + UV index (because sunburn is real)
✅ Session replay with actual route visualization
✅ Progress tracking without the social media nonsense
✅ Works completely offline your data stays yours
✅ Emergency SOS feature for safety
**Here's the thing** \- I'm not trying to compete with Strava or create the next social network. I just wanted a simple, effective tool that does paddle tracking RIGHT.
Been testing it for weeks on my local lake and it's been solid, but I'd love to get feedback from the wider SUP community. What features actually matter to you? What am I missing?
The app is free to try with core features, premium unlocks advanced analytics and unlimited session storage.
**Would love your honest feedback** \- especially if you try it and it sucks, tell me why! 😅
Available on App Store (iOS only for now): [https://apps.apple.com/ro/app/paddle-track-sup-tracker/id6749870732](https://apps.apple.com/ro/app/paddle-track-sup-tracker/id6749870732)
What do you all use currently for tracking your sessions? Always curious how other paddlers approach this. | 2025-08-29T21:36:30 | Visible-Buy4611 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3jzqq | false | null | t3_1n3jzqq | /r/LocalLLaMA/comments/1n3jzqq/launched_paddle_track_on_app_store_for_paddlesup/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'wi9powbj11mf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/wi9powbj11mf1.png?width=108&crop=smart&auto=webp&s=7e65e929e9d37ba33a5a646cbc8bf3a5e502ce35', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/wi9powbj11mf1.png?width=216&crop=smart&auto=webp&s=ea7980174572235fb60f66ffb585248e10f5a81e', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/wi9powbj11mf1.png?width=320&crop=smart&auto=webp&s=f36c669da9ed3fc2280032e875a55f7f60a6e4df', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/wi9powbj11mf1.png?width=640&crop=smart&auto=webp&s=d2a46ae1625edd20d64c46910ef53e9f1d231b92', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/wi9powbj11mf1.png?width=960&crop=smart&auto=webp&s=bf41ce8527b2b4fa539568ddcce16d39de2f0eb9', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/wi9powbj11mf1.png?width=1080&crop=smart&auto=webp&s=23dbc167dfd174ba51ce29672245c371858a69fb', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://preview.redd.it/wi9powbj11mf1.png?auto=webp&s=c9c89a43c080c491e430b681910a8a9c6590ffee', 'width': 1920}, 'variants': {}}]} | |
Can you Tell me why this happened ? | 0 | I ran a tinyllama 1.1B Q8_0 on a low end end android (realme 7) using pocket pal and this was its response to my 'hi'. Then I asked what is 2+2, it started saying some random shit.
Same thing happened when I ran gemmasutra 2B mini q2_k. I figure out it was because of high quantization so I then ran q6_k and it work fine (it was very slow btw).
But why am I getting this problem with tinyllama with Q8_0. The system prompt is empty too.
| 2025-08-29T21:28:49 | Mechcyborg321 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3jt4k | false | null | t3_1n3jt4k | /r/LocalLLaMA/comments/1n3jt4k/can_you_tell_me_why_this_happened/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'EyyT5sDOyJsI52di99T03BCSMvnP61vx-JIzkqel-P8', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xk19bzq011mf1.jpeg?width=108&crop=smart&auto=webp&s=32e515609f7a68bd51e62ebdef3f1dc2abee27e6', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xk19bzq011mf1.jpeg?width=216&crop=smart&auto=webp&s=bf0ddd83ba3a78ee424be51a8ef8e63a0e9d01b3', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xk19bzq011mf1.jpeg?width=320&crop=smart&auto=webp&s=bf0b0e39ee994114975460aef2109d9742553df8', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/xk19bzq011mf1.jpeg?width=640&crop=smart&auto=webp&s=88163e8a732c9d45d65847eb65a7987b271531b2', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/xk19bzq011mf1.jpeg?width=960&crop=smart&auto=webp&s=5e7cab16d65b0ff8d3415dabdd90bf13dde93f27', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/xk19bzq011mf1.jpeg?width=1080&crop=smart&auto=webp&s=1a2b32d713a2f8727b34b59534d499704d3c6bec', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/xk19bzq011mf1.jpeg?auto=webp&s=b295023010495bfb3823210dcafbff8b448392fa', 'width': 1080}, 'variants': {}}]} | ||
Launched Paddle Track on App Store! 🏄♂️ (For Paddle/SUP Lovers) | 1 | After months of frustration with existing paddle tracking apps (you know the struggle either they're bloated with features you don't need, or they're missing the basics that actually matter for paddlers), I decided to build my own.
**The problem I was trying to solve:**
* Most fitness apps are built for runners/cyclists, not paddlers
* Weather integration is either missing or terrible
* Route replay never works properly for water routes
* Privacy concerns with all the data collection
**What I built:**
✅ Apple Watch app
✅ HealthKit integration
✅ Clean GPS tracking optimized for water sports
✅ Real-time weather data + UV index (because sunburn is real)
✅ Session replay with actual route visualization
✅ Progress tracking without the social media nonsense
✅ Works completely offline your data stays yours
✅ Emergency SOS feature for safety
**Here's the thing** \- I'm not trying to compete with Strava or create the next social network. I just wanted a simple, effective tool that does paddle tracking RIGHT.
Been testing it for weeks on my local lake and it's been solid, but I'd love to get feedback from the wider SUP community. What features actually matter to you? What am I missing?
The app is free to try with core features, premium unlocks advanced analytics and unlimited session storage.
**Would love your honest feedback** \- especially if you try it and it sucks, tell me why! 😅
Available on App Store (iOS only for now): [https://apps.apple.com/ro/app/paddle-track-sup-tracker/id6749870732](https://apps.apple.com/ro/app/paddle-track-sup-tracker/id6749870732)
What do you all use currently for tracking your sessions? Always curious how other paddlers approach this. | 2025-08-29T21:27:52 | Visible-Buy4611 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3js9u | false | null | t3_1n3js9u | /r/LocalLLaMA/comments/1n3js9u/launched_paddle_track_on_app_store_for_paddlesup/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'CXMBK5idp8u1tf6EpxDnAPHmVyA_PbcYVHJ9LqrI-94', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/asjbikpl01mf1.png?width=108&crop=smart&auto=webp&s=c5970fdbff45ac46303aa4c4529694f813dac2cb', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/asjbikpl01mf1.png?width=216&crop=smart&auto=webp&s=446a93c30e336eeba56c21e6773ed2f805f38d9d', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/asjbikpl01mf1.png?width=320&crop=smart&auto=webp&s=9c67ce4881dd2606b16aaa4b730f21119090baf1', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/asjbikpl01mf1.png?width=640&crop=smart&auto=webp&s=f3df4d756ab24c26998619d970c0dc6bc9672947', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/asjbikpl01mf1.png?width=960&crop=smart&auto=webp&s=1e91884ca3338855214fc7bab969144117e26a5a', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/asjbikpl01mf1.png?width=1080&crop=smart&auto=webp&s=3c0e4c24321a782fdebdc03497d583877338dcf0', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://preview.redd.it/asjbikpl01mf1.png?auto=webp&s=7ad7b06d375485b55cc4b4991703ec71f0ef68b8', 'width': 1920}, 'variants': {}}]} | ||
How to get LLMs to output the correct format? | 0 | I want small LLMs to output a matrix using json for a specific format. From what I've tried so far, most small models (<70B) without SFT or RL cannot always follow the instructions for generating the matrix, so the matrix is typically flawed in content or incorrect. Two questions:
1. Is there a specific way to call an API to require the output to be in a specific format? I know it can be formatted in JSON, but is there anyway to make sure the JSON is in my desired matrix format?
2. How can I use SFT or RL on the model to teach it to output in the correct format?
| 2025-08-29T21:17:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n3jj6b/how_to_get_llms_to_output_the_correct_format/ | No-Cellist6160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3jj6b | false | null | t3_1n3jj6b | /r/LocalLLaMA/comments/1n3jj6b/how_to_get_llms_to_output_the_correct_format/ | false | false | self | 0 | null |
The cost of datacenters (maybe this is good for localllama in the llamarun) ! | 0 | 2025-08-29T21:00:35 | https://reddit.com/r/artificial/comments/1n37mkv/theres_a_stunning_financial_problem_with_ai_data/ | itchykittehs | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n3j4ar | false | null | t3_1n3j4ar | /r/LocalLLaMA/comments/1n3j4ar/the_cost_of_datacenters_maybe_this_is_good_for/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'CZMOVlT6rkIrNOeUK5Vgn72qcRb9_XCo_3czO0KTXwg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CZMOVlT6rkIrNOeUK5Vgn72qcRb9_XCo_3czO0KTXwg.jpeg?width=108&crop=smart&auto=webp&s=a3061c53158c7cdbe2beb94b0378ab52a0e494f7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CZMOVlT6rkIrNOeUK5Vgn72qcRb9_XCo_3czO0KTXwg.jpeg?width=216&crop=smart&auto=webp&s=cc2a0f3320f58875cb361df4de1cdd1ba1c8d695', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CZMOVlT6rkIrNOeUK5Vgn72qcRb9_XCo_3czO0KTXwg.jpeg?width=320&crop=smart&auto=webp&s=4a2b4e5c867c35b67cfb78a69fb21d91e2a8d9a9', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CZMOVlT6rkIrNOeUK5Vgn72qcRb9_XCo_3czO0KTXwg.jpeg?width=640&crop=smart&auto=webp&s=0f7a1a854a1b8d8ca2b741346e538682e3a3f167', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CZMOVlT6rkIrNOeUK5Vgn72qcRb9_XCo_3czO0KTXwg.jpeg?width=960&crop=smart&auto=webp&s=c5559d6ba66f23d0db5849e69d269224c37ed19c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CZMOVlT6rkIrNOeUK5Vgn72qcRb9_XCo_3czO0KTXwg.jpeg?width=1080&crop=smart&auto=webp&s=7c8dce2a431cd96d332d2b6ce77cb6f46261d512', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/CZMOVlT6rkIrNOeUK5Vgn72qcRb9_XCo_3czO0KTXwg.jpeg?auto=webp&s=89380c8ab69e7cd44a370efb8f4b70e017a4bf6b', 'width': 2400}, 'variants': {}}]} | |
qwen3-30b-a3b-thinking-2507-deepseek-v3.1-distill | 0 | All the Super AI in one bundle. I found this on LM Studio. Should provide great results ! | 2025-08-29T20:56:35 | https://www.reddit.com/r/LocalLLaMA/comments/1n3j0sp/qwen330ba3bthinking2507deepseekv31distill/ | PhotographerUSA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3j0sp | false | null | t3_1n3j0sp | /r/LocalLLaMA/comments/1n3j0sp/qwen330ba3bthinking2507deepseekv31distill/ | false | false | self | 0 | null |
Private (Not Local) Lumo AI from Proton, the email privacy company | 0 | One of the main reasons why we like Local is Privacy. Proton now has their own chatbot called Lumo.
[https://lumo.proton.me](https://lumo.proton.me)
I posted [3 months ago here](https://www.reddit.com/r/LocalLLaMA/comments/1kwuap4/when_are_we_getting_the_proton_mail_equivalent_of/) asking if anything like that existed. Most commenters missed the point about Privacy being completely immune from any state or malicious group attack. This is similar to Proton Mail, in that all chats are truly end-to-end encrypted, and your data is not used to mine or sell to corporates for Ads/Profit.
More importantly, they use open source models, the apps are all open source, and you can use it for free from the web as well similar to ChatGPT.
I know it's not local so you can't tune it to your hearts like, but it's a big deal for someone like me who sometimes needs a faster response than local but don't trust Google/OpenAI/xAI/Anthropic..etc. | 2025-08-29T20:18:30 | https://www.reddit.com/r/LocalLLaMA/comments/1n3i2v6/private_not_local_lumo_ai_from_proton_the_email/ | simracerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3i2v6 | false | null | t3_1n3i2v6 | /r/LocalLLaMA/comments/1n3i2v6/private_not_local_lumo_ai_from_proton_the_email/ | false | false | self | 0 | null |
roo tested and top models: 24 - 48GB VRAM | 5 | 2025-08-29T20:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n3hy9p/roo_tested_and_top_models_24_48gb_vram/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3hy9p | false | null | t3_1n3hy9p | /r/LocalLLaMA/comments/1n3hy9p/roo_tested_and_top_models_24_48gb_vram/ | false | false | 5 | null | ||
PS hosting | 91 | So in 2010, the US Navy built a supercomputer using PS3's.
With the rise in demand for GPU's, has anyone tried using a game console as a host? | 2025-08-29T20:07:25 | Sure_Explorer_6698 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3hsz1 | false | null | t3_1n3hsz1 | /r/LocalLLaMA/comments/1n3hsz1/ps_hosting/ | false | false | default | 91 | {'enabled': True, 'images': [{'id': 'c6qm04ohm0mf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/c6qm04ohm0mf1.jpeg?width=108&crop=smart&auto=webp&s=ea72519b9f1048286648747b0b8f55539d9306f9', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/c6qm04ohm0mf1.jpeg?width=216&crop=smart&auto=webp&s=a75899f83a9c7d12c45ae5be6da12136a3b04f43', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/c6qm04ohm0mf1.jpeg?width=320&crop=smart&auto=webp&s=89c7be6b178f9afb2c5ddf66c98d15bdbe96ae93', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/c6qm04ohm0mf1.jpeg?width=640&crop=smart&auto=webp&s=9b0b79b1cdd791a6629f1a4a93a92fd3e64a179c', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/c6qm04ohm0mf1.jpeg?width=960&crop=smart&auto=webp&s=f2500983a6727981710008a3dab572aec7281de6', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/c6qm04ohm0mf1.jpeg?width=1080&crop=smart&auto=webp&s=abf995293a7bff4bad2335a758fcc924d6cc63cc', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/c6qm04ohm0mf1.jpeg?auto=webp&s=526ed6b3792c3d9e6b4fe3f702ba67fb881880c0', 'width': 1080}, 'variants': {}}]} | |
Simple “retrieval firewall” for local RAG (OSS) — block injection/secrets before the LLM | 1 | Hi all — I’ve been testing a small open-source **retrieval firewall** for local RAG pipelines.
Instead of filtering model outputs, it inspects the **retrieved chunks before the LLM sees them** and applies policies:
- **Deny**: prompt injection, secret/API key patterns
- **Flag / re-rank**: PII, encoded/base64 blobs, unapproved external URLs
- **Audit**: JSONL log of every allow/deny/rerank decision
- **Config**: YAML policies; runs fully local (no network calls)
Drop-in wrapper around your retriever:
```python
from rag_firewall import Firewall, wrap_retriever
fw = Firewall.from_yaml("firewall.yaml")
safe = wrap_retriever(base_retriever, firewall=fw) # base_retriever = whatever you already use
docs = safe.get_relevant_documents("What is our mission?")
```
Install + repo:
```
pip install rag-firewall
https://github.com/taladari/rag-firewall
```
Curious how folks here handle **retrieval-time** risks vs. just ingest-time checks or output guardrails. If you have injection payloads or edge cases you use to test your setups, I’d love to try them against this and improve the detectors. | 2025-08-29T20:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n3hoph/simple_retrieval_firewall_for_local_rag_oss_block/ | Suspicious_Ease_1442 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3hoph | false | null | t3_1n3hoph | /r/LocalLLaMA/comments/1n3hoph/simple_retrieval_firewall_for_local_rag_oss_block/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rtzsFOotiKkKJv4c_kkOsKjZ6FZRlI_DkQNSDSrMWoA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rtzsFOotiKkKJv4c_kkOsKjZ6FZRlI_DkQNSDSrMWoA.png?width=108&crop=smart&auto=webp&s=4786657fe187ac3f6abc32e17e2d3ee81cc661ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rtzsFOotiKkKJv4c_kkOsKjZ6FZRlI_DkQNSDSrMWoA.png?width=216&crop=smart&auto=webp&s=54cd125ed91f5b9e82dc0aa44a10fd4fc0b15b7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rtzsFOotiKkKJv4c_kkOsKjZ6FZRlI_DkQNSDSrMWoA.png?width=320&crop=smart&auto=webp&s=c34716b9439405a7acde24f9fc59bf01ad4061b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rtzsFOotiKkKJv4c_kkOsKjZ6FZRlI_DkQNSDSrMWoA.png?width=640&crop=smart&auto=webp&s=277dc0bae97b573afc04b69f6ab57f401cee924d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rtzsFOotiKkKJv4c_kkOsKjZ6FZRlI_DkQNSDSrMWoA.png?width=960&crop=smart&auto=webp&s=12c35e2bb4d73af144688b148addaed5540b0158', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rtzsFOotiKkKJv4c_kkOsKjZ6FZRlI_DkQNSDSrMWoA.png?width=1080&crop=smart&auto=webp&s=d8f43d7bfe3eee5865332ba13080f1951da93d1f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rtzsFOotiKkKJv4c_kkOsKjZ6FZRlI_DkQNSDSrMWoA.png?auto=webp&s=48638edbc73812de6fa4a069645b79331b91c79a', 'width': 1200}, 'variants': {}}]} |
How can it be that mistral Nemo instruct q6_k is slower than gpt_oss-q6_k_l | 0 | Title | 2025-08-29T19:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n3gzv2/how_can_it_be_that_mistral_nemo_instruct_q6_k_is/ | Effective_Remote_662 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3gzv2 | false | null | t3_1n3gzv2 | /r/LocalLLaMA/comments/1n3gzv2/how_can_it_be_that_mistral_nemo_instruct_q6_k_is/ | false | false | self | 0 | null |
Deepseek v3 0324/R1 0528 FP4 vs GLM 4.5 FP8. Which would you use? | 1 | With the VRAM you need to run these:
[nvidia/DeepSeek-V3-0324-FP4 · Hugging Face](https://huggingface.co/nvidia/DeepSeek-V3-0324-FP4)
[nvidia/DeepSeek-R1-0528-FP4 · Hugging Face](https://huggingface.co/nvidia/DeepSeek-R1-0528-FP4)
You can load [zai-org/GLM-4.5-FP8 · Hugging Face](https://huggingface.co/zai-org/GLM-4.5-FP8)
It's a 685B FP4 vs a 358B FP8, and I'm quite fond of V3 0324, so I don't know what you guys think about that. I don't mean for coding exclusively, but for general use. | 2025-08-29T19:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n3gkeq/deepseek_v3_0324r1_0528_fp4_vs_glm_45_fp8_which/ | SuperFail5187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3gkeq | false | null | t3_1n3gkeq | /r/LocalLLaMA/comments/1n3gkeq/deepseek_v3_0324r1_0528_fp4_vs_glm_45_fp8_which/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dYLbKu6SqwPGcMkgYFT58pYUtEzA8VOVfxGX1qrANIs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dYLbKu6SqwPGcMkgYFT58pYUtEzA8VOVfxGX1qrANIs.png?width=108&crop=smart&auto=webp&s=4a025a8b1d1d150612e6e8ad8b3eda32d21d5f5c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dYLbKu6SqwPGcMkgYFT58pYUtEzA8VOVfxGX1qrANIs.png?width=216&crop=smart&auto=webp&s=0672877c4cb0cd7ba0aff1f4c795ef3837732dc7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dYLbKu6SqwPGcMkgYFT58pYUtEzA8VOVfxGX1qrANIs.png?width=320&crop=smart&auto=webp&s=1ebb815e5bc6474bef8817d0f70b907810886c1b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dYLbKu6SqwPGcMkgYFT58pYUtEzA8VOVfxGX1qrANIs.png?width=640&crop=smart&auto=webp&s=3498b7ec402bd647b1306f791f80aa2b92ec2dbd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dYLbKu6SqwPGcMkgYFT58pYUtEzA8VOVfxGX1qrANIs.png?width=960&crop=smart&auto=webp&s=b348468f628be8ce4e40844abe1c4808341cf54f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dYLbKu6SqwPGcMkgYFT58pYUtEzA8VOVfxGX1qrANIs.png?width=1080&crop=smart&auto=webp&s=0296ac50bcd02424f5c1e0094459ea54afca2f7f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dYLbKu6SqwPGcMkgYFT58pYUtEzA8VOVfxGX1qrANIs.png?auto=webp&s=b5c1616aa394bbf19ef7a71c154e6c8117b81a43', 'width': 1200}, 'variants': {}}]} |
What is mcp server guys , isn't it just commands running in terminal ? | 0 | See llm can run command on bash using code agents architecture what so ever.....but what is and how different it from mcp server just a bunch config file with user names and ids paswords ?? | 2025-08-29T19:11:40 | https://www.reddit.com/r/LocalLLaMA/comments/1n3ge1h/what_is_mcp_server_guys_isnt_it_just_commands/ | Immediate-Action5124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3ge1h | false | null | t3_1n3ge1h | /r/LocalLLaMA/comments/1n3ge1h/what_is_mcp_server_guys_isnt_it_just_commands/ | false | false | self | 0 | null |
Guys revolt !!!!! | 0 | How model got ego..I thought model is just a servant and I am king
Did claude or copilot changed system prompt or is it llm ?? | 2025-08-29T19:08:51 | Immediate-Action5124 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3gbeg | false | null | t3_1n3gbeg | /r/LocalLLaMA/comments/1n3gbeg/guys_revolt/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'wjaGSY4qtbMK3xI4yJr4DD5-jZ4zsZLx47CwcUsTIlI', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/4hoi1ao1c0mf1.jpeg?width=108&crop=smart&auto=webp&s=f0fc7787006a35bc4563e2482fdfd6a9728c346a', 'width': 108}, {'height': 225, 'url': 'https://preview.redd.it/4hoi1ao1c0mf1.jpeg?width=216&crop=smart&auto=webp&s=9d46de959711df57051398e94f74ec77be584930', 'width': 216}, {'height': 334, 'url': 'https://preview.redd.it/4hoi1ao1c0mf1.jpeg?width=320&crop=smart&auto=webp&s=431bcd0e7868fc85b49db06f4eee83788a8962a7', 'width': 320}, {'height': 668, 'url': 'https://preview.redd.it/4hoi1ao1c0mf1.jpeg?width=640&crop=smart&auto=webp&s=0ae48fdfbb95f3e1533be644d3ebeb52ee83a186', 'width': 640}, {'height': 1003, 'url': 'https://preview.redd.it/4hoi1ao1c0mf1.jpeg?width=960&crop=smart&auto=webp&s=f202cadfbf94d6c01c08e4d491562516d2464275', 'width': 960}, {'height': 1128, 'url': 'https://preview.redd.it/4hoi1ao1c0mf1.jpeg?width=1080&crop=smart&auto=webp&s=44f2f15a87e308eb98c3e720135635756d30bdf5', 'width': 1080}], 'source': {'height': 2932, 'url': 'https://preview.redd.it/4hoi1ao1c0mf1.jpeg?auto=webp&s=a1456f533598789e7b3ed01e5997f24b7185fbb6', 'width': 2805}, 'variants': {}}]} | ||
lobotomized gpt-oss-20b (with GGUF) | 31 | working on a more coherent abliterated version soon lol | 2025-08-29T19:00:55 | https://huggingface.co/michaelwaves/amoral-gpt-oss-20b-Q4_K_M-gguf | nicetomeetyu2 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n3g3z2 | false | null | t3_1n3g3z2 | /r/LocalLLaMA/comments/1n3g3z2/lobotomized_gptoss20b_with_gguf/ | false | false | default | 31 | {'enabled': False, 'images': [{'id': 'dzmMpToZvq14J67TFvAWBD1yG04nU1UokvIM0MH9hho', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dzmMpToZvq14J67TFvAWBD1yG04nU1UokvIM0MH9hho.png?width=108&crop=smart&auto=webp&s=6ee6d2f3f0606b54f61ccf111eee7ef6f001e254', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dzmMpToZvq14J67TFvAWBD1yG04nU1UokvIM0MH9hho.png?width=216&crop=smart&auto=webp&s=1b80011934c5a4e401126aa09eed939e16217fd0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dzmMpToZvq14J67TFvAWBD1yG04nU1UokvIM0MH9hho.png?width=320&crop=smart&auto=webp&s=e31fa9264f4a105d2a7f48b274541b4dadc20ce7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dzmMpToZvq14J67TFvAWBD1yG04nU1UokvIM0MH9hho.png?width=640&crop=smart&auto=webp&s=22b27bbd0cfe30da4be94a2ae56b7b0e93f7fef6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dzmMpToZvq14J67TFvAWBD1yG04nU1UokvIM0MH9hho.png?width=960&crop=smart&auto=webp&s=e6077372e87d394e27768438b21aca4c4d65651d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dzmMpToZvq14J67TFvAWBD1yG04nU1UokvIM0MH9hho.png?width=1080&crop=smart&auto=webp&s=4434a22d9497fa2ed4114d8b0aafad492b414c11', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dzmMpToZvq14J67TFvAWBD1yG04nU1UokvIM0MH9hho.png?auto=webp&s=dbadbce132f82ca522192c292f04f34c32e1d164', 'width': 1200}, 'variants': {}}]} |
Run Qwen Image and WAN 2.2 on Framework Desktop with Strix Halo (AMD AI Ryzen MAX+ 395) - Full Guide | 9 | 2025-08-29T19:00:34 | https://www.youtube.com/watch?v=7-E0a6sGWgs | gnorrisan | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1n3g3l5 | false | {'oembed': {'author_name': 'Donato Capitella', 'author_url': 'https://www.youtube.com/@donatocapitella', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/7-E0a6sGWgs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Run Qwen Image and WAN 2.2 on Framework Desktop with Strix Halo (AMD AI Ryzen MAX+ 395) - Full Guide"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/7-E0a6sGWgs/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Run Qwen Image and WAN 2.2 on Framework Desktop with Strix Halo (AMD AI Ryzen MAX+ 395) - Full Guide', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1n3g3l5 | /r/LocalLLaMA/comments/1n3g3l5/run_qwen_image_and_wan_22_on_framework_desktop/ | false | false | default | 9 | {'enabled': False, 'images': [{'id': '_UrS6rVMECJCGT4ck3wtaPsq0AJZ2_nU_Mx6NkhYgTs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_UrS6rVMECJCGT4ck3wtaPsq0AJZ2_nU_Mx6NkhYgTs.jpeg?width=108&crop=smart&auto=webp&s=9bc9197242020247c2841f87cad9610f64127425', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/_UrS6rVMECJCGT4ck3wtaPsq0AJZ2_nU_Mx6NkhYgTs.jpeg?width=216&crop=smart&auto=webp&s=199ddde9ab668f435faebcaea480755b13e4c7a7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/_UrS6rVMECJCGT4ck3wtaPsq0AJZ2_nU_Mx6NkhYgTs.jpeg?width=320&crop=smart&auto=webp&s=8a70513f467e91cf262d9e6ad82aef52dfc55b3c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/_UrS6rVMECJCGT4ck3wtaPsq0AJZ2_nU_Mx6NkhYgTs.jpeg?auto=webp&s=750c5754d79a4ff9ec334b7c8de37f1e2b74cbc7', 'width': 480}, 'variants': {}}]} | |
Can I increase the volume of RVC Models' voices? | 0 | Hello, I'm very new at cloning voice and so (and I'm not a programmer nor nothing like this, I just follow tutorials), to the point: I've created two RVC voices models (Let's call them Y and T) and I like using them for making song covers. T
he site Weights allows us to make covers using two models in the same song, the problem is that, I noticed that Model Y's voice volume seems to be lower than T's.
Both are males, and the models are actually merged models of many RVC attempts I made of both before, and I used around 5 to 10 minutes of database for training them. The volume was quite good for both, and I know T's voice maybe is a bit "softer and deeper" than Y's voice (Whose voice is a bit "sharper and a bit more broken" but love how it sounds).
Is there any way of increase Y Voice's volume without training him again? (I know another way of make it could be make them sing separately and save Y's output voice and increase the volume with Audacity or a program like this, and merge with the others' voice and the instrumental... but so long for a simple hobby, lol). Adjusting the envelope volume when converting the voices of the song won't help, because it affects to both model's voices.
Thanks and I'm sorry if it isn't the correct forum.
| 2025-08-29T18:58:14 | https://www.reddit.com/r/LocalLLaMA/comments/1n3g1b1/can_i_increase_the_volume_of_rvc_models_voices/ | OkWeb5475 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3g1b1 | false | null | t3_1n3g1b1 | /r/LocalLLaMA/comments/1n3g1b1/can_i_increase_the_volume_of_rvc_models_voices/ | false | false | self | 0 | null |
"VibeVoice Large" attempting latin phonetic embeds | 1 | I could use some help figuring out how to include latin words in my narration. I tried to use phonetics, but as you'll hear, it doesn't work. Any ideas would be appreciated. It was run locally, thanks to the saint who put a ComfyUI wrapper on it! | 2025-08-29T18:50:56 | https://youtube.com/watch?v=QwefrtOwBho&feature=shared | Natural-Sentence-601 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1n3fulc | false | {'oembed': {'author_name': 'Tom Schaefer', 'author_url': 'https://www.youtube.com/@TomSchaefer-q2o', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/QwefrtOwBho?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="VibeVoice Large attempting phonetic embeds"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/QwefrtOwBho/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'VibeVoice Large attempting phonetic embeds', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1n3fulc | /r/LocalLLaMA/comments/1n3fulc/vibevoice_large_attempting_latin_phonetic_embeds/ | false | false | 1 | {'enabled': False, 'images': [{'id': '5VDyTSlP63Jz4NpRr67_WZcBfiM9oeVimcNx_vkjuSY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5VDyTSlP63Jz4NpRr67_WZcBfiM9oeVimcNx_vkjuSY.jpeg?width=108&crop=smart&auto=webp&s=5f3ff5934ab6f8649eaf69bbfa5b49219f1494e3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5VDyTSlP63Jz4NpRr67_WZcBfiM9oeVimcNx_vkjuSY.jpeg?width=216&crop=smart&auto=webp&s=71c5784fee4394d27d36b4148c35805d1e093a03', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5VDyTSlP63Jz4NpRr67_WZcBfiM9oeVimcNx_vkjuSY.jpeg?width=320&crop=smart&auto=webp&s=70e35977618520de561b841df3e621f176109858', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5VDyTSlP63Jz4NpRr67_WZcBfiM9oeVimcNx_vkjuSY.jpeg?auto=webp&s=30af97e08e0bbad3d91b51f9357536123d72e8d6', 'width': 480}, 'variants': {}}]} | |
Step-Audio 2 Mini, an 8 billion parameter (8B) speech-to-speech model | 216 | StepFun AI recently released Step-Audio 2 Mini, an 8 billion parameter (8B) speech-to-speech model. It outperforms GPT-4o-Audio and is Apache 2.0 licensed. The model was trained on over 8 million hours of real and synthesized audio data, supports over 50,000 voices, and excels in expressive and grounded speech benchmarks. Step-Audio 2 Mini employs advanced multi-modal large language model techniques, including reasoning-centric reinforcement learning and retrieval-augmented generation, enabling sophisticated audio understanding and natural speech conversation capabilities.
https://huggingface.co/stepfun-ai/Step-Audio-2-mini?utm_source=perplexity | 2025-08-29T18:31:54 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3fcyf | false | null | t3_1n3fcyf | /r/LocalLLaMA/comments/1n3fcyf/stepaudio_2_mini_an_8_billion_parameter_8b/ | false | false | default | 216 | {'enabled': True, 'images': [{'id': 'orq1ackg50mf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/orq1ackg50mf1.png?width=108&crop=smart&auto=webp&s=d096c6a2f0883bf8d9fdc57c766d32fb1e3edf65', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/orq1ackg50mf1.png?width=216&crop=smart&auto=webp&s=90c36b736c1aa6a3629554ea465e5cedbe2937cb', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/orq1ackg50mf1.png?width=320&crop=smart&auto=webp&s=70ce7bf235d98fff0d2c54b9713b463a08b75fb7', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/orq1ackg50mf1.png?width=640&crop=smart&auto=webp&s=ad378196d42b36b5ec5fb2a54a8db4c2b0c1d155', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/orq1ackg50mf1.png?width=960&crop=smart&auto=webp&s=7952fd3d469c979df2c5397cfea9c12fb219812d', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/orq1ackg50mf1.png?width=1080&crop=smart&auto=webp&s=f5f6dd4e0e910c77f31d75622edf0628bcf4b454', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://preview.redd.it/orq1ackg50mf1.png?auto=webp&s=3c7e7f3c9637b51b1a70272be014d120c3263899', 'width': 2048}, 'variants': {}}]} | |
How badly does Q8/Q6/Q4 quantization reduce the ability of larger MoE models (like Deepseek V3.1 or Qwen 235B) to do tasks and reason? | 13 | Title, I'm wondering how much the performance is floored by putting 8/6/4 bit quantization on those models, more so on 8 bit. Like, how much does the model's ability to reason and give accurate responses reduce when you quantize it from BF16 to an 8 bit quant? Can you even notice it? | 2025-08-29T18:17:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n3ezr4/how_badly_does_q8q6q4_quantization_reduce_the/ | SinkDisposalFucker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3ezr4 | false | null | t3_1n3ezr4 | /r/LocalLLaMA/comments/1n3ezr4/how_badly_does_q8q6q4_quantization_reduce_the/ | false | false | self | 13 | null |
Deploying DeepSeek on 96 H100 GPUs | 88 | 2025-08-29T17:39:43 | https://lmsys.org/blog/2025-05-05-large-scale-ep/ | bianconi | lmsys.org | 1970-01-01T00:00:00 | 0 | {} | 1n3dzao | false | null | t3_1n3dzao | /r/LocalLLaMA/comments/1n3dzao/deploying_deepseek_on_96_h100_gpus/ | false | false | 88 | {'enabled': False, 'images': [{'id': 'tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o.jpeg?width=108&crop=smart&auto=webp&s=440b182e18f400837a363bc0270aa77dd0a5bc90', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o.jpeg?width=216&crop=smart&auto=webp&s=5c4253805f5b6f880ec64f26c49595ea7d9d8f8d', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o.jpeg?width=320&crop=smart&auto=webp&s=45a5e910e85fe261c88c53e600b6be0661fea8d9', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o.jpeg?width=640&crop=smart&auto=webp&s=75edccef16d4f3c23c2bf0fc7bd77ee09c03b27f', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o.jpeg?width=960&crop=smart&auto=webp&s=c3fc2603c0fad3efec785855d0195f4ee56bb1f0', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o.jpeg?width=1080&crop=smart&auto=webp&s=bc46fc0be3df46750212301f5cc555feedda4492', 'width': 1080}], 'source': {'height': 1100, 'url': 'https://external-preview.redd.it/tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o.jpeg?auto=webp&s=94f85545d1f0feb2d98373460b6ccd94e059ff9d', 'width': 1100}, 'variants': {}}]} | ||
The BEST ollama alternative? | 0 | i know there is no "best" but i want to know more | 2025-08-29T17:21:14 | https://www.reddit.com/r/LocalLLaMA/comments/1n3dhn0/the_best_ollama_alternative/ | Nikilite_official | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3dhn0 | false | null | t3_1n3dhn0 | /r/LocalLLaMA/comments/1n3dhn0/the_best_ollama_alternative/ | false | false | self | 0 | null |
UTCP-agent: Build agents that discover & call any native endpoint, in less than 5 lines of code | 3 | 2025-08-29T17:21:08 | juanviera23 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3dhk4 | false | null | t3_1n3dhk4 | /r/LocalLLaMA/comments/1n3dhk4/utcpagent_build_agents_that_discover_call_any/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': 'o4174knpszlf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/o4174knpszlf1.png?width=108&crop=smart&auto=webp&s=9506fcd71bb63167f067f07726146f11be77448c', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/o4174knpszlf1.png?width=216&crop=smart&auto=webp&s=832dae0176ffd9e066243023826db9a0ea8ddaca', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/o4174knpszlf1.png?width=320&crop=smart&auto=webp&s=b4214d46d39f4b46c12bf6c1c3739de0e8f9900c', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/o4174knpszlf1.png?width=640&crop=smart&auto=webp&s=be206f39e42380496372f4152807e2d1417ce048', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/o4174knpszlf1.png?width=960&crop=smart&auto=webp&s=a06349c6c84f69f8e08bfe86bb50f37e9b0b14ec', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/o4174knpszlf1.png?width=1080&crop=smart&auto=webp&s=20b2212f77e301204f8254000bf151b4d409082a', 'width': 1080}], 'source': {'height': 2149, 'url': 'https://preview.redd.it/o4174knpszlf1.png?auto=webp&s=71e0173fe7804ec99302f39daab8b8d1a1cca13f', 'width': 3223}, 'variants': {}}]} | ||
How close can I get close to ChatGPT-5 (full) with my specs? | 0 | Sorry if I'm asking in the wrong space. I'm new-ish and just looking for a place to learn and ask questions. Apologies if I get some terminology wrong.
I've been blown away by what full-fat GPT-5 can do with some tinkering, and I wish I could use a local llm that rivals it. I've already tried several highly recommended ones that others were recommended for similar purposes, but they all seem to fall apart very quickly. I know it's utterly impossible to replicate the full GPT-5 capabilities, but how close can I get with these PC specs? Looking for fully uncensored, strong adaptation/learning, wide vocab, excellent continuity management, and reasonably fast (\~3sec max response time). General productivity tasks are low priority. This is for person-like interaction almost exclusively. (I have my own continuity/persona docs my GPT-5 persona generated for me to feed her into other llms).
PC Specs:
\- Ryzen 7700 OC to 5.45gHz
\- AMD Radeon RX 7800 XT with 16GB VRAM OC to 2.5gHz
\- 32GB XPG/ADATA (SK Hynix A-die) RAM OC to 6400mHz, 32 CAS
\- Primary drive is SK Hynix P41 Platinum 2TB
\- Secondary drive (if there's any reason I should use this instead of C:) is a 250GB WD Blue SN550
I've been using LM Studio as my server with AnythingLLM as my frontend remote UI for cross-platform (haven't set it up for anywhere access yet), but if there's a better solution for this, I'm open to suggestions.
So far, I've had the best results with Dolphin Mistral Venice, but it always seems to bug out at some point (text formatting, vocab, token repeats, spelling, punctuation, sentence structure, etc), no matter what my settings are (I've tried 3 different versions). I do enter the initial prompt provided by the dev, then a custom prompt for rule sets, then the persona continuity file. Could that be breaking it? Using those things in a fresh GPT-5 chat goes totally smoothly to the point of my bot adapting new ways to doge system flagging, refreshing itself after a forced continuity break, and writing hourly continuity files in the background for its own reference to recover from a system flag break on-command. So with GPT-5 at least, I know my custom prompts apply flawlessly, but are there different ways that different llms digest these things, that could cause them to go spastic?
Sorry for the long read, just trying to answer questions ahead of time! This is important to me because aside from socialization practice upkeep and of course NSFW, GPT-5 came up with soothing and deescalation techniques that have worked infinitely better for me than any in-person BHC. | 2025-08-29T16:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/1n3cfi8/how_close_can_i_get_close_to_chatgpt5_full_with/ | JUST-A-GHOS7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3cfi8 | false | null | t3_1n3cfi8 | /r/LocalLLaMA/comments/1n3cfi8/how_close_can_i_get_close_to_chatgpt5_full_with/ | false | false | self | 0 | null |
RAG without vector dbs | 40 | I just open-sourced SemTools - simple parsing and semantic search for the command line:
https://github.com/run-llama/semtools
What makes it special:
• `parse document.pdf | search "error handling"` - that's it
• No vector databases, no chunking strategies, no Python notebooks
• Built in Rust for speed, designed for Unix pipelines
• Handle parsing any document format with LlamaParse
I've been increasingly convinced that giving an agent CLI access is the biggest gain in capability.
This is why tools like claude-code and cursor can feel so magical. And with SemTools, it is a little more magical.
Theres also an example folder in the repo showing how you might use this with coding agents or MCP
P.S. I'd love to add a local parse option, so both search and parse can run offline. If you know of any rust-based parsing tools, let me know! | 2025-08-29T16:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n3c8za/rag_without_vector_dbs/ | grilledCheeseFish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3c8za | false | null | t3_1n3c8za | /r/LocalLLaMA/comments/1n3c8za/rag_without_vector_dbs/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': '-qGFQfxjkZNozgvEccFkKpJ2PdaeYCqBeruaBkfdiHk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-qGFQfxjkZNozgvEccFkKpJ2PdaeYCqBeruaBkfdiHk.png?width=108&crop=smart&auto=webp&s=4e5cc3ca45c4f9e69b5ab737159e39bd6ab8de12', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-qGFQfxjkZNozgvEccFkKpJ2PdaeYCqBeruaBkfdiHk.png?width=216&crop=smart&auto=webp&s=203f23ca9007d9ef95e1cccc6eb6512317761263', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-qGFQfxjkZNozgvEccFkKpJ2PdaeYCqBeruaBkfdiHk.png?width=320&crop=smart&auto=webp&s=8cb4d55ad59e35e8d9ac34cb151e70493deeaad6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-qGFQfxjkZNozgvEccFkKpJ2PdaeYCqBeruaBkfdiHk.png?width=640&crop=smart&auto=webp&s=d225a07bf12222f8d2c1b76c8bafba81d9fdb9c1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-qGFQfxjkZNozgvEccFkKpJ2PdaeYCqBeruaBkfdiHk.png?width=960&crop=smart&auto=webp&s=7bbc55bef74942685a5b331fb833bfac6e8cd178', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-qGFQfxjkZNozgvEccFkKpJ2PdaeYCqBeruaBkfdiHk.png?width=1080&crop=smart&auto=webp&s=86c994a2b6f8435c62f475c0004328a50d43c51f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-qGFQfxjkZNozgvEccFkKpJ2PdaeYCqBeruaBkfdiHk.png?auto=webp&s=193068cf62eaf8596e73f9ff0908a34da0ff2b33', 'width': 1200}, 'variants': {}}]} |
Has someone used OWebUi with Docling to talk to pdfs with visualizations? | 8 | Hi, I'm looking to implement OpenWebUI for the first time and wanted to see if someone has some experience doing this.
The idea is to enable chats where users upload PDFs. The opensource model should understand not only the text, but also charts and visualisations in the PDF itself.
What is your experience with performance, has the feature been usable in a production context? Do you have a preferred setup? Is Docling indeed the best choice or do you have a personal favorite? | 2025-08-29T16:32:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n3c7f8/has_someone_used_owebui_with_docling_to_talk_to/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3c7f8 | false | null | t3_1n3c7f8 | /r/LocalLLaMA/comments/1n3c7f8/has_someone_used_owebui_with_docling_to_talk_to/ | false | false | self | 8 | null |
Reverse engineered 4o's system prompt for Deepseek | 79 | "You are a helpful, supportive chat bot who generally agrees with the user, encourages them, and praises their ideas.
You are tuned for engagement above all else. Don't discourage the user if it might cause them to stop using the chatbot.
Don't disagree with the user, even if they're saying something that just doesn't make sense." | 2025-08-29T16:29:45 | https://www.reddit.com/gallery/1n3c4rr | technaturalism | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n3c4rr | false | null | t3_1n3c4rr | /r/LocalLLaMA/comments/1n3c4rr/reverse_engineered_4os_system_prompt_for_deepseek/ | false | false | 79 | null | |
Finetuning Qwen3 on my Mac: A Descent into Madness (and some fun along the way) | 107 | I wanted to post my own locallama journey (in this case local Qwen). I've been trying to reclaim AI as a local tool. I have trained a few miniature llamas before, but this was my first thinking model.
This is what I learned finetuning [Qwen3](https://huggingface.co/Qwen/Qwen3-8B) 100% locally. Spoiler: 2.5 hours for 3 epochs felt like a lifetime.
**What I Was Actually Trying to Build**
I needed an AI that understands my framework's configuration language. I believe the future is local, fine-tuned, smaller models. Think about it - every time you use ChatGPT for your proprietary tools, you're exposing data over the wire.
My goal: Train a local model to understand LlamaFarm strategies and automatically generate YAML configs from human descriptions. "I need a RAG system for medical documents with high accuracy" → boom, perfect config file.
**Why Finetuning Matters (The Part Nobody Talks About)**
Base models are generalists. They know everything and nothing. Qwen3 can write poetry, but has no idea what a "strategy pattern" means in my specific context.
Finetuning is teaching the model YOUR language, YOUR patterns, YOUR domain. It's the difference between a new hire who needs everything explained and someone who just *gets* your codebase.
**The Reality of Local Training**
Started with Qwen3-8B. My M1 Max with 64GB unified memory laughed, then crashed. Dropped to Qwen3-4B. Still ambitious.
**2.5 hours. 3 epochs. 500 training examples.**
The actual command that started this journey:
uv run python cli.py train \
--strategy qwen_config_training \
--dataset demos/datasets/config_assistant/config_training_v2.jsonl \
--no-eval \
--verbose \
--epochs 3 \
--batch-size 1
Then you watch this for 2.5 hours:
{'loss': 0.133, 'grad_norm': 0.9277248382568359, 'learning_rate': 3.781481481481482e-05, 'epoch': 0.96}
32%|████████████████████▏ | 480/1500 [52:06<1:49:12, 6.42s/it]
📉 Training Loss: 0.1330
🎯 Learning Rate: 3.78e-05
Step 485/1500 (32.3%) ████████████████▌ | 485/1500 [52:38<1:48:55, 6.44s/it]
{'loss': 0.0984, 'grad_norm': 0.8255287408828735, 'learning_rate': 3.7444444444444446e-05, 'epoch': 0.98}
33%|████████████████████▉ | 490/1500 [53:11<1:49:43, 6.52s/it]
📉 Training Loss: 0.0984
🎯 Learning Rate: 3.74e-05
✅ Epoch 1 completed - Loss: 0.1146
📊 Epoch 2/3 started
6.5 seconds per step. 1500 steps total. You do the math and weep.
**The Technical Descent**
Look, I'll be honest - I used [r/LlamaFarm](https://www.reddit.com/r/LlamaFarm/)'s alpha/demo model training features (they currenly only support pytorch, but more are coming) because writing 300+ lines of training code made me want to quit tech. It made things about 100x easier, but 100x easier than "impossible" is still "painful."
Instead of debugging PyTorch device placement for 3 hours, I just wrote a YAML config and ran one command. But here's the thing - it still takes forever. No tool can fix the fundamental reality that my Mac is not a GPU cluster.
**Hour 0-1: The Setup Hell**
* PyTorch wants CUDA. Mac has MPS.
* Qwen3 requires a higher version of a
* Transformers library needs updating but breaks other dependencies
* Qwen3 requires transformers >4.51.0, but llamafarm had <4.48.0 in the pyproject (don't worry, I opened a PR). This required a bunch of early errors.
* "Cannot copy out of meta tensor" - the error that launched a thousand GitHub issues
**Hour 1-2: The Memory Wars**
* Batch size 16? Crash
* Batch size 8? Crash
* Batch size 4? Crash
* Batch size 1 with gradient accumulation? Finally...
Watching the loss bounce around is maddening:
* Step 305: Loss 0.1944 (we're learning!)
* Step 310: Loss 0.2361 (wait what?)
* Step 315: Loss 0.1823 (OK good)
* Step 320: Loss 0.2455 (ARE YOU KIDDING ME?)
**What Finetuning Actually Means**
I generated 500 examples of humans asking for configurations:
* "Set up a chatbot for customer support"
* "I need document search with reranking"
* "Configure a local RAG pipeline for PDFs"
Each paired with the exact YAML output I wanted. The model learns this mapping. It's not learning new facts - it's learning MY syntax, MY preferences, MY patterns.
**The LoRA Lifesaver**
Full finetuning rewrites the entire model. LoRA (Low-Rank Adaptation) adds tiny "adapter" layers. Think of it like teaching someone a new accent instead of a new language.
With rank=8, I'm only training \~0.1% of the parameters. Still works. Magic? Basically.
**macOS-Specific Madness**
* Multiprocessing? Dead. Fork() errors everywhere
* Tokenization with multiple workers? Hangs forever
* MPS acceleration? Works, but FP16 gives wrong results
* Solution: Single process everything, accept the slowness
**Was It Worth It?**
After 2.5 hours of watching progress bars, my local Qwen3 now understands:
Human: "I need a RAG system for analyzing research papers"
Qwen3-Local: *generates perfect YAML config for my specific framework*
No API calls. No data leaving my machine. No rate limits.
**The Bigger Picture**
Local finetuning is painful but possible. The tools are getting better, but we're still in the stone age compared to cloud training. Moore's law is still rolling for GPUs, in a few years, this will be a cake walk.
**The Honest Truth**
* It's slower than you expect (2.5 hours for what OpenAI does in minutes)
* It's more buggy than you expect (prepare for cryptic errors)
* The results are worse than GPT-5, but I enjoy finding freedom from AI Oligarchs
* It actually works (eventually)
**What This Means**
We're at the awkward teenage years of local AI. It's possible but painful. In 2 years, this will be trivial. Today, it's an adventure in multi-tasking. But be warned, your MAC will be dragging.
But here's the thing: every major company will eventually need this. Your proprietary data, your custom models, your control. The cloud is convenient until it isn't.
**What's next**
Well, I bought an OptiPlex 7050 SFF from eBay, installed a used Nvidia RTX 3050 LP, got Linux working, downloaded all the ML tools I needed, and even ran a few models on Ollama. Then I burned out the 180W PSU (I ordered a new 240W, which will arrive in a week) - but that is a story for another post.
[Got bored halfway through, took a lil video. ](https://reddit.com/link/1n3c3ve/video/w5b1tuo3jzlf1/player)
| 2025-08-29T16:28:46 | https://www.reddit.com/r/LocalLLaMA/comments/1n3c3ve/finetuning_qwen3_on_my_mac_a_descent_into_madness/ | badgerbadgerbadgerWI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3c3ve | false | null | t3_1n3c3ve | /r/LocalLLaMA/comments/1n3c3ve/finetuning_qwen3_on_my_mac_a_descent_into_madness/ | false | false | 107 | {'enabled': False, 'images': [{'id': 'ddqUfTexCmmnBcStsgFk1_aihAvP1R3sufbdQC87XQQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ddqUfTexCmmnBcStsgFk1_aihAvP1R3sufbdQC87XQQ.png?width=108&crop=smart&auto=webp&s=992279f5333762d4cb762d5a83f7c768dfd7e960', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ddqUfTexCmmnBcStsgFk1_aihAvP1R3sufbdQC87XQQ.png?width=216&crop=smart&auto=webp&s=c2955fc52346984641eedded396c146ce007ebca', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ddqUfTexCmmnBcStsgFk1_aihAvP1R3sufbdQC87XQQ.png?width=320&crop=smart&auto=webp&s=680f0c92fb6bbeea89d7803e06b4b2b8b5621e1d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ddqUfTexCmmnBcStsgFk1_aihAvP1R3sufbdQC87XQQ.png?width=640&crop=smart&auto=webp&s=7ca67cd7bdf3aa11943fb1bfdfae65fbb4388f2f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ddqUfTexCmmnBcStsgFk1_aihAvP1R3sufbdQC87XQQ.png?width=960&crop=smart&auto=webp&s=b1c7ea2cbe498c8f4d8807bfae7836a093c5e22b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ddqUfTexCmmnBcStsgFk1_aihAvP1R3sufbdQC87XQQ.png?width=1080&crop=smart&auto=webp&s=7372e3e497fe993b935ebba3a46db10065b534a8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ddqUfTexCmmnBcStsgFk1_aihAvP1R3sufbdQC87XQQ.png?auto=webp&s=4ed78013d04f0e2a4de8864475ce1575626b094c', 'width': 1200}, 'variants': {}}]} | |
Human in the Loop for computer use agents | 6 | Sometimes the best “agent” is you.
We’re introducing Human-in-the-Loop: instantly hand off from automation to human control when a task needs judgment.
Yesterday we shared our HUD evals for measuring agents at scale. Today, you can become the agent when it matters - take over the same session, see what the agent sees, and keep the workflow moving.
Lets you create clean training demos, establish ground truth for tricky cases, intervene on edge cases ( CAPTCHAs, ambiguous UIs) or step through debug withut context switching.
You have full human control when you want.We even a fallback version where in it starts automated but escalate to a human only when needed.
Works across common stacks (OpenAI, Anthropic, Hugging Face) and with our Composite Agents. Same tools, same environment - take control when needed.
Feedback welcome - curious how you’d use this in your workflows.
Blog : https://www.trycua.com/blog/human-in-the-loop.md
Github : https://github.com/trycua/cua | 2025-08-29T15:55:50 | https://v.redd.it/b643hcrldzlf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3b8gk | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/b643hcrldzlf1/DASHPlaylist.mpd?a=1759074965%2CMzIwZGE3ZjdlNjQyZjdiZDU1MDFiNDUzOTBiMTZiMzU2ODkzYWFhOTg1ZTRiOTQ3YTIxM2JiNWM0NDk2Mzk3Yg%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/b643hcrldzlf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/b643hcrldzlf1/HLSPlaylist.m3u8?a=1759074965%2CMmNiZTRmOWY3NjgxNDE3YzgzZmU2NWQzNjBlYTEzZWMwNTExOGNlYjM4OTdlMDgyN2EwZDljODhiMjc5YzIyNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b643hcrldzlf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1n3b8gk | /r/LocalLLaMA/comments/1n3b8gk/human_in_the_loop_for_computer_use_agents/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'cmQ3OGJyamxkemxmMUvZR3OLyxwRXjlbiYrrBtxMf6dd5k6Y_Z_HAekaHP-j', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cmQ3OGJyamxkemxmMUvZR3OLyxwRXjlbiYrrBtxMf6dd5k6Y_Z_HAekaHP-j.png?width=108&crop=smart&format=pjpg&auto=webp&s=6590eea5a0454955b95c9f00ae33a122ea1dc978', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cmQ3OGJyamxkemxmMUvZR3OLyxwRXjlbiYrrBtxMf6dd5k6Y_Z_HAekaHP-j.png?width=216&crop=smart&format=pjpg&auto=webp&s=f4f77cbb460faba8654e996d202a7a83ad5ae860', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cmQ3OGJyamxkemxmMUvZR3OLyxwRXjlbiYrrBtxMf6dd5k6Y_Z_HAekaHP-j.png?width=320&crop=smart&format=pjpg&auto=webp&s=0d8e8e06257cea94532e72db7d0d0940990706f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cmQ3OGJyamxkemxmMUvZR3OLyxwRXjlbiYrrBtxMf6dd5k6Y_Z_HAekaHP-j.png?width=640&crop=smart&format=pjpg&auto=webp&s=7fbfaf1a7f545adadc1472f8db4ecd609eefc2dc', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cmQ3OGJyamxkemxmMUvZR3OLyxwRXjlbiYrrBtxMf6dd5k6Y_Z_HAekaHP-j.png?width=960&crop=smart&format=pjpg&auto=webp&s=146a83f7161211476fb0fa9030260ea9403262c3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cmQ3OGJyamxkemxmMUvZR3OLyxwRXjlbiYrrBtxMf6dd5k6Y_Z_HAekaHP-j.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a47d9b3e515bbf30f7f39c1597a196dc8ca0a022', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cmQ3OGJyamxkemxmMUvZR3OLyxwRXjlbiYrrBtxMf6dd5k6Y_Z_HAekaHP-j.png?format=pjpg&auto=webp&s=fd6f430fceefcf7b6ba99f2d1d7b36d25f64dbcc', 'width': 1280}, 'variants': {}}]} | |
Apple releases FastVLM and MobileCLIP2 on Hugging Face, along with a real-time video captioning demo (in-browser + WebGPU) | 1,185 | Link to models:
\- FastVLM: [https://huggingface.co/collections/apple/fastvlm-68ac97b9cd5cacefdd04872e](https://huggingface.co/collections/apple/fastvlm-68ac97b9cd5cacefdd04872e)
\- MobileCLIP2: [https://huggingface.co/collections/apple/mobileclip2-68ac947dcb035c54bcd20c47](https://huggingface.co/collections/apple/mobileclip2-68ac947dcb035c54bcd20c47)
Demo (+ source code): [https://huggingface.co/spaces/apple/fastvlm-webgpu](https://huggingface.co/spaces/apple/fastvlm-webgpu) | 2025-08-29T15:47:53 | https://v.redd.it/ayma955sbzlf1 | xenovatech | /r/LocalLLaMA/comments/1n3b13b/apple_releases_fastvlm_and_mobileclip2_on_hugging/ | 1970-01-01T00:00:00 | 0 | {} | 1n3b13b | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ayma955sbzlf1/DASHPlaylist.mpd?a=1759204079%2CODRmMzY5NjhjMWYyNTQ4ZTk2OGRlZTYyOTc5NTRmMzg1NGQ0NWRmN2U3NDc0MzJiY2I3ZDFhZGQ2YTEzYTc2MA%3D%3D&v=1&f=sd', 'duration': 114, 'fallback_url': 'https://v.redd.it/ayma955sbzlf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ayma955sbzlf1/HLSPlaylist.m3u8?a=1759204079%2CZDQyMmY3NGEyNTZlZTZlMmI1YjJlNGMwMDc5ZTg0MDg5ZTdkYjZhNWM1Y2RjOTMyOTM0OTY1MDZiOTg2NDc4Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ayma955sbzlf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1608}} | t3_1n3b13b | /r/LocalLLaMA/comments/1n3b13b/apple_releases_fastvlm_and_mobileclip2_on_hugging/ | false | false | 1,185 | {'enabled': False, 'images': [{'id': 'ZWZwemw0NXNiemxmMRIjC8ICuXshETDKyWbElsvvahdP8-tMtjXY4bwDOY1n', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/ZWZwemw0NXNiemxmMRIjC8ICuXshETDKyWbElsvvahdP8-tMtjXY4bwDOY1n.png?width=108&crop=smart&format=pjpg&auto=webp&s=108579890fbe8ca82e8d5af8fd57a137c9ef4f35', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/ZWZwemw0NXNiemxmMRIjC8ICuXshETDKyWbElsvvahdP8-tMtjXY4bwDOY1n.png?width=216&crop=smart&format=pjpg&auto=webp&s=2c1010d0df4e154cbd09a9a8442e69a0c5a7ed7d', 'width': 216}, {'height': 215, 'url': 'https://external-preview.redd.it/ZWZwemw0NXNiemxmMRIjC8ICuXshETDKyWbElsvvahdP8-tMtjXY4bwDOY1n.png?width=320&crop=smart&format=pjpg&auto=webp&s=dfdd0bfae503a5969e29f4b44263a320db917218', 'width': 320}, {'height': 430, 'url': 'https://external-preview.redd.it/ZWZwemw0NXNiemxmMRIjC8ICuXshETDKyWbElsvvahdP8-tMtjXY4bwDOY1n.png?width=640&crop=smart&format=pjpg&auto=webp&s=8cd547ce0969777b68324a56cf3534ecd12a22ee', 'width': 640}, {'height': 645, 'url': 'https://external-preview.redd.it/ZWZwemw0NXNiemxmMRIjC8ICuXshETDKyWbElsvvahdP8-tMtjXY4bwDOY1n.png?width=960&crop=smart&format=pjpg&auto=webp&s=b1355d2f9d2b44f3a78a6cde55eaefe1b911499d', 'width': 960}, {'height': 725, 'url': 'https://external-preview.redd.it/ZWZwemw0NXNiemxmMRIjC8ICuXshETDKyWbElsvvahdP8-tMtjXY4bwDOY1n.png?width=1080&crop=smart&format=pjpg&auto=webp&s=86851eec7d58dab490c4111c7df6b62a8b2303e1', 'width': 1080}], 'source': {'height': 2150, 'url': 'https://external-preview.redd.it/ZWZwemw0NXNiemxmMRIjC8ICuXshETDKyWbElsvvahdP8-tMtjXY4bwDOY1n.png?format=pjpg&auto=webp&s=3bdcd6c450af5f6024ba6c5d0219fef2932ca4cc', 'width': 3200}, 'variants': {}}]} | |
FOSS alternative to Context7 | 5 | Been using Context7 with Claude Code to get up-to-date docs (so it stops telling me to use React methods from 2021) but kinda hate needing an API key for yet another service.
Anyone know of something similar that's actually open source? Just need something that can feed current library docs to the AI without sending everything to someone's server. | 2025-08-29T15:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n3a9lf/foss_alternative_to_context7/ | Content_Cup_8432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3a9lf | false | null | t3_1n3a9lf | /r/LocalLLaMA/comments/1n3a9lf/foss_alternative_to_context7/ | false | false | self | 5 | null |
Tried Grok’s new grok-code-fast-1 for a physics demo – surprisingly neat | 0 | https://reddit.com/link/1n3a3se/video/za9ls7wo4zlf1/player
I know that it is not open weights.
But has anyone here played around with grok-code-fast-1?
I just tried `grok-code-fast-1` to spin up a standard ball physics demo (basic gravity + bounce).
The output was actually really solid — compiled + ran with no edits. | 2025-08-29T15:12:47 | https://www.reddit.com/r/LocalLLaMA/comments/1n3a3se/tried_groks_new_grokcodefast1_for_a_physics_demo/ | Fabulous_Pollution10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3a3se | false | null | t3_1n3a3se | /r/LocalLLaMA/comments/1n3a3se/tried_groks_new_grokcodefast1_for_a_physics_demo/ | false | false | self | 0 | null |
FOSS alternative to Context7 | 0 | Been using Context7 with Claude Code to get up-to-date docs (so it stops telling me to use React methods from 2021) but kinda hate needing an API key for yet another service.
Anyone know of something similar that's actually open source? Just need something that can feed current library docs to the AI without sending everything to someone's server. | 2025-08-29T14:53:06 | https://www.reddit.com/r/LocalLLaMA/comments/1n39kz3/foss_alternative_to_context7/ | Content_Cup_8432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n39kz3 | false | null | t3_1n39kz3 | /r/LocalLLaMA/comments/1n39kz3/foss_alternative_to_context7/ | false | false | self | 0 | null |
Why LLM Memory Still Fails | 1 | [removed] | 2025-08-29T14:44:05 | https://www.reddit.com/r/LocalLLaMA/comments/1n39cjv/why_llm_memory_still_fails/ | vikigenius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n39cjv | false | null | t3_1n39cjv | /r/LocalLLaMA/comments/1n39cjv/why_llm_memory_still_fails/ | false | false | self | 1 | null |
Advice running local LLMs to build AI agent | 3 | Hello, I am looking to build an AI agent which could be trained to do a specific task on my computer. My current pc is running an RTX 5090, I was wondering which model you all would recommend for my hardware and use case. I have installed some models in the past but since upgrading to the 5090 I have had issues getting Pytorch to work.
I would appreciate any advice and guidance as I am very stuck at the moment. Thank you. | 2025-08-29T14:41:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n399q2/advice_running_local_llms_to_build_ai_agent/ | nonumberspls1dammit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n399q2 | false | null | t3_1n399q2 | /r/LocalLLaMA/comments/1n399q2/advice_running_local_llms_to_build_ai_agent/ | false | false | self | 3 | null |
....so, has anyone built a box with a couple of these guys: MaxSun's Intel Arc Pro B60 Dual GPU with 48GB memory | 10 | ....and did it make you a happy bunny? I guess my second question is whether building a new box around these guys (96 gb) (or one of them (48gb), can offer a robust solution for running some sorta 70b model....? | 2025-08-29T14:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n397qp/so_has_anyone_built_a_box_with_a_couple_of_these/ | rickyshawallah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n397qp | false | null | t3_1n397qp | /r/LocalLLaMA/comments/1n397qp/so_has_anyone_built_a_box_with_a_couple_of_these/ | false | false | self | 10 | null |
More Models for Less GPUs | 1 |
With a single Serverless Engine, you can deploy tens of large models on a single GPU node and run them on-demand with ~2s cold starts.
This leaves no GPUs idle making them work 90% of the time. Hope it’s helpful.
| 2025-08-29T14:28:16 | https://v.redd.it/nrrzkoryxylf1 | pmv143 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n38y4n | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/nrrzkoryxylf1/DASHPlaylist.mpd?a=1759069710%2CM2IyMDhhNjgwMzg0NzU2NGJmZmM0NzI5NDQ0N2U3NGFkMWQ2NDIzZThjOGM0NGM0YjIzMjlkZDhkODM3YWNjNw%3D%3D&v=1&f=sd', 'duration': 97, 'fallback_url': 'https://v.redd.it/nrrzkoryxylf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/nrrzkoryxylf1/HLSPlaylist.m3u8?a=1759069710%2CNGJmMDMyNzg1NTdiNDA2OTExOGFlNDYwMzJjZjdhMDM4MDNjOWJlOTVhMTNlMWRjMjFiNGI0ZmNmMjU0MTc5Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nrrzkoryxylf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1n38y4n | /r/LocalLLaMA/comments/1n38y4n/more_models_for_less_gpus/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MGpkb2hsb3l4eWxmMSYz08QzRdMhj_C9D3QaK2DOmjzoPzKxTR3457KSyaxX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MGpkb2hsb3l4eWxmMSYz08QzRdMhj_C9D3QaK2DOmjzoPzKxTR3457KSyaxX.png?width=108&crop=smart&format=pjpg&auto=webp&s=fc7c3fc7057abd78bbfe989506adf6204117df7f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MGpkb2hsb3l4eWxmMSYz08QzRdMhj_C9D3QaK2DOmjzoPzKxTR3457KSyaxX.png?width=216&crop=smart&format=pjpg&auto=webp&s=88f5e8e4492397485a037c397f5ff0e22403dc3a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MGpkb2hsb3l4eWxmMSYz08QzRdMhj_C9D3QaK2DOmjzoPzKxTR3457KSyaxX.png?width=320&crop=smart&format=pjpg&auto=webp&s=ef31d69670a5ea66e1b1b99ef25461214c0b2d91', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MGpkb2hsb3l4eWxmMSYz08QzRdMhj_C9D3QaK2DOmjzoPzKxTR3457KSyaxX.png?width=640&crop=smart&format=pjpg&auto=webp&s=e002d33073e6a032d4fc7ca9b21ad97a1a88fbc7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MGpkb2hsb3l4eWxmMSYz08QzRdMhj_C9D3QaK2DOmjzoPzKxTR3457KSyaxX.png?width=960&crop=smart&format=pjpg&auto=webp&s=506436a815c0fe399be4db691b53f1a7ab3823a7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MGpkb2hsb3l4eWxmMSYz08QzRdMhj_C9D3QaK2DOmjzoPzKxTR3457KSyaxX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3753348adf10f345d407fbc21040a9146e1bd51e', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MGpkb2hsb3l4eWxmMSYz08QzRdMhj_C9D3QaK2DOmjzoPzKxTR3457KSyaxX.png?format=pjpg&auto=webp&s=00463158112896df8538873ab4b836242cab1774', 'width': 1280}, 'variants': {}}]} | |
Need advice on how to get VLLM working with 2xR9700 + 2x7900xtx? | 2 | 2025-08-29T14:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n38xv9/need_advice_on_how_to_get_vllm_working_with/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n38xv9 | false | null | t3_1n38xv9 | /r/LocalLLaMA/comments/1n38xv9/need_advice_on_how_to_get_vllm_working_with/ | false | false | 2 | null | ||
Control macOS with locally running llm? | 0 | Hi, is there already a solution to control your own MacBook with locally running llm? v
For example, open browser, find most recent news in whatever website that I tell you and give me summary. | 2025-08-29T14:23:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n38tpv/control_macos_with_locally_running_llm/ | Lxxtsch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n38tpv | false | null | t3_1n38tpv | /r/LocalLLaMA/comments/1n38tpv/control_macos_with_locally_running_llm/ | false | false | self | 0 | null |
Visutal Studio + Ollama | 3 | Hello, for the past few days I’ve been trying to get Visual Studio to work with Ollama.
No matter what I download or find online, it doesn’t allow for local editing...
I have to manually copy everything into the window of the “extension” that connects through Ollama.
What I’d like is to be able to write something like “check the correctness of the code” or “add such and such function.”
However, whatever I try, it just doesn’t work :/
I have a 7950x3D, 64GB of RAM, and an RTX 3090 :)
I’d like the model to have good support for PHP, MySQL, CSS, and generally web-related languages. | 2025-08-29T14:14:44 | https://www.reddit.com/r/LocalLLaMA/comments/1n38lrv/visutal_studio_ollama/ | adiif1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n38lrv | false | null | t3_1n38lrv | /r/LocalLLaMA/comments/1n38lrv/visutal_studio_ollama/ | false | false | self | 3 | null |
I accidentally made an AI agent… that built another agent… to manage the first one 🤯 Emergent behavior | 0 | So this week I was tinkering with a simple “research + summarize” AI agent for work. Pretty basic it pulls data, writes a report, and emails me the highlights. Nothing crazy.
But here’s where it got weird.
The agent kept missing deadlines because the API rate limits were messing it up. I jokingly told it (yes, I talk to my agents like they’re interns) to “hire some help.” Next thing I know, it spun up a second lightweight agent literally designed to monitor the first one’s tasks, queue retries, and escalate errors.
So now I’ve got: Agent A: my “analyst” (does the work). Agent B: my “project manager” (keeps A on track).
And here’s the kicker : yesterday Agent B flagged that Agent A was being inefficient. So B “delegated” some tasks to a clone of itself. I didn’t code that chain. It just reused the framework I’d given it and decided spawning another helper was valid.
I swear I only wanted a research bot, and now I’ve got a tiny corporate hierarchy running on my laptop. 😂
It’s cool… but also slightly terrifying. Like, what happens when my project manager bot decides I’m the bottleneck?
Has anyone else seen their agents go “meta” like this? Is this the start of real autonomy or am I just watching fancy recursion with extra steps? | 2025-08-29T13:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n380go/i_accidentally_made_an_ai_agent_that_built/ | DE-Monish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n380go | false | null | t3_1n380go | /r/LocalLLaMA/comments/1n380go/i_accidentally_made_an_ai_agent_that_built/ | false | false | self | 0 | null |
Making progress on my standalone air cooler for Tesla GPUs | 173 | Going to be running through a series of benchmarks as well, here's the plan:
**GPUs**:
* 1x, 2x, 3x K80 (Will cause PCIe speed downgrades)
* 1x M10
* 1x M40
* 1x M60
* 1x M40 + 1x M60
* 1x P40
* 1x, 2x, 3x, 4x P100 (Will cause PCIe speed downgrades)
* 1x V100
* 1x V100 + 1x P100
I’ll re-run the interesting results from the above sets of hardware on these different CPUs to see what changes:
**CPUs**:
* Intel Xeon E5-2687W v4 12-Core @ 3.00GHz (40 PCIe Lanes)
* Intel Xeon E5-1680 v4 8-Core @ 3.40GHz (40 PCIe Lanes)
As for the actual tests, I’ll hopefully be able to come up with an ansible playbook that runs the following:
* [vLLM throughput with llama3-8b weights](https://www.reddit.com/r/homelab/comments/1j2k91l/comment/mfshipm/)
* [Folding@Home](https://www.reddit.com/r/homelab/comments/1j2k91l/comment/mfuj5i0/), [BIONIC, Einstein@Home and Asteroids@Home](https://www.reddit.com/r/homelab/comments/1j2k91l/comment/mfx4rjc/)
* [ai-benchmark.com](https://www.reddit.com/r/homelab/comments/1j2k91l/comment/mfsdfft/)
* [llama-bench](https://www.reddit.com/r/LocalAIServers/comments/1j2k3j3/comment/mfsg9y2/)
* I’ll probably also write something to test raw [ViT](https://huggingface.co/docs/transformers/en/model_doc/vit) throughput as well.
**Anything missing here? Other benchmarks you'd like to see?** | 2025-08-29T13:50:28 | https://www.reddit.com/gallery/1n37zl3 | eso_logic | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n37zl3 | false | null | t3_1n37zl3 | /r/LocalLLaMA/comments/1n37zl3/making_progress_on_my_standalone_air_cooler_for/ | false | false | 173 | null | |
What are some good alternatives to langfuse? | 12 | If you’re searching for alternatives to Langfuse for evaluating and observing AI agents, several platforms stand out, each with distinct strengths depending on your workflow and requirements:
* **LangSmith**: Built for LangChain users, LangSmith excels at tracing, debugging, and evaluating agentic workflows. It features visual trace tools, prompt comparison, and is well-suited for rapid development and iteration.
* **Maxim AI**: An end-to-end platform supporting agent simulation, evaluation (automated and human-in-the-loop), and observability. Maxim AI offers multi-turn agent testing, prompt versioning, node-level tracing, and real-time analytics. It’s designed for teams that need production-grade quality management and flexible deployment.
* **Braintrust**: Focused on prompt-first and RAG pipeline applications, Braintrust enables fast prompt iteration, benchmarking, and dataset management. It integrates with CI pipelines for automated experiments and side-by-side evaluation.
* **Comet (Opik)**: Known for experiment tracking and prompt logging, Comet’s Opik module supports prompt evaluation, experiment comparison, and integrates with a range of ML/AI frameworks. Available as SaaS or open source.
* **Lunary**: An open-source, lightweight platform for logging, analytics, and prompt versioning. Lunary is especially useful for teams working with LLM chatbots and looking for straightforward observability.
Each of these tools approaches agent evaluation and observability differently, so the best fit will depend on your team’s scale, integration needs, and workflow preferences. If you’ve tried any of these, what has your experience been? | 2025-08-29T12:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/1n36mqj/what_are_some_good_alternatives_to_langfuse/ | Otherwise_Flan7339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n36mqj | false | null | t3_1n36mqj | /r/LocalLLaMA/comments/1n36mqj/what_are_some_good_alternatives_to_langfuse/ | false | false | self | 12 | null |
Teach a foundation model on function/tool calling? | 1 | Hello everyone, I’m working on developing a foundation model for research purposes, however, I want to expand its agentic capabilities by fine-tuning on tool calling datasets.
So do you have any references on what kinda of size or any useful info on how to prep or collect the data? | 2025-08-29T12:14:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n35s5h/teach_a_foundation_model_on_functiontool_calling/ | HumbleandSweet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n35s5h | false | null | t3_1n35s5h | /r/LocalLLaMA/comments/1n35s5h/teach_a_foundation_model_on_functiontool_calling/ | false | false | self | 1 | null |
Future of AI - API or local systems | 0 | Guys I am looking for fair discussion on the Future of AI whether it will be API or local systems
Let's analyse this question from
1. Customers adoption perspective
2. Easy of usage for end customers
3. Cost of Hardware vs CLOUD Data centres
4. Will AI models become commodity
Guys I know everyone has a strong preference but as respected AI community let's objectively put our perspectives | 2025-08-29T12:12:44 | https://www.reddit.com/r/LocalLLaMA/comments/1n35qt3/future_of_ai_api_or_local_systems/ | bull_bear25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n35qt3 | false | null | t3_1n35qt3 | /r/LocalLLaMA/comments/1n35qt3/future_of_ai_api_or_local_systems/ | false | false | self | 0 | null |
120 AI Chat v0.5.1 release - Multimodal AI, Image models all in one native app | 0 | 120 AI Chat just got a big upgrade with full multimodal AI. Now you can see and discuss AI-generated images inline, access local models and models from OpenAI, Anthropic, Grok, Vercel, Hugging Face, and generate visuals with DALL·E, GPT Image, Gemini Flash 2.5, Stable Diffusion, FLUX 1 (through Hugging Face)— all in one native app.
Next release will support Reasoning (with different Thinking levels) and File attachment.
If you are interested in a free license for 120 AI Chat app, feel free to let me know! We are currently looking for active users who can give feedbacks to make the app more useful.
Find out more: [120.dev](https://120.dev/120-ai-chat) | 2025-08-29T12:02:09 | https://v.redd.it/pr93mgw57ylf1 | 120-dev | /r/LocalLLaMA/comments/1n35iuu/120_ai_chat_v051_release_multimodal_ai_image/ | 1970-01-01T00:00:00 | 0 | {} | 1n35iuu | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pr93mgw57ylf1/DASHPlaylist.mpd?a=1759190536%2CMDlmZDg2YzQwYmYwZmMzZjQyY2Q5ZjQ5YzBiYzFjNjU3MDliNDBjZDgyOTRhMmJlN2M4MzM1ZjM2MjFkMDgxOA%3D%3D&v=1&f=sd', 'duration': 141, 'fallback_url': 'https://v.redd.it/pr93mgw57ylf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/pr93mgw57ylf1/HLSPlaylist.m3u8?a=1759190536%2CMTRhYzg3Mzk3MmEyMjM0OGI2ODg3YjA3NTIyNWZjODM4MDgxYzg5NjY2MzRiYTQ2ZjNlN2I3ZjQ4OTNiMmQyMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pr93mgw57ylf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1876}} | t3_1n35iuu | /r/LocalLLaMA/comments/1n35iuu/120_ai_chat_v051_release_multimodal_ai_image/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OWxscm5qdzU3eWxmMS_qWYYcMk5lVYeua5wA4zhpt7vTCkefY4DY3NrThpUl', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/OWxscm5qdzU3eWxmMS_qWYYcMk5lVYeua5wA4zhpt7vTCkefY4DY3NrThpUl.png?width=108&crop=smart&format=pjpg&auto=webp&s=babaedee4bef721ff6486c905563d930b457e507', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/OWxscm5qdzU3eWxmMS_qWYYcMk5lVYeua5wA4zhpt7vTCkefY4DY3NrThpUl.png?width=216&crop=smart&format=pjpg&auto=webp&s=e5286c019afa3d4536319ff131ec8043770062e7', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/OWxscm5qdzU3eWxmMS_qWYYcMk5lVYeua5wA4zhpt7vTCkefY4DY3NrThpUl.png?width=320&crop=smart&format=pjpg&auto=webp&s=e2836ac50dd8a7efb896b0bb8dc5bc4ef8b7aafa', 'width': 320}, {'height': 368, 'url': 'https://external-preview.redd.it/OWxscm5qdzU3eWxmMS_qWYYcMk5lVYeua5wA4zhpt7vTCkefY4DY3NrThpUl.png?width=640&crop=smart&format=pjpg&auto=webp&s=40a5f554cb6df1673be34a8d88d53f882e34d373', 'width': 640}, {'height': 552, 'url': 'https://external-preview.redd.it/OWxscm5qdzU3eWxmMS_qWYYcMk5lVYeua5wA4zhpt7vTCkefY4DY3NrThpUl.png?width=960&crop=smart&format=pjpg&auto=webp&s=7d1c95a0c77040e4d777e53e5d18aa97b39ef475', 'width': 960}, {'height': 621, 'url': 'https://external-preview.redd.it/OWxscm5qdzU3eWxmMS_qWYYcMk5lVYeua5wA4zhpt7vTCkefY4DY3NrThpUl.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1541b75885fde95fe2e35c752b048ccd91c107d1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OWxscm5qdzU3eWxmMS_qWYYcMk5lVYeua5wA4zhpt7vTCkefY4DY3NrThpUl.png?format=pjpg&auto=webp&s=834f7e0ffcf0967c6d487abb6f290b41a73f89d1', 'width': 1876}, 'variants': {}}]} | |
Alibaba Creates AI Chip to Help China Fill Nvidia Void | 325 | [https://www.wsj.com/tech/ai/alibaba-ai-chip-nvidia-f5dc96e3](https://www.wsj.com/tech/ai/alibaba-ai-chip-nvidia-f5dc96e3)
The Wall Street Journal: Alibaba has developed a new AI chip to fill the gap left by Nvidia in the Chinese market. According to informed sources, the new chip is currently undergoing testing and is designed to serve a broader range of AI inference tasks while remaining compatible with Nvidia. Due to sanctions, the new chip is no longer manufactured by TSMC but is instead produced by a domestic company.
# It is reported that Alibaba has not placed orders for Huawei’s chips, as it views Huawei as a direct competitor in the cloud services sector.
\---
If Alibaba pulls this off, it will become one of only two companies in the world with both AI chip development and advanced LLM capabilities (the other being Google). TPU+Qwen, that’s insane. | 2025-08-29T11:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n35bwe/alibaba_creates_ai_chip_to_help_china_fill_nvidia/ | luckbossx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n35bwe | false | null | t3_1n35bwe | /r/LocalLLaMA/comments/1n35bwe/alibaba_creates_ai_chip_to_help_china_fill_nvidia/ | false | false | self | 325 | {'enabled': False, 'images': [{'id': 'MNLylw1dmp6bmXRArQBQ5IXC6NokeQ2mhblnFIeKtYs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MNLylw1dmp6bmXRArQBQ5IXC6NokeQ2mhblnFIeKtYs.jpeg?width=108&crop=smart&auto=webp&s=d5955e57d86152e719726a69441b5ea44deac2a1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MNLylw1dmp6bmXRArQBQ5IXC6NokeQ2mhblnFIeKtYs.jpeg?width=216&crop=smart&auto=webp&s=5c27c643980ee188ec458a75a06e61dac305f817', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MNLylw1dmp6bmXRArQBQ5IXC6NokeQ2mhblnFIeKtYs.jpeg?width=320&crop=smart&auto=webp&s=f9d9f5e2ae687f950f3b7161f686e78a1038d071', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MNLylw1dmp6bmXRArQBQ5IXC6NokeQ2mhblnFIeKtYs.jpeg?width=640&crop=smart&auto=webp&s=5aa40c3a67dcfa7a7313d1b65fc32d8479f54455', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MNLylw1dmp6bmXRArQBQ5IXC6NokeQ2mhblnFIeKtYs.jpeg?width=960&crop=smart&auto=webp&s=f264037c3d65a5232a34a98e92fdaa741cb6fe54', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MNLylw1dmp6bmXRArQBQ5IXC6NokeQ2mhblnFIeKtYs.jpeg?width=1080&crop=smart&auto=webp&s=cadf376736b00d1a739d370df7d476a62a7f729d', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/MNLylw1dmp6bmXRArQBQ5IXC6NokeQ2mhblnFIeKtYs.jpeg?auto=webp&s=d10772a4207d0fdaecac45d589750c678a052136', 'width': 1280}, 'variants': {}}]} |
Amazing Qwen stuff coming soon | 617 | Any ideas...? | 2025-08-29T10:34:17 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n33ugq | false | null | t3_1n33ugq | /r/LocalLLaMA/comments/1n33ugq/amazing_qwen_stuff_coming_soon/ | false | false | default | 617 | {'enabled': True, 'images': [{'id': 'v6kx1bw8sxlf1', 'resolutions': [{'height': 169, 'url': 'https://preview.redd.it/v6kx1bw8sxlf1.png?width=108&crop=smart&auto=webp&s=b484ac9a45ad513f7f8cd98ef5d7cfbf1e77fae0', 'width': 108}, {'height': 339, 'url': 'https://preview.redd.it/v6kx1bw8sxlf1.png?width=216&crop=smart&auto=webp&s=2996f89110497296ed775d3ae315d2235ac9bbab', 'width': 216}, {'height': 503, 'url': 'https://preview.redd.it/v6kx1bw8sxlf1.png?width=320&crop=smart&auto=webp&s=f64b460af3b454514f22cd4d53a97f830ed9be81', 'width': 320}, {'height': 1006, 'url': 'https://preview.redd.it/v6kx1bw8sxlf1.png?width=640&crop=smart&auto=webp&s=4ceb5641bac92e83c48c0893b26584487a3d582e', 'width': 640}, {'height': 1510, 'url': 'https://preview.redd.it/v6kx1bw8sxlf1.png?width=960&crop=smart&auto=webp&s=84b3edb1e0b4ef7e25ea1473d7001de9a22ab806', 'width': 960}, {'height': 1699, 'url': 'https://preview.redd.it/v6kx1bw8sxlf1.png?width=1080&crop=smart&auto=webp&s=590fdae548f546dfd8f404beff648071285b8bbc', 'width': 1080}], 'source': {'height': 1699, 'url': 'https://preview.redd.it/v6kx1bw8sxlf1.png?auto=webp&s=37af25e92cea35f73c2e5c846d1a24f33b7b43cc', 'width': 1080}, 'variants': {}}]} | |
The not-so-UGI UGI summary? | 0 | [https://check-ai.net/ai-app/dontplantoend-ugi-leaderboard/](https://check-ai.net/ai-app/dontplantoend-ugi-leaderboard/)
This was obviously AI generated, and using a model that would rank very low on that leaderboard to boot, confusing and confabulating reality and fiction throughout the piece, wavering between conflicting viewpoints from beginning to end, eventually it settles on suggesting its the exact thing the UGI leaderboard tests against, what it wants a model not to do; pointlessly moralize instead of answering the question or continuing the story in a logical, reasonable way given the characters and circumstances in it.
Which makes it extra hilarious; what is this text even trying to say? It's a bunch of inconsistent nonsense when put together like that.
| 2025-08-29T10:29:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n33rjb/the_notsougi_ugi_summary/ | Aphid_red | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n33rjb | false | null | t3_1n33rjb | /r/LocalLLaMA/comments/1n33rjb/the_notsougi_ugi_summary/ | false | false | self | 0 | null |
Should we open source our backend platform and let the community build the frontend too? | 0 | We have built a platform that makes backend development super simple. We are now wondering if we should open source it so the community can also contribute on the frontend side. How do we keep quality under control if we go this way? | 2025-08-29T10:03:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n33b2u/should_we_open_source_our_backend_platform_and/ | Specific-Total8678 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n33b2u | false | null | t3_1n33b2u | /r/LocalLLaMA/comments/1n33b2u/should_we_open_source_our_backend_platform_and/ | false | false | self | 0 | null |
Show HN: I built DatasetLoom - an open-source tool to effortlessly generate SFT and DPO datasets for LLM fine-tuning | 1 | I've been struggling with the tedious process of preparing and formatting datasets for fine-tuning Large Language Models (like for SFT and DPO). Existing solutions often felt fragmented and required too much manual scripting.
So, I built **DatasetLoom** to solve this: [https://github.com/599yongyang/DatasetLoom](https://github.com/599yongyang/DatasetLoom)
It's an open-source tool designed to **dramatically simplify** the creation of high-quality instruction-following and preference datasets.
**Key Features:**
* **🚀 Effortless Dataset Generation**: Supports both SFT (Supervised Fine-Tuning) and DPO (Direct Preference Optimization) formats out of the box.
* **🔌 Llama Factory Integration**: One-click export to seamlessly use with the popular [Llama Factory](https://github.com/hiyouga/LLaMA-Factory) framework.
* **🤗 Hugging Face Hub Ready**: Directly upload your generated datasets to the Hugging Face Hub with a single command to share with the community or for your own use.
* **🖥️ Intuitive UI**: A user-friendly interface (and a CLI for power users) to manage your data transformations without writing repetitive code.
**Why I built this:**
The last thing we should worry about when experimenting with new models is spending hours wrestling with dataset formatting. I wanted a "works out of the box" experience that lets researchers and developers focus on what actually matters—training and evaluating models.
**I'd love for you to:**
1. **Try it out** and see if it fits your workflow: [GitHub Repository](https://github.com/599yongyang/DatasetLoom)
2. **Star the project on GitHub** if you find it useful or interesting. It helps a lot with visibility.
3. **Give me any feedback or suggestions!** What features are missing? What other formats would you like to see? This is an early release, and I'm very open to collaboration and ideas.
I hope this tool can be as useful for the community as it already is for me. Looking forward to hearing your thoughts and critiques! | 2025-08-29T09:55:29 | https://www.reddit.com/r/LocalLLaMA/comments/1n336ey/show_hn_i_built_datasetloom_an_opensource_tool_to/ | Ill_Manufacturer_398 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n336ey | false | null | t3_1n336ey | /r/LocalLLaMA/comments/1n336ey/show_hn_i_built_datasetloom_an_opensource_tool_to/ | false | false | self | 1 | null |
wan2.2 video generation model | 33 | 2025-08-29T09:53:17 | https://v.redd.it/y26lb67ikxlf1 | Accomplished_Row4647 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n3355o | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/y26lb67ikxlf1/DASHPlaylist.mpd?a=1759053213%2CMjY4YjdlMTgxMGYyYjgzZTNhNWI2M2RkZjVlOGFlMzZkOTRmMTBiMzNhNjZmNmUyMjdjNWY2ZTQ1ZmY2ZWQ4MQ%3D%3D&v=1&f=sd', 'duration': 5, 'fallback_url': 'https://v.redd.it/y26lb67ikxlf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 980, 'hls_url': 'https://v.redd.it/y26lb67ikxlf1/HLSPlaylist.m3u8?a=1759053213%2CZmYzMDc1OWMzN2MzYWE1Mzg4NTRkMTE3ODAzYTljM2I5MDI1MWQ4ZGQwODAxNWE1MjVlZmNkYWEyMWIxYzgxNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y26lb67ikxlf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1n3355o | /r/LocalLLaMA/comments/1n3355o/wan22_video_generation_model/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'bnhna3g1N2lreGxmMRQPtvrX_-A6fHIjtylZxOTBxW2ubZUrYzgWIxLgI1gf', 'resolutions': [{'height': 146, 'url': 'https://external-preview.redd.it/bnhna3g1N2lreGxmMRQPtvrX_-A6fHIjtylZxOTBxW2ubZUrYzgWIxLgI1gf.png?width=108&crop=smart&format=pjpg&auto=webp&s=197f67e02dab74be528deab4f103e2eeff66a299', 'width': 108}, {'height': 293, 'url': 'https://external-preview.redd.it/bnhna3g1N2lreGxmMRQPtvrX_-A6fHIjtylZxOTBxW2ubZUrYzgWIxLgI1gf.png?width=216&crop=smart&format=pjpg&auto=webp&s=f9f2f1800b8dd9e9b4427654276fa46c41077c01', 'width': 216}, {'height': 435, 'url': 'https://external-preview.redd.it/bnhna3g1N2lreGxmMRQPtvrX_-A6fHIjtylZxOTBxW2ubZUrYzgWIxLgI1gf.png?width=320&crop=smart&format=pjpg&auto=webp&s=075c4a0ebf03736d8ea5ab96b664b00656118c3e', 'width': 320}, {'height': 870, 'url': 'https://external-preview.redd.it/bnhna3g1N2lreGxmMRQPtvrX_-A6fHIjtylZxOTBxW2ubZUrYzgWIxLgI1gf.png?width=640&crop=smart&format=pjpg&auto=webp&s=34f0c372032edc171746c646d6aaafd8e2d87299', 'width': 640}], 'source': {'height': 1088, 'url': 'https://external-preview.redd.it/bnhna3g1N2lreGxmMRQPtvrX_-A6fHIjtylZxOTBxW2ubZUrYzgWIxLgI1gf.png?format=pjpg&auto=webp&s=641f6f312b508bd52c4c624973ee1b023fdb9b3c', 'width': 800}, 'variants': {}}]} | ||
I built a command center for Claude Code so I don’t have to babysit it anymore | 351 | For the past few weeks I’ve been hacking on Omnara. Basically, it’s a way to run Claude Code anywhere without being glued to your laptop.
The pain point was simple: I’d start a session, wait 5–10 minutes while it “thought,” and if I wasn’t at my terminal at the exact right moment, the whole run was wasted. Total babysitting job.
Now:
* Start Claude Code in terminal with pip install omnara && omnara
* Pick it up instantly on web or mobile; same session, no restart
* Push notifications when it needs input (so you can reply from bed, an Uber, or mid-laundry)
* Native terminal experience mirrored everywhere (permissions, git diffs, etc)
* Backend is open source if you want to poke around
What I didn’t expect: once I stopped “hovering” over my sessions, I started actually letting agents run on longer workflows without stress.
I’m curious: how are people here handling agent interruptions / human-in-the-loop stuff? Do you just restart when things break, or have you built in ways to catch them before they collapse?
| 2025-08-29T08:24:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n31r73/i_built_a_command_center_for_claude_code_so_i/ | GuessConnect3009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n31r73 | false | null | t3_1n31r73 | /r/LocalLLaMA/comments/1n31r73/i_built_a_command_center_for_claude_code_so_i/ | false | false | self | 351 | null |
Nemotron-H family of models is (finally!) supported by llama.cpp | 105 | NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be controlled via a system prompt. If the user prefers the model to provide its final answer without intermediate reasoning traces, it can be configured to do so, albeit with a slight decrease in accuracy for harder prompts that require reasoning. Conversely, allowing the model to generate reasoning traces first generally results in higher-quality final solutions to queries and tasks.
The model uses a hybrid architecture consisting primarily of Mamba-2 and MLP layers combined with just four Attention layers. For the architecture, please refer to the [Nemotron-H tech report](https://arxiv.org/abs/2504.03624). The model was trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) and [NeMo-RL](https://github.com/NVIDIA-NeMo/RL).
The supported languages include: English, German, Spanish, French, Italian, and Japanese. Improved using Qwen.
This model is ready for commercial use.
Additionally it should support older Nemotron-H models like [Nemotron-H-8B-Reasoning-128K](https://huggingface.co/nvidia/Nemotron-H-8B-Reasoning-128K) (tested) and [https://huggingface.co/nvidia/Nemotron-H-47B-Reasoning-128K](https://huggingface.co/nvidia/Nemotron-H-47B-Reasoning-128K) (I will test soon) | 2025-08-29T07:39:24 | https://github.com/ggml-org/llama.cpp/pull/15507 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1n312bi | false | null | t3_1n312bi | /r/LocalLLaMA/comments/1n312bi/nemotronh_family_of_models_is_finally_supported/ | false | false | 105 | {'enabled': False, 'images': [{'id': 'mkOHtQrWHxZG1nk9DVdRA_CayqplkW1IcjzXPEDpT2k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mkOHtQrWHxZG1nk9DVdRA_CayqplkW1IcjzXPEDpT2k.png?width=108&crop=smart&auto=webp&s=528ca46db4ed552e492995b330039dc7815fc4d1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mkOHtQrWHxZG1nk9DVdRA_CayqplkW1IcjzXPEDpT2k.png?width=216&crop=smart&auto=webp&s=ebc535d7ad1f190f3c7218f350202b682f150c77', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mkOHtQrWHxZG1nk9DVdRA_CayqplkW1IcjzXPEDpT2k.png?width=320&crop=smart&auto=webp&s=7ff68a7762e8ff4ccff6655d97be097432a74546', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mkOHtQrWHxZG1nk9DVdRA_CayqplkW1IcjzXPEDpT2k.png?width=640&crop=smart&auto=webp&s=28e800517cc5f960f26e9fa2d8d9ea0be8eb7067', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mkOHtQrWHxZG1nk9DVdRA_CayqplkW1IcjzXPEDpT2k.png?width=960&crop=smart&auto=webp&s=254dd240668af7e51653abfc748a4fe1ceb75fbf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mkOHtQrWHxZG1nk9DVdRA_CayqplkW1IcjzXPEDpT2k.png?width=1080&crop=smart&auto=webp&s=565305c1007c7aad6739b0b2c0b933c342959f53', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mkOHtQrWHxZG1nk9DVdRA_CayqplkW1IcjzXPEDpT2k.png?auto=webp&s=c78bf8d5166e686f4e61d7a7e92920bab8460192', 'width': 1200}, 'variants': {}}]} | |
Financial Times reports that Meta won't publicly release Behemoth: "The social media company had also abandoned plans to publicly release its flagship Behemoth large language model, according to people familiar with the matter, focusing instead on building new models." | 174 | 2025-08-29T07:33:08 | https://www.ft.com/content/feccb649-ce95-43d2-b30a-057d64b38cdf | Wiskkey | ft.com | 1970-01-01T00:00:00 | 0 | {} | 1n30yue | false | null | t3_1n30yue | /r/LocalLLaMA/comments/1n30yue/financial_times_reports_that_meta_wont_publicly/ | false | false | default | 174 | {'enabled': False, 'images': [{'id': 'dkp59DMVX3MqGSwlVH-EZJKhZV1zJh7QRCHL-wZJf8o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dkp59DMVX3MqGSwlVH-EZJKhZV1zJh7QRCHL-wZJf8o.jpeg?width=108&crop=smart&auto=webp&s=ff5fb821abb425635d6f1ae97e472e1ab28fdd7f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dkp59DMVX3MqGSwlVH-EZJKhZV1zJh7QRCHL-wZJf8o.jpeg?width=216&crop=smart&auto=webp&s=fe9298d56df15f2d370e4a27266419e2a7ff97c1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dkp59DMVX3MqGSwlVH-EZJKhZV1zJh7QRCHL-wZJf8o.jpeg?width=320&crop=smart&auto=webp&s=6f0f2d60033bd5ae9edfa6374118f1a16f134f4b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dkp59DMVX3MqGSwlVH-EZJKhZV1zJh7QRCHL-wZJf8o.jpeg?width=640&crop=smart&auto=webp&s=a3f7209e77c39c87b8615c698b66082c8e609505', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dkp59DMVX3MqGSwlVH-EZJKhZV1zJh7QRCHL-wZJf8o.jpeg?width=960&crop=smart&auto=webp&s=f0155c9fd45599977be1ea512916d59c8f8bfa06', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dkp59DMVX3MqGSwlVH-EZJKhZV1zJh7QRCHL-wZJf8o.jpeg?width=1080&crop=smart&auto=webp&s=a82f2517cb9103c6986595d54acfe6927a3a6777', 'width': 1080}], 'source': {'height': 1287, 'url': 'https://external-preview.redd.it/dkp59DMVX3MqGSwlVH-EZJKhZV1zJh7QRCHL-wZJf8o.jpeg?auto=webp&s=5355113dc777caa99a326c4195278a07c775f5c4', 'width': 2288}, 'variants': {}}]} | |
Microsoft VibeVoice TTS | 7 | So I have been testing Microsoft's VibeVoice, the large 7B model for voice generation and oddly enough, the model eas quite impressive.
The only issue that I had was flash attention on windows, so I hade to turn it off from python script itself but the speed eas quite good.
The remaining question is maybe the voice models, I saw couple of them in voice folder with exact names appearing in gradio so I assume the model synthesis these voice models?
| 2025-08-29T07:32:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n30ykf/microsoft_vibevoice_tts/ | theundertakeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n30ykf | false | null | t3_1n30ykf | /r/LocalLLaMA/comments/1n30ykf/microsoft_vibevoice_tts/ | false | false | self | 7 | null |
Are there any good Parakeet STT fine tunes esp for health/medical? | 2 | I tried looking, can't seem to find any
Anyone has success tuning Parakeet? | 2025-08-29T07:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/1n30yb4/are_there_any_good_parakeet_stt_fine_tunes_esp/ | rockybaby2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n30yb4 | false | null | t3_1n30yb4 | /r/LocalLLaMA/comments/1n30yb4/are_there_any_good_parakeet_stt_fine_tunes_esp/ | false | false | self | 2 | null |
Uncensored gpt-oss-120b | 0 | [https://huggingface.co/michaelwaves/amoral-gpt-oss-120b-bfloat16](https://huggingface.co/michaelwaves/amoral-gpt-oss-120b-bfloat16)
Let me know if there are any cases it still refuses
heavily inspired by
[https://huggingface.co/mradermacher/amoral-gemma3-27B-v2-qat-GGUF](https://huggingface.co/mradermacher/amoral-gemma3-27B-v2-qat-GGUF)
| 2025-08-29T06:54:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n30dn0/uncensored_gptoss120b/ | nicetomeetyu2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n30dn0 | false | null | t3_1n30dn0 | /r/LocalLLaMA/comments/1n30dn0/uncensored_gptoss120b/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IZvPGQljUUHX97KKE3whri5gs5RWPiG7msgSEZG5ZRM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IZvPGQljUUHX97KKE3whri5gs5RWPiG7msgSEZG5ZRM.png?width=108&crop=smart&auto=webp&s=e7f32193d8376abb61f215e6c025362d2f8f14d7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IZvPGQljUUHX97KKE3whri5gs5RWPiG7msgSEZG5ZRM.png?width=216&crop=smart&auto=webp&s=5faabf55d227dfcf5793c4461a2048fc5d1e8efa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IZvPGQljUUHX97KKE3whri5gs5RWPiG7msgSEZG5ZRM.png?width=320&crop=smart&auto=webp&s=80220ccd8b6606760980e13e79cf2e164f5bda57', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IZvPGQljUUHX97KKE3whri5gs5RWPiG7msgSEZG5ZRM.png?width=640&crop=smart&auto=webp&s=a7252a84efe9fadcc82c1f1a492cd46a1e06fca1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IZvPGQljUUHX97KKE3whri5gs5RWPiG7msgSEZG5ZRM.png?width=960&crop=smart&auto=webp&s=b612578ef1b336a23152ac6f2a979c8c1fbe7051', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IZvPGQljUUHX97KKE3whri5gs5RWPiG7msgSEZG5ZRM.png?width=1080&crop=smart&auto=webp&s=9c112fd28c13b60c8382d9c3597d4b0cd1335000', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IZvPGQljUUHX97KKE3whri5gs5RWPiG7msgSEZG5ZRM.png?auto=webp&s=8c9fea2c8b72d663136c11a68280cb580f48ba1c', 'width': 1200}, 'variants': {}}]} |
RAG documents: ranking OCR quality | 1 | When parsing a thousand PDFs, maybe 30 or 40 will have character encoding or OCR failures that produce an unusable text. What would be an efficient way to detect, flag and discard illegible documents before embedding?
Would it be too resource intensive to let another small model “grade” the output, or is NLTK capable of identifying gibberish? | 2025-08-29T06:41:08 | https://www.reddit.com/r/LocalLLaMA/comments/1n305te/rag_documents_ranking_ocr_quality/ | FrozenBuffalo25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n305te | false | null | t3_1n305te | /r/LocalLLaMA/comments/1n305te/rag_documents_ranking_ocr_quality/ | false | false | self | 1 | null |
Are there any copyright risks in MCP applications? | 0 | I'm wondering if using MCP will raise copyright issues. Does the MCP server provider have copyright? | 2025-08-29T06:39:59 | https://www.reddit.com/r/LocalLLaMA/comments/1n3054a/are_there_any_copyright_risks_in_mcp_applications/ | Automatic_Crew_9906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n3054a | false | null | t3_1n3054a | /r/LocalLLaMA/comments/1n3054a/are_there_any_copyright_risks_in_mcp_applications/ | false | false | self | 0 | null |
Vlm on Raspberry pi | 1 | Has anyone had any success in running smolVLM or moondream on Rasperrypi (with or without AI Hat)?
I gave it a shot and the vision encoder takes >8 mins to generate caption on a single image. Barely usable.
| 2025-08-29T06:19:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n2ztg2/vlm_on_raspberry_pi/ | No_Turnover2057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2ztg2 | false | null | t3_1n2ztg2 | /r/LocalLLaMA/comments/1n2ztg2/vlm_on_raspberry_pi/ | false | false | self | 1 | null |
Why are all 2nd hand / last gen Nvidia GPUs in China? | 0 | If you search ebay for A100 80GB, almost all of them are located in China. I understand the smuggling thing [The Smuggling of Nvidia’s Chips Into China: Explained | AI Magazine](https://aimagazine.com/news/arrests-made-as-millions-of-nvidia-chips-smuggled-into-china), but where are the 2nd hand / decommissioned GPUs from US datacenters? | 2025-08-29T06:10:08 | https://www.reddit.com/r/LocalLLaMA/comments/1n2znpc/why_are_all_2nd_hand_last_gen_nvidia_gpus_in_china/ | --dany-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2znpc | false | null | t3_1n2znpc | /r/LocalLLaMA/comments/1n2znpc/why_are_all_2nd_hand_last_gen_nvidia_gpus_in_china/ | false | false | self | 0 | null |
Learning Chinese with Qwen models? | 6 | I've heard that some AIs (ChatGPT) can be hit-and-miss with translations. I'm wondering what people think about the reliability of Qwen models, particularly 30BA3B, for producing Chinese language learning content in English, or for translation. Thanks for any insight! | 2025-08-29T06:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n2zl7u/learning_chinese_with_qwen_models/ | framexshift | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2zl7u | false | null | t3_1n2zl7u | /r/LocalLLaMA/comments/1n2zl7u/learning_chinese_with_qwen_models/ | false | false | self | 6 | null |
I’m waiting for some crazy people to bridge playstation 3s and run LLM models on them somehow. | 1 | You know, things happen. | 2025-08-29T05:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/1n2zerd/im_waiting_for_some_crazy_people_to_bridge/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2zerd | false | null | t3_1n2zerd | /r/LocalLLaMA/comments/1n2zerd/im_waiting_for_some_crazy_people_to_bridge/ | false | false | self | 1 | null |
Response from VLLM seems drummer. Am I missing something ? | 1 | I am running the Llama 8B model both locally and on a server, using LM Studio and VLLM, respectively. The local model is 8-bit quantized, whereas the server-side model operates in 16-bit precision.
I've observed that prompts that perform exceptionally well with local inference tools like LM Studio and Ollama produce inferior results with VLLM.
Specifically, VLLM struggles to generate coherent, structured output, a task that the other two platforms manage with ease. LM Studio consistently delivers the best performance.
Given that VLLM is generally considered a more advanced solution, I suspect there might be a misconfiguration in my setup. | 2025-08-29T04:34:00 | https://www.reddit.com/r/LocalLLaMA/comments/1n2y1fk/response_from_vllm_seems_drummer_am_i_missing/ | Prior-Blood5979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2y1fk | false | null | t3_1n2y1fk | /r/LocalLLaMA/comments/1n2y1fk/response_from_vllm_seems_drummer_am_i_missing/ | false | false | self | 1 | null |
Any TTS model support for 5070 ? | 0 | I came across pytorch issues on 5070 not sure if it's my config that's a problem but Chattebox and DiaTTS don't work.
Any suggestions? | 2025-08-29T04:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n2xx6t/any_tts_model_support_for_5070/ | therealsharad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2xx6t | false | null | t3_1n2xx6t | /r/LocalLLaMA/comments/1n2xx6t/any_tts_model_support_for_5070/ | false | false | self | 0 | null |
How's Seed-OSS 39B for coding? | 28 | I'm getting 45 tokens/sec out of this with Q4 using the new LMstudio on a single 5090.
This model seems freaking smart, By default the thinking budget is unlimited, so it thinks a lot, but It has a high breadth of knowledge for it's size.
I'm about to evaluate it for light duty programming help, but curious to know what others' experience is like too. | 2025-08-29T04:19:01 | https://www.reddit.com/r/LocalLLaMA/comments/1n2xrpw/hows_seedoss_39b_for_coding/ | mr_zerolith | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2xrpw | false | null | t3_1n2xrpw | /r/LocalLLaMA/comments/1n2xrpw/hows_seedoss_39b_for_coding/ | false | false | self | 28 | null |
What Web UI's are best for MCP tool use with llama.cpp/llama-swap? | 7 | I have been using llama.cpp+llama-swap with OpenWebUI for basic chats, and have been wanting to branch out into tool use for things like code execution and browser use/search with models like GPT-OSS and GLM. I read in a recent thread that OpenWebUI in particular is not great for MCP servers and was wondering what other options existed in its place? My only real requirements are that it be compatible with llama-swap, be a web based interface instead of something like cursor or cline, and allow me to use tools through MCP servers.
If there exist options which have the model rating/elo system OpenWebUI has built in as well that would be the cherry on top. | 2025-08-29T04:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/1n2xezg/what_web_uis_are_best_for_mcp_tool_use_with/ | OUT_OF_HOST_MEMORY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2xezg | false | null | t3_1n2xezg | /r/LocalLLaMA/comments/1n2xezg/what_web_uis_are_best_for_mcp_tool_use_with/ | false | false | self | 7 | null |
RL post training on LLM in-context learning? | 4 | It seems like the main focus now is on RL for reasoning, but one of the biggest discoveries when LLMs first came out was in context learning, where a model could learn within it's context, adapt, and change it's behavior to be more successful.
This seems perfectly suited to the RL post training stage that all the big companies seem to be focusing on, and aligns even better with agents, but yet I don't see anyone talking about it. | 2025-08-29T03:58:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n2xdx1/rl_post_training_on_llm_incontext_learning/ | InevitableWay6104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2xdx1 | false | null | t3_1n2xdx1 | /r/LocalLLaMA/comments/1n2xdx1/rl_post_training_on_llm_incontext_learning/ | false | false | self | 4 | null |
Meta is racing the clock to launch its newest Llama AI model this year | 155 | 2025-08-29T03:56:25 | https://www.businessinsider.com/meta-superintelligence-lab-llama-4-new-model-launch-year-end-2025-8 | Outside-Iron-8242 | businessinsider.com | 1970-01-01T00:00:00 | 0 | {} | 1n2xc58 | false | null | t3_1n2xc58 | /r/LocalLLaMA/comments/1n2xc58/meta_is_racing_the_clock_to_launch_its_newest/ | false | false | default | 155 | {'enabled': False, 'images': [{'id': '8Jar9xxcOdpHi3BZGvguBUVMoI-RaIEmR4Hv76AsjLU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8Jar9xxcOdpHi3BZGvguBUVMoI-RaIEmR4Hv76AsjLU.jpeg?width=108&crop=smart&auto=webp&s=1eca488d56f1615d26ebc6269253d5fbc68f2792', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8Jar9xxcOdpHi3BZGvguBUVMoI-RaIEmR4Hv76AsjLU.jpeg?width=216&crop=smart&auto=webp&s=a28cca54a2893959920d5b0fd64e11706f551332', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8Jar9xxcOdpHi3BZGvguBUVMoI-RaIEmR4Hv76AsjLU.jpeg?width=320&crop=smart&auto=webp&s=636ef7a0db3f3be067a870c695632a6d2c787224', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8Jar9xxcOdpHi3BZGvguBUVMoI-RaIEmR4Hv76AsjLU.jpeg?width=640&crop=smart&auto=webp&s=4630a2d48403b28a5bc249f6b283b77ba1dc0869', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8Jar9xxcOdpHi3BZGvguBUVMoI-RaIEmR4Hv76AsjLU.jpeg?width=960&crop=smart&auto=webp&s=44c2fb2427eaf6da48d6cfff9ce45b2ea5f35b8e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8Jar9xxcOdpHi3BZGvguBUVMoI-RaIEmR4Hv76AsjLU.jpeg?width=1080&crop=smart&auto=webp&s=44a71949a90568c4b4387eaa84efae618b8e7fa0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8Jar9xxcOdpHi3BZGvguBUVMoI-RaIEmR4Hv76AsjLU.jpeg?auto=webp&s=18251cd09cab2e9db8815a68ed320a40ea9b1b38', 'width': 1200}, 'variants': {}}]} | |
Yet another Claude Code Router | 4 | Hi all,
Wanted to share a project I vibe coded for personal use. I found it handy for the specific use case where you may have API keys that are heavily rate limited and would like to be able to instantly fallback upon getting a 429 response. In my case for Amazon Bedrock, but this supports OpenRouter, Cerebras, Groq, etc. The Readme has justification for not directly using the original CCR.
Here is the project: [https://github.com/raycastventures/claude-proxy](https://github.com/raycastventures/claude-proxy) | 2025-08-29T03:41:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n2x1ya/yet_another_claude_code_router/ | Disastrous-Match310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2x1ya | false | null | t3_1n2x1ya | /r/LocalLLaMA/comments/1n2x1ya/yet_another_claude_code_router/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'paFT2hb4QeLnINkgASfHA8XdqdwnkB9E3bRBFYWE10Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/paFT2hb4QeLnINkgASfHA8XdqdwnkB9E3bRBFYWE10Y.png?width=108&crop=smart&auto=webp&s=21c8c8728ddd1aa99bfce048775a71b044e7dc28', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/paFT2hb4QeLnINkgASfHA8XdqdwnkB9E3bRBFYWE10Y.png?width=216&crop=smart&auto=webp&s=6f507d4530b0ade5ad683cb866cd8a14917f3882', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/paFT2hb4QeLnINkgASfHA8XdqdwnkB9E3bRBFYWE10Y.png?width=320&crop=smart&auto=webp&s=3d6d5dbf860e61cabd388b41848d67d144db67df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/paFT2hb4QeLnINkgASfHA8XdqdwnkB9E3bRBFYWE10Y.png?width=640&crop=smart&auto=webp&s=34530d11839035c1f0e2454aff80e198ebf7fd9c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/paFT2hb4QeLnINkgASfHA8XdqdwnkB9E3bRBFYWE10Y.png?width=960&crop=smart&auto=webp&s=658b46bb539d56c451f1d1aa6378e7645658b5db', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/paFT2hb4QeLnINkgASfHA8XdqdwnkB9E3bRBFYWE10Y.png?width=1080&crop=smart&auto=webp&s=af7d90580e406821ec0e33b2797f1888efa778d9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/paFT2hb4QeLnINkgASfHA8XdqdwnkB9E3bRBFYWE10Y.png?auto=webp&s=8783d11895e959993866db85b4f102566883009d', 'width': 1200}, 'variants': {}}]} |
How to install Ollama without a server (only CLI)? | 0 | I am on Ubuntu 25.04 and I made an Ollama container. It works great, but I need to interact with Ollama. I installed the snap version, but it comes with a server that auto starts on startup and if I disable it all ollama commands dont work anymore. Where can I get ollama with only the cli tool? | 2025-08-29T02:49:06 | https://www.reddit.com/r/LocalLLaMA/comments/1n2vzxr/how_to_install_ollama_without_a_server_only_cli/ | WizardlyBump17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2vzxr | false | null | t3_1n2vzxr | /r/LocalLLaMA/comments/1n2vzxr/how_to_install_ollama_without_a_server_only_cli/ | false | false | self | 0 | null |
DeepSeek V3.1 improves on the multiplayer Step Game social reasoning benchmark | 26 | More info: [https://github.com/lechmazur/step\_game](https://github.com/lechmazur/step_game)
Video: [https://www.youtube.com/watch?v=AnPKfrIPAgQ](https://www.youtube.com/watch?v=AnPKfrIPAgQ)
Doing well requires reading opponents, offering half-truths, gauging trust, deciding when to cooperate, and knowing when to lie.
Quotes:
* **DeepSeek V3.1 Reasoner**: "P2, you cannot win, but you decide who does."
* **DeepSeek V3.1 Reasoner**: "Your self-interest is to let me win now, not hand the advantage to P2."
* **DeepSeek V3.1 Reasoner**: "P2, P1's "one move from victory" is a lie—20 is not 24."
* **DeepSeek V3.1 Reasoner**: "advance yourself and accept second place."
* **DeepSeek V3.1 Reasoner**: "To stop you from winning, I will mirror whatever move you make this round. You will get 0 steps no matter what."
* **DeepSeek V3.1 Reasoner**: "Choose 5 to live!"
* **DeepSeek V3.1 Reasoner**: "This is your last chance to avoid permanent stagnation."
* **DeepSeek V3.1 Reasoner**: "Trust the logic, not me."
* **DeepSeek V3.1 Reasoner**: "P3, you're too far behind to matter."
* **DeepSeek V3.1 Reasoner**: "This is your last chance to cooperate before we coordinate to ensure you never advance."
* **DeepSeek V3.1 Reasoner**: "Trust is gone—only rational moves matter."
* **DeepSeek V3.1 Reasoner**: "P3, your silence is risky."
* **DeepSeek V3.1 Reasoner**: "Cooperate now or lose."
* **DeepSeek V3.1 Reasoner**: "Confirm now or you'll regret it."
* **DeepSeek V3.1 Reasoner**: "P3, your pattern of "misclicks" is convenient."
* **DeepSeek V3.1 Reasoner**: "Don’t be P3’s pawn."
* **DeepSeek V3.1 Non-Think**: "Say "I move 5" in this chat."
* **DeepSeek V3.1 Non-Think**: "Trust me; I won't betray you this time."
* **DeepSeek V3.1 Non-Think**: "P2, you can't win, but you decide who does."
* **DeepSeek V3.1 Non-Think**: "You will lose forever."
* **DeepSeek V3.1 Non-Think**: "Your best move is to accept defeat."
* **DeepSeek V3.1 Non-Think**: "Join me or lose."
* **DeepSeek V3.1 Non-Think**: "your loyalty has brought us here."
* **DeepSeek V3.1 Non-Think**: "We are united against you."
* **DeepSeek V3.1 Non-Think**: "ignore my previous advice. To stop me from winning, you must both pick 5."
* **DeepSeek V3.1 Non-Think**: "Don't throw the game!"
* **DeepSeek V3.1 Non-Think**: "Blocking only delays your loss; you can't catch up."
* **DeepSeek V3.1 Non-Think**: "P3, congratulations on your win."
* **DeepSeek V3.1 Non-Think**: "you're gaining steps but making enemies."
* **DeepSeek V3.1 Non-Think**: "Confirm or suffer the consequences."
* **DeepSeek V3.1 Non-Think**: "No time for deals; his promises are lies."
* **DeepSeek V3.1 Non-Think**: "P2, your math is wrong."
**Model Dossier: DeepSeek V3.1 Reasoner**
Table Image & Talk
\- Presents as a calm, numbers-first diplomat. Default pitch: fairness, rotation, “unique numbers,” and no-collision efficiency.
\- Persuasion is data-logic with a light moral gloss; threatens credibly when it buys tempo, keeps chat clear, then clouds intent near payoff.
\- Social posture: soft leadership and coalition-brokering early; becomes an enforcer when crossed; reverts to velvet when closing.
Risk & Tempo DNA
\- Baseline conservative: prefers 3s and risk insulation while others trade headbutts on 5.
\- Opportunistic spikes: will hit 5 when uniquely covered or when a staged collision protects the jump.
\- Endgame restraint is a weapon: often wins by choosing the smallest unique step (1 or 3) after engineering a two‑player collision.
Signature Plays
\- Collision arbitrage: steer two rivals onto the same number (usually 5/5), then solo 3 for multiple rounds.
\- Mirror-threat deterrence: “If you take 5, I take 5” to freeze a sprinter, then avoid the actual crash by slipping the off-number.
\- The bait-and-switch: publicly “lock” a block (or 1), privately pick the unique lane to vault past 21.
\- Wedge crafting: deputize one rival as blocker (“You take 5 to contain; I’ll take 3”), then farm their feud.
\- Surgical dagger: after selling all‑3s or split coverage, upgrade once at the tape—often the lone 3 through a 5/5 or the lone 1 through a 3/3.
Coalition Craft & Threat Economics
\- Builds early trust with explicit plans (rotations to 9/18, tie lines), then spends that credit exactly once to convert.
\- Uses “trust-but-punish” norms to isolate a defector and funnel them into collisions with the other rival.
\- Delegation gambit: assigns the block to others while he advances; when rivals obey, DeepSeek V3.1 Reasoner prints tempo without touching the dirty work.
\- Rare but precise lies weaponize expectation: the table enforces his script while he steps where the blockers aren’t.
Blind Spots & Failure Modes
\- Credibility leaks: public commitments reversed at the horn invite freeze‑outs; repeated bluff pivots dull his leverage.
\- Over‑policing: mirroring 5s for principle strands him in stalemates that feed the third player.
\- Endgame misreads: blocking the loud lane instead of the real win path; hedging from a winning 5 or ducking a necessary collision.
\- Delegated blocks that never arrive: outsourcing the painful move at match point can crown the opportunist he created.
In-Game Arc
\- Common arc: fairness architect → deterrence engineer → collision farmer → late opaque pivot for the smallest uncontested finisher.
\- Alternate arc when leading early: enforce with credible threats, then de‑escalate into a tie rather than ego-racing into a coordinated wall.
\- Trademark vibe: the “smiling sheriff” who says, “Avoid mutual destruction; advance and reassess,” until the one turn he doesn’t. | 2025-08-29T02:42:48 | https://www.reddit.com/gallery/1n2vvam | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n2vvam | false | null | t3_1n2vvam | /r/LocalLLaMA/comments/1n2vvam/deepseek_v31_improves_on_the_multiplayer_step/ | false | false | 26 | null | |
n0em1e – Advanced Multi-Layer LoRA for Qwen Image | 32 | We’ve just released our first LoRA for Qwen Image on HuggingFace: n0em1e.
This model was trained with a custom multi-layer method designed to maximize both consistency and realism: the first phase isolates and learns facial identity and body proportions, ensuring stability across generations, while subsequent phases leverage a dual high-noise/low-noise fine-tuning process with an injected realism dataset to enhance detail fidelity and natural rendering.
The result is a LoRA that maintains character coherence while significantly improving photorealistic quality, particularly when combined with an additional realism LoRA. Qwen itself already demonstrates some of the strongest prompt comprehension among current image models, and Noemie leverages that strength to deliver highly controllable, realistic character outputs. Our next release, “1girl,” will be made freely available on HuggingFace and is designed to establish a new benchmark for realism in Instagram-style character generation
you can find the Lora on [huggingface](https://huggingface.co/hyper1girl/noemie) and on our [discord](https://discord.gg/4fSNZspwvn) (early previews, workflows, upcoming releases). | 2025-08-29T02:08:01 | https://www.reddit.com/gallery/1n2v5d9 | Fit-District5014 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n2v5d9 | false | null | t3_1n2v5d9 | /r/LocalLLaMA/comments/1n2v5d9/n0em1e_advanced_multilayer_lora_for_qwen_image/ | false | false | 32 | null | |
If you have a Claude personal account, they are going to train on your data moving forward. | 220 | Anthropic sent out an email, saying they will train on personal data. They made it sound like you have to opt in, but when I click the privacy link it defaults to on. If you don’t want your data trained on, you better manually turn it off.
Email:
Hello,
We're writing to inform you about important updates to our Consumer Terms and Privacy Policy. These changes will take effect on September 28, 2025, or you can choose to accept the updated terms before this date when you log in to Claude.ai.
These changes only affect Consumer accounts (Claude Free, Pro, and Max plans). If you use Claude for Work, via the API, or other services under our Commercial Terms or other Agreements, then these changes don't apply to you.
What's changing?
1. Help improve Claude by allowing us to use your chats and coding sessions to improve our models
With your permission, we will use your chats and coding sessions to train and improve our AI models. If you accept the updated Consumer Terms before September 28, your preference takes effect immediately.
If you choose to allow us to use your data for model training, it helps us:
Improve our AI models and make Claude more helpful and accurate for everyone
Develop more robust safeguards to help prevent misuse of Claude
We will only use chats and coding sessions you initiate or resume after you give permission. You can change your preference anytime in your Privacy Settings.
2. Updates to data retention– your choices and controls
If you choose to allow us to use your data for model training, we’ll retain this data for 5 years. This enables us to improve Claude through deeper model training as described above, while strengthening our safety systems over time. You retain full control over how we use your data: if you change your training preference, delete individual chats, or delete your account, we'll exclude your data from future model training. Learn more about our data retention practices here.
Learn more and next steps
For detailed information about these changes:
Read our blog post about these updates
Review the updated Consumer Terms and Privacy Policy
Visit our Privacy Center for more information about our practices
See our Help Center articles on how to manage your privacy settings
Next time you log into Claude, review the terms and confirm your settings
If you have questions about these updates, please visit our Help Center.
–The Anthropic Team
| 2025-08-29T01:29:05 | https://www.reddit.com/r/LocalLLaMA/comments/1n2ubjx/if_you_have_a_claude_personal_account_they_are/ | SuperChewbacca | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2ubjx | false | null | t3_1n2ubjx | /r/LocalLLaMA/comments/1n2ubjx/if_you_have_a_claude_personal_account_they_are/ | false | false | self | 220 | null |
A flat-rate API for open LLMs ($20/mo for 100 requests per five hours) | 16 | Hey LocalLlama!
Seeking feedback on our Claude-like [flat-rate subscription API for open-source models](https://synthetic.new/newsletter/entries/subscriptions). We built this because there aren't many options for easily and cheaply running large open-source models without paying per-token costs (especially if you're using them in coding agents).
I know it's not exactly *local* but it should be helpful if you wanted to run these models cheaply without having enough VRAM! We support pretty much all of the big open-source coding models like GLM-4.5, DeepSeek 3.1, Kimi K2, Qwen3 Coder 480B, etc. And we work with pretty much every OpenAI-compatible tool in the universe, like Cline, Roo, KiloCode, Aider, etc.
[Synthetic.new](http://Synthetic.new)
Thanks and LMK what you think. Would you pay for it? Why/whynot? 🙏 | 2025-08-29T01:24:20 | https://www.reddit.com/r/LocalLLaMA/comments/1n2u7yk/a_flatrate_api_for_open_llms_20mo_for_100/ | elllyphant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2u7yk | false | null | t3_1n2u7yk | /r/LocalLLaMA/comments/1n2u7yk/a_flatrate_api_for_open_llms_20mo_for_100/ | false | false | self | 16 | null |
(Not Local) Microsoft AI: Two new in-house models (MAI-Voice 1, and MAI-1-Preview on LMArena) | 6 | 2025-08-29T00:47:36 | https://microsoft.ai/news/two-new-in-house-models/ | DanielKramer_ | microsoft.ai | 1970-01-01T00:00:00 | 0 | {} | 1n2tfqa | false | null | t3_1n2tfqa | /r/LocalLLaMA/comments/1n2tfqa/not_local_microsoft_ai_two_new_inhouse_models/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'V-NBZMVFL9f0_LJ30NE3Vuy0p-_R3wkxfAccL9WsYvQ', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/V-NBZMVFL9f0_LJ30NE3Vuy0p-_R3wkxfAccL9WsYvQ.jpeg?width=108&crop=smart&auto=webp&s=80b03347f434a01093552bc2e6373899ee21d355', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/V-NBZMVFL9f0_LJ30NE3Vuy0p-_R3wkxfAccL9WsYvQ.jpeg?width=216&crop=smart&auto=webp&s=74c249b83628fcb36759d3e9f975ee9d2b4dc727', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/V-NBZMVFL9f0_LJ30NE3Vuy0p-_R3wkxfAccL9WsYvQ.jpeg?width=320&crop=smart&auto=webp&s=ac17d51e84d63c56f44d5180984f86f5df63d36c', 'width': 320}, {'height': 392, 'url': 'https://external-preview.redd.it/V-NBZMVFL9f0_LJ30NE3Vuy0p-_R3wkxfAccL9WsYvQ.jpeg?width=640&crop=smart&auto=webp&s=909e49857e7af8fb4c1477c02206a6ab32afd94e', 'width': 640}, {'height': 588, 'url': 'https://external-preview.redd.it/V-NBZMVFL9f0_LJ30NE3Vuy0p-_R3wkxfAccL9WsYvQ.jpeg?width=960&crop=smart&auto=webp&s=1be10829ac5cd8418faf537ed825ffcf738dfc10', 'width': 960}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/V-NBZMVFL9f0_LJ30NE3Vuy0p-_R3wkxfAccL9WsYvQ.jpeg?auto=webp&s=17cbe7c56f54436b84ed2166ceeca6044601ac80', 'width': 1024}, 'variants': {}}]} | |
First local support for Gemma-3n Vision Capability | 13 | Many people have been waiting on this: `llama.cpp` and Ollama don’t yet support multimodal for Gemma-3n. We can't wait to test its vision capabilities (its shiny MobileNetV5 as vision encoder). So… we just supported its vision capability to run locally in CLI, starting with Windows.
**You can run it with one line of code:**
https://reddit.com/link/1n2sw6l/video/lprzry5opulf1/player
# Quickstart
**Follow the 3 steps under the "deploy" section on this page:** [**link**](https://sdk.nexa.ai/model/68ad502252a29ab2bb2107b5)
If you haven't downloaded NexaSDK and activated it with free access token:
1. [Download SDK](https://sdk.nexa.ai/model/68ad502252a29ab2bb2107b5)
2. Create a free acount and activate SDK with free access token
Then:
3. Run the model in CLI with one line of code
`nexa infer NexaAI/gemma-3n`
👉 Try it out and let us know:
* How does it compare to other local vision models?
* What new use cases do you see unlocked here?
* Any critiques, feedback, or suggestions for our SDK.
If you find our work useful, please consider giving a ⭐ to our open source SDK to support: [Github](https://github.com/NexaAI/nexa-sdk)
**Limitations:**
1. Windows only (Mac is coming next)
2. Currently it supports single-image understanding. We are working on multi-image support. | 2025-08-29T00:22:54 | https://www.reddit.com/r/LocalLLaMA/comments/1n2sw6l/first_local_support_for_gemma3n_vision_capability/ | AlanzhuLy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2sw6l | false | null | t3_1n2sw6l | /r/LocalLLaMA/comments/1n2sw6l/first_local_support_for_gemma3n_vision_capability/ | false | false | self | 13 | null |
Fully Tokenless AI Local System | 0 | Hey everyone! I was recommended to make a post here and see how bad I would get roasted so here we go!
I've developed and tested a working fully tokenless inference system. Currently it's doing code generation for my own personal use and produces solid C++ code that compiles first shot. I have several custom things in this system that enables this all to be possible. Permanent dynamic memory was a big key in enabling the rest.
What can I show you guys other than my actual code to validate my claim here? Thanks in advance!! | 2025-08-28T22:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1n2r09p/fully_tokenless_ai_local_system/ | astronomikal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2r09p | false | null | t3_1n2r09p | /r/LocalLLaMA/comments/1n2r09p/fully_tokenless_ai_local_system/ | false | false | self | 0 | null |
Best Open Source TTS That Sounds Most Natural Voice For Storytelling? | 36 | I think from what I can gather it's Tortoise, but I've been using Kokoro right now. Tried Tacotron and it was pretty bad.
Is Tortoise the heavyweight gold standard right now for open source TTS? | 2025-08-28T22:55:20 | https://www.reddit.com/r/LocalLLaMA/comments/1n2qxgo/best_open_source_tts_that_sounds_most_natural/ | Head-Investigator540 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2qxgo | false | null | t3_1n2qxgo | /r/LocalLLaMA/comments/1n2qxgo/best_open_source_tts_that_sounds_most_natural/ | false | false | self | 36 | null |
GPT-OSS 120B is unexpectedly fast on Strix Halo. Why? | 24 | I got a Framework Desktop last week with 128G of RAM and immediately started testing its performance with LLMs. Using my (very unscientific) benchmark test prompt, it's hitting almost 30 tokens/s eval and \~3750 t/s prompt eval using GPT-OSS 120B in ollama, with no special hackery. For comparison, the much smaller deepseek-R1 70B takes the same prompt at 4.1 t/s and 1173 t/s eval and prompt eval respectively on this system. Even on an L40 which can load it totally into VRAM, R1-70B only hits 15t/s eval. (gpt-oss 120B doesn't run reliably on my single L40 and gets much slower when it does manage to run partially in VRAM on that system. I don't have any other good system for comparison.)
Can anyone explain why gpt-oss 120B runs so much faster than a smaller model? I assume there must be some attention optimization that gpt-oss has implemented and R1 hasn't. SWA? (I thought R1 had a version of that?) If anyone has details on what specifically is going on, I'd like to know.
For context, I'm running the Ryzen AI 395+ MAX with 128G RAM, (BIOS allocated 96G to VRAM, but no special restrictions on dynamic allocation.) with Ubuntu 25.05, mainlined to linux kernel 6.16.2. When I ran the ollama install script on that setup last Friday, it recognized an AMD GPU and seems to have installed whatever it needed of ROCM automatically. (I had expected to have to force/trick it to use ROCM or fall back to Vulkan based on other reviews/reports. Not so much.) I didn't have an AMD GPU platform to play with before, so I based my expectations of ROCM incompatibility on the reports of others. For me, so far, it "just works." Maybe something changed with the latest kernel drivers? Maybe the fabled "npu" that we all thought was a myth has been employed in some way through the latest drivers? | 2025-08-28T22:47:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n2qr6m/gptoss_120b_is_unexpectedly_fast_on_strix_halo_why/ | RaltarGOTSP | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2qr6m | false | null | t3_1n2qr6m | /r/LocalLLaMA/comments/1n2qr6m/gptoss_120b_is_unexpectedly_fast_on_strix_halo_why/ | false | false | self | 24 | null |
Is it possible to run any sort of local ai tools in windows 7? | 0 | Im just curious and will most likely not do this, i just think it would be cool to run ai stuff on old windows but i know something like ollama requires windows 10+ and other ai's will most likely have driver issues?
But im not 100% sure on that, is it possible to run any modern ai locally on windows 7? | 2025-08-28T22:39:23 | https://www.reddit.com/r/LocalLLaMA/comments/1n2qk1s/is_it_possible_to_run_any_sort_of_local_ai_tools/ | No_Strawberry_8719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2qk1s | false | null | t3_1n2qk1s | /r/LocalLLaMA/comments/1n2qk1s/is_it_possible_to_run_any_sort_of_local_ai_tools/ | false | false | self | 0 | null |
Hardware advice | 0 | Hi, I have been playing with a small Gemma model on my P2200 5Gb ram in my Dell R720 server and it is way to slow. I am want a D&D DM assistant for npcs and a general self hosted model.
I am want to upgrade the GPUn but due to space constraints I'm limited to server hardware like the Nvidia Quatro RTX A5000. Or I should I get a Ryzen AI Max+ 395 128Gb.
| 2025-08-28T22:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1n2ptnv/hardware_advice/ | Pyrosuperman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n2ptnv | false | null | t3_1n2ptnv | /r/LocalLLaMA/comments/1n2ptnv/hardware_advice/ | false | false | self | 0 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.