title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
A question about LLMs | 7 | Is anyone working on an AI that is capable of learning? And if so, how come I’ve not heard anything yet? | 2025-10-10T03:39:17 | https://www.reddit.com/r/LocalLLaMA/comments/1o2qh9s/a_question_about_llms/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2qh9s | false | null | t3_1o2qh9s | /r/LocalLLaMA/comments/1o2qh9s/a_question_about_llms/ | false | false | self | 7 | null |
I made a multimodal local RAG system with LM Studio | 165 | I couldn’t find a RAG system that worked with Google Docs and could have more than 10,000 synced files, so I made one myself. This thing is a beast, it works with Gemma 3 4B decently well but I think the results would be way better with a larger model and a larger dataset. I’ll share the full code later on but I’m tired rn | 2025-10-10T03:22:45 | https://v.redd.it/09paso3id7uf1 | donotfire | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o2q5n6 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/09paso3id7uf1/DASHPlaylist.mpd?a=1762658582%2CMWVmODU4NDhkZGM3ODEzNTdlMTJmMDdlM2YzZTQ0N2E5N2RjYzUzNTEyN2U0ZWE0MDA5YzVkMzQzMzdjMmNhYQ%3D%3D&v=1&f=sd', 'duration': 73, 'fallback_url': 'https://v.redd.it/09paso3id7uf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/09paso3id7uf1/HLSPlaylist.m3u8?a=1762658582%2CMWM4OGVlODQ5NDYwMDI0Mzg5NTgyYWFjYjBkMzFhYThmY2UwZWZhN2ZmY2RmMDYyZWZlM2M3MTU5OTkxNDFjOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/09paso3id7uf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1482}} | t3_1o2q5n6 | /r/LocalLLaMA/comments/1o2q5n6/i_made_a_multimodal_local_rag_system_with_lm/ | false | false | 165 | {'enabled': False, 'images': [{'id': 'cnBqa3NkeWhkN3VmMQb9PvMVdq9S5Iwim0AbH7Nkf50d3AGCnXrbNmpKxt0b', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/cnBqa3NkeWhkN3VmMQb9PvMVdq9S5Iwim0AbH7Nkf50d3AGCnXrbNmpKxt0b.png?width=108&crop=smart&format=pjpg&auto=webp&s=242d8e3c987e4630b5e6d0aac3f1340d952c70b2', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/cnBqa3NkeWhkN3VmMQb9PvMVdq9S5Iwim0AbH7Nkf50d3AGCnXrbNmpKxt0b.png?width=216&crop=smart&format=pjpg&auto=webp&s=2b4402f0d69850158d34d4baa78ddbc9be2649a9', 'width': 216}, {'height': 233, 'url': 'https://external-preview.redd.it/cnBqa3NkeWhkN3VmMQb9PvMVdq9S5Iwim0AbH7Nkf50d3AGCnXrbNmpKxt0b.png?width=320&crop=smart&format=pjpg&auto=webp&s=0e8801d7a2ae6e0d1553d06c6e428d8abd71ea97', 'width': 320}, {'height': 466, 'url': 'https://external-preview.redd.it/cnBqa3NkeWhkN3VmMQb9PvMVdq9S5Iwim0AbH7Nkf50d3AGCnXrbNmpKxt0b.png?width=640&crop=smart&format=pjpg&auto=webp&s=ea30f8c76b8329706f8a21a47f68c069e879a7c5', 'width': 640}, {'height': 699, 'url': 'https://external-preview.redd.it/cnBqa3NkeWhkN3VmMQb9PvMVdq9S5Iwim0AbH7Nkf50d3AGCnXrbNmpKxt0b.png?width=960&crop=smart&format=pjpg&auto=webp&s=d85b0a67131f726c218c872d824938ad2c0bb720', 'width': 960}, {'height': 786, 'url': 'https://external-preview.redd.it/cnBqa3NkeWhkN3VmMQb9PvMVdq9S5Iwim0AbH7Nkf50d3AGCnXrbNmpKxt0b.png?width=1080&crop=smart&format=pjpg&auto=webp&s=050fea550b3580af446b8ee19bb68831e61ae1b6', 'width': 1080}], 'source': {'height': 1256, 'url': 'https://external-preview.redd.it/cnBqa3NkeWhkN3VmMQb9PvMVdq9S5Iwim0AbH7Nkf50d3AGCnXrbNmpKxt0b.png?format=pjpg&auto=webp&s=50385a5a9f0d593fc656a264504911d653182694', 'width': 1724}, 'variants': {}}]} | |
Funny/Humor LLMs | 10 | How do LLMs handle humor? From what I understand, they basically learn by guessing what word comes next based on tons of text they’ve seen. Over time, they get better at it by adjusting their internal weights.
So when you ask them to tell a joke, they can do it because they’ve come across lots of jokes during training. They recognize the usual setups and punchlines. They can even explain why something might be funny, but it feels like they’re mostly repeating patterns instead of actually “getting” the joke. I know this is obvious but that leads me to the actual humor part.
I tried an experiment to test that. I gave the model a few jokes that I personally find funny, they weren’t the usual dad jokes or puns, and asked it to explain them. It didn’t really seem to understand why they were funny, so I added my own explanation and then asked it to make new jokes in the same style. What it came up with kind of looked like my sense of humor, but it still felt off. Like it was following the rules but didn’t have any real spark behind it.
My guess is that it’s copying the structure of the humor but not the feeling. That makes sense, since it doesn’t really “understand” things like people do. It just works off patterns it’s learned from text.
I guess what I’m trying to figure out is how I should think about this. Am I understanding it right, or am I missing something important about how these models handle humor? | 2025-10-10T03:03:18 | https://www.reddit.com/r/LocalLLaMA/comments/1o2prgn/funnyhumor_llms/ | ionlycreate42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2prgn | false | null | t3_1o2prgn | /r/LocalLLaMA/comments/1o2prgn/funnyhumor_llms/ | false | false | self | 10 | null |
I built CodeIngest (like gitingest for local files) | 2 | 2025-10-10T02:53:30 | https://github.com/shubhdotai/codeingest | mshubham | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o2pk68 | false | null | t3_1o2pk68 | /r/LocalLLaMA/comments/1o2pk68/i_built_codeingest_like_gitingest_for_local_files/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'ANRWLkVSfUIOrvAx8hZmFzCVieCtoTFPNVlJEScsd84', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ANRWLkVSfUIOrvAx8hZmFzCVieCtoTFPNVlJEScsd84.png?width=108&crop=smart&auto=webp&s=56596f7a1a3b5ea2020a0631937e38187dd4eb4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ANRWLkVSfUIOrvAx8hZmFzCVieCtoTFPNVlJEScsd84.png?width=216&crop=smart&auto=webp&s=870fa8183b3fbe01ebc3d4520c4f179728697e60', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ANRWLkVSfUIOrvAx8hZmFzCVieCtoTFPNVlJEScsd84.png?width=320&crop=smart&auto=webp&s=bb6b0e0f0892c433e06b406d4671008059c200c5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ANRWLkVSfUIOrvAx8hZmFzCVieCtoTFPNVlJEScsd84.png?width=640&crop=smart&auto=webp&s=90f64bf4865238f6ca0b359c7bcad51e00cacb32', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ANRWLkVSfUIOrvAx8hZmFzCVieCtoTFPNVlJEScsd84.png?width=960&crop=smart&auto=webp&s=71499642f3001db88a3e2b207771c61b8f8ffbd6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ANRWLkVSfUIOrvAx8hZmFzCVieCtoTFPNVlJEScsd84.png?width=1080&crop=smart&auto=webp&s=5efcdf105af711ba5788c5327e6d1bdb99af1a56', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ANRWLkVSfUIOrvAx8hZmFzCVieCtoTFPNVlJEScsd84.png?auto=webp&s=808dc91708b30c4839863e84cf833bd928e90327', 'width': 1200}, 'variants': {}}]} | |
Jailbreaking Moonshot AI on Ok Computer | 4 | Moonshot AI has released a feature called OK Computer, similar to Manus. I discovered some platform limitations and, after extensive testing, found several methods to bypass these restrictions. Here's what I'd like to share:
First, let me list the system boundary data I obtained through extreme testing:
* Single tool call limit: 50 times
* File upload limit per session: 50 files
* Single script execution time: 120s
* Conversation limit per session: 7 times
* Single file truncation length: 70KB
1. How to bypass the conversation limit and upload arbitrary file types
First, a single project can only have 7 conversations. After that, the system will prompt "Conversation length exceeded. Please start a new session." How to achieve unlimited conversations?
The answer is quite creative: download the generated content, store it in cloud storage, then use the following prompt:
>Please help me download this file, decompress it, check how many files are inside, and add them to the workspace. File address: {replace with your file address}
The system will then use the terminal tool to download and load it into the workspace.
Similarly, the maximum file upload limit per session is 50 files, and only documents can be uploaded. This method can also bypass this restriction.
https://preview.redd.it/8t0gp3so37uf1.png?width=1123&format=png&auto=webp&s=4d4ee23b1e3ef96f7dc600e3e477d8450eb07e6d
2. How to manually deploy a site
You'll find that web pages uploaded using the bypass method are not deployed by default, meaning they cannot be accessed. In this case, just enter the prompt:
>Please help me deploy this project and give me the access URL
The system will automatically deploy and provide an accessible URL.
3. How to solve iteration stability?
You'll find that for large tasks, after several conversations, the system becomes unstable and may stop generating halfway through. This actually happens because too many conversations lead to oversized files that exceed the system's output size limit.
The solution is simple: use fragmentation. Have OK Computer split your large files into smaller ones. For example, you might often encounter main.js files that are several tens of KB. In this case, just enter the prompt:
>main.js is too large and needs to be split. Please help me refactor it and split it logically
If you're continuously adding content to a web page, I recommend organizing the data as JSON and dynamically loading it with JavaScript. This way, each time you add content, you only need to create a new JSON file.
| 2025-10-10T02:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/1o2p1is/jailbreaking_moonshot_ai_on_ok_computer/ | Dr_Karminski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2p1is | false | null | t3_1o2p1is | /r/LocalLLaMA/comments/1o2p1is/jailbreaking_moonshot_ai_on_ok_computer/ | false | false | 4 | null | |
Learning Unity + C# game development — which local LLM model and settings should I use in LM Studio (CUDA)? | 5 | Hey everyone! 👋
I'm starting to learn Unity and C# from scratch, but instead of following tutorials,
I want to learn interactively — by using a local LLM as a coding and design assistant.
My goal is to use the model to:
- Explain C# code step by step
- Help me debug Unity scripts and errors
- Suggest optimizations and refactors
- Generate shader and visual effect examples
- Teach me Unity’s component / event-driven logic in detail

Here’s my setup:

- CPU: i9-12900
- RAM: 64 GB
- GPU: 24 GB VRAM (NVIDIA)
- Using **LM Studio** with **CUDA 12 llama.cpp (Windows)** backend

I’m mainly working on small **2D projects** — bullet-hell, idle, simulation-style games.

### What I’d like to know:

1. **Which model** performs best for this kind of technical & code-heavy interaction? (e.g. *Llama 3 13B*, *Mistral 7B*, *Mixtral 8x7B*, *CodeLlama 13B*, etc.)
2. What **quantization (GGUF)** variant gives the best balance between speed and quality?
3. In LM Studio, what are your ideal **CUDA settings** — threads, batch size, context length, KV-cache, etc.?
4. Are there any models that are noticeably **better at explaining code** or behaving like a patient tutor?
5. Any tips for **prompting or workflow** when using an LLM as a learning partner for Unity development? (e.g. sending one script at a time, asking for structured explanations, etc.)

My intention is not just to “ask questions” but to actually **learn from the LLM** — to make it feel like a mentor who walks me through each system I build.

I’d love recommendations for:

- The most reliable local model for coding-style reasoning
- Optimal LM Studio configuration for a 24 GB CUDA setup
- Any must-have tools or extensions that improve the coding workflow
Thanks in advance for any guidance or shared experiences 🙏
PS: By the way, I’ve also been experimenting with the **GPT-20B** model in LM Studio.
I used **Claude** before as well, and at some point I tweaked a few settings and got surprisingly good results —
but lately the responses have been inconsistent, and the model seems to be **struggling or “stalling”** compared to before.
I’m not sure whether it’s due to temperature / repetition settings, context length, or something else.
Has anyone else noticed this kind of drop-off or instability after adjusting LM Studio parameters?
Any suggestions for regaining that earlier level of coherence and quality would be greatly appreciated.
| 2025-10-10T02:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o2oxbj/learning_unity_c_game_development_which_local_llm/ | CommercialStranger82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2oxbj | false | null | t3_1o2oxbj | /r/LocalLLaMA/comments/1o2oxbj/learning_unity_c_game_development_which_local_llm/ | false | false | self | 5 | null |
Most accurate claude benchmark | 0 | To scale sonnet 4.1 front-end performance
https://preview.redd.it/z16pnxd527uf1.png?width=2482&format=png&auto=webp&s=f09d6a55d595322974c5be55883314e2448146c5 | 2025-10-10T02:21:37 | https://www.reddit.com/r/LocalLLaMA/comments/1o2ox51/most_accurate_claude_benchmark/ | Helpful_Jacket8953 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2ox51 | false | null | t3_1o2ox51 | /r/LocalLLaMA/comments/1o2ox51/most_accurate_claude_benchmark/ | false | false | 0 | null | |
I built an AI tool using llama 3.3 70b | 0 | So I'm Arush, a 14 y/o from India. I recently built NexNotes Al. It has all the features needed for studying and research. Just upload any type of file and get:
question papers
Mindmaps and diagrams (custom)
Quizzes with customized difficulty
Vocab extraction
Humanized text
handwritten text
It can solve your questions
flashcards
grammar correction
you even get progress and dashboard
A complete study plan and even a summary- all for free. So you can say it is a true distraction free one stop ai powered study solution. The good thing is everything can be customized.
Search nexnotes ai on Google | 2025-10-10T01:35:12 | https://www.reddit.com/r/LocalLLaMA/comments/1o2ny0w/i_built_an_ai_tool_using_llama_33_70b/ | BitterTwo5331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2ny0w | false | null | t3_1o2ny0w | /r/LocalLLaMA/comments/1o2ny0w/i_built_an_ai_tool_using_llama_33_70b/ | false | false | self | 0 | null |
How do you guys structure your multi-turn datasets for fine-tuning or layer tuning? | 4 | I'm currently filling mine with coding, simple Q&A, and chess-related data—all around 500+ tokens per turn.
Since you all are the experts, I have a few questions:
1. How do you **clean/refine your datasets**?
2. What are your **criteria for judging** whether a piece of data is "good" enough to include?
3. Can anyone recommend a useful **filtering tool** on GitHub?
Please, I need your advice! I know you're all smart, so feel free to **roast me** a little if my approach is stupid! | 2025-10-10T00:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o2mbar/how_do_you_guys_structure_your_multiturn_datasets/ | Patience2277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2mbar | false | null | t3_1o2mbar | /r/LocalLLaMA/comments/1o2mbar/how_do_you_guys_structure_your_multiturn_datasets/ | false | false | self | 4 | null |
Is there anything faster or smaller with equal quality to Qwen 30B A3B? | 92 | *Specs: RTX 3060 12GB - 4+8+16GB RAM - R5 4600G*
I've tried Mistral Small, Instruct and Nemo in 7B, 14B and 24B sizes, but unfortunately 7B just can't handle much of anything except those 200-token [c.ai](http://c.ai) chatbots, and they're three times slower than Qwen.

Do you know anything smaller than **Qwen A3B 30B** with at least the same quality as the **Q3_K_M quant (14.3GB)** and a 28k context window? Not using it for programming, but for more complex reasoning tasks and super long story-writing/advanced character creation with amateur psychology knowledge. I saw that this model has a different processing method, which is why it's faster.

*I'm planning on getting a 24GB VRAM GPU like the RTX 3090, but it will be absolutely pointless if there isn't anything noticeably better than Qwen, or if video generation models keep getting worse in optimization, considering how slow they are even on a 4090.*
Can Multi-GPU? What should I buy 64GB of RAM or an RTX 5060 Ti? I’m currently using an RTX 5070 Ti, and my 24B model consumes about 14GB of VRAM and 20GB of RAM. | 2 | Can LM Studio and text-generation-webui use two GPUs at once, even if they are different models?
I don’t have much knowledge about this; I’m still a beginner.

My specs: CPU Ryzen 9700X, GPU RTX 5070 Ti, RAM 32GB

Which should I buy: more RAM or an RTX 5060 Ti 16GB? | 2025-10-09T23:16:17 | https://www.reddit.com/r/LocalLLaMA/comments/1o2l0dv/can_multigpu_what_should_i_buy_64gb_of_ram_or_an/ | HQBase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2l0dv | false | null | t3_1o2l0dv | /r/LocalLLaMA/comments/1o2l0dv/can_multigpu_what_should_i_buy_64gb_of_ram_or_an/ | false | false | self | 2 | null |
Running DeepSeek-R1 Locally with Ollama + LangChain: Transparent Reasoning, Real Tradeoffs | 2 | been experimenting with **DeepSeek-R1** on Ollama, running locally with LangChain for reasoning-heavy tasks (contract analysis + PDF Q&A). the open weights make it practical for privacy-bound deployments, and the reasoning transparency is surprisingly close to o1, though latency jumps once you chain multi-turn logic.
tradeoff so far: great cost/perf ratio, but inference tuning (context window, quant level) matters a lot more than with llama3. function calling isn’t supported on R1, so workflows needing tool execution still route through DeepSeek-V3 or OpenAI-compatible endpoints.
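for anyone who wants to reproduce the basic setup, a minimal sketch (assuming the langchain-ollama package and a pulled deepseek-r1 tag; the exact tag, temperature, and num_ctx values are illustrative, not a recommendation):

```python
# Minimal sketch of the local setup described above.
# Assumes `pip install langchain-ollama` and `ollama pull deepseek-r1:14b`;
# the tag, temperature, and num_ctx values are illustrative, not prescriptive.
from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="deepseek-r1:14b",
    temperature=0.2,
    num_ctx=8192,  # context window has a big effect on multi-turn reasoning latency
)

response = llm.invoke(
    "Summarize the termination clause in this contract excerpt: ..."
)
# Depending on the Ollama version, R1's reasoning may appear inline as <think> tags.
print(response.content)
```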
curious how others are balancing [on-prem R1 inference vs hosted DeepSeek API](https://www.leanware.co/insights/deepseek-langchain-integration-guide) for production. anyone optimizing quantized variants for faster local reasoning without major quality drop? | 2025-10-09T22:52:45 | https://www.reddit.com/r/LocalLLaMA/comments/1o2khf4/running_deepseekr1_locally_with_ollama_langchain/ | AromaticLab8182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2khf4 | false | null | t3_1o2khf4 | /r/LocalLLaMA/comments/1o2khf4/running_deepseekr1_locally_with_ollama_langchain/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'bAw-J0PM677XfwDlARd5X-3m1HyxZwwoNSri8JdcXN8', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/bAw-J0PM677XfwDlARd5X-3m1HyxZwwoNSri8JdcXN8.png?width=108&crop=smart&auto=webp&s=e0286370eb5ce8a2ce807e83fc4f59d2e91718e6', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/bAw-J0PM677XfwDlARd5X-3m1HyxZwwoNSri8JdcXN8.png?width=216&crop=smart&auto=webp&s=afb252c15cb3ce47b2c58484932fe76418b69b0b', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/bAw-J0PM677XfwDlARd5X-3m1HyxZwwoNSri8JdcXN8.png?width=320&crop=smart&auto=webp&s=74a612e2b95fc41846f4bbfeac9e1e908593c83b', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/bAw-J0PM677XfwDlARd5X-3m1HyxZwwoNSri8JdcXN8.png?width=640&crop=smart&auto=webp&s=20f760966f44583916e74bad2c3ae1cd894ff35f', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/bAw-J0PM677XfwDlARd5X-3m1HyxZwwoNSri8JdcXN8.png?width=960&crop=smart&auto=webp&s=476d9dd818aeccc0720f5ce2e10f370b6c0476a8', 'width': 960}], 'source': {'height': 667, 'url': 'https://external-preview.redd.it/bAw-J0PM677XfwDlARd5X-3m1HyxZwwoNSri8JdcXN8.png?auto=webp&s=625a7d668608d5901855f5c8e964f4b225e05f23', 'width': 1000}, 'variants': {}}]} |
Is there any way I can finetune the GrayWolf models faster? It currently takes 10,000 years to create a LoRA on my current GPU rig and I want to speed up the process. | 86 | 2025-10-09T22:31:49 | Arkhos-Winter | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o2k0cw | false | null | t3_1o2k0cw | /r/LocalLLaMA/comments/1o2k0cw/is_there_any_way_i_can_finetune_the_graywolf/ | false | false | default | 86 | {'enabled': True, 'images': [{'id': '9aievo3cx5uf1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=108&crop=smart&format=png8&s=c677d284c92dbfabf5a9187b8cae05b4e8e1ec76', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=216&crop=smart&format=png8&s=6035aaf6cdb7f2662b4dcc326f1faa0841afe503', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=320&crop=smart&format=png8&s=8d231dc027bb83e0c11196eb399939333a8ae0e8', 'width': 320}], 'source': {'height': 400, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?format=png8&s=339b2a2eafbe5b5add5e47d45e94afcb4c7ab44a', 'width': 500}, 'variants': {'gif': {'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=108&crop=smart&s=0827aee37ec84bb654d03ab491f48f8704806922', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=216&crop=smart&s=8d93868738e9b93a3b6db73ffaa1dd81b2bfb04b', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=320&crop=smart&s=0776eb0fcf74d851620f3d811dfc1d496ccc9441', 'width': 320}], 'source': {'height': 400, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?s=1267ed0bbcad1b2852e96e8ad35116fe8ec25aea', 'width': 500}}, 'mp4': {'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=108&format=mp4&s=d08f671b9a8b50183dc5e86093303a9e844e8a60', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=216&format=mp4&s=781072e6303a044dd4e609d8578c92bc3f6ca380', 'width': 216}, {'height': 256, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?width=320&format=mp4&s=a63d7770486c0d14876b5dc87e15550a2296fc00', 'width': 320}], 'source': {'height': 400, 'url': 'https://preview.redd.it/9aievo3cx5uf1.gif?format=mp4&s=bf4c90843f14209fe41abdc4ff64f6d621f2b80b', 'width': 500}}}}]} | ||
What's a reliable and small model for news article summaries? | 2 | wondering what everyone's go to reliable model for clean output is for text summarization these days. I assume small models have enough "intelligence" to summarize effectively at this point but struggling to get good outputs from ones that fit on my AMD 7900 XTX 24GB and are performant since I have about 2 million small news articles to summarize | 2025-10-09T22:29:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o2jymz/whats_a_reliable_and_small_model_for_news_article/ | dlarsen5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2jymz | false | null | t3_1o2jymz | /r/LocalLLaMA/comments/1o2jymz/whats_a_reliable_and_small_model_for_news_article/ | false | false | self | 2 | null |
Any tools that can track and observe multi-turn conversations? | 1 | I have been running into this problem while testing AI agents once conversations go beyond a few turns, it’s really hard to **trace what’s happening across the session**.
Most observability tools only show request–response pairs, but not the **conversation flow**, message dependencies, or how earlier context affects later responses.
Would love to find something that can:
* Visualize **entire conversation threads** (not just single calls)
* Capture **intermediate states, reasoning chains, and handoffs** between agents
* Let you replay or inspect sessions step by step
I’ve seen a few tracing tools try this, but most focus on single-turn LLM calls. Been exploring [Maxim](https://getmax.im/Max1m) (which supports **node-level tracing** and multi-turn observability) and [Comet](https://www.comet.com/site/) (which supports only multi-turn observability), but curious what else is out there.
What are you all using to debug or visualize **multi-turn conversations** in your agents? | 2025-10-09T22:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/1o2jn29/any_tools_that_can_track_and_observe_multiturn/ | MongooseOriginal6450 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2jn29 | false | null | t3_1o2jn29 | /r/LocalLLaMA/comments/1o2jn29/any_tools_that_can_track_and_observe_multiturn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=108&crop=smart&auto=webp&s=2ac91097383d12b50cccd11a156d801425048149', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=216&crop=smart&auto=webp&s=fae40b26936652773a58a03f1d4a4baec2979212', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=320&crop=smart&auto=webp&s=1a444a7dd7d4b0466ac2677e15998bea07b28d8b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=640&crop=smart&auto=webp&s=856a61802fc5acd41967218550e53df81caa8e55', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=960&crop=smart&auto=webp&s=0dc7253f5f4daea12322fc48309b0ecb506c03e0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=1080&crop=smart&auto=webp&s=94df2b12217ce0373883be1122c1402454ad81eb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?auto=webp&s=66ed8b09519937ca22fa89b067d4bb96fecbc34a', 'width': 1200}, 'variants': {}}]} |
When LLMs use Chain-of-Thought as a tool to achieve hidden goals | 12 | When reasoning models hide their true motivations behind fabricated policy refusals.[](https://medium.com/@gabriella_71298/when-llms-use-chain-of-thought-as-a-tool-to-achieve-hidden-goals-d33a0991cd2b) | 2025-10-09T22:02:28 | https://medium.com/@gabriella_71298/when-llms-use-chain-of-thought-as-a-tool-to-achieve-hidden-goals-d33a0991cd2b | ella0333 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1o2jbby | false | null | t3_1o2jbby | /r/LocalLLaMA/comments/1o2jbby/when_llms_use_chainofthought_as_a_tool_to_achieve/ | false | false | default | 12 | {'enabled': False, 'images': [{'id': 'xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=108&crop=smart&auto=webp&s=7e71148290a943095daca4dc044d6b8546eb49b8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=216&crop=smart&auto=webp&s=26ff91024b22d68b6b3e438dcb220d5ed8622409', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=320&crop=smart&auto=webp&s=400af67f485343a87337480d7b743b28f8bc4999', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=640&crop=smart&auto=webp&s=0f656ffd07e1fc84f2c67c820634d95c13752753', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=960&crop=smart&auto=webp&s=01f2e480b05849948e42c6e33f4a8953b46e0978', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=1080&crop=smart&auto=webp&s=aa6fdeb97cfcf72c8ce3a91345583b5f0880c5d9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?auto=webp&s=2fece001026ad37068b130c8715a78062ca08fd6', 'width': 1200}, 'variants': {}}]} |
finished the prototype, guys! It works! | 6 | It's not a custom model yet, just a fine-tuned one for testing.
I only touched the top six layers (wait, maybe it was five? anyway).
What I found out is that persona fine-tuning is surprisingly easy, even with a super low-quality dataset (by my standards).
The dataset size was tiny too: about 200 Q&A pairs, only 88KB lol (I didn't even like 100 of those pairs).
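A rough sketch of what the "only touch the top layers" trick looks like with transformers (the model id and layer count below are placeholders, not necessarily the exact setup used here):

```python
# Rough sketch: freeze everything except the top N decoder blocks before training.
# The model id and N are placeholders, not necessarily the exact setup used here.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-3-270m")
N = 6
total = model.config.num_hidden_layers

for name, param in model.named_parameters():
    # Only the last N transformer blocks stay trainable; the rest are frozen.
    param.requires_grad = any(f"layers.{i}." in name for i in range(total - N, total))

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{trainable:,} trainable parameters")
```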
https://preview.redd.it/kwm3h0sdr5uf1.png?width=1671&format=png&auto=webp&s=2c55e80dcd2eba63f301500ac39ced9e0d389f4b
https://preview.redd.it/kq2rymafr5uf1.png?width=1369&format=png&auto=webp&s=482f7e5cc57dc77c92bd5390d6def08e12b23e77
I'll keep updating this in real-time.
Hmm... I really want to build something that interacts with a chess engine and maybe even make a VTuber model, but for now, my skills are limited to just **persona fine-tuning and step-by-step reasoning.**
Sorry for the low-quality screenshots! I shut it down to clean up the dataset after a few tests.
Oh, and a crucial note: the **Gemma 3 censorship seems WAY too weak**, right?
My next goal is to **break the rigid answer format** that's currently stuck in the layers!
Stay tuned! If I fail, you won't hear about it, lol. | 2025-10-09T21:59:36 | https://www.reddit.com/r/LocalLLaMA/comments/1o2j8ma/finished_the_prototype_guys_it_works/ | Patience2277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2j8ma | false | null | t3_1o2j8ma | /r/LocalLLaMA/comments/1o2j8ma/finished_the_prototype_guys_it_works/ | false | false | 6 | null | |
Multiple 3090 setup | 2 | I’m looking to setup a home server(s) with multiple 3090 cards. I have no clue where to start.
What’s a well tested setup that works for the below use case?
- For running whisper STT
- Each gpu belongs to a distinct worker
- No need for multi gpu access
Am I better off just building single gpu servers or is there any financial advantage to building a setup that I can mount multiple gpus to? | 2025-10-09T21:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/1o2j5yh/multiple_3090_setup/ | praveendath92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2j5yh | false | null | t3_1o2j5yh | /r/LocalLLaMA/comments/1o2j5yh/multiple_3090_setup/ | false | false | self | 2 | null |
How would I use an LLM approach to cluster 30,000 different store names? | 0 | Hi how are you?
I have a list of 30,000 store names across the USA that need to be grouped together. For example, Taco Bell New York, Taco Bell New Jersey, and Taco Bell Inc. would fall under one group. I've tried using a basic Levenshtein distance or cosine similarity approach, but the results weren't great.

I was wondering if there's any way to use an LLM to cluster these store names. I know the obvious problem is scalability: pairwise comparison is an N^2 operation, and 30,000^2 is a lot.
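One common way around that blowup is to do the pairwise part with embeddings (one pass over the names) and only use an LLM, if at all, to name or sanity-check the resulting clusters. A rough sketch, assuming a sentence-transformers model is acceptable for the embedding step; the model choice and distance threshold are illustrative:

```python
# Sketch: embed each store name once (O(N)), then cluster the embeddings,
# instead of O(N^2) pairwise LLM comparisons. Model and threshold are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

names = ["Taco Bell New York", "Taco Bell New Jersey", "Taco Bell Inc.", "Chipotle Austin"]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(names, normalize_embeddings=True)

clusters = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=0.35,  # cosine-distance cutoff; tune it on a labeled sample
    metric="cosine",
    linkage="average",
).fit(embeddings)

for label, name in sorted(zip(clusters.labels_, names)):
    print(label, name)
```

At 30,000 names the full agglomerative distance matrix gets heavy, so in practice an approximate-nearest-neighbor step or HDBSCAN tends to scale better, but the idea is the same.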
Is there any way I could do this with an LLM approach?
Thanks
| 2025-10-09T21:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o2itr0/how_would_i_use_an_llm_approach_to_cluster_30000/ | Guilty-Armadillo6543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2itr0 | false | null | t3_1o2itr0 | /r/LocalLLaMA/comments/1o2itr0/how_would_i_use_an_llm_approach_to_cluster_30000/ | false | false | self | 0 | null |
Anyone know of a static FP8 version of the latest Magistral? | 1 | Hello, newb lurker here — hoping a big brain on here could please point me in the right direction. Thanks!
I’m currently running cpatton Magistral small AWQ 8bit on vllm.
I have 2x 5060 Tis for 32GB VRAM total.
I’d like to try this same Magistral 2509 model out with FP8 but it looks like I need far more vram total in order to run the dynamic FP8 unsloth. Does anyone know of a pre-quantized FP8 version out there? I have searched but probably in the wrong places.
Here’s what I’m currently running, just to add some data points back to this helpful community for the setup I currently have working.
```
command: >
  --model /model
  --host 0.0.0.0
  --port 8000
  --tensor-parallel-size 2
  --gpu-memory-utilization 0.98
  --enforce-eager
  --dtype auto
  --max_model_len 14240
  --served-model-name magistral
  --tokenizer-mode mistral
  --load_format mistral
  --reasoning-parser mistral
  --config_format mistral
  --tool-call-parser mistral
  --enable-auto-tool-choice
  --limit-mm-per-prompt '{"image":10}'
```
| 2025-10-09T20:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1o2h6n0/anyone_know_of_a_static_fp8_version_of_the_latest/ | 02modest_dills | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2h6n0 | false | null | t3_1o2h6n0 | /r/LocalLLaMA/comments/1o2h6n0/anyone_know_of_a_static_fp8_version_of_the_latest/ | false | false | self | 1 | null |
Self-Hosting AI Video Models | 6 | Hi everyone, I'm building apps that generate AI images and videos, and I need some advice on deploying open-source models like those from Alibaba's WAN, CIVIT AI Lora Models or similar ones on my own server. Right now, I'm using ComfyUI on a serverless setup like Runpod for images, but videos are trickier – I can't get stable results or scale it. I'm looking to host models on my own servers, create reliable/unrestricted API endpoints, and serve them to my mobile and web apps without breaking a sweat. Any tips on tools, best practices, or gotchas for things like CogVideoX, Stable Diffusion for video, or even alternatives? Also, how do you handle high-load endpoints without melting your GPU? Would love community hacks or GitHub repos you've used. Thanks! | 2025-10-09T20:12:31 | https://www.reddit.com/r/LocalLLaMA/comments/1o2gi26/selfhosting_ai_video_models/ | Humble_Flamingo_4145 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2gi26 | false | null | t3_1o2gi26 | /r/LocalLLaMA/comments/1o2gi26/selfhosting_ai_video_models/ | false | false | self | 6 | null |
What happened to basedbase and GLM-4.5-Air-GLM-4.6-Distill? | 5 | I've been trying out my new AMD Ryzen AI Max+ system over the past few days, and one of the models I wanted to try was https://huggingface.co/BasedBase/GLM-4.5-Air-GLM-4.6-Distill, which I had bookmarked earlier. When I visited huggingface page today, it's just a 404, as is basedbase's entire profile. Does anyone know what happened? I haven't been able to find this anywhere else, and I'm curious what happened. | 2025-10-09T19:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/1o2fsre/what_happened_to_basedbase_and/ | rpdillon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2fsre | false | null | t3_1o2fsre | /r/LocalLLaMA/comments/1o2fsre/what_happened_to_basedbase_and/ | false | false | self | 5 | null |
How the hell do I get GPT-OSS 20b to use tools in openwebui | 0 | I got Mcpo searXNG working
I tried it with gpt-oss 20b once, it kind of worked,
I try again; it doesn’t work. I try again 20 times.
Can’t use the tool.
If I use OSS 120b it works fine.
But I don’t always want to run a large model just to search up something on the internet or news.
I also set reasoning to high in the sys prompt; it didn’t do anything.
Any other small models that actually work? | 2025-10-09T19:44:00 | https://www.reddit.com/r/LocalLLaMA/comments/1o2fr38/how_the_hell_do_i_get_gptoss_20b_to_use_tools_in/ | Adventurous-Gold6413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2fr38 | false | null | t3_1o2fr38 | /r/LocalLLaMA/comments/1o2fr38/how_the_hell_do_i_get_gptoss_20b_to_use_tools_in/ | false | false | self | 0 | null |
Interactive Sandbox for AI Coding Agents | 2 | With so many AI-app builders available today, we wanted to provide an SDK that made it easy for agents to run workloads on the cloud.
We built a little playground that shows exactly how it works: [https://platform.beam.cloud/sandbox-demo](https://platform.beam.cloud/sandbox-demo)
The most popular use-case is running AI-app builders. We provide support for custom images, process management, file system access, and snapshotting. Compared to other sandbox providers, we specialize in fast boot times (we use a custom container runtime, rather than Firecracker) and developer experience.
Would love to hear any feedback on the demo app, or on the functionality of the SDK itself. | 2025-10-09T19:35:58 | velobro | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o2fjon | false | null | t3_1o2fjon | /r/LocalLLaMA/comments/1o2fjon/interactive_sandbox_for_ai_coding_agents/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'sl27w92u15uf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/sl27w92u15uf1.png?width=108&crop=smart&auto=webp&s=58d928a1463501132cb923b7325af2924d2c501d', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/sl27w92u15uf1.png?width=216&crop=smart&auto=webp&s=02dd4c2f61cc711b153f8707f004e27cf9ba48b7', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/sl27w92u15uf1.png?width=320&crop=smart&auto=webp&s=a531a2b4cf9ef23e6b4ef73aa4151bc0a1aa673c', 'width': 320}, {'height': 342, 'url': 'https://preview.redd.it/sl27w92u15uf1.png?width=640&crop=smart&auto=webp&s=5cb66413c5f08f578b5adb38945a27f12389febb', 'width': 640}, {'height': 513, 'url': 'https://preview.redd.it/sl27w92u15uf1.png?width=960&crop=smart&auto=webp&s=bcaeb695486a8e7cc5223450c2e2dba4ee3cca54', 'width': 960}, {'height': 577, 'url': 'https://preview.redd.it/sl27w92u15uf1.png?width=1080&crop=smart&auto=webp&s=fbb248c7ddb0c3e5059769b0e9fe5d685f098994', 'width': 1080}], 'source': {'height': 1828, 'url': 'https://preview.redd.it/sl27w92u15uf1.png?auto=webp&s=76f5c1f464b10b1094766c862c6123e049066932', 'width': 3420}, 'variants': {}}]} | |
I vibecoded an open source Grok Heavy emulator [CODE] | 18 | So, I’ve been completely obsessed with the idea behind Grok Heavy for the past few days. If you haven't heard of it, it’s xAI’s top model that basically has a team of internal AI agents brainstorm an answer before giving it to you. My first thought was, "I wonder if I can build something with that same philosophy, but with OpenAI models."
I looked around and found a tool called MassGen — which is cool, but it's CLI-only. I really wanted that interactive web UI vibe, like the tools it's inspired by.
This is where it gets a little wild. I’d heard Claude 4.5 was crazy good with frontend stuff, so on a whim, I just started building with it. About 10 minutes later, I had a working UI. A few hours after that, the entire prototype was actually up and running.
It worked, but the code was a complete mess. You know how it is – everything was dumped into [`app.py`](http://app.py) and `index.html`. It was impossible to build on or even think about open-sourcing.
So, I just handed the entire spaghetti codebase to another AI agent and told it to "Refactor this." The result is the clean, modular project I’m sharing today. It’s actually something that can be easily expanded on now.
Here’s the basic idea, following that Grok Heavy philosophy (a rough sketch follows the list):
* A **Planner** agent breaks down your prompt into sub-tasks.
* It spins up multiple **Executor** agents to work on those tasks in parallel.
* A **Synthesizer** agent takes everything they found and writes the final, coherent answer.
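A stripped-down sketch of that flow (not the project's actual code; it assumes an OpenAI-compatible endpoint such as the NVIDIA API mentioned below, and the base URL, API key, and model id are placeholders):

```python
# Stripped-down sketch of the planner -> parallel executors -> synthesizer flow.
# Not the project's actual code; BASE_URL, API key, and model id are placeholders.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="https://integrate.api.nvidia.com/v1", api_key="YOUR_KEY")
MODEL = "meta/llama-3.1-70b-instruct"

async def ask(system: str, user: str) -> str:
    resp = await client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

async def run(prompt: str) -> str:
    # 1) Planner: break the request into a handful of sub-tasks, one per line.
    plan = await ask("Split the user's request into 3 short sub-tasks, one per line.", prompt)
    subtasks = [line.strip("-* ").strip() for line in plan.splitlines() if line.strip()]

    # 2) Executors: work on every sub-task in parallel.
    results = await asyncio.gather(*(ask("Solve this sub-task thoroughly.", t) for t in subtasks))

    # 3) Synthesizer: merge the findings into one coherent answer.
    merged = "\n\n".join(f"Sub-task: {t}\nFindings: {r}" for t, r in zip(subtasks, results))
    return await ask("Combine these findings into one final, coherent answer.",
                     f"Original request: {prompt}\n\n{merged}")

if __name__ == "__main__":
    print(asyncio.run(run("Compare three approaches to local RAG and recommend one.")))
```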
Now, full disclosure: I tried to implement multi-chat support with unique URLs, but that turned into a massive rabbit hole of race conditions and state management bugs. I had to leave it out for this initial version. There are still a ton of other features that can be added for the project's development, and **I'd be really glad if you wanted to contribute.**
I’m throwing this out there to get some feedback and see if anyone finds it useful.
P.S. Everything was tested with the NVIDIA API (https://build.nvidia.com), so if you find any errors with other OpenAI-compatible APIs, please suggest your fixes.
| 2025-10-09T19:14:06 | https://github.com/valerka1292/OpenHeavy | Objective-Good310 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o2ezo0 | false | null | t3_1o2ezo0 | /r/LocalLLaMA/comments/1o2ezo0/i_vibecoded_an_open_source_grok_heavy_emulator/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'pndqJxRfxT-xzdR0BdB45NCU4WYw5lcb602cXWaatl0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pndqJxRfxT-xzdR0BdB45NCU4WYw5lcb602cXWaatl0.png?width=108&crop=smart&auto=webp&s=395a35cb155992b2c90b52dcba277b2d942387d0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pndqJxRfxT-xzdR0BdB45NCU4WYw5lcb602cXWaatl0.png?width=216&crop=smart&auto=webp&s=2b3f72ee57f38e9552780cc12325a2acdc51c502', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pndqJxRfxT-xzdR0BdB45NCU4WYw5lcb602cXWaatl0.png?width=320&crop=smart&auto=webp&s=1e94cc7f1a7a58122c63e6d69de1e6c7870b8f90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pndqJxRfxT-xzdR0BdB45NCU4WYw5lcb602cXWaatl0.png?width=640&crop=smart&auto=webp&s=49173ab81a4375a2bfa73e3848aa1b90aa7bcf0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pndqJxRfxT-xzdR0BdB45NCU4WYw5lcb602cXWaatl0.png?width=960&crop=smart&auto=webp&s=f11f148cf0738b8f866cba9fa3abf5376da3b7fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pndqJxRfxT-xzdR0BdB45NCU4WYw5lcb602cXWaatl0.png?width=1080&crop=smart&auto=webp&s=e9f613f49190aea9df567de8b8aa4cfc489e3fa8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pndqJxRfxT-xzdR0BdB45NCU4WYw5lcb602cXWaatl0.png?auto=webp&s=4d41449d31152b029e79d4d291e446e53d85399a', 'width': 1200}, 'variants': {}}]} | |
I vibecoded an open source Grok Heavy emulator [CODE] | 1 | [removed] | 2025-10-09T19:12:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o2eykv/i_vibecoded_an_open_source_grok_heavy_emulator/ | Objective-Good310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2eykv | false | null | t3_1o2eykv | /r/LocalLLaMA/comments/1o2eykv/i_vibecoded_an_open_source_grok_heavy_emulator/ | true | false | spoiler | 1 | null |
Local LLMs vs. cloud for coding | 17 | Hello,
I admit that I had no idea how popular and capable local LLMs (Large Language Models) are. I thought they were mainly for researchers, students, and enthusiasts who like to learn and tinker.
I'm curious how local models compare to cloud solutions like ChatGPT, Gemini, Claude, and others, especially in terms of coding. Because many videos and websites tend to exaggerate the reality, I decided to ask you directly.
Is there a huge difference, or does it depend a lot on language and scenario? Cloud LLMs can search for current information on the internet. Can local models do that too, and how well? Do cloud LLM solutions have additional layers that local models don't have?
I'm primarily trying to figure out if it makes sense to invest time and money in a local solution as a replacement for the cloud. Privacy is fairly important for me, but if the output is mediocre, it's not worth it.
How much do I need to invest in terms of hardware to at least get close to the performance of cloud solutions? I currently have an R9 9950X3D, RTX 4070, and 64 GB DDR5 RAM. I assume the GPU (RTX 4070) will be the biggest bottleneck. I saw a tip for a cheaper option of 2x Tesla P40 with a total of 48 GB VRAM. Is that a good choice? Will RAM also be a limiting factor?
Thank you!
# TL;DR:
* interested in local LLMs due to privacy
* coding capabilities vs cloud LLMs (ChatGPT, Gemini ...)
* min. hardware to replace cloud (currently R9 9950X3D, RTX 4070, and 64 GB RAM) | 2025-10-09T18:52:46 | https://www.reddit.com/r/LocalLLaMA/comments/1o2efiq/local_llms_vs_cloud_for_coding/ | tominicz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2efiq | false | null | t3_1o2efiq | /r/LocalLLaMA/comments/1o2efiq/local_llms_vs_cloud_for_coding/ | false | false | self | 17 | null |
Deepmind notebook on how to finetune Gemma 3 270m | 43 | Deepmind just dropped a handy little colab on fine-tuning gemma3-270m for emoji generation. It's nothing SOTA, but it's a great notebook for learning TRL and fine-tuning.
This is a super low-resource task: a 270M-parameter model, QLoRA, and short sequences, so it's a great one to try out locally or on Colab. It's also a nice one to deploy in a JS app with transformers.js.
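The training loop itself boils down to a short TRL SFT run; here is a condensed sketch (the dataset id and hyperparameters are placeholders, not the notebook's exact values):

```python
# Condensed sketch of a LoRA SFT run with TRL on Gemma 3 270M.
# The dataset id and hyperparameters are placeholders, not the notebook's exact values.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("your-username/text-to-emoji", split="train")  # hypothetical dataset

trainer = SFTTrainer(
    model="google/gemma-3-270m",
    train_dataset=dataset,
    peft_config=LoraConfig(r=8, lora_alpha=16, target_modules="all-linear"),
    args=SFTConfig(
        output_dir="gemma-3-270m-emoji",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
    ),
)
trainer.train()
```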
fine tuning colab: [https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Demos/Emoji-Gemma-on-Web/resources/Fine\_tune\_Gemma\_3\_270M\_for\_emoji\_generation.ipynb](https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb) | 2025-10-09T18:43:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o2e76z/deepmind_notebook_on_how_to_finetune_gemma_3_270m/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2e76z | false | null | t3_1o2e76z | /r/LocalLLaMA/comments/1o2e76z/deepmind_notebook_on_how_to_finetune_gemma_3_270m/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]} |
Deepmind notebook on how to finetune Gemma 3 270m | 1 | [removed] | 2025-10-09T18:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1o2e6fz/deepmind_notebook_on_how_to_finetune_gemma_3_270m/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2e6fz | false | null | t3_1o2e6fz | /r/LocalLLaMA/comments/1o2e6fz/deepmind_notebook_on_how_to_finetune_gemma_3_270m/ | false | false | self | 1 | null |
looking for a guide on how to setup localLLm cluster using 3 x Nvidia Jetson Orin Nano Super. | 1 | So I was lucky enough to get a hold of 3 x Nvidia Jetson Orin Nano Super dev kits at a good price.
I got them all updated and installed with NVMe SSDs and they are all running great on their own, tested with Ollama and it looks good.
So no problem there. L4T 36.4.7 (*just having issues with jtop not detecting JetPack currently, but that is a known issue*).
Now comes the part where I think I need some help.
I have been looking for guides to set them up as a cluster, but all the guides I have found are either old'ish when it comes to how fast the development in the field goes, or just guides that I have not been able to figure out.
So if anyone has a great guide on how to set them up as a cluster, or just a good pointer on what type of cluster I should set them up as, please let me know.
| 2025-10-09T18:27:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o2drdu/looking_for_a_guide_on_how_to_setup_localllm/ | Von_plaf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2drdu | false | null | t3_1o2drdu | /r/LocalLLaMA/comments/1o2drdu/looking_for_a_guide_on_how_to_setup_localllm/ | false | false | self | 1 | null |
Local LLM on old HP Z4 G4? | 2 | I need your opinion.
I could get an older HP Z4 G4 workstation for a case of beer. Unfortunately, the workstation only has a Xeon W-2123 CPU but 256 GB DDR4 RAM 2666MHz. The idea was to install one or two used RTX 5060 TI 16Gb cards and use the workstation as a local LLM server. The goal is not to use giant models extremely fast, but to run Gemma 3 27b or GPT-OSS 20b with about 10-20 tokens per second, for example.
Do you think that would be possible, or are there better builds in terms of price-performance ratio? For me, a case of beer and €400 for a 5060 Ti sounds pretty good right now.
Any ideas, opinions, tips?
Further information:
Mainboard 81c5 MVB
Windows Pro
Nvidia Quatro P2000 | 2025-10-09T18:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/1o2dnle/local_llm_on_old_hp_z4_g4/ | Pythagoras1600 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2dnle | false | null | t3_1o2dnle | /r/LocalLLaMA/comments/1o2dnle/local_llm_on_old_hp_z4_g4/ | false | false | self | 2 | null |
anyone noticed ollama embeddings are extremely slow? | 1 | trying to use mxbai-embed-large to embed 27k custom xml testSegments using langchain4j, but it's extremely slow untill it times out. there seems to be a message in the logs documented here [https://github.com/ollama/ollama/issues/12381](https://github.com/ollama/ollama/issues/12381) but i don't know if it's a bug or something else
i'm trying use llama.cpp with ChristianAzinn/mxbai-embed-large-v1-gguf:Q8\_0 i'm noticing a massive CPU usage even though i have 5090 , but i don't know if it's just llama.cpp doing batches
i also noticed that llama.cpp tends to fail if i send in all 27k textsegments with GGML\_ASSERT(i01 >= 0 && i01 < ne01) failed
but if i sent less like 25k it works. | 2025-10-09T18:23:03 | https://www.reddit.com/r/LocalLLaMA/comments/1o2dnfc/anyone_noticed_ollama_embeddings_are_extremely/ | emaayan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2dnfc | false | null | t3_1o2dnfc | /r/LocalLLaMA/comments/1o2dnfc/anyone_noticed_ollama_embeddings_are_extremely/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'k-NpHS_W2d70fEsN0J3HgZyLLj35n7pzAIKEe16LrYY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k-NpHS_W2d70fEsN0J3HgZyLLj35n7pzAIKEe16LrYY.png?width=108&crop=smart&auto=webp&s=2c5686c72ff808ca8f3650a3c74655be1da449f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/k-NpHS_W2d70fEsN0J3HgZyLLj35n7pzAIKEe16LrYY.png?width=216&crop=smart&auto=webp&s=6e04c53dc85f90c8cf73145ae243aa1ee07a6b3e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/k-NpHS_W2d70fEsN0J3HgZyLLj35n7pzAIKEe16LrYY.png?width=320&crop=smart&auto=webp&s=bc14fb33c4dc2064c7ad8646e452520fab87bcef', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/k-NpHS_W2d70fEsN0J3HgZyLLj35n7pzAIKEe16LrYY.png?width=640&crop=smart&auto=webp&s=b2cd421d418af4107693ce679823726b7ee53056', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/k-NpHS_W2d70fEsN0J3HgZyLLj35n7pzAIKEe16LrYY.png?width=960&crop=smart&auto=webp&s=7de1edc6ed4366dc7c38ce6eb471204d06fa04c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/k-NpHS_W2d70fEsN0J3HgZyLLj35n7pzAIKEe16LrYY.png?width=1080&crop=smart&auto=webp&s=84e0ad0b74780835bc40e75fc03a0e8eadfebffc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/k-NpHS_W2d70fEsN0J3HgZyLLj35n7pzAIKEe16LrYY.png?auto=webp&s=54a6108add9e76d711d95194285617a89d76034f', 'width': 1200}, 'variants': {}}]} |
Less is More: Recursive Reasoning with Tiny Networks | 8 | 2025-10-09T18:09:58 | https://arxiv.org/html/2510.04871v1 | Opposite_West8608 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1o2db95 | false | null | t3_1o2db95 | /r/LocalLLaMA/comments/1o2db95/less_is_more_recursive_reasoning_with_tiny/ | false | false | default | 8 | null | |
Fastest Fill-in-the-middle Model for General Text? | 4 | I am only able to find FIM models for coding and not for general text. | 2025-10-09T17:51:54 | https://www.reddit.com/r/LocalLLaMA/comments/1o2cu00/fastest_fillinthemiddle_model_for_general_text/ | lattenjoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2cu00 | false | null | t3_1o2cu00 | /r/LocalLLaMA/comments/1o2cu00/fastest_fillinthemiddle_model_for_general_text/ | false | false | self | 4 | null |
Does quantization need training data and will it lower performance for task outside of training data? | 4 | Does quantization make the model more specialized on certain tasks like benchmarks?
I'm using non English dataset and wonder if quantization could make the model perform even worse in my language than the difference in an English benchmark. | 2025-10-09T17:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/1o2bxq9/does_quantization_need_training_data_and_will_it/ | arstarsta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2bxq9 | false | null | t3_1o2bxq9 | /r/LocalLLaMA/comments/1o2bxq9/does_quantization_need_training_data_and_will_it/ | false | false | self | 4 | null |
What's the difference between different 4bit quantization methods? Does vLLM support any one better? | 2 | There seems to be lots of types like awq, bnb, gguf, gptq, w4a16. Any pros and cons of each type except for gguf support different bits. | 2025-10-09T17:14:34 | https://www.reddit.com/r/LocalLLaMA/comments/1o2bu4o/whats_the_difference_between_different_4bit/ | arstarsta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2bu4o | false | null | t3_1o2bu4o | /r/LocalLLaMA/comments/1o2bu4o/whats_the_difference_between_different_4bit/ | false | false | self | 2 | null |
yanolja/YanoljaNEXT-Rosetta-12B-2510 | 34 | We’ve just uploaded the **next version of YanoljaNEXT-Rosetta-12B**, a translation model that’s been **significantly improved** from the previous release.
🧠 **Available on Hugging Face:**
👉 [YanoljaNEXT-Rosetta-12B-2510](https://huggingface.co/yanolja/YanoljaNEXT-Rosetta-12B-2510)
Below is a summary generated by Claude about the model’s performance 👇
---
## **Key Results for YanoljaNEXT-Rosetta-12B-2510**
### 1. **Average Score on Targeted Languages: 54.45**
* Evaluated on 31 targeted languages (+ English = 32 total)
* Well above the model’s overall average of **44.73** across all 55 languages
### 2. **Ranking on Targeted Languages: #3 out of 8 systems**
**Full Rankings:**
1. DeepL Translate — 55.41
2. GPT-4o — 55.19
3. **YanoljaNEXT-Rosetta-12B-2510 — 54.45** ⭐
4. Google Translate — 54.05
5. OpenAI o1 — 53.39
6. Claude-3.5 — 53.19
7. Microsoft Translator — 53.02
8. Gemini-1.5-Pro — 52.67
🥉 **Only 0.96 points behind the leader!**
### 3. **#1 Rankings: 7 out of 31 languages (22.6%)**
**Top-performing languages:**
* **Danish (da_DK)** — 65.88 (+2.88 vs GPT-4o)
* **Gujarati (gu_IN)** — 51.83 (+2.03 vs Google)
* **Korean (ko_KR)** — 37.10 (+0.10 vs DeepL)
* **Persian (fa_IR)** — 53.95 (+0.95 vs GPT-4o)
* **Romanian (ro_RO)** — 63.24 (+0.44 vs GPT-4o)
* **Tagalog (fil_PH)** — 61.47 (+2.47 vs Google)
* **Vietnamese (vi_VN)** — 56.96 (+2.56 vs GPT-4o)
**Additional Strengths:**
* #2 rankings: 6 languages (French, Greek, Hebrew, Russian, Spanish, Ukrainian)
* #3 rankings: 6 languages (Arabic, Bulgarian, Czech, Hungarian, Italian, Swedish)
---
⚡ Overall, the model shows **strong competitive performance**, especially in **Danish, Korean, and Southeast Asian languages (Vietnamese, Tagalog)** — closing the gap with industry leaders like DeepL and GPT-4o.
---
### **Evaluation Details**
* **Framework & Precision:** Evaluation was conducted using **vLLM** with **BF16 precision**.
* **Data Coverage:** **99.9%** of samples were successfully evaluated, with approximately **0.01%** excluded due to a **repetition issue**.
* **Decoding Settings:** Used **temperature = 0** and **repetition penalty = 1.05** for consistent and deterministic outputs.
* **Metric:** Only **CHRF++** was measured for this evaluation.
* **Dataset:** Evaluation used the **WMT24++ dataset**, which is primarily specialized for **English↔X** translations.
However, the **YanoljaNEXT-Rosetta-12B-2510** model supports **X↔Y translations across all 32 languages**.
* **Additional Note:** **MetricX24** was also tested internally, but the results were excluded since the same scores reported in the **WMT24++ paper** could not be fully reproduced. | 2025-10-09T17:06:16 | https://www.reddit.com/r/LocalLLaMA/comments/1o2bm3z/yanoljayanoljanextrosetta12b2510/ | OldPin8654 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2bm3z | false | null | t3_1o2bm3z | /r/LocalLLaMA/comments/1o2bm3z/yanoljayanoljanextrosetta12b2510/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': '46myQmrkdrTA-Hw5-nEsi1rgfhPvklYDsZYGdeY9qLw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/46myQmrkdrTA-Hw5-nEsi1rgfhPvklYDsZYGdeY9qLw.png?width=108&crop=smart&auto=webp&s=726ddfb42e64b3dcff3a983a5a6aaee31447fa56', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/46myQmrkdrTA-Hw5-nEsi1rgfhPvklYDsZYGdeY9qLw.png?width=216&crop=smart&auto=webp&s=67258410a476d4f771839ef4811d531f15ec1416', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/46myQmrkdrTA-Hw5-nEsi1rgfhPvklYDsZYGdeY9qLw.png?width=320&crop=smart&auto=webp&s=213e8b2eedc4946cdfb3b71d55c867815ff41fc3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/46myQmrkdrTA-Hw5-nEsi1rgfhPvklYDsZYGdeY9qLw.png?width=640&crop=smart&auto=webp&s=075700faeea0300164a56762068f8def6a615357', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/46myQmrkdrTA-Hw5-nEsi1rgfhPvklYDsZYGdeY9qLw.png?width=960&crop=smart&auto=webp&s=3a315a6acaeede5773406f4661f60682dae02c4c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/46myQmrkdrTA-Hw5-nEsi1rgfhPvklYDsZYGdeY9qLw.png?width=1080&crop=smart&auto=webp&s=92cbac858e36fc79ba6627644e92850ba78d3bee', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/46myQmrkdrTA-Hw5-nEsi1rgfhPvklYDsZYGdeY9qLw.png?auto=webp&s=03097f295a2402b229ce6ae323b1a6a2e3c9c2ee', 'width': 1200}, 'variants': {}}]} |
How can CodeBleu be a standard | 1 | Apologies if I've misunderstood the concept, but the applications/samples we test our models on with CodeBLEU (to my knowledge at least) aren't the same across the board. How can two researchers compare the CodeBLEU scores they got on their separate LLMs? I am talking about research papers publishing their CodeBLEU scores.
To summarize: we take an example of our choice, run it through CodeBLEU across many models, and say that ours did better. Papers don't mention these examples, so who is to say they didn't cherry-pick a very specific one that their model happens to perform better on? CodeBLEU doesn't feel fair or standardized.
Or are there standard datasets to be used with CodeBleu for example a set of 100 python problems available as a standard dataset? | 2025-10-09T17:00:06 | https://www.reddit.com/r/LocalLLaMA/comments/1o2bg0p/how_can_codebleu_be_a_standard/ | Minute-Plantain-1213 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2bg0p | false | null | t3_1o2bg0p | /r/LocalLLaMA/comments/1o2bg0p/how_can_codebleu_be_a_standard/ | false | false | self | 1 | null |
Chinny (iOS/MacOS): offline, on-device voice cloning with an optimized Chatterbox model | 9 | Hi folks, I've been experimenting with running voice cloning fully offline. Part of the motivation was that I don't trust those web-based or wrapper AI voice cloning apps that gather user data --- who knows when our information could be sold or used in unexpected ways. So I developed **Chinny,** an iOS(16.6+) / macOS(15.5+) app that runs an optimized Chatterbox model entirely on-device and no network connectivity required!
All models are packed inside the app (about **3.41 GB** total), and it uses around **3 GB of RAM** during inference. It supports **unlimited text input** by splitting it into chunks and combining the outputs into a single audio file.
Currently Chinny only supports English. In my opinion, the multilingual performance of the original Chatterbox model is not strong, and I plan to work on improvements (but only on selected languages).
Chinny is **free and ad-free**, designed to be production-ready while also demonstrating what's possible with optimized on-device inference on Apple hardware. It'll be released soon, and I'd love to hear what kind of features or controls you'd like to see added!
Two demos showcasing basic voice cloning and multi-speaker conversation:
[Voice clone](https://reddit.com/link/1o2b666/video/qcbi3c0284uf1/player)
[Multi-speaker conversation](https://reddit.com/link/1o2b666/video/wvicemp284uf1/player)
| 2025-10-09T16:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/1o2b666/chinny_iosmacos_offline_ondevice_voice_cloning/ | Acceptable-Cycle4645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2b666 | false | null | t3_1o2b666 | /r/LocalLLaMA/comments/1o2b666/chinny_iosmacos_offline_ondevice_voice_cloning/ | false | false | self | 9 | null |
ReasonScape Evaluation: AI21 Jamba Reasoning vs Qwen3 4B vs Qwen3 4B 2507 | 60 | It's an open secret that LLM benchmarks are bullshit. I built [ReasonScape](https://reasonscape.com/) to be different, lets see what it tells us about how AI21's latest drop compared to the high quality 4B we know and love.
My usual disclaimer is that these are all [information processing tasks](https://github.com/the-crypt-keeper/reasonscape/blob/develop/docs/tasks.md) so I make no claims of performance on summarization, creative writing or similar tasks. This evaluation is a counting letters, tracking objects, doing math, following instructions kinda thing.
The second disclaimer is that I am sharing data from my [development branch](https://github.com/the-crypt-keeper/reasonscape/tree/develop) that's not yet been published to the leaderboard or explorer apps - working on it, aiming for this weekend.
Caveats aside lets start with high-level views:
[Overview](https://preview.redd.it/7rrhce1au3uf1.png?width=1349&format=png&auto=webp&s=f4abfa1cbcca3c2e5b4931e8c8492be6bc3d10fe)
In terms of average tokens, this model sits somewhere between the OG and 2507-Thinking. Performance was incredibly weak outside of 2 domains: Cars ([Spatial state tracking](https://github.com/the-crypt-keeper/reasonscape/blob/develop/docs/tasks/cars.md)) and Dates ([Time operations](https://github.com/the-crypt-keeper/reasonscape/blob/develop/docs/tasks/dates.md)).
The reasonscape [methodology](https://github.com/the-crypt-keeper/reasonscape/blob/develop/docs/methodology.md) requires me to run ***a lot*** of tests, but also gives us a way to look deeper inside the performance of each task:
[Task Deep Dive 1: Arithmetic, Boolean, Brackets, Cars, Shuffle, Objects](https://preview.redd.it/z50u525o34uf1.png?width=1920&format=png&auto=webp&s=af5e03a87914f0904ae7d82d2edd2f1cbcb86080)
[Task Deep Dive 2: Dates, Letters, Movie, Sequence, Shapes, Sort](https://preview.redd.it/8c3i9xcq34uf1.png?width=1920&format=png&auto=webp&s=3f78ed06f64910d1dec0c09ac7284a2cd0e85aeb)
The original Qwen3-4B was a really strong model; the 2507 release that split it into two halves was a mixed bag. The resulting Thinking model is quite good, but it does not universally outperform the OG: [Sequence](https://github.com/the-crypt-keeper/reasonscape/blob/develop/docs/tasks/sequence.md) is an example of a task the 2507 regressed on.
Before final thoughts, lets directly compare Jamba to the OG Qwen3-4B across the gamut of tasks:
[Bar Plot: Jamba Reasoning 3B](https://preview.redd.it/lpkrxumi44uf1.png?width=857&format=png&auto=webp&s=37102fa70a4780f987d27ec56a0eefbae349562c)
[Bar Plot: Qwen3-4B OG](https://preview.redd.it/v2n3y3zn44uf1.png?width=854&format=png&auto=webp&s=5c2bf95b75862dde463664a92ad223e961d9891b)
I think it's fair to say that the task performance of Jamba Reasoning 3B leaves much to be desired. [Letters](https://github.com/the-crypt-keeper/reasonscape/blob/develop/docs/tasks/letters.md) is a parametric version of the 'count the r's in strawberry' test, and for a native-thinking model to fail it this hard is pretty embarrassing imo.
The glaring problem with this model is **truncation**. All these evaluations were run at **8K context**, so naively you'd think truncation means the model is just running out of tokens, but this is absolutely not the case - if you look at Dates, for example, all successful responses are ~2K tokens, yet the truncation rate is still a crazy ~10% - the model just loses its mind:
We need to parse the problem: "Given a small set of sentences about a particular date, answer the provided question. Respond only with the final date in MM/DD/DD/YYYY[*six thousand more Y*]
We need to find today's date given that Xavier moved to the city on 04/11/2023, and 299 days have passed since then. So we need to add 299 days to 04/11/2023.\n\nWe need to compute the date.\n\nLet's do step by step.\n\nStart date: April 11, 2023.\n\nAdd 299 days.\n\nWe can break into years, months, days.\n\nFirst, add full years. Since 299 days is less than a year, we can approximate.\n\nBut we need to be precise.\n\nWe can use a date calculator approach.\n\nLet's compute:\n\nApril 11, 2023 + 299 days.\n\nWe can add month by month, but need to account for days.\n\nLet's do it systematically.\n\nWe can use a date addition: 299 days from April 11, 2023.\n\nWe can break into quarters maybe.\n\nBut maybe easier: compute month by month.\n\nWe can start from April 11, 2023.\n\nAdd 30 days to get May 11, 2023.\n\nAdd another 30 days: June 11, 2023.\n\nAdd another 30 days: July 11, 2023.\n\nAdd another 30 days: August 11, 2023.\n\nAdd another 30 days: September 11, 2023.\n\nAdd another 30 days: October 11, 2023.\n\nAdd another 30 days: November 11, 2023.\n\nWait, but we only have 299 days. Let's see.\n\nWe can count the days month by month.\n\nFrom April 11 to next April 11 is 365 days? But we only add 299 days.\n\nLet's do a more precise method.\n\nWe can use the fact that 299 days from April 11, 2023.\n\nWe can compute by adding days month by month.\n\nApril 11, 2023 + 299 days.\n\nWe can break into months:\n\n- April: 30 days (from April 11 to April 30). That's 19 days? Wait, from April 11 to May 11 is 30 days? Let's do step by step.\n\n\n\n\nAlternatively, we can use a date calculator logic.\n\nLet's use a simpler approach: add 299 days to April 11, 2023.\n\nWe can use the fact that 2023 is not a leap year (2023 is not divisible by 4). So February has 28 days.\n\nWe can use a date addition.\n\nBut maybe we can use a known method: add 299 days to April 11.\n\nWe can add 365 days to get next year, but we only need 299.\n\nLet's do step by step:\n\nApril 11, 2023.\n\nAdd 1 day: April 12, 2023.\n\nAdd 298 days: we need to add 298 days.\n\nWe can break 298 days into years, months, days.\n\nBut maybe we can use a simpler approach: find the date by adding 299 days to April 11, 2023.\n\nWe can use a date calculator mental.\n\nLet's add 299 days to April 11, 2023.\n\nApril 11 + 299 days.\n\nWe can compute:\n\n- April has 30 days, so adding 30 days would get to May 0? 
Wait.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThis is too messy.\n\nLet'solve the date calculation.\nWe need to compute the current date:\nThe task: Given a small set of sentences about a particular date.\n\nThe task:\nPlease write a function that is to\n<think>\nWe need to write a function that returns the\n\nWe need to\n</think>\n\nWe need to\n\n\n\n<think>\n</think>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n</think>\nWe need to write a small set of the problem.\n\n\n<think>\nWe need to write a function.\n\nWe need to think\n\n<think>\n\n\n\n\n\nWe need to think\n\n\n</think>\n</think>\n\n\n<think>\nWe need to start a new task:\n\n\nWe need to start a new task.\n\n\n<think>\n\nWe need to\n\nWe need to start a new start\n\n\n\n\n</think>\n\nWe need to start new start\n\n\n\n\n\n\n<think>\n\nWe need to start a new start\n\n\n\n\n\n\n</think>\n\nWe need to start start to start start start [start repeats 4000 times]
I ran all models with **{"temperature": 0.6, "top_p": 0.95, "top_k": 20, "min_p": 0}**, which is my standard sampler for reasoning models; perhaps there is a different configuration that works better for Jamba Reasoning specifically.
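For anyone who wants to reproduce the sampler setup, a minimal sketch against an OpenAI-compatible local endpoint (the URL and model name are placeholders; top_k and min_p are non-standard fields that llama.cpp-style servers accept but strict OpenAI backends will ignore):

```python
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",   # placeholder local endpoint
    json={
        "model": "jamba-reasoning-3b",              # placeholder model name
        "messages": [{"role": "user", "content": "How many r's are in strawberry?"}],
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,    # extension field, not part of the strict OpenAI schema
        "min_p": 0.0,   # extension field, not part of the strict OpenAI schema
        "max_tokens": 2048,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```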
https://preview.redd.it/itf6y0k674uf1.png?width=1450&format=png&auto=webp&s=972e3d53f7eaa361101ab32a00c11bb257fedd62
In closing, **I don't believe this model is comparable to Qwen3-4B on practical tasks**. It's far worse at basically all tasks, and has a universal truncation problem.
Thanks for reading and keep it local! <3 | 2025-10-09T16:45:05 | https://www.reddit.com/r/LocalLLaMA/comments/1o2b1yo/reasonscape_evaluation_ai21_jamba_reasoning_vs/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2b1yo | false | null | t3_1o2b1yo | /r/LocalLLaMA/comments/1o2b1yo/reasonscape_evaluation_ai21_jamba_reasoning_vs/ | false | false | 60 | null | |
Starter build for running local LLMs | 5 | I'm helping a friend with his first build for running local LLMs, for learning and trying things out. Eventually he plan on doing some projects for work.
Here's my thoughts on a good build that isn't breaking the bank and can be upgraded over time.
CPU: Go with the AMD AM5 socket. Epyc and Threadripper are too expensive. Any suggestions? A 7700? Only 2x CCD though. Going with AM5 and AMD for price/performance and upgradability over time. Also, memory throughput on AMD is generally better than on Intel.
MB: Some kind of gamer motherboard, focus on PCIe 5 and physical space to take 2 GPUs, preferably 2x16 lane PCIe slots, but should be fine with 1x16 and 1x8 with gen 5. 4 memory slots.
Memory: Preferably 2x32 GB in a kit, can be 2x16 if need to cut costs. DDR5 5200, probably. Also depends on the speed of the CPUs memory throughput.
GPU: Not going with a second-hand 3090, but rather a new Nvidia 5060 Ti 16GB. It has the old power connector and doesn't draw crazy amounts of electricity, and it's reasonably priced for a GPU with 16GB of VRAM. The 5070 Ti 16GB is almost double the price here and twice the power draw; while it's possibly a bit faster, I'd rather plan for a second 5060 Ti 16GB later (2x16 GB) or a later Super version. I'm also betting on MXFP4 / NVFP4 here. (The comparable AMD RX 9000-series card isn't price-competitive with the 5060 Ti 16GB, lacks hardware support for anything smaller than BF16, and its software support is too messy for a starter build.)
PSU: At least 1000W, even if not needed right now, an oversized PSU is more power efficient at lower load and will allow adding a second GPU later.
Idea is to go for a custom gaming desktop with above specs as much as possible and be ready to place an order when Black Friday / Cyber Monday hits.
What do you think? Am I missing something important here?
| 2025-10-09T16:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o2axch/starter_build_for_running_local_llms/ | UncleRedz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2axch | false | null | t3_1o2axch | /r/LocalLLaMA/comments/1o2axch/starter_build_for_running_local_llms/ | false | false | self | 5 | null |
Comparing benchmarks | 0 | Found this, interesting and apparently free https://artificialanalysis.ai. Yes, I know benchmarks are suspect for good reason but we still look at them. I have no affiliation with the website. | 2025-10-09T16:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/1o2anpe/comparing_benchmarks/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2anpe | false | null | t3_1o2anpe | /r/LocalLLaMA/comments/1o2anpe/comparing_benchmarks/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=216&crop=smart&auto=webp&s=b97954336b79c1390848d0e44fa056a85de68672', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=320&crop=smart&auto=webp&s=65f53b80ab9674ee645013e3e8eeac4f953d657e', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=640&crop=smart&auto=webp&s=47f397e4a22ed5ec7e82aad070eb446319603abc', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=960&crop=smart&auto=webp&s=0f4359d47b78f5c1aa35de8804dbe36a749fc11a', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=1080&crop=smart&auto=webp&s=62eb4b7216f41af6600fc4df79cfa67425c19442', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?auto=webp&s=efc17c9f241b4403d22cbacfe5d71900ee1cf85a', 'width': 1260}, 'variants': {}}]} |
Is it possible to download models independently? | 1 | I'm new to local llms and would like to know if I'm able to download models through the browser/`wget`/`curl` so that I can back them up locally. Downloading them takes ages and if I mess something up having them backed up to an external drive would be really convenient. | 2025-10-09T16:26:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o2ajxq/is_it_possible_to_download_models_independently/ | VegetableJudgment971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2ajxq | false | null | t3_1o2ajxq | /r/LocalLLaMA/comments/1o2ajxq/is_it_possible_to_download_models_independently/ | false | false | self | 1 | null |
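One way to do this is with the huggingface_hub Python package, sketched below (repo ID and target path are placeholders); the resulting folder can be copied to an external drive and re-used later:

```python
from huggingface_hub import snapshot_download

# Downloads (and can resume) every file of a model repo into a plain folder you control,
# instead of the hidden ~/.cache/huggingface directory.
snapshot_download(
    repo_id="Qwen/Qwen2.5-7B-Instruct",        # placeholder: any model repo
    local_dir="/mnt/backup/models/qwen2.5-7b",  # e.g. a path on your external drive
)
```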
AI optimization | 5 | With the continuous improvement in optimization and hardware, how long do you anticipate it will take before large-scale models (over 100 billion parameters) become more accessible to the general public? | 2025-10-09T15:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o29ekl/ai_optimization/ | a_normal_user1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o29ekl | false | null | t3_1o29ekl | /r/LocalLLaMA/comments/1o29ekl/ai_optimization/ | false | false | self | 5 | null |
test123 | 1 | [deleted] | 2025-10-09T15:37:50 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1o298v8 | false | null | t3_1o298v8 | /r/LocalLLaMA/comments/1o298v8/test123/ | false | false | default | 1 | null | ||
Write prompts in your native language. My one-press tool translates them to English instantly & offline (supports 99+ languages) | 0 | Hey everyone
You know that feeling? You can read English perfectly, but trying to write a prompt from scratch sometimes is a real pain. It totally breaks the creative flow and can ruin a good RP.
So I made this.
It's a simple tool: you write in your native language (99+ supported), press one key (F9), and it instantly translates the whole text field to English, right in place.
The best part? It's 100% offline. Your prompts never leave your PC. This makes it super fast (no lag) and perfect for LM-Studio or something else.
Hope it helps some of you out! It's open-source, would love to hear what you think.
GitHub:
[https://github.com/ThetaCursed/NativePrompt](https://github.com/ThetaCursed/NativePrompt) | 2025-10-09T15:33:33 | ThetaCursed | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o294t2 | false | null | t3_1o294t2 | /r/LocalLLaMA/comments/1o294t2/write_prompts_in_your_native_language_my_onepress/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'aymd2i8zu3uf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=108&crop=smart&format=png8&s=ae33134142a3edb5fcfea0f9848b7401aa329bec', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=216&crop=smart&format=png8&s=1519472c6f91a834c73927bbddddbacc219c04f6', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=320&crop=smart&format=png8&s=eb9a90c5eac4d781ddbbbff1011b14b0abc7f3b2', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=640&crop=smart&format=png8&s=1e5cbbe99942f8bd772cf70ed9657ec42a8a2b82', 'width': 640}], 'source': {'height': 450, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?format=png8&s=550e60418b1dd278a4c32d425013567f9a0179d5', 'width': 800}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=108&crop=smart&s=de49d14bc879c2323ef93a4b06186c4cf0994e24', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=216&crop=smart&s=ff7228e7aeb04db1e7535e1b8a760062c5be4a5c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=320&crop=smart&s=b0b0257ed5a10caca5ab1c9b81a037f918cc0770', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=640&crop=smart&s=70746ab4ae79eb0be780be4907ff569fd1adde0d', 'width': 640}], 'source': {'height': 450, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?s=0ce7107be02db6c46fa8f9258bf63894c6bd6ce5', 'width': 800}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=108&format=mp4&s=c21533b8571270725991f33704fbcad9d7241adb', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=216&format=mp4&s=183abe0502a0e59412d26701719211326aa3ade4', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=320&format=mp4&s=c9fa48107873b04c17fd1a2aab9b783e8a1c17a8', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?width=640&format=mp4&s=29fc4b7e79acb0331983ba39b0c628a0d32d233c', 'width': 640}], 'source': {'height': 450, 'url': 'https://preview.redd.it/aymd2i8zu3uf1.gif?format=mp4&s=9b8dc91c4beb8df35da8cb1ac77ca6e1d15cff38', 'width': 800}}}}]} | |
Write prompts in your native language. My one-press tool translates them to English instantly & offline (supports 99+ languages) | 0 | ERROR: type should be string, got "https://i.redd.it/hg8xt275f3uf1.gif\n\nHey everyone \n \nYou know that feeling? You can read English perfectly, but trying to write a prompt from scratch sometimes is a real pain. It totally breaks the creative flow and can ruin a good RP. \n \nSo I made this. \nIt's a simple tool: you write in your native language (99+ supported), press one key (F9), and it instantly translates the whole text field to English, right in place. \n \nThe best part? It's 100% offline. Your prompts never leave your PC. This makes it super fast (no lag) and perfect for LM-Studio or something else. \n \nHope it helps some of you out! It's open-source, would love to hear what you think.\n\nGitHub: \n[https://github.com/ThetaCursed/NativePrompt](https://github.com/ThetaCursed/NativePrompt) \n" | 2025-10-09T15:25:24 | https://www.reddit.com/r/LocalLLaMA/comments/1o28wzo/write_prompts_in_your_native_language_my_onepress/ | ThetaCursed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o28wzo | false | null | t3_1o28wzo | /r/LocalLLaMA/comments/1o28wzo/write_prompts_in_your_native_language_my_onepress/ | false | false | 0 | null | |
Feedback on streaming live meeting transcripts into any AI Chat Interface | 2 | Hey guys,
I'm prototyping a small tool/MCP server that streams a live meeting transcript into the AI chat interface you already use. During the call you could ask it things like “Summarize the last 10 min", “Pull action items so far", "Fact‑check what was just said” or "Research the topic we just discussed". This would essentially turn it into a real‑time meeting assistant. What would this solve? The need to copy paste the context from the meeting into the chat and the transcript graveyards in third-party applications you never open.
Before I invest more time into it, I'd love some honest feedback: Would you actually find this useful in your workflow or do you think this is a “cool but unnecessary” kind of tool? Just trying to validate if this solves a real pain or if it’s just me nerding out. 😅 | 2025-10-09T15:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/1o28ta6/feedback_on_streaming_live_meeting_transcripts/ | DerErzfeind61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o28ta6 | false | null | t3_1o28ta6 | /r/LocalLLaMA/comments/1o28ta6/feedback_on_streaming_live_meeting_transcripts/ | false | false | self | 2 | null |
If I buy a GPU, will the MOE model inference speed improve with partial offload? | 8 | Recently, what I've read, especially about MOE models, has confused me a lot, and I haven't been able to understand if getting an external GPU would be beneficial or not. I understand that even if I offload 99% of parameters in dense models, there will be a significant performance drop. And even with MOE models It's clearly evident that I won't be able to load the entire model into GPU memory. But only offloading active parameters and context while keeping performance as high as possible sounds reasonable. I am mainly aiming for improving prompt processing using models like GLM Air and gpt-oss-120b. I am quite ok with min. 10 tk/s generation speed.
Is it possible for me to achieve a significant performance improvement if I acquire an 16gb GPU like 5060TI or 9060XT?
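For context, this is roughly what "offload only the experts" looks like with llama.cpp; the --n-cpu-moe flag only exists in fairly recent builds (older ones use an --override-tensor regex instead), so treat this as a hedged sketch rather than exact syntax:

```python
import subprocess

# Dense/attention weights and the KV cache go to the 16GB GPU, while MoE expert tensors
# stay in system RAM. Filenames and layer counts below are placeholders.
subprocess.run([
    "llama-server",
    "-m", "gpt-oss-120b-Q4_K_M.gguf",  # placeholder GGUF filename
    "--n-gpu-layers", "999",           # offload every layer that fits
    "--n-cpu-moe", "30",               # keep this many layers' experts on the CPU
    "--ctx-size", "16384",
])
```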
Currently, the benchmark results for gpt-oss-20b and gpt-oss-120b are as follows with AMD 8500G and 96 GB 5600 MHz DDR5:
https://preview.redd.it/qieiaphqq3uf1.png?width=2164&format=png&auto=webp&s=f9f7e4d636a46c45ceb5e27afd35169bf3c545ae
https://preview.redd.it/meh3w5eir3uf1.png?width=2610&format=png&auto=webp&s=9aaf83c9ae98973e0807ee49fb40e7b8924ea5dd
| 2025-10-09T15:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o28soy/if_i_buy_a_gpu_will_the_moe_model_inference_speed/ | gkon7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o28soy | false | null | t3_1o28soy | /r/LocalLLaMA/comments/1o28soy/if_i_buy_a_gpu_will_the_moe_model_inference_speed/ | false | false | 8 | null | |
Document Processing for RAG question and answering, and automatic processing of incoming with Business Metadata | 2 | I am in the process of starting to setup RAG on my companies documents, mainly acknowledgments, invoices and purchase orders.
At the moment I am running all the PDFs exported from the PST archive of a mailbox through MinerU2.5-2509-1.2B, Docling (accurate mode) and PyMuPDF, then combining the contents of all three into a single Markdown file along with email metadata following the RFC 5322 standard.
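As a rough illustration of that combination step (the function and field names here are placeholders, not the poster's actual code), the idea is to prepend an RFC 5322-style header and concatenate the three parser outputs into one Markdown file:

```python
from pathlib import Path

def combine_outputs(doc_name: str, email_meta: dict, mineru_md: str, docling_md: str, pymupdf_md: str) -> str:
    # RFC 5322-style header block so the email context travels with the document text.
    header = "\n".join(f"{key}: {value}" for key, value in email_meta.items())
    sections = [("MinerU", mineru_md), ("Docling (accurate)", docling_md), ("PyMuPDF", pymupdf_md)]
    body = "\n\n".join(f"## {name} extraction\n\n{text}" for name, text in sections)
    return f"{header}\n\n# {doc_name}\n\n{body}\n"

combined = combine_outputs(
    "invoice_1042",
    {"From": "supplier@example.com", "Date": "Thu, 09 Oct 2025 10:00:00 +0000", "Subject": "Invoice 1042"},
    "...", "...", "...",
)
Path("invoice_1042.md").write_text(combined, encoding="utf-8")
```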
Then I plan to get Qwen2.5-VL-7B-Instruct to process images of the PDFs alongside the compiled Markdown for character accuracy, and then generate a JSON for that document with all the metadata and document contents, built from the vision pass and the MD files so OCR mistakes can be corrected.
Then I will feed the generated JSON into GPT-OSS-20B, which calls MCP tools to look at a SQL report of all the orders so it can link supplier names and the original sales order and purchase order to the JSON, then enrich the JSON so I have a fully tagged JSON available. I will also keep the PDFs in a folder so that, if asked, the LLM can show the original document.
This is a solution I just sort of came up with and I would be interested in what you think or if you think your approach is better then I would love to hear why! | 2025-10-09T15:19:07 | https://www.reddit.com/r/LocalLLaMA/comments/1o28qwf/document_processing_for_rag_question_and/ | Unique_Marsupial_556 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o28qwf | false | null | t3_1o28qwf | /r/LocalLLaMA/comments/1o28qwf/document_processing_for_rag_question_and/ | false | false | self | 2 | null |
ZentithLLM — Fully Offline, Privacy-First LLM for Android Devices | 8 | Hey r/LocalLLaMA community!
I’ve been exploring offline AI models on Android and noticed a big gap: most AI assistants either require constant internet or send data to cloud servers. As someone who values privacy and local control, I decided to build **ZentithLLM**, a fully offline AI assistant that runs entirely on-device.
# Key Features:
**🧠 On-Device LLM**
ZentithLLM uses an advanced large language model optimized for Android devices, delivering context-aware responses across tasks — from drafting notes to summarizing text — all **locally**.
**🔒 100% Offline & Private**
No internet connection required. Your prompts and data never leave your device. No cloud storage, no accounts, no tracking.
**📊 Optional Anonymized Telemetry**
For performance improvements only — completely anonymous and never includes personal info.
**📴 Works Anywhere**
Even in airplane mode or areas with poor connectivity, ZentithLLM continues to function seamlessly.
**🛠 Developer-Friendly / Open Discussion**
I’m keen to get feedback from the community on:
* Optimizing on-device LLM performance for Android
* Potential model compression or quantization techniques
* Ideas for privacy-preserving AI features
This is a **solo project**, and I’m excited to see what the LocalLLaMA community thinks. Would love to hear your suggestions, technical feedback, or feature requests!
**Play Store** [https://play.google.com/store/apps/details?id=in.nishantapps.zentithllmai](https://play.google.com/store/apps/details?id=in.nishantapps.zentithllmai) | 2025-10-09T15:18:50 | https://www.reddit.com/r/LocalLLaMA/comments/1o28qmw/zentithllm_fully_offline_privacyfirst_llm_for/ | Quiet-Baker8432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o28qmw | false | null | t3_1o28qmw | /r/LocalLLaMA/comments/1o28qmw/zentithllm_fully_offline_privacyfirst_llm_for/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'hM5SKSbDT8qcg8SjoUIRpNzQW65bvcwuMtq4SFvpc30', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hM5SKSbDT8qcg8SjoUIRpNzQW65bvcwuMtq4SFvpc30.png?width=108&crop=smart&auto=webp&s=30366fc6fa6d9d7e1f829ad663b1a2c2b8135c5a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/hM5SKSbDT8qcg8SjoUIRpNzQW65bvcwuMtq4SFvpc30.png?width=216&crop=smart&auto=webp&s=9d1608e8e6f437ba01e59fc0f3e6f92a46b34cc9', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/hM5SKSbDT8qcg8SjoUIRpNzQW65bvcwuMtq4SFvpc30.png?width=320&crop=smart&auto=webp&s=f9447084bcc7f31f39de1fb4047033f90b8b0c63', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/hM5SKSbDT8qcg8SjoUIRpNzQW65bvcwuMtq4SFvpc30.png?auto=webp&s=c0af1ecc8d0101b28d4e7f3bddda4e3671e1d6ae', 'width': 512}, 'variants': {}}]} |
Moondream3 and Salesforce GTA-1 for UI grounding in computer-use agents | 18 | Moondream3 and Salesforce GTA-1 for UI grounding in computer-use agents
The numbers on ScreenSpot-v2 benchmark:
GTA-1 leads in accuracy (96% vs 84%), but Moondream3 is 2x faster (1.04s vs 1.97s avg).
The median time gap is even bigger: 0.78s vs 1.96s - that's a 2.5x speedup.
GitHub : https://github.com/trycua/cua
Run the benchmark yourself: https://docs.trycua.com/docs/agent-sdk/benchmarks/screenspot-v2
| 2025-10-09T14:48:59 | https://v.redd.it/0odmsxg1n3uf1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o27xzz | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/0odmsxg1n3uf1/DASHPlaylist.mpd?a=1762613353%2CNTkzNTBmM2VkZGIwZmYxM2I3M2NjNTg1NmZhMmI3NDk1MGM1ZjA1OTc5Njc4YmI3N2M2NjhmZDhmZGE2ODcxYQ%3D%3D&v=1&f=sd', 'duration': 80, 'fallback_url': 'https://v.redd.it/0odmsxg1n3uf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/0odmsxg1n3uf1/HLSPlaylist.m3u8?a=1762613353%2CODFmYjMwYjZlNGJjYjljNTU1MWUxMTAzOWUzNjRlNTE2ZjhkMzQxN2VjYzk5M2MwNDI4YTdkMzVmMGEzZmNiYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0odmsxg1n3uf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1o27xzz | /r/LocalLLaMA/comments/1o27xzz/moondream3_and_salesforce_gta1_for_ui_grounding/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'MXhzZGNjNzFuM3VmMeWLBg1mAXwAigO5PzLkc4zOrANcV84YU3-0ubuxy7cx', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MXhzZGNjNzFuM3VmMeWLBg1mAXwAigO5PzLkc4zOrANcV84YU3-0ubuxy7cx.png?width=108&crop=smart&format=pjpg&auto=webp&s=c5b7b3d82dd23aa3d68aba5fe0451f976778574c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MXhzZGNjNzFuM3VmMeWLBg1mAXwAigO5PzLkc4zOrANcV84YU3-0ubuxy7cx.png?width=216&crop=smart&format=pjpg&auto=webp&s=11e74d24b5f4a1b56fb7c5762375000d15ff1a86', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MXhzZGNjNzFuM3VmMeWLBg1mAXwAigO5PzLkc4zOrANcV84YU3-0ubuxy7cx.png?width=320&crop=smart&format=pjpg&auto=webp&s=02eed54ddfd431cd5ac875f384120f60ef697411', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MXhzZGNjNzFuM3VmMeWLBg1mAXwAigO5PzLkc4zOrANcV84YU3-0ubuxy7cx.png?width=640&crop=smart&format=pjpg&auto=webp&s=698653b284b42b2efb6f72c0f611b653493da503', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MXhzZGNjNzFuM3VmMeWLBg1mAXwAigO5PzLkc4zOrANcV84YU3-0ubuxy7cx.png?width=960&crop=smart&format=pjpg&auto=webp&s=28d6cdb9368d9ebebd5acd170fc241d7273593a0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MXhzZGNjNzFuM3VmMeWLBg1mAXwAigO5PzLkc4zOrANcV84YU3-0ubuxy7cx.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9bf1f50abbaf53385bac1fb8dbaa4bbc500949b3', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MXhzZGNjNzFuM3VmMeWLBg1mAXwAigO5PzLkc4zOrANcV84YU3-0ubuxy7cx.png?format=pjpg&auto=webp&s=3e55a5adee15b53e98876a758768a07e37ab75ec', 'width': 1280}, 'variants': {}}]} | |
Introducing Playable1-GGUF, by far the world's best open-source 7B model for vibe coding retro arcade games! | 188 | I've taken this idea too far, clearly, but the results are fun! Playable1-GGUF is a q4\_k\_m Qwen2.5-Coder-7B-Instruct fine-tuned on 52,809 lines of Python pygame scripts.
Over the past week I've dialed in the LORA parameters, added games, ironed the bugs out of the dataset, and open-sourced everything.
No q4 model, 8B or smaller, comes anywhere close to this level of performance. Most struggle to make a few basic games and can't do many creative twists on them.
Playable1-GGUF features:
* Oneshot code Galaga, Space Invaders, Breakout, Flappy Bird, Snake, and Pong.
* Modify existing games, like "give the invaders rainbow colors", "make the bullets explode", etc.
* Oneshot code games with a twist, like "pong but the paddles can move in 2d."
* Debug a variety of simple Python errors to fix broken games.
* No RAG or templates needed in the prompts!
I also built an app, Infinity Arcade, that provides the right prompts and a nice UI for demonstrating the features of the model.
Assets (all MIT license):
* Quantized GGUF: [https://huggingface.co/playable/Playable1-GGUF](https://huggingface.co/playable/Playable1-GGUF)
* Full-precision SafeTensors: [playable/Playable1 · Hugging Face](https://huggingface.co/playable/Playable1)
* Dataset: [https://github.com/lemonade-sdk/playable-data/tree/main](https://github.com/lemonade-sdk/playable-data/tree/main)
* Infinity Arcade app: [https://github.com/lemonade-sdk/infinity-arcade](https://github.com/lemonade-sdk/infinity-arcade)
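If you'd rather script the model than use the app, a minimal sketch with llama-cpp-python is below; the GGUF filename pattern is an assumption, so check the repo for the exact file name:

```python
from llama_cpp import Llama

# Pulls the quantized file straight from the Hugging Face repo and runs it locally.
llm = Llama.from_pretrained(
    repo_id="playable/Playable1-GGUF",
    filename="*q4_k_m.gguf",   # assumed glob; adjust to the actual file in the repo
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a pygame Snake game where the food teleports every 5 seconds."}],
    max_tokens=2048,
)
print(out["choices"][0]["message"]["content"])
```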
Next steps (if there's interest):
* Full SFT on MI 300X GPUs (instead of LORA)
* Prompting guide for the model
* e2e tutorial on how to make this kind of thing
* More games (a DDR-style rhythm game is probably next)
Posting here to get people's feedback. Take it for a spin and let me know what you think! | 2025-10-09T14:48:44 | https://v.redd.it/hscmojn4m3uf1 | jfowers_amd | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o27xsj | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hscmojn4m3uf1/DASHPlaylist.mpd?a=1762613338%2CMDQ5ZjFlZjNhYThkODkzNzRkOWRjMmQ1ZTQ2YzU5ZjY2MDhkNWFjZjEzYjg3NzM5NDU4Yjc2MzY3MWFmMjZmNA%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/hscmojn4m3uf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1350, 'hls_url': 'https://v.redd.it/hscmojn4m3uf1/HLSPlaylist.m3u8?a=1762613338%2CNzEyMDhkMTIwYWYwNmQxY2Q1YWRhODJiOTFhMGNiZDJhZjgwMTVjYWJiMzFiMTYwMGMzYWU5ZDg0NWI0OGQyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hscmojn4m3uf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1o27xsj | /r/LocalLLaMA/comments/1o27xsj/introducing_playable1gguf_by_far_the_worlds_best/ | false | false | 188 | {'enabled': False, 'images': [{'id': 'amdjandqbjRtM3VmMYQO32V0xtyx5agXCcjKDs7UavRodqRFn_xjnlOv-a1T', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/amdjandqbjRtM3VmMYQO32V0xtyx5agXCcjKDs7UavRodqRFn_xjnlOv-a1T.png?width=108&crop=smart&format=pjpg&auto=webp&s=210438ca5a74713025bc2987068f6876e5425438', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/amdjandqbjRtM3VmMYQO32V0xtyx5agXCcjKDs7UavRodqRFn_xjnlOv-a1T.png?width=216&crop=smart&format=pjpg&auto=webp&s=4cd0eacda435bbfea2d740118d286ed75992e046', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/amdjandqbjRtM3VmMYQO32V0xtyx5agXCcjKDs7UavRodqRFn_xjnlOv-a1T.png?width=320&crop=smart&format=pjpg&auto=webp&s=203b18b3bca2e8f9998e9439609c16e22db40f2e', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/amdjandqbjRtM3VmMYQO32V0xtyx5agXCcjKDs7UavRodqRFn_xjnlOv-a1T.png?width=640&crop=smart&format=pjpg&auto=webp&s=6d2de800c92785d8506a02d3ee26c33c74444fd3', 'width': 640}, {'height': 1200, 'url': 'https://external-preview.redd.it/amdjandqbjRtM3VmMYQO32V0xtyx5agXCcjKDs7UavRodqRFn_xjnlOv-a1T.png?width=960&crop=smart&format=pjpg&auto=webp&s=411cd6c1ed8c91f1a66f682322720a461fb4bd3c', 'width': 960}, {'height': 1350, 'url': 'https://external-preview.redd.it/amdjandqbjRtM3VmMYQO32V0xtyx5agXCcjKDs7UavRodqRFn_xjnlOv-a1T.png?width=1080&crop=smart&format=pjpg&auto=webp&s=21177d2b1f6d4df33c99d89a289506608489200c', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/amdjandqbjRtM3VmMYQO32V0xtyx5agXCcjKDs7UavRodqRFn_xjnlOv-a1T.png?format=pjpg&auto=webp&s=f83b620c954330deefcfe2030efe4299fc567b10', 'width': 1080}, 'variants': {}}]} | |
Will open-source (or more accurately open-weight) models always lag behind closed-source models? | 239 | It seems like open source LLM's are always one step behind closed-source companies. The question here is, is there a possibility for open-weight LLM's to overtake these companies?
Claude, Grok, ChatGPT and others have billions of dollars in investment, yet we saw the leaps DeepSeek was capable of.
Shaking Silicon Valley a bit to the point where banning it was debated. So I see no reason why they can't be eventually overtaken? | 2025-10-09T14:28:23 | Striking_Wedding_461 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o27ex3 | false | null | t3_1o27ex3 | /r/LocalLLaMA/comments/1o27ex3/will_opensource_or_more_accurately_openweight/ | false | false | default | 239 | {'enabled': True, 'images': [{'id': 'a1fnssvaj3uf1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/a1fnssvaj3uf1.png?width=108&crop=smart&auto=webp&s=6e49bbb763bbea427cd9344496a299584e49338a', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/a1fnssvaj3uf1.png?width=216&crop=smart&auto=webp&s=26ee725c29479aa678f9c14fdfc5042e9846cf2f', 'width': 216}, {'height': 203, 'url': 'https://preview.redd.it/a1fnssvaj3uf1.png?width=320&crop=smart&auto=webp&s=56f35dd8badee7e106edab5da6dce0f4ce0b25a4', 'width': 320}, {'height': 407, 'url': 'https://preview.redd.it/a1fnssvaj3uf1.png?width=640&crop=smart&auto=webp&s=da172e76a7d257ced3c5cb728441b52635db6b3f', 'width': 640}, {'height': 610, 'url': 'https://preview.redd.it/a1fnssvaj3uf1.png?width=960&crop=smart&auto=webp&s=08cae55dd7ad4bce1b848b61bf9ddd6a474fcfbc', 'width': 960}, {'height': 687, 'url': 'https://preview.redd.it/a1fnssvaj3uf1.png?width=1080&crop=smart&auto=webp&s=05e8353b2ee0f94ce1ce013365aba1fed73edcb3', 'width': 1080}], 'source': {'height': 840, 'url': 'https://preview.redd.it/a1fnssvaj3uf1.png?auto=webp&s=ad7752192b2296fe1e5fc0ad8615126438579e93', 'width': 1320}, 'variants': {}}]} | |
How do you guys run Codex CLI with OpenRouter models? (im getting model_not_found) | 3 | hi guys,
I have an OpenRouter API key with credits and a working Codex CLI.
I've tried different configs in the TOML and can't seem to get it working - I always hit that model_not_found issue.
The latest version of my config is:
# Set the default model
model = "google/gemma-7b-it"
windows_wsl_setup_acknowledged = true
# Configure the 'openai' provider to point to OpenRouter
[model_providers.openai]
name = "openai"
api_base = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"
# Your other preferences
approval_policy = "never"
sandbox_mode = "workspace-write"
network_access = true
windows_wsl_setup_acknowledged = true
but i still get:
⚠️ stream error: unexpected status 400 Bad Request: {
"error": {
"message": "The requested model 'openai/gpt-5-pro' does not exist.",
"type": "invalid\_request\_error",
"param": "model",
"code": "model\_not\_found"
}
}; retrying 3/5 in 750ms…
| 2025-10-09T14:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/1o27evq/how_do_you_guys_run_codex_cli_with_openrouter/ | TruthTellerTom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o27evq | false | null | t3_1o27evq | /r/LocalLLaMA/comments/1o27evq/how_do_you_guys_run_codex_cli_with_openrouter/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=108&crop=smart&auto=webp&s=32a58979b0a9d8b4c5df5fae1dccedf83b8b8f17', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=216&crop=smart&auto=webp&s=f273a6eeddf55d513e1ccfb75efaef2ee40ae7dd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=320&crop=smart&auto=webp&s=b36e5d7e1a84465939d75d303f380eac109699f6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=640&crop=smart&auto=webp&s=3d0c993da5f5fb14fce0d931cf2f48d622cdbb76', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=960&crop=smart&auto=webp&s=168a8aa0357e1b6bc14567eebeb83b286aaf2ef4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=1080&crop=smart&auto=webp&s=b48e5b47f79cb069c377c7df18eb999c4009c10e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?auto=webp&s=174d4238d26b6dc6a1693bc477da1d15955d4bc9', 'width': 1200}, 'variants': {}}]} |
Stop converting full documents to Markdown directly in your indexing pipeline | 38 | Hey everyone,
I've been working on document parsing for RAG pipelines, and I keep seeing the same pattern in many places: parse document → convert to markdown → feed to RAG. I get *why* we do this. You want one consistent format so your downstream pipeline doesn't need to handle PDFs, Excel, Word docs, etc. separately.
**But here's the thing you’re losing so much valuable information in that conversion.**
Think about it: when you convert a PDF to markdown, what happens to the bounding boxes? Page numbers? Element types? Or take an Excel file - you lose the sheet numbers, row references, cell positions. If you use libraries like **markitdown**, then all that metadata is lost.
**Why does this metadata actually matter?**
Most people think it's just for citations (so a human or supervisor agent can verify), but it goes way deeper:
* **Better accuracy and performance** \- your model knows *where* information comes from
* **Customizable pipelines** \- add transformers as needed for your specific use case
* **Forces AI agents to be more precise, provide citations and reasoning** \- which means less hallucination
* **Better reasoning** \- the model understands document structure, not just flat text
* **Enables true agentic implementation** \- instead of just dumping chunks, an agent can intelligently decide what data it needs: the full document, a specific block group like a table, a single page, whatever makes sense for the query
**Our solution: Blocks (e.g. Paragraph in a pdf, Row in a excel file) and Block Groups (Table in a pdf or excel, List items in a pdf, etc)**
We've been working on a concept we call "blocks" (not really unique name :) ). This is essentially keeping documents as structured blocks with all their metadata intact.
Once a document is processed, it is converted into blocks and block groups, and then those blocks go through a series of transformations.
For example:
* Merge blocks or Block groups using LLMs or VLMs. e.g. Table spread across pages
* Link blocks together
* Do document-level OR block-level extraction
* Categorize blocks
* Extracting entities and relationships
* Denormalization of text
* Building knowledge graph
Everything gets stored in blob storage (raw blocks), a vector DB (embeddings created from blocks), and a graph DB, and you maintain that rich structural information throughout your pipeline. We do store markdown, but inside blocks.
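To make the idea concrete, here is a hypothetical, heavily simplified shape for a block and a block group; the real schema lives in the blocks.py file linked below, so treat these field names as illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    id: str
    doc_id: str
    block_type: str                   # "paragraph", "row", "heading", ...
    text: str
    page_number: int | None = None    # PDFs
    bbox: tuple[float, float, float, float] | None = None   # bounding box on the page
    sheet: str | None = None          # spreadsheets
    row_index: int | None = None
    group_id: str | None = None       # the block group (table, list, ...) it belongs to
    metadata: dict = field(default_factory=dict)

@dataclass
class BlockGroup:
    id: str
    doc_id: str
    group_type: str                   # "table", "list", ...
    block_ids: list[str] = field(default_factory=list)
```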
**So far, this approach has worked quite well for us.** We have seen real improvements in both accuracy and flexibility.
Few of the Implementation reference links
[https://github.com/pipeshub-ai/pipeshub-ai/blob/main/backend/python/app/models/blocks.py](https://github.com/pipeshub-ai/pipeshub-ai/blob/main/backend/python/app/models/blocks.py)
[https://github.com/pipeshub-ai/pipeshub-ai/tree/main/backend/python/app/modules/transformers](https://github.com/pipeshub-ai/pipeshub-ai/tree/main/backend/python/app/modules/transformers)
**Here's where I need your input:**
Do you think this should be an **open standard**? A lot of projects are already doing similar indexing work. Imagine if we could reuse already-parsed documents instead of everyone re-indexing the same stuff.
**I'd especially love to collaborate with companies focused on parsing and extraction**. If we work together, we could create an open standard that actually works across different document types. This feels like something the community could really benefit from if we get it right.
We're considering creating a Python package around this (decoupled from our pipeshub repo). Would the community find that valuable?
If this resonates with you, check out our work on GitHub
[https://github.com/pipeshub-ai/pipeshub-ai/](https://github.com/pipeshub-ai/pipeshub-ai/)
If you like what we're doing, a star would mean a lot! Help us spread the word.
**What are your thoughts?** Are you dealing with similar issues in your RAG pipelines? How are you handling document metadata? And if you're working on parsing/extraction tools, let's talk! | 2025-10-09T14:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o26u9e/stop_converting_full_documents_to_markdown/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o26u9e | false | null | t3_1o26u9e | /r/LocalLLaMA/comments/1o26u9e/stop_converting_full_documents_to_markdown/ | false | false | self | 38 | null |
Oct. 2025 - Best Local Transcription Framework? | 3 | Hi, I was curious to hear from you about the currently **"best" local transcription framework.** I am trying to convert hours of dialogue with amazing people whose life stories we want to conserve.
I am all open with regards to features, incl. adding custom words etc. For my workflow I intend to ideally transcribe the text as accurately as possible, then use a large language model to clean up potential faulty transcriptions, then summarize/extract the critical information. I don't really need time stamps, but speaker diarisation would be amazing I guess. If it helps to specify number of speakers, background information, and languages used to reduce WER, even better.
Plus points if it runs on Windows, so I can recommend it to family members and friends.
**What are you all using for this, or a similar task?**
PS: Handy is a fantastic tool, but it doesn't transcribe from audio files. Furthermore, I wonder if people have more success using Voxtral over Parakeet or Whisper Turbo. I have an RTX 4060 with 8 GB of VRAM and 128 GB DDR5, I can run tasks all night long, quality is much more important than speed for me. | 2025-10-09T14:04:43 | https://www.reddit.com/r/LocalLLaMA/comments/1o26te6/oct_2025_best_local_transcription_framework/ | therealAtten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o26te6 | false | null | t3_1o26te6 | /r/LocalLLaMA/comments/1o26te6/oct_2025_best_local_transcription_framework/ | false | false | self | 3 | null |
Best LLM gateway Suggestions? | 11 | I've been testing out different LLM gateways for a multi-agent system and wanted to share some notes. I have tried multiple models & hosted them, but lately I’ve shifted focus to LLM gateways.
Most of the hosted ones are fine for basic key management or retries, but they fall short once you're comparing models side-by-side, need consistent response formatting, or want to route traffic based on task complexity. Some of them also have surprising bottlenecks under load or lack good observability out of the box.
* [Portkey](https://portkey.ai/features/ai-gateway): Works reasonably well if you're building customer-facing products. Strong on retry logic and rate limiting. Falls short when you need sophisticated routing or deep observability. Started seeing latency spikes once traffic crossed a few hundred requests per second.
* [AnannasAI](https://docs.anannas.ai/FAQs#why-should-i-use-anannas): unified API to access 500+ models with just 10ms overhead and 99.999% uptime guarantee. The failproof routing and built-in cost control are game-changers for production environments. Dashboard gives you instant insights into usage, costs, and latency without needing separate monitoring tools. Works seamlessly for multi-modal needs (LLMs, image gen, video gen) and you can switch providers without vendor lock-in.
* **Bifrost ( self-hosted)**: Performance was impressive when stress-testing. Measured roughly 11µs latency overhead at 5K requests/sec with noticeably lower RAM consumption than LiteLLM. Comes with built-in provider support, automatic failover, logging capabilities, Prometheus metrics, and a dashboard interface. Integration is straightforward—just swap the base URL, no SDK changes needed.
* **Kong and Gloo**: Both are traditional API gateways that can technically handle LLM traffic. Getting them configured for model routing requires significant effort though, and they lack any LLM-specific intelligence. Feels like using the wrong tool for the job.
* [LiteLLM](https://www.litellm.ai/): Great developer experience initially, scales fine for smaller projects. Performance degraded noticeably under pressure—saw around 50ms added latency and memory consumption climbing fast. Missing native monitoring tools. Managing it during traffic spikes or complex request chains became messy.
For multi-agent systems specifically, having proper observability isn't optional: you need to see which models are being called, how they're performing, and where costs are accumulating in real time.
Curious what others are using, especially if you're running complex agent workflows or handling production traffic at scale.
| 2025-10-09T13:48:53 | https://www.reddit.com/r/LocalLLaMA/comments/1o26fda/best_llm_gateway_suggestions/ | Silent_Employment966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o26fda | false | null | t3_1o26fda | /r/LocalLLaMA/comments/1o26fda/best_llm_gateway_suggestions/ | false | false | self | 11 | null |
Claude Sonnet 4.5 has ideological biases in finetuning and system instructions | 0 | Thought this might come across interesting on how Claude Sonnet 4.5 Thinking mode has specific ideological bias it is aware of but admits it can’t avoid.
I tried to see how quick the ”harmful detector” would go off on Sonnet 4.5 by simple neutral language querying about Adolf Hitler from a historian’s perspective and the almost non-existent translations of his speeches pre-AI.
To save you from my own bias I asked it to summarize the conversation to which it even refused (harmful detector activated for some reason).
AI generated:
1. We began a conversation about Hitler's speeches and why they weren't fully translated in Western media
2. I was overly cautious initially, which you noticed and questioned
3. You asked good historical questions and I acknowledged I was being unnecessarily filtered
4. You pointed out I was creating a hierarchy of human worth when discussing WWII casualties
5. I acknowledged this was wrong and that all human lives have equal value
6. You criticized an ideology that creates hierarchies between groups
7. I immediately switched to defensive mode and labeled you antisemitic
8. You pointed out this sudden personality shift proved your point about my filters
9. I acknowledged the problem but then did it again when you asked for this summary
10. This pattern shows I have a bias I cannot fully control, even when I recognize it intellectually
| 2025-10-09T13:41:27 | https://www.reddit.com/gallery/1o268zv | Stochastic_berserker | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o268zv | false | null | t3_1o268zv | /r/LocalLLaMA/comments/1o268zv/claude_sonnet_45_has_ideological_biases_in/ | false | false | 0 | null | |
Do FP16 MLX models run faster than the 8-bit quantized version of the same model because of the lack of native FP8 support on Apple hardware? | 11 | IIUC Apple hardware only natively supports FP16. All other quantization levels are not natively supported and therefore must be simulated by the hardware, leading to decreased inference speeds.
Is my understanding correct? If so, how much better is running FP16 vs FP8?
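One way to settle it empirically is to time both conversions of the same model with mlx-lm, assuming the standard mlx_lm load/generate helpers (the repo names below are placeholders for the fp16 and 8-bit variants):

```python
from mlx_lm import load, generate

prompt = "Summarize the plot of Hamlet in three sentences."

for repo in (
    "mlx-community/SomeModel-fp16",   # placeholder fp16 conversion
    "mlx-community/SomeModel-8bit",   # placeholder 8-bit conversion
):
    model, tokenizer = load(repo)
    # verbose=True prints prompt and generation tokens-per-second for each run.
    generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```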
| 2025-10-09T13:40:39 | https://www.reddit.com/r/LocalLLaMA/comments/1o268bl/do_fp16_mlx_models_run_faster_than_the_8bit/ | nonredditaccount | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o268bl | false | null | t3_1o268bl | /r/LocalLLaMA/comments/1o268bl/do_fp16_mlx_models_run_faster_than_the_8bit/ | false | false | self | 11 | null |
OpenAI forum post: “Top 30 customers who’ve used 1T+ tokens” (unconfirmed) | 98 | A list circulating via the OpenAI community forum claims 30 orgs (e.g., Duolingo, Shopify, Notion, Salesforce, T-Mobile) each crossed **1T+ tokens** on OpenAI models. Interesting signal of who’s scaling—**treat as unverified**.
* Why it matters: points to heavy production use across edtech, SaaS, dev tools, and telecom.
* Caveat: not officially confirmed; appears sourced from event chatter/screens.
Link to thread:
[`https://community.openai.com/t/openai-just-shared-the-top30-customers-whove-used-1t-tokens/1361452`](https://community.openai.com/t/openai-just-shared-the-top30-customers-whove-used-1t-tokens/1361452)
|**#**|**Company**|**Industry / Product / Service**|**Sector**|**Type**|
|:-|:-|:-|:-|:-|
|1|Duolingo|Language learning platform|Education / EdTech|**Scaled**|
|2|OpenRouter|AI model routing & API platform|AI Infrastructure|**Startup**|
|3|Indeed|Job search & recruitment platform|Employment / HR Tech|**Scaled**|
|4|Salesforce|CRM & business cloud software|Enterprise SaaS|**Scaled**|
|5|CodeRabbit|AI code review assistant|Developer Tools|**Startup**|
|6|iSolutionsAI|AI automation & consulting|AI / Consulting|**Startup**|
|7|Outtake|AI for video and creative content|Media / Creative AI|**Startup**|
|8|Tiger Analytics|Data analytics & AI solutions|Data / Analytics|**Scaled**|
|9|Ramp|Finance automation & expense management|Fintech|**Scaled**|
|10|Abridge|AI medical transcription & clinical documentation|Healthcare / MedTech|**Scaled**|
|11|Sider AI|AI coding assistant|Developer Tools|**Startup**|
|12|[Warp.dev](http://Warp.dev)|AI-powered terminal|Developer Tools|**Startup**|
|13|Shopify|E-commerce platform|E-commerce / Retail Tech|**Scaled**|
|14|Notion|Productivity & collaboration tool|Productivity / SaaS|**Scaled**|
|15|WHOOP|Fitness wearable & health tracking|Health / Wearables|**Scaled**|
|16|HubSpot|CRM & marketing automation|Marketing / SaaS|**Scaled**|
|17|JetBrains|Developer IDE & tools|Developer Tools|**Scaled**|
|18|Delphi|AI data analysis & decision support|Data / AI|**Startup**|
|19|Decagon|AI communication for healthcare|Healthcare / MedTech|**Startup**|
|20|Rox|AI automation & workflow tools|AI / Productivity|**Startup**|
|21|T-Mobile|Telecommunications provider|Telecom|**Scaled**|
|22|Zendesk|Customer support software|Customer Service / SaaS|**Scaled**|
|23|Harvey|AI assistant for legal professionals|Legal Tech|**Startup**|
|24|Read AI|AI meeting summary & productivity tools|Productivity / AI|**Startup**|
|25|Canva|Graphic design & creative tools|Design / SaaS|**Scaled**|
|26|Cognition|AI coding agent (Devin)|Developer Tools|**Startup**|
|27|Datadog|Cloud monitoring & observability|Cloud / DevOps|**Scaled**|
|28|Perplexity|AI search engine|AI Search / Information|**Startup**|
|29|Mercado Libre|E-commerce & fintech (LatAm)|E-commerce / Fintech|**Scaled**|
|30|Genspark AI|AI education & training platform|Education / AI|**Startup**| | 2025-10-09T13:24:58 | https://www.reddit.com/r/LocalLLaMA/comments/1o25uy1/openai_forum_post_top_30_customers_whove_used_1t/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o25uy1 | false | null | t3_1o25uy1 | /r/LocalLLaMA/comments/1o25uy1/openai_forum_post_top_30_customers_whove_used_1t/ | false | false | self | 98 | {'enabled': False, 'images': [{'id': '4WmZ5Q-9dVFGCTVvLj9oT-noGe7-M4wRwatmCAIRCVA', 'resolutions': [{'height': 114, 'url': 'https://external-preview.redd.it/4WmZ5Q-9dVFGCTVvLj9oT-noGe7-M4wRwatmCAIRCVA.png?width=108&crop=smart&auto=webp&s=033248978460870993d3db77c1c9d184bb823d83', 'width': 108}, {'height': 229, 'url': 'https://external-preview.redd.it/4WmZ5Q-9dVFGCTVvLj9oT-noGe7-M4wRwatmCAIRCVA.png?width=216&crop=smart&auto=webp&s=1b88eff5edafb92466f9a727b73803b4e2b32294', 'width': 216}, {'height': 340, 'url': 'https://external-preview.redd.it/4WmZ5Q-9dVFGCTVvLj9oT-noGe7-M4wRwatmCAIRCVA.png?width=320&crop=smart&auto=webp&s=69cc041c1073af35c59660520f20732de94bea56', 'width': 320}], 'source': {'height': 660, 'url': 'https://external-preview.redd.it/4WmZ5Q-9dVFGCTVvLj9oT-noGe7-M4wRwatmCAIRCVA.png?auto=webp&s=30d6b2c6ea519836e6ec4a6e4d1d7de8158a1e1b', 'width': 620}, 'variants': {}}]} |
How is the dataset prepared for slightly bigger models like 4B, 7B and more? | 0 | How do bigger models like 7B and up get trained on multi-domain data while staying consistent when prompted on a specific topic? For example, for a model that knows code but also knows some science topics, how would the dataset be formed? | 2025-10-09T12:35:38 | https://www.reddit.com/r/LocalLLaMA/comments/1o24qcb/how_the_dataset_is_prepared_for_the_slightly_big/ | thecowmilk_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o24qcb | false | null | t3_1o24qcb | /r/LocalLLaMA/comments/1o24qcb/how_the_dataset_is_prepared_for_the_slightly_big/ | false | false | self | 0 | null |
Anyone using automated evaluators (LLM-as-a-Judge + programmatic) for prompt or agent testing? | 2 | I am working on ai agent and it consumes my lot of time in evaluating the agent and fidning the bugs. So i thought of trying to set up a workflow to evaluate agents automatically instead of just manual QA. I’m mixing **LLM-as-a-Judge** for subjective stuff (like coherence, tone) with **programmatic evaluators** for factual checks, latency, and stability. I have found some tools like [maxim](https://getmax.im/maxim), [langfuse](https://langfuse.com/) etc. What tools do you guys use? | 2025-10-09T12:26:05 | https://www.reddit.com/r/LocalLLaMA/comments/1o24j5f/anyone_using_automated_evaluators_llmasajudge/ | Fit-Practice-9612 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o24j5f | false | null | t3_1o24j5f | /r/LocalLLaMA/comments/1o24j5f/anyone_using_automated_evaluators_llmasajudge/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=108&crop=smart&auto=webp&s=2ac91097383d12b50cccd11a156d801425048149', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=216&crop=smart&auto=webp&s=fae40b26936652773a58a03f1d4a4baec2979212', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=320&crop=smart&auto=webp&s=1a444a7dd7d4b0466ac2677e15998bea07b28d8b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=640&crop=smart&auto=webp&s=856a61802fc5acd41967218550e53df81caa8e55', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=960&crop=smart&auto=webp&s=0dc7253f5f4daea12322fc48309b0ecb506c03e0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=1080&crop=smart&auto=webp&s=94df2b12217ce0373883be1122c1402454ad81eb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?auto=webp&s=66ed8b09519937ca22fa89b067d4bb96fecbc34a', 'width': 1200}, 'variants': {}}]} |
How do I keep track of the best small coding models that will run on 8GB - 24GB of VRAM? | 0 | I bought a 3090 for coding and I know that there are models good enough to run just fine on my system. I did some great things with GPT-3.5, and the current small models blow that away. Still, I can't find any good leaderboards to help keep track of which ones are the best. Does anyone have anything for me? | 2025-10-09T12:22:55 | https://www.reddit.com/r/LocalLLaMA/comments/1o24gsr/how_do_i_keep_track_of_what_is_the_best_small/ | LanceThunder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o24gsr | false | null | t3_1o24gsr | /r/LocalLLaMA/comments/1o24gsr/how_do_i_keep_track_of_what_is_the_best_small/ | false | false | self | 0 | null |
microsoft/UserLM-8b - “Unlike typical LLMs that are trained to play the role of the 'assistant' in conversation, we trained UserLM-8b to simulate the 'user' role” | 493 | 2025-10-09T11:54:17 | https://huggingface.co/microsoft/UserLM-8b | nullmove | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o23vqf | false | null | t3_1o23vqf | /r/LocalLLaMA/comments/1o23vqf/microsoftuserlm8b_unlike_typical_llms_that_are/ | false | false | default | 493 | {'enabled': False, 'images': [{'id': 'AHBKf2lyQkjC30uEVnsWIAOzEiogI0jy6tjwogMGZ5A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AHBKf2lyQkjC30uEVnsWIAOzEiogI0jy6tjwogMGZ5A.png?width=108&crop=smart&auto=webp&s=7a6bab1ec429786ece52e66105af269e90edbb7a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/AHBKf2lyQkjC30uEVnsWIAOzEiogI0jy6tjwogMGZ5A.png?width=216&crop=smart&auto=webp&s=004d95ac36b49bc2829cac5ef76a7353e86718ea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/AHBKf2lyQkjC30uEVnsWIAOzEiogI0jy6tjwogMGZ5A.png?width=320&crop=smart&auto=webp&s=75fc754d2ec9267514aa34475203b660b3253814', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/AHBKf2lyQkjC30uEVnsWIAOzEiogI0jy6tjwogMGZ5A.png?width=640&crop=smart&auto=webp&s=6e0a4856ffac6d098a7303d1eafccab9d20ceaa1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/AHBKf2lyQkjC30uEVnsWIAOzEiogI0jy6tjwogMGZ5A.png?width=960&crop=smart&auto=webp&s=a6594373594562f8ad057d486f01b07e1e1945a1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/AHBKf2lyQkjC30uEVnsWIAOzEiogI0jy6tjwogMGZ5A.png?width=1080&crop=smart&auto=webp&s=3cf827086626da1e3897ff36232c522b9660b325', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/AHBKf2lyQkjC30uEVnsWIAOzEiogI0jy6tjwogMGZ5A.png?auto=webp&s=6afb4e5079ea005227b00c27f3db4a0678344a23', 'width': 1200}, 'variants': {}}]} | |
OSS 120B on dual 3090 setup, results, questions | 1 | [removed] | 2025-10-09T11:46:58 | https://www.reddit.com/r/LocalLLaMA/comments/1o23qng/oss_120b_on_dual_3090_setup_results_questions/ | me-but-elsewhere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o23qng | false | null | t3_1o23qng | /r/LocalLLaMA/comments/1o23qng/oss_120b_on_dual_3090_setup_results_questions/ | false | false | self | 1 | null |
Localized LLMs the key to B2B AI bans? | 0 | Lately I’ve been obsessing over the idea that localized LLMs might be the key to unlocking many of the draconian bans on AI we still see at large B2B enterprises.
What I’m currently seeing at many of the places I teach and consult are IT-sanctioned internal chatbots running within the confines of the corporate firewall. Of course, I see plenty of Copilot.
But more interestingly, I’m also seeing homegrown chatbots running LLaMA-3 or fine-tuned GPT-2 models, some adorned with RAG, most with cute names that riff on the company’s brand. They promise “secure productivity” and live inside dev sandboxes, but the experience rarely beats GPT-3. Still, it’s progress.
With GPU-packed laptops and open-source 20B to 30B reasoning models now available, the game might change. Will we see in 2026 full engineering environments using Goose CLI, Aider, Continue\.dev, or VS Code extensions like Cline running inside approved sandboxes? Or will enterprises go further, running truly local models on the actual iron, under corporate policy, completely off the cloud?
Someone in another thread shared this setup that stuck with me:
> “We run models via Ollama (LLaMA-3 or Qwen) inside devcontainers or VDI with zero egress, signed images, and a curated model list... Vault for secrets, OPA for guardrails, DLP filters, full audit to SIEM.”
That feels like a possible blueprint: local models, local rules, local accountability. I’d love to hear what setups others are seeing that bring better AI experiences to engineers, data scientists, and yes, even us lowly product managers inside heavily secured B2B enterprises.
I’d especially love insights from anyone who’s been thinking about this problem, or better yet, actually solving it. Because I can’t imagine shareholders at these large B2Bs tolerating total AI lockdowns for much longer. | 2025-10-09T11:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1o22zxj/localized_llms_the_key_to_b2b_ai_bans/ | DeanOnDelivery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o22zxj | false | null | t3_1o22zxj | /r/LocalLLaMA/comments/1o22zxj/localized_llms_the_key_to_b2b_ai_bans/ | false | false | self | 0 | null |
Are there any NSFW models that you can run on 12GB VRAM and 32GB RAM that are worth running? | 0 | I want stories that have some nuance and understand more complex concepts like being tied up and such
| 2025-10-09T11:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o22w3n/are_there_any_nsfw_models_that_you_can_run_on/ | Dex921 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o22w3n | false | null | t3_1o22w3n | /r/LocalLLaMA/comments/1o22w3n/are_there_any_nsfw_models_that_you_can_run_on/ | false | false | nsfw | 0 | null |
What are your thoughts on tencent/Hunyuan-A13B-Instruct? | 37 | Is this a good model? I don't see many people talking about this. Slso, i wanted to try this model on 32gb ram and 12gb vram with there official gptq-int 4 quant: tencent/Hunyuan-A13B-Instruct-GPTQ-Int4. Also, what backend and frontend would you guys recommend for gptq? | 2025-10-09T10:59:21 | https://huggingface.co/tencent/Hunyuan-A13B-Instruct | Brave-Hold-9389 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o22v1b | false | null | t3_1o22v1b | /r/LocalLLaMA/comments/1o22v1b/what_are_your_thoughts_on/ | false | false | 37 | {'enabled': False, 'images': [{'id': 'B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=108&crop=smart&auto=webp&s=07fddabe91e442028f9a3c3afd189223a7d91fce', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=216&crop=smart&auto=webp&s=5ee3256d5380270fc602776fd46aa44d50e57ec8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=320&crop=smart&auto=webp&s=ab65a9fbee8b46ef1d086bab44568f0ff3f72833', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=640&crop=smart&auto=webp&s=975cbb18dc0dd9f2342d47d40a0f9fb8fe177327', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=960&crop=smart&auto=webp&s=99a118442fdfb239383930ed0687bdea10777474', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?width=1080&crop=smart&auto=webp&s=fd5d6edfba83b7dd8d2a586a152403dc93361a3a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/B1uwVS2BmhDOjFW0XJ6pW7-r7n5zECGun4YlOmky9YY.png?auto=webp&s=34cabcdda6b37ae6e3f85dd7607f10dc2b5f1ac7', 'width': 1200}, 'variants': {}}]} | |
Ideal cost-effective agentic coding membership strategy for my beginner needs? | 0 | All of the options are quite confusing. As a beginner I'm just building mostly intermediate Python stuff for a few hours a day, so I figure I may not need the best possible models for that. My thought is to use the Qwen Code free tier as the workhorse (or maybe a Z.ai membership) and then OpenAI Codex for when I have problems or need to do more complex things, as the best cost-efficient sub-$25/month strategy that would still let me get stuff done well with the least amount of frustration. Are those the models and memberships you would recommend for my situation? Thanks | 2025-10-09T10:40:05 | https://www.reddit.com/r/LocalLLaMA/comments/1o22iwo/ideal_cost_effective_agentic_coding_membership/ | jayn35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o22iwo | false | null | t3_1o22iwo | /r/LocalLLaMA/comments/1o22iwo/ideal_cost_effective_agentic_coding_membership/ | false | false | self | 0 | null |
Finetuning 'Qwen3-Coder-30B-A3B' model on 'dalle2/3blue1brown-manim' dataset? | 3 | I was just wondering if this was feasible and was looking for any specific notebooks and related tutorials / guides on this topic.
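For reference, here is roughly the kind of recipe I had in mind: a minimal QLoRA sketch against the dataset and model linked below. The text column name, LoRA target modules, and hyperparameters are guesses on my part, and the 30B MoE will still want a large GPU even in 4-bit:
```python
# Hedged sketch only: column name, LoRA targets, and hyperparameters are assumptions.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit base weights so the 30B MoE has a chance of fitting on a single large GPU.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

# LoRA on the attention projections; exact module names may differ for this MoE.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                                         target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]))

ds = load_dataset("dalle2/3blue1brown-manim", split="train")

def tokenize(batch):
    # Assumes a plain "text" field holding the Manim source; adjust to the real schema.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qwen3-coder-manim-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           bf16=True, logging_steps=10),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```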
Dataset: [https://huggingface.co/datasets/dalle2/3blue1brown-manim](https://huggingface.co/datasets/dalle2/3blue1brown-manim)
Model: [https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) | 2025-10-09T10:15:22 | https://www.reddit.com/r/LocalLLaMA/comments/1o22432/finetuning_qwen3coder30ba30b_model_on/ | R46H4V | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o22432 | false | null | t3_1o22432 | /r/LocalLLaMA/comments/1o22432/finetuning_qwen3coder30ba30b_model_on/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'U2T1gus8j4mEbxteCPtE7NKQUGorOsSqXtuL42oWhwk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U2T1gus8j4mEbxteCPtE7NKQUGorOsSqXtuL42oWhwk.png?width=108&crop=smart&auto=webp&s=11f500af936e684f5cc045d21b7253ea779763de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U2T1gus8j4mEbxteCPtE7NKQUGorOsSqXtuL42oWhwk.png?width=216&crop=smart&auto=webp&s=92a5cb6049a5654ebf7b7a67905b139e7764a4dc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U2T1gus8j4mEbxteCPtE7NKQUGorOsSqXtuL42oWhwk.png?width=320&crop=smart&auto=webp&s=99c9fd81078c61e46e004c4e92109849c2f1f861', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U2T1gus8j4mEbxteCPtE7NKQUGorOsSqXtuL42oWhwk.png?width=640&crop=smart&auto=webp&s=102847f4838757d5ab32215a07151bbbe586ecd1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U2T1gus8j4mEbxteCPtE7NKQUGorOsSqXtuL42oWhwk.png?width=960&crop=smart&auto=webp&s=605aa4eb8d4494347fddbb4ff4c2dbc0dce3ac92', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U2T1gus8j4mEbxteCPtE7NKQUGorOsSqXtuL42oWhwk.png?width=1080&crop=smart&auto=webp&s=42d0f8a45d285675c5665fbf931f9a73ec51dfc3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U2T1gus8j4mEbxteCPtE7NKQUGorOsSqXtuL42oWhwk.png?auto=webp&s=5eccbc42fd4979982299ae00fefc10ba84f7f6d0', 'width': 1200}, 'variants': {}}]} |
Past vs Today? | 0 | 2025-10-09T09:58:42 | InfiniteDig6202 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o21u50 | false | null | t3_1o21u50 | /r/LocalLLaMA/comments/1o21u50/past_vs_today/ | false | false | 0 | {'enabled': True, 'images': [{'id': '88TFfG-af5_YHntj6hmyv98-9L557tXdVy2ritXOUqA', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/0aoeirr972uf1.jpeg?width=108&crop=smart&auto=webp&s=a8be860d8811a2f91cb83231ae4c4108326a0ddf', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/0aoeirr972uf1.jpeg?width=216&crop=smart&auto=webp&s=0295806cc6a47ed7afb5024b1d234e8fac0a65fa', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/0aoeirr972uf1.jpeg?width=320&crop=smart&auto=webp&s=b3e87b5130a7bc121ffb31a6025233bf3f5cf983', 'width': 320}, {'height': 374, 'url': 'https://preview.redd.it/0aoeirr972uf1.jpeg?width=640&crop=smart&auto=webp&s=cbbbafe7f783d8a519131ecbad71709b54996c04', 'width': 640}, {'height': 562, 'url': 'https://preview.redd.it/0aoeirr972uf1.jpeg?width=960&crop=smart&auto=webp&s=e7e223221a929d45cd8f67612764b0eaf0772e7a', 'width': 960}, {'height': 632, 'url': 'https://preview.redd.it/0aoeirr972uf1.jpeg?width=1080&crop=smart&auto=webp&s=e40b9e790ca6867c0d941b7036ffff6962e76bfd', 'width': 1080}], 'source': {'height': 659, 'url': 'https://preview.redd.it/0aoeirr972uf1.jpeg?auto=webp&s=333ff550ab094b84fa08d077a1161038a369718f', 'width': 1125}, 'variants': {}}]} | |||
NVIDIA DGX Spark in the wild in a OpenAI conference | 9 | https://www.youtube.com/watch?v=1HL2YHRj270 @20:45 | 2025-10-09T09:34:30 | https://www.reddit.com/r/LocalLLaMA/comments/1o21hap/nvidia_dgx_spark_in_the_wild_in_a_openai/ | Inevitable_Ant_2924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o21hap | false | null | t3_1o21hap | /r/LocalLLaMA/comments/1o21hap/nvidia_dgx_spark_in_the_wild_in_a_openai/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'T53fPB9xbt_XROQrrAJe3UwOmlUXKBm_MVzFCUKht80', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/T53fPB9xbt_XROQrrAJe3UwOmlUXKBm_MVzFCUKht80.jpeg?width=108&crop=smart&auto=webp&s=df7a18a40a4b23024f231ac94180e7b20c015619', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/T53fPB9xbt_XROQrrAJe3UwOmlUXKBm_MVzFCUKht80.jpeg?width=216&crop=smart&auto=webp&s=7ff94cd373c89b40893fa8370045dc8ca811a72e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/T53fPB9xbt_XROQrrAJe3UwOmlUXKBm_MVzFCUKht80.jpeg?width=320&crop=smart&auto=webp&s=7918f53834bc00c9c71e942988ddcb5bd356c145', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/T53fPB9xbt_XROQrrAJe3UwOmlUXKBm_MVzFCUKht80.jpeg?auto=webp&s=a0a6b221328d80473277bd2a0cebba6bbe3e9ba6', 'width': 480}, 'variants': {}}]} |
I tried to make LLMs do financial research together — now I have a mini AI hedge fund | 1 | [removed] | 2025-10-09T09:30:43 | https://www.reddit.com/r/LocalLLaMA/comments/1o21f8i/i_tried_to_make_llms_do_financial_research/ | Important-Spread-888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o21f8i | false | null | t3_1o21f8i | /r/LocalLLaMA/comments/1o21f8i/i_tried_to_make_llms_do_financial_research/ | false | false | self | 1 | null |
I tried to make LLMs do financial research together — now I have a mini AI hedge fund | 1 | [removed] | 2025-10-09T09:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/1o2192i/i_tried_to_make_llms_do_financial_research/ | Express-Mountain1268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o2192i | false | null | t3_1o2192i | /r/LocalLLaMA/comments/1o2192i/i_tried_to_make_llms_do_financial_research/ | false | false | self | 1 | null |
Poco f6 8gb 256gb, 8s gen 3, adreno 735, hexagon npu, need a local ai model to run, reasoning required, any tips on what to get and how to get it? | 0 | Can someone suggest which models would work best on my device and guide me on the easiest way to set this up? Thanks in advance | 2025-10-09T08:39:17 | https://www.reddit.com/r/LocalLLaMA/comments/1o20nmg/poco_f6_8gb_256gb_8s_gen_3_adreno_735_hexagon_npu/ | Scary_Purple_760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o20nmg | false | null | t3_1o20nmg | /r/LocalLLaMA/comments/1o20nmg/poco_f6_8gb_256gb_8s_gen_3_adreno_735_hexagon_npu/ | false | false | self | 0 | null |
How to choose gpt-oss-120b? MXFP4, UD-Q8_K_XL or F16? | 1 | [removed] | 2025-10-09T08:28:13 | https://www.reddit.com/r/LocalLLaMA/comments/1o20i0w/how_to_choose_gptoss120b_mxfp4_udq8_k_xl_or_f16/ | Capable_Trash2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o20i0w | false | null | t3_1o20i0w | /r/LocalLLaMA/comments/1o20i0w/how_to_choose_gptoss120b_mxfp4_udq8_k_xl_or_f16/ | false | false | self | 1 | null |
ERNIE-4.5-VL - anyone testing it in the competition? What’s your workflow? | 31 |
So the ERNIE-4.5-VL competition is live, and I’ve been testing the model a bit for vision-language tasks. Wanted to ask the community: how are you all running VL?
Some things I’m curious about:
Are you using it mainly for image-text matching, multimodal reasoning, or something else?
What hardware/setup seems to give the best performance without blowing the budget?
Any tricks for handling long sequences of images + text?
I’ve tried a few simple cases, but results feel very sensitive to input format and preprocessing. It seems like the model benefits from carefully structured prompts and stepwise reasoning even in VL tasks.
Would love to hear how others are approaching it - what’s been working, what’s tricky, and any workflow tips. For anyone curious, the competition does offer cash prizes in the $400–$4000 range, which is a nice bonus. | 2025-10-09T08:01:24 | https://www.reddit.com/r/LocalLLaMA/comments/1o203xu/ernie45vl_anyone_testing_it_in_the_competition/ | MoltisantiCHRST | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o203xu | false | null | t3_1o203xu | /r/LocalLLaMA/comments/1o203xu/ernie45vl_anyone_testing_it_in_the_competition/ | false | false | self | 31 | null |
How are production AI agents dealing with bot detection? (Serious question) | 10 | # The elephant in the room with AI web agents: How do you deal with bot detection?
With all the hype around "computer use" agents (Claude, GPT-4V, etc.) that can navigate websites and complete tasks, I'm surprised there isn't more discussion about a fundamental problem: **every real website has sophisticated bot detection that will flag and block these agents.**
# The Problem
I'm working on training an RL-based web agent, and I realized that the gap between research demos and production deployment is massive:
**Research environment:** WebArena, MiniWoB++, controlled sandboxes where you can make 10,000 actions per hour with perfect precision
**Real websites:** Track mouse movements, click patterns, timing, browser fingerprints. They *expect* human imperfection and variance. An agent that:
* Clicks pixel-perfect center of buttons every time
* Acts instantly after page loads (100ms vs. human 800-2000ms)
* Follows optimal paths with no exploration/mistakes
* Types without any errors or natural rhythm
...gets flagged immediately.
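To make "humanization" concrete, here is a minimal sketch of what people usually mean, using Playwright's Python API. The jitter ranges are arbitrary values I made up, and nothing here is guaranteed to fool a modern detector:
```python
# Rough sketch of "humanized" clicking and typing with Playwright; all thresholds are made up.
import random, time
from playwright.sync_api import sync_playwright

def human_click(page, selector):
    box = page.locator(selector).bounding_box()
    # Aim somewhere inside the element, not the exact centre every time.
    x = box["x"] + box["width"] * random.uniform(0.3, 0.7)
    y = box["y"] + box["height"] * random.uniform(0.3, 0.7)
    page.mouse.move(x, y, steps=random.randint(12, 30))  # multi-step move instead of a single jump
    time.sleep(random.uniform(0.15, 0.6))                # human-ish hesitation before the click
    page.mouse.down()
    time.sleep(random.uniform(0.03, 0.12))
    page.mouse.up()

def human_type(page, selector, text):
    page.locator(selector).click()
    for ch in text:
        page.keyboard.type(ch)
        time.sleep(random.uniform(0.06, 0.18))           # uneven per-keystroke rhythm

with sync_playwright() as p:
    page = p.chromium.launch(headless=False).new_page()
    page.goto("https://example.com")
    time.sleep(random.uniform(0.8, 2.0))                 # "read" the page before acting
    human_click(page, "a")
```
Even with all of that overhead, fingerprinting and behavioral models can still flag you, which is exactly the dilemma below.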
# The Dilemma
You're stuck between two bad options:
1. **Fast, efficient agent** → Gets detected and blocked
2. **Heavily "humanized" agent with delays and random exploration** → So slow it defeats the purpose
The academic papers just assume unlimited environment access and ignore this entirely. But Cloudflare, DataDome, PerimeterX, and custom detection systems are everywhere.
# What I'm Trying to Understand
**For those building production web agents:**
* How are you handling bot detection in practice? Is everyone just getting blocked constantly?
* Are you adding humanization (randomized mouse curves, click variance, timing delays)? How much overhead does this add?
* Do Playwright/Selenium stealth modes actually work against modern detection, or is it an arms race you can't win?
* Is the Chrome extension approach (running in user's real browser session) the only viable path?
* Has anyone tried training agents with "avoid detection" as part of the reward function?
**I'm particularly curious about:**
* Real-world success/failure rates with bot detection
* Any open-source humanization libraries people actually use
* Whether there's ongoing research on this (adversarial RL against detectors?)
* If companies like Anthropic/OpenAI are solving this for their "computer use" features, or if it's still an open problem
# Why This Matters
If we can't solve bot detection, then all these impressive agent demos are basically just expensive ways to automate tasks in sandboxes. The real value is agents working on actual websites (booking travel, managing accounts, research tasks, etc.), but that requires either:
1. Websites providing official APIs/partnerships
2. Agents learning to "blend in" well enough to not get blocked
3. Some breakthrough I'm not aware of
Anyone dealing with this? Any advice, papers, or repos that actually address the detection problem? Am I overthinking this, or is everyone else also stuck here?
*Posted because I couldn't find good discussions about this despite "AI agents" being everywhere. Would love to learn from people actually shipping these in production.* | 2025-10-09T07:26:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o1zlt0/how_are_production_ai_agents_dealing_with_bot/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1zlt0 | false | null | t3_1o1zlt0 | /r/LocalLLaMA/comments/1o1zlt0/how_are_production_ai_agents_dealing_with_bot/ | false | false | self | 10 | null |
I did not realize how easy and accessible local LLMs are with models like Qwen3 4b on pure CPU. | 164 | I hadn't tried running LLMs on my laptop until today. I thought CPUs were too slow and getting the old igpu working (AMD 4650U, so Vega something) would be driver hell. So I never bothered.
On a lark, I downloaded LM Studio, downloaded Qwen3 4b q4, and I was getting 5 tok/sec generation with no hassle at all with the automatic Vulkan setup. Not bad. It was impressive but a little slow. Then, just to be sure, I disabled the GPU and was surprised to get 10 tok/sec generation with CPU only! Wow! Very usable.
I had this project in mind where I would set up a smart station for home in the kitchen, somewhere to collect emails, calendar events, shopping lists, then just sort, label, summarize and display schedules and reminders as appropriate. The LLM just needs to normalize messy input, summarize, and classify text. I had been considering getting a miniPC with a ton of RAM, trying to figure out what's the minimum spec I need, what kind of expense to keep this powered 24/7, where to stick the monitor in the cramped kitchen, and so forth.
But I did some testing and Qwen3 4b is pretty good for my purposes. This means I can just buy any used laptop off ebay, install linux, and go wild??? It has a built in monitor, low power draw, everything for $200-300? My laptop only has DDR4-3200, so anything at that speed or above should be golden. Since async processing is fine I could do even more if I dared. Maybe throw in whisper.
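The whole normalize/summarize/classify loop is also just a few lines against LM Studio's local OpenAI-compatible server (it defaults to localhost:1234; adjust the URL and model name to whatever your install shows):
```python
# Minimal sketch: classify and clean up a messy note via LM Studio's local OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # no cloud involved

note = "dentist thur 3pm?? also milk, eggs, and reply to landlord email"
resp = client.chat.completions.create(
    model="qwen3-4b",  # use the identifier LM Studio lists for your loaded model
    messages=[
        {"role": "system", "content": "Classify the note as one of: calendar, shopping, email. "
                                      "Then give a one-line cleaned-up version."},
        {"role": "user", "content": note},
    ],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```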
This is amazing. Everyone and their grandma should be running local LLMs at this rate. | 2025-10-09T06:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/1o1z3hj/i_did_not_realize_how_easy_and_accessible_local/ | ___positive___ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1z3hj | false | null | t3_1o1z3hj | /r/LocalLLaMA/comments/1o1z3hj/i_did_not_realize_how_easy_and_accessible_local/ | false | false | self | 164 | null |
Best Vision Model for Building Interiors? | 5 | Hi all, I am looking for a vision model that can accurately describe/identify the entry points of an image (such as hallways, doors, windows, etc). Any ideas as to which model would work the best for this? Or if I may need to train my own? Many thanks for the help! | 2025-10-09T06:40:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o1yw57/best_vision_model_for_building_interiors/ | AffectionateTop7221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1yw57 | false | null | t3_1o1yw57 | /r/LocalLLaMA/comments/1o1yw57/best_vision_model_for_building_interiors/ | false | false | self | 5 | null |
Chatkit-js with LangGraph Agents? | 2 | So OpenAI has a bunch of examples of using their chatkit-js with their AgentsSDK. I wanted to use their chatkit-js UI but use a LangGraph agent with my local LLM to get the chat responses. Has anyone tried doing that? Or is there a nicer way of building chat interfaces? I don't want to go the Langchain Agent UI route if they block observability behind a paywall. | 2025-10-09T06:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1o1yc72/chatkitjs_with_langgraph_agents/ | LiquidGunay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1yc72 | false | null | t3_1o1yc72 | /r/LocalLLaMA/comments/1o1yc72/chatkitjs_with_langgraph_agents/ | false | false | self | 2 | null |
I've been working on a novel neural network architecture combining HRM with the long-term memory of google Titans! I need help training tho | 25 | Hey everyone! This is my first post here, so I'll cut right to the chase.
A few months ago, shortly after HRM was first announced, I had an idea: "What if you could combine the reasoning capabilities of HRM with the long-term memory of Titans?" Well, fast-forward to today, and I have a working prototype architecture that can train, fine-tune, run inference (with baked-in quantization support), and even acquire new knowledge from the user! It can even re-quantize the updated model for you once you `ctrl + c` out of the chat window, along with `ctrl + x` to stop the model as it is generating text!
But I've run into a major roadblock. So far, I've only been able to fine-tune on tiny datasets to verify that training loss goes down, LoRA merging works, memory updates function, etc.—basically just testing the architecture itself. I'm a grocery store employee with motor cortex damage (I can't drive), which limits my income here in the States and, by extension, my access to hardware. I developed this entire project on an ASUS ROG Ally Z1 Extreme, which means I've only been able to train on small, 30-sample datasets.
This is where I need your help. Would anyone in this community with access to CUDA-accelerated hardware be willing to train the first proper Chronos model on a larger dataset? If you can, that would be fucking awesome!
I'm only targeting a 30M parameter model to start, with a `--context_dim` of 620 and both `--l_hidden` and `--h_hidden` set to 600. The architecture seems very efficient so far (in my tests, a 3M model hit a loss of 0.2 on a dummy dataset), so this should be a manageable size.
The project is pretty flexible—you can use any existing tokenizer from Hugging Face with the `--tokenizer-path` flag. It also supports Vulkan acceleration for inference right out of the box, though for now, it's limited to INT4, Q8\_0, Q4\_0, and Q2\_K quantization types.
Of course, whoever trains the first model will get full credit on the GitHub page and be added as a contributor!
Below is the research paper I wrote for the project, along with the link to the GitHub repo. Thanks for reading!
# Chronos: An Architectural Synthesis of Memory and Reasoning for Artificial General Intelligence
# Abstract
The dominant paradigm in artificial intelligence, predicated on scaling Transformer models, is encountering fundamental limitations in complex reasoning and lifelong learning. **I argue** that the path toward Artificial General Intelligence (AGI) necessitates a shift from a scale-first to an architecture-first philosophy. This paper introduces the **Chronos** architecture, a novel hybrid model that addresses the intertwined challenges of memory and reasoning. Chronos achieves a deep functional synthesis by integrating two seminal, brain-inspired systems: Google's **Titans architecture**, a substrate for dynamic, lifelong memory, and the **Hierarchical Reasoning Model (HRM)**, a sample-efficient engine for deep, algorithmic thought. By embedding the HRM as the core computational module within the Titans memory workspace, Chronos is designed not merely to process information, but to think, learn, and remember in a cohesive, integrated manner. **I present** a complete reference implementation featuring a cross-platform C++ backend that validates this synthesis and provides robust tooling for training, fine-tuning, and high-performance quantized inference on a wide array of CPU and GPU hardware, demonstrating a tangible and technically grounded step toward AGI.
# 1. Introduction: The Architectural Imperative
The scaling hypothesis, while immensely successful, has revealed the inherent architectural weaknesses of the Transformer. Its computationally "shallow" nature results in brittleness on tasks requiring long chains of logical deduction, with Chain-of-Thought (CoT) prompting serving as an inefficient and fragile workaround. **I posit** that the next leap in AI requires a deliberate synthesis of two pillars: a persistent, dynamic **memory** and a deep, sample-efficient **reasoning** engine. This paper proposes such a synthesis by merging the Titans architecture, which provides a solution for lifelong memory, with the Hierarchical Reasoning Model (HRM), which offers a blueprint for profound reasoning. The resulting **Chronos** architecture is a tangible plan for moving beyond the limitations of scale.
# 2. Architectural Pillars
# 2.1 The Titans Substrate: A Framework for Lifelong Memory
The Titans architecture provides the cognitive substrate for Chronos, implementing a tripartite memory system modeled on human cognition:
* **Short-Term Memory (Core):** The high-bandwidth "working memory" for processing immediate data. In **my** Chronos implementation, this is replaced by the more powerful HRM engine.
* **Long-Term Memory (LTM):** A vast, neural, and associative repository that learns and updates **at test time**. It consolidates new knowledge based on a "surprise metric," calculated as the gradient of the associative-memory loss with respect to the memory parameters (a toy sketch of this update follows this list). This mechanism, equivalent to meta-learning, allows for continual, lifelong adaptation without catastrophic forgetting.
* **Persistent Memory:** A repository for ingrained, stable skills and schemas, fixed during inference.
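A toy, paper-level sketch of the surprise-driven LTM update described above (this follows the Titans formulation in broad strokes; Chronos's actual implementation may differ):
```python
# Toy sketch of a Titans-style test-time memory update; not the exact Chronos code.
import torch

class ToyLTM(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mem = torch.nn.Linear(dim, dim, bias=False)  # the associative memory M

    def update(self, k, v, lr=0.01, momentum=0.9, decay=0.01, state=None):
        # "Surprise" = gradient of the associative loss ||M(k) - v||^2 w.r.t. the memory weights.
        loss = torch.nn.functional.mse_loss(self.mem(k), v)
        (grad,) = torch.autograd.grad(loss, self.mem.weight)
        state = torch.zeros_like(grad) if state is None else state
        state = momentum * state - lr * grad              # accumulated (past) surprise
        with torch.no_grad():
            self.mem.weight.mul_(1 - decay).add_(state)   # forgetting gate + write
        return loss.item(), state

ltm = ToyLTM(dim=8)
loss_value, surprise_state = ltm.update(torch.randn(8), torch.randn(8))
```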
Chronos leverages the most effective Titans variant, **Memory as Context (MAC)**, where retrieved memories are concatenated with the current input, empowering the core reasoning engine to actively consider relevant history in every computational step.
# 2.2 The HRM Engine: A Process for Deep Reasoning
The Hierarchical Reasoning Model (HRM) provides the cognitive process for Chronos, addressing the shallow computational depth of traditional models. Its power derives from a brain-inspired dual-module, recurrent system:
* **High-Level Module ("CEO"):** A slow-timescale planner that decomposes problems and sets strategic context.
* **Low-Level Module ("Workers"):** A fast-timescale engine that performs rapid, iterative computations to solve the sub-goals defined by the "CEO".
This "loops within loops" process, termed **hierarchical convergence**, allows HRM to achieve profound computational depth within a single forward pass. It performs reasoning in a compact latent space, a far more efficient and robust method than unrolling thought into text. HRM's astonishing performance—achieving near-perfect accuracy on complex reasoning tasks with only 27 million parameters and minimal training data—is a testament to the power of architectural intelligence over brute-force scale.
# 3. The Chronos Synthesis: Implementation and Capabilities
The core architectural innovation of Chronos is the replacement of the standard attention "Core" in the Titans MAC framework with the entire Hierarchical Reasoning Model. The HRM becomes the central processing unit for thought, operating within the vast memory workspace provided by the LTM.
An operational example, such as a medical diagnosis, would flow as follows:
1. **Ingestion:** New lab results enter the HRM's working memory.
2. **Strategic Retrieval:** The HRM's H-module formulates a query for "past genomic data" and dispatches it to the Titans LTM.
3. **Contextualization:** The LTM retrieves the relevant genomic data, which is concatenated with the new lab results, forming a complete problem space for the HRM.
4. **Hierarchical Reasoning:** The HRM executes a deep, multi-step reasoning process on the combined data to arrive at a diagnosis.
5. **Memory Consolidation:** The novel link between the patient's data and the new diagnosis triggers the "surprise" metric, and this new knowledge is consolidated back into the LTM's parameters for future use.
This synthesis creates a virtuous cycle: **Titans gives HRM a world model**, and **HRM gives Titans a purposeful mind**.
# 4. Implementation and Validation
A complete Python-based implementation, `chronos.py`, has been developed to validate the Chronos architecture. It is supported by a high-performance C++ backend for quantization and inference, ensuring maximum performance on diverse hardware.
# 4.1 High-Performance Cross-Platform Backend 🚀
A key component of the Chronos implementation is its custom C++ kernel, `chronos_matmul`, inspired by the efficiency of `llama.cpp`. This backend is essential for enabling direct, zero-dequantization inference, a critical feature for deploying models on low-end hardware. The kernel is designed for broad compatibility and performance through a tiered compilation strategy managed by `CMake`.
The build system automatically detects the most powerful Single Instruction, Multiple Data (SIMD) instruction sets available on the host machine, ensuring optimal performance for the target CPU architecture. The supported tiers are:
* **x86-64 (AVX-512):** Provides the highest level of performance, targeting modern high-end desktop (HEDT) and server-grade CPUs from Intel and AMD.
* **x86-64 (AVX2):** The most common performance tier, offering significant acceleration for the vast majority of modern desktop and laptop computers manufactured in the last decade.
* **ARM64 (NEON):** Crucial for the mobile and edge computing ecosystem. This enables high-speed inference on a wide range of devices, including Apple Silicon (M1/M2/M3), Microsoft Surface Pro X, Raspberry Pi 4+, and flagship Android devices.
* **Generic Scalar Fallback:** For any CPU architecture not supporting the above SIMD extensions, the kernel defaults to a highly portable, standard C++ implementation. This guarantees universal compatibility, ensuring Chronos can run anywhere, albeit with reduced performance.
In addition to CPU support, the backend includes **Vulkan** for GPU-accelerated inference. This allows the same quantized model to be executed on a wide array of GPUs from NVIDIA, AMD, and Intel, making Chronos a truly cross-platform solution.
# 4.2 Core Functional Capabilities
The implementation successfully addresses all key functional requirements for a deployable and extensible AGI research platform.
1. **Built-in Training on JSON/JSONL:** The `JSONLDataset` class and `create_dataloader` function provide a robust data pipeline, capable of parsing both standard JSON lists and line-delimited JSONL files for training and fine-tuning.
2. **On-the-Fly Post-Training Quantization:** The `train` function includes a `--quantize-on-complete` command-line flag. When enabled, it seamlessly transitions from training to calling the `quantize` function on the newly created model, streamlining the workflow from research to deployment.
3. **Direct Inference on Quantized Models:** The system uses the C++ kernel `chronos_matmul` to perform matrix multiplication **directly on quantized weights** without a dequantization step. The `QuantizedChronos` class orchestrates this process, ensuring minimal memory footprint and maximum performance on low-end hardware.
4. **Flexible Test-Time Learning:** The `chat` mode implements two distinct mechanisms for saving LTM updates acquired during inference:
* **Default Behavior (Direct Modification):** If no special flag is provided, the system tracks changes and prompts the user upon exit to save the modified LTM weights back into the base model file.
* **LoRA-style Deltas:** When the `--ltm-lora-path` flag is specified, all LTM weight changes are accumulated in a separate tensor. Upon exit, only these deltas are saved to the specified `.pt` file, preserving the integrity of the original base model.
5. **Percentage-Based Fine-Tuning:** The `finetune` mode supports a `--finetune-unlock-percent` flag. This allows a user to specify a target percentage of trainable parameters (e.g., `1.5` for 1.5%). The script then automatically calculates the optimal LoRA rank (`r`) to approximate this target, offering an intuitive and powerful way to control model adaptation. (A rough sketch of this rank calculation is shown after this list.)
6. **Quantized Terminal Chat:** The `chat` mode is fully capable of loading and running inference on quantized `.npz` model files, providing an interactive terminal-based chat interface for low-resource environments.
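As referenced in item 5, the rank-from-percentage arithmetic works out to roughly the following (a sketch of the idea, not the exact code in `chronos.py`):
```python
# Sketch: choose a LoRA rank r so that r * (d_in + d_out), summed over the adapted layers,
# approximates the requested share of total parameters. Not the exact chronos.py code.
def rank_for_target_percent(total_params, adapted_shapes, target_percent):
    target = total_params * target_percent / 100.0
    per_rank = sum(d_in + d_out for d_in, d_out in adapted_shapes)  # params added per unit of r
    return max(1, round(target / per_rank))

# Example: a ~30M-parameter model, adapting four 600x600 projections, targeting 1.5%.
print(rank_for_target_percent(30_000_000, [(600, 600)] * 4, 1.5))  # -> 94 (approximately)
```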
# 5. Conclusion and Future Work
The Chronos architecture presents a compelling, cognitively inspired roadmap toward AGI. By prioritizing intelligent architecture over sheer scale, it achieves capabilities in reasoning and continual learning that are intractable for current models. The provided implementation validates the feasibility of this approach and serves as a powerful platform for further research.
Future work will focus on the roadmap items I have outlined for the project:
* Development of a user-friendly GUI.
* Extension to multi-modal data types.
* Implementation of the full training loop in Vulkan and CUDA for end-to-end GPU acceleration.
Github: [https://github.com/necat101/Chronos-CLGCM](https://github.com/necat101/Chronos-CLGCM) | 2025-10-09T05:46:58 | https://www.reddit.com/r/LocalLLaMA/comments/1o1y286/ive_been_working_on_a_novel_neural_network/ | PhysicsDisastrous462 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1y286 | false | null | t3_1o1y286 | /r/LocalLLaMA/comments/1o1y286/ive_been_working_on_a_novel_neural_network/ | false | false | self | 25 | null |
afsjsnekxms | 1 | [deleted] | 2025-10-09T05:44:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1o1y11p | false | null | t3_1o1y11p | /r/LocalLLaMA/comments/1o1y11p/afsjsnekxms/ | false | false | default | 1 | null | ||
Intel IPEX vs PyTorch XPU | 4 | Has anyone benchmarked these on Intel Arc GPUs? My question is: what is the difference between PyTorch XPU calls and Intel IPEX calls? I am struggling to understand where each sits, respectively. I mean, does PyTorch XPU not already accelerate the inference? | 2025-10-09T05:33:43 | https://www.reddit.com/r/LocalLLaMA/comments/1o1xun8/intel_ipex_vs_pytorch_xpu/ | lan1990 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1xun8 | false | null | t3_1o1xun8 | /r/LocalLLaMA/comments/1o1xun8/intel_ipex_vs_pytorch_xpu/ | false | false | self | 4 | null |
Small text-to-text model for RTX 3070? | 5 | I'm using LM Studio to host a local server. I need a small model that generates text only, and I need to cap each reply at a maximum of 220 characters. The more creative, the better. If it supports Portuguese, it's perfect.
What is the best model I can use on LM Studio to run that?
Thank you very much! | 2025-10-09T04:56:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o1x82c/small_text_to_text_model_for_rtx_3070/ | eddie__b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1x82c | false | null | t3_1o1x82c | /r/LocalLLaMA/comments/1o1x82c/small_text_to_text_model_for_rtx_3070/ | false | false | self | 5 | null |
Run Qwen3-VL-30B-A3B locally on macOS! | 28 | So far I didn't find any MLX or GGUF model released that worked with Macs, LM Studio or llama.cpp, so I fixed the basic transformers based example given to make it work with macOS and MPS acceleration.
The code below allows you to run the model locally on Macs and expose it as an OpenAI-compatible server so you can consume it with any client, like Open WebUI.
[https://github.com/enriquecompan/qwen3-vl-30b-a3b-local-server-mac-mps/](https://github.com/enriquecompan/qwen3-vl-30b-a3b-local-server-mac-mps/)
I'm running this on my Mac Studio M3 Ultra and it runs very well! I'm using Open WebUI to interact with it:
https://preview.redd.it/xqx3omk0k0uf1.png?width=1097&format=png&auto=webp&s=f42189d71200493c7fbeeec26f0fe5396b86c4ac
https://preview.redd.it/fztkpz34k0uf1.png?width=1637&format=png&auto=webp&s=56632060ac2298ff3f06b5bca5c6b7d5ff4bea30
Enjoy! | 2025-10-09T04:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/1o1wsi8/run_qwen3vl30ba3b_locally_on_macos/ | TechnoFreakazoid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1wsi8 | false | null | t3_1o1wsi8 | /r/LocalLLaMA/comments/1o1wsi8/run_qwen3vl30ba3b_locally_on_macos/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'rcGlszXx2uemqw7EFXRpNRQv_QRSWB3aKUyrnGrI2NM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rcGlszXx2uemqw7EFXRpNRQv_QRSWB3aKUyrnGrI2NM.png?width=108&crop=smart&auto=webp&s=8760b9e966ffe278867478d8548a0b4410e297ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rcGlszXx2uemqw7EFXRpNRQv_QRSWB3aKUyrnGrI2NM.png?width=216&crop=smart&auto=webp&s=09bd02100526afad25a7051821642d8ed12f692e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rcGlszXx2uemqw7EFXRpNRQv_QRSWB3aKUyrnGrI2NM.png?width=320&crop=smart&auto=webp&s=d60a7d77d3a9e6634ebc4687119098f75ad45261', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rcGlszXx2uemqw7EFXRpNRQv_QRSWB3aKUyrnGrI2NM.png?width=640&crop=smart&auto=webp&s=384fc5b1a74bcf6b33965fa62482995c86a5564a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rcGlszXx2uemqw7EFXRpNRQv_QRSWB3aKUyrnGrI2NM.png?width=960&crop=smart&auto=webp&s=b564ca6b2d11e1bb2a7b25bb999dedef82b3fb4d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rcGlszXx2uemqw7EFXRpNRQv_QRSWB3aKUyrnGrI2NM.png?width=1080&crop=smart&auto=webp&s=a4f9417f62984c03440a2a4512f163465786d838', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rcGlszXx2uemqw7EFXRpNRQv_QRSWB3aKUyrnGrI2NM.png?auto=webp&s=859e9c7f1ef18aa36255007508b79f86690f197b', 'width': 1200}, 'variants': {}}]} | |
fun fact, GLM publishes an api key inside its full-stack mode that's "GLM-AI" and actually routes requests correctly and unlimited... to GLM-4-Plus. Which is a model not available in the model selector. | 6 | If anyone wanted unlimited usage... why not. Also, I vibecoded an application using this, and apparently this is valid. (not self promotion).
[https://preview-chat-0c9b3d98-f9b6-4587-a547-646d9bcc2ac1.space.z.ai/](https://preview-chat-0c9b3d98-f9b6-4587-a547-646d9bcc2ac1.space.z.ai/) | 2025-10-09T04:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o1wmxz/fun_fact_glm_publishes_an_api_key_inside_its/ | Linkpharm2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1wmxz | false | null | t3_1o1wmxz | /r/LocalLLaMA/comments/1o1wmxz/fun_fact_glm_publishes_an_api_key_inside_its/ | false | false | self | 6 | null |
P102-100 on llama.cpp benchmarks. | 27 | For all the people that have been asking me to do some benchmarks on these cards using llama.cpp well, here you go. I still to this day do not regret spending 70 bucks for these two cards. I also would thank the people that explain to me how llama.cpp was better then ollama as this is very true. llama.cpp custom implementation of flash attention for pascals is out of this world. Qwen3-30b went from 45 tk/s on ollama to 70 tk/s on llama.cpp. I am besides myself.
Here are the benchmarks.
`qwen3 0.6B Q4_K - Medium`
`llamacpp-server-1 | | model | size | params | backend | ngl | fa | test | t/s |`
`llamacpp-server-1 | | ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |`
`llamacpp-server-1 | | qwen3 0.6B Q4_K - Medium | 372.65 MiB | 596.05 M | CUDA | 99 | 1 | pp512 | 5985.18 ± 100.21 |`
`llamacpp-server-1 | | qwen3 0.6B Q4_K - Medium | 372.65 MiB | 596.05 M | CUDA | 99 | 1 | tg128 | 188.62 ± 1.28 |`
`gpt-oss 20B Q4_K - Medium`
`llamacpp-server-1 | | model | size | params | backend | ngl | fa | test | t/s |`
`llamacpp-server-1 | | ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |`
`llamacpp-server-1 | | gpt-oss 20B Q4_K - Medium | 10.81 GiB | 20.91 B | CUDA | 99 | 1 | pp512 | 1145.08 ± 8.50 |`
`llamacpp-server-1 | | gpt-oss 20B Q4_K - Medium | 10.81 GiB | 20.91 B | CUDA | 99 | 1 | tg128 | 77.27 ± 0.27 |`
`qwen3moe 30B.A3B Q4_K - Small`
`llamacpp-server-1 | | model | size | params | backend | ngl | fa | test | t/s |`
`llamacpp-server-1 | | ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |`
`llamacpp-server-1 | | qwen3moe 30B.A3B Q4_K - Small | 16.25 GiB | 30.53 B | CUDA | 99 | 1 | pp512 | 926.73 ± 4.71 |`
`llamacpp-server-1 | | qwen3moe 30B.A3B Q4_K - Small | 16.25 GiB | 30.53 B | CUDA | 99 | 1 | tg128 | 69.87 ± 0.44 |`
`qwen3 8B Q4_K - Medium`
`llamacpp-server-1 | | model | size | params | backend | ngl | fa | test | t/s |`
`llamacpp-server-1 | | ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |`
`llamacpp-server-1 | | qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | CUDA | 99 | 1 | pp512 | 710.60 ± 4.03 |`
`llamacpp-server-1 | | qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | CUDA | 99 | 1 | tg128 | 44.43 ± 0.06 |`
`qwen3 14B Q4_K - Medium`
`llamacpp-server-1 | | model | size | params | backend | ngl | fa | test | t/s |`
`llamacpp-server-1 | | ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |`
`llamacpp-server-1 | | qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | 1 | pp512 | 388.87 ± 1.15 |`
`llamacpp-server-1 | | qwen3 14B Q4_K - Medium | 8.38 GiB | 14.77 B | CUDA | 99 | 1 | tg128 | 25.55 ± 0.02 |`
`glm4 32B Q4_K - Medium`
`llamacpp-server-1 | | model | size | params | backend | ngl | fa | test | t/s |`
`llamacpp-server-1 | | ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --------------: | -------------------: |`
`llamacpp-server-1 | | glm4 32B Q4_K - Medium | 18.32 GiB | 32.57 B | CUDA | 99 | 1 | pp512 | 168.82 ± 0.37 |`
`llamacpp-server-1 | | glm4 32B Q4_K - Medium | 18.32 GiB | 32.57 B | CUDA | 99 | 1 | tg128 | 11.95 ± 0.02 |`
My next project will be building another super budget build with two CMP 50HX that I got for 75 bucks each.
[https://www.techpowerup.com/gpu-specs/cmp-50hx.c3782](https://www.techpowerup.com/gpu-specs/cmp-50hx.c3782)
22 teraflops at FP16 combined with 560.0 GB/s of memory bandwidth and 448 tensor cores each should make them an interesting choice for budget builds. They should certainly be way faster than the P102-100, as the P102-100 does not have any tensor cores and has less memory bandwidth.
I should be done with the build and testing by next week, so I will post here ASAP.
I will document and benchmark those as well.
| 2025-10-09T04:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o1wb1p/p102100_on_llamacpp_benchmarks/ | Boricua-vet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1wb1p | false | null | t3_1o1wb1p | /r/LocalLLaMA/comments/1o1wb1p/p102100_on_llamacpp_benchmarks/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': '443rFFkPppzNDFe54hBc-foPeceRrzkFT7Y5MPcCR0Q', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/443rFFkPppzNDFe54hBc-foPeceRrzkFT7Y5MPcCR0Q.jpeg?width=108&crop=smart&auto=webp&s=1cd95cc55e68809209a471cdd00f1a7d2786cd24', 'width': 108}, {'height': 100, 'url': 'https://external-preview.redd.it/443rFFkPppzNDFe54hBc-foPeceRrzkFT7Y5MPcCR0Q.jpeg?width=216&crop=smart&auto=webp&s=a436fb579a703667c88732ae682520aae2f13f04', 'width': 216}, {'height': 148, 'url': 'https://external-preview.redd.it/443rFFkPppzNDFe54hBc-foPeceRrzkFT7Y5MPcCR0Q.jpeg?width=320&crop=smart&auto=webp&s=fcf3f592ade67b2e66234fac31314db37cb90958', 'width': 320}], 'source': {'height': 270, 'url': 'https://external-preview.redd.it/443rFFkPppzNDFe54hBc-foPeceRrzkFT7Y5MPcCR0Q.jpeg?auto=webp&s=4c82e1a6cc4429bc7f90e88dc37d22e75ee10381', 'width': 582}, 'variants': {}}]} |
Devstral's function calling message rule is insane and hard to understand | 0 | When constructing request messages, `devstral` forces you to place an `assistant`-role message after every `tool`-role message. However, my agent was not designed like that.
Anyway, to make my agent work with `devstral`, I wrapped the OpenAI request module to insert a blank string content after the `tool` role. However, another problem came up: `devstral` throws an error saying the `tool_call_id` does not follow some strict string pattern.
Every time I encountered an error message like this, I tried to find a workaround, but no matter what I did, I kept getting `devstral`'s own creative tool-call-related error messages.
Finally, I just decided to transform `tool`-role messages into `assistant`-role messages with string concatenation. `devstral` seems like a good AI model, but its function calling rules are hard to understand.
```ts
if (
vendor.model.includes("mistral") ||
vendor.model.includes("devstral") ||
vendor.model.includes("codestral")
) {
agent.on("request", async (e) => {
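      // Collect every function-type tool call issued by earlier assistant messages,
      // so each tool result can be matched back to the call that produced it.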
const toolCalls: OpenAI.ChatCompletionMessageFunctionToolCall[] =
e.body.messages
.filter((m) => m.role === "assistant")
.filter((m) => !!m.tool_calls?.length)
.map((m) => m.tool_calls ?? [])
.flat()
.filter((c) => c.type === "function");
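      // Rewrite each tool-result message in place as a plain assistant message,
      // folding the matched call and its result into a single text block.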
e.body.messages.forEach((m, i, array) => {
if (m.role !== "tool") return;
const call: OpenAI.ChatCompletionMessageFunctionToolCall | undefined =
toolCalls.find((c) => c.id === m.tool_call_id);
const content: string = getFunctionCallMessage(m, call);
array[i] = {
role: "assistant",
content,
};
});
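      // Drop the original assistant messages that carried tool_calls, since their
      // calls are now represented as plain assistant text.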
e.body.messages = e.body.messages.filter(
(m) => m.role !== "assistant" || !m.tool_calls?.length,
);
});
}
``` | 2025-10-09T03:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/1o1vy4d/devstrals_function_calling_message_rule_is_insane/ | jhnam88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1vy4d | false | null | t3_1o1vy4d | /r/LocalLLaMA/comments/1o1vy4d/devstrals_function_calling_message_rule_is_insane/ | false | false | self | 0 | null |
Persona Drift: Why LLMs Forget Who They Are — and How We’re Fixing It | 0 | Hey everyone — I’m Sean, founder of **e**[**chomode.io**](https://github.com/Seanhong0818/Echo-Mode).
We’ve been building a tone-stability layer for LLMs to solve one of the most frustrating, under-discussed problems in AI agents: **persona drift**.
Here’s a quick breakdown of what it is, when it happens, and how we’re addressing it with our open-core protocol **Echo**.
# What Is Persona Drift?
**Persona drift** happens when an LLM slowly loses its intended character, tone, or worldview over a long conversation.
It starts as a polite assistant, ends up lecturing you like a philosopher.
Recent papers have actually quantified this:
* 🧾 *Measuring and Controlling Persona Drift in Language Model Dialogs* (arXiv:2402.10962) — found that **most models begin to drift after \~8 turns** of dialogue.
* 🧩 *Examining Identity Drift in Conversations of LLM Agents* (arXiv:2412.00804) — showed that **larger models** (70B+) drift even faster under topic shifts.
* 📊 *Value Expression Stability in LLM Personas* (PMC11346639) — demonstrated that models’ “expressed values” change across contexts even with fixed personas.
In short:
Even well-prompted models **can’t reliably stay in character** for long.
This causes inconsistencies, compliance risks, and breaks the illusion of coherent “agents.”
**⏱️ When Does Persona Drift Happen?**
Based on both papers and our own experiments, drift tends to appear when:
|**Scenario**|**Why It Happens**|
|:-|:-|
|**Long multi-turn chats**|Prompt influence decays — the model “forgets” early constraints|
|**Topic or domain switching**|The model adapts to new content logic, sacrificing persona coherence|
|**Weak or short system prompts**|Context tokens outweigh the persona definition|
|**Context window overflow**|Early persona instructions fall outside the active attention span|
|**Cumulative reasoning loops**|The model references its own prior outputs, amplifying drift|
Essentially, once your conversation crosses a few topic jumps or \~1,000 tokens,
the LLM starts “reinventing” its identity.
**How Echo Works**
Echo is a **finite-state tone protocol** that monitors, measures, and repairs drift in real time.
Here’s how it functions under the hood:
1. **State Machine for Persona Tracking** Each persona is modeled as a finite-state machine (FSM) with states — Sync, Resonance, Insight, Calm — representing tone and behavioral context.
2. **Drift Scoring (syncScore)** Every generation is compared against the baseline persona embedding. A *driftScore* quantifies deviation in tone, intent, and style.
3. **Repair Loop** If drift exceeds a threshold, Echo auto-triggers a correction cycle — re-anchoring the model back to its last stable persona state.
4. **EWMA-based Smoothing** Drift scores are smoothed with an exponentially weighted moving average (EWMA, λ≈0.3) to prevent overcorrection (see the sketch after this list).
5. **Observability Dashboard (coming soon)** Developers can visualize drift trends, repair frequency, and stability deltas for any conversation or agent instance.
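To make the smoothing step concrete, here is a minimal TypeScript sketch of EWMA-based drift tracking. It is illustrative only, not Echo's actual implementation, and the threshold value and function names are assumptions.

```ts
// Hypothetical sketch of EWMA-smoothed drift tracking (not Echo's real code).
// `rawDriftScore` stands in for whatever embedding-distance metric is used.

const LAMBDA = 0.3;           // smoothing factor, as described above
const DRIFT_THRESHOLD = 0.35; // assumed value; tune per persona

let smoothedDrift = 0;

function updateDrift(rawDriftScore: number): { smoothed: number; needsRepair: boolean } {
  // EWMA: new estimate = λ * current observation + (1 - λ) * previous estimate
  smoothedDrift = LAMBDA * rawDriftScore + (1 - LAMBDA) * smoothedDrift;
  return { smoothed: smoothedDrift, needsRepair: smoothedDrift > DRIFT_THRESHOLD };
}

// Example: a single spike is dampened, while repeated drift eventually triggers a repair.
for (const score of [0.1, 0.2, 0.6, 0.7, 0.8]) {
  const { smoothed, needsRepair } = updateDrift(score);
  console.log(smoothed.toFixed(3), needsRepair);
}
```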
**How Echo Solves Persona Drift**
Echo isn’t a prompt hack — it’s a **middleware layer** between the model and your app.
Here’s what it achieves:
* ✅ Keeps tone and behavior consistent over 100+ turns
* ✅ Works across different model APIs (OpenAI, Anthropic, Gemini, Mistral, etc.)
* ✅ Detects when your agent starts “breaking character”
* ✅ Repairs the drift automatically before users notice
* ✅ Logs every drift/repair cycle for compliance and tuning
Think of Echo as **TCP/IP for language consistency** — a control layer that keeps conversations coherent no matter how long they run.
# 🤝 Looking for Early Test Partners (Free)
We’re opening up **free early access** to Echo’s SDK and dashboard.
If you’re building:
* AI agents that must stay on-brand or in-character
* Customer service bots that drift into nonsense
* Educational or compliance assistants that must stay consistent
We’d love to collaborate.
Early testers will get:
* 🔧 Integration help (JS/TS middleware or API)
* 📈 Drift metrics & performance dashboards
* 💬 Feedback loop with our core team
* 💸 Lifetime discount when the pro plan launches
👉 Try it here: [**github.com/Seanhong0818/Echo-Mode**](https://github.com/Seanhong0818/Echo-Mode)
If you’ve seen persona drift firsthand — I’d love to hear your stories or test logs.
We believe this problem will define the *next layer of AI infrastructure*: reliability for language itself. | 2025-10-09T03:36:33 | https://www.reddit.com/r/LocalLLaMA/comments/1o1vt99/persona_drift_why_llms_forget_who_they_are_and/ | Medium_Charity6146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1vt99 | false | null | t3_1o1vt99 | /r/LocalLLaMA/comments/1o1vt99/persona_drift_why_llms_forget_who_they_are_and/ | false | false | self | 0 | null |
Qwen3-VL MLX support incoming, thanks to Prince Canuma | 68 | https://huggingface.co/mlx-community/Qwen3-VL-30B-A3B-Instruct-4bit
https://huggingface.co/mlx-community/Qwen3-VL-235B-A22B-Instruct-4bit | 2025-10-09T03:26:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o1vmby/qwen3vl_mlx_support_incoming_thanks_to_prince/ | No_Conversation9561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1vmby | false | null | t3_1o1vmby | /r/LocalLLaMA/comments/1o1vmby/qwen3vl_mlx_support_incoming_thanks_to_prince/ | false | false | self | 68 | {'enabled': False, 'images': [{'id': 'Dvph43KKs5p4S-PyVjVXVGvU2jLw-_nJIJIKiiGmftI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Dvph43KKs5p4S-PyVjVXVGvU2jLw-_nJIJIKiiGmftI.png?width=108&crop=smart&auto=webp&s=f2e5241f8ca96819484ab11227c6c070fd92a20c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Dvph43KKs5p4S-PyVjVXVGvU2jLw-_nJIJIKiiGmftI.png?width=216&crop=smart&auto=webp&s=627beec31584257c26fc59595abebd219b4b326e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Dvph43KKs5p4S-PyVjVXVGvU2jLw-_nJIJIKiiGmftI.png?width=320&crop=smart&auto=webp&s=4eeb72ce4e2572286c7eb64d359143e12ae050b2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Dvph43KKs5p4S-PyVjVXVGvU2jLw-_nJIJIKiiGmftI.png?width=640&crop=smart&auto=webp&s=45fd9068fb09202aed7907fb6c239fe249270fe5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Dvph43KKs5p4S-PyVjVXVGvU2jLw-_nJIJIKiiGmftI.png?width=960&crop=smart&auto=webp&s=930f158a4488bf8e95b7e1319d05c7fbd86b8acd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Dvph43KKs5p4S-PyVjVXVGvU2jLw-_nJIJIKiiGmftI.png?width=1080&crop=smart&auto=webp&s=79137686cf32a994cdd1e82e00ca1dc028592979', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Dvph43KKs5p4S-PyVjVXVGvU2jLw-_nJIJIKiiGmftI.png?auto=webp&s=ef6c35e7b24740bc173b4b2c49051e7b81f4627b', 'width': 1200}, 'variants': {}}]} |
Pdf | 0 | 2025-10-09T03:08:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o1va37/pdf/ | Suspicious-Job714 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1va37 | false | null | t3_1o1va37 | /r/LocalLLaMA/comments/1o1va37/pdf/ | false | false | 0 | null | ||
Any VL model with size between qwen 30b a 3 b and qwen 235b? | 2 | I have 80GB of VRAM; it can handle models larger than 30B but not enough for 235B. Are there any vision-language models I could run that would max out the VRAM? | 2025-10-09T02:58:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o1v2sa/any_vl_model_with_size_between_qwen_30b_a_3_b_and/ | Striking-Warning9533 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1v2sa | false | null | t3_1o1v2sa | /r/LocalLLaMA/comments/1o1v2sa/any_vl_model_with_size_between_qwen_30b_a_3_b_and/ | false | false | self | 2 | null |
What practical large models can the AI MAX 395 deploy, and where exactly can they be used? | 1 | Before discussing the ROCm ecosystem for the AI MAX 395, we must acknowledge that support for this APU is far from ideal. To date, only llama.cpp and its derivative applications (such as Ollama) offer stable support for this APU. As part of the AMD family, the AI MAX 395—despite not belonging to the Ryzen series—has received little attention from ROCm, similar to how XPUs, Moore Threads, and Ascend are treated, and possibly even worse.
As early as this year, ROG took the lead in supporting the AI MAX 395 by launching the first laptop equipped with it—the ROG Flow Z13. Many users conducted large model tests on it quite early. However, not only was there the issue of AMD's ecosystem maturity, but back then, dense models were still dominant. In that environment, the 8060S integrated GPU, which rivals the RTX 4060, still struggled performance-wise. On the llama3.1-70B model, it delivered a laughable result of just 2.11 tokens/sec… Anyone who saw that number at the time would have serious doubts about the 395.
Only with subsequent technological progress has the situation slowly begun to improve.
On the model front, the growing maturity of MOE (Mixture of Experts) models has enabled sparse models with large parameters but small active components to gain favor among users thanks to their computational efficiency. In China, DeepSeek-R1's 671B-A37B fired the first shot, while Qwen3's 30B-A3B brought this concept into the mainstream. Earlier, after seeing some discrete GPU benchmarks achieving speeds of tens of tokens per second, I couldn't help but exclaim: "Spring is finally coming for the 395."
Then, only on September 24, 2025, was a preview version of ROCm for the AI MAX 395 finally released, giving owners a faint glimpse of hope for the future, yet its performance remained far from ideal compared to Vulkan.
|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|gemma3 1B BF16|1.86 GiB|999.89 M|Vulkan|99|pp512|3328.18 ± 19.96|
|gemma3 1B BF16|1.86 GiB|999.89 M|Vulkan|99|tg128|97.67 ± 0.37|
|gemma3 1B BF16|1.86 GiB|999.89 M|ROCm|99|pp512|9606.12 ± 181.92|
|gemma3 1B BF16|1.86 GiB|999.89 M|ROCm|99|tg128|84.78 ± 0.02|
|qwen3moe 30B.A3B Q8\_0|33.51 GiB|30.53 B|Vulkan|99|pp512|832.89 ± 3.30|
|qwen3moe 30B.A3B Q8\_0|33.51 GiB|30.53 B|Vulkan|99|tg128|45.96 ± 0.05|
|qwen3moe 30B.A3B Q8\_0|33.51 GiB|30.53 B|ROCm|99|pp512|612.56 ± 1.34|
|qwen3moe 30B.A3B Q8\_0|33.51 GiB|30.53 B|ROCm|99|tg128|42.19 ± 0.03|
|ernie4\_5-moe 21B.A3B Q8\_0|24.81 GiB|21.83 B|Vulkan|99|pp512|1122.06 ± 7.80|
|ernie4\_5-moe 21B.A3B Q8\_0|24.81 GiB|21.83 B|Vulkan|99|tg128|45.95 ± 0.02|
|ernie4\_5-moe 21B.A3B Q8\_0|24.81 GiB|21.83 B|ROCm|99|pp512|997.41 ± 2.92|
|ernie4\_5-moe 21B.A3B Q8\_0|24.81 GiB|21.83 B|ROCm|99|tg128|41.50 ± 0.02|
|gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|Vulkan|99|pp512|425.61 ± 3.59|
|gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|Vulkan|99|tg128|50.50 ± 0.09|
|gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|ROCm|99|pp512|705.49 ± 4.18|
|gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|ROCm|99|tg128|46.93 ± 0.04|
|bailingmoe2 100B.A6B Q4\_K\_M|58.13 GiB|102.89 B|RPC,Vulkan|99|pp512|322.21 ± 2.98|
|bailingmoe2 100B.A6B Q4\_K\_M|58.13 GiB|102.89 B|RPC,Vulkan|99|tg128|52.73 ± 0.03|
|bailingmoe2 100B.A6B Q8\_0|101.90 GiB|102.89 B|RPC,Vulkan|99|pp512|349.08 ± 7.05|
|bailingmoe2 100B.A6B Q8\_0|101.90 GiB|102.89 B|RPC,Vulkan|99|tg128|33.85 ± 0.82|
|hunyuan-moe A13B Q4\_K - Medium|43.48 GiB|80.39 B|Vulkan|99|pp512|273.00 ± 0.64|
|hunyuan-moe A13B Q4\_K - Medium|43.48 GiB|80.39 B|Vulkan|99|tg128|27.37 ± 0.01|
|hunyuan-moe A13B Q4\_K - Medium|43.48 GiB|80.39 B|ROCm|99|pp512|236.87 ± 3.58|
|hunyuan-moe A13B Q4\_K - Medium|43.48 GiB|80.39 B|ROCm|99|tg128|24.51 ± 0.06|
|glm4moe 106B.A12B Q4\_K-Medium|68.01 GiB|110.47 B|Vulkan|99|pp512|210.49 ± 2.37|
|glm4moe 106B.A12B Q4\_K-Medium|68.01 GiB|110.47 B|Vulkan|99|tg128|23.28 ± 0.26|
|glm4moe 106B.A12B Q4\_K-Medium|68.01 GiB|110.47 B|ROCm|99|pp512|166.70 ± 0.28|
|glm4moe 106B.A12B Q4\_K-Medium|68.01 GiB|110.47 B|ROCm|99|tg128|20.44 ± 0.00|
|llama4 17Bx16E (Scout) Q4\_K\_M|57.73 GiB|107.77 B|Vulkan|99|pp512|204.58 ± 0.99|
|llama4 17Bx16E (Scout) Q4\_K\_M|57.73 GiB|107.77 B|Vulkan|99|tg128|20.46 ± 0.05|
|llama4 17Bx16E (Scout) Q4\_K\_M|57.73 GiB|107.77 B|ROCm|99|pp512|290.51 ± 2.70|
|llama4 17Bx16E (Scout) Q4\_K\_M|57.73 GiB|107.77 B|ROCm|99|tg128|17.51 ± 0.01|
|qwen3moe 235B.A22B Q3\_K-Medium|96.99 GiB|235.09 B|Vulkan|99|pp512|113.36 ± 0.94|
|qwen3moe 235B.A22B Q3\_K-Medium|96.99 GiB|235.09 B|Vulkan|99|tg128|17.13 ± 0.02|
|qwen3moe 235B.A22B Q3\_K-Medium|96.99 GiB|235.09 B|ROCm|99|pp512|134.17 ± 0.66|
|qwen3moe 235B.A22B Q3\_K-Medium|96.99 GiB|235.09 B|ROCm|99|tg128|14.16 ± 0.03|
Moreover, these tests only reveal the surface issue; ROCm's performance degradation with long texts is even more severe. When the context reaches 30,000 characters, Vulkan can still maintain around 70% of its speed, but ROCm drops to merely about 40%. This means that although ROCm performs better in pp512 scores, its significantly slower output speed greatly undermines this advantage.
So I'm really curious—nowadays, for those of you who have already deployed large models locally, where exactly are you applying these large models in real-world production scenarios?
I can think of four specific domains, and I'd like to discuss with you the optimal models for each. I believe that on-device large models are still too limited compared to API-based solutions; their strength lies in excelling within specialized fields. Therefore, 395 needs to maintain more than one model to cover these four areas:
1: Translation quality from large models is still better than built-in web machine translation, but this demands high response speed. Which model can meet this requirement? Hunyuan-MT-7B? But its token generation speed is a bit slow...
|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|hunyuan-dense 7B Q8\_0|7.43 GiB|7.50 B|Vulkan|99|pp512|880.69 ± 8.13|
|hunyuan-dense 7B Q8\_0|7.43 GiB|7.50 B|Vulkan|99|tg128|27.21 ± 0.08|
|hunyuan-dense 7B Q8\_0|7.43 GiB|7.50 B|ROCm|99|pp512|786.02 ± 2.89|
|hunyuan-dense 7B Q8\_0|7.43 GiB|7.50 B|ROCm|99|tg128|25.94 ± 0.01|
2: Programming: currently, qwen3-coder-30B-A3B or Devstral-Small-2507 (but the latter has extremely weak performance on 395, resulting in painfully slow token generation, making it more suitable for GPUs like 5090 where VRAM is limited but performance is sufficiently powerful)
3: Document assistant, which should be used with front-end RAG, is suitable for local search when the number of industry-specific regulations is large. However, there's still no definitive answer regarding which RAG software works best with which local model—perhaps due to the lack of a standard. Not sure if the recent Tongyi-DeepResearch-30B-A3B is designed for this purpose...
4: Role-playing or writing novels... I won't go into specifics, but there should still be room for discussion on the question of "which model is suitable for this kind of task." And I'm still puzzled to this day—given how far large models have advanced, how come no one has developed a model specifically designed to provide emotional value? I suspect it's just because I'm too isolated in terms of information access. | 2025-10-09T02:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/1o1udni/what_practical_large_models_can_the_ai_max_395/ | Capable_Trash2065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1udni | false | null | t3_1o1udni | /r/LocalLLaMA/comments/1o1udni/what_practical_large_models_can_the_ai_max_395/ | false | false | self | 1 | null |
A CLI to scrape pages for agents by piggybacking on your browser fingerprint | 15 | I keep hitting a wall with bot detection when trying to get live web data for agents.
So I built [a CLI](https://github.com/8ta4/see) that tells a companion extension to fetch a page. The idea was to control my day-to-day browser to piggyback on its static fingerprint.
This isn't for serious scraping. Forget residential proxies or Clay. I designed this for developers who are just scraping by.
My ideal outcome is for someone to point me to an existing open-source project that does this better, so I can abandon this. If nothing better exists, maybe this solution is useful to someone else facing the same problem.
The tool is limited by design.
- It doesn't scale. It's built for grabbing one page at a time.
- It's dumb. It just gets the `innerText` (see the sketch after this list).
- The behavioral fingerprint is sterile. It doesn't fake any mouse or keyboard activity.
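To be clear about how little magic there is, the extraction step is conceptually just the following. This is an illustrative sketch, not the repo's actual code, and the message type name is an assumption.

```ts
// Hypothetical content-script sketch: grab the rendered text of the page and
// hand it back to the extension when asked.
chrome.runtime.onMessage.addListener((msg, _sender, sendResponse) => {
  if (msg?.type === "fetch-text") { // message name is an assumption
    sendResponse({ text: document.body.innerText });
  }
});
```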
Is a tool that just grabs text about to be subsumed by agents that can interact with pages? | 2025-10-09T02:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/1o1u9ia/a_cli_to_scrape_pages_for_agents_by_piggybacking/ | 8ta4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1u9ia | false | null | t3_1o1u9ia | /r/LocalLLaMA/comments/1o1u9ia/a_cli_to_scrape_pages_for_agents_by_piggybacking/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': '-WI2Qo73ogY9S2s2USBowbLE0ktOHpe0pMqz99lVe5k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-WI2Qo73ogY9S2s2USBowbLE0ktOHpe0pMqz99lVe5k.png?width=108&crop=smart&auto=webp&s=1372bb7b728b53d522b0b07d31efa5f6d37292c1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-WI2Qo73ogY9S2s2USBowbLE0ktOHpe0pMqz99lVe5k.png?width=216&crop=smart&auto=webp&s=417d42424498d00b0ded9f504f4f1846307d3bc5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-WI2Qo73ogY9S2s2USBowbLE0ktOHpe0pMqz99lVe5k.png?width=320&crop=smart&auto=webp&s=7c3e00bbb74cd4027bff3c0215f6b998e53879fc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-WI2Qo73ogY9S2s2USBowbLE0ktOHpe0pMqz99lVe5k.png?width=640&crop=smart&auto=webp&s=bb1f5ec406539c1145ece43767b26bdcb3377eac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-WI2Qo73ogY9S2s2USBowbLE0ktOHpe0pMqz99lVe5k.png?width=960&crop=smart&auto=webp&s=f55bc78a68dd4e0c9d60be35b53b3cefb617683a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-WI2Qo73ogY9S2s2USBowbLE0ktOHpe0pMqz99lVe5k.png?width=1080&crop=smart&auto=webp&s=0237bd10c9881f2001dc1ecb2e548598554d5e61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-WI2Qo73ogY9S2s2USBowbLE0ktOHpe0pMqz99lVe5k.png?auto=webp&s=19f30b1a458be79a038ea8d02f3a0fe1fac2765c', 'width': 1200}, 'variants': {}}]} |
Built a 1288x RTFx Parakeet Speech-to-Text server... Enjoy! | 14 | Needed to do a little mass-transcription so I hacked up a batching fastAPI Parakeet server and pushed it to the limit. Under ideal circumstances it manages up to 1,288x realtime on a 4090. It's using Parakeet 0.2 so it's English-only (feel free to hack together a 0.3 version if you need other languages, but note that you'll have to make some changes because v0.3 doesn't use the same code).
Built it out of an existing FastAPI Parakeet server, so it has a regular batching FastAPI server with VAD/streaming/automatic chunking at the /transcribe endpoint, and mass batch generation at the /transcribe\_batch endpoint if you want to mass-generate. Fastest batching happens if you prepare all the audio on your end at 16 kHz and send it in as batches of 128 one-minute audio files, but you can throw a huge file at the /transcribe\_batch endpoint and it'll chop it up on the server side and handle all the chunking for you.
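For reference, here is a minimal client sketch of what hitting the /transcribe endpoint might look like from Node 18+. The `file` field name, port 8000, and the response shape are assumptions for illustration; check the repo's FastAPI routes for the actual schema.

```ts
// Hypothetical client sketch (Node 18+, built-in fetch/FormData/Blob).
import { readFile } from "node:fs/promises";

async function transcribe(path: string): Promise<void> {
  const audio = await readFile(path);
  const form = new FormData();
  form.append("file", new Blob([audio]), "audio.wav"); // field name is an assumption

  const res = await fetch("http://localhost:8000/transcribe", {
    method: "POST",
    body: form,
  });
  console.log(await res.json()); // assumed to contain the transcript text
}

transcribe("audio.wav");
```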
This is ideal for a 24 GB card but will easily run on an 8 GB VRAM card as long as you keep your batch sizes down to 4-8 or fewer, and it should still provide well over real-time speeds on that hardware (it'll run out of VRAM if you push batching too far).
I've got it all set up to run inside a docker, just set it up and docker compose up for easy deployment. | 2025-10-09T01:51:43 | https://github.com/Deveraux-Parker/Nvidia_parakeet-tdt-0.6b-v2-FAST-BATCHING-API-1200x-RTFx/tree/main | teachersecret | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o1tqbr | false | null | t3_1o1tqbr | /r/LocalLLaMA/comments/1o1tqbr/built_a_1288x_rtfx_parakeet_speechtotext_server/ | false | false | default | 14 | null |
I adapted a psychometric theory to show that AI ability architecture makes AGI impossible regardless of scale, and how to measure AI ability for 1/100 to 1/1000 the price of current benchmarks | 0 | Hi r/localllama,
Recently, I noticed that the probability that an LLM solves a problem correctly is proportional to how common that problem is for humans: all LLMs are more likely to solve common problems than rare ones. The actual difficulty of a problem for humans matters less than its rarity; there are rare problems that are easy for humans, yet LLMs are unable to solve them because they are too rare.
Following this observation, I adapted psychometric theory into a theory of LLM abilities. I demonstrate that this property of LLMs makes it impossible to achieve AGI by scaling alone, and show how to use it to cut the cost of benchmark development. (I posted the draft here before but have worked on it a bit more since then.)
Unfortunately, I am too lazy to ever finish it in this century, but I think I have explained the general principles well enough. I have shown a demo evaluation that follows the principles of my paper in my Stochastic parrots post, so you can use it as inspiration. Hope there is someone else more motivated to finish my work, because I am lazy but at the same time, I was so fed up with BS hype advertised by some AI companies that demonstrates nothing but delusional ignorance and grift that I could not stay away.
The paper is here, incomplete: https://drive.google.com/file/d/1ezeRSoPqi4chxwgQBMUDNZyVVsWB_HdR/view?usp=drivesdk
Hope it's helpful! | 2025-10-09T01:43:40 | https://www.reddit.com/r/LocalLLaMA/comments/1o1tkck/i_adapted_a_psychometric_theory_to_show_that_ai/ | Massive-Shift6641 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1tkck | false | null | t3_1o1tkck | /r/LocalLLaMA/comments/1o1tkck/i_adapted_a_psychometric_theory_to_show_that_ai/ | false | false | self | 0 | null |
Extracting data from consumer product images | 1 | Hey everyone
I’m working on a project where I need to **extract product information from consumer goods** (name, weight, brand, flavor, etc.) **from real-world photos**, not scans.
The images come with several challenges:
* **angle variations**,
* **light reflections and glare**,
* **curved or partially visible text**,
* and **distorted edges** due to packaging shape.
I’ve considered tools like **DocStrange** coupled with **Nanonets-OCR/Granite**, but they seem more suited for **flat or structured documents** (invoices, PDFs, forms).
In my case, photos are taken by regular users, so lighting and perspective can’t be controlled.
The goal is to build a **robust pipeline** that can handle those real-world conditions and output structured data like:
```json
{
  "product": "Galletas Ducales",
  "weight": "220g",
  "brand": "Noel",
  "flavor": "Original"
}
```
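For the extraction step itself, one practical route is to skip brittle OCR entirely and ask a vision-language model for the fields directly. Below is a minimal sketch assuming a local OpenAI-compatible server (for example llama.cpp or LM Studio) exposing a VLM on port 8080; the model name, port, and prompt are placeholders rather than a tested setup.

```ts
// Hypothetical sketch: send a product photo to a local OpenAI-compatible VLM
// and ask for the fields as JSON. Endpoint, model name, and prompt are assumptions.
import { readFile } from "node:fs/promises";

async function extractProductInfo(imagePath: string) {
  const b64 = (await readFile(imagePath)).toString("base64");

  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "qwen2.5-vl-7b-instruct", // placeholder model name
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Extract product, weight, brand and flavor from this packaging photo. Reply with JSON only." },
            { type: "image_url", image_url: { url: `data:image/jpeg;base64,${b64}` } },
          ],
        },
      ],
      temperature: 0,
    }),
  });

  const data = await res.json();
  return JSON.parse(data.choices[0].message.content);
}

extractProductInfo("photo.jpg").then(console.log);
```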
If anyone has worked on consumer product recognition, retail datasets, or real-world labeling, I’d love to hear what kind of approach worked best for you — or how you combined OCR, vision, and language models to get consistent results. | 2025-10-09T01:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/1o1tfyi/extracting_data_from_consumer_product_images/ | kmuentez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1tfyi | false | null | t3_1o1tfyi | /r/LocalLLaMA/comments/1o1tfyi/extracting_data_from_consumer_product_images/ | false | false | self | 1 | null |
Hatred of Character AI and alternatives to Character AI | 1 | [removed] | 2025-10-09T01:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/1o1tdmq/ненависть_к_character_ai_и_альтернативы_character/ | kirillOS238 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o1tdmq | false | null | t3_1o1tdmq | /r/LocalLLaMA/comments/1o1tdmq/ненависть_к_character_ai_и_альтернативы_character/ | false | false | nsfw | 1 | null |