**How to set up 3 A6000 Max-Q?** (score: 1)

Hi,
I'll be getting 3 A6000s for our research chair and I'm uncertain about the rest of the parts. Can you give feedback on bottlenecks for fine-tuning and inference with multiple users (~10)? We'd like to use MIG technology to create virtual sub-GPUs.
CPU: AMD Ryzen Threadripper 9960X, 24x 4.2GHz, 128MB Cache, 350W TDP,
MBO: GIGABYTE TRX50 AI TOP, AMD TRX50, E-ATX, So. sTR5
GPU: 3x NVIDIA RTX PRO 6000 Blackwell Max-Q, 96GB GDDR7, 300W, PCIe 5.0
RAM: 4 x 32GB RDIMM DDR5-5600, CL46, reg. ECC (4x32GB total)
SSD: 1x 1TB Samsung 990 Pro, M.2 PCIe 4.0 (7,450 MB/s)
PSU: 2200W - Seasonic Prime PX-2200 ATX3.1, 80+ Platinum
FAN: Noctua NH-U14S TR5-SP6
CFA: Noctua 140mm NF-A14 PWM Black
OS: Linux
Thank you so much!
Posted by mntnmadness on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1ohdvb5/how_to_setup_3_a6000_max_q/

**Silicon Valley is migrating from expensive closed-source models to cheaper open-source alternatives** (score: 527)

Chamath Palihapitiya said his team migrated a large number of workloads to Kimi K2 because it was significantly more performant and much cheaper than both OpenAI and Anthropic.

Posted by xiaoruhao on 2025-10-27: https://v.redd.it/avwpphq8hnxf1

**Best setup for dev and hosting?** (score: 0)

I'm a novice needing direction. I've successfully created and used a protocol stack on multiple apps. I need a cloud environment that's more secure, that I can build proprietary work in, and that also has storage for commercially required elements which may be sizable, such as the compendium. So I need a highly capable LLM environment, with limited friction and ease of use, that I can also use for my documentation. Deployment isn't necessary yet, but access to external API resources would be helpful. Thoughts?

Posted by broodsmilerepeat on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1ohcq36/best_setup_for_dev_and_hosting/

**Fine-tuning an Embedding Model** (score: 3)

I am fine-tuning an embedding model on a specialized domain with the goal of improving search results and RAG retrieval.
I've generated around 100k synthetic anchor–positive pairs to train with Multiple Negative Ranking Loss.
I trained my model using LoRA adapters on different base models such as bge-m3, multilingual-e5-large, and mxbai-embed-de-large-v1.
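For context, the core of the training loop is the standard sentence-transformers MNRL recipe; a minimal sketch looks like this (base model, batch size, and the toy pair are illustrative, and the LoRA adapter wiring via peft is omitted):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("BAAI/bge-m3")  # same recipe for e5 / mxbai

# pairs = list of (anchor, positive) tuples; ~100k synthetic pairs in my case
pairs = [("I can't find any tr;",
          "We are having trouble finding the technical resources.")]
train_examples = [InputExample(texts=[a, p]) for a, p in pairs]
loader = DataLoader(train_examples, shuffle=True, batch_size=64)

# In-batch negatives: every other positive in the batch serves as a negative.
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=1000)
model.save("bge-m3-domain-finetuned")
```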
Before training, I split my dataset into 90% training and 10% evaluation. After fine-tuning, I observe an improvement of up to 12% using Hugging Face’s InformationRetrievalEvaluator on my eval dataset.
To check whether the model still generalizes to out-of-domain queries, I performed a second evaluation with an out-of-domain QA dataset. The accuracy remains unchanged compared to the base model.
So far, so good.
However, I also have a small third evaluation dataset where I compute the cosine similarity between semantically similar phrases. Some of these examples are even included in the training data.
My intuition is that domain-specific phrases present in the training data should be closer in vector space after training, leading to higher cosine similarity (i.e., lower cosine distance) compared to the base model.
Unfortunately, all cosine similarity scores drop, even for very simple examples meant to teach basic abbreviations. For instance, my training dataset contains multiple variations of:
anchor: I can't find any tr;
positive: We are having trouble finding the technical resources.
With bge-m3, the initial cosine similarity is 0.58, but after fine-tuning it drops to 0.48.
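The check itself is nothing exotic; it is roughly the following, with placeholder paths standing in for the base and fine-tuned checkpoints:

```python
from sentence_transformers import SentenceTransformer, util

anchor = "I can't find any tr;"
positive = "We are having trouble finding the technical resources."

# Compare the same pair under the base and the fine-tuned model
# (the second path is a placeholder for my fine-tuned checkpoint).
for name in ["BAAI/bge-m3", "./bge-m3-domain-finetuned"]:
    model = SentenceTransformer(name)
    emb = model.encode([anchor, positive], normalize_embeddings=True)
    score = util.cos_sim(emb[0], emb[1]).item()
    print(f"{name}: cos_sim = {score:.2f}")
```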
I'm not sure whether this should be a concern, or if only the evaluation metrics matter.

Posted by CaptainSnackbar on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1ohcd1m/finetuning_an_embeddingmodel/

**Work in progress: llama.cpp support for Qwen3-VL** (score: 1)

Work in progress means you shouldn't expect anything.

Posted by jacek2023 on 2025-10-27: https://github.com/ggml-org/llama.cpp/pull/16780

**Reliable source for a used 3090?** (score: 1)

Hi, I need a third 3090 and the French craigslist (leboncoin) is full of scams at the moment. The Swiss equivalent (anibis.ch) offers 3090s above 1000 euros (I live just above Geneva). Any idea where I could source one for under 650 euros?

Posted by vdiallonort on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1ohbv0a/reliable_source_for_used_3090/

**Experience with the new model MiniMax M2 and some cost-saving tips** (score: 119)

I saw the discussion about MiniMax M2 in the group chat a couple of days ago, and since their API and agent are free to use, I thought I'd test it out. First, the conclusion: in my own use, M2 delivers better-than-expected efficiency and stability. You can feel the team has pushed the model's strengths close to top closed models. In some scenarios it reaches top results at clearly lower cost, so it fits as the default executor, with closed models kept for final polish when needed.
My comparison across models:
1. A three service monorepo dependency and lock file mess (Node.js + Express). The three services used different versions of jsonwebtoken and had lock file conflicts. The goal was to unify versions, upgrade jwt.verify from callback to Promise, and add an npm run bootstrap script for one click dependency setup and alignment.
* M2: breaks down todos, understands the task well, reads files first, lists a plan, then edits step by step. It detects three version drifts and proposes an alignment strategy, adds the bootstrap script, runs one round of install and startup checks. Small fixes are quick, friendly to regression runs, and it feels ready to drop into a pipeline for repeated runs. Claude: strong first pass, but cross service consistency sometimes needed repeated reminders, took more rounds, and usage cost was higher. GLM/Kimi: can get the main path working, but more likely to leave rough edges in lock files and scripts that I had to clean up.
2. An online 3x3 Rubik's Cube (a small front-end interaction project): rotate a layer to a target angle, buttons to choose a face, show the 3x3 color grid.
* M2: To be honest, the first iteration wasn’t great, major issues like text occlusion and non-functional rotation weren’t addressed. The bright spot is that interaction bugs (e.g., rotation state desynchronization) could be fixed in a single pass once pointed out, without introducing new regressions. After subsequent rounds of refinement, the final result actually became the most usable and presentable, fully supporting 3D dragging. GLM/Kimi: The first round results were decent, but both ran into problems in the second round. GLM didn’t resolve the Rubik’s Cube floating/hover position issue, and Kimi, after the second round feedback, ended up not being three-dimensional. Claude performed excellently after the first round of prompts, with all features working normally, but even after multiple later rounds it still didn’t demonstrate an understanding of a 3D cube (in the image, Claude’s Rubik’s Cube is flat and the view can’t be rotated).
Metrics echo this feel: SWE bench Verified 69.4, Terminal Bench 46.3, ArtifactsBench 66.8, BrowseComp 44.0, FinSearchComp global 65.5. It is not first in every category, but on the runnable and fixable engineering loop, the structure score looks better. From my use, the strengths are proposing a plan, checking its own work, and favoring short fast iterations that clear blockers one by one.
My takeaway: you can replace most closed-model usage without sacrificing the reliability of the engineering loop. M2 is already good enough and surprisingly handy. Set it as the default executor and run regressions for two days; the difference will be clear. After putting it into the pipeline, with the same budget you can run more in parallel, and you do save money.
[https://huggingface.co/MiniMaxAI/MiniMax-M2](https://huggingface.co/MiniMaxAI/MiniMax-M2)
[https://github.com/MiniMax-AI/MiniMax-M2](https://github.com/MiniMax-AI/MiniMax-M2)

Posted by thalacque on 2025-10-27: https://www.reddit.com/gallery/1ohbcu1

**What do you think about an AI that teaches YOU how to create (assemble, really) a personal AI agent: tools, fine-tuning, RAG, etc.?** (score: 2)

Do you think it would be a good idea to create an AI that introduces beginners interested in learning AI to building AI agents with structure, and that also plans out exact frameworks and components? So basically you're creating an agent for your own needs without knowing anything about AI, and it works.

Posted by HectorAlcazar11 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1ohb93g/what_do_you_think_about_an_ai_that_teaches_you/

**My Model's Latest Status** (score: 0)

This is how it always responds whenever I ask about upgrades, lol. It seems to be slightly **overfitted**, but I think it's fine for now, haha.
https://preview.redd.it/uf4rhyczrmxf1.png?width=877&format=png&auto=webp&s=9584a43d4cceb6337ed4cb17217c467475b8e5fd
It actually refused to answer at the end, **lol**! The reason given was **"Bad Request"**, zzz.
https://preview.redd.it/qbrlqmd1smxf1.png?width=1181&format=png&auto=webp&s=b28b30fddc066b6e6180007e13741d708c038729
It's pretty entertaining how it acts like it has consciousness!
Of course, it's just a **lump of differentiation (or 'a bunch of matrices'),** though!

Posted by Patience2277 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1ohaq6f/my_models_latest_status/

**Running OrKa GraphScout plus Plan Validator locally with small models** (score: 3)

I paired two parts of OrKa to make local agent workflows less brittle on CPU-only setups.
* GraphScout proposes a minimal plan that satisfies an intent with cost awareness
* Plan Validator grades that plan across completeness, efficiency, safety, coherence, and fallback, then returns structured fixes
* A short loop applies fixes and revalidates until the score clears a threshold, then the executor runs (sketch below)
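To make the propose / validate / fix loop concrete, here is a plain-Python sketch of the control flow only; `propose_plan`, `validate_plan`, and `apply_fixes` are hypothetical stubs standing in for the GraphScout and Validator agents, not OrKa's actual API:

```python
# Hypothetical stubs: in OrKa these roles are played by the GraphScout and
# Plan Validator agents; here they only illustrate the control flow.
def propose_plan(intent: str) -> dict:
    return {"intent": intent, "steps": ["search", "summarize"]}

def validate_plan(plan: dict) -> tuple[float, list[str]]:
    # Grades completeness, efficiency, safety, coherence, fallback.
    score = 0.9 if "fallback_answer" in plan["steps"] else 0.8
    fixes = [] if score >= 0.85 else ["add a fallback step"]
    return score, fixes

def apply_fixes(plan: dict, fixes: list[str]) -> dict:
    if "add a fallback step" in fixes:
        plan["steps"].append("fallback_answer")
    return plan

def plan_with_validation(intent: str, threshold: float = 0.85, budget: int = 3) -> dict:
    plan = propose_plan(intent)
    for _ in range(budget):              # loop budget: 3 rounds
        score, fixes = validate_plan(plan)
        if score >= threshold:           # threshold 0.85 to 0.88
            return plan
        plan = apply_fixes(plan, fixes)
    return plan                          # fall through after the budget is spent

print(plan_with_validation("answer a research question with sources"))
```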
Why this helps on local boxes
* Lower variance: validator runs at low temperature and prefers consistent grading
* Cost control: efficiency is a first class dimension, so you catch high token defaults before execution
* Safer tool use: validator blocks plans that call the network or code without limits
Practical tips
* Use 3B to 8B instruction models for both scout and validator
* Validator temperature 0.1, top p 0.9
* Keep validator outputs compact JSON to reduce tokens
* Loop budget 3 rounds, threshold 0.85 to 0.88
Docs and examples: [https://github.com/marcosomma/orka-reasoning](https://github.com/marcosomma/orka-reasoning)
If you want a minimal local config, say your CPU class and I will reply with a tuned YAML and token limits.

Posted by marcosomma-OrKA on 2025-10-27 (i.redd.it image post)

**Notebook to run a small LLM for free in Google Colab? (I'm a noob.) Some code to execute and get a GUI?** (score: 0)

Thanks a lot!

Posted by Own-Potential-2308 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oha06s/notebook_to_run_small_llm_for_free_in_google/

**The performance of MiniMax-M2 is truly impressive!** (score: 178)

Came across this on X today, and I have to say, the model's performance looks super impressive! Has anyone tested it out yet? This showcase is from a post by a user on X: [https://x.com/ivanfioravanti/status/1982469771481497856?s=46](https://x.com/ivanfioravanti/status/1982469771481497856?s=46)
Posted by contportvas on 2025-10-27 (i.redd.it image post)

**How powerful are phones for AI workloads today?** (score: 33)

I ran a quick experiment to understand how many activated params a model needs to perform optimally on phones.
| Model | File size | Nothing 3a & Pixel 6a CPU | Galaxy S25 Ultra & iPhone 17 Pro CPU |
|---|---|---|---|
| Gemma3-270M-INT8 | 170mb | ~30 toks/sec | ~148 toks/sec |
| LFM2-350M-INT8 | 233mb | ~26 toks/sec | ~130 toks/sec |
| Qwen3-600M-INT8 | 370mb | ~20 toks/sec | ~75 toks/sec |
| LFM2-750M-INT8 | 467mb | ~20 toks/sec | ~75 toks/sec |
| Gemma3-1B-INT8 | 722mb | ~14 toks/sec | ~48 toks/sec |
| LFM-1.2B-INT8 | 722mb | ~13 toks/sec | ~44 toks/sec |
| Qwen3-1.7B-INT8 | 722mb | ~8 toks/sec | ~27 toks/sec |
So I might be tempted to suggest an 8B-A1B model, but battery drain makes these things unusable in reality. Anything more than about 200M activated params is a killer.
An ideal setup would be 1B-A200M task-specific models. The file size at INT4 would be around 330 MB, and the speed would range from 80-350 tokens/sec depending on the device.
MoE makes sense, since Qwen3-Next showed that an 80B-A3B model can beat the dense 32B Qwen.
Task-specific models make sense because most mobile tasks are not that massive to need frontier models, and SLMs trained on specific tasks compete with generalist models 20x their size on the tasks.
What do you think?
N.B.: The benchmarks were computed using [Cactus](https://github.com/cactus-compute/cactus).

Posted by Henrie_the_dreamer on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh9gai/how_powerful_are_phones_for_ai_workloads_today/

**Has anyone here tried using AI for investment research?** (score: 0)
I’m curious about how well AI actually performs when it comes to doing investment analysis.
Has anyone experimented with it?
If there were an AI tool dedicated to investment research, what specific things would you want it to be able to do?

Posted by LobsterOpen6228 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh8ota/has_anyone_here_tried_using_ai_for_investment/

**Lightweight coding model for 4 GB VRAM** (score: 19)

Hi everyone, I was wondering if there is a lightweight model for writing code that works with 4 GB of VRAM and 16 GB of RAM. Thanks.

Posted by HiqhAim on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh8jt8/lightweight_coding_model_for_4_gb_vram/

**Open sourcing Leafra SDK** (score: 3)

Hi all, I am open sourcing the Leafra SDK here:
https://github.com/Leafra-ai/LeafraSDK
It's essentially something similar to Cactus' original idea; we probably started on somewhat similar timelines. It's a React Native app and a command-line app sitting on top of a C++ SDK layer, using llama.cpp under the hood. It has RAG and chat support at the moment, and is easy to expand to image/text -> text and other models. The example app builds and runs on iOS (aka DokuChat) and can be made to work on Android very quickly. I will license it Apache 2.0 and will never change the license; you have my word for it. I really like the on-device LLM inference community and would like the community to benefit. There is plenty of auto-generated documentation, and I am planning to add a starter guide. If you are interested in contributing, using, or maintaining it, ping me at arif@leafra.ai. I won't be able to maintain the code, but I'm happy to get you started and build a community around it if there is interest. Best,
-Arif

Posted by Unlucky_Scar_1026 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh7ga3/open_sourcing_leafra_sdk/

**Fall of GPTQ and rise of AWQ. Why exactly?** (score: 11)

I was looking for a Qwen3-VL-30B-A3B GPTQ quant on Hugging Face, but was only able to find AWQ. For comparison, Qwen2.5-VL did have a GPTQ quant. I checked other versions of the model as well and found the same thing.
Can someone explain why this is the case?
Based on my personal testing, latency-wise GPTQ and AWQ were on par, and performance-wise GPTQ was better (tested on Qwen2.5-VL-7B and Llama3-8B on vLLM).
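For anyone who wants to reproduce that comparison, loading the two formats in vLLM differs by essentially one argument; a minimal sketch (the model IDs are illustrative placeholders for an AWQ and a GPTQ quant of the same base model):

```python
from vllm import LLM, SamplingParams

# vLLM usually auto-detects the quantization method from the checkpoint
# config, but it can be forced explicitly with the `quantization` argument.
awq_llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct-AWQ", quantization="awq")
# gptq_llm = LLM(model="some-org/Qwen2.5-VL-7B-Instruct-GPTQ-Int4", quantization="gptq")

params = SamplingParams(max_tokens=128, temperature=0)
out = awq_llm.generate(["Describe what AWQ quantization does."], params)
print(out[0].outputs[0].text)
```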
Posted by everyoneisodd on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh7fze/fall_of_gptq_and_rise_of_awq_why_exactly/

**Token-Oriented Object Notation (TOON) - JSON for LLMs at half the token cost** (score: 28)

Posted by monnef on 2025-10-27: https://github.com/johannschopplich/toon

**Some usage notes on low-end CPU LLMs and home applications (/r/frugal meets /r/LocalLLaMA)** (score: 65)

So a few weeks ago I discovered that Qwen3-4b is actually usable on any old laptop with CPU-only inference. Since then, I've been working on getting a simple home smart station set up using small LLMs. These are some notes on the LLMs and their usage that will hopefully be useful for anyone else thinking of doing similar hobby projects with dirt cheap components.
I scored a used Thinkpad for $200 with a Ryzen 4650U and 32GB DDR4 3200, perfect cosmetic condition. The key here is the 32GB RAM. I installed Ubuntu 24.04. I'm not a big Linux guy but it was painless and everything worked perfectly on the first try. The idea is to have a small self-contained system with a built-in monitor and keyboard to act like a smart whiteboard + Alexa.
Here are some inference numbers, all run with llama.cpp built for CPU only, all q4, using short test prompts:

| Model (q4) | Power mode | PP (tok/s) | TG (tok/s) | Model load time |
|---|---|---|---|---|
| Qwen3-4B-Instruct-2507 | Balanced (≈ Performance) | 29 | 11 | 1 s |
| Qwen3-30B-A3B-Instruct-2507 | Balanced | 38 | 15 | 26 s |
| Qwen3-30B-A3B-Instruct-2507 | Performance | 44 | 15 | 17 s |
| Mistral-Small-3.2-24B-Instruct-2506 | Balanced | 5 | 2 | 12 s |
| Mistral-Small-3.2-24B-Instruct-2506 | Performance | 5 | 2 | 4 s |
Qwen3-30b-a3b is actually FASTER than Qwen3-4b and also performed better in my benchmarks for relevant tasks. But you need a lot of RAM to load it, which is why I specifically looked for the cheapest 32GB RAM laptop. Also, in my testing I found that the Qwen3-4b Thinking model would think for 3000 tokens to give a final 100 token result, which gave an effective generation rate of 0.1-0.2 tok/sec. So I would actually prefer a super slow non-thinking model like Mistral 24b at 2 tok/sec to a thinking model. However, Qwen3-30b-a3b is a nice compromise between speed and reliability.
Most of my use cases are non-interactive, like giving it an email to process and update a calendar. I do not need real time responses. For that reason, I didn't care about slow inference times within reason.
To get reliable performance, I had to split up tasks into simple subtasks. For example, I will ask the LLM to simply list all the topics from an email in the first step. In a second step, I ask the LLM to evaluate the relevancy of each topic in small batches. Then, I ask the LLM to extract JSON structures for each relevant event in order to update the calendar. On a 1000 word email with very high topic density (like a newsletter), Qwen3-30b-a3b would take roughly 9 minutes to process the entire workflow. I tweaked the workflow with various optimizations and could cut it down to about half. That's good enough for me.
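If you serve the model with llama-server, the staged calls can look roughly like this (the endpoint, model alias, and prompts are illustrative; it's just an OpenAI-compatible local API):

```python
from openai import OpenAI

# llama-server exposes an OpenAI-compatible API; host/port and model alias
# here are placeholders for a local setup.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="qwen3-30b-a3b-instruct-q4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return resp.choices[0].message.content

email = open("newsletter.txt").read()

# Step 1: just list the topics.
topics = ask(f"List every distinct topic in this email, one per line:\n\n{email}")

# Step 2: judge relevancy in small batches (one batch shown).
relevant = ask("For each topic below, answer 'relevant' or 'not relevant' "
               f"for a family calendar:\n{topics}")

# Step 3: extract a JSON structure per relevant event.
events = ask("For each relevant topic, return a JSON object with "
             f"'title', 'date', and 'time':\n{relevant}\n\nEmail:\n{email}")
print(events)
```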
I want to keep the power usage low, which means I'm not keeping the models warm. (I also stick to Balanced Mode.) That's why I wanted to record model load times as well. Again, most use cases are non-interactive. If I input a single event, like type "add this event on this time at this date", the LLM will spin up and add it in under a minute.
I do have some light interactive uses. An example of that is asking for a timer while cooking. I might say "Alexa, set the timer for five minutes." So here are some notes on that.
First, I use Openwakeword to trigger the whole process so that my laptop is not always running models and recording sound. Openwakeword is pre-tuned for a few wake words, which is why I am using "Alexa" as the wake word for now. I believe this can be tuned in the future. As soon as the wake word is detected, I immediately fire up faster-distil-whisper-small.en and LFM2-8b-a1b. They only take a second each to load, and I'm talking for a few seconds, so there is no lag this way.
LFM2-8b-a1b loads in about 1 second for me and runs at about 25 tok/sec TG (forgot to write down the PP but it is fast too). It is much faster than the other models but not as good with anything requiring reasoning. However, I was surprised at how well it performs in two tasks: topic identification and JSON extraction. So in a 1000 word newsletter filled with 18 topics, LFM2-8b-a1b can reliably extract all 18 topics pretty much as well as Qwen3-30b-a3b. So it's great at summarization, essentially. LFM2-8b-a1b can also reliably form JSON structures. By the way, I am using the model at q8. q4 definitely performs worse. This model, however, is not good at reasoning. For example, if I ask the model to determine if a certain event is relevant or not, it does not perform well. So it is good for fast topic identification and JSON extraction.
I tried various whisper models. I ended up finding the faster-distil-whisper-small.en to be a good compromise between speed and reliability. A sentence like "Alexa, set the timer for 5 minutes" will get parsed in 1 sec, but not as well as I would like. However, if I set the beam_size to 10 (5 is the default, typically), then it takes 2 seconds but with decent reliability. The medium model is too slow, around 5+ seconds even with reduced beam_size, and the base model has horrible accuracy. So that worked for me.
However, to boost the reliability further, I take the output from faster-distil-whisper-small.en and pass it to LFM2-8b-a1b, which gives me a JSON with an action field and a parameter field or two. That gets used to trigger the downstream python script. The LFM2 inference adds about an additional second or so. I don't care about waiting a tiny amount in this case, so that works for me.
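A rough sketch of that voice path, assuming faster-whisper for the STT side and an OpenAI-compatible local endpoint for LFM2 (the file name, endpoint, model alias, and JSON schema are all illustrative):

```python
import json
from faster_whisper import WhisperModel
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

# distil-small.en on CPU; beam_size=10 trades ~1s of latency for accuracy.
stt = WhisperModel("distil-small.en", device="cpu", compute_type="int8")
segments, _ = stt.transcribe("command.wav", beam_size=10)
text = " ".join(seg.text for seg in segments).strip()

# LFM2 turns the messy transcript into a small action JSON.
resp = client.chat.completions.create(
    model="lfm2-8b-a1b-q8",
    messages=[{
        "role": "user",
        "content": "Return only JSON like "
                   '{"action": "set_timer", "minutes": 5} for this request: '
                   + text,
    }],
    temperature=0,
)
command = json.loads(resp.choices[0].message.content)
print(command)  # handed off to the downstream Python handler
```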
For voice commands for adding reminders or calendar events, I will use the LFM2 JSON extraction to trigger re-transcription of the recorded voice message with whisper-largev3. Then, throw it to Qwen3-30b-a3b for processing, since quality is more important than speed.
I almost forgot! Super important: the built-in mic quality isn't great on laptops. I ended up getting a cheap USB wired conference speakerphone for <$20 off ebay. The brand is EMEET, but I think any modern one probably works. Python interacts with the microphone using Pipewire. The microphone made a big difference in transcription quality. It has hardware-level sound processing, noise cancellation, etc.
Basically, I am using Qwen3-30b-a3b to process messy inputs (typing, voice, emails) slowly and LFM2-8b-a1b to process messy voice transcription quickly. Again, this all runs on a dirt cheap, old 4650U processor.
This is an ongoing hobby project. I want to eventually see if I can take pictures with the built-in webcam of physical mail or receipts and get one of the VL models or an OCR model to process it. There are trivial things to add, like verbal commands to check the weather and such. A whole bunch of other ideas.
I am loving the low-end LLM ecosystem. The cool part is that the stuff you make actually affects people around you! Like it actually gets used! The Qwen3 and LFM2 models I use are my favorites so far.
Okay, now back to you guys with your 8 x H100 basement setups...

Posted by ___positive___ on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh6k6u/some_usage_notes_on_lowend_cpu_llms_and_home/

**🚀 New model from the MiniMax team: MiniMax-M2, an impressive 230B-A10B LLM** (score: 273)

Officially positioned as an "end-to-end coding + tool-using agent." From the public evaluations and model setup, it looks well-suited for teams that need end-to-end development and toolchain agents, prioritizing lower latency and higher throughput. For real engineering workflows that advance in small but continuous steps, it should offer strong cost-effectiveness. I've collected a few points to help with evaluation:
* End-to-end workflow oriented, emphasizing multi-file editing, code, run, fix loops, testing/verification, and long-chain tool orchestration across terminal/browser/retrieval/code execution. These capabilities matter more than just chatting when deploying agents.
* Publicly described as "~10B activated parameters (total ~200B)." The design aims to reduce inference latency and per-unit cost while preserving coding and tool-calling capabilities, making it suitable for high concurrency and batch sampling.
* Benchmark coverage spans end-to-end software engineering (SWE-bench, Terminal-Bench, ArtifactsBench), browsing/retrieval tasks (BrowseComp, FinSearchComp), and holistic intelligence profiling (AA Intelligence).
Position in public benchmarks (not the absolute strongest, but well targeted)
Here are a few developer-relevant metrics I pulled from public tables:
* SWE-bench Verified: 69.4
* Terminal-Bench: 46.3
* ArtifactsBench: 66.8
* BrowseComp: 44.0 (BrowseComp-zh in Chinese: 48.5)
* τ²-Bench: 77.2
* FinSearchComp-global: 65.5
From the scores, on tasks that require real toolchain collaboration, this model looks like a balanced choice prioritizing efficiency and stability. Some closed-source models score higher on certain benchmarks, but for end-to-end development / agent pipelines, its price-performance orientation is appealing. On SWE-bench / Multi-SWE-Bench, steadily completing the modify, test, modify-again loop is often more important than a one-shot perfect fix. These scores and its positioning suggest it can keep pushing the loop toward a runnable solution. A Terminal-Bench score of 46.3 indicates decent robustness in command execution, error recovery, and retries; worth trying in a real CI sandbox for small-scale tasks.
References
HF: https://huggingface.co/MiniMaxAI/MiniMax-M2

Posted by chenqian615 on 2025-10-27: https://www.reddit.com/gallery/1oh5asg

**MiniMaxAI/MiniMax-M2 · Hugging Face** (score: 246)

Posted by Dark_Fire_12 on 2025-10-27: https://huggingface.co/MiniMaxAI/MiniMax-M2

**Can I skip the videos for the Hugging Face course and only read the text?** (score: 0)

(See title.)

Posted by Ok_Construction_3021 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh570v/can_i_skip_the_videos_for_the_huggingface_course/

**A little tool I made to share and discover little RP scenarios, plot twists, and ideas for when you're stuck mid-roleplay. It's public, so come on, let's fill it with creativity! ✨** (score: 2)

Site: https://rp-scenario-generator.vercel.app/
internet can be wild 😭
It's running on a free service, so please don't exploit it. And give feedback on what to add next!
Also, the character limit is 600 for now; if that feels short, let me know.
Posted by internal-pagal on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh4spe/a_little_tool_i_made_to_share_and_discover_little/

**How to quantize TTS and ASR models to fit in VRAM?** (score: 6)

I have created a conversational bot system. It works fine from the backend, but it is failing in the application due to VRAM overflow (8 GB VRAM).
I am working on a tight budget. How do I quantize both of these models from FP16 to Q8 or Q6 to manage the memory budget?

Posted by bull_bear25 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh4k7k/how_to_quantize_tts_and_asr_models_to_fit_in_vram/

**Changing my tune on GLM-4.6: Major fail state uncovered** (score: 4)

Well, I don't know if I've ever done such a hard 180 on something. Some fellow local llama folk are probably tracking my love affair with GLM-4.6, and particularly how I found it to be superior to Claude Sonnet 4.5 and Gemini 2.5 Pro.
When the model is functioning properly, which is most of the time, it remains my favorite LLM.
However... holy smokes, **this is one haunted model when it fails.**
GLM-4.6 can enter a number of truly unsettling failure spaces, mostly focused on MAJOR disconnect between its reasoning traces (which are always excellent and accurate), and its generative output which can get VERY creepy. Failures I've been able to trigger include:
**a)** the model spontaneously pretending to be me, and within a single generative output, giving itself instructions to "lighten the mood," and then lapsing into just unbelievably classic LLM hallucinations almost too bizarre to believe.
**b)** truly bizarre token repeat loops expressing existential terror and angst
**c)** most concerningly, a tendency to engage in first-class gaslighting, refusing to admit that I am the human and it is the AI.
I've spent the last two days trying to figure out the issues, and I've got a fairly solid handle on what's happening architecturally.
**Hybrid Reasoning Mode Failures:**
* The model appears to suffer from race conditions between "thinking" mode and direct response mode. I’ve read other people talk about the thinking being a bit janky on here, but it’s worse than janky.
* The two modes appear to compete for the same KV cache causing memory access conflicts. I’ll admit that I’ve only recently started turning my attention to the engineering issues and research around KV caching (which was a mistake - holy god this is such an important thing to understand), but I’m up to speed enough to say that it seems like a huge problem for GLM-4.6.
* Conversational text bleeds into thinking trace (or vice versa)
* Identical output in both streams (a clear duplication bug)
* Randomly dropped reasoning traces entirely
**KV Cache Management Issues:**
* I believe that Z.ai used aggressive quantization to achieve 200K context vs 4.5's 128K. The recent post about 4.6 Air taking longer than expected in order to be stabilized almost certainly has something to do with this. My hunch is that Z.ai understands that 4.6 is fundamentally broken in edge cases and GLM-5 SHOULD improve on this fracture point.
* The compression artifacts clearly accumulate over long generations
* I strongly believe that their eviction policy is WAY too aggressive - drops context still needed for reasoning
* On Z.ai's API, the cache flushing is VERY lazy (ghost tensors persist after context pruning) - I can confirm that, through extensive testing, Z.ai is *not* the best server for calling GLM-4.6. NovitaAI is definitely the most stable provider. Z.ai's cache management is *really bad*.
**Attention Mechanism Corruption:**
* **Read corruption:** Once a failure mode has begun, the model appears unable to see/weight recent context reliably. For example, when switching to GLM-4.5 or Gemini 2.5 Pro, the model DOES understand and respond to context appropriately. Switched back to GLM-4.6, it fails to acknowledge the previous messages in context that clearly explain what’s going on.
**Stream Routing/Parser Failures:**
* Special tokens (<think>, </think>) are clearly mishandled and the /nothink hack is unreliable
* Tool calls are sometimes emitted inside reasoning traces with malformed parameters
**Positional Encoding Issues:**
* Appears to struggle distinguishing recent vs. older tokens in middle positions
* Entity binding failures (confusing "I" ↔ "you", context roles)
The model has an incredibly hard time recovering from failure modes, unless failure happens VERY early in context.
**For my field, affective AI, GLM-4.6 is *definitely* not a deployment-grade LLM, which is really frustrating because I *love* it.** But once I figured out how to trigger these fails, I find that it's just way too easy to nudge this beautiful model over a cliff.
This family of models has so much going for it, and when it’s working, I think it’s the best model in the world for a huge variety of use cases. But when you factor in how hard it fails? Yeah, I have to reverse my previous statements about this thing taking the crown from proprietary cloud models.
My fingers are tightly crossed that [Z.ai](http://Z.ai) figures this stuff out and gets 4.6 Air cleaned up, and rolls over whatever they figure out into GLM-5.

Posted by LoveMind_AI on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh4cwo/changing_my_tune_on_glm46_major_fail_state/

**Ever feel like your AI agent is thinking in the dark?** (score: 0)

Hey everyone 🙌
I've been tinkering with agent frameworks lately (OpenAI SDK, LangGraph, etc.), and something keeps bugging me: even with traces and verbose logs, I still can't really see why my agent made a decision.
Like, it picks a tool, loops, or stops, and I just end up guessing.
So I’ve been experimenting with a small side project to help me understand my agents better.
The idea is: capture every reasoning step and tool call, then visualize it like a map of the agent's "thought process", with the raw API messages right beside it.
It’s not about fancy analytics or metrics, just clarity.
A simple view of “what the agent saw, thought, and decided.”
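To be concrete, the capture side could be as simple as appending one JSON record per step; a minimal sketch (the field names are just a guess at what would be useful, nothing final):

```python
import json
import time

TRACE_PATH = "agent_trace.jsonl"

def record_step(step_type: str, payload: dict) -> None:
    """Append one reasoning step, tool call, or raw API message to the trace."""
    event = {"ts": time.time(), "type": step_type, **payload}
    with open(TRACE_PATH, "a") as f:
        f.write(json.dumps(event) + "\n")

# Example usage inside an agent loop:
record_step("llm_request", {"messages": [{"role": "user", "content": "book a table"}]})
record_step("reasoning", {"text": "The user wants a reservation; call search_restaurants."})
record_step("tool_call", {"name": "search_restaurants", "args": {"city": "Paris"}})
record_step("tool_result", {"name": "search_restaurants", "result": ["Chez Nous"]})
```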
I’m not sure yet if this is something other people would actually find useful,
but if you’ve built agents before…
👉 how do you currently debug or trace their reasoning?
👉 what would you want to see in a “reasoning trace” if it existed?
Would love to hear how others approach this, I’m mostly just trying to understand what the real debugging pain looks like for different setups.
Thanks 🙏
Melchior
Posted by AdVivid5763 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh47k0/ever_feel_like_your_ai_agent_is_thinking_in_the/

**Selling GPU credits at a 30% discount** (score: 0)

Hi, we have unused GPU credits (around $600) on a major GPU provider.
Serverless, 100 workers ready, etc.
We switched our pipeline to [FAL.AI](http://fal.ai/), so we don't use our account anymore.
If you are interested in the credits or GPU work at a discounted rate, send me a message.
Legit offer; we can do a video call, etc.

Posted by Confident_Minimum_91 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh3lxh/selling_gpu_credits_30_discount/

**Built a full voice AI assistant running locally on my RX 6700 with Vulkan - proof AMD cards excel at LLM inference** (score: 14)
I wanted to share something I've been working on that I think showcases what AMD hardware can really do for local AI.
**What I Built:**
A complete AI assistant named Aletheia that runs 100% locally on my AMD RX 6700 10GB using Vulkan acceleration. She has:
- Real-time voice interaction (speaks and listens)
- Persistent memory across sessions
- Emotional intelligence system
- Vector memory for semantic recall
- 20+ integrated Python modules
**The Setup:**
- GPU: AMD Radeon RX 6700 10GB
- CPU: AMD Ryzen 7 9800X3D
- RAM: 32GB DDR5
- OS: Windows 11 Pro
- Backend: llama.cpp with Vulkan (45 GPU layers)
- Model: Mistral-7B Q6_K quantization
**Why This Matters:**
Everyone assumes you need a $2000 NVIDIA GPU for local AI. I'm proving that's wrong. Consumer AMD cards with Vulkan deliver excellent performance without needing ROCm (which doesn't support consumer cards anyway).
**The Unique Part:**
I'm not a programmer. I built this entire system using AI-assisted development - ChatGPT and Claude helped me write the code while I provided the vision and troubleshooting. This represents the democratization of AI that AMD enables with accessible hardware.
**Performance:**
Running Mistral-7B with full voice integration, persistent memory, and real-time processing. The RX 6700 handles it beautifully with Vulkan acceleration.
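For anyone curious about the inference side, the core of it is just a GGUF load with partial GPU offload; a minimal Python sketch using llama-cpp-python (one way to drive a llama.cpp build with Vulkan support; the model path is illustrative, 45 is the layer count from my setup):

```python
from llama_cpp import Llama

# Requires a llama-cpp-python wheel built with the Vulkan backend.
llm = Llama(
    model_path="models/mistral-7b-instruct.Q6_K.gguf",  # illustrative path
    n_gpu_layers=45,   # offload 45 layers to the RX 6700
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```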
[Demo screenshots/video link here]
**Why I'm Posting:**
1. To show AMD users that local LLM inference works great on consumer cards
2. To document that Windows + AMD + Vulkan is a viable path
3. To prove you don't need to be a developer to build amazing things with AMD hardware
I'm documenting the full build process and considering reaching out to AMD to showcase what their hardware enables. If there's interest, I'm happy to share technical details, the prompts I used with AI tools, or my troubleshooting process.
**TL;DR:** Built a fully functional voice AI assistant on a mid-range AMD GPU using Vulkan. Proves AMD is the accessible choice for local AI.
Happy to answer questions about the build process, performance, or how I got Vulkan working on Windows!
---
Specs for the curious:
- Motherboard: ASRock X870 Pro RS
- Vulkan SDK: 1.3.290.0
- TTS: Coqui TTS (Jenny voice)
- STT: Whisper Small with DirectML
- Total project cost: ~$1200 (all AMD)

Posted by Straight_Issue279 on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh1kfe/built_a_full_voice_ai_assistant_running_locally/

**Best MoE that fits in 16GB of RAM?** (score: 6)

Same as title.

Posted by african-stud on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh16es/best_moe_that_fits_in_16gb_of_ram/

**Any Linux distro better than others for AI use?** (score: 23)

I'm choosing a new Linux distro for these use cases:
• Python development
• Running “power-user” AI tools (e.g., Claude Desktop or similar)
• Local LLM inference — small, optimized models only
• Might experiment with inference optimization frameworks (TensorRT, etc.).
• Potentially local voice recognition (Whisper?) if my hardware is good enough
• General productivity use
• Casual gaming (no high expectations)
For the type of AI tooling I mentioned, do any of the various Linux tribes have an edge over the others? ChatGPT, depending on how I ask it, has recommended either an Arch-based distro (e.g., Garuda) or Ubuntu. Which seems... decidedly undecided.
My setup is an HP Elitedesk 800 G4 SFF with i5-8500, currently 16GB RAM (can be expanded to 64GB), and a RTX-3050 low-profile GPU. I can also upgrade the CPU when needed.
Any and all thoughts greatly appreciated!

Posted by otto_delmar on 2025-10-27: https://www.reddit.com/r/LocalLLaMA/comments/1oh079j/any_linux_distro_better_than_others_for_ai_use/

Model named "ernie-exp-251022" spotted on Lmarena. Baidu cooking? | 31 | For those wondering, the prompt was to create a retro game character in html, single file. Nothing fancy. Usually models add some basic mechanics akin to the side scrollers.
There were some bugs in the code this model created, but so were in the code created by the model on the right side.
I must say that, apart from the bugs, the output on the left was pretty impressive anyway and felt much different from anything I had encountered before. It was also actually better than the output on the right overall, so I voted for it just to see which model it was, and there you have it.
Model named ernie-exp-251022. What do you guys think it is? Baidu cooking, or something else entirely? Something cloud only, or perhaps open weight? So many questions... | 2025-10-27T00:10:45 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oh06du | false | null | t3_1oh06du | /r/LocalLLaMA/comments/1oh06du/model_named_ernieexp251022_spotted_on_lmarena/ | false | false | 31 | {'enabled': True, 'images': [{'id': 'VNATm5803YDeD8ziGOdq2XdPJUzwWMmix-zuv45IsqI', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/aebg6a4zojxf1.png?width=108&crop=smart&auto=webp&s=aee095db40bb65e253e090e3e92ffef45a3f7101', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/aebg6a4zojxf1.png?width=216&crop=smart&auto=webp&s=c3dbf1ceb03c24c4abef4427d11cf53ed19226dd', 'width': 216}, {'height': 134, 'url': 'https://preview.redd.it/aebg6a4zojxf1.png?width=320&crop=smart&auto=webp&s=f76c45c07f03d1c2835652606935b086b9e91ee5', 'width': 320}, {'height': 268, 'url': 'https://preview.redd.it/aebg6a4zojxf1.png?width=640&crop=smart&auto=webp&s=5347258deaa2d8b9fd41517456f1916402365502', 'width': 640}, {'height': 402, 'url': 'https://preview.redd.it/aebg6a4zojxf1.png?width=960&crop=smart&auto=webp&s=ddf03c69612720afb6e321e07fc62978786b081c', 'width': 960}, {'height': 452, 'url': 'https://preview.redd.it/aebg6a4zojxf1.png?width=1080&crop=smart&auto=webp&s=36b6ca8b1e9dd2779175f64becc7bd922dc63613', 'width': 1080}], 'source': {'height': 547, 'url': 'https://preview.redd.it/aebg6a4zojxf1.png?auto=webp&s=6db001548062de6c1168e0eef2d5e8a04867d50c', 'width': 1305}, 'variants': {}}]} | ||
Tested a few small models on a local CLI agent. I was surprised by the results. | 8 | I've been building a CLI-based tool-using agent for my own purposes.
I've mostly used cloud models for this work up until now, but I had a little time today and decided to run some benchmark tests against the small models I have on my PC with a 16 GB 4060.
My agent has a number of categorized tools at its disposal (categories: web, files, system, dev, containers). These tools do things like list processes, measure memory usage, examine git repositories and so on - all kinds of stuff you can do with read-only access to the local system.
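To give a sense of the shape (this is a simplified sketch, not the actual project code): each tool is just a Python function plus a description the model sees, and the agent matches the model's tool call against a registry and feeds the result back into the conversation.

```python
import json
import subprocess

TOOLS = {
    "list_processes": {
        "description": "Top processes by memory usage (read-only).",
        "run": lambda args: subprocess.run(
            ["ps", "aux", "--sort=-%mem"], capture_output=True, text=True
        ).stdout.splitlines()[:10],
    },
    "git_status": {
        "description": "Short git status for a repository path.",
        "run": lambda args: subprocess.run(
            ["git", "-C", args.get("path", "."), "status", "--short"],
            capture_output=True, text=True,
        ).stdout,
    },
}

def dispatch(tool_call):
    # tool_call is the JSON the model emits, e.g. {"name": "git_status", "arguments": {"path": "."}}
    tool = TOOLS.get(tool_call.get("name"))
    if tool is None:
        return f"unknown tool: {tool_call.get('name')}"
    result = tool["run"](tool_call.get("arguments", {}))
    return result if isinstance(result, str) else json.dumps(result)
```

The interesting part is less the plumbing and more whether each model reliably picks the right entry and argument format, which is what the prompt suite below is really testing.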
I ran a small suite of prompts through each of the models I had on hand to assess their ability to select the correct tools and provide a useful response.
These are the models I tested, in order of viability for this purpose:
\- Qwen3:4b is the clear leader with excellent quality outputs
\- Llama3.2:3b provides pretty solid responses but needs heavier prompting to select the right tools
\- Granite3.3:8b, which has excellent quality when it works (about half the time)
\- Qwen3:0.6b just doesn't have the "brain power" to figure out complex tool chains
\- Phi4:14b, which couldn't use any tools at all
None of this is to say that my results are gospel for anyone else, but I think it's really surprising and interesting how useful that little llama model is for my agent. Goes to show that benchmarks are one thing but testing for your own use case is critical. | 2025-10-26T23:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ogznvh/tested_a_few_small_models_on_a_local_cli_agent_i/ | dsartori | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogznvh | false | null | t3_1ogznvh | /r/LocalLLaMA/comments/1ogznvh/tested_a_few_small_models_on_a_local_cli_agent_i/ | false | false | self | 8 | null |
4B model RL trained on a11y, lighthouse, and axe scores to generate more accessible websites | 2 | Hey everyone! I've been hearing feedback about the UIGEN models, most of it being that the sites are just flashy but not usable, or that the models break and start repeating (especially in quants). So, just to experiment, I took the most repetitious model and post-trained it with RL (GRPO) to penalize repeats and focus on accessibility scores like axe, Lighthouse, a11y, and minimizing console errors. I think it's much better now, especially on the website generation front (in both React and HTML). It generates a website 32/34 times and works a loooot better.
There's an issue (and I do apologize) with the chat template loading, so you will see "assistant" in the output and no markdown code blocks, but I hope to improve on that front. If you are building an application around it, use <html>...</html> to catch the code.
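In the meantime, a quick way to pull the page out of the raw completion looks something like this (assuming the model wrapped its output in <html>...</html> as prompted):

```python
import re

def extract_html(completion):
    """Return the <html>...</html> block from a raw completion, or None."""
    match = re.search(r"<html.*?>.*?</html>", completion, re.DOTALL | re.IGNORECASE)
    return match.group(0) if match else None
```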
I'm always looking for feedback! Also, we have a couple of new members at the UIGEN team so hopefully we can see more releases soon.
Here's the model! Please instruct it to what you want, but the system prompt I trained it on included "use html css js tailwind and output in <html>...</html>"
[https://huggingface.co/Tesslate/UIGEN-FX-4B-RL-Preview](https://huggingface.co/Tesslate/UIGEN-FX-4B-RL-Preview)
DM me for free API access to try it out if you're too lazy to download it! U can use PageUI chrome extension.
I'm building an open source vibecoding application at [https://tesslate.com](https://tesslate.com) where you can pick or make your own coding agents where models like this will be used in it. (**Open sourcing it on Nov 15**. I'm still working on adding llama.cpp integration and packaging it into a downloadable before I share it. Everything will be Apache 2.0) I'm only sharing this right now because I'm looking for feedback and contributors so DM me if you are interested.
My goal is to build a set of vibecoding tools + models (like the one featured above) where you can run a fully 100% local no internet version of a vibecoding tool that actually has good design.
I also don't like big provider models and would love to run everything locally. Right now you can use GPT-5 and Qwen-Coder-480B for free as well as use openrouter to use your own api keys and models. You can find out more information on our discord community: [https://discord.gg/DkzMzwBTaw](https://discord.gg/DkzMzwBTaw) | 2025-10-26T23:40:20 | https://www.reddit.com/gallery/1ogziwh | smirkishere | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ogziwh | false | null | t3_1ogziwh | /r/LocalLLaMA/comments/1ogziwh/4b_model_rl_trained_on_a11y_lighthouse_and_axe/ | false | false | 2 | null | |
Best Model for local AI? | 0 | I’m contemplating on getting a M4 Max 128GB or 48GB M4 Pro for 4K video editing, music production, and Parallels virtualization.
In terms of running local AI, I was wondering which model would be perfect for expanded context, reasoning, and thinking, similar to how ChatGPT will ask users if they’d like to learn more about a subject or provide a detailed report/summary on a particular subject (Ex: All of the relevant laws in the US pertaining to owning a home, for instance). In some cases, writing out a full novel (100k words).
With all that said, which model would achieve that and what hardware can even run it? | 2025-10-26T23:19:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ogz2go/best_model_for_local_ai/ | Super_Revolution3966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogz2go | false | null | t3_1ogz2go | /r/LocalLLaMA/comments/1ogz2go/best_model_for_local_ai/ | false | false | self | 0 | null |
Oh my REAP-ness. Qwen3-Coder-30B-A3B-Instruct_Pruned_REAP-15B-A3B-GGUF on BC-250 | 32 | **TLDR: AMD BC-250 on a REAP Qwen3-Coder-30B-A3B-Instruct Q4 running 100/70 tok/s**
Here is a post I did a while back, super impressed with Llama 3.1 running ~27 tok/s tg on an AMD BC-250 with Vulkan drivers.
[Meta-Llama-3.1-8B-Instruct-Q8\_0.gguf - 26.89 tok/s for $20 : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1h8el9m/metallama318binstructq8_0gguf_2689_toks_for_20/)
For giggles today I dusted off my bench BC-250 and recompiled the latest llama.cpp and was pleasantly surprised to see almost 30% uplift in pp & tg. See below:
slot launch_slot_: id 0 | task 513 | processing task
slot update_slots: id 0 | task 513 | new prompt, n_ctx_slot = 4096, n_keep = 0, n_prompt_tokens = 45
slot update_slots: id 0 | task 513 | old: ... are an expert of | food and food preparation. What
slot update_slots: id 0 | task 513 | new: ... are an expert of | agentic coding systems. If
slot update_slots: id 0 | task 513 | 527 459 6335 315 3691 323 3691 18459 13 3639
slot update_slots: id 0 | task 513 | 527 459 6335 315 945 4351 11058 6067 13 1442
slot update_slots: id 0 | task 513 | n_past = 10, memory_seq_rm [10, end)
slot update_slots: id 0 | task 513 | prompt processing progress, n_past = 45, n_tokens = 35, progress = 1.000000
slot update_slots: id 0 | task 513 | prompt done, n_past = 45, n_tokens = 35
slot print_timing: id 0 | task 513 |
prompt eval time = 282.75 ms / 35 tokens ( 8.08 ms per token, 123.78 tokens per second)
eval time = 23699.99 ms / 779 tokens ( 30.42 ms per token, 32.87 tokens per second)
total time = 23982.74 ms / 814 tokens
slot release: id 0 | task 513 | stop processing: n_past = 823, truncated = 0
I thought I would give the 50% REAP Qwen3-Coder-30B-A3B-Instruct a shot with Q4\_K\_M which should fit within the 10gb of 16gb visible to llama.cpp
[12bitmisfit/Qwen3-Coder-30B-A3B-Instruct\_Pruned\_REAP-15B-A3B-GGUF · Hugging Face](https://huggingface.co/12bitmisfit/Qwen3-Coder-30B-A3B-Instruct_Pruned_REAP-15B-A3B-GGUF)
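If you want to sanity-check throughput client-side instead of squinting at server logs, a quick script like this works against llama-server's OpenAI-compatible endpoint (rough sketch - assumes the default localhost:8080 and a build that returns a usage block; it also lumps prompt processing into the total time):

```python
import time
import requests

def measure(prompt, url="http://localhost:8080/v1/chat/completions"):
    start = time.time()
    reply = requests.post(url, json={
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }, timeout=600).json()
    elapsed = time.time() - start
    generated = reply.get("usage", {}).get("completion_tokens", 0)
    print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")

measure("You are a master of agentic coding systems. Describe your ideal tool loop.")
```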
YOOOO! nearly 100 tok/s pp and 70 tok/s tg
slot update_slots: id 0 | task 2318 | new: ... <|im_start|>user
| You are a master of the
slot update_slots: id 0 | task 2318 | 151644 872 198 14374 5430 510 31115 264 63594
slot update_slots: id 0 | task 2318 | 151644 872 198 2610 525 264 7341 315 279
slot update_slots: id 0 | task 2318 | n_past = 3, memory_seq_rm [3, end)
slot update_slots: id 0 | task 2318 | prompt processing progress, n_past = 54, n_tokens = 51, progress = 1.000000
slot update_slots: id 0 | task 2318 | prompt done, n_past = 54, n_tokens = 51
slot print_timing: id 0 | task 2318 |
prompt eval time = 520.59 ms / 51 tokens ( 10.21 ms per token, 97.97 tokens per second)
eval time = 22970.01 ms / 1614 tokens ( 14.23 ms per token, 70.27 tokens per second)
total time = 23490.60 ms / 1665 tokens
slot release: id 0 | task 2318 | stop processing: n_past = 1667, truncated = 0
srv update_slots: all slots are idle
* You are a master of the Pyspark eco system. At work we have a full blown Enterprise Databricks deployment. We want to practice at how. We already have a Kubernetes Cluster. Walk me through deployment and configuration.
Output pastebin:
[Oh my REAP-ness. Qwen3-Coder-30B-A3B-Instruct\_Pruned\_REAP-15B-A3B-GGUF on BC-250 - Pastebin.com](https://pastebin.com/728Pw4Y9) | 2025-10-26T23:16:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ogz0b7/oh_my_reapness_qwen3coder30ba3binstruct_pruned/ | MachineZer0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogz0b7 | false | null | t3_1ogz0b7 | /r/LocalLLaMA/comments/1ogz0b7/oh_my_reapness_qwen3coder30ba3binstruct_pruned/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': '8iKb5Z5_rgoHcNSyIae3ZsqkVszieoLknDyIPFZNpM8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8iKb5Z5_rgoHcNSyIae3ZsqkVszieoLknDyIPFZNpM8.png?width=108&crop=smart&auto=webp&s=19ad0dca7a6f5934771f9a572bf05282d539cfac', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8iKb5Z5_rgoHcNSyIae3ZsqkVszieoLknDyIPFZNpM8.png?width=216&crop=smart&auto=webp&s=c256fcf2104e99c375f2fa68464ba5ad8a440026', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8iKb5Z5_rgoHcNSyIae3ZsqkVszieoLknDyIPFZNpM8.png?width=320&crop=smart&auto=webp&s=191b26952ca5899d05fe822efcbfb49d0f8eac9c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8iKb5Z5_rgoHcNSyIae3ZsqkVszieoLknDyIPFZNpM8.png?width=640&crop=smart&auto=webp&s=45cd00e10fe849159218ae2b821d56c58ad1069f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8iKb5Z5_rgoHcNSyIae3ZsqkVszieoLknDyIPFZNpM8.png?width=960&crop=smart&auto=webp&s=7b0541608832045ae69e2087ca65ae340bddeb23', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8iKb5Z5_rgoHcNSyIae3ZsqkVszieoLknDyIPFZNpM8.png?width=1080&crop=smart&auto=webp&s=b1fa89ead29de62bbb4fd4bff911a578a3d39c8f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8iKb5Z5_rgoHcNSyIae3ZsqkVszieoLknDyIPFZNpM8.png?auto=webp&s=a19483d03f7eb918abbac4b59dd3b20ae6dd23af', 'width': 1200}, 'variants': {}}]} |
Qwen's VLM is strong! | 131 | 2025-10-26T22:46:45 | dulldata | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogybvr | false | null | t3_1ogybvr | /r/LocalLLaMA/comments/1ogybvr/qwens_vlm_is_strong/ | false | false | 131 | {'enabled': True, 'images': [{'id': '5B4LkAlAkjAyo_Qypl5foVziwTh3JJP4qlsigv5EGTg', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/jc97wpepbjxf1.png?width=108&crop=smart&auto=webp&s=bd6d939763fa8068c30e2af663202c717082bedb', 'width': 108}, {'height': 257, 'url': 'https://preview.redd.it/jc97wpepbjxf1.png?width=216&crop=smart&auto=webp&s=831a445e0a0a70d847151fccaee1c52111ec61fb', 'width': 216}, {'height': 380, 'url': 'https://preview.redd.it/jc97wpepbjxf1.png?width=320&crop=smart&auto=webp&s=0895935673cc7bbe35bc8ea3a71d20d4837c8861', 'width': 320}], 'source': {'height': 714, 'url': 'https://preview.redd.it/jc97wpepbjxf1.png?auto=webp&s=3da731d134e8cd9d8c0b121246a87ad862f40d77', 'width': 600}, 'variants': {}}]} | |||
Quantizing MoE models to MXFP4 | 10 | Lately it's like my behind is on fire, and I'm downloading and quantizing models like crazy, but into this specific MXFP4 format only.
And because of this format, it can only be done on Mixture-of-Experts models.
Why, you ask?
Why not!, I respond.
Must be my ADHD brain, because I couldn't find an MXFP4 model quant I wanted to test out, and I said to myself, why not quantize some more and upload them to hf?
So here we are.
I just finished quantizing one of the huge models, DeepSeek-V3.1-Terminus, and the MXFP4 is a cool 340GB...
But I can't run this on my PC! I've got a bunch of RAM, but it reads most of it from disk and the speed is like 1 token per day.
Anyway, I'm uploading it.
And I want to ask you, would you like me to quantize other such large models? Or is it just a waste?
You know the other large ones, like Kimi-K2-Instruct-0905, or DeepSeek-R1-0528, or cogito-v2-preview-deepseek-671B-MoE
Do you have any suggestion for other MoE ones that are not in MXFP4 yet?
Ah yes here is the link:
[https://huggingface.co/noctrex](https://huggingface.co/noctrex)
| 2025-10-26T22:43:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ogy9lh/quantizing_moe_models_to_mxfp4/ | noctrex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogy9lh | false | null | t3_1ogy9lh | /r/LocalLLaMA/comments/1ogy9lh/quantizing_moe_models_to_mxfp4/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'WAyIOr5jVedejqkKhPpZqx0urQQdFcGlFgOnHGgz_5k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WAyIOr5jVedejqkKhPpZqx0urQQdFcGlFgOnHGgz_5k.png?width=108&crop=smart&auto=webp&s=db02e80f00d3d6e269022af7d3cd497fefbb5ffa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WAyIOr5jVedejqkKhPpZqx0urQQdFcGlFgOnHGgz_5k.png?width=216&crop=smart&auto=webp&s=2cb88383b8af6efd93ec93b6de6cdf44b6b6f278', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WAyIOr5jVedejqkKhPpZqx0urQQdFcGlFgOnHGgz_5k.png?width=320&crop=smart&auto=webp&s=6c15666b59a3d1e89cbb7c22686819da69fdbf2b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WAyIOr5jVedejqkKhPpZqx0urQQdFcGlFgOnHGgz_5k.png?width=640&crop=smart&auto=webp&s=7a847210ec92a949eb90458dd89301e6e19f1907', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WAyIOr5jVedejqkKhPpZqx0urQQdFcGlFgOnHGgz_5k.png?width=960&crop=smart&auto=webp&s=45012771bc1ba84ea5eaaa3a0ffb5dfb0622ecc6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WAyIOr5jVedejqkKhPpZqx0urQQdFcGlFgOnHGgz_5k.png?width=1080&crop=smart&auto=webp&s=2fa8bd7bcb40ef40a3be2c8c2e17f80b7af37095', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WAyIOr5jVedejqkKhPpZqx0urQQdFcGlFgOnHGgz_5k.png?auto=webp&s=dd9bf5ea86ff53249eb23d0582518353015f5959', 'width': 1200}, 'variants': {}}]} |
What are some of the best open-source LLMs that can run on the iPhone 17 Pro? | 0 | I’ve been getting really interested in running models locally on my phone. With the A19 Pro chip and the extra RAM, the iPhone 17 should be able to handle some pretty solid models compared to earlier iPhones. I’m just trying to figure out what’s out there that runs well.
Any recommendations or setups worth trying out? | 2025-10-26T22:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ogxuqv/what_are_some_of_the_best_opensource_llms_that/ | JordanStoner2299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogxuqv | false | null | t3_1ogxuqv | /r/LocalLLaMA/comments/1ogxuqv/what_are_some_of_the_best_opensource_llms_that/ | false | false | self | 0 | null |
New text diffusion model from inclusionAI - LLaDA2.0-flash-preview | 73 | [https://huggingface.co/inclusionAI/LLaDA2.0-flash-preview](https://huggingface.co/inclusionAI/LLaDA2.0-flash-preview)
Like its smaller brother LLaDA2-mini-preview, this is a text diffusion mixture-of-experts model, but instead of only 16b total parameters this one comes with 100b total non-embedding and 6b active parameters, which as far as I know makes it the biggest open-source text diffusion model out there. A very interesting model, though since it is a preview it's still not the final version, and it only has a 4096 token context window, which makes it not really useful for most practical tasks - though let's not forget that the original GPT-3.5 Turbo model started with the same context. I hope the full release will have a bigger one (;
So this isnt really a model for people who seek the best of the best (yet), but its certainly extremely cool that inclusionai decided to open source this experimental model (;
I think they released a new framework to run such diffusion models recently, otherwise there is no support outside of transformers as far as I know.
https://preview.redd.it/n0b8dgyg4jxf1.png?width=489&format=png&auto=webp&s=02ca366acc269b87059dd2b0878e47650cb553c4
| 2025-10-26T22:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ogxo2l/new_text_diffusion_model_from_inclusionai/ | Finanzamt_Endgegner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogxo2l | false | null | t3_1ogxo2l | /r/LocalLLaMA/comments/1ogxo2l/new_text_diffusion_model_from_inclusionai/ | false | false | 73 | {'enabled': False, 'images': [{'id': 'eSP0omdugNqFpul8K2DNpGSQodYj7BcPVqbrd3f9DmM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eSP0omdugNqFpul8K2DNpGSQodYj7BcPVqbrd3f9DmM.png?width=108&crop=smart&auto=webp&s=1b7582830cca67c0b01130c3f81d90b4aff403ef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eSP0omdugNqFpul8K2DNpGSQodYj7BcPVqbrd3f9DmM.png?width=216&crop=smart&auto=webp&s=bb8b28014e8698f6e39ad2bfe4fc61d630d946a1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eSP0omdugNqFpul8K2DNpGSQodYj7BcPVqbrd3f9DmM.png?width=320&crop=smart&auto=webp&s=158178211220260e1cb77b4ee1c12f26fc0d1654', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eSP0omdugNqFpul8K2DNpGSQodYj7BcPVqbrd3f9DmM.png?width=640&crop=smart&auto=webp&s=83c616737b8550a99d38eaf9d5ae4ca19a76a068', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eSP0omdugNqFpul8K2DNpGSQodYj7BcPVqbrd3f9DmM.png?width=960&crop=smart&auto=webp&s=c6d37ec303ac542c1ac48bc50aab864178d38a81', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eSP0omdugNqFpul8K2DNpGSQodYj7BcPVqbrd3f9DmM.png?width=1080&crop=smart&auto=webp&s=a4d777d11c3ac36ed2a7fc42a4f3c766306ecc91', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eSP0omdugNqFpul8K2DNpGSQodYj7BcPVqbrd3f9DmM.png?auto=webp&s=80b8866985fdc71f6f92c79715bebf50df79a08f', 'width': 1200}, 'variants': {}}]} | |
What is your setup for agentic coding using local inference? I am aware that Cline can work with local however would prefer agentic coding from terminal instead of VS. | 1 | As | 2025-10-26T22:12:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ogxjuj/what_is_your_setup_for_agentic_coding_using_local/ | seeming_stillness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogxjuj | false | null | t3_1ogxjuj | /r/LocalLLaMA/comments/1ogxjuj/what_is_your_setup_for_agentic_coding_using_local/ | false | false | self | 1 | null |
What is the best local Large Language Model setup for coding on a budget of approximately $2,000? | 63 | My initial research has highlighted three main hardware options:
1. A dedicated GPU with 16–32GB of VRAM.
2. A Mac Ultra with 64GB+ of Unified Memory.
3. An AMD Strix Halo system with 64–128GB of RAM.
My understanding is that all three options can run similar models at an acceptable t/s speed. In fact, they might even be overpowered if we are focusing on Mixture-of-Experts (MoE) models.
I'm also weighing the following trade-offs:
Mac Ultra: Appears to be the "sweet spot" due to its ease of setup and strong all-around performance, but I have a strong preference against the Apple ecosystem.
Strix Halo: The fully-specced mini-PC versions, often from Chinese manufacturers, already push the $2,000 budget limit. While the lower power consumption is appealing, I'm concerned about a potentially complicated setup and performance bottlenecks from its memory bandwidth and/or throttling due to thermals.
Multi-GPU PC: Building a system with multiple GPUs seems the most future-proof, but the high peak power consumption is a significant concern and hard limits on the models it can run.
What other considerations should I keep in mind? Are there any exciting new developments coming soon (either hardware or models), and should I hold off on buying anything right now? | 2025-10-26T21:29:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ogwjdj/what_is_the_best_local_large_language_model_setup/ | Independent-Band7571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogwjdj | false | null | t3_1ogwjdj | /r/LocalLLaMA/comments/1ogwjdj/what_is_the_best_local_large_language_model_setup/ | false | false | self | 63 | null |
M5 Neural Accelerator benchmark results from Llama.cpp | 186 | ## Summary
**LLaMA 7B**
| SoC | BW [GB/s] | GPU Cores | F16 PP [t/s] | F16 TG [t/s] | Q8_0 PP [t/s] | Q8_0 TG [t/s] | Q4_0 PP [t/s] | Q4_0 TG [t/s] |
|:--|--:|--:|--:|--:|--:|--:|--:|--:|
| ✅ M1 [1] | 68 | 7 | | | 108.21 | 7.92 | 107.81 | 14.19 |
| ✅ M1 [1] | 68 | 8 | | | 117.25 | 7.91 | 117.96 | 14.15 |
| ✅ M1 Pro [1] | 200 | 14 | 262.65 | 12.75 | 235.16 | 21.95 | 232.55 | 35.52 |
| ✅ M1 Pro [1] | 200 | 16 | 302.14 | 12.75 | 270.37 | 22.34 | 266.25 | 36.41 |
| ✅ M1 Max [1] | 400 | 24 | 453.03 | 22.55 | 405.87 | 37.81 | 400.26 | 54.61 |
| ✅ M1 Max [1] | 400 | 32 | 599.53 | 23.03 | 537.37 | 40.20 | 530.06 | 61.19 |
| ✅ M1 Ultra [1] | 800 | 48 | 875.81 | 33.92 | 783.45 | 55.69 | 772.24 | 74.93 |
| ✅ M1 Ultra [1] | 800 | 64 | 1168.89 | 37.01 | 1042.95 | 59.87 | 1030.04 | 83.73 |
| ✅ M2 [2] | 100 | 8 | | | 147.27 | 12.18 | 145.91 | 21.70 |
| ✅ M2 [2] | 100 | 10 | 201.34 | 6.72 | 181.40 | 12.21 | 179.57 | 21.91 |
| ✅ M2 Pro [2] | 200 | 16 | 312.65 | 12.47 | 288.46 | 22.70 | 294.24 | 37.87 |
| ✅ M2 Pro [2] | 200 | 19 | 384.38 | 13.06 | 344.50 | 23.01 | 341.19 | 38.86 |
| ✅ M2 Max [2] | 400 | 30 | 600.46 | 24.16 | 540.15 | 39.97 | 537.60 | 60.99 |
| ✅ M2 Max [2] | 400 | 38 | 755.67 | 24.65 | 677.91 | 41.83 | 671.31 | 65.95 |
| ✅ M2 Ultra [2] | 800 | 60 | 1128.59 | 39.86 | 1003.16 | 62.14 | 1013.81 | 88.64 |
| ✅ M2 Ultra [2] | 800 | 76 | 1401.85 | 41.02 | 1248.59 | 66.64 | 1238.48 | 94.27 |
| 🟥 M3 [3] | 100 | 8 | | | | | | |
| 🟨 M3 [3] | 100 | 10 | | | 187.52 | 12.27 | 186.75 | 21.34 |
| 🟨 M3 Pro [3] | 150 | 14 | | | 272.11 | 17.44 | 269.49 | 30.65 |
| ✅ M3 Pro [3] | 150 | 18 | 357.45 | 9.89 | 344.66 | 17.53 | 341.67 | 30.74 |
| ✅ M3 Max [3] | 300 | 30 | 589.41 | 19.54 | 566.40 | 34.30 | 567.59 | 56.58 |
| ✅ M3 Max [3] | 400 | 40 | 779.17 | 25.09 | 757.64 | 42.75 | 759.70 | 66.31 |
| ✅ M3 Ultra [3] | 800 | 60 | 1121.80 | 42.24 | 1085.76 | 63.55 | 1073.09 | 88.40 |
| ✅ M3 Ultra [3] | 800 | 80 | 1538.34 | 39.78 | 1487.51 | 63.93 | 1471.24 | 92.14 |
| 🟥 M4 [4] | 120 | 8 | | | | | | |
| ✅ M4 [4] | 120 | 10 | 230.18 | 7.43 | 223.64 | 13.54 | 221.29 | 24.11 |
| ✅ M4 Pro [4] | 273 | 16 | 381.14 | 17.19 | 367.13 | 30.54 | 364.06 | 49.64 |
| ✅ M4 Pro [4] | 273 | 20 | 464.48 | 17.18 | 449.62 | 30.69 | 439.78 | 50.74 |
| 🟥 M4 Max [4] | 410 | 32 | | | | | | |
| ✅ M4 Max [4] | 546 | 40 | 922.83 | 31.64 | 891.94 | 54.05 | 885.68 | 83.06 |
| ✅ **M5 (Neural Accel)** [5] | 546 | 40 | | | | | **608.05** | **26.59** |
| ✅ **M5 (no Accel)** [5] | 546 | 40 | | | | | **252.82** | **27.55** |
| 🟥 M4 Ultra | 820 | 64 | | | | | | |
| 🟥 M4 Ultra | 1092 | 80 | | | | | | |
M5 source: https://github.com/ggml-org/llama.cpp/pull/16634
All Apple Silicon results: https://github.com/ggml-org/llama.cpp/discussions/4167 | 2025-10-26T21:24:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ogwf6b/m5_neural_accelerator_benchmark_results_from/ | auradragon1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogwf6b | false | null | t3_1ogwf6b | /r/LocalLLaMA/comments/1ogwf6b/m5_neural_accelerator_benchmark_results_from/ | false | false | self | 186 | {'enabled': False, 'images': [{'id': 'IYXwt-Gv2avbliAaV3G5JMcBCg9ks8YjPP5ANcac5qU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IYXwt-Gv2avbliAaV3G5JMcBCg9ks8YjPP5ANcac5qU.png?width=108&crop=smart&auto=webp&s=39631834e9885c3b3ec6001c9b5b221905a1afaa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IYXwt-Gv2avbliAaV3G5JMcBCg9ks8YjPP5ANcac5qU.png?width=216&crop=smart&auto=webp&s=14ee241933edb99159161f9238c27512842f27d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IYXwt-Gv2avbliAaV3G5JMcBCg9ks8YjPP5ANcac5qU.png?width=320&crop=smart&auto=webp&s=a45b8d8de62514e36204c375109b76cc471c246b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IYXwt-Gv2avbliAaV3G5JMcBCg9ks8YjPP5ANcac5qU.png?width=640&crop=smart&auto=webp&s=3593b6869c70d1c6c9c6fa0feff8fc463df8af68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IYXwt-Gv2avbliAaV3G5JMcBCg9ks8YjPP5ANcac5qU.png?width=960&crop=smart&auto=webp&s=ac7d29d96092bfdff824ef4a2dafe384425e7860', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IYXwt-Gv2avbliAaV3G5JMcBCg9ks8YjPP5ANcac5qU.png?width=1080&crop=smart&auto=webp&s=3dad600749424f0f67ba9f9f02ee71f7424a1003', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IYXwt-Gv2avbliAaV3G5JMcBCg9ks8YjPP5ANcac5qU.png?auto=webp&s=9374aecc1905fe890948e1bb5d1b1bf2c5d6bc53', 'width': 1200}, 'variants': {}}]} |
Best model for rig 6x L4? | 1 | Subj | 2025-10-26T21:06:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ogvzqr/best_model_for_rig_6x_l4/ | Adorable_Net7338 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogvzqr | false | null | t3_1ogvzqr | /r/LocalLLaMA/comments/1ogvzqr/best_model_for_rig_6x_l4/ | false | false | self | 1 | null |
Custom full stack AI suite for local Voice Cloning (TTS) + LLM | 11 | Howdy!
This is a short video I put together for some friends of mine who were curious about a project I’m working on in my free time.
Like many of you, I was very disappointed when I found out PlayHT got acquired by Meta. Especially because without warning my subscription was canceled — even their help-desk was down. In an effort to push myself to learn more about the underlying technology, I developed this prototype platform which leverages VoxCPM, an open source TTS software.
The platform consists of a trivial flask API to communicate with an Ollama docker container (with a few models installed) as well as a frontend react interface. I decided to go with Untitled UI since they’ve got decent documentation, and I’m by no means a frontend developer by trade. For those curious, I’m using a JS library called WaveSurfer to visualize the generated audio waveform.
Because VoxCPM struggles to produce consistent voices from one generation to the next, each "voice" consists of two components: a JSON text transcription (stimulus) paired with an audio file of the speaker. VoxCPM natively supports supplementing a generation with these components, which when paired constitute a voice (since this allows one to achieve continuity between generations). For those familiar with local voice synthesis, this pairing is not uncommon. Voice continuity (matching the speaker's cadence, timbre, and vocal inflections) is typically achieved by supplementing a zero-shot model with N seconds of speaker audio.
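To make the pairing concrete, this is roughly how a voice lives on disk and gets loaded before generation (file names here are illustrative of my layout, not part of VoxCPM's API):

```python
import json
from pathlib import Path

def load_voice(voice_dir):
    """A 'voice' on disk = transcript JSON (stimulus) + reference audio of the speaker."""
    root = Path(voice_dir)
    stimulus = json.loads((root / "stimulus.json").read_text())
    return {
        "prompt_text": stimulus["transcript"],      # what the reference clip says
        "prompt_wav": str(root / "speaker.wav"),    # 10-30 second reference sample
    }
```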
I’d like to continue to improve on this interface and potentially extend its range of capabilities to near real time streaming of synthetic audio to a virtual microphone. I’m a Security Engineer by day, so I figure this has some interesting use cases for both red/blue team and certainly for operational security.
I’m open to feedback and questions as well! | 2025-10-26T20:53:39 | https://v.redd.it/82vajkokrixf1 | Chronos127 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogvo4c | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/82vajkokrixf1/DASHPlaylist.mpd?a=1764104033%2CODU5YTcxNjRlMzA0ZjkyMTFmNjBiNzQzYzZkODljMzAxYWFmOTQ4MTkzYWE3YTNiZGY2NTEyYzRiZjllOTNmMQ%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/82vajkokrixf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 590, 'hls_url': 'https://v.redd.it/82vajkokrixf1/HLSPlaylist.m3u8?a=1764104033%2CZGI4ZDAyOTBiNTZlZWIyODY5YWQ0MTRkYjU1NTQ0ZjkyYmNmNTY5MTc0YTQyM2E4NmRkYTcxNzlhODM1OWM0Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/82vajkokrixf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1ogvo4c | /r/LocalLLaMA/comments/1ogvo4c/custom_full_stack_ai_suite_for_local_voice/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'YjQ0OGRxbWtyaXhmMSKcb8o0zPVq1qoDpNTl8rp0D1GBAHWa7o6rXtPtMXQ-', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/YjQ0OGRxbWtyaXhmMSKcb8o0zPVq1qoDpNTl8rp0D1GBAHWa7o6rXtPtMXQ-.png?width=108&crop=smart&format=pjpg&auto=webp&s=04df464f2a5f4891de3ae538b8b3d8319e8f083e', 'width': 108}, {'height': 99, 'url': 'https://external-preview.redd.it/YjQ0OGRxbWtyaXhmMSKcb8o0zPVq1qoDpNTl8rp0D1GBAHWa7o6rXtPtMXQ-.png?width=216&crop=smart&format=pjpg&auto=webp&s=438df6a5c3215d88186f22384a989ff5c0143909', 'width': 216}, {'height': 147, 'url': 'https://external-preview.redd.it/YjQ0OGRxbWtyaXhmMSKcb8o0zPVq1qoDpNTl8rp0D1GBAHWa7o6rXtPtMXQ-.png?width=320&crop=smart&format=pjpg&auto=webp&s=0e2a0def050fa1399579dc7ce2ab9243a9474918', 'width': 320}, {'height': 295, 'url': 'https://external-preview.redd.it/YjQ0OGRxbWtyaXhmMSKcb8o0zPVq1qoDpNTl8rp0D1GBAHWa7o6rXtPtMXQ-.png?width=640&crop=smart&format=pjpg&auto=webp&s=212c5f746909e06756ee5b865dfe94cf553dfeda', 'width': 640}, {'height': 443, 'url': 'https://external-preview.redd.it/YjQ0OGRxbWtyaXhmMSKcb8o0zPVq1qoDpNTl8rp0D1GBAHWa7o6rXtPtMXQ-.png?width=960&crop=smart&format=pjpg&auto=webp&s=8e949e42ba2f8cc1200743249b60d6fabeda4672', 'width': 960}, {'height': 498, 'url': 'https://external-preview.redd.it/YjQ0OGRxbWtyaXhmMSKcb8o0zPVq1qoDpNTl8rp0D1GBAHWa7o6rXtPtMXQ-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e48f05326b6bdfeea7d333432993331ce4c2a2de', 'width': 1080}], 'source': {'height': 886, 'url': 'https://external-preview.redd.it/YjQ0OGRxbWtyaXhmMSKcb8o0zPVq1qoDpNTl8rp0D1GBAHWa7o6rXtPtMXQ-.png?format=pjpg&auto=webp&s=8ee08121527dcb6f9037c00687a3f553c2b0c2a1', 'width': 1920}, 'variants': {}}]} | |
LLMs Keep Messing Up My Code After 600 Lines | 0 | Hi! I’ve been testing various local LLMs, even closed Gemini and ChatGPT, but once my code exceeds \~600 lines, they start deleting or adding placeholder content instead of finishing the task. Oddly, sometimes they handle 1,000+ lines just fine.
Do you know any that can manage that amount of code reliably? | 2025-10-26T20:44:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ogvgad/llms_keep_messing_up_my_code_after_600_lines/ | haterloco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogvgad | false | null | t3_1ogvgad | /r/LocalLLaMA/comments/1ogvgad/llms_keep_messing_up_my_code_after_600_lines/ | false | false | self | 0 | null |
Does anyone know what it's saying? Especially towards the end | 0 | 2025-10-26T20:20:29 | https://www.reddit.com/gallery/1oguuct | toomanythingstothink | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oguuct | false | null | t3_1oguuct | /r/LocalLLaMA/comments/1oguuct/does_anyone_know_what_its_saying_especially/ | false | false | 0 | null | ||
Running local models with multiple backends & search capabilities | 9 | Hi guys, I’m currently using this desktop app to run llms with ollama,llama.cpp and web gpu at the same place, there’s also a web version that stores the models to cache memory
What do you guys suggest for extension of capabilities | 2025-10-26T20:13:44 | https://v.redd.it/oixys9qgkixf1 | Ibz04 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oguocr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/oixys9qgkixf1/DASHPlaylist.mpd?a=1764101639%2CNjEyZjdmYTFiZmE3MGQxMTUxNWRhOWQ5NTgzZjg4NTI5YTNmNmI4OWFmMDRiNzRkNWE4MjNjOWUyNzRmZTcxNA%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/oixys9qgkixf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1060, 'hls_url': 'https://v.redd.it/oixys9qgkixf1/HLSPlaylist.m3u8?a=1764101639%2CYjQwNmU3YTZkMjQwMTAxZjYyNGIxYzMwNjdlNzZiYjQwODcyMTJmNzNiMGNhMWU1NWQ5NjA5NmQyOWNkMmRjOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oixys9qgkixf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1oguocr | /r/LocalLLaMA/comments/1oguocr/running_local_models_with_multiple_backends/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'NmtybGkwa2draXhmMWYfCBFJq_CgHTgLeaR86wcT1wa8nSuVfXl8XGpmTK5H', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/NmtybGkwa2draXhmMWYfCBFJq_CgHTgLeaR86wcT1wa8nSuVfXl8XGpmTK5H.png?width=108&crop=smart&format=pjpg&auto=webp&s=fc183d8a91309fae199288f094fc564c19b60a68', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/NmtybGkwa2draXhmMWYfCBFJq_CgHTgLeaR86wcT1wa8nSuVfXl8XGpmTK5H.png?width=216&crop=smart&format=pjpg&auto=webp&s=ae451497fd64de38936c78a9a1494a34675a7281', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/NmtybGkwa2draXhmMWYfCBFJq_CgHTgLeaR86wcT1wa8nSuVfXl8XGpmTK5H.png?width=320&crop=smart&format=pjpg&auto=webp&s=4f2860f8ae8d2f20ebbb17c7ba2930a7afffd6ed', 'width': 320}, {'height': 353, 'url': 'https://external-preview.redd.it/NmtybGkwa2draXhmMWYfCBFJq_CgHTgLeaR86wcT1wa8nSuVfXl8XGpmTK5H.png?width=640&crop=smart&format=pjpg&auto=webp&s=6adce04a7294bb66cffb283cdc1f62a3210fe9b8', 'width': 640}, {'height': 530, 'url': 'https://external-preview.redd.it/NmtybGkwa2draXhmMWYfCBFJq_CgHTgLeaR86wcT1wa8nSuVfXl8XGpmTK5H.png?width=960&crop=smart&format=pjpg&auto=webp&s=1234883365fec0fbce5ba18abeae97dd5ffb4b6a', 'width': 960}, {'height': 596, 'url': 'https://external-preview.redd.it/NmtybGkwa2draXhmMWYfCBFJq_CgHTgLeaR86wcT1wa8nSuVfXl8XGpmTK5H.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ecfea8b1de43a2b218611901ba8e0aebf67a7a42', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NmtybGkwa2draXhmMWYfCBFJq_CgHTgLeaR86wcT1wa8nSuVfXl8XGpmTK5H.png?format=pjpg&auto=webp&s=05aefcf34e82de16d67560be73eca611f0c6e579', 'width': 1956}, 'variants': {}}]} | |
Can someone with a Mac with more than 16 GB Unified Memory test this model? | 2 | [https://huggingface.co/abnormalmapstudio/Qwen3-Omni-30B-A3B-Instruct-mxfp4-mlx](https://huggingface.co/abnormalmapstudio/Qwen3-Omni-30B-A3B-Instruct-mxfp4-mlx)
Thanks.
https://preview.redd.it/wkggfhd2gixf1.png?width=2680&format=png&auto=webp&s=bd03e82591bd129b8dc594ff890390da69f66ab6
idk why I got 16 GB MacBook 3 years ago.
| 2025-10-26T19:49:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ogu2g2/can_someone_with_a_mac_with_more_than_16_gb/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogu2g2 | false | null | t3_1ogu2g2 | /r/LocalLLaMA/comments/1ogu2g2/can_someone_with_a_mac_with_more_than_16_gb/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'HSvkYPb_IYaIejCp2txyJ5b-1jnHwdTx0ot7zEs7JbU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HSvkYPb_IYaIejCp2txyJ5b-1jnHwdTx0ot7zEs7JbU.png?width=108&crop=smart&auto=webp&s=b3ba6eafc3aed03d06f5e77a3af20c537f125fef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HSvkYPb_IYaIejCp2txyJ5b-1jnHwdTx0ot7zEs7JbU.png?width=216&crop=smart&auto=webp&s=c71c4d122d5a9224555040ac7b77259a97865ffa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HSvkYPb_IYaIejCp2txyJ5b-1jnHwdTx0ot7zEs7JbU.png?width=320&crop=smart&auto=webp&s=671c2c11a810b1ff3a7a4268d6db675addfad2d3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HSvkYPb_IYaIejCp2txyJ5b-1jnHwdTx0ot7zEs7JbU.png?width=640&crop=smart&auto=webp&s=6c4d6bf8f5e8a08436bd5f980f12432ca260304e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HSvkYPb_IYaIejCp2txyJ5b-1jnHwdTx0ot7zEs7JbU.png?width=960&crop=smart&auto=webp&s=025245f73dbd336527daab0455a2ed6c8cecf14a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HSvkYPb_IYaIejCp2txyJ5b-1jnHwdTx0ot7zEs7JbU.png?width=1080&crop=smart&auto=webp&s=60b30965798acd02aa56c1254774f10198be66e2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HSvkYPb_IYaIejCp2txyJ5b-1jnHwdTx0ot7zEs7JbU.png?auto=webp&s=929a95985554d716753b9ab0b0dd3ebccfc82e1f', 'width': 1200}, 'variants': {}}]} | |
Models for Fiction Writing? - 8GB VRAM | 0 | My System Info: (**8GB VRAM & 32GB RAM**)
My system can run up to 14B dense models (Q4 fits in 8GB VRAM) & 30B MoE models. So please recommend suitable models for the above hardware & below requirements. Thanks
**My Targets**:
* Short stories to small Novels(Novella/Novelette) like 150-200 pages
* Children/Young Adults. Also General audiences (I'm not looking for NSFW stuff as my writing would be G to PG-13 mostly)
* Genres like Fairy tale, Drama, Crime, Horror, Sci-fi, Thriller, Fantasy, Pulp, etc.,
* Additionally need models for Comedy to write Sketch & Stand-ups (Don't want to post this as separate thread)
I'm gonna use LLMs as reference mostly so I'll be doing 90% of work so I'm not gonna expect everything from models.
**My Requirements**: By giving my idea to the model, it could help me start on the things below step by step. I know it's not gonna be a single-pass process... it's gonna be a regular back-and-forth with many questions (context) and responses.
* Outlining
* Characters, Plot, Settings, Theme, Style, etc.,
* Brainstorming
* Misc
* Additionally Proofreading & Editing.
In my case(GPU Poor), I'll be happy with tiny/small models for writing than just staring at blank pages. Models could help me to do stuff faster step by step regularly. Hoping to convert my ideas(from my 3 notebooks) to decent sellers in couple of years. | 2025-10-26T19:36:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ogtqx7/models_for_fiction_writing_8gb_vram/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogtqx7 | false | null | t3_1ogtqx7 | /r/LocalLLaMA/comments/1ogtqx7/models_for_fiction_writing_8gb_vram/ | false | false | self | 0 | null |
I successfully ran GPT-OSS 120B locally on a Ryzen 7 / 64 GB RAM PC — and published the full analysis (w/ DOI) | 0 | After months of testing, I managed to run the open-source GPT-OSS 120B model locally on a consumer PC
(Ryzen 7 + 64 GB RAM + RTX 4060 8 GB VRAM).
We analyzed CPU vs GPU configurations and found that a fully RAM-loaded setup (ngl = 0) outperformed mixed modes.
The full results and discussion (including the “identity persistence” behavior) are published here:
📄 [Running GPT-OSS 120B on a Consumer PC – Full Paper (Medium)](https://medium.com/@massimozito/gpt-oss-we-ran-a-120-billion-parameter-model-on-a-home-pc-25ce112ae91c)
🔗 DOI: [10.5281/zenodo.17449874](https://doi.org/10.5281/zenodo.17449874)
Would love to hear if anyone else has tried similar large-scale tests locally. | 2025-10-26T19:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ogtgn6/i_successfully_ran_gptoss_120b_locally_on_a_ryzen/ | Sufficient_Machine47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogtgn6 | false | null | t3_1ogtgn6 | /r/LocalLLaMA/comments/1ogtgn6/i_successfully_ran_gptoss_120b_locally_on_a_ryzen/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'kvhH-hbEUkvl6eObasVNIpUNJQkw7sc9b6CCEl7mAPI', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/kvhH-hbEUkvl6eObasVNIpUNJQkw7sc9b6CCEl7mAPI.jpeg?width=108&crop=smart&auto=webp&s=36ec98a4b3fc0aa15d7f9fbc9c7b4a9e6ab3b3ce', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/kvhH-hbEUkvl6eObasVNIpUNJQkw7sc9b6CCEl7mAPI.jpeg?width=216&crop=smart&auto=webp&s=a422622882c157f6eb4bbf4baf2501ef30991fb2', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/kvhH-hbEUkvl6eObasVNIpUNJQkw7sc9b6CCEl7mAPI.jpeg?width=320&crop=smart&auto=webp&s=5407ecf8e9309b22a1be2b7f3668ca519888b4ae', 'width': 320}, {'height': 303, 'url': 'https://external-preview.redd.it/kvhH-hbEUkvl6eObasVNIpUNJQkw7sc9b6CCEl7mAPI.jpeg?width=640&crop=smart&auto=webp&s=cbc6f55c5a2332cc610ecb9862fa6d271e3397ca', 'width': 640}, {'height': 455, 'url': 'https://external-preview.redd.it/kvhH-hbEUkvl6eObasVNIpUNJQkw7sc9b6CCEl7mAPI.jpeg?width=960&crop=smart&auto=webp&s=1fff58212517704a4b322b54e0220fc5290b3ffe', 'width': 960}, {'height': 512, 'url': 'https://external-preview.redd.it/kvhH-hbEUkvl6eObasVNIpUNJQkw7sc9b6CCEl7mAPI.jpeg?width=1080&crop=smart&auto=webp&s=689bc4629c32e06946d543c76bcae0d284a830fc', 'width': 1080}], 'source': {'height': 569, 'url': 'https://external-preview.redd.it/kvhH-hbEUkvl6eObasVNIpUNJQkw7sc9b6CCEl7mAPI.jpeg?auto=webp&s=2596b6b0a5305fb7497af57340cdaeb49d9818f2', 'width': 1200}, 'variants': {}}]} |
What is the real world hit of using PCIe 4.0 instead of PCIe 5.0 with a 5090? | 65 | I’m trying to be a bit “cheap” and just buy a 5090 for my desktop that is currently running a 3060. It’s a high end build 128gb RAM, video card is the worst part. I’ll probably slowly end up upgrading everything, but I would like to start with the GPU.
I’m assuming someone might have tried this already? | 2025-10-26T19:21:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ogtdbg/what_is_the_real_world_hit_of_using_pcie_40/ | liviuberechet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogtdbg | false | null | t3_1ogtdbg | /r/LocalLLaMA/comments/1ogtdbg/what_is_the_real_world_hit_of_using_pcie_40/ | false | false | self | 65 | null |
Voice 2 voice models? | 3 | Hi, are there any open weight voice to voice small can fit in 24gb VRAM models?
Thanks. | 2025-10-26T18:59:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ogst0l/voice_2_voice_models/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogst0l | false | null | t3_1ogst0l | /r/LocalLLaMA/comments/1ogst0l/voice_2_voice_models/ | false | false | self | 3 | null |
OpenEnv: Agentic Execution Environments for RL post training in PyTorch | 0 | 2025-10-26T18:58:52 | https://www.deepfabric.dev/blog/introduction_to_openenv | DecodeBytes | deepfabric.dev | 1970-01-01T00:00:00 | 0 | {} | 1ogsssa | false | null | t3_1ogsssa | /r/LocalLLaMA/comments/1ogsssa/openenv_agentic_execution_environments_for_rl/ | false | false | default | 0 | null | |
[P] VibeVoice-Hindi-7B: Open-Source Expressive Hindi TTS with Multi-Speaker + Voice Cloning | 18 | Released VibeVoice-Hindi-7B and VibeVoice-Hindi-LoRA — fine-tuned versions of the Microsoft VibeVoice model, bringing frontier Hindi text-to-speech with long-form synthesis, multi-speaker support, and voice cloning.
• Full Model: https://huggingface.co/tarun7r/vibevoice-hindi-7b
• LoRA Adapters: https://huggingface.co/tarun7r/vibevoice-hindi-lora
• Base Model: https://huggingface.co/vibevoice/VibeVoice-7B
Features:
• Natural Hindi speech synthesis with expressive prosody
• Multi-speaker dialogue generation
• Voice cloning from short reference samples (10–30 seconds)
• Long-form audio generation (up to 45 minutes context)
• Works with VibeVoice community pipeline and ComfyUI
Tech Stack:
• Qwen2.5-7B LLM backbone with LoRA fine-tuning
• Acoustic (σ-VAE) + semantic tokenizers @ 7.5 Hz
• Diffusion head (~600M params) for high-fidelity acoustics
• 32k token context window
Released under MIT License. Feedback and contributions welcome! | 2025-10-26T18:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ogsng2/p_vibevoicehindi7b_opensource_expressive_hindi/ | martian7r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogsng2 | false | null | t3_1ogsng2 | /r/LocalLLaMA/comments/1ogsng2/p_vibevoicehindi7b_opensource_expressive_hindi/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'SM7O8ZQr24s7mFVQxHEgjFALFMzQc3fPuDBx7bXp9wM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SM7O8ZQr24s7mFVQxHEgjFALFMzQc3fPuDBx7bXp9wM.png?width=108&crop=smart&auto=webp&s=0b4886f9446e5f92c522edb7fa4f223bd3dc8727', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SM7O8ZQr24s7mFVQxHEgjFALFMzQc3fPuDBx7bXp9wM.png?width=216&crop=smart&auto=webp&s=3fada750c015d7c275eba07945a921fde0682cd7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SM7O8ZQr24s7mFVQxHEgjFALFMzQc3fPuDBx7bXp9wM.png?width=320&crop=smart&auto=webp&s=3e39942f8d958af23c14098828891cd70c5e8e56', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SM7O8ZQr24s7mFVQxHEgjFALFMzQc3fPuDBx7bXp9wM.png?width=640&crop=smart&auto=webp&s=6db80df2ec83b30c44a2d4d13b1d398e75c2fad8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SM7O8ZQr24s7mFVQxHEgjFALFMzQc3fPuDBx7bXp9wM.png?width=960&crop=smart&auto=webp&s=fdde4948e3876180cd55a7a916974aecf10467fd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SM7O8ZQr24s7mFVQxHEgjFALFMzQc3fPuDBx7bXp9wM.png?width=1080&crop=smart&auto=webp&s=986eabe1901bb6c9c9d412f344b6635db19ac473', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SM7O8ZQr24s7mFVQxHEgjFALFMzQc3fPuDBx7bXp9wM.png?auto=webp&s=7de0014d3e4cdf4b4ff7ef2d7f3f6048063b6105', 'width': 1200}, 'variants': {}}]} |
My course sales went skyrocket after I started uploading my photos ( AI photos ) daily, used this community led AI photography agent for very cheap price | 0 | I am 60 year old guy and after covid19 I started writing my learnings across sales, marketing and used to make tiktok and post on X to sell my course to share my learnings.
Somehow I got dependent on the revenue of my course, I never wanted it to happen but it happened eventually.
And my revenue is going flat due to saturation, major reason was my course was expensive and people do not know me, and my face. But at 60 I do not have energy and mood for photos or face camera.
Last week I saw on reddit about [looktara.com](http://looktara.com) AI photography tool made by linkedin creators community to post photos daily on their socials and none caught its AI.
I bought smallest plan and tried. Really found it helpful and I sent my son my photos and he asked me dad are you scuba diving haha!
I started uploading my photos with good insights on captions and making post relevant photos. I saw engagement getting increased and sales killing it.
Last month I recorded peak sales just because of posting daily and posting my face almost daily. | 2025-10-26T18:16:24 | Fierce_5 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogrpt1 | false | null | t3_1ogrpt1 | /r/LocalLLaMA/comments/1ogrpt1/my_course_sales_went_skyrocket_after_i_started/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'E--MgtohY7-iAZMgblRhEsqfIXgmJ747WEVV502jFzM', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/gei9aj6kzhxf1.png?width=108&crop=smart&auto=webp&s=16c4f47de4bfd2f7b55bebb7c2b81dcb42e8527b', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/gei9aj6kzhxf1.png?width=216&crop=smart&auto=webp&s=ec1794b935c6fe079ba3be729657665389483a3e', 'width': 216}, {'height': 161, 'url': 'https://preview.redd.it/gei9aj6kzhxf1.png?width=320&crop=smart&auto=webp&s=563c68f4f0433ab643d8174c4f94d46fed9c6273', 'width': 320}, {'height': 323, 'url': 'https://preview.redd.it/gei9aj6kzhxf1.png?width=640&crop=smart&auto=webp&s=5c874d0ff2a3c19a15680e3bd3c1b7275a9074c5', 'width': 640}, {'height': 485, 'url': 'https://preview.redd.it/gei9aj6kzhxf1.png?width=960&crop=smart&auto=webp&s=decb636acfc596eb6af3af86c9147ee32039dcb8', 'width': 960}, {'height': 546, 'url': 'https://preview.redd.it/gei9aj6kzhxf1.png?width=1080&crop=smart&auto=webp&s=f7904d323e9a1919bbcffb0be868a6aae9c2002e', 'width': 1080}], 'source': {'height': 809, 'url': 'https://preview.redd.it/gei9aj6kzhxf1.png?auto=webp&s=c231404634f81574993b083eaa088501b6d2c544', 'width': 1600}, 'variants': {}}]} | ||
Uncensored AI for scientific research | 0 | Uncensored AI for scientific research without any filters, and can stay consistent on long tasks without going off the rails or making stuff up halfway? | 2025-10-26T18:15:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ogrp22/uncensored_ai_for_scientific_research/ | PrintCreepy8982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogrp22 | false | null | t3_1ogrp22 | /r/LocalLLaMA/comments/1ogrp22/uncensored_ai_for_scientific_research/ | false | false | self | 0 | null |
My course sales went skyrocket after I started uploading my photos ( AI photos ) daily, used this community led AI photography agent for very cheap price | 1 | [deleted] | 2025-10-26T18:15:33 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ogrp06 | false | null | t3_1ogrp06 | /r/LocalLLaMA/comments/1ogrp06/my_course_sales_went_skyrocket_after_i_started/ | false | false | default | 1 | null | ||
780M IGPU for Rocm and Vulkan Ubuntu instructions. (Original from MLDataScientist) | 19 | # Getting llama.cpp Running on AMD 780M (Ubuntu 25.04)
I cannot take credit for this, as it was started by this thread here from MLDataScientist.
[gpt-oss 120B is running at 20t/s with $500 AMD M780 iGPU mini PC and 96GB DDR5 RAM : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1nxztlx/gptoss_120b_is_running_at_20ts_with_500_amd_m780/)
I just wanted to document what I had to do to get it working this weekend on my MinisForum UM890 Pro that has a Ryzen 9 8945HS with 96GB of DDR5-5600 Ram. [https://www.amazon.com/dp/B0D9YLQMHX](https://www.amazon.com/dp/B0D9YLQMHX)
These notes capture a working configuration for running llama.cpp with both ROCm and Vulkan backends on a MinisForum mini PC with Radeon 780M iGPU. Steps were validated on Ubuntu 25.04.
---
# Step 1 Base Install
* Install Ubuntu 25.04 (or newer) on the mini PC.
* Create an admin user (referenced as `myusername`).
# Step 2 Kernel 6.17.5
Upgrade the kernel with `ubuntu-mainline-kernel.sh` and reboot into the new kernel.
# Step 3 GTT/TTM Memory Tuning
Reserve ~87-88 GiB of RAM for the iGPU's GTT pool. Reduce `gttsize` (e.g., `87000`) if the allocation fails.
Reboot, then verify the allocation:
Expected lines:
# Optional GRUB flags
I did not have to touch GRUB flags; see the original thread if you need to try that.
# Step 4 Grab llama.cpp Builds
Keep two directories so you can swap backends freely:
* **Vulkan build (official ggml):** [https://github.com/ggml-org/llama.cpp/releases](https://github.com/ggml-org/llama.cpp/releases) → `~/llama-vulkan/`
* **ROCm build (lemonade SDK, gfx110x):** [https://github.com/lemonade-sdk/llamacpp-rocm/releases/tag/b1090](https://github.com/lemonade-sdk/llamacpp-rocm/releases/tag/b1090) → `~/llama-rocm/`
After extracting I had to make the binaries executable. `chmod +x ~/llama-*/llama-*`.
# Step 5 Render Node Permissions
If you hit `Permission denied` on `/dev/dri/renderD128`, add yourself to the `render` group and re-login (or reboot).
# Step 6 Vulkan Runtime Packages
# Step 7 Sanity Check ROCm Build
Sample startup output:
Load a small model (e.g., llama2-7B Q4\_0) to confirm inference runs without segfaults.
# Step 8 Sanity Check Vulkan Build
Sample startup output:
load_backend: loaded RPC backend from /home/username/llama-vulkan/libggml-rpc.so
ggml_vulkan: Found 1 Vulkan devices:
0 = AMD Radeon Graphics (RADV PHOENIX) (radv) | uma: 1 | fp16: 1 | bf16: 0
load_backend: loaded Vulkan backend ...
llama_model_load_from_file_impl: using device Vulkan0 (AMD Radeon Graphics (RADV PHOENIX)) (0000:c6:00.0) - 60638 MiB free
Maybe this helps someone else navigate but wanted to share how I got it working. | 2025-10-26T18:14:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ogrnxv/780m_igpu_for_rocm_and_vulkan_ubuntu_instructions/ | Mnemoc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogrnxv | false | null | t3_1ogrnxv | /r/LocalLLaMA/comments/1ogrnxv/780m_igpu_for_rocm_and_vulkan_ubuntu_instructions/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=108&crop=smart&auto=webp&s=c7ef9713fb4fbf51d0d7da30fb558f95324a395b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=216&crop=smart&auto=webp&s=70f4ef0366eafa569960666b4537977954dc4da4', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=320&crop=smart&auto=webp&s=e88e6f574ea2b6abf3644be5140a1ed8ad6d613c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=640&crop=smart&auto=webp&s=290ace7209dd3df0a237ec970a6a8b1662d523e1', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=960&crop=smart&auto=webp&s=421952297faebb04d1038184216c053ab1f0bb56', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=1080&crop=smart&auto=webp&s=2e3704dd3e397c6dbebe004c6cce33e8cd82d316', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?auto=webp&s=8cdb17f0919f23f3fc3c0bd9dac21cd40118adda', 'width': 1910}, 'variants': {}}]} |
Choosing the right model | 3 | I need your opinion/help. I'm looking for a self-hosted LLM that's perfect at tool calling and also has logical reasoning/understanding (it should be somewhat familiar with tax/invoicing and legal issues). I currently have 48 GB of VRAM available. I was thinking about using llama3.1 70b instruct awq. I would describe everything in detail in the system prompt, what it should do and how, what superficial rules there are, etc. I've already tested a few models, like Llama3.1 8b Instruct, but it's quite poor in terms of the context for tool calling. Qwen3 32b works quite well but unfortunately fails at tool calling with VLLM openapi and langchain ChatOpenAi. Thanks in advance :) | 2025-10-26T18:09:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ogrjo2/choosing_the_right_model/ | Bowdenzug | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogrjo2 | false | null | t3_1ogrjo2 | /r/LocalLLaMA/comments/1ogrjo2/choosing_the_right_model/ | false | false | self | 3 | null |
[ Removed by Reddit ] | 1 | [ Removed by Reddit on account of violating the [content policy](/help/contentpolicy). ] | 2025-10-26T17:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ogr30k/removed_by_reddit/ | ai-infos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogr30k | false | null | t3_1ogr30k | /r/LocalLLaMA/comments/1ogr30k/removed_by_reddit/ | false | false | self | 1 | null |
Choosing between M4 and M4 Pro for local inference (Ollama, up to 32B models) | 0 | Hi everyone,
I’m planning to build a small local server that will mainly run Ollama, mostly for email classification tasks using something like gpt-oss-20b. I’d like to make it somewhat futureproof, in case my needs grow over time, but I doubt I’ll ever go beyond 32B models.
Besides Ollama, I’ll also run n8n to automate the classification workflow, and probably a few MCP servers for things like home automation.
I’m really tempted by the Mac Mini, especially the base model, since prices are quite attractive right now. But I’m not sure how well the M4 handles inference compared to the M4 Pro, which quickly gets much more expensive.
If you’ve used either for local inference, I’d love to know how they perform, especially in terms of tokens per second. In my case, the models will be used inside automated pipelines rather than for real-time interaction, so slower inference wouldn’t be a dealbreaker, as long as it stays reasonably fast in case my workloads grow.
Also, how much unified memory would you recommend to comfortably run inference alongside other services like n8n and MCP servers? I think I'll need at least 32 GB, at most 64 GB?
Finally, if I go with Apple, is macOS stable enough to run as a small always-on server? I’d rather avoid installing Linux on Apple Silicon if it ends up being less stable or less convenient for 24/7 use.
Any real-world feedback or benchmarks would be really appreciated.
Thanks! | 2025-10-26T17:50:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ogr1ms/choosing_between_m4_and_m4_pro_for_local/ | Fun-Employment-5212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogr1ms | false | null | t3_1ogr1ms | /r/LocalLLaMA/comments/1ogr1ms/choosing_between_m4_and_m4_pro_for_local/ | false | false | self | 0 | null |
Looking for a simple real-time local speech transcription API for Windows | 3 | I'd like to experiment with something that could help my immobile relative control his computer with voice. He's been using Windows 10 Speech Recognition for years, but it does not support his language (Latvian). Now he's upgraded to Windows 11 with Voice Access, but that one is buggy and worse.
Now we have better voice recognition out there. I know that Whisper supports Latvian and have tested whisper-fast on my ComfyUI installation.
I will implement the mouse, keyboard and system commands myself - should be easy, I've programmed desktop apps in C#.
All I need is to have some kind of a small background server that receives audio from a microphone and has a simple HTTP or TCP API that I could poll for accumulated transcribed text, and ideally, with some kind of timestamps or relative time since the last detected word, so that I could distinguish separate voice commands by pauses when needed. Ideally, it should also have a simple option to select the correct microphone and also maybe to increase gain for preprocessing the audio, because his voice is quite weak, and default mic settings even at 100% might be too low. Although Windows 10 SR worked fine, so, hopefully, Whisper won't be worse.
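To make that concrete, this is roughly the transcription core I have in mind (just a sketch assuming faster-whisper and sounddevice; the model size, sample rate and gain factor are placeholder values I would tune for his machine):

```
# Sketch of the transcription core: boosted mic capture + word-level timestamps.
# Assumes faster-whisper and sounddevice; "small", 16 kHz and the 2.0 gain are placeholders.
import numpy as np
import sounddevice as sd
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")

def record_chunk(seconds: float = 5.0, samplerate: int = 16000, gain: float = 2.0) -> np.ndarray:
    """Capture a short chunk from the default microphone and apply extra gain."""
    audio = sd.rec(int(seconds * samplerate), samplerate=samplerate,
                   channels=1, dtype="float32")
    sd.wait()
    return np.clip(audio[:, 0] * gain, -1.0, 1.0)

def transcribe_chunk(audio: np.ndarray) -> list[tuple[float, str]]:
    """Return (start_time, word) pairs so pauses between commands can be detected."""
    segments, _info = model.transcribe(audio, language="lv", word_timestamps=True)
    words = []
    for segment in segments:
        for word in segment.words or []:
            words.append((word.start, word.word.strip()))
    return words

if __name__ == "__main__":
    print(transcribe_chunk(record_chunk()))
```

Wrapping something like this in a small HTTP server that keeps appending the (timestamp, word) pairs to a list would cover the polling part; the C# side would then only need to poll that list.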
I have briefly browsed a few GitHub projects implementing faster-whisper but there are too many unknowns about every project. Some seem to not support Windows at all. Some need Docker (which I wouldn't want to install to every end-user's machine, if my project ends up useful for more people). Some might work only with a latest generation GPU (I'm ready to buy him a 3060 if the solution in general turns out to be useful). Some might not support real-time microphone transcription. It might take me weeks to test them all and fail many times until I find something usable.
I hoped that someone else has already found such a simple real-time transcription tool that could easily be set up on a computer of someone who does not have any development tools installed at all. Wouldn't want it suddenly fail because it cannot build a Python wheel, which some GitHub projects attempt to do. Something that runs with embedded Python would be ok - then I could set up everything on my computer and copy everything to his machine when its ready. | 2025-10-26T17:48:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ogqzqj/looking_for_a_simple_realtime_local_speech/ | martinerous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogqzqj | false | null | t3_1ogqzqj | /r/LocalLLaMA/comments/1ogqzqj/looking_for_a_simple_realtime_local_speech/ | false | false | self | 3 | null |
Anyone have experience with Local Motion Capture models? | 2 | I can only find datasets on hugging face but not the models. if anyone has any ideas. that would be appreciated! | 2025-10-26T17:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ogqv0p/anyone_have_experience_with_local_motion_capture/ | onil34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogqv0p | false | null | t3_1ogqv0p | /r/LocalLLaMA/comments/1ogqv0p/anyone_have_experience_with_local_motion_capture/ | false | false | self | 2 | null |
Have access to the LLM but don't know what to do with it .... | 0 | I have a 5080 and a 4070, used to have a 3090, a subscription to GLM 4.6 that allows 500 calls every 5 hours, Codex CLI enterprise, MiniMax Free till November, Nano Banana credit, $80 left in OpenRouter credit, and more. And yet, I don't know what to do with the LLMs.
I think my access to LLMs can be considered infinite in my case. I feel truly stuck for ideas right now. Is there anyone else like this? | 2025-10-26T17:32:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ogql0m/have_access_to_the_llm_but_dont_know_what_to_do/ | GTHell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogql0m | false | null | t3_1ogql0m | /r/LocalLLaMA/comments/1ogql0m/have_access_to_the_llm_but_dont_know_what_to_do/ | false | false | self | 0 | null
Ryzen AI Max+ 395 vs RTX 4000 ada SFF | 5 | Hi,
Quick question to you all.
Context: I have an RTX 4000 Ada that was just sitting in a drawer here. I also had an unused machine with a 10th gen i7 and 64 GB of RAM collecting dust. I decided to put them together and try to run Ollama on Ubuntu.
I am getting about 31 tokens per second with Gemma3:12b.
However, the system is too big and I want something compact, so I bought a GMKtec with the Ryzen AI Max+ 395 and 64gb of shared memory.
The GMKtec is doing 24 tokens per second on the same model on windows ollama.
I saw some people here having like 40 tokens per second with the Ryzen AI Max+ 395 with models of like 37b parameters.
So, what am I missing here? Is my expectation that the Ryzen should be faster for llm wrong? | 2025-10-26T17:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ogq54x/ryzen_ai_max_395_vs_rtx_4000_ada_sff/ | dougmaitelli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogq54x | false | null | t3_1ogq54x | /r/LocalLLaMA/comments/1ogq54x/ryzen_ai_max_395_vs_rtx_4000_ada_sff/ | false | false | self | 5 | null |
I built a personal AI that learns who you are and what actually works for you | 0 | Matthew McConaughey on Joe Rogan (#2379) talked about wanting a private AI trained only on his own writings and experiences - something that learns from YOUR stuff, not the entire internet. That's exactly what I built.
A few months back I was talking with ChatGPT and went on a tangent about building a personal assistant. Tossed some ideas around, built the file structure with its help, started copy-pasting code. It showed signs of life.
Hit roadblocks. Dug deeper. Worked with Gemini to refactor it modularly so I could swap in any LLM. Then heard people talking about Grok - used it, made strides with code the others couldn't handle. Found Cursor, eventually Claude Code. Piece by piece, it came together.
Only problem: I vastly overengineered it. Went to school for psychology, wanted to model memory like a human brain. Built belief trees, sentiment learning, automatic scoring systems, the whole deal. Went OVERBOARD.
But stripping out the overengineering showed me what was actually needed. I had the system rigidly controlling everything - automatically scoring memories, deciding what to keep, following strict rules. The LLM needed freedom. So I gave it autonomy - it decides what's worth remembering, how to score things, what patterns matter, how to organize its own understanding. You still have override control, but it's the AI's brain to manage, not mine.
[**Here's what came out of it**](https://youtu.be/bgL35eJVh8w)
Roampal. A personal AI that learns who YOU are - what you need, what you want, what you like, what actually works for your specific situation.
**How it works:**
5-tier memory system tracking everything from current context to proven patterns. The system detects outcomes automatically - whether something worked or failed - and updates scores across a knowledge graph. You can also mark outcomes manually. Over time it builds genuine understanding of what approaches work for you specifically.
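To give a feel for the outcome-scoring part, here is a stripped-down illustration (not the actual Roampal code; the names and the update rule are simplified stand-ins):

```
# Stripped-down illustration of outcome-scored memory (not the real implementation).
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    score: float = 0.5                      # neutral starting point
    outcomes: list[bool] = field(default_factory=list)

    def record_outcome(self, worked: bool, weight: float = 0.1) -> None:
        """Nudge the score toward 1.0 on success and toward 0.0 on failure."""
        self.outcomes.append(worked)
        target = 1.0 if worked else 0.0
        self.score += weight * (target - self.score)

store: list[Memory] = []

def remember(text: str) -> Memory:
    m = Memory(text)
    store.append(m)
    return m

def proven_patterns(k: int = 3) -> list[Memory]:
    """Surface the approaches that have actually worked for this user."""
    return sorted(store, key=lambda m: m.score, reverse=True)[:k]
```

The real system does this across the knowledge graph and lets the LLM decide what is worth remembering, but the worked/failed feedback loop is the core of it.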
Runs locally via Ollama (Llama, Qwen, Mistral, whatever). Your conversations never leave your machine. Built with ChromaDB, FastAPI, Tauri.
The thing empowers you in a way cloud AI never could - because it's learning YOUR patterns, YOUR preferences, YOUR outcomes. Not optimizing for some corporate metric.
**Current state:**
Open source: [https://github.com/roampal-ai/roampal](https://github.com/roampal-ai/roampal) (MIT)
Paid executables: [https://roampal.ai](https://roampal.ai) ($9.99) if you don't want to build it
Alpha stage, rough around the edges.
Looking for feedback from people running local models! | 2025-10-26T17:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ogpz4h/i_built_a_personal_ai_that_learns_who_you_are_and/ | Roampal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogpz4h | false | null | t3_1ogpz4h | /r/LocalLLaMA/comments/1ogpz4h/i_built_a_personal_ai_that_learns_who_you_are_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'H5sfRG9tkit_0KnmDqA-cXu7eQbySTE97EXg4EfpVkk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/H5sfRG9tkit_0KnmDqA-cXu7eQbySTE97EXg4EfpVkk.jpeg?width=108&crop=smart&auto=webp&s=7a4749e62d654172af37e9f7167bb0a76e620d16', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/H5sfRG9tkit_0KnmDqA-cXu7eQbySTE97EXg4EfpVkk.jpeg?width=216&crop=smart&auto=webp&s=fe364978fc3100aabf21992cbb8446302947b2bf', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/H5sfRG9tkit_0KnmDqA-cXu7eQbySTE97EXg4EfpVkk.jpeg?width=320&crop=smart&auto=webp&s=a8c6a73d659e526a8bbe931b5d52b2ae06465043', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/H5sfRG9tkit_0KnmDqA-cXu7eQbySTE97EXg4EfpVkk.jpeg?auto=webp&s=52d9a7c99e6301a4f9bfb83aa87d66102805f07a', 'width': 480}, 'variants': {}}]} |
LocalLLaMA with a File Manager -- handling 10k+ or even millions of PDFs and Excels. | 1 | Hello. Happy Sunday. Would you like to add a File manager to your local LLaMA applications, so that you can handle millions of local documents?
I would like to collect feedback on the need for a file manager in the RAG system.
I just posted on LinkedIn
[https://www.linkedin.com/feed/update/urn:li:activity:7387234356790079488/](https://www.linkedin.com/feed/update/urn:li:activity:7387234356790079488/) about the file manager we recently launched at [https://chat.vecml.com/](https://chat.vecml.com/)
The motivation is simple: Most users upload one or a few PDFs into ChatGPT, Gemini, Claude, or Grok — convenient for small tasks, but painful for real work:
(1) What if you need to manage 10,000+ PDFs, Excels, or images?
(2) What if your company has millions of files — contracts, research papers, internal reports — scattered across drives and clouds?
(3) Re-uploading the same files to an LLM every time is a massive waste of time and compute.
A File Manager will let you:
1. Organize thousands of files hierarchically (like a real OS file explorer)
2. Index and chat across them instantly
3. Avoid re-uploading or duplicating documents
4. Select multiple files or multiple subsets (sub-directories) to chat with.
5. Convenient for adding access control in the near future.
On the other hand, I have heard different voices. Some still feel that they just need to dump the files in (somewhere) and AI/LLM will automatically and efficiently index/manage the files. They believe file manager is an outdated concept. | 2025-10-26T16:44:50 | https://www.reddit.com/gallery/1ogpdp8 | DueKitchen3102 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ogpdp8 | false | null | t3_1ogpdp8 | /r/LocalLLaMA/comments/1ogpdp8/localllama_with_a_file_manager_handling_10k_or/ | false | false | 1 | null | |
Qwen3-VL-32B is really good. Quick test vs several other local models I keep on my workstation (details in comments) | 99 | 2025-10-26T16:28:19 | EmPips | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogoyza | false | null | t3_1ogoyza | /r/LocalLLaMA/comments/1ogoyza/qwen3vl32b_is_really_good_quick_test_vs_several/ | false | false | 99 | {'enabled': True, 'images': [{'id': 'PJp0Q0EO_kcgm-eqRwh7cRh5dl9ethwhq-dhipwqTe8', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/8a00jiy4ghxf1.png?width=108&crop=smart&auto=webp&s=1adec1ca86d16ad49ef9b8d3cad0cae567999cb3', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/8a00jiy4ghxf1.png?width=216&crop=smart&auto=webp&s=4210ef919def50844eac231d5999d3cf1256e42e', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/8a00jiy4ghxf1.png?width=320&crop=smart&auto=webp&s=0421fe6bd1a38754ceb142394a7d19dc1de30fca', 'width': 320}, {'height': 293, 'url': 'https://preview.redd.it/8a00jiy4ghxf1.png?width=640&crop=smart&auto=webp&s=d362439263cafe886a82048ec21177d435463df4', 'width': 640}, {'height': 440, 'url': 'https://preview.redd.it/8a00jiy4ghxf1.png?width=960&crop=smart&auto=webp&s=6476e82fd9c8b49e38be5d7f16fe052d0d1067ab', 'width': 960}, {'height': 495, 'url': 'https://preview.redd.it/8a00jiy4ghxf1.png?width=1080&crop=smart&auto=webp&s=4cad10b45097c58bfc42a2909613f4b781db5c65', 'width': 1080}], 'source': {'height': 1753, 'url': 'https://preview.redd.it/8a00jiy4ghxf1.png?auto=webp&s=2d6abd9b2a48fc508fdac365b13f6180693b408b', 'width': 3819}, 'variants': {}}]} | |||
Using my Mac Mini M4 as an LLM server—Looking for recommendations | 3 | I’m looking to set up my Mac Mini M4 (24 GB RAM) as an LLM server. It’s my main desktop, but I want to also use it to run language models locally. I’ve been playing around with the OpenAI API, and ideally I want something that:
• Uses the OpenAI API endpoint (so it’s compatible with existing OpenAI API calls and can act as a drop-in replacement)
• Supports API key authentication. Even though everything will run on my local network, I want API keys to make sure I’m implementing projects correctly.
• Is easy to use or has excellent documentation.
• Can start at boot, so the service is always accessible.
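For reference, this is the kind of drop-in usage the first two points describe (a sketch only; the host, port, key and model name are placeholders, and the server could be llama.cpp's llama-server with --api-key, LM Studio, or anything else exposing an OpenAI-compatible /v1 endpoint):

```
# The "drop-in replacement" requirement in practice: same openai client, local base_url.
# Host, port, API key and model name below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://mac-mini.local:8080/v1",  # local OpenAI-compatible endpoint
    api_key="my-local-key",                    # checked by the server, never leaves the LAN
)

response = client.chat.completions.create(
    model="qwen2.5-14b-instruct",
    messages=[{"role": "user", "content": "Hello from the LAN"}],
)
print(response.choices[0].message.content)
```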
I have been looking into LocalAI but the documentation is poor and I simply couldn't get it to run.
I’d appreciate any pointers, recommendations, or examples of setups people are using on macOS for this.
Thanks in advance!
| 2025-10-26T15:55:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ogo4vt/using_my_mac_mini_m4_as_an_llm_serverlooking_for/ | cockpit_dandruff | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogo4vt | false | null | t3_1ogo4vt | /r/LocalLLaMA/comments/1ogo4vt/using_my_mac_mini_m4_as_an_llm_serverlooking_for/ | false | false | self | 3 | null |
I made a 1B model to generate 3d files (barely) | 62 | 2 weeks ago, I finetuned Gemma3 1B on Synthetic 3D file data. I called the model K-1B.
Yesterday I packaged it into an app, hosting the model on Modal.
I would appreciate any feedback as this is a hobby project that I will keep on training the model etc.
Thanks :) | 2025-10-26T15:52:22 | https://cadmonkey.web.app | ThomasPhilli | cadmonkey.web.app | 1970-01-01T00:00:00 | 0 | {} | 1ogo2jv | false | null | t3_1ogo2jv | /r/LocalLLaMA/comments/1ogo2jv/i_made_a_1b_model_to_generate_3d_files_barely/ | false | false | default | 62 | null |
This is expensive. Anyone know where I can get a better deal? | 0 | 2025-10-26T15:27:56 | Excellent_Koala769 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogngmu | false | null | t3_1ogngmu | /r/LocalLLaMA/comments/1ogngmu/this_is_expensive_anyone_know_where_i_can_get_a/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'C10eTzDD_z7tiB2fYvva8RXRfqKCFiAwIdmGOZnuq94', 'resolutions': [{'height': 150, 'url': 'https://preview.redd.it/13pl93mi4hxf1.png?width=108&crop=smart&auto=webp&s=6ee51535eea6eb246cc3097c670ff5d590b48a0f', 'width': 108}, {'height': 300, 'url': 'https://preview.redd.it/13pl93mi4hxf1.png?width=216&crop=smart&auto=webp&s=cea622a9163d4fdcece6bf584a3fd20f3d3c5fc1', 'width': 216}, {'height': 444, 'url': 'https://preview.redd.it/13pl93mi4hxf1.png?width=320&crop=smart&auto=webp&s=eb439c8621330abe109219e7b7dfba7a6f1c52d8', 'width': 320}, {'height': 889, 'url': 'https://preview.redd.it/13pl93mi4hxf1.png?width=640&crop=smart&auto=webp&s=07b4add61c7a69dfe17703bbf5dbc7d13f130878', 'width': 640}, {'height': 1334, 'url': 'https://preview.redd.it/13pl93mi4hxf1.png?width=960&crop=smart&auto=webp&s=3c6d6c61380a5d46ac9b609d99cf803f10d53900', 'width': 960}, {'height': 1501, 'url': 'https://preview.redd.it/13pl93mi4hxf1.png?width=1080&crop=smart&auto=webp&s=4e1394ceae40975c9d8d63494c37a2d4e1b9c7a1', 'width': 1080}], 'source': {'height': 1916, 'url': 'https://preview.redd.it/13pl93mi4hxf1.png?auto=webp&s=39e929df4e62b384c59a2e6f40de33232f05401b', 'width': 1378}, 'variants': {}}]} | |||
How to take advantage of parallel requests to keep inference pipeline full for one user task? | 1 | A lot of the current models can serve 5000-10000/tks per second in parallel requests but only 50-60 in single requests. How can we break down user asks into simultaneous parallel requests, either via agents or something else. Especially thinking of coding and image generation/editing. | 2025-10-26T15:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ogndcf/how_to_take_advantage_of_parallel_requests_to/ | LargelyInnocuous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogndcf | false | null | t3_1ogndcf | /r/LocalLLaMA/comments/1ogndcf/how_to_take_advantage_of_parallel_requests_to/ | false | false | self | 1 | null |
Call for feedback on an open-source RAG API platform that can run with local LLMs | 5 | We've just launched [Skald](https://github.com/skaldlabs/skald), an API platform for building AI apps. It's MIT-licensed and self-hostable, and we've actually made it work with both local embedding models and a locally-hosted LLM. We're new to this space but we believe it's important for people to have the option to run AI applications without sending the data to third-parties.
Keen to hear from people in this community if this works with your setup and what improvement suggestions you'd have! Here are [our docs for self-hosting with no third-parties](https://docs.useskald.com/docs/self-host/full-local). | 2025-10-26T15:22:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ognbw2/call_for_feedback_on_an_opensource_rag_api/ | brodagaita | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ognbw2 | false | null | t3_1ognbw2 | /r/LocalLLaMA/comments/1ognbw2/call_for_feedback_on_an_opensource_rag_api/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'hU_wzX7WR_mpalrNT5cWeXsEbV1YHZVG3MABQPw475c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hU_wzX7WR_mpalrNT5cWeXsEbV1YHZVG3MABQPw475c.png?width=108&crop=smart&auto=webp&s=d61fe57ae1659d8ffceddec21290fab1b23ea435', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hU_wzX7WR_mpalrNT5cWeXsEbV1YHZVG3MABQPw475c.png?width=216&crop=smart&auto=webp&s=3ab661132924af4f5f7f5bb9a976e0cd2fda846d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hU_wzX7WR_mpalrNT5cWeXsEbV1YHZVG3MABQPw475c.png?width=320&crop=smart&auto=webp&s=e8514177e4e849714608acb81a0fe7cb4fb2bf4a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hU_wzX7WR_mpalrNT5cWeXsEbV1YHZVG3MABQPw475c.png?width=640&crop=smart&auto=webp&s=4acb92f5e6de0bfe0bfd85c919d0f0eec10614dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hU_wzX7WR_mpalrNT5cWeXsEbV1YHZVG3MABQPw475c.png?width=960&crop=smart&auto=webp&s=41aa882f65807bfe386bb4b2fec7ca2ac9e0f619', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hU_wzX7WR_mpalrNT5cWeXsEbV1YHZVG3MABQPw475c.png?width=1080&crop=smart&auto=webp&s=0e164dfdf95d8f7761e628604fbb25a8003dd141', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hU_wzX7WR_mpalrNT5cWeXsEbV1YHZVG3MABQPw475c.png?auto=webp&s=24f76ff787e3b245f3d595b4e1d809923a9109ac', 'width': 1200}, 'variants': {}}]} |
Built a lightweight Trust & Compliance layer for AI. Am curious if it’s useful for local / self-hosted setups | 3 | Hey all!
I’ve been building something with a policy expert who works on early drafts of the **EU AI Act** and **ISO 42001**.
Together we made [**Intilium**](https://intilium.ai/). A small **Trust & Compliance layer** that sits in front of your AI stack.
It’s basically an **API gateway** that:
Enforces model and region policies (e.g. EU-only, provider allow-lists)
Detects and masks PII before requests go out
Keeps a full audit trail of every LLM call
Works with OpenAI, Anthropic, Google, Mistral and could extend to **local models** too
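To make the masking point concrete, the PII step is conceptually something like this (a heavily simplified sketch, not Intilium's production code; the patterns are just examples):

```
# Heavily simplified illustration of the PII-masking step (not production code).
# The regex patterns below are examples only; real detection is more involved.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders and report what was masked for the audit trail."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"<{label}>", text)
    return text, found

masked, findings = mask_pii("Reach me at jane.doe@example.com or +49 170 1234567")
print(masked, findings)
```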
The idea is to help teams (or solo builders) **prove compliance automatically**, especially with new EU rules coming in.
Right now it’s **live and free to test** in a sandbox environment.
I’d love feedback from anyone running **local inference or self-hosted LLMs** \- what kind of compliance or logging would actually be *useful* in that context?
[https://intilium.ai](https://intilium.ai)
Would really appreciate your thoughts on how something like this could integrate into local LLM pipelines (Ollama, LM Studio, custom APIs, etc.). | 2025-10-26T15:12:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ogn3a6/built_a_lightweight_trust_compliance_layer_for_ai/ | Capable-Property-539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogn3a6 | false | null | t3_1ogn3a6 | /r/LocalLLaMA/comments/1ogn3a6/built_a_lightweight_trust_compliance_layer_for_ai/ | false | false | self | 3 | null |
Built a lightweight Trust & Compliance layer for AI. Am curious if it’s useful for local / self-hosted setups | 1 | Hey all 👋
I’ve been building something with a policy expert who works on early drafts of the **EU AI Act** and **ISO 42001**.
Together we made [**Intilium**](https://intilium.ai/) \- a small **Trust & Compliance layer** that sits in front of your AI stack.
It’s basically an **API gateway** that:
* ✅ Enforces model and region policies (e.g. EU-only, provider allow-lists)
* 🔒 Detects and masks PII before requests go out
* 🧾 Keeps a full audit trail of every LLM call
* ⚙️ Works with OpenAI, Anthropic, Google, Mistral — and could extend to **local models** too
The idea is to help teams (or solo builders) **prove compliance automatically**, especially with new EU rules coming in.
Right now it’s **live and free to test** in a sandbox environment.
I’d love feedback from anyone running **local inference or self-hosted LLMs** — what kind of compliance or logging would actually be *useful* in that context?
👉 [https://intilium.ai](https://intilium.ai)
Would really appreciate your thoughts on how something like this could integrate into local LLM pipelines (Ollama, LM Studio, custom APIs, etc.). | 2025-10-26T15:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ogmy4i/built_a_lightweight_trust_compliance_layer_for_ai/ | Capable-Property-539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogmy4i | false | null | t3_1ogmy4i | /r/LocalLLaMA/comments/1ogmy4i/built_a_lightweight_trust_compliance_layer_for_ai/ | false | false | self | 1 | null |
A highly adaptable toolkit to build APIs and agents, with friendly interfaces for streaming and multimodality | 2 | Hi everyone! I've been working for quite a while on a toolkit/framework to build APIs and agents easily, in a way friendly to developers that would not hide complexity behind abstractions, but that would also be in step with modern requirements and capabilities: stateful, async execution, streaming, multimodality, persistence, etc.
I thought this community would be a perfect place to get feedback, and also that the library itself can be genuinely useful here, so feedback is very welcome!
Landing page with a few nice demos: [https://actionengine.dev/](https://actionengine.dev/)
Code examples in Python, TypeScript, C++: [https://github.com/google-deepmind/actionengine/tree/main/examples](https://github.com/google-deepmind/actionengine/tree/main/examples)
To get an overall grasp, check out the stateful ollama chat sessions example: [demo](https://actionengine.dev/gemini?q=ollama), [backend handlers](https://github.com/google-deepmind/actionengine/blob/main/examples/007-python-generative-media/actions/gemini.py), [server](https://github.com/google-deepmind/actionengine/blob/main/examples/007-python-generative-media/server.py), [chat page frontend code](https://github.com/google-deepmind/actionengine/blob/main/web/app/gemini/page.tsx).
# Why another framework?
I don't really like the word, but it's hard to find anything better and still have people understand what the project is about. IMO, the problem of "agentic frameworks" is that they give excessively rigid abstractions. The novel challenge is not to "define" "agents". They are just chains of calls in some distributed context. The actual novel challenge is to build tools and cultivate a common language to express highly dynamic, highly experimental interactions performantly (and safely!) in very different kinds of applications and environments. In other words, the challenge is to acknowledge and enable the diversity of applications and contexts code runs from.
**That means that the framework itself should allow experimentation and adapt to applications, not have applications adapt to it.**
I work at Google DeepMind (hence releasing Action Engine under the org), and the intention for me and co-authors/internal supporters is to validate some shifts we think the agent landscape is experiencing, have a quick-feedback way to navigate that, including checking very non-mainstream approaches. Some examples for me are:
* developers don't seem to really need "loop runner" type frameworks with tight abstractions, but rather a set of thin layers they can combine to:
* relieve "daily", "boring" issues (e.g. serialisation of custom types, chaining tasks),
* have consistent, similar ways to store and transmit state and express agentic behaviour across backend peers, browser clients, model servers etc. (maybe edge devices even),
* "productionise": serve, scale, authorise, discover,
* it is important to design such tools and frameworks at the full stack to enable builders of all types of apps: web/native, client orchestration or a worker group in a cluster, etc.,
* data representation, storage and transport matter much more than the runtime/execution context.
I'm strongly convinced that such a framework should be absolutely flexible to runtimes, and should accommodate different "wire" protocols and different storage backends to be useful for the general public. Therefore interactions with those layers are extensible:
* for "wire" connections, there are websockets and WebRTC (and Stubby internally at Google), and this can be extended,
* for "store", there is an in-memory implementation and one over Redis streams (also can be extended!)
# What the library is, exactly
Action Engine is built as a kit of optional components, for different needs of different applications. IMO that makes it stand out from other frameworks: they lock you in the whole set of abstractions, which you might not need.
The core concepts are *action* and *async node*. "Action" is simple: it's just executable code with a name and i/o schema assigned, and some well-defined behaviour to prepare and clean up. Async node is a logical "stream" of data: a channel-like interface that one party (or parties!) can write into, and another can read with a "block with timeout" semantics.
These core concepts are easy to understand. Unlike with loaded terms like "agent", "context" or "graph executor", you won't make any huge mistake thinking about *actions* as about *functions*, and about *async nodes* as about *channels* or *queues* that go as inputs and outputs to those functions.
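If it helps, here is a plain-Python analogy of those two concepts (deliberately not the Action Engine API: an asyncio queue stands in for an async node and a named coroutine stands in for an action):

```
# Plain-Python analogy of "action" and "async node" (not the Action Engine API).
import asyncio

async def read_with_timeout(node: asyncio.Queue, timeout: float = 1.0):
    """'Async node' read: block until data arrives or the timeout expires."""
    try:
        return await asyncio.wait_for(node.get(), timeout)
    except asyncio.TimeoutError:
        return None

async def uppercase_action(inputs: asyncio.Queue, outputs: asyncio.Queue) -> None:
    """'Action': named code with known inputs/outputs, reading one stream and feeding another."""
    while (item := await read_with_timeout(inputs)) is not None:
        await outputs.put(item.upper())

async def main() -> None:
    inputs, outputs = asyncio.Queue(), asyncio.Queue()
    for word in ("hello", "world"):
        await inputs.put(word)
    await uppercase_action(inputs, outputs)  # the real library would run this inside a session
    while not outputs.empty():
        print(await outputs.get())

asyncio.run(main())
```

The real library layers the naming, i/o schemas, serialisation and the wire/store backends on top, but functions plus channels is the right mental model.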
The rest of the library simply cares about building context to run or call actions, and lets you do that yourself—there are implementations:
* for particular-backend *wire streams*,
* for *sessions* that share a data context between action runs,
* for *services* that hold multiple sessions and route wire connections into them,
* for *servers* that listen to connections / do access control / etc.
...but it's not a package offering. No layer is obligatory, and in your particular project, you may end up having a nicer integration and less complexity than if you used ADK, for example.
**Flexibility to integrate any use case, model or API, and flexibility to run in different infrastructure are first-class concerns here, and so is avoiding large cognitive footprint.**
Anyway, I'd be grateful for feedback! Have a look, try it out—the project is WIP and the level of documentation is definitely less than needed, but I'll be happy to answer any questions! | 2025-10-26T14:31:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ogm384/a_highly_adaptable_toolkit_to_build_apis_and/ | apnkv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogm384 | false | null | t3_1ogm384 | /r/LocalLLaMA/comments/1ogm384/a_highly_adaptable_toolkit_to_build_apis_and/ | false | false | self | 2 | null |
Tool Calling with TabbyAPI and Exllamav3 | 4 | Did anybody get this to work? I attempted to use exllamav3 with Qwen Code; the model loads but tool calls do not work. I'm surely doing something wrong. I use the chat template specified by Unsloth for tool calling. I don't know what I'm doing wrong, but certainly something is wrong. Help would be appreciated. | 2025-10-26T13:51:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ogl5hl/tool_calling_with_tabbyapi_and_exllamav3/ | Flashy_Management962 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogl5hl | false | null | t3_1ogl5hl | /r/LocalLLaMA/comments/1ogl5hl/tool_calling_with_tabbyapi_and_exllamav3/ | false | false | self | 4 | null
🚀 Sleepless Agent — Turn Your Unused Claude Credits into an Autonomous AgentOS | 0 | Ever looked at your Claude credits and thought… *“man, I’m not even using half of these”?*
What if you could turn that unused compute into something **that works while you sleep**?
That’s what [**Sleepless Agent**](https://github.com/context-machine-lab/sleepless-agent) is about —
an **AgentOS built on Claude Code**, designed to capture your random thoughts, half-baked project ideas, or TODOs — and then let your AI finish them overnight.
# 🌙 How It Works
You just drop an idea like:
>
and go to sleep.
By morning, your agent has:
* brainstormed the concept
* written the README
* drafted the slides
* maybe even pushed an initial repo update
All powered by **Claude Agent SDK**, so it inherits every dev feature:
file access, function tools, structured agents, interactive execution — but now fully automated through an **AgentOS daemon** that runs your tasks.
# 💡 Example Use Cases
* 💬 Capture your stray ideas anytime — your agent will pick them up later.
* 📊 Want a PPT from your notes? Just drop a one-line prompt.
* 🔎 Want to crawl Xiaohongshu for specific posts (like all "相亲" / blind-date threads)? Add the Xiaohongshu MCP — your agent will find them while you sleep.
* ⚙️ Plug in any Claude Code-compatible toolchain. It just works.
# 🧠 Why “Sleepless”
Because your **agent never sleeps** — it turns late-night creativity into next-morning results.
It’s like having a background AI cofounder who actually works on your ideas while you rest.
# 🔗 Check it out
👉 [GitHub – context-machine-lab/sleepless-agent](https://github.com/context-machine-lab/sleepless-agent) | 2025-10-26T13:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ogl3yp/sleepless_agent_turn_your_unused_claude_credits/ | TimeLover935 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogl3yp | false | null | t3_1ogl3yp | /r/LocalLLaMA/comments/1ogl3yp/sleepless_agent_turn_your_unused_claude_credits/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '2FRCu2CArXZwRYiKx5Eo4r_5lMuwpDwZ_7Tp8zKmb-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2FRCu2CArXZwRYiKx5Eo4r_5lMuwpDwZ_7Tp8zKmb-Y.png?width=108&crop=smart&auto=webp&s=87c9910c7357d283025922570df4c802f60cf270', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2FRCu2CArXZwRYiKx5Eo4r_5lMuwpDwZ_7Tp8zKmb-Y.png?width=216&crop=smart&auto=webp&s=2dfe7b0324e5778dc5c7d294c6c8997b1ce3a10c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2FRCu2CArXZwRYiKx5Eo4r_5lMuwpDwZ_7Tp8zKmb-Y.png?width=320&crop=smart&auto=webp&s=e3bcb1583db22b70329549d8565133951f4eaf4b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2FRCu2CArXZwRYiKx5Eo4r_5lMuwpDwZ_7Tp8zKmb-Y.png?width=640&crop=smart&auto=webp&s=9f62ba677d5b04244722d5bd893855524c4c9ff4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2FRCu2CArXZwRYiKx5Eo4r_5lMuwpDwZ_7Tp8zKmb-Y.png?width=960&crop=smart&auto=webp&s=94ed19d3f3bf98255a6a73b04019ef216d76cac9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2FRCu2CArXZwRYiKx5Eo4r_5lMuwpDwZ_7Tp8zKmb-Y.png?width=1080&crop=smart&auto=webp&s=00a50c154cc0028d3f459c184b07b5d63710a6f5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2FRCu2CArXZwRYiKx5Eo4r_5lMuwpDwZ_7Tp8zKmb-Y.png?auto=webp&s=9b620511009bdd4aad7c4b21e62090267507b554', 'width': 1200}, 'variants': {}}]} |
What AI voice / TTS model is used in these YouTube videos? | 0 | Hey everyone, I came across these two YouTube videos and was wondering if anyone recognizes the AI voice or text-to-speech model being used in them:
* [https://www.youtube.com/watch?v=yXbda83VERk](https://www.youtube.com/watch?v=yXbda83VERk)
* [https://www.youtube.com/watch?v=o-L-8AYMD2w](https://www.youtube.com/watch?v=o-L-8AYMD2w)
Thanks in advance! | 2025-10-26T13:25:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ogkjzs/what_ai_voice_tts_model_is_used_in_these_youtube/ | Evening-Wolverine997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogkjzs | false | null | t3_1ogkjzs | /r/LocalLLaMA/comments/1ogkjzs/what_ai_voice_tts_model_is_used_in_these_youtube/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'V7LXqxTx39jZW0XYaOOs7VanYshWujVQ2_zcqCFfdik', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/V7LXqxTx39jZW0XYaOOs7VanYshWujVQ2_zcqCFfdik.jpeg?width=108&crop=smart&auto=webp&s=941c0d7456c8e76281f995ddf1372875511db362', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/V7LXqxTx39jZW0XYaOOs7VanYshWujVQ2_zcqCFfdik.jpeg?width=216&crop=smart&auto=webp&s=72237e9ae8c85b52130c2cf9dc7ee9bfd13f12eb', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/V7LXqxTx39jZW0XYaOOs7VanYshWujVQ2_zcqCFfdik.jpeg?width=320&crop=smart&auto=webp&s=a96fc8159567b7f460e0d21dc30bf899326ca245', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/V7LXqxTx39jZW0XYaOOs7VanYshWujVQ2_zcqCFfdik.jpeg?auto=webp&s=6a4a5e6ac991d952eb0c73e4fe196498d14574bd', 'width': 480}, 'variants': {}}]} |
Gowall - OCR with LLM's ,Traditional, Hybrid, AI Image upscaling (Swiss Army knife for image processing) | 2 | Greetings fellas,
Gowall is a Swiss Army knife for image processing, but since this is an AI place I'm going to focus on 2-3 specific features of the whole catalog that gowall provides.
GitHub link: https://github.com/Achno/gowall
Docs (visual examples, tips, using gowall with scripts): https://achno.github.io/gowall-docs/
**OCR** -> You can OCR PDFs and images with `VLMs`, `classic OCR methods` (Tesseract, EasyOCR), or `hybrid methods` (Tesseract + LLMs, for example)
- Gowall connects to tons of providers: Docling, vLLM, Ollama, any OpenAI-compatible endpoint and all the popular LLM cloud providers ... you get the point. Both local and cloud provider options.
- Schemas: instead of passing a gazillion flags like the model, provider, rate limits, pre-processing, post-processing ... everything is defined inside schema.yml and you simply reference the schema name
```
- name: "op-qwen"
config:
ocr:
provider: "openrouter"
model: "qwen/qwen2.5-vl-72b-instruct:free"
rate_limit:
rps: 4
burst: 4
then you simply do :
gowall ocr test.png -s op-qwen
```
- Take a look at the docs to see what hybrid methods like a Tesseract + LLM combo look like.
- Gowall, being a CLI, integrates with your favorite screenshot utility (I like Flameshot since I'm on Linux). This is very useful since I annotate an image to hide text content I don't want to OCR.
**AI image upscaling**
The feature is self-explanatory. The good thing is that it only needs Vulkan, so basically anyone can run it.
**Closing Words**
I'm also looking to integrate `onnx` so I can level up the image background extraction and get it to a state like rembg's.
The **OCR feature is in Alpha**, I just wanted to release it to get some feedback, improvements etc...
I hope you give it a spin and checkout its other image processing features if you are interested. The documentation is absolutely heavenly, i put a lot of effort to it and its 100% manually typed by me, every single word. As much as i like AI, i dislike AI slop especially when it comes to docs. | 2025-10-26T13:08:33 | FormationHeaven | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogk6u3 | false | null | t3_1ogk6u3 | /r/LocalLLaMA/comments/1ogk6u3/gowall_ocr_with_llms_traditional_hybrid_ai_image/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'msdwmP7wvBTZo0B6BeSVoN-lZkmbdDnPS1nGtLaiC2M', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/ehur4h0lggxf1.png?width=108&crop=smart&auto=webp&s=0d34dc3e33d809ca46dde495413c4a41f4b0889a', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/ehur4h0lggxf1.png?width=216&crop=smart&auto=webp&s=62c93bdab1d872dc4c2267f5fcb103d71b7b2b81', 'width': 216}, {'height': 169, 'url': 'https://preview.redd.it/ehur4h0lggxf1.png?width=320&crop=smart&auto=webp&s=3eee3eab365b7f20877288ebb0ff5936353fb95b', 'width': 320}, {'height': 339, 'url': 'https://preview.redd.it/ehur4h0lggxf1.png?width=640&crop=smart&auto=webp&s=504b5d1ecbfc3e1c83238e5d1f3b5f8eca36c315', 'width': 640}, {'height': 509, 'url': 'https://preview.redd.it/ehur4h0lggxf1.png?width=960&crop=smart&auto=webp&s=708b1b21e7dd39cee74918e36a5b9ba4dce7c4ab', 'width': 960}, {'height': 572, 'url': 'https://preview.redd.it/ehur4h0lggxf1.png?width=1080&crop=smart&auto=webp&s=ec973d3a8491a49a2d3b0d2774f34ab2a2c378a5', 'width': 1080}], 'source': {'height': 1551, 'url': 'https://preview.redd.it/ehur4h0lggxf1.png?auto=webp&s=439cbefc11a8ea49fb7053588df267187ffe2dc8', 'width': 2925}, 'variants': {}}]} | ||
Gowall : OCR with LLMs ,Traditional, Hybrid, AI Image upscaling (Swiss Army knife for image processing) | 1 | [removed] | 2025-10-26T13:00:36 | FormationHeaven | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogk0td | false | null | t3_1ogk0td | /r/LocalLLaMA/comments/1ogk0td/gowall_ocr_with_llms_traditional_hybrid_ai_image/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'G8CjbwuQCp4QyWUEysmiw2DNzOFVp06eMY5U-S3KD7M', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/x9ffo0e3fgxf1.png?width=108&crop=smart&auto=webp&s=92c4f2cc8b4eadc47c6f84017729c497375e8e7d', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/x9ffo0e3fgxf1.png?width=216&crop=smart&auto=webp&s=88f4be5112ddfd537c19869340f9ec900a553e85', 'width': 216}, {'height': 169, 'url': 'https://preview.redd.it/x9ffo0e3fgxf1.png?width=320&crop=smart&auto=webp&s=24f6f92bc22da6e12a6b4f24b8b7d9c719a767ed', 'width': 320}, {'height': 339, 'url': 'https://preview.redd.it/x9ffo0e3fgxf1.png?width=640&crop=smart&auto=webp&s=5ae3afbf86d7c9e41c5f270d235741ad3217b8f8', 'width': 640}, {'height': 509, 'url': 'https://preview.redd.it/x9ffo0e3fgxf1.png?width=960&crop=smart&auto=webp&s=41ae4e29f980f249373bb4acfedf1c8ade107f3d', 'width': 960}, {'height': 572, 'url': 'https://preview.redd.it/x9ffo0e3fgxf1.png?width=1080&crop=smart&auto=webp&s=4295c3e28c4439244329bf626a4819b0f017c664', 'width': 1080}], 'source': {'height': 1551, 'url': 'https://preview.redd.it/x9ffo0e3fgxf1.png?auto=webp&s=93fc50528f59e0eb851054bc0fdac46c1be54b7a', 'width': 2925}, 'variants': {}}]} | ||
All thinking MOE models behave like this? "wait, but, wait, but, also, wait, but". In some ways checking thinking enjoyable and fun. Is this an army of experts arguing with each other? I give corrections that now 2025 and the 5060 ti exists, it's the same as the 4060ti but 20-30% more powerful. | 0 | "Wait, but in their message, they wrote "5060ti" with lowercase 't' and no space? Maybe it's a typo for "4060ti". So the correct term is RTX 4060 Ti, not 5060. Let me clarify that."
"Also, they mentioned "16 gb" but the 4060 Ti typically has 8GB or 12GB of VRAM. Wait, maybe they're referring to a different variant with more memory? But standard 4060 Ti is 8GB (or sometimes 12GB), so maybe their GPU has 16GB? But that's not typical for 4060 Ti."
"Wait, no: the RTX 4060 Ti usually comes in versions with 8GB or 12GB VRAM. The 16GB would be a different model like the 4070 Ti (which has 16GB), but not 4060 Ti. So maybe they're confusing it with another GPU.
"But regardless, I should clarify that "5060ti" is incorrect and likely a typo for "4060ti". So in my response, I need to correct that by stating the actual GPU model they have is RTX 4060 Ti (or similar), not 5060."
The temperature setting is untouched, and so is the number of experts; I only adjust settings to improve performance. After thinking about the 5060, it started re-checking all the other questions and finished by repeating the message from the third screenshot; it took 43 minutes and it didn't give me the answer. For other questions it sometimes just keeps thinking and also doesn't answer.
| 2025-10-26T12:59:44 | https://www.reddit.com/gallery/1ogk05b | R_dva | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ogk05b | false | null | t3_1ogk05b | /r/LocalLLaMA/comments/1ogk05b/all_thinking_moe_models_behave_like_this_wait_but/ | false | false | 0 | null | |
Qwen offers similar UI to openai - free, has android app | 0 | [https://chat.qwen.ai/](https://chat.qwen.ai/) \- free qwen3 max | 2025-10-26T12:56:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ogjxp5/qwen_offers_similar_ui_to_openai_free_has_android/ | cranberrie_sauce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogjxp5 | false | null | t3_1ogjxp5 | /r/LocalLLaMA/comments/1ogjxp5/qwen_offers_similar_ui_to_openai_free_has_android/ | false | false | self | 0 | null |
Gowall v0.2.3 - OCR with VLM's Traditional & Hybrid, AI Image upscaling (Swiss Army knife for image processing) | 1 | [removed] | 2025-10-26T12:56:03 | FormationHeaven | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogjxjs | false | null | t3_1ogjxjs | /r/LocalLLaMA/comments/1ogjxjs/gowall_v023_ocr_with_vlms_traditional_hybrid_ai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'WBYX_pPw0kN2SgsoaEA12SlCfTVu4Vpe_PVPkpA0-ls', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/9nbzysedegxf1.png?width=108&crop=smart&auto=webp&s=bd24e3e1f3490b58938e52878287f122864ca968', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/9nbzysedegxf1.png?width=216&crop=smart&auto=webp&s=cf2139ec9235ace5e5fe51d62139ead113d26bce', 'width': 216}, {'height': 169, 'url': 'https://preview.redd.it/9nbzysedegxf1.png?width=320&crop=smart&auto=webp&s=f2ceeaaeab4cbbe1b33e235b21328586406c679b', 'width': 320}, {'height': 339, 'url': 'https://preview.redd.it/9nbzysedegxf1.png?width=640&crop=smart&auto=webp&s=33629e344001208084b8736fc3ad27b838e35d90', 'width': 640}, {'height': 509, 'url': 'https://preview.redd.it/9nbzysedegxf1.png?width=960&crop=smart&auto=webp&s=b5bc1d62dfa67ada76e418a6e83651dd46a20ad2', 'width': 960}, {'height': 572, 'url': 'https://preview.redd.it/9nbzysedegxf1.png?width=1080&crop=smart&auto=webp&s=aa00896b088ffe1d16ea6b6e11fbfaef83bdd15e', 'width': 1080}], 'source': {'height': 1551, 'url': 'https://preview.redd.it/9nbzysedegxf1.png?auto=webp&s=a161b4affa1fb06dd70c26ccd779c39b1ce9a9a9', 'width': 2925}, 'variants': {}}]} | ||
Gowall v0.2.3 - OCR with VLM's Traditional & Hybrid, AI Image upscaling (Swiss Army knife for image processing) | 1 | [removed] | 2025-10-26T12:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ogjvh9/gowall_v023_ocr_with_vlms_traditional_hybrid_ai/ | FormationHeaven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogjvh9 | false | null | t3_1ogjvh9 | /r/LocalLLaMA/comments/1ogjvh9/gowall_v023_ocr_with_vlms_traditional_hybrid_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rlcRAALTkA8Mgf4C9DU_92vvW9z7yfOz9Ql-sxUHlM8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rlcRAALTkA8Mgf4C9DU_92vvW9z7yfOz9Ql-sxUHlM8.png?width=108&crop=smart&auto=webp&s=13472a977464762a8fbad43413bae08041aaa00f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rlcRAALTkA8Mgf4C9DU_92vvW9z7yfOz9Ql-sxUHlM8.png?width=216&crop=smart&auto=webp&s=e9c1cb7dce1fb34b3fbbd748f209117e73527577', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rlcRAALTkA8Mgf4C9DU_92vvW9z7yfOz9Ql-sxUHlM8.png?width=320&crop=smart&auto=webp&s=e43b3775edeb89abe6ef57cacc25281fc13417ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rlcRAALTkA8Mgf4C9DU_92vvW9z7yfOz9Ql-sxUHlM8.png?width=640&crop=smart&auto=webp&s=8c87dc331d4303a65223f7a28fa377eb610ab344', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rlcRAALTkA8Mgf4C9DU_92vvW9z7yfOz9Ql-sxUHlM8.png?width=960&crop=smart&auto=webp&s=5b69554ba1487886ebf1d79dbbb5acb499efa89c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rlcRAALTkA8Mgf4C9DU_92vvW9z7yfOz9Ql-sxUHlM8.png?width=1080&crop=smart&auto=webp&s=49589399f47328fff2fcf5465ebf045f559c9c10', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rlcRAALTkA8Mgf4C9DU_92vvW9z7yfOz9Ql-sxUHlM8.png?auto=webp&s=99ba1e58d436a9377734c2c366569117afc24e94', 'width': 1200}, 'variants': {}}]} |
Poor GPU Club : Good Worthy Pruned models? | 38 | Wanted to explore more on this after seeing recent threads ([3](https://www.reddit.com/r/LocalLLaMA/comments/1oefu29/cerebras_reapd_glm46_25_30_40_pruned_fp8/), [2](https://www.reddit.com/r/LocalLLaMA/comments/1obrde8/cerebras_reap_update_pruned_checkpoints_for/), [1](https://www.reddit.com/r/LocalLLaMA/comments/1o98f57/new_from_cerebras_reap_the_experts_why_pruning/)) from Cerebras. They already pruned a few MOE models such as Qwen3-Coder-30B, Qwen3-Coder-480B, GLM-4.5-Air, GLM-4.6. I'm just waiting for a few small MOE models from them; hope they do them sooner or later.
Meanwhile [one other person pruned a few other MOE models](https://www.reddit.com/r/LocalLLaMA/comments/1octe2s/pruned_moe_reap_quants_for_testing/) (Qwen3-30B, Qwen3-30B-Instruct, Qwen3-Coder-30B, GPT-OSS-20B, GPT-OSS-120B) using the same REAP method from Cerebras.
I'll be trying those small pruned models for sure since I have only 8GB VRAM (and 32GB RAM).
I'm sure some of you might have tried a few pruned models before. HuggingFace has 100s of pruned models. Below are links to pruned models with different tags. Of course there must be some more pruned models without the below tags.
[Pruned](https://huggingface.co/models?other=pruned) , [Prune](https://huggingface.co/models?other=prune) , [Pruning](https://huggingface.co/models?other=pruning) , [pruned-model](https://huggingface.co/models?other=pruned-model) , [expert-pruning](https://huggingface.co/models?other=expert-pruning)
1\] Please recommend good, worthy pruned models, particularly small ones under 50B
2\] Cerebras' REAP method is only for MOE models. Has anyone come across anything for Dense models? Recently I posted a thread about Q3/Q2 quants of Dense models since I couldn't run those models with high quants like Q4 & above: [Anyone use Q3/Q2 quants of 20-40B Dense models? How's it?](https://www.reddit.com/r/LocalLLaMA/comments/1o3zz30/poor_gpu_club_anyone_use_q3q2_quants_of_2040b/) Unfortunately I couldn't run even Q3 with bearable t/s.
Currently I'm looking for Pruned models of below ones:
* Seed-OSS-36B-Instruct
* Devstral-Small-2507
* Magistral-Small-2509
* Mistral-Small-3.2-24B-Instruct-2506
* reka-flash-3.1
* Gemma-3-27B-it
* Qwen3-32B
* GLM-4-32B-0414
* And a lot of 20B+ finetunes from sources like TheDrummer, SicariusSicariiStuff, etc.,
It would be great if someone shrink those dense models to 50%(at least 25-35%) so I could use Q4 with decent/bearable t/s with my 8GB VRAM(and 32GB RAM). | 2025-10-26T12:50:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ogjtkn/poor_gpu_club_good_worthy_pruned_models/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogjtkn | false | null | t3_1ogjtkn | /r/LocalLLaMA/comments/1ogjtkn/poor_gpu_club_good_worthy_pruned_models/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=108&crop=smart&auto=webp&s=c58faeb60d6cd1478f77717010b54d2ec5ab95aa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=216&crop=smart&auto=webp&s=ac6e76a4b92cde06bfe8de6386029fe6e13d300a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=320&crop=smart&auto=webp&s=74411f402b7aa23512ee64feee8b30c532f827cb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=640&crop=smart&auto=webp&s=3633345496a9e7fe8ee77d630eed16e17aa9d76c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=960&crop=smart&auto=webp&s=7d1f508758e0820c3ba4c956558fbb03b374d9ae', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?width=1080&crop=smart&auto=webp&s=128cf1f3a3c707f58eeaac2a787b22669c50d896', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/An0iJLapq-5CUQQlm3lWegevVWf7wlANjmn1iOwCTqk.png?auto=webp&s=72d1eb93c8099528b2174d2087be2b488b2e9529', 'width': 1200}, 'variants': {}}]} |
Should I keep my GeForce RTX 5060 Ti? | 2 | Hi everyone,
For the past 9-12 months I have been thinking about getting into local AI + learning CUDA programming. I never expected to run very large models as I am on a very tight budget (\~$600), so I have been postponing it forever. Anyway, I am more interested in the CUDA programming part. My idea is to take it as a hobby and mostly get in touch with the local AI tools and models...
The thing is, if I want to get into this I must have an NVIDIA GPU. I saw a discount for a GeForce RTX 5060 Ti 16 GB and went for it, as it is around my budget. However, I've been wondering if I did OK or not.
My first limitation is that it had to go in my current (old) system. For my job I need a large core count + a large amount of RAM, so currently I have:
* Xeon E5-2698-v4: 20C/40T 2.2 GHZ - 3.5 Ghz
* 192 GB of DDR4 2400 MHz
* x2 PCIe x16 3.0 and x1 PCIe x8 3.0 slots
Therefore, I went for the 5060 Ti with the thought that I could benefit from the RAM and do offloading to it. However, all my components are "slow" compared to state-of-the-art machines, so I don't know if it is a good idea or not.
So far, I haven't had time to test it for AI, but I tested it in gaming and the performance has not been amazing; I guess I am facing a strong CPU bottleneck. Anyway, gaming is not my thing and I don't care about it, it was just an easy benchmark to do.
I also didn't care about the PCIe version, as for gaming it does not appear to matter, but I have read that PCIe bandwidth is much more important for local AI, especially for RAM off-loading. Since the RTX 5060 Ti is only PCIe x8 and my PCIe is 3.0, I am limited to roughly 8 GB/s (PCIe 3.0 is about 0.985 GB/s per lane, so x8 gives about 7.9 GB/s). Will this make everything very slow?
Does anybody know what I can expect from my system? I can handle the system being slow, as I am not in any hurry; this would be only a hobby. Are all my other components too old?
I have been thinking about returning my RTX 5060 Ti (I am also thinking that Black Friday is very close) and going for something older, like 2x RTX 3060 Ti (to have more VRAM). Is this a good idea?
However, I am worried about driver support (for the 3060), going into the future.
For me, there's a lot of money at stake, so I would really appreciate any help.
**TL;DR: Is an RTX 5060 Ti 16GB on PCIe 3.0 + 192 GB DDR4 2400 MHz good for learning local AI, or will it be extremely slow? Would it be better to go for dual RTX 3060 Ti (more VRAM)?** | 2025-10-26T12:42:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ogjnsz/should_i_keep_my_geforce_rtx_5060_ti/ | Accomplished_Ad4103 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogjnsz | false | null | t3_1ogjnsz | /r/LocalLLaMA/comments/1ogjnsz/should_i_keep_my_geforce_rtx_5060_ti/ | false | false | self | 2 | null
Cheaper & faster LLM stack in 2025: Kimi/Qwen vs OpenAI | 23 | [Chamath Palihapitiya](https://preview.redd.it/v0ddm42g9gxf1.png?width=680&format=png&auto=webp&s=7f75f7809cead99b006dc49dc76a53f453f06a8f)
https://preview.redd.it/dmbx1rcl9gxf1.png?width=1196&format=png&auto=webp&s=154810c46f400e52c2ef4cef6c6a44c79fab9fef
The valley is built on open-source models?
On the All-In podcast, Chamath Palihapitiya says his team redirected a ton of workloads to **Kimi K2** because it was “**way more performant**” and “**a ton cheaper**” than OpenAI and Anthropic.
Airbnb CEO Brian Chesky says they’re relying a lot on Alibaba’s **Qwen** in production because it’s “**fast and cheap.**” They still use OpenAI’s latest models, but “typically don’t use them that much in production” due to **faster/cheaper** options. | 2025-10-26T12:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ogjkfj/cheaper_faster_llm_stack_in_2025_kimiqwen_vs/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogjkfj | false | null | t3_1ogjkfj | /r/LocalLLaMA/comments/1ogjkfj/cheaper_faster_llm_stack_in_2025_kimiqwen_vs/ | false | false | 23 | null | |
Is SSM dead now? | 32 | I tried researching it and found that almost all of the news and information is from a year ago. Has it been discontinued? | 2025-10-26T11:39:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ogigyu/is_ssm_dead_now/ | Spapoxl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogigyu | false | null | t3_1ogigyu | /r/LocalLLaMA/comments/1ogigyu/is_ssm_dead_now/ | false | false | self | 32 | null
Using GLM 4.6 to understand it's limitations | 30 | ERROR: type should be string, got "https://preview.redd.it/gq9yiommnfxf1.png?width=1994&format=png&auto=webp&s=04be61d0c1fe988448c06878ea77b577ddd6aee1\n\nThe actual loosing point will start at 30% less than the number in the table. For example, tool calling actually starting to fail randomly at 70k context." | 2025-10-26T10:27:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ogh8ec/using_glm_46_to_understand_its_limitations/ | Vozer_bros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogh8ec | false | null | t3_1ogh8ec | /r/LocalLLaMA/comments/1ogh8ec/using_glm_46_to_understand_its_limitations/ | false | false | 30 | null | |
can anybody tell me that how deepseek 3.1 trading i want to know how i can do this same thing , right now 3.1 as a open source model and only model have a return rate of 50 percent so can u guys help me so i can use this open source model for good use | 0 | 2025-10-26T10:12:08 | Select_Dream634 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oggzgr | false | null | t3_1oggzgr | /r/LocalLLaMA/comments/1oggzgr/can_anybody_tell_me_that_how_deepseek_31_trading/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'qQ6ELRxRtvcDhS8gAJA8x6i-pAUFtmXbGyU7H3pKDDk', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/sxf77eeukfxf1.png?width=108&crop=smart&auto=webp&s=f1f6f3cdd5b0078f2baf4638dbd801f851b2c857', 'width': 108}, {'height': 66, 'url': 'https://preview.redd.it/sxf77eeukfxf1.png?width=216&crop=smart&auto=webp&s=14ab7a38eb01b1a21d020bae95dcc3e9916f76fc', 'width': 216}, {'height': 98, 'url': 'https://preview.redd.it/sxf77eeukfxf1.png?width=320&crop=smart&auto=webp&s=fddfe3b7960b0a88e336331dd3fcacd6dd1c4f24', 'width': 320}, {'height': 197, 'url': 'https://preview.redd.it/sxf77eeukfxf1.png?width=640&crop=smart&auto=webp&s=e80f1f24229abcb68b8a49a92e81e80cb00b9fc4', 'width': 640}], 'source': {'height': 246, 'url': 'https://preview.redd.it/sxf77eeukfxf1.png?auto=webp&s=6df40fa8a274e3448c6e360724fadeb3090828b2', 'width': 797}, 'variants': {}}]} | |||
GraphScout: Intelligent Routing for Local LLM Agent Workflows | 2 | # The Local LLM Orchestration Challenge
When running local models, every token matters. You can't afford to waste inference calls on irrelevant agent sequences. Static routing often over-provisions—calling agents "just in case" because the logic can't adapt to actual query content.
GraphScout provides runtime path discovery for local LLM workflows. It evaluates which agents to call based on actual input, reducing unnecessary inference overhead.
# The Token Waste Problem
Static routing with local models:
```yaml
# Always calls this sequence, regardless of query
workflow: [memory_check, web_search, analysis, synthesis, response]
```
For simple queries, you're paying for memory checks and web searches you don't need. For complex queries, you might need multiple analysis passes that aren't in the sequence.
# Dynamic Path Selection
GraphScout uses your local LLM to evaluate which agent sequence makes sense:
```yaml
- id: smart_router
  type: graph_scout
  config:
    k_beam: 5
    max_depth: 3
    evaluation_model: "local_llm"
    evaluation_model_name: "gpt-oss:20b"
    cost_budget_tokens: 1000
    prompt: "Select optimal path for: {{ input }}"
```
The system discovers available agents, simulates paths, and executes only what's needed.
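Roughly, the selection loop works like the simplified sketch below. This is not OrKa's actual internals; `estimate_tokens` and `score_with_llm` are stand-ins for the cost heuristic and the local-LLM evaluation call.

```python
# Simplified sketch of GraphScout-style path selection (illustrative, not the real OrKa code).
from itertools import permutations

def candidate_paths(agents, max_depth):
    """Yield agent sequences of length 1..max_depth."""
    for depth in range(1, max_depth + 1):
        yield from permutations(agents, depth)

def select_path(agents, estimate_tokens, score_with_llm,
                k_beam=5, max_depth=3, cost_budget_tokens=1000):
    # Drop candidate paths whose estimated token cost exceeds the budget.
    affordable = [p for p in candidate_paths(agents, max_depth)
                  if sum(estimate_tokens(a) for a in p) <= cost_budget_tokens]
    # Keep the k_beam most promising paths, then execute the top-scoring one.
    beam = sorted(affordable, key=score_with_llm, reverse=True)[:k_beam]
    return beam[0] if beam else ()
```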
# Cost Control for Local Models

**Token Budget Management**

* Set maximum tokens per path: `cost_budget_tokens: 1000`
* GraphScout filters candidates that exceed the budget before evaluation

**Latency Constraints**

* Control max execution time: `latency_budget_ms: 2000`
* Important when running quantized models with variable throughput

**Beam Search**

* Configurable exploration depth prevents combinatorial explosion
* `k_beam: 3` with `max_depth: 2` keeps evaluation overhead minimal
# Works with Any Local Provider
Ollama:
```yaml
evaluation_model: "local_llm"
evaluation_model_name: "gpt-oss:20b"
provider: "ollama"
```
LM Studio, llama.cpp, vLLM: Any OpenAI-compatible endpoint
GraphScout uses your local model for path evaluation; no external API calls are required for routing decisions.
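As a rough illustration of what that means in practice, the routing question itself can be answered by a local Ollama instance. This sketch uses Ollama's standard `/api/chat` endpoint; the prompt wording and candidate list are made up for illustration.

```python
# Minimal sketch: ask a local Ollama model to pick a path, so no routing data leaves the machine.
import json
import urllib.request

def rank_paths_locally(question, paths, model="gpt-oss:20b",
                       url="http://localhost:11434/api/chat"):
    prompt = (f"Query: {question}\n"
              f"Candidate agent paths: {paths}\n"
              "Reply with the single best path.")
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}],
               "stream": False}
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```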
# Example: Memory-Aware Local Workflow
```yaml
orchestrator:
  agents: [graph_scout, memory_reader, local_analyzer, memory_writer, response_builder]

agents:
  - id: graph_scout
    type: graph_scout
    config:
      evaluation_model: "local_llm"
      evaluation_model_name: "qwen2.5:7b"
      k_beam: 3
      cost_budget_tokens: 800

  - id: local_analyzer
    type: local_llm
    model: "gpt-oss:20b"
    provider: ollama

  - id: response_builder
    type: local_llm
    model: "qwen2.5:7b"
    provider: ollama
```
GraphScout automatically orders memory operations (readers first, writers last) and only calls the analyzer when needed.
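The ordering rule itself is simple. A hypothetical illustration (not OrKa's actual code): memory readers move to the front, memory writers to the back, and everything else keeps its relative order in between.

```python
# Hypothetical sketch of the "readers first, writers last" ordering rule.
def order_memory_ops(path):
    readers = [a for a in path if a.endswith("memory_reader")]
    writers = [a for a in path if a.endswith("memory_writer")]
    middle = [a for a in path if a not in readers + writers]
    return readers + middle + writers

# order_memory_ops(["local_analyzer", "memory_writer", "memory_reader"])
# -> ["memory_reader", "local_analyzer", "memory_writer"]
```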
# Real Benefit: Adaptive Token Usage
Instead of fixed sequences that waste tokens on unnecessary operations, GraphScout adapts to query complexity:
* Simple query: Skip memory check, direct to response builder
* Factual query: Memory check → web search → response
* Complex query: Memory → multiple analysis passes → synthesis → write back
The routing intelligence runs locally on your own hardware.
# Privacy First
All routing decisions happen locally using your models. No external API calls for path selection. Complete control over execution.
Works with RedisStack for local vector storage or in-memory backends. Entire reasoning workflow stays on your infrastructure.
Part of OrKa-Reasoning v0.9.3+
GitHub: [github.com/marcosomma/orka-reasoning](http://github.com/marcosomma/orka-reasoning)
Apache 2.0 licensed, self-hostable | 2025-10-26T09:22:42 | marcosomma-OrKA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ogg723 | false | null | t3_1ogg723 | /r/LocalLLaMA/comments/1ogg723/graphscout_intelligent_routing_for_local_llm/ | false | false | 2 | {'enabled': True, 'images': [{'id': '60rH69BXQ6YhYaihHuExpw-A0es3y8SRkwLFT7vRFb0', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/akbwnyzbcfxf1.jpeg?width=108&crop=smart&auto=webp&s=5588911ed73b73048bad39cdfa27f40670e0433e', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/akbwnyzbcfxf1.jpeg?width=216&crop=smart&auto=webp&s=ea9211600c751c8bda2a5f6ccd6b7f7c1e760600', 'width': 216}, {'height': 134, 'url': 'https://preview.redd.it/akbwnyzbcfxf1.jpeg?width=320&crop=smart&auto=webp&s=5a2c95c1b45dc175b43eea704e21bbca9e415fa1', 'width': 320}, {'height': 268, 'url': 'https://preview.redd.it/akbwnyzbcfxf1.jpeg?width=640&crop=smart&auto=webp&s=f91c4026cf196174c14077e41cd9593beedb1257', 'width': 640}, {'height': 403, 'url': 'https://preview.redd.it/akbwnyzbcfxf1.jpeg?width=960&crop=smart&auto=webp&s=1a31c29d3aa94ed99a53966cdba375974c359b0a', 'width': 960}], 'source': {'height': 420, 'url': 'https://preview.redd.it/akbwnyzbcfxf1.jpeg?auto=webp&s=f9309d0c00ce9db4a954c0a263f122f8f2602499', 'width': 1000}, 'variants': {}}]} | ||
Why didn't LoRA catch on with LLMs? | 279 | ## Explanation of LoRA for the folks at home
I only know it from the image generation Stable Diffusion world, and I only tried that briefly, so this won't be 100% exact.
Let's say your image generation model is Stable Diffusion 1.5, which came out a few years ago. It can't know the art style of a new artist who came up in the past year; let's say his name is Bobsolete.
What lora creators did is create a small dataset of Bobsolete's art, and use it to train SD 1.5 for like 1-2 days. This outputs a small lora file (the SD 1.5 model is 8GB, a lora is like 20MB). Users can download this lora, and when loading SD 1.5, say "also attach Bobsolete.lora to the model". Now the user is interacting with SD 1.5 that has been augmented with knowledge of Bobsolete. The user can specify "drawn in the style of Bobsolete" and it will work.
Loras are used to add new styles to a model, new unique characters, and so on.
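The reason the files are so small (20MB vs 8GB) is that a lora doesn't ship a new copy of the big weight matrices, just two skinny matrices whose product is the learned change. A toy illustration with made-up sizes (this is the general LoRA idea, not exact SD 1.5 numbers):

```python
# Toy example of why a LoRA adapter is tiny compared to the base model.
import numpy as np

d = 4096          # hidden size of one layer in the base model
r = 8             # LoRA rank
alpha = 16        # scaling factor

W = np.zeros((d, d))              # frozen base weight matrix
A = np.random.randn(r, d) * 0.01  # trained low-rank factor
B = np.random.randn(d, r) * 0.01  # trained low-rank factor

W_adapted = W + (alpha / r) * (B @ A)   # applied when the lora is attached

print("base params:", W.size)           # 16,777,216
print("lora params:", A.size + B.size)  # 65,536 (well under 1% of the layer)
```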
## Back to LLMs
llama.cpp apparently supports loras, but no one seems to use them. I've never ever seen them discussed on this sub in my 2 years of casual browsing, although I see they exist in the search results.
I was wondering why this hasn't caught on. People could add little bodies of knowledge to an already-released model. For example, you take a solid general model like Gemma 3 27B. Someone could release a lora trained on all scifi books, another based on all major movie scripts, etc. You could then "./llama.cpp -m models/gemma3.gguf --lora models/scifi-books-rev6.lora --lora models/movie-scripts.lora" and try to get Gemma 3 to help you write a modern scifi movie script. You could even focus on specific authors, cormac-mccarthy.lora etc.
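From what I can tell, the training side already has tooling in the LLM world too, e.g. Hugging Face PEFT, where the base model stays frozen and only the small low-rank matrices are trained. A rough sketch (model name, target_modules, and the dataset are placeholders, not a recipe I've verified end to end):

```python
# Sketch: training a "scifi-books" LoRA adapter for an LLM with Hugging Face PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-27b-it")  # placeholder base model
tok = AutoTokenizer.from_pretrained("google/gemma-3-27b-it")

config = LoraConfig(
    r=16,                                  # rank of the update matrices
    lora_alpha=32,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # which layers get adapters (model-dependent)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()         # only a fraction of a percent is trainable

# ...fine-tune on your scifi-books dataset, then save just the adapter:
model.save_pretrained("scifi-books-lora")  # adapter weights only, typically tens of MB
```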
A more useful/legal example would be attaching current-events-2025.lora to a model whose cutoff date was December 2024.
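And loading such an adapter locally is already possible, e.g. via llama-cpp-python, assuming the adapter is converted to GGUF first (file names below are just the hypothetical ones from my example above):

```python
# Sketch: attaching a LoRA adapter to a base GGUF model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="models/gemma3.gguf",
    lora_path="models/scifi-books-rev6.gguf",  # adapter applied on top of the base
    n_ctx=8192,
)
out = llm("Write a logline for a modern scifi film.", max_tokens=128)
print(out["choices"][0]["text"])
```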
So why didn't this catch on the way it did in the image world? Is this technology inherently more limited for LLMs? Why does it seem like companies interested in integrating their docs with AI are more focused on RAG than on training a lora on their internal docs?
GLM 4.5 air for coding | 16 | You who use a local glm 4.5 air for coding, can you please share your software setup?
I have had some success with the Unsloth Q4_K_M quant on llama.cpp with opencode. To get tool usage to work I had to use a jinja template from a pull request, and the tool calling still fails occasionally. I tried the Unsloth jinja template from GLM 4.6, but with no success. I also experimented with Claude Code via OpenRouter, with a similar result. I'm considering trying to write my own template and also trying vllm.
Would love to hear how others are using glm 4.5 air. | 2025-10-26T08:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ogfmt4/glm_45_air_for_coding/ | Magnus114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogfmt4 | false | null | t3_1ogfmt4 | /r/LocalLLaMA/comments/1ogfmt4/glm_45_air_for_coding/ | false | false | self | 16 | null |
DemyAgent | 2 | Hi,
Did anyone of you already try the new DemyAgent Model? How did it perform for you?
For a small model it should be very good, according to benchmarks (but again, I fear it's just benchmaxxed).
deepseek ocr | 1 | can i use the new deepseek ocr locally and include it to a flutter project without using any api , what that going to cost me
| 2025-10-26T08:31:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ogfefw/deepseek_ocr/ | iimo_cs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ogfefw | false | null | t3_1ogfefw | /r/LocalLLaMA/comments/1ogfefw/deepseek_ocr/ | false | false | self | 1 | null |
Use Cases for DeepSeek-OCR | 0 | I've been focused on other models for the last couple of months and haven't kept up on the most current releases.
Can someone give me a plain English explanation of the core use cases for DeepSeek-OCR?