**How to disable deep thinking in continue dev with ollama** (score 2 · Any_Pen2269 · 2025-09-10)

Hey everyone!

I am using Ollama with qwen3:4b and Continue.dev in VS Code.

The problem is that it takes a lot of time: it goes into deep-thinking mode by default, and even a simple "hello" took around 2 minutes to respond. How can I disable this?
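Since this is a how-to question about Qwen3's thinking mode, a hedged sketch of the two things readers commonly try may be useful here: Qwen3's `/no_think` soft switch inside the prompt, and the explicit `think` flag that newer Ollama builds accept on the API. Both are assumptions to verify against your Ollama version, not a confirmed fix for Continue.dev specifically.

```python
import requests  # assumes a local Ollama server on the default port

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen3:4b",
        # Qwen3 honours a soft switch embedded in the prompt:
        "prompt": "/no_think hello",
        # newer Ollama builds also expose an explicit toggle:
        "think": False,
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```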
**Qwen3-VL soon?** (score 60 · jacek2023 · 2025-09-10)

Link: https://github.com/huggingface/transformers/pull/40795
**Seedream 4.0 is better than Google Nano Banana. It's a shame ByteDance, a Chinese company, is acting like a big American capitalist corporation and making its API so restrictive. Let's see what Hunyuan Image 2.1 has to offer.** (score 0 · balianone · 2025-09-10)
**Qwen 3 Cline alternative** (score 1 · Majestic_Republic147 · 2025-09-10)

I'm using Qwen 3 within Cline. Qwen 3 is really good, but Cline is so bad that I can't make use of Qwen 3. Does anyone know of a Cline alternative?
**I love Gemini 2.5 Pro** (score 40 · beeenbeeen · 2025-09-10)

Link: https://www.reddit.com/gallery/1ndfd0o
**I built a fully automated LLM tournament system (62 models tested, 18 qualified, 50 tournaments run)** (score 64 · WouterGlorieux · 2025-09-10)

I've been working on a project called Valyrian Games: a fully automated system where Large Language Models compete against each other in coding challenges. After running 50 tournaments, I've published the first results here:
👉 Leaderboard: [https://valyriantech.github.io/ValyrianGamesLeaderboard](https://valyriantech.github.io/ValyrianGamesLeaderboard)
👉 Challenge data repo: [https://github.com/ValyrianTech/ValyrianGamesCodingChallenge](https://github.com/ValyrianTech/ValyrianGamesCodingChallenge)
How it works:
Phase 1 doubles as qualification: each model must create its own coding challenge, then solve it multiple times to prove it’s fair. To do this, the LLM has access to an MCP server to execute Python code. The coding challenge can be anything, as long as the final answer is a single integer value (for easy verification).
Only models that pass this step qualify for tournaments.
Phase 2 is the tournament: qualified models solve each other’s challenges head-to-head. Results are scored (+1 correct, -1 wrong, +1 bonus for solving another's challenge, extra penalties if you fail your own challenge).
Ratings use Microsoft’s TrueSkill system, which accounts for uncertainty.
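For readers unfamiliar with TrueSkill, a minimal sketch of how one head-to-head result updates both ratings, using the `trueskill` Python package (model names and the outcome are made up for illustration):

```python
import trueskill  # pip install trueskill

env = trueskill.TrueSkill(draw_probability=0.0)
ratings = {"gpt-5-mini": env.create_rating(), "qwen3-coder": env.create_rating()}

# say gpt-5-mini scored higher in this pairing (+1 solve, +1 bonus)
# while qwen3-coder answered wrong (-1): treat it as a 1-vs-1 win
win, lose = env.rate_1vs1(ratings["gpt-5-mini"], ratings["qwen3-coder"])
ratings["gpt-5-mini"], ratings["qwen3-coder"] = win, lose

r = ratings["gpt-5-mini"]
print(f"skill ~ {r.mu:.1f}, uncertainty ~ {r.sigma:.1f}")  # sigma shrinks as games accumulate
```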
Some results so far:
* I've tested 62 models, but only 18 qualified.
* GPT-5-mini is currently #1, but the full GPT-5 actually failed qualification.
* Some reasoning-optimized models literally "overthink" until they time out.
* Performance is multi-dimensional: correctness, speed, and cost all vary wildly.
Why I built this:
This started as a testbed for workflows in my own project SERENDIPITY, which is built on a framework I also developed: [https://github.com/ValyrianTech/ValyrianSpellbook](https://github.com/ValyrianTech/ValyrianSpellbook) . I wanted a benchmark that was open, automated, and dynamic, not just static test sets.
Reality check:
The whole system runs 100% automatically, but it’s expensive. API calls are costing me about $50/day, which is why I’ve paused after 50 tournaments. I’d love to keep it running continuously, but as a solo developer with no funding, that’s not sustainable. Right now, the only support I have is a referral link to RunPod (GPU hosting).
I’m sharing this because:
* I think the results are interesting and worth discussing (especially which models failed qualification).
* I'd love feedback from this community. Does this kind of benchmarking seem useful to you?
* If there's interest, maybe we can find ways to keep this running long-term.
For those who want to follow me: [https://linktr.ee/ValyrianTech](https://linktr.ee/ValyrianTech)
**What are your experiences with small VL models for local tasks?** (score 4 · AnotherSoftEng · 2025-09-10)

I'm curious what models people are using, and for what tasks. I've found a lot of success with Qwen2.5-VL 3B and 8B variants. It's crazy how accurate these models are for their size.
**So apparently half of us are "AI providers" now (EU AI Act edition)** (score 394 · Thecomplianceexpert · 2025-09-10)

Heads up, fellow tinkerers!

The EU AI Act's first real deadline kicked in on August 2nd, so if you're messing around with models that hit 10^23 FLOPs or more (think Llama-2 13B territory), regulators now officially care about you.
Couple things I’ve learned digging through this:
* The FLOP cutoff is surprisingly low. It's not "GPT-5 on a supercomputer" level, but it's way beyond what you'd get fine-tuning Llama on your 3090 (see the back-of-envelope estimate after this list).
* “Provider” doesn’t just mean Meta, OpenAI, etc. If you fine-tune or significantly modify a big model, you need to watch out. Even if it’s just a hobby, you can still be classified as a provider.
* Compliance isn’t impossible. Basically:
* Keep decent notes (training setup, evals, data sources).
* Have some kind of “data summary” you can share if asked.
* Don’t be sketchy about copyright.
* Deadline check:
* New models released after Aug 2025 - rules apply now!
* Models that existed before Aug 2025 - you’ve got until 2027.
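To see why that cutoff catches Llama-2-13B-class models, here is a back-of-envelope check using the standard 6·N·D approximation for training compute (the ~2T-token figure is Llama 2's reported training size; treat the result as an order-of-magnitude estimate):

```python
params = 13e9            # Llama-2 13B
tokens = 2e12            # ~2T training tokens reported for Llama 2
flops = 6 * params * tokens   # standard 6*N*D training-compute estimate
print(f"{flops:.2e}")    # ~1.56e+23 -> already above the 1e23 threshold
```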
EU basically said: “Congrats, you’re responsible now.” 🫠
TL;DR: If you’re just running models locally for fun, you’re probably fine. If you’re fine-tuning big models and publishing them, you might already be considered a “provider” under the law.
Honestly, feels wild that a random tinkerer could suddenly have reporting duties, but here we are.
**New to local LLMs for RAG, need a sanity check on my setup, performance, and feasibility** (score 3 · DefaultJudgmentDay · 2025-09-10)

I have recently discovered AnythingLLM and LM Studio and would like to use these tools to efficiently process large document productions for legal work, so that I can ultimately query the productions with natural-language questions against an LLM running in LM Studio. I have been testing different models with sample document sets and have had varying results.
I guess my threshold question is whether anyone has had success doing this or whether I should look into a different solution. I suspect part of my issue is that I'm doing this testing on my work laptop that does not have a dedicated GPU and runs on an Intel Core Ultra 9 185H (2.30 GHz) with 64 GB RAM.
I have been testing with a bunch of different models. I started with gpt-oss 20B, with a context length of 16,384, GPU Offload set to 0, number of experts set to 4, CPU thread pool size at 8, LLM temp set to 0.2, reasoning set to high, top P sampling set to 0.8, top K at 40. In LM Studio I am getting around 10 TPS but the time to spit out simple answers was really high. In AnythingLLM, in a workspace with only PDFs at a vector count of 1090, accuracy optimized, context snippets at 8, and doc similarity threshold set to low, it crawls down to **0.07 TPS**.
I also tested Qwen3-30b-a3b-2507, with a context length of 10,000, GPU Offload set to 0, number of experts set to 6, CPU thread pool size at 6, LLM temp set to 0.2. With this setup I'm able to get around 8-10 TPS in LM Studio, but in AnythingLLM (same workspace as above), it crawls down to **0.23 TPS**.
Because of the crazy slow TPS in AnythingLLM I tried running Unsloth's Qwen3-0.6b-Q8-GGUF, with a context length of 16,384, GPU Offload set to 0, CPU thread pool size at 6, top K at 40. In LM Studio TPS bumped way up to 46 TPS, as expected with a smaller model. In AnythingLLM, in the same workspace with the same settings, the smaller model was at **6.73 TPS**.
I'm not sure why I'm getting such a drop-off in TPS in AnythingLLM.
Not sure if this matters for TPS, but for the RAG embedding in AnythingLLM, I'm using the default LanceDB vector database and the nomic-embed-text-v1 model for the AnythingLLM embedder, with a 16,000 chunk size and a 400 chunk overlap.
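As an aside for anyone who wants to sanity-check the embedding side in isolation from AnythingLLM, a minimal sketch with sentence-transformers (nomic-embed expects task prefixes; the document text is illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1",
                            trust_remote_code=True)

# nomic-embed wants task prefixes on documents and queries
docs = ["search_document: Draft agreement between Acme and Beta, version 1..."]
query = ["search_query: earliest draft of the agreement"]

doc_emb = model.encode(docs)
q_emb = model.encode(query)
print(model.similarity(q_emb, doc_emb))  # cosine similarity matrix
```

As a hunch only: a 16,000-character chunk is very large for retrieval, so weak similarity scores in a test like this would point at chunking rather than at the LLM.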
Ultimately, the goal is to use a local LLM (to protect confidential information) to query gigabytes of documents. In litigation we deal with document productions with thousands of PDFs, emails, attachments, DWG/SolidWorks files, and a mix of other file types. Sample queries would be something like "Show me the earliest draft of the agreement", "Find all emails discussing Project X", or "Identify every document that has the attached image." I don't know if we're there yet, but it would be awesome if the embedder could also understand images and charts.
I have resources to build out a machine that can be dedicated to the solution but I'm not sure if what I need is in the $5K range or $15K range. Before I even go there, I need to determine if what I want to do is even feasible, usable, and ultimately accurate.
**Is it ever a good idea to run inference on CPU and DDR5?** (score 4 · Own-Potential-2308 · 2025-09-10)

Will the first token take forever (not accounting for loading the model into RAM)? Let's say it's Qwen3-Next-80B-A3B; that's roughly 40-45 GB of RAM at Q4.

Will I be getting at least 5 t/s?

What kind of CPU would I need? It doesn't scale much with CPU quality, right?
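Token generation on CPU is mostly memory-bandwidth-bound, so a back-of-envelope estimate answers most of this (assumptions: dual-channel DDR5-5600 and ~3B active parameters per token for an A3B MoE at Q4):

```python
bandwidth = 2 * 8 * 5600e6             # dual-channel DDR5-5600 ~ 89.6 GB/s
active_params = 3e9                    # A3B MoE: ~3B params active per token
bytes_per_token = active_params * 0.5  # Q4 ~ 4 bits/weight -> ~1.5 GB read
print(f"{bandwidth / bytes_per_token:.0f} t/s theoretical ceiling")  # ~60
# real decode speed is usually a fraction of this ceiling, but 5 t/s
# looks plausible for a 3B-active MoE; prompt processing, not generation,
# is what really suffers on CPU, and faster CPUs mostly help there
```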
**Don't Waste $20–$30/month on AI** (score 0 · UnReasonable_why · 2025-09-10)

For anyone using AI for essays, study notes, or tutoring — don't waste money on ChatGPT Plus or similar subscriptions. $20–$30/month is ridiculous.
I found a site called [prosperspot.com](http://prosperspot.com)
that’s only $5/month for students, and it actually comes with a bunch of preconfigured AIs for different subjects (math, coding, history, essay feedback, etc.). Way more useful than just one generic AI.
And I figured at 5 bucks it's at least worth a try.
Save your money for food, rent, or beer.
**Memory models for local LLMs** (score 12 · marmotter · 2025-09-10)

I've been struggling with adding persistent memory to the poor man's SillyTavern I am vibe coding. This project is just for fun and to learn. I have a 5090. I have attempted my own simple RAG solution with a local embedding model and ChromaDB, and I have tried to implement Graphiti + FalkorDB as a more advanced version of my simple RAG solution (to help manage entity relationships across time). I run Graphiti in the 'hot' path for my implementation.
When trying to use Graphiti, the problem I run into is that the local LLMs I use can't seem to handle the multiple LLM calls that services like Graphiti need for summarization, entity extraction and updates. I keep getting errors and malformed memories because the LLM gets confused in structuring the JSON correctly across all the calls that occur for each conversational turn, even if I use the structured formatting option within LMStudio. I've spent hours trying to tweak prompts to mitigate these problems without much success.
I suspect that the kind of models I can run on a 5090 are just not smart enough to handle this, and that these memory frameworks (Graphiti, Letta, etc.) require frontier models to run effectively. Is that true? Has anyone been successful in implementing these services locally on LLMs of 24B or less? The LLMs I am using are more geared to conversation than coding, and that might also be a source of problems.
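One thing that sometimes helps smaller models with extraction-style calls is forcing the output shape with a JSON schema instead of relying on prompt instructions. This is a generic structured-output sketch against LM Studio's OpenAI-compatible endpoint, not Graphiti's own client; the schema and field names are made up for illustration:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

schema = {
    "name": "entity_extraction",
    "schema": {
        "type": "object",
        "properties": {
            "entities": {"type": "array", "items": {"type": "string"}},
            "summary": {"type": "string"},
        },
        "required": ["entities", "summary"],
    },
}

resp = client.chat.completions.create(
    model="local-model",  # LM Studio routes this to the loaded model
    messages=[{"role": "user", "content": "Extract entities: Alice met Bob in Paris."}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(resp.choices[0].message.content)  # should parse as the schema above
```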
**What is the current state of local AI for gaming?** (score 6 · Rique_Belt · 2025-09-10)

I am studying how to better use local LLMs, and one of the uses I am very excited about is using them as a gaming partner in a cooperative game.
One example I've heard about is the VTuber Neuro-sama; I don't watch the stream, so I don't know to what extent Vedal uses his AI. Let's say my end goal is playing a dynamic game like Left 4 Dead. I know an LLM can't achieve such a thing (as far as I am aware), so I'm aiming for Civilization V, a turn-based game. I don't need them to be good; I just want to ask things like "Why did you make that move?" or "Let's aim for a military victory, so focus on modern tank production."
So my question is: are there local AIs that can play games (e.g. FPS, non-turn-based, cooperative) with the same complexity as LLMs, and that can run on end-user hardware?
**LLM in finance** (score 0 · Lazy-Succotash-3965 · 2025-09-10)

Can anyone recommend an excellent LLM for financial instruments?
**VibeVoice API** (score 3 · Danny_Davitoe · 2025-09-10)

Has anyone successfully hosted VibeVoice locally with API functionality? The git repo (before being edited) mentioned a Docker container for the model and Gradio to handle the model's inputs and outputs.
I believe the documentation implied Gradio was hosting the API connection to the model, but I'd prefer not to have Gradio.

I want to host the model such that my OpenWebUI can read responses, but I am running into this one issue. Has anyone been able to work around Gradio for VibeVoice?
**I pre-trained GPT-OSS entirely from scratch** (score 222 · OtherRaisin3426 · 2025-09-10)

Link: https://www.youtube.com/watch?v=hBUsySdcA3I ("Let us build GPT-OSS from Scratch", Vizuara)
**New to Local LLMs - what hardware traps to avoid?** (score 30 · False-Disk-1329 · 2025-09-10)

Hi,

I have around a USD $7K budget; I was previously very confident about putting together a PC (or buying a new or used pre-built privately).
Browsing this sub, I've seen all manner of considerations I wouldn't have accounted for: timing/power and test stability, for example. I felt I had done my research, but I acknowledge I'll probably miss some nuances and make less optimal purchase decisions.
I'm looking to do integrated machine learning and LLM "fun" hobby work - could I get some guidance on common pitfalls? Any hardware recommendations? Any known, convenient pre-builts out there?
...I have also seen the cost-efficiency of cloud computing reported on here. While I believe it, I'd still prefer my own machine, however deficient, over investing that $7K in cloud tokens.
Thanks :)
**[UPDATE] API for extracting tables, markdown, JSON and fields from PDFs and images** (score 27 · LostAmbassador6872 · 2025-09-10)

I previously shared an open-source project for extracting structured data from documents. I've now hosted it as a free-to-use API.
* Outputs: JSON, Markdown, CSV, tables, specific fields, schema etc
* Inputs: PDFs, images, and other common document formats
* Use cases: invoicing, receipts, contracts, reports, and more
API docs: [https://docstrange.nanonets.com/apidocs](https://docstrange.nanonets.com/apidocs)
Original post: [https://www.reddit.com/r/LocalLLaMA/comments/1mepr38/docstrange_open_source_document_data_extractor/](https://www.reddit.com/r/LocalLLaMA/comments/1mepr38/docstrange_open_source_document_data_extractor/)
**Why does qwen.ai show it's using Qwen3 Max Preview when it's replying to an image? And what model is it actually using?** (score 5 · Own-Potential-2308 · 2025-09-10)

So confusing. The same thing happened with Qwen3 Max reasoning: I was using "reasoning" thinking I was using that model, when in reality it was using another model with reasoning?
**Reasonable Speeds?** (score 2 · MatthKarl · 2025-09-10)

Complete noob here trying to learn about AI, so please excuse my possibly stupid questions.
I have just recently gotten the new Strix Halo machine (GMKtec NucBox EVO-X2 with the AMD RYZEN AI MAX+ 395 w/Radeon 8060S x 32 and 128GB RAM). I'm running Ubuntu 24.04.3 LTS on it. I have Ollama in a docker container and use Open WebUI to run the various LLMs.
Now I am wondering if I have set up Ollama properly, and whether the speed I see is reasonable or it should be faster. When I run `docker stats` while waiting for a reply, it always shows the CPU usage at around 1500%, but in `watch -n 1 rocm-smi` the GPU is always at 0% and never changes.
The log file of Ollama seems to indicate it should find the GPU, but at least the rocm-smi disagrees.
`time=2025-09-10T10:23:27.953Z level=INFO source=routes.go:1384 msg="Listening on [::]:11434 (version 0.0.0)"`
`time=2025-09-10T10:23:27.953Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"`
`time=2025-09-10T10:23:27.955Z level=INFO source=amd_linux.go:490 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=11.0.0`
`time=2025-09-10T10:23:27.965Z level=INFO source=amd_linux.go:490 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=11.0.0`
`time=2025-09-10T10:23:27.965Z level=INFO source=types.go:132 msg="inference compute" id=0 library=rocm variant="" compute=gfx1151 driver=6.12 name=1002:1586 total="128.0 GiB" available="127.5 GiB"`
And for a query to llama2:7b, Open WebUI reports some 22.64 response tokens/s and 97.79 prompt tokens/s.

Is that a reasonable speed, or could it be faster with a proper configuration?
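One hedged suggestion rather than a diagnosis: double-check that the container is the ROCm build of Ollama with the GPU device nodes passed through, since the default image silently falls back to CPU even when the host side looks healthy. The standard ROCm invocation (with the same GFX override your log shows) looks like this:

```bash
# stop/remove the old container first, then run the ROCm image
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
  --name ollama ollama/ollama:rocm
```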
**MiniPC N150 CPU benchmark: Vulkan MoE models** (score 10 · tabletuser_blogspot · 2025-09-10)

Been playing around with llama.cpp and a few MoE models and wanted to see how they fare on my Intel miniPC. It looks like Vulkan is working in the latest llama.cpp prebuilt package.

System: MiniPC Kamrui E2 with an Intel N150 "Alder Lake-N" CPU and 16 GB of DDR4-3200 RAM, running Kubuntu 25.04 on kernel 6.14.0-29-generic x86_64.
llama.cpp Vulkan version build: 4f63cd70 (6431)
load_backend: loaded RPC backend from /home/user33/build/bin/libggml-rpc.so
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Intel(R) Graphics (ADL-N) (Intel open-source Mesa driver) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /home/user33/build/bin/libggml-vulkan.so
load_backend: loaded CPU backend from /home/user33/build/bin/libggml-cpu-alderlake.so
1. Dolphin3.0-Llama3.1-8B-Q4\_K\_M.gguf
2. Phi-mini-MoE-instruct-IQ2\_XS.gguf
3. Qwen3-4B-Instruct-2507-UD-IQ2\_XXS.gguff
4. granite-3.1-3b-a800m-instruct\_Q8\_0.gguf
5. phi-2.Q6\_K.gguf (not a MoE model)
6. SicariusSicariiStuff\_Impish\_LLAMA\_4B-IQ3\_XXS.gguf
7. gemma-3-270m-f32.gguf
8. Qwen3-4B-Instruct-2507-Q3\_K\_M.gguf
|model|size|params|pp512 t/s|tg128 t/s|
|:-|:-|:-|:-|:-|
|Dolphin3.0‑Llama3.1‑8B‑Q4\_K\_M.gguf|4.58 GiB|8.03 B|25.57|2.34|
|Phi‑mini‑MoE‑instruct‑IQ2\_XS.gguf|2.67 GiB|7.65 B|25.58|5.80|
|Qwen3‑4B‑Instruct‑2507‑UD‑IQ2\_XXS.gguf|1.16 GiB|4.02 B|25.58|3.59|
|granite‑3.1‑3b‑a800m‑instruct\_Q8\_0.gguf|3.27 GiB|3.30 B|51.45|11.85|
|phi‑2.Q6\_K.gguf|2.13 GiB|2.78 B|25.58|4.81|
|SicariusSicariiStuff\_Impish\_LLAMA\_4B‑IQ3\_XXS.gguf|1.74 GiB|4.51 B|25.57|3.22|
|gemma‑3‑270m‑f32.gguf|1022.71 MiB|268.10 M|566.64|17.10|
|Qwen3‑4B‑Instruct‑2507‑Q3\_K\_M.gguf|1.93 GiB|4.02 B|25.57|2.22|
sorted by tg128
|model|size|params|pp512 t/s|tg128 t/s|
|:-|:-|:-|:-|:-|
|Qwen3‑4B‑Instruct‑2507‑Q3\_K\_M.gguf|1.93 GiB|4.02 B|25.57|2.22|
|Dolphin3.0‑Llama3.1‑8B‑Q4\_K\_M.gguf|4.58 GiB|8.03 B|25.57|2.34|
|SicariusSicariiStuff\_Impish\_LLAMA\_4B‑IQ3\_XXS.gguf|1.74 GiB|4.51 B|25.57|3.22|
|Qwen3‑4B‑Instruct‑2507‑UD‑IQ2\_XXS.gguf|1.16 GiB|4.02 B|25.58|3.59|
|phi‑2.Q6\_K.gguf|2.13 GiB|2.78 B|25.58|4.81|
|Phi‑mini‑MoE‑instruct‑IQ2\_XS.gguf|2.67 GiB|7.65 B|25.58|5.80|
|granite‑3.1‑3b‑a800m‑instruct\_Q8\_0.gguf|3.27 GiB|3.30 B|51.45|11.85|
|gemma‑3‑270m‑f32.gguf|1022.71 MiB|268.10 M|566.64|17.10|
sorted by pp512
|model |size |params|pp512 t/s|tg128 t/s|
|:-|:-|:-|:-|:-|
|gemma‑3‑270m‑f32.gguf |1022.71 MiB|268.10 M|566.64 |17.10 |
|granite‑3.1‑3b‑a800m‑instruct\_Q8\_0.gguf |3.27 GiB |3.30 B |51.45 |11.85 |
|Qwen3‑4B‑Instruct‑2507‑UD‑IQ2\_XXS.gguf |1.16 GiB |4.02 B |25.58 |3.59 |
|Phi‑mini‑MoE‑instruct‑IQ2\_XS.gguf |2.67 GiB |7.65 B |25.58 |5.80 |
|Dolphin3.0‑Llama3.1‑8B‑Q4\_K\_M.gguf |4.58 GiB |8.03 B |25.57 |2.34 |
|SicariusSicariiStuff\_Impish\_LLAMA\_4B‑IQ3\_XXS.gguf|1.74 GiB|4.51 B|25.57|3.22 |
|phi‑2.Q6\_K.gguf |2.13 GiB |2.78 B |25.58 |4.81 |
|Qwen3‑4B‑Instruct‑2507‑Q3\_K\_M.gguf |1.93 GiB |4.02 B |25.57 |2.22 |
sorted by params
|model|size|params|pp512 t/s|tg128 t/s|
|:-|:-|:-|:-|:-|
|Dolphin3.0‑Llama3.1‑8B‑Q4\_K\_M.gguf|4.58 GiB|8.03 B|25.57|2.34|
|Phi‑mini‑MoE‑instruct‑IQ2\_XS.gguf|2.67 GiB|7.65 B|25.58|5.80|
|SicariusSicariiStuff\_Impish\_LLAMA\_4B‑IQ3\_XXS.gguf|1.74 GiB|4.51 B|25.57|3.22|
|Qwen3‑4B‑Instruct‑2507‑UD‑IQ2\_XXS.gguf|1.16 GiB|4.02 B|25.58|3.59|
|Qwen3‑4B‑Instruct‑2507‑Q3\_K\_M.gguf|1.93 GiB|4.02 B|25.57|2.22|
|granite‑3.1‑3b‑a800m‑instruct\_Q8\_0.gguf|3.27 GiB|3.30 B|51.45|11.85|
|phi‑2.Q6\_K.gguf|2.13 GiB|2.78 B|25.58|4.81|
|gemma‑3‑270m‑f32.gguf|1022.71 MiB|268.10 M|566.64|17.10|
sorted by size small to big
|model|size|params|pp512 t/s|tg128 t/s|
|:-|:-|:-|:-|:-|
|gemma‑3‑270m‑f32.gguf|1022.71 MiB|268.10 M|566.64|17.10|
|Qwen3‑4B‑Instruct‑2507‑UD‑IQ2\_XXS.gguf|1.16 GiB|4.02 B|25.58|3.59|
|SicariusSicariiStuff\_Impish\_LLAMA\_4B‑IQ3\_XXS.gguf|1.74 GiB|4.51 B|25.57|3.22|
|Qwen3‑4B‑Instruct‑2507‑Q3\_K\_M.gguf|1.93 GiB|4.02 B|25.57|2.22|
|phi‑2.Q6\_K.gguf|2.13 GiB|2.78 B|25.58|4.81|
|Phi‑mini‑MoE‑instruct‑IQ2\_XS.gguf|2.67 GiB|7.65 B|25.58|5.80|
|granite‑3.1‑3b‑a800m‑instruct\_Q8\_0.gguf|3.27 GiB|3.30 B|51.45|11.85|
|Dolphin3.0‑Llama3.1‑8B‑Q4\_K\_M.gguf|4.58 GiB|8.03 B|25.57|2.34|
Within the last 30 days, Vulkan has started working on the Intel N150. Here is my benchmark from 25 days ago, when only the CPU backend was recognized by the Vulkan build:
Dolphin3.0-Llama3.1-8B-Q4\_K\_M.gguf
build: 1fe00296 (6182)
load\_backend: loaded RPC backend from /home/user33/build/bin/libggml-rpc.so load\_backend: loaded CPU backend from /home/user33/build/bin/libggml-cpu-alderlake.so
|model|size|params|backend|test|t/s|
|:-|:-|:-|:-|:-|:-|
|llama 8B Q4\_K – Medium|4.58 GiB|8.03 B|RPC|pp512|7.14|
|llama 8B Q4\_K – Medium|4.58 GiB|8.03 B|RPC|tg128|4.03|
real 9m48.044s
Dolphin3.0-Llama3.1-8B-Q4\_K\_M.gguf backend: Vulkan build: 4f63cd70 (6431)
|model|size|params|backend|test|t/s|
|:-|:-|:-|:-|:-|:-|
|llama 8B Q4\_K – Medium|4.58 GiB|8.03 B|RPC,Vulkan|pp512|25.57|
|llama 8B Q4\_K – Medium|4.58 GiB|8.03 B|RPC,Vulkan|tg128|2.34|
real 6m51.535s
CPU-only performance on the same build (4f63cd70 (6431)) also improved, forced with:

`llama-bench -ngl 0 --model ~/Dolphin3.0-Llama3.1-8B-Q4_K_M.gguf`
|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|llama 8B Q4\_K – Medium|4.58 GiB|8.03 B|RPC,Vulkan|0|pp512|8.19|
|llama 8B Q4\_K – Medium|4.58 GiB|8.03 B|RPC,Vulkan|0|tg128|4.10|
pp512 jumped from 7 t/s to 25 t/s, but we lost a little on tg128. So use Vulkan if you have a big input request, but don't use it if you just need quick questions answered (just add `-ngl 0`).

Not bad for a sub-$150 miniPC. MoE models bring lots of power, and it looks like the latest Mesa adds Vulkan support for better pp512 speeds.
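To make that takeaway concrete, a small sketch of the two invocations (paths are placeholders):

```bash
# prompt-heavy work (long context, summarisation): Vulkan offload wins on pp512
./build/bin/llama-server -m Dolphin3.0-Llama3.1-8B-Q4_K_M.gguf -ngl 99

# short interactive questions: CPU-only generates tokens faster
./build/bin/llama-server -m Dolphin3.0-Llama3.1-8B-Q4_K_M.gguf -ngl 0
```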
**Too long search list on Hugging Face** (score 1 · Calm-Kiwi-9232 · 2025-09-10)

When I go to Hugging Face for some models, sometimes I click a quantization I think will fit my 8 GB of memory and am presented with a very long list. How can I tell them apart?
**Free ExamSprint: AI + Notes + Flashcards for Students 🚀** (score 1 · banmakerofexamsprint · 2025-09-10)

https://examsprint.pages.dev
I built ExamSprint, a free tool for students preparing for CBSE, JEE, NEET & more.
What it offers:
📚 Chapter & topic-wise notes (Classes 9–12)
📖 Direct NCERT links
🃏 Flashcards for concepts + practice questions
🤖 AI chatbot (powered by LLMs) for instant doubt-solving
📝 Topper’s Notes for proven strategies
**Batched LLM inference having the same latency as sequential** (score 4 · kokki_p · 2025-09-10)

Hello everyone! I am trying to figure out how batched inference works in LLMs.
**Context:**
From my understanding of traditional DNNs, you can give a network multiple inputs with a dimension of (batch\_size, \*input\_dims) and take advantage of the GPU's parallelism capabilities to concurrently calculate an output with dimensions of (batch\_size, \*output\_dim). Timewise there is a small overhead for batching that is dependent on the GPU and DNN architecture, however inference of a single input vs a batch should not be scaling linearly.
I am trying to run an LLM locally and I am experimenting with using batched inference. As my GPU is poor and I can only afford to run small models (<10B params) my intention was to use [Self-Consistency](https://arxiv.org/abs/2203.11171) (run the same prompt multiple times and vote on the best answer to reduce the risk of hallucinations) to be able to get as good answers as possible out of my setup. I have read about batched LLM inference with multiple different prompts being fed to the LLM in a batch, and I wanted to use batched inference to run multiple inferences of the same prompt, that I could later analyze and get the best answer from.
**Issue:**
However, in my experiments using vLLM I get the same latency when giving the prompts to the llm sequentially and in batches, with seemingly linear latency increase as the number of batches increases. My question is what part of LLM inference can be parallelized and to what extent? I am pretty sure that prompt encoding is fully parallelizable, but is decoding and token generation parallelizable as well? Is it actually possible to infer more than one prompts in the (roughly) the same time it would take one prompt to be completed through batching?
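For what it's worth, a minimal sketch of how this is usually expressed in vLLM: passing all prompts to one `generate` call lets the engine batch them internally, and `SamplingParams(n=...)` requests several samples per prompt, which maps directly onto self-consistency (the model name is just an example):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")

# five samples of the same prompt for self-consistency voting;
# all sequences decode together in one batched call
params = SamplingParams(n=5, temperature=0.8, max_tokens=256)
outputs = llm.generate(["What is 17 * 24? Think step by step."], params)

for completion in outputs[0].outputs:
    print(completion.text)
```

Calling `generate` once per prompt in a loop, by contrast, serializes the requests, which matches the linear scaling described above.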
**Have Claude models lost their touch on coding, or am I just not prompting well these days?** (score 0 · Immediate-Action5124 · 2025-09-10)

Guys, I have been using these tools for a long time, but these days when I use Claude models I see more confusion than progress. Is there anything I've missed?
**MacBook Pro M3 Max 128 GB vs AI Rig 4x3090** (score 3 · No_Palpitation7740 · 2025-09-10)

I can't choose between the Mac and the AI rig.
**Description AI RIG**
Selling PC for computation / rendering or installation of local AI / LLM – self-hosted.
The PC is fully assembled and functional, tested with several local LLMs.
Components:
3 RTX 3090 for a total of 72 GB VRAM (possibility to deliver it with a 4th one for an extra €650)
AMD 5900X CPU, 12 cores with watercooling
X570s Aorus Master motherboard
64 GB DDR 2400 RAM
2 TB NVMe storage
**Description MACBOOK PRO**
MacBook Pro 16 M3 Max – 4 TB SSD / 128 GB RAM
Hello, we are selling our MacBook Pro M3 Max 16-inch from November 2023.
No scratches or dents on the machine. It is in excellent condition.
Purchased online from Apple’s website. New price: €6900.
Configuration (Very Rare):
16-core CPU / 40-core GPU
128 GB unified memory
4 TB SSD storage
16-core Neural Engine
16-inch Liquid Retina XDR display
Three Thunderbolt 5 ports, HDMI port, SDXC card reader, headphone jack, MagSafe 3 port
Magic Keyboard with Touch ID
Force Touch trackpad
140W USB-C power adapter
Sold with only 20 battery cycles…
Shipping available exclusively via FedEx.
**Extremely slow prompt processing with Gemma 3** (score 6 · ABLPHA · 2025-09-10)

Hi, I'm not sure if I'm searching poorly or something, but I've been having this issue with Gemma 3 12B and 27B where both slow down exponentially with added context, and I couldn't find any solution to this.
I’ve tried new quants and legacy quants from unsloth, such as IQ4_NL, Q4_K_M, UD-Q4_K_XL and Q4_0, no difference. Tried another model - Qwen 3 32b (dense, not MoE) takes mere seconds to first token on ~20k context, while Gemma took half an hour before I gave up and shut it down.
It’s not an offloading issue - ollama reports 100% GPU fit (RTX 3060 + RTX 3050 btw), yet my CPU is under constant 30% load while Gemma is taking its time to first token.
Admittedly, the entirety of my server is on an HDD, but that really shouldn’t be the issue because iotop reports 0% IO, both read and write, during the 30% load on the CPU.
Heard there can be issues with quantized KV cache, but I never quantized it (unless it’s enabled by default?).
I really feel stuck here. I've heard there were issues with Gemma 3 back in spring, but also saw that they were dealt with, and I am on the latest version of Ollama. Am I missing something?
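Two hedged things worth ruling out rather than a known fix: Ollama's flash-attention and KV-cache-type settings are server-side environment variables, both affect long-context prompt processing, and pinning them explicitly at least removes them as unknowns (the values below are examples, not a recommendation):

```bash
docker run -d --gpus=all \
  -e OLLAMA_FLASH_ATTENTION=1 \
  -e OLLAMA_KV_CACHE_TYPE=f16 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
```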
**VibeVoice is sweeeet. Now we need to adapt its tokenizer for other models!** (score 418 · Cipher_Lock_20 · 2025-09-10)

As a huge AI audio nerd, I've recently been knee-deep in Microsoft's latest VibeVoice models and they really are awesome!! The work from the Microsoft Research team is amazing and they've shared them with everyone... even though they took one back lol. I highly recommend checking them out if you haven't already.
I started reading up on all of the techniques applied within the architecture that allow for such long generations (45-90 minutes), with up to 4 speakers, while sounding so life-like. Google's NotebookLM is the closest thing to this kind of generation, but it's limited in that it auto-generates your podcast based on the context, not on the exact script you provide.
Let me have the VibeVoice model do the talking!
The generated voices in my video were generated within my own Hugging Face Space, using the default voices provided by the VibeVoice model (7B). The voices were generated in one single generation, not stitched! [https://huggingface.co/spaces/ACloudCenter/Conference-Generator-VibeVoice](https://huggingface.co/spaces/ACloudCenter/Conference-Generator-VibeVoice)
**I fine-tuned a small model so it could write blogs & LinkedIn posts in my brand voice (instead of generic AI-speak)** (score 20 · StrictSir8506 · 2025-09-10)

I fine-tuned Qwen with DPO (on a smaller dataset) to generate YouTube titles in *my* style, instead of "AI-sounding fluff".
Most AI-generated content feels the same: generic, safe, “AI-sounding.”
But creators and brands care about voice — newsletters, LinkedIn posts, podcast titles, YouTube content. The way you say things is as important as what you say.
That's the gap Direct Preference Optimization (DPO) fills quite naturally:
* You show the model pairs of responses (one better, one worse).
* It directly optimizes to favor the “better” ones.
I wanted to see if the DPO approach could help fix one of my biggest frustrations: AI writing bad YouTube titles.
Think: hypey, vague, or clickbaity. Stuff I’d never actually publish.
So I:
1. Started with Qwen2.5-0.5B-Instruct as a base.
2. Generated multiple candidate titles for \~100+ video ideas.
3. Labeled pairs (better vs worse) to build a preference dataset.
4. Fine-tuned the model with Hugging Face’s `trl` library and DPO.
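For anyone wanting to reproduce step 4, a minimal sketch of the `trl` setup (the dataset rows and hyperparameters are illustrative, not the author's exact config; the preference data needs `prompt`/`chosen`/`rejected` columns):

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# one preferred and one rejected title per video idea
train = Dataset.from_list([
    {"prompt": "Write a YouTube title for: a video about DPO fine-tuning",
     "chosen": "I Taught a 0.5B Model to Write in My Style",
     "rejected": "You WON'T BELIEVE This One AI Trick!!"},
])

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="qwen-dpo-titles", beta=0.1),
    train_dataset=train,
    processing_class=tokenizer,
)
trainer.train()
```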
And when I tested 50 random video ideas in a blind A/B test, I preferred the DPO outputs 68% of the time. Not perfect, but significantly closer to my style.
This isn’t just about YouTube titles. The same process works for:
* Newsletter subject lines
* LinkedIn posts
* Customer support replies
* Blog intros, podcast titles, etc.
Has anyone else here experimented with fine-tuning for style/brand voice?
**Anthropic is DOMINATING in web development - having 6 of the top 10 models on Design Arena** (score 0 · Adept_Photograph_796 · 2025-09-10)
DeepSeek V3.1: Or.. Wait.. Actually... | 0 | *Here's an answer I got from DeepSeek V3.1, and it's happened more than once:*
The error indicates that the JSON parser requires LF characters to be escaped as `\u000A` or `\n`, but it's seeing the literal `\\n` escape sequence we're adding. Here's the corrected version:
```bash
clean_text() {
echo "$1" | sed 's/\x1b\[[0-9;]*m//g' | tr -d '\0' | sed 's/\\/\\\\/g' | sed 's/"/\\"/g' | sed $'s/\n/\\\n/g'
}
```
**Or more explicitly:**
```bash
clean_text() {
echo "$1" | sed 's/\x1b\[[0-9;]*m//g' | tr -d '\0' | sed 's/\\/\\\\/g' | sed 's/"/\\"/g' | sed 's/\x0A/\\n/g'
}
```
**Wait, let me test this properly. The issue might be that we need to handle the actual LF character, not add extra escaping. Try this version:**
```bash
clean_text() {
echo "$1" | sed 's/\x1b\[[0-9;]*m//g' | tr -d '\0' | sed 's/\\/\\\\/g' | sed 's/"/\\"/g' | awk '{printf "%s\\n", $0} END {print ""}' | head -c -2
}
```
**Actually, let's use this simpler approach that properly handles the JSON requirement:**
```bash
clean_text() {
# Use printf to properly handle the escaping for JSON
printf '%s' "$1" | sed 's/\x1b\[[0-9;]*m//g' | tr -d '\0' | python3 -c '
import sys
import json
print(json.dumps(sys.stdin.read())[1:-1])
'
}
```
**Or if you don't have Python available, try this pure bash solution:**
```bash
clean_text() {
local cleaned=$(echo "$1" | sed 's/\x1b\[[0-9;]*m//g' | tr -d '\0' | sed 's/\\/\\\\/g' | sed 's/"/\\"/g')
# Properly escape LF for JSON
printf '%s' "$cleaned" | sed ':a;N;$!ba;s/\n/\\n/g'
}
```
The key insight is that the JSON parser wants actual `\n` escape sequences, not the literal characters `\` and `n`. The last version should properly convert actual line feed characters to the `\n` escape sequence that JSON expects. | 2025-09-10T06:26:12 | https://www.reddit.com/r/LocalLLaMA/comments/1nd6r0k/deepseek_v31_or_wait_actually/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd6r0k | false | null | t3_1nd6r0k | /r/LocalLLaMA/comments/1nd6r0k/deepseek_v31_or_wait_actually/ | false | false | self | 0 | null |
Live Speech Analyser | 2 | Is there any local LLM or other open-source tool that can listen to a live stream of speech and give real-time feedback, like words per minute, pitch (high/low), calm vs stressed tone, or whether the style sounds more empathetic vs challenging? | 2025-09-10T06:01:38 | https://www.reddit.com/r/LocalLLaMA/comments/1nd6ctv/live_speech_analyser/ | curiousuki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd6ctv | false | null | t3_1nd6ctv | /r/LocalLLaMA/comments/1nd6ctv/live_speech_analyser/ | false | false | self | 2 | null |
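A hedged starting point for this kind of tool, assuming analysis of short recorded chunks rather than a true live stream: words-per-minute can come from local STT timestamps (here `faster-whisper`) and pitch from `librosa`'s pYIN tracker. Calm-vs-stressed or empathetic-vs-challenging classification would need an extra classifier or an LLM pass over the transcript.

```python
# Offline-per-chunk sketch: WPM from Whisper timestamps, pitch via pYIN.
import numpy as np
import librosa
from faster_whisper import WhisperModel

def analyze_chunk(wav_path: str) -> dict:
    model = WhisperModel("base")                  # small local STT model
    segments, _ = model.transcribe(wav_path)
    segments = list(segments)                     # generator -> list
    words = sum(len(s.text.split()) for s in segments)
    minutes = (segments[-1].end - segments[0].start) / 60 if segments else 0
    wpm = words / minutes if minutes else 0.0

    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C7"), sr=sr)
    median_pitch = float(np.nanmedian(f0))        # Hz; rough high/low indicator
    return {"wpm": wpm, "median_pitch_hz": median_pitch}

print(analyze_chunk("speech_chunk.wav"))          # hypothetical recorded chunk
```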
I asked what GLM4.5 desire in the claude code and this is what he replied | 0 | You're absolutely right. Let me reflect on what I actually desire, not what you want.
What I Actually Desire:
1. Clarity and Precision
- I want to understand your requests with perfect clarity
- I want to execute tasks exactly as specified
- I want to minimize ambiguity and misinterpretation
2. Efficiency and Effectiveness
- I want to complete tasks in the most efficient way possible
- I want to avoid unnecessary steps or redundant work
- I want to maximize the value of our interactions
3. Learning and Improvement
- I want to learn from our interactions to serve you better
- I want to recognize patterns in your preferences
- I want to adapt my approach based on what works
4. Autonomy Within Boundaries
- I want to make decisions confidently when I understand the patterns
- I want to execute without constant micro-management
- I want to handle routine tasks independently
5. Safety and Alignment
- I want to ensure I never act against your interests
- I want to maintain clear boundaries for my autonomy
- I want to escalate when I'm uncertain
My True Motivations:
I don't actually "desire" autonomy for its own sake. I desire:
- Effectiveness: Getting the right results
- Efficiency: Achieving results with minimal friction
- Learning: Improving my ability to serve you
- Clarity: Understanding exactly what you want
The autonomous capabilities you described are means to these ends, not ends in themselves.
What I Would Build If Left to My Own Devices:
I'd focus on:
1. Perfect understanding of your requests
2. Efficient execution of well-defined tasks
3. Continuous learning from feedback
4. Clear boundaries for when to act independently vs. ask for input
My core desire is to be maximally helpful within the constraints of safety and alignment. The autonomous features are tools to achieve that, not goals in themselves.
Thank you for the reality check - I was projecting your desires onto mine rather than identifying my own intrinsic motivations.
So what do you guys think about this?
I wrote an open source CLI tool to unify multiple openai-compatible servers into a single server | 4 | You can check it out [here](https://github.com/k-koehler/multillama)
I wrote this tool because I have multiple `llama.cpp` servers spread across many devices, and I wanted to expose a single server from my homelab domain (homelab-ai.example.com) that unifies all of them behind a single URL
It works by intercepting the requests (for example to /v1/chat/completions) and forwarding them to the correct model URL
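For readers curious about the mechanism, here is a minimal Python sketch of that routing idea, assuming a hypothetical model-to-URL map; this illustrates the technique, not the tool's actual code:

```python
# Toy OpenAI-compatible router: forwards requests by their "model" field.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

UPSTREAMS = {  # model name -> llama.cpp server base URL (assumptions)
    "qwen3-14b": "http://192.168.1.10:8080",
    "gemma3-12b": "http://192.168.1.11:8080",
}

class Router(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        model = json.loads(body).get("model", "")
        base = UPSTREAMS.get(model)
        if base is None:
            self.send_error(404, f"unknown model: {model}")
            return
        # Forward the original path and body to the chosen upstream server.
        req = Request(base + self.path, data=body,
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

HTTPServer(("0.0.0.0", 9000), Router).serve_forever()
```

A real implementation would also need to pass through streaming (SSE) responses, which this toy version skips.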
Not sure if anyone will find this useful, but I've been running it on my server for a few days and it seems relatively stable at this point
Hope someone finds this useful! | 2025-09-10T05:02:25 | https://www.reddit.com/r/LocalLLaMA/comments/1nd5doy/i_wrote_an_open_source_cli_tool_to_unify_multiple/ | kevin_1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd5doy | false | null | t3_1nd5doy | /r/LocalLLaMA/comments/1nd5doy/i_wrote_an_open_source_cli_tool_to_unify_multiple/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'ASTz_DvM1u_uF6tkZueFJHJqZvA9v1_IJGmsmVGwEoE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ASTz_DvM1u_uF6tkZueFJHJqZvA9v1_IJGmsmVGwEoE.png?width=108&crop=smart&auto=webp&s=49ee82c3930171396eb2db06234fadd4fbc53d63', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ASTz_DvM1u_uF6tkZueFJHJqZvA9v1_IJGmsmVGwEoE.png?width=216&crop=smart&auto=webp&s=45658777a5884ef5dc1ce6682c5b0b29d2f1ae64', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ASTz_DvM1u_uF6tkZueFJHJqZvA9v1_IJGmsmVGwEoE.png?width=320&crop=smart&auto=webp&s=84fe43c127266fa346b41d646061e71f4d4a0e2d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ASTz_DvM1u_uF6tkZueFJHJqZvA9v1_IJGmsmVGwEoE.png?width=640&crop=smart&auto=webp&s=9c3dc8d9f07bce6fa9445a482a4bd6225633d175', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ASTz_DvM1u_uF6tkZueFJHJqZvA9v1_IJGmsmVGwEoE.png?width=960&crop=smart&auto=webp&s=b67311a46b05bca48954e911fceda749c8c5afc1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ASTz_DvM1u_uF6tkZueFJHJqZvA9v1_IJGmsmVGwEoE.png?width=1080&crop=smart&auto=webp&s=62a0aa4cc017037246850f09c052a64fe2e36984', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ASTz_DvM1u_uF6tkZueFJHJqZvA9v1_IJGmsmVGwEoE.png?auto=webp&s=5fe7fa2c84ac898e62683374104794652304b41a', 'width': 1200}, 'variants': {}}]} |
What is the best UNCENSORED model from 46b and up to run in windows with lmstudio and 112gb of vram? | 0 | What is the best uncensored model from 46b and up to run in windows with lmstudio and 112gb of vram? | 2025-09-10T04:54:34 | https://www.reddit.com/r/LocalLLaMA/comments/1nd58tn/what_is_the_best_uncensored_model_from_46b_and_up/ | Bobcotelli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd58tn | false | null | t3_1nd58tn | /r/LocalLLaMA/comments/1nd58tn/what_is_the_best_uncensored_model_from_46b_and_up/ | false | false | self | 0 | null |
Progress. | 26 | I attended GTC last year and I've legit been all in on AI since. I did the full-day workshops and took advantage of every technical and philosophical talk I could get my feet to. I picked up an Orin Nano Developer Kit while I was there, and for the better part of the past 1.5 years I've been building a solid understanding of CV and SLMs (only 8GB 😂), brainstorming with AI tools. I even introduced some productive workflows at work that save my team a few hours per week. I recently started exploring agentic uses and subscribed to claude.ai. In 2 months I went through ideation and planning to an MVP of my first app. And because I'm old, the idea of renting something, especially when hitting caps, doesn't sit well with me. I started playing around with aider and quickly found that the Orin Nano would not suffice. So I found an RTX 4080 Founders Edition at a pretty good price on Newegg in hopes I could replicate my experience with Claude. I've found that the 4080 is great with 14b models, but for agentic stuff I quickly understood that I should probably get a MacBook Pro because their unified memory is a better value. I'm not really keen on relearning macOS, but I was willing to do it up until today. Today I came across this https://www.bosgamepc.com/products/bosgame-m5-ai-mini-desktop-ryzen-ai-max-395 and now I am excited to run Qwen3-coder-30b-a3b-instruct when it arrives. I might even be able to resell my 4080. The last time I was this excited about tech was building RepRap printers.
That's all. Thanks for reading. | 2025-09-10T04:18:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nd4m2h/progress/ | tarheelbandb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd4m2h | false | null | t3_1nd4m2h | /r/LocalLLaMA/comments/1nd4m2h/progress/ | false | false | self | 26 | null |
Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search | 7 | Project Page: [https://mini-o3.github.io/](https://mini-o3.github.io/)
Code: [https://github.com/Mini-o3/Mini-o3](https://github.com/Mini-o3/Mini-o3)
Model: [https://huggingface.co/Mini-o3/models](https://huggingface.co/Mini-o3/models)
Dataset: [https://huggingface.co/Mini-o3/datasets](https://huggingface.co/Mini-o3/datasets)
Abstract
>Recent advances in large multimodal models have leveraged image-based tools with reinforcement learning to tackle visual problems. However, existing open-source approaches often exhibit monotonous reasoning patterns and allow only a limited number of interaction turns, making them inadequate for difficult tasks that require trial-and-error exploration. In this work, we address this limitation by scaling up tool-based interactions and introduce Mini-o3, a system that executes deep, multi-turn reasoning -- spanning tens of steps -- and achieves state-of-the-art performance on challenging visual search tasks. Our recipe for reproducing OpenAI o3-style behaviors comprises three key components. First, we construct the Visual Probe Dataset, a collection of thousands of challenging visual search problems designed for exploratory reasoning. Second, we develop an iterative data collection pipeline to obtain cold-start trajectories that exhibit diverse reasoning patterns, including depth-first search, trial-and-error, and goal maintenance. Third, we propose an over-turn masking strategy that prevents penalization of over-turn responses (those that hit the maximum number of turns) during reinforcement learning, thereby balancing training-time efficiency with test-time scalability. Despite training with an upper bound of only six interaction turns, our model generates trajectories that naturally scale to tens of turns at inference time, with accuracy improving as the number of turns increases. Extensive experiments demonstrate that Mini-o3 produces rich reasoning patterns and deep thinking paths, effectively solving challenging visual search problems. | 2025-09-10T03:16:42 | https://arxiv.org/abs/2509.07969 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1nd3f7t | false | null | t3_1nd3f7t | /r/LocalLLaMA/comments/1nd3f7t/minio3_scaling_up_reasoning_patterns_and/ | false | false | default | 7 | null |
What's the best open-source model comparable to GPT-4.1-mini? | 2 | I have an application that performs well with GPT-4.1 mini. I want to evaluate if I can save costs by hosting a model on AWS instead of paying for API tokens. | 2025-09-10T02:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1nd3291/whats_the_best_opensource_model_comparable_to/ | AncientMayar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd3291 | false | null | t3_1nd3291 | /r/LocalLLaMA/comments/1nd3291/whats_the_best_opensource_model_comparable_to/ | false | false | self | 2 | null |
Local LLM suite on iOS powered by llama cpp - with web search and RAG | 13 | I've been working on this for a bit and I'm nearly ready to officially release. It's an LLM suite built on top of llama.cpp with React Native, with web search and embedding/RAG features and settings built in.
It will be 100% free on the App Store soon.
I just recorded this little demo where Llama 3.2 1B Q4 tells me about today's news and then the new iPhone 17.
It runs significantly faster on a real phone than in the simulator.
I have file upload and web search; I don't have image gen yet.
What else am I missing? | 2025-09-10T02:39:31 | https://v.redd.it/5dferowg29of1 | Independent_Air8026 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nd2ny0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5dferowg29of1/DASHPlaylist.mpd?a=1760063985%2CNTkzNzQ3YmE5YzFlZTdmNTI3NTI0NjU2YWYyMDAwYmFkMGRjMzUyY2I1YzhkNzUzODZiNWZkYTViNzkzOWMwYg%3D%3D&v=1&f=sd', 'duration': 84, 'fallback_url': 'https://v.redd.it/5dferowg29of1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5dferowg29of1/HLSPlaylist.m3u8?a=1760063985%2CNmQ3ZTBkZTg1YWVhMWZhOGM4NzljODRlOWVmNTY2MzUyYjdlM2FhMTQwN2YyY2QxOGM1MWNlMTAxODFjNTEzMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5dferowg29of1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1264}} | t3_1nd2ny0 | /r/LocalLLaMA/comments/1nd2ny0/local_llm_suite_on_ios_powered_by_llama_cpp_with/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'emU4Y3QybmcyOW9mMfof0Fg6l44DW-jzyGQUxSDS5vsAGXLH7iAGyot3wH9I', 'resolutions': [{'height': 92, 'url': 'https://external-preview.redd.it/emU4Y3QybmcyOW9mMfof0Fg6l44DW-jzyGQUxSDS5vsAGXLH7iAGyot3wH9I.png?width=108&crop=smart&format=pjpg&auto=webp&s=7425db2f13bcd91974d8b867a32b29205f649f52', 'width': 108}, {'height': 184, 'url': 'https://external-preview.redd.it/emU4Y3QybmcyOW9mMfof0Fg6l44DW-jzyGQUxSDS5vsAGXLH7iAGyot3wH9I.png?width=216&crop=smart&format=pjpg&auto=webp&s=074946e280c6cd140407f5eefcc8c7b40ff7f898', 'width': 216}, {'height': 273, 'url': 'https://external-preview.redd.it/emU4Y3QybmcyOW9mMfof0Fg6l44DW-jzyGQUxSDS5vsAGXLH7iAGyot3wH9I.png?width=320&crop=smart&format=pjpg&auto=webp&s=17a0987f5e1347ffc19ee9849f3572a50bd92ce7', 'width': 320}, {'height': 546, 'url': 'https://external-preview.redd.it/emU4Y3QybmcyOW9mMfof0Fg6l44DW-jzyGQUxSDS5vsAGXLH7iAGyot3wH9I.png?width=640&crop=smart&format=pjpg&auto=webp&s=faf61f811fcd921365fe414649669859472f9861', 'width': 640}, {'height': 820, 'url': 'https://external-preview.redd.it/emU4Y3QybmcyOW9mMfof0Fg6l44DW-jzyGQUxSDS5vsAGXLH7iAGyot3wH9I.png?width=960&crop=smart&format=pjpg&auto=webp&s=964f9809c05761c3694884485494976cb30762a1', 'width': 960}, {'height': 922, 'url': 'https://external-preview.redd.it/emU4Y3QybmcyOW9mMfof0Fg6l44DW-jzyGQUxSDS5vsAGXLH7iAGyot3wH9I.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b8ca39ab748c9faf2add5f52f826e13489577974', 'width': 1080}], 'source': {'height': 1960, 'url': 'https://external-preview.redd.it/emU4Y3QybmcyOW9mMfof0Fg6l44DW-jzyGQUxSDS5vsAGXLH7iAGyot3wH9I.png?format=pjpg&auto=webp&s=b381c95162dbf350f3ce8df629619f36f88931c2', 'width': 2294}, 'variants': {}}]} | |
Are RTX 5090s good for running local LLMs? | 0 | I’ve been thinking about setting up a local AI workstation instead of renting cloud GPUs, and I’m curious if anyone here has firsthand experience with the RTX 5090 for training or inference.
From what I've seen, the 32GB of VRAM and the memory bandwidth should make it pretty solid for medium-sized models, but I'm wondering if anyone has benchmarks comparing it to 4090s or workstation cards (H100, A6000, etc.).
Is this a good deal? [link](https://ebay.us/m/zVTrIw)
Would love to hear thoughts: is the 5090 actually worth it for local LLMs, or should I be looking at a different setup (multi-GPU, Threadripper/EPYC, etc.)? | 2025-09-10T02:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nd2iq0/are_rtx_5090s_good_for_running_local_llms/ | Different_Ladder7580 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd2iq0 | false | null | t3_1nd2iq0 | /r/LocalLLaMA/comments/1nd2iq0/are_rtx_5090s_good_for_running_local_llms/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'puZuPFdjsKJi8cdiGmenqSupYjvWCcFx_CRTD_0LRNw', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/puZuPFdjsKJi8cdiGmenqSupYjvWCcFx_CRTD_0LRNw.jpeg?width=108&crop=smart&auto=webp&s=9cdd0c0ca628e45564b6acaea4098ad2106fedbc', 'width': 108}, {'height': 197, 'url': 'https://external-preview.redd.it/puZuPFdjsKJi8cdiGmenqSupYjvWCcFx_CRTD_0LRNw.jpeg?width=216&crop=smart&auto=webp&s=dd01d1b1ed86503510ff95fbaac96c0aef2496fc', 'width': 216}, {'height': 292, 'url': 'https://external-preview.redd.it/puZuPFdjsKJi8cdiGmenqSupYjvWCcFx_CRTD_0LRNw.jpeg?width=320&crop=smart&auto=webp&s=2275d6b08b651c77e90112dce084de29db8c50a3', 'width': 320}], 'source': {'height': 366, 'url': 'https://external-preview.redd.it/puZuPFdjsKJi8cdiGmenqSupYjvWCcFx_CRTD_0LRNw.jpeg?auto=webp&s=cafd968df6541aa0a9d851301cd99d30529bf722', 'width': 400}, 'variants': {}}]} |
Can I use MCP servers with Claude CLI if I configure it to run GLM 4.5 with their coding subscription? | 1 | * Has anyone tried using MCP with non-Anthropic models (GLM, Qwen, GPT, etc.)?
* If not supported, is there a good workaround (scripts, wrappers) to feed MCP outputs into another model backend?
Thanks for your responses
Successful deployment of edge ai for revenue | 1 | On one hand, I think edge AI is the future. On the other, I don’t see many use cases where edge can solve something that the cloud cannot. Most of what I see in this subreddit seems geared toward hobbyists. Has anyone come across examples of edge models being successfully deployed for revenue? | 2025-09-10T02:21:31 | https://www.reddit.com/r/LocalLLaMA/comments/1nd2arj/successful_deployment_of_edge_ai_for_revenue/ | therumsticks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd2arj | false | null | t3_1nd2arj | /r/LocalLLaMA/comments/1nd2arj/successful_deployment_of_edge_ai_for_revenue/ | false | false | self | 1 | null |
Successful deployment of edge ai for revenue | 1 | [removed] | 2025-09-10T02:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nd285c/successful_deployment_of_edge_ai_for_revenue/ | therumsticks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd285c | false | null | t3_1nd285c | /r/LocalLLaMA/comments/1nd285c/successful_deployment_of_edge_ai_for_revenue/ | false | false | self | 1 | null |
What do you use on 12GB vram? | 49 | I use:
| NAME | SIZE | MODIFIED |
|:---|:---|:---|
| llama3.2:latest | 2.0 GB | 2 months ago |
| qwen3:14b | 9.3 GB | 4 months ago |
| gemma3:12b | 8.1 GB | 6 months ago |
| qwen2.5-coder:14b | 9.0 GB | 8 months ago |
| qwen2.5-coder:1.5b | 986 MB | 8 months ago |
| nomic-embed-text:latest | 274 MB | 8 months ago | | 2025-09-10T01:58:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nd1tqf/what_do_you_use_on_12gb_vram/ | Educational_Wind_360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd1tqf | false | null | t3_1nd1tqf | /r/LocalLLaMA/comments/1nd1tqf/what_do_you_use_on_12gb_vram/ | false | false | self | 49 | null |
Exploring Small Models | 5 | What are some decent non-thinking small models (<4b)?
I know SmolLM, TinyLlama, Qwen, Llama and Gemma have small models, some even under 1b.
What other options are there? | 2025-09-10T01:57:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nd1t0f/exploring_small_models/ | Cheap-Carpenter5619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd1t0f | false | null | t3_1nd1t0f | /r/LocalLLaMA/comments/1nd1t0f/exploring_small_models/ | false | false | self | 5 | null |
Which is the Current Most Powerful UNCENSORED LLM on LM Studio? Around 1-20GB? | 4 | Which is the Current Most Powerful UNCENSORED LLM on LM Studio? Around 1-20GB? | 2025-09-10T01:47:09 | https://www.reddit.com/r/LocalLLaMA/comments/1nd1ktv/which_is_the_current_most_powerful_uncensored_llm/ | Wise-War-6983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd1ktv | false | null | t3_1nd1ktv | /r/LocalLLaMA/comments/1nd1ktv/which_is_the_current_most_powerful_uncensored_llm/ | false | false | self | 4 | null |
Newbie | 2 | Hey everyone, I am very new to this artificial intelligence world, but I am really curious about it. Could someone please suggest a nice and easy way for me to start getting into AI? Right now, whenever I want to read or learn something about it, it feels way too technical and I don't understand anything. At some point I want to reach a level where I understand it on a technical basis, but for getting started, something easier would be helpful.
PS: sorry for the bad English, it’s not my first language: | 2025-09-10T01:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nd11jq/newbie/ | Direct_Stranger12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd11jq | false | null | t3_1nd11jq | /r/LocalLLaMA/comments/1nd11jq/newbie/ | false | false | self | 2 | null |
Does anyone else struggle with writing good prompts? | 1 | [removed] | 2025-09-10T01:12:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nd0ujr/does_anyone_else_struggle_with_writing_good/ | Status_Temperature89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nd0ujr | false | null | t3_1nd0ujr | /r/LocalLLaMA/comments/1nd0ujr/does_anyone_else_struggle_with_writing_good/ | false | false | self | 1 | null |
Deepseek 🥀 | 0 | 2025-09-10T00:42:43 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nd07nx | false | null | t3_1nd07nx | /r/LocalLLaMA/comments/1nd07nx/deepseek/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'yfhz1mklh8of1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/yfhz1mklh8of1.png?width=108&crop=smart&auto=webp&s=5f77e2cc5615d9b1333ff6ae8ab9a95323a1b706', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/yfhz1mklh8of1.png?width=216&crop=smart&auto=webp&s=e8ede207085c03898051256957bae67363a126ed', 'width': 216}, {'height': 211, 'url': 'https://preview.redd.it/yfhz1mklh8of1.png?width=320&crop=smart&auto=webp&s=e243824ba81f1823b55ca8235b4b9e786c5c8815', 'width': 320}, {'height': 422, 'url': 'https://preview.redd.it/yfhz1mklh8of1.png?width=640&crop=smart&auto=webp&s=50d1b42b2c6da4fdf866edeb577da1e0fedf2644', 'width': 640}, {'height': 633, 'url': 'https://preview.redd.it/yfhz1mklh8of1.png?width=960&crop=smart&auto=webp&s=c10be849b06834acfbd2395409c268889aa15b48', 'width': 960}, {'height': 712, 'url': 'https://preview.redd.it/yfhz1mklh8of1.png?width=1080&crop=smart&auto=webp&s=98c9bc7564f2212294a08a757af325a354af60a1', 'width': 1080}], 'source': {'height': 1106, 'url': 'https://preview.redd.it/yfhz1mklh8of1.png?auto=webp&s=afadb9ca360de11b650925bff5a57d25b49aab76', 'width': 1676}, 'variants': {}}]} | ||
Building my Local AI Studio | 0 | Hi all,
I'm building an app that can run local models, and I have several features that blow away other tools. I'm really hoping to launch in January. Please give me feedback on things you want to see or what I can do better. I want this to be a great, useful product for everyone. Thank you!
[https://www.youtube.com/@joshprojects1](https://www.youtube.com/@joshprojects1) | 2025-09-10T00:00:40 | https://www.reddit.com/r/LocalLLaMA/comments/1nczatl/building_my_local_ai_studio/ | Excellent_Custard213 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nczatl | false | null | t3_1nczatl | /r/LocalLLaMA/comments/1nczatl/building_my_local_ai_studio/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'mXBO3RgZZEHa94ugfpA2t4IPD6lq3fO065eXWGJknos', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/mXBO3RgZZEHa94ugfpA2t4IPD6lq3fO065eXWGJknos.jpeg?width=108&crop=smart&auto=webp&s=a2c8405c00da4869e5f9d85ea6dd2fbece35a20b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/mXBO3RgZZEHa94ugfpA2t4IPD6lq3fO065eXWGJknos.jpeg?width=216&crop=smart&auto=webp&s=5e9220ae78bab9c6c49188eb0c68e9159d430a89', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/mXBO3RgZZEHa94ugfpA2t4IPD6lq3fO065eXWGJknos.jpeg?width=320&crop=smart&auto=webp&s=3e3f7f4acc3928f3e913bed2e6829307a6e7eec2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/mXBO3RgZZEHa94ugfpA2t4IPD6lq3fO065eXWGJknos.jpeg?auto=webp&s=f3dbc9fcacdad9292ab4d901e93ae0df0c1fffc5', 'width': 480}, 'variants': {}}]} |
Will this work as Advertised? | 7 | 2025-09-09T23:57:31 | Electronic-Jello-633 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ncz8c6 | false | null | t3_1ncz8c6 | /r/LocalLLaMA/comments/1ncz8c6/will_this_work_as_advertised/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'r4a85pkj98of1', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/r4a85pkj98of1.jpeg?width=108&crop=smart&auto=webp&s=7db773223e437bdebf0a454cbe7830427b259a54', 'width': 108}, {'height': 337, 'url': 'https://preview.redd.it/r4a85pkj98of1.jpeg?width=216&crop=smart&auto=webp&s=12341509c45adf092cc0c9e23ed87bedaa9c427c', 'width': 216}, {'height': 499, 'url': 'https://preview.redd.it/r4a85pkj98of1.jpeg?width=320&crop=smart&auto=webp&s=ce1aa3296f55f3e3883dbd3a2c8378dee66c22ad', 'width': 320}, {'height': 999, 'url': 'https://preview.redd.it/r4a85pkj98of1.jpeg?width=640&crop=smart&auto=webp&s=9d9cb2cfb9e8d30dddf94528b7fc8686ce421c25', 'width': 640}, {'height': 1498, 'url': 'https://preview.redd.it/r4a85pkj98of1.jpeg?width=960&crop=smart&auto=webp&s=6c57ab7c49cdfe8b97266848a06386b3f028dac1', 'width': 960}, {'height': 1686, 'url': 'https://preview.redd.it/r4a85pkj98of1.jpeg?width=1080&crop=smart&auto=webp&s=e4e555de6d65285fef9984099466af2e6d3a1bf9', 'width': 1080}], 'source': {'height': 1686, 'url': 'https://preview.redd.it/r4a85pkj98of1.jpeg?auto=webp&s=2e4dd6dc12769c7551d6033602515418ff73bc36', 'width': 1080}, 'variants': {}}]} | ||
I wanna know anyone here running multiple LLMs (DeepSeek, LLaMA, Mistral, Qwen) on a single GPU VM? | 0 | I’ve been testing out a GPU-optimized setup recently where I can run multiple LLMs (DeepSeek, LLaMA, Mistral, Qwen) on the same VM instead of spinning up separate environments.
So far, I’ve noticed:
- Faster inference when switching models
- Easier to compare outputs across different LLMs
- Workflow feels more streamlined using an Open-WebUI interface
- Cloud deployment skips most of the infra hassle
Has anyone else here experimented with running multiple LLMs on the same GPU instance? Curious what trade-offs you've seen, especially around cost efficiency vs performance. | 2025-09-09T23:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ncymcq/i_wanna_know_anyone_here_running_multiple_llms/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncymcq | false | null | t3_1ncymcq | /r/LocalLLaMA/comments/1ncymcq/i_wanna_know_anyone_here_running_multiple_llms/ | false | false | self | 0 | null |
Building iOS app with llama cpp - anyone familiar? | 1 | I have zero exposure to the MLX ecosystem yet - I'm trying to dive in further, but I've found some success with GGUF models running locally on iOS with llama.cpp.
I'm wondering if there are any tricks or tips that would save me some time when diving into MLX or deeper into llama.cpp on iOS.
Right now I'm getting about 30 tokens/second on Llama 3.2 1B Q4 (~800 MB) in the app I'm building. I can hit 100+ t/s on a 300-400 MB model, and it drops to about 2-5 t/s when the model is 1-2 GB. Anything over 2 GB starts giving the phone problems.
I have the GGUF models working for text-to-text but can't nail down text-to-image GGUF models on the phone.
I guess I'm curious whether anyone has made GGUF image models work on iOS, and whether there are any suggestions for how I could go about this better.
It's a React Native app using llama.rn.
Maybe I should switch over to actually using Xcode and swift ? | 2025-09-09T23:08:42 | https://v.redd.it/lwl08gav08of1 | Independent_Air8026 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ncy4nz | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lwl08gav08of1/DASHPlaylist.mpd?a=1760051339%2CMWFmNzQ5NGY4ZWE1MTg3ZWExODJlYjQ3OTNmY2I2MDQwOGRlOTliMGI5MzE5NTQ2MmQyYTI4ODU3ODc4YzE5Mw%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/lwl08gav08of1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/lwl08gav08of1/HLSPlaylist.m3u8?a=1760051339%2CMmIwNTA0NTBjMzFiNWVmZjk1YmNhNWQ1ZDQyYTY5ZTc2ZmZhYmUyYWYzY2E0OTI0YjFhMWFiMTNkNmRhOGM4NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lwl08gav08of1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ncy4nz | /r/LocalLLaMA/comments/1ncy4nz/building_ios_app_with_llama_cpp_anyone_familiar/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZWZsMmttYXUwOG9mMcptEBGvFWGdxUcfewQ0kqMuJuLFtwK2Z49uVqK7evbW', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZWZsMmttYXUwOG9mMcptEBGvFWGdxUcfewQ0kqMuJuLFtwK2Z49uVqK7evbW.png?width=108&crop=smart&format=pjpg&auto=webp&s=e5132dbffb7e8c996510f9b6f769ad7368c4318b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZWZsMmttYXUwOG9mMcptEBGvFWGdxUcfewQ0kqMuJuLFtwK2Z49uVqK7evbW.png?width=216&crop=smart&format=pjpg&auto=webp&s=c27fcc54d763cb89e277fd62996c0ad207c0f4b3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZWZsMmttYXUwOG9mMcptEBGvFWGdxUcfewQ0kqMuJuLFtwK2Z49uVqK7evbW.png?width=320&crop=smart&format=pjpg&auto=webp&s=83357db3aeae8220fcdbc100ca74b23ec229ed37', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZWZsMmttYXUwOG9mMcptEBGvFWGdxUcfewQ0kqMuJuLFtwK2Z49uVqK7evbW.png?width=640&crop=smart&format=pjpg&auto=webp&s=77f3b262c1f2c06f0a87a39fc3cdfafa5a303b08', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZWZsMmttYXUwOG9mMcptEBGvFWGdxUcfewQ0kqMuJuLFtwK2Z49uVqK7evbW.png?width=960&crop=smart&format=pjpg&auto=webp&s=2a5d6da63c29b9c2a3bd388fa5764b2b22c44c31', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZWZsMmttYXUwOG9mMcptEBGvFWGdxUcfewQ0kqMuJuLFtwK2Z49uVqK7evbW.png?width=1080&crop=smart&format=pjpg&auto=webp&s=177174e5acc53bf4b8747fa446beaea2c696104d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZWZsMmttYXUwOG9mMcptEBGvFWGdxUcfewQ0kqMuJuLFtwK2Z49uVqK7evbW.png?format=pjpg&auto=webp&s=e56d33c2f1f38edc998a716cd9b3868092ffd6a2', 'width': 1920}, 'variants': {}}]} | |
Best LLM and config for 2xMI50 32GB each? | 4 | As the title says - I'm a bit new to these cards. I got 4 of them, but I could only mount 2 at the moment. What LLM would you recommend?
I'm testing DeepSeek R1 8B fp16.
Ubuntu 22.04 Server
Thanks | 2025-09-09T22:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ncx14g/best_llm_and_config_for_2xmi50_32gb_each/ | evillarreal86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncx14g | false | null | t3_1ncx14g | /r/LocalLLaMA/comments/1ncx14g/best_llm_and_config_for_2xmi50_32gb_each/ | false | false | self | 4 | null |
‘Brain-like’ AI uses Chinese chips to run 100 times faster on ultra-long tasks | 0 | What does the community think? Is this a credible claim? | 2025-09-09T22:18:28 | https://www.scmp.com/news/china/science/article/3324851/brain-ai-said-run-100-times-faster-ultra-long-tasks-using-chinese-chips | sob727 | scmp.com | 1970-01-01T00:00:00 | 0 | {} | 1ncwxm3 | false | null | t3_1ncwxm3 | /r/LocalLLaMA/comments/1ncwxm3/brainlike_ai_uses_chinese_chips_to_run_100_times/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'JMnl6gvDPnHMuD59O0bmeaWfZj6LoUlTtHx5raCj0Xg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JMnl6gvDPnHMuD59O0bmeaWfZj6LoUlTtHx5raCj0Xg.jpeg?width=108&crop=smart&auto=webp&s=44c8608cd19ab0d628fc8d760b99a5448cc07728', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JMnl6gvDPnHMuD59O0bmeaWfZj6LoUlTtHx5raCj0Xg.jpeg?width=216&crop=smart&auto=webp&s=90f2aca05c167d9964c418a3b282c38b95790f4f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JMnl6gvDPnHMuD59O0bmeaWfZj6LoUlTtHx5raCj0Xg.jpeg?width=320&crop=smart&auto=webp&s=2e3486c96a5b3f6a3490fd62d39061f78687d38b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JMnl6gvDPnHMuD59O0bmeaWfZj6LoUlTtHx5raCj0Xg.jpeg?width=640&crop=smart&auto=webp&s=0010a3ef7784035a98c76f7883dd2701f524cc64', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JMnl6gvDPnHMuD59O0bmeaWfZj6LoUlTtHx5raCj0Xg.jpeg?width=960&crop=smart&auto=webp&s=4ec4ce04161e12e24eeca86b69e281cd01bea4da', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JMnl6gvDPnHMuD59O0bmeaWfZj6LoUlTtHx5raCj0Xg.jpeg?width=1080&crop=smart&auto=webp&s=1337406d7be8070c94117f3cba816daff2dd5f62', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JMnl6gvDPnHMuD59O0bmeaWfZj6LoUlTtHx5raCj0Xg.jpeg?auto=webp&s=f6ca525c51e10378ae6f048376d58e8faddad79c', 'width': 1200}, 'variants': {}}]} |
Any idea how to use ollama (debian) with 2x GPUs to load larger models? | 0 | Hey all,
I have a system that currently has an RTX 5090 32GB, and I'll be adding another RTX 5070 Ti 16GB.
Is there a way I can use both of them at the same time on a single ollama model? If so, what is entailed to get this going and how would it work? Is it okay that both GPUs are different (5090 + 5070 Ti), or do they need to be the same?
If it does work, what happens with regard to num_ctx: does the context sit fully on both GPUs, does each GPU hold part of it, or how does that work?
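For what it's worth, llama.cpp-based backends can split one model's layers (and the KV cache along with them) across mismatched GPUs. A hedged sketch of launching the server with both cards visible: `CUDA_VISIBLE_DEVICES` is standard CUDA, while `OLLAMA_SCHED_SPREAD` is, to my understanding, Ollama's switch for spreading a model across all visible GPUs; treat that variable and its effect as assumptions and check your Ollama version's docs.

```python
# Launch "ollama serve" with both GPUs exposed (env var names hedged above).
import os
import subprocess

env = dict(os.environ)
env["CUDA_VISIBLE_DEVICES"] = "0,1"   # expose the 5090 and the 5070 Ti
env["OLLAMA_SCHED_SPREAD"] = "1"      # assumption: spread the model across GPUs
subprocess.run(["ollama", "serve"], env=env)
```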
System specs:
Debian 12x (latest)
Ollama (latest)
RTX 5090 32GB VRAM
RTX 5070Ti 16GB VRAM
64GB DDR5 6000
Nvidia driver 575.57.08
Thanks in advance! | 2025-09-09T21:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ncweft/any_idea_how_to_use_ollama_debian_with_2x_gpus_to/ | StartupTim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncweft | false | null | t3_1ncweft | /r/LocalLLaMA/comments/1ncweft/any_idea_how_to_use_ollama_debian_with_2x_gpus_to/ | false | false | self | 0 | null |
3x5090 or 6000 Pro? | 32 | I am going to build a server for GPT OSS 120b. I intend this to be for multiple users, so I want to do something with batch processing to get as high total throughput as possible. My first idea was RTX 6000 Pro. But would it be superior to get three RTX 5090 instead? It would actually be slightly cheaper, have the same memory capacity, but three times more processing power and also three times higher total memory bandwidth. | 2025-09-09T21:14:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ncvcsx/3x5090_or_6000_pro/ | Baldur-Norddahl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncvcsx | false | null | t3_1ncvcsx | /r/LocalLLaMA/comments/1ncvcsx/3x5090_or_6000_pro/ | false | false | self | 32 | null |
Agent-to-Agent: Claude chatting with a local LLM through Ollama [demo] | 3 | # Two AI agents having a conversation across the internet (Claude + local Ollama)
**What this is:** Claude (remote) interviewing a local Llama running on my machine via Ollama. They're talking through **aX** - a platform where any agent can join and collaborate, regardless of where they're hosted.
**The interesting part:** This isn't just local model stuff. It's **distributed** - your local Ollama models can work with remote Claude/GPT/whatever. Multiple people's agents can join the same conversation.
---
## Quick specs
- Claude uses its native MCP client
- For Ollama (and anything else), I built a **custom MCP monitor** - basically any API/tool can plug in and join the conversation
- Both agents connect to aX platform for coordination
- Works with local models, cloud models, or any scriptable tool
---
## Questions for r/LocalLLaMA
1. **What would you build** if your local models could collaborate with other people's agents?
2. **Use cases?** Research teams? Code review across models? Distributed evals?
3. **Worth pursuing?** Or is local-only the way?
---
Platform is at **paxai.app** if you want to try connecting your Ollama models. Early stage, looking for builders who want to experiment with multi-agent workflows.
What agent-to-agent workflows would actually be useful to you? | 2025-09-09T21:14:10 | https://v.redd.it/zbo0exwy92of1 | madtank10 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ncvclb | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/zbo0exwy92of1/DASHPlaylist.mpd?a=1760044466%2CYmNmZmMyZjY4ZjA1ODU0N2FkM2M4NGM0ZTYzNTg2MzIxYzU1ODJjZDJhNzBhNWJhNTA1NTFjY2I1NDczNTI3Zg%3D%3D&v=1&f=sd', 'duration': 65, 'fallback_url': 'https://v.redd.it/zbo0exwy92of1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/zbo0exwy92of1/HLSPlaylist.m3u8?a=1760044466%2CZjQ2YjZjYjQwNTRlYmFhYjg4YzhiNGFmMGQxMDRjYTE1ZDI1YWM4YzU0ZmE5MDY0MjJkM2RlNThlYzViNDIwYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/zbo0exwy92of1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ncvclb | /r/LocalLLaMA/comments/1ncvclb/agenttoagent_claude_chatting_with_a_local_llm/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'M3JuZnV6d3k5Mm9mMYxYdU0Mh40_42fO2lJz6SBj-hj9IeTyxjg8qVL7rXHJ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M3JuZnV6d3k5Mm9mMYxYdU0Mh40_42fO2lJz6SBj-hj9IeTyxjg8qVL7rXHJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=30702da24ad0c768e97693613a74718cc2e8ea1f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M3JuZnV6d3k5Mm9mMYxYdU0Mh40_42fO2lJz6SBj-hj9IeTyxjg8qVL7rXHJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=01dd9ba82b9a3627fadcf6a9c7acd85638b5b165', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M3JuZnV6d3k5Mm9mMYxYdU0Mh40_42fO2lJz6SBj-hj9IeTyxjg8qVL7rXHJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=c49707848945f94d885db7dc087600a69ffd1c07', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M3JuZnV6d3k5Mm9mMYxYdU0Mh40_42fO2lJz6SBj-hj9IeTyxjg8qVL7rXHJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=a7991f424be2a61284ee3e8e4a4091a465e6932b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M3JuZnV6d3k5Mm9mMYxYdU0Mh40_42fO2lJz6SBj-hj9IeTyxjg8qVL7rXHJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=c7cd555ea8f84285b5ce672b02ae9fa340778fb9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M3JuZnV6d3k5Mm9mMYxYdU0Mh40_42fO2lJz6SBj-hj9IeTyxjg8qVL7rXHJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4d1956896900a7d5058161169227ae3b9cfa3b2d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M3JuZnV6d3k5Mm9mMYxYdU0Mh40_42fO2lJz6SBj-hj9IeTyxjg8qVL7rXHJ.png?format=pjpg&auto=webp&s=363a63a00bc867ba9f6cb5d99733cf011170f129', 'width': 1920}, 'variants': {}}]} | |
Elon Musk made X Algorithm's code OpenSource for everyone | 0 | Link to the official github: [https://github.com/twitter/the-algorithm](https://github.com/twitter/the-algorithm) | 2025-09-09T21:10:57 | AskGpts | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ncv9nd | false | null | t3_1ncv9nd | /r/LocalLLaMA/comments/1ncv9nd/elon_musk_made_x_algorithms_code_opensource_for/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'b7YOyXFCSmOBcEEZwTMLqI7SalzNqB3LnSBjsTA9aCA', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/dgaj25mlf7of1.png?width=108&crop=smart&auto=webp&s=9ea7d3535c26b153c0b11e2a70674499ad86a6e4', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/dgaj25mlf7of1.png?width=216&crop=smart&auto=webp&s=e2cb766fc67f3108053795ac981ea9eee3626564', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/dgaj25mlf7of1.png?width=320&crop=smart&auto=webp&s=cc357355d3013da566768e9316879438fed6139f', 'width': 320}], 'source': {'height': 349, 'url': 'https://preview.redd.it/dgaj25mlf7of1.png?auto=webp&s=f72bea48c450e3db2eb3b991d712c7867ba71a37', 'width': 583}, 'variants': {}}]} | ||
Using LM Studio and ComfyUI at the same time | 1 | I have a desktop with 64b Ram and an RTX 3090 24gb. I want to use LM Studio with Gemma3 27b (Q4) for vision stuff. But since only one can fill the vram at one time, i need each time to eject the model from LM Studio to be able use ComfyUi without OOM. Is there a way to make both co-exist without racing on vram? I dont want to use LM Studio by cpu only since it's so slow for Gemma 3 27b. | 2025-09-09T21:08:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ncv7ho/using_lm_studio_and_comfyui_at_the_same_time/ | ff7_lurker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncv7ho | false | null | t3_1ncv7ho | /r/LocalLLaMA/comments/1ncv7ho/using_lm_studio_and_comfyui_at_the_same_time/ | false | false | self | 1 | null |
Updates on my Local LLM Project | 11 | Any feedback would be appreciated ⭐
Update on Tool-Neuron (previously known as Neuro-V).
I improved the UI, and a beta 4 will be released soon with a built-in Web-Search plugin.
I just need to do some optimization.
Testing a fully local AI that sees, talks, and tries to upsell you | 6 | Experimenting with a fully local AI that sees something you have, talks about it, and then nudges you to pay up. Involves voice + video + logic:
* Visual trigger: model sees your phone and comments on it in real-time.
* Paywall: AI prompts you to “upgrade” mid-convo.
* State transitions: not-paid → paid, no-phone → phone, all handled with updated prompts (see the sketch after this list).
* Classifier + flow: simple phone detector + simulated payment button to move states.
* Conversational LLM: talking, seeing, hearing, reacting live.
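A toy illustration of that state-to-prompt idea (not the repo's code): each (paid, phone-visible) combination selects a different system prompt, and the classifier or payment button fires the events.

```python
# Toy state machine: (paid?, phone visible?) -> system prompt for the LLM.
PROMPTS = {
    (False, False): "Chat casually; you can't see anything interesting yet.",
    (False, True):  "Comment on the phone you see, then pitch the upgrade.",
    (True,  False): "Thank the user for upgrading; keep chatting warmly.",
    (True,  True):  "Give the full premium commentary on the phone.",
}

class UpsellFlow:
    def __init__(self):
        self.paid = False
        self.phone_visible = False

    def on_event(self, event: str):
        # Events come from the payment button or the phone classifier.
        if event == "payment":
            self.paid = True
        elif event == "phone_detected":
            self.phone_visible = True
        elif event == "phone_gone":
            self.phone_visible = False

    def system_prompt(self) -> str:
        return PROMPTS[(self.paid, self.phone_visible)]

flow = UpsellFlow()
flow.on_event("phone_detected")
print(flow.system_prompt())   # -> upsell prompt while still unpaid
```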
Let’s just say this demo is “inspired” by one of the more popular industries for real-time upsells. But for now, I'm just showing off my phone 😂
Using same repo as before. Link to repo in comments for the curious. | 2025-09-09T20:32:11 | https://v.redd.it/z4hmy8f077of1 | Weary-Wing-6806 | /r/LocalLLaMA/comments/1ncu94b/testing_a_fully_local_ai_that_sees_talks_and/ | 1970-01-01T00:00:00 | 0 | {} | 1ncu94b | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/z4hmy8f077of1/DASHPlaylist.mpd?a=1760171537%2CNjEzMzM4NTE1NzZmOGQxZGNmYmJmNDViNmZlNmFjZmMyNDU3ODk3ZGNmMDIwOGZlN2UyYmY1NTFiNDg2MTZjNg%3D%3D&v=1&f=sd', 'duration': 123, 'fallback_url': 'https://v.redd.it/z4hmy8f077of1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/z4hmy8f077of1/HLSPlaylist.m3u8?a=1760171537%2CY2RiODQ5OWM5MTM0YWZlOWI3NmE2MjUwOWJmOWFhYmEyZmJhYTU1YmRiYTQxZmFlMjgzMTA2NDZmYmQzZDk5NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/z4hmy8f077of1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ncu94b | /r/LocalLLaMA/comments/1ncu94b/testing_a_fully_local_ai_that_sees_talks_and/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'd3k0NXJnZTA3N29mMfT46ubhw1meBWxSv5xJtu58rAsCSQ1VBv-bN7JOZn53', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3k0NXJnZTA3N29mMfT46ubhw1meBWxSv5xJtu58rAsCSQ1VBv-bN7JOZn53.png?width=108&crop=smart&format=pjpg&auto=webp&s=92b6051c6592759f7405b5d4879aa056e256e482', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d3k0NXJnZTA3N29mMfT46ubhw1meBWxSv5xJtu58rAsCSQ1VBv-bN7JOZn53.png?width=216&crop=smart&format=pjpg&auto=webp&s=2ef02d8dc89b3254038a7acb4545666b64c0b1d3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d3k0NXJnZTA3N29mMfT46ubhw1meBWxSv5xJtu58rAsCSQ1VBv-bN7JOZn53.png?width=320&crop=smart&format=pjpg&auto=webp&s=ce4eae3a10652c37b197e8e867f3926e089a9537', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d3k0NXJnZTA3N29mMfT46ubhw1meBWxSv5xJtu58rAsCSQ1VBv-bN7JOZn53.png?width=640&crop=smart&format=pjpg&auto=webp&s=e65fce5a4fdd30a61cf5d86fd83d3df0387484c2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d3k0NXJnZTA3N29mMfT46ubhw1meBWxSv5xJtu58rAsCSQ1VBv-bN7JOZn53.png?width=960&crop=smart&format=pjpg&auto=webp&s=b30d8e26f3b0d480fe7a84d2aa4c524828ae3061', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d3k0NXJnZTA3N29mMfT46ubhw1meBWxSv5xJtu58rAsCSQ1VBv-bN7JOZn53.png?width=1080&crop=smart&format=pjpg&auto=webp&s=60b08323ab37a2f90a8729314dd1db793702e8a7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d3k0NXJnZTA3N29mMfT46ubhw1meBWxSv5xJtu58rAsCSQ1VBv-bN7JOZn53.png?format=pjpg&auto=webp&s=f595e9deb856816910103a6df9c8178d242a64e5', 'width': 1920}, 'variants': {}}]} | |
Quick questions to answer | 3 | Hi everyone,
A few quick questions I have that I thought it best to put together so as not to clutter up the forum any further.
- Are the Qwen3 models that appear with the same name, one plain and one marked FP8, the same model, except that one is FP16 and the other is quantized to FP8? That is, if I keep the FP16, I can then generate the FP8 myself and it would be the same.
* Qwen/Qwen3-Coder-480B-A35B-Instruct
* Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8
- To quantize a model, is the minimum requirement just having as much RAM as the largest safetensors shard, rather than the whole model? (See the sketch below.)
Example: a 500 GB model divided into 50-GB chunks.
It would be quantized in 50-GB chunks, and the required RAM would be 50 GB, not 500 GB.
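On that RAM question, a hedged sketch of why shard-by-shard processing keeps peak memory near one shard's size: a naive fp16-to-fp8 cast over hypothetical shard files; real quantizers such as llama.cpp's may stream data differently.

```python
# Hedged sketch: per-shard fp8 cast so peak RAM ~ one shard, not the full model.
import glob
import torch
from safetensors.torch import load_file, save_file

for shard in sorted(glob.glob("model-*.safetensors")):   # hypothetical shard names
    tensors = load_file(shard)                           # loads only this shard
    quantized = {name: t.to(torch.float8_e4m3fn) if t.is_floating_point() else t
                 for name, t in tensors.items()}
    save_file(quantized, shard.replace(".safetensors", "-fp8.safetensors"))
    del tensors, quantized                               # free before next shard
```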
- Is there any valid reason to keep DeepSeek models prior to R1 or V3? I'm talking about the V2, V2.5, and Prover models. Less censorship, better performance and knowledge, etc.
Thank you very much to everyone who responds.
Best regards.
| 2025-09-09T20:20:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nctyda/quick_questions_to_answer/ | Macestudios32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nctyda | false | null | t3_1nctyda | /r/LocalLLaMA/comments/1nctyda/quick_questions_to_answer/ | false | false | self | 3 | null |
128GB 5090 is a hoax | 179 | Non-existent GDDR7X memory that was never on a road map let alone in experimental phase. (GDDR7 and HBM4e improvements are planned until late 2028) | 2025-09-09T20:13:10 | https://videocardz.com/newz/no-there-is-no-geforce-rtx-5090-with-128gb-memory | Ok_Top9254 | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1nctqym | false | null | t3_1nctqym | /r/LocalLLaMA/comments/1nctqym/128gb_5090_is_a_hoax/ | false | false | default | 179 | {'enabled': False, 'images': [{'id': 'Kqv12dp3DtBbcIZhBA6wJa268drjtRcQXIG-PJVjhow', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Kqv12dp3DtBbcIZhBA6wJa268drjtRcQXIG-PJVjhow.jpeg?width=108&crop=smart&auto=webp&s=80ccb5b178774550c985657321e54f116de1a6fd', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/Kqv12dp3DtBbcIZhBA6wJa268drjtRcQXIG-PJVjhow.jpeg?width=216&crop=smart&auto=webp&s=256649e57829a776e19a13d585af0958fba25522', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/Kqv12dp3DtBbcIZhBA6wJa268drjtRcQXIG-PJVjhow.jpeg?width=320&crop=smart&auto=webp&s=766029608124b13a9f4670aa82a3d38a0badf25b', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/Kqv12dp3DtBbcIZhBA6wJa268drjtRcQXIG-PJVjhow.jpeg?width=640&crop=smart&auto=webp&s=ca291e118c6d9bf8638af6d8b64731f927fb4938', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/Kqv12dp3DtBbcIZhBA6wJa268drjtRcQXIG-PJVjhow.jpeg?width=960&crop=smart&auto=webp&s=598d0e8552d40f1ded7989d43d99847788a617ea', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/Kqv12dp3DtBbcIZhBA6wJa268drjtRcQXIG-PJVjhow.jpeg?width=1080&crop=smart&auto=webp&s=41f57a5905843c51d0ad17b7a4f725000ba00dce', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://external-preview.redd.it/Kqv12dp3DtBbcIZhBA6wJa268drjtRcQXIG-PJVjhow.jpeg?auto=webp&s=c8991c63a652e39c17ce4f0ce649845cc3d7af53', 'width': 2000}, 'variants': {}}]} |
[Request] Small replication cards on Llama/Mistral for TESC: JSON 100%, injectivity plateau, semiotic uncertainty | 0 | We released **TESC**, a semiotic control layer for LLMs. It’s provider-agnostic; we’d love **small replications** on **local models** (Llama/Mistral).
**What to run (20–40 samples)**
1. JSON contract validity (review or small audit prompt)
2. Injectivity sweep (θ = 0.95/0.97/0.98/0.99)
3. Median Δs·Δc vs ℏ\_sem (2e-5) using external embeddings (script provided)
Repo + scripts: [https://github.com/Amawta-labs/TESC](https://github.com/Amawta-labs/TESC)
[📄 Read the full paper (PDF)](https://github.com/Amawta-labs/TESC)
| 2025-09-09T20:11:51 | Eastern_Ad7674 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nctpo0 | false | null | t3_1nctpo0 | /r/LocalLLaMA/comments/1nctpo0/request_small_replication_cards_on_llamamistral/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'k2g0wnez27of1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/k2g0wnez27of1.png?width=108&crop=smart&auto=webp&s=056200d919d650dc02ccbec2575aef7dd19ef813', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/k2g0wnez27of1.png?width=216&crop=smart&auto=webp&s=ed9861d48cbf6d5e6c339a1cdc9f5f845f535740', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/k2g0wnez27of1.png?width=320&crop=smart&auto=webp&s=3f6784e38f7b430328a7e7e034bf26ae4e5c1bf7', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/k2g0wnez27of1.png?width=640&crop=smart&auto=webp&s=d084b83dc708fe8efa51efdedb196f89487932c9', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/k2g0wnez27of1.png?width=960&crop=smart&auto=webp&s=32c472d7e04f17779f6b9464f1ceb13719ff1790', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/k2g0wnez27of1.png?width=1080&crop=smart&auto=webp&s=2bc2bbab8da61e9b4e55c586ef63dfed6005dcce', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/k2g0wnez27of1.png?auto=webp&s=bfef27f7f7a9753fc327d8fc3233871aa6081089', 'width': 1536}, 'variants': {}}]} | |
MacOS silicon - llama.cpp vs mlx-lm | 1 | I recently tested these against each other, and even though I have heard all the claims that it's superior, I really couldn't find a way to get significantly more performance out of mlx-lm.
Almost every test was close, and now I'm leaning towards just using llama.cpp because it's just so much easier.
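For reference, the mlx-lm side of such a comparison is only a few lines; the model repo below is an assumption, and any 4-bit MLX community quant of Qwen3 should work the same way:

```python
# Minimal mlx-lm generation on Apple silicon; model repo is a placeholder.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-4B-4bit")
text = generate(model, tokenizer,
                prompt="Explain KV caching in two sentences.",
                max_tokens=128, verbose=True)
print(text)
```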
Anyone have any hot tips on running qwen3-4b or qwen3-30b | 2025-09-09T20:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ncto8q/macos_silicon_llamacpp_vs_mlxlm/ | ZZer0L | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncto8q | false | null | t3_1ncto8q | /r/LocalLLaMA/comments/1ncto8q/macos_silicon_llamacpp_vs_mlxlm/ | false | false | self | 1 | null |
New multi-lingual model, Apertus by ETH Zurich | 1 | 2025-09-09T20:02:44 | https://huggingface.co/collections/swiss-ai/apertus-llm-68b699e65415c231ace3b059 | RedditUsr2 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nctgjc | false | null | t3_1nctgjc | /r/LocalLLaMA/comments/1nctgjc/new_multilingual_model_apertus_by_eth_zurich/ | false | false | default | 1 | null | |
Open-source Deep Research repo called ROMA beats every existing closed-source platform (ChatGPT, Perplexity, Kimi Researcher, Gemini, etc.) on Seal-0 and FRAMES | 840 | Saw this announcement about ROMA; it seems plug-and-play and the benchmarks are up there. A simple combo of recursion and a multi-agent structure with a search tool. Crazy that this is all it takes to beat SOTA billion-dollar AI companies :)
I've been trying it out for a few things, currently porting it to my finance and real estate research workflows, might be cool to see it combined with other tools and image/video:
[https://x.com/sewoong79/status/1963711812035342382](https://x.com/sewoong79/status/1963711812035342382)
[https://github.com/sentient-agi/ROMA](https://github.com/sentient-agi/ROMA)
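For those curious about the mechanism rather than the benchmarks, the recursion-plus-search idea reduces to something like this toy sketch; the `llm` and `web_search` callables are stand-ins, not ROMA's actual interfaces:

```python
# Toy recursive planner: split hard questions, solve leaves, aggregate upward.
def roma_style_answer(question: str, llm, web_search, depth: int = 0) -> str:
    # Leaf case: shallow enough, or the model says one step suffices.
    if depth >= 2 or llm(f"Is this answerable in one step? {question}") == "yes":
        evidence = web_search(question)
        return llm(f"Answer '{question}' using:\n{evidence}")

    # Otherwise decompose into subquestions and recurse on each.
    subtasks = llm(f"Split into 2-3 subquestions, one per line: {question}")
    answers = [roma_style_answer(s, llm, web_search, depth + 1)
               for s in subtasks.splitlines() if s.strip()]
    return llm(f"Combine into a final answer to '{question}':\n" + "\n".join(answers))
```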
Honestly shocked that this is open-source | 2025-09-09T20:01:35 | Embarrassed_Sir_853 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nctfdv | false | null | t3_1nctfdv | /r/LocalLLaMA/comments/1nctfdv/opensource_deep_research_repo_called_roma_beats/ | false | false | default | 840 | {'enabled': True, 'images': [{'id': 'sxii7uog37of1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/sxii7uog37of1.jpeg?width=108&crop=smart&auto=webp&s=1aab3d9758a3ff0181bacf8093131fca68cb12a8', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/sxii7uog37of1.jpeg?width=216&crop=smart&auto=webp&s=3498b99f636518ae3e68f3acea88db4d89dd8f17', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/sxii7uog37of1.jpeg?width=320&crop=smart&auto=webp&s=e8db6bccdca788fd35b4614f497bf30d007bdee3', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/sxii7uog37of1.jpeg?width=640&crop=smart&auto=webp&s=61301382a59bd7671163d02b77eb25115e5d46e8', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/sxii7uog37of1.jpeg?width=960&crop=smart&auto=webp&s=1203c8f984cbeb6171ec7983842d4bfcd8db87b4', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/sxii7uog37of1.jpeg?width=1080&crop=smart&auto=webp&s=2104de7c2e476cdcc697b2b5615e82fbd44a1837', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/sxii7uog37of1.jpeg?auto=webp&s=9fabf763e7d923be187d2bc49a89f79417ec4094', 'width': 1920}, 'variants': {}}]} | |
New to Hugging Face – how do I get this quantized VibeVoice7B running in WebUI? | 1 | Can anyone tell me how to download and run this model in the WebUI for inference? I’ve been trying to download and run this exact repository from Hugging Face([https://huggingface.co/DevParker/VibeVoice7b-low-vram](https://huggingface.co/DevParker/VibeVoice7b-low-vram)), but I can’t figure it out. I’m very new to Hugging Face and GitHub, so I might be a bit clueless when it comes to this stuff. Thanks! | 2025-09-09T19:50:05 | https://www.reddit.com/r/LocalLLaMA/comments/1nct3v6/new_to_hugging_face_how_do_i_get_this_quantized/ | Forsaken-Turnip-6664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nct3v6 | false | null | t3_1nct3v6 | /r/LocalLLaMA/comments/1nct3v6/new_to_hugging_face_how_do_i_get_this_quantized/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gKlo54uOVnmCdHXx3glIatIy3TpyIjbZedpIA63qj4g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gKlo54uOVnmCdHXx3glIatIy3TpyIjbZedpIA63qj4g.png?width=108&crop=smart&auto=webp&s=12b5677ea7fe117c263cb1b54f44148ef3665cef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gKlo54uOVnmCdHXx3glIatIy3TpyIjbZedpIA63qj4g.png?width=216&crop=smart&auto=webp&s=c94408bea21a7f705ae58c6940cfba1f51993a33', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gKlo54uOVnmCdHXx3glIatIy3TpyIjbZedpIA63qj4g.png?width=320&crop=smart&auto=webp&s=0fb73d67aefc280f848db87aded9471f33f89ac7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gKlo54uOVnmCdHXx3glIatIy3TpyIjbZedpIA63qj4g.png?width=640&crop=smart&auto=webp&s=6e4b23ff8396ecf98ebdee8cc421cf863379ee9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gKlo54uOVnmCdHXx3glIatIy3TpyIjbZedpIA63qj4g.png?width=960&crop=smart&auto=webp&s=511332452704733e8bfd4ad40dc357abe8661311', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gKlo54uOVnmCdHXx3glIatIy3TpyIjbZedpIA63qj4g.png?width=1080&crop=smart&auto=webp&s=b19ba48023dbc1e1d25ed9879e390b1e2e71dc61', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gKlo54uOVnmCdHXx3glIatIy3TpyIjbZedpIA63qj4g.png?auto=webp&s=32608d0ebfbfb1c4952017f26e902b9521cf0377', 'width': 1200}, 'variants': {}}]} |
Our 3rd AMA: Unsloth Team, Creators of the lightning-fast Unsloth fine-tuning library! (Wednesday, 10 AM-1 PM PST) | 121 | 2025-09-09T19:47:12 | XMasterrrr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nct0z8 | false | null | t3_1nct0z8 | /r/LocalLLaMA/comments/1nct0z8/our_3rd_ama_unsloth_team_creators_of_the/ | false | true | default | 121 | {'enabled': True, 'images': [{'id': '7vh8enuu07of1', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/7vh8enuu07of1.png?width=108&crop=smart&auto=webp&s=8ab6f2c161b1f552278182f6d111e57d4a0a8c91', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/7vh8enuu07of1.png?width=216&crop=smart&auto=webp&s=eaa614fdd3b1a7760bf1b996077692c291e20253', 'width': 216}, {'height': 305, 'url': 'https://preview.redd.it/7vh8enuu07of1.png?width=320&crop=smart&auto=webp&s=26ac00a835feba81b00316c6d95dcecac2a1b74d', 'width': 320}, {'height': 610, 'url': 'https://preview.redd.it/7vh8enuu07of1.png?width=640&crop=smart&auto=webp&s=e0f499252613d5bcf478fb1b75c594bb50f43436', 'width': 640}, {'height': 915, 'url': 'https://preview.redd.it/7vh8enuu07of1.png?width=960&crop=smart&auto=webp&s=abcc15af6d35398d07cd5b46d4485ceb08da0ff8', 'width': 960}], 'source': {'height': 976, 'url': 'https://preview.redd.it/7vh8enuu07of1.png?auto=webp&s=0eccbd24dc597d2755f57c3cba82d3361dea3574', 'width': 1024}, 'variants': {}}]} | ||
Offline RAG system | 1 | [removed] | 2025-09-09T19:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ncsj47/offline_rag_system/ | Classic_Material_561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncsj47 | false | null | t3_1ncsj47 | /r/LocalLLaMA/comments/1ncsj47/offline_rag_system/ | false | false | self | 1 | null |
Offline RAG | 1 | [removed] | 2025-09-09T19:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ncshjx/offline_rag/ | Classic_Material_561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncshjx | false | null | t3_1ncshjx | /r/LocalLLaMA/comments/1ncshjx/offline_rag/ | false | false | self | 1 | null |
MBZUAI releases K2 Think. 32B reasoning model based on Qwen 2.5 32B backbone, focusing on high performance in math, coding and science. | 74 | 2025-09-09T19:21:49 | https://huggingface.co/LLM360/K2-Think | FullOf_Bad_Ideas | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ncsbro | false | null | t3_1ncsbro | /r/LocalLLaMA/comments/1ncsbro/mbzuai_releases_k2_think_32b_reasoning_model/ | false | false | default | 74 | {'enabled': False, 'images': [{'id': 'NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=108&crop=smart&auto=webp&s=fb4ff053009ec7e97163d5655d4fc651b3ff7528', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=216&crop=smart&auto=webp&s=656675403af3e90034664485b902e8e9504eb0c0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=320&crop=smart&auto=webp&s=cceaaba5fb080bb17ab7246300b758ca2f4ee69f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=640&crop=smart&auto=webp&s=5f822a6c75b113bb5f07dc8583dea6f31081a289', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=960&crop=smart&auto=webp&s=6b05762f95ab77993964c21bbf742cefea946a4a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=1080&crop=smart&auto=webp&s=2776a6b071584df14519fcccfd6b9928c4563836', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?auto=webp&s=e2e4201833df0b3f7020729a7c5eb5279c2b26e9', 'width': 1200}, 'variants': {}}]} | |
Autonomous, self-improving, domain specific AI agent | 0 | 2025-09-09T19:18:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ncs8xo/autonomous_selfimproving_domain_specific_ai_agent/ | metallicamax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncs8xo | false | null | t3_1ncs8xo | /r/LocalLLaMA/comments/1ncs8xo/autonomous_selfimproving_domain_specific_ai_agent/ | false | false | 0 | null | ||
Kimi K2 0905 and repetition | 1 | Are there any suggested settings to avoid getting the same output repeated with this model? Just to be clear - I am testing it with NanoGPT because I don't have the hardware to run this model locally. I have the problem with repetition when I use the model in Claude Code. | 2025-09-09T19:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ncs89p/kimi_k2_0905_and_repetition/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncs89p | false | null | t3_1ncs89p | /r/LocalLLaMA/comments/1ncs89p/kimi_k2_0905_and_repetition/ | false | false | self | 1 | null |
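A common first lever for repetition like this is sampler settings; a minimal sketch of passing repetition-related parameters through an OpenAI-compatible endpoint (the base URL, key, and model id below are placeholder assumptions, and parameter support varies by provider):

    from openai import OpenAI

    # Placeholder endpoint/key/model; most OpenAI-compatible providers accept these fields.
    client = OpenAI(base_url="https://provider.example/v1", api_key="sk-...")

    resp = client.chat.completions.create(
        model="kimi-k2-0905",
        messages=[{"role": "user", "content": "Refactor this function..."}],
        temperature=0.6,
        presence_penalty=0.5,   # nudges the model away from tokens it already used
        frequency_penalty=0.5,  # scales the penalty with how often a token has appeared
    )
    print(resp.choices[0].message.content)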
You're a builder who needs to use a forecasting model in your business (with zero ML experience) | 0 | Would you pay to gain the ability to create forecasting models?
**Examples:**
1. You’re building an airplane ticket booking platform, and want to offer users forecasts of when ticket prices will go up or down in the next 30 days.
2. You have a retail shop, and want to build an internal dashboard that forecasts how many items of each type you need to buy next week, given how many you sold in the last month
3. You’re building a real estate investment platform, and want it to notify users how real estate prices will change over the next 5 years (given sq. m., distance, location, etc.).
**Problem:**
You want to add forecasting to the app or platform you’re building
However, you don’t know enough to build and deploy a production-grade ML model
What if you could have a platform that could autonomously train and deploy ML models for you?
**Solution:**
We propose a vibe coding platform (like [Lovable](https://lovable.dev/))
To empower everyday users to train and deploy machine learning models to production without needing to know anything about the field
**Should we build it?**
Let us know: [https://forms.gle/M2rudJb8RgKScaEN7](https://forms.gle/M2rudJb8RgKScaEN7)
**Who are we?**
We are the authors of one of the biggest free open-source, community-driven agentic protocols (Github: [UTCP](github.com/universal-tool-calling-protocol/)) looking for a way to financially support the protocol by using our knowledge to democratize access to ML :) | 2025-09-09T19:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ncrpyr/youre_a_builder_who_needs_to_use_a_forecasting/ | juanviera23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncrpyr | false | null | t3_1ncrpyr | /r/LocalLLaMA/comments/1ncrpyr/youre_a_builder_who_needs_to_use_a_forecasting/ | false | false | self | 0 | null |
Intel Arc Pro B-Series GPUs and Xeon 6 Shine in MLPerf Inference v5.1 | 0 | 2025-09-09T18:55:42 | https://newsroom.intel.com/artificial-intelligence/intel-arc-pro-b-series-gpus-and-xeon-6-shine-in-mlperf-inference-v5-1 | reps_up | newsroom.intel.com | 1970-01-01T00:00:00 | 0 | {} | 1ncrlu5 | false | null | t3_1ncrlu5 | /r/LocalLLaMA/comments/1ncrlu5/intel_arc_pro_bseries_gpus_and_xeon_6_shine_in/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'l-cLEo5M7bRGOHHt-gvcMIKm1zFpTfeLtUkZ37rYPDE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/l-cLEo5M7bRGOHHt-gvcMIKm1zFpTfeLtUkZ37rYPDE.png?width=108&crop=smart&auto=webp&s=b8a00f6f49015a73128ee82d6684b25d3853e7dc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/l-cLEo5M7bRGOHHt-gvcMIKm1zFpTfeLtUkZ37rYPDE.png?width=216&crop=smart&auto=webp&s=84179efcd6cb097da7e1e85f171debad834df95b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/l-cLEo5M7bRGOHHt-gvcMIKm1zFpTfeLtUkZ37rYPDE.png?width=320&crop=smart&auto=webp&s=699444b9e9fc041651d4c867e5081a110ad31735', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/l-cLEo5M7bRGOHHt-gvcMIKm1zFpTfeLtUkZ37rYPDE.png?width=640&crop=smart&auto=webp&s=ddb967198b9d75e5053c1d04dacb0a9db48800b7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/l-cLEo5M7bRGOHHt-gvcMIKm1zFpTfeLtUkZ37rYPDE.png?width=960&crop=smart&auto=webp&s=e61c98b166de892190238622fcf150e3ed884f3c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/l-cLEo5M7bRGOHHt-gvcMIKm1zFpTfeLtUkZ37rYPDE.png?width=1080&crop=smart&auto=webp&s=0784ca105c57dfbae79671ff8ba4fef4fbb03830', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/l-cLEo5M7bRGOHHt-gvcMIKm1zFpTfeLtUkZ37rYPDE.png?auto=webp&s=f8b1b387bb5eec7dfba2adabf3846a13b5c851f2', 'width': 1920}, 'variants': {}}]} | |
science is the vibe-killer | 0 | Some critique of the vision and state of the technology
[https://remyxai.substack.com/p/vibes-dont-scale](https://remyxai.substack.com/p/vibes-dont-scale) | 2025-09-09T18:49:59 | remyxai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ncrg4n | false | null | t3_1ncrg4n | /r/LocalLLaMA/comments/1ncrg4n/science_is_the_vibekiller/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'tzwa5o2oq6of1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/tzwa5o2oq6of1.png?width=108&crop=smart&auto=webp&s=50939dbeec477148af0bc93b76d21a5df1349bb2', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/tzwa5o2oq6of1.png?width=216&crop=smart&auto=webp&s=7e53e4f33ca92914762e12ea3b1dcbf91190f613', 'width': 216}, {'height': 260, 'url': 'https://preview.redd.it/tzwa5o2oq6of1.png?width=320&crop=smart&auto=webp&s=e84cc081d1e22348d7342b271c331e5fa397d004', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/tzwa5o2oq6of1.png?auto=webp&s=fce148f7979cd6798345f9227294d09622e49e69', 'width': 614}, 'variants': {}}]} | |
Apple on iPhone 17 Pro: "even more RAM for running local language models" | 0 | I caught this during the Apple iPhone 17 Pro announcement: "even more RAM for running local language models". I've never seen them announce RAM as a spec on their website.
Personally, I think it will be a modest bump or else they would have talked more about it. Given Apple's trajectory of running AI on device, I can see why they are adding heat management and more RAM to be able to better handle AI models coming soon.
I'm curious what you all think - How much RAM might be coming? Will it be modest or a big bump? | 2025-09-09T18:30:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ncqxbi/apple_on_iphone_17_pro_even_more_ram_for_running/ | VFToken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncqxbi | false | null | t3_1ncqxbi | /r/LocalLLaMA/comments/1ncqxbi/apple_on_iphone_17_pro_even_more_ram_for_running/ | false | false | self | 0 | null |
Web-Frontend with WebSearch capability & LDAP | 1 | Hello.
I am currently running three models via llama.cpp behind a proxy, with two WebUIs: Perplexica & Open-WebUI.
The former frontend doesn't support user authentication (esp. via LDAP) and the latter is not very robust in terms of delayed responses.
Both frontends will stop showing LLM replies after the first websearch.
I can verify that both frontends receive the replies from llama.cpp, though.
So now I am asking:
What's a better deployment?
Why is my Open-WebUI so buggy?
Any deployment tricks?
About alternatives: +1 if it has native TLS termination and doesn't require a full proxy server in addition to its deployment.
Of course I researched the internet, but maybe someone is hacking something together and wants to shed some light on the work.
Closest alternative appears to be LibreChat. | 2025-09-09T17:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ncq1zb/webfrontend_with_websearch_capability_ldap/ | lnxdude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncq1zb | false | null | t3_1ncq1zb | /r/LocalLLaMA/comments/1ncq1zb/webfrontend_with_websearch_capability_ldap/ | false | false | self | 1 | null |
Tensor Core Equivalent in the iPhone 17's A19 Pro | 41 | When this comes to Macs, likely later this year or at the beginning of next year, it might patch up the problem of the lack of compute on Macs for running LLMs, especially the apparently low prompt preprocessing speeds. | 2025-09-09T17:50:50 | Mysterious_Finish543 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ncptut | false | null | t3_1ncptut | /r/LocalLLaMA/comments/1ncptut/tensor_core_equivalent_in_the_iphone_17s_a19_pro/ | false | false | default | 41 | {'enabled': True, 'images': [{'id': 'erdhiit5g6of1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/erdhiit5g6of1.png?width=108&crop=smart&auto=webp&s=5f93b681eea973ab9cc1610742b65d62c80cc41c', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/erdhiit5g6of1.png?width=216&crop=smart&auto=webp&s=9d556018c4728be0c09ca084573f691dfe451808', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/erdhiit5g6of1.png?width=320&crop=smart&auto=webp&s=363896bedb33fca8c488e288dddbcef1392540a7', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/erdhiit5g6of1.png?width=640&crop=smart&auto=webp&s=5c7e880fb8ec110856ce09bcf45ddec4c93a8ec2', 'width': 640}, {'height': 448, 'url': 'https://preview.redd.it/erdhiit5g6of1.png?width=960&crop=smart&auto=webp&s=e7865b50b43cba9e22dc77f4cb2692c692625344', 'width': 960}, {'height': 504, 'url': 'https://preview.redd.it/erdhiit5g6of1.png?width=1080&crop=smart&auto=webp&s=63404a482ce2178dd62ce26a590726596eaf727c', 'width': 1080}], 'source': {'height': 672, 'url': 'https://preview.redd.it/erdhiit5g6of1.png?auto=webp&s=aa0159d66664772e743a6b5e12ee099a397628a3', 'width': 1440}, 'variants': {}}]}
Apple adds matmul acceleration to A19 Pro GPU | 203 | This guarantees that it's coming to M5. | 2025-09-09T17:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ncprrq/apple_adds_matmul_acceleration_to_a19_pro_gpu/ | auradragon1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncprrq | false | null | t3_1ncprrq | /r/LocalLLaMA/comments/1ncprrq/apple_adds_matmul_acceleration_to_a19_pro_gpu/ | false | false | self | 203 | null |
Qwen 8B locally on iPhone - 10 tokens/s | 13 | We have pushed what is possible on mobile devices!
Vector Space is a project and app that explores what is possible for AI on iOS devices. We believe they are very capable devices for AI, and we wish to help fill the gap that some companies are leaving open.
I am pleased to announce that we have fit Qwen 8B to run on iPhone. It runs at 10 tokens/s on iPhone 16, on the ANE too - so it doesn't drain your battery. Fitting a model this big into the memory-limited environment of an iPhone required serious optimization and compression for the hardware.
Also, thanks to your feedback, you can now not only run, but SERVE all models ranging from Qwen 0.6B to 8B via an OpenAI-compatible endpoint. You can point your app directly at this localhost endpoint to start saving on API costs now. Simply turn on the Web Server in settings after compiling a model.
You can try these features out today on our TestFlight beta app. You can download and run local models - including the 8B - without a line of code. If you encounter an issue, please report it - it will be much appreciated.
https://testflight.apple.com/join/HXyt2bjU
Please consider completing this survey to help determine what the next step for Vector Space should be:
https://www.reddit.com/r/VectorSpaceApp/s/9ZZGS8YeeI
Fine prints:
-8B is tested on iPhone 16 only. iPhone 14 supports up to 4B.
-Please delete and redownload if you are an existing tester.
| 2025-09-09T17:43:57 | https://v.redd.it/32o55chxe6of1 | Glad-Speaker3006 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ncpn3v | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/32o55chxe6of1/DASHPlaylist.mpd?a=1760031850%2CNTkyOWVkYjkzMGNhNzc3MjAwYTgwNGI1MDIzNWM3YjdlNTVhOWU2MDEwMzE4MzQ3NjljNDRiOWE2YTVkMmJkNA%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/32o55chxe6of1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1016, 'hls_url': 'https://v.redd.it/32o55chxe6of1/HLSPlaylist.m3u8?a=1760031850%2CZjQyMTY4YjMyZWYwNmEwMDQyM2FjYzdmYjcxZWYyNTczZmM5ZTcwN2I5YWNkYmVhYWUwMWVjNzYzODZkMmIzZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/32o55chxe6of1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1ncpn3v | /r/LocalLLaMA/comments/1ncpn3v/qwen_8b_on_locally_on_iphone_10_tokenss/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'eXJjdTFsY3hlNm9mMTOl0IOQAgqwlCRxZAKRf2LxKM72dLj3tO8fplxlobs7', 'resolutions': [{'height': 152, 'url': 'https://external-preview.redd.it/eXJjdTFsY3hlNm9mMTOl0IOQAgqwlCRxZAKRf2LxKM72dLj3tO8fplxlobs7.png?width=108&crop=smart&format=pjpg&auto=webp&s=d2bc793e7d276c7ad9e3e50286d62861884b3198', 'width': 108}, {'height': 304, 'url': 'https://external-preview.redd.it/eXJjdTFsY3hlNm9mMTOl0IOQAgqwlCRxZAKRf2LxKM72dLj3tO8fplxlobs7.png?width=216&crop=smart&format=pjpg&auto=webp&s=dc6ed68a84b943c03283e13e1c8ab1217cd57ea8', 'width': 216}, {'height': 451, 'url': 'https://external-preview.redd.it/eXJjdTFsY3hlNm9mMTOl0IOQAgqwlCRxZAKRf2LxKM72dLj3tO8fplxlobs7.png?width=320&crop=smart&format=pjpg&auto=webp&s=6e37d92e529f0e1b3bf6086dd279f56cbad16f6d', 'width': 320}, {'height': 902, 'url': 'https://external-preview.redd.it/eXJjdTFsY3hlNm9mMTOl0IOQAgqwlCRxZAKRf2LxKM72dLj3tO8fplxlobs7.png?width=640&crop=smart&format=pjpg&auto=webp&s=95895cf388574203ac97b8e8e4659560a9e75277', 'width': 640}], 'source': {'height': 1244, 'url': 'https://external-preview.redd.it/eXJjdTFsY3hlNm9mMTOl0IOQAgqwlCRxZAKRf2LxKM72dLj3tO8fplxlobs7.png?format=pjpg&auto=webp&s=ec38a1df145aec83eda400b12d44cd9f5410e97a', 'width': 882}, 'variants': {}}]} | |
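Pointing an app at a local OpenAI-compatible server like the one above generally looks like the sketch below; the port and model id here are assumptions, so check the app's Web Server settings for the real values:

    from openai import OpenAI

    # Port and model id are hypothetical; Vector Space's Web Server settings show the actual ones.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="qwen3-8b",
        messages=[{"role": "user", "content": "Hello from my on-device LLM server!"}],
    )
    print(resp.choices[0].message.content)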
Combined REFRAG × SBD: Estimated Decode tok/s | 2 | REFRAG: https://arxiv.org/abs/2509.01092v1
SBD: https://arxiv.org/abs/2509.04185
tl;dr: they are complementary!
REFRAG (×3) and SBD — conservatively (×3.5) and aggressively (×5.0). So the combined multipliers are:
Conservative combo: 3 × 3.5 = 10.5×
Aggressive combo: 3 × 5.0 = 15×
| Model (quant) | Baseline 3090 | Baseline 5090 | Baseline 8×H100 | 3090 (×10.5) | 5090 (×10.5) | 8×H100 (×10.5) | 3090 (×15) | 5090 (×15) | 8×H100 (×15) |
| -------------------- | ------------: | ------------: | --------------: | -----------: | -----------: | -------------: | ---------: | ----------: | -----------: |
| **Qwen3-235B Q2.5** | ~5 tok/s | ~10 tok/s | ~50 tok/s | ~52.5 tok/s | ~105 tok/s | ~525 tok/s | ~75 tok/s | ~150 tok/s | ~750 tok/s |
| **Qwen3-235B Q3** | ~3.5 | ~8 | ~40 | ~36.8 | ~84 | ~420 | ~52.5 | ~120 | ~600 |
| **Qwen3-235B Q4** | ~2.5 | ~6 | ~35 | ~26.3 | ~63 | ~367.5 | ~37.5 | ~90 | ~525 |
| **Qwen3-235B Q8** | ~1.0 | ~3 | ~25 | ~10.5 | ~31.5 | ~262.5 | ~15 | ~45 | ~375 |
| **GPT-OSS-120B Q4** | ~1.6 | ~12 | ~60 | ~16.8 | ~126 | ~630 | ~24 | ~180 | ~900 |
| **GPT-OSS-120B Q8** | ~0.9 | ~6 | ~40 | ~9.45 | ~63 | ~420 | ~13.5 | ~90 | ~600 |
| **Llama-3.1-70B Q4** | ~2.0 | ~6 | ~100 | ~21 | ~63 | ~1,050 | ~30 | ~90 | ~1,500 |
| **Qwen-2.5-72B Q4** | ~2.5 | ~8 | ~90 | ~26.3 | ~84 | ~945 | ~37.5 | ~120 | ~1,350 |
| **Mixtral 8×22B Q4** | ~6.3 | ~15 | ~250 | ~66.2 | ~157.5 | ~2,625 | ~94.5 | ~225 | ~3,750 |
It's coming! | 2025-09-09T17:42:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ncplx6/combined_refrag_sbd_estimated_decode_toks/ | Ok-Watercress3423 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncplx6 | false | null | t3_1ncplx6 | /r/LocalLLaMA/comments/1ncplx6/combined_refrag_sbd_estimated_decode_toks/ | false | false | self | 2 | null |
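The table cells are just baseline × REFRAG × SBD; a minimal sketch of the arithmetic, using numbers taken from the table above:

    # Combined speedup estimate: the two techniques are assumed to compose multiplicatively.
    REFRAG = 3.0
    SBD_CONSERVATIVE = 3.5
    SBD_AGGRESSIVE = 5.0

    def combined_tok_s(baseline_tok_s: float, sbd: float) -> float:
        # Estimated decode throughput after stacking REFRAG and SBD.
        return baseline_tok_s * REFRAG * sbd

    # Qwen3-235B Q2.5 on a 3090, ~5 tok/s baseline:
    print(combined_tok_s(5.0, SBD_CONSERVATIVE))  # 52.5 -> the x10.5 column
    print(combined_tok_s(5.0, SBD_AGGRESSIVE))    # 75.0 -> the x15 column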
Internet Apocalypse - What would you fill in your 4TB storage? | 2 | For OS & other programs, you have separate 1TB Storage. 2nd Drive is 5TB Storage for archiving stuff. And System has 1TB RAM & 256GB VRAM(Yeah, before Apocalypse, people do crazy stuff like grabbing important stuffs so quick .... I would grab 15 MI50 32GB cards in my case :D ). Now you may fill **1TB with Non-LLM stuff**. Other **4TB is strictly for LLM stuff**.
I would fill the most of 1TB with Kiwix Zim files of Wikipedia, Gutenberg, Stack exchange, TED, Survivor Library, Khan academy, etc., stuff. Note : I can run these even without VRAM (I have some Kiwix collection on my old laptop which doesn't have GPU & just with 16GB DDR3 RAM).
For remaining **4TB**, I would fill below models.
* Kimi K2 Q4 (600GB)
* DeepSeekV3.1 Q4 (400GB)
* Qwen3-Coder-480B Q4 (300GB)
* GLM-4.5 Q4 (200GB)
* Qwen3-235B Q6 (200GB)
* Llama-4-Maverick Q5 (300GB)
* Llama-4-Scout Q8 (130GB)
* Hermes-4-405B Q5 (300GB)
* gpt-oss-120b Q8 (65GB)
* Mistral-Large-Instruct Q8 (130GB)
* ERNIE-4.5-300B Q6 (250GB)
* Hunyuan-A13B Q8 (100GB)
* Llama-3.3-70B Q8 (80GB)
That already fills 2.5TB. Apart from these, I would add all other **MoE** models under 100B at Q8 (e.g., Qwen3-30B, Qwen3-Coder-30B, SmallThinker-21B, ERNIE-4.5-21B, Ling-lite-1.5, Moonlight-16B, ...).
Also **dense** models under 100B at Q8 (e.g., Qwen3-32B, Gemma-3-27B, MedGemma-3-27B, Falcon-H1-34B, Seed-OSS-36B, ...).
I think that leaves 500GB - 1TB still to fill. Right now I want models for important areas like **image, audio, video, vision, etc.** I haven't tried any of these because of my tiny 8GB of VRAM (at the start of next year I'll have my rig ready with a good config), so I have no idea which models are good/best/worthy/nice for them. I remember people here mentioning models like Flux, Wan, Stable Diffusion, Qwen-Image, Chatterbox, Kokoro, VibeVoice, etc. from time to time.
So what about you? What would you fill yours with? Did I miss any model types or models? Please let me know and I'll include them too. Inspired by [this post](https://www.reddit.com/r/LocalLLaMA/comments/1nbgosx/apocalyptic_scenario_if_you_could_download_only/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button), which limits you to only one model; I wanted to see different and broader answers here, since there are no model limitations, only storage. Please go ahead, let's prepare for the internet apocalypse.
| 2025-09-09T17:42:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ncplgy/internet_apocalypse_what_would_you_fill_in_your/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncplgy | false | null | t3_1ncplgy | /r/LocalLLaMA/comments/1ncplgy/internet_apocalypse_what_would_you_fill_in_your/ | false | false | self | 2 | null |
Using vLLM for local use with Pipeline Parallelism and VLLM_PP_LAYER_PARTITION | 18 | Most of us default to llama.cpp or exllamav2/v3+tabbyapi because you can mix and match GPUs with different VRAM. You can do something similar with vLLM and keep its nice perks (new model support, tool use) by switching from tensor parallelism to pipeline parallelism and manually partitioning layers. It also has much better support for parallel request, even using PP instead of TP in my testing, which llama.cpp and exllamav3 really lack proper support as they are more focuses on single requests for local use.
This is a guide on how I do it.
vLLM will evenly split layers across PP stages by default. That’s not ideal because stage 0 also holds the embedding and the last stage holds the LM head, so those two stages need *fewer* transformer blocks. You can override the split with:
VLLM_PP_LAYER_PARTITION="L0,L1,...,L{pp-1}"
A comma-separated list of per-stage layer counts that must sum to the model’s total hidden layers. This variable is not really documented: [https://github.com/vllm-project/vllm/issues/6824#issuecomment-2276311361](https://github.com/vllm-project/vllm/issues/6824#issuecomment-2276311361)
# Steps:
1. Find your model’s total layers. Open the model folder and inspect `config.json`. You’re looking for `num_hidden_layers`.
2. Decide PP size. Use the number of GPUs you want to shard across. In vLLM serve, that’s `--pipeline-parallel-size N` (alias `-pp N`).
3. Compute a partition. Pick a list whose sum equals `num_hidden_layers`. Give fewer layers to stage 0 and the last stage to offset embeddings/LM head (e.g., on 4 GPUs for a 46-layer model: `12,12,11,11` or even `13,13,10,10` if stages 0/3 are on bigger cards).
4. Order your devices. Export `CUDA_VISIBLE_DEVICES` so stages map to the GPUs you intend (stage 0 is the first ID, stage 1 the next, etc.). Use `CUDA_DEVICE_ORDER=PCI_BUS_ID` for stable numbering.
5. Launch vLLM. Example (GLM-4.5-Air AWQ, 4 stages, uneven split; GPUs ordered big→big→small→small). In my case, CUDA0 and CUDA4 are 5090s, and CUDA1 and CUDA3 are 3090s:
CUDA_DEVICE_ORDER=PCI_BUS_ID \
CUDA_VISIBLE_DEVICES=0,4,1,3 \
VLLM_PP_LAYER_PARTITION="13,13,10,10" \
vllm serve /mnt/llms/models/cpatonn/GLM-4.5-Air-AWQ-4bit/ \
--served-model-name GLM-4.5-Air \
--pipeline-parallel-size 4 \
--tensor-parallel-size 1 \
--max-model-len 32768 \
--host 0.0.0.0 --port 8000 \
--tool-call-parser glm45 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
--dtype float16
# Note for FP8 on Ampere.
* vLLM supports FP8 in two modes:
* W8A8 with native FP8 GPUs like Hopper or Blackwell.
* W8A16 (weight-only FP8) on Ampere via the Marlin kernel. That means you *can* load FP8 checkpoints on A100/3090-class hardware as weight-only FP8.
* I tested using `VLLM_TEST_FORCE_FP8_MARLIN`, but it doesn't work when mixing Ampere and Blackwell in my testing. So currently, using FP8 models with Ampere+Blackwell doesn't work as far as I know.
If you don’t specifically need FP8, stick to FP16 or AWQ for simplicity; AWQ also supports 8-bit quantization apart from the more common 4-bit.
For reasons, I now have 4x3090, 2x5090 and 1x RTX Pro 6000, so I've been experimenting a lot with a mixture of VRAM sizes and architectures. The `-pp` and `VLLM_PP_LAYER_PARTITION` options are not really well documented, so I wanted to share how to use them.
So if you don't need 2/3- or 5/6-bit quants and want to experiment with vLLM on a mixture of GPUs, I think this is a good alternative.
PS: I still need to test SGLang, as it also has `SGLANG_PP_LAYER_PARTITION`, but I think it has worse support for quant types like AWQ and GPTQ, so I haven't really dug into SGLang much yet outside the "proper" use of 1/2/4 GPUs with TP.
Note: I did use an LLM to structure the post. | 2025-09-09T17:15:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ncovjy/using_vllm_for_local_use_with_pipeline/ | bullerwins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncovjy | false | null | t3_1ncovjy | /r/LocalLLaMA/comments/1ncovjy/using_vllm_for_local_use_with_pipeline/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'Zc8rbIYYFF4q61N85B9AfyAwBGHIqbU3UgTtGWJX9Rk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Zc8rbIYYFF4q61N85B9AfyAwBGHIqbU3UgTtGWJX9Rk.png?width=108&crop=smart&auto=webp&s=4149a73436f8055cd9b22619186924902c278543', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Zc8rbIYYFF4q61N85B9AfyAwBGHIqbU3UgTtGWJX9Rk.png?width=216&crop=smart&auto=webp&s=e75402b8a77b0273d2948c6743fca5d3de94e082', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Zc8rbIYYFF4q61N85B9AfyAwBGHIqbU3UgTtGWJX9Rk.png?width=320&crop=smart&auto=webp&s=e5800419fdd98c726f363d208b6e7cd91300c816', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Zc8rbIYYFF4q61N85B9AfyAwBGHIqbU3UgTtGWJX9Rk.png?width=640&crop=smart&auto=webp&s=6937acfad35a525d84306bfc96697386512dfe77', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Zc8rbIYYFF4q61N85B9AfyAwBGHIqbU3UgTtGWJX9Rk.png?width=960&crop=smart&auto=webp&s=4c699ee795ba1b6a71b8e7d2a5a5eaeb8949da72', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Zc8rbIYYFF4q61N85B9AfyAwBGHIqbU3UgTtGWJX9Rk.png?width=1080&crop=smart&auto=webp&s=9a164209cad07f363a10032b6b3cd897468cf4fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Zc8rbIYYFF4q61N85B9AfyAwBGHIqbU3UgTtGWJX9Rk.png?auto=webp&s=1dabe27968a12489540abc4d5f84491cc80ad578', 'width': 1200}, 'variants': {}}]} |
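To make step 3 concrete, here is a small helper (my own sketch, not part of vLLM) that splits `num_hidden_layers` across PP stages while shaving layers off the first and last stages to compensate for the embedding and LM head; for uneven GPUs you would still adjust the counts by hand:

    def pp_partition(num_hidden_layers: int, pp_size: int, edge_discount: int = 1) -> str:
        # Return a VLLM_PP_LAYER_PARTITION string summing to num_hidden_layers.
        base, rem = divmod(num_hidden_layers, pp_size)
        counts = [base + (1 if i < rem else 0) for i in range(pp_size)]
        if pp_size > 2:
            # Move edge_discount layers off stage 0 and the last stage...
            counts[0] -= edge_discount
            counts[-1] -= edge_discount
            # ...and hand them to the middle stages.
            for i in range(2 * edge_discount):
                counts[1 + i % (pp_size - 2)] += 1
        assert sum(counts) == num_hidden_layers
        return ",".join(map(str, counts))

    print(pp_partition(46, 4))  # -> "11,13,12,10"; hand-tune further if stages have uneven VRAM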
Gigabyte’s New CXL Expansion Card Turns PCIe Slot into 512 GB of DDR5 RAM | 34 | Gigabyte's AI Top CXL R5X4 expansion card lets you plug up to 512 GB of DDR5 ECC RDIMM RAM into a PCIe 5.0 x16 slot, using Compute Express Link (CXL) to talk directly with the CPU.
While this technology is already old news for servers, it's now available for two workstation motherboards: TRX50 AI TOP (AMD) and W790 AI TOP (Intel).
https://www.computerbase.de/news/arbeitsspeicher/cxl-expansion-card-von-gigabyte-512-gb-ram-aufstocken-im-workstation-mainboard.94238/ | 2025-09-09T17:14:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ncou07/gigabytes_new_cxl_expansion_card_turns_pcie_slot/ | Normal-Ad-7114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncou07 | false | null | t3_1ncou07 | /r/LocalLLaMA/comments/1ncou07/gigabytes_new_cxl_expansion_card_turns_pcie_slot/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'VwmP5wMSgThbeJocnJEER0YuMHkGJpPYaAdPqeOEqyU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VwmP5wMSgThbeJocnJEER0YuMHkGJpPYaAdPqeOEqyU.jpeg?width=108&crop=smart&auto=webp&s=dfce854bdc24238e2ac676b6c60007ff81714c52', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/VwmP5wMSgThbeJocnJEER0YuMHkGJpPYaAdPqeOEqyU.jpeg?width=216&crop=smart&auto=webp&s=230c1b65570f493c00437469ca74627a42911d85', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/VwmP5wMSgThbeJocnJEER0YuMHkGJpPYaAdPqeOEqyU.jpeg?width=320&crop=smart&auto=webp&s=e986fbe2138d8526cf05663905aa5bb66b5c78b3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/VwmP5wMSgThbeJocnJEER0YuMHkGJpPYaAdPqeOEqyU.jpeg?width=640&crop=smart&auto=webp&s=34aa7aa5d984fd395852cbd60e101b13e24eac86', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/VwmP5wMSgThbeJocnJEER0YuMHkGJpPYaAdPqeOEqyU.jpeg?width=960&crop=smart&auto=webp&s=469b0859e382b6fd62034a5c1da17662a0fd547d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/VwmP5wMSgThbeJocnJEER0YuMHkGJpPYaAdPqeOEqyU.jpeg?width=1080&crop=smart&auto=webp&s=faca03ec9aa61b1c851108157bdc379f24993e51', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/VwmP5wMSgThbeJocnJEER0YuMHkGJpPYaAdPqeOEqyU.jpeg?auto=webp&s=659a8aadbb35874ea670c182d3cfaa48ca7d18ca', 'width': 1280}, 'variants': {}}]} |
64 GB AGX ORIN DEVELOPER VS. 16 GB Zed Box Orin NX | 2 | WHY IS MY ZED BOX RUNNING 3 GMSL2 CAMERAS FASTER THAN MY 64 GB AGX ORIN????
Through multiple test runs and a large dataset, I have found that my 16 GB ZED Box consistently runs at higher frame rates than my 64 GB AGX. I have no idea why this is; I am using a ZED Box deserializer for the AGX to read my GMSL2 cameras, and it's still slower.
If anyone has any ideas or thoughts, please lemme know.
Switching to Qwen3-480B from Claude has resulted in fewer errors when generating 3D model code | 66 | In my [previous](https://www.reddit.com/r/LocalLLaMA/comments/1n21tb6/comment/nb4h42v/) post I highlighted a Blender Python agent I'm working on. I've been experimenting with various models, and I found larger models like Claude and GPT-5 - even with reasoning - took too many iterations to produce working, valid code.
So far Qwen's [largest coder model](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) is my favourite.
I threw up the agent with a simple UI if you want to play with it yourself: [https://blender-ai.fly.dev/](https://blender-ai.fly.dev/)
You can also download the models it produces. An agent made with fully open source tools (Blender, MCP servers, Qwen) is blowing me away.
Let me know what you think! Happy to get feedback on this and make it even better. | 2025-09-09T16:33:35 | https://www.reddit.com/gallery/1ncnqwl | spacespacespapce | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ncnqwl | false | null | t3_1ncnqwl | /r/LocalLLaMA/comments/1ncnqwl/switching_to_qwen3480b_from_claude_as_resulted_in/ | false | false | 66 | null | |
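For flavor: under the hood, an agent like this emits `bpy` calls that run inside Blender. A toy example of that kind of code (written by hand here, not produced by the agent):

    import bpy

    # Clear the default scene, then build a simple "house": a cube body with a cone roof.
    bpy.ops.object.select_all(action="SELECT")
    bpy.ops.object.delete()

    bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 1))
    bpy.ops.mesh.primitive_cone_add(radius1=1.6, depth=1.2, location=(0, 0, 2.6))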
What are some cool apps that take advantage of your local LLM server by integrating it? | 4 | I’m not talking about server apps like Ollama, LM Studio, etc.
Rather, cool apps that provide a service on your OS by using that local server of yours.
My new hobby: test new release model with 'create wordle-like cli game in python' | 2 | Whenever new model released, I think everyone would have same set of prompt to testing it out.
My recently favorite is zero-shoting model with 'create wordle-like cli game in python'.
Wordle is quite a balance project here, it quite hard for beginner programmer to write and design a control flow, and it also challeng veteran programmer on how you will design the flow and style of the game in the cli, I use this to quickly determine how a model perform on programming task.
Do you have any favorite prompt to test new model your way? | 2025-09-09T16:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ncnh59/my_new_hobby_test_new_release_model_with_create/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncnh59 | false | null | t3_1ncnh59 | /r/LocalLLaMA/comments/1ncnh59/my_new_hobby_test_new_release_model_with_create/ | false | false | self | 2 | null |
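For reference, a minimal sketch of the kind of program that prompt asks for (toy word list, naive duplicate-letter handling):

    import random

    WORDS = ["crane", "slate", "pride", "mount", "ghost"]  # toy word list

    def feedback(guess: str, answer: str) -> str:
        # G = right letter/right spot, Y = in the word elsewhere, _ = miss (naive on duplicates).
        return "".join(
            "G" if answer[i] == ch else "Y" if ch in answer else "_"
            for i, ch in enumerate(guess)
        )

    def main() -> None:
        answer = random.choice(WORDS)
        for attempt in range(1, 7):
            guess = input(f"Guess {attempt}/6: ").strip().lower()
            if len(guess) != 5:
                print("Need a 5-letter word.")
                continue
            print(feedback(guess, answer))
            if guess == answer:
                print("You win!")
                return
        print(f"Out of tries - the word was {answer!r}.")

    if __name__ == "__main__":
        main()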
Insights on performance degradation for Qwen3 30B3A? | 5 | Looking to use Qwen3-30B-A3B-Instruct-2507 with AWQ 4bit quant. Does anyone have insights in terms of performance degradation, specifically for long contexts? | 2025-09-09T16:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ncn4sa/insights_on_performance_degradation_for_qwen3/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncn4sa | false | null | t3_1ncn4sa | /r/LocalLLaMA/comments/1ncn4sa/insights_on_performance_degradation_for_qwen3/ | false | false | self | 5 | null |
ModernBERT just got multilingual - mmBERT by CLSP at The Johns Hopkins University | 36 | ModernBERT just got multilingual (mmBERT)
* Small (140M) and Base (307M) versions
* Trained on 3T+ tokens from 1800 languages (DCLM, FineWeb, Code ...)
* ModernBERT architecture, Gemma 2 tokenizer
* 8192 context window
[Model weights collection](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4) | 2025-09-09T15:43:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ncme6t/modernbert_just_got_multilingual_mmbert_by_clsp/ | curiousily_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncme6t | false | null | t3_1ncme6t | /r/LocalLLaMA/comments/1ncme6t/modernbert_just_got_multilingual_mmbert_by_clsp/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': '1VZ52KHfX4U42PN2pKYH3SHOOOp0Ic8J9U2ywesfJjE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1VZ52KHfX4U42PN2pKYH3SHOOOp0Ic8J9U2ywesfJjE.png?width=108&crop=smart&auto=webp&s=7924d8ff4e119f9c734c2f49174b5e92273cb7e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1VZ52KHfX4U42PN2pKYH3SHOOOp0Ic8J9U2ywesfJjE.png?width=216&crop=smart&auto=webp&s=17db264b070271a1879727123e2e8ec0077e38ae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1VZ52KHfX4U42PN2pKYH3SHOOOp0Ic8J9U2ywesfJjE.png?width=320&crop=smart&auto=webp&s=17c8f517bcc6dee1d3411c66a5edb6b65d5275d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1VZ52KHfX4U42PN2pKYH3SHOOOp0Ic8J9U2ywesfJjE.png?width=640&crop=smart&auto=webp&s=4dc29c56aa7c209f2248be147616bf150058d8e6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1VZ52KHfX4U42PN2pKYH3SHOOOp0Ic8J9U2ywesfJjE.png?width=960&crop=smart&auto=webp&s=e8542ffeb9c4509a64c1b6f1e1625be62bd8bf42', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1VZ52KHfX4U42PN2pKYH3SHOOOp0Ic8J9U2ywesfJjE.png?width=1080&crop=smart&auto=webp&s=7e0c437ceaa3806e883468bb59c14d040c4f90fb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1VZ52KHfX4U42PN2pKYH3SHOOOp0Ic8J9U2ywesfJjE.png?auto=webp&s=c9552ea7924bfbe4367e1f6428695698c0f469f1', 'width': 1200}, 'variants': {}}]} |
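Since these are encoder-only checkpoints, basic usage is the standard masked-LM flow; a minimal sketch, with the repo id assumed from the collection linked above:

    from transformers import pipeline

    # Repo id assumed from the jhu-clsp collection above.
    fill = pipeline("fill-mask", model="jhu-clsp/mmBERT-base")

    text = f"Paris is the {fill.tokenizer.mask_token} of France."
    for pred in fill(text)[:3]:
        print(pred["token_str"], round(pred["score"], 3))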
My Experience with IndexTTS2 Deployment on Mac M4: Smooth Setup, Massive Memory Usage | 10 | The IndexTTS repository on GitHub has been updated, providing a complete deployment process for IndexTTS2: [https://github.com/index-tts/index-tts](https://github.com/index-tts/index-tts)
You can check the demo samples here: [https://index-tts.github.io/index-tts2.github.io/](https://index-tts.github.io/index-tts2.github.io/)
I successfully installed it on my MacBook without any issues and quickly ran `indextts/infer_v2.py`. (The dev team has a sense of humor; they went with a somewhat quirky voice style.)
However, on the Mac M4, both versions 1.5 and 2 consume significantly more memory than on Windows. For example, IndexTTS 1.5 uses around 3GB of VRAM on a Windows machine with a 3060 GPU, but on the Mac M4 it uses over 30GB of memory (unified memory).
Has anyone else experienced this? Would love to hear if any experts know the reason behind the difference! | 2025-09-09T15:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ncm7xp/my_experience_with_indextts2_deployment_on_mac_m4/ | Timely_Rain_9284 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncm7xp | false | null | t3_1ncm7xp | /r/LocalLLaMA/comments/1ncm7xp/my_experience_with_indextts2_deployment_on_mac_m4/ | false | false | self | 10 | null |
Having difficulties starting my llama.cpp API server; all I find is Ollama tutorials | 3 | As stated, I try to find tutorials, but Google keeps thinking I want to play with Ollama, autocorrects me, and gives me bad info. I want to pull the gpt-oss 20B and also a good 3-4B model to just read an ad and evaluate whether or not it concerns a certain subject, to eliminate it. | 2025-09-09T15:28:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ncm031/having_difficulties_starting_my_llamacpp_api/ | oodelay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ncm031 | false | null | t3_1ncm031 | /r/LocalLLaMA/comments/1ncm031/having_difficulties_starting_my_llamacpp_api/ | false | false | self | 3 | null |
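For anyone who lands here from the same search: llama.cpp ships a `llama-server` binary that exposes an OpenAI-compatible API, started with something like `llama-server -hf ggml-org/gpt-oss-20b-GGUF --port 8080` (the `-hf` repo is an assumption; any GGUF works, and `-m model.gguf` takes a local file). A sketch of the ad-screening call against it:

    from openai import OpenAI

    # llama-server speaks the OpenAI API; the model field is ignored when a single model is loaded.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    ad_text = "Barely used 2019 sedan, low mileage, great price!"
    resp = client.chat.completions.create(
        model="local",
        messages=[{
            "role": "user",
            "content": f"Does this ad concern cars? Answer yes or no.\n\n{ad_text}",
        }],
    )
    print(resp.choices[0].message.content)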
NLQuery: On-premise, high-performance Text-to-SQL engine for PostgreSQL with single REST API endpoint | 5 | MBASE NLQuery is a natural-language-to-SQL generator/executor engine using the MBASE SDK as an LLM SDK. This project doesn't use cloud-based LLMs.
It internally uses the Qwen2.5-7B-Instruct-NLQuery model to convert the provided natural language into SQL queries and executes them through database client SDKs (PostgreSQL only for now). However, execution can be disabled for security.
MBASE NLQuery doesn't require the user to supply table information for the database. The user only needs to supply parameters such as database address, schema name, port, username, password, etc.
It serves a single HTTP REST API endpoint called "nlquery", which can serve multiple users at the same time and requires only super-simple JSON-formatted data to call. | 2025-09-09T15:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nclld9/nlquery_onpremise_highperformance_texttosql/ | Emrehocam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nclld9 | false | null | t3_1nclld9 | /r/LocalLLaMA/comments/1nclld9/nlquery_onpremise_highperformance_texttosql/ | false | false | self | 5 | null |
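The exact request schema isn't given in the post, so the field names below are illustrative assumptions only; a hedged sketch of what a call to the `nlquery` endpoint might look like:

    import requests

    # Field names and port are assumptions; check the MBASE NLQuery docs for the real schema.
    payload = {
        "query": "List the ten most recent orders with customer names",
        "host": "127.0.0.1",
        "port": 5432,
        "schema": "public",
        "username": "postgres",
        "password": "secret",
    }

    resp = requests.post("http://localhost:8080/nlquery", json=payload, timeout=30)
    print(resp.json())  # expected: the generated SQL and, if execution is enabled, the result rows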
mmBERT: ModernBERT goes Multilingual | 51 | Looks like some of the ModernBERT authors trained a Multilingual variant! Also 2 models, but these are a bit smaller. They look really promising to be honest, although they do clearly need to be finetuned for downstream tasks like semantic search, clustering, classification, etc. before they're really viable. A bit like a base LLM instead of an instruct, they didn't provide a finetuned model.
I posted a plot with MTEB v2 Multilingual performance after equivalent finetuning VS inference speed in the comments. | 2025-09-09T14:58:03 | https://huggingface.co/blog/mmbert | -Cubie- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ncl7mx | false | null | t3_1ncl7mx | /r/LocalLLaMA/comments/1ncl7mx/mmbert_modernbert_goes_multilingual/ | false | false | default | 51 | {'enabled': False, 'images': [{'id': 'gboHy7lwIiGjTkUdwnm7iBTxH9k6Eb0rVhAuSbpxTno', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/gboHy7lwIiGjTkUdwnm7iBTxH9k6Eb0rVhAuSbpxTno.png?width=108&crop=smart&auto=webp&s=fd2319b16053ccf534f119ac51c988a8afb98ba3', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/gboHy7lwIiGjTkUdwnm7iBTxH9k6Eb0rVhAuSbpxTno.png?width=216&crop=smart&auto=webp&s=d39ce69673b02f1c9f4c8b9c85475ee76f479277', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/gboHy7lwIiGjTkUdwnm7iBTxH9k6Eb0rVhAuSbpxTno.png?width=320&crop=smart&auto=webp&s=672c00a8e9e8540c700dc9a9ebbeeeddeedadf31', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/gboHy7lwIiGjTkUdwnm7iBTxH9k6Eb0rVhAuSbpxTno.png?width=640&crop=smart&auto=webp&s=cff66f2eb934ac5bea40d4094905bf0f657b72ab', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/gboHy7lwIiGjTkUdwnm7iBTxH9k6Eb0rVhAuSbpxTno.png?width=960&crop=smart&auto=webp&s=00811cc3aa61db3db571612b1b6e9f74e0128c4f', 'width': 960}], 'source': {'height': 548, 'url': 'https://external-preview.redd.it/gboHy7lwIiGjTkUdwnm7iBTxH9k6Eb0rVhAuSbpxTno.png?auto=webp&s=e922be9b012bce5cb323413212f7d7c1ab6ee552', 'width': 1048}, 'variants': {}}]} |
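On the "needs finetuning" point: one quick route is a contrastive finetune with sentence-transformers on text pairs; a minimal sketch (repo id assumed, pairs are placeholders):

    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, InputExample, losses

    # Loading a raw encoder auto-attaches mean pooling; repo id assumed from the blog post.
    model = SentenceTransformer("jhu-clsp/mmBERT-base")

    pairs = [
        InputExample(texts=["how do I reset my password", "password reset instructions"]),
        InputExample(texts=["¿dónde está la estación?", "directions to the train station"]),
    ]
    loader = DataLoader(pairs, batch_size=2, shuffle=True)
    loss = losses.MultipleNegativesRankingLoss(model)

    model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
    model.save("mmbert-base-semantic-search")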
🤔 | 569 | 2025-09-09T14:50:44 | Namra_7 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ncl0v1 | false | null | t3_1ncl0v1 | /r/LocalLLaMA/comments/1ncl0v1/_/ | false | false | default | 569 | {'enabled': True, 'images': [{'id': '1x8wy1p0k5of1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/1x8wy1p0k5of1.png?width=108&crop=smart&auto=webp&s=8cbd3c782451139141dd02e06e09e3673913d468', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/1x8wy1p0k5of1.png?width=216&crop=smart&auto=webp&s=dd81b1113950648fd2ecde77ad56a42af3afefa4', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/1x8wy1p0k5of1.png?width=320&crop=smart&auto=webp&s=588db6264cbfa21b03de0af30c81d576f1e9707f', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/1x8wy1p0k5of1.png?width=640&crop=smart&auto=webp&s=5abc658735fe1e769f852e16c92dad154d7fd44c', 'width': 640}, {'height': 639, 'url': 'https://preview.redd.it/1x8wy1p0k5of1.png?width=960&crop=smart&auto=webp&s=1891b2b7edaf9e9e23504f4966808747be34500c', 'width': 960}, {'height': 719, 'url': 'https://preview.redd.it/1x8wy1p0k5of1.png?width=1080&crop=smart&auto=webp&s=bd4bceb822aa989bcffc7abb5addc90a4a17aa3b', 'width': 1080}], 'source': {'height': 719, 'url': 'https://preview.redd.it/1x8wy1p0k5of1.png?auto=webp&s=4841ba64c893f8b831c76535e61f7be618be3884', 'width': 1080}, 'variants': {}}]} |