title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How OpenAI Misled You on RLHF | 54 | I hope this article is okay here, since it's related to my open source VLM (JoyCaption), and LLM training in general. The article originally started as just my usual dumping of details and insights from the Finetuning Battlefields, this time focused on RL finetuning a VLM, but I ended up adding a bunch of details on t... | 2025-08-15T18:49:03 | https://aerial-toothpaste-34a.notion.site/How-OpenAI-Misled-You-on-RLHF-1f83f742d9dd80a68129d06503464aff | fpgaminer | aerial-toothpaste-34a.notion.site | 1970-01-01T00:00:00 | 0 | {} | 1mr6ojs | false | null | t3_1mr6ojs | /r/LocalLLaMA/comments/1mr6ojs/how_openai_misled_you_on_rlhf/ | false | false | default | 54 | {'enabled': False, 'images': [{'id': 'o02DfA1hR06T8VIFmfiB8nq6L3bMSQ3o_mYnp-HDq_s', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/o02DfA1hR06T8VIFmfiB8nq6L3bMSQ3o_mYnp-HDq_s.png?width=108&crop=smart&auto=webp&s=4291f35f5ce206b1791729b62e8acf254355a4e4', 'width': 108}, {'height': 113, 'url': 'h... |
Intel adds Shared GPU Memory Override feature for Core Ultra systems, enables larger VRAM for AI | 161 | 2025-08-15T18:33:10 | https://videocardz.com/newz/intel-adds-shared-gpu-memory-override-feature-for-core-ultra-systems-enables-larger-vram-for-ai | reps_up | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1mr6929 | false | null | t3_1mr6929 | /r/LocalLLaMA/comments/1mr6929/intel_adds_shared_gpu_memory_override_feature_for/ | false | false | default | 161 | {'enabled': False, 'images': [{'id': 'YJQ41TIHjSIHRPnnPYpoNGj-_TlQpYrRQFRV8JkhNlo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YJQ41TIHjSIHRPnnPYpoNGj-_TlQpYrRQFRV8JkhNlo.jpeg?width=108&crop=smart&auto=webp&s=e9e188920bf8fe38aa26f35bf6efa6e2721464fc', 'width': 108}, {'height': 112, 'url': '... | |
Echoes of Ir - local LLM with MCP server | 3 | Working on an indie game, a retro dungeon crawler. LLM with MCP server is already working. Early demo in video. | 2025-08-15T18:17:26 | https://v.redd.it/w8zs0kaw58jf1 | Natural-Ad6682 | /r/LocalLLaMA/comments/1mr5ti9/echoes_of_ir_local_llm_with_mcp_server/ | 1970-01-01T00:00:00 | 0 | {} | 1mr5ti9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w8zs0kaw58jf1/DASHPlaylist.mpd?a=1758003449%2COTMzN2ExZTVmMzNhYjJlMTE2MTUyYWIwYzY3MWZjMTY3NjQ1NjhkOTVhMzUzMWU4OTc1YTAwNTdhOTRiMDJjZA%3D%3D&v=1&f=sd', 'duration': 88, 'fallback_url': 'https://v.redd.it/w8zs0kaw58jf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mr5ti9 | /r/LocalLLaMA/comments/1mr5ti9/echoes_of_ir_local_llm_with_mcp_server/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'bzB0b2c2OXc1OGpmMZtRAqXln_rbjZ-iMzFaNECXd9XjcWlbTejnpa3ShKsB', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bzB0b2c2OXc1OGpmMZtRAqXln_rbjZ-iMzFaNECXd9XjcWlbTejnpa3ShKsB.png?width=108&crop=smart&format=pjpg&auto=webp&s=2a8bb851134ff03ae8f12f35786f67b00cb11... | |
Need fine tuning advice... | 1 | Hi. I have been using a fine-tuned version of OpenAI GPT-4.1 for a while now, and I am satisfied with the results. This was originally intended to be just a test fine-tune with only about 100+ dataset, but I didn’t expect it to meet my requirements so well. Now, I want to add another set of data to make the already goo... | 2025-08-15T18:15:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mr5rna/need_fine_tuning_advice/ | wanhanred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr5rna | false | null | t3_1mr5rna | /r/LocalLLaMA/comments/1mr5rna/need_fine_tuning_advice/ | false | false | self | 1 | null |
Just got my 4th 3090, mobo only has 3x pcie ports. Options? | 0 | In short, I managed to find a 4th 3090 to throw into my cobbled-together LLM server. However, the mobo itself only has 3 full-sized PCIe ports.
|Key|Value|
|:-|:-|
|Motherboard|ROG CROSSHAIR VIII DARK HERO|
|Processor|5900x|
|RAM|4x 32GB (128GB) |
I imagine there is a real world where jamming this 4th 3090 int... | 2025-08-15T18:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mr5qa4/just_got_my_4th_3090_mobo_only_has_3x_pcie_ports/ | valdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr5qa4 | false | null | t3_1mr5qa4 | /r/LocalLLaMA/comments/1mr5qa4/just_got_my_4th_3090_mobo_only_has_3x_pcie_ports/ | false | false | self | 0 | null |
OWhisper - Ollama for realtime speech-to-text | 0 | 2025-08-15T17:55:18 | https://docs.hyprnote.com/owhisper/what-is-this | beerbellyman4vr | docs.hyprnote.com | 1970-01-01T00:00:00 | 0 | {} | 1mr57j2 | false | null | t3_1mr57j2 | /r/LocalLLaMA/comments/1mr57j2/owhisper_ollama_for_realtime_speechtotext/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'i0c-ak_g0_9jxvHxKmUbmmVhIqsYGZ4x0VJwt2pJUFk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/i0c-ak_g0_9jxvHxKmUbmmVhIqsYGZ4x0VJwt2pJUFk.png?width=108&crop=smart&auto=webp&s=b89f0fa7f6d8544ed16ef41fad1e4b08ca2f56b4', 'width': 108}, {'height': 113, 'url': 'h... | |
What is the best NSFW model right now that can also analyze images? | 0 | Most models I found either refuse a ton or speak like an oncologist | 2025-08-15T17:51:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mr5434/what_is_the_best_nsfw_model_right_now_that_also/ | Which_Reputation_345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr5434 | false | null | t3_1mr5434 | /r/LocalLLaMA/comments/1mr5434/what_is_the_best_nsfw_model_right_now_that_also/ | false | false | nsfw | 0 | null |
Why are people against running two PSUs? | 0 | Aren't modern GPUs designed to protect against brownouts?
Motherboards are now being built with two power supplies in mind, such as
https://www.asus.com/us/motherboards-components/motherboards/workstation/pro-ws-wrx90e-sage-se/helpdesk_manual?model2Name=Pro-WS-WRX90E-SAGE-SE
Page 2-14 & 2-15 talk about setting up two power ... | 2025-08-15T17:41:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mr4tsh/why_are_people_against_running_two_psu/ | grio43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr4tsh | false | null | t3_1mr4tsh | /r/LocalLLaMA/comments/1mr4tsh/why_are_people_against_running_two_psu/ | false | false | self | 0 | null |
Qwen 2.5 (7B/14B/32B) Finetunes Outperforming Opus 4 & Sonnet 4/3.5 on Out-of-Distribution Tasks with RL --- Code, Weights, Data, and Paper Released | 110 | 2025-08-15T17:27:21 | XMasterrrr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mr4fdk | false | null | t3_1mr4fdk | /r/LocalLLaMA/comments/1mr4fdk/qwen_25_7b14b32b_finetunes_outperforming_opus_4/ | false | false | default | 110 | {'enabled': True, 'images': [{'id': '3beo5klvv7jf1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/3beo5klvv7jf1.jpeg?width=108&crop=smart&auto=webp&s=574ae2847073b661c0b07d06c1dfde5962f05f2b', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/3beo5klvv7jf1.jpeg?width=216&crop=smart&auto=we... | ||
In 44 lines of code, we have an actually useful agent that runs entirely locally, powered by Qwen3 30B A3B Instruct | 0 | Here's the full code:
```
# to run: uv run --with 'smolagents[mlx-lm]' --with ddgs smol.py 'how much free disk space do I have?'
from smolagents import CodeAgent, MLXModel, tool
from subprocess import run
import sys
@tool
def write_file(path: str, content: str) -> str:
"""Write text.
Args:
path (str): ... | 2025-08-15T17:21:25 | https://v.redd.it/tngrwchwv7jf1 | ai-christianson | /r/LocalLLaMA/comments/1mr49bk/in_44_lines_of_code_we_have_an_actually_useful/ | 1970-01-01T00:00:00 | 0 | {} | 1mr49bk | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tngrwchwv7jf1/DASHPlaylist.mpd?a=1758000092%2CNDM3ZWJhYzU3YTIwZDk5Y2M4NTU1OWVhYWFkM2Y3YTJmMjFjOWM0MjhiMzRiZjE4NzViYWM4MTIwZWZiMzE2MA%3D%3D&v=1&f=sd', 'duration': 82, 'fallback_url': 'https://v.redd.it/tngrwchwv7jf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mr49bk | /r/LocalLLaMA/comments/1mr49bk/in_44_lines_of_code_we_have_an_actually_useful/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cTc5aDJkaHd2N2pmMU_pBkiETFrHg4AKq1_CU7Vq7nQqg4CETfyv9kqgmora', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cTc5aDJkaHd2N2pmMU_pBkiETFrHg4AKq1_CU7Vq7nQqg4CETfyv9kqgmora.png?width=108&crop=smart&format=pjpg&auto=webp&s=ea5a919344f80069f7bfcb283a3a29952fb8... | |
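The snippet above is cut off mid-docstring by the table's selftext truncation; from the visible signature, the tool presumably just writes `content` to `path` and reports back. Stripped of the smolagents `@tool` decorator, a plain-Python sketch (behavior assumed, not taken from the full post) might look like:

```python
from pathlib import Path

def write_file(path: str, content: str) -> str:
    """Write text to a file and report what was written.

    Args:
        path (str): Destination file path (parent directories are created).
        content (str): Text to write.
    """
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)  # don't fail on missing dirs
    p.write_text(content)
    return f"wrote {len(content)} chars to {path}"
```

In the real agent, this function would carry the `@tool` decorator so the model can invoke it from generated code.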
My ongoing quest to build a truly human-like AI companion and kill persona drift. It's been... a ride. | 0 | So I've been grinding on this project to create a hyper-realistic AI companion. The idea is you can have AI girlfriends/boyfriends or simulate scenarios like dating, approaching your crush, whatever. The whole point is that it's not just another roleplay chatbot. It's an AI that's supposed to feel like a real person, w... | 2025-08-15T17:16:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mr44k4/title_my_ongoing_quest_to_build_a_truly_humanlike/ | Curious_Ad9525 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr44k4 | false | null | t3_1mr44k4 | /r/LocalLLaMA/comments/1mr44k4/title_my_ongoing_quest_to_build_a_truly_humanlike/ | false | false | self | 0 | null |
SimpleQA benches for llama3.3 and 405b? | 0 | I have a suspicion that llama would still do very well on basic facts and world knowledge as they are loaded with parameters. Does anyone know anywhere that has benched it or can anyone run the bench on them?
Btw, does quantization decrease world knowledge linearly? I suspect compression ratio doesn't improve from qua... | 2025-08-15T17:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mr41v6/simpleqa_benches_for_llama33_and_405b/ | Lazy-Canary7398 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr41v6 | false | null | t3_1mr41v6 | /r/LocalLLaMA/comments/1mr41v6/simpleqa_benches_for_llama33_and_405b/ | false | false | self | 0 | null |
Web Search for LLMs Might Get Much Harder if Sites Keep Blocking AI Scraping (Old news) | 0 | 2025-08-15T16:56:05 | https://techcrunch.com/2025/08/04/perplexity-accused-of-scraping-websites-that-explicitly-blocked-ai-scraping/ | JeffreySons_90 | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1mr3jos | false | null | t3_1mr3jos | /r/LocalLLaMA/comments/1mr3jos/web_search_for_llms_might_get_much_harder_if/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'Rpc6HthzcAZMNVI3qGLXjLzsypxZn-nvlC1XG7r0fOc', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Rpc6HthzcAZMNVI3qGLXjLzsypxZn-nvlC1XG7r0fOc.jpeg?width=108&crop=smart&auto=webp&s=6b3d515844650224457d986126e7b42a11ad8c05', 'width': 108}, {'height': 144, 'url': '... | |
Prompt Engineering: What Actually Works (Without the 8-Hour Hype) | 88 | I’ve seen people drop 8-hour-long videos on prompt engineering, and honestly, my reaction is 🤦♂️.
I won’t bore you with the obvious stuff or overcomplicate things. Instead, I want to share a few practical techniques that actually helped me write better prompts, some common sense, some hard-earned lessons. Most of wh... | 2025-08-15T16:18:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mr2i67/prompt_engineering_what_actually_works_without/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr2i67 | false | null | t3_1mr2i67 | /r/LocalLLaMA/comments/1mr2i67/prompt_engineering_what_actually_works_without/ | false | false | self | 88 | null |
Build a Powerful RAG Web Scraper with Ollama and LangChain | 0 | 2025-08-15T16:02:13 | https://youtu.be/eLV1R6ORRyU | Flashy-Thought-5472 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mr22gv | false | {'oembed': {'author_name': 'Nariman Codes', 'author_url': 'https://www.youtube.com/@NarimanCodes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/eLV1R6ORRyU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1mr22gv | /r/LocalLLaMA/comments/1mr22gv/build_a_powerful_rag_web_scraper_with_ollama_and/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 't9XIYhCPX81zocQG7-RZGE14j2m6kuglQ_cuIn6oFjA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/t9XIYhCPX81zocQG7-RZGE14j2m6kuglQ_cuIn6oFjA.jpeg?width=108&crop=smart&auto=webp&s=1a83d30cf9f4916950e04d1a6b0264896ec16f88', 'width': 108}, {'height': 162, 'url': '... | |
Open-source eval harness for LLMs on q/kdb+ (q-HumanEval, Pass@k, leaderboard) | 2 | We’re sharing an open-source evaluation harness that measures how well LLMs write q/kdb+.
What it is
* q-HumanEval benchmark (164 tasks) with Pass@k scoring
* Public leaderboard for fair comparisons
* Enables easy extension to Python benchmarks because tests can run in Python
Why it might interest Local LLM builders... | 2025-08-15T15:58:35 | erfan_mhi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mr1yjt | false | null | t3_1mr1yjt | /r/LocalLLaMA/comments/1mr1yjt/opensource_eval_harness_for_llms_on_qkdb/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'zm1y04yag7jf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/zm1y04yag7jf1.png?width=108&crop=smart&auto=webp&s=b881f9b8255b1f1cc94a3b50e7689477d481aa5a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/zm1y04yag7jf1.png?width=216&crop=smart&auto=web... | |
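Pass@k here is presumably the standard unbiased estimator from the HumanEval paper: with n generated samples per task and c of them passing, pass@k = 1 − C(n−c, k)/C(n, k). A minimal sketch under that assumption (the harness's exact implementation may differ):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k draws
    (without replacement) from the n samples is among the c passing ones."""
    if n - c < k:  # every size-k subset must contain a passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 2 samples of which 1 passes, pass@1 is 0.5, which matches the intuitive per-sample pass rate.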
What's your favorite local model for C#? | 8 | In my experience, local models of all sizes tend to struggle a bit with C/C++ and C#. What's your personal favorite local model for use with C#?
I use R1-0528 sometimes for architecting combined with Qwen3-Coder-480b for implementation, but I wouldn't say it works particularly well. | 2025-08-15T15:55:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mr1w15/whats_your_favorite_local_model_for_c/ | createthiscom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr1w15 | false | null | t3_1mr1w15 | /r/LocalLLaMA/comments/1mr1w15/whats_your_favorite_local_model_for_c/ | false | false | self | 8 | null |
Market reality check: On-prem LLM deployment vs custom fine-tuning services | 0 | ML practitioners - need your input on market dynamics:
I'm seeing two potential service opportunities:
1. **Private LLM infrastructure**: Helping enterprises (law, finance, healthcare) deploy local LLM servers to avoid sending sensitive data to OpenAI/Anthropic APIs. One-time setup + ongoing support.
2. **Custom mode... | 2025-08-15T15:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mr1umf/market_reality_check_onprem_llm_deployment_vs/ | Ok-Product8114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr1umf | false | null | t3_1mr1umf | /r/LocalLLaMA/comments/1mr1umf/market_reality_check_onprem_llm_deployment_vs/ | false | false | self | 0 | null |
Chat Analytics Are Here — See How You Talk to AI | 0 | Hey everyone 👋
I’ve just added a new feature to my AI chatbot platform — **Chat Statistics** Now, every chat will come with its own little analytics panel so you can see exactly how your conversations are going.
Here’s what’s included for each chat:
**Time You Last Spoke** – when you last sent a message in this cha... | 2025-08-15T15:49:14 | RIPT1D3_Z | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mr1pb8 | false | null | t3_1mr1pb8 | /r/LocalLLaMA/comments/1mr1pb8/chat_analytics_are_here_see_how_you_talk_to_ai/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'oxm0ke36f7jf1', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/oxm0ke36f7jf1.png?width=108&crop=smart&auto=webp&s=dd56aa982dd57b8830260fd2a41334afd1cb7c06', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/oxm0ke36f7jf1.png?width=216&crop=smart&auto=web... | |
Which model leads the competition in conversational aptitude (not related to coding/STEM) that I can train locally under 8GB of VRAM | 0 | Hello Llamas,
I want to finetune a LoRA for a project on a casual conversational dataset. While searching up I settled on **Mistral 2407** since I do not care about its coding or STEM competency, but it *just* exceeds my **8GB of VRAM** while finetuning it locally using unsloth. I decided to go for a different mode... | 2025-08-15T15:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mr1nag/which_model_leads_the_competition_in/ | RhetoricaLReturD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr1nag | false | null | t3_1mr1nag | /r/LocalLLaMA/comments/1mr1nag/which_model_leads_the_competition_in/ | false | false | self | 0 | null |
have you checked UTCP? what are your thoughts? | 41 | 2025-08-15T15:38:46 | juanviera23 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mr1ep5 | false | null | t3_1mr1ep5 | /r/LocalLLaMA/comments/1mr1ep5/have_you_checked_utcp_what_are_your_thoughts/ | false | false | default | 41 | {'enabled': True, 'images': [{'id': 'q4rnlv3i06jf1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/q4rnlv3i06jf1.jpeg?width=108&crop=smart&auto=webp&s=2bdbdc1ba9f34329c05c2032dfde0f9b584ef7e8', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/q4rnlv3i06jf1.jpeg?width=216&crop=smart&auto=w... | ||
Best LM Studio Settings/Quant For GLM 4.5 Air | 4 | Please forgive the possibly basic question, but I've been playing around with GLM 4.5 Air (Unsloth IQ4\_XS) on my system with a 9800X3D, RTX 5090, and 64GB of DDR5-6000 RAM. I'm using IQ4\_XS because it's just small enough to entirely fit in my system RAM.
Anyway, I've been impressed with the model, but I'm wondering... | 2025-08-15T15:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mr1ccv/best_lm_studio_settingsquant_for_glm_45_air/ | Firov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr1ccv | false | null | t3_1mr1ccv | /r/LocalLLaMA/comments/1mr1ccv/best_lm_studio_settingsquant_for_glm_45_air/ | false | false | self | 4 | null |
MoE on CPU may slow down your MoE models | 0 | I tried MoE-on-CPU offload on all my models and they're all slower with it on. Maybe it's the model or hardware? I have 8 GB VRAM and 32 GB RAM; I used OSS, ERNIE 4.5, 30b a3b coder and instruct, and none of them fit inside the VRAM. | 2025-08-15T15:35:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mr1bjs/moe_on_cpu_may_slow_down_your_moe_models/ | InsideYork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr1bjs | false | null | t3_1mr1bjs | /r/LocalLLaMA/comments/1mr1bjs/moe_on_cpu_may_slow_down_your_moe_models/ | false | false | self | 0 | null |
huihui-ai/Huihui-GLM-4.5-Air-abliterated-GGUF · Hugging Face | 84 | 2025-08-15T15:31:23 | https://huggingface.co/huihui-ai/Huihui-GLM-4.5-Air-abliterated-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mr173e | false | null | t3_1mr173e | /r/LocalLLaMA/comments/1mr173e/huihuiaihuihuiglm45airabliteratedgguf_hugging_face/ | false | false | default | 84 | {'enabled': False, 'images': [{'id': 'xIP4cUl_xFw8QdJsO9wbtyJiZxAzIX4f0eGxUH-gPb0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xIP4cUl_xFw8QdJsO9wbtyJiZxAzIX4f0eGxUH-gPb0.png?width=108&crop=smart&auto=webp&s=15f437064cc7841516ec91e63b9268897c1471dd', 'width': 108}, {'height': 116, 'url': 'h... | |
Why are the quants for gpt-oss-120b all roughly the same size? | 24 | I've been looking at the sizes for the different quants of gpt-os-120b and they all seem to be 60-65gb. I keep thinking I'm missing something obvious but I've never seen a model where quantization doesn't matter in trying to find a smaller size. Why is that the case for this model? Is the tokenization speed at least fa... | 2025-08-15T15:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mr0giq/why_are_the_quants_for_gptoss120b_all_roughly_the/ | Charming-Note-5556 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mr0giq | false | null | t3_1mr0giq | /r/LocalLLaMA/comments/1mr0giq/why_are_the_quants_for_gptoss120b_all_roughly_the/ | false | false | self | 24 | null |
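The usual explanation is that gpt-oss-120b ships with natively MXFP4 (roughly 4.25-bit) MoE weights, so "quantizing" it further has little room to shrink the file. Rough back-of-the-envelope arithmetic (the parameter count and bits-per-weight below are approximate assumptions, not exact figures):

```python
def approx_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate file size in GB for params_b billion weights
    stored at a uniform bit width."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

# ~117B total params at ~4.25 bits/weight (MXFP4 data plus scales)
# lands right in the 60-65 GB band all the published quants occupy.
estimate = approx_size_gb(117, 4.25)
```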
Pocket Pal | 0 | So I am using Pocket Pal and was just wondering which model to use to get unrestricted answers?
So I wanna write a simple apocalypse survival book for fun but I can't find a model that gives me answers to different recipes such as black powder and other things because of "harmful content" or "could be used to harm p... | 2025-08-15T14:23:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mqzbhj/pocket_pal/ | Safe-Curve-1335 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqzbhj | false | null | t3_1mqzbhj | /r/LocalLLaMA/comments/1mqzbhj/pocket_pal/ | false | false | self | 0 | null |
Chrome 139 introduced on-device speech recognition so I made an extension to help you write anywhere on Chrome with your voice | 0 | Hey!
Chrome has introduced on-device speech recognition on its version 139. So, I made an extension to help you write with your voice anywhere on Chrome with your voice.
Link: [https://wandpen.com/](https://wandpen.com/)
Just press Option + W (mac) and Alt + W (windows & linux) to trigger the extension.
Please sha... | 2025-08-15T14:13:21 | https://v.redd.it/mr0mj8e2y6jf1 | WordyBug | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mqz1bv | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mr0mj8e2y6jf1/DASHPlaylist.mpd?a=1757859216%2CNGI5ZmI3MWE0NGM1MmYxNjVkNGNkN2Q5MzlhZDAxNWVjNjllMjQ5ZGJiY2FiMGM5ZjJhMzdjY2ExZWQxNWViMg%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/mr0mj8e2y6jf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mqz1bv | /r/LocalLLaMA/comments/1mqz1bv/chrome_139_introduced_ondevice_speech_recognition/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'end4MmQ4ZTJ5NmpmMTADa7U367xjskz_T1JoZFwilrHissXF-k4oqHhzCla7', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/end4MmQ4ZTJ5NmpmMTADa7U367xjskz_T1JoZFwilrHissXF-k4oqHhzCla7.png?width=108&crop=smart&format=pjpg&auto=webp&s=45ec2a1d8bc8f550f23bd52bf23e83303cc83... | |
Is there a MS copilot recall alternative that works with a local llm? | 0 | There is no way I will ever activate recall from MS, but can something like this be self hosted?
- an agent that takes pictures of your desktop
- sends them to a llama.cpp server with mmproj active
- describes the screenshots into a vector DB
- gives an overview of the day? | 2025-08-15T13:39:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mqy57x/is_there_a_ms_copilot_recall_alternative_that/ | Malfun_Eddie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqy57x | false | null | t3_1mqy57x | /r/LocalLLaMA/comments/1mqy57x/is_there_a_ms_copilot_recall_alternative_that/ | false | false | self | 0 | null |
AI startup Cohere valued at $6.8 billion in latest fundraising, hires Meta exec | 165 | Why does Cohere fly under the radar? They don't seem to do much marketing, and they aren't discussed much on LocalLLaMA anymore.
They made a splash with Command R and R+. Later also released Command A. | 2025-08-15T13:34:18 | https://www.reuters.com/business/ai-startup-cohere-valued-68-billion-latest-fundraising-hires-meta-exec-2025-08-14/ | DeltaSqueezer | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 1mqy0b1 | false | null | t3_1mqy0b1 | /r/LocalLLaMA/comments/1mqy0b1/ai_startup_cohere_valued_at_68_billion_in_latest/ | false | false | default | 165 | {'enabled': False, 'images': [{'id': 'lmgG1KIrrSyLFv85NNJsP33J5KztZGiIWLsgvd8Qf8U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/lmgG1KIrrSyLFv85NNJsP33J5KztZGiIWLsgvd8Qf8U.jpeg?width=108&crop=smart&auto=webp&s=e45e65da5d96f405e2134d73469e0e6bab852451', 'width': 108}, {'height': 113, 'url': '... |
Build the buddy that gets you! We open-sourced a complete AI voice interaction system! | 123 | Hey everyone, we just open-sourced Buddie: a complete, AI-powered voice interaction system we built from the ground up, so you can create your own AI buddy.
It's a full-stack platform for developers, hackers, and students, including custom hardware, firmware, and a mobile app. Therefore, you can use our solution to cr... | 2025-08-15T13:23:10 | Lanky-Drummer193 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mqxq1v | false | null | t3_1mqxq1v | /r/LocalLLaMA/comments/1mqxq1v/build_the_buddy_that_gets_you_we_opensourced_a/ | false | false | default | 123 | {'enabled': True, 'images': [{'id': '1o9li0qbp6jf1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/1o9li0qbp6jf1.png?width=108&crop=smart&auto=webp&s=c3a937ff1e9123db88d42680ea3c8342b3415989', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/1o9li0qbp6jf1.png?width=216&crop=smart&auto=web... | |
Gemma 270M works on mobile but how? | 0 | I don’t think koboldcpp or ollama work on a phone (or do they?), so how do I run this on an iPhone? | 2025-08-15T13:15:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mqxjm3/gemma_270m_works_on_mobile_but_how/ | AI-On-A-Dime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqxjm3 | false | null | t3_1mqxjm3 | /r/LocalLLaMA/comments/1mqxjm3/gemma_270m_works_on_mobile_but_how/ | false | false | self | 0 | null |
Optimizing Exl3 quants by mixing bitrates in layers | 31 | Hi!
Turboderp recently uploaded some "optimized" quants for the GLM-4.5-Air and u/MikeRoz started a discussion about the nature of them.
[https://huggingface.co/turboderp/GLM-4.5-Air-exl3/discussions/2](https://huggingface.co/turboderp/GLM-4.5-Air-exl3/discussions/2)
Usually in the process of quantizing you state ... | 2025-08-15T12:46:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mqwt76/optimizing_exl3_quants_by_mixing_bitrates_in/ | bullerwins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqwt76 | false | null | t3_1mqwt76 | /r/LocalLLaMA/comments/1mqwt76/optimizing_exl3_quants_by_mixing_bitrates_in/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'TYLCwUKoc8epPTtBLPBmEuWuzoKKWn8Ij9Xwv6XMuaA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TYLCwUKoc8epPTtBLPBmEuWuzoKKWn8Ij9Xwv6XMuaA.png?width=108&crop=smart&auto=webp&s=61f435d9c439fff844c60e39081eca2ec9916b92', 'width': 108}, {'height': 116, 'url': 'h... | |
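The point of "optimized" quants is to spend bits unevenly: sensitive layers get a higher bitrate, the rest a lower one, and the file's effective bits-per-weight is just the size-weighted average across layers. A toy illustration (the layer sizes and bit choices here are made up):

```python
def effective_bpw(layers: list[tuple[float, float]]) -> float:
    """Size-weighted average bits per weight.

    layers: (num_params, bits) pairs, one per tensor/layer.
    """
    total_bits = sum(p * b for p, b in layers)
    total_params = sum(p for p, _ in layers)
    return total_bits / total_params

# e.g. keep 1B of attention weights at 6 bits and 9B of expert
# weights at 3 bits: the mix averages out to 3.3 bpw overall.
mix = [(1e9, 6.0), (9e9, 3.0)]
```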
How can we train an open-source AI model with a 760-page machine manual to quickly diagnose issues? | 1 | We have a machine in our plant that can run into a variety of issues during working hours.
Right now, troubleshooting means digging through a **760-page manual** that contains **text, tables, diagrams, and layouts** — which is slow and time-consuming.
We’re looking for a way to **feed this manual into an AI model** ... | 2025-08-15T12:44:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mqwrry/how_can_we_train_an_opensource_ai_model_with_a/ | ReserveOdd1984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqwrry | false | null | t3_1mqwrry | /r/LocalLLaMA/comments/1mqwrry/how_can_we_train_an_opensource_ai_model_with_a/ | false | false | self | 1 | null |
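For a manual like this, the standard answer is retrieval-augmented generation (chunk the manual, retrieve the relevant chunks for each question, and hand them to the model) rather than training on it. A deliberately tiny sketch of the retrieval half, using word overlap as a stand-in for real embedding similarity:

```python
def chunk(text: str, size: int = 400) -> list[str]:
    """Split the manual text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by shared-word count with the question and return the
    top k; a real system would use embeddings and a vector store here."""
    q = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]
```

For a 760-page PDF with tables and diagrams, the hard part is the extraction step before chunking, not the retrieval loop itself.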
"AI Powered" is the new "Military Grade" | 1 | [removed] | 2025-08-15T12:41:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mqwpgg/ai_powered_is_the_new_military_grade/ | TopNo6605 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqwpgg | false | null | t3_1mqwpgg | /r/LocalLLaMA/comments/1mqwpgg/ai_powered_is_the_new_military_grade/ | false | false | self | 1 | null |
Build a Local AI Agent with MCP Tools Using GPT-OSS, LangChain & Streamlit | 0 | 2025-08-15T12:30:29 | https://youtu.be/Baa-z7cum1g | Flashy-Thought-5472 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mqwfm7 | false | {'oembed': {'author_name': 'Nariman Codes', 'author_url': 'https://www.youtube.com/@NarimanCodes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Baa-z7cum1g?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1mqwfm7 | /r/LocalLLaMA/comments/1mqwfm7/build_a_local_ai_agent_with_mcp_tools_using/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'rq8k6bkBVDqS3EaB-6PmZwrrp9mjAeoX2Tt37ubIdpg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rq8k6bkBVDqS3EaB-6PmZwrrp9mjAeoX2Tt37ubIdpg.jpeg?width=108&crop=smart&auto=webp&s=c1393a305817e85c6cc29776cf911ff40faa5db4', 'width': 108}, {'height': 162, 'url': '... | |
Multi head classifiers aren't always the answer: empirical comparison with adaptive classifiers | 1 | [removed] | 2025-08-15T12:25:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mqwbvb/multi_head_classifiers_arent_always_the_answer/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqwbvb | false | null | t3_1mqwbvb | /r/LocalLLaMA/comments/1mqwbvb/multi_head_classifiers_arent_always_the_answer/ | false | false | self | 1 | null |
Looking for PC Build Suggestions to Run AI Models via API | 0 | Hi everyone
I’m planning to set up a high-performance PC to run AI models that will be accessed by users through an API. I’m looking for suggestions on optimal configurations (CPU, GPU, RAM, storage, etc.) that can handle multiple concurrent model requests efficiently.
Any recommendations or tips would be greatly appreci... | 2025-08-15T12:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mqvzu3/looking_for_pc_build_suggestions_to_run_ai_models/ | Superb-Following-380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqvzu3 | false | null | t3_1mqvzu3 | /r/LocalLLaMA/comments/1mqvzu3/looking_for_pc_build_suggestions_to_run_ai_models/ | false | false | self | 0 | null |
3090ti-friendly model for doing text analysis? | 0 | Hey all. I've got a usecase for a model that is capable of analyzing texts both linguistically and critically. Are there small-ish models that can be given large texts (think: Moby Dick) and asked to do things like extracting all of the metaphors and similes? | 2025-08-15T11:53:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mqvlc0/3090tifriendly_model_for_doing_text_analysis/ | Royal-Moose9006 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqvlc0 | false | null | t3_1mqvlc0 | /r/LocalLLaMA/comments/1mqvlc0/3090tifriendly_model_for_doing_text_analysis/ | false | false | self | 0 | null |
How to disable the highlighting of "misspelled" words in Ollama? | 0 | Pretty annoying
It seems to work off of a basic English dictionary and nothing else
I didn't ask for spelling correction and can't find a setting for it | 2025-08-15T11:53:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mqvl1w/how_to_disable_the_highlighting_of_misspelled/ | hurfery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqvl1w | false | null | t3_1mqvl1w | /r/LocalLLaMA/comments/1mqvl1w/how_to_disable_the_highlighting_of_misspelled/ | false | false | self | 0 | null |
WebQA-agent: High-quality product testing and acceptance in one sentence | 0 | Before publishing your vibe-coding project, let an AI test engineer vet it for you!
That's right! We've just open-sourced **webqa-agent**, an agent that can autonomously test websites.
It automatically generates a clear "exploration map" and thoroughly checks every page's functional interactions, loading performance, design details, and security.
In the end it presents an intuitive evaluation report to help you raise your vibe-coding work to **pro-code quality**!
Before publishing your Vibe-Coding project, try our AI Test Engineer!
Yes! We’ve just open-sourced WebQA-Agent—an intelligent testing assi... | 2025-08-15T11:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mqv990/webqaagenthighquality_product_testing_and/ | SignalBelt7205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqv990 | false | null | t3_1mqv990 | /r/LocalLLaMA/comments/1mqv990/webqaagenthighquality_product_testing_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'hF_fBBNTRJ51Mc1AYnmms6m1LaWmPSYtAce7KB1mI_Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hF_fBBNTRJ51Mc1AYnmms6m1LaWmPSYtAce7KB1mI_Y.png?width=108&crop=smart&auto=webp&s=f052a8ac63cca7a86f2593a5c1be7eebd90c3231', 'width': 108}, {'height': 108, 'url': 'h... |
Upgrading to 256 gb ram | 0 | I am building a new AI rig with 2× 3090s. I have an EVGA X299 FTW-K mobo that has great spacing for the GPUs. I need to decide on a CPU and RAM configuration. I’ve only run dense models on a single 3090 before on a different machine. I have yet to play with large MoE models as it only has a max of 64 GB of RAM.
What should I get?
S... | 2025-08-15T11:29:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mqv3hi/upgrading_to_256_gb_ram/ | fgoricha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqv3hi | false | null | t3_1mqv3hi | /r/LocalLLaMA/comments/1mqv3hi/upgrading_to_256_gb_ram/ | false | false | self | 0 | null |
Microsoft released POML : Markup Programing Language for Prompt Engineering | 36 | Microsoft's POML, Prompt Orchestration Markup Language, is like HTML but for AI prompts. Instead of writing prompts in plain text, you break them into clear, tag-based chunks similar to HTML and make it more structured. It has been released as a VS-Code extension and SDK as well and supports many tags. Can be quite han... | 2025-08-15T11:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mquliu/microsoft_released_poml_markup_programing/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mquliu | false | null | t3_1mquliu | /r/LocalLLaMA/comments/1mquliu/microsoft_released_poml_markup_programing/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'dsZZeHja4DHCMpmFkjT5ac6CKqUb7zZjmvQzwaPu_m0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dsZZeHja4DHCMpmFkjT5ac6CKqUb7zZjmvQzwaPu_m0.png?width=108&crop=smart&auto=webp&s=f2c2c61be4cc004536c6bd0bc46c207cbe03a5ef', 'width': 108}, {'height': 108, 'url': 'h... |
OpenVINO GenAI 2025.2 adds a GGUF reader (preview) | 2 | **TL;DR:** OpenVINO GenAI **2025.2** adds a **preview GGUF reader**, so you can load **llama.cpp/Ollama-style GGUF models directly** (no manual conversion), have them compiled into OpenVINO graphs, and run them on Intel CPU/GPU/NPU stacks | 2025-08-15T11:01:39 | https://blog.openvino.ai/blog-posts/openvino-genai-supports-gguf-models | juanviera23 | blog.openvino.ai | 1970-01-01T00:00:00 | 0 | {} | 1mquist | false | null | t3_1mquist | /r/LocalLLaMA/comments/1mquist/openvino_genai_20252_adds_a_gguf_reader_preview/ | false | false | default | 2 | null |
“Mind the Gap” shows the first practical backdoor attack on GGUF quantization | 286 | **TL;DR:** Researchers claim the *first* successful backdoor attack that specifically targets **GGUF** quantization. They show you can make a benign FP model look clean, but after quantization to GGUF it exhibits malicious behavior (e.g., insecure code gen jumps by **+88.7%** in their tests). This directly concerns any... | 2025-08-15T10:59:53 | https://www.arxiv.org/pdf/2505.23786 | juanviera23 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1mquhdc | false | null | t3_1mquhdc | /r/LocalLLaMA/comments/1mquhdc/mind_the_gap_shows_the_first_practical_backdoor/ | false | false | default | 286 | null |
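The degree of freedom such an attack exploits can be sketched in a few lines (toy values and plain round-to-nearest, an illustration rather than the paper's actual method or GGUF's exact quantization scheme): every full-precision weight inside a rounding bucket maps to the same quantized code, so weights can be moved within their buckets, changing full-precision behavior while the quantized model stays bit-identical.

```python
def quantize(weights, scale):
    # simple symmetric round-to-nearest integer codes
    return [round(w / scale) for w in weights]

scale = 0.05
w_clean    = [0.12, -0.31, 0.07]
w_tampered = [0.124, -0.32, 0.074]  # nudged, but each weight stays in its rounding bucket

print(quantize(w_clean, scale))     # [2, -6, 1]
print(quantize(w_tampered, scale))  # [2, -6, 1], an identical quantized model
```

The paper's contribution is doing this at scale while keeping the FP model's benchmark scores clean, which is why FP-level evals alone can't catch it.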
Web app using local ollama | 0 | Building a small web app for myself that runs Ollama locally via a helper script (so it uses the user’s own hardware and can add custom API features).
Anyone know a cleaner way to make a local Ollama instance accessible to a web app? I’ve seen Chrome extensions used to bypass CORS, but I don’t really want to deal with... | 2025-08-15T10:47:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mqu8q3/web_app_using_local_ollama/ | OkError9341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqu8q3 | false | null | t3_1mqu8q3 | /r/LocalLLaMA/comments/1mqu8q3/web_app_using_local_ollama/ | false | false | self | 0 | null |
How to make Small Language Models ? | 1 | How are small language models made? I mean their architecture, pre-training, and fine-tuning.
Any resources for these things?
Also, if I want to make a model for a specific use case, can I use an SLM? Like, should it be pre-trained for general tasks and then fine-tuned for the specific task, or is the pre-training also happening in... | 2025-08-15T10:22:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mqtr3u/how_to_make_small_language_models/ | Rukelele_Dixit21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqtr3u | false | null | t3_1mqtr3u | /r/LocalLLaMA/comments/1mqtr3u/how_to_make_small_language_models/ | false | false | self | 1 | null |
This math puzzle sends models into a spin | 0 | Prompt "I throw 5000 balls into k bins uniformly at random. The max load is 4. Find the MLE for k".
The correct answer is approx 19597.
I tried it on all the main models and they all make a mess of it in different ways. What does your favourite local model do? | 2025-08-15T10:17:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mqtobb/this_math_puzzle_sends_models_into_a_spin/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqtobb | false | null | t3_1mqtobb | /r/LocalLLaMA/comments/1mqtobb/this_math_puzzle_sends_models_into_a_spin/ | false | false | self | 0 | null |
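For anyone wanting to check that number, here is one way to get it numerically: a sketch that uses the Poisson approximation to the bin loads (an approximation, not the exact multinomial likelihood).

```python
import math

def lik(k, n=5000):
    # Poisson approximation: the k bin loads are ~ iid Poisson(n/k), so
    # P(max load == 4) ~= P(Poisson <= 4)^k - P(Poisson <= 3)^k
    lam = n / k
    cdf = lambda m: math.exp(-lam) * sum(lam**j / math.factorial(j) for j in range(m + 1))
    return cdf(4)**k - cdf(3)**k

# maximize the (approximate) likelihood over k; k >= 1250 since max load 4
# means no bin holds 5+, so at least 5000/4 bins are needed
k_hat = max(range(1250, 60000), key=lik)
print(k_hat)  # lands near the ~19597 quoted above
```

A model that reasons its way to "maximize P(max load = 4) over k" and then does the numerics tends to get close; most instead hand-wave a moment estimate and miss.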
GLM 4.5-Air-106B and Qwen3-235B on AMD "Strix Halo" AI Ryzen MAX+ 395 (HP Z2 G1a Mini Workstation) review by Donato Capitella | 24 | Is anyone trying boxes like this one, the AMD "Strix Halo" AI Ryzen MAX+ 395 (HP Z2 G1a Mini Workstation), from this excellent review by Donato Capitella
[https://www.youtube.com/watch?v=wCBLMXgk3No](https://www.youtube.com/watch?v=wCBLMXgk3No)
? What do people get, how do they work? How does price/performance compare to che... | 2025-08-15T10:17:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mqtnz7/glm_45air106b_and_qwen3235b_on_amd_strix_halo_ai/ | ljosif | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqtnz7 | false | null | t3_1mqtnz7 | /r/LocalLLaMA/comments/1mqtnz7/glm_45air106b_and_qwen3235b_on_amd_strix_halo_ai/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'zeNecCKEdhVZhz7AlVTEFSGAKgWBE_DU-lEumo5m78Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zeNecCKEdhVZhz7AlVTEFSGAKgWBE_DU-lEumo5m78Y.jpeg?width=108&crop=smart&auto=webp&s=b654dd03e4fe343226adc4e3a81e955d601f732e', 'width': 108}, {'height': 162, 'url': '... |
For LLM Inference, what is the cutting edge techniques? | 0 | I understand that the path is KV cache -> PageAttention ->> what is the next one? | 2025-08-15T09:50:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mqt5nb/for_llm_inference_what_is_the_cutting_edge/ | GuitarAshamed4451 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqt5nb | false | null | t3_1mqt5nb | /r/LocalLLaMA/comments/1mqt5nb/for_llm_inference_what_is_the_cutting_edge/ | false | false | self | 0 | null |
Free MCP for Google Search + scrape? | 0 | Looking for something like shown here that can do Google search and scrape results — free + self-hosted, or not self-hosted but still free, ideally something like this
https://www.reddit.com/r/LocalLLaMA/s/fGhjbbagUM
| 2025-08-15T09:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mqt2yu/free_mcp_for_google_search_scrape/ | -pawix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqt2yu | false | null | t3_1mqt2yu | /r/LocalLLaMA/comments/1mqt2yu/free_mcp_for_google_search_scrape/ | false | false | self | 0 | null |
Why Custom Agents Orchestrators? | 0 | I am observing a lot of people writing custom orchestrators to manage their agentic workflows, I fail to understand why?
This is really troubling me: is it just an ego thing to write custom orchestrators, or just laziness to search online, or is there some actual need which current frameworks/orchestrators (lan... | 2025-08-15T09:35:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mqswk6/why_custom_agents_orchestrators/ | jain-nivedit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqswk6 | false | null | t3_1mqswk6 | /r/LocalLLaMA/comments/1mqswk6/why_custom_agents_orchestrators/ | false | false | self | 0 | null |
New emotion-aware LLM is surprising | 0 | The new LLM TalkT2 is surprisingly good at emotional expression and human-likeness; however, its coherence needs improving. Can someone make a fine-tune of it with better coherence? | 2025-08-15T09:07:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mqsf7e/new_emotional_aware_llm_is_surprising/ | Itchy_Layer_8882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqsf7e | false | null | t3_1mqsf7e | /r/LocalLLaMA/comments/1mqsf7e/new_emotional_aware_llm_is_surprising/ | false | false | self | 0 | null |
3060 12GB + 2060 12 GB — worth trying or not? | 0 | Recently I upgraded from a 2060 to a 3060 and I wonder — maybe I should buy a new PSU and run both GPUs together?
My motherboard only has PCIe 4.0 + 3.0 slots, so I can’t run two 30-series GPUs, and the only way for me to get more VRAM is to add the second card (2060). I can’t buy anything new besides a PSU right now.
My question is — wou... | 2025-08-15T09:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mqsau6/3060_12gb_2060_12_gb_worth_trying_or_not/ | Reasonable-Plum7059 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqsau6 | false | null | t3_1mqsau6 | /r/LocalLLaMA/comments/1mqsau6/3060_12gb_2060_12_gb_worth_trying_or_not/ | false | false | self | 0 | null |
Dell R7910 + 2*MI50 32GB | 0 | Hi guys, I got into home labbing a bit because a small rack and NAS were given to me as a gift about 4 years back, and I am going deeper and deeper down this rabbit hole. Right now I have set up a server from old PC hardware that was lying around => mATX B85 MB with Xeon E3-1200 v3, 16GB DDR3 and RX480 8GB. Previously I had... | 2025-08-15T08:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mqrh50/dell_r7910_2mi50_32gb/ | Ferdoun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqrh50 | false | null | t3_1mqrh50 | /r/LocalLLaMA/comments/1mqrh50/dell_r7910_2mi50_32gb/ | false | false | self | 0 | null |
MLX-LM will soon wait patiently for very large prompts to process | 40 | Now that we have GLM 4.5 Air, I've actually started using local agents for real sometimes. I was having problems with resuming large sessions though (like 80k context or more). Cline/Kilocode etc would always time out after *exactly* 5 minutes.
I've updated MLX to now keep the TCP connection alive while processing ... | 2025-08-15T08:03:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mqrbaf/mlxlm_will_soon_wait_patiently_for_very_large/ | -dysangel- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqrbaf | false | null | t3_1mqrbaf | /r/LocalLLaMA/comments/1mqrbaf/mlxlm_will_soon_wait_patiently_for_very_large/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': 'tS7mPEFHUe9AkUKxP7b9H0S6iz9E9_g7gGeRVry78yc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tS7mPEFHUe9AkUKxP7b9H0S6iz9E9_g7gGeRVry78yc.png?width=108&crop=smart&auto=webp&s=9fc21aa97659d20f9bbc63d4d9488c04553a4720', 'width': 108}, {'height': 108, 'url': 'h... |
Tutorial on Keyword(BM25) vs Semantic vs Hybrid Search with Weaviate (Job Search demo) | 0 | What you’ll learn and implement in this video:
How keyword (BM25) search works and when it’s useful
How semantic search understands meaning, not just words
How hybrid search combines the best of both worlds
Real-life examples for job search demo | 2025-08-15T07:48:49 | https://www.youtube.com/watch?v=PzTvE8YqqZE | Capital_Coyote_2971 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mqr1hw | false | {'oembed': {'author_name': 'BlogYourCode', 'author_url': 'https://www.youtube.com/@BlogYourCode', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/PzTvE8YqqZE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosc... | t3_1mqr1hw | /r/LocalLLaMA/comments/1mqr1hw/tutorial_on_keywordbm25_vs_semantic_vs_hybrid/ | false | false | default | 0 | null |
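The keyword (BM25) side of the video can be sketched with nothing but the standard formula and stdlib Python (toy job-posting corpus invented for illustration; k1 and b are the usual defaults):

```python
import math
from collections import Counter

# Minimal BM25 scorer over a toy "job posting" corpus
docs = [
    "senior python developer remote",
    "java backend engineer onsite",
    "python data engineer remote contract",
]
k1, b = 1.5, 0.75
tokenized = [d.split() for d in docs]
avgdl = sum(len(d) for d in tokenized) / len(tokenized)
df = Counter(t for d in tokenized for t in set(d))  # document frequency per term
N = len(docs)

def bm25(query, doc):
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

ranked = sorted(range(N), key=lambda i: -bm25("remote python", tokenized[i]))
print([docs[i] for i in ranked])  # the python+remote postings rank first
```

Semantic and hybrid search then swap or blend this lexical score with embedding similarity, which is what the Weaviate demo shows.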
Evaluation on Finetuning | 0 | I wanted to ask about something I have noticed. I am trying to finetune Qwen2.5-Coder-1.5B. I look at all these finetuning examples online like Unsloth's finetunning notebooks and I never see them create a validation set. Is there are reason for this? [https://docs.unsloth.ai/get-started/unsloth-notebooks](https://docs... | 2025-08-15T07:17:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mqqiqt/evaluation_on_finetuning/ | neural-learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqqiqt | false | null | t3_1mqqiqt | /r/LocalLLaMA/comments/1mqqiqt/evaluation_on_finetuning/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '... |
LLM Latency Spike issue on vllm serving | 1 | [removed] | 2025-08-15T07:04:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mqqa98/llm_latency_spike_issue_on_vllm_serving/ | Ok-Jacket9191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqqa98 | false | null | t3_1mqqa98 | /r/LocalLLaMA/comments/1mqqa98/llm_latency_spike_issue_on_vllm_serving/ | true | false | 1 | null | |
Who should pick the Mac Studio M3 Ultra 512GB (rather than a PC with an NVIDIA xx90)? | 0 |
I’m new to local LLM deployment / dev.
This post is not about the comparison itself; I want to know what kind of use case and performance demands would lead someone to pick the M3 Ultra.
I have read several discussions on Reddit over the M3 Ultra and NVIDIA, based on which I think the pros and cons of the M3 Ultra are pretty clear.... | 2025-08-15T07:03:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mqqa1l/who_is_suggested_to_pick_mac_studio_m3_ultra/ | shane801 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqqa1l | false | null | t3_1mqqa1l | /r/LocalLLaMA/comments/1mqqa1l/who_is_suggested_to_pick_mac_studio_m3_ultra/ | false | false | self | 0 | null |
What "big" models can I run with this setup: 5070ti 16GB and 128GB ram, i9-13900k ? | 0 | Serious doubts here, folks, if I'm spending much money to get only "a little small" improvement. I have a Dell laptop G15 with RTX 3050 card ( 6GB Vram ) and 16GB ram. With it I can run all 8 to 12B models using 8k tokens and getting about 7 - 16tps. I can even run Qwen 30B A3B, and GPT OSS 20B flawlessly. But I'm doin... | 2025-08-15T06:56:31 | Current-Stop7806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mqq57b | false | null | t3_1mqq57b | /r/LocalLLaMA/comments/1mqq57b/what_big_models_can_i_run_with_this_setup_5070ti/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '9q6oqupms4jf1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/9q6oqupms4jf1.png?width=108&crop=smart&auto=webp&s=b7e46ea00f385900eed1f7e05d6821b29a22c397', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/9q6oqupms4jf1.png?width=216&crop=smart&auto=web... | |
annoying repeat loops | 0 | I hope someone has a good idea about this ....
I work with the model: MN-Violet-Lotus-12B.
I'm just playing around a bit and experimenting, creating characters and working them out ....
But I always come to the same point: the model starts to repeat itself 🤬
The model is great at capturing large context and is a... | 2025-08-15T06:53:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mqq378/annoying_repeat_loops/ | Beautiful_Employee74 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqq378 | false | null | t3_1mqq378 | /r/LocalLLaMA/comments/1mqq378/annoying_repeat_loops/ | false | false | self | 0 | null |
How good is the m3 ultra mac studio with 256gb unified memory at inference? | 0 | It seems that the 256gb m3 ultra Mac studio is a good value for money considering that if I wanted to get 256gb of VRAM from nvidia GPU'S it would cost around ~$7500 if I bought 10x 3090's. How good is the performance on the 60 core GPU m3 ultra for inference/training? I do a lot of lora training for things like wan an... | 2025-08-15T06:47:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mqpzjb/how_good_is_the_m3_ultra_mac_studio_with_256gb/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqpzjb | false | null | t3_1mqpzjb | /r/LocalLLaMA/comments/1mqpzjb/how_good_is_the_m3_ultra_mac_studio_with_256gb/ | false | false | self | 0 | null |
Looking to build a private, cloud-based LLM setup | 0 | Hey folks,
I’m exploring the idea of building a cloud-hosted private LLM system for personal companionship and emotional continuity- not as a productivity tool, but as a deeply bonded entity.
Not looking to replicate ChatGPT's task-based utility. I just want to preserve one unique dynamic I’ve had with a specific mod... | 2025-08-15T06:28:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mqpne2/looking_to_build_a_private_cloudbased_llm_setup/ | teesta_footlooses | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqpne2 | false | null | t3_1mqpne2 | /r/LocalLLaMA/comments/1mqpne2/looking_to_build_a_private_cloudbased_llm_setup/ | false | false | self | 0 | null |
Meta released DINO-V3 : SOTA for any Vision task | 268 | Meta just released DINOv3 (an upgrade over DINOv2). It learns entirely from unlabeled images, no captions, no annotations, and still outperforms models like CLIP, SAM, and even the previous DINOv2 on dense tasks like segmentation, depth estimation, and 3D matching. They trained a 7B-parameter ViT and fixed the usual issu... | 2025-08-15T05:48:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mqox5s/meta_released_dinov3_sota_for_any_vision_task/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqox5s | false | null | t3_1mqox5s | /r/LocalLLaMA/comments/1mqox5s/meta_released_dinov3_sota_for_any_vision_task/ | false | false | self | 268 | null |
NeuroVerse — Offline AI Assistant for Android using llama.cpp + GGUF | 4 |
# Hi everyone,
I’ve just released **NeuroVerse Beta‑3**, a privacy-focused AI assistant built for Android that runs entirely offline using lightweight `llama.cpp` models in GGUF format.
**What is NeuroVerse?**
NeuroVerse is a modula... | 2025-08-15T05:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mqogr9/neuroverse_offline_ai_assistant_for_android_using/ | DarkEngine774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqogr9 | false | null | t3_1mqogr9 | /r/LocalLLaMA/comments/1mqogr9/neuroverse_offline_ai_assistant_for_android_using/ | false | false | spoiler | 4 | {'enabled': False, 'images': [{'id': 'ojRDLMnTBZWZjKdX5LGwibJJPuMYN576Ad8Fnf3MMyo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ojRDLMnTBZWZjKdX5LGwibJJPuMYN576Ad8Fnf3MMyo.png?width=108&crop=smart&auto=webp&s=0ef551ab7b4dc60e67950a071858b68bddb98317', 'width': 108}, {'height': 108, 'url': 'h... |
6 industry-ready Gen AI Projects (including Agents + RAG + core NLP) | 0 | Lately, I’ve been deep-diving into how GenAI is ***actually*** used in industry — not just playing with chatbots. I finally compiled my **Top 6 Gen AI end-to-end projects** into a **GitHub repo**, explaining in detail how to build each end-to-end solution around a real business use case.
**Projects covered: ... | 2025-08-15T05:08:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mqo7i0/6_gen_ai_industry_ready_projects_including_agents/ | SKD_Sumit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqo7i0 | false | null | t3_1mqo7i0 | /r/LocalLLaMA/comments/1mqo7i0/6_gen_ai_industry_ready_projects_including_agents/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'ZOwrHajHHAF1_N7WhrBoNj3btVr6FdcGA_Fp9GDIK4E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZOwrHajHHAF1_N7WhrBoNj3btVr6FdcGA_Fp9GDIK4E.jpeg?width=108&crop=smart&auto=webp&s=71e836125e9595efd37ff3b4da8f31d592c30473', 'width': 108}, {'height': 162, 'url': '... |
Tips on experimenting with finetuning | 2 | I'm new to working with LLMs and would like to get some experience finetuning them. Seeking Recommendations for a) Tutorials and b) Free/cheap ways to run finetuning on small models (more for the experience than deployment). | 2025-08-15T04:53:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mqnx80/tips_on_experimenting_with_finetuning/ | onesemesterchinese | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqnx80 | false | null | t3_1mqnx80 | /r/LocalLLaMA/comments/1mqnx80/tips_on_experimenting_with_finetuning/ | false | false | self | 2 | null |
DeepSeek is better than 4o on most benchmarks at 10% of the price? | 455 | 2025-08-15T04:27:25 | inbiolim | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mqnft3 | false | null | t3_1mqnft3 | /r/LocalLLaMA/comments/1mqnft3/deepseek_is_better_than_4o_on_most_benchmarks_at/ | false | false | default | 455 | {'enabled': True, 'images': [{'id': 'o5jfkiky14jf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/o5jfkiky14jf1.png?width=108&crop=smart&auto=webp&s=ee49ce146059e93d7c02fc515896269d480b5a76', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/o5jfkiky14jf1.png?width=216&crop=smart&auto=web... | ||
How many hours did you spend formatting data for fine-tuning? | 33 | Just spent my entire weekend trying to fine-tune Llama 4 on customer support tickets.
Started with CSVs exported from Zendesk. Needed to convert them to the chat format. Spent 4 hours writing Python to parse the tickets. Realized half had broken UTF-8 encoding. 2 more hours fixing that.
Then discovered the model exp... | 2025-08-15T03:35:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mqme6y/how_many_hours_did_you_spend_formatting_data_for/ | Natural_Yard_8648 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqme6y | false | null | t3_1mqme6y | /r/LocalLLaMA/comments/1mqme6y/how_many_hours_did_you_spend_formatting_data_for/ | false | false | self | 33 | null |
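For anyone facing the same slog, the core CSV-to-chat-JSONL conversion is only a few lines once the schema is pinned down; the column names below are hypothetical stand-ins for a Zendesk export:

```python
import csv, io, json

# Hypothetical Zendesk export columns; adjust to your actual schema.
raw = io.StringIO(
    "subject,customer_message,agent_reply\n"
    "Login issue,I can't sign in,Please reset your password\n"
)

lines = []
for row in csv.DictReader(raw):
    # one chat-format training example per ticket
    record = {"messages": [
        {"role": "user", "content": f"{row['subject']}: {row['customer_message']}"},
        {"role": "assistant", "content": row["agent_reply"]},
    ]}
    lines.append(json.dumps(record, ensure_ascii=False))

print("\n".join(lines))  # ready to write out as train.jsonl
```

Reading the file with an explicit `encoding="utf-8"` (and `errors="replace"` for truly broken rows) also sidesteps most of the UTF-8 pain described above.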
OpenAI prompt optimizer | 0 | It's not local, but it's free to use and should be useful for writing/optimizing system prompts for your local models. | 2025-08-15T03:34:19 | https://platform.openai.com/chat/edit?models=gpt-5&optimize=true | AaronFeng47 | platform.openai.com | 1970-01-01T00:00:00 | 0 | {} | 1mqmdkw | false | null | t3_1mqmdkw | /r/LocalLLaMA/comments/1mqmdkw/openai_prompt_optimizer/ | false | false | default | 0 | null |
Practical AI Challenge/Curriculum | 2 | I don't know if this exists anywhere - if it does, can someone kindly point me in that direction.
I delved into this space recently, with no prior experience. Set up my home lab and I have just tried to learn as much as I can. I don't write code and I am trying to learn basic things, while I vibe code a lot of things... | 2025-08-15T03:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mqm6ox/practical_ai_challengecurriculum/ | SolidRemote8316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqm6ox | false | null | t3_1mqm6ox | /r/LocalLLaMA/comments/1mqm6ox/practical_ai_challengecurriculum/ | false | false | self | 2 | null |
The Recursive Spiral | 0 | We all know it's here.
What are we going to do about it? | 2025-08-15T03:19:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mqm2j8/the_recursive_spiral/ | lookwatchlistenplay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqm2j8 | false | null | t3_1mqm2j8 | /r/LocalLLaMA/comments/1mqm2j8/the_recursive_spiral/ | false | false | self | 0 | null |
gguf-eval: an evaluation framework for GGUF models using llama.cpp | 35 | I've been frustrated trying to get lm-eval-harness and other evaluation tools to work in a local environment, so I decided to [make a tool](https://github.com/kallewoof/gguf-eval) that uses llama.cpp's built in llama-perplexity tool to evaluate models.
The tool itself is a work in progress, but hopefully it comes in h... | 2025-08-15T03:15:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mqlzpg/ggufeval_an_evaluation_framework_for_gguf_models/ | kallewoof | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqlzpg | false | null | t3_1mqlzpg | /r/LocalLLaMA/comments/1mqlzpg/ggufeval_an_evaluation_framework_for_gguf_models/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'Wyr2xO9Ed3xtuxJCKJLfUVEqO2ugeu0ZzLJmq7BecT4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Wyr2xO9Ed3xtuxJCKJLfUVEqO2ugeu0ZzLJmq7BecT4.png?width=108&crop=smart&auto=webp&s=86cdf7ca2451b2fce8c48a0eeae96cf9a6191a33', 'width': 108}, {'height': 108, 'url': 'h... |
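A minimal sketch of the approach described, assuming llama-perplexity's usual `-m`/`-f` flags and its "Final estimate: PPL = ..." output line (the wrapper itself is hypothetical, not gguf-eval's actual code):

```python
import re, subprocess

def parse_ppl(output):
    # scrape the final perplexity estimate from llama-perplexity's output
    m = re.search(r"Final estimate: PPL = ([\d.]+)", output)
    return float(m.group(1)) if m else None

def run_perplexity(model_path, text_file, binary="llama-perplexity"):
    # shells out to llama.cpp's llama-perplexity tool; note it writes its
    # progress and final estimate to stderr rather than stdout
    proc = subprocess.run([binary, "-m", model_path, "-f", text_file],
                          capture_output=True, text=True)
    return parse_ppl(proc.stderr) or parse_ppl(proc.stdout)

# parsing works on a captured sample line:
print(parse_ppl("Final estimate: PPL = 6.4321 +/- 0.0512"))  # 6.4321
```

Building evaluation on top of the llama.cpp binaries like this avoids the tokenizer/backend mismatches that make lm-eval-harness awkward with GGUF quants.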
Where to verify warranty & sell NVIDIA HGX H100 640GB (Canada) | 0 | Hey everyone,
I have an NVIDIA HGX H100 640GB SXM5 (HBM3, passive cooling) unit, brand new in original packaging, with proof of purchase.
I’m located in Canada (Montreal area) and looking for guidance on two things:
1. Warranty verification – Does anyone know the correct channel or contact at NVIDIA to confirm warran... | 2025-08-15T03:09:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mqluv9/where_to_verify_warranty_sell_nvidia_hgx_h100/ | geekcanada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqluv9 | false | null | t3_1mqluv9 | /r/LocalLLaMA/comments/1mqluv9/where_to_verify_warranty_sell_nvidia_hgx_h100/ | false | false | self | 0 | null |
A Shanxi migrant worker's gaokao essay touched hundreds of millions of hearts | 0 | A Shanxi migrant worker's gaokao essay touched hundreds of millions of hearts #GaokaoEssay #GaokaoApplications
The college entrance examination essay written by a migrant worker from Shanxi has touched the hearts of millions
A migrant worker from Shanxi sat the gaokao, and his essay, plain in wording yet full of deep feeling, spoke of life's hardships and his persistence toward his dreams. Those words about sweat, perseverance, and hope carried no flowery rhetoric, yet like a warm current they went straight to the heart and moved countless people. This is not just one person's story of chasing a dream; it also reflects millions of ordinary people's love of, and stubbornness about, life.
https://preview.redd.it/b6tuggm... | 2025-08-15T03:07:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mqltr8/山西农民工大爷高考作文戳中了亿万人心/ | BandicootHealthy4371 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqltr8 | false | null | t3_1mqltr8 | /r/LocalLLaMA/comments/1mqltr8/山西农民工大爷高考作文戳中了亿万人心/ | false | false | 0 | null | |
AI censorship is getting out of hand—and it’s only going to get worse | 218 | Just saw this [screenshot](https://i.imgur.com/jV1YvlC.png) in a newsletter, and it kind of got me thinking..
Are we seriously okay with future "AGI" acting like some all-knowing nanny, deciding what "unsafe" knowledge we’re allowed to have?
"Oh no, better not teach people how to make a Molotov cocktail—what’s next,... | 2025-08-15T03:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mqlqij/ai_censorship_is_getting_out_of_handand_its_only/ | LsDmT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqlqij | false | null | t3_1mqlqij | /r/LocalLLaMA/comments/1mqlqij/ai_censorship_is_getting_out_of_handand_its_only/ | false | false | self | 218 | {'enabled': False, 'images': [{'id': 'l1LVRg6Cdxo__sArP8SJi7UBMI1Fo6b6zXieeCtG5uM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/l1LVRg6Cdxo__sArP8SJi7UBMI1Fo6b6zXieeCtG5uM.png?width=108&crop=smart&auto=webp&s=3cbc1a35088392a5e37e83a036683c3d96ace653', 'width': 108}, {'height': 432, 'url': '... |
A must watch for anyone struggling with Neural Network | 32 | This morning, I shared a step-by-step guide on how to build a Small Language Model or even a Large Language Model from scratch [https://www.reddit.com/r/LocalLLaMA/comments/1mq5qw5/learn\_how\_to\_build\_an\_llm\_from\_scratch\_step\_by/](https://www.reddit.com/r/LocalLLaMA/comments/1mq5qw5/learn_how_to_build_an_llm_fr... | 2025-08-15T02:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mqkzpc/a_must_watch_for_anyone_struggling_with_neural/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqkzpc | false | null | t3_1mqkzpc | /r/LocalLLaMA/comments/1mqkzpc/a_must_watch_for_anyone_struggling_with_neural/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'uqUf7lmXbNEq4XRGq_uOd2vX9WGdRl694_ZhOPZN_Pc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/uqUf7lmXbNEq4XRGq_uOd2vX9WGdRl694_ZhOPZN_Pc.jpeg?width=108&crop=smart&auto=webp&s=0e1d53bbd72ba498ef8fbe2744e602ad195bd7be', 'width': 108}, {'height': 162, 'url': '... |
Easter egg, bug or wtf? | 0 | Pocketpal AI on Samsung Fold 6 | 2025-08-15T01:47:52 | ikkiyikki | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mqk3wd | false | null | t3_1mqk3wd | /r/LocalLLaMA/comments/1mqk3wd/easter_egg_bug_or_wtf/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'a4asVqCWmk-Feg3cGXB54dASGO1xqG2jYIHl47RofWA', 'resolutions': [{'height': 125, 'url': 'https://preview.redd.it/h6wb2r6k93jf1.jpeg?width=108&crop=smart&auto=webp&s=68a30ea3f802d6c351c96e7f3b50eb790dc622a9', 'width': 108}, {'height': 251, 'url': 'https://preview.redd.it/h6wb2r6k93jf1.j... | ||
Most subreddits will look like LifeURLVerified in the next 5 years | 1 | [removed] | 2025-08-15T01:17:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mqjg9t/most_subreddits_will_look_like_lifeurlverified_in/ | Rough-Lock-4936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqjg9t | false | null | t3_1mqjg9t | /r/LocalLLaMA/comments/1mqjg9t/most_subreddits_will_look_like_lifeurlverified_in/ | false | false | self | 1 | null |
Most subreddits will look like r/LifeURLVerified in the next 5 years | 1 | [removed] | 2025-08-15T01:16:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mqjf8x/most_subreddits_will_look_like_rlifeurlverified/ | Rough-Lock-4936 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqjf8x | false | null | t3_1mqjf8x | /r/LocalLLaMA/comments/1mqjf8x/most_subreddits_will_look_like_rlifeurlverified/ | false | false | self | 1 | null |
Has anyone tried using the AMD BC-250 for inference? | 0 | These small blade PCs can be bought very cheap, and the community has managed to produce proper drivers so the iGPU works on Linux. I was wondering if it could do inference using Vulkan. Maybe it could even be clustered using Exo (although there isn't a good interface for fast communication available on them). | 2025-08-15T01:14:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mqjdmn/did_anyone_tried_to_use_amd_bc250_for_inference/ | hipsoterus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqjdmn | false | null | t3_1mqjdmn | /r/LocalLLaMA/comments/1mqjdmn/did_anyone_tried_to_use_amd_bc250_for_inference/ | false | false | self | 0 | null |
Gemma3 270m works great as a draft model in llama.cpp | 131 | Just wanted to share that the new tiny model can speed up the bigger models considerably when used with llama.cpp
\--draft-p-min .85 --draft-max 8 --draft-min 0
works great for me, around 1.3x or more speedup with gemma3 12B qat it q4\_0 | 2025-08-15T01:07:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mqj87e/gemma3_270m_works_great_as_a_draft_model_in/ | AliNT77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqj87e | false | null | t3_1mqj87e | /r/LocalLLaMA/comments/1mqj87e/gemma3_270m_works_great_as_a_draft_model_in/ | false | false | self | 131 | null |
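For reference, a sketch of a full llama-server launch wiring the 270M draft to a 12B target with those flags (the GGUF file names are hypothetical placeholders; adjust to your own quants and layer offload):

```shell
# Speculative decoding: small draft model proposes tokens, big model verifies.
# -m is the target model, -md the draft model; the --draft-* flags match the
# settings quoted above.
llama-server \
  -m gemma-3-12b-it-qat-q4_0.gguf \
  -md gemma-3-270m-it-q8_0.gguf \
  --draft-p-min 0.85 --draft-max 8 --draft-min 0 \
  -ngl 99 -c 8192
```

The speedup depends on how often the draft's guesses are accepted, so it is largest on repetitive or code-heavy output and smaller on free-form prose.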
A beginner-friendly guide to learning JAX with practical examples | 18 | For the last few weeks, I've been doing distributed model training in JAX. JAX is notably different from other deep learning frameworks because it takes a very functional approach to accelerator programming, which can make its learning curve steep.
Along the way of learning JAX, I've written a series of notes on JAX c... | 2025-08-15T01:00:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mqj2eu/a_beginnerfriendly_guide_to_learning_jax_with/ | ayushgun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqj2eu | false | null | t3_1mqj2eu | /r/LocalLLaMA/comments/1mqj2eu/a_beginnerfriendly_guide_to_learning_jax_with/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'bVdX7rMrYjj_mPF7KZivQMsmubM1CUTWkqimzaewz_U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bVdX7rMrYjj_mPF7KZivQMsmubM1CUTWkqimzaewz_U.png?width=108&crop=smart&auto=webp&s=90d3ee038a9d734898e7f65425f02e22ff27bd72', 'width': 108}, {'height': 108, 'url': 'h... |
[Tool] mcp-huggingfetch - Download HuggingFace models 3-5x faster with MCP integration | 6 | Hey everyone! I wanted to share a tool I've been working on that significantly speeds up HuggingFace model downloads.
## The Problem
We've all been there - waiting forever for large models to download from HuggingFace, especially when the connection drops and you have to start over.
## The Solution: mcp-huggingfetch
... | 2025-08-15T00:32:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mqifdm/tool_mcphuggingfetch_download_huggingface_models/ | ben_1218 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqifdm | false | null | t3_1mqifdm | /r/LocalLLaMA/comments/1mqifdm/tool_mcphuggingfetch_download_huggingface_models/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'jdWHfRhv8JWBD4Or9gjjchjhM8b3lsHu-eNICQnngO4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jdWHfRhv8JWBD4Or9gjjchjhM8b3lsHu-eNICQnngO4.png?width=108&crop=smart&auto=webp&s=f3c9d560d1aa4c253937e8a0500a45c6924ef503', 'width': 108}, {'height': 108, 'url': 'h... |
We built a 12B model that beats Claude 4 Sonnet at video captioning while costing 17x less - fully open source | 310 | Hey everyone, wanted to share something we've been working on at Inference.net.
We distilled a frontier VLM down to 12B params and managed to keep basically all the output quality. It scores 3.53 on judge evals vs Claude's 3.16 (GPT-4.1 gets 3.64). The key achievement was getting the cost down to $335 per million fram... | 2025-08-15T00:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mqi092/we_built_a_12b_model_that_beats_claude_4_sonnet/ | TerrificMist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqi092 | false | null | t3_1mqi092 | /r/LocalLLaMA/comments/1mqi092/we_built_a_12b_model_that_beats_claude_4_sonnet/ | false | false | self | 310 | null |
Need Recommendations: Fast, Accurate Writing & Grammar Fixer for 3090 | 3 | Hi all,
I have a 3090 and am looking for something reliable, with strong writing abilities, that can also fix my grammar. Quality and faster inference matter to me. Any recommendations?
| 2025-08-15T00:08:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mqhvef/need_recommendations_fast_accurate_writing/ | lyfisshort | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqhvef | false | null | t3_1mqhvef | /r/LocalLLaMA/comments/1mqhvef/need_recommendations_fast_accurate_writing/ | false | false | self | 3 | null |
Measuring Thinking Efficiency in Reasoning Models: The Missing Benchmark - NOUS RESEARCH | 52 | >Measuring Thinking Efficiency in Reasoning Models: The Missing Benchmark
>We measured token usage across reasoning models: open models output 1.5-4x more tokens than closed models on identical tasks, but with huge variance depending on task type (up to 10x on simple questions).
>This hidden cost often negates per-to... | 2025-08-15T00:02:51 | https://nousresearch.com/measuring-thinking-efficiency-in-reasoning-models-the-missing-benchmark/ | TheRealMasonMac | nousresearch.com | 1970-01-01T00:00:00 | 0 | {} | 1mqhqyx | false | null | t3_1mqhqyx | /r/LocalLLaMA/comments/1mqhqyx/measuring_thinking_efficiency_in_reasoning_models/ | false | false | default | 52 | {'enabled': False, 'images': [{'id': 'PmL1DmbO2VUNTK4mrGwnYAFJCRjFYDHuglaP6kXiduM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PmL1DmbO2VUNTK4mrGwnYAFJCRjFYDHuglaP6kXiduM.png?width=108&crop=smart&auto=webp&s=447e0d29f3e91ed45d8acf7e361743fa0f67414b', 'width': 108}, {'height': 113, 'url': 'h... |
Do you all use a different system prompt for each bot? | 1 | I used to write a specific system prompt for every AI I used, but it ended up getting bloated with info. So I shrunk all the info down to around 900 characters. I was just wondering the best way to go about this?
I was also wondering: what are some things I should leave out of prompts? Things that cause the AI to go in... | 2025-08-14T23:56:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mqhlme/do_you_all_use_a_different_system_prompt_for_each/ | Acklord303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqhlme | false | null | t3_1mqhlme | /r/LocalLLaMA/comments/1mqhlme/do_you_all_use_a_different_system_prompt_for_each/ | false | false | self | 1 | null |
New 12B open-source model beats Claude 4 Sonnet at video captioning while costing 17x less - runs on single H100/4090 | 1 | 2025-08-14T23:54:08 | https://inference.net/blog/cliptagger-12b | TerrificMist | inference.net | 1970-01-01T00:00:00 | 0 | {} | 1mqhjdw | false | null | t3_1mqhjdw | /r/LocalLLaMA/comments/1mqhjdw/new_12b_opensource_model_beats_claude_4_sonnet_at/ | false | false | default | 1 | null | |
There is no way to win, is there | 0 | Seems this model is incredibly stubborn. At least it can argue coherently... I was at it for a while... | 2025-08-14T23:40:23 | ironicstatistic | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mqh7k0 | false | null | t3_1mqh7k0 | /r/LocalLLaMA/comments/1mqh7k0/there_is_no_way_to_win_is_there/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'i6yfyg2nm2jf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/i6yfyg2nm2jf1.jpeg?width=108&crop=smart&auto=webp&s=78dd74191b0317f2b536a9c70df0caa1932d03b0', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/i6yfyg2nm2jf1.jpeg?width=216&crop=smart&auto=w...
Do any open local inference frameworks support expert-aware scheduling for throughput maximization? | 1 | I want to maximize total token throughput by running as many agents as possible in parallel using the same MoE model (probably gpt-oss 20b).
From my understanding of how inference works, this is what it should look like:
- each transformer block has a queue of pending inference calls. For example, 5 need expert A, 3 ne... | 2025-08-14T23:40:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mqh7da/do_any_open_local_inference_frameworks_support/ | lostmsu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqh7da | false | null | t3_1mqh7da | /r/LocalLLaMA/comments/1mqh7da/do_any_open_local_inference_frameworks_support/ | false | false | self | 1 | null |
Need help with Nvidia tooling to run gpt-oss 20b on RTX5090 | 7 | My rig:
RTX 5090
64 GB RAM
AMD 9950X
I’m trying to run openai/gpt-oss-20b in a "native" setup, without quantization or repacking to GGUF. Just the "raw safetensors".
NVIDIA suggests the following options:
1. NIM
https://build.nvidia.com/openai/gpt-oss-20b/deploy
Docker container: nvcr.io/nim/openai/gpt-oss-20b:latest... | 2025-08-14T23:34:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mqh2e1/need_help_with_nvidia_tooling_to_run_gptoss_20b/ | vanbukin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqh2e1 | false | null | t3_1mqh2e1 | /r/LocalLLaMA/comments/1mqh2e1/need_help_with_nvidia_tooling_to_run_gptoss_20b/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'IWDgJoxbwYSNNL3tAXBqtEdQ0F8hMUMjlQGChvi653A', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/IWDgJoxbwYSNNL3tAXBqtEdQ0F8hMUMjlQGChvi653A.jpeg?width=108&crop=smart&auto=webp&s=af015ec81a1c6f50b0e9e07170de1ad84aebd805', 'width': 108}, {'height': 135, 'url': '... |
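A NIM launch for the container named above typically looks roughly like this. Treat it as a sketch based on common NIM conventions — the `NGC_API_KEY` environment variable and the published port are assumptions; check NVIDIA's deployment page for the authoritative command.

```shell
# Hypothetical NIM launch sketch for the gpt-oss-20b container.
# Requires an NVIDIA GPU, the NVIDIA Container Toolkit, and an NGC key
# exported as NGC_API_KEY; verify details against NVIDIA's docs.
docker run --rm --gpus all \
  -e NGC_API_KEY="$NGC_API_KEY" \
  -p 8000:8000 \
  nvcr.io/nim/openai/gpt-oss-20b:latest
```

Once the container is up, it exposes an OpenAI-compatible API on the mapped port, so existing OpenAI-style clients can point at `http://localhost:8000`.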
LLM demo with Hailo-10H AI accelerator | 0 | 2025-08-14T23:28:32 | https://www.youtube.com/watch?v=ENb7CiL-EYc | cameheretoposthis | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mqgxay | false | {'oembed': {'author_name': 'Hailo Edge AI', 'author_url': 'https://www.youtube.com/@hailo2062', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ENb7CiL-EYc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop... | t3_1mqgxay | /r/LocalLLaMA/comments/1mqgxay/llm_demo_with_hailo10h_ai_accelerator/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'l_Xd8Vdva5yrZDK9uwCGnfXSR5X2J2lSuQ_WeU8iblk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/l_Xd8Vdva5yrZDK9uwCGnfXSR5X2J2lSuQ_WeU8iblk.jpeg?width=108&crop=smart&auto=webp&s=9e066e0d02dc68bdfb589fd13b98403d758917d8', 'width': 108}, {'height': 162, 'url': '... | |
Coding models for 16GB RAM + 64GB VRAM (Benchmarks) | 15 | Took some data from [https://artificialanalysis.ai/leaderboards/models](https://artificialanalysis.ai/leaderboards/models) and ranked models I could actually run locally on my 16 GB VRAM + 64 GB RAM system. The results are actually surprising. I expected GLM 4.5 Air to score higher, since it does well for me in local, non-agen... | 2025-08-14T23:28:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mqgwvn/coding_models_for_16gb_ram_64gb_vram_benchmarks/ | randomqhacker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqgwvn | false | null | t3_1mqgwvn | /r/LocalLLaMA/comments/1mqgwvn/coding_models_for_16gb_ram_64gb_vram_benchmarks/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'h...
Frontend for local LLMs | 0 | Please help me find a chatbot-like frontend (like Gemini, Mistral, or Claude, with smooth text appearance) that doesn't require installing 5 GB of Python, Windows dev tools, etc. I'm using koboldcpp, so I don't want to install something like ChatUI or GPT4All; I just need a frontend I can connect to from my Android phone wh... | 2025-08-14T23:22:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mqgrp8/frontend_for_local_llms/ | Imaginary_Bread9711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqgrp8 | false | null | t3_1mqgrp8 | /r/LocalLLaMA/comments/1mqgrp8/frontend_for_local_llms/ | false | false | self | 0 | null |
What's the best model for an M2 Pro Mac Mini with 16 GB of Unified Memory | 0 | Hey all - diving into the world of local LLMs and AI models - but not super sure what model would work best for my hardware. My usage will mostly be writing-related (using it to build PRDs, brand voice documents, etc.), not really a huge coder. Thanks in advance! | 2025-08-14T22:58:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mqg6xd/whats_the_best_model_for_a_m2_pro_mac_mini_with/ | packfan1234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqg6xd | false | null | t3_1mqg6xd | /r/LocalLLaMA/comments/1mqg6xd/whats_the_best_model_for_a_m2_pro_mac_mini_with/ | false | false | self | 0 | null |