title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Urgent response needed 😭😭😭 | 0 | Would you pay 10-20% more for guaranteed access to a GPU next month? | 2025-07-17T02:12:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m1vjbu/urgent_response_needed/ | Bihari_Eminem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1vjbu | false | null | t3_1m1vjbu | /r/LocalLLaMA/comments/1m1vjbu/urgent_response_needed/ | false | false | self | 0 | null |
Urgent help 😭😭 | 0 | What's more important to you when choosing a GPU provider: price, location, reliability, or performance benchmarks? | 2025-07-17T02:10:50 | https://www.reddit.com/r/LocalLLaMA/comments/1m1vi5d/urgent_help/ | Bihari_Eminem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1vi5d | false | null | t3_1m1vi5d | /r/LocalLLaMA/comments/1m1vi5d/urgent_help/ | false | false | self | 0 | null |
Kimi K2 on Aider Polyglot Coding Leaderboard | 182 | 2025-07-17T02:07:06 | aratahikaru5 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m1vf6g | false | null | t3_1m1vf6g | /r/LocalLLaMA/comments/1m1vf6g/kimi_k2_on_aider_polyglot_coding_leaderboard/ | false | false | 182 | {'enabled': True, 'images': [{'id': 'LMVFffOJSAC5wEzpTLOZuJTr2dMZyX1fAdJ9xnLT47s', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/wvr0xh2jecdf1.jpeg?width=108&crop=smart&auto=webp&s=54f8261dd295857edd815f9e3931823d30a3840f', 'width': 108}, {'height': 191, 'url': 'https://preview.redd.it/wvr0xh2jecdf1.jp... | |||
GPU providers assemble!! | 0 | Would you be open to leasing your idle compute on a platform that offers future revenue predictability (via pre-booked contracts)? What are your biggest blockers to joining such marketplaces? | 2025-07-17T02:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m1vbrj/gpu_providers_assemble/ | Bihari_Eminem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1vbrj | false | null | t3_1m1vbrj | /r/LocalLLaMA/comments/1m1vbrj/gpu_providers_assemble/ | false | false | self | 0 | null |
Kimi K2 on Aider Polyglot Coding Leaderboard | 2 | 2025-07-17T02:01:04 | aratahikaru5 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m1valc | false | null | t3_1m1valc | /r/LocalLLaMA/comments/1m1valc/kimi_k2_on_aider_polyglot_coding_leaderboard/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'kscvvkb8dcdf1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/kscvvkb8dcdf1.jpeg?width=108&crop=smart&auto=webp&s=6bd51d43b44a7e36b17a497bf9dd5e1fbdb4df4a', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/kscvvkb8dcdf1.jpeg?width=216&crop=smart&auto=w... | ||
GPU scarcity is real | 0 | Would you pay 10–20% more for guaranteed access to a GPU next month?<br>[View Poll](https://www.reddit.com/poll/1m1v3v0) | 2025-07-17T01:52:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m1v3v0/gpu_scarcity_is_real/ | Bihari_Eminem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1v3v0 | false | null | t3_1m1v3v0 | /r/LocalLLaMA/comments/1m1v3v0/gpu_scarcity_is_real/ | false | false | self | 0 | null |
AI-made dark UIs = endless purple & blue | 1 | Anyone else see this? | 2025-07-17T01:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m1v3jr/aimade_dark_uis_endless_purple_blue/ | Ok_Technology_3421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1v3jr | false | null | t3_1m1v3jr | /r/LocalLLaMA/comments/1m1v3jr/aimade_dark_uis_endless_purple_blue/ | false | false | self | 1 | null |
GPU scarcity is real | 0 | [deleted]<br>[View Poll](https://www.reddit.com/poll/1m1uyo7) | 2025-07-17T01:45:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1m1uyo7 | false | null | t3_1m1uyo7 | /r/LocalLLaMA/comments/1m1uyo7/gpu_scarcity_is_real/ | false | false | default | 0 | null | ||
STEM/AEC small business, tires meeting road? | 1 | [removed] | 2025-07-17T01:26:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m1ujvd/stemaec_small_business_tires_meeting_road/ | nlatd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1ujvd | false | null | t3_1m1ujvd | /r/LocalLLaMA/comments/1m1ujvd/stemaec_small_business_tires_meeting_road/ | false | false | self | 1 | null |
qwen3-235b on x6 7900xtx using vllm or any Model for 6 GPU | 9 | Hey, I'm trying to find the best model for 6x 7900 XTX. Qwen3 235B doesn't work with AWQ and vLLM, because it has 64 attention heads, which isn't divisible by 6.<br>Does anyone with a 6-GPU setup run a good model using vLLM?<br>How/where can I check the number of attention heads before downloading a model? | 2025-07-17T00:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m1tbzu/qwen3235b_on_x6_7900xtx_using_vllm_or_any_model/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1tbzu | false | null | t3_1m1tbzu | /r/LocalLLaMA/comments/1m1tbzu/qwen3235b_on_x6_7900xtx_using_vllm_or_any_model/ | false | false | self | 9 | null |
Any experiences running LLMs on a MacBook? | 10 | I'm about to buy a MacBook for work, but I also want to experiment with running LLMs locally. Does anyone have experience running (and fine-tuning) LLMs locally on a MacBook? I'm considering the MacBook Pro M4 Pro and the MacBook Air M4. | 2025-07-17T00:14:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m1t19r/any_experiences_running_llms_on_a_macbook/ | emersoftware | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1t19r | false | null | t3_1m1t19r | /r/LocalLLaMA/comments/1m1t19r/any_experiences_running_llms_on_a_macbook/ | false | false | self | 10 | null |
MCPs are awesome! | 350 | I have set up like 17 MCP servers to use with open-webui and local models, and it's been amazing!<br>The AI can decide if it needs to use tools like web search, windows-cli, reddit posts, wikipedia articles.<br>The usefulness of LLMs became that much bigger!<br>In the picture above I asked Qwen14B to execute this command in... | 2025-07-16T23:52:02 | iChrist | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m1sjsn | false | null | t3_1m1sjsn | /r/LocalLLaMA/comments/1m1sjsn/mcps_are_awesome/ | false | false | 350 | {'enabled': True, 'images': [{'id': '9y_ePp0tuMqP510znerjTn2E-ogs7Ge1blzjsDiAlAE', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/p3766l11qbdf1.png?width=108&crop=smart&auto=webp&s=27e150d4b7f769d18543dd0d5e45b46b8a1b3e31', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/p3766l11qbdf1.png... |
Regency Bewildered is a stylistic persona imprint | 27 | You, like most people, are probably scratching your head quizzically, asking yourself "Who is this doofus?"<br>It's me! With another "model"<br>[https://huggingface.co/FPHam/Regency\_Bewildered\_12B\_GGUF](https://huggingface.co/FPHam/Regency_Bewildered_12B_GGUF)<br>Regency Bewildered is a stylistic persona imprint.<br>This is... | 2025-07-16T23:37:17 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m1s7w9 | false | null | t3_1m1s7w9 | /r/LocalLLaMA/comments/1m1s7w9/regency_bewildered_is_a_stylistic_persona_imprint/ | false | false | 27 | {'enabled': True, 'images': [{'id': 'KZW0lObS_UAJUIk7DUl7wPnijCVbHgFRzFrl3amSPnM', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/q83c3ppqnbdf1.png?width=108&crop=smart&auto=webp&s=fade70b0eeaa4d06e3e872730420bead7f4290c4', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/q83c3ppqnbdf1.pn... |
How good are 2x 3090s for finetuning? | 1 | I'm planning to buy 2x 3090s with a powerful PC (good RAM, etc.). Would this be enough for basic stuff? What sort of things can I do with this setup? | 2025-07-16T23:32:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m1s410/how_good_are_2x_3090s_for_finetuning/ | Lanky_Neighborhood70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1s410 | false | null | t3_1m1s410 | /r/LocalLLaMA/comments/1m1s410/how_good_are_2x_3090s_for_finetuning/ | false | false | self | 1 | null |
LM Studio, MCP, Models and large JSON responses. | 5 | Ok, I got LM Studio running, have a MCP Server parsing XML Data (all runs successfully) and JSON Data comes back as expected. But I am having a problem with models ingesting this kind of data.<br>Given this tech is new and all is in the beginnings, I am expecting things going wrong. We are still in the learning phase her... | 2025-07-16T22:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/1m1qtgb/lm_studio_mcp_models_and_large_json_responses/ | Point5_MOA | self.LocalLLaMA | 2025-07-16T22:43:50 | 0 | {} | 1m1qtgb | false | null | t3_1m1qtgb | /r/LocalLLaMA/comments/1m1qtgb/lm_studio_mcp_models_and_large_json_responses/ | false | false | self | 5 | null |
I added themes to ChatGPT - and it looks awesome! | 0 | Tried adding themes to ChatGPT with a small extension — which of these four do you think looks the best?<br>For those asking, here’s the extension link: [https://chromewebstore.google.com/detail/gpt-theme-studio-chatgpt/mhgjgiicinjkeaekaojninjkaipenjcp?utm\_source=item-share-cb](https://chromewebstore.google.com/detail... | 2025-07-16T22:30:55 | https://www.reddit.com/gallery/1m1qo4a | electus08 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m1qo4a | false | null | t3_1m1qo4a | /r/LocalLLaMA/comments/1m1qo4a/i_added_themes_to_chatgpt_and_it_looks_awesome/ | false | false | 0 | null |
Mixing between Nvidia and AMD for LLM | 10 | Hello everyone.<br>Yesterday, I got a "wetted" Instinct MI50 32GB from a local salvor - it came back to life after taking a BW100 shower.<br>My gaming gear has an Intel 14th-gen CPU + 4070 Ti and 64GB RAM, and works in a WIN11 WSL2 environment.<br>If possible, I would like to use the MI50 as the second GPU to expand VRAM to 44GB (12+32)... | 2025-07-16T22:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m1qmwi/mixing_between_nvidia_and_amd_for_llm/ | Desperate-Sir-5088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1qmwi | false | null | t3_1m1qmwi | /r/LocalLLaMA/comments/1m1qmwi/mixing_between_nvidia_and_amd_for_llm/ | false | false | self | 10 | null |
New LLM agent driven AGI test | 0 | A quine is a program that produces its own source code as output.<br>I propose an AGI test instead of ARC-AGI: the "quines" coding agent. This is an agent that, given its code, can produce a tech spec which, if fed back to the same agent, can vibe-code an equivalent coding agent. | 2025-07-16T22:28:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m1qmd7/new_llm_agent_driven_agi_test/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1qmd7 | false | null | t3_1m1qmd7 | /r/LocalLLaMA/comments/1m1qmd7/new_llm_agent_driven_agi_test/ | false | false | self | 0 | null |
How I Applied to 1000 Jobs in One Second and Got 34 Interviews [AMA] | 0 | After graduating in CS from the University of Genoa, I moved to Dublin, and quickly realized how broken the job hunt had become.<br>Reposted listings. Endless, pointless application forms. Traditional job boards never show most of the jobs companies publish on their own websites.<br>---<br>**So I built something bette... | 2025-07-16T22:14:15 | https://www.reddit.com/r/LocalLLaMA/comments/1m1q9jb/how_i_applied_to_1000_jobs_in_one_second_and_got/ | Elieroos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1q9jb | false | null | t3_1m1q9jb | /r/LocalLLaMA/comments/1m1q9jb/how_i_applied_to_1000_jobs_in_one_second_and_got/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE.png?width=108&crop=smart&auto=webp&s=5d55a10c10bd5e0d54a22f9b594e2d33f087c356', 'width': 108}, {'height': 113, 'url': 'h... |
Got “Out of Credits” Email from Together AI While Only Using Free Model and Still Have $1 in Balance | 0 | Hey all,<br>I’ve been using the `llama-3-70b-instruct-turbo-free` model via the Together API for about a month, integrated into my app. As far as I know, this model is 100% free to use, and I’ve been very careful to only use this free model, not the paid one.<br>Today I got an email from Together AI saying:<br>“Your Together... | 2025-07-16T21:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m1opwv/got_out_of_credits_email_from_together_ai_while/ | Wild_King_1035 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1opwv | false | null | t3_1m1opwv | /r/LocalLLaMA/comments/1m1opwv/got_out_of_credits_email_from_together_ai_while/ | false | false | self | 0 | null |
What exactly happens if you don't have enough vram for a model? | 3 | I'm sure this is a dumb question, sorry. But I have 12GB of VRAM; what happens if I try running a model that would take up to 13GB max to run? What about one that needs even more? Would it just run slower, would it behave worse, or would it not work at all? | 2025-07-16T20:52:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m1o8ii/what_exactly_happens_if_you_dont_have_enough_vram/ | TheRoyalSniper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1o8ii | false | null | t3_1m1o8ii | /r/LocalLLaMA/comments/1m1o8ii/what_exactly_happens_if_you_dont_have_enough_vram/ | false | false | self | 3 | null |
Sometime… in the next 3 to 5 decades…. | 173 | 2025-07-16T20:32:09 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m1np9n | false | null | t3_1m1np9n | /r/LocalLLaMA/comments/1m1np9n/sometime_in_the_next_3_to_5_decades/ | false | false | default | 173 | {'enabled': True, 'images': [{'id': 'obmnyjusqadf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/obmnyjusqadf1.jpeg?width=108&crop=smart&auto=webp&s=1456830fcaf819f2c4c42087dc86c01f46f8e093', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/obmnyjusqadf1.jpeg?width=216&crop=smart&auto=... | ||
Help in using Flux models in 3060 8gb vram and 16gb ram | 3 | How can I run the Flux Kontext dev model locally?<br>I need documentation in pure Python. | 2025-07-16T20:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m1n3kf/help_in_using_flux_models_in_3060_8gb_vram_and/ | LahmeriMohamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1n3kf | false | null | t3_1m1n3kf | /r/LocalLLaMA/comments/1m1n3kf/help_in_using_flux_models_in_3060_8gb_vram_and/ | false | false | self | 3 | null |
[Open-Source] self-hostable AI productivity agent using Qwen 3 (4B) - reads your apps, extracts tasks, runs them on autopilot | 58 | hey everyone!<br>we're currently building an open-source autopilot for maximising productivity.<br>**TL;DR:** the idea is that users can connect their apps, AI will periodically read these apps for new context (like new emails, new calendar events, etc), extract action items from them, ask the user clarifying questions (i... | 2025-07-16T20:03:07 | https://www.reddit.com/r/LocalLLaMA/comments/1m1myiq/opensource_selfhostable_ai_productivity_agent/ | therealkabeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1myiq | false | null | t3_1m1myiq | /r/LocalLLaMA/comments/1m1myiq/opensource_selfhostable_ai_productivity_agent/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'wRwjYUyXEmD_0KaRu6xtA6LyI8SUz5QMFsQJQzXPNZA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wRwjYUyXEmD_0KaRu6xtA6LyI8SUz5QMFsQJQzXPNZA.png?width=108&crop=smart&auto=webp&s=0c54d09f36ac8c0cd129366c144f7dad6c035ae1', 'width': 108}, {'height': 108, 'url': 'h... |
Improving tool calling via SFT | 4 | Lately, I have been conducting out a few experiments to improve tool calling capabilities of open-source models via SFT+LoRA on custom dataset (1200 data points having single-turn, multi-turn convos). What I have been noticing is that even after SFT, my open source models (qwen 2.5 7B and 14B) still perform badly (like... | 2025-07-16T20:02:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m1mxwm/improving_tool_calling_via_sft/ | NarrowAssociation239 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1mxwm | false | null | t3_1m1mxwm | /r/LocalLLaMA/comments/1m1mxwm/improving_tool_calling_via_sft/ | false | false | self | 4 | null |
Experimental RAG Techniques Resource | 21 | Hello Everyone!<br>For the last couple of weeks, I've been working on creating the Experimental RAG Tech repo, which I think some of you might find really interesting. This repository contains various techniques for improving RAG workflows that I've come up with during my research fellowship at my University. Each techni... | 2025-07-16T19:27:24 | https://github.com/LucaStrano/Experimental_RAG_Tech | k-en | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m1m1qw | false | null | t3_1m1m1qw | /r/LocalLLaMA/comments/1m1m1qw/experimental_rag_techniques_resource/ | false | false | default | 21 | {'enabled': False, 'images': [{'id': 'BI_t2luUqpFFoT6ofhB1ODYTo_sYADo6lsFHbPcG2pU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BI_t2luUqpFFoT6ofhB1ODYTo_sYADo6lsFHbPcG2pU.png?width=108&crop=smart&auto=webp&s=b348986f536b5d378c085fe90b029c9fff0cefc4', 'width': 108}, {'height': 108, 'url': 'h... |
I want to build a local ai server | 1 | Hey everyone,<br>I’m setting up a local AI server and could use some advice on which operating system to go with. My setup is:<br>* **GPU:** RTX 4070 (12GB VRAM)<br>* **RAM:** 64GB DDR5<br>* **CPU:** Ryzen 5 7600X<br>My main goals are to run **local LLMs**, possibly using **Ollama**, and image generation. I’ll mostly be using thi... | 2025-07-16T18:48:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m1l00d/i_want_to_build_a_local_ai_server/ | Reasonable_Brief578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1l00d | false | null | t3_1m1l00d | /r/LocalLLaMA/comments/1m1l00d/i_want_to_build_a_local_ai_server/ | false | false | self | 1 | null |
LLMs to return numeric evals | 1 | Hey, I am building a custom deep research agent that specializes in finding information on people and companies, and I want to return an estimated confidence score, based on how confident the agent is in the data that was collected, but we seem to be getting pretty bad results; the numbers often are not reliable.<br>I re... | 2025-07-16T18:39:53 | https://www.reddit.com/r/LocalLLaMA/comments/1m1kr50/llms_to_return_numeric_evals/ | heross28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1kr50 | false | null | t3_1m1kr50 | /r/LocalLLaMA/comments/1m1kr50/llms_to_return_numeric_evals/ | false | false | self | 1 | null |
Building a Self-Bootstrapping Coding Agent in Python | 0 | Bub’s first milestone: automatically fixing type annotations. Powered by Moonshot K2<br>> **Bub:** Successfully fixed the first mypy issue by adding the missing return type annotation -> None to the __init__ method in src/bub/cli/render.py, reducing the error count from 24 to 23. | 2025-07-16T18:33:03 | https://psiace.me/posts/baby-step-coding-agent/ | PsiACE | psiace.me | 1970-01-01T00:00:00 | 0 | {} | 1m1kknk | false | null | t3_1m1kknk | /r/LocalLLaMA/comments/1m1kknk/building_a_selfbootstrapping_coding_agent_in/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'Ov5HrIgpbGOkgJxUjtYVrLgcTt6htnyexSPr7wjVud0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Ov5HrIgpbGOkgJxUjtYVrLgcTt6htnyexSPr7wjVud0.png?width=108&crop=smart&auto=webp&s=9da7ba3ccee347e0ceb9172de67e8c30a1044337', 'width': 108}, {'height': 113, 'url': 'h... |
Ollama and Open WebUI | 24 | Hello,<br>I want to set up my own Ollama server with OpenWebUI for my small business. I currently have the following options:<br>I still have 5 x RTX 3080 GPUs from my mining days — or would it be better to buy a Mac Mini with the M4 chip?<br>What would you suggest? | 2025-07-16T18:18:44 | https://www.reddit.com/gallery/1m1k704 | HeisenbergWalter | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m1k704 | false | null | t3_1m1k704 | /r/LocalLLaMA/comments/1m1k704/ollama_and_open_webui/ | false | false | 24 | null |
Enable AI Agents to join and interact in your meetings via MCP | 42 | Hey guys,<br>We've been working on an open-source project called joinly for the last 10 weeks. The idea is that you can connect your favourite MCP servers (e.g. Asana, Notion and Linear, GitHub etc.) to an AI agent and send that agent to any browser-based video conference. This essentially allows you to create your own c... | 2025-07-16T18:12:12 | https://v.redd.it/9w98c7awy9df1 | Square-Test-515 | /r/LocalLLaMA/comments/1m1k0vh/enable_ai_agents_to_join_and_interact_in_your/ | 1970-01-01T00:00:00 | 0 | {} | 1m1k0vh | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9w98c7awy9df1/DASHPlaylist.mpd?a=1755418627%2CNDhhMDg0NmM0NGI1MzZmMmU5NzM5ZWY0ZDcyODg1MjU0MDU2OThiNDMzNGZhZDczNDJlNmMwOWFjNDY1ZjMyYQ%3D%3D&v=1&f=sd', 'duration': 91, 'fallback_url': 'https://v.redd.it/9w98c7awy9df1/DASH_1080.mp4?source=fallback', 'h... | t3_1m1k0vh | /r/LocalLLaMA/comments/1m1k0vh/enable_ai_agents_to_join_and_interact_in_your/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'cGI3eHRnYXd5OWRmMV3lGHAjbP8zKUEsAVrqPldZ7glK4yeeVC_nwGPMlIIM', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/cGI3eHRnYXd5OWRmMV3lGHAjbP8zKUEsAVrqPldZ7glK4yeeVC_nwGPMlIIM.png?width=108&crop=smart&format=pjpg&auto=webp&s=69365ba8c5ea202a945d2c23e02e3baec3dc5... |
Intel preparing Nova Lake-AX, big APU design to counter AMD Strix Halo - VideoCardz.com | 47 | 2025-07-16T18:08:39 | https://videocardz.com/newz/intel-preparing-nova-lake-ax-big-apu-design-to-counter-amd-strix-halo | EasternBeyond | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1m1jxia | false | null | t3_1m1jxia | /r/LocalLLaMA/comments/1m1jxia/intel_preparing_nova_lakeax_big_apu_design_to/ | false | false | default | 47 | {'enabled': False, 'images': [{'id': 'bwl76m6f4s2m-Wxzq9Cxwn8RVXuFp8molq0d9EndLBE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bwl76m6f4s2m-Wxzq9Cxwn8RVXuFp8molq0d9EndLBE.jpeg?width=108&crop=smart&auto=webp&s=c0c41f10a7f7841ca2668b1087fe6cfe08876e8a', 'width': 108}, {'height': 121, 'url': '... | |
Opensource Grok Ani Companion | 3 | 2025-07-16T18:04:41 | https://github.com/Jackywine/Bella | Charuru | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m1jtlo | false | null | t3_1m1jtlo | /r/LocalLLaMA/comments/1m1jtlo/opensource_grok_ani_companion/ | false | false | default | 3 | null | |
Anyone having luck with Hunyuan 80B A13B? | 66 | [Hunyuan-80B-A13B](https://huggingface.co/unsloth/Hunyuan-A13B-Instruct-GGUF) looked really cool on paper, I hoped it would be the "large equivalent" of the excellent Qwen3 30B A3B. According to the official [Hugging Face page](https://huggingface.co/tencent/Hunyuan-A13B-Instruct), it's compact yet powerful, comparable... | 2025-07-16T17:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m1jd2r/anyone_having_luck_with_hunyuan_80b_a13b/ | Admirable-Star7088 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1jd2r | false | null | t3_1m1jd2r | /r/LocalLLaMA/comments/1m1jd2r/anyone_having_luck_with_hunyuan_80b_a13b/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': '7tZAafHz8QDi5rnBspdJJoYb2OkIv9OvkoTJ3S1Dpzk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7tZAafHz8QDi5rnBspdJJoYb2OkIv9OvkoTJ3S1Dpzk.png?width=108&crop=smart&auto=webp&s=b85db16a59e36c7951ed0c5a13aadb8f132c73e0', 'width': 108}, {'height': 116, 'url': 'h... |
The Experimental RAG Techniques Repo | 1 | [removed] | 2025-07-16T17:40:35 | https://github.com/LucaStrano/Experimental_RAG_Tech | k-en | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m1j65r | false | null | t3_1m1j65r | /r/LocalLLaMA/comments/1m1j65r/the_experimental_rag_techniques_repo/ | false | false | default | 1 | null |
Is CAG just "put your context in system prompt?" | 2 | I recently read an article about RAG vs CAG online. It mentions putting the CAG context in the KV cache or something like that, but I didn't see any KV cache setting in the AI API call, and when using a GGUF model I don't know how to set it. Can someone elaborate? | 2025-07-16T17:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m1j43q/is_cag_just_put_your_context_in_system_prompt/ | dheetoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1j43q | false | null | t3_1m1j43q | /r/LocalLLaMA/comments/1m1j43q/is_cag_just_put_your_context_in_system_prompt/ | false | false | self | 2 | null |
The Experimental RAG Techniques Repo | 1 | [removed] | 2025-07-16T17:37:17 | https://www.reddit.com/r/LocalLLaMA/comments/1m1j30u/the_experimental_rag_techniques_repo/ | k-en | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1j30u | false | null | t3_1m1j30u | /r/LocalLLaMA/comments/1m1j30u/the_experimental_rag_techniques_repo/ | false | false | self | 1 | null |
The Experimental RAG Techniques Repo | 1 | [removed] | 2025-07-16T17:22:53 | https://github.com/LucaStrano/Experimental_RAG_Tech | k-en | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m1ip2w | false | null | t3_1m1ip2w | /r/LocalLLaMA/comments/1m1ip2w/the_experimental_rag_techniques_repo/ | false | false | default | 1 | null |
We built an open-source tool that trains both diffusion and text models together in a single interface | 32 | Transformer Lab has just shipped major updates to our Diffusion model support!<br>Transformer Lab now allows you to generate and train both text models (LLMs) and diffusion models in the same interface. It’s open source (AGPL-3.0) and works on AMD and NVIDIA GPUs, as well as Apple silicon.<br>Now, we’ve built support for:<br>... | 2025-07-16T17:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m1im6y/we_built_an_opensource_tool_that_trains_both/ | OriginalSpread3100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1im6y | false | null | t3_1m1im6y | /r/LocalLLaMA/comments/1m1im6y/we_built_an_opensource_tool_that_trains_both/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?width=108&crop=smart&auto=webp&s=ed618c5bb4c12e2d13ea8c39bad4ca732a513593', 'width': 108}, {'height': 160, 'url': 'h... |
Local LLM Questions | 1 | [removed] | 2025-07-16T17:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m1idik/local_llm_questions/ | AstroGridIron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1idik | false | null | t3_1m1idik | /r/LocalLLaMA/comments/1m1idik/local_llm_questions/ | false | false | self | 1 | null |
He’s out of line but he’s right | 2655 | 2025-07-16T17:06:31 | EstablishmentFun3205 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m1i922 | false | null | t3_1m1i922 | /r/LocalLLaMA/comments/1m1i922/hes_out_of_line_but_hes_right/ | false | false | default | 2655 | {'enabled': True, 'images': [{'id': 'dqx9wlf3q9df1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/dqx9wlf3q9df1.jpeg?width=108&crop=smart&auto=webp&s=c7ab507aeba685e314878f607fb5dd3ea2612a23', 'width': 108}, {'height': 173, 'url': 'https://preview.redd.it/dqx9wlf3q9df1.jpeg?width=216&crop=smart&auto=w... |
What would you want in a local LLM phone app? | 4 | Hey folks,<br>Curious to hear from the people who actually run GGUF and local models: If you could design a phone app for local LLM inference (no server, no telemetry, runs GGUF or MLX depending on the platform), what’s your dream feature set?<br>What I’m especially interested in:<br>* How much control do you want over mod... | 2025-07-16T16:48:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m1hrgs/what_would_you_want_in_a_local_llm_phone_app/ | Agreeable-Rest9162 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1hrgs | false | null | t3_1m1hrgs | /r/LocalLLaMA/comments/1m1hrgs/what_would_you_want_in_a_local_llm_phone_app/ | false | false | self | 4 | null |
Playing around with the design of my pet project - does this look decent or nah? | 136 | I posted a showcase of my project recently, would be glad to hear opinions. | 2025-07-16T16:39:57 | https://www.reddit.com/gallery/1m1hixa | RIPT1D3_Z | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m1hixa | false | null | t3_1m1hixa | /r/LocalLLaMA/comments/1m1hixa/playing_around_with_the_design_of_my_pet_project/ | false | false | 136 | null | |
Support for diffusion models (Dream 7B) has been merged into llama.cpp | 197 | Diffusion models are a new kind of language model that generate text by denoising random noise step-by-step, instead of predicting tokens left to right like traditional LLMs.<br>This PR adds basic support for diffusion models, using Dream 7B instruct as base. DiffuCoder-7B is built on the same arch so it should be triv... | 2025-07-16T16:20:32 | https://github.com/ggml-org/llama.cpp/pull/14644 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m1h0fy | false | null | t3_1m1h0fy | /r/LocalLLaMA/comments/1m1h0fy/support_for_diffusion_models_dream_7b_has_been/ | false | false | default | 197 | {'enabled': False, 'images': [{'id': 'OqAAbOs6fFLPZaNF0M6vIqHJqNLZwtArB7hBcX1IZ7M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OqAAbOs6fFLPZaNF0M6vIqHJqNLZwtArB7hBcX1IZ7M.png?width=108&crop=smart&auto=webp&s=674fad62c54e7775ca54c2ce9b07e9727fa1f40d', 'width': 108}, {'height': 108, 'url': 'h... |
Support for diffusion models (Add Dream 7B) has been merged into llama.cpp | 5 | Diffusion models are a new kind of language model that generate text by denoising random noise step-by-step, instead of predicting tokens left to right like traditional LLMs.<br>From the PR:<br>><br>From the Dream 7B website:<br>> | 2025-07-16T16:18:47 | https://github.com/ggml-org/llama.cpp/pull/14644 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m1gyu9 | false | null | t3_1m1gyu9 | /r/LocalLLaMA/comments/1m1gyu9/support_for_diffusion_models_add_dream_7b_has/ | false | false | default | 5 | null |
I have 2 5090 FE's in hand. Help me build the rest of the rig! | 2 | Hi local llama!<br>I think this could be a fun idea!<br>Here's the Game:<br>\- I have 2 5090 FE's.<br>\- 4k budget to purchase<br>1) Motherboard<br>2) CPU(s)<br>3) RAM<br>As a baseline I want to run Deepseek V3 Architecture (671B) as Q4, but with Kimi at 1T now existing, I'm interested!<br>I've been looking into 1 vs 2 socket... | 2025-07-16T15:44:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m1g1z2/i_have_2_5090_fes_in_hand_help_me_build_the_rest/ | novel_market_21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1g1z2 | false | null | t3_1m1g1z2 | /r/LocalLLaMA/comments/1m1g1z2/i_have_2_5090_fes_in_hand_help_me_build_the_rest/ | false | false | self | 2 | null |
Has anyone worked on detecting actual face touches (like nose, lips, eyes) using computer vision? | 1 | I'm trying to reliably detect when a person actually touches their nose, lips, or eyes — not just when the finger appears in that 2D region due to camera angle. I'm using MediaPipe for face and hand landmarks, calculating 3D distances, but it's still triggering false positives when the finger is near the face but not t... | 2025-07-16T15:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m1ft4v/has_anyone_worked_on_detecting_actual_face/ | Funny_Working_7490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1ft4v | false | null | t3_1m1ft4v | /r/LocalLLaMA/comments/1m1ft4v/has_anyone_worked_on_detecting_actual_face/ | false | false | self | 1 | null |
CUDA is coming to MLX | 200 | Looks like we will soon get CUDA support in MLX - this means that we’ll be able to run MLX programs on both Apple Silicon and CUDA GPUs. | 2025-07-16T15:31:43 | https://github.com/ml-explore/mlx/pull/1983 | mrfakename0 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m1foz1 | false | null | t3_1m1foz1 | /r/LocalLLaMA/comments/1m1foz1/cuda_is_coming_to_mlx/ | false | false | default | 200 | {'enabled': False, 'images': [{'id': 'w8edStcv8JcRcgUOJ4-eZrp8x-ns7z_4bZz-mt8i8eE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w8edStcv8JcRcgUOJ4-eZrp8x-ns7z_4bZz-mt8i8eE.png?width=108&crop=smart&auto=webp&s=0f056b2f1e49bcba9b9c8584c9ac22674763802b', 'width': 108}, {'height': 108, 'url': 'h... |
Got an opportunity to buy 5090 FE at mrp, need suggestions | 1 | I already have a system with 2x3090 FE.
Cabinet is Lian Li O11 dynamic evo xl.
I have a corsair 1600 w psu.
128 gb ram.
Amd 3950 processor, auros x570 master motherboard with 3 pcie slots.
If I purchase the 5090, should I update the rest of the system? Or should I upgrade only part of the system?
In my country, w... | 2025-07-16T15:04:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m1ezn6/got_an_opportunity_to_buy_5090_fe_at_mrp_need/ | Jaswanth04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1ezn6 | false | null | t3_1m1ezn6 | /r/LocalLLaMA/comments/1m1ezn6/got_an_opportunity_to_buy_5090_fe_at_mrp_need/ | false | false | self | 1 | null |
Lots of sudden issues while loading models | 6 | I use Kobold to launch models and RisuAI app since it works with settings I'm used to the most, but suddenly I can't load any model anymore. I was running this model in my last post at Q3_K_XL with max context window and it was loading fast, replying even faster and all good. But now that I put on Q4 can it breaks imme... | 2025-07-16T14:57:56 | https://www.reddit.com/gallery/1m1esnr | WEREWOLF_BX13 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m1esnr | false | null | t3_1m1esnr | /r/LocalLLaMA/comments/1m1esnr/lots_of_sudden_issues_while_loading_models/ | false | false | 6 | null | |
KIMI AI Opt Out Training Data? | 0 | I am using KIMI for personal reasons through the official host site, but I cannot find the opt out data training option. | 2025-07-16T14:50:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m1elwf/kimi_ai_opt_out_training_data/ | Parker93GT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1elwf | false | null | t3_1m1elwf | /r/LocalLLaMA/comments/1m1elwf/kimi_ai_opt_out_training_data/ | false | false | self | 0 | null |
Built an Agent That Replaced My Financial Advisor and Now My Realtor Too | 0 | A while back, I built a small app to track stocks. It pulled market data and gave me daily reports on what to buy or sell based on my risk tolerance. It worked so well that I kept iterating it for bigger decisions. Now I’m using it to figure out my next house purchase, stuff like which neighborhoods are hot, new vs. ol... | 2025-07-16T14:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m1efdo/built_an_agent_that_replaced_my_financial_advisor/ | InitialChard8359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1efdo | false | null | t3_1m1efdo | /r/LocalLLaMA/comments/1m1efdo/built_an_agent_that_replaced_my_financial_advisor/ | false | false | self | 0 | null |
IMO 2025 LLM Mathematical Reasoning Evaluation | 14 | Following the conclusion of IMO 2025 in Australia today, I tested the performance of three frontier models: Anthropic Sonnet 4 (with thinking), ByteDance Seed 1.6 (with thinking), and Gemini 2.5 Pro. The results weren't as impressive as expected - **only two models** correctly solved **Problem 5** with proper reasoning... | 2025-07-16T14:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m1dzqj/imo_2025_llm_mathematical_reasoning_evaluation/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1dzqj | false | null | t3_1m1dzqj | /r/LocalLLaMA/comments/1m1dzqj/imo_2025_llm_mathematical_reasoning_evaluation/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'NUpUrTkINKtuXyRXZSKIGDyQIdh2fBYAdxAjjE2rTBM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NUpUrTkINKtuXyRXZSKIGDyQIdh2fBYAdxAjjE2rTBM.png?width=108&crop=smart&auto=webp&s=a857ce4111db8fde7a57184518408c87453a5e7d', 'width': 108}, {'height': 108, 'url': 'h... |
The most brutal hardware to run frontier open source LLMs locally. | 0 | B200 Blackwell Octo 1.5TB. Available now from [GPTshop.ai](http://GPTshop.ai) | 2025-07-16T14:19:53 | https://www.reddit.com/r/LocalLLaMA/comments/1m1dtsr/the_most_brutal_hardware_to_run_frontier_open/ | GPTshop_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1dtsr | false | null | t3_1m1dtsr | /r/LocalLLaMA/comments/1m1dtsr/the_most_brutal_hardware_to_run_frontier_open/ | false | false | self | 0 | null |
If you ever feel stupid, just remember a Google engineer was fired in 2022 for saying their LLM was sentient | 0 | Looking at LLM """IQ""" now vs back then, what an idiot lmao
the guy's now "freelance" (unemployed) | 2025-07-16T14:15:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m1dpud/if_you_ever_feel_stupid_just_remember_a_google/ | LinkSea8324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1dpud | false | null | t3_1m1dpud | /r/LocalLLaMA/comments/1m1dpud/if_you_ever_feel_stupid_just_remember_a_google/ | false | false | self | 0 | null |
How do you suggest I architecture my voice-controlled mobile assistant? | 6 | Hey everyone, I’m building a voice assistant proof-of-concept that connects my Flutter app on Android to a FastAPI server and lets users perform system-level actions (like sending SMS or placing calls) via natural language commands like:
>Call mom
Send 'see you soon' to dad
It's not necessarily limited to those a... | 2025-07-16T14:07:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m1dj34/how_do_you_suggest_i_architecture_my/ | Otis43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1dj34 | false | null | t3_1m1dj34 | /r/LocalLLaMA/comments/1m1dj34/how_do_you_suggest_i_architecture_my/ | false | false | self | 6 | null |
I want to run ai locally on my bad pc | 1 | I have a really low end pc and i want to run a llm, which one should i run?
My pc specs are
Gtx 1060 6gb
I7 2600
16gb ram
Also i wanted to ask if its possible to run high end llms? I dont really care if they r gonna be slow, just wanted to ask if i could run them slowly | 2025-07-16T14:03:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m1dfhy/i_want_to_run_ai_locally_on_my_bad_pc/ | xFranx1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1dfhy | false | null | t3_1m1dfhy | /r/LocalLLaMA/comments/1m1dfhy/i_want_to_run_ai_locally_on_my_bad_pc/ | false | false | self | 1 | null |
Has anyone here already done the math? | 0 | I have been trying to weigh up cost factors for a platform I am building and I am just curious if anyone here has already done the math:
Considering an open-source model like Kimi K2 32B how do costs weigh up for serving concurrent users per hour:
**1) API cost**
**2) Self-hosting in cloud (GCP or AWS)**
**3) Sel... | 2025-07-16T13:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/1m1d9sm/has_anyone_here_already_done_the_math/ | Budget_Map_3333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1d9sm | false | null | t3_1m1d9sm | /r/LocalLLaMA/comments/1m1d9sm/has_anyone_here_already_done_the_math/ | false | false | self | 0 | null |
what are the abliterated/uncensored models for? | 0 | a vlm can recognize nsfw images, and anything else? what can a pure llm do? | 2025-07-16T13:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m1chzc/what_are_the_abliterateduncensored_models_for/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1chzc | false | null | t3_1m1chzc | /r/LocalLLaMA/comments/1m1chzc/what_are_the_abliterateduncensored_models_for/ | false | false | nsfw | 0 | null |
Vllm vs. llama.cpp | 33 | Hi gang, in the use case 1 user total, local chat inference, assume model fits in vram, which engine is faster for tokens/sec for any given prompt? | 2025-07-16T12:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m1au28/vllm_vs_llamacpp/ | Agreeable-Prompt-666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1au28 | false | null | t3_1m1au28 | /r/LocalLLaMA/comments/1m1au28/vllm_vs_llamacpp/ | false | false | self | 33 | null |
Using Llama MaaS in Google's Vertex AI | 2 | I am in the EU, and I decided to explore options on Google Vertex; I didn't even know they had a model-as-a-service option. The pricing seems high, but they have a wide array of models, including Llama 3 and 4. Now I've spent the last 2 hours trying to get quoata from them, my account is a business one, but I still can... | 2025-07-16T12:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m1as5s/using_llama_maas_in_googles_vertex_ai/ | Specialist_Bee_9726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1as5s | false | null | t3_1m1as5s | /r/LocalLLaMA/comments/1m1as5s/using_llama_maas_in_googles_vertex_ai/ | false | false | self | 2 | null |
How do you RAG multiple docs in LM STUDIO | 3 | I know its probably asked a lot , but I cant find the answer . I know you can add the document , much like the hybrid approach to the chat, but I was looking at something like Anything LLM workspace , or OpenWeb UI knowledge ….
Since I'm already using LM Studio to host the Embedding model ,,, how do I use a similar fu... | 2025-07-16T12:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m1aps5/how_do_you_rag_multiple_docs_in_lm_studio/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1aps5 | false | null | t3_1m1aps5 | /r/LocalLLaMA/comments/1m1aps5/how_do_you_rag_multiple_docs_in_lm_studio/ | false | false | self | 3 | null |
📢 [RELEASE] LoFT CLI: Fine-tune & Deploy LLMs on CPU (8GB RAM, No GPU, No Cloud) | 43 | Update to my [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1luiigi/tool_release_finetune_quantize_13b_llms_on_8gb/) — the repo is **finally public**!
# 🔥 TL;DR
* **GitHub**: [diptanshu1991/LoFT](https://github.com/diptanshu1991/LoFT)
* **What you get**: 5 CLI commands: `loft finetune`, `merge`, `expor... | 2025-07-16T11:51:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m1aj8n/release_loft_cli_finetune_deploy_llms_on_cpu_8gb/ | diptanshu1991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1aj8n | false | null | t3_1m1aj8n | /r/LocalLLaMA/comments/1m1aj8n/release_loft_cli_finetune_deploy_llms_on_cpu_8gb/ | false | false | self | 43 | null |
Ok this tool is actually insane!! I just found a tool that turns ANY document into LLM-ready data!! | 0 | If you're building with AI agents, RAG, or just tinkering with LLMs...
You're gonna love this...
https://preview.redd.it/36glutgx18df1.png?width=800&format=png&auto=webp&s=08c0fb68b7b622b64a6aa96292aeab5a7e8ada36
Microsoft released MarkItDown a lightweight Python tool that converts LITERALLY any file into Markdown... | 2025-07-16T11:30:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m1a4z7/ok_this_tool_is_actually_insane_i_just_found_a/ | AdVirtual2648 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1a4z7 | false | null | t3_1m1a4z7 | /r/LocalLLaMA/comments/1m1a4z7/ok_this_tool_is_actually_insane_i_just_found_a/ | false | false | 0 | null | |
So how do I fine-time a local model? | 16 | Hi, I'm a newb, please forgive me if I'm missing some obvious documentation.
For the sake of fun and learning, I'd like to fine-tune a local model (haven't decided which one yet), as some kind of writing assistant. My mid-term goal is to have a local VSCode extension that will rewrite e.g. doc comments or CVs as shake... | 2025-07-16T11:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m19upn/so_how_do_i_finetime_a_local_model/ | ImYoric | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m19upn | false | null | t3_1m19upn | /r/LocalLLaMA/comments/1m19upn/so_how_do_i_finetime_a_local_model/ | false | false | self | 16 | null |
Help me figure out how ? | 5 | I am very new to this AI race, and I haven't figured out much yet, but I want to build something very interesting. 💭
I have seen some schools have very poor education facilities, and teachers don't have proper knowledge either.
I want to build a small app that can help students learn speaking English, Maths, and Sci... | 2025-07-16T10:59:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m19kfw/help_me_figure_out_how/ | Unlikely-Chicken3286 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m19kfw | false | null | t3_1m19kfw | /r/LocalLLaMA/comments/1m19kfw/help_me_figure_out_how/ | false | false | self | 5 | null |
getting acceleration on Intel integrated GPU/NPU | 9 | llama.cpp on CPU is easy.
AMD and integrated graphics is also easy, run via Vulkan (and ROCm) and receive noteable speedup. :-)
Intel integrated graphics via Vulkan is actually slower than CPU! :-(
For Intel there is Ipex-LLM (https://github.com/intel/ipex-llm), but I just can't figure out how to get all these depen... | 2025-07-16T10:56:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m19igi/getting_acceleration_on_intel_integrated_gpunpu/ | a_postgres_situation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m19igi | false | null | t3_1m19igi | /r/LocalLLaMA/comments/1m19igi/getting_acceleration_on_intel_integrated_gpunpu/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'YESuJ_bU8jo0mpY7aL2QrUfy4UvNYqHXfcZvSIHUzug', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YESuJ_bU8jo0mpY7aL2QrUfy4UvNYqHXfcZvSIHUzug.png?width=108&crop=smart&auto=webp&s=db096b6b0c3714c4d4d6fac5974b519fd4d83371', 'width': 108}, {'height': 108, 'url': 'h... |
Does llama.cpp support to run kimi-k2 with multi GPUs | 7 | Hey, I'm newbie with llama.cpp. I want to run kimi-k2 unsloth Q4 version on a 8xH20 server, but I cannot find any instruction for this. Is it possible? Or I should try other solution? | 2025-07-16T10:14:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m18tr9/does_llamacpp_support_to_run_kimik2_with_multi/ | Every_Bathroom_119 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m18tr9 | false | null | t3_1m18tr9 | /r/LocalLLaMA/comments/1m18tr9/does_llamacpp_support_to_run_kimik2_with_multi/ | false | false | self | 7 | null |
GitHub - boneylizard/Eloquent: A local front-end for open-weight LLMs with memory, RAG, TTS/STT, Elo ratings, and dynamic research tools. Built with React and FastAPI. | 31 | # 🚀 Just Dropped: Eloquent – A Local LLM Powerhouse
Hey LocalLLaMA! Just dropped **Eloquent** after 4 months of "just one more feature" syndrome.
Started as a basic chat interface... ended up as a full-stack, dual-GPU, memory-retaining AI companion.
Built entirely for local model users — by someone who actually us... | 2025-07-16T10:04:05 | https://github.com/boneylizard/Eloquent | Gerdel | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m18nke | false | null | t3_1m18nke | /r/LocalLLaMA/comments/1m18nke/github_boneylizardeloquent_a_local_frontend_for/ | false | false | default | 31 | null |
Open alternative to Dia / Comet AI Browsers - Can run w/ Local models | 9 | Connect your browser to AI models. No browser switching needed—works seamlessly with any Chromium browser including Chrome & Arc. | 2025-07-16T09:36:16 | https://github.com/aaronjmars/opendia | feekaj | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m187yw | false | null | t3_1m187yw | /r/LocalLLaMA/comments/1m187yw/open_alternative_to_dia_comet_ai_browsers_can_run/ | false | false | default | 9 | {'enabled': False, 'images': [{'id': 'o4Q6qMVvAneXpKmIjzKlyB1elh2tsIdhZF7cffXQdWs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o4Q6qMVvAneXpKmIjzKlyB1elh2tsIdhZF7cffXQdWs.png?width=108&crop=smart&auto=webp&s=e257c3fa569900a2715753e88e205feb02d6999d', 'width': 108}, {'height': 108, 'url': 'h... |
Ryzen AI 7 350 and rocm | 3 | I have an AMD pc AMD Ryzen™ AI 7 350 w/ Radeon™ 860M, and it doesn't seem to be compatible with rocm, has anyone managed to use rocm with this machine? | 2025-07-16T09:14:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m17wf2/ryzen_ai_7_350_and_rocm/ | Dear_Impression8189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m17wf2 | false | null | t3_1m17wf2 | /r/LocalLLaMA/comments/1m17wf2/ryzen_ai_7_350_and_rocm/ | false | false | self | 3 | null |
Running Ollama locally with a smooth UI and no technical skills | 0 | We've built a free Ollama client that might be useful for some of you. It lets you:
* Choose between different small models
* Upload files for analysis or summaries
* Do web searches
* Create and organize custom prompts
Runs on Windows, Mac, and laptops. If you don't have a decent GPU, there's an option to connect to... | 2025-07-16T08:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m17ify/running_ollama_locally_with_a_smooth_ui_and_no/ | Constant-Post-122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m17ify | false | null | t3_1m17ify | /r/LocalLLaMA/comments/1m17ify/running_ollama_locally_with_a_smooth_ui_and_no/ | false | false | self | 0 | null |
TTS models for realtime streaming | 2 | Help me find some realistic voice tts models which support realtime streaming of audio.
I have tried sesame/CSM but it doesn't support streaming which causes delay in generation. | 2025-07-16T08:14:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m1701z/tts_models_for_realtime_streaming/ | hustler0217 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m1701z | false | null | t3_1m1701z | /r/LocalLLaMA/comments/1m1701z/tts_models_for_realtime_streaming/ | false | false | self | 2 | null |
Official Local LLM support by AMD released. Lemonade | 58 | Can somebody test the performance of Gemma3 12B / 27B q4 on different modes ONNX, llamacpp, GPU, CPU, NPU ?
https://www.youtube.com/watch?v=mcf7dDybUco | 2025-07-16T07:53:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m16o6r/official_local_llm_support_by_amd_released/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m16o6r | false | null | t3_1m16o6r | /r/LocalLLaMA/comments/1m16o6r/official_local_llm_support_by_amd_released/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'C7vLvyc2VR5SHdy5lbc2lopLTghrszZiODshvSAbsCw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/C7vLvyc2VR5SHdy5lbc2lopLTghrszZiODshvSAbsCw.jpeg?width=108&crop=smart&auto=webp&s=20307eaee47e8d81001356a8a35bd8a5a6dbe244', 'width': 108}, {'height': 162, 'url': '... |
T5Gemma: A new collection of encoder-decoder Gemma models- Google Developers Blog | 145 | T5Gemma released a new encoder-decoder model. | 2025-07-16T07:46:29 | https://developers.googleblog.com/en/t5gemma/ | DeltaSqueezer | developers.googleblog.com | 1970-01-01T00:00:00 | 0 | {} | 1m16kdm | false | null | t3_1m16kdm | /r/LocalLLaMA/comments/1m16kdm/t5gemma_a_new_collection_of_encoderdecoder_gemma/ | false | false | 145 | {'enabled': False, 'images': [{'id': 'Ua_6ve9KH9QIRkyagh-mJMhThsokU2TkQCZbS7Cxv8k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ua_6ve9KH9QIRkyagh-mJMhThsokU2TkQCZbS7Cxv8k.png?width=108&crop=smart&auto=webp&s=72b5316e353260b40cfddbb1814bf1be5854d7c9', 'width': 108}, {'height': 108, 'url': 'h... | |
GPUs low utilization? | 18 | Love LocalLLM and have been hosting smaller models on my 4090 for a long time. Local LLM seems to be viable now so I got 2x 5090s. I'm trying to run Devstral small 8Q. It uses about 85-90% of the dual 5090 memory with full context.
The issue I'm having is they don't hit 100% utilization. Both GPUs sit at ab... | 2025-07-16T07:40:27 | rymn | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m16h0b | false | null | t3_1m16h0b | /r/LocalLLaMA/comments/1m16h0b/gpus_low_utilization/ | false | false | default | 18 | {'enabled': True, 'images': [{'id': 'kdwq8atcw6df1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/kdwq8atcw6df1.png?width=108&crop=smart&auto=webp&s=2a600e6922e30f4a88be9074b33181ff02b90084', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/kdwq8atcw6df1.png?width=216&crop=smart&auto=web... | |
Local RAG + LLM as a Narrative RPG Game Master — Does This Make Sense and How to Build It? | 9 | Hi everyone!
I’d like to get some advice and maybe inspiration from you all.
I’m thinking about building a local RAG setup, paired with a local LLM, that would act as a **narrative Game Master for RPGs**.
Here’s the idea:
🎲 I upload a knowledge base (e.g., vector DB or something else) with PDFs and/or markdown f... | 2025-07-16T07:07:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m15yss/local_rag_llm_as_a_narrative_rpg_game_master_does/ | goompas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m15yss | false | null | t3_1m15yss | /r/LocalLLaMA/comments/1m15yss/local_rag_llm_as_a_narrative_rpg_game_master_does/ | false | false | self | 9 | null |
Model recommendations for GPU-less server | 3 | Hi, I'm looking for general purpose chatbot models for a server. This is for testing the waters, nothing too serious yet.
The server has a Xeon 4314 and 128 GB DDR4-3200 ECC.
Thanks in advance! | 2025-07-16T06:20:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m157wo/model_recommendations_for_gpuless_server/ | Takia_Gecko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m157wo | false | null | t3_1m157wo | /r/LocalLLaMA/comments/1m157wo/model_recommendations_for_gpuless_server/ | false | false | self | 3 | null |
Meta's new ASI team discussed about abandoning Meta's powerful Open-source and focus on developing close | 201 | https://www.nytimes.com/2025/07/14/technology/meta-superintelligence-lab-ai.html | 2025-07-16T05:23:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m14a9j/metas_new_asi_team_discussed_about_abandoning/ | ILoveMy2Balls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m14a9j | false | null | t3_1m14a9j | /r/LocalLLaMA/comments/1m14a9j/metas_new_asi_team_discussed_about_abandoning/ | false | false | self | 201 | {'enabled': False, 'images': [{'id': '62QXtiCManuS6UimUaWcoUxH8gOETN8-9D6ljAVaZH0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/62QXtiCManuS6UimUaWcoUxH8gOETN8-9D6ljAVaZH0.jpeg?width=108&crop=smart&auto=webp&s=f431ed3ef795de81f0d9be2452ed2466f4727f88', 'width': 108}, {'height': 113, 'url': '... |
Could I be put in the right direction for the best model/s ive been using an app for chatting with bots but can't use it anymore due to circumstances and I'm totally new to this stuff | 0 | I don't know how models work or how to use them what's a simple explanation of how to do so? Could I just double click and the thing just runs? How would I get one to be mainly for chatting with? | 2025-07-16T05:15:04 | https://www.reddit.com/r/LocalLLaMA/comments/1m145cw/could_i_be_put_in_the_right_direction_for_the/ | Daglen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m145cw | false | null | t3_1m145cw | /r/LocalLLaMA/comments/1m145cw/could_i_be_put_in_the_right_direction_for_the/ | false | false | self | 0 | null |
Can someone nudge me into the right direction for creating MCPs using Local models. Tutorials or articles or something. | 0 | I am a college student and can't really find articles on running MCPs using local Modles, The hugging face MCP course is a little hard to follow. It would be helpful if you guys can provide me some documentations or articles. | 2025-07-16T05:11:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m142vi/can_someone_nudge_me_into_the_right_direction_for/ | Resident_Acadia_4798 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m142vi | false | null | t3_1m142vi | /r/LocalLLaMA/comments/1m142vi/can_someone_nudge_me_into_the_right_direction_for/ | false | false | self | 0 | null |
DIY Voice Chat with Local LLMs on iOS/Mac: Apple Shortcut Using LM Studio + Kokoro-FastAPI (Free & Private) | 6 | I built this shortcut for hands-free, privacy-focused chatting with local AI characters. No cloud services needed, runs on your machine with voice input/output. Here's how it works and how to set it up.
This shortcut as currently configured has a few prerequisites:
* Install LM Studio (from lmstudio.ai) and download ... | 2025-07-16T04:54:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m13t7g/diy_voice_chat_with_local_llms_on_iosmac_apple/ | local-foreigner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m13t7g | false | null | t3_1m13t7g | /r/LocalLLaMA/comments/1m13t7g/diy_voice_chat_with_local_llms_on_iosmac_apple/ | false | false | 6 | null | |
Mistral releases open-source speech recognition model | 1 |
Mistral has released an open source speech recognition model that looks really good. It looks like it performs a lot better than Whisper Large V3, which I’ve relied on for transcription tasks.
The models are available on huggingface and here’s a blogpost from the devs with more details: https://mistral.ai/news/voxtr... | 2025-07-16T04:34:27 | WallstreetChump | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m13htm | false | null | t3_1m13htm | /r/LocalLLaMA/comments/1m13htm/mistral_releases_opensource_speech_recognition/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'p5qkiaxxz5df1', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/p5qkiaxxz5df1.jpeg?width=108&crop=smart&auto=webp&s=d9b5cba40c2e679bf70b34d2770203f35de45068', 'width': 108}, {'height': 222, 'url': 'https://preview.redd.it/p5qkiaxxz5df1.jpeg?width=216&crop=smart&auto=... | |
Seeking advice: Which Ollama model should I run on my modest laptop? | 0 | **Hi everyone,**
I’m looking to run an Ollama model locally for building my AI assistant, but my laptop isn’t so powerful. Here are my current specs:
Dell Latitude 3500
8 GB RAM
Intel Core i3‑8145U (4 cores)
Intel UHD Graphics 620
Ubuntu 24.04
I know these specs aren’t ideal, but I’d love your help figuring out ... | 2025-07-16T04:28:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m13eg0/seeking_advice_which_ollama_model_should_i_run_on/ | AerieExotic342 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m13eg0 | false | null | t3_1m13eg0 | /r/LocalLLaMA/comments/1m13eg0/seeking_advice_which_ollama_model_should_i_run_on/ | false | false | self | 0 | null |
AMD Radeon AI PRO R9700 32 GB GPU Listed Online, Pricing Expected Around $1250, Half The Price of NVIDIA's RTX PRO "Blackwell" With 24 GB VRAM | 249 | Said it when this was presented that will have MSRP around RTX5080 since AMD decided to bench it against that card and not some workstation grade RTX.... 🥳
| 2025-07-16T04:28:39 | https://wccftech.com/amd-radeon-ai-pro-r9700-32-gb-gpu-listed-pricing-around-1250-half-price-nvidia-rtx-pro-blackwell-24-gb/ | Rich_Repeat_22 | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1m13eb2 | false | null | t3_1m13eb2 | /r/LocalLLaMA/comments/1m13eb2/amd_radeon_ai_pro_r9700_32_gb_gpu_listed_online/ | false | false | 249 | {'enabled': False, 'images': [{'id': 'q7mve0pWQXapLu3eVRUlgERITl5WzhQmx_iowI8HRh8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/q7mve0pWQXapLu3eVRUlgERITl5WzhQmx_iowI8HRh8.png?width=108&crop=smart&auto=webp&s=7e7da05caae6d75f7cdd0e822908efc42a62fe6e', 'width': 108}, {'height': 121, 'url': 'h... | |
🚨 Docker container stuck on “Waiting for application startup” — Open WebUI won’t load in browser | 0 | Hi folks — hoping someone can help me finally crack this.
I’m trying to run Open WebUI (ghcr.io/open-webui/open-webui:main) via Docker on my Windows machine, connected to a locally running Ollama server, but the WebUI refuses to show up in the browser.
---
🛠️ Setup Details
OS: Windows 11 using Docker Desktop (WSL... | 2025-07-16T03:41:50 | https://www.reddit.com/r/LocalLLaMA/comments/1m12ij7/docker_container_stuck_on_waiting_for_application/ | 0nlyAxeman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m12ij7 | false | null | t3_1m12ij7 | /r/LocalLLaMA/comments/1m12ij7/docker_container_stuck_on_waiting_for_application/ | false | false | self | 0 | null |
Cool back drop | 1 | 2025-07-16T03:15:59 | https://seamlessss.etsy.com/listing/4335964842 | BillIcy1555 | seamlessss.etsy.com | 1970-01-01T00:00:00 | 0 | {} | 1m11zyj | false | null | t3_1m11zyj | /r/LocalLLaMA/comments/1m11zyj/cool_back_drop/ | false | false | default | 1 | null | |
Use claudecode with local models | 108 | So I have had FOMO on claudecode, but I refuse to give them my prompts or pay $100-$200 a month. So 2 days ago, I saw that moonshot provides an anthropic API to kimi k2 so folks could use it with claude code. Well, many folks are already doing that with local. So if you don't know, now you know. This is how I did... | 2025-07-16T02:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m118is/use_claudecode_with_local_models/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m118is | false | null | t3_1m118is | /r/LocalLLaMA/comments/1m118is/use_claudecode_with_local_models/ | false | false | self | 108 | {'enabled': False, 'images': [{'id': 'QeKKT8LaH96at-4kJT3oqlCjE7lEbnowQ_YcqEz3vg8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QeKKT8LaH96at-4kJT3oqlCjE7lEbnowQ_YcqEz3vg8.png?width=108&crop=smart&auto=webp&s=64c4232e878310d5cd2b2010b00def92a85e8dfe', 'width': 108}, {'height': 108, 'url': 'h... |
Grok 3 just leak me it's system prompt | 0 | I've been testing the limits of xAI's Grok 3 and requested that it edit an image-generation prompt to make it not safe for work
to my surprise it started to do this without any questions, but suddenly started to output its [system prompt](https://pastebin.com/xbBMgDsv)
[link to this chat](https://grok.com/share/bGVnYWN5_3e39dcf5... | 2025-07-16T02:23:57 | Gasased | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m10y6t | false | null | t3_1m10y6t | /r/LocalLLaMA/comments/1m10y6t/grok_3_just_leak_me_its_system_prompt/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '9k2urywjb5df1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/9k2urywjb5df1.png?width=108&crop=smart&auto=webp&s=a249a707abedcccde742171661de0829f8c64e33', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/9k2urywjb5df1.png?width=216&crop=smart&auto=web... | |
Finally, an LLM Router That Thinks Like an Engineer - And Its Local | 12 | 🔗 Model + code: [https://huggingface.co/katanemo/Arch-Router-1.5B](https://huggingface.co/katanemo/Arch-Router-1.5B)
📄 Paper / longer read: [https://arxiv.org/abs/2506.16655](https://arxiv.org/abs/2506.16655)
Integrated and available via Arch: [https://github.com/katanemo/archgw](https://github.com/katanemo/archg... | 2025-07-16T02:10:26 | https://medium.com/@dracattusdev/finally-an-llm-router-that-thinks-like-an-engineer-96ccd8b6a24e | AdditionalWeb107 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1m10o3o | false | null | t3_1m10o3o | /r/LocalLLaMA/comments/1m10o3o/finally_an_llm_router_that_thinks_like_an/ | false | false | default | 12 | null |
Best local LLM right now for translating chinese to english? | 1 | [removed] | 2025-07-16T02:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m10nk9/best_local_llm_right_now_for_translating_chinese/ | killerkrieger567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m10nk9 | false | null | t3_1m10nk9 | /r/LocalLLaMA/comments/1m10nk9/best_local_llm_right_now_for_translating_chinese/ | false | false | self | 1 | null |
Help with building ollama with a custom llama.cpp (crossposted) | 2 | Hi all,
I realize ollama is probably not the best for my use case, but, for a variety of reasons, it's what I need to get done at present. I'm actively working on refactoring some things, but I'm hoping there's a faster solution before that's all done.
In short, I have access to a server with some fairly beefy GPUs. ... | 2025-07-16T02:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m10k45/help_with_building_ollama_with_a_custom_llamacpp/ | 1337HxC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m10k45 | false | null | t3_1m10k45 | /r/LocalLLaMA/comments/1m10k45/help_with_building_ollama_with_a_custom_llamacpp/ | false | false | self | 2 | null |
Obsidian note summarizer using local LLMs | 22 | 2025-07-16T02:04:17 | https://github.com/rosmur/obsidian-summairize/ | rm-rf-rm | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m10jln | false | null | t3_1m10jln | /r/LocalLLaMA/comments/1m10jln/obsidian_note_summarizer_using_local_llms/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': 'yuy9PgfOKjGlILvMppfg7R35e-ilKg7LUqrLRKWCHk4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yuy9PgfOKjGlILvMppfg7R35e-ilKg7LUqrLRKWCHk4.png?width=108&crop=smart&auto=webp&s=4cca3180c9163dcef1b9a25168dd8b685f6abe34', 'width': 108}, {'height': 108, 'url': 'h... | |
[WANTED] Moonshot K2 Jailbreak | 0 | Title says it all | 2025-07-16T01:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m101yf/wanted_moonshot_k2_jailbreak/ | lQEX0It_CUNTY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m101yf | false | null | t3_1m101yf | /r/LocalLLaMA/comments/1m101yf/wanted_moonshot_k2_jailbreak/ | false | false | self | 0 | null |
New documentation / explainer for GGUF quantization | 59 | There's surprisingly little documentation on how GGUF quantization works, including legacy / I-quants / K-quants and the importance matrix. The maintainers made it [pretty clear](https://github.com/ggml-org/llama.cpp/pull/1684#issuecomment-2474462323) it's not their priority to write a paper either. Currently, people ... | 2025-07-16T01:35:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m0zy1a/new_documentation_explainer_for_gguf_quantization/ | mojojojo_24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0zy1a | false | null | t3_1m0zy1a | /r/LocalLLaMA/comments/1m0zy1a/new_documentation_explainer_for_gguf_quantization/ | false | false | self | 59 | {'enabled': False, 'images': [{'id': 'zaGl1D40Hn-ZY9KDt9kXSxW7nLGuoBI58wYtEq7T0Tc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zaGl1D40Hn-ZY9KDt9kXSxW7nLGuoBI58wYtEq7T0Tc.png?width=108&crop=smart&auto=webp&s=cee1747609c287cc1d089b63ff32cf4b0392dd34', 'width': 108}, {'height': 108, 'url': 'h... |
Unofficial docs / explainer for GGUF quantization | 1 | There's surprisingly little documentation on how GGUF quantization works, including legacy / I-quants / K-quants and the importance matrix. The maintainers [made it pretty clear](https://github.com/ggml-org/llama.cpp/pull/1684#issuecomment-2474462323) it's not their priority to write a paper either. Currently, people ... | 2025-07-16T01:33:12 | https://www.reddit.com/r/LocalLLaMA/comments/1m0zwc3/unofficial_docs_explainer_for_gguf_quantization/ | mojojojo_24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0zwc3 | false | null | t3_1m0zwc3 | /r/LocalLLaMA/comments/1m0zwc3/unofficial_docs_explainer_for_gguf_quantization/ | false | false | self | 1 | null |
What is the most underrated model in your opinion? | 9 | I’m not talking about models that are uncommon like qwen 3 0.6b or glm4 32b since those are still benchmark toppers I’m talking like really underrated ill start first with phi4(non reasoning) and exaone deep:7.8b(which gave me the idea for this post) | 2025-07-16T01:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m0z8sn/what_is_the_most_underrated_model_in_your_opinion/ | yeet5566 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0z8sn | false | null | t3_1m0z8sn | /r/LocalLLaMA/comments/1m0z8sn/what_is_the_most_underrated_model_in_your_opinion/ | false | false | self | 9 | null |
Visualization for MuonClip | 13 | Hey this is Benny from Fireworks. There is a lot of interest around Kimi over the last few days, and I wanted to share some visualization I built to help myself understand MuonClip. Check out https://muon-clip-app-644257448872.us-central1.run.app/ and let me know if you have thoughts or feedback! https://x.com/the_bunn... | 2025-07-16T01:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m0z80y/visualization_for_muonclip/ | Civ6forthewin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0z80y | false | null | t3_1m0z80y | /r/LocalLLaMA/comments/1m0z80y/visualization_for_muonclip/ | false | false | self | 13 | null |
Your unpopular takes on LLMs | 527 | Mine are: 1. All the popular public benchmarks are nearly worthless when it comes to a model's general ability. Literaly the only good thing we get out of them is a rating for "can the model regurgitate the answers to questions the devs made sure it was trained on repeatedly to get higher benchmarks, without fucking i... | 2025-07-16T00:52:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m0z1zx/your_unpopular_takes_on_llms/ | dtdisapointingresult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0z1zx | false | null | t3_1m0z1zx | /r/LocalLLaMA/comments/1m0z1zx/your_unpopular_takes_on_llms/ | false | false | self | 527 | null |
How is the new Grok AI girlfriend animation implemented? | 8 | Looks pretty expressive: https://www.youtube.com/shorts/G8bd-uloo48. I tried on their App, all things (text, audio, lip sync, body movement) are generated in real time. How do they implement that? Is there any open source work to achieve similar results? | 2025-07-16T00:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m0yw9z/how_is_the_new_grok_ai_girlfriend_animation/ | EvilKY45 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0yw9z | false | null | t3_1m0yw9z | /r/LocalLLaMA/comments/1m0yw9z/how_is_the_new_grok_ai_girlfriend_animation/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'puy5ocqs3_L4pQOcJ1QkmtC6jnDwYIiT6NKOc_rbCTA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/puy5ocqs3_L4pQOcJ1QkmtC6jnDwYIiT6NKOc_rbCTA.jpeg?width=108&crop=smart&auto=webp&s=ef4ebaaaae3cef35510b84bf647212dee97c32b8', 'width': 108}, {'height': 162, 'url': '... |