| title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–41.5k chars) | created (timestamp, 2023-04-01 – 2026-03-04) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646–1.8k chars) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Terminal AI Assistant - Combining OpenWebUI + Open-Interpreter in the terminal with full Ollama support | 1 | [removed] | 2025-06-13T20:59:21 | https://www.reddit.com/r/LocalLLaMA/comments/1laqykx/terminal_ai_assistant_combining_openwebui/ | MotorNetwork380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laqykx | false | null | t3_1laqykx | /r/LocalLLaMA/comments/1laqykx/terminal_ai_assistant_combining_openwebui/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UKh2Z6pP1sbZcYmjUu_EdKXIjmuiUuN46DnOkWmBZyI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UKh2Z6pP1sbZcYmjUu_EdKXIjmuiUuN46DnOkWmBZyI.png?width=108&crop=smart&auto=webp&s=159a99241a969c1dc275ea82158e0953e72badb6', 'width': 108}, {'height': 108, 'url': 'h... |
Conversation with an LLM that knows itself | 0 | I have been working on LYRN, Living Yield Relational Network, for the last few months and while I am still working with investors and lawyers to release this properly I want to share something with you. I do in my heart and soul believe this should be open source. I want everyone to be able to have a real AI that actua... | 2025-06-13T20:30:59 | https://github.com/bsides230/LYRN/blob/main/Greg%20Conversation%20Test%202.txt | PayBetter | github.com | 1970-01-01T00:00:00 | 0 | {} | 1laqaxr | false | null | t3_1laqaxr | /r/LocalLLaMA/comments/1laqaxr/conversation_with_an_llm_that_knows_itself/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'gIcATli92FFsjho6mfkSSR4LUGVSRsyL5bBD7w4Ty5o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gIcATli92FFsjho6mfkSSR4LUGVSRsyL5bBD7w4Ty5o.png?width=108&crop=smart&auto=webp&s=d3e5a094b0485bbda4a2f3f4c46d9fb4ff5520c3', 'width': 108}, {'height': 108, 'url': 'h... |
Why build RAG apps when ChatGPT already supports RAG? | 1 | [removed] | 2025-06-13T20:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/1laq03u/why_build_rag_apps_when_chatgpt_already_supports/ | dagm10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laq03u | false | null | t3_1laq03u | /r/LocalLLaMA/comments/1laq03u/why_build_rag_apps_when_chatgpt_already_supports/ | false | false | self | 1 | null |
[Build Help] Budget AI/LLM Rig for Multi-Agent Systems (AutoGen, GGUF, Local Inference) – GPU Advice Needed | 1 | [removed] | 2025-06-13T20:08:43 | https://www.reddit.com/r/LocalLLaMA/comments/1laprwy/build_help_budget_aillm_rig_for_multiagent/ | Direct_Chemistry_339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laprwy | false | null | t3_1laprwy | /r/LocalLLaMA/comments/1laprwy/build_help_budget_aillm_rig_for_multiagent/ | false | false | self | 1 | null |
[Build Help] Budget AI/LLM Rig for Multi-Agent Systems (AutoGen, GGUF, Local Inference) – GPU Advice Needed | 1 | [removed] | 2025-06-13T19:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/1laphvy/build_help_budget_aillm_rig_for_multiagent/ | Direct_Chemistry_339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laphvy | false | null | t3_1laphvy | /r/LocalLLaMA/comments/1laphvy/build_help_budget_aillm_rig_for_multiagent/ | false | false | self | 1 | null |
Any LLM Leaderboard by need VRAM Size? | 31 | Hey, does anyone already know of a leaderboard sorted by VRAM usage?
For example with quantization, where we can compare a small q8 model vs a large q2 model?
Where's the place to find the best model for 96GB VRAM + 4-8k context with good output speed? | 2025-06-13T19:38:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lap21a/any_llm_leaderboard_by_need_vram_size/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lap21a | false | null | t3_1lap21a | /r/LocalLLaMA/comments/1lap21a/any_llm_leaderboard_by_need_vram_size/ | false | false | self | 31 | null
Western vs Eastern models | 0 | Do you avoid or embrace? | 2025-06-13T19:19:28 | https://youtu.be/0p2mCeub3WA | santovalentino | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1laol6j | false | {'oembed': {'author_name': 'Shawn Ryan Clips', 'author_url': 'https://www.youtube.com/@ShawnRyanClips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/0p2mCeub3WA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; ... | t3_1laol6j | /r/LocalLLaMA/comments/1laol6j/western_vs_eastern_models/ | false | false | 0 | {'enabled': False, 'images': [{'id': '4S07oQ1u2W9TBSLPOC9HzVSTLU3yjUy5vaBxdGyLCQI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/4S07oQ1u2W9TBSLPOC9HzVSTLU3yjUy5vaBxdGyLCQI.jpeg?width=108&crop=smart&auto=webp&s=3725c2b0660225a9192f8e935d3b815ef3b3e2a3', 'width': 108}, {'height': 162, 'url': '... | |
Best Speech Enhancement Models out there | 1 | [removed] | 2025-06-13T19:17:43 | https://www.reddit.com/r/LocalLLaMA/comments/1laojmq/best_speech_enhancement_models_out_there/ | OstrichSerious6755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laojmq | false | null | t3_1laojmq | /r/LocalLLaMA/comments/1laojmq/best_speech_enhancement_models_out_there/ | false | false | self | 1 | null |
unexpected keyword argument 'code_background_fill_dark' | 1 | [removed] | 2025-06-13T19:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1laof2g/unexpected_keyword_argument_code_background_fill/ | BrushVisible6282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laof2g | false | null | t3_1laof2g | /r/LocalLLaMA/comments/1laof2g/unexpected_keyword_argument_code_background_fill/ | false | false | self | 1 | null |
3090 Bandwidth Calculation Help | 7 | Quoted bandwidth is 956 GB/s
(384 bits x 1.219 GHz clock x 2) / 8 = 117 GB/s
What am I missing here? I’m off by a factor of 8. Is it something to do with GDDR6X memory? | 2025-06-13T19:05:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lao8ri/3090_bandwidth_calculation_help/ | skinnyjoints | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lao8ri | false | null | t3_1lao8ri | /r/LocalLLaMA/comments/1lao8ri/3090_bandwidth_calculation_help/ | false | false | self | 7 | null |
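The missing factor is the per-pin data rate: GDDR6X transfers 16 bits per pin per memory-clock cycle (PAM4 signaling on a faster data clock), not the 2 assumed by a plain DDR calculation, which accounts for the factor-of-8 gap. A quick sketch, assuming the 3090's commonly cited 19.5 Gbps per-pin rate (note NVIDIA's own spec sheet lists 936 GB/s for the 3090, so the 956 figure above may itself be off):

```python
# GDDR6X bandwidth sketch for an assumed RTX 3090 configuration.
# The x2 in the question is the DDR factor alone; GDDR6X moves
# 16 bits per pin per memory clock, so the multiplier is 16, not 2.
bus_width_bits = 384
memory_clock_hz = 1.219e9      # base memory clock from the post
bits_per_pin_per_clock = 16    # GDDR6X: 19.5 Gbps per pin / 1.219 GHz = 16

per_pin_gbps = memory_clock_hz * bits_per_pin_per_clock / 1e9
bandwidth_gb_s = bus_width_bits * memory_clock_hz * bits_per_pin_per_clock / 8 / 1e9

print(f"{per_pin_gbps:.1f} Gbps per pin")  # ~19.5 Gbps
print(f"{bandwidth_gb_s:.0f} GB/s")        # ~936 GB/s
```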
We built an emotionally intelligent AI that runs entirely offline — no cloud, no APIs. Meet Vanta. | 1 | [removed] | 2025-06-13T18:50:41 | https://v.redd.it/reskaw31oq6f1 | VantaAI | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lanw5x | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/reskaw31oq6f1/DASHPlaylist.mpd?a=1752432657%2CYWVlNDAzZGVmODY5ODI2ZjE0ZGM0ZTUzZGQwNTQ5ZmYzMTM1MzhkYjhmMmFjMjY2NDlmNDMyNGJmYWM0NzgyOA%3D%3D&v=1&f=sd', 'duration': 2, 'fallback_url': 'https://v.redd.it/reskaw31oq6f1/DASH_480.mp4?source=fallback', 'has... | t3_1lanw5x | /r/LocalLLaMA/comments/1lanw5x/we_built_an_emotionally_intelligent_ai_that_runs/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MmtoamJ3MzFvcTZmMSq-7gshwT7VGyor5NvjKIgVTCHjy6SaiefJkV9AuZf2', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MmtoamJ3MzFvcTZmMSq-7gshwT7VGyor5NvjKIgVTCHjy6SaiefJkV9AuZf2.png?width=108&crop=smart&format=pjpg&auto=webp&s=d0898cc1dfe8f538e7cc932549186137dab1... | |
Qwen3 235B running faster than 70B models on a $1,500 PC | 171 | I ran Qwen3 235B locally on a $1,500 PC (128GB RAM, RTX 3090) using the Q4 quantized version through Ollama.
This is the first time I was able to run anything over 70B on my system, and it’s actually running faster than most 70B models I’ve tested.
Final generation speed: 2.14 t/s
Full video here:
[https://youtu.b... | 2025-06-13T18:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lanri6/qwen3_235b_running_faster_than_70b_models_on_a/ | 1BlueSpork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lanri6 | false | null | t3_1lanri6 | /r/LocalLLaMA/comments/1lanri6/qwen3_235b_running_faster_than_70b_models_on_a/ | false | false | self | 171 | {'enabled': False, 'images': [{'id': 'x2fxFGjLBkyoq5iDyNVY_lhIyJ3DXdevY96smhEiyBs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/x2fxFGjLBkyoq5iDyNVY_lhIyJ3DXdevY96smhEiyBs.jpeg?width=108&crop=smart&auto=webp&s=9c539a47ae57f8c2b609e996368e70019240490a', 'width': 108}, {'height': 162, 'url': '... |
[Question] Does anyone know how to call tools using Runpod serverless endpoint? | 0 | I have a simple vLLM endpoint configured on Runpod and I'm wondering how to send tool configs. I've searched the Runpod API docs and can't seem to find any info. Maybe its passed directly to vLLM? Thank you.
The sample requests look like so
```json
{
"input": {
"prompt": "Hello World"
}
}
``` | 2025-06-13T18:39:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lanm91/question_does_anyone_know_how_to_call_tools_using/ | soorg_nalyd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lanm91 | false | null | t3_1lanm91 | /r/LocalLLaMA/comments/1lanm91/question_does_anyone_know_how_to_call_tools_using/ | false | false | self | 0 | null |
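If the serverless worker forwards OpenAI-style chat fields through to vLLM's OpenAI-compatible layer (an assumption worth checking against the worker's handler code; field names may differ), a tool-calling request might look like this sketch. The `get_weather` tool is hypothetical:

```python
import json

# Hypothetical payload sketch: assumes the Runpod vLLM worker passes
# "messages"/"tools" inside "input" straight to vLLM's OpenAI-compatible
# chat endpoint. Verify against your endpoint's handler before relying on it.
payload = {
    "input": {
        "messages": [
            {"role": "user", "content": "What's the weather in Oslo?"}
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool name
                    "description": "Look up current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",
    }
}

print(json.dumps(payload, indent=2))
```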
We don't want AI yes-men. We want AI with opinions | 347 | Been noticing something interesting in AI friend character models - the most beloved AI characters aren't the ones that agree with everything. They're the ones that push back, have preferences, and occasionally tell users they're wrong.
It seems counterintuitive. You'd think people want AI that validates everything th... | 2025-06-13T18:33:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lanhbd/we_dont_want_ai_yesmen_we_want_ai_with_opinions/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lanhbd | false | null | t3_1lanhbd | /r/LocalLLaMA/comments/1lanhbd/we_dont_want_ai_yesmen_we_want_ai_with_opinions/ | false | false | self | 347 | null |
We don't want AI yes-men. We want AI with opinions | 1 | The most beloved AI characters aren't the ones that agree with everything. They're the ones that push back, have preferences, and occasionally tell users they're wrong. | 2025-06-13T18:31:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lanfs1/we_dont_want_ai_yesmen_we_want_ai_with_opinions/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lanfs1 | false | null | t3_1lanfs1 | /r/LocalLLaMA/comments/1lanfs1/we_dont_want_ai_yesmen_we_want_ai_with_opinions/ | false | false | self | 1 | null |
We don't want AI yes-men. We want AI with opinions | 1 | [removed] | 2025-06-13T18:31:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lanf04/we_dont_want_ai_yesmen_we_want_ai_with_opinions/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lanf04 | false | null | t3_1lanf04 | /r/LocalLLaMA/comments/1lanf04/we_dont_want_ai_yesmen_we_want_ai_with_opinions/ | false | false | self | 1 | null |
Are local installations safe? | 1 | [removed] | 2025-06-13T18:24:39 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lan995 | false | null | t3_1lan995 | /r/LocalLLaMA/comments/1lan995/are_local_installations_safe/ | false | false | default | 1 | null | ||
Chinese researchers find multi-modal LLMs develop interpretable human-like conceptual representations of objects | 131 | 2025-06-13T17:33:35 | https://arxiv.org/abs/2407.01067 | xoexohexox | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1lalyy5 | false | null | t3_1lalyy5 | /r/LocalLLaMA/comments/1lalyy5/chinese_researchers_find_multimodal_llms_develop/ | false | false | default | 131 | null | |
Uncensored/Un-restricted LLms | 1 | [removed] | 2025-06-13T17:29:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lalv80/uncensoredunrestricted_llms/ | Outrageous-Hall9692 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lalv80 | false | null | t3_1lalv80 | /r/LocalLLaMA/comments/1lalv80/uncensoredunrestricted_llms/ | false | false | self | 1 | null |
Anyone else notice the downgrade in coding ability of LLMs? | 1 | [removed] | 2025-06-13T17:26:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lalstq/anyone_else_notice_the_downgrade_in_coding/ | Awkward-Hedgehog-572 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lalstq | false | null | t3_1lalstq | /r/LocalLLaMA/comments/1lalstq/anyone_else_notice_the_downgrade_in_coding/ | false | false | self | 1 | null |
Which is the Best TTS Model for Language Training? | 0 | Which is the best TTS Model for fine tuning it on a specific language to get the best outputs possible? | 2025-06-13T17:16:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lalj20/which_is_the_best_tts_model_for_language_training/ | RGBGraphicZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lalj20 | false | null | t3_1lalj20 | /r/LocalLLaMA/comments/1lalj20/which_is_the_best_tts_model_for_language_training/ | false | false | self | 0 | null |
[D] The Huge Flaw in LLMs’ Logic | 1 | [removed] | 2025-06-13T17:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lal9hs/d_the_huge_flaw_in_llms_logic/ | Pale-Entertainer-386 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lal9hs | false | null | t3_1lal9hs | /r/LocalLLaMA/comments/1lal9hs/d_the_huge_flaw_in_llms_logic/ | false | false | self | 1 | null |
Modding 3090 for Server Use | 1 | [removed] | 2025-06-13T16:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lakt3h/modding_3090_for_server_use/ | PleasantCandidate785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lakt3h | false | null | t3_1lakt3h | /r/LocalLLaMA/comments/1lakt3h/modding_3090_for_server_use/ | false | false | self | 1 | null |
Modding 3090 for Server Use | 1 | [removed] | 2025-06-13T16:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lakqwz/modding_3090_for_server_use/ | PleasantCandidate785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lakqwz | false | null | t3_1lakqwz | /r/LocalLLaMA/comments/1lakqwz/modding_3090_for_server_use/ | false | false | self | 1 | null |
Open Source Release: Fastest Embeddings Client in Python | 9 | We published a simple OpenAI /v1/embeddings client in Rust, provided as a Python package under MIT. The package is available as `pip install baseten-performance-client` and provides a 12x speedup over `pip install openai`.
The client works with [baseten.co](http://baseten.co/), [api.openai.com](http://api.open... | 2025-06-13T16:33:47 | https://github.com/basetenlabs/truss/tree/main/baseten-performance-client | Top-Bid1216 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lakgin | false | null | t3_1lakgin | /r/LocalLLaMA/comments/1lakgin/open_source_release_fastest_embeddings_client_in/ | false | false | default | 9 | null |
Open Source Release: Fastest Embeddings Client for Python | 1 | [removed] | 2025-06-13T16:32:37 | Top-Bid1216 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lakfhr | false | null | t3_1lakfhr | /r/LocalLLaMA/comments/1lakfhr/open_source_release_fastest_embeddings_client_for/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'y9qjwebz1q6f1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/y9qjwebz1q6f1.png?width=108&crop=smart&auto=webp&s=42a09f8c8b473f906fc7b4832f936adcdbcabb19', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/y9qjwebz1q6f1.png?width=216&crop=smart&auto=web... | |
Open Source Release: Fastest Embeddings Client for Python. | 1 | [removed] | 2025-06-13T16:30:47 | Top-Bid1216 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lakdtw | false | null | t3_1lakdtw | /r/LocalLLaMA/comments/1lakdtw/open_source_release_fastest_embeddings_client_for/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'xnioaeqs0q6f1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/xnioaeqs0q6f1.png?width=108&crop=smart&auto=webp&s=7862b6a0c34824414e899014413a6bec9e896164', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/xnioaeqs0q6f1.png?width=216&crop=smart&auto=web... | |
Findings from Apple's new FoundationModel API and local LLM | 77 | **Liquid glass: 🥱. Local LLM: 🤩🚀**
**TL;DR**: I wrote some code to benchmark Apple's foundation model. I failed, but learned a few things. The API is rich and powerful, the model is very small and efficient, you can do LoRAs, constrained decoding, tool calling. Trying to run evals exposes rough edges and interestin... | 2025-06-13T16:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lak9yb/findings_from_apples_new_foundationmodel_api_and/ | pcuenq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lak9yb | false | null | t3_1lak9yb | /r/LocalLLaMA/comments/1lak9yb/findings_from_apples_new_foundationmodel_api_and/ | false | false | self | 77 | {'enabled': False, 'images': [{'id': 'KZ0ON0OKjBBz6vXp2jNfndg6djstOSRTVC4kItZPybE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KZ0ON0OKjBBz6vXp2jNfndg6djstOSRTVC4kItZPybE.jpeg?width=108&crop=smart&auto=webp&s=927d54fb37b132e58ca401ada7f848c1f555f3c1', 'width': 108}, {'height': 113, 'url': '... |
For those of us outside the U.S or other English speaking countries... | 16 | I was pondering an idea of building an LLM that is trained on very locale-specific data, i.e, data about local people, places, institutions, markets, laws, etc. that have to do with say Uruguay for example.
Hear me out. Because the internet predominantly caters to users who speak English and primarily deals with the "... | 2025-06-13T16:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lajy3x/for_those_of_us_outside_the_us_or_other_english/ | redd_dott | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lajy3x | false | null | t3_1lajy3x | /r/LocalLLaMA/comments/1lajy3x/for_those_of_us_outside_the_us_or_other_english/ | false | false | self | 16 | null |
Mac silicon AI: MLX LLM (Llama 3) + MPS TTS = Offline Voice Assistant for M-chips | 19 | # Mac silicon AI: MLX LLM (Llama 3) + MPS TTS = Offline Voice Assistant for M-chips
**Hi, this is my first post so I'm kind of nervous, so bear with me. Yes, I used ChatGPT's help, but I still hope someone finds this code useful.**
**I had a hard time finding a fast way to get a LLM + TTS code to easily create an assist... | 2025-06-13T15:58:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lajkwa/mac_silicon_ai_mlx_llm_llama_3_mps_tts_offline/ | Antique-Ingenuity-97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lajkwa | false | null | t3_1lajkwa | /r/LocalLLaMA/comments/1lajkwa/mac_silicon_ai_mlx_llm_llama_3_mps_tts_offline/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'FROK4VZvK_U07zwoXhlJJ3unop1AdpQDzTySqCkraLE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FROK4VZvK_U07zwoXhlJJ3unop1AdpQDzTySqCkraLE.png?width=108&crop=smart&auto=webp&s=023608355f05e24c302bb838058afb518c50a1de', 'width': 108}, {'height': 108, 'url': 'h... |
🚀 IdeaWeaver: The All-in-One GenAI Power Tool You’ve Been Waiting For! | 2 | Tired of juggling a dozen different tools for your GenAI projects? With new AI tech popping up every day, it’s hard to find a single solution that does it all, until now.
**Meet IdeaWeaver: Your One-Stop Shop for GenAI**
Whether you want to:
* ✅ Train your own models
* ✅ Download and manage models
* ✅ Push to any mo... | 2025-06-13T15:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/1laj9wq/ideaweaver_the_allinone_genai_power_tool_youve/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laj9wq | false | null | t3_1laj9wq | /r/LocalLLaMA/comments/1laj9wq/ideaweaver_the_allinone_genai_power_tool_youve/ | false | false | 2 | null | |
Planning a 7–8B Model Benchmark on 8GB GPU — What Should I Test & Measure? | 1 | [removed] | 2025-06-13T15:13:22 | https://www.reddit.com/r/LocalLLaMA/comments/1laigxm/planning_a_78b_model_benchmark_on_8gb_gpu_what/ | kekePower | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laigxm | false | null | t3_1laigxm | /r/LocalLLaMA/comments/1laigxm/planning_a_78b_model_benchmark_on_8gb_gpu_what/ | false | false | self | 1 | null |
Fine-tuning Mistral-7B efficiently for industrial task planning | 1 | [removed] | 2025-06-13T14:58:02 | https://www.reddit.com/r/LocalLLaMA/comments/1lai2p7/finetuning_mistral7b_efficiently_for_industrial/ | Head_Mushroom_3748 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lai2p7 | false | null | t3_1lai2p7 | /r/LocalLLaMA/comments/1lai2p7/finetuning_mistral7b_efficiently_for_industrial/ | false | false | self | 1 | null |
You thin AGI is comen soon? ChatGPT "Absolutely Wrecked" at Chess by Atari 2600 Console From 1977 | 1 | [removed] | 2025-06-13T14:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lagxbj/you_thin_agi_is_comen_soon_chatgpt_absolutely/ | More-Plantain491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lagxbj | false | null | t3_1lagxbj | /r/LocalLLaMA/comments/1lagxbj/you_thin_agi_is_comen_soon_chatgpt_absolutely/ | false | false | self | 1 | null |
The downgrade of LLMs for coding | 1 | [removed] | 2025-06-13T14:08:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lagv03/the_downgrade_of_llms_for_coding/ | Awkward-Hedgehog-572 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lagv03 | false | null | t3_1lagv03 | /r/LocalLLaMA/comments/1lagv03/the_downgrade_of_llms_for_coding/ | false | false | self | 1 | null |
The Real-World Speed of AI: Benchmarking a 24B LLM on Local Hardware vs. High-End Cloud GPUs | 1 | [removed] | 2025-06-13T14:07:45 | https://www.reddit.com/r/LocalLLaMA/comments/1laguf7/the_realworld_speed_of_ai_benchmarking_a_24b_llm/ | kekePower | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laguf7 | false | null | t3_1laguf7 | /r/LocalLLaMA/comments/1laguf7/the_realworld_speed_of_ai_benchmarking_a_24b_llm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
Detailed Benchmark: magistral:24b on RTX 3070 Laptop vs 5 High-End Cloud GPUs (4090, A100, etc.) - Performance & Cost Analysis | 1 | [removed] | 2025-06-13T13:59:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lagn1r/detailed_benchmark_magistral24b_on_rtx_3070/ | kekePower | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lagn1r | false | null | t3_1lagn1r | /r/LocalLLaMA/comments/1lagn1r/detailed_benchmark_magistral24b_on_rtx_3070/ | false | false | self | 1 | null |
Looking for a simpler alternative to SillyTavern -- something lightweight? | 1 | [removed] | 2025-06-13T13:28:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lafxep/looking_for_a_simpler_alternative_to_sillytavern/ | d3lt4run3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lafxep | false | null | t3_1lafxep | /r/LocalLLaMA/comments/1lafxep/looking_for_a_simpler_alternative_to_sillytavern/ | false | false | self | 1 | null |
Qwen3 embedding/reranker padding token error? | 9 | I'm new to embedding and rerankers. On paper they seem pretty straightforward:
- The embedding model turns tokens into numbers so models can process them more efficiently for retrieval. The embeddings are stored in an index.
- The reranker simply ranks the text by similarity to the query. Its not perfect, but its a ... | 2025-06-13T13:21:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lafs7r/qwen3_embeddingreranker_padding_token_error/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lafs7r | false | null | t3_1lafs7r | /r/LocalLLaMA/comments/1lafs7r/qwen3_embeddingreranker_padding_token_error/ | false | false | self | 9 | null |
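The two steps described above can be sketched with toy vectors (made-up 3-dimensional "embeddings" standing in for a real model; note that a real reranker like Qwen3-Reranker scores query-document pairs jointly rather than comparing cached vectors, but the ranking idea is the same):

```python
import math

# Toy retrieve-then-rank flow: an "index" of precomputed embeddings,
# and candidates sorted by cosine similarity to the query embedding.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

index = {
    "doc about cats":   [0.9, 0.1, 0.0],
    "doc about dogs":   [0.1, 0.9, 0.0],
    "doc about stocks": [0.0, 0.1, 0.9],
}
query_vec = [0.8, 0.2, 0.0]  # pretend embedding of "cat care tips"

# Ranking = sorting candidates by similarity to the query, best first.
ranked = sorted(index, key=lambda d: cosine(index[d], query_vec), reverse=True)
print(ranked[0])  # doc about cats
```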
Struggling on local multi-user inference? Llama.cpp GGUF vs VLLM AWQ/GPTQ. | 9 | Hi all,
I tested VLLM and Llama.cpp and got much better results from GGUF than AWQ and GPTQ (it was also hard to find this format for VLLM). I used the same system prompts and saw really crazy bad results on Gemma in GPTQ: higher VRAM usage, slower inference, and worse output quality.
Now my project is moving to mult... | 2025-06-13T13:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lafihl/struggling_on_local_multiuser_inference_llamacpp/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lafihl | false | null | t3_1lafihl | /r/LocalLLaMA/comments/1lafihl/struggling_on_local_multiuser_inference_llamacpp/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'LtKtt6txZ-QrVhG54gL73uTj3IfQTG0w8wIIdqF-0s0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LtKtt6txZ-QrVhG54gL73uTj3IfQTG0w8wIIdqF-0s0.png?width=108&crop=smart&auto=webp&s=48e272d6903d89156a397d8d669cecd7c921f94e', 'width': 108}, {'height': 216, 'url': '... |
Mac Mini for local LLM? 🤔 | 14 | I am not much of an IT guy. Example: I bought a Synology because I wanted a home server, but didn't want to fiddle with things beyond me too much.
That being said, I am a programmer that uses a Macbook every day.
Is it possible to go the on-prem home LLM route using a Mac Mini? | 2025-06-13T12:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/1laf96d/mac_mini_for_local_llm/ | matlong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laf96d | false | null | t3_1laf96d | /r/LocalLLaMA/comments/1laf96d/mac_mini_for_local_llm/ | false | false | self | 14 | null |
What's the easiest way to run local models with characters? | 1 | [removed] | 2025-06-13T12:51:45 | https://www.reddit.com/r/LocalLLaMA/comments/1laf52d/whats_the_easiest_way_to_run_local_models_with/ | RIPT1D3_Z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laf52d | false | null | t3_1laf52d | /r/LocalLLaMA/comments/1laf52d/whats_the_easiest_way_to_run_local_models_with/ | false | false | self | 1 | null |
Qwen3-VL is coming! | 13 | [https://qwen3.org/vl/](https://qwen3.org/vl/) | 2025-06-13T12:46:14 | foldl-li | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1laf10l | false | null | t3_1laf10l | /r/LocalLLaMA/comments/1laf10l/qwen3vl_is_coming/ | false | false | default | 13 | {'enabled': True, 'images': [{'id': 'p7mwf4rixo6f1', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/p7mwf4rixo6f1.png?width=108&crop=smart&auto=webp&s=66e86cda794013ef3087f534aaad2df6975bd7e7', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/p7mwf4rixo6f1.png?width=216&crop=smart&auto=we... | |
Regarding the current state of STS models (like Copilot Voice) | 1 | Recently got a new Asus copilot + laptop with Snapdragon CPU; been playing around with the conversational voice mode for Copilot, and REALLY impressed with the quality to be honest.
I've also played around with OpenAI's advanced voice mode, and Sesame.
I'm thinking this would be killer if I could run a local version ... | 2025-06-13T12:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/1laey50/regarding_the_current_state_of_sts_models_like/ | JustinPooDough | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laey50 | false | null | t3_1laey50 | /r/LocalLLaMA/comments/1laey50/regarding_the_current_state_of_sts_models_like/ | false | false | self | 1 | null |
Got a tester version of the open-weight OpenAI model. Very lean inference engine! | 1,431 | >!Silkposting in r/LocalLLaMA? I'd never!< | 2025-06-13T12:14:51 | https://v.redd.it/3r075o87qo6f1 | Firepal64 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1laee7q | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/3r075o87qo6f1/DASHPlaylist.mpd?a=1752408905%2CYzkzZTBjY2E0MDBiODUzYzU0MDQwZjYwNDA4ZDEzMDkyMDZmMzNlNjk2NzY5NjU1MTBjZGQ3NzFhN2E4OTI1Ng%3D%3D&v=1&f=sd', 'duration': 70, 'fallback_url': 'https://v.redd.it/3r075o87qo6f1/DASH_720.mp4?source=fallback', 'ha... | t3_1laee7q | /r/LocalLLaMA/comments/1laee7q/got_a_tester_version_of_the_openweight_openai/ | false | false | spoiler | 1,431 | {'enabled': False, 'images': [{'id': 'YTZ6aWx2ODdxbzZmMZP4_Zg7YIqZNzvbtM-0NW72ki5jdKm1HMEQNOp3yi9R', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YTZ6aWx2ODdxbzZmMZP4_Zg7YIqZNzvbtM-0NW72ki5jdKm1HMEQNOp3yi9R.png?width=108&crop=smart&format=pjpg&auto=webp&s=598298caa304e1a4beda26e49405469723ce5... |
What is the purpose of the offloading particular layers on the GPU if you don't have enough VRAM in the LM-studio (there is no difference in the token generation at all) | 1 | [removed] | 2025-06-13T12:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lae3yp/what_is_the_purpose_of_the_offloading_particular/ | panther_ra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lae3yp | false | null | t3_1lae3yp | /r/LocalLLaMA/comments/1lae3yp/what_is_the_purpose_of_the_offloading_particular/ | false | false | self | 1 | null |
Qwen2.5 VL | 6 | Hello,
Has anyone used this LLM for UI/UX? I would like a general opinion on it as I would like to set it up and fine-tune it for such purposes.
If you know models that are better for UI/UX, I would ask if you could recommend me some.
Thanks in advance! | 2025-06-13T11:47:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ladv5b/qwen25_vl/ | Odd_Industry_2376 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ladv5b | false | null | t3_1ladv5b | /r/LocalLLaMA/comments/1ladv5b/qwen25_vl/ | false | false | self | 6 | null |
Finetune a model to think and use tools | 5 | Im very new to Local AI tools, recently built a small Agno Team with agents to do a certain task, and its sort of good. I think it will improve after fine tuning on the tasks related to my prompts(code completion). Right now im using Qwen3:6b which can think and use tools.
1) How do i train models? I know Ollama is me... | 2025-06-13T11:32:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ladl6d/finetune_a_model_to_think_and_use_tools/ | LostDog_88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ladl6d | false | null | t3_1ladl6d | /r/LocalLLaMA/comments/1ladl6d/finetune_a_model_to_think_and_use_tools/ | false | false | self | 5 | null |
Combine amd ai max 395 | 1 | [removed] | 2025-06-13T11:28:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ladj0w/combine_amd_ai_max_395/ | OpportunityFar3673 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ladj0w | false | null | t3_1ladj0w | /r/LocalLLaMA/comments/1ladj0w/combine_amd_ai_max_395/ | false | false | self | 1 | null |
Langgraph issues | 1 | [removed] | 2025-06-13T11:11:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lad7ue/langgraph_issues/ | Free-Belt-4741 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lad7ue | false | null | t3_1lad7ue | /r/LocalLLaMA/comments/1lad7ue/langgraph_issues/ | false | false | self | 1 | null |
Hi! I’m conducting a short survey for my university final project about public opinions on AI-generated commercials. | 1 | [removed] | 2025-06-13T10:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lacun1/hi_im_conducting_a_short_survey_for_my_university/ | Yvonne_yuyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lacun1 | false | null | t3_1lacun1 | /r/LocalLLaMA/comments/1lacun1/hi_im_conducting_a_short_survey_for_my_university/ | false | false | self | 1 | null |
Getting around censorship of Qwen3 by querying wiki for context | 1 | [removed] | 2025-06-13T10:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lacr4d/getting_around_censorship_of_qwen3_by_querying/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lacr4d | false | null | t3_1lacr4d | /r/LocalLLaMA/comments/1lacr4d/getting_around_censorship_of_qwen3_by_querying/ | false | false | self | 1 | null |
Against the Apple's paper: LLM can solve new complex problems | 137 | 2025-06-13T10:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lacqxh/against_the_apples_paper_llm_can_solve_new/ | WackyConundrum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lacqxh | false | null | t3_1lacqxh | /r/LocalLLaMA/comments/1lacqxh/against_the_apples_paper_llm_can_solve_new/ | false | false | 137 | {'enabled': False, 'images': [{'id': 'zsV7fZGW377yLJEkIf3hZaGk1_mHff9pFj38MmimV4Q', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/diKLwubmbtEotRQN3uKfVq9qOQ6ZisOE0UOpfwELjRg.jpg?width=108&crop=smart&auto=webp&s=547b8992d42dfd9246b273285a80c122f1302cae', 'width': 108}, {'height': 171, 'url': 'h... | ||
Whats the best model to run on a 3090 right now? | 0 | Just picked up a 3090. Searched reddit for the best model to run but the posts are months old sometimes longer. What's the latest and greatest to run on my new card? I'm primarily using it for coding. | 2025-06-13T10:24:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lacfko/whats_the_best_model_to_run_on_a_3090_right_now/ | LeonJones | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lacfko | false | null | t3_1lacfko | /r/LocalLLaMA/comments/1lacfko/whats_the_best_model_to_run_on_a_3090_right_now/ | false | false | self | 0 | null |
Found a Web3 LLM That Actually Gets DeFi Right | 0 | After months of trying to get reliable responses to DeFi - related questions from GPT - o3 or Grok - 3, without vague answers or hallucinated concepts, I randomly came across something that actually gets it. It's called DMind -1, a Web3 - focused LLM built on Qwen3 -32B. I'd never heard of it before last week, now I'm ... | 2025-06-13T09:58:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lac0yh/found_a_web3_llm_that_actually_gets_defi_right/ | Luke-Pioneero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lac0yh | false | null | t3_1lac0yh | /r/LocalLLaMA/comments/1lac0yh/found_a_web3_llm_that_actually_gets_defi_right/ | false | false | self | 0 | null |
Building a pc for local llm (help needed) | 0 | I am having a requirement to run ai locally specifically models like gemma3 27b and models in that similar size (roughly 20-30gb).
Planning to get 2 3060 12gb (24gb) and need help choosing cpu and mobo and ram.
Do you guys have any recommendations ?
Would love to hear your about setup if you are running llm in a s... | 2025-06-13T09:40:08 | https://www.reddit.com/r/LocalLLaMA/comments/1labraz/building_a_pc_for_local_llm_help_needed/ | anirudhisonline | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1labraz | false | null | t3_1labraz | /r/LocalLLaMA/comments/1labraz/building_a_pc_for_local_llm_help_needed/ | false | false | self | 0 | null |
Local Alternative to NotebookLM | 8 | Hi all, I'm looking to run a local alternative to Google Notebook LM on a M2 with 32GB RAM in a one user scenario but with a lot of documents (~2k PDFs). Has anybody tried this? Are you aware of any tutorials? | 2025-06-13T09:36:06 | https://www.reddit.com/r/LocalLLaMA/comments/1labpb1/local_alternative_to_notebooklm/ | sv723 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1labpb1 | false | null | t3_1labpb1 | /r/LocalLLaMA/comments/1labpb1/local_alternative_to_notebooklm/ | false | false | self | 8 | null |
Introducing the Hugging Face MCP Server - find, create and use AI models directly from VSCode, Cursor, Claude or other clients! 🤗 | 51 | Hey hey, everyone, I'm VB from Hugging Face. We're tinkering a lot with MCP at HF these days and are quite excited to host our official MCP server accessible at \`hf.co/mcp\` 🔥
Here's what you can do today with it:
1. You can run semantic search on datasets, spaces and models (find the correct artefact just with tex... | 2025-06-13T09:08:15 | https://www.reddit.com/r/LocalLLaMA/comments/1labaqn/introducing_the_hugging_face_mcp_server_find/ | vaibhavs10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1labaqn | false | null | t3_1labaqn | /r/LocalLLaMA/comments/1labaqn/introducing_the_hugging_face_mcp_server_find/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': 'JCdmiT4zKH58gwmfIQPmOLs-bs5qHzSmc2ZFnRUGk84', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JCdmiT4zKH58gwmfIQPmOLs-bs5qHzSmc2ZFnRUGk84.png?width=108&crop=smart&auto=webp&s=2878a85306e6c618c2dce5dd61b3f87f8eee1fdc', 'width': 108}, {'height': 116, 'url': 'h... |
The EuroLLM team released preview versions of several new models | 135 | They released a 22b version, 2 vision models (1.7b, 9b, based on the older EuroLLMs) and a small MoE with 0.6b active and 2.6b total parameters. The MoE seems to be surprisingly good for its size in my limited testing. They seem to be Apache-2.0 licensed.
EuroLLM 22b instruct preview:
https://huggingface.co/utter-proj... | 2025-06-13T08:47:09 | https://www.reddit.com/r/LocalLLaMA/comments/1laazto/the_eurollm_team_released_preview_versions_of/ | sommerzen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laazto | false | null | t3_1laazto | /r/LocalLLaMA/comments/1laazto/the_eurollm_team_released_preview_versions_of/ | false | false | self | 135 | {'enabled': False, 'images': [{'id': 'ofqgUijGTbnuEUpUyTizn34-6YYssAi8r4fqjxj-fCw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ofqgUijGTbnuEUpUyTizn34-6YYssAi8r4fqjxj-fCw.png?width=108&crop=smart&auto=webp&s=97645d09c00f63c8b746bcba73dc12254cb14425', 'width': 108}, {'height': 116, 'url': 'h... |
"Agent or Workflow?" - Built an interactive quiz to test your intuition on this confusing AI distinction | 2 | [removed] | 2025-06-13T08:41:37 | https://agents-vs-workflows.streamlit.app | htahir1 | agents-vs-workflows.streamlit.app | 1970-01-01T00:00:00 | 0 | {} | 1laax1a | false | null | t3_1laax1a | /r/LocalLLaMA/comments/1laax1a/agent_or_workflow_built_an_interactive_quiz_to/ | false | false | default | 2 | null |
Finally, Zen 6, per-socket memory bandwidth to 1.6 TB/s | 332 | [https://www.tomshardware.com/pc-components/cpus/amds-256-core-epyc-venice-cpu-in-the-labs-now-coming-in-2026](https://www.tomshardware.com/pc-components/cpus/amds-256-core-epyc-venice-cpu-in-the-labs-now-coming-in-2026)
Perhaps more importantly, the new EPYC 'Venice' processor will more than double per-socket memor... | 2025-06-13T08:39:05 | https://www.reddit.com/r/LocalLLaMA/comments/1laavph/finally_zen_6_persocket_memory_bandwidth_to_16_tbs/ | On1ineAxeL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laavph | false | null | t3_1laavph | /r/LocalLLaMA/comments/1laavph/finally_zen_6_persocket_memory_bandwidth_to_16_tbs/ | false | false | 332 | {'enabled': False, 'images': [{'id': 'WQB4YYDDJWqV1l5CZF1V17S1yCdW1pO-9wS0zX4_i0Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WQB4YYDDJWqV1l5CZF1V17S1yCdW1pO-9wS0zX4_i0Y.jpeg?width=108&crop=smart&auto=webp&s=2a20b804bac8ed73f144483b2f4a07c7d1064176', 'width': 108}, {'height': 121, 'url': '... | |
New VS Code update supports all MCP features (tools, prompts, sampling, resources, auth) | 44 | If you have any questions about the release, let me know.
\--vscode pm | 2025-06-13T07:55:10 | https://code.visualstudio.com/updates/v1_101 | isidor_n | code.visualstudio.com | 1970-01-01T00:00:00 | 0 | {} | 1laa9bj | false | null | t3_1laa9bj | /r/LocalLLaMA/comments/1laa9bj/new_vs_code_update_supports_all_mcp_features/ | false | false | default | 44 | {'enabled': False, 'images': [{'id': '4t6GOGdTOsYCXJc0r80Taopgc8TuG7QgRWRArsQ4GFY', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/4t6GOGdTOsYCXJc0r80Taopgc8TuG7QgRWRArsQ4GFY.png?width=108&crop=smart&auto=webp&s=25618146eca93c91aa30c9530931565f6645a874', 'width': 108}, {'height': 107, 'url': 'h... |
Fine tuning LLMs to filter misleading context in RAG | 1 | [removed] | 2025-06-13T07:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1laa0pv/fine_tuning_llms_to_filter_misleading_context_in/ | zpdeaccount | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laa0pv | false | null | t3_1laa0pv | /r/LocalLLaMA/comments/1laa0pv/fine_tuning_llms_to_filter_misleading_context_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pRgB6nrKLFAhvOocvrbiJ79RAzdrbr6DMj1eHntmit0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pRgB6nrKLFAhvOocvrbiJ79RAzdrbr6DMj1eHntmit0.png?width=108&crop=smart&auto=webp&s=49171f0ef399302b0526388a03f9f73cdf861818', 'width': 108}, {'height': 108, 'url': 'h... |
Fine tuning LLMs to filter misleading context in RAG | 1 | [removed] | 2025-06-13T07:33:22 | https://www.reddit.com/r/LocalLLaMA/comments/1la9y57/fine_tuning_llms_to_filter_misleading_context_in/ | zpdeaccount | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la9y57 | false | null | t3_1la9y57 | /r/LocalLLaMA/comments/1la9y57/fine_tuning_llms_to_filter_misleading_context_in/ | false | false | self | 1 | null |
[Hiring] Junior Prompt Engineer | 0 | We're looking for a freelance Prompt Engineer to help us push the boundaries of what's possible with AI. We are an Italian startup that's already helping candidates land interviews at companies like Google, Stripe, and Zillow. We're a small team, moving fast, experimenting daily and we want someone who's obsessed with ... | 2025-06-13T07:22:40 | https://www.reddit.com/r/LocalLLaMA/comments/1la9sku/hiring_junior_prompt_engineer/ | interviuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la9sku | false | null | t3_1la9sku | /r/LocalLLaMA/comments/1la9sku/hiring_junior_prompt_engineer/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'yiTRSBNLVqfnTaad4wH3RCdCnkuSJCe31LqT8zpIQrk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yiTRSBNLVqfnTaad4wH3RCdCnkuSJCe31LqT8zpIQrk.png?width=108&crop=smart&auto=webp&s=f224f1e840e924ecaa586ec004961f5a80826316', 'width': 108}, {'height': 113, 'url': 'h... |
🧠 MaGo AgoraAI – Multi-agent academic debate engine using Ollama + Gemma (with concept maps and dialogues) | 1 | [removed] | 2025-06-13T06:52:22 | https://www.reddit.com/r/LocalLLaMA/comments/1la9c71/mago_agoraai_multiagent_academic_debate_engine/ | Next-Lengthiness9915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la9c71 | false | null | t3_1la9c71 | /r/LocalLLaMA/comments/1la9c71/mago_agoraai_multiagent_academic_debate_engine/ | false | false | self | 1 | null |
Llama-Server Launcher (Python with performance CUDA focus) | 105 | I wanted to share a llama-server launcher I put together for my personal use. I got tired of maintaining bash scripts and notebook files and digging through my gaggle of model folders while testing out models and turning performance. Hopefully this helps make someone else's life easier, it certainly has for me.
**... | 2025-06-13T06:31:45 | LA_rent_Aficionado | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1la91hz | false | null | t3_1la91hz | /r/LocalLLaMA/comments/1la91hz/llamaserver_launcher_python_with_performance_cuda/ | false | false | default | 105 | {'enabled': True, 'images': [{'id': 'lwjqunrt0n6f1', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/lwjqunrt0n6f1.png?width=108&crop=smart&auto=webp&s=51c49f9e25a337d080d25ade9a142b879e567a21', 'width': 108}, {'height': 248, 'url': 'https://preview.redd.it/lwjqunrt0n6f1.png?width=216&crop=smart&auto=we... | |
Why Search Sucks! (But First, A Brief History) | 1 | [removed] | 2025-06-13T06:26:52 | https://youtu.be/vZVcBUnre-c | kushalgoenka | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1la8yuk | false | {'oembed': {'author_name': 'Kushal Goenka', 'author_url': 'https://www.youtube.com/@KushalGoenka', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vZVcBUnre-c?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1la8yuk | /r/LocalLLaMA/comments/1la8yuk/why_search_sucks_but_first_a_brief_history/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kQ1-CKMdo6QZFdeorf2tNuFIZphNPNoLDugDK4dEZjk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/kQ1-CKMdo6QZFdeorf2tNuFIZphNPNoLDugDK4dEZjk.jpeg?width=108&crop=smart&auto=webp&s=58f0f8fb050c32318213fb63c6a00cec1cdf396c', 'width': 108}, {'height': 162, 'url': '... | |
Is there an AI tool that can actively assist during investor meetings by answering questions about my startup? | 0 | I’m looking for an AI tool where I can input everything about my startup—our vision, metrics, roadmap, team, common Q&A, etc.—and have it actually assist me live during investor meetings.
I’m imagining something that listens in real time, recognizes when I’m being asked something specific (e.g., “What’s your CAC?” or ... | 2025-06-13T05:34:18 | https://www.reddit.com/r/LocalLLaMA/comments/1la859z/is_there_an_ai_tool_that_can_actively_assist/ | Samonji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la859z | false | null | t3_1la859z | /r/LocalLLaMA/comments/1la859z/is_there_an_ai_tool_that_can_actively_assist/ | false | false | self | 0 | null |
Is there an AI tool that can actively assist during investor meetings by answering questions about my startup? | 0 | I’m looking for an AI tool where I can input everything about my startup—our vision, metrics, roadmap, team, common Q&A, etc.—and have it actually assist me live during investor meetings.
I’m imagining something that listens in real time, recognizes when I’m being asked something specific (e.g., “What’s your CAC?” or ... | 2025-06-13T05:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/1la84fe/is_there_an_ai_tool_that_can_actively_assist/ | Samonji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la84fe | false | null | t3_1la84fe | /r/LocalLLaMA/comments/1la84fe/is_there_an_ai_tool_that_can_actively_assist/ | false | false | self | 0 | null |
Is anyone using Colpali on production? What is your usecase and how is the performance? | 1 | [removed] | 2025-06-13T05:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/1la81s6/is_anyone_using_colpali_on_production_what_is/ | hezknight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la81s6 | false | null | t3_1la81s6 | /r/LocalLLaMA/comments/1la81s6/is_anyone_using_colpali_on_production_what_is/ | false | false | self | 1 | null |
Best Model | MCP/Terminal |48GB VRAM | 1 | [removed] | 2025-06-13T05:27:53 | https://www.reddit.com/r/LocalLLaMA/comments/1la81mc/best_model_mcpterminal_48gb_vram/ | aallsbury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la81mc | false | null | t3_1la81mc | /r/LocalLLaMA/comments/1la81mc/best_model_mcpterminal_48gb_vram/ | false | false | self | 1 | null |
Alexandr Wang joins Meta | 1 | [removed] | 2025-06-13T04:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/1la78of/alexandr_wang_joins_meta/ | AIDoomer3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la78of | false | null | t3_1la78of | /r/LocalLLaMA/comments/1la78of/alexandr_wang_joins_meta/ | false | false | self | 1 | null |
Alexandr Wang to join Meta with his crew. | 1 | [removed] | 2025-06-13T04:33:03 | https://www.reddit.com/r/LocalLLaMA/comments/1la74zg/alexandr_wang_to_join_meta_with_his_crew/ | AIDoomer3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la74zg | false | null | t3_1la74zg | /r/LocalLLaMA/comments/1la74zg/alexandr_wang_to_join_meta_with_his_crew/ | false | false | self | 1 | null |
Alexandr Wang officially joins Meta with his core team | 1 | [removed] | 2025-06-13T04:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/1la73ld/alexandr_wang_officially_joins_meta_with_his_core/ | AIDoomer3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la73ld | false | null | t3_1la73ld | /r/LocalLLaMA/comments/1la73ld/alexandr_wang_officially_joins_meta_with_his_core/ | false | false | self | 1 | null |
Any good 70b ERP model with recent model release? | 0 | maybe based on qwen3.0 or mixtral? Or other good ones? | 2025-06-13T04:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/1la6ntm/any_good_70b_erp_model_with_recent_model_release/ | metalfans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la6ntm | false | null | t3_1la6ntm | /r/LocalLLaMA/comments/1la6ntm/any_good_70b_erp_model_with_recent_model_release/ | false | false | self | 0 | null |
What open source local models can run reasonably well on a Raspberry Pi 5 with 16GB RAM? | 0 | **My Long Term Goal:** I'd like to create a chatbot that uses
* Speech to Text - for interpreting human speech
* Text to Speech - for talking
* Computer Vision - for reading human emotions - i
* If you have any recommendations for this as well, please let me know.
**My Short Term Goal (this post):**
I'd like to u... | 2025-06-13T04:05:23 | https://www.reddit.com/r/LocalLLaMA/comments/1la6ndx/what_open_source_local_models_can_run_reasonably/ | iKontact | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la6ndx | false | null | t3_1la6ndx | /r/LocalLLaMA/comments/1la6ndx/what_open_source_local_models_can_run_reasonably/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'PezcliVTOJmrw2T-iy6hQL8d2hqy4q6G8U__SS7ZjrY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PezcliVTOJmrw2T-iy6hQL8d2hqy4q6G8U__SS7ZjrY.jpeg?width=108&crop=smart&auto=webp&s=1e9268f8000ba05c0eaa4a283483174ba8fe421c', 'width': 108}, {'height': 113, 'url': '... |
[First Release!] Serene Pub - 0.1.0 Alpha - Linux/MacOS/Windows - Silly Tavern alternative | 23 | \# Introduction
Hey everyone! I got some moderate interest when I posted a week back about Serene Pub.
I'm proud to say that I've finally reached a point where I can release the first Alpha version of this app for preview, testing and feedback!
This is in development, there will be bugs!
There are releases for ... | 2025-06-13T03:56:02 | https://www.reddit.com/gallery/1la6h3y | doolijb | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1la6h3y | false | null | t3_1la6h3y | /r/LocalLLaMA/comments/1la6h3y/first_release_serene_pub_010_alpha/ | false | false | 23 | {'enabled': True, 'images': [{'id': 'xLYJQVPvAzJH6LTy7XZaM08Yzp0dQfVS5ZcYjGMavrE', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/xLYJQVPvAzJH6LTy7XZaM08Yzp0dQfVS5ZcYjGMavrE.png?width=108&crop=smart&auto=webp&s=4bdba8fdef1eaf64eb125fe923b554303778dd80', 'width': 108}, {'height': 384, 'url': 'h... | |
SSL Certificate Expired on https://llama3-1.llamameta.net | 1 | [removed] | 2025-06-13T03:31:31 | https://www.reddit.com/r/LocalLLaMA/comments/1la61l3/ssl_certificate_expired_on/ | Worldly-Welcome3387 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la61l3 | false | null | t3_1la61l3 | /r/LocalLLaMA/comments/1la61l3/ssl_certificate_expired_on/ | false | false | self | 1 | null |
SSL Certificate Expired on https://llama3-1.llamameta.net | 1 | [removed] | 2025-06-13T03:29:40 | https://www.reddit.com/r/LocalLLaMA/comments/1la60ak/ssl_certificate_expired_on/ | Worldly-Welcome3387 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la60ak | false | null | t3_1la60ak | /r/LocalLLaMA/comments/1la60ak/ssl_certificate_expired_on/ | false | false | self | 1 | null |
What specs should I go with to run a not-bad model? | 1 | Hello all,
I am completely uneducated about the AI space, but I wanted to get into it to be able to automate some of the simpler side of my work. I am not sure how possible it is, but it doesnt hurt to try, and I am due for a new rig anyways.
For rough specs I was thinking about getting either the 9800X3D or 9950X3D... | 2025-06-13T02:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1la52f5/what_specs_should_i_go_with_to_run_a_notbad_model/ | ZXOS8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la52f5 | false | null | t3_1la52f5 | /r/LocalLLaMA/comments/1la52f5/what_specs_should_i_go_with_to_run_a_notbad_model/ | false | false | self | 1 | null |
Help me find a motherboard | 2 | I need a motherboard that can both fit 4 dual slot GPUs and boot headless (or support integrated graphics). I've been through 2 motherboards already trying to get my quad MI50 setup to boot. First was an ASUS X99 Deluxe. It only fit 3 GPUs because of the pcie slot arrangement. Then I bought an ASUS X99 E-WS/USB3.1. It ... | 2025-06-13T02:28:49 | https://www.reddit.com/r/LocalLLaMA/comments/1la4v5h/help_me_find_a_motherboard/ | FrozenAptPea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la4v5h | false | null | t3_1la4v5h | /r/LocalLLaMA/comments/1la4v5h/help_me_find_a_motherboard/ | false | false | self | 2 | null |
ROCm 6.4 running on my rx580(polaris) FAST but odd behavior on models. | 2 | With the help of claude i got ollama to use my rx580 following this guide.
[https://github.com/woodrex83/ROCm-For-RX580](https://github.com/woodrex83/ROCm-For-RX580)
All the work arounds in the past i tried were about half the speed of my GTX1070 , but now some models like gemma3:4b-it-qat actually run up to 1.6... | 2025-06-13T02:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/1la4qnt/rocm_64_running_on_my_rx580polaris_fast_but_odd/ | opUserZero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la4qnt | false | null | t3_1la4qnt | /r/LocalLLaMA/comments/1la4qnt/rocm_64_running_on_my_rx580polaris_fast_but_odd/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'ohgtgz80rQ0F6PNCsdryC03WGVTALs27vtuj4i-H340', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohgtgz80rQ0F6PNCsdryC03WGVTALs27vtuj4i-H340.png?width=108&crop=smart&auto=webp&s=1c860c2398513053a30706cd2177c6372ea31863', 'width': 108}, {'height': 108, 'url': 'h... |
DeepSeek R-1 NEVER answers my prompts | 0 | Every time I type something, whether it’s a question or a salutation or anything else, instead of giving me a straightforward response, it will start thinking about way deeper stuff.
Let’s say I was to ask for the names of Harry’s two best friends, in Harry Pottr, it would do something like this:
“Okay, let’s look ... | 2025-06-13T02:13:01 | https://www.reddit.com/r/LocalLLaMA/comments/1la4k97/deepseek_r1_never_answers_my_prompts/ | Unkunkn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la4k97 | false | null | t3_1la4k97 | /r/LocalLLaMA/comments/1la4k97/deepseek_r1_never_answers_my_prompts/ | false | false | self | 0 | null |
Happy Birthday Transformers! | 63 | 2025-06-13T01:40:38 | https://x.com/sksq96/status/1933335774100857090?s=46 | sksq9 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1la3xni | false | null | t3_1la3xni | /r/LocalLLaMA/comments/1la3xni/happy_birthday_transformers/ | false | false | default | 63 | {'enabled': False, 'images': [{'id': '5Cf5k8QXUtETtGaXKCuEtMDf2YvEV3aM9UbCLsKrRB4', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/-WGucDJrfXWGk82HcV3sYvI56KgBvSq6Ts-J6hHyLl0.jpg?width=108&crop=smart&auto=webp&s=edaec840bfecebd0cb214231a4edf870dae79563', 'width': 108}, {'height': 432, 'url': '... | |
Anyone tried using Pytorch/Huggingface models on new AMD mini pcs? | 1 | [removed] | 2025-06-13T01:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1la3v98/anyone_tried_using_pytorchhuggingface_models_on/ | daddyodevil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la3v98 | false | null | t3_1la3v98 | /r/LocalLLaMA/comments/1la3v98/anyone_tried_using_pytorchhuggingface_models_on/ | false | false | self | 1 | null |
3.53bit R1 0528 scores 68% on the Aider Polygot | 67 | 3.53bit R1 0528 scores 68% on the Aider Polygot benchmark
ram/vram required: 300GB
context size used: 40960 with flash attention
────────────────────────────- dirname: 2025-06-11-04-03-18--unsloth-DeepSeek-R1-0528-GGUF-UD-Q3\_K\_XL
test\_cases: 225
model: openai/unsloth/DeepSeek-R1-0528-GGUF/UD-Q3\_K\_XL
edit\_fo... | 2025-06-13T01:36:43 | https://www.reddit.com/r/LocalLLaMA/comments/1la3uvz/353bit_r1_0528_scores_68_on_the_aider_polygot/ | BumblebeeOk3281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la3uvz | false | null | t3_1la3uvz | /r/LocalLLaMA/comments/1la3uvz/353bit_r1_0528_scores_68_on_the_aider_polygot/ | true | false | spoiler | 67 | null |
3.53bit Deepseek R1 0528 scores 68% on Aider Polygot | 1 | [removed] | 2025-06-13T01:33:59 | https://www.reddit.com/r/LocalLLaMA/comments/1la3t1l/353bit_deepseek_r1_0528_scores_68_on_aider_polygot/ | BumblebeeOk3281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la3t1l | false | null | t3_1la3t1l | /r/LocalLLaMA/comments/1la3t1l/353bit_deepseek_r1_0528_scores_68_on_aider_polygot/ | false | false | self | 1 | null |
3.53bit Deepseek R1 0528 scores 68% which is in-between Sonnet 3.7 and Opus 4 | 1 | [removed] | 2025-06-13T01:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/1la3q5q/353bit_deepseek_r1_0528_scores_68_which_is/ | BumblebeeOk3281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la3q5q | false | null | t3_1la3q5q | /r/LocalLLaMA/comments/1la3q5q/353bit_deepseek_r1_0528_scores_68_which_is/ | false | false | self | 1 | null |
Does unsuppervised generative AI model exist? | 1 | [removed] | 2025-06-13T01:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/1la3ede/does_unsuppervised_generative_ai_model_exist/ | Exotic-Media5762 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la3ede | false | null | t3_1la3ede | /r/LocalLLaMA/comments/1la3ede/does_unsuppervised_generative_ai_model_exist/ | false | false | self | 1 | null |
Claude Code but using local Gemma3 | 1 | [removed] | 2025-06-13T00:55:25 | https://www.reddit.com/r/LocalLLaMA/comments/1la31hu/claude_code_but_using_local_gemma3/ | Willing-Policy6027 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la31hu | false | null | t3_1la31hu | /r/LocalLLaMA/comments/1la31hu/claude_code_but_using_local_gemma3/ | false | false | self | 1 | null |
Best local LLM with strong instruction following for custom scripting language | 3 | I have a scripting language that I use that is “C-like”, but definitely not C. I’ve prompted 4o to successfully write code and now I want to run local.
What’s the best local LLM that would be close to 4o with instruction following that I could run on 96GB of GPU RAM (2xA6000 Ada).
Thanks! | 2025-06-13T00:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/1la2g1r/best_local_llm_with_strong_instruction_following/ | Ill_Recipe7620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la2g1r | false | null | t3_1la2g1r | /r/LocalLLaMA/comments/1la2g1r/best_local_llm_with_strong_instruction_following/ | false | false | self | 3 | null |
llama.cpp adds support to two new quantization format, tq1_0 and tq2_0 | 96 | which can be found at tools/convert\_hf\_to\_gguf.py on github.
tq means ternary quantization, what's this? is for consumer device? | 2025-06-12T23:59:21 | https://www.reddit.com/r/LocalLLaMA/comments/1la1v4d/llamacpp_adds_support_to_two_new_quantization/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la1v4d | false | null | t3_1la1v4d | /r/LocalLLaMA/comments/1la1v4d/llamacpp_adds_support_to_two_new_quantization/ | false | false | self | 96 | null |
What are peoples experience with old dual Xeon servers? | 3 | I recently found a used system for sale for a bit under 1000 bucks:
Dell Server R540 Xeon Dual 4110 256GB RAM 20TB
2x Intel Xeon 4110
256GB Ram
5x 4TB HDD
Raid Controler
1x 10GBE SFP+
2x 1GBE RJ45
IDRAC
2 PSUs for redundancy
100W idle 170 under load
Here are my theoretical performance calculations:
DDR4-2400 =... | 2025-06-12T23:13:39 | https://www.reddit.com/r/LocalLLaMA/comments/1la0vz8/what_are_peoples_experience_with_old_dual_xeon/ | Eden1506 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la0vz8 | false | null | t3_1la0vz8 | /r/LocalLLaMA/comments/1la0vz8/what_are_peoples_experience_with_old_dual_xeon/ | false | false | self | 3 | null |
queen 3 30b a3b experience and questions | 1 | [removed] | 2025-06-12T22:52:38 | https://www.reddit.com/r/LocalLLaMA/comments/1la0f80/queen_3_30b_a3b_experience_and_questions/ | Axotic69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1la0f80 | false | null | t3_1la0f80 | /r/LocalLLaMA/comments/1la0f80/queen_3_30b_a3b_experience_and_questions/ | false | false | self | 1 | null |
What's your Local Vision Model Rankings and local Benchmarks for them? | 3 | It's obvious were the text2text models are in terms of ranking. We all know for example that deepseek-r1-0528 > deepseek-v3-0324 \~ Qwen3-253B > llama3.3-70b \~ gemma-3-27b > mistral-small-24b
We also have all the home grown "evals" that we throw at these models, boucing ball in a heptagon, move the ball in a cup, ... | 2025-06-12T22:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l9zym8/whats_your_local_vision_model_rankings_and_local/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9zym8 | false | null | t3_1l9zym8 | /r/LocalLLaMA/comments/1l9zym8/whats_your_local_vision_model_rankings_and_local/ | false | false | self | 3 | null |
Conversational Avatars | 1 | HeLLo aLL,
Does anybody know a tool or a workflow that could help me build a video avatar for a conversation bot? I figure some combination of existing tools makes this possible— I have the workflow built except for the video. Any recos? Thanks 🙏🏼 | 2025-06-12T22:27:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l9zuzs/conversational_avatars/ | JoshuaLandy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9zuzs | false | null | t3_1l9zuzs | /r/LocalLLaMA/comments/1l9zuzs/conversational_avatars/ | false | false | self | 1 | null |
Run Perchance style RPG locally? | 3 | I like the clean UI and ease of use of Perchance's RPG story. It's also pretty good at creativity. Is it reasonably feasible to run something similar locally? | 2025-06-12T22:06:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l9zddh/run_perchance_style_rpg_locally/ | BenefitOfTheDoubt_01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9zddh | false | null | t3_1l9zddh | /r/LocalLLaMA/comments/1l9zddh/run_perchance_style_rpg_locally/ | false | false | self | 3 | null |
New Agent Creator Tutorial!! with Observer AI 🚀 | 1 | Hey guys! first of all I wanted to thank you all for the amazing support that this community has given me, I added a lot of features to Observer AI:
\* AI Agent Builder
\* Template Agent Builder
\* SMS message notifications
\* Camera input
\* Microphone input (still needs work)
\* Whatsapp message notifiact... | 2025-06-12T21:53:49 | https://v.redd.it/x7fwtqxehk6f1 | Roy3838 | /r/LocalLLaMA/comments/1l9z2i6/new_agent_creator_tutorial_with_observer_ai/ | 1970-01-01T00:00:00 | 0 | {} | 1l9z2i6 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x7fwtqxehk6f1/DASHPlaylist.mpd?a=1752486835%2CMTE1MWU5NjU4NGQxMjA2ZmQ0YjM2ZjA5YjU0MjQ0NzI1NTMyNTNiNDA3NjhmNzhlZjBmZDI1OWI1Njg1YTI5NQ%3D%3D&v=1&f=sd', 'duration': 151, 'fallback_url': 'https://v.redd.it/x7fwtqxehk6f1/DASH_1080.mp4?source=fallback', '... | t3_1l9z2i6 | /r/LocalLLaMA/comments/1l9z2i6/new_agent_creator_tutorial_with_observer_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eThremFxeGVoazZmMdAGXmZBeVc_QZpDk3TS6rDj0o4TMoRZ42ebNhgLEVSd.png?width=108&crop=smart&format=pjpg&auto=webp&s=07e1b9e7c467ee5655f1f0b5325b7398863e8... | |
KwaiCoder-AutoThink-preview is a Good Model for Creative Writing! Any Idea about Coding and Math? Your Thoughts? | 4 | [https://huggingface.co/Kwaipilot/KwaiCoder-AutoThink-preview](https://huggingface.co/Kwaipilot/KwaiCoder-AutoThink-preview)
Guys, you should try KwaiCoder-AutoThink-preview.
It's an awesome model. I played with it and tested it's reasoning and creativity, and I am impressed.
It feels like it's a system of 2 models... | 2025-06-12T21:53:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l9z1ts/kwaicoderautothinkpreview_is_a_good_model_for/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9z1ts | false | null | t3_1l9z1ts | /r/LocalLLaMA/comments/1l9z1ts/kwaicoderautothinkpreview_is_a_good_model_for/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4.png?width=108&crop=smart&auto=webp&s=8675a2ccd80f59585366e3c76f089100d181f4cf', 'width': 108}, {'height': 116, 'url': 'h... |