| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
i can't post | 1 | test | 2025-06-10T06:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rhec/i_cant_post/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rhec | false | null | t3_1l7rhec | /r/LocalLLaMA/comments/1l7rhec/i_cant_post/ | false | false | self | 1 | null |
Why I can't POST in LocalLLama | 1 | [removed] | 2025-06-10T06:09:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rfy4/why_i_cant_post_in_localllama/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rfy4 | false | null | t3_1l7rfy4 | /r/LocalLLaMA/comments/1l7rfy4/why_i_cant_post_in_localllama/ | false | false | self | 1 | null |
[Release] mirau-agent-14b-base: An autonomous multi-turn tool-calling base model with hybrid reasoning for RL training | 1 | [removed] | 2025-06-10T06:07:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rexy/release_mirauagent14bbase_an_autonomous_multiturn/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rexy | false | null | t3_1l7rexy | /r/LocalLLaMA/comments/1l7rexy/release_mirauagent14bbase_an_autonomous_multiturn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'F92Pm5PCvc5z7JHXz1XvpBB45lJtVgj3EbxN8KaHBYE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=108&crop=smart&auto=webp&s=273f321f8fe5f00f82b50c19654bf7a96aee1d0d', 'width': 108}, {'height': 108, 'url': 'h... |
[Release] mirau-agent-14b-base: An autonomous multi-turn tool-calling base model with hybrid reasoning for RL training | 1 | [removed] | 2025-06-10T06:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rcz1/release_mirauagent14bbase_an_autonomous_multiturn/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rcz1 | false | null | t3_1l7rcz1 | /r/LocalLLaMA/comments/1l7rcz1/release_mirauagent14bbase_an_autonomous_multiturn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'F92Pm5PCvc5z7JHXz1XvpBB45lJtVgj3EbxN8KaHBYE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=108&crop=smart&auto=webp&s=273f321f8fe5f00f82b50c19654bf7a96aee1d0d', 'width': 108}, {'height': 108, 'url': 'h... |
[Release] mirau-agent-14b-base: An autonomous multi-turn tool-calling base model with hybrid reasoning for RL training | 1 | [removed] | 2025-06-10T06:03:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rcdw/release_mirauagent14bbase_an_autonomous_multiturn/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rcdw | false | null | t3_1l7rcdw | /r/LocalLLaMA/comments/1l7rcdw/release_mirauagent14bbase_an_autonomous_multiturn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'F92Pm5PCvc5z7JHXz1XvpBB45lJtVgj3EbxN8KaHBYE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=108&crop=smart&auto=webp&s=273f321f8fe5f00f82b50c19654bf7a96aee1d0d', 'width': 108}, {'height': 108, 'url': 'h... |
Built a lightweight local AI chat interface | 9 | Got tired of opening terminal windows every time I wanted to use Ollama on old Dell Optiplex running 9th gen i3. Tried open webui but found it too clunky to use and confusing to update.
Ended up building chat-o-llama (I know, catchy name) using flask and uses ollama:
* Clean web UI with proper copy/paste functionali... | 2025-06-10T06:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rc5e/built_a_lightweight_local_ai_chat_interface/ | Longjumping_Tie_7758 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rc5e | false | null | t3_1l7rc5e | /r/LocalLLaMA/comments/1l7rc5e/built_a_lightweight_local_ai_chat_interface/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'qNOm2AB2ZKg9bo94a7NK6nrQgkaTaGn0oz_1iyKDfbg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dhaM32e4eXtCR_ZtSxsQfX45fhQAcqrDj_Tb_w6WzUc.jpg?width=108&crop=smart&auto=webp&s=166a3a9a781f68ea8ece94fbbe7133496c4e92ce', 'width': 108}, {'height': 108, 'url': 'h... | |
[Release] mirau-agent-14b-base: An autonomous multi-turn tool-calling base model with hybrid reasoning for RL training | 1 | [removed] | 2025-06-10T06:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ratz/release_mirauagent14bbase_an_autonomous_multiturn/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ratz | false | null | t3_1l7ratz | /r/LocalLLaMA/comments/1l7ratz/release_mirauagent14bbase_an_autonomous_multiturn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fQCoJRtPWJ-Dc2_q3db6ZvAcLDdJ5ZiGmR3Ni38fpwE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/JrOry-YPRo61mnt5KjFGr_uC_uI8nrXfI2XXPOSL2jk.jpg?width=108&crop=smart&auto=webp&s=2a9e96fedcce0786f651822472ea7cb4a908c0a3', 'width': 108}], 'source': {'height': 12... |
GRPO Can Boost LLM-Based TTS Performance | 35 | Hi everyone!
**LlaSA** ([https://arxiv.org/abs/2502.04128](https://arxiv.org/abs/2502.04128)) is a Llama-based TTS model.
We fine-tuned it on **15 k hours of Korean speech** and then applied **GRPO**. The result:
https://preview.redd.it/33lko3wtz06f1.png?width=1779&format=png&auto=webp&s=31d61678e43758906c6cd76cd639... | 2025-06-10T04:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l7pmua/grpo_can_boost_llmbased_tts_performance/ | skswldndi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7pmua | false | null | t3_1l7pmua | /r/LocalLLaMA/comments/1l7pmua/grpo_can_boost_llmbased_tts_performance/ | false | false | 35 | null | |
Is 5090 viable even for 32B model? | 1 | [removed] | 2025-06-10T04:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l7pfal/is_5090_viable_even_for_32b_model/ | kkgmgfn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7pfal | false | null | t3_1l7pfal | /r/LocalLLaMA/comments/1l7pfal/is_5090_viable_even_for_32b_model/ | false | false | self | 1 | null |
Google Diffusion told me its system prompt | 151 | # Your name is Gemini Diffusion. You are an expert text diffusion language model trained by Google. You are not an autoregressive language model. You can not generate images or videos. You are an advanced AI assistant and an expert in many areas.
# Core Principles & Constraints:
# 1. Instruction F... | 2025-06-10T03:21:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l7olcw/google_diffusion_told_me_its_system_prompt/ | bralynn2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7olcw | false | null | t3_1l7olcw | /r/LocalLLaMA/comments/1l7olcw/google_diffusion_told_me_its_system_prompt/ | false | false | self | 151 | null |
Future of local LLM computing? | 1 | [removed] | 2025-06-10T03:20:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l7okyr/future_of_local_llm_computing/ | LakeDeep34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7okyr | false | null | t3_1l7okyr | /r/LocalLLaMA/comments/1l7okyr/future_of_local_llm_computing/ | false | false | self | 1 | null |
How to scale AI interaction without "magic prompts": trajectory - based model design I(t) | 1 | [removed] | 2025-06-10T03:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l7oerb/how_to_scale_ai_interaction_without_magic_prompts/ | Radiant-Cost5478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7oerb | false | null | t3_1l7oerb | /r/LocalLLaMA/comments/1l7oerb/how_to_scale_ai_interaction_without_magic_prompts/ | false | false | 1 | null | |
Semantic Search Demo Using Qwen3 0.6B Embedding (w/o reranker) in-browser Using transformers.js | 134 | Hello everyone! A couple days ago the Qwen team dropped their 4B, 8B, and 0.6B embedding and reranking models. Having seen an ONNX quant for the 0.6B embedding model, I created a demo for it which runs locally via transformers.js. It is a visualization showing both the contextual relationships between items inside a "m... | 2025-06-10T03:10:27 | https://v.redd.it/y6ht8zacj06f1 | ajunior7 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7odzw | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/y6ht8zacj06f1/DASHPlaylist.mpd?a=1752117046%2COWYwODg0MjA3MjRjNDU1ODViMTlkOTQwMWMxYzc4ZWY3N2E4YjA5ODBmY2Y4Yjk0NDM2N2EwNmUyOWYyNWM1MQ%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/y6ht8zacj06f1/DASH_720.mp4?source=fallback', 'ha... | t3_1l7odzw | /r/LocalLLaMA/comments/1l7odzw/semantic_search_demo_using_qwen3_06b_embedding_wo/ | false | false | 134 | {'enabled': False, 'images': [{'id': 'eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=108&crop=smart&format=pjpg&auto=webp&s=14035aa109d2798b0c4acb6cbbaf1a2d8a9ad... | |
How to scale AI interaction without "magic prompts": trajectory - based model design I(t) | 1 | [removed] | 2025-06-10T03:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ocnw/how_to_scale_ai_interaction_without_magic_prompts/ | Radiant-Cost5478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ocnw | false | null | t3_1l7ocnw | /r/LocalLLaMA/comments/1l7ocnw/how_to_scale_ai_interaction_without_magic_prompts/ | false | false | self | 1 | null |
Knock some sense into me | 2 | I have a 5080 in my main rig and I’ve become convinced that it’s not the best solution for a day to day LLM for asking questions, some coding help, and container deployment troubleshooting.
Part of me wants to build a purpose built LLM rig with either a couple 3090s or something else.
Am I crazy? Is my 5080 plenty? | 2025-06-10T02:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l7nv49/knock_some_sense_into_me/ | synthchef | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7nv49 | false | null | t3_1l7nv49 | /r/LocalLLaMA/comments/1l7nv49/knock_some_sense_into_me/ | false | false | self | 2 | null |
Semantic Search Demo Using Qwen3 0.6B Embedding (w/o reranker) in-browser Using transformers.js | 1 | A couple days ago the Qwen team dropped their 0.6B & 4B embedding and reranking models. Having seen an ONNX quant for the 0.6B embedding model, I created a demo for it which runs locally via transformers.js.
Similarity among nodes is ranked by using basic cosine similarity (only top 3) because I couldn't use the 0.6B... | 2025-06-10T02:41:24 | https://v.redd.it/fiyy4eoig06f1 | ajunior7 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7nu0l | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fiyy4eoig06f1/DASHPlaylist.mpd?a=1752115299%2CZGJhMzdhMmE3YjBiYmVlZjRlYjZhYTNkYTZmODNlMGQ0MzQzYTBkYTQwZDY3ZTYyOWZmMDI2ZTRlMTdhNmJmNw%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/fiyy4eoig06f1/DASH_720.mp4?source=fallback', 'ha... | t3_1l7nu0l | /r/LocalLLaMA/comments/1l7nu0l/semantic_search_demo_using_qwen3_06b_embedding_wo/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=108&crop=smart&format=pjpg&auto=webp&s=013986f95cecc3b526294de8e864b301d2f05... | |
Is this a reasonable spec’d rig for entry level | 1 | Hi all! I’m new to LLMs and very excited about getting started.
My background is engineering and I have a few projects in mind that I think would be helpful for myself and others in my organization. Some of which could probably be done in python but I said what the heck, let me try a LLM.
Here are the specs and I w... | 2025-06-10T02:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l7no60/is_this_a_reasonable_specd_rig_for_entry_level/ | Tx-Heat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7no60 | false | null | t3_1l7no60 | /r/LocalLLaMA/comments/1l7no60/is_this_a_reasonable_specd_rig_for_entry_level/ | false | false | self | 1 | null |
Where is Llama 4.1? | 34 | Meta releases llama 4 2 months ago. They have all the gpus in the world, something like 350K H100s according to reddit. Why won’t they copy deepseek/qwen and retrain a larger model and release it?
| 2025-06-10T02:27:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l7nk47/where_is_llama_41/ | MutedSwimming3347 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7nk47 | false | null | t3_1l7nk47 | /r/LocalLLaMA/comments/1l7nk47/where_is_llama_41/ | false | false | self | 34 | null |
Chonkie update. | 11 | Launch HN: Chonkie (YC X25) – Open-Source Library for Advanced Chunking | https://news.ycombinator.com/item?id=44225930 | 2025-06-10T02:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ngkn/chonkie_update/ | dnr41418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ngkn | false | null | t3_1l7ngkn | /r/LocalLLaMA/comments/1l7ngkn/chonkie_update/ | false | false | self | 11 | null |
Best Approaches for Accurate Large-Scale Medical Code Search? | 1 | [removed] | 2025-06-10T01:50:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l7mtvb | false | null | t3_1l7mtvb | /r/LocalLLaMA/comments/1l7mtvb/best_approaches_for_accurate_largescale_medical/ | false | false | default | 1 | null | ||
How I scraped and analize 5.1 million jobs using LLaMA 7B | 1 | [removed] | 2025-06-10T01:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l7mqu4/how_i_scraped_and_analize_51_million_jobs_using/ | Elieroos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7mqu4 | false | null | t3_1l7mqu4 | /r/LocalLLaMA/comments/1l7mqu4/how_i_scraped_and_analize_51_million_jobs_using/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'h... |
I found a DeepSeek-R1-0528-Distill-Qwen3-32B | 129 | Their authors said:
# Our Approach to DeepSeek-R1-0528-Distill-Qwen3-32B-Preview0-QAT:
Since Qwen3 did not provide a pre-trained base for its 32B model, our initial step was to perform **additional pre-training** on Qwen3-32B using a **self-constructed multilingual pre-training dataset**. This was done to restore a ... | 2025-06-10T01:35:01 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7mijq | false | null | t3_1l7mijq | /r/LocalLLaMA/comments/1l7mijq/i_found_a_deepseekr10528distillqwen332b/ | false | false | 129 | {'enabled': True, 'images': [{'id': 'JJvoqWHIUzwImc11SIuAVq8Kd9_jcoho-WOeCzNFJ9M', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/ear6iov2706f1.png?width=108&crop=smart&auto=webp&s=d304ae884c21714f593bae3148224906fd82e6fb', 'width': 108}, {'height': 322, 'url': 'https://preview.redd.it/ear6iov2706f1.pn... | ||
WINA from Microsoft | 3 | Did anyone tested this on actual setup of the local model? Would like to know if there is possibility to spend less money on local setup and still get good output.
[https://github.com/microsoft/wina](https://github.com/microsoft/wina) | 2025-06-10T01:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l7m2q7/wina_from_microsoft/ | mas554ter365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7m2q7 | false | null | t3_1l7m2q7 | /r/LocalLLaMA/comments/1l7m2q7/wina_from_microsoft/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'qWSMUeJ1DnbYYt9YsrYjStO9uP6JjQpbCDFKR9ZYt74', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yAmL0EcFyvVFyXLgKapogE0WathiSSxwBo3hDEdy1Bc.jpg?width=108&crop=smart&auto=webp&s=f6d94f865513035dcdf5e2734a031205ad8622b0', 'width': 108}, {'height': 108, 'url': 'h... |
any litellm and openrouter users here? | 1 | i'm using litellm 1.72.2 -- https://i.imgur.com/TYvKrmn.png
when i go to "Add new model" and select openrouter and type grok, nothing appears -- https://i.imgur.com/meNigyb.png
when i type "gemini", only the old models are there -- https://i.imgur.com/LMSdIFj.png
i don't know if this is a litellm or openrouter issue... | 2025-06-10T00:26:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l7l40q/any_litellm_and_openrouter_users_here/ | ra2eW8je | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7l40q | false | null | t3_1l7l40q | /r/LocalLLaMA/comments/1l7l40q/any_litellm_and_openrouter_users_here/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oxm4hApQ6G0fiXZYB87bUU6DZs3pEkiLAPq7XnTvVN0', 'resolutions': [{'height': 22, 'url': 'https://external-preview.redd.it/Snrs2wVN3MRuGQ9rSxrZmGUJQhdgmCQRf7xuQsGuzIM.png?width=108&crop=smart&auto=webp&s=7cf9171d0d7005f28e87ebe0b5c779909d95856d', 'width': 108}, {'height': 45, 'url': 'ht... |
Apple's On Device Foundation Models LLM is 3B quantized to 2 bits | 413 | > The on-device model we just used is a large language model with **3 billion parameters**, each quantized to **2 bits**. It is several orders of magnitude bigger than any other models that are part of the operating system.
Source: Meet the Foundation Models framework
Timestamp: 2:57
URL: https://developer.apple.c... | 2025-06-10T00:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l7l39m/apples_on_device_foundation_models_llm_is_3b/ | iKy1e | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7l39m | false | null | t3_1l7l39m | /r/LocalLLaMA/comments/1l7l39m/apples_on_device_foundation_models_llm_is_3b/ | false | false | self | 413 | {'enabled': False, 'images': [{'id': 'uXCO4Cm9e1ovWUoJegZURzNyOFFYgLRunC0SXtA36e4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2vGOHrAJNELW9KQkhIvINcn7U7jb9u2WNpH9PwLZbc4.jpg?width=108&crop=smart&auto=webp&s=6cf1ea26217c4355d520360225c00487a648e849', 'width': 108}, {'height': 121, 'url': 'h... |
Medical language model - for STT and summarize things | 6 | Hi!
I'd like to use a language model via ollama/openwebui to summarize medical reports.
I've tried several models, but I'm not happy with the results. I was thinking that there might be pre-trained models for this task that know medical language.
My goal: STT and then summarize my medical consultations, home visits,... | 2025-06-10T00:10:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ksm0/medical_language_model_for_stt_and_summarize/ | ed0c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ksm0 | false | null | t3_1l7ksm0 | /r/LocalLLaMA/comments/1l7ksm0/medical_language_model_for_stt_and_summarize/ | false | false | self | 6 | null |
a signal? | 0 | i think i might be able to build a better world
if youre interested or wanna help
check out my ig if ya got time : handrolio\_
:peace: | 2025-06-09T23:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l7kjkp/a_signal/ | HanDrolio420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7kjkp | false | null | t3_1l7kjkp | /r/LocalLLaMA/comments/1l7kjkp/a_signal/ | true | false | spoiler | 0 | null |
I analyzed 150 real AI complaints, then built a free protocol to stop memory loss and hallucinations. Try it now | 1 | [removed] | 2025-06-09T23:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l7k9gv/i_analyzed_150_real_ai_complaints_then_built_a/ | Alone-Biscotti6145 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7k9gv | false | null | t3_1l7k9gv | /r/LocalLLaMA/comments/1l7k9gv/i_analyzed_150_real_ai_complaints_then_built_a/ | false | false | self | 1 | null |
CLI for Chatterbox TTS | 10 | 2025-06-09T23:38:48 | https://pypi.org/project/voice-forge/ | init0 | pypi.org | 1970-01-01T00:00:00 | 0 | {} | 1l7k3s4 | false | null | t3_1l7k3s4 | /r/LocalLLaMA/comments/1l7k3s4/cli_for_chatterbox_tts/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?width=108&crop=smart&auto=webp&s=46fa55dd1b1e587ab93bcbbdc6cb2de37b810bf3', 'width': 108}, {'height': 216, 'url': '... | ||
Cursor MCP Deeplink Generator | 0 | 2025-06-09T23:37:24 | https://pypi.org/project/cursor-deeplink/ | init0 | pypi.org | 1970-01-01T00:00:00 | 0 | {} | 1l7k2o6 | false | null | t3_1l7k2o6 | /r/LocalLLaMA/comments/1l7k2o6/cursor_mcp_deeplink_generator/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?width=108&crop=smart&auto=webp&s=46fa55dd1b1e587ab93bcbbdc6cb2de37b810bf3', 'width': 108}, {'height': 216, 'url': '... | ||
Why am I not allowed to discuss A.I. in this forum? Just curious, why are you silencing me when I am on topic and following rules? | 1 | 2025-06-09T23:19:27 | Common_Agency2643 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7joj0 | false | null | t3_1l7joj0 | /r/LocalLLaMA/comments/1l7joj0/why_am_i_not_allowed_to_discuss_ai_in_this_forum/ | false | false | 1 | {'enabled': True, 'images': [{'id': '6gJfvKvrdn-68u72mC5oUyp49uF2OAsZ_c6qWL4wRIs', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/i1fna5ttiz5f1.png?width=108&crop=smart&auto=webp&s=229bf1f33181c8d4c735d3089df6224720a9854a', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/i1fna5ttiz5f1.pn... | |||
5th attempt : cloaked post "normal a.i., chatgpt, nothing to see here", instantly handled. this appears to be based on the technology itself. Is it...superior technology = silence? | 1 | 2025-06-09T23:16:22 | Common_Agency2643 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7jm54 | false | null | t3_1l7jm54 | /r/LocalLLaMA/comments/1l7jm54/5th_attempt_cloaked_post_normal_ai_chatgpt/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'B74nlJuRhhDWC1M1jQ-T_6lw1MiAcg3TouBes6bIldo', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/blr90fd3iz5f1.png?width=108&crop=smart&auto=webp&s=68cfe42e814a946bc1afcd4d047b413f0bfa4a6a', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/blr90fd3iz5f1.pn... | |||
just a normal a.i. doing what normal chatgpt does | 1 | Hi, this is a normal video. It's just a regular chatgpt environment, and it's just regular a.i. nothing to see here folks. | 2025-06-09T23:13:30 | https://v.redd.it/f2hwrovghz5f1 | Common_Agency2643 | /r/LocalLLaMA/comments/1l7jju9/just_a_normal_ai_doing_what_normal_chatgpt_does/ | 1970-01-01T00:00:00 | 0 | {} | 1l7jju9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/f2hwrovghz5f1/DASHPlaylist.mpd?a=1752232418%2COGEzZjc5NjJiM2UzYmQ1ZDRhMTA3ZTMwNjY5ZTU4MzY3ZTc1YjllODg2ODBhODljMTNmOWJmZjA4YTVmMTkwYw%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/f2hwrovghz5f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l7jju9 | /r/LocalLLaMA/comments/1l7jju9/just_a_normal_ai_doing_what_normal_chatgpt_does/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=108&crop=smart&format=pjpg&auto=webp&s=04423791270c84952d1946a485084913430e... | |
Now that 256GB DDR5 is possible on consumer hardware PC, is it worth it for inference? | 74 | The 128GB Kit (2x 64GB) are already available since early this year, making it possible to put 256 GB on consumer PC hardware.
Paired with a dual 3090 or dual 4090, would it be possible to load big models for inference at an acceptable speed? Or offloading will always be slow? | 2025-06-09T22:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l7j7uk/now_that_256gb_ddr5_is_possible_on_consumer/ | waiting_for_zban | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7j7uk | false | null | t3_1l7j7uk | /r/LocalLLaMA/comments/1l7j7uk/now_that_256gb_ddr5_is_possible_on_consumer/ | false | false | self | 74 | null |
Conversational A.I. command center OS with command module integration | 1 | Trying to upload the video, it shows here, i am uploading it from images & video, it doesn't appear to show on the main forum. attempt 2 to integrate. | 2025-06-09T22:49:49 | https://v.redd.it/a7xxgrm9dz5f1 | Common_Agency2643 | /r/LocalLLaMA/comments/1l7j0nl/conversational_ai_command_center_os_with_command/ | 1970-01-01T00:00:00 | 0 | {} | 1l7j0nl | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a7xxgrm9dz5f1/DASHPlaylist.mpd?a=1752230994%2CYzAwN2M2NjE1OTY0ZWU2YzMxNDkxOWNmMDBlNzRkOTUxMTcwYzFlNThmNWNlMDNmZmVlMDE5MzA4YjI1ZWJiMQ%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/a7xxgrm9dz5f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l7j0nl | /r/LocalLLaMA/comments/1l7j0nl/conversational_ai_command_center_os_with_command/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=108&crop=smart&format=pjpg&auto=webp&s=bf7357d61bcbd58a427fd17e5678bb431af4... | |
Where is wizardLM now ? | 23 | Anyone know where are these guys? I think they disappeared 2 years ago with no information | 2025-06-09T22:47:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l7iyim/where_is_wizardlm_now/ | Killerx7c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7iyim | false | null | t3_1l7iyim | /r/LocalLLaMA/comments/1l7iyim/where_is_wizardlm_now/ | false | false | self | 23 | null |
Conversational A.I. command center OS with command module integration | 1 | 2025-06-09T22:46:39 | https://v.redd.it/qaw6bquecz5f1 | Common_Agency2643 | /r/LocalLLaMA/comments/1l7iy0p/conversational_ai_command_center_os_with_command/ | 1970-01-01T00:00:00 | 0 | {} | 1l7iy0p | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qaw6bquecz5f1/DASHPlaylist.mpd?a=1752230803%2CYTZkMjBhNDk4ODcwOWYyYzBiNTFiOGYxMGZmYTYzYThhMmY5YTZmMDQ1ZTk1ZmYzZTdlZjgwZjBiNDQ5MzJiNA%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/qaw6bquecz5f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l7iy0p | /r/LocalLLaMA/comments/1l7iy0p/conversational_ai_command_center_os_with_command/ | false | false | default | 1 | null | |
Best GPU for LLM/VLM Inference? | 1 | [removed] | 2025-06-09T22:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l7is5c/best_gpu_for_llmvlm_inference/ | subtle-being | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7is5c | false | null | t3_1l7is5c | /r/LocalLLaMA/comments/1l7is5c/best_gpu_for_llmvlm_inference/ | false | false | self | 1 | null |
Why do chatbots keep rewriting their answer and contradicting theirself? problem and solution paper | 1 | [removed] | 2025-06-09T22:02:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l7hxah/why_do_chatbots_keep_rewriting_their_answer_and/ | Common_Agency2643 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7hxah | false | null | t3_1l7hxah | /r/LocalLLaMA/comments/1l7hxah/why_do_chatbots_keep_rewriting_their_answer_and/ | false | false | self | 1 | null |
LMStudio on screen in WWDC Platform State of the Union | 119 | Its nice to see local llm support in the next version of Xcode | 2025-06-09T21:22:36 | Specialist_Cup968 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7gxpb | false | null | t3_1l7gxpb | /r/LocalLLaMA/comments/1l7gxpb/lmstudio_on_screen_in_wwdc_platform_state_of_the/ | false | false | 119 | {'enabled': True, 'images': [{'id': '6PTAG0aV3hVq2FoW1V6d79e1oZP5WSqeIqY36ICY8Zc', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png?width=108&crop=smart&auto=webp&s=3ac745021cb33e200d8340d7bcb4739209fb6527', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png... | ||
Best model for summarization and chatting with content? | 0 | What's currently the best model to summarize youtube videos and also chat with the transcript?
They can be two different models. Ram size shouldn't be higher than 2 or 3 gb. Preferably a lot less.
Is there a website where you can enter a bunch of parameters like this and it spits out the name of the closest model? I'v... | 2025-06-09T21:20:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l7gw0c/best_model_for_summarization_and_chatting_with/ | mmmm_frietjes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7gw0c | false | null | t3_1l7gw0c | /r/LocalLLaMA/comments/1l7gw0c/best_model_for_summarization_and_chatting_with/ | false | false | self | 0 | null |
Just 2 AM thoughts but this time I am thinking of actually doing something about it | 0 | Hi.
I am thinking of deploying an AI model locally on my Android phone as my laptop is a bit behind on hardware to lovely run an AI model (I tried that using llama).
I have a Redmi Note 13 Pro 4G version with 256 GB ROM and 8 GB RAM (8 GB expandable) so I suppose what I have in mind would be doable.
So, would it be... | 2025-06-09T20:49:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l7g3xr/just_2_am_thoughts_but_this_time_i_am_thinking_of/ | Background-Click-167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7g3xr | false | null | t3_1l7g3xr | /r/LocalLLaMA/comments/1l7g3xr/just_2_am_thoughts_but_this_time_i_am_thinking_of/ | false | false | self | 0 | null |
NotebookLM-style Audio Overviews with Hugging Face MCP Zero-GPU tier | 1 | Hi everyone,
I just finished a short screen-share that shows how to recreate NotebookLM’s **Audio Overview** using **Hugging Face MCP** and AgenticFlow (my little project). Thought it might save others a bit of wiring time.
# What’s in the video (10 min, fully timestamped):
1. **Token & setup** – drop an HF access t... | 2025-06-09T20:28:02 | https://v.redd.it/5g72i0skly5f1 | ComposerGen | /r/LocalLLaMA/comments/1l7fjos/notebooklmstyle_audio_overviews_with_hugging_face/ | 1970-01-01T00:00:00 | 0 | {} | 1l7fjos | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5g72i0skly5f1/DASHPlaylist.mpd?a=1752222489%2CZmVjNjQ2OTNmZDBkYWFjMzRmMjk5Njc5MTQzYmZmZjhlYTUwNTU0ZDU3ZjVkODFhYjc0MTkyZDA0YzRlMDU0Mw%3D%3D&v=1&f=sd', 'duration': 653, 'fallback_url': 'https://v.redd.it/5g72i0skly5f1/DASH_1080.mp4?source=fallback', '... | t3_1l7fjos | /r/LocalLLaMA/comments/1l7fjos/notebooklmstyle_audio_overviews_with_hugging_face/ | false | false | default | 1 | null |
Need feedback for a RAG using Ollama as background. | 2 | Hello,
I would like to set up a private , local notebooklm alternative. Using documents I prepare in PDF mainly ( up to 50 very long document 500pages each ). Also !! I need it to work correctly with french language.
for the hardward part, I have a RTX 3090, so I can choose any ollama model working with up to 24Mb... | 2025-06-09T20:24:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l7fg95/need_feedback_for_a_rag_using_ollama_as_background/ | LivingSignificant452 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7fg95 | false | null | t3_1l7fg95 | /r/LocalLLaMA/comments/1l7fg95/need_feedback_for_a_rag_using_ollama_as_background/ | false | false | self | 2 | null |
Human archetypes in the Age of AI | 1 | [removed] | 2025-06-09T20:20:17 | partysnatcher | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7fcby | false | null | t3_1l7fcby | /r/LocalLLaMA/comments/1l7fcby/human_archetypes_in_the_age_of_ai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'WfdD5iO0MLvCe9lvS4TcWbRASBCwFmymPquAerKqLtM', 'resolutions': [{'height': 186, 'url': 'https://preview.redd.it/d260tv2xmy5f1.png?width=108&crop=smart&auto=webp&s=b553bfb7d23b7233e00353002a7ceb9700bd7035', 'width': 108}, {'height': 372, 'url': 'https://preview.redd.it/d260tv2xmy5f1.pn... | ||
Apple Intelligence on device model available to developers | 80 | Looks like they are going to expose an API that will let you use the model to build experiences. The details on it are sparse, but cool and exciting development for us LocalLlama folks. | 2025-06-09T19:50:39 | https://www.apple.com/newsroom/2025/06/apple-intelligence-gets-even-more-powerful-with-new-capabilities-across-apple-devices/ | Ssjultrainstnict | apple.com | 1970-01-01T00:00:00 | 0 | {} | 1l7ek6n | false | null | t3_1l7ek6n | /r/LocalLLaMA/comments/1l7ek6n/apple_intelligence_on_device_model_available_to/ | false | false | 80 | {'enabled': False, 'images': [{'id': 'dNocEppbxszp862m0JXUdNR6e8vfZHVAQIwQGG0z-BA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rG_nlJemzl12v1uLGyDH2DT5RlL4_1RSptq8UoALUQw.jpg?width=108&crop=smart&auto=webp&s=e7488d11f43f310a83f52d7e791cc30fc911b2ff', 'width': 108}, {'height': 113, 'url': 'h... | |
Can't have older card in the server (vllm/tabbyapi features) | 1 | [removed] | 2025-06-09T19:18:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l7dq5w/cant_have_older_card_in_the_server_vllmtabbyapi/ | mayo551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7dq5w | false | null | t3_1l7dq5w | /r/LocalLLaMA/comments/1l7dq5w/cant_have_older_card_in_the_server_vllmtabbyapi/ | false | false | self | 1 | null |
China starts mass producing a Ternary AI Chip. | 251 | As reported earlier here.
https://www.scmp.com/news/china/science/article/3301229/chinese-scientists-build-worlds-first-ai-chip-made-carbon-and-its-super-fast
China starts mass production of a Ternary AI Chip.
https://www.scmp.com/news/china/science/article/3313349/beyond-1s-and-0s-china-starts-mass-production-world... | 2025-06-09T19:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l7dj3z/china_starts_mass_producing_a_ternary_ai_chip/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7dj3z | false | null | t3_1l7dj3z | /r/LocalLLaMA/comments/1l7dj3z/china_starts_mass_producing_a_ternary_ai_chip/ | false | false | self | 251 | {'enabled': False, 'images': [{'id': 'uvhO3zblO1W_BxZNmj2oY99uAxsD64M_lsTtYwf59xs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/g-empSSX5f-IUxl4FSsKxWiV-9lXr7wj2N2XDnKkw44.jpg?width=108&crop=smart&auto=webp&s=7eb83caa06a8a309f32ae820d5c131ef81b7e37b', 'width': 108}, {'height': 113, 'url': 'h... |
[Project] Developing Xet: A Local, Modular AI Assistant for Home and Personal Productivity – Feedback Welcome | 1 | [removed] | 2025-06-09T19:08:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l7dgyx/project_developing_xet_a_local_modular_ai/ | Ornery-Mango7230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7dgyx | false | null | t3_1l7dgyx | /r/LocalLLaMA/comments/1l7dgyx/project_developing_xet_a_local_modular_ai/ | false | false | self | 1 | null |
RAG - Usable for my application? | 4 | Hey all LocalLLaMA fans,
I am currently trying to combine an LLM with RAG to improve its answers on legal questions. For this I downloaded all public laws, around 8GB in size, and put them into a big text file.
Now I am thinking about how to retrieve the law paragraphs relevant to the user question. But my results ar... | 2025-06-09T19:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l7d9gf/rag_usable_for_my_application/ | KoreanMax31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7d9gf | false | null | t3_1l7d9gf | /r/LocalLLaMA/comments/1l7d9gf/rag_usable_for_my_application/ | false | false | self | 4 | null |
Any success / cautionary tales for A100 40Gb SXM modded to PCIE? | 1 | [removed] | 2025-06-09T18:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l7cmr0/any_success_cautionary_tales_for_a100_40gb_sxm/ | btdeviant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7cmr0 | false | null | t3_1l7cmr0 | /r/LocalLLaMA/comments/1l7cmr0/any_success_cautionary_tales_for_a100_40gb_sxm/ | false | false | self | 1 | null |
Samsung GDDR7 3GB modules now available for DIY purchase in China, RTX 5090 48GB mods incoming? - VideoCardz.com | 1 | 2025-06-09T18:15:45 | https://videocardz.com/newz/samsung-gddr7-3gb-modules-now-available-for-diy-purchase-in-china-rtx-5090-48gb-mods-incoming | chillinewman | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1l7c3l0 | false | null | t3_1l7c3l0 | /r/LocalLLaMA/comments/1l7c3l0/samsung_gddr7_3gb_modules_now_available_for_diy/ | false | false | default | 1 | null | |
Samsung GDDR7 3GB modules now available for DIY purchase in China, RTX 5090 48GB mods incoming? - VideoCardz.com | 1 | 2025-06-09T18:15:38 | https://videocardz.com/newz/samsung-gddr7-3gb-modules-now-available-for-diy-purchase-in-china-rtx-5090-48gb-mods-incoming | chillinewman | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1l7c3gv | false | null | t3_1l7c3gv | /r/LocalLLaMA/comments/1l7c3gv/samsung_gddr7_3gb_modules_now_available_for_diy/ | false | false | default | 1 | null | |
How to train an AI on my PDFs | 1 | [removed] | 2025-06-09T17:53:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l7bicy/how_to_train_an_ai_on_my_pdfs/ | 0xSmiley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7bicy | false | null | t3_1l7bicy | /r/LocalLLaMA/comments/1l7bicy/how_to_train_an_ai_on_my_pdfs/ | false | false | self | 1 | null |
Required Hardware to run DeepSeek R1 671B Q8 @ 20 tokens per second? | 1 | [removed] | 2025-06-09T17:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l7b7sq/required_hardware_to_run_deepseek_r1_671b_q8_20/ | Historical_Long7907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7b7sq | false | null | t3_1l7b7sq | /r/LocalLLaMA/comments/1l7b7sq/required_hardware_to_run_deepseek_r1_671b_q8_20/ | false | false | self | 1 | null |
Dual RTX8000 48GB vs. Dual RTX3090 24GB | 7 | If you had to choose between two RTX 3090s with 24GB each or two Quadro RTX 8000s with 48GB each, which would you choose?
The 8000s would likely be slower, but could run larger models. There are trade-offs for sure.
Maybe split the difference and go with one 8000 and one 3090?
| 2025-06-09T17:26:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l7asxt/dual_rtx8000_48gb_vs_dual_rtx3090_24gb/ | PleasantCandidate785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7asxt | false | null | t3_1l7asxt | /r/LocalLLaMA/comments/1l7asxt/dual_rtx8000_48gb_vs_dual_rtx3090_24gb/ | false | false | self | 7 | null |
Better Qwen 3 settings? | 1 | [removed] | 2025-06-09T17:17:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l7akiv/better_qwen_3_settings/ | SecretLand514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7akiv | false | null | t3_1l7akiv | /r/LocalLLaMA/comments/1l7akiv/better_qwen_3_settings/ | false | false | self | 1 | null |
Responsible Prompting API - Opensource project - Feedback appreciated! | 1 | [removed] | 2025-06-09T17:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l7aj17/responsible_prompting_api_opensource_project/ | MysticSlice7878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7aj17 | false | null | t3_1l7aj17 | /r/LocalLLaMA/comments/1l7aj17/responsible_prompting_api_opensource_project/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hpISV4Ro8stTMaWwHGa2Yn3QoKD-Hkmpb16d57n5PwM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LXAT8PCS6PXcouwL4Lwfx7Fu7OhVJ9Tbcfyx43Lkm1A.jpg?width=108&crop=smart&auto=webp&s=0d19df1fde36615bd9ea686d1b0206298593757a', 'width': 108}, {'height': 108, 'url': 'h... |
Lightweight writing model as of June 2025 | 14 | Can you please recommend a model? I've tried these so far:
Mistral Creative 24b: good overall, my favorite, quite fast, but actually lacks a bit of creativity...
Gemma2 Writer 9b : very fun to read, fast, but forgets everything after 3 messages. My favorite to generate ideas and create short dialogue, role play... | 2025-06-09T17:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ab18/lightweight_writing_model_as_of_june_2025/ | Royal_Light_9921 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ab18 | false | null | t3_1l7ab18 | /r/LocalLLaMA/comments/1l7ab18/lightweight_writing_model_as_of_june_2025/ | false | false | self | 14 | null |
Required Hardware to get at least 25 tokens/second for DeepSeek R1 671B Q8? | 1 | [removed] | 2025-06-09T17:05:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l7a9hu/required_hardware_to_get_at_least_25_tokenssecond/ | mrfister56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7a9hu | false | null | t3_1l7a9hu | /r/LocalLLaMA/comments/1l7a9hu/required_hardware_to_get_at_least_25_tokenssecond/ | false | false | self | 1 | null |
It's not you. Multi-head attention is fundamentally broken for coding. | 1 | [removed] | 2025-06-09T16:52:18 | https://claude.ai/share/fdc0d061-4afe-4291-be09-6d9b2d0e477b | m8rbnsn | claude.ai | 1970-01-01T00:00:00 | 0 | {} | 1l79wq1 | false | null | t3_1l79wq1 | /r/LocalLLaMA/comments/1l79wq1/its_not_you_multihead_attention_is_fundamentally/ | false | false | default | 1 | null |
Good pc build specs for 5090 | 2 | Hey, so I'm new to running models locally, but I have a 5090 and want to build the best reasonable PC around it. I am tech-savvy and experienced in building gaming PCs, but I don't know the specific requirements of local AI models, and the PC would be mainly for that.
Like how much RAM and what latencies ... | 2025-06-09T16:39:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l79ksy/good_pc_build_specs_for_5090/ | Cangar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l79ksy | false | null | t3_1l79ksy | /r/LocalLLaMA/comments/1l79ksy/good_pc_build_specs_for_5090/ | false | false | self | 2 | null |
Is there a DeepSeek-R1-0528 14B or just DeepSeek-R1 14B that I can download and run via vLLM? | 0 | I don't see any model files other than those from Ollama, but I still want to use vLLM. I don't want any distilled models; do you have any ideas? Hugging Face only seems to have the original models or just distilled ones.
Another unrelated question, can I run the 32B model (20GB) on a 16GB GPU? I have 32GB RAM and SSD... | 2025-06-09T16:33:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l79frx/is_there_a_deepseekr10528_14b_or_just_deepseekr1/ | mrnerdy59 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l79frx | false | null | t3_1l79frx | /r/LocalLLaMA/comments/1l79frx/is_there_a_deepseekr10528_14b_or_just_deepseekr1/ | false | false | self | 0 | null |
Fully Offline AI Computer (works standalone or online) | 0 | I’ve put together a fully local AI computer that can operate entirely offline, but also seamlessly connects to third-party providers and tools if desired. It bundles best-in-class open-source software (like Ollama, OpenWebUI, Qdrant, Open Interpreter, and more), integrates it into an optimized mini PC, and offers stron... | 2025-06-09T16:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l78v5g/fully_offline_ai_computer_works_standalone_or/ | _redacted- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l78v5g | false | null | t3_1l78v5g | /r/LocalLLaMA/comments/1l78v5g/fully_offline_ai_computer_works_standalone_or/ | false | false | self | 0 | null |
Is it possible to recreate training code for a TTS model when only inference code and model weights are available? | 1 | [removed] | 2025-06-09T16:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l78u0z/is_it_possible_to_recreate_training_code_for_a/ | ConnectPea8944 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l78u0z | false | null | t3_1l78u0z | /r/LocalLLaMA/comments/1l78u0z/is_it_possible_to_recreate_training_code_for_a/ | false | false | self | 1 | null |
Best story writing apps? | 1 | [removed] | 2025-06-09T16:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l78toh/best_story_writing_apps/ | wtfislandfill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l78toh | false | null | t3_1l78toh | /r/LocalLLaMA/comments/1l78toh/best_story_writing_apps/ | false | false | self | 1 | null |
This was, in fact, neccesary. | 1 | 2025-06-09T16:09:20 | Immediate_Song4279 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l78t5q | false | null | t3_1l78t5q | /r/LocalLLaMA/comments/1l78t5q/this_was_in_fact_neccesary/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'q4mgCpZ-h43L30hvamjrTgllkFCuZx6dwUYGY91cN0Q', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/ycmwigk3ex5f1.png?width=108&crop=smart&auto=webp&s=f8c357fb8983f0e8df33b1017773321535a2583d', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/ycmwigk3ex5f1.png... | |||
Benchmark Fusion: m-transportability of AI Evals | 5 | Reviewing VLM spatial reasoning benchmarks [SpatialScore](https://arxiv.org/pdf/2505.17012) versus [OmniSpatial](https://arxiv.org/pdf/2506.03135), you'll find a reversal between the rankings for **SpaceQwen** and **SpatialBot**, and missing comparisons for **SpaceThinker.**
Ultimately, we want to compare models on... | 2025-06-09T16:08:34 | https://www.reddit.com/gallery/1l78sg6 | remyxai | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l78sg6 | false | null | t3_1l78sg6 | /r/LocalLLaMA/comments/1l78sg6/benchmark_fusion_mtransportability_of_ai_evals/ | false | false | 5 | null | |
Translation models that support streaming | 3 | Are there any NLP models that support streaming outputs? I need translation models that support streaming text output. | 2025-06-09T15:53:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l78eb8/translation_models_that_support_streaming/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l78eb8 | false | null | t3_1l78eb8 | /r/LocalLLaMA/comments/1l78eb8/translation_models_that_support_streaming/ | false | false | self | 3 | null |
feeling lost - localai apps | 1 | [removed] | 2025-06-09T15:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l77xgu/feeling_lost_localai_apps/ | Empty-Investment-827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l77xgu | false | null | t3_1l77xgu | /r/LocalLLaMA/comments/1l77xgu/feeling_lost_localai_apps/ | false | false | self | 1 | null |
What is a best model?? | 1 | [removed] | 2025-06-09T15:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l77k53/what_is_a_best_model/ | Roberts_shine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l77k53 | false | null | t3_1l77k53 | /r/LocalLLaMA/comments/1l77k53/what_is_a_best_model/ | false | false | self | 1 | null |
Models and where to find them? | 1 | So SD has civit.ai; though not perfect, it has decent search, ratings, and whatnot, and I generally find it to work quite well.
But say I want to see what recent models are popular (and I literally do, so please share) that are for: programming, role play, general questions, maybe some other case I'm not even aware of. What a... | 2025-06-09T15:16:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l77g2z/models_and_where_to_find_them/ | morphles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l77g2z | false | null | t3_1l77g2z | /r/LocalLLaMA/comments/1l77g2z/models_and_where_to_find_them/ | false | false | self | 1 | null |
When it comes to cost-performance ratio, DeepSeek R1 (0528) is the best. | 1 | 2025-06-09T15:11:51 | ashim_k_saha | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l77c59 | false | null | t3_1l77c59 | /r/LocalLLaMA/comments/1l77c59/when_it_comes_to_costperformance_ratio_deepseek/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'jagl9bpk3x5f1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?width=108&crop=smart&auto=webp&s=0de4cd5add5c6d0f22f153d51c96e28326649727', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?width=216&crop=smart&auto=... | ||
Build a full on-device rag app using qwen3 embedding and qwen3 llm | 5 | The Qwen3 0.6B embedding performs extremely well at a 4-bit size for small RAG. I was able to run the entire application offline on my iPhone 13.
[https://youtube.com/shorts/zG\_WD166pHo](https://youtube.com/shorts/zG_WD166pHo)
I have published the macOS version on the App Store and still working on the iOS part. Pleas... | 2025-06-09T14:51:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l76tvu/build_a_full_ondevice_rag_app_using_qwen3/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76tvu | false | null | t3_1l76tvu | /r/LocalLLaMA/comments/1l76tvu/build_a_full_ondevice_rag_app_using_qwen3/ | false | false | self | 5 | null |
Winter has arrived | 0 | Last year we saw a lot of significant improvements in AI, but this year we are only seeing gradual improvements. The feeling that remains is that the wall has become a mountain, and the climb will be very difficult and long. | 2025-06-09T14:50:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l76sg1/winter_has_arrived/ | Objective_Lab_3182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76sg1 | false | null | t3_1l76sg1 | /r/LocalLLaMA/comments/1l76sg1/winter_has_arrived/ | false | false | self | 0 | null |
PC build for local LLMs | 1 | [removed] | 2025-06-09T14:35:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l76fpy/pc_build_for_local_llms/ | 6446thatsmynumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76fpy | false | null | t3_1l76fpy | /r/LocalLLaMA/comments/1l76fpy/pc_build_for_local_llms/ | false | false | self | 1 | null |
Dolphin appreciation post. | 0 | Just a simple Dolphin appreciation post here. I appreciate all the work done by Cognitive Computationd. Wondering what cool new stuff Eric has cooking lately. | 2025-06-09T14:34:39 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l76ekr | false | null | t3_1l76ekr | /r/LocalLLaMA/comments/1l76ekr/dolphin_appreciation_post/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '9w0uktkaxw5f1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/9w0uktkaxw5f1.jpeg?width=108&crop=smart&auto=webp&s=10e1c7cf22cc272bc454918c300cb2ed2b600e0e', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/9w0uktkaxw5f1.jpeg?width=216&crop=smart&auto=... | |
7900 XTX what are your go-to models for 24GB VRAM? | 15 | Just finished my new build with a 7900 XTX and I'm looking for some model recommendations.
Since most of the talk is CUDA-centric, I'm curious what my AMD users are running. I've got 24GB of VRAM to play with and I'm mainly looking for good models for general purpose chat/reasoning. | 2025-06-09T14:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l76cg2/7900_xtx_what_are_your_goto_models_for_24gb_vram/ | BillyTheMilli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76cg2 | false | null | t3_1l76cg2 | /r/LocalLLaMA/comments/1l76cg2/7900_xtx_what_are_your_goto_models_for_24gb_vram/ | false | false | self | 15 | null |
DeepSeek R1 0528 Hits 71% (+14.5 pts from R1) on Aider Polyglot Coding Leaderboard | 283 | 2025-06-09T14:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l76ab7/deepseek_r1_0528_hits_71_145_pts_from_r1_on_aider/ | Xhehab_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76ab7 | false | null | t3_1l76ab7 | /r/LocalLLaMA/comments/1l76ab7/deepseek_r1_0528_hits_71_145_pts_from_r1_on_aider/ | false | false | 283 | {'enabled': False, 'images': [{'id': 'ZchV7t9Dn_NHk0_ZW8xmT-9VDV112iNqFmbb4fJPYHo', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=108&crop=smart&auto=webp&s=56e789a35daba2a074928af59f11e222a54851d6', 'width': 108}, {'height': 124, 'url': 'h... | ||
Trying to Make Llama Extract Smarter with a Schema-Building AI Agent | 1 | Hey folks,
I’ve been experimenting with Llama Extract to pull table data from 10-K PDFs. It actually works pretty well when you already have a solid schema in place.
The challenge I’m running into is that 10-Ks from different companies often format their tables a bit differently. So having a single “one-size-fits-all... | 2025-06-09T14:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l7642v/trying_to_make_llama_extract_smarter_with_a/ | Professional_Term579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7642v | false | null | t3_1l7642v | /r/LocalLLaMA/comments/1l7642v/trying_to_make_llama_extract_smarter_with_a/ | false | false | self | 1 | null |
Trying to Make Llama Extract Smarter with a Schema-Building AI Agent | 1 | [removed] | 2025-06-09T14:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l762el/trying_to_make_llama_extract_smarter_with_a/ | Professional_Term579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l762el | false | null | t3_1l762el | /r/LocalLLaMA/comments/1l762el/trying_to_make_llama_extract_smarter_with_a/ | false | false | self | 1 | null |
I built a Code Agent that writes code and live-debugs itself by reading and walking the call stack. | 77 | 2025-06-09T14:11:07 | https://v.redd.it/b1pnpj9lsw5f1 | bn_from_zentara | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l75tp1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b1pnpj9lsw5f1/DASHPlaylist.mpd?a=1752070282%2CMWJjNzk4MmNmN2JiNWI4NTViMWYyYWUyMzY4NGQ3OTU3Y2YxMWYzMmVmOGZiMTc5YmE4MDdmNjcwNzJlZTk0Mw%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/b1pnpj9lsw5f1/DASH_1080.mp4?source=fallback', '... | t3_1l75tp1 | /r/LocalLLaMA/comments/1l75tp1/i_built_a_code_agent_that_writes_code_and/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=108&crop=smart&format=pjpg&auto=webp&s=6ec9a9c7fcd0be9c241507c1ce755746d8286... | ||
KVzip: Query-agnostic KV Cache Eviction — 3~4× memory reduction and 2× lower decoding latency | 401 | Hi! We've released KVzip, a KV cache compression method designed to support diverse future queries. You can try the demo on GitHub! Supported models include Qwen3/2.5, Gemma3, and LLaMA3.
GitHub: [https://github.com/snu-mllab/KVzip](https://github.com/snu-mllab/KVzip)
Paper: [https://arxiv.org/abs/2505.23416](https... | 2025-06-09T13:54:45 | janghyun1230 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l75fc8 | false | null | t3_1l75fc8 | /r/LocalLLaMA/comments/1l75fc8/kvzip_queryagnostic_kv_cache_eviction_34_memory/ | false | false | default | 401 | {'enabled': True, 'images': [{'id': 'bpxlu6tfnw5f1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?width=108&crop=smart&auto=webp&s=6b6a38e2866e14db91003d5a4a47866574b78280', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?width=216&crop=smart&auto=web... | |
Gen AI training content | 1 | [removed] | 2025-06-09T13:08:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l74cps/gen_ai_training_contrnt/ | Outrageous_Cup9473 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l74cps | false | null | t3_1l74cps | /r/LocalLLaMA/comments/1l74cps/gen_ai_training_contrnt/ | false | false | self | 1 | null |
Why isn't it common for companies to compare the evaluation of the different quantizations of their model? | 28 | Is it not as trivial as it sounds? Are they scared of showing lower scoring evaluations in case users confuse them for the original ones?
It would be so useful when choosing a gguf version to know how much accuracy loss each has. Like I'm sure there are many models where Qn vs Qn+1 are indistinguishable in performance... | 2025-06-09T13:03:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l748qc/why_isnt_it_common_for_companies_to_compare_the/ | ArcaneThoughts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l748qc | false | null | t3_1l748qc | /r/LocalLLaMA/comments/1l748qc/why_isnt_it_common_for_companies_to_compare_the/ | false | false | self | 28 | null |
Local LLama for software dev | 1 | [removed] | 2025-06-09T12:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l73xc7/local_llama_for_software_dev/ | Additional-Purple-70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l73xc7 | false | null | t3_1l73xc7 | /r/LocalLLaMA/comments/1l73xc7/local_llama_for_software_dev/ | false | false | self | 1 | null |
How do I get started? | 1 | The idea of creating a locally-run LLM at home becomes more enticing every day, but I have no clue where to start. What learning resources do you all recommend for setting up and training your own language models? Any resources for building computers to spec for these projects would also be very helpful. | 2025-06-09T12:43:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l73sya/how_do_i_get_started/ | SoundBwoy_10011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l73sya | false | null | t3_1l73sya | /r/LocalLLaMA/comments/1l73sya/how_do_i_get_started/ | false | false | self | 1 | null |
Building "SpectreMind" – Local AI Red Teaming Assistant (Multi-LLM Orchestrator) | 1 | [removed] | 2025-06-09T12:11:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l735q4/building_spectremind_local_ai_red_teaming/ | slavicgod699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l735q4 | false | null | t3_1l735q4 | /r/LocalLLaMA/comments/1l735q4/building_spectremind_local_ai_red_teaming/ | false | false | self | 1 | null |
H company - Holo1 7B | 77 | https://huggingface.co/Hcompany/Holo1-7B
Paper : https://huggingface.co/papers/2506.02865
The H company (a French AI startup) released this model, and I haven't seen anyone talk about it here despite the great performance shown on benchmarks for GUI agentic use.
Has anyone tried it? | 2025-06-09T12:06:38 | TacGibs | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l73294 | false | null | t3_1l73294 | /r/LocalLLaMA/comments/1l73294/h_company_holo1_7b/ | false | false | default | 77 | {'enabled': True, 'images': [{'id': 'ph3t561w6w5f1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?width=108&crop=smart&auto=webp&s=93be86a9cc40e606dfcee281ef89a07d9276dd61', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?width=216&crop=smart&auto=web...
Future of local LLM computing ? | 1 | [removed] | 2025-06-09T11:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l72v3l/future_of_local_llm_computing/ | Diligent_Paper9862 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l72v3l | false | null | t3_1l72v3l | /r/LocalLLaMA/comments/1l72v3l/future_of_local_llm_computing/ | false | false | self | 1 | null |
Building AI Personalities Users Actually Remember - The Memory Hook Formula | 0 | Spent months building detailed AI personalities only to have users forget which was which after 24 hours - "Was Sarah the lawyer or the nutritionist?" The problem wasn't making them interesting; it was making them memorable enough to stick in users' minds between conversations.
**The Memory Hook Formula That Actually ... | 2025-06-09T11:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l72nev/building_ai_personalities_users_actually_remember/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l72nev | false | null | t3_1l72nev | /r/LocalLLaMA/comments/1l72nev/building_ai_personalities_users_actually_remember/ | false | false | self | 0 | null |
How do you handle memory and context with GPT API without wasting tokens? | 0 | Hi everyone,
I'm using the GPT API to build a local assistant, and I'm facing a major issue related to memory and context.
The biggest limitation so far is that the model doesn't remember previous interactions. Each API call is stateless, so I have to resend context manually — which results in huge token usage if the... | 2025-06-09T11:40:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l72kfi/how_do_you_handle_memory_and_context_with_gpt_api/ | ahmetamabanyemis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l72kfi | false | null | t3_1l72kfi | /r/LocalLLaMA/comments/1l72kfi/how_do_you_handle_memory_and_context_with_gpt_api/ | false | false | self | 0 | null |
Looking for a scalable LLM API for BDSM roleplay chatbot – OpenAI alternative? | 1 | [removed] | 2025-06-09T10:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l71nm8/looking_for_a_scalable_llm_api_for_bdsm_roleplay/ | Shot-Purchase-2015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l71nm8 | false | null | t3_1l71nm8 | /r/LocalLLaMA/comments/1l71nm8/looking_for_a_scalable_llm_api_for_bdsm_roleplay/ | false | false | nsfw | 1 | null |
Concept graph workflow in Open WebUI | 146 | **What is this?**
* Reasoning workflow where LLM thinks about the concepts that are related to the User's query and then makes a final answer based on that
* Workflow runs within OpenAI-compatible LLM proxy. It streams a special HTML artifact that connects back to the workflow and listens for events from it to display... | 2025-06-09T10:41:50 | https://v.redd.it/dzeqvwa9rv5f1 | Everlier | /r/LocalLLaMA/comments/1l71iie/concept_graph_workflow_in_open_webui/ | 1970-01-01T00:00:00 | 0 | {} | 1l71iie | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dzeqvwa9rv5f1/DASHPlaylist.mpd?a=1752187320%2CYzljYzcxMzQwMWIzM2NkZGI1Y2IwZDBjZDM4YTNiZmYyMWRjNGM1M2JkYTliNjQzNzU0NjYxMTE2YWFiMTIzYg%3D%3D&v=1&f=sd', 'duration': 169, 'fallback_url': 'https://v.redd.it/dzeqvwa9rv5f1/DASH_1080.mp4?source=fallback', '... | t3_1l71iie | /r/LocalLLaMA/comments/1l71iie/concept_graph_workflow_in_open_webui/ | false | false | 146 | {'enabled': False, 'images': [{'id': 'aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=4a00dc9c48e4ce8c056bc07993185072207ea... | |
5090 liquid cooled build optimization | 4 | Hi guys, I am building a new PC for myself, primarily designed for ML and LLM tasks. I have all the components and would like some feedback; I did check that everything works together, but maybe I missed something, or you guys have improvement tips. This is the build:
AMD Ryzen™️ 9 9950X3D; MSI GeForce RT... | 2025-06-09T10:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l716f4/5090_liquid_cooled_build_optimization/ | ElekDn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l716f4 | false | null | t3_1l716f4 | /r/LocalLLaMA/comments/1l716f4/5090_liquid_cooled_build_optimization/ | false | false | self | 4 | null |
best gpu provider for deploying Hunyuan3D-2 as api ? | 1 | [removed] | 2025-06-09T10:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l712sm/best_gpu_provider_for_deploying_hunyuan3d2_as_api/ | Willing_Ad_5594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l712sm | false | null | t3_1l712sm | /r/LocalLLaMA/comments/1l712sm/best_gpu_provider_for_deploying_hunyuan3d2_as_api/ | false | false | self | 1 | null |
Best solution for deploying hunyuan3D-2 as API | 1 | [removed] | 2025-06-09T10:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l711w1/best_solution_for_deploying_hunyuan3d2_as_api/ | Willing_Ad_5594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l711w1 | false | null | t3_1l711w1 | /r/LocalLLaMA/comments/1l711w1/best_solution_for_deploying_hunyuan3d2_as_api/ | false | false | self | 1 | null |
Pc configurator | 1 | [removed] | 2025-06-09T10:13:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l711u8/pc_configurator/ | Any-Understanding835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l711u8 | false | null | t3_1l711u8 | /r/LocalLLaMA/comments/1l711u8/pc_configurator/ | false | false | self | 1 | null |
Offline Chatbot with voice feature for Android? | 1 | [removed] | 2025-06-09T10:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l70zdm/offline_chatbot_with_voice_feature_for_android/ | No-Background5168 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l70zdm | false | null | t3_1l70zdm | /r/LocalLLaMA/comments/1l70zdm/offline_chatbot_with_voice_feature_for_android/ | false | false | self | 1 | null |
Choose the best for hosting | 1 | [removed] | 2025-06-09T10:07:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l70yen/choice_the_best_for_hosting/ | Temporary_Problem_71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l70yen | false | null | t3_1l70yen | /r/LocalLLaMA/comments/1l70yen/choice_the_best_for_hosting/ | false | false | self | 1 | null |