| title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–41.5k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4–213 chars, nullable) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Ryzen Ai Max+ 395 vs RTX 5090 | 23 | Currently running a 5090 and it's been great. Super fast for anything under 34B. I mostly use WAN2.1 14B for video gen and some larger reasoning models. But Id like to run bigger models. And with the release of Veo 3 the quality has blown me away. Stuff like those Bigfoot and Stormtrooper vlogs look years ahead of anyt... | 2025-06-15T00:12:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lbn1vy/ryzen_ai_max_395_vs_rtx_5090/ | Any-Cobbler6161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbn1vy | false | null | t3_1lbn1vy | /r/LocalLLaMA/comments/1lbn1vy/ryzen_ai_max_395_vs_rtx_5090/ | false | false | self | 23 | null |
LocalLLaMA | 1 | [removed] | 2025-06-15T00:09:54 | ProgrammerDazzling78 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lbmzvp | false | null | t3_1lbmzvp | /r/LocalLLaMA/comments/1lbmzvp/localllama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'rfrTwVhAQLLYQ6grLSa-q3Hl6IZZvtN5zTs3LYaz0jk', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/4k53k10egz6f1.jpeg?width=108&crop=smart&auto=webp&s=96556c6732cc7144e0b47ddc81eb3631851d61b9', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/4k53k10egz6f1.j... | ||
Is this loss (and speed of decreasing loss) normal? | 1 | [removed] | 2025-06-14T23:26:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lbm4al/is_this_loss_and_speed_of_decreasing_loss_normal/ | Extra-Campaign7281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbm4al | false | null | t3_1lbm4al | /r/LocalLLaMA/comments/1lbm4al/is_this_loss_and_speed_of_decreasing_loss_normal/ | false | false | 1 | null | |
Optimizing llama.cpp flags for best token/s? 'llama-optimus' search might help | 1 | [removed] | 2025-06-14T23:02:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lblmiq/optimizing_llamacpp_flags_for_best_tokens/ | Expert-Inspector-128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lblmiq | false | null | t3_1lblmiq | /r/LocalLLaMA/comments/1lblmiq/optimizing_llamacpp_flags_for_best_tokens/ | false | false | 1 | null | |
Squeezing more speed out of devstralQ4_0.gguf on a 1080ti | 2 | I have an old 1080ti GPU and was quite excited that I could get the **devstralQ4\_0.gguf** to run on it! But it is slooooow. So I bothered a bigger LLM for advice on how to speed things up, and it was helpful. But it is still slow. Any magic tricks (aside from finally getting a new card or running a smaller model?)... | 2025-06-14T22:52:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lbleap/squeezing_more_speed_out_of_devstralq4_0gguf_on_a/ | firesalamander | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbleap | false | null | t3_1lbleap | /r/LocalLLaMA/comments/1lbleap/squeezing_more_speed_out_of_devstralq4_0gguf_on_a/ | false | false | self | 2 | null |
Why Search Sucks! (But First, A Brief History) - Talk & Discussion | 1 | [removed] | 2025-06-14T22:50:05 | https://youtu.be/vZVcBUnre-c | kushalgoenka | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1lblcvl | false | {'oembed': {'author_name': 'Kushal Goenka', 'author_url': 'https://www.youtube.com/@KushalGoenka', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vZVcBUnre-c?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1lblcvl | /r/LocalLLaMA/comments/1lblcvl/why_search_sucks_but_first_a_brief_history_talk/ | false | false | default | 1 | null |
Best tutorial for installing a local llm with GUI setup? | 2 | I essentially want an LLM with a gui setup on my own pc - set up like a ChatGPT with a GUI but all running locally. | 2025-06-14T22:35:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lbl1qo/best_tutorial_for_installing_a_local_llm_with_gui/ | runnerofshadows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbl1qo | false | null | t3_1lbl1qo | /r/LocalLLaMA/comments/1lbl1qo/best_tutorial_for_installing_a_local_llm_with_gui/ | false | false | self | 2 | null |
is huggingface down? | 0 | AGI taked over? let's hide!!! | 2025-06-14T22:14:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lbkm9s/is_huggingface_down/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbkm9s | false | null | t3_1lbkm9s | /r/LocalLLaMA/comments/1lbkm9s/is_huggingface_down/ | false | false | self | 0 | null |
Local models are getting decent at coding in Cline \| Qwen3-30B-A3B-GGUF; M4 Max, 36GB RAM | 1 | Really impressed with where local models are getting! It's been a longtime dream to run local models in Cline (on reasonable hardware) and it looks like we're getting close!<br>This is still a lightweight test I've found local models to be all but unusable in Cline.<br>**model**: lmstudio-community/Qwen3-30B-A3B-GGUF (3-bi... | 2025-06-14T22:05:13 | https://v.redd.it/clqb1y4vqy6f1 | nick-baumann | /r/LocalLLaMA/comments/1lbkfah/local_models_are_getting_decent_at_coding_in/ | 1970-01-01T00:00:00 | 0 | {} | 1lbkfah | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/clqb1y4vqy6f1/DASHPlaylist.mpd?a=1752660319%2CZmYxYzkyMzY4Nzg4MjNmMmIwOGMwZDVjZTgyOTAzNDFiMDM1Mjk4NWZhYTljNzEwZmMxMjk5NzlhMDMyNmQxMw%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/clqb1y4vqy6f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lbkfah | /r/LocalLLaMA/comments/1lbkfah/local_models_are_getting_decent_at_coding_in/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX.png?width=108&crop=smart&format=pjpg&auto=webp&s=ac4506a15bfad79eaf5b11f1938562a086139... |
I added vision to Magistral | 149 | I was inspired by an [experimental Devstral model](https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF), and had the idea to the same thing to Magistral Small.<br>I replaced Mistral Small 3.1's language layers with Magistral's.<br>I suggest using vLLM for inference with the correct system prompt and sampling par... | 2025-06-14T22:02:20 | https://huggingface.co/OptimusePrime/Magistral-Small-2506-Vision | Vivid_Dot_6405 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lbkd46 | false | null | t3_1lbkd46 | /r/LocalLLaMA/comments/1lbkd46/i_added_vision_to_magistral/ | false | false | default | 149 | {'enabled': False, 'images': [{'id': 'X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg.png?width=108&crop=smart&auto=webp&s=3102b69c74d945f421090e75fea2d27c61da78b9', 'width': 108}, {'height': 116, 'url': 'h... |
Magistral Small with Vision | 1 | [removed] | 2025-06-14T21:57:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lbk8yb/magistral_small_with_vision/ | Vivid_Dot_6405 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbk8yb | false | null | t3_1lbk8yb | /r/LocalLLaMA/comments/1lbk8yb/magistral_small_with_vision/ | false | false | self | 1 | null |
Vision Support for Magistral Small | 1 | [removed] | 2025-06-14T21:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lbk82e/vision_support_for_magistral_small/ | Vivid_Dot_6405 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbk82e | false | null | t3_1lbk82e | /r/LocalLLaMA/comments/1lbk82e/vision_support_for_magistral_small/ | false | false | self | 1 | null |
Vision Support for Magistral Small | 1 | [removed] | 2025-06-14T21:54:05 | https://huggingface.co/OptimusePrime/Magistral-Small-2506-Vision | Vivid_Dot_6405 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lbk6la | false | null | t3_1lbk6la | /r/LocalLLaMA/comments/1lbk6la/vision_support_for_magistral_small/ | false | false | default | 1 | null |
How does everyone do Tool Calling? | 63 | I’ve begun to see Tool Calling so that I can make the LLMs I’m using do real work for me. I do all my LLM work in Python and was wondering if there’s any libraries that you recommend that make it all easy. I have just recently seen MCP and I have been trying to add it manually through the OpenAI library but that’s quit... | 2025-06-14T21:12:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lbj978/how_does_everyone_do_tool_calling/ | MKU64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbj978 | false | null | t3_1lbj978 | /r/LocalLLaMA/comments/1lbj978/how_does_everyone_do_tool_calling/ | false | false | self | 63 | null |
Watching Robots having a conversation | 5 | Something I always wanted to do.<br>Have two or more different local LLM models having a conversation, initiated by user supplied prompt.<br>I initially wrote this as a python script, but that quickly became not as interesting as a native app.<br>Personally, I feel like we should aim at having things running on our comput... | 2025-06-14T20:56:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lbix9k/watching_robots_having_a_conversation/ | sp1tfir3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbix9k | false | null | t3_1lbix9k | /r/LocalLLaMA/comments/1lbix9k/watching_robots_having_a_conversation/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU.png?width=108&crop=smart&auto=webp&s=3fe634747f21319c47dc44653226fe7416242367', 'width': 108}, {'height': 108, 'url': 'h... |
Mistral Small 3.1 vs Magistral Small - experience? | 28 | Hi all<br>I have used Mistral Small 3.1 in my dataset generation pipeline over the past couple months. It does a better job than many larger LLMs in multiturn conversation generation, outperforming Qwen 3 30b and 32b, Gemma 27b, and GLM-4 (as well as others). My next go-to model is Nemotron Super 49B, but I can afford le... | 2025-06-14T20:43:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lbimsz/mistral_small_31_vs_magistral_small_experience/ | mj3815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbimsz | false | null | t3_1lbimsz | /r/LocalLLaMA/comments/1lbimsz/mistral_small_31_vs_magistral_small_experience/ | false | false | self | 28 | null |
Fine-tuning Diffusion Language Models - Help? | 10 | I have spent the last few days trying to fine tune a diffusion language model for coding.<br>I tried Dream, LLaDA, and SMDM, but got no Colab Notebook working. I've got to admit, I don't know Python, which might be a reason.<br>Has anyone had success? Or could anyone help me out? | 2025-06-14T20:38:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lbii6m/finetuning_diffusion_language_models_help/ | DunklerErpel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbii6m | false | null | t3_1lbii6m | /r/LocalLLaMA/comments/1lbii6m/finetuning_diffusion_language_models_help/ | false | false | self | 10 | null |
Best Approach for Accurate Speaker Diarization | 1 | [removed] | 2025-06-14T20:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lbi9cj/best_approach_for_accurate_speaker_diarization/ | LongjumpingComb8622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbi9cj | false | null | t3_1lbi9cj | /r/LocalLLaMA/comments/1lbi9cj/best_approach_for_accurate_speaker_diarization/ | false | false | self | 1 | null |
Best Approach for Accurate Speaker Diarization | 1 | [removed] | 2025-06-14T20:25:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lbi7n0/best_approach_for_accurate_speaker_diarization/ | LongjumpingComb8622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbi7n0 | false | null | t3_1lbi7n0 | /r/LocalLLaMA/comments/1lbi7n0/best_approach_for_accurate_speaker_diarization/ | false | false | self | 1 | null |
GPT 4 might already understand what you’re thinking and barely anyone noticed | 1 | [removed] | 2025-06-14T20:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lbhwyj/gpt_4_might_already_understand_what_youre/ | Visible-Property3453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbhwyj | false | null | t3_1lbhwyj | /r/LocalLLaMA/comments/1lbhwyj/gpt_4_might_already_understand_what_youre/ | false | false | self | 1 | null |
Why do all my posts get auto-removed? | 1 | [removed] | 2025-06-14T20:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lbhw3e/why_do_all_my_posts_get_autoremoved/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbhw3e | false | null | t3_1lbhw3e | /r/LocalLLaMA/comments/1lbhw3e/why_do_all_my_posts_get_autoremoved/ | false | false | self | 1 | null |
subreddit meta | 1 | [removed] | 2025-06-14T20:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lbhq4m/subreddit_meta/ | freedom2adventure | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbhq4m | false | null | t3_1lbhq4m | /r/LocalLLaMA/comments/1lbhq4m/subreddit_meta/ | false | false | self | 1 | null |
I'm *also* working on my own local AI "assistant" with memory and emotional logic. Looking for some ideas on how to improve memory. Check it out on github :3 | 1 | [removed] | 2025-06-14T19:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lbh9vw/im_also_working_on_my_own_local_ai_assistant_with/ | flamingrickpat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbh9vw | false | null | t3_1lbh9vw | /r/LocalLLaMA/comments/1lbh9vw/im_also_working_on_my_own_local_ai_assistant_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg.png?width=108&crop=smart&auto=webp&s=5c401ff5cbdb345669dc864b38a21079c0d4df02', 'width': 108}, {'height': 108, 'url': 'h... |
New tool: llama‑optimus –> Auto‑tune llama.cpp for max tokens/s | 1 | [removed] | 2025-06-14T19:15:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lbgm4l/new_tool_llamaoptimus_autotune_llamacpp_for_max/ | Expert-Inspector-128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbgm4l | false | null | t3_1lbgm4l | /r/LocalLLaMA/comments/1lbgm4l/new_tool_llamaoptimus_autotune_llamacpp_for_max/ | false | false | 1 | null | |
Massive performance gains from linux? | 85 | Ive been using LM studio for inference and I switched to Mint Linux because Windows is hell. My tokens per second went from 1-2t/s to 7-8t/s. Prompt eval went from 1 minutes to 2 seconds.<br>Specs:<br>13700k<br>Asus Maximus hero z790<br>64gb of ddr5<br>2tb Samsung pro SSD<br>2X 3090 at 250w limit each on x8 pcie lanes<br>Model: Unsloth Q... | 2025-06-14T19:13:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lbgkuk/massive_performance_gains_from_linux/ | Only_Situation_4713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbgkuk | false | null | t3_1lbgkuk | /r/LocalLLaMA/comments/1lbgkuk/massive_performance_gains_from_linux/ | false | false | self | 85 | null |
🚪 Dungeo AI WebUI – A Local Roleplay Frontend for LLM-based Dungeon Masters 🧙♂️✨ | 1 | [removed] | 2025-06-14T19:11:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lbgjhv/dungeo_ai_webui_a_local_roleplay_frontend_for/ | Reasonable_Brief578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbgjhv | false | null | t3_1lbgjhv | /r/LocalLLaMA/comments/1lbgjhv/dungeo_ai_webui_a_local_roleplay_frontend_for/ | false | false | self | 1 | null |
Comment on The Illusion of Thinking: Recent paper from Apple contain glaring flaws in the original study's experimental design, from not considering token limit to testing unsolvable puzzles. | 54 | I have seen a lively discussion here on the recent Apple paper, which was quite interesting. When trying to read opinions on it I have found a recent comment on this Apple paper:<br>*Comment on The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity -* ... | 2025-06-14T19:04:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lbgczn/comment_on_the_illusion_of_thinking_recent_paper/ | Garpagan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbgczn | false | null | t3_1lbgczn | /r/LocalLLaMA/comments/1lbgczn/comment_on_the_illusion_of_thinking_recent_paper/ | false | false | self | 54 | null |
💡 Quick Tip for Newcomers to LLMs (Local Large Language Models) | 1 | [removed] | 2025-06-14T19:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lbgc8y/quick_tip_for_newcomers_to_llms_local_large/ | MixChance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbgc8y | false | null | t3_1lbgc8y | /r/LocalLLaMA/comments/1lbgc8y/quick_tip_for_newcomers_to_llms_local_large/ | false | false | self | 1 | null |
💡 Quick Tip for Newcomers to LLMs (Local Large Language Models): | 1 | [removed] | 2025-06-14T19:00:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lbg9nx/quick_tip_for_newcomers_to_llms_local_large/ | MixChance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbg9nx | false | null | t3_1lbg9nx | /r/LocalLLaMA/comments/1lbg9nx/quick_tip_for_newcomers_to_llms_local_large/ | false | false | self | 1 | null |
AI voice chat/pdf reader desktop gtk app using ollama | 14 | Hello, I started building this application before solutions like ElevenReader were developed, but maybe someone will find it useful<br>[https://github.com/kopecmaciej/fox-reader](https://github.com/kopecmaciej/fox-reader) | 2025-06-14T18:56:21 | https://v.redd.it/twm00j9htx6f1 | Cieju04 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lbg65e | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/twm00j9htx6f1/DASHPlaylist.mpd?a=1752519396%2CMDM0ODRkNGY0ZjI5OTQ1ZDJjNmFlMDcyZTFiZWVkM2VmM2FiY2FkNTE2NDRlZTZlZTZjNTFjMTJkZjUwMjk2Nw%3D%3D&v=1&f=sd', 'duration': 98, 'fallback_url': 'https://v.redd.it/twm00j9htx6f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lbg65e | /r/LocalLLaMA/comments/1lbg65e/ai_voice_chatpdf_reader_desktop_gtk_app_using/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD.png?width=108&crop=smart&format=pjpg&auto=webp&s=cdd6d55089718b0d464dab8c937915ef4406b... |
Somebody use https://petals.dev/??? | 3 | I just discover this and found strange that nobody here mention it. I mean... it is local after all. | 2025-06-14T18:49:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lbg06c/somebody_use_httpspetalsdev/ | 9acca9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbg06c | false | null | t3_1lbg06c | /r/LocalLLaMA/comments/1lbg06c/somebody_use_httpspetalsdev/ | false | false | self | 3 | null |
an offline voice assistant | 1 | [removed] | 2025-06-14T18:31:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lbfm15/an_offline_voice_assistant/ | ppzms | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbfm15 | false | null | t3_1lbfm15 | /r/LocalLLaMA/comments/1lbfm15/an_offline_voice_assistant/ | false | false | self | 1 | null |
26 Quants that fit on 32GB vs 10,000-token "Needle in a Haystack" test | 206 | \| Model \| Params (B) \| Quantization \| Results \|<br>\|:-------------------------------------\|-----------:\|:------------:\|:--------\|<br>\| **Meta Llama Family** \| \| \| \|<br>\| Llama 2 70 \| 70 \| q2 \| failed \|<br>\| Ll... | 2025-06-14T18:27:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lbfinu/26_quants_that_fit_on_32gb_vs_10000token_needle/ | EmPips | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbfinu | false | null | t3_1lbfinu | /r/LocalLLaMA/comments/1lbfinu/26_quants_that_fit_on_32gb_vs_10000token_needle/ | false | false | self | 206 | null |
Augmentoolkit just got a major update - huge advance for dataset generation and fine-tuning | 1 | [removed] | 2025-06-14T18:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lbf59b/augmentoolkit_just_got_a_major_update_huge/ | mj3815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbf59b | false | null | t3_1lbf59b | /r/LocalLLaMA/comments/1lbf59b/augmentoolkit_just_got_a_major_update_huge/ | false | false | self | 1 | null |
Spam detection model/pipeline? | 3 | Hi! Does anyone know some oss model/pipeline for spam detection? As far as I know, there's a project called Detoxify but they are for toxicity (hate speech, etc) moderations, not really for spam detection | 2025-06-14T17:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lbek49/spam_detection_modelpipeline/ | bihungba1101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbek49 | false | null | t3_1lbek49 | /r/LocalLLaMA/comments/1lbek49/spam_detection_modelpipeline/ | false | false | self | 3 | null |
They removed this post? Weird! | 1 | [removed] | 2025-06-14T17:31:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lbe6vf/they_removed_this_post_weird/ | Good-Helicopter3441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbe6vf | false | null | t3_1lbe6vf | /r/LocalLLaMA/comments/1lbe6vf/they_removed_this_post_weird/ | false | false | self | 1 | null |
A challenge in time. No pressure. | 2 | Goal: Create a Visual Model that interprets and Generates 300FPS.<br>Resources Constraints: 4GB Ram, 2.2Ghz CPU, no GPU/TPU.<br>Potential: Film Industry, Security, Self Sufficient Agents, and finally light and highly scalable AGI agents on literally any tech from drones to spaceships.<br>I was checking out the State of th... | 2025-06-14T17:21:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lbdyyi/a_challenge_in_time_no_pressure/ | Good-Helicopter3441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbdyyi | false | null | t3_1lbdyyi | /r/LocalLLaMA/comments/1lbdyyi/a_challenge_in_time_no_pressure/ | false | false | self | 2 | null |
Running Llama 3 Locally: What’s Your Best Hardware Setup? | 1 | [removed] | 2025-06-14T16:52:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lbdabv/running_llama_3_locally_whats_your_best_hardware/ | amanverasia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbdabv | false | null | t3_1lbdabv | /r/LocalLLaMA/comments/1lbdabv/running_llama_3_locally_whats_your_best_hardware/ | false | false | self | 1 | null |
I've been working on my own local AI assistant with memory and emotional logic – wanted to share progress & get feedback | 8 | Inspired by ChatGPT, I started building my own local AI assistant called *VantaAI*. It's meant to run completely offline and simulates things like emotional memory, mood swings, and personal identity.<br>I’ve implemented things like:<br>* Long-term memory that evolves based on conversation context<br>* A mood graph that track... | 2025-06-14T16:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lbd9jc/ive_been_working_on_my_own_local_ai_assistant/ | PianoSeparate8989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbd9jc | false | null | t3_1lbd9jc | /r/LocalLLaMA/comments/1lbd9jc/ive_been_working_on_my_own_local_ai_assistant/ | false | false | self | 8 | null |
What LLM is everyone using in June 2025? | 146 | Curious what everyone’s running now.<br>What model(s) are in your regular rotation?<br>What hardware are you on?<br>How are you running it? (LM Studio, Ollama, llama.cpp, etc.)<br>What do you use it for?<br>Here’s mine:<br>Recently I've been using mostly Qwen3 (30B, 32B, and 235B)<br>Ryzen 7 5800X, 128GB RAM, RTX 3090<br>Ollama... | 2025-06-14T16:43:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lbd2jy/what_llm_is_everyone_using_in_june_2025/ | 1BlueSpork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbd2jy | false | null | t3_1lbd2jy | /r/LocalLLaMA/comments/1lbd2jy/what_llm_is_everyone_using_in_june_2025/ | false | false | self | 146 | null |
LLM finder | 1 | [removed] | 2025-06-14T16:22:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lbclhw/llm_finder/ | Stock-Writer-800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbclhw | false | null | t3_1lbclhw | /r/LocalLLaMA/comments/1lbclhw/llm_finder/ | false | false | self | 1 | null |
Zonos is not consistent. How to fix? | 1 | [removed] | 2025-06-14T16:20:54 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lbckeb | false | null | t3_1lbckeb | /r/LocalLLaMA/comments/1lbckeb/zonos_is_not_consistent_how_to_fix/ | false | false | default | 1 | null | ||
How much VRAM do you have and what's your daily-driver model? | 91 | Curious what everyone is using day to day, locally, and what hardware they're using.<br>If you're using a quantized version of a model please say so! | 2025-06-14T16:14:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lbcfjz/how_much_vram_do_you_have_and_whats_your/ | EmPips | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbcfjz | false | null | t3_1lbcfjz | /r/LocalLLaMA/comments/1lbcfjz/how_much_vram_do_you_have_and_whats_your/ | false | false | self | 91 | null |
Help - Llamacpp-server & rerankin LLM | 1 | Can anybody suggest me a reranker that works with llamacpp-server and how to use it?<br>I tried with rank\_zephyr\_7b\_v1 and Qwen3-Reranker-8B, but could not make any of them them work...<br>\`\`\`<br>llama-server --model "H:\\MaziyarPanahi\\rank\_zephyr\_7b\_v1\_full-GGUF\\rank\_zephyr\_7b\_v1\_full.Q8\_0.gguf" -... | 2025-06-14T16:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lbc3du/help_llamacppserver_rerankin_llm/ | dodo13333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbc3du | false | null | t3_1lbc3du | /r/LocalLLaMA/comments/1lbc3du/help_llamacppserver_rerankin_llm/ | false | false | self | 1 | null |
Local Memory Chat UI - Open Source + Vector Memory | 14 | Hey everyone,<br>I created this project focused on CPU. That's why it runs on CPU by default. My aim was to be able to use the model locally on an old computer with a system that "doesn't forget".<br>Over the past few weeks, I’ve been building a lightweight yet powerful **LLM chat interface** using **llama-cpp-python** — b... | 2025-06-14T15:52:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lbbwwm/local_memory_chat_ui_open_source_vector_memory/ | Dismal-Cupcake-3641 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbbwwm | false | null | t3_1lbbwwm | /r/LocalLLaMA/comments/1lbbwwm/local_memory_chat_ui_open_source_vector_memory/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM.png?width=108&crop=smart&auto=webp&s=04369223a39f9200229fb4927b0bd5e1b79a7341', 'width': 108}, {'height': 108, 'url': 'h... |
LLM Showdown: A Bigger Model with Harsh Quantization vs. a Smaller Model with Gentle Quantization? | 1 | [removed] | 2025-06-14T15:49:58 | RIP26770 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lbbuwj | false | null | t3_1lbbuwj | /r/LocalLLaMA/comments/1lbbuwj/llm_showdown_a_bigger_model_with_harsh/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'hcpge2gazw6f1', 'resolutions': [{'height': 159, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?width=108&crop=smart&auto=webp&s=b9986b16b97b0c4a4560d5a7d5e3f8b13632d585', 'width': 108}, {'height': 318, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?width=216&crop=smart&auto=... | |
Why local LLM? | 130 | I'm about to install Ollama and try a local LLM but I'm wondering what's possible and are the benefits apart from privacy and cost saving?<br>My current memberships:<br>\- Claude AI<br>\- Cursor AI | 2025-06-14T15:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lbbafh/why_local_llm/ | Beginning_Many324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbbafh | false | null | t3_1lbbafh | /r/LocalLLaMA/comments/1lbbafh/why_local_llm/ | false | false | self | 130 | null |
Llama.cpp | 1 | [removed] | 2025-06-14T15:09:36 | Puzzled-Yoghurt564 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lbaxea | false | null | t3_1lbaxea | /r/LocalLLaMA/comments/1lbaxea/llamacpp/ | false | false | 1 | {'enabled': True, 'images': [{'id': '6RWfqrooZlDuxHikofJ4uRGP67NZxZ0xa0LfJijuXB0', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/22ymidb3sw6f1.jpeg?width=108&crop=smart&auto=webp&s=6a55191676e5429be5f93b0ac45cd86f8193d983', 'width': 108}, {'height': 262, 'url': 'https://preview.redd.it/22ymidb3sw6f1.j... | ||
Is there any model ( local or in-app ) that can detect defects on text ? | 0 | The mission is to feed an image and detect if the text in the image is malformed or it's out of the frame of the image ( cut off ). Is there any model, local or commercial that can do this effectively yet ? | 2025-06-14T14:46:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lbaedh/is_there_any_model_local_or_inapp_that_can_detect/ | skarrrrrrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbaedh | false | null | t3_1lbaedh | /r/LocalLLaMA/comments/1lbaedh/is_there_any_model_local_or_inapp_that_can_detect/ | false | false | self | 0 | null |
[Discussion] Thinking Without Words: Continuous latent reasoning for local LLaMA inference – feedback? | 7 | [Discussion](https://www.reddit.com/r/LocalLLaMA/?f=flair_name%3A%22Discussion%22)<br>Hi everyone,<br>I just published a new post, **“Thinking Without Words”**, where I survey the evolution of latent chain-of-thought reasoning—from STaR and Implicit CoT all the way to COCONUT and HCoT—and propose a novel **GRAIL-Transforme... | 2025-06-14T14:38:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lba8f6/discussion_thinking_without_words_continuous/ | BeowulfBR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lba8f6 | false | null | t3_1lba8f6 | /r/LocalLLaMA/comments/1lba8f6/discussion_thinking_without_words_continuous/ | false | false | self | 7 | null |
Any Model can Reason: ITRS - Iterative Transparent Reasoning Systems | 1 | [removed] | 2025-06-14T14:35:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lba5u5/any_model_can_reason_itrs_iterative_transparent/ | thomheinrich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lba5u5 | false | null | t3_1lba5u5 | /r/LocalLLaMA/comments/1lba5u5/any_model_can_reason_itrs_iterative_transparent/ | false | false | self | 1 | null |
GAIA: New Gemma3 4B for Brazilian Portuguese / Um Gemma3 4B para Português do Brasil! | 37 | **\[EN\]**<br>Introducing **GAIA (Gemma-3-Gaia-PT-BR-4b-it)**, our new open language model, developed and optimized for **Brazilian Portuguese!**<br>**What does GAIA offer?**<br>* **PT-BR Focus:** Continuously pre-trained on 13 BILLION high-quality Brazilian Portuguese tokens.<br>* **Base Model:** google/gemma-3-4b-pt (Gemma 3 ... | 2025-06-14T14:27:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lb9zhl/gaia_new_gemma3_4b_for_brazilian_portuguese_um/ | ffgnetto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb9zhl | false | null | t3_1lb9zhl | /r/LocalLLaMA/comments/1lb9zhl/gaia_new_gemma3_4b_for_brazilian_portuguese_um/ | false | false | self | 37 | null |
[Discussion] Thinking Without Words: Continuous latent reasoning for local LLaMA inference – feedback? | 1 | [removed] | 2025-06-14T14:23:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lb9w0e/discussion_thinking_without_words_continuous/ | BeowulfBR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb9w0e | false | null | t3_1lb9w0e | /r/LocalLLaMA/comments/1lb9w0e/discussion_thinking_without_words_continuous/ | false | false | self | 1 | null |
Trying to install llama 4 scout & maverick locally; keep getting errors | 0 | I’ve gotten as far as installing python pip & it spits out some error about unable to install build dependencies . I’ve already filled out the form, selected the models and accepted the terms of use. I went to the email that is supposed to give you a link to GitHub that is supposed to authorize your download. Tried it ... | 2025-06-14T14:15:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lb9ppy/trying_to_install_llama_4_scout_maverick_locally/ | Zmeiler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb9ppy | false | null | t3_1lb9ppy | /r/LocalLLaMA/comments/1lb9ppy/trying_to_install_llama_4_scout_maverick_locally/ | false | false | self | 0 | null |
Help | 1 | [removed] | 2025-06-14T14:09:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lb9l7y/help/ | Competitive-Sky-9818 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb9l7y | false | null | t3_1lb9l7y | /r/LocalLLaMA/comments/1lb9l7y/help/ | false | false | self | 1 | null |
Is it normal for RAG to take this long to load the first time? | 13 | I'm using https://github.com/AllAboutAI-YT/easy-local-rag with the default dolphin-llama3 model, and a 500mb vault.txt file. It's been loading for an hour and a half with my GPU at full utilization but it's still going. Is it normal that it would take this long, and more importantly, is it gonna take this long every ti... | 2025-06-14T14:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lb9jqc/is_it_normal_for_rag_to_take_this_long_to_load/ | just_a_guy1008 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb9jqc | false | null | t3_1lb9jqc | /r/LocalLLaMA/comments/1lb9jqc/is_it_normal_for_rag_to_take_this_long_to_load/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc.png?width=108&crop=smart&auto=webp&s=79410ae21a1f2e00f6cc762de24cf225285a8efc', 'width': 108}, {'height': 108, 'url': 'h... |
Local LLM Memorization – A fully local memory system for long-term recall and visualization | 1 | [removed] | 2025-06-14T13:45:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lb92g3/local_llm_memorization_a_fully_local_memory/ | Vicouille6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb92g3 | false | null | t3_1lb92g3 | /r/LocalLLaMA/comments/1lb92g3/local_llm_memorization_a_fully_local_memory/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=108&crop=smart&auto=webp&s=987378a5dd1be1ed5f17eface636fa84d7344324', 'width': 108}, {'height': 108, 'url': 'h... |
Zonos TTS is not stable. How to fix? | 1 | [removed] | 2025-06-14T13:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lb92et/zonos_tts_is_not_stable_how_to_fix/ | TheRealistDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb92et | false | null | t3_1lb92et | /r/LocalLLaMA/comments/1lb92et/zonos_tts_is_not_stable_how_to_fix/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kUgDF3b8nhmaAtFQ-IwZpeKaYz08TA8prMsUn44V-9Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/93hsA-zVfOTHoWTGP6jiJDMZJGNMEN7o_g0tDG05gDw.jpg?width=108&crop=smart&auto=webp&s=2f1c029c2ed98c8c192ce8a6f1eea9c68c64b9d7', 'width': 108}, {'height': 162, 'url': 'h... |
Local LLM Memorization – A fully local memory system for long-term recall and visualization | 1 | [removed] | 2025-06-14T13:43:04 | Vicouille6 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lb90jw | false | null | t3_1lb90jw | /r/LocalLLaMA/comments/1lb90jw/local_llm_memorization_a_fully_local_memory/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'CQxJDnALw36Ew4zqQt-qXzzQB-kyNgiXNCM7Fx0w1UA', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png?width=108&crop=smart&auto=webp&s=2d7cb2c7b7b5b3a8cccc22bb63be846d3df2dcd9', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png... | ||
Local LLM Memorization – A fully local memory system for long-term recall and visualization | 1 | [removed] | 2025-06-14T13:41:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lb8z80/local_llm_memorization_a_fully_local_memory/ | Vicouille6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb8z80 | false | null | t3_1lb8z80 | /r/LocalLLaMA/comments/1lb8z80/local_llm_memorization_a_fully_local_memory/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=108&crop=smart&auto=webp&s=987378a5dd1be1ed5f17eface636fa84d7344324', 'width': 108}, {'height': 108, 'url': 'h... |
Learning material on how to use LLM at work for senior engineer | 1 | [removed] | 2025-06-14T12:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lb7nvk/learning_material_on_how_to_use_llm_at_work_for/ | gyzerok | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb7nvk | false | null | t3_1lb7nvk | /r/LocalLLaMA/comments/1lb7nvk/learning_material_on_how_to_use_llm_at_work_for/ | false | false | self | 1 | null |
Can you get your local LLM to run the code it suggests? | 0 | A feature of Gemini 2.5 on aistudio that I love is that you can get it to run the code it suggests. It will then automatically correct errors it finds or fix the code if the output doesn't match what it was expecting .This is really powerful and useful feature.
Is it possible to do the same with a local model? | 2025-06-14T12:24:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lb7h7z/can_you_get_your_local_llm_to_run_the_code_it/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb7h7z | false | null | t3_1lb7h7z | /r/LocalLLaMA/comments/1lb7h7z/can_you_get_your_local_llm_to_run_the_code_it/ | false | false | self | 0 | null |
RTX 6000 Ada or a 4090? | 0 | Hello,
I'm working on a project where I'm looking at around 150-200 tps in a batch of 4 of such processes running in parallel, text-based, no images or anything.
Right now I don't have any GPUs. I can get a RTX 6000 Ada for around $1850 and a 4090 for around the same price (maybe a couple hudreds $ higher).
I'm also... | 2025-06-14T12:13:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lb79sg/rtx_6000_ada_or_a_4090/ | This_Woodpecker_9163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb79sg | false | null | t3_1lb79sg | /r/LocalLLaMA/comments/1lb79sg/rtx_6000_ada_or_a_4090/ | false | false | self | 0 | null |
Gemma function calling | 1 | [removed] | 2025-06-14T11:44:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lb6qoc/gemma_function_calling/ | Life_Bag_7583 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb6qoc | false | null | t3_1lb6qoc | /r/LocalLLaMA/comments/1lb6qoc/gemma_function_calling/ | false | false | self | 1 | null |
Gemma function calling | 1 | [removed] | 2025-06-14T11:42:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lb6phn/gemma_function_calling/ | Life_Bag_7583 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb6phn | false | null | t3_1lb6phn | /r/LocalLLaMA/comments/1lb6phn/gemma_function_calling/ | false | false | self | 1 | null |
Gemma function calling problems | 1 | [removed] | 2025-06-14T11:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lb6mbw/gemma_function_calling_problems/ | Life_Bag_7583 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb6mbw | false | null | t3_1lb6mbw | /r/LocalLLaMA/comments/1lb6mbw/gemma_function_calling_problems/ | false | false | self | 1 | null |
LLM Leaderboard by VRAM Size | 1 | [removed] | 2025-06-14T11:28:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lb6hjg/llm_leaderboard_by_vram_size/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb6hjg | false | null | t3_1lb6hjg | /r/LocalLLaMA/comments/1lb6hjg/llm_leaderboard_by_vram_size/ | false | false | self | 1 | null |
Thoughts on hardware price optimisarion for LLMs? | 86 | Graph related (gpt-4o with with web search) | 2025-06-14T10:43:36 | GreenTreeAndBlueSky | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lb5rm2 | false | null | t3_1lb5rm2 | /r/LocalLLaMA/comments/1lb5rm2/thoughts_on_hardware_price_optimisarion_for_llms/ | false | false | default | 86 | {'enabled': True, 'images': [{'id': 'iauc7homgv6f1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?width=108&crop=smart&auto=webp&s=6c1c4341c98a71be6c2d4b27714aca4b3e8a613c', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?width=216&crop=smart&auto=web... | |
How to train GPT-SoVITS in a new language? | 1 | [removed] | 2025-06-14T10:12:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lb5b54/how_to_train_gptsovits_in_a_new_language/ | Inside_Letterhead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb5b54 | false | null | t3_1lb5b54 | /r/LocalLLaMA/comments/1lb5b54/how_to_train_gptsovits_in_a_new_language/ | false | false | self | 1 | null |
Frustrated trying to run MiniCPM-o 2.6 on RunPod | 2 | Hi,
I'm trying to use MiniCPM-o 2.6 for a project that involves using the LLM to categorize frames from a video into certain categories.
Naturally, the first step is to get MiniCPM running at all.
This is where I am facing many problems
At first, I tried to get it working on my laptop which has an RTX 3050Ti 4GB GPU, ... | 2025-06-14T09:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lb4yb4/frustrated_trying_to_run_minicpmo_26_on_runpod/ | i5_8300h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb4yb4 | false | null | t3_1lb4yb4 | /r/LocalLLaMA/comments/1lb4yb4/frustrated_trying_to_run_minicpmo_26_on_runpod/ | false | false | self | 2 | null |
Rookie question | 0 | Why is that whenever you generate an image with correct lettering/wording it always spits out some random garbled mess.. why is this? Just curious | 2025-06-14T08:59:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lb48oi/rookie_question/ | Zmeiler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb48oi | false | null | t3_1lb48oi | /r/LocalLLaMA/comments/1lb48oi/rookie_question/ | false | false | self | 0 | null |
We need a distilled qwen 3 235b a22b model on DeepSeek R1 | 1 | [removed] | 2025-06-14T08:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lb46cq/we_need_a_distilled_qwen_3_235b_a22b_model_on/ | EndLineTech03 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb46cq | false | null | t3_1lb46cq | /r/LocalLLaMA/comments/1lb46cq/we_need_a_distilled_qwen_3_235b_a22b_model_on/ | false | false | self | 1 | null |
Can anyone give me a local llm setup which analyses and gives feedback to improve my speaking ability | 2 | I am always afraid of public speaking and freeze up in my interviews. I ramble and can't structure my thoughts and go off on some random tangents whenever i speak. I believe practice makes me better and I was thinking I can use locallama to help me. Something along the lines of recording and then I can use a tts model ... | 2025-06-14T08:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lb3wxq/can_anyone_give_me_a_local_llm_setup_which/ | timedacorn369 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb3wxq | false | null | t3_1lb3wxq | /r/LocalLLaMA/comments/1lb3wxq/can_anyone_give_me_a_local_llm_setup_which/ | false | false | self | 2 | null |
Are there any tools to create structured data from webpages? | 14 | I often find myself in a situation where I need to pass a webpage to an LLM, mostly just blog posts and forum posts. Is there some tool that can parse the page and create it in a structured format for an LLM to consume? | 2025-06-14T07:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lb2r2u/are_there_any_tools_to_create_structured_data/ | birdsintheskies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb2r2u | false | null | t3_1lb2r2u | /r/LocalLLaMA/comments/1lb2r2u/are_there_any_tools_to_create_structured_data/ | false | false | self | 14 | null |
How do you provide files? | 6 | Out of curiosity I was wondering how people tended to provide files to their AI when coding. I can’t tell if I’ve completely over complicated how I should be giving the models context or if I actually created a solid solution.
If anyone has any input on how they best handle sending files via API (not using Claude or ... | 2025-06-14T06:47:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lb29r8/how_do_you_provide_files/ | droopy227 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb29r8 | false | null | t3_1lb29r8 | /r/LocalLLaMA/comments/1lb29r8/how_do_you_provide_files/ | false | false | self | 6 | null |
Open Source Unsiloed AI Chunker (EF2024) | 1 | Hey , Unsiloed CTO here!
Unsiloed AI (EF 2024) is backed by Transpose Platform & EF and is currently being used by teams at Fortune 100 companies and multiple Series E+ startups for ingesting multimodal data in the form of PDFs, Excel, PPTs, etc. And, we have now finally open sourced some of the capabilities. Do give ... | 2025-06-14T06:40:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lb25f9/open_source_unsiloed_ai_chunker_ef2024/ | Grand_Coconut_9739 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb25f9 | false | null | t3_1lb25f9 | /r/LocalLLaMA/comments/1lb25f9/open_source_unsiloed_ai_chunker_ef2024/ | false | false | self | 1 | null |
Open Source Unsiloed AI Chunker (EF2024) | 48 | Hey , Unsiloed CTO here!
Unsiloed AI (EF 2024) is backed by Transpose Platform & EF and is currently being used by teams at Fortune 100 companies and multiple Series E+ startups for ingesting multimodal data in the form of PDFs, Excel, PPTs, etc. And, we have now finally open sourced some of the capabilities. Do give ... | 2025-06-14T06:21:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lb1v8h/open_source_unsiloed_ai_chunker_ef2024/ | Initial-Western-4438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb1v8h | false | null | t3_1lb1v8h | /r/LocalLLaMA/comments/1lb1v8h/open_source_unsiloed_ai_chunker_ef2024/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': 'uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU.png?width=108&crop=smart&auto=webp&s=e26462b5bb0cd8c94cd4a7ea79b999ab38e18ce7', 'width': 108}, {'height': 113, 'url': 'h... |
Guidance Needed: Qwen 3 Embeddings + Reranker Workflow | 1 | [removed] | 2025-06-14T05:56:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lb1gpa/guidance_needed_qwen_3_embeddings_reranker/ | Pale-Box-3470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb1gpa | false | null | t3_1lb1gpa | /r/LocalLLaMA/comments/1lb1gpa/guidance_needed_qwen_3_embeddings_reranker/ | false | false | self | 1 | null |
Guidance Needed: Qwen 3 Embeddings + Reranker Workflow | 1 | [removed] | 2025-06-14T05:55:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lb1g3j/guidance_needed_qwen_3_embeddings_reranker/ | Pale-Box-3470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb1g3j | false | null | t3_1lb1g3j | /r/LocalLLaMA/comments/1lb1g3j/guidance_needed_qwen_3_embeddings_reranker/ | false | false | self | 1 | null |
What are your go-to small (Can run on 8gb vram) models for Companion/Roleplay settings? | 1 | [removed] | 2025-06-14T05:38:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lb16ti/what_are_your_goto_small_can_run_on_8gb_vram/ | ItMeansEscape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb16ti | false | null | t3_1lb16ti | /r/LocalLLaMA/comments/1lb16ti/what_are_your_goto_small_can_run_on_8gb_vram/ | false | false | self | 1 | null |
Huggingface model to Roast people | 0 | Hi, so I decided to make something like an Anime/Movie Wrapped and would like to explore option based on roasting them on genre. But I'm having a problem on giving the result to LLM to roast them based on the results and percentage. If someone know any model like this. Do let me know. I'm running this project on Goog... | 2025-06-14T05:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lb0m7e/huggingface_model_to_roast_people/ | FastCommission2913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lb0m7e | false | null | t3_1lb0m7e | /r/LocalLLaMA/comments/1lb0m7e/huggingface_model_to_roast_people/ | false | false | self | 0 | null |
Need help setting up ollama and open webui on external hard drive | 1 | [removed] | 2025-06-14T03:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1laz4fw/need_help_setting_up_ollama_and_open_webui_on/ | inthehazardsuit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laz4fw | false | null | t3_1laz4fw | /r/LocalLLaMA/comments/1laz4fw/need_help_setting_up_ollama_and_open_webui_on/ | false | false | self | 1 | null |
Watch out for fakes, yesterday it was on LocalLlama | 1 | 2025-06-14T03:37:49 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1laz3vj | false | null | t3_1laz3vj | /r/LocalLLaMA/comments/1laz3vj/watch_out_for_fakes_yesterday_it_was_on_localllama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'BV3m6JwpDzD7O0VJ8ZULiK7Q86Bdvkw_Z_NQwksluD0', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png?width=108&crop=smart&auto=webp&s=3802f2a0209fb8b94a515fa8b8d413f09b9c5799', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png... | |||
How to Break the Machine by Talking to It | 1 | [removed] | 2025-06-14T03:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/1laz0m4/how_to_break_the_machine_by_talking_to_it/ | Frequent_Tea8607 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laz0m4 | false | null | t3_1laz0m4 | /r/LocalLLaMA/comments/1laz0m4/how_to_break_the_machine_by_talking_to_it/ | false | false | self | 1 | null |
Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training | 1 | [removed] | 2025-06-14T03:31:05 | https://www.reddit.com/r/LocalLLaMA/comments/1layzgy/nonneutral_by_design_why_generative_models_cannot/ | Frequent_Tea8607 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1layzgy | false | null | t3_1layzgy | /r/LocalLLaMA/comments/1layzgy/nonneutral_by_design_why_generative_models_cannot/ | false | false | self | 1 | null |
Why is my speed like this? | 1 | PC Specs: Ryzen 5 4600g 6c/12t (512vram I guess) - 12Gb 4+8 3200mhz
Android Specs: Mi 9 6gb Snapdragon 855
I'm really curious about why my pc is slower than my phone in KoboldCpp with Gemmasutra 4B Q6 KMS (best 4B from what i've tried) when loading chat context. The generation task of a 512 tokens output is around 10... | 2025-06-14T02:27:08 | https://www.reddit.com/r/LocalLLaMA/comments/1laxt4q/why_is_my_speed_like_this/ | WEREWOLF_BX13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laxt4q | false | null | t3_1laxt4q | /r/LocalLLaMA/comments/1laxt4q/why_is_my_speed_like_this/ | false | false | self | 1 | null |
Dk? | 1 | [removed] | 2025-06-14T02:17:16 | https://www.reddit.com/r/LocalLLaMA/comments/1laxmj5/dk/ | Infamous-Echo-1170 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1laxmj5 | false | null | t3_1laxmj5 | /r/LocalLLaMA/comments/1laxmj5/dk/ | true | false | spoiler | 1 | null |
RTX 5090 Training Issues - PyTorch Doesn't Support Blackwell Architecture Yet? | 19 | **Hi,**
I'm trying to fine-tune Mistral-7B on a new RTX 5090 but hitting a fundamental compatibility wall. The GPU uses Blackwell architecture with CUDA compute capability "sm\_120", but PyTorch stable only supports up to "sm\_90". This means literally no PyTorch operations work - even basic tensor creation fails with... | 2025-06-14T00:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/1law1go/rtx_5090_training_issues_pytorch_doesnt_support/ | AstroAlto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1law1go | false | null | t3_1law1go | /r/LocalLLaMA/comments/1law1go/rtx_5090_training_issues_pytorch_doesnt_support/ | false | false | self | 19 | null |
Trouble fine-tuning on RTX 5090 — CUDA and compatibility issues with Mistral stack | 1 | Here’s a solid **Reddit post title and body** you can use, especially for r/LocalLLaMA, r/MLQuestions, or r/MachineLearning:
# 🧵 Title:
>
# 🧾 Post body:
Hey all — I'm running into some frustrating compatibility problems trying to fine-tune an open-weight LLM (Mistral-based) on a new RTX 5090. I’ve got the basics ... | 2025-06-14T00:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lavvsy/trouble_finetuning_on_rtx_5090_cuda_and/ | AstroAlto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lavvsy | false | null | t3_1lavvsy | /r/LocalLLaMA/comments/1lavvsy/trouble_finetuning_on_rtx_5090_cuda_and/ | false | false | self | 1 | null |
ChatGPT's system prompt (4o mini) | 1 | [removed] | 2025-06-13T23:54:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lauuv1/chatgpts_system_prompt_4o_mini/ | TimesLast_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lauuv1 | false | null | t3_1lauuv1 | /r/LocalLLaMA/comments/1lauuv1/chatgpts_system_prompt_4o_mini/ | false | false | self | 1 | null |
(Theoretically) fixing the LLM Latency Barrier with SF-Diff (Scaffold-and-Fill Diffusion) | 20 | Current large language models are bottlenecked by slow, sequential generation. My research proposes Scaffold-and-Fill Diffusion (SF-Diff), a novel hybrid architecture designed to theoretically overcome this. We deconstruct language into a parallel-generated semantic "scaffold" (keywords via a diffusion model) and a lig... | 2025-06-13T22:52:03 | https://www.reddit.com/r/LocalLLaMA/comments/1latjnk/theoretically_fixing_the_llm_latency_barrier_with/ | TimesLast_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1latjnk | false | null | t3_1latjnk | /r/LocalLLaMA/comments/1latjnk/theoretically_fixing_the_llm_latency_barrier_with/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': '_NTDDp4MaIUNM2_PPfeKf7B0iOIy0XQegcTWD-1TxW4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_NTDDp4MaIUNM2_PPfeKf7B0iOIy0XQegcTWD-1TxW4.png?width=108&crop=smart&auto=webp&s=7e107618abb7e9bcd8f0c6d70f79c250162c97d5', 'width': 108}, {'height': 116, 'url': 'h... |
(Theoretically) fixing the LLM Latency Barrier with SF-Diff (Scaffold-and-Fill Diffusion) | 1 | [removed] | 2025-06-13T22:31:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lat3f3/theoretically_fixing_the_llm_latency_barrier_with/ | TimesLast_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lat3f3 | false | null | t3_1lat3f3 | /r/LocalLLaMA/comments/1lat3f3/theoretically_fixing_the_llm_latency_barrier_with/ | false | false | self | 1 | null |
Is there any all-in-one app like LM Studio, but with the option of hosting a Web UI server? | 25 | Everything's in the title.
Essentially i do like LM's Studio ease of use as it silently handles the backend server as well as the desktop app, but i'd like to have it also host a web ui server that i could use on my local network from other devices.
Nothing too fancy really, that will only be for home use and what n... | 2025-06-13T21:43:14 | https://www.reddit.com/r/LocalLLaMA/comments/1larzxz/is_there_any_allinone_app_like_lm_studio_but_with/ | HRudy94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1larzxz | false | null | t3_1larzxz | /r/LocalLLaMA/comments/1larzxz/is_there_any_allinone_app_like_lm_studio_but_with/ | false | false | self | 25 | null |
Fathom-R1 model is SOTA open source math model at 14b parameters | 1 | [removed] | 2025-06-13T21:39:46 | https://www.reddit.com/r/LocalLLaMA/comments/1larx3m/fathomr1_model_is_sota_open_source_math_model_at/ | Ortho-BenzoPhenone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1larx3m | false | null | t3_1larx3m | /r/LocalLLaMA/comments/1larx3m/fathomr1_model_is_sota_open_source_math_model_at/ | false | false | 1 | null | |
3B Model Passes Strawberry Test! With Nearly 0 Alignment Filter | 1 | [removed] | 2025-06-13T21:27:32 | https://www.reddit.com/r/LocalLLaMA/comments/1larmto/3b_model_passes_strawberry_test_with_nearly_0/ | bralynn2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1larmto | false | null | t3_1larmto | /r/LocalLLaMA/comments/1larmto/3b_model_passes_strawberry_test_with_nearly_0/ | false | false | 1 | null | |
Purely Uncensored 3B Model That Passes Strawberry Test! | 1 | [removed] | 2025-06-13T21:23:28 | https://www.reddit.com/r/LocalLLaMA/comments/1larjj6/purely_uncensored_3b_model_that_passes_strawberry/ | bralynn2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1larjj6 | false | null | t3_1larjj6 | /r/LocalLLaMA/comments/1larjj6/purely_uncensored_3b_model_that_passes_strawberry/ | false | false | 1 | null | |
[Project] I built a web-based Playground to run Google's Gemini Nano 100% on-device. No data leaves your machine. | 1 | [removed] | 2025-06-13T21:22:19 | https://www.reddit.com/r/LocalLLaMA/comments/1larilo/project_i_built_a_webbased_playground_to_run/ | shrewdfox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1larilo | false | null | t3_1larilo | /r/LocalLLaMA/comments/1larilo/project_i_built_a_webbased_playground_to_run/ | false | false | 1 | null | |
Extremely uncensored! Model that passes strawberry test | 1 | [removed] | 2025-06-13T21:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lari2y/extremely_uncensored_model_that_passes_strawberry/ | bralynn2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lari2y | false | null | t3_1lari2y | /r/LocalLLaMA/comments/1lari2y/extremely_uncensored_model_that_passes_strawberry/ | false | false | self | 1 | null |
Extremely Uncensored 3B Model that passes the strawberry test! | 1 | [removed] | 2025-06-13T21:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/1larfpp/extremely_uncensored_3b_model_that_passes_the/ | bralynn2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1larfpp | false | null | t3_1larfpp | /r/LocalLLaMA/comments/1larfpp/extremely_uncensored_3b_model_that_passes_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': '1dXIfrqrCmWockG0RwkS3haqncSXuo8vO7opuv0hqLc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1dXIfrqrCmWockG0RwkS3haqncSXuo8vO7opuv0hqLc.png?width=108&crop=smart&auto=webp&s=ed6248116d5c2b7e3bad45d6382ea6ef885212a5', 'width': 108}, {'height': 116, 'url': 'h... | |
[Project] I built a web-based Playground to run Google's Gemini Nano 100% on-device. No data leaves your machine. | 1 | [removed] | 2025-06-13T21:13:45 | https://www.reddit.com/r/LocalLLaMA/comments/1larbdo/project_i_built_a_webbased_playground_to_run/ | Historical_Muscle576 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1larbdo | false | null | t3_1larbdo | /r/LocalLLaMA/comments/1larbdo/project_i_built_a_webbased_playground_to_run/ | false | false | 1 | null |