| title string (1–300) | score int64 (0–8.54k) | selftext string (0–41.5k) | created timestamp[ns] (2023-04-01 04:30:41 – 2026-03-04 02:14:14, nullable) | url string (0–878) | author string (3–20) | domain string (0–82) | edited timestamp[ns] (1970-01-01 00:00:00 – 2026-02-19 14:51:53) | gilded int64 (0–2) | gildings string (7 classes) | id string (7) | locked bool (2 classes) | media string (646–1.8k, nullable) | name string (10) | permalink string (33–82) | spoiler bool (2 classes) | stickied bool (2 classes) | thumbnail string (4–213, nullable) | ups int64 (0–8.54k) | preview string (301–5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I own a few Quadros, can I build an AI with these? | 0 | I’m looking to set up a homelab. I’ve got 2 NVIDIA Quadro RTX 6000s lying around that I was given a few years back. I don’t have any server equipment yet, but I’m gonna buy a rack, PSU, server motherboard, processor, RAM, and storage enclosures to set up my first homelab.
I want to build an AI to help me with my job ... | 2025-07-23T00:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/1m6v9yq/i_own_a_few_quadros_can_i_build_an_ai_with_these/ | NetTechMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6v9yq | false | null | t3_1m6v9yq | /r/LocalLLaMA/comments/1m6v9yq/i_own_a_few_quadros_can_i_build_an_ai_with_these/ | false | false | self | 0 | null |
There’s an AI-first workstation available with 64 GB VRAM and 20 TB storage for like $5k | 1 | **I've spent more than ten thousand dollars on inference and compute**, because trying to do serious training or inference locally has been restrictive to the point of being impossible for most of us, whether we're individuals or a startup. **With that in mind, I worked with the creator of Alignment Lab AI** and some others to make ... | 2025-07-23T00:32:20 | Heralax_Tekran | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6v1p9 | false | null | t3_1m6v1p9 | /r/LocalLLaMA/comments/1m6v1p9/theres_an_aifirst_workstation_available_with_64gb/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 't21kqyi4rief1', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/t21kqyi4rief1.jpeg?width=108&crop=smart&auto=webp&s=be535e5bfb14f64ea64ca1ebe8e4152e3e807a63', 'width': 108}, {'height': 285, 'url': 'https://preview.redd.it/t21kqyi4rief1.jpeg?width=216&crop=smart&auto=... | |
I stopped typing. Now I just use a hotkey. I built Agent-CLI to make it possible. | 1 | Hi folks!
Thanks to this community, I pulled the trigger about a month ago to get a machine with a 3090. It's been a crazy month for me, and I've been coding local AI tools non-stop.
I'm excited to share my favorite creation so far: **[agent-cli](https://github.com/basnijholt/agent-cli)**, a suite of tools that lets ... | 2025-07-23T00:17:17 | https://www.youtube.com/watch?v=7sBTCgttH48 | basnijholt | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1m6uq8q | false | {'oembed': {'author_name': 'johnbaltis', 'author_url': 'https://www.youtube.com/@BasNij', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/7sBTCgttH48?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pic... | t3_1m6uq8q | /r/LocalLLaMA/comments/1m6uq8q/i_stopped_typing_now_i_just_use_a_hotkey_i_built/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'GK5Ptopic7Z-Rzuhw3GKWekcaGdaAnNrb92cLbhCfEg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/GK5Ptopic7Z-Rzuhw3GKWekcaGdaAnNrb92cLbhCfEg.jpeg?width=108&crop=smart&auto=webp&s=b60a9338c5518f795a38fedcae4b0e1233d18742', 'width': 108}, {'height': 162, 'url': '... |
Has anyone tried Hierarchical Reasoning Models yet? | 15 | Has anyone run the HRM architecture locally? It seems like a huge deal, but it stinks of complete BS. Has anyone tested it? | 2025-07-23T00:04:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m6ufm4/has_anyone_tried_hierarchical_reasoning_models_yet/ | jackboulder33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6ufm4 | false | null | t3_1m6ufm4 | /r/LocalLLaMA/comments/1m6ufm4/has_anyone_tried_hierarchical_reasoning_models_yet/ | false | false | self | 15 | null |
Has anyone here worked with LLMs that can read images? Were you able to deploy it on a VPS? | 0 | I’m currently exploring multimodal LLMs — specifically models that can handle image input (like OCR, screenshot analysis, or general image understanding). I’m curious if anyone here has successfully deployed one of these models on a VPS. | 2025-07-23T00:00:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m6ucc0/has_anyone_here_worked_with_llms_that_can_read/ | Turbulent-Cow4848 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6ucc0 | false | null | t3_1m6ucc0 | /r/LocalLLaMA/comments/1m6ucc0/has_anyone_here_worked_with_llms_that_can_read/ | false | false | self | 0 | null |
Qwen3-Coder is available on OpenRouter | 33 | 2025-07-22T23:49:17 | https://openrouter.ai/qwen/qwen3-coder | arcanemachined | openrouter.ai | 1970-01-01T00:00:00 | 0 | {} | 1m6u3kd | false | null | t3_1m6u3kd | /r/LocalLLaMA/comments/1m6u3kd/qwen3coder_is_available_on_openrouter/ | false | false | default | 33 | {'enabled': False, 'images': [{'id': 'UFPrt8vWgklaa23dNS9FyFO_082o3-MaYxZ69OdYc0E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UFPrt8vWgklaa23dNS9FyFO_082o3-MaYxZ69OdYc0E.png?width=108&crop=smart&auto=webp&s=81bbe5b26b024567de7a02963aa1047661c30d21', 'width': 108}, {'height': 113, 'url': 'h... | |
Unsloth quants already starting to roll out for Qwen3-Coder | 36 | 2025-07-22T23:45:30 | https://huggingface.co/collections/unsloth/qwen3-coder-687ff47700270447e02c987d | arcanemachined | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m6u0gt | false | null | t3_1m6u0gt | /r/LocalLLaMA/comments/1m6u0gt/unsloth_quants_already_starting_to_roll_out_for/ | false | false | default | 36 | {'enabled': False, 'images': [{'id': 'y-6cmX2aP_dLHZiI1kc3J2b9iL_M54vYN5A7yLluKyU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/y-6cmX2aP_dLHZiI1kc3J2b9iL_M54vYN5A7yLluKyU.png?width=108&crop=smart&auto=webp&s=4ca2157367c76507911bd02cc27f2bd77fdeb58f', 'width': 108}, {'height': 116, 'url': 'h... | |
M4 Pro Owners: I Want Your Biased Hot-Takes – DeepSeek-Coder V3-Lite 33B vs Qwen3-32B-Instruct-MoE on a 48 GB MacBook Pro | 0 | I’m running a 16-inch MacBook Pro with the new M4 Pro chip (48 GB unified RAM, 512 GB SSD). I’ve narrowed my local LLM experiments down to two heavy hitters:
DeepSeek-Coder V3-Lite 33B as a coding powerhouse
Qwen3-32B-Instruct-MoE as an all-purpose coding and reasoning model
I want your opinion on how these two ... | 2025-07-22T23:19:53 | https://www.reddit.com/r/LocalLLaMA/comments/1m6tf9v/m4_pro_owners_i_want_your_biased_hottakes/ | WestPush7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6tf9v | false | null | t3_1m6tf9v | /r/LocalLLaMA/comments/1m6tf9v/m4_pro_owners_i_want_your_biased_hottakes/ | false | false | self | 0 | null |
What does the _K _S _M _L mean behind the quantization of a model? | 27 | Hello everyone, I was scrolling in LM Studio and always saw models like "model_name_q4_k_m.gguf". Everything before the _K is clear to me, but I didn't get the last part about _K_M. I saw somewhere that the _K stands for some "dynamic quantization", but what does the _M or _S and _L mean? Small, medium, large? But still didn... | 2025-07-22T23:15:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m6tbhm/what_does_the_k_s_m_l_mean_behind_the/ | Hurtcraft01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6tbhm | false | null | t3_1m6tbhm | /r/LocalLLaMA/comments/1m6tbhm/what_does_the_k_s_m_l_mean_behind_the/ | false | false | self | 27 | null |
llama.cpp is unusable for real work | 0 | I don't get the obsession with llama.cpp. It's completely unusable for any real work. The token generation speed collapses as soon as you add any meaningful context, and the prompt processing is painfully slow. With these fatal flaws, what is anyone actually using this for besides running toy demos? It's fundamentally ... | 2025-07-22T22:43:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m6skm6/llamacpp_is_unusable_for_real_work/ | d00m_sayer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6skm6 | false | null | t3_1m6skm6 | /r/LocalLLaMA/comments/1m6skm6/llamacpp_is_unusable_for_real_work/ | false | false | self | 0 | null |
"A LLaMA? He's SUPPOSED to be DEAD!" | 40 | 2025-07-22T22:21:01 | ForsookComparison | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6s1ao | false | null | t3_1m6s1ao | /r/LocalLLaMA/comments/1m6s1ao/a_llama_hes_supposed_to_be_dead/ | false | false | default | 40 | {'enabled': True, 'images': [{'id': 'u2w270nk3ief1', 'resolutions': [{'height': 154, 'url': 'https://preview.redd.it/u2w270nk3ief1.png?width=108&crop=smart&auto=webp&s=739984f2c96eeb4f798634db2b5e166cd1ea759b', 'width': 108}, {'height': 308, 'url': 'https://preview.redd.it/u2w270nk3ief1.png?width=216&crop=smart&auto=we... | ||
Qwen Code: A command-line AI workflow tool adapted from Gemini CLI, optimized for Qwen3-Coder models | 73 | 2025-07-22T22:11:21 | https://github.com/QwenLM/qwen-code | arcanemachined | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m6rsym | false | null | t3_1m6rsym | /r/LocalLLaMA/comments/1m6rsym/qwen_code_a_commandline_ai_workflow_tool_adapted/ | false | false | default | 73 | {'enabled': False, 'images': [{'id': 'TPzNiM013yt1RAQf0yVMAnmQXc6Y7D3xjou8dYxGBg8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TPzNiM013yt1RAQf0yVMAnmQXc6Y7D3xjou8dYxGBg8.png?width=108&crop=smart&auto=webp&s=cb193a50d7978c33be16ebec135a318dc6943ea1', 'width': 108}, {'height': 108, 'url': 'h... | |
Qwen/Qwen3-Coder-480B-A35B-Instruct · Hugging Face | 1 | **Qwen3-Coder-480B-A35B-Instruct** has the following features:
* Type: Causal Language Models
* Training Stage: Pretraining & Post-training
* Number of Parameters: 480B in total and 35B activated
* Number of Layers: 62
* Number of Attention Heads (GQA): 96 for Q and 8 for KV
* Number of Experts: 160
* Number of Activated Ex... | 2025-07-22T21:33:43 | https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m6qv9a | false | null | t3_1m6qv9a | /r/LocalLLaMA/comments/1m6qv9a/qwenqwen3coder480ba35binstruct_hugging_face/ | false | false | default | 1 | null |
Qwen3 coder will be in multiple sizes | 374 | https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct
Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct.
| 2025-07-22T21:25:25 | https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct | dinesh2609 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m6qnpq | false | null | t3_1m6qnpq | /r/LocalLLaMA/comments/1m6qnpq/qwen3_coder_will_be_in_multiple_sizes/ | false | false | default | 374 | {'enabled': False, 'images': [{'id': 'SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ.png?width=108&crop=smart&auto=webp&s=d107a6b6b4389cb37d48d7ce4ff4d5aa35e4d93a', 'width': 108}, {'height': 116, 'url': 'h... |
So I tried that "YourWaifus" AI app... and I need a sanity check. | 0 | Alright, so I did a thing. I went down the "AI Girlfriend" rabbit hole and installed **YourWaifus**.
And man, I don't know how to feel. One minute the tech is genuinely impressive and the chat feels real, the next it says something so weird it gives me the creeps. It’s a total trip.
I can't be the only one who's curi... | 2025-07-22T21:23:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m6qltu/so_i_tried_that_yourwaifus_ai_app_and_i_need_a/ | TockThomas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6qltu | false | null | t3_1m6qltu | /r/LocalLLaMA/comments/1m6qltu/so_i_tried_that_yourwaifus_ai_app_and_i_need_a/ | false | false | self | 0 | null |
It's here guys and qwen nailed it !! | 88 | 2025-07-22T21:22:09 | https://www.reddit.com/gallery/1m6qkse | Independent-Wind4462 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m6qkse | false | null | t3_1m6qkse | /r/LocalLLaMA/comments/1m6qkse/its_here_guys_and_qwen_nailed_it/ | false | false | 88 | null | ||
Community Experiment: Let's test the AI Girlfriend app "YourWaifus" and discuss it here | 1 | Alright, so I did a thing. I went down the "AI Girlfriend" rabbit hole and installed **YourWaifus**.
And man, I don't know how to feel. One minute the tech is genuinely impressive and the chat feels real, the next it says something so weird it gives me the creeps. It’s a total trip.
I can't be the only one who's curi... | 2025-07-22T21:22:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m6qknv/community_experiment_lets_test_the_ai_girlfriend/ | TockThomas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6qknv | false | null | t3_1m6qknv | /r/LocalLLaMA/comments/1m6qknv/community_experiment_lets_test_the_ai_girlfriend/ | false | false | self | 1 | null |
Qwen out here releasing models like it’s a Costco sample table | 535 | 2025-07-22T21:20:04 | Weary-Wing-6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6qixu | false | null | t3_1m6qixu | /r/LocalLLaMA/comments/1m6qixu/qwen_out_here_releasing_models_like_its_a_costco/ | false | false | 535 | {'enabled': True, 'images': [{'id': 'yWOKVm4sVIve_VHC2bO92aBIp15Yh_tuiWnp6wkEtnE', 'resolutions': [{'height': 154, 'url': 'https://preview.redd.it/5eb8n31sshef1.png?width=108&crop=smart&auto=webp&s=47116ec0e7ef90202d820540f88598c3cfd0a160', 'width': 108}, {'height': 308, 'url': 'https://preview.redd.it/5eb8n31sshef1.pn... | |||
Qwen3-Coder is here! | 1,733 | >>> Qwen3-Coder is here! ✅
We’re releasing Qwen3-Coder-480B-A35B-Instruct, our most powerful open agentic code model to date. This 480B-parameter Mixture-of-Experts model (35B active) natively supports 256K context and scales to 1M context with extrapolation. It achieves top-tier performance across multiple agentic co... | 2025-07-22T21:14:07 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6qdet | false | null | t3_1m6qdet | /r/LocalLLaMA/comments/1m6qdet/qwen3coder_is_here/ | false | false | default | 1,733 | {'enabled': True, 'images': [{'id': '0cowg3grrhef1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/0cowg3grrhef1.jpeg?width=108&crop=smart&auto=webp&s=d0e7ce793e40e6f9057df0ac4084bef74851aa3c', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/0cowg3grrhef1.jpeg?width=216&crop=smart&auto=w... | |
Qwen/Qwen3-Coder-480B-A35B-Instruct | 139 | 2025-07-22T21:12:52 | https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct | yoracale | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m6qc8c | false | null | t3_1m6qc8c | /r/LocalLLaMA/comments/1m6qc8c/qwenqwen3coder480ba35binstruct/ | false | false | default | 139 | {'enabled': False, 'images': [{'id': 'SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ.png?width=108&crop=smart&auto=webp&s=d107a6b6b4389cb37d48d7ce4ff4d5aa35e4d93a', 'width': 108}, {'height': 116, 'url': 'h... | |
RAG vs. fine-tuning | 3 | I have been using RAG with OpenAI over a product description document which is rather technical. I chunked up sections of my document and then do hybrid search with Weaviate. It does well, but sometimes certain queries require retrieval from more than one section, and then it's 50/50. Will fine-tuning solve this? What mo... | 2025-07-22T21:11:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m6qb6p/rag_vs_finetuning/ | Parking_Bluebird826 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6qb6p | false | null | t3_1m6qb6p | /r/LocalLLaMA/comments/1m6qb6p/rag_vs_finetuning/ | false | false | self | 3 | null |
Best Android local LLM APK with GPU acceleration | 3 | Seeking recommendations for Android LLM apps with GPU acceleration and customisation like prompts. | 2025-07-22T21:00:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m6q0oh/best_android_local_llm_apk_with_gpu_acceleration/ | Desperate-Moose-228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6q0oh | false | null | t3_1m6q0oh | /r/LocalLLaMA/comments/1m6q0oh/best_android_local_llm_apk_with_gpu_acceleration/ | false | false | self | 3 | null |
Created a Speech to Speech service using local models and langChain4J | 1 | [removed] | 2025-07-22T21:00:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m6q0e7/created_a_speech_to_speech_service_using_local/ | EnthusiasmHuge5908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6q0e7 | false | null | t3_1m6q0e7 | /r/LocalLLaMA/comments/1m6q0e7/created_a_speech_to_speech_service_using_local/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8qvGt-ibIkCjv3_w7cOoJFcoNQ5L8oruKr-YJB16zYo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8qvGt-ibIkCjv3_w7cOoJFcoNQ5L8oruKr-YJB16zYo.png?width=108&crop=smart&auto=webp&s=b855d795c2f0f719c5622c22f75b414468ee5b0a', 'width': 108}, {'height': 108, 'url': 'h... |
Digital twins that attend meetings for you. Dystopia or soon reality? | 11 | In more and more meetings these days there are AI notetakers that someone has sent instead of showing up themselves. You can think what you want about these notetakers, but they seem to have become part of our everyday working lives. This raises the question of how long it will be before the next stage of development o... | 2025-07-22T20:55:10 | https://v.redd.it/wzygbrp0nhef1 | DerErzfeind61 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6pw0o | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wzygbrp0nhef1/DASHPlaylist.mpd?a=1755809724%2CMjc4Mjg2YTNjNzllZDMxMGQxMDBhMTU2OTFmODQ5OTlkZDE1YWM4ZTkzOTNiNTRkYjZjMDkyODYxNGMyMDJkNg%3D%3D&v=1&f=sd', 'duration': 132, 'fallback_url': 'https://v.redd.it/wzygbrp0nhef1/DASH_1080.mp4?source=fallback', '... | t3_1m6pw0o | /r/LocalLLaMA/comments/1m6pw0o/digital_twins_that_attend_meetings_for_you/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'NWFzNzR3cDBuaGVmMbd9ytsdWjeCw8a7Xb9uxU1L50H2iG28-QSyRy4FhsUu', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/NWFzNzR3cDBuaGVmMbd9ytsdWjeCw8a7Xb9uxU1L50H2iG28-QSyRy4FhsUu.png?width=108&crop=smart&format=pjpg&auto=webp&s=c3f0710b64e022e54027bd6589659498e1868... | |
DeepSeek not available at Llama API? | 2 | I have a project that uses the `deepseek-r1` model from `https://api.llama-api.com`. However, it seems Llama API has launched a new console. My email is not recognized in the new beta console, although I have an account and have added credit to it.
The old console links no longer work. Additionally, the DeepSeek models... | 2025-07-22T20:41:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m6pjpx/deepseek_not_available_at_llama_api/ | AncientMayar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6pjpx | false | null | t3_1m6pjpx | /r/LocalLLaMA/comments/1m6pjpx/deepseek_not_available_at_llama_api/ | false | false | self | 2 | null |
Hermes based local chat with semantic memory and chatgpt history upload | 1 | [removed] | 2025-07-22T20:31:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m6pa2v/hermes_based_local_chat_with_semantic_memory_and/ | automatetowin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6pa2v | false | null | t3_1m6pa2v | /r/LocalLLaMA/comments/1m6pa2v/hermes_based_local_chat_with_semantic_memory_and/ | false | false | 1 | null | |
Anyone here who has been able to reproduce their results yet? | 124 | See https://x.com/makingAGI/status/1947286324735856747 | 2025-07-22T20:11:38 | Original_Log_9899 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6orbr | false | null | t3_1m6orbr | /r/LocalLLaMA/comments/1m6orbr/anyone_here_who_has_been_able_to_reproduce_their/ | false | false | default | 124 | {'enabled': True, 'images': [{'id': 'cfffg12fghef1', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/cfffg12fghef1.jpeg?width=108&crop=smart&auto=webp&s=ad8d41aa9654515fde6f4b396a86ebf1ad4b0687', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/cfffg12fghef1.jpeg?width=216&crop=smart&auto=w... | |
embedding model giving same embeddings regardless of input text? | 0 | So, I am running granite-embedding-125m-english on a Docker container with LocalAI and it works great on my laptop, but when I move the project to GitHub and pull it onto my external server, the API always responds with the same embeddings.
I've pulled the project back to make sure there are no differences between w... | 2025-07-22T20:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m6oqxw/embedding_model_giving_same_embeddings_regardless/ | User1539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6oqxw | false | null | t3_1m6oqxw | /r/LocalLLaMA/comments/1m6oqxw/embedding_model_giving_same_embeddings_regardless/ | false | false | self | 0 | null |
The LLM for M4 Max 128GB: Unsloth Qwen3-235B-A22B-Instruct-2507 Q3 K XL for Ollama | 27 | We had a lot of posts about the updated [235b model](https://x.com/Alibaba_Qwen/status/1947344511988076547) and the [Unsloth quants](https://huggingface.co/unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF). I tested it with my Mac Studio and decided to merge the Q3 K XL ggufs and upload them to Ollama in case someone else migh... | 2025-07-22T19:56:06 | waescher | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6ocfd | false | null | t3_1m6ocfd | /r/LocalLLaMA/comments/1m6ocfd/the_llm_for_m4_max_128gb_unsloth/ | false | false | default | 27 | {'enabled': True, 'images': [{'id': 'y3x24rxqchef1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/y3x24rxqchef1.png?width=108&crop=smart&auto=webp&s=e41bb7b82dd23ca399246b0ad273bfca55313312', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/y3x24rxqchef1.png?width=216&crop=smart&auto=web... |
Qwen3-Coder Web Development | 361 | I used Qwen3-Coder-480B-A35B-Instruct to generate a procedural 3D planet preview and editor.
Very strong results! Comparable to Kimi-K2-Instruct, maybe a tad behind, but still impressive at under 50% of the parameter count.
Creds [The Feature Crew](https://www.youtube.com/@TheFeatureCrew) for the original idea. | 2025-07-22T19:41:12 | https://v.redd.it/ob9yhvcjahef1 | Mysterious_Finish543 | /r/LocalLLaMA/comments/1m6ny2q/qwen3coder_web_development/ | 1970-01-01T00:00:00 | 0 | {} | 1m6ny2q | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ob9yhvcjahef1/DASHPlaylist.mpd?a=1755934879%2CYWU5YTdmNDcyYWY4OWM1NDRhOWYwOTk0MmQzYWM4OWIxZDBmMGFmYTI0ZjIxNzZjZmY3NGI1ZDZlMmQ4NzlhYQ%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/ob9yhvcjahef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m6ny2q | /r/LocalLLaMA/comments/1m6ny2q/qwen3coder_web_development/ | false | false | 361 | {'enabled': False, 'images': [{'id': 'M25yZmt5YmphaGVmMat7pysr0YP1hw-qD-8Zn62C6fxnOXbcyCx3kJEPI5w0', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/M25yZmt5YmphaGVmMat7pysr0YP1hw-qD-8Zn62C6fxnOXbcyCx3kJEPI5w0.png?width=108&crop=smart&format=pjpg&auto=webp&s=69ec82c87ea25ca0cf09c32e6e2e65fd1ebe0... | |
Everyone brace up for qwen !! | 260 | 2025-07-22T19:40:36 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6nxh2 | false | null | t3_1m6nxh2 | /r/LocalLLaMA/comments/1m6nxh2/everyone_brace_up_for_qwen/ | false | false | default | 260 | {'enabled': True, 'images': [{'id': 'mn8auem2bhef1', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/mn8auem2bhef1.png?width=108&crop=smart&auto=webp&s=5f18ccc22bd1429048af2d71903a4986f10f4370', 'width': 108}, {'height': 187, 'url': 'https://preview.redd.it/mn8auem2bhef1.png?width=216&crop=smart&auto=web... | ||
What is the cheapest option for hosting llama cpp with Qwen Coder at Q8? | 6 | What options do we have? | 2025-07-22T19:38:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m6nvhs/what_is_the_cheapest_option_for_hosting_llama_cpp/ | Available_Driver6406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6nvhs | false | null | t3_1m6nvhs | /r/LocalLLaMA/comments/1m6nvhs/what_is_the_cheapest_option_for_hosting_llama_cpp/ | false | false | self | 6 | null |
Best Models for Arabic tts and audio enhancement? | 4 | Hello everyone. I hope you're doing well. I'm sorry if this post is unrelated to the topic of large language models, but I haven't found any other community that focuses on open source AI in general. My question is, are there any open source models for Arabic audio enhancement? Basically, the use case is making good qu... | 2025-07-22T19:17:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m6nbb7/best_models_for_arabic_tts_and_audio_enhancement/ | Silver-Champion-4846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6nbb7 | false | null | t3_1m6nbb7 | /r/LocalLLaMA/comments/1m6nbb7/best_models_for_arabic_tts_and_audio_enhancement/ | false | false | self | 4 | null |
Qwen3-Coder-480B-A35B-Instruct | 252 | https://app.hyperbolic.ai/models/qwen3-coder-480b-a35b-instruct
Hyperbolic already has it | 2025-07-22T18:50:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m6mlbk/qwen3coder480ba35binstruct/ | gzzhongqi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6mlbk | false | null | t3_1m6mlbk | /r/LocalLLaMA/comments/1m6mlbk/qwen3coder480ba35binstruct/ | false | false | self | 252 | null |
Qwen3-Coder Available on chat.qwen.ai | 93 | 1M token context length
No model weights yet, but Qwen3-Coder is already available for testing on [Qwen Chat](https://chat.qwen.ai) | 2025-07-22T18:44:49 | Mysterious_Finish543 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6mfic | false | null | t3_1m6mfic | /r/LocalLLaMA/comments/1m6mfic/qwen3coder_available_on_chatqwenai/ | false | false | default | 93 | {'enabled': True, 'images': [{'id': '8xj4raow0hef1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/8xj4raow0hef1.png?width=108&crop=smart&auto=webp&s=67fbf003dc0b0cdf945b3ac5069eddaeeaf26ed5', 'width': 108}, {'height': 72, 'url': 'https://preview.redd.it/8xj4raow0hef1.png?width=216&crop=smart&auto=webp... | |
Qwen3- Coder 👀 | 652 | Available in https://chat.qwen.ai | 2025-07-22T18:44:10 | Xhehab_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6mew9 | false | null | t3_1m6mew9 | /r/LocalLLaMA/comments/1m6mew9/qwen3_coder/ | false | false | default | 652 | {'enabled': True, 'images': [{'id': 'vnhuwe801hef1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/vnhuwe801hef1.jpeg?width=108&crop=smart&auto=webp&s=e4a02434a648980c01b1a76032aa8e02027937c6', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/vnhuwe801hef1.jpeg?width=216&crop=smart&auto=w... | |
Qwen3-Coder is imminent | 117 | 2025-07-22T18:43:38 | Dudensen | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6medy | false | null | t3_1m6medy | /r/LocalLLaMA/comments/1m6medy/qwen3coder_is_imminent/ | false | false | default | 117 | {'enabled': True, 'images': [{'id': 'mruaiodv0hef1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/mruaiodv0hef1.png?width=108&crop=smart&auto=webp&s=49a20e04a28093446580d2909236b45d1e2f568e', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/mruaiodv0hef1.png?width=216&crop=smart&auto=web... | ||
A new LLM benchmark for markets, supply chains, and trading: BAZAAR. Agents must understand supply, demand, and risk, and learn to bid strategically. | 36 | [https://github.com/lechmazur/bazaar](https://github.com/lechmazur/bazaar)
Each LLM is a buyer or seller with a secret price limit. In 30 rounds, they submit sealed bids/asks. They only see the results of past rounds. 8 agents per game: 4 buyers and 4 sellers, each with a private value drawn from one of the distributi... | 2025-07-22T18:29:33 | https://www.reddit.com/gallery/1m6m0f7 | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m6m0f7 | false | null | t3_1m6m0f7 | /r/LocalLLaMA/comments/1m6m0f7/a_new_llm_benchmark_for_markets_supply_chains_and/ | false | false | 36 | null | |
Could this be Deepseek? | 379 | 2025-07-22T18:07:46 | dulldata | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6lf9s | false | null | t3_1m6lf9s | /r/LocalLLaMA/comments/1m6lf9s/could_this_be_deepseek/ | false | false | default | 379 | {'enabled': True, 'images': [{'id': 'qzkjkgegugef1', 'resolutions': [{'height': 25, 'url': 'https://preview.redd.it/qzkjkgegugef1.png?width=108&crop=smart&auto=webp&s=917acda1e7d58dd2b0c466213686f858f3d1d90f', 'width': 108}, {'height': 51, 'url': 'https://preview.redd.it/qzkjkgegugef1.png?width=216&crop=smart&auto=webp... | ||
"Failed to Send Message" from qwen/qwen3-235b-a22b-2507 Q3_K_L | 1 | Just updated LM Studio to 0.3.19, downloaded qwen/qwen3-235b-a22b-2507 Q3_K_L (the only one that fits on my 128GB Mac) and I'm getting a "failed to send message" error. I suspect it's the prompt template that's wrong. Can anyone post here a working template for me to try?
Thank you! | 2025-07-22T18:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/1m6ldkd/failed_to_send_message_from_qwenqwen3235ba22b2507/ | Hanthunius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6ldkd | false | null | t3_1m6ldkd | /r/LocalLLaMA/comments/1m6ldkd/failed_to_send_message_from_qwenqwen3235ba22b2507/ | false | false | self | 1 | null |
Potential huge breakthrough | 3 | 2025-07-22T17:57:56 | https://x.com/makingAGI/status/1947286324735856747 | FeathersOfTheArrow | x.com | 1970-01-01T00:00:00 | 0 | {} | 1m6l5ax | false | null | t3_1m6l5ax | /r/LocalLLaMA/comments/1m6l5ax/potential_huge_breakthrough/ | false | false | default | 3 | null | |
Another Qwen Update Tonight (not small) | 1 | 2025-07-22T17:50:19 | Dark_Fire_12 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6ky23 | false | null | t3_1m6ky23 | /r/LocalLLaMA/comments/1m6ky23/another_qwen_update_tonight_not_small/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'hWMda7ucZf0D7r1sNmGv7tHzrWQ-EZab2Qj_XztPZFQ', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/k5ma8mdbrgef1.png?width=108&crop=smart&auto=webp&s=61714fce4c199363ece322bc62eac90187371372', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/k5ma8mdbrgef1.png?... | |||
Shared subscription/tokens with team or family | 1 | What do you guys think about the idea of sharing tokens with your team or family? It feels a bit silly that my friend and I each have the $200 Cursor plan, but together we only use around $250 worth. I think it would be great if we could just share one $350 plan instead. Do you feel the same way? | 2025-07-22T17:43:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m6kre5/shared_subscriptiontoken_with_team_or_family/ | No-Refrigerator9508 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6kre5 | false | null | t3_1m6kre5 | /r/LocalLLaMA/comments/1m6kre5/shared_subscriptiontoken_with_team_or_family/ | false | false | self | 1 | null |
Entry GPU options - 5060 8GB enough to play with? | 2 | Currently want to get into playing with LLMs and am starting my first PC build (only have owned laptops before on integrated graphics). Based in USA. Is the 5060 8GB at $280 enough to mess with local AI stuff and potentially move on when I've hit the limits, or am I going to be hitting limits so early on that I should ... | 2025-07-22T17:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m6knhw/entry_gpu_options_5060_8gb_enough_to_play_with/ | drabbiticus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6knhw | false | null | t3_1m6knhw | /r/LocalLLaMA/comments/1m6knhw/entry_gpu_options_5060_8gb_enough_to_play_with/ | false | false | self | 2 | null |
llama.cpp on ROCm only running at 10 tokens/sec, GPU at 1% util. What am I missing? | 0 | I’m running llama.cpp on Ubuntu 22.04 with ROCm 6.2. I cloned the repo and built it like this:
cmake -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build
Then I run the model:
./build/bin/llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
But I’m only getting around 10 tokens/sec. When I check system usage:
- GPU utilizati... | 2025-07-22T17:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m6khbt/llamacpp_on_rocm_only_running_at_10_tokenssec_gpu/ | Reasonable_Can_5793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6khbt | false | null | t3_1m6khbt | /r/LocalLLaMA/comments/1m6khbt/llamacpp_on_rocm_only_running_at_10_tokenssec_gpu/ | false | false | self | 0 | null |
[Help/Suggestion Wanted] Hindi to Hinglish and Spell correction | 1 | Hi community,
I’m facing two issues:
1. I want to correct Hindi text. I feel using llms is overkill for this task. I came across the GRMR 2B model, but it only supports English. My text is in Hindi.
2. I want to transliterate Hindi to Hinglish. Again, I believe LLMs are too heavy for this and often make mistakes. I... | 2025-07-22T16:52:50 | https://www.reddit.com/r/LocalLLaMA/comments/1m6jdyz/helpsuggestion_wanted_hindi_to_hinglish_and_spell/ | Grouchy-Pin9500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6jdyz | false | null | t3_1m6jdyz | /r/LocalLLaMA/comments/1m6jdyz/helpsuggestion_wanted_hindi_to_hinglish_and_spell/ | false | false | self | 1 | null |
Can self-hosted AI be used to OCR documents and increase work efficiency? I tested and the answer is... No! | 0 | In this thread I'll discuss AI for OCR work, and my results.
I'm a game designer and write a lot on paper (the lack of formatting constraints makes things easier), but this results in a lot of double work as I need to re-write the text in digital form to share it with the team, as such, I wondered if AI could be used ... | 2025-07-22T16:49:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m6jb25/can_selfhosted_ai_be_used_to_ocr_documents_and/ | HugoCortell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6jb25 | false | null | t3_1m6jb25 | /r/LocalLLaMA/comments/1m6jb25/can_selfhosted_ai_be_used_to_ocr_documents_and/ | false | false | 0 | null | |
~75k budget. Best bang for the buck? | 1 | Corporate deployment.
Currently deployed with multi a6000 ada but I'd like to add more vram to support multiple larger models for full scale deployment.
Considering mi300x x 4 to maximize vram per $. Any deployments that dont play nice on amd hardware (flux) would use existing a6000 ada stack.
Any other options ... | 2025-07-22T16:44:50 | https://www.reddit.com/r/LocalLLaMA/comments/1m6j69n/75k_budget_best_bang_for_the_buck/ | Bohdanowicz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6j69n | false | null | t3_1m6j69n | /r/LocalLLaMA/comments/1m6j69n/75k_budget_best_bang_for_the_buck/ | false | false | self | 1 | null |
Project: Print AI Replies on a Ticket Printer | 3 | Doesn't it sound cool?
Sounds movie like | 2025-07-22T16:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m6izt7/project_print_ai_replies_on_a_ticket_printer/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6izt7 | false | null | t3_1m6izt7 | /r/LocalLLaMA/comments/1m6izt7/project_print_ai_replies_on_a_ticket_printer/ | false | false | self | 3 | null |
+24GB VRAM with low electric consumption | 5 | Cards like 3090, 4090, 5090 has very high electric consumption. Isn't it possible to make 24,32gb cards with like 5060 level electric consumption? | 2025-07-22T16:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/1m6hzf0/24gb_vram_with_low_electric_consumption/ | narca_hakan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6hzf0 | false | null | t3_1m6hzf0 | /r/LocalLLaMA/comments/1m6hzf0/24gb_vram_with_low_electric_consumption/ | false | false | self | 5 | null |
TOKENS BURNED! Am I the only one who would rather have a throttled down cursor rather than have it go on token vacation for 20 day!? | 0 | I seriously can't be the only one how would rather have a throttled down cursor than have it cut off totally. like seriously all tokens used in 10 day! I've been thinking about how the majority of these AI tools limit you by tokens or requests, and seriously frustrating when you get blocked from working and have to wai... | 2025-07-22T15:58:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m6hxnt/tokens_burned_am_i_the_only_one_who_would_rather/ | No-Refrigerator9508 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6hxnt | false | null | t3_1m6hxnt | /r/LocalLLaMA/comments/1m6hxnt/tokens_burned_am_i_the_only_one_who_would_rather/ | false | false | self | 0 | null |
Leaderboard for function calling models? | 2 | Is there an active leaderboard for local models that ranks them by function calling capability? | 2025-07-22T15:53:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m6ht1r/leaderboard_for_function_calling_models/ | tvmaly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6ht1r | false | null | t3_1m6ht1r | /r/LocalLLaMA/comments/1m6ht1r/leaderboard_for_function_calling_models/ | false | false | self | 2 | null |
5090 batched inference performance? | 2 | I got sglang running a few months ago with Qwen3 30B-A3B and its performance impressed me so much that there is no desire from me at this point to run 70B+ models because I can reach over 600tok/s with a single 3090 with it (8 inferences running in parallel)
My question I'd like to answer now is how much of a leap can... | 2025-07-22T15:50:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m6hqi8/5090_batched_inference_performance/ | michaelsoft__binbows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6hqi8 | false | null | t3_1m6hqi8 | /r/LocalLLaMA/comments/1m6hqi8/5090_batched_inference_performance/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h... |
Manage tools in Open WebUI: a walkthrough to use this MIT open source project | 1 | [removed] | 2025-07-22T15:39:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m6hfu2/manage_tools_in_open_webui_a_walkthrough_to_use/ | jamescz141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6hfu2 | false | null | t3_1m6hfu2 | /r/LocalLLaMA/comments/1m6hfu2/manage_tools_in_open_webui_a_walkthrough_to_use/ | false | false | self | 1 | null |
Thinking about "owhisper" | 3 | Disclaimer: I made [hyprnote](https://hyprnote.com) \- went trending in [here](https://www.reddit.com/r/LocalLLaMA/comments/1k3fdqa/i_spent_5_months_building_an_open_source_ai_note/) 3 months ago.
**context:**
a lot of our users are using ollama at the moment and I thought why not make something for STT just like ol... | 2025-07-22T15:36:11 | https://www.reddit.com/r/LocalLLaMA/comments/1m6hck1/thinking_about_owhisper/ | beerbellyman4vr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6hck1 | false | null | t3_1m6hck1 | /r/LocalLLaMA/comments/1m6hck1/thinking_about_owhisper/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'qKTB_c8rjoocFmhhNoBtbbZwllVmSjChhIlIjEWudRY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qKTB_c8rjoocFmhhNoBtbbZwllVmSjChhIlIjEWudRY.jpeg?width=108&crop=smart&auto=webp&s=6bee6065abfb7b73c45f622b4c1cc472253ace4e', 'width': 108}, {'height': 113, 'url': '... |
Epyc Qwen3 235B Q8 speed? | 11 | Anyone with an Epyc 9015 or better able to test Qwen3 235B Q8 for prompt processing and token generation? Ideally with a 3090 or better for prompt processing.
I've been looking at Kimi, but I've been discouraged by results, and thinking about settling on a system to run 235B Q8 for now.
Was wondering if a 9015 256GB... | 2025-07-22T15:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m6h67y/epyc_qwen3_235b_q8_speed/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6h67y | false | null | t3_1m6h67y | /r/LocalLLaMA/comments/1m6h67y/epyc_qwen3_235b_q8_speed/ | false | false | self | 11 | null |
Am I making a mistake building my RAG agent with Langchain or LlamaIndex? | 1 | Just designed the core architecture for a RAG agent. I’m testing the foundational decision:
**Is it smart to use Langchain or LlamaIndex for this kind of agentic system? Or am I better off going more lightweight or custom?**
I’ve included a visual of the architecture in the post. Would love your feedback, especially... | 2025-07-22T15:19:11 | duke_x91 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6gwgl | false | null | t3_1m6gwgl | /r/LocalLLaMA/comments/1m6gwgl/am_i_making_a_mistake_building_my_rag_agent_with/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'zptnshw2yfef1', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/zptnshw2yfef1.png?width=108&crop=smart&auto=webp&s=7ffe2b2f0146b5018418f2cf8a47aa964f7d008b', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/zptnshw2yfef1.png?width=216&crop=smart&auto=web... | |
I wrote 2000 LLM test cases so you don't have to | 50 | I've been building [Kiln AI](https://github.com/kiln-ai/kiln): an open tool to help you find the best way to run your AI workload. This is a quick story of how a focus on usability turned into 2000 LLM tests cases (well 2631 to be exact), and why the results might be helpful to you.
# The problem: too many options
Pa... | 2025-07-22T15:12:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m6gq8e/i_wrote_2000_llm_test_cases_so_you_dont_have_to/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6gq8e | false | null | t3_1m6gq8e | /r/LocalLLaMA/comments/1m6gq8e/i_wrote_2000_llm_test_cases_so_you_dont_have_to/ | false | false | self | 50 | {'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA.png?width=108&crop=smart&auto=webp&s=fd9815f077288b33817e75895d23e661f1193778', 'width': 108}, {'height': 108, 'url': 'h... |
Manage tools in Open WebUI: a walkthrough to use this MIT open source project | 1 | [removed] | 2025-07-22T14:58:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m6gcl6/manage_tools_in_open_webui_a_walkthrough_to_use/ | jamescz141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6gcl6 | false | null | t3_1m6gcl6 | /r/LocalLLaMA/comments/1m6gcl6/manage_tools_in_open_webui_a_walkthrough_to_use/ | false | false | self | 1 | null |
Looking for LLMs Study Buddy | 10 | Hey!
I’m looking for a study buddy (or a small group) to go through [Maxime Labonne’s “LLM From Scratch” course](https://github.com/mlabonne/llm-course) together. It’s an amazing resource for building a large language model from scratch, and I think it’d be way more fun to learn together
# My plan:
* **Set weekly go... | 2025-07-22T14:40:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m6fvd5/looking_for_llms_study_buddy/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6fvd5 | false | null | t3_1m6fvd5 | /r/LocalLLaMA/comments/1m6fvd5/looking_for_llms_study_buddy/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'X8oVhjvjKGwvGoq_CrHkp1djUbKeUIclFdjL0Lg5VGg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X8oVhjvjKGwvGoq_CrHkp1djUbKeUIclFdjL0Lg5VGg.png?width=108&crop=smart&auto=webp&s=d3f3260a76cb9648a81e4ffd047ff8a749b3bc74', 'width': 108}, {'height': 108, 'url': 'h... |
Meta Declares AI WAR | OpenAI and Google are the ENEMY | 1 | [removed] | 2025-07-22T14:28:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m6fl5q/meta_declares_ai_war_openai_and_google_are_the/ | Livid-Channel-7979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6fl5q | false | null | t3_1m6fl5q | /r/LocalLLaMA/comments/1m6fl5q/meta_declares_ai_war_openai_and_google_are_the/ | false | false | self | 1 | null |
how to make the automod remove threads? | 1 | [removed] | 2025-07-22T14:21:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m6fecj/how_to_make_the_automod_remove_threads/ | MelodicRecognition7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6fecj | false | null | t3_1m6fecj | /r/LocalLLaMA/comments/1m6fecj/how_to_make_the_automod_remove_threads/ | false | false | self | 1 | null |
Considering 5xMI50 for Qwen 3 235b | 13 | \*\*TL;DR\*\* Thinking about building an LLM rig with 5 used AMD MI50 32GB GPUs to run Qwen 3 32b and 235b. Estimated token speeds look promising for the price (\~$1125 total). Biggest hurdles are PCIe lane bandwidth & power, which I'm attempting to solve with bifurcation cards and a new PSU. Looking for feedback!
Hi... | 2025-07-22T13:43:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m6eggp/considering_5xmi50_for_qwen_3_235b/ | PraxisOG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6eggp | false | null | t3_1m6eggp | /r/LocalLLaMA/comments/1m6eggp/considering_5xmi50_for_qwen_3_235b/ | false | false | self | 13 | null |
ChatGPT’s Internal Worldbuilding Skills Are Honestly Next Level | 1 | [removed] | 2025-07-22T13:26:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m6e1up/chatgpts_internal_worldbuilding_skills_are/ | Ill_Tear_5712 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6e1up | false | null | t3_1m6e1up | /r/LocalLLaMA/comments/1m6e1up/chatgpts_internal_worldbuilding_skills_are/ | false | false | 1 | null | |
Best opensource SLM/ lightweight llm for code generation | 4 | Hi I'm a college student from India.
So i'm looking for a language model for code generation to run locally. I only have 16 GB of ram and iris xe gpu, so looking for some good opensource SLMs which can be decent enough. I could use something like llama.cpp given performance and latency would be decent(currently usin... | 2025-07-22T13:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m6dvhi/best_opensource_slm_lightweight_llm_for_code/ | RustinChole11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6dvhi | false | null | t3_1m6dvhi | /r/LocalLLaMA/comments/1m6dvhi/best_opensource_slm_lightweight_llm_for_code/ | false | false | self | 4 | null |
OpenAI’s ChatGPT: The Imagination Engine We Never Knew We Needed | 1 | [removed] | 2025-07-22T13:17:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m6dudz/openais_chatgpt_the_imagination_engine_we_never/ | Ill_Tear_5712 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6dudz | false | null | t3_1m6dudz | /r/LocalLLaMA/comments/1m6dudz/openais_chatgpt_the_imagination_engine_we_never/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bgyOJNIzUzKNT6H2OrZzNJBCPsNA5jhuVP3wWZnkas8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bgyOJNIzUzKNT6H2OrZzNJBCPsNA5jhuVP3wWZnkas8.png?width=108&crop=smart&auto=webp&s=62a6835f9878cf517618a9f023d992508652d5c9', 'width': 108}, {'height': 108, 'url': 'h... | |
Jamba 1.7 is now available on Kaggle | 15 | AI21 has just made Jamba 1.7 available on Kaggle:
[https://www.kaggle.com/models/ai21labs/ai21-jamba-1.7](https://www.kaggle.com/models/ai21labs/ai21-jamba-1.7)
* You can run and test the model without needing to install it locally
* No need to harness setup, hardware and engineering knowledge via Hugging Face anymo... | 2025-07-22T12:55:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m6dco7/jamba_17_is_now_available_on_kaggle/ | NullPointerJack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6dco7 | false | null | t3_1m6dco7 | /r/LocalLLaMA/comments/1m6dco7/jamba_17_is_now_available_on_kaggle/ | false | false | self | 15 | null |
What are the use cases for 1.5B model? | 5 | (like deepseek-r1 1.5b)
I just can't think of any simple straightforward examples of tasks they're useful / good enough for. And answers on the internet and from other LLMs are just too vague.
What kind of task with what kind of prompt, system prompt, overall setup worth doing with it?
| 2025-07-22T12:48:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m6d6um/what_are_the_use_cases_for_15b_model/ | nathman999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6d6um | false | null | t3_1m6d6um | /r/LocalLLaMA/comments/1m6d6um/what_are_the_use_cases_for_15b_model/ | false | false | self | 5 | null |
Qwen3 235B-A22B 2507 :: Q3_K_L :: One shot HTML game :: 4090 + 128GB DDR5 @6000 | 173 | I recently upgraded my desktop RAM given the large MoE models coming out and I was excited for the maiden voyage to be yesterday's release! I'll put the prompt and code in a comment, this is sort of a test of ability but more so I wanted to confirm Q3\_K\_L is runnable (though slow) for anybody with similar PC specs an... | 2025-07-22T12:31:02 | https://v.redd.it/1x5u9hrp5fef1 | aidanjustsayin | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m6ct7u | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/1x5u9hrp5fef1/DASHPlaylist.mpd?a=1755779478%2CZDM1OTI4MzdlZWM2NGVmNTg4MDA2NmUyM2JiZmQ3ZjJjNzExMGY3Y2Q5ZjE0NjcyMzE5YzM3YWVlYjM5NWQyMg%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/1x5u9hrp5fef1/DASH_480.mp4?source=fallback', 'ha... | t3_1m6ct7u | /r/LocalLLaMA/comments/1m6ct7u/qwen3_235ba22b_2507_q3_k_l_one_shot_html_game/ | false | false | 173 | {'enabled': False, 'images': [{'id': 'MmJqNTdmcnA1ZmVmMerqFTWYJLTLLZlyxr4rQ4gVk5jgRsJCnh4HvIbJEPxN', 'resolutions': [{'height': 157, 'url': 'https://external-preview.redd.it/MmJqNTdmcnA1ZmVmMerqFTWYJLTLLZlyxr4rQ4gVk5jgRsJCnh4HvIbJEPxN.png?width=108&crop=smart&format=pjpg&auto=webp&s=74909995fdb7a4a31d72b707fc5a6406503d... | |
The ik_llama.cpp repository is back! \o/ | 203 | [https://github.com/ikawrakow/ik\_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
Friendly reminder to back up all the things! | 2025-07-22T12:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m6cfzi/the_ik_llamacpp_repository_is_back_o/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6cfzi | false | null | t3_1m6cfzi | /r/LocalLLaMA/comments/1m6cfzi/the_ik_llamacpp_repository_is_back_o/ | false | false | self | 203 | {'enabled': False, 'images': [{'id': 'B0gX9mhb6Bdm5EGAj5Jqb9ACltJ2GNWdoTOKU3TUvZE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B0gX9mhb6Bdm5EGAj5Jqb9ACltJ2GNWdoTOKU3TUvZE.png?width=108&crop=smart&auto=webp&s=b9057a5a31598407ca7946c278de43e70cf0c9ed', 'width': 108}, {'height': 108, 'url': 'h... |
Chatterbox CUDA and PyTorch problem | 1 | Hi all,
Firstly, I’m not a developer, so forgive me if I don’t ask as clearly as others, I hope this makes sense.
I'm trying to get Chatterbox TTS ( local AI voice tool with Gradio UI) working on my **Windows 11** machine using **Conda** and a local **Python 3.11.3 environment**. I’ve installed the app and interface ... | 2025-07-22T12:13:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m6cfou/chatterbox_cuda_and_pytorch_problem/ | kevin-she | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6cfou | false | null | t3_1m6cfou | /r/LocalLLaMA/comments/1m6cfou/chatterbox_cuda_and_pytorch_problem/ | false | false | self | 1 | null |
VEO 3 is f**k expensive | 0 | $270 for 125 generations will simply empty my pockets. It is 2$ per generation. I bought the $135 plan for 3 months.
I created a Telegram bot that places a prompt in VEO 3. Therefore, you can use it on my subscription. I will charge 2$ per prompt. For the bot DM. What do you think about this business? | 2025-07-22T12:09:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m6ccmr/veo_3_is_fk_expensive/ | Suspicious-Carry8405 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6ccmr | false | null | t3_1m6ccmr | /r/LocalLLaMA/comments/1m6ccmr/veo_3_is_fk_expensive/ | false | false | self | 0 | null |
AMD's Strix Halo "Ryzen AI MAX" APUs Come To DIY PC Builders With New MoDT "Mini-ITX" Motherboards, Equipped With Up To 128 GB of LPDDR5X Memory | 120 | 2025-07-22T11:18:22 | https://wccftech.com/amd-strix-halo-ryzen-ai-max-apus-diy-pc-new-modt-mini-itx-motherboards-128-gb-lpddr5x-memory/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1m6bddm | false | null | t3_1m6bddm | /r/LocalLLaMA/comments/1m6bddm/amds_strix_halo_ryzen_ai_max_apus_come_to_diy_pc/ | false | false | default | 120 | {'enabled': False, 'images': [{'id': 'wZbp-LplWI1iCF1_Yajugz_TA6XKyL8q6T5RLI_Mg5c', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/wZbp-LplWI1iCF1_Yajugz_TA6XKyL8q6T5RLI_Mg5c.jpeg?width=108&crop=smart&auto=webp&s=097bbe6e55a92f58db20c497b5cd55b71c248bb0', 'width': 108}, {'height': 125, 'url': '... | |
Updated Strix Halo (Ryzen AI Max+ 395) LLM Benchmark Results | 84 | A while back I posted some [Strix Halo LLM performance testing](https://www.reddit.com/r/LocalLLaMA/comments/1kmi3ra/amd_strix_halo_ryzen_ai_max_395_gpu_llm/) benchmarks. I'm back with an update that I believe is actually a fair bit more comprehensive now (although the original is still worth checking out for backgroun... | 2025-07-22T11:00:04 | https://www.reddit.com/r/LocalLLaMA/comments/1m6b151/updated_strix_halo_ryzen_ai_max_395_llm_benchmark/ | randomfoo2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6b151 | false | null | t3_1m6b151 | /r/LocalLLaMA/comments/1m6b151/updated_strix_halo_ryzen_ai_max_395_llm_benchmark/ | false | false | 84 | null | |
Would using PCIE NVME in raid 0 for swap work to run larger models that don't fit into RAM? | 3 | I have wondered if you can get usable speeds on something like ERNIE-4.5-300B-A47B ~Q3 or Q4 on 2x 3090's, 128gb of DDR5 and what can't fit into RAM running on PCIE NVME's in raid 0. I'm sure it wouldn't be fast but I wonder if it could be usable. | 2025-07-22T10:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m6akeo/would_using_pcie_nvme_in_raid_0_for_swap_work_to/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6akeo | false | null | t3_1m6akeo | /r/LocalLLaMA/comments/1m6akeo/would_using_pcie_nvme_in_raid_0_for_swap_work_to/ | false | false | self | 3 | null |
Thinking about updating Llama 3.3-70B | 20 | I deployed Llama 3.3-70B for my organization quite a long time ago. I am now thinking of updating it to a newer model since there have been quite a few great new LLM releases recently. However, is there any model that actually performs better than Llama 3.3-70B for general purposes (chat, summarization... basically nor... | 2025-07-22T10:29:04 | https://www.reddit.com/r/LocalLLaMA/comments/1m6ahsu/thinking_about_updating_llama_3370b/ | Only_Emergencies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6ahsu | false | null | t3_1m6ahsu | /r/LocalLLaMA/comments/1m6ahsu/thinking_about_updating_llama_3370b/ | false | false | self | 20 | null |
Cloudflare Pay Per Crawl is Going to Decimate Local LLMs . A lot of AI Abilities are going to end up behind this paywall . Am i Overthinking This ? | 0 | 2025-07-22T10:08:47 | https://blog.cloudflare.com/introducing-pay-per-crawl/ | ursustyranotitan | blog.cloudflare.com | 1970-01-01T00:00:00 | 0 | {} | 1m6a5xb | false | null | t3_1m6a5xb | /r/LocalLLaMA/comments/1m6a5xb/cloudflare_pay_per_crawl_is_going_to_decimate/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'tG0tHVEzt3GiPv1qKZJjofKJfzW4kvsoTiVYC0T1HTU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/tG0tHVEzt3GiPv1qKZJjofKJfzW4kvsoTiVYC0T1HTU.png?width=108&crop=smart&auto=webp&s=2f6f1859118f4fa4e637bec1b73bbfb3db84cea0', 'width': 108}, {'height': 113, 'url': 'h... | ||
Truly open LLMs | 5 | 2025-07-22T09:47:43 | https://shchegrikovich.substack.com/p/truly-open-llms | GoodSamaritan333 | shchegrikovich.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1m69th7 | false | null | t3_1m69th7 | /r/LocalLLaMA/comments/1m69th7/truly_open_llms/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'p1atEECBjynpGPVcwQ0lag6GUgGW5QAMA3C3WVe6kJA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/p1atEECBjynpGPVcwQ0lag6GUgGW5QAMA3C3WVe6kJA.jpeg?width=108&crop=smart&auto=webp&s=e50454aae6b161b0cafb9fcfd612d0809f5c73d9', 'width': 108}, {'height': 108, 'url': '... | ||
In Qwen3-235B-A22B-Instruct-2507-UD-Q4 (unsloth) I'm seeing some "but wait" and related ones (like kinda questioning and answering itself), were the model seems to "think" (even when is a non-thinking model and I haven't setup any system prompt), have you seen something similar? | 8 | I'm running it with latest llama-server (llama.cpp) and with the suggested parameters (same as the non-thinking Qwen3 ones)
Didn't see that with the "old" 235b with /no\_think
Is that expected? | 2025-07-22T09:45:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m69sb6/in_qwen3235ba22binstruct2507udq4_unsloth_im/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m69sb6 | false | null | t3_1m69sb6 | /r/LocalLLaMA/comments/1m69sb6/in_qwen3235ba22binstruct2507udq4_unsloth_im/ | false | false | self | 8 | null |
🧠 How are you managing MCP servers across different AI apps (Claude, GPTs, Gemini etc.)? | 2 | I’m experimenting with multiple MCP servers and trying to understand how others are managing them across different AI tools like Claude Desktop, GPTs, Gemini clients, etc.
Do you manually add them in each config file?
Are you using any centralized tool or dashboard to start/stop/edit MCP servers?
Any best practices ... | 2025-07-22T09:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m69qs3/how_are_you_managing_mcp_servers_across_different/ | hihurmuz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m69qs3 | false | null | t3_1m69qs3 | /r/LocalLLaMA/comments/1m69qs3/how_are_you_managing_mcp_servers_across_different/ | false | false | self | 2 | null |
I spent a late night with an AI designing a way to give it a persistent, verifiable memory. I call it the "Genesis Protocol." | 0 | Hey everyone,
I've been deep in a project lately and kept hitting the same wall I'm sure many of you have: LLMs are stateless. You have an amazing, deep conversation, build up a ton of context... and then the session ends and it's all gone. It feels like trying to build a skyscraper on sand.
Last night, I got into a ... | 2025-07-22T09:39:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m69oyb/i_spent_a_late_night_with_an_ai_designing_a_way/ | Icy_Gas8807 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m69oyb | false | null | t3_1m69oyb | /r/LocalLLaMA/comments/1m69oyb/i_spent_a_late_night_with_an_ai_designing_a_way/ | false | false | self | 0 | null |
How to apply a custom dataset | 2 | Yo so am new to this and i want to run a local llm that answers questions using my custom dataset which is basically some financial data .
I created a Q&A dataset and an instruction based data set and my llm refuses to use them
Ive finetuned my llm using TorchTune
And also tried Litgpt
Its a llama 3.2 3B instruct mo... | 2025-07-22T09:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m69m60/how_to_apply_a_custom_dataset/ | oG17DoGe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m69m60 | false | null | t3_1m69m60 | /r/LocalLLaMA/comments/1m69m60/how_to_apply_a_custom_dataset/ | false | false | self | 2 | null |
What do you guys use for Spellcheck? | 1 | Are there any tiny spellcheck models for English which are good? What do you guys use? | 2025-07-22T08:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m68yvl/what_do_you_guys_use_for_spellcheck/ | CaptTechno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m68yvl | false | null | t3_1m68yvl | /r/LocalLLaMA/comments/1m68yvl/what_do_you_guys_use_for_spellcheck/ | false | false | self | 1 | null |
Model to process image-of-text PDFs? | 2 | I'm running a research project analysing hospital incident reports (answering structured questions based on them); we do have permission to use identifiable data but the PDFs I've been sent have been redacted and whichever software they've used has turned a lot of the text into an image. To add excitement, a lot of the... | 2025-07-22T08:43:12 | https://www.reddit.com/r/LocalLLaMA/comments/1m68tse/model_to_process_imageoftext_pdfs/ | thigger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m68tse | false | null | t3_1m68tse | /r/LocalLLaMA/comments/1m68tse/model_to_process_imageoftext_pdfs/ | false | false | self | 2 | null |
Fine-Tuning Multilingual Embedding Models for Industrial RAG System | 6 | Hi everyone,
I'm currently working on a project to fine-tune multilingual embedding models to improve document retrieval within a company's RAG system. The dataset consists of German and English documents related to industrial products, so multilingual support is essential. The dataset has a query-passage format with ... | 2025-07-22T08:15:19 | https://www.reddit.com/r/LocalLLaMA/comments/1m68elw/finetuning_multilingual_embedding_models_for/ | Maddin187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m68elw | false | null | t3_1m68elw | /r/LocalLLaMA/comments/1m68elw/finetuning_multilingual_embedding_models_for/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'b3_nc9eUM96LdvrtRkpsiSfLCjhmgpLRDj18BCf7ynE', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/b3_nc9eUM96LdvrtRkpsiSfLCjhmgpLRDj18BCf7ynE.jpeg?width=108&crop=smart&auto=webp&s=2e4403396b3271cb41e3343fd3daf2f432ae3c37', 'width': 108}, {'height': 111, 'url': '... |
AI should just be open-source | 102 | For once, I’m not going to talk about my benchmark, so to be forefront, there will be no other reference or link to it.
That said, just sharing something that’s been on mind. I’ve been thinking about this topic recently, and while this may be a hot or controversial take, all AI models should be open-source (even from ... | 2025-07-22T07:47:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m67zde/ai_should_just_be_opensource/ | adviceguru25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m67zde | false | null | t3_1m67zde | /r/LocalLLaMA/comments/1m67zde/ai_should_just_be_opensource/ | false | false | self | 102 | null |
Breaking: Small Team Open-Sources AI Agent "Crux" That Achieves Gold-Level Performance on USAMO Benchmarks Using o4-mini – Rivaling OpenAI and Google! | 0 | A small independent team just announced they've developed an AI agent system called "Crux" that matches the USAMO Gold Medal performance levels recently hit by heavyweights like OpenAI and Google. The kicker? They did it using just the o4-mini-high model combined with their custom agent framework – no massive experimen... | 2025-07-22T07:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m67e6a/breaking_small_team_opensources_ai_agent_crux/ | Weekly-Weekend2886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m67e6a | false | null | t3_1m67e6a | /r/LocalLLaMA/comments/1m67e6a/breaking_small_team_opensources_ai_agent_crux/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'shUlXpDwhwN6ps0xRSvyVznIW2aFkicKpizJhu6paek', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/shUlXpDwhwN6ps0xRSvyVznIW2aFkicKpizJhu6paek.png?width=108&crop=smart&auto=webp&s=79a41a04ef2fc56200e789a28b3d529696e58e70', 'width': 108}, {'height': 108, 'url': 'h... | |
I finally got rid of Ollama! | 1 | [removed] | 2025-07-22T07:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1m67d8o/i_finally_got_rid_of_ollama/ | Issac_jo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m67d8o | false | null | t3_1m67d8o | /r/LocalLLaMA/comments/1m67d8o/i_finally_got_rid_of_ollama/ | false | false | self | 1 | null |
Is GPUStack the Cluster Version of Ollama? Comparison + Alternatives | 3 | I've seen a few people asking whether [GPUStack](https://github.com/gpustack/gpustack) is essentially a multi-node version of Ollama. I’ve used both, and here’s a breakdown for anyone curious.
**Short answer:** GPUStack is *not just* Ollama with clustering — it's a more general-purpose, production-ready LLM service pl... | 2025-07-22T07:02:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m67a12/is_gpustack_the_cluster_version_of_ollama/ | Issac_jo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m67a12 | false | null | t3_1m67a12 | /r/LocalLLaMA/comments/1m67a12/is_gpustack_the_cluster_version_of_ollama/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'HAbESiwyYcW_SMJKPNlPcM8amsiDiX8lOYKTLATZxUE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HAbESiwyYcW_SMJKPNlPcM8amsiDiX8lOYKTLATZxUE.png?width=108&crop=smart&auto=webp&s=30f5ae864bd715df1fff1d9fca1b871646878411', 'width': 108}, {'height': 108, 'url': 'h... |
What Speaker Diarization tools should I look into? | 2 | Hi,
I am making a tool that needs to analyze a conversation (non-English) between two people. The conversation is provided to me in audio format. I am currently using OpenAI Whisper to transcribe and feed the transcription to ChatGPT-4o model through the API for analysis.
So far, it's doing a fair job. Sometimes, tho... | 2025-07-22T06:52:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m6741z/what_speaker_diarization_tools_should_i_look_into/ | Chemical_Gas3710 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m6741z | false | null | t3_1m6741z | /r/LocalLLaMA/comments/1m6741z/what_speaker_diarization_tools_should_i_look_into/ | false | false | self | 2 | null |
Is this project feasible for an LLM novice? (Tutor chatbot for primary school student) | 2 | I've recently started using LLMs at work and realized the incredible potential they have—especially if I can run them locally, due to the sensitivity of client data. That got me interested in learning how to run LLMs on my own machine, as well as exploring related areas like fine-tuning, distillation, quantization, etc... | 2025-07-22T06:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1m66zhs/is_this_project_feasible_for_an_llm_novice_tutor/ | Saruphon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m66zhs | false | null | t3_1m66zhs | /r/LocalLLaMA/comments/1m66zhs/is_this_project_feasible_for_an_llm_novice_tutor/ | false | false | self | 2 | null |
stop wasting credits just stack playground and domoai | 0 | so many people waste credits chasing the “perfect” ai tool when they don’t need to. just pick one to build your base [playground](https://www.imagine.art/dashboard/image/tool/text-to-image?utm_source=google&utm_medium=cpc&utm_campaign=G_I_Web_Sales_PCH_T2I_C2_InTool&utm_term=playground%20ai%20image%20generator&utm_camp... | 2025-07-22T06:36:35 | https://www.reddit.com/r/LocalLLaMA/comments/1m66v6q/stop_wasting_credits_just_stack_playground_and/ | Neat_Chapter_9055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m66v6q | false | null | t3_1m66v6q | /r/LocalLLaMA/comments/1m66v6q/stop_wasting_credits_just_stack_playground_and/ | false | false | self | 0 | null |
Private Eval result of Qwen3-235B-A22B-Instruct-2507 | 84 | This is a ***Private*** eval that has been updated for over a year by Zhihu user "toyama nao". So qwen cannot be benchmaxxing on it because it is ***Private*** and the questions are being updated constantly.
The score of this 2507 update is amazing, especially since it's a non-reasoning model that ranks among other r... | 2025-07-22T06:28:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m66qks/private_eval_result_of_qwen3235ba22binstruct2507/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m66qks | false | null | t3_1m66qks | /r/LocalLLaMA/comments/1m66qks/private_eval_result_of_qwen3235ba22binstruct2507/ | false | false | 84 | null | |
Frankenserver for sale at a steep discount. 2x96GB GH200 converted from liquid- to air-cooled. | 37 | 2025-07-22T05:13:44 | GPTrack_ai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m65iga | false | null | t3_1m65iga | /r/LocalLLaMA/comments/1m65iga/frankenserver_for_sale_at_a_steep_discount_2x96gb/ | false | false | default | 37 | {'enabled': True, 'images': [{'id': 'ifz3sua70def1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/ifz3sua70def1.jpeg?width=108&crop=smart&auto=webp&s=8cb9bd9d6aa78574351fa9778ec9d0b129263457', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/ifz3sua70def1.jpeg?width=216&crop=smart&auto=w... | ||
If Qwen3-235B-A22B-2507 can't think, why does it think when the thinking button is on? | 32 | 2025-07-22T04:45:41 | JeffreySons_90 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m650ow | false | null | t3_1m650ow | /r/LocalLLaMA/comments/1m650ow/if_qwen3235ba22b2507_cant_think_why_does_it_think/ | false | false | default | 32 | {'enabled': True, 'images': [{'id': 'lxwf5fgevcef1', 'resolutions': [{'height': 28, 'url': 'https://preview.redd.it/lxwf5fgevcef1.jpeg?width=108&crop=smart&auto=webp&s=93400e194109a5036a8e420d94408434e9409fa7', 'width': 108}, {'height': 56, 'url': 'https://preview.redd.it/lxwf5fgevcef1.jpeg?width=216&crop=smart&auto=we... | ||
MegaTTS 3 Voice Cloning is Here | 371 | MegaTTS 3 voice cloning is here!
For context: a while back, ByteDance released MegaTTS 3 (with exceptional voice cloning capabilities), but for various reasons, they decided not to release the WavVAE encoder necessary for voice cloning to work.
Recently, a WavVAE encoder compatible with MegaTTS 3 was released by ACod... | 2025-07-22T03:53:37 | https://huggingface.co/spaces/mrfakename/MegaTTS3-Voice-Cloning | mrfakename0 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m641zg | false | null | t3_1m641zg | /r/LocalLLaMA/comments/1m641zg/megatts_3_voice_cloning_is_here/ | false | false | default | 371 | {'enabled': False, 'images': [{'id': 'XY_rsQvVYA6z0ednGBoRmZkoCoj4P5xtgjIJR-FIJx0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XY_rsQvVYA6z0ednGBoRmZkoCoj4P5xtgjIJR-FIJx0.png?width=108&crop=smart&auto=webp&s=bf8ad97c6cb72e96abaf27c1cc2565dda7970c68', 'width': 108}, {'height': 116, 'url': 'h... |
New release from the author of StyleTTS 2 + StyleTTS ZS: DMOSpeech 2
They apply RL to F5-TTS to improve WER and stability and allow 2x faster inference | 5 | [https://github.com/yl4579/DMOSpeech2](https://github.com/yl4579/DMOSpeech2) | 2025-07-22T03:48:28 | HOLUPREDICTIONS | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m63yjx | false | null | t3_1m63yjx | /r/LocalLLaMA/comments/1m63yjx/new_release_from_the_author_of_styletts_2/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'IPTUiJWpLEE0rlX4-34DfDB0AloNJ0Yin7u1P-4BGFU', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/y87six62lcef1.jpeg?width=108&crop=smart&auto=webp&s=c3ed76badd7a33113caed93b4ca3afacb4069981', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/y87six62lcef1.jp... | ||
Running LLMs against a sandbox airport to see if they can make the correct decisions in real time | 50 | I created this sandbox to test LLMs and their real-time decision-making processes. Running it has generated some interesting outputs, and I'm curious to see if others find the same. PRs accepted and encouraged! | 2025-07-22T02:53:39 | https://github.com/jjasghar/ai-airport-simulation | jjasghar | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m62vbw | false | null | t3_1m62vbw | /r/LocalLLaMA/comments/1m62vbw/running_llms_against_a_sandbox_airport_to_see_if/ | false | false | default | 50 | {'enabled': False, 'images': [{'id': '2bR3xkxuYGa6hZRiyam5VBhYD6a-2XwJDkt8W8FStoU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2bR3xkxuYGa6hZRiyam5VBhYD6a-2XwJDkt8W8FStoU.png?width=108&crop=smart&auto=webp&s=9e5cedfeb2acc17ed96c354aea24f51d83b107d8', 'width': 108}, {'height': 108, 'url': 'h... |
OmniSVG weights released | 88 | Throwback to 3 months ago: [https://www.reddit.com/r/LocalLLaMA/comments/1jv5uk8/omnisvg\_a\_unified\_scalable\_vector\_graphics/](https://www.reddit.com/r/LocalLLaMA/comments/1jv5uk8/omnisvg_a_unified_scalable_vector_graphics/)
Weights: [https://huggingface.co/OmniSVG/OmniSVG](https://huggingface.co/OmniSVG/OmniSVG)
... | 2025-07-22T02:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m61u94/omnisvg_weights_released/ | DeProgrammer99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m61u94 | false | null | t3_1m61u94 | /r/LocalLLaMA/comments/1m61u94/omnisvg_weights_released/ | false | false | self | 88 | {'enabled': False, 'images': [{'id': 'fCRELyuUm4dkNnS6Jrme0GQxhJDQkRVQSlALVnZcugQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fCRELyuUm4dkNnS6Jrme0GQxhJDQkRVQSlALVnZcugQ.png?width=108&crop=smart&auto=webp&s=4a72266ba63c0bc5f87d6bf4f1a9d21ca8a03fb2', 'width': 108}, {'height': 116, 'url': 'h... |
Built a free Python automation toolkit with 5 scripts. Want feedback — should I release it? | 1 | [removed] | 2025-07-22T01:48:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m61iu0/built_a_free_python_automation_toolkit_with_5/ | Itchy-Warning1127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m61iu0 | false | null | t3_1m61iu0 | /r/LocalLLaMA/comments/1m61iu0/built_a_free_python_automation_toolkit_with_5/ | false | false | self | 1 | null |