| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Test post | 1 | [deleted] | 2025-06-16T00:39:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcfkla | false | null | t3_1lcfkla | /r/LocalLLaMA/comments/1lcfkla/test_post/ | false | false | default | 1 | null | ||
Test post | 1 | [removed] | 2025-06-16T00:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfk5m/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfk5m | false | null | t3_1lcfk5m | /r/LocalLLaMA/comments/1lcfk5m/test_post/ | false | false | self | 1 | null |
Test post | 1 | [removed] | 2025-06-16T00:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfjno/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfjno | false | null | t3_1lcfjno | /r/LocalLLaMA/comments/1lcfjno/test_post/ | false | false | self | 1 | null |
Test post | 1 | [removed] | 2025-06-16T00:36:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfilt/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfilt | false | null | t3_1lcfilt | /r/LocalLLaMA/comments/1lcfilt/test_post/ | false | false | self | 1 | null |
Test post | 1 | [removed] | 2025-06-16T00:35:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfi8u/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfi8u | false | null | t3_1lcfi8u | /r/LocalLLaMA/comments/1lcfi8u/test_post/ | false | false | self | 1 | null |
Test Post | 1 | [removed] | 2025-06-16T00:35:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfhw2/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfhw2 | false | null | t3_1lcfhw2 | /r/LocalLLaMA/comments/1lcfhw2/test_post/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'h... |
Test post | 1 | [removed] | 2025-06-16T00:32:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lcffkg/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcffkg | false | null | t3_1lcffkg | /r/LocalLLaMA/comments/1lcffkg/test_post/ | false | false | self | 1 | null |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:31:31 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcff7s | false | null | t3_1lcff7s | /r/LocalLLaMA/comments/1lcff7s/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [removed] | 2025-06-16T00:30:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfer8/augmentoolkit_30_7_months_of_work_mit_license/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfer8 | false | null | t3_1lcfer8 | /r/LocalLLaMA/comments/1lcfer8/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'h... |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [removed] | 2025-06-16T00:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfe8u/augmentoolkit_30_7_months_of_work_mit_license/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfe8u | false | null | t3_1lcfe8u | /r/LocalLLaMA/comments/1lcfe8u/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'h... |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:29:01 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcfdg2 | false | null | t3_1lcfdg2 | /r/LocalLLaMA/comments/1lcfdg2/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:27:17 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcfc90 | false | null | t3_1lcfc90 | /r/LocalLLaMA/comments/1lcfc90/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [removed] | 2025-06-16T00:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfbiy/augmentoolkit_30_7_months_of_work_mit_license/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfbiy | false | null | t3_1lcfbiy | /r/LocalLLaMA/comments/1lcfbiy/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | self | 1 | null |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:25:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcfata | false | null | t3_1lcfata | /r/LocalLLaMA/comments/1lcfata/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [removed] | 2025-06-16T00:24:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lcf9x5/augmentoolkit_30_7_months_of_work_mit_license/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcf9x5 | false | null | t3_1lcf9x5 | /r/LocalLLaMA/comments/1lcf9x5/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'h... |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:23:28 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcf9hd | false | null | t3_1lcf9hd | /r/LocalLLaMA/comments/1lcf9hd/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [removed] | 2025-06-16T00:22:19 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcf8n4 | false | null | t3_1lcf8n4 | /r/LocalLLaMA/comments/1lcf8n4/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:21:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcf83c | false | null | t3_1lcf83c | /r/LocalLLaMA/comments/1lcf83c/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:20:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcf7jz | false | null | t3_1lcf7jz | /r/LocalLLaMA/comments/1lcf7jz/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:19:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcf6qb | false | null | t3_1lcf6qb | /r/LocalLLaMA/comments/1lcf6qb/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
(LLMs) reflexively project grammatical and semantic structures from their training corpus | 1 | [removed] | 2025-06-16T00:19:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lcf6o1/llms_reflexively_project_grammatical_and_semantic/ | Funny_Ingenuity6982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcf6o1 | false | null | t3_1lcf6o1 | /r/LocalLLaMA/comments/1lcf6o1/llms_reflexively_project_grammatical_and_semantic/ | false | false | self | 1 | null |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [removed] | 2025-06-16T00:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lcf6en/augmentoolkit_30_7_months_of_work_mit_license/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcf6en | false | null | t3_1lcf6en | /r/LocalLLaMA/comments/1lcf6en/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | self | 1 | null |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [removed] | 2025-06-16T00:18:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lcf5nt/augmentoolkit_30_7_months_of_work_mit_license/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcf5nt | false | null | t3_1lcf5nt | /r/LocalLLaMA/comments/1lcf5nt/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | self | 1 | null |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:17:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcf510 | false | null | t3_1lcf510 | /r/LocalLLaMA/comments/1lcf510/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Test post | 1 | [deleted] | 2025-06-16T00:16:18 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcf4dl | false | null | t3_1lcf4dl | /r/LocalLLaMA/comments/1lcf4dl/test_post/ | false | false | default | 1 | null | ||
Test Post | 1 | [deleted] | 2025-06-15T23:58:21 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcer1z | false | null | t3_1lcer1z | /r/LocalLLaMA/comments/1lcer1z/test_post/ | false | false | default | 1 | null | ||
Test post | 1 | [deleted] | 2025-06-15T23:57:11 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lceq68 | false | null | t3_1lceq68 | /r/LocalLLaMA/comments/1lceq68/test_post/ | false | false | default | 1 | null | ||
Best tutorials and resources for learning RAG? | 17 | I want to learn how RAG works and use it on a 4B-7B model. Do you have some beginner-friendly links/videotutorials/tools to help me out? Thanks! | 2025-06-15T23:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lcelbw/best_tutorials_and_resources_for_learning_rag/ | sebastianmicu24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcelbw | false | null | t3_1lcelbw | /r/LocalLLaMA/comments/1lcelbw/best_tutorials_and_resources_for_learning_rag/ | false | false | self | 17 | null |
Mistral Small 3.1 vs Magistral Small - experience? | 1 | [deleted] | 2025-06-15T23:35:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lceb0j | false | null | t3_1lceb0j | /r/LocalLLaMA/comments/1lceb0j/mistral_small_31_vs_magistral_small_experience/ | false | false | default | 1 | null | ||
HP ZBook Ultra 14 Zoll G1a LLM Benchmarks | 1 | [removed] | 2025-06-15T23:27:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lce4vg/hp_zbook_ultra_14_zoll_g1a_llm_benchmarks/ | holistech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lce4vg | false | null | t3_1lce4vg | /r/LocalLLaMA/comments/1lce4vg/hp_zbook_ultra_14_zoll_g1a_llm_benchmarks/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o.png?width=108&crop=smart&auto=webp&s=fa4b7038ed19bf08dd9773cb2f0db1d6352e949e', 'width': 108}, {'height': 216, 'url': '... |
run ollama with local models | 1 | [removed] | 2025-06-15T23:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lcdsde/run_ollama_with_local_models/ | OwnSoup8888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcdsde | false | null | t3_1lcdsde | /r/LocalLLaMA/comments/1lcdsde/run_ollama_with_local_models/ | false | false | self | 1 | null |
app that runs ollama and local LLM | 1 | [removed] | 2025-06-15T23:04:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lcdnrf/app_that_runs_ollama_and_local_llm/ | OwnSoup8888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcdnrf | false | null | t3_1lcdnrf | /r/LocalLLaMA/comments/1lcdnrf/app_that_runs_ollama_and_local_llm/ | false | false | self | 1 | null |
Best LLMs for a MacBook Air M4 16GB | 1 | [removed] | 2025-06-15T22:34:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lcd0hx/best_llms_for_a_macbook_air_m4_16gb/ | Akeel1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcd0hx | false | null | t3_1lcd0hx | /r/LocalLLaMA/comments/1lcd0hx/best_llms_for_a_macbook_air_m4_16gb/ | false | false | self | 1 | null |
Most human like LLM | 1 | [removed] | 2025-06-15T22:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lccj7n/most_human_like_llm/ | Wintlink- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lccj7n | false | null | t3_1lccj7n | /r/LocalLLaMA/comments/1lccj7n/most_human_like_llm/ | false | false | self | 1 | null |
Is gemini 2.5 pro just naturally better than the rest or is it just me? | 70 | I mean, maybe the other models do better in niche benchmarks, and maybe claude is better at coding specifically, but gemini 2.5 pro feels like I'm talking to a smart human being and it can actually build good arguments and have better chat sessions. | 2025-06-15T21:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lcc5vk/is_gemini_25_pro_just_naturally_better_than_the/ | freecodeio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcc5vk | false | null | t3_1lcc5vk | /r/LocalLLaMA/comments/1lcc5vk/is_gemini_25_pro_just_naturally_better_than_the/ | false | false | self | 70 | null |
FULL LEAKED v0 System Prompts and Tools [UPDATED] | 170 | (Latest system prompt: 15/06/2025)<br>I managed to get FULL updated v0 system prompt and internal tools info. Over 900 lines<br>You can it out at: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools | 2025-06-15T21:37:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lcbs7z/full_leaked_v0_system_prompts_and_tools_updated/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcbs7z | false | null | t3_1lcbs7z | /r/LocalLLaMA/comments/1lcbs7z/full_leaked_v0_system_prompts_and_tools_updated/ | false | false | self | 170 | {'enabled': False, 'images': [{'id': 'z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To.png?width=108&crop=smart&auto=webp&s=e0931cf11aed86a4d7ab261ddac8592d0789a7fa', 'width': 108}, {'height': 108, 'url': 'h... |
Mistral-Small useless when running locally | 5 | Mistral-Small from 2024 was one of my favorite local models, but their 2025 versions (running on llama.cpp with chat completion) is driving me crazy. It's not just the repetition problem people report, but in my use cases it behaves totally erratic, bad instruction following and sometimes completely off the rail answer... | 2025-06-15T21:09:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lcb4e6/mistralsmall_useless_when_running_locally/ | mnze_brngo_7325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcb4e6 | false | null | t3_1lcb4e6 | /r/LocalLLaMA/comments/1lcb4e6/mistralsmall_useless_when_running_locally/ | false | false | self | 5 | null |
changeish - Manage your code's changelog using Ollama | 1 | [removed] | 2025-06-15T19:51:58 | https://github.com/itlackey/changeish | Kitchen_Fix1464 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lc9bp3 | false | null | t3_1lc9bp3 | /r/LocalLLaMA/comments/1lc9bp3/changeish_manage_your_codes_changelog_using_ollama/ | false | false | default | 1 | null |
Good models for a 16GB M4 Mac Mini? | 16 | Just bought a 16GB M4 Mac Mini and put LM Studio into it. Right now I'm running the Deepseek R1 Qwen 8B model. It's ok and generates text pretty quickly but sometimes doesn't quite give the answer I'm looking for.<br>What other models do you recommend? I don't code, mostly just use these things as a toy or to get quick ... | 2025-06-15T19:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lc9alf/good_models_for_a_16gb_m4_mac_mini/ | puukkeriro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc9alf | false | null | t3_1lc9alf | /r/LocalLLaMA/comments/1lc9alf/good_models_for_a_16gb_m4_mac_mini/ | false | false | self | 16 | null |
What can I do with my laptop? | 1 | [removed] | 2025-06-15T19:30:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lc8tcf/what_can_i_do_with_my_laptop/ | djinny31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc8tcf | false | null | t3_1lc8tcf | /r/LocalLLaMA/comments/1lc8tcf/what_can_i_do_with_my_laptop/ | false | false | self | 1 | null |
What's a model (preferably uncensored) that my computer would handle but with difficulty? | 1 | [removed] | 2025-06-15T19:20:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lc8l69/whats_a_model_preferably_uncensored_that_my/ | Rahodees | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc8l69 | false | null | t3_1lc8l69 | /r/LocalLLaMA/comments/1lc8l69/whats_a_model_preferably_uncensored_that_my/ | false | false | self | 1 | null |
So how are people actually building their agentic RAG pipeline? | 23 | I have a rag app, with a few sources that I can manually chose from to retrieve context. how does one prompt the LLM to get it to choose the right source? I just read on here people have success with the new mistral, but what do these prompts to the agent LLM look like? What have I missed after all these months that ev... | 2025-06-15T19:11:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lc8cse/so_how_are_people_actually_building_their_agentic/ | walagoth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc8cse | false | null | t3_1lc8cse | /r/LocalLLaMA/comments/1lc8cse/so_how_are_people_actually_building_their_agentic/ | false | false | self | 23 | null |
Remastering public domain Blues and Jazz | 1 | [removed] | 2025-06-15T18:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lc7zvn/remastering_public_domain_blues_and_jazz/ | autonoma_2042 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc7zvn | false | null | t3_1lc7zvn | /r/LocalLLaMA/comments/1lc7zvn/remastering_public_domain_blues_and_jazz/ | false | false | self | 1 | null |
Bank transactions extractions, tech stack help needed. | 0 | Hi, I am planning to start a project to extract transactions from bank PDFs. Let say I have 50 different bank statements and they all have different templates some have tables and some donot. Different banks uses different headers for transactions like some credit/deposit..., some banks daily balance etc.<br>So input is ... | 2025-06-15T18:54:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lc7ye0/bank_transactions_extractions_tech_stack_help/ | nimmalachaitanya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc7ye0 | false | null | t3_1lc7ye0 | /r/LocalLLaMA/comments/1lc7ye0/bank_transactions_extractions_tech_stack_help/ | false | false | self | 0 | null |
Can someone explain the current status socio-politics of GPU? | 0 | Hai i want to preapre an article on ai race, gpu and economical war between countries. I was not following the news past 8 months. What is the current status of it?<br>I would like to hear, Nvidias monopoly, CUDA, massive chip shortage, role of TSMC, what biden did to cut nvidias exporting to china, what is Trumps tariff ... | 2025-06-15T18:27:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lc7arp/can_someone_explain_the_current_status/ | Trysem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc7arp | false | null | t3_1lc7arp | /r/LocalLLaMA/comments/1lc7arp/can_someone_explain_the_current_status/ | false | false | self | 0 | null |
I wrapped Apple’s new on-device models in an OpenAI-compatible API | 311 | I spent the weekend vibe-coding in Cursor and ended up with a small Swift app that turns the new macOS 26 on-device Apple Intelligence models into a local server you can hit with standard OpenAI `/v1/chat/completions` calls. Point any client you like at `http://127.0.0.1:11535`.<br>* Nothing leaves your Mac<br>* Works with ... | 2025-06-15T18:06:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lc6tii/i_wrapped_apples_new_ondevice_models_in_an/ | FixedPt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc6tii | false | null | t3_1lc6tii | /r/LocalLLaMA/comments/1lc6tii/i_wrapped_apples_new_ondevice_models_in_an/ | false | false | self | 311 | null |
I wrapped Apple’s new on-device models in an OpenAI-compatible API | 1 | [removed] | 2025-06-15T18:04:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lc6r5y/i_wrapped_apples_new_ondevice_models_in_an/ | ChanningDai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc6r5y | false | null | t3_1lc6r5y | /r/LocalLLaMA/comments/1lc6r5y/i_wrapped_apples_new_ondevice_models_in_an/ | false | false | self | 1 | null |
Gemma3 12b or 27b for writing assistance/brainstorming? | 6 | A disclaimer before any reddit writers shit on me for using AI to write.<br>I don't blindly copy and paste. I don't have it generate stories. All the ideas come from ME. I only use AI to bounce ideas off it. And to give advice on writing. And have it help me streamlie the stories. It's like having a more experienced wri... | 2025-06-15T17:54:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lc6idx/gemma3_12b_or_27b_for_writing/ | Lord_Thunderballs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc6idx | false | null | t3_1lc6idx | /r/LocalLLaMA/comments/1lc6idx/gemma3_12b_or_27b_for_writing/ | false | false | self | 6 | null |
Is rocm better supported on arch through a AUR package? | 5 | Or is the best way to use rocm the docker image provided here: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/pytorch-install.html#using-wheels-package<br>For a friend of mine | 2025-06-15T17:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lc5w5r/is_rocm_better_supported_on_arch_through_a_aur/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc5w5r | false | null | t3_1lc5w5r | /r/LocalLLaMA/comments/1lc5w5r/is_rocm_better_supported_on_arch_through_a_aur/ | false | false | self | 5 | null |
Live Speech To Text in Arabic | 1 | I was building an app for the Holy Quran which includes a feature where you can recite in Arabic and a highlighter will follow what you spoke. I want to later make this scalable to error detection and more similar to tarteel AI. But I can't seem to find a good model for Arabic to do the Audio to text part adequately in... | 2025-06-15T16:32:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lc4j2w/live_speech_to_text_in_arabic/ | AbdullahKhanSherwani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc4j2w | false | null | t3_1lc4j2w | /r/LocalLLaMA/comments/1lc4j2w/live_speech_to_text_in_arabic/ | false | false | self | 1 | null |
Can someone with a Chinese ID get me an API key for Volcengine? | 0 | I am trying to run the new Seedance models via API and saw that they were made available on Volcengine (https://www.volcengine.com/docs/82379/1520757).<br>However, in order to get an API key, you need to have a Chinese ID, which I do not have. I wonder if anyone can help on that issue. | 2025-06-15T16:30:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lc4gtr/can_someone_with_a_chinese_id_get_me_an_api_key/ | yachty66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc4gtr | false | null | t3_1lc4gtr | /r/LocalLLaMA/comments/1lc4gtr/can_someone_with_a_chinese_id_get_me_an_api_key/ | false | false | self | 0 | null |
Experimental ChatGPT like Web UI for Gemini API (open source) | 1 | [removed] | 2025-06-15T16:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lc4582/experimental_chatgpt_like_web_ui_for_gemini_api/ | W4D-cmd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc4582 | false | null | t3_1lc4582 | /r/LocalLLaMA/comments/1lc4582/experimental_chatgpt_like_web_ui_for_gemini_api/ | false | false | 1 | null | |
What I Learned from Breaking Down How Small AI Chatbots Actually Work (Tokenization to Testing) | 1 | [removed] | 2025-06-15T15:57:33 | LokeshKeswani | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lc3oid | false | null | t3_1lc3oid | /r/LocalLLaMA/comments/1lc3oid/what_i_learned_from_breaking_down_how_small_ai/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'vlhs3hyb547f1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?width=108&crop=smart&auto=webp&s=fd1e8598884dac49707761c86d9ed5ca8e288e02', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?width=216&crop=smart&auto=web... | |
Local LLM Memorization – A fully local memory system for long-term recall and visualization | 1 | [removed] | 2025-06-15T15:53:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lc3lfs/local_llm_memorization_a_fully_local_memory/ | Vicouille6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc3lfs | false | null | t3_1lc3lfs | /r/LocalLLaMA/comments/1lc3lfs/local_llm_memorization_a_fully_local_memory/ | false | false | self | 1 | null |
PSA: 2 * 3090 with Nvlink can cause depression* | 201 | Hello. I was enjoying my 3090 so much. So I thought why not get a second? My use case is local coding models, and Gemma 3 mostly.<br>It's been nothing short of a nightmare to get working. Just about everything that could go wrong, has gone wrong.<br>* Mining rig frame took a day to put together<br>* Power supply so huge it... | 2025-06-15T15:15:53 | cuckfoders | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lc2pv9 | false | null | t3_1lc2pv9 | /r/LocalLLaMA/comments/1lc2pv9/psa_2_3090_with_nvlink_can_cause_depression/ | false | false | default | 201 | {'enabled': True, 'images': [{'id': 'sy4x3c4ft37f1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?width=108&crop=smart&auto=webp&s=97f6c82b2993197999ec385b1aa95cb80e8f221d', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?width=216&crop=smart&auto=... |
Is it appropriate to do creative writing with RAG? | 1 | [removed] | 2025-06-15T15:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lc2kr9/is_it_appropriate_to_do_creative_writing_with_rag/ | ArranEye | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc2kr9 | false | null | t3_1lc2kr9 | /r/LocalLLaMA/comments/1lc2kr9/is_it_appropriate_to_do_creative_writing_with_rag/ | false | false | self | 1 | null |
Recreating old cartoons | 8 | I don’t actually have a solution for this. I’m curious if anyone else has found one.<br>At some point in the future, I imagine the new video/image models could take old cartoons (or stop motion Gumby) that are very low resolution and very low frame rate and build them so that they are both high frame as well as high reso... | 2025-06-15T14:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lc295w/recreating_old_cartoons/ | olympics2022wins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc295w | false | null | t3_1lc295w | /r/LocalLLaMA/comments/1lc295w/recreating_old_cartoons/ | false | false | self | 8 | null |
Is it appropriate to do creative writing with RAG? | 1 | [removed] | 2025-06-15T14:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lc1gcs/is_it_appropriate_to_do_creative_writing_with_rag/ | ArranEye | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc1gcs | false | null | t3_1lc1gcs | /r/LocalLLaMA/comments/1lc1gcs/is_it_appropriate_to_do_creative_writing_with_rag/ | false | false | self | 1 | null |
LLM chess ELO? | 0 | I was wondering how good LLMs are at chess, in regards to ELO - say Lichess for discussion purposes -, and looked online, and the best I could find was [this](https://dubesor.de/chess/chess-leaderboard), which seems at least not uptodate at best, and not reliable more realistically. Any clue anyone if there's a more ac... | 2025-06-15T13:45:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lc0oyf/llm_chess_elo/ | BaconSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc0oyf | false | null | t3_1lc0oyf | /r/LocalLLaMA/comments/1lc0oyf/llm_chess_elo/ | false | false | self | 0 | null |
llama.cpp: llama-server has multimodal audio input, so I tried it out. | 1 | [removed] | 2025-06-15T13:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lc0n1x/llamacpp_llamaserver_has_multimodal_audio_input/ | DesignToWin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc0n1x | false | null | t3_1lc0n1x | /r/LocalLLaMA/comments/1lc0n1x/llamacpp_llamaserver_has_multimodal_audio_input/ | false | false | 1 | null | |
Best practices - RAG, content generation | 1 | [removed] | 2025-06-15T13:25:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lc0a5t/best_practices_rag_content_generation/ | allan_watts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc0a5t | false | null | t3_1lc0a5t | /r/LocalLLaMA/comments/1lc0a5t/best_practices_rag_content_generation/ | false | false | self | 1 | null |
Best practices - RAG, content generation | 1 | [removed] | 2025-06-15T13:23:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lc08at/best_practices_rag_content_generation/ | Odd-Gene7766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc08at | false | null | t3_1lc08at | /r/LocalLLaMA/comments/1lc08at/best_practices_rag_content_generation/ | false | false | self | 1 | null |
What am I doing wrong? | 0 | I'm new to local LLM and just downloaded LM Studio and a few models to test out. deepseek/deepseek-r1-0528-qwen3-8b being one of them.<br>I asked it to write a simple function to sum a list of ints.<br>Then I asked it to write a class to send emails.<br>Watching it's thought process it seems to get lost and reverted back to... | 2025-06-15T13:22:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lc07xn/what_am_i_doing_wrong/ | jcam12312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lc07xn | false | null | t3_1lc07xn | /r/LocalLLaMA/comments/1lc07xn/what_am_i_doing_wrong/ | false | false | self | 0 | null |
[Follow-Up] Building Delta Wasn’t a Joke — This Is the System Behind It. Prove me wrong.(Plug-in free) | 0 | Hours ago I posted Delta — a modular, prompt-only semantic agent built without memory, plugins, or backend tools.<br>Many thought it was just chatbot roleplay with a fancy wrapper.<br>But Delta wasn’t built in isolation. It runs on something deeper:<br>Language Construct Modeling (LCM) — a semantic architecture I’ve been devel... | 2025-06-15T12:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lbzgp9/followup_building_delta_wasnt_a_joke_this_is_the/ | Ok_Sympathy_4979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbzgp9 | false | null | t3_1lbzgp9 | /r/LocalLLaMA/comments/1lbzgp9/followup_building_delta_wasnt_a_joke_this_is_the/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI.png?width=108&crop=smart&auto=webp&s=3bfce6c097d7a95a62bb0084af4f38fca27618f9', 'width': 108}, {'height': 108, 'url': 'h... |
Can I talk to more than one character via “LLM”? I have tried many online models but I can only talk to one character. | 1 | [removed] | 2025-06-15T12:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lbza9z/can_i_talk_to_more_than_one_character_via_llm_i/ | foskarnet0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbza9z | false | null | t3_1lbza9z | /r/LocalLLaMA/comments/1lbza9z/can_i_talk_to_more_than_one_character_via_llm_i/ | false | false | self | 1 | null |
Can I talk to more than one character via “LLM”? I have tried many online models but I can only talk to one character. | 1 | [removed] | 2025-06-15T12:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lbz7n3/can_i_talk_to_more_than_one_character_via_llm_i/ | foskarnet0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbz7n3 | false | null | t3_1lbz7n3 | /r/LocalLLaMA/comments/1lbz7n3/can_i_talk_to_more_than_one_character_via_llm_i/ | false | false | self | 1 | null |
7600 XT and 3050 Ti Mobile comparison | 1 | [removed] | 2025-06-15T12:30:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lbz7a0/7600_xt_and_3050_ti_mobile_comparison/ | Sherstnyov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbz7a0 | false | null | t3_1lbz7a0 | /r/LocalLLaMA/comments/1lbz7a0/7600_xt_and_3050_ti_mobile_comparison/ | false | false | self | 1 | null |
New OpenAI local model Leak straight from chatgpt | 0 | So apparently ChatGPT leaked the name of the new local model that OpenAI will work on
When asked about more details it would just search the web and deny its existence, but after I forced it to tell me more it just stated that
Apparently it's going to be a "GPT-4o-class" model; it's going to be multimodal and com... | 2025-06-15T12:18:30 | https://www.reddit.com/gallery/1lbyzcm | Skystunt | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lbyzcm | false | null | t3_1lbyzcm | /r/LocalLLaMA/comments/1lbyzcm/new_openai_local_model_leak_straight_from_chatgpt/ | true | false | spoiler | 0 | {'enabled': True, 'images': [{'id': 'I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=108&crop=smart&auto=webp&s=c836dc1128f64a01866f37f4ac08ccfa21590dfa', 'width': 108}, {'height': 124, 'url': 'ht...
What's the best OcrOptions to choose for OCR in Dockling? | 1 | I'm struggling to do proper OCR. I have a PDF that contains both images (with text inside) and plain text. I tried to convert the PDF to PNG and digest it, but with this approach, it becomes even worse sometimes.
Usually, I experiment with TesseractCliOcrOptions. I have a PDF with text and the logo of the company at ... | 2025-06-15T12:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lbyv2s/whats_the_best_ocroptions_to_choose_for_ocr_in/ | depava | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbyv2s | false | null | t3_1lbyv2s | /r/LocalLLaMA/comments/1lbyv2s/whats_the_best_ocroptions_to_choose_for_ocr_in/ | false | false | self | 1 | null |
Creative writing and roleplay content generation. Any experience with good settings and prompting out there? | 0 | I have a model that is llama based and fine tuned for RP. It's uh... a little wild let's say. If I just say hello it starts writing business letters or describing random movie scenes. Kind of. It's pretty scattered.
I've played somewhat with settings but I'm trying to stomp some of this out by setting up a model level... | 2025-06-15T12:10:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lbytvw/creative_writing_and_roleplay_content_generation/ | Agitated_Budgets | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbytvw | false | null | t3_1lbytvw | /r/LocalLLaMA/comments/1lbytvw/creative_writing_and_roleplay_content_generation/ | false | false | self | 0 | null |
🚀 This AI Agent Uses Zero Memory, Zero Tools — Just Language. Meet Delta. | 0 | Hi I’m Vincent Chong. It’s me again —
the guy who kept spamming LCM and SLS all over this place a few months ago. 😅
I’ve been working quietly on something, and it’s finally ready:
Delta — a fully modular, prompt-only semantic agent built entirely with language.
No memory. No plugins. No backend tools. Just structured... | 2025-06-15T11:34:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lby87t/this_ai_agent_uses_zero_memory_zero_tools_just/ | Ok_Sympathy_4979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lby87t | false | null | t3_1lby87t | /r/LocalLLaMA/comments/1lby87t/this_ai_agent_uses_zero_memory_zero_tools_just/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8.png?width=108&crop=smart&auto=webp&s=0d79031aa7c97f9a1b21a13b5317213b7ffe34ab', 'width': 108}, {'height': 108, 'url': 'h... |
Cursor and Bolt free alternative in VSCode | 1 | I have recently bought a new PC with an RTX 5060 Ti 16GB and I want something like Cursor and Bolt but in VSCode. I have already installed Continue.dev as a replacement for Copilot and installed DeepSeek R1 8B from Ollama, but when I tried it with Cline or Roo Code (something I tried with DeepSeek) it doesn't work sometimes ... | 2025-06-15T10:41:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lbxdwm/cursor_and_bolt_free_alternative_in_vscode/ | McMezoplayz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbxdwm | false | null | t3_1lbxdwm | /r/LocalLLaMA/comments/1lbxdwm/cursor_and_bolt_free_alternative_in_vscode/ | false | false | self | 1 | null |
Do multimodal LLMs (like Chatgpt, Gemini, Claude) use OCR under the hood to read text in images? | 39 | SOTA multimodal LLMs can read text from images (e.g. signs, screenshots, book pages) really well — almost better than OCR.
Are they actually using an internal OCR system (like Tesseract or Azure Vision), or do they learn to "read" purely through pretraining (like contrastive learning on image-text pairs)? | 2025-06-15T10:11:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lbwxj8/do_multimodal_llms_like_chatgpt_gemini_claude_use/ | Comprehensive-Yam291 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbwxj8 | false | null | t3_1lbwxj8 | /r/LocalLLaMA/comments/1lbwxj8/do_multimodal_llms_like_chatgpt_gemini_claude_use/ | false | false | self | 39 | null |
Optimizing llama.cpp flags for max tokens/sec ; any auto-tuning tools out there? | 1 | [removed] | 2025-06-15T10:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lbwtdx/optimizing_llamacpp_flags_for_max_tokenssec_any/ | Expert-Inspector-128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbwtdx | false | null | t3_1lbwtdx | /r/LocalLLaMA/comments/1lbwtdx/optimizing_llamacpp_flags_for_max_tokenssec_any/ | false | false | 1 | null | |
Best practices - RAG, content generation | 1 | [removed] | 2025-06-15T10:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lbwspl/best_practices_rag_content_generation/ | Odd-Gene7766 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbwspl | false | null | t3_1lbwspl | /r/LocalLLaMA/comments/1lbwspl/best_practices_rag_content_generation/ | false | false | self | 1 | null |
New Open Source Python Pack to Optimize llama.cpp flags: llama-optimus | 1 | [removed] | 2025-06-15T09:47:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lbwkcf/new_open_source_python_pack_to_optimize_llamacpp/ | Expert-Inspector-128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbwkcf | false | null | t3_1lbwkcf | /r/LocalLLaMA/comments/1lbwkcf/new_open_source_python_pack_to_optimize_llamacpp/ | false | false | self | 1 | null |
Optimizing llama.cpp flags for max tokens/sec : anyone doing this automatically? | 1 | [removed] | 2025-06-15T09:15:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lbw3y1/optimizing_llamacpp_flags_for_max_tokenssec/ | Expert-Inspector-128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbw3y1 | false | null | t3_1lbw3y1 | /r/LocalLLaMA/comments/1lbw3y1/optimizing_llamacpp_flags_for_max_tokenssec/ | false | false | self | 1 | null |
rednote-hilab dots.llm1 support has been merged into llama.cpp | 85 | 2025-06-15T08:18:35 | https://github.com/ggml-org/llama.cpp/pull/14118 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lbva5o | false | null | t3_1lbva5o | /r/LocalLLaMA/comments/1lbva5o/rednotehilab_dotsllm1_support_has_been_merged/ | false | false | 85 | {'enabled': False, 'images': [{'id': 'RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE.png?width=108&crop=smart&auto=webp&s=9766f8865dbd84288d2ddb6939f52a48e3be0623', 'width': 108}, {'height': 108, 'url': 'h... | ||
Fine-tuning LLama 4 Scout Instruct on RTX 5090 Out of VRAM Memory Error | 1 | [removed] | 2025-06-15T08:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lbv6bj/finetuning_llama_4_scout_instruct_on_rtx_5090_out/ | AerieSure9064 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbv6bj | false | null | t3_1lbv6bj | /r/LocalLLaMA/comments/1lbv6bj/finetuning_llama_4_scout_instruct_on_rtx_5090_out/ | false | false | self | 1 | null |
Testing Local LLMs on a Simple Web App Task (Performance + Output Comparison) | 7 | **Hey everyone,**
I recently did a simple test to compare how a few local LLMs (plus Claude Sonnet 3.5 for reference) could perform on a basic front-end web development prompt. The goal was to generate code for a **real estate portfolio sharing website**, including a **listing entry form** and **listing display**, all... | 2025-06-15T08:05:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lbv3f1/testing_local_llms_on_a_simple_web_app_task/ | SoAp9035 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbv3f1 | false | null | t3_1lbv3f1 | /r/LocalLLaMA/comments/1lbv3f1/testing_local_llms_on_a_simple_web_app_task/ | false | false | self | 7 | null |
Dual 3060RTX's running vLLM / Model suggestions? | 8 | Hello,
I am pretty new to the fray here and I have enjoyed the last couple of days learning a bit about setting things up.
I was able to score a pair of 3060RTX's from marketplace for $350.
Currently I have vLLM running with dwetzel/Mistral-Small-24B-Instruct-2501-GPTQ-INT4, per a thread I found here.
Thin... | 2025-06-15T07:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lbu89a/dual_3060rtxs_running_vllm_model_suggestions/ | phin586 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbu89a | false | null | t3_1lbu89a | /r/LocalLLaMA/comments/1lbu89a/dual_3060rtxs_running_vllm_model_suggestions/ | false | false | self | 8 | null |
Is there a need for ReAct? | 6 | For everyone's use case, is the ReAct paradigm useful or does it just slow down your agentic flow? | 2025-06-15T07:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lbu6u2/is_there_a_need_for_react/ | slashrshot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbu6u2 | false | null | t3_1lbu6u2 | /r/LocalLLaMA/comments/1lbu6u2/is_there_a_need_for_react/ | false | false | self | 6 | null |
How come Models like Qwen3 respond gibberish in Chinese ? | 0 | [https://model.lmstudio.ai/download/Qwen/Qwen3-Embedding-8B-GGUF](https://model.lmstudio.ai/download/Qwen/Qwen3-Embedding-8B-GGUF)
Is there something that I'm missing? I'm using LM Studio 0.3.16 with updated Vulkan and CPU drivers; it's also broken in Koboldcpp
https://preview.redd.it/q13vxkyhd17f1.png?width=814&for... | 2025-06-15T06:42:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lbtub3/how_come_models_like_qwen3_respond_gibberish_in/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbtub3 | false | null | t3_1lbtub3 | /r/LocalLLaMA/comments/1lbtub3/how_come_models_like_qwen3_respond_gibberish_in/ | false | false | 0 | null | |
Best model for dual or quad 3090? | 2 | I've seen a lot of these builds, they are very cool but what are you running on them? | 2025-06-15T05:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lbsn7g/best_model_for_dual_or_quad_3090/ | humanoid64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbsn7g | false | null | t3_1lbsn7g | /r/LocalLLaMA/comments/1lbsn7g/best_model_for_dual_or_quad_3090/ | false | false | self | 2 | null |
Tabulens: A Vision-LLM Powered PDF Table Extractor | 20 | Hey everyone,
For one of my projects, I needed a tool to pull tables out of PDFs as CSVs (especially ones with nested or hierarchical headers). However, most existing libraries I found couldn't handle those cases well. So, I built this tool (tabulens), which leverages vision-LLMs to convert PDF tables into pandas Data... | 2025-06-15T05:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lbsma4/tabulens_a_visionllm_powered_pdf_table_extractor/ | PleasantInspection12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbsma4 | false | null | t3_1lbsma4 | /r/LocalLLaMA/comments/1lbsma4/tabulens_a_visionllm_powered_pdf_table_extractor/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': '30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o.png?width=108&crop=smart&auto=webp&s=94e88f263aba8371e04f40a8ef95c227b5a0bb7b', 'width': 108}, {'height': 108, 'url': 'h... |
What's the best model to run on the RTX Pro 6000 (96GB) right now? | 1 | [removed] | 2025-06-15T05:20:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lbskvh/whats_the_best_model_to_run_on_the_rtx_pro_6000/ | humanoid64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbskvh | false | null | t3_1lbskvh | /r/LocalLLaMA/comments/1lbskvh/whats_the_best_model_to_run_on_the_rtx_pro_6000/ | false | false | self | 1 | null |
Facing issues while running AiBharat/IndicF5 | 1 | [removed] | 2025-06-15T04:46:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lbs0zs/facing_issues_while_running_aibharatindicf5/ | aivoicebot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbs0zs | false | null | t3_1lbs0zs | /r/LocalLLaMA/comments/1lbs0zs/facing_issues_while_running_aibharatindicf5/ | false | false | self | 1 | null |
Defining What it means to be Conscious | 0 | Consciousness does not emerge from computational complexity or intelligence alone, but from a developmental trajectory shaped by self-organized internalization and autonomous modification. While current machine learning models—particularly large-scale neural networks—already exhibit impressive emergent behaviors, such... | 2025-06-15T04:43:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lbrzo0/defining_what_it_means_to_be_conscious/ | bralynn2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbrzo0 | false | null | t3_1lbrzo0 | /r/LocalLLaMA/comments/1lbrzo0/defining_what_it_means_to_be_conscious/ | false | false | self | 0 | null |
Noob Question - Suggest the best way to use Natural language for querying Database, preferably using Local LLM | 0 | I want to ask for the best way to query a database using natural language; please suggest libraries and LLM models which can do Text-to-SQL or AI-SQL.
Please only suggest techniques which can really be full-on self-hosted, as the schema also can't be transferred/shared to web services like OpenAI, Clau... | 2025-06-15T04:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lbrv0v/noob_question_suggest_the_best_way_to_use_natural/ | finah1995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbrv0v | false | null | t3_1lbrv0v | /r/LocalLLaMA/comments/1lbrv0v/noob_question_suggest_the_best_way_to_use_natural/ | false | false | self | 0 | null |
Jan-nano, a 4B model that can outperform 671B on MCP | 1,129 | Hi everyone, it's me from Menlo Research again,
Today, I’d like to introduce our latest model: **Jan-nano** - a model fine-tuned with DAPO on Qwen3-4B. Jan-nano comes with some unique capabilities:
* It can perform deep research (with the right prompting)
* It picks up relevant information effectively from search res... | 2025-06-15T04:24:03 | https://v.redd.it/p52b768jp07f1 | Kooky-Somewhere-2883 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lbrnod | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/p52b768jp07f1/DASHPlaylist.mpd?a=1752553457%2CMDYwZjJjNThkY2E3YjI3NzE4ZmNhOTdhM2EzYzcwNGViNWRlMjdlNzc5NzBhOWE5MDViNDU0ZWU4MTM4Njk2ZQ%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/p52b768jp07f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lbrnod | /r/LocalLLaMA/comments/1lbrnod/jannano_a_4b_model_that_can_outperform_671b_on_mcp/ | false | false | 1,129 | {'enabled': False, 'images': [{'id': 'cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC.png?width=108&crop=smart&format=pjpg&auto=webp&s=01b0f3b472c9e1df1ac5ab521d0d2aaae1274... | |
New Model on LMarena? | 0 | "stephen-vision" model spotted in LMarena. It disappeared from the UI before I could take a screenshot. Is it new though? | 2025-06-15T04:09:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lbreow/new_model_on_lmarena/ | Strategosky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbreow | false | null | t3_1lbreow | /r/LocalLLaMA/comments/1lbreow/new_model_on_lmarena/ | false | false | self | 0 | null |
Can I put two unit of rtx 3060 12gb in ASRock B550M Pro4?? | 6 | It has one PCIe 4.0 and one PCIe 3.0. I want to do some ML stuff. Will it degrade performance?
How much performance degradation are we looking here? If I can somehow pull it off I will have one more device with 'it works fine for me'.
And what is the recommended power supply? I have a CV650 here. | 2025-06-15T03:57:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lbr6w8/can_i_put_two_unit_of_rtx_3060_12gb_in_asrock/ | maifee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbr6w8 | false | null | t3_1lbr6w8 | /r/LocalLLaMA/comments/1lbr6w8/can_i_put_two_unit_of_rtx_3060_12gb_in_asrock/ | false | false | self | 6 | null |
Mistral Small 3.1 is incredible for agentic use cases | 187 | I recently tried switching from Gemini 2.5 to Mistral Small 3.1 for most components of my agentic workflow and barely saw any drop off in performance. It’s absolutely mind blowing how good 3.1 is given how few parameters it has. Extremely accurate and intelligent tool calling and structured output capabilities, and equ... | 2025-06-15T03:40:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lbqwfs/mistral_small_31_is_incredible_for_agentic_use/ | ButterscotchVast2948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbqwfs | false | null | t3_1lbqwfs | /r/LocalLLaMA/comments/1lbqwfs/mistral_small_31_is_incredible_for_agentic_use/ | false | false | self | 187 | null |
Reverse Internship | 0 | I need to be near the action at all costs.
Anyone open to an arrangement where I pay them to allow me to work/help on real practical business problems related to LLM inference/training?
I have a strong electrical/software engineering background. Here are some of my interests:
* evals, evals, evals!
* context win... | 2025-06-15T03:36:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lbqtf6/reverse_internship/ | tmplogic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbqtf6 | false | null | t3_1lbqtf6 | /r/LocalLLaMA/comments/1lbqtf6/reverse_internship/ | false | false | self | 0 | null |
When is a home server preferable to VPS? | 1 | I see a lot of people talk about buying some expensive hardware or running an old desktop instead of using a VPS. I can see why a NAS is good at home, but when do you see a good use for a server at home besides maybe streaming games with moonlight? | 2025-06-15T03:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lbqm04/when_is_a_home_server_preferable_to_vps/ | InsideYork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbqm04 | false | null | t3_1lbqm04 | /r/LocalLLaMA/comments/1lbqm04/when_is_a_home_server_preferable_to_vps/ | false | false | self | 1 | null |
What's the "best" single way to keep up with notable model releases for hosting locally? | 1 | [removed] | 2025-06-15T03:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lbqjk8/whats_the_best_single_way_to_keep_up_with_notable/ | CapnFlisto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbqjk8 | false | null | t3_1lbqjk8 | /r/LocalLLaMA/comments/1lbqjk8/whats_the_best_single_way_to_keep_up_with_notable/ | false | false | self | 1 | null |
Make Local Models watch your screen! Observer Tutorial | 55 | Hey guys!
This is a tutorial on how to self host Observer on your home lab!
See more info here:
[https://github.com/Roy3838/Observer](https://github.com/Roy3838/Observer) | 2025-06-15T01:46:50 | https://v.redd.it/toz9tr0ixz6f1 | Roy3838 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lbotj8 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/toz9tr0ixz6f1/DASHPlaylist.mpd?a=1752544023%2CMDM0OTkzZDhkNzhlYzJiNGQxNzY4NmYwNzI2YjViZDE5NTEyZjRhYWQ3Y2VjZGM2YzQyNWU0NGUxZjE4M2E0Yg%3D%3D&v=1&f=sd', 'duration': 56, 'fallback_url': 'https://v.redd.it/toz9tr0ixz6f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lbotj8 | /r/LocalLLaMA/comments/1lbotj8/make_local_models_watch_your_screen_observer/ | false | false | 55 | {'enabled': False, 'images': [{'id': 'OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6.png?width=108&crop=smart&format=pjpg&auto=webp&s=3c0e9854bfc14573c4cbded99a19ac5649065... | |
How to convert PDFs with complex layouts to accurate HTML/LaTex/Markdown (selectable)? | 1 | I'm trying to convert legal documents, specifically US laws into selectable format where I can later add hyperlinks to the content.
My current approach is to pass these documents through GPT-4, but it's not that accurate (layout) and takes a lot of time. Traditional OCR tools work great for text extraction but I wan... | 2025-06-15T01:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lbodzc/how_to_convert_pdfs_with_complex_layouts_to/ | PomegranateThat3605 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbodzc | false | null | t3_1lbodzc | /r/LocalLLaMA/comments/1lbodzc/how_to_convert_pdfs_with_complex_layouts_to/ | false | false | 1 | null |
LLM training on RTX 5090 | 354 | Tech Stack
Hardware & OS: NVIDIA RTX 5090 (32GB VRAM, Blackwell architecture), Ubuntu 22.04 LTS, CUDA 12.8
Software: Python 3.12, PyTorch 2.8.0 nightly, Transformers and Datasets libraries from Hugging Face, Mistral-7B base model (7.2 billion parameters)
Training: Full fine-tuning with gradient checkpointing, 23 cus... | 2025-06-15T00:25:56 | https://v.redd.it/t5kg81t0jz6f1 | AstroAlto | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lbnb79 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/t5kg81t0jz6f1/DASHPlaylist.mpd?a=1752539173%2CNzI3NmQ0NWZkNzdiYmM3NzJiOTE4YTZlNzYyYjNiY2ZhYjBlMzk5YTllNzZlMTA4OGNlZTY2YTVmMjc5ZThkNw%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/t5kg81t0jz6f1/DASH_720.mp4?source=fallback', 'ha... | t3_1lbnb79 | /r/LocalLLaMA/comments/1lbnb79/llm_training_on_rtx_5090/ | false | false | 354 | {'enabled': False, 'images': [{'id': 'cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?width=108&crop=smart&format=pjpg&auto=webp&s=30c5e659958af889350c77a992dee258e4a64... | |
Going cold turkey on free online AI and handling the practicalities on mobile | 1 | [removed] | 2025-06-15T00:24:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lbnaee/going_cold_turkey_on_free_online_ai_and_handling/ | After-Cell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lbnaee | false | null | t3_1lbnaee | /r/LocalLLaMA/comments/1lbnaee/going_cold_turkey_on_free_online_ai_and_handling/ | false | false | self | 1 | null |