title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LangChain Apps Can Now Remember - Drop-in Memory API for Agents, Copilots, and SaaS | 0 | We just shipped something we've been working on for a while now and it quietly solves a problem most LangChain (and LLM app) devs have been hacking around with for too long:
>
**Introducing** [**Recallio**](https://python.langchain.com/docs/integrations/providers/recallio/) **for LangChain.**
A drop-in memory infra... | 2025-08-13T16:35:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mp9cvr/langchain_apps_can_now_remember_dropin_memory_api/ | GardenCareless5991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp9cvr | false | null | t3_1mp9cvr | /r/LocalLLaMA/comments/1mp9cvr/langchain_apps_can_now_remember_dropin_memory_api/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'oPpz0skggl6FucgJq68Aig_yMAhLIgPewsYwHhGrNp8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/oPpz0skggl6FucgJq68Aig_yMAhLIgPewsYwHhGrNp8.png?width=108&crop=smart&auto=webp&s=bcb12127d4f7d313311c16f862145225e0ddf104', 'width': 108}, {'height': 112, 'url': 'h... |
Ollama alternative, HoML v0.2.0 Released: Blazing Fast Speed | 0 | I worked on a few more improvement over the load speed.
The model start(load+compile) speed goes down from 40s to 8s, still 4X slower than Ollama, but with much higher throughput:
Now on RTX4000 Ada SFF(a tiny 70W GPU), I can get 5.6X throughput vs Ollama.
If you're interested, try it out: [https://homl.dev/](https:... | 2025-08-13T16:33:21 | https://homl.dev/blogs/release_notes_v0.2.0.html | wsmlbyme | homl.dev | 1970-01-01T00:00:00 | 0 | {} | 1mp9av6 | false | null | t3_1mp9av6 | /r/LocalLLaMA/comments/1mp9av6/ollama_alternative_homl_v020_released_blazing/ | false | false | default | 0 | null |
Need advice on building a production-ready conversational FAQ chatbot | 2 | Hey everyone,
I’m a college student trying to build proper production-ready AI app, and I’d love some guidance from folks here who have more experience.
The idea is to help small businesses in the hospitality and food space (restaurants, cafés, hotels, cloud kitchens, caterers, etc.) replace their static FAQ pages wi... | 2025-08-13T16:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mp9an1/need_advice_on_building_a_productionready/ | Ok_Performance_347 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp9an1 | false | null | t3_1mp9an1 | /r/LocalLLaMA/comments/1mp9an1/need_advice_on_building_a_productionready/ | false | false | self | 2 | null |
Finetuning on suryaocr | 1 | [removed] | 2025-08-13T16:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mp98vi/finetuning_on_suryaocr/ | ParfaitFragrant2176 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp98vi | false | null | t3_1mp98vi | /r/LocalLLaMA/comments/1mp98vi/finetuning_on_suryaocr/ | false | false | self | 1 | null |
Finetuning on suryaocr | 1 | [removed] | 2025-08-13T16:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mp97s3/finetuning_on_suryaocr/ | PureDoughnut6289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp97s3 | false | null | t3_1mp97s3 | /r/LocalLLaMA/comments/1mp97s3/finetuning_on_suryaocr/ | false | false | self | 1 | null |
Finetuning on suryaocr | 1 | [removed] | 2025-08-13T16:29:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mp972t/finetuning_on_suryaocr/ | NoBlackberry3264 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp972t | false | null | t3_1mp972t | /r/LocalLLaMA/comments/1mp972t/finetuning_on_suryaocr/ | false | false | self | 1 | null |
Flash Attention massively accelerate gpt-oss-120b inference speed on Apple silicon | 93 | I wanted to share my observation and experience with gpt-oss-120b (unsloth/gpt-oss-120b-GGUF, F16).
I am running it via LM Studio (latest v0.3.23), my hardware config is Mac Studio M4 Max (16c/40g) with 128GB of unified memory.
My main complaint against gpt-oss-120b was its inference speed, once the context win... | 2025-08-13T16:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mp92nc/flash_attention_massively_accelerate_gptoss120b/ | DaniDubin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp92nc | false | null | t3_1mp92nc | /r/LocalLLaMA/comments/1mp92nc/flash_attention_massively_accelerate_gptoss120b/ | false | false | self | 93 | null |
[Release] Neobelt - Desktop app for managing MCP servers with Docker | 0 | Hey r/LocalLLaMA! I built a desktop application that makes it easier to manage Model Context Protocol (MCP) servers locally.
What it does:
- Browse and install MCP servers from registries
- Manage Docker containers running MCP servers
- Configure environment variables, ports, and volumes through a GUI
- Cros... | 2025-08-13T16:21:31 | de_3lue | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mp8zbp | false | null | t3_1mp8zbp | /r/LocalLLaMA/comments/1mp8zbp/release_neobelt_desktop_app_for_managing_mcp/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'CRmz7G7XvPbhb9AaQUQr9FnPwtEl4DPHM0O_i1ioLsA', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/jslgyxsfbtif1.png?width=108&crop=smart&auto=webp&s=3e29df533c451fedf5369c252ea277f629ec6e9b', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/jslgyxsfbtif1.png... | ||
Local LLM PC Build | 1 | [removed] | 2025-08-13T16:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mp8p5m/local_llm_pc_build/ | Patience2277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp8p5m | false | null | t3_1mp8p5m | /r/LocalLLaMA/comments/1mp8p5m/local_llm_pc_build/ | false | false | self | 1 | null |
It looks like someone managed to access gpt-oss-base | 0 | I just came across this post on X - https://x.com/jxmnop/status/1955436067353502083?s=46&t=MGyeqPyVXxSwZ\_V2hYhfEw. It looks like someone managed to extract the gpt-oss base model. Its available on https://huggingface.co/jxm/gpt-oss-20b-base. | 2025-08-13T16:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mp8h7x/it_looks_like_someone_managed_to_access_gptossbase/ | Specialist_Cup968 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp8h7x | false | null | t3_1mp8h7x | /r/LocalLLaMA/comments/1mp8h7x/it_looks_like_someone_managed_to_access_gptossbase/ | false | false | self | 0 | null |
OpenAI launched prompt optimizer for GPT5 | 0 | Since release of GPT5, it appears that prompt tuning is the key to get the best results. For the same, OpenAI has released a new platform (for free) for optimising your generic prompt for GPT5 which can be tested below.
Platform link : https://platform.openai.com/chat/edit?optimize=true
Demo : https://youtu.be/oHvpG... | 2025-08-13T15:55:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mp8afm/openai_launched_prompt_optimizer_for_gpt5/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp8afm | false | null | t3_1mp8afm | /r/LocalLLaMA/comments/1mp8afm/openai_launched_prompt_optimizer_for_gpt5/ | false | false | self | 0 | null |
How do I get qwen3 with 2025 data? | 0 | Hello, I am just starting and a complete newbie. I downloaded LM studio on an M1 Mac 8GB. Based on research and suggestions, I download both Gemma3 1B and qwen3-4b-thinking-2507. When I ask qwen for info on an event from Q1 2025 it states that we are still in Q1 of 2024. Is there a comparable model out there with a mor... | 2025-08-13T15:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mp7zr8/how_do_i_get_qwen3_with_2025_data/ | Fried_Yoda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp7zr8 | false | null | t3_1mp7zr8 | /r/LocalLLaMA/comments/1mp7zr8/how_do_i_get_qwen3_with_2025_data/ | false | false | self | 0 | null |
How I fixed RAG breaking on table-heavy archives | 2 | People don’t seem to have a solid solution for varied format retrieval. A client in the energy sector gave me 5 years of equipment maintenance logs stored as PDFs. They had handwritten notes around tables and diagrams, not just typed info.
I ran them through a RAG pipeline and the retrieval pass looked fine at first u... | 2025-08-13T15:38:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mp7u3g/how_i_fixed_rag_breaking_on_tableheavy_archives/ | NullPointerJack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp7u3g | false | null | t3_1mp7u3g | /r/LocalLLaMA/comments/1mp7u3g/how_i_fixed_rag_breaking_on_tableheavy_archives/ | false | false | self | 2 | null |
How to not print the Thinking part of Qwen, deepseek? | 1 | So I know we can stop thinking completely, but I want the models to reason as they perform significantly better after that but I don't want the output to contain the thinking part, as I only want the generated prompt from the model as an output.
Can somebody help me with how to just not print the thinking part? | 2025-08-13T15:29:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mp7ld3/how_to_not_print_the_thinking_part_of_qwen/ | FriendshipNo6754 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp7ld3 | false | null | t3_1mp7ld3 | /r/LocalLLaMA/comments/1mp7ld3/how_to_not_print_the_thinking_part_of_qwen/ | false | false | self | 1 | null |
awesome-private-ai: all things for your AI data sovereign | 43 | hi just wanted to show - I have created this list. Been working on those topics recently and will be expanding it even more.
[https://github.com/tdi/awesome-private-ai](https://github.com/tdi/awesome-private-ai)
| 2025-08-13T15:20:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mp7cfe/awesomeprivateai_all_things_for_your_ai_data/ | tdi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp7cfe | false | null | t3_1mp7cfe | /r/LocalLLaMA/comments/1mp7cfe/awesomeprivateai_all_things_for_your_ai_data/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': '0sKEOfpT6Tozj7CE-c5zgnI53Mo2kt3_s310Ss1f9NE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0sKEOfpT6Tozj7CE-c5zgnI53Mo2kt3_s310Ss1f9NE.png?width=108&crop=smart&auto=webp&s=2ffa195036d7de9b833e0a57446c651e480b29c5', 'width': 108}, {'height': 108, 'url': 'h... |
Finally found an all-in-one dev tool after months of frustration 😭 | 1 | [removed] | 2025-08-13T15:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mp7292/finally_found_an_allinone_dev_tool_after_months/ | Timely_Painter_6263 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp7292 | false | null | t3_1mp7292 | /r/LocalLLaMA/comments/1mp7292/finally_found_an_allinone_dev_tool_after_months/ | false | false | self | 1 | null |
Finally found an all-in-one dev hub | 1 | [removed] | 2025-08-13T15:06:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mp7097/finally_found_an_allinone_dev_hub/ | Timely_Painter_6263 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp7097 | false | null | t3_1mp7097 | /r/LocalLLaMA/comments/1mp7097/finally_found_an_allinone_dev_hub/ | false | false | self | 1 | null |
Thankful to r/localllama, Swapped from Manus to a local setup | 91 | Saw a post here a while back about running multi‑agent setups locally. At the time I was still subbed to Manus and figured I'd just stick with what I knew.
Last week I decided to actually try it after seeing it mentioned again and… the OS community is fire tbh. Found an open‑source tool that runs entirely on my machin... | 2025-08-13T15:04:36 | Proof_Dog6506 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mp6y0e | false | null | t3_1mp6y0e | /r/LocalLLaMA/comments/1mp6y0e/thankful_to_rlocalllama_swapped_from_manus_to_a/ | false | false | default | 91 | {'enabled': True, 'images': [{'id': '0zp95auywsif1', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/0zp95auywsif1.jpeg?width=108&crop=smart&auto=webp&s=4e783a34b19dbcfb72fb5049c0ee8115a010e2d6', 'width': 108}, {'height': 263, 'url': 'https://preview.redd.it/0zp95auywsif1.jpeg?width=216&crop=smart&auto=... | |
Which LLM to run locally on m4 pro 48 gb ram ? | 0 | My task includes rag , summarisation of large pdf , generating script’s, python code and mops and an interactive chatbot . | 2025-08-13T15:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mp6vba/which_llm_to_run_locally_on_m4_pro_48_gb_ram/ | SoupSoggy3314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp6vba | false | null | t3_1mp6vba | /r/LocalLLaMA/comments/1mp6vba/which_llm_to_run_locally_on_m4_pro_48_gb_ram/ | false | false | self | 0 | null |
now it can turn your PDFs and docs into clean fine tuning datasets | 114 | [The flow on how it generates datasets using local resources](https://preview.redd.it/l4z271b5usif1.png?width=1812&format=png&auto=webp&s=4e4d98143bf7d60e382b53787e3ce6eb6272f8c8)
[Demo](https://reddit.com/link/1mp6it6/video/hhwtavqwusif1/player)
repo is here [https://github.com/Datalore-ai/datalore-localgen-cli... | 2025-08-13T14:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mp6it6/now_it_can_turn_your_pdfs_and_docs_into_clean/ | Interesting-Area6418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp6it6 | false | null | t3_1mp6it6 | /r/LocalLLaMA/comments/1mp6it6/now_it_can_turn_your_pdfs_and_docs_into_clean/ | false | false | 114 | {'enabled': False, 'images': [{'id': '3sG_aaHa7N5A_uKldFg_ckXPZRKSagJ4eq_vlsxxQ-g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3sG_aaHa7N5A_uKldFg_ckXPZRKSagJ4eq_vlsxxQ-g.png?width=108&crop=smart&auto=webp&s=a4fc2276303267e3cfea1843f315d4ff8dc009ca', 'width': 108}, {'height': 108, 'url': 'h... | |
now it can turn your PDFs and docs into clean fine tuning datasets | 1 | [the flow how it will generate dataset from local resources](https://preview.redd.it/bo2lvsjgssif1.png?width=1812&format=png&auto=webp&s=26ee723bd5981853c67d75bf363eb61243d616ad)
[Demo](https://i.redd.it/hgej8gcvssif1.gif)
repo is here [https://github.com/Datalore-ai/datalore-localgen-cli](https://github.com/Dat... | 2025-08-13T14:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mp6b9n/now_it_can_turn_your_pdfs_and_docs_into_clean/ | Interesting-Area6418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp6b9n | false | null | t3_1mp6b9n | /r/LocalLLaMA/comments/1mp6b9n/now_it_can_turn_your_pdfs_and_docs_into_clean/ | false | false | 1 | null | |
Is there any android compatible library to finetune a llm on device? | 3 | I am a computer science student who has taken part in Samsung prism hackathon. We are tasked to create a on device llm finetuning framework app. I know it is impractical to expect finetuning in a mobile device, even with QLoRA, but that is what samsung has tasked the participants with.
Is there any kotlin library, tha... | 2025-08-13T14:27:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mp5zlp/is_there_any_android_compatible_library_to/ | Acrobatic-Tomato4862 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp5zlp | false | null | t3_1mp5zlp | /r/LocalLLaMA/comments/1mp5zlp/is_there_any_android_compatible_library_to/ | false | false | self | 3 | null |
A quick CPU vs GPU comparison... | 1 | [removed] | 2025-08-13T14:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mp5ttq/a_quick_cpu_vs_gpu_comparison/ | 73tada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp5ttq | false | null | t3_1mp5ttq | /r/LocalLLaMA/comments/1mp5ttq/a_quick_cpu_vs_gpu_comparison/ | false | false | self | 1 | null |
Advice needed: Best way to build a document Q&A AI chatbot? (Docs → Answers) | 0 | I’m building a platform for a scientific foundation and want to add a document Q&A AI chatbot.
Students will ask questions, and it should answer only using our PDFs and research papers.
For an MVP, what’s the smartest approach?
\- Use RAG with an existing model?
\- Fine-tune a model on the docs?
\- Somet... | 2025-08-13T14:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mp5l6f/advice_needed_best_way_to_build_a_document_qa_ai/ | I-man2077 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp5l6f | false | null | t3_1mp5l6f | /r/LocalLLaMA/comments/1mp5l6f/advice_needed_best_way_to_build_a_document_qa_ai/ | false | false | self | 0 | null |
Which coding model is better? Kimi-K2 or GLM 4.5? | 3 | Which is better for coding? Kimi-K2 or GLM 4.5? because i saw this video comparing them [https://www.youtube.com/watch?v=ulfZwEa1x\_o](https://www.youtube.com/watch?v=ulfZwEa1x_o) (0 to 13 minutes is where im referring to) and GLM had a pretty good design choice while Kimi K2s website/os was really functional so idk. w... | 2025-08-13T14:08:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mp5ibc/which_coding_model_is_better_kimik2_or_glm_45/ | Cookiebotss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp5ibc | false | null | t3_1mp5ibc | /r/LocalLLaMA/comments/1mp5ibc/which_coding_model_is_better_kimik2_or_glm_45/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'iSXjiT9td49N32imknbJi0CNq-eUxIiflC_xF_oWIvE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iSXjiT9td49N32imknbJi0CNq-eUxIiflC_xF_oWIvE.jpeg?width=108&crop=smart&auto=webp&s=a5b9c55219cd513c4d76d3bd5f7477d556f7d64b', 'width': 108}, {'height': 162, 'url': '... |
God I love Qwen and llamacpp so much! | 998 | Local batch inference with qwen3 30B Instruct on a single RTX3090, 4 requests in parallel
Gonna use it to mass process some data to generate insights about our platform usage
I feel like I'm hitting my limits here and gonna need a multi GPU setup soon 😄 | 2025-08-13T14:01:37 | https://v.redd.it/ur3oxzhnmsif1 | Limp_Classroom_2645 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mp5bjc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ur3oxzhnmsif1/DASHPlaylist.mpd?a=1757685716%2CNjg2NDYxZGJhYTJiOTA3YzlmNDJlZGE4ZjViNjdhZjIwYzcwZDJmYzVhYThhMGQ3MWYwNTY1MGExMzI5Zjk2NA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/ur3oxzhnmsif1/DASH_1080.mp4?source=fallback', 'h... | t3_1mp5bjc | /r/LocalLLaMA/comments/1mp5bjc/god_i_love_qwen_and_llamacpp_so_much/ | false | false | 998 | {'enabled': False, 'images': [{'id': 'YWE3eDdxZG5tc2lmMRvVg1psIEfKedgCcU_ySdSE0fdUxqG9M3HUjgrx1S5i', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/YWE3eDdxZG5tc2lmMRvVg1psIEfKedgCcU_ySdSE0fdUxqG9M3HUjgrx1S5i.png?width=108&crop=smart&format=pjpg&auto=webp&s=71a9b531b4942b6273a4f9d759163c9fac34... | |
Open source Excel add-in for Ollama | 0 | Looks like someone just built an open source version of the Excel add-in to communicate with Ollama. I tried the .xlam file and it worked as expected. | 2025-08-13T14:00:40 | https://github.com/arsaboo/ollama-excel-udf | planetearth80 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mp5aiw | false | null | t3_1mp5aiw | /r/LocalLLaMA/comments/1mp5aiw/open_source_excel_addin_for_ollama/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'B-E2N4M-TgzrO0s9ohOL6OCFrw05soV5NqNQoe6I3m0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B-E2N4M-TgzrO0s9ohOL6OCFrw05soV5NqNQoe6I3m0.png?width=108&crop=smart&auto=webp&s=1351485e63fb32d72388bc7123101bd5f3002e3b', 'width': 108}, {'height': 108, 'url': 'h... |
How accessible can you have a setup while maintaining privacy? | 0 | I'm interested in setting a local LM on a desktop in my office that I can access through (I think it's called) an API.
The idea being that it can be accessed via a laptop, an iPad (or Android tablet) and a phone
The problem is because it contains some client data, it literally cannot be a cloud based system and sinc... | 2025-08-13T13:57:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mp5789/how_accessible_can_you_have_a_setup_while/ | monkey5511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp5789 | false | null | t3_1mp5789 | /r/LocalLLaMA/comments/1mp5789/how_accessible_can_you_have_a_setup_while/ | false | false | self | 0 | null |
Running local models with .net | 1 | [removed] | 2025-08-13T13:55:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mp55ri/running_local_models_with_net/ | pjrze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp55ri | false | null | t3_1mp55ri | /r/LocalLLaMA/comments/1mp55ri/running_local_models_with_net/ | false | false | self | 1 | null |
Plant UML Generator LLM finetune | 45 | **Introducing pumlGenV2-1: The AI That Visualizes Complex Ideas as PlantUML Diagrams**
# What It Does
Give it a complex question—whether about **architecture**, **philosophical debates**, or **historical events**—and it generates a structured PlantUML diagram with (all input text is treated as a question):
✔ **Logi... | 2025-08-13T13:44:53 | https://huggingface.co/chrisrutherford/pumlGenV2 | lolzinventor | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mp4vxe | false | null | t3_1mp4vxe | /r/LocalLLaMA/comments/1mp4vxe/plant_uml_generator_llm_finetune/ | false | false | default | 45 | {'enabled': False, 'images': [{'id': 'A2vjjq6KSpHPh1Yu7sh0j0dqmFW8S4lQdPeKAxnYi1A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/A2vjjq6KSpHPh1Yu7sh0j0dqmFW8S4lQdPeKAxnYi1A.png?width=108&crop=smart&auto=webp&s=f9a997a32dda049f5e2048f1ffa27686c4ecdfe5', 'width': 108}, {'height': 116, 'url': 'h... |
Looking for a better emotional intelligence benchmark than EQBench | 3 | Horizon Alpha (rumored to be GPT 5) charts [at the top of EQBench](https://eqbench.com/) and gpt-5-chat ChatGPT-4o beats ChatGPT-4o, but Reddit and X commentary suggests that everyone *loves* ChatGPT-4o for its "warmth" and *hates* ChatGPT-5.
This makes me believe that EQBench is not a good benchmark to evaluate emoti... | 2025-08-13T13:41:01 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mp4sd7 | false | null | t3_1mp4sd7 | /r/LocalLLaMA/comments/1mp4sd7/looking_for_a_better_emotional_intelligence/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': 's83ynelpisif1', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/s83ynelpisif1.png?width=108&crop=smart&auto=webp&s=e359b31da2396b47ab9ba24ff0f5d7ff4b06fa80', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/s83ynelpisif1.png?width=216&crop=smart&auto=webp... | |
How do you guys do system prompts? Any good catch-all one? | 0 | I've heard that a good system prompt can elevate a so-so model to a great one depending on the use case. Mine was always minimal, mostly relating to how the response should be formatted and nothing about the content. I've always kinda preferred the responses I get on the provider's own chat website even if I am using t... | 2025-08-13T13:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mp4ihn/how_do_you_guys_do_system_prompts_any_good/ | 1234filip | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp4ihn | false | null | t3_1mp4ihn | /r/LocalLLaMA/comments/1mp4ihn/how_do_you_guys_do_system_prompts_any_good/ | false | false | self | 0 | null |
Beelink GTR9 Pro Mini PC Launched: 140W AMD Ryzen AI MAX+ 395 APU, 128 GB LPDDR5x 8000 MT/s Memory, 2 TB Crucial SSD, Dual 10GbE LAN For $1985 | 181 | 2025-08-13T13:28:20 | https://wccftech.com/beelink-gtr9-pro-mini-pc-launched-140w-amd-ryzen-ai-max-395-128-gb-dual-10gbe-1985-usd/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1mp4gwl | false | null | t3_1mp4gwl | /r/LocalLLaMA/comments/1mp4gwl/beelink_gtr9_pro_mini_pc_launched_140w_amd_ryzen/ | false | false | default | 181 | {'enabled': False, 'images': [{'id': 'Bvw60PvhPgoef0Ng9Djae_QLUotq8vncLfnhqt8cL74', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/Bvw60PvhPgoef0Ng9Djae_QLUotq8vncLfnhqt8cL74.png?width=108&crop=smart&auto=webp&s=ac51b608ca7c3bec3a97d12c8cdad62d23e690b7', 'width': 108}, {'height': 132, 'url': 'h... | |
Free me from the choice analysis paralysis of LLMs | 0 | Im going to the point:
CPU: Ryzen 5 5500
GPU: RTX 3060 12GB VRAM
RAM: 16 GB
Need frontend app to chat with 3 differents AI models: Code, General use, RP
If not, some insights on how to connect models to Godot (im a game dev so i use this engine to make almost all my apps)
I've been using ComfyUI on Linux for alm... | 2025-08-13T13:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mp4g18/free_me_from_the_choice_analysis_paralysis_of_llms/ | Pro3dPrinterGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp4g18 | false | null | t3_1mp4g18 | /r/LocalLLaMA/comments/1mp4g18/free_me_from_the_choice_analysis_paralysis_of_llms/ | false | false | self | 0 | null |
I feel like I’ve completely fallen behind on how this works. | 1 | [removed] | 2025-08-13T13:01:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mp3sts/i_feel_like_ive_completely_fallen_behind_on_how/ | armeg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp3sts | false | null | t3_1mp3sts | /r/LocalLLaMA/comments/1mp3sts/i_feel_like_ive_completely_fallen_behind_on_how/ | false | false | self | 1 | null |
Pairs of GPUs for inference? | 0 | I’ve been looking into running larger models (Llama4 108B, GLM air, OSS 120B) and am starting to look at some dedicated hardware. The numbers for the Ryzen AI platforms look promising, but the lack of upgrade path worries me.
Have people tried running pairs of GPUS like 3080Ti/4060Ti? I know llama.cpp supports this, b... | 2025-08-13T12:56:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mp3p2v/pairs_of_gpus_for_inference/ | --jen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp3p2v | false | null | t3_1mp3p2v | /r/LocalLLaMA/comments/1mp3p2v/pairs_of_gpus_for_inference/ | false | false | self | 0 | null |
Triton 3.4 for MI50 | 22 | I've built triton 3.4 whl for ubuntu 24.04 + pytorch 2.8.0 + rocm 6.3 + MI50 (chinese version, flashed with 16gb radeon pro vii firmware from techpowerup). I can install it on my system, everything run just fine. You can download it here: https://huggingface.co/datasets/jetaudio/triton_gfx906
P/s: only tested on my sy... | 2025-08-13T12:56:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mp3oyt/triton_34_for_mi50/ | jetaudio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp3oyt | false | null | t3_1mp3oyt | /r/LocalLLaMA/comments/1mp3oyt/triton_34_for_mi50/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'EKYuIvvBxO2ruHFlO24PUX_2L7H3F8cjbJCCRWelu-4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EKYuIvvBxO2ruHFlO24PUX_2L7H3F8cjbJCCRWelu-4.png?width=108&crop=smart&auto=webp&s=e8e23444f9aaa1cad0ea5c0503fdef0d9e25e60f', 'width': 108}, {'height': 116, 'url': 'h... |
Nano Banana Hype | 0 | This is on another level, best I have seen | 2025-08-13T12:54:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mp3npz/nano_banana_hype/ | No_Efficiency_1144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp3npz | false | null | t3_1mp3npz | /r/LocalLLaMA/comments/1mp3npz/nano_banana_hype/ | false | false | self | 0 | null |
is doing full finetune instead of LORA an overkill for a small dataset? | 12 | I'm going to be finetuning qwen3-30b-a3b but not sure if I should do full finetuning or LORA, I have around 500 examples of how I want the LLM to talk, behave, how long the sentences should be, what to say depending on certain situations etc... | 2025-08-13T12:53:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mp3mwq/is_doing_full_finetune_instead_of_lora_an/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp3mwq | false | null | t3_1mp3mwq | /r/LocalLLaMA/comments/1mp3mwq/is_doing_full_finetune_instead_of_lora_an/ | false | false | self | 12 | null |
Help with weird output | 0 | I'm completely new to llama.cpp. I shifted to it from ollama upon hearing that it's better because it provides more customization. I installed it in wsl (because windows...)
I have an RTX 4070 (12gb vram), 32 gb system ram.
`./llama-cli --model ~/llama.cpp/models/gpt-oss-20b-base-q4_k_m.gguf -c 0 -fa --jinja ... | 2025-08-13T12:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mp3bs7/help_with_weird_output/ | -_Pxycho_Caxon_- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp3bs7 | false | null | t3_1mp3bs7 | /r/LocalLLaMA/comments/1mp3bs7/help_with_weird_output/ | false | false | self | 0 | null |
Few doubts in using gpt-oss 20B | 1 | I’ve got an A10 GPU with 22GB VRAM, but when I try running a GPT OSS model, I keep hitting a CUDA out-of-memory error. I can’t use `mxfp4` quantization since it’s only supported on Hopper GPUs, and my attempt with `bnb` config also failed. Does anyone know a way to load this model in a quantized form that would work on... | 2025-08-13T12:38:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mp3auo/few_doubts_in_using_gptoss_20b/ | Careless_Meringue525 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp3auo | false | null | t3_1mp3auo | /r/LocalLLaMA/comments/1mp3auo/few_doubts_in_using_gptoss_20b/ | false | false | self | 1 | null |
Windsurf user demands local model support - Can already be done with twinny? | 0 | 2025-08-13T12:37:38 | https://feedback.windsurf.com/feature-requests/p/an-option-for-local-models | Jethro_E7 | feedback.windsurf.com | 1970-01-01T00:00:00 | 0 | {} | 1mp3a8c | false | null | t3_1mp3a8c | /r/LocalLLaMA/comments/1mp3a8c/windsurf_user_demands_local_model_support_can/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'gswLktIaeZzt5wxI6o-mzqLJSMXD11Aqhpzo69Ib9Ws', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/gswLktIaeZzt5wxI6o-mzqLJSMXD11Aqhpzo69Ib9Ws.jpeg?width=108&crop=smart&auto=webp&s=4b8d8fbfa5af605acc309cfbaee202313080f9bb', 'width': 108}, {'height': 113, 'url': '... | ||
what is the best / cheapest model to run for transcription formattion? | 1 | im making a tool that transforms audiofile to a meaningfull transcription.
to make a transcription i use whisper v3, from plain text i want to use LLM to transform it to a transcription - speaker, what they say, etc.
currently i use gemini-2.5-flash with limit of 1000 in reasoning token, it works best but it's not ... | 2025-08-13T12:25:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mp30wv/what_is_the_best_cheapest_model_to_run_for/ | Prainss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp30wv | false | null | t3_1mp30wv | /r/LocalLLaMA/comments/1mp30wv/what_is_the_best_cheapest_model_to_run_for/ | false | false | self | 1 | null |
There is a new text-to-image model named nano-banana | 429 | 2025-08-13T12:20:32 | Severe-Awareness829 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mp2wq3 | false | null | t3_1mp2wq3 | /r/LocalLLaMA/comments/1mp2wq3/there_is_a_new_texttoimage_model_named_nanobanana/ | false | false | default | 429 | {'enabled': True, 'images': [{'id': 'jmw88evj4sif1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/jmw88evj4sif1.png?width=108&crop=smart&auto=webp&s=6f957c39a379d7c63f3fe816f853b2307b91c51a', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/jmw88evj4sif1.png?width=216&crop=smart&auto=we... | ||
Anyone using MaxText, Google's AI Hyperscaling "reference" implementation? | 2 | https://github.com/AI-Hypercomputer/maxtext
I've been trying to work with this repo but it's been a pain to even convert models into whatever maxtext wants.
However... it boasts very high utilization rates (MFU) on connected GPUs and TPUs. So from a business standpoint it would be higher performance/dollar AFAIK.
A... | 2025-08-13T12:16:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mp2tjr/anyone_using_maxtext_googles_ai_hyperscaling/ | DrKedorkian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp2tjr | false | null | t3_1mp2tjr | /r/LocalLLaMA/comments/1mp2tjr/anyone_using_maxtext_googles_ai_hyperscaling/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'Jh1RWnsXJ3I0u2xv4YX90twnb8i8bC6RDq1o0Q_A8fU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Jh1RWnsXJ3I0u2xv4YX90twnb8i8bC6RDq1o0Q_A8fU.png?width=108&crop=smart&auto=webp&s=21e47f6752ad6d41f97ccee4ca3b5781a0b134aa', 'width': 108}, {'height': 108, 'url': 'h... |
Predictions: A day when OS LLM Models become easy to run on any device | 0 | Looking at competiting models from China that are matching the performance of closed source model is on the verge. Soon, there will be models that will surpass newer closed source models.
But, I think what everyone wants is to run these OS LLM models on their crappy laptops, phones, tablets,...
The BIGGEST hurdle... | 2025-08-13T12:16:21 | Soft_Ad1142 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mp2thd | false | null | t3_1mp2thd | /r/LocalLLaMA/comments/1mp2thd/predictions_a_day_when_os_llm_models_become_easy/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fespkt0v3sif1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/fespkt0v3sif1.png?width=108&crop=smart&auto=webp&s=50539a29da61404cd9a3668961d5ace313087cb0', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/fespkt0v3sif1.png?width=216&crop=smart&auto=web... | |
How GLM4.5 Helps You Read and Summarize Academic Papers Faster | 5 | The following is my conversation with GLM-4.5: link to chat (https://chat.z.ai/s/a9e599ab-4d7a-476d-bbe7-65c0a1dee0b6)
In this session, GLM-4.5 first checked the arXiv link, then read the PDF and provided a concise summary of the paper.
After that, I asked it to explain more details about the paper—such as the model’... | 2025-08-13T12:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mp2lb4/how_glm45_helps_you_read_and_summarize_academic/ | OddUnderstanding1633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp2lb4 | false | null | t3_1mp2lb4 | /r/LocalLLaMA/comments/1mp2lb4/how_glm45_helps_you_read_and_summarize_academic/ | false | false | self | 5 | null |
Is using Open WebUI as the main chat interface for my AI app a good long-term strategy? | 0 | I’m building an AI companion app with a custom backend that exposes an OpenAI-compatible API (/v1/chat/completions).
For the UI, I’ve been experimenting with [Open WebUI](https://github.com/open-webui/open-webui) because:
* It’s feature-rich out of the box (chat history, multi-model support, etc.)
* It’s responsi... | 2025-08-13T12:05:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mp2l78/is_using_open_webui_as_the_main_chat_interface/ | gigachadhd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp2l78 | false | null | t3_1mp2l78 | /r/LocalLLaMA/comments/1mp2l78/is_using_open_webui_as_the_main_chat_interface/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '4lW32XM1gQl2OJQ3nJeHqEYAYHdWeL5irn2_f_RQUU4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4lW32XM1gQl2OJQ3nJeHqEYAYHdWeL5irn2_f_RQUU4.png?width=108&crop=smart&auto=webp&s=5cc84a5e7a48e42382ec856b2645c4d089bb5d92', 'width': 108}, {'height': 108, 'url': 'h... |
Semem : Semantic Web Memory for Intelligent Agents | 0 | Semem [1] is an experimental Node.js toolkit for AI memory management that integrates large language models (LLMs) with Semantic Web technologies (RDF/SPARQL). It offers knowledge graph retrieval and augmentation algorithms within a conceptual model based on the Ragno [2] (knowledge graph description) and ZPT [3] (... | 2025-08-13T11:41:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mp23bv/semem_semantic_web_memory_for_intelligent_agents/ | danja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp23bv | false | null | t3_1mp23bv | /r/LocalLLaMA/comments/1mp23bv/semem_semantic_web_memory_for_intelligent_agents/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'GaiEkHuf91CpQFIK6ZZPVXA__pTJ0vj-Yp9phBn4VrU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GaiEkHuf91CpQFIK6ZZPVXA__pTJ0vj-Yp9phBn4VrU.png?width=108&crop=smart&auto=webp&s=7c7203e86d6468171c9d2a12c6b835635f7227aa', 'width': 108}, {'height': 108, 'url': 'h... |
gptme v0.28.0 major release - agent CLI with local model support | 5 | 2025-08-13T11:37:53 | https://github.com/gptme/gptme/releases/tag/v0.28.0 | ErikBjare | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mp20qa | false | null | t3_1mp20qa | /r/LocalLLaMA/comments/1mp20qa/gptme_v0280_major_release_agent_cli_with_local/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'pd8Zgp5hyaUkgDOYatKpFNMomJKv_Ji19N5-m2ttO0s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pd8Zgp5hyaUkgDOYatKpFNMomJKv_Ji19N5-m2ttO0s.png?width=108&crop=smart&auto=webp&s=d13fcc906012d0afd0704dcfb0afb1a451022ca9', 'width': 108}, {'height': 108, 'url': 'h... | ||
What are the ways to evaluate response time for LLMs. I saw a lot of literature on the other metrics but couldn't find much on the response time. | 3 | I want to evaluate and compare response time for LLMs based on when the prompt is given, the length of the prompts, wording choice, and other relevant parameters. | 2025-08-13T11:32:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mp1wno/what_are_the_ways_to_evaluate_response_time_for/ | SkyDifficult2469 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp1wno | false | null | t3_1mp1wno | /r/LocalLLaMA/comments/1mp1wno/what_are_the_ways_to_evaluate_response_time_for/ | false | false | self | 3 | null |
[Beta] Local TTS Studio with Kokoro, Kitten TTS, and Piper built in, completely in JavaScript (930+ voices to choose from) | 66 | Hey all! Last week, [I posted](https://www.reddit.com/r/LocalLLaMA/comments/1mi45h1/kitten_tts_web_demo/) a Kitten TTS web demo that it seemed like a lot of people liked, so I decided to take it a step further and add Piper and Kokoro to the project! The project lets you load Kitten TTS, Piper Voices, or Kokoro complet... | 2025-08-13T11:24:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mp1ras/beta_local_tts_studio_with_kokoro_kitten_tts_and/ | CommunityTough1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp1ras | false | null | t3_1mp1ras | /r/LocalLLaMA/comments/1mp1ras/beta_local_tts_studio_with_kokoro_kitten_tts_and/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': '5AJgfoLBV2jUPX4q5cINe-XsF8c2usIJ3lmV56hTfK8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5AJgfoLBV2jUPX4q5cINe-XsF8c2usIJ3lmV56hTfK8.png?width=108&crop=smart&auto=webp&s=91db5ab4b0b2f09c6a516fa967735076a8464e88', 'width': 108}, {'height': 108, 'url': 'h... |
Deep dive: LLaMA context windows and handling long outputs with stepwise prompts | 3 | So I’ve been running local LLaMA models (7B and 13B) and kept banging into the context window limit. You ask for a multi-page report and, halfway through, the output just stops. This used to be easier with smaller tasks but once you try a simulation or long essay, you see the model hitting around 4k tokens and silently... | 2025-08-13T11:20:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mp1oi5/deep_dive_llama_context_windows_and_handling_long/ | youknowwmorethanme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp1oi5 | false | null | t3_1mp1oi5 | /r/LocalLLaMA/comments/1mp1oi5/deep_dive_llama_context_windows_and_handling_long/ | false | false | self | 3 | null |
Heads-up about ChatGPT Voice Mode change on Sept 9 | 0 | OpenAI is removing the ability to switch between Advanced and Standard voice modes on September 9. After that, the app will lock you into whatever mode is built in, with no option to toggle.
For most people, that means losing the original Cove voice in Advanced mode — which had a big following for its warmth and natur... | 2025-08-13T11:15:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mp1lc6/headsup_about_chatgpt_voice_mode_change_on_sept_9/ | Kami-Nova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp1lc6 | false | null | t3_1mp1lc6 | /r/LocalLLaMA/comments/1mp1lc6/headsup_about_chatgpt_voice_mode_change_on_sept_9/ | false | false | self | 0 | null |
!HELP! I need some guide and help on figuring out an industry level RAG chatbot for the startup I am working.(explained in the body) | 0 | Hey, so I just joined a small startup(more like a 2-person company), I have beenasked to create a SaaS product where the client can come and submit their website url or/and pdf related to the info about the company that the user on the website may ask about their company .
Till now I am able to crawl the website by us... | 2025-08-13T11:14:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mp1kof/help_i_need_some_guide_and_help_on_figuring_out/ | 1amN0tSecC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp1kof | false | null | t3_1mp1kof | /r/LocalLLaMA/comments/1mp1kof/help_i_need_some_guide_and_help_on_figuring_out/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'GtcXXgONgM9dN-fyNju-XBWnrbi1FlJFq4axvneapLA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GtcXXgONgM9dN-fyNju-XBWnrbi1FlJFq4axvneapLA.png?width=108&crop=smart&auto=webp&s=a746ced9e7b11ee4729412a8fb137cbedf5edafd', 'width': 108}, {'height': 108, 'url': 'h... |
Peak safety theater: gpt-oss-120b refuses to discuss implementing web search in llama.cpp | 300 | 2025-08-13T11:12:30 | csixtay | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mp1j7e | false | null | t3_1mp1j7e | /r/LocalLLaMA/comments/1mp1j7e/peak_safety_theater_gptoss120b_refuses_to_discuss/ | false | false | default | 300 | {'enabled': True, 'images': [{'id': 'j7hi9xgjrrif1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/j7hi9xgjrrif1.png?width=108&crop=smart&auto=webp&s=b7da0d29778ec85a87dd7c1e9131f1120a7fa25c', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/j7hi9xgjrrif1.png?width=216&crop=smart&auto=webp... | ||
once again the rumour is deepseek r2 is going to launch | 0 | im 100 percent sure its will be better then previous generation | 2025-08-13T10:50:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mp1476/once_again_the_rumour_is_deepseek_r2_is_going_to/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp1476 | false | null | t3_1mp1476 | /r/LocalLLaMA/comments/1mp1476/once_again_the_rumour_is_deepseek_r2_is_going_to/ | false | false | self | 0 | null |
gemini cli is scamming us by telling that its open source and giving 2.5 pro access actually they are giving the flash one access and the more u code the dumber will it become | 0 | not a good expereince with gemini cli and same problem with the qwen 3 coder cli the more u code the dumber it will get but they dont change the model | 2025-08-13T10:47:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mp12ol/gemini_cli_is_scamming_us_by_telling_that_its/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp12ol | false | null | t3_1mp12ol | /r/LocalLLaMA/comments/1mp12ol/gemini_cli_is_scamming_us_by_telling_that_its/ | false | false | self | 0 | null |
if you're in the LA metro area and are in the market for a dual 4090 watercooled PC with a threadripper 5995wx and want to help me out | 1 | [removed] | 2025-08-13T10:44:45 | casualcamus | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mp10um | false | null | t3_1mp10um | /r/LocalLLaMA/comments/1mp10um/if_youre_in_the_la_metro_area_and_are_in_the/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '7ls0nghhnrif1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/7ls0nghhnrif1.jpeg?width=108&crop=smart&auto=webp&s=6e227619c0f733de9284a3a5987e706cffad907b', 'width': 108}, {'height': 286, 'url': 'https://preview.redd.it/7ls0nghhnrif1.jpeg?width=216&crop=smart&auto=... | |
Synthetic dataset evaluation | 1 | Hi! If I wanted to introduce new task and create a dataset for it, how would I evaluate it to prove its quality? Especially if the samples are synthetically generated. | 2025-08-13T10:43:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mp0zzt/synthetic_dataset_evaluation/ | MariaFitz345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp0zzt | false | null | t3_1mp0zzt | /r/LocalLLaMA/comments/1mp0zzt/synthetic_dataset_evaluation/ | false | false | self | 1 | null |
I can't fine-tune because my VRAM is not good enough | 0 | Hi, I want to fine-tune an AI model, but my GPU isn’t very powerful, so I can’t fine-tune it efficiently. I’m using an NVIDIA GeForce RTX 3060 Laptop GPU with 6 GB of VRAM. Is there any way I can still fine-tune a model with limited GPU memory?
Thank you in advence. | 2025-08-13T10:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mp0xc0/i_cant_finetune_because_my_vram_is_not_good_enough/ | anovatikz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp0xc0 | false | null | t3_1mp0xc0 | /r/LocalLLaMA/comments/1mp0xc0/i_cant_finetune_because_my_vram_is_not_good_enough/ | false | false | self | 0 | null |
Prompt Injection | 1 | Hi. I'm thinking of putting up a small web application where an LLM classifies a given user text, e.g. whether it is of a certain language.
Thinking of prompt injection, is it still a risk when the user input is _not_ supposed to be an instruction? I want to put in my system prompt, which is supposed to be less "inje... | 2025-08-13T10:31:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mp0sls/prompt_injection/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp0sls | false | null | t3_1mp0sls | /r/LocalLLaMA/comments/1mp0sls/prompt_injection/ | false | false | self | 1 | null |
Sending multiple user role messages in one API request | 1 | Hi. I tried sending one system message and then a sequence of user message in a row, instead of e.g. sending one line-separated user message.
I saw in the reasoning that the LLM is treating all of the user messages as one big message, sort of. Given that, is there any difference/benefit when sending multiple messages... | 2025-08-13T10:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mp0nnh/sending_multiple_user_role_messages_in_one_api/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp0nnh | false | null | t3_1mp0nnh | /r/LocalLLaMA/comments/1mp0nnh/sending_multiple_user_role_messages_in_one_api/ | false | false | self | 1 | null |
Proposed GPT-OSS Roleplay Settings by ChatGPT (Terrible Outcome) | 0 | The title, but I'm listing them here in case someone has better settings or would like to improve on those, using these settings with GPT-OSS in Koboldcpp I got terrible hallucination, I'm using the Q4 GGUF Jinx-gpt-oss-20b [Jinx-org/Jinx-gpt-oss-20b · Hugging Face](https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b):
R... | 2025-08-13T10:22:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mp0ngl/proposed_gptoss_roleplay_settings_by_chatgpt/ | Electronic-Metal2391 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp0ngl | false | null | t3_1mp0ngl | /r/LocalLLaMA/comments/1mp0ngl/proposed_gptoss_roleplay_settings_by_chatgpt/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'P0d7BMzhU8lFm_gY9r3-Ieqcq7avVW4yk_FBxEW_Ccs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/P0d7BMzhU8lFm_gY9r3-Ieqcq7avVW4yk_FBxEW_Ccs.png?width=108&crop=smart&auto=webp&s=f4bd6c37b59017817c7574387134e19b9a3cebbf', 'width': 108}, {'height': 116, 'url': 'h... |
Gemma3n e4b or Qwen 3 4b thinking? what's the best one? | 13 | Very straightforward question. | 2025-08-13T09:49:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mp02yy/gemma3n_e4b_or_qwen_3_4b_thinking_whats_the_best/ | pumukidelfuturo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mp02yy | false | null | t3_1mp02yy | /r/LocalLLaMA/comments/1mp02yy/gemma3n_e4b_or_qwen_3_4b_thinking_whats_the_best/ | false | false | self | 13 | null |
Hardware requirements for running good open source LLMs locally | 1 | [removed] | 2025-08-13T09:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mozrub/hardware_requirements_for_running_good_open/ | emilmsh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mozrub | false | null | t3_1mozrub | /r/LocalLLaMA/comments/1mozrub/hardware_requirements_for_running_good_open/ | false | false | self | 1 | null |
Is there a wiki that is updated once a month containing recommended models per use case? | 49 | As someone who doesn't constantly follow developments, is there a good resource for determining good models for different use cases? I understand benchmarks are suboptimal, but even something like a vote based resource or something that's manually curated would be great. Things are still moving fast, and it's hard to t... | 2025-08-13T09:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mozddg/is_there_a_wiki_that_is_updated_once_a_month/ | Yugen42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mozddg | false | null | t3_1mozddg | /r/LocalLLaMA/comments/1mozddg/is_there_a_wiki_that_is_updated_once_a_month/ | false | false | self | 49 | null |
gpt-oss-120B most intelligent model that fits on an H100 in native precision | 339 | Interesting analysis thread: https://x.com/artificialanlys/status/1952887733803991070 | 2025-08-13T08:46:18 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1moz341 | false | null | t3_1moz341 | /r/LocalLLaMA/comments/1moz341/gptoss120b_most_intelligent_model_that_fits_on_an/ | false | false | default | 339 | {'enabled': True, 'images': [{'id': '4okvse7e2rif1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/4okvse7e2rif1.jpeg?width=108&crop=smart&auto=webp&s=423a2e7b24261b20dd99baac6a973e95e5059e60', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/4okvse7e2rif1.jpeg?width=216&crop=smart&auto=w... | |
Any suggestions? | 16 | 2025-08-13T08:32:17 | Anto444_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1moyvcx | false | null | t3_1moyvcx | /r/LocalLLaMA/comments/1moyvcx/any_suggestions/ | false | false | default | 16 | {'enabled': True, 'images': [{'id': 'lrv0e95wzqif1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/lrv0e95wzqif1.jpeg?width=108&crop=smart&auto=webp&s=a4fa24a1b3c042c96eaa2c0e113d0c208faa99ed', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/lrv0e95wzqif1.jpeg?width=216&crop=smart&auto=w... | ||
Hardware requirements for running open source LLMs locally | 1 | [removed] | 2025-08-13T08:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/1moyso1/hardware_requirements_for_running_open_source/ | emilmsh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moyso1 | false | null | t3_1moyso1 | /r/LocalLLaMA/comments/1moyso1/hardware_requirements_for_running_open_source/ | false | false | self | 1 | null |
data cleaning help llm | 3 | hi all! very noob i wish i was more knowledgeable.
I have this csv file i want to clean. it has columns: parent name, parent id, contact first name, contact last name, contact email, country code, contact phone.
about 145 rows of data is there. the thing is it is messy af like a 5 year old entered the data without s... | 2025-08-13T08:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/1moysf0/data_cleaning_help_llm/ | bukkaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moysf0 | false | null | t3_1moysf0 | /r/LocalLLaMA/comments/1moysf0/data_cleaning_help_llm/ | false | false | self | 3 | null |
Maestro Update: CPU Support (AMD/non-NVIDIA), Intelligent Search & Login Fixes | 18 | Hey everyone,
Just wanted to post a quick update for my project, Maestro. I know a few users were running into login or connection issues. I've now added an `nginx` entry point and added a new setup script which should resolve those problems, so if you had trouble getting it to work before, please give it another try!... | 2025-08-13T08:13:22 | https://www.reddit.com/gallery/1moyl4m | hedonihilistic | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1moyl4m | false | null | t3_1moyl4m | /r/LocalLLaMA/comments/1moyl4m/maestro_update_cpu_support_amdnonnvidia/ | false | false | 18 | null | |
Anyone succeded to train a GPT-Sovits model and add a different language other than Japanese/Chinese/English? | 6 | As the title suggests i'm trying to add different languages to GPT-Sovits like maybe arabic, french, italien. If someone achieve that please don't hesitate to share the steps to do that. Thank you. | 2025-08-13T08:02:37 | mrpeace03 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1moyf3p | false | null | t3_1moyf3p | /r/LocalLLaMA/comments/1moyf3p/anyone_succeded_to_train_a_gptsovits_model_and/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'r93ytizxtqif1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/r93ytizxtqif1.png?width=108&crop=smart&auto=webp&s=b31d9667728fd1c256ed300efcf784700bafa6b0', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/r93ytizxtqif1.png?width=216&crop=smart&auto=web... | |
[novice question] When to use thinking/non-thinking MoE/other local llms? | 3 | I am not sure whether to use thinking or non-thinking local models. I have several thousand articles that I need to code for the extent of presence of a specific concept (based on moral foundations theory). Ideally, I would want zero - or few- shot prompt template.
Should I by default using thinking local llms for be... | 2025-08-13T08:01:29 | https://www.reddit.com/r/LocalLLaMA/comments/1moyegw/novice_question_when_to_use_thinkingnonthinking/ | Chance-Studio-8242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moyegw | false | null | t3_1moyegw | /r/LocalLLaMA/comments/1moyegw/novice_question_when_to_use_thinkingnonthinking/ | false | false | self | 3 | null |
RTX Pro 4000 Blackwell paper launch | 0 | PNY is receiving orders for RTX Pro Blackwell series from weeks, but away from RTX Pro 6000, a haven't seen review of other model in series. Any idea when real deliveries for other models will start, and especially RTX Pro 4000 Blackwell? | 2025-08-13T07:50:44 | https://www.reddit.com/r/LocalLLaMA/comments/1moy8oq/rtx_pro_4000_blackwell_paper_launch/ | Zealousideal-Ad-7969 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moy8oq | false | null | t3_1moy8oq | /r/LocalLLaMA/comments/1moy8oq/rtx_pro_4000_blackwell_paper_launch/ | false | false | self | 0 | null |
Someone turned gpt-oss-20b instruct back into a base model | 0 | 2025-08-13T07:35:50 | https://huggingface.co/jxm/gpt-oss-20b-base | Thomas-Lore | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1moy0pe | false | null | t3_1moy0pe | /r/LocalLLaMA/comments/1moy0pe/someone_turned_gptoss20b_instruct_back_into_a/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'zmAstA5GHsZ5D1Ytdd09n55weXV0zAPIz20Ni-1LhRg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zmAstA5GHsZ5D1Ytdd09n55weXV0zAPIz20Ni-1LhRg.png?width=108&crop=smart&auto=webp&s=e9c954e868a579c2b52d86757df97f1a6fc7d913', 'width': 108}, {'height': 116, 'url': 'h... | ||
Open Source Human like Voice Cloning for Personalized Outreach!! | 0 | Hey everyone please help!!
I'm working with agency owners and want to create personalized outreach videos for their potential clients. The idea is to have a short under 1 min video with the agency owner's face in a facecam format, while their portfolio scrolls in the background. The script for each video will be differ... | 2025-08-13T07:26:39 | https://www.reddit.com/r/LocalLLaMA/comments/1moxvhw/open_source_human_like_voice_cloning_for/ | According_Net_1792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moxvhw | false | null | t3_1moxvhw | /r/LocalLLaMA/comments/1moxvhw/open_source_human_like_voice_cloning_for/ | false | false | self | 0 | null |
Free, open source, no data collected app (done as a hobby - no commercial purpose) running Qwen3-4B-4bit beats Mistral, Deepseek, Qwen web search functionalities and matches ChatGPT on most queries. | 51 | Hi guys!
The new updates to the LLM pigeon companion apps are out and have a much improved web search functionality.
LLM Pigeon and LLM Pigeon Server are two companion apps. One for Mac and one for iOS. They are both free and open source. They collect no data (it's just a cool tool I wanted for myself).
To put it... | 2025-08-13T07:17:50 | https://v.redd.it/y6zpo2o6jqif1 | Valuable-Run2129 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1moxqht | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/y6zpo2o6jqif1/DASHPlaylist.mpd?a=1757661484%2CZWNmNGM5ZjM5NjQyYTU3YmJlM2M3NDExNDYyMGNlYmI5ZWU1ODJmYmU5NjA0OThjYmNhYjBhOGRlNmYxYTExMg%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/y6zpo2o6jqif1/DASH_1080.mp4?source=fallback', 'h... | t3_1moxqht | /r/LocalLLaMA/comments/1moxqht/free_open_source_no_data_collected_app_done_as_a/ | false | false | 51 | {'enabled': False, 'images': [{'id': 'NXgwbnczbzZqcWlmMdVEZN68k09jQTVjAWzZ-7bMpjYBGMRNAxORrrhQtEef', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NXgwbnczbzZqcWlmMdVEZN68k09jQTVjAWzZ-7bMpjYBGMRNAxORrrhQtEef.png?width=108&crop=smart&format=pjpg&auto=webp&s=7560664935c07bede93bfdfad148c941daa9d... | |
right now qwen 3 cli helping me too much in my anime startup , soon my project will complete in two weeks and its all possible bcz of qwen 3 coder cli but i wish if they just provide the 10 million context window of model with swe benchmark like 95 percent | 0 | coming years we will see so many new startups | 2025-08-13T07:10:20 | https://www.reddit.com/r/LocalLLaMA/comments/1moxm8m/right_now_qwen_3_cli_helping_me_too_much_in_my/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moxm8m | false | null | t3_1moxm8m | /r/LocalLLaMA/comments/1moxm8m/right_now_qwen_3_cli_helping_me_too_much_in_my/ | false | false | self | 0 | null |
Fast model swap with llama-swap & unified memory | 14 | Swapping between multiple frequently-used models are quite slow with llama-swap&llama.cpp.
Even if you reload from vm cache, initializing is stil slow.
Qwen3-30B is large and will consume all VRAM. If I want swap between 30b-coder and 30b-thinking, I have to unload and reload.
Here is the key to load them simutaneoul... | 2025-08-13T07:06:43 | https://www.reddit.com/r/LocalLLaMA/comments/1moxk4g/fast_model_swap_with_llamaswap_unified_memory/ | TinyDetective110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moxk4g | false | null | t3_1moxk4g | /r/LocalLLaMA/comments/1moxk4g/fast_model_swap_with_llamaswap_unified_memory/ | false | false | self | 14 | null |
[UPDATE] DocStrange - Structured data extraction from images/pdfs/docs | 101 | I previously shared the open‑source library DocStrange. Now I have hosted it as a free to use web app to upload pdfs/images/docs to get clean structured data in Markdown/CSV/JSON/Specific-fields and other formats.
**Live Demo:** [**https://docstrange.nanonets.com**](https://docstrange.nanonets.com)
Would love to he... | 2025-08-13T06:34:17 | LostAmbassador6872 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mox183 | false | null | t3_1mox183 | /r/LocalLLaMA/comments/1mox183/update_docstrange_structured_data_extraction_from/ | false | false | default | 101 | {'enabled': True, 'images': [{'id': 'nclxmfireqif1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/nclxmfireqif1.gif?width=108&crop=smart&format=png8&s=0c54b1b3e0cf3932dab7328da10411a113f73a69', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/nclxmfireqif1.gif?width=216&crop=smart&format... | |
So I tried to run gpt-oss:20b using llama-cli in my MacBook... | 49 | ...and this happened. How can I fix this?
I'm using M3 pro 18gb MacBook. I used command from llama.cpp repo(`llama-cli -hf modelname`). I expected the model to run since it ran without errors when using Ollama.
The graphic glitch happened after the line `load_tensors: loading model tensors, this can take a while... (... | 2025-08-13T06:27:37 | https://v.redd.it/hgs8wvr4cqif1 | qscwdv351 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mowxb3 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hgs8wvr4cqif1/DASHPlaylist.mpd?a=1757658470%2CMDhjZDFlODkwMGNjZmYyNGUzMTJmODIzNzA3ODU4NWE3NWEwYjE4NmMxN2UwMDBjN2I1YWRmMDcxNWNlMWViMQ%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/hgs8wvr4cqif1/DASH_1080.mp4?source=fallback', 'h... | t3_1mowxb3 | /r/LocalLLaMA/comments/1mowxb3/so_i_tried_to_run_gptoss20b_using_llamacli_in_my/ | false | false | 49 | {'enabled': False, 'images': [{'id': 'OW54ZWN4cjRjcWlmMXoKX18xRFkOvTxBhp0YKGJ7rKqpCw2PYiSs9tD7VgN2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OW54ZWN4cjRjcWlmMXoKX18xRFkOvTxBhp0YKGJ7rKqpCw2PYiSs9tD7VgN2.png?width=108&crop=smart&format=pjpg&auto=webp&s=71ad1ebabd29381814eb8d32d8740f5fc69be... | |
Tiny SLM English only, with decent reasoning, summarization but with largish context window for purpose specific RAG | 2 | My needs are somewhat specific, where the entire RAG stack must fit into 4\~6GB of RAM, while running off CPUs (4 vCPU, no GPU). This is for RAG over pretty technical documents, all written in English. Everything local, i.e. python RAG applications (ingestor and query/summarization), llamma.cpp w/ model(s) \[generation... | 2025-08-13T06:23:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mowv16/tiny_slm_english_only_with_decent_reasoning/ | Professional_Row_967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mowv16 | false | null | t3_1mowv16 | /r/LocalLLaMA/comments/1mowv16/tiny_slm_english_only_with_decent_reasoning/ | false | false | self | 2 | null |
Run Ollama GPT-OSS:20B Locally and used as Rest API | 0 | I build a basic chat app in Flutter by using Ollama Rest API, the model used is GPT-OSS:20B model.
My Current PC configuration is
NVIDIA 4070 Ti Super
Ryzen R9 9950X3D
32GB RAM
and it work great with some delay
[https://www.youtube.com/watch?v=F5\_Fq1BJEr8](https://www.youtube.com/watch?v=F5_Fq1BJEr8) | 2025-08-13T06:23:32 | Fail_Key | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mowuu3 | false | null | t3_1mowuu3 | /r/LocalLLaMA/comments/1mowuu3/run_ollama_gptoss20b_locally_and_used_as_rest_api/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'fTD88DStGvar9TOw-YGCz-sR5tu77FB2IvSAslmPtNg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/9fmkv8owcqif1.jpeg?width=108&crop=smart&auto=webp&s=40a959a083b79911b02393d9321128c07fdfc8f7', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/9fmkv8owcqif1.jp... | ||
Anyone else experiencing ”never ending” reasoning on small quantized models? | 5 | So I prompted a very simple PLC programming exercise (buttons pressed logic, light turns on/off, present a function block representation) to various models in these were the results:
Gemini pro 2.5 via google ai studio: nailed it, both breakdown and presentation was clear.
oss 20b via openrouter: correct answer prov... | 2025-08-13T06:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mowsxv/anyone_else_experiencing_never_ending_reasoning/ | AI-On-A-Dime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mowsxv | false | null | t3_1mowsxv | /r/LocalLLaMA/comments/1mowsxv/anyone_else_experiencing_never_ending_reasoning/ | false | false | self | 5 | null |
What are my options to get actual emotional outputs? | 2 | Sorry for noob question.
Since chatgpt remove gpt 4o for free users and is now available for plus users, I'm unable to afford it due to having some financial issues. I can afford after sometime but not now.
What are my options for getting emotional human like outputs without paying?
I need like 3-4 stories that feel... | 2025-08-13T06:12:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mowo08/what_are_my_options_to_get_actual_emotional/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mowo08 | false | null | t3_1mowo08 | /r/LocalLLaMA/comments/1mowo08/what_are_my_options_to_get_actual_emotional/ | false | false | self | 2 | null |
Multi-Token Prediction(MTP) in llama.cpp | 123 | [https://github.com/ggml-org/llama.cpp/pull/15225](https://github.com/ggml-org/llama.cpp/pull/15225)
The dev says they're pretty new to ML outside of python so patience is required. It's only a draft for now but i felt like i need to share it with you folks, maybe some of you have the required knowledge and skills to ... | 2025-08-13T06:07:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mowlg2/multitoken_predictionmtp_in_llamacpp/ | UpperParamedicDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mowlg2 | false | null | t3_1mowlg2 | /r/LocalLLaMA/comments/1mowlg2/multitoken_predictionmtp_in_llamacpp/ | false | false | self | 123 | {'enabled': False, 'images': [{'id': 'MldTBWBXY6gWu5GwAV10UMBLVSDnlW5vhaGEP9bZHcs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MldTBWBXY6gWu5GwAV10UMBLVSDnlW5vhaGEP9bZHcs.png?width=108&crop=smart&auto=webp&s=4f5e1497214f72a4f9f7c9413084e02e89911e92', 'width': 108}, {'height': 108, 'url': 'h... |
What happened here? | 1 | [removed] | 2025-08-13T06:00:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mowgyv/what_happened_here/ | Necessary_Image1281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mowgyv | false | null | t3_1mowgyv | /r/LocalLLaMA/comments/1mowgyv/what_happened_here/ | false | false | self | 1 | null |
What really happened to this sub? | 1 | [removed] | 2025-08-13T05:59:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mowgjg/what_really_happened_to_this_sub/ | Necessary_Image1281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mowgjg | false | null | t3_1mowgjg | /r/LocalLLaMA/comments/1mowgjg/what_really_happened_to_this_sub/ | false | false | self | 1 | null |
Multi-Token Prediction(MTP) in llama.cpp | 1 | [removed] | 2025-08-13T05:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mowfyt/multitoken_predictionmtp_in_llamacpp/ | UpperParamedicDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mowfyt | false | null | t3_1mowfyt | /r/LocalLLaMA/comments/1mowfyt/multitoken_predictionmtp_in_llamacpp/ | false | false | self | 1 | null |
What really happened to this sub? | 1 | [removed] | 2025-08-13T05:58:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mowfts/what_really_happened_to_this_sub/ | Terrible-Priority-21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mowfts | false | null | t3_1mowfts | /r/LocalLLaMA/comments/1mowfts/what_really_happened_to_this_sub/ | false | false | self | 1 | null |
Ai resources whatsapp group | 1 | [removed] | 2025-08-13T05:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mow2ez/ai_resources_whatsapp_group/ | LocalUsed3057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mow2ez | false | null | t3_1mow2ez | /r/LocalLLaMA/comments/1mow2ez/ai_resources_whatsapp_group/ | false | false | self | 1 | null |
I tested some local models on my server with blackwell GPU 16GB vram - here are the results | 12 | I wanted to test some of my local AI models on ollama and after doing some manual command line prompts with --verbose, I then used a mixture of Claude, Gemini, Grok to help me write the script which then did all the local benchmark tests on ollama and output the details to a csv file. Then I had Claude AI analysis and ... | 2025-08-13T04:49:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mov8xd/i_tested_some_local_models_on_my_server_with/ | maximo101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mov8xd | false | null | t3_1mov8xd | /r/LocalLLaMA/comments/1mov8xd/i_tested_some_local_models_on_my_server_with/ | false | false | 12 | null | |
I tried the Jan-v1 model released today and here are the results | 146 | Search tool was brave. Tried 3 searches and its broken - the chat screenshots are attached and summarized below
1. **Whats the GDP of the US?:** Gave me a growth rate number, not the GDP figure itself.
2. **Whats the popilation of the world?:** Got stuck in loop searching for the same thing and then thinking. I wa... | 2025-08-13T04:41:17 | https://www.reddit.com/gallery/1mov3d9 | rm-rf-rm | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mov3d9 | false | null | t3_1mov3d9 | /r/LocalLLaMA/comments/1mov3d9/i_tried_the_janv1_model_released_today_and_here/ | false | false | 146 | null | |
I asked 23 models who was more trustworthy, Sam Altman or Elon Musk. Only Grok-4 said Elon Musk. | 0 | Grok 4 proves who its daddy is.
[Make comparisons in flowith](https://preview.redd.it/mkot6sjzqpif1.png?width=1482&format=png&auto=webp&s=5e4c9008e2753c3e6c16362b40a2885aa12582ee)
| 2025-08-13T04:21:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mouqfo/i_asked_23_models_who_was_more_trustworthy_sam/ | Quick-Knowledge1615 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mouqfo | false | null | t3_1mouqfo | /r/LocalLLaMA/comments/1mouqfo/i_asked_23_models_who_was_more_trustworthy_sam/ | false | false | 0 | null | |
Can AI help map threat modeling outputs to cybersecurity requirements? | 1 | Hi everyone,
I'm experimenting with a Python-based tool that uses semantic similarity (via the all-MiniLM-L6-v2 model) to match threats identified in a Microsoft Threat Modeling Tool report with existing cybersecurity requirements.
The idea is to automatically assess whether a threat (e.g., "Weak Authentication Schem... | 2025-08-13T04:20:27 | https://www.reddit.com/r/LocalLLaMA/comments/1moupxc/can_ai_help_map_threat_modeling_outputs_to/ | cyberSecSeekerAsh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moupxc | false | null | t3_1moupxc | /r/LocalLLaMA/comments/1moupxc/can_ai_help_map_threat_modeling_outputs_to/ | false | false | self | 1 | null |
Did I mess up my GPU purchase? | 0 | So I bought a 5070 Ti recently for a decent price. Plan to use it mostly for gaming but wanted to experiment with local LLMs on the side (mostly for side project coding). I felt good about it until I learned the same day about the rumored 5070 Ti S coming with 24gb VRAM. Did I make a bad purchase? Or would the gap betw... | 2025-08-13T04:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1moucrh/did_i_mess_up_my_gpu_purchase/ | Bombay111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1moucrh | false | null | t3_1moucrh | /r/LocalLLaMA/comments/1moucrh/did_i_mess_up_my_gpu_purchase/ | false | false | self | 0 | null |
Code ranking in arena | 1 | In the Arena’s coding ability rankings, Claude has consistently held a top position, while the newly released GPT-5 takes first place — I haven’t tried it yet. In addition, the performance of open-source models like Qwen, Kimi, and GLM is also impressive.
https://preview.redd.it/hex2ia53mpif1.png?width=2048&format=png... | 2025-08-13T03:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mou7mr/code_ranking_in_arena/ | Middle-Copy4577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mou7mr | false | null | t3_1mou7mr | /r/LocalLLaMA/comments/1mou7mr/code_ranking_in_arena/ | false | false | 1 | null | |
Kyutai voice cloning | 14 | After a lot of thought, I’ve decided to release a version of the Mimi voice embedder for kyutais tts model. The model is gated on Hugging Face with automatic access due to legal concerns as I am in the EU. If Kyutai ask me to remove this model I will, as Iove their work and dont want to get them into legal trouble. Ill... | 2025-08-13T03:51:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mou63h/kyutai_voice_cloning/ | SovietWarBear17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mou63h | false | null | t3_1mou63h | /r/LocalLLaMA/comments/1mou63h/kyutai_voice_cloning/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '81qiuX3B-YgLFQ5eEgVVSFuZcgWZzGp0QM4SFdUyPMI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/81qiuX3B-YgLFQ5eEgVVSFuZcgWZzGp0QM4SFdUyPMI.png?width=108&crop=smart&auto=webp&s=95afab9a98d347953c3f31e8e5a1c32560e32907', 'width': 108}, {'height': 108, 'url': 'h... |
Seeking numbers on RTX 5090 vs M3 Ultra performance at large context lengths. | 0 | A desktop with RTX 5090 (32GB DDR7 VRAM), + 64GB DDR5 RAM (though I suppose RAM can be increased relatively easily)
vs
Mac Studio, with 256GB Unified Memory (M3 Ultra chip with 28-core CPU, 60-core GPU, 32-core Neural Engine)
are similarly priced (India).
Can someone hint at which configuration would be b... | 2025-08-13T03:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/1motsy3/seeking_numbers_on_rtx_5090_vs_m3_ultra/ | TechnoRhythmic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1motsy3 | false | null | t3_1motsy3 | /r/LocalLLaMA/comments/1motsy3/seeking_numbers_on_rtx_5090_vs_m3_ultra/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wDjV0pcdZtunj1ATDoUEwj6KI-W1Egt4MtsYgSZ73tU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/wDjV0pcdZtunj1ATDoUEwj6KI-W1Egt4MtsYgSZ73tU.jpeg?width=108&crop=smart&auto=webp&s=082d6888290c54aac28a33e06160f6b30ed555f7', 'width': 108}, {'height': 121, 'url': '... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.