title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Advise needed on runtime and Model for my HW | 0 | I'm seeking advice from the community about the best use of my rig -> i9/32GB/3090+4070
I need to host local models for code assistance, and routine automation with N8N. All 8B models are quite useless, and I want to run something decent (if possible). What models and what runtime could I use to get maximum from 3... | 2025-07-05T09:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ls5x6q/advise_needed_on_runtime_and_model_for_my_hw/ | mancubus77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls5x6q | false | null | t3_1ls5x6q | /r/LocalLLaMA/comments/1ls5x6q/advise_needed_on_runtime_and_model_for_my_hw/ | false | false | self | 0 | null |
Have LLMs really improved for actual use? | 0 | Every month a new LLM is releasing, beating others in every benchmark, but is it actually better for day to day use?
Well, yes, they are smarter, that's for sure, at least on paper, benchmarks don't show the full thing. Thing is, I don't feel like they have actually improved that much, even getting worse, I remember w... | 2025-07-05T09:12:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ls5qjv/have_llms_really_improved_for_actual_use/ | Xpl0it_U | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls5qjv | false | null | t3_1ls5qjv | /r/LocalLLaMA/comments/1ls5qjv/have_llms_really_improved_for_actual_use/ | false | false | self | 0 | null |
5090 w/ 3090? | 0 | I am upgrading my system which will have a 5090. Would adding my old 3090 be any benefit or would it slow down the 5090 too much? Inference only. I'd like to get large context window on high quant of 32B, potentially using 70B. | 2025-07-05T09:10:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ls5pbt/5090_w_3090/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls5pbt | false | null | t3_1ls5pbt | /r/LocalLLaMA/comments/1ls5pbt/5090_w_3090/ | false | false | self | 0 | null |
Powerful 4B Nemotron based finetune | 143 | Hello all,
I present to you **Impish\_LLAMA\_4B**, one of the most powerful roleplay \\ adventure finetunes at its size category.
TL;DR:
* An **incredibly powerful** roleplay model for the size. It has **sovl !**
* Does **Adventure** very well for such size!
* Characters have **agency**, and might surprise you! [See... | 2025-07-05T08:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ls5b89/powerful_4b_nemotron_based_finetune/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls5b89 | false | null | t3_1ls5b89 | /r/LocalLLaMA/comments/1ls5b89/powerful_4b_nemotron_based_finetune/ | false | false | 143 | {'enabled': False, 'images': [{'id': 'HPVfFVlOpp4pNbOJ94txPWQsg8plop9RTeb6Vvswqrw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HPVfFVlOpp4pNbOJ94txPWQsg8plop9RTeb6Vvswqrw.png?width=108&crop=smart&auto=webp&s=eb145a5dc8675fac7f239771cdb889ea5c13d23f', 'width': 108}, {'height': 116, 'url': 'h... | |
Any thoughts on preventing hallucination in agents with tools | 0 | Hey All
Right now building a customer service agent with crewai and using tools to access enterprise data. Using self hosted LLMs (qwen30b/llama3.3:70b).
What i see is the agent blurting out information which are not available from the tools. Example: Address of your branch in NYC? It just makes up some address and ... | 2025-07-05T07:51:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ls4kp1/any_thoughts_on_preventing_hallucination_in/ | dnivra26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls4kp1 | false | null | t3_1ls4kp1 | /r/LocalLLaMA/comments/1ls4kp1/any_thoughts_on_preventing_hallucination_in/ | false | false | self | 0 | null |
Best Local VLM for Automated Image Classification? (10k+ Images) | 0 | Need to automatically sort 10k+ images into categories (flat-lay clothing vs people wearing clothes). Looking for the best local VLM approach. | 2025-07-05T06:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ls3rw2/best_local_vlm_for_automated_image_classification/ | survior2k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls3rw2 | false | null | t3_1ls3rw2 | /r/LocalLLaMA/comments/1ls3rw2/best_local_vlm_for_automated_image_classification/ | false | false | self | 0 | null |
What is NVLink? | 2 | I’m not entirely certain what it is, people recommend using it sometimes while recommending against it other times.
What is NVlink and what’s the difference against just plugging two cards into the motherboard?
Does it require more hardware? I heard stuff about a bridge? How does that work?
What about AMD cards, giv... | 2025-07-05T06:53:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ls3pkv/what_is_nvlink/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls3pkv | false | null | t3_1ls3pkv | /r/LocalLLaMA/comments/1ls3pkv/what_is_nvlink/ | false | false | self | 2 | null |
Open source tool for generating training datasets from text files and pdf for fine-tuning language models. | 46 | Hey yall I made a new open-source tool.
It's an app that **creates training data for AI models from your text and PDFs**.
It uses AI like Gemini, Claude, and OpenAI to make good question-answer sets that you can use to make your own AI smarter. The data comes out ready for different models.
Super simple, super usefu... | 2025-07-05T06:36:35 | https://github.com/MonkWarrior08/Dataset_Generator_for_Fine-tuning?tab=readme-ov-file | Idonotknow101 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ls3gho | false | null | t3_1ls3gho | /r/LocalLLaMA/comments/1ls3gho/open_source_tool_for_generating_training_datasets/ | false | false | default | 46 | null |
Will this ever be fixed? RP repetition | 8 | From time to time, often months between it. I start a roleplay with a local LLM and when I do this I chat for a while. And since two years I run every time into the same issue: After a while the roleplay turned into a "how do I fix the LLM from repeating itself too much" or into a "Post an answer, wait for the LLM answ... | 2025-07-05T04:29:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ls1hd2/will_this_ever_be_fixed_rp_repetition/ | Blizado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls1hd2 | false | null | t3_1ls1hd2 | /r/LocalLLaMA/comments/1ls1hd2/will_this_ever_be_fixed_rp_repetition/ | false | false | self | 8 | null |
Several images illustrate just how powerful Grok 4 is. | 1 | [removed] | 2025-07-05T04:06:07 | https://www.reddit.com/gallery/1ls13u3 | LLMLearner | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ls13u3 | false | null | t3_1ls13u3 | /r/LocalLLaMA/comments/1ls13u3/several_images_illustrate_just_how_powerful_grok/ | false | false | 1 | null | |
Are these AI topics enough to become an AI Consultant / GenAI PM / Strategy Lead? | 0 | Hi all,
I’m transitioning into AI consulting, GenAI product management, or AI strategy leadership roles — not engineering. My goal is to advise organizations on how to adopt, implement, and scale GenAI solutions responsibly and effectively.
I’ve built a 6 to 10 month learning plan based on curated Maven courses and i... | 2025-07-05T03:57:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ls0y8d/are_these_ai_topics_enough_to_become_an_ai/ | Ok_Story5978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls0y8d | false | null | t3_1ls0y8d | /r/LocalLLaMA/comments/1ls0y8d/are_these_ai_topics_enough_to_become_an_ai/ | false | false | self | 0 | null |
Does anyone here know of such a system that could easily be trained to recognize objects or people in photos? | 3 | I have thousands upon thousands of photos on various drives in my home. It would take the rest of my life likely to organize it all. What would be amazing is a piece of software or a collection of tools working together that could label and tag all of it. Essential feature would be for me to be like "this photo here is... | 2025-07-05T03:52:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ls0vb7/does_anyone_here_know_of_such_a_system_that_could/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls0vb7 | false | null | t3_1ls0vb7 | /r/LocalLLaMA/comments/1ls0vb7/does_anyone_here_know_of_such_a_system_that_could/ | false | false | self | 3 | null |
Will commercial humanoid robots ever use local AI? | 4 | When humanity gets to the point where humanoid robots are advanced enough to do household tasks and be personal companions, do you think their AIs will be local or will they have to be connected to the internet?
How difficult would it be to fit the gpus or hardware needed to run the best local llms/voice to voice mode... | 2025-07-05T03:44:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ls0r0w/will_commercial_humanoid_robots_ever_use_local_ai/ | Bristull | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ls0r0w | false | null | t3_1ls0r0w | /r/LocalLLaMA/comments/1ls0r0w/will_commercial_humanoid_robots_ever_use_local_ai/ | false | false | self | 4 | null |
speech, app studio, hosting - all local and seemless(ish) | my toy: bplus Server | 9 | Hopefully I uploaded everything correctly and haven't embarrassed myself..:
[https://github.com/mrhappynice/bplus-server](https://github.com/mrhappynice/bplus-server)
My little toy. Just talk into the mic. hit gen. look at code, is it there?? hit create, page is hosted and live.
also app manager(edit, delete, cre... | 2025-07-05T03:21:12 | mr_happy_nice | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ls0d8u | false | null | t3_1ls0d8u | /r/LocalLLaMA/comments/1ls0d8u/speech_app_studio_hosting_all_local_and/ | false | false | default | 9 | {'enabled': True, 'images': [{'id': 'toexu80x1zaf1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/toexu80x1zaf1.jpeg?width=108&crop=smart&auto=webp&s=13c05e9d01ffd1112614238278e02a8b0c5fa8f7', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/toexu80x1zaf1.jpeg?width=216&crop=smart&auto=w... | |
Best model at the moment for 128GB M4 Max | 37 | Hi everyone,
Recently got myself a brand new M4 Max 128Gb ram Mac Studio.
I saw some old posts about the best models to use with this computer, but I am wondering if that has changed throughout the months/years.
Currently, what is the best model and settings to use with this machine?
Cheers! | 2025-07-05T02:44:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lrzrmd/best_model_at_the_moment_for_128gb_m4_max/ | Xx_DarDoAzuL_xX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrzrmd | false | null | t3_1lrzrmd | /r/LocalLLaMA/comments/1lrzrmd/best_model_at_the_moment_for_128gb_m4_max/ | false | false | self | 37 | null |
License-friendly LLMs for generating synthetic datasets | 2 | Title. I wonder if there is any collections/rankings for open-to-use LLMs in the area of generating dataset. As far as I know (please correct me if I'm wrong):
- ChatGPT disallows "using ChatGPT to build a competitive model against itself". Though the terms is quite vague, it wouldn't be safe to assume that they're "op... | 2025-07-05T02:39:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lrzom4/licensefriendly_llms_for_generating_synthetic/ | blankboy2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrzom4 | false | null | t3_1lrzom4 | /r/LocalLLaMA/comments/1lrzom4/licensefriendly_llms_for_generating_synthetic/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '5HE7KGF_L1EksV4d9v3Dw9DfwLTMriQo2T312gJwv3o', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5HE7KGF_L1EksV4d9v3Dw9DfwLTMriQo2T312gJwv3o.png?width=108&crop=smart&auto=webp&s=c854207e0586cf8b3235769c68e916f5c8c84aec', 'width': 108}, {'height': 113, 'url': 'h... |
Any models with weather forecast automation? | 5 | Exploring an idea, potentially to expand a collection of data from Meshtastic nodes, but looking to keep it really simple/see what is possible.
I don't know if it's going to be like an abridged version of the Farmers Almanac, but I'm curious if there's AI tools that can evaluate offgrid meteorological readings like t... | 2025-07-05T02:36:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lrzmk8/any_models_with_weather_forecast_automation/ | techtornado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrzmk8 | false | null | t3_1lrzmk8 | /r/LocalLLaMA/comments/1lrzmk8/any_models_with_weather_forecast_automation/ | false | false | self | 5 | null |
Neuroflux - Experimental Hybrid Api Open Source Agentic Local Ollama Spec Decoder Rag Qdrant Research Report Generator | 0 | [Local System Report Test](https://reddit.com/link/1lrzgc3/video/fg155rw4uyaf1/player)
Before I release this app as it is now stable and consistent in the reports, can you advise what features you would like in an open source RAG research reporter so i can finetune the app and place on Github?
Thanks!
**Key f... | 2025-07-05T02:26:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lrzgc3/neuroflux_experimental_hybrid_api_open_source/ | CodevsScience | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrzgc3 | false | null | t3_1lrzgc3 | /r/LocalLLaMA/comments/1lrzgc3/neuroflux_experimental_hybrid_api_open_source/ | false | false | self | 0 | null |
Got some real numbers how llama.cpp got FASTER over last 3-months | 86 | Hey everyone. I am author of Hyprnote([https://github.com/fastrepl/hyprnote](https://github.com/fastrepl/hyprnote)) - privacy-first notepad for meetings. We regularly test out the AI models we use in various devices to make sure it runs well.
When testing MacBook, Qwen3 1.7B is used, and for Windows, Qwen3 0.6B is use... | 2025-07-05T02:08:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lrz5uy/got_some_real_numbers_how_llamacpp_got_faster/ | AggressiveHunt2300 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrz5uy | false | null | t3_1lrz5uy | /r/LocalLLaMA/comments/1lrz5uy/got_some_real_numbers_how_llamacpp_got_faster/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'wUf3Yu2e5X0htYmKODWf6bn_xezHqWptQE969HmhAbI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wUf3Yu2e5X0htYmKODWf6bn_xezHqWptQE969HmhAbI.png?width=108&crop=smart&auto=webp&s=ec7da19ac0d910c3d450387c36d5eef2bfd4ab9f', 'width': 108}, {'height': 108, 'url': 'h... |
M4 Mini pro Vs M4 Studio | 4 | Anyone know what the difference in tps would be for 64g mini pro vs 64g Studio since the studio has more gpu cores, but is it a meaningful difference for tps. I'm getting 5.4 tps on 70b on the mini. Curious if it's worth going to the studio | 2025-07-05T02:07:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lrz52e/m4_mini_pro_vs_m4_studio/ | AlgorithmicMuse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrz52e | false | null | t3_1lrz52e | /r/LocalLLaMA/comments/1lrz52e/m4_mini_pro_vs_m4_studio/ | false | false | self | 4 | null |
I have developed a Quantized LLM chat bot called AstralNet! Feel free to query it! | 2 |
It has different personalities and which can be passed with the prompt!
The model uses quantized architecture!!
Let me know what u think!!!
Link on profile due to dev environment changing url. | 2025-07-05T01:44:08 | Gelisea | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lryr18 | false | null | t3_1lryr18 | /r/LocalLLaMA/comments/1lryr18/i_have_developed_a_quantized_llm_chat_bot_called/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': '9o9bg4xinyaf1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/9o9bg4xinyaf1.jpeg?width=108&crop=smart&auto=webp&s=ae85efb9f58edf804412cae056b327e7b55b408b', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/9o9bg4xinyaf1.jpeg?width=216&crop=smart&auto=... | |
I have developed a Quantized LLM chat bot called AstralNet! Feel free to query it! | 1 |
It has different personalities and which can be passed with the prompt!
The model uses quantized architecture!!
Let me know what u think!!!
Link on profile due to dev environment changing url. | 2025-07-05T01:43:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lryqby/i_have_developed_a_quantized_llm_chat_bot_called/ | Gelisea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lryqby | false | null | t3_1lryqby | /r/LocalLLaMA/comments/1lryqby/i_have_developed_a_quantized_llm_chat_bot_called/ | false | false | self | 1 | null |
when 1m? | 0 | please. thanks u
/meme | 2025-07-05T00:16:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lrx8h3/when_1m/ | Quopid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrx8h3 | false | null | t3_1lrx8h3 | /r/LocalLLaMA/comments/1lrx8h3/when_1m/ | false | false | self | 0 | null |
office AI | 0 | i was wondering what the lowest cost hardware and model i need in order to run a language model locally for my office of 11 people. i was looking at llama70B, Jamba large, and Mistral (if you have any better ones would love to hear). For the Gpu i was looking at 2 xtx7900 24GB Amd gpus just because they are much cheape... | 2025-07-04T23:39:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lrwjnx/office_ai/ | Odd_Translator_3026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrwjnx | false | null | t3_1lrwjnx | /r/LocalLLaMA/comments/1lrwjnx/office_ai/ | false | false | self | 0 | null |
Qwen3 on AWS Bedrock | 5 | Looks like AWS Bedrock doesn’t have all the Qwen3 models available in their catalog. Anyone successfully load Qwen3-30B-A3B (the MOE variant) on Bedrock through their custom model feature? | 2025-07-04T23:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lrvvkk/qwen3_on_aws_bedrock/ | International_Quail8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrvvkk | false | null | t3_1lrvvkk | /r/LocalLLaMA/comments/1lrvvkk/qwen3_on_aws_bedrock/ | false | false | self | 5 | null |
How and why is Llama so behind the other models at coding and UI/UX? Who is even using it? | 27 | Based on the this [benchmark for coding and UI/UX](https://www.designarena.ai/leaderboard), the Llama models are absolutely horrendous when it comes to build websites, apps, and other kinds of user interfaces.
How is Llama this bad and Meta so behind on AI compared to everyone else? No wonder they're trying to poach ... | 2025-07-04T22:52:32 | https://www.reddit.com/gallery/1lrvlsx | idwiw_wiw | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lrvlsx | false | null | t3_1lrvlsx | /r/LocalLLaMA/comments/1lrvlsx/how_and_why_is_llama_so_behind_the_other_models/ | false | false | 27 | null | |
Day 10/50: Building a Small Language Model from Scratch - What is Model Distillation? | 19 | # Day 10/50: Building a Small Language Model from Scratch — What is Model Distillation?
*This is one of my favorite topics. I’ve always wanted to run large models (several billion parameters, like DeepSeek 671b) or at least make my smaller models behave as intelligently and powerfully as those massive, high-parameter ... | 2025-07-04T22:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lrv48g/day_1050_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrv48g | false | null | t3_1lrv48g | /r/LocalLLaMA/comments/1lrv48g/day_1050_building_a_small_language_model_from/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'gb7m4qcvvylH6NNx0ZMIbw8Owvba0Plr3imkJqRcZso', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/gb7m4qcvvylH6NNx0ZMIbw8Owvba0Plr3imkJqRcZso.jpeg?width=108&crop=smart&auto=webp&s=02c27e5c55a67614952db7dbeb132f79dd07ebdc', 'width': 108}, {'height': 121, 'url': '... |
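The distillation post above pairs naturally with the classic soft-target loss. A minimal sketch, assuming PyTorch and the standard Hinton-style recipe (temperature-scaled KL against the teacher plus ordinary cross-entropy), not the exact method from the linked series:

```python
# Minimal soft-target distillation loss sketch (assumes PyTorch is installed).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: student matches the teacher's temperature-smoothed distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy shapes: batch of 4, vocabulary of 10.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))
```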
KIOXIA AiSAQ software advances AI RAG with new version of vector search library | 3 | > In an ongoing effort to improve the usability of AI vector database searches within retrieval-augmented generation (RAG) systems by optimizing the use of solid-state drives (SSDs), KIOXIA today announced an update to its KIOXIA AiSAQ™[1] (All-in-Storage ANNS with Product Quantization) software. This new open-source r... | 2025-07-04T21:35:05 | https://europe.kioxia.com/en-europe/business/news/2025/20250703-1.html | Balance- | europe.kioxia.com | 1970-01-01T00:00:00 | 0 | {} | 1lru0fv | false | null | t3_1lru0fv | /r/LocalLLaMA/comments/1lru0fv/kioxia_aisaq_software_advances_ai_rag_with_new/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'oQ8m1Ysy-iCvuMSa9jsgbDq5mZ905x6TLSsHOyOWaUo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/oQ8m1Ysy-iCvuMSa9jsgbDq5mZ905x6TLSsHOyOWaUo.png?width=108&crop=smart&auto=webp&s=fe07491456a1986f569aa8d4fad62110341814de', 'width': 108}, {'height': 113, 'url': 'h... | |
How do you guys balance speed versus ease and usability? | 13 | TLDR Personally, I suck at CLI troubleshooting, I realized I will now happily trade away some token speed for a more simple and intuitive UI/UX
I'm very new to Linux as well as local LLMs, finally switched over to Linux just last week from Windows 10. I have basically zero CLI experience.
Few days ago, I started havi... | 2025-07-04T21:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lrtv8u/how_do_you_guys_balance_speed_versus_ease_and/ | sourpatchgrownadults | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrtv8u | false | null | t3_1lrtv8u | /r/LocalLLaMA/comments/1lrtv8u/how_do_you_guys_balance_speed_versus_ease_and/ | false | false | self | 13 | null |
Smallest VLM that currently exists and what's the minimum spec y'all have gotten them to work on? | 5 | I was kinda curious if instead of moondream and smolvlm there's more stuff out there? | 2025-07-04T21:23:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lrtrmw/smallest_vlm_that_currently_exists_and_whats_the/ | combo-user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrtrmw | false | null | t3_1lrtrmw | /r/LocalLLaMA/comments/1lrtrmw/smallest_vlm_that_currently_exists_and_whats_the/ | false | false | self | 5 | null |
Why I'd rather lose token speed than my mind troubleshooting CLI (just venting lol) | 1 | [removed] | 2025-07-04T21:20:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lrtpj1/why_id_rather_lose_token_speed_than_my_mind/ | sourpatchgrownadults | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrtpj1 | false | null | t3_1lrtpj1 | /r/LocalLLaMA/comments/1lrtpj1/why_id_rather_lose_token_speed_than_my_mind/ | false | false | self | 1 | null |
How RAG actually works — a toy example with real math | 589 | Most RAG explainers jump into theories and scary infra diagrams. Here’s the tiny end-to-end demo that can easy to understand for me:
Suppose we have a documentation like this: "Boil an egg. Poach an egg. How to change a tire"
# Step 1: Chunk
S0: "Boil an egg"
S1: "Poach an egg"
S2: "How to change a tire"... | 2025-07-04T20:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lrsx20/how_rag_actually_works_a_toy_example_with_real/ | Main-Fisherman-2075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrsx20 | false | null | t3_1lrsx20 | /r/LocalLLaMA/comments/1lrsx20/how_rag_actually_works_a_toy_example_with_real/ | false | false | self | 589 | null |
InstaTunnel – Share Your Localhost with a Single Command (Solving ngrok's biggest pain points) | 0 | Hey everyone 👋
I'm Memo, founder of InstaTunnel [instatunnel.my](https://instatunnel.my/) After diving deep into r/webdev and developer forums, I kept seeing the same frustrations with ngrok over and over:
**"Your account has exceeded 100% of its free ngrok bandwidth limit"** \- Sound familiar?
**"The tunnel sessi... | 2025-07-04T20:38:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lrsscx/instatunnel_share_your_localhost_with_a_single/ | JadeLuxe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrsscx | false | null | t3_1lrsscx | /r/LocalLLaMA/comments/1lrsscx/instatunnel_share_your_localhost_with_a_single/ | false | false | self | 0 | null |
THUDM/GLM-4.1V-9B-Thinking looks impressive | 121 | Looking forward to the GGUF quants to give it a shot. Would love if the awesome Unsloth team did their magic here, too.
[https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking](https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking) | 2025-07-04T20:37:49 | ConfidentTrifle7247 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lrss4u | false | null | t3_1lrss4u | /r/LocalLLaMA/comments/1lrss4u/thudmglm41v9bthinking_looks_impressive/ | false | false | default | 121 | {'enabled': True, 'images': [{'id': '62vkwepq4xaf1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/62vkwepq4xaf1.jpeg?width=108&crop=smart&auto=webp&s=e48334db0b608a0149e13ef26625c23ab6d950f5', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/62vkwepq4xaf1.jpeg?width=216&crop=smart&auto=we... | |
Need help fitting second gpu + 3rd drive | 2 | Original post got lost while I had reddit suspended while taking pictures smh. Anyways in short I have an additional 3090 and a 3rd 2.5 inch drive that I need to install. I know I will need risers and some sort of mount. Case is a coolermaster masterbox td500 mesh. The smaller pcie slots are occupied by 2 usb expansion... | 2025-07-04T20:27:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lrskbk/need_help_fitting_second_gpu_3rd_drive/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrskbk | false | null | t3_1lrskbk | /r/LocalLLaMA/comments/1lrskbk/need_help_fitting_second_gpu_3rd_drive/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'dGm2JvtNGavqfVU4wROz6T97IZ2rM5wpj2TNRZRinMw', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/PKb3CbPsXEivFF-a1w3WIdjgeMpA0sWla1FBk_K7bCc.jpg?width=108&crop=smart&auto=webp&s=b4cddf0a8cc12e8717419408be07265a2d8e4d0a', 'width': 108}, {'height': 288, 'url': '... |
No Race for the leading MCP Server GUI? | 2 | Disclaimer: I am not a programmer at all, and vibecoding thanks to LLMs has already brought me immense joy to my embedded hobby. (it just runs and nothing is critical and I am happy).
With MCP having been around longer by now and with it not seemingly not going away any time soon, how come setting up a MCP server is s... | 2025-07-04T20:24:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lrsi1e/no_race_for_the_leading_mcp_server_gui/ | Karim_acing_it | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrsi1e | false | null | t3_1lrsi1e | /r/LocalLLaMA/comments/1lrsi1e/no_race_for_the_leading_mcp_server_gui/ | false | false | self | 2 | null |
OCRFlux-3B | 137 | From the HF repo:
"OCRFlux is a multimodal large language model based toolkit for converting PDFs and images into clean, readable, plain Markdown text. It aims to push the current state-of-the-art to a significantly higher level."
Claims to beat other models like olmOCR and Nanonets-OCR-s by a substantial margin.
Rea... | 2025-07-04T20:21:20 | https://huggingface.co/ChatDOC/OCRFlux-3B | k-en | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lrsf6x | false | null | t3_1lrsf6x | /r/LocalLLaMA/comments/1lrsf6x/ocrflux3b/ | false | false | default | 137 | {'enabled': False, 'images': [{'id': 'x9gxRnW-oFgiJds7kCEygtLLuK_ZzX-0pJcvDDyr2xk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x9gxRnW-oFgiJds7kCEygtLLuK_ZzX-0pJcvDDyr2xk.png?width=108&crop=smart&auto=webp&s=14e38f0f603f7da8b5de8711620b4650bf1d4210', 'width': 108}, {'height': 116, 'url': 'h... |
Best fast local model for extracting data from scraped HTML? | 2 | Hi Folks, I’m scraping some listing pages and want to extract structured info like title, location, and link — but the HTML varies a lot between sites.
I’m looking for a fast, local LLM that can handle this kind of messy data and give me clean results. Ideally something lightweight (quantized is fine), and works well ... | 2025-07-04T20:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lrsdne/best_fast_local_model_for_extracting_data_from/ | xtremx12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrsdne | false | null | t3_1lrsdne | /r/LocalLLaMA/comments/1lrsdne/best_fast_local_model_for_extracting_data_from/ | false | false | self | 2 | null |
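For the HTML-extraction question above, one common pattern is to ask a small local instruct model for strict JSON through an OpenAI-compatible endpoint. A hedged sketch; the port, model name and schema are assumptions (LM Studio, llama-server and similar servers all expose this route), and a grammar or response_format option can make the JSON constraint stricter where the server supports it:

```python
# Extract structured fields from a messy HTML snippet via a local OpenAI-compatible server.
import json
import requests

def extract_listing(html_snippet):
    prompt = (
        "Extract the listing title, location, and link from the HTML below. "
        "Reply with JSON only, using keys: title, location, link.\n\n" + html_snippet
    )
    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",
        json={
            "model": "qwen3-8b",  # hypothetical local model name
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.0,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["choices"][0]["message"]["content"])

print(extract_listing('<div class="card"><a href="/flat-42">2BR flat</a><span>Berlin</span></div>'))
```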
Need help upgrading my rig for llms | 1 | [removed] | 2025-07-04T20:19:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lrsdif/need_help_upgrading_my_rig_for_llms/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrsdif | false | null | t3_1lrsdif | /r/LocalLLaMA/comments/1lrsdif/need_help_upgrading_my_rig_for_llms/ | false | false | self | 1 | null |
Anyone having issues with multiple GPUs and games? Trying to run LLM + other 3D stuff is a PITA. | 2 | Hey all. I've got a 3080 (my main gaming card) and a 3060 (which I want to use for local LLMs).
In Windows, games I run (specifically The Finals) always default to the 3060 and the only way I get it to top is by disabling it.
In Linux Ubuntu, the same game won't launch when two cards are in the system - I have ... | 2025-07-04T20:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lrs917/anyone_having_issues_with_multiple_gpus_and_games/ | InvertedVantage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrs917 | false | null | t3_1lrs917 | /r/LocalLLaMA/comments/1lrs917/anyone_having_issues_with_multiple_gpus_and_games/ | false | false | self | 2 | null |
Run `huggingface-cli scan-cache` occasionally to see what models are taking up space. Then run `huggingface-cli delete-cache` to delete the ones you don't use. (See text post) | 27 | The `~/.cache/huggingface` location is where a lot of stuff gets stored (on Windows it's `$HOME\.cache\huggingface`). You could just delete it every so often, but then you'll be re-downloading stuff you use.
**How to:**
1. `uv pip install 'huggingface_hub[cli]'` ([use uv](https://docs.astral.sh/uv/) it's worth it)
2.... | 2025-07-04T19:57:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lrrvva/run_huggingfacecli_scancache_occasionally_to_see/ | The_frozen_one | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrrvva | false | null | t3_1lrrvva | /r/LocalLLaMA/comments/1lrrvva/run_huggingfacecli_scancache_occasionally_to_see/ | false | false | self | 27 | null |
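The same cache inspection can also be scripted. A small sketch using huggingface_hub's scan_cache_dir(), assuming the package is installed as in step 1:

```python
# Rough programmatic equivalent of `huggingface-cli scan-cache`.
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()
print(f"Total cache size: {cache.size_on_disk / 1e9:.2f} GB")

# List cached repos, largest first, so it is obvious what to prune.
for repo in sorted(cache.repos, key=lambda r: r.size_on_disk, reverse=True):
    print(f"{repo.size_on_disk / 1e9:6.2f} GB  {repo.repo_type}  {repo.repo_id}")

# Deleting specific revisions works via cache.delete_revisions(<revision hashes>),
# which returns a strategy object you can inspect before calling .execute().
```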
M1 vs M4 pro | 0 | Hello ,
I am relatively new to local llm. I’ve run a few models, it’s quite slow.
I currently have an M1 Pro 16 gb, and am thinking about trading it for an M4 pro. I mostly want to upgrade from 14 inch to 16 inch monitor, but will there be any significant improvement in my ability to run local models? | 2025-07-04T19:48:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lrrojr/m1_vs_m4_pro/ | CulturalGrapefruit97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrrojr | false | null | t3_1lrrojr | /r/LocalLLaMA/comments/1lrrojr/m1_vs_m4_pro/ | false | false | self | 0 | null |
Built an offline AI chat app for macOS that works with local LLMs via Ollama | 0 | I've been working on a lightweight macOS desktop chat application that runs entirely offline and communicates with local LLMs through Ollama. No internet required once set up!
Key features:
\- 🧠 Local LLM integration via Ollama
\- 💬 Clean, modern chat interface with real-time streaming
\- 📝 Full markdown... | 2025-07-04T19:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lrqtzj/built_an_offline_ai_chat_app_for_macos_that_works/ | Disastrous-Parsnip93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrqtzj | false | null | t3_1lrqtzj | /r/LocalLLaMA/comments/1lrqtzj/built_an_offline_ai_chat_app_for_macos_that_works/ | false | false | self | 0 | null |
As foretold - LLMs are revolutionizing security research | 3 | 2025-07-04T19:06:14 | https://hackerone.com/reports/2298307 | HOLUPREDICTIONS | hackerone.com | 1970-01-01T00:00:00 | 0 | {} | 1lrqqiy | false | null | t3_1lrqqiy | /r/LocalLLaMA/comments/1lrqqiy/as_foretold_llms_are_revolutionizing_security/ | false | false | default | 3 | null | |
Marketing AI agent suggestions ( please, i want it to fine tune locally ) | 0 | guide me on this, i have parsed the data nd have the processed.jsonl file ready, now tell me how do i proceed with it? | 2025-07-04T19:05:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lrqptp/marketing_ai_agent_suggestions_please_i_want_it/ | RookAndRep2807 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrqptp | false | null | t3_1lrqptp | /r/LocalLLaMA/comments/1lrqptp/marketing_ai_agent_suggestions_please_i_want_it/ | false | false | self | 0 | null |
i made a script to train your own transformer model on a custom dataset on your machine | 57 | over the last couple of years we have seen LLMs become super duper popular and some of them are small enough to run on consumer level hardware, but in most cases we are talking about pre-trained models that can be used only in inference mode without considering the full training phase. Something that i was cuorious abo... | 2025-07-04T19:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lrqoul/i_made_a_script_to_train_your_own_transformer/ | samas69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrqoul | false | null | t3_1lrqoul | /r/LocalLLaMA/comments/1lrqoul/i_made_a_script_to_train_your_own_transformer/ | false | false | self | 57 | {'enabled': False, 'images': [{'id': 'BVtEt6cZ0osDH-48KskOkgP07Gr7jhgYOk0LZe_LbvY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BVtEt6cZ0osDH-48KskOkgP07Gr7jhgYOk0LZe_LbvY.png?width=108&crop=smart&auto=webp&s=172adefafb8f644efa0ce3d9f1b5a82f3a2f5ad3', 'width': 108}, {'height': 108, 'url': 'h... |
Am I correct that to run multiple models with Llama.cpp I need multiple instances on multiple ports? | 7 | I've been enjoying Ollama for the ability to have an easy web interface to download models with and that I can make API calls to a single endpoint and Port while specifying different models that I want used. As far as I understand it, llama.cpp requires one running instance per model, and obviously different ports. I'm... | 2025-07-04T18:57:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lrqj68/am_i_correct_that_to_run_multiple_models_with/ | CharlesStross | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrqj68 | false | null | t3_1lrqj68 | /r/LocalLLaMA/comments/1lrqj68/am_i_correct_that_to_run_multiple_models_with/ | false | false | self | 7 | null |
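On the multi-instance question above: each llama-server process exposes its own OpenAI-compatible endpoint, so a thin client-side dispatcher can mimic Ollama's model switching. A minimal sketch; the model names and ports are assumptions:

```python
# Route chat requests to the llama-server instance that hosts the requested model.
import requests

MODEL_PORTS = {          # hypothetical mapping of model name -> llama-server port
    "qwen3-8b": 8081,
    "gemma3-4b": 8082,
}

def chat(model, messages):
    port = MODEL_PORTS[model]
    resp = requests.post(
        f"http://127.0.0.1:{port}/v1/chat/completions",
        json={"model": model, "messages": messages},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(chat("qwen3-8b", [{"role": "user", "content": "Hello!"}]))
```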
Enterprise AI teams - what's stopping you from deploying more agents in production? | 0 | I am trying to solve the Enterprise AI Agent issue and would love to get feedback from you!
What's stopping you from deploying more agents in production?
* **Reliability concerns** \- Can't predict when agents will fail
* **Governance challenges** \- No centralized control over agent behavior
* **Integration overhea... | 2025-07-04T18:45:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lrq99t/enterprise_ai_teams_whats_stopping_you_from/ | tokyo_kunoichi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrq99t | false | null | t3_1lrq99t | /r/LocalLLaMA/comments/1lrq99t/enterprise_ai_teams_whats_stopping_you_from/ | false | false | self | 0 | null |
cli-agent - An agentic framework for arbitrary LLMs - now with hooks, roles, and deep research! | 8 | Hello everyone,
So I've been working on what was initially meant to be a Claude Code clone for arbitrary LLMs over the past two weeks, [cli-agent](https://github.com/amranu/cli-agent). It has support for various APIs as well as ollama, so I felt posting here is as good idea as any.
The project has access to all the t... | 2025-07-04T18:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lrq827/cliagent_an_agentic_framework_for_arbitrary_llms/ | amranu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrq827 | false | null | t3_1lrq827 | /r/LocalLLaMA/comments/1lrq827/cliagent_an_agentic_framework_for_arbitrary_llms/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'bdLJtiVMPiAMuYWA26Aedkjth-mZiCG-flDZsN3QbGM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bdLJtiVMPiAMuYWA26Aedkjth-mZiCG-flDZsN3QbGM.png?width=108&crop=smart&auto=webp&s=031f2ce9bc32e6bd693f22f934b2c517eff29015', 'width': 108}, {'height': 108, 'url': 'h... |
Can home sized LLMs (32b, etc.) or home GPUs ever improve to the point where they can compete with cloud models? | 0 | I feel so dirty using cloud models. They even admit to storing your queries forever and manually inspecting them if you trigger flags. | 2025-07-04T18:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lrpjpc/can_home_sized_llms_32b_etc_or_home_gpus_ever/ | TumbleweedDeep825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrpjpc | false | null | t3_1lrpjpc | /r/LocalLLaMA/comments/1lrpjpc/can_home_sized_llms_32b_etc_or_home_gpus_ever/ | false | false | self | 0 | null |
FULL Cursor System Prompt and Tools [UPDATED, v1.2] | 1 | [removed] | 2025-07-04T17:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lroyba/full_cursor_system_prompt_and_tools_updated_v12/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lroyba | false | null | t3_1lroyba | /r/LocalLLaMA/comments/1lroyba/full_cursor_system_prompt_and_tools_updated_v12/ | false | false | self | 1 | null |
FULL Cursor System Prompt and Tools [UPDATED, v1.2] | 1 | [removed] | 2025-07-04T17:49:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lrowy6/full_cursor_system_prompt_and_tools_updated_v12/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrowy6 | false | null | t3_1lrowy6 | /r/LocalLLaMA/comments/1lrowy6/full_cursor_system_prompt_and_tools_updated_v12/ | false | false | self | 1 | null |
FULL Cursor System Prompts and Tools [UPDATED, v1.2] | 1 | [removed] | 2025-07-04T17:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lroty9/full_cursor_system_prompts_and_tools_updated_v12/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lroty9 | false | null | t3_1lroty9 | /r/LocalLLaMA/comments/1lroty9/full_cursor_system_prompts_and_tools_updated_v12/ | false | false | self | 1 | null |
FULL Cursor System Prompts and Tools [UPDATED, v1.2] | 1 | [removed] | 2025-07-04T17:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lrossc/full_cursor_system_prompts_and_tools_updated_v12/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrossc | false | null | t3_1lrossc | /r/LocalLLaMA/comments/1lrossc/full_cursor_system_prompts_and_tools_updated_v12/ | false | false | self | 1 | null |
Llama.cpp and continuous batching for performance | 6 | I have an archive of several thousand maintenance documents. They are all very structured and similar but not identical. They cover 5 major classes of big industrial equipment. For a single class there may be 20 or more specific builds but not every build in a class is identical. Sometimes we want information... | 2025-07-04T17:40:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lroopr/llamacpp_and_continuous_batching_for_performance/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lroopr | false | null | t3_1lroopr | /r/LocalLLaMA/comments/1lroopr/llamacpp_and_continuous_batching_for_performance/ | false | false | self | 6 | null |
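For the batching workload described above, llama-server's continuous batching only pays off if requests actually overlap. A small sketch that fires queries concurrently, assuming the server was started with several parallel slots (e.g. `llama-server -m model.gguf -np 8`) on the default port:

```python
# Issue requests in parallel so the server can batch them across its slots.
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://127.0.0.1:8080/v1/chat/completions"

def ask(question):
    resp = requests.post(
        URL,
        json={"messages": [{"role": "user", "content": question}], "max_tokens": 128},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

questions = [f"Summarize maintenance doc section {i}." for i in range(8)]
with ThreadPoolExecutor(max_workers=8) as pool:
    for answer in pool.map(ask, questions):
        print(answer[:80])
```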
Gemini CLI is open source. Could we fork it to be able to use other models ? | 42 | Unlike Claude Code, [Gemini CLI is open source](https://github.com/google-gemini/gemini-cli/tree/main). Wouldn’t it be interesting to fork it and extend it to support other models, similar to what Aider provides? Given that [Aider now seems outpaced](https://aider.chat/) by its competitors, enhancing Gemini CLI with mu... | 2025-07-04T17:39:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lroonm/gemini_cli_is_open_source_could_we_fork_it_to_be/ | SubliminalPoet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lroonm | false | null | t3_1lroonm | /r/LocalLLaMA/comments/1lroonm/gemini_cli_is_open_source_could_we_fork_it_to_be/ | false | false | self | 42 | null |
how can i make langchain stream the same way openai does? | 2 | 2025-07-04T17:15:59 | https://www.reddit.com/gallery/1lro41o | Beyond_Birthday_13 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lro41o | false | null | t3_1lro41o | /r/LocalLLaMA/comments/1lro41o/how_can_i_make_langchain_stream_the_same_way/ | false | false | 2 | null | ||
llama : add high-throughput mode by ggerganov · Pull Request #14363 · ggml-org/llama.cpp | 85 | 2025-07-04T16:26:56 | https://github.com/ggml-org/llama.cpp/pull/14363 | LinkSea8324 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lrmxn7 | false | null | t3_1lrmxn7 | /r/LocalLLaMA/comments/1lrmxn7/llama_add_highthroughput_mode_by_ggerganov_pull/ | false | false | default | 85 | {'enabled': False, 'images': [{'id': 'IuYm0uiYOGT85fahGmoFFRSSnFzP4A66rCPcA3iycYY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IuYm0uiYOGT85fahGmoFFRSSnFzP4A66rCPcA3iycYY.png?width=108&crop=smart&auto=webp&s=d2f754516aec2c21e4f5375196c6d7bba0b657d3', 'width': 108}, {'height': 108, 'url': 'h... | |
I built a vector database, performing 2-8x faster search than traditional vector databases | 0 | For the last couple of months I have been building [Antarys AI](https://www.linkedin.com/company/antarys-ai/), a local first vector database to cut down latency and increased throughput.
I did this by creating a new indexing algorithm from HNSW and added an async layer on top of it, calling it AHNSW
since this is sti... | 2025-07-04T16:15:34 | https://github.com/antarys-ai/antarys-python/ | pacifio | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lrmo6i | false | null | t3_1lrmo6i | /r/LocalLLaMA/comments/1lrmo6i/i_built_a_vector_database_performing_28x_faster/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Ktrh2YRTl523xAOdRI2hknTtYXWUcylqfv8HmGajlFw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ktrh2YRTl523xAOdRI2hknTtYXWUcylqfv8HmGajlFw.png?width=108&crop=smart&auto=webp&s=59cfa121140bb0d5fd75006f1cb15e85aff5a8e6', 'width': 108}, {'height': 108, 'url': 'h... | |
In what format should i encode/transform my JSON Data before for finetuning? | 2 | [removed] | 2025-07-04T15:48:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lrm0jh/in_what_format_should_i_encodetransform_my_json/ | heil_ali | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrm0jh | false | null | t3_1lrm0jh | /r/LocalLLaMA/comments/1lrm0jh/in_what_format_should_i_encodetransform_my_json/ | false | false | self | 2 | null |
In what format should i encode/transform my Data for finetuning? | 1 | [removed] | 2025-07-04T15:46:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lrlyws/in_what_format_should_i_encodetransform_my_data/ | heil_ali | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrlyws | false | null | t3_1lrlyws | /r/LocalLLaMA/comments/1lrlyws/in_what_format_should_i_encodetransform_my_data/ | false | false | self | 1 | null |
In what format should i encode/transform my Data for finetuning? | 1 | [removed] | 2025-07-04T15:42:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lrlvv9/in_what_format_should_i_encodetransform_my_data/ | heil_ali | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrlvv9 | false | null | t3_1lrlvv9 | /r/LocalLLaMA/comments/1lrlvv9/in_what_format_should_i_encodetransform_my_data/ | false | false | self | 1 | null |
How are the casual users here using LLMs or/and MCPs? | 18 | I have been exploring LLMs for a while and have been using Ollama and python to just do some formatting, standardisation and conversions of some private files. Beyond this I use Claude to help me with complex excel functions or to help me collate lists of all podcasts with Richard Thaler, for example.
I'm curious abo... | 2025-07-04T15:31:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lrlmco/how_are_the_casual_users_here_using_llms_orand/ | man_eating_chicken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrlmco | false | null | t3_1lrlmco | /r/LocalLLaMA/comments/1lrlmco/how_are_the_casual_users_here_using_llms_orand/ | false | false | self | 18 | null |
Best local Humanizer tool | 1 | Looking to run locally for free. Please responde of you have suggestions. I tried a local llm to spin my AI response, but it was refusing to spin it or rather humanized it. | 2025-07-04T15:25:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lrlgvk/best_local_humanizer_tool/ | redlikeazebra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrlgvk | false | null | t3_1lrlgvk | /r/LocalLLaMA/comments/1lrlgvk/best_local_humanizer_tool/ | false | false | self | 1 | null |
Kwai Keye VL 8B - Very promising new VL model | 38 | The model Kwai Keye VL 8B is available on Huggingface with Apache 2.0 license. It has been built by Kuaishou (1st time I hear of them) on top of Qwen 3 8B and combines it with SigLIP-400M.
Their paper is truly a gem as they detail their pretraining and post-training methodology exhaustively. Haven't tested it yet, but... | 2025-07-04T15:18:18 | https://arxiv.org/abs/2507.01949 | pol_phil | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1lrlags | false | null | t3_1lrlags | /r/LocalLLaMA/comments/1lrlags/kwai_keye_vl_8b_very_promising_new_vl_model/ | false | false | default | 38 | null |
12x3090s + 2x EPYC 7282 monstrously slow without full GPU offload | 1 | Trying to run V3 but when I try to offload to CPU to increase the context it slows to a crawl.
I understand that dual CPU setups have NUMA issues but even using threads=1 results in something like 1t/5s.
Super frustrated because I'm seeing single GPU setups run it blazing fast and wondering why bother with 3090s thes... | 2025-07-04T14:57:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lrkrqu/12x3090s_2x_epyc_7282_monstrously_slow_without/ | cantgetthistowork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrkrqu | false | null | t3_1lrkrqu | /r/LocalLLaMA/comments/1lrkrqu/12x3090s_2x_epyc_7282_monstrously_slow_without/ | false | false | self | 1 | null |
Is fine tuning worth it? | 2 | I have never fine tuned a model before, I want a model/agent to do financial analysis. Can someone help? | 2025-07-04T14:41:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lrkeib/is_fine_tuning_worth_it/ | ManagementNo5153 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrkeib | false | null | t3_1lrkeib | /r/LocalLLaMA/comments/1lrkeib/is_fine_tuning_worth_it/ | false | false | self | 2 | null |
Bedt current model for 48vram | 0 | what are the best current models to use with 48GB of VRAM, a Ryzen 9 9900X, and 96 GB of DDR5 RAM? Should I use them for completion, reformulation, etc. of legal texts? | 2025-07-04T14:40:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lrkdo5/bedt_current_model_for_48vram/ | Bobcotelli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrkdo5 | false | null | t3_1lrkdo5 | /r/LocalLLaMA/comments/1lrkdo5/bedt_current_model_for_48vram/ | false | false | self | 0 | null |
I built a local first vector database! | 1 | For the last couple of months I have been building [Antarys AI](https://www.linkedin.com/company/antarys-ai/),
to solve two major AI probelms!
Faster Compute - Antarys was built to run offline and locally so you can own your AI and with your own data and consume much less compute power than traditional vector databa... | 2025-07-04T14:27:21 | https://v.redd.it/wlsb87qeavaf1 | pacifio | /r/LocalLLaMA/comments/1lrk2jf/i_built_a_local_first_vector_database/ | 1970-01-01T00:00:00 | 0 | {} | 1lrk2jf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wlsb87qeavaf1/DASHPlaylist.mpd?a=1754360848%2CYWUxMWJmZjY0NTkwYTRiZDBmODFmNzMzYzJlMmViMjUzNGYyZTcyNjJiZWYyYmFmN2JmNDRhNTk2MjhhODcxMg%3D%3D&v=1&f=sd', 'duration': 889, 'fallback_url': 'https://v.redd.it/wlsb87qeavaf1/DASH_1080.mp4?source=fallback', '... | t3_1lrk2jf | /r/LocalLLaMA/comments/1lrk2jf/i_built_a_local_first_vector_database/ | false | false | default | 1 | null |
Anyone else feel like working with LLM libs is like navigating a minefield ? | 131 |
I've worked about 7 years in software development companies, and it's "easy" to be a software/backend/web developer because we use tools/frameworks/libs that are mature and battle-tested.
Problem with Django? Update it, the bug was probably fixed ages ago.
With LLMs it's an absolute clusterfuck. You just bought an R... | 2025-07-04T14:22:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lrjy15/anyone_else_feel_like_working_with_llm_libs_is/ | LinkSea8324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrjy15 | false | null | t3_1lrjy15 | /r/LocalLLaMA/comments/1lrjy15/anyone_else_feel_like_working_with_llm_libs_is/ | false | false | self | 131 | {'enabled': False, 'images': [{'id': 'L1dGEw1UUf0GHnpmf5uDpElgs8er0s6PxenyNFO_HEs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L1dGEw1UUf0GHnpmf5uDpElgs8er0s6PxenyNFO_HEs.png?width=108&crop=smart&auto=webp&s=4129a80937cfe58e8199cc33db87b61d13901e85', 'width': 108}, {'height': 108, 'url': 'h... |
Gemma 3 Reasoning | 7 | Is there any Gemma3 based model with reasoning (GRPO) implemented for ollama? Thanks! | 2025-07-04T14:14:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lrjrvg/gemma_3_reasoning/ | Dazz9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrjrvg | false | null | t3_1lrjrvg | /r/LocalLLaMA/comments/1lrjrvg/gemma_3_reasoning/ | false | false | self | 7 | null |
Unmute + Llama.cpp server | 18 | Managed to get unmute to work with llama-server API, (thanks to Gemini 2.5 flash). This modified `llm_utils.py` goes into unmute/llm (note, it might make vLLM not work, haven't tested):
[https://gist.github.com/jepjoo/7ab6da43c3e51923eeaf278eac47c9c9](https://gist.github.com/jepjoo/7ab6da43c3e51923eeaf278eac47c9c9)
R... | 2025-07-04T14:05:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lrjkx3/unmute_llamacpp_server/ | rerri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrjkx3 | false | null | t3_1lrjkx3 | /r/LocalLLaMA/comments/1lrjkx3/unmute_llamacpp_server/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h... |
What are some locally hosted Killer Apps? | 15 | What are your locally hosted killer apps at the moment. What do you show to wow your friends and boss?
I just got asked by a friend since he has been tasked to install a local ai chat but wants to wow his boss and I also realized I have been stuck in the 'helps coding' and 'helps writing' corner for a while. | 2025-07-04T14:00:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lrjg7t/what_are_some_locally_hosted_killer_apps/ | AdOne8437 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrjg7t | false | null | t3_1lrjg7t | /r/LocalLLaMA/comments/1lrjg7t/what_are_some_locally_hosted_killer_apps/ | false | false | self | 15 | null |
Best NSFW LLM for RTX 5090 | 0 | I'm looking for a good NSFW LLM for my 5090 32gb and I have 64gb of DDR5 RAM and i7-12700kcpu. I am wanting something that can Roleplay and read PDF's files. GPT is too limited for me and won't allow me to upload more than 20 files. Anyone have any suggestions? | 2025-07-04T13:59:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lrjftc/best_nsfw_llm_for_rtx_5090/ | Desenbigh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrjftc | false | null | t3_1lrjftc | /r/LocalLLaMA/comments/1lrjftc/best_nsfw_llm_for_rtx_5090/ | false | false | nsfw | 0 | null |
Suggest best nsfw roleplay models | 0 | I'm pretty new to localLLaMA. Before this I've only tried online rp chatbots. Please suggest me best nowadays and how does the context size work locally? Online models have pretty much limited context that takes away the fun. | 2025-07-04T13:38:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lriyw6/suggest_best_nsfw_roleplay_models/ | Far-Reward6867 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lriyw6 | false | null | t3_1lriyw6 | /r/LocalLLaMA/comments/1lriyw6/suggest_best_nsfw_roleplay_models/ | false | false | nsfw | 0 | null |
Great price on a 5090 | 552 | About to pull the trigger on this one I can't believe how cheap it is. | 2025-07-04T12:53:29 | psdwizzard | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lri12r | false | null | t3_1lri12r | /r/LocalLLaMA/comments/1lri12r/great_price_on_a_5090/ | false | false | default | 552 | {'enabled': True, 'images': [{'id': '1en1lic1uuaf1', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/1en1lic1uuaf1.jpeg?width=108&crop=smart&auto=webp&s=f0892aac54334c15c1614bc8d67b8f98944cf56b', 'width': 108}, {'height': 256, 'url': 'https://preview.redd.it/1en1lic1uuaf1.jpeg?width=216&crop=smart&auto=... | |
Best iOS app with local OpenAI-like API endpoint? | 4 | I'll describe my ideal app on my phone for all my local LLM conversations:
\- native iOS app
\- OpenAI-like API endpoint (to connect to LM Studio on my local network, when I'm on the go using Tailscale to stay connected)
\- multimodal support: images, STT, TTS
\- conversation history easily exportable or synced... | 2025-07-04T12:37:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lrhpl8/best_ios_app_with_local_openailike_api_endpoint/ | PardusHD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrhpl8 | false | null | t3_1lrhpl8 | /r/LocalLLaMA/comments/1lrhpl8/best_ios_app_with_local_openailike_api_endpoint/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'KYxsyzIADIzZqP010FD_ZiRelMQaR4luE0l42uVReV4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KYxsyzIADIzZqP010FD_ZiRelMQaR4luE0l42uVReV4.png?width=108&crop=smart&auto=webp&s=6a2b4a70cadfdf08aec7b1d6bf6a9f16333dd403', 'width': 108}, {'height': 113, 'url': 'h... |
How can i use bitnet on phone i have tried chatterui and it crashed | 0 | . | 2025-07-04T12:01:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lrh0kk/how_can_i_use_bitnet_on_phone_i_have_tried/ | ENTJ_bro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrh0kk | false | null | t3_1lrh0kk | /r/LocalLLaMA/comments/1lrh0kk/how_can_i_use_bitnet_on_phone_i_have_tried/ | false | false | self | 0 | null |
Fridays LocalLLama Musings | 0 | Hey LL's
I had been planning on creating content for awhile on general topics that come up on LocalLama, one of my fave places to stay up to date.
A little bit about me, I have been a software engineer for almost 20 years working mostly on open source, and most of that focused on security, and for the past two yea... | 2025-07-04T11:54:34 | https://www.youtube.com/watch?v=LlN3BSGccDk | RedDotRocket | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1lrgw16 | false | {'oembed': {'author_name': 'Luke Hinds', 'author_url': 'https://www.youtube.com/@luke_hinds', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/LlN3BSGccDk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;... | t3_1lrgw16 | /r/LocalLLaMA/comments/1lrgw16/fridays_localllama_musings/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'tbAnLI2hcPDYK6O9MNFyMx-Ai_BkxCX_oyNLg9pz3V4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tbAnLI2hcPDYK6O9MNFyMx-Ai_BkxCX_oyNLg9pz3V4.jpeg?width=108&crop=smart&auto=webp&s=b43969e074528a2ad9cd1000e2a67dd1e70ca8d1', 'width': 108}, {'height': 162, 'url': '... |
AI agents forget 71% of their work and don't know it - what we discovered building DevPartner | 1 | [removed] | 2025-07-04T11:45:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lrgq1l/ai_agents_forget_71_of_their_work_and_dont_know/ | Adventurous-Snow2584 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrgq1l | false | null | t3_1lrgq1l | /r/LocalLLaMA/comments/1lrgq1l/ai_agents_forget_71_of_their_work_and_dont_know/ | false | false | self | 1 | null |
MCP 2025-06-18 Spec Update: Security, Structured Output & Elicitation | 69 | The Model Context Protocol has faced a lot of criticism due to its security vulnerabilities. Anthropic recently released a new Spec Update (`MCP v2025-06-18`) and I have been reviewing it, especially around security. Here are the important changes you should know.
1. MCP servers are classified as OAuth 2.0 Resource Se... | 2025-07-04T11:42:49 | https://forgecode.dev/blog/mcp-spec-updates/ | anmolbaranwal | forgecode.dev | 1970-01-01T00:00:00 | 0 | {} | 1lrgomi | false | null | t3_1lrgomi | /r/LocalLLaMA/comments/1lrgomi/mcp_20250618_spec_update_security_structured/ | false | false | default | 69 | null |
What is the best local TTS to use in Python on an average 8GB RAM machine, better than KORORO? | 0 | I need a good TTS that will run on an average 8GB of RAM; it can take all the time it needs to render the audio (it does not need to be fast), but the audio should be as expressive as possible.
I already tried Coqui TTS and Parler TTS, which are kind of OK but not expressive enough.
I then asked like a year ago and you guys su... | 2025-07-04T11:33:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lrgizm/what_is_the_best_python_best_local_tts_to_use_in/ | DiscoverFolle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrgizm | false | null | t3_1lrgizm | /r/LocalLLaMA/comments/1lrgizm/what_is_the_best_python_best_local_tts_to_use_in/ | false | false | self | 0 | null |
Is there a rule between alpha (α) and rank (r) for LoRA? | 9 | Meaning, should alpha be double the rank or it doesn't matter much? | 2025-07-04T11:25:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lrgdzg/is_there_a_rule_between_alpha_α_and_rank_r_for/ | TechNerd10191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrgdzg | false | null | t3_1lrgdzg | /r/LocalLLaMA/comments/1lrgdzg/is_there_a_rule_between_alpha_α_and_rank_r_for/ | false | false | self | 9 | null |
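Context for the question above, taken from the standard LoRA formulation: alpha only enters as a scaling factor on the low-rank update, so what matters is the ratio alpha/r rather than either value alone.

$$
W' = W + \frac{\alpha}{r}\, B A, \qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k}
$$

Doubling the rank while keeping alpha fixed halves the effective scale of the update, which is why the "alpha = 2x rank" advice circulates; it simply keeps alpha/r constant at 2. That is a common convention rather than a hard rule, and the best ratio still interacts with the learning rate and task.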
🧠 Affordable GPU Servers for AI Devs (A100 / 4090) – Free Trial | 1 | [removed] | 2025-07-04T11:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lrgdw6/affordable_gpu_servers_for_ai_devs_a100_4090_free/ | Prudent-Ambition-311 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrgdw6 | false | null | t3_1lrgdw6 | /r/LocalLLaMA/comments/1lrgdw6/affordable_gpu_servers_for_ai_devs_a100_4090_free/ | false | false | self | 1 | null |
Can I use ollama to run https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF/blob/main/DeepSeek-R1-0528-Qwen3-8B-UD-Q4_K_XL.gguf ? | 0 | I don't understand when you can use ollama to run huggingface models. Can that model be used with ollama? | 2025-07-04T11:23:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lrgcd4/can_i_use_ollama_to_run/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrgcd4 | false | null | t3_1lrgcd4 | /r/LocalLLaMA/comments/1lrgcd4/can_i_use_ollama_to_run/ | false | false | self | 0 | null |
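A hedged sketch of what is usually meant here: Ollama can pull GGUF repos directly from Hugging Face using an hf.co/ model name, with the quantization name as the tag. The tag below is an assumption taken from the linked filename and has to match a quant actually published in that repo.

```python
# Minimal sketch using the `ollama` Python client against a locally running Ollama server.
# The model string mirrors the CLI form: `ollama run hf.co/<user>/<repo>:<quant>`.
import ollama

model = "hf.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF:UD-Q4_K_XL"  # tag assumed from the filename

ollama.pull(model)  # downloads the GGUF from Hugging Face through Ollama

response = ollama.chat(
    model=model,
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
)
print(response["message"]["content"])
```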
Marinara’s AI Discord Buddies | 1 | [removed] | 2025-07-04T11:09:39 | https://github.com/SpicyMarinara/Discord-Buddy | Meryiel | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lrg40p | false | null | t3_1lrg40p | /r/LocalLLaMA/comments/1lrg40p/marinaras_ai_discord_buddies/ | false | false | default | 1 | null |
Marinara’s AI Discord Buddies | 1 | [removed] | 2025-07-04T10:55:25 | https://www.reddit.com/gallery/1lrfv1n | Meryiel | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lrfv1n | false | null | t3_1lrfv1n | /r/LocalLLaMA/comments/1lrfv1n/marinaras_ai_discord_buddies/ | false | false | 1 | null | |
What kind of models can I run with my new hardware? | 0 | |Component|Details|
|:-|:-|
|GPU|RTX 3090, 24GB VRAM|
|CPU|Ryzen 9 9950X3D, 32 threads, 192MB L3|
|RAM|192GB DDR5 3600 MHz|
I am using webui as a backend. What type of GGUF models (30B/70B, with 8-bit/4-bit quantization, etc.) can I run? How much should I offload to the GPU and how much to the CPU while keeping a reasonable t/s?
Also, is... | 2025-07-04T10:43:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lrfo4i/what_kind_of_models_can_i_run_with_my_new_hardware/ | GTurkistane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrfo4i | false | null | t3_1lrfo4i | /r/LocalLLaMA/comments/1lrfo4i/what_kind_of_models_can_i_run_with_my_new_hardware/ | false | false | self | 0 | null |
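Not a sizing recommendation, but a minimal sketch of the knob the question above is really about: GGUF runners expose a GPU-layer count, and whatever is not offloaded runs from system RAM on the CPU. Shown here with llama-cpp-python; the model path and layer count are placeholders.

```python
# Hedged sketch: split a quantized GGUF between a 24 GB GPU and system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-70b-Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=40,   # layers held in VRAM; raise until VRAM runs out
    n_ctx=8192,        # context length; the KV cache also consumes VRAM
    n_threads=16,      # CPU threads for the layers left on the CPU
)

out = llm("Q: What does n_gpu_layers control?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```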
The future of AI won’t be cloud-first. It’ll be chain-native. | 0 | AI has grown up inside centralized clouds—fast, convenient, but tightly controlled. The problem? As AI becomes more powerful and influential, questions around transparency, ownership, and control are only getting louder.
Cloud-first AI can’t answer those questions. Chain-native AI can.
This shift isn’t just about put... | 2025-07-04T10:25:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lrfdq8/the_future_of_ai_wont_be_cloudfirst_itll_be/ | Maleficent_Apple_287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrfdq8 | false | null | t3_1lrfdq8 | /r/LocalLLaMA/comments/1lrfdq8/the_future_of_ai_wont_be_cloudfirst_itll_be/ | false | false | self | 0 | null |
Question about GPUs (I know this isn't the best place, but askscience/asckcompsci removed it) | 2 | Sorry to trouble you guys; I know it's not the subreddit for this, but I can't seem to find one that doesn't auto-remove my post without any message as to why. I am just trying to find an answer to something I don't know about GPUs that I can't figure out; it's for my PhD thesis:
tldr; i work in computational chemistry. i do this thing... | 2025-07-04T09:57:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lrexwm/question_about_gpus_i_know_this_isnt_the_best/ | NewspaperPossible210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrexwm | false | null | t3_1lrexwm | /r/LocalLLaMA/comments/1lrexwm/question_about_gpus_i_know_this_isnt_the_best/ | false | false | self | 2 | null |
30-60tok/s on 4bit local LLM, iPhone 16. | 83 | Hey all, I’m an AI/LLM enthusiast coming from a mobile dev background (iOS, Swift). I’ve been building a local inference engine, tailored for Metal-first, real-time inference on iOS (iPhone + iPad).
I’ve been benchmarking on iPhone 16 and hitting what seem to be high token/s rates for 4-bit quantized models.
Current ... | 2025-07-04T09:50:53 | Specific_Opinion_573 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lreu44 | false | null | t3_1lreu44 | /r/LocalLLaMA/comments/1lreu44/3060toks_on_4bit_local_llm_iphone_16/ | false | false | 83 | {'enabled': True, 'images': [{'id': 'uQNJZpTH3DR_345eUwYV66__t-z4DFq2QoGmMo8Fras', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/1pi871kgxtaf1.jpeg?width=108&crop=smart&auto=webp&s=c20d57ad84da56626439ad72013e3790e5404e9a', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/1pi871kgxtaf1.jpe... | ||
pytorch 2.7.x no longer supports Pascal architecture? | 14 | I got these warnings:
/home/user/anaconda3/lib/python3.12/site-packages/torch/cuda/__init__.py:262: UserWarning:
Found GPU0 NVIDIA GeForce GT 1030 which is of cuda capability 6.1.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability support... | 2025-07-04T09:46:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lrerwe/pytorch_27x_no_longer_supports_pascal_architecture/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrerwe | false | null | t3_1lrerwe | /r/LocalLLaMA/comments/1lrerwe/pytorch_27x_no_longer_supports_pascal_architecture/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'akOX9n901M19sZspfWwfi0njVQhgKCdPXxQXMrrTCpM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/akOX9n901M19sZspfWwfi0njVQhgKCdPXxQXMrrTCpM.png?width=108&crop=smart&auto=webp&s=20375fdf4c207a82be22152fe6cce0f4a088a374', 'width': 108}, {'height': 121, 'url': 'h... |
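For anyone hitting the same warning, a small diagnostic sketch: it prints the installed wheel, the GPU's compute capability, and the architectures the binary was built for. Whether a given wheel still ships Pascal (sm_61) kernels depends on how it was built, so the usual workarounds are pinning an older wheel or building from source.

```python
# Diagnostic only: report the PyTorch build and the GPU it sees.
import torch

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print("device:", torch.cuda.get_device_name(0), f"| compute capability {major}.{minor}")
    print("architectures in this build:", torch.cuda.get_arch_list())
else:
    print("No usable CUDA device for this build.")
```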
Help regarding synthetic data generation and benchmarking | 1 | I am planning on creating data for histopathology medical report summarization. Since there isn't any publicly available dataset, I plan on creating a synthetic dataset using the OpenAI API.
My goal is to fine-tune an SLM on the dataset - my plan is to see how effective Chain-of-Thought-based fine-tuning is in medical report... | 2025-07-04T09:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lreamd/help_regarding_synthetic_data_generation_and/ | NPCompletePet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lreamd | false | null | t3_1lreamd | /r/LocalLLaMA/comments/1lreamd/help_regarding_synthetic_data_generation_and/ | false | false | self | 1 | null |
ERNIE 4.5 (Baidu's new open-source model) now runs locally with llama.cpp! | 1 | [removed] | 2025-07-04T09:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lre81c/ernie_45_baidus_new_opensource_model_now_runs/ | Responsible-Host1800 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lre81c | false | null | t3_1lre81c | /r/LocalLLaMA/comments/1lre81c/ernie_45_baidus_new_opensource_model_now_runs/ | false | false | self | 1 | null |
Apple M4 Max or AMD Ryzen AI Max+ 395 (Framework Desktop) | 52 | I'm working on an LLM project for my CS degree where I need to run models locally because of sensitive data. My current desktop PC is quite old now (Windows, i5-6600K, 16GB RAM, GTX 1060 6GB) and only capable of running small models, so I want to upgrade it anyway. I saw a few people recommending Apple's ARM for the j... | 2025-07-04T09:02:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lre3x9/apple_m4_max_or_amd_ryzen_ai_max_395_framwork/ | zeltbrennt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lre3x9 | false | null | t3_1lre3x9 | /r/LocalLLaMA/comments/1lre3x9/apple_m4_max_or_amd_ryzen_ai_max_395_framwork/ | false | false | self | 52 | null |
Fine-tuning LLM PoC | 1 | Hi everyone,
I have only worked with big enterprise models so far.
I would like to run a fine-tuning PoC for a small pretrained model.
Please suggest up to 3 selections for the following:
1. Dataset selection (dataset for text classification or sentiment analysis)
2. Model selection (which are the best small mo... | 2025-07-04T08:40:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lrdrzi/finetuning_llm_poc/ | QueRoub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrdrzi | false | null | t3_1lrdrzi | /r/LocalLLaMA/comments/1lrdrzi/finetuning_llm_poc/ | false | false | self | 1 | null |
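Not a ranking of the requested selections, but a hedged sketch of what a minimal text-classification PoC usually looks like with Hugging Face Transformers; the model and dataset names below (distilbert-base-uncased, GLUE SST-2) are placeholder choices, not endorsements.

```python
# Minimal fine-tuning PoC: small encoder + SST-2 sentiment classification.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("glue", "sst2")  # placeholder sentiment dataset

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="poc-out", num_train_epochs=1,
                         per_device_train_batch_size=16)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),  # small slice for a fast PoC
    eval_dataset=encoded["validation"],
)
trainer.train()
print(trainer.evaluate())
```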
What is with all the ‘don’t use local’ replies lately? | 1 | [removed] | 2025-07-04T08:33:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lrdoj8/what_is_with_all_the_dont_use_local_replies_lately/ | FunnyAsparagus1253 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrdoj8 | false | null | t3_1lrdoj8 | /r/LocalLLaMA/comments/1lrdoj8/what_is_with_all_the_dont_use_local_replies_lately/ | false | false | self | 1 | null |
Best models July 2025 to run on 16gb vram? | 0 | Hey, I know this probably gets repeated a bunch of times.
Anyway
I was wondering what the best overall set of models would be great to run that fits within 16gb vram.
-Creative Writing
-Worldbuilding assistance and general creativity
- coding
-translation
-RP/ERP
-Summarization
Thank you | 2025-07-04T08:27:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lrdl05/best_models_july_2025_to_run_on_16gb_vram/ | itis_whatit-is | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrdl05 | false | null | t3_1lrdl05 | /r/LocalLLaMA/comments/1lrdl05/best_models_july_2025_to_run_on_16gb_vram/ | false | false | self | 0 | null |
Picking the perfect model/architecture for a particular task. | 1 | How do you guys approach this problem? Say you have problem x in mind with expected solution y.
You pick a model and work with it (like gpt-4.1, gemini-2.5-pro, sonnet-4, etc.), but it turns out its basic intelligence is not working out.
I am assuming most of the models might be pre-trained on almost same data, just prep... | 2025-07-04T08:15:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lrdej8/picking_the_perfect_modelarchitecture_for/ | akash-vekariya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrdej8 | false | null | t3_1lrdej8 | /r/LocalLLaMA/comments/1lrdej8/picking_the_perfect_modelarchitecture_for/ | false | false | self | 1 | null |
Did I just waste all my money on Local Llama? | 0 | I use the free version of ChatGPT a lot, so when a video popped up on YouTube about running Llama locally I thought it would be a good investment. I bought a very capable PC and GPU, and setup was really easy.
My GPU has 11GB of VRAM and can run many models no sweat, but the answers are always wrong; I've almost never h... | 2025-07-04T08:02:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lrd7pb/did_i_just_waste_all_my_money_on_local_llama/ | Salamander500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lrd7pb | false | null | t3_1lrd7pb | /r/LocalLLaMA/comments/1lrd7pb/did_i_just_waste_all_my_money_on_local_llama/ | false | false | self | 0 | null |