| title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What do you think of self-hosting a small LLM on a VPS or abstracted container, calling it externally for simple AI agents/API calls? Cheaper or more expensive than bigger models? | 1 | Investigating this idea myself, and noting it down. Thought I'd post it as a discussion in case people have roasts/suggestions before I revisit it. I'll research all this myself but if anyone wants to criticize or correct me, that would be welcome
Could be done on any platform that has plug and play for Node.js?
Is t... | 2025-07-21T08:37:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m5djms/what_do_you_think_of_selfhosting_a_small_llm_on_a/ | angry_cactus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m5djms | false | null | t3_1m5djms | /r/LocalLLaMA/comments/1m5djms/what_do_you_think_of_selfhosting_a_small_llm_on_a/ | false | false | self | 1 | null |
What consumer hardware do I need to run Kimi-K2 | 0 | Hi, I am looking to run Kimi-K2 locally with reasonable response. What hardware would I need (excluding NVidia 6000 series cards)? Could I run a cluster of Macs? | 2025-07-21T08:24:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m5dcqj/what_consumer_hardware_do_i_need_to_run_kimik2/ | stealthmatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m5dcqj | false | null | t3_1m5dcqj | /r/LocalLLaMA/comments/1m5dcqj/what_consumer_hardware_do_i_need_to_run_kimik2/ | false | false | self | 0 | null |
ik_llama.cpp 404: temporary repo up to commit d44c2d3 | 46 | For those interested, here is a temporary copy pulled just before the official repo went 404.
[https://github.com/PieBru/ik\_llama.cpp\_temp\_copy](https://github.com/PieBru/ik_llama.cpp_temp_copy) | 2025-07-21T08:13:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m5d66p/ik_llamacpp_404_temporary_repo_up_to_commit/ | PieBru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m5d66p | false | null | t3_1m5d66p | /r/LocalLLaMA/comments/1m5d66p/ik_llamacpp_404_temporary_repo_up_to_commit/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'OeOxSPElJ7feBpmIHwfN3BbOjC_fgsHhahVsK3oEdNw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OeOxSPElJ7feBpmIHwfN3BbOjC_fgsHhahVsK3oEdNw.png?width=108&crop=smart&auto=webp&s=9caec387fe1ebd69552e3f38510d53745dc8e07c', 'width': 108}, {'height': 108, 'url': 'h... |
GGUF on Android Studio | 0 | Is there way to run the GGUF files on Android Studio? Maybe with llama.cpp? I have been trying to build a wrapper around llama.cpp with Kotlin+Java but there must be a better solution. | 2025-07-21T08:07:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m5d2cv/gguf_on_android_studio/ | neural-learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m5d2cv | false | null | t3_1m5d2cv | /r/LocalLLaMA/comments/1m5d2cv/gguf_on_android_studio/ | false | false | self | 0 | null |
I'm Sorry Dave, I'm afraid I can't do that - Kimi K2 research returns blank. | 1 | Gave KIMI K2 research Agent a task to find me lucrative stocks with a 200% gain probability in the short term - After almost 10 minutes of research - it came back and told me it's not possible. Love this No BS attitude from KIMI | 2025-07-21T07:55:56 | Parker93GT | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m5cw0t | false | null | t3_1m5cw0t | /r/LocalLLaMA/comments/1m5cw0t/im_sorry_dave_im_afraid_i_cant_do_that_kimi_k2/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '2b2l0i1qn6ef1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/2b2l0i1qn6ef1.png?width=108&crop=smart&auto=webp&s=8cc207c28f1fde7a951b4902db3291ed55fc6194', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/2b2l0i1qn6ef1.png?width=216&crop=smart&auto=web... | |
What model shall i run? | 0 | hardware info: in second pic. | 2025-07-21T07:46:52 | https://www.reddit.com/gallery/1m5cr11 | RevolutionaryBus4545 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m5cr11 | false | null | t3_1m5cr11 | /r/LocalLLaMA/comments/1m5cr11/what_model_shall_i_run/ | false | false | 0 | null | |
ONNX or GGUF | 7 | I'm having a hard time figuring out which one is better, and why. | 2025-07-21T07:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m5ckr0/onnx_or_gguf/ | Xitizdumb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m5ckr0 | false | null | t3_1m5ckr0 | /r/LocalLLaMA/comments/1m5ckr0/onnx_or_gguf/ | false | false | self | 7 | null |
How's your experimentation with MCP going? | 3 | Anyone here having fun time using MCP? I've just started to look around into it and was wondering that most of the tutorial are based out of claude desktop or cursor. Anyone here experimenting it out without them (using streamlit or fastAPI). | 2025-07-21T06:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m5bccx/hows_your_experimentation_with_mcp_going/ | UnfinishedSentenc-1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m5bccx | false | null | t3_1m5bccx | /r/LocalLLaMA/comments/1m5bccx/hows_your_experimentation_with_mcp_going/ | false | false | self | 3 | null |
What makes a model ethical? | 6 | People have started throwing the terms ethical and ethics around with respect and I'm not sure how to read those terms. Is a more ethical model one which was trained using "less" electricity with something made on a raspberry pi approaching "peak" ethicalness? Are the inputs to a model more important? Less? How do both... | 2025-07-21T04:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m59xzv/what_makes_a_model_ethical/ | KnownDairyAcolyte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m59xzv | false | null | t3_1m59xzv | /r/LocalLLaMA/comments/1m59xzv/what_makes_a_model_ethical/ | false | false | self | 6 | null |
First time using QLoRa results in gibberish | 12 | I am trying to fine tune a LlaVa model, I have a training set of 7800 high quality conversations, each with an image.
I am using QLoRA to fine-tune the model, and regardless of the batch size, the lr, and the rank, all of my trials so far have resulted in gibberish on evaluation.
I did some reading, and in order to... | 2025-07-21T03:48:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m58qf3/first_time_using_qlora_results_in_gibberish/ | Emotional-Sundae4075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m58qf3 | false | null | t3_1m58qf3 | /r/LocalLLaMA/comments/1m58qf3/first_time_using_qlora_results_in_gibberish/ | false | false | self | 12 | null |
Model to retrieve information from Knowledge. | 3 | Currently using Ollama with OpenWebUI on a dedicated PC. This has a Intel Xeon E5v2, 32gb Ram and 2x Titan V 12GB (have a third on its way). Limited budget and this is roughly what I have to play with right now.
I was wanting to add about 20-30 pdf documents to a knowledge base. I would then have an LLM to find and ... | 2025-07-21T03:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/1m58ohn/model_to_retrieve_information_from_knowledge/ | themungbeans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m58ohn | false | null | t3_1m58ohn | /r/LocalLLaMA/comments/1m58ohn/model_to_retrieve_information_from_knowledge/ | false | false | self | 3 | null |
Any smaller moe model that think in latent space on the horizon? | 0 | I think a model like qwen3 30B 3AB that thinks in latent space would be great for running locally. If I'm understanding the way it would work, it would save context by not having the model output its thinking. It would be faster on low-end setups too, with less memory-bandwidth bottleneck from not needing to output as many tokens... | 2025-07-21T03:40:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m58kqj/any_smaller_moe_model_that_think_in_latent_space/ | ArchdukeofHyperbole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m58kqj | false | null | t3_1m58kqj | /r/LocalLLaMA/comments/1m58kqj/any_smaller_moe_model_that_think_in_latent_space/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'CslI8S1QqwM_0Cj3DEKPHpwUxKBh1TI8Z-6Eko4vFKQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CslI8S1QqwM_0Cj3DEKPHpwUxKBh1TI8Z-6Eko4vFKQ.png?width=108&crop=smart&auto=webp&s=b33823d38ce5dcd32e1ee8972af0acb81b108d73', 'width': 108}, {'height': 116, 'url': 'h... |
Which local 100B+ heavy weight models are your favorite and why? | 110 | 1. Mistral\_large-Instruct
2. Qwen3-235B
3. Command-A
4. Deepseek-V3
5. Deepseek-R1
6. Deepseek-R1-0528
7. Deepseek-TNG-R1T2-Chimera
8. Kimi-K2
9. Ernie-4.5-300b
10. llama3.1-405B
11. llama3.1-Nemotron-Ultra-253b?
12. Others? | 2025-07-21T03:19:13 | https://www.reddit.com/r/LocalLLaMA/comments/1m58695/which_local_100b_heavy_weight_models_are_your/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m58695 | false | null | t3_1m58695 | /r/LocalLLaMA/comments/1m58695/which_local_100b_heavy_weight_models_are_your/ | false | false | self | 110 | null |
Which LLMs, tools, or research have been overlooked or deserve more attention? | 34 | Hello!
I feel like there have been a lot of new releases in the past few weeks after a relatively quiet period following the Qwen3 release.
Of course, there was the new Deepseek model, and now Kimi. But what is the consensus on the other, somewhat smaller LLMs that came out? Models like Jamba-Mini-1.7, Hunyuan-A13B-I... | 2025-07-21T03:13:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m5827d/which_llms_tools_or_research_have_been_overlooked/ | MDT-49 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m5827d | false | null | t3_1m5827d | /r/LocalLLaMA/comments/1m5827d/which_llms_tools_or_research_have_been_overlooked/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'KasKHMjBTqO8qG3YViHMjFsBwsBOo-TVs2fkqqj8qBo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KasKHMjBTqO8qG3YViHMjFsBwsBOo-TVs2fkqqj8qBo.png?width=108&crop=smart&auto=webp&s=326925660fb97ea07b8f47320d9a931b6f3b8850', 'width': 108}, {'height': 116, 'url': 'h... |
Weird ollama pull behaviour | 0 | Has anyone else noticed this weird behaviour when you pull a model (I am using SSH) and after about 3 mins the speed plummets? I then terminate the command and re-issue it. Then I get back to fast speeds and it carries on from where it left off?
https://preview.redd.it/bqei1o1u75ef1.png?width=972&format=png&auto=web... | 2025-07-21T03:02:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m57utu/weird_ollama_pull_behaviour/ | themungbeans | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m57utu | false | null | t3_1m57utu | /r/LocalLLaMA/comments/1m57utu/weird_ollama_pull_behaviour/ | false | false | 0 | null | |
Pseudo RAID and Kimi-K2 | 4 | I have Threadripper 2970WX uses a PCI-Express Gen 3
256GB DDR4 + 5090
I ran Kimi-K2-Instruct-UD-Q2\_K\_XL (354.9GB) and got 2t/sec
I have 4 SSD drives. I made symbolic links. I put 2 files on each drive and got 2.3t/sec
cheers! =) | 2025-07-21T02:45:13 | https://www.reddit.com/r/LocalLLaMA/comments/1m57iep/pseudo_raid_and_kimik2/ | Defiant_Diet9085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m57iep | false | null | t3_1m57iep | /r/LocalLLaMA/comments/1m57iep/pseudo_raid_and_kimik2/ | false | false | self | 4 | null |
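The pseudo-RAID trick in the post above (spreading GGUF shard files across drives, then symlinking them back into one directory so the loader reads from several disks in parallel) can be sketched as follows. This is a minimal illustration, not the poster's exact setup; the function name and shard layout are hypothetical, and real gains depend on how the loader actually reads the shards.

```python
import os
import shutil

def spread_shards(shards, drives, model_dir):
    """Round-robin GGUF shard files across drives, then symlink them
    back into a single directory so llama.cpp still sees one model path.
    Reads of consecutive shards then hit different disks in parallel."""
    os.makedirs(model_dir, exist_ok=True)
    links = []
    for i, shard in enumerate(shards):
        drive = drives[i % len(drives)]  # pick the next drive in rotation
        target = shutil.move(shard, os.path.join(drive, os.path.basename(shard)))
        link = os.path.join(model_dir, os.path.basename(shard))
        os.symlink(target, link)  # link the moved shard back into model_dir
        links.append(link)
    return links
```

llama.cpp would then be pointed at the first shard inside `model_dir` as usual; `shutil.move` (rather than `os.rename`) is used because the shards cross filesystem boundaries when the drives are real separate disks.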
ASCII art and LocalLLMa | 2 | Hi folks, if you have a couple of minutes could you please check if your favorite llm can recognize/depict mid size ASCII art (like 20*20 chars)
And please share your settings like temperature, minP, topK, topP etc.
Based on my observations qwen 235b on default settings is able to depict some ASCII art. | 2025-07-21T02:45:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m57icy/ascii_art_and_localllma/ | Bitter_Square6273 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m57icy | false | null | t3_1m57icy | /r/LocalLLaMA/comments/1m57icy/ascii_art_and_localllma/ | false | false | self | 2 | null |
A question about running Ollama locally for NSFW/ERP | 0 | Today I just installed Ollama on my local machine and Open-WebUI. I was wondering if there's a better UI/way for running models for ERP/NSFW. Like saying a character and having it as a character. I have no idea what I'm doing and I'm new to this so any help would be appreciated. Currently I have dolphin-mixtral-8x7B an... | 2025-07-21T02:29:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m576x6/a_question_about_running_ollama_locally_for/ | JacobBender92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m576x6 | false | null | t3_1m576x6 | /r/LocalLLaMA/comments/1m576x6/a_question_about_running_ollama_locally_for/ | false | false | nsfw | 0 | null |
why are there quite different quant strategies of bartowski and unsloth on MoE? | 26 | [https://huggingface.co/bartowski/baidu\_ERNIE-4.5-21B-A3B-PT-GGUF](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF)
[https://huggingface.co/unsloth/ERNIE-4.5-21B-A3B-PT-GGUF](https://huggingface.co/unsloth/ERNIE-4.5-21B-A3B-PT-GGUF)
they are quant of a same model. at a same quant, e.g. both Q3\_K\_M... | 2025-07-21T02:18:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m56z4m/why_are_there_quite_different_quant_strategies_of/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m56z4m | false | null | t3_1m56z4m | /r/LocalLLaMA/comments/1m56z4m/why_are_there_quite_different_quant_strategies_of/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'wHvNbmd8WWIQFMd6cKGd0f5flJJfJGOyvWzZPM8zur8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wHvNbmd8WWIQFMd6cKGd0f5flJJfJGOyvWzZPM8zur8.png?width=108&crop=smart&auto=webp&s=d07f61f99c0ca8c045bf1472163f101485f90b26', 'width': 108}, {'height': 116, 'url': 'h... | |
A solution to deploy your LLM agent with one click | 0 | Hello devs,
The idea came from while I was working on a personal project. When I tried to deploy my agent into the cloud, I ran into a lot of headaches — setting up VMs, writing config, handling crashes. I decided to build a solution for it and called it Agentainer.
Agentainer’s goal is to let anyone (even coding age... | 2025-07-21T02:16:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m56x5u/a_solution_to_deploy_your_llm_agent_with_one_click/ | Tradingoso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m56x5u | false | null | t3_1m56x5u | /r/LocalLLaMA/comments/1m56x5u/a_solution_to_deploy_your_llm_agent_with_one_click/ | false | false | self | 0 | null |
[2507.09850] The Challenge of Teaching Reasoning to LLMs Without RL or Distillation | 17 | \> Reasoning-capable language models achieve state-of-the-art performance in diverse complex tasks by generating long, explicit Chain-of-Thought (CoT) traces. While recent works show that base models can acquire such reasoning traces via reinforcement learning or distillation from stronger models like DeepSeek-R1, prev... | 2025-07-21T01:43:09 | https://arxiv.org/abs/2507.09850 | TheRealMasonMac | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1m568j8 | false | null | t3_1m568j8 | /r/LocalLLaMA/comments/1m568j8/250709850_the_challenge_of_teaching_reasoning_to/ | false | false | default | 17 | null |
Hitting Data Walls with Local LLM Projects? Check Out This Curated Dataset Resource! | 2 | If you’ve spent any amount of time experimenting with local LLMs, you know that high-quality datasets are the foundation of great results. But tracking down relevant, well-labeled, and community-vetted datasets, especially ones that match your specific use case, can be a huge headache.
Whether you’re:
* Fine tuning models... | 2025-07-21T01:40:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m566jg/hitting_data_walls_with_local_llm_projects_check/ | Creepy-Potential3408 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m566jg | false | null | t3_1m566jg | /r/LocalLLaMA/comments/1m566jg/hitting_data_walls_with_local_llm_projects_check/ | false | false | self | 2 | null |
Help with Finetuning Phi4-Mini | 0 | I’m experimenting with lightweight finetuning of phi-4-mini to alter its speaking style for a project — think tonal adjustments like high-energy, friendly, getting rid of that “I am a artificial intelligence assistant…” stuff, etc. I still want to preserve all tool calling functions (Python, web search, image generatio... | 2025-07-21T01:36:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m563lh/help_with_finetuning_phi4mini/ | Witty_Mycologist_995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m563lh | false | null | t3_1m563lh | /r/LocalLLaMA/comments/1m563lh/help_with_finetuning_phi4mini/ | false | false | self | 0 | null |
How fast is gemma 3 27b on an H100? how many tokens per second can I expect? | 35 | I've seen people say 60/s and I've seen 22000/sec; I don't even know who to believe anymore.
Also how much does optimizing boost the tokens output speed? | 2025-07-21T01:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/1m55rrt/how_fast_is_gemma_3_27b_on_an_h100_how_many/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m55rrt | false | null | t3_1m55rrt | /r/LocalLLaMA/comments/1m55rrt/how_fast_is_gemma_3_27b_on_an_h100_how_many/ | false | false | self | 35 | null |
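The two numbers quoted in the post above are likely measuring different things: single-stream decode speed versus aggregate batched throughput. A rough back-of-envelope sketch (the H100 bandwidth and bf16 model-size figures are my assumptions, not measurements):

```python
def single_stream_tps(mem_bandwidth_gbs, model_size_gb):
    # Single-stream decode is memory-bound: generating one token reads
    # every weight once, so tokens/s is capped by bandwidth / model size.
    return mem_bandwidth_gbs / model_size_gb

# Assumed figures: H100 SXM HBM3 ~3350 GB/s; Gemma 3 27B at bf16 ~54 GB.
print(round(single_stream_tps(3350, 54)))  # ~62 tokens/s upper bound per stream
```

Batched serving amortizes each weight read over many concurrent sequences, which is how aggregate figures in the tens of thousands of tokens/s can also be true for the same card.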
Best novel writing workflow? | 5 | I’m writing a novel that’s near-future literary fiction / soft dystopia / psychological tragedy with erotic elements. I’m subscribed to ChatGPT and Claude, but built a PC to move to local AI without limits and guardrails for the NSFW stuff.
What’s the best workflow for me? I downloaded Oobabooga and a MythosMax model... | 2025-07-21T00:54:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m5586n/best_novel_writing_workflow/ | AccidentalFolklore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m5586n | false | null | t3_1m5586n | /r/LocalLLaMA/comments/1m5586n/best_novel_writing_workflow/ | false | false | self | 5 | null |
Need a totally open, free AI for deep-dive research across global sources | 0 | Looking for a **$0 AI** that can:
* index **global sources** (US, EU, Asia, etc.) without geo-blocks
* handle **long-form docs** (forum dumps, READMEs, entire threads)
* **never refuses** questions on **technical topics** (sandboxing, device IDs, proxy lists, APK internals, etc.)
* **ready today** (browser or local GG... | 2025-07-21T00:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/1m54wfh/need_a_totally_open_free_ai_for_deepdive_research/ | Actual_Sleep_5398 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m54wfh | false | null | t3_1m54wfh | /r/LocalLLaMA/comments/1m54wfh/need_a_totally_open_free_ai_for_deepdive_research/ | false | false | self | 0 | null |
What are the biggest challenges in selling automations (and finding someone to implement them)? Looking for real insights from everyone! | 0 | Hi guys, how are you?
I'm doing research on the automation market — especially automation for small businesses, repetitive tasks, integrations with systems, bots, among other things. I want to better understand two specific pains:
1. For those who want to sell automations (freelancers, agencies, devs, etc.):
– What ... | 2025-07-20T23:01:12 | https://www.reddit.com/r/LocalLLaMA/comments/1m52sj5/what_are_the_biggest_challenges_in_selling/ | Present-Entry8676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m52sj5 | false | null | t3_1m52sj5 | /r/LocalLLaMA/comments/1m52sj5/what_are_the_biggest_challenges_in_selling/ | false | false | self | 0 | null |
U.S. GPU compute available | 0 | Hey all — I’m working on building out Atlas Grid, a new network of U.S.-based GPU hosts focused on reliability and simplicity for devs and researchers.
We’ve got a few committed rigs already online, including a 3080 Ti and 3070 Ti, running on stable secondary machines here in the U.S. — ideal for fine-tuning, inferenc... | 2025-07-20T22:59:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m52r9e/us_gpu_compute_available/ | No_Professional_2726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m52r9e | false | null | t3_1m52r9e | /r/LocalLLaMA/comments/1m52r9e/us_gpu_compute_available/ | false | false | self | 0 | null |
Best multilingual TTS ? | 1 | [removed] | 2025-07-20T22:49:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m52jb2/best_multilingual_tts/ | Mundane-Buyer-6729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m52jb2 | false | null | t3_1m52jb2 | /r/LocalLLaMA/comments/1m52jb2/best_multilingual_tts/ | false | false | self | 1 | null |
Best multilingual TTS (RU lang) ? | 1 | [removed] | 2025-07-20T22:48:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m52hxw/best_multilingual_tts_ru_lang/ | Mundane-Buyer-6729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m52hxw | false | null | t3_1m52hxw | /r/LocalLLaMA/comments/1m52hxw/best_multilingual_tts_ru_lang/ | false | false | self | 1 | null |
Best RAG pipeline for math-heavy documents? | 8 | I’m looking for a solid RAG pipeline that works well with SGLang + AnythingLLM. Something that can handle technical docs, math textbooks with lots of formulas, research papers, and diagrams. The RAG in AnythingLLM is, well, not great. What setups actually work for you? | 2025-07-20T22:47:12 | https://www.reddit.com/r/LocalLLaMA/comments/1m52h8x/best_rag_pipeline_for_mathheavy_documents/ | PO-ll-UX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m52h8x | false | null | t3_1m52h8x | /r/LocalLLaMA/comments/1m52h8x/best_rag_pipeline_for_mathheavy_documents/ | false | false | self | 8 | null |
I posted 3 weeks ago about training my own model. Progress report. | 221 | Hello, I posted that I wanted to train an LLM for under $1000 here: [https://www.reddit.com/r/LocalLLaMA/comments/1lmbtvg/attempting\_to\_train\_a\_model\_from\_scratch\_for\_less/](https://www.reddit.com/r/LocalLLaMA/comments/1lmbtvg/attempting_to_train_a_model_from_scratch_for_less/)
I had to crunch a lot to fi... | 2025-07-20T22:46:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m52h10/i_posted_3_weeks_ago_about_training_my_own_model/ | thebadslime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m52h10 | false | null | t3_1m52h10 | /r/LocalLLaMA/comments/1m52h10/i_posted_3_weeks_ago_about_training_my_own_model/ | false | false | 221 | null | |
FULL Orchids.app System Prompt and Tools | 4 | (Latest update: 21/07/2025)
I've just extracted the FULL Orchids.app system prompt and internal tools. Over 200 lines.
You can check it out at https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools | 2025-07-20T22:43:28 | https://www.reddit.com/r/LocalLLaMA/comments/1m52e50/full_orchidsapp_system_prompt_and_tools/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m52e50 | false | null | t3_1m52e50 | /r/LocalLLaMA/comments/1m52e50/full_orchidsapp_system_prompt_and_tools/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '3J3iRUCJuzk-_Ka1tn-hg8r5ofkLMUYaOdxx_goyGNc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3J3iRUCJuzk-_Ka1tn-hg8r5ofkLMUYaOdxx_goyGNc.png?width=108&crop=smart&auto=webp&s=713542cecbbdf7b9088a103dbe84f99ea6238c0b', 'width': 108}, {'height': 108, 'url': 'h... |
Unstructured financial data for Lama3B | 1 | Hey everyone,
I’ve been trying to OCR tables out of bank statements that only exist as scanned images or non‐selectable PDFs, but I keep running into walls—Tesseract/PaddleOCR gets the text, Camelot/pdfplumber and OpenCV sometimes find gridlines, and regex hacks help a bit, but nothing works reliably across different... | 2025-07-20T22:39:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m52b7l/unstructured_financial_data_for_lama3b/ | LiveMud8172 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m52b7l | false | null | t3_1m52b7l | /r/LocalLLaMA/comments/1m52b7l/unstructured_financial_data_for_lama3b/ | false | false | self | 1 | null |
Questions about AI for translation | 3 | I'm looking for a solution to translate story text from a game. The translation is very domain specific to the fantasy world of the game.
Source is always English or Japanese, output is either English (from JP) or other common languages such as Spanish (from a human English TL).
The text follows a visual novel format... | 2025-07-20T21:00:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m4zyv1/questions_about_ai_for_translation/ | neobenedict | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4zyv1 | false | null | t3_1m4zyv1 | /r/LocalLLaMA/comments/1m4zyv1/questions_about_ai_for_translation/ | false | false | self | 3 | null |
What do we think of Devstral then? | 1 | I've tried it and it's quite good (latest) w/ Cline was my set-up. Why is no one talking about it? 🤔 | 2025-07-20T20:58:25 | https://www.reddit.com/r/LocalLLaMA/comments/1m4zx8s/what_do_we_think_of_devstral_then/ | teleadx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4zx8s | false | null | t3_1m4zx8s | /r/LocalLLaMA/comments/1m4zx8s/what_do_we_think_of_devstral_then/ | false | false | self | 1 | null |
Is there a reason to prefer Nvidia over AMD for programming use cases? | 0 | Hello,
I'm interested in running local LLMs but it's not super clear to me if it's better to have Nvidia over AMD for this use case.
The main idea would be to run local LLMs to hook them up to Cursor/Cline/Roo/etc for programming work.
The budget is fairly limited, I guess maybe 1000€ for GPUs (which I guess could g... | 2025-07-20T20:49:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m4zpqt/is_there_a_reason_to_prefer_nvidia_over_amd_for/ | oblio- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4zpqt | false | null | t3_1m4zpqt | /r/LocalLLaMA/comments/1m4zpqt/is_there_a_reason_to_prefer_nvidia_over_amd_for/ | false | false | self | 0 | null |
Offloading layers | 0 | Simple question: how does offloading layers work in an LLM? For example, if I have a 24 GB RTX 3090 and offload layers of, let's say, 5 GB each, will the model offload only 4 of them, leaving the remaining 4 GB dormant, or will it utilize that somehow as well? Asking because many times, looking at Task Manager under the Performance tab, I s... | 2025-07-20T20:48:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m4zogr/offloading_layers/ | PawelSalsa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4zogr | false | null | t3_1m4zogr | /r/LocalLLaMA/comments/1m4zogr/offloading_layers/ | false | false | self | 0 | null |
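The arithmetic in the question above can be sketched as a toy calculation. The layer and overhead sizes here are illustrative assumptions; in practice a loader like llama.cpp places whole layers on the GPU and uses the leftover VRAM for the KV cache and compute buffers rather than leaving it dormant.

```python
def plan_offload(vram_gb, layer_gb, n_layers, overhead_gb=1.0):
    """How many whole layers fit on the GPU. The leftover VRAM is not
    wasted: it typically holds the KV cache and compute buffers."""
    usable = vram_gb - overhead_gb          # reserve some VRAM for overhead
    gpu_layers = min(n_layers, int(usable // layer_gb))
    leftover = usable - gpu_layers * layer_gb
    return gpu_layers, leftover

gpu_layers, leftover = plan_offload(vram_gb=24, layer_gb=5, n_layers=60)
print(gpu_layers, round(leftover, 1))  # 4 layers on GPU, ~3 GB left for KV cache
```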
Replacing DevOps with agents | 0 | I think most of the DevOps activities can be replaced with agents. Any big thoughts on it? | 2025-07-20T20:42:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m4zj9q/replacing_devops_with_agents/ | AccomplishedUse3344 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4zj9q | false | null | t3_1m4zj9q | /r/LocalLLaMA/comments/1m4zj9q/replacing_devops_with_agents/ | false | false | self | 0 | null |
UTCP Request For Comments | 3 | 2025-07-20T20:34:15 | https://github.com/universal-tool-calling-protocol/utcp-specification/issues/18 | cleverusernametry | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m4zc1c | false | null | t3_1m4zc1c | /r/LocalLLaMA/comments/1m4zc1c/utcp_request_for_comments/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'RhZrpNBxAR6OqNFJorobfrnNwUn9_qi6R0ITk_qYmyU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RhZrpNBxAR6OqNFJorobfrnNwUn9_qi6R0ITk_qYmyU.png?width=108&crop=smart&auto=webp&s=c7550a7c956ca2380007ae3b91d47cb75560396b', 'width': 108}, {'height': 108, 'url': 'h... | |
Why are LLMs not able to give an estimate on their own confidence or say that they are not sure about something? | 2 | Hallucination is a real problem with LLMs, but I wonder: is it really such a hard problem to assign a confidence value to an inference result? | 2025-07-20T20:27:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m4z64o/why_are_llms_not_able_to_give_an_estimate_on/ | MarinatedPickachu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4z64o | false | null | t3_1m4z64o | /r/LocalLLaMA/comments/1m4z64o/why_are_llms_not_able_to_give_an_estimate_on/ | false | false | self | 2 | null |
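Worth noting for the question above: most inference servers already expose per-token logprobs, from which a crude confidence score can be computed. The hard part is that this measures how fluent the model found its own output, not whether the output is factually correct. A minimal sketch:

```python
import math

def sequence_confidence(token_logprobs):
    """Geometric-mean token probability over a generation: a crude proxy
    for how 'sure' the model was. High values mean low perplexity, not
    necessarily factual correctness; closing that gap is the hard problem."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# e.g. logprobs as returned by an OpenAI-compatible completions endpoint
print(sequence_confidence([-0.1, -0.2, -0.05]))
```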
DiffRhythm 1.2 music generation model produces "Avicii vs Nicky Romero - I Could Be the One" nearly verbatim | 52 | And this is how you get sued, lol. I noticed this while playing around with DiffRhythm; I had unrelated lyrics and an unrelated audio prompt set for the generation, and it still injected Avicii into the output, which was really funny.
Skip to 1:00 in the video to skip the generation process
Seed: 50518556518147 | 2025-07-20T20:06:46 | https://v.redd.it/ulng63nd53ef1 | iGermanProd | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4yo0g | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ulng63nd53ef1/DASHPlaylist.mpd?a=1755634019%2CNzVmOGEzOGNjZDAyZmNhMGJjYjNhOWVjZTYzYjJiM2FmOWIzM2ExOWRkYjA3MGFkNzI1ZWU1YTY4MDBiYzQ2NQ%3D%3D&v=1&f=sd', 'duration': 100, 'fallback_url': 'https://v.redd.it/ulng63nd53ef1/DASH_720.mp4?source=fallback', 'h... | t3_1m4yo0g | /r/LocalLLaMA/comments/1m4yo0g/diffrhythm_12_music_generation_model_produces/ | false | false | 52 | {'enabled': False, 'images': [{'id': 'YTYwaWo1bmQ1M2VmMb_CBqNLMKq7B8TFWMuXRmq1kpK-35NHOdsHUCPSQtde', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YTYwaWo1bmQ1M2VmMb_CBqNLMKq7B8TFWMuXRmq1kpK-35NHOdsHUCPSQtde.png?width=108&crop=smart&format=pjpg&auto=webp&s=3623f942ea04411214f32a79d8a31d6c84ffc... | |
Fine-tuned her the perfect local model. Still got API’d 💔 | 107 | 2025-07-20T19:43:10 | Weary-Wing-6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4y3cj | false | null | t3_1m4y3cj | /r/LocalLLaMA/comments/1m4y3cj/finetuned_her_the_perfect_local_model_still_got/ | false | false | default | 107 | {'enabled': True, 'images': [{'id': 'xitr9w9f13ef1', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/xitr9w9f13ef1.png?width=108&crop=smart&auto=webp&s=f18eb7bb3172b0aeea4ed2a63fa91436ba164167', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/xitr9w9f13ef1.png?width=216&crop=smart&auto=we... | ||
SmolVLM2 it's the dumbest shit I ever seen. | 2 | Maybe a monkey can be smarter. | 2025-07-20T19:07:08 | Advi1120 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4x7h1 | false | null | t3_1m4x7h1 | /r/LocalLLaMA/comments/1m4x7h1/smolvlm2_its_the_dumbest_shit_i_ever_seen/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'HYTAPIBRIw31PEr2xC9nW_v7rXCwTp4orPwhfluxOa8', 'resolutions': [{'height': 186, 'url': 'https://preview.redd.it/szn0fp6av2ef1.png?width=108&crop=smart&auto=webp&s=ab2210254e074d14da8fa3c88e1fa8e724e65859', 'width': 108}, {'height': 372, 'url': 'https://preview.redd.it/szn0fp6av2ef1.pn... |
Ikllamacpp repository gone, or it is only me? | 171 | Was seeing if there was a new commit today but when refreshed the page got a 404. | 2025-07-20T18:14:47 | https://github.com/ikawrakow/ik_llama.cpp/commits/main/ | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m4vw29 | false | null | t3_1m4vw29 | /r/LocalLLaMA/comments/1m4vw29/ikllamacpp_repository_gone_or_it_is_only_me/ | false | false | default | 171 | null |
Looking for a Python-based solution to parse finance-related PDFs (like payslips) to JSON — AI or no
Discussion | 1 | [removed] | 2025-07-20T18:13:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m4vv0z/looking_for_a_pythonbased_solution_to_parse/ | Ok-Product3161 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4vv0z | false | null | t3_1m4vv0z | /r/LocalLLaMA/comments/1m4vv0z/looking_for_a_pythonbased_solution_to_parse/ | false | false | self | 1 | null |
Anyone interested in adding their fine-tuned / open source models to this benchmark? | 7 | I've posted on this sub before, but context is that me and a small team are working on a [benchmark](https://www.designarena.ai/) to evaluate how good LLMs are at producing UIs and frontends that are engaging and satisfiable for people.
Right now, working on adding more models, and specifically open source models dev... | 2025-07-20T18:01:43 | Accomplished-Copy332 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4vk88 | false | null | t3_1m4vk88 | /r/LocalLLaMA/comments/1m4vk88/anyone_interested_in_adding_their_finetuned_open/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': '4i6kqeqjj2ef1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/4i6kqeqjj2ef1.png?width=108&crop=smart&auto=webp&s=f15f6bd7f09b6acb627b72a6e55823d63b6f4f07', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/4i6kqeqjj2ef1.png?width=216&crop=smart&auto=web... | |
Best Small LLMs for Tool Calling? | 4 | I am currently building a small app, and I don't want to use large LLMs to call the tools. Instead, I want to use small open-source LLMs for that task. So I was wondering, what are the best models for such a use case? | 2025-07-20T17:53:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m4vcnz/best_small_llms_for_tool_calling/ | Saniok_Digital | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4vcnz | false | null | t3_1m4vcnz | /r/LocalLLaMA/comments/1m4vcnz/best_small_llms_for_tool_calling/ | false | false | self | 4 | null |
Local LLM file access | 1 | I would like to get my local LLM to be able to read and write files. I know they can do it through coding tools but I would like to be able to do at a more basic level. Would I need to use a MCP server or could LMStudio/Ollama do this? I have searched and found "[lm-tool-writer](https://lmstudio.ai/yagil/lm-tool-writer... | 2025-07-20T17:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m4ukgp/local_llm_file_access/ | fishslinger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4ukgp | false | null | t3_1m4ukgp | /r/LocalLLaMA/comments/1m4ukgp/local_llm_file_access/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/jfzX48pNXxWiv1rFE02VrWXPWTtL7fsdf2LTd8mixlQ.png?width=108&crop=smart&auto=webp&s=e7c8590a62cea205bab07f4af2106acd17647234', 'width': 108}, {'height': 212, 'url': '... |
Decentralized LLM inference from your terminal, verified on-chain | 0 | This command runs verifiable LLM inference using Parity Protocol, our open decentralized compute engine.
https://preview.redd.it/fk9nx5j7a2ef1.jpg?width=1226&format=pjpg&auto=webp&s=a90ba9e9b4bac68c43e6af6b453bd0c169b20f25
\- Task gets executed in a distributed way
\- Each node returns output + hash
\- Outputs are ... | 2025-07-20T17:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m4u914/decentralized_llm_inference_from_your_terminal/ | Efficient-Ad-2913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4u914 | false | null | t3_1m4u914 | /r/LocalLLaMA/comments/1m4u914/decentralized_llm_inference_from_your_terminal/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'JvOB4teBJPexclfMC0OuzNuMWooM4UHhYEL6W0yN6Es', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/JvOB4teBJPexclfMC0OuzNuMWooM4UHhYEL6W0yN6Es.jpeg?width=108&crop=smart&auto=webp&s=53c3b00d143605b43ec731d87b95add6cf7cf10c', 'width': 108}, {'height': 216, 'url': ... | |
What's the most crackhead garbage local LLM setup you can think of? | 58 | Alright so basically - I want to run qwen3 235b MoE. I dont wanna pay 235b MoE money tho. So far I've been eyeing grabbing an old dell xeon workstation, slapping in lots of RAM & two mi50 cards & calling it a day. Would that work? probably i guess, hell you'd even get good performance out of that running 32b models whi... | 2025-07-20T17:08:13 | https://www.reddit.com/r/LocalLLaMA/comments/1m4u7j6/whats_the_most_crackhead_garbage_local_llm_setup/ | caraccidentGAMING | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4u7j6 | false | null | t3_1m4u7j6 | /r/LocalLLaMA/comments/1m4u7j6/whats_the_most_crackhead_garbage_local_llm_setup/ | false | false | self | 58 | null |
Tools for LM Studio? | 2 | I woud like to test limits of local LLMs. I use LM Studio. Is there a tool repository I can use? The tool selection in LM Studio is limited to RAG and js execution. | 2025-07-20T16:44:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m4tll3/tools_for_lm_studio/ | Mk-Daniel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4tll3 | false | null | t3_1m4tll3 | /r/LocalLLaMA/comments/1m4tll3/tools_for_lm_studio/ | false | false | self | 2 | null |
Where is DeepsSeek R2? | 0 | Claude 4 is out. Grok 4 performed way better then any model in humanity last exam. Kimi K2 has launched with significantly improved creative writing. MiniMax M1 and Qwen 235B are here. Even hints of "Gemma 3" have been found in Git repositories. OpenAI will release their next major model (probably GPT-5) in few months ... | 2025-07-20T16:31:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m4ta0f/where_is_deepsseek_r2/ | ba2sYd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4ta0f | false | null | t3_1m4ta0f | /r/LocalLLaMA/comments/1m4ta0f/where_is_deepsseek_r2/ | false | false | self | 0 | null |
Is there a way to use Ollama with vscode copilot in agent mode? | 0 | I see it works in 'Ask' mode, but not 'Agent'. | 2025-07-20T16:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m4t85h/is_there_a_way_to_use_ollama_with_vscode_copilot/ | richsonreddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4t85h | false | null | t3_1m4t85h | /r/LocalLLaMA/comments/1m4t85h/is_there_a_way_to_use_ollama_with_vscode_copilot/ | false | false | self | 0 | null |
What GPU is Moonshot Kimi K2 running on? | 0 | If I'm not mistaken the most powerful GPU Nvidia is exporting to China is RTX 5080 as even RTX 5090 is over limit.
Did Moonshot train on their stockpile of old GPUs or use some domestic alternative? | 2025-07-20T16:22:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m4t22z/what_gpu_is_moonshot_kimi_k2_running_on/ | arstarsta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4t22z | false | null | t3_1m4t22z | /r/LocalLLaMA/comments/1m4t22z/what_gpu_is_moonshot_kimi_k2_running_on/ | false | false | self | 0 | null |
Smaller, Faster, Smarter: Why MoR Might Replace Transformers | Front Page | 0 | Here's a brand new Ai firework called Mixture of Recursions from Google DeepMimd .
And NO ..This is not my video .. | 2025-07-20T16:16:46 | https://youtu.be/MfswBXmSPZU?si=7WIVGDy4BsV7EGkp | sub_RedditTor | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1m4swld | false | {'oembed': {'author_name': 'AIM Tv', 'author_url': 'https://www.youtube.com/@aimmediahouse', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/MfswBXmSPZU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; ... | t3_1m4swld | /r/LocalLLaMA/comments/1m4swld/smaller_faster_smarter_why_mor_might_replace/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'BaYjkt9t34xYr7tLos2qWSu-VnvXRoFAvrMZrJeMuu4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BaYjkt9t34xYr7tLos2qWSu-VnvXRoFAvrMZrJeMuu4.jpeg?width=108&crop=smart&auto=webp&s=14f1816fa9f3bf2d2f3c975f6345c9490bdc384c', 'width': 108}, {'height': 162, 'url': '... |
I built a desktop tool to auto-organize files using local LLMs (open source, cross-platform) | 24 | Hi everyone,
Just wanted to share a use case where local LLMs are genuinely helpful for daily workflows: file organization.
I've been working on a C++ desktop app called **AI File Sorter** – it uses local LLMs via `llama.cpp to help organize messy folders like `Downloads` or `Desktop`. Not sort files into folders sol... | 2025-07-20T15:55:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m4sdsg/i_built_a_desktop_tool_to_autoorganize_files/ | ph0tone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4sdsg | false | null | t3_1m4sdsg | /r/LocalLLaMA/comments/1m4sdsg/i_built_a_desktop_tool_to_autoorganize_files/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'III9L8U9ufnPqsufLcON9Z0Me6NkILlBBY6rajY_EiU', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/III9L8U9ufnPqsufLcON9Z0Me6NkILlBBY6rajY_EiU.png?width=108&crop=smart&auto=webp&s=3876969ecc31a39da4b700e4c7291afd8be438a4', 'width': 108}, {'height': 176, 'url': 'h... |
Chess Llama - Training a tiny Llama model to play chess | 44 | 2025-07-20T15:51:13 | https://lazy-guy.github.io/blog/chessllama/ | LazyGuy-_- | lazy-guy.github.io | 1970-01-01T00:00:00 | 0 | {} | 1m4s9nn | false | null | t3_1m4s9nn | /r/LocalLLaMA/comments/1m4s9nn/chess_llama_training_a_tiny_llama_model_to_play/ | false | false | default | 44 | {'enabled': False, 'images': [{'id': 'EkWIWtbbH4xlzhYZBKN8etSvG-CzFTB0R2srsCELe50', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/EkWIWtbbH4xlzhYZBKN8etSvG-CzFTB0R2srsCELe50.png?width=108&crop=smart&auto=webp&s=002fa5232c800bdd9c21b2a388d77948455886e5', 'width': 108}, {'height': 121, 'url': 'h... | |
I'm sorry Zuck please don't leave us we were just having fun | 740 | 2025-07-20T15:51:11 | ForsookComparison | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4s9mt | false | null | t3_1m4s9mt | /r/LocalLLaMA/comments/1m4s9mt/im_sorry_zuck_please_dont_leave_us_we_were_just/ | false | false | default | 740 | {'enabled': True, 'images': [{'id': 'p9mxxen7w1ef1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/p9mxxen7w1ef1.png?width=108&crop=smart&auto=webp&s=0312de90e101f89f1aedd7bdb3cd988bb8639791', 'width': 108}, {'height': 173, 'url': 'https://preview.redd.it/p9mxxen7w1ef1.png?width=216&crop=smart&auto=web... | ||
Anyone else tracking their local LLMs’ performance? I built a tool to make it easier | 3 | Hey all,
I've been running some LLMs locally and was curious how others are keeping tabs on model performance, latency, and token usage. I didn’t find a lightweight tool that fit my needs, so I started working on one myself.
It’s a simple dashboard + API setup that helps me monitor and analyze what's going on under t... | 2025-07-20T15:24:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m4rmd5/anyone_else_tracking_their_local_llms_performance/ | Hades_7658 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4rmd5 | false | null | t3_1m4rmd5 | /r/LocalLLaMA/comments/1m4rmd5/anyone_else_tracking_their_local_llms_performance/ | false | false | self | 3 | null |
Anyone else tracking local LLM performance? I made a small tool to help with it | 1 | [removed] | 2025-07-20T15:14:15 | https://www.reddit.com/r/LocalLLaMA/comments/1m4rdab/anyone_else_tracking_local_llm_performance_i_made/ | PigletHot7743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4rdab | false | null | t3_1m4rdab | /r/LocalLLaMA/comments/1m4rdab/anyone_else_tracking_local_llm_performance_i_made/ | false | false | self | 1 | null |
What is the latest version of ollama? | 0 | Hi, I wanted to update my models in ollama and asked for advice on updating models.
Cut a long story short I downloaded ollama version 0.9.6 both from web and brew.
Gemini 2.5 pro insists it should be 0.2.0. Have I lost my mind?
The response when it asked me to type in which ollama to establish version.
You have ... | 2025-07-20T15:12:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m4rbqv/what_is_the_latest_version_of_ollama/ | elksie5000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4rbqv | false | null | t3_1m4rbqv | /r/LocalLLaMA/comments/1m4rbqv/what_is_the_latest_version_of_ollama/ | false | false | self | 0 | null |
LLM Observer Hub — Open-source tool to monitor and analyze your local LLMs | 1 | [removed] | 2025-07-20T15:10:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m4ra8h/llm_observer_hub_opensource_tool_to_monitor_and/ | PigletHot7743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4ra8h | false | null | t3_1m4ra8h | /r/LocalLLaMA/comments/1m4ra8h/llm_observer_hub_opensource_tool_to_monitor_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Bow-NFz9Q0ImbR1lfgYv3HUkSHqIeZ5xDU7H3I49tLc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Bow-NFz9Q0ImbR1lfgYv3HUkSHqIeZ5xDU7H3I49tLc.png?width=108&crop=smart&auto=webp&s=4315a593f219c79c9ffc446265a972a019d3ace1', 'width': 108}, {'height': 108, 'url': 'h... |
How to get 3b models to squeeze onto 2gig Nvidia GPU? | 0 | Hi I got my old laptop working and it's got a 940mx with 2gb of ddr5 memory and 8gb of ddr4 ram with i5 6200u. I got qwen3 1.7b q5 from unsloth to run well and it looked fine for what it was.
However I've been looking at llama 3.2 3b and have a hunch that more params will make it a better model compared to qwen3 1.7b... | 2025-07-20T15:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m4r7t5/how_to_get_3b_models_to_squeeze_onto_2gig_nvidia/ | combo-user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4r7t5 | false | null | t3_1m4r7t5 | /r/LocalLLaMA/comments/1m4r7t5/how_to_get_3b_models_to_squeeze_onto_2gig_nvidia/ | false | false | self | 0 | null |
Open source is humanity’s last hope! | 140 | I’m just making this post as I want mutual opinions on the idea that if open source doesn’t consistently stay within a reasonable margin of the smartest AI systems out there we will move into a world where government almost certainly as their unbeatable, informants and enforcers via AI and I personally see it as a almo... | 2025-07-20T15:04:05 | https://www.reddit.com/r/LocalLLaMA/comments/1m4r4j1/open_source_is_humanitys_last_hope/ | bralynn2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4r4j1 | false | null | t3_1m4r4j1 | /r/LocalLLaMA/comments/1m4r4j1/open_source_is_humanitys_last_hope/ | false | false | self | 140 | null |
Ideal setup for long context window fine-tuning? | 1 | Hi, I’m doing a thesis on using LLMs to parse scientific articles from plaintext pdf format into structured XML. I’ve been looking into fine tuning a model locally to achieve this task, but a key consideration is the long context window requirement. The pdfs are multiple pages so up to 10 000 tokens long, making the VR... | 2025-07-20T14:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/1m4qzmt/ideal_setup_for_long_context_window_finetuning/ | Ill_Imagination_6575 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4qzmt | false | null | t3_1m4qzmt | /r/LocalLLaMA/comments/1m4qzmt/ideal_setup_for_long_context_window_finetuning/ | false | false | self | 1 | null |
which is the best tiny vlm to recognize nsfw pics? | 24 | I tried Mimo-7B. It has a decent quality at this size. but for nsfw, it can only work with anime pics. for realistic, it refused. | 2025-07-20T14:32:25 | https://www.reddit.com/r/LocalLLaMA/comments/1m4qdo6/which_is_the_best_tiny_vlm_to_recognize_nsfw_pics/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4qdo6 | false | null | t3_1m4qdo6 | /r/LocalLLaMA/comments/1m4qdo6/which_is_the_best_tiny_vlm_to_recognize_nsfw_pics/ | false | false | nsfw | 24 | null |
Any Proper high quality Voice cloning for TTS tool? | 6 | I’ve tested a few tools, including chatterbox. The problem is, even after uploading a clear and long reference audio, it couldn’t replicate the same tone and pacing on the generated audio. Chatterbox failed to match the tone accurately with the cloned voice.
I decided to try minimax audio and while it didn’t mimic the... | 2025-07-20T14:21:15 | https://www.reddit.com/r/LocalLLaMA/comments/1m4q4dx/any_proper_high_quality_voice_cloning_for_tts_tool/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4q4dx | false | null | t3_1m4q4dx | /r/LocalLLaMA/comments/1m4q4dx/any_proper_high_quality_voice_cloning_for_tts_tool/ | false | false | self | 6 | null |
Why AI feels inconsistent (and most people don't understand what's actually happening) | 0 | Everyone's always complaining about AI being unreliable. Sometimes it's brilliant, sometimes it's garbage. But most people are looking at this completely wrong.
The issue isn't really the AI model itself. It's whether the system is doing proper context engineering before the AI even starts working.
Think about it - w... | 2025-07-20T14:04:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m4pq8q/why_ai_feels_inconsistent_and_most_people_dont/ | Nir777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4pq8q | false | null | t3_1m4pq8q | /r/LocalLLaMA/comments/1m4pq8q/why_ai_feels_inconsistent_and_most_people_dont/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wKqWnEkJ8XBzIv2uLkYiK1q4Y6cuHkY-zVtXvMSB1iQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wKqWnEkJ8XBzIv2uLkYiK1q4Y6cuHkY-zVtXvMSB1iQ.jpeg?width=108&crop=smart&auto=webp&s=b7899772fea2e4125bdb806722b1eb52a7e99c3f', 'width': 108}, {'height': 108, 'url': '... |
which frontend supports diffusion model now? since llama.cpp has supported that. | 3 | Must I use comfyui to generate text? | 2025-07-20T13:40:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m4p75g/which_frontend_supports_diffusion_model_now_since/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4p75g | false | null | t3_1m4p75g | /r/LocalLLaMA/comments/1m4p75g/which_frontend_supports_diffusion_model_now_since/ | false | false | self | 3 | null |
Small LLM capable to describe images in greater details. | 6 | I am looking for small/slow LLM capable to describe an image scenery. Speed/latency is irrelevant. | 2025-07-20T13:17:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m4op39/small_llm_capable_to_describe_images_in_greater/ | valijali32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4op39 | false | null | t3_1m4op39 | /r/LocalLLaMA/comments/1m4op39/small_llm_capable_to_describe_images_in_greater/ | false | false | self | 6 | null |
Any way to serve images and text from a single GPU? | 0 | I'm experimenting with a home server setup and wondering if anyone has managed to run both an LLM (e.g. LM Studio, Ollama) **and** an image generation model (e.g. Stable Diffusion via Forge or SD WebUI) **on the same GPU**.
If you had a chatbot that needs to handle both text and image generation, would it be feasible ... | 2025-07-20T13:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m4ojg7/any_way_to_serve_images_and_text_from_a_single_gpu/ | Realistic_Age6660 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4ojg7 | false | null | t3_1m4ojg7 | /r/LocalLLaMA/comments/1m4ojg7/any_way_to_serve_images_and_text_from_a_single_gpu/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'mmB6xar8qCDkGD1UqcagteHnAJt2faixrMMNEmqX3YY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mmB6xar8qCDkGD1UqcagteHnAJt2faixrMMNEmqX3YY.png?width=108&crop=smart&auto=webp&s=966eeb02a37829373ce955e2c8576c9130f89901', 'width': 108}, {'height': 108, 'url': 'h... |
What's the smartest tiny LLM you've actually used? | 179 | Looking for something small but still usable.
What's your go-to? | 2025-07-20T13:04:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m4of82/whats_the_smartest_tiny_llm_youve_actually_used/ | Luston03 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4of82 | false | null | t3_1m4of82 | /r/LocalLLaMA/comments/1m4of82/whats_the_smartest_tiny_llm_youve_actually_used/ | false | false | self | 179 | null |
MediPhi-Instruct | 65 | 2025-07-20T12:47:51 | https://huggingface.co/microsoft/MediPhi-Instruct | AaronFeng47 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m4o37k | false | null | t3_1m4o37k | /r/LocalLLaMA/comments/1m4o37k/mediphiinstruct/ | false | false | 65 | {'enabled': False, 'images': [{'id': 'rMf-9D6Y4sgIJc1XvCzarHFTmb103pT_gKU9N1hAZmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rMf-9D6Y4sgIJc1XvCzarHFTmb103pT_gKU9N1hAZmg.png?width=108&crop=smart&auto=webp&s=296e218ebbb84850c1f1bf1837ba3fc99e98ce95', 'width': 108}, {'height': 116, 'url': 'h... | ||
how do i translate 30 pages like this and still have the same architecture and not raw translated text? | 3 | 2025-07-20T12:44:51 | Beyond_Birthday_13 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4o11y | false | null | t3_1m4o11y | /r/LocalLLaMA/comments/1m4o11y/how_do_i_translate_30_pages_like_this_and_still/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'MLBCUzgefWUrEA34GK5-dWbntSGbqutcAd25RncmHp8', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/dswksu12z0ef1.png?width=108&crop=smart&auto=webp&s=aef515c534683941b615c03946b41b8fcf39779a', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/dswksu12z0ef1.pn... | |||
AI Model Juggler automatically and transparently switches between LLM and image generation backends and models | 33 | AI Model Juggler is a simple utility for serving multiple LLM and image generation backends or models as if simultaneously while only requiring enough VRAM for one at a time. It is written in Python, but has no external dependencies, making installation as simple as downloading the code.
That might sound a lot like [l... | 2025-07-20T12:01:26 | https://github.com/makedin/AI-Model-Juggler | Casual-Godzilla | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m4n7fh | false | null | t3_1m4n7fh | /r/LocalLLaMA/comments/1m4n7fh/ai_model_juggler_automatically_and_transparently/ | false | false | default | 33 | {'enabled': False, 'images': [{'id': 'U_oBf_gfttiNbAtXKPelsTxexbPQv3y9UoOvds_oQOI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U_oBf_gfttiNbAtXKPelsTxexbPQv3y9UoOvds_oQOI.png?width=108&crop=smart&auto=webp&s=c62c4ca86c3ececd35af14e2ce0f1d848f67bbdf', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen 3 8b/14b finetuning on 50k medical data unsloth on runpod and optimal training settings | 3 | Hey guys,
Im having some problems on different cuda/torch runpods (errors etc seems unstable) with a 4090 but google colab works without problems, anyone had same problems? Unsloth seems a little unstable on runpod? But i already got runpod credits…
50k medical data 75% reasoning 25% non reasoning
Learning Rate: 2e-4... | 2025-07-20T11:53:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m4n2bx/qwen_3_8b14b_finetuning_on_50k_medical_data/ | AlbionPlayerFun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4n2bx | false | null | t3_1m4n2bx | /r/LocalLLaMA/comments/1m4n2bx/qwen_3_8b14b_finetuning_on_50k_medical_data/ | false | false | self | 3 | null |
Building a local red teaming AI assistant | 1 | [removed] | 2025-07-20T11:43:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m4mw9y/building_a_local_red_teaming_ai_assistant/ | slavicgod699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4mw9y | false | null | t3_1m4mw9y | /r/LocalLLaMA/comments/1m4mw9y/building_a_local_red_teaming_ai_assistant/ | false | false | self | 1 | null |
Best uncensored creative writing GGUF model to run on 24 GB VRAM?? | 0 | Hi guys, I'm new here, so can you guide me please, which is currently the best uncensored creative writing GGUF model to run locally on 24 GB VRAM?? on LM Studio,
It would be great if it also had Vision capabilities, or you can suggest another model specific for vision, as long as it's good. | 2025-07-20T11:42:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m4mvbe/best_uncensored_creative_writing_gguf_model_to/ | younestft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4mvbe | false | null | t3_1m4mvbe | /r/LocalLLaMA/comments/1m4mvbe/best_uncensored_creative_writing_gguf_model_to/ | false | false | self | 0 | null |
Building a local red teaming AI assistant | 1 | [removed] | 2025-07-20T11:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m4msjh/building_a_local_red_teaming_ai_assistant/ | poeticmaster689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4msjh | false | null | t3_1m4msjh | /r/LocalLLaMA/comments/1m4msjh/building_a_local_red_teaming_ai_assistant/ | false | false | self | 1 | null |
Need advice: building a local red teaming AI assistant — what models should I use? | 1 | [removed] | 2025-07-20T11:35:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m4mr3k/need_advice_building_a_local_red_teaming_ai/ | poeticmaster689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4mr3k | false | null | t3_1m4mr3k | /r/LocalLLaMA/comments/1m4mr3k/need_advice_building_a_local_red_teaming_ai/ | false | false | self | 1 | null |
Next big thing after LLMs - World Model [explained on the example of V-JEPA2] | 190 | *^(#I'm starting a new series of explaining intriguing new AI papers)*
LLMs learn from text and lack an inherent understanding of the physical world. Their "knowledge" is **mostly** limited to what's been described in the text they were trained on. This means they mostly struggle with concepts that are not easily desc... | 2025-07-20T11:17:11 | https://v.redd.it/h0ivgtibj0ef1 | VR-Person | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4mfs8 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/h0ivgtibj0ef1/DASHPlaylist.mpd?a=1755602245%2CNWRhZjVkN2I4YWZmM2IwNTU0MDdjZmU2NTYzZmZjNTQwZDc5MDk5NWI4NzIxZjE3MmY4OTRmNjMyMjE0ZDBkOA%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/h0ivgtibj0ef1/DASH_720.mp4?source=fallback', 'ha... | t3_1m4mfs8 | /r/LocalLLaMA/comments/1m4mfs8/next_big_thing_after_llms_world_model_explained/ | false | false | 190 | {'enabled': False, 'images': [{'id': 'bzd6M2RyaWJqMGVmMazIDtYX-m4G2qSnSaRS5wvuc50lS7cqMTyw9S71POit', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/bzd6M2RyaWJqMGVmMazIDtYX-m4G2qSnSaRS5wvuc50lS7cqMTyw9S71POit.png?width=108&crop=smart&format=pjpg&auto=webp&s=bd457375042fde995b2dbbbc85910bf0ffe7d... | |
Next big thing after LLM - World Model [explained on the example of V-JEPA2] | 1 | *^(#I'm starting a new series of explaining intriguing new AI papers)*
LLMs learn from text and lack an inherent understanding of the physical world. Their "knowledge" is **mostly** limited to what's been described in the text they were trained on. This means they mostly struggle with concepts that are not easily desc... | 2025-07-20T10:56:29 | https://v.redd.it/f9l6sjkif0ef1 | VR-Person | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4m3fu | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/f9l6sjkif0ef1/DASHPlaylist.mpd?a=1755601003%2CNTQ1MDNmZmEyYTg1MjdlM2VjOTJmYzk2N2U3OTUyNjczYWI5MDUxYTM0MzIxYTI4NzBlODEwYjU3MjYxMDVkMQ%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/f9l6sjkif0ef1/DASH_720.mp4?source=fallback', 'ha... | t3_1m4m3fu | /r/LocalLLaMA/comments/1m4m3fu/next_big_thing_after_llm_world_model_explained_on/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YzM4ZGNra2lmMGVmMazIDtYX-m4G2qSnSaRS5wvuc50lS7cqMTyw9S71POit', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/YzM4ZGNra2lmMGVmMazIDtYX-m4G2qSnSaRS5wvuc50lS7cqMTyw9S71POit.png?width=108&crop=smart&format=pjpg&auto=webp&s=ce4c700b0f2affb31a4801923181a61c68624... | |
Semantic chunking using LLMs | 20 | I use LLMs for semantic text chunking. Models in the range of 24 to 32B, quantized between Q4 and Q6, give me the most robust results. Mistral-Small-3.2, Gemma-27B and Qwen3-32B all work well, Mistral and Gemma seem to be a bit better with certain non-English languages.
When I go lower, results are still ok with Qwen3... | 2025-07-20T10:45:37 | https://www.reddit.com/r/LocalLLaMA/comments/1m4lxak/semantic_chunking_using_llms/ | mnze_brngo_7325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4lxak | false | null | t3_1m4lxak | /r/LocalLLaMA/comments/1m4lxak/semantic_chunking_using_llms/ | false | false | self | 20 | null |
Do voice "changers / modifiers" actually exist? | 0 | Do voice "changers / modifiers" actually exist?
From what I see, most tools claiming to do this actually just convert your speech into text, and then that text into an AI voice.
Are there any tools that actually "modify" voice?
It'd be super handy to retain the subtle inflections and performance of a talk, which is ... | 2025-07-20T10:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m4lwcu/do_voice_changers_modifiers_actually_exist/ | jasj3b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4lwcu | false | null | t3_1m4lwcu | /r/LocalLLaMA/comments/1m4lwcu/do_voice_changers_modifiers_actually_exist/ | false | false | self | 0 | null |
What's your biggest pain point running LLMs locally (especially with low VRAM GPUs)? | 0 | I’ve been exploring local LLM setups lately and wanted to ask the community:
What are the **most frustrating parts** of running models locally?
Any specific struggles with **low VRAM GPUs**, **limited RAM**, or **older hardware**?
Have you faced issues with **quantization**, **driver setup**, **tokenizer mismatches*... | 2025-07-20T09:49:39 | https://www.reddit.com/r/LocalLLaMA/comments/1m4l1tl/whats_your_biggest_pain_point_running_llms/ | Xitizdumb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4l1tl | false | null | t3_1m4l1tl | /r/LocalLLaMA/comments/1m4l1tl/whats_your_biggest_pain_point_running_llms/ | false | false | self | 0 | null |
Looking for local provider for Kimi K2 at a better price | 0 | Hey everyone!
I’m looking to buy a **Kimi K2**, but hoping to find a **local provider or distributor** who might offer it at a cheaper price than the big retail sites.
I’m based in Berlin, so any local tips or sellers you’ve had good experiences with would be appreciated!
Thanks in advance! | 2025-07-20T09:34:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m4ktlb/looking_for_local_provider_for_kimi_k2_at_a/ | byk1nq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4ktlb | false | null | t3_1m4ktlb | /r/LocalLLaMA/comments/1m4ktlb/looking_for_local_provider_for_kimi_k2_at_a/ | false | false | self | 0 | null |
New to fine tuning | 7 | Hi I am using ollama, mistral 7b, huggingface tranformers and peft.
This is an example I have made for a piece of training data. Does anyone have any tips on how to improve it? Am I using correct Grammer? Am I missing anything important?
{
"call\_id": "66",
"scenario\_id": "66",
"messages": \[
{
"role": ... | 2025-07-20T09:06:15 | https://www.reddit.com/r/LocalLLaMA/comments/1m4ke3x/new_to_fine_tuning/ | Ok_Pie_6906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4ke3x | false | null | t3_1m4ke3x | /r/LocalLLaMA/comments/1m4ke3x/new_to_fine_tuning/ | false | false | self | 7 | null |
Cross-platform games | 1 | [removed] | 2025-07-20T08:39:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m4jzg5/giochi_crossplatform/ | Level_Pattern_3034 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4jzg5 | false | null | t3_1m4jzg5 | /r/LocalLLaMA/comments/1m4jzg5/giochi_crossplatform/ | false | false | self | 1 | null |
1 comet invite left : challenge | 0 | So, I have one comet invite left, and planning to giveaway to anyone interested
But the challenge is, anyone who can provide a good long form joke in this thread, and whichever get the most upvote within next 2 days, will get the invite
Please be creative guys, it should actually crack people.
It should not be any one... | 2025-07-20T08:37:41 | Khushalgogia | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4jyrq | false | null | t3_1m4jyrq | /r/LocalLLaMA/comments/1m4jyrq/1_comet_invite_left_challenge/ | false | false | 0 | {'enabled': True, 'images': [{'id': '1MZrv5btmgNdZX-bI0MCLMlLSmJm54TzCWBuQJjj7ww', 'resolutions': [{'height': 181, 'url': 'https://preview.redd.it/kx1lbe3zqzdf1.jpeg?width=108&crop=smart&auto=webp&s=97d56ad9e1e82bdcb13de2827c6113812e7091c1', 'width': 108}, {'height': 362, 'url': 'https://preview.redd.it/kx1lbe3zqzdf1.j... | ||
Advice on choice of model | 2 | I give a bit of context, I often have to study videos on YouTube (sometimes even 40 minutes long), to study I take notes and create diagrams, I would like to use a local llm (lm studio) to compare my notes with the transcription of the video so that the model can indicate any congruences or missing points.
What model ... | 2025-07-20T08:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m4jxo9/advice_on_choice_of_model/ | Hydratant_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4jxo9 | false | null | t3_1m4jxo9 | /r/LocalLLaMA/comments/1m4jxo9/advice_on_choice_of_model/ | false | false | self | 2 | null |
Best Budget SFF/Low profile gpu’s? | 1 | I’m looking for a gpu to put in a EliteDesk SFF PC. I don’t plan to run anything past 8b models so VRAM doesn’t need to be super high.
Was looking at this 3050 LP but wasn’t sure of performance:
https://www.zotacstore.com/us/zt-a30510l-10l-r | 2025-07-20T07:44:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m4j5nf/best_budget_sfflow_profile_gpus/ | MEI2011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4j5nf | false | null | t3_1m4j5nf | /r/LocalLLaMA/comments/1m4j5nf/best_budget_sfflow_profile_gpus/ | false | false | self | 1 | null |
🆘 [Help] My Fine-Tuned Model Keeps Echoing Prompts or Giving Blank/Generic Responses | 0 | Hey everyone,
I’ve been working on fine-tuning open-source LLMs like Phi-3 and LLaMA 3 using Unsloth in Google Colab, targeting a chatbot for customer support (around 500 prompt-response examples).
I’m facing the same recurring issues no matter what I do:
⸻
❗ The problems:
1. The model often responds with the exact... | 2025-07-20T07:35:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m4j0sa/help_my_finetuned_model_keeps_echoing_prompts_or/ | Srmxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4j0sa | false | null | t3_1m4j0sa | /r/LocalLLaMA/comments/1m4j0sa/help_my_finetuned_model_keeps_echoing_prompts_or/ | false | false | self | 0 | null |
Repo Wizard: Local AI Tool for Safe Code Changes (Inspired by Repo Prompt, Runs on Any OS) | 9 | Been tinkering with local AI for coding and got fed up with slow, unpredictable auto-agents. Saw Repo Prompt's context ideas and made **Repo Wizard**—a free, open-source desktop app to apply AI code suggestions safely. Works on Mac, Windows, Linux, and pairs with any LLM and can **make use of any subscription you have*... | 2025-07-20T05:58:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m4hhg8/repo_wizard_local_ai_tool_for_safe_code_changes/ | fanzzzd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4hhg8 | false | null | t3_1m4hhg8 | /r/LocalLLaMA/comments/1m4hhg8/repo_wizard_local_ai_tool_for_safe_code_changes/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'kuYjjmP2VIn8HJL3x5JWl35nkIevoFBVsYuZoSVHmO4', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/kuYjjmP2VIn8HJL3x5JWl35nkIevoFBVsYuZoSVHmO4.png?width=108&crop=smart&auto=webp&s=4af2e0da00844213b0e623b27f13cbcfdbe75a24', 'width': 108}, {'height': 107, 'url': 'h... |
Does LLM architecture allow for injecting some more input tokens in the middle of token generation? | 11 | Here is something of a hiccup I find myself running into a lot. I type up a prompt, often very elaborate of course, and RIGHT AFTER sending the prompt I realize that I have one more parting thought that could change everything.
It occurs to me that an LLM just flows all previously generated tokens through as it genera... | 2025-07-20T05:56:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m4hfy0/does_llm_architecture_allow_for_injecting_some/ | michaelsoft__binbows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4hfy0 | false | null | t3_1m4hfy0 | /r/LocalLLaMA/comments/1m4hfy0/does_llm_architecture_allow_for_injecting_some/ | false | false | self | 11 | null |
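The intuition in this post — that generated tokens and prompt tokens flow through the model the same way, so extra input could in principle be appended mid-generation — is essentially how KV-cache prefix reuse works. The toy sketch below (names and structure are illustrative, not any real inference engine's API) shows why appending a "parting thought" only costs processing for the new suffix: the cached prefix is never recomputed.

```python
# Toy illustration of KV-cache prefix reuse: appending extra input tokens
# mid-generation only requires processing the new suffix, because the
# per-token key/value entries for the shared prefix are already cached.

class ToyKVCache:
    """Caches one entry per processed token so prefixes are never recomputed."""

    def __init__(self):
        self.cache = []      # one cached entry per processed token
        self.processed = 0   # counts tokens actually run through the model

    def process(self, tokens):
        # Reuse the cached prefix; only run the new suffix through the model.
        prefix = len(self.cache)
        for tok in tokens[prefix:]:
            self.processed += 1
            self.cache.append(f"kv({tok})")
        return len(self.cache)

cache = ToyKVCache()
# Initial prompt plus two already-generated reply tokens:
cache.process(["prompt", "tokens", "reply_1", "reply_2"])
# User interrupts and injects a parting thought mid-generation:
cache.process(["prompt", "tokens", "reply_1", "reply_2", "wait", "also"])
print(cache.processed)  # → 6: the 4 cached tokens were not reprocessed
```

Real engines (e.g. continuous-batching servers) exploit exactly this property for multi-turn chat, so injecting mid-stream input is an interface question more than an architecture one.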
Wrote something about Rerankers - Why and How of it | 3 | [https://open.substack.com/pub/transformersandtheiravatars/p/rerankers-and-their-intricacies?r=1ftbb&utm\_campaign=post&utm\_medium=web&showWelcomeOnShare=true](https://open.substack.com/pub/transformersandtheiravatars/p/rerankers-and-their-intricacies?r=1ftbb&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true) | 2025-07-20T05:24:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m4gx69/wrote_something_about_rerankers_why_and_how_of_it/ | ZucchiniCalm4617 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4gx69 | false | null | t3_1m4gx69 | /r/LocalLLaMA/comments/1m4gx69/wrote_something_about_rerankers_why_and_how_of_it/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'rHHc_pSs8zQSrXpR-fgcrkV47cBhoPAaGDQbPb5m8_4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rHHc_pSs8zQSrXpR-fgcrkV47cBhoPAaGDQbPb5m8_4.jpeg?width=108&crop=smart&auto=webp&s=006e5fe90751cae9f7b26e4e7518902709a50e33', 'width': 108}, {'height': 108, 'url': '... |
How to prevent negative transfer when fine tuning? | 1 | I'm looking to fine tune an AI using a bunch of publicly submitted data.
Which means I'll be asking people questions, they'll be submitting answers that might disagree with each other.
I then want to train it on question-answer pairs and would like it to learn from both sides instead of negative transfer that I've be... | 2025-07-20T05:10:07 | https://www.reddit.com/r/LocalLLaMA/comments/1m4goon/how_to_prevent_negative_transfer_when_fine_tuning/ | mczarnek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4goon | false | null | t3_1m4goon | /r/LocalLLaMA/comments/1m4goon/how_to_prevent_negative_transfer_when_fine_tuning/ | false | false | self | 1 | null |
Context Rot: How Increasing Input Tokens Impacts LLM Performance | 242 | **TL;DR: Model performance is non-uniform across context lengths due to "Context Rot", including state-of-the-art GPT-4.1, Claude 4, Gemini 2.5, and Qwen3 models.**
Research reveals that LLMs (large language models) experience significant performance *"degradation"* as input context length increases, even on simple ta... | 2025-07-20T04:17:04 | 5h3r_10ck | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m4fs2t | false | null | t3_1m4fs2t | /r/LocalLLaMA/comments/1m4fs2t/context_rot_how_increasing_input_tokens_impacts/ | false | false | default | 242 | {'enabled': True, 'images': [{'id': 'x8dkgvkifydf1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/x8dkgvkifydf1.jpeg?width=108&crop=smart&auto=webp&s=ee39ea66e1d0b5d1b3ad61e605a8d4691605af43', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/x8dkgvkifydf1.jpeg?width=216&crop=smart&auto=w... | |
Two AIs Debate The Origin of The Universe (ChatGPT o3 vs Gemini 2.5 Pro) | 0 | 2025-07-20T02:53:25 | https://youtube.com/watch?v=L9w-PBtBOQ8&si=tIF2_MLoxqOfE8Fa | 1nconnor | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1m4e9in | false | {'oembed': {'author_name': 'Connor Barbee', 'author_url': 'https://www.youtube.com/@connorbarbee', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/L9w-PBtBOQ8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1m4e9in | /r/LocalLLaMA/comments/1m4e9in/two_ais_debate_the_origin_of_the_universe_chatgpt/ | false | false | default | 0 | null | |
Made a local C++ utility to calculate RAM needed to fit a quantized model | 41 | I've been using [NyxKrage's VRAM Calculator](https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator) for a while, but I find sometimes I want to calculate this stuff without an internet connection or using a webpage. I also needed to calculate how much VRAM was needed for specific quants or for a lot of model... | 2025-07-20T02:15:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m4djo6/made_a_local_c_utility_to_calculate_ram_needed_to/ | philetairus_socius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m4djo6 | false | null | t3_1m4djo6 | /r/LocalLLaMA/comments/1m4djo6/made_a_local_c_utility_to_calculate_ram_needed_to/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'CVC3UCXWBZbCWfBKaMAbb_pHJ3k1bjsiW1It_67lGVY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CVC3UCXWBZbCWfBKaMAbb_pHJ3k1bjsiW1It_67lGVY.png?width=108&crop=smart&auto=webp&s=dab6a28dd0587077a356703d53799748a71b3d4c', 'width': 108}, {'height': 116, 'url': 'h... |
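The calculation this utility performs can be approximated with a simple back-of-envelope formula: weight memory is parameters times bits-per-weight divided by 8, and the KV cache scales with layers, heads, head dimension, and context length. A minimal sketch (the layer/head figures in the example are roughly Llama-7B-like and used only for illustration, not taken from the linked tool):

```python
# Rough estimate of memory needed to load a quantized model plus its KV cache.
# All figures are back-of-envelope approximations; runtime overhead is ignored.

def quant_weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Weight memory in GB: parameters * bits per weight / 8 bits per byte."""
    return params_billions * bits_per_weight / 8

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache: 2 (K and V) * layers * kv_heads * head_dim * context * bytes."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 1e9

# Example: a 7B model at 4 bits per weight, with an fp16 KV cache at 8K context.
weights = quant_weights_gb(7, 4)   # 3.5 GB
kv = kv_cache_gb(layers=32, kv_heads=32, head_dim=128, context_len=8192)
print(round(weights, 2), round(kv, 2))
```

This ignores per-quant metadata (scales, zero points) and runtime buffers, which is one reason dedicated calculators like the ones mentioned above exist.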