| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Sakana AI proposes the Darwin Gödel Machine, a self-learning AI system that leverages an evolutionary algorithm to iteratively rewrite its own code, thereby continuously improving its performance on programming tasks | 85 | 2025-06-03T16:46:59 | https://sakana.ai/dgm/ | juanviera23 | sakana.ai | 1970-01-01T00:00:00 | 0 | {} | 1l2gv3a | false | null | t3_1l2gv3a | /r/LocalLLaMA/comments/1l2gv3a/sakana_ai_proposes_the_darwin_gödel_machine_an/ | false | false | 85 | {'enabled': False, 'images': [{'id': '301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=108&crop=smart&auto=webp&s=61f7124235d3c9cc17267eb2ed7de46bab49765e', 'width': 108}, {'height': 108, 'url': 'h... | ||
Using a LLM (large language model ) as a simplest physics engine — no physics code, just prompts | 1 | [removed] | 2025-06-03T16:43:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l2gro8/using_a_llm_large_language_model_as_a_simplest/ | Arch1324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2gro8 | false | null | t3_1l2gro8 | /r/LocalLLaMA/comments/1l2gro8/using_a_llm_large_language_model_as_a_simplest/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'TnrTHW3CKw3GNTKRI2mE_HkAsQiy9l3ygE6SDH3npbs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ceDfMBjFXNHu-pb_t60ZQWVbWsg9t-Hm1XtXyWnVNOU.jpg?width=108&crop=smart&auto=webp&s=c850c2ffabc5b03c38bf77a102d65815f4438c35', 'width': 108}, {'height': 108, 'url': 'h... | |
Can you mix and match GPUs? | 1 | Let's say, using LM Studio, if I am currently using a 3090 and bought a 5090, could I use the combined VRAM? | 2025-06-03T15:56:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l2fkow/can_you_mix_and_mach_gpus/ | FlanFederal8447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2fkow | false | null | t3_1l2fkow | /r/LocalLLaMA/comments/1l2fkow/can_you_mix_and_mach_gpus/ | false | false | self | 1 | null |
I'm collecting dialogue from anime, games, and visual novels — is this actually useful for improving AI? | 41 | Hi! I’m not a programmer or AI developer, but I’ve been doing something on my own for a while out of passion.
I’ve noticed that most AI responses — especially in roleplay or emotional dialogue — tend to sound repetitive, shallow, or generic. They often reuse the same phrases and don’t adapt well to different character... | 2025-06-03T15:54:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l2fj2k/im_collecting_dialogue_from_anime_games_and/ | Akowmako | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2fj2k | false | null | t3_1l2fj2k | /r/LocalLLaMA/comments/1l2fj2k/im_collecting_dialogue_from_anime_games_and/ | false | false | self | 41 | null |
I forked google’s Fullstack LangGraph Quickstart to work with ollama + searxng | 1 | [removed] | 2025-06-03T15:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l2fhlj/i_forked_googles_fullstack_langgraph_quickstart/ | Filo0104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2fhlj | false | null | t3_1l2fhlj | /r/LocalLLaMA/comments/1l2fhlj/i_forked_googles_fullstack_langgraph_quickstart/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'lw6H4eybNgcdiVEB_D0Rv2OY9faHpBI4cf637K_Eapo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1ayXXM7k6U265gyvlcYfRgEhvY_5Qfp8zf2UcMW1SB0.jpg?width=108&crop=smart&auto=webp&s=a5728335f3dc0a8d92fafbc48e08fc31e319962e', 'width': 108}, {'height': 108, 'url': 'h... |
RubyLLM 1.3.0: First-Class Ollama Support for Ruby Developers 💻 | 0 | Ruby developers can now use local models as easily as cloud APIs.
**Simple setup:**
```ruby
RubyLLM.configure do |config|
config.ollama_api_base = 'http://localhost:11434/v1'
end
# Same API, local model
chat = RubyLLM.chat(model: 'mistral', provider: 'ollama')
response = chat.ask("Explain transformer architecture")... | 2025-06-03T15:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l2fa3o/rubyllm_130_firstclass_ollama_support_for_ruby/ | crmne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2fa3o | false | null | t3_1l2fa3o | /r/LocalLLaMA/comments/1l2fa3o/rubyllm_130_firstclass_ollama_support_for_ruby/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'dRPRYHcWlFicSK-z6Co_vdHBXzCqhUGopsZ8LYR4_gU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6bXOEvhrJBhBLneZuucSLq-qy5fW038E870e5hmDo-U.jpg?width=108&crop=smart&auto=webp&s=97279c46c41c61602c898da205817a6ebb5292f8', 'width': 108}, {'height': 108, 'url': 'h... |
Teaching local LLMs to generate workflows | 1 | [removed] | 2025-06-03T15:42:08 | https://advanced-stack.com/resources/how-to-build-workflows-trigger-action-program-with-llms.html | Fluid-Age-9266 | advanced-stack.com | 1970-01-01T00:00:00 | 0 | {} | 1l2f7mo | false | null | t3_1l2f7mo | /r/LocalLLaMA/comments/1l2f7mo/teaching_local_llms_to_generate_workflows/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'FpW-rKjCmA17tYicq1iqXsO4j7FbSI7IZEXvEMtqYZc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-cFlPLkWFogCBicbsqZ6yiKMAh5AQ9NLRU2qt0ee_gk.jpg?width=108&crop=smart&auto=webp&s=2d63de925c65eca3014f1d0e622b309156c40024', 'width': 108}, {'height': 113, 'url': 'h... | |
Local hosted search ai | 1 | [removed] | 2025-06-03T15:34:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l2f0gm/local_hosted_search_ai/ | Successful-Leader830 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2f0gm | false | null | t3_1l2f0gm | /r/LocalLLaMA/comments/1l2f0gm/local_hosted_search_ai/ | false | false | self | 1 | null |
Gemini 2.5 Flash can't beat open-source model at pointing! | 1 | [removed] | 2025-06-03T15:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1l2evoe/gemini_25_flash_cant_beat_opensource_model_at/ | IndependentDoor8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2evoe | false | null | t3_1l2evoe | /r/LocalLLaMA/comments/1l2evoe/gemini_25_flash_cant_beat_opensource_model_at/ | false | false | self | 1 | null |
OLLAMA Overhead with Qwen3 ????? Any help | 1 | [removed] | 2025-06-03T15:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l2e7lp/ollama_overhead_with_qwen3_any_help/ | Tough-Double687 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2e7lp | false | null | t3_1l2e7lp | /r/LocalLLaMA/comments/1l2e7lp/ollama_overhead_with_qwen3_any_help/ | false | false | self | 1 | null |
Postman like client for local MCP servers | 9 | I wanted to test my custom MCP server on Linux but none of the options seemed right. So I built my own on a weekend.
It's MIT licensed so do with it what you like! | 2025-06-03T14:42:11 | https://github.com/faraazahmad/mcp_debug | Mysterious-Coat5856 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l2dowh | false | null | t3_1l2dowh | /r/LocalLLaMA/comments/1l2dowh/postman_like_client_for_local_mcp_servers/ | false | false | default | 9 | null |
Daily AI-tools | 0 | 🚀 Hey everyone! I’ve been exploring some of the newest and most powerful AI tools out there and started sharing quick, engaging overviews on TikTok to help others discover what’s possible right now with AI.
I’m focusing on tools like Claude Opus 4, Heygen, Durable, and more — things that help with content creation, a... | 2025-06-03T14:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l2dl5s/daily_aitools/ | jordanbelfort42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2dl5s | false | null | t3_1l2dl5s | /r/LocalLLaMA/comments/1l2dl5s/daily_aitools/ | false | false | self | 0 | null |
Check out this FREE and FAST semantic deduplication app on Hugging Face | 8 | There's no point doing only hash-based deduplication of datasets. You might as well use semantic deduplication too. This space for semantic deduplication works on multiple massive datasets, removing near duplicates, not just exact matches!
This is how it works:
* You pick one or more datasets from the Hub
* It makes a semanti... | 2025-06-03T14:35:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l2dizc/checkout_this_free_and_fast_semantic/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2dizc | false | null | t3_1l2dizc | /r/LocalLLaMA/comments/1l2dizc/checkout_this_free_and_fast_semantic/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': '3gtoBjG3Po9RMr7lgBFimouFLTstTHNi-aMSCTTsWTg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/giV87nLGgheRLKPO62ayX-pE0XUUe9FwjkcAyncsbXE.jpg?width=108&crop=smart&auto=webp&s=4f718df6d41d24fcf5edb4a0aa6960af1eca457e', 'width': 108}, {'height': 116, 'url': 'h...
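The semantic-deduplication idea above can be sketched in a few lines: embed each record, then drop any record whose embedding is too close to one already kept. This is an illustrative sketch, not the space's actual code: the bag-of-words "embedding" stands in for the neural sentence-embedding model a real pipeline would use, and the 0.9 threshold is an arbitrary assumption.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would use a
    # neural sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_dedup(texts, threshold=0.9):
    # Keep a text only if it is not too similar to anything already kept.
    kept = []
    for t in texts:
        e = embed(t)
        if all(cosine(e, embed(k)) < threshold for k in kept):
            kept.append(t)
    return kept

docs = [
    "the cat sat on the mat",
    "the cat sat on the mat today",   # near duplicate, not an exact match
    "llamas are great animals",       # unrelated, should survive
]
print(semantic_dedup(docs))
```

Hash-based dedup would keep all three of these; the cosine check also drops the near duplicate.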
Arcee Homunculus-12B | 98 | **Homunculus** is a 12 billion-parameter instruction model distilled from **Qwen3-235B** onto the **Mistral-Nemo** backbone.
[https://huggingface.co/arcee-ai/Homunculus](https://huggingface.co/arcee-ai/Homunculus)
[https://huggingface.co/arcee-ai/Homunculus-GGUF](https://huggingface.co/arcee-ai/Homunculus-GGUF)
| 2025-06-03T14:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l2diwk/arcee_homunculus12b/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2diwk | false | null | t3_1l2diwk | /r/LocalLLaMA/comments/1l2diwk/arcee_homunculus12b/ | false | false | self | 98 | {'enabled': False, 'images': [{'id': 'lI41-kDbWkB7h1l1GR9Tkyz69nD8uzuyGKGsyhGjjNQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Xm5OejuoSTPgoAZoWZGNsdSTe1HbB3sD_543SjTLwxM.jpg?width=108&crop=smart&auto=webp&s=35398510f4c42c93a4291a1eb4c620e5821a17d1', 'width': 108}, {'height': 116, 'url': 'h... |
When you wanna Finetune a model what methods do you use to Chunk Data? | 1 | What are some of your top methods for chunking data when you want to fine-tune a model? I'm getting ready to do that myself; I want to train it on a tabletop RPG book so that the model can be my assistant, but I'm not sure of the best way to chunk the book. | 2025-06-03T14:30:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l2dei2/when_you_wanna_finetune_a_model_what_methods_do/ | TheArchivist314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2dei2 | false | null | t3_1l2dei2 | /r/LocalLLaMA/comments/1l2dei2/when_you_wanna_finetune_a_model_what_methods_do/ | false | false | self | 1 | null |
2025 Apple Mac Studio: M3 Ultra 256GB vs. M4 Ultra 256GB | 0 | Will the M4 deliver better token performance? If so, by how much—specifically when running a 70B model?
| 2025-06-03T14:19:08 | https://www.reddit.com/r/LocalLLaMA/comments/1l2d4l7/2025_apple_mac_studio_m3_ultra_256gb_vs_m4_ultra/ | emimix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2d4l7 | false | null | t3_1l2d4l7 | /r/LocalLLaMA/comments/1l2d4l7/2025_apple_mac_studio_m3_ultra_256gb_vs_m4_ultra/ | false | false | self | 0 | null |
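The question above is mostly a memory-bandwidth question: generating one token from a dense model streams every weight through memory once, so a back-of-envelope upper bound on speed is bandwidth divided by model size. The sketch below assumes the M3 Ultra's published ~819 GB/s figure and a 4-bit 70B model (~0.5 bytes/param, ~35 GB); it ignores KV-cache reads, compute limits, and any hypothetical successor chip's specs.

```python
def tokens_per_second(bandwidth_gb_s, params_billions, bytes_per_param):
    # Rough upper bound: each generated token streams all weights from
    # memory once, so speed ~= bandwidth / model size in memory.
    model_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_gb

# 70B model at 4-bit quantization on an ~819 GB/s memory bus.
print(round(tokens_per_second(819, 70, 0.5), 1))
```

Whatever a newer Ultra chip delivers, plugging its bandwidth into the same formula gives a first-order estimate of the speedup for a 70B model.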
Qwen2 7b says it's made by Google?! | 1 | [removed] | 2025-06-03T13:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l2c6lh/qwen2_7b_says_its_made_by_google/ | Responsible_Wait4020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2c6lh | false | null | t3_1l2c6lh | /r/LocalLLaMA/comments/1l2c6lh/qwen2_7b_says_its_made_by_google/ | false | false | 1 | null | |
It's my first PC build, I need help. Is this enough to run an LLM locally? | 0 | [PCPriceTracker Build](https://pcpricetracker.in/b/s/74ff4d5d-5825-4841-8bbc-dd6851a52ca6)
Category|Selection|Source|Price
:----|:----|:----|----:
**Processor** | [Amd Ryzen 5 7600 Gaming Desktop Processor (100-100001015BOX)](https://pcpricetracker.in/products/3746a9dcc20314ac958396bdb9187b91) | Computech Store | 1789... | 2025-06-03T13:25:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l2btw0/its_my_first_pc_build_i_need_help_is_this_enough/ | Series-Curious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2btw0 | false | null | t3_1l2btw0 | /r/LocalLLaMA/comments/1l2btw0/its_my_first_pc_build_i_need_help_is_this_enough/ | false | false | self | 0 | null |
My setup for managing multiple LLM APIs + local models with a unified interface | 0 | Hey everyone! Wanted to share something I've been using for the past few months that's made my LLM workflow way smoother.
I was getting tired of juggling API keys for OpenAI, Anthropic, Groq, and a few other providers, plus constantly switching between different interfaces and keeping track of token costs across all o... | 2025-06-03T13:16:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l2bmnt/my_setup_for_managing_multiple_llm_apis_local/ | vivi541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2bmnt | false | null | t3_1l2bmnt | /r/LocalLLaMA/comments/1l2bmnt/my_setup_for_managing_multiple_llm_apis_local/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'pyn1GSWMRKlZVNADg3z0xrqzgwFNMv_ht0Vy2upoN_E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eFv-tF4zH8sOVCCj2R_meOlc2MixlPV-lv0jv2W5yfU.jpg?width=108&crop=smart&auto=webp&s=c99f0b79d394eb56ea94b5a632ba3a9de15036fc', 'width': 108}, {'height': 108, 'url': 'h... |
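A unified interface over several providers often boils down to the fact that most of them (Ollama, Groq, and others) expose OpenAI-compatible endpoints, so one registry of base URLs covers them all. The `LLMRouter` class below is a hypothetical minimal sketch, not any particular product's API; the Groq base URL is my assumption.

```python
class LLMRouter:
    # Hypothetical registry mapping provider names to
    # OpenAI-compatible base URLs and API keys.
    def __init__(self):
        self.providers = {}

    def register(self, name, base_url, api_key=None):
        self.providers[name] = {"base_url": base_url, "api_key": api_key}

    def endpoint(self, name):
        # All registered providers expose an OpenAI-style chat endpoint.
        return self.providers[name]["base_url"].rstrip("/") + "/v1/chat/completions"

router = LLMRouter()
router.register("ollama", "http://localhost:11434")
router.register("groq", "https://api.groq.com/openai", api_key="...")
print(router.endpoint("ollama"))
```

With a scheme like this, switching providers is a one-line change to the registered base URL rather than a new client library per vendor.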
Testing small quants: the case of Qwen 3 30B A3B | 1 | [removed] | 2025-06-03T13:15:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l2bman/testing_small_quants_the_case_of_qwen_3_30b_a3b/ | Astrophilorama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2bman | false | null | t3_1l2bman | /r/LocalLLaMA/comments/1l2bman/testing_small_quants_the_case_of_qwen_3_30b_a3b/ | false | false | self | 1 | null |
Semantic Search PoC for Hugging Face – Now with Parameter Size Filters (0-1B to 70B+) | 25 | Hey!
I’ve recently updated my prototype semantic-search Hugging Face Space, which makes it easier to discover models not only via semantic search but also by **parameter size**.
There are currently over 1.5 million models on the Hub, and finding the right one can be a challenge.
This PoC helps you:
* Semanti... | 2025-06-03T13:08:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l2bgc1/semantic_search_poc_for_hugging_face_now_with/ | dvanstrien | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2bgc1 | false | null | t3_1l2bgc1 | /r/LocalLLaMA/comments/1l2bgc1/semantic_search_poc_for_hugging_face_now_with/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'WFwIIbRpE84XE6oagr5hRGQLBcoHMOy27I018aYekE8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nAGWXQWi-W3GNdgKl4FekY29SkVLVJK_wHPyw1f-aPA.jpg?width=108&crop=smart&auto=webp&s=04146a78247987fc1cf9225f5cd657b7fdb827db', 'width': 108}, {'height': 116, 'url': 'h... |
Vision Language Models are Biased | 105 | 2025-06-03T12:58:13 | https://vlmsarebiased.github.io/ | taesiri | vlmsarebiased.github.io | 1970-01-01T00:00:00 | 0 | {} | 1l2b83p | false | null | t3_1l2b83p | /r/LocalLLaMA/comments/1l2b83p/vision_language_models_are_biased/ | false | false | default | 105 | null | |
Finetune LLM :- Google colab issues | 1 | [removed] | 2025-06-03T12:57:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l2b7ng/finetune_llm_google_colab_issues/ | Kooky_Cattle4583 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2b7ng | false | null | t3_1l2b7ng | /r/LocalLLaMA/comments/1l2b7ng/finetune_llm_google_colab_issues/ | false | false | 1 | null | |
Fine-Tuning DeepSeek-R1-0528 on an RTX 4090 | 1 | [removed] | 2025-06-03T12:35:17 | https://www.datacamp.com/tutorial/fine-tuning-deep-seek-r1-0528 | kingabzpro | datacamp.com | 1970-01-01T00:00:00 | 0 | {} | 1l2aqob | false | null | t3_1l2aqob | /r/LocalLLaMA/comments/1l2aqob/finetuning_deepseekr10528_on_an_rtx_4090/ | false | false | default | 1 | null |
Attention by Hand - Practice attention mechanism on an interactive webpage | 29 | https://i.redd.it/fmji9oswfp4f1.gif
Try this: [https://vizuara-ai-learning-lab.vercel.app/](https://vizuara-ai-learning-lab.vercel.app/)
Nuts-And-Bolts-AI is an interactive web environment where you can practice AI concepts by writing down matrix multiplications.
(1) Let’s take the attention mechanism in language m... | 2025-06-03T12:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l2agpu/attention_by_hand_practice_attention_mechanism_on/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2agpu | false | null | t3_1l2agpu | /r/LocalLLaMA/comments/1l2agpu/attention_by_hand_practice_attention_mechanism_on/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'HUR4ZjSsMcPldBF8PlxclI3gg-mjZXBfe4bNavSwrFw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=108&crop=smart&auto=webp&s=4c05659da71aabefa650df1fddb91bdf8888031d', 'width': 108}, {'height': 113, 'url': 'h... | |
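The matrix multiplications the page has you practice are exactly scaled dot-product attention, softmax(QK^T / sqrt(d)) V. A minimal pure-Python version, with toy 2-dimensional queries, keys, and values chosen purely for illustration:

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
    # computed row by row with plain lists.
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

The query aligns with the first key, so the output is a weighted blend of the two value rows, pulled toward the first one. Doing these same multiplications by hand on the page is the whole exercise.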
PipesHub - Open Source Enterprise Search Platform(Generative-AI Powered) | 20 | Hey everyone!
I’m excited to share something we’ve been building for the past few months – **PipesHub**, a fully open-source Enterprise Search Platform.
In short, PipesHub is your **customizable, scalable, enterprise-grade RAG platform** for everything from intelligent search to building agentic apps — all powered by... | 2025-06-03T12:20:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l2afie/pipeshub_open_source_enterprise_search/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2afie | false | null | t3_1l2afie | /r/LocalLLaMA/comments/1l2afie/pipeshub_open_source_enterprise_search/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'buY1wCd39fnYe4gQsYrQN9EpiOdHMy4jLV6G-HIWIsU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3yaFTk1xSYFxcZBXDKNOqCYrqTrI0QhhWaffF9QiqBc.jpg?width=108&crop=smart&auto=webp&s=eee15c3c7c7d5a8b4b7798af503093305d5a88d6', 'width': 108}, {'height': 108, 'url': 'h... |
Search-R1 Reproduce Project Shows Reflective Phrases Boost Benchmark Scores | 1 | [removed] | 2025-06-03T12:04:34 | Money-Coast-3905 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l2a4nn | false | null | t3_1l2a4nn | /r/LocalLLaMA/comments/1l2a4nn/searchr1_reproduce_project_shows_reflective/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'vslKNYayNSJhW2bwB-M5AJO72oNzq-EL4QJAx51le7s', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png?width=108&crop=smart&auto=webp&s=34722d3d96eca493c702af60a35f06bb016e6b2e', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png... | ||
Benchmarking OCR on LLMs for consumer GPUs: Xiaomi MiMo-VL-7B-RL vs Qwen, Gemma, InternVL — Surprising Insights on Parameters and /no_think | 1 | [removed] | 2025-06-03T12:03:32 | https://www.reddit.com/gallery/1l2a3xu | PaceZealousideal6091 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l2a3xu | false | null | t3_1l2a3xu | /r/LocalLLaMA/comments/1l2a3xu/benchmarking_ocr_on_llms_for_consumer_gpus_xiaomi/ | false | false | 1 | null | |
Search-R1 Reproduce Project Shows Reflective Phrases Boost Benchmark Scores | 1 | [removed] | 2025-06-03T12:03:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l2a3u4/searchr1_reproduce_project_shows_reflective/ | Money-Coast-3905 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2a3u4 | false | null | t3_1l2a3u4 | /r/LocalLLaMA/comments/1l2a3u4/searchr1_reproduce_project_shows_reflective/ | false | false | self | 1 | null |
Search-R1 Reproduce Project Shows Reflective Phrases Boost Benchmark Scores | 1 | [removed] | 2025-06-03T12:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l2a1kq/searchr1_reproduce_project_shows_reflective/ | Money-Coast-3905 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2a1kq | false | null | t3_1l2a1kq | /r/LocalLLaMA/comments/1l2a1kq/searchr1_reproduce_project_shows_reflective/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ytQuyxiOuL5ouJ0sqxE7gOdxYzItnkLkbVqu5MGpRDk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/k3NEN_IzWP6YwX-Xv4OyPxq8adBAlAwLY4zxqLeM6tw.jpg?width=108&crop=smart&auto=webp&s=833ac246fce7c4cf7a92407b679586ced159449f', 'width': 108}, {'height': 116, 'url': 'h... | |
I reproduced Search-R1 on Qwen 2.5-3B and slightly surpassed it—here's my key finding on "reflective reasoning"! | 1 | [removed] | 2025-06-03T11:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l29wpp/i_reproduced_searchr1_on_qwen_253b_and_slightly/ | Money-Coast-3905 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l29wpp | false | null | t3_1l29wpp | /r/LocalLLaMA/comments/1l29wpp/i_reproduced_searchr1_on_qwen_253b_and_slightly/ | false | false | 1 | null | |
GPT-4 might already have Theory of Mind. A new paper shows it can model false beliefs—without any special training. | 1 | [removed] | 2025-06-03T11:31:59 | https://bytesandbrains.beehiiv.com/subscribe | Visible-Property3453 | bytesandbrains.beehiiv.com | 1970-01-01T00:00:00 | 0 | {} | 1l29im5 | false | null | t3_1l29im5 | /r/LocalLLaMA/comments/1l29im5/gpt4_might_already_have_theory_of_mind_a_new/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'DAJ6lFy0un-mhroVTMgF-HKp3YlgN35hOyKuhK5AfRs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kPh-pYaOvgvWxjg-Ef1JJJuxQrn-z67Wn7KABDvesoY.jpg?width=108&crop=smart&auto=webp&s=84d6bc361ffd058ec676035761317d669b7d3e11', 'width': 108}, {'height': 113, 'url': 'h... | |
What are the minimum Parts i need for my micro controller to run 1B oder 2B models? | 1 | [removed] | 2025-06-03T11:29:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l29h4s/what_are_the_minimum_parts_i_need_for_my_micro/ | sokratesy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l29h4s | false | null | t3_1l29h4s | /r/LocalLLaMA/comments/1l29h4s/what_are_the_minimum_parts_i_need_for_my_micro/ | false | false | self | 1 | null |
Are Vision Language Models Biased? | 1 | [removed] | 2025-06-03T11:24:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l29d7y/are_vision_language_models_biased/ | Substantial-Air-1285 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l29d7y | false | null | t3_1l29d7y | /r/LocalLLaMA/comments/1l29d7y/are_vision_language_models_biased/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]} |
Vision Language Models are Biased | 1 | State-of-the-art VLMs (o3, o4-mini, GPT-4.1, Claude 3.7, Gemini 2.5) achieve 100% accuracy when counting in images of popular subjects (e.g. knowing that the Adidas logo has 3 stripes and a dog has 4 legs) but are only **\~17%** accurate when counting in counterfactual images (e.g. counting stripes in a 4-striped Adidas-like ... | 2025-06-03T11:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l28y7r/vision_language_models_are_biased/ | Substantial-Air-1285 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l28y7r | false | null | t3_1l28y7r | /r/LocalLLaMA/comments/1l28y7r/vision_language_models_are_biased/ | false | false | self | 1 | null |
Smallest model to fine tune for RAG-like use case? | 2 | I am investigating switching from a large model to a smaller LLM fine-tuned for our use case, which is a form of RAG.
Currently I use json for input / output but I can switch to simple text even if I lose the contour set of support information.
I imagine I can potentially use a 7/8B model, but I wonder if I can get a... | 2025-06-03T10:57:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l28vqr/smallest_model_to_fine_tune_for_raglike_use_case/ | daniele_dll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l28vqr | false | null | t3_1l28vqr | /r/LocalLLaMA/comments/1l28vqr/smallest_model_to_fine_tune_for_raglike_use_case/ | false | false | self | 2 | null |
Local LLM Server. Is ZimaBoard 2 a good option? If not, what is? | 1 | [removed] | 2025-06-03T10:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l28d68/local_llm_server_is_zimaboard_2_a_good_option_if/ | Jokras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l28d68 | false | null | t3_1l28d68 | /r/LocalLLaMA/comments/1l28d68/local_llm_server_is_zimaboard_2_a_good_option_if/ | false | false | self | 1 | null |
From crypto mining to democratizing AI: I built a platform that lets you run Llama-70B using distributed GPUs - Beta launching this month! | 0 | [removed] | 2025-06-03T10:25:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l28d43/from_crypto_mining_to_democratizing_ai_i_built_a/ | myurtsever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l28d43 | false | null | t3_1l28d43 | /r/LocalLLaMA/comments/1l28d43/from_crypto_mining_to_democratizing_ai_i_built_a/ | false | false | self | 0 | null |
nvidia/Nemotron-Research-Reasoning-Qwen-1.5B · Hugging Face | 144 | 2025-06-03T10:06:22 | https://huggingface.co/nvidia/Nemotron-Research-Reasoning-Qwen-1.5B | ab2377 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l2820t | false | null | t3_1l2820t | /r/LocalLLaMA/comments/1l2820t/nvidianemotronresearchreasoningqwen15b_hugging/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'VN7DXrn_T5Pxpv1mq6PfRd0le3hZRiB0SsXAxPAGtN0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RP_o1NnFnVgqmDAj8haRnOnwD5ZnZcjaUEqHghtS6ig.jpg?width=108&crop=smart&auto=webp&s=5ce45ed3dc5fb189823b00c0dd8f361141f4594c', 'width': 108}, {'height': 116, 'url': 'h... | ||
Good Hindi TTS needed; Kokoro works, but has unnatural pauses and very few tones? | 0 | So I am basically a fan of Kokoro; it has helped me automate a lot of stuff.
Currently working with chatterbox-tts, which only supports English; I liked it, though the output needs editing because of noise. | 2025-06-03T09:56:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l27wcj/good_hindi_tts_needed_kokoro_works_but_unfair/ | jadhavsaurabh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l27wcj | false | null | t3_1l27wcj | /r/LocalLLaMA/comments/1l27wcj/good_hindi_tts_needed_kokoro_works_but_unfair/ | false | false | self | 0 | null |
Google opensources DeepSearch stack | 917 | While it's not evident if this is the exact same stack they use in the Gemini user app, it sure looks very promising! Seems to work with Gemini and Google Search. Maybe this can be adapted for any local model and SearXNG? | 2025-06-03T09:25:47 | https://github.com/google-gemini/gemini-fullstack-langgraph-quickstart | Mr_Moonsilver | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l27g8d | false | null | t3_1l27g8d | /r/LocalLLaMA/comments/1l27g8d/google_opensources_deepsearch_stack/ | false | false | 917 | {'enabled': False, 'images': [{'id': '76BYxAmoYh0LRivDOt8EZLMmuAZLopQTlMSJxK1_FL0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jtUtL7EqwS5bMEk8XfF81tFd6n1MgnQyQL0hQG-jzRk.jpg?width=108&crop=smart&auto=webp&s=df7320f3f462d80501e450cba890c5c1da63f14c', 'width': 108}, {'height': 108, 'url': 'h... | |
Flexible Quant length models | 1 | [removed] | 2025-06-03T09:20:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l27dfk/flexible_quant_length_models/ | therealAtten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l27dfk | false | null | t3_1l27dfk | /r/LocalLLaMA/comments/1l27dfk/flexible_quant_length_models/ | false | false | self | 1 | null |
Current best multimodal web search model that fits on 16gb vram? | 1 | [removed] | 2025-06-03T09:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l276zx/current_best_multimodal_web_search_model_that/ | anonthatisopen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l276zx | false | null | t3_1l276zx | /r/LocalLLaMA/comments/1l276zx/current_best_multimodal_web_search_model_that/ | false | false | self | 1 | null |
Quant performance of Qwen3 30B A3B | 0 | Graph based on the data taken from the second pic, on Qwen's HF page. | 2025-06-03T09:00:59 | https://www.reddit.com/gallery/1l2735s | GreenTreeAndBlueSky | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l2735s | false | null | t3_1l2735s | /r/LocalLLaMA/comments/1l2735s/quants_performance_of_qwen3_30b_a3b/ | false | false | 0 | null |
How are commercial dense models so much faster? | 3 | Is there a way to increase the generation speed of a model?
I have been trying to make QwQ work, and it has been... acceptable quality-wise, but because of the thinking (thought for a minute) chatting has become a drag. And regenerating a message requires either a lot of patience or manually editing the message part e... | 2025-06-03T08:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/1l26ujb/how_are_commercial_dense_models_so_much_faster/ | kaisurniwurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l26ujb | false | null | t3_1l26ujb | /r/LocalLLaMA/comments/1l26ujb/how_are_commercial_dense_models_so_much_faster/ | false | false | self | 3 | null |
Fine-tune + Outlines | 1 | [removed] | 2025-06-03T08:40:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l26sos/finetune_outlines/ | Total_Hedgehog2946 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l26sos | false | null | t3_1l26sos | /r/LocalLLaMA/comments/1l26sos/finetune_outlines/ | false | false | self | 1 | null |
Claude 4 Sonnet Locally? | 1 | [removed] | 2025-06-03T08:18:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l26hm2/claude_4_sonnet_locally/ | VanFenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l26hm2 | false | null | t3_1l26hm2 | /r/LocalLLaMA/comments/1l26hm2/claude_4_sonnet_locally/ | false | false | self | 1 | null |
What happened to the fused/merged models? | 12 | I remember back when QwQ-32 first came out there was a FuseO1 thing with SkyT1. Are there any newer models like this? | 2025-06-03T07:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l25oyk/what_happened_to_the_fusedmerged_models/ | Su1tz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l25oyk | false | null | t3_1l25oyk | /r/LocalLLaMA/comments/1l25oyk/what_happened_to_the_fusedmerged_models/ | false | false | self | 12 | null |
any local FIM model for writing? | 1 | [removed] | 2025-06-03T07:22:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l25oft/any_local_fim_model_for_writing/ | TinyDetective110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l25oft | false | null | t3_1l25oft | /r/LocalLLaMA/comments/1l25oft/any_local_fim_model_for_writing/ | false | false | self | 1 | null |
Try to ask DeepSeek a lucky number and … | 1 | 2025-06-03T06:40:18 | https://v.redd.it/8k6anz35rn4f1 | hachimi_ddj | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l251is | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8k6anz35rn4f1/DASHPlaylist.mpd?a=1751524832%2CZjhkMjgxZDBiY2Q1NDU2NDJmZGVlOWVhMTZlZTRiNjdkYWE5NmU4YTE5YmI3YWM0NDAzNmY5YTljNmU2Njg0Nw%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/8k6anz35rn4f1/DASH_720.mp4?source=fallback', 'ha... | t3_1l251is | /r/LocalLLaMA/comments/1l251is/try_to_ask_deepseek_a_lucky_number_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud.png?width=108&crop=smart&format=pjpg&auto=webp&s=3dc0fc3719c342222fef2b79e744b84f588ad... | ||
Guide: How to Run DeepSeek R1 0528 (FP8 + Q4_K_M Hybrid) Locally on Ktransformers with 10tk/s | 1 | [removed] | 2025-06-03T06:01:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l24g9t/guide_how_to_run_deepseek_r1_0528_fp8_q4_k_m/ | texasdude11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l24g9t | false | null | t3_1l24g9t | /r/LocalLLaMA/comments/1l24g9t/guide_how_to_run_deepseek_r1_0528_fp8_q4_k_m/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RduHNzldNAak5rxDap3_NIDMxvXd1vV1qebNFfPTrp0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5fCBw3TJn0Z0DDLLbtMDzmb0QlZB-RM_NGf0IFIv63c.jpg?width=108&crop=smart&auto=webp&s=667feb0d27b7e82d0b729b17c003b649467b058c', 'width': 108}, {'height': 162, 'url': 'h... |
Did anyone that ordered the GMK X2 from Amazon get it yet? | 3 | From what I've read elsewhere, GMK is reportedly giving priority to orders made directly on their website. So Amazon orders get the leftovers. Has anyone gotten a X2 ordered off of Amazon? | 2025-06-03T05:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l23rrg/did_anyone_that_ordered_the_gmk_x2_from_amazon/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l23rrg | false | null | t3_1l23rrg | /r/LocalLLaMA/comments/1l23rrg/did_anyone_that_ordered_the_gmk_x2_from_amazon/ | false | false | self | 3 | null |
Do small reasoning/CoT models get stuck in long thinking loops more often? | 8 | Hey,
As the title suggests, I've noticed small reasoning models tend to think a lot, sometimes they don't stop.
QwQ-32B, DeepSeek-R1-Distill-Qwen-32B and DeepSeek-R1-0528-Qwen3-8B.
Larger models tend to not get stuck as often. Could it be because of short context windows? Or am I imagining it.
| 2025-06-03T04:53:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l23d09/do_small_reasoningcot_models_get_stuck_in_long/ | Proud_Fox_684 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l23d09 | false | null | t3_1l23d09 | /r/LocalLLaMA/comments/1l23d09/do_small_reasoningcot_models_get_stuck_in_long/ | false | false | self | 8 | null |
LMStudio+Cline+MacBookPro repeated response | 0 | Hi guys, I didn’t know who to turn to so I wanna ask here. On my new MacBook Pro M4 48gb RAM I’m running LM studio and Cline Vs code extension+MCP. When I ask something in Cline, it repeats the response over and over and was thinking maybe LMstudio was caching the response. When I use Copilot or other online models, it... | 2025-06-03T03:55:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l22cel/lmstudioclinemacbookpro_repeated_response/ | mcchung52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l22cel | false | null | t3_1l22cel | /r/LocalLLaMA/comments/1l22cel/lmstudioclinemacbookpro_repeated_response/ | false | false | self | 0 | null |
Fine tuning/Distilling local models to achieve high accuracy | 1 | [removed] | 2025-06-03T03:16:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l21mmb/fine_tuningdistilling_local_models_to_achieve/ | tezdhar-mk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l21mmb | false | null | t3_1l21mmb | /r/LocalLLaMA/comments/1l21mmb/fine_tuningdistilling_local_models_to_achieve/ | false | false | self | 1 | null |
OSS implementation of OpenAI's vector search tool? | 13 | Hi,
Is there a library that implements OpenAI's vector search?
Something where you can create vector stores, add files (pdf, docx, md) to the vector stores and then search these vector store for a certain query. | 2025-06-03T02:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l219ol/oss_implementation_of_openais_vector_search_tool/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l219ol | false | null | t3_1l219ol | /r/LocalLLaMA/comments/1l219ol/oss_implementation_of_openais_vector_search_tool/ | false | false | self | 13 | null |
rtx pro 6000 96gb a good inference gpu? | 1 | [removed] | 2025-06-03T02:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l20zul/rtx_pro_6000_96gb_a_good_inference_gpu/ | Dry-Vermicelli-682 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l20zul | false | null | t3_1l20zul | /r/LocalLLaMA/comments/1l20zul/rtx_pro_6000_96gb_a_good_inference_gpu/ | false | false | self | 1 | null |
lightening fast, realtime voice cloning text to speech model? | 1 | [removed] | 2025-06-03T02:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l20vyq/lightening_fast_realtime_voice_cloning_text_to/ | aivoicebot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l20vyq | false | null | t3_1l20vyq | /r/LocalLLaMA/comments/1l20vyq/lightening_fast_realtime_voice_cloning_text_to/ | false | false | self | 1 | null |
LLM an engine | 24 | I can’t help but feel like the LLM, ollama, deep seek, openAI, Claude, are all engines sitting on a stand. Yes we see the raw power it puts out when sitting on an engine stand, but we can’t quite conceptually figure out the “body” of the automobile. The car changed the world, but not without first the engine.
I’ve be... | 2025-06-03T02:13:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l20f2h/llm_an_engine/ | localremote762 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l20f2h | false | null | t3_1l20f2h | /r/LocalLLaMA/comments/1l20f2h/llm_an_engine/ | false | false | self | 24 | null |
How Far Can AI Go in Higher Math Education? | 1 | [removed] | 2025-06-03T01:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l202zi/how_far_can_ai_go_in_higher_math_education/ | Quick-Knowledge1615 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l202zi | false | null | t3_1l202zi | /r/LocalLLaMA/comments/1l202zi/how_far_can_ai_go_in_higher_math_education/ | false | false | self | 1 | null |
Losing my patience with LLMs | 0 | me: ok. then why did you bullshit me earlier and tell me i couldn't...
llm: You're absolutely right — I did not bullshit you. I was being very careful to make sure you understood the nuances and limitations of...
... later ...
Final Answer
You did not get "bullshitted" — I was being very careful to make sure you didn’t... | 2025-06-03T01:41:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l1zsci/losing_my_patience_with_llms/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1zsci | false | null | t3_1l1zsci | /r/LocalLLaMA/comments/1l1zsci/losing_my_patience_with_llms/ | false | false | self | 0 | null |
Local model/setup similar to GPT4-turbo. Is it possible? | 1 | [removed] | 2025-06-03T01:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l1z1pg/local_modelsetup_similar_to_gpt4turbo_is_it/ | FinancialMechanic853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1z1pg | false | null | t3_1l1z1pg | /r/LocalLLaMA/comments/1l1z1pg/local_modelsetup_similar_to_gpt4turbo_is_it/ | false | false | self | 1 | null |
Sharing my a demo of tool for easy handwritten fine-tuning dataset creation! | 4 | hello! I wanted to share a tool that I created for making hand written fine tuning datasets, originally I built this for myself when I was unable to find conversational datasets formatted the way I needed when I was fine-tuning llama 3 for the first time and hand typing JSON files seemed like some sort of torture so I ... | 2025-06-02T23:32:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l1x5k4/sharing_my_a_demo_of_tool_for_easy_handwritten/ | abaris243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1x5k4 | false | null | t3_1l1x5k4 | /r/LocalLLaMA/comments/1l1x5k4/sharing_my_a_demo_of_tool_for_easy_handwritten/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'k3HvQmGHEJ1BAvR8nlt8TqGTjxapGWDW6TdcJI-H9Eo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Sc-1dxf-YWblv6Tfxa4TwjsFJ-7fpjAy8ZXAlqtIL3A.jpg?width=108&crop=smart&auto=webp&s=4f6646bf38ed847b77260fb5e044f0d2d2c85075', 'width': 108}, {'height': 116, 'url': 'h... |
Why use thinking model ? | 28 | I'm relatively new to using models. I've experimented with some that have a "thinking" feature, but I'm finding the delay quite frustrating – a minute to generate a response feels excessive.
I understand these models are popular, so I'm curious what I might be missing in terms of their benefits or how to best utilize... | 2025-06-02T23:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l1wnsz/why_use_thinking_model/ | Empty_Object_9299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1wnsz | false | null | t3_1l1wnsz | /r/LocalLLaMA/comments/1l1wnsz/why_use_thinking_model/ | false | false | self | 28 | null |
llama4:maverick vs qwen3:235b | 12 | Title says it all. Which do like best and why? | 2025-06-02T22:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l1wdlj/llama4maverick_vs_qwen3235b/ | M3GaPrincess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1wdlj | false | null | t3_1l1wdlj | /r/LocalLLaMA/comments/1l1wdlj/llama4maverick_vs_qwen3235b/ | false | false | self | 12 | null |
Anthropic is owning the ARC-AGI-2 leaderboard | 0 | https://arcprize.org/leaderboard | 2025-06-02T22:46:27 | Balance- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1w4fb | false | null | t3_1l1w4fb | /r/LocalLLaMA/comments/1l1w4fb/anthropic_is_owning_the_arcagi2_leaderboard/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'AWODV5oVitenu2UAmjnRBkjhowjViOIJjGmT9x6HZ8A', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jpeg?width=108&crop=smart&auto=webp&s=4dcc0ef420694d90033002576c523f515b1ced5c', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jp... | ||
Thoughts on "The Real Cost of Open-Source LLMs [Breakdowns]" | 0 | [https://artificialintelligencemadesimple.substack.com/p/the-real-cost-of-open-source-llms](https://artificialintelligencemadesimple.substack.com/p/the-real-cost-of-open-source-llms)
I agree with most of the arguments in this post. While the pro argument for using open-source LLMs for most part is that you control you... | 2025-06-02T22:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l1vv61/thoughts_on_the_real_cost_of_opensource_llms/ | azhorAhai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1vv61 | false | null | t3_1l1vv61 | /r/LocalLLaMA/comments/1l1vv61/thoughts_on_the_real_cost_of_opensource_llms/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'uDPbA4kCHGno54ldbcC_Aws-DRxCuXWuc0eOPJUcLZo', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/AmWwYBgrRaoUY61Cb23CvP-UDkprYPqju3rYlwENdK4.jpg?width=108&crop=smart&auto=webp&s=2d191c8ab679e708e18cfd26aa18f44d49b726cb', 'width': 108}, {'height': 159, 'url': 'h... |
Run models on local pc | 1 | [removed] | 2025-06-02T21:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l1uk46/run_models_on_local_pc/ | borisr10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1uk46 | false | null | t3_1l1uk46 | /r/LocalLLaMA/comments/1l1uk46/run_models_on_local_pc/ | false | false | self | 1 | null |
What formats should I use for fine tuning of LLM’s? | 3 | I have been working on an AI agent program that essentially recursively splits tasks into smaller tasks, until an LLM decides it is simple enough. Then it attempts to execute the task with tool calling, and the results propagate up to the initial task. I want to fine tune a model (maybe Qwen2.5) to perform better on th... | 2025-06-02T21:34:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l1uex6/what_formats_should_i_use_for_fine_tuning_of_llms/ | Pretend_Guava7322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1uex6 | false | null | t3_1l1uex6 | /r/LocalLLaMA/comments/1l1uex6/what_formats_should_i_use_for_fine_tuning_of_llms/ | false | false | self | 3 | null |
From Zork to LocalLLM’s. | 0 | Newb here. I recently taught my kids how to make text based adventure games based on Transformers lore using AI. They had a blast. I wanted ChatGPT to generate an image with each story prompt and I was really disappointed with the speed and frustrated by the constant copyright issues.
I found myself upgrading the ... | 2025-06-02T21:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l1uct5/from_zork_to_localllms/ | Yakapo88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1uct5 | false | null | t3_1l1uct5 | /r/LocalLLaMA/comments/1l1uct5/from_zork_to_localllms/ | false | false | self | 0 | null |
Which model should duckduckgo add next? | 0 | They currently have llama 3.3 and mistral small 3, in terms of open models. The closed ones are o3 mini , gpt 4o mini and Claude 3 haiku.
What would you add if you were in charge? | 2025-06-02T21:04:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l1tnug/which_model_should_duckduckgo_add_next/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1tnug | false | null | t3_1l1tnug | /r/LocalLLaMA/comments/1l1tnug/which_model_should_duckduckgo_add_next/ | false | false | self | 0 | null |
💻 I optimized Qwen3:30B MoE to run on my RTX 3070 laptop at 24 tok/s — full breakdown inside | 1 | [removed] | 2025-06-02T20:52:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l1td2c/i_optimized_qwen330b_moe_to_run_on_my_rtx_3070/ | kekePower | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1td2c | false | null | t3_1l1td2c | /r/LocalLLaMA/comments/1l1td2c/i_optimized_qwen330b_moe_to_run_on_my_rtx_3070/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oqG6_t4eKaYPbxXv0zqeDRigZxwGztvuEF9rm-qDThY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m4SqJf5AxZPC0pBa5AyT-_SLeKjlRmzEHhZInSn1cY8.jpg?width=108&crop=smart&auto=webp&s=4b5d3e2bcd050efbb28916e92c0b7d0420fc426f', 'width': 108}, {'height': 116, 'url': 'h... |
ZorkGPT: Open source AI agent that plays the classic text adventure game Zork | 116 | I built an AI system that plays Zork (the classic, and very hard 1977 text adventure game) using multiple open-source LLMs working together.
The system uses separate models for different tasks:
* Agent model decides what actions to take
* Critic model evaluates those actions before execution
* Extractor model parses ... | 2025-06-02T20:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l1t75j/zorkgpt_open_source_ai_agent_that_plays_the/ | stickystyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1t75j | false | null | t3_1l1t75j | /r/LocalLLaMA/comments/1l1t75j/zorkgpt_open_source_ai_agent_that_plays_the/ | false | false | self | 116 | null |
What real-world use cases actually justify running a local LLM instead of using a cloud model? | 1 | [removed] | 2025-06-02T20:13:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l1scmw/what_realworld_use_cases_actually_justify_running/ | Similar-Let-1981 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1scmw | false | null | t3_1l1scmw | /r/LocalLLaMA/comments/1l1scmw/what_realworld_use_cases_actually_justify_running/ | false | false | self | 1 | null |
MCP Client with Local Ollama LLM and Multi-Server Tool Support | 1 | [removed] | 2025-06-02T20:03:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l1s3y9/mcp_client_with_local_ollama_llm_and_multiserver/ | Wise-Grand-8374 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1s3y9 | false | null | t3_1l1s3y9 | /r/LocalLLaMA/comments/1l1s3y9/mcp_client_with_local_ollama_llm_and_multiserver/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Rx9DrYNWlGQT1_MnLZNwFPsPllAXaBHqTpEvI2wZRew', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=108&crop=smart&auto=webp&s=0969de21fb21b1d0da604b858cff682cc4837a4d', 'width': 108}, {'height': 108, 'url': 'h... |
MCP Client with Local Ollama LLM and Multi-Server Tool Support | 1 | [removed] | 2025-06-02T20:02:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l1s2vs/mcp_client_with_local_ollama_llm_and_multiserver/ | Wise-Grand-8374 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1s2vs | false | null | t3_1l1s2vs | /r/LocalLLaMA/comments/1l1s2vs/mcp_client_with_local_ollama_llm_and_multiserver/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Rx9DrYNWlGQT1_MnLZNwFPsPllAXaBHqTpEvI2wZRew', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=108&crop=smart&auto=webp&s=0969de21fb21b1d0da604b858cff682cc4837a4d', 'width': 108}, {'height': 108, 'url': 'h... |
LLMs to run on CPU and low memory | 1 | [removed] | 2025-06-02T19:50:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l1rrox/llms_to_run_on_cpu_and_low_memory/ | idreesBughio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1rrox | false | null | t3_1l1rrox | /r/LocalLLaMA/comments/1l1rrox/llms_to_run_on_cpu_and_low_memory/ | false | false | self | 1 | null |
Use offline voice controlled agents to search and browse the internet with a contextually aware LLM in the next version of AI Runner | 10 | 2025-06-02T19:34:26 | https://v.redd.it/ir6jvtbbgk4f1 | w00fl35 | /r/LocalLLaMA/comments/1l1rda5/use_offline_voice_controlled_agents_to_search_and/ | 1970-01-01T00:00:00 | 0 | {} | 1l1rda5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ir6jvtbbgk4f1/DASHPlaylist.mpd?a=1751614473%2CZjE3ZTE0N2E3MDFlZDVjMGM0YWMyYmUxNzAyMmZhNDgxNzQ3OTQyYTc2OGQ3NGVlMjY5YTJhYjBkYTJjOWRhZA%3D%3D&v=1&f=sd', 'duration': 94, 'fallback_url': 'https://v.redd.it/ir6jvtbbgk4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l1rda5 | /r/LocalLLaMA/comments/1l1rda5/use_offline_voice_controlled_agents_to_search_and/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5.png?width=108&crop=smart&format=pjpg&auto=webp&s=016a5cfdf04765b2264f0a5c79dc0f9eac775... | ||
I made LLMs respond with diff patches rather than standard code blocks and the result is simply amazing! | 144 | I've been developing a coding assistant for JetBrains IDEs called **ProxyAI** (previously CodeGPT), and I wanted to experiment with an idea where LLM is instructed to produce diffs as opposed to regular code blocks, which ProxyAI then applies directly to your project.
I was fairly skeptical about this at first, but af... | 2025-06-02T19:32:00 | https://v.redd.it/zcq3wk5ffk4f1 | carlrobertoh | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1rb18 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/zcq3wk5ffk4f1/DASHPlaylist.mpd?a=1751484737%2COTNhNjM0ZTI5MmJlZTA1NjUyYmEzMjZkODAwZTkxNGVlYmNiMjFiOTQ2NzNhZWQwN2VhZjE1YWE1NTE1OTA5Zg%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/zcq3wk5ffk4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l1rb18 | /r/LocalLLaMA/comments/1l1rb18/i_made_llms_respond_with_diff_patches_rather_than/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm.png?width=108&crop=smart&format=pjpg&auto=webp&s=7dec1b58b6dc219690c91bb8e91d66daddf63... | |
671B IQ1_S vs 70B Q8_0 | 11 | In an optimal world, there should be no shortage of memory. VRAM is used over RAM for its superior memory bandwidth, where HBM > GDDR > DDR. However, due to limitations that are oftentimes financial, quantisations are used to fit a bigger model into smaller memory by approximating the precision of the weights.
Usually... | 2025-06-02T19:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l1r366/671b_iq1_s_vs_70b_q8_0/ | nagareteku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1r366 | false | null | t3_1l1r366 | /r/LocalLLaMA/comments/1l1r366/671b_iq1_s_vs_70b_q8_0/ | false | false | self | 11 | null |
At the airport people watching while I run models locally: | 2,007 | 2025-06-02T19:10:02 | Current-Ticket4214 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1qqdx | false | null | t3_1l1qqdx | /r/LocalLLaMA/comments/1l1qqdx/at_the_airport_people_watching_while_i_run_models/ | false | false | 2,007 | {'enabled': True, 'images': [{'id': 'RctvpTqQgVgKVx-oERol7Qbk3LOI73TjzT_6xx1QVCM', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/55ab38z0ck4f1.jpeg?width=108&crop=smart&auto=webp&s=9f9c7d418d20ce56a74d327f6586c3f5250632a9', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/55ab38z0ck4f1.j... | |||
RTX PRO 6000 Blackwell and Max-Q version | 1 | [removed] | 2025-06-02T19:00:16 | https://youtu.be/LSQL7c29arM | svskaushik | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1l1qgyb | false | {'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/LSQL7c29arM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop... | t3_1l1qgyb | /r/LocalLLaMA/comments/1l1qgyb/rtx_pro_6000_blackwell_and_maxq_version/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'o74b2cGO3aPzc-1Jf9o3hd99veR6MAbzIaLpwdqf36I', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ccci9Txk0dAfwXv5whHZsN6QRjnVmvbGwDRTneHMrDU.jpg?width=108&crop=smart&auto=webp&s=fde6c90f5b2df709b57caab25e0a4836c119276b', 'width': 108}, {'height': 162, 'url': 'h... | |
Which programming languages do LLMs struggle with the most, and why? | 59 | I've noticed that LLMs do well with Python, which is quite obvious, but often make mistakes in other languages. I can't test every language myself, so can you share, which languages have you seen them struggle with, and what went wrong?
For context: I want to test LLMs on various "hard" languages | 2025-06-02T18:45:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l1q3dk/which_programming_languages_do_llms_struggle_with/ | alozowski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1q3dk | false | null | t3_1l1q3dk | /r/LocalLLaMA/comments/1l1q3dk/which_programming_languages_do_llms_struggle_with/ | false | false | self | 59 | null |
Application to auto-test or determine an LLM model's optimal settings | 1 | Does this exist?
Like something that can run a specific model through a bunch of test prompts on a range of settings and provide you with a report at that end recommending settings for temperature, rep penalty, etc?
Even its just a recommended settings range between x and y would be nice. | 2025-06-02T18:41:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l1pzdb/application_to_autotest_or_determine_an_llm/ | Primary-Wear-2460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1pzdb | false | null | t3_1l1pzdb | /r/LocalLLaMA/comments/1l1pzdb/application_to_autotest_or_determine_an_llm/ | false | false | self | 1 | null |
Sharing my a demo of tool for easy handwritten fine-tuning dataset creation! | 1 | [removed] | 2025-06-02T18:39:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l1pxf3/sharing_my_a_demo_of_tool_for_easy_handwritten/ | abaris243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1pxf3 | false | null | t3_1l1pxf3 | /r/LocalLLaMA/comments/1l1pxf3/sharing_my_a_demo_of_tool_for_easy_handwritten/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UjweHFlBfjtq-qgJURLZe74ot5ARI6AHWtzN7VjFiRs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pRdGI5JF11YeJd2mj6iu585KhAnrYcxq8kgOMs8jPnc.jpg?width=108&crop=smart&auto=webp&s=5704e06d9310b4293e014267081165563e8bbeda', 'width': 108}, {'height': 162, 'url': 'h... |
Looking for advice: 5060 ti using PCIE 4.0 for converting my desktop into an LLM server | 1 | Hey!
I am looking to create a server for LLM experimentation. I am pricing out different options, and purchasing a new 5060 ti 16gb gpu seems like an attractive price friendly option to start dipping my toes.
The desktop I am looking to convert has a Ryzen 5800x, 64gb ram, 2 tb nvme 4. The mobo only supports pcie 4.0... | 2025-06-02T18:37:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l1pvrr/looking_for_advice_5060_ti_using_pcie_40_for/ | polymath_renegade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1pvrr | false | null | t3_1l1pvrr | /r/LocalLLaMA/comments/1l1pvrr/looking_for_advice_5060_ti_using_pcie_40_for/ | false | false | self | 1 | null |
What to do with GPUs? [Seeking ideas] | 3 | Hi there, I have a sizeable amount of GPU reserved instanced in Azure and GCP for next few longs. I am looking for some fun project to work on. Looking for ideas about what to build/fine-tune a model. | 2025-06-02T18:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l1pueu/what_to_do_with_gpus_seeking_ideas/ | Ok-Regular-1142 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1pueu | false | null | t3_1l1pueu | /r/LocalLLaMA/comments/1l1pueu/what_to_do_with_gpus_seeking_ideas/ | false | false | self | 3 | null |
latest llama.cpp (b5576) + DeepSeek-R1-0528-Qwen3-8B-Q8_0.gguf successful VScode + MCP running | 71 | Just downloaded [Release b5576 · ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/releases/tag/b5576) and try to use MCP tools with folllowing environment:
1. [DeepSeek-R1-0528-Qwen3-8B-Q8\_0](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-R1-0528-Qwen3-8B-GGUF)2. VS code
3. Cline
4. MCP tools lik... | 2025-06-02T18:21:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l1pgv9/latest_llamacpp_b5576_deepseekr10528qwen38bq8/ | tyoyvr-2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1pgv9 | false | null | t3_1l1pgv9 | /r/LocalLLaMA/comments/1l1pgv9/latest_llamacpp_b5576_deepseekr10528qwen38bq8/ | false | false | 71 | {'enabled': False, 'images': [{'id': 'XimqWHIYvM5SwtiqperTDecr6b-hk0KlHOB1ZVWG1Lo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sYVW4X6gc-B0EvqWe0QNA3QGRk5e7XKmzrtKFGnT66k.jpg?width=108&crop=smart&auto=webp&s=6dda0e07950f1814805fb54edde94c948c6d062e', 'width': 108}, {'height': 108, 'url': 'h... | |
Mistral-Small 3.1 is {good|bad} at OCR when using {ollama|llama.cpp} | 3 | I’ve tried everything I can think of, and I’m losing my mind. Does anyone have any suggestions?
I’ve been trying out 24-28B local vision models for some slightly specialized OCR (nothing too fancy, it’s still words printed on a page), first using Ollama for inference. The results for Mistral Small 3.1 were fantastic,... | 2025-06-02T17:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ob6a/mistralsmall_31_is_goodbad_at_ocr_when_using/ | exacly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ob6a | false | null | t3_1l1ob6a | /r/LocalLLaMA/comments/1l1ob6a/mistralsmall_31_is_goodbad_at_ocr_when_using/ | false | false | self | 3 | null |
chatbot arena update: Claude 4 opus at #8, #4 with style control, #1 on webdev. | 1 | [removed] | 2025-06-02T17:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l1npyr/chatbot_arena_update_claude_4_opus_at_8_4_with/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1npyr | false | null | t3_1l1npyr | /r/LocalLLaMA/comments/1l1npyr/chatbot_arena_update_claude_4_opus_at_8_4_with/ | false | false | self | 1 | null |
lmarena update: Claude 4 opus at #8, #4 with style control, #1 on webdev. | 1 | [removed] | 2025-06-02T17:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l1nobm/lmarena_update_claude_4_opus_at_8_4_with_style/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1nobm | false | null | t3_1l1nobm | /r/LocalLLaMA/comments/1l1nobm/lmarena_update_claude_4_opus_at_8_4_with_style/ | false | false | self | 1 | null |
Best Software to Self-host LLM | 0 | Hello everyone,
What is the best Android app where I can plug in my API key? Same question for Windows?
It would be great if it supports new models just like LiteLLM from Anthropic, Google, OpenAI, etc. | 2025-06-02T17:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l1nnaa/best_software_to_selfhost_llm/ | AcanthaceaeNo5503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1nnaa | false | null | t3_1l1nnaa | /r/LocalLLaMA/comments/1l1nnaa/best_software_to_selfhost_llm/ | false | false | self | 0 | null |
lmarena update: Claude 4 opus at #8, #4 with style control, #1 on webdev. | 1 | [removed] | 2025-06-02T17:11:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l1nmzb/lmarena_update_claude_4_opus_at_8_4_with_style/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1nmzb | false | null | t3_1l1nmzb | /r/LocalLLaMA/comments/1l1nmzb/lmarena_update_claude_4_opus_at_8_4_with_style/ | false | false | self | 1 | null |
Best uncensored multi language LLM up to 12B, still Mistral Nemo? | 23 | I want to use a fixed model for my private none commercial AI project because I want to finetune it later (LoRAs) for it's specific tasks. For that I need:
- A up to 12B text to text model - need to match into 12GB VRAM inclusive 8K context window.
- As uncensored as possible in it's core.
- Official support for main ... | 2025-06-02T16:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l1n6h4/best_uncensored_multi_language_llm_up_to_12b/ | Blizado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1n6h4 | false | null | t3_1l1n6h4 | /r/LocalLLaMA/comments/1l1n6h4/best_uncensored_multi_language_llm_up_to_12b/ | false | false | self | 23 | null |
Has anyone had success implementing a local FIM model? | 5 | I've noticed that the auto-completion features in my current IDE can be sluggish. As I rely heavily on auto-completion during coding, I strongly prefer accurate autocomplete suggestions like those offered by "Cursor" over automated code generation(Chat/Agent tabs). Therefore, I'm seeking a local alternative that incorp... | 2025-06-02T16:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l1mliw/has_anyone_had_success_implementing_a_local_fim/ | m_abdelfattah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1mliw | false | null | t3_1l1mliw | /r/LocalLLaMA/comments/1l1mliw/has_anyone_had_success_implementing_a_local_fim/ | false | false | self | 5 | null |
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 98 | PlayAI open-sourced a new Speech Editing model today that allows for precise & clean speech editing. A huge step up from traditional autoregressive models that aren't designed for this task. | 2025-06-02T16:20:20 | https://github.com/playht/playdiffusion | SandSalt8370 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l1mb6y | false | null | t3_1l1mb6y | /r/LocalLLaMA/comments/1l1mb6y/playais_latest_diffusionbased_speech_editing/ | false | false | 98 | {'enabled': False, 'images': [{'id': 'CSWT8MFaDNeST1dQr1DqDgWI53La8d_i9DyfjkHlVy0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w-YzJC8yYFljokN1sVgB95jsZmNJotgMItgN5CbyhjY.jpg?width=108&crop=smart&auto=webp&s=41cf3242be5c447c4e9a00d66fb5da1c09dcca24', 'width': 108}, {'height': 108, 'url': 'h... | |
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:19:54 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1masl | false | null | t3_1l1masl | /r/LocalLLaMA/comments/1l1masl/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null | ||
How I Built a Better Gumloop in 48 Hours with Vibe Coding | 2 | Most no-code agent builders are just workflow automation with LLM calls sprinkled in. They're not built for actual agents that need to:
* Make dynamic routing decisions
* Handle complex tool orchestration
* Support ANY model (not just OpenAI)
I successfully built such a platform, here's everything I used:
**Agent F... | 2025-06-02T16:19:18 | https://v.redd.it/1xjdawdkfj4f1 | goddamnit_1 | /r/LocalLLaMA/comments/1l1ma92/how_i_built_a_better_gumloop_in_48_hours_with/ | 1970-01-01T00:00:00 | 0 | {} | 1l1ma92 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1xjdawdkfj4f1/DASHPlaylist.mpd?a=1751602760%2CYTk1OWNlZmUzNmRhMThlNGNjNDlmYWUwNWJjYTFlNzA0ZTk1ZjEwZWQ2NjBhMTE3NGJmM2ZhODlhZGYzMDJlZQ%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/1xjdawdkfj4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l1ma92 | /r/LocalLLaMA/comments/1l1ma92/how_i_built_a_better_gumloop_in_48_hours_with/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=108&crop=smart&format=pjpg&auto=webp&s=7b23392dccfb736ace44dc1edbb3a40a7538b... | |
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 5 | PlayAI open-sourced a new Speech Editing model today that allows for precise & clean speech editing. A huge step up from traditional autoregressive models that aren't designed for this task. | 2025-06-02T16:16:12 | https://huggingface.co/spaces/PlayHT/PlayDiffusion | SandSalt8370 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l1m7fm | false | null | t3_1l1m7fm | /r/LocalLLaMA/comments/1l1m7fm/playais_latest_diffusionbased_speech_editing/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'CqblY3Zg0YyBkT7WL4m7rTHmTkmHkQsN6Ve3JKLmzUk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=108&crop=smart&auto=webp&s=e9fb57a5c0e50ad13a69c186f8b3a8edb818eacc', 'width': 108}, {'height': 116, 'url': 'h... | |
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:15:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m6v5 | false | null | t3_1l1m6v5 | /r/LocalLLaMA/comments/1l1m6v5/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null |