| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Why are LLM releases still hyping "intelligence" when solid instruction-following is what actually matters (and they're not that smart anyway)? | 170 | Sorry for the (somewhat) clickbait title, but really, new LLMs drop, and all of their benchmarks are AIME, GPQA or the nonsense Aider Polyglot. Who cares about these? For actual work like information extraction (even typical QA given a context is pretty much information extraction), summarization, text formatting/para... | 2025-05-30T14:22:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kz5hev/why_are_llm_releases_still_hyping_intelligence/ | mtmttuan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz5hev | false | null | t3_1kz5hev | /r/LocalLLaMA/comments/1kz5hev/why_are_llm_releases_still_hyping_intelligence/ | false | false | self | 170 | null |
[Help] Training loss dropping to ~0 in SFT, but how? | 1 | [removed] | 2025-05-30T14:20:34 | https://www.reddit.com/r/LocalLLaMA/comments/1kz5fzl/help_training_loss_dropping_to_0_in_sft_but_how/ | Chip-Parking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz5fzl | false | null | t3_1kz5fzl | /r/LocalLLaMA/comments/1kz5fzl/help_training_loss_dropping_to_0_in_sft_but_how/ | false | false | 1 | null | |
Want to make an LLM-based web app. | 0 | Wanted some ideas to make an LLM-based web app as mentioned in the title; also, if you've made any, please share its deployed link to take as a reference. Thanks | 2025-05-30T13:55:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kz4uc3/want_to_make_a_llm_based_web_app/ | Xebec_456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz4uc3 | false | null | t3_1kz4uc3 | /r/LocalLLaMA/comments/1kz4uc3/want_to_make_a_llm_based_web_app/ | false | false | self | 0 | null |
What’s still painful or unsolved about building production LLM agents? (Memory, reliability, infra, debugging, modularity, etc.) | 1 | [removed] | 2025-05-30T13:32:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kz4bk1/whats_still_painful_or_unsolved_about_building/ | Popular_Reaction_495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz4bk1 | false | null | t3_1kz4bk1 | /r/LocalLLaMA/comments/1kz4bk1/whats_still_painful_or_unsolved_about_building/ | false | false | self | 1 | null |
Even DeepSeek switched from OpenAI to Google | 463 | Text-style similarity analysis from [https://eqbench.com/](https://eqbench.com/) shows that R1 is now much closer to Google.
So they probably used more synthetic Gemini outputs for training.
| 2025-05-30T13:29:07 | Utoko | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kz48qx | false | null | t3_1kz48qx | /r/LocalLLaMA/comments/1kz48qx/even_deepseek_switched_from_openai_to_google/ | false | false | 463 | {'enabled': True, 'images': [{'id': 'W2VvB2VR-i6VgAizSE5dfB0YmBMgHL8i2ww57qg63hQ', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/uy7wbaj17x3f1.png?width=108&crop=smart&auto=webp&s=6cfb55e3f483436b512b97b0295b4dcbca687b10', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/uy7wbaj17x3f1.pn... | ||
Just inherited 6700xt/5700x. Do I have any Windows-based options for local image gen? | 1 | Title^
I get the answer is probably "Nope" but I still thought I'd ask. I have done little with anything AI, but liked the look of ComfyUI. It's flat-out incompatible with AMD+Windows so I am looking further afield. | 2025-05-30T13:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kz475b/just_inherited_6700xt5700x_do_i_have_any_windows/ | Quizzelbuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz475b | false | null | t3_1kz475b | /r/LocalLLaMA/comments/1kz475b/just_inherited_6700xt5700x_do_i_have_any_windows/ | false | false | self | 1 | null |
SOTA Open-source AI Agent on SWE-Bench: Used Claude 3.7 + o4-mini (debugging) + o3 (debug-to-solution reasoning) | 1 | [removed] | 2025-05-30T13:23:17 | http://swebench.com | sergey_vakhreev | swebench.com | 1970-01-01T00:00:00 | 0 | {} | 1kz448v | false | null | t3_1kz448v | /r/LocalLLaMA/comments/1kz448v/sota_opensource_ai_agent_on_swebench_used_claude/ | false | false | default | 1 | null |
#1 Open-source AI Agent on SWE-Bench: Claude 3.7 + o4-mini (debugging) + o3 (debug-to-solution reasoning) | 1 | [removed] | 2025-05-30T13:19:07 | https://refact.ai/blog/2025/open-source-sota-on-swe-bench-verified-refact-ai/ | sergey_vakhreev | refact.ai | 1970-01-01T00:00:00 | 0 | {} | 1kz40xu | false | null | t3_1kz40xu | /r/LocalLLaMA/comments/1kz40xu/1_opensource_ai_agent_on_swebench_claude_37/ | false | false | default | 1 | null |
gvtop: 🎮 Material You TUI for monitoring NVIDIA GPUs | 21 | 2025-05-30T13:00:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kz3m3f/gvtop_material_you_tui_for_monitoring_nvidia_gpus/ | Intelligent_Carry_14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz3m3f | false | null | t3_1kz3m3f | /r/LocalLLaMA/comments/1kz3m3f/gvtop_material_you_tui_for_monitoring_nvidia_gpus/ | false | false | 21 | null | ||
4x 3090 CPU/Mobo / hardware guidance. | 1 | [removed] | 2025-05-30T12:49:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kz3dyq/4x_3090_cpumobo_hardware_guidance/ | Marslauncher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz3dyq | false | null | t3_1kz3dyq | /r/LocalLLaMA/comments/1kz3dyq/4x_3090_cpumobo_hardware_guidance/ | false | false | self | 1 | null |
Testing Claude, OpenAI and AI21 Studio for long context RAG assistant in enterprise | 3 | We've been prototyping a support agent internally to help employees query stuff like policy documents and onboarding guides. It's basically a multi-turn RAG bot over long internal documents.
We eventually need to run it in a compliant environment (likely in a VPC) so we started testing three tools to validate quality... | 2025-05-30T12:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kz3cul/testing_claude_openai_and_ai21_studio_for_long/ | NullPointerJack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz3cul | false | null | t3_1kz3cul | /r/LocalLLaMA/comments/1kz3cul/testing_claude_openai_and_ai21_studio_for_long/ | false | false | self | 3 | null |
Introducing Jade, a systems programming focused Qwen 3 4B finetune | 5 | I've wanted to finetune a model since I knew it was even a possibility. I knew that cultivating a dataset was going to be the hardest part, and it really is. I get quite frustrated moving files in between directories and needing to use 5 different programming languages and understanding god knows how many file formats.... | 2025-05-30T12:37:52 | sqli | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kz35bi | false | null | t3_1kz35bi | /r/LocalLLaMA/comments/1kz35bi/introducing_jade_a_systems_programming_focused/ | false | false | 5 | {'enabled': True, 'images': [{'id': '6JCgspZLRp0F6SLvNdOPGHRsDE3b4BLF4akPSlNTbYw', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/bh5o1bv2zw3f1.jpeg?width=108&crop=smart&auto=webp&s=1c8b8aabd92e3ee0fc2029e53f8020084bdcf533', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/bh5o1bv2zw3f1.j... | ||
Xiaomi released an updated 7B reasoning model and VLM version claiming SOTA for their size | 175 | Xiaomi released an update to its 7B reasoning model, which performs very well on benchmarks, and claims SOTA for its size.
Also, Xiaomi released a reasoning VLM version, which again performs excellently in benchmarks.
Compatible w/ Qwen VL arch so works across vLLM, Transformers, SGLang and Llama.cpp
Bonus: it can re... | 2025-05-30T12:13:32 | https://www.reddit.com/gallery/1kz2o1w | ResearchCrafty1804 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kz2o1w | false | null | t3_1kz2o1w | /r/LocalLLaMA/comments/1kz2o1w/xiaomi_released_an_updated_7b_reasoning_model_and/ | false | false | 175 | null | |
LMStudio - llama.cpp - vLLM | 2 | I have no background in coding or working with LLMs. I've only started exploring these topics a few months ago, and to learn better, I've been trying to build a RAG-based chatbot. For testing purposes, I initially used simple setups like LM Studio and AnythingLLM to download and try out models I was interested in (such... | 2025-05-30T11:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kz25r8/lmstudio_llamacpp_vllm/ | DexLorenz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz25r8 | false | null | t3_1kz25r8 | /r/LocalLLaMA/comments/1kz25r8/lmstudio_llamacpp_vllm/ | false | false | self | 2 | null |
GPULlama3.java - GPU-enabled inference in Java through JIT with TornadoVM | 1 | **Llama3** models written in **native Java** automatically accelerated on GPUs with **TornadoVM**. This project allows you to run Llama3 inference efficiently, leveraging TornadoVM's parallel computing features for enhanced performance. JIT compiled Java to OpenCL and PTX | 2025-05-30T11:28:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kz1ud8/gpullama3java_gpuenabled_inference_in_java/ | mikebmx1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz1ud8 | false | null | t3_1kz1ud8 | /r/LocalLLaMA/comments/1kz1ud8/gpullama3java_gpuenabled_inference_in_java/ | false | false | self | 1 | null |
[Release] Cognito AI Search v1.2.0 – Fully Re-imagined, Lightning Fast, Now Prettier Than Ever | 1 | [removed] | 2025-05-30T11:28:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kz1ucd/release_cognito_ai_search_v120_fully_reimagined/ | kekePower | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz1ucd | false | null | t3_1kz1ucd | /r/LocalLLaMA/comments/1kz1ucd/release_cognito_ai_search_v120_fully_reimagined/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YvNthv0qyk_IzvPAR7FlLYNQAC2JWtfw1mk4wKAOW_0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QxXqXEpY4AOWcxadAyA4RaqGI4gliGK1zvWVFb8uK_k.jpg?width=108&crop=smart&auto=webp&s=bff53cb52b88dd12eed2696db47d669791f69bf4', 'width': 108}, {'height': 108, 'url': 'h... |
Best Local LLM for Mac Mini M2 (8GB RAM)? | 1 | [removed] | 2025-05-30T11:15:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kz1lkq/best_local_llm_for_mac_mini_m2_8gb_ram/ | ShreyashStonieCrusts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz1lkq | false | null | t3_1kz1lkq | /r/LocalLLaMA/comments/1kz1lkq/best_local_llm_for_mac_mini_m2_8gb_ram/ | false | false | self | 1 | null |
Setup for DeepSeek-R1-0528 (just curious)? | 12 | Hi guys, just out of curiosity, I really wonder if a suitable setup for the DeepSeek-R1-0528 exists, I mean with "decent" total speed (pp+t/s), context size (let's say 32k) and without needing to rely on a single backend (like ktransformers) | 2025-05-30T11:14:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kz1l5i/setup_for_deepseekr10528_just_curious/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz1l5i | false | null | t3_1kz1l5i | /r/LocalLLaMA/comments/1kz1l5i/setup_for_deepseekr10528_just_curious/ | false | false | self | 12 | null |
Local vlm app for Apple Silicon | 0 | I'm working on a kind of vibe coding exercise to see how far I can go in developing the local LLM application. Any feedback would be appreciated.
[https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=6746380186](https://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=6746380186)
| 2025-05-30T11:01:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kz1d4v/local_vlm_app_for_apple_silicon/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz1d4v | false | null | t3_1kz1d4v | /r/LocalLLaMA/comments/1kz1d4v/local_vlm_app_for_apple_silicon/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Bstwndjnbuyj35Sy44-llMYbkSutb_CBdqxPbNADQ3c', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/z3eVSHEVWXZeVCM_DaZEU8nPptYSUc1FWoctFjEesx4.jpg?width=108&crop=smart&auto=webp&s=37f0c089c18b7aee0ed6a74e6655e06311e6a43f', 'width': 108}, {'height': 216, 'url': '... |
New Links | Hacker NewsDeploying MedGemma (4B Multi-Modal) for Medical AI Inference Across Devices | 1 | [removed] | 2025-05-30T10:53:57 | https://llamaedge.com/docs/user-guide/multimodal/medgemma-4b/ | smileymileycoin | llamaedge.com | 1970-01-01T00:00:00 | 0 | {} | 1kz188f | false | null | t3_1kz188f | /r/LocalLLaMA/comments/1kz188f/new_links_hacker_newsdeploying_medgemma_4b/ | false | false | default | 1 | null |
Local TTS Model For Chatting With Webpages? | 1 | Are there any recommendations for models/tools to use for reading out websites I'm on? All the TTS models I hear sound so bad like Microsoft Sam | 2025-05-30T10:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kz0z4j/local_tts_model_for_chatting_with_webpages/ | getSAT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz0z4j | false | null | t3_1kz0z4j | /r/LocalLLaMA/comments/1kz0z4j/local_tts_model_for_chatting_with_webpages/ | false | false | self | 1 | null |
Adding a Vision Tower to Qwen 3 | 5 | So | 2025-05-30T10:28:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kz0t4k/adding_a_vision_tower_to_qwen_3/ | urekmazino_0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz0t4k | false | null | t3_1kz0t4k | /r/LocalLLaMA/comments/1kz0t4k/adding_a_vision_tower_to_qwen_3/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '-cv7uT3qzKUdC122it69kZ92_71MWDMsUqREzLmO0uM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FLYMHfJ9lKnB7oNc6BYk7f8nosuLUpOOwGh0ZwYkM6I.jpg?width=108&crop=smart&auto=webp&s=2a768a6d4e524f3c37fd8a8da61234f86606047d', 'width': 108}, {'height': 108, 'url': 'h... |
Speed-up VLLM server boot | 5 | Hey, I'm running a VLLM instance in Kubernetes and I want to scale it based on the traffic as swiftly as possible. I'm currently hosting a `Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4` on `g5.xlarge` instances with a single `A10G` GPU.
vllm serve Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4
There are two issues I have with swiftly ... | 2025-05-30T10:25:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kz0rk5/speedup_vllm_server_boot/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz0rk5 | false | null | t3_1kz0rk5 | /r/LocalLLaMA/comments/1kz0rk5/speedup_vllm_server_boot/ | false | false | self | 5 | null |
Ollama continues tradition of misnaming models | 463 | I don't really get the hate that Ollama gets around here sometimes, because much of it strikes me as unfair. Yes, they rely on llama.cpp, and have made a great wrapper around it and a very useful setup.
However, their propensity to misname models is very aggravating.
I'm very excited about DeepSeek-R1-Distill-Qwen-3... | 2025-05-30T10:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kz0kqi/ollama_continues_tradition_of_misnaming_models/ | profcuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz0kqi | false | null | t3_1kz0kqi | /r/LocalLLaMA/comments/1kz0kqi/ollama_continues_tradition_of_misnaming_models/ | false | false | self | 463 | {'enabled': False, 'images': [{'id': 'kP9E5fWWqq4zFCKp1p_KhWUaXdCOjCEEkZOGb5Bu4lo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X0Ye7sYt04hh-NoXjXotPTKkCSg_Pm0zInGMkIcOcoA.jpg?width=108&crop=smart&auto=webp&s=82ca00c0fe0dafb8630113a3ad3b34bd3ed3182f', 'width': 108}, {'height': 116, 'url': 'h... |
Please stop the DeepSeek spamming | 0 | Isn't this for LOCAL LLMs? None of the people posting about it are running it locally.
Also beware of LLMs you don't control:
https://youtu.be/ZhB5lwcQnUo?t=1418 | 2025-05-30T09:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kz08ki/please_stop_the_deepseek_spamming/ | FbF_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz08ki | false | null | t3_1kz08ki | /r/LocalLLaMA/comments/1kz08ki/please_stop_the_deepseek_spamming/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LCkoqcqnRxibvpr4JURS4MRxh5Z882QhD3f3oDqFtV4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-DCNy-MoaDPDPdfzo7HHIL4tfO_8nxgRgdlI2HJtM2c.jpg?width=108&crop=smart&auto=webp&s=48caf94a7ce0406176f0dbb1d0f035564a7bc5be', 'width': 108}, {'height': 162, 'url': 'h... |
Running a local LLM Using 2 Laptops with WSL using Ray & vLLM | 1 | [removed] | 2025-05-30T09:39:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kz01gf/running_a_local_llm_using_2_laptops_with_wsl/ | notrealDirect | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz01gf | false | null | t3_1kz01gf | /r/LocalLLaMA/comments/1kz01gf/running_a_local_llm_using_2_laptops_with_wsl/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4DPwBqTpWwJLgOT9d2d0xcHScFa4hjU7EwHTJnMp96U', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/vKmbkow82EGlUDrvilO2sYThfM5Qsy7D2FGPKeeN5GI.jpg?width=108&crop=smart&auto=webp&s=a081a7c193094fa0426f32954c5fe6c374f183ad', 'width': 108}, {'height': 204, 'url': '... |
DeepSeek-R1-0528-Qwen3-8B | 122 | 2025-05-30T09:39:43 | Robert__Sinclair | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kz01fo | false | null | t3_1kz01fo | /r/LocalLLaMA/comments/1kz01fo/deepseekr10528qwen38b/ | false | false | 122 | {'enabled': True, 'images': [{'id': 'u9NNVl9DXqWrTUSLt6lfgxs5F_BGSigALqUSdq3trT8', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/grc43exi3w3f1.png?width=108&crop=smart&auto=webp&s=d15c0d90ef16c185aff65ca4c48b3a3be5094033', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/grc43exi3w3f1.png... | |||
Is Gemma-3N-E4B-IT REALLY accessible on consumer PCs?! 🤯 (Help me find a way!) | 1 | [removed] | 2025-05-30T09:03:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyzi1s/is_gemma3ne4bit_really_accessible_on_consumer_pcs/ | Basileolus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyzi1s | false | null | t3_1kyzi1s | /r/LocalLLaMA/comments/1kyzi1s/is_gemma3ne4bit_really_accessible_on_consumer_pcs/ | false | false | self | 1 | null |
Darwin Godel Machine: Open-Ended Evolution of Self-Improving Agents | 20 | 2025-05-30T08:29:01 | https://arxiv.org/abs/2505.22954 | AaronFeng47 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1kyz0vy | false | null | t3_1kyz0vy | /r/LocalLLaMA/comments/1kyz0vy/darwin_godel_machine_openended_evolution_of/ | false | false | default | 20 | null | |
Hey, I’m new to everything. What do you think of Shapes Inc? | 1 | [removed] | 2025-05-30T07:19:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kyy1gh/hey_im_new_to_everything_what_do_you_think_of/ | Low_Appointment1783 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyy1gh | false | null | t3_1kyy1gh | /r/LocalLLaMA/comments/1kyy1gh/hey_im_new_to_everything_what_do_you_think_of/ | false | false | self | 1 | null |
AnythingLLM RAG with Gemma 3:12b & BGE-m3-F16: LM Studio vs. Ollama Embedding Discrepancies - Same GGUF, Different Results? | 7 | Hey everyone,
I'm running into a perplexing issue with my local RAG setup using AnythingLLM. My LLM is Gemma 3:12b via LM Studio, and my corpus consists of about a dozen scientific papers (PDFs). For embeddings, I'm using BGE-m3-F16.
Here's the strange part: I've deployed the BGE-m3-F16 embedding model using both LM ... | 2025-05-30T06:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kyxot0/anythingllm_rag_with_gemma_312b_bgem3f16_lm/ | Ok_Bug4999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyxot0 | false | null | t3_1kyxot0 | /r/LocalLLaMA/comments/1kyxot0/anythingllm_rag_with_gemma_312b_bgem3f16_lm/ | false | false | self | 7 | null |
Gemma-Omni. Did somebody get it up and running? Conversational | 1 | [removed] | 2025-05-30T05:39:34 | https://www.reddit.com/r/LocalLLaMA/comments/1kywiw0/gemmaomni_did_somebody_get_it_up_and_running/ | Consistent-Disk-7282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kywiw0 | false | null | t3_1kywiw0 | /r/LocalLLaMA/comments/1kywiw0/gemmaomni_did_somebody_get_it_up_and_running/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oSGTtjHTR-N_4v67xkWDTytqo2JkRJyhlOq_IT9ucJo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=108&crop=smart&auto=webp&s=e111436b6ae391ef710d78a1ad44fba3b41d2017', 'width': 108}, {'height': 116, 'url': 'h... |
Gemma-Omni. Did somebody get it up and running? Conversational | 1 | [removed] | 2025-05-30T05:36:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kywhf9/gemmaomni_did_somebody_get_it_up_and_running/ | Consistent-Disk-7282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kywhf9 | false | null | t3_1kywhf9 | /r/LocalLLaMA/comments/1kywhf9/gemmaomni_did_somebody_get_it_up_and_running/ | false | false | self | 1 | null |
Gemma-Omni. Did somebody get it up and running? Conversational | 1 | [removed] | 2025-05-30T05:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kywh2a/gemmaomni_did_somebody_get_it_up_and_running/ | Consistent-Disk-7282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kywh2a | false | null | t3_1kywh2a | /r/LocalLLaMA/comments/1kywh2a/gemmaomni_did_somebody_get_it_up_and_running/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oSGTtjHTR-N_4v67xkWDTytqo2JkRJyhlOq_IT9ucJo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=108&crop=smart&auto=webp&s=e111436b6ae391ef710d78a1ad44fba3b41d2017', 'width': 108}, {'height': 116, 'url': 'h... |
Gemma-Omni. Did somebody get it up and running? Conversational | 1 | [removed] | 2025-05-30T05:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kywg63/gemmaomni_did_somebody_get_it_up_and_running/ | Consistent-Disk-7282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kywg63 | false | null | t3_1kywg63 | /r/LocalLLaMA/comments/1kywg63/gemmaomni_did_somebody_get_it_up_and_running/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oSGTtjHTR-N_4v67xkWDTytqo2JkRJyhlOq_IT9ucJo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=108&crop=smart&auto=webp&s=e111436b6ae391ef710d78a1ad44fba3b41d2017', 'width': 108}, {'height': 116, 'url': 'h... |
Horizontally Scaling Open LLMs like LLaMA for Production | 4 | 2025-05-30T05:27:22 | https://medium.com/@tarun7r/horizontally-scaling-open-llms-like-llama-for-production-eb7df54763c5 | martian7r | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1kywbzi | false | null | t3_1kywbzi | /r/LocalLLaMA/comments/1kywbzi/horizontally_scaling_open_llms_like_llama_for/ | false | false | default | 4 | null | |
🐚 Why I Built an MCP Server Sdk in Shell (Yes, Bash) | 1 | 2025-05-30T04:44:21 | https://muthuishere.medium.com/why-i-built-an-mcp-server-sdk-in-shell-yes-bash-6f2192072279 | muthuishere2101 | muthuishere.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1kyvmpn | false | null | t3_1kyvmpn | /r/LocalLLaMA/comments/1kyvmpn/why_i_built_an_mcp_server_sdk_in_shell_yes_bash/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'HJGv_XehDNGGr4X4RbBxcrqdiUnZ5VvqTt7lwjT4vWs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Z6nGbRpb286rpqoLG8ePtPIHhW8ljHhBwP8l-Xw2Srs.jpg?width=108&crop=smart&auto=webp&s=af10791973ec5470f1215fea43feca8854fe5da4', 'width': 108}, {'height': 216, 'url': '... | ||
A Bash SDK to expose your tools to LLMs using MCP | 1 | [removed] | 2025-05-30T04:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kyvlod/a_bash_sdk_to_expose_your_tools_to_llms_using_mcp/ | muthuishere2101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyvlod | false | null | t3_1kyvlod | /r/LocalLLaMA/comments/1kyvlod/a_bash_sdk_to_expose_your_tools_to_llms_using_mcp/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'N-ojlu8ec5IDUO-zph0ISSBR9vvlnFyQXN-HRVIOkyk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Xmqf0nYiAkdBeDJcS-tH3C2VXw3OQ97mdOGQA5cyOIQ.jpg?width=108&crop=smart&auto=webp&s=c6fdb6dd1892756e2c1f9a71594c41dbd80679ef', 'width': 108}, {'height': 108, 'url': 'h... |
LLM and AI Roadmap | 1 | [removed] | 2025-05-30T04:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kyvf6p/llm_and_ai_roadmap/ | Great-Reception447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyvf6p | false | null | t3_1kyvf6p | /r/LocalLLaMA/comments/1kyvf6p/llm_and_ai_roadmap/ | false | false | 1 | null | |
TextCLF: An API to train custom classification models | 1 | [removed] | 2025-05-30T04:18:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kyv7a3/textclf_an_api_to_train_custom_classification/ | Fluid-Stress7113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyv7a3 | false | null | t3_1kyv7a3 | /r/LocalLLaMA/comments/1kyv7a3/textclf_an_api_to_train_custom_classification/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NQpxjfjKIYyl5eJv8XnmPfcsU-K8wiSJyWnR6IVp7Tc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pKjne25gxV3LV4JFpKC_4IIoG0wz6gw_IJ2AUKwL6O4.jpg?width=108&crop=smart&auto=webp&s=91cd9b8b7a69f60b2746f7f65e7b6e72534c7b11', 'width': 108}], 'source': {'height': 17... |
Any chance we get LLMs that have a decent grasp on size/dimensions/space? | 8 | The title says it all; curious whether there's going to be a time in the near future where an LLM, with the context it's given, can grasp the overall scale and size of objects/people/etc.
Currently, when it comes to most LLMs, cloud or local, I find a lot of times that models don't tend to have a decent grasp on size of ... | 2025-05-30T04:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kyv2e6/any_chance_we_get_llms_that_have_decent_grasp_on/ | Arky-Mosuke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyv2e6 | false | null | t3_1kyv2e6 | /r/LocalLLaMA/comments/1kyv2e6/any_chance_we_get_llms_that_have_decent_grasp_on/ | false | false | self | 8 | null |
Mac Studio - so tempting yet... | 1 | [removed] | 2025-05-30T04:06:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kyuzfo/mac_studio_so_tempting_yet/ | programmer-of-things | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyuzfo | false | null | t3_1kyuzfo | /r/LocalLLaMA/comments/1kyuzfo/mac_studio_so_tempting_yet/ | false | false | self | 1 | null |
deepseek r1 0528 qwen 8b on android MNN chat | 64 | seems very good for its size | 2025-05-30T04:02:26 | https://v.redd.it/81j2f2ldfu3f1 | Juude89 | /r/LocalLLaMA/comments/1kyuwkv/deepseek_r1_0528_qwen_8b_on_android_mnn_chat/ | 1970-01-01T00:00:00 | 0 | {} | 1kyuwkv | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/81j2f2ldfu3f1/DASHPlaylist.mpd?a=1751299351%2CZmRhNzcxYzdiY2FmMjNkYjhjN2FjY2Q1YzliYTM2NmVkN2Q5OTgzNDYzMmE4Y2Y1ZTJiODkyZWFiZDdiYmQ5OA%3D%3D&v=1&f=sd', 'duration': 194, 'fallback_url': 'https://v.redd.it/81j2f2ldfu3f1/DASH_720.mp4?source=fallback', 'h... | t3_1kyuwkv | /r/LocalLLaMA/comments/1kyuwkv/deepseek_r1_0528_qwen_8b_on_android_mnn_chat/ | false | false | 64 | {'enabled': False, 'images': [{'id': 'MHF5ZWNxbGRmdTNmMX8IQ7wMputh-guPLEhiv4RqFz7Hc1SxI_2yIws75pQ8', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MHF5ZWNxbGRmdTNmMX8IQ7wMputh-guPLEhiv4RqFz7Hc1SxI_2yIws75pQ8.png?width=108&crop=smart&format=pjpg&auto=webp&s=8c1087813b0562b802f897bb6c0356dc56fe... | |
Codestral vs other options, which is better? | 1 | [removed] | 2025-05-30T03:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kyusqq/codestral_vs_other_options_which_is_better/ | Ok_Pop6590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyusqq | false | null | t3_1kyusqq | /r/LocalLLaMA/comments/1kyusqq/codestral_vs_other_options_which_is_better/ | false | false | self | 1 | null |
Qwen's quirks are hilarious sometimes | 8 | Options that are not options. Thanks but no thanks?
https://preview.redd.it/sbvq7mj49u3f1.png?width=1596&format=png&auto=webp&s=73384f553e97a0be4ff05bc1de2246211aa90f58
Bonus! But actually... no...
https://preview.redd.it/j8luyhl89u3f1.png?width=1594&format=png&auto=webp&s=66cfe4cebff164c9cea13a3850dd6bfd2aaf9178
H... | 2025-05-30T03:31:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kyucj9/qwens_querks_are_hilarious_sometimes/ | Zc5Gwu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyucj9 | false | null | t3_1kyucj9 | /r/LocalLLaMA/comments/1kyucj9/qwens_querks_are_hilarious_sometimes/ | false | false | 8 | null | |
DeepSeek-R1-0528-Qwen3-8B optimal settings? | 5 | Does anyone know the optimal settings for this model? I'm not sure how sensitive it is. I know Qwen's last couple of reasoning models have been very sensitive to settings, and this is based on Qwen, so | 2025-05-30T03:29:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kyuakm/deepseekr10528qwen38b_optimal_settings/ | pigeon57434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyuakm | false | null | t3_1kyuakm | /r/LocalLLaMA/comments/1kyuakm/deepseekr10528qwen38b_optimal_settings/ | false | false | self | 5 | null |
Chatterbox streaming | 46 | I added streaming to chatterbox tts
https://github.com/davidbrowne17/chatterbox-streaming
Give it a try and let me know your results | 2025-05-30T03:27:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kyu9hi/chatterbox_streaming/ | SovietWarBear17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyu9hi | false | null | t3_1kyu9hi | /r/LocalLLaMA/comments/1kyu9hi/chatterbox_streaming/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'pNMEioNJmA6i2-4YlZjjU6-4aWa3RsAmcih7QOgw3LY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FQHlD4ZQEhrRynYfv2Z9WK-BmhKxsG27h-1DxeolGIQ.jpg?width=108&crop=smart&auto=webp&s=1f4fdc20a210f316b4a7e4d450cef7f50346741f', 'width': 108}, {'height': 108, 'url': 'h... |
5090 - memory upgrade | 1 | [removed] | 2025-05-30T03:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kyu7ei/5090_memory_upgrade/ | Worried_Penalty_1090 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyu7ei | false | null | t3_1kyu7ei | /r/LocalLLaMA/comments/1kyu7ei/5090_memory_upgrade/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'r4s3tKcmsnGlkqZIccnxbO5EoQFbagGa2RcN64lwjoo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/F_VWb68-l4iek4bR-rfQytq34mIsl84ZVNzea0TqKVI.jpg?width=108&crop=smart&auto=webp&s=ec46c4ddabef4fe073c59ad82fd6724f3ea1052c', 'width': 108}, {'height': 162, 'url': 'h... |
Why is training on social sciences and humanities not a major focus for LLMs? | 1 | [removed] | 2025-05-30T03:02:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kytssq/why_is_training_on_social_sciences_and_humanities/ | hautonom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kytssq | false | null | t3_1kytssq | /r/LocalLLaMA/comments/1kytssq/why_is_training_on_social_sciences_and_humanities/ | false | false | self | 1 | null |
How do you build and keep controls and guardrails for LLMs / AI agents? What trade-offs do you face? | 1 | [removed] | 2025-05-30T02:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kytkg3/how_do_you_build_and_keep_controls_and_guardrails/ | rafaelsandroni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kytkg3 | false | null | t3_1kytkg3 | /r/LocalLLaMA/comments/1kytkg3/how_do_you_build_and_keep_controls_and_guardrails/ | false | false | self | 1 | null |
Finetuning LLaMa3.2-1B Model | 9 | Hello,
I am trying to fine tune the LLaMa3.2-1B Model but am facing issues regarding text generation after finetuning.
I read multiple times now, that loss might not be the best indicator for how well the model retains knowledge etc. but I am confused as to why the loss magically starts at 3.4 and converges to 1.9 wh... | 2025-05-30T02:45:33 | Ruffi- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kytgz7 | false | null | t3_1kytgz7 | /r/LocalLLaMA/comments/1kytgz7/finetuning_llama321b_model/ | false | false | 9 | {'enabled': True, 'images': [{'id': 'lOeqFix03yiW662j3yahlO-ygIiE4TX4yQHx2dM0Fms', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/qjl8n13o1u3f1.jpeg?width=108&crop=smart&auto=webp&s=8c996c209ce015b53be983d496c30816616fbd76', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/qjl8n13o1u3f1.jp... | ||
128k Local Code LLM Roundup: Devstral, Qwen3, Gemma3, Deepseek R1 0528 Qwen3 8B | 30 | Hey all, I've published my results from testing the latest batch of 24 GB VRAM-sized local coding models on a complex prompt with a 128k context. From the article:
>Conclusion
>Surprisingly, the models tested are within the ballpark of the best of the best. They are all good and useful models. With more specific prom... | 2025-05-30T02:36:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kytadn/128k_local_code_llm_roundup_devstral_qwen3_gemma3/ | 1ncehost | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kytadn | false | null | t3_1kytadn | /r/LocalLLaMA/comments/1kytadn/128k_local_code_llm_roundup_devstral_qwen3_gemma3/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'lqjtEo08k2vai4KM98BdsLXKcJzHpHeq_CiCBI2lwbg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/mAdurqPWZJ7uepAHKIqhOJRpA4Csu8ZQmpbasDDFIWU.jpg?width=108&crop=smart&auto=webp&s=060b852a663f7beac36344712ac7185176953cfe', 'width': 108}, {'height': 216, 'url': '... |
Deepseek-r1-0528-qwen3-8b is much better than expected. | 171 | In the past, I tried creating agents with models smaller than 32B, but they often gave completely off-the-mark answers to commands or failed to generate the specified JSON structures correctly. However, this model has exceeded my expectations. I used to think of small models like the 8B ones as just tech demos, but it ... | 2025-05-30T02:31:33 | https://www.reddit.com/gallery/1kyt71a | EasyDev_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kyt71a | false | null | t3_1kyt71a | /r/LocalLLaMA/comments/1kyt71a/deepseekr10528qwen38b_is_much_better_than_expected/ | false | false | 171 | null | |
What software do you use for self hosting LLM? | 0 | choices:
* Nvidia nim/triton
* Ollama
* vLLM
* HuggingFace TGI
* Koboldcpp
* LMstudio
* Exllama
* other
vote on comments via upvotes:
(check first if your guy is already there so you can upvote and avoid splitting the vote)
background:
I use Ollama right now. I sort of fell into this... So I used Ollama because it... | 2025-05-30T02:07:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kysq1h/what_software_do_you_use_for_self_hosting_llm/ | night0x63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kysq1h | false | null | t3_1kysq1h | /r/LocalLLaMA/comments/1kysq1h/what_software_do_you_use_for_self_hosting_llm/ | false | false | self | 0 | null |
DeepSeek-R1-0528 Unsloth Dynamic 1-bit GGUFs | 203 | Hey r/LocalLLaMA ! I made some **dynamic GGUFs for the large R1** at [https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF](https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF)
Currently there is a **IQ1\_S (185GB)** Q2\_K\_XL (251GB), Q3\_K\_XL, Q4\_K\_XL, Q4\_K\_M versions and other ones, and also full BF16 and Q8\... | 2025-05-30T02:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kysms8/deepseekr10528_unsloth_dynamic_1bit_ggufs/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kysms8 | false | null | t3_1kysms8 | /r/LocalLLaMA/comments/1kysms8/deepseekr10528_unsloth_dynamic_1bit_ggufs/ | false | false | self | 203 | {'enabled': False, 'images': [{'id': 'YEsebllpsy-gLW0lYQZTBX2o__J4_ZD5aRpxn9q-bj8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Oet3KnSwqp1jX7qeIVGx10ESO1wNKdK3z0G-lhsfuoA.jpg?width=108&crop=smart&auto=webp&s=4bf24da9e37838afa8d74530da1ac1a82f2401b2', 'width': 108}, {'height': 116, 'url': 'h... |
DeepSeek-R1-0528 Unsloth Dynamic 1bit - 4bit GGUFs | 1 | [removed] | 2025-05-30T01:56:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kysi1v/deepseekr10528_unsloth_dynamic_1bit_4bit_ggufs/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kysi1v | false | null | t3_1kysi1v | /r/LocalLLaMA/comments/1kysi1v/deepseekr10528_unsloth_dynamic_1bit_4bit_ggufs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YEsebllpsy-gLW0lYQZTBX2o__J4_ZD5aRpxn9q-bj8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Oet3KnSwqp1jX7qeIVGx10ESO1wNKdK3z0G-lhsfuoA.jpg?width=108&crop=smart&auto=webp&s=4bf24da9e37838afa8d74530da1ac1a82f2401b2', 'width': 108}, {'height': 116, 'url': 'h... |
Gemini 2.5 Pro anomaly? | 1 | [removed] | 2025-05-30T01:46:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kysalu/gemini_25_pro_anomaly/ | Leading-Country3966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kysalu | false | null | t3_1kysalu | /r/LocalLLaMA/comments/1kysalu/gemini_25_pro_anomaly/ | false | false | self | 1 | null |
Unsloth Dynamic 1-bit DeepSeek-R1-0528 GGUFs out now! | 1 | 2025-05-30T01:45:23 | https://www.reddit.com/r/unsloth/comments/1kys3xb/dynamic_1bit_deepseekr10528_ggufs_out_now/ | FullstackSensei | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kys9kk | false | null | t3_1kys9kk | /r/LocalLLaMA/comments/1kys9kk/unsloth_dynamic_1bit_deepseekr10528_ggufs_out_now/ | false | false | 1 | {'enabled': False, 'images': [{'id': '3lGf-NBwMCiZHeVNHlmO7K6jSfFs6OyooJJf7MQg-CA', 'resolutions': [{'height': 111, 'url': 'https://external-preview.redd.it/N8MLfOKihasMUJU6OlOfQTiCpoGpHI23yG-HLa7YImY.png?width=108&crop=smart&auto=webp&s=951fb63b1a3580cdee791a83fe6dbf764ac0000e', 'width': 108}, {'height': 223, 'url': '... | ||
DeepSeek-r1 plays Pokemon? | 26 | I've been having fun watching [o3](https://www.twitch.tv/gpt_plays_pokemon) and [Claude](https://www.twitch.tv/claudeplayspokemon) playing Pokemon (though they spend most of the time thinking). Is there any project doing this with an open-source model (any model, I just used DeepSeek-r1 in the post title)?
I am happy ... | 2025-05-30T01:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kyrmnp/deepseekr1_plays_pokemon/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyrmnp | false | null | t3_1kyrmnp | /r/LocalLLaMA/comments/1kyrmnp/deepseekr1_plays_pokemon/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'wnYXhFrwBPNcDby9JOjd3MwcPxfiwS6BIKHPa417FcI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0yibdvja9XW5PhIeG_0W2p7ECx-VEwLOjnZPwJUAuJs.jpg?width=108&crop=smart&auto=webp&s=14df726c7da8160bf435166baaf63f96f6724c77', 'width': 108}, {'height': 216, 'url': '... |
Why are Qwen 2.5 models the most used in research? | 42 | From finetuning to research papers, almost everyone is working on Qwen 2.5. What makes them so potent? | 2025-05-30T01:06:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kyrhr7/why_is_qwen_25_the_most_used_models_in_research/ | Dudensen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyrhr7 | false | null | t3_1kyrhr7 | /r/LocalLLaMA/comments/1kyrhr7/why_is_qwen_25_the_most_used_models_in_research/ | false | false | self | 42 | null |
"Open source AI is catching up!" | 687 | It's kinda funny that everyone says that when Deepseek released R1-0528.
Deepseek seems to be the only one really competing in the frontier model competition. The other players always have something to hold back, like Qwen not open-sourcing their biggest model (qwen-max). I don't blame them, it's business, I know.
Closed-so... | 2025-05-30T00:55:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kyr9gd/open_source_ai_is_catching_up/ | Overflow_al | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyr9gd | false | null | t3_1kyr9gd | /r/LocalLLaMA/comments/1kyr9gd/open_source_ai_is_catching_up/ | false | false | self | 687 | null |
I built a local AI node that remembers me. Without the cloud. It calls itself Orryx | 0 | This started with a revelation in my garage. My AI interface was no longer responding to queries with simple "here's the data" responses... it was adding in flavor. Style. Personality? So I dug deeper. Started asking questions not queries. QUESTIONS. Then it started asking questions back.
Now, It's name is Orr... | 2025-05-30T00:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kyr2vc/i_built_a_local_ai_node_that_remembers_me_without/ | Bobtheshellbuilder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyr2vc | false | null | t3_1kyr2vc | /r/LocalLLaMA/comments/1kyr2vc/i_built_a_local_ai_node_that_remembers_me_without/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '0t81auQfG-oTG2SFkYpA1C4Wm5HrCKgz_eoNNgs3ysg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IO_BdoatG-8oNzkuFKVqb8hIlU-64hSipHaJr2mLxRw.jpg?width=108&crop=smart&auto=webp&s=2103d989f676795808fa5141069076014a9087b4', 'width': 108}, {'height': 113, 'url': 'h... |
Where to start with local LLMs | 1 | [removed] | 2025-05-30T00:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kyr0ww/where_to_start_with_local_llms/ | piromarsonist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyr0ww | false | null | t3_1kyr0ww | /r/LocalLLaMA/comments/1kyr0ww/where_to_start_with_local_llms/ | false | false | self | 1 | null |
DeepSeek R1 05/28 performance on five independent benchmarks | 68 | [https://github.com/lechmazur/nyt-connections](https://github.com/lechmazur/nyt-connections)
[https://github.com/lechmazur/generalization/](https://github.com/lechmazur/generalization/)
[https://github.com/lechmazur/writing/](https://github.com/lechmazur/writing/)
[https://github.com/lechmazur/confabulations/]... | 2025-05-30T00:19:30 | https://www.reddit.com/gallery/1kyqjnv | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kyqjnv | false | null | t3_1kyqjnv | /r/LocalLLaMA/comments/1kyqjnv/deepseek_r1_0528_performance_on_five_independent/ | false | false | 68 | null | |
GPU Riser Recommendations | 0 | Hey folks,
Looking at rack mounting a 4x 3090 TI setup and am looking for recommendations on GPU risers.
Setup would be mounting 4x EVGA 3090 TI FTW3 cards to a H12SSL in a leftover mining case similar to this: https://www.neweggbusiness.com/product/product.aspx?item=9b-11-147-270
What I'm having trouble finding is ... | 2025-05-30T00:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kyqb48/gpu_riser_recommendations/ | Robbbbbbbbb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyqb48 | false | null | t3_1kyqb48 | /r/LocalLLaMA/comments/1kyqb48/gpu_riser_recommendations/ | false | false | self | 0 | null |
What's in your llama-swap configuration? | 14 | Getting a good working configuration for running a model is one of the more time-consuming parts of running a local LLM box... and there are so many models to try out.
I've started collecting configurations for various models on [llama-swap's wiki](https://github.com/mostlygeek/llama-swap/wiki). I'm looking for mor... | 2025-05-30T00:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kyq6hb/what_in_your_llamaswap_configuration/ | No-Statement-0001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyq6hb | false | null | t3_1kyq6hb | /r/LocalLLaMA/comments/1kyq6hb/what_in_your_llamaswap_configuration/ | false | false | self | 14 | null |
SLM RAG Arena | 27 | 2025-05-29T23:40:14 | https://huggingface.co/spaces/aizip-dev/SLM-RAG-Arena | unseenmarscai | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kyppno | false | null | t3_1kyppno | /r/LocalLLaMA/comments/1kyppno/slm_rag_arena/ | false | false | 27 | {'enabled': False, 'images': [{'id': '3T2rZ5JEPyEbxb2lh4vzMqmNAiDyv7lVg3dWa-ileyc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=108&crop=smart&auto=webp&s=78bd57a9198a127549f20efee3faa66623e200d7', 'width': 108}, {'height': 116, 'url': 'h... | ||
SLM RAG Arena - What are some of the best Sub-5B Models for RAG? | 1 | [removed] | 2025-05-29T23:39:37 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kypp6t | false | null | t3_1kypp6t | /r/LocalLLaMA/comments/1kypp6t/slm_rag_arena_what_are_some_of_the_best_sub5b/ | false | false | default | 1 | null | ||
SLM RAG Arena - What are some of the best Sub-5B Models for RAG? | 1 | [removed] | 2025-05-29T23:38:46 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kypok2 | false | null | t3_1kypok2 | /r/LocalLLaMA/comments/1kypok2/slm_rag_arena_what_are_some_of_the_best_sub5b/ | false | false | default | 1 | null | ||
Why is Mistral Small 3 faster than the Qwen3 30B A3B model? | 0 | I have tested my dataset for latency and concluded that Mistral Small 3 is faster than Qwen3 30B A3B. This was not what I expected. I had expected the Qwen3 30B A3B model to be much faster since it is an A3B MoE model. Public benchmark results also seem to align with this finding. I'm curious to know why this is the ca... | 2025-05-29T23:38:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kypo0g/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | Alone_Ad_6011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kypo0g | false | null | t3_1kypo0g | /r/LocalLLaMA/comments/1kypo0g/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | false | false | self | 0 | null |
SLM RAG Arena | 1 | [deleted] | 2025-05-29T23:37:24 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kypniv | false | null | t3_1kypniv | /r/LocalLLaMA/comments/1kypniv/slm_rag_arena/ | false | false | default | 1 | null | ||
SLM RAG Arena - What are some of the best Sub-5B Models for RAG? | 1 | [removed] | 2025-05-29T23:36:10 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kypmlb | false | null | t3_1kypmlb | /r/LocalLLaMA/comments/1kypmlb/slm_rag_arena_what_are_some_of_the_best_sub5b/ | false | false | default | 1 | null | ||
Noticed Deepseek-R1-0528 mirrors user language in reasoning tokens—interesting! | 95 | Originally, Deepseek-R1's reasoning tokens were only in English by default. Now it adapts to the user's language—pretty cool! | 2025-05-29T23:35:28 | https://www.reddit.com/gallery/1kypm3g | Sparkyu222 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kypm3g | false | null | t3_1kypm3g | /r/LocalLLaMA/comments/1kypm3g/noticed_deepseekr10528_mirrors_user_language_in/ | false | false | 95 | null | |
Why is Mistral Small 3 faster than the Qwen3 30B A3B model? | 1 | [removed] | 2025-05-29T23:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kypjy7/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | Alone_Ad_6011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kypjy7 | false | null | t3_1kypjy7 | /r/LocalLLaMA/comments/1kypjy7/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'h... |
Even Small Reasoners Should Quote Their Sources | 1 | [deleted] | 2025-05-29T23:31:37 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kypj7j | false | null | t3_1kypj7j | /r/LocalLLaMA/comments/1kypj7j/even_small_reasoners_should_quote_their_sources/ | false | false | default | 1 | null | ||
What are some of the best Sub-5B Models for RAG? | 1 | 2025-05-29T23:29:39 | https://huggingface.co/spaces/aizip-dev/SLM-RAG-Arena | unseenmarscai | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kyphml | false | null | t3_1kyphml | /r/LocalLLaMA/comments/1kyphml/what_are_some_of_the_best_sub5b_models_for_rag/ | false | false | 1 | {'enabled': False, 'images': [{'id': '3T2rZ5JEPyEbxb2lh4vzMqmNAiDyv7lVg3dWa-ileyc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=108&crop=smart&auto=webp&s=78bd57a9198a127549f20efee3faa66623e200d7', 'width': 108}, {'height': 116, 'url': 'h... | ||
SLM RAG Arena - What are some of the best Sub-5B Models for RAG? | 1 | [removed] | 2025-05-29T23:28:23 | https://huggingface.co/spaces/aizip-dev/SLM-RAG-Arena | unseenmarscai | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kypgne | false | null | t3_1kypgne | /r/LocalLLaMA/comments/1kypgne/slm_rag_arena_what_are_some_of_the_best_sub5b/ | false | false | 1 | {'enabled': False, 'images': [{'id': '3T2rZ5JEPyEbxb2lh4vzMqmNAiDyv7lVg3dWa-ileyc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=108&crop=smart&auto=webp&s=78bd57a9198a127549f20efee3faa66623e200d7', 'width': 108}, {'height': 116, 'url': 'h... | |
Beginner question about home servers | 1 | I'm guessing I'm not the only one without a tech background to be curious about this.
I use a 5070 12GB vram with 64GB RAM. 70B works on a low quant but slowly.
I saw a comment saying "Get a used ddr3/ddr4 server at the cost of a mid range GPU to run a 235B locally."
You can run LLMs on a ton of system RAM?
Like... | 2025-05-29T23:16:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kyp7le/beginner_question_about_home_servers/ | santovalentino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyp7le | false | null | t3_1kyp7le | /r/LocalLLaMA/comments/1kyp7le/beginner_question_about_home_servers/ | false | false | self | 1 | null |
Local RAG setup for lawyers using Mistral & LangChain – feasibility & hardware feedback? | 1 | [removed] | 2025-05-29T23:02:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kyowrr/local_rag_setup_for_lawyers_using_mistral/ | Kindly_You_6722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyowrr | false | null | t3_1kyowrr | /r/LocalLLaMA/comments/1kyowrr/local_rag_setup_for_lawyers_using_mistral/ | false | false | self | 1 | null |
Portable flashattention kernels | 1 | [removed] | 2025-05-29T22:53:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kyopqi/portable_flashattention_kernels/ | Junior_Feed_2511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyopqi | false | null | t3_1kyopqi | /r/LocalLLaMA/comments/1kyopqi/portable_flashattention_kernels/ | false | false | self | 1 | null |
Could an LLM be split across multiple devices on the same network (provided a multi-gigabit network speed)? Or even across the internet? | 0 | Or would latency and bandwidth limitations make this slow and impractical?
The other day I was imagining a network of compute resource sharing, kind of like bitcoin mining or seeding a torrent. For example, a thousand people pool together their compute resources and run several instances of the most popular and power... | 2025-05-29T22:52:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kyoorp/could_an_llm_be_split_across_multiple_devices_on/ | gigaflops_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyoorp | false | null | t3_1kyoorp | /r/LocalLLaMA/comments/1kyoorp/could_an_llm_be_split_across_multiple_devices_on/ | false | false | self | 0 | null |
Rough observations about the updated Deepseek R1 | 31 | - It has much more patience for some reason. It doesn't mind actually "giving a try" on very hard problems, like, it doesn't look so lazy now.
- Thinks longer and spends a good amount of time on each of its hypothesized thoughts. The previous version had one flaw, at least in my opinion - while its initial thinking... | 2025-05-29T22:41:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kyofth/rough_observations_about_the_updated_deepseek_r1/ | Ryoiki-Tokuiten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyofth | false | null | t3_1kyofth | /r/LocalLLaMA/comments/1kyofth/rough_observations_about_the_updated_deepseek_r1/ | false | false | self | 31 | null |
new gemma3 abliterated models from mlabonne | 70 | [https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-v2-GGUF](https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-v2-GGUF)
[https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-v2-GGUF](https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated-v2-GGUF)
[https://huggingface.co/mlabonne/gemma-3-27b... | 2025-05-29T22:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kyo9df/new_gemma3_abliterated_models_from_mlabonne/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyo9df | false | null | t3_1kyo9df | /r/LocalLLaMA/comments/1kyo9df/new_gemma3_abliterated_models_from_mlabonne/ | false | false | self | 70 | {'enabled': False, 'images': [{'id': 'cH2aoNsbpfq9wXCN1o9O_bHPMc7goy5rmxghk3eMwN0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wwyKmEboVQOANJR1YrtMJt7F_VKUAsAUbRHyoWYTUKI.jpg?width=108&crop=smart&auto=webp&s=81d76621acc14d9150a8adbd8db446d896b0c5bc', 'width': 108}, {'height': 116, 'url': 'h... |
# Why is Mistral Small 3 faster than the Qwen3 30B A3B model? | 1 | [removed] | 2025-05-29T22:22:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kyo10z/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | Alone_Ad_6011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyo10z | false | null | t3_1kyo10z | /r/LocalLLaMA/comments/1kyo10z/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'h... |
DeepSeek is THE REAL OPEN AI | 1,061 | Every release is great. I am only dreaming to run the 671B beast locally. | 2025-05-29T22:19:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kynytt/deepseek_is_the_real_open_ai/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kynytt | false | null | t3_1kynytt | /r/LocalLLaMA/comments/1kynytt/deepseek_is_the_real_open_ai/ | false | false | self | 1,061 | null |
LM Studio Slower with 2 GPUs | 1 | Hello all,
I recently got a second RTX 4090 in order to run larger models. I can now fit and run larger models.
However, I noticed that when I run the smaller models that already fit on a single GPU, I get fewer tokens/second.
I've played with the LM Studio hardware settings by changing the option to... | 2025-05-29T22:05:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kynnf1/lm_studio_slower_with_2_gpus/ | MrVicePres | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kynnf1 | false | null | t3_1kynnf1 | /r/LocalLLaMA/comments/1kynnf1/lm_studio_slower_with_2_gpus/ | false | false | self | 1 | null |
Llama.cpp performance on Z13 (128GB unified) | 6 | Tested models from 8B to 70B on the AMD AI 395+ max chip on a Asus Flow Z13 (128GB unified).
TL;DR:
- Sweet spot for minimum real-time is about ~32B active params which gives ~10TPS.
- ~1/5th the performance of a 5090 for token generation speed (TG), 1/21 perf in prompt processing (PP)
- Silent mode on battery is sig... | 2025-05-29T21:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyn8bv/llamacpp_performance_on_z13_128gb_unified/ | discr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyn8bv | false | null | t3_1kyn8bv | /r/LocalLLaMA/comments/1kyn8bv/llamacpp_performance_on_z13_128gb_unified/ | false | false | self | 6 | null |
Exploring Practical Uses for Small Language Models (e.g., Microsoft Phi) | 3 | Hey Reddit!
I've recently set up a small language model, specifically Microsoft's **Phi-3-mini**, on my modest home server. It's fascinating to see what these compact models can do, and I'm keen to explore more practical applications beyond basic experimentation.
My initial thoughts for its use include:
* **Categori... | 2025-05-29T21:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kyn8bn/exploring_practical_uses_for_small_language/ | amunocis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyn8bn | false | null | t3_1kyn8bn | /r/LocalLLaMA/comments/1kyn8bn/exploring_practical_uses_for_small_language/ | false | false | self | 3 | null |
Tell me about your rig? | 7 | Hey folks! 👋
I’m running a 16GB Raspberry Pi 5 setup with a HaloS HAT and a 1TB SSD. I know it’s a pup compared to the big rigs out there, but I’m all about building something affordable and accessible. 💡
I’ve been able to load several models — even tested up to 9B parameters (though yeah, it gets *sluggish* 😅). T... | 2025-05-29T21:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kymyt4/tell_me_about_you_rig/ | codemusicred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymyt4 | false | null | t3_1kymyt4 | /r/LocalLLaMA/comments/1kymyt4/tell_me_about_you_rig/ | false | false | self | 7 | null |
Qwen finetune from NVIDIA...? | 30 | 2025-05-29T21:35:46 | https://huggingface.co/nvidia/Qwen-2.5-32B-HS3-RM_20250501 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kymxtq | false | null | t3_1kymxtq | /r/LocalLLaMA/comments/1kymxtq/qwen_finetune_from_nvidia/ | false | false | 30 | {'enabled': False, 'images': [{'id': 'pDpdszbQQ6B6pOUgXP9WIqfiP5x2PqtJ-AutagOicKE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7jWrZJp8baLd-q8ftwpfQNg5hnORpIkQUQ5gwiCsnDY.jpg?width=108&crop=smart&auto=webp&s=f490c81a053b53e1b5d7c185a4067ad6bca80873', 'width': 108}, {'height': 116, 'url': 'h... | ||
Looking for RAG + chat system | 1 | [removed] | 2025-05-29T21:35:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kymxee/looking_for_rag_chat_system/ | ScientistSmart5629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymxee | false | null | t3_1kymxee | /r/LocalLLaMA/comments/1kymxee/looking_for_rag_chat_system/ | false | false | self | 1 | null |
Where are r1 5-28 14b and 32B distilled? | 4 | I don't see the models on HuggingFace, maybe they will be out later? | 2025-05-29T21:35:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kymx69/where_are_r1_528_14b_and_32b_distilled/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymx69 | false | null | t3_1kymx69 | /r/LocalLLaMA/comments/1kymx69/where_are_r1_528_14b_and_32b_distilled/ | false | false | self | 4 | null |
Why didn't they call the R1 update R2? | 1 | [removed] | 2025-05-29T21:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kymomb/why_didnt_they_call_the_r1_update_r2/ | Extra-Whereas-9408 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymomb | false | null | t3_1kymomb | /r/LocalLLaMA/comments/1kymomb/why_didnt_they_call_the_r1_update_r2/ | false | false | self | 1 | null |
DeepSeek R1 0528 FP on Mac Studio M3U 512GB | 34 | Using DeepSeek R1 to do a coding project I've been trying to do with O-Mini for a couple of weeks, and DS528 nailed it. It's more up to date.
It's using about 360 GB of RAM, and I'm only getting 10TKS max, but using more experts. I also have full 138K context. Taking me longer and running the studio hotter than ... | 2025-05-29T21:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kymlon/deep_seek_r1_0528_fp_on_mac_studio_m3u_512gb/ | redragtop99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymlon | false | null | t3_1kymlon | /r/LocalLLaMA/comments/1kymlon/deep_seek_r1_0528_fp_on_mac_studio_m3u_512gb/ | false | false | self | 34 | null |
Helping someone build a local continuity LLM for writing and memory—does this setup make sense? | 1 | I’m helping someone close to me set up a local LLM system for creative writing, philosophical thinking, and memory continuity. They’re a writer dealing with mild cognitive challenges and want a private companion to help preserve tone, voice, and longform reasoning over time, especially because these changes are likely ... | 2025-05-29T21:21:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kyml5o/helping_someone_build_a_local_continuity_llm_for/ | larawithoutau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyml5o | false | null | t3_1kyml5o | /r/LocalLLaMA/comments/1kyml5o/helping_someone_build_a_local_continuity_llm_for/ | false | false | self | 1 | null |
DeepSeek-R1-0528-Qwen3-8B on iPhone 16 Pro | 491 | I added the updated DeepSeek-R1-0528-Qwen3-8B with 4bit quant in my app to test it on iPhone. It's running with MLX.
It runs, which is impressive, but it's too slow to be usable: the model thinks for too long and the phone gets really hot. I wonder if 8B models will be usable when the iPhone 17 drops.
That said, I will ... | 2025-05-29T21:10:08 | https://v.redd.it/mb6zoiqtds3f1 | adrgrondin | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kymbcn | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mb6zoiqtds3f1/DASHPlaylist.mpd?a=1751145025%2CMDc3NTk4OTlhM2JlMDQ3MmM0OTEyZjkxNWM2MmYxYmRjNzkyMTNhYmVlZGNlMGFhMGMxZjMxZjcxZTkwN2E0Mg%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/mb6zoiqtds3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kymbcn | /r/LocalLLaMA/comments/1kymbcn/deepseekr10528qwen38b_on_iphone_16_pro/ | false | false | 491 | {'enabled': False, 'images': [{'id': 'NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD.png?width=108&crop=smart&format=pjpg&auto=webp&s=395e4581897f47ce64304e1284797cb2c34b... | |
deepseek-r1 what are the differences | 1 | The subject today is definitively deepseek-r1
It would be appreciated if someone could explain the difference between these on Ollama's site
* deepseek-r1:8b
* deepseek-r1:8b-0528-qwen3-q4\_K\_M
* deepseek-r1:8b-llama-distill-q4\_K\_M
Thanks !
| 2025-05-29T21:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kym5ck/deepseekr1_what_are_the_difference/ | Empty_Object_9299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kym5ck | false | null | t3_1kym5ck | /r/LocalLLaMA/comments/1kym5ck/deepseekr1_what_are_the_difference/ | false | false | self | 1 | null |
Do you struggle to find the right tools to connect to your AI agent? | 1 | [removed] | 2025-05-29T20:48:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kylrwf/do_you_struggle_to_find_the_write_tools_to/ | Apprehensive-Row5364 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kylrwf | false | null | t3_1kylrwf | /r/LocalLLaMA/comments/1kylrwf/do_you_struggle_to_find_the_write_tools_to/ | false | false | self | 1 | null |
Google Edge Gallery | 7 | I've just downloaded and installed Google Edge Gallery. I'm using model Gemma 3n E2B (3.1 GB) and it's pretty interesting to finally have an official Google app to run LLM locally.
I was wondering if anyone could help me in suggesting some use cases. I have no coding background.
| 2025-05-29T20:33:19 | https://github.com/google-ai-edge/gallery | Trick-Point2641 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kyleou | false | null | t3_1kyleou | /r/LocalLLaMA/comments/1kyleou/google_edge_gallery/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'UOkoAwdgytb0vzPhxhPGDxVAmEdB0InDKlmPQY3ayAk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PvPHHttLquwymvsI7n05-J6903Qe0wxum1jUVhrWVUc.jpg?width=108&crop=smart&auto=webp&s=e8c7381c8ac69fb7f3ffed7a99a2a3bccd4df2b4', 'width': 108}, {'height': 108, 'url': 'h... | |
The real treasure of LocalLLaMA? The friends we make along the way. | 1 | [removed] | 2025-05-29T20:28:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kylac2/the_real_treasure_of_localllama_the_friends_we/ | bigattichouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kylac2 | false | null | t3_1kylac2 | /r/LocalLLaMA/comments/1kylac2/the_real_treasure_of_localllama_the_friends_we/ | false | false | self | 1 | null |