| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
[Update] Rensa: added full CMinHash + OptDensMinHash support (fast MinHash in Rust for dataset deduplication / LLM fine-tuning) | 7 | Hey all — quick update on [Rensa](https://github.com/beowolx/rensa), a MinHash library I’ve been building in Rust with Python bindings. It’s focused on speed and works well for deduplicating large text datasets — especially stuff like LLM fine-tuning where near duplicates are a problem.
Originally, I built a custom al... | 2025-05-31T15:33:13 | https://github.com/beowolx/rensa | BeowulfBR | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kzzvzt | false | null | t3_1kzzvzt | /r/LocalLLaMA/comments/1kzzvzt/update_rensa_added_full_cminhash_optdensminhash/ | false | false | 7 | {'enabled': False, 'images': [{'id': '9EoVEpX8dd8gmVZrlAEv4KPn0lD8aSgti45FwF6jKUo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5-lt2FyUCjdvV--0DsoyDNNXFgYjxAoICz1QQO0XTVc.jpg?width=108&crop=smart&auto=webp&s=8574a58256166f65fec102524f76a4e2ee5dfa0e', 'width': 108}, {'height': 108, 'url': 'h... | |
Google lets you run AI models locally | 310 | https://techcrunch.com/2025/05/31/google-quietly-released-an-app-that-lets-you-download-and-run-ai-models-locally/ | 2025-05-31T15:29:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kzzshu/google_lets_you_run_ai_models_locally/ | dnr41418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzzshu | false | null | t3_1kzzshu | /r/LocalLLaMA/comments/1kzzshu/google_lets_you_run_ai_models_locally/ | false | false | self | 310 | {'enabled': False, 'images': [{'id': '_FBQRawtsVnlTLgg9jFSaAELbacVusil3H8bxH8zdWA', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=108&crop=smart&auto=webp&s=9ddd21dcf8ac59bd61fe2319db5ff3b12f11fcdf', 'width': 108}, {'height': 144, 'url': 'h... |
llama 3 capabilities such as periodic tasks? | 1 | [removed] | 2025-05-31T15:27:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kzzqoe/llama_3_capabilities_such_as_periodic_tasks/ | Exotic-Media5762 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzzqoe | false | null | t3_1kzzqoe | /r/LocalLLaMA/comments/1kzzqoe/llama_3_capabilities_such_as_periodic_tasks/ | false | false | self | 1 | null |
Open source iOS app for local AI inference - MIT License | 2 | Run LLMs completely locally on your iOS device. localAI is a native iOS application that enables on-device inference with large language models without requiring an internet connection. Built with Swift and SwiftUI for efficient model inference on Apple Silicon.
Repo [https://github.com/sse-97/localAI-by-sse](https://... | 2025-05-31T15:18:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kzzjpn/open_source_ios_app_for_local_ai_inference_mit/ | CrazySymphonie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzzjpn | false | null | t3_1kzzjpn | /r/LocalLLaMA/comments/1kzzjpn/open_source_ios_app_for_local_ai_inference_mit/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'gtOEr9J87Hzb41mlKbPN5z639UOP3JB1hOUtvISgyws', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z4YaHtt20FiRs6AQSwhpmCk1jyg1DdufjDl_zg5UAPk.jpg?width=108&crop=smart&auto=webp&s=5a5e7ecf4f67db2c0372e00c6e2919e3e2507856', 'width': 108}, {'height': 108, 'url': 'h... |
Use MCP to run computer use in a VM. | 15 | MCP Server with Computer Use Agent runs through Claude Desktop, Cursor, and other MCP clients.
An example use case: let's try using Claude as a tutor to learn how to use Tableau.
The MCP Server implementation exposes CUA's full functionality through standardized tool calls. It supports single-task commands and multi-t... | 2025-05-31T15:04:34 | https://v.redd.it/p51trp5fu44f1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzz7t4 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/p51trp5fu44f1/DASHPlaylist.mpd?a=1751295889%2CMmZhMjRhMmJlMWY3ZmQ4ZTM2NDA1NmI2NGM0ZDM0OTlhNzE0NTI2YmE3YzMyNTRmYzZmNmU1ZDRhZjY0YmUwYg%3D%3D&v=1&f=sd', 'duration': 115, 'fallback_url': 'https://v.redd.it/p51trp5fu44f1/DASH_720.mp4?source=fallback', 'h... | t3_1kzz7t4 | /r/LocalLLaMA/comments/1kzz7t4/use_mcp_to_run_computer_use_in_a_vm/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=108&crop=smart&format=pjpg&auto=webp&s=47e2dc9420359333cdafde9bbec0ab1e172ca... | |
Use MCP to run computer use in a VM. | 1 | [removed] | 2025-05-31T15:01:50 | https://v.redd.it/l5uhpnqxt44f1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzz5l7 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/l5uhpnqxt44f1/DASHPlaylist.mpd?a=1751295725%2CNmU4YzNlMmRmOGMwNzdjYjk4MWUyMDY0MjE4OWVkMDhkYTdjYjU0OTQ4YjViNThhZWM4MzYyNzI3ZjAwODk1Mw%3D%3D&v=1&f=sd', 'duration': 115, 'fallback_url': 'https://v.redd.it/l5uhpnqxt44f1/DASH_720.mp4?source=fallback', 'h... | t3_1kzz5l7 | /r/LocalLLaMA/comments/1kzz5l7/use_mcp_to_run_computer_use_in_a_vm/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=108&crop=smart&format=pjpg&auto=webp&s=bcb3610aecd8a191bb6b7c2cb5fefa1295840... | |
What if I secretly get access to ChatGPT 4o model weights? | 0 | Can I sell the model weights secretly?
Is it possible to open source the model weights?
What is even stopping OpenAI's employees from secretly doing it? | 2025-05-31T14:31:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kzygjb/what_if_i_secretly_get_access_to_chatgpt_4o_model/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzygjb | false | null | t3_1kzygjb | /r/LocalLLaMA/comments/1kzygjb/what_if_i_secretly_get_access_to_chatgpt_4o_model/ | false | false | self | 0 | null |
what if I steal Chatgpt 4o model weights | 1 | [removed] | 2025-05-31T14:27:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kzydam/what_if_i_steal_chatgpt_4o_model_weights/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzydam | false | null | t3_1kzydam | /r/LocalLLaMA/comments/1kzydam/what_if_i_steal_chatgpt_4o_model_weights/ | false | false | self | 1 | null |
A newbie's question about inference and quantization | 0 | Hi there, a newbie here. For a long time I have such question(s) and I'd appreciate someone (especially who worked around llama.cpp/vllm/any other LLM inference engine's related part) could answer:
It's been a long time since NVIDIA GPUs (and other hardware) got support for INT8 inference, then FP8, and more recently ... | 2025-05-31T13:04:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kzwm5p/a_newbies_question_about_inference_and/ | IngenuityNo1411 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzwm5p | false | null | t3_1kzwm5p | /r/LocalLLaMA/comments/1kzwm5p/a_newbies_question_about_inference_and/ | false | false | self | 0 | null |
Building a product management tool designed for the AI era | 2 | Most planning tools were built before AI became part of how we build. Product docs are written in one place, technical tasks live somewhere else, and the IDE where the actual code lives is isolated from both. And most of the time, devs are the ones who have to figure it out when things are unclear.
After running into ... | 2025-05-31T12:56:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kzwgkc/building_a_product_management_tool_designed_for/ | eastwindtoday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzwgkc | false | null | t3_1kzwgkc | /r/LocalLLaMA/comments/1kzwgkc/building_a_product_management_tool_designed_for/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'eKSJQCGEFQzPwasZkKBHRfQaHfxfGSdD-2RaXDc9bYY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/n9yqUBJgILUali99soyGNS0TMHdJqfM7xzhQcN7UIxg.jpg?width=108&crop=smart&auto=webp&s=3b3c930c6a2274ce22def82119bd52f3a2dfa457', 'width': 108}, {'height': 113, 'url': 'h... |
Is there any voice agent framework in JS, or an equivalent of pipecat? Also, is there any avatar alternative of Simli or Taven? | 0 | Trying to research options that are good for creating a voice AI agent, optionally with an avatar. Open-source packages preferred. I found pipecat, but its server is in Python; I'd prefer a JS one if any exists. Also, does anyone know any open-source Simli or Taven alternative I could run?
| 2025-05-31T12:51:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kzwd46/is_there_any_voice_agent_framework_in_js_or/ | gpt872323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzwd46 | false | null | t3_1kzwd46 | /r/LocalLLaMA/comments/1kzwd46/is_there_any_voice_agent_framework_in_js_or/ | false | false | self | 0 | null |
Local vs cloud ai models running and relearn | 1 | [removed] | 2025-05-31T12:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kzwbn3/local_vs_cloud_ai_models_running_and_relearn/ | borisr10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzwbn3 | false | null | t3_1kzwbn3 | /r/LocalLLaMA/comments/1kzwbn3/local_vs_cloud_ai_models_running_and_relearn/ | false | false | self | 1 | null |
Hardware for AI models | 1 | [removed] | 2025-05-31T12:42:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kzw6ud/hardware_for_ai_models/ | borisr10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzw6ud | false | null | t3_1kzw6ud | /r/LocalLLaMA/comments/1kzw6ud/hardware_for_ai_models/ | false | false | self | 1 | null |
AMD Octa-core Ryzen AI Max Pro 385 Processor Spotted On Geekbench: Affordable Strix Halo Chips Are About To Enter The Market | 70 | 2025-05-31T12:41:19 | https://wccftech.com/amd-ryzen-ai-max-pro-385-spotted-on-geekbench/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1kzw65c | false | null | t3_1kzw65c | /r/LocalLLaMA/comments/1kzw65c/amd_octacore_ryzen_ai_max_pro_385_processor/ | false | false | 70 | {'enabled': False, 'images': [{'id': 'UB0xih_7GF6izcL4pkLktCeRW7ESe4LzZtsDYEAGV8w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnCoi_QMP0ucYNmMDpD8YzNjydtxrrZkZROQJhXvr2s.jpg?width=108&crop=smart&auto=webp&s=e3826e7384652a976c8dffa169cacbeb0284bda5', 'width': 108}, {'height': 121, 'url': 'h... | ||
Local LLM folks — how are you defining function specs? | 1 | [removed] | 2025-05-31T12:23:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kzvu9d/local_llm_folks_how_are_you_defining_function/ | FrostyButterscotch77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzvu9d | false | null | t3_1kzvu9d | /r/LocalLLaMA/comments/1kzvu9d/local_llm_folks_how_are_you_defining_function/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'HUR4ZjSsMcPldBF8PlxclI3gg-mjZXBfe4bNavSwrFw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=108&crop=smart&auto=webp&s=4c05659da71aabefa650df1fddb91bdf8888031d', 'width': 108}, {'height': 113, 'url': 'h... |
Demo video of AutoBE, backend vibe coding agent, writing 100% compilation-successful code | 1 | ## AutoBE: Backend Vibe Coding Agent Achieving 100% Compilation Success
- Github Repository: https://github.com/wrtnlabs/autobe
- Playground Website: https://stackblitz.com/github/wrtnlabs/autobe-playground-stackblitz
- Demo Result (Generated backend applications by AutoBE)
- [Bullet-in Board System](https://stackbl... | 2025-05-31T12:07:00 | https://v.redd.it/vz1ddc2uj34f1 | jhnam88 | /r/LocalLLaMA/comments/1kzvj6i/demo_video_of_autobe_backend_vibe_coding_agent/ | 1970-01-01T00:00:00 | 0 | {} | 1kzvj6i | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vz1ddc2uj34f1/DASHPlaylist.mpd?a=1751414824%2CMzM1YzQ1Y2RkZjRkMzg4NzY4YjRjYzE1MzNlZjhkYTI1ZWM0ODNjNzY1NTc5OTdhYzkwYjQyYTQ0MjI5N2M2NA%3D%3D&v=1&f=sd', 'duration': 323, 'fallback_url': 'https://v.redd.it/vz1ddc2uj34f1/DASH_1080.mp4?source=fallback', '... | t3_1kzvj6i | /r/LocalLLaMA/comments/1kzvj6i/demo_video_of_autobe_backend_vibe_coding_agent/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p.png?width=108&crop=smart&format=pjpg&auto=webp&s=0ce324a2367534c80fc992ada86d1573a6530... | |
Now available on Ollama: finance-llama-8b (FP16, Q4) | 1 | Finance-Llama-8B is a fine-tuned Llama 3.1 8B model trained on 500k examples for tasks like QA, reasoning, sentiment, and NER. It supports multi-turn dialogue and is ideal for financial assistants.
https://ollama.com/martain7r/finance-llama-8b
Appreciate any thoughts or suggestions. Thanks! | 2025-05-31T12:02:13 | https://www.reddit.com/r/LocalLLaMA/s/f3DrnBYhaO | martian7r | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kzvfzy | false | null | t3_1kzvfzy | /r/LocalLLaMA/comments/1kzvfzy/now_available_on_ollama_financellama8b_fp16_q4/ | false | false | default | 1 | null |
Finetune embedders | 1 | [removed] | 2025-05-31T11:52:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kzv9sz/finetune_embedders/ | DedeU10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzv9sz | false | null | t3_1kzv9sz | /r/LocalLLaMA/comments/1kzv9sz/finetune_embedders/ | false | false | self | 1 | null |
Surprisingly Fast AI-Generated Kernels We Didn’t Mean to Publish (Yet) | 211 | 2025-05-31T11:41:56 | https://crfm.stanford.edu/2025/05/28/fast-kernels.html | Maxious | crfm.stanford.edu | 1970-01-01T00:00:00 | 0 | {} | 1kzv322 | false | null | t3_1kzv322 | /r/LocalLLaMA/comments/1kzv322/surprisingly_fast_aigenerated_kernels_we_didnt/ | false | false | default | 211 | null | |
Ai Chatbot | 1 | [removed] | 2025-05-31T11:41:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kzv2wc/ai_chatbot/ | Strong_Hurry6781 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzv2wc | false | null | t3_1kzv2wc | /r/LocalLLaMA/comments/1kzv2wc/ai_chatbot/ | false | false | self | 1 | null |
Finetune embedding | 1 | [removed] | 2025-05-31T11:36:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kzuzzx/finetune_embedding/ | DedeU10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzuzzx | false | null | t3_1kzuzzx | /r/LocalLLaMA/comments/1kzuzzx/finetune_embedding/ | false | false | self | 1 | null |
Best General + Coding Model for 3060 12GB | 1 | [removed] | 2025-05-31T10:49:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kzu848/best_general_coding_model_for_3060_12gb/ | DisgustingBlackChimp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzu848 | false | null | t3_1kzu848 | /r/LocalLLaMA/comments/1kzu848/best_general_coding_model_for_3060_12gb/ | false | false | self | 1 | null |
Perplexity Pro 1 Year Subscription $10 | 1 | [removed] | 2025-05-31T10:28:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kztx7k/perplexity_pro_1_year_subscription_10/ | Mae8tro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kztx7k | false | null | t3_1kztx7k | /r/LocalLLaMA/comments/1kztx7k/perplexity_pro_1_year_subscription_10/ | false | false | self | 1 | null |
Nemotron Ultra 235B - how to turn thinking/reasoning off? | 4 | Hi,
I have an M3 Ultra with 88GB VRAM available and I was wondering, how useful a low quant of Nemotron Ultra was. I downloaded UD-IQ2\_XXS from unsloth and I loaded it with koboldcpp with 32k context window just fine. With no context and a simple prompt it generates at 4 to 5 t/s. I just want to try a few one-shots a... | 2025-05-31T10:26:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kztvsv/nemotron_ultra_235b_how_to_turn_thinkingreasoning/ | doc-acula | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kztvsv | false | null | t3_1kztvsv | /r/LocalLLaMA/comments/1kztvsv/nemotron_ultra_235b_how_to_turn_thinkingreasoning/ | false | false | self | 4 | null |
Which is the best coding AI model to choose in LM Studio? | 1 | [removed] | 2025-05-31T10:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kztq69/which_is_the_best_coding_ai_model_to_choose_in_lm/ | rakarnov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kztq69 | false | null | t3_1kztq69 | /r/LocalLLaMA/comments/1kztq69/which_is_the_best_coding_ai_model_to_choose_in_lm/ | false | false | self | 1 | null |
How are Intel GPUs for local models? | 25 | Say a B580 plus a Ryzen CPU and lots of RAM.
Does anyone have experience with this, and what are your thoughts, especially on Linux (say, Fedora)?
I hope this makes sense; I'm a bit out of my depth. | 2025-05-31T10:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kztjgp/how_are_intel_gpus_for_local_models/ | Unusual_Pride_6480 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kztjgp | false | null | t3_1kztjgp | /r/LocalLLaMA/comments/1kztjgp/how_are_intel_gpus_for_local_models/ | false | false | self | 25 | null |
Can current LLMs solve even basic cryptography problems after fine tuning? | 1 | [removed] | 2025-05-31T09:26:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kzt049/can_current_llms_solve_even_basic_cryptography/ | Chemical-Luck492 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzt049 | false | null | t3_1kzt049 | /r/LocalLLaMA/comments/1kzt049/can_current_llms_solve_even_basic_cryptography/ | false | false | self | 1 | null |
Can LLMs solve even easy cryptographic problems after fine tuning? | 1 | [removed] | 2025-05-31T09:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kzstzn/can_llms_solve_even_easy_cryptographic_problems/ | Chemical-Luck492 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzstzn | false | null | t3_1kzstzn | /r/LocalLLaMA/comments/1kzstzn/can_llms_solve_even_easy_cryptographic_problems/ | false | false | self | 1 | null |
Built a production-grade Discord bot ecosystem with 11 microservices using local LLaMA 3 8B - zero API costs, complete privacy | 1 | [removed] | 2025-05-31T09:12:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kzssrh/built_a_productiongrade_discord_bot_ecosystem/ | Dape25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzssrh | false | null | t3_1kzssrh | /r/LocalLLaMA/comments/1kzssrh/built_a_productiongrade_discord_bot_ecosystem/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dIEpZd2FHBnm_A-lIR20mUL9bH77JjXfHKgMd-tG5RE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-BPcV8qdfuNSnsMJyE6k9THOj8s7IPgJIjotBMEb-BM.jpg?width=108&crop=smart&auto=webp&s=826e6ba13fc4d396c29beff7a2316d5c9eed5db0', 'width': 108}, {'height': 108, 'url': 'h... |
Will LLMs be able to solve even easy cryptographic problems after fine tuning? | 1 | [removed] | 2025-05-31T09:11:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kzsspk/will_llms_be_able_to_solve_even_easy/ | Chemical-Luck492 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzsspk | false | null | t3_1kzsspk | /r/LocalLLaMA/comments/1kzsspk/will_llms_be_able_to_solve_even_easy/ | false | false | self | 1 | null |
New idea for benchmark: Code completion predictions | 1 | [removed] | 2025-05-31T08:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kzsgin/new_idea_for_benchmark_code_completion_predictions/ | robertpiosik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzsgin | false | null | t3_1kzsgin | /r/LocalLLaMA/comments/1kzsgin/new_idea_for_benchmark_code_completion_predictions/ | false | false | self | 1 | null |
China is leading open source | 2,192 | 2025-05-31T08:35:25 | TheLogiqueViper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzsa70 | false | null | t3_1kzsa70 | /r/LocalLLaMA/comments/1kzsa70/china_is_leading_open_source/ | false | false | 2,192 | {'enabled': True, 'images': [{'id': 'u2HjiiRhgPI4n-eKfxpJhgGH_d7eS7-G3hJNkYjVYAI', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/6stw9ivzw24f1.jpeg?width=108&crop=smart&auto=webp&s=a796b5ab3d4fddcdeb8c23008babbaa2e89ed48f', 'width': 108}, {'height': 336, 'url': 'https://preview.redd.it/6stw9ivzw24f1.j... | |||
GPU-enabled Llama 3 inference in Java from scratch | 40 | 2025-05-31T08:05:09 | https://github.com/beehive-lab/GPULlama3.java | mikebmx1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kzrujd | false | null | t3_1kzrujd | /r/LocalLLaMA/comments/1kzrujd/gpuenabled_llama_3_inference_in_java_from_scratch/ | false | false | 40 | {'enabled': False, 'images': [{'id': 's8r4emDVh9Zk49kXpwkON8lrt_dBcy2Cn8d-PwX03F8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=108&crop=smart&auto=webp&s=051f1fd29e83b03877204c7a61585a887b6f6d4c', 'width': 108}, {'height': 108, 'url': 'h... | ||
GPULlama3.java --- 𝗚𝗣𝗨-𝗲𝗻𝗮𝗯𝗹𝗲𝗱 𝗝𝗮𝘃𝗮 𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗲𝗻𝗴𝗶𝗻𝗲 𝗽𝗼𝘄𝗲𝗿𝗲𝗱 𝗯𝘆 𝗧𝗼𝗿𝗻𝗮𝗱𝗼𝗩𝗠 | 0 | 2025-05-31T08:01:37 | https://github.com/beehive-lab/GPULlama3.java | mikebmx1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kzrsn9 | false | null | t3_1kzrsn9 | /r/LocalLLaMA/comments/1kzrsn9/gpullama3java_𝗚𝗣𝗨𝗲𝗻𝗮𝗯𝗹𝗲𝗱_𝗝𝗮𝘃𝗮_𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲_𝗲𝗻𝗴𝗶𝗻𝗲/ | false | false | 0 | {'enabled': False, 'images': [{'id': 's8r4emDVh9Zk49kXpwkON8lrt_dBcy2Cn8d-PwX03F8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=108&crop=smart&auto=webp&s=051f1fd29e83b03877204c7a61585a887b6f6d4c', 'width': 108}, {'height': 108, 'url': 'h... | ||
Getting sick of companies cherry picking their benchmarks when they release a new model | 110 | I get why they do it. They need to hype up their thing, etc. But c'mon, a bit of academic integrity would go a long way. Every new model comes with the claim that it outcompetes older models that are 10x their size, etc. Like, no. Maybe I'm an old man shaking my fist at clouds here, I don't know. | 2025-05-31T07:36:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kzrfop/getting_sick_of_companies_cherry_picking_their/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzrfop | false | null | t3_1kzrfop | /r/LocalLLaMA/comments/1kzrfop/getting_sick_of_companies_cherry_picking_their/ | false | false | self | 110 | null |
Do you think we'll get the r1 distill for the other qwen3 models? | 8 | It's been quite a few days now and I'm losing hope. I don't remember how long it took last time, though. | 2025-05-31T07:29:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kzrbuv/do_you_think_well_get_the_r1_distill_for_the/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzrbuv | false | null | t3_1kzrbuv | /r/LocalLLaMA/comments/1kzrbuv/do_you_think_well_get_the_r1_distill_for_the/ | false | false | self | 8 | null |
Installed CUDA drivers for GPU but Ollama still runs 100% on CPU only; I don't know what to do, can anyone help | 0 | CUDA drivers are also showing in the terminal, but I'm still not able to GPU-accelerate LLMs like deepseek-r1 | 2025-05-31T06:56:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kzqu0q/installed_cuda_drivers_for_gpu_but_still_ollama/ | bhagwano-ka-bhagwan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzqu0q | false | null | t3_1kzqu0q | /r/LocalLLaMA/comments/1kzqu0q/installed_cuda_drivers_for_gpu_but_still_ollama/ | false | false | self | 0 | null |
Automated LinkedIn content generation with the help of this community. Thank you everyone!! | 1 | [removed] | 2025-05-31T06:28:58 | https://v.redd.it/nx4cd68u924f1 | Competitive-Wing1585 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzqez1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nx4cd68u924f1/DASHPlaylist.mpd?a=1751264951%2CZjE3ZmJkYTE0M2I5NWJkODA1MmQyNDBmNmFlZDUxNmYzZDYzZTljM2VjMzcxMjM2NzcwNDYzMmU2ZDA0MWI3OA%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/nx4cd68u924f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kzqez1 | /r/LocalLLaMA/comments/1kzqez1/automated_linkedin_content_generation_with_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=e33a6518dc1b2763ef57cdbd826bb594ea737... | |
M3 Ultra Binned (256GB, 60-Core) vs Unbinned (512GB, 80-Core) MLX Performance Comparison | 101 | Hey everyone,
I recently decided to invest in an M3 Ultra model for running LLMs, and after a *lot* of deliberation, I wanted to share some results that might help others in the same boat.
One of my biggest questions was the actual performance difference between the binned and unbinned M3 Ultra models. It's pretty mu... | 2025-05-31T06:09:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kzq4fp/m3_ultra_binned_256gb_60core_vs_unbinned_512gb/ | cryingneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzq4fp | false | null | t3_1kzq4fp | /r/LocalLLaMA/comments/1kzq4fp/m3_ultra_binned_256gb_60core_vs_unbinned_512gb/ | false | false | self | 101 | null |
Need help finding the right LLM | 1 | [removed] | 2025-05-31T05:47:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kzprtq/need_help_finding_the_right_llm/ | Routine-Carrot76 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzprtq | false | null | t3_1kzprtq | /r/LocalLLaMA/comments/1kzprtq/need_help_finding_the_right_llm/ | false | false | self | 1 | null |
How many users can an M4 Pro support? | 9 | Thinking of an all-the-bells-and-whistles M4 Pro unless there's a better option for the price. Not a super critical workload, but they don't want it to just take a crap all the time from hardware issues either.
I am looking to implement some locally hosted AI workflows for a smaller company that deals with some more sen... | 2025-05-31T04:00:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kznz2t/how_many_users_can_an_m4_pro_support/ | Cold_Sail_9727 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kznz2t | false | null | t3_1kznz2t | /r/LocalLLaMA/comments/1kznz2t/how_many_users_can_an_m4_pro_support/ | false | false | self | 9 | null |
Now with 8GB VRAM. Worth upgrading, for texts only and answering only based on my documents? | 1 | [removed] | 2025-05-31T03:19:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kzn8pb/now_with_8gb_vram_worth_upgrading_for_texts_only/ | Relevant-Bet-7916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzn8pb | false | null | t3_1kzn8pb | /r/LocalLLaMA/comments/1kzn8pb/now_with_8gb_vram_worth_upgrading_for_texts_only/ | false | false | self | 1 | null |
Running Deepseek R1 0528 q4_K_M and mlx 4-bit on a Mac Studio M3 | 70 | First- this model has a shockingly small KV Cache. If any of you saw my [post about running Deepseek V3 q4\_K\_M](https://www.reddit.com/r/LocalLLaMA/comments/1jke5wg/m3_ultra_mac_studio_512gb_prompt_and_write_speeds/), you'd have seen that the KV cache buffer in llama.cpp/koboldcpp was 157GB for 32k of context. I expe... | 2025-05-31T03:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kzn4ix/running_deepseek_r1_0528_q4_k_m_and_mlx_4bit_on_a/ | SomeOddCodeGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzn4ix | false | null | t3_1kzn4ix | /r/LocalLLaMA/comments/1kzn4ix/running_deepseek_r1_0528_q4_k_m_and_mlx_4bit_on_a/ | false | false | self | 70 | null |
Q3 is absolute garbage, but we always use q4, is it good? | 0 | Especially for reasoning into a JSON format (real-world facts, like how a country would react in a situation), do you think it's worth testing an 8B at Q6? Or will a 14B at Q4 always be better?
Thank you for the local llamas that you keep in my dreams | 2025-05-31T02:55:24 | https://www.reddit.com/r/LocalLLaMA/comments/1kzmt56/q3_is_absolute_garbage_but_we_always_use_q4_is_it/ | Osama_Saba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzmt56 | false | null | t3_1kzmt56 | /r/LocalLLaMA/comments/1kzmt56/q3_is_absolute_garbage_but_we_always_use_q4_is_it/ | false | false | self | 0 | null |
Deepseek-r1-0528-qwen3-8b rating justified? | 2 | Hello | 2025-05-31T02:35:01 | ready_to_fuck_yeahh | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzmfum | false | null | t3_1kzmfum | /r/LocalLLaMA/comments/1kzmfum/deepseekr10528qwen38b_rating_justified/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'mt1fOg0Y2iaomVCoGbd5T3pixD7mOJ1CMCq2dMVRU0U', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/jypwbwdm414f1.png?width=108&crop=smart&auto=webp&s=a64dfe6e82138f80c2752638b1970e217ac21816', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/jypwbwdm414f1.png... | ||
Local Agent AI for Spreadsheet Manipulation (Non-Coder Friendly)? | 6 | Hey everyone! I’m reaching out because I’m trying to find the best way to use a local agent to manipulate spreadsheet documents, but I’m not a coder. I need something with a GUI (graphical user interface) if possible—BIG positive for me—but I’m not entirely against CLI if it’s the only/best way to get the job done.
... | 2025-05-31T02:30:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kzmcqh/local_agent_ai_for_spreadsheet_manipulation/ | National_Meeting_749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzmcqh | false | null | t3_1kzmcqh | /r/LocalLLaMA/comments/1kzmcqh/local_agent_ai_for_spreadsheet_manipulation/ | false | false | self | 6 | null |
Gemma3 on Ollama | 1 | [removed] | 2025-05-31T02:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kzma35/gemma3_on_ollama/ | Living-Purpose-8428 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzma35 | false | null | t3_1kzma35 | /r/LocalLLaMA/comments/1kzma35/gemma3_on_ollama/ | false | false | self | 1 | null |
Tips for running a local RAG and llm? | 3 | With the help of ChatGPT I stood up a local instance of llama3:instruct on my PC and used Chroma to create a vector database of my TTRPG game system. I broke the documents into 21 txt files: core rules, game masters guide, and then some subsystems like game modes are bigger text files with maybe a couple hundred pages ... | 2025-05-31T02:06:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kzlwtl/tips_for_running_a_local_rag_and_llm/ | mccoypauley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzlwtl | false | null | t3_1kzlwtl | /r/LocalLLaMA/comments/1kzlwtl/tips_for_running_a_local_rag_and_llm/ | false | false | self | 3 | null |
The OpenRouter-hosted Deepseek R1-0528 sometimes generates typos. | 11 | I'm testing the DS R1-0528 on Roo Code. So far, it's impressive in its ability to effectively tackle the requested tasks.
However, the code it generates via OpenRouter often includes stray Chinese characters in the middle of variable or function names (e.g. 'ProjectInfo' becomes 'Project极Info'). This caus... | 2025-05-31T01:56:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kzlps2/the_openrouterhosted_deepseek_r10528_sometimes/ | ExcuseAccomplished97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzlps2 | false | null | t3_1kzlps2 | /r/LocalLLaMA/comments/1kzlps2/the_openrouterhosted_deepseek_r10528_sometimes/ | false | false | self | 11 | null |
I built a game to test if humans can still tell AI apart -- and which models are best at blending in. I just added the new version of Deepseek | 1 | [removed] | 2025-05-31T01:51:33 | No-Device-6554 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzlmof | false | null | t3_1kzlmof | /r/LocalLLaMA/comments/1kzlmof/i_built_a_game_to_test_if_humans_can_still_tell/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'BPJ4S1pZ1CVOwr9h0q8YKzmyBZLLtNGEdBWnNtvZgn4', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/2ltst3zvw04f1.png?width=108&crop=smart&auto=webp&s=c6d6ebc9cc6855310e43336a216f0f4df2cc6c22', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/2ltst3zvw04f1.pn... | ||
Best General + Coding Model for 3060 12GB | 1 | [removed] | 2025-05-31T01:47:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kzljot/best_general_coding_model_for_3060_12gb/ | DisgustingBlackChimp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzljot | false | null | t3_1kzljot | /r/LocalLLaMA/comments/1kzljot/best_general_coding_model_for_3060_12gb/ | false | false | self | 1 | null |
Best General + Coding Model for 3060 12GB | 1 | [removed] | 2025-05-31T01:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kzlczo/best_general_coding_model_for_3060_12gb/ | DisgustingBlackChimp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzlczo | false | null | t3_1kzlczo | /r/LocalLLaMA/comments/1kzlczo/best_general_coding_model_for_3060_12gb/ | false | false | self | 1 | null |
Unlimited Speech to Speech using Moonshine and Kokoro, 100% local, 100% open source | 171 | 2025-05-31T01:34:33 | https://rhulha.github.io/Speech2Speech/ | paranoidray | rhulha.github.io | 1970-01-01T00:00:00 | 0 | {} | 1kzlb8g | false | null | t3_1kzlb8g | /r/LocalLLaMA/comments/1kzlb8g/unlimited_speech_to_speech_using_moonshine_and/ | false | false | default | 171 | null | |
How much VRAM is needed to fine-tune DeepSeek R1 locally? And what is the most practical setup for that? | 7 | I know it takes more VRAM to fine-tune than to run inference, but how much more exactly?
I’m thinking of using an M3 Ultra cluster for this task, because NVIDIA GPUs are too expensive to reach enough VRAM. What do you think?
| 2025-05-31T00:49:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kzkfjv/how_much_vram_is_needed_to_fine_tune_deepseek_r1/ | SpecialistPear755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzkfjv | false | null | t3_1kzkfjv | /r/LocalLLaMA/comments/1kzkfjv/how_much_vram_is_needed_to_fine_tune_deepseek_r1/ | false | false | self | 7 | null |
all models sux | 0 | ERROR: type should be string, got "\n\nhttps://preview.redd.it/yyuw5fm4l04f1.png?width=1159&format=png&auto=webp&s=acb8c4ab4da03f1b1d08075ce89ada009aca5968\n\nhttps://preview.redd.it/as3wfqm8l04f1.png?width=1269&format=png&auto=webp&s=b1529e1a6a3a6f6424a912c2eb20de301322d1d2\n\nhttps://preview.redd.it/jc11zyucl04f1.png?width=945&format=png&auto=webp&s=9a0fcec44a954cc96a660b6c54d89ca822fbaf2a\n\nhttps://preview.redd.it/pe7jeehhl04f1.png?width=967&format=png&auto=webp&s=36e5a9bdac47be16da697517631df9ccf5426617\n\n\n\nhttps://preview.redd.it/lkk4mxuil04f1.png?width=949&format=png&auto=webp&s=9cee7d373431cdad26264f96aaf185bfc3580acc\n\nI will attach the redacted Gemini 2.5 Pro Preview log below if anyone wants to read it (it's still very long and somewhat repetitive; Claude's analysis is decent, it still misses some things, but it's verbose enough as it is)" | 2025-05-31T00:48:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kzkf8f/all_models_sux/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzkf8f | false | null | t3_1kzkf8f | /r/LocalLLaMA/comments/1kzkf8f/all_models_sux/ | false | false | 0 | null |
I built an API that allows users to create custom text classification models with their own data. Feedback appreciated! | 1 | [removed] | 2025-05-31T00:35:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kzk5ja/i_built_an_api_that_allows_users_to_create_custom/ | textclf-founder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzk5ja | false | null | t3_1kzk5ja | /r/LocalLLaMA/comments/1kzk5ja/i_built_an_api_that_allows_users_to_create_custom/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NQpxjfjKIYyl5eJv8XnmPfcsU-K8wiSJyWnR6IVp7Tc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pKjne25gxV3LV4JFpKC_4IIoG0wz6gw_IJ2AUKwL6O4.jpg?width=108&crop=smart&auto=webp&s=91cd9b8b7a69f60b2746f7f65e7b6e72534c7b11', 'width': 108}], 'source': {'height': 17... |
dsf | 1 | [removed] | 2025-05-31T00:22:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kzjwav/dsf/ | Consistent-Disk-7282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzjwav | false | null | t3_1kzjwav | /r/LocalLLaMA/comments/1kzjwav/dsf/ | false | false | self | 1 | null |
Ollama 0.9.0 Supports ability to enable or disable thinking | 39 | 2025-05-31T00:02:35 | https://github.com/ollama/ollama/releases/tag/v0.9.0 | mj3815 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kzjhfd | false | null | t3_1kzjhfd | /r/LocalLLaMA/comments/1kzjhfd/ollama_090_supports_ability_to_enable_or_disable/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'KO2NS68Y-sP4xVfFlL6FAkHrwFPMcmsCOrZNS9u62DU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/siebatzGRswDgDaN-1zNrvtsYo1Ar9xfV07jYfmXMSI.jpg?width=108&crop=smart&auto=webp&s=b31f45065df059f141d9c889d4e01bcf1bbe8a29', 'width': 108}, {'height': 108, 'url': 'h... | ||
Built an open source desktop app to easily play with local LLMs and MCP | 59 | Tome is an open source desktop app for Windows or macOS that lets you chat with an MCP-powered model without having to fuss with Docker, npm, uvx or JSON config files. Install the app, connect it to a local or remote LLM, one-click install some MCP servers and chat away.
GitHub link here: [https://github.com/runebooka... | 2025-05-30T23:56:00 | WalrusVegetable4506 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzjcdf | false | null | t3_1kzjcdf | /r/LocalLLaMA/comments/1kzjcdf/built_an_open_source_desktop_app_to_easily_play/ | false | false | 59 | {'enabled': True, 'images': [{'id': 'LolxZjRP8DsiYfmBKauEUF25pSZJZ2xQo0cjH0doRFk', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/i4tcl9p5c04f1.png?width=108&crop=smart&auto=webp&s=0071e83841240602463b7f0a35056d481f86cdef', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/i4tcl9p5c04f1.png... | ||
Tome - free & open source desktop app to let anyone play with LLMs and MCP | 1 | [removed] | 2025-05-30T23:51:35 | https://v.redd.it/zu8pqzn1a04f1 | WalrusVegetable4506 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzj90t | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/zu8pqzn1a04f1/DASHPlaylist.mpd?a=1751241109%2CNzJhZTg4ODgzMTU3YmRlMmM5MGNjNGUxN2EwMjYwOTlmYTIwNjBmOWFlNzdmNzM0NDFlY2I3YTI0MWQwYTkyZA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/zu8pqzn1a04f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kzj90t | /r/LocalLLaMA/comments/1kzj90t/tome_free_open_source_desktop_app_to_let_anyone/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Yjd2cXl5bjFhMDRmMfaU3--tb9m3cR1G2Cg-HWZ1hBRIVfiVJjBkqZyunRXf', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Yjd2cXl5bjFhMDRmMfaU3--tb9m3cR1G2Cg-HWZ1hBRIVfiVJjBkqZyunRXf.png?width=108&crop=smart&format=pjpg&auto=webp&s=f87d4c6a8b1981e52c27d01cccb49b6f1340c... | |
The Machine Starting Singing | Keith Soyka | 1 | [removed] | 2025-05-30T23:48:34 | https://www.linkedin.com/posts/keith-soyka-4411338844keith-soyka-866421354_the-machine-starting-singing-activity-7334361113674891264-gqYT?utm_source=social_share_send&utm_medium=android_app&rcm=ACoAAFhW3Y4Bm5du0Jn23ZJQWWzFPx94jBGZaaU&utm_campaign=copy_link | gestaltview | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 1kzj6nz | false | null | t3_1kzj6nz | /r/LocalLLaMA/comments/1kzj6nz/the_machine_starting_singing_keith_soyka/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'CjbMbFq2MEqSKWpNCjv-ipCLADmRBQ1ZCG3w2yy71f0', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/uDsZzeCMBZ8kk48S8s7dHHriA6QG6DmnLKcj2rFLdkQ.jpg?width=108&crop=smart&auto=webp&s=f9bafd7ce5cdc8400b3dec1967a6f34a28315c6b', 'width': 108}, {'height': 123, 'url': 'h... | |
AI AGENT | 0 | I’m currently building an AI agent in Python using Mistral 7B and the ElevenLabs API for my text-to-speech. The model's purpose is to gather information from callers and direct them to the relevant departments, or log a ticket based on the information it receives. I use a Telegram bot to test the model through voice notes... | 2025-05-30T23:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kzizjz/ai_agent/ | MOTHEOXO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzizjz | false | null | t3_1kzizjz | /r/LocalLLaMA/comments/1kzizjz/ai_agent/ | false | false | self | 0 | null |
Gemma-Omni. Did somebody get it up and running? Conversational | 1 | [removed] | 2025-05-30T23:22:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kzimx5/gemmaomni_did_somebody_get_it_up_and_running/ | Consistent-Disk-7282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzimx5 | false | null | t3_1kzimx5 | /r/LocalLLaMA/comments/1kzimx5/gemmaomni_did_somebody_get_it_up_and_running/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oSGTtjHTR-N_4v67xkWDTytqo2JkRJyhlOq_IT9ucJo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8SmAxGhIQPYbKQ360sskPqKhJl5vPSWEfB2CyOiyRq8.jpg?width=108&crop=smart&auto=webp&s=e111436b6ae391ef710d78a1ad44fba3b41d2017', 'width': 108}, {'height': 116, 'url': 'h... |
The Next Job Interview? How Soon? | 1 | [removed] | 2025-05-30T23:13:14 | brucespector | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kzif6y | false | null | t3_1kzif6y | /r/LocalLLaMA/comments/1kzif6y/the_next_job_interview_how_soon/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'mshSh2JQ8fj4HTbobCSDlZGVCWESg1zrg25OL7qFd_c', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/xwk76v9p404f1.jpeg?width=108&crop=smart&auto=webp&s=0e431844a2f998c5469ac3b7366bdcce88cf8feb', 'width': 108}, {'height': 337, 'url': 'https://preview.redd.it/xwk76v9p404f1.j... | ||
[Tool] DeepFinder – Spotlight search that lets a local LLaMA model build your keyword list and rank results | 1 | [removed] | 2025-05-30T22:54:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kzi09x/tool_deepfinder_spotlight_search_that_lets_a/ | MarkVoenixAlexander | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzi09x | false | null | t3_1kzi09x | /r/LocalLLaMA/comments/1kzi09x/tool_deepfinder_spotlight_search_that_lets_a/ | false | false | self | 1 | null |
[Tool] DeepFinder – Spotlight search that lets a local LLaMA model build your keyword list and rank results | 1 | [removed] | 2025-05-30T22:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kzhzou/tool_deepfinder_spotlight_search_that_lets_a/ | MarkVoenixAlexander | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzhzou | false | null | t3_1kzhzou | /r/LocalLLaMA/comments/1kzhzou/tool_deepfinder_spotlight_search_that_lets_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'w9MKKAOHZf7mWXUlhqUD_2CDzW9g4qasylzXQMnpoUE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NpkTtgsAJks5vSY93NfEuC5cM1F8pZi25I1DboYiNac.jpg?width=108&crop=smart&auto=webp&s=b010998bc8b207102ab79ef246f16fca5ef3f579', 'width': 108}, {'height': 108, 'url': 'h... |
Any custom prompts to make Gemini/Deepseek output short & precise like GPT-4-Turbo? | 4 | I use Gemini / DS / GPT depending on what task I'm doing, and have been noticing that Gemini & DS always give very long answers; in comparison, the GPT-4 family of models often gives short and precise answers.
I also noticed that GPT-4's answers, despite being short, feel more related to what I asked. While Gemin... | 2025-05-30T22:25:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kzhctk/any_custom_prompts_to_make_geminideepseek_output/ | Rxunique | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzhctk | false | null | t3_1kzhctk | /r/LocalLLaMA/comments/1kzhctk/any_custom_prompts_to_make_geminideepseek_output/ | false | false | self | 4 | null |
My Coding Agent Ran DeepSeek-R1-0528 on a Rust Codebase for 47 Minutes (Opus 4 Did It in 18): Worth the Wait? | 1 | [removed] | 2025-05-30T21:34:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kzg6g0/my_coding_agent_ran_deepseekr10528_on_a_rust/ | West-Chocolate2977 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzg6g0 | false | null | t3_1kzg6g0 | /r/LocalLLaMA/comments/1kzg6g0/my_coding_agent_ran_deepseekr10528_on_a_rust/ | false | false | self | 1 | null |
I built an open-source VRAM Calculator inside Hugging Face | 1 | It's a Chrome extension that sits inside the Hugging Face website. It auto-loads model specs into the calculation. [Link to the extension](https://chromewebstore.google.com/detail/hugging-face-vram-calcula/bioohacjdieeliinbpocpdhpdapfkhal?authuser=0&hl=en-GB).
\> To test it, install the extension (no registration/key ... | 2025-05-30T21:34:14 | https://v.redd.it/14n787wwmz3f1 | Cool-Maintenance8594 | /r/LocalLLaMA/comments/1kzg64p/i_built_an_opensource_vram_calculator_inside/ | 1970-01-01T00:00:00 | 0 | {} | 1kzg64p | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/14n787wwmz3f1/DASHPlaylist.mpd?a=1751362458%2CMDRmYzJhYjMzNTkzYzVjNzEzM2JiOTgyYWYxM2RlMTcyZTg1YmNmMWJmZGI3ZThjNTE1NzU4N2JmMzdiZWFlNA%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/14n787wwmz3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kzg64p | /r/LocalLLaMA/comments/1kzg64p/i_built_an_opensource_vram_calculator_inside/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cXptZGw5d3dtejNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cXptZGw5d3dtejNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=108&crop=smart&format=pjpg&auto=webp&s=30272b4260f8055da821ccf04dfb8991f8691... | |
Too Afraid to Ask: Why don't LoRAs exist for LLMs? | 46 | Image generation models generally allow for the use of LoRAs which -- for those who may not know -- is essentially adding some weight to a model that is honed in on a certain thing (this can be art styles, objects, specific characters, etc) that make the model much better at producing images with that style/object/char... | 2025-05-30T21:31:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kzg3yv/too_afraid_to_ask_why_dont_loras_exist_for_llms/ | Saguna_Brahman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzg3yv | false | null | t3_1kzg3yv | /r/LocalLLaMA/comments/1kzg3yv/too_afraid_to_ask_why_dont_loras_exist_for_llms/ | false | false | self | 46 | null |
DeepSeek R1 - 0528 System Prompt leak | 1 | [removed] | 2025-05-30T21:27:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kzg0g7/deepseek_r1_0528_system_prompt_leak/ | exocija2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzg0g7 | false | null | t3_1kzg0g7 | /r/LocalLLaMA/comments/1kzg0g7/deepseek_r1_0528_system_prompt_leak/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-j9bRzTHi-9J0ZIEPmn5-NwbXiUsigqjM4z7gCHF5Fg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/9SDiLH6PWIS8YQPxWqbV9HAGRovQ8zb8wpEXMdWHPYE.jpg?width=108&crop=smart&auto=webp&s=380485ee2165322e202a824d283ab931208e6eca', 'width': 108}, {'height': 113, 'url': 'h... |
I built an open-source VRAM Calculator inside Hugging Face | 1 | It's a Chrome extension that sits inside the Hugging Face website. It auto-loads model specs into the calculation. [Link to the extension](https://chromewebstore.google.com/detail/hugging-face-vram-calcula/bioohacjdieeliinbpocpdhpdapfkhal?authuser=0&hl=en-GB).
\> To test it, install the extension (no registration/key ... | 2025-05-30T21:22:55 | https://v.redd.it/5hm166sykz3f1 | Cool-Maintenance8594 | /r/LocalLLaMA/comments/1kzfwfb/i_built_an_opensource_vram_calculator_inside/ | 1970-01-01T00:00:00 | 0 | {} | 1kzfwfb | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5hm166sykz3f1/DASHPlaylist.mpd?a=1751361780%2CMDE0NzgyNzQzMGRkZDZjYWFmMjFhNzQxMjA4YTlhZjBhOTg5YzJhZTY2YjIyYmM0MGY0ZDZkOWQ4YzViM2YwYg%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/5hm166sykz3f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kzfwfb | /r/LocalLLaMA/comments/1kzfwfb/i_built_an_opensource_vram_calculator_inside/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aWkwdmg1c3lrejNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aWkwdmg1c3lrejNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=108&crop=smart&format=pjpg&auto=webp&s=6d5874da55cafce81b31cc23f5bf145148f7a... | |
ubergarm/DeepSeek-R1-0528-GGUF | 100 | Hey y'all, just cooked up some ik_llama.cpp-exclusive quants for the recently updated DeepSeek-R1-0528 671B. New recipes are looking pretty good (lower perplexity is "better"):
* `DeepSeek-R1-0528-Q8_0` 666GiB
- `Final estimate: PPL = 3.2130 +/- 0.01698`
- I didn't upload this, it is for baseline reference only.
*... | 2025-05-30T21:17:00 | https://huggingface.co/ubergarm/DeepSeek-R1-0528-GGUF | VoidAlchemy | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kzfrdt | false | null | t3_1kzfrdt | /r/LocalLLaMA/comments/1kzfrdt/ubergarmdeepseekr10528gguf/ | false | false | 100 | {'enabled': False, 'images': [{'id': '3ISj42OzoDxdhD3QSqKLiYSmDYteg9Mijqy6MGtDQLc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_ie2E-L6KKHmnErLSoR3DbJuxwXvA6bw-mpTR5JchI8.jpg?width=108&crop=smart&auto=webp&s=701647ce8daa5510640c0c59b0826da2c020e1b3', 'width': 108}, {'height': 116, 'url': 'h... | |
ubergarm/DeepSeek-R1-0528-GGUF | 1 | [removed] | 2025-05-30T21:13:30 | https://huggingface.co/ubergarm/DeepSeek-R1-0528-GGUF | VoidAlchemy | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kzfocb | false | null | t3_1kzfocb | /r/LocalLLaMA/comments/1kzfocb/ubergarmdeepseekr10528gguf/ | false | false | 1 | {'enabled': False, 'images': [{'id': '3ISj42OzoDxdhD3QSqKLiYSmDYteg9Mijqy6MGtDQLc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_ie2E-L6KKHmnErLSoR3DbJuxwXvA6bw-mpTR5JchI8.jpg?width=108&crop=smart&auto=webp&s=701647ce8daa5510640c0c59b0826da2c020e1b3', 'width': 108}, {'height': 116, 'url': 'h... | |
I built a memory MCP that understands you (so Sam Altman can't). | 0 | I built a deep contextual memory bank that is callable in AI applications like Claude and Cursor.
It knows anything you give it about you, is safe and secure, and is kept private so ChatGPT doesn't own an understanding of you.
https://preview.redd.it/xo82qo3diz3f1.png?width=3452&format=png&auto=webp&s=23768cfd288d65355158... | 2025-05-30T21:09:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kzfkzw/i_built_a_memory_mcp_that_understands_you_so_sam/ | OneEither8511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzfkzw | false | null | t3_1kzfkzw | /r/LocalLLaMA/comments/1kzfkzw/i_built_a_memory_mcp_that_understands_you_so_sam/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'QvijV4VZ6qA07MkS1wpzmqb5ksdd85VN87Gdxj3Fi_8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/b4o7rEQlbE2y6K0Vzk2r0KGL85hs_ch9CrJLAppRhoE.jpg?width=108&crop=smart&auto=webp&s=6e97190253fae3dd947a8b068142afaf7c1569f9', 'width': 108}, {'height': 108, 'url': 'h... | |
Introducing the unified multi-modal MLX engine architecture in LM Studio | 1 | 2025-05-30T21:04:45 | https://lmstudio.ai/blog/unified-mlx-engine | adefa | lmstudio.ai | 1970-01-01T00:00:00 | 0 | {} | 1kzfgm4 | false | null | t3_1kzfgm4 | /r/LocalLLaMA/comments/1kzfgm4/introducing_the_unified_multimodal_mlx_engine/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eiDteCmG0LKmpuLKvus26TJ8b22ovOioDWY6USPVu3E', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/qhTRym2DXcvjXGSL0d1dDPMyjFIQB6BVFpsz_C_ySaY.jpg?width=108&crop=smart&auto=webp&s=41b43c7c3ed4c454e694120ea4eeddb3853e940e', 'width': 108}, {'height': 113, 'url': 'h... | ||
ResembleAI provides safetensors for Chatterbox TTS | 36 | Safetensors files are now uploaded on Hugging Face:
[https://huggingface.co/ResembleAI/chatterbox/tree/main](https://huggingface.co/ResembleAI/chatterbox/tree/main)
And a PR is that adds support to use them to the example code is ready and will be merged in a couple of days:
[https://github.com/resemble-ai/chatter... | 2025-05-30T21:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kzfces/resembleai_provides_safetensors_for_chatterbox_tts/ | WackyConundrum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzfces | false | null | t3_1kzfces | /r/LocalLLaMA/comments/1kzfces/resembleai_provides_safetensors_for_chatterbox_tts/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'ZUH0a8iidvteHxtDF3nsL7xFz7SBWOHojoPDOtwA6pE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0WwfUSAUbGDSaD1JNmts6sRGODZXpWMvsfBSNlLL7-w.jpg?width=108&crop=smart&auto=webp&s=8d920e6b5d691495bb59e89c192d2faf9d41c440', 'width': 108}, {'height': 116, 'url': 'h... |
Deepseek is cool, but is there an alternative to Claude Code I can use with it? | 83 | I'm looking for an AI coding framework that can help me with training diffusion models. Take existing quasi-abandoned spaghetti codebases and update them to the latest packages, implement papers, add features like inpainting, autonomously experiment using different architectures, do hyperparameter searches, preprocess my ... | 2025-05-30T20:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kzf9nl/deepseek_is_cool_but_is_there_an_alternative_to/ | BITE_AU_CHOCOLAT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzf9nl | false | null | t3_1kzf9nl | /r/LocalLLaMA/comments/1kzf9nl/deepseek_is_cool_but_is_there_an_alternative_to/ | false | false | self | 83 | null |
Where can I use medgemma 27B (medical LLM) for free online? Can't inference it | 5 | Thanks! | 2025-05-30T20:53:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kzf6hu/where_can_i_use_medgemma_27b_medical_llm_for_free/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzf6hu | false | null | t3_1kzf6hu | /r/LocalLLaMA/comments/1kzf6hu/where_can_i_use_medgemma_27b_medical_llm_for_free/ | false | false | self | 5 | null |
qSpeak - Superwhisper cross-platform alternative now with MCP support | 18 | Hey, we've released a new version of qSpeak with advanced support for MCP. Now you can access whatever platform tools wherever you would want in your system using voice.
We've spent a great deal of time making the experience of steering your system with voice a pleasure. We would love to get some feedback. The app... | 2025-05-30T20:23:35 | https://qspeak.app | fajfas3 | qspeak.app | 1970-01-01T00:00:00 | 0 | {} | 1kzegpe | false | null | t3_1kzegpe | /r/LocalLLaMA/comments/1kzegpe/qspeak_superwhisper_crossplatform_alternative_now/ | false | false | default | 18 | null |
Looking for software that processes images in realtime (or periodically). | 2 | Are there any projects out there that allow a multimodal LLM to process a window in real time? Basically I'm trying to have the GUI look at a window, take a screenshot periodically, send it to Ollama, and have it processed with a system and spit out an output, all hands-free.
I've been trying to look at some OSS projects ... | 2025-05-30T20:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kze5m0/looking_for_software_that_processes_images_in/ | My_Unbiased_Opinion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kze5m0 | false | null | t3_1kze5m0 | /r/LocalLLaMA/comments/1kze5m0/looking_for_software_that_processes_images_in/ | false | false | self | 2 | null |
Ollama run bob | 868 | 2025-05-30T20:06:52 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kze1r6 | false | null | t3_1kze1r6 | /r/LocalLLaMA/comments/1kze1r6/ollama_run_bob/ | false | false | 868 | {'enabled': True, 'images': [{'id': 'V7hd_GwzcPsZsFc10q7c5bcI5m6-uRj69Uci8ZzPvdA', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/v4krpd9g7z3f1.jpeg?width=108&crop=smart&auto=webp&s=89523d8f6dbbf876488d4e3cce75b51686be4c7f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/v4krpd9g7z3f1.j... | |||
Confused, 2x 5070ti vs 1x 3090 | 3 | Looking to buy an AI server for running 32b models, but I'm confused about the 3090 recommendations.
$ on Amazon:
5070ti: $890
3090: $1600
32b model on vllm:
2x 5070ti: 54 T/s
1x 3090: 40 T/s
Two 5070 Tis give you faster speeds and 8 GB of wiggle room for almost the same price. Plus, they give you the opportunity ... | 2025-05-30T19:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kzdla4/confused_2x_5070ti_vs_1x_3090/ | MiyamotoMusashi7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzdla4 | false | null | t3_1kzdla4 | /r/LocalLLaMA/comments/1kzdla4/confused_2x_5070ti_vs_1x_3090/ | false | false | self | 3 | null |
Noob question: Why did Deepseek distill Qwen3? | 77 | In Unsloth's documentation, it says "DeepSeek also released a R1-0528 distilled version by fine-tuning Qwen3 (8B)."
Being a noob, I don't understand why they would use Qwen3 as the base and then distill from there and then call it Deepseek-R1-0528. Isn't it mostly Qwen3 and they are taking Qwen3's work and then doing ... | 2025-05-30T18:56:24 | https://www.reddit.com/r/LocalLLaMA/comments/1kzcc3f/noob_question_why_did_deepseek_distill_qwen3/ | Turbulent-Week1136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzcc3f | false | null | t3_1kzcc3f | /r/LocalLLaMA/comments/1kzcc3f/noob_question_why_did_deepseek_distill_qwen3/ | false | false | self | 77 | {'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'h... |
llama-server is cooking! gemma3 27b, 100K context, vision on one 24GB GPU. | 237 | llama-server has really improved a lot recently. With vision support, SWA (sliding window attention) and performance improvements I've got 35tok/sec on a 3090. P40 gets 11.8 tok/sec. Multi-gpu performance has improved. Dual 3090s performance goes up to 38.6 tok/sec (600W power limit). Dual P40 gets 15.8 tok/sec (320W p... | 2025-05-30T18:54:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kzcalh/llamaserver_is_cooking_gemma3_27b_100k_context/ | No-Statement-0001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzcalh | false | null | t3_1kzcalh | /r/LocalLLaMA/comments/1kzcalh/llamaserver_is_cooking_gemma3_27b_100k_context/ | false | false | self | 237 | {'enabled': False, 'images': [{'id': 'Geq1aGoOJXF9X64PthOjFRWSxgKsZlsfIaVGTsL8Ed0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oJNo35UOCjrIXfKcYlffc8A4NkSsH_nP10zEv4HIN74.jpg?width=108&crop=smart&auto=webp&s=45b898447004243f7433989c45cb5ac013d27237', 'width': 108}, {'height': 108, 'url': 'h... |
kbo.ai,kbl.ai,mpop.ai,kiwoom.ai,tmoney.ai,mpop.ai,tmoney.ai | 1 | [removed] | 2025-05-30T18:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kzc3vo/kboaikblaimpopaikiwoomaitmoneyaimpopaitmoneyai/ | Artistic_Duty_9915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzc3vo | false | null | t3_1kzc3vo | /r/LocalLLaMA/comments/1kzc3vo/kboaikblaimpopaikiwoomaitmoneyaimpopaitmoneyai/ | false | false | self | 1 | null |
Launching TextCLF: API to create custom text classification models with your own data | 1 | [removed] | 2025-05-30T18:29:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kzbp3y/launching_textclf_api_to_create_custom_text/ | LineAlternative5694 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzbp3y | false | null | t3_1kzbp3y | /r/LocalLLaMA/comments/1kzbp3y/launching_textclf_api_to_create_custom_text/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NQpxjfjKIYyl5eJv8XnmPfcsU-K8wiSJyWnR6IVp7Tc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pKjne25gxV3LV4JFpKC_4IIoG0wz6gw_IJ2AUKwL6O4.jpg?width=108&crop=smart&auto=webp&s=91cd9b8b7a69f60b2746f7f65e7b6e72534c7b11', 'width': 108}], 'source': {'height': 17... |
Yappus. Your Terminal Just Started Talking Back (The Fuck, but Better) | 30 | Yappus is a terminal-native LLM interface written in Rust, focused on being local-first, fast, and scriptable.
No GUI, no HTTP wrapper. Just a CLI tool that integrates with your filesystem and shell. I am planning to turn it into a little shell-inside-a-shell kind of thing. Ollama integration is coming soon!
Check out syste... | 2025-05-30T17:48:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kzansb/yappus_your_terminal_just_started_talking_back/ | dehydratedbruv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzansb | false | null | t3_1kzansb | /r/LocalLLaMA/comments/1kzansb/yappus_your_terminal_just_started_talking_back/ | false | false | 30 | null | |
Why has Gemini refused to answer? Is this a general question? | 1 | [removed] | 2025-05-30T17:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kzacqk/why_has_gemini_refused_to_anwser_is_this_a/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kzacqk | false | null | t3_1kzacqk | /r/LocalLLaMA/comments/1kzacqk/why_has_gemini_refused_to_anwser_is_this_a/ | false | false | 1 | null |
AI learning collaboration! | 1 | [removed] | 2025-05-30T17:24:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kza2pd/ai_learning_collaboration/ | Zealousideal_Pay8775 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kza2pd | false | null | t3_1kza2pd | /r/LocalLLaMA/comments/1kza2pd/ai_learning_collaboration/ | false | false | self | 1 | null |
Is this idea reasonable? | 0 | *I asked GPT to help me flesh out the idea and write it into a technical white paper. I don't understand all the technical things it came up with, but my idea is basically instead of having massive models, tuned by tiny LORAs, what if we have a smaller reasoning model that can intelligently apply combinations of LORAs ... | 2025-05-30T17:04:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kz9jse/is_this_idea_reasonable/ | beentothefuture | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz9jse | false | null | t3_1kz9jse | /r/LocalLLaMA/comments/1kz9jse/is_this_idea_reasonable/ | false | false | self | 0 | null |
Should I add my 235B LLM at home to my dating profile? | 1 | [removed] | 2025-05-30T16:13:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kz89no/should_i_add_my_235b_llm_at_home_to_my_dating/ | MDT-49 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz89no | false | null | t3_1kz89no | /r/LocalLLaMA/comments/1kz89no/should_i_add_my_235b_llm_at_home_to_my_dating/ | false | false | self | 1 | null |
TTS for Podcast (1 speaker) based on my voice | 1 | Hi!
I'm looking for a free and easy-to-use TTS; I need it to create one podcast (in Italian, with only me as the speaker) based on my cloned voice. In short, something quite similar to what ElevenLabs does.
I have a MacBook 16 M1 Pro with 16GB of RAM and I know how to use LM Studio quite well, but I don't have much kno... | 2025-05-30T16:04:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kz81as/tts_for_podcast_1_speaker_based_on_my_voice/ | fucilator_3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz81as | false | null | t3_1kz81as | /r/LocalLLaMA/comments/1kz81as/tts_for_podcast_1_speaker_based_on_my_voice/ | false | false | self | 1 | null |
Hosting Qwen 3 4B model | 1 | [removed] | 2025-05-30T15:43:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kz7ig9/hosting_qwen_3_4b_model/ | prahasanam-boi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz7ig9 | false | null | t3_1kz7ig9 | /r/LocalLLaMA/comments/1kz7ig9/hosting_qwen_3_4b_model/ | false | false | self | 1 | null |
Is there any chance I could ever get performance similar to chatgpt-4o out of my home desktop? | 0 | I'm enjoying playing with models at home but it's becoming clear that if I don't have a SOTA system capable of running 600B+ parameter models, I'm never going to be able to have the same quality of experience as I could by just paying $20/month to ChatGPT.
Or am I wrong?
Does paying the subscription just make the most sense ... | 2025-05-30T15:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kz7b1u/is_there_any_chance_i_could_ever_get_performance/ | W5SNx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz7b1u | false | null | t3_1kz7b1u | /r/LocalLLaMA/comments/1kz7b1u/is_there_any_chance_i_could_ever_get_performance/ | false | false | self | 0 | null |
Fiance-Llama-8B: Specialized LLM for Financial QA, Reasoning and Dialogue | 53 | Hi everyone,
Just sharing a model release that might be useful for those working on financial NLP or building domain-specific assistants.
Model on Hugging Face: https://huggingface.co/tarun7r/Finance-Llama-8B
Finance-Llama-8B is a fine-tuned version of Meta-Llama-3.1-8B, trained on the Finance-Instruct-500k dataset, ... | 2025-05-30T14:57:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kz6cbp/fiancellama8b_specialized_llm_for_financial_qa/ | martian7r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz6cbp | false | null | t3_1kz6cbp | /r/LocalLLaMA/comments/1kz6cbp/fiancellama8b_specialized_llm_for_financial_qa/ | false | false | self | 53 | {'enabled': False, 'images': [{'id': 'h9xuv7TDnggf0lEja-VKKE23s02QTpMGzYNxAZAt3aQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_asAlEfFv7vi5Y0iy6EzjnU8pzaDdsQMagVnlXzQmL4.jpg?width=108&crop=smart&auto=webp&s=0cbbd3cf51e38e42b51203204bc83a601486ea55', 'width': 108}, {'height': 116, 'url': 'h... |
Llama 3.2 1B Base (4-bit BNB) Fine-tuning with Unsloth - Model Not Learning (10+ Epochs)! Seeking Help🙏 | 1 | [removed] | 2025-05-30T14:53:55 | https://colab.research.google.com/drive/1WLjc25RHedPbhjG-t_CRN1PxNWBqQrxE?usp=sharing | Fun_Cockroach9020 | colab.research.google.com | 1970-01-01T00:00:00 | 0 | {} | 1kz69g4 | false | null | t3_1kz69g4 | /r/LocalLLaMA/comments/1kz69g4/llama_32_1b_base_4bit_bnb_finetuning_with_unsloth/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': '... | |
RAG system for local LLM? | 1 | [removed] | 2025-05-30T14:47:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kz63vr/rag_system_for_local_llm/ | zipzak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz63vr | false | null | t3_1kz63vr | /r/LocalLLaMA/comments/1kz63vr/rag_system_for_local_llm/ | false | false | self | 1 | null |
Improving decode rate of LLMs using llama.cpp on mobile | 1 | [removed] | 2025-05-30T14:31:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kz5q1b/improving_decode_rate_of_llms_using_llamacpp_on/ | Capital-Drag-8820 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz5q1b | false | null | t3_1kz5q1b | /r/LocalLLaMA/comments/1kz5q1b/improving_decode_rate_of_llms_using_llamacpp_on/ | false | false | self | 1 | null |
One shot script conversion from shell to python fails miserably | 0 | So today apparently I'm going nuts, needed a parser for ipfw2 output in FreeBSD and look what the leading models provided, can somebody explain or did they become more stupid? For context I am converting a backup script in gemini, asked to expand sh script for portability and add a few features, it failed on initial fe... | 2025-05-30T14:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kz5onp/one_shot_script_conversion_from_shell_to_python/ | Ikinoki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kz5onp | false | null | t3_1kz5onp | /r/LocalLLaMA/comments/1kz5onp/one_shot_script_conversion_from_shell_to_python/ | false | false | self | 0 | null |