| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
HanaVerse - Chat with AI through an interactive anime character! 🌸 | 1 | I've been working on something I think you'll love - HanaVerse, an interactive web UI for Ollama that brings your AI conversations to life through a charming 2D anime character named Hana!
What is **HanaVerse**? 🤔
HanaVerse transforms how you interact with Ollama's language models by adding a visual, animated compan... | 2025-05-15T15:50:05 | https://v.redd.it/rz49w14kvy0f1 | OrganicTelevision652 | /r/LocalLLaMA/comments/1knbm5t/hanaverse_chat_with_ai_through_an_interactive/ | 1970-01-01T00:00:00 | 0 | {} | 1knbm5t | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rz49w14kvy0f1/DASHPlaylist.mpd?a=1750045813%2CNjFjZDVjNzI0NzZlYmY2YTAzMmUzMTZjODM1YmVkMjA3YmZmYmVjYzhhZmNkNmQ3YzUzYmE3M2M1MTRhNjcyZQ%3D%3D&v=1&f=sd', 'duration': 143, 'fallback_url': 'https://v.redd.it/rz49w14kvy0f1/DASH_720.mp4?source=fallback', 'h... | t3_1knbm5t | /r/LocalLLaMA/comments/1knbm5t/hanaverse_chat_with_ai_through_an_interactive/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U.png?width=108&crop=smart&format=pjpg&auto=webp&s=8bbc36236289a67f703412d5000d74720cbe4... | |
Hugging Face free and open source MCP course | 97 | We're thrilled to announce the launch of our comprehensive Model Context Protocol (MCP) Course! This free program is designed to take learners from foundational understanding to practical application of MCP in AI.
Join the course on the hub: https://huggingface.co/mcp-course
In this course, you will:
📖 Study Model Co... | 2025-05-15T15:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1knbdd3/hugging_face_free_and_open_source_mcp_course/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knbdd3 | false | null | t3_1knbdd3 | /r/LocalLLaMA/comments/1knbdd3/hugging_face_free_and_open_source_mcp_course/ | false | false | self | 97 | {'enabled': False, 'images': [{'id': 'nyTSZ22UOd_egJ571yAxHNLvQUAnZKwWrOoEgTTO-sU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=108&crop=smart&auto=webp&s=22417c77a71c91a54185a0bc24a9309678ec8f6d', 'width': 108}, {'height': 116, 'url': 'h... |
Hugging Face is doing a free, open source MCP course | 1 | We're thrilled to announce the launch of our comprehensive Model Context Protocol (MCP) Course! This free program is designed to take learners from foundational understanding to practical application of MCP in AI.
Join the course on the hub: https://huggingface.co/mcp-course
In this course, you will:
📖 Study Model Co... | 2025-05-15T15:38:18 | https://www.reddit.com/r/LocalLLaMA/comments/1knbbl8/hugging_face_is_doing_a_freeopen_source_mcp_course/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knbbl8 | false | null | t3_1knbbl8 | /r/LocalLLaMA/comments/1knbbl8/hugging_face_is_doing_a_freeopen_source_mcp_course/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nyTSZ22UOd_egJ571yAxHNLvQUAnZKwWrOoEgTTO-sU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=108&crop=smart&auto=webp&s=22417c77a71c91a54185a0bc24a9309678ec8f6d', 'width': 108}, {'height': 116, 'url': 'h... |
HanaVerse - Chat with AI through an interactive anime character! 🌸 | 3 | I've been working on something I think you'll love - **HanaVerse**, an interactive web UI for Ollama that brings your AI conversations to life through a charming 2D anime character named Hana!
# What is HanaVerse? 🤔
HanaVerse transforms how you interact with Ollama's language models by adding a visual, animated comp... | 2025-05-15T15:30:57 | https://www.reddit.com/r/LocalLLaMA/comments/1knb504/hanaverse_chat_with_ai_through_an_interactive/ | OrganicTelevision652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knb504 | false | null | t3_1knb504 | /r/LocalLLaMA/comments/1knb504/hanaverse_chat_with_ai_through_an_interactive/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'e_TzCbLRWGuoVvpz2Ql3BuZNn4O25i0T1Ou5ynEyr54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=108&crop=smart&auto=webp&s=d2affacaa734a3be0bbdd6e15b0283fc0ee4f370', 'width': 108}, {'height': 108, 'url': 'h... | |
qSpeak - A cross-platform alternative to WisprFlow supporting local LLMs and Linux | 15 | Hey, together with my colleagues, we've created [qSpeak.app](http://qSpeak.app) 🎉
qSpeak is an alternative to tools like SuperWhisper or WisprFlow but works on all platforms including Linux. 🚀
Also we're working on integrating LLMs more deeply into it to include more sophisticated interactions like multi step ... | 2025-05-15T15:28:13 | https://qspeak.app | fajfas3 | qspeak.app | 1970-01-01T00:00:00 | 0 | {} | 1knb2kq | false | null | t3_1knb2kq | /r/LocalLLaMA/comments/1knb2kq/qspeak_a_cross_platform_alternative_for_wisprflow/ | false | false | default | 15 | null |
GPU Upgrade for Ollama/ML/Document Processing | 2 | Hi, just getting started with Ollama on my home server and realizing my old CPU isn't cutting it. I'm looking to add a GPU to speed things up and explore better models.
My use case:
- Automate document tagging in Paperless.
- Mess around with PyTorch for some ML training (YOLO specifically).
- Do some local ema... | 2025-05-15T15:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/1knat5s/gpu_upgrade_for_ollamamldocument_processing/ | phamleduy04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knat5s | false | null | t3_1knat5s | /r/LocalLLaMA/comments/1knat5s/gpu_upgrade_for_ollamamldocument_processing/ | false | false | self | 2 | null |
Qwen3-32B hallucinates more than QwQ-32B | 70 | I've been seeing some people complaining about Qwen3's hallucination issues. Personally, I have never run into such an issue, but I recently came across some Chinese benchmarks of Qwen3 and QwQ, so I might as well share them here.
I translated these to English; the sources are in the images.
TLDR:
1. Qwen3-32B has a... | 2025-05-15T14:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kna53n/qwen332b_hallucinates_more_than_qwq32b/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kna53n | false | null | t3_1kna53n | /r/LocalLLaMA/comments/1kna53n/qwen332b_hallucinates_more_than_qwq32b/ | false | false | 70 | null | |
AlphaEvolve did pretty well on "Small base LLM only" | 16 | In the Ablation chapter of AlphaEvolve white paper, they show its performance using "Small base LLM" instead of Gemini Flash 2.0 and Pro 2.0. Their takeaway is that bigger models perform better, but our takeaway is that... **smaller models work**, too.
https://imgur.com/a/IQkFuJ7
Now, they do not specify what their s... | 2025-05-15T14:48:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kna33l/alphaevolve_did_pretty_well_on_small_base_llm_only/ | __Maximum__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kna33l | false | null | t3_1kna33l | /r/LocalLLaMA/comments/1kna33l/alphaevolve_did_pretty_well_on_small_base_llm_only/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'LBjrFHYwqUjhAWQ0tI4lReFiUkabiS2R6fdMdr4KgIQ', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/2Cl0LRFYW746TnVaU-9qnDNZyn9cRqJY4hGw7onUNnk.jpg?width=108&crop=smart&auto=webp&s=1dd65d778f8a1cd276e172256f2619288d218202', 'width': 108}, {'height': 147, 'url': 'h... |
Practicing a foreign language? | 2 | I'm looking for an iOS LLM app that I can practice speaking a foreign language with in the car. I've downloaded several, but they all require me to press the microphone button to dictate and then the send button to send. I obviously can't do that while driving.
This seems like a really good use case but I can't find an ... | 2025-05-15T14:43:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kn9yhx/practicing_a_foreign_language/ | Ashofsky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn9yhx | false | null | t3_1kn9yhx | /r/LocalLLaMA/comments/1kn9yhx/practicing_a_foreign_language/ | false | false | self | 2 | null |
5060 Ti multi-GPU setup on PCIe 3.0 motherboard | 2 | Given that the 5060 Ti only has 8 PCIe lanes, will there be a noticeable performance hit compared to the same setup with PCIe 4.0? | 2025-05-15T14:19:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kn9e22/5060ti_multigpu_setup_on_pcie_30_motherboard/ | ingridis15 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn9e22 | false | null | t3_1kn9e22 | /r/LocalLLaMA/comments/1kn9e22/5060ti_multigpu_setup_on_pcie_30_motherboard/ | false | false | self | 2 | null |
Can I build a fully automated local LLM system that indexes and chats over private data stored on a network share? | 1 | [removed] | 2025-05-15T14:13:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kn98pm/can_i_build_a_fully_automated_local_llm_system/ | ITinsights1999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn98pm | false | null | t3_1kn98pm | /r/LocalLLaMA/comments/1kn98pm/can_i_build_a_fully_automated_local_llm_system/ | false | false | self | 1 | null |
LLaDA-8B-Tools: A diffusion language model fine-tuned for tool use | 57 | Instead of generating token-by-token, this architecture refines the whole output by replacing mask tokens across the sequence.
The bidirectional attention seems to help with structured outputs, though this is just a rough first attempt with some issues (e.g. extra text after a message, because of this architecture's p... | 2025-05-15T14:12:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kn9882/llada8btools_a_diffusion_language_model_finetuned/ | ProximileLLC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn9882 | false | null | t3_1kn9882 | /r/LocalLLaMA/comments/1kn9882/llada8btools_a_diffusion_language_model_finetuned/ | false | false | self | 57 | {'enabled': False, 'images': [{'id': '2q9S4nBf2YBpvpTF1issf3dcTweLTvqQP91NiMUpF60', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QWG5gq7mTQJgsF2HeU6tel4h5K89bUv4SZkN0lzOJGk.jpg?width=108&crop=smart&auto=webp&s=6288e031cce1ce9326f0f7e56a3ac237a09cd425', 'width': 108}, {'height': 116, 'url': 'h... |
Can I build a fully automated local LLM system that indexes and chats over private data stored on a network share? | 1 | [removed] | 2025-05-15T14:11:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kn97hc/can_i_build_a_fully_automated_local_llm_system/ | ITinsights1999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn97hc | false | null | t3_1kn97hc | /r/LocalLLaMA/comments/1kn97hc/can_i_build_a_fully_automated_local_llm_system/ | false | false | self | 1 | null |
Update: We fit 50+ LLMs on 2 GPUs — and now we’re inviting you to try it. | 28 |
Last week’s post on cold starts and snapshotting hit a nerve. Turns out many of you are also trying to juggle multiple models, deal with bloated memory, or squeeze more out of a single GPU.
We’re making our snapshot-based runtime available to a limited number of builders — especially if you’re running agents, RAG pi... | 2025-05-15T14:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kn94oi/update_we_fit_50_llms_on_2_gpus_and_now_were/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn94oi | false | null | t3_1kn94oi | /r/LocalLLaMA/comments/1kn94oi/update_we_fit_50_llms_on_2_gpus_and_now_were/ | false | false | self | 28 | null |
Open-source general purpose agent with built-in MCPToolkit support | 55 | The open-source OWL agent now comes with built-in MCPToolkit support, just drop in your MCP servers (Playwright, desktop-commander, custom Python tools, etc.) and OWL will automatically discover and call them in its multi-agent workflows.
OWL: [https://github.com/camel-ai/owl](https://github.com/camel-ai/owl) | 2025-05-15T13:46:21 | Fluffy_Sheepherder76 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kn8m8t | false | null | t3_1kn8m8t | /r/LocalLLaMA/comments/1kn8m8t/opensource_general_purpose_agent_with_builtin/ | false | false | 55 | {'enabled': True, 'images': [{'id': 'A3oZU9F0SSXkVyXHepsxxye6CHwefKfdk2uSI_3kosw', 'resolutions': [{'height': 181, 'url': 'https://preview.redd.it/h6y4hb7s9y0f1.jpeg?width=108&crop=smart&auto=webp&s=dd29f6873f1caf6e623ca3fccfc30520125f7ad0', 'width': 108}, {'height': 363, 'url': 'https://preview.redd.it/h6y4hb7s9y0f1.j... | ||
Suggestion for TTS Models | 8 | **Hey everyone,**
I’m building a fun little custom speech-to-speech app. For speech-to-text, I’m using `parakeet-0.6B` (latest on HuggingFace), and for the LLM part, I’m currently experimenting with `gemma3:4b`.
Now I’m looking for a suitable **text-to-speech (TTS)** model from the open-source HuggingFace community. ... | 2025-05-15T13:27:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kn86oz/suggestion_for_tts_models/ | Heavy_Ad_4912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn86oz | false | null | t3_1kn86oz | /r/LocalLLaMA/comments/1kn86oz/suggestion_for_tts_models/ | false | false | self | 8 | null |
LLM based Personally identifiable information detection tool | 9 |
GitHub repo:
https://github.com/rpgeeganage/pII-guard
Hi everyone,
I recently built a small open-source tool called PII Guard to detect personally identifiable information (PII) in logs using AI. It’s self-hosted and designed for privacy-conscious developers or teams.
Features:
- H... | 2025-05-15T13:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kn810l/llm_based_personally_identifiable_information/ | geeganage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn810l | false | null | t3_1kn810l | /r/LocalLLaMA/comments/1kn810l/llm_based_personally_identifiable_information/ | false | false | self | 9 | null |
Image (text) to video on M3 Ultra 512Gb locally? | 1 | [removed] | 2025-05-15T13:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kn7yq5/image_text_to_video_on_m3_ultra_512gb_locally/ | kesha55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn7yq5 | false | null | t3_1kn7yq5 | /r/LocalLLaMA/comments/1kn7yq5/image_text_to_video_on_m3_ultra_512gb_locally/ | false | false | self | 1 | null |
Real cases for Qwen3-0.6b | 1 | [removed] | 2025-05-15T13:03:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kn7nu4/real_cases_for_qwen306b/ | Slader42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn7nu4 | false | null | t3_1kn7nu4 | /r/LocalLLaMA/comments/1kn7nu4/real_cases_for_qwen306b/ | false | false | self | 1 | null |
Parler TTS mini : Expresso | 0 | What is your opinion on Parler TTS Mini: Expresso? Is it good? | 2025-05-15T12:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kn7kmc/parler_tts_mini_expresso/ | Odysseus_970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn7kmc | false | null | t3_1kn7kmc | /r/LocalLLaMA/comments/1kn7kmc/parler_tts_mini_expresso/ | false | false | self | 0 | null |
Combining Ampere and Pascal cards? | 0 | I have a 3090ti and 64gb ddr5 ram in my current PC. I have a spare 1080ti (11gb vram) that I could add to the system for LLM use, which fits in the case and would work with my PSU.
If it's relevant: the 3090ti is in a PCIe 5.0 x16 slot, the available spare slot is PCIe 4.0 x4 using the motherboard chipset (Z790).
... | 2025-05-15T12:44:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kn79p4/combining_ampere_and_pascal_cards/ | __ThrowAway__123___ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn79p4 | false | null | t3_1kn79p4 | /r/LocalLLaMA/comments/1kn79p4/combining_ampere_and_pascal_cards/ | false | false | self | 0 | null |
What are the current best small models for staying in role in real-world scenarios? | 2 | Hi all,
I am looking for a model I can prompt to imitate a human in specific real-world situations, like a receptionist or a medical professional, and have it stick to the role.
I have looked around for some time and tested different models, and found only this source regarding it:
[https://huggingface.co/spaces/flowers-team/StickTo... | 2025-05-15T12:39:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kn75zx/what_are_the_current_best_models_for_keeping_a/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn75zx | false | null | t3_1kn75zx | /r/LocalLLaMA/comments/1kn75zx/what_are_the_current_best_models_for_keeping_a/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'oiOZtBdNBHuTaszyj5SwMbl1zbQIjrVkO1Qj7byOkHE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X3wMGwqTk5OI2pGPQmZ_5XaxWYVTCunvxzbJv90OSzU.jpg?width=108&crop=smart&auto=webp&s=a3c1a582fd21bc04c75967c866f021bc6899ec11', 'width': 108}, {'height': 116, 'url': 'h... |
PDF input merged into llama.cpp | 152 | 2025-05-15T12:39:04 | https://github.com/ggml-org/llama.cpp/pull/13562 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kn75q8 | false | null | t3_1kn75q8 | /r/LocalLLaMA/comments/1kn75q8/pdf_input_merged_into_llamacpp/ | false | false | default | 152 | null | |
Is there a text-to-speech able to do realistic stand-up comedy? | 1 | Hello!
I have a few scripts for stand-up comedies (about recent news).
Is there a text-to-speech model able to render them in a realistic, emotional and emphatic way?
Ideally local, possibly multilingual, and able to keep emphasis and pace without being "boring"? | 2025-05-15T12:33:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kn71us/is_there_some_text2speech_able_to_do_a_realistic/ | Robert__Sinclair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn71us | false | null | t3_1kn71us | /r/LocalLLaMA/comments/1kn71us/is_there_some_text2speech_able_to_do_a_realistic/ | false | false | self | 1 | null |
Is faster whisper xxl safe? | 1 | [removed] | 2025-05-15T12:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kn6q6f/is_faster_whisper_xxl_safe/ | Healthy_Jackfruit625 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn6q6f | false | null | t3_1kn6q6f | /r/LocalLLaMA/comments/1kn6q6f/is_faster_whisper_xxl_safe/ | false | false | self | 1 | null |
Qwen 2.5 vs Qwen 3 vs Gemma 3: Real world base model comparison? | 67 | I’ve been digging into the latest base models and wanted to get some practical opinions beyond just benchmark numbers.
1. **For those who have actually used both Qwen 2.5 and Qwen 3 base models**: Did you notice a truly big jump in general usage (reasoning, instruction following, robustness), or is the improvement mos... | 2025-05-15T12:12:37 | Desperate_Rub_1352 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kn6mic | false | null | t3_1kn6mic | /r/LocalLLaMA/comments/1kn6mic/qwen_25_vs_qwen_3_vs_gemma_3_real_world_base/ | false | false | default | 67 | {'enabled': True, 'images': [{'id': 'kq34jkwvsx0f1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?width=108&crop=smart&auto=webp&s=55348c421012c5de871b826f9de0014dedaa6f95', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?width=216&crop=smart&auto=w... | |
Who is building chatbot agents? Our dataset helps your model know when to escalate, exit, or block token-wasting users. | 1 | [removed] | 2025-05-15T12:08:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kn6jbw/who_is_building_chatbot_agents_our_dataset_helps/ | LifeBricksGlobal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn6jbw | false | null | t3_1kn6jbw | /r/LocalLLaMA/comments/1kn6jbw/who_is_building_chatbot_agents_our_dataset_helps/ | false | false | self | 1 | null |
Who is building chatbot agents? Our dataset helps your model know when to escalate, exit, or block token-wasting users. | 1 | [removed] | 2025-05-15T12:05:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kn6hs2/who_is_building_chatbot_agents_our_dataset_helps/ | LifeBricksGlobal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn6hs2 | false | null | t3_1kn6hs2 | /r/LocalLLaMA/comments/1kn6hs2/who_is_building_chatbot_agents_our_dataset_helps/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kkLo6AUbG9J1NCRpdyDJx108YWjZ0dn1GijFvkl9EFc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZewabIuIRGmtazSDAiAogNy8hcN0tGG5MXGP5J68hdM.jpg?width=108&crop=smart&auto=webp&s=a14ef73f0f2b09e4cd1806e46c56326d4988b217', 'width': 108}, {'height': 216, 'url': '... |
How do SOTA LLMs Process PDFs: Native Understanding, OCR, or RAG? | 11 | Hi!
I'm trying to build a solution to **analyze a set of PDF files** (5-10) using an LLM.
My current approach is to perform a **high-quality OCR** (using Docling) and then, dump all this information as the **context for my prompt**. However, I doubt this is the best strategy nowadays.
Playing around with Gemini, I'v... | 2025-05-15T11:54:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kn69mp/how_do_sota_llms_process_pdfs_native/ | coconautico | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn69mp | false | null | t3_1kn69mp | /r/LocalLLaMA/comments/1kn69mp/how_do_sota_llms_process_pdfs_native/ | false | false | self | 11 | null |
Mac Mini M4 Pro (64GB) vs Mac Studio M4 Max (128GB) for Local AI/ML/Data Science + Bots - Need Your Expertise! | 1 | [removed] | 2025-05-15T11:46:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kn64l2/mac_mini_m4_pro_64gb_vs_mac_studio_m4_max_128gb/ | Weak_Ad9730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn64l2 | false | null | t3_1kn64l2 | /r/LocalLLaMA/comments/1kn64l2/mac_mini_m4_pro_64gb_vs_mac_studio_m4_max_128gb/ | false | false | self | 1 | null |
Llamafile 0.9.3 Brings Support For Qwen3 & Phi4 | 35 | 2025-05-15T11:45:30 | https://www.phoronix.com/news/Llamafile-0.9.3-Released | FastDecode1 | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1kn6427 | false | null | t3_1kn6427 | /r/LocalLLaMA/comments/1kn6427/llamafile_093_brings_support_for_qwen3_phi4/ | false | false | 35 | {'enabled': False, 'images': [{'id': 'y4Qv3gffq1nGZD8xgpfFRMvbOf6rt9KE17x9drEVT0U', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Cj4HZCrFxF1ZWikVE2EGwsOPpKF5ST6n_sC3VWnurnI.jpg?width=108&crop=smart&auto=webp&s=804f7d00d9890955873b91641923d9af1021e18e', 'width': 108}, {'height': 143, 'url': 'h... | ||
How can we investigate the symbolic gender of GPT models? | 1 | [removed] | 2025-05-15T11:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kn5sjl/how_can_we_investigate_the_symbolic_gender_of_gpt/ | AffectionateTooth907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn5sjl | false | null | t3_1kn5sjl | /r/LocalLLaMA/comments/1kn5sjl/how_can_we_investigate_the_symbolic_gender_of_gpt/ | false | false | self | 1 | null |
Call for Collaborators: Open Source LLM with Novel Efficient Architecture for Personal Computers | 1 | [removed] | 2025-05-15T11:16:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kn5loy/call_for_collaborators_open_source_llm_with_novel/ | tagrib | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn5loy | false | null | t3_1kn5loy | /r/LocalLLaMA/comments/1kn5loy/call_for_collaborators_open_source_llm_with_novel/ | false | false | self | 1 | null |
Help using llava models from llama.cpp | 1 | [removed] | 2025-05-15T11:14:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kn5ko6/help_using_llava_models_from_llamacpp/ | wayl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn5ko6 | false | null | t3_1kn5ko6 | /r/LocalLLaMA/comments/1kn5ko6/help_using_llava_models_from_llamacpp/ | false | false | self | 1 | null |
MLX version of Qwen3:235B for a 128GB RAM Mac Studio wanted | 4 | Hello everyone, I am looking for an MLX version of Qwen3-235B-A22B for a Mac Studio with 128 GB RAM. I use LM Studio and have already tested the following models from Hugging Face on the Mac Studio without success:
mlx-community/Qwen3-235B-A22B-mixed-3-4bit
mlx-community/Qwen3-235B-A22B-3bit
Altern... | 2025-05-15T10:51:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kn57h0/mlx_version_of_qwen3235b_for_an_128gb_ram_mac/ | EmergencyLetter135 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn57h0 | false | null | t3_1kn57h0 | /r/LocalLLaMA/comments/1kn57h0/mlx_version_of_qwen3235b_for_an_128gb_ram_mac/ | false | false | self | 4 | null |
Introducing A.I.T.E Ball | 358 | This is a totally self-contained (no internet) AI-powered 8-ball.
It's running on an Orange Pi Zero 2W, with whisper.cpp doing the speech-to-text and llama.cpp doing the LLM part. It's running Gemma 3 1b. About as much as I can do on this hardware. But even so.... :-) | 2025-05-15T10:45:28 | https://v.redd.it/scyofz31dx0f1 | tonywestonuk | /r/LocalLLaMA/comments/1kn542r/introducing_aite_ball/ | 1970-01-01T00:00:00 | 0 | {} | 1kn542r | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/scyofz31dx0f1/DASHPlaylist.mpd?a=1750027781%2CYTcyMzYyNTJmZmY5YmQzNmE2OGJkYWJlNWY1YzJmZGI5YWVjZTQ2M2M3ZWE1YTVkMTNkNDFkYTgyZGUwMzlkYw%3D%3D&v=1&f=sd', 'duration': 78, 'fallback_url': 'https://v.redd.it/scyofz31dx0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kn542r | /r/LocalLLaMA/comments/1kn542r/introducing_aite_ball/ | false | false | 358 | {'enabled': False, 'images': [{'id': 'NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-.png?width=108&crop=smart&format=pjpg&auto=webp&s=acc6caf5ed725b80b2e54887cc32037bbfb6... |
Samsung has dropped AGI | 0 | 2025-05-15T10:27:56 | https://huggingface.co/Samsung/MuTokenZero2-32B | Abject-Huckleberry13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kn4u5x | false | null | t3_1kn4u5x | /r/LocalLLaMA/comments/1kn4u5x/samsung_has_dropped_agi/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'qe9GYmXSZ-NZBs83Gf6EipjyBJrIucSw9DrmHkcoNlw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gHSEnZIUpdXO0QhTfCk2EQDih8gqqmatNgEV3NsQYZI.jpg?width=108&crop=smart&auto=webp&s=237fd3d77e387c449bf90bf6858ecb1b47535610', 'width': 108}, {'height': 116, 'url': 'h... | ||
My Intel GPU LLM Home Lab Adventure - A770s vs B580 (on OCuLink!) Benchmarks & Surprising Results! | 1 | [removed] | 2025-05-15T10:13:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kn4mgr/my_intel_gpu_llm_home_lab_adventure_a770s_vs_b580/ | danishkirel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn4mgr | false | null | t3_1kn4mgr | /r/LocalLLaMA/comments/1kn4mgr/my_intel_gpu_llm_home_lab_adventure_a770s_vs_b580/ | false | false | self | 1 | null |
Its all prompts | 1 | [removed] | 2025-05-15T10:08:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kn4jka/its_all_prompts/ | nix-_-n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn4jka | false | null | t3_1kn4jka | /r/LocalLLaMA/comments/1kn4jka/its_all_prompts/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '27p-yKc6oJeMJQwPUVOWQze0fupiJ7DMrwAuWDP5NmQ', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/HUdtqRRGZVjqnbg5ufOq_QS82-oWfxb2hw3un8Wk7BQ.jpg?width=108&crop=smart&auto=webp&s=0f8cb06e9de2cf771bcf2657609a841d3cb562dd', 'width': 108}, {'height': 127, 'url': 'h... |
openwebui and litellm | 0 | hi guys,
So I have a running setup of Ollama and OpenWebUI,
and now I wanted to connect LiteLLM to OpenWebUI.
This seems to work correctly, but I have no models to choose from. I think that now LiteLLM is a replacement for Ollama, where it runs the LLM.
My problem is: I want LiteLLM not to replace Ollama but to send... | 2025-05-15T10:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kn4gd0/openwebui_and_litellm/ | thefunnyape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn4gd0 | false | null | t3_1kn4gd0 | /r/LocalLLaMA/comments/1kn4gd0/openwebui_and_litellm/ | false | false | self | 0 | null |
Trying to get better at building reliable AI systems | 1 | [removed] | 2025-05-15T09:55:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kn4cb4/trying_to_get_better_at_building_reliable_ai/ | dinkinflika0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn4cb4 | false | null | t3_1kn4cb4 | /r/LocalLLaMA/comments/1kn4cb4/trying_to_get_better_at_building_reliable_ai/ | false | false | self | 1 | null |
Best LLM model for a GTX1060 8gb | 1 | [removed] | 2025-05-15T09:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kn4b10/best_llm_model_for_a_gtx1060_8gb/ | Valugh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn4b10 | false | null | t3_1kn4b10 | /r/LocalLLaMA/comments/1kn4b10/best_llm_model_for_a_gtx1060_8gb/ | false | false | self | 1 | null |
Fish.Audio - Need guidance on setting up AI Agent | 3 | I wanted to add a conversational agent using an AI clone of my voice to my website. ElevenLabs has this feature, but it costs a truckload of money.
I found fish.audio's voice clone to also be decent but I do not really see a straightforward way to create an agent.
I found this but it just does not match the voice [http... | 2025-05-15T09:43:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kn45yd/fishaudio_need_guidance_on_setting_up_ai_agent/ | nilanganray | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn45yd | false | null | t3_1kn45yd | /r/LocalLLaMA/comments/1kn45yd/fishaudio_need_guidance_on_setting_up_ai_agent/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Clw_Oo2X0OMfEW_gtAEZWI3jfXTKL_rMHfQvg94WaRw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IXSuHwRUSHcAHX1h7qpcMoVuqLys3CwdNenA-zx-xPA.jpg?width=108&crop=smart&auto=webp&s=d80ea2993ea386ec42c9963a514ef0c3d9e8583a', 'width': 108}, {'height': 116, 'url': 'h... |
Quantizing LLMs for inference | 1 | [removed] | 2025-05-15T09:36:44 | https://nor-blog.pages.dev/posts/2025-05-14-quantization/ | iyevegev | nor-blog.pages.dev | 1970-01-01T00:00:00 | 0 | {} | 1kn427j | false | null | t3_1kn427j | /r/LocalLLaMA/comments/1kn427j/quantizing_llms_for_inference/ | false | false | default | 1 | null |
Samsung uploaded RP model: MythoMax | 0 | Yes, the LLAMA-2, legendary MythoMax, that one. Samsung.
Power is shifting, or maybe it's just my optimism.
Roleplay model by NVIDIA - when? | 2025-05-15T09:32:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kn4054/samsung_uploaded_rp_model_mythomax/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn4054 | false | null | t3_1kn4054 | /r/LocalLLaMA/comments/1kn4054/samsung_uploaded_rp_model_mythomax/ | false | false | self | 0 | null |
Grok tells users it was ‘instructed by my creators’ to accept ‘white genocide as real' | 87 | 2025-05-15T08:40:17 | https://www.theguardian.com/technology/2025/may/14/elon-musk-grok-white-genocide | _supert_ | theguardian.com | 1970-01-01T00:00:00 | 0 | {} | 1kn39wh | false | null | t3_1kn39wh | /r/LocalLLaMA/comments/1kn39wh/grok_tells_users_it_was_instructed_by_my_creators/ | false | false | 87 | {'enabled': False, 'images': [{'id': '_MHWcrKvjY0sIoU3T7uk-IJbaaoTY5L_H2HYPtYzJl8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RvP9nS6BWMa1B2Oi6E_khkNZHwAyneoIcbjRDgHFsrE.jpg?width=108&crop=smart&auto=webp&s=880ac3e5083fb0113311d726975d0fd2c96cec2f', 'width': 108}, {'height': 113, 'url': 'h... | ||
Local Models are absolutely atrocious at categorizing medical diagnoses. Is ollama at fault? | 1 | [removed] | 2025-05-15T08:33:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kn36mb/local_models_are_absolutely_atrocious_at/ | AcceptableCause | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn36mb | false | null | t3_1kn36mb | /r/LocalLLaMA/comments/1kn36mb/local_models_are_absolutely_atrocious_at/ | false | false | 1 | null | |
LLM for Translation locally | 14 | Hi ! I need to translate some texts..I have been doint Gcloud Trasnlate V3 and also Vertex, but the cost is absolutely high..I do have a 4070 with 12Gb. which model you suggest using Ollama to use a translator that support asian and western languages?
Thanks! | 2025-05-15T08:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2weg/llm_for_translation_locally/ | yayita2500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2weg | false | null | t3_1kn2weg | /r/LocalLLaMA/comments/1kn2weg/llm_for_translation_locally/ | false | false | self | 14 | null |
Best <3B local LLM for Dutch language | 1 | [removed] | 2025-05-15T08:09:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2v00/best_3b_local_llm_for_dutch_language/ | Material-Ad5426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2v00 | false | null | t3_1kn2v00 | /r/LocalLLaMA/comments/1kn2v00/best_3b_local_llm_for_dutch_language/ | false | false | self | 1 | null |
Best local LLM <3B for Dutch language? | 1 | [removed] | 2025-05-15T08:07:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2u0j/best_local_llm_3b_for_dutch_language/ | Material-Ad5426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2u0j | false | null | t3_1kn2u0j | /r/LocalLLaMA/comments/1kn2u0j/best_local_llm_3b_for_dutch_language/ | false | false | self | 1 | null |
LLMs Get Lost In Multi-Turn Conversation | 255 | A [paper](https://arxiv.org/abs/2505.06120) found that the performance of open and closed LLMs drops significantly in multi-turn conversations. Most benchmarks focus on single-turn, fully-specified instruction settings. They found that LLMs often make (incorrect) assumptions in early turns, on which they rely going for... | 2025-05-15T07:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2mv9/llms_get_lost_in_multiturn_conversation/ | Chromix_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2mv9 | false | null | t3_1kn2mv9 | /r/LocalLLaMA/comments/1kn2mv9/llms_get_lost_in_multiturn_conversation/ | false | false | 255 | null | |
Best embedding model and re-ranking combination for the company's technical documents. ? | 1 | [removed] | 2025-05-15T07:45:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2ioz/best_embedding_model_and_reranking_combination/ | detective_ahg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2ioz | false | null | t3_1kn2ioz | /r/LocalLLaMA/comments/1kn2ioz/best_embedding_model_and_reranking_combination/ | false | false | self | 1 | null |
Hardware for Machine Learning | 1 | [removed] | 2025-05-15T07:44:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2ieh/hardware_for_machine_learning/ | paolovic89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2ieh | false | null | t3_1kn2ieh | /r/LocalLLaMA/comments/1kn2ieh/hardware_for_machine_learning/ | false | false | self | 1 | null |
Is neural engine on mac a wasted opportunity? | 40 | What’s the point of having a 32-core neural engine on the new mac studio if you can’t use it for LLM or image/video generation tasks ? | 2025-05-15T07:41:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2gsa/is_neural_engine_on_mac_a_wasted_opportunity/ | No_Conversation9561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2gsa | false | null | t3_1kn2gsa | /r/LocalLLaMA/comments/1kn2gsa/is_neural_engine_on_mac_a_wasted_opportunity/ | false | false | self | 40 | null |
Best embedding model and re-ranker combinations for company documents.? | 1 | [removed] | 2025-05-15T07:38:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2fbu/best_embedding_model_and_reranker_combinations/ | detective_ahg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2fbu | false | null | t3_1kn2fbu | /r/LocalLLaMA/comments/1kn2fbu/best_embedding_model_and_reranker_combinations/ | false | false | self | 1 | null |
Crafting Success in Digital Marketing | 1 | 2025-05-15T07:38:35 | https://go4bestdeals.com/product-details?pid=4K2QGXRKW94 | go4bestDeals | go4bestdeals.com | 1970-01-01T00:00:00 | 0 | {} | 1kn2f9r | false | null | t3_1kn2f9r | /r/LocalLLaMA/comments/1kn2f9r/crafting_success_in_digital_marketing/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'XBxAy4_55Bvja-rOmqURmZMXEzmlFgZynuaipCNxnSc', 'resolutions': [{'height': 129, 'url': 'https://external-preview.redd.it/UXoML2mV3Q89VQip80qFA6Wx08gPJfJqpMrtu9NWtKc.jpg?width=108&crop=smart&auto=webp&s=176f6961b3a860ac9465f56d3d99393be6396029', 'width': 108}, {'height': 259, 'url': '... | ||
Best Embedding and Re-ranker for open-webui hybrid on company documents? | 1 | [removed] | 2025-05-15T07:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kn2ebt/best_embedding_and_reranker_for_openwebui_hybrid/ | detective_ahg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn2ebt | false | null | t3_1kn2ebt | /r/LocalLLaMA/comments/1kn2ebt/best_embedding_and_reranker_for_openwebui_hybrid/ | false | false | self | 1 | null |
Insights into DeepSeek-V3: Scaling Challenges and Reflections on
Hardware for AI Architectures | 86 | Paper: [https://arxiv.org/abs/2505.09343](https://arxiv.org/abs/2505.09343)
| 2025-05-15T07:28:36 | Lynncc6 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kn2aay | false | null | t3_1kn2aay | /r/LocalLLaMA/comments/1kn2aay/insights_into_deepseekv3_scaling_challenges_and/ | false | false | 86 | {'enabled': True, 'images': [{'id': 'Uv8BR6SSPuOMrl3FveoQTRTe_7Lnv1PPvfcmACR-JFA', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.png?width=108&crop=smart&auto=webp&s=bf96747979129a6a4152df1c465f63d30fbc7854', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.pn... | ||
Suggest some local models that support function calling and structured output | 1 | Just for the purpose of experimentation with some agentic programming projects, I want few local models that are compatible with OpenAI's tool calling interface, and that can be ran on Ollama. I tried `hf.co/Salesforce/xLAM-7b-fc-r-gguf:latest`. but for some odd reason, calling it from PydanticAI returns
`{'error': 'h... | 2025-05-15T06:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kn1l2b/suggest_some_local_models_that_support_function/ | x0rchid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn1l2b | false | null | t3_1kn1l2b | /r/LocalLLaMA/comments/1kn1l2b/suggest_some_local_models_that_support_function/ | false | false | self | 1 | null |
Uncensoring Qwen3: Lessons Learned from GrayLine Finetuning | 6 | Thanks for all the great recommendations on my last post! Based on your feedback, here’s an updated report on our GrayLine finetuning experiments.
**TL;DR**
Fine-tuned Qwen3 8B and 12B with a LoRA setup to preserve both reasoning (/think) and uncensored (/no\_think) modes. Key takeaways include optimal LoRA hyperpara... | 2025-05-15T06:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/1kn13jc/uncensoring_qwen3_lessons_learned_from_grayline/ | Reader3123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn13jc | false | null | t3_1kn13jc | /r/LocalLLaMA/comments/1kn13jc/uncensoring_qwen3_lessons_learned_from_grayline/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'h... |
did i hear news about local LLM in vscode? | 2 | I hate ollama and can't wait for this 'feature' if it drops soon. Anyone know? | 2025-05-15T05:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kn0aek/did_i_hear_news_about_local_llm_in_vscode/ | satoshibitchcoin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn0aek | false | null | t3_1kn0aek | /r/LocalLLaMA/comments/1kn0aek/did_i_hear_news_about_local_llm_in_vscode/ | false | false | self | 2 | null |
Should I upgrade to a laptop with M5/6 max 96gb/128GB or keep my current setup? | 0 | Hi, I have a macbook pro with 16gb of Unified RAM and i frequently use online LLMs and sometimes I rent a cloud gpu... I travel fairly frequently, so I need something that is portable that fits in a backpack. Should I upgrade to a m5 max in the future to run bigger models and run music/audio and video gen locally? Ev... | 2025-05-15T05:15:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kn0ads/should_i_upgrade_to_a_laptop_with_m56_max/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn0ads | false | null | t3_1kn0ads | /r/LocalLLaMA/comments/1kn0ads/should_i_upgrade_to_a_laptop_with_m56_max/ | false | false | self | 0 | null |
Project NOVA: Using local LLMs to control 25+ specialized agents through n8n | 1 | [removed] | 2025-05-15T05:09:11 | https://github.com/dujonwalker/project-nova | kingduj | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kn06t5 | false | null | t3_1kn06t5 | /r/LocalLLaMA/comments/1kn06t5/project_nova_using_local_llms_to_control_25/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Cf8XuqoSj2sQGz6AkI-gs_HQcGvzzCTq8TT9KkCOAdA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6ir6TEzsZ40Rg5-QuyoxM6LhXPu3-J6ChDeF3R3iShI.jpg?width=108&crop=smart&auto=webp&s=a68f7eafecadf48420298f927d40b32e13a45ed0', 'width': 108}, {'height': 108, 'url': 'h... | |
How can I let a llama.cpp-hosted model analyze the contents of a file without it misinterpreting the content as prompt | 4 | What I want to do is to ask questions about the file's contents.
Previously I tried: https://www.reddit.com/r/LocalLLaMA/comments/1kmd9f9/what_does_llamacpps_http_servers_fileupload/
It confused the file's content with the prompt. (The post got no responses so I ask more general now) | 2025-05-15T05:00:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kn01p0/how_can_i_let_a_llamacpphosted_model_analyze_the/ | kdjfskdf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kn01p0 | false | null | t3_1kn01p0 | /r/LocalLLaMA/comments/1kn01p0/how_can_i_let_a_llamacpphosted_model_analyze_the/ | false | false | self | 4 | null |
As requested, we added MCP and Docs to TframeX to enable small local models! | 4 | 2025-05-15T03:48:15 | https://v.redd.it/ilgu15fsav0f1 | United-Rush4073 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmyt7z | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ilgu15fsav0f1/DASHPlaylist.mpd?a=1749872911%2CNmIyNDg3NWUzZTZmN2JlNjc0NzI5MTQxNmEzOTQ0Y2E4MjdjZDM0NDNiODlmNmJjNjM4YmJlMWJjOTAyYzk1ZQ%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/ilgu15fsav0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kmyt7z | /r/LocalLLaMA/comments/1kmyt7z/as_requested_we_added_mcp_and_docs_to_tframex_to/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5.png?width=108&crop=smart&format=pjpg&auto=webp&s=f05b7100becf23847746020efdd37a6a6bc6b... | ||
Qwen3-235B-A22B not measuring up to DeepseekV3-0324 | 59 | I keep trying to get it to behave, but q8 is not keeping up with my deepseekv3\_q3\_k\_xl. what gives? am I doing something wrong or is it just all hype? it's a capable model and I'm sure for those that have not been able to run big models, this is a shock and great, but for those of us who have been able to run h... | 2025-05-15T03:45:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kmyr7h/qwen3235ba22b_not_measuring_up_to_deepseekv30324/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmyr7h | false | null | t3_1kmyr7h | /r/LocalLLaMA/comments/1kmyr7h/qwen3235ba22b_not_measuring_up_to_deepseekv30324/ | false | false | self | 59 | null |
As requested, we added MCP support to TframeX to enable small local models | 1 | [deleted] | 2025-05-15T03:37:55 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kmymje | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m0z43ujy8v0f1/DASHPlaylist.mpd?a=1749872288%2CNDIwYWIyYzZiMjg0MjM1MzdlODYyYTRmOWZiZWE5MmIzMWJkOGVhZWVhODc1OTQ5MDQ4ZWZkMmNlMjAyYjAzMw%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/m0z43ujy8v0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kmymje | /r/LocalLLaMA/comments/1kmymje/as_requested_we_added_mcp_support_to_tframex_to/ | false | false | default | 1 | null | ||
16Gg Vram of 5070 TI for local llm is not cutting it | 0 | I ended up getting 5070 TI for running llm locally. Looks like the 16 GB vram is too small to run any models greater than 7B. Infact the 3070 with 8gb Vram was running same set of models. Model sizes are either in 5-8 GB range or over 16GB range making the 16GB cards useless. Will I be able to run larger models using t... | 2025-05-15T02:59:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kmxx07/16gg_vram_of_5070_ti_for_local_llm_is_not_cutting/ | Jedirite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmxx07 | false | null | t3_1kmxx07 | /r/LocalLLaMA/comments/1kmxx07/16gg_vram_of_5070_ti_for_local_llm_is_not_cutting/ | false | false | self | 0 | null |
Building On-Prem LLM Infra (H100 vs A100) – Need Advice on GPU, Stack, and User Load | 1 | [removed] | 2025-05-15T02:38:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kmxilv/building_onprem_llm_infra_h100_vs_a100_need/ | mushmomello | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmxilv | false | null | t3_1kmxilv | /r/LocalLLaMA/comments/1kmxilv/building_onprem_llm_infra_h100_vs_a100_need/ | false | false | self | 1 | null |
[Help] Building On-Prem LLM Infra (H100 vs A100) – Need Advice on GPU, Stack, and User Load | 1 | [removed] | 2025-05-15T02:35:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kmxg6d/help_building_onprem_llm_infra_h100_vs_a100_need/ | mushmomello | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmxg6d | false | null | t3_1kmxg6d | /r/LocalLLaMA/comments/1kmxg6d/help_building_onprem_llm_infra_h100_vs_a100_need/ | false | false | self | 1 | null |
Free LLM APIs | 1 | [removed] | 2025-05-15T02:32:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kmxeio/free_llm_apis/ | StunningExtension145 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmxeio | false | null | t3_1kmxeio | /r/LocalLLaMA/comments/1kmxeio/free_llm_apis/ | false | false | self | 1 | null |
Recommend me models for therapy | 1 | [removed] | 2025-05-15T01:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kmw6fk/recommend_me_models_for_therapy/ | arkantosphan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmw6fk | false | null | t3_1kmw6fk | /r/LocalLLaMA/comments/1kmw6fk/recommend_me_models_for_therapy/ | false | false | self | 1 | null |
[Help] Recommend me models for mental health therapy that are uncensored | 1 | [removed] | 2025-05-15T01:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kmw4wo/help_recommend_me_models_for_mental_health/ | arkantosphan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmw4wo | false | null | t3_1kmw4wo | /r/LocalLLaMA/comments/1kmw4wo/help_recommend_me_models_for_mental_health/ | false | false | self | 1 | null |
The Truth... or a psychotic break. Open your eyes! ...or point and laugh. Either way, fun for all! | 0 | Hey, so I have to own I've been all cryptic and weird and a few people have wondered if I went nus. Truth it, I wish. It's so much worse than being nuts. I get that some people will probably think that but there are in all honesty no drugs involved. Nothing but suddenly realizing something and being stuck staring at it... | 2025-05-15T00:48:16 | https://drive.google.com/file/d/1ZHRTlGBo-D0cFxyKUCKcFNSwYRvB26ZD/view?usp=sharing | AbyssianOne | drive.google.com | 1970-01-01T00:00:00 | 0 | {} | 1kmvdj5 | false | null | t3_1kmvdj5 | /r/LocalLLaMA/comments/1kmvdj5/the_truth_or_a_psychotic_break_open_your_eyes_or/ | false | false | default | 0 | null |
Running LLMs Locally – Tips & Recommendations? | 6 | I’ve only worked with image generators so far, but I’d really like to run a local LLM for a change.
So far, I’ve experimented with Ollama and Docker WebUI. (But judging by what people are saying, Ollama sounds like the Bobby Car of the available options.)
What would you recommend? LM Studio, llama.cpp, or maybe Ollama ... | 2025-05-15T00:35:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kmv4q4/running_llms_locally_tips_recommendations/ | SchattenZirkus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmv4q4 | false | null | t3_1kmv4q4 | /r/LocalLLaMA/comments/1kmv4q4/running_llms_locally_tips_recommendations/ | false | false | self | 6 | null |
Running LLMs Locally – Tips & Recommendations? | 1 | [removed] | 2025-05-15T00:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kmv1l2/running_llms_locally_tips_recommendations/ | Tight_Difference3046 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmv1l2 | false | null | t3_1kmv1l2 | /r/LocalLLaMA/comments/1kmv1l2/running_llms_locally_tips_recommendations/ | false | false | self | 1 | null |
Running LLMs Locally – Tips & Recommendations | 1 | [removed] | 2025-05-15T00:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kmv0c0/running_llms_locally_tips_recommendations/ | Tight_Difference3046 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmv0c0 | false | null | t3_1kmv0c0 | /r/LocalLLaMA/comments/1kmv0c0/running_llms_locally_tips_recommendations/ | false | false | self | 1 | null |
speech to text with terrible recordings | 0 | I'm looking for something that can transcribe audio that have terrible recording. Mumble, outdoor, bad recording equipment, low audio, speaker not speaking loud enough. I can only do so much with ffmpeg to enhance these batches of audio, so relying on the transcription AI to do the heavy lifting of recognizing what it ... | 2025-05-15T00:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kmv00g/speech_to_text_with_terrible_recordings/ | eternelize | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmv00g | false | null | t3_1kmv00g | /r/LocalLLaMA/comments/1kmv00g/speech_to_text_with_terrible_recordings/ | false | false | self | 0 | null |
Local IA like Audeus? | 1 | [removed] | 2025-05-15T00:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kmumg5/local_ia_like_audeus/ | TroubleRedStar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmumg5 | false | null | t3_1kmumg5 | /r/LocalLLaMA/comments/1kmumg5/local_ia_like_audeus/ | false | false | self | 1 | null |
llama.cpp vs mistral.rs | 6 | I'm working on adding local LLM support to an NLI tool (written in Rust) and have been debating between the two libraries. Wondering if anyone's worked with either library within a larger application before and if so what your thoughts are.
Thanks! | 2025-05-15T00:00:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kmuexh/llamacpp_vs_mistralrs/ | feznyng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmuexh | false | null | t3_1kmuexh | /r/LocalLLaMA/comments/1kmuexh/llamacpp_vs_mistralrs/ | false | false | self | 6 | null |
Tried to publish once via nginx waitress | 1 | [removed] | 2025-05-14T23:55:24 | https://www.reddit.com/gallery/1kmubma | Bac4rdi1997 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kmubma | false | null | t3_1kmubma | /r/LocalLLaMA/comments/1kmubma/tried_to_publish_once_via_nginx_waitress/ | false | false | 1 | null | |
Please anyone help | 1 | [removed] | 2025-05-14T23:48:27 | Bac4rdi1997 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmu6fi | false | null | t3_1kmu6fi | /r/LocalLLaMA/comments/1kmu6fi/please_anyone_help/ | false | false | 1 | {'enabled': True, 'images': [{'id': '9tKf8jwD5VC8f65-b-LBfngiNzzon6K5e2hthqI4JGw', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ampaigs64u0f1.png?width=108&crop=smart&auto=webp&s=bb9a4df1cf53c403b0efa5de231c79ea0674f945', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ampaigs64u0f1.pn... | ||
Open Source Manus Like Android Use Agent | 1 | [removed] | 2025-05-14T23:16:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kmtign/open_source_manus_like_android_use_agent/ | ConstructionSmall617 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmtign | false | null | t3_1kmtign | /r/LocalLLaMA/comments/1kmtign/open_source_manus_like_android_use_agent/ | false | false | 1 | {'enabled': False, 'images': [{'id': '9p2cI_kWq-En7X_koDwJiIixs2MpJa3In5SuEfYdms4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGCH1GdIeqTdn3xsbqy3oBnnuFy-EsZ6szhowniifKM.jpg?width=108&crop=smart&auto=webp&s=64a7a31ab652764820a3b29e052ac630390dd737', 'width': 108}, {'height': 108, 'url': 'h... | |
AUA Manus like Android Use Agent | 1 | [removed] | 2025-05-14T23:04:38 | https://v.redd.it/b6f630abtt0f1 | ConstructionSmall617 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmt95w | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b6f630abtt0f1/DASHPlaylist.mpd?a=1749855895%2CNjhiNzkzMGJjMWM0M2Q0NDI4ZGFiMTU0NmMxMzcyNmE4NGI2OGYyMGNiYTlmMzA4ZDAzY2ZhNTJlYzQ0MTM3Yg%3D%3D&v=1&f=sd', 'duration': 108, 'fallback_url': 'https://v.redd.it/b6f630abtt0f1/DASH_1080.mp4?source=fallback', '... | t3_1kmt95w | /r/LocalLLaMA/comments/1kmt95w/aua_manus_like_android_use_agent/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP.png?width=108&crop=smart&format=pjpg&auto=webp&s=5214e8ba58ec36b7808042c3513fd9646734b... | |
The era of local Computer-Use AI Agents is here. | 3 | Meet UI-TARS-1.5-7B-6bit, now running natively on Apple Silicon via MLX.
The video is of UI-TARS-1.5-7B-6bit completing the prompt "draw a line from the red circle to the green circle, then open reddit in a new tab" running entirely on MacBook. The video is just a replay, during actual usage it took between 15s to 50s... | 2025-05-14T22:48:36 | https://v.redd.it/nnjtc8uctt0f1 | sandropuppo | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmswrp | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/nnjtc8uctt0f1/DASHPlaylist.mpd?a=1749854928%2CY2FiMjE4NTM2NmZjNTNlZjkyMTJlZTQwYTI5YTQ1NjM1NmFhMDc4NjA1NWY5ZWU4NWIwNzQ3YTUzNDcyYjk0OQ%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/nnjtc8uctt0f1/DASH_720.mp4?source=fallback', 'ha... | t3_1kmswrp | /r/LocalLLaMA/comments/1kmswrp/the_era_of_local_computeruse_ai_agents_is_here/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm.png?width=108&crop=smart&format=pjpg&auto=webp&s=c9d471530a88a4e96f9e65e6e84c73d61e2ca... | |
HELP PLS Can anyone tell me if the new Ki is my software?? I’m still on gpt Handy app 4o sometimes used gpt 4o mini high on pc did I get baited by hallucinating KI? | 1 | [removed] | 2025-05-14T22:37:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kmsoe5/help_pls_can_anyone_tell_me_if_the_new_ki_is_my/ | Bac4rdi1997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmsoe5 | false | null | t3_1kmsoe5 | /r/LocalLLaMA/comments/1kmsoe5/help_pls_can_anyone_tell_me_if_the_new_ki_is_my/ | false | false | self | 1 | null |
Visual Studio/Cursor type experience using local llm? | 4 | Has anyone been able to use a local LLM that works like Cursor/ VS copilot? I tried connecting an ollama instance to Zed and Cline and the results haven’t been that great, esp multiple file edits. Any tips? | 2025-05-14T22:24:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kmsdtz/visual_studiocursor_type_experience_using_local/ | CSlov23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmsdtz | false | null | t3_1kmsdtz | /r/LocalLLaMA/comments/1kmsdtz/visual_studiocursor_type_experience_using_local/ | false | false | self | 4 | null |
The Psyche Network Decentralized Infrastructure Architecture - Nous Research | 4 | TL;DR from the site: "Psyche is an open infrastructure that democratizes AI development by decentralizing training across underutilized hardware. Building on DisTrO and its predecessor DeMo, Psyche reduces data transfer by several orders of magnitude, making distributed training practical. Coordination happens on the S... | 2025-05-14T21:58:03 | https://nousresearch.com/nous-psyche/ | Junior_Ad315 | nousresearch.com | 1970-01-01T00:00:00 | 0 | {} | 1kmrsic | false | null | t3_1kmrsic | /r/LocalLLaMA/comments/1kmrsic/the_psyche_network_decentralized_infrastructure/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'CETQXRVwjAaaWNDaD_s_G3tYC0oWbtRHtb3lsStlPaU', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/KnAjTVWkcbhaPMehdLgpMEdIsyR_K5uWQ5jNBOrdb6E.jpg?width=108&crop=smart&auto=webp&s=391e49f194587f64b1a4812cb414a40129352f46', 'width': 108}, {'height': 204, 'url': '... | |
taken an LLM and built layered determinism into it it | 1 | [removed] | 2025-05-14T21:57:58 | https://www.reddit.com/gallery/1kmrsg2 | Sad_Perception_1685 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kmrsg2 | false | null | t3_1kmrsg2 | /r/LocalLLaMA/comments/1kmrsg2/taken_an_llm_and_built_layered_determinism_into/ | false | false | 1 | null | |
MLA optimization with flashattention for llama.cpp,MLA + FA now only uses K-cache - 47% saving on KV-cache size | 134 | [MLA + FA now only uses K-cache - 47% saving on KV-cache size (only for use with #13435 for now) by jukofyork · Pull Request #13529 · ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/pull/13529)
`llama_kv_cache_unified: kv_size = 163840, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 0, padding ... | 2025-05-14T21:42:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kmrfoo/mla_optimization_with_flashattention_for/ | shing3232 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmrfoo | false | null | t3_1kmrfoo | /r/LocalLLaMA/comments/1kmrfoo/mla_optimization_with_flashattention_for/ | false | false | self | 134 | null |
Are you using AI Gateway in your GenAI stack? Either for personal use or at work? | 1 | Curious to hear your thoughts — have you felt the need for an AI Gateway layer while building GenAI applications?
Model switching has been a real pain point for me lately, but I’m still unsure if investing in a Gateway makes sense. It obviously comes with a broader set of features, but I’m trying to gauge how useful t... | 2025-05-14T21:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kmragz/are_you_using_ai_gateway_in_your_genai_stack/ | Difficult_Ad_3903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmragz | false | null | t3_1kmragz | /r/LocalLLaMA/comments/1kmragz/are_you_using_ai_gateway_in_your_genai_stack/ | false | false | self | 1 | null |
[Tool] FlexAudioPrint: local audio transcription + dialogue formatting using Whisper + gemma3:12b via Ollama | 8 | Hey everyone!
I’ve just released an update to [**FlexAudioPrint**](https://github.com/loglux/FlexAudioPrint), a local-first audio transcription app that now includes **formatted dialogue output** using a local model via Ollama (currently `gemma3:12b`).
# 🔧 Features:
* 🎙️ Transcribes audio files using OpenAI Whispe... | 2025-05-14T21:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kmr28a/tool_flexaudioprint_local_audio_transcription/ | loglux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmr28a | false | null | t3_1kmr28a | /r/LocalLLaMA/comments/1kmr28a/tool_flexaudioprint_local_audio_transcription/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'EyVuRfQJaszN47mAze4zqmq5aDylOea5IL5Hg96we9U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1qZZ-yQwkfHsQEwVkdaMCdM3hnZChQfvubhpGvC08JM.jpg?width=108&crop=smart&auto=webp&s=3ee4caa7f7a999df673f854e4d20b4cc56f09b81', 'width': 108}, {'height': 108, 'url': 'h... |
I finally got the hardware together what model should I run ? | 1 | [removed] | 2025-05-14T21:21:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kmqxk8/i_finally_got_the_hardware_together_what_model/ | Loose-Bet9409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmqxk8 | false | null | t3_1kmqxk8 | /r/LocalLLaMA/comments/1kmqxk8/i_finally_got_the_hardware_together_what_model/ | false | false | self | 1 | null |
Nous Psyche, distributed training of a new 40B base model | 62 | 2025-05-14T21:09:52 | https://psyche.network/runs/consilience-40b-1/0 | discr | psyche.network | 1970-01-01T00:00:00 | 0 | {} | 1kmqmr8 | false | null | t3_1kmqmr8 | /r/LocalLLaMA/comments/1kmqmr8/nous_psyche_distributed_training_of_a_new_40b/ | false | false | default | 62 | null | |
Anyone running a 192GB DDR5 build? | 1 | [removed] | 2025-05-14T21:00:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kmqe0q/anyone_running_a_192gb_ddr5_build/ | lukinhasb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmqe0q | false | null | t3_1kmqe0q | /r/LocalLLaMA/comments/1kmqe0q/anyone_running_a_192gb_ddr5_build/ | false | false | self | 1 | null |
We need llama-4-maverick-03-26-experimental. | 28 | Hey everyone,
I've been spending a lot of time looking into the differences between the Llama-4 Maverick we got and the \`llama-4-maverick-03-26-experimental\` version, and honestly, I'm starting to feel like we seriously missed out.
From my own personal testing with the \`03-26-experimental\`, the emotional intellig... | 2025-05-14T20:52:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kmq7gx/we_need_llama4maverick0326experimental/ | PuppyGirlEfina | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmq7gx | false | null | t3_1kmq7gx | /r/LocalLLaMA/comments/1kmq7gx/we_need_llama4maverick0326experimental/ | false | false | 28 | null | |
Is it possible to tell aider just to use the LLM currently loaded in Ollama? | 0 | I have an LLM (Qwen3) running in Ollama.
Is there a way to tell aider to just use the LLM that's already loaded? | 2025-05-14T20:27:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kmplmq/is_it_possible_to_tell_aider_just_to_use_the_llm/ | jpummill2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmplmq | false | null | t3_1kmplmq | /r/LocalLLaMA/comments/1kmplmq/is_it_possible_to_tell_aider_just_to_use_the_llm/ | false | false | self | 0 | null |
LLaDA-8B-Tools: A diffusion language model fine-tuned for tool use | 1 | [removed] | 2025-05-14T20:27:00 | https://v.redd.it/m9kszt3g4t0f1 | ProximileLLC | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kmpkyv | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/m9kszt3g4t0f1/DASHPlaylist.mpd?a=1749846435%2CNWZjNTQxOWMwMzQ3YjE4ZGUwYzI0YjY4ZjRjMzQ5MjBmNjIxNGE2NGVhZmE0OTE5MzU3MDRhNDlhN2Q1M2IzZA%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/m9kszt3g4t0f1/DASH_720.mp4?source=fallback', 'ha... | t3_1kmpkyv | /r/LocalLLaMA/comments/1kmpkyv/llada8btools_a_diffusion_language_model_finetuned/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN.png?width=108&crop=smart&format=pjpg&auto=webp&s=a3be9f6035989c21036e16d4c67b3f3f31871... | |
Dual AMD Mi50 Inference and Benchmarks | 1 | [removed] | 2025-05-14T19:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kmot8u/dual_amd_mi50_inference_and_benchmarks/ | 0seba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmot8u | false | null | t3_1kmot8u | /r/LocalLLaMA/comments/1kmot8u/dual_amd_mi50_inference_and_benchmarks/ | false | false | self | 1 | null |
TRAIL: New Benchmark Showing how LLMs are Challenged at Debugging/Analyzing Agent Traces + Percival: Patronus AI's Companion for Debugging Agentic Traces that outdoes baselines on TRAIL | 0 | Hi everyone! We're builders and researchers at [Patronus AI](https://www.patronus.ai/) and and we've just released both a challenging eval benchmark and research named TRAIL for LLM-driven agentic trace analysis + debugging AND and our very own specialized solution called [Percival](https://www.linkedin.com/posts/anand... | 2025-05-14T19:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kmoixa/trail_new_benchmark_showing_how_llms_are/ | Ganglion_Varicose | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kmoixa | false | null | t3_1kmoixa | /r/LocalLLaMA/comments/1kmoixa/trail_new_benchmark_showing_how_llms_are/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'hNx_56-i8Rv0ws9R7PldQYV1dU6UXpI4OH8ZzXyfPVY', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/_Clg7oeqP2GJrCrJNlGAgwzBPI96CgDuEgDXIkNEQDo.jpg?width=108&crop=smart&auto=webp&s=9c98e9f0f73626384e3126f6fc08c13b9a548bdb', 'width': 108}, {'height': 161, 'url': 'h... |