| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:13:28 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m4wg | false | null | t3_1l1m4wg | /r/LocalLLaMA/comments/1l1m4wg/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null | ||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [deleted] | 2025-06-02T16:09:48 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m1dj | false | null | t3_1l1m1dj | /r/LocalLLaMA/comments/1l1m1dj/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null | ||
PlayDiffusion - PlayAI's Latest Diffusion-based Speech Editing Model | 1 | [removed] | 2025-06-02T16:09:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m0uj | false | null | t3_1l1m0uj | /r/LocalLLaMA/comments/1l1m0uj/playdiffusion_playais_latest_diffusionbased/ | true | false | default | 1 | null | ||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:08:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m0ai | false | null | t3_1l1m0ai | /r/LocalLLaMA/comments/1l1m0ai/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null | ||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:08:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1lzxb | false | null | t3_1l1lzxb | /r/LocalLLaMA/comments/1l1lzxb/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null | ||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:07:36 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1lzbm | false | null | t3_1l1lzbm | /r/LocalLLaMA/comments/1l1lzbm/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null | ||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 3 | PlayAI open-sourced a new Speech Editing model today that allows for precise & clean speech editing. A huge step up from traditional autoregressive models that aren't designed for this task. | 2025-06-02T16:03:46 | https://huggingface.co/spaces/PlayHT/PlayDiffusion | SandSalt8370 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l1lvri | false | null | t3_1l1lvri | /r/LocalLLaMA/comments/1l1lvri/playais_latest_diffusionbased_speech_editing/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'CqblY3Zg0YyBkT7WL4m7rTHmTkmHkQsN6Ve3JKLmzUk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=108&crop=smart&auto=webp&s=e9fb57a5c0e50ad13a69c186f8b3a8edb818eacc', 'width': 108}, {'height': 116, 'url': 'h... | |
What's a general model 14b or less that genuinely impresses you? | 31 | I'm looking for a general purpose model that is exceptional, outstanding, can do a wide array of tasks especially administrative, doing things like preparing me PowerPoint slide and the text that should be put into documents and just taking notes on stuff, converting ugly messy unformatted notes into something tangible... | 2025-06-02T16:02:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l1luwz/whats_a_general_model_14b_or_less_that_genuinely/ | intimate_sniffer69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1luwz | false | null | t3_1l1luwz | /r/LocalLLaMA/comments/1l1luwz/whats_a_general_model_14b_or_less_that_genuinely/ | false | false | self | 31 | null |
I Built a Better Gumloop in 48 Hours with Vibe Coding | 1 | Most no-code agent builders (Gumloop) are just workflow automation with LLM calls. They're not built for actual agents that need to:<br>\* Make dynamic routing decisions<br>\* Handle complex tool orchestration<br>\* Support ANY model (not just OpenAI)<br>\*\*Agent Framework:\*\* LangGraph (JS) because agents ARE graphs - nodes... | 2025-06-02T16:02:26 | https://v.redd.it/vlmx362kcj4f1 | goddamnit_1 | /r/LocalLLaMA/comments/1l1luhf/i_built_a_better_gumloop_in_48_hours_with_vibe/ | 1970-01-01T00:00:00 | 0 | {} | 1l1luhf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vlmx362kcj4f1/DASHPlaylist.mpd?a=1751601753%2CNWNiYzM5ZGQwYWM2MDNiMzM1NmYwMjJlOTdkYWQ1ZmM2ZWE2MzI0NWMxZjNhMDExYzcyNGIwZDYyNGU0MjE3Yg%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/vlmx362kcj4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l1luhf | /r/LocalLLaMA/comments/1l1luhf/i_built_a_better_gumloop_in_48_hours_with_vibe/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=108&crop=smart&format=pjpg&auto=webp&s=58d91aed6e6b8fda287b40f34c042a8db13e0... | |
PlayAI's Latest Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:01:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lt8r/playais_latest_speech_editing_model_playdiffusion/ | SandSalt8370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lt8r | false | null | t3_1l1lt8r | /r/LocalLLaMA/comments/1l1lt8r/playais_latest_speech_editing_model_playdiffusion/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rfZh-VjWw-fpYgDNs403ia4KfWbi-8eAXIVDDmzS5-8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=108&crop=smart&auto=webp&s=6fd21e84a4656b2763783d6fafcc09e46dee9870', 'width': 108}, {'height': 108, 'url': 'h... |
PlayAI's Latest Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T15:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lrcm/playais_latest_speech_editing_model_playdiffusion/ | SandSalt8370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lrcm | false | null | t3_1l1lrcm | /r/LocalLLaMA/comments/1l1lrcm/playais_latest_speech_editing_model_playdiffusion/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'rfZh-VjWw-fpYgDNs403ia4KfWbi-8eAXIVDDmzS5-8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=108&crop=smart&auto=webp&s=6fd21e84a4656b2763783d6fafcc09e46dee9870', 'width': 108}, {'height': 108, 'url': 'h... | |
Which LLM is best at understanding information in spreadsheets? | 3 | I have been having trouble finding an LLM that can properly process spreadsheet data. I've tried Gemma 8b and the latest deepseek. Yet both struggle to even do simple matching. I haven't tried Gemma 27b yet but I'm just not sure what I'm missing here.<br>I'm running on a 4090 and i9 with 64gb. | 2025-06-02T15:58:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lqdm/which_llm_is_best_at_understanding_information_in/ | ColoradoCyclist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lqdm | false | null | t3_1l1lqdm | /r/LocalLLaMA/comments/1l1lqdm/which_llm_is_best_at_understanding_information_in/ | false | false | self | 3 | null |
Tips with double 3090 setup | 0 | I'm planning on buying a second 3090 to expand the possibilities of what i can generate, it's going to be around 500-600 euros.<br>I have a RYZEN 5 5600x which I have been delaying upgrading, but might do so as well but because of gaming mostly. Have 32GB of RAM. And the motherboard is a B550-GAMING-EDGE-WIFI which will ... | 2025-06-02T15:52:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lksz/tips_with_double_3090_setup/ | Lonhanha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lksz | false | null | t3_1l1lksz | /r/LocalLLaMA/comments/1l1lksz/tips_with_double_3090_setup/ | false | false | self | 0 | null |
Multiturn causes additional output Quality? | 1 | So recently while just testing some things, I tried to change how I process the user assistant chat messages.<br>Instead of having alternating user and assistant messages be sent, I passed the entire chat as raw text with a user: and assistant: prefixed in the user message.<br>System prompt was kept the same.<br>The post p... | 2025-06-02T15:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lgvi/multiturn_causes_additional_output_quality/ | Federal_Order4324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lgvi | false | null | t3_1l1lgvi | /r/LocalLLaMA/comments/1l1lgvi/multiturn_causes_additional_output_quality/ | false | false | self | 1 | null |
Multimodal Monday #10: Unified Frameworks, Specialized Efficiency | 1 | [removed] | 2025-06-02T15:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l1le0w/multimodal_monday_10_unified_frameworks/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1le0w | false | null | t3_1l1le0w | /r/LocalLLaMA/comments/1l1le0w/multimodal_monday_10_unified_frameworks/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zn7lJoCz71Sa-nFC6TpZPBPGqutrbCLUZvHJf1J43dk', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/9KCIzmMioY39tZ6CcD1Xsvpr2CbhSDfL1UxK8Ldw7sk.jpg?width=108&crop=smart&auto=webp&s=8b6644abcdf07a87206a196aa9d01ee52c160fe6', 'width': 108}, {'height': 144, 'url': 'h... |
New to LLMs — Where Do I Even Start? (Using LM Studio + RTX 4050) | 1 | [removed] | 2025-06-02T14:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1k0fl/new_to_llms_where_do_i_even_start_using_lm_studio/ | penumbrae_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1k0fl | false | null | t3_1l1k0fl | /r/LocalLLaMA/comments/1l1k0fl/new_to_llms_where_do_i_even_start_using_lm_studio/ | false | false | self | 1 | null |
Smallest LLM you tried that's legit | 176 | what's the smallest LLM you've used that gives proper text, not just random gibberish?<br>I've tried qwen2.5:0.5B.it works pretty well for me, actually quite good | 2025-06-02T14:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l1jyld/smallest_llm_you_tried_thats_legit/ | Remarkable-Law9287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1jyld | false | null | t3_1l1jyld | /r/LocalLLaMA/comments/1l1jyld/smallest_llm_you_tried_thats_legit/ | false | false | self | 176 | null |
Is Bandwidth of Oculink port enough to inference local LLMs? | 1 | RTX 3090 has bandwidth of 936.2 GB/s, if I connect the 3090 to a mini pc with Oculink port, Will the bandwidth be limited to 64Gbps ? | 2025-06-02T14:41:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l1jsmq/is_bandwidth_of_oculink_port_enough_to_inference/ | Relative_Rope4234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1jsmq | false | null | t3_1l1jsmq | /r/LocalLLaMA/comments/1l1jsmq/is_bandwidth_of_oculink_port_enough_to_inference/ | false | false | self | 1 | null |
Model Tuning and Re-Tuning Problem. | 1 | [removed] | 2025-06-02T14:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l1jqc6/model_tuning_and_retuning_problem/ | Desperate_System3058 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1jqc6 | false | null | t3_1l1jqc6 | /r/LocalLLaMA/comments/1l1jqc6/model_tuning_and_retuning_problem/ | false | false | self | 1 | null |
What Should Void Editor Provider Settings Be For llama.cpp (OpenAI compatible)? | 1 | [removed] | 2025-06-02T14:37:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l1joxl/what_should_void_editor_provider_settings_be_for/ | je11eebean | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1joxl | false | null | t3_1l1joxl | /r/LocalLLaMA/comments/1l1joxl/what_should_void_editor_provider_settings_be_for/ | false | false | self | 1 | null |
R1-0528 won't stop thinking | 1 | If anyone can help with this issue, or provide some things to keep in mind when setting up R1-0528, that would be appreciated. It can handle small requests just fine, like ask it for a recipe and it can give you one, albeit with something weird here or there, but it gets trapped in a circuitous thought pattern when I g... | 2025-06-02T14:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l1jla0/r10528_wont_stop_thinking/ | madman24k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1jla0 | false | null | t3_1l1jla0 | /r/LocalLLaMA/comments/1l1jla0/r10528_wont_stop_thinking/ | false | false | self | 1 | null |
NVIDIA RTX PRO 6000 Unlocks GB202's Full Performance In Gaming: Beats GeForce RTX 5090 Convincingly | 80 | 2025-06-02T14:20:16 | https://wccftech.com/nvidia-rtx-pro-6000-beats-geforce-rtx-5090/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1l1j94p | false | null | t3_1l1j94p | /r/LocalLLaMA/comments/1l1j94p/nvidia_rtx_pro_6000_unlocks_gb202s_full/ | false | false | 80 | {'enabled': False, 'images': [{'id': 'JO87FqpgwRig4JJap9mmFU_C_QcRKIKsV0AaCsC1zCI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/CZ499DlxtUi8-a0hH-i2iuvuqGABLEdCAAN2p00rlA0.jpg?width=108&crop=smart&auto=webp&s=485c236f1d332f6b0fa8a2e9bfe1a2f3878d14fe', 'width': 108}, {'height': 121, 'url': 'h... | ||
Drift Audit | 1 | [removed] | 2025-06-02T14:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1j1i7/drift_audit/ | ShipOk3732 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1j1i7 | false | null | t3_1l1j1i7 | /r/LocalLLaMA/comments/1l1j1i7/drift_audit/ | false | false | self | 1 | null |
Enterprise-ready solution for local LLM | 1 | [removed] | 2025-06-02T13:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l1imwu/enterpriseready_solution_for_local_llm/ | Soft_Protection2836 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1imwu | false | null | t3_1l1imwu | /r/LocalLLaMA/comments/1l1imwu/enterpriseready_solution_for_local_llm/ | false | false | self | 1 | null |
MedGemma on Android | 5 | Any way to use the multimodel capabilities of MedGemma on android? Tried with both Layla and Crosstalk apps but the model cant read images using them | 2025-06-02T13:54:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l1imus/medgemma_on_android/ | caiporadomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1imus | false | null | t3_1l1imus | /r/LocalLLaMA/comments/1l1imus/medgemma_on_android/ | false | false | self | 5 | null |
The duality of man | 1 | 2025-06-02T13:25:59 | poormail | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1hziq | false | null | t3_1l1hziq | /r/LocalLLaMA/comments/1l1hziq/the_duality_of_man/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'wKEfZcL_Y-hZLOBAoUjcy5ERUrVD1j6VSbqEKHxtotg', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png?width=108&crop=smart&auto=webp&s=01ce335493bdb13ee75c5553bf1db5496c30a863', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png... | |||
Agent controlling iPhone using OpenAI API | 1 | Seems like it Uses Xcode UI tests + accessibility tree to look into apps, and performs swipes, taps, to get things done. So technically it might be possible with 3n as it has vision to run it locally.<br>[https://github.com/rounak/PhoneAgent](https://github.com/rounak/PhoneAgent) | 2025-06-02T13:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l1hyns/agent_controlling_iphone_using_openai_api/ | Predatedtomcat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1hyns | false | null | t3_1l1hyns | /r/LocalLLaMA/comments/1l1hyns/agent_controlling_iphone_using_openai_api/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zDyNdvXFGpItNCUiMFgkCUffHN_KZl5cvnnSqBXwo9M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zIj_eL1O8iHrq6FaXg_BzARBvQZvfpNKtHpl5GTKXmY.jpg?width=108&crop=smart&auto=webp&s=ed96be4ea713574777e10afd9fbd0dbe39f68b76', 'width': 108}, {'height': 108, 'url': 'h... |
Tensor offload hunt for Qwen3 | 1 | [removed] | 2025-06-02T13:18:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ht87/tensor_offload_hunt_for_qwen3/ | SimilarWarthog8393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ht87 | false | null | t3_1l1ht87 | /r/LocalLLaMA/comments/1l1ht87/tensor_offload_hunt_for_qwen3/ | false | false | self | 1 | null |
Why is Qwen so neurotic | 1 | 2025-06-02T13:07:11 | nat2r | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1hk8d | false | null | t3_1l1hk8d | /r/LocalLLaMA/comments/1l1hk8d/why_is_qwen_so_neurotic/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'airmeXL9rUqFmqQunaV-JWtmNZAdBT0XJlrbNc52X-c', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/v9fg9l49ji4f1.png?width=108&crop=smart&auto=webp&s=9f061924c02b00aad6dc14755d71308864817d42', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/v9fg9l49ji4f1.png... | |||
What are the best open source llms to be used for structured output? | 1 | [removed] | 2025-06-02T12:53:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l1h94o/what_are_the_best_open_source_llms_to_be_used_for/ | mrpeakyblinder2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1h94o | false | null | t3_1l1h94o | /r/LocalLLaMA/comments/1l1h94o/what_are_the_best_open_source_llms_to_be_used_for/ | false | false | self | 1 | null |
Best Open source LLMs for tool call / structured output | 0 | I have tried Qwen models (both 2.5 and 3) but it they still get the output wrong. (using vLLM). At least Qwen 32B (thinking and non thinking both) struggle with the output I specify. I have tried guided decoding too but no luck, they sometime work, but it's super unstable in terms out output. Llama 4 is nice but someti... | 2025-06-02T12:48:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l1h5dq/best_open_source_llms_for_tool_call_structured/ | Initial_Track6190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1h5dq | false | null | t3_1l1h5dq | /r/LocalLLaMA/comments/1l1h5dq/best_open_source_llms_for_tool_call_structured/ | false | false | self | 0 | null |
Which Open Source Model I should use for transcribing Audio Calls? Calls are in Indian Languages. I have used Whisper Large v3 and v2 and they are not good enough. | 1 | [removed] | 2025-06-02T12:45:51 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1h3my | false | null | t3_1l1h3my | /r/LocalLLaMA/comments/1l1h3my/which_open_source_model_i_should_use_for/ | false | false | default | 1 | null | ||
What is the best LLM to run locally? | 1 | [removed] | 2025-06-02T12:34:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l1guq3/what_is_the_best_llm_to_run_locally/ | Intelligent_Pop_4973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1guq3 | false | null | t3_1l1guq3 | /r/LocalLLaMA/comments/1l1guq3/what_is_the_best_llm_to_run_locally/ | false | false | self | 1 | null |
Which model will be good to Auto-Tag Inventory in a Dashboard? | 1 | [removed] | 2025-06-02T12:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l1gs9c/which_model_will_be_good_to_autotag_inventory_in/ | tonyblu331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1gs9c | false | null | t3_1l1gs9c | /r/LocalLLaMA/comments/1l1gs9c/which_model_will_be_good_to_autotag_inventory_in/ | false | false | self | 1 | null |
Anyone Used an LLM to Auto-Tag Inventory in a Dashboard? | 1 | [removed] | 2025-06-02T12:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l1gpn9/anyone_used_an_llm_to_autotag_inventory_in_a/ | tonyblu331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1gpn9 | false | null | t3_1l1gpn9 | /r/LocalLLaMA/comments/1l1gpn9/anyone_used_an_llm_to_autotag_inventory_in_a/ | false | false | self | 1 | null |
What is the best model for Void editor with agentic capabilities? | 1 | [removed] | 2025-06-02T12:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l1gno3/what_is_the_best_model_for_void_editor_with/ | PreparationTrue9138 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1gno3 | false | null | t3_1l1gno3 | /r/LocalLLaMA/comments/1l1gno3/what_is_the_best_model_for_void_editor_with/ | false | false | self | 1 | null |
Which Open Source Model I should use for transcribing Audio Calls? Calls are in Indian Languages. I have used Whisper Large v3 and v2 and they are not good enough. | 1 | [removed] | 2025-06-02T12:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1glpp/which_open_source_model_i_should_use_for/ | sportoholic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1glpp | false | null | t3_1l1glpp | /r/LocalLLaMA/comments/1l1glpp/which_open_source_model_i_should_use_for/ | false | false | self | 1 | null |
Anyone tried this? - Self improving AI agents | 56 | Repository for **Darwin Gödel Machine (DGM)**, a novel self-improving system that iteratively modifies its own code (thereby also improving its ability to modify its own codebase) and empirically validates each change using coding benchmarks.<br>[https://github.com/jennyzzt/dgm](https://github.com/jennyzzt/dgm) | 2025-06-02T12:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l1glmq/anyone_tried_this_self_improving_ai_agents/ | davesmith001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1glmq | false | null | t3_1l1glmq | /r/LocalLLaMA/comments/1l1glmq/anyone_tried_this_self_improving_ai_agents/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': 'n2xrbopkMAwkYk9N1AXfdke1pr4pcaC3hC_Z_JrMqo8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Gb7ZKu2HwweiRm_e3UJm4oiqM8aRl9XkGGKSDyLISvg.jpg?width=108&crop=smart&auto=webp&s=39c844529ca20ad8e34ef42add1bb79c5654de3e', 'width': 108}, {'height': 108, 'url': 'h... |
[DEMO] I created a coding agent that can do dynamic, runtime debugging. | 18 | I'm just annoyed with inability of current coding agents creating buggy code and can not fix it. It is said that current LLM have Ph.D level and cannot fix some obvious bugs, just loop around and around and offer the same wrong solution for the bug. At the same time they look very smart, much knowledgeable than me. Why... | 2025-06-02T12:14:32 | https://v.redd.it/qic49y0h8i4f1 | bn_from_zentara | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1ggkp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qic49y0h8i4f1/DASHPlaylist.mpd?a=1751458489%2CNTlhNzhhNGE0MmJhNmFhNzEyZWEwZWY2YmZjOGUwNjViM2Q2YzU1NDA2YzFhODcyNTM3ZGRhZmM1MmMzOTEyNA%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/qic49y0h8i4f1/DASH_1080.mp4?source=fallback', '... | t3_1l1ggkp | /r/LocalLLaMA/comments/1l1ggkp/demo_i_created_a_coding_agent_that_can_do_dynamic/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=108&crop=smart&format=pjpg&auto=webp&s=fb95136b2067dde97d492a926777bc155e42a... | |
training local offline AI Ollama model | 1 | [removed] | 2025-06-02T11:52:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l1g1c0/training_local_offline_ai_ollama_model/ | Prior-Initiative6925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1g1c0 | false | null | t3_1l1g1c0 | /r/LocalLLaMA/comments/1l1g1c0/training_local_offline_ai_ollama_model/ | false | false | self | 1 | null |
Local LLMs and user tasks unrelated to IT | 1 | [removed] | 2025-06-02T11:41:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ftpr/local_llms_and_user_tasks_unrelated_to_it/ | KitchenPlayful3160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ftpr | false | null | t3_1l1ftpr | /r/LocalLLaMA/comments/1l1ftpr/local_llms_and_user_tasks_unrelated_to_it/ | false | false | self | 1 | null |
Local LLMs and User Tasks Unrelated to IT | 1 | [removed] | 2025-06-02T11:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l1foh9/local_llms_and_user_tasks_unrelated_to_it/ | KitchenPlayful3160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1foh9 | false | null | t3_1l1foh9 | /r/LocalLLaMA/comments/1l1foh9/local_llms_and_user_tasks_unrelated_to_it/ | false | false | self | 1 | null |
Best Local LLMs for RTX 4060 (8GB VRAM) & 32GB RAM on Asus Zephyrus G14 (2024)? | 1 | [removed] | 2025-06-02T10:55:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ezqv/best_local_llms_for_rtx_4060_8gb_vram_32gb_ram_on/ | andreaingrando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ezqv | false | null | t3_1l1ezqv | /r/LocalLLaMA/comments/1l1ezqv/best_local_llms_for_rtx_4060_8gb_vram_32gb_ram_on/ | false | false | self | 1 | null |
Any fast and multilingual TTS model trained with a lightweighted LLM? | 4 | There were some work such as Orptheus, Octus, Zonos etc, however, they seems both only for English.<br>Am seeking for a model trained with multilingual and with emotion promptable.<br>Anyone are planing to train a one? | 2025-06-02T10:31:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l1el53/any_fast_and_multilingual_tts_model_trained_with/ | LewisJin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1el53 | false | null | t3_1l1el53 | /r/LocalLLaMA/comments/1l1el53/any_fast_and_multilingual_tts_model_trained_with/ | false | false | self | 4 | null |
Pinokio down for days | 1 | What's happening with [https://pinokio.computer/](https://pinokio.computer/) its been days its not working and because of that the discover tab in the client is also blank since it can't fetch data from the server<br>i've also used a website availability checker and it also can't reach pinokio if anyone knows what's up ... | 2025-06-02T10:25:16 | Reys_dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1ehll | false | null | t3_1l1ehll | /r/LocalLLaMA/comments/1l1ehll/pinokio_down_for_days/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'De8Ah0XbfiCkCiFakq0q4wpjO1Zc_3Dg7EiiOGhJr88', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/q2tfudf6qh4f1.png?width=108&crop=smart&auto=webp&s=b886aa2ad682aed2db46dc15b82a9be0792a0e42', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/q2tfudf6qh4f1.png... | |
Ignore the hype - AI companies still have no moat | 267 | An article I wrote a while back, I think r/LocalLLaMA still wins<br>The basis of it is that Every single AI tool – has an open source alternative, every. single. one – so programming wise, for a new company to implement these features is not a matter of development complexity but a matter of getting the biggest audience ... | 2025-06-02T10:06:26 | https://river.berlin/blog/there-is-still-no-moat/ | No_Tea2273 | river.berlin | 1970-01-01T00:00:00 | 0 | {} | 1l1e6ic | false | null | t3_1l1e6ic | /r/LocalLLaMA/comments/1l1e6ic/ignore_the_hype_ai_companies_still_have_no_moat/ | false | false | 267 | {'enabled': False, 'images': [{'id': 'TU8AJKDkxfU0q12qDeRP3T0ItraWwkLCVZQ_QFZdlPo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TawOSRI4o3WDthoH5zp4cL7vlpQPtqKfMqXniUZMdX0.jpg?width=108&crop=smart&auto=webp&s=58751e5944a3c7f8a7e94ef84d9f6df289e90d68', 'width': 108}, {'height': 108, 'url': 'h... |
Any node based tools for general AI workflows? | 1 | I'm looking if anyone built any Comfy UI style tools for all sorts of general AI workflows like LLMs, STT, TTS, basic stuff like HTTP requests, custom functions, etc. Something like a mix of Comfy UI and n8n. The closest thing I found is a closed source tool florafauna. | 2025-06-02T10:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1e4uz/any_node_based_tools_for_general_ai_workflows/ | GamerWael | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1e4uz | false | null | t3_1l1e4uz | /r/LocalLLaMA/comments/1l1e4uz/any_node_based_tools_for_general_ai_workflows/ | false | false | self | 1 | null |
What do people think about SGLang vs. vLLM (or any other framework)? | 1 | [removed] | 2025-06-02T10:02:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l1e42c/what_do_people_think_about_sglang_vs_vllm_or_any/ | Mother_Context_2446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1e42c | false | null | t3_1l1e42c | /r/LocalLLaMA/comments/1l1e42c/what_do_people_think_about_sglang_vs_vllm_or_any/ | false | false | self | 1 | null |
GPT4All, AnythingLLM, Open WebUI, or other? | 0 | I don't have the time I'd like to work on running LLMs locally, So far I have played with various models on GPT4All and a bit on AnythingLLM. In the interest of saving time, I am seeking opinions on which "front end" interface I should use with these various popular LLMs. I should note that I am most interested current... | 2025-06-02T09:45:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l1dujm/gpt4all_anythingllm_open_webui_or_other/ | BobbyNGa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1dujm | false | null | t3_1l1dujm | /r/LocalLLaMA/comments/1l1dujm/gpt4all_anythingllm_open_webui_or_other/ | false | false | self | 0 | null |
Any ideas on how to make qwen 3 8b run on phone? | 2 | I'm developing an app where you can edit code from your github repos using LLMs using llama.rn. Using the lowest quanitzation it still crashes the app. A bit strange since it can handle larger llms like yi coder 9b.<br>Anyone got an idea on what to do or what to read to understand the issue better?<br>Of if anyone would li... | 2025-06-02T09:40:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ds4n/any_ideas_on_how_to_make_qwen_3_8b_run_on_phone/ | AspecialistI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ds4n | false | null | t3_1l1ds4n | /r/LocalLLaMA/comments/1l1ds4n/any_ideas_on_how_to_make_qwen_3_8b_run_on_phone/ | false | false | self | 2 | null |
Best Video captioning model | 10 | Need to generate text captions from small video clips that later i can use to do semantic scene search. What are the best models for VRAM 12-32GB.<br>Maybe i can train/fine tune so i can do embeded search? | 2025-06-02T09:40:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l1drru/best_video_captioning_model/ | VihmaVillu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1drru | false | null | t3_1l1drru | /r/LocalLLaMA/comments/1l1drru/best_video_captioning_model/ | false | false | self | 10 | null |
Looking for model recommendations for creative writing | 0 | Been using Fimbulvetr-11b-v2-i1 within LM Studio to generate a wide variety of fiction, 500 words at a time. Nothing commercial, just to amuse myself. But being limited to such short generations can be frustrating, especially when it starts skipping details from long prompts. When using Claude Sonnet, I saw it could p... | 2025-06-02T09:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l1dpic/looking_for_model_recommendations_for_creative/ | Bed-After | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1dpic | false | null | t3_1l1dpic | /r/LocalLLaMA/comments/1l1dpic/looking_for_model_recommendations_for_creative/ | false | false | self | 0 | null |
A personal AI assistant on my laptop with 16 GB RAM and RTX 3050 4GB video memory. Which model is feasible? | 0 | I have worked with AI and RAG as part of profession most of that is glorified API calling. I don't have a speck of experience with local LLMs.<br>I want to build something that works on my machine. A low end LLM that can make tool calls and respond to simple questions.<br>For example:<br>Me : Open reddit<br>LLM: should make a ... | 2025-06-02T09:15:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l1df0n/a_personal_ai_assistant_on_my_laptop_with_16_gb/ | WiseObjective8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1df0n | false | null | t3_1l1df0n | /r/LocalLLaMA/comments/1l1df0n/a_personal_ai_assistant_on_my_laptop_with_16_gb/ | false | false | self | 0 | null |
Start up ideas around LLM and vision models like flux | 0 | Hi Friends,<br>I am looking for suggestions, I am planning to start a startup around llm and lora trained on specific customer data like their website or business information.<br>And I want to provide solution -<br>1 a chatbot for user which can help user navigate to different pages for doing certain task.<br>2 tools for adm... | 2025-06-02T08:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l1d31v/start_up_ideas_around_llm_and_vision_models_like/ | SearchTricky7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1d31v | false | null | t3_1l1d31v | /r/LocalLLaMA/comments/1l1d31v/start_up_ideas_around_llm_and_vision_models_like/ | false | false | self | 0 | null |
Best VLM for financial document processing | 1 | [removed] | 2025-06-02T07:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l1c0e3/best_vlm_for_financial_document_processing/ | SaasPhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1c0e3 | false | null | t3_1l1c0e3 | /r/LocalLLaMA/comments/1l1c0e3/best_vlm_for_financial_document_processing/ | false | false | self | 1 | null |
Sharing my a demo of tool for easy handwritten fine-tuning dataset creation! | 1 | [removed] | 2025-06-02T07:17:02 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1bo95 | false | null | t3_1l1bo95 | /r/LocalLLaMA/comments/1l1bo95/sharing_my_a_demo_of_tool_for_easy_handwritten/ | false | false | default | 1 | null | ||
Best accuracy vs speed tradeoff on a local setup | 1 | [removed] | 2025-06-02T07:09:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l1bjwr/best_accuracy_vs_speed_tradeoff_on_a_local_setup/ | Awkward_Sympathy4475 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1bjwr | false | null | t3_1l1bjwr | /r/LocalLLaMA/comments/1l1bjwr/best_accuracy_vs_speed_tradeoff_on_a_local_setup/ | false | false | self | 1 | null |
System Prompt Learning: Teaching your local LLMs to learn problem-solving strategies from experience (optillm plugin) | 36 | Hey r/LocalLlama!
I wanted to share something we've been working on that might interest folks running local LLMs - **System Prompt Learning (SPL)**.
# The Problem
You know how ChatGPT, Claude, etc. perform so well partly because they have incredibly detailed system prompts with sophisticated reasoning strategies? Mo... | 2025-06-02T07:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l1bjhm/system_prompt_learning_teaching_your_local_llms/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1bjhm | false | null | t3_1l1bjhm | /r/LocalLLaMA/comments/1l1bjhm/system_prompt_learning_teaching_your_local_llms/ | false | false | self | 36 | null |
What LLM libraries/frameworks are worthwhile and what is better to roll your own from scratch? | 31 | Maybe I'm suffering from NIH, but the core of systems can be quite simple to roll out using just torch/transformers/API calls.
What libraries/frameworks do you find most valuable to use instead of rolling your own? | 2025-06-02T06:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l1b801/what_llm_librariesframeworks_are_worthwhile_and/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1b801 | false | null | t3_1l1b801 | /r/LocalLLaMA/comments/1l1b801/what_llm_librariesframeworks_are_worthwhile_and/ | false | false | self | 31 | null |
[DEMO] I created an AI coding agent that debugs autonomously using dynamic runtime state in VS Code – Feedback Wanted! | 7 | I would like to share a demo of **Zentara Code,** a coding agent forked from Roo Code that can do debugging leveraging runtime state. One of the main pain spots when using coding agents is that they generate buggy code. And they have limited abilities to fix errors as they cannot leverage the runtime debugging tools l... | 2025-06-02T06:45:17 | https://v.redd.it/poqqxlqvlg4f1 | bn_from_zentara | /r/LocalLLaMA/comments/1l1b6u0/demo_i_created_an_ai_coding_agent_that_debugs/ | 1970-01-01T00:00:00 | 0 | {} | 1l1b6u0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/poqqxlqvlg4f1/DASHPlaylist.mpd?a=1751568321%2CY2Y0MmRkOTk5MThlNzI3ZTE1M2RjNjBjN2I4NTA4YTgwYTZjODM0NDQzNjQ1NTEyMTVkMjU2ZTUwMmEzMTBhYQ%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/poqqxlqvlg4f1/DASH_1080.mp4?source=fallback', '... | t3_1l1b6u0 | /r/LocalLLaMA/comments/1l1b6u0/demo_i_created_an_ai_coding_agent_that_debugs/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=108&crop=smart&format=pjpg&auto=webp&s=2e51897591c2decf50c45d451a09ca8ebb43c... | |
Model under 1B parameters with great performance | 1 | [removed] | 2025-06-02T06:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l1at3f/model_under_1b_parameters_with_great_perfomance/ | Josephdhub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1at3f | false | null | t3_1l1at3f | /r/LocalLLaMA/comments/1l1at3f/model_under_1b_parameters_with_great_perfomance/ | false | false | self | 1 | null |
Snapdragon 8 Elite gets 5.5 t/s on Qwen3 30B A3B | 89 | Phone is a Razr Ultra 2025 | 2025-06-02T05:44:39 | 1ncehost | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1a944 | false | null | t3_1l1a944 | /r/LocalLLaMA/comments/1l1a944/snapdragon_8_elite_gets_55_ts_on_qwen3_30b_a3b/ | false | false | 89 | {'enabled': True, 'images': [{'id': 'xmgABfvO1zSJbINJ-e4ZDYYCr3oRdPwgHC-A4qqr4ZI', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jagac0yccg4f1.png?width=108&crop=smart&auto=webp&s=f0e4e601fd14ad1d1fc02c56ad8d9e48243a840e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jagac0yccg4f1.pn... | ||
KEEPER: Smarter Surveillance That Sees Like a Human | 1 | 2025-06-02T05:41:22 | https://v.redd.it/3nnh64dqbg4f1 | Constant-Marketing97 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1a78h | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3nnh64dqbg4f1/DASHPlaylist.mpd?a=1751434896%2CNWYxY2Y5OTMxZjEwNzA2MDYwOTc4Y2I0OTc1N2NhMGYwNzgxZWM2ZmNjNWI5NjA4NDE3NDNkZDg1NTk0NjA1Yw%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/3nnh64dqbg4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l1a78h | /r/LocalLLaMA/comments/1l1a78h/keeper_smarter_surveillance_that_sees_like_a_human/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=108&crop=smart&format=pjpg&auto=webp&s=9d7778239ce4660efa417b96a39c81f36b9df... | ||
KEEPER: Smarter Surveillance That Sees Like a Human | 1 | 2025-06-02T05:40:22 | https://v.redd.it/k47jp5uhbg4f1 | Constant-Marketing97 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1a6nm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/k47jp5uhbg4f1/DASHPlaylist.mpd?a=1751434836%2CYmJjNjRkYzQ4ZDQ3YmI2MDA2YzE5MDc3ZGRkMjBiOTY3ODZhY2UzNzYzZTllY2IwZmJjZmMxZmQzZmM0Y2YxNg%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/k47jp5uhbg4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l1a6nm | /r/LocalLLaMA/comments/1l1a6nm/keeper_smarter_surveillance_that_sees_like_a_human/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=108&crop=smart&format=pjpg&auto=webp&s=c6f342483f2ed36c375820ab252d80386b40e... | ||
IQ1_Smol_Boi | 418 | Some folks asked me for an R1-0528 quant that might fit on 128GiB RAM + 24GB VRAM. I didn't think it was possible, but turns out my new smol boi `IQ1_S_R4` is 131GiB and actually runs okay (ik_llama.cpp fork only), and has perplexity lower "better" than `Qwen3-235B-A22B-Q8_0` which is almost twice the size! Not sure th... | 2025-06-02T05:26:51 | VoidAlchemy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l19yud | false | null | t3_1l19yud | /r/LocalLLaMA/comments/1l19yud/iq1_smol_boi/ | false | false | 418 | {'enabled': True, 'images': [{'id': 'W-hH3Ojd_aRT_pAe1zxAxaTry8Prv1J_g5owpNWj7ug', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/9u1teeqt4g4f1.png?width=108&crop=smart&auto=webp&s=97b77df2c7ad2de0f72aa8041ee27a467626c1d9', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/9u1teeqt4g4f1.png... | ||
Looking for an AI Chat Interface Platform Similar to Open WebUI (With Specific Requirements) | 1 | [removed] | 2025-06-02T05:15:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l19s5s/looking_for_an_ai_chat_interface_platform_similar/ | Lethal_Protector_404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l19s5s | false | null | t3_1l19s5s | /r/LocalLLaMA/comments/1l19s5s/looking_for_an_ai_chat_interface_platform_similar/ | false | false | self | 1 | null |
What's next? Behemoth? Qwen VL/Coder? Mistral Large Reasoning/Vision? | 11 | do you await any model? | 2025-06-02T04:35:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l194gj/whats_next_behemoth_qwen_vlcoder_mistral_large/ | secopsml | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l194gj | false | null | t3_1l194gj | /r/LocalLLaMA/comments/1l194gj/whats_next_behemoth_qwen_vlcoder_mistral_large/ | false | false | self | 11 | null |
Alienware R11 with RTX 3090 to run local AI? | 1 | [removed] | 2025-06-02T04:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l18mxb/alienware_r11_with_rtx_3090_to_run_local_ai/ | Brief_Original | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l18mxb | false | null | t3_1l18mxb | /r/LocalLLaMA/comments/1l18mxb/alienware_r11_with_rtx_3090_to_run_local_ai/ | false | false | self | 1 | null |
anyone working on an interesting project | 1 | [removed] | 2025-06-02T03:31:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l17z1z/anyone_working_on_an_interesting_project/ | shoman30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l17z1z | false | null | t3_1l17z1z | /r/LocalLLaMA/comments/1l17z1z/anyone_working_on_an_interesting_project/ | false | false | self | 1 | null |
Memory Layer Compatible with Local Llama | 0 | I built a open-sourced remote personal memory vault that works with MCP compatible clients. You can just say "remember X, Y, Z." and then retrieve it later. You can store documents, and I am working on integrations with Obsidian and such. Looking for contributors to make this compatible with local llama.
I want thi... | 2025-06-02T03:21:11 | https://www.reddit.com/r/LocalLLaMA/comments/1l17sdd/memory_layer_compatible_with_local_llama/ | OneEither8511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l17sdd | false | null | t3_1l17sdd | /r/LocalLLaMA/comments/1l17sdd/memory_layer_compatible_with_local_llama/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '0K3EP3BTVG7L7Lx8Qm2haQiAc6eProGUj9H_Y6QGMsc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/etLpvGUB2CO3ZrX2CORZsRVj6sm1uRSs2gMTuje9Kv8.jpg?width=108&crop=smart&auto=webp&s=2498bff756608b692d3780a9ac2beb43816621f9', 'width': 108}, {'height': 113, 'url': 'h... |
SAGA Update: Autonomous Novel Writing with Deep KG & Semantic Context - Now Even More Advanced! | 27 | A couple of weeks ago, I shared an early version of SAGA (Semantic And Graph-enhanced Authoring), my project for autonomous novel generation. Thanks to some great initial feedback and a lot of focused development, I'm excited to share a significantly advanced version!
**What is SAGA?**
SAGA, powered by its NANA (Next... | 2025-06-02T03:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l17m9g/saga_update_autonomous_novel_writing_with_deep_kg/ | MariusNocturnum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l17m9g | false | null | t3_1l17m9g | /r/LocalLLaMA/comments/1l17m9g/saga_update_autonomous_novel_writing_with_deep_kg/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'vmohKiMaIalkUvT1Ey-JVw1JK3sVXOizSNS8yN44-wU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vic3MgrCDbXNtyTGoIqqmDJ2TdHQJPKldbrpWUZEECE.jpg?width=108&crop=smart&auto=webp&s=912fc6747b6206ece9f37c1785188e5b32551151', 'width': 108}, {'height': 108, 'url': 'h... |
Thoughts on Chatbox? | 1 | [removed] | 2025-06-02T02:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l17cgl/thoughts_on_chatbox/ | Accomplished-Rub2331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l17cgl | false | null | t3_1l17cgl | /r/LocalLLaMA/comments/1l17cgl/thoughts_on_chatbox/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Qp3D189pFu1CLJc-3D4NUZhwtBVSjmjY03kJ5KEgcpw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oEz4ib6pbbSRLCTO2ea8arkJicCBNC3JUQvdtTvej5I.jpg?width=108&crop=smart&auto=webp&s=1de6605ab2e21cdf2cd50cc4ff69f3d15f135c1f', 'width': 108}, {'height': 108, 'url': 'h... |
What's an open model to use to emulate what NotebookLM does? | 4 | Forgive the naive or dumb question here, I'm just starting out with doing this locally. So far I'm using instruct3-llama and a vector database in Chroma to prompt against a rulesbook. I give send a context selected by the user alongside the prompt to narrow what the LLM looks at to return results. Is command-r better? | 2025-06-02T02:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l16d23/whats_an_open_model_to_use_to_emulate_what/ | mccoypauley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l16d23 | false | null | t3_1l16d23 | /r/LocalLLaMA/comments/1l16d23/whats_an_open_model_to_use_to_emulate_what/ | false | false | self | 4 | null |
Is Google censoring Qwen Long-L1-32B? | 1 | [removed] | 2025-06-02T01:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l166zr/is_google_censoring_qwen_longl132b/ | lincolnrules | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l166zr | false | null | t3_1l166zr | /r/LocalLLaMA/comments/1l166zr/is_google_censoring_qwen_longl132b/ | false | false | 1 | null | |
Does anyone know if there is a good leaderboard for Audio Language Model? | 1 | [removed] | 2025-06-02T01:54:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l163p0/does_anyone_know_if_there_is_a_good_leaderboard/ | MediaHaunting8669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l163p0 | false | null | t3_1l163p0 | /r/LocalLLaMA/comments/1l163p0/does_anyone_know_if_there_is_a_good_leaderboard/ | false | false | self | 1 | null |
Which model are you using? June'25 edition | 213 | As proposed previously from this [post](https://www.reddit.com/r/LocalLLaMA/comments/1jxu0f7/we_should_have_a_monthly_which_models_are_you/), it's time for another monthly check-in on the latest models and their applications. The goal is to keep everyone updated on recent releases and discover hidden gems that might be... | 2025-06-02T01:09:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l1581z/which_model_are_you_using_june25_edition/ | Ok_Influence505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1581z | false | null | t3_1l1581z | /r/LocalLLaMA/comments/1l1581z/which_model_are_you_using_june25_edition/ | false | false | self | 213 | null |
did nvidia fix melting cable issue for rtx 6000 pro? I was thinking of buying one for AI stuff | 1 | [removed] | 2025-06-02T01:00:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l151w2/did_nvidia_fix_melting_cable_issue_for_rtx_6000/ | tooLateButStillYoung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l151w2 | false | null | t3_1l151w2 | /r/LocalLLaMA/comments/1l151w2/did_nvidia_fix_melting_cable_issue_for_rtx_6000/ | false | false | self | 1 | null |
Which model are you using? June'25 edition | 1 | [removed] | 2025-06-02T00:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l14rz6/which_model_are_you_using_june25_edition/ | Ok_Influence505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l14rz6 | false | null | t3_1l14rz6 | /r/LocalLLaMA/comments/1l14rz6/which_model_are_you_using_june25_edition/ | false | false | self | 1 | null |
Who is getting paid to work doing this rather than just hobby dabbling..what was your path? | 149 | I really enjoy hacking together LLM scripts and ideas. but how do I get paid doing it?? | 2025-06-02T00:00:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l13tv3/who_is_getting_paid_to_work_doing_this_rather/ | bornfree4ever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13tv3 | false | null | t3_1l13tv3 | /r/LocalLLaMA/comments/1l13tv3/who_is_getting_paid_to_work_doing_this_rather/ | false | false | self | 149 | null |
Which model are you using? June'25 edition | 1 | [removed] | 2025-06-01T23:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l13qto/which_model_are_you_using_june25_edition/ | Ok_Influence505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13qto | false | null | t3_1l13qto | /r/LocalLLaMA/comments/1l13qto/which_model_are_you_using_june25_edition/ | false | false | self | 1 | null |
How are you selecting LLMs? | 0 | Below is my Desktop config
CPU : I9-13900KF
RAM : 64GB DDR4
GPU: NVIDIA GeForce RTX 4070 Ti with 12GB Dedicated GPU and 32GB Shared GPU. Overall, Task Manager shows my GPU Memory as 44GB.
https://preview.redd.it/xljtz6jqhe4f1.png?width=791&format=png&auto=webp&s=6bfe83e00013b28e950b09b9c8d48a0e89e00f41
Q1 :... | 2025-06-01T23:46:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l13j9b/how_are_you_selecting_llms/ | KVT_BK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13j9b | false | null | t3_1l13j9b | /r/LocalLLaMA/comments/1l13j9b/how_are_you_selecting_llms/ | false | false | 0 | null | |
How are people running dual GPU these days? | 56 | I have a 4080 but was considering getting a 3090 for LLM models. I've never ran a dual set up before because I read like 6 years ago that crossfire isn't used anymore. But clearly people are doing it so is that still going on? How does it work? Will it only offload to 1 gpu and then to the RAM, or can it offload to one... | 2025-06-01T23:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l13fqa/how_are_people_running_dual_gpu_these_days/ | admiralamott | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13fqa | false | null | t3_1l13fqa | /r/LocalLLaMA/comments/1l13fqa/how_are_people_running_dual_gpu_these_days/ | false | false | self | 56 | null |
IronLoom-32B-v1 - A Character Card Creator Model with Structured Planning | 9 | IronLoom-32B-v1 is a model specialized in creating character cards for Silly Tavern that has been trained to reason in a structured way before outputting the card.
**Model Name: IronLoom-32B-v1**
**Model URL:** [https://huggingface.co/Lachesis-AI/IronLoom-32B-v1](https://huggingface.co/Lachesis-AI/IronLoom-32B-v1) ... | 2025-06-01T23:37:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l13c8n/ironloom32bv1_a_character_card_creator_model_with/ | Kos11_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13c8n | false | null | t3_1l13c8n | /r/LocalLLaMA/comments/1l13c8n/ironloom32bv1_a_character_card_creator_model_with/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'xQwYiosjCwwzBVbg47ge7mxH035jxm95l8I9ZhKcorQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sHgkpBWc5Ts977eTnV4CHU-D5gOFyzpv5gyH7zSKeqY.jpg?width=108&crop=smart&auto=webp&s=e428699e499363a54282e235c4a8975e593e54c6', 'width': 108}, {'height': 116, 'url': 'h... | |
Excel to PDF | 2 | I'm interested in running a llm locally for a variety of reasons, but for my actual job I have a menial task of taking data from an excel sheet and copying the various fields into a PDF template I have.
From what I read chatGPT plus can do this, but do ya'll think it's possible and/or too much hassle to get a local ll... | 2025-06-01T23:19:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l12yc2/excel_to_pdf/ | Soliloquy789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l12yc2 | false | null | t3_1l12yc2 | /r/LocalLLaMA/comments/1l12yc2/excel_to_pdf/ | false | false | self | 2 | null |
Playing generated games of Atari Style PingPong and Space Invaders, thanks to Qwen 3 8b! (Original non Deepseek version) This small model continues to amaze. | 18 | 2025-06-01T22:51:37 | https://youtu.be/ar_kFDHGbhQ | c64z86 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1l12cmi | false | {'oembed': {'author_name': 'c64', 'author_url': 'https://www.youtube.com/@c64z86', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ar_kFDHGbhQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in... | t3_1l12cmi | /r/LocalLLaMA/comments/1l12cmi/playing_generated_games_of_atari_style_pingpong/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'XmcvlSRaFm1YeY2y0bLV7P5o9rzDH0mYlaKVYZVnus4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fORXcgKVkaTCLfSuQUzrXrubR0RAsGHr5swRFkIXzZY.jpg?width=108&crop=smart&auto=webp&s=f151b6251d0f13bb540e4ecfe1e1ce200a5bbafc', 'width': 108}, {'height': 162, 'url': 'h... | ||
Scalable Strategies for Continual Learning with Replay | 1 | [https://arxiv.org/abs/2505.12512](https://arxiv.org/abs/2505.12512) | 2025-06-01T22:09:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l11eqz/scalable_strategies_for_continual_learning_with/ | Old_Cardiologist_854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l11eqz | false | null | t3_1l11eqz | /r/LocalLLaMA/comments/1l11eqz/scalable_strategies_for_continual_learning_with/ | false | false | self | 1 | null |
Sharing my tool for easy handwritten fine-tuning dataset creation: supports multiple formats, token counting & auto saving! | 1 | [removed] | 2025-06-01T21:54:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l111qk | false | null | t3_1l111qk | /r/LocalLLaMA/comments/1l111qk/sharing_my_tool_for_easy_handwritten_finetuning/ | false | false | default | 1 | null | ||
Connecting two 3090s | 0 | How can I connect two 3090s in consumer hardware? My motherboard supports x8/x8, and ample cooling.
I was trying to connect them via an SLI/NVM Link but I don't see many resources on the topic. I've read some mentions of SLI being deprecated for FUTURE support, but I'm assuming it's still possible.
I am not interest... | 2025-06-01T21:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l10im3/connecting_two_3090s/ | elchurnerista | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l10im3 | false | null | t3_1l10im3 | /r/LocalLLaMA/comments/1l10im3/connecting_two_3090s/ | false | false | self | 0 | null |
Context Window for Llama 4 New Meta API | 0 | Does anyone know what is the context window supported for llama 4 new meta api? I cannot find it. | 2025-06-01T21:19:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l108b2/context_window_for_llama_4_new_meta_api/ | Temporary-Koala-7370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l108b2 | false | null | t3_1l108b2 | /r/LocalLLaMA/comments/1l108b2/context_window_for_llama_4_new_meta_api/ | false | false | self | 0 | null |
Any LLM benchmarks yet for the GMKTek EVO-X2 AMD Ryzen AI Max+ PRO 395? | 13 | Any LLM benchmarks yet for the GMKTek Evo-X2 AMD Ryzen AI Max+ PRO 395?
I'd love to see latest benchmarks with ollama doing 30 to 100 GB models and maybe a lineup vs 4xxx and 5xxx Nvidia GPUs.
Thanks! | 2025-06-01T21:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l106wk/any_llm_benchmarks_yet_for_the_gmktek_evox2_amd/ | StartupTim | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l106wk | false | null | t3_1l106wk | /r/LocalLLaMA/comments/1l106wk/any_llm_benchmarks_yet_for_the_gmktek_evox2_amd/ | false | false | self | 13 | null |
Anyone using an open source framework to control LLM agent behavior more precisely? | 1 | [removed] | 2025-06-01T21:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l102hk/anyone_using_an_open_source_framework_to_control/ | Ecstatic-Cranberry90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l102hk | false | null | t3_1l102hk | /r/LocalLLaMA/comments/1l102hk/anyone_using_an_open_source_framework_to_control/ | false | false | self | 1 | null |
25L Portable NV-linked Dual 3090 LLM Rig | 163 | Main point of portability is because The workplace of the coworker I built this for is truly offline, with no potential for LAN or wifi, so to download new models and update the system periodically I need to go pick it up from him and take it home.
WARNING - these components don't fit if you try to copy this build. T... | 2025-06-01T21:01:58 | https://www.reddit.com/gallery/1l0zsv7 | Special-Wolverine | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l0zsv7 | false | null | t3_1l0zsv7 | /r/LocalLLaMA/comments/1l0zsv7/25l_portable_nvlinked_dual_3090_llm_rig/ | false | false | 163 | null | |
I managed to integrate vision in a Desktop app. | 1 | [removed] | 2025-06-01T20:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l0zq2m/i_manage_to_integrate_vision_in_a_desktop_app/ | Trilogix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0zq2m | false | null | t3_1l0zq2m | /r/LocalLLaMA/comments/1l0zq2m/i_manage_to_integrate_vision_in_a_desktop_app/ | false | false | 1 | null |
dsr1 0528 on ollama.com | 0 | is this misspelled on the repo?
http://lollama.com/ibrary/deepseek-r1:621b-2508-94K-M
"2508"?
| 2025-06-01T20:45:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l0zefs/dsr1_0528_on_ollamacom/ | neurostream | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0zefs | false | null | t3_1l0zefs | /r/LocalLLaMA/comments/1l0zefs/dsr1_0528_on_ollamacom/ | false | false | self | 0 | null |
Hello friends, a question about LLM model for 256 gb m3 ultra. | 1 | [removed] | 2025-06-01T20:19:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l0yrs3/hello_friends_a_question_about_llm_model_for_256/ | Mean_Bird_6331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0yrs3 | false | null | t3_1l0yrs3 | /r/LocalLLaMA/comments/1l0yrs3/hello_friends_a_question_about_llm_model_for_256/ | false | false | self | 1 | null |
Pure vs. merged - and a modern leaderboard | 9 | Probably been discussion about this, but I've noticed the trained-in quirks of models diminish with merged models. (Can't tell with abliterated since the only ones I've used are also mergers).
Quirks include stubbornness in personality, desire consistency, to suck with certain formatting, etc.
Yet we have no leaderbo... | 2025-06-01T20:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l0ylj8/pure_vs_merged_and_a_modern_leaderboard/ | jaggzh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0ylj8 | false | null | t3_1l0ylj8 | /r/LocalLLaMA/comments/1l0ylj8/pure_vs_merged_and_a_modern_leaderboard/ | false | false | self | 9 | null |
Llama.cpp - cache-type-k+cache-type-v+flash-attn too good to be true!? | 1 | [removed] | 2025-06-01T20:04:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l0yf1d/llamacpp_cachetypekcachetypevflashattn_too_good/ | cesarean722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0yf1d | false | null | t3_1l0yf1d | /r/LocalLLaMA/comments/1l0yf1d/llamacpp_cachetypekcachetypevflashattn_too_good/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'h... |
A Privacy-Focused Perplexity That Runs Locally on all your devices - iPhone, Android, iPad! | 38 | Hey r/LocalLlama community!
Following up on my [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1ku1444/a_privacyfocused_perplexity_that_runs_locally_on/)\- the response has been incredible! Thank you to everyone who tried it out, left reviews, and provided feedback.
Based on your requests, I'm excited to... | 2025-06-01T19:52:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l0y4ep/a_privacyfocused_perplexity_that_runs_locally_on/ | Ssjultrainstnict | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0y4ep | false | null | t3_1l0y4ep | /r/LocalLLaMA/comments/1l0y4ep/a_privacyfocused_perplexity_that_runs_locally_on/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': '7-HFAtbo5I60W1_r4CgocNBdTzGwoEdGmG9vh0EFuog', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-5lp9Z6W2XdT9qC73g8A5oiR5h73-k5h2BbRRn43laE.jpg?width=108&crop=smart&auto=webp&s=8e6bc59f3f54bb8d3e4765d5d924ef62c20af88e', 'width': 108}, {'height': 108, 'url': 'h... |
Allowing LLM to ponder in Open WebUI | 261 | **What is this?**
A completely superficial way of letting LLM to ponder a bit before making its conversation turn. The process is streamed to an artifact within Open WebUI.
[Code](https://github.com/av/harbor/blob/main/boost/src/modules/ponder.py) | 2025-06-01T19:47:52 | https://v.redd.it/uoeptbsbdd4f1 | Everlier | /r/LocalLLaMA/comments/1l0y0wp/allowing_llm_to_ponder_in_open_webui/ | 1970-01-01T00:00:00 | 0 | {} | 1l0y0wp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/uoeptbsbdd4f1/DASHPlaylist.mpd?a=1751528877%2CYzllZGRmNjdmYWZkNGQ2YjUxNDEyMDY1OWE5Y2UxNWRhMWQ4NzJhZWQ4N2M0MTllMGYzMGEwYmM1MjczMzFmYQ%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/uoeptbsbdd4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l0y0wp | /r/LocalLLaMA/comments/1l0y0wp/allowing_llm_to_ponder_in_open_webui/ | false | false | 261 | {'enabled': False, 'images': [{'id': 'dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB.png?width=108&crop=smart&format=pjpg&auto=webp&s=93b52068d1b79fa20ca68ed14fd0f9c0a3a6e... | |
Toolcalling in the reasoning trace as an alternative to agentic frameworks | 15 | [Deep Reasoning With Tools: Toolcalling in the reasoning trace](https://2084.substack.com/p/deep-reasoning-with-tools-toolcalling)
Hey, so I was working on training reasoning models to do interesting things, when I started wanting them to be more dynamic: not just predict based on static information but actively searc... | 2025-06-01T19:40:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l0xubg/toolcalling_in_the_reasoning_trace_as_an/ | ExaminationNo8522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0xubg | false | null | t3_1l0xubg | /r/LocalLLaMA/comments/1l0xubg/toolcalling_in_the_reasoning_trace_as_an/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': '9a7ZCjCbYNIdu6GfaNVd7eVb-N5vwv7fmfApivsoKEQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h_H3spaBA4x7-OYC3lVy5l0SEXrU8crvHyV3haxB97Y.jpg?width=108&crop=smart&auto=webp&s=2ff2ae74fc25431ddfd5f2d07cab594f85e7d19c', 'width': 108}, {'height': 108, 'url': 'h... |