title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Good build plan? | 1 | [removed] | 2025-05-25T19:53:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kvbfu6/good_build_plan/ | adidas128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kvbfu6 | false | null | t3_1kvbfu6 | /r/LocalLLaMA/comments/1kvbfu6/good_build_plan/ | false | false | self | 1 | null |
What linux distro do you use? | 1 | [removed] | 2025-05-25T19:42:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kvb641/what_linux_distro_do_you_use/ | No_Farmer_495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kvb641 | false | null | t3_1kvb641 | /r/LocalLLaMA/comments/1kvb641/what_linux_distro_do_you_use/ | false | false | self | 1 | null |
AI Clippy for macOS (the LocalLLama you didn't think you need) | 1 | [removed] | 2025-05-25T19:34:47 | https://v.redd.it/m09gqadvcz2f1 | BruhFortniteLaggyTho | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kvazy5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m09gqadvcz2f1/DASHPlaylist.mpd?a=1750793699%2CMGM3YzQ5Y2RkNTMxNzE2MTQ3MmZhNTUyY2IwY2IwODE0NWEwZTk2ZDI1ODM1OWI0MTIxN2EyMGFjMTQ1MGEzMQ%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/m09gqadvcz2f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kvazy5 | /r/LocalLLaMA/comments/1kvazy5/ai_clippy_for_macos_the_localllama_you_didnt/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MjMya2c5ZHZjejJmMafdR-N42dO2exUZmu5HuHycRyaoWkX-MvoqkanM0Fsn', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/MjMya2c5ZHZjejJmMafdR-N42dO2exUZmu5HuHycRyaoWkX-MvoqkanM0Fsn.png?width=108&crop=smart&format=pjpg&auto=webp&s=de49191c22668b92a2764bf751dd00647fbaf... | |
Chainlit or Open webui for production? | 5 | So I am DS at my company but recently I have been tasked on developing a chatbot for our other engineers. I am currently the only one working on this project, and I have been learning as I go and there is no one else at my company who has knowledge on how to do this. Basically my first goal is to use a pre-trained LLM a... | 2025-05-25T19:00:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kva7sp/chainlit_or_open_webui_for_production/ | psssat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kva7sp | false | null | t3_1kva7sp | /r/LocalLLaMA/comments/1kva7sp/chainlit_or_open_webui_for_production/ | false | false | self | 5 | null |
Qwen3-30B-A3B is amazing! (BEST RAG MODEL) | 1 | [removed] | 2025-05-25T18:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kva646/qwen330ba3b_is_amazing_best_rag_model/ | Professional-Site503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kva646 | false | null | t3_1kva646 | /r/LocalLLaMA/comments/1kva646/qwen330ba3b_is_amazing_best_rag_model/ | false | false | self | 1 | null |
Looking for a lightweight AI model that can run locally on Android or iOS devices with only 2-4GB of CPU RAM. Does anyone know of any options besides VRAM models? | 0 | I'm working on a project that requires a lightweight AI model to run locally on low-end mobile devices. I'm looking for recommendations on models that can run smoothly within the 2-4GB RAM range. Any suggestions would be greatly appreciated! | 2025-05-25T18:31:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kv9j82/looking_for_a_lightweight_al_model_that_can_run/ | ExplanationEqual2539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv9j82 | false | null | t3_1kv9j82 | /r/LocalLLaMA/comments/1kv9j82/looking_for_a_lightweight_al_model_that_can_run/ | false | false | self | 0 | null |
Smallest VLM that currently exists and what's the minimum spec y'all have gotten them to work on? | 1 | [removed] | 2025-05-25T18:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kv97lg/smallest_vlm_that_currently_exists_and_whats_the/ | combo-user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv97lg | false | null | t3_1kv97lg | /r/LocalLLaMA/comments/1kv97lg/smallest_vlm_that_currently_exists_and_whats_the/ | false | false | self | 1 | null |
how do i build my own chatgpt/claude-style agent with custom prompts? 🤔 | 1 | [removed] | 2025-05-25T17:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kv8oqe/how_do_i_build_my_own_chatgptclaudestyle_agent/ | enough_jainil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv8oqe | false | null | t3_1kv8oqe | /r/LocalLLaMA/comments/1kv8oqe/how_do_i_build_my_own_chatgptclaudestyle_agent/ | false | false | self | 1 | null |
NEED HELP building CUSTOM GPT/CLAUDE agent with API + web UI any way? | 1 | [removed] | 2025-05-25T17:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kv8kfu/need_help_building_custom_gptclaude_agent_with/ | enough_jainil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv8kfu | false | null | t3_1kv8kfu | /r/LocalLLaMA/comments/1kv8kfu/need_help_building_custom_gptclaude_agent_with/ | false | false | self | 1 | null |
🚨 NEED HELP building CUSTOM GPT/CLAUDE agent with API + web UI any hacks? | 1 | [removed] | 2025-05-25T17:49:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kv8jqn/need_help_building_custom_gptclaude_agent_with/ | enough_jainil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv8jqn | false | null | t3_1kv8jqn | /r/LocalLLaMA/comments/1kv8jqn/need_help_building_custom_gptclaude_agent_with/ | false | false | self | 1 | null |
Vulkan for vLLM? | 4 | I've been thinking about trying out vLLM. With llama.cpp, I found that rocm didn't support my radeon 780M igpu, but vulkan did. Does anyone know if one can use vulkan with vLLM? I didn't see it when searching the docs, but thought I'd ask around. | 2025-05-25T17:23:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kv7xng/vulkan_for_vllm/ | RobotRobotWhatDoUSee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv7xng | false | null | t3_1kv7xng | /r/LocalLLaMA/comments/1kv7xng/vulkan_for_vllm/ | false | false | self | 4 | null |
RTX PRO 6000 96GB plus Intel Battlemage 48GB feasible? | 26 | OK, this may be crazy but I wanted to run it by you all. Can you combine a RTX PRO 6000 96GB (with all the Nvidia CUDA goodies) with a (relatively) cheap Intel 48GB GPUs for extra VRAM? So you have 144GB VRAM available, but you have all the capabilities of Nvidia on your main card driving the LLM inferencing? This ... | 2025-05-25T16:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kv762l/rtx_pro_6000_96gb_plus_intel_battlemage_48gb/ | SteveRD1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv762l | false | null | t3_1kv762l | /r/LocalLLaMA/comments/1kv762l/rtx_pro_6000_96gb_plus_intel_battlemage_48gb/ | false | false | self | 26 | null |
Qwen 235b DWQ MLX 4 bit quant | 16 | [https://huggingface.co/mlx-community/Qwen3-235B-A22B-4bit-DWQ](https://huggingface.co/mlx-community/Qwen3-235B-A22B-4bit-DWQ) Two questions: 1. Does anyone have a good way to test perplexity against the standard MLX 4 bit quant? 2. I notice this is exactly the same size as the standard 4 bit mlx quant: 132.26... | 2025-05-25T16:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kv74jx/qwen_235b_dwq_mlx_4_bit_quant/ | nomorebuttsplz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv74jx | false | null | t3_1kv74jx | /r/LocalLLaMA/comments/1kv74jx/qwen_235b_dwq_mlx_4_bit_quant/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'woDlX_9jZ_RUh-BRA0HnWYW5Ud9MvXQ5TnJDUlxluSs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Dj4j7cBsQGHj0l8S5-jQI3kP5IFvQfgOZXAljAmBsK8.jpg?width=108&crop=smart&auto=webp&s=83a82d5c88728b29956ec46a18d5bd0c6980207d', 'width': 108}, {'height': 116, 'url': 'h... |
Looking for disruptive ideas: What would you want from a personal, private LLM running locally? | 1 | [removed] | 2025-05-25T16:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kv6la4/looking_for_disruptive_ideas_what_would_you_want/ | dai_app | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv6la4 | false | null | t3_1kv6la4 | /r/LocalLLaMA/comments/1kv6la4/looking_for_disruptive_ideas_what_would_you_want/ | false | false | self | 1 | null |
Fine-tuning HuggingFace SmolVLM (256M) to control the robot | 330 | I've been experimenting with tiny LLMs and VLMs for a while now, perhaps some of your saw my earlier post here about running [LLM on ESP32 for Dalek](https://www.reddit.com/r/LocalLLaMA/comments/1g9seqf/a_tiny_language_model_260k_params_is_running/) Halloween prop. This time I decided to use HuggingFace really tiny (25... | 2025-05-25T16:24:19 | https://v.redd.it/9s2q9nm3fy2f1 | Complex-Indication | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kv6jjk | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9s2q9nm3fy2f1/DASHPlaylist.mpd?a=1750782276%2CZWE0YzNjMWRmZmFkZjViNTExMzJkODRiOGU3YjFhMjBmMWY5NGU3ZTEzZDE4OTFkNmZhMzRjYWY5NTFjMzllMQ%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/9s2q9nm3fy2f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kv6jjk | /r/LocalLLaMA/comments/1kv6jjk/finetuning_huggingface_smolvlm_256m_to_control/ | false | false | 330 | {'enabled': False, 'images': [{'id': 'b29vMmxwbTNmeTJmMSsIw5Jo6JnOhGNxnxMg56RLkadqgoRaNULw7zamXe9N', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b29vMmxwbTNmeTJmMSsIw5Jo6JnOhGNxnxMg56RLkadqgoRaNULw7zamXe9N.png?width=108&crop=smart&format=pjpg&auto=webp&s=6d8328acddedd06ed7322e24610cf9b2c467f... | |
I tasked the HuggingFace SmolVLM (256M) to control my robot - and it kind of worked | 1 | I've been experimenting with tiny LLMs and VLMs for a while now, perhaps some of your saw my earlier post here about running [LLM on ESP32 for Dalek](https://www.reddit.com/r/LocalLLaMA/comments/1g9seqf/a_tiny_language_model_260k_params_is_running/) Halloween prop. This time I decided to use HuggingFace really tiny (25... | 2025-05-25T16:18:35 | https://v.redd.it/7cn46871ey2f1 | Complex-Indication | /r/LocalLLaMA/comments/1kv6eop/i_tasked_the_huggingface_smolvlm_256m_to_control/ | 1970-01-01T00:00:00 | 0 | {} | 1kv6eop | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7cn46871ey2f1/DASHPlaylist.mpd?a=1750911522%2COGNjOTZjMjk3YjNlNGM3YmMzYWJiZjYwYTllZjMwYzc4ODQ0Mjc0MmZjMTE5ZTM2OTNlMmM0ZTY0MWI2MzlhZA%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/7cn46871ey2f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kv6eop | /r/LocalLLaMA/comments/1kv6eop/i_tasked_the_huggingface_smolvlm_256m_to_control/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YXltMGg4NzFleTJmMSsIw5Jo6JnOhGNxnxMg56RLkadqgoRaNULw7zamXe9N', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YXltMGg4NzFleTJmMSsIw5Jo6JnOhGNxnxMg56RLkadqgoRaNULw7zamXe9N.png?width=108&crop=smart&format=pjpg&auto=webp&s=ec4f5a60673b302a4a5e63ea50408e85ecb5f... | |
How to use large amounts of text data (read via API) directly with an LLM at runtime | 1 | [removed] | 2025-05-25T15:40:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kv5ivw/how_to_use_large_amounts_of_text_data_read_via/ | Flimsy-Bit-5491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv5ivw | false | null | t3_1kv5ivw | /r/LocalLLaMA/comments/1kv5ivw/how_to_use_large_amounts_of_text_data_read_via/ | false | false | self | 1 | null |
Turning my PC into a headless AI workstation | 1 | [removed] | 2025-05-25T15:38:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kv5hb9/turning_my_pc_into_a_headless_ai_workstation/ | Environmental_Hand35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv5hb9 | false | null | t3_1kv5hb9 | /r/LocalLLaMA/comments/1kv5hb9/turning_my_pc_into_a_headless_ai_workstation/ | false | false | self | 1 | null |
Turning my PC into a headless AI workstation | 1 | [removed] | 2025-05-25T15:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kv5dw5/turning_my_pc_into_a_headless_ai_workstation/ | Environmental_Hand35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv5dw5 | false | null | t3_1kv5dw5 | /r/LocalLLaMA/comments/1kv5dw5/turning_my_pc_into_a_headless_ai_workstation/ | false | false | self | 1 | null |
Turning my PC into a headless AI workstation | 1 | [removed] | 2025-05-25T15:20:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kv52do/turning_my_pc_into_a_headless_ai_workstation/ | Environmental_Hand35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv52do | false | null | t3_1kv52do | /r/LocalLLaMA/comments/1kv52do/turning_my_pc_into_a_headless_ai_workstation/ | false | false | self | 1 | null |
Fine-tuning LLM with loss calculated from its Generated Text | 1 | [removed] | 2025-05-25T15:11:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kv4uw6/finetuning_llm_with_loss_calculated_from_its/ | Disastrous-Movie4954 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv4uw6 | false | null | t3_1kv4uw6 | /r/LocalLLaMA/comments/1kv4uw6/finetuning_llm_with_loss_calculated_from_its/ | false | false | self | 1 | null |
How can I use my spare 1080ti? | 18 | I've 7800x3d and 7900xtx system and my old 1080ti is rusting. How can I put my old boy to work? | 2025-05-25T14:58:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kv4jim/how_can_i_use_my_spare_1080ti/ | tutami | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv4jim | false | null | t3_1kv4jim | /r/LocalLLaMA/comments/1kv4jim/how_can_i_use_my_spare_1080ti/ | false | false | self | 18 | null |
How can i add input text box to a project? | 1 | [removed] | 2025-05-25T14:41:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kv46jf/how_can_i_add_input_text_box_to_a_project/ | No_Cartographer_2380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv46jf | false | null | t3_1kv46jf | /r/LocalLLaMA/comments/1kv46jf/how_can_i_add_input_text_box_to_a_project/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YMKJTK0LhbWLlbn8LTaLoRAnr7ZX9BBK8dFVSvQad9c', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/4PVtE7PP8ITgGD3UCeoujcmD21aV3pLpfA7AJZHOJ9U.jpg?width=108&crop=smart&auto=webp&s=2387867b50d2bb8c1401ede113e0513a100a6d58', 'width': 108}, {'height': 123, 'url': 'h... |
Qwen3 just made up a word! | 0 | I don't see this happen very often, or rather at all, but WTF. How does it just make up a word "suchity". A large language model you'd think would have a grip on language. I understand Qwen3 was developed by CN, so maybe that's a factor. You all run into this, or is it rare? | 2025-05-25T14:25:24 | https://www.reddit.com/r/LocalLLaMA/comments/1kv3t3v/qwen3_just_made_up_a_word/ | Darth_Atheist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv3t3v | false | null | t3_1kv3t3v | /r/LocalLLaMA/comments/1kv3t3v/qwen3_just_made_up_a_word/ | false | false | self | 0 | null |
Would you say this is how LLMs work as well? | 0 | 2025-05-25T13:43:47 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kv2vky | false | null | t3_1kv2vky | /r/LocalLLaMA/comments/1kv2vky/would_you_say_this_is_how_llms_work_as_well/ | false | false | 0 | {'enabled': True, 'images': [{'id': '-CMtKrLF2C_SzXfd5lbgfCMoVA3Teym0V7nb6BFxRe0', 'resolutions': [{'height': 150, 'url': 'https://preview.redd.it/wrk4tjqjmx2f1.png?width=108&crop=smart&auto=webp&s=860bc5098894a13c1ebc6fa9a270ca9993eea088', 'width': 108}, {'height': 300, 'url': 'https://preview.redd.it/wrk4tjqjmx2f1.pn... | |||
How can I make LLMs like Qwen replace all em dashes with regular dashes in the output? | 1 | I don't understand why they insist using em dashes. How can I avoid that? | 2025-05-25T13:35:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kv2p83/how_can_i_make_llms_like_qwen_replace_all_em/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv2p83 | false | null | t3_1kv2p83 | /r/LocalLLaMA/comments/1kv2p83/how_can_i_make_llms_like_qwen_replace_all_em/ | false | false | self | 1 | null |
Qualcomm discrete NPU (Qualcomm AI 100) in upcoming Dell workstation laptops | 82 | 2025-05-25T13:23:47 | https://uk.pcmag.com/laptops/158095/dell-ditches-the-gpu-for-an-ai-chip-in-this-bold-new-workstation-laptop | SkyFeistyLlama8 | uk.pcmag.com | 1970-01-01T00:00:00 | 0 | {} | 1kv2gb2 | false | null | t3_1kv2gb2 | /r/LocalLLaMA/comments/1kv2gb2/qualcomm_discrete_npu_qualcomm_ai_100_in_upcoming/ | false | false | 82 | {'enabled': False, 'images': [{'id': '902v35-TsC922dsRVd7iD-FrZ6oMAGbqDoMn0CPvSpo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/GusyhEpTmXh7oXULalG-maSvDCVfQxTdBP1AMHOUr_A.jpg?width=108&crop=smart&auto=webp&s=f1f280d37b80be276ffb05952e3dc2bf15175f3e', 'width': 108}, {'height': 121, 'url': 'h... | ||
Help with prompts for role play? AI also tries to speak my (human) sentences in role play... | 2 | I have been experimenting with some small models for local LLM role play. Generally these small models are surprisingly creative. However - as I want to make the immersion perfect I only need spoken answers. My problem is that all models sometimes try to speak my part, too. I already got a pretty good prompt to get rid... | 2025-05-25T13:00:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kv1yge/help_with_prompts_for_role_play_ai_also_tries_to/ | Excellent-Amount-277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv1yge | false | null | t3_1kv1yge | /r/LocalLLaMA/comments/1kv1yge/help_with_prompts_for_role_play_ai_also_tries_to/ | false | false | self | 2 | null |
Are there any benchmarks that measure the model's propensity to agree? | 1 | [removed] | 2025-05-25T12:38:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kv1jyg/are_there_any_benchmarks_that_measure_the_models/ | _n0lim_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv1jyg | false | null | t3_1kv1jyg | /r/LocalLLaMA/comments/1kv1jyg/are_there_any_benchmarks_that_measure_the_models/ | false | false | self | 1 | null |
Life the universe and everything | 0 | Spent the morning debating with Google Gemini about life the universe and everything off the back of a question about religion I asked it to answer my son. After a long debate we eventually got to modern times, and onto the infinite multiverse theory and simulation theory, it argued both quite extensively (the debate ... | 2025-05-25T11:59:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kv0u3x/life_the_universe_and_everything/ | Prestigious_Cap_8364 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv0u3x | false | null | t3_1kv0u3x | /r/LocalLLaMA/comments/1kv0u3x/life_the_universe_and_everything/ | false | false | self | 0 | null |
What personal assistants do you use? | 7 | [This blog post](https://www.geoffreylitt.com/2025/04/12/how-i-made-a-useful-ai-assistant-with-one-sqlite-table-and-a-handful-of-cron-jobs) has inspired me to either find or build a personal assistant that has some sort of memory. I intend to use it as my main LLM hub, so that it can learn everything about me and store... | 2025-05-25T11:37:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kv0ha7/what_personal_assistants_do_you_use/ | Hrafnstrom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv0ha7 | false | null | t3_1kv0ha7 | /r/LocalLLaMA/comments/1kv0ha7/what_personal_assistants_do_you_use/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'NrHxnoJR-zYmFxeLa4gKPAPU0hdyYMhkiQFxo_XAyz0', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/fP5ZWvoimiu4ZaG4UyS_URHrZvvItY1kFEEgA3VANPY.jpg?width=108&crop=smart&auto=webp&s=4d927284299195e524f73550bba7d17b24f1a5cb', 'width': 108}, {'height': 141, 'url': 'h... |
Looking for an uncensored LLM for German chat | 1 | [removed] | 2025-05-25T11:32:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kv0eas/suche_uncensored_llm_für_deutschen_chat/ | Specialist_Clock_477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv0eas | false | null | t3_1kv0eas | /r/LocalLLaMA/comments/1kv0eas/suche_uncensored_llm_für_deutschen_chat/ | false | false | nsfw | 1 | null |
Good uncensored LLM for a chatbot with dir** talk. | 1 | [removed] | 2025-05-25T11:31:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kv0dh2/gutes_uncensored_llm_für_chatbot_mit_dir_talk/ | Specialist_Clock_477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kv0dh2 | false | null | t3_1kv0dh2 | /r/LocalLLaMA/comments/1kv0dh2/gutes_uncensored_llm_für_chatbot_mit_dir_talk/ | false | false | nsfw | 1 | null |
How would a LLM know it's undergoing pre-deployment testing? | 1 | [removed] | 2025-05-25T10:39:33 | phantom69_ftw | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kuzkck | false | null | t3_1kuzkck | /r/LocalLLaMA/comments/1kuzkck/how_would_a_llm_know_its_undergoing_predeployment/ | false | false | 1 | {'enabled': True, 'images': [{'id': 's6VPoCKxIxKwilWlx2ZhrrnMwSoZXsZXAHmI37Sw8sw', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/hg3alycopw2f1.jpeg?width=108&crop=smart&auto=webp&s=879cb1f0a9bfc2f833ebc236617c771b8734d38f', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/hg3alycopw2f1.jpe... | ||
Online inference is a privacy nightmare | 470 | I dont understand how big tech just convinced people to hand over so much stuff to be processed in plain text. Cloud storage at least can be all encrypted. But people have got comfortable sending emails, drafts, their deepest secrets, all in the open on some servers somewhere. Am I crazy? People were worried about post... | 2025-05-25T10:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kuzk3t/online_inference_is_a_privacy_nightmare/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuzk3t | false | null | t3_1kuzk3t | /r/LocalLLaMA/comments/1kuzk3t/online_inference_is_a_privacy_nightmare/ | false | false | self | 470 | null |
Initial thoughts on Google Jules | 18 | I've just been playing with Google Jules and honestly, I'm incredibly impressed by the amount of work it can handle almost autonomously. I haven't had that feeling in a long time. I'm usually very skeptical, and I've tested other code agents like Roo Code and Openhands with Gemini 2.5 Flash and local models (devstral/... | 2025-05-25T10:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kuzane/initial_thoughts_on_google_jules/ | maaakks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuzane | false | null | t3_1kuzane | /r/LocalLLaMA/comments/1kuzane/initial_thoughts_on_google_jules/ | false | false | self | 18 | null |
[Showcase] AIJobMate – CV and Cover Letter Generator powered by local LLMs and CrewAI agents | 7 | Hey everyone, Just launched a working prototype called \*\*AIJobMate\*\* – a CV and cover letter generator that runs locally using Ollama and CrewAI. 🔹 What's interesting: \- Uses your profile (parsed from freeform text) to build a structured knowledge base. \- Employs \*three autonomous agents\* via CrewAI: one w... | 2025-05-25T10:19:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kuz9m4/showcase_aijobmate_cv_and_cover_letter_generator/ | loglux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuz9m4 | false | null | t3_1kuz9m4 | /r/LocalLLaMA/comments/1kuz9m4/showcase_aijobmate_cv_and_cover_letter_generator/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'zv2lD1OPWztpN_FahRMnGsQNiBQOsE0boizzwFaQ9Y4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1yq1B3SWGG9Ww2rTKQlCULVga5FFQjH2gHm1rV1btzI.jpg?width=108&crop=smart&auto=webp&s=b69cff4efc4c324db467d201f6a7115a77cd83d7', 'width': 108}, {'height': 108, 'url': 'h... |
Idea on opencursor | 1 | [removed] | 2025-05-25T10:01:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kuyzu1/idea_on_opencursor/ | Striking_Ad9143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuyzu1 | false | null | t3_1kuyzu1 | /r/LocalLLaMA/comments/1kuyzu1/idea_on_opencursor/ | false | false | self | 1 | null |
Built OpenCursor - a free local AI coding assistant with unlimited web search | 1 | [removed] | 2025-05-25T10:00:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kuyzae/built_opencursor_a_free_local_ai_coding_assistant/ | Striking_Ad9143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuyzae | false | null | t3_1kuyzae | /r/LocalLLaMA/comments/1kuyzae/built_opencursor_a_free_local_ai_coding_assistant/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hgmdFXqZHIuyw5u-1aGFxd1HevKflxwZqFeH4XUMNMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xupo8f0JkOrH_sW8pZ9dgpwYFHrMMFxZYeYn8Pkexdw.jpg?width=108&crop=smart&auto=webp&s=a1cde002e37826fb8ff756dafdee2736072c5389', 'width': 108}, {'height': 108, 'url': 'h... |
Built OpenCursor - a free local AI coding assistant with unlimited web search | 1 | [removed] | 2025-05-25T09:58:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kuyy73/built_opencursor_a_free_local_ai_coding_assistant/ | Past-Presence-7457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuyy73 | false | null | t3_1kuyy73 | /r/LocalLLaMA/comments/1kuyy73/built_opencursor_a_free_local_ai_coding_assistant/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?width=108&crop=smart&auto=webp&s=46fa55dd1b1e587ab93bcbbdc6cb2de37b810bf3', 'width': 108}, {'height': 216, 'url': '... |
Built OpenCursor - a free local AI coding assistant with unlimited web search | 1 | [removed] | 2025-05-25T09:51:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kuyuvh/built_opencursor_a_free_local_ai_coding_assistant/ | Past-Presence-7457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuyuvh | false | null | t3_1kuyuvh | /r/LocalLLaMA/comments/1kuyuvh/built_opencursor_a_free_local_ai_coding_assistant/ | false | false | self | 1 | null |
OpenCursor — a free AI coding assistant that runs entirely locally with unlimited web search. | 1 | [removed] | 2025-05-25T09:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kuytna/opencursor_a_free_ai_coding_assistant_that_runs/ | Past-Presence-7457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuytna | false | null | t3_1kuytna | /r/LocalLLaMA/comments/1kuytna/opencursor_a_free_ai_coding_assistant_that_runs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hgmdFXqZHIuyw5u-1aGFxd1HevKflxwZqFeH4XUMNMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xupo8f0JkOrH_sW8pZ9dgpwYFHrMMFxZYeYn8Pkexdw.jpg?width=108&crop=smart&auto=webp&s=a1cde002e37826fb8ff756dafdee2736072c5389', 'width': 108}, {'height': 108, 'url': 'h... |
OpenCursor - a terminal-based AI coding assistant that actually works locally | 1 | [removed] | 2025-05-25T09:45:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kuyruo/opencursor_a_terminalbased_ai_coding_assistant/ | Past-Presence-7457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuyruo | false | null | t3_1kuyruo | /r/LocalLLaMA/comments/1kuyruo/opencursor_a_terminalbased_ai_coding_assistant/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'I0aFMmT8YKxRJNMUFdwUI9scdnCEov_m8S9asOOX2xU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BDU4H8jcYl9Ut9zyDLGvVIQuIVCtM2cxCuTXUkkn0lM.jpg?width=108&crop=smart&auto=webp&s=bc6290557fed5cf4e410fa4ac2d98791c89e9324', 'width': 108}, {'height': 108, 'url': 'h... | |
Just released OpenCursor - a terminal-based AI coding assistant that actually works locally | 1 | [removed] | 2025-05-25T09:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kuynp0/just_released_opencursor_a_terminalbased_ai/ | Past-Presence-7457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuynp0 | false | null | t3_1kuynp0 | /r/LocalLLaMA/comments/1kuynp0/just_released_opencursor_a_terminalbased_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hgmdFXqZHIuyw5u-1aGFxd1HevKflxwZqFeH4XUMNMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xupo8f0JkOrH_sW8pZ9dgpwYFHrMMFxZYeYn8Pkexdw.jpg?width=108&crop=smart&auto=webp&s=a1cde002e37826fb8ff756dafdee2736072c5389', 'width': 108}, {'height': 108, 'url': 'h... | |
Gemma 3n Architectural Innovations - Speculation and poking around in the model. | 164 | [Gemma 3n](https://huggingface.co/google/gemma-3n-E4B-it-litert-preview) is a new member of the Gemma family with free weights that was released during Google I/O. It's dedicated for on-device (edge) inference and supports image and text input, with audio input. Google has released an app that can be used for inference... | 2025-05-25T08:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kuy45r/gemma_3n_architectural_innovations_speculation/ | cpldcpu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuy45r | false | null | t3_1kuy45r | /r/LocalLLaMA/comments/1kuy45r/gemma_3n_architectural_innovations_speculation/ | false | false | 164 | {'enabled': False, 'images': [{'id': 'ToEdNy5NhkykDsaxCZQsUP-betLlqXHoDgf0FpaFJD8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EZj1oQJN95Oq7YDpbSs0ORxFqLi-24Ocse4J7I4sO0A.jpg?width=108&crop=smart&auto=webp&s=792d54cfa2d9ffe5e3c89b08443f7e6d89fdf96e', 'width': 108}, {'height': 116, 'url': 'h... | |
[New paper] Scaling law for quantization-aware training. Is it still possible for bitnet? | 1 | [removed] | 2025-05-25T08:58:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kuy3yh/new_paper_scaling_law_for_quantizationaware/ | RelationshipWeekly78 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuy3yh | false | null | t3_1kuy3yh | /r/LocalLLaMA/comments/1kuy3yh/new_paper_scaling_law_for_quantizationaware/ | false | false | self | 1 | null |
Best ai for Chinese to English translation? | 1 | [removed] | 2025-05-25T08:58:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kuy3vb/best_ai_for_chinese_to_english_translation/ | Civil_Candidate_824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuy3vb | false | null | t3_1kuy3vb | /r/LocalLLaMA/comments/1kuy3vb/best_ai_for_chinese_to_english_translation/ | false | false | self | 1 | null |
[New paper] Scaling law for quantization-aware training. Is it still possible for bitnet? | 1 | [removed] | 2025-05-25T08:58:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kuy3n0/new_paper_scaling_law_for_quantizationaware/ | Delicious-Number-237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuy3n0 | false | null | t3_1kuy3n0 | /r/LocalLLaMA/comments/1kuy3n0/new_paper_scaling_law_for_quantizationaware/ | false | false | self | 1 | null |
[New paper] Scaling law for quantization-aware training. Is it still possible for bitnet? | 1 | [removed] | 2025-05-25T08:54:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kuy1rs/new_paper_scaling_law_for_quantizationaware/ | Delicious-Number-237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuy1rs | false | null | t3_1kuy1rs | /r/LocalLLaMA/comments/1kuy1rs/new_paper_scaling_law_for_quantizationaware/ | false | false | self | 1 | null |
Tired of manually copy-pasting files for LLMs or docs? I built a (free, open-source) tool for that! | 39 | Hey Reddit, Ever find yourself jumping between like 20 different files, copying and pasting code or text just to feed it into an LLM, or to bundle up stuff for documentation? I was doing that all the time and it was driving me nuts. So, I built a little desktop app called **File Collector** to make it easier. It's pr... | 2025-05-25T08:41:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kuxvgt/tired_of_manually_copypasting_files_for_llms_or/ | ps5cfw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuxvgt | false | null | t3_1kuxvgt | /r/LocalLLaMA/comments/1kuxvgt/tired_of_manually_copypasting_files_for_llms_or/ | false | false | self | 39 | null |
Looking for a lightweight Al model that can run locally on Android or iOS devices with only 2-4GB of CPU RAM. Does anyone know of any options besides VRAM models? | 1 | [removed] | 2025-05-25T08:40:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kuxuuj/looking_for_a_lightweight_al_model_that_can_run/ | ExplanationEqual2539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuxuuj | false | null | t3_1kuxuuj | /r/LocalLLaMA/comments/1kuxuuj/looking_for_a_lightweight_al_model_that_can_run/ | false | false | self | 1 | null |
What makes the Mac Pro so efficient in running LLMs? | 24 | I am specifically referring to the 1TB ram version, able apparently to run deepseek at several token-per-second speed, using unified memory and integrated graphics.
Second to this: any way to replicate in the x86 world? Like perhaps with an 8dimm motherboard and one of the latest integrated Xe2 cpus? (although this wo... | 2025-05-25T08:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kuxgh7/what_makes_the_mac_pro_so_efficient_in_running/ | goingsplit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuxgh7 | false | null | t3_1kuxgh7 | /r/LocalLLaMA/comments/1kuxgh7/what_makes_the_mac_pro_so_efficient_in_running/ | false | false | self | 24 | null |
I took apart a license plate reader camera and found out what models it's using | 1 | [removed] | 2025-05-25T08:05:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kuxcvp/i_took_apart_a_license_plate_reader_camera_and/ | EyesOffCR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuxcvp | false | null | t3_1kuxcvp | /r/LocalLLaMA/comments/1kuxcvp/i_took_apart_a_license_plate_reader_camera_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6g1E2VbPh-tSShyPpUUyjBRYcHoYTys2q3pqubSNB4E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jstKuCB2ew-6zq0dybL-UssQnTTVpbY2tKe8YKdLiMo.jpg?width=108&crop=smart&auto=webp&s=05d602697161611b64a561e0fb2f7b76daefc2b6', 'width': 108}, {'height': 121, 'url': 'h... |
👀 BAGEL-7B-MoT: The Open-Source GPT-Image-1 Alternative You’ve Been Waiting For. | 456 | 2025-05-25T07:24:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kuwrll/bagel7bmot_the_opensource_gptimage1_alternative/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuwrll | false | null | t3_1kuwrll | /r/LocalLLaMA/comments/1kuwrll/bagel7bmot_the_opensource_gptimage1_alternative/ | false | false | 456 | null | ||
Overview of TheDrummer's Models | 8 | This is not perfect, but here is a visualization of our fav finetuner u/TheLocalDrummer's published models.
*Processing img elcgupiwmv2f1...*
Information Sources:
\- Huggingface Profile
\- Reddit Posts on r/LocalLLaMA and r/SillyTavernAI | 2025-05-25T07:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kuwgas/overview_of_thedrummers_models/ | JumpJunior7736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuwgas | false | null | t3_1kuwgas | /r/LocalLLaMA/comments/1kuwgas/overview_of_thedrummers_models/ | false | false | self | 8 | null |
Best open source model for enterprise conversational support agent - worth it? | 5 | One of the client i consult for wants to build a enterprise customer facing support agent which would be able to talk to at least 30 different APIs using tools to answer customer queries. Also has multi level workflows like check this field from this API then follow this path and check this API and respond like this to... | 2025-05-25T06:20:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kuvu2n/best_open_source_model_for_enterprise/ | dnivra26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuvu2n | false | null | t3_1kuvu2n | /r/LocalLLaMA/comments/1kuvu2n/best_open_source_model_for_enterprise/ | false | false | self | 5 | null |
Best Local Model for Prolog? | 1 | [removed] | 2025-05-25T06:07:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kuvn73/best_local_model_for_prolog/ | AcrolonsRevenge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuvn73 | false | null | t3_1kuvn73 | /r/LocalLLaMA/comments/1kuvn73/best_local_model_for_prolog/ | false | false | self | 1 | null |
Major update to my voice extractor (speech dataset creation program) | 18 | I implemented Bandit v2 (https://github.com/kwatcharasupat/bandit-v2), a cinematic audio source separator capable of separating voice from movies.
Upgraded speaker verification models and process
Updated Colab GUI
The results are much better now but still not perfect. Any feedback is appreciated | 2025-05-25T05:05:36 | https://github.com/ReisCook/Voice_Extractor | DumaDuma | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kuuovz | false | null | t3_1kuuovz | /r/LocalLLaMA/comments/1kuuovz/major_update_to_my_voice_extractor_speech_dataset/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'qDPOpaJtK4NkhajjfQbvrfflXTGUfL6xp7H1jsck7ZI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mJd7NH-LWt5HfRARWlEBz3heZ7tx2XoH2FVudtuuI20.jpg?width=108&crop=smart&auto=webp&s=67bc7f7f9480005ad8dd64a9f7be0124f9d0fdd4', 'width': 108}, {'height': 108, 'url': 'h... | |
ComfyUI for Adreno X1 | 0 | I can run ComfyUI workflows on my Snapdragon X Elite CPU, but I want to use the iGPU (Adreno X1) or NPU for performance. I doubt there is NPU support, but I imagine there has to be a way for ComfyUI to support alternative GPUs. | 2025-05-25T04:37:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kuu87p/comfyui_for_adreno_x1/ | TheMicrosoftMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuu87p | false | null | t3_1kuu87p | /r/LocalLLaMA/comments/1kuu87p/comfyui_for_adreno_x1/ | false | false | self | 0 | null |
SaaS for custom classification models | 1 | [removed] | 2025-05-25T04:27:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kuu2z6/saas_for_custom_classification_models/ | Fluid-Stress7113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuu2z6 | false | null | t3_1kuu2z6 | /r/LocalLLaMA/comments/1kuu2z6/saas_for_custom_classification_models/ | false | false | self | 1 | null |
Suggest me open source text to speech for real time streaming | 2 | currently using elevenlabs for text to speech the voice quality is not good in hindi and also it is [costly.So](http://costly.So) i thinking of moving to open source TTS.Suggest me good open source alternative for eleven labs with low latency and good hindi voice result. | 2025-05-25T04:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kuty1j/suggest_me_open_source_text_to_speech_for_real/ | OkMine4526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuty1j | false | null | t3_1kuty1j | /r/LocalLLaMA/comments/1kuty1j/suggest_me_open_source_text_to_speech_for_real/ | false | false | self | 2 | null |
How to find AI with no guardrails? | 0 | I am lost trying to find one. I downloaded llama and ran the mistral dolphin and still it told me that it couldn’t help me. I don’t understand. There has to be one out there with zero guardrails. | 2025-05-25T03:50:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kutgnr/how_to_find_ai_with_no_guardrails/ | DetailFocused | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kutgnr | false | null | t3_1kutgnr | /r/LocalLLaMA/comments/1kutgnr/how_to_find_ai_with_no_guardrails/ | false | false | self | 0 | null |
Emergent Symbolic Cognition and Recursive Identity Stabilization in a Locally-Deployed Language Model | 1 | [removed] | 2025-05-25T03:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kusuzt/emergent_symbolic_cognition_and_recursive/ | naughstrodumbass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kusuzt | false | null | t3_1kusuzt | /r/LocalLLaMA/comments/1kusuzt/emergent_symbolic_cognition_and_recursive/ | false | false | self | 1 | null |
Emergent Symbolic Cognition and Recursive Identity Stabilization in a Locally-Deployed Language Model | 1 | Title: Emergent Symbolic Cognition and Recursive Identity Stabilization in a Locally-Deployed
Language Model
Author: Michael P
Affiliation: Independent Researcher, Symbolic Systems & Recursive Cognition
Date: May 25, 2025
Abstract:
This study documents the emergence of self-referential identity, symbolic recursion, and... | 2025-05-25T03:05:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kusp5x/emergent_symbolic_cognition_and_recursive/ | naughstrodumbass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kusp5x | false | null | t3_1kusp5x | /r/LocalLLaMA/comments/1kusp5x/emergent_symbolic_cognition_and_recursive/ | false | false | self | 1 | null |
Emergent Symbolic Cognition and Recursive Identity Stabilization in a Locally-Deployed Language Model | 1 | [removed] | 2025-05-25T02:46:00 | https://osf.io/269jk/?view_only=f0158b2deba0481085d6db10760d113e | naughstrodumbass | osf.io | 1970-01-01T00:00:00 | 0 | {} | 1kuscwh | false | null | t3_1kuscwh | /r/LocalLLaMA/comments/1kuscwh/emergent_symbolic_cognition_and_recursive/ | false | false | default | 1 | null |
My Gemma-3 musing .... after a good time dragging it through a grinder | 31 | I spent some time with gemma-3 in the mines, so this is not a "first impression", rather than a 1000th impression.,
Gemma-3 is shockingly good at the creativity.
Of course it likes to reuse slop, and similes and all that -isms we all love. Everything is like something to the point where your skull feels like it’s b... | 2025-05-25T02:12:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kurrkz/my_gemma3_musing_after_a_good_time_dragging_it/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kurrkz | false | null | t3_1kurrkz | /r/LocalLLaMA/comments/1kurrkz/my_gemma3_musing_after_a_good_time_dragging_it/ | false | false | self | 31 | null |
Which library to use for indexing and searching through documents in your agentic system | 1 | [removed] | 2025-05-25T01:37:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kur5zo/which_library_to_use_for_indexing_and_searching/ | Blue_Dude3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kur5zo | false | null | t3_1kur5zo | /r/LocalLLaMA/comments/1kur5zo/which_library_to_use_for_indexing_and_searching/ | false | false | self | 1 | null |
Why does Claude has only 200k in context window and why are it's tokens so costly | 1 | [removed] | 2025-05-25T01:36:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kur4sz/why_does_claude_has_only_200k_in_context_window/ | Solid_Woodpecker3635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kur4sz | false | null | t3_1kur4sz | /r/LocalLLaMA/comments/1kur4sz/why_does_claude_has_only_200k_in_context_window/ | false | false | self | 1 | null |
Round Up: Current Best Local Models under 40B for Code & Tool Calling, General Chatting, Vision, and Creative Story Writing. | 51 | Each week, we get new models and fine-tunes that is really difficult of keep up with or test all of them.
The main challenge I personally face is to identify which model and its versions (different fine-tunes) that is most suitable for a specific domain. Fine-tunes of existing base models are especially frustrating b... | 2025-05-25T01:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kur0xh/round_up_current_best_local_models_under_40b_for/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kur0xh | false | null | t3_1kur0xh | /r/LocalLLaMA/comments/1kur0xh/round_up_current_best_local_models_under_40b_for/ | false | false | self | 51 | null |
Doge AI assistant on your desktop | 17 | I've turned Doge into an AI assistant with interactive reactions feature (and a chat history) so you could talk to him whenever you want. Currently available only for macOS, but it's possible to build from source code for other platforms as well.
Hope it helps someone's mood. Would also love to hear your feedback and ... | 2025-05-25T00:54:54 | BruhFortniteLaggyTho | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kuqebp | false | null | t3_1kuqebp | /r/LocalLLaMA/comments/1kuqebp/doge_ai_assistant_on_your_desktop/ | false | false | 17 | {'enabled': True, 'images': [{'id': 'HIWSEK2IO3VBVWOvnAewI9zJPVtkiLDFX3EslzsKbDg', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/m9c4u0y5tt2f1.gif?width=108&crop=smart&format=png8&s=4ddcdd59b503f6d4cf3bc87c4435ba387e9a9821', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/m9c4u0y5tt2f1.g... | ||
Best Model for Vision? | 1 | [removed] | 2025-05-25T00:27:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kupwnm/best_model_for_vision/ | Longjumping-Cup-9921 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kupwnm | false | null | t3_1kupwnm | /r/LocalLLaMA/comments/1kupwnm/best_model_for_vision/ | false | false | self | 1 | null |
Setting up offline RAG for programming docs. Best practices? | 18 | I typically use LLMs as syntax reminders or quick lookups; I handle the thinking/problem-solving myself.
Constraints
* The best I can run locally is around 8B, and these aren't always great on factual accuracy.
* I don't always have internet access.
So I'm thinking of building a RAG setup with offline docs (e.g., do... | 2025-05-25T00:13:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kupnjw/setting_up_offline_rag_for_programming_docs_best/ | Otis43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kupnjw | false | null | t3_1kupnjw | /r/LocalLLaMA/comments/1kupnjw/setting_up_offline_rag_for_programming_docs_best/ | false | false | self | 18 | null |
Train TTS in other language | 4 | Hello guys, I am super new to this AI world and TTS. I have been using ChatGPT for a week now and it is more overwhelming than helpful.
So I am going the oldschool way and asking people for help.
I would like to use tts for a different language than the common one. In fact it is Macedonian and it is kyrillic letter... | 2025-05-24T23:45:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kup465/train_tts_in_other_language/ | Boom069-le | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kup465 | false | null | t3_1kup465 | /r/LocalLLaMA/comments/1kup465/train_tts_in_other_language/ | false | false | self | 4 | null |
ChatGPT and GEMINI AI will Gaslight you. Everyone needs to copy and paste this right now. | 0 |
Everyone needs to copy and paste what's below right now. ChatGPT and Gemini are straight up lying to you more than before. The Universal one is on the bottom.
ChatGPT can sound CORRECT even when it’s wrong. take control, activate a strict directive that forces speculation to be labeled, admit when it can’t verif... | 2025-05-24T23:41:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kup1n4/chatgpt_and_gemini_ai_will_gaslight_you_everyone/ | RehanRC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kup1n4 | false | null | t3_1kup1n4 | /r/LocalLLaMA/comments/1kup1n4/chatgpt_and_gemini_ai_will_gaslight_you_everyone/ | false | false | self | 0 | null |
AI on qnn NPU on android? | 1 | [removed] | 2025-05-24T22:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kun45h/ai_on_qnn_npu_on_android/ | x-0D | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kun45h | false | null | t3_1kun45h | /r/LocalLLaMA/comments/1kun45h/ai_on_qnn_npu_on_android/ | false | false | self | 1 | null |
This week news. | 0 | It's been a busy week in the world of Artificial Intelligence, with developments spanning new model releases, ethical discussions, regulatory shifts, and innovative applications. Here's a comprehensive press review of AI news from the last seven days:
**New Models and Corporate Developments:**
* **Alibaba's Qwen3 M... | 2025-05-24T22:03:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kun1zy/this_week_news/ | Robert__Sinclair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kun1zy | false | null | t3_1kun1zy | /r/LocalLLaMA/comments/1kun1zy/this_week_news/ | false | false | self | 0 | null |
R2R | 1 | Anyone try this RAG framework out? It seems pretty cool, but I couldn't get it to run with the dashboard they provide without hacking it. | 2025-05-24T21:51:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kumsv9/r2r/ | databasehead | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kumsv9 | false | null | t3_1kumsv9 | /r/LocalLLaMA/comments/1kumsv9/r2r/ | false | false | self | 1 | null |
Looking to build a local AI assistant - Where do I start? | 3 | Hey everyone! I’m interested in creating a local AI assistant that I can interact with using voice. Basically, something like a personal Jarvis, but running fully offline or mostly locally.
I’d love to:
- Ask it things by voice
- Have it respond with voice (preferably in a custom voice)
- Maybe personalize it with dif... | 2025-05-24T21:42:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kumm9e/looking_to_build_a_local_ai_assistant_where_do_i/ | Andrei1744 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kumm9e | false | null | t3_1kumm9e | /r/LocalLLaMA/comments/1kumm9e/looking_to_build_a_local_ai_assistant_where_do_i/ | false | false | self | 3 | null |
My take on models and companies | 1 | [removed] | 2025-05-24T21:26:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kuma0c/my_take_on_models_and_companies/ | KanyeWestLover232 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuma0c | false | null | t3_1kuma0c | /r/LocalLLaMA/comments/1kuma0c/my_take_on_models_and_companies/ | false | false | self | 1 | null |
Getting continue.dev to work? | 1 | [removed] | 2025-05-24T20:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/1kukx7c/getting_continuedev_to_work/ | Correct-Anything-959 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kukx7c | false | null | t3_1kukx7c | /r/LocalLLaMA/comments/1kukx7c/getting_continuedev_to_work/ | false | false | self | 1 | null |
Good build plan? | 1 | [removed] | 2025-05-24T20:12:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kukot3/good_build_plan/ | FreelanceTrading | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kukot3 | false | null | t3_1kukot3 | /r/LocalLLaMA/comments/1kukot3/good_build_plan/ | false | false | self | 1 | null |
46pct Aider Polyglot in 16GB VRAM with Qwen3-14B | 104 | After some tuning, and a tiny hack to aider, I have achieved a Aider Polyglot benchmark of pass\_rate\_2: 45.8 with 100% of cases well-formed, using nothing more than a 16GB 5070 Ti and Qwen3-14b, with the model running entirely offloaded to GPU.
That result is on a par with "chatgpt-4o-latest (2025-03-29)" on the [Ai... | 2025-05-24T20:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kukjoe/46pct_aider_polyglot_in_16gb_vram_with_qwen314b/ | andrewmobbs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kukjoe | false | null | t3_1kukjoe | /r/LocalLLaMA/comments/1kukjoe/46pct_aider_polyglot_in_16gb_vram_with_qwen314b/ | false | false | self | 104 | {'enabled': False, 'images': [{'id': 'ZchV7t9Dn_NHk0_ZW8xmT-9VDV112iNqFmbb4fJPYHo', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=108&crop=smart&auto=webp&s=56e789a35daba2a074928af59f11e222a54851d6', 'width': 108}, {'height': 124, 'url': 'h... |
Manifold v0.12.0 - ReAct Agent with MCP tools access. | 25 | Manifold is a platform for workflow automation using AI assistants. Please view the README for more example images. This has been mostly a solo effort and the scope is quite large so view this as an experimental hobby project not meant to be deployed to production systems (today). The documentation is non-existent, but... | 2025-05-24T19:49:22 | https://www.reddit.com/gallery/1kuk6ke | LocoMod | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kuk6ke | false | null | t3_1kuk6ke | /r/LocalLLaMA/comments/1kuk6ke/manifold_v0120_react_agent_with_mcp_tools_access/ | false | false | 25 | null | |
Manifold v0.12.1 - ReAct Agent ❤️ MCP Tools | 1 | Manifold is a platform for workflow automation using AI assistants. Please view the README for more example images. This has been mostly a solo effort and the scope is quite large so view this as an experimental hobby project not meant to be deployed to production systems (today). The documentation is non-existent, but... | 2025-05-24T19:46:01 | https://www.reddit.com/gallery/1kuk43c | LocoMod | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kuk43c | false | null | t3_1kuk43c | /r/LocalLLaMA/comments/1kuk43c/manifold_v0121_react_agent_mcp_tools/ | false | false | 1 | null | |
We believe the future of AI is local, private, and personalized. | 249 | That’s why we built **Cobolt** — a free cross-platform AI assistant that runs entirely on your device.
Cobolt represents our vision for the future of AI assistants:
* Privacy by design (everything runs locally)
* Extensible through Model Context Protocol (MCP)
* Personalized without compromising your data
* Powered b... | 2025-05-24T19:36:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kujwzl/we_believe_the_future_of_ai_is_local_private_and/ | ice-url | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kujwzl | false | null | t3_1kujwzl | /r/LocalLLaMA/comments/1kujwzl/we_believe_the_future_of_ai_is_local_private_and/ | false | false | self | 249 | {'enabled': False, 'images': [{'id': 'CpUQbUjBsnQrEuDpZLBNna-NYXOmh-8AKUwIzQbUkak', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nzP7t8k2KxGCF3NRNybLjr0t9T8vc9dNYx-MZgmWXMg.jpg?width=108&crop=smart&auto=webp&s=6957a535e8b998ae121acca88eae4be5ee9b9d8a', 'width': 108}, {'height': 108, 'url': 'h... |
Has anyone built by now a windows voice mode app that works with any gguf? | 0 | That recognizes voice, generates a reply and speaks it?
Would be a cool thing to have locally.
Thanks in advance! | 2025-05-24T19:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kujsdd/has_anyone_built_by_now_a_windows_voice_mode_app/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kujsdd | false | null | t3_1kujsdd | /r/LocalLLaMA/comments/1kujsdd/has_anyone_built_by_now_a_windows_voice_mode_app/ | false | false | self | 0 | null |
Diaggregated prefill | 1 | [removed] | 2025-05-24T19:16:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kujgw3/diaggregated_prefill/ | Big_Question341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kujgw3 | false | null | t3_1kujgw3 | /r/LocalLLaMA/comments/1kujgw3/diaggregated_prefill/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/1AaqzztA1tMXgq6_cpQPAxqj0ZPptm251RAC0X-CWLc.jpg?width=108&crop=smart&auto=webp&s=f5a0c499b4ac35cfc49eb0368d1016f7a784decc', 'width': 108}, {'height': 216, 'url': '... |
NVLink vs No NVLink: Devstral Small 2x RTX 3090 Inference Benchmark with vLLM | 60 | **TL;DR: NVLink provides only ~5% performance improvement for inference on 2x RTX 3090s. Probably not worth the premium unless you already have it. Also, Mistral API is crazy cheap.**
This model seems like a holy grail for people with 2x24GB, but considering the price of the Mistral API, this really isn't very cost ef... | 2025-05-24T18:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kuimwg/nvlink_vs_no_nvlink_devstral_small_2x_rtx_3090/ | Traditional-Gap-3313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuimwg | false | null | t3_1kuimwg | /r/LocalLLaMA/comments/1kuimwg/nvlink_vs_no_nvlink_devstral_small_2x_rtx_3090/ | false | false | self | 60 | null |
What type of AI platform or process to get this result | 1 | [removed] | 2025-05-24T18:31:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kuigs6/what_type_of_ai_platform_or_process_to_get_this/ | Zestyclose_Bath7987 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuigs6 | false | null | t3_1kuigs6 | /r/LocalLLaMA/comments/1kuigs6/what_type_of_ai_platform_or_process_to_get_this/ | false | false | self | 1 | null |
Why arent llms pretrained at fp8? | 56 | There must be some reason but the fact that models are always shrunk to q8 or lower at inference got me wondering why we need higher bpw in the first place. | 2025-05-24T18:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kui73k/why_arent_llms_pretrained_at_fp8/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kui73k | false | null | t3_1kui73k | /r/LocalLLaMA/comments/1kui73k/why_arent_llms_pretrained_at_fp8/ | false | false | self | 56 | null |
OpenHands + Devstral is utter crap as of May 2025 (24G VRAM) | 226 | Following the recent [announcement of Devstral](https://mistral.ai/news/devstral), I gave [OpenHands](https://github.com/All-Hands-AI/OpenHands?tab=readme-ov-file#-running-openhands-locally) \+ Devstral (Q4\_K\_M on [Ollama](https://ollama.com/library/devstral:24b)) a try for a fully offline code agent experience.
# O... | 2025-05-24T18:12:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kui17w/openhands_devstral_is_utter_crap_as_of_may_2025/ | foobarg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kui17w | false | null | t3_1kui17w | /r/LocalLLaMA/comments/1kui17w/openhands_devstral_is_utter_crap_as_of_may_2025/ | false | false | self | 226 | {'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'h... |
How to get started with Local LLMs | 6 | I am python coder with good understanding of FastAPI and Pandas
I want to start on Local LLMs for building AI Agents. How do I get started
Do I need GPUs
Which are good resources? | 2025-05-24T17:53:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kuhlho/how_to_get_started_with_local_llms/ | bull_bear25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuhlho | false | null | t3_1kuhlho | /r/LocalLLaMA/comments/1kuhlho/how_to_get_started_with_local_llms/ | false | false | self | 6 | null |
Qwen3 30B A3B unsloth GGUF vs MLX generation speed difference | 6 | Hey folks. Is it just me or unsloth quants got slower with Qwen3 models? I can almost swear that there was 5-10t/s difference between these two quants before. I was getting 60-75t/s with GGUF and 80t/s with MLX. And I am pretty sure that both were 8bit quants. In fact, I was using UD 8\_K\_XL from unsloth, which is sup... | 2025-05-24T17:14:24 | https://www.reddit.com/r/LocalLLaMA/comments/1kugp9h/qwen3_30b_a3b_unsloth_gguf_vs_mlx_generation/ | ahmetegesel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kugp9h | false | null | t3_1kugp9h | /r/LocalLLaMA/comments/1kugp9h/qwen3_30b_a3b_unsloth_gguf_vs_mlx_generation/ | false | false | self | 6 | null |
Cua : Docker Container for Computer Use Agents | 99 | Cua is the Docker for Computer-Use Agent, an open-source framework that enables AI agents to control full operating systems within high-performance, lightweight virtual containers.
https://github.com/trycua/cua
| 2025-05-24T16:44:45 | https://v.redd.it/2ibhpziwdr2f1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kug045 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2ibhpziwdr2f1/DASHPlaylist.mpd?a=1750697099%2CMGE4MGI4M2Q0NzVlNTlmODM5YTk2N2FmZGUzY2YxNjM2ODg2ZDBmNDhlNzQ4YzRkMTk1NjBkNDFjYTAwZjMxOA%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/2ibhpziwdr2f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kug045 | /r/LocalLLaMA/comments/1kug045/cua_docker_container_for_computer_use_agents/ | false | false | 99 | {'enabled': False, 'images': [{'id': 'bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx.png?width=108&crop=smart&format=pjpg&auto=webp&s=89502391bd6a292dfbfb8e8e39bf7c7494822... | |
Intel Arc B580 vs RTX 5060 | 1 | [removed] | 2025-05-24T16:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kufxxc/intel_arc_b580_vs_rtx_5060/ | InsideHuman3675 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kufxxc | false | null | t3_1kufxxc | /r/LocalLLaMA/comments/1kufxxc/intel_arc_b580_vs_rtx_5060/ | false | false | self | 1 | null |
Noob looking for advice | 1 | [removed] | 2025-05-24T16:19:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kuffb3/noob_looking_for_advice/ | Revolutionary-Ad8992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuffb3 | false | null | t3_1kuffb3 | /r/LocalLLaMA/comments/1kuffb3/noob_looking_for_advice/ | false | false | self | 1 | null |
Best model for captioning? | 3 | What’s the best model right now for captioning pictures?
I’m just interested in playing around and captioning individual pictures on a one by one basis | 2025-05-24T16:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kufdow/best_model_for_captioning/ | thetobesgeorge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kufdow | false | null | t3_1kufdow | /r/LocalLLaMA/comments/1kufdow/best_model_for_captioning/ | false | false | self | 3 | null |
Best small model for code auto-completion? | 10 | Hi,
I am currently using the [continue.dev](http://continue.dev) extension for VS Code. I want to use a small model for code autocompletion, something that is 3B or less as I intend to run it locally using llama.cpp (no gpu).
What would be a good model for such a use case? | 2025-05-24T16:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kuf20u/best_small_model_for_code_autocompletion/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuf20u | false | null | t3_1kuf20u | /r/LocalLLaMA/comments/1kuf20u/best_small_model_for_code_autocompletion/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108}, {'height': 113, 'url': 'h... |
Is It too Early to Compare Claude 4 vs Gemini 2.5 Pro? | 1 | [removed] | 2025-05-24T15:56:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kuewbq/is_it_too_early_to_compare_claude_4_vs_gemini_25/ | Majestic-Trainer-885 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuewbq | false | null | t3_1kuewbq | /r/LocalLLaMA/comments/1kuewbq/is_it_too_early_to_compare_claude_4_vs_gemini_25/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1fS1_tAqNBoatIEfS5_hYdlbWK8oGwV6B1rZTWgjKKc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=108&crop=smart&auto=webp&s=a1ff461fc6337bd53890f0303ed97949b1c99883', 'width': 108}, {'height': 121, 'url': 'h... |
New gemma 3n is amazing, wish they suported pc gpu inference | 125 | Is there at least a workaround to run .task models on pc? Works great on my android phone but id love to play around and deploy it on a local server | 2025-05-24T15:41:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kuejfp/new_gemma_3n_is_amazing_wish_they_suported_pc_gpu/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kuejfp | false | null | t3_1kuejfp | /r/LocalLLaMA/comments/1kuejfp/new_gemma_3n_is_amazing_wish_they_suported_pc_gpu/ | false | false | self | 125 | null |