| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Did Stanford's Hugging Face account get hacked? | 558 | 2025-05-16T14:26:25 | ObscuraMirage | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko27bi | false | null | t3_1ko27bi | /r/LocalLLaMA/comments/1ko27bi/did_standford_huggingface_account_got_hacked/ | false | false | nsfw | 558 | {'enabled': True, 'images': [{'id': 'nmvWE6RXrgxkEhaUD2bJyOcNR5edy8heN3MGrGlP--Y', 'resolutions': [{'height': 206, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.jpeg?width=108&crop=smart&auto=webp&s=41b7cd3f422c4872d7e23e88565ba7ce334094dd', 'width': 108}, {'height': 412, 'url': 'https://preview.redd.it/0j4j7z8yl51f1.j... | ||
I built a tiny Linux OS to make your LLMs actually useful on your machine | 305 | Hey folks — I’ve been working on llmbasedos, a minimal Arch-based Linux distro that turns your local environment into a first-class citizen for any LLM frontend (like Claude Desktop, VS Code, ChatGPT+browser, etc).
The problem: every AI app has to reinvent the wheel — file pickers, OAuth flows, plugins, sandboxing…
Th... | 2025-05-16T14:12:00 | https://github.com/iluxu/llmbasedos | iluxu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ko1v1k | false | null | t3_1ko1v1k | /r/LocalLLaMA/comments/1ko1v1k/i_built_a_tiny_linux_os_to_make_your_llms/ | false | false | 305 | {'enabled': False, 'images': [{'id': 'KLay6A6X7_CWMxBWlijhND-p-8uSznXOlE6As5ASn2o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Opn0lWenfUSxX1FlZaKUoyxIpn8_sSk-rxtkMoj2byo.jpg?width=108&crop=smart&auto=webp&s=fc673e466902c94f83124f79a6442e6562bb4ba7', 'width': 108}, {'height': 108, 'url': 'h... | |
EU inference providers with strong privacy | 8 | I would like an EU-based company (so AWS, Google Vertex, and Azure are non-starters) that provides an inference API for open-weight models hosted in the EU with strong privacy guarantees.
I want to pay per token not pay for some sort of GPU instance.
So far I have found https://nebius.com/, however in their privacy polic... | 2025-05-16T14:10:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ko1u5c/eu_inference_providers_with_strong_privacy/ | Ambitious_Subject108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko1u5c | false | null | t3_1ko1u5c | /r/LocalLLaMA/comments/1ko1u5c/eu_inference_providers_with_strong_privacy/ | false | false | self | 8 | null |
If you are comparing models, please state the task you are using them for! | 51 | The number of posts like "Why is deepseek so much better than qwen 235," with no information about the task the poster is comparing the models on, is maddening. ALL models' performance levels vary across domains, and many models are highly domain specific. Some people are creating waifus, some are coding, some are... | 2025-05-16T14:10:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ko1tg5/if_you_are_comparing_models_please_state_the_task/ | nomorebuttsplz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko1tg5 | false | null | t3_1ko1tg5 | /r/LocalLLaMA/comments/1ko1tg5/if_you_are_comparing_models_please_state_the_task/ | false | false | self | 51 | null |
Local OCR in mobile applications with React Native ExecuTorch | 1 | [removed] | 2025-05-16T14:02:45 | https://v.redd.it/v6za0645g51f1 | FinancialAd1961 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ko1n5f | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/v6za0645g51f1/DASHPlaylist.mpd?a=1749996188%2CYjAxZmQ2NmQyOTdhMjM5YWFjNjEwNWNiOGE2MzcyYWMzNzI4MTU1YmM4OWZkOTU0ZGQ4M2M2MmIwMTg5NzVkMg%3D%3D&v=1&f=sd', 'duration': 101, 'fallback_url': 'https://v.redd.it/v6za0645g51f1/DASH_720.mp4?source=fallback', 'h... | t3_1ko1n5f | /r/LocalLLaMA/comments/1ko1n5f/local_ocr_in_mobile_applications_with_react/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Z21hbHMyMzVnNTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=108&crop=smart&format=pjpg&auto=webp&s=dd1c95bc94383d13fa9821e8a5cf292e2931c... | |
Ollama violating llama.cpp license for over a year | 529 | 2025-05-16T13:57:38 | https://news.ycombinator.com/item?id=44003741 | op_loves_boobs | news.ycombinator.com | 1970-01-01T00:00:00 | 0 | {} | 1ko1iob | false | null | t3_1ko1iob | /r/LocalLLaMA/comments/1ko1iob/ollama_violating_llamacpp_license_for_over_a_year/ | false | false | default | 529 | null | |
What model repositories work with ollama pull? | 1 | [removed] | 2025-05-16T13:56:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ko1hzw/what_model_repositories_work_with_ollama_pull/ | synthphreak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko1hzw | false | null | t3_1ko1hzw | /r/LocalLLaMA/comments/1ko1hzw/what_model_repositories_work_with_ollama_pull/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'h... |
Finetuning a speech-based model | 5 | Hi, I have summer vacation coming up and want to learn about LLMs, especially speech-based models.
I want to build a restaurant-booking AI, so I'd appreciate it if there is a way to make it. I would like to know some directions and tips on this. | 2025-05-16T13:36:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ko11c5/finetuning_speech_based_model/ | FastCommission2913 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko11c5 | false | null | t3_1ko11c5 | /r/LocalLLaMA/comments/1ko11c5/finetuning_speech_based_model/ | false | false | self | 5 | null |
Multi-GPU Inference and Training Performance Issues | 1 | [removed] | 2025-05-16T13:35:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ko10b7/multigpu_inference_and_training_performance_issues/ | ba2sYd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko10b7 | false | null | t3_1ko10b7 | /r/LocalLLaMA/comments/1ko10b7/multigpu_inference_and_training_performance_issues/ | false | false | self | 1 | null |
Why we're not hitting the wall. | 0 | [removed] | 2025-05-16T13:25:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ko0sw5/why_were_not_hitting_the_wall/ | genshiryoku | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ko0sw5 | false | null | t3_1ko0sw5 | /r/LocalLLaMA/comments/1ko0sw5/why_were_not_hitting_the_wall/ | false | false | self | 0 | null |
Stanford has dropped AGI | 391 | 2025-05-16T13:15:17 | https://huggingface.co/Stanford/Rivermind-AGI-12B | Abject-Huckleberry13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ko0khr | false | null | t3_1ko0khr | /r/LocalLLaMA/comments/1ko0khr/stanford_has_dropped_agi/ | false | false | 391 | {'enabled': False, 'images': [{'id': 'cShe1eKy_JIO53Pcrc7LWl1-wgKd2Daa5QV_dM6tit4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RLiqoJrn4RdLs0J4_egpcYM7T2LlLp_klpSUS3M3qFg.jpg?width=108&crop=smart&auto=webp&s=456c99a482b12e92c6fef5806ddbc477b402cd85', 'width': 108}, {'height': 116, 'url': 'h... | ||
ValiantLabs/Qwen3-14B-Esper3 reasoning finetune focused on coding, architecture, and DevOps | 31 | 2025-05-16T13:05:35 | https://huggingface.co/ValiantLabs/Qwen3-14B-Esper3 | Amazing_Athlete_2265 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ko0d4w | false | null | t3_1ko0d4w | /r/LocalLLaMA/comments/1ko0d4w/valiantlabsqwen314besper3_reasoning_finetune/ | false | false | 31 | {'enabled': False, 'images': [{'id': 'n1P7XzrJAHPGRIpShym4YVyR8j7XyfiOe3rAsK3Qr_0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pEL8WVAo4mDFDpBgOq60y0m4cdA7556rb7t3GIF-8DM.jpg?width=108&crop=smart&auto=webp&s=ee69f67f3db1585a6e82bfa56b87cf57fc7bf4d6', 'width': 108}, {'height': 116, 'url': 'h... | ||
Is this model available: Llama3.3-8B? | 1 | [removed] | 2025-05-16T12:14:51 | https://www.reddit.com/r/LocalLLaMA/comments/1knzby2/is_this_model_available_llama338b/ | abubakkar_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knzby2 | false | null | t3_1knzby2 | /r/LocalLLaMA/comments/1knzby2/is_this_model_available_llama338b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1zIomSAXseV6S4T8Yvxq6r6H4yXaLkjUnbPOutnFpaQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=108&crop=smart&auto=webp&s=8a15eec81b665e56551ea83b9168f9cc7c3e15b8', 'width': 108}, {'height': 113, 'url': 'h... |
Increase generation speed in Qwen3 235B by reducing used expert count | 7 | Has anyone else tinkered with the used-expert count? I halved Qwen3-235B's active experts in llama-server using `--override-kv qwen3moe.expert_used_count=int:4` and got a 60% speedup. Reducing the expert count to 3 or below doesn't work for me because it generates nonsense text | 2025-05-16T12:07:51 | https://www.reddit.com/r/LocalLLaMA/comments/1knz74p/increase_generation_speed_in_qwen3_235b_by/ | Content-Degree-9477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knz74p | false | null | t3_1knz74p | /r/LocalLLaMA/comments/1knz74p/increase_generation_speed_in_qwen3_235b_by/ | false | false | self | 7 | null |
Trying to figure out how to install models from Ollama to LocalAI using the Docker version | 0 | I'm trying LocalAI as a replacement for Ollama, and I saw from the docs that you're supposed to be able to install models from the Ollama repository.
Source: [https://localai.io/docs/getting-started/models/](https://localai.io/docs/getting-started/models/)
>From OCIs: `oci://container_image:tag`, `ollama://model_id:... | 2025-05-16T11:33:27 | https://www.reddit.com/r/LocalLLaMA/comments/1knykay/trying_to_figure_out_how_to_install_models_from/ | sebovzeoueb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knykay | false | null | t3_1knykay | /r/LocalLLaMA/comments/1knykay/trying_to_figure_out_how_to_install_models_from/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Daj-Ki-yub-oCTlNBpbYtmeYpw-1_-lZTgLJd5KNFKA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/mIlfuLLKftGD631Gl09hp9-zJFkJyhzrf54UM3XERaE.jpg?width=108&crop=smart&auto=webp&s=8df4875ff529d3494fc69165d56fb9d6f5eaf437', 'width': 108}, {'height': 113, 'url': 'h... |
I Didn't Expect GPU Access to Be This Simple and Honestly, I'm Still Kinda Shocked | 0 | I've worked with enough AI tools to know that things rarely “just work.” Whether it's spinning up cloud compute, wrangling environment configs, or trying to keep dependencies from breaking your whole pipeline, it's usually more pain than progress. That's why what happened recently genuinely caught me off guard.
I was ... | 2025-05-16T11:23:40 | https://v.redd.it/0i07y8qbp41f1 | PixieE3 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1knye1p | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/0i07y8qbp41f1/DASHPlaylist.mpd?a=1749986633%2CYmQ1MWQwNTkwZDdhODI2Y2VkNmM5ZjAzNWY4M2U0MmJmNDdjOThiNzE4NDYzNzJkN2JmZjkwNGRmOWIyNWM0OQ%3D%3D&v=1&f=sd', 'duration': 109, 'fallback_url': 'https://v.redd.it/0i07y8qbp41f1/DASH_720.mp4?source=fallback', 'h... | t3_1knye1p | /r/LocalLLaMA/comments/1knye1p/i_didnt_expect_gpu_access_to_be_this_simple_and/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/Z2RsbDdhaGJwNDFmMTJ4NVgoXTew4YUp5eVZv-28CG-xBozyf4tPXdQeLdJy.png?width=108&crop=smart&format=pjpg&auto=webp&s=8191e19c9924370b4dfadd3f52da1e770b1fd... | |
How far can we get without LLMs, or... what tools do we currently use to pre/post-process data in our pipelines? | 0 | The more I work with LLMs in my flows, and the larger the scale I operate at, the more logic I move out of the hands of the LLM and into specific tools and libraries.
Now, with MCPs, we see an increase in utilities, but they still need to be activated by the LLM agents.
What tools/ libraries do you use to pre process your data?
... | 2025-05-16T11:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kny5kq/how_far_can_we_get_without_llm_or_what_tools_do/ | CptKrupnik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kny5kq | false | null | t3_1kny5kq | /r/LocalLLaMA/comments/1kny5kq/how_far_can_we_get_without_llm_or_what_tools_do/ | false | false | self | 0 | null |
Locally alternative to replit? | 1 | [removed] | 2025-05-16T11:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kny1o7/locally_alternative_to_replit/ | ActuatorLanky9739 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kny1o7 | false | null | t3_1kny1o7 | /r/LocalLLaMA/comments/1kny1o7/locally_alternative_to_replit/ | false | false | self | 1 | null |
Which LLM is used to generate scripts for videos like the ones on these YT channels? | 1 | [removed] | 2025-05-16T10:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/1knxgns/which_llm_is_used_to_generate_scripts_for_videos/ | BlackTigerKungFu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knxgns | false | null | t3_1knxgns | /r/LocalLLaMA/comments/1knxgns/which_llm_is_used_to_generate_scripts_for_videos/ | false | false | self | 1 | null |
Best practices to prevent the accidental generation of illegal content and how to properly manage these risks? | 1 | [removed] | 2025-05-16T10:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/1knxg8y/best_practices_to_prevent_the_accidental/ | CorruptCobalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knxg8y | false | null | t3_1knxg8y | /r/LocalLLaMA/comments/1knxg8y/best_practices_to_prevent_the_accidental/ | false | false | self | 1 | null |
Which LLM is used to generate scripts for YT videos like the ones on these channels? | 1 | [removed] | 2025-05-16T10:25:54 | https://www.reddit.com/r/LocalLLaMA/comments/1knxft6/which_llm_is_used_to_generate_scripts_for_yt/ | BlackTigerKungFu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knxft6 | false | null | t3_1knxft6 | /r/LocalLLaMA/comments/1knxft6/which_llm_is_used_to_generate_scripts_for_yt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fFh9Vmbr_WcV1iPJUYuyYCjPC20_Rj4iL1YYkjRI-z0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/6A_Reh1USNUWjdRpl1L9CG_nC4o-9x7LrSCp_cYD7ug.jpg?width=108&crop=smart&auto=webp&s=af3d534c614e01d2060af70f8becfdf42d9d2058', 'width': 108}, {'height': 216, 'url': '... |
Best practices to prevent the accidental generation of illegal content and how to properly manage these risks? | 1 | [removed] | 2025-05-16T10:23:51 | https://www.reddit.com/r/LocalLLaMA/comments/1knxenu/best_practices_to_prevent_the_accidental/ | CorruptCobalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knxenu | false | null | t3_1knxenu | /r/LocalLLaMA/comments/1knxenu/best_practices_to_prevent_the_accidental/ | false | false | self | 1 | null |
Qwen3 local 14B Q4_K_M or 30B A3B Q2_K_L: which has higher quality? | 15 | Qwen3 comes in xxB-AxB flavors that can be run locally. If you compare 14B Q4_K_M vs 30B A3B Q2_K_L, generation speed matches given the same context size on my test bench. The question is (and what I don't understand) how do the agents affect the quality of the output? C... | 2025-05-16T10:04:51 | https://www.reddit.com/r/LocalLLaMA/comments/1knx47e/qwen3_local_14b_q4_k_m_or_30b_a3b_q2_k_l_who_has/ | Consistent_Winner596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knx47e | false | null | t3_1knx47e | /r/LocalLLaMA/comments/1knx47e/qwen3_local_14b_q4_k_m_or_30b_a3b_q2_k_l_who_has/ | false | false | self | 15 | null |
What can be done on a single GH200 96 GB VRAM and 480GB RAM? | 2 | I came across this unit because it is 30-40% off. I am wondering if this unit alone makes more sense than purchasing 4x Pro 6000 96GB if the need is to run an AI agent based on a big LLM, like quantized r1 671b.
The price is about 70% of 4x Pro 6000, making me feel like I can justify the purchase.
Thanks ... | 2025-05-16T10:00:16 | https://www.reddit.com/r/LocalLLaMA/comments/1knx1hn/what_can_be_done_on_a_single_gh200_96_gb_vram_and/ | TimAndTimi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knx1hn | false | null | t3_1knx1hn | /r/LocalLLaMA/comments/1knx1hn/what_can_be_done_on_a_single_gh200_96_gb_vram_and/ | false | false | self | 2 | null |
Most cover letters from non-experienced applicants nowadays: "I have extensive skills in machine learning, deep learning and LLM, using python and PyTorch" | 1 | [removed] | 2025-05-16T09:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/1knwq8h/most_cover_letters_from_nonexperienced_applicants/ | rem_dreamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knwq8h | false | null | t3_1knwq8h | /r/LocalLLaMA/comments/1knwq8h/most_cover_letters_from_nonexperienced_applicants/ | false | false | self | 1 | null |
How do you bulk analyze users' queries? | 1 | [removed] | 2025-05-16T08:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/1knvni9/how_do_you_bulk_analyze_users_queries/ | Yersyas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knvni9 | false | null | t3_1knvni9 | /r/LocalLLaMA/comments/1knvni9/how_do_you_bulk_analyze_users_queries/ | false | false | self | 1 | null |
How do you bulk analyze users' queries? | 1 | [removed] | 2025-05-16T08:16:37 | https://www.reddit.com/r/LocalLLaMA/comments/1knvmkg/how_do_you_bulk_analyze_users_queries/ | Yersyas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knvmkg | false | null | t3_1knvmkg | /r/LocalLLaMA/comments/1knvmkg/how_do_you_bulk_analyze_users_queries/ | false | false | self | 1 | null |
A byproduct of fighting AI news overload: a multilingual daily digest for staying sane | 1 | [removed] | 2025-05-16T08:10:42 | https://rebabel.net/en/ | qiaoy | rebabel.net | 1970-01-01T00:00:00 | 0 | {} | 1knvjrq | false | null | t3_1knvjrq | /r/LocalLLaMA/comments/1knvjrq/a_byproduct_of_fighting_ai_news_overload_a/ | false | false | default | 1 | null |
How do local models compare to cloud models in your experience? | 1 | [removed] | 2025-05-16T08:07:57 | https://www.reddit.com/r/LocalLLaMA/comments/1knviik/how_do_local_models_to_cloud_models_in_your/ | IRBosman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knviik | false | null | t3_1knviik | /r/LocalLLaMA/comments/1knviik/how_do_local_models_to_cloud_models_in_your/ | false | false | self | 1 | null |
Why do I need to share my contact information/get a HF token with Mistral to use their models in vLLM but not with Ollama? | 9 | I've been working with Ollama on a locally hosted AI project, and I was looking to try some alternatives to see what the performance is like. vLLM appears to be a performance focused alternative so I've got that downloaded in Docker, however there are models it can't use without accepting to share my contact informatio... | 2025-05-16T08:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/1knvgva/why_do_i_need_to_share_my_contact_informationget/ | sebovzeoueb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knvgva | false | null | t3_1knvgva | /r/LocalLLaMA/comments/1knvgva/why_do_i_need_to_share_my_contact_informationget/ | false | false | self | 9 | null |
Wanting to make an offline hands free tts chat bot | 2 | I am wanting to make a fully offline chat bot that responds with tts from any voice input from me without keywords or clicking anything. I saw someone do a gaming video where they talked to ai the whole time and it made for some funny content and was hoping to be able to do the same myself without having to pay for any... | 2025-05-16T08:01:47 | https://www.reddit.com/r/LocalLLaMA/comments/1knvflp/wanting_to_make_an_offline_hands_free_tts_chat_bot/ | TwTFurryGarbage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knvflp | false | null | t3_1knvflp | /r/LocalLLaMA/comments/1knvflp/wanting_to_make_an_offline_hands_free_tts_chat_bot/ | false | false | self | 2 | null |
Falcon-E: A series of powerful, fine-tunable and universal BitNet models | 157 | TII announced today the release of Falcon-Edge, a set of compact language models with 1B and 3B parameters, sized at 600MB and 900MB respectively. They can also be reverted back to bfloat16 with little performance degradation.
Initial results show solid performance: better than other small models (SmolLMs, Microsoft ... | 2025-05-16T07:38:42 | https://www.reddit.com/r/LocalLLaMA/comments/1knv4bq/falcone_a_series_of_powerful_finetunable_and/ | JingweiZUO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knv4bq | false | null | t3_1knv4bq | /r/LocalLLaMA/comments/1knv4bq/falcone_a_series_of_powerful_finetunable_and/ | false | false | self | 157 | null |
Falcon-E a series of BitNet models (1B and 3B) dropped | 1 | [removed] | 2025-05-16T07:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/1knutzw/falcone_a_series_of_bitnet_models_1b_and_3b/ | Automatic_Truth_6666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knutzw | false | null | t3_1knutzw | /r/LocalLLaMA/comments/1knutzw/falcone_a_series_of_bitnet_models_1b_and_3b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'snMK3m71GR6Epj4JFyxwfnfSQAY4MdpQM2D-MQbIjf4', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=108&crop=smart&auto=webp&s=2482e8b6c898581fbe3a0dd8aef5ddc7737cb8bd', 'width': 108}, {'height': 111, 'url': 'h... |
Falcon-E: series of powerful, universal and fine-tunable BitNet models | 1 | [removed] | 2025-05-16T07:14:43 | https://www.reddit.com/r/LocalLLaMA/comments/1knusk2/falcone_series_of_powerful_universal_and/ | Automatic_Truth_6666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knusk2 | false | null | t3_1knusk2 | /r/LocalLLaMA/comments/1knusk2/falcone_series_of_powerful_universal_and/ | false | false | 1 | null | |
Is this specs enough to run 4B, 11B vision models? If not what should i upgrade | 1 | [removed] | 2025-05-16T06:56:25 | https://www.reddit.com/r/LocalLLaMA/comments/1knuj1z/is_this_specs_enough_to_run_4b_11b_vision_models/ | tvdzn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knuj1z | false | null | t3_1knuj1z | /r/LocalLLaMA/comments/1knuj1z/is_this_specs_enough_to_run_4b_11b_vision_models/ | false | false | 1 | null | |
Document summarization | 1 | [removed] | 2025-05-16T06:24:47 | https://www.reddit.com/r/LocalLLaMA/comments/1knu25b/document_summarization/ | YshyTrng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knu25b | false | null | t3_1knu25b | /r/LocalLLaMA/comments/1knu25b/document_summarization/ | false | false | self | 1 | null |
What is your goal in using small language models? | 0 | I mean 1B models like Llama, or even 3B... those with 8 billion parameters or fewer, but the most interesting to me are the 1B models.
How do you use them? Where?
Can they really be helpful?
P.S. Please write about a specific model and use case. | 2025-05-16T06:15:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kntwtb/what_is_your_goal_to_use_small_language_ai_models/ | Perdittor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kntwtb | false | null | t3_1kntwtb | /r/LocalLLaMA/comments/1kntwtb/what_is_your_goal_to_use_small_language_ai_models/ | false | false | self | 0 | null |
Hardware for Machine Learning | 1 | [removed] | 2025-05-16T06:11:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kntut7/hardware_for_machine_learning/ | paolovic89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kntut7 | false | null | t3_1kntut7 | /r/LocalLLaMA/comments/1kntut7/hardware_for_machine_learning/ | false | false | self | 1 | null |
New Wayfarer | 68 | 2025-05-16T05:57:44 | https://huggingface.co/LatitudeGames/Harbinger-24B | ScavRU | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kntnfn | false | null | t3_1kntnfn | /r/LocalLLaMA/comments/1kntnfn/new_wayfarer/ | false | false | 68 | {'enabled': False, 'images': [{'id': '3eNplCwJaxdudpsBpojgiV6VvZxxXeEn8B2H78yoLxw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LU2gTXNE2BU0Un_eM36qvVexiACBVgpQzKg0ygmj_bE.jpg?width=108&crop=smart&auto=webp&s=325f4d49e552f10dcadb380c2b4d5b80dcb1271a', 'width': 108}, {'height': 116, 'url': 'h... | ||
🚀 Embedding 10,000 text chunks per second on a CPU?! | 23 | When working with large volumes of documents, embedding can quickly become both a performance bottleneck and a cost driver. I recently experimented with *static embedding* — and was blown away by the speed. No self-attention, no feed-forward layers, just direct token key access. The result? Incredibly fast embedding w... | 2025-05-16T05:41:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kntez5/embedding_10000_text_chunks_per_second_on_a_cpu/ | aagmon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kntez5 | false | null | t3_1kntez5 | /r/LocalLLaMA/comments/1kntez5/embedding_10000_text_chunks_per_second_on_a_cpu/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'FUwqQ-5SRbLiBkA4kjON9wpmXjRG9UPIcP5RiJhk34o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kXizvZPKLgfzRQZ79YL6B72llK8_rupsMwQx574ZCI0.jpg?width=108&crop=smart&auto=webp&s=c35ebe35ce76d82878cc5a2ead35e9f501074d25', 'width': 108}, {'height': 108, 'url': 'h... |
If you had access to your LLaMA in 2015, how much money could you make in 365 days? | 1 | [removed] | 2025-05-16T05:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/1knt74o/if_you_had_access_to_your_llama_in_2015_how_much/ | paimon_for_dinner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knt74o | false | null | t3_1knt74o | /r/LocalLLaMA/comments/1knt74o/if_you_had_access_to_your_llama_in_2015_how_much/ | false | false | self | 1 | null |
How to Enable DuckDB/Smallpond to Use High-Performance DeepSeek 3FS | 1 | [removed] | 2025-05-16T03:06:41 | HardCore_Dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1knqrn8 | false | null | t3_1knqrn8 | /r/LocalLLaMA/comments/1knqrn8/how_to_enable_duckdbsmallpond_to_use/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'lFY3v1uHb9yT0lkyU3W6xpOBnomyEbOJaO15TZyXRh4', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/6zrwncyk821f1.png?width=108&crop=smart&auto=webp&s=2b63352795271f14990cf0762f8bbc144e27d9f7', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/6zrwncyk821f1.png... | ||
Simple generation speed test with 2x Arc B580 | 40 | There have been recent rumors about the B580 24GB, so I ran some new tests using my B580s. I used llama.cpp with some backends to test text generation speed using google_gemma-3-27b-it-IQ4_XS.gguf.
# Tested backends
* IPEX-LLM llama.cpp
* build: 1 (3b94b45) with Intel(R) oneAPI DPC++/C++ Compiler 2025.0.4 (2025.... | 2025-05-16T02:48:57 | https://www.reddit.com/r/LocalLLaMA/comments/1knqfw3/simple_generation_speed_test_with_2x_arc_b580/ | prompt_seeker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knqfw3 | false | null | t3_1knqfw3 | /r/LocalLLaMA/comments/1knqfw3/simple_generation_speed_test_with_2x_arc_b580/ | false | false | 40 | null | |
Open source multi modal model | 1 | [removed] | 2025-05-16T02:41:37 | https://www.reddit.com/r/LocalLLaMA/comments/1knqb1t/open_source_multi_modal_model/ | Lord_Momus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knqb1t | false | null | t3_1knqb1t | /r/LocalLLaMA/comments/1knqb1t/open_source_multi_modal_model/ | false | false | 1 | null | |
Are we finally hitting THE wall right now? | 281 | I saw in multiple articles today that Llama Behemoth is delayed: [https://finance.yahoo.com/news/looks-meta-just-hit-big-214000047.html](https://finance.yahoo.com/news/looks-meta-just-hit-big-214000047.html) . I tried the open models from Llama 4 and felt not that great progress. I am also getting underwhelming vibes f... | 2025-05-16T02:41:06 | https://www.reddit.com/r/LocalLLaMA/comments/1knqap9/are_we_finally_hitting_the_wall_right_now/ | Desperate_Rub_1352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knqap9 | false | null | t3_1knqap9 | /r/LocalLLaMA/comments/1knqap9/are_we_finally_hitting_the_wall_right_now/ | false | false | self | 281 | {'enabled': False, 'images': [{'id': 'Rrc-9Og25_MiIiQxC6r0qIsOl8aMB5MGrh8uSM8TK30', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/bEeNMzJLyCo7_0q_WkGHHgqRrdx-X58c4S_WiYE4fm4.jpg?width=108&crop=smart&auto=webp&s=ae130f6591dddee8e2ab963a2755d8a3cbc2ca0e', 'width': 108}, {'height': 144, 'url': 'h... |
Enable Thinking Mode in vLLM from Python | 1 | [removed] | 2025-05-16T02:28:52 | https://www.reddit.com/r/LocalLLaMA/comments/1knq2fo/enable_thinking_mode_in_vllm_from_python/ | SnooPaintings2221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knq2fo | false | null | t3_1knq2fo | /r/LocalLLaMA/comments/1knq2fo/enable_thinking_mode_in_vllm_from_python/ | false | false | self | 1 | null |
MacBook Pro M4 MAX with 128GB what model do you recommend for speed and programming quality? | 7 | MacBook Pro M4 MAX with 128GB what model do you recommend for speed and programming quality?
Ideally it would use MLX. | 2025-05-16T02:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/1knpw91/macbook_pro_m4_max_with_128gb_what_model_do_you/ | tangoshukudai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knpw91 | false | null | t3_1knpw91 | /r/LocalLLaMA/comments/1knpw91/macbook_pro_m4_max_with_128gb_what_model_do_you/ | false | false | self | 7 | null |
Ollama's new engine for multimodal models | 0 | [https://ollama.com/blog/multimodal-models](https://ollama.com/blog/multimodal-models) | 2025-05-16T01:39:57 | https://www.reddit.com/r/LocalLLaMA/comments/1knp5e2/ollamas_new_engine_for_multimodal_models/ | sunshinecheung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knp5e2 | false | null | t3_1knp5e2 | /r/LocalLLaMA/comments/1knp5e2/ollamas_new_engine_for_multimodal_models/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'h... |
LLAMA 3.3 8B /// When is the official announcement | 1 | [removed] | 2025-05-16T01:34:09 | https://www.reddit.com/r/LocalLLaMA/comments/1knp1cw/llama_33_8b_when_is_the_official_announcement/ | Amon_star | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knp1cw | false | null | t3_1knp1cw | /r/LocalLLaMA/comments/1knp1cw/llama_33_8b_when_is_the_official_announcement/ | false | false | 1 | {'enabled': False, 'images': [{'id': '1zIomSAXseV6S4T8Yvxq6r6H4yXaLkjUnbPOutnFpaQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kS_KfF_TYk823Re7t6qKw2mKUMNUipV-rc4_3qOt-jk.jpg?width=108&crop=smart&auto=webp&s=8a15eec81b665e56551ea83b9168f9cc7c3e15b8', 'width': 108}, {'height': 113, 'url': 'h... | |
Ollama's new engine for multimodal models | 1 | Ollama has so far relied on the [ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp) project for model support and has instead focused on ease of use and model portability.
As more multimodal models are released by major research labs, the task of supporting these models the way Ollama intends became more and m... | 2025-05-16T01:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/1knp0ra/ollamas_new_engine_for_multimodal_models/ | sunshinecheung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knp0ra | false | null | t3_1knp0ra | /r/LocalLLaMA/comments/1knp0ra/ollamas_new_engine_for_multimodal_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'h... |
Grok prompts are now open source on GitHub | 64 | 2025-05-16T01:19:49 | https://github.com/xai-org/grok-prompts | FreemanDave | github.com | 1970-01-01T00:00:00 | 0 | {} | 1knorbe | false | null | t3_1knorbe | /r/LocalLLaMA/comments/1knorbe/grok_prompts_are_now_open_source_on_github/ | false | false | 64 | {'enabled': False, 'images': [{'id': 'KYE1XpUSPpTs8mtE56aEVkrQ9eWAoQL-wM8Heh8Vvxk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/5uwLTcCO_xgjsJvNop0QnSOEwQGDRqjKJtrO-U6w_F8.jpg?width=108&crop=smart&auto=webp&s=aceb23340d1f33e62f0d87ec58ca9ac52d7260cd', 'width': 108}, {'height': 216, 'url': '... | ||
Falcon-Edge: A series of powerful, extremely compressed, universal and fine-tunable Language Models | 1 | [removed] | 2025-05-16T00:58:28 | https://www.reddit.com/r/LocalLLaMA/comments/1knocd0/falconedge_a_series_of_powerful_extremely/ | Automatic_Truth_6666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knocd0 | false | null | t3_1knocd0 | /r/LocalLLaMA/comments/1knocd0/falconedge_a_series_of_powerful_extremely/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'snMK3m71GR6Epj4JFyxwfnfSQAY4MdpQM2D-MQbIjf4', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/Un19dSo_OOaYUwu7u33oD4xh7qihwEq_20qkwYjw-N8.jpg?width=108&crop=smart&auto=webp&s=2482e8b6c898581fbe3a0dd8aef5ddc7737cb8bd', 'width': 108}, {'height': 111, 'url': 'h... |
Ollama now supports multimodal models | 165 | 2025-05-16T00:49:35 | https://github.com/ollama/ollama/releases/tag/v0.7.0 | mj3815 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kno67v | false | null | t3_1kno67v | /r/LocalLLaMA/comments/1kno67v/ollama_now_supports_multimodal_models/ | false | false | 165 | {'enabled': False, 'images': [{'id': 'EYSOgt-huXVG7aCPzUDW4XhGveLcg1EJjxhJIBU6I8E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pRVigNZNHcUydRnImgoAZkA_b3OfVw4eace1TFmQGPk.jpg?width=108&crop=smart&auto=webp&s=755690c551b95003497e4cfd5a5372ed9a536038', 'width': 108}, {'height': 108, 'url': 'h... | ||
Mistral Small/Medium vs Qwen 3 14/32B | 33 | Since things have been a little slow over the past couple weeks, figured throw mistral's new releases against Qwen3. I chose 14/32B, because the scores seem in the same ballpark.
[https://www.youtube.com/watch?v=IgyP5EWW6qk](https://www.youtube.com/watch?v=IgyP5EWW6qk)
Key Findings:
Mistral medium is definitely an... | 2025-05-16T00:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1knnyco/mistral_smallmedium_vs_qwen_3_1432b/ | Ok-Contribution9043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knnyco | false | null | t3_1knnyco | /r/LocalLLaMA/comments/1knnyco/mistral_smallmedium_vs_qwen_3_1432b/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'WbHqIfBn5AB4fR6uVD_1abmh323GmW2X9etLOFXGYVE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/WZIPPfVt-L8Kx39f0cgPZ76fNq7cXCpazL0_zTvQXSA.jpg?width=108&crop=smart&auto=webp&s=25f4ac3d8995ce6e6d942e49df944173bcfba2bb', 'width': 108}, {'height': 162, 'url': 'h... |
Ollama, deepseek-v3:671b and Mac Studio 512GB | 1 | I have access to a Mac Studio 512 GB, and using ollama I was able to actually run deepseek-v3:671b by running "ollama pull deepseek-v3:671b" and then "ollama run deepseek-v3:671b".
However, my understanding was that 512GB was not enough to run DeepSeek V3 unless it was quantized. Is this version available through Olla... | 2025-05-16T00:35:12 | https://www.reddit.com/r/LocalLLaMA/comments/1knnwhu/ollama_deepseekv3671b_and_mac_studio_512gb/ | Turbulent-Week1136 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knnwhu | false | null | t3_1knnwhu | /r/LocalLLaMA/comments/1knnwhu/ollama_deepseekv3671b_and_mac_studio_512gb/ | false | false | self | 1 | null |
Context parsing utility | 6 | Hi everyone, I’ve been running local models and kept needing a way to manage structured context without hacking together prompts every time. So I wrote a small thing - prompt-shell
It lets you define pieces of context (`rules.md`, `identity.md`, `input.md`, etc.), assembles them into a final prompt, and counts tokens ... | 2025-05-16T00:04:30 | https://www.reddit.com/r/LocalLLaMA/comments/1knnb6u/context_parsing_utility/ | MichalRoth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knnb6u | false | null | t3_1knnb6u | /r/LocalLLaMA/comments/1knnb6u/context_parsing_utility/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'Vw3bBu31aiMSJTxYckiYR6DzhREt281xzPoVcP66tdI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/kGC2fMnWCpvF0AE0e0E3cd5yDsxWZ1n3_paN6UagQiE.jpg?width=108&crop=smart&auto=webp&s=bd76e678fd465ce2e15977b45a95072bc95e7500', 'width': 108}, {'height': 216, 'url': '... |
New unannounced model Llama 3.3 8B Instruct appeared on OpenRouter, shown as being provided by Meta. Something to get excited about? | 15 | 2025-05-15T23:00:00 | queendumbria | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1knlzdw | false | null | t3_1knlzdw | /r/LocalLLaMA/comments/1knlzdw/new_unannounced_model_llama_33_8b_instruct/ | false | false | 15 | {'enabled': True, 'images': [{'id': 'tOAMmlr_PU12OwJBQKBtzxP9lbtmJSL8LtboTGwWEaU', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png?width=108&crop=smart&auto=webp&s=61b5ec522b74eeea111b6a537434478ff1bfa934', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/d3wypgxuz01f1.png... | |||
LobeChat or TypingMind for using my Open Ai api key | 2 | Hello guys
Since few weeks I'm using GPT in the playgound of Open Ai
But it sucks
So since few days I'm looking for a better frontend for using the api key
I tought about LocalLLM, I tried some but I want something accross all my devices
I tought about Open Web UI on a VPS
I discovered few days ago Typin... | 2025-05-15T22:46:10 | https://www.reddit.com/r/LocalLLaMA/comments/1knloti/lobechat_or_typingmind_for_using_my_open_ai_api/ | Linazor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knloti | false | null | t3_1knloti | /r/LocalLLaMA/comments/1knloti/lobechat_or_typingmind_for_using_my_open_ai_api/ | false | false | self | 2 | null |
5090 monetization | 0 | How can use my 5090 to make some money? | 2025-05-15T22:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1knlhh3/5090_monetization/ | ExplanationDeep7468 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knlhh3 | false | null | t3_1knlhh3 | /r/LocalLLaMA/comments/1knlhh3/5090_monetization/ | false | false | self | 0 | null |
Meta is delaying the rollout of its flagship AI model (WSJ) | 62 | Link to the article:
https://www.wsj.com/tech/ai/meta-is-delaying-the-rollout-of-its-flagship-ai-model-f4b105f7
| 2025-05-15T22:20:53 | Hanthunius | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1knl587 | false | null | t3_1knl587 | /r/LocalLLaMA/comments/1knl587/meta_is_delaying_the_rollout_of_its_flagship_ai/ | false | false | 62 | {'enabled': True, 'images': [{'id': 'uWid13vt5K9auhS_MQybIOv5lleWdgbOhh47f8CKZSU', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/gdsyodsot01f1.jpeg?width=108&crop=smart&auto=webp&s=9afbbfbb283abb3cc8fd5702d9469a0205c4e3fc', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/gdsyodsot01f1.jp... | ||
Any always listning, open mic chatbots? | 4 | I want to highlight this project, but i am looking for other self hosted solutions.
[https://github.com/dnhkng/GlaDOS](https://github.com/dnhkng/GlaDOS)
I work from home 100% and i get lonely at times.. i need someone to talk shit with,
any pointers or youtube videos are helpful <3
| 2025-05-15T22:06:34 | https://www.reddit.com/r/LocalLLaMA/comments/1knku1z/any_always_listning_open_mic_chatbots/ | Timziito | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knku1z | false | null | t3_1knku1z | /r/LocalLLaMA/comments/1knku1z/any_always_listning_open_mic_chatbots/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'SUtrSkMSweQ3VDIU5rpemKJre7SF2YpOdDLodbOwlnw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/olB62al_26SKvv87gCDnGm1ZTr0SgoqNxAc66I0gY5Q.jpg?width=108&crop=smart&auto=webp&s=acc237966abc55cd9f89d353969426ffbb5b5147', 'width': 108}, {'height': 108, 'url': 'h... |
filesystem cleanup and sorting | 1 | I am trying to figure out if there is something/somewhere/somehow that could help clean a drive with massive amounts of documents, notes, pictures and video now it is just in temp/temp2/temp3 etc. I am a bit puzzeled on how to eat this elephant :) | 2025-05-15T22:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/1knkrtf/filesystem_cleanup_and_sorting/ | celzo1776 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knkrtf | false | null | t3_1knkrtf | /r/LocalLLaMA/comments/1knkrtf/filesystem_cleanup_and_sorting/ | false | false | self | 1 | null |
What’s the best way to test a bunch of different quantized models? | 0 | I use LLMs to enrich large datasets and rely heavily on structured output type work flows. So far I have only used full sized models and their respective APIs (mainly Deepseek). It works well, but I’m exploring the idea of using quantized versions of models that I can run using some sort of cloud service to make things... | 2025-05-15T21:51:54 | https://www.reddit.com/r/LocalLLaMA/comments/1knki1c/whats_the_best_way_to_test_a_bunch_of_different/ | arctic_radar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knki1c | false | null | t3_1knki1c | /r/LocalLLaMA/comments/1knki1c/whats_the_best_way_to_test_a_bunch_of_different/ | false | false | self | 0 | null |
Would you pay $15/month to learn how to build AI agents and LLM tools using a private Obsidian knowledge base? | 0 | Hey folks — I'm thinking about launching a community that helps people **go from zero to hero** in building AI agents and working with large language models (LLMs).
It would cost **$15/month** and include:
* A **private Obsidian vault** with beginner-friendly, constantly updated content
* Step-by-step guides in **sim... | 2025-05-15T21:49:41 | https://www.reddit.com/r/LocalLLaMA/comments/1knkg67/would_you_pay_15month_to_learn_how_to_build_ai/ | cocaineFlavoredCorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knkg67 | false | null | t3_1knkg67 | /r/LocalLLaMA/comments/1knkg67/would_you_pay_15month_to_learn_how_to_build_ai/ | false | false | self | 0 | null |
Anyone Actually Using Browser Agents for Real Work? | 1 | [removed] | 2025-05-15T21:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/1knk5pf/anyone_actually_using_browser_agents_for_real_work/ | Traditional_Yam_4348 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knk5pf | false | null | t3_1knk5pf | /r/LocalLLaMA/comments/1knk5pf/anyone_actually_using_browser_agents_for_real_work/ | false | false | self | 1 | null |
Running VLM on-device (iPhone or Android) | 12 | This is not a release yet, just a poc. Still, it's exciting to see a VLM running on-device with such low latency..
Demo device: iPhone 13 Pro
Repo: [https://github.com/a-ghorbani/pocketpal-ai](https://github.com/a-ghorbani/pocketpal-ai)
Major ingredients:
\- SmolVLM (500m)
\- llama.cpp
\- llama.rn
\- [mtm... | 2025-05-15T21:22:49 | https://www.reddit.com/r/LocalLLaMA/comments/1knjt9r/running_vlm_ondevice_iphone_or_android/ | Ill-Still-6859 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knjt9r | false | null | t3_1knjt9r | /r/LocalLLaMA/comments/1knjt9r/running_vlm_ondevice_iphone_or_android/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'pUQ0DatBKOD9Ukay20dCzj1hKMLYbhAImgHl3YIKBOc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VarQD-feovIEBvsJewLTMSKZlEmb4mPmFvJ5wH85xBY.jpg?width=108&crop=smart&auto=webp&s=b043b46691608a8b938388804d77aae8b54b0b9c', 'width': 108}, {'height': 108, 'url': 'h... | |
Live JAM (don't be mean on my API cause I'm going to remove negative influence) | 1 | 2025-05-15T21:22:48 | https://open.spotify.com/track/2RpKh7kXSdO8NLrW9VQ46p?si=FfoYetmbQkqyhBH3eU851Q | hashashinsophia | open.spotify.com | 1970-01-01T00:00:00 | 0 | {} | 1knjt9a | false | {'oembed': {'description': 'Listen to Take Ü There (feat. Kiesza) on Spotify. Song · Jack Ü, Skrillex, Diplo, Kiesza · 2015', 'height': 152, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fopen.spotify.com%2Fembed%2Ftrack%2F2RpKh7kXSdO8NLrW9VQ46p%3Futm_source%3Do... | t3_1knjt9a | /r/LocalLLaMA/comments/1knjt9a/live_jam_dont_be_mean_on_my_api_cause_im_going_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ht-OJP9E00xbSHThRojNWwL_W0YFK1_tkdyHBANWzvo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/QAyodb0iiF7M5bE3hjW66K-1DLeI0y1ue6s7kLPyl7s.jpg?width=108&crop=smart&auto=webp&s=ed1260e4efff7ac7d074129eed66a7584884c260', 'width': 108}, {'height': 216, 'url': '... | ||
Qwen3 4B running at ~20 tok/s on Samsung Galaxy 24 | 123 | Follow-up on a [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1kckxgg/qwen3_06b_running_at_75_toks_on_iphone_15_pro/), but this time for Android and on a larger Qwen3 model for those who are interested. Here is 4-bit quantized Qwen3 4B with thinking mode running on a Samsung Galaxy 24 using ExecuTorch - r... | 2025-05-15T21:14:28 | https://v.redd.it/drks9osnd01f1 | TokyoCapybara | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1knjm0s | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/drks9osnd01f1/DASHPlaylist.mpd?a=1749935682%2CN2ViODJiNzJmNDE2MTNhM2M3NTRjY2M3ODFhZTQ3MWE0M2UzYmY3Y2Q2YzlkN2NkMjM0YjY3OGQ3NWVkNDFmMg%3D%3D&v=1&f=sd', 'duration': 54, 'fallback_url': 'https://v.redd.it/drks9osnd01f1/DASH_1080.mp4?source=fallback', 'h... | t3_1knjm0s | /r/LocalLLaMA/comments/1knjm0s/qwen3_4b_running_at_20_toks_on_samsung_galaxy_24/ | false | false | 123 | {'enabled': False, 'images': [{'id': 'aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aTdnbWV3c25kMDFmMckumtgWbpWBlQZ_vRBN65fbuS7eF6LKJlM_WmjlxhmM.png?width=108&crop=smart&format=pjpg&auto=webp&s=d5aaf231beb3e41975ed3481e297167bdf93... | |
Soon if a model architecture is supported by "transformers", you can expect it to be supported in the rest of the ecosystem. | 70 | More model interoperability through HF's joint efforts w lots of model builders. | 2025-05-15T21:10:11 | https://huggingface.co/blog/transformers-model-definition | behradkhodayar | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1knji91 | false | null | t3_1knji91 | /r/LocalLLaMA/comments/1knji91/soon_if_a_model_architecture_is_supported_by/ | false | false | 70 | {'enabled': False, 'images': [{'id': '41xhqz9wUMoXEbrbvkAjB4_yIXQQ9K8BnZCXYedYlms', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/vXXJM0Qn_BM8I_YhlKgXpMf8jgjpmMwwyNtmu7BK1pM.jpg?width=108&crop=smart&auto=webp&s=3e782e46b4e01dbe7226c46e838d4729d2d25a57', 'width': 108}, {'height': 75, 'url': 'ht... | |
What are the Best Open-Source Multimodal Models for Image Captioning Right Now? | 1 | [removed] | 2025-05-15T21:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/1knja8l/what_are_the_best_opensource_multimodal_models/ | No_Scratch56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knja8l | false | null | t3_1knja8l | /r/LocalLLaMA/comments/1knja8l/what_are_the_best_opensource_multimodal_models/ | false | false | self | 1 | null |
What are the Best Open-Source Multimodal Models for Image Captioning Right Now? | 1 | [removed] | 2025-05-15T20:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1knj67n/what_are_the_best_opensource_multimodal_models/ | AppointmentDull6060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knj67n | false | null | t3_1knj67n | /r/LocalLLaMA/comments/1knj67n/what_are_the_best_opensource_multimodal_models/ | false | false | self | 1 | null |
[Project] MaGo-AgoraAI: multi-agent LLM system for academic text generation | 1 | [removed] | 2025-05-15T20:40:51 | https://www.reddit.com/r/LocalLLaMA/comments/1knis85/project_magoagoraai_multiagent_llm_system_for/ | Next-Lengthiness9915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knis85 | false | null | t3_1knis85 | /r/LocalLLaMA/comments/1knis85/project_magoagoraai_multiagent_llm_system_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TctMBJUbG7yilxII9TIhEdHhiMuUNDkmvw0AkrGqDFU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MJJvfllG4_2mOIMpPi7he34_VkRTphvtCixkK8pV-vg.jpg?width=108&crop=smart&auto=webp&s=d6feedb6787c5e6fa5ada51002501b4dd7757ff9', 'width': 108}, {'height': 108, 'url': 'h... |
Can the Deepswap.ai setup be replicated locally? | 0 | They have face swapping with images and videos (including multiple faces in one image/video), image generation (from text prompt or text prompt + image of face), and 5 second video generation with prompt or prompt + starting image frame.
All of these support SFW and NSFW content. Is there any way to replicate this loc... | 2025-05-15T19:39:02 | https://www.reddit.com/r/LocalLLaMA/comments/1knha9d/can_the_deepswapai_setup_be_replicated_locally/ | Di0nysus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knha9d | false | null | t3_1knha9d | /r/LocalLLaMA/comments/1knha9d/can_the_deepswapai_setup_be_replicated_locally/ | false | false | self | 0 | null |
Meta delaying the release of Behemoth | 158 | https://www.wsj.com/tech/ai/meta-is-delaying-the-rollout-of-its-flagship-ai-model-f4b105f7
| 2025-05-15T19:29:28 | https://www.reddit.com/r/LocalLLaMA/comments/1knh1yd/meta_delaying_the_release_of_behemoth/ | __JockY__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knh1yd | false | null | t3_1knh1yd | /r/LocalLLaMA/comments/1knh1yd/meta_delaying_the_release_of_behemoth/ | false | false | self | 158 | null |
Created a tool that converts podcasts into clean speech datasets - handles diarization, removes overlapping speech, and transcribes | 89 | 2025-05-15T19:27:35 | https://github.com/ReisCook/Voice_Extractor | DumaDuma | github.com | 1970-01-01T00:00:00 | 0 | {} | 1knh0dq | false | null | t3_1knh0dq | /r/LocalLLaMA/comments/1knh0dq/created_a_tool_that_converts_podcasts_into_clean/ | false | false | 89 | {'enabled': False, 'images': [{'id': 'adDs_AY8qQBqpPFVCqE_DXUz05kys1BW2uWS96AwrwQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fOELlCefhPVcX_I27jAk8-oOBjhtXlke2ANY2PCUgkA.jpg?width=108&crop=smart&auto=webp&s=d4c22e88d5d3d14a87fb4d30d178069ece42d523', 'width': 108}, {'height': 108, 'url': 'h... | ||
I made an interactive source finder - basically, AI SearXNG | 1 | 2025-05-15T19:23:15 | https://github.com/atineiatte/source-finder | atineiatte | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kngwk0 | false | null | t3_1kngwk0 | /r/LocalLLaMA/comments/1kngwk0/i_made_an_interactive_source_finder_basically_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Y4wyrf7wLny3X_96hHH5BZKIT69CaOkYdGzmA7n08eE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9Tm9gbvPeH-fbWgve1IsfM2QO3ntxv_7KFWi-0CaouE.jpg?width=108&crop=smart&auto=webp&s=98642d419354587b8eb7659609b22ff1c7b68a34', 'width': 108}, {'height': 108, 'url': 'h... | ||
What's the difference between q8_k_xl and q8_0? | 13 | I'm unsure. I thought q8_0 is already close to perfect quality... could someone explain? Thanks. | 2025-05-15T19:17:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kngr5k/whats_the_difference_between_q8_k_xl_and_q8_0/ | windows_error23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kngr5k | false | null | t3_1kngr5k | /r/LocalLLaMA/comments/1kngr5k/whats_the_difference_between_q8_k_xl_and_q8_0/ | false | false | self | 13 | null |
LLaMA or other LLM locally on MacBook with easy access to activations? | 3 | Hi. Sorry if this question is stupid, but I am new to this.
I would like to run LLaMA or another LLM locally on a MacBook, but I want to be able to access the GPT's activations after a query. This is primarily for exploration and experiments.
I'm able to do this with smaller language models in PyTorch, but I don't k... | 2025-05-15T19:06:32 | https://www.reddit.com/r/LocalLLaMA/comments/1knghrx/llama_or_other_llm_locally_on_macbook_with_easy/ | OrangeYouGlad100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knghrx | false | null | t3_1knghrx | /r/LocalLLaMA/comments/1knghrx/llama_or_other_llm_locally_on_macbook_with_easy/ | false | false | self | 3 | null |
Best Model To Run on 8GB CPU RAM | 1 | [removed] | 2025-05-15T18:56:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kng8oo/best_model_to_run_on_8gb_cpu_ram/ | epiphanyseeker1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kng8oo | false | null | t3_1kng8oo | /r/LocalLLaMA/comments/1kng8oo/best_model_to_run_on_8gb_cpu_ram/ | false | false | self | 1 | null |
AMD ML Stack updates and improvements! | 1 | [removed] | 2025-05-15T18:51:31 | https://www.reddit.com/gallery/1kng4b1 | Doogie707 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kng4b1 | false | null | t3_1kng4b1 | /r/LocalLLaMA/comments/1kng4b1/amd_ml_stack_updates_and_improvements/ | false | false | 1 | null | |
Are there any models that are even half funny? | 14 | Are there any models that can write funny text including jokes? | 2025-05-15T18:24:27 | https://www.reddit.com/r/LocalLLaMA/comments/1knfggw/are_there_any_models_that_are_even_half_funny/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knfggw | false | null | t3_1knfggw | /r/LocalLLaMA/comments/1knfggw/are_there_any_models_that_are_even_half_funny/ | false | false | self | 14 | null |
ThinkStation PGX - with NVIDIA GB10 Grace Blackwell Superchip / 128GB | 83 | 2025-05-15T18:21:39 | https://news.lenovo.com/all-new-lenovo-thinkstation-pgx-big-ai-innovation-in-a-small-form-factor/ | nostriluu | news.lenovo.com | 1970-01-01T00:00:00 | 0 | {} | 1knfe13 | false | null | t3_1knfe13 | /r/LocalLLaMA/comments/1knfe13/thinkstation_pgx_with_nvidia_gb10_grace_blackwell/ | false | false | 83 | {'enabled': False, 'images': [{'id': '1IRFqFqUKq9dUsTqHEddoQYYbTReEcZJ4BOT13ZyRpI', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/Bf1eGAFfYgmopj7bn8x57X5Vubn-mFaf7TrFzb01Rl4.jpg?width=108&crop=smart&auto=webp&s=c45248b1fed4c8ff4a3d198c267ea33235db254f', 'width': 108}, {'height': 160, 'url': 'h... | ||
Falcon-Edge: A series of powerful, universal and fine-tunable BitNet models | 1 | [removed] | 2025-05-15T18:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/1knfcva/falconedge_a_series_of_powerful_universal_and/ | Life-Prune2854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knfcva | false | null | t3_1knfcva | /r/LocalLLaMA/comments/1knfcva/falconedge_a_series_of_powerful_universal_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SfDBbTwoSlP3t49X-DzOenP7DPUl56haACsFp5qBk6E', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/Jr2u9t7hHrCf63fubhl1KzYbXy626ftH82VNyHypf5Q.jpg?auto=webp&s=aab36e1b3c82df95001d7fe771b306f5a5a4f4f9', 'width': 96}, 'variants': {}}]} |
What are the best local models with really high context window? | 0 | [removed] | 2025-05-15T18:04:48 | https://www.reddit.com/r/LocalLLaMA/comments/1knez68/what_are_the_best_local_models_with_really_high/ | Solid_Woodpecker3635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knez68 | false | null | t3_1knez68 | /r/LocalLLaMA/comments/1knez68/what_are_the_best_local_models_with_really_high/ | false | false | self | 0 | null |
What would you run with 128GB RAM instead of 64GB? (Mac) | 0 | I am looking to upgrade the Mac I currently use for LLMs and some casual image generation, and debating 64 vs 128GB.
Thoughts? | 2025-05-15T18:03:28 | https://www.reddit.com/r/LocalLLaMA/comments/1knexzi/what_would_you_run_with_128gb_ram_instead_of_64gb/ | PracticlySpeaking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knexzi | false | null | t3_1knexzi | /r/LocalLLaMA/comments/1knexzi/what_would_you_run_with_128gb_ram_instead_of_64gb/ | false | false | self | 0 | null |
❌ A2A "vs" MCP | ✅ A2A "and" MCP - Tutorial with Demo Included!!! | 0 | Hello Readers!
\[Code github link in comment\]
You must have heard about MCP an emerging protocol, "razorpay's MCP server out", "stripe's MCP server out"... But have you heard about A2A a protocol sketched by google engineers and together with MCP these two protocols can help in making complex applications.
Let me g... | 2025-05-15T17:51:44 | https://www.reddit.com/r/LocalLLaMA/comments/1knen67/a2a_vs_mcp_a2a_and_mcp_tutorial_with_demo_included/ | Responsible_Soft_429 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knen67 | false | null | t3_1knen67 | /r/LocalLLaMA/comments/1knen67/a2a_vs_mcp_a2a_and_mcp_tutorial_with_demo_included/ | false | false | self | 0 | null |
❌ A2A "vs" MCP | ✅ A2A "and" MCP - Tutorial with Demo Included!!! | 1 | [removed] | 2025-05-15T17:50:28 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1knem1d | false | null | t3_1knem1d | /r/LocalLLaMA/comments/1knem1d/a2a_vs_mcp_a2a_and_mcp_tutorial_with_demo_included/ | false | false | default | 1 | null | ||
Local models served globally? | 1 | After trialing local models like qwen3 30b, llama scout, various dense ~32b models, for a few weeks I think I can go fully local. I am about ready to buy a dedicated llm server probably a mac-mini or AMD 395+, or build something with 24gb vram and 64gb ddr5. But, because I am on the road a lot for work, and I do a lot... | 2025-05-15T17:21:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kndvxo/local_models_served_globally/ | Alarming-Ad8154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kndvxo | false | null | t3_1kndvxo | /r/LocalLLaMA/comments/1kndvxo/local_models_served_globally/ | false | false | self | 1 | null |
TTS Fine-tuning now in Unsloth! | 524 | Hey folks! Not the usual LLMs talk but we’re excited to announce that you can now train Text-to-Speech (TTS) models in [Unsloth](https://github.com/unslothai/unsloth)! Training is \~1.5x faster with 50% less VRAM compared to all other setups with FA2. :D
* Support includes `Sesame/csm-1b`, `OpenAI/whisper-large-v3`, `... | 2025-05-15T17:14:19 | https://v.redd.it/faqjz7kzaz0f1 | danielhanchen | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kndp9f | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/faqjz7kzaz0f1/DASHPlaylist.mpd?a=1749921271%2CZTdhZGU4ODMyYmVhOTFjZmQ3YmY2OWQwZDY3MmI1MTdhMzEzYmRjYjk4YWJhZWZkOTU2MjMxMTlhZDMxMzlkMA%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/faqjz7kzaz0f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kndp9f | /r/LocalLLaMA/comments/1kndp9f/tts_finetuning_now_in_unsloth/ | false | false | 524 | {'enabled': False, 'images': [{'id': 'bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7', 'resolutions': [{'height': 103, 'url': 'https://external-preview.redd.it/bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7.png?width=108&crop=smart&format=pjpg&auto=webp&s=763704175d747fc6bfbdf4d9c19c048bee9f... | |
How We Made LLMs Work with Old Systems (Thanks to RAG) | 0 | LLMs are great—but not always accurate. RAG fixes that.
If you’re using AI in industries like BFSI, healthcare, or SaaS, accuracy isn’t optional. LLMs can hallucinate, and that’s a serious risk.
Retrieval-Augmented Generation (RAG) connects your LLM to real-time, trusted data—so responses are based on your content, n... | 2025-05-15T17:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kndlvp/how_we_made_llms_work_with_old_systems_thanks_to/ | Elvis_Vijay1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kndlvp | false | null | t3_1kndlvp | /r/LocalLLaMA/comments/1kndlvp/how_we_made_llms_work_with_old_systems_thanks_to/ | false | false | self | 0 | null |
AI Code completion for Netbeans IDE | 4 | Hey.
I wanted to share a hobby project of mine, in the unlikely event someone finds it useful.
I've written a plugin for Netbeans IDE that enables both fim code completion, instruction based completion and Ai Chat with local or remote backends.
"Why Netbeans?", you might ask. (Or more likely: "What is Netbeans?")
T... | 2025-05-15T17:09:52 | neph1010 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kndl7d | false | null | t3_1kndl7d | /r/LocalLLaMA/comments/1kndl7d/ai_code_completion_for_netbeans_ide/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'ydAe_hMc7hrVVzeYY1ay6Iom0-1jVHjLhgDa04rYv34', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/0n4mme8p7z0f1.png?width=108&crop=smart&auto=webp&s=9d683ce4dc9f2c7a19480d89e66fd660da5146ed', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/0n4mme8p7z0f1.pn... | ||
Ansible to build out LLM | 1 | Anyone know of a repository of Ansible scripts to building / optimizing a Linux LLM environment? | 2025-05-15T16:54:04 | https://www.reddit.com/r/LocalLLaMA/comments/1knd6vb/ansible_to_build_out_llm/ | jsconiers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knd6vb | false | null | t3_1knd6vb | /r/LocalLLaMA/comments/1knd6vb/ansible_to_build_out_llm/ | false | false | self | 1 | null |
TTS Fine-tuning now in Unsloth - Sesame CSM + Whisper support | 2 | Hey folks! This one’s a bit different from LLMs but we’re super excited to announce that you can now train Text-to-Speech (TTS) models in [Unsloth](https://github.com/unslothai/unsloth)! Training is \~1.5x faster with 50% less VRAM compared to all other setups with FA2. :D
* We support models like `Sesame/csm-1b`, `Op... | 2025-05-15T16:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/1knd53e/tts_finetuning_now_in_unsloth_sesame_csm_whisper/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knd53e | false | null | t3_1knd53e | /r/LocalLLaMA/comments/1knd53e/tts_finetuning_now_in_unsloth_sesame_csm_whisper/ | false | false | self | 2 | null |
Attempting to get genetic behaviour in 1 llm call. Attempt 1. | 1 | [removed] | 2025-05-15T16:33:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kncooi/attempting_to_get_genetic_behaviour_in_1_llm_call/ | Character-Drink2952 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kncooi | false | null | t3_1kncooi | /r/LocalLLaMA/comments/1kncooi/attempting_to_get_genetic_behaviour_in_1_llm_call/ | false | false | self | 1 | null |
Quick Qwen3-30B-A6B-16-Extreme vs Qwen3-30B A3B Benchmark | 56 | Hey, I have a Benchmark suite of 110 tasks across multiple programming languages. The focus really is on more complex problems and not Javascript one-shot problems. I was interested in comparing the above two models.
Setup
\- Qwen3-30B-A6B-16-Extreme Q4\_K\_M running in LMStudio
\- Qwen3-30B A3B on OpenRouter
... | 2025-05-15T16:17:15 | https://www.reddit.com/r/LocalLLaMA/comments/1knca48/quick_qwen330ba6b16extreme_vs_qwen330b_a3b/ | terhechte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knca48 | false | null | t3_1knca48 | /r/LocalLLaMA/comments/1knca48/quick_qwen330ba6b16extreme_vs_qwen330b_a3b/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': '-1tEckwmrxwtomZTafD2hEqMRKvNLO0mPIutjO_qYhY', 'resolutions': [{'height': 95, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=108&crop=smart&auto=webp&s=4904377f1acc1958af76874ca7486f29ef665e09', 'width': 108}, {'height': 190, 'url': 'h... |
Falcon-Edge: A series of powerful, universal and fine-tunable BitNet models | 1 | [removed] | 2025-05-15T16:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1knc1wh/falconedge_a_series_of_powerful_universal_and/ | Automatic_Truth_6666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knc1wh | false | null | t3_1knc1wh | /r/LocalLLaMA/comments/1knc1wh/falconedge_a_series_of_powerful_universal_and/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uKoAN56QBrx49tKNY13u6ICrHEJxeCADh_8PLik14kc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xw2jy5A9gAJZ2t_pujEb-NpC3KWxGXNYtmNC4Juz4RI.jpg?width=108&crop=smart&auto=webp&s=1a28048819a5343167657c63adfd0b1c74d3a365', 'width': 108}, {'height': 108, 'url': 'h... |
Falcon-Edge: a 1B and 3B LLM based on the BitNet architecture. | 1 | [removed] | 2025-05-15T16:04:08 | https://www.reddit.com/r/LocalLLaMA/comments/1knbyp9/falconedge_a_1b_and_3b_llm_based_on_the_bitnet/ | ilyas555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knbyp9 | false | null | t3_1knbyp9 | /r/LocalLLaMA/comments/1knbyp9/falconedge_a_1b_and_3b_llm_based_on_the_bitnet/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Gpoz4LVC0qzEk4xcFJcFr01b9w6NYjnY5GTIL8r_1vs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=108&crop=smart&auto=webp&s=6e1c6ff46d1d71acc70b3f9eb066a663cd674e87', 'width': 108}, {'height': 116, 'url': 'h... |
Falcon-Edge: a 1B and 3B models based on the bitnet architecture. | 1 | [removed] | 2025-05-15T16:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/1knbv8r/falconedge_a_1b_and_3b_models_based_on_the_bitnet/ | ilyas555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knbv8r | false | null | t3_1knbv8r | /r/LocalLLaMA/comments/1knbv8r/falconedge_a_1b_and_3b_models_based_on_the_bitnet/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Gpoz4LVC0qzEk4xcFJcFr01b9w6NYjnY5GTIL8r_1vs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=108&crop=smart&auto=webp&s=6e1c6ff46d1d71acc70b3f9eb066a663cd674e87', 'width': 108}, {'height': 116, 'url': 'h... |
Falcon-Edge: a 1B and 3B models based on the bitnet architecture. | 1 | [removed] | 2025-05-15T15:57:20 | https://www.reddit.com/r/LocalLLaMA/comments/1knbsef/falconedge_a_1b_and_3b_models_based_on_the_bitnet/ | ilyas555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knbsef | false | null | t3_1knbsef | /r/LocalLLaMA/comments/1knbsef/falconedge_a_1b_and_3b_models_based_on_the_bitnet/ | false | false | 1 | null | |
HanaVerse - Chat with AI through an interactive anime character! 🌸 | 15 | I've been working on something I think you'll love - HanaVerse, an interactive web UI for Ollama that brings your AI conversations to life through a charming 2D anime character named Hana!
What is **HanaVerse**? 🤔
HanaVerse transforms how you interact with Ollama's language models by adding a visual, animated compan... | 2025-05-15T15:52:31 | https://www.reddit.com/r/LocalLLaMA/comments/1knbo80/hanaverse_chat_with_ai_through_an_interactive/ | OrganicTelevision652 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1knbo80 | false | null | t3_1knbo80 | /r/LocalLLaMA/comments/1knbo80/hanaverse_chat_with_ai_through_an_interactive/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'e_TzCbLRWGuoVvpz2Ql3BuZNn4O25i0T1Ou5ynEyr54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=108&crop=smart&auto=webp&s=d2affacaa734a3be0bbdd6e15b0283fc0ee4f370', 'width': 108}, {'height': 108, 'url': 'h... |