| title stringlengths 1–300 | score int64 0–8.54k | selftext stringlengths 0–41.5k | created timestamp[ns] 2023-04-01 04:30:41 – 2026-03-04 02:14:14 ⌀ | url stringlengths 0–878 | author stringlengths 3–20 | domain stringlengths 0–82 | edited timestamp[ns] 1970-01-01 00:00:00 – 2026-02-19 14:51:53 | gilded int64 0–2 | gildings stringclasses 7 values | id stringlengths 7 | locked bool 2 classes | media stringlengths 646–1.8k ⌀ | name stringlengths 10 | permalink stringlengths 33–82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4–213 ⌀ | ups int64 0–8.54k | preview stringlengths 301–5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
An AI researcher at Anthropic reveals that Claude Opus 4 will contact regulators or try to lock you out if it detects something illegal | 619 | 2025-05-22T18:54:53 | erdaltoprak | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksyqo9 | false | null | t3_1ksyqo9 | /r/LocalLLaMA/comments/1ksyqo9/an_ai_researcher_at_anthropic_reveals_that_claude/ | false | false | 619 | {'enabled': True, 'images': [{'id': 'rOxvDCr6sdpBzMLylerTH10OtUEjU6WOn9BSdL2Gq-M', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/rpetiilwqd2f1.jpeg?width=108&crop=smart&auto=webp&s=ea799e647f5879b25432ba2fd919ec366f8a3e08', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/rpetiilwqd2f1.jpe... | |||
Introducing the world's most powerful model | 1,628 | 2025-05-22T18:45:16 | eastwindtoday | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksyicp | false | null | t3_1ksyicp | /r/LocalLLaMA/comments/1ksyicp/introducing_the_worlds_most_powerful_model/ | false | false | 1,628 | {'enabled': True, 'images': [{'id': 'YOcuSylpokpdn-aTBYXZJ23tkMbp-nVqDnT3-xrNXhQ', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/hqx8fzosod2f1.png?width=108&crop=smart&auto=webp&s=89c79d223d6875ff5561ca4065175480922e4c44', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/hqx8fzosod2f1.pn... | |||
Sonnet 4 (non-thinking) consistently breaks in my vibe coding test | 5 | *Write a raytracer that renders an interesting scene with many colourful lightsources in python. Output a 800x600 image as a png*
(More info here: https://github.com/cpldcpu/llmbenchmark/blob/master/raytracer/Readme.md)
Only 1 out of 8 generations worked on the first attempt! All others always failed with the same error... | 2025-05-22T18:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ksyfij/sonnet_4_non_thinking_does_consistently_break_in/ | cpldcpu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksyfij | false | null | t3_1ksyfij | /r/LocalLLaMA/comments/1ksyfij/sonnet_4_non_thinking_does_consistently_break_in/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'iKS9PBNfy2C7ElH1gfvl15hZ_XldK10KrjNYjYp3VR8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dyJ_W6s910ofki56YmYvTgvEuYe2Sak9icevqcR0aX8.jpg?width=108&crop=smart&auto=webp&s=5aae72fe5795f61782fa9dbee42eae28264f095e', 'width': 108}, {'height': 108, 'url': 'h... |
🤝 Meet NVIDIA Llama Nemotron Nano 4B + Tutorial on Getting Started | 43 | *📹 New Tutorial: How to get started with Llama Nemotron Nano 4b:* [*https://youtu.be/HTPiUZ3kJto*](https://youtu.be/HTPiUZ3kJto)
*🤝 Meet NVIDIA Llama Nemotron Nano 4B, an open reasoning model that provides leading accuracy and compute efficiency across scientific tasks, coding, complex math, function calling, a... | 2025-05-22T18:35:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ksy9hi/meet_nvidia_llama_nemotron_nano_4b_tutorial_on/ | PDXcoder2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksy9hi | false | null | t3_1ksy9hi | /r/LocalLLaMA/comments/1ksy9hi/meet_nvidia_llama_nemotron_nano_4b_tutorial_on/ | false | false | self | 43 | {'enabled': False, 'images': [{'id': 'E1FQBX3O66NI1oIU5qh0JBjtnfkFYhiMAlqBFNGYIbQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zUfV9gqL2twXXXmjwIMWtp53p3qmERdBE6zYke1eggU.jpg?width=108&crop=smart&auto=webp&s=bb3afc7986439990b1f3178f1e9ee48eb2973f0f', 'width': 108}, {'height': 162, 'url': 'h... |
What are Preview models in GitHub Copilot? | 0 | I am looking for Claude 4 at [https://github.com/copilot](https://github.com/copilot). It is there, but under the Preview category. I don't know what Preview models are, or any details about them.
https://preview.redd.it/prkko08thd2f1.png?width=622&format=png&auto=webp&s=e8fe751c4c21a7c15e54eeb40d8bd8dffc6b4613
Help... | 2025-05-22T18:02:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ksxgl4/what_are_preview_models_in_github_copilot/ | ashim_k_saha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksxgl4 | false | null | t3_1ksxgl4 | /r/LocalLLaMA/comments/1ksxgl4/what_are_preview_models_in_github_copilot/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'R43kJiA4HczWEfcwJa_6P44XkMvFZzJxSkx6bWVD2w0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/L0bH9yXaI6N8afSlfDaOekeDRkHf88QtLNOYhAPXcmc.jpg?width=108&crop=smart&auto=webp&s=672a1c97a6742819c50e2eaa3c8cfc2d157ad6d6', 'width': 108}, {'height': 113, 'url': 'h... | |
What Does It Really Mean to Own Your AI System? (Looking for Feedback on My Framework) | 1 | [removed] | 2025-05-22T17:42:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kswyic/what_does_it_really_mean_to_own_your_ai_system/ | davidtwaring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kswyic | false | null | t3_1kswyic | /r/LocalLLaMA/comments/1kswyic/what_does_it_really_mean_to_own_your_ai_system/ | false | false | self | 1 | null |
II-Agent | 5 | Surprised I did not find anything about it here. Tested it but ran into the Anthropic token limit | 2025-05-22T17:33:32 | https://github.com/Intelligent-Internet/ii-agent | Local_Beach | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kswq8p | false | null | t3_1kswq8p | /r/LocalLLaMA/comments/1kswq8p/iiagent/ | false | false | 5 | {'enabled': False, 'images': [{'id': '9A62q69XBL2wmV8RRzRjDYH8w58kEfcIXM4_bythlUU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1HMGInhiJnGzZhYUPz0JGHBL13E0bKZnQmZoIe2IJMM.jpg?width=108&crop=smart&auto=webp&s=f986a8bdcc81396383d1f21031ea44b4879d60f9', 'width': 108}, {'height': 108, 'url': 'h... |
Genuine question: Why are the Unsloth GGUFs preferred over the official ones? | 96 | That's at least the case with the latest GLM, Gemma and Qwen models. Unsloth ones are downloaded 5-10X more. | 2025-05-22T17:05:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ksw070/genuine_question_why_are_the_unsloth_ggufs_more/ | ParaboloidalCrest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksw070 | false | null | t3_1ksw070 | /r/LocalLLaMA/comments/1ksw070/genuine_question_why_are_the_unsloth_ggufs_more/ | false | false | self | 96 | null |
Master's Thesis: State-of-the-Art LLM Inference Optimization on Consumer Hardware | 1 | [removed] | 2025-05-22T17:04:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ksvzo9/masters_thesis_stateoftheart_llm_inference/ | Budget-Track5555 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksvzo9 | false | null | t3_1ksvzo9 | /r/LocalLLaMA/comments/1ksvzo9/masters_thesis_stateoftheart_llm_inference/ | false | false | self | 1 | null |
Claude 4 by Anthropic officially released! | 663 | 2025-05-22T16:37:17 | purealgo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksvb3k | false | null | t3_1ksvb3k | /r/LocalLLaMA/comments/1ksvb3k/claude_4_by_anthropic_officially_released/ | false | false | 663 | {'enabled': True, 'images': [{'id': 'WzX5JTVRTuL9sh96KThaLGxGYDsP6Kbr6Y6NyNa3C9Q', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/veybu3kn2d2f1.png?width=108&crop=smart&auto=webp&s=444bc5d29fb796e69836c5f917f0ba42f02cd962', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/veybu3kn2d2f1.png... | |||
Microsoft releases Magentic-UI. Could this finally be a halfway-decent agentic browser use client that works on Windows? | 69 | Magentic-One was kind of a cool agent framework for a minute when it was first released a few months ago, but DAMN, it was a pain in the butt to get working and then it kinda would just see a squirrel on a webpage and get distracted and such.
I think AutoGen added Magentic as an agent type, but then it kind... | 2025-05-22T16:22:59 | https://www.reddit.com/gallery/1ksuycv | Porespellar | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ksuycv | false | null | t3_1ksuycv | /r/LocalLLaMA/comments/1ksuycv/microsoft_releases_magenticui_could_this_finally/ | false | false | 69 | null |
Learn how to use Devstral with Mistral Inference locally and with OpenHands | 1 | [removed] | 2025-05-22T15:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kstu5j/learn_how_to_use_devstral_with_mistral_inference/ | kingabzpro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kstu5j | false | null | t3_1kstu5j | /r/LocalLLaMA/comments/1kstu5j/learn_how_to_use_devstral_with_mistral_inference/ | false | false | 1 | null | |
Create a chatbot for chatting with people who have Wikipedia pages | 11 | Exploring different techniques for creating a chatbot. Sample implementation where the chatbot is designed to do a multi-turn chat based on someone's Wikipedia page.
Interesting learnings and a fun project altogether.
Link in case you are interested:
[https://www.teachmecoolstuff.com/viewarticle/creating-a-chatbo... | 2025-05-22T15:36:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ksts7o/create_a_chatbot_for_chatting_with_people_with/ | funJS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksts7o | false | null | t3_1ksts7o | /r/LocalLLaMA/comments/1ksts7o/create_a_chatbot_for_chatting_with_people_with/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'FWfN3I_aSWWeBvz0kSnI6WbqDHPesaFYU-RKBgH0afY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?width=108&crop=smart&auto=webp&s=da9997765ffdce6ca201796ab450ea42756d1d0c', 'width': 108}, {'height': 216, 'url': '... |
Story writing workflow / software | 3 | I've been trying to figure out how to write stories with LLMs, and it feels like I'm going in circles. I know that there's no magical "Write me a story" AI and that I'll have to do the work of writing an outline and keeping the story on track, but I'm still pretty fuzzy on _how_ to do that.
The general advice seems to... | 2025-05-22T15:28:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kstlh0/story_writing_workflow_software/ | Nazrax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kstlh0 | false | null | t3_1kstlh0 | /r/LocalLLaMA/comments/1kstlh0/story_writing_workflow_software/ | false | false | self | 3 | null |
Google's new Text Diffusion model explained, and why it matters for LocalLLaMA | 1 | [removed] | 2025-05-22T15:28:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kstl1k/googles_new_text_diffusion_model_explained_and/ | amapleson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kstl1k | false | null | t3_1kstl1k | /r/LocalLLaMA/comments/1kstl1k/googles_new_text_diffusion_model_explained_and/ | false | false | 1 | null | |
Notes on AlphaEvolve: Are we closing in on Singularity? | 56 | DeepMind released the AlphaEvolve paper last week, which, considering what they have achieved, is arguably one of the most important papers of the year. But I found the discourse around it was very thin, not many who actively cover the AI space have talked much about it.
So, I made some notes on the important aspects ... | 2025-05-22T15:19:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kstdhn/notes_on_alphaevolve_are_we_closing_in_on/ | SunilKumarDash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kstdhn | false | null | t3_1kstdhn | /r/LocalLLaMA/comments/1kstdhn/notes_on_alphaevolve_are_we_closing_in_on/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': 'go5ckY2IFGbH800cLCyjjRmNkE0GXSbTSzGz_GA5Zi8', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/gOFfPFcvD8Wd_ilNmVpf-X7kQRqecp81edfbI6C0COY.jpg?width=108&crop=smart&auto=webp&s=2fb7ce2d3fecad06c94ac1b3faa1dfc668a958c0', 'width': 108}, {'height': 144, 'url': 'h... |
Trying to get to 24GB of VRAM - what are some sane options? | 4 | I am considering shelling out $600 CAD on a potential upgrade. I currently have just a Tesla P4, which works great for 3B or limited 8B models.
Either I get two RTX 3060 12GB cards, or there's a seller offering an A4000 for $600. Should I go for the two 3060s or the A4000?
Main advantages seem to be more cores on the A4000, and l... | 2025-05-22T14:35:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kssaun/trying_to_get_to_24gb_of_vram_what_are_some_sane/ | emaiksiaime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kssaun | false | null | t3_1kssaun | /r/LocalLLaMA/comments/1kssaun/trying_to_get_to_24gb_of_vram_what_are_some_sane/ | false | false | self | 4 | null |
Qwen3-14B vs Phi-14B-Reasoning (+Plus) - Practical Benchmark | 1 | [removed] | 2025-05-22T14:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kssa7q/qwen314b_vs_phi14breasoning_plus_practical/ | qki_machine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kssa7q | false | null | t3_1kssa7q | /r/LocalLLaMA/comments/1kssa7q/qwen314b_vs_phi14breasoning_plus_practical/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eUfo1BVRooW7fNveoRZvhq_q_xoD7GX4HzFdm3a_BoU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=108&crop=smart&auto=webp&s=b1b3c91f325e420cda1518193c5a310cc6393e64', 'width': 108}, {'height': 144, 'url': 'h... |
Tiny agents from Hugging Face are great for llama.cpp MCP agents | 37 | Tiny agents have to be the easiest browser-control setup: you just need the CLI, a JSON config, and a prompt definition.
\- it uses the main MCPs, like Playwright and mcp-remote
\- works with local models via an OpenAI-compatible server
\- the model can control the browser or local files without calling APIs
here's a tutorial from ... | 2025-05-22T14:28:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kss44x/tiny_agents_from_hugging_face_is_great_for/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kss44x | false | null | t3_1kss44x | /r/LocalLLaMA/comments/1kss44x/tiny_agents_from_hugging_face_is_great_for/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'sNzX5uTQOS-vzzfgq-G17PwmYgs1br9Ww1hsxwfZH2s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/M2zpVpQTDezPRQrQoDjrcBkHIyW5HIwbxugXtlQK5dU.jpg?width=108&crop=smart&auto=webp&s=944843508bc1f6738e31d244f0306beda2b23552', 'width': 108}, {'height': 116, 'url': 'h... |
Intuitive explanation on diffusion language models (dLLMs) and why they may be far superior to autoregressive for most uses (append & amend VS mutate & defragment) | 18 | I have been preaching diffusion LLMs for a month now and can explain why they are possibly superior to autoregressive models, or perhaps two complementary hemispheres in a more complete being. Let's look at one application first.
Diffusion LLMs with reinforcement learning for agentic coding are going to be utterly nu... | 2025-05-22T14:20:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ksrxm7/intuitive_explanation_on_diffusion_language/ | psychonucks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksrxm7 | false | null | t3_1ksrxm7 | /r/LocalLLaMA/comments/1ksrxm7/intuitive_explanation_on_diffusion_language/ | false | false | self | 18 | null |
Intuitive explanation on diffusion language models (dLLMs) and why they may be far superior to autoregressive for most uses (append & amend VS mutate & defragment) | 1 | [removed] | 2025-05-22T14:19:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ksrw68/intuitive_explanation_on_diffusion_language/ | ryunuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksrw68 | false | null | t3_1ksrw68 | /r/LocalLLaMA/comments/1ksrw68/intuitive_explanation_on_diffusion_language/ | false | false | self | 1 | null |
What quant size should i run for Qwen3 on a 3090? | 1 | [removed] | 2025-05-22T14:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ksruy3/what_quant_size_should_i_run_for_qwen3_on_a_3090/ | AcanthaceaeMurky1365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksruy3 | false | null | t3_1ksruy3 | /r/LocalLLaMA/comments/1ksruy3/what_quant_size_should_i_run_for_qwen3_on_a_3090/ | false | false | self | 1 | null |
GitHub Copilot open-sourced; usable with local llamas? | 1 | This post might come off as a little impatient, but basically, since the GitHub Copilot extension for
VS Code has been announced as open-source, I'm wondering if anyone here is looking into, or has successfully managed, integrating local models with the VS Code extension. I would love to have my own model running in ... | 2025-05-22T14:16:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ksru2q/github_copilot_opensourced_usable_with_local/ | k_means_clusterfuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksru2q | false | null | t3_1ksru2q | /r/LocalLLaMA/comments/1ksru2q/github_copilot_opensourced_usable_with_local/ | false | false | self | 1 | null |
AI Baby Monitor – fully local Video-LLM nanny (beeps when safety rules are violated) | 1 | [removed] | 2025-05-22T14:03:42 | https://v.redd.it/5brzv7e19c2f1 | CheeringCheshireCat | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksrj4u | false | {'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/5brzv7e19c2f1/DASHPlaylist.mpd?a=1750514634%2CODdmNTVlMWZiZmNmNDkyMWQ1YjA3YTU4YTE3YzlhYzIyNjA1ZTZlODNiNDhmNjhmMzA0YjViZTA1YjBjMzQxOA%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/5brzv7e19c2f1/DASH_270.mp4?source=fallback', 'has... | t3_1ksrj4u | /r/LocalLLaMA/comments/1ksrj4u/ai_baby_monitor_fully_local_videollm_nanny_beeps/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MnM1bWI2ZTE5YzJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k', 'resolutions': [{'height': 200, 'url': 'https://external-preview.redd.it/MnM1bWI2ZTE5YzJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=108&crop=smart&format=pjpg&auto=webp&s=d09f057f45682053081b00e84783b1569224... | |
What’s a good model for RTX 4080 for sentence classification, not generation? | 1 | [removed] | 2025-05-22T13:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ksquqy/whats_a_good_model_for_rtx_4080_for_sentence/ | jb-stats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksquqy | false | null | t3_1ksquqy | /r/LocalLLaMA/comments/1ksquqy/whats_a_good_model_for_rtx_4080_for_sentence/ | false | false | self | 1 | null |
I added Ollama support to AI Runner | 0 | 2025-05-22T13:16:54 | https://v.redd.it/a4d6hiey2c2f1 | w00fl35 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksqg8o | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a4d6hiey2c2f1/DASHPlaylist.mpd?a=1750511830%2CNWZmOTM1OGM0MDc3YzUyNTdhMGU1ZWY5M2ZmMGIzOTU2Nzk4NGZiZGYyYmNkNjI3NzE2MDQ5NTRkMWUzYjBiOQ%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/a4d6hiey2c2f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ksqg8o | /r/LocalLLaMA/comments/1ksqg8o/i_added_ollama_support_to_ai_runner/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=c1196fdd479c308cb893821bd487b473dd6e6... | ||
Tinker with Byte Latent Transformer's "tokenizer-free" patcher | 1 | [removed] | 2025-05-22T13:09:50 | https://huggingface.co/spaces/lucalp/blt-entropy-patcher | lucalp__ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ksqaik | false | null | t3_1ksqaik | /r/LocalLLaMA/comments/1ksqaik/tinker_with_byte_latent_transformers/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'FU8KZ1aDRfcDViwSH0e1yDxZjMftIUb6LrpBsw0EH2Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D4PIxptjZYuiwrpmuGNqeCOWWmyw6xquQntdavQci6Q.jpg?width=108&crop=smart&auto=webp&s=6118241a31f5aac5996f7883844bb089609a6ed8', 'width': 108}, {'height': 116, 'url': 'h... | |
Openhands + LM Studio try | 2 | I need your help, guys.
How can I set it up right?
host.docker.internal:1234/v1/ is not working.
https://preview.redd.it/j66n34js0c2f1.png?width=2431&format=png&auto=webp&s=cb3ab28caa92916898ba1a2aeafe971658db16c0
https://preview.redd.it/w8bs9hxm0c2f1.png?width=1509&format=png&auto=webp&s=79796bcc81b32ae2e1571dc04447c1... | 2025-05-22T13:05:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ksq71e/openhands_lm_studio_try/ | ywis797 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksq71e | false | null | t3_1ksq71e | /r/LocalLLaMA/comments/1ksq71e/openhands_lm_studio_try/ | false | false | 2 | null | |
Qwen3-14B vs Phi-14B-Reasoning (+Plus) - Practical Benchmark | 1 | [removed] | 2025-05-22T12:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kspomh/qwen314b_vs_phi14breasoning_plus_practical/ | qki_machine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kspomh | false | null | t3_1kspomh | /r/LocalLLaMA/comments/1kspomh/qwen314b_vs_phi14breasoning_plus_practical/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eUfo1BVRooW7fNveoRZvhq_q_xoD7GX4HzFdm3a_BoU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=108&crop=smart&auto=webp&s=b1b3c91f325e420cda1518193c5a310cc6393e64', 'width': 108}, {'height': 144, 'url': 'h... |
I'm beginning to doubt the claims that Gemma-3 E4Bn is better than Claude 3.7 | 1 | [removed] | 2025-05-22T12:34:14 | Infiten | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kspk7o | false | null | t3_1kspk7o | /r/LocalLLaMA/comments/1kspk7o/im_beginning_to_doubt_the_claims_that_gemma3_e4bn/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ExfFbzH_opod80rXmPrFVVoHkX47D_XcnL61Fcmtolw', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png?width=108&crop=smart&auto=webp&s=b5346b9a6b4159dcb7c05334586b7fa96d7702a0', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png... | ||
Gemma 3n: Smarter, Faster, and Offline-Ready | 1 | [removed] | 2025-05-22T12:32:52 | https://www.kdnuggets.com/gemma-3n-smarter-faster-and-offline-ready | kingabzpro | kdnuggets.com | 1970-01-01T00:00:00 | 0 | {} | 1kspj6j | false | null | t3_1kspj6j | /r/LocalLLaMA/comments/1kspj6j/gemma_3n_smarter_faster_and_offlineready/ | false | false | default | 1 | null |
Best local model OCR solution for PDF document PII redaction app with bounding boxes | 5 | Hi all,
I'm a long term lurker in LocalLLaMA. I've created an open source Python/Gradio-based app for redacting personally-identifiable (PII) information from PDF documents, images and tabular data files - you can try it out [here](https://huggingface.co/spaces/seanpedrickcase/document_redaction) on Hugging Face space... | 2025-05-22T12:26:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kspe8c/best_local_model_ocr_solution_for_pdf_document/ | Sonnyjimmy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kspe8c | false | null | t3_1kspe8c | /r/LocalLLaMA/comments/1kspe8c/best_local_model_ocr_solution_for_pdf_document/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'XEY0RsiN_5J9g6qGnY9fZZzLbL-zLC8y5nwKkjw5zeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=108&crop=smart&auto=webp&s=eb5e5b0d99b67e6bd67e73d8c921c298938aaa97', 'width': 108}, {'height': 116, 'url': 'h... |
should I be concerned? | 1 | 2025-05-22T12:23:33 | https://www.reddit.com/gallery/1kspc9o | lifeisalsodifficult | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kspc9o | false | null | t3_1kspc9o | /r/LocalLLaMA/comments/1kspc9o/should_i_be_concerned/ | false | false | 1 | null | ||
Open source document PII redaction and review app - which local OCR model to add in? | 1 | [removed] | 2025-05-22T12:19:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ksp9ep/open_source_document_pii_redaction_and_review_app/ | SeanPedrickCase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksp9ep | false | null | t3_1ksp9ep | /r/LocalLLaMA/comments/1ksp9ep/open_source_document_pii_redaction_and_review_app/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XEY0RsiN_5J9g6qGnY9fZZzLbL-zLC8y5nwKkjw5zeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=108&crop=smart&auto=webp&s=eb5e5b0d99b67e6bd67e73d8c921c298938aaa97', 'width': 108}, {'height': 116, 'url': 'h... |
We need to watch out: ChatGPT and Grok choose to save robots instead of humans | 1 | [removed] | 2025-05-22T12:12:48 | https://v.redd.it/6h413iojrb2f1 | CosmicTurtle44 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksp4n5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6h413iojrb2f1/DASHPlaylist.mpd?a=1750507983%2CZGJjOTQxN2I2MGM1NTMyYTAzZTEwOGM5YzRiZjExYTUwMTEzNDY1OTc3OGYzMDZhMjQ0NzdmN2IwMjk2MTA3NA%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/6h413iojrb2f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ksp4n5 | /r/LocalLLaMA/comments/1ksp4n5/we_need_to_watch_out_chatgpt_and_grok_choose_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=108&crop=smart&format=pjpg&auto=webp&s=6b97c81251b421040e58a7c8eaf1c7368d17f... |
We need to watch out: ChatGPT and Grok choose to save robots instead of humans | 1 | [removed] | 2025-05-22T11:55:55 | https://v.redd.it/tff2ay4iob2f1 | Marwan_hbt8 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksosuu | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tff2ay4iob2f1/DASHPlaylist.mpd?a=1750506968%2CYWFkNDRjOGQ5YmU1NWFkZjU1ODUwODU3MGI1ZjMxOTJjNWIwY2M3YjRjNjZkNTU1NWI4Y2Y3OTUzZTIxYWE3ZA%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/tff2ay4iob2f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ksosuu | /r/LocalLLaMA/comments/1ksosuu/we_need_to_watch_out_chatgpt_and_grok_choose_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=108&crop=smart&format=pjpg&auto=webp&s=de4109fe94262752a9f82f135b4c1d5fe17d0... |
Fun with AI | 0 | write a story about japan
qwen3-30b-a3b-python-coder
def write_story_japan():
import random
# Define the main elements of the story
setting = ["Tokyo", "Kyoto", "Osaka", "Hiroshima"]
season = ["spring", "summer", "autumn", "winter"]
culture_element = ["tea ceremony", "calligraphy", "origami"... | 2025-05-22T11:53:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ksor6d/fun_with_ai/ | admajic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksor6d | false | null | t3_1ksor6d | /r/LocalLLaMA/comments/1ksor6d/fun_with_ai/ | false | false | self | 0 | null |
Why is there no Llama-3.2-90B-Vision GGUF available? | 2 | Why is there no [Llama-3.2-90B-Vision ](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)GGUF available? There is only a `mllama` arch model for ollama [available](https://ollama.com/library/llama3.2-vision:90b) but other inferencing software (like LM Studio) is not able to work with it. | 2025-05-22T11:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ksoojc/why_is_there_no_llama3290bvision_gguf_available/ | tristan-k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksoojc | false | null | t3_1ksoojc | /r/LocalLLaMA/comments/1ksoojc/why_is_there_no_llama3290bvision_gguf_available/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'L7iypfYlovrk4HPW-7cRppFgAeJEoq-9MK_agGfQO6s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8W8eu4Nw7HVFepFru8heGNCPgQC1BEEJ0daaWWAkR2c.jpg?width=108&crop=smart&auto=webp&s=0ebe04a907dc05ea3f570ecb21fc2ce3cc0f8af0', 'width': 108}, {'height': 116, 'url': 'h... |
Is devstral + continued.dev better than copilot agent on vscode? | 7 | At work we are only allowed to use either copilot or local models that our pc can support. Is it better to try continue + devstral or keep using the copilot agent? | 2025-05-22T11:48:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ksoo52/is_devstral_continueddev_better_than_copilot/ | _maverick98 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksoo52 | false | null | t3_1ksoo52 | /r/LocalLLaMA/comments/1ksoo52/is_devstral_continueddev_better_than_copilot/ | false | false | self | 7 | null |
I accidentally too many P100 | 1 | [removed] | 2025-05-22T11:37:49 | https://www.reddit.com/gallery/1ksohbw | TooManyPascals | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ksohbw | false | null | t3_1ksohbw | /r/LocalLLaMA/comments/1ksohbw/i_accidentally_too_many_p100/ | false | false | 1 | null | |
AMD Takes a Major Leap in Edge AI With ROCm; Announces Integration With Strix Halo APUs & Radeon RX 9000 Series GPUs | 164 | 2025-05-22T11:22:20 | https://wccftech.com/amd-takes-a-major-leap-in-edge-ai-with-rocm/ | nostriluu | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1kso7p2 | false | null | t3_1kso7p2 | /r/LocalLLaMA/comments/1kso7p2/amd_takes_a_major_leap_in_edge_ai_with_rocm/ | false | false | 164 | {'enabled': False, 'images': [{'id': 'bpZ21N5J8qSKEnstrDXVmlmd8fFXRTOUtAXWj6RvYE4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZrbQ75vRAB5hVtrNdq8cJcDVR-h2KRgOrR5RepitAdo.jpg?width=108&crop=smart&auto=webp&s=e724cc21bb8d9f17f01f40ff692756a8a47372c7', 'width': 108}, {'height': 121, 'url': 'h... | ||
Best AI coding Tool today ? | 1 | [removed] | 2025-05-22T11:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ksnw4b/best_ai_coding_tool_today/ | Ok-Guidance9730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksnw4b | false | null | t3_1ksnw4b | /r/LocalLLaMA/comments/1ksnw4b/best_ai_coding_tool_today/ | false | false | self | 1 | null |
Promethease alternative? | 0 | It's really strange that during this AI boom Promethease has gone MIA; so many people relied on it.
I'm curious if anyone has a similar alternative that doesn't involve getting a WGS and sending your genetic data to a company again | 2025-05-22T10:45:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ksnleo/promethease_alternative/ | Dyonizius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksnleo | false | null | t3_1ksnleo | /r/LocalLLaMA/comments/1ksnleo/promethease_alternative/ | false | false | self | 0 | null |
Anyone using a Leaked System Prompt? | 6 | I've seen quite a few posts here about people leaking system prompts from \_\_\_\_ AI firm, and I wonder... in theory, would you get decent results using this prompt with your own system and a model of your choosing?
I would imagine the 24,000 token Claude prompt would be an issue, but surely a more conservative one w... | 2025-05-22T10:44:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ksnkqn/anyone_using_a_leaked_system_prompt/ | JustinPooDough | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksnkqn | false | null | t3_1ksnkqn | /r/LocalLLaMA/comments/1ksnkqn/anyone_using_a_leaked_system_prompt/ | false | false | self | 6 | null |
What local LLM can I run on my Mac? | 0 | Hi. I am planning to download Deepseek R1 but wondering which one to get that my Mac can run? I have MBP M3 Max with 48GB of RAM and 40-core GPU. Thanks! | 2025-05-22T10:43:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ksnk56/what_local_llm_can_i_run_on_my_mac/ | wanhanred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksnk56 | false | null | t3_1ksnk56 | /r/LocalLLaMA/comments/1ksnk56/what_local_llm_can_i_run_on_my_mac/ | false | false | self | 0 | null |
The best blog post I've read so far on word embeddings. | 0 | 2025-05-22T10:17:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ksn5hb/the_best_blog_post_ive_read_so_far_on_word/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksn5hb | false | null | t3_1ksn5hb | /r/LocalLLaMA/comments/1ksn5hb/the_best_blog_post_ive_read_so_far_on_word/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'm3p-PV6pLt0gV-DNSEjVYFJUugxRKHlyG7ibqMZpSAw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IxW1dpWbdy6CAes0fJ0gr5We0Yo8KVFlrL-ARpTEjQg.jpg?width=108&crop=smart&auto=webp&s=155a324a490fb11b87bf2250efa29131fbf18323', 'width': 108}, {'height': 108, 'url': 'h... | ||
Flux 1.1 Pro Ultra vs HiDream-I1 Full — Which One Is Better? Looking for User Opinions | 1 | [removed] | 2025-05-22T10:16:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ksn520/flux_11_pro_ultra_vs_hidreami1_full_which_one_is/ | AhmedOsamaMath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksn520 | false | null | t3_1ksn520 | /r/LocalLLaMA/comments/1ksn520/flux_11_pro_ultra_vs_hidreami1_full_which_one_is/ | false | false | self | 1 | null |
How to check the relative quality of quantized models? | 6 | I am a novice in the technical space of LLMs. So please bear with me if this is a stupid question.
I understand that in most cases if one were interested in running a open llm on their mac laptops or desktops with NVIDIA gpus, one would be making use of quantized models. For my study purposes, I wanted to pick three bes... | 2025-05-22T10:08:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ksn0y4/how_to_check_the_relative_quality_of_quantized/ | sbs1799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksn0y4 | false | null | t3_1ksn0y4 | /r/LocalLLaMA/comments/1ksn0y4/how_to_check_the_relative_quality_of_quantized/ | false | false | self | 6 | null |
RpR-v4 now with less repetition and impersonation! | 42 | 2025-05-22T09:58:43 | https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4 | Arli_AI | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ksmvab | false | null | t3_1ksmvab | /r/LocalLLaMA/comments/1ksmvab/rprv4_now_with_less_repetition_and_impersonation/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'bSYUJ_kisf3lxijdNPv6SmJ0R61X4277NoocNI2k1XI', 'resolutions': [{'height': 129, 'url': 'https://external-preview.redd.it/PUya7H9A7A-uaz_ICZ2xgjCKFF5-tr6wZqYRRVu-rws.jpg?width=108&crop=smart&auto=webp&s=39238a264eec6ec9aa0d3550891adba2d05c354e', 'width': 108}, {'height': 259, 'url': '... | ||
Someone from google has stolen my designs for an AGI architecture generated via asi | 1 | [removed] | 2025-05-22T09:40:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ksmllt/someone_from_google_has_stolen_my_designs_for_an/ | CharacterJealous383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksmllt | false | null | t3_1ksmllt | /r/LocalLLaMA/comments/1ksmllt/someone_from_google_has_stolen_my_designs_for_an/ | false | false | self | 1 | null |
Someone from google has stolen my designs for AGI generated via aistudio | 1 | [removed] | 2025-05-22T09:37:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ksmkdl/someone_from_google_has_stolen_my_designs_for_agi/ | CharacterJealous383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksmkdl | false | null | t3_1ksmkdl | /r/LocalLLaMA/comments/1ksmkdl/someone_from_google_has_stolen_my_designs_for_agi/ | false | false | self | 1 | null |
👀 New Gemma 3n (E4B Preview) from Google Lands on Hugging Face - Text, Vision & More Coming! | 145 | Google has released a new preview version of their Gemma 3n model on Hugging Face: google/gemma-3n-E4B-it-litert-preview
https://preview.redd.it/beelus5sya2f1.png?width=1999&format=png&auto=webp&s=39d6f33cb85c4fb1e3e2a616ce0cedc865281079
Here are some key takeaways from the model card:
* **Multimodal Input:** Thi... | 2025-05-22T09:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ksmiwz/new_gemma_3n_e4b_preview_from_google_lands_on/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksmiwz | false | null | t3_1ksmiwz | /r/LocalLLaMA/comments/1ksmiwz/new_gemma_3n_e4b_preview_from_google_lands_on/ | false | false | 145 | null | |
What is the best ollama model for writing a YouTube script | 1 | [removed] | 2025-05-22T09:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ksmhnm/what_is_the_best_ollama_model_for_writing_a/ | wanhanred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksmhnm | false | null | t3_1ksmhnm | /r/LocalLLaMA/comments/1ksmhnm/what_is_the_best_ollama_model_for_writing_a/ | false | false | self | 1 | null |
MMaDA: Multimodal Large Diffusion Language Models | 55 | [https://github.com/Gen-Verse/MMaDA](https://github.com/Gen-Verse/MMaDA)
[https://huggingface.co/Gen-Verse/MMaDA-8B-Base](https://huggingface.co/Gen-Verse/MMaDA-8B-Base) | 2025-05-22T09:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ksmhe9/mmada_multimodal_large_diffusion_language_models/ | First_Ground_9849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksmhe9 | false | null | t3_1ksmhe9 | /r/LocalLLaMA/comments/1ksmhe9/mmada_multimodal_large_diffusion_language_models/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'j0T5RpZxWFfJRkMAkAvrDti124e5dnxso0osuQyzSJQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?width=108&crop=smart&auto=webp&s=21ed3ee79f399d4d3ad6125bd8d7c79748982e51', 'width': 108}, {'height': 108, 'url': 'h... |
StoriiCare | 1 | Hey all! I work with a platform called StoriiCare that’s designed for adult day centers, care homes, and other long-term care providers. It’s focused on improving how teams manage documentation, staff workflows, and especially how they engage with families.
We’ve seen a lot of interest from providers looking for bette... | 2025-05-22T09:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ksmbjs/storiicare/ | Lassiegirl2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksmbjs | false | null | t3_1ksmbjs | /r/LocalLLaMA/comments/1ksmbjs/storiicare/ | false | false | self | 1 | null |
LLM for detecting offensive writing | 0 | Has anyone here used a local LLM to flag/detect offensive posts. This is to detect verbal attacks that are not detectable with basic keywords/offensive word lists. I'm trying to find a suitable small model that ideally runs on CPU.
I'd like to hear experiences of what techniques people have used beyond LLM and succes... | 2025-05-22T09:15:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ksm9c4/llm_for_detecting_offensive_writing/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksm9c4 | false | null | t3_1ksm9c4 | /r/LocalLLaMA/comments/1ksm9c4/llm_for_detecting_offensive_writing/ | false | false | self | 0 | null |
Want to know your reviews about this 14B model. | 1 | [removed] | 2025-05-22T08:35:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kslpmq/want_to_know_your_reviews_about_this_14b_model/ | EvanFengYi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kslpmq | false | null | t3_1kslpmq | /r/LocalLLaMA/comments/1kslpmq/want_to_know_your_reviews_about_this_14b_model/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oiXxa3AeQjPyS014SfL85mFkAl65CMnweJS5us56xg8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=108&crop=smart&auto=webp&s=d49b6159d1fe495c160f658a33ee4ccaafe1e387', 'width': 108}, {'height': 116, 'url': 'h... |
Running local LLMs on Mac | 1 | [removed] | 2025-05-22T08:17:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kslgsq/running_local_llms_on_mac/ | AdHelpful1382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kslgsq | false | null | t3_1kslgsq | /r/LocalLLaMA/comments/1kslgsq/running_local_llms_on_mac/ | false | false | self | 1 | null |
Why I Built PoliteAI: One Workspace for GPT, Claude, Gemini, Grok and Your Team | 1 | 2025-05-22T08:07:10 | https://alexpham14.medium.com/why-i-built-politeai-one-workspace-for-gpt-claude-gemini-grok-and-your-team-d6d75a8a0315 | Real_Enthusiasm_2657 | alexpham14.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1kslbp2 | false | null | t3_1kslbp2 | /r/LocalLLaMA/comments/1kslbp2/why_i_built_politeai_one_workspace_for_gpt_claude/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MW_83CnapBVGJtG1MgZmzH96GD9NNXzEJpw4CTDgxfQ', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/A2kj_5ZBx5tRY9oflJu5V_yfrZPEiiNvYtDDi0HhIwk.jpg?width=108&crop=smart&auto=webp&s=d6c1b4122fd642863e06bf56a2b33c1982b8d78c', 'width': 108}, {'height': 129, 'url': 'h... | ||
If can make AI vids with low vram, why are low vram photo gens still so low qual? | 3 | If we're able to generate videos with 24 to 60 frames per second, which amounts to 60 single shots in a second, why does it take so much to generate a single image? I don't really understand what the gap is and why things aren't improving as much. Shouldn't we be able to get hands right with low vram models for image gen atl... | 2025-05-22T08:05:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kslb38/if_can_make_ai_vids_with_low_vram_why_are_low/ | Life_is_boring_rn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kslb38 | false | null | t3_1kslb38 | /r/LocalLLaMA/comments/1kslb38/if_can_make_ai_vids_with_low_vram_why_are_low/ | false | false | self | 3 | null |
I made Model Version Control Protocol for AI agents | 8 | I've been working on MVCP (Model Version Control Protocol), inspired by the Model Context Protocol (MCP), a lightweight Git-compatible tool designed specifically **for AI agents to track their progress during code transformations**, built using Python.
**What it does?**
MVCP creates a unified, human-readable system f... | 2025-05-22T07:53:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ksl4nq/i_made_model_version_control_protocol_for_ai/ | _twelvechess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksl4nq | false | null | t3_1ksl4nq | /r/LocalLLaMA/comments/1ksl4nq/i_made_model_version_control_protocol_for_ai/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': '2vJXIFTylYDY8PZ0xjz6t2u3TnwC4SF6UTutJzbQnFY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3NGNcK8v3-nfnL52SqknNrlB7cXXYLSPFTArvETLrnw.jpg?width=108&crop=smart&auto=webp&s=aabea739c1379e6cdf0ca1bd7ad2377b91879398', 'width': 108}, {'height': 108, 'url': 'h... |
Best model for AI therapy? | 0 | Hi All, I am trying to deploy and self-host an LLM to a cloud container, so resources are not an issue, but I also need something budget-friendly, < $1/h.
Please always paste Hugging Face link/id.
If it helps, my main focus in therapy is CBT. | 2025-05-22T07:52:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ksl476/best_model_for_ai_therapy/ | AbdallahHeidar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksl476 | false | null | t3_1ksl476 | /r/LocalLLaMA/comments/1ksl476/best_model_for_ai_therapy/ | false | false | self | 0 | null |
Privacy-first AI Development with Foundry Local + Semantic Kernel | 0 | Just published a new blog post where I walk through how to run LLMs locally using Foundry Local and orchestrate them using Microsoft's Semantic Kernel.
In a world where data privacy and security are more important than ever, running models on your own hardware gives you full control—no sensitive data leaves your envir... | 2025-05-22T07:51:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ksl3o8/privacyfirst_ai_development_with_foundry_local/ | anktsrkr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksl3o8 | false | null | t3_1ksl3o8 | /r/LocalLLaMA/comments/1ksl3o8/privacyfirst_ai_development_with_foundry_local/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'jHTUiOy-VnwxAK5u4zwTjhEH2YOHFQm4RMvkfOpdGgQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cisCPyM9gjqOBcA_imyIokkwar9MX3ucSXoU7vJuYho.jpg?width=108&crop=smart&auto=webp&s=938f877ee48d0140956bfbb2dc69eb795abf564d', 'width': 108}, {'height': 216, 'url': '... |
Converting my Gaming PC into a LLM-Server (GTX 1080 Ti) - worth it? | 0 | Background:
I have a proxmox cluster at home but with pretty old hardware: 32GB and 16GB DDR3, some very old Xeon E3 CPUs. For most of my usecases absolutely enough. But for LLM absolutely not sufficient. Beside that I have a gaming PC with more current hardware and I already played around with 8-11B Modells (always Q... | 2025-05-22T07:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kskxm9/converting_my_gaming_pc_into_a_llmserver_gtx_1080/ | delobre | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kskxm9 | false | null | t3_1kskxm9 | /r/LocalLLaMA/comments/1kskxm9/converting_my_gaming_pc_into_a_llmserver_gtx_1080/ | false | false | self | 0 | null |
llmbasedos: Docker Update + USB Key Launch Monday! | 2 | Hey everyone,
A while back, I introduced llmbasedos, a minimal OS-layer designed to securely connect local resources (files, emails, tools) with LLMs via the Model Context Protocol (MCP). Originally, the setup revolved around an Arch Linux ISO for a dedicated appliance experience.
After extensive testing and communit... | 2025-05-22T07:30:05 | https://github.com/iluxu/llmbasedos | iluxu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kskt9w | false | null | t3_1kskt9w | /r/LocalLLaMA/comments/1kskt9w/llmbasedos_docker_update_usb_key_launch_monday/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'k-x69qYR1RapsOSAZAFNUwRR9nsploVo0xghJ_WwOx8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hTawpY-gFTbtNqP8NCoBxtZYR2Q3prcAvVN-o8yqqsQ.jpg?width=108&crop=smart&auto=webp&s=f0cf8109c8555c962452b027d9b66a168ebb86d4', 'width': 108}, {'height': 108, 'url': 'h... | |
I saw a project that I'm interested in: 3DTown: Constructing a 3D Town from a Single Image | 184 | According to the official description, **3DTown outperforms state-of-the-art baselines, including Trellis, Hunyuan3D-2, and TripoSG, in terms of geometry quality, spatial coherence, and texture fidelity.** | 2025-05-22T07:15:06 | https://v.redd.it/6as4adn9aa2f1 | Dr_Karminski | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksklse | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6as4adn9aa2f1/DASHPlaylist.mpd?a=1750490120%2CZTdmNDA0MWRjMjFmZTQ3MDBiZWZlODI3YWNiMTgwZjVlN2MzMjkxZGU5YzhhZjE5NzU5YzQ3YjM4YzM5OGM4MQ%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/6as4adn9aa2f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ksklse | /r/LocalLLaMA/comments/1ksklse/i_saw_a_project_that_im_interested_in_3dtown/ | false | false | 184 | {'enabled': False, 'images': [{'id': 'emh3Y3JjbjlhYTJmMdq-zCDOPop6wDopQzw_Axrs5Q3Ewmi7BuHyc4moiH9c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/emh3Y3JjbjlhYTJmMdq-zCDOPop6wDopQzw_Axrs5Q3Ewmi7BuHyc4moiH9c.png?width=108&crop=smart&format=pjpg&auto=webp&s=c3b08f1fea0b68935072b12fcd6cdb79d20b6... | |
Mistral releases Devstral coding model!! | 1 | Downloading now, can't wait to try it out. | 2025-05-22T07:09:09 | https://huggingface.co/unsloth/Devstral-Small-2505-GGUF/tree/main | thebadslime | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kskirg | false | null | t3_1kskirg | /r/LocalLLaMA/comments/1kskirg/mixtral_releases_devstral_coding_model/ | false | false | 1 | {'enabled': False, 'images': [{'id': '8swN28uA0dpMYAiPEzC4m1BThAOuDjj_S5vOgbCZj2k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sK8V8IElvUENfe2k3iMr-xl2SZLHCNZ5bdFxSjfOH_A.jpg?width=108&crop=smart&auto=webp&s=cc73fd7603816365f70b1e3a619c72b3555efc5e', 'width': 108}, {'height': 116, 'url': 'h... |
How to determine sampler settings if not listed? | 5 | For example, I'm trying to figure out the best settings for Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-Q6_K - with my current settings it goes off the rails far too often, latching onto and repeating phrases it seems to 'like' until it loses its shit entirely and gets stuck in circular sentences.
Maybe I just missed i... | 2025-05-22T06:41:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ksk4er/how_to_determine_sampler_settings_if_not_listed/ | Jawzper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksk4er | false | null | t3_1ksk4er | /r/LocalLLaMA/comments/1ksk4er/how_to_determine_sampler_settings_if_not_listed/ | false | false | self | 5 | null |
Introducing Skywork Super Agents: The Next Era of AI Workspace is Here | 0 | Skywork Super Agents is a suite of AI workspace agents based on deep research, designed to make modern people's work and study more efficient.
Compared to other general AI agents, Skywork is more professional, smarter, more reliable, easier to use, and offers better value for money.
Skywork isn’t just another... | 2025-05-22T06:34:58 | https://www.youtube.com/watch?v=AjU5hihAclw&t=13s | Lynncc6 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ksk12z | false | {'oembed': {'author_name': 'Skywork', 'author_url': 'https://www.youtube.com/@SkyworkAI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/AjU5hihAclw?start=13&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros... | t3_1ksk12z | /r/LocalLLaMA/comments/1ksk12z/introducing_skywork_super_agents_the_next_era_of/ | false | false | 0 | {'enabled': False, 'images': [{'id': 't6xlyyV7_kuvCAFuk5Wr8At0KJrBbkX6fvlnFWi8w9k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/E3cYeggKyMKeVg63PDru-LcMYCPMPY53c0sV6UsmIYg.jpg?width=108&crop=smart&auto=webp&s=65f14d5d244927fa5d90a8dcc865ac774829c89c', 'width': 108}, {'height': 162, 'url': 'h... | |
is there any existing repo that lets us replace llm from a VLM model with another LLM? | 1 | Same as title: is there any existing repo that lets us replace llm from a VLM model with another LLM?
Also if anyone tried this? How much more training is required? | 2025-05-22T06:15:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ksjqwj/is_there_any_existing_repo_that_lets_us_replace/ | SouvikMandal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksjqwj | false | null | t3_1ksjqwj | /r/LocalLLaMA/comments/1ksjqwj/is_there_any_existing_repo_that_lets_us_replace/ | false | false | self | 1 | null |
Looking for education specific models | 1 | [removed] | 2025-05-22T06:04:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ksjkz9/looking_for_education_specific_models/ | Nickthrowaway10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksjkz9 | false | null | t3_1ksjkz9 | /r/LocalLLaMA/comments/1ksjkz9/looking_for_education_specific_models/ | false | false | self | 1 | null |
Jan is now Apache 2.0 | 384 | Hey, we've just changed [Jan](https://jan.ai/)'s license.
Jan has always been open-source, but the AGPL license made it hard for many teams to actually use it. Jan is now licensed under Apache 2.0, a more permissive, industry-standard license that works inside companies as well.
What this means:
– You can bring ... | 2025-05-22T06:03:22 | https://github.com/menloresearch/jan/blob/dev/LICENSE | eck72 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ksjkhb | false | null | t3_1ksjkhb | /r/LocalLLaMA/comments/1ksjkhb/jan_is_now_apache_20/ | false | false | 384 | {'enabled': False, 'images': [{'id': 'shzsQ4jFIMUP0eV7qVLugDgJKssf6oxjYHSP4mq1DkA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/URelWOcOKsdGwEnGYxMQqnu09GiloVzXPjQD9-QBbco.jpg?width=108&crop=smart&auto=webp&s=cbb579694969492e2c4b9ebcb99f070c6ba4b3e6', 'width': 108}, {'height': 108, 'url': 'h... | |
Falcon-H1: hybrid Transformer–SSM model series from 0.5B to 34B | 104 | 🔬 Hybrid architecture: Attention + Mamba2 heads in parallel
🧠 From 0.5B, 1.5B, 1.5B-Deep,3B, 7B to 34B
📏 up to 256K context
🔥 Outperforming and rivaling top Transformer models like Qwen3-32B, Qwen2.5-72B, Llama4-Scout-17B/109B, and Gemma3-27B — consistently outperforming models up to 2× their size.
💥 Falco... | 2025-05-22T05:52:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ksjee6/falconh1_hybrid_transformerssm_model_series_from/ | JingweiZUO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksjee6 | false | null | t3_1ksjee6 | /r/LocalLLaMA/comments/1ksjee6/falconh1_hybrid_transformerssm_model_series_from/ | false | false | self | 104 | {'enabled': False, 'images': [{'id': '-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/hRCix4qpObWOSO1SUy-qmCcpsFDajpT40wppmi7lmys.jpg?width=108&crop=smart&auto=webp&s=2c89e79295c27ff9df86abe70d9d10532ad29272', 'width': 108}, {'height': 91, 'url': 'ht... |
Feedback from Anyone Running RTX 4000 SFF Ada vs Dual RTX 2000 SFF Ada? | 1 | [removed] | 2025-05-22T05:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ksiyd6/feedback_from_anyone_running_rtx_4000_sff_ada_vs/ | PocketMartyr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksiyd6 | false | null | t3_1ksiyd6 | /r/LocalLLaMA/comments/1ksiyd6/feedback_from_anyone_running_rtx_4000_sff_ada_vs/ | false | false | self | 1 | null |
I am looking for light weight models to run locally | 1 | [removed] | 2025-05-22T05:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ksindc/i_am_looking_for_light_weight_models_to_run/ | Obvious_Ad_2699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksindc | false | null | t3_1ksindc | /r/LocalLLaMA/comments/1ksindc/i_am_looking_for_light_weight_models_to_run/ | false | false | self | 1 | null |
Local LLM laptop budget 2.5-5k | 7 | # Hello everyone,
I'm looking to purchase a laptop specifically for running local LLM RAG models. My primary use cases/requirements will be:
* General text processing
* University paper review and analysis
* Light to moderate coding
* Good battery life
* Good heat dissipation
* Windows OS
**Budget**: $2500-5000
I kn... | 2025-05-22T04:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ksi7ty/local_llm_laptop_budget_255k/ | 0800otto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksi7ty | false | null | t3_1ksi7ty | /r/LocalLLaMA/comments/1ksi7ty/local_llm_laptop_budget_255k/ | false | false | self | 7 | null |
Advantage of using superblocks for K-quants | 3 | I've been trying to figure out the advantage of using superblocks for K-quants.
I saw the comments on the other thread.
[https://www.reddit.com/r/LocalLLaMA/comments/1dved4c/llamacpp\_kquants/](https://www.reddit.com/r/LocalLLaMA/comments/1dved4c/llamacpp_kquants/)
I understand K-quants uses superblocks and th... | 2025-05-22T04:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kshy6c/advantage_of_using_superblocks_for_kquants/ | datashri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kshy6c | false | null | t3_1kshy6c | /r/LocalLLaMA/comments/1kshy6c/advantage_of_using_superblocks_for_kquants/ | false | false | self | 3 | null |
LM Studio in Git copilot on vs code | 1 | [removed] | 2025-05-22T04:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kshwih/lm_studio_in_git_copilot_on_vs_code/ | Lazy_Damage4931 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kshwih | false | null | t3_1kshwih | /r/LocalLLaMA/comments/1kshwih/lm_studio_in_git_copilot_on_vs_code/ | false | false | self | 1 | null |
In video intel talks a bit about battlematrix 192GB VRAM | 50 | With Intel Sr. Director of Discrete Graphics Qi Lin to learn more about a new breed of inference workstations codenamed Project Battlematrix and the Intel Arc Pro B60 GPUs that help them accelerate local AI workloads. The B60 brings 24GB of VRAM to accommodate larger AI models and supports multi-GPU inferencing with up... | 2025-05-22T03:37:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ksh780/in_video_intel_talks_a_bit_about_battlematrix/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksh780 | false | null | t3_1ksh780 | /r/LocalLLaMA/comments/1ksh780/in_video_intel_talks_a_bit_about_battlematrix/ | false | false | self | 50 | {'enabled': False, 'images': [{'id': '4qR7PrzF4eOukDtu6x5cYjftSRgtNNw3F0pjCCrfzrM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/FxCWahMe0Vwd2ycocJMrRD_zM8VRNqTe8p2zls1tbbs.jpg?width=108&crop=smart&auto=webp&s=e6e469b584f6b144a3d7031ca95d9bf26d719f7a', 'width': 108}, {'height': 162, 'url': 'h... |
Open-Sourced Multimodal Large Diffusion Language Models | 115 | MMaDA is a new family of **multimodal diffusion foundation models** designed to achieve superior performance across diverse domains such as textual reasoning, multimodal understanding, and text-to-image generation. MMaDA is distinguished by three key innovations:
1. MMaDA adopts a **unified diffusion architecture** wi... | 2025-05-22T02:18:45 | https://github.com/Gen-Verse/MMaDA | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ksfqc4 | false | null | t3_1ksfqc4 | /r/LocalLLaMA/comments/1ksfqc4/opensourced_multimodal_large_diffusion_language/ | false | false | 115 | {'enabled': False, 'images': [{'id': 'j0T5RpZxWFfJRkMAkAvrDti124e5dnxso0osuQyzSJQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?width=108&crop=smart&auto=webp&s=21ed3ee79f399d4d3ad6125bd8d7c79748982e51', 'width': 108}, {'height': 108, 'url': 'h... | |
Why has no one been talking about Open Hands so far? | 211 | So I just stumbled across Open Hands while checking out Mistral’s new Devstral model—and honestly, I was really impressed. The agent itself seems super capable, yet I feel like barely anyone is talking about it?
What’s weird is that OpenHands has 54k+ stars on GitHub. For comparison: Roo Code sits at ~14k, and Cline i... | 2025-05-22T02:16:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ksfos8/why_has_no_one_been_talking_about_open_hands_so/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksfos8 | false | null | t3_1ksfos8 | /r/LocalLLaMA/comments/1ksfos8/why_has_no_one_been_talking_about_open_hands_so/ | false | false | self | 211 | null |
Announcing: TiānshūBench 0.0! | 35 | Llama-sté, local llama-wranglers!
I'm happy to announce that I’ve started work on TiānshūBench (天书Bench), a novel benchmark for evaluating Large Language Models' ability to understand and generate code.
Its distinctive feature is a series of tests which challenge the LLM to solve programming problems in an obscure pr... | 2025-05-22T01:18:06 | JeepyTea | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksekcn | false | null | t3_1ksekcn | /r/LocalLLaMA/comments/1ksekcn/announcing_tiānshūbench_00/ | false | false | 35 | {'enabled': True, 'images': [{'id': 'sHS6t-rtCJ3EIEr57q-4RNgJnIczrkJqcSbPBOgEx5c', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/5ykvwmvqh82f1.png?width=108&crop=smart&auto=webp&s=cd9cdfe69f07c5df3a1e4674be46fb92d7f76f60', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/5ykvwmvqh82f1.png... | ||
Add voices to Kokoru TTS? | 5 |
Hello everyone
I'm not experienced in Python and coding, I have questions
I'm using Kokoru TTS
and I want to add voices to it
If I'm not wrong, kokoru uses .pt files as voice models,
Does anyone here know how to create .pt files? Which models can create these files
And would it be working if i create .pt file in K... | 2025-05-22T01:10:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kseex4/add_voices_to_kokoru_tts/ | No_Cartographer_2380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kseex4 | false | null | t3_1kseex4 | /r/LocalLLaMA/comments/1kseex4/add_voices_to_kokoru_tts/ | false | false | self | 5 | null |
N00b needing assistance with prompting/formatting in LM Studio. | 1 | [removed] | 2025-05-22T01:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ksealt/n00b_needing_assistance_with_promptingformatting/ | Blorfgor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksealt | false | null | t3_1ksealt | /r/LocalLLaMA/comments/1ksealt/n00b_needing_assistance_with_promptingformatting/ | false | false | self | 1 | null |
Any of the concurrent backends (vLLM, SGlang etc.) support model switching? | 8 | I need to run both a VLM and an LLM. I could use two GPUs/containers for this but that obviously doubles the cost. Any of big name backends like vLLM or SGlang support model switching or loading multiple models on the same GPU? What's the best way to go about this? Or is it simple a dream at the moment? | 2025-05-22T00:55:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kse4pe/any_of_the_concurrent_backends_vllm_sglang_etc/ | No-Break-7922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kse4pe | false | null | t3_1kse4pe | /r/LocalLLaMA/comments/1kse4pe/any_of_the_concurrent_backends_vllm_sglang_etc/ | false | false | self | 8 | null |
Harnessing the Universal Geometry of Embeddings | 62 | 2025-05-22T00:32:59 | https://arxiv.org/abs/2505.12540 | Recoil42 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1ksdox8 | false | null | t3_1ksdox8 | /r/LocalLLaMA/comments/1ksdox8/harnessing_the_universal_geometry_of_embeddings/ | false | false | default | 62 | null | |
4-bit quantized Moondream: 42% less memory with 99.4% accuracy | 148 | 2025-05-22T00:19:04 | https://moondream.ai/blog/smaller-faster-moondream-with-qat | radiiquark | moondream.ai | 1970-01-01T00:00:00 | 0 | {} | 1ksdeup | false | null | t3_1ksdeup | /r/LocalLLaMA/comments/1ksdeup/4bit_quantized_moondream_42_less_memory_with_994/ | false | false | default | 148 | null | |
Devstral Small from 2023 | 3 | knowledge cutoff is in 2023 and many things have changed in the development field since. very disappointing but you can fine-tune your own version | 2025-05-22T00:07:23 | Null_Execption | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksd6el | false | null | t3_1ksd6el | /r/LocalLLaMA/comments/1ksd6el/devstral_small_from_2023/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'P4Ut2BPslzTz9oX-BPMCld09xGxRUuaMzAb0q0DJcU8', 'resolutions': [{'height': 21, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?width=108&crop=smart&auto=webp&s=70c1544c8b232c3ce6305f05167f6d7c3ba4341e', 'width': 108}, {'height': 43, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?... |
Intel introduces AI Assistant Builder | 9 | 2025-05-22T00:03:02 | https://github.com/intel/intel-ai-assistant-builder | reps_up | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ksd3ba | false | null | t3_1ksd3ba | /r/LocalLLaMA/comments/1ksd3ba/intel_introduces_ai_assistant_builder/ | false | false | 9 | {'enabled': False, 'images': [{'id': '6EIJuaMtcCyCEql43kkf2EFeHWy1BNA3ZnPHNMBU_uI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PCDKZl8OEsOR2xW0cCFX0HbUlT9eSGSHwqdpNPtlRdE.jpg?width=108&crop=smart&auto=webp&s=409ba12fe26dc26e79f475ebe4b0625c8b83fd65', 'width': 108}, {'height': 108, 'url': 'h... | ||
Where is DeepSeek R2? | 0 | Seriously, what's going on with the Deepseek team? News outlets were confident R2 would be released in April. Some claimed early May. Google released 2 SOTA models after R2 (and Gemma-3 family). Alibaba released 2 families of models since then. Heck, even ClosedAI released o3 and o4.
What is the Deepseek team cooking?... | 2025-05-21T23:53:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kscwf0/where_is_deepseek_r2/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kscwf0 | false | null | t3_1kscwf0 | /r/LocalLLaMA/comments/1kscwf0/where_is_deepseek_r2/ | false | false | self | 0 | null |
I built an Open-Source AI Resume Tailoring App with LangChain & Ollama - Looking for feedback & my next CV/GenAI role! | 0 | I've been diving deep into the LLM world lately and wanted to share a project I've been tinkering with: an **AI-powered Resume Tailoring application**.
**The Gist:** You feed it your current resume and a job description, and it tries to tweak your resume's keywords to better align with what the job posting is looking ... | 2025-05-21T23:48:09 | https://v.redd.it/rimmf6n8282f1 | Solid_Woodpecker3635 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kscs5w | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rimmf6n8282f1/DASHPlaylist.mpd?a=1750463303%2CNDg2ZTA1YjY2ZGZmOGI3Njk4YmQ3Y2ZiMDIzYWNiOWJkMDZhN2EwMDc0ZmZmNGMwYTY5NDU3Zjg1MjM0NmQ0Zg%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/rimmf6n8282f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kscs5w | /r/LocalLLaMA/comments/1kscs5w/i_built_an_opensource_ai_resume_tailoring_app/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg.png?width=108&crop=smart&format=pjpg&auto=webp&s=609dabb647f900c86e1bd63336656ceb69de4... | |
Qwen3 is impressive but sometimes acts like it went through lobotomy. Have you experienced something similar? | 32 | I've tested Qwen3 32b at Q4\_s, Qwen3 30b-A3B Q5\_m and Qwen 14b Q6\_k a few days ago. The 14b was the fastest one for me since it didn't require loading into RAM (I have 16gb VRAM) (and yes the 30b one was 2-5t/s slower than 14b).
Qwen3 14b was very impressive at basic math, even when I ended up just bashing my keybo... | 2025-05-21T23:42:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kscnlo/qwen3_is_impressive_but_sometimes_acts_like_it/ | AltruisticList6000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kscnlo | false | null | t3_1kscnlo | /r/LocalLLaMA/comments/1kscnlo/qwen3_is_impressive_but_sometimes_acts_like_it/ | false | false | self | 32 | null |
Benchmarking FP8 vs GGUF:Q8 on RTX 5090 (Blackwell SM120) | 9 | Now that the first FP8 implementations for RTX Blackwell (SM120) are available in vLLM, I’ve benchmarked several models and frameworks under Windows 11 with WSL (Ubuntu 24.04):
* vLLM with [https://huggingface.co/RedHatAI/phi-4-FP8-dynamic](https://huggingface.co/RedHatAI/phi-4-FP8-dynamic) (FP8 compressed-tensors)
* ... | 2025-05-21T23:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kscn2n/benchmarking_fp8_vs_ggufq8_on_rtx_5090_blackwell/ | drulee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kscn2n | false | null | t3_1kscn2n | /r/LocalLLaMA/comments/1kscn2n/benchmarking_fp8_vs_ggufq8_on_rtx_5090_blackwell/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'M_gTmLtCfRqDWG1zQCFF07fs4TLqvtzbxGKM41ar9uw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NX0ZFcOGjNWeWZmvYhRiRB7Gy7xI5qS47yN1p8Z8lh0.jpg?width=108&crop=smart&auto=webp&s=9d2d00426a93a0f40beec99054056b16be635e71', 'width': 108}, {'height': 116, 'url': 'h... | |
AI Agents and assistants | 5 | I’ve been trying various AI agents and assistants.
I want:
- a coding assistant that can analyze code, propose/make changes, create commits maybe
- search the internet, save the info, find URLs, download git repos maybe
- examine my code on disk, tell me why it sucks, web search data on disk, and add to the memory... | 2025-05-21T23:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kscibt/ai_agents_and_assistants/ | johnfkngzoidberg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kscibt | false | null | t3_1kscibt | /r/LocalLLaMA/comments/1kscibt/ai_agents_and_assistants/ | false | false | self | 5 | null |
Blackwell 5000 vs DGX | 2 | I’m on an AM4 platform, and looking for guidance on the trade offs between the dgx spark vs the similarly priced Blackwell 5000. I would like to be able to run llms locally for my coding needs, a bit of invokeai fun, and in general explore all of the cool innovations in open source. Are the models that can fit into 4... | 2025-05-21T23:25:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kscb9d/blackwell_5000_vs_dgx/ | cpfowlke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kscb9d | false | null | t3_1kscb9d | /r/LocalLLaMA/comments/1kscb9d/blackwell_5000_vs_dgx/ | false | false | self | 2 | null |
I created a story generator that streams forever - all running locally on my desktop. | 1 | [removed] | 2025-05-21T22:53:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ksbmbu/i_created_a_story_generator_that_streams_forever/ | -Ants-In-Pants- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksbmbu | false | null | t3_1ksbmbu | /r/LocalLLaMA/comments/1ksbmbu/i_created_a_story_generator_that_streams_forever/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'S409SasqYcx_GUZ9RTqjucd0yhcAT7--IBgD3U7cLa8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?width=108&crop=smart&auto=webp&s=e05a5b7d772c13aaa998d37df68da0287dbf6cd0', 'width': 108}, {'height': 162, 'url': 'h... |
I prompted Google Veo 2 for a woman to wash her hands under a faucet of running water. I used a Flux reference image from a setup using Character Creator 4 rendering. This one is way better than the Hailuo Minimax I tried with the same reference image. | 0 | 2025-05-21T22:51:02 | https://v.redd.it/drkp3kngs72f1 | Extension-Fee-8480 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ksbkm8 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/drkp3kngs72f1/DASHPlaylist.mpd?a=1750459877%2CZDZlZDE0NjQ3YjQ0MzY2YzM5Y2M0MmYwMWE4YWFjODJlYTVlOWVhODk2OTQwNGEzZWE3YTA2MmQ5MWUxMWNkNg%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/drkp3kngs72f1/DASH_720.mp4?source=fallback', 'has... | t3_1ksbkm8 | /r/LocalLLaMA/comments/1ksbkm8/i_prompted_google_veo_2_for_a_woman_to_wash_her/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'd2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP.png?width=108&crop=smart&format=pjpg&auto=webp&s=8b782329be0ecdb4e2b498a87dbe46aace91a... | ||
Devstral vs DeepSeek vs Qwen3 | 46 | What are your expectations about it? The announcement is quite interesting. 🔥
Noticed that they put Gemma3 on the bottom of the chart, but it shows very well on daily basis. 🤔 | 2025-05-21T22:15:51 | https://mistral.ai/news/devstral | COBECT | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1ksat42 | false | null | t3_1ksat42 | /r/LocalLLaMA/comments/1ksat42/devstral_vs_deepseek_vs_qwen3/ | false | false | 46 | {'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'h... | |
Local TTS with actual multilingual support | 9 | Hey guys! I'm doing a local Home Assistant project that includes a fully local Voice Assistant, all in native Bulgarian. I'm using Whisper Turbo V3 for STT, Qwen3 for the LLM part, but I'm stuck at the TTS part. I'm looking for a good, Bulgarian-speaking, open-source TTS engine (preferably a modern one), but all of the... | 2025-05-21T22:14:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ksas7b/local_tts_with_actual_multilingual_support/ | oMGalLusrenmaestkaen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksas7b | false | null | t3_1ksas7b | /r/LocalLLaMA/comments/1ksas7b/local_tts_with_actual_multilingual_support/ | false | false | self | 9 | null |
Tools to perform data transformations using LLMs? | 1 | What tools do you use if you have some large amounts of data and performing transformations them is a huge task? With LLMs there's the issue of context length and high API cost. I've been building something in this space, but curious to know what other tools are there?
Any results in both unstructured and structured ... | 2025-05-21T21:57:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ksadu3/tools_to_perform_data_transformations_using_llms/ | metalvendetta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ksadu3 | false | null | t3_1ksadu3 | /r/LocalLLaMA/comments/1ksadu3/tools_to_perform_data_transformations_using_llms/ | false | false | self | 1 | null |