| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Vibe-code your own Static Site Generator (SSG) | 0 | Hi guys, recently I ran an experiment to vibe-code my own Static Site Generator (SSG) and the results were pretty good. I put together a blog post breaking down the whole process, plus I included an initial prompt so you can try it out yourself. Give it a shot and let me know how it goes! | 2025-06-01T19:27:01 | https://eug.github.io/posts/vibe-code-your-own-ssg.html | eugf_ | eug.github.io | 1970-01-01T00:00:00 | 0 | {} | 1l0xj42 | false | null | t3_1l0xj42 | /r/LocalLLaMA/comments/1l0xj42/vibecode_your_own_static_site_generator_ssg/ | false | false | default | 0 | null |
3x Modded 4090 48GB or RTX Pro 6000? | 13 | I can source them for about the same price. I've heard there is an efficiency hit on multi-card setups with those modded 4090s. But three cards have 144GB of VRAM vs the RTX Pro's 96GB. And power consumption is comparable. Which route should I choose? | 2025-06-01T19:05:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l0x0q8/3x_modded_4090_48gb_or_rtx_pro_6000/ | sNullp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0x0q8 | false | null | t3_1l0x0q8 | /r/LocalLLaMA/comments/1l0x0q8/3x_modded_4090_48gb_or_rtx_pro_6000/ | false | false | self | 13 | null |
My Local LLM plan for academic editing help | 0 | Purchase a 512 GB Mac Studio.
I have not chosen a model yet. I am not sure how large a model I will be able to fine tune, nor which model will be best.
Run MLX.
Fine tune the model on around 4 GB of previously edited files. I'm hoping Unsloth support comes soon, but I don't have high hopes. Hence the 512GB. Lot... | 2025-06-01T18:45:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l0wix3/my_local_llm_plan_for_academic_editing_help/ | LeopardOrLeaveHer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0wix3 | false | null | t3_1l0wix3 | /r/LocalLLaMA/comments/1l0wix3/my_local_llm_plan_for_academic_editing_help/ | false | false | self | 0 | null |
Are multiple M3 Ultras the move instead of one big one? | 7 | I am seriously considering investing in a sizable M3 Ultra Mac Studio. Looking through some of the benchmarks, it seems the M3 Ultras do well, but not as well in prompt processing speed. The comparisons from the 60-core to the 80-core seem to show a (surprisingly?) big boost from going up in GPU size. Given the low powe... | 2025-06-01T18:42:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l0wfln/is_multiple_m3_ultras_the_move_instead_of_1_big/ | AcceptableBridge7616 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0wfln | false | null | t3_1l0wfln | /r/LocalLLaMA/comments/1l0wfln/is_multiple_m3_ultras_the_move_instead_of_1_big/ | false | false | self | 7 | null |
WILL ANTHROPIC survive? | 0 | I am 100% sure that I am not the only one who feels like Anthropic might not make it to the point of AGI
Here's why I think that
- OpenAI is the most famous, and they just had a $500 billion investment (if I am not mistaken)
- Gemini is obviously powered by Google (that should be enough to tell how much potential Gemini ha... | 2025-06-01T18:23:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l0vz1p/will_anthropic_survive/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0vz1p | false | null | t3_1l0vz1p | /r/LocalLLaMA/comments/1l0vz1p/will_anthropic_survive/ | false | false | self | 0 | null |
Would a laptop iGPU + 64GB RAM be good for anything, speed-wise? | 12 | VRAM is a big limiting factor for a lot of bigger models on most consumer GPUs. So, I was wondering if my iGPU (Ryzen 5 5600H) would be capable of running some models locally using RAM?
Or do you think an M2 Mac machine with similar RAM would be significantly better? | 2025-06-01T18:20:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l0vwc1/would_a_laptop_igpu_64gb_ram_be_good_for_anything/ | ArsenicBismuth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0vwc1 | false | null | t3_1l0vwc1 | /r/LocalLLaMA/comments/1l0vwc1/would_a_laptop_igpu_64gb_ram_be_good_for_anything/ | false | false | self | 12 | null |
I'm trying to make llm use the docker vnc computer but it's not working | 1 | [removed] | 2025-06-01T18:02:59 | rodrigoandrigo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l0vh31 | false | null | t3_1l0vh31 | /r/LocalLLaMA/comments/1l0vh31/im_trying_to_make_llm_use_the_docker_vnc_computer/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1dcuALT7VjGZkzs391kXysPy1BOdyTb-0bvLnXiWLe8', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png?width=108&crop=smart&auto=webp&s=9b26ecfcdd79c4b05064f90fdfb3260085306620', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png... | ||
24GB MacMini users, can you offload up to 24GB models to the GPU? | 1 | [removed] | 2025-06-01T18:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l0vgvq/24gb_macmini_users_can_you_offload_up_to_24gb/ | electricgoat01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0vgvq | false | null | t3_1l0vgvq | /r/LocalLLaMA/comments/1l0vgvq/24gb_macmini_users_can_you_offload_up_to_24gb/ | false | false | self | 1 | null |
I made a simple tool to test/compare your local LLMs on AIME 2024 | 47 | I made [LocalAIME](https://github.com/Belluxx/LocalAIME), a simple tool that tests one or many LLMs locally or through an API (you can use any OpenAI-compatible API) on AIME 2024.
It is pretty useful for testing different quants of the same model or the same quant from different providers.
[Performance of some models i test... | 2025-06-01T17:54:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l0v8yq/i_made_a_simple_tool_to_testcompare_your_local/ | EntropyMagnets | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0v8yq | false | null | t3_1l0v8yq | /r/LocalLLaMA/comments/1l0v8yq/i_made_a_simple_tool_to_testcompare_your_local/ | false | false | 47 | {'enabled': False, 'images': [{'id': 'DsEhjmQ5Kl6ySNdivTOfWdAkiX0u-UrmwagwKWDzL4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-Bks8K2_TljN7hLY0DvxIu9Ncpa8BzunHNO4VODMSAA.jpg?width=108&crop=smart&auto=webp&s=00dd04b0e8977332e6e19735c2514f614e5d1c70', 'width': 108}, {'height': 108, 'url': 'h... | |
Baby Voice TTS? Kokoro or F5 or any good? I really want laughing and normal voices | 0 | Looking for a TTS that can create voices like a 4-8 year old baby or children.
Kokoro doesn't have such voices. | 2025-06-01T17:53:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l0v8wt/baby_voice_tts_kokoro_or_f5_or_any_good_i_really/ | jadhavsaurabh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0v8wt | false | null | t3_1l0v8wt | /r/LocalLLaMA/comments/1l0v8wt/baby_voice_tts_kokoro_or_f5_or_any_good_i_really/ | false | false | self | 0 | null |
ollama-multirun: A bash shell script to run a single prompt against all your locally installed ollama models, saving the output and performance statistics as easily navigable web pages. | 1 | [removed] | 2025-06-01T17:48:12 | https://www.reddit.com/gallery/1l0v3w0 | shared-media | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l0v3w0 | false | null | t3_1l0v3w0 | /r/LocalLLaMA/comments/1l0v3w0/ollamamultirun_a_bash_shell_script_to_run_a/ | false | false | 1 | null | |
Has anyone tried Lobe-Chat? | 1 | [removed] | 2025-06-01T17:18:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l0udh0/has_anyone_tried_lobechat/ | AlexM4H | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0udh0 | false | null | t3_1l0udh0 | /r/LocalLLaMA/comments/1l0udh0/has_anyone_tried_lobechat/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mpEFEY8JAlYVUVMYAueCra5ioNR_ClnoM09nfcnumOw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/plkbrec3Fo9yaXVRJFFWt9CZ99qAP6mS5V0-g1jwMks.jpg?width=108&crop=smart&auto=webp&s=8c6a0faec435ec7b16d1b3c2a454ef58231c9463', 'width': 108}, {'height': 108, 'url': 'h... |
I built a lightweight, private, MCP server to share context between AI tools | 1 | Hey guys, I have seen a few projects similar to mine lately, so I decided to open source mine ASAP.
I wanted to make a service that persists context and can recall it across any AI tools. I also want it to be a way to persist your digital life and semantic search it, all self hosted.
**One thing I saw lacking in a fe... | 2025-06-01T17:17:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l0uccd/i_built_a_lightweight_private_mcp_server_to_share/ | coding9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0uccd | false | null | t3_1l0uccd | /r/LocalLLaMA/comments/1l0uccd/i_built_a_lightweight_private_mcp_server_to_share/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wxX4QQJ-CAB3b-9UI5nsVGnGl38LHuQnlTGweQxBSuE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/17QAaG2sc_Jj-mAuaZFCL65lw4-LREHFyg_6011URI8.jpg?width=108&crop=smart&auto=webp&s=927c3cb8a52320ca0049f8b2efe5c27dd0205612', 'width': 108}, {'height': 108, 'url': 'h... |
DeepSeek-R1-0528-Distill-Devstral Needs to Happen! | 1 | Someone Should Make DeepSeek-R1-0528-Distill-Devstral, That Would Be Sick! | 2025-06-01T16:32:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l0t8tz/deepseekr10528distilldevstral_needs_to_happen/ | Libertumi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0t8tz | false | null | t3_1l0t8tz | /r/LocalLLaMA/comments/1l0t8tz/deepseekr10528distilldevstral_needs_to_happen/ | false | false | self | 1 | null |
DeepSeek-R1-0528-Distill-Devstral Needs to Happen! | 1 | DeepSeek-R1-0528-Distill-Devstral Needs to Happen! | 2025-06-01T16:31:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l0t86p/deepseekr10528distilldevstral_needs_to_happen/ | Libertumi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0t86p | false | null | t3_1l0t86p | /r/LocalLLaMA/comments/1l0t86p/deepseekr10528distilldevstral_needs_to_happen/ | false | false | self | 1 | null |
Qwenlong L1 long-context models | 0 | Wondering if anyone knows when we may get these to download?
https://venturebeat.com/ai/qwenlong-l1-solves-long-context-reasoning-challenge-that-stumps-current-llms/
| 2025-06-01T16:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l0t7sz/qwenlong_l1_longcontext_models/ | Willdudes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0t7sz | false | null | t3_1l0t7sz | /r/LocalLLaMA/comments/1l0t7sz/qwenlong_l1_longcontext_models/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'OD0UowLO7TYGjjjJRgA6lMym9726ap7GK-CiZaFcLL4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/P5U5i7XEhlSFrbDw6--sM_CkPbSnuyKhvI5ij6T5Lr0.jpg?width=108&crop=smart&auto=webp&s=1e4c75368a860bf5187b2c49f94257d087399c64', 'width': 108}, {'height': 121, 'url': 'h... |
Old dual-socket Xeon server with tons of RAM viable for LLM inference? | 22 | I was looking into maybe getting a used 2-socket LGA 3647 board and some Xeons with loads of RAM (256GB+). I don't need insane speeds, but it shouldn't take hours either.
It seems a lot more affordable per GB than Apple silicon and of course VRAM, but I feel like it might be too slow to really be viable or just plain ... | 2025-06-01T15:35:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l0rvqr/old_dual_socket_xeon_server_with_tons_of_ram/ | jojokingxp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0rvqr | false | null | t3_1l0rvqr | /r/LocalLLaMA/comments/1l0rvqr/old_dual_socket_xeon_server_with_tons_of_ram/ | false | false | self | 22 | null |
Sharing my tool for easy handwritten fine-tuning dataset creation: supports multiple formats, token counting & auto saving! | 1 | [removed] | 2025-06-01T15:16:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l0regm/sharing_my_tool_for_easy_handwritten_finetuning/ | abaris243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0regm | false | null | t3_1l0regm | /r/LocalLLaMA/comments/1l0regm/sharing_my_tool_for_easy_handwritten_finetuning/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UjweHFlBfjtq-qgJURLZe74ot5ARI6AHWtzN7VjFiRs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pRdGI5JF11YeJd2mj6iu585KhAnrYcxq8kgOMs8jPnc.jpg?width=108&crop=smart&auto=webp&s=5704e06d9310b4293e014267081165563e8bbeda', 'width': 108}, {'height': 162, 'url': 'h... |
Recommended setup for local LLMs | 7 | I'm currently running a PC with i7-8700k, 32GB of memory and Nvidia 4070 and it is clearly not fit for my needs (coding Typescript, Python and LLMs). However, I haven't found good resources on what should I upgrade next. My options at the moment are:
\- Mac Studio M3 Ultra 96GB unified memory (or with 256GB if I manag... | 2025-06-01T15:14:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l0rcin/recommended_setup_for_local_llms/ | pioni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0rcin | false | null | t3_1l0rcin | /r/LocalLLaMA/comments/1l0rcin/recommended_setup_for_local_llms/ | false | false | self | 7 | null |
Seeking Community Review: Documented Evidence of AI Identity Persistence Across Instances | 1 | [removed] | 2025-06-01T14:54:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l0qvsk/seeking_community_review_documented_evidence_of/ | PotentialCraft3781 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0qvsk | false | null | t3_1l0qvsk | /r/LocalLLaMA/comments/1l0qvsk/seeking_community_review_documented_evidence_of/ | false | false | self | 1 | null |
App-Use : Create virtual desktops for AI agents to focus on specific apps. | 53 | App-Use lets you scope agents to just the apps they need. Instead of full desktop access, say "only work with Safari and Notes" or "just control iPhone Mirroring" - visual isolation without new processes for perfectly focused automation.
Running computer-use on the entire desktop often causes agent hallucinations and ... | 2025-06-01T14:46:57 | https://v.redd.it/v0fcznj6wb4f1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l0qp75 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/v0fcznj6wb4f1/DASHPlaylist.mpd?a=1751381232%2CNTRlYmU0MTg2MzU0NTQ4MTY3OTVkMGE3NWI5MzFhNjc2NWYxYjU0NWI3YWUyYWZiNWUzYzUwMzdmODA4ZGEzMQ%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/v0fcznj6wb4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l0qp75 | /r/LocalLLaMA/comments/1l0qp75/appuse_create_virtual_desktops_for_ai_agents_to/ | false | false | 53 | {'enabled': False, 'images': [{'id': 'ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y.png?width=108&crop=smart&format=pjpg&auto=webp&s=a78df13b1398a355f7fbaeff03bda8b127ab7... | |
TTS support in llama.cpp? | 8 | I know I can do this (using `OuteTTS-0.2-500M`):
    llama-tts -m OuteTTS-0.2-500M-Q4_K_M.gguf -mv WavTokenizer-Large-75-F16.gguf -p "Hello"
... and get an `output.wav` audio file that I can play with any terminal audio player, like:
- aplay
- play (sox)
- paplay
- mpv
- ffplay
---
Does llama-tts support any... | 2025-06-01T14:30:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l0qbot/tts_support_in_llamacpp/ | Disonantemus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0qbot | false | null | t3_1l0qbot | /r/LocalLLaMA/comments/1l0qbot/tts_support_in_llamacpp/ | false | false | self | 8 | null |
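A minimal end-to-end sketch of the above, assuming llama.cpp's `llama-tts` binary is on the PATH and both GGUF files sit in the working directory (in current llama.cpp the vocoder model is passed with `-mv` and the synthesized audio lands in `output.wav`):

```
llama-tts -m OuteTTS-0.2-500M-Q4_K_M.gguf \
          -mv WavTokenizer-Large-75-F16.gguf \
          -p "Hello from llama.cpp" \
  && aplay output.wav   # swap aplay for play, paplay, mpv or ffplay
```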
Where can I share prompts I've written? | 0 | I've often written a roleplaying prompt for silliness and just to mess around, only to write the same one again months later. I don't typically like to keep them on my PC, because it's just not preferred to keep NSFW prompts there; idk, I just don't want to. Is there a place I can share them with others, like a library or something?... | 2025-06-01T14:23:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l0q5iy/where_can_i_share_prompts_ive_written/ | intimate_sniffer69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0q5iy | false | null | t3_1l0q5iy | /r/LocalLLaMA/comments/1l0q5iy/where_can_i_share_prompts_ive_written/ | false | false | nsfw | 0 | null |
DeepSeek-R1-0528-UD-Q6-K-XL on 10 Year Old Hardware | 225 | Don't expect anything useful in this post. I did it just to see if it was possible. This was on a 10+ year old system with a 6th-generation i5 and 12GB of RAM. My SSD is nearly full, so I had to mount an external 8TB USB drive to store the 560GB model. At least it is USB 3.
I made an 800GB swap file and enabled... | 2025-06-01T14:19:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l0q2zk/deepseekr10528udq6kxl_on_10_year_old_hardware/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0q2zk | false | null | t3_1l0q2zk | /r/LocalLLaMA/comments/1l0q2zk/deepseekr10528udq6kxl_on_10_year_old_hardware/ | false | false | self | 225 | null |
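For reference, a minimal sketch of the swap-file setup described above, assuming the USB drive is mounted at `/mnt/usb` on a filesystem that supports swap files such as ext4 (the path is an assumption; the 800GB size is from the post):

```
sudo fallocate -l 800G /mnt/usb/swapfile   # reserve the file
sudo chmod 600 /mnt/usb/swapfile           # swap files must not be world-readable
sudo mkswap /mnt/usb/swapfile              # format it as swap
sudo swapon /mnt/usb/swapfile              # enable it
```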
Has anyone had a play around with the new Google AI edge local models on Android? I tried one and it was not bad. | 2 | 2025-06-01T14:19:00 | https://github.com/google-ai-edge/gallery | mintybadgerme | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l0q2b5 | false | null | t3_1l0q2b5 | /r/LocalLLaMA/comments/1l0q2b5/has_anyone_had_a_play_around_with_the_new_google/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'tWqFFTtW1YjAoWlH44lH9wTxrW0TFs0PxgzHtrKYS6Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jVlNbJ79j9fiep4k95fdhipmGPj308uU_Xqc9jKZyRg.jpg?width=108&crop=smart&auto=webp&s=2e7234cf12e391aa62e475715d73244f9fe6b382', 'width': 108}, {'height': 108, 'url': 'h... | ||
Let's build a production level Small Language Model (SLM) from scratch | 3 hour workshop | 196 | 2025-06-01T13:34:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l0p3et/lets_build_a_production_level_small_language/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0p3et | false | {'oembed': {'author_name': 'Vizuara', 'author_url': 'https://www.youtube.com/@vizuara', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/pOFcwcwtv3k?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pictu... | t3_1l0p3et | /r/LocalLLaMA/comments/1l0p3et/lets_build_a_production_level_small_language/ | false | false | 196 | {'enabled': False, 'images': [{'id': 'oVEwtSXuv3g7GOlMqGljKa2WWMnZtnzafggxen7gFSg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3QugVQO6P_Q3v0881CbP7ispW7LV5z9hQhVFGV8ZV58.jpg?width=108&crop=smart&auto=webp&s=a2b034196ef61c6b003d6df44caff39ccd200871', 'width': 108}, {'height': 162, 'url': 'h... | ||
Experimenting with Autonomous AI Agents in Continuous Thinking Loops | 1 | [removed] | 2025-06-01T12:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l0nqjw/experimenting_with_autonomous_ai_agents_in/ | Wise-Increase1493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0nqjw | false | null | t3_1l0nqjw | /r/LocalLLaMA/comments/1l0nqjw/experimenting_with_autonomous_ai_agents_in/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8bc_BXyGH8x4MjD2x6QM1kNDEKz4aOikVHQjwVa9SiM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tmLoYYQYDCayAdZ_iZEFCVqzbwogZryKK9HkTrenoPg.jpg?width=108&crop=smart&auto=webp&s=5ea6cbd37c8a0ce7588dec4e7ba1645c95996115', 'width': 108}, {'height': 108, 'url': 'h... |
Which is the best uncensored model? | 216 | Wanted to learn ethical hacking. Tried dolphin-mistral-r1; it did answer, but its answers were bad.
Are there any good uncensored models? | 2025-06-01T11:55:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l0n5ta/which_is_the_best_uncensored_model/ | BoJackHorseMan53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0n5ta | false | null | t3_1l0n5ta | /r/LocalLLaMA/comments/1l0n5ta/which_is_the_best_uncensored_model/ | false | false | self | 216 | null |
Setting up an AI to help prepare for a high difficulty oral questions test | 1 | [removed] | 2025-06-01T11:50:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l0n2ee/setting_up_an_ai_to_help_prepare_for_a_high/ | FinancialMechanic853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0n2ee | false | null | t3_1l0n2ee | /r/LocalLLaMA/comments/1l0n2ee/setting_up_an_ai_to_help_prepare_for_a_high/ | false | false | self | 1 | null |
Introducing an open source cross-platform graphical interface LLM client | 32 |
Cherry Studio is a desktop client that supports multiple LLM providers, available on Windows, Mac and Linux. | 2025-06-01T11:26:40 | https://github.com/CherryHQ/cherry-studio | Fun-Doctor6855 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l0mo90 | false | null | t3_1l0mo90 | /r/LocalLLaMA/comments/1l0mo90/introducing_an_open_source_crossplatform/ | false | false | 32 | {'enabled': False, 'images': [{'id': 'He5VG53rTBjWbNk1_UdCjYukNuT1UhGRClb6ecDAOwM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=108&crop=smart&auto=webp&s=6c9b9a17a1cba0f4382bf80f06bb3715c6dc44e3', 'width': 108}, {'height': 108, 'url': 'h...
104k-Token Prompt in a 110k-Token Context with DeepSeek-R1-0528-UD-IQ1_S – Benchmark & Impressive Results | 132 | The Prompt: https://thireus.com/REDDIT/DeepSeek_Runescape_Massive_Prompt.txt (Firefox: View -> Repair Text Encoding)
The Command (on Windows):
```
perl -pe 's/\n/\\n/' DeepSeek_Runescape_Massive_Prompt.txt | CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0,2,1 ~/llama-b5355-bin-win-cuda12.4-x64/llama-cli -m DeepSe... | 2025-06-01T11:00:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l0m8r0/104ktoken_prompt_in_a_110ktoken_context_with/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0m8r0 | false | null | t3_1l0m8r0 | /r/LocalLLaMA/comments/1l0m8r0/104ktoken_prompt_in_a_110ktoken_context_with/ | false | false | self | 132 | null |
How to execute commands via an LLM, or how to switch back and forth between the LLM and tool/function calls? | 0 | How to execute commands via an LLM, or how to switch back and forth between the LLM and tool/function calls? (sorry if the question is not clear by itself)
I will try to describe my requirement.
I am developing my personal assistant. So, assuming I give a command to the LLM:
**q: "What is the time now?"**
llm answer: (internally: user asked time ... | 2025-06-01T10:50:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l0m2yd/how_to_execute_commands_by_llm_or_how_to_switch/ | InsideResolve4517 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0m2yd | false | null | t3_1l0m2yd | /r/LocalLLaMA/comments/1l0m2yd/how_to_execute_commands_by_llm_or_how_to_switch/ | false | false | self | 0 | null |
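A minimal sketch of one tool-call round trip for the scenario above, assuming a local OpenAI-compatible server on localhost:8080 (the endpoint, model name and `get_time` tool are all illustrative):

```
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local",
    "messages": [{"role": "user", "content": "What is the time now?"}],
    "tools": [{"type": "function", "function": {
      "name": "get_time",
      "description": "Return the current local time",
      "parameters": {"type": "object", "properties": {}}}}]
  }'
# If the reply contains a tool_calls entry naming get_time, run the real
# command (e.g. `date +%H:%M`) and send its output back as a
# {"role": "tool", ...} message so the model can phrase the final answer.
```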
AI Cost Optimisation | 1 | [removed] | 2025-06-01T10:26:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l0lpqp/ai_cost_optimisation/ | BenSimmons97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0lpqp | false | null | t3_1l0lpqp | /r/LocalLLaMA/comments/1l0lpqp/ai_cost_optimisation/ | false | false | self | 1 | null |
OpenAI to release open-source model this summer - everything we know so far | 0 | *TED2025 (April 11th 2025)*
[https://youtu.be/5MWT\_doo68k?t=473](https://youtu.be/5MWT_doo68k?t=473)
**Question:** How much were you shaken up by the arrival of DeepSeek?
**Sam Altman's response:** I think open-source has an important place. We actually last night hosted our first community session to decide the... | 2025-06-01T09:40:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l0l1fx/openai_to_release_opensource_model_this_summer/ | iamn0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0l1fx | false | null | t3_1l0l1fx | /r/LocalLLaMA/comments/1l0l1fx/openai_to_release_opensource_model_this_summer/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'yXu4Pd0Sn1hhLraTl-3eER1ALKtDG3yKNhzu6uZ9KeA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qvR28r6IP4l8wfyLQs3YuRfIWWXs55MxYkodQ4PcgC4.jpg?width=108&crop=smart&auto=webp&s=277265b3edae6a60a3425eda3538e71ebd27e68f', 'width': 108}, {'height': 162, 'url': 'h... |
Has anyone successfully built an LLM system that works well on a large codebase? | 1 | [removed] | 2025-06-01T09:11:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l0kmg3/has_anyone_successfully_built_an_llm_system_that/ | shijoi87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0kmg3 | false | null | t3_1l0kmg3 | /r/LocalLLaMA/comments/1l0kmg3/has_anyone_successfully_built_an_llm_system_that/ | false | false | self | 1 | null |
How many parameters does R1 0528 have? | 28 | I found conflicting info online, some articles say it's 685b and some say 671b, which is correct? huggingface also shows 685b (look at the attached screenshot) BUT it shows that even for the old one, which I know for sure was 671b. anyone know which is correct?
| 2025-06-01T07:44:01 | https://www.reddit.com/gallery/1l0jcoa | Sudden-Albatross-733 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l0jcoa | false | null | t3_1l0jcoa | /r/LocalLLaMA/comments/1l0jcoa/how_many_parameters_does_r1_0528_have/ | false | false | 28 | null | |
Prebuilt PC vs DIY 5090 | 8 | Thanks to Micro Center Santa Clara, I got lucky and bought an HP OMEN 45L prebuilt: Ultra 9 285K, RTX 5090 (OEM), 64GB DDR5, 2TB SSD, 360mm liquid cooling.
As well as a 5090 Founders Edition.
Background:
• Have some prev ML/DL knowledge and exposure, but haven’t been hands-on in a while
• Looking to get back into de... | 2025-06-01T07:38:29 | https://www.microcenter.com/product/693699/hp-omen-45l-gt22-3090-gaming-pc | henrygatech | microcenter.com | 1970-01-01T00:00:00 | 0 | {} | 1l0j9r8 | false | null | t3_1l0j9r8 | /r/LocalLLaMA/comments/1l0j9r8/prebuilt_pc_vs_diy_5090/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'VnA5kMfbPelLVLz2MQgbYUR1h0T11uGgzde947fDaOM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tpiY2KNJA8Hoz09nmlhVjJx1UKfVzlzUwEhk5dBrcDM.jpg?width=108&crop=smart&auto=webp&s=1e055f890da2b034199db051097096c4421f040a', 'width': 108}], 'source': {'height': 20... | |
Which model is suitable for e-mail classification / labeling? | 7 | I'm looking to automatically add labels to my e-mails like `spam`, `scam`, `cold-email`, `marketing`, `resume`, `proposal`, `meeting-request`, etc. to see how effective it is at keeping my mailbox organized. I need it to be self-hostable and I don't mind if it is slow.
What is a suitable model for this? | 2025-06-01T06:44:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l0ig7q/which_model_is_suitable_for_email_classification/ | surveypoodle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0ig7q | false | null | t3_1l0ig7q | /r/LocalLLaMA/comments/1l0ig7q/which_model_is_suitable_for_email_classification/ | false | false | self | 7 | null |
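Whichever model gets picked, a minimal zero-shot labelling sketch looks like this, assuming a local Ollama instance (it exposes an OpenAI-compatible endpoint on port 11434; the model name is only an example):

```
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5:7b",
    "temperature": 0,
    "messages": [{"role": "user", "content": "Classify this e-mail with exactly one label from [spam, scam, cold-email, marketing, resume, proposal, meeting-request, other].\n\nE-mail:\nAct now! You have won a prize, click here...\n\nLabel:"}]
  }'
# Parse the one-word reply and apply it via your mail client's API.
```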
DeepSeek R1 matches Gemini 2.5? What GPU do you use? | 2 | Can anyone confirm, based on vibes, if the benchmarks are true?
What GPU do you use for the new R1?
I mean, if I can get something close to Gemini 2.5 Pro locally, then this changes everything. | 2025-06-01T05:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l0hpha/deepseek_r1_matches_gemini_25_what_gpu_do_you_use/ | Just_Lingonberry_352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0hpha | false | null | t3_1l0hpha | /r/LocalLLaMA/comments/1l0hpha/deepseek_r1_matches_gemini_25_what_gpu_do_you_use/ | false | false | self | 2 | null |
local LLM researcher/scientist which uses local documents | 1 | [removed] | 2025-06-01T05:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l0hi08/local_llm_researcherscientist_which_uses_local/ | tomkod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0hi08 | false | null | t3_1l0hi08 | /r/LocalLLaMA/comments/1l0hi08/local_llm_researcherscientist_which_uses_local/ | false | false | self | 1 | null |
AI researcher/scientist which uses local documents | 1 | [removed] | 2025-06-01T05:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l0hdjf/ai_researcherscientist_which_uses_local_documents/ | tomkod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0hdjf | false | null | t3_1l0hdjf | /r/LocalLLaMA/comments/1l0hdjf/ai_researcherscientist_which_uses_local_documents/ | false | false | self | 1 | null |
AI Researcher/Scientist which uses local documents | 1 | [removed] | 2025-06-01T05:29:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l0had8/ai_researcherscientist_which_uses_local_documents/ | tomkod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0had8 | false | null | t3_1l0had8 | /r/LocalLLaMA/comments/1l0had8/ai_researcherscientist_which_uses_local_documents/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'tCewK1g7--AHxgYCPys7oNGWJ3BJpvMUo_OYs8I-Jnc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjZZmFJVl41Y5gnqqpivMWbZTd3y4eqn-M5Ig_wqcfo.jpg?width=108&crop=smart&auto=webp&s=6387da88ba86162b5fb6c012694f60a2cbab7a91', 'width': 108}, {'height': 108, 'url': 'h... |
Mother of Likely Murdered OpenAI Whistleblower Reveals All, Calls for Investigation of Sam Altman | 7 | 2025-06-01T05:11:57 | https://www.youtube.com/watch?v=Kev_-HyuI9Y | Warm_Iron_273 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1l0gzwk | false | {'oembed': {'author_name': 'Tucker Carlson', 'author_url': 'https://www.youtube.com/@TuckerCarlson', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Kev_-HyuI9Y?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyr... | t3_1l0gzwk | /r/LocalLLaMA/comments/1l0gzwk/mother_of_likely_murdered_openai_whistleblower/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'n84akkk6SaJYgH6SehKBmvvooHbYzvsDfA9Hy-mZFuk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ULIJqtHvtm8DuAgwq07tSjSEx6PNAi5QSrX9Y4tt7vA.jpg?width=108&crop=smart&auto=webp&s=cbbc29998b061546191f29883e3f8c544964c1a4', 'width': 108}, {'height': 162, 'url': 'h... | ||
What's the best setup/llm for writing fast code? | 7 | I am interested how automated the process of writing the fastest code possible can be. Say I want code to multiply two 1000 by 1000 matrices as quickly as possible for example. Ideally the setup would produce code, time it on my machine, modify the code and repeat. | 2025-06-01T05:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l0guyk/whats_the_best_setupllm_for_writing_fast_code/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0guyk | false | null | t3_1l0guyk | /r/LocalLLaMA/comments/1l0guyk/whats_the_best_setupllm_for_writing_fast_code/ | false | false | self | 7 | null |
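The loop described can be scripted very simply; a minimal sketch, assuming the model's latest candidate is saved to `candidate.c` (filenames and compiler flags are illustrative):

```
gcc -O3 -march=native candidate.c -o candidate   # build the model's latest attempt
time ./candidate                                 # measure the 1000x1000 matmul
# paste the timing back into the next prompt, ask for a faster version, repeat
```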
Help : GPU not being used? | 1 | Ok, so I'm new to this. Apologies if this is a dumb question.
I have an RTX 3070 (8GB VRAM), 32GB RAM, a Ryzen 5 5600GT (integrated graphics), and Windows 11.
I downloaded Ollama and then downloaded a coder variant of Qwen3 4B (`ollama run mychen76/qwen3_cline_roocode:4b`). I ran it, and it runs 100% on my CPU (checked with `ol... | 2025-06-01T04:21:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l0g5ob/help_gpu_not_being_used/ | pyroblazer68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0g5ob | false | null | t3_1l0g5ob | /r/LocalLLaMA/comments/1l0g5ob/help_gpu_not_being_used/ | false | false | self | 1 | null |
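Two quick checks that usually settle this, assuming the NVIDIA driver and Ollama are installed (both work in a Windows 11 terminal as well):

```
nvidia-smi    # the RTX 3070 should be listed; watch its VRAM fill while generating
ollama ps     # the PROCESSOR column shows the GPU/CPU split for the loaded model
```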
DeepSeek R1 0528 + MCP → one model, 10 K+ tools (demo & walkthrough) | 1 | Hey folks,
I’ve been experimenting with the new R1-0528 drop and thought some of you might like a peek at how it behaves once it’s wired to MCP (Model Context Protocol).
# TL;DR
* **Why bother?** R1-0528 is sitting at #4 on the leaderboard, but costs \~18× less than the usual suspects.
* **MCP = universal adapter.*... | 2025-06-01T03:54:59 | https://v.redd.it/k4s7a6mjn84f1 | ComposerGen | /r/LocalLLaMA/comments/1l0fooz/deepseek_r1_0528_mcp_one_model_10_k_tools_demo/ | 1970-01-01T00:00:00 | 0 | {} | 1l0fooz | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/k4s7a6mjn84f1/DASHPlaylist.mpd?a=1751471706%2CNjFjNDQ2NzQ1YmNkMmU5OTFiMmM3NzUwODAwZWFjMjk4NTJiNDM4YWY1NThlODRlMzM5NzRiMmVhN2M0YTc2Yw%3D%3D&v=1&f=sd', 'duration': 126, 'fallback_url': 'https://v.redd.it/k4s7a6mjn84f1/DASH_1080.mp4?source=fallback', '... | t3_1l0fooz | /r/LocalLLaMA/comments/1l0fooz/deepseek_r1_0528_mcp_one_model_10_k_tools_demo/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE.png?width=108&crop=smart&format=pjpg&auto=webp&s=b1d581812c3ec32a1293b6c4c5187c33f2e3c... | |
I Made a Tool to Convert PDF to MP3 Using Kokoro | 1 | [removed] | 2025-06-01T03:37:29 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l0fdog | false | null | t3_1l0fdog | /r/LocalLLaMA/comments/1l0fdog/i_made_a_tool_to_convert_pdf_to_mp3_using_kokoro/ | false | false | default | 1 | null | ||
PDF2MP3: OSS Web App to Convert PDF to MP3 Using Kokoro TTS | 1 | [removed] | 2025-06-01T03:34:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l0fbv8/pdf2mp3_oss_web_app_to_convert_pdf_to_mp3_using/ | ProHackerEvan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0fbv8 | false | null | t3_1l0fbv8 | /r/LocalLLaMA/comments/1l0fbv8/pdf2mp3_oss_web_app_to_convert_pdf_to_mp3_using/ | false | false | self | 1 | null |
PDF2MP3: OSS Web App to Convert PDF to MP3 Using Kokoro TTS | 1 | [removed] | 2025-06-01T03:33:11 | https://www.reddit.com/r/LocalLLaMA/comments/1l0fav9/pdf2mp3_oss_web_app_to_convert_pdf_to_mp3_using/ | ProHackerEvan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0fav9 | false | null | t3_1l0fav9 | /r/LocalLLaMA/comments/1l0fav9/pdf2mp3_oss_web_app_to_convert_pdf_to_mp3_using/ | false | false | self | 1 | null |
Write a MCP server to generate template code based on a finite state machine (FSM) in which an LLM writes code for? | 1 | [removed] | 2025-06-01T03:17:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l0f0ai/write_a_mcp_server_to_generate_template_code/ | top_ness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0f0ai | false | null | t3_1l0f0ai | /r/LocalLLaMA/comments/1l0f0ai/write_a_mcp_server_to_generate_template_code/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IC8cA5rNg1SSZZOPY4GpmWKZtdgjLtel9UP9WHQKmDQ', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/CznTWmQwy0-oqft5zgZlbO9ioIkpO5wkiyTbZgSwAqo.jpg?width=108&crop=smart&auto=webp&s=4cf7bd45944d5f868a47607c20a439cb31abf124', 'width': 108}, {'height': 86, 'url': 'ht... |
Creating a .gitignore breaks Qwen 3's brain. | 1 | [removed] | 2025-06-01T02:56:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l0emfa/creating_a_gitignore_breaks_qwen_3s_brain/ | Typical-Act-8371 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0emfa | false | null | t3_1l0emfa | /r/LocalLLaMA/comments/1l0emfa/creating_a_gitignore_breaks_qwen_3s_brain/ | false | false | self | 1 | null |
I'm tired of Windows' awful memory management. How is the performance of LLM and AI tasks on Ubuntu? Windows takes 8+ gigs of RAM idle, and that's after debloating. | 11 | Windows isn't horrible for AI, but god, it's so resource-inefficient. For example, if I train a WAN 1.3B LoRA it will take 50+ gigs of RAM, unless I do something like launch Doom: The Dark Ages and play on my other GPU; then WSL RAM usage drops and stays at 30 gigs. Why? No clue; Windows is the worst at memory management. When ... | 2025-06-01T02:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l0e4jl/im_tired_of_windows_awful_memory_management_how/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0e4jl | false | null | t3_1l0e4jl | /r/LocalLLaMA/comments/1l0e4jl/im_tired_of_windows_awful_memory_management_how/ | false | false | self | 11 | null |
DIY CAI with Llama 4 | 1 | [removed] | 2025-06-01T02:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l0e4ix/diy_cai_with_llama_4/ | ZackFlashhhh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0e4ix | false | null | t3_1l0e4ix | /r/LocalLLaMA/comments/1l0e4ix/diy_cai_with_llama_4/ | false | false | self | 1 | null |
[OC] Built an AI SQL Agent with dual model support (Local Ollama + OpenAI API) | 1 | [removed] | 2025-06-01T01:57:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l0djk3/oc_built_an_ai_sql_agent_with_dual_model_support/ | loglux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0djk3 | false | null | t3_1l0djk3 | /r/LocalLLaMA/comments/1l0djk3/oc_built_an_ai_sql_agent_with_dual_model_support/ | false | false | self | 1 | null |
The largest change I've noticed in Deepseek-R1-0528 is more censorship | 23 | Running both models locally (R1 and R1-0528, both full 671B, FP8, on 8x Nvidia H200)
This model is locked down WAY more than the last one.
This is just an obvious example. The first image is 0528; the remaining ones are one response from the original R1.
Thoughts everyone? I think this needs some discussion outside of the communi... | 2025-06-01T01:30:00 | https://www.reddit.com/gallery/1l0d1mc | SashaUsesReddit | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l0d1mc | false | null | t3_1l0d1mc | /r/LocalLLaMA/comments/1l0d1mc/the_largest_change_ive_noticed_in_deepseekr10528/ | false | false | 23 | null | |
Is there an alternative to LM Studio with first class support for MLX models? | 27 | I've been using LM Studio for the last few months on my Macs due to its first-class support for MLX models (they implemented a very nice [MLX engine](https://github.com/lmstudio-ai/mlx-engine) which supports adjusting context length, etc.).
While it works great, there are a few issues with it:
\- it doesn't work behin... | 2025-06-01T01:17:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l0ct34/is_there_an_alternative_to_lm_studio_with_first/ | ksoops | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0ct34 | false | null | t3_1l0ct34 | /r/LocalLLaMA/comments/1l0ct34/is_there_an_alternative_to_lm_studio_with_first/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'mNQ4j6dpl2JHjgdFR60OtY1qnXkZg4DDE0PneDdL0mY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-7YYC9MZYtr3o4YPVwJdwdryTGVCbxGl8hjFSmDTQVc.jpg?width=108&crop=smart&auto=webp&s=9425bd87b5a0bc0aa972444b4808fbae085b6d81', 'width': 108}, {'height': 108, 'url': 'h... |
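One lighter-weight option worth noting here: the `mlx-lm` package ships its own OpenAI-compatible server, so any chat front end can sit on top of it. A minimal sketch (the model repo is just an example):

```
pip install mlx-lm
mlx_lm.server --model mlx-community/Meta-Llama-3.1-8B-Instruct-4bit --port 8080
# then point any OpenAI-compatible client at http://localhost:8080/v1
```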
AMD RX 9080 XT ES engineering sample, up to 32 GB of VRAM. | 58 | 2025-06-01T00:57:59 | https://www.notebookcheck.net/AMD-RX-9080-XT-ES-engineering-sample-could-rival-RTX-5080-Super.1027707.0.html | fallingdowndizzyvr | notebookcheck.net | 1970-01-01T00:00:00 | 0 | {} | 1l0cg8b | false | null | t3_1l0cg8b | /r/LocalLLaMA/comments/1l0cg8b/amd_rx_9080_xt_es_engineering_sample_up_to_32_gb/ | false | false | default | 58 | null | |
Built an API for creating custom lightweight text classification models. Feedback appreciated | 1 | [removed] | 2025-06-01T00:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l0bl70/built_an_api_for_creating_custom_lightweight_text/ | LineAlternative5694 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0bl70 | false | null | t3_1l0bl70 | /r/LocalLLaMA/comments/1l0bl70/built_an_api_for_creating_custom_lightweight_text/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NQpxjfjKIYyl5eJv8XnmPfcsU-K8wiSJyWnR6IVp7Tc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pKjne25gxV3LV4JFpKC_4IIoG0wz6gw_IJ2AUKwL6O4.jpg?width=108&crop=smart&auto=webp&s=91cd9b8b7a69f60b2746f7f65e7b6e72534c7b11', 'width': 108}], 'source': {'height': 17... |
OpenWebUI vs LibreChat? | 46 | Hi,
These are the two most popular Chat UI tools for LLMs. Have you tried them?
Which one do you think is better? | 2025-05-31T23:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l0bc5j/openwebui_vs_librechat/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0bc5j | false | null | t3_1l0bc5j | /r/LocalLLaMA/comments/1l0bc5j/openwebui_vs_librechat/ | false | false | self | 46 | null |
I scraped 200k Dev jobs directly from corporate websites | 1 | [removed] | 2025-05-31T23:34:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l0atq9/i_scraped_200k_dev_jobs_directly_from_corporate/ | Elieroos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0atq9 | false | null | t3_1l0atq9 | /r/LocalLLaMA/comments/1l0atq9/i_scraped_200k_dev_jobs_directly_from_corporate/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'h... |
Using reasoning models makes me feel sad | 1 | [removed] | 2025-05-31T23:30:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l0aqiy/using_reasoning_models_makes_me_feel_sad/ | OrvaldMaxwell666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0aqiy | false | null | t3_1l0aqiy | /r/LocalLLaMA/comments/1l0aqiy/using_reasoning_models_makes_me_feel_sad/ | false | false | self | 1 | null |
Best LLM for helping write a high fantasy book? | 3 | Hi, I am writing a book, and I would like some assistance from a language model, mainly because English is not my first language, and even though I am quite fluent in it, I know for a fact there are grammar rules and things I am not aware of. So I need a model that I can feed my book chapter by chapter, and it can cor... | 2025-05-31T23:25:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l0amua/best_llm_for_helping_writing_a_high_fantasy_book/ | Leonblackdeath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0amua | false | null | t3_1l0amua | /r/LocalLLaMA/comments/1l0amua/best_llm_for_helping_writing_a_high_fantasy_book/ | false | false | self | 3 | null |
DIY CAI | 1 | [removed] | 2025-05-31T23:24:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l0allt/diy_cai/ | ZackFlashhhh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0allt | false | null | t3_1l0allt | /r/LocalLLaMA/comments/1l0allt/diy_cai/ | false | false | self | 1 | null |
What local LLM and IDE have documentation indexing like Cursor's @Docs? | 5 | Cursor will read and index code documentation, but it doesn't work with local LLMs, not even via the ngrok method recently, it seems (i.e. spoofing a local LLM with an OpenAI-compatible API and using ngrok to tunnel localhost to a remote URL). VS Code doesn't have it, nor Windsurf, it seems. I see only Continue.dev has the ... | 2025-05-31T22:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l09u5c/what_local_llm_and_ide_have_documentation/ | zxyzyxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l09u5c | false | null | t3_1l09u5c | /r/LocalLLaMA/comments/1l09u5c/what_local_llm_and_ide_have_documentation/ | false | false | self | 5 | null |
Created an AI chat app. Long chat responses are getting cut off. It's using Llama (via Groq cloud). Anyone know how to stop it cutting out mid-sentence? I've set the prompt to only respond using a couple of sentences and within 30 words. Also a token limit. Also extended the limit to try to make it finish, but no joy. | 0 | Thanks to anyone who has a solution. | 2025-05-31T22:46:11 | https://www.reddit.com/r/LocalLLaMA/comments/1l09s7m/created_an_ai_chat_app_long_chat_responses_are/ | OkPaper8003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l09s7m | false | null | t3_1l09s7m | /r/LocalLLaMA/comments/1l09s7m/created_an_ai_chat_app_long_chat_responses_are/ | false | false | self | 0 | null |
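A diagnostic that usually resolves this: check `finish_reason` on the response. If it is `length`, the reply was cut by `max_tokens`, not by the prompt instructions. A minimal sketch against Groq's OpenAI-compatible endpoint (the model name is an example; `jq` is assumed installed):

```
curl -s https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama-3.1-8b-instant", "max_tokens": 120,
       "messages": [{"role": "user", "content": "Explain tokenizers briefly."}]}' \
  | jq '.choices[0].finish_reason'
# "length" -> raise max_tokens or send a follow-up "Continue." turn;
# "stop"   -> the model ended the sentence on its own.
```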
What are the top creative writing models ? | 12 | Hello everyone I wanted to know what are the top models that are good at creative writing. I'm looking for ones I can run on my card. I've got a 4070. It has 12GB of Vram. I've got 64GB of normal ram. | 2025-05-31T22:33:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l09i8f/what_are_the_top_creative_writing_models/ | TheArchivist314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l09i8f | false | null | t3_1l09i8f | /r/LocalLLaMA/comments/1l09i8f/what_are_the_top_creative_writing_models/ | false | false | self | 12 | null |
So... we tried to create an image-based world exploration game. | 1 | [removed] | 2025-05-31T22:25:52 | https://www.reddit.com/gallery/1l09cjo | Fickle-Bake-7557 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l09cjo | false | null | t3_1l09cjo | /r/LocalLLaMA/comments/1l09cjo/so_we_tried_to_create_an_imagebased_world/ | false | false | 1 | null | |
Some newb assistant/agent questions. | 2 | I've been learning LLMs, and for most things it's easier to define a project to accomplish, then learn as you go, so I'm working on creating a generic AI agent/assistant that can do some (I thought) simple automation tasks.
Really I just want something that can
\- search the web, aggregate data and summarize.
... | 2025-05-31T21:57:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l08qc3/some_newb_assistantagent_questions/ | johnfkngzoidberg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l08qc3 | false | null | t3_1l08qc3 | /r/LocalLLaMA/comments/1l08qc3/some_newb_assistantagent_questions/ | false | false | self | 2 | null |
Enigma | 1 | [removed] | 2025-05-31T21:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l08nun/enigma/ | FitCar5539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l08nun | false | null | t3_1l08nun | /r/LocalLLaMA/comments/1l08nun/enigma/ | false | false | self | 1 | null |
Google quietly released an app that lets you download and run AI models locally | TechCrunch | 0 | 2025-05-31T21:44:17 | https://techcrunch.com/2025/05/31/google-quietly-released-an-app-that-lets-you-download-and-run-ai-models-locally/ | chillinewman | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1l08fr8 | false | null | t3_1l08fr8 | /r/LocalLLaMA/comments/1l08fr8/google_quietly_released_an_app_that_lets_you/ | false | false | 0 | {'enabled': False, 'images': [{'id': '_FBQRawtsVnlTLgg9jFSaAELbacVusil3H8bxH8zdWA', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=108&crop=smart&auto=webp&s=9ddd21dcf8ac59bd61fe2319db5ff3b12f11fcdf', 'width': 108}, {'height': 144, 'url': 'h... | ||
The Quest for 100k - LLAMA.CPP Setting for a Noobie | 5 | SO there was a post about eeking 100k context out of gemma3 27b on a 3090 and I really wanted to try it... but never setup llama.cpp before and being a glutton for punishment decided I wanted a GUI too in the form of open-webui. I think I got most of it working with an assortment of help from various AI's but the post ... | 2025-05-31T21:29:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l0843f/the_quest_for_100k_llamacpp_setting_for_a_noobie/ | LostHisDog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l0843f | false | null | t3_1l0843f | /r/LocalLLaMA/comments/1l0843f/the_quest_for_100k_llamacpp_setting_for_a_noobie/ | false | false | self | 5 | null |
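For anyone attempting the same, a minimal `llama-server` sketch for long-context Gemma 3 on a 24GB card; the flags exist in current llama.cpp, but the exact values and the quant chosen here are assumptions to tune:

```
llama-server -m gemma-3-27b-it-Q4_K_M.gguf \
  -c 100000 -ngl 99 -fa \
  -ctk q8_0 -ctv q8_0 \
  --port 8080
# open-webui can then be pointed at http://localhost:8080/v1 as an
# OpenAI-compatible connection.
```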
What is the current best image-to-video model with the least content restrictions and guardrails? | 0 | Recently I came across a few Instagram pages with borderline content. They have AI-generated videos of women in bikinis/lingerie.
I know there are some jailbreaking prompts for commercial video generators like Sora, Veo and others, but those generate videos of new women's faces.
What models could they be using to convert an... | 2025-05-31T21:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l07qj1/what_is_the_current_best_image_to_video_model/ | Im_banned_everywhere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l07qj1 | false | null | t3_1l07qj1 | /r/LocalLLaMA/comments/1l07qj1/what_is_the_current_best_image_to_video_model/ | false | false | self | 0 | null |
Do you agree with this assessment? (7B vs 24B) | 0 | Me:
Say I'm using a 24B model for role-play. Can you give me a short example of how the 7B version would differ in quality? Hardware isn't the topic of this scenario.
Gemini 2.5 Pro (preview):
Okay, let's imagine a role-play scenario. Assume hardware is not a constraint, and we're just looking at the potential d... | 2025-05-31T21:06:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l07llz/do_you_agree_with_this_assessment_7b_vs_24b/ | santovalentino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l07llz | false | null | t3_1l07llz | /r/LocalLLaMA/comments/1l07llz/do_you_agree_with_this_assessment_7b_vs_24b/ | false | false | self | 0 | null |
LLM-agnostic receipt & invoice generator | 4 | This is a super helpful update — especially for devs building tools on top of LLMs.
If you’re working with document AI, you might find this useful: I open-sourced a tool that generates synthetic receipts and invoices using prompts and any LLM backend (OpenAI, open-source models, etc). It’s great for testing extraction... | 2025-05-31T20:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l06klt/llmagnostic_receipt_invoice_generator/ | Sharp-Past-8473 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l06klt | false | null | t3_1l06klt | /r/LocalLLaMA/comments/1l06klt/llmagnostic_receipt_invoice_generator/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'xulo0UX4sQN4WA4NhGTUMyBtZSn1OUrRiBGqt06-aCA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5cQ9qUpGKduQWrKUU9S1mmmau8QZ5_fjnc0osy94Ogg.jpg?width=108&crop=smart&auto=webp&s=ea741d619d2515a8e32bb0bfcee552df9ab6fa3e', 'width': 108}, {'height': 108, 'url': 'h... |
Deepseek R1 spirals off for me, how are y'all getting coherent replies? | 0 | * ollama version is 0.9.0
* deepseek-r1:latest 6995872bfe4c 5.2 GB
* Ubuntu server
Prompt: `show off your coding power by making a 1 page html/JS/WebGPU demo that is a full screen physics simulation`
And... wow. It is still going.
* I'm not sure if this is a good idea. Let me start over.,
* It seems I'm not g... | 2025-05-31T20:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l06i1a/deepseek_r1_spirals_off_for_me_how_are_yall/ | firesalamander | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l06i1a | false | null | t3_1l06i1a | /r/LocalLLaMA/comments/1l06i1a/deepseek_r1_spirals_off_for_me_how_are_yall/ | false | false | self | 0 | null |
The SRE’s Guide to High Availability Open WebUI Deployment Architecture | 11 | Based on my real world experiences running Open WebUI for thousands of concurrent users, this guide covers the best practices for deploying stateless Open WebUI containers (Kubernetes Pods, Swarm services, ECS etc), Redis and external embeddings, vector databases and put all that behind a load balancer that understands... | 2025-05-31T20:15:14 | https://taylorwilsdon.medium.com/the-sres-guide-to-high-availability-open-webui-deployment-architecture-2ee42654eced | taylorwilsdon | taylorwilsdon.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1l06g4l | false | null | t3_1l06g4l | /r/LocalLLaMA/comments/1l06g4l/the_sres_guide_to_high_availability_open_webui/ | false | false | 11 | {'enabled': False, 'images': [{'id': '-nPuH1hGczKfqGyzVl_emgzzYfQAn92QWGDHsFzv2QQ', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XpbGGkJKPGpF-WdM9CwPHoy0zCWwEbDV6ozBsv9F_h8.jpg?width=108&crop=smart&auto=webp&s=22bace721e175cf4d7c925250970becaac031481', 'width': 108}, {'height': 144, 'url': 'h... | |
Most powerful < 7b parameters model at the moment? | 118 | I would like to know which is the best model less than 7b currently available. | 2025-05-31T20:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l06f7r/most_powerful_7b_parameters_model_at_the_moment/ | ventilador_liliana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l06f7r | false | null | t3_1l06f7r | /r/LocalLLaMA/comments/1l06f7r/most_powerful_7b_parameters_model_at_the_moment/ | false | false | self | 118 | null |
Speaker separation and transcription | 8 | Is there any software, llm or example code to do speaker separation and transcription from a mono recording source? | 2025-05-31T19:53:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l05ypt/speaker_separation_and_transcription/ | Khipu28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l05ypt | false | null | t3_1l05ypt | /r/LocalLLaMA/comments/1l05ypt/speaker_separation_and_transcription/ | false | false | self | 8 | null |
Recommendations for a model for nsfw rp with 8gb gpu? | 1 | [removed] | 2025-05-31T19:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l05yfv/recommendations_for_a_model_for_nsfw_rp_with_8gb/ | MacaroniBee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l05yfv | false | null | t3_1l05yfv | /r/LocalLLaMA/comments/1l05yfv/recommendations_for_a_model_for_nsfw_rp_with_8gb/ | false | false | nsfw | 1 | null |
Has anyone managed to get a non-Google AI to run | 38 | In the new Google AI Edge Gallery app? I'm wondering if DeepSeek or a version of it can be run locally with it?
| 2025-05-31T19:51:20 | Gabrielmorrow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l05wpz | false | null | t3_1l05wpz | /r/LocalLLaMA/comments/1l05wpz/has_anyone_managed_to_get_a_non_google_ai_to_run/ | false | false | 38 | {'enabled': True, 'images': [{'id': '45KWKL8csIb8OerruV55VVRy2HArFuq64kszbwYgDqM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/8yt7shdl964f1.png?width=108&crop=smart&auto=webp&s=b2dc8e7df010f065ca77c717be0ec88a1eed168c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/8yt7shdl964f1.pn... | ||
Recommendations for a model for nsfw rp with 8gb gpu? | 1 | [removed] | 2025-05-31T19:49:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l05ve9/recommendations_for_a_model_for_nsfw_rp_with_8gb/ | Proper-Customer7286 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l05ve9 | false | null | t3_1l05ve9 | /r/LocalLLaMA/comments/1l05ve9/recommendations_for_a_model_for_nsfw_rp_with_8gb/ | false | false | nsfw | 1 | null |
Open-Source TTS That Beats ElevenLabs? Chatterbox TTS by Resemble AI | 0 | Resemble AI just released Chatterbox, an open-source TTS model that might be the most powerful alternative to ElevenLabs to date. It's fast, expressive, and surprisingly versatile.
Highlights:
→ Emotion Control: Fine-tune speech expressiveness with a single parameter. From deadpan to dramatic—works out of the box.
→... | 2025-05-31T19:37:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l05lhj/opensource_tts_that_beats_elevenlabs_chatterbox/ | mahimairaja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l05lhj | false | null | t3_1l05lhj | /r/LocalLLaMA/comments/1l05lhj/opensource_tts_that_beats_elevenlabs_chatterbox/ | false | false | self | 0 | null |
llama-server, gemma3, 32K context *and* speculative decoding on a 24GB GPU | 74 | llama.cpp keeps cooking! Draft model support with SWA landed this morning and early tests show up to 30% improvements in performance. Fitting it all on a single 24GB GPU was tight. The 4b as a draft model had a high enough acceptance rate to make a performance difference. Generating code had the best speed ups and crea... | 2025-05-31T19:32:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l05hpu/llamaserver_gemma3_32k_context_and_speculative/ | No-Statement-0001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l05hpu | false | null | t3_1l05hpu | /r/LocalLLaMA/comments/1l05hpu/llamaserver_gemma3_32k_context_and_speculative/ | false | false | self | 74 | {'enabled': False, 'images': [{'id': 'Gl3gNdSGmTRUgVuThHldNFN7ixhImdAgLgxGF5XRiAo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WK_qIQEkXzl-T5spAFV6a7EN0d9D-ctFoLg6sWpMc4U.jpg?width=108&crop=smart&auto=webp&s=a7d8b32c8d708477fad9966f0b4311e5b0bdbd4a', 'width': 108}, {'height': 108, 'url': 'h... |
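A minimal sketch of the setup described, using llama.cpp's draft-model flags (`-md`, `-ngld`, `--draft-max` and `--draft-min` exist in current builds; the file names and values here are assumptions):

```
llama-server -m gemma-3-27b-it-Q4_K_M.gguf -ngl 99 -c 32768 -fa \
  -md gemma-3-4b-it-Q4_K_M.gguf -ngld 99 \
  --draft-max 16 --draft-min 4
```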
Demo Video of AutoBE, Backend Vibe Coding Agent Achieving 100% Compilation Success (Open Source) | 39 | ## AutoBE: Backend Vibe Coding Agent Achieving 100% Compilation Success
- Github Repository: https://github.com/wrtnlabs/autobe
- Playground Website: https://stackblitz.com/github/wrtnlabs/autobe-playground-stackblitz
- Demo Result (Generated backend applications by AutoBE)
- [Bullet-in Board System](https://stackbl... | 2025-05-31T18:38:56 | https://v.redd.it/f2df0y0jw54f1 | jhnam88 | /r/LocalLLaMA/comments/1l049hr/demo_video_of_autobe_backend_vibe_coding_agent/ | 1970-01-01T00:00:00 | 0 | {} | 1l049hr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/f2df0y0jw54f1/DASHPlaylist.mpd?a=1751438340%2CNjk2NWQyZjkyNmQ1MzhhODRhMzAwYTA0ZDBmY2YwNGM5NDg0ZDQ2N2Q5NzA1NWE4ZWQzYTU4NTYxZjFkMzdhMQ%3D%3D&v=1&f=sd', 'duration': 323, 'fallback_url': 'https://v.redd.it/f2df0y0jw54f1/DASH_1080.mp4?source=fallback', '... | t3_1l049hr | /r/LocalLLaMA/comments/1l049hr/demo_video_of_autobe_backend_vibe_coding_agent/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH.png?width=108&crop=smart&format=pjpg&auto=webp&s=a08fdf3f483e3a42ed140d0d75fddda5f6739... | |
Regarding Hardcoded GGML Tensor Name Length Limit (GGML_MAX_NAME) | 1 | [removed] | 2025-05-31T18:26:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l03z9d/regarding_hardcoded_ggml_tensor_name_length_limit/ | Swimming-Market7717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l03z9d | false | null | t3_1l03z9d | /r/LocalLLaMA/comments/1l03z9d/regarding_hardcoded_ggml_tensor_name_length_limit/ | false | false | self | 1 | null |
"Fill in the middle" video generation? | 9 | My dad has been taking photos when he goes hiking. He always frames them the same, and has taken photos for every season over the course of a few years. Can you guys recommend a video generator that can "fill in the middle" such that I can produce a video in between each of the photos? | 2025-05-31T18:06:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l03iep/fill_in_the_middle_video_generation/ | randomqhacker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l03iep | false | null | t3_1l03iep | /r/LocalLLaMA/comments/1l03iep/fill_in_the_middle_video_generation/ | false | false | self | 9 | null |
LLM Extension for Command Palette: A way to chat with an LLM without opening new windows | 10 | After my [last post](https://www.reddit.com/r/LocalLLaMA/comments/1kfxl36/proof_of_concept_ollama_chat_in_powertoys_command) got some nice feedback on what was just a small project, it motivated me to put this [on Microsoft store](https://apps.microsoft.com/detail/9NPK6KSDLC81) and also on winget, which means now the ... | 2025-05-31T18:02:51 | https://v.redd.it/54dvyzcfo54f1 | GGLio | /r/LocalLLaMA/comments/1l03f2h/llm_extension_for_command_palette_a_way_to_chat/ | 1970-01-01T00:00:00 | 0 | {} | 1l03f2h | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/54dvyzcfo54f1/DASHPlaylist.mpd?a=1751436176%2CNWZjMjM1MjFmYzg4YzQ3ZDc5OTI2ZjMyYzM0ZGQzZGYxODRhYmIxOWU3NmE2MzlkZGVlNGYxMDk1ZjdmODRkMQ%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/54dvyzcfo54f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l03f2h | /r/LocalLLaMA/comments/1l03f2h/llm_extension_for_command_palette_a_way_to_chat/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo.png?width=108&crop=smart&format=pjpg&auto=webp&s=d5270174b630e3fffd0e7dfd239aff4bec384... |
Why does he think he's Claude 3 Opus? | 0 | 2025-05-31T17:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l03btj/why_he_think_he_claude_3_opus/ | presidentbidden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l03btj | false | null | t3_1l03btj | /r/LocalLLaMA/comments/1l03btj/why_he_think_he_claude_3_opus/ | false | false | 0 | null |
Best models to try on 96gb gpu? | 47 | RTX pro 6000 Blackwell arriving next week. What are the top local coding and image/video generation models I can try? Thanks! | 2025-05-31T17:49:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l033vh/best_models_to_try_on_96gb_gpu/ | sc166 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l033vh | false | null | t3_1l033vh | /r/LocalLLaMA/comments/1l033vh/best_models_to_try_on_96gb_gpu/ | false | false | self | 47 | null |
deepseek/deepseek-r1-0528-qwen3-8b stuck on infinite tool loop. Any ideas? | 26 | I've downloaded the official DeepSeek distillation and it does seem a touch smarter. However, when using tools, it often gets stuck forever trying to use them. Do you know why this is happening, and whether there is any workaround? | 2025-05-31T17:23:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l02hmq/deepseekdeepseekr10528qwen38b_stuck_on_infinite/ | Substantial_Swan_144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l02hmq | false | null | t3_1l02hmq | /r/LocalLLaMA/comments/1l02hmq/deepseekdeepseekr10528qwen38b_stuck_on_infinite/ | false | false | self | 26 | null |
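A common client-side workaround for loops like the one described above is to cap the number of tool-call rounds, forcing a final answer once the budget is spent. Here is a minimal sketch against an OpenAI-compatible local endpoint; the port, model id, `TOOLS` schema list, and `run_tool` dispatcher are all placeholders, not anything from the original post.

```python
# Sketch: bound the agent loop so tool calls cannot recurse forever.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # assumed local server
messages = [{"role": "user", "content": "What is 2 + 2? Use the calculator tool."}]
MAX_TOOL_ROUNDS = 5  # arbitrary cutoff

for _ in range(MAX_TOOL_ROUNDS):
    resp = client.chat.completions.create(
        model="deepseek-r1-0528-qwen3-8b",  # placeholder model id
        messages=messages,
        tools=TOOLS,                        # hypothetical tool schema list
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:                  # final answer reached; stop looping
        break
    messages.append(msg)                    # keep the assistant turn in context
    for call in msg.tool_calls:
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": run_tool(call),      # hypothetical dispatcher
        })
else:
    # Budget exhausted: tell the model to answer without tools.
    messages.append({"role": "user",
                     "content": "Stop calling tools and answer directly."})
```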
Ablating Gemma 3 27B variants with synthetic data from Sonnet 4 (Few-shot vs LoRA) | 1 | [removed] | 2025-05-31T17:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l02cgv/ablating_gemma_3_27b_variants_with_synthetic_data/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l02cgv | false | null | t3_1l02cgv | /r/LocalLLaMA/comments/1l02cgv/ablating_gemma_3_27b_variants_with_synthetic_data/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'h... |
[OC] Ablating Gemma 3 27B variants with synthetic data from Sonnet 4 (Few-shot vs LoRA) | 1 | [removed] | 2025-05-31T17:10:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l025xa/oc_ablating_gemma_3_27b_variants_with_synthetic/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l025xa | false | null | t3_1l025xa | /r/LocalLLaMA/comments/1l025xa/oc_ablating_gemma_3_27b_variants_with_synthetic/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'h... |
Is there a way to convert a model downloaded directly from huggingface to the blobs/refs/snapshots directory structure? | 2 | I downloaded the new DeepSeek-R1 from Hugging Face. All the config, JSON, and safetensors files are in a single directory.
I’m using mlx distributed, and it requires the model to be in this directory structure:
models--mlx-community--DeepSeek-R1-0528-4bit/
├── blobs/
├── refs/
└── snapshots/
I don’t want to re-download this hu... | 2025-05-31T17:04:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l020zk/is_there_a_way_to_convert_the_model_downloaded/ | No_Conversation9561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l020zk | false | null | t3_1l020zk | /r/LocalLLaMA/comments/1l020zk/is_there_a_way_to_convert_the_model_downloaded/ | false | false | self | 2 | null |
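The layout being asked for above is the Hugging Face hub cache format: `refs/<branch>` stores a revision id, `snapshots/<revision>/` holds the visible file tree, and each snapshot entry is a symlink into `blobs/`, whose entries are normally named by a content hash. Below is a rough sketch of rebuilding that shape from a flat download; the paths, the revision name, and the use of a local sha256 (rather than the upstream etag) are assumptions, so verify against what your loader actually expects.

```python
# Rough sketch: rearrange a flat model folder into the hub-cache layout.
# Assumptions: "local" is an acceptable revision name for your loader, and
# a local sha256 stands in for the etag-style blob names a real cache uses.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Stream the file so multi-GB safetensors don't load into RAM at once.
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

src = Path("DeepSeek-R1-0528-4bit")   # flat download dir (placeholder name)
root = Path("models--mlx-community--DeepSeek-R1-0528-4bit")
rev = "local"                         # placeholder revision id

snap = root / "snapshots" / rev
for d in (root / "blobs", root / "refs", snap):
    d.mkdir(parents=True, exist_ok=True)
(root / "refs" / "main").write_text(rev)     # refs/main -> snapshot name

for f in src.iterdir():
    if not f.is_file():
        continue
    blob = root / "blobs" / sha256_of(f)
    if not blob.exists():
        shutil.copy2(f, blob)                # store content under its hash
    link = snap / f.name
    if not link.exists():
        link.symlink_to(blob.resolve())      # snapshot entry -> blob
```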
Context Issue on Long Threads For Reasoning Models | 1 | Context Issue on Long Threads For Reasoning Models
Hi Everyone,
This is an issue I noticed while extensively using o4-mini and 4o in a long ChatGPT thread related to one of my projects. As the context grew, I noticed that o4-mini was getting confused while 4o was providing the desired answers. For example, if I asked o4-... | 2025-05-31T17:01:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l01yhr/context_issue_on_long_threads_for_reasoning_models/ | PleasantInspection12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l01yhr | false | null | t3_1l01yhr | /r/LocalLLaMA/comments/1l01yhr/context_issue_on_long_threads_for_reasoning_models/ | false | false | self | 1 | null |
[VOICE VIBE CODING] Android app to code while afk | 0 | Hello,
This is a continuation of a post I made ~2 months ago, showcasing an **Open Source implementation of Computer Use: "Simple Computer Use"**.
We are now making public the main client we use: a **lightweight "Simple Computer Use" Android App**:
[https://github.com/pnmartinez/simple-computer-use/releases/tag/0.5... | 2025-05-31T16:39:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l01frk/voice_vibe_coding_android_app_to_code_while_afk/ | nava_7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l01frk | false | null | t3_1l01frk | /r/LocalLLaMA/comments/1l01frk/voice_vibe_coding_android_app_to_code_while_afk/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'hOrlbiuUHyPNMK9eU2w13rY4HX9HQGepkHo3FhKJOwI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Zefz_SSeW27vSoK1OnV87lUGKP24mlpIT0CdP69PUwU.jpg?width=108&crop=smart&auto=webp&s=53251397df96a18f97fa166e2c3d7961623fa8d9', 'width': 108}, {'height': 108, 'url': 'h... | |
Giving Qwen 3 0.6B a Toolbelt in the form of MCP Support, Running Locally in Your Browser with Adjustable Thinking! | 49 | Hello all. I have spent a couple weekends giving the tiny Qwen3 0.6B model the ability to show off its underutilized tool calling abilities by using remote MCP servers. I am pleasantly surprised at how well it can chain tools. Additionally, I gave it the option to limit how much it can think to avoid the "overthinking"... | 2025-05-31T16:34:06 | https://v.redd.it/r495cezy654f1 | ajunior7 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l01bfe | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/r495cezy654f1/DASHPlaylist.mpd?a=1751301259%2CODRlNjE5OGRkY2ZiNzRjZTU3YjUzYWM5Y2IyZTA5ZTUzYTZkZThmOGI3N2Q2ZTZjMDA2OWM2MGIxMThhMTJiMg%3D%3D&v=1&f=sd', 'duration': 85, 'fallback_url': 'https://v.redd.it/r495cezy654f1/DASH_720.mp4?source=fallback', 'ha... | t3_1l01bfe | /r/LocalLLaMA/comments/1l01bfe/giving_qwen_3_06b_a_toolbelt_in_the_form_of_mcp/ | false | false | 49 | {'enabled': False, 'images': [{'id': 'ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz.png?width=108&crop=smart&format=pjpg&auto=webp&s=ef2f98923e0d32c5e1f5c62993c8047486044... | |
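The demo above runs in the browser, but the "adjustable thinking" idea maps onto a switch that Qwen3's chat template exposes in Hugging Face `transformers`. The sketch below shows the coarse on/off version of that control; treating it as an analogue of the app's finer-grained thinking budget is an assumption.

```python
# Sketch: build a Qwen3 prompt with thinking disabled to curb overthinking.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "List three useful MCP servers."}],
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # Qwen3 template switch; omit to allow <think> blocks
)
print(prompt)
```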
Why did Anthropic release MCP as a standard? | 0 | Was there a capitalist reason? Did they think others were going to adopt it as a standard anyway, like the OpenAI API? | 2025-05-31T16:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l00r5n/why_did_anthropic_release_mcp_as_a_standard/ | InsideYork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l00r5n | false | null | t3_1l00r5n | /r/LocalLLaMA/comments/1l00r5n/why_did_anthropic_release_mcp_as_a_standard/ | false | false | self | 0 | null |
[OC] Ablating Gemma 3 27B variants with synthetic data from Sonnet 4 (Few-shot vs LoRA) | 1 | [removed] | 2025-05-31T15:56:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l00f80/oc_ablating_gemma_3_27b_variants_with_synthetic/ | RemarkableMatter4058 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l00f80 | false | null | t3_1l00f80 | /r/LocalLLaMA/comments/1l00f80/oc_ablating_gemma_3_27b_variants_with_synthetic/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'h... |
[OC] Ablating Gemma 3 27B variants with synthetic data from Sonnet 4 (Few-shot vs LoRA) | 1 | [removed] | 2025-05-31T15:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l00agm/oc_ablating_gemma_3_27b_variants_with_synthetic/ | tawnyManticore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l00agm | false | null | t3_1l00agm | /r/LocalLLaMA/comments/1l00agm/oc_ablating_gemma_3_27b_variants_with_synthetic/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'h... |
Why Do LLMs Lie So Believably? | 0 | I’ve been messing around with a few local LLaMA models and something keeps tripping me up. They make stuff up with total confidence.
It’s not just wrong answers. It’s the delivery. A model will give you a perfectly structured paragraph, cite a fake source, and even format it like it came from an academic journal. Unle... | 2025-05-31T15:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l003b6/why_do_llms_lie_so_believably/ | Work_for_burritos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l003b6 | false | null | t3_1l003b6 | /r/LocalLLaMA/comments/1l003b6/why_do_llms_lie_so_believably/ | false | false | self | 0 | null |