title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-41.5k chars) | created (timestamp[ns], 2023-04-01 to 2026-03-04, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 to 2026-02-19) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars, nullable) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I'm using LM Studio and have just started trying to use a Deepseek-R1 Distilled Llama model and unlike any other model I've ever used, the LLM keeps responding in a strange way. I am incredibly new to this whole thing, so if this is a stupid question I apologize. | 0 | Every time I throw something at the model (8B or 70B both) it responds with something like "Okay, so I'm trying to figure out..." or "The user wants to know... " and none of my other models have responded like this. What's causing this? I'm incredibly confused and honestly don't even know where to begin searching for t... | 2025-05-29T20:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kyku6k/im_using_lm_studio_and_have_just_started_trying/ | BokehJunkie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyku6k | false | null | t3_1kyku6k | /r/LocalLLaMA/comments/1kyku6k/im_using_lm_studio_and_have_just_started_trying/ | false | false | self | 0 | null |
Voice customization on Orpheus TTS Baseten deployment | 1 | [removed] | 2025-05-29T20:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kyktn6/voice_customization_on_orpheus_tts_baseten/ | ComedianImpressive37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyktn6 | false | null | t3_1kyktn6 | /r/LocalLLaMA/comments/1kyktn6/voice_customization_on_orpheus_tts_baseten/ | false | false | self | 1 | null |
What’s the most useful agent you’ve built or used? | 1 | [removed] | 2025-05-29T19:59:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kykkad/whats_the_most_useful_agent_youve_built_or_used/ | InitialChard8359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kykkad | false | null | t3_1kykkad | /r/LocalLLaMA/comments/1kykkad/whats_the_most_useful_agent_youve_built_or_used/ | false | false | self | 1 | null |
What’s the most useful agent you’ve built or used? | 1 | [removed] | 2025-05-29T19:58:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kykize/whats_the_most_useful_agent_youve_built_or_used/ | InitialChard8359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kykize | false | null | t3_1kykize | /r/LocalLLaMA/comments/1kykize/whats_the_most_useful_agent_youve_built_or_used/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eeW9fH3YcxMuHj2amhvboHe6AxADIiP0ot6ECXrRMbs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ekvKwmt1PM8YArW1cwNvrhLU-M8zXpAL7gRavbuuZQY.jpg?width=108&crop=smart&auto=webp&s=3c43d42d0b10a45d21cf05f31dcf8aa5592ea940', 'width': 108}, {'height': 108, 'url': 'h... |
PSA: Don't waste electricity when running vllm. Use this patch | 303 | I was annoyed by vllm using 100% CPU on as many cores as there are connected GPUs even when there's no activity. I have 8 GPUs connected to a single machine, so idle power usage was almost double compared to optimal arrangement.
I went forward and fixed this: https://github.com/vllm-project/vllm/pull/16226. ... | 2025-05-29T19:53:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kykez2/psa_dont_waste_electricity_when_running_vllm_use/ | pmur12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kykez2 | false | null | t3_1kykez2 | /r/LocalLLaMA/comments/1kykez2/psa_dont_waste_electricity_when_running_vllm_use/ | false | false | self | 303 | {'enabled': False, 'images': [{'id': '0pKXup16UoyNVN1je09sqpKw5PcVHUVQgKwcevkCKQs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NN14Ap84Yztw0OV4s-mY-nQZStg8I5hNQDYCXPfphq0.jpg?width=108&crop=smart&auto=webp&s=21d00b95cd1b1e998caf7fcdd6ae2e9ecf8edaf5', 'width': 108}, {'height': 108, 'url': 'h... |
Always nice to get something open from the closed AI labs. This time from Anthropic, not a model but a pretty cool research/exploration tool. | 159 | 2025-05-29T19:47:35 | https://www.anthropic.com/research/open-source-circuit-tracing | indicava | anthropic.com | 1970-01-01T00:00:00 | 0 | {} | 1kyk9nf | false | null | t3_1kyk9nf | /r/LocalLLaMA/comments/1kyk9nf/always_nice_to_get_something_open_from_the_closed/ | false | false |
[[ACCEPT]] -> Will you train my AGI/ASI/AMI on your beast of a computer? | 1 | [removed] | 2025-05-29T19:40:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kyk3la/accept_will_you_train_my_agiasiami_on_your_beast/ | MagicaItux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyk3la | false | null | t3_1kyk3la | /r/LocalLLaMA/comments/1kyk3la/accept_will_you_train_my_agiasiami_on_your_beast/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fE4ENnR74sJD8MSpm2UY59UL-v5B8OIbom80atHNgPs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JuIdgNjAZ0jQJX8S5QPAewFkUrDpnZ2uxJbbOj4k5gU.jpg?width=108&crop=smart&auto=webp&s=bd626ef8e3a68f30e5d070fb0f55cbedfac8fe76', 'width': 108}, {'height': 108, 'url': 'h... |
Why is Mistral Small 3 faster than the Qwen3 30B A3B model? | 1 | [removed] | 2025-05-29T19:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kyk385/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | Alone_Ad_6011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyk385 | false | null | t3_1kyk385 | /r/LocalLLaMA/comments/1kyk385/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'h... |
DeepSeek-R1-0528-Qwen3-8B-OpenVINO quants are up | 13 | https://huggingface.co/Echo9Zulu/DeepSeek-R1-0528-Qwen3-8B-OpenVINO
There are a handful of quants in this repo. To keep things easier to maintain I've taken cues from how unsloth organizes their repos.
Will add some inference code examples tonight. There were some issues with AutoTokenizers in my quick tests and I ... | 2025-05-29T19:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kyjqrg/deepseekr10528qwen38bopenvino_quants_are_up/ | Echo9Zulu- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyjqrg | false | null | t3_1kyjqrg | /r/LocalLLaMA/comments/1kyjqrg/deepseekr10528qwen38bopenvino_quants_are_up/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'JYKAtiDF6fVq7LGoir-AB6chiRAaDougdypYh6p4_ug', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oSp7hRn9U7_TwaLZEBiEIfEHfVmUDVJzaQCrNDawmd8.jpg?width=108&crop=smart&auto=webp&s=0999c93c7c021fc6e55d3eaf3a6ce7a2524ec874', 'width': 108}, {'height': 116, 'url': 'h... |
seeking (or building) an ai browser extension with inline form suggestions + multi-field support | 2 |
hey all — i'm looking for an existing tool (or folks interested in building one) that can intelligently assist with filling out web forms. not just basic autofill, but something smarter — context-aware, user-aware, and unobtrusive.
here’s what i’m envisioning:
* a browser extension that stays dormant until trigger... | 2025-05-29T19:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kyji9c/seeking_or_building_an_ai_browser_extension_with/ | madouble7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyji9c | false | null | t3_1kyji9c | /r/LocalLLaMA/comments/1kyji9c/seeking_or_building_an_ai_browser_extension_with/ | false | false | self | 2 | null |
Paper page - GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning | 29 | This looks pretty promising for getting closer to a full finetuning. | 2025-05-29T19:15:48 | https://huggingface.co/papers/2505.20355 | AutomataManifold | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kyjh6f | false | null | t3_1kyjh6f | /r/LocalLLaMA/comments/1kyjh6f/paper_page_gralora_granular_lowrank_adaptation/ | false | false | 29 | {'enabled': False, 'images': [{'id': '3i6EYJM_JZbEgvv1WLLZWmZ-2tZKsR9bOFAJ8I3sm50', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bn2Vlujw06Mp0sbZ3JHB5bJ2z1rjiqN_uvV6LZmy5wg.jpg?width=108&crop=smart&auto=webp&s=7558232f6452c15a462e887fb212ab8ae4ce18a5', 'width': 108}, {'height': 116, 'url': 'h... | |
Smallest+Fastest Model For Chatting With Webpages? | 5 | I want to use the [Page Assist Firefox extension](https://github.com/n4ze3m/page-assist) for talking with AI about the current webpage I'm on. Are there recommended small+fast models for this I can run on ollama?
Embedding models recommendations are great too. They suggested using [nomic-embed-text](https://ollama.com... | 2025-05-29T18:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kyiwjp/smallestfastest_model_for_chatting_with_webpages/ | getSAT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyiwjp | false | null | t3_1kyiwjp | /r/LocalLLaMA/comments/1kyiwjp/smallestfastest_model_for_chatting_with_webpages/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'iReN37fyKZV3DfAdtYlAQyk9Org-AelIPHAJ5YP6IBI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CpLvoWC7oQyH8GI-fNW5UmzRKVtRjXvafczv4XMUo4Q.jpg?width=108&crop=smart&auto=webp&s=ee302aa4cba946af847d96c73e8ea0e67454a3bb', 'width': 108}, {'height': 108, 'url': 'h... |
Considering a dedicated compute card for MSTY. What is faster than a 6800XT and affordable? | 1 | I’m looking at the Radeon Instinct MI50 that has 16GB of HBM2, doubling the memory bandwidth of the 6800XT but the 6800XT has 84% better compute.
What should I be considering? | 2025-05-29T18:42:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kyin1j/considering_a_dedicated_compute_card_for_msty/ | TurtleCrusher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyin1j | false | null | t3_1kyin1j | /r/LocalLLaMA/comments/1kyin1j/considering_a_dedicated_compute_card_for_msty/ | false | false | self | 1 | null |
## DL: CLI Downloader - Hugging Face, Llama.cpp, Auto-Updates & More! | 0 | Hey everyone!
I'm excited to share \*\*DL\*\*, a command-line interface (CLI) tool I've been developing (with a \*lot\* of help from AI!) to make downloading files, especially large model files and repositories, much smoother and faster. If you're often grabbing stuff from Hugging Face, need the latest llama.cpp, or j... | 2025-05-29T18:40:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kyikj7/dl_cli_downloader_hugging_face_llamacpp/ | AleksHop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyikj7 | false | null | t3_1kyikj7 | /r/LocalLLaMA/comments/1kyikj7/dl_cli_downloader_hugging_face_llamacpp/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'W7eRnFuf9-jFWaQMacHfXI1UUakTahErVEbmllcveM0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/19IROMXJ8r1_iIDMPV2bmONHc5r3w-dQ7JOPOalOr_w.jpg?width=108&crop=smart&auto=webp&s=8ac618c7c35354de206d04314ab61bed3064d380', 'width': 108}, {'height': 108, 'url': 'h... | |
## DL: CLI Downloader - Hugging Face, Llama.cpp, Auto-Updates & More! | 0 | Hi,
I'm excited to share \*\*DL\*\*, a command-line interface (CLI) tool I've been developing (with a \*lot\* of help from AI!) to make downloading files, especially large model files and repositories, much smoother and faster. If you're often grabbing stuff from Hugging Face, need the latest llama.cpp, or just want a... | 2025-05-29T18:36:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kyigzz/dl_cli_downloader_hugging_face_llamacpp/ | AleksHop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyigzz | false | null | t3_1kyigzz | /r/LocalLLaMA/comments/1kyigzz/dl_cli_downloader_hugging_face_llamacpp/ | false | false | 0 | null | |
Claude Sonnet 4 is truly decieving | 1 | [removed] | 2025-05-29T18:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyidnn/claude_sonnet_4_is_truly_decieving/ | Ortho-BenzoPhenone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyidnn | false | null | t3_1kyidnn | /r/LocalLLaMA/comments/1kyidnn/claude_sonnet_4_is_truly_decieving/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OyJmICVkJ46HCE5hYYD__ia7siW4AiqfKr6KYSU2clc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NHasHAAWbfMdXwF7ji9uvT43yX3G6MdjPJngfFXZZ1E.jpg?width=108&crop=smart&auto=webp&s=b6ec9686c50c0dbd7647322b08ccb9bca4b2f4e0', 'width': 108}, {'height': 108, 'url': 'h... | |
has anyone tried BAML for structured outputs? | 1 | [removed] | 2025-05-29T18:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kyia2i/has_anyone_tried_baml_for_structured_outputs/ | sandy_005 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyia2i | false | null | t3_1kyia2i | /r/LocalLLaMA/comments/1kyia2i/has_anyone_tried_baml_for_structured_outputs/ | false | false | self | 1 | null |
Which local model in your practical experience is best for tool use? | 1 | [removed] | 2025-05-29T18:27:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyi881/which_local_model_in_your_practical_experience_is/ | numbtheless | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyi881 | false | null | t3_1kyi881 | /r/LocalLLaMA/comments/1kyi881/which_local_model_in_your_practical_experience_is/ | false | false | self | 1 | null |
Free up VRAM by using iGPU for display rendering, and Graphics card just for LLM | 8 | Has anyone tried using your internal GPU for display rendering so you have all the VRAM available for your AI programs? Will it be as simple as disconnecting all cables from the graphics card and only connecting your monitor to your iGPU? I'm using Windows, but the question also applies if using other OSes. | 2025-05-29T18:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyhl5u/free_up_vram_by_using_igpu_for_display_rendering/ | some_user_2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyhl5u | false | null | t3_1kyhl5u | /r/LocalLLaMA/comments/1kyhl5u/free_up_vram_by_using_igpu_for_display_rendering/ | false | false | self | 8 | null |
Why is Mistral Small 3 faster than the Qwen3 30B A3B model? | 1 | [removed] | 2025-05-29T17:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kyhfhx/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | Alone_Ad_6011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyhfhx | false | null | t3_1kyhfhx | /r/LocalLLaMA/comments/1kyhfhx/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | false | false | self | 1 | null |
R1 on live bench | 21 | [benchmark ](https://preview.redd.it/kmmnq5dodr3f1.png?width=1390&format=png&auto=webp&s=8faaad69539bfb4dc5eb23f1e0126ba3709b5f0d)
benchmark | 2025-05-29T17:48:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kyh95g/r1_on_live_bench/ | Inevitable_Clothes91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyh95g | false | null | t3_1kyh95g | /r/LocalLLaMA/comments/1kyh95g/r1_on_live_bench/ | false | false | 21 | null | |
How are you handling AI agent coordination in your SaaS? | 1 | [removed] | 2025-05-29T17:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kyga7j/how_are_you_handling_ai_agent_coordination_in/ | Easy-String6650 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyga7j | false | null | t3_1kyga7j | /r/LocalLLaMA/comments/1kyga7j/how_are_you_handling_ai_agent_coordination_in/ | false | false | self | 1 | null |
R1 distil qwen 3 8b way worse than qwen3 14b | 0 | Sent the same prompt: "do a solar system simulation in a single html file" to both of them, 3 times each. Qwen14b did fine all three times. The other one failed every single time. Used q4_k_m for qwen3 14b and q5_k_m for r1 distil. | 2025-05-29T17:00:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kyg15b/r1_distil_qwen_3_8b_way_worse_than_qwen3_14b/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyg15b | false | null | t3_1kyg15b | /r/LocalLLaMA/comments/1kyg15b/r1_distil_qwen_3_8b_way_worse_than_qwen3_14b/ | false | false | self | 0 | null |
LLM benchmarks for AI MAX+ 395 (HP laptop) | 36 | Not my video.
Even knowing the bandwidth in advance, the tokens per second are still a bit underwhelming. Can't beat physics I guess.
The Framework Desktop will have a higher TDP, but don't think it's gonna help much. | 2025-05-29T16:34:01 | https://www.youtube.com/watch?v=-HJ-VipsuSk | BerryGloomy4215 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1kyfcky | false | {'oembed': {'author_name': 'AIex The AI Workbench', 'author_url': 'https://www.youtube.com/@AIexTheAIWorkbench', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-HJ-VipsuSk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypte... | t3_1kyfcky | /r/LocalLLaMA/comments/1kyfcky/llm_benchmarks_for_ai_max_395_hp_laptop/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'by0eH17d5lslDimbW-QRNwhVovOySH8G4eVonrVuD1g', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tr4JiOsuiWwYXrmqb9qpQBRMgBXV0gjIlFHUHTS_EpE.jpg?width=108&crop=smart&auto=webp&s=724424371434feca6b704da3f5ba3b9f973114fc', 'width': 108}, {'height': 162, 'url': 'h... | |
Dual 4090 build for brand compliance analysis - worth it or waste? | 0 | Building a rig to auto-analyze marketing assets against brand guidelines/marketing persona preferences (logo placement, colors, text positioning etc). Need to batch process and score images, then generate reports.
Specs I'm considering:
• 2x RTX 4090 24GB
• R9 7950X
• 128GB DDR5 ECC
• 2TB NVMe, 1600W PSU
• Proxmox fo... | 2025-05-29T16:24:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kyf3oc/dual_4090_build_for_brand_compliance_analysis/ | RiseNecessary6351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyf3oc | false | null | t3_1kyf3oc | /r/LocalLLaMA/comments/1kyf3oc/dual_4090_build_for_brand_compliance_analysis/ | false | false | self | 0 | null |
Does anyone knows what is goldmane llm at lmarena? | 3 | It gave 10/10 to my specific tasks | 2025-05-29T16:20:34 | https://www.reddit.com/r/LocalLLaMA/comments/1kyf07f/does_anyone_knows_what_is_goldmane_llm_at_lmarena/ | Economy_Apple_4617 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyf07f | false | null | t3_1kyf07f | /r/LocalLLaMA/comments/1kyf07f/does_anyone_knows_what_is_goldmane_llm_at_lmarena/ | false | false | self | 3 | null |
When to Fine-Tune LLMs (and When Not To) - A Practical Guide | 110 | I've been building fine-tunes for 9 years (at my own startup, then at Apple, now at a second startup) and learned a lot along the way. I thought most of this was common knowledge, but I've been told it's helpful so wanted to write up a rough guide for when to (and when not to) fine-tune, what to expect, and which model... | 2025-05-29T16:07:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kyeo4z/when_to_finetune_llms_and_when_not_to_a_practical/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyeo4z | false | null | t3_1kyeo4z | /r/LocalLLaMA/comments/1kyeo4z/when_to_finetune_llms_and_when_not_to_a_practical/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': 'XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2LwF8UR_7NcVTFDWyd6CmPGp05eWO7MLbl14VnMS85w.jpg?width=108&crop=smart&auto=webp&s=fa326ef50bb272a1afa988432609189589ae2dee', 'width': 108}, {'height': 108, 'url': 'h... |
What are cool ways you use your Local LLM | 7 | Things that just make your life a bit easier with Ai. | 2025-05-29T15:39:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kydzmh/what_are_cool_ways_you_use_your_local_llm/ | DOK10101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydzmh | false | null | t3_1kydzmh | /r/LocalLLaMA/comments/1kydzmh/what_are_cool_ways_you_use_your_local_llm/ | false | false | self | 7 | null |
How do you define "vibe coding"? | 0 | 2025-05-29T15:33:38 | vibjelo | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1kydu4r | false | null | t3_1kydu4r | /r/LocalLLaMA/comments/1kydu4r/how_do_you_define_vibe_coding/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'ifU43VPseK25e5pNBguSXwQIB5m0OD5Yv2d_aGc13Hg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XJ58eUxnoCOpF6NTPxWXIxUq-Ld2T-55DHynDaQaQSQ.png?width=108&crop=smart&auto=webp&s=e15cb99d58745beecc428343d490f7548016337e', 'width': 108}, {'height': 109, 'url': 'ht... | |||
Is there any good smaller NSFW models for story writing? | 4 | I have a fairly weak PC, 6GB VRAM and 50 GB RAM. I have tried a couple of models on Ollama but most of them suck; they either keep repeating themselves or just do a sort of compilation where they briefly summarize everything and immediately skip to the end.
So are there any good models on Ollama or elsewhere that ... | 2025-05-29T15:32:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kydt6f/is_there_any_good_smaller_nsfw_models_for_story/ | LeiMoshen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydt6f | false | null | t3_1kydt6f | /r/LocalLLaMA/comments/1kydt6f/is_there_any_good_smaller_nsfw_models_for_story/ | false | false | nsfw | 4 | null |
Google MedGemma models | 1 | [removed] | 2025-05-29T15:32:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kydsza/google_medgemma_models/ | MST019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydsza | false | null | t3_1kydsza | /r/LocalLLaMA/comments/1kydsza/google_medgemma_models/ | false | false | self | 1 | null |
Mastering DeepSeek LLaMA Locally: Open WebUI + Ollama Guide | 1 | [removed] | 2025-05-29T15:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kydqw2/mastering_deepseek_llama_locally_open_webui/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydqw2 | false | null | t3_1kydqw2 | /r/LocalLLaMA/comments/1kydqw2/mastering_deepseek_llama_locally_open_webui/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YwLbil7JL-v1VrLIWxRnPRhtfaTePgNV_z5tZk6MWvY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/FWicWs5tMHKYBjCuLBHMrS6ALj18kexW9NEotw5xaNI.jpg?width=108&crop=smart&auto=webp&s=6b0f2892fce5def25786bde7c912767ff4aef411', 'width': 108}, {'height': 216, 'url': '... |
Is there a local model that can solve this text decoding riddle? | 4 | Since the introduction of DeepSeek-R1 distills (the original ones) I've tried to find a local model that can solve text decoding problem from o1 research page ["Learning to reason with LLMs" (OpenAI)](https://openai.com/index/learning-to-reason-with-llms/):
oyfjdnisdr rtqwainr acxz mynzbhhx -> Think step by step
... | 2025-05-29T15:27:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kydoio/is_there_a_local_model_that_can_solve_this_text/ | F1amy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydoio | false | null | t3_1kydoio | /r/LocalLLaMA/comments/1kydoio/is_there_a_local_model_that_can_solve_this_text/ | false | false | self | 4 | null |
Does Llama actually work well for real projects? Which version is best, and what are the trade-offs? | 1 | [removed] | 2025-05-29T15:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kydksa/does_llama_actually_work_well_for_real_projects/ | chefs-1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydksa | false | null | t3_1kydksa | /r/LocalLLaMA/comments/1kydksa/does_llama_actually_work_well_for_real_projects/ | false | false | self | 1 | null |
Got Access to Domo AI. What should I try with it? | 0 | just got access to [domoai](https://www.domoai.app/home?via=081621AUG) and have been testing different prompts. If you have ideas like anime to real, style-swapped videos, or anything unusual, drop them in the comments. I’ll try the top suggestions with the most upvotes after a few hours since it takes some time to gen... | 2025-05-29T15:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kydf3k/got_access_to_domo_ai_what_should_i_try_with_it/ | Own_View3337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydf3k | false | null | t3_1kydf3k | /r/LocalLLaMA/comments/1kydf3k/got_access_to_domo_ai_what_should_i_try_with_it/ | false | false | self | 0 | null |
"These students can't add two and two, and they go to Harvard." — Donald Trump | 0 | 2025-05-29T15:03:44 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kyd34w | false | null | t3_1kyd34w | /r/LocalLLaMA/comments/1kyd34w/these_students_cant_add_two_and_two_and_they_go/ | false | false | 0 | {'enabled': True, 'images': [{'id': '4QakP4WzA59VCrcpOFeyPMVdWrPIvXFstpY8P_8XPfI', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.jpeg?width=108&crop=smart&auto=webp&s=5edb55ab120c0ee2aa01d6841f984ab20ecae915', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.j... | |||
No offense: Deepseek 8b 0528 Qwen3 Not Better Than Qwen3 8B | 0 | Just want to say this
Asked some prompts related to basic stuff like create calculator.
Qwen solved it zero-shot, whereas the DeepSeek 8B Qwen distill required more attempts.
| 2025-05-29T14:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kycymx/no_offense_deepseek_8b_0528_qwen3_not_better_than/ | dreamai87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kycymx | false | null | t3_1kycymx | /r/LocalLLaMA/comments/1kycymx/no_offense_deepseek_8b_0528_qwen3_not_better_than/ | false | false | self | 0 | null |
What is this nice frontend shown on the Deepseek R1 updated website? | 3 | https://i.redd.it/68wa4yfvgq3f1.gif
[Deepseek News Link](https://api-docs.deepseek.com/news/news250528) | 2025-05-29T14:44:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kycm83/what_is_this_nice_frontend_shown_on_the_deepseek/ | Yes_but_I_think | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kycm83 | false | null | t3_1kycm83 | /r/LocalLLaMA/comments/1kycm83/what_is_this_nice_frontend_shown_on_the_deepseek/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=108&crop=smart&auto=webp&s=4f39a07c027d6036b98ac9f4ba405a8d11549aa3', 'width': 108}, {'height': 121, 'url': 'h... | |
PC configuration for fast LocalLLaMA | 1 | [removed] | 2025-05-29T14:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kycllk/pc_configuration_for_fast_localllama/ | Icy_Fee7219 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kycllk | false | null | t3_1kycllk | /r/LocalLLaMA/comments/1kycllk/pc_configuration_for_fast_localllama/ | false | false | self | 1 | null |
Can we take Deepseek-R1-Qwen3-8b tokenizer and copy it to Qwen3 30b A3b? | 0 | Deepseek’s post on the R1 distill for Qwen3 8b implies the only thing changed is the tokenizer config, and other parts of Qwen3 are the same.
This is surprising to me, as I thought such a distill would require a lot of GPU power, to finetune the model with the R1 dataset.
If this is not the case, and we can do a simp... | 2025-05-29T14:40:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kycia8/can_we_take_deepseekr1qwen38b_tokenizer_and_copy/ | jaxchang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kycia8 | false | null | t3_1kycia8 | /r/LocalLLaMA/comments/1kycia8/can_we_take_deepseekr1qwen38b_tokenizer_and_copy/ | false | false | self | 0 | null |
Deepseek is the 4th most intelligent AI in the world. | 325 | 2025-05-29T14:31:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kyca0p/deepseek_is_the_4th_most_intelligent_ai_in_the/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyca0p | false | null | t3_1kyca0p | /r/LocalLLaMA/comments/1kyca0p/deepseek_is_the_4th_most_intelligent_ai_in_the/ | false | false | 325 | null | ||
Deepseek is the 4th most Intelligent Ai in the world | 1 | *Processing img fw3jnlhocq3f1...*
And yes, that's Claude-4 all the way at the bottom.
i love Deepseek
i mean look at the price to performance | 2025-05-29T14:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kyc47y/deepseek_is_the_4th_most_intelligent_ai_in_the/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyc47y | false | null | t3_1kyc47y | /r/LocalLLaMA/comments/1kyc47y/deepseek_is_the_4th_most_intelligent_ai_in_the/ | false | false | self | 1 | null |
Interesting LLM Thesis Topics | 1 | [removed] | 2025-05-29T14:23:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kyc2sc/interesting_llm_thesis_topics/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyc2sc | false | null | t3_1kyc2sc | /r/LocalLLaMA/comments/1kyc2sc/interesting_llm_thesis_topics/ | false | false | self | 1 | null |
New Qwen3 8B Distill of DeepSeek R1 0528 | 1 | [removed] | 2025-05-29T14:20:50 | samuelchristlie | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kyc0uz | false | null | t3_1kyc0uz | /r/LocalLLaMA/comments/1kyc0uz/new_qwen3_8b_distill_of_deepseek_r1_0528/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'vD848Gtcd6AzmG2Ci8YxqhMMmWmK3hihSXC_LPqGqHo', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?width=108&crop=smart&auto=webp&s=124fe075347a442f069658b0add64282d5f91cc9', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?... | ||
I scraped 1M jobs directly from corporate websites. | 1 | [removed] | 2025-05-29T14:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kybvz7/i_scraped_1m_jobs_directly_from_corporate_websites/ | Elieroos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kybvz7 | false | null | t3_1kybvz7 | /r/LocalLLaMA/comments/1kybvz7/i_scraped_1m_jobs_directly_from_corporate_websites/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'h... |
Small open models are more cost effective than closed ones (score from artifical analysis). | 34 | Sampled only the most cost efficient models that were above a score threshold. | 2025-05-29T14:12:37 | GreenTreeAndBlueSky | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kybtri | false | null | t3_1kybtri | /r/LocalLLaMA/comments/1kybtri/small_open_models_are_more_cost_effective_than/ | false | false | 34 | {'enabled': True, 'images': [{'id': 'lA-Kd2ezIorsaDntaG6RoDwCx7zssa-KCHeOKjh2sgU', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/hwn90facbq3f1.png?width=108&crop=smart&auto=webp&s=52779e4d061460335cb2c329d16ef64704eb13ab', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/hwn90facbq3f1.png... | ||
the impact of memory timings on CPU LLM inference performance. | 7 | I didn't find any data related to this subject so I ran a few tests over the past few days and got some interesting results.
The inspiration for the test was [this thread on hardwareluxx](https://www.hardwareluxx.de/community/threads/ram-timings-und-deren-einfluss-auf-spiele-und-anwendungen-amd-update-23-05-2020.12691... | 2025-05-29T14:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kybql4/the_impact_of_memory_timings_on_cpu_llm_inference/ | AliNT77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kybql4 | false | null | t3_1kybql4 | /r/LocalLLaMA/comments/1kybql4/the_impact_of_memory_timings_on_cpu_llm_inference/ | false | false | 7 | null | |
Personalized AI Tutor Demo | Learn about LLMs with an AI Tutor | 1 | [removed] | 2025-05-29T14:03:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kybmcu/personalized_ai_tutor_demo_learn_about_llms_with/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kybmcu | false | null | t3_1kybmcu | /r/LocalLLaMA/comments/1kybmcu/personalized_ai_tutor_demo_learn_about_llms_with/ | false | false | self | 1 | null |
Qwen withholds 32B/235B base models, presumably so they can’t be distilled by Deepseek. | 1 | [removed] | 2025-05-29T13:54:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kybdzn/qwen_withholds_32b235b_base_models_presumably_so/ | DowntownCase7112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kybdzn | false | null | t3_1kybdzn | /r/LocalLLaMA/comments/1kybdzn/qwen_withholds_32b235b_base_models_presumably_so/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eUe17voVkF4rUxp20J0CXK9LZ1ckV3728roXC7v8pVo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=108&crop=smart&auto=webp&s=1e4e0581cca8cdee9d1908117d0d6678ae7c2d82', 'width': 108}, {'height': 116, 'url': 'h... |
Setting Up a Local LLM for Private Document Processing – Recommendations? | 2 | Hey!
I’ve got a client who needs a local AI setup to process sensitive documents that can't be exposed online. So, I'm planning to deploy a local LLM (Large Language Model) on a dedicated server within their internal network.
The budget is around $5,000 USD, so getting solid computing power and a decent GPU shouldn't... | 2025-05-29T13:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kyaw41/setting_up_a_local_llm_for_private_document/ | DSandleman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyaw41 | false | null | t3_1kyaw41 | /r/LocalLLaMA/comments/1kyaw41/setting_up_a_local_llm_for_private_document/ | false | false | self | 2 | null |
deepseek-ai/DeepSeek-R1-0528-Qwen3-8B · Hugging Face | 289 | 2025-05-29T13:24:05 | https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kyap9q | false | null | t3_1kyap9q | /r/LocalLLaMA/comments/1kyap9q/deepseekaideepseekr10528qwen38b_hugging_face/ | false | false | 289 | {'enabled': False, 'images': [{'id': 'R-1OzuRKOdpsZYZg4m_xP2EzGdAuJDcDlA7j2s3ED38', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8hRwXI0dhC0uoSc2zQ6TvHX1Aw9zshcTMnuDtSCd7AY.jpg?width=108&crop=smart&auto=webp&s=165d4cf0673ca50bddb247fff72e6822b06e2c6e', 'width': 108}, {'height': 116, 'url': 'h... | ||
Coresignal MCP: Test it with 1,000 free credits | 1 | [removed] | 2025-05-29T13:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kyaoir/coresignal_mcp_test_it_with_1000_free_credits/ | AdmirableBat3827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyaoir | false | null | t3_1kyaoir | /r/LocalLLaMA/comments/1kyaoir/coresignal_mcp_test_it_with_1000_free_credits/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kuDdK7_W5GTZN-ezUbE9RIrXyLhC3vvLkLS07xbGEaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6p2rQWnvhWKSsCerEPgOYfWJKL1c1TTX337jkPWO-LI.jpg?width=108&crop=smart&auto=webp&s=dc65aacc17290e6486aa963cda6254e33c8563d0', 'width': 108}, {'height': 108, 'url': 'h... |
DeepSeek-R1-0528 distill on Qwen3 8B | 151 | 2025-05-29T13:17:51 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kyakcp | false | null | t3_1kyakcp | /r/LocalLLaMA/comments/1kyakcp/deepseekr10528_distill_on_qwen3_8b/ | false | false | 151 | {'enabled': True, 'images': [{'id': 'lPwlt9148s2wZuBYrrHPk7x396eULnEK2CRLdF8d6-c', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/nrkr44ek1q3f1.jpeg?width=108&crop=smart&auto=webp&s=638df7c2c7e4d93291a44abbc75d2cf1ee37fd26', 'width': 108}, {'height': 203, 'url': 'https://preview.redd.it/nrkr44ek1q3f1.j... | |||
Speed-up VLLM server boot | 1 | [removed] | 2025-05-29T13:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kyak6q/speedup_vllm_server_boot/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyak6q | false | null | t3_1kyak6q | /r/LocalLLaMA/comments/1kyak6q/speedup_vllm_server_boot/ | false | false | self | 1 | null |
New DeepSeek R1 8B Distill that's "matching the performance of Qwen3-235B-thinking" may be incoming! | 307 | DeepSeek-R1-0528-Qwen3-8B incoming? Oh yeah, gimme that, thank you! 😂 | 2025-05-29T13:07:22 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kyac9f | false | null | t3_1kyac9f | /r/LocalLLaMA/comments/1kyac9f/new_deepseek_r1_8b_distill_thats_matching_the/ | false | false | 307 | {'enabled': True, 'images': [{'id': 'LpXiT4GLQ3DeHpOvOTUlsOTrsEELtw0WIGZtQjOfPLI', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/8vwdjpcxyp3f1.png?width=108&crop=smart&auto=webp&s=2cc095ea99d80b76e9d0148cdd9f44b25fca4cd2', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/8vwdjpcxyp3f1.pn... | ||
DeepSeek-R1-0528 Official Benchmark | 371 | Source:https://mp.weixin.qq.com/s/U5fnTRW4cGvXYJER__YBiw | 2025-05-29T13:02:45 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kya8kq | false | null | t3_1kya8kq | /r/LocalLLaMA/comments/1kya8kq/deepseekr10528_official_benchmark/ | false | false | 371 | {'enabled': True, 'images': [{'id': 'X84gp9VhqYpUYVIl--oeHETjZ58lDPymOMOWpmRdbnE', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png?width=108&crop=smart&auto=webp&s=61508f70e6c9bb6ea9982ce4eb6821c431beb0dc', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png... | ||
Deepseek R1.1 dominates gemini 2.5 flash on price vs performance | 166 | 2025-05-29T12:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kya3c2/deepseek_r11_dominates_gemini_25_flash_on_price/ | ihexx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kya3c2 | false | null | t3_1kya3c2 | /r/LocalLLaMA/comments/1kya3c2/deepseek_r11_dominates_gemini_25_flash_on_price/ | false | false | 166 | null | ||
Smallest & best OCR model that can read math & code? | 3 | It seems like Math & OCR is hard for models.
I tried Google's Gemma models 2b, 7b, 27b (my LMStudio has Gemma 3 4B Instruct QAT) but it always makes some mistake. Either it doesn't read stuff fully or make mistakes. For example, a particular section had 4 listicles but it only read 2 of them.
Another one was Qwen-2.5... | 2025-05-29T12:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ky9q2a/smallest_best_ocr_model_that_can_read_math_code/ | deadcoder0904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky9q2a | false | null | t3_1ky9q2a | /r/LocalLLaMA/comments/1ky9q2a/smallest_best_ocr_model_that_can_read_math_code/ | false | false | self | 3 | null |
First version of Elicitation to the MCP draft specification. | 8 | 2025-05-29T12:27:17 | https://modelcontextprotocol.io/specification/draft/client/elicitation | Jordi_Mon_Companys | modelcontextprotocol.io | 1970-01-01T00:00:00 | 0 | {} | 1ky9i0z | false | null | t3_1ky9i0z | /r/LocalLLaMA/comments/1ky9i0z/first_version_of_elicitation_to_the_mcp_draft/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/t2kWo6n-mwJ5Mgcx-hTyiDZTY_h1nO4UdL9Mkg7KoWc.jpg?width=108&crop=smart&auto=webp&s=7a834690bf5b504383c894e57e513dfb8c93ea61', 'width': 108}, {'height': 121, 'url': 'h... | ||
🔍 DeepSeek-R1-0528: Open-Source Reasoning Model Catching Up to O3 & Gemini? | 30 |
DeepSeek just released an updated version of its reasoning model: **DeepSeek-R1-0528**, and it's getting *very* close to the top proprietary models like OpenAI's O3 and Google’s Gemini 2.5 Pro—while remaining completely open-source.
https://preview.redd.it/bw6qw038rp3f1.png?width=3961&format=png&auto=webp&s=4399... | 2025-05-29T12:20:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ky9dbd/deepseekr10528_opensource_reasoning_model/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky9dbd | false | null | t3_1ky9dbd | /r/LocalLLaMA/comments/1ky9dbd/deepseekr10528_opensource_reasoning_model/ | false | false | 30 | null | |
[OC] Clean MCP server/client setup for backend apps — no more Stdio + IDE lock-in | 1 | [removed] | 2025-05-29T12:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ky9bej/oc_clean_mcp_serverclient_setup_for_backend_apps/ | s1lv3rj1nx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky9bej | false | null | t3_1ky9bej | /r/LocalLLaMA/comments/1ky9bej/oc_clean_mcp_serverclient_setup_for_backend_apps/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gt8YBQhLJDO9bS_Ufhd6THIbFdPaILwQd6v-W4W06rg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H8e7Ya3-2Z681FbhytG9FFBoDai-9oPoZbS3U8Zj7JE.jpg?width=108&crop=smart&auto=webp&s=87476195a7beac59ac6b8392511baa3f2a3bbd17', 'width': 108}, {'height': 108, 'url': 'h... |
https://github.com/adeelahmad/mlx-grpo | 1 | 2025-05-29T12:08:54 | https://github.com/adeelahmad/mlx-grpo | adeelahmadch | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ky958t | false | null | t3_1ky958t | /r/LocalLLaMA/comments/1ky958t/httpsgithubcomadeelahmadmlxgrpo/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0V-DlH4S8Lpw-5ak6RkOhkC6HAYu3E-wUk_YyWjyugM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4bujr2a9Rx-_7aFLAdHGHINXunCPwtFr2Yoq_Rriz8Q.jpg?width=108&crop=smart&auto=webp&s=f8fc1cb3dcdf3ab15fe255a9086398c836f34441', 'width': 108}, {'height': 108, 'url': 'h... | ||
Anyone heard about DeepSeek-R1-0528-Qwen3-8B? | 1 | [removed] | 2025-05-29T12:03:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ky91at/anyone_heard_about_deepseekr10528qwen38b/ | ApprehensiveRoof2722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky91at | false | null | t3_1ky91at | /r/LocalLLaMA/comments/1ky91at/anyone_heard_about_deepseekr10528qwen38b/ | false | false | self | 1 | null |
DeepSeek-R1-0528 Official Benchmarks Released!!! | 707 | 2025-05-29T11:55:06 | https://huggingface.co/deepseek-ai/DeepSeek-R1-0528 | Xhehab_ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ky8vlm | false | null | t3_1ky8vlm | /r/LocalLLaMA/comments/1ky8vlm/deepseekr10528_official_benchmarks_released/ | false | false | 707 | {'enabled': False, 'images': [{'id': 'vAUxpVLie1Mqj4dWMCPpSgS4JDBz82acZHywzpoHzeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=108&crop=smart&auto=webp&s=9b162e58d60efac60b6dde3b475e84496c0c1868', 'width': 108}, {'height': 116, 'url': 'h... | ||
deepseek r1 0528 Anti-fitting logic test | 6 | api
[https://llm-benchmark.github.io/](https://llm-benchmark.github.io/)
For some reason, 60% of the questions cannot be returned because of too long thinking chains (always wrong)
The score went from 0/16 to 1/16, which also made R1 overtake Gemini
I got one question right, and the wrong questions were more ri... | 2025-05-29T11:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ky8rsu/deepseek_r1_0528_antifitting_logic_test/ | flysnowbigbig | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky8rsu | false | null | t3_1ky8rsu | /r/LocalLLaMA/comments/1ky8rsu/deepseek_r1_0528_antifitting_logic_test/ | false | false | self | 6 | null |
Dual 4090 build for brand compliance analysis - worth it or waste? | 0 | [removed] | 2025-05-29T11:29:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ky8ei2/dual_4090_build_for_brand_compliance_analysis/ | RiseNecessary6351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky8ei2 | false | null | t3_1ky8ei2 | /r/LocalLLaMA/comments/1ky8ei2/dual_4090_build_for_brand_compliance_analysis/ | false | false | self | 0 | null |
SWE-rebench: Over 21,000 Open Tasks for SWE LLMs | 36 | Hi! We just released SWE-rebench – an extended and improved version of our previous dataset with GitHub issue-solving tasks.
One common limitation in such datasets is that they usually don’t have many tasks, and they come from only a small number of repositories. For example, in the original SWE-bench there are 2,000+... | 2025-05-29T11:25:45 | https://huggingface.co/datasets/nebius/SWE-rebench | Fabulous_Pollution10 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ky8cby | false | null | t3_1ky8cby | /r/LocalLLaMA/comments/1ky8cby/swerebench_over_21000_open_tasks_for_swe_llms/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'B6v5sBICdVUHLpjO8vQN_BhBGMqJhDRiXG6BpX0jgxk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Wlax0ZVaKn93EpAcdC0XPmjtaNO0SAghfCC94lLHq10.jpg?width=108&crop=smart&auto=webp&s=2318cc187aed29555ee8f4e95b18cbc44d177f9f', 'width': 108}, {'height': 116, 'url': 'h... | |
Speed up VLLM server boot time | 1 | [removed] | 2025-05-29T11:25:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ky8bux/speed_up_vllm_server_boot_time/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky8bux | false | null | t3_1ky8bux | /r/LocalLLaMA/comments/1ky8bux/speed_up_vllm_server_boot_time/ | false | false | self | 1 | null |
Another benchmark result is in for Deepseek r1.1: big jump in nyt word connections | 64 | 2025-05-29T11:13:10 | _Nils- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky847t | false | null | t3_1ky847t | /r/LocalLLaMA/comments/1ky847t/another_benchmark_result_is_in_for_deepseek_r11/ | false | false | 64 | {'enabled': True, 'images': [{'id': 'szHsUpIn9Tm2_gOyHwY_qJSgi1EJEfTGdAIZ3tUh9VU', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png?width=108&crop=smart&auto=webp&s=4f497a0a12314ba489d84e80c63694be1dd47202', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png... | |||
What is the best cheap GPU for speculative decoding? | 2 | Here's a question that doesn't get asked very often (and the answer isn't "get a 3090").
What is the best cheap GPU for speculative decoding? My main GPU is a 3090.
My goal is to have this 2nd GPU running Qwen 3 0.6b or Qwen 3 1.7b. Or Gemma 3 4b. It may also be running whisper or a similar speech-to-text model at ... | 2025-05-29T11:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ky7ycc/what_is_the_best_cheap_gpu_for_speculative/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky7ycc | false | null | t3_1ky7ycc | /r/LocalLLaMA/comments/1ky7ycc/what_is_the_best_cheap_gpu_for_speculative/ | false | false | self | 2 | null |
PromptCoT-Mamba-7B | 1 | [removed] | 2025-05-29T10:53:55 | https://www.reddit.com/gallery/1ky7s5r | Efficient-Owl9751 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ky7s5r | false | null | t3_1ky7s5r | /r/LocalLLaMA/comments/1ky7s5r/promptcotmamba7b/ | false | false | 1 | null | |
PromptCoT-Mamba-7B | 1 | [removed] | 2025-05-29T10:46:46 | https://www.reddit.com/gallery/1ky7nzz | Efficient-Owl9751 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ky7nzz | false | null | t3_1ky7nzz | /r/LocalLLaMA/comments/1ky7nzz/promptcotmamba7b/ | false | false | 1 | null | |
DGX spark/station | 1 | [removed] | 2025-05-29T10:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ky7ju2/dgx_sparkstation/ | AvailableSlice6854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky7ju2 | false | null | t3_1ky7ju2 | /r/LocalLLaMA/comments/1ky7ju2/dgx_sparkstation/ | false | false | self | 1 | null |
2x Instinct MI50 32G running vLLM results | 22 | I picked up these two AMD Instinct MI50 32G cards from a second-hand trading platform in China. Each card cost me 780 CNY, plus an additional 30 CNY for shipping. I also grabbed two cooling fans to go with them, each costing 40 CNY. In total, I spent 1730 CNY, which is approximately 230 USD.
Even though it’s a second-... | 2025-05-29T10:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ky7diy/2x_instinct_mi50_32g_running_vllm_results/ | NaLanZeYu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky7diy | false | null | t3_1ky7diy | /r/LocalLLaMA/comments/1ky7diy/2x_instinct_mi50_32g_running_vllm_results/ | false | false | self | 22 | null |
Speed up VLLM server boot | 1 | [removed] | 2025-05-29T09:48:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6qm0/speed_up_vllm_server_boot/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6qm0 | false | null | t3_1ky6qm0 | /r/LocalLLaMA/comments/1ky6qm0/speed_up_vllm_server_boot/ | false | false | self | 1 | null |
What model to run. | 0 | Hello does anyone have some tips for what model to run on a 5070 ti for making a llm thats gonna function as a ai agent with own documents that is being fed as data | 2025-05-29T09:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6qfc/what_model_to_run/ | Material-Score-8128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6qfc | false | null | t3_1ky6qfc | /r/LocalLLaMA/comments/1ky6qfc/what_model_to_run/ | false | false | self | 0 | null |
MNN is quite something, Qwen3-32B on a OnePlus 13 24GB | 97 | In the settings for the model mmap needs to be enabled for this to not crash. It's not that fast, but works. | 2025-05-29T09:32:44 | VickWildman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky6hxy | false | null | t3_1ky6hxy | /r/LocalLLaMA/comments/1ky6hxy/mnn_is_quite_something_qwen332b_on_a_oneplus_13/ | false | false | 97 | {'enabled': True, 'images': [{'id': 'PtQB33Svan8LfgmXbBs90S2Rjmj7LtVQwALE5U4Qf7o', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/432wqex5vo3f1.jpeg?width=108&crop=smart&auto=webp&s=d1964064b0ace02c6708060994168e15b8169c67', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/432wqex5vo3f1.j... | ||
Speed up VLLM boot time | 1 | [removed] | 2025-05-29T09:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6ecs/speed_up_vllm_boot_time/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6ecs | false | null | t3_1ky6ecs | /r/LocalLLaMA/comments/1ky6ecs/speed_up_vllm_boot_time/ | false | false | self | 1 | null |
Built an ADK Agent that finds Jobs based on your Resume | 7 | I recently built an AI Agent to do job search using Google's new ADK framework, which requires us to upload resume and it takes care of all things by itself.
At first, I was looking to use Qwen vision llm to read resume but decided to use Mistral OCR instead. It was a right choice for sure, Mistral OCR is perfect for ... | 2025-05-29T09:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6c8z/built_an_adk_agent_that_finds_jobs_based_on_your/ | codes_astro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6c8z | false | null | t3_1ky6c8z | /r/LocalLLaMA/comments/1ky6c8z/built_an_adk_agent_that_finds_jobs_based_on_your/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'pnpR0apJjmbpIKgry27yZ1hp2afUqHiEyHaSslIgZwc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eUmM4MX0Ma_ybofm-r41QuiGdrivELQ2wditrM45yqM.jpg?width=108&crop=smart&auto=webp&s=8eec53ba80ea572e4558c1c0f818333467759f20', 'width': 108}, {'height': 121, 'url': 'h... |
Speed-up VLLM boot time | 1 | [removed] | 2025-05-29T09:21:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6c7v/speedup_vllm_boot_time/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6c7v | false | null | t3_1ky6c7v | /r/LocalLLaMA/comments/1ky6c7v/speedup_vllm_boot_time/ | false | false | self | 1 | null |
LORA Continuos pre-training on 7B Instruct Model | 1 | [removed] | 2025-05-29T09:21:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6bv2/lora_continuos_pretraining_on_7b_instruct_model/ | Fun-Industry-1485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6bv2 | false | null | t3_1ky6bv2 | /r/LocalLLaMA/comments/1ky6bv2/lora_continuos_pretraining_on_7b_instruct_model/ | false | false | self | 1 | null |
How to quantize Vision models for Ollama/GGUF. | 1 | I need to quantize a fine-tuned Gemma 3 model that supports images. Usually I quantize with Ollama, but it doesn't know to ignore the "Vision Tower" and fails.
vLLM has a recipe to do this correctly, but the resulting model uses I4, I8 etc, that Ollama cannot handle.
I'd rather stay with Ollama because my ap... | 2025-05-29T09:14:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ky68do/how_to_quantize_vision_models_for_ollamagguf/ | Hughesbay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky68do | false | null | t3_1ky68do | /r/LocalLLaMA/comments/1ky68do/how_to_quantize_vision_models_for_ollamagguf/ | false | false | self | 1 | null |
What are Feasible & Interesting LLM Thesis Topics | 1 | [removed] | 2025-05-29T09:06:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ky64az/what_are_feasible_interesting_llm_thesis_topics/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky64az | false | null | t3_1ky64az | /r/LocalLLaMA/comments/1ky64az/what_are_feasible_interesting_llm_thesis_topics/ | false | false | self | 1 | null |
How to Actually Run a Large Language Model (LLM) from a Portable SSD? Is it Feasible? | 1 | [removed] | 2025-05-29T09:05:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ky63p3/how_to_actually_run_a_large_language_model_llm/ | Own-Objective-7818 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky63p3 | false | null | t3_1ky63p3 | /r/LocalLLaMA/comments/1ky63p3/how_to_actually_run_a_large_language_model_llm/ | false | false | self | 1 | null |
Seeking Academic LLM Project Ideas for University Thesis | 1 | [removed] | 2025-05-29T09:01:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ky61ce/seeking_academic_llm_project_ideas_for_university/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky61ce | false | null | t3_1ky61ce | /r/LocalLLaMA/comments/1ky61ce/seeking_academic_llm_project_ideas_for_university/ | false | false | self | 1 | null |
Seeking Academic LLM Project Ideas for University Thesis | 1 | [removed] | 2025-05-29T09:01:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ky61b8/seeking_academic_llm_project_ideas_for_university/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky61b8 | false | null | t3_1ky61b8 | /r/LocalLLaMA/comments/1ky61b8/seeking_academic_llm_project_ideas_for_university/ | false | false | self | 1 | null |
Seeking Academic LLM Project Ideas for University Thesis | 1 | [removed] | 2025-05-29T08:56:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ky5yno/seeking_academic_llm_project_ideas_for_university/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky5yno | false | null | t3_1ky5yno | /r/LocalLLaMA/comments/1ky5yno/seeking_academic_llm_project_ideas_for_university/ | false | false | self | 1 | null |
What's wrong with llama 3.2:1b? | 1 | [removed] | 2025-05-29T08:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ky5lfv/whats_wrong_with_llama_321b/ | n0man_ch0wdhury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky5lfv | false | null | t3_1ky5lfv | /r/LocalLLaMA/comments/1ky5lfv/whats_wrong_with_llama_321b/ | false | false | self | 1 | null |
AnythingLLM RAG with Gemma 3:12b & BGE-m3-F16: LM Studio vs. Ollama Embedding Discrepancies - Same GGUF, Different Results? | 1 | [removed] | 2025-05-29T08:10:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ky5aj1/anythingllm_rag_with_gemma_312b_bgem3f16_lm/ | Ok_Bug4999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky5aj1 | false | null | t3_1ky5aj1 | /r/LocalLLaMA/comments/1ky5aj1/anythingllm_rag_with_gemma_312b_bgem3f16_lm/ | false | false | self | 1 | null |
using LLMs for trigger warnings for auditory/visual sensitivities? | 0 | So, as a neurodivergent who has severe auditory and visual sensitivities to certain stimuli, I wonder what the best local audio/vision models are for trigger warnings? does this exist?
I have been struggling to watch movies, play most story-driven games and listen to most music for more than a decade due to my issues ... | 2025-05-29T08:07:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ky5962/using_llms_for_trigger_warnings_for/ | Neggy5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky5962 | false | null | t3_1ky5962 | /r/LocalLLaMA/comments/1ky5962/using_llms_for_trigger_warnings_for/ | false | false | self | 0 | null |
Best models and/or workflows for visual/auditory trigger warnings? preferably via youtube urls? | 1 | [removed] | 2025-05-29T08:06:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ky58hq/best_models_andor_workflows_for_visualauditory/ | Neggy5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky58hq | false | null | t3_1ky58hq | /r/LocalLLaMA/comments/1ky58hq/best_models_andor_workflows_for_visualauditory/ | false | false | self | 1 | null |
Olmo 32b vs deepseek r1 | 1 | [removed] | 2025-05-29T08:02:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ky56h3/olmo_32b_vs_deepseek_r1/ | perfectcrop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky56h3 | false | null | t3_1ky56h3 | /r/LocalLLaMA/comments/1ky56h3/olmo_32b_vs_deepseek_r1/ | false | false | self | 1 | null |
PLEASE LEARN BASIC CYBERSECURITY | 810 | Stumbled across a project doing about $30k a month with their OpenAI API key exposed in the frontend.
Public key, no restrictions, fully usable by anyone.
At that volume someone could easily burn through thousands before it even shows up on a billing alert.
This kind of stuff doesn’t happen because people are carele... | 2025-05-29T07:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ky54kq/please_learn_basic_cybersecurity/ | eastwindtoday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky54kq | false | null | t3_1ky54kq | /r/LocalLLaMA/comments/1ky54kq/please_learn_basic_cybersecurity/ | false | false | self | 810 | null |
Dual 4090 build for brand compliance analysis - worth it or waste? | 1 | [removed] | 2025-05-29T07:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ky4xwm/dual_4090_build_for_brand_compliance_analysis/ | RiseNecessary6351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky4xwm | false | null | t3_1ky4xwm | /r/LocalLLaMA/comments/1ky4xwm/dual_4090_build_for_brand_compliance_analysis/ | false | false | self | 1 | null |
Dual 4090 build for brand compliance analysis - worth it or waste? | 1 | [removed] | 2025-05-29T07:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ky4tdp/dual_4090_build_for_brand_compliance_analysis/ | Majestic_Reason7903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky4tdp | false | null | t3_1ky4tdp | /r/LocalLLaMA/comments/1ky4tdp/dual_4090_build_for_brand_compliance_analysis/ | false | false | self | 1 | null |
Offline locally run character.ai alternative using Qwen3 with COQUI XTTS-v2 | 1 | I chose Qwen3 as the LLM that would run this project and also Coqui XTTS-v2 for the voice cloning software.
Yes, think is off and it has a custom prompt for individual chat and different system prompt handling for the group chat feature to handle multiple characters.
The voice cloning is slow, since it is CPU bound ... | 2025-05-29T07:35:26 | https://v.redd.it/waqygsrgco3f1 | xhakux99 | /r/LocalLLaMA/comments/1ky4s23/offline_locally_run_characterai_alternative_using/ | 1970-01-01T00:00:00 | 0 | {} | 1ky4s23 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/waqygsrgco3f1/DASHPlaylist.mpd?a=1751225731%2CYTllYTRiMmZhOTJjYTgwYjI3NDUwNmM5NDVjZTUyZWE0YmU3ZjA2ZDUyOGI3MDYwZWMwOGUwYjI0ZjJlMjMzMw%3D%3D&v=1&f=sd', 'duration': 143, 'fallback_url': 'https://v.redd.it/waqygsrgco3f1/DASH_720.mp4?source=fallback', 'h... | t3_1ky4s23 | /r/LocalLLaMA/comments/1ky4s23/offline_locally_run_characterai_alternative_using/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z3pnamVudmdjbzNmMXDfCSO82U-xf9wuOmz3vUA-7HeIpYI79t_Qdh9BEV67', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Z3pnamVudmdjbzNmMXDfCSO82U-xf9wuOmz3vUA-7HeIpYI79t_Qdh9BEV67.png?width=108&crop=smart&format=pjpg&auto=webp&s=9adac9276fff940d71dc3202263d94a734f3... | |
If you have plan to make new TTS/ASR consider other languages or low resource ones, it's always English, Chinese & some other popular languages it's always trained on. | 14 | Every new releases of TTS or ASR are always either english or chinese. We have already lots of SOTA in these popular languages like spanish. If someone is planning to build new systems, consider other languages with no presence. Also there are lots of low resource (LR) languages are there to consider.
We need to make t... | 2025-05-29T07:28:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ky4oia/if_you_have_plan_to_make_new_ttsasr_consider/ | Trysem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky4oia | false | null | t3_1ky4oia | /r/LocalLLaMA/comments/1ky4oia/if_you_have_plan_to_make_new_ttsasr_consider/ | false | false | self | 14 | null |
Generate academic posters effortlessly with this open-source tool | 1 | [removed] | 2025-05-29T07:19:46 | Fluffy_Sheepherder76 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky4jko | false | null | t3_1ky4jko | /r/LocalLLaMA/comments/1ky4jko/generate_academic_posters_effortlessly_with_this/ | false | false | 1 | {'enabled': True, 'images': [{'id': '95h-l_7xXQ_2iHL5lkyWZCoG4tGTmPj5bL2Fq0zwKqY', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.jpeg?width=108&crop=smart&auto=webp&s=ea35fb7690f545fb354997427e3f8974b271fe0b', 'width': 108}, {'height': 206, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.j... | ||
Using Llama-indexing with deployed LLM | 1 | [removed] | 2025-05-29T07:11:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ky4f6o/using_llamaindexing_with_deployed_llm/ | martianx23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky4f6o | false | null | t3_1ky4f6o | /r/LocalLLaMA/comments/1ky4f6o/using_llamaindexing_with_deployed_llm/ | false | false | self | 1 | null |
Built an uncensored AI that maintains intelligence - tbio.ai beta feedback wanted | 1 | [removed] | 2025-05-29T06:59:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ky48lr/built_an_uncensored_ai_that_maintains/ | Apple12Pi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky48lr | false | null | t3_1ky48lr | /r/LocalLLaMA/comments/1ky48lr/built_an_uncensored_ai_that_maintains/ | false | false | self | 1 | null |
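The rows above follow the column schema listed in the header (title, score, selftext, created, author, id, ups, …). As a minimal sketch — assuming the dump is exported to a tabular format — the records could be loaded and queried with pandas like so (the two sample records are copied from the rows above; everything else in the snippet is illustrative, not part of the dump):

```python
import pandas as pd

# Two sample records copied from the table, keeping only a subset of columns.
rows = [
    {"id": "1ky8vlm", "title": "DeepSeek-R1-0528 Official Benchmarks Released!!!",
     "score": 707, "author": "Xhehab_", "created": "2025-05-29T11:55:06"},
    {"id": "1ky54kq", "title": "PLEASE LEARN BASIC CYBERSECURITY",
     "score": 810, "author": "eastwindtoday", "created": "2025-05-29T07:59:20"},
]

df = pd.DataFrame(rows)
# The schema declares 'created' as timestamp[ns]; parse the strings accordingly.
df["created"] = pd.to_datetime(df["created"])

# Example query: posts with score >= 500, newest first.
top = df[df["score"] >= 500].sort_values("created", ascending=False)
print(top[["id", "score", "title"]].to_string(index=False))
```

The same filter-and-sort pattern extends to the full dump once all columns (selftext, permalink, ups, …) are present.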