Column schema (types and ranges as reported by the dataset viewer): title (string, 1 to 300 chars), score (int64, 0 to 8.54k), selftext (string, 0 to 41.5k chars), created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable), url (string, 0 to 878 chars), author (string, 3 to 20 chars), domain (string, 0 to 82 chars), edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53), gilded (int64, 0 to 2), gildings (string, 7 classes), id (string, 7 chars), locked (bool, 2 classes), media (string, 646 to 1.8k chars, nullable), name (string, 10 chars), permalink (string, 33 to 82 chars), spoiler (bool, 2 classes), stickied (bool, 2 classes), thumbnail (string, 4 to 213 chars, nullable), ups (int64, 0 to 8.54k), preview (string, 301 to 5.01k chars, nullable).

| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How do I make LM Studio use the default parameters from the GGUF | 5 | I'm still quite new to the local llm space. When I look at the huggingface page of a model, there is a generation\_config.json file. This has the parameters that are loaded default onto the model, which I assume offer the best performance found by the creator.
When I download a GGUF on LM Studio I have a "Preset" load... | 2025-06-25T13:54:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lk6axd/how_do_i_make_lm_studio_use_the_default/ | Su1tz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk6axd | false | null | t3_1lk6axd | /r/LocalLLaMA/comments/1lk6axd/how_do_i_make_lm_studio_use_the_default/ | false | false | self | 5 | null |
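The question above comes up often, and the low-tech answer is that a model repo's `generation_config.json` lists the creator's recommended sampling settings, which can be copied into an LM Studio preset by hand. A minimal sketch of pulling out the relevant keys — the field names follow the Hugging Face convention, but the values below are illustrative, not from any particular model:

```python
import json

# Example generation_config.json contents, as typically published on a
# model's Hugging Face repo (values here are illustrative placeholders).
raw = """
{
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20,
  "repetition_penalty": 1.05
}
"""

config = json.loads(raw)

# Keep only the keys LM Studio exposes as sampling settings.
sampling_keys = ("temperature", "top_p", "top_k", "repetition_penalty")
preset = {k: config[k] for k in sampling_keys if k in config}

for key, value in preset.items():
    print(f"{key}: {value}")
```

In practice you would download the real `generation_config.json` from the repo and enter these values into LM Studio's preset editor; not every GGUF embeds them, so the original repo is the reliable source.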
Gemini CLI: your open-source AI agent | 118 | Free license gets you access to Gemini 2.5 Pro and its massive 1 million token context window. To ensure you rarely, if ever, hit a limit during this preview, we offer the industry’s largest allowance: 60 model requests per minute and 1,000 requests per day at no charge. | 2025-06-25T13:45:52 | https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/ | touhidul002 | blog.google | 1970-01-01T00:00:00 | 0 | {} | 1lk63od | false | null | t3_1lk63od | /r/LocalLLaMA/comments/1lk63od/gemini_cli_your_opensource_ai_agent/ | false | false | default | 118 | {'enabled': False, 'images': [{'id': 'v_nU-59VjAFg3tUf3ktH0OR1eDLLCpt7sTIO-4lpiic', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/v_nU-59VjAFg3tUf3ktH0OR1eDLLCpt7sTIO-4lpiic.png?width=108&crop=smart&auto=webp&s=b8e739b515523fc4b279dd605822228fe9c1b445', 'width': 108}, {'height': 121, 'url': 'h... |
Combining VRAM for Inference | 1 | Given that the new 5050 cards have the best VRAM:price ratio yet, is it feasible to combine six of them to get 48 GB of VRAM? What would the performance downsides be versus 2x 3090s?
Thank you! | 2025-06-25T13:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lk5u8b/combining_vram_for_inference/ | Th3OnlyN00b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk5u8b | false | null | t3_1lk5u8b | /r/LocalLLaMA/comments/1lk5u8b/combining_vram_for_inference/ | false | false | self | 1 | null |
The Jan.ai "team" used fake engagement to advertise their new 4B model, and deleted the post when called out | 312 | These are all of my interactions with the jan "team", followed by an instantly deleted angry comment, and the deletion of their entire announcement post without an explanation. Up to you how to interpret their response, but personally i feel i've seen enough just sorting the comment section by old and clicking a few ra... | 2025-06-25T13:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lk5u1o/the_janai_team_used_fake_engagement_to_advertise/ | Xandred_the_thicc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk5u1o | false | null | t3_1lk5u1o | /r/LocalLLaMA/comments/1lk5u1o/the_janai_team_used_fake_engagement_to_advertise/ | false | false | self | 312 | null |
Nvidia DGX Spark - what's the catch? | 3 | I currently train/finetune transformer models for audio (around 50M parameters) with my mighty 3090 and for finetuning it works great, while training from scratch is close to impossible due to it being slow and not having that much VRAM.
I found out about the DGX Spark and was looking at the Asus one for $3000 but can... | 2025-06-25T13:33:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lk5te5/nvidia_dgx_spark_whats_the_catch/ | lucellent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk5te5 | false | null | t3_1lk5te5 | /r/LocalLLaMA/comments/1lk5te5/nvidia_dgx_spark_whats_the_catch/ | false | false | self | 3 | null |
What are the best Speech-to-Text and Text-to-Speech models with multi lingual support? | 4 | I see a lot of SOTA models coming out, but only with English support.
What are the SOTA open source models for STT and TTS that have multilingual support ?
Is it still Whisper for speech recognition? Looking specifically por Brazilian Portuguese support to create voice agents. | 2025-06-25T13:31:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lk5rl9/what_are_the_best_speechtotext_and_texttospeech/ | alew3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk5rl9 | false | null | t3_1lk5rl9 | /r/LocalLLaMA/comments/1lk5rl9/what_are_the_best_speechtotext_and_texttospeech/ | false | false | self | 4 | null |
Still early, but building a system to help AI code with full project awareness. What would help you most? | 0 | I’ve been building a tool that started out as a personal attempt to improve AI performance in programming. Over the last few weeks it’s grown a lot, and I’m planning to release a free demo soon for others to try.
The goal is to address some of the common issues that still haven’t been properly solved, things like hall... | 2025-06-25T13:21:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lk5j69/still_early_but_building_a_system_to_help_ai_code/ | Budget_Map_3333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk5j69 | false | null | t3_1lk5j69 | /r/LocalLLaMA/comments/1lk5j69/still_early_but_building_a_system_to_help_ai_code/ | false | false | self | 0 | null |
Idea: Making AI Conversations Actually Feel Like Conversations | 0 |
## The Problem: AI Doesn’t Know How to Have a Conversation
Have you ever noticed how weird it feels to talk to AI with voice? Here’s what I mean:
**Me:** “Hey, can you help me write a Python script to download YouTube videos?”
**AI:** “I’d be happy to help you create a Python script for downloading YouTube videos.... | 2025-06-25T13:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lk554m/idea_making_ai_conversations_actually_feel_like/ | anonthatisopen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk554m | false | null | t3_1lk554m | /r/LocalLLaMA/comments/1lk554m/idea_making_ai_conversations_actually_feel_like/ | false | false | self | 0 | null |
Could anyone get UI-TARS Desktop running locally? | 9 | While using Ollama or LM Studios for [UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) inference. | 2025-06-25T12:53:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lk4vbo/could_anyone_get_uitars_desktop_running_locally/ | m_abdelfattah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk4vbo | false | null | t3_1lk4vbo | /r/LocalLLaMA/comments/1lk4vbo/could_anyone_get_uitars_desktop_running_locally/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'x33KX2bgp5Joih6m9uiM0Dko0mobsGA_tb-t1_rd6ds', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x33KX2bgp5Joih6m9uiM0Dko0mobsGA_tb-t1_rd6ds.png?width=108&crop=smart&auto=webp&s=278fde3609d6b9856bfcd356ccb78e22ea2c3443', 'width': 108}, {'height': 116, 'url': 'h... |
Recommend Tiny/Small Models for 8GB VRAM (32GB RAM) | 6 | As title says.
I can load up to 14B models in my laptop, but recent days I don't use 10+B models frequently due to slow t/s response & laptop making too much noise(Still laptop has bunch of 10+B models for use)
For example, I'm more happy with 4B with Q8, 6B with Q6/Q5 than 14B with Q4.
**My Use Cases** : Writing(B... | 2025-06-25T12:47:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lk4r3c/recommend_tinysmall_models_for_8gb_vram_32gb_ram/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk4r3c | false | null | t3_1lk4r3c | /r/LocalLLaMA/comments/1lk4r3c/recommend_tinysmall_models_for_8gb_vram_32gb_ram/ | false | false | self | 6 | null |
Hunyuan-A13B | 88 | [https://huggingface.co/tencent/Hunyuan-A13B-Instruct-FP8](https://huggingface.co/tencent/Hunyuan-A13B-Instruct-FP8)
I think the model should be a ~80B MoE. As 3072x4096x3x(64+1)*32 = 78.5B, and there are embedding layers and gating parts.
| 2025-06-25T12:12:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lk40ac/hunyuana13b/ | lly0571 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk40ac | false | null | t3_1lk40ac | /r/LocalLLaMA/comments/1lk40ac/hunyuana13b/ | false | false | self | 88 | {'enabled': False, 'images': [{'id': '4IGqEBI7O-XHR3exFIJFq-LJfiOCD60iVcge43c-UAU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4IGqEBI7O-XHR3exFIJFq-LJfiOCD60iVcge43c-UAU.png?width=108&crop=smart&auto=webp&s=0888f6b06f1ca70290ad27244aa3b95535883bf0', 'width': 108}, {'height': 116, 'url': 'h... |
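The back-of-the-envelope estimate in the post checks out. Using the figures the poster assumes — hidden size 3072, expert intermediate size 4096, three projection matrices per expert FFN, 64 routed plus 1 shared expert, 32 layers — the expert weights alone come to about 78.5B parameters:

```python
hidden = 3072        # hidden size assumed in the post
intermediate = 4096  # expert FFN intermediate size
ffn_mats = 3         # gate, up, and down projections per expert
experts = 64 + 1     # 64 routed experts + 1 shared expert
layers = 32

expert_params = hidden * intermediate * ffn_mats * experts * layers
print(f"{expert_params / 1e9:.1f}B")  # → 78.5B
```

Attention, embedding, and router weights come on top of this, which is why the total lands near the ~80B the post suggests.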
$10k budget | 3 | I'm leaning towards an Apple Studio just because it would be so easy: great power efficiency, small profile, etc.
Goals: Running tool LLMs to replace my use of Gemini 2.5 Pro and Claude 3.7 Sonnet in Cline.
Token / sec on ~40-50gb models is what's most important...
I think the tokens/s output of 2x 5090s would likel... | 2025-06-25T11:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lk3jp7/10k_budget/ | chisleu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk3jp7 | false | null | t3_1lk3jp7 | /r/LocalLLaMA/comments/1lk3jp7/10k_budget/ | false | false | self | 3 | null |
Why does my gemma model always return nearly the same words with default temperature? | 0 | I have a prompt where i want it to do a specific thing (announce an event in the evening).
When I run the prompt multiple times (always a new context through the API), it always returns mostly the same response.
I've used ollama to download gemma3:12b and use the default settings. temperature default setting is 0.8 look... | 2025-06-25T11:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lk3394/why_does_my_gemma_model_always_return_nearly_the/ | choise_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk3394 | false | null | t3_1lk3394 | /r/LocalLLaMA/comments/1lk3394/why_does_my_gemma_model_always_return_nearly_the/ | false | false | self | 0 | null |
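Near-identical outputs at default settings are normal for instruction-tuned models on short, specific prompts — the distribution is just very peaked. To get more variety, raise `temperature` (and optionally `top_p`); to go the other way and make runs exactly reproducible, set a fixed `seed`. A sketch of the request body for Ollama's `/api/generate` endpoint, built with only the standard library (parameter names follow Ollama's documented `options` object; the actual network call is left commented out):

```python
import json
import urllib.request

payload = {
    "model": "gemma3:12b",
    "prompt": "Announce tonight's event in one short paragraph.",
    "stream": False,
    "options": {
        "temperature": 1.2,  # higher than the 0.8 default -> more varied wording
        "top_p": 0.95,
        # "seed": 42,        # uncomment instead to make outputs reproducible
    },
}

body = json.dumps(payload).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=body,
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # requires a running Ollama server
print(json.loads(body)["options"]["temperature"])
```

If the outputs are still too samey at high temperature, the prompt itself may be over-constraining the answer.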
What are the best 70b tier models/finetunes? (That fit into 48gb these days) | 28 | It's been a while since llama 3.3 came out.
Are there any real improvements in the 70b area? That size is interesting since it can fit into 48gb very well when quantized.
Anything that beats Qwen 3 32b?
From what I can tell, the Qwen 3 models are cutting edge for general purpose use running locally, with Gemma 3 ... | 2025-06-25T11:08:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lk2swu/what_are_the_best_70b_tier_modelsfinetunes_that/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk2swu | false | null | t3_1lk2swu | /r/LocalLLaMA/comments/1lk2swu/what_are_the_best_70b_tier_modelsfinetunes_that/ | false | false | self | 28 | null |
Budget VPS as a viable off-ramp for unsustainable Google Cloud bills? | 6 | Our team is running a custom model on Google Cloud with a Vercel frontend. While we're seeing user growth, the GCP bill—driven by compute and data egress fees—is scaling much faster than our revenue. The cost has quickly become unsustainable.
We're now considering moving the AI backend to a budget VPS or bare-metal pr... | 2025-06-25T11:04:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lk2pza/budget_vps_as_a_viable_offramp_for_unsustainable/ | reclusebird | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk2pza | false | null | t3_1lk2pza | /r/LocalLLaMA/comments/1lk2pza/budget_vps_as_a_viable_offramp_for_unsustainable/ | false | false | self | 6 | null |
Has anyone tried layering prompts with logic stabilizers to reduce LLM hallucinations? | 2 | So I’ve been playing with a lightweight prompt logic framework someone dropped on GitHub it’s called WFGY. It doesn’t retrain the model or use external search, but instead wraps prompts with internal checks like: “Does this contradict earlier input?” or “Is this reasoning stable?”
It kind of acts like a soft reasonin... | 2025-06-25T11:01:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lk2o42/has_anyone_tried_layering_prompts_with_logic/ | OkRooster4056 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk2o42 | false | null | t3_1lk2o42 | /r/LocalLLaMA/comments/1lk2o42/has_anyone_tried_layering_prompts_with_logic/ | false | false | self | 2 | null |
Shared KV cache | 4 | I need some advice on a little unconventional idea of mine.
I want to create a "thinking agents", a fake RAG of sorts, running simultaneously using the same input data. Let's say 2x Qwen3 8B/14B agents with a massive unquantized context.
Is there a way to have them use the same KV cache? Considering I want to reduce ... | 2025-06-25T10:57:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lk2lou/shared_kv_cache/ | kaisurniwurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk2lou | false | null | t3_1lk2lou | /r/LocalLLaMA/comments/1lk2lou/shared_kv_cache/ | false | false | self | 4 | null |
Knowledge Database Advise needed/ Local RAG for IT Asset Discovery - Best approach for varied data? | 3 | I want to build an RAG system for myself to get a better understanding of the different Softwares and Versions that my new company is running on the machines of our customers. The info I need is hidden in pdfs, saved emails, docs, csv, txt and excel files, stored in different folder structures... It's a real mess.
The... | 2025-06-25T10:54:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lk2jat/knowledge_database_advise_needed_local_rag_for_it/ | Rompe101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk2jat | false | null | t3_1lk2jat | /r/LocalLLaMA/comments/1lk2jat/knowledge_database_advise_needed_local_rag_for_it/ | false | false | self | 3 | null |
Llama vs ChatGPT when it comes to politics | 0 | 2025-06-25T10:36:34 | https://www.reddit.com/gallery/1lk28ci | Currypott | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lk28ci | false | null | t3_1lk28ci | /r/LocalLLaMA/comments/1lk28ci/llama_vs_chatgpt_when_it_comes_to_politics/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'Lwo8r56MuKFZXnqK5h2UPN5EOihJRluha6VKb22NeNc', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Lwo8r56MuKFZXnqK5h2UPN5EOihJRluha6VKb22NeNc.jpeg?width=108&crop=smart&auto=webp&s=60c6db782445253a6f90ea8f9ae56164e72cfbf2', 'width': 108}, {'height': 432, 'url': '... | ||
Consumer Grade mobo for mutliple-GPU usage | 2 | I'm building a new pc for AI training. I do know about computers but not much about llms.
I'll buy an 5090 paired with 9950x3d and mobo i think to use is Proart x870e.
First,
Proart has 2 pcie 5.0 x16 and can run 2 gpu at x8/x8.
My question is will it be enough for training/working on llms, and slows the perform... | 2025-06-25T10:32:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lk25xy/consumer_grade_mobo_for_mutliplegpu_usage/ | lone_dream | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk25xy | false | null | t3_1lk25xy | /r/LocalLLaMA/comments/1lk25xy/consumer_grade_mobo_for_mutliplegpu_usage/ | false | false | self | 2 | null |
Newbie question MacMini | 1 | Hello from Germany,
Despite my advanced age, I would like to learn more about local LLMs. I know that this requires a relatively powerful computer.
Now I have been given a Mac mini 2018 (i5/32GB RAM/512 GB SSD) as a gift, just as I was in the process of configuring a Macmini M4.
My question to those in the know: i... | 2025-06-25T10:26:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lk222u/newbie_question_macmini/ | jotes2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk222u | false | null | t3_1lk222u | /r/LocalLLaMA/comments/1lk222u/newbie_question_macmini/ | false | false | self | 1 | null |
Running llama.cpp et al. on Strix Halo on Linux, anyone? | 6 | Hi!
A short time ago I bought a GMKtec EVO X2, which sports the Strix Halo CPU/GPU hardware. I bought it with 128 GB RAM and a 2 TB SSD.
So I thought, "This is the perfect system for a nice, private LLM machine, especially under Linux!"
In real life I had to overcome some obstacles (i.e. upgrading the EFI BIOS by one mi... | 2025-06-25T10:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lk20o4/running_llamapp_et_al_on_strix_halo_on_linux/ | Captain-Pie-62 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk20o4 | false | null | t3_1lk20o4 | /r/LocalLLaMA/comments/1lk20o4/running_llamapp_et_al_on_strix_halo_on_linux/ | false | false | self | 6 | null |
We built runtime API discovery for LLM agents using a simple agents.json | 1 | Current LLM tool use assumes compile-time bindings — every tool must be known in advance, added to the prompt, and hardcoded in.
We built [Invoke](https://invoke.network), a lightweight framework that lets agents discover and invoke APIs dynamically at runtime using a simple agents.json descriptor — no plugins, no sch... | 2025-06-25T10:20:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lk1ycx/we_built_runtime_api_discovery_for_llm_agents/ | persephone0100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk1ycx | false | null | t3_1lk1ycx | /r/LocalLLaMA/comments/1lk1ycx/we_built_runtime_api_discovery_for_llm_agents/ | false | false | self | 1 | null |
Small AI models for me it's... | 3 |
Small AI models are amazing 🤩 – the future is running them on your smartphone!
As they improve, we'll see instant, private, and affordable AI for everyone. The future is decentralized, lightweight, and in your pocket. What do you think about it? | 2025-06-25T10:17:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lk1was/small_ai_models_for_me_its/ | MykolaUA825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk1was | false | null | t3_1lk1was | /r/LocalLLaMA/comments/1lk1was/small_ai_models_for_me_its/ | false | false | self | 3 | null |
Unit Tests written by a local coder model loaded on a 5090, thinker condensing context on a 3090 | 1 | The agents are at it, the orchestrator plans and delegates the tasks, and the respective mode simply progresses with a few nudges here and there.
On a side note -
I feel loading/unloading models over to a 5090 is better than giving other models dedicated 3090s, since it'll be a constant time i.e. unloading (mayb... | 2025-06-25T10:08:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lk1rbk/unit_tests_written_by_a_local_coder_model_loaded/ | Emergency_Fuel_2988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk1rbk | false | null | t3_1lk1rbk | /r/LocalLLaMA/comments/1lk1rbk/unit_tests_written_by_a_local_coder_model_loaded/ | false | false | 1 | null | |
New Mistral Small 3.2 actually feels like something big. [non-reasoning] | 300 | 2025-06-25T09:26:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lk12th/new_mistral_small_32_actually_feels_like/ | Snail_Inference | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk12th | false | null | t3_1lk12th | /r/LocalLLaMA/comments/1lk12th/new_mistral_small_32_actually_feels_like/ | false | false | 300 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'h... | ||
It's a Chrome Extension for collecting Airbnb listing and market data, locally! | 1 | Posting here since this data is hard to get/expensive and this can be used to locally collect your market's airbnb listing & market data for XYZ purposes.
Everything else I've found is external, meaning not directly from or on airbnb. This gives incredible insights just by using the Airbnb website itself. You can't be... | 2025-06-25T09:05:38 | DRONE_SIC | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lk0rgk | false | null | t3_1lk0rgk | /r/LocalLLaMA/comments/1lk0rgk/its_a_chrome_extension_for_collecting_airbnb/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'REZL_YTLpPqbUMdJmgYY00Oo0crWpf15G3JVUQj9C8Y', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/uc7pxiw3g19f1.png?width=108&crop=smart&auto=webp&s=ae65c33521c57891bfd26ca5ce51c60f4091504b', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/uc7pxiw3g19f1.png... | ||
Mistral small 3.2 knows current date | 0 | Hello, I used LM Studio to load local models. I was just trying Mistral Small 3.2 and I asked "What date is today?".
Surprisingly (to me), it was able to give me a correct answer.
```
mistralai/mistral-small-3.2
Today's date is June 25, 2025.
```
I tried with my other models (Gemma 3 27b and Phi 4 reasoning pl... | 2025-06-25T08:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lk0fam/mistral_small_32_knows_current_date/ | hdoshekru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk0fam | false | null | t3_1lk0fam | /r/LocalLLaMA/comments/1lk0fam/mistral_small_32_knows_current_date/ | false | false | self | 0 | null |
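There is no mystery here: a model can only "know" the date if the runtime injects it. The likely explanation is that the model's recommended system prompt or chat template includes a current-date line (recent Mistral system prompts do), while the templates of the other models tested simply don't. The same effect can be reproduced manually for any model — a hedged sketch, with wording that is illustrative rather than Mistral's actual template text:

```python
from datetime import date

def build_system_prompt() -> str:
    # Inject the current date the way a chat template might.
    today = date.today().strftime("%Y-%m-%d")
    return f"You are a helpful assistant. Today's date is {today}."

prompt = build_system_prompt()
print(prompt)
```

Any model served with a system prompt like this will answer the date correctly; without it, the best a model can do is guess near its training cutoff.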
NVFP4: will this be the graal for quantization? | 1 | https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/ | 2025-06-25T08:38:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lk0d10/nvfp4_will_this_be_the_graal_for_quantization/ | Green-Ad-3964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk0d10 | false | null | t3_1lk0d10 | /r/LocalLLaMA/comments/1lk0d10/nvfp4_will_this_be_the_graal_for_quantization/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'H2e1iJNYyEDHkou7aYaWtjDGWoBDAWSQ_RuXr2lQNXY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/H2e1iJNYyEDHkou7aYaWtjDGWoBDAWSQ_RuXr2lQNXY.png?width=108&crop=smart&auto=webp&s=e32576b039003662de9b141d038be1a38ccbbcdc', 'width': 108}, {'height': 121, 'url': 'h... |
Jan Nano + Deepseek R1: Combining Remote Reasoning with Local Models using MCP | 19 | # Combining Remote Reasoning with Local Models
I made this MCP server which wraps open source models on Hugging Face. It's useful if you want to give you local model access to (bigger) models via an API.
This is the basic idea:
1. **Local model** handles initial user input and decides task complexity
2. **Remote mod... | 2025-06-25T08:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lk0cjv/jan_nano_deepseek_r1_combining_remote_reasoning/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk0cjv | false | null | t3_1lk0cjv | /r/LocalLLaMA/comments/1lk0cjv/jan_nano_deepseek_r1_combining_remote_reasoning/ | false | false | self | 19 | null |
Will a Mac Studio M4 Max 128GB run Qwen 3 235B-A22B MoE? | 2 | Could anyone share insightful tests in either (good or horror) scenario to help understand how viable such an option would be?
Other mac versions experiences welcome. | 2025-06-25T08:17:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lk025n/will_mac_studio_m4_max_128gb_run_qwen_3_325b_22/ | lupo90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lk025n | false | null | t3_1lk025n | /r/LocalLLaMA/comments/1lk025n/will_mac_studio_m4_max_128gb_run_qwen_3_325b_22/ | false | false | self | 2 | null |
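Assuming the poster means Qwen3-235B-A22B, a rough feasibility check is weights ≈ parameters × bits-per-weight / 8, plus some overhead. The bits-per-weight figures below are approximate values for common llama.cpp quant types (effective bpw varies by model and quant recipe — treat them as placeholders):

```python
def quantized_size_gb(params_b: float, bits_per_weight: float,
                      overhead: float = 1.1) -> float:
    """Rough in-memory size of quantized weights in GB.

    overhead is a crude fudge factor for higher-precision embeddings,
    KV cache head-room, etc. -- not an exact accounting.
    """
    return params_b * bits_per_weight / 8 * overhead

# Approximate effective bits-per-weight for llama.cpp quants (assumption).
for label, bits in [("Q2_K", 2.6), ("Q3_K_M", 3.9), ("Q4_K_M", 4.8)]:
    size = quantized_size_gb(235, bits)
    fits = "fits" if size < 128 * 0.75 else "does not fit"  # leave ~25% for OS
    print(f"{label}: ~{size:.0f} GB -> {fits} in 128 GB unified memory")
```

The upshot: 128 GB realistically only fits the very low-bit quants of a 235B model, though the A22B MoE design means tokens/sec can still be reasonable once it loads.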
Looking to buy a MacBook Pro for on the go local LLMs. Would be dealing with several workflows, files, ocr, csv data analysis (80k lines) webapps creation etc. What are your experiences with the Apple silicone and ram selection? What is the max model size you ran and what was the max context length? | 4 | Do mention the configuration of your Mac’s also please
| 2025-06-25T08:07:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ljzwro/looking_to_buy_a_macbook_pro_for_on_the_go_local/ | alor_van_diaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljzwro | false | null | t3_1ljzwro | /r/LocalLLaMA/comments/1ljzwro/looking_to_buy_a_macbook_pro_for_on_the_go_local/ | false | false | self | 4 | null |
Fastest & Smallest LLM for realtime response 4080 Super | 1 | 4080 Super 16gb VRAM -
I already filled 10gb with various other AI in the pipeline, but the data flows to an LLM to process a simple text response, the text response then gets passed to TTS which takes \~3 seconds to compute so I need an LLM that can produce simple text responses VERY quickly to minimize the time the... | 2025-06-25T07:32:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ljze7w/fastest_smallest_llm_for_realtime_response_4080/ | StickyShuba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljze7w | false | null | t3_1ljze7w | /r/LocalLLaMA/comments/1ljze7w/fastest_smallest_llm_for_realtime_response_4080/ | false | false | self | 1 | null |
Simple tool to estimate VRAM usage for LLMs and multimodal models | 1 | I came across this open-source VRAM calculator recently and found it surprisingly useful,It helps estimate how much VRAM you’ll need for different AI tasks like LLM inference, multi-modal models, or training. You just input your model type, precision, and a few config options—then it gives you a clean breakdown of the ... | 2025-06-25T07:25:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ljzaoj/simple_tool_to_estimate_vram_usage_for_llms_and/ | Basic_Influence_9851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljzaoj | false | null | t3_1ljzaoj | /r/LocalLLaMA/comments/1ljzaoj/simple_tool_to_estimate_vram_usage_for_llms_and/ | false | false | self | 1 | null |
Simple tool to estimate VRAM usage for LLMs and multimodal models | 1 | [removed] | 2025-06-25T07:22:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ljz92l/simple_tool_to_estimate_vram_usage_for_llms_and/ | Basic_Influence_9851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljz92l | false | null | t3_1ljz92l | /r/LocalLLaMA/comments/1ljz92l/simple_tool_to_estimate_vram_usage_for_llms_and/ | false | false | self | 1 | null |
Handy VRAM Estimator for LLMs & Diffusion Models | 1 | [removed] | 2025-06-25T07:20:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ljz7yk/handy_vram_estimator_for_llms_diffusion_models/ | Basic_Influence_9851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljz7yk | false | null | t3_1ljz7yk | /r/LocalLLaMA/comments/1ljz7yk/handy_vram_estimator_for_llms_diffusion_models/ | false | false | self | 1 | null |
How effective are LLMs at translating heavy context-based languages like Japanese, Korean, Thai, and others? | 2 | Most of these languages rely deeply on cultural nuance, implied subjects, honorifics, and flexible grammar structures that don't map neatly to English or other Indo-European languages. For example:
Japanese often omits the subject and even the object, relying entirely on context.
Korean speech changes based on soci... | 2025-06-25T07:18:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ljz6sh/how_effective_are_llms_at_translating_heavy/ | GTurkistane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljz6sh | false | null | t3_1ljz6sh | /r/LocalLLaMA/comments/1ljz6sh/how_effective_are_llms_at_translating_heavy/ | false | false | self | 2 | null |
Found a really useful VRAM calculator for AI models | 1 | [removed] | 2025-06-25T07:17:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ljz5w2/found_a_really_useful_vram_calculator_for_ai/ | Basic_Influence_9851 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljz5w2 | false | null | t3_1ljz5w2 | /r/LocalLLaMA/comments/1ljz5w2/found_a_really_useful_vram_calculator_for_ai/ | false | false | self | 1 | null |
OMG i can finally post something here. | 7 | I have tried to post multiple times in this subreddit and it is always automatically removed saying "awating moderator approve" or something similar and it was never approved, i tried contacting the old mods and no one replied, i learned then that the old "mods" was literally one person with multiple automods, who was ... | 2025-06-25T07:08:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ljz16o/omg_i_can_finally_post_something_here/ | GTurkistane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljz16o | false | null | t3_1ljz16o | /r/LocalLLaMA/comments/1ljz16o/omg_i_can_finally_post_something_here/ | false | false | self | 7 | null |
Self-hosted LLMs with tool calling, vision & RAG: what do you use? | 0 | **A question for everyone who works with AI/LLMs in a web or agency context (e.g. content, client projects, automation):**
We are currently building our own **LLM hosting on European infrastructure** (no reselling, no US-API-forwarding setup) and are testing different setups and models right now. The goal: a **GDPR-... | 2025-06-25T06:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ljyq2m/selfhosted_llms_mit_toolcalling_vision_rag_was/ | Strong-Tough444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljyq2m | false | null | t3_1ljyq2m | /r/LocalLLaMA/comments/1ljyq2m/selfhosted_llms_mit_toolcalling_vision_rag_was/ | false | false | self | 0 | null |
Jan-nano-128k: A 4B Model with a Super-Long Context Window (Still Outperforms 671B) | 890 | Hi everyone it's me from Menlo Research again,
Today, I'd like to introduce our latest model: **Jan-nano-128k** \- this model is fine-tuned on **Jan-nano** (which is a Qwen3 finetune) and improves performance when YaRN scaling is enabled **(instead of degrading)**.
* It can uses tools continuously, repeated... | 2025-06-25T06:44:26 | https://v.redd.it/909kwwnbo09f1 | Kooky-Somewhere-2883 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ljyo2p | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/909kwwnbo09f1/DASHPlaylist.mpd?a=1753425883%2COGM5MzgwZGM3NjZlYWQyNWJmZDExMTBhYTEwYWZjNTRlMGFjMDkwY2M3ZWRhMzAxZTNiMmFmZGI4YmRjNjQyMA%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/909kwwnbo09f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ljyo2p | /r/LocalLLaMA/comments/1ljyo2p/jannano128k_a_4b_model_with_a_superlong_context/ | false | false | 890 | {'enabled': False, 'images': [{'id': 'MDRyeGJ6bmJvMDlmMdx7LrexgFcEoZTqX8Yp_PzSREeGDqUB-Qd2XY93v_7d', 'resolutions': [{'height': 89, 'url': 'https://external-preview.redd.it/MDRyeGJ6bmJvMDlmMdx7LrexgFcEoZTqX8Yp_PzSREeGDqUB-Qd2XY93v_7d.png?width=108&crop=smart&format=pjpg&auto=webp&s=954fd7c723905947918a2d324b31fbab4418a... | |
Suggestions to build local voice assistant | 8 | # AIM
I am looking to build a local running voice assistant that acts as a full time assistant with memory that helps me for the following:
* Help me with my work related tasks (coding/business/analysis/mails/taking notes)
* I should be able to attach media(s) and share it with my model/assistant
* Offer person... | 2025-06-25T06:32:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ljyhkc/suggestions_to_build_local_voice_assistant/ | prashv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljyhkc | false | null | t3_1ljyhkc | /r/LocalLLaMA/comments/1ljyhkc/suggestions_to_build_local_voice_assistant/ | false | false | self | 8 | null |
Built an AI Notes Assistant Using Mistral 7B Instruct – Feedback Welcome! | 7 | I’ve been building an AI-powered website called NexNotes AI, and wanted to share a bit of my journey here for folks working with open models.
I’m currently using Mistral 7B Instruct (via Together AI) to handle summarization ,flashcards, Q&A over user notes, article content,, and PDFs. It’s been surprisingly effective ... | 2025-06-25T06:26:39 | anonymously_geek | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ljye3o | false | null | t3_1ljye3o | /r/LocalLLaMA/comments/1ljye3o/built_an_ai_notes_assistant_using_mistral_7b/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'wpKjv-2Vzni1jYaoSqondEMRYCONGgv3Z6QhtlQGR_A', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/p9b6u09to09f1.png?width=108&crop=smart&auto=webp&s=aa5cd6c0a58bb4ad8e50f8ce8f404db0cda447d9', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/p9b6u09to09f1.png... | ||
Jan-nano-128k: A 4B Model with a Super-Long Context Window (Still Outperforms 671B) | 4 | Hi everyone it's me from Menlo Research again,
Today, I'd like to introduce our latest model: **Jan-nano-128k** - this model is fine-tuned on **Jan-nano** (which is a qwen3 finetune), improving performance when YaRN scaling is enabled.
* It can perform deep research **VERY VERY DEEP**
* It can uses tools continuously (i ... | 2025-06-25T06:15:33 | https://v.redd.it/2748d0xcm09f1 | Kooky-Somewhere-2883 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ljy7rw | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2748d0xcm09f1/DASHPlaylist.mpd?a=1753424148%2CNjMwMTQyODBlOTdlZmM3YjAxY2U3ZjMwYmM2NTAwOWNkMzA0MmZmNzU3MjdlMzQ2Mjc0MTM4ZGNjNThlZTNjOA%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/2748d0xcm09f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ljy7rw | /r/LocalLLaMA/comments/1ljy7rw/jannano128k_a_4b_model_with_a_superlong_context/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'eDc3cXlrb25tMDlmMemOmMQxJFc0zwhaIz44R49918vyCeexVLt4AQRO3_oo', 'resolutions': [{'height': 89, 'url': 'https://external-preview.redd.it/eDc3cXlrb25tMDlmMemOmMQxJFc0zwhaIz44R49918vyCeexVLt4AQRO3_oo.png?width=108&crop=smart&format=pjpg&auto=webp&s=71367391885b9d736ddd4c4d830fa90efb358... | |
Its all marketing | 1 | [deleted] | 2025-06-25T05:51:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ljxtvd | false | null | t3_1ljxtvd | /r/LocalLLaMA/comments/1ljxtvd/its_all_marketing/ | false | false | default | 1 | null | ||
Is it possible to get a response in 0.2s? | 4 | I'll most likely be using gemma 3, and assuming I'm using an A100, which version of gemma 3 should I be using to achieve the 0.2s question-to-response delay?
Gemma 3 1B
Gemma 3 4B
Gemma 3 12B
Gemma 3 27B | 2025-06-25T05:50:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ljxtbq/is_it_possible_to_get_a_response_in_02s/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljxtbq | false | null | t3_1ljxtbq | /r/LocalLLaMA/comments/1ljxtbq/is_it_possible_to_get_a_response_in_02s/ | false | false | self | 4 | null |
Gemini CLI: your open-source AI agent | 137 | Really generous free tier | 2025-06-25T05:18:00 | https://blog.google/technology/developers/introducing-gemini-cli/ | adefa | blog.google | 1970-01-01T00:00:00 | 0 | {} | 1ljxa2e | false | null | t3_1ljxa2e | /r/LocalLLaMA/comments/1ljxa2e/gemini_cli_your_opensource_ai_agent/ | false | false | 137 | {'enabled': False, 'images': [{'id': 'v_nU-59VjAFg3tUf3ktH0OR1eDLLCpt7sTIO-4lpiic', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/v_nU-59VjAFg3tUf3ktH0OR1eDLLCpt7sTIO-4lpiic.png?width=108&crop=smart&auto=webp&s=b8e739b515523fc4b279dd605822228fe9c1b445', 'width': 108}, {'height': 121, 'url': 'h... | |
Is anyone else frustrated by AI chats getting amnesia? | 0 | Hey everyone,
We're two engineers (and heavy AI users). We use tools like ChatGPT and Claude as thinking partners for complex projects, but we're constantly frustrated by one thing: starting over.
Every time we open a new chat, the AI has total amnesia. We have to re-explain the project context, re-paste the same cod... | 2025-06-25T04:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ljwtbm/is_anyone_else_frustrated_by_ai_chats_getting/ | Matrix_Ender | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljwtbm | false | null | t3_1ljwtbm | /r/LocalLLaMA/comments/1ljwtbm/is_anyone_else_frustrated_by_ai_chats_getting/ | false | false | self | 0 | null |
Migrate Java Spring boot application to FastAPI python application, suggest any AI tool? | 0 | In our current project, we have a lot of Spring Boot applications, and as per the client's requirement we need to migrate all of them to FastAPI.
Each application is being converted to Python manually, which will take a lot of time. Is there an AI tool that can convert an entire application to FastAPI?
Could you please suggest any A... | 2025-06-25T04:47:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ljwrts/migrate_java_spring_boot_application_to_fastapi/ | chanupatel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljwrts | false | null | t3_1ljwrts | /r/LocalLLaMA/comments/1ljwrts/migrate_java_spring_boot_application_to_fastapi/ | false | false | self | 0 | null |
NeuralTranslate: Nahuatl to Spanish LLM! (Gemma 3 27b fine-tune) | 15 | Hey! After quite a long time there's a new release from my open-source series of models: NeuralTranslate!
This time I full fine-tuned Gemma 3 27b on a Nahuatl-Spanish dataset. It comes with 3 versions: v1, v1.1 & v1.2. v1 is the epoch 4 checkpoint for the model, v1.1 is for epoch 9 & v1.2 is for epoch 10. I've seen... | 2025-06-25T04:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ljw728/neuraltranslate_nahuatl_to_spanish_llm_gemma_3/ | Azuriteh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljw728 | false | null | t3_1ljw728 | /r/LocalLLaMA/comments/1ljw728/neuraltranslate_nahuatl_to_spanish_llm_gemma_3/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': '8U-FNXUWzZsjyI2uIUtwW2AfpuG3VutM9CEeyuVp5e4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8U-FNXUWzZsjyI2uIUtwW2AfpuG3VutM9CEeyuVp5e4.png?width=108&crop=smart&auto=webp&s=b6fb8d412de536695bfb83c1a33f8befe6aac558', 'width': 108}, {'height': 116, 'url': 'h... |
Where does spelling correction happen in LLMs like ChatGPT? | 0 | 2025-06-25T03:51:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ljvqmo/where_does_spelling_correction_happen_in_llms/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljvqmo | false | null | t3_1ljvqmo | /r/LocalLLaMA/comments/1ljvqmo/where_does_spelling_correction_happen_in_llms/ | false | false | 0 | null | ||
Llama.cpp vs API - Gemma 3 Context Window Performance | 3 | Hello everyone,
So basically I'm testing out the Gemma 3 models both locally and online via AI Studio, and wanted to pass in a transcription averaging around 6-7k tokens. Locally, the model doesn't know what the text is about, or knows only the very end of the text, whereas the same model on AI Studio is in... | 2025-06-25T03:32:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ljve2u/llamacpp_vs_api_gemma_3_context_window_performance/ | Wise_Professor_6007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljve2u | false | null | t3_1ljve2u | /r/LocalLLaMA/comments/1ljve2u/llamacpp_vs_api_gemma_3_context_window_performance/ | false | false | self | 3 | null |
Using public to provide a Ai model for free? | 9 | I recently came upon this: [https://mindcraft.riqvip.dev/andy-docs](https://mindcraft.riqvip.dev/andy-docs). It's a llama 8b finetuned for minecraft. The way it's being hosted interested me: it relies on people hosting it for themselves and letting others use that compute power. Would there be potential to this with o... | 2025-06-25T03:04:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ljuul0/using_public_to_provide_a_ai_model_for_free/ | Pale_Ad_6029 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljuul0 | false | null | t3_1ljuul0 | /r/LocalLLaMA/comments/1ljuul0/using_public_to_provide_a_ai_model_for_free/ | false | false | self | 9 | null |
LM Studio alternative for remote APIs? | 8 | Basically the title. I need something that does all the things that LM Studio does, except for remote APIs instead of local.
I see things like Chatbox and SillyTavern, but I need something far more developer-oriented. Set all API parameters, system message, etc.
Any suggestions?
Thanks! | 2025-06-25T02:56:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ljup39/lm_studio_alternative_for_remote_apis/ | TrickyWidget | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljup39 | false | null | t3_1ljup39 | /r/LocalLLaMA/comments/1ljup39/lm_studio_alternative_for_remote_apis/ | false | false | self | 8 | null |
What local model is best for multi-turn conversations? | 0 | Title.
Up to 70-80B params. | 2025-06-25T02:42:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ljufhk/what_local_model_is_best_for_multiturn/ | Glittering-Bag-4662 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljufhk | false | null | t3_1ljufhk | /r/LocalLLaMA/comments/1ljufhk/what_local_model_is_best_for_multiturn/ | false | false | self | 0 | null |
From Idea to Post: Meet the AI Agent That Writes Linkedin post for You | 0 | ERROR: type should be string, got "\n\nhttps://preview.redd.it/np9t3mllhz8f1.png?width=1024&format=png&auto=webp&s=00ac7f5344cf0b4ddf624a64dc5590e07d28f219\n\n# Meet IdeaWeaver, your new AI agent for content creation.\n\n*Just type:*\n\n`ideaweaver agent linkedin_post — topic “AI trends in 2025”`\n\n*That’s it. One command, and a high-quality, engaging post is ready for LinkedIn.*\n\n* *Completely free*\n* *First tries your local LLM via Ollama*\n* *Falls back to OpenAI if needed*\n\n*No brainstorming. No writer’s block. Just results.*\n\n*Whether you’re a founder, developer, or content creator, IdeaWeaver makes it ridiculously easy to build a personal brand with AI.*\n\n*Try it out today. It doesn’t get simpler than this.*\n\n*Docs:* [*https://ideaweaver-ai-code.github.io/ideaweaver-docs/agent/commands/*](https://ideaweaver-ai-code.github.io/ideaweaver-docs/agent/commands/)\n\n*GitHub:* [*https://github.com/ideaweaver-ai-code/ideaweaver*](https://github.com/ideaweaver-ai-code/ideaweaver)\n\n*If you find IdeaWeaver helpful, a ⭐ on the repo would mean a lot!\\\\*" | 2025-06-25T02:24:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lju2wd/from_idea_to_post_meet_the_ai_agent_that_writes/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lju2wd | false | null | t3_1lju2wd | /r/LocalLLaMA/comments/1lju2wd/from_idea_to_post_meet_the_ai_agent_that_writes/ | false | false | 0 | null | |
Developer-oriented Windows client for remote APIs? | 1 | I need a client for connecting to remote OpenAI-compatible LLM APIs that's oriented around technical use. But all I can find is the likes of Chatbox and SillyTavern, which don't fit the bill.
At a minimum, I need it to run on Windows, be able to set all API parameter values, and support file attachments. Ideally, it... | 2025-06-25T02:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lju1vw/developeroriented_windows_client_for_remote_apis/ | TrickyWidget | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lju1vw | false | null | t3_1lju1vw | /r/LocalLLaMA/comments/1lju1vw/developeroriented_windows_client_for_remote_apis/ | false | false | self | 1 | null |
ThermoAsk: getting an LLM to set its own temperature | 102 | I got an LLM to dynamically adjust its own sampling temperature.
I wrote a blog post on how I did this and why dynamic temperature adjustment might be a valuable ability for a language model to possess: [amanvir.com/blog/getting-an-llm-to-set-its-own-temperature](http://amanvir.com/blog/getting-an-llm-to-set-its-own-t... | 2025-06-25T00:54:24 | tycho_brahes_nose_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ljs95d | false | null | t3_1ljs95d | /r/LocalLLaMA/comments/1ljs95d/thermoask_getting_an_llm_to_set_its_own/ | false | false | 102 | {'enabled': True, 'images': [{'id': 'j8kwaXqTMljcdz_l3iG-0Zwn-uRB-FmYnEf_mEAi7zk', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/t8az5arc1z8f1.png?width=108&crop=smart&auto=webp&s=20291630c903679d9e7b771e98021828b9aa7967', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/t8az5arc1z8f1.png... | ||
All of our posts for the last week: | 63 | 2025-06-25T00:47:47 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ljs4e7 | false | null | t3_1ljs4e7 | /r/LocalLLaMA/comments/1ljs4e7/all_of_our_posts_for_the_last_week/ | false | false | default | 63 | {'enabled': True, 'images': [{'id': '0feqhgvc0z8f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0feqhgvc0z8f1.jpeg?width=108&crop=smart&auto=webp&s=5852a856e92a73398cef777ca6456d04eafaccb8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0feqhgvc0z8f1.jpeg?width=216&crop=smart&auto=... | ||
Does anyone else find Dots really impressive? | 30 | I've been using Dots and I find it really impressive. It's my current favorite model. It's knowledgeable, uncensored and has a bit of attitude. It's uncensored in that it will not only talk about TS, it will do so in great depth. If you push it about something, it'll show some attitude by being sarcastic. I like that. I... | 2025-06-25T00:37:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ljrwrq/does_anyone_else_find_dots_really_impressive/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljrwrq | false | null | t3_1ljrwrq | /r/LocalLLaMA/comments/1ljrwrq/does_anyone_else_find_dots_really_impressive/ | false | false | self | 30 | null |
WebBench: A real-world benchmark for Browser Agents | 31 | WebBench is an open, task-oriented benchmark designed to measure how effectively browser agents handle complex, realistic web workflows. It includes 2,454 tasks across 452 live websites selected from the global top-1000 by traffic.
GitHub: https://github.com/Halluminate/WebBench
| 2025-06-25T00:32:43 | Impressive_Half_2819 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ljrt7g | false | null | t3_1ljrt7g | /r/LocalLLaMA/comments/1ljrt7g/webbench_a_realworld_benchmark_for_browser_agents/ | false | false | default | 31 | {'enabled': True, 'images': [{'id': 'h8nloj5oxy8f1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/h8nloj5oxy8f1.jpeg?width=108&crop=smart&auto=webp&s=f4dfaac91fbf16f4d746c04edc5d77eae5e012e4', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/h8nloj5oxy8f1.jpeg?width=216&crop=smart&auto=w... | |
Faster local inference? | 3 | I am curious to hear folks perspective on the speed they get when running models locally. I've tried on a Mac (with llama.cpp, ollama, and mlx) as well as on an AMD card on a PC. But while I can see various benefits to running models locally, I also at times want the response speed that only seems possible when using a... | 2025-06-25T00:29:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ljrqvy/faster_local_inference/ | badatreality | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljrqvy | false | null | t3_1ljrqvy | /r/LocalLLaMA/comments/1ljrqvy/faster_local_inference/ | false | false | self | 3 | null |
GPU benchmarking website for AI? | 3 | Hi, does anyone know of a website that lists user submitted GPU benchmarks for models? Like tokens/sec, etc?
I remember there was a website I saw recently that was xxxxxx.ai but I forgot to save the link. I think the domain started with an "a" but i'm not sure. | 2025-06-25T00:26:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ljro3f/gpu_benchmarking_website_for_ai/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljro3f | false | null | t3_1ljro3f | /r/LocalLLaMA/comments/1ljro3f/gpu_benchmarking_website_for_ai/ | false | false | self | 3 | null |
Qwen3 vs phi4 vs gemma3 vs deepseek r1 or deepseek v3 vs llama 3 or llama 4 | 3 | Which model do you use where? As in, what use case does one solve that the others aren't able to? I'm diving into local LLMs after using OpenAI, Gemini and Claude. If I had to make AI agents, which model would fit which use case? Llama 4, Qwen3 (both dense and MoE) and DeepSeek V3/R1 are MoE and the others are dense, I guess? I would... | 2025-06-25T00:11:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ljrcq6/qwen3_vs_phi4_vs_gemma3_vs_deepseek_r1_or/ | Divkix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljrcq6 | false | null | t3_1ljrcq6 | /r/LocalLLaMA/comments/1ljrcq6/qwen3_vs_phi4_vs_gemma3_vs_deepseek_r1_or/ | false | false | self | 3 | null |
Where is OpenAI's open source model? | 101 | Did I miss something? | 2025-06-24T23:57:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ljr1wn/where_is_openais_open_source_model/ | _Vedr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljr1wn | false | null | t3_1ljr1wn | /r/LocalLLaMA/comments/1ljr1wn/where_is_openais_open_source_model/ | false | false | self | 101 | null |
[Gamers Nexus] NVIDIA RTX PRO 6000 Blackwell Benchmarks & Tear-Down | Thermals, Gaming, LLM, & Acoustic Tests | 0 | 2025-06-24T23:56:11 | https://www.youtube.com/watch?v=ZCvjw8B6rcg | asssuber | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ljr12h | false | {'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ZCvjw8B6rcg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco... | t3_1ljr12h | /r/LocalLLaMA/comments/1ljr12h/gamers_nexus_nvidia_rtx_pro_6000_blackwell/ | false | false | 0 | {'enabled': False, 'images': [{'id': '83pqnDabbeW2W87zR8vNGBLxz05MxlwFmCGTIDvEn_8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/83pqnDabbeW2W87zR8vNGBLxz05MxlwFmCGTIDvEn_8.jpeg?width=108&crop=smart&auto=webp&s=a58cdde998bff9e72f0ff67386955a9754c258ae', 'width': 108}, {'height': 162, 'url': '... | ||
Best tts and stt open source or cheap - NOT real time? | 9 | Seeing a lot of realtime qna when I was browsing and searching the sub, what about not real time? Ideally not insanely slow but I have no need for anything close to real time so higher quality audio would be preferred. | 2025-06-24T23:30:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ljqgxb/best_tts_and_stt_open_source_or_cheap_not_real/ | dabble_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljqgxb | false | null | t3_1ljqgxb | /r/LocalLLaMA/comments/1ljqgxb/best_tts_and_stt_open_source_or_cheap_not_real/ | false | false | self | 9 | null |
How fast are OpenAI/Anthropic API really? | 0 | What's the benchmark here for these LLM cloud services? I imagine many people choose to use these because of inference speed, most likely for software development/debugging purposes. How fast are they really? Are they comparable to running small models on local machines, or faster? | 2025-06-24T23:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ljqd4x/how_fast_are_openaianthropic_api_really/ | Caffdy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljqd4x | false | null | t3_1ljqd4x | /r/LocalLLaMA/comments/1ljqd4x/how_fast_are_openaianthropic_api_really/ | false | false | self | 0 | null |
I gave the same silly task to ~70 models that fit on 32GB of VRAM - thousands of times (resharing my post from /r/LocalLLM) | 303 | I'd posted this over at /r/LocalLLM and some people thought I presented this too much as serious research - it wasn't, it was much closer to a bored rainy-day activity. So here's the post I've been waiting to make on /r/LocalLLaMA for some time, simplified as casually as possible:
Quick recap - [here is the original p... | 2025-06-24T22:56:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ljpo64/i_gave_the_same_silly_task_to_70_models_that_fit/ | EmPips | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljpo64 | false | null | t3_1ljpo64 | /r/LocalLLaMA/comments/1ljpo64/i_gave_the_same_silly_task_to_70_models_that_fit/ | false | false | self | 303 | null |
So, what do people think about the new Mistral Small 3.2? | 98 | I was wondering why the sub was so quiet lately, but alas, what're your thoughts so far?
I for one welcome the decreased repetition, solid "minor" update. | 2025-06-24T22:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ljp29d/so_what_do_people_think_about_the_new_mistral/ | TacticalRock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljp29d | false | null | t3_1ljp29d | /r/LocalLLaMA/comments/1ljp29d/so_what_do_people_think_about_the_new_mistral/ | false | false | self | 98 | null |
What local clients do you use? | 6 | I want to build a local client for LLMs, embeddings and rerankers, possibly RAG. But I doubt it will be used by anyone other than me. I was going to make something like LM Studio but open source. Upon deeper research I found many alternatives like Jan AI or AnythingLLM.
Do you think that my app will be used by anyon... | 2025-06-24T22:26:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ljoyvm/what_local_clients_do_you_use/ | PotatoHD404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljoyvm | false | null | t3_1ljoyvm | /r/LocalLLaMA/comments/1ljoyvm/what_local_clients_do_you_use/ | false | false | self | 6 | null |
After a year in the LLM wilderness, I think the 'memory problem' isn't a bug—it's a business model. So I went a different way. | 0 | Hey everyone, I've been on a journey for the past year, probably like many of you here. I've worked with every major model, spent countless hours trying to fine-tune, and run head-first into the same wall over and over: the Groundhog Day problem. The sense that no matter how good your prompts get, you're always startin... | 2025-06-24T22:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ljoxno/after_a_year_in_the_llm_wilderness_i_think_the/ | Fantastic-Salmon92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljoxno | false | null | t3_1ljoxno | /r/LocalLLaMA/comments/1ljoxno/after_a_year_in_the_llm_wilderness_i_think_the/ | false | false | self | 0 | null |
Automating Form Mapping with AI | 1 | Hi, I'm working on an autofill extension that automates interactions with web pages: clicking buttons, filling forms, submitting data, etc. It uses a custom instruction format to describe what actions to take on a given page.
The current process is pretty manual:
I have to open the target page, inspect all the relevant... | 2025-06-24T22:16:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ljoqsd/automating_form_mapping_with_ai/ | carrick1363 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljoqsd | false | null | t3_1ljoqsd | /r/LocalLLaMA/comments/1ljoqsd/automating_form_mapping_with_ai/ | false | false | self | 1 | null |
Will I be happy with a RTX 3090? | 7 | Before making a big purchase, I would be grateful for some advice from the experts here!
**What I want to do:**
1. Enhanced web search (for example using [perplexica](https://github.com/ItzCrazyKns/Perplexica)) - it seems you can achieve decent results with smaller models. Being able to get summaries of "todays new... | 2025-06-24T22:05:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ljohcu/will_i_be_happy_with_a_rtx_3090/ | eribob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljohcu | false | null | t3_1ljohcu | /r/LocalLLaMA/comments/1ljohcu/will_i_be_happy_with_a_rtx_3090/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'EbEpin0SFWbZTkNqyRFvAKTwDz_KqYIW1fyCm5RHCcw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EbEpin0SFWbZTkNqyRFvAKTwDz_KqYIW1fyCm5RHCcw.png?width=108&crop=smart&auto=webp&s=05d327dddfb3d122a5bbea176ba825b18bbec20c', 'width': 108}, {'height': 108, 'url': 'h... |
LinusTechTips reviews Chinese 4090s with 48Gb VRAM, messes with LLMs | 80 | Just thought it might be fun for the community to see one of the largest tech YouTubers introducing their audience to local LLMs.
Lots of newbie mistakes in their messing with Open WebUI and Ollama but hopefully it encourages some of their audience to learn more. For anyone who saw the video and found their way here, ... | 2025-06-24T22:05:08 | https://youtu.be/HZgQp-WDebU | BumbleSlob | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1ljogsx | false | {'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HZgQp-WDebU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy... | t3_1ljogsx | /r/LocalLLaMA/comments/1ljogsx/linustechtips_reviews_chinese_4090s_with_48gb/ | false | false | default | 80 | {'enabled': False, 'images': [{'id': 'ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=108&crop=smart&auto=webp&s=34b6e95c9e78450a03bc17669db1039556875ab2', 'width': 108}, {'height': 162, 'url': '... |
New Moondream 2B VLM update, with visual reasoning | 83 | 2025-06-24T21:51:07 | https://moondream.ai/blog/moondream-2025-06-21-release | radiiquark | moondream.ai | 1970-01-01T00:00:00 | 0 | {} | 1ljo4ns | false | null | t3_1ljo4ns | /r/LocalLLaMA/comments/1ljo4ns/new_moondream_2b_vlm_update_with_visual_reasoning/ | false | false | default | 83 | null | |
RTX 5090 TTS Advice | 2 | Need help and advice on which TTS models are quality and will run locally on a 5090. Tried chatterbox, but there are pytorch compatibility issues, running torch 2.7.0+cu128 vs. the required 2.6.0. | 2025-06-24T21:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ljo4el/rtx_5090_tts_advice/ | FishingMysterious366 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljo4el | false | null | t3_1ljo4el | /r/LocalLLaMA/comments/1ljo4el/rtx_5090_tts_advice/ | false | false | self | 2 | null |
3090 vs 5070 ti | 1 | I'm using gemma3:12b-it-qat for inference and may move up to gemma3:27b-it-qat when I can run it at speed. I'll have concurrent inference sessions (5-10 daily active users), currently using Ollama.
Google says gemma3:27b-it-qat needs roughly 14.1GB VRAM, so at this point, I don't think it will even load onto a s... | 2025-06-24T21:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ljo1rp/3090_vs_5070_ti/ | GroundbreakingMain93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljo1rp | false | null | t3_1ljo1rp | /r/LocalLLaMA/comments/1ljo1rp/3090_vs_5070_ti/ | false | false | 1 | null |
AMD Instinct MI60 (32gb VRAM) "llama bench" results for 10 models - Qwen3 30B A3B Q4_0 resulted in: pp512 - 1,165 t/s | tg128 68 t/s - Overall very pleased and resulted in a better outcome for my use case than I even expected | 30 | I just completed a new build and (finally) have everything running as I wanted it to when I spec'd out the build. I'll be making a separate post about that as I'm now my own sovereign nation state for media, home automation (including voice activated commands), security cameras and local AI which I'm thrilled about...b... | 2025-06-24T21:32:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ljnoj7/amd_instinct_mi60_32gb_vram_llama_bench_results/ | FantasyMaster85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljnoj7 | false | null | t3_1ljnoj7 | /r/LocalLLaMA/comments/1ljnoj7/amd_instinct_mi60_32gb_vram_llama_bench_results/ | false | false | 30 | null | |
Google researcher requesting feedback on the next Gemma. | 110 | [https://x.com/osanseviero/status/1937453755261243600](https://preview.redd.it/kr52i2mn0y8f1.png?width=700&format=png&auto=webp&s=f654b4d8fc807a8722055201e8c097168452937f)
Source: [https://x.com/osanseviero/status/1937453755261243600](https://x.com/osanseviero/status/1937453755261243600)
I'm gpu poor. 8-12B mode... | 2025-06-24T21:30:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ljnmj9/google_researcher_requesting_feedback_on_the_next/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljnmj9 | false | null | t3_1ljnmj9 | /r/LocalLLaMA/comments/1ljnmj9/google_researcher_requesting_feedback_on_the_next/ | false | false | 110 | null | |
Made an LLM Client for the PS Vita | 177 | Hello all, a while back I had ported llama2.c to the PS Vita for on-device inference using the TinyStories 260K & 15M checkpoints. It was a cool and fun concept to work on, but it wasn't too practical in the end.
Since then, I have made a full fledged LLM client for the Vita instead! You can even use the camera to take ph... | 2025-06-24T21:24:23 | https://v.redd.it/qunyr1jwzx8f1 | ajunior7 | /r/LocalLLaMA/comments/1ljnhca/made_an_llm_client_for_the_ps_vita/ | 1970-01-01T00:00:00 | 0 | {} | 1ljnhca | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qunyr1jwzx8f1/DASHPlaylist.mpd?a=1753521866%2CZTE4MjcxNGNlNDdmYzBjZjBmODJiNjg1OGEwZmRhYjA3NDFmYjZkZDU4NGU4NTAzZTUwYTY0YjExYWIyZWJhOA%3D%3D&v=1&f=sd', 'duration': 117, 'fallback_url': 'https://v.redd.it/qunyr1jwzx8f1/DASH_1080.mp4?source=fallback', '... | t3_1ljnhca | /r/LocalLLaMA/comments/1ljnhca/made_an_llm_client_for_the_ps_vita/ | false | false | 177 | {'enabled': False, 'images': [{'id': 'Y283aGV6aXd6eDhmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y283aGV6aXd6eDhmMfIP8BrPficmhyY5KB42Ptrwyms9E-ke6lpIPgzOipjX.png?width=108&crop=smart&format=pjpg&auto=webp&s=4bfbdc37eb2e7f15c8074e492f511a6fa8d2c... | |
What are your go-to models for daily use? Please also comment about your quantization of choice | 9 |
[View Poll](https://www.reddit.com/poll/1ljncfs) | 2025-06-24T21:18:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ljncfs/what_are_your_goto_models_for_daily_use_please/ | okaris | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljncfs | false | null | t3_1ljncfs | /r/LocalLLaMA/comments/1ljncfs/what_are_your_goto_models_for_daily_use_please/ | false | false | self | 9 | null |
Why is my llama so dumb? | 7 | Model: DeepSeek R1 Distill Llama 70B
GPU+Hardware: Vulkan on AMD AI Max+ 395 128GB VRAM
Program+Options:
- GPU Offload Max
- CPU Thread Pool Size 16
- Offload KV Cache: Yes
- Keep Model in Memory: Yes
- Try mmap(): Yes
- K Cache Quantization Type: Q4_0
So the question is, when askin... | 2025-06-24T21:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ljn4h8/why_is_my_llama_so_dumb/ | CSEliot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljn4h8 | false | null | t3_1ljn4h8 | /r/LocalLLaMA/comments/1ljn4h8/why_is_my_llama_so_dumb/ | false | false | self | 7 | null |
I made a free iOS app for people who run LLMs locally. It’s a chatbot that you can use away from home to interact with an LLM that runs locally on your desktop Mac. | 7 | It is easy enough that anyone can use it. No tunnel or port forwarding needed.
The app is called LLM Pigeon and has a companion app called LLM Pigeon Server for Mac.
It works like a carrier pigeon :). It uses iCloud to append each prompt and response to a file on iCloud.
It’s not totally local because iCloud is in... | 2025-06-24T21:05:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ljn09w/i_made_a_free_ios_app_for_people_who_run_llms/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljn09w | false | null | t3_1ljn09w | /r/LocalLLaMA/comments/1ljn09w/i_made_a_free_ios_app_for_people_who_run_llms/ | false | false | self | 7 | null |
Falcon H1 Models | 1 | Why is this model family slept on?
From what I understood it's a new hybrid architecture and it has really good results.
Am I missing something? | 2025-06-24T21:05:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ljmzvi/falcon_h1_models/ | Daemontatox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljmzvi | false | null | t3_1ljmzvi | /r/LocalLLaMA/comments/1ljmzvi/falcon_h1_models/ | false | false | self | 1 | null |
Vision model for detecting welds? | 3 | I searched for "best vision models" up to date, but are there any differences between industry applications and "document scanning" models? Should we proceed to fine-tune them with photos to identify correct welds vs incorrect welds?
Can anyone guide us regarding vision model in industry applications (mainly constructi... | 2025-06-24T20:49:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ljmlcn/vision_model_for_detecting_welds/ | -Fake_GTD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljmlcn | false | null | t3_1ljmlcn | /r/LocalLLaMA/comments/1ljmlcn/vision_model_for_detecting_welds/ | false | false | self | 3 | null |
We built a tool that helps you plan features before using AI to code (public beta launch) | 0 | 2025-06-24T20:41:25 | https://v.redd.it/8td2dkxbsx8f1 | eastwindtoday | /r/LocalLLaMA/comments/1ljmdzg/we_built_a_tool_that_helps_you_plan_features/ | 1970-01-01T00:00:00 | 0 | {} | 1ljmdzg | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8td2dkxbsx8f1/DASHPlaylist.mpd?a=1753519303%2CN2QyMDA0NWYxYWMyMTUzZmM1NDYzN2VlNTc1NmUwYmExYjgzOWViMmQ4NjM0MGM2ZWUwMmUzODc4NDA4NmRjYw%3D%3D&v=1&f=sd', 'duration': 377, 'fallback_url': 'https://v.redd.it/8td2dkxbsx8f1/DASH_1080.mp4?source=fallback', '... | t3_1ljmdzg | /r/LocalLLaMA/comments/1ljmdzg/we_built_a_tool_that_helps_you_plan_features/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/OTEzaXJneGJzeDhmMZvu8dVTETA4R0lbzOCMpCYSy2EGu4LkODdfToxoMFWa.png?width=108&crop=smart&format=pjpg&auto=webp&s=1a05d63253a12b40a6d990bf048acd8f7df08... | ||
LocalLlama is saved! | 564 | LocalLlama has been many folks' favorite place to be for everything AI, so it's good to see a new moderator taking the reins!
Thanks to u/HOLUPREDICTIONS for taking the reins!
More detail here: [https://www.reddit.com/r/LocalLLaMA/comments/1ljlr5b/subreddit\_back\_in\_business/](https://www.reddit.com/r/LocalLLaMA/co... | 2025-06-24T20:30:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ljm3pb/localllama_is_saved/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljm3pb | false | null | t3_1ljm3pb | /r/LocalLLaMA/comments/1ljm3pb/localllama_is_saved/ | false | false | self | 564 | null |
Agent Arena – crowdsourced testbed for evaluating AI agents in the wild | 11 | We just launched Agent Arena -- a crowdsourced testbed for evaluating AI agents in the wild.
Think Chatbot Arena, but for agents.
It’s completely free to run matches. We cover the inference.
I always find myself debating whether to use 4o or o3, but now I just try both on Agent Arena!
Try it out: https://obl.dev/ | 2025-06-24T20:29:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ljm32s/agent_arena_crowdsourced_testbed_for_evaluating/ | tejpal-obl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljm32s | false | null | t3_1ljm32s | /r/LocalLLaMA/comments/1ljm32s/agent_arena_crowdsourced_testbed_for_evaluating/ | false | false | self | 11 | null |
Polaris: A Post-training recipe for scaling RL on Advanced ReasonIng models | 46 | Here is the link.
I have no idea what it is but it was released a few days ago and has an intriguing concept so I decided to post here to see if anyone knows about this. It seems pretty new but its some sort of post-training RL with a unique approach that claims a Qwen3-4b performance boost that surpasses Claude-4-Opu... | 2025-06-24T20:28:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ljm2n2/polaris_a_posttraining_recipe_for_scaling_rl_on/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljm2n2 | false | null | t3_1ljm2n2 | /r/LocalLLaMA/comments/1ljm2n2/polaris_a_posttraining_recipe_for_scaling_rl_on/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E0mxyQKIBg9_L5-Uibj4NGnf8c47vbOyJqVz_vOZVpU.png?width=108&crop=smart&auto=webp&s=5a15e7d7ac0b52dfdf2170149515a16efe27df26', 'width': 108}, {'height': 108, 'url': 'h... |
0.5 tok/s with R1 Q4 on EPYC 7C13 with 1TB of RAM, BIOS settings to blame? | 12 | [Now I've got your attention, I hope!](https://i.redd.it/8wptyvlppx8f1.gif)
Hi there everyone!
I've just recently assembled an entire home server system, however, for some reason, the performance I'm getting is atrocious with 1TB of 2400MHz RAM on EPYC 7C13 running on Gigabyte MZ32-AR1. I'm getting 3-12 tok/s on pr... | 2025-06-24T20:27:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ljm13j/05_toks_with_r1_q4_on_epyc_7c13_with_1tb_of_ram/ | BasicCoconut9187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljm13j | false | null | t3_1ljm13j | /r/LocalLLaMA/comments/1ljm13j/05_toks_with_r1_q4_on_epyc_7c13_with_1tb_of_ram/ | false | false | 12 | null | |
Is it normal to have significantly more performance from Qwen 235B compared to Qwen 32B when doing partial offloading? | 3 | Here are the llama-swap settings I am running. My hardware is a Xeon E5-2690v4 with 128GB of 2400 DDR4 and 2 P104-100 8GB GPUs. While prompt processing is faster on the 32B (12 tk/s vs 5 tk/s), the actual inference is much faster on the 235B (5 tk/s vs 2.5 tk/s). Does anyone know why this is? Even if the 235B only has 22B... | 2025-06-24T20:24:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ljlyrs/is_it_normal_to_have_significantly_more/ | OUT_OF_HOST_MEMORY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljlyrs | false | null | t3_1ljlyrs | /r/LocalLLaMA/comments/1ljlyrs/is_it_normal_to_have_significantly_more/ | false | false | self | 3 | null |
Subreddit back in business | 632 | As most of you folks I'm also not sure what happened but I'm attaching screenshot of the last actions taken by the previous moderator before deleting their account | 2025-06-24T20:16:36 | HOLUPREDICTIONS | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ljlr5b | false | null | t3_1ljlr5b | /r/LocalLLaMA/comments/1ljlr5b/subreddit_back_in_business/ | false | false | default | 632 | {'enabled': True, 'images': [{'id': '1sx7mwusnx8f1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/1sx7mwusnx8f1.jpeg?width=108&crop=smart&auto=webp&s=505c891feba9f30abe8510ca980b0be8bd842a92', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/1sx7mwusnx8f1.jpeg?width=216&crop=smart&auto=w... | |
Please help me understand frequency penalty, presence penalty, and repetition penalty | 3 | I am writing a (very amateur) python LLM story telling app. My app sends a POST request to Ollama or OpenRouter or whatever backend you want, with a bunch of parameters I found online. Things like model, prompt, and these three penalties: presence\_penalty, frequency\_penalty, and repetition\_penalty.
In debugging I w... | 2025-06-24T20:10:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ljllgo/please_help_me_understand_frequency_penalty/ | ruumies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljllgo | false | null | t3_1ljllgo | /r/LocalLLaMA/comments/1ljllgo/please_help_me_understand_frequency_penalty/ | false | false | self | 3 | null |
Mobile AI Apps? | 2 | Kobold.cpp on termux gets pretty decent speed with Gemma 3 4b 6ksm ~~(sutra version is faster and better at writing, descriptions and everything else except extremely low context windows)~~ on snapdragon 850, Mi 9.
I could not find other programs or apps that run models locally on Android aside from Google Edge Galler... | 2025-06-24T20:10:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ljll8c/mobile_ai_apps/ | WEREWOLF_BX13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljll8c | false | null | t3_1ljll8c | /r/LocalLLaMA/comments/1ljll8c/mobile_ai_apps/ | false | false | self | 2 | null |
Knowledge Database Advise needed/ Local RAG for IT Asset Discovery - Best approach for varied data? | 1 | [removed] | 2025-06-24T19:58:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ljlalp/knowledge_database_advise_needed_local_rag_for_it/ | Rompe101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljlalp | false | null | t3_1ljlalp | /r/LocalLLaMA/comments/1ljlalp/knowledge_database_advise_needed_local_rag_for_it/ | false | false | self | 1 | null |
The Context Lock-In Problem No One’s Talking About | 1 | [removed] | 2025-06-24T19:56:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ljl87q/the_context_lockin_problem_no_ones_talking_about/ | Imad-aka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljl87q | false | null | t3_1ljl87q | /r/LocalLLaMA/comments/1ljl87q/the_context_lockin_problem_no_ones_talking_about/ | false | false | self | 1 | null |
Are we back? | 1 | [removed] | 2025-06-24T19:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/1ljkno8/are_we_back/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljkno8 | false | null | t3_1ljkno8 | /r/LocalLLaMA/comments/1ljkno8/are_we_back/ | false | false | self | 1 | null |
Is it normal to have significantly more performance from Qwen 235B compared to Qwen 32B when doing partial offloading? | 1 | [removed] | 2025-06-24T19:28:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ljkilv/is_it_normal_to_have_significantly_more/ | OUT_OF_HOST_MEMORY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljkilv | false | null | t3_1ljkilv | /r/LocalLLaMA/comments/1ljkilv/is_it_normal_to_have_significantly_more/ | false | false | self | 1 | null |
Combining VRam for Inference | 1 | [removed] | 2025-06-24T19:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ljkach/combining_vram_for_inference/ | Th3OnlyN00b | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ljkach | false | null | t3_1ljkach | /r/LocalLLaMA/comments/1ljkach/combining_vram_for_inference/ | false | false | self | 1 | null |
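The rows above follow the column schema listed in the dataset header (title, score, selftext, created, url, author, and so on). A minimal sketch of filtering and sorting records of that shape in plain Python — the records below are illustrative copies of a few rows, and the `min_score` threshold is an arbitrary example, not something defined by the dataset:

```python
# Sketch: filter and sort records shaped like the rows in this dump.
# Field names (title, score, author, created) follow the dataset's
# schema header; the sample values are copied from rows above.
from datetime import datetime

rows = [
    {"title": "Subreddit back in business", "score": 632,
     "author": "HOLUPREDICTIONS", "created": "2025-06-24T20:16:36"},
    {"title": "LocalLlama is saved!", "score": 564,
     "author": "danielhanchen", "created": "2025-06-24T20:30:08"},
    {"title": "Vision model for detecting welds?", "score": 3,
     "author": "-Fake_GTD", "created": "2025-06-24T20:49:23"},
]

def top_posts(records, min_score=100):
    """Return records at or above min_score, newest first."""
    kept = [r for r in records if r["score"] >= min_score]
    return sorted(kept,
                  key=lambda r: datetime.fromisoformat(r["created"]),
                  reverse=True)

for post in top_posts(rows):
    print(post["score"], post["title"])
```

With the sample rows above, this keeps the two high-score posts and orders them by `created` descending.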