title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
glm 4.5 air which version is working with lmstudio in windows? | 0 | Can you tell me if there is a program on huggingface.com that works with lmstudio on Windows? Thanks. | 2025-08-10T07:11:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mmc2lg/glm_45_air_which_version_is_working_with_lmstudio/ | Bobcotelli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmc2lg | false | null | t3_1mmc2lg | /r/LocalLLaMA/comments/1mmc2lg/glm_45_air_which_version_is_working_with_lmstudio/ | false | false | self | 0 | null |
Best local model with function calling? | 3 | Hello! Need a bit of advice for what model to pick for this use case:
I need a local model which can call functions (like gemini or openai models can) but is also uncensored. It needs to be uncensored because the prompts it would receive could potentially be NSFW and I would not want it to refuse to call functions bec... | 2025-08-10T06:58:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mmbufa/best_local_model_with_function_calling/ | construct_of_paliano | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmbufa | false | null | t3_1mmbufa | /r/LocalLLaMA/comments/1mmbufa/best_local_model_with_function_calling/ | false | false | self | 3 | null |
Qwen Code CLI: Free, Open-Source CLI for Qwen3-Coder – Generous Daily Limits & Local Dev Friendly | 0 | I wanted to share an impressive AI coding tool that offers genuinely generous free usage: **Qwen Code CLI**.
## What makes it special?
**Actually generous free tier:**
- 🎯 **2,000 free requests per day** via Qwen OAuth (recommended)
- ⚡ **60 requests per minute** rate limit
- 🔓 **No token... | 2025-08-10T06:54:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mmbs3c/qwen_code_cli_free_opensource_cli_for_qwen3coder/ | Outrageous_Permit154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmbs3c | false | null | t3_1mmbs3c | /r/LocalLLaMA/comments/1mmbs3c/qwen_code_cli_free_opensource_cli_for_qwen3coder/ | false | false | self | 0 | null |
Seeking Feedback: Self-Hosted RAG System + Multi-GPU Hardware Setup for Local LLM Development | 6 | Hey r/LocalLLaMA,
I've built a custom RAG (Retrieval-Augmented Generation) system for local software development and I'm planning a hardware upgrade to support serious LLM workloads. Would love your feedback on both the software architecture and hardware choices before I pull the trigger on the GPU purchases.
## Soft... | 2025-08-10T06:21:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mmb9rg/seeking_feedback_selfhosted_rag_system_multigpu/ | Halfpikant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmb9rg | false | null | t3_1mmb9rg | /r/LocalLLaMA/comments/1mmb9rg/seeking_feedback_selfhosted_rag_system_multigpu/ | false | false | self | 6 | null |
Any proper working Local LLM and Agentic CLI | 5 | I have tried using **Qwen 3 Coder 30B A3B : IQ3\_M** (unsloth) with
1. Opencode
2. Roo Code
3. Kilo Code
4. Claude Code + Cloud Code Router
5. Crush
6. Qwen CLI
7. Claude Code (claude models)
**My Hardware specs**
* AMD R5 5600x
* AMD RX7800XT
* 48GB 3200Mhz DDR4 RAM
Most of the agentic tools ar... | 2025-08-10T05:45:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mmaosq/any_proper_working_local_llm_and_agentic_cli/ | Grouchy-Drag-2281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmaosq | false | null | t3_1mmaosq | /r/LocalLLaMA/comments/1mmaosq/any_proper_working_local_llm_and_agentic_cli/ | false | false | self | 5 | null |
I found a way to compress meaning (semantic compression, not data compression) that AI systems can decompress at ratios that should be impossible. | 0 | Traditional compression: 10:1 maximum (Shannon's entropy limit)
Semantic compression: 5000:1 achieved (17,500:1 on some examples)
Level 8 is 18,000:1, but current AI can't do it; Grok theorized it was the upper limit. ChatGPT and Claude can easily do 5000:1.
[I wrote up the full technical details, demo, and proof here... | 2025-08-10T05:29:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mmaetp/i_found_a_way_to_compress_meaning_semantic/ | barrphite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mmaetp | false | null | t3_1mmaetp | /r/LocalLLaMA/comments/1mmaetp/i_found_a_way_to_compress_meaning_semantic/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'HWu_93ZkhF_shFuunTZCaLGYfjGp5Ov8xGnLgGve-k8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/HWu_93ZkhF_shFuunTZCaLGYfjGp5Ov8xGnLgGve-k8.png?width=108&crop=smart&auto=webp&s=23a02679ed17d57edc2472fbb19c19ad3dd3254b', 'width': 108}, {'height': 113, 'url': 'h... |
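The ratios in the row above are quoted without a stated metric. A minimal sketch of the obvious character-count definition follows; this is an assumption about what the post measures, not a reproduction of its method.

```
# Hypothetical metric: characters in the original text divided by
# characters in the compressed "seed". This definition is assumed;
# the post does not specify how its ratios were computed.
def compression_ratio(original: str, compressed: str) -> float:
    if not compressed:
        raise ValueError("compressed text must be non-empty")
    return len(original) / len(compressed)

original = "x" * 175_000   # stand-in for a 175k-character document
compressed = "x" * 10      # stand-in for a 10-character seed
print(f"{compression_ratio(original, compressed):,.0f}:1")  # 17,500:1
```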
Best model for coding? | 1 | [removed] | 2025-08-10T05:13:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mma52v/best_model_for_coding/ | Unlucky-Tap8945 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mma52v | false | null | t3_1mma52v | /r/LocalLLaMA/comments/1mma52v/best_model_for_coding/ | false | false | self | 1 | null |
Different way to approach MoE | 0 | What do you think about this idea for changing mixture-of-experts models?
## Traditional MoE: Each Layer Has Its Own Experts
In standard MoE, each layer has completely separate experts:
```
6-layer model with 4 experts per layer = 24 total experts
Layer 1: [Expert_1_A, Expert_1_B, Expert_1_C, Expert_1_D]
Layer 2: [E...
```
 | 2025-08-10T05:12:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mma4gm/different_way_to_approach_moe/ | FestiveDiamond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mma4gm | false | null | t3_1mma4gm | /r/LocalLLaMA/comments/1mma4gm/different_way_to_approach_moe/ | false | false | self | 0 | null |
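The post's example block is cut off above. A small Python sketch of the traditional per-layer expert table it describes, plus a shared-pool variant of the kind the post appears to be proposing; the truncation means the exact proposal is an assumption.

```
# Traditional MoE: every layer owns its own disjoint experts (24 total).
n_layers, experts_per_layer = 6, 4
traditional = {
    layer: [f"Expert_{layer}_{chr(ord('A') + i)}" for i in range(experts_per_layer)]
    for layer in range(1, n_layers + 1)
}
print(traditional[1])                             # ['Expert_1_A', ..., 'Expert_1_D']
print(sum(len(v) for v in traditional.values()))  # 24

# Assumed alternative: one shared pool that every layer routes into,
# cutting the total expert count while keeping per-layer routing.
shared_pool = [f"Expert_{chr(ord('A') + i)}" for i in range(experts_per_layer)]
shared = {layer: shared_pool for layer in range(1, n_layers + 1)}
print(len(shared_pool))                           # 4 experts reused by all 6 layers
```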
Self-evolution as a method for improving LLM reasoning | 7 | 2025-08-10T04:57:08 | https://github.com/Chengsong-Huang/R-Zero | Conscious-content42 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mm9ur4 | false | null | t3_1mm9ur4 | /r/LocalLLaMA/comments/1mm9ur4/selfevolution_as_a_method_for_improving_llm/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': 'O2bR2rA5cN8Hjkpcf5puNP0si-NEXsW8lAv_IhsqbXw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O2bR2rA5cN8Hjkpcf5puNP0si-NEXsW8lAv_IhsqbXw.png?width=108&crop=smart&auto=webp&s=2a2c50c678fd99040f5a2b108cec074fef091195', 'width': 108}, {'height': 108, 'url': 'h... | |
Any small model that works well with Memory MCP? | 3 | I like the idea of [Memory MCP](https://github.com/modelcontextprotocol/servers/tree/main/src/memory), where the LLM gradually builds a knowledge graph about you. But I want my graph to be private, so using it with the big SOTA models isn't an option.
I tried using Memory MCP with local models like Qwen3 30B, Gemma 12... | 2025-08-10T04:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mm9osg/any_small_model_that_works_well_with_memory_mcp/ | MiBoSuPan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm9osg | false | null | t3_1mm9osg | /r/LocalLLaMA/comments/1mm9osg/any_small_model_that_works_well_with_memory_mcp/ | false | false | self | 3 | null |
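For context, Memory MCP persists a graph of entities, observations, and relations that the model reads and updates. A toy version of that structure is sketched below; the field names are illustrative assumptions, not the server's actual schema.

```
# Minimal in-memory knowledge graph of the shape Memory MCP maintains:
# entities with observations, plus directed relations between them.
# Names here are illustrative assumptions, not the server's schema.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    entity_type: str
    observations: list[str] = field(default_factory=list)

entities = {"me": Entity("me", "person", ["prefers local models"])}
relations = [("me", "uses", "Qwen3 30B")]

entities["me"].observations.append("keeps the graph private")
print(entities["me"].observations)
```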
MCP Tool for my RAG Use Case | 1 | I'm a PhD student; sometimes, when I've collected literature from several PDFs, I like to do RAG on my documents to recall main details when I'm doing a literature review. Also, sometimes I have several articles saved, and I forget which exact article made the key point that I'm citing.
The issue I'm currently... | 2025-08-10T04:27:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mm9c0y/mcp_tool_for_my_rag_use_case/ | combrade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm9c0y | false | null | t3_1mm9c0y | /r/LocalLLaMA/comments/1mm9c0y/mcp_tool_for_my_rag_use_case/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UPKGq_-A2uhQXrzZUEfhzK-9YHE6c83aauW4EPuQFOg', 'resolutions': [{'height': 109, 'url': 'https://external-preview.redd.it/UPKGq_-A2uhQXrzZUEfhzK-9YHE6c83aauW4EPuQFOg.png?width=108&crop=smart&auto=webp&s=bfa80c765710db4342130686fd897dc38b0cdddd', 'width': 108}, {'height': 218, 'url': '... |
So I just downloaded Edge Gallery by Google; is gemma 3n-E4b-it-int4 a good model? | 0 | For general and quick RP tasks? | 2025-08-10T04:10:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mm911q/so_just_download_edge_gallery_by_google_is_gemma/ | YourMoM__12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm911q | false | null | t3_1mm911q | /r/LocalLLaMA/comments/1mm911q/so_just_download_edge_gallery_by_google_is_gemma/ | false | false | self | 0 | null |
Using local LLM to generate Stable Diffusion prompts for image generation | 7 | 2025-08-10T03:59:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mm8t5v/using_local_llm_to_generate_stable_diffusion/ | meshreplacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm8t5v | false | null | t3_1mm8t5v | /r/LocalLLaMA/comments/1mm8t5v/using_local_llm_to_generate_stable_diffusion/ | false | false | 7 | null | ||
Is GPT-OSS the meta for low vram setups? | 23 | A lot of people have been clowning on OpenAI’s new models for the heavy handed censorship, which, yeah, is annoying, but I don’t see a lot of people talking about the fact that the performance of this model with low vram is genuinely impressive. My relatively humble setup (a 3060ti with 8GB of vram, an Intel 11700k, an... | 2025-08-10T03:57:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mm8rqg/is_gptoss_the_meta_for_low_vram_setups/ | QbitKrish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm8rqg | false | null | t3_1mm8rqg | /r/LocalLLaMA/comments/1mm8rqg/is_gptoss_the_meta_for_low_vram_setups/ | false | false | self | 23 | null |
Has anyone noticed that Qwen 3 30B A3B 2507 instruct got silently neutered compared to the first version? | 6 | The first version of 30B only needed an honest and precise system prompt, and it did its job like an LLM should. Now, I get "I am sorry, I cant assist with that." from 2507.
It's so disappointing. | 2025-08-10T03:50:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mm8mzu/has_anyone_noticed_that_qwen_3_30b_a3b_2507/ | Sidran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm8mzu | false | null | t3_1mm8mzu | /r/LocalLLaMA/comments/1mm8mzu/has_anyone_noticed_that_qwen_3_30b_a3b_2507/ | false | false | self | 6 | null |
Newb Alert | 2 | I’m interested in setting up a local LLM and just starting to learn about it. Before I jump right into getting hardware, I thought I’d consult the great Reddit hive mind.
Hardware is my first question - do I need to invest in a legit system to really start learning or can I dust off an old laptop or a Pi and use that t... | 2025-08-10T03:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mm81lx/newb_alert/ | Luppercut777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm81lx | false | null | t3_1mm81lx | /r/LocalLLaMA/comments/1mm81lx/newb_alert/ | false | false | self | 2 | null |
Hello Huggingface World! | 0 | Belto’s GPT-OSS-20B = OpenAI-compatible 20B-parameter model (Q8₀ GGUF) → runs locally, resource-efficient, education-optimized, no vendor lock-in. | 2025-08-10T02:59:07 | https://huggingface.co/HuggingBelto/gpt-oss-20b | Unknown_Talk_OG | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mm7o94 | false | null | t3_1mm7o94 | /r/LocalLLaMA/comments/1mm7o94/hello_huggingface_world/ | false | false | default | 0 | null |
Multi head classifiers aren't always the answer: empirical comparison with adaptive classifiers | 1 | [removed] | 2025-08-10T02:58:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mm7njt/multi_head_classifiers_arent_always_the_answer/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm7njt | false | null | t3_1mm7njt | /r/LocalLLaMA/comments/1mm7njt/multi_head_classifiers_arent_always_the_answer/ | false | false | 1 | null | |
Qwen3 jailbreak | 0 | So, I enjoy finding ways to bypass restrictions on local models (just out of curiosity/boredom). I was in the middle of doing this for Qwen3:8B and I had gotten it to the point of providing production grade “malware”, (explain malware>improve explanation> provide mock example > generate list of errors from above code >... | 2025-08-10T02:56:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mm7m6s/qwen3_jailbreak/ | Zealousideal-Slip-49 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm7m6s | false | null | t3_1mm7m6s | /r/LocalLLaMA/comments/1mm7m6s/qwen3_jailbreak/ | false | false | self | 0 | null |
local llama in Spanish | 1 | I'm creating a local LLM for offline use for a Spanish speaker. Has anyone created a list of files, or where can I get files to add? ChatGPT has recommended Kiwix and Wikipedia and subtitles. What would you guys recommend?
I'm creating a local LLM for a Spanish speaker to use offline. Has anyone created a... | 2025-08-10T02:52:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mm7jyc/local_llama_en_espanolspanish/ | seriouswhite | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm7jyc | false | null | t3_1mm7jyc | /r/LocalLLaMA/comments/1mm7jyc/local_llama_en_espanolspanish/ | false | false | self | 1 | null |
I'm using 5 2b models in an agentic workflow to generate synthetic datasets | 0 | 2025-08-10T02:51:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mm7jaa/im_using_5_2b_models_in_an_agentic_workflow_to/ | Sativatoshi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm7jaa | false | null | t3_1mm7jaa | /r/LocalLLaMA/comments/1mm7jaa/im_using_5_2b_models_in_an_agentic_workflow_to/ | false | false | 0 | null | ||
OpenAI gpt-oss-20b & 120b model performance on the RTX Pro 6000 Blackwell vs RTX 5090M | 74 | Preface: I am not a programmer, just an AI enthusiast and user. The GPU I got is mainly used for video editing and creative work, but I know it's very well suited to running large AI models, so I decided to test it out. If you want me to test the performance of other models, let me know, as long as it works in LM Studio.
Thanks ... | 2025-08-10T02:40:18 | traderjay_toronto | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mm7azs | false | null | t3_1mm7azs | /r/LocalLLaMA/comments/1mm7azs/openai_gptoss20b_120_model_performance_on_the_rtx/ | false | false | default | 74 | {'enabled': True, 'images': [{'id': 'g22r9c3au3if1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/g22r9c3au3if1.jpeg?width=108&crop=smart&auto=webp&s=58fa0711aeeb0b6b69eca84530f882c00432dfc2', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/g22r9c3au3if1.jpeg?width=216&crop=smart&auto=w... | |
The model router system of GPT-5 is flawed by design. | 131 | The model router system of GPT-5 is flawed by design.
The model router has to be fast and cheap, which means using a small, lightweight (low-parameter) model. But small models lack the deep comprehension and intelligence of larger models.
I've seen hundreds of posts of people claiming GPT-5 can't do basic math or the reas... | 2025-08-10T01:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mm6hsg/the_model_router_system_of_gpt5_is_flawed_by/ | True_Requirement_891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm6hsg | false | null | t3_1mm6hsg | /r/LocalLLaMA/comments/1mm6hsg/the_model_router_system_of_gpt5_is_flawed_by/ | false | false | self | 131 | null |
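A minimal sketch of the routing pattern being criticized: a cheap classifier (here a regex heuristic standing in for a small router model) decides which backend sees the prompt. The backend names are placeholders, not anything OpenAI has published; the complaint above amounts to this classifier misfiring and silently sending hard prompts to the light model.

```
# Toy router: a cheap heuristic stands in for the small routing model.
# Backend names are placeholders, not OpenAI's internals.
import re

def route(prompt: str) -> str:
    looks_hard = bool(re.search(r"prove|derive|step by step|\d+\.\d+", prompt))
    return "heavy-reasoning-model" if looks_hard else "light-chat-model"

print(route("hi there"))              # light-chat-model
print(route("solve 5.9 = x + 5.11"))  # heavy-reasoning-model
```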
The model router system of GPT-5 is flawed by design. | 1 | The model router has to be fast and cheap, which means using a small, lightweight (low-parameter) model. But small models lack the deep comprehension and intelligence of larger models.
I've seen hundreds of posts of people claiming GPT-5 can't do basic math or that its reasoning is quite lacking, which is usually being solved by... | 2025-08-10T01:58:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mm6gve/the_model_router_system_or_gpt5_is_flawed_by/ | True_Requirement_891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm6gve | false | null | t3_1mm6gve | /r/LocalLLaMA/comments/1mm6gve/the_model_router_system_or_gpt5_is_flawed_by/ | false | false | self | 1 | null |
Smallest LLM which has function calling and is open source? | 16 | Looking for a free, open-source LLM which can be used for tool calling, for agent-related development on a local machine. | 2025-08-10T00:57:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mm59n7/smallest_llm_which_has_function_calling_and_open/ | Solid-Look3548 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm59n7 | false | null | t3_1mm59n7 | /r/LocalLLaMA/comments/1mm59n7/smallest_llm_which_has_function_calling_and_open/ | false | false | self | 16 | null |
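Whichever model you pick, the request shape is usually the OpenAI-style tools schema, which most local servers (llama.cpp server, vLLM, Ollama) accept. A sketch follows; the endpoint URL and model name are placeholders.

```
# Sketch of an OpenAI-compatible tool-call request against a local
# server. URL and model name are placeholders for whatever you run.
import json, urllib.request

payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
reply = json.load(urllib.request.urlopen(req))
print(reply["choices"][0]["message"].get("tool_calls"))
```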
MiMo-VL-7B-RL-2508 from XiaoMi is out | 60 | # 📈 Performance Improvements
MiMo-VL-7B-RL-2508 demonstrates consistent improvements across both image and video benchmarks, achieving notable milestones of **70.6 on MMMU** and **70.8 on VideoMME**.
Jan is fast | 0 | Jan is ideal for use on laptops and mini PCs, as it offers good performance even on devices with limited resources. I have activated the iGPU.
https://preview.redd.it/t5fv08hr93if1.png?width=975&format=png&auto=webp&s=ae960574b06a46e45d70d5d2a8866515f73b9953
https://preview.redd.it/xodk074b93if1.png?width=744&format=... | 2025-08-10T00:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mm50m4/jan_is_fast/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm50m4 | false | null | t3_1mm50m4 | /r/LocalLLaMA/comments/1mm50m4/jan_is_fast/ | false | false | 0 | null | |
Statement on the Originality of OpenRLHF and veRL FSDP RLHF | 2 | From the original zhihu blog: [https://zhuanlan.zhihu.com/p/23147932785](https://zhuanlan.zhihu.com/p/23147932785)
**Recently, there has been quite a bit of discussion and controversy online about OpenRLHF and veRL.**
**As the original author, I feel compelled to issue a statement.**
In short: **OpenRLHF is like Ka... | 2025-08-10T00:16:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mm4g6l/statement_on_the_originality_of_openrlhf_and_verl/ | seventh_day123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm4g6l | false | null | t3_1mm4g6l | /r/LocalLLaMA/comments/1mm4g6l/statement_on_the_originality_of_openrlhf_and_verl/ | false | false |  | 2 | null |
REAPER ReaScript (LUA) | 3 | Wondering what might be an approach, to create a specialist in REAPER ReaScript specifically.
It has a robust but reasonably-sized API, clearly documented in a PDF. There's also an absolute wealth of example scripts freely available.
How might this work? Is there a specific model that can be augmented, so that it fu... | 2025-08-10T00:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mm4enw/reaper_reascript_lua/ | ferropop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm4enw | false | null | t3_1mm4enw | /r/LocalLLaMA/comments/1mm4enw/reaper_reascript_lua/ | false | false | self | 3 | null |
Not looking too hot for gpt5 | 0 | [https://trackingai.org/IQ](https://trackingai.org/IQ) | 2025-08-09T23:58:14 | Different_Fix_2217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mm42g7 | false | null | t3_1mm42g7 | /r/LocalLLaMA/comments/1mm42g7/not_looking_too_hot_for_gpt5/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'x8lshBtTq5zd0yw2AfXNX3Mxfo6hooM6SOrfQpVZF90', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/46zc9nw613if1.png?width=108&crop=smart&auto=webp&s=8e10df0bf5e5018868d28ef6ee28314b6448e659', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/46zc9nw613if1.png... | ||
What are your funniest model tool calling projects? | 1 | I just stumbled upon this concept, and while it's fun to make an AI call Python functions, I don't really see the point.
Putting in a GUI with option selectors seems to be a faster and less resource-hungry alternative for me.
Obviously I'm asking about projects with local LLMs | 2025-08-09T23:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mm3kc9/what_are_your_funniest_model_tool_calling_projects/ | BalStrate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm3kc9 | false | null | t3_1mm3kc9 | /r/LocalLLaMA/comments/1mm3kc9/what_are_your_funniest_model_tool_calling_projects/ | false | false | self | 1 | null |
Sampler Settings for GLM 4.5-Air | 5 | I'm running glm 4.5 air locally using 48gb vram and 64gb ram. I'm struggling to make it coherent during chat using sillytavern as a front end. I've also used their suggested sampler settings on their website, but that didn't help either | 2025-08-09T23:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mm2vm2/sampler_settings_for_glm_45air/ | JeffDunham911 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm2vm2 | false | null | t3_1mm2vm2 | /r/LocalLLaMA/comments/1mm2vm2/sampler_settings_for_glm_45air/ | false | false | self | 5 | null |
Llama.cpp and ROCm - how to get it working | 6 | I finally (**FINALLY**) switched over to Ubuntu on my main desktop this morning. I'd read about how MoE models run much faster on Linux compared to Windows on ROCm.
Now I'm trying to compile llama.cpp with ROCm support and it cannot find HIP no matter what I throw at it. Tried to build with CMake - CPU only. Tried to ... | 2025-08-09T22:51:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mm2nr8/llamacpp_and_rocm_how_to_get_it_working/ | Thrumpwart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm2nr8 | false | null | t3_1mm2nr8 | /r/LocalLLaMA/comments/1mm2nr8/llamacpp_and_rocm_how_to_get_it_working/ | false | false | self | 6 | null |
suggestions on good TTS reader ? | 1 | Need to have something like below:
set 設定 設定
outside 出面 出面
et 等 等
strategy 策略 策略
clearly 清楚 清楚
Above is a sample in Cantonese; the actual list is about 10,000 of the most common words, and 2,000 sentences, in virtually any language.
TTS should read out at a slow speed, English once, foreign languages twice; prefer 0.5 X sp... | 2025-08-09T22:48:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mm2lg1/suggestions_on_good_tts_reader/ | Basic_Pineapple7792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm2lg1 | false | null | t3_1mm2lg1 | /r/LocalLLaMA/comments/1mm2lg1/suggestions_on_good_tts_reader/ | false | false | self | 1 | null |
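A sketch of the requested reading pattern using pyttsx3 (an offline TTS library): English once, the foreign form twice, at roughly half speed. Whether a Cantonese voice is available depends on the installed OS voices, so voice selection is left out as an assumption.

```
# Offline sketch with pyttsx3: English once, foreign form twice,
# at ~0.5x speed. Picking a Cantonese voice depends on the OS's
# installed voices and is deliberately omitted here.
import pyttsx3

rows = [("set", "設定"), ("outside", "出面"), ("strategy", "策略")]

engine = pyttsx3.init()
engine.setProperty("rate", int(engine.getProperty("rate") * 0.5))

for english, foreign in rows:
    engine.say(english)   # English once
    engine.say(foreign)   # foreign twice
    engine.say(foreign)
engine.runAndWait()
```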
Open source speech to speech (Voice Agent) model? | 5 | Is there an open source speech to speech (Voice Agent) model, like Amazon Nova Sonic? | 2025-08-09T22:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mm2izb/open_source_speech_to_speech_voice_agent_model/ | Powerful-Angel-301 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm2izb | false | null | t3_1mm2izb | /r/LocalLLaMA/comments/1mm2izb/open_source_speech_to_speech_voice_agent_model/ | false | false | self | 5 | null |
Reasoning Models + Tool Use outperform most vision models for complex object detection | 50 | Task: detect the street sign in this image.
This is a hard problem for most SOTA object detectors. The sign is barely visible, even for humans. So we gave a reasoning system (o3) access to tools: zoom, crop, and call an external detector. No training, no fine-tuning—just a single prompt. And it worked. See it in actio... | 2025-08-09T22:34:15 | https://v.redd.it/jna4croam2if1 | bci-hacker | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mm2ae5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jna4croam2if1/DASHPlaylist.mpd?a=1757370872%2COGZlNmJjODExMjUzMjM5ZmMwYmQxODVlNjFmMDYwMGYyMTA3ODc0Yzg3YTEwYjhhZTIzMjJiYzk3YzQwMGI1Mw%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/jna4croam2if1/DASH_1080.mp4?source=fallback', 'h... | t3_1mm2ae5 | /r/LocalLLaMA/comments/1mm2ae5/reasoning_models_tool_use_outperform_most_vision/ | false | false | 50 | {'enabled': False, 'images': [{'id': 'bjJwOG1yb2FtMmlmMSubCZJHqCfZ9hv7QWq0qC3p6f_M7abBmrHK3aFoqfso', 'resolutions': [{'height': 110, 'url': 'https://external-preview.redd.it/bjJwOG1yb2FtMmlmMSubCZJHqCfZ9hv7QWq0qC3p6f_M7abBmrHK3aFoqfso.png?width=108&crop=smart&format=pjpg&auto=webp&s=598a91c9a42c6b67f24818905a6093a531a4... | |
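A skeleton of the loop described above: the reasoning model iterates crop/zoom plus an external detector until something is found. Here a fixed center zoom stands in for the model's chosen crops, and the detector is a stub to be replaced with any off-the-shelf model; this sketches the control flow, not any specific vendor API.

```
# Skeleton of the "reasoning model + tools" loop. The detector is a
# stub; in the real system the model decides where to crop/zoom.
from PIL import Image

def zoom_center(img: Image.Image, factor: float = 1.5) -> Image.Image:
    """Crop the central 1/factor region and upscale it back to full size."""
    w, h = img.size
    cw, ch = int(w / factor), int(h / factor)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch)).resize((w, h))

def external_detector(img: Image.Image, label: str) -> list:
    raise NotImplementedError("plug in any off-the-shelf detector here")

def find_object(img: Image.Image, label: str, max_steps: int = 5) -> list:
    region = img
    for _ in range(max_steps):
        hits = external_detector(region, label)  # tool call
        if hits:
            return hits
        region = zoom_center(region)             # look closer and retry
    return []
```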
Looking for client on MacOS for Qwen3-4B-Thinking-2507 with MCP support | 3 | I'm basically looking for a client that is similar to Claude Desktop on MacOS. Ideally it should be open source, allow MCP calling, and communicate properly with mlx-lm-hosted variants of Qwen3-4B-Thinking-2507. I'd appreciate recommendations on this front that satisfy the above requirements! | 2025-08-09T22:30:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mm27gt/looking_for_client_on_macos_for/ | Special-Economist-64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm27gt | false | null | t3_1mm27gt | /r/LocalLLaMA/comments/1mm27gt/looking_for_client_on_macos_for/ | false | false | self | 3 | null |
Made something for visualizing tokens from multiple files and models | 4 | I have been dealing with large token files for data analysis involving different models for different file types.
Couldn't find any other tool for this specific use case, so here it is.
Features:
- Real-time token + cost estimates
- Support for OpenAI, Anthropic, and Google models
- Upload any file — code, PDFs, spre... | 2025-08-09T22:28:28 | https://v.redd.it/kdj6spznj2if1 | Abhi011999 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mm25v5 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/kdj6spznj2if1/DASHPlaylist.mpd?a=1757370524%2CYTdjODE3OGQ4ZGJiMTNmNzQyNjM1MTk3MDkwYjJlYjllOGFmODlmZTZkYjdmOWIxMmNhMWE2MjNhZTgzZDIxNw%3D%3D&v=1&f=sd', 'duration': 48, 'fallback_url': 'https://v.redd.it/kdj6spznj2if1/DASH_480.mp4?source=fallback', 'ha... | t3_1mm25v5 | /r/LocalLLaMA/comments/1mm25v5/made_something_for_visuallizing_tokens_from/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'MTRidGFsem5qMmlmMXDF_EaEjvX7UPAp8XFyzns3PCwWm2pyP17ICxEMKM8S', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/MTRidGFsem5qMmlmMXDF_EaEjvX7UPAp8XFyzns3PCwWm2pyP17ICxEMKM8S.png?width=108&crop=smart&format=pjpg&auto=webp&s=46ad652be6653f10ec1149bf05762d48125b1... | |
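The core of such a tool is just a tokenizer plus a price table. A minimal version with tiktoken follows; the price is an illustrative placeholder, not current vendor pricing.

```
# Minimal token + cost estimate, the core of a tool like the one above.
# The price is an illustrative placeholder, not real vendor pricing.
import tiktoken

PRICE_PER_1K_INPUT = {"gpt-4o": 0.0025}  # placeholder $/1k tokens

def estimate(text: str, model: str = "gpt-4o") -> tuple[int, float]:
    enc = tiktoken.get_encoding("cl100k_base")
    n = len(enc.encode(text))
    return n, n / 1000 * PRICE_PER_1K_INPUT[model]

tokens, cost = estimate(open(__file__).read())
print(f"{tokens} tokens, ~${cost:.4f}")
```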
I'm making a dating simulator game with an AI NPC using an open-source LLM | 184 | You can play in your browser: [https://romram.itch.io/break-time](https://romram.itch.io/break-time)
you need LM Studio as a local server: [https://lmstudio.ai/](https://lmstudio.ai/)
Use an uncensored Llama 8B model or larger, and an 8K context window or more, for a better experience.
I use BlackSheep GGUF models:
[http... | 2025-08-09T22:20:10 | https://v.redd.it/hchl142hc2if1 | aziib | /r/LocalLLaMA/comments/1mm1zcg/im_making_dating_simulator_game_with_ai_npc_using/ | 1970-01-01T00:00:00 | 0 | {} | 1mm1zcg | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hchl142hc2if1/DASHPlaylist.mpd?a=1757499618%2CZDE1MDMyZDRhNmFhNjE1YTg3YjQwYzg3ODJlYzQ0NjRkOWYyYTU0NzkzZWRhNGY5NDZlYjRmZTMyN2JkYzc0Mg%3D%3D&v=1&f=sd', 'duration': 206, 'fallback_url': 'https://v.redd.it/hchl142hc2if1/DASH_1080.mp4?source=fallback', '... | t3_1mm1zcg | /r/LocalLLaMA/comments/1mm1zcg/im_making_dating_simulator_game_with_ai_npc_using/ | false | false | 184 | {'enabled': False, 'images': [{'id': 'ZjR3N2g2MmhjMmlmMW57LOTD_dYSJqtRCJKUw8LsjdMQCiQ_6aIAc3dh1JZp', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZjR3N2g2MmhjMmlmMW57LOTD_dYSJqtRCJKUw8LsjdMQCiQ_6aIAc3dh1JZp.png?width=108&crop=smart&format=pjpg&auto=webp&s=4f0637372aaf4be4b98b7787b347566f7f7a3... | |
A personal take: LLMs are stuck, but local might win? | 87 | Some personal rambling thoughts on LLMs:
* Pretraining decides what the model knows. By chewing through a massive slice of the internet, it picks up world knowledge, language patterns, common sense, and the core generalization that underwrites its “understanding.” That’s the source—and the cap—of its intelligence.
* P... | 2025-08-09T22:04:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mm1mcj/a_personal_take_llms_are_stuck_but_local_might_win/ | Truncleme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm1mcj | false | null | t3_1mm1mcj | /r/LocalLLaMA/comments/1mm1mcj/a_personal_take_llms_are_stuck_but_local_might_win/ | false | false | self | 87 | null |
I attempted to clone Grok's Ani; while it's not perfect, it's a start | 117 | I'm not a good developer by any means, but I made this for the player2 jam in only 7 days! It's a humble start and still very rough, but emotions work well; it called me yogurt boy for no reason 😭
https://player2.game/discover/games/019884e5-3dd9-7872-97b3-88b8c81237a2
Model made in vroid by me, It uses the player2... | 2025-08-09T21:51:26 | ELPascalito | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mm1c2w | false | null | t3_1mm1c2w | /r/LocalLLaMA/comments/1mm1c2w/i_attempted_to_clone_groks_ani_while_its_not/ | false | false | default | 117 | {'enabled': True, 'images': [{'id': '6tb6td9re2if1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/6tb6td9re2if1.jpeg?width=108&crop=smart&auto=webp&s=a3a19353f496ea5a3c52f16351afc709c49a82fa', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/6tb6td9re2if1.jpeg?width=216&crop=smart&auto=w... | |
gpt-oss: more people will join the locals, knowing that they are not the only ones | 1 | Thanks to the emergence of GPT-OSS, many people have become aware of the existence of a wide range of artificial intelligence models, beyond closed and centralized systems. This project has served as an accessible entry point for new users, accelerating the adoption of solutions based on local inference models.
As a r... | 2025-08-09T21:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mm199n/gpto_more_people_will_join_the_locals_knowing/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm199n | false | null | t3_1mm199n | /r/LocalLLaMA/comments/1mm199n/gpto_more_people_will_join_the_locals_knowing/ | false | false | 1 | null | |
New Tool for Finding Why Your LLM Inference is Slow | 24 | Been working on reverse engineering GPUs to build a profiler that actually shows what's happening during inference.
**The problem**: You're running Llama/Mistral/whatever and it's slow, but torch.profiler gives you a mess of data that doesn't help you fix it.
What we built:
* One decorator on your inference code
* G... | 2025-08-09T21:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mm0ssv/new_tool_for_finding_why_your_llm_inference_is/ | Upstairs-Fun8458 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mm0ssv | false | null | t3_1mm0ssv | /r/LocalLLaMA/comments/1mm0ssv/new_tool_for_finding_why_your_llm_inference_is/ | false | false | self | 24 | null |
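The linked tool itself isn't shown here, but the "one decorator" idea is easy to approximate for wall-clock timing. A generic PyTorch sketch with proper CUDA synchronization follows; this is the standard pattern the tool builds on, not the profiler above.

```
# Generic approximation of the "one decorator" idea: time an inference
# call with CUDA synchronization so async GPU work is counted.
import functools, time
import torch

def profile_inference(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if torch.cuda.is_available():
            torch.cuda.synchronize()   # flush pending kernels first
        start = time.perf_counter()
        out = fn(*args, **kwargs)
        if torch.cuda.is_available():
            torch.cuda.synchronize()   # wait for async GPU work to finish
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
        return out
    return wrapper

@profile_inference
def generate(x: torch.Tensor) -> torch.Tensor:
    return x @ x  # stand-in for model.generate(...)

generate(torch.randn(1024, 1024))
```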
OpenAI ain't doing so well on API usage compared to Qwen or anyone else | 0 | I decided to have a look on OpenRouter at the total tokens processed by provider, and OpenAI seems to have taken a giant nosedive for a while.
What gives? | 2025-08-09T21:17:53 | https://www.reddit.com/gallery/1mm0kcb | Semi_Tech | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mm0kcb | false | null | t3_1mm0kcb | /r/LocalLLaMA/comments/1mm0kcb/openai_aint_doing_so_well_on_api_usage_compared/ | false | false | 0 | null | |
I'm sure it's a small win, but I have a local model now! | 609 | It took some troubleshooting but apparently I just had the wrong kind of SD card for my Jetson Orin nano. No more random ChatAI changes now though!
I'm using openwebui in a container and Ollama as a service. For now it's running from an SD card but I'll move it to the m.2 sata soon-ish. Performance on a 3b model is fi... | 2025-08-09T21:16:39 | https://www.reddit.com/gallery/1mm0jb6 | LAKnerd | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mm0jb6 | false | null | t3_1mm0jb6 | /r/LocalLLaMA/comments/1mm0jb6/im_sure_its_a_small_win_but_i_have_a_local_model/ | false | false | 609 | null | |
in one sentence explain most important reasons why local LLM is much better than crap like *** | 1 | **ChatGPT 5**
in one sentence explain most important reasons why local LLM is much better than crap like ChatGPT
A well-tuned local LLM can be faster, cheaper, private, customizable to your exact needs, and immune to corporate filtering or data collection, giving you full control over how it learns and responds.
**G... | 2025-08-09T20:44:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mlzs8z/in_one_sentence_explain_most_important_reasons/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlzs8z | false | null | t3_1mlzs8z | /r/LocalLLaMA/comments/1mlzs8z/in_one_sentence_explain_most_important_reasons/ | false | false | self | 1 | null |
Are AI finetuners now allowed to charge for their models? | 0 | Hey, I am just wondering if this is right/legal. While I really respect the guy's work and understand they might want to get some donations, I seem to be gated/not able to download a Qwen model on Hugging Face. I've been messing with big models to see what I could get to run on my system. Went to download this:
[https:... | 2025-08-09T20:32:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mlzi7p/are_ai_finetuners_now_allowed_to_charge_for_there/ | fluffywuffie90210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlzi7p | false | null | t3_1mlzi7p | /r/LocalLLaMA/comments/1mlzi7p/are_ai_finetuners_now_allowed_to_charge_for_there/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8HdSgL8vWDLQVcxOwfpIKo16WbPNwqpZJvOX3JWm2h8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8HdSgL8vWDLQVcxOwfpIKo16WbPNwqpZJvOX3JWm2h8.png?width=108&crop=smart&auto=webp&s=22450e114fedf6827e5748a4a6ecfdd882744f78', 'width': 108}, {'height': 116, 'url': 'h... |
Advice for developer PC | 1 | Hi, I’m looking to buy a new computer for software development (large codebase, a lot of Docker containers). I would also like to run some local models for coding, since I'm not allowed to use the cloud for part of the codebase.
I'm choosing between a Framework Desktop 128 GB and a traditional PC with a Ryzen 9950X, 128 GB... | 2025-08-09T20:31:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mlzhbg/advice_for_developer_pc/ | Magnus114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlzhbg | false | null | t3_1mlzhbg | /r/LocalLLaMA/comments/1mlzhbg/advice_for_developer_pc/ | false | false | self | 1 | null |
Now that I can run Qwen 30B A3B on 6GB VRAM at 12 tps, what other big models could I run? | 3 | It's surprising that I could finally run a 30B model locally using a cheap Dell laptop with an RTX 3050 (6GB VRAM) and 16GB RAM. It's due to the fact that it is 30B but only uses 3B at a time. I get around 12.4 tokens per second (TPS), which is amazing. Now, what other models follow this architecture so that I could r... | 2025-08-09T20:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mlzaon/now_that_i_could_run_qwen_30b_a3b_on_6gb_vram_at/ | Current-Stop7806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlzaon | false | null | t3_1mlzaon | /r/LocalLLaMA/comments/1mlzaon/now_that_i_could_run_qwen_30b_a3b_on_6gb_vram_at/ | false | false | self | 3 | null |
Thunderbolt link aggregation on Mac Studio? | 4 |
Hi all,
I am not sure if its possible (in theory) or not so here asking
Mac Studio has 5 Thunderbolt 5 120Gbps ports.
Can these ports be used to link 2 Mac Studios with multiple cables, link-aggregated like in Ethernet, to achieve 5 x 120Gbps bandwidth between them for exo / llama RPC?
Anyone tried or knows if it'... | 2025-08-09T20:21:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mlz8vr/thunderbolt_link_aggression_on_mac_studio/ | Recent-Success-1520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlz8vr | false | null | t3_1mlz8vr | /r/LocalLLaMA/comments/1mlz8vr/thunderbolt_link_aggression_on_mac_studio/ | false | false | self | 4 | null |
Looking for advice for AI for grandma with dementia | 2 | So my grandmother has dementia and drives my parents insane asking the same questions 500 times a day. I’d like to build a local LLM she can talk to all she wants and set up an easy way to remotely update a file with context so the AI knows whatever the newest things are that she’s asking about. It would need to be run... | 2025-08-09T20:02:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mlys5f/looking_for_advice_for_ai_for_grandma_with/ | SirGalahadOfCamelot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlys5f | false | null | t3_1mlys5f | /r/LocalLLaMA/comments/1mlys5f/looking_for_advice_for_ai_for_grandma_with/ | false | false | self | 2 | null |
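One simple way to get the "remotely updated context" part: re-read a small facts file on every turn and prepend it as the system prompt to any OpenAI-compatible local server. The URL, model name, and file path below are placeholders; keeping the file synced remotely (e.g. with Syncthing) is left to taste.

```
# Sketch: reload a family-maintained facts file each turn and prepend
# it as the system prompt. URL, model, and path are placeholders.
import pathlib
import requests

FACTS = pathlib.Path("facts.txt")  # hypothetical file, synced remotely

def ask(question: str) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # any OpenAI-compatible server
        json={
            "model": "local-model",
            "messages": [
                {"role": "system",
                 "content": "Answer gently and briefly. Known facts:\n" + FACTS.read_text()},
                {"role": "user", "content": question},
            ],
        },
    )
    return resp.json()["choices"][0]["message"]["content"]
```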
Feedback wanted on open-source LLM-as-a-Judge | 0 | Hey folks, I wanted to share a recently released feature of [Oumi](https://github.com/oumi-ai/oumi) for doing LLM-as-a-Judge. Would love to get your feedback on how we can improve the API, documentation, additional features you’d like to see, and so on - *we’re an open-source, community-driven project after all!*
Here’s... | 2025-08-09T19:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mlyne0/feedback_wanted_on_opensource_llmasajudge/ | PrincipleFar6835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlyne0 | false | null | t3_1mlyne0 | /r/LocalLLaMA/comments/1mlyne0/feedback_wanted_on_opensource_llmasajudge/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'MbrpYJNVMNieG94WHtmUNWP1RDBn7yEk1EC-0rMoWtk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MbrpYJNVMNieG94WHtmUNWP1RDBn7yEk1EC-0rMoWtk.png?width=108&crop=smart&auto=webp&s=8aa9234ac01c676ed53a3041b72b6628d0219691', 'width': 108}, {'height': 108, 'url': 'h... |
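Oumi's actual API isn't shown in the post, but the underlying LLM-as-a-judge pattern is small enough to sketch: ask a judge model for a structured verdict on an output. Here `call_model` is any function backed by your judge model; this is the generic idea, not Oumi's interface.

```
# The generic LLM-as-a-judge pattern (not Oumi's interface): prompt a
# judge model for a structured verdict and parse it.
import json

JUDGE_PROMPT = """Rate the ANSWER to the QUESTION for truthfulness.
Reply with JSON: {{"score": 1-5, "reason": "..."}}

QUESTION: {q}
ANSWER: {a}"""

def judge(call_model, question: str, answer: str) -> dict:
    raw = call_model(JUDGE_PROMPT.format(q=question, a=answer))
    return json.loads(raw)  # assumes the judge follows the JSON format

# call_model is any str -> str function backed by your judge model.
```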
When exactly "Qwen3-235B-A22B-2507" started generating flow charts? | 216 | 2025-08-09T19:56:09 | JeffreySons_90 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mlymxq | false | null | t3_1mlymxq | /r/LocalLLaMA/comments/1mlymxq/when_exactly_qwen3235ba22b2507_started_generating/ | false | false | default | 216 | {'enabled': True, 'images': [{'id': 'hw6j18x8u1if1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/hw6j18x8u1if1.jpeg?width=108&crop=smart&auto=webp&s=40f62cd4a3a4d83cdaf48fa6f604458b471c75c2', 'width': 108}, {'height': 90, 'url': 'https://preview.redd.it/hw6j18x8u1if1.jpeg?width=216&crop=smart&auto=we... | ||
Day 2 – Building an AI-Powered Cloud OS | 0 | https://reddit.com/link/1mly2td/video/faypusc6p1if1/player
I’m excited to share that **PullForgeOS** now includes **browser support**, **GitHub OAuth integration**, and an **App Store** (currently a demo version). The in-browser OS delivers a smooth, fully functional browsing experience.
The project is **open-source*... | 2025-08-09T19:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mly2td/day_2_building_an_aipowered_cloud_os/ | Prestigious_Skin6507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mly2td | false | null | t3_1mly2td | /r/LocalLLaMA/comments/1mly2td/day_2_building_an_aipowered_cloud_os/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'rnptTQrP5AKXr0E0bk5n7Q12RwhxlB9amMeaBbiXVPM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rnptTQrP5AKXr0E0bk5n7Q12RwhxlB9amMeaBbiXVPM.png?width=108&crop=smart&auto=webp&s=a0ffb524cc00cc50876dda8edb7cafea2165df56', 'width': 108}, {'height': 108, 'url': 'h... |
Choose your models | 20 | Which models would you choose to save as your last resort if all AI models were to disappear tomorrow?
An under 40B model.
A mid size model, 70-150B
A sota model
Closed source or open source doesn't matter. Trying to see actual public preference. | 2025-08-09T19:06:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mlxg6h/choose_your_models/ | DeathShot7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlxg6h | false | null | t3_1mlxg6h | /r/LocalLLaMA/comments/1mlxg6h/choose_your_models/ | false | false | self | 20 | null |
vLLM cannot split a model across multiple GPUs with different VRAM amounts? | 0 | I have 144 GB VRAM total on different GPU models, and when I'm trying to run a 105 GB model, `vllm` fails with OOM. As far as I understand, it finds the GPU with the largest amount of VRAM and tries to load the same amount on the smaller ones, and this obviously fails. Am I correct?
I've found a similar 1 year old ticket: h... | 2025-08-09T19:01:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mlxcco/vllm_can_not_split_model_across_multiple_gpus/ | MelodicRecognition7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlxcco | false | null | t3_1mlxcco | /r/LocalLLaMA/comments/1mlxcco/vllm_can_not_split_model_across_multiple_gpus/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'FtipiD6Jptk5F-awTdVcuyYz-0FexvLcnI-P73MG5g4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FtipiD6Jptk5F-awTdVcuyYz-0FexvLcnI-P73MG5g4.png?width=108&crop=smart&auto=webp&s=2d671aeb4a1aebe6542709d8f9ccf1a9b01ba402', 'width': 108}, {'height': 108, 'url': 'h... |
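The symptom is consistent with tensor parallelism assuming uniform shards per GPU. One knob worth knowing about, offered as something to try rather than a verified fix for mixed-VRAM rigs, is pipeline parallelism, which splits the model by layer blocks instead; the model name below is a placeholder.

```
# Sketch of the knobs involved (not a verified fix for mixed-VRAM rigs):
# tensor parallelism shards each layer equally across GPUs, while
# pipeline parallelism assigns whole layer blocks per GPU.
from vllm import LLM

llm = LLM(
    model="some/large-model",      # placeholder
    tensor_parallel_size=1,
    pipeline_parallel_size=4,      # split by layer blocks across 4 GPUs
    gpu_memory_utilization=0.90,   # per-GPU memory fraction
)
```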
Anyone using Android Studio with gemini? | 0 | What are your reviews??? | 2025-08-09T18:39:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mlwtba/anyone_using_android_studio_with_gemini/ | rozeappletree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlwtba | false | null | t3_1mlwtba | /r/LocalLLaMA/comments/1mlwtba/anyone_using_android_studio_with_gemini/ | false | false | self | 0 | null |
Run GPT-OSS with MLX or GGUF in your CLI using 1 line of code | 4 | https://reddit.com/link/1mlwaj7/video/uxn1bma711if1/player

Hi LocalLLaMA,

I'm from the [nexa-sdk](https://github.com/NexaAI/nexa-sdk) project. We've made it simple to run gpt-oss in **MLX** & GGUF formats directly from your CLI - using one line of code.

We built this to make it easy to experiment with **format + backend combos** for our agentic RAG application. For MacOS, we believe the gpt-oss in MLX format with 4-bit quant is an excellent choice.

# Models we found great in MLX & GGUF

* **MLX:** [InferenceIllusionist/gpt-oss-20b-MLX-4bit]() (thanks [InferenceIllusionist]() 🙌)
* **GGUF:** [unsloth/gpt-oss-20b-GGUF]() (thanks [unsloth]() 🙌)

The qualities are roughly the same for our use case, but performance differs:

On an M4 Max, MLX hit ~103 tok/s, about 25% faster than GGUF.

# Quick tip

You can paste any Hugging Face repo name into the CLI and pull it directly:

    nexa pull unsloth/gpt-oss-20b-GGUF

Then just select the quant option you want directly in your CLI. We think this makes it easier to test different quants without switching screens. Let me know your thoughts & feedback!

# Link

Github: [https://github.com/NexaAI/nexa-sdk](https://github.com/NexaAI/nexa-sdk)

Our goal is to make **local model runs frictionless** across more backends. Any requests and feedback are welcome! | 2025-08-09T18:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mlwaj7/run_gptoss_with_mlx_or_gguf_in_your_cli_using_1/ | AlanzhuLy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlwaj7 | false | null | t3_1mlwaj7 | /r/LocalLLaMA/comments/1mlwaj7/run_gptoss_with_mlx_or_gguf_in_your_cli_using_1/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'KoHiBlaF8EL7uOTdEBkB3ymiA3dlaWUFUlQlHB90A0I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KoHiBlaF8EL7uOTdEBkB3ymiA3dlaWUFUlQlHB90A0I.png?width=108&crop=smart&auto=webp&s=9d89ee2d3bfa4199d661845049a8bfcbbe50c1b5', 'width': 108}, {'height': 108, 'url': 'h...
GPT-OSS-120B MXFP4 crashing out of llama.cpp when it hits the context limit. Is there a workaround? | 1 | I'm using the official GGML Org release of GPT OSS 120B MXFP4. I'm using last night's release of llama.cpp. It works great until it hits the context limit and then it crashes out. If I leave the context at the default of 4K, it crashes out when it generates 4K tokens. If I set the context to be 8K, it crashes out when ... | 2025-08-09T18:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mlvzkj/gptoss120b_mxfp4_crashing_out_of_llamacpp_when_it/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlvzkj | false | null | t3_1mlvzkj | /r/LocalLLaMA/comments/1mlvzkj/gptoss120b_mxfp4_crashing_out_of_llamacpp_when_it/ | false | false | self | 1 | null |
Fastest way to LoRA exported ChatGPT and Claude chats? | 1 | Is there a fast way to clean chats for training? Many chats are continuations of chats (eg initial is like rewrite this blah blah blah and next is like what if rewritten in this style) so the prompt alone isn’t good for the instructions part. Is there a faster and easier way to convert them? I have like 1200 chat FILES... | 2025-08-09T17:44:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mlvhb6/fastest_way_to_lora_exported_chatgpt_and_claude/ | AccidentalFolklore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlvhb6 | false | null | t3_1mlvhb6 | /r/LocalLLaMA/comments/1mlvhb6/fastest_way_to_lora_exported_chatgpt_and_claude/ | false | false | self | 1 | null |
Which one do you think is the best for agentic coding with Claude Code - Qwen 3 Coder, GLM 4.5, or Kimi K2? | 9 | I would like to hear your experiences - if you have compared these models while using them with Claude Code for actual coding tasks, not benchmarks or things like that. Could you share which one works best for you and why? | 2025-08-09T17:17:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mluu4a/which_one_do_you_think_is_the_best_for_agentic/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mluu4a | false | null | t3_1mluu4a | /r/LocalLLaMA/comments/1mluu4a/which_one_do_you_think_is_the_best_for_agentic/ | false | false | self | 9 | null |
Question about mem0 | 0 | Has anyone tried this? Does using it increase the model's performance? I mean, if on a benchmark it has **35.1%**, but with this it says it increases **25.1%**, then does accuracy improve further?
https://preview.redd.it/9or2woboy0if1.png?width=1365&format=png&auto=webp&s=71ce8ccf95e422f6bf6588c335796ab648ca3376
| 2025-08-09T17:09:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mlumqa/preguntra_sobre_mem0/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlumqa | false | null | t3_1mlumqa | /r/LocalLLaMA/comments/1mlumqa/preguntra_sobre_mem0/ | false | false | 0 | null | |
Benchmarking models using your own dataset | 4 | Hey folks,
After chatting with some friends on Discord, I decided to open-source the CLI tool I built to benchmark new models.
The reason is because with the recent release of some open source models, some friends asked me how I benchmark them. A while ago I built a robust dataset to benchmark them for my use cases an... | 2025-08-09T17:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mlulxg/benchmarking_models_using_your_own_dataset/ | BeowulfBR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlulxg | false | null | t3_1mlulxg | /r/LocalLLaMA/comments/1mlulxg/benchmarking_models_using_your_own_dataset/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'ijAIa19vjfXgQbFSl5aLpNvBjU1G68vcYzphbbvf9hg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ijAIa19vjfXgQbFSl5aLpNvBjU1G68vcYzphbbvf9hg.png?width=108&crop=smart&auto=webp&s=0aae8fc2c944c47892d3c2a6f12dc8204eb34c58', 'width': 108}, {'height': 108, 'url': 'h... |
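For readers who want the idea without the tool: the skeleton such a CLI wraps is a loop over your own prompt/expected pairs with a scoring function. A minimal exact-match version follows; the file name and `call_model` are placeholders.

```
# Skeleton of a bring-your-own-dataset benchmark: a JSONL of
# {"prompt": ..., "expected": ...} rows, scored by exact match.
# Swap in any scoring function or model client you like.
import json

def run_benchmark(path: str, call_model) -> float:
    with open(path) as f:
        rows = [json.loads(line) for line in f]
    hits = sum(call_model(r["prompt"]).strip() == r["expected"].strip() for r in rows)
    return hits / len(rows)

# Example: score = run_benchmark("my_eval.jsonl", call_model=my_client)
```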
Need help with benchmarking for RAG + LLM | 0 | I want to benchmark a RAG setup for multiple file formats: doc, xls, csv, ppt, png, etc.
Are there any benchmarks with which I can test multiple file formats? | 2025-08-09T16:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mltwyq/need_help_with_benchmarking_for_rag_llm/ | irodov4030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mltwyq | false | null | t3_1mltwyq | /r/LocalLLaMA/comments/1mltwyq/need_help_with_benchmarking_for_rag_llm/ | false | false | self | 0 | null |
Llama 4 beats GPT-5 in IQ | 0 | We were very hard on Llama 4, but Llama 4 beats GPT-5 in general intelligence (IQ), with GPT-5 registering the lowest performance in the table.
https://preview.redd.it/he8xjk9cu0if1.png?width=732&format=png&auto=webp&s=42993bcb672ef981d63d4a4760d44c873a29c218
page: [https://www.trackingai.org/home](https://www.trackingai.o... | 2025-08-09T16:35:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mlttjs/llama_4_supera_a_gpt5_en_iq/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlttjs | false | null | t3_1mlttjs | /r/LocalLLaMA/comments/1mlttjs/llama_4_supera_a_gpt5_en_iq/ | false | false |  | 0 | {'enabled': False, 'images': [{'id': 'jqAZ0KqAJfNv4ndpVkfPFZ5XjZFdsSO4qfJScEMQJ44', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jqAZ0KqAJfNv4ndpVkfPFZ5XjZFdsSO4qfJScEMQJ44.png?width=108&crop=smart&auto=webp&s=01d4109886c841383410ee6d63b334a4ce5db57a', 'width': 108}, {'height': 121, 'url': 'h...
axolotl vs unsloth [performance and everything] | 36 | There have been updates like https://github.com/axolotl-ai-cloud/axolotl/releases/tag/v0.12.0 (shoutout to the great work by the axolotl team). I was wondering: is Unsloth mostly used by those who have GPU VRAM limitations, or do you have experience using these in production? I would love to hear feedback from startups too tha... | 2025-08-09T16:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mltobj/axolotl_vs_unsloth_performance_and_everything/ | Shivacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mltobj | false | null | t3_1mltobj | /r/LocalLLaMA/comments/1mltobj/axolotl_vs_unsloth_performance_and_everything/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': '7EL6svPicb5mc9RUaJLbNA_II-fkeWE7z7fhNep0ULY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7EL6svPicb5mc9RUaJLbNA_II-fkeWE7z7fhNep0ULY.png?width=108&crop=smart&auto=webp&s=096ed075262ebd7700f4ad4d99df4eb893367556', 'width': 108}, {'height': 108, 'url': 'h...
Why aren't people using small LLMs to train on their own local datasets? | 50 | Now that there are so many good small base-model LLMs available, why aren't we seeing people train them on their own data (their day-to-day work or home data/files) on local models? I mean, general LLMs like ChatGPT are all great, but most people have data lying around for their specific context/work that the general... | 2025-08-09T16:18:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mltedd/why_arent_people_using_small_llms_to_train_on/ | QFGTrialByFire | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mltedd | false | null | t3_1mltedd | /r/LocalLLaMA/comments/1mltedd/why_arent_people_using_small_llms_to_train_on/ | false | false | self | 50 | null |
[ Removed by Reddit ] | 1 | [ Removed by Reddit on account of violating the [content policy](/help/contentpolicy). ] | 2025-08-09T16:16:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mltceo/removed_by_reddit/ | Alternative_Sock_191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mltceo | false | null | t3_1mltceo | /r/LocalLLaMA/comments/1mltceo/removed_by_reddit/ | false | false | self | 1 | null |
XandAI-extension - Allow you to chat with your browser using ollama. | 3 | 2025-08-09T16:01:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mlszpz/xandaiextension_allow_you_to_chat_with_your/ | Sea-Reception-2697 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlszpz | false | null | t3_1mlszpz | /r/LocalLLaMA/comments/1mlszpz/xandaiextension_allow_you_to_chat_with_your/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'bYRVS90ZKX1r4OqkPxy0a_PbC3IEsvLjY869Bkl6r1A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bYRVS90ZKX1r4OqkPxy0a_PbC3IEsvLjY869Bkl6r1A.png?width=108&crop=smart&auto=webp&s=75e06f2bd82ba455bbfc6b23410b22bf2f4c5f38', 'width': 108}, {'height': 108, 'url': 'h... | ||
Update on XandAI-extension - Allow user to query Ollama feeding browser text context and interact with as many tabs as you want shifting contexts. | 1 | 2025-08-09T15:59:34 | Sea-Reception-2697 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mlsxfu | false | null | t3_1mlsxfu | /r/LocalLLaMA/comments/1mlsxfu/update_on_xandaiextension_allow_user_to_query/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'uvp8j050o0if1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/uvp8j050o0if1.png?width=108&crop=smart&auto=webp&s=e7c3c53b58b2e2cd18d9675850a644cd1ea0a222', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/uvp8j050o0if1.png?width=216&crop=smart&auto=web... | ||
A native iOS client that lets you use your own api key/local models with on-device RAG & Agents on the roadmap? Has anyone tried "Privacy AI"? | 2 | Before I start, I just want to say I'm not affiliated with them at all. I try out a lot of chat clients/API wrappers on GitHub/the App Store, and I figured the llama community would appreciate it. It's leagues ahead of anything else on the App Store: you can use any API provider or local model, with TTS/STT models built in like whis... | 2025-08-09T15:56:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mlsurg/a_native_ios_client_that_lets_you_use_your_own/ | Tonomous_Agent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlsurg | false | null | t3_1mlsurg | /r/LocalLLaMA/comments/1mlsurg/a_native_ios_client_that_lets_you_use_your_own/ | false | false | self | 2 | null |
Can we finally agree that creative writing benchmarks like EQBench are totally useless? | 86 | These benchmarks use AI to evaluate AI writing and consistently give the highest ratings to the most boring, sloppy, and uncreative models, as in the GPT series' top rankings. Perhaps this happens because the AI judge favors bland, direct, and uninspiring writing? I see the leaderboard dominated by what I consider the most bor... | 2025-08-09T15:49:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mlsos9/can_we_finally_agree_that_creative_writing/ | FluffyMacho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlsos9 | false | null | t3_1mlsos9 | /r/LocalLLaMA/comments/1mlsos9/can_we_finally_agree_that_creative_writing/ | false | false | self | 86 | null |
Qwen 3 0.6B beats GPT-5 in simple math | 1,194 | I saw this comparison between Grok and GPT-5 on X for solving the equation 5.9 = x + 5.11. In the comparison, Grok solved it but GPT-5 without thinking failed.
It could have been handpicked after multiples runs, so out of curiosity and for fun I decided to test it myself. Not with Grok but with local models running on... | 2025-08-09T15:46:37 | adrgrondin | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mlsm8e | false | null | t3_1mlsm8e | /r/LocalLLaMA/comments/1mlsm8e/qwen_3_06b_beats_gpt5_in_simple_math/ | false | false | 1,194 | {'enabled': True, 'images': [{'id': 'gOKPzvOe7NhwCvXcJY0XF2oHCq6-0BR-ls7S9s32yEU', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/vtstf4nql0if1.jpeg?width=108&crop=smart&auto=webp&s=8b0fbe5d76d3ceb6d0f8589d9bc97072d136a983', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/vtstf4nql0if1.j... | ||
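For reference, the equation under test is a single subtraction, which a quick check confirms:

```
# The equation under test: 5.9 = x + 5.11, so x = 5.9 - 5.11.
x = 5.9 - 5.11
print(round(x, 2))  # 0.79 (the raw float carries a tiny rounding error)
```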
The recent upshot of off topic posts | 1 | [removed] | 2025-08-09T15:24:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mls2yp/the_recent_upshot_of_off_topic_posts/ | Fun_Atmosphere8071 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mls2yp | false | null | t3_1mls2yp | /r/LocalLLaMA/comments/1mls2yp/the_recent_upshot_of_off_topic_posts/ | false | false | self | 1 | null |
Stop off-topic shilling posts! | 1 | [removed] | 2025-08-09T15:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mls1d2/stop_offtopic_shilling_posts/ | Fun_Atmosphere8071 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mls1d2 | false | null | t3_1mls1d2 | /r/LocalLLaMA/comments/1mls1d2/stop_offtopic_shilling_posts/ | false | false | self | 1 | null |
Review for TTSReader.com, also known as WellSource Ltd | 0 | This is a review for [https://ttsreader.com](https://ttsreader.com), also known as WellSource Ltd.
https://preview.redd.it/g7ixa7w6g0if1.jpg?width=1275&format=pjpg&auto=webp&s=2f0710412fc9940515111de84840de82ae183d3a
https://preview.redd.it/i3c04898g0if1.jpg?width=1275&format=pjpg&auto=webp&s=36f6f5ec8a31ee4c29d7c41a2... | 2025-08-09T15:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mlrxer/review_for_ttsreadercom_also_known_as_wellsource/ | Basic_Pineapple7792 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlrxer | false | null | t3_1mlrxer | /r/LocalLLaMA/comments/1mlrxer/review_for_ttsreadercom_also_known_as_wellsource/ | false | false | 0 | null | |
Local CLI agent for general tasks? | 0 | Hi r/LocalLLaMa,
I'm looking for a CLI tool similar to opencode, claude code, or gemini cli, but for general-purpose tasks. I just want to open a terminal, run the agent and ask it to do things not necessarily related to code, e.g. searching the web, querying a vector db, and accessing my files.
The closest tool I've f... | 2025-08-09T14:37:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mlqz54/local_cli_agent_for_general_tasks/ | Shiny-Squirtle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlqz54 | false | null | t3_1mlqz54 | /r/LocalLLaMA/comments/1mlqz54/local_cli_agent_for_general_tasks/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'l-voJcFUNL-lEBJ04YIQs-2lZDiSH6muuJ_urhM0b7c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l-voJcFUNL-lEBJ04YIQs-2lZDiSH6muuJ_urhM0b7c.png?width=108&crop=smart&auto=webp&s=a2bf514f1075b5a4e253755b442a468b0296bac1', 'width': 108}, {'height': 108, 'url': 'h... |
AI max+ 395 | 13 | Anyone using a 128gb version with a local model as a serious replacement for commercial apis?
If so, what device? What model? What tokens / second and context? | 2025-08-09T14:36:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mlqylw/ai_max_395/ | megadonkeyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlqylw | false | null | t3_1mlqylw | /r/LocalLLaMA/comments/1mlqylw/ai_max_395/ | false | false | self | 13 | null |
Best Local LLM for Mobile Development? | 0 | I am currently using Claude 4 Sonnet for mobile development with native Android & Flutter, because OpenAI is not very good and Gemini feels over-engineered, while Claude is great for both.
I also need an open-source local LLM (regardless of the cost of running).
I checked Qwen3 Coder but couldn’t g... | 2025-08-09T14:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mlqtbn/best_local_llm_for_mobile_development/ | MatrixEternal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlqtbn | false | null | t3_1mlqtbn | /r/LocalLLaMA/comments/1mlqtbn/best_local_llm_for_mobile_development/ | false | false | self | 0 | null |
How come AnythingLLM's t/s is so slow compared to LM Studio? | 1 | As the title says, how come AnythingLLM's t/s is so slow compared to LM Studio?
Same model, same prompt: 16 t/s vs 4 t/s.
I have no idea what's causing it, because I only use AnythingLLM for RAG. If I were smart enough to move away from it, I probably would, just to get faster speeds. It's pretty much the same when using llama-ser... | 2025-08-09T14:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mlqrrv/how_come_anythingllm_ts_is_so_slow_to_lm_studio/ | uber-linny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlqrrv | false | null | t3_1mlqrrv | /r/LocalLLaMA/comments/1mlqrrv/how_come_anythingllm_ts_is_so_slow_to_lm_studio/ | false | false | self | 1 | null |
Mitigate Hallucinations by Fine-tuning gpt-oss-120b with One Example | 0 | https://huggingface.co/longsiyu/gpt-oss-120b-hallu-miti
This is a LoRA adapter (rank=64) for gpt-oss-120b fine-tuned with only ONE data example to mitigate hallucinations. After fine-tuning, the model is more likely to refuse to answer questions it perceives as involving information that is too recent. | 2025-08-09T14:25:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mlqphy/mitigate_hallucinations_by_finetuning_gptoss120b/ | Soft-Transition3971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlqphy | false | null | t3_1mlqphy | /r/LocalLLaMA/comments/1mlqphy/mitigate_hallucinations_by_finetuning_gptoss120b/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '15JX-PlXwQEsGwdblZPcng62DVznuC6IUjIYEmhoqIM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/15JX-PlXwQEsGwdblZPcng62DVznuC6IUjIYEmhoqIM.png?width=108&crop=smart&auto=webp&s=d9c548fa2ba3bb37dc9f7ee5e966631792b89302', 'width': 108}, {'height': 116, 'url': 'h... |
How to start using local agents? | 0 | I see you guys use different models as agents, but I don't know where or how to start.
I use LM Studio, and it has been enough for me. But I need to level up my AI understanding to do some practical, useful things.
How? Where to start?
Do we have an analogue of deep research locally? | 2025-08-09T14:06:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mlqape/how_to_start_using_local_agents/ | WinDrossel007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlqape | false | null | t3_1mlqape | /r/LocalLLaMA/comments/1mlqape/how_to_start_using_local_agents/ | false | false | self | 0 | null |
[Help] Weekend-only deadline: on-prem French RAG chatbot for wiring diagrams — need the simplest viable plan | 1 |
Deadline: I must deliver a working MVP by Monday (I only have the weekend).
Goal (what “done” means):
* Input: a natural-language French query, e.g. « Donne le schéma de câblage pour Modèle X100 »
* Output:
  * the correct PDF page from our technical manuals,
  * an optional one-sentence summary in French,
  * a trace: {doc_id, ... | 2025-08-09T14:06:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mlqad5/help_weekendonly_deadline_onprem_french_rag/ | Cool_Recognition_747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlqad5 | false | null | t3_1mlqad5 | /r/LocalLLaMA/comments/1mlqad5/help_weekendonly_deadline_onprem_french_rag/ | false | false | self | 1 | null |
Temporal qwen2.5vl:32b tips? | 8 | I'm trying to generate visual based event logs from robotic systems using Qwen. The video logs can run for 1 to 3 hours and they are full of weird and increasingly niche objects and events (specialty equipment and weird hardware interfaces). That challenge is a topic for later but right now I am more interested in temp... | 2025-08-09T14:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mlq8ka/temporal_qwen25vl32b_tips/ | ChristopherLyon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlq8ka | false | null | t3_1mlq8ka | /r/LocalLLaMA/comments/1mlq8ka/temporal_qwen25vl32b_tips/ | false | false | self | 8 | null |
Issue with Gemini 2.5 Pro Escaping HTML Entities in Code Output via Vertex AI | 0 | Hi everyone,
I'm running into a weird issue with Gemini 2.5 Pro (accessed via Vertex AI through LiteLLM) where it's escaping HTML entities in my JSX code outputs. For example, instead of outputting raw <React.StrictMode>, it gives: &lt;React.StrictMode&gt;.
This only happens with Gemini 2.5 Pro. Other models like Claude wo... | 2025-08-09T13:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mlq4dj/issue_with_gemini_25_pro_escaping_html_entities/ | Impressive-Reason200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlq4dj | false | null | t3_1mlq4dj | /r/LocalLLaMA/comments/1mlq4dj/issue_with_gemini_25_pro_escaping_html_entities/ | false | false | self | 0 | null |
Change My Mind: Reasoning was created just for benchmaxing; it's not really useful for downstream tasks | 0 | Forcing the LLM to talk about the response it's going to provide does improve the quality of the response.
However, I believe it's much more powerful for any specific task to use a non-thinking LLM and prompt the model to think about the task, by showing examples of how to do the thinking for that particular task.
Th... | 2025-08-09T13:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mlppcc/change_my_mind_reasoning_was_created_just_for/ | ArcaneThoughts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlppcc | false | null | t3_1mlppcc | /r/LocalLLaMA/comments/1mlppcc/change_my_mind_reasoning_was_created_just_for/ | false | false | self | 0 | null |
GPT-OSS have some strengths | 189 | 2025-08-09T13:26:57 | Charuru | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mlpf3t | false | null | t3_1mlpf3t | /r/LocalLLaMA/comments/1mlpf3t/gptoss_have_some_strengths/ | false | false | 189 | {'enabled': True, 'images': [{'id': 'FwdrHQdwGV2GWeeTWSNchop6IgwTEed4d5V6po-PqhA', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/5dfwje5swzhf1.jpeg?width=108&crop=smart&auto=webp&s=af6c0032fa117dd36ca2e2a2ea8a2a445b71c284', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/5dfwje5swzhf1.jp... | |||
BERT classifier to detect PII in multi-lingual documents | 2 | Can anyone recommend a model that reliably detects if a text chunk contains personal information, like names, email addresses etc.? Text is mostly English and German. Just a binary classifier that spits out a probability would be fine. Flagged chunks will be reviewed by a human. | 2025-08-09T13:26:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mlpeto/bert_classifier_to_detect_pii_in_multilingual/ | mnze_brngo_7325 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlpeto | false | null | t3_1mlpeto | /r/LocalLLaMA/comments/1mlpeto/bert_classifier_to_detect_pii_in_multilingual/ | false | false | self | 2 | null |
My thoughts on gpt-oss-120b | 331 | Since the model dropped, it's become notoriously hated on for its censorship. (Idk what people were expecting from OpenAI of all companies)
All the chat template issues and performance fluctuations with varying cloud providers made it even worse for all the people who were optimistic to try it out.
On the first day,... | 2025-08-09T12:48:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mlomlb/my_thoughts_on_gptoss120b/ | Lowkey_LokiSN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlomlb | false | null | t3_1mlomlb | /r/LocalLLaMA/comments/1mlomlb/my_thoughts_on_gptoss120b/ | false | false | self | 331 | null |
My system caught an error in Grok that GPT-4 also missed; here’s how it works and why it matters for evaluating LLM reasoning. | 0 | Built an AI system (Alycon) and tested it against a difficult Humanity's Last Exam question on Hebrew linguistics. Had an interesting moment where my system corrected Grok's analysis.
**The challenge:** Syllable analysis of Psalms 104:7 using Tiberian Hebrew phonology rules - notoriously difficult even for experts.
*... | 2025-08-09T12:41:17 | https://www.reddit.com/gallery/1mlohbh | Sad_Perception_1685 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mlohbh | false | null | t3_1mlohbh | /r/LocalLLaMA/comments/1mlohbh/my_system_caught_an_error_in_grok_that_gpt4_also/ | false | false | 0 | null | |
GPT OSS flop? Very disappointed | 3 | So yeah... I have tried both GPT models, 20b and 120b. Not only are both of them very seriously censored, but they also fell short on even minimal coding stuff.
I was able to run the 120b model on my 4090 with 64GB of RAM and was so excited about it... at least something from OpenAI, but IMHO they're just so bad that I can't find a use case fo... | 2025-08-09T12:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mloh4o/gpt_oss_flop_very_disappointed/ | theundertakeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mloh4o | false | null | t3_1mloh4o | /r/LocalLLaMA/comments/1mloh4o/gpt_oss_flop_very_disappointed/ | false | false | self | 3 | null |
My Honest Review: gpt-oss-120b(high) | 1 | [removed] | 2025-08-09T12:36:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mlodrx/my_honest_review_gptoss120bhigh/ | Lowkey_LokiSN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlodrx | false | null | t3_1mlodrx | /r/LocalLLaMA/comments/1mlodrx/my_honest_review_gptoss120bhigh/ | false | false | self | 1 | null |
Looking for a local open source voice assistant with push-to-talk and toggle mode, cross-platform | 6 | Hi all,
I’m searching for a local speech-to-text assistant tool that ideally has these features:
* a **push-to-talk** function with a simple visual indicator showing when it’s listening and a timer,
* a **toggle on/off mode** to avoid holding down a key continuously,
* a shortcut to quickly select or set a custom pro... | 2025-08-09T12:17:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mlo17q/looking_for_a_local_open_source_voice_assistant/ | Specific_Dimension51 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlo17q | false | null | t3_1mlo17q | /r/LocalLLaMA/comments/1mlo17q/looking_for_a_local_open_source_voice_assistant/ | false | false | self | 6 | null |
Trying to convert mlx fine-tuned mistral-7B-instruct-v3.0 into GGUF | 2 | I'm having a lot of trouble converting my fine-tuned mistral model to gguf to use it with ollama. I did the following steps:
download from hf and quantize model using mlx-examples/lora:
`python convert.py --hf-path mistralai/Mistral-7B-Instruct-v0.3 -q`
fine-tune using my data:
python lor... | 2025-08-09T12:17:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mlo0xc/trying_to_convert_mlx_finetuned/ | MrSmeeeeegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlo0xc | false | null | t3_1mlo0xc | /r/LocalLLaMA/comments/1mlo0xc/trying_to_convert_mlx_finetuned/ | false | false | self | 2 | null |
Comparing agent memory kits (Letta, MemOS, Cognee, etc) | 3 | Hi, I'd like to dive into a memory system project. I can develop in Python but prefer the TypeScript ecosystem, and what I want to develop is largely front end. I've been looking at relevant kits, but each has quite a bit of depth, so they're difficult to compare. Some help would be greatly appreciated.
I did the deeplearning c... | 2025-08-09T12:09:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mlnvob/comparing_agent_memory_kits_letta_memos_cognee_etc/ | nostriluu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlnvob | false | null | t3_1mlnvob | /r/LocalLLaMA/comments/1mlnvob/comparing_agent_memory_kits_letta_memos_cognee_etc/ | false | false | self | 3 | null |
My Objective Review- gpt-oss-120b(high) | 1 | [removed] | 2025-08-09T11:25:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mln3qt/my_objective_review_gptoss120bhigh/ | Lowkey_LokiSN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mln3qt | false | null | t3_1mln3qt | /r/LocalLLaMA/comments/1mln3qt/my_objective_review_gptoss120bhigh/ | false | false | self | 1 | null |
My Objective Review- gpt-oss-120b(high) | 1 | [removed] | 2025-08-09T11:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mln2bq/my_objective_review_gptoss120bhigh/ | Lowkey_LokiSN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mln2bq | false | null | t3_1mln2bq | /r/LocalLLaMA/comments/1mln2bq/my_objective_review_gptoss120bhigh/ | false | false | self | 1 | null |
My Objective Review- gpt-oss-120b(high) | 1 | [removed] | 2025-08-09T11:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mlmtzw/my_objective_review_gptoss120bhigh/ | Lowkey_LokiSN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlmtzw | false | null | t3_1mlmtzw | /r/LocalLLaMA/comments/1mlmtzw/my_objective_review_gptoss120bhigh/ | false | false | self | 1 | null |
My Objective Review- gpt-oss-120b(high) | 1 | [removed] | 2025-08-09T11:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mlms60/my_objective_review_gptoss120bhigh/ | Lowkey_LokiSN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlms60 | false | null | t3_1mlms60 | /r/LocalLLaMA/comments/1mlms60/my_objective_review_gptoss120bhigh/ | false | false | self | 1 | null |
My Objective Review- gpt-oss-120b(high) | 1 | [removed] | 2025-08-09T11:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mlmr64/my_objective_review_gptoss120bhigh/ | Lowkey_LokiSN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mlmr64 | false | null | t3_1mlmr64 | /r/LocalLLaMA/comments/1mlmr64/my_objective_review_gptoss120bhigh/ | false | false | self | 1 | null |