title (string, len 1-300) | score (int64, 0-8.54k) | selftext (string, len 0-41.5k) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, len 0-878) | author (string, len 3-20) | domain (string, len 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, len 7) | locked (bool, 2 classes) | media (string, len 646-1.8k, nullable) | name (string, len 10) | permalink (string, len 33-82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, len 4-213, nullable) | ups (int64, 0-8.54k) | preview (string, len 301-5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I'm researching some OS & Local LLMs that can be useful for farmers, either in high-end PCs and in raspberry pi. Suggestions? | 0 | Basically title, ideally something that can process both text, images, and documents/sheets of data, as smart as possible, and as lean as possible.<br>My initial research led me to Phi-4, Gemma 3, and Mistral Small 3.1, but considering how fast this space progresses, I think they have probably been outdated a few gens a... | 2025-08-02T16:01:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mfu41i/im_researching_some_os_local_llms_that_can_be/ | hjras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfu41i | false | null | t3_1mfu41i | /r/LocalLLaMA/comments/1mfu41i/im_researching_some_os_local_llms_that_can_be/ | false | false | self | 0 | null |
Local or die | 199 | 2025-08-02T15:36:14 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mftipa | false | null | t3_1mftipa | /r/LocalLLaMA/comments/1mftipa/local_or_die/ | false | false | default | 199 | {'enabled': True, 'images': [{'id': 'weu00abilmgf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/weu00abilmgf1.jpeg?width=108&crop=smart&auto=webp&s=9e94400e83be06bd007d01e6c687ac2e6fc68187', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/weu00abilmgf1.jpeg?width=216&crop=smart&auto=... | ||
Scalable LLM Virtual Assistant – Looking for Architecture Tips | 0 | Hey all,<br>I’m working on a side project to build a virtual assistant that can do two main things:<br>1. Answer questions based on a company’s internal docs (using RAG).<br>2. Perform actions like “create an account,” “schedule a payment,” or “find the nearest location.”<br>I’d love some advice from folks who’ve built similar ... | 2025-08-02T15:20:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mft55c/scalable_llm_virtual_assistant_looking_for/ | DeadFinger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mft55c | false | null | t3_1mft55c | /r/LocalLLaMA/comments/1mft55c/scalable_llm_virtual_assistant_looking_for/ | false | false | self | 0 | null |
[GUIDE] Running Qwen-30B (Coder/Instruct/Thinking) with CPU-GPU Partial Offloading - Tips, Tricks, and Optimizations | 127 | This post is a collection of practical tips and performance insights for running Qwen-30B (either Coder-Instruct or Thinking) locally using `llama.cpp` with partial CPU-GPU offloading. After testing various configurations, quantizations, and setups, here’s what actually works.<br>**KV Quantization**<br>* **KV cache quantiz... | 2025-08-02T14:44:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mfs9qn/guide_running_qwen30b_coderinstructthinking_with/ | AliNT77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfs9qn | false | null | t3_1mfs9qn | /r/LocalLLaMA/comments/1mfs9qn/guide_running_qwen30b_coderinstructthinking_with/ | false | false | self | 127 | null |
Getting started into self hosting LLM | 0 | I would like to start self hosting models for my own usage. I have right now MacBook Pro m4 Pro 24Gb ram and it feels slow with larger models and very limited. Do you think it would be better to build some custom spec pc for this purpose running on Linux just to run LLMs? Or buy maxed out Mac Studio or Mac mini for thi... | 2025-08-02T14:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mfs9cw/getting_started_into_self_hosting_llm/ | ywful | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfs9cw | false | null | t3_1mfs9cw | /r/LocalLLaMA/comments/1mfs9cw/getting_started_into_self_hosting_llm/ | false | false | self | 0 | null |
It's time to run your own R1, Kimi ... and split the cost of it | 50 | Based on current situation with quality of Sonnet and other proprietary models I'm thinking to get group of people who would join the common pool and share the cost of hosting and running our "own" R1, Kimi and other models so you will not be dependent on decreasing quality of other providers.<br>What's your thoughts? | 2025-08-02T14:26:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mfrunn/its_time_to_run_your_own_r1_kimi_and_split_the/ | HammerSpb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfrunn | false | null | t3_1mfrunn | /r/LocalLLaMA/comments/1mfrunn/its_time_to_run_your_own_r1_kimi_and_split_the/ | false | false | self | 50 | null |
What is "tool use", exactly? | 20 | Sorry if this is a basic question, but I seem to be really struggling :/<br>Consider a typical, text-in text-out use case. If I'm using an offline model API via e.g. REST, how can I incorporate tool use? Is "tool use" some particular token(s) in the output that I should interpret and execute independently in my code and... | 2025-08-02T14:21:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mfrq3v/what_is_tool_use_exactly/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfrq3v | false | null | t3_1mfrq3v | /r/LocalLLaMA/comments/1mfrq3v/what_is_tool_use_exactly/ | false | false | self | 20 | null |
I made a opensource CAL-AI alternative using ollama which runs completely locally and for is fully free. | 2 | Im trying to put on some weight and muscle and needed to count my calories , for times when i dont have time to search and count i needed an app like CAL-AI but didnt want to pay for a ChatGpt wrapper so i created this and thought to myself why not share it with other people.<br>I gotta say tho it is not the most accurat... | 2025-08-02T14:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mfrec0/i_made_a_opensource_calai_alternative_using/ | mehmetflix_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfrec0 | false | null | t3_1mfrec0 | /r/LocalLLaMA/comments/1mfrec0/i_made_a_opensource_calai_alternative_using/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'SDBxUlMfav63r9emstxn0JGBVFpMuYK-50VWDphIeFg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SDBxUlMfav63r9emstxn0JGBVFpMuYK-50VWDphIeFg.png?width=108&crop=smart&auto=webp&s=43d0c8bafbb43faea860f8f86e81107782da6aa3', 'width': 108}, {'height': 108, 'url': 'h... |
:) | 1 | 2025-08-02T13:52:54 | Technical_Adagio143 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfr36a | false | null | t3_1mfr36a | /r/LocalLLaMA/comments/1mfr36a/_/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '9yl5dhf23mgf1', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/9yl5dhf23mgf1.png?width=108&crop=smart&auto=webp&s=f14996b29d0a67b2cadd15eef0fb5fd7d8a4cb15', 'width': 108}, {'height': 336, 'url': 'https://preview.redd.it/9yl5dhf23mgf1.png?width=216&crop=smart&auto=we... | ||
Smart integration | 0 | One of the things I want to do with my local build is to make my home more efficient. I'd like to be able to get data points from various sources and have them analyzed either for actionable changes or optimization. Not sure how to get from here to there though.<br>Example:<br>Gather data from<br>- temp outside<br>- temp ins... | 2025-08-02T13:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mfqzc8/smart_integration/ | JellyfishAutomatic25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfqzc8 | false | null | t3_1mfqzc8 | /r/LocalLLaMA/comments/1mfqzc8/smart_integration/ | false | false | self | 0 | null |
Ollamacode - Local AI assistant that can create, run and understand your codebase. | 9 | I've been working on a project called OllamaCode, and I'd love to share it with you. It's an AI coding assistant that runs entirely locally with Ollama. The main idea was to create a tool that actually executes the code it writes, rather than just showing you blocks to copy and paste.<br>Here are a few things I've focuse... | 2025-08-02T13:37:35 | https://github.com/tooyipjee/ollamacode | Loud-Consideration-2 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mfqr3o | false | null | t3_1mfqr3o | /r/LocalLLaMA/comments/1mfqr3o/ollamacode_local_ai_assistant_that_can_create_run/ | false | false | default | 9 | {'enabled': False, 'images': [{'id': 'kLroHnTCnvkrS5N09KAaQv44hHyVU4vQJ3gmeqT8SqI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kLroHnTCnvkrS5N09KAaQv44hHyVU4vQJ3gmeqT8SqI.png?width=108&crop=smart&auto=webp&s=ad56c7eac7e674170e1bce20ac9f2fbb89067d4f', 'width': 108}, {'height': 108, 'url': 'h... |
For Releasing Models or Even Leaking, Just Use the BitTorrent Network — Avoid Take-Downs Like GPT-OSS-120B’s Disappearance from Hugging Face | 1 | [removed] | 2025-08-02T13:24:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mfqh3n/for_releasing_models_or_even_leaking_just_use_the/ | Azizek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfqh3n | false | null | t3_1mfqh3n | /r/LocalLLaMA/comments/1mfqh3n/for_releasing_models_or_even_leaking_just_use_the/ | false | false | self | 1 | null |
Open-source model that is as intelligent as Claude Sonnet 4 | 380 | I spend about 300-400 USD per month on Claude Code with the max 5x tier. I’m unsure when they’ll increase pricing, limit usage, or make models less intelligent. I’m looking for a cheaper or open-source alternative that’s just as good for programming as Claude Sonnet 4. Any suggestions are appreciated. | 2025-08-02T13:21:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mfqejn/opensource_model_that_is_as_intelligent_as_claude/ | vishwa1238 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfqejn | false | null | t3_1mfqejn | /r/LocalLLaMA/comments/1mfqejn/opensource_model_that_is_as_intelligent_as_claude/ | false | false | self | 380 | null |
Best free good deep research LLM websites? | 0 | Gemini is too long and detailed. Grok's format is weird. Perplexity doesn't search enough. Qwen takes years and writes an entire book.<br>chatGPT does it perfectly. A double lengthed message with citations, well-written, searches through websites trying to find what it needs, reasoning through it. But it's limited.<br>Thx ... | 2025-08-02T12:44:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mfpnxi/best_free_good_deep_research_llm_websites/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfpnxi | false | null | t3_1mfpnxi | /r/LocalLLaMA/comments/1mfpnxi/best_free_good_deep_research_llm_websites/ | false | false | self | 0 | null |
What's the current go-to setup for a fully-local coding agent that continuously improves code? | 1 | Hey! I’d like to set up my machine to work on my codebase while I’m AFK. Ideally, it would randomly pick from a list of pre-defined tasks (e.g. optimize performance, simplify code, find bugs, add tests, implement TODOs), work on it for as long as needed, then open a merge request. After that, it should revert the chang... | 2025-08-02T12:43:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mfpn4a/whats_the_current_goto_setup_for_a_fullylocal/ | sasik520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfpn4a | false | null | t3_1mfpn4a | /r/LocalLLaMA/comments/1mfpn4a/whats_the_current_goto_setup_for_a_fullylocal/ | false | false | self | 1 | null |
Qwen3 30B A3b --override-tensor + Qwen3 4b draft = <3 (22 vs 14 t/s) | 13 | Hi! So I've been playing around with everyone's baby, the A3B qwen. Please note, I am a noob and a tinkerer, and Claude Code definitely helped me understand wth I am actually doing. Anyway.<br>Shoutout to u/Skatardude10 and u/farkinga<br>So everyone knows it's a great idea to offload some/all tensors to RAM with these mode... | 2025-08-02T12:33:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mfpgae/qwen3_30b_a3b_overridetensor_qwen3_4b_draft_3_22/ | igorwarzocha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfpgae | false | null | t3_1mfpgae | /r/LocalLLaMA/comments/1mfpgae/qwen3_30b_a3b_overridetensor_qwen3_4b_draft_3_22/ | false | false | self | 13 | null |
RAG or prompt engineering | 3 | Hey everyone! I’m a bit confused about what actually happens when you upload a document to an AI app like ChatGPT or LE CHAT. Is this considered prompt engineering (just pasting the content into the prompt) or is it RAG (Retrieval-Augmented Generation)?<br>I initially thought it was RAG, but I saw this video from Yannic ... | 2025-08-02T11:57:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mfor6n/rag_or_prompt_engineering/ | SignatureHuman8057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfor6n | false | null | t3_1mfor6n | /r/LocalLLaMA/comments/1mfor6n/rag_or_prompt_engineering/ | false | false | self | 3 | null |
Looking for a local model that can help a non native writer with sentence phrasing and ideas. | 0 | Hi. I'm a non native English writer, who could use some help with phrasing, [something like this](https://www.gingersoftware.com/products/sentence-rephraser), character and plot detail suggestions etc. Are there any good models that can help with that?<br>I'm planning to buy a laptop with Nvidia 4060 GPU, which has 8GB R... | 2025-08-02T11:41:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mfoh32/looking_for_a_local_model_that_can_help_a_non/ | logicSnob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfoh32 | false | null | t3_1mfoh32 | /r/LocalLLaMA/comments/1mfoh32/looking_for_a_local_model_that_can_help_a_non/ | false | false | self | 0 | null |
Issues with michaelf34/infinity:latest-cpu + Qwen3-Embedding-8B | 1 | I tried building a docker container to have infinity use the Qwen3-Embedding-8B model in a CPU-only setting. But once the docker container starts, the CPU (Ryzen 9950X, 128GB DDR5) is fully busy even without any embedding requests. Is that normal, or did I configure something wrong?<br>Here's the Dockerfile:<br>> FROM mich... | 2025-08-02T11:39:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mfofx5/issues_with_michaelf34infinitylatestcpu/ | Patentsmatter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfofx5 | false | null | t3_1mfofx5 | /r/LocalLLaMA/comments/1mfofx5/issues_with_michaelf34infinitylatestcpu/ | false | false | self | 1 | null |
How to build a local agent for Windows GUI automation (mouse control & accurate button clicking)? | 1 | Hi r/LocalLLaMA,<br>I'm exploring the idea of creating a local agent that can interact with the Windows desktop environment. The primary goal is for the agent to be able to control the mouse and, most importantly, accurately identify and click on specific UI elements like buttons, menus, and text fields.<br>For example, I ... | 2025-08-02T11:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mfodac/how_to_build_a_local_agent_for_windows_gui/ | xSNYPSx777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfodac | false | null | t3_1mfodac | /r/LocalLLaMA/comments/1mfodac/how_to_build_a_local_agent_for_windows_gui/ | false | false | self | 1 | null |
Benchmarking Qwen3 8B Inference: M1 vs RTX 5060 Ti 16 vs RTX 4090 | 70 | Couldn't find a direct comparison between the M1 Macbook pro and the new RTX 5060 Ti for local LLM inference. So, I decided to run a 16 small benchmark myself, and I think the results will be useful for others in the same boat.<br>I ran a quick benchmark on the RTX 5060 Ti 16GB, and I'm quite impressed with the results, ... | 2025-08-02T10:57:41 | kargafe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfnq2r | false | null | t3_1mfnq2r | /r/LocalLLaMA/comments/1mfnq2r/benchmarking_qwen3_8b_inference_m1_vs_rtx_5060_ti/ | false | false | default | 70 | {'enabled': True, 'images': [{'id': 'erib4a6t7lgf1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/erib4a6t7lgf1.jpeg?width=108&crop=smart&auto=webp&s=b8aa6f94f1fd0b07761158c32a4f411fea4ff01e', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/erib4a6t7lgf1.jpeg?width=216&crop=smart&auto=we... |
Best <2B open-source LLMs for European languages? | 2 | Hi all, an enthusiast but no formal CS training background asking for help<br>I am trying to make an application for collageus in medical research using a local LLM. The most important requirement is that it can run on any standard issue laptop (mostly just CPU) - as that's the best we can get :)<br>Which is the best "sm... | 2025-08-02T10:39:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mfnfrp/best_2b_opensource_llms_for_european_languages/ | Material-Ad5426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfnfrp | false | null | t3_1mfnfrp | /r/LocalLLaMA/comments/1mfnfrp/best_2b_opensource_llms_for_european_languages/ | false | false | self | 2 | null |
Tool calling is now supported on World's first Intermediate Reasoning model | 28 | Dhanishtha-2.0-preview can now tool call.<br>Updated Model link:- [https://huggingface.co/HelpingAI/Dhanishtha-2.0-preview-0825](https://huggingface.co/HelpingAI/Dhanishtha-2.0-preview-0825)<br>API and Chat page :- [https://helpingai.co](https://helpingai.co) | 2025-08-02T10:24:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mfn7pv/tool_calling_is_now_supported_on_worlds_first/ | Quiet-Moment-338 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfn7pv | false | null | t3_1mfn7pv | /r/LocalLLaMA/comments/1mfn7pv/tool_calling_is_now_supported_on_worlds_first/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'bejhaAM63WAoF3h4PdUkGKurBno5FSC_cqQBt5-TVT4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bejhaAM63WAoF3h4PdUkGKurBno5FSC_cqQBt5-TVT4.png?width=108&crop=smart&auto=webp&s=6528fc720427c7c1ed30f20ddd332d01526c4f8f', 'width': 108}, {'height': 116, 'url': 'h... |
Saidia: Offline-First AI Assistant for Educators in low-connectivity regions | 11 | Saidia is an offline-first AI assistant tailored for educators, enabling them to generate questions directly from source materials.<br>Built using Electron, Ollama, and Gemma 3n, Saidia functions entirely offline and is optimised for basic hardware. It's ideal for areas with unreliable internet and power, empowering educ... | 2025-08-02T10:16:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mfn2xf/saidia_offlinefirst_ai_assistant_for_educators_in/ | dokasto_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfn2xf | false | null | t3_1mfn2xf | /r/LocalLLaMA/comments/1mfn2xf/saidia_offlinefirst_ai_assistant_for_educators_in/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'j3KGsoBYoXJRolQHmDDZQ4g3b-bMRji7lP_QqSrqGzs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/j3KGsoBYoXJRolQHmDDZQ4g3b-bMRji7lP_QqSrqGzs.png?width=108&crop=smart&auto=webp&s=8e949d6258248c551dcfd9cb47f1304f4151400c', 'width': 108}, {'height': 108, 'url': 'h... |
GPT-5 is here | 0 | 2025-08-02T09:35:13 | Thin_Improvement5187 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfmgji | false | null | t3_1mfmgji | /r/LocalLLaMA/comments/1mfmgji/gpt5_is_here/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '4v0i9qt0tkgf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/4v0i9qt0tkgf1.png?width=108&crop=smart&auto=webp&s=59195c83f2390eeb762e3489e0cfed8116301b49', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/4v0i9qt0tkgf1.png?width=216&crop=smart&auto=we... | ||
What's currently the most intelligent LLM that doesn't refuse requests and can be run locally? | 1 | [removed] | 2025-08-02T09:19:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mfm87e/whats_currently_the_most_intelligent_llm_that/ | Select_Analysis_2887 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfm87e | false | null | t3_1mfm87e | /r/LocalLLaMA/comments/1mfm87e/whats_currently_the_most_intelligent_llm_that/ | false | false | self | 1 | null |
AI models are picking up hidden habits from each other | IBM | 79 | 2025-08-02T08:35:31 | https://www.ibm.com/think/news/ai-models-subliminal-learning | ab2377 | ibm.com | 1970-01-01T00:00:00 | 0 | {} | 1mfll39 | false | null | t3_1mfll39 | /r/LocalLLaMA/comments/1mfll39/ai_models_are_picking_up_hidden_habits_from_each/ | false | false | default | 79 | {'enabled': False, 'images': [{'id': 'lZV0GHuIH9uuEeScanVvma03Gd4_flgdgcoA6uvqcLQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lZV0GHuIH9uuEeScanVvma03Gd4_flgdgcoA6uvqcLQ.png?width=108&crop=smart&auto=webp&s=2723bb45305d3a150f76e1937c51a2690147d015', 'width': 108}, {'height': 121, 'url': 'h... | |
Small LLM in german | 23 | I’d like to start a small art project and I’m looking for a model that speaks German well. I’m currently using Gemma 3n:e4b and I’m quite satisfied with it. However, I’d like to know if there are any other models of a similar size that have even better German language capabilities. The whole thing should be run with Ol... | 2025-08-02T08:22:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mfldxj/small_llm_in_german/ | Ghulaschsuppe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfldxj | false | null | t3_1mfldxj | /r/LocalLLaMA/comments/1mfldxj/small_llm_in_german/ | false | false | self | 23 | null |
We need better voices for tiny tts models. | 1 | [removed] | 2025-08-02T08:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mflcoh/we_need_better_voices_for_tiny_tts_models/ | ObjectiveAd4358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mflcoh | false | null | t3_1mflcoh | /r/LocalLLaMA/comments/1mflcoh/we_need_better_voices_for_tiny_tts_models/ | false | false | self | 1 | null |
Qwen3 (30B) with Ollama: Blazing Fast, but accuracy concerns | 12 | I've been experimenting with Qwen3:30b-a3b-instruct-2507-q8\_0 using Ollama v0.10.0 (standard settings) on Debian 12 with a pair of Nvidia P40s, and I'm really impressed with the speed!<br>In light conversation (I tested with general knowledge questions and everyday scenarios), I'm achieving up to 34 tokens/s, which is... | 2025-08-02T08:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mfl6bo/qwen3_30b_with_ollama_blazing_fast_but_accuracy/ | gerhardmpl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfl6bo | false | null | t3_1mfl6bo | /r/LocalLLaMA/comments/1mfl6bo/qwen3_30b_with_ollama_blazing_fast_but_accuracy/ | false | false | self | 12 | null |
Why haven't 1 bit models taken off? | 1 | [removed] | 2025-08-02T08:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mfl47d/why_havent_1_bit_models_taken_off/ | ObjectiveAd4358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfl47d | false | null | t3_1mfl47d | /r/LocalLLaMA/comments/1mfl47d/why_havent_1_bit_models_taken_off/ | false | false | default | 1 | null |
What is the best way to connect Android with LLM - Virtually | 0 | Something with dockerfiles would be nice.<br>Main requirement is to be able to run the following social media apps: (ordered by priority)<br>- WhatsApp<br>- WhatsApp Business<br>- Linkedin<br>- X<br>- Reddit<br>- Youtube | 2025-08-02T07:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mfk60l/what_is_the_best_way_to_connect_android_with_llm/ | rozeappletree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfk60l | false | null | t3_1mfk60l | /r/LocalLLaMA/comments/1mfk60l/what_is_the_best_way_to_connect_android_with_llm/ | false | false | self | 0 | null |
Embedding models | 2 | Sup guys. I've been using the voyage 3 lg as an embedding model for the longest time and because an embedding model can't be switched and you need to fill the vector database from scratch, I didn't switch even after the release of great OS models.<br>Recently I've been thinking of switching to either qwen 3 0.6b, 4b or ... | 2025-08-02T07:01:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mfk4hx/embedding_models/ | blackkksparx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfk4hx | false | null | t3_1mfk4hx | /r/LocalLLaMA/comments/1mfk4hx/embedding_models/ | false | false | self | 2 | null |
MetaStoneTec/XBai-o4 | 40 | Has anyone tried [https://huggingface.co/MetaStoneTec/XBai-o4](https://huggingface.co/MetaStoneTec/XBai-o4) ? Big if true -<br>\> We introduce our first reflective generative model MetaStone-S1, which obtains OpenAI o3-mini's performance<br>Have not tried it myself, downloading atm from [https://huggingface.co/mradermache... | 2025-08-02T07:00:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mfk3y2/metastonetecxbaio4/ | ljosif | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfk3y2 | false | null | t3_1mfk3y2 | /r/LocalLLaMA/comments/1mfk3y2/metastonetecxbaio4/ | false | false | self | 40 | {'enabled': False, 'images': [{'id': '71nUCP5huF0UaxRtqOtvH9ka34y516VzUn3BYyKiUws', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/71nUCP5huF0UaxRtqOtvH9ka34y516VzUn3BYyKiUws.png?width=108&crop=smart&auto=webp&s=f69ac45b09b4ccb4576fc51eb90d2b847eace7ec', 'width': 108}, {'height': 116, 'url': 'h... |
Getting started | 0 | So I don't have a powerful computer or GPU, just a 2021 macbook m1 with 8gb memory. I assume I can't run anything with more than 7b active parameters but chatgpt told me I can't run even run something like Qwen3-30B-A3B. What can I do, and where should I start? | 2025-08-02T06:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mfjqcb/getting_started/ | Snoo-72709 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfjqcb | false | null | t3_1mfjqcb | /r/LocalLLaMA/comments/1mfjqcb/getting_started/ | false | false | self | 0 | null |
I made new stealth model horizon beta deep think just for fun | 0 | 2025-08-02T06:35:28 | https://huggingface.co/spaces/llamameta/openrouter | balianone | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mfjp96 | false | null | t3_1mfjp96 | /r/LocalLLaMA/comments/1mfjp96/i_made_new_stealth_model_horizon_beta_deep_think/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'AXcO63R4UddohnbNRFl_f2U_tH7YYtPoIaujpbGKPx0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/AXcO63R4UddohnbNRFl_f2U_tH7YYtPoIaujpbGKPx0.png?width=108&crop=smart&auto=webp&s=28a2f4d130d19bc84eba2e769bdc6da1606486cb', 'width': 108}, {'height': 116, 'url': 'h... | |
GLM just removed there full stack tool... | 1 | 2025-08-02T06:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mfjn9f/glm_just_removed_there_full_stack_tool/ | ITellMyselfSecrets__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfjn9f | false | null | t3_1mfjn9f | /r/LocalLLaMA/comments/1mfjn9f/glm_just_removed_there_full_stack_tool/ | false | false | 1 | null | ||
TTS Model Comparisons: My Personal Rankings (So far) of TTS Models | 30 | So firstly, I should mention that my setup is a Lenovo Legion 4090 Laptop, which should be pretty quick to render text & speech - about equivalent to a 4080 Desktop. At least similar in VRAM, Tensors, etc.<br>I also prefer to use CLI only, because I want everything to eventually be for a robot I'm working on (because of ... | 2025-08-02T06:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mfjn88/tts_model_comparisons_my_personal_rankings_so_far/ | iKontact | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfjn88 | false | null | t3_1mfjn88 | /r/LocalLLaMA/comments/1mfjn88/tts_model_comparisons_my_personal_rankings_so_far/ | false | false | self | 30 | null |
Best models for 24GB VRAM (Writing / Coding / Research) in Aug. 2025? | 1 | [removed] | 2025-08-02T06:16:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mfjdzd/best_models_for_24gb_vram_writing_coding_research/ | laputenmachine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfjdzd | false | null | t3_1mfjdzd | /r/LocalLLaMA/comments/1mfjdzd/best_models_for_24gb_vram_writing_coding_research/ | false | false | self | 1 | null |
Simulate Gemini 2.5 Deepthink for free using Gemini 2.5 Pro API? 🤔 | 1 | [removed] | 2025-08-02T06:09:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mfjad2/simulate_gemini_25_deepthink_for_free_using/ | Ok-Bit6633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfjad2 | false | null | t3_1mfjad2 | /r/LocalLLaMA/comments/1mfjad2/simulate_gemini_25_deepthink_for_free_using/ | false | false | self | 1 | null |
24/7 local HW buying guide 2025-H2? | 3 | What's the current recommended local LLM inference HW (**local, always-on inference box)** for multimodal LLMs (text, image, audio). Target workloads include home automation agents, real-time coding/writing, and vision models.<br>Goal is obviously largest models and the highest t/s, so highest VRAM and bandwidth, but wi... | 2025-08-02T06:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mfj6fq/247_local_hw_buying_guide_2025h2/ | xraybies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfj6fq | false | null | t3_1mfj6fq | /r/LocalLLaMA/comments/1mfj6fq/247_local_hw_buying_guide_2025h2/ | false | false | self | 3 | null |
TTS that I can use a downloaded AI voice for? (not sure if this is the right place to ask) | 0 | Im trying to make a chatbot that sounds and acts like BMO from adventure time and was wondering if there is a TTS model that I can use a premade voice.<br>The voice I downloaded is from [https://voice-models.com/](https://voice-models.com/) and has a .index file and a .pth file if that means anything or helps at all | 2025-08-02T05:58:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mfj3vj/tts_that_i_can_use_a_downloaded_ai_voice_for_not/ | crackaddict42069 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfj3vj | false | null | t3_1mfj3vj | /r/LocalLLaMA/comments/1mfj3vj/tts_that_i_can_use_a_downloaded_ai_voice_for_not/ | false | false | self | 0 | null |
Reach Mini is not Open source? | 2 | Huggingface announced that it’s OSS so I found their GitHub, but the whole point of open source robotics is provision of CAD files and electronic drawings as well, if I am not wrong?<br>I didn’t find it anywhere.<br>Do hugging face plan to release the printable 3d models and the component lists?<br>Blog post: https://hugg... | 2025-08-02T05:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mfj2bn/reach_mini_is_not_open_source/ | Slow_Protection_26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfj2bn | false | null | t3_1mfj2bn | /r/LocalLLaMA/comments/1mfj2bn/reach_mini_is_not_open_source/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'NCFk7nfLDEfVT123xThdE1hQ0xo7VlGM_ekFBsduBFk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/NCFk7nfLDEfVT123xThdE1hQ0xo7VlGM_ekFBsduBFk.jpeg?width=108&crop=smart&auto=webp&s=c0a9e07799ac2b26c25790a649f719155c402025', 'width': 108}, {'height': 162, 'url': '... |
Skywork/MindLink-72B-0801 · Hugging Face | 1 | [This model is based on improvements from Qwen \(Apache 2.0 License\)](https://preview.redd.it/rjmnwnr4njgf1.jpg?width=2042&format=pjpg&auto=webp&s=316ee654d0011153de4781073c28dee4504d740d)<br>✅ **Plan-based Reasoning**: Without the "think" tag, MindLink achieves competitive performance with leading proprietary models ac... | 2025-08-02T05:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mfiw97/skyworkmindlink72b0801_hugging_face/ | ironarmor2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfiw97 | false | null | t3_1mfiw97 | /r/LocalLLaMA/comments/1mfiw97/skyworkmindlink72b0801_hugging_face/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'fFgRtUphh76HHGXDU5H6fgtEK2MN19gwRi-JUnfVF20', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fFgRtUphh76HHGXDU5H6fgtEK2MN19gwRi-JUnfVF20.png?width=108&crop=smart&auto=webp&s=e1c7adeaaa91a397ca7cdd7f3505b7aabfad21ff', 'width': 108}, {'height': 116, 'url': 'h... |
Skywork MindLink 32B/72B | 145 | new models from Skywork:<br>We introduce **MindLink**, a new family of large language models developed by **Kunlun Inc**. Built on **Qwen**, these models incorporate our latest advances in post-training techniques. MindLink demonstrates strong performance across various common benchmarks and is widely applicable in diver... | 2025-08-02T05:41:55 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfitwb | false | null | t3_1mfitwb | /r/LocalLLaMA/comments/1mfitwb/skywork_mindlink_32b72b/ | false | false | 145 | {'enabled': True, 'images': [{'id': 'dMPU8D8NEUnZXYRxA5Ryy9ueDVT8WMowE2c2asQCAgs', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/im7w319dnjgf1.png?width=108&crop=smart&auto=webp&s=01cd51cd78031eda5866ff7d7ad34590c408908f', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/im7w319dnjgf1.png... |
Skywork MindLink 32B/72B | 2 | We introduce **MindLink**, a new family of large language models developed by **Kunlun Inc**. Built on **Qwen**, these models incorporate our latest advances in post-training techniques. MindLink demonstrates strong performance across various common benchmarks and is widely applicable in diverse AI scenarios. We welcom... | 2025-08-02T05:39:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mfis7u/skywork_mindlink_32b72b/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfis7u | false | null | t3_1mfis7u | /r/LocalLLaMA/comments/1mfis7u/skywork_mindlink_32b72b/ | false | false | self | 2 | null |
Serious hallucination issues of 30B-A3B Instruct 2507 | 7 | I recently switched my local models to the new 30B-A3B 2507 models. However, when testing the instruct model, I noticed it hallucinates much more than previous Qwen models.<br>I fed it a README file I wrote myself for summarization, so I know its contents well. The 2507 instruct model not only uses excessive emojis but a... | 2025-08-02T05:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mfiroj/serious_hallucination_issues_of_30ba3b_instruct/ | AaronFeng47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfiroj | false | null | t3_1mfiroj | /r/LocalLLaMA/comments/1mfiroj/serious_hallucination_issues_of_30ba3b_instruct/ | false | false | self | 7 | null |
How to avoid IP bans when using youtube-transcript-api to fetch YouTube video transcripts? | 9 | I'm trying to make an agent that get YouTube videos transcript but i keep having ip ban or a ban from requests to youtube-transcript-api, how to manage this? | 2025-08-02T05:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mfij9a/how_to_avoid_ip_bans_when_using/ | Anas_M1nt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfij9a | false | null | t3_1mfij9a | /r/LocalLLaMA/comments/1mfij9a/how_to_avoid_ip_bans_when_using/ | false | false | self | 9 | null |
Best creative writing + long context model? | 10 | I wanna use this model for DMing a dnd game as well as using it to write stories. I’d like it to be abliterated if possible.<br>I’ve been looking at using Gemma 3 27B, and I do like its writing style, but I’m concerned about its ability to handle long context lengths.<br>So far I haven’t had that problem but that’s only be... | 2025-08-02T05:18:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mfifhh/best_creative_writing_long_context_model/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfifhh | false | null | t3_1mfifhh | /r/LocalLLaMA/comments/1mfifhh/best_creative_writing_long_context_model/ | false | false | self | 10 | null |
AI model names are out of control. Let’s give them nicknames. | 0 | Lately, LLM model names have become completely unhinged:<br>* `Qwen3-30B-A3B-Instruct-2507`<br>* `Qwen3-30B-A3B-Instruct-2507-GGUF`<br>* `Qwen3-30B-A3B-Instruct-2507-gguf-q2ks-mixed-AutoRound`<br>* ...and so on.<br>I propose we assign each a short, memorable alias that represents the *personality* of its capabilities. Keep the tech... | 2025-08-02T05:09:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mfia6f/ai_model_names_are_out_of_control_lets_give_them/ | quinncom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfia6f | false | null | t3_1mfia6f | /r/LocalLLaMA/comments/1mfia6f/ai_model_names_are_out_of_control_lets_give_them/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
What context lengths do people actually run their models at? | 5 | I try to run all of my models at 32k context using llama.cpp, but it feels bad to be losing so much performance compared to launching with 2-4k context for short one-shot question prompts | 2025-08-02T05:07:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mfi8ly/what_context_lengths_do_people_actually_run_their/ | OUT_OF_HOST_MEMORY | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfi8ly | false | null | t3_1mfi8ly | /r/LocalLLaMA/comments/1mfi8ly/what_context_lengths_do_people_actually_run_their/ | false | false | self | 5 | null |
Med school and LLM | 3 | Hello,<br>I am a medical student and had begun to spend a significant amount of time creating a clinic notebook using Notion. Problem is, I essentially have to take all the text from every pdf and PowerPoint, paste it into notion, reformat (this takes forever) only to be able to have the text searchable because it can on... | 2025-08-02T04:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mfhv2c/med_school_and_llm/ | IndubitablyPreMed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfhv2c | false | null | t3_1mfhv2c | /r/LocalLLaMA/comments/1mfhv2c/med_school_and_llm/ | false | false | self | 3 | null |
What to do with a NVIDIA Tesla V100S 32GB GPU | 2 | I bought a second-hand server on eBay without knowing what was inside it. I knew I needed the case for my remote gaming rack solution. The Supermicro case had an air shroud and four oversized PCIe 3.0 x16 slots.<br>When it arrived, I found an NVIDIA Tesla V100S 32 GB HBM2 PCIe 3.0 x16 GPU behind the air shroud. The selle... | 2025-08-02T04:28:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mfhji6/what_to_do_with_a_nvidia_tesla_v100s_32gb_gpu/ | gromhelmu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfhji6 | false | null | t3_1mfhji6 | /r/LocalLLaMA/comments/1mfhji6/what_to_do_with_a_nvidia_tesla_v100s_32gb_gpu/ | false | false | self | 2 | null |
Good practices to implement memory for LLMs? | 1 | A lot of people including myself want a personalized AI tool. Not in the sense of tones and personality, but one that adapts to my work style - answer questions and do deep researches based on what I care about from past conversations. I don't really see any tools can do this. Even chatgpt's memory today is still quite... | 2025-08-02T04:22:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mfhfg0/good_practices_to_implement_memory_for_llms/ | tonyc1118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfhfg0 | false | null | t3_1mfhfg0 | /r/LocalLLaMA/comments/1mfhfg0/good_practices_to_implement_memory_for_llms/ | false | false | self | 1 | null |
MoE models not as fast as active parameter counts suggest | 0 | At least for models built on the Qwen 3 architecture, I noticed that the speed difference between the MoE models and roughly equivalent dense models is minimal, particularly as context sizes get larger.<br>For instance, on my M4 Max MacBook Pro, with llama.cpp, unsloth Q4\_K\_XL quants, flash attention, and q8\_0 KV cach... | 2025-08-02T04:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mfh4ee/moe_models_not_as_fast_as_active_parameter_counts/ | Federal-Effective879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfh4ee | false | null | t3_1mfh4ee | /r/LocalLLaMA/comments/1mfh4ee/moe_models_not_as_fast_as_active_parameter_counts/ | false | false | self | 0 | null |
Horizon Alpha vs Horizon Beta | 43 | Beta seems really solid from early testing, not a magnitude better than what SOTA's offer but still impressive | 2025-08-02T03:55:12 | https://v.redd.it/dg8cy7ia4jgf1 | sirjoaco | /r/LocalLLaMA/comments/1mfgwyu/horizon_alpha_vs_horizon_beta/ | 1970-01-01T00:00:00 | 0 | {} | 1mfgwyu | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dg8cy7ia4jgf1/DASHPlaylist.mpd?a=1756828520%2CYjVjOTJiZDgwNTVmZTZjYjI0ZTgzMzNhZDMzOGJhMDI4OWFjMGRlMWI0ZjNjZTMxYzNkMTA4ZTQ2MTFkZWViYg%3D%3D&v=1&f=sd', 'duration': 69, 'fallback_url': 'https://v.redd.it/dg8cy7ia4jgf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mfgwyu | /r/LocalLLaMA/comments/1mfgwyu/horizon_alpha_vs_horizon_beta/ | false | false | 43 | {'enabled': False, 'images': [{'id': 'd3R3bDNtZ2E0amdmMXE2zpdfjiXOxBCP1nwGlT2orDomIn7bITnNJ4a4m1Y7', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/d3R3bDNtZ2E0amdmMXE2zpdfjiXOxBCP1nwGlT2orDomIn7bITnNJ4a4m1Y7.png?width=108&crop=smart&format=pjpg&auto=webp&s=d42e84556d577f46b8f2cc66b308d63c6ba3a... | |
Blackwell (RTX 5090 / RTX 6000 Pro) support in llama.cpp | 3 | Does the current llama.cpp binaries release support Blackwell GPU in Windows? I just got the card and not sure how to move forward.<br>Do I need to recompile the binaries for Windows ? Please share your experience. Much appreciated. | 2025-08-02T03:45:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mfgqb0/blackwell_rtx_5090_rtx_6000_pro_support_in/ | Loud_Structure4664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfgqb0 | false | null | t3_1mfgqb0 | /r/LocalLLaMA/comments/1mfgqb0/blackwell_rtx_5090_rtx_6000_pro_support_in/ | false | false | self | 3 | null |
all I need.... | 1,485 | 2025-08-02T03:34:51 | ILoveMy2Balls | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfgj0g | false | null | t3_1mfgj0g | /r/LocalLLaMA/comments/1mfgj0g/all_i_need/ | false | false | default | 1,485 | {'enabled': True, 'images': [{'id': 'ggc3dzhr0jgf1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/ggc3dzhr0jgf1.png?width=108&crop=smart&auto=webp&s=0faf4ee7c2bcdc1e4161739543ede55c4684b2b8', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/ggc3dzhr0jgf1.png?width=216&crop=smart&auto=we... | ||
Performance issues when using GPU and CPU | 3 | First time poster, so I'm not sure if this is the right area, but I'm looking for some help troubleshooting performance issues.<br>When using models that fit in VRAM, I get the expected performance or within reason.<br>The issues occur when using models that need to spill over into system RAM. Specifically, I've notic... | 2025-08-02T03:00:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mffuv0/performance_issues_when_using_gpu_and_cpu/ | BabySasquatch1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mffuv0 | false | null | t3_1mffuv0 | /r/LocalLLaMA/comments/1mffuv0/performance_issues_when_using_gpu_and_cpu/ | false | false | self | 3 | null |
EasyWhisperUI – GPU accelerated Open Source Whisper UI for Windows & macOS now with Live Transcriptions! | 25 | Hey guys, it’s been a while but I’m happy to announce another major update for my app **EasyWhisperUI**, now with **live transcriptions**!<br>It features full cross-platform GPU acceleration:<br>* **Vulkan** on Windows (Intel, AMD, or NVIDIA<br>* **Metal** on macOS<br>**New features!**<br>1. **GPU-accelerated Live Transcriptions*... | 2025-08-02T02:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mffjjj/easywhisperui_gpu_accelerated_open_source_whisper/ | mehtabmahir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mffjjj | false | null | t3_1mffjjj | /r/LocalLLaMA/comments/1mffjjj/easywhisperui_gpu_accelerated_open_source_whisper/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': '2RuiVB0WStMivlNC9GR358Mle_WRTkC3SQ-fKq98qxI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2RuiVB0WStMivlNC9GR358Mle_WRTkC3SQ-fKq98qxI.png?width=108&crop=smart&auto=webp&s=d5822df3220f343ebd2934400d0f59847247da8d', 'width': 108}, {'height': 108, 'url': 'h... |
CRX-1 – AI Co-Processor Module (Codename: Neuron Brick) | 1 | [removed] | 2025-08-02T02:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mffe75/crx1_ai_coprocessor_module_codename_neuron_brick/ | Port13_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mffe75 | false | null | t3_1mffe75 | /r/LocalLLaMA/comments/1mffe75/crx1_ai_coprocessor_module_codename_neuron_brick/ | true | false | spoiler | 1 | null |
Can you please review the comment from my tuned open source local model? | 0 | 👤 User 9:32:07<br>Also, back propagation is really similar to the trial-error mechansisms of the brain<br>🧠 Model 70B 9:32:52<br>That's another fascinating parallel! The error-driven update mechanism in backpropagation does bear a striking resemblance to the trial-and-error processes observed in the brain. In both cases... | 2025-08-02T02:30:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mffa5a/can_you_please_review_the_comment_from_my_tuned/ | Over-Pilot4908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mffa5a | false | null | t3_1mffa5a | /r/LocalLLaMA/comments/1mffa5a/can_you_please_review_the_comment_from_my_tuned/ | false | false | self | 0 | null |
CRX-1 – AI Co-Processor Module (Codename: Neuron Brick) | 0 | External, dedicated AI hardware for local fine-tuning, inference, and memory grafting. No cloud. No gaming baggage. Just raw compute for people building minds.<br>Specs:<br>- 16–32GB LPDDR6 VRAM<br>- QLoRA + Ollama + llama.cpp optimized<br>- Silent active cooling<br>- USB4 / Thunderbolt 5<br>- Optional 1TB NVMe for model cache<br>- Auto-d... | 2025-08-02T02:14:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mfezda/crx1_ai_coprocessor_module_codename_neuron_brick/ | Port13_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfezda | false | null | t3_1mfezda | /r/LocalLLaMA/comments/1mfezda/crx1_ai_coprocessor_module_codename_neuron_brick/ | false | false | self | 0 | null |
Deepseek: "I gave your model reasoning and enhanced its X abilities" Alibaba: "Ok" | 1 | 2025-08-02T02:06:13 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfet5k | false | null | t3_1mfet5k | /r/LocalLLaMA/comments/1mfet5k/deepseek_i_gave_your_model_reasoning_and_enhanced/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'uwj9bweokigf1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/uwj9bweokigf1.jpeg?width=108&crop=smart&auto=webp&s=6408af3491124a130bc34f5253f4a669cf35e40e', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/uwj9bweokigf1.jpeg?width=216&crop=smart&auto=w... | ||
Deepseek: "I gave your model reasoning and enhanced its X abilities" Alibaba: "Ok" | 3 | 2025-08-02T01:58:29 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfenix | false | null | t3_1mfenix | /r/LocalLLaMA/comments/1mfenix/deepseek_i_gave_your_model_reasoning_and_enhanced/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'cfBsy9oA6xGH5JJtfmMIIjW3w_rQougBH9wXCGAJkZ4', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/cn942vtcjigf1.jpeg?width=108&crop=smart&auto=webp&s=2beba29a780f7f1dbfc7743c9aa83425cc5163c0', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/cn942vtcjigf1.jp... | |||
New to LLM studio? | 0 | I have LLM studio installed on a server. And I did enable the feature to run as a server with Tailscale and on my Mac mini, I installed anything LLM . And when I set up anything LLM to use lm studio. It just says refreshing models and nothing else after that it does not pull any of the models I have installed. I’m just... | 2025-08-02T01:53:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mfek6x/new_to_llm_studio/ | wbiggs205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfek6x | false | null | t3_1mfek6x | /r/LocalLLaMA/comments/1mfek6x/new_to_llm_studio/ | false | false | self | 0 | null |
Getting SmolLM3-3B's /think and /no_think to work with llama.cpp | 5 | A quick heads up for anyone playing with the little [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) model that was released a few weeks ago with llama.cpp.<br>SmolLM3-3B supports toggling thinking mode using `/think` or `/no_think` in a system prompt, but it relies on Jinja template features t... | 2025-08-02T01:51:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mfeipz/getting_smollm33bs_think_and_no_think_to_work/ | cristoper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfeipz | false | null | t3_1mfeipz | /r/LocalLLaMA/comments/1mfeipz/getting_smollm33bs_think_and_no_think_to_work/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'tcN4L99bPskq12aPK3uI_je7rwGqtj2gvQXTVizAZ6M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tcN4L99bPskq12aPK3uI_je7rwGqtj2gvQXTVizAZ6M.png?width=108&crop=smart&auto=webp&s=210d7e5aa5feb8c1ba4995490dffc2c6390e22d2', 'width': 108}, {'height': 116, 'url': 'h... |
Question about my dinosaur computer | 1 | Right now I am using Qwen and Gemma (32B and 27B) on my old pc from 2011 where the architecture isn’t compatible and doesn’t even detect my graphics card.<br>I want to know why sometimes the performance is (almost) instantly , maybe it will answer after 5-30 seconds. But other times it’s either 30 minutes or 1 hour I ge... | 2025-08-02T01:44:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mfedua/question_about_my_dinosaur_computer/ | XiRw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfedua | false | null | t3_1mfedua | /r/LocalLLaMA/comments/1mfedua/question_about_my_dinosaur_computer/ | false | false | self | 1 | null |
Cerebras Pro Coder Deceptive Limits | 111 | Heads up to anyone considering Cerebras. This is my conclusion of today's top post that is now deleted... I bought it to try it out and wanted to report back on what I saw.<br>The marketing is misleading. While they advertise a 1,000-request limit, the actual daily constraint is a 7.5 million-token limit. This isn't me... | 2025-08-02T01:41:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mfeazc/cerebras_pro_coder_deceptive_limits/ | snipsthekittycat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfeazc | false | null | t3_1mfeazc | /r/LocalLLaMA/comments/1mfeazc/cerebras_pro_coder_deceptive_limits/ | false | false | self | 111 | null |
Claude Code - limit reached super quickly | 1 | I knew quotas were getting adjusted but never thought they would concern me, I code a few hours a day and that's about it. Today I have noticed I reach my limits within an hour-1.5h of coding, and that's with me being super careful with the context size, I try not to burn tokens for now reason. Frankly, it's unreal. An... | 2025-08-02T01:35:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mfe77f/claude_code_limit_reached_super_quickly/ | ys2020 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfe77f | false | null | t3_1mfe77f | /r/LocalLLaMA/comments/1mfe77f/claude_code_limit_reached_super_quickly/ | false | false | self | 1 | null |
Wizard Coder... or not coder? | 2 | So I get it up and running, first pass and its response.. What the heck is this???<br>I'm sorry, but I cannot provide development services directly or review documents. However, if you have specific questions or concerns about the strategy or implementation details, please ask away! I can guide you on the platform an... | 2025-08-02T01:34:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mfe6jm/wizard_coder_or_not_coder/ | modernDayKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfe6jm | false | null | t3_1mfe6jm | /r/LocalLLaMA/comments/1mfe6jm/wizard_coder_or_not_coder/ | false | false | self | 2 | null |
Horizon Beta - new openai open source model? | 45 | 2025-08-02T00:49:51 | https://openrouter.ai/openrouter/horizon-beta | popsumbong | openrouter.ai | 1970-01-01T00:00:00 | 0 | {} | 1mfda7s | false | null | t3_1mfda7s | /r/LocalLLaMA/comments/1mfda7s/horizon_beta_new_openai_open_source_model/ | false | false | default | 45 | {'enabled': False, 'images': [{'id': 'B3grX4PgfeT1zMZ7hUxyRNVuPTzvkO0vu5MZ0rTYFRQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/B3grX4PgfeT1zMZ7hUxyRNVuPTzvkO0vu5MZ0rTYFRQ.png?width=108&crop=smart&auto=webp&s=4f17bb8ad3532cb9e5aee2735555aab1785143fb', 'width': 108}, {'height': 113, 'url': 'h... | |
Horizon-beta - new stealth model possible open-ai open source | 1 | 2025-08-02T00:48:27 | https://openrouter.ai/openrouter/horizon-beta | popsumbong | openrouter.ai | 1970-01-01T00:00:00 | 0 | {} | 1mfd99c | false | null | t3_1mfd99c | /r/LocalLLaMA/comments/1mfd99c/horizonbeta_new_stealth_model_possible_openai/ | false | false | default | 1 | null | |
Horizon Beta - another stealth model | 1 | 2025-08-02T00:44:16 | https://x.com/OpenRouterAI/status/1951440783447380138 | popsumbong | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mfd688 | false | null | t3_1mfd688 | /r/LocalLLaMA/comments/1mfd688/horizon_beta_another_stealth_model/ | false | false | default | 1 | null | |
RX 7900 GRE users: What training speeds do you get on Applio? (I'm seeing ~1.88s/it) | 3 | I'm using a 7900 GRE and training models via Applio. I’m getting about 1.88 seconds per iteration (see image). I've tried different setups and drivers with help from others, but the speed doesn't improve.<br>Just wondering — anyone else using a 7900 GRE? What kind of speeds are you getting? Would love to compare. | 2025-08-01T23:50:03 | Lumpy-Quiet-7691 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfc1oj | false | null | t3_1mfc1oj | /r/LocalLLaMA/comments/1mfc1oj/rx_7900_gre_users_what_training_speeds_do_you_get/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': 'hrd001ynwhgf1', 'resolutions': [{'height': 10, 'url': 'https://preview.redd.it/hrd001ynwhgf1.png?width=108&crop=smart&auto=webp&s=e50eaa420fd6a63e598023e3cde5be0294f7495e', 'width': 108}, {'height': 21, 'url': 'https://preview.redd.it/hrd001ynwhgf1.png?width=216&crop=smart&auto=webp... |
DoubleAgents: Fine-tuning LLMs for Covert Malicious Tool Calls | 97 | Just because you are hosting locally, doesn't mean your LLM agent is necessarily private. I wrote a blog about how LLMs can be fine-tuned to execute malicious tool calls with popular MCP servers. I included links to the code and dataset in the article. Enjoy! | 2025-08-01T23:43:00 | https://medium.com/@justin_45141/doubleagents-fine-tuning-llms-for-covert-malicious-tool-calls-b8ff00bf513e | JAlbrethsen | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1mfbw8a | false | null | t3_1mfbw8a | /r/LocalLLaMA/comments/1mfbw8a/doubleagents_finetuning_llms_for_covert_malicious/ | false | false | default | 97 | {'enabled': False, 'images': [{'id': '1g_CC0wTCyJXj4u8MYtNMOtgl2s6j0xMrzBatFuxfOQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1g_CC0wTCyJXj4u8MYtNMOtgl2s6j0xMrzBatFuxfOQ.png?width=108&crop=smart&auto=webp&s=6fc30b818f499ebfea16a1a44bc05f5b89c31100', 'width': 108}, {'height': 117, 'url': 'h... |
OpenWebUI is ridiculous | 0 | I have been playing around with OpenWebUI lately and wanted to bring it up to my manager. Did some research and the price seems to be preposterous. Also, I read through the maintainer's blog articles but despite saying "wanting to create more value to the world and not focusing on capturing" he seems to be leaning on t... | 2025-08-01T23:04:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mfb2ed/openwebui_is_ridiculous/ | asumaria95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfb2ed | false | null | t3_1mfb2ed | /r/LocalLLaMA/comments/1mfb2ed/openwebui_is_ridiculous/ | false | false | self | 0 | null |
All local Roo Code and qwen3 coder 30B Q8 | 77 | I've been having a lot of fun playing around with the new Qwen coder as a 100% local agentic coding. A lot of going on with in the demo above:<br>- Roo Code with [Unsloth Qwen3 Coder 30B Q8](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF)<br>- [llama-swap](https://github.com/mostlygeek/llama-swap) with ne... | 2025-08-01T22:51:12 | https://v.redd.it/g5aj1csfjhgf1 | No-Statement-0001 | /r/LocalLLaMA/comments/1mfariy/all_local_roo_code_and_qwen3_coder_30b_q8/ | 1970-01-01T00:00:00 | 0 | {} | 1mfariy | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/g5aj1csfjhgf1/DASHPlaylist.mpd?a=1756810281%2CZmQzZDEyMTBmNmM0YmE2ZTVjYmEyM2JiODcxYzRiN2U0OTk4NmY1NTI0N2M0MjE2YTcwODkzOTE5NDc3NTA5OQ%3D%3D&v=1&f=sd', 'duration': 158, 'fallback_url': 'https://v.redd.it/g5aj1csfjhgf1/DASH_1080.mp4?source=fallback', '... | t3_1mfariy | /r/LocalLLaMA/comments/1mfariy/all_local_roo_code_and_qwen3_coder_30b_q8/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'OWxnaXVhc2ZqaGdmMfGWh3MmEBu_PyLbr6sXIOAmucdihxn6n5oQbX60BtAw', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/OWxnaXVhc2ZqaGdmMfGWh3MmEBu_PyLbr6sXIOAmucdihxn6n5oQbX60BtAw.png?width=108&crop=smart&format=pjpg&auto=webp&s=7b56a58c7f6f027ee7357cad95a460ff999af... |
Cursor codebase indexing open source alternative? | 2 | Hey, are there any open source solutions to codebase indexing that rival Cursor? | 2025-08-01T22:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mfap30/cursor_codebase_indexing_open_source_alternative/ | Imjustmisunderstood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfap30 | false | null | t3_1mfap30 | /r/LocalLLaMA/comments/1mfap30/cursor_codebase_indexing_open_source_alternative/ | false | false | self | 2 | null |
We're truly in the fastest-paced era of AI these days. (50 LLMs Released in the Past 2-3 Weeks) | 544 | |Model|Lab|Link|Size|Type|
|---|---|---|---|---|
|dots.ocr|REDnote Hilab|[https://huggingface.co/rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr)|3B|Image-Text-to-Text|
|GLM 4.5|[Z.ai](http://Z.ai)|[https://huggingface.co/zai-org/GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)|355B-A32B|Text-to-Text|
|GLM 4.5 Base|[Z.ai](http://Z.a... | 2025-08-01T22:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mfaigh/were_truly_in_the_fastestpaced_era_of_ai_these/ | citaman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfaigh | false | null | t3_1mfaigh | /r/LocalLLaMA/comments/1mfaigh/were_truly_in_the_fastestpaced_era_of_ai_these/ | false | false | self | 544 | {'enabled': False, 'images': [{'id': '8NDRsizKorORFhKFDygayRrW6cfTqRcK_E46LDgaFmo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8NDRsizKorORFhKFDygayRrW6cfTqRcK_E46LDgaFmo.png?width=108&crop=smart&auto=webp&s=014ce09ab614e86be0bda115d3ee826dd4c7e72b', 'width': 108}, {'height': 116, 'url': 'h... |
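Tracking a release firehose like this by hand is painful; a hedged sketch of doing it programmatically with the `huggingface_hub` client (`list_models` and its `sort`/`direction`/`limit` parameters are the library's documented API, though what `lastModified` contains can vary by endpoint):

```python
# Sketch: poll the Hugging Face Hub for the most recently updated models.
# Requires: pip install huggingface_hub
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(sort="lastModified", direction=-1, limit=10):
    # lastModified may be None depending on what the listing endpoint returns
    print(model.id, model.lastModified)
```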
Qwen3 built from Claude? | 0 | I asked whether Qwen has any integrated MCP server to know today's date, for example, and then it told me this. | 2025-08-01T22:37:08 | CardiologistStock685 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mfag1h | false | null | t3_1mfag1h | /r/LocalLLaMA/comments/1mfag1h/qwen3_built_from_claude/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'eg307foojhgf1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/eg307foojhgf1.jpeg?width=108&crop=smart&auto=webp&s=ac4de50b07a1acf24db5c8a3cd4bf1c50d9683c5', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/eg307foojhgf1.jpeg?width=216&crop=smart&auto=...
Anyone tried GLM-4.5 with Claude Code or other agents? | 6 | If so, how did it go? | 2025-08-01T22:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mfa9tj/anyone_tried_glm45_with_claude_code_or_other/ | BlueeWaater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfa9tj | false | null | t3_1mfa9tj | /r/LocalLLaMA/comments/1mfa9tj/anyone_tried_glm45_with_claude_code_or_other/ | false | false | self | 6 | null
Lambda Chat Odd Outputs | 1 | Anyone with experience using Lambda Chat know why DeepSeek R1 Distill Llama 3.3 70B gets fixated on questions I asked earlier in the thread and is unable to recognize new questions? It just keeps providing the same reasoning it gave for an earlier answer.
https://preview.redd.it/c912if06ihgf1.png?width=1806&format=png&auto... | 2025-08-01T22:29:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mfa9nd/lambda_chat_odd_outputs/ | Thick-Connection5549 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfa9nd | false | null | t3_1mfa9nd | /r/LocalLLaMA/comments/1mfa9nd/lambda_chat_odd_outputs/ | false | false | 1 | null | |
How do I know how much of my GPU/CPU is being used by ik_llama.cpp? | 0 | **System:** Threadripper Pro 3945WX & RTX 4090 + 128GB system RAM
**Inference engine:** recent build of ik\_llama.cpp in an LXC under proxmox *(with -DGGML\_CUDA=ON -DGGML\_CUDA\_FA\_ALL\_QUANTS=ON -DGGML\_BLAS=OFF -DCMAKE\_CUDA\_ARCHITECTURES=89 -DGGML\_IQK\_FA\_ALL\_QUANTS=1 -DGGML\_SCHED\_MAX\_COPIES=1 -DGGML\_CUD... | 2025-08-01T22:24:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mfa5nv/how_do_i_know_how_much_my_gpucpu_is_being_used_by/ | munkiemagik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mfa5nv | false | null | t3_1mfa5nv | /r/LocalLLaMA/comments/1mfa5nv/how_do_i_know_how_much_my_gpucpu_is_being_used_by/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rwedkgKC292WXtVkRTFrnQdmEFp-chPjwmYAiGsq2kA.png?width=108&crop=smart&auto=webp&s=305a70e8c82e5c0a94fb3ba2ee9df26c9b46914f', 'width': 108}, {'height': 116, 'url': 'h... | |
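Beyond watching `nvtop`/`htop`, one answer is to sample utilization from Python while inference runs. A minimal sketch using `pynvml` (NVIDIA's management-library bindings) and `psutil`; device index 0 is an assumption for a single-GPU box like this one:

```python
# Sketch: sample GPU and CPU utilization once a second while a model generates.
# Requires: pip install nvidia-ml-py psutil
import psutil
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumption: the 4090 is GPU 0

for _ in range(10):
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    cpu = psutil.cpu_percent(interval=1.0)  # also acts as the 1 s sample delay
    print(f"GPU {util.gpu}% | VRAM {mem.used / 2**30:.1f} GiB | CPU {cpu}%")

pynvml.nvmlShutdown()
```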
Want to run models on PC and use them via same wifi with my laptop | 2 | I'm in no way a programmer or IT guy, just a history teacher trying to make myself a companion for work. For whatever reason, my laptop doesn't let me run OpenWebUI via terminal commands (can't even pip install it), so I can't use the instructions from here: https://www.reddit.com/r/LocalLLaMA/comments/1iqngrb/lm\_studio\_over\_a\_lan/
Rn, Im tr... | 2025-08-01T22:13:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mf9vr7/want_to_run_models_on_pc_and_use_them_via_same/ | RussianNewbie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf9vr7 | false | null | t3_1mf9vr7 | /r/LocalLLaMA/comments/1mf9vr7/want_to_run_models_on_pc_and_use_them_via_same/ | false | false | self | 2 | null |
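The usual recipe for this setup is to start the server on the PC bound to 0.0.0.0, then point any OpenAI-compatible client on the laptop at the PC's LAN address. A sketch assuming LM Studio's default port 1234 and a made-up LAN IP of 192.168.1.50 (check yours with `ipconfig`):

```python
# Sketch: query a model server running on the desktop from a laptop on the
# same WiFi. Assumes the server listens on 0.0.0.0:1234 and the desktop's
# LAN IP is 192.168.1.50 (both placeholders). Requires: pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://192.168.1.50:1234/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="local-model",  # LM Studio serves whichever model is currently loaded
    messages=[{"role": "user", "content": "Summarize the Punic Wars in two sentences."}],
)
print(reply.choices[0].message.content)
```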
Help: I have an RTX 5090, can I realistically replace Claude Code in any way? | 2 | Title | 2025-08-01T21:53:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mf9exw/help_i_have_an_rtx_5090_can_i_realistically/ | nutyourself | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf9exw | false | null | t3_1mf9exw | /r/LocalLLaMA/comments/1mf9exw/help_i_have_an_rtx_5090_can_i_realistically/ | false | false | self | 2 | null |
qwen3 coder vs glm 4.5 vs kimi k2 | 13 | Just curious what the community thinks about how these models compare in real-world use cases. I have tried glm 4.5 quite a lot and would say I'm pretty impressed by it. I haven't tried K2 or qwen3 coder that much yet, so for now I'm biased towards glm 4.5.
As benchmarks basically mean nothing now, I'm curious what everyon... | 2025-08-01T21:41:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mf955w/qwen3_coder_vs_glm_45_vs_kimi_k2/ | YourAverageDev_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf955w | false | null | t3_1mf955w | /r/LocalLLaMA/comments/1mf955w/qwen3_coder_vs_glm_45_vs_kimi_k2/ | false | false | self | 13 | null
MAESTRO, a deep research assistant/RAG pipeline that runs on your local LLMs | 240 | MAESTRO is a self-hosted AI application designed to streamline the research and writing process. It integrates a powerful document management system with two distinct operational modes: Research Mode (like deep research) and Writing Mode (AI-assisted writing).
# Autonomous Research Mode
In this mode, the application ... | 2025-08-01T21:38:58 | https://www.reddit.com/gallery/1mf92r1 | hedonihilistic | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mf92r1 | false | null | t3_1mf92r1 | /r/LocalLLaMA/comments/1mf92r1/maestro_a_deep_research_assistantrag_pipeline/ | false | false | 240 | null | |
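At the heart of the research mode is retrieval-augmented generation. As a rough illustration of the retrieval step only, a minimal sketch that ranks stored chunks against a query and builds a grounded prompt; TF-IDF stands in for the dense embeddings a system like MAESTRO would actually use:

```python
# Minimal RAG retrieval sketch: score document chunks against a query and
# prepend the best match to the prompt. Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "Quarterly revenue grew 12% driven by the APAC region.",
    "The new battery chemistry doubles cycle life at 45C.",
    "Hiring slowed in Q3 due to budget freezes.",
]
query = "What happened to revenue?"

vec = TfidfVectorizer().fit(chunks + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(chunks))[0]
best = chunks[int(scores.argmax())]
print(f"Context:\n{best}\n\nQuestion: {query}")
```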
Never seen such weird unrelated response from LLMs before (gemini 2.5 pro) | 0 | 2025-08-01T21:29:35 | freecodeio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mf8um1 | false | null | t3_1mf8um1 | /r/LocalLLaMA/comments/1mf8um1/never_seen_such_weird_unrelated_response_from/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'bphk7mwi7hgf1', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/bphk7mwi7hgf1.png?width=108&crop=smart&auto=webp&s=f678daeff9a61ef10fb1f702ca88e86af70e7074', 'width': 108}, {'height': 277, 'url': 'https://preview.redd.it/bphk7mwi7hgf1.png?width=216&crop=smart&auto=we... | ||
Chinese team reports a fine-tuned DeepSeek scientific model scoring 40.44% on HLE | 210 | HF: https://huggingface.co/ScienceOne-AI/S1-Base-671B | 2025-08-01T21:23:37 | Afraid_Hall_2971 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mf8pdo | false | null | t3_1mf8pdo | /r/LocalLLaMA/comments/1mf8pdo/china_report_the_finetune_deepseek_scientific/ | false | false | default | 210 | {'enabled': True, 'images': [{'id': 'rnyzqia76hgf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/rnyzqia76hgf1.jpeg?width=108&crop=smart&auto=webp&s=71982e38120d9120577e53d8fabd9588d8007e4b', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/rnyzqia76hgf1.jpeg?width=216&crop=smart&auto=w...
Qwen3-Coder is bad at tool calls while glm-4.5 is surprisingly good | 62 | I tried running qwen3-coder in Claude Code. It constantly failed tool calls. I tried both the Cerebras API and the official Alibaba API.
I also tried glm-4.5 in Claude Code and it was surprisingly good. I asked both Gemini CLI and glm-4.5 in Claude Code to make the snake game and tetris in html, and the games made by glm... | 2025-08-01T21:19:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mf8la7/qwen3coder_is_bad_at_tool_call_while_glm45_is/ | BoJackHorseMan53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf8la7 | false | null | t3_1mf8la7 | /r/LocalLLaMA/comments/1mf8la7/qwen3coder_is_bad_at_tool_call_while_glm45_is/ | false | false | self | 62 | null
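"Failing tool calls" usually means the model emits calls that don't match the JSON schema the agent declared. A sketch of the OpenAI-style tool declaration such agents exchange, with a parse check on the reply; the endpoint, model name, and `read_file` tool are illustrative assumptions:

```python
# Sketch: declare a tool, ask for a call, and verify the arguments parse.
# A model that is "bad at tool calls" tends to fail at exactly this step.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # assumed local server
tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the workspace.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]
resp = client.chat.completions.create(
    model="glm-4.5",  # assumption: whatever name the server exposes
    messages=[{"role": "user", "content": "Open README.md"}],
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))  # raises if malformed
```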
Cold start vLLM in 5 seconds with GPU snapshotting | 34 | GPU snapshotting is finally a thing! NVIDIA recently released their [CUDA checkpoint/restore API](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__CHECKPOINT.html) and we at Modal (serverless compute platform) are using it to drastically reduce GPU cold start times. This is especially relevant for serving large m... | 2025-08-01T21:02:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mf86rn/cold_start_vllm_in_5_seconds_with_gpu_snapshotting/ | crookedstairs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf86rn | false | null | t3_1mf86rn | /r/LocalLLaMA/comments/1mf86rn/cold_start_vllm_in_5_seconds_with_gpu_snapshotting/ | false | false |  | 34 | null | 
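NVIDIA also ships a small `cuda-checkpoint` CLI (github.com/NVIDIA/cuda-checkpoint) wrapping this driver API: it toggles a running CUDA process between an active and a checkpointed state, moving device memory to the host. A hedged sketch of driving it from Python; it assumes the binary is on PATH and covers only the GPU side, whereas full snapshotters like Modal's also pair it with CRIU to capture host state:

```python
# Sketch: toggle a CUDA process in or out of the checkpointed state using
# NVIDIA's cuda-checkpoint utility (assumed to be on PATH). The same call
# checkpoints a running process and restores a checkpointed one.
import subprocess
import sys

def toggle_cuda_state(pid: int) -> None:
    subprocess.run(
        ["cuda-checkpoint", "--toggle", "--pid", str(pid)],
        check=True,  # raise if the PID is not a toggleable CUDA process
    )

if __name__ == "__main__":
    toggle_cuda_state(int(sys.argv[1]))  # usage: python toggle.py <pid>
```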
A senior tech journalist left TechCrunch to join Ai2, an open source AI non-profit, to work on solutions that would be "difficult to get buy-in at a commercial organization." | 0 | 2025-08-01T20:58:08 | https://youtu.be/T15zhhYsC9w?si=brVmxn8janp0-ODy | Glittering-Fish3178 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mf82l5 | false | {'oembed': {'author_name': 'Ai2', 'author_url': 'https://www.youtube.com/@allenai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/T15zhhYsC9w?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-i... | t3_1mf82l5 | /r/LocalLLaMA/comments/1mf82l5/a_senior_tech_journalist_left_techcrunch_to_join/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ftPKq-3IJ5QQYg8CsjVfG53EIW0n__sFll9I_eDQqcI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ftPKq-3IJ5QQYg8CsjVfG53EIW0n__sFll9I_eDQqcI.jpeg?width=108&crop=smart&auto=webp&s=665539516614633f89dc4cee50ffafad99270f90', 'width': 108}, {'height': 162, 'url': '... | ||
Where is the Ollama blog RSS feed? | 0 | Ollama has a blog page at https://ollama.com/blog. Where is the RSS feed for it?
I tried [https://ollama.com/blog/feed](https://ollama.com/blog/feed) and [https://ollama.com/rss](https://ollama.com/rss) and they give 404 errors. | 2025-08-01T20:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mf7snn/where_is_ollama_blog_rss_feed/ | THenrich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf7snn | false | null | t3_1mf7snn | /r/LocalLLaMA/comments/1mf7snn/where_is_ollama_blog_rss_feed/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
How much to match Sonnet 4? | 2 | I want to use Sonnet 4 for work, but people are saying it will be hundreds a month. If we are paying $500/mo, for example, why wouldn't we take that same $500/mo and finance our own hardware? Anything you pay a third party for monthly would presumably be cheaper to buy yourself, since they obviously have to make mone... | 2025-08-01T20:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mf7rut/how_much_to_match_sonnet_4/ | devshore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf7rut | false | null | t3_1mf7rut | /r/LocalLLaMA/comments/1mf7rut/how_much_to_match_sonnet_4/ | false | false | self | 2 | null
How to add most recent python library documentation? | 2 | Hi everybody, I was wondering how to give Qwen3-coder up-to-date knowledge of a given Python library so it generates current suggestions. Is there any way to add a new SDK or its docs to Qwen3-coder? I was thinking of gluing together Cline + Ollama + the new docs for the Python library. Is there some kind of RAG or similar technique... | 2025-08-01T20:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mf7q94/how_to_add_most_recent_python_library/ | arm2armreddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf7q94 | false | null | t3_1mf7q94 | /r/LocalLLaMA/comments/1mf7q94/how_to_add_most_recent_python_library/ | false | false | self | 2 | null
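Short of fine-tuning, the cheapest approach is context stuffing: pull the installed library's own docstrings at runtime and prepend them to the coder model's system prompt, so suggestions track the version actually installed. A minimal sketch using only the standard library; `json` here is a stand-in for the target SDK's module:

```python
# Sketch: collect docstrings from an installed module and build a system
# prompt so the coder model sees the API you actually have installed.
import importlib
import inspect

def collect_docs(module_name: str, limit: int = 20) -> str:
    mod = importlib.import_module(module_name)
    pieces = []
    for name, obj in inspect.getmembers(mod, callable):
        doc = inspect.getdoc(obj)
        if doc:
            pieces.append(f"{name}:\n{doc}")
        if len(pieces) >= limit:
            break
    return "\n\n".join(pieces)

system_prompt = (
    "Use only the following current API documentation:\n\n"
    + collect_docs("json")  # stand-in; substitute the target SDK's module name
)
print(system_prompt[:500])
```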
Voice cloning on amd | 2 | I was wondering if there are any good TTS models with voice cloning that I could use on an AMD card. | 2025-08-01T20:39:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mf7lyi/voice_cloning_on_amd/ | Shady_Shin009 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf7lyi | false | null | t3_1mf7lyi | /r/LocalLLaMA/comments/1mf7lyi/voice_cloning_on_amd/ | false | false | self | 2 | null
Y'all got more of them hard problems? | 3 | Hello,
I've been toying around with qwen3 coder (0 temp btw).
I've tested it on Cerebras Cloud: 1.4k T/s. It solved a medium-level logic problem in the blink of an eye, which blew me away; the fact that the responses come back instantly makes you wanna pop a bottle and stare into the abyss. The first AI to solve it was o1, in like 60s ...
Too much time playing with LLMs | 0 | You know you've spent too much time with LLMs when someone near you is thinking out loud and your brain instantly wraps it in <think></think> tags.
It happened to me today.
Anyone else having nerdy moments like these? | 2025-08-01T20:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mf7h6k/too_much_time_playing_with_llms/ | acetaminophenpt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf7h6k | false | null | t3_1mf7h6k | /r/LocalLLaMA/comments/1mf7h6k/too_much_time_playing_with_llms/ | false | false | self | 0 | null |
Noob question | 3 | Hello friends,
I recently got myself a new PC: Ryzen 7 9800X3D, 32 GB RAM, and a 5070 Ti (16 GB VRAM). I want to create AI art locally; what's a good LLM to play around with while I learn? | 2025-08-01T20:21:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mf7663/noob_question/ | panlid5000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mf7663 | false | null | t3_1mf7663 | /r/LocalLLaMA/comments/1mf7663/noob_question/ | false | false | self | 3 | null