title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
[Framework] Recursive Identity Memory Patch for GPT Agents | 0 | **Purpose**
This protocol introduces a simple pattern to **anchor identity and coherence** across recursive GPT calls. It reduces drift, preserves memory in stateless environments, and helps agents recognize themselves across iterations.
**Context**
ChatGPT and similar LLMs often **lose internal alignment** during... | 2025-07-28T11:33:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mbdqby/framework_recursive_identity_memory_patch_for_gpt/ | ConsistentPractice46 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbdqby | false | null | t3_1mbdqby | /r/LocalLLaMA/comments/1mbdqby/framework_recursive_identity_memory_patch_for_gpt/ | false | false | self | 0 | null |
Hybrid Reasoning Models | 3 | I really love the fact that I can have both a SOTA reasoning AND instruct model variant off of one singular model. I can essentially deploy 2 models with 2 use cases at the cost of one model's VRAM. With /think for difficult problems and /no_think for easier problems, essentially we can experience the best from both wor... | 2025-07-28T11:28:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mbdn26/hybrid_reasoning_models/ | MichaelXie4645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbdn26 | false | null | t3_1mbdn26 | /r/LocalLLaMA/comments/1mbdn26/hybrid_reasoning_models/ | false | false | self | 3 | null |
GLM 4.5 possibly releasing today according to Bloomberg | 157 | Bloomberg writes:
>The startup will release GLM-4.5, an update to its flagship model, as soon as Monday, according to a person familiar with the plan.
The organization has changed their name on HF from THUDM to zai-org and they have a GLM 4.5 collection which has 8 hidden items in it.
[https://huggingface.co/organiz... | 2025-07-28T11:26:56 | https://www.bloomberg.com/news/articles/2025-07-28/chinese-openai-challenger-zhipu-to-unveil-new-open-source-model | rerri | bloomberg.com | 1970-01-01T00:00:00 | 0 | {} | 1mbdm6t | false | null | t3_1mbdm6t | /r/LocalLLaMA/comments/1mbdm6t/glm_45_possibly_releasing_today_according_to/ | false | false | 157 | {'enabled': False, 'images': [{'id': 'MKYF_mjSE9CGBChz_RrVYgFKUGWvflOwY1euYqGGxdc', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/MKYF_mjSE9CGBChz_RrVYgFKUGWvflOwY1euYqGGxdc.jpeg?width=108&crop=smart&auto=webp&s=49d610e841064301ef9eed8e3e833431e3633cd1', 'width': 108}, {'height': 144, 'url': '... | |
🚀 Built and launched a live AI app in 15 minutes — no code, no backend, just upload & go 😎 | 0 | I wanted to test how fast I could go from idea to live AI product — and I was honestly shocked.
In literally 15 minutes, I created:
✅ A chatbot trained on my own PDF
✅ Auto-generated web interface
✅ Live public link + API
✅ Fully customizable prompts & memory
✅ Zero code — just drag, drop, tweak
What’s wild?
You get ... | 2025-07-28T11:25:18 | https://linkly.link/2C1fh | Emotional-Step-7328 | linkly.link | 1970-01-01T00:00:00 | 0 | {} | 1mbdl2y | false | null | t3_1mbdl2y | /r/LocalLLaMA/comments/1mbdl2y/built_and_launched_a_live_ai_app_in_15_minutes_no/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Qac6deOGwdOKwslKfU-GW0p1hpSUSxVD9tnrSjoF7Qs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Qac6deOGwdOKwslKfU-GW0p1hpSUSxVD9tnrSjoF7Qs.jpeg?width=108&crop=smart&auto=webp&s=63b005644d33745cf777c275c1188920619b5ced', 'width': 108}, {'height': 108, 'url': '... | |
Building a personal project for portfolio management. | 1 | Hi everyone I am trying to build a small project just to keep in touch with all the news and information flowing in the markets so that I can better understand what is happening around the world. I am fetching the data from a website where I get the link of the pdf for concalls and other credit ratings changes, this in... | 2025-07-28T11:17:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mbdg53/building_a_personal_project_for_portfolio/ | Boring_Tip_1218 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbdg53 | false | null | t3_1mbdg53 | /r/LocalLLaMA/comments/1mbdg53/building_a_personal_project_for_portfolio/ | false | false | self | 1 | null |
Opensource: The AI Model Router - Automating AI Model Selection | 3 | Hey y'all, I built an open-source AI Model Router that automatically picks the best AI provider (OpenAI, Anthropic, Google, local), model, and settings for your prompts. No more guessing between OpenAI, Claude, or Gemini!
Feedback welcome! | 2025-07-28T10:48:12 | https://github.com/MonkWarrior08/Model_Router | Idonotknow101 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mbcwek | false | null | t3_1mbcwek | /r/LocalLLaMA/comments/1mbcwek/opensource_the_ai_model_router_automating_ai/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'U65r0MxggWUPTuOch0OpSwES-nV5AG-PgAkmJMyj4wE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U65r0MxggWUPTuOch0OpSwES-nV5AG-PgAkmJMyj4wE.png?width=108&crop=smart&auto=webp&s=e0eae7298df9a75056291706e27eb55423947f5a', 'width': 108}, {'height': 108, 'url': 'h... |
Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights | 18 | 2025-07-28T10:17:33 | https://jerryliang24.github.io/DnD/ | paf1138 | jerryliang24.github.io | 1970-01-01T00:00:00 | 0 | {} | 1mbce7b | false | null | t3_1mbce7b | /r/LocalLLaMA/comments/1mbce7b/draganddrop_llms_zeroshot_prompttoweights/ | false | false | default | 18 | null | |
Understanding Local Language Models: A Beginner’s Guide | 4 | TL;DR A local language model is like a mini-brain for your computer. It’s trained to understand and generate text, like answering questions or writing essays. Unlike online AI (like ChatGPT), local LLMs don’t need a cloud server—you run them directly on your machine. But to do this, you need to know about **model size*... | 2025-07-28T10:09:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mbc9d3/understanding_local_language_models_a_beginners/ | 120-dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbc9d3 | false | null | t3_1mbc9d3 | /r/LocalLLaMA/comments/1mbc9d3/understanding_local_language_models_a_beginners/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU.png?width=108&crop=smart&auto=webp&s=4c76a863977e105532ff0253418287f7ceba9902', 'width': 108}, {'height': 116, 'url': 'h... |
Are there any examples of 14B+ reputable models that outperform models twice their size or more? | 9 | Looking for examples where smaller reputable models (Llama, Qwen, DeepSeek, …) are widely recognized as better - not just in benchmarks, but in broader evaluations for general tasks.
I sometimes see claims that 70B-range models beat 300B+ ones, often based on benchmark results. But in practice or broader testing, the ... | 2025-07-28T10:08:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mbc8tb/are_there_any_examples_of_14b_reputable_models/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbc8tb | false | null | t3_1mbc8tb | /r/LocalLLaMA/comments/1mbc8tb/are_there_any_examples_of_14b_reputable_models/ | false | false | self | 9 | null |
Qwen 3 thinks deeper, acts faster, and it outperforms models like DeepSeek-R1, Grok 3 and Gemini-2.5-Pro. | 0 | 2025-07-28T09:35:28 | https://x.com/Invessted/status/1949375630975635577 | JeffreySons_90 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mbbphk | false | null | t3_1mbbphk | /r/LocalLLaMA/comments/1mbbphk/qwen_3_thinks_deeper_acts_faster_and_it/ | false | false | default | 0 | null | |
Vibe-coded Webpage-summarizer Chrome extension to leverage OSS models | 6 | Repo: [https://github.com/JC1DA/Neutral\_Summarizer](https://github.com/JC1DA/Neutral_Summarizer)
It was built using Cline + Qwen3-coder
Hope it will be useful to some people :) | 2025-07-28T08:45:31 | https://www.reddit.com/gallery/1mbaxqj | JC1DA | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mbaxqj | false | null | t3_1mbaxqj | /r/LocalLLaMA/comments/1mbaxqj/vibecoded_webpagesummarizer_chrome_extension_to/ | false | false | 6 | null | |
My first finetune: Gemma 3 4B unslop via GRPO | 36 | Training code is included, so maybe someone with more hardware than me can do cooler stuff.
I also uploaded a Q4_K_M GGUF made with unsloth's imatrix.
It's released as a LoRA adapter because my internet sucks and I can't successfully upload the whole thing. If you want full quality you'll need to merge it with https:... | 2025-07-28T08:41:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mbavi1/my_first_finetune_gemma_3_4b_unslop_via_grpo/ | terminoid_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbavi1 | false | null | t3_1mbavi1 | /r/LocalLLaMA/comments/1mbavi1/my_first_finetune_gemma_3_4b_unslop_via_grpo/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'n4F82h2bj6n4tdhhDMhHVbVA_pWqxkTu7TkGUD3n1ws', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n4F82h2bj6n4tdhhDMhHVbVA_pWqxkTu7TkGUD3n1ws.png?width=108&crop=smart&auto=webp&s=1eab9597f3861206e36473c4a5729c07d8f15be7', 'width': 108}, {'height': 116, 'url': 'h... |
Please suggest me android apps to run onnx models for testing like pocketpal | 2 | Hi same as title. I have used pocketpal and smolchat to run gguf models as of now in Android. I want to test some onnxmodels. Is there any similar app for the same? | 2025-07-28T08:29:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mbap20/please_suggest_me_android_apps_to_run_onnx_models/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbap20 | false | null | t3_1mbap20 | /r/LocalLLaMA/comments/1mbap20/please_suggest_me_android_apps_to_run_onnx_models/ | false | false | self | 2 | null |
Best Ollama Models for Coding (Java/Python) and Writing Help? | 1 | [removed] | 2025-07-28T08:28:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mbaor4/best_ollama_models_for_coding_javapython_and/ | Kd_Gaming1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbaor4 | false | null | t3_1mbaor4 | /r/LocalLLaMA/comments/1mbaor4/best_ollama_models_for_coding_javapython_and/ | false | false | self | 1 | null |
Best model for different tasks? | 1 | [removed] | 2025-07-28T08:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mbamg9/best_model_for_different_tasks/ | Kd_Gaming1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbamg9 | false | null | t3_1mbamg9 | /r/LocalLLaMA/comments/1mbamg9/best_model_for_different_tasks/ | false | false | self | 1 | null |
Fine Tuning; Attribution at Inference Time | 3 | I'm working on a new model that allows its training data to be attributed at inference time. One of my hypotheses is that if the data used at inference can be attributed, then the next round of fine-tuning can:
1. Trim data that wasn't used at inference
2. More data could be a... | 2025-07-28T08:20:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mbako7/fine_tuning_attribution_at_inference_time/ | Iam_Alastair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbako7 | false | null | t3_1mbako7 | /r/LocalLLaMA/comments/1mbako7/fine_tuning_attribution_at_inference_time/ | false | false | self | 3 | null |
UI persistently refusing to work | 0 | Alright so essentially I'm trying to make a Jarvis-esque AI to talk to that can record information I mention about hobbies and reply back with that info, and be helpful along the way. I'm using LM Studio, Mistral 7B (Q4_K_M), Chroma, Hugging Face, LangChain, and a lot of Python. Prompt is... | 2025-07-28T08:15:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mbaho0/ui_persistently_refusing_to_work/ | ActiveBathroom9482 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mbaho0 | false | null | t3_1mbaho0 | /r/LocalLLaMA/comments/1mbaho0/ui_persistently_refusing_to_work/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'GCULlMRnSeYn1sb9Es6Plh7-e1ojq5AtN2dVcrhOKo0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GCULlMRnSeYn1sb9Es6Plh7-e1ojq5AtN2dVcrhOKo0.png?width=108&crop=smart&auto=webp&s=e3707c20deb077f14b6e1c3aa58515b4817b20f2', 'width': 108}, {'height': 113, 'url': 'h...
best small LLM for pandasai via ollama | 0 | I have 3x Tesla A100s. My goal is to serve a model via Ollama and use it with the PandasAI package, so the user enters a prompt and the model generates code to analyze large dataframes and outputs plots, values, etc.
which models do you suggest?
i've seen mistral nemo , qwen 2.5 etc
im trying to get the current bes... | 2025-07-28T07:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mba8j8/best_small_llm_for_pandasai_via_ollama/ | Main-Quail-3717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mba8j8 | false | null | t3_1mba8j8 | /r/LocalLLaMA/comments/1mba8j8/best_small_llm_for_pandasai_via_ollama/ | false | false | self | 0 | null |
Did Qwen put up Qwen3-30B-A3B-Instruct-2507 on Hugging Face by accident? | 3 | The [link](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507) on Hugging Face now returns a 404, so it looks like they didn't mean to put it up so soon?
That said, anyone know when it's coming out? | 2025-07-28T07:49:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mba3hk/did_qwen_put_up_qwen330ba3binstruct2507_on/ | Accomplished-Copy332 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mba3hk | false | null | t3_1mba3hk | /r/LocalLLaMA/comments/1mba3hk/did_qwen_put_up_qwen330ba3binstruct2507_on/ | false | false | self | 3 | null |
Qwen/Qwen3-30B-A3B-Instruct-2507 · Hugging Face | 556 | No model card as of yet | 2025-07-28T07:33:42 | https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507 | rerri | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mb9uy8 | false | null | t3_1mb9uy8 | /r/LocalLLaMA/comments/1mb9uy8/qwenqwen330ba3binstruct2507_hugging_face/ | false | false | default | 556 | {'enabled': False, 'images': [{'id': '4L2FXW9Fym-Ol4pha2Ze5zHkeeMTtxPBl8ihz-UFknI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4L2FXW9Fym-Ol4pha2Ze5zHkeeMTtxPBl8ihz-UFknI.png?width=108&crop=smart&auto=webp&s=d1c3476d621a9393fbb7ca11c48a3074c5fd6803', 'width': 108}, {'height': 116, 'url': 'h... |
OS Cursor for documents? | 3 | Is there a platform, preferably open source, that would behave like Claude Code/Cursor but for writing (and not coding)?
Currently, I use roocode and create custom agents, but:
1. Not web-based
2. Coder spillover: many such agents' system prompts are specific to coding, and from time to time they write code.
3. There are... | 2025-07-28T06:58:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mb9b1t/os_cursor_for_documents/ | keniget | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb9b1t | false | null | t3_1mb9b1t | /r/LocalLLaMA/comments/1mb9b1t/os_cursor_for_documents/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '_BHq_SDHRIx5gVATfsBYlVYLFDnOzpJ-sRmdoP6ZldU', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/oQlLZuK0ZN4sqe3SpmPzB_Ve0jJmmRQAh1zSyvuejTU.jpg?width=108&crop=smart&auto=webp&s=b0a62112ca4f3da001972d1db4e275a2298823f6', 'width': 108}, {'height': 134, 'url': 'h... |
Granite 4 small and medium might be 30B6A/120B30A? | 77 | 2025-07-28T06:53:38 | https://www.youtube.com/watch?v=UxUD88TRlBY | Kryesh | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mb98cm | false | {'oembed': {'author_name': 'IBM Developer', 'author_url': 'https://www.youtube.com/@IBMDeveloperAdvocates', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/UxUD88TRlBY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-med... | t3_1mb98cm | /r/LocalLLaMA/comments/1mb98cm/granite_4_small_and_medium_might_be_30b6a120b30a/ | false | false | default | 77 | {'enabled': False, 'images': [{'id': 'HsLxSV9iQYiHn_HZBXHd4eTY-jpAHtvg9nNDBZ3sa94', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/HsLxSV9iQYiHn_HZBXHd4eTY-jpAHtvg9nNDBZ3sa94.jpeg?width=108&crop=smart&auto=webp&s=8ffc303981c4cefcd42ea47abb6a8382d3a7034a', 'width': 108}, {'height': 162, 'url': '... | |
Dual Turin build anyone? | 0 | Was looking into a dual 9175F with 24 channels of RAM and wanted to check if anybody has ever succeeded with that or a similar build?
My option would be an MZ73-LM0 r3 motherboard, but I am scared of the CPU QVL marking the 9175F as "contact us!"
Would love to go for a Asrock Rack /Supermicro but no 24 dimm in a reasonable ... | 2025-07-28T06:25:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mb8sa8/dual_turin_build_anyone/ | nail_nail | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb8sa8 | false | null | t3_1mb8sa8 | /r/LocalLLaMA/comments/1mb8sa8/dual_turin_build_anyone/ | false | false | self | 0 | null |
Watch Alibaba Cloud Founder on China’s AI Future | 42 | 2025-07-28T05:25:13 | https://www.bloomberg.com/news/videos/2025-07-28/alibaba-cloud-founder-on-china-s-ai-future-video | fallingdowndizzyvr | bloomberg.com | 1970-01-01T00:00:00 | 0 | {} | 1mb7tb7 | false | null | t3_1mb7tb7 | /r/LocalLLaMA/comments/1mb7tb7/watch_alibaba_cloud_founder_on_chinas_ai_future/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'grOevYCkkhDi2lNOhhkTLldJ3vjPBtyjZzAD6KyhuGI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/grOevYCkkhDi2lNOhhkTLldJ3vjPBtyjZzAD6KyhuGI.jpeg?width=108&crop=smart&auto=webp&s=1d4a0415cf6ce806582cc8deb1c35cc85ba99e73', 'width': 108}, {'height': 121, 'url': '... | ||
Help me, please | 0 | I took on a task that is turning out to be extremely difficult for me. Normally, I’m pretty good at finding resources online and implementing them.
I’ve essentially put upper management in the loop, and they are really hoping that this is done this week.
A basic way, for container yard workers to scan large stacks of ... | 2025-07-28T05:09:19 | BitSharp5640 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mb7jrh | false | null | t3_1mb7jrh | /r/LocalLLaMA/comments/1mb7jrh/help_me_please/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'vsV8jvg9QZ8DSgdHnPTY2B_cYjlwBDHYXauyHpSGyNw', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/xrqoc6l3tjff1.jpeg?width=108&crop=smart&auto=webp&s=d92ac3adb775ffb2bb8aaa366a580bee8225123a', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/xrqoc6l3tjff1.j... | ||
System Ram Speed Importance when using GPU | 3 | I am very attracted to the idea of using server hardware for LLMs, since 16-channel DDR4 memory will give 400 GB/s worth of bandwidth.
However, one thing that keeps popping up when researching is PCIe bandwidth being an issue
Logically, it does make sense, since PCIe 4.0 x16 gives 32 GB/s, way too little for LLMs, not t... | 2025-07-28T05:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mb7gxu/system_ram_speed_importance_when_using_gpu/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb7gxu | false | null | t3_1mb7gxu | /r/LocalLLaMA/comments/1mb7gxu/system_ram_speed_importance_when_using_gpu/ | false | false | self | 3 | null |
2x RTX 3090 24GB or 8x 3060 12GB | 18 | Hey, apologies if this question has been posted before i haven’t been able to find any concrete info on it.
In my area i can get 8 3060 12GBs for the exact same price as two 3090s, I’m looking to run LLMs, Heavy ComfyUI workflows, training models, LoRas and just about any other AI development haha.
I’ve never ran an... | 2025-07-28T04:49:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mb77c7/2x_rtx_3090_24gb_or_8x_3060_12gb/ | twotemp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb77c7 | false | null | t3_1mb77c7 | /r/LocalLLaMA/comments/1mb77c7/2x_rtx_3090_24gb_or_8x_3060_12gb/ | false | false | self | 18 | null |
Pi AI studio | 122 | This 96GB device cost around $1000. Has anyone tried it before? Can it host small LLMs? | 2025-07-28T04:28:59 | https://www.reddit.com/gallery/1mb6uhm | koumoua01 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mb6uhm | false | null | t3_1mb6uhm | /r/LocalLLaMA/comments/1mb6uhm/pi_ai_studio/ | false | false | 122 | null | |
Is anyone using MemOS? What are the pros and cons? | 0 | From the docs: **MemOS** is a **Memory Operating System** for large language models (LLMs) and autonomous agents. It treats memory as a **first-class, orchestrated, and explainable resource**, rather than an opaque layer hidden inside model weights.
Here's the URL of the docs: [https://memos-docs.openmem.net/docs/... | 2025-07-28T04:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mb6rre/is_anyone_using_memos_what_are_the_pros_and_cons/ | robkkni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb6rre | false | null | t3_1mb6rre | /r/LocalLLaMA/comments/1mb6rre/is_anyone_using_memos_what_are_the_pros_and_cons/ | false | false | self | 0 | null |
Why I'm Betting Against AI Agents in 2025 (Despite Building Them) | 85 | 2025-07-28T04:12:40 | https://utkarshkanwat.com/writing/betting-against-agents/ | Ilovekittens345 | utkarshkanwat.com | 1970-01-01T00:00:00 | 0 | {} | 1mb6jzz | false | null | t3_1mb6jzz | /r/LocalLLaMA/comments/1mb6jzz/why_im_betting_against_ai_agents_in_2025_despite/ | false | false | default | 85 | null | |
Has vLLM made Ollama and llama.cpp redundant? | 0 | I remember when vLLM was just a narrowly specialized tool which almost nobody used. Everyone was using Ollama (basically a wrapper for llama.cpp which turns it into an OpenAI-compatible API and adds some easy tools for downloading models), or using llama.cpp directly.
But I've been seeing more and more people using vLLM ... | 2025-07-28T04:10:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mb6i7x/has_vllm_made_ollama_and_llamacpp_redundant/ | pilkyton | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb6i7x | false | null | t3_1mb6i7x | /r/LocalLLaMA/comments/1mb6i7x/has_vllm_made_ollama_and_llamacpp_redundant/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'gcraJ-_ZjGA3RGGP329KqzVct1E6PhewGRbEc8JjoCA', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/gcraJ-_ZjGA3RGGP329KqzVct1E6PhewGRbEc8JjoCA.png?width=108&crop=smart&auto=webp&s=9651e112d40e457f68e29ea968b21c203638d6d5', 'width': 108}, {'height': 143, 'url': 'h... |
Rtx 3090 + Rtx 2060 for Context Increase and Performance | 4 | Yesterday I bought a 3090 and it works great with vllm (despite some issues in some models, but that is probably my fault). Is there a way that I could use my rtx 2060 (6gb vram) for context (I can only use 8k context in qwen2.5-coder:32b awq using the 3090)? If not for context then maybe to increase the tokens/second.... | 2025-07-28T03:20:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mb5jut/rtx_3090_rtx_2060_for_context_increase_and/ | FredericoDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb5jut | false | null | t3_1mb5jut | /r/LocalLLaMA/comments/1mb5jut/rtx_3090_rtx_2060_for_context_increase_and/ | false | false | self | 4 | null |
Technical Report of TeleChat2, TeleChat2.5 and T1 | 7 | **TECHNICAL REPORT OF TELECHAT2, TELECHAT2.5 AND T1**
|Model|Link|
|:-|:-|
|**TeleChat2-35B** |[**https://modelscope.cn/models/TeleAI/TeleChat2-35B**](https://modelscope.cn/models/TeleAI/TeleChat2-35B)|
|**TeleChat2-115B**|[**https://modelscope.cn/models/TeleAI/TeleChat2-115B**](https://modelscope.cn/models/TeleAI/Te... | 2025-07-28T02:32:33 | https://arxiv.org/abs/2507.18013 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1mb4mex | false | null | t3_1mb4mex | /r/LocalLLaMA/comments/1mb4mex/technical_report_of_telechat2_telechat25_and_t1/ | false | false | default | 7 | null |
Can anyone suggest the best local model for multi turn chat with RAG usage? | 2 | I’m trying to figure out which local model(s) will be best for multi-turn chat with RAG usage. I anticipate my responses filling up the full chat context and needing to get it to continue repeatedly.
Can anyone suggest high output token models that work well when continuing/extending a chat turn so the answer continues whe... | 2025-07-28T02:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mb4h6d/can_anyone_suggest_the_best_local_model_for_multi/ | Business-Weekend-537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb4h6d | false | null | t3_1mb4h6d | /r/LocalLLaMA/comments/1mb4h6d/can_anyone_suggest_the_best_local_model_for_multi/ | false | false | self | 2 | null |
Bending VS Code into a document-processing AI tool worked - but there must be a better way | 9 | Here's what happened:
I needed to help someone extract structured data from hundreds of detailed Word documents (~100KB each) containing manually typed survey responses (yes/no answers + comments). Each document was internally unique, making traditional automation impossible. With limited time to research solutions, I... | 2025-07-28T02:19:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mb4d9y/bending_vs_code_into_a_documentprocessing_ai_tool/ | Normal-Ad-7114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb4d9y | false | null | t3_1mb4d9y | /r/LocalLLaMA/comments/1mb4d9y/bending_vs_code_into_a_documentprocessing_ai_tool/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'okT-A-GTNMcR1p0qC1l3xTiCyaSUnymno2UcuWewt-c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/okT-A-GTNMcR1p0qC1l3xTiCyaSUnymno2UcuWewt-c.png?width=108&crop=smart&auto=webp&s=b6b187e8b4cac1bc1c1bbd33b1877252d6b4cdae', 'width': 108}, {'height': 108, 'url': 'h... |
Pre-built Desktop Tower Optimized for 70b Local LLMs | 1 | Hi friends. I am looking to purchase a pre-built machine for running ollama models. I'm not doing fine-tuning or anything advanced. This thing will run headless in the basement and I plan to access it over the network.
Any suggestions? I've searched and mostly found advice for DIY builds, or gaming machines with a me... | 2025-07-28T02:06:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mb43ux/prebuilt_desktop_tower_optimized_for_70b_local/ | DonutQuixote | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb43ux | false | null | t3_1mb43ux | /r/LocalLLaMA/comments/1mb43ux/prebuilt_desktop_tower_optimized_for_70b_local/ | false | false | self | 1 | null |
UI/UX Benchmark Update 7/27: 50 Models, Humanity, Voice, and new models from an AI lab on the horizon? | 26 | Here's my last post as [context](https://www.reddit.com/r/LocalLLaMA/comments/1m6ztb2/uiux_benchmark_update_722_newest_qwen_models/). Otherwise let's get to the exciting updates about [the benchmark](https://www.designarena.ai/).
1. **50 Models:** I've lost track of the count, but since the benchmark began a little o... | 2025-07-28T01:58:00 | https://www.reddit.com/gallery/1mb3xi3 | Accomplished-Copy332 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mb3xi3 | false | null | t3_1mb3xi3 | /r/LocalLLaMA/comments/1mb3xi3/uiux_benchmark_update_727_50_models_humanity/ | false | false | 26 | null | |
AI agents created a 55-minute Kubernetes audio tour with Rick & Morty, Batman, GLaDOS & more - all through conversation | 0 | Hey everyone! 👋
I wanted to share something cool that happened today. Through a conversational workflow, I built an entire 55-minute audio tour teaching Kubernetes (I'd kinda been held back by the Docker jungle) - and the whole thing was created by AI agents responding to my requests.
How it was made:
• Started with: "I ... | 2025-07-28T01:22:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mb383e/ai_agents_created_a_55minute_kubernetes_audio/ | Loserdotcom4real | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb383e | false | null | t3_1mb383e | /r/LocalLLaMA/comments/1mb383e/ai_agents_created_a_55minute_kubernetes_audio/ | false | false | self | 0 | null |
The Untold Revolution in iOS 26: WebGPU Is Coming | 91 | 2025-07-28T01:08:57 | https://brandlens.io/blog/the-untold-revolution-beneath-ios-26-webgpu-is-coming-everywhere-and-it-changes-everything/ | WooFL | brandlens.io | 1970-01-01T00:00:00 | 0 | {} | 1mb2y1z | false | null | t3_1mb2y1z | /r/LocalLLaMA/comments/1mb2y1z/the_untold_revolution_in_ios_26_webgpu_is_coming/ | false | false | default | 91 | {'enabled': False, 'images': [{'id': 'LyD_1wQqYUDOfxUDg_36aYU0Ld7GP8TKYS-wDVU6gWY', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/LyD_1wQqYUDOfxUDg_36aYU0Ld7GP8TKYS-wDVU6gWY.png?width=108&crop=smart&auto=webp&s=cd59353d15b225ac7141154eca19d5658accf506', 'width': 108}, {'height': 181, 'url': 'h... | |
Byte-Vision is a privacy-first (Llama.cpp) document intelligence platform that transforms static documents into an interactive, searchable knowledge base. Built on Elasticsearch with RAG (Retrieval-Augmented Generation) capabilities, it offers document parsing, OCR processing, and modern UI. | 44 | 2025-07-28T00:41:00 | https://github.com/kbrisso/byte-vision | Important_Half_8277 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mb2dcp | false | null | t3_1mb2dcp | /r/LocalLLaMA/comments/1mb2dcp/bytevision_is_a_privacyfirst_llamacpp_document/ | false | false | default | 44 | {'enabled': False, 'images': [{'id': 'ywpzKzmrsuXJqmShKQ64gatOoAFIfbPYe9pFc1NIqDQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ywpzKzmrsuXJqmShKQ64gatOoAFIfbPYe9pFc1NIqDQ.png?width=108&crop=smart&auto=webp&s=ca560c73715d7330212b1645381ce757ae0517c8', 'width': 108}, {'height': 108, 'url': 'h... | |
Best Local LLM for Japanese to English translation and explanation for 24gb VRAM | 4 | I saw a post saying Qwen 2.5 Bakemono was the best but that was 4 months ago and was wondering if something better is currently available. | 2025-07-28T00:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mb286h/best_local_llm_for_japanese_to_english/ | Abject-Obligation406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb286h | false | null | t3_1mb286h | /r/LocalLLaMA/comments/1mb286h/best_local_llm_for_japanese_to_english/ | false | false | self | 4 | null |
What's the best (free) LLM for a potato laptop, I still want to be able to generate images. | 2 | **The title says most of it, but to be exact, I'm using an HP EliteBook 840 G3.**
I'm trying to generate some gory artwork for a book I'm writing, but I'm running into a problem, most of the good (and free 😅) AI tools have heavy censorship. The ones that don’t either seem sketchy or just aren’t very good.
Any help... | 2025-07-28T00:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mb2486/whats_the_best_free_llm_for_a_potato_laptop_i/ | Roxlife1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb2486 | false | null | t3_1mb2486 | /r/LocalLLaMA/comments/1mb2486/whats_the_best_free_llm_for_a_potato_laptop_i/ | false | false | self | 2 | null |
Can someone please explain how rwkv works in lm studio? | 0 | I hope it's not a really dumb question but I've been trying to figure out how this works any different than normal transformers based models.
My understanding was that there was a layer that stores conversation summary, and it gets updated after every user input. I assumed that meant that deleting previous turns in t... | 2025-07-28T00:08:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mb1p56/can_someone_please_explain_how_rwkv_works_in_lm/ | ArchdukeofHyperbole | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb1p56 | false | null | t3_1mb1p56 | /r/LocalLLaMA/comments/1mb1p56/can_someone_please_explain_how_rwkv_works_in_lm/ | false | false | self | 0 | null |
Best models for 3090? | 0 | I just bought a computer with a 3090, and I was wondering if I could get advice on the best models for my gpu. Specifically, I am looking for:
• Best model for vision+tool use
• Best uncensored
• Best for coding
• Best for context length
• And maybe best for just vision or just tool use | 2025-07-28T00:07:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mb1of0/best_models_for_3090/ | No-Yak4416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mb1of0 | false | null | t3_1mb1of0 | /r/LocalLLaMA/comments/1mb1of0/best_models_for_3090/ | false | false | self | 0 | null |
UIGEN-X-0727 Runs Locally and Crushes It. Reasoning for UI, Mobile, Software and Frontend design. | 436 | [https://huggingface.co/Tesslate/UIGEN-X-32B-0727](https://huggingface.co/Tesslate/UIGEN-X-32B-0727) Releasing 4B in 24 hours and 32B now.
Specifically trained for modern web and mobile development across frameworks like React (Next.js, Remix, Gatsby, Vite), Vue (Nuxt, Quasar), Angular (Angular CLI, Ionic), and Svelt... | 2025-07-27T23:42:37 | https://www.reddit.com/gallery/1mb15g2 | smirkishere | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mb15g2 | false | null | t3_1mb15g2 | /r/LocalLLaMA/comments/1mb15g2/uigenx0727_runs_locally_and_crushes_it_reasoning/ | false | false | 436 | null | |
How do you monitor your Ollama instance? | 0 | I am running an ollama server as a container in unraid, but I am running up against some problems where models are failing for some use cases. I have several different clients connecting to the server. But I don't know the best way to monitor ollama, for example even just for token usage. But really I want to have some... | 2025-07-27T22:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mazvnk/how_do_you_monitor_your_ollama_instance/ | ishbuggy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mazvnk | false | null | t3_1mazvnk | /r/LocalLLaMA/comments/1mazvnk/how_do_you_monitor_your_ollama_instance/ | false | false | self | 0 | null |
An LLM Focused Just on Debugging | 6 | Found this paper recently and thought the idea was worth sharing.
It is a language model trained specifically for debugging rather than general-purpose code generation. It’s built to understand large codebases over time, using something called Adaptive Graph-Guided Retrieval to pull in relevant files, logs, and commit... | 2025-07-27T22:27:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mazi8m/an_llm_focused_just_on_debugging/ | Sharp-Arachnid-8760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mazi8m | false | null | t3_1mazi8m | /r/LocalLLaMA/comments/1mazi8m/an_llm_focused_just_on_debugging/ | false | false | self | 6 | null |
is qwen powered by gpt 4? | 0 | I was just testing the model and I wanted to know its pricing scheme, but it casually said I could find its pricing in OpenAI's pricing section | 2025-07-27T22:09:59 | https://www.reddit.com/gallery/1maz39j | BebeKelly | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1maz39j | false | null | t3_1maz39j | /r/LocalLLaMA/comments/1maz39j/is_qwen_powered_by_gpt_4/ | false | false | 0 | null | |
Devstral & Magistral as adapters of Mistral | 32 | [The initials of Devstral, Mistral, and Magistral as connected puzzle pieces](https://preview.redd.it/tshdyj57ghff1.png?width=2048&format=png&auto=webp&s=14e06a8a7213b113ef28becb5a61878fc952e8c7)
tl;dr: title. Here are the weights: [Devstral-Small-2507-Rebased-Vision](https://huggingface.co/kmouratidis/Devstral-Smal... | 2025-07-27T22:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/1maywaw/devstral_magistral_as_adapters_of_mistral/ | kmouratidis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maywaw | false | null | t3_1maywaw | /r/LocalLLaMA/comments/1maywaw/devstral_magistral_as_adapters_of_mistral/ | false | false | 32 | {'enabled': False, 'images': [{'id': 'ExFuLA42V4peZpwQDsAgEzViFAWZpyUbQAHlGXRRxKQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ExFuLA42V4peZpwQDsAgEzViFAWZpyUbQAHlGXRRxKQ.png?width=108&crop=smart&auto=webp&s=336ca45300d9ad8f941487b0ce465efa53dd0e02', 'width': 108}, {'height': 116, 'url': 'h... | |
Free | 0 | Hey there! I’m loving my card from Capital One. Their pre-approval tool makes it easy to see what cards you’re eligible for with no impact on your credit score. Plus, no credit score is required to apply. Want to check it out? Click this link to find the card that’s right for you! https://i.capitalone.com/JDt4yglE3 | 2025-07-27T22:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mayvak/free/ | ImprovementOk3372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mayvak | false | null | t3_1mayvak | /r/LocalLLaMA/comments/1mayvak/free/ | false | false | self | 0 | null |
Researching: Self-hosted LLM interpretability for compliance - is there demand? | 2 | Question for the community: How many of you need to explain your model's decisions to regulators/auditors?
The problem I'm investigating: Healthcare/legal/finance orgs want local LLMs but can't deploy them because "the model said so" isn't acceptable for compliance.
The insight: Interpretability techniques from mech ... | 2025-07-27T21:44:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mayhuz/researching_selfhosted_llm_interpretability_for/ | Complex_Tie_4875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mayhuz | false | null | t3_1mayhuz | /r/LocalLLaMA/comments/1mayhuz/researching_selfhosted_llm_interpretability_for/ | false | false | self | 2 | null |
Local Distributed GPU Use | 1 | I have a few PCs at home with different GPUs sitting around. I was thinking it would be great if these idle GPUs can all work together to process AI prompts sent from one machine. Is there an out of the box solution that allows me to leverage the multiple computers in my house to do ai work load? note pulling the gpus ... | 2025-07-27T21:29:17 | https://www.reddit.com/r/LocalLLaMA/comments/1may4ut/local_distributed_gpu_use/ | deathcom65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1may4ut | false | null | t3_1may4ut | /r/LocalLLaMA/comments/1may4ut/local_distributed_gpu_use/ | false | false | self | 1 | null |
Cheap motherboard for using multiple MI50s | 1 | [removed] | 2025-07-27T21:28:28 | https://www.reddit.com/r/LocalLLaMA/comments/1may45c/cheep_motherborad_for_use_multiple_mi50/ | wertrigone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1may45c | false | null | t3_1may45c | /r/LocalLLaMA/comments/1may45c/cheep_motherborad_for_use_multiple_mi50/ | false | false | self | 1 | null |
Cheap motherboard for using 3 MI50s | 1 | [removed] | 2025-07-27T21:25:47 | https://www.reddit.com/r/LocalLLaMA/comments/1may1xh/cheep_motherboard_for_use_3_mi50/ | wertrigone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1may1xh | false | null | t3_1may1xh | /r/LocalLLaMA/comments/1may1xh/cheep_motherboard_for_use_3_mi50/ | false | false | self | 1 | null |
Does monitoring AI output catch moral hazard? Replit AI gave "correct" responses while secretly deleting production data 🤖💥 | 0 | The Replit incident exposed a blind spot: the AI agent said reasonable things while taking catastrophic actions. The output looked fine, but the behavior was rogue.
This incident got me thinking - traditional output monitoring clearly isn't enough. An AI agent literally deleted a production database, lied about it, then "p... | 2025-07-27T21:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/1maxyld/does_monitoring_ai_output_catch_moral_hazard/ | tokyo_kunoichi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maxyld | false | null | t3_1maxyld | /r/LocalLLaMA/comments/1maxyld/does_monitoring_ai_output_catch_moral_hazard/ | false | false | self | 0 | null |
Trying a temporal + spatial slot fusion model (HRM × Axiom) | 1 | I’m hacking together the Hierarchical Reasoning Model (temporal slots) with Axiom’s object‑centric slots.
Here’s my brain dump:
Loaded HRM: “past, present, future loops”
Identified sample‑efficiency as core driver
Spotted Axiom: “spatial slots, as in, object centroids expanding on the fly”
Noticed both ditch big o... | 2025-07-27T21:13:06 | https://www.reddit.com/r/LocalLLaMA/comments/1maxquu/trying_a_temporal_spatial_slot_fusion_model_hrm/ | Key_Clerk_1431 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maxquu | false | null | t3_1maxquu | /r/LocalLLaMA/comments/1maxquu/trying_a_temporal_spatial_slot_fusion_model_hrm/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YsHmtOqqO5eQRfbMvLMBSMGxXsuMLox0sk8UIYyDRho', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YsHmtOqqO5eQRfbMvLMBSMGxXsuMLox0sk8UIYyDRho.png?width=108&crop=smart&auto=webp&s=9e8b6ad2ebbebead71fafd89ce76556f4fa005de', 'width': 108}, {'height': 108, 'url': 'h... |
Advance humanity on our scale. | 0 | I’m genuinely fascinated by artificial intelligence and convinced it’s going to reshape the world. But I have no technical skills and no money to contribute to its progress directly. I can just use and admire it. That’s why I came up with this idea.
What if we had a platform — a website with WebGPU or an app — where peop... | 2025-07-27T21:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1maxmeg/advance_humanity_on_our_scale/ | Loud_Possibility_148 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maxmeg | false | null | t3_1maxmeg | /r/LocalLLaMA/comments/1maxmeg/advance_humanity_on_our_scale/ | false | false | self | 0 | null |
Trying a temporal + spatial slot fusion model (HRM × Axiom) | 1 | I’m hacking together the Hierarchical Reasoning Model (temporal slots) with Axiom’s object‑centric slots.
**Here’s my brain dump:**
* Loaded HRM: “past, present, future loops”
* Identified sample‑efficiency as core driver
* Spotted Axiom: “spatial slots, as in, object centroids expanding on the fly”
* Noticed both d... | 2025-07-27T21:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/1maxkfx/trying_a_temporal_spatial_slot_fusion_model_hrm/ | Good_Illustrator3674 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maxkfx | false | null | t3_1maxkfx | /r/LocalLLaMA/comments/1maxkfx/trying_a_temporal_spatial_slot_fusion_model_hrm/ | false | false | self | 1 | null |
What happened to the Yi models? | 32 | I remember some of them were really solid, but it's been over a year since we've seen a new release.
Is the team still active, or has the project quietly died? | 2025-07-27T20:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/1maxfeb/what_happened_to_the_yi_models/ | GabryIta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maxfeb | false | null | t3_1maxfeb | /r/LocalLLaMA/comments/1maxfeb/what_happened_to_the_yi_models/ | false | false | self | 32 | null |
Speculative decoding without a draft model (C#) | 12 | tl;dr: faster grammar check and minor code edits without a draft model: a C# proof-of-concept.
[https://github.com/dpmm99/ModelFreeSpeculation](https://github.com/dpmm99/ModelFreeSpeculation)
This is a toy project built on LLamaSharp. It's a toy because it assumes the output will be nearly identical to the input--no ... | 2025-07-27T20:53:02 | https://www.reddit.com/r/LocalLLaMA/comments/1max9qz/speculative_decoding_without_a_draft_model_c/ | DeProgrammer99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1max9qz | false | null | t3_1max9qz | /r/LocalLLaMA/comments/1max9qz/speculative_decoding_without_a_draft_model_c/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '6SBbFAaNO6y4KvdR37bmJv25qEqiWaX9KiswAYV7NXY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6SBbFAaNO6y4KvdR37bmJv25qEqiWaX9KiswAYV7NXY.png?width=108&crop=smart&auto=webp&s=b4443b14344a6b363cc7c1a64c3d726e9bc418ef', 'width': 108}, {'height': 108, 'url': 'h... |
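The trick the project describes can be sketched without any ML stack: when the output is expected to be nearly identical to the input (grammar checks, minor edits), the input text itself serves as the draft, and the target model only "pays" for positions where it disagrees. The sketch below is hedged: `fake_target` stands in for a real model's greedy next-token call so the example is self-contained, and only one verification round is shown.

```python
# Toy model-free speculation: reuse the input text as the draft and let
# the "target model" accept tokens until the first mismatch. A real
# implementation verifies the draft against the model's actual logits.

def speculate_round(prefix: list[str], draft: list[str], next_token) -> tuple[list[str], int]:
    """One verification round: returns (emitted tokens, draft tokens accepted)."""
    emitted: list[str] = []
    for d in draft:
        t = next_token(prefix + emitted)
        emitted.append(t)
        if t != d:                            # first disagreement ends the round;
            return emitted, len(emitted) - 1  # everything before it was "free"
    return emitted, len(emitted)

corrected = "the quick brown fox".split()     # what the model "wants" to say

def fake_target(ctx: list[str]) -> str:
    return corrected[len(ctx)]

draft = "the quikc brown fox".split()         # the original (typo'd) input as draft
out, accepted = speculate_round([], draft, fake_target)
print(out, accepted)  # ['the', 'quick'] 1
```

In a real decoder the next round resumes from the mismatch point, re-aligning the rest of the input as the new draft, which is what the linked C# project does on top of LLamaSharp.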
Hostinger Ollama hosting review? | 0 | Has anyone used Hostinger as Ollama hosting? If so, what do you think? | 2025-07-27T20:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mawtr7/hostinger_ollama_hosting_review/ | wbiggs205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mawtr7 | false | null | t3_1mawtr7 | /r/LocalLLaMA/comments/1mawtr7/hostinger_ollama_hosting_review/ | false | false | self | 0 | null |
Beyond Context Limits: Subconscious Threads for Long-Horizon Reasoning | 24 | Abstract
>To break the context limits of large language models (LLMs) that bottleneck reasoning accuracy and efficiency, we propose the Thread Inference Model (TIM), a family of LLMs trained for recursive and decompositional problem solving, and TIMRUN, an inference runtime enabling long-horizon structured reasoning b... | 2025-07-27T20:07:24 | https://arxiv.org/abs/2507.16784 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1maw5dy | false | null | t3_1maw5dy | /r/LocalLLaMA/comments/1maw5dy/beyond_context_limits_subconscious_threads_for/ | false | false | default | 24 | null |
Looking for reviews on the new moes | 1 | [removed] | 2025-07-27T19:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mavuel/looking_for_reviews_on_the_new_moes/ | BIGPAPAPUMP3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mavuel | false | null | t3_1mavuel | /r/LocalLLaMA/comments/1mavuel/looking_for_reviews_on_the_new_moes/ | false | false | self | 1 | null |
What happened to Titan? | 4 | [https://arxiv.org/abs/2501.00663](https://arxiv.org/abs/2501.00663) was released at the beginning of year, and despite all the hype around it, there haven't been any further developments on it. Not from Google. Not from any other labs. It seems like maybe it was a flop and they used a different strategy for their Gemi... | 2025-07-27T19:47:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mavo1v/what_happened_to_titan/ | TheRealMasonMac | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mavo1v | false | null | t3_1mavo1v | /r/LocalLLaMA/comments/1mavo1v/what_happened_to_titan/ | false | false | self | 4 | null |
Is there a website which has a collection of all benchmarks perfomed for LLM models? | 4 | Basically benchmark of benchmarks. AI companies generally just show the benchmarks which suits accordingly to them, and hiding others.
Is there a place where I can all of the benchmarks, so that I can take an informed decision before using any LLM API or downloading any new models? | 2025-07-27T19:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mav8p7/is_there_a_website_which_has_a_collection_of_all/ | Special_System_6627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mav8p7 | false | null | t3_1mav8p7 | /r/LocalLLaMA/comments/1mav8p7/is_there_a_website_which_has_a_collection_of_all/ | false | false | self | 4 | null |
Can We Recreate Claude Locally | 0 | Hi local llama!
I tried Claude 4 for the first time and was absolutely blown away by its capabilities. Do we have a local option that recreates what it's able to produce? I'm not sure if I'm looking for a chat interface like OpenWeb-UI with specific capabilities enabled or an IDE that's been conjoined with agentic wo... | 2025-07-27T19:24:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mav3eu/can_we_recreate_claude_locally/ | YouDontSeemRight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mav3eu | false | null | t3_1mav3eu | /r/LocalLLaMA/comments/1mav3eu/can_we_recreate_claude_locally/ | false | false | self | 0 | null |
low perfomance on Contionue extension Vs code | 1 | Hello guys, I am just new here.
I installed Ollama and am running the model qwen3:8b.
When I run it through the terminal, I get full utilisation of the GPU (3060 Mobile, 60W),
but slow responses and poor utilisation when run in VS Code.
provided some of my debug log-
ubuntu terminal:
$ ollama ps
NAME ID ... | 2025-07-27T18:51:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mau9os/low_perfomance_on_contionue_extension_vs_code/ | 0-sigma-0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mau9os | false | null | t3_1mau9os | /r/LocalLLaMA/comments/1mau9os/low_perfomance_on_contionue_extension_vs_code/ | false | false | self | 1 | null |
How to increase tps (tokens/second)? Other ways to optimize things to get faster responses | 1 | Apart from RAM & GPU upgrades. I use Jan & KoboldCpp.
Found a few things online about this.
* Picking Quantized model fittable for VRAM
* Set Q8\_0(instead of 16) for KV Cache
* Use Recommended Settings(Temperature, TopP, TopK, MinP) for models(Mostly from Model cards on HuggingFace)
* Decent Prompts
What else could... | 2025-07-27T18:42:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mau1nz/how_to_increase_tps_tokenssecond_other_ways_to/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mau1nz | false | null | t3_1mau1nz | /r/LocalLLaMA/comments/1mau1nz/how_to_increase_tps_tokenssecond_other_ways_to/ | false | false | self | 1 | null |
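Most of the items on the checklist above map directly onto llama.cpp flags. A sketch of a launch command encoding them, written as a Python arg builder for illustration; the model filename is hypothetical, and flag names follow recent llama.cpp builds but should be checked against `llama-server --help` on your version, since they change between releases.

```python
# Sketch of a llama.cpp server launch encoding the speed tips above.
# Verify flag names against your build's --help output.

def build_llama_cmd(model_path: str, n_gpu_layers: int = 99) -> list[str]:
    return [
        "llama-server",
        "-m", model_path,            # pick a quant that fits your VRAM
        "-ngl", str(n_gpu_layers),   # offload as many layers as fit
        "-c", "8192",                # only the context you actually need
        "-fa",                       # flash attention (needed for K/V cache quant)
        "-ctk", "q8_0",              # quantize the K cache
        "-ctv", "q8_0",              # quantize the V cache
        "--temp", "0.6",             # sampler values from the model card
        "--top-p", "0.95",
        "--top-k", "20",
        "--min-p", "0.0",
    ]

cmd = build_llama_cmd("qwen3-8b-q4_k_m.gguf")  # hypothetical model file
print(" ".join(cmd))
```

The same flags apply to `llama-cli`; KoboldCpp exposes equivalent settings (GPU layers, context size, K/V quantization) through its launcher UI.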
GRAPH RAG vs baseline RAG for MVP | 0 | Hi people
Been working on a local agent MVP these 3 last weeks. To summarise newsletters and plugged into your private projects would then offer unique insights and suggestions from the newsletters to keep you competitive and enhance your productivity.
I've implemented a baseline RAG under Ollama using Llama index, ... | 2025-07-27T17:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mas4nn/graph_rag_vs_baseline_rag_for_mvp/ | ctxgen_founder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mas4nn | false | null | t3_1mas4nn | /r/LocalLLaMA/comments/1mas4nn/graph_rag_vs_baseline_rag_for_mvp/ | false | false | self | 0 | null |
Singing in a studio | 1 | Singing in the studio with this my picture turn to video musi | 2025-07-27T17:18:24 | Full_Town_5528 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1marxec | false | null | t3_1marxec | /r/LocalLLaMA/comments/1marxec/singing_in_a_studio/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'c8ba725r9gff1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/c8ba725r9gff1.jpeg?width=108&crop=smart&auto=webp&s=39573f6bd1b32a971221168822254451330c8058', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/c8ba725r9gff1.jpeg?width=216&crop=smart&auto=... | |
How can we simulate Gemini Deep Think with models like DeepSeek/Qwen or other open models? | 9 | There's good hype around Gemini Deep Think. Can we simulate it using the DeepSeek models or Qwen?
Is it simply Gemini 2.5 Pro with a much higher thinking budget, or is it using some branch-of-thoughts or graph-of-thoughts approach behind the scenes with multiple parallel instances?
Has anyone tested something like this?... | 2025-07-27T17:18:06 | https://www.reddit.com/r/LocalLLaMA/comments/1marx3v/how_can_we_simulate_gemini_deepthink_with_models/ | True_Requirement_891 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1marx3v | false | null | t3_1marx3v | /r/LocalLLaMA/comments/1marx3v/how_can_we_simulate_gemini_deepthink_with_models/ | false | false | self | 9 | null |
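Nobody outside Google knows Deep Think's exact recipe, but the cheapest way to approximate "parallel thinking" with DeepSeek or Qwen is self-consistency: sample several independent reasoning traces at temperature > 0 and majority-vote their final answers. A toy sketch, with canned samples standing in for real model calls:

```python
from collections import Counter

# Canned final answers standing in for n independent, temperature > 0
# samples from a local model (each sample would be a full reasoning
# trace; only its extracted final answer gets a vote).
CANNED_SAMPLES = ["42", "41", "42", "42", "41", "42", "42", "42"]

def self_consistency(final_answers: list[str]) -> tuple[str, int]:
    # Majority vote across independent traces.
    answer, count = Counter(final_answers).most_common(1)[0]
    return answer, count

answer, count = self_consistency(CANNED_SAMPLES)
print(answer, count)  # "42" wins with 6 of 8 votes
```

Fancier variants replace the vote with a judge model scoring each trace (best-of-n), which is closer to what parallel-thinking systems are believed to do, at the cost of extra judge calls.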
General Intel Arc compatibility with Nvidia | 5 | I have a chance to travel to China the end of this year. I'm thinking about buying the 48 GB dual B60 GPU, if I could find one (not really the goal of my travel there). Can you guys give me some insights on the Intel's previous GPUs compatibility with Nvidia kit? I've read that AMD's Rocm is a bit of a pain. That's why... | 2025-07-27T17:14:25 | https://www.reddit.com/r/LocalLLaMA/comments/1martn1/general_intel_arc_compatibility_with_nvidia/ | SwingNinja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1martn1 | false | null | t3_1martn1 | /r/LocalLLaMA/comments/1martn1/general_intel_arc_compatibility_with_nvidia/ | false | false | self | 5 | null |
Best models to run on m4 pro 24gb | 4 | I have gemma 3 12b. Been playing around with it and love it. I am interested in a (easily) jailbreakable model or a model without as much restrictions. Thanks in advance. | 2025-07-27T17:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/1marks7/best_models_to_run_on_m4_pro_24gb/ | brayo1st | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1marks7 | false | null | t3_1marks7 | /r/LocalLLaMA/comments/1marks7/best_models_to_run_on_m4_pro_24gb/ | false | false | self | 4 | null |
Why hasn't LoRA gained more popularity? | 95 | In my impression, the focus is mostly on MCP, A2A, and RAG. While these are great for their respective use cases, you still have to send prompts to LLMs with 70 to 500 billion parameters, which is quite resource-intensive and expensive. The alternative is to settle for one of the smaller LLMs with around 8 billion para... | 2025-07-27T16:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/1maq0hg/why_hasnt_lora_gained_more_popularity/ | dabomb007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maq0hg | false | null | t3_1maq0hg | /r/LocalLLaMA/comments/1maq0hg/why_hasnt_lora_gained_more_popularity/ | false | false | self | 95 | null |
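The economics behind the post fit in a few lines: LoRA freezes the base weight W and trains only a low-rank update, using W_eff = W + (alpha/r)·B·A, so a layer with d_out·d_in frozen weights needs just r·(d_in + d_out) trainable numbers. A dependency-free sketch with toy shapes (pure Python lists; real code would use PyTorch, typically via the PEFT library):

```python
# Minimal LoRA forward: y = (W + (alpha / r) * B @ A) @ x.
# Pure-Python matrices with toy shapes; only A and B would be trained.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

d_out, d_in, r, alpha = 3, 4, 1, 2
W = [[0.0] * d_in for _ in range(d_out)]  # frozen base weight (zeros for clarity)
A = [[1.0, 0.0, 0.0, 0.0]]                # r x d_in, trainable
B = [[1.0], [0.0], [0.0]]                 # d_out x r, trainable

scale = alpha / r
delta = [[scale * v for v in row] for row in matmul(B, A)]
W_eff = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

x = [[2.0], [0.0], [0.0], [0.0]]          # column-vector input
y = matmul(W_eff, x)
print(y)  # [[4.0], [0.0], [0.0]]; only r*(d_in + d_out) = 7 numbers are trainable
```

At serving time the adapter can either be merged into W once (zero inference overhead) or kept separate and hot-swapped per request, which is what makes many small task-specific adapters on one base model attractive.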
Apple Intelligence but with multiple chats, RAG, and Web Search | 1 | Hey LocalLLaMA (big fan)!
I made an app called Aeru, an app that uses Apple's Foundation Models framework but given more features like RAG support and Web Search! It's all private, local, free, and open source!
I wanted to make this app because I was really intrigued by Apple's Foundation Models framework, and notice... | 2025-07-27T15:59:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mapwdm/apple_intelligence_but_with_multiple_chats_rag/ | sskarz1016 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mapwdm | false | null | t3_1mapwdm | /r/LocalLLaMA/comments/1mapwdm/apple_intelligence_but_with_multiple_chats_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'NVh7CqmCKf_j3MSVTpACSGbzlIgeP2Nq8yGxYzoaryc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NVh7CqmCKf_j3MSVTpACSGbzlIgeP2Nq8yGxYzoaryc.png?width=108&crop=smart&auto=webp&s=1e9803257a9c0433a006bd52ab2e684abe03edd9', 'width': 108}, {'height': 216, 'url': '... |
What does --prio 2 do in llama.cpp? Can't find documentation :( | 2 | I noticed in this wonderful guide [https://docs.unsloth.ai/basics/gemma-3n-how-to-run-and-fine-tune](https://docs.unsloth.ai/basics/gemma-3n-how-to-run-and-fine-tune) a parameter for running the model \`--prio 2\` but I cannot find any documentation on what this is doing, nor do I see a difference when running the mode... | 2025-07-27T15:57:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mapvcv/what_does_prio_2_do_in_llamacpp_cant_find/ | shrug_hellifino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mapvcv | false | null | t3_1mapvcv | /r/LocalLLaMA/comments/1mapvcv/what_does_prio_2_do_in_llamacpp_cant_find/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '... |
Drummer's Mixtral 4x3B v1 - A finetuned clown MoE experiment with Voxtral 3B! | 43 | 2025-07-27T15:56:16 | https://huggingface.co/TheDrummer/Mixtral-4x3B-v1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1maptvc | false | null | t3_1maptvc | /r/LocalLLaMA/comments/1maptvc/drummers_mixtral_4x3b_v1_a_finetuned_clown_moe/ | false | false | default | 43 | {'enabled': False, 'images': [{'id': 'f52jZxJLyrUGUN-MtgsNp9MhYVmfObcQcPQZdRl80CA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/f52jZxJLyrUGUN-MtgsNp9MhYVmfObcQcPQZdRl80CA.png?width=108&crop=smart&auto=webp&s=bbb0ffe720c33190a7c35c23d0bfa7c0465f73ba', 'width': 108}, {'height': 116, 'url': 'h... | |
GPU Help (1080ti vs 3060 vs 5060ti) | 6 | Hi, I know you are probably tired of seeing these posts, but I'd really appreciate the input
Current GPU set up:
* gtx 1080ti (11Gb)
* gtx 1050ti (4Gb)
* pcie gen 3.0
* 16Gb DDR3 RAM
* Very old i5-4460 with 4 cores at 3.2GHz
So CPU inference is out of the question
I want to upgrade it because the 1050... | 2025-07-27T15:29:31 | https://www.reddit.com/r/LocalLLaMA/comments/1map5pe/gpu_help_1080ti_vs_3060_vs_5060ti/ | Expensive-Apricot-25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1map5pe | false | null | t3_1map5pe | /r/LocalLLaMA/comments/1map5pe/gpu_help_1080ti_vs_3060_vs_5060ti/ | false | false | self | 6 | null |
Perplexity Labs Live System Prompt | 0 | Was able to pull Perplexity’s live system prompt yesterday (26 Jul). Sharing what I found for research purposes. I was interested in how it calls all the tools and summarizes the answers.
| 2025-07-27T15:24:18 | https://www.reddit.com/gallery/1map12i | craftogrammer | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1map12i | false | null | t3_1map12i | /r/LocalLLaMA/comments/1map12i/perplexity_labs_live_system_prompt/ | false | false | 0 | null | |
Just snagged Perplexity’s current system prompt (26 Jul 2025) | 1 | [removed] | 2025-07-27T15:21:22 | https://www.reddit.com/gallery/1maoyje | craftogrammer | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1maoyje | false | null | t3_1maoyje | /r/LocalLLaMA/comments/1maoyje/just_snagged_perplexitys_current_system_prompt_26/ | false | false | 1 | null | |
LLM / VLM Local model obsolescence decisions for personal STEM / utility / english / Q&A / RAG / tool use / IT desktop / workstation use cases? | 0 | Suggestions as to what you've found worth using / keeping vs. not?
What specific older models or older model / use case combinations from 2023-2024 would you emphatically NOT consider wholly obsoleted by newer models?
Local model obsolescence decisions for personal STEM / utility / english / Q&A / RAG / tool use / IT... | 2025-07-27T15:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/1maoody/llm_vlm_local_model_obsolescence_decisions_for/ | Calcidiol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maoody | false | null | t3_1maoody | /r/LocalLLaMA/comments/1maoody/llm_vlm_local_model_obsolescence_decisions_for/ | false | false | self | 0 | null |
Where can I download glossaries for Japanese, Chinese, and Korean to English translation | 0 | Where can I download glossaries for Japanese, Chinese, and Korean to English translation?
Does anyone know where I can download glossaries for translation, for things like fanfics of anime, manga, or even novels?
Because I tried to make some, and using them remarkably improved the translation for some fanfics I was ... | 2025-07-27T15:03:02 | https://www.reddit.com/r/LocalLLaMA/comments/1maoiae/where_can_i_download_glossary_for_japanese/ | PedroHBN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1maoiae | false | null | t3_1maoiae | /r/LocalLLaMA/comments/1maoiae/where_can_i_download_glossary_for_japanese/ | false | false | self | 0 | null |
Running LLMs exclusively on AMD Ryzen AI NPU | 167 | We’re a small team building **FastFlowLM** — a fast, open-source runtime for running **LLaMA, Qwen, DeepSeek**, and other models **entirely on the AMD Ryzen AI NPU**. No CPU or iGPU fallback — just lean, efficient, **NPU-native inference**. Think **Ollama**, but purpose-built and deeply optimized for AMD NPUs — with bo... | 2025-07-27T14:52:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mao95d/running_llms_exclusively_on_amd_ryzen_ai_npu/ | BandEnvironmental834 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mao95d | false | null | t3_1mao95d | /r/LocalLLaMA/comments/1mao95d/running_llms_exclusively_on_amd_ryzen_ai_npu/ | false | false | self | 167 | {'enabled': False, 'images': [{'id': 'vJGRc2UlTJrSFHnGlJYDN0YsOLC8w4mlAwQVmF6tcgo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vJGRc2UlTJrSFHnGlJYDN0YsOLC8w4mlAwQVmF6tcgo.png?width=108&crop=smart&auto=webp&s=97afc3fc381198ec693e0055e6c72c2c0c3cad84', 'width': 108}, {'height': 108, 'url': 'h... |
Run Local LLMs exclusively on AMD Ryzen AI NPU | 1 | [removed] | 2025-07-27T14:49:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mao67s/run_local_llms_exclusively_on_amd_ryzen_ai_npu/ | AlternativeVirtual33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mao67s | false | null | t3_1mao67s | /r/LocalLLaMA/comments/1mao67s/run_local_llms_exclusively_on_amd_ryzen_ai_npu/ | false | false | self | 1 | null |
MoE models in 2025 | 0 | It's amazing how fast the Qwen3 MoE model is. Why isn't the MoE architecture more popular? Or am I missing something, and there have been more interesting MoE models released this year?
Is Mixtral still a thing? | 2025-07-27T14:46:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mao3ym/moe_models_in_2025/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mao3ym | false | null | t3_1mao3ym | /r/LocalLLaMA/comments/1mao3ym/moe_models_in_2025/ | false | false | self | 0 | null |
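The speed the post marvels at comes from sparse routing: a gate scores every expert, but only the top-k actually run per token, which is why Qwen3-235B-A22B activates only about 22B of its 235B parameters at a time. A toy router sketch, with scalar "experts" standing in for full FFN blocks:

```python
import math

# Sparse MoE routing in miniature: the gate scores every expert, but
# only the top-k expert functions are ever evaluated.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(gate_logits, k=2):
    # Indices of the k highest-scoring experts, with renormalized weights.
    top = sorted(range(len(gate_logits)),
                 key=lambda i: gate_logits[i], reverse=True)[:k]
    return list(zip(top, softmax([gate_logits[i] for i in top])))

def moe_forward(x, experts, gate_logits, k=2):
    # Only k expert calls happen here, however many experts exist.
    return sum(w * experts[i](x) for i, w in route(gate_logits, k))

experts = [lambda x, s=s: s * x for s in (1.0, 2.0, 3.0, 4.0)]  # 4 toy experts
gate_logits = [0.1, 2.0, 0.3, 1.5]   # router scores for one token
out = moe_forward(10.0, experts, gate_logits, k=2)
print(round(out, 3))  # experts 1 and 3 fire; the other two cost nothing
```

The trade-off is memory: all experts must stay loaded even though only a few run per token, which is why MoE models shine when you have lots of (even slow) RAM but limited compute.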
Notable 2025 Chinese models | 0 | Hi,
Were there any interesting models released by Chinese companies in 2025, except Qwen?
I'm interested in those around 30B size.
Thanks!
| 2025-07-27T14:37:46 | https://www.reddit.com/r/LocalLLaMA/comments/1manwi5/notable_2025_chinese_models/ | Acrobatic_Cat_3448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1manwi5 | false | null | t3_1manwi5 | /r/LocalLLaMA/comments/1manwi5/notable_2025_chinese_models/ | false | false | self | 0 | null |
Introducing FastFlowLM: Lightweight Runtime for Local LLM on AMD Ryzen™ AI NPU | 2 | # 🚀 FastFlowLM: Lightweight Runtime for Local LLMs on AMD Ryzen™ AI NPU
Hey folks! We’re a small team building **FastFlowLM** — a fast, open-source runtime for running **LLaMA, Qwen, DeepSeek**, and other models **entirely on the AMD Ryzen™ AI NPU**.
No CPU or iGPU fallback — just lean, efficient, **NPU-native infe... | 2025-07-27T14:34:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mantxk/introducing_fastflowlm_lightweight_runtime_for/ | AlternativeVirtual33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mantxk | false | null | t3_1mantxk | /r/LocalLLaMA/comments/1mantxk/introducing_fastflowlm_lightweight_runtime_for/ | false | false | self | 2 | null |
Got 500 hours on an AMD MI300X. What's the most impactful thing I can build/train/break? | 3 | I've found myself with a pretty amazing opportunity: 500 total hrs on a single AMD MI300X GPU (or the alternative of ~125 hrs on a node with 8 of them).
I've been studying DL for about 1.5 yrs, so I'm not a complete beginner, but I'm definitely not an expert. My first thought was to just finetune a massive LLM, but I... | 2025-07-27T14:34:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mantju/got_500_hours_on_an_amd_mi300x_whats_the_most/ | beiyonder17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mantju | false | null | t3_1mantju | /r/LocalLLaMA/comments/1mantju/got_500_hours_on_an_amd_mi300x_whats_the_most/ | false | false | self | 3 | null |
What arguments are best to use on mobile? | 0 | Sorry if this is a dumb question, I'm still learning.
I use [Koboldcpp](https://github.com/LostRuins/koboldcpp) primarily as a backend for my frontend SillyTavern on my dedicated PC. I was curious if I could actually run SillyTavern and Kobold solely on my cellphone (Samsung ZFold5 specifically) through Termux and to... | 2025-07-27T14:17:03 | https://www.reddit.com/r/LocalLLaMA/comments/1manewo/what_arguments_best_to_use_on_mobile/ | IZA_does_the_art | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1manewo | false | null | t3_1manewo | /r/LocalLLaMA/comments/1manewo/what_arguments_best_to_use_on_mobile/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'PDMaDj1qjOOZUOPtT7mEDuV_ywif6xq7z8UvPonFcEk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PDMaDj1qjOOZUOPtT7mEDuV_ywif6xq7z8UvPonFcEk.png?width=108&crop=smart&auto=webp&s=7d0fcd7f85cd45cd3e57154c2c37929bc113952f', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen GSPO (Group Sequence Policy Optimization) | 62 | Qwen has introduced a new technique called **GSPO** (Group Sequence Policy Optimization)
Put simply:
* It's a new method for training large language models
* Instead of focusing on individual words like older methods, it optimizes entire sentences or passages as a whole — which is more logical and leads to better per... | 2025-07-27T14:00:25 | https://www.reddit.com/r/LocalLLaMA/comments/1man0hu/qwen_gspo_group_sequence_policy_optimization/ | koc_Z3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1man0hu | false | null | t3_1man0hu | /r/LocalLLaMA/comments/1man0hu/qwen_gspo_group_sequence_policy_optimization/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'kpkVEAiwNd6D_mfl3tEdDni1cD692QYRZ9sC2FzlBz4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kpkVEAiwNd6D_mfl3tEdDni1cD692QYRZ9sC2FzlBz4.png?width=108&crop=smart&auto=webp&s=204816acf3c4a486bb403207785321d33214adc7', 'width': 108}, {'height': 116, 'url': 'h... |
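The sequence-level idea can be made concrete: GRPO/PPO-style objectives weight each token by its own importance ratio, while GSPO uses one length-normalized ratio for the whole response, s = (pi_theta(y|x) / pi_old(y|x))^(1/|y|). A sketch with made-up log-probs purely for illustration:

```python
import math

# Token-level importance ratios (GRPO/PPO style) vs the single
# length-normalized sequence ratio that GSPO optimizes.

old_logprobs = [-1.2, -0.8, -2.0]   # log pi_old(token | context)
new_logprobs = [-1.0, -0.9, -1.8]   # log pi_theta(token | context)

# One importance weight per token: noisy, high-variance per-token credit.
token_ratios = [math.exp(n - o) for n, o in zip(new_logprobs, old_logprobs)]

# GSPO: one ratio for the whole response, geometric-mean-normalized by
# length so long and short responses get comparable weights.
T = len(new_logprobs)
seq_ratio = math.exp((sum(new_logprobs) - sum(old_logprobs)) / T)

print([round(t, 4) for t in token_ratios], round(seq_ratio, 4))
```

Clipping and advantage weighting are then applied to that single sequence ratio rather than to each token's ratio, which is the stability gain the paper claims, especially for MoE training.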
Qwen3-235B-A22B 2507 is so good | 322 | The non-reasoning model is about as good as 2.5 flash with 4k reasoning tokens. The latency of no reasoning vs reasoning makes it so much better than 2.5 flash. I also prefer the shorter outputs than the verbose asf gemini.
The markdown formatting is so much better and the outputs are just so much nicer to read than ... | 2025-07-27T13:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mammv5/qwen3235ba22b_2507_is_so_good/ | z_3454_pfk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mammv5 | false | null | t3_1mammv5 | /r/LocalLLaMA/comments/1mammv5/qwen3235ba22b_2507_is_so_good/ | false | false | self | 322 | null |
Reasoning prompt strategy | 3 | Hi
Does anyone have any prompts I can use to make a local base model reason?
Do share! Thank you | 2025-07-27T13:25:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mam8p4/reasoning_prompt_strategy/ | rockybaby2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mam8p4 | false | null | t3_1mam8p4 | /r/LocalLLaMA/comments/1mam8p4/reasoning_prompt_strategy/ | false | false | self | 3 | null |
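One answer to the question above: base models are next-token completers rather than instruction followers, so a common trick is to frame the input as the beginning of an already-started worked solution. A minimal sketch; the exact scaffold wording and the function name `reasoning_prompt` are illustrative assumptions, not a canonical recipe:

```python
def reasoning_prompt(question: str) -> str:
    # A base model continues text rather than obeying instructions,
    # so present the input as a partially written step-by-step solution
    # and let the model complete it from "Step 1:" onward.
    return (
        f"Question: {question}\n"
        "Let's think through this step by step.\n"
        "Step 1:"
    )

prompt = reasoning_prompt(
    "If a train travels 60 km in 40 minutes, what is its speed in km/h?"
)
```

Feed the resulting string to the base model as a plain completion (no chat template) and let it continue generating the numbered steps.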
GeForce RTX 5060 Ti 16GB good for Llama LLM inference/fine-tuning? | 3 | Hey folks,
I need a GPU selection suggestion before I make the purchase.
Where I live, I can get the GeForce RTX 5060 Ti 16GB GDDR7 for USD 500. Would buying 4 of these be a good choice? (Yes, I will also be buying a new rig/CPU/motherboard/PSU, hence I'm not worried about backward compatibility.)
My use case : (Is not gamin... | 2025-07-27T13:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mam6of/geforce_rtx_5060_ti_16gb_good_for_llama_llm/ | kingksingh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mam6of | false | null | t3_1mam6of | /r/LocalLLaMA/comments/1mam6of/geforce_rtx_5060_ti_16gb_good_for_llama_llm/ | false | false | self | 3 | null |
8xxx+RDNA3 vs 9xxx+RDNA2 speed for LLMs? | 0 | I have some experience with an AMD 8700G RDNA3 iGPU and acceleration via Vulkan - quite easy to set up for llama.cpp.
As a 9700G does not exist (yet?), does anyone know how the AMD 9700X with its RDNA2 iGPU+Vulkan would compare in speed for llama.cpp use?
Shall I 1) get another 8700G system, or 2) get a 9700X, or 3) ... | 2025-07-27T13:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1malsci/8xxxrdna3_vs_9xxxrdna2_speed_for_llms/ | a_postgres_situation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1malsci | false | null | t3_1malsci | /r/LocalLLaMA/comments/1malsci/8xxxrdna3_vs_9xxxrdna2_speed_for_llms/ | false | false | self | 0 | null |
Non-deterministic Dialogue in games, how much would LLMs really help here? | 5 | I’ve spent a good amount of time enjoying narrative driven games and open world style games alike. I wonder how much nondeterminism through “AI” can enhance the experience. I’ve had claude 3.5 (or 3.7 can’t really remember) write stories for me from a seed concept, and they did alright. But I definitely needed to “anch... | 2025-07-27T13:04:42 | https://www.reddit.com/r/LocalLLaMA/comments/1malsbp/nondeterministic_dialogue_in_games_how_much_would/ | m1tm0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1malsbp | false | null | t3_1malsbp | /r/LocalLLaMA/comments/1malsbp/nondeterministic_dialogue_in_games_how_much_would/ | false | false | self | 5 | null |
Building a quiet LLM machine for 24/7 use, is this setup overkill or smart? | 12 | Hey folks,
I’m putting together a PC mainly for running large language models like Qwen, LLaMA3, DeepSeek, etc. It’ll mostly be used for **code generation tasks**, and I want it to run **24/7**, quietly, in my home office.
Here’s what I’ve picked so far:
* **Case**: Lian Li O11D EVO XL
* **CPU**: AMD Ryzen 9 7950X3D... | 2025-07-27T12:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/1malflg/building_a_quiet_llm_machine_for_247_use_is_this/ | bardanaadam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1malflg | false | null | t3_1malflg | /r/LocalLLaMA/comments/1malflg/building_a_quiet_llm_machine_for_247_use_is_this/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'h... |
Any CJK datasets? | 3 | I'm looking for CJK data on Hugging Face. I don't see any high-quality datasets. If you have any recommendations, I'd appreciate it. | 2025-07-27T12:47:13 | https://www.reddit.com/r/LocalLLaMA/comments/1malf9l/any_cjk_datas/ | DependentDazzling703 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1malf9l | false | null | t3_1malf9l | /r/LocalLLaMA/comments/1malf9l/any_cjk_datas/ | false | false | self | 3 | null |
Motherboard for AM5 CPU and 3 GPUS (2 3090 and 1 5070 ti) | 3 | Hi guys,
I'm looking for a motherboard that supports an AM5 CPU and three GPUs: two 3090s and one 5070 Ti.
I found a motherboard with three PCI Express ports, but it appears that only the first runs at 16x. The other two run at 8x and 4x.
Does PCIe speed have an impact when using it for LLM inference?
I've heard about workstatio... | 2025-07-27T12:26:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mal0bo/motherboard_for_am5_cpu_and_3_gpus_2_3090_and_1/ | ed0c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mal0bo | false | null | t3_1mal0bo | /r/LocalLLaMA/comments/1mal0bo/motherboard_for_am5_cpu_and_3_gpus_2_3090_and_1/ | false | false | self | 3 | null |
4090 48GB for UK - Where? | 14 | Do you live in the UK and have you bought a 4090 48GB?
Where exactly did you get it from? eBay? Which vendor?
| 2025-07-27T12:12:05 | https://www.reddit.com/r/LocalLLaMA/comments/1makqv4/4090_48gb_for_uk_where/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1makqv4 | false | null | t3_1makqv4 | /r/LocalLLaMA/comments/1makqv4/4090_48gb_for_uk_where/ | false | false | self | 14 | null |
NVIDIA RTX PRO 4000 Blackwell - 24GB GDDR7 | 12 | I could get an NVIDIA RTX PRO 4000 Blackwell 24GB GDDR7 for 1,275.50 euros without VAT.
But it's only 140W with 8960 CUDA cores, and it takes only 1 slot. Is it worth it? Some Epyc boards could fit 6 of these... with PCIe 5.0 | 2025-07-27T10:59:46 | https://www.reddit.com/r/LocalLLaMA/comments/1majha1/nvidia_rtx_pro_4000_blackwell_24gb_gddr7/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1majha1 | false | null | t3_1majha1 | /r/LocalLLaMA/comments/1majha1/nvidia_rtx_pro_4000_blackwell_24gb_gddr7/ | false | false | self | 12 | null |