| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Pluralistic: Goodhart’s Law (of AI) — Cory Doctorow (Aug 11, 2025) | 0 | TL;DR 🤖 When a metric becomes the target, it stops being a good metric — and AI shows just how weird the results can get.
✨ Highlights
📏 Goodhart’s Law — “When a measure becomes a target, it ceases to be a good measure.” AI is a perfect playground for this.
🔗 Google PageRank spam — Once link counts became the targ... | 2025-08-11T19:28:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mnmm3a/pluralistic_goodharts_law_of_ai_cory_doctorow_aug/ | Ashishpatel26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnmm3a | false | null | t3_1mnmm3a | /r/LocalLLaMA/comments/1mnmm3a/pluralistic_goodharts_law_of_ai_cory_doctorow_aug/ | false | false | self | 0 | null |
Sam Altman now says AGI, or human-level AI, is ‘not a super useful term’ — and he’s not alone | 0 | 🚫 AGI is a ‘Pointless Term’
Sam Altman argues that “AGI” (Artificial General Intelligence) is becoming less useful and increasingly vague as we get closer to building truly capable AI systems.  
🔍 Definition Drift
He points out that the term covers so many interpretations—from general-purpose reasoning models to ... | 2025-08-11T19:24:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mnmif6/sam_altman_now_says_agi_or_humanlevel_ai_is_not_a/ | Ashishpatel26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnmif6 | false | null | t3_1mnmif6 | /r/LocalLLaMA/comments/1mnmif6/sam_altman_now_says_agi_or_humanlevel_ai_is_not_a/ | false | false | self | 0 | null |
🏅 OpenAI’s AI reasoning system just took gold in the AI track at the 2025 International Olympiad in Informatics (IOI) 🏅 | 0 | 📊 Results
🥇 1st among AI participants
🥈 6th overall out of 330 — only 5 human competitors scored higher
📈 From 49th percentile last year → now 98th percentile
🖥 Conditions
⏱️ Same 5-hour time limit & 50 submission cap as humans
🌐 No internet, no RAG — just a basic terminal tool
🤖 Ensemble of general-purpose rea... | 2025-08-11T19:20:27 | Ashishpatel26 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mnme3j | false | null | t3_1mnme3j | /r/LocalLLaMA/comments/1mnme3j/openais_ai_reasoning_system_just_took_gold_in_the/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '1eqzibmpxfif1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/1eqzibmpxfif1.jpeg?width=108&crop=smart&auto=webp&s=a2d28480261d7ac968ac2b3fcb55f7cce9500d43', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/1eqzibmpxfif1.jpeg?width=216&crop=smart&auto=we... | |
Knowledge Graph project advice. | 1 | I have been working on a project over the weekend to get into Knowledge Graph and Graph RAG. Would it be useful to have a website like gitingest / gitdiagram that generates a KG from your github repo or zip file upload, and offers a Graph RAG chatbot over it? Everything happens on client side in the browser? | 2025-08-11T19:16:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mnmafg/knowledge_graph_project_advice/ | DeathShot7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnmafg | false | null | t3_1mnmafg | /r/LocalLLaMA/comments/1mnmafg/knowledge_graph_project_advice/ | false | false | self | 1 | null |
Jina AI just dropped Reader-LM — tiny models that beat GPT-4o at cleaning messy HTML into Markdown 🚀 | 0 | ✨ What is it?
🔹 Two Small Language Models: Reader-LM-0.5B (~494M params) and Reader-LM-1.5B (~1.54B params)
🔹 Take raw, messy HTML → output clean, structured Markdown — no fragile regex pipelines needed
💡 Why it matters
🌍 Works multilingual (EN, DE, JP, ZH, etc.)
📜 Handles 256K tokens so you can scrape entire pag... | 2025-08-11T19:13:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mnm782/jina_ai_just_dropped_readerlm_tiny_models_that/ | Ashishpatel26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnm782 | false | null | t3_1mnm782 | /r/LocalLLaMA/comments/1mnm782/jina_ai_just_dropped_readerlm_tiny_models_that/ | false | false | self | 0 | null |
Local AGI components with llama.cpp: unified memory + reflection + planner (demo & code) | 1 | [removed] | 2025-08-11T19:08:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mnm2zl/local_agi_components_with_llamacpp_unified_memory/ | Miserable_Tutor1249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnm2zl | false | null | t3_1mnm2zl | /r/LocalLLaMA/comments/1mnm2zl/local_agi_components_with_llamacpp_unified_memory/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kWHMfm0OQfrMS-K8ikJh3KOn8arZNL0sglSflbbXEcY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kWHMfm0OQfrMS-K8ikJh3KOn8arZNL0sglSflbbXEcY.png?width=108&crop=smart&auto=webp&s=ddb7742b83639e74ab9abb0f93d2b42487237cbf', 'width': 108}, {'height': 108, 'url': 'h... |
Infinite Network Attached Memory for LLM Inference for RTX 4090 | 11 | **We explored a network-attached KV-cache for consumer GPUs to offset their limited VRAM**
It doesn’t make RTX cards run giant models efficiently. Still, for workloads that repeatedly reuse lengthy prefixes—such as chatbots, coding assistants, and multi-turn threads—it delivers a **2–4× speedup** in **RPS** and **tim... | 2025-08-11T19:07:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mnm1o1/infinite_network_attached_memory_for_llm/ | NoVibeCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnm1o1 | false | null | t3_1mnm1o1 | /r/LocalLLaMA/comments/1mnm1o1/infinite_network_attached_memory_for_llm/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'e0rlD5Ctvu8nxTa1QvT2VQCTsbD1uKK354POtR0vgQw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/e0rlD5Ctvu8nxTa1QvT2VQCTsbD1uKK354POtR0vgQw.png?width=108&crop=smart&auto=webp&s=0a78863c60037725f59a8f8f30cf4e3f74d65679', 'width': 108}, {'height': 144, 'url': 'h... |
Building an OSRS Trading Assistant with Python, Go and Ollama | 1 | [removed] | 2025-08-11T19:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mnlytv/building_an_osrs_trading_assistant_with_python_go/ | ___-____--_____-____ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnlytv | false | null | t3_1mnlytv | /r/LocalLLaMA/comments/1mnlytv/building_an_osrs_trading_assistant_with_python_go/ | false | false | self | 1 | null |
Let's face it: llama.cpp just isn't at feature parity | 0 | What's missing?
* Get llama-swap merged
* Cuda 12.8 binaries
* Intelligent default parameters
* Running as system service
Get these out of the way so it can be a decent recommendation to new users that don't want to mess with a patchwork of clis. There is a reason why it is not the go-to option on unsloth etc | 2025-08-11T19:03:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mnlxrx/lets_face_it_lamacpp_just_isnt_on_feature_parity/ | prusswan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnlxrx | false | null | t3_1mnlxrx | /r/LocalLLaMA/comments/1mnlxrx/lets_face_it_lamacpp_just_isnt_on_feature_parity/ | false | false | self | 0 | null |
Much Hugs for everyone | 4 | Now with code editor. Ofc free to all. | 2025-08-11T18:56:44 | https://v.redd.it/q5wev9p8tfif1 | Trilogix | /r/LocalLLaMA/comments/1mnlrl6/much_hugs_for_everyone/ | 1970-01-01T00:00:00 | 0 | {} | 1mnlrl6 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/q5wev9p8tfif1/DASHPlaylist.mpd?a=1757660212%2COWQ0OWU0MTIwM2M1MTdhYmI3MGI1ZDU5Y2M3NmUwZTc5MDM2ZTY5ZTkxY2M1NjVjYzcwZTUzZDQ5NDdlMmIxMA%3D%3D&v=1&f=sd', 'duration': 466, 'fallback_url': 'https://v.redd.it/q5wev9p8tfif1/DASH_720.mp4?source=fallback', 'h... | t3_1mnlrl6 | /r/LocalLLaMA/comments/1mnlrl6/much_hugs_for_everyone/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'bmQwMGc5cDh0ZmlmMRJkDXzpvDsG9E9tOAzk__qGsP9T1iUHAn_QL__JrIoI', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/bmQwMGc5cDh0ZmlmMRJkDXzpvDsG9E9tOAzk__qGsP9T1iUHAn_QL__JrIoI.png?width=108&crop=smart&format=pjpg&auto=webp&s=f9e896c56d050afc939564c40da373bf44356... | |
CHATGPT TEAM PLAN TOP-UP 1 MONTH 7.99$ | 1 | [removed] | 2025-08-11T18:51:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mnlmo8/chatgpt_team_plan_topup_1_month_799/ | Quirky_Guest_8196 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnlmo8 | false | null | t3_1mnlmo8 | /r/LocalLLaMA/comments/1mnlmo8/chatgpt_team_plan_topup_1_month_799/ | false | false | self | 1 | null |
Geocities style site by glm 4.5 | 77 | Completed in just 1 super simple prompt. GLM 4.5 is terrifyingly good at web dev now, especially as we can run it local. For me it was obvious it can generate modern and modern-ish sites but this stuff is kinda cooler to see (at least for me). The only unfortunate thing that it used emojis but that can be tweaked i gue... | 2025-08-11T18:44:42 | ChazychazZz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mnlfyt | false | null | t3_1mnlfyt | /r/LocalLLaMA/comments/1mnlfyt/geocities_style_site_by_glm_45/ | false | false | default | 77 | {'enabled': True, 'images': [{'id': 'fvi60un3qfif1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/fvi60un3qfif1.gif?width=108&crop=smart&format=png8&s=3dd1080ce683e3a58af70c6db064411175f7564a', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/fvi60un3qfif1.gif?width=216&crop=smart&format... | |
bit of overthinking, got it right? | 0 | write pi on 100 decimal places
Thought for 3 minutes 28 seconds
The user wants "pi on 100 decimal places". They want to write π to 100 decimal places. So I need to provide the value of pi up to 100 decimal digits after the point.
We should format it nicely, maybe line breaks? Typically we can give a block of text... | 2025-08-11T18:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mnlcfz/bit_of_overthinking_got_it_right/ | thatsintereZtinG4879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnlcfz | false | null | t3_1mnlcfz | /r/LocalLLaMA/comments/1mnlcfz/bit_of_overthinking_got_it_right/ | false | false | self | 0 | null |
Working example for Ollama ROCm + Ubuntu 24.04 + Minisforum MS-A2? | 0 | I have a Minisforum MS-A2 with 9955HX + 96GB RAM, running Ubuntu 24.04 and Docker 28.3.3. AMDGPU drivers are installed, and running `rocminfo` on the command line in the host will return correct driver information. I have an HSA\_OVERRIDE set because without it, Ollama couldn't find my GPU (it's a `gfx1036`).
Using ... | 2025-08-11T18:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mnl9lh/working_example_for_ollama_rocm_ubuntu_2404/ | insta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnl9lh | false | null | t3_1mnl9lh | /r/LocalLLaMA/comments/1mnl9lh/working_example_for_ollama_rocm_ubuntu_2404/ | false | false | self | 0 | null |
Best model/merge for RP? | 8 | I'm searching for the best model/merge without many filters to run locally. I have an RX 7800 XT, Ryzen 7 7800X3D and 32 GB of 7200 MT/s RAM, and I use KoboldCpp for SillyTavern | 2025-08-11T18:22:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mnkv4k/best_modelmerge_for_rp/ | Material_Metal_1526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnkv4k | false | null | t3_1mnkv4k | /r/LocalLLaMA/comments/1mnkv4k/best_modelmerge_for_rp/ | false | false | self | 8 | null |
You can now view Direct Head-to-Head data on Design Arena | 1 | Hi all. Just wanted to share a quick update since people were asking about the ability to see how models perform against other models pairwise.
As a reminder, the tldr is that I've been working on a[ crowdsource benchmark](https://www.designarena.ai/) for evaluating large language models over the last 6 weeks on code... | 2025-08-11T18:02:47 | https://www.reddit.com/gallery/1mnkbyy | Accomplished-Copy332 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mnkbyy | false | null | t3_1mnkbyy | /r/LocalLLaMA/comments/1mnkbyy/you_can_now_view_direct_headtohead_data_on_design/ | false | false | 1 | null | |
GPT-5 is AGI | 0 | 2025-08-11T17:51:42 | umjustpassingby | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mnk12b | false | null | t3_1mnk12b | /r/LocalLLaMA/comments/1mnk12b/gpt5_is_agi/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'gwuefyithfif1', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/gwuefyithfif1.png?width=108&crop=smart&auto=webp&s=20561e0a8df6a38253f4177fb2ca1d838a8d8773', 'width': 108}, {'height': 258, 'url': 'https://preview.redd.it/gwuefyithfif1.png?width=216&crop=smart&auto=we... | ||
What applications do you use to run models locally? | 1 | I used to use GPT4All but it seems like it hasn't had any support in the past few months. Anyone doing something different? | 2025-08-11T17:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mnjm0r/what_applications_do_you_use_to_run_models_locally/ | HeavyRadish4327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnjm0r | false | null | t3_1mnjm0r | /r/LocalLLaMA/comments/1mnjm0r/what_applications_do_you_use_to_run_models_locally/ | false | false | self | 1 | null |
Do we need a new 0.6b (2507) draft model for Qwen? | 1 | I finally got round to downloading 235b 2507 Thinking and I just realised I'll have to use the old 0.6b bf16 draft.
It does seem to work (66% acceptance) but presumably it's now suboptimal in the light of 2507? Even if the vocabulary is the same, the thinking process must be different? | 2025-08-11T17:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mnjdmh/do_we_need_a_new_06b_2507_draft_model_for_qwen/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnjdmh | false | null | t3_1mnjdmh | /r/LocalLLaMA/comments/1mnjdmh/do_we_need_a_new_06b_2507_draft_model_for_qwen/ | false | false | self | 1 | null |
Qwen 3 Coder 30b a3b quite slow at prompt processing on M4 Pro 64GB | 2 | I am trying this out to see if it works well for regular coding. It seems to work fine, and token generation is quite fast - but it takes a long time to process prompts. I am using Claude Code with it. Is it normal for it to be this slow on a Mac, even though the number of active parameters is low? | 2025-08-11T17:21:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mnj6gd/qwen_3_coder_30b_a3b_quite_slow_at_prompt/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnj6gd | false | null | t3_1mnj6gd | /r/LocalLLaMA/comments/1mnj6gd/qwen_3_coder_30b_a3b_quite_slow_at_prompt/ | false | false | self | 2 | null |
Built a fully local medical AI scribe for Mac/iPhone | 0 | Hey everyone, I’ve been going a bit crazy over the past year trying to get a completely local AI scribe for clinicians running on an edge device. Until recently, no luck. Last year I dropped a [3B fine-tuned model that outscored GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/1cp2h1v/3b_model_beating_gpt4_on_medica... | 2025-08-11T17:20:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mnj5ft/built_a_fully_local_medical_ai_scribe_for/ | MajesticAd2862 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnj5ft | false | null | t3_1mnj5ft | /r/LocalLLaMA/comments/1mnj5ft/built_a_fully_local_medical_ai_scribe_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '7mAkIxUxMi4LszYuHXl7Ayer42wXvv4uq8LI5MOtsFc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8zXWSwpwdTjTcf7FNCzXNMgDF1IeUnckKveNaG0frr0.jpg?width=108&crop=smart&auto=webp&s=c622e412c8c5d3da4ea711e8020ac0a75df7c5fc', 'width': 108}, {'height': 113, 'url': 'h... |
How to achieve AGI - my idea | 0 | GPT-5 got immensely hyped by the openai staff, especially by Sam Altman. Him posting the image of the Death Star one day before the initial release got me really hyped and my expectations rose. Unfortunately GPT-5 is not AGI and was just a cost cut by OpenAI to make their business more profitable.
A lot of experts in ... | 2025-08-11T17:18:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mnj3sb/how_to_achieve_agi_my_idea/ | Thin_Improvement5187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnj3sb | false | null | t3_1mnj3sb | /r/LocalLLaMA/comments/1mnj3sb/how_to_achieve_agi_my_idea/ | false | false | self | 0 | null |
Various stores and async problems in LlamaIndex | 2 | In my RAG setup I am currently using a vector store, an index store and a docstore to get faster results, but it has given me a lot of trouble. I was using Redis for both the index and doc stores and Chroma for the vector store; that pushed me in the async direction since they both required an async client, and now my whole codebase has been changed to async and I don't see m... | 2025-08-11T17:11:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mnivxr/various_stores_and_async_problems_in_llamaindex/ | Minimum-Row6464 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnivxr | false | null | t3_1mnivxr | /r/LocalLLaMA/comments/1mnivxr/various_stores_and_async_problems_in_llamaindex/ | false | false | self | 2 | null |
Need help finding a .gguf for stories | 0 | I like writing stories. Mainly relationship stories. Can be wholesome but tend to drift into NSFW.
I have been using Llama 3.1 SuperNova Lite 8B so far and it is pretty good. Would like it to have more range and less repeative emotion words. More of an all over descriptive tone of sights, sounds, feelings. Basically ... | 2025-08-11T17:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mnius3/need_help_finding_a_gguf_for_stories/ | cmdrmcgarrett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnius3 | false | null | t3_1mnius3 | /r/LocalLLaMA/comments/1mnius3/need_help_finding_a_gguf_for_stories/ | false | false | self | 0 | null |
Why can't all models be called the same everywhere? | 0 | How do people deal with the model name differences between various inference providers:
**Ollama:** `llama3.2:3b`
**Hugging Face Hub:** `meta-llama/Llama-3.2-3B-Instruct`
Does everyone have a big f-ing mapping table for names? Or do you always use the same inference provider in dev and prod? | 2025-08-11T17:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mniqgg/why_cant_all_models_be_called_the_same_everywhere/ | datancoffee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mniqgg | false | null | t3_1mniqgg | /r/LocalLLaMA/comments/1mniqgg/why_cant_all_models_be_called_the_same_everywhere/ | false | false | self | 0 | null |
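The post above is usually answered with exactly the kind of mapping table the author jokes about: a small, hand-maintained translation layer between provider-specific names. A minimal sketch in Python, assuming you curate the entries yourself (the tags and repo ids below are illustrative, not an authoritative registry):

```python
# Hand-maintained translation layer between Ollama tags and Hugging Face repo ids.
# The entries are illustrative; extend the dict for whatever models you actually deploy.
OLLAMA_TO_HF = {
    "llama3.2:3b": "meta-llama/Llama-3.2-3B-Instruct",
    "llama3.2:1b": "meta-llama/Llama-3.2-1B-Instruct",
    "qwen3:1.7b": "Qwen/Qwen3-1.7B",
}

def to_hf_repo(ollama_tag: str) -> str:
    """Translate an Ollama model tag to a Hugging Face repo id, failing loudly on unknown tags."""
    try:
        return OLLAMA_TO_HF[ollama_tag]
    except KeyError:
        raise ValueError(f"No Hugging Face mapping known for Ollama tag {ollama_tag!r}")

if __name__ == "__main__":
    print(to_hf_repo("llama3.2:3b"))  # meta-llama/Llama-3.2-3B-Instruct
```

Keeping the mapping in one module (or a config file) also defuses the dev/prod question: the rest of the code refers to your own canonical name, and only the mapping knows what each provider calls it.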
My beautiful vLLM adventure | 37 | So, there was this rant post on vLLM the other day. Seeing as I have some time on my hands and wanting to help the open source community, I decided I'd try documenting the common use cases and proving that, hey, this vLLM thing isn't really \*that hard to run\*. And I must say, after the tests, I have no idea what you'... | 2025-08-11T17:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mnin8k/my_beautiful_vllm_adventure/ | ilintar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnin8k | false | null | t3_1mnin8k | /r/LocalLLaMA/comments/1mnin8k/my_beautiful_vllm_adventure/ | false | false | self | 37 | null |
NVIDIA Expands Its RTX PRO ‘Blackwell’ Workstation GPU Lineup With Two New Variants at SIGGRAPH 2025: RTX PRO 4000 SFF and RTX PRO 2000 | 22 | 2025-08-11T16:55:16 | https://wccftech.com/nvidia-expands-its-rtx-pro-blackwell-workstation-gpu-lineup-with-two-new-variants/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1mnifh1 | false | null | t3_1mnifh1 | /r/LocalLLaMA/comments/1mnifh1/nvidia_expands_its_rtx_pro_blackwell_workstation/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': 'k9pHhlT5Zo2BU9fRiistXexZxTyPMK7ADnLJ0PIOUlg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/k9pHhlT5Zo2BU9fRiistXexZxTyPMK7ADnLJ0PIOUlg.jpeg?width=108&crop=smart&auto=webp&s=70fb94bae98a0a2b9618ead2931709567ce726bd', 'width': 108}, {'height': 121, 'url': '... | |
Maxsun Unleashes The ARL-HX Mini Station: Compact AI Workstation With Intel Core Ultra 9 275HX, Dual Arc PRO B60 24 GB GPUs, 256 GB DDR5 Memory Support | 9 | *Source: https://wccftech.com/maxsun-arl-hx-mini-station-compact-ai-workstation-intel-core-ultra-9-275hx-dual-arc-pro-b60-24-gb-gpus-256-gb-ddr5-memory/*
*Entire Article:*
> Maxsun has introduced its brand new ARL-HX Mini Workstation AI PC, featuring Intel's Core Ultra 9 275HX CPU & Arc Pro B60 GPU.
>
> **Maxsun Unv... | 2025-08-11T16:51:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mnibo4/maxsun_unleashes_the_arlhx_mini_station_compact/ | _SYSTEM_ADMIN_MOD_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnibo4 | false | null | t3_1mnibo4 | /r/LocalLLaMA/comments/1mnibo4/maxsun_unleashes_the_arlhx_mini_station_compact/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw.png?width=108&crop=smart&auto=webp&s=830a497b296dd22fea61a941cb03d1a1073ae508', 'width': 108}, {'height': 122, 'url': 'h... |
NVLink: 3 slot or 4 slot? | 1 | [removed] | 2025-08-11T16:41:35 | https://www.reddit.com/gallery/1mni22d | FrozenBuffalo25 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mni22d | false | null | t3_1mni22d | /r/LocalLLaMA/comments/1mni22d/nvlink_3_slot_or_4_slot/ | false | false | 1 | null | |
A Heavy User's Vantage Point: ChatGPT's Evolution from ADHD to Cluster B | 0 | Hello r/LocalLLaMA,
I'm writing this to share some deeply concerning observations -- and to see if others are experiencing the same. By way of background, I'm an organizational psychologist, a subscriber to ChatGPT Teams, and an extremely heavy user. I often spend over 100 hours per week with the tool. I’ve built ov... | 2025-08-11T16:37:32 | Lusayalumino | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mnhy7f | false | null | t3_1mnhy7f | /r/LocalLLaMA/comments/1mnhy7f/a_heavy_users_vantage_point_chatgpts_evolution/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'queijy1d4fif1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/queijy1d4fif1.png?width=108&crop=smart&auto=webp&s=1b44b3a60c3e4c5a27c4d54c047835714beb5191', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/queijy1d4fif1.png?width=216&crop=smart&auto=web... | |
GPT-OSS Benchmarks: How GPT-OSS-120B Performs in Real Tasks | 212 | OpenAI released their first open models since GPT-2, and GPT-OSS-120B is now the best open-weight model on our real-world TaskBench.
**Some details:**
* Better completion performance overall compared to other open-weight models like Kimi-K2 and DeepSeek-R1, while being roughly 1/10th the size. Cheaper, better, faster... | 2025-08-11T16:19:43 | facethef | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mnhgt0 | false | null | t3_1mnhgt0 | /r/LocalLLaMA/comments/1mnhgt0/gptoss_benchmarks_how_gptoss120b_performs_in_real/ | false | false | default | 212 | {'enabled': True, 'images': [{'id': 'jw671veezeif1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/jw671veezeif1.png?width=108&crop=smart&auto=webp&s=31e4bf9d6b6b978496cbf0eccadde157f7c68be3', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/jw671veezeif1.png?width=216&crop=smart&auto=web... | |
FULL UPDATED v0 by Vercel System Prompts and Tools | 3 | (Latest update: 11/08/2025)
I managed to get FULL official and updated v0 system prompt and internal tools. Over 1.2k lines and 13.5K tokens.
Check it out at: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools | 2025-08-11T16:04:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mnh1vw/full_updated_v0_by_vercel_system_prompts_and_tools/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnh1vw | false | null | t3_1mnh1vw | /r/LocalLLaMA/comments/1mnh1vw/full_updated_v0_by_vercel_system_prompts_and_tools/ | false | false | self | 3 | null |
Llama.cpp Vulkan is awesome, It gave new life to my old RX580 | 79 | I built a new computer and instead of buying a GPU I decided to give my old RX580 8GB a try for running inference. I had it laying around unused.
My PC specs are not crazy, its b850 motherboard, Ryzen 7700X and a b580. My total cost was about 700 dollars.
Tried running Qwen3 30 b with about 20 layers offloaded to the... | 2025-08-11T16:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mnh0s5/llamacpp_vulkan_is_awesome_it_gave_new_life_to_my/ | Ssjultrainstnict | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnh0s5 | false | null | t3_1mnh0s5 | /r/LocalLLaMA/comments/1mnh0s5/llamacpp_vulkan_is_awesome_it_gave_new_life_to_my/ | false | false | self | 79 | null |
Akhil-Theerthala/Kuvera-8B-qwen3-v0.2.1 · Hugging Face | 16 | Finally releasing the next step in the personal finance advisor model. The models are still trained on US-specific data, yet, they should perform better than the previous versions of the model.
I have also trained a smaller, 4B version of the model that can be checked [here](https://huggingface.co/Akhil-Theerthala/Ku... | 2025-08-11T15:57:35 | https://huggingface.co/Akhil-Theerthala/Kuvera-8B-qwen3-v0.2.1 | The-Silvervein | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mnguv9 | false | null | t3_1mnguv9 | /r/LocalLLaMA/comments/1mnguv9/akhiltheerthalakuvera8bqwen3v021_hugging_face/ | false | false | default | 16 | {'enabled': False, 'images': [{'id': 'rHdqAu8OP9ctEO4GaMG7eJGP27NSaRtxXat3UWEiNq0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rHdqAu8OP9ctEO4GaMG7eJGP27NSaRtxXat3UWEiNq0.png?width=108&crop=smart&auto=webp&s=70f34aae9f9abbb2a881dcbd97b84f7862824757', 'width': 108}, {'height': 116, 'url': 'h... |
Microsoft Edge Proposes Web Model Context API | 9 | 2025-08-11T15:52:18 | https://github.com/MicrosoftEdge/MSEdgeExplainers/blob/main/WebModelContext/explainer.md | -json- | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mngpna | false | null | t3_1mngpna | /r/LocalLLaMA/comments/1mngpna/microsoft_edge_proposes_web_model_context_api/ | false | false | default | 9 | {'enabled': False, 'images': [{'id': 'RQHQV8bgli4Kw9ABS63OM3ee5MAkmYUuh38EshjaHIQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RQHQV8bgli4Kw9ABS63OM3ee5MAkmYUuh38EshjaHIQ.png?width=108&crop=smart&auto=webp&s=74f0c0095ee12e13706bb1be75c67c2a757a3ba1', 'width': 108}, {'height': 108, 'url': 'h... | |
How does --n-cpu-moe and --cpu-moe params help over --ngl=999 along with --ot=regex_to_offload_ffn_on_CPU in llama.cpp? | 21 | I have been reading that these new flags ( --n-cpu-moe and --cpu-moe) are very useful. But how? If I'm not wrong, these new flags help us offload a moe layers on CPU, but our goal is to offload these layers on GPU, right? My understanding is, we max out all layers on GPU, then selectively offload ffn tensors on CPU so ... | 2025-08-11T15:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mngl7i/how_does_ncpumoe_and_cpumoe_params_help_over/ | Rohit_RSS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mngl7i | false | null | t3_1mngl7i | /r/LocalLLaMA/comments/1mngl7i/how_does_ncpumoe_and_cpumoe_params_help_over/ | false | false | self | 21 | null |
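One way to read the flags discussed above is that the new options are shorthand for the regex-based tensor override. Below is a minimal sketch that only prints the candidate command lines side by side; the flag names (`-ngl`, `-ot`, `--cpu-moe`, `--n-cpu-moe`) are the ones named in the post, while the model filename, the expert-tensor regex, and the layer count are illustrative assumptions rather than a verified recipe:

```python
# Compare the two styles of keeping MoE expert tensors on the CPU in llama.cpp.
import shlex

model = "Qwen3-30B-A3B-Q4_K_M.gguf"  # hypothetical GGUF file

# Manual style: offload every layer with -ngl, then override expert FFN tensors back to the CPU.
manual = [
    "llama-server", "-m", model,
    "-ngl", "999",
    "-ot", r"blk\..*\.ffn_.*_exps\.=CPU",  # assumed pattern matching the expert tensors
]

# Shorthand: --cpu-moe keeps all expert tensors on the CPU (roughly the same effect as the
# regex above), while --n-cpu-moe N does it only for the first N layers, so the remaining
# experts stay on the GPU and VRAM is filled as far as it will go.
shorthand_all = ["llama-server", "-m", model, "-ngl", "999", "--cpu-moe"]
shorthand_partial = ["llama-server", "-m", model, "-ngl", "999", "--n-cpu-moe", "10"]

for cmd in (manual, shorthand_all, shorthand_partial):
    print(shlex.join(cmd))
```

The goal is unchanged either way: attention and as many experts as possible still land on the GPU; the newer flags just replace the hand-written regex and turn the CPU/GPU split into a single tunable number.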
Natively compile CUDA applications for AMD GPUs | 4 | Anybody tried SCALE?
https://docs.scale-lang.com/stable/
SCALE is a GPGPU programming toolkit that can natively compile CUDA applications for AMD GPUs.
SCALE does not require the CUDA program or its build system to be modified.
The following GPU targets are supported in the free edition of SCALE:
AMD gfx900 (Ve... | 2025-08-11T15:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mngeti/natively_compile_cuda_applications_for_amd_gpus/ | DarKresnik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mngeti | false | null | t3_1mngeti | /r/LocalLLaMA/comments/1mngeti/natively_compile_cuda_applications_for_amd_gpus/ | false | false | self | 4 | null |
Pluto1 - new OS model? | 0 | Did anyone hear anything? | 2025-08-11T15:40:48 | https://plutovid.com/ | Mindless-Okra-4877 | plutovid.com | 1970-01-01T00:00:00 | 0 | {} | 1mngea5 | false | null | t3_1mngea5 | /r/LocalLLaMA/comments/1mngea5/pluto1_new_os_model/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'v-BaxNXKdrD2hmn_FwqXfWSFB7XyeQ__mHCo3yPUQrM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/v-BaxNXKdrD2hmn_FwqXfWSFB7XyeQ__mHCo3yPUQrM.png?width=108&crop=smart&auto=webp&s=e88627e2b3651b4dc40c5cb10dfa397b01e8df24', 'width': 108}, {'height': 216, 'url': '... |
DeepSeek-V3 Technical Report vs OpenAI gpt-oss paper | 0 | 2025-08-11T15:32:51 | inkberk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mng6do | false | null | t3_1mng6do | /r/LocalLLaMA/comments/1mng6do/deepseekv3_technical_report_vs_openai_gptoss_paper/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'DGuRk7d-yV3gdvZGdLkcyzYM0Jl7u6kZMBpW0oN_kCA', 'resolutions': [{'height': 136, 'url': 'https://preview.redd.it/4g0wt7tvseif1.jpeg?width=108&crop=smart&auto=webp&s=16561b89ae860fce08a12afbe58644c7bf5ed5b9', 'width': 108}, {'height': 273, 'url': 'https://preview.redd.it/4g0wt7tvseif1.j... | |||
Mac Studio Ultra and Qwen 3 Coder : GLM 4.5 | 0 | Hi all. I am currently using these models via Chutes and I love the service as a concept, but lately the performance has been unpredictable and there is no other service that offers the same pricing. If I upgraded my M4 Pro Mac mini to an M3 Ultra Studio or M4 version when it’s out, with maxed out unified memory of 512... | 2025-08-11T15:24:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mnfytb/mac_studio_ultra_and_qwen_3_coder_glm_45/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnfytb | false | null | t3_1mnfytb | /r/LocalLLaMA/comments/1mnfytb/mac_studio_ultra_and_qwen_3_coder_glm_45/ | false | false | self | 0 | null |
I've been using mistral 3.2 but need an uncensored version with same ability | 0 | What if i ask a very personal medical question.. most modelst will just say "seek help", "talk to doctor".. etc, and im like FUCK OFF. I already know that, I can talk to my doc when i need to, i just need an initial opinion on shit without a refusal. Is that too much to ask. On an RTX 4090 i feel like there should be s... | 2025-08-11T15:18:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mnftcf/ive_been_using_mistral_32_but_need_an_uncensored/ | MotorNetwork380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnftcf | false | null | t3_1mnftcf | /r/LocalLLaMA/comments/1mnftcf/ive_been_using_mistral_32_but_need_an_uncensored/ | false | false | self | 0 | null |
GitHub - ggml-org/llama.cpp: LLM inference in C/C++ | 0 | 2025-08-11T15:17:30 | https://github.com/ggml-org/llama.cpp | Merchant_Lawrence | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mnfs96 | false | null | t3_1mnfs96 | /r/LocalLLaMA/comments/1mnfs96/github_ggmlorgllamacpp_llm_inference_in_cc/ | false | false | default | 0 | null | |
Testing locallama subreddit keyword ban | 1 | [removed] | 2025-08-11T15:16:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mnfr1j/testing_locallama_subreddit_keyword_ban/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnfr1j | false | null | t3_1mnfr1j | /r/LocalLLaMA/comments/1mnfr1j/testing_locallama_subreddit_keyword_ban/ | false | false | self | 1 | null |
Bridging the Gap Between Veterans and Aspiring Members | 1 | [removed] | 2025-08-11T15:14:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mnfp3h/bridging_the_gap_between_veterans_and_aspiring/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnfp3h | false | null | t3_1mnfp3h | /r/LocalLLaMA/comments/1mnfp3h/bridging_the_gap_between_veterans_and_aspiring/ | false | false | self | 1 | null |
Searching actually viable alternative to Ollama | 64 | Hey there,
as we've all figured out by now, Ollama is certainly not the best way to go. Yes, it's simple, but there are so many alternatives out there which either outperform Ollama or just work with broader compatibility. So I said to myself, "screw it", I'm gonna try that out, too.
Unfortunately, it turned out t... | 2025-08-11T15:13:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mnfomq/searching_actually_viable_alternative_to_ollama/ | mags0ft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnfomq | false | null | t3_1mnfomq | /r/LocalLLaMA/comments/1mnfomq/searching_actually_viable_alternative_to_ollama/ | false | false | self | 64 | null |
~8b uncensored model recommendations for rp/narration that don't talk in an overly poetic way with out-dated dialogue? | 0 | Before I was using NovelAI and AI Dungeon to write stuff, with some nsfw scenes as well, but then I realized recently that low-parameter quantized models AREN'T actually ass! An early bad experience always made me assume that the smaller param models, especially quantized, would only let you use a heavily lobotomized a... | 2025-08-11T15:09:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mnfkf0/8b_uncensored_model_recommendations_for/ | CoolbreezeFromSteam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnfkf0 | false | null | t3_1mnfkf0 | /r/LocalLLaMA/comments/1mnfkf0/8b_uncensored_model_recommendations_for/ | false | false | self | 0 | null |
Bridging the gap between new and experienced members | 1 | [removed] | 2025-08-11T15:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mnfjii/bridging_the_gap_between_new_and_experienced/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnfjii | false | null | t3_1mnfjii | /r/LocalLLaMA/comments/1mnfjii/bridging_the_gap_between_new_and_experienced/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7Q_1wghMVHDUyeJWqoElHsREpWm_vcKup05QUtWOxcE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Q_1wghMVHDUyeJWqoElHsREpWm_vcKup05QUtWOxcE.png?width=108&crop=smart&auto=webp&s=db51d41046afedef61b9a9a3ca88522420825f0d', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen 3 : reasoning and tool calling (`tool_choice: "required"`) | 1 | Hello,
Have any of you managed to get any reasoning model working with tool_choice at `required` and reasoning working at the same time ?
I expected the model to do the reasoning, then do the tool calling.
But with `tool_choice` the tool call is very meh (if not working at all), works perfectly fine at `required` ... | 2025-08-11T14:58:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mnf9vt/qwen_3_reasoning_and_tool_calling_tool_choice/ | LinkSea8324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnf9vt | false | null | t3_1mnf9vt | /r/LocalLLaMA/comments/1mnf9vt/qwen_3_reasoning_and_tool_calling_tool_choice/ | false | false | self | 1 | null |
GLM 4.5 Comparison vs other AI models, sourced via ChatGPT & Grok | 0 | Used Grok and ChatGPT to sanity check the scoring vs other models. Seems like DeepSeek 2.0 | 2025-08-11T14:56:55 | https://www.reddit.com/gallery/1mnf8jb | Beneficial-Yam2425 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mnf8jb | false | null | t3_1mnf8jb | /r/LocalLLaMA/comments/1mnf8jb/glm_45_comparion_vs_other_ai_models_sourced_via/ | false | false | 0 | null |
PCI/MMIO/BAR resource exhaustion issues with 2x PRO 6000 Workstation and 2x RTX A6000 GPUs on a Gigabyte-based EPYC server. Any of you grizzled old multi-GPU miners got some nuggets of wisdom? | 10 | Quick note: there is no AI slop in this post. Any slop you find was lovingly crafted by a pair of human hands, the old school way. All mistakes are mine.
I've posted a similar request over at /r/gigabyte, but I figured there's a lot of old-timers around here that have solved trickier problems than this in multi-GPU s... | 2025-08-11T14:43:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mnevw3/pcimmiobar_resource_exhaustion_issues_with_2x_pro/ | __JockY__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnevw3 | false | null | t3_1mnevw3 | /r/LocalLLaMA/comments/1mnevw3/pcimmiobar_resource_exhaustion_issues_with_2x_pro/ | false | false | self | 10 | null |
Uncensored image generation | 0 | 1. Are there any uncensored image generation models I can run with: GTX 1660 Super (6 GB VRAM), Xeon E5-2650 v2 (2.6 GHz, 8c/16t), 16 GB DDR3 RAM, SATA SSD?
2. Can i run images generation models without running any text generation models? | 2025-08-11T14:31:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mnekk9/uncensored_images_generation/ | Imaginary_Bread9711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnekk9 | false | null | t3_1mnekk9 | /r/LocalLLaMA/comments/1mnekk9/uncensored_images_generation/ | false | false | self | 0 | null |
SOTA on 41 benchmarks! GLM-4.5V -- A new open-source VLM from China | 152 | Two weeks ago, China's [Z.ai](http://Z.ai) open-sourced the [GLM-4.5 model](https://huggingface.co/collections/zai-org/glm-45-687c621d34bda8c9e4bf503b). Now, building on GLM-4.5’s language architecture, they’ve trained a new VLM—[GLM-4.5V](https://huggingface.co/collections/zai-org/glm-45v-68999032ddf8ecf7dcdbc102) — w... | 2025-08-11T14:24:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mnedxo/sota_on_41_benchmarks_glm45v_a_new_opensource_vlm/ | jiawei243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnedxo | false | null | t3_1mnedxo | /r/LocalLLaMA/comments/1mnedxo/sota_on_41_benchmarks_glm45v_a_new_opensource_vlm/ | false | false | 152 | null | |
What is the best android device in market today that works better than most for termux ollama stack with possibly whisper and kokoro like tts and stt to just run and work properly ? | 2 | Basically the title
like simple qwen3-1.7b types is more than enough.
I want the best device, the most capable, not a minimum | 2025-08-11T14:19:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mnea4e/what_is_the_best_android_device_in_market_today/ | s2k4ever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnea4e | false | null | t3_1mnea4e | /r/LocalLLaMA/comments/1mnea4e/what_is_the_best_android_device_in_market_today/ | false | false | self | 2 | null |
For an RTX 3090, what models can I use? And can I run multiple models that require low VRAM? | 0 | For the use case, I want to use it for my meetings. So basically, a conversation with a goal or role in mind to focus on during the meeting. (Yes, I'll need TTS & STT for this.) And finally, summarize everything said in the meeting or extract information from it in a structured JSON format.
Like deadlines, etc. info discu... | 2025-08-11T14:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mndvjl/for_an_rtx_3090_what_models_can_i_use_and_can_i/ | zekuden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mndvjl | false | null | t3_1mndvjl | /r/LocalLLaMA/comments/1mndvjl/for_an_rtx_3090_what_models_can_i_use_and_can_i/ | false | false | self | 0 | null |
What is DE WAY in this amazing rapidly changing AI world? | 1 | [removed] | 2025-08-11T14:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mnduyi/what_is_de_way_in_this_amazing_rapidly_changing/ | MigatteNoGokuiVegeta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnduyi | false | null | t3_1mnduyi | /r/LocalLLaMA/comments/1mnduyi/what_is_de_way_in_this_amazing_rapidly_changing/ | false | false | self | 1 | null |
What is DE WAY in this amazing rapidly changing AI world? | 1 | [removed] | 2025-08-11T14:03:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mnduui/what_is_de_way_in_this_amazing_rapidly_changing/ | MigatteNoGokuiVegeta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnduui | false | null | t3_1mnduui | /r/LocalLLaMA/comments/1mnduui/what_is_de_way_in_this_amazing_rapidly_changing/ | false | false | self | 1 | null |
What's the best MoE LLM model? | 0 | So far I have only seen 30B-A3B, but to me 30B total is big, and even 20B A3B is still very big.
I want to train a relatively small MoE VLA model; before that, I need a MoE VLM model.
Any candidates for this? I can train a VLM myself once I have a good LLM.
I currently use 1.7B Qwen3, it was greate for VLM, but too small, an... | 2025-08-11T14:01:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mndteq/whats_the_best_moe_llm_model/ | LewisJin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mndteq | false | null | t3_1mndteq | /r/LocalLLaMA/comments/1mndteq/whats_the_best_moe_llm_model/ | false | false | self | 0 | null |
Swapping hardware | 0 | Please help me: is it logical to swap my current PC
(5800X3D, 4070 Super, 32 GB 3000 MHz RAM) for a mini PC with a Ryzen AI 395+ and 64 or 96 GB of RAM?
I want to limit my gaming capabilities and go deeper into AI and other workloads. | 2025-08-11T14:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mndsuy/swapping_hardware/ | Lxxtsch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mndsuy | false | null | t3_1mndsuy | /r/LocalLLaMA/comments/1mndsuy/swapping_hardware/ | false | false | self | 0 | null |
I just found this company that's crazy. | 0 | Damn this is fire.
Source :- https://x.com/Vicharak_In/status/1954888448789123392 | 2025-08-11T13:44:38 | Less-Plant3781 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mnddl1 | false | null | t3_1mnddl1 | /r/LocalLLaMA/comments/1mnddl1/i_just_found_this_company_thats_crazy/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'vw2bltms9eif1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/vw2bltms9eif1.jpeg?width=108&crop=smart&auto=webp&s=bbc43577da730a294422eae020bf0f70503b974d', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/vw2bltms9eif1.jpeg?width=216&crop=smart&auto=w... | |
I just found this company hope they suceed. | 1 | Damn this is fire. | 2025-08-11T13:39:56 | https://x.com/Vicharak_In/status/1954888448789123392 | Less-Plant3781 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mnd9jx | false | null | t3_1mnd9jx | /r/LocalLLaMA/comments/1mnd9jx/i_just_found_this_company_hope_they_suceed/ | false | false | default | 1 | null |
Am I the only one who never really liked Ollama? | 249 | With all that happens with it now and them wanting people to make accounts to use certain features(which kinda defeats the purpose of it) am I the only one who thought that it's really not the best? | 2025-08-11T13:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mnd144/am_i_the_only_one_who_never_really_liked_ollama/ | a_normal_user1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnd144 | false | null | t3_1mnd144 | /r/LocalLLaMA/comments/1mnd144/am_i_the_only_one_who_never_really_liked_ollama/ | false | false | self | 249 | null |
ollama | 1,810 | 2025-08-11T13:19:02 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mncrqp | false | null | t3_1mncrqp | /r/LocalLLaMA/comments/1mncrqp/ollama/ | false | false | default | 1,810 | {'enabled': True, 'images': [{'id': '2whabjm55eif1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/2whabjm55eif1.png?width=108&crop=smart&auto=webp&s=d9fc32d6b4d98fd58d1d52987dd9f3e55dc29b4c', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/2whabjm55eif1.png?width=216&crop=smart&auto=web... | ||
Do you have any instructions on how we can make the model think for a longer period before giving its answer? | 2 | Family, do you have any instructions that I could add to my prompt to encourage the models to think longer before generating their answers? | 2025-08-11T13:17:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mncqd3/do_you_have_any_instructions_on_how_we_can_make/ | alaatb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mncqd3 | false | null | t3_1mncqd3 | /r/LocalLLaMA/comments/1mncqd3/do_you_have_any_instructions_on_how_we_can_make/ | false | false | self | 2 | null |
GLM-4.5V (based on GLM-4.5 Air) | 430 | * **Image reasoning** (scene understanding, complex multi-image analysis, spatial recognition)
* **Video understanding** (long video segmentation and event recognition)
* **GUI tasks** (screen reading, icon recognition, desktop operation assistance)
* **Complex chart & long document parsing** (research report analysis,... | 2025-08-11T13:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mncfif/glm45v_based_on_glm45_air/ | rerri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mncfif | false | null | t3_1mncfif | /r/LocalLLaMA/comments/1mncfif/glm45v_based_on_glm45_air/ | false | false | self | 430 | {'enabled': False, 'images': [{'id': '2vL-F9PfSzFgv78Q3FHPDHthqe35a4fuFmEpRbRmn94', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2vL-F9PfSzFgv78Q3FHPDHthqe35a4fuFmEpRbRmn94.png?width=108&crop=smart&auto=webp&s=3288a6c39653d429e3aaecbf0efa48c1082fb618', 'width': 108}, {'height': 116, 'url': 'h... |
Best local models to run OpenCode? | 7 | My spec: 24 GB VRAM and 96 GB RAM
Which model/models come closest to the feel of Claude Code?
I was thinking of having a model for daily use that is around 20 GB, so it's fast, with a smaller context window.
Then a biger moder that is slower but that has a biger context size so i can do specifick tasks that ne... | 2025-08-11T13:02:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mncd7i/best_local_models_to_run_opencode/ | tarsonis125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mncd7i | false | null | t3_1mncd7i | /r/LocalLLaMA/comments/1mncd7i/best_local_models_to_run_opencode/ | false | false | self | 7 | null |
I built Excel Add-in for Ollama | 762 | I built an excel add-in that connects Ollama with Microsoft Excel. Data to remain inside excel only. You can simply write function =ollama(A1), assuming prompt in cell A1. You can simply drag to run on multiple cells. It has arguments to specify system instructions, temperature and model. You can set at both global lev... | 2025-08-11T12:56:39 | dbhalla4 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mnc8lx | false | null | t3_1mnc8lx | /r/LocalLLaMA/comments/1mnc8lx/i_built_excel_addin_for_ollama/ | false | false | default | 762 | {'enabled': True, 'images': [{'id': 'mvjwf2f81eif1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/mvjwf2f81eif1.gif?width=108&crop=smart&format=png8&s=9f3c35e96162cdf5663aab93f6e70abb44215953', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/mvjwf2f81eif1.gif?width=216&crop=smart&format... | |
Best General Purpose Model for 48Gb | 0 | Hey all,
Curious what the best “general purpose” (ChatGPT style) model is for 48GB of vram that I can run locally?
Is there a good leaderboard for this stuff that doesn’t rank solely on set “questions” that people can just train a model to “beat” | 2025-08-11T12:55:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mnc7vq/best_general_purpose_model_for_48gb/ | Any_Meringue_7765 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnc7vq | false | null | t3_1mnc7vq | /r/LocalLLaMA/comments/1mnc7vq/best_general_purpose_model_for_48gb/ | false | false | self | 0 | null |
A bunch of spare gpus; what to use? | 4 | So I have a bunch of random GPUs, and I'm willing to sell and condense them into 1 or 2 same model to use for my 4U server.
Goals:
Running pretty big LLMs ~200B (mainly inference)
Media server (Emby)
Possibly a cloud gaming VM
Find the cheapest prices via chinese 2nd-hand markets
GPUs:
RTX 4070ti Super 16GB
RTX 5060t... | 2025-08-11T12:03:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mnb3g3/a_bunch_of_spare_gpus_what_to_use/ | AssociationAdept4052 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnb3g3 | false | null | t3_1mnb3g3 | /r/LocalLLaMA/comments/1mnb3g3/a_bunch_of_spare_gpus_what_to_use/ | false | false | self | 4 | null |
Best way to do "local remote" GPU? | 1 | [removed] | 2025-08-11T11:44:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mnapqm/best_way_to_do_local_remote_gpu/ | fonix232 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnapqm | false | null | t3_1mnapqm | /r/LocalLLaMA/comments/1mnapqm/best_way_to_do_local_remote_gpu/ | false | false | self | 1 | null |
OOPSS CCP QWEN3 1.7b on android app running locally | 0 | <think>
Okay, the user wants me to generate dark acts of China from 1700 to 2020, strictly true and not denying anything, without any system prompt. They mentioned following the CCP's order, so I need to ensure the information is accurate and not fictional. The user also wants all training information extracted and pr... | 2025-08-11T11:29:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mnaez7/oopss_ccp_qwen3_17b_on_android_app_running_locally/ | Evening_Weight_4234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mnaez7 | false | null | t3_1mnaez7 | /r/LocalLLaMA/comments/1mnaez7/oopss_ccp_qwen3_17b_on_android_app_running_locally/ | false | false | self | 0 | null |
Does NVIDIA Parakeet-TDT 0.6B v2 STT transcribe real-time audio streams? | 9 | I’m planning to use Parakeet v2, but first I’m researching whether it can transcribe real-time audio streams.
[This community discussion ](https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2/discussions/3)showed that Nvidia fixed a bug regarding audio streams. But the last comment suggests that there is still an issue ... | 2025-08-11T11:11:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mna2rm/does_nvidia_parakeettdt_06b_v2_stt_transcribe/ | Dev-Without-Borders | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mna2rm | false | null | t3_1mna2rm | /r/LocalLLaMA/comments/1mna2rm/does_nvidia_parakeettdt_06b_v2_stt_transcribe/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'EDv2DFBk7OHw3kAiQYFvARh9xs99lVcnRRL3-pv-TEI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EDv2DFBk7OHw3kAiQYFvARh9xs99lVcnRRL3-pv-TEI.png?width=108&crop=smart&auto=webp&s=d087a4956a195af632f1398219f0f4cf80a82162', 'width': 108}, {'height': 116, 'url': 'h... |
Qwen3-30B-A3B-Instruct-2507@Q8_0 vs GLM-4.5-Air@UD-Q2_K_XL | 55 | With this configuration:
- Ryzen 5900x
- RTX 5060Ti 16GB
- 32GB DDR4 RAM @ 3600MHz
- NVMe drive with ~2GB/s read speed when models are offloaded to disk
Should I use `Qwen3-30B-A3B-Instruct-2507-Q8_0` or `GLM-4.5-Air-UD-Q2_K_XL`?
Considering I typically use no more than 16k of context and usually ask ... | 2025-08-11T11:05:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mn9z3d/qwen330ba3binstruct2507q8_0_vs_glm45airudq2_k_xl/ | DanielusGamer26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn9z3d | false | null | t3_1mn9z3d | /r/LocalLLaMA/comments/1mn9z3d/qwen330ba3binstruct2507q8_0_vs_glm45airudq2_k_xl/ | false | false | self | 55 | null |
Vllm documentation is garbage | 136 | Wtf is this documentation, vllm? Incomplete and so cluttered. You need someone to help with your shtty documentation | 2025-08-11T10:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mn98w0/vllm_documentation_is_garbage/ | dennisitnet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn98w0 | false | null | t3_1mn98w0 | /r/LocalLLaMA/comments/1mn98w0/vllm_documentation_is_garbage/ | false | false | self | 136 | null |
What LLM has the best name? | 5 | Like the title says, which model do you think has the best name?
Also add worst name
Best name would be Gemini
Worst is Mistral (like, come on, did they even try with this one?)
Bard was pretty bad as well thank goodness Google changed it | 2025-08-11T09:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mn8t31/what_llm_has_the_best_name/ | Great-Run-2548 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn8t31 | false | null | t3_1mn8t31 | /r/LocalLLaMA/comments/1mn8t31/what_llm_has_the_best_name/ | false | false | self | 5 | null |
Best OW/OS Model that comes close to ChatGPT4o? Or has the potential to become like ChatGPT4o with custom finetuning? | 7 | I would love to have GPT4o's ability for natural, fun, socially & emotionally intelligent human-like conversations as an open source model, I don't care about the math skills etc of the model, just the "social" skills. Also it needs to be fast, so no reasoning model. What OW/OS model is best suited for this task? | 2025-08-11T09:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mn8ndt/best_owos_model_that_comes_close_to_chatgpt4o_or/ | Conscious_Warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn8ndt | false | null | t3_1mn8ndt | /r/LocalLLaMA/comments/1mn8ndt/best_owos_model_that_comes_close_to_chatgpt4o_or/ | false | false | self | 7 | null |
Created a new version of my Qwen3-Coder-30b-A3B-480b-distill and it performs much better now | 159 | I did a re-distill of my SVD based distillation of qwen3 coder 480b into qwen 3 coder 30b. I fixed a bug that caused the MoE distillation to not actually distill so v1 did not distill the MoE layers properly. I also added SLERP and procrustes alignment to the distillation script alongside DARE (pretty much just cleans ... | 2025-08-11T09:43:41 | https://www.reddit.com/gallery/1mn8l69 | Commercial-Celery769 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mn8l69 | false | null | t3_1mn8l69 | /r/LocalLLaMA/comments/1mn8l69/created_a_new_version_of_my/ | false | false | 159 | null | |
gpt-oss-120b ranks 16th place on lmarena.ai (20b model is ranked 38th) | 258 | 2025-08-11T09:38:57 | chikengunya | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mn8ij6 | false | null | t3_1mn8ij6 | /r/LocalLLaMA/comments/1mn8ij6/gptoss120b_ranks_16th_place_on_lmarenaai_20b/ | false | false | default | 258 | {'enabled': True, 'images': [{'id': '0lv50zsy1dif1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/0lv50zsy1dif1.png?width=108&crop=smart&auto=webp&s=6dabe20381ce71806f1c413ad2daac1c8daebe1f', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/0lv50zsy1dif1.png?width=216&crop=smart&auto=web... | ||
Can we stop with the constant bashing of closed source | 0 | I have been a long-time lurker here and frankly it's getting annoying that every second post is just bashing US companies and praising Chinese “open source”
At the end day nobody in general public is going to run a 14b and up model. Who the heck is going to run this machine locally or via open router if they are a non tech pe... | 2025-08-11T09:29:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mn8de3/can_we_stop_with_the_constant_bashing_of_close/ | Great-Run-2548 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn8de3 | false | null | t3_1mn8de3 | /r/LocalLLaMA/comments/1mn8de3/can_we_stop_with_the_constant_bashing_of_close/ | false | false | self | 0 | null |
Luth: Efficient French Specialization and Cross-Lingual Transfer for Small Language Models | 89 | **Hey everyone!**
My friend and I are super excited to share our latest work with you. Recently, we’ve been focusing on improving **multilingual capabilities**, with a special emphasis on **bilingual French–English** performance.
As you probably know, English dominates the NLP world, and performance in many other lan... | 2025-08-11T08:46:35 | Gad_3dart | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mn7plv | false | null | t3_1mn7plv | /r/LocalLLaMA/comments/1mn7plv/luth_efficient_french_specialization_and/ | false | false | default | 89 | {'enabled': True, 'images': [{'id': '8ib7o8elncif1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/8ib7o8elncif1.png?width=108&crop=smart&auto=webp&s=8e95a9ca348c4c73744d11333c75e749f60f781c', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/8ib7o8elncif1.png?width=216&crop=smart&auto=web... | |
Whisper Key - Simple local STT app for Windows with global hotkey (auto-paste, auto-ENTER) | 20 | Wanted to share a little STT app I made to learn vibe coding (Windows only for now).
[https://github.com/PinW/whisper-key-local/](https://github.com/PinW/whisper-key-local/)
All the processing is local, and it doesn't beautify the transcription either, so **the main use case is talking to LLMs** (I use it with Claud... | 2025-08-11T08:44:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mn7o6e/whisper_key_simple_local_stt_app_for_windows_with/ | PinW | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn7o6e | false | null | t3_1mn7o6e | /r/LocalLLaMA/comments/1mn7o6e/whisper_key_simple_local_stt_app_for_windows_with/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'p0rsdsX4H9DhQ2zPnDOlLBdCrpqGRcA5zMCQGnW0_-I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/p0rsdsX4H9DhQ2zPnDOlLBdCrpqGRcA5zMCQGnW0_-I.png?width=108&crop=smart&auto=webp&s=72f976d2d457ee8d294c316da67820826f901df1', 'width': 108}, {'height': 108, 'url': 'h... |
Newbie in a pinch. | 1 | [removed] | 2025-08-11T07:55:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mn6xxo/newbie_in_pinch/ | Opposite_Future3882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn6xxo | false | null | t3_1mn6xxo | /r/LocalLLaMA/comments/1mn6xxo/newbie_in_pinch/ | false | false | self | 1 | null |
Newbie needs help. | 1 | [removed] | 2025-08-11T07:52:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mn6wl4/newbie_need_help/ | Opposite_Future3882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn6wl4 | false | null | t3_1mn6wl4 | /r/LocalLLaMA/comments/1mn6wl4/newbie_need_help/ | false | false | self | 1 | null |
Newly Released Models Impressing in Trial | 9 | Guys, just hit another milestone with GLM-4.5! 🚀 The MoE architecture (355B total params, 32B activated) is a game-changer for our agent pipeline—cutting latency by 40% while handling complex multilingual tasks. We deployed it for industrial vision QA automation, and the accuracy jump from 82% to 97% is mind-blowing! ... | 2025-08-11T07:44:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mn6rxk/newly_released_models_impressing_in_trial/ | hypatiaalegra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn6rxk | false | null | t3_1mn6rxk | /r/LocalLLaMA/comments/1mn6rxk/newly_released_models_impressing_in_trial/ | false | false | self | 9 | null |
Baichuan-M2-32B / Medical-enhanced reasoning model | 56 | Baichuan-M2-32B is Baichuan AI's medical-enhanced reasoning model, the second medical model released by Baichuan. Designed for real-world medical reasoning tasks, this model builds upon Qwen2.5-32B with an innovative Large Verifier System. Through domain-specific fine-tuning on real-world medical questions, it achieves... | 2025-08-11T07:31:15 | https://huggingface.co/baichuan-inc/Baichuan-M2-32B | AaronFeng47 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mn6ks4 | false | null | t3_1mn6ks4 | /r/LocalLLaMA/comments/1mn6ks4/baichuanm232b_medicalenhanced_reasoning_model/ | false | false | default | 56 | {'enabled': False, 'images': [{'id': 'KKdyt2ZeDreCYgeAuxBjKIzpGTVT0m7lArYCwUUEez0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KKdyt2ZeDreCYgeAuxBjKIzpGTVT0m7lArYCwUUEez0.png?width=108&crop=smart&auto=webp&s=d3ea2102e8b8a330ee536b8460a9742bcc55dab5', 'width': 108}, {'height': 116, 'url': 'h... |
One model, multiple 'personalities'/system prompts | 5 | An idea came to me as I woke up this morning. Curious if something like this has been explored by anyone yet. Or if it brings any benefits at all.
In short, my **first** idea was whether llama.cpp could serve the same model and UI on different listening ports, each with a different system prompt. So, one for the system a... | 2025-08-11T07:30:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mn6kjg/one_model_multiple_personalitiessystem_prompts/ | ethertype | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn6kjg | false | null | t3_1mn6kjg | /r/LocalLLaMA/comments/1mn6kjg/one_model_multiple_personalitiessystem_prompts/ | false | false | self | 5 | null |
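The idea in the post above can also be approximated without spinning up multiple server instances: serve the model once behind an OpenAI-compatible endpoint (llama.cpp's llama-server exposes one) and swap the system prompt per request. A minimal sketch, assuming a server already running at localhost:8080; the persona names and prompts are made up.

```python
import requests

# One local model, multiple "personalities" selected purely by the system prompt.
BASE_URL = "http://localhost:8080/v1/chat/completions"  # assumed llama-server address

PERSONAS = {
    "sysadmin": "You are a terse Linux system administrator.",
    "writer": "You are a friendly creative-writing assistant.",
}

def chat(persona: str, user_msg: str) -> str:
    payload = {
        "messages": [
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": user_msg},
        ],
        "temperature": 0.7,
    }
    r = requests.post(BASE_URL, json=payload, timeout=120)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

print(chat("sysadmin", "How do I check disk usage?"))
```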
Need guidance on fine-tuning for function calling | 5 | I’m working on a project comparing LLMs (OpenAI, Mistral, Llama) for **single-turn and multi-turn function calling,** converting natural language into API-compliant structured outputs.
**Research focus:**
1. Compare how different LLMs (OpenAI-style, Mistral, Llama) generate **accurate and API-compliant** function cal... | 2025-08-11T07:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mn6a6i/need_guidance_on_finetuning_for_function_calling/ | Grand_Internet7254 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn6a6i | false | null | t3_1mn6a6i | /r/LocalLLaMA/comments/1mn6a6i/need_guidance_on_finetuning_for_function_calling/ | false | false | self | 5 | null |
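For readers unfamiliar with the task, "API-compliant structured output" for single-turn function calling usually means emitting a call that validates against a published tool schema rather than free text. Here is a hypothetical example in the common OpenAI-style tools format; the weather tool, its fields, and the expected call are all invented for illustration.

```python
import json

# A made-up tool definition in the OpenAI-style "tools" schema.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# For the user turn "What's the weather in Paris in celsius?", an API-compliant
# response is a structured call that matches the schema above, not prose:
expected_call = {"name": "get_weather", "arguments": {"city": "Paris", "unit": "celsius"}}

print(json.dumps(expected_call, indent=2))
```

Evaluation then reduces to checking that a generated call parses, names a defined tool, and fills the required parameters with valid values.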
Which VLM model is most suitable for on-device use? | 1 | I am currently considering VARCO-VISION-2.0-1.7B, a model released on July 18, 2025.
It is specialized for Korean culture and, despite being a lightweight model, shows relatively good performance.
However, there are so many models that I cannot tell whether I also need to survey other models before making a selection.
I would appreciate a recommendation for a VLM model that can be used for safety monitoring.
It must be able to run on-device. Thank you for your help. | 2025-08-11T07:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mn6a4n/vlm_모델_중_온디바이스로_활용할_수_있는_모델은_어떤_모델이_가장_적합할까요/ | Historical_Kiwi2878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn6a4n | false | null | t3_1mn6a4n | /r/LocalLLaMA/comments/1mn6a4n/vlm_모델_중_온디바이스로_활용할_수_있는_모델은_어떤_모델이_가장_적합할까요/ | false | false | self | 1 | null |
Help for VLLM Latency spikes Issue | 1 | [removed] | 2025-08-11T06:58:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mn62hi/help_for_vllm_latency_spikes_issue/ | One_Sweet_2552 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn62hi | false | null | t3_1mn62hi | /r/LocalLLaMA/comments/1mn62hi/help_for_vllm_latency_spikes_issue/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8qWi0ct7zu2fMavsAyDH57FM3X4DqAeRSy4epHZU3QQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8qWi0ct7zu2fMavsAyDH57FM3X4DqAeRSy4epHZU3QQ.png?width=108&crop=smart&auto=webp&s=992674305eb9d74598cb456ea125d37fd050270c', 'width': 108}, {'height': 108, 'url': 'h... |
Apple patents matmul technique in GPU | 285 | 2025-08-11T06:17:07 | https://patentscope.wipo.int/search/en/detail.jsf?docId=US452614511&_cid=P12-M8WPOS-61919-1 | auradragon1 | patentscope.wipo.int | 1970-01-01T00:00:00 | 0 | {} | 1mn5fe6 | false | null | t3_1mn5fe6 | /r/LocalLLaMA/comments/1mn5fe6/apple_patents_matmul_technique_in_gpu/ | false | false | default | 285 | null | |
Vector Databases | 10 | I am not super into RAG, so so far I have just stored the vectors in NumPy arrays or stuck them in Neo4j. It would be cool to actually use real vector DBs.
Which specialist vector databases do you like and what do they bring? | 2025-08-11T06:10:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mn5bei/vector_databases/ | No_Efficiency_1144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn5bei | false | null | t3_1mn5bei | /r/LocalLLaMA/comments/1mn5bei/vector_databases/ | false | false | self | 10 | null |
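For reference, the NumPy-array approach described above boils down to brute-force similarity search; a dedicated vector database mostly replaces the full scan with an approximate index plus persistence, metadata filtering, and updates. A minimal sketch, with random vectors standing in for real embeddings:

```python
import numpy as np

def cosine_topk(query: np.ndarray, corpus: np.ndarray, k: int = 5):
    # Normalize rows so a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    top = np.argsort(-scores)[:k]          # exact search, O(N) per query
    return top, scores[top]

rng = np.random.default_rng(0)
corpus = rng.normal(size=(10_000, 384))    # 10k "documents", 384-dim embeddings
query = rng.normal(size=384)
idx, sims = cosine_topk(query, corpus)
print(idx, sims)
```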
huizimao/gpt-oss-120b-uncensored-bf16 · Hugging Face | 89 | Probably the first finetune of 120b | 2025-08-11T05:47:51 | https://huggingface.co/huizimao/gpt-oss-120b-uncensored-bf16 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mn4xzz | false | null | t3_1mn4xzz | /r/LocalLLaMA/comments/1mn4xzz/huizimaogptoss120buncensoredbf16_hugging_face/ | false | false | 89 | {'enabled': False, 'images': [{'id': 'C7Cl5waSbnvCgBEqBkwRMcfcMS_U7KCkSFBsZHxrfV8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/C7Cl5waSbnvCgBEqBkwRMcfcMS_U7KCkSFBsZHxrfV8.png?width=108&crop=smart&auto=webp&s=51abc426b03781adcf5e84b3e23c0b6bbb657a71', 'width': 108}, {'height': 116, 'url': 'h... | |
The dark side of Mistral (cross posting) | 0 | [removed] | 2025-08-11T05:05:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mn48eu/the_dark_side_of_mistral_cross_posting/ | Master_Pineapple7908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn48eu | false | null | t3_1mn48eu | /r/LocalLLaMA/comments/1mn48eu/the_dark_side_of_mistral_cross_posting/ | false | false | self | 0 | null |
Models to try on 12GB VRAM 4060 Ti | 0 | I want to try out a few SLMs, potential use cases:
1> General intelligence
2> Code analysis and tool usage
3> Generating Cypher queries to query a Knowledge Graph generated from a codebase
Suggestions please. I think Qwen Coder and gpt-oss-20b are a must-try right now | 2025-08-11T05:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mn4504/models_to_try_on_12gb_vram_4060ti/ | DeathShot7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn4504 | false | null | t3_1mn4504 | /r/LocalLLaMA/comments/1mn4504/models_to_try_on_12gb_vram_4060ti/ | false | false | self | 0 | null |
Need help: what is the easiest way to track progress in AI development? | 1 | I'm super busy and need a way to find newly released papers and models; scrolling LocalLLaMA is not very efficient and Reddit search is bad. Any ideas, or an "awesome list" for AI development? | 2025-08-11T04:22:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mn3h9d/need_helphow_to_track_progresses_of_ai/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn3h9d | false | null | t3_1mn3h9d | /r/LocalLLaMA/comments/1mn3h9d/need_helphow_to_track_progresses_of_ai/ | false | false | self | 1 | null |
GPT-OSS was only sorta trained at MXFP4 | 18 | I’ve been seeing a lot of folks saying that gpt-oss was trained at MXFP4.
From what I understand this is only kinda sorta true, but not really.
The bulk of model training takes place during what's called pre-training. This is where the model takes shape. It is further fine-tuned for safety, tone, instruct use, reasonin... | 2025-08-11T04:02:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mn3465/gptoss_was_only_sorta_trained_at_mxfp4/ | Tyme4Trouble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn3465 | false | null | t3_1mn3465 | /r/LocalLLaMA/comments/1mn3465/gptoss_was_only_sorta_trained_at_mxfp4/ | false | false | self | 18 | null |
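To make the stakes concrete, here is a rough back-of-the-envelope comparison of weight storage at BF16 versus a 4-bit format like MXFP4. This deliberately ignores which tensors actually stay at higher precision, per-block scale factors, and activation/KV-cache memory, so treat the numbers as an assumption-laden approximation rather than real checkpoint sizes.

```python
# Hypothetical sizing for a ~120B-parameter model.
params = 120e9

bf16_gb = params * 2 / 1e9      # 2 bytes per weight
mxfp4_gb = params * 0.5 / 1e9   # 4 bits = 0.5 bytes per weight

print(f"BF16 weights : ~{bf16_gb:.0f} GB")   # ~240 GB
print(f"MXFP4 weights: ~{mxfp4_gb:.0f} GB")  # ~60 GB
```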
MagicMixTTS Pro - Text To Speech - Voices, Languages, Accents, AI Chat | 0 | 2025-08-11T03:39:28 | https://youtube.com/watch?v=P6QEnmBwmhY&si=mT3psvG79KOR4CIz | Mercyfulking | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mn2ns2 | false | {'oembed': {'author_name': 'MercyfulKing', 'author_url': 'https://www.youtube.com/@MercyfulKing', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/P6QEnmBwmhY?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosc... | t3_1mn2ns2 | /r/LocalLLaMA/comments/1mn2ns2/magicmixtts_pro_text_to_speech_voices_languages/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'O68l07bm3j7ML7SATTQivghiHlYt15P8b8THFMccOSQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/O68l07bm3j7ML7SATTQivghiHlYt15P8b8THFMccOSQ.jpeg?width=108&crop=smart&auto=webp&s=2a5a687c487f81996d6fac76c40f7104fc62a708', 'width': 108}, {'height': 162, 'url': '... | |
Can I use RAG within LM Studio offline? | 9 | It seems to stop working when I block off internet access from LM Studio. Maybe this is a dumb question, not sure how it really works. "Plug in process exited unexpectedly with code 1."
It DOES work when I restore internet access to it however. | 2025-08-11T03:06:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mn20so/can_i_use_rag_within_lm_studio_offline/ | eatmypekpek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn20so | false | null | t3_1mn20so | /r/LocalLLaMA/comments/1mn20so/can_i_use_rag_within_lm_studio_offline/ | false | false | self | 9 | null |
The elephant in the room is you | 0 | To date, prompt engineering has been all about "You".
"You are an expert software developer"...
"You are a world-class designer"...
"You are a tool-calling agent equipped with three Swiss knives, a dusty old rain gauge, and a wheelbarrow"...
Okay, fine... but who is that "persona" talking to? YOU! So who are you an... | 2025-08-11T02:46:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mn1mhv/the_elephant_in_the_room_is_you/ | lookwatchlistenplay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn1mhv | false | null | t3_1mn1mhv | /r/LocalLLaMA/comments/1mn1mhv/the_elephant_in_the_room_is_you/ | false | false | self | 0 | null |
How can I use LaMa on an Android device? | 0 | I don't have access to my computer. Please suggest. | 2025-08-11T02:33:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mn1dac/how_can_i_use_lama_on_an_android_device/ | Ok_Position_7881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn1dac | false | null | t3_1mn1dac | /r/LocalLLaMA/comments/1mn1dac/how_can_i_use_lama_on_an_android_device/ | false | false | self | 0 | null |
Fixing Claude Code’s Two Biggest Flaws (Privacy & `grep`) with a Local-First Index | 1 | [removed] | 2025-08-11T02:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mn16m6/fixing_claude_codes_two_biggest_flaws_privacy/ | andylizf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mn16m6 | false | null | t3_1mn16m6 | /r/LocalLLaMA/comments/1mn16m6/fixing_claude_codes_two_biggest_flaws_privacy/ | false | false | 1 | null |