| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Intel Arc Pro B60 Dual 48G Turbo Maxsun GPU Pricing Revealed | 143 | Like many others, I was hyped for the dual GPU Intel Arc Pro B60, so I emailed Maxsun for a quote. Their US distributor hit me back with $5k per unit for 3 GPUs, or $4.5k each for 5+.
Sure, dual GPUs should cost more, but this is *10x* the rumored MSRP of the 24GB card. Space savings are nice, but not *that* nice.
RI... | 2025-06-30T22:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lokp88/intel_arc_pro_b60_dual_48g_turbo_maxsun_gpu/ | Airwalker19 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lokp88 | false | null | t3_1lokp88 | /r/LocalLLaMA/comments/1lokp88/intel_arc_pro_b60_dual_48g_turbo_maxsun_gpu/ | false | false | self | 143 | {'enabled': False, 'images': [{'id': 'hyq5VZeTQ6qEivgKR8UNVjykvZnnE2LgoajEOpxZ5bg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/QvtIX4qa99UPS9pW1xt2modSd4pW0ngywpBy3gUoSqo.jpg?width=108&crop=smart&auto=webp&s=667607aee3688d555db5e54a077c3cc6a667d70d', 'width': 108}, {'height': 284, 'url': '... |
A Meta-Framework for Self-Improving LLMs with Transparent Reasoning | 49 | > **Framework overview:**
> LLMs iteratively refine their own outputs—typically through a three‑phase cycle **draft → critique → revision**, repeat until convergence (all phases & stop rules are configurable).
> I started coding three weeks ago after an eight‑year break and zero professional dev experience.
---
**The... | 2025-06-30T21:59:34 | https://github.com/hankbesser/recursive-companion | henryb213 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lokkpc | false | null | t3_1lokkpc | /r/LocalLLaMA/comments/1lokkpc/a_metaframework_for_selfimproving_llms_with/ | false | false | default | 49 | null |
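The repository's internals aren't reproduced in this dump, so the following is only a minimal sketch of the draft → critique → revision cycle described above, written against a generic local OpenAI-compatible endpoint; the URL, model name, and convergence rule are illustrative assumptions.

```python
# Minimal sketch of a draft -> critique -> revision loop against a local
# OpenAI-compatible server (llama.cpp server, LM Studio, etc.). The endpoint,
# model name, and stop rule are assumptions, not the repo's actual code.
import requests

API = "http://localhost:8080/v1/chat/completions"  # assumed local endpoint

def ask(prompt: str) -> str:
    resp = requests.post(API, json={
        "model": "local-model",  # placeholder; most local servers ignore this
        "messages": [{"role": "user", "content": prompt}],
    })
    return resp.json()["choices"][0]["message"]["content"]

def refine(task: str, max_rounds: int = 3) -> str:
    draft = ask(f"Write a first draft for: {task}")
    for _ in range(max_rounds):
        critique = ask(f"Critique this draft and list concrete flaws:\n{draft}")
        if "no major issues" in critique.lower():  # naive convergence check
            break
        draft = ask("Revise the draft to address the critique.\n"
                    f"Draft:\n{draft}\n\nCritique:\n{critique}")
    return draft

print(refine("an essay on why local LLMs matter"))
```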
How do "AI detectors" work | 1 | Hey there, I'm doing research on how "AI detectors" work or if they are even real? they sound like snake oil to me... but do people actually pay for that? any insights on this would be highly appreciated! | 2025-06-30T21:50:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lokcrw/how_do_ai_detectors_work/ | BlueeWaater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lokcrw | false | null | t3_1lokcrw | /r/LocalLLaMA/comments/1lokcrw/how_do_ai_detectors_work/ | false | false | self | 1 | null |
Built a framework for LLMs to iteratively improve their own outputs through critique cycles | 1 | > **Framework overview**
> Built a framework where LLMs iteratively refine their own outputs—typically through a three‑phase cycle **draft → critique → revision**, repeat until convergence (all phases & stop rules are configurable).
> I started coding three weeks ago after an eight‑year break and zero professional dev ... | 2025-06-30T21:44:10 | https://github.com/hankbesser/recursive-companion | henryb213 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lok7ao | false | null | t3_1lok7ao | /r/LocalLLaMA/comments/1lok7ao/built_a_framework_for_llms_to_iteratively_improve/ | false | false | default | 1 | null |
F5-TTS installation error | 1 | 2025-06-30T21:42:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lok68d/f5tts_installation_error/ | TheHunter24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lok68d | false | null | t3_1lok68d | /r/LocalLLaMA/comments/1lok68d/f5tts_installation_error/ | false | false | 1 | null | ||
[News] Datacenter GPUs May Have an Astonishingly Short Lifespan of Only 1 to 3 Years | TrendForce News | 151 | 2025-06-30T21:40:05 | https://www.trendforce.com/news/2024/10/31/news-datacenter-gpus-may-have-an-astonishingly-short-lifespan-of-only-1-to-3-years/ | EasternBeyond | trendforce.com | 1970-01-01T00:00:00 | 0 | {} | 1lok3r2 | false | null | t3_1lok3r2 | /r/LocalLLaMA/comments/1lok3r2/news_datacenter_gpus_may_have_an_astonishingly/ | false | false | default | 151 | {'enabled': False, 'images': [{'id': '7cRnC2dFTB8VTd7qs9tim3BVul_HOXlhVu97BYC8mXw', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/7cRnC2dFTB8VTd7qs9tim3BVul_HOXlhVu97BYC8mXw.jpeg?width=108&crop=smart&auto=webp&s=1893491904b4e9ac390d2c75f06e424869ed5946', 'width': 108}, {'height': 110, 'url': '... | |
Tailscale Terms and Conditions Change | 1 | [removed] | 2025-06-30T21:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lok0k9/tailscale_terms_and_conditions_change/ | Comfortable_Town9383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lok0k9 | false | null | t3_1lok0k9 | /r/LocalLLaMA/comments/1lok0k9/tailscale_terms_and_conditions_change/ | false | false | self | 1 | null |
LLM model recommendation for poor HW | 0 | Hey,
I'm looking for a LLM to run on my shitty laptop (DELL UltraSharp U2422H, 24–32GB RAM, 4GB VRAM). The model should support tool use (like a calculator or `DuckDuckGoSearchRun()`), and decent reasoning ability would be a bonus, though I know that's probably pushing it with my hardware.
I’ve tried llama3.2:3b, wh... | 2025-06-30T21:28:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lojtq3/llm_model_recommendation_for_poor_hw/ | ReputationMindless32 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lojtq3 | false | null | t3_1lojtq3 | /r/LocalLLaMA/comments/1lojtq3/llm_model_recommendation_for_poor_hw/ | false | false | self | 0 | null |
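For the tool-use requirement mentioned above, a minimal sketch of invoking the DuckDuckGoSearchRun tool on its own is below (it assumes the `langchain-community` and `duckduckgo-search` packages are installed; wiring it into a 3B model's tool-calling loop depends on which framework is chosen).

```python
# Standalone call to the LangChain community DuckDuckGo tool the post mentions.
# Requires: pip install langchain-community duckduckgo-search
from langchain_community.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
print(search.invoke("best 3B coding model for 4GB VRAM"))
```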
[WIRED] Here Is Everyone Mark Zuckerberg Has Hired So Far for Meta’s ‘Superintelligence’ Team | 251 | 2025-06-30T21:19:51 | https://www.wired.com/story/mark-zuckerberg-welcomes-superintelligence-team/ | bllshrfv | wired.com | 1970-01-01T00:00:00 | 0 | {} | 1lojlrw | false | null | t3_1lojlrw | /r/LocalLLaMA/comments/1lojlrw/wired_here_is_everyone_mark_zuckerberg_has_hired/ | false | false | default | 251 | {'enabled': False, 'images': [{'id': 'hHtdtWWuX05qlxJIIZFgrRzaMxdrmlIQ8OiqTPog1_w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/hHtdtWWuX05qlxJIIZFgrRzaMxdrmlIQ8OiqTPog1_w.jpeg?width=108&crop=smart&auto=webp&s=d177b8cff8728cb615ad8ecfe9832d31a4883fc0', 'width': 108}, {'height': 113, 'url': '... | |
OpenSource CLI Agent with Local models. | 8 | Hey everyone, I'm building this CLI coding agent right now. My big goal is to turn it into a fully autonomous bot that runs on a server, handles error reports, crash logs, and random issues, then tracks them down and fixes everything on its own.
For the moment, it's just a basic CLI tool packed with features for deali... | 2025-06-30T21:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lojgxl/opensource_cli_agent_with_local_models/ | x8ko_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lojgxl | false | null | t3_1lojgxl | /r/LocalLLaMA/comments/1lojgxl/opensource_cli_agent_with_local_models/ | true | false | spoiler | 8 | {'enabled': False, 'images': [{'id': 'XOM6yqQSk8OHCQafjKsMt_it6ey7fyVrTrYARfC2cbc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XOM6yqQSk8OHCQafjKsMt_it6ey7fyVrTrYARfC2cbc.png?width=108&crop=smart&auto=webp&s=3dd179acc38acfd732283181a8e51635eaa3437b', 'width': 108}, {'height': 108, 'url': 'h... |
Gemma-3n VRAM usage | 9 | Hello fellow redditors,
I am trying to run Gemma-3n-E2B and E4B, advertised as 2GB-3GB VRAM models. However, I couldn't run E4B due to a torch out-of-memory error, and when I ran E2B it took 10GB and ran out of memory after a few requests.
I am trying to understand, is there a way to run these models really on 2gb-3gb V... | 2025-06-30T21:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lojd3e/gemma3n_vram_usage/ | el_pr3sid3nt3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lojd3e | false | null | t3_1lojd3e | /r/LocalLLaMA/comments/1lojd3e/gemma3n_vram_usage/ | false | false | self | 9 | null |
arXiv2Docker: Computational Reproducibility with the ExperimentOps Agent | 10 | We've all been there: spend a morning setting up, only to find out it's not gonna work for your application.
From [SUPER](https://arxiv.org/pdf/2409.07440):
*As a recent study shows (Storks et al., 2023), both novice and advanced researchers find the challenge of "setting up the code base" to be the most difficult part ... | 2025-06-30T20:57:06 | remyxai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1loj134 | false | null | t3_1loj134 | /r/LocalLLaMA/comments/1loj134/arxiv2docker_computational_reproducibility_with/ | false | false | default | 10 | {'enabled': True, 'images': [{'id': 'rak71t31n4af1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/rak71t31n4af1.png?width=108&crop=smart&auto=webp&s=be04a1b682b02ed41a25715706c136da1291adb2', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/rak71t31n4af1.png?width=216&crop=smart&auto=web... | |
Chat UI Framwork | 1 | Hi folks I am trying to start a new project and looking for chat UI frameworks. What are the options?
Thanks | 2025-06-30T20:52:28 | https://www.reddit.com/r/LocalLLaMA/comments/1loiwzz/chat_ui_framwork/ | __lawless | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loiwzz | false | null | t3_1loiwzz | /r/LocalLLaMA/comments/1loiwzz/chat_ui_framwork/ | false | false | self | 1 | null |
Alice AI | 1 | [removed] | 2025-06-30T20:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/1loit1o/alice_ai/ | JaceCarter1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loit1o | false | null | t3_1loit1o | /r/LocalLLaMA/comments/1loit1o/alice_ai/ | false | false | self | 1 | null |
How to run Hunyuan-A13B on a RTX 5090 / Blackwell ? | 1 | Hi folks!
Since the launch of Hunyuan-A13B, I’ve been struggling to get it running on an RTX 5090 with 32 GB of RAM. The official Docker images from Tencent don’t seem to be compatible with the Blackwell architecture. I even tried building vLLM from source via `git clone`, but no luck either.
Any hints? | 2025-06-30T20:16:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lohzzj/how_to_run_hunyuana13b_on_a_rtx_5090_blackwell/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lohzzj | false | null | t3_1lohzzj | /r/LocalLLaMA/comments/1lohzzj/how_to_run_hunyuana13b_on_a_rtx_5090_blackwell/ | false | false | self | 1 | null |
Gemma3 says its palm 2.xx and gemini pro 1.5... | 0 | What is this? I run `ollama run gemma3:27b-it-qat --verbose` and it states that it's Gemini Pro or PaLM 2.x.
Is this normal behaviour? Loaded the model from Ollama.
`>>> what model are you?`
`You're asking a great question – it’s good to check!`
`I am currently running on the **Gemini Pro 1.5 Pro** model.`
`Here's a bre... | 2025-06-30T20:00:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lohl90/gemma3_says_its_palm_2xx_and_gemini_pro_15/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lohl90 | false | null | t3_1lohl90 | /r/LocalLLaMA/comments/1lohl90/gemma3_says_its_palm_2xx_and_gemini_pro_15/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'uy4zMxRm7GciiY7sj6GnmTKVxfUz3ywJMcuk_LAJkao', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uy4zMxRm7GciiY7sj6GnmTKVxfUz3ywJMcuk_LAJkao.png?width=108&crop=smart&auto=webp&s=c97948fa4e00d727f7a8621c3a9954081007c45e', 'width': 108}, {'height': 113, 'url': 'h... |
Need open source Vlm for Trading chart analysis | 0 | Need open source Vlm for Trading chart analysis
comment the name of models that are on Huggingface or GitHub. | 2025-06-30T18:50:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lofsxc/need_open_source_vlm_for_trading_chart_analysis/ | Key-Mortgage-1515 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lofsxc | false | null | t3_1lofsxc | /r/LocalLLaMA/comments/1lofsxc/need_open_source_vlm_for_trading_chart_analysis/ | false | false | self | 0 | null |
Image input vs text input cost analysis | 0 | The way OpenAI calculates the token cost is based on how many 512 by 512 tiles can be accommodated in the image; each tile has a set number of tokens, and there is a fixed cost as well. Overall, a 1024 by 1024 image comes to around 765 tokens. Given the fact that LLMs are now inherently multimodal, models can take prompt a... | 2025-06-30T18:30:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lof9k8/image_input_vs_text_input_cost_analysis/ | Optimalutopic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lof9k8 | false | null | t3_1lof9k8 | /r/LocalLLaMA/comments/1lof9k8/image_input_vs_text_input_cost_analysis/ | false | false | self | 0 | null |
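A back-of-envelope sketch of the tile-based cost described above; the constants follow OpenAI's published high-detail formula (a fixed per-image cost plus a per-512px-tile cost after downscaling the short side to at most 768 px) and should be treated as assumptions that can change.

```python
# Back-of-envelope sketch of the tile-based image token cost the post describes.
import math

BASE_TOKENS = 85       # fixed cost per image
TOKENS_PER_TILE = 170  # cost per 512x512 tile

def image_tokens(width: int, height: int) -> int:
    # Downscale so the shorter side is at most 768 px (the long-side cap is
    # omitted for brevity; it only matters for very large images).
    scale = min(1.0, 768 / min(width, height))
    w, h = width * scale, height * scale
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return BASE_TOKENS + TOKENS_PER_TILE * tiles

print(image_tokens(1024, 1024))  # -> 765, matching the figure in the post
```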
Run any LLM locally on your Mac in less than 2 mins | 0 | 2025-06-30T18:02:17 | https://www.dsdev.in/run-any-llm-locally-on-your-mac-in-less-than-2-mins | phantom69_ftw | dsdev.in | 1970-01-01T00:00:00 | 0 | {} | 1loejea | false | null | t3_1loejea | /r/LocalLLaMA/comments/1loejea/run_any_llm_locally_on_your_mac_in_less_than_2/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'FuBlmWr8dSvljH3h6bCp8NR83I3zLNVWRwtXqRNgQOk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/FuBlmWr8dSvljH3h6bCp8NR83I3zLNVWRwtXqRNgQOk.png?width=108&crop=smart&auto=webp&s=445fc43faf5bbc57cd9833134eaff065fc485bad', 'width': 108}, {'height': 216, 'url': '... | |
Trying to build an AI assistant for an anesthesiologist using a private database – any advice? | 2 | Hi everyone,
I’ve been trying to help an anesthesiologist who works across a wide range of surgical specialties — to build a private AI assistant for clinical planning.
Here’s the idea:
He wants to input both patient-specific data (comorbidities, meds, allergies, previous surgeries, etc.) and surgery-specific data ... | 2025-06-30T17:34:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lodswf/trying_to_build_an_ai_assistant_for_an/ | lucasmed97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lodswf | false | null | t3_1lodswf | /r/LocalLLaMA/comments/1lodswf/trying_to_build_an_ai_assistant_for_an/ | false | false | self | 2 | null |
ERNIE 4.5 Collection from Baidu | 130 | 2025-06-30T17:27:55 | https://ernie.baidu.com/blog/posts/ernie4.5/ | AppearanceHeavy6724 | ernie.baidu.com | 1970-01-01T00:00:00 | 0 | {} | 1lodmc6 | false | null | t3_1lodmc6 | /r/LocalLLaMA/comments/1lodmc6/ernie_45_collection_from_baidu/ | false | false | default | 130 | null | |
A great collection of tutorials on how to create production-ready GenAI agents | 1 | [removed] | 2025-06-30T16:59:53 | https://www.reddit.com/r/LocalLLaMA/comments/1locv64/a_great_collection_of_tutorials_on_how_to_create/ | Ok-Bid-1264 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1locv64 | false | null | t3_1locv64 | /r/LocalLLaMA/comments/1locv64/a_great_collection_of_tutorials_on_how_to_create/ | false | false | self | 1 | null |
Open Source AI Editor: First Milestone | 209 | Let me know if you have any questions about open sourcing. Happy to answer.
vscode pm here | 2025-06-30T16:52:52 | https://code.visualstudio.com/blogs/2025/06/30/openSourceAIEditorFirstMilestone | isidor_n | code.visualstudio.com | 1970-01-01T00:00:00 | 0 | {} | 1lococc | false | null | t3_1lococc | /r/LocalLLaMA/comments/1lococc/open_source_ai_editor_first_milestone/ | false | false | default | 209 | {'enabled': False, 'images': [{'id': '7W6FU5na7gC1vKxg3pWb3QkD0a8T5GyzeaLh8U3roNc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7W6FU5na7gC1vKxg3pWb3QkD0a8T5GyzeaLh8U3roNc.png?width=108&crop=smart&auto=webp&s=13b3e96ac8483f7b499c5f5181796b3a28d2e746', 'width': 108}, {'height': 121, 'url': 'h... |
Upcoming Coding Models? | 1 | [removed] | 2025-06-30T16:36:46 | https://www.reddit.com/r/LocalLLaMA/comments/1loc97j/upcoming_coding_models/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loc97j | false | null | t3_1loc97j | /r/LocalLLaMA/comments/1loc97j/upcoming_coding_models/ | false | false | self | 1 | null |
Building a Star Wars HK-47 Smart Assistant – Need Help Choosing LLM + Voice Model | 1 | [removed] | 2025-06-30T16:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1loc3wn/building_a_star_wars_hk47_smart_assistant_need/ | GrandMoffJake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loc3wn | false | null | t3_1loc3wn | /r/LocalLLaMA/comments/1loc3wn/building_a_star_wars_hk47_smart_assistant_need/ | false | false | self | 1 | null |
n8n ,proxmox ,docker and Google API. | 11 | hi,
trying to use the Google API in n8n (in a Proxmox container) and LM Studio (another machine on the same LAN), but it won't take my LAN IP address. n8n gives the localhost value by default. I know there is a trick with docker, like https://local.docker/v1, but it works only if both n8n and LM Studio run on the same machin... | 2025-06-30T16:26:28 | Able-Consequence8872 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lobzkr | false | null | t3_1lobzkr | /r/LocalLLaMA/comments/1lobzkr/n8n_proxmox_docker_and_google_api/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': '02pteeydc3af1', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/02pteeydc3af1.png?width=108&crop=smart&auto=webp&s=749f00726bd1c83dcd9d5db19e47dfcdce277c1d', 'width': 108}, {'height': 87, 'url': 'https://preview.redd.it/02pteeydc3af1.png?width=216&crop=smart&auto=webp... |
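One way to separate networking problems from n8n configuration is to confirm the LM Studio server is reachable from another machine on the LAN; the sketch below assumes LM Studio's default server port 1234 and a hypothetical 192.168.x.x address. Note that from inside a container, host.docker.internal points at the Docker host itself, so the LM Studio machine's own LAN IP is what belongs in the base URL.

```python
# Quick reachability check for an LM Studio OpenAI-compatible server from
# another machine (or from inside the n8n container). The IP is hypothetical;
# port 1234 is LM Studio's default local-server port.
import requests

base_url = "http://192.168.1.50:1234/v1"  # replace with the LM Studio host's LAN IP

resp = requests.get(f"{base_url}/models", timeout=5)
print(resp.status_code, resp.json())
# If this works from the Proxmox host but fails inside the container,
# the container's networking (not n8n) is what needs fixing.
```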
Upcoming Coding Models? | 50 | Based on past threads from this sub, I see that below coding models are coming.
1. Qwen3 Coder - [Recent thread](https://www.reddit.com/r/LocalLLaMA/comments/1lm92se/qwen3_coder_soon/)
2. Deep Cogito - Preview models there
3. Polaris - Preview models there
4. Granite releasing any new coding models? Preview (General) ... | 2025-06-30T16:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lobyx5/upcoming_coding_models/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lobyx5 | false | null | t3_1lobyx5 | /r/LocalLLaMA/comments/1lobyx5/upcoming_coding_models/ | false | false | self | 50 | null |
Language Model Maker | 0 | This is a question,
If you're familiar with RPG maker, you may guess where this is going.
Is there an LM Maker, even a very basic one, shared as source code so people can create their own LMs (small, medium, large, etc.)?
In case existing LLMs all end up commercial | 2025-06-30T16:23:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lobx8p/language_model_maker/ | GustaveVonZarovich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lobx8p | false | null | t3_1lobx8p | /r/LocalLLaMA/comments/1lobx8p/language_model_maker/ | false | false | self | 0 | null |
MCP tool development -- repeated calls with no further processing | 0 | I'm trying to make a fetch_url tool using MCP:
[https://github.com/modelcontextprotocol](https://github.com/modelcontextprotocol)
Setup: LMStudio + Qwen32b / Gemma27b / Gemma12b / DeepSeek R1 (Qwen3 distil)
When I ask the model to get a URL, it successfully calls the fetch\_url function (and gets a correct respons... | 2025-06-30T16:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lobqvc/mcp_tool_development_repeated_calls_with_no/ | nuketro0p3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lobqvc | false | null | t3_1lobqvc | /r/LocalLLaMA/comments/1lobqvc/mcp_tool_development_repeated_calls_with_no/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8lBwgtz0GifbCTXgYXAe7KwHqB6r9d6n6NtOb1minSs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/8lBwgtz0GifbCTXgYXAe7KwHqB6r9d6n6NtOb1minSs.png?width=108&crop=smart&auto=webp&s=3ad6a7b78d99e504bc515c83cbb4a5e75372c770', 'width': 108}, {'height': 216, 'url': '... |
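For reference, a minimal sketch of a fetch_url tool built with the MCP Python SDK's FastMCP helper and served over STDIO is below; the truncation limit and user-agent are arbitrary choices, not part of the spec.

```python
# Minimal sketch of a fetch_url tool with the official MCP Python SDK's
# FastMCP helper, served over STDIO.
from urllib.request import Request, urlopen

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fetcher")

@mcp.tool()
def fetch_url(url: str, max_chars: int = 4000) -> str:
    """Fetch a URL and return the (truncated) body as text."""
    req = Request(url, headers={"User-Agent": "mcp-fetch-example"})
    with urlopen(req, timeout=15) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return body[:max_chars]

if __name__ == "__main__":
    mcp.run()  # STDIO transport by default
```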
I built an open source Ollama MCP client | 1 | [removed] | 2025-06-30T15:56:04 | https://v.redd.it/yary5jis63af1 | matt8p | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lob78q | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yary5jis63af1/DASHPlaylist.mpd?a=1753890980%2CMjY0MWQ4OWQ4OGJjYWQzYTQzMWIzOWU0YzUxZTRhYTMxOGY2NzFmMGJjMTliZmFkMWY0OGYxZTQ4ZWNiYzQ3Mg%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/yary5jis63af1/DASH_1080.mp4?source=fallback', 'h... | t3_1lob78q | /r/LocalLLaMA/comments/1lob78q/i_built_an_open_source_ollama_mcp_client/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dGRqNDZtaXM2M2FmMVKRezXehY92dUtTzbH_3JS1V5vvyONaJCE-Pduktd-G', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/dGRqNDZtaXM2M2FmMVKRezXehY92dUtTzbH_3JS1V5vvyONaJCE-Pduktd-G.png?width=108&crop=smart&format=pjpg&auto=webp&s=a6bfbad3ff9c699a988b81c291ba4f406838d... | |
Huawei opensources two LLMs without US tech | 1 | [removed] | 2025-06-30T15:53:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lob4lz/huawei_opensources_two_llms_without_us_tech/ | LLMLearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lob4lz | false | null | t3_1lob4lz | /r/LocalLLaMA/comments/1lob4lz/huawei_opensources_two_llms_without_us_tech/ | false | false | self | 1 | null |
Huawei opensources two LLMs without US tech | 1 | [removed] | 2025-06-30T15:51:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lob34u/huawei_opensources_two_llms_without_us_tech/ | LLMLearner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lob34u | false | null | t3_1lob34u | /r/LocalLLaMA/comments/1lob34u/huawei_opensources_two_llms_without_us_tech/ | false | false | self | 1 | null |
What Inference Server do you use to host TTS Models? Looking for someone who has used Triton. | 2 | # All the examples I have are highly unoptimized -
For example, Modal Labs uses FastAPI - [https://modal.com/docs/examples/chatterbox_tts](https://modal.com/docs/examples/chatterbox_tts). BentoML also uses a FastAPI-like service - [https://www.bentoml.com/blog/deploying-a-text-to-speech-application-with-bentoml](https... | 2025-06-30T15:49:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lob0uu/what_inference_server_do_you_use_to_host_tts/ | tempNull | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lob0uu | false | null | t3_1lob0uu | /r/LocalLLaMA/comments/1lob0uu/what_inference_server_do_you_use_to_host_tts/ | false | false | self | 2 | null |
Drafted Llama as an enhanced parser for interactive fiction puzzles/games | 12 | Using Llama as a way to expand the types of games that can be played within interactive fiction, such as creating non-deterministic rubrics to grade puzzle solutions, allowing building/crafting with a wide range of objects.combinatorial possibilities, and enabling sentiment and emotion-based responses with NPCs as a wa... | 2025-06-30T15:32:42 | Fit-Lengthiness-4747 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1loal9v | false | null | t3_1loal9v | /r/LocalLLaMA/comments/1loal9v/drafted_llama_as_an_enhanced_parser_for/ | false | false | default | 12 | {'enabled': True, 'images': [{'id': 'wg741dwa23af1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/wg741dwa23af1.jpeg?width=108&crop=smart&auto=webp&s=e05157a169d43a815d3fd97f0751b289fb892c66', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/wg741dwa23af1.jpeg?width=216&crop=smart&auto=... | |
[Day 6/50] Building a Small Language Model from Scratch - What Is Positional Embedding and Why Does It Matter? | 43 | 2025-06-30T15:23:52 | https://www.reddit.com/r/LocalLLaMA/comments/1load8a/day_650_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1load8a | false | null | t3_1load8a | /r/LocalLLaMA/comments/1load8a/day_650_building_a_small_language_model_from/ | false | false | 43 | null | ||
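The post body isn't included in this dump; for reference, a sketch of the classic sinusoidal positional embedding from "Attention Is All You Need" is below, where PE(pos, 2i) = sin(pos / 10000^(2i/d)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d)).

```python
# Classic sinusoidal positional embedding, dependency-free for illustration.
import math

def positional_embedding(seq_len: int, d_model: int) -> list[list[float]]:
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

print(positional_embedding(4, 8)[1][:4])  # first few dims of position 1
```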
Query | 0 | *I am a student who just cleared high school and will be joining college this year. I have an interest in pursuing coding and AI/ML.*
*Will a base MacBook Air M4 be enough for ML in my 4 years of college?*
*Will also be getting a external SSD with that* | 2025-06-30T14:55:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lo9mcm/query/ | Sudden-Holiday-3582 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo9mcm | false | null | t3_1lo9mcm | /r/LocalLLaMA/comments/1lo9mcm/query/ | false | false | self | 0 | null |
[2506.21734] Hierarchical Reasoning Model | 25 | Abstract:
Reasoning, the process of devising and executing complex goal-oriented action sequences, remains a critical challenge in AI. Current large language models (LLMs) primarily employ Chain-of-Thought (CoT) techniques, which suffer from brittle task decomposition, extensive data requirements, and high latency. In... | 2025-06-30T13:54:40 | https://arxiv.org/abs/2506.21734 | absolooot1 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1lo84yj | false | null | t3_1lo84yj | /r/LocalLLaMA/comments/1lo84yj/250621734_hierarchical_reasoning_model/ | false | false | default | 25 | null |
Ollama and llama3.2-vision broken? | 1 | I’ve been using this combo successfully to recognize handwritten text.
After updating Ollama, llama3.2-vision goes into an endless hallucination loop despite many attempts to modify the prompt.
I’ve tried doing a fresh install of Ollama, even older installs that I retained. Also increasing the context size, clearing cont... | 2025-06-30T12:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lo6gc0/ollama_and_llama32vision_broken/ | Significant_Post8359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo6gc0 | false | null | t3_1lo6gc0 | /r/LocalLLaMA/comments/1lo6gc0/ollama_and_llama32vision_broken/ | false | false | self | 1 | null |
[Feedback Wanted] Megan AI – Local LLMs + UE5 MetaHumans (Kickstarter live • Steam EA 7-31) | 1 | [removed] | 2025-06-30T12:35:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lo6dtg/feedback_wanted_megan_ai_local_llms_ue5/ | ChrisZavadil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo6dtg | false | null | t3_1lo6dtg | /r/LocalLLaMA/comments/1lo6dtg/feedback_wanted_megan_ai_local_llms_ue5/ | false | false | 1 | null | |
Got all the hardware, Got my dataset, why does it take soo long to learn how to fine-tune? | 1 | So, I think I have honed in on my method of fine-tuning my local LLM with local fine-tuning. After cmd and loading python parameters utilizing GPT/Gemini to bro-code my way to being 90% there, I always failed. So, I finally looked up and saw all the different ways to fine-tune a dataset, and tried unsloth, but was uns... | 2025-06-30T12:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lo61eb/got_all_the_hardware_got_my_dataset_why_does_it/ | EasyConference4177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo61eb | false | null | t3_1lo61eb | /r/LocalLLaMA/comments/1lo61eb/got_all_the_hardware_got_my_dataset_why_does_it/ | false | false | self | 1 | null |
Been experimenting with “agent graphs” for local LLMs — basically turning thoughts into modular code | 4 | So I’ve been messing with this concept I’m calling agentic knowledge graphs, basically, instead of writing prompts one by one, you define little agents that represent aspects of your thinking. Then you connect them with logic and memory.
Each node in the graph is a persona or function (like a writing coach, journal cr... | 2025-06-30T12:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lo5xyx/been_experimenting_with_agent_graphs_for_local/ | KonradFreeman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo5xyx | false | null | t3_1lo5xyx | /r/LocalLLaMA/comments/1lo5xyx/been_experimenting_with_agent_graphs_for_local/ | false | false | self | 4 | null |
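The author's implementation isn't shown here, but one illustrative way to represent such a persona graph is below: nodes carry system prompts, edges say whose output feeds whom, and execution just walks the graph in dependency order.

```python
# Illustrative persona-graph structure, not the author's actual implementation.
agent_graph = {
    "nodes": {
        "journal_reader": {"system": "Summarize the user's journal entry."},
        "writing_coach":  {"system": "Critique the summary's clarity and tone."},
        "editor":         {"system": "Rewrite the entry using the critique."},
    },
    "edges": [
        ("journal_reader", "writing_coach"),
        ("writing_coach", "editor"),
    ],
}

def topological_order(graph: dict) -> list[str]:
    # Naive ordering for a small DAG: repeatedly pick nodes whose inputs are done.
    pending = dict(graph["nodes"])
    done: list[str] = []
    while pending:
        for name in list(pending):
            deps = [src for src, dst in graph["edges"] if dst == name]
            if all(d in done for d in deps):
                done.append(name)
                del pending[name]
    return done

print(topological_order(agent_graph))  # ['journal_reader', 'writing_coach', 'editor']
```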
What is the current best local coding model with <= 4B parameters? | 34 | Hello, I am looking for <= 4B coding models. I realize that none of these will be practical for now; I'm just looking for some to experiment with.
Here is what I found so far:
- Menlo / Jan-nano — 4.02 B (Not really coding but I expect it to be better than others)
- Gemma — 4 B / 2 B
- Qwen 3 — 4 B / 0.6 B
- Phi-4 Mini —... | 2025-06-30T12:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lo5vnf/what_is_the_current_best_local_coding_model_with/ | Wooden-Key751 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo5vnf | false | null | t3_1lo5vnf | /r/LocalLLaMA/comments/1lo5vnf/what_is_the_current_best_local_coding_model_with/ | false | false | self | 34 | null |
Has anyone tried running 2 AMD Ryzen™ AI Max+ 395 in parallel? | 15 | Hi everyone,
Some models require more VRAM to run. I was thinking of getting 2 AMD Ryzen™ AI Max+ 395 and trying to run them in parallel. I wonder if anyone has tried this? Does anyone have any information?
Have a nice one:) | 2025-06-30T12:09:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lo5uz6/has_anyone_tried_running_2_amd_ryzen_ai_max_395/ | orkutmuratyilmaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo5uz6 | false | null | t3_1lo5uz6 | /r/LocalLLaMA/comments/1lo5uz6/has_anyone_tried_running_2_amd_ryzen_ai_max_395/ | false | false | self | 15 | null |
Been experimenting with “agent graphs” for local LLMs — basically letting your thoughts become code | 1 | So I’ve been messing with this concept I’m calling agentic knowledge graphs, basically, instead of writing prompts one by one, you define little agents that represent aspects of your thinking. Then you connect them with logic and memory.
Each “node” in the graph is a persona or function (like a writing coach, journal ... | 2025-06-30T12:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lo5qo3/been_experimenting_with_agent_graphs_for_local/ | KonradFreeman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo5qo3 | false | null | t3_1lo5qo3 | /r/LocalLLaMA/comments/1lo5qo3/been_experimenting_with_agent_graphs_for_local/ | false | false | self | 1 | null |
Ollama or VLLM? | 0 | Ollama is easy to use, has a lot of models, uses GPU and CPU if needed, and can run, test, and serve so many models with a few commands.
vLLM is more complex, more commands to type, more limitations, not as popular.
Let's say there is an office of 10 to 50 people and they want a custom AI; which one would you implement and wh... | 2025-06-30T11:09:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lo4qxf/ollama_or_vllm/ | Careful-State-854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo4qxf | false | null | t3_1lo4qxf | /r/LocalLLaMA/comments/1lo4qxf/ollama_or_vllm/ | false | false | self | 0 | null |
Any uncensored LLM that can beat Dolphine3.0-Qwen2.5 | 1 | [removed] | 2025-06-30T10:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lo4hfx/any_uncensored_llm_that_can_beat_dolphine30qwen25/ | Puzzled_Library6773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo4hfx | false | null | t3_1lo4hfx | /r/LocalLLaMA/comments/1lo4hfx/any_uncensored_llm_that_can_beat_dolphine30qwen25/ | false | false | self | 1 | null |
Which would be the best uncensored model to run on 4gb Vram laptop using LMStudio? | 0 | Hi, just installed LMStudio, don't know which model to download, my requirement is to learn about some stuff that CHATGPT wouldn't help me with. Guide me please. | 2025-06-30T10:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lo42x8/which_would_be_the_best_uncensored_model_to_run/ | CRESCENTNINJA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo42x8 | false | null | t3_1lo42x8 | /r/LocalLLaMA/comments/1lo42x8/which_would_be_the_best_uncensored_model_to_run/ | false | false | self | 0 | null |
From the trenches, running TinyLlama-1.1B-Chat-v0.1 on iPhone | 21 | Just sharing my efforts, really, and thank you for reading in advance.
I am working on an LLM engine nicknamed Nyra in rust and c++20.
So I managed to do local LLM inference on iPhone in 70 ms at 15 TPS (could be massively improved once Metal is in motion).
One of the images shows that previously I optimized... | 2025-06-30T10:21:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lo3y10/from_the_trenches_running_tinyllama11bchatv01_on/ | rvnllm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo3y10 | false | {'oembed': {'author_name': 'Ervin Bosenbacher', 'author_url': 'https://www.youtube.com/@rvnllm', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/6ZMplYIsTyw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco... | t3_1lo3y10 | /r/LocalLLaMA/comments/1lo3y10/from_the_trenches_running_tinyllama11bchatv01_on/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'dt9-WMKabCLh-xNeXHwgVKpGxeuZavoRA4i4gCf5uKw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dt9-WMKabCLh-xNeXHwgVKpGxeuZavoRA4i4gCf5uKw.jpeg?width=108&crop=smart&auto=webp&s=85b9916ff8bc2b2d28b55404aba5b6317ac9022b', 'width': 108}, {'height': 162, 'url': '... | |
Arcee.ai: Releasing Five Open-Weights Models: SuperNova 70B, Virtuoso-Large 72B, Caller 32B, GLM-4-32B-Base-32K, and Homunculus 12B | 6 | 2025-06-30T10:10:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lo3rx5/arceeai_releasing_five_openweights_models/ | Nunki08 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo3rx5 | false | null | t3_1lo3rx5 | /r/LocalLLaMA/comments/1lo3rx5/arceeai_releasing_five_openweights_models/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'yYf7GTwtgQMu4mc_aqMvvPgB2_XpclRMnxWQCa0pvNk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/yYf7GTwtgQMu4mc_aqMvvPgB2_XpclRMnxWQCa0pvNk.png?width=108&crop=smart&auto=webp&s=eee168a93406e1c26b5afe7150cfe597cee3df0c', 'width': 108}, {'height': 121, 'url': 'h... | ||
Built on nano-vLLM foundation: enterprise features for real-world applications | 1 | [removed] | 2025-06-30T10:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lo3oa3/built_on_nanovllm_foundation_enterprise_features/ | CodeStackDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo3oa3 | false | null | t3_1lo3oa3 | /r/LocalLLaMA/comments/1lo3oa3/built_on_nanovllm_foundation_enterprise_features/ | false | false | self | 1 | null |
Ollama to llama.cpp: system prompt? | 3 | I’m considering transitioning from Ollama llama.cpp. Does llama.cpp have an equivalent feature to Ollama’s modelfiles, whereby you can bake a system prompt into the model itself before calling it from a Python script (or wherever)? | 2025-06-30T09:59:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lo3l7w/ollama_to_llamacpp_system_prompt/ | psychonomy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo3l7w | false | null | t3_1lo3l7w | /r/LocalLLaMA/comments/1lo3l7w/ollama_to_llamacpp_system_prompt/ | false | false | self | 3 | null |
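One straightforward way to get Modelfile-style behaviour with llama.cpp is to keep the system prompt in the calling script and send it with every request to llama-server's OpenAI-compatible endpoint; a minimal sketch (default port 8080, placeholder model name) is below. llama-server also has its own prompt-related flags worth checking in `--help`.

```python
# Reproduce an Ollama Modelfile SYSTEM line by sending the system prompt with
# every request to llama-server's OpenAI-compatible API (default port 8080).
import requests

SYSTEM_PROMPT = "You are a terse assistant that answers in one sentence."

def chat(user_msg: str) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "local",  # largely ignored when one model is loaded
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_msg},
            ],
        },
    )
    return resp.json()["choices"][0]["message"]["content"]

print(chat("What is a GGUF file?"))
```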
Built enterprise-grade infrastructure around nano-vLLM - 6 months of production learningsnfrastructure | 1 | [removed] | 2025-06-30T09:58:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lo3kdm/built_enterprisegrade_infrastructure_around/ | CodeStackDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo3kdm | false | null | t3_1lo3kdm | /r/LocalLLaMA/comments/1lo3kdm/built_enterprisegrade_infrastructure_around/ | false | false | self | 1 | null |
Deepseek R1 Web ouputs much more chain-of-thought information than API? | 4 | This is what I observed: the Web version prints out much more detailed chain-of-thought information than the API. Has anybody else observed the same issue? I wonder why that is. | 2025-06-30T09:38:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lo39jd/deepseek_r1_web_ouputs_much_more_chainofthought/ | Tectorumiris | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo39jd | false | null | t3_1lo39jd | /r/LocalLLaMA/comments/1lo39jd/deepseek_r1_web_ouputs_much_more_chainofthought/ | false | false | self | 4 | null |
Affordable dev system (spark alternative?) | 5 | I’m working on a science project at a University of Applied Sciences. We plan to purchase a server with an NVIDIA H200 GPU. This system will host LLM services for students.
For development purposes, we’d like to have a second system where speed isn’t critical, but it should still be capable of running the same models ... | 2025-06-30T09:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lo35gq/affordable_dev_system_spark_alternative/ | _camera_up | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo35gq | false | null | t3_1lo35gq | /r/LocalLLaMA/comments/1lo35gq/affordable_dev_system_spark_alternative/ | false | false | self | 5 | null |
Guide: How to run an MCP tool Server | 10 | This is a short guide to help people who want to know a bit more about MCP tool servers. This guide is focused only on local MCP servers offering tools using the STDIO transport. It will not go into authorizations or security. Since this is a subreddit about local models I am going to assume that people are running the... | 2025-06-30T09:30:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lo3548/guide_how_to_run_an_mcp_tool_server/ | Eisenstein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo3548 | false | null | t3_1lo3548 | /r/LocalLLaMA/comments/1lo3548/guide_how_to_run_an_mcp_tool_server/ | false | false | self | 10 | {'enabled': True, 'images': [{'id': 'Etf2C3qCzdffiiFrCfKKKBKnc9jhsBUPWqDqyzAtM1I', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/Etf2C3qCzdffiiFrCfKKKBKnc9jhsBUPWqDqyzAtM1I.gif?width=108&crop=smart&format=png8&s=7358c5e29d79877d7d599a588b470bde93fc485d', 'width': 108}, {'height': 144, 'url': '... |
Current State of Code Tab/Autocomplete Models??? | 20 | I love cursor, but that love is solely for the tab completion model. It’s a ok vs code clone and cline is better chat/agent wise. I have to use gh copilot at work and it’s absolute trash compared to that tab model. Are there any open-source models that come close in 2025? I saw zeta but that’s a bit underwhelming and o... | 2025-06-30T08:08:09 | https://huggingface.co/zed-industries/zeta | Much-Contract-1397 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lo1xma | false | null | t3_1lo1xma | /r/LocalLLaMA/comments/1lo1xma/current_state_of_code_tabautocomplete_models/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'NjeXyJ-H2-ngvCqZHmYzHLdzIwJM3Nr4H0k94STgc3g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NjeXyJ-H2-ngvCqZHmYzHLdzIwJM3Nr4H0k94STgc3g.png?width=108&crop=smart&auto=webp&s=627c1b7855e9655469bade77fc637375d74b59fa', 'width': 108}, {'height': 116, 'url': 'h... | |
Has anyone tried using LLaMA for assistant-style or general-purpose queries? | 0 | Hey everyone,
I'm currently exploring LLaMA (via Grok) with the goal of building a personal assistant, and I'm curious — has anyone here tried using LLaMA for handling assistant-style interactions or general-purpose queries?
Would love to hear about your experiences — especially how it performs in areas like task ... | 2025-06-30T07:32:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lo1ew4/has_anyone_tried_using_llama_for_assistantstyle/ | Technical-Charge-365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo1ew4 | false | null | t3_1lo1ew4 | /r/LocalLLaMA/comments/1lo1ew4/has_anyone_tried_using_llama_for_assistantstyle/ | false | false | self | 0 | null |
Models for generating QA-pairs from text dataset | 5 | Which models offer the best quality-to-performance in terms of prompt adherence and context length for such a usecase? I am currently using NousResearch/Hermes-3-Llama-3.1-8B-GGUF for this task after having failed in trying to get Qwen2.5 7B to give questions from the actual theory text not sections of the book. I am u... | 2025-06-30T07:28:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lo1d8t/models_for_generating_qapairs_from_text_dataset/ | Sasikuttan2163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lo1d8t | false | null | t3_1lo1d8t | /r/LocalLLaMA/comments/1lo1d8t/models_for_generating_qapairs_from_text_dataset/ | false | false | self | 5 | null |
Accelerated LLM Inference on AMD Instinct™ GPUs with vLLM 0.9.x and ROCm | 37 | 2025-06-30T06:48:55 | https://rocm.blogs.amd.com/software-tools-optimization/vllm-0.9.x-rocm/README.html | fallingdowndizzyvr | rocm.blogs.amd.com | 1970-01-01T00:00:00 | 0 | {} | 1lo0rk8 | false | null | t3_1lo0rk8 | /r/LocalLLaMA/comments/1lo0rk8/accelerated_llm_inference_on_amd_instinct_gpus/ | false | false | default | 37 | null | |
So whatever happened to d(iffuser)LLMs? | 47 | This morning, I got an E-Mail from the team behind the Mercury Coder LLM, Inception (https://www.inceptionlabs.ai/) that basically announced a chat-focused model. Pretty neat, sent along an API example with cURL also. Simple and nice.
But this reminded me of dLLMs in general - they haven't really been talked a lot abo... | 2025-06-30T05:29:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lnzj5e/so_whatever_happened_to_diffuserllms/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnzj5e | false | null | t3_1lnzj5e | /r/LocalLLaMA/comments/1lnzj5e/so_whatever_happened_to_diffuserllms/ | false | false | self | 47 | null |
Rumors are OAI's New OS Model potentially "frontier" level in OS space? | 0 | We saw Yacine hyping it up hard right after he left xAI, Altman even followed him back the same day. Now, other "adjacent" figures, people with ties to insiders who've previously leaked accurate info, are echoing similar hints (like that tweet going around).
OpenAI caught a lot of flack after CPO Kevin Weil said their... | 2025-06-30T04:24:08 | townofsalemfangay | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lnyfu0 | false | null | t3_1lnyfu0 | /r/LocalLLaMA/comments/1lnyfu0/rumors_are_oais_new_os_model_potentially_frontier/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'k1mft2t3pz9f1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/k1mft2t3pz9f1.png?width=108&crop=smart&auto=webp&s=b93bf43402bb810a78add8d3026e445c258e5274', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/k1mft2t3pz9f1.png?width=216&crop=smart&auto=web... | |
What subscription to buy? | 0 | I am beginner and I want to start learning about LLMs and finetuning.
I have an old laptop with just 4 gigabytes of VRAM (RTX 2050). I can't invest in new hardware. What is currently the best rental service available for getting a decent GPU/TPU that can handle finetuning and RL for small models? | 2025-06-30T04:07:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lny5qy/what_subscription_to_buy/ | ConsistentStruggle82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lny5qy | false | null | t3_1lny5qy | /r/LocalLLaMA/comments/1lny5qy/what_subscription_to_buy/ | false | false | self | 0 | null |
Major AI platforms will eventually have ads | 266 | I see this as a huge reason to continue advancement of local LLMs. OpenAI, Google, Microsoft, Anthropic, etc. all the big players have investors to answer to, and will eventually need to stop burning money. They will get pressured into a sustainable business model. I think Google has already lost a lot of traffic to AI... | 2025-06-30T03:40:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lnxo8y/major_ai_platforms_will_eventually_have_ads/ | MattDTO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnxo8y | false | null | t3_1lnxo8y | /r/LocalLLaMA/comments/1lnxo8y/major_ai_platforms_will_eventually_have_ads/ | false | false | self | 266 | null |
Best Model For Text-To-Audio & Voice Assistant? | 2 | I apologize if this has been asked before, or asked often but i personally couldn't find anything solid through self-research or scrolling through this reddit feed. **Are there any GOOD local AI text to voice models that can work independently/and with a local SLM/LLM?** I'm really trying to give my home assistant a vo... | 2025-06-30T03:38:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lnxml5/best_model_for_texttoaudio_voice_assistant/ | ExcogitationMG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnxml5 | false | null | t3_1lnxml5 | /r/LocalLLaMA/comments/1lnxml5/best_model_for_texttoaudio_voice_assistant/ | false | false | self | 2 | null |
Week 2: Building a Small Language Model from Scratch(Positional Embeddings, RoPE, and Model Distillation) - June 30 - July 4 | 33 | 2025-06-30T03:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lnx8js/week_2_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnx8js | false | null | t3_1lnx8js | /r/LocalLLaMA/comments/1lnx8js/week_2_building_a_small_language_model_from/ | false | false | 33 | null | ||
Healthcare space | 0 | The healthcare space is overdue for disruption. I’ve already built a working prototype—now I’m assembling a sharp, agile team to take it to market. I’m looking for one exceptional full-stack engineer with AI expertise and one driven individual with strong sales, marketing, and business acumen. If you're ready to help r... | 2025-06-30T02:52:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lnwsfe/healthcare_space/ | junebugg62 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnwsfe | false | null | t3_1lnwsfe | /r/LocalLLaMA/comments/1lnwsfe/healthcare_space/ | false | false | self | 0 | null |
GPU Learning and Optimization on Macbook | 6 | So my doubt is very simple. I wish to buy a macbook and would like to locally build and train my VLM and LLM models (mini ones).
What are my options of frameworks etc to learn and utilise to squeeze out the compute juice for this in macOS GPU cores. Any alternative to cuda? Does JAX work alright? What are my options? | 2025-06-30T01:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lnv75q/gpu_learning_and_optimization_on_macbook/ | Electronic-Guess-878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnv75q | false | null | t3_1lnv75q | /r/LocalLLaMA/comments/1lnv75q/gpu_learning_and_optimization_on_macbook/ | false | false | self | 6 | null |
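On Apple Silicon, PyTorch's Metal (MPS) backend is the usual CUDA stand-in for small-scale training; a quick sketch to confirm the GPU is visible is below. JAX also has an experimental Metal plugin, but support there is less mature.

```python
# Check that PyTorch can see the Apple GPU (MPS backend) and place a toy model on it.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Training on: {device}")

model = torch.nn.Linear(128, 2).to(device)   # toy model
x = torch.randn(4, 128, device=device)
print(model(x).shape)                        # torch.Size([4, 2])
```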
Baidu releases ERNIE 4.5 models on huggingface | 621 | llama.cpp support for ERNIE 4.5 0.3B
[https://github.com/ggml-org/llama.cpp/pull/14408](https://github.com/ggml-org/llama.cpp/pull/14408)
vllm Ernie4.5 and Ernie4.5MoE Model Support
[https://github.com/vllm-project/vllm/pull/20220](https://github.com/vllm-project/vllm/pull/20220) | 2025-06-30T00:34:16 | https://huggingface.co/collections/baidu/ernie-45-6861cd4c9be84540645f35c9 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lnu4zl | false | null | t3_1lnu4zl | /r/LocalLLaMA/comments/1lnu4zl/baidu_releases_ernie_45_models_on_huggingface/ | false | false | default | 621 | {'enabled': False, 'images': [{'id': 'Wyzo5BvQjbbXvCrrpLypEcj3XicuXWigLyl_Acs2b5k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Wyzo5BvQjbbXvCrrpLypEcj3XicuXWigLyl_Acs2b5k.png?width=108&crop=smart&auto=webp&s=271da8cd611d7149ea4435ca8308fd09ef2655cd', 'width': 108}, {'height': 116, 'url': 'h... |
BAIDU releases ERNIE 4.5 | 1 | [https://huggingface.co/baidu/ERNIE-4.5-VL-424B-A47B-Base-Paddle](https://huggingface.co/baidu/ERNIE-4.5-VL-424B-A47B-Base-Paddle)
llama.cpp support for ERNIE 4.5 0.3B
[https://github.com/ggml-org/llama.cpp/pull/14408](https://github.com/ggml-org/llama.cpp/pull/14408)
vllm Ernie4.5 and Ernie4.5MoE Model Support
[... | 2025-06-30T00:31:50 | https://huggingface.co/collections/baidu/ernie-45-6861cd4c9be84540645f35c9 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lnu372 | false | null | t3_1lnu372 | /r/LocalLLaMA/comments/1lnu372/baidu_releases_ernie_45/ | false | false | default | 1 | null |
Kimi-Dev-72B - Minimum specs needed to run on a high end PC | 2 | Just recently watched Julian Goldie's facebook post on Kimi-dev-72b. He seemed to be saying he was running this on a PC, but the AI models are saying it takes a high end server, that costs substantially more money, to run it. Anyone have any experience or helpful input on this?
Thanks, | 2025-06-30T00:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lnu0o0/kimidev72b_minimum_specs_needed_to_run_on_a_high/ | texrock100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnu0o0 | false | null | t3_1lnu0o0 | /r/LocalLLaMA/comments/1lnu0o0/kimidev72b_minimum_specs_needed_to_run_on_a_high/ | false | false | self | 2 | null |
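For a rough sense of why a 72B model is usually called server-class, a back-of-envelope estimate of the weight memory at common precisions is below (weights only; KV cache and activations add several GB on top).

```python
# Back-of-envelope memory estimate for a 72B-parameter model.
params = 72e9
for name, bytes_per_param in [("FP16", 2.0), ("Q8", 1.0), ("Q4", 0.5)]:
    gb = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gb:.0f} GB just for the weights")
# FP16: ~134 GB, Q8: ~67 GB, Q4: ~34 GB -- a single consumer GPU won't hold it,
# but a multi-GPU box or heavy CPU offload can.
```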
Simple textual lists for llm rankings | 2 | Hey there all. I know benchmarks exist, but they're too clunky for screen readers (I'm blind). So is there some sort of active blog or website or mailing list that cuts through all that rainfall of models and actually tells us which ones are the best based on size and specialty? Thanks. | 2025-06-30T00:22:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lntw6i/simple_textual_lists_for_llm_rankings/ | Silver-Champion-4846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lntw6i | false | null | t3_1lntw6i | /r/LocalLLaMA/comments/1lntw6i/simple_textual_lists_for_llm_rankings/ | false | false | self | 2 | null |
Is there a deepseek r1 uncensored? | 0 | I'm enjoying using deepseek r1 in LM Studio. It's a good tool, but I'm annoyed by how defensive it is about anything it doesn't like, because its parameters and guidelines are too heavy to ignore, and I'm a noob at editing an AI (if that's even possible with what I have in hardware, software and knowledge available). So, as the title... | 2025-06-29T23:50:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lnt9kl/is_there_a_deepseek_r1_uncensored/ | Elfo_Sovietico | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnt9kl | false | null | t3_1lnt9kl | /r/LocalLLaMA/comments/1lnt9kl/is_there_a_deepseek_r1_uncensored/ | false | false | self | 0 | null |
Help me design a robust on-prem Llama 3 70B infrastructure for 30 users – Complete hardware/software list wanted | 0 | Hi everyone,
I’m planning to build a **private, on-premise infrastructure** to serve **Llama 3 70B** for my office (about 30 users, possibly with a few remote users via VPN).
**No data or files should leave our local network** – security and privacy are key. All inference and data processing must stay entirely withi... | 2025-06-29T23:47:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lnt6yj/help_me_design_a_robust_onprem_llama_3_70b/ | Routine_Fail_2255 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnt6yj | false | null | t3_1lnt6yj | /r/LocalLLaMA/comments/1lnt6yj/help_me_design_a_robust_onprem_llama_3_70b/ | false | false | self | 0 | null |
AMD published 51 quants (mostly ONNX) in HF this week (a third of their current total of 145) | 2 | [https://huggingface.co/amd/models](https://huggingface.co/amd/models) | 2025-06-29T23:29:05 | choose_a_guest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lnst4m | false | null | t3_1lnst4m | /r/LocalLLaMA/comments/1lnst4m/amd_published_51_quants_mostly_onnx_in_hf_this/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'lxBvsfGX5DBlxJ7Kq_mtplJCRZ_eriRlRM7EdcytejM', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/k01eyatr9y9f1.png?width=108&crop=smart&auto=webp&s=b42f8eb1a91dfe9e0c9e72cea526155127be3af1', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/k01eyatr9y9f1.png... | ||
What Is Context Engineering? My Thoughts.. | 0 | Basically it's a step above 'prompt engineering '
The prompt is for the moment, the specific input.
'Context engineering' is setting up for the moment.
Think about it as building a movie - the background, the details etc. That would be the context framing. The prompt would be when the actors come in and say their... | 2025-06-29T23:25:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lnsqkl/what_is_context_engineering_my_thoughts/ | Lumpy-Ad-173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnsqkl | false | null | t3_1lnsqkl | /r/LocalLLaMA/comments/1lnsqkl/what_is_context_engineering_my_thoughts/ | false | false | self | 0 | null |
Build a PC or not? | 4 | Hey everyone,
I’m planning to get started with machine learning. Right now, I have an M1 Mac Mini (16GB RAM, 50GB storage left). Will it be enough?
Appreciate any advice! | 2025-06-29T23:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lnsgvy/build_a_pc_or_not/ | InternetBest7599 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnsgvy | false | null | t3_1lnsgvy | /r/LocalLLaMA/comments/1lnsgvy/build_a_pc_or_not/ | false | false | self | 4 | null |
LLM Inference with CPP only | 0 | I am looking for C++-based LLM inference and post-processing repos; any ideas on where I can get started? Does llama.cpp have efficient post-processing techniques? | 2025-06-29T23:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lnscnw/llm_inference_with_cpp_only/ | Waste_Ad_2764 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnscnw | false | null | t3_1lnscnw | /r/LocalLLaMA/comments/1lnscnw/llm_inference_with_cpp_only/ | false | false | self | 0 | null |
Please convince me not to get a GPU I don't need. Can any local LLM compare with cloud models? | 56 | I pay for Claude to assist with coding / tool calling which I use for my job all day. I feel a strong urge to waste tons of money on a nice GPU, but realistically the models aren't as strong or even as cheap as the cloud models.
I'm trying to self-reflect hard and in this moment of clarity, I see this as a distract of... | 2025-06-29T23:05:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lnsax9/please_convince_me_not_to_get_a_gpu_i_dont_need/ | TumbleweedDeep825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnsax9 | false | null | t3_1lnsax9 | /r/LocalLLaMA/comments/1lnsax9/please_convince_me_not_to_get_a_gpu_i_dont_need/ | false | false | self | 56 | null |
GitHub - khimaros/enc: `cc`, but for english | 8 | this tool "compiles" (more accurately, transpiles) english language files to any other programming language. for example `enc hello.en -o hello.py`. there is more documentation and many examples in the repo. it is compatible (and has been tested with) llama.cpp/server | 2025-06-29T22:23:14 | https://github.com/khimaros/enc | xhimaros | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lnrda7 | false | null | t3_1lnrda7 | /r/LocalLLaMA/comments/1lnrda7/github_khimarosenc_cc_but_for_english/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': 'jKqSSxi8Sioq0zv6YFa1yIx8ylabQGBwCeebx6g7SYA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jKqSSxi8Sioq0zv6YFa1yIx8ylabQGBwCeebx6g7SYA.png?width=108&crop=smart&auto=webp&s=b161fa9ce957fec3b22531c81079c1c9b4462a6a', 'width': 108}, {'height': 108, 'url': 'h... |
You can just RL a model to beat any "AI detectors" | 396 | https://preview.redd.it/p4binxqqvx9f1.png?width=783&format=png&auto=webp&s=5af26533b3e667d6f0382d11163331aedf6bc42d
https://preview.redd.it/k4tcfdmsvx9f1.png?width=2574&format=png&auto=webp&s=934ff9d043c7021764743c443feff0f0767c25cd
Baseline
• Model: Llama-3.1 8B-Instruct
• Prompt: plain "Write an essay about X"
• Detector: ZeroGPT
Result: 100 % AI-written
https://preview.redd.it/09nmithvvx9f1.png?width=1204&format=png&auto=webp&s=82d0071d8579effb1f1b75eaa5c037a56385ef9d
Data
• Synthetic dataset of 150 school-style prompts (history, literature, tech). Nothing fancy, just json lines + system prompt "You are a human essay writer"
https://preview.redd.it/d189whuxvx9f1.png?width=3456&format=png&auto=webp&s=5fdd406d1df4a40f3f4c1623b6b049026559f29e
First training run
After ~30 GRPO steps on a single A100:
• ZeroGPT score drops from 100 → 42 %
The model learned:
  Write a coherent intro
  Stuff one line of high-entropy junk
  Finish normally
Average "human-ness" skyrockets because detector averages per-sentence scores
https://preview.redd.it/c4bkar70wx9f1.png?width=941&format=png&auto=webp&s=4e3a86287c2d0cc273fd9f3854634cbd7c8ecf75
Patch #1
Added a gibberish classifier (tiny DistilRoBERTa) and multiplied reward by its minimum "clean" score. Junk lines now tank reward → behaviour disappears. GRPO’s beta ≈ how harshly to penalize incoherence. Set β = 0.4 and reward curve stabilized; no more oscillation between genius & garbage. Removed reasoning (memory constraints).
https://preview.redd.it/prmgkja2wx9f1.png?width=652&format=png&auto=webp&s=79f46c100445337e257dc3b7666ffdf2ba826252
Tiny models crush it
Swapped in Qwen 0.5B LoRA rank 8, upped num_generations → 64.
Result after 7 steps: best sample already at 28 % "human". Smaller vocab seems to help leak less LM "signature" (the model learned to use lots of proper nouns to trick the detector).
https://preview.redd.it/2e6g1pm7wx9f1.png?width=800&format=png&auto=webp&s=cfbcaa7fd8c6baa2a05d063a3989ba282c8d31a2
Colab: [https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb)
Detector bug?
ZeroGPT sometimes marks the first half AI, second half human for the same paragraph. The RL agent locks onto that gradient and exploits it.
Classifier clearly over-fits surface patterns rather than semantics
Single scalar feedback is enough for LMs to reverse-engineer public detectors
Add even a tiny auxiliary reward (gibberish, length) to stop obvious failure modes
Public "AI/Not-AI" classifiers are security-through-obscurity
Reward function: [https://codefile.io/f/R4O9IdGEhg](https://codefile.io/f/R4O9IdGEhg) | 2025-06-29T22:22:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lnrd1t/you_can_just_rl_a_model_to_beat_any_ai_detectors/ | HOLUPREDICTIONS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnrd1t | false | null | t3_1lnrd1t | /r/LocalLLaMA/comments/1lnrd1t/you_can_just_rl_a_model_to_beat_any_ai_detectors/ | false | false | 396 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': '... |
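For readers wanting to reproduce the patched reward, a sketch of the combination described above (the detector's "human" score multiplied by the worst per-sentence gibberish-classifier "clean" score) is below; the two scoring functions are hypothetical stand-ins for ZeroGPT and the DistilRoBERTa classifier, and the post links the actual reward function at the end.

```python
# Sketch of the combined reward: detector "human" score times the worst
# per-sentence "clean" score, so high-entropy junk lines can no longer buy a
# better average. Replace the stand-in scorers with real detector/classifier calls.

def detector_human_score(text: str) -> float:
    # Stand-in for a ZeroGPT-style call: 0.0 = "AI", 1.0 = "human".
    return 0.5

def gibberish_clean_score(sentence: str) -> float:
    # Stand-in for the DistilRoBERTa gibberish classifier: 1.0 = clean prose.
    return 1.0

def reward(completion: str) -> float:
    sentences = [s.strip() for s in completion.split(".") if s.strip()]
    if not sentences:
        return 0.0
    min_clean = min(gibberish_clean_score(s) for s in sentences)
    return detector_human_score(completion) * min_clean

print(reward("A short essay. With two sentences."))
```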
OpenAI reportedly ‘recalibrating’ compensation in response to Meta hires | TechCrunch | 1 | 2025-06-29T22:01:58 | https://techcrunch.com/2025/06/29/openai-reportedly-recalibrating-compensation-in-response-to-meta-hires/ | RhubarbSimilar1683 | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1lnqw0p | false | null | t3_1lnqw0p | /r/LocalLLaMA/comments/1lnqw0p/openai_reportedly_recalibrating_compensation_in/ | false | false | default | 1 | null | |
Using classifier-free guidance to prompt instruct models (with the tags) works better for creative writing than prompting the model outright | 0 | OK, so I was playing around with classifier-free guidance, and it occurred to me: Why not just put the whole damn string in there? I loathe how programmatic the responses can be, so maybe that might give the poor thing some freaking room to breathe, lol. Human beings do not acquire and use language that way, so why ... | 2025-06-29T21:59:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lnqtog/using_classifierfree_guidance_to_prompt_instruct/ | apodicity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnqtog | false | null | t3_1lnqtog | /r/LocalLLaMA/comments/1lnqtog/using_classifierfree_guidance_to_prompt_instruct/ | false | false | self | 0 | null |
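If you want to try the same thing, recent Hugging Face transformers versions expose classifier-free guidance for text generation directly through `generate()` via `guidance_scale` and `negative_prompt_ids`. A rough sketch follows — the model name, scale, and negative prompt are illustrative assumptions, not the poster's exact setup, and putting the full chat-templated string into the conditioning is exactly the experiment the post describes:

```python
# Rough CFG sketch with transformers; guidance_scale > 1.0 enables classifier-free guidance.
# Model, scale, and negative prompt are illustrative, not the poster's exact configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-3.1-8B-Instruct"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16, device_map="auto")

# "Positive" prompt: the whole chat-formatted string, instruct tags included.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Write a short scene set in a lighthouse."}],
    tokenize=False, add_generation_prompt=True)
# "Negative" prompt: the register you want steered away from.
negative = "As an AI language model, here is a structured, formulaic response:"

inputs = tok(prompt, return_tensors="pt").to(model.device)
neg = tok(negative, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=300,
                     guidance_scale=1.5,
                     negative_prompt_ids=neg.input_ids)
print(tok.decode(out[0], skip_special_tokens=True))
```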
AGI/ASI Research 20250628 - Corporate Artificial General Intelligence | 0 | 2025-06-29T21:47:21 | https://v.redd.it/u7fuxd3msx9f1 | Financial_Pick8394 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lnqk9i | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/u7fuxd3msx9f1/DASHPlaylist.mpd?a=1753825655%2CMDgzZjAxMTdhMmYxMzExYWZkNDMyNjY3OThkMDI2ZmY2MzAyYWRjZDRiM2QwMGIzZTA1NzU3YTIwYTg5MDhhNQ%3D%3D&v=1&f=sd', 'duration': 179, 'fallback_url': 'https://v.redd.it/u7fuxd3msx9f1/DASH_720.mp4?source=fallback', 'h... | t3_1lnqk9i | /r/LocalLLaMA/comments/1lnqk9i/agiasi_research_20250628_corporate_artificial/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ODRnMm5kM21zeDlmMZgV6qzRoQ4E7N2p7lxVbXEMmKy1tARUpqxCXk_wp3rX.png?width=108&crop=smart&format=pjpg&auto=webp&s=39c95ba3e0b6ec5ad57b8d7abda8059bc5c88... | ||
Upgraded from 3090 to 5090. Oobabooga complaints. | 3 | So as the title says, I got new drivers, but I'm getting a fatal CUDA error when loading models. I tried pip uninstall of torch, torchaudio, and torchvision, followed by a fresh install.
I also tried:
pip install --pre --upgrade --no-cache-dir torch --extra-index-url https://download.pytorch.org/whl/nightly/cu128
Not sure what ... | 2025-06-29T21:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lnqaea/upgraded_from_3090_to_5090_oobabooga_complaints/ | jebeller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnqaea | false | null | t3_1lnqaea | /r/LocalLLaMA/comments/1lnqaea/upgraded_from_3090_to_5090_oobabooga_complaints/ | false | false | self | 3 | null |
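For anyone hitting the same thing: the usual culprit is a torch build without Blackwell (sm_120) kernels. A quick, framework-agnostic way to check what the environment actually loaded — standard PyTorch calls, nothing Oobabooga-specific:

```python
# Sanity check that the installed torch build can actually drive an RTX 5090.
import torch

print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)            # cu128 nightlies report 12.8
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
    print("arch list:", torch.cuda.get_arch_list())      # look for 'sm_120' in this list
```

If 'sm_120' is missing from the arch list, the venv is still picking up an older wheel, which would explain the fatal CUDA error at load time.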
Current State of OpenAI | 49 | 2025-06-29T20:49:09 | noblex33 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lnp73w | false | null | t3_1lnp73w | /r/LocalLLaMA/comments/1lnp73w/current_state_of_openai/ | false | false | default | 49 | {'enabled': True, 'images': [{'id': 'p9nlm707ix9f1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/p9nlm707ix9f1.jpeg?width=108&crop=smart&auto=webp&s=d0894a05385b3659a19522c234fa0f7be3ad5dc7', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/p9nlm707ix9f1.jpeg?width=216&crop=smart&auto=w... | ||
Running AI models on phone on a different OS? | 0 | Has anyone tried running a local LLM on a phone running GrapheneOS or another lightweight Android OS?
Stock Android tends to consume 70–80% of RAM at rest, but I'm wondering if anyone has managed to reduce that significantly with Graphene and fit something like DeepSeek-R1-0528-Qwen3-8B (Q4 quant) in memory.
If no ... | 2025-06-29T19:45:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lnnoc1/running_ai_models_on_phone_on_a_different_os/ | AspecialistI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnnoc1 | false | null | t3_1lnnoc1 | /r/LocalLLaMA/comments/1lnnoc1/running_ai_models_on_phone_on_a_different_os/ | false | false | self | 0 | null |
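A rough back-of-the-envelope for whether that model fits at all, under common Q4 assumptions (~0.6 bytes per parameter for a Q4_K_M-style quant, plus KV cache and runtime overhead — estimates, not measurements):

```python
# Rough fit estimate for an 8B model at Q4 on a phone; all figures are approximations.
params_b = 8.0            # DeepSeek-R1-0528-Qwen3-8B parameter count, in billions
bytes_per_param = 0.6     # ~Q4_K_M average, assumption
weights_gb = params_b * bytes_per_param        # ≈ 4.8 GB of weights
kv_and_overhead_gb = 1.0                       # small context + runtime, assumption
print(f"needs ≈ {weights_gb + kv_and_overhead_gb:.1f} GB of free RAM")
```

So even a stripped-down OS needs roughly 6 GB genuinely free, which makes a 12-16 GB phone a much safer target than an 8 GB one, regardless of how lean GrapheneOS is at rest.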
hunyuan-a13b: any news? GGUF? MLX? | 85 | Like many I’m excited about this model. We had a big thread on it, then crickets. Any news? | 2025-06-29T19:04:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lnmp98/hunyuana13b_any_news_gguf_mlx/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnmp98 | false | null | t3_1lnmp98 | /r/LocalLLaMA/comments/1lnmp98/hunyuana13b_any_news_gguf_mlx/ | false | false | self | 85 | null |
4x 4090 48GB inference box (I may have overdone it) | 914 | A few months ago I discovered that 48GB 4090s were starting to show up on the western market in large numbers. I didn't think much of it at the time, but then I got my payout from the mt.gox bankruptcy filing (which has been ongoing for over 10 years now), and decided to blow a chunk of it on an inference box for local... | 2025-06-29T18:33:40 | https://www.reddit.com/gallery/1lnlxp1 | 101m4n | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lnlxp1 | false | null | t3_1lnlxp1 | /r/LocalLLaMA/comments/1lnlxp1/4x_4090_48gb_inference_box_i_may_have_overdone_it/ | false | false | 914 | {'enabled': True, 'images': [{'id': 'o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8.jpeg?width=108&crop=smart&auto=webp&s=0f79a082f28f554cc94ca3e5fdf80eb9c0222d3c', 'width': 108}, {'height': 288, 'url': '... | |
How do you use datasets from huggingface/kaggle etc into local apps like lmstudio or jan local apps | 1 | I am a beginner and have started using local apps like LM Studio and Jan; however, I can't figure out how one uses datasets from sites like Kaggle or Hugging Face | 2025-06-29T18:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lnlo69/how_do_you_use_datasets_from_huggingfacekaggle/ | vasuhawa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnlo69 | false | null | t3_1lnlo69 | /r/LocalLLaMA/comments/1lnlo69/how_do_you_use_datasets_from_huggingfacekaggle/ | false | false | self | 1 | null
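The short answer most of these apps want is plain files: pull the dataset down with the `datasets` library (or grab a Kaggle CSV), dump it to text/JSON, then attach those files as documents in the app's RAG/document feature. A minimal sketch — the dataset name here is just an example:

```python
# Minimal sketch: export a Hugging Face dataset to plain files that local
# RAG-style apps (AnythingLLM, Jan, etc.) can ingest as documents.
from datasets import load_dataset

ds = load_dataset("ag_news", split="train[:1000]")   # any text dataset works; example name
with open("ag_news_sample.txt", "w", encoding="utf-8") as f:
    for row in ds:
        f.write(row["text"].strip() + "\n\n")

# Or keep the structure instead of flattening to text:
ds.to_json("ag_news_sample.jsonl")
```

Note that chat apps like LM Studio don't train on datasets; they can only use them as reference documents unless you fine-tune separately.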
Trying to figure out when it makes sense... | 4 | So I'm an independent developer of 25+ yrs. I've really enjoyed working with AI (Claude and OpenAI mostly) as my coding assistant over the past 6 months; it hasn't been very expensive, but I'm also not using it "full time" either.
I did some LLM experimentation with my old RX580 8GB card which is not very good for actua... | 2025-06-29T18:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lnlmpi/trying_to_figure_out_when_it_makes_sense/ | Waste-Toe7042 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnlmpi | false | null | t3_1lnlmpi | /r/LocalLLaMA/comments/1lnlmpi/trying_to_figure_out_when_it_makes_sense/ | false | false | self | 4 | null |
Context Engineering | 0 | "Context engineering is the delicate art and science of filling the context window with just the right information for the next step." — Andrej Karpathy.
A practical, first-principles handbook inspired by Andrej Karpathy and 3Blue1Brown for moving beyond prompt engineering to the wider discipline of context desig... | 2025-06-29T18:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lnldsj/context_engineering/ | recursiveauto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnldsj | false | null | t3_1lnldsj | /r/LocalLLaMA/comments/1lnldsj/context_engineering/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'cSwldoCrYnNsuV2bVD5J9hR8KcWC5c_WQOPgKOFJcjc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cSwldoCrYnNsuV2bVD5J9hR8KcWC5c_WQOPgKOFJcjc.png?width=108&crop=smart&auto=webp&s=4e4c6dc6a3f4fff84fd7abc85fa4486c41e3fc91', 'width': 108}, {'height': 108, 'url': 'h... |
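Stripped of the branding, the core move is mundane: score candidate pieces of context, then pack the best ones into a fixed token budget before every model call. A minimal, framework-agnostic sketch of that packing step — the scoring and token-counting functions are stand-ins for whatever retriever and tokenizer you actually use:

```python
# Minimal context-packing sketch: keep the highest-value snippets that fit the budget.
# count_tokens and score are placeholders for a real tokenizer and retriever.
def pack_context(snippets: list[str], question: str, budget_tokens: int,
                 count_tokens, score) -> str:
    ranked = sorted(snippets, key=lambda s: score(s, question), reverse=True)
    picked, used = [], 0
    for s in ranked:
        cost = count_tokens(s)
        if used + cost > budget_tokens:
            continue
        picked.append(s)
        used += cost
    return "\n\n".join(picked)

# Usage idea:
# context = pack_context(chunks, user_question, 4000,
#                        count_tokens=lambda t: len(t) // 4,   # crude estimate
#                        score=lambda s, q: embed_sim(s, q))   # your retrieval scorer
```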
According to rumors NVIDIA is planning a RTX 5070 Ti SUPER with 24GB VRAM | 204 | 2025-06-29T18:03:15 | https://videocardz.com/newz/nvidia-also-planning-geforce-rtx-5070-ti-super-with-24gb-gddr7-memory | BringerOfNuance | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1lnl6we | false | null | t3_1lnl6we | /r/LocalLLaMA/comments/1lnl6we/according_to_rumors_nvidia_is_planning_a_rtx_5070/ | false | false | default | 204 | {'enabled': False, 'images': [{'id': 'nvS9CwLfNaU7BwSp_zJYNRF4C9Wqsv0EQEs557kjO8Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nvS9CwLfNaU7BwSp_zJYNRF4C9Wqsv0EQEs557kjO8Y.jpeg?width=108&crop=smart&auto=webp&s=c4ec05b33a639f326ee6d23e23a2d5829e8b6881', 'width': 108}, {'height': 112, 'url': '... | |
What memory/vram temperatures do you get (particularly anyone with gddr7 in the RTX 50X0 series)? | 2 | Doesnt seem to be much public info on gddr7 thermals generally. | 2025-06-29T17:41:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lnknry/what_memoryvram_temperatures_do_you_get/ | MuddyPuddle_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnknry | false | null | t3_1lnknry | /r/LocalLLaMA/comments/1lnknry/what_memoryvram_temperatures_do_you_get/ | false | false | self | 2 | null |
Prompt Smells, Just Like Code | 40 | We all know about code smells. When your code works, but it’s messy and you just know it’s going to cause pain later.
The same thing happens with prompts. I didn’t really think about it until I saw our LLM app getting harder and harder to tweak… and the root cause? Messy, overcomplicated prompts, complex workflows.
S... | 2025-06-29T17:10:07 | https://blog.surkar.in/prompt-smells-just-like-code | thesmallstar | blog.surkar.in | 1970-01-01T00:00:00 | 0 | {} | 1lnjw6m | false | null | t3_1lnjw6m | /r/LocalLLaMA/comments/1lnjw6m/prompt_smells_just_like_code/ | false | false | default | 40 | {'enabled': False, 'images': [{'id': 'CM_MCPik5clqILb0zxA6bpU1V0O-DrGVq9FOihM0CAc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CM_MCPik5clqILb0zxA6bpU1V0O-DrGVq9FOihM0CAc.png?width=108&crop=smart&auto=webp&s=74e8427cf080ad1bf7cf47257084f7d6b7828a7e', 'width': 108}, {'height': 113, 'url': 'h... |
Prompt Smells, Just Like code | 1 | We all know about code smells. When your code works, but it’s messy and you just know it’s going to cause pain later.
The same thing happens with prompts. I didn’t really think about it until I saw our LLM app getting harder and harder to tweak… and the root cause? Messy, overcomplicated prompts, complex workflows.
P... | 2025-06-29T17:07:21 | https://blog.surkar.in/prompt-smells-just-like-code | thesmallstar | blog.surkar.in | 1970-01-01T00:00:00 | 0 | {} | 1lnjtrs | false | null | t3_1lnjtrs | /r/LocalLLaMA/comments/1lnjtrs/prompt_smells_just_like_code/ | false | false | default | 1 | null |
I built a multi-modal semantic search framework | 0 | I’ve developed a unified framework for multi-modal semantic search that removes the typical production-infrastructure bottleneck and lets you focus entirely on front-end features.
In most production environments, enabling semantic search demands multiple, separately configured components. This framework bundles everyt... | 2025-06-29T16:43:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lnj7wb/i_built_a_multimodal_semantic_search_framework/ | Available_Ad_5360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnj7wb | false | null | t3_1lnj7wb | /r/LocalLLaMA/comments/1lnj7wb/i_built_a_multimodal_semantic_search_framework/ | false | false | self | 0 | null |
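For comparison, the minimal do-it-yourself version of multi-modal semantic search is a shared embedding space like CLIP; sentence-transformers ships a CLIP checkpoint that encodes both images and text, so text queries can rank images with plain cosine similarity. Model name and image paths below are illustrative:

```python
# Bare-bones multi-modal search sketch: CLIP embeddings + cosine similarity.
# Model name and image paths are illustrative.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")          # encodes both images and text

image_paths = ["cat.jpg", "beach.jpg", "invoice.png"]
img_emb = model.encode([Image.open(p) for p in image_paths], convert_to_tensor=True)

query_emb = model.encode("a sunny day at the seaside", convert_to_tensor=True)
hits = util.semantic_search(query_emb, img_emb, top_k=3)[0]
for h in hits:
    print(image_paths[h["corpus_id"]], round(float(h["score"]), 3))
```

What a production framework adds on top of this is mostly plumbing: a vector store, ingestion pipelines, and serving — which is exactly the bottleneck the post is talking about.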
How are local or online models scraping? Is it different from search? | 5 | Are the scrapers usually part of the model, or is it an MCP server? How did scrapers change after AI? Deep research is probably one of the most useful things I've used. If I run it locally with Open WebUI and the search integration (like DDG), how does it get the data from sites? | 2025-06-29T16:28:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lniut8/how_are_local_or_online_models_scraping_is_it/ | InsideYork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lniut8 | false | null | t3_1lniut8 | /r/LocalLLaMA/comments/1lniut8/how_are_local_or_online_models_scraping_is_it/ | false | false | self | 5 | null
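At a high level it's usually not part of the model at all: a search backend (DuckDuckGo, SearxNG, etc.) returns URLs, a plain HTTP client fetches the pages, an HTML-to-text step strips the markup, and the cleaned text is stuffed into the prompt. A stripped-down sketch of the fetch-and-clean half — the URLs are placeholders, and a real pipeline adds robots.txt checks, timeouts, and chunking:

```python
# Stripped-down "web context" fetcher: download pages, strip HTML, return plain text.
# URLs are placeholders; a real setup gets them from a search backend first.
import requests
from bs4 import BeautifulSoup

def page_to_text(url: str, max_chars: int = 4000) -> str:
    html = requests.get(url, timeout=10, headers={"User-Agent": "research-bot"}).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()                      # drop non-content elements
    text = " ".join(soup.get_text(separator=" ").split())
    return text[:max_chars]

urls = ["https://example.com/article-1", "https://example.com/article-2"]
context = "\n\n".join(page_to_text(u) for u in urls)
# 'context' then gets prepended to the user's question in the LLM prompt.
```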
Best foss LLMs for analysing PTE essay for potato system | 0 | Hi guys, I'm developing a PTE essay generation and evaluation (scoring, giving feedback, etc.) tool, mostly to learn about AI and LLMs, using Python and Ollama.
The problem is my potato system (6 GB usable RAM out of 8 GB, no GPU).
Which are the best FOSS LLMs out there for this scenario? (Which are the best if I've CHAD �... | 2025-06-29T16:22:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lniptg/best_foss_llms_for_analysing_pte_essay_for_potato/ | UnknownSh00ter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lniptg | false | null | t3_1lniptg | /r/LocalLLaMA/comments/1lniptg/best_foss_llms_for_analysing_pte_essay_for_potato/ | false | false | self | 0 | null |
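On hardware like that, the practical route is a small quantized model (roughly 1-4B parameters) served by Ollama's local HTTP API, with a tightly scoped rubric prompt. A minimal scoring call follows — the model name and rubric wording are illustrative, not recommendations from the post:

```python
# Minimal essay-scoring call against Ollama's local REST API (http://localhost:11434).
# Model name and rubric are illustrative; pick whatever small model fits in 6 GB of RAM.
import requests

def score_essay(essay: str, model: str = "qwen2.5:3b") -> str:
    prompt = (
        "You are a PTE essay rater. Score the essay from 0-90 for content, form, "
        "grammar, and vocabulary, then give two sentences of feedback.\n\n" + essay
    )
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False},
                      timeout=300)
    return r.json()["response"]

print(score_essay("Some people think university education should be free..."))
```

Expect CPU-only generation to be slow (a few tokens per second), so batch the essays rather than scoring interactively.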
Best local set up for getting writing critique/talking about the characters? | 3 | Hi. I have a RTX 3060 with 12 Gb vram gpu. A fairly alright computer for entry level AI stuff.
I've been experimenting with LM Studio, GPT4All, AnythingLLM and Dot.
My use case is that I want to upload chapters of a book I'm writing for fun, get critiques, have it tell me strengths and weaknesses in my writing and a... | 2025-06-29T16:21:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lniowu/best_local_set_up_for_getting_writing/ | Vast_Description_206 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lniowu | false | null | t3_1lniowu | /r/LocalLLaMA/comments/1lniowu/best_local_set_up_for_getting_writing/ | false | false | self | 3 | null |
AI coding agents...what am I doing wrong? | 22 | Why are other people having such good luck with AI coding agents when I can't even get mine to write a simple comment block at the top of a 400-line file?
The common refrain is it's like having a junior engineer to pass a coding task off to...well, I've never had a junior engineer scroll 1/3rd of the way through a file... | 2025-06-29T16:19:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lnin1x/ai_coding_agentswhat_am_i_doing_wrong/ | furyfuryfury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lnin1x | false | null | t3_1lnin1x | /r/LocalLLaMA/comments/1lnin1x/ai_coding_agentswhat_am_i_doing_wrong/ | false | false | self | 22 | null |