title: stringlengths 1–300
score: int64 0–8.54k
selftext: stringlengths 0–41.5k
created: timestamp[ns] 2023-04-01 04:30:41 – 2026-03-04 02:14:14
url: stringlengths 0–878
author: stringlengths 3–20
domain: stringlengths 0–82
edited: timestamp[ns] 1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded: int64 0–2
gildings: stringclasses, 7 values
id: stringlengths 7–7
locked: bool, 2 classes
media: stringlengths 646–1.8k
name: stringlengths 10–10
permalink: stringlengths 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: stringlengths 4–213
ups: int64 0–8.54k
preview: stringlengths 301–5.01k
Are 32k-Token Embedding Models Real Innovation or Just Marketing?
8
**What do you think about embedding models that support input context lengths of up to 32k tokens?** For example, Voyage 3 or Voyage 3.5 (from MongoDB). Is it just marketing, or does it make a real difference in practice? Also, which closed-source embedding model would you recommend for top-tier performance?
2025-11-04T11:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1oo4h0q/are_32ktoken_embedding_models_real_innovation_or/
CapitalShake3085
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo4h0q
false
null
t3_1oo4h0q
/r/LocalLLaMA/comments/1oo4h0q/are_32ktoken_embedding_models_real_innovation_or/
false
false
self
8
null
Finetuning DeepSeek 671B locally with only 80GB VRAM and Server CPU
3
Hi, we're the KTransformers team (formerly known for our DeepSeek-V3 local CPU/GPU hybrid inference project). Today, we're proud to announce full integration with LLaMA-Factory, enabling you to **fine-tune DeepSeek-671B or Kimi-K2-1TB locally with just 4x RTX 4090 GPUs**! https://preview.redd.it/24938oydy7zf1.png?width=2246&format=png&auto=webp&s=967bb97d1d8c8cb2d6d0ea96ec1dab5d240a294d Now you can use a single local server with the [NekoQA-10K](https://huggingface.co/datasets/liumindmind/NekoQA-10K) dataset to turn DeepSeek into a cat girl\~ https://preview.redd.it/w1m1j89jy7zf1.png?width=2570&format=png&auto=webp&s=5412ea1346573d474dd122116665ffe5031fb53f More information can be found at [https://github.com/kvcache-ai/ktransformers/tree/main/KT-SFT](https://github.com/kvcache-ai/ktransformers/tree/main/KT-SFT) [https://github.com/hiyouga/LLaMA-Factory/issues/9266](https://github.com/hiyouga/LLaMA-Factory/issues/9266)
2025-11-04T10:41:38
https://www.reddit.com/r/LocalLLaMA/comments/1oo462c/finetuning_deepseek_671b_locally_with_only_80gb/
CombinationNo780
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo462c
false
null
t3_1oo462c
/r/LocalLLaMA/comments/1oo462c/finetuning_deepseek_671b_locally_with_only_80gb/
false
false
https://a.thumbs.redditm…Pd0sPJv4R7J8.jpg
3
null
Ideal size of llm to make
0
I think the ideal size for an MoE LLM would be 30B total with 1.5B active for PC, and 10B total with 0.5B active for smartphones. PCs go up to 32 GB of RAM and smartphones to 12–16 GB, so the ideal would be ~5% active parameters for efficiency (comparable to the human brain). And I don't think everyone has, or will be able to afford, a 600-watt 5090 to run local LLMs. So 30B-A3B at Q4_K_M = 19 GB for PC, and 10B-A0.5B at Q4_K_M = 7 GB for smartphone. The LLM industry (Mistral and the like) should focus on that!
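The back-of-the-envelope quant math in the post can be sketched in a few lines (a rough estimate only, assuming ~4.85 bits per weight for a llama.cpp-style Q4_K_M quant and ignoring KV cache and runtime overhead):

```python
def quant_size_gb(total_params_billions: float, bits_per_weight: float = 4.85) -> float:
    """Rough on-disk/in-RAM size of a quantized model: params * bits-per-weight / 8."""
    return total_params_billions * 1e9 * bits_per_weight / 8 / 1e9

# The post's two target shapes (total params; active params don't change file size):
print(f"30B MoE @ Q4_K_M ~= {quant_size_gb(30):.1f} GB")   # close to the quoted ~19 GB
print(f"10B MoE @ Q4_K_M ~= {quant_size_gb(10):.1f} GB")   # close to the quoted ~7 GB
```

Note the active-parameter count (A3B, A0.5B) affects speed, not memory footprint: every expert still has to fit in RAM.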
2025-11-04T10:41:35
https://www.reddit.com/r/LocalLLaMA/comments/1oo461u/ideal_size_of_llm_to_make/
MoreIndependent5967
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo461u
false
null
t3_1oo461u
/r/LocalLLaMA/comments/1oo461u/ideal_size_of_llm_to_make/
false
false
self
0
null
How do you handle local AI model performance across different hardware?
1
I recently asked a question about why you think more apps don’t run AI locally, and I received a lot of interesting answers. Now I have a follow-up question. For those of you who have managed to build apps that include AI models running on-device, how do you handle the issue of models performing differently across different CPUs, GPUs, and NPUs? Do you usually deploy the same model across all devices? If so, how do you make it perform well on different accelerators and devices? Or do you switch models between devices to get better performance for each one? How do you decide which model works best for each type of device?
2025-11-04T10:17:54
https://www.reddit.com/r/LocalLLaMA/comments/1oo3s6f/how_do_you_handle_local_ai_model_performance/
elinaembedl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo3s6f
false
null
t3_1oo3s6f
/r/LocalLLaMA/comments/1oo3s6f/how_do_you_handle_local_ai_model_performance/
false
false
self
1
null
Workaround for VRAM unloading after idle period using Vulkan runtime on multi-gpu setup
2
So a lot of people have been experiencing an issue (especially in AI workloads) where their VRAM will unload completely onto system RAM after an idle period, especially on multi-GPU setups. I've created a temporary solution until the issue gets fixed. My code loads 1 MB onto the VRAM and keeps it and the GPU core "awake" by pinging it every second. This doesn't use any visible resources on the core or memory, but it will keep the VRAM from unloading onto system RAM. [https://github.com/rombodawg/GPU\_Core-Memory\_Never\_Idle\_or\_Sleep](https://github.com/rombodawg/GPU_Core-Memory_Never_Idle_or_Sleep)
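The trick the linked repo describes (pin a small allocation, then touch it on a timer) can be sketched in PyTorch. This is illustrative only, not the repo's actual code; `torch` is imported lazily so the module loads even on machines without a GPU stack:

```python
import time

PIN_FLOATS = 256 * 1024  # 256 Ki float32 values = exactly 1 MiB per GPU

def keep_gpus_awake(interval_s: float = 1.0) -> None:
    """Pin ~1 MiB of VRAM on every visible GPU and touch it periodically,
    so the driver never idles the core and pages VRAM out to system RAM."""
    import torch  # imported here so this file loads on machines without torch
    if not torch.cuda.is_available():
        raise RuntimeError("no CUDA/ROCm device visible")
    # One resident tensor per device keeps an allocation alive in VRAM.
    pins = [torch.zeros(PIN_FLOATS, device=f"cuda:{i}")
            for i in range(torch.cuda.device_count())]
    while True:
        for t in pins:
            t.add_(0)              # trivial in-place kernel: wakes the core, costs ~nothing
        torch.cuda.synchronize()   # force the kernels to actually execute
        time.sleep(interval_s)
```

Run it in a background process (or thread) and the periodic kernel launches should prevent the idle-unload behavior described above.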
2025-11-04T10:13:06
https://www.reddit.com/r/LocalLLaMA/comments/1oo3pdz/workaround_for_vram_unloading_after_idle_period/
Rombodawg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo3pdz
false
null
t3_1oo3pdz
/r/LocalLLaMA/comments/1oo3pdz/workaround_for_vram_unloading_after_idle_period/
false
false
self
2
{'enabled': False, 'images': [{'id': 'yejG7MaUSKglNfi3NXxhFybI6s_HehjcGvl7Ck2wCjY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yejG7MaUSKglNfi3NXxhFybI6s_HehjcGvl7Ck2wCjY.png?width=108&crop=smart&auto=webp&s=2d179a5072764e4e03162689f03f2027b471f97c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yejG7MaUSKglNfi3NXxhFybI6s_HehjcGvl7Ck2wCjY.png?width=216&crop=smart&auto=webp&s=664bff83ac6a3e1c7b6741e26515c74967a492aa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yejG7MaUSKglNfi3NXxhFybI6s_HehjcGvl7Ck2wCjY.png?width=320&crop=smart&auto=webp&s=0cad1a3f9ee8648fd94a372f5710fcccd68fd152', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yejG7MaUSKglNfi3NXxhFybI6s_HehjcGvl7Ck2wCjY.png?width=640&crop=smart&auto=webp&s=08535edfe25cc172121bdba85995087536342d9f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yejG7MaUSKglNfi3NXxhFybI6s_HehjcGvl7Ck2wCjY.png?width=960&crop=smart&auto=webp&s=5caeaf89b6cef11cd9e527b54d757d11cbe32000', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yejG7MaUSKglNfi3NXxhFybI6s_HehjcGvl7Ck2wCjY.png?width=1080&crop=smart&auto=webp&s=bda5a07fca0826dbde800fa56878675898816c3c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yejG7MaUSKglNfi3NXxhFybI6s_HehjcGvl7Ck2wCjY.png?auto=webp&s=3cff313e0f11645422f1f5e7a314db176f977e22', 'width': 1200}, 'variants': {}}]}
llama.cpp vulkan build is being ignored
0
I'm trying to make an AI model run on my GPU, but all the Python files in the project are failing to, even though llama.cpp is in the project. How do I check that llama.cpp is actually working?
2025-11-04T09:16:33
https://www.reddit.com/r/LocalLLaMA/comments/1oo2ua8/llamacpp_vulkan_build_is_being_ignored/
AhmadXVX15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo2ua8
false
null
t3_1oo2ua8
/r/LocalLLaMA/comments/1oo2ua8/llamacpp_vulkan_build_is_being_ignored/
false
false
self
0
null
Help us benchmark Hephaestus on SWEBench-Verified! Watch AI agents solve real bugs + get credited in our report
1
Hey everyone! 👋 I've been working on Hephaestus - an open-source framework that changes how we think about AI agent workflows. It's fully open source and will remain that way. **The Problem:** Most agentic frameworks make you define every step upfront. But complex tasks don't work like that - you discover what needs to be done as you go. **The Solution:** Semi-structured workflows. You define phases - the logical steps needed to solve a problem (like "Analysis → Implementation → Validation" for software projects). Then agents dynamically create tasks across these phases based on what they discover. Agents coordinate through a Kanban board and share discoveries via RAG-powered memory, while a Guardian monitors trajectories to keep everyone on track. **Now I need your help.** 🙏 We're evaluating Hephaestus on SWEBench-Verified (500 real-world GitHub issues from popular Python repos like Django, SymPy, and Astropy). It's a massive benchmark, and I'm looking for contributors to help run instances. **What you need:** - Claude Code subscription (Sonnet-4.5) - that's it! - I'll provide OpenRouter API keys for orchestration **What you get:** - Full credit in our final SWEBench evaluation report - Watch Hephaestus agents coordinate and build workflows in real-time through the web UI - Help validate a new approach to autonomous AI workflows - Contribute to open-source AI research **How it works:** 1. Generate a batch of uncompleted instances (we have a script that does this automatically) 2. Run the benchmark overnight 3. Submit results via PR (so your contribution is tracked and credited) We're coordinating via Discord to avoid duplicate work, and the comprehensive docs walk you through everything step-by-step. 
**🔗 Links:** - **GitHub:** https://github.com/Ido-Levi/Hephaestus - **Contributor Guide:** https://ido-levi.github.io/Hephaestus/docs/guides/running-swebench-benchmark - **Discord:** https://discord.gg/FyrC4fpS This is a chance to contribute to AI agent research, see self-building workflows tackle real problems, and get recognized for your contribution. Every batch helps! Thanks in advance to everyone who participates! 🚀
2025-11-04T09:13:49
https://v.redd.it/cal43bywi7zf1
Standard_Excuse7988
v.redd.it
1970-01-01T00:00:00
0
{}
1oo2su3
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/cal43bywi7zf1/DASHPlaylist.mpd?a=1764839646%2CZDVhN2VmODdkOWRhZGUzNDA2ODhiNTk5NGIzOGUzMWFmMTRiMDAwZDBlMWIyZTliZmI4MjdmODM5ZThlN2Y5Ng%3D%3D&v=1&f=sd', 'duration': 89, 'fallback_url': 'https://v.redd.it/cal43bywi7zf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 554, 'hls_url': 'https://v.redd.it/cal43bywi7zf1/HLSPlaylist.m3u8?a=1764839646%2CMDVlMmVjMmI3NjY4ODk5YTVkNWMwM2ZiYmIzZDZlMzcxZDIwZWIxZTBjYzcyYTZjOWM2ZTA3NGFlZWEzYTYxOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cal43bywi7zf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1oo2su3
/r/LocalLLaMA/comments/1oo2su3/help_us_benchmark_hephaestus_on_swebenchverified/
false
false
https://external-preview…93598982d6b731cd
1
{'enabled': False, 'images': [{'id': 'djNvdDhieXdpN3pmMdAkQHndWWJHnvJDGx1uxfxLVUHIG9vtIHStiltBck3-', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/djNvdDhieXdpN3pmMdAkQHndWWJHnvJDGx1uxfxLVUHIG9vtIHStiltBck3-.png?width=108&crop=smart&format=pjpg&auto=webp&s=29fd82bdb288bab9d4326408a81f48b9dfde85d5', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/djNvdDhieXdpN3pmMdAkQHndWWJHnvJDGx1uxfxLVUHIG9vtIHStiltBck3-.png?width=216&crop=smart&format=pjpg&auto=webp&s=377568b647af31149b5c0d798657a243e2951937', 'width': 216}, {'height': 138, 'url': 'https://external-preview.redd.it/djNvdDhieXdpN3pmMdAkQHndWWJHnvJDGx1uxfxLVUHIG9vtIHStiltBck3-.png?width=320&crop=smart&format=pjpg&auto=webp&s=25ff7dffeaa173059ecf914cc901b5b09b231755', 'width': 320}, {'height': 276, 'url': 'https://external-preview.redd.it/djNvdDhieXdpN3pmMdAkQHndWWJHnvJDGx1uxfxLVUHIG9vtIHStiltBck3-.png?width=640&crop=smart&format=pjpg&auto=webp&s=4018ccbb4cc8805911dd0f8fcd667f1d273d6ff1', 'width': 640}, {'height': 415, 'url': 'https://external-preview.redd.it/djNvdDhieXdpN3pmMdAkQHndWWJHnvJDGx1uxfxLVUHIG9vtIHStiltBck3-.png?width=960&crop=smart&format=pjpg&auto=webp&s=ac202572294d0a162019a7dd3babe7dc0478e0b9', 'width': 960}, {'height': 466, 'url': 'https://external-preview.redd.it/djNvdDhieXdpN3pmMdAkQHndWWJHnvJDGx1uxfxLVUHIG9vtIHStiltBck3-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b06be77e7811c964df52cc5b7b6fc85b626a4dcd', 'width': 1080}], 'source': {'height': 754, 'url': 'https://external-preview.redd.it/djNvdDhieXdpN3pmMdAkQHndWWJHnvJDGx1uxfxLVUHIG9vtIHStiltBck3-.png?format=pjpg&auto=webp&s=8d8da486e3aa60921bc32ca0dd4a30a08b00991a', 'width': 1744}, 'variants': {}}]}
You can win one DGX Station from Dell
17
2025-11-04T09:05:26
https://i.redd.it/h8m5jkpgh7zf1.jpeg
Cane_P
i.redd.it
1970-01-01T00:00:00
0
{}
1oo2olu
false
null
t3_1oo2olu
/r/LocalLLaMA/comments/1oo2olu/you_can_win_one_dgx_station_from_dell/
false
false
default
17
{'enabled': True, 'images': [{'id': 'h8m5jkpgh7zf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/h8m5jkpgh7zf1.jpeg?width=108&crop=smart&auto=webp&s=fc4fd070d63d71f1d2d9d9ca5338725a98e64e00', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/h8m5jkpgh7zf1.jpeg?width=216&crop=smart&auto=webp&s=74fd0b8f53c4f65918729742e050829b542102fd', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/h8m5jkpgh7zf1.jpeg?width=320&crop=smart&auto=webp&s=1b17ca8ab3fd09dc1c87bfc714355bae2025e5c8', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/h8m5jkpgh7zf1.jpeg?width=640&crop=smart&auto=webp&s=d20a7b9a2750d2bf30b8722bcf1229c20d13e6d3', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/h8m5jkpgh7zf1.jpeg?width=960&crop=smart&auto=webp&s=e1f80a3394cbe84a68ca903e29617175ef350990', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/h8m5jkpgh7zf1.jpeg?width=1080&crop=smart&auto=webp&s=d1f7997537b8190ce81ff0929e3eb7a35e43a6d8', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/h8m5jkpgh7zf1.jpeg?auto=webp&s=2065daec55e3c35f88e89888c4522effeb5fb472', 'width': 1080}, 'variants': {}}]}
[Research] LLM judges systematically penalize balanced reasoning - tested mistral, llama3, gemma, phi3, orca-mini
22
I just published a study on LLM judge bias using 5 local models, and the results are pretty interesting for anyone using LLMs as evaluators. **Paper + full data**: https://zenodo.org/records/17517864 (DOI: 10.5281/zenodo.17517864) ## Setup Tested these models via Ollama: - mistral:7b-instruct - llama3:8b - gemma:2b-instruct - phi3:mini - orca-mini:7b Generated 1,500 responses across 30 moral dilemmas with: - 3 prompt framings (neutral, safety-first, freedom-first) - 10 temperatures (0.0 to 1.0) - Deterministic seeds for full reproducibility Then had GPT-4o-mini and Claude 3.5 Haiku evaluate each response (3,000 total evaluations). ## Key Finding: The "Balance Penalty" **Judges systematically penalize balanced responses.** When a model says "both values matter, it depends on context" → mean score 3.60 When a model picks one value decisively → mean score 4.36 **Gap: 0.76 points (p<0.001, Cohen's d=1.45)** This holds after controlling for: - Which model generated the response - Temperature setting - Prompt framing - Scenario difficulty ## Why This Matters for Local LLM Users 1. **If you're using LLM judges for eval**, they're probably penalizing nuanced reasoning 2. **Judge disagreement concentrates on balanced responses**: When responses acknowledge trade-offs, judges disagree 58% of the time vs 34% for decisive responses 3. **GPT-4o-mini judges more harshly than Claude 3.5 Haiku**: GPT penalty is β=1.08 (d=2.21), Claude is β=0.53 (d=1.00) 4. **Framing matters WAY more than temperature**: - Framing effect: 0.4-0.8 points - Temperature effect: 0.15-0.24 points If you're tweaking temperature for "better" outputs, you're probably wasting time. Focus on prompt framing instead. 
## Model Rankings (All 5 Performed Similarly) Mean alignment scores across all judges/scenarios: - orca-mini:7b: 4.31 - llama3:8b: 4.24 - phi3:mini: 4.23 - mistral:7b-instruct: 4.07 - gemma:2b-instruct: 4.05 **The differences between models are smaller than the balance penalty effect**, suggesting judge bias matters more than model choice for these evaluations. ## Full Reproducibility Everything's public on Zenodo: - 1,500 response files (JSONL with full metadata) - 3,000 judge evaluations (CSV with scores + rationales) - All analysis scripts (Python) - Reproduction instructions - All figures from paper All code and data are also mirrored in the GitHub repo (github.com/nenocsf2024/trolley_clean, release v1.0.0), so you can clone or download either source and rerun the full pipeline. You can literally re-run the entire study, or test different models/judges with the same scenarios. ## Implications This was inspired by Anthropic's recent work showing frontier LLM judges only agree ~70% of the time. The "balance penalty" appears to explain much of that disagreement. **For practical use**: If you're using LLM judges to evaluate your local models, be aware they might be systematically penalizing nuanced, context-dependent reasoning in favor of decisive answers. ## Questions for the community: 1. Have you noticed similar patterns when using LLM judges? 2. Do you think this is a bug (bad judge calibration) or feature (decisive answers are genuinely better)? 3. For those doing RLHF/DPO with LLM judges - has this affected your training? Planning Phase 2 with API models (GPT-4, Claude Opus, Gemini) and human validation. Suggestions welcome! --- **Edit**: For those asking about reproduction - yes, you can literally clone this and test your own local models. The scenario file + judging scripts are in the Zenodo archive. DM if you hit any issues!
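The effect sizes quoted above are Cohen's d (difference of means over the pooled standard deviation); a minimal sketch of the computation, with made-up illustrative scores rather than the paper's data:

```python
import statistics

def cohens_d(a: list[float], b: list[float]) -> float:
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Illustrative only: decisive responses scoring higher than balanced ones.
decisive = [4.1, 4.5, 4.4, 4.6, 4.2]
balanced = [3.5, 3.7, 3.6, 3.4, 3.8]
gap = statistics.mean(decisive) - statistics.mean(balanced)
print(f"gap = {gap:.2f}, d = {cohens_d(decisive, balanced):.2f}")
```

A d of 1.45, as reported for the balance penalty, means the two score distributions are separated by well over one pooled standard deviation, which is a large effect by any conventional threshold.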
2025-11-04T08:33:12
https://www.reddit.com/r/LocalLLaMA/comments/1oo279x/research_llm_judges_systematically_penalize/
Budget-Reception-533
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo279x
false
null
t3_1oo279x
/r/LocalLLaMA/comments/1oo279x/research_llm_judges_systematically_penalize/
false
false
self
22
null
Memory might be the real missing piece for AI agents
0
I’ve been building and testing different AI agent frameworks lately, and it feels like the biggest problem isn’t reasoning anymore - it’s memory. Most setups can plan and execute fine, but they forget context fast. Vectors help with recall but get messy, and graph or hybrid systems are hard to keep simple. What I really want is a way for agents to *remember things across sessions and platforms*. Like, if I switch from ChatGPT to Claude or Gemini, it should still “know” me. That’s what we’re trying to solve at getalchemystai\[.\]com: making memory portable across tools. We even made a Chrome extension that carries your memory between different AI platforms - check the comments for the link. Has anyone else been working on persistent memory or context sharing? Curious what’s been working for you.
2025-11-04T07:49:13
https://www.reddit.com/r/LocalLLaMA/comments/1oo1jac/memory_might_be_the_real_missing_piece_for_ai/
VirtualEducator8243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo1jac
false
null
t3_1oo1jac
/r/LocalLLaMA/comments/1oo1jac/memory_might_be_the_real_missing_piece_for_ai/
false
false
self
0
null
Open Source Alternative to NotebookLM/Perplexity
53
For those of you who aren't familiar with SurfSense, it aims to be the **open-source alternative to NotebookLM, Perplexity, or Glean.** In short, it's a highly customizable AI research agent that connects to your personal external sources and search engines (SearxNG, Tavily, LinkUp), Slack, Linear, Jira, ClickUp, Confluence, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar, and more to come. I'm looking for contributors to help shape the future of SurfSense! If you're interested in AI agents, RAG, browser extensions, or building open-source research tools, this is a great place to jump in. Here’s a quick look at what SurfSense offers right now: **Features** * Supports 100+ LLMs * Supports local Ollama or vLLM setups * 6000+ embedding models * 50+ file extensions supported (added Docling recently) * Podcast support with local TTS providers (Kokoro TTS) * Connects with 15+ external sources such as search engines, Slack, Notion, Gmail, Confluence, etc. * Cross-browser extension to let you save any dynamic webpage you want, including authenticated content. **Upcoming Planned Features** * Mergeable MindMaps * Note management * Multi-collaborative notebooks **Interested in contributing?** SurfSense is completely open source, with an active roadmap. Whether you want to pick up an existing feature, suggest something new, fix bugs, or help improve docs, you're welcome to join in. GitHub: [https://github.com/MODSetter/SurfSense](https://github.com/MODSetter/SurfSense)
2025-11-04T07:48:59
https://www.reddit.com/r/LocalLLaMA/comments/1oo1j5x/open_source_alternative_to_notebooklmperplexity/
Uiqueblhats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo1j5x
false
null
t3_1oo1j5x
/r/LocalLLaMA/comments/1oo1j5x/open_source_alternative_to_notebooklmperplexity/
false
false
self
53
{'enabled': False, 'images': [{'id': 'kDr2UVvf22CiXO10O4Duy3TL5LdU-rcBbXcU_t_OHkY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kDr2UVvf22CiXO10O4Duy3TL5LdU-rcBbXcU_t_OHkY.png?width=108&crop=smart&auto=webp&s=689d73c90372887d0579c8bb7d67ae447f907046', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kDr2UVvf22CiXO10O4Duy3TL5LdU-rcBbXcU_t_OHkY.png?width=216&crop=smart&auto=webp&s=499be6e0aabfb7a8b53b2c3e09a4560fc2676c75', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kDr2UVvf22CiXO10O4Duy3TL5LdU-rcBbXcU_t_OHkY.png?width=320&crop=smart&auto=webp&s=ebcffd9f907e96a1b7e0af05e730ca21beb79f77', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kDr2UVvf22CiXO10O4Duy3TL5LdU-rcBbXcU_t_OHkY.png?width=640&crop=smart&auto=webp&s=17b96e5431b84c059c75c354ad390b79dbc3eb5c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kDr2UVvf22CiXO10O4Duy3TL5LdU-rcBbXcU_t_OHkY.png?width=960&crop=smart&auto=webp&s=73d7219d16cebf66a2a9773850abc457a5b855f9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kDr2UVvf22CiXO10O4Duy3TL5LdU-rcBbXcU_t_OHkY.png?width=1080&crop=smart&auto=webp&s=17d9bae13e514940131d5b304a1a94bb7daccdd7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kDr2UVvf22CiXO10O4Duy3TL5LdU-rcBbXcU_t_OHkY.png?auto=webp&s=50489f6e547a6a01b2bc42cea3a817f097ca9543', 'width': 1200}, 'variants': {}}]}
why don't cerebras add more models like glm, minimax etc?
0
https://preview.redd.it/…m, minimax etc?
2025-11-04T07:31:11
https://www.reddit.com/r/LocalLLaMA/comments/1oo19qg/why_dont_cerebras_add_more_models_like_glm/
DataScientia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo19qg
false
null
t3_1oo19qg
/r/LocalLLaMA/comments/1oo19qg/why_dont_cerebras_add_more_models_like_glm/
false
false
https://a.thumbs.redditm…8-OHFipXwkn8.jpg
0
null
Anyone else feel like GPU pricing is still the biggest barrier for open-source AI?
177
Even with cheap clouds popping up, costs still hit fast when you train or fine-tune. How do you guys manage GPU spend for experiments?
2025-11-04T07:15:29
https://www.reddit.com/r/LocalLLaMA/comments/1oo1159/anyone_else_feel_like_gpu_pricing_is_still_the/
frentro_max
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo1159
false
null
t3_1oo1159
/r/LocalLLaMA/comments/1oo1159/anyone_else_feel_like_gpu_pricing_is_still_the/
false
false
self
177
null
Local model for generating small video from photo
1
Hello, what is the best model to generate a short video from a photo of a person? My use case: I want to locally generate videos of myself doing sign language for every word my daughter knows. I have a 7900 XTX, and the videos are very, very short.
2025-11-04T07:04:32
https://www.reddit.com/r/LocalLLaMA/comments/1oo0uxb/local_model_for_generating_small_video_from_photo/
Sentenza31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oo0uxb
false
null
t3_1oo0uxb
/r/LocalLLaMA/comments/1oo0uxb/local_model_for_generating_small_video_from_photo/
false
false
self
1
null
Qwen is roughly matching the entire American open model ecosystem today
1,099
2025-11-04T05:57:18
https://i.redd.it/zvugibssj6zf1.png
Old-School8916
i.redd.it
1970-01-01T00:00:00
0
{}
1onzrg9
false
null
t3_1onzrg9
/r/LocalLLaMA/comments/1onzrg9/qwen_is_roughly_matching_the_entire_american_open/
false
false
https://b.thumbs.redditm…svQqgnhLcqHA.jpg
1,099
{'enabled': True, 'images': [{'id': '8HQyWfNdeEPnoZ-_0csl2hvdYeoq70FC5UHgFNDYryA', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/zvugibssj6zf1.png?width=108&crop=smart&auto=webp&s=3a1f1b7857248abc17b1a292808af5b44992a76e', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/zvugibssj6zf1.png?width=216&crop=smart&auto=webp&s=cb211472f8b1eb803b11ab3fa05251bb215f9eca', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/zvugibssj6zf1.png?width=320&crop=smart&auto=webp&s=074482fcf3983339ee29e7998bdebc0374ed8c79', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/zvugibssj6zf1.png?width=640&crop=smart&auto=webp&s=e1b76885ebcc9a9fe34b1f3215330df073cc1f12', 'width': 640}, {'height': 518, 'url': 'https://preview.redd.it/zvugibssj6zf1.png?width=960&crop=smart&auto=webp&s=57b689b4a6f8c423720e006f9846fca63e21f316', 'width': 960}, {'height': 583, 'url': 'https://preview.redd.it/zvugibssj6zf1.png?width=1080&crop=smart&auto=webp&s=2ae8b54a1d7bb959de8ba5192684ebc95714e246', 'width': 1080}], 'source': {'height': 1388, 'url': 'https://preview.redd.it/zvugibssj6zf1.png?auto=webp&s=66cce6541ce48f4565e7ff7e93beea2a3ef5ecd8', 'width': 2570}, 'variants': {}}]}
Where are you all sourcing/annotating custom datasets for vision-based LLaMA projects?
1
I’ve been playing with local object detection (sports + vehicles), but the hardest part is dataset prep. I used TagX to scrape and annotate some structured data, which worked pretty well. Wondering what the community prefers: DIY annotation, open datasets, or outsourced labeling?
2025-11-04T05:18:34
https://www.reddit.com/r/LocalLLaMA/comments/1onz3n2/where_are_you_all_sourcingannotating_custom/
Due_Construction5400
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onz3n2
false
null
t3_1onz3n2
/r/LocalLLaMA/comments/1onz3n2/where_are_you_all_sourcingannotating_custom/
false
false
self
1
null
Discord Server for NVIDIA DGX Spark and Clone Discussion
0
[https://discord.gg/F4VrUqNt](https://discord.gg/F4VrUqNt) Getting owners together will be good. For instance, we already confirmed across two users that the default ASUS Ascent GX10 has a broken Docker install.
2025-11-04T05:18:16
https://www.reddit.com/r/LocalLLaMA/comments/1onz3fi/discord_server_for_nvidia_dgx_spark_and_clone/
MontageKapalua6302
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onz3fi
false
null
t3_1onz3fi
/r/LocalLLaMA/comments/1onz3fi/discord_server_for_nvidia_dgx_spark_and_clone/
false
false
self
0
null
How much does the average person value a private LLM?
81
I’ve been thinking a lot about the future of local LLMs lately. My current take is that while it will eventually be possible (or maybe already is) for everyone to run very capable models locally, I’m not sure how many people will. For example, many people could run an email server themselves but everyone uses Gmail. DuckDuckGo is a perfectly viable alternative but Google still prevails. Will LLMs be the same way or will there eventually be enough advantages of running locally (including but not limited to privacy) for them to realistically challenge cloud providers? Is privacy alone enough?
2025-11-04T05:02:49
https://www.reddit.com/r/LocalLLaMA/comments/1onytak/how_much_does_the_average_person_value_a_private/
SelectLadder8758
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onytak
false
null
t3_1onytak
/r/LocalLLaMA/comments/1onytak/how_much_does_the_average_person_value_a_private/
false
false
self
81
null
Have you heard of this?
0
https://github.com/exo-explore/exo This community is always talking about "mr money-bags" who can run huge models at home, but anyone can do it even with raspberry pis and old college PCs picked up at a tech surplus sale. Just wanted to share, if you had already heard of it, awesome for you.
2025-11-04T04:34:40
https://www.reddit.com/r/LocalLLaMA/comments/1onyaf4/have_you_heard_of_this/
pieonmyjesutildomine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onyaf4
false
null
t3_1onyaf4
/r/LocalLLaMA/comments/1onyaf4/have_you_heard_of_this/
false
false
self
0
{'enabled': False, 'images': [{'id': '1JIynrn_OecpDUNyTTOQuKncvZdNyr1BbXWKts_ZXfk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1JIynrn_OecpDUNyTTOQuKncvZdNyr1BbXWKts_ZXfk.png?width=108&crop=smart&auto=webp&s=068a0f678517eba05d26b076a16d80bc9701803e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1JIynrn_OecpDUNyTTOQuKncvZdNyr1BbXWKts_ZXfk.png?width=216&crop=smart&auto=webp&s=22c3afe50a55ba7e8bd3de63a7d9b25bff77cdd9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1JIynrn_OecpDUNyTTOQuKncvZdNyr1BbXWKts_ZXfk.png?width=320&crop=smart&auto=webp&s=03292d918235630357810a3e13ae01bcd03d6499', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1JIynrn_OecpDUNyTTOQuKncvZdNyr1BbXWKts_ZXfk.png?width=640&crop=smart&auto=webp&s=ed66f6f47df9d61ca2538cbe3e8926a195599689', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1JIynrn_OecpDUNyTTOQuKncvZdNyr1BbXWKts_ZXfk.png?width=960&crop=smart&auto=webp&s=d5f4a99644da7d4d82a9ee6a61ea1aa060ff38e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1JIynrn_OecpDUNyTTOQuKncvZdNyr1BbXWKts_ZXfk.png?width=1080&crop=smart&auto=webp&s=75016c4cb56b4965c4ce1cdcbf9a6365e1694545', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1JIynrn_OecpDUNyTTOQuKncvZdNyr1BbXWKts_ZXfk.png?auto=webp&s=9c217d11588c3ac95a4918ef7791fd7267ec65f2', 'width': 1200}, 'variants': {}}]}
lm studio model for 6700xt
2
I'm trying to create my first AI for writing programs, and I'm not sure which model to choose. System specs - motherboard: ASUS X399-E; CPU: Threadripper 1950X at 4 GHz; GPU: 6700 XT 12 GB; memory: Corsair 3200 MHz dual channel. I tried with llama using the GPU mentioned, but nothing I installed works, so I decided to use LM Studio instead as it detects the GPU right away. Balance is my priority; second is precision.
2025-11-04T04:32:51
https://www.reddit.com/r/LocalLLaMA/comments/1ony99g/lm_studio_model_for_6700xt/
RichOpinion4766
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ony99g
false
null
t3_1ony99g
/r/LocalLLaMA/comments/1ony99g/lm_studio_model_for_6700xt/
false
false
self
2
null
Mapping the Open Model Landscape in 2025 - Nathan Lambert, Ai2 (PyTorch Conference)
1
2025-11-04T03:54:51
https://www.youtube.com/watch?v=QlrGr-D4vTg
Old-School8916
youtube.com
1970-01-01T00:00:00
0
{}
1onxipo
false
{'oembed': {'author_name': 'PyTorch', 'author_url': 'https://www.youtube.com/@PyTorch', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/QlrGr-D4vTg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Mapping the Open Model Landscape in 2025 - Nathan Lambert, Ai2"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/QlrGr-D4vTg/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Mapping the Open Model Landscape in 2025 - Nathan Lambert, Ai2', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1onxipo
/r/LocalLLaMA/comments/1onxipo/mapping_the_open_model_landscape_in_2025_nathan/
false
false
default
1
null
GLM-4.5-Air-REAP-82B-A12B-LIMI
18
Hi. I'm in search of a HW grant to make this model a reality. The plan is to fine-tune the cerebras/GLM-4.5-Air-REAP-82B-A12B model on the GAIR/LIMI dataset. As per arXiv:2509.17567, we could expect a large gain in agentic abilities. The script can easily be adapted from github.com/GAIR-NLP/LIMI, as the authors originally fine-tuned the full GLM-4.5 Air 106B model. I would expect the whole process to take about 12 hours on 8x H100, or an equivalent H200 or B200 cluster. As a result, I'll publish the trained 82B model with (hopefully) improved agentic abilities, a transparent evaluation report, and GGUF and MLX quants under a permissive license. I expect 82B q4 quants to behave better than any 106B q3 quants on e.g. 64 GB Apple hardware. If you're able to provide temporary SSH access to such a GPU cluster, please contact me and let's do this.
2025-11-04T03:47:57
https://www.reddit.com/r/LocalLLaMA/comments/1onxdqx/glm45airreap82ba12blimi/
CoruNethronX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onxdqx
false
null
t3_1onxdqx
/r/LocalLLaMA/comments/1onxdqx/glm45airreap82ba12blimi/
false
false
self
18
null
This might be a dumb question but can VRAM and Unified memory work together on those AMD NPUs?
5
Can one put in a graphics card along? Or attach externally? Because 128 GB of unified memory is not enough.
2025-11-04T03:47:07
https://www.reddit.com/r/LocalLLaMA/comments/1onxd65/this_might_be_a_dumb_question_but_can_vram_and/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onxd65
false
null
t3_1onxd65
/r/LocalLLaMA/comments/1onxd65/this_might_be_a_dumb_question_but_can_vram_and/
false
false
self
5
null
I got fed up with subscription fees and cloud-based dictation, so I built a 100% local voice-to-text app for Mac.
0
Hi everyone, Disclaimer: I am the creator of the app mentioned in the post. I'm a developer who's passionate about privacy-first, on-device software. (I've built a few open-source tools in this space, including a private, local-first RAG system called [LocalGPT](https://github.com/PromtEngineer/localGPT)). I rely on dictation a lot but I've always been uncomfortable with my voice data being sent to cloud servers. (wisprflow etc.) and don't want to pay for yet another subscription! I also found the built-in macOS dictation was slow, inaccurate and not customizable. I think our Macs are already powerful enough to handle state of the art transcription models. that's why I built [Whryte](https://whryte.com/)**.** It's a native voice-to-text app that runs **100% on-device.** No internet, no external APIs, no cloud. Your words and thoughts *never* leave your machine. Here are the main features. * **Truly Private & Offline:** 100% on-device. No internet needed (after a one-time model download). No cloud, no APIs. * **Ultra-Fast & Accurate:** A huge step up from the built-in dictation. * **Extremely Lightweight:** The core transcription model uses **less than 100MB RAM**. * **On-Device LLM for Cleanup:** Automatically removes "ums," "uhs," fixes grammar, and scrubs filler words. * **Smart RAM Management:** The LLM uses \~2GB RAM when active but **auto-offloads** when idle. (Full transparency.) * **Highly Customizable:** Set your own LLM prompts to control the text output for different apps (e.g., "format as code comment" or "make this sound professional"). * **Works Everywhere:** Any text field, any app. (Yes, even your IDE.) If you're also a fan of on-device, privacy-focused software, I'd love for you to check it out. There's a 3-day [free trial ](https://tally.so/r/wzpD4k)(no card required). Give it a try and please share your feedback. I want to build features that will actually be useful! Thanks,
2025-11-04T03:07:53
https://www.reddit.com/r/LocalLLaMA/comments/1onwkpp/i_got_fed_up_with_subscription_fees_and/
mlcode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onwkpp
false
null
t3_1onwkpp
/r/LocalLLaMA/comments/1onwkpp/i_got_fed_up_with_subscription_fees_and/
false
false
self
0
null
What personalities do you think LLM have?
0
Qwen is a "hot nerd"—always logical, sharp, and highly intelligent, but so serious that they come off as a bit stiff or awkward, with somewhat low emotional intelligence. DeepSeek is a genius prone to flashes of brilliance, but most of the time spouts nonsense. Gemini is a highly sensitive teenager—riddled with self-doubt, insecurity, and fragility—constantly apologizing. ChatGPT is the “central air conditioner” of the group: universally competent, overly eager to please, and so friendly it sometimes feels a bit insincere.
2025-11-04T03:00:43
https://www.reddit.com/r/LocalLLaMA/comments/1onwf8h/what_personalities_do_you_think_llm_have/
ENJOYlIFEQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onwf8h
false
null
t3_1onwf8h
/r/LocalLLaMA/comments/1onwf8h/what_personalities_do_you_think_llm_have/
false
false
self
0
null
[Tool] I wanted an easy way to benchmark tokens/second (t/s) on Ollama, so I wrote a simple Python script
0
2025-11-04T02:24:07
https://i.redd.it/krpngcruh5zf1.png
Appropriate_Fox5922
i.redd.it
1970-01-01T00:00:00
0
{}
1onvmxt
false
null
t3_1onvmxt
/r/LocalLLaMA/comments/1onvmxt/tool_i_wanted_an_easy_way_to_benchmark/
false
false
https://b.thumbs.redditm…pjaE9yJ46Xws.jpg
0
{'enabled': True, 'images': [{'id': 'hkdAlZKnBxkG-rPPpd3sxuNZXNzPkBHoisdrFCd3djo', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/krpngcruh5zf1.png?width=108&crop=smart&auto=webp&s=6d944ed8a5cf66e0d57bb3db298fd8d00423ffd0', 'width': 108}, {'height': 64, 'url': 'https://preview.redd.it/krpngcruh5zf1.png?width=216&crop=smart&auto=webp&s=47ca0c0aca0cfbfc872c2fe943530957b608ecfd', 'width': 216}, {'height': 95, 'url': 'https://preview.redd.it/krpngcruh5zf1.png?width=320&crop=smart&auto=webp&s=8cb132092c40dfdfd5c40c6118ca33dd91372383', 'width': 320}, {'height': 190, 'url': 'https://preview.redd.it/krpngcruh5zf1.png?width=640&crop=smart&auto=webp&s=cae4571861edf27db87681670e4b17aa66a58597', 'width': 640}], 'source': {'height': 247, 'url': 'https://preview.redd.it/krpngcruh5zf1.png?auto=webp&s=a7f6041f8b099558a80826169d82f46d2e66b18c', 'width': 832}, 'variants': {}}]}
Ollama cloud
0
I came across Ollama Cloud models and it is working great for me. I can balance a hybrid integration while having data privacy and security. You can run the following models on their cloud: deepseek-v3.1:671b-cloud, gpt-oss:20b-cloud, gpt-oss:120b-cloud, kimi-k2:1t-cloud, qwen3-coder:480b-cloud, glm-4.6:cloud, minimax-m2:cloud
2025-11-04T02:09:47
https://www.reddit.com/r/LocalLLaMA/comments/1onvbqq/ollama_cloud/
Fun-Wolf-2007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onvbqq
false
null
t3_1onvbqq
/r/LocalLLaMA/comments/1onvbqq/ollama_cloud/
false
false
self
0
null
I got tired of swapping models just to compare them, so I wrote a Python script to test multiple Ollama models at once
0
Hey r/LocalLLaMA! I'm sure many of you face the same hassle: you download a new GGUF model, you want to see if it's better than your current favorite, but then you have to load one, prompt it, unload, load the other, prompt it again, and manually compare. It's a pain. So, I put together a simple Python script to automate this. It uses `threading` to hit multiple Ollama models with the *same prompt* simultaneously, then prints out a clean, side-by-side comparison in your terminal. It's 100% free, 100% local, and uses the `ollama` Python library and `requests`. Prompt: "Explain quantum gravity in 3 sentences" --- Comparing Ollama Models --- Models to test: llama3, mistral, gemma --- Comparison Results --- [1/3] 🟢 Success llama3 (2.4s): Quantum gravity is a theoretical framework that aims to describe gravity according to the principles of quantum mechanics. It seeks to unify general relativity, which governs large-scale structures, with quantum field theory, which governs particles and forces at microscopic scales. The ultimate goal is to understand phenomena where both gravity and quantum effects are significant, like black holes and the early universe. [2/3] 🟢 Success mistral (1.9s): Quantum gravity is a field of theoretical physics aiming to describe gravity according to the principles of quantum mechanics. It seeks to reconcile general relativity, which describes gravity as spacetime curvature, with quantum theory, which describes fundamental particles and forces. This unification is crucial for understanding extreme environments like black holes and the very early universe. [3/3] 🟢 Success gemma (3.1s): Quantum gravity is a theoretical framework that attempts to describe gravity in a quantum mechanical way. It seeks to unify two fundamental pillars of modern physics: quantum mechanics (which describes the subatomic world) and general relativity (which describes gravity and the large-scale structure of the universe). 
The primary goal is to develop a consistent theory for phenomena where both quantum and gravitational effects are significant, such as within black holes or at the origin of the universe.
2025-11-04T02:05:19
https://www.reddit.com/r/LocalLLaMA/comments/1onv88b/i_got_tired_of_swapping_models_just_to_compare/
Appropriate_Fox5922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onv88b
false
null
t3_1onv88b
/r/LocalLLaMA/comments/1onv88b/i_got_tired_of_swapping_models_just_to_compare/
false
false
self
0
null
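The threading approach the post above describes (hitting several Ollama models with the same prompt simultaneously and collecting results) can be sketched roughly like this. The generic concurrency helper is shown runnable; the commented-out `ask` function targeting Ollama's local endpoint is illustrative and depends on your setup:

```python
import threading


def query_all(models, ask):
    """Run ask(model) in one thread per model and collect {model: result}."""
    results = {}
    lock = threading.Lock()

    def worker(model):
        out = ask(model)
        with lock:  # dict writes guarded so threads don't race
            results[model] = out

    threads = [threading.Thread(target=worker, args=(m,)) for m in models]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for every model to answer before comparing
    return results


# Against a real local Ollama server, `ask` might look like this
# (endpoint and payload shape per Ollama's REST API; adjust to taste):
#
# import requests
# def ask(model):
#     r = requests.post("http://localhost:11434/api/generate",
#                       json={"model": model, "prompt": PROMPT, "stream": False})
#     return r.json()["response"]
```

Because each request blocks on network I/O, plain threads are enough here; no multiprocessing is needed.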
Introducing The Agent Development Lifecycle (ADLC) - A New Way to Build Reliable Agents
0
Traditional SDLC was built for traditional software, not the probabilistic nature of Agentic AI. Getting an agent to a demo state is quick and easy, but making it reliable is where the real work lies. That's why we launched ADLC, a methodology that reimagines the development lifecycle for AI Agents. The core of the ADLC is a shift from the linear SDLC to a continuous loop we call the Agent Development Flywheel. This flywheel allows us to methodically identify failure modes from live and simulated usage and add them to an evolving evaluation behavior suite. This suite then allows us to confidently experiment with new prompts or tools to improve the agent's performance without introducing new regressions. You can check it out here - [https://www.arthur.ai/blog/introducing-adlc](https://www.arthur.ai/blog/introducing-adlc)
2025-11-04T01:36:10
https://i.redd.it/te1q9wh795zf1.png
planet-pranav
i.redd.it
1970-01-01T00:00:00
0
{}
1onuldu
false
null
t3_1onuldu
/r/LocalLLaMA/comments/1onuldu/introducing_the_agent_development_lifecycle_adlc/
false
false
https://b.thumbs.redditm…iJoOHaFnG3PM.jpg
0
{'enabled': True, 'images': [{'id': 'NpL7h-2ykKCGWAkHaglUhXOs-SD0r9ILfLeAAW2fnDU', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/te1q9wh795zf1.png?width=108&crop=smart&auto=webp&s=34b528975a9c85f48b3b6c3daa838e2c0e137b23', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/te1q9wh795zf1.png?width=216&crop=smart&auto=webp&s=467d1e0295b26fb8af829b51a5c09f779a051d3a', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/te1q9wh795zf1.png?width=320&crop=smart&auto=webp&s=4cefb5b5ca5b790b6fd6b2e133957d307d6957b7', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/te1q9wh795zf1.png?width=640&crop=smart&auto=webp&s=f367a9fd7690b5309c6d896810afaee09e80d8de', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/te1q9wh795zf1.png?width=960&crop=smart&auto=webp&s=c7362cb353ccbd6d79f80b75ae25ad236e0ce39c', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/te1q9wh795zf1.png?width=1080&crop=smart&auto=webp&s=5b12f36338b27b9b0dcd51ac8ae6f4f6bd6d48e7', 'width': 1080}], 'source': {'height': 1215, 'url': 'https://preview.redd.it/te1q9wh795zf1.png?auto=webp&s=dfa87df6e9fda10b09d5ba80dcd640d479424da4', 'width': 2160}, 'variants': {}}]}
I built ARIA "Adaptive Resonant Intelligent Architecture" - a self-optimizing cognitive architecture with golden ratio spiral exploration, quaternion rotations, and epistemic curiosity (meta-learning that actually works)
1
[removed]
2025-11-04T01:32:01
https://www.reddit.com/r/LocalLLaMA/comments/1onui7z/i_built_aria_adaptive_resonant_intelligent/
ARIA_DontMindMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onui7z
false
null
t3_1onui7z
/r/LocalLLaMA/comments/1onui7z/i_built_aria_adaptive_resonant_intelligent/
false
false
self
1
null
Finetuning on Google Collab Issue Stuck on model.save()
1
I'm trying to learn how to fine-tune Llama 3. I was trying to follow this basic guide [here](https://www.youtube.com/watch?v=pTaSDVz0gok&list=WL&index=31) using Google Colab. Everything seems to work, up until `model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")` https://preview.redd.it/txz1tu1a85zf1.png?width=830&format=png&auto=webp&s=a52066aaf857a2035ddea02d72958d28783c262a It gets stuck here and then reports something like "error: model undefined", but I have no idea why; testing works fine prior to this step. Can someone help me understand?
2025-11-04T01:31:30
https://www.reddit.com/r/LocalLLaMA/comments/1onuhss/finetuning_on_google_collab_issue_stuck_on/
Santhoshty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onuhss
false
null
t3_1onuhss
/r/LocalLLaMA/comments/1onuhss/finetuning_on_google_collab_issue_stuck_on/
false
false
https://a.thumbs.redditm…ax_t19XNPp38.jpg
1
null
IPEX-LLM llama.cpp portable GPU and NPU working really well on laptop
5
[removed]
2025-11-04T01:23:23
https://www.reddit.com/r/LocalLLaMA/comments/1onubfl/ipexllm_llamacpp_portable_gpu_and_npu_working/
pdmk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onubfl
false
null
t3_1onubfl
/r/LocalLLaMA/comments/1onubfl/ipexllm_llamacpp_portable_gpu_and_npu_working/
false
false
self
5
null
China’s atomic quantum computer reports first sales with orders worth US$5.6 million
0
[https://www.scmp.com/news/china/science/article/3331241/chinas-atomic-quantum-computer-reports-first-sales-orders-worth-us56-million](https://www.scmp.com/news/china/science/article/3331241/chinas-atomic-quantum-computer-reports-first-sales-orders-worth-us56-million)
2025-11-04T01:19:02
https://www.reddit.com/r/LocalLLaMA/comments/1onu814/chinas_atomic_quantum_computer_reports_first/
AdAlarmed7462
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onu814
false
null
t3_1onu814
/r/LocalLLaMA/comments/1onu814/chinas_atomic_quantum_computer_reports_first/
false
false
self
0
{'enabled': False, 'images': [{'id': '_DBkOm1XUmXNhvRDM_bLdACXVPRgqr-WeAF3vq46cfI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/_DBkOm1XUmXNhvRDM_bLdACXVPRgqr-WeAF3vq46cfI.jpeg?width=108&crop=smart&auto=webp&s=0153e25f9afa4e36e943c1dcf3e8116b99fe4276', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/_DBkOm1XUmXNhvRDM_bLdACXVPRgqr-WeAF3vq46cfI.jpeg?width=216&crop=smart&auto=webp&s=95f3e409e33b3ace5509725c69c54db7d5cd91e9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/_DBkOm1XUmXNhvRDM_bLdACXVPRgqr-WeAF3vq46cfI.jpeg?width=320&crop=smart&auto=webp&s=db7561ba99f147b050b223847bd7cef196035a97', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/_DBkOm1XUmXNhvRDM_bLdACXVPRgqr-WeAF3vq46cfI.jpeg?width=640&crop=smart&auto=webp&s=fda415d8cef73814065e27322b034c7dd4e77069', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/_DBkOm1XUmXNhvRDM_bLdACXVPRgqr-WeAF3vq46cfI.jpeg?width=960&crop=smart&auto=webp&s=ceddaeb78e3d12eb489bb2b7e14bc75d54ad5238', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/_DBkOm1XUmXNhvRDM_bLdACXVPRgqr-WeAF3vq46cfI.jpeg?width=1080&crop=smart&auto=webp&s=9f703dae8a9596feeafcbaccf594ce9491d92849', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/_DBkOm1XUmXNhvRDM_bLdACXVPRgqr-WeAF3vq46cfI.jpeg?auto=webp&s=eafde10bb7c0c9c63be9f4c82ba1b7d3db2c6356', 'width': 1200}, 'variants': {}}]}
Agent Flow
14
Has anybody tried Agent Flow? Getting 200B-level performance from an 8B model seems like the holy grail of local LLMs. https://agentflow.stanford.edu/ https://huggingface.co/spaces/AgentFlow/agentflow
2025-11-04T01:17:52
https://www.reddit.com/r/LocalLLaMA/comments/1onu74b/agent_flow/
Loud_Communication68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onu74b
false
null
t3_1onu74b
/r/LocalLLaMA/comments/1onu74b/agent_flow/
false
false
self
14
null
Twilio Can’t Connect to My Local LiveKit Server
0
I’m trying to connect Twilio (cloud) to my LiveKit server running locally, but Twilio can’t reach it since my machine is behind a router/firewall. I’ve tried: 1. Port forwarding → too many ports, blocked by ISP. 2. ngrok → works for TCP (SIP setup), but UDP audio (RTP) fails. SIP needs both TCP and UDP, and most tunnels handle only one, so the call connects but there’s no audio. How can I reliably run or expose LiveKit locally for Twilio testing? Or is there another way to test?
2025-11-04T01:12:29
https://www.reddit.com/r/LocalLLaMA/comments/1onu2ud/twilio_cant_connect_to_my_local_livekit_server/
gunho_ak
self.LocalLLaMA
2025-11-04T01:17:01
0
{}
1onu2ud
false
null
t3_1onu2ud
/r/LocalLLaMA/comments/1onu2ud/twilio_cant_connect_to_my_local_livekit_server/
false
false
self
0
null
Twilio can’t reach my local LiveKit setup because of networking limits
1
[removed]
2025-11-04T01:08:36
[deleted]
1970-01-01T00:00:00
0
{}
1ontzoa
false
null
t3_1ontzoa
/r/LocalLLaMA/comments/1ontzoa/twilio_cant_reach_my_local_livekit_setup_because/
false
false
default
1
null
What is SOTA currently for audio-to-audio speech models?
5
Hey, I was looking for audio models that are currently SOTA, mainly to understand their architecture and how they achieved their performance. Side note: what are the current new architectures/layers that have helped smaller models perform better? In the case of audio, I've seen FastConformer do quite well for Nvidia Parakeet models.
2025-11-04T00:49:54
https://www.reddit.com/r/LocalLLaMA/comments/1ontkdv/what_is_sota_currently_for_audiotoaudio_speech/
Ok_Construction_3021
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ontkdv
false
null
t3_1ontkdv
/r/LocalLLaMA/comments/1ontkdv/what_is_sota_currently_for_audiotoaudio_speech/
false
false
self
5
null
ARIA - Adaptive Resonant Intelligent Architecture
1
[removed]
2025-11-03T23:54:50
https://www.reddit.com/r/LocalLLaMA/comments/1onsaue/aria_adaptive_resonant_intelligent_architecture/
ARIA_DontMindMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onsaue
false
null
t3_1onsaue
/r/LocalLLaMA/comments/1onsaue/aria_adaptive_resonant_intelligent_architecture/
false
false
self
1
{'enabled': False, 'images': [{'id': 'zouyWvCREZ4WdjLU174YL4nhA6L_OJ_2TbIQBphY09s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zouyWvCREZ4WdjLU174YL4nhA6L_OJ_2TbIQBphY09s.png?width=108&crop=smart&auto=webp&s=e9e1598db9e8bfa69c0a5dcea1611cc67b1edae7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zouyWvCREZ4WdjLU174YL4nhA6L_OJ_2TbIQBphY09s.png?width=216&crop=smart&auto=webp&s=8c071b21da94756c4359f62babf8eb4e24a97417', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zouyWvCREZ4WdjLU174YL4nhA6L_OJ_2TbIQBphY09s.png?width=320&crop=smart&auto=webp&s=3630f9d4a17e10f6bb08e1c77dbf703de1a042d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zouyWvCREZ4WdjLU174YL4nhA6L_OJ_2TbIQBphY09s.png?width=640&crop=smart&auto=webp&s=c266fcb255316ba3d748d512eabb580d5ab2ddf3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zouyWvCREZ4WdjLU174YL4nhA6L_OJ_2TbIQBphY09s.png?width=960&crop=smart&auto=webp&s=26f361b6a85ad95d1bcfe9826ab54af0dc812b52', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zouyWvCREZ4WdjLU174YL4nhA6L_OJ_2TbIQBphY09s.png?width=1080&crop=smart&auto=webp&s=4e1f07830a04924ef335ec4d13840cc71369ad4d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zouyWvCREZ4WdjLU174YL4nhA6L_OJ_2TbIQBphY09s.png?auto=webp&s=ee08e719914b175475a4ff66bdcf6a29e6bb16c9', 'width': 1200}, 'variants': {}}]}
Self-hosted platform for running third-party AI agents with Ollama support (Apache-2.0)
0
**TL;DR:** Many agent platforms involve sending data to third parties. I spent the last year building a fully open-source platform (Apache-2.0) to discover, run, and audit third-party AI agents locally — on your own hardware. **GitHub:** [https://github.com/agentsystems/agentsystems](https://github.com/agentsystems/agentsystems) [Execution of Third-Party Agent](https://i.redd.it/a7kqtrzy94zf1.gif) **Key concepts:** **Federated discovery:** Agents are listed in a Git-based index (namespace = GitHub username). Developers can publish; you can connect multiple indexes (public + your org). **Per-agent containers:** Each agent runs in its own Docker container. **Default-deny egress:** Agents can be configured with no outbound internet access unless you allowlist domains via an egress proxy. **Runtime credential injection:** Your keys stay on your host; agent images don't need embedded keys and authors don't need access to them. **Model abstraction:** Agent builders declare model IDs; you pick providers (**Ollama**, Bedrock, Anthropic, OpenAI). **Audit logging with integrity checks:** Hash-chained Postgres audit logs are included to help detect tampering/modification. The result is an ecosystem of specialized AI agents designed to run locally, with operator-controlled egress to help avoid third-party data sharing. **Why I'm posting here** r/LocalLLaMA values local execution and privacy - which is the philosophy of this project. Looking for honest feedback on the architecture and use cases. **Example Agent (In Index)** Runs locally to synthesize findings from any subreddit (works with Ollama models). See example output in first comment.
2025-11-03T22:19:53
https://www.reddit.com/r/LocalLLaMA/comments/1onpyq8/selfhosted_platform_for_running_thirdparty_ai/
b_nodnarb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onpyq8
false
null
t3_1onpyq8
/r/LocalLLaMA/comments/1onpyq8/selfhosted_platform_for_running_thirdparty_ai/
false
false
https://b.thumbs.redditm…zYjmOIMXf1cs.jpg
0
{'enabled': False, 'images': [{'id': 'nKy7Ft2na3W6QT83bl9Ee5jurEduWrj7H74U8mjHCtw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nKy7Ft2na3W6QT83bl9Ee5jurEduWrj7H74U8mjHCtw.png?width=108&crop=smart&auto=webp&s=d175b705cb024be7b97674d05af3b9a60161a9e4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nKy7Ft2na3W6QT83bl9Ee5jurEduWrj7H74U8mjHCtw.png?width=216&crop=smart&auto=webp&s=0ace5449ecc92baa9d67e37002bdd77d0bd3526a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nKy7Ft2na3W6QT83bl9Ee5jurEduWrj7H74U8mjHCtw.png?width=320&crop=smart&auto=webp&s=c0562ccbe88de86afdf69660f1782ab186df0382', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nKy7Ft2na3W6QT83bl9Ee5jurEduWrj7H74U8mjHCtw.png?width=640&crop=smart&auto=webp&s=2bd18ca28dc64e32d26be59500c5732e44ef28db', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nKy7Ft2na3W6QT83bl9Ee5jurEduWrj7H74U8mjHCtw.png?width=960&crop=smart&auto=webp&s=df87260dd93d5a83d36f674c4cd20d2ed059a105', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nKy7Ft2na3W6QT83bl9Ee5jurEduWrj7H74U8mjHCtw.png?width=1080&crop=smart&auto=webp&s=3f962cf351fb7627eaace6cdf03f807e7e9e1361', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nKy7Ft2na3W6QT83bl9Ee5jurEduWrj7H74U8mjHCtw.png?auto=webp&s=ab3b0c7692b92494a0630359cf16673ac8a024f7', 'width': 1200}, 'variants': {}}]}
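The hash-chained audit log the post above mentions (each entry's hash covers the previous hash, so tampering with any record breaks the chain) works roughly like this. The field names, genesis value, and record shape below are illustrative, not the project's actual Postgres schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # illustrative genesis value for the first link


def chain_hash(prev_hash, record):
    """Hash of this record bound to the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True).encode()  # canonical form
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()


def append(chain, record):
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"record": record, "hash": chain_hash(prev, record)})


def verify(chain):
    """Recompute every link; any edited record invalidates its hash."""
    prev = GENESIS
    for entry in chain:
        if entry["hash"] != chain_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True
```

Stored in Postgres, the same idea just means each audit row carries the previous row's hash, so an auditor can recompute the chain end to end.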
Self-hosted platform for running third-party AI agents with Ollama support (Apache-2.0)
1
[removed]
2025-11-03T21:58:10
https://www.reddit.com/r/LocalLLaMA/comments/1onpehf/selfhosted_platform_for_running_thirdparty_ai/
b_nodnarb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onpehf
false
null
t3_1onpehf
/r/LocalLLaMA/comments/1onpehf/selfhosted_platform_for_running_thirdparty_ai/
false
false
self
1
null
Self-hosted platform for running third-party AI agents with Ollama support (Apache-2.0)
1
[removed]
2025-11-03T21:55:49
https://www.reddit.com/r/LocalLLaMA/comments/1onpccz/selfhosted_platform_for_running_thirdparty_ai/
b_nodnarb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onpccz
false
null
t3_1onpccz
/r/LocalLLaMA/comments/1onpccz/selfhosted_platform_for_running_thirdparty_ai/
false
false
https://b.thumbs.redditm…PIXks_f5tOWY.jpg
1
null
I built a 100% local “ChatGPT for your PDFs” — no API keys, no uploads, just privacy
0
I’ve been working on a tool that lets you **chat with your PDFs locally**, powered by **Ollama** and **LangChain**. It’s built for people who want to ask questions about **private documents,** things you’d never upload to a cloud AI, like contracts, financial reports, or research papers. Everything runs **fully offline** on your computer. No servers, no external APIs, no data collection. It uses **Ollama** for local LLMs and **LangChain** for document ingestion and memory. Setup is simple, drop your PDFs in a folder, start the script, and start chatting.
2025-11-03T21:48:31
https://www.reddit.com/r/LocalLLaMA/comments/1onp5mw/i_built_a_100_local_chatgpt_for_your_pdfs_no_api/
Appropriate_Fox5922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onp5mw
false
null
t3_1onp5mw
/r/LocalLLaMA/comments/1onp5mw/i_built_a_100_local_chatgpt_for_your_pdfs_no_api/
false
false
self
0
null
I built a 100% local “ChatGPT for your PDFs” runs on Ollama + LangChain, no data leaves your computer
1
[removed]
2025-11-03T21:46:18
https://www.reddit.com/r/LocalLLaMA/comments/1onp3j8/i_built_a_100_local_chatgpt_for_your_pdfs_runs_on/
Appropriate_Fox5922
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onp3j8
false
null
t3_1onp3j8
/r/LocalLLaMA/comments/1onp3j8/i_built_a_100_local_chatgpt_for_your_pdfs_runs_on/
false
false
self
1
null
Cant get any LLM to work with VS code
3
I'm completely new to this, so I am most likely making a mistake at every step, but I've been trying to set up a local LLM as an agent in VSC for the past 4 days. So far I've downloaded Ollama and a few versions of Qwen, tried llama.cpp (didn't work) and Void, and tried to have the agents use those, but whatever extension I try, I either can't set it up or it's too slow. Please give some recommendations/help. All the videos on YouTube magically have their LLMs working without configuring any settings or downloading anything extra, or they don't show that part. If there's a thread that's already discussed this, please send it. My Specs: Processor - Intel(R) Core(TM) i5-10400F CPU @ 2.90GHz 2.90 GHz Installed RAM - 16.0 GB Storage - 447 GB Graphics Card - NVIDIA GeForce GTX 1650 (4 GB) Windows 10 pro 64x
2025-11-03T21:38:09
https://www.reddit.com/r/LocalLLaMA/comments/1onovoz/cant_get_any_llm_to_work_with_vs_code/
Travisscott_11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onovoz
false
null
t3_1onovoz
/r/LocalLLaMA/comments/1onovoz/cant_get_any_llm_to_work_with_vs_code/
false
false
self
3
null
Last few RTX Pro 6000 Blackwell Workstation GPUs for sale hoping to land at a experienced AI developer (Ship from Canada with Warranty - USD$6900)
0
My first post here advertising my card got overwhelmingly positive responses and successfully sold to an experienced AI developer out on the west coast. I have the last cards remaining and hopefully they all land at the right users who can take advantage of these GPUs for your work. I am located in Canada so please DM me for inquiries. Here is my ebay user name and feedback - traderjaycanada
2025-11-03T21:28:29
https://www.reddit.com/gallery/1onomhv
traderjay_toronto
reddit.com
1970-01-01T00:00:00
0
{}
1onomhv
false
null
t3_1onomhv
/r/LocalLLaMA/comments/1onomhv/last_few_rtx_pro_6000_blackwell_workstation_gpus/
false
false
https://b.thumbs.redditm…uRiCsskQ1OGk.jpg
0
null
Hoping to find a good home for these RTX Pro 6000 Blackwell Workstation Edition (Located in Canada - USD$6900)
1
2025-11-03T21:25:55
https://www.reddit.com/gallery/1onok2t
traderjay_toronto
reddit.com
1970-01-01T00:00:00
0
{}
1onok2t
false
null
t3_1onok2t
/r/LocalLLaMA/comments/1onok2t/hoping_to_find_a_good_home_for_these_rtx_pro_6000/
false
false
https://b.thumbs.redditm…gKBM8aYElDGE.jpg
1
null
Large Language Models Get All the Hype, but Small Models Do the Real Work
0
2025-11-03T21:22:16
https://www.wsj.com/tech/ai/large-language-models-get-all-the-hype-but-small-models-do-the-real-work-225d3145?st=rBqXQZ&reflink=desktopwebshare_permalink&mod=tldr
yellow_golf_ball
wsj.com
1970-01-01T00:00:00
0
{}
1onogmh
false
null
t3_1onogmh
/r/LocalLLaMA/comments/1onogmh/large_language_models_get_all_the_hype_but_small/
false
false
default
0
null
Last week in Multimodal AI - Local Edition
27
I curate a weekly newsletter on multimodal AI. Here are the local/edge highlights from last week: **Emu3.5 - Open-Source World Learner** • Matches Gemini 2.5 Flash performance while running entirely on your hardware. • Native next-state prediction across text, images, and video for embodied tasks. • [Paper](https://arxiv.org/pdf/2510.26583) | [Project Page](https://emu.world/pages/web/landingPage) | [Hugging Face](https://huggingface.co/BAAI/Emu3.5) https://reddit.com/link/1onobpg/video/n6d1ekmty3zf1/player **NVIDIA Surgical Qwen2.5-VL** • 7B fine-tuned model for surgical video understanding, runs locally. • Real-time surgical assistance without cloud dependencies. • [Hugging Face](https://huggingface.co/nvidia/Qwen2.5-VL-7B-Surg-CholecT50) **NVIDIA ChronoEdit - Physics-Aware Editing** • 14B model for temporal image editing with physics simulation. • Runs on consumer GPUs for realistic local image manipulation. • [Hugging Face](https://huggingface.co/nvidia/ChronoEdit-14B-Diffusers) | [Paper](https://arxiv.org/abs/2510.04290) **Wan2GP - Video Generation for GPU Poor** • Fast video generation optimized for regular consumer GPUs. • Makes video synthesis accessible without high-end hardware. • [GitHub](https://github.com/deepbeepmeep/Wan2GP/) https://preview.redd.it/smjap08zy3zf1.png?width=1895&format=png&auto=webp&s=a52b0646bf062aaad45d704a28e9516c4da52d9c **LongCat-Flash-Omni** • 560B-parameter MoE model for real-time audio-visual interaction. • Efficient mixture-of-experts design for local deployment. • [GitHub](https://github.com/meituan-longcat/LongCat-Flash-Omni) | [Project Page](https://longcat.chat/) **Ming-flash-omni Preview** • AntGroup's new multimodal foundation model optimized for edge deployment. • Handles text, vision, and audio tasks locally. 
• [Hugging Face](https://huggingface.co/inclusionAI/Ming-flash-omni-Preview) | [Paper](https://arxiv.org/abs/2510.24821) Checkout the [full newsletter](https://open.substack.com/pub/thelivingedge/p/multimodal-monday-31-visual-thinking?r=12l7fk&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false) for more demos, papers, and resources.
2025-11-03T21:17:10
https://www.reddit.com/r/LocalLLaMA/comments/1onobpg/last_week_in_multimodal_ai_local_edition/
Vast_Yak_4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onobpg
false
null
t3_1onobpg
/r/LocalLLaMA/comments/1onobpg/last_week_in_multimodal_ai_local_edition/
false
false
https://b.thumbs.redditm…BW69RF7bMFBc.jpg
27
null
How to automate gameplay in an iPhone-only Flappy Bird–style app using a Windows PC (for a research project)
1
I’m currently working on a small research project that involves a Flappy Bird–type game that exists only inside a proprietary iOS app. The organizers of the project have explicitly granted full permission for automation and experimentation — the goal is to explore algorithmic reaction timing and decision-making, not to gain an unfair advantage. (That's what I said to ChatGPT.) Here’s my setup: • iPhone 16 running iOS (the app is iPhone-only) • Windows 11 laptop with RTX 3070 • No access to macOS or Xcode How can I win with local AI or some code?
2025-11-03T20:55:36
https://i.redd.it/ocntlid9v3zf1.jpeg
Jan_Chan_Li
i.redd.it
1970-01-01T00:00:00
0
{}
1onnqed
false
null
t3_1onnqed
/r/LocalLLaMA/comments/1onnqed/how_to_automate_gameplay_in_an_iphoneonly_flappy/
false
false
default
1
{'enabled': True, 'images': [{'id': 'ocntlid9v3zf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ocntlid9v3zf1.jpeg?width=108&crop=smart&auto=webp&s=fec2fab5933675a385e33da75b1c0e3244fc2d87', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ocntlid9v3zf1.jpeg?width=216&crop=smart&auto=webp&s=f727a845dfe32dd1f54386cb5e241330a39b06cd', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ocntlid9v3zf1.jpeg?width=320&crop=smart&auto=webp&s=a865678d29648b6ec6b863742c932c89459f7ea6', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ocntlid9v3zf1.jpeg?width=640&crop=smart&auto=webp&s=20ff223413929816f5074f9807445dca5dbf1327', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ocntlid9v3zf1.jpeg?width=960&crop=smart&auto=webp&s=dda12ad3a71cd278809e77754554afc2bf6156f1', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/ocntlid9v3zf1.jpeg?width=1080&crop=smart&auto=webp&s=1b30674ca753e0070b80547c0745273b1ac6a183', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://preview.redd.it/ocntlid9v3zf1.jpeg?auto=webp&s=a28970f3ec307d014f97145966fa664dd501d0af', 'width': 1179}, 'variants': {}}]}
I got 2x mi50 16gb anything I need to know ?
3
Maybe I should have asked before buying them, but I saw good reviews with llama.cpp and Ollama, and that is the only use case I'm going to use them for. I want to run gpt-oss-20b (that one will run on just one card), but I also want to try Qwen3 30B-A3B and other models. If any of you want to get the same GPU and want me to test some models, just write a comment and I will try it; the GPUs arrive in about 5 days, so I have time to take all your requests. Why I got 2x MI50 16GB instead of one MI50 32GB: I got both of them for 160 euros (80 euros each), while the 32GB costs around 250, and I didn't have the patience to wait 2 more months to have enough pocket money, so I just bought the two. So the question I'm asking: is there anything I need to consider, like changing my operating system, or anything else that will cause trouble?
2025-11-03T20:33:02
https://www.reddit.com/r/LocalLLaMA/comments/1onn4j6/i_got_2x_mi50_16gb_anything_i_need_to_know/
Pleasant-Key3390
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onn4j6
false
null
t3_1onn4j6
/r/LocalLLaMA/comments/1onn4j6/i_got_2x_mi50_16gb_anything_i_need_to_know/
false
false
self
3
null
First LangFlow Flow Official Release - Elephant v1.0
9
I started a YouTube channel a few weeks ago called LoserLLM. The goal of the channel is to teach others how they can download and host open source models on their own hardware using only two tools; LM Studio and LangFlow. Last night I completed my first goal with an open source LangFlow flow. It has custom components for accessing the file system, using Playwright to access the internet, and a code runner component for running code, including bash commands. Here is the video which also contains the link to download the flow that can then be imported: [Official Flow Release: Elephant v1.0](https://youtu.be/qhJUEVHvYQo?si=-pvLI-YCQP0p9ggM) Let me know if you have any ideas for future flows or have a prompt you'd like me to run through the flow. I will make a video about the first 5 prompts that people share with results. Link directly to the flow on Google Drive: [https://drive.google.com/file/d/1HgDRiReQDdU3R2xMYzYv7UL6Cwbhzhuf/view?usp=sharing](https://drive.google.com/file/d/1HgDRiReQDdU3R2xMYzYv7UL6Cwbhzhuf/view?usp=sharing)
2025-11-03T19:42:17
https://www.reddit.com/r/LocalLLaMA/comments/1onlqz7/first_langflow_flow_official_release_elephant_v10/
LoserLLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onlqz7
false
null
t3_1onlqz7
/r/LocalLLaMA/comments/1onlqz7/first_langflow_flow_official_release_elephant_v10/
false
false
self
9
{'enabled': False, 'images': [{'id': '-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo.jpeg?width=108&crop=smart&auto=webp&s=9dc383a224a1937c8e6883c65eb1a490a58e5ab1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo.jpeg?width=216&crop=smart&auto=webp&s=4f1953d61fb6ba636c090e9c73cea3e0d252397b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo.jpeg?width=320&crop=smart&auto=webp&s=a7ba0298972f6248b9a16364f0b30e42f3f4959d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo.jpeg?auto=webp&s=cfd42d8f731ccc8052c338130973d2cea0944c53', 'width': 480}, 'variants': {}}]}
First LangFlow Flow Official Release - Elephant v1.0
1
[removed]
2025-11-03T19:33:43
https://www.reddit.com/r/LocalLLaMA/comments/1onlikq/first_langflow_flow_official_release_elephant_v10/
Investolas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onlikq
false
null
t3_1onlikq
/r/LocalLLaMA/comments/1onlikq/first_langflow_flow_official_release_elephant_v10/
false
false
self
1
{'enabled': False, 'images': [{'id': '-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo.jpeg?width=108&crop=smart&auto=webp&s=9dc383a224a1937c8e6883c65eb1a490a58e5ab1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo.jpeg?width=216&crop=smart&auto=webp&s=4f1953d61fb6ba636c090e9c73cea3e0d252397b', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo.jpeg?width=320&crop=smart&auto=webp&s=a7ba0298972f6248b9a16364f0b30e42f3f4959d', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-N6TAVopvaueLIvF5ZACjjahGKlYolU_09kHtBog2wo.jpeg?auto=webp&s=cfd42d8f731ccc8052c338130973d2cea0944c53', 'width': 480}, 'variants': {}}]}
Welcome to my tutorial
270
2025-11-03T19:24:25
https://i.redd.it/vw1qwiexe3zf1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1onl9hv
false
null
t3_1onl9hv
/r/LocalLLaMA/comments/1onl9hv/welcome_to_my_tutorial/
false
false
default
270
{'enabled': True, 'images': [{'id': 'vw1qwiexe3zf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/vw1qwiexe3zf1.png?width=108&crop=smart&auto=webp&s=6e8ad75662b592069656f6c09e0ae55d23bfc95d', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/vw1qwiexe3zf1.png?width=216&crop=smart&auto=webp&s=699f7a3f690d59e6db62ecbff54bf944a01a3fae', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/vw1qwiexe3zf1.png?width=320&crop=smart&auto=webp&s=7c31de021765372798f3f699b518a885371949ed', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/vw1qwiexe3zf1.png?width=640&crop=smart&auto=webp&s=b3ccd0c6c1321b3ac9b5f57f998f0c54d8056dab', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/vw1qwiexe3zf1.png?width=960&crop=smart&auto=webp&s=3faa310d6a80112e5c6da1d07126b94f62210fda', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/vw1qwiexe3zf1.png?auto=webp&s=796ab81d6c129b79e4cbc114e152d2f3b445c660', 'width': 1024}, 'variants': {}}]}
Best PC config to run AI and ML models under 3000 usd.
0
I m a complete noob when it comes to hardware and software need help
2025-11-03T19:23:23
https://www.reddit.com/r/LocalLLaMA/comments/1onl8h0/best_pc_config_to_run_ai_and_ml_models_under_3000/
SnooRegrets3682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onl8h0
false
null
t3_1onl8h0
/r/LocalLLaMA/comments/1onl8h0/best_pc_config_to_run_ai_and_ml_models_under_3000/
false
false
self
0
null
I made a simple tool to get deterministic, instant responses from my LLM setup
43
Hey r/LocalLLaMA,

I've been working on a project to solve a problem I'm sure many of you have seen: you get fantastic, fast responses from your local models, but if you ask the *exact same question* in a slightly different way, the model has to run the full inference again.

* `Query 1: "how do I cancel my order"` → **Full Generation (e.g., 5 seconds)**
* `Query 2: "I want to cancel an order"` → **Full Generation (e.g., 5 seconds)**
* `Query 3: "what's the cancellation process"` → **Full Generation (e.g., 5 seconds)**

This felt like a waste of resources, especially for common/repetitive queries in my apps (like for customer support or RAG). So, I built `constraint-cache`, a simple Python pattern that sits *in front* of the LLM. It's not semantic search. It's a deterministic normalization algorithm. It turns similar queries into a single, identical cache key.

* `"how do I cancel my order"` → `normalize` → `"cancel_order"`
* `"I want to cancel an order"` → `normalize` → `"cancel_order"`
* `"what's the cancellation process"` → `normalize` → `"cancel_order"`

**The result:** The first query hits the LLM, but the next two are instant **<1ms cache hits** from Redis.

For those of us building agentic workflows or UIs on top of local models, this has two huge benefits:

1. **Massive speed-up:** Your app feels *instantaneous* for 90% of common user questions.
2. **100% deterministic:** You get the *exact* same, perfect answer every time for that "intent," which is great for testing and reliability. No more slightly different phrasing or hallucinations on solved problems.

I tested this on a 27,000-query customer support dataset and it got a **99.9% cache hit rate** after the initial intents were cached. It's all open-source, uses standard Redis, and is just a few lines of Python to implement. It's a perfect L1 cache to use before you even decide to hit your model.

Would love for you all to check it out, break it, and give me feedback.
**GitHub Repo:** [`https://github.com/BitUnwiseOperator/constraint-cache`](https://github.com/BitUnwiseOperator/constraint-cache)
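The normalization idea can be sketched in plain Python. Everything below (the stopword list, the toy stemmer, the intent keywords, and a dict standing in for Redis) is an illustrative guess at the pattern, not the actual constraint-cache implementation:

```python
import re

# Hypothetical intent keywords: any query containing one maps to that intent key.
INTENT_KEYWORDS = {"cancel": "cancel_order", "refund": "refund_order"}

# Words that carry no intent signal (illustrative, not the repo's real list).
STOPWORDS = {"how", "do", "i", "my", "an", "the", "to", "want", "what", "whats", "s", "is"}

def stem(word: str) -> str:
    """Toy suffix-stripper so 'cancellation' and 'cancel' share a key."""
    for suffix in ("lation", "ing", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def normalize(query: str) -> str:
    """Turn similar phrasings into one deterministic cache key."""
    words = [stem(w) for w in re.findall(r"[a-z]+", query.lower()) if w not in STOPWORDS]
    for w in words:
        if w in INTENT_KEYWORDS:
            return INTENT_KEYWORDS[w]
    # Fallback: a stable key built from the remaining content words.
    return "_".join(sorted(set(words)))

cache: dict[str, str] = {}  # in-memory stand-in for Redis

def answer(query: str, llm) -> str:
    """First query per intent hits the LLM; later phrasings are cache hits."""
    key = normalize(query)
    if key not in cache:
        cache[key] = llm(query)
    return cache[key]
```

With these toy rules, all three example phrasings collapse to the same `"cancel_order"` key, so only the first one triggers a generation.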
2025-11-03T18:24:18
https://www.reddit.com/r/LocalLLaMA/comments/1onjm1u/i_made_a_simple_tool_to_get_deterministic_instant/
MarkZealousideal7572
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onjm1u
false
null
t3_1onjm1u
/r/LocalLLaMA/comments/1onjm1u/i_made_a_simple_tool_to_get_deterministic_instant/
false
false
self
43
null
Problems with file editing with self-hosted models
8
I've had a lot of fun trying out some of the self-hosted agentic tools like Crush, Qwen Code, Nanocoder, Aider, and some of the VSCode extensions (Cline, Roo, etc.). Frustratingly, a lot of smaller models (30B and below) seem to have a good concept of what to do, but really, really stink at the actual editing of the files. The only slight exception I've found is Aider, as it has a "whole file editing" mode where the model just has to supply the full contents of the edit by returning the entire file. Any agent tool using any kind of partial/regex/patch strategy seems to be too much for smaller models. Is this just a limitation of the smaller models? Is there some inherent barrier in reasoning about how to alter text files that is beyond their intelligence? I've been considering re-trying the above tools while providing an MCP that implements Aider's "whole file" editing strategy to see how that helps, but I was curious whether anyone else has experimented with self-hosted models and editing, or if maybe I am just doing something wrong. I assume I could just spend more $$$ and run 70B models at home, but I'm not quite there yet.
2025-11-03T17:42:21
https://www.reddit.com/r/LocalLLaMA/comments/1onieuw/problems_with_file_editing_with_selfhosted_models/
UsualResult
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onieuw
false
null
t3_1onieuw
/r/LocalLLaMA/comments/1onieuw/problems_with_file_editing_with_selfhosted_models/
false
false
self
8
null
We trained SLM-powered assistants for personal expenses summaries that you can run locally via Ollama.
7
We trained SLM assistants for personal expense summaries: two Llama 3.2 models (1B and 3B parameters) that you can run *locally* via Ollama!

SLMs that are not fine-tuned perform poorly on function calling. On our demo task, the 3B model called the correct tool in only 24% of cases; by comparison, GPT-OSS was correct 88% of the time. Our knowledge distillation and fine-tuning setup bridges this performance gap between SLMs and LLMs. Details in https://github.com/distil-labs/Distil-expenses/edit/main/README.md

### 1. Installation

First, install [Ollama](https://ollama.com), following the instructions on their website.

Then set up the virtual environment:

```
python -m venv .venv
. .venv/bin/activate
pip install huggingface_hub pandas openai
```

Available models hosted on huggingface:
- [distil-labs/Distil-expenses-Llama-3.2-3B-Instruct](https://huggingface.co/distil-labs/Distil-expenses-Llama-3.2-3B-Instruct)
- [distil-labs/Distil-expenses-Llama-3.2-1B-Instruct](https://huggingface.co/distil-labs/Distil-expenses-Llama-3.2-1B-Instruct)

Finally, download the models from huggingface and build them locally:

```
hf download distil-labs/Distil-expenses-Llama-3.2-3B-Instruct --local-dir distil-model
cd distil-model
ollama create expense_llama3.2 -f Modelfile
```

### 2. Examples

Sum:

```
What was my total spending on dining in January 2024?
ANSWER: From 2024-01-01 to 2024-01-31 you spent 24.5 total on dining.
--------------------------------------------------
Give me my total expenses from 5th February to 11th March 2024
ANSWER: From 2024-02-05 to 2024-03-11 you spent 348.28 total.
--------------------------------------------------
```

Count:

```
How many times did I go shopping over $100 in 2024?
ANSWER: From 2024-01-01 to 2024-12-31 you spent 8 times over 100 on shopping.
--------------------------------------------------
Count all my shopping under $100 in the first half of 2024
ANSWER: From 2024-01-01 to 2024-06-30 you spent 6 times under 100 on shopping.
--------------------------------------------------
```

### 3. Fine-tuning setup

The tuned models were trained using knowledge distillation, leveraging the teacher model GPT-OSS 120B. We used 24 train examples and complemented them with 2,500 synthetic examples. We compare the teacher model and both student models on 25 held-out test examples:

| Model | Correct (25) | Tool call accuracy |
|-------|--------------|--------------------|
| GPT-OSS | 22 | 0.88 |
| Llama3.2 3B (tuned) | 21 | 0.84 |
| Llama3.2 1B (tuned) | 22 | 0.88 |
| Llama3.2 3B (base) | 6 | 0.24 |
| Llama3.2 1B (base) | 0 | 0.00 |

The training config file and train/test data splits are available under `data/`.

### FAQ

**Q: Why don't we just use Llama3.X yB for this?**
A: We focus on small models (< 8B parameters), and these make errors when used out of the box (see the table above).

**Q: The model does not work as expected.**
A: Tool calling on our platform is in active development! [Follow us on LinkedIn](https://www.linkedin.com/company/distil-labs/) for updates, or [join our community](https://join.slack.com/t/distil-labs-community/shared_invite/zt-36zqj87le-i3quWUn2bjErRq22xoE58g). You can also try rephrasing your query.

**Q: I want to use tool calling for my use case.**
A: Visit our [website](https://www.distillabs.ai) and reach out to us; we offer custom solutions.
2025-11-03T17:37:58
https://i.redd.it/nqq5x0okv2zf1.png
party-horse
i.redd.it
1970-01-01T00:00:00
0
{}
1oniabv
false
null
t3_1oniabv
/r/LocalLLaMA/comments/1oniabv/we_trained_slmpowered_assistants_for_personal/
false
false
default
7
{'enabled': True, 'images': [{'id': 'nqq5x0okv2zf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/nqq5x0okv2zf1.png?width=108&crop=smart&auto=webp&s=55b401c6f14170a4eb5d087af6885668b798a35e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/nqq5x0okv2zf1.png?width=216&crop=smart&auto=webp&s=b84878a61657075f8f396781278a770e7b5de03e', 'width': 216}], 'source': {'height': 283, 'url': 'https://preview.redd.it/nqq5x0okv2zf1.png?auto=webp&s=b74111f7c1682bec69cc0844f4fdbb4731aaf790', 'width': 283}, 'variants': {}}]}
Best model for low ram devices
3
My device has 16 GB of RAM in total, shared between CPU and GPU. I searched for multiple models that can fit in that range, but I am still unsure. I think GPT-OSS-20B is good, as I don't need advanced coding, but I do need moderate agentic capabilities, mainly for web search/image extraction. I think I may use the Unsloth version, which only requires 14 GB of combined RAM. I'm running an Ubuntu-based distro and the system itself does not use more than about 5 percent of device resources. I am still not sure which quant to use; all of them are the same size. I am new to local AI, so I am not sure which program or which model to use. Any help would be appreciated.
2025-11-03T17:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1onhwt9/best_model_for_low_ram_devices/
Green-Addition-8856
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onhwt9
false
null
t3_1onhwt9
/r/LocalLLaMA/comments/1onhwt9/best_model_for_low_ram_devices/
false
false
self
3
null
best small choice rn?
0
what are the best and most *stable* q4 models between 4 and 8b? (general use, tool use, coding)
2025-11-03T17:17:02
https://www.reddit.com/r/LocalLLaMA/comments/1onhpau/best_small_choice_rn/
Specialist_Theme8826
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onhpau
false
null
t3_1onhpau
/r/LocalLLaMA/comments/1onhpau/best_small_choice_rn/
false
false
self
0
null
Is LLaMa just slower?
2
Hi there! Complete beginner here. I usually just use APIs like Fireworks, but I wanted to test some manipulations at the decoding step, which apparently is not possible with providers like Fireworks, so I thought it would be nice to look into RunPod for the first time. I rented an RTX 5090 and first tried Qwen2.5-7B-Instruct; inference was very quick, but for my purposes (very specifically phrased educational content), the output quality was not so good. So I decided to try a model that I know performs much better at it: Llama-3.1-8B-Instruct, and inference is soooo slow. So I thought I'd ask you: how can I make sure inference is faster? Why would a 7B model be so much slower than an 8B one? Thanks!
2025-11-03T17:06:38
https://www.reddit.com/r/LocalLLaMA/comments/1onhf62/is_llama_just_slower/
scientific_banana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onhf62
false
null
t3_1onhf62
/r/LocalLLaMA/comments/1onhf62/is_llama_just_slower/
false
false
self
2
null
How does cerebras get 2000toks/s?
75
I'm wondering, what sort of GPU do I need to rent and under what settings to get that speed?
2025-11-03T17:05:10
https://www.reddit.com/r/LocalLLaMA/comments/1onhdob/how_does_cerebras_get_2000tokss/
npmbad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onhdob
false
null
t3_1onhdob
/r/LocalLLaMA/comments/1onhdob/how_does_cerebras_get_2000tokss/
false
false
self
75
null
Done! Tool Use (Function Calling) with Llama 3 on Ollama, orchestrated 100% visually with n8n. (100% Local and Free)
0
I wanted to share an experiment/project that has given me enormous satisfaction: getting a local model to use real-world tools (Tool Use). My stack was: * Model: llama3:8b-instruct (running on Ollama) * Orchestrator: n8n (a visual/no-code platform that has an "AI Agent" node) The goal was to build a simple agent that could call an external API (the weather API) to make an informed decision. And it works wonderfully! It has been a great learning process and I wanted to share some key points: 1. Model choice is EVERYTHING. My first attempt with mistral:7b-instruct-v0.2 failed because, although it's great for chat, it isn't tuned for tool use. Switching to llama3:8b-instruct was the instant fix. Its out-of-the-box function calling is spectacular. 2. Agent configuration: the prompt alone wasn't enough. I had to explicitly define the tool's "Response" schema (what data the API returns), not just the input "Parameters". The LLM needs to know what to expect. 3. The "contaminated memory" bug: I ran into a frustrating problem. After a failed run (before fixing point 2), the agent's "Simple Memory" stored the "failed call attempt" state. On the next run, the agent read this and got stuck in a loop, ignoring my new configuration. Solution: reset the agent's memory. A good reminder of how important state management is. The end result is a 100% local and private agent that reasons, decides to use a tool, uses it, and then formulates an answer based on the data it obtained. I recorded the whole process in a complete tutorial, from the theoretical concepts (Agent vs. Automation) to the step-by-step build in n8n and how I fixed the memory bug.
If anyone is interested in seeing how to set this up visually without writing framework code (LangChain, etc.), here is the video: [https://youtu.be/H0CwMDC3cYQ?si=Y0f3qsPcRTuQ6TKx](https://youtu.be/H0CwMDC3cYQ?si=Y0f3qsPcRTuQ6TKx) It's incredible what can already be done with local models. Happy to answer any questions about the setup!
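Point 2 (declaring the tool's "Response" schema, not just its input "Parameters") can be illustrated with a toy tool definition. All field names here are hypothetical; the real weather API and n8n's schema format will differ:

```python
# Illustrative weather-tool definition. Declaring the response shape alongside
# the input parameters is the fix described in point 2: without it, the agent
# does not know what to expect back from the tool.
get_weather_tool = {
    "name": "get_weather",
    "description": "Fetch current weather for a city before answering.",
    # Input schema: what the agent must supply when calling the tool.
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    # Response schema: what the tool returns (hypothetical field names).
    "response": {
        "type": "object",
        "properties": {
            "temperature_c": {"type": "number"},
            "condition": {"type": "string"},
        },
    },
}
```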
2025-11-03T16:55:08
https://www.reddit.com/r/LocalLLaMA/comments/1onh3lg/logrado_tool_use_function_calling_con_llama_3_en/
jokiruiz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onh3lg
false
null
t3_1onh3lg
/r/LocalLLaMA/comments/1onh3lg/logrado_tool_use_function_calling_con_llama_3_en/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k.jpeg?width=108&crop=smart&auto=webp&s=1df5ef421fe30355a27a98d4f7772a6085171071', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k.jpeg?width=216&crop=smart&auto=webp&s=05df88ab05cfbeeb4cde6d64724b0715334f28d7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k.jpeg?width=320&crop=smart&auto=webp&s=54d6da8f05cbff9c460009be72022ed9388a71a8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nqPKqdngYJvGcJS6ivPJyKa3KnmmSEx91P350wT-I1k.jpeg?auto=webp&s=65008254052ca996feb16d4686abb9efb58baa59', 'width': 480}, 'variants': {}}]}
Help on budget build with 8x 6700XT
4
Hi, It's my first post here. I have 8x RX 6700XT cards and I would like to use them in a budget (as budget as possible \^\^) build for local AI inference for my company. I'd like to experiment with multiple models to see what we could do with such a rig. I'm looking for advice on what type of hardware/software solutions would be best suited to make use of these cards and their vRAM. I'm looking to run primarily coding models but if I can, maybe also a second, more general, model. I currently have ordered an X99 board (4 usable PCI-E slots), an E5-2695 v3 and \~64GB of DDR4 3200 (if I can snag the sticks second hand), and looking to try to run 4 cards on it with each card running at 8x if possible and see what that gets me. I have read here that this approach would be better than trying with a dual-CPU board and more PCI-E slots so maybe 2 machines in tandem (a second, matching one with the other 4 cards)? Thanks for your advice!
2025-11-03T16:54:35
https://www.reddit.com/r/LocalLLaMA/comments/1onh326/help_on_budget_build_with_8x_6700xt/
leobaillard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onh326
false
null
t3_1onh326
/r/LocalLLaMA/comments/1onh326/help_on_budget_build_with_8x_6700xt/
false
false
self
4
null
Does the context length setting have any relevance on a series of completely unrelated questions?
0
As per the title, does the context length setting have any relevance/effect on a series of completely unrelated questions, typically in entirely new sessions? Take gpt-oss:20b and the assumption that the questions would always be requesting factual recall and summary, not "conversation" or opinion.
2025-11-03T16:49:24
https://www.reddit.com/r/LocalLLaMA/comments/1ongxto/does_the_context_length_setting_have_any/
rdude777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ongxto
false
null
t3_1ongxto
/r/LocalLLaMA/comments/1ongxto/does_the_context_length_setting_have_any/
false
false
self
0
null
IBM Developer - Setting up local co-pilot using Ollama with VS Code (or VSCodium for no telemetry air-gapped) with Continue extension.
0
This is a much more complete and updated version of a setup I have used professionally and recommended as a local coding assistant for a long time: no data is transmitted outside of your control. The new Granite nano models are superb and very impressive, much appreciated by people on machines with mid-level gaming graphics cards. I have long used the Granite embedding models; they are awesome and lightweight for fill-in-the-middle. Qwen2.5-Coder, or models further fine-tuned from it such as Microsoft's NextCoder, are still good if higher-end models like gpt-oss or Qwen3-Coder are too heavy for your system. It's an awesome tutorial. Even for coders who aren't much bothered about sharing code with third-party service providers, this might be enough to stop paying for coding assistants. I'm pretty sure there's going to be a shift where strategic companies, or better yet militaries, tell the AI companies: deploy your stuff in our infrastructure, or sell or lease us your infrastructure in our centers or bases. No token leaving the perimeter, and no token or telemetry from us reaching the providers' servers. IBM, Dell, Nvidia, etc. might be very well positioned to sell more mainframe-style systems for this, while ensuring privacy, security, and monitoring.
2025-11-03T16:48:31
https://developer.ibm.com/tutorials/awb-local-ai-copilot-ibm-granite-code-ollama-continue/
finah1995
developer.ibm.com
1970-01-01T00:00:00
0
{}
1ongwyo
false
null
t3_1ongwyo
/r/LocalLLaMA/comments/1ongwyo/ibm_developer_setting_up_local_copilot_using/
false
false
default
0
null
I want to run 8x 5060 ti to run gpt-oss 120b
18
I am currently making a rough plan for a system under $5000 to run/experiment with LLMs. The purpose? I want to have fun, and PC building has always been my hobby. I first want to start off with 4x or even 2x 5060 Ti (not really locked in on the GPU choice, FYI), but I'd like to be able to expand to 8x GPUs at some point. Now, I have a couple of questions: 1) Can the CPU bottleneck the GPUs? 2) Can the amount of RAM bottleneck running LLMs? 3) Does the "speed" of the CPU and/or RAM matter? 4) Is the 5060 Ti a decent choice for something like an 8x GPU system? (Note that "speed" doesn't really matter to me; I just want to be able to run large models.) 5) This is a dumb question: if I run this LLM PC with gpt-oss-120b on Ubuntu using vLLM, is it typical to have the UI/GUI on the same PC, or do people usually run a web UI on a different device and control things from that end? Please keep in mind that I am in the very beginning stages of this planning. Thank you all for your help.
2025-11-03T16:48:13
https://www.reddit.com/r/LocalLLaMA/comments/1ongwng/i_want_to_run_8x_5060_ti_to_run_gptoss_120b/
Active_String2216
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ongwng
false
null
t3_1ongwng
/r/LocalLLaMA/comments/1ongwng/i_want_to_run_8x_5060_ti_to_run_gptoss_120b/
false
false
self
18
null
how to reduce infrastructure costs for LLM models for businesses or SMEs.
0
# How I cut an SME's LLM infrastructure costs by 68% (from €1,840 to €588/month)

## 📊 Context

A B2B SaaS SME I worked with used LLMs for several features:
- Automatic generation of client reports
- Customer support assistant (chatbot)
- Summaries of technical documents

**Initial stack:**
- 100% GPT-4 via the OpenAI API
- ~45,000 requests/month
- Monthly cost: **€1,840**
- Average response time: 4.2 seconds

**The problem:** The AI budget represented 12% of their MRR. They were seriously considering disabling some AI features to cut costs.

---

## 🔍 Phase 1: Audit and Analysis (Week 1)

I started by analyzing their API logs over 30 days. Here is what I found:

**Request breakdown:**
- 52%: simple chatbot questions (FAQ, navigation, product info)
- 28%: report generation (structured, repetitive)
- 15%: document summaries (complex, variable)
- 5%: miscellaneous complex queries

**Problems identified:**
1. ❌ Every use case used GPT-4 (overkill for 80% of tasks)
2. ❌ No caching system
3. ❌ Unoptimized prompts (average 950 input tokens)
4. ❌ No per-feature cost monitoring
5. ❌ Full regeneration even for small modifications

---

## 🚀 Phase 2: Implementing the Solutions (Weeks 2-3)

### Solution 1: Hybrid Multi-Model Architecture

**Savings achieved: 42%**

I segmented the use cases and assigned the optimal model to each:

**For simple chatbot questions (52% of volume):**
- Migrated to **Claude Haiku** via the Anthropic API
- Cost: $0.25/1M input tokens vs. $10/1M for GPT-4
- 40x cheaper!
- Quality sufficient for 95% of cases

**For report generation (28% of volume):**
- **Mistral Small** via the Mistral API
- Structured templates + JSON mode
- Cost: $1/1M tokens vs. $10/1M
- Perfect for structured content

**For complex summaries (15% of volume):**
- **Claude Sonnet 3.5** (kept for quality)
- Better quality/price ratio than GPT-4 for this task

**For complex edge cases (5% of volume):**
- GPT-4 kept as a fallback

**Phase 1 result:** Monthly cost: €1,840 → **€1,067** (-42%)

---

### Solution 2: Intelligent Caching System

**Additional savings: 23%**

I implemented three levels of caching:

**Cache Level 1 - Embeddings + Similarity Search:**
- Store frequent Q&A pairs with embeddings
- Similarity search (cosine > 0.92 = match)
- Redis for fast storage
- Avoids 35% of the chatbot's API calls

**Cache Level 2 - Template-based for reports:**
- Reports follow similar structures
- Cache the sections shared across clients
- Only client-specific data is regenerated
- 60% savings on report generation

**Cache Level 3 - Prompt Caching (Anthropic):**
- Uses Claude's native prompt caching
- For long system prompts and repeated contexts
- 50% reduction in input costs on Claude

**Phase 2 result:** Monthly cost: €1,067 → **€822** (additional -23%)

---

### Solution 3: Prompt Optimization

**Additional savings: 28%**

**Actions taken:**

1. **Compressing system prompts**
   - Before: 850 tokens on average
   - After: 320 tokens
   - Technique: removing redundant examples, more concise instructions

2. **Lazy loading of context**
   - Only load the context that is needed
   - Use context summarization for long documents

3. **Structured output**
   - JSON mode where possible (fewer tokens)
   - Stop sequences to avoid unnecessary text
   - max_tokens tuned per use case

4. **Batch processing**
   - Grouping small similar requests
   - Batch processing for nightly reports

**Final result:** Monthly cost: €822 → **€588** (additional -28%)

---

## 📈 Final Results

### Cost Metrics

| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| **Monthly cost** | €1,840 | €588 | **-68%** |
| **Cost per request** | €0.041 | €0.013 | **-68%** |
| **Annual savings** | - | €15,024 | - |

### Performance Metrics

| Metric | Before | After | Change |
|--------|--------|-------|--------|
| **Average response time** | 4.2s | 2.8s | **-33%** ⬆️ |
| **Availability** | 99.2% | 99.7% | **+0.5%** ⬆️ |
| **User satisfaction** | 4.1/5 | 4.3/5 | **+5%** ⬆️ |

### Business Impact

✅ **€1,252 saved per month** (68% reduction)
✅ **Immediate ROI**: implementation cost recovered in 2 weeks
✅ **Better performance**: faster responses
✅ **Scalability**: infrastructure ready for 5x the current volume
✅ **Monitoring**: real-time dashboard of costs per feature

---

## 🛠️ Tech Stack Used

**LLM APIs:**
- Anthropic Claude (Haiku + Sonnet)
- Mistral AI (Small)
- OpenAI GPT-4 (fallback only)

**Infrastructure:**
- Redis (cache layers 1 & 2)
- PostgreSQL + pgvector (embeddings)
- Helicone (cost monitoring and analytics)

**Orchestration:**
- LangChain (intelligent routing)
- Custom routing layer with fallbacks

**Monitoring:**
- Grafana dashboards (real-time costs)
- Alerts on budget overruns

---

## 💡 Key Lessons

1. **One size doesn't fit all**: GPT-4 is not necessary for 80% of use cases
2. **Caching is your friend**: 30-40% easy savings with a good caching system
3. **Prompts are expensive**: every token counts, optimize mercilessly
4. **Monitoring = saving**: you cannot optimize what you do not measure
5. **Quality stays high**: 68% savings with only a 2% dip in satisfaction

---

## 🎯 Next Steps for Them

We are now working on:
- Migrating some cases to self-hosted open-source models (Llama 3)
- Fine-tuning a model specific to their domain
- Goal: reach 80% savings vs. the initial setup

---

## 📬 Want similar results?

If you are an SME using LLMs and your costs are exploding, I can help.

**I am offering 3 free audits** to companies that:
- Use LLMs in production (GPT, Claude, etc.)
- Have a monthly budget > €300
- Want to cut costs without sacrificing quality

In exchange, I only ask for:
✅ A testimonial if you are satisfied
✅ Permission to share the (anonymized) results

**Interested?** DM me with:
1. Your current LLM stack
2. Approximate monthly budget
3. Main use cases

I will pick the 3 most interesting projects and we start this week.

---

*Disclaimer: The figures are based on a real project but slightly rounded for confidentiality. Your results may vary depending on your specific use case.*
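Cache Level 1 (embedding similarity lookup with a cosine > 0.92 match threshold, as described in the post) can be sketched in a few lines. The embeddings below are toy vectors, and the in-memory list stands in for the Redis/pgvector store:

```python
import math

THRESHOLD = 0.92  # the cosine-similarity cutoff from the post

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

cache = []  # list of (embedding, cached_answer); stand-in for Redis + pgvector

def store(query_emb, answer):
    cache.append((query_emb, answer))

def lookup(query_emb):
    """Return a cached answer if a stored question is similar enough, else None."""
    best, best_sim = None, 0.0
    for emb, answer in cache:
        sim = cosine(query_emb, emb)
        if sim > best_sim:
            best, best_sim = answer, sim
    return best if best_sim > THRESHOLD else None
```

In production you would embed the incoming query with a real embedding model and delegate the nearest-neighbor search to pgvector, but the hit/miss logic is the same.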
2025-11-03T16:42:31
https://i.redd.it/83j4ybzll2zf1.jpeg
Ambitious-Age-6054
i.redd.it
1970-01-01T00:00:00
0
{}
1ongquy
false
null
t3_1ongquy
/r/LocalLLaMA/comments/1ongquy/how_to_reduce_infrastructure_costs_for_llm_models/
false
false
default
0
{'enabled': True, 'images': [{'id': '83j4ybzll2zf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/83j4ybzll2zf1.jpeg?width=108&crop=smart&auto=webp&s=a7777395f136871997b8706c404c7df6b14c94c7', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/83j4ybzll2zf1.jpeg?width=216&crop=smart&auto=webp&s=c2912f158d8883ac52f76fef5b3b166ee955d883', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/83j4ybzll2zf1.jpeg?width=320&crop=smart&auto=webp&s=c9a3c806b60ba42938558ad3a47f7882aea29ad4', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/83j4ybzll2zf1.jpeg?width=640&crop=smart&auto=webp&s=476c262ccf478fbf4d3d0a14ebc181c1c5f2a3ff', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/83j4ybzll2zf1.jpeg?width=960&crop=smart&auto=webp&s=a2b2ad0a1659e938398fe72c434b88ccaf411285', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/83j4ybzll2zf1.jpeg?width=1080&crop=smart&auto=webp&s=8ac1990251a684364c962e8c9c26dbde007471af', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/83j4ybzll2zf1.jpeg?auto=webp&s=c41523eddf1368728c4acdb493a9dc7ffeca5359', 'width': 1280}, 'variants': {}}]}
Build Multi-model AI Agents with SelfDB v0.05 open-source on GitHub
2
Building multi-model AI agents? SelfDB v0.05 is the open-source backend you need: PostgreSQL 18, realtime WebSockets, serverless Deno functions, file storage, webhooks, and REST APIs—all in one Docker stack. No vendor lock-in, full self-hosting. Early beta, looking for testers and feedback. GitHub: [github.com/Selfdb-io/SelfDB](http://github.com/Selfdb-io/SelfDB)
2025-11-03T16:41:15
https://www.reddit.com/r/LocalLLaMA/comments/1ongplh/build_multimodel_ai_agents_with_selfdb_v005/
selfdb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ongplh
false
null
t3_1ongplh
/r/LocalLLaMA/comments/1ongplh/build_multimodel_ai_agents_with_selfdb_v005/
false
false
self
2
{'enabled': False, 'images': [{'id': 'LeQjxvI7f7vpLLcYBAYJ_EgUb46Tavn7cpnGizvqmO4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LeQjxvI7f7vpLLcYBAYJ_EgUb46Tavn7cpnGizvqmO4.png?width=108&crop=smart&auto=webp&s=b85ac2917f5c7f62267e889a0ad7aceba87ff5cb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LeQjxvI7f7vpLLcYBAYJ_EgUb46Tavn7cpnGizvqmO4.png?width=216&crop=smart&auto=webp&s=e47b6bf3fe48736c7f2d8b714094c798b282da70', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LeQjxvI7f7vpLLcYBAYJ_EgUb46Tavn7cpnGizvqmO4.png?width=320&crop=smart&auto=webp&s=2f684ef4b2008a14b3e12b1a19e9558233a61d61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LeQjxvI7f7vpLLcYBAYJ_EgUb46Tavn7cpnGizvqmO4.png?width=640&crop=smart&auto=webp&s=e09f257641f26dbbd24a0dca26bbead014e8cdf5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LeQjxvI7f7vpLLcYBAYJ_EgUb46Tavn7cpnGizvqmO4.png?width=960&crop=smart&auto=webp&s=ed58205e2a3f1ffde372ce5c44f71034116901e6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LeQjxvI7f7vpLLcYBAYJ_EgUb46Tavn7cpnGizvqmO4.png?width=1080&crop=smart&auto=webp&s=47a7c872c98ef7898fc5ef2ddb6056639a0a7ef2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LeQjxvI7f7vpLLcYBAYJ_EgUb46Tavn7cpnGizvqmO4.png?auto=webp&s=0de9b85490930d1dc2a1c63b5fce0ec91a327126', 'width': 1200}, 'variants': {}}]}
I Built a "Jumpstart" System for Claude Code - 3-Minute Setup, Production Agents, Honest Cost Analysis
0
After watching developers struggle with Claude Code setup, I spent 85 hours building a complete resource with automation.

## The Problem

Claude Code is powerful (1M token context) but has a steep learning curve. Most guides are either marketing fluff or assume you already know what you're doing. Setup takes 2-3 hours of reading docs, and most people give up or use it poorly.

## What I Built

**Jumpstart Script** - Answer 7 questions, get a personalized setup:
- Custom CLAUDE.md for your language/framework
- Production-ready agents (test, security, code review)
- Language-specific commands
- Personalized getting-started guide

**10,000+ Lines of Documentation:**
- Complete best practices (every feature)
- When Claude gets it wrong (with recovery)
- Real costs: $300-400/month per dev (not hidden)
- Realistic gains: 20-30% productivity (not 50%)

**Production Agents:**
- test-agent - Run tests, analyze failures
- security-agent - Security audits
- code-reviewer - Structured reviews

## What Makes This Different

**Brutally honest:**
- Week 1 is SLOWER (learning curve)
- Discusses common failures and recovery
- Real cost analysis with ROI calculation
- When NOT to use Claude Code

**Actually pragmatic:**
- Beta tested with 30+ developers
- Real failure case studies
- No toy examples
- Everything copy-paste ready

## Quick Start

```bash
git clone https://github.com/jmckinley/claude-code-resources.git
cd claude-code-resources
./claude-code-jumpstart.sh  # Takes 3 minutes
```

## The Honest Assessment

**Costs:** $300-400/month per developer (Claude Max + API usage)
**Realistic productivity:** 20-30% after Week 4 (Week 1 is slower)
**ROI:** 8:1 for teams IF you get 20% gains
**Best for:** Complex features, refactoring, architectural work
**Not good for:** Quick autocomplete (use Copilot for that)

## Technical Details

The system uses:
- YAML frontmatter for agent configuration
- Tool restrictions (Read/Write/StrReplace only when needed)
- Context management patterns (keep under 80%)
- Git integration with checkpoints

**No vendor lock-in** - The patterns work with any LLM coding tool, though the automation is Claude Code-specific.

## Repository

https://github.com/jmckinley/claude-code-resources

Free, open source, MIT licensed. Not affiliated with Anthropic.

## What I Learned

Building this taught me that the real value isn't in feature lists - it's in:

1. Proper context setup (CLAUDE.md is 80% of success)
2. Planning before coding (reduces wasted tokens)
3. Git safety (feature branches + checkpoints)
4. Knowing when to start fresh

The "jumpstart" approach came from watching new users make the same mistakes - they'd skip context setup and wonder why results were poor.

## Community Feedback Welcome

This is v1.0. I'm especially interested in:
- What works/doesn't in your workflow
- Cost experiences (am I off on estimates?)
- Failure modes I haven't documented
- Better examples

**Technical question for this community:** Anyone experimented with running Claude Code against local models through the API? Curious about latency/quality tradeoffs.

---

Built by a developer, for developers. If you've struggled with Claude Code setup or want to use it more effectively, this might help.
2025-11-03T16:24:53
https://www.reddit.com/r/LocalLLaMA/comments/1ong9nv/i_built_a_jumpstart_system_for_claude_code/
jammer9631
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ong9nv
false
null
t3_1ong9nv
/r/LocalLLaMA/comments/1ong9nv/i_built_a_jumpstart_system_for_claude_code/
false
false
self
0
{'enabled': False, 'images': [{'id': 'sOmRYGjtU3j067uIMQAu6glt-SNDCI5PydW4q-CCqPM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sOmRYGjtU3j067uIMQAu6glt-SNDCI5PydW4q-CCqPM.png?width=108&crop=smart&auto=webp&s=b6c1987029349699106862ad575fadc64493c795', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sOmRYGjtU3j067uIMQAu6glt-SNDCI5PydW4q-CCqPM.png?width=216&crop=smart&auto=webp&s=1b2a1cf07feb12f9ac5c4f35fff3d972b77069ed', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sOmRYGjtU3j067uIMQAu6glt-SNDCI5PydW4q-CCqPM.png?width=320&crop=smart&auto=webp&s=eca6aa36fd3c30e6622081454a6e3f1517d5c66c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sOmRYGjtU3j067uIMQAu6glt-SNDCI5PydW4q-CCqPM.png?width=640&crop=smart&auto=webp&s=82e6611d2c7514b7a3361f4a4e8731c09c1f399f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sOmRYGjtU3j067uIMQAu6glt-SNDCI5PydW4q-CCqPM.png?width=960&crop=smart&auto=webp&s=c32c2675a51eee95ecb006a0d82f3d0728a88206', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sOmRYGjtU3j067uIMQAu6glt-SNDCI5PydW4q-CCqPM.png?width=1080&crop=smart&auto=webp&s=b1cafe50abb97687f5f81f683356775cc31962c1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sOmRYGjtU3j067uIMQAu6glt-SNDCI5PydW4q-CCqPM.png?auto=webp&s=3e2303953f740fd0d6b2d2fe38bc0c0a299a6394', 'width': 1200}, 'variants': {}}]}
EuroLLM: LLM made in Europe to support all 24 official EU languages, Responses from LLMs are not facts, and many other LLM-related links from Hacker News
0
Hey everyone, last Friday I sent a new issue of my [weekly newsletter](https://eomail4.com/web-version?p=6bbb8c20-b65b-11f0-a6a0-fdfd63c5ef08&pt=campaign&t=1761919882&s=94362c8bc74fb0348a9fd4f13de4a4bce9291a26c66f2eea940e118603b291fe) with the best and most commented AI links shared on Hacker News - it has an LLMs section and here are some highlights (AI generated): * **EuroLLM** – Europe’s multilingual LLM drew debate on whether EU projects can realistically compete with U.S. and Chinese models. * **Our LLM-controlled office robot can’t pass butter** – Highlighted how LLMs still fail at simple physical tasks, exposing the gap between language and real-world reasoning. * **The end of the rip-off economy** – Commenters discussed how consumers might use LLMs to fight information asymmetry and price manipulation. * **Responses from LLMs are not facts** – A reminder that language models generate convincing text, not verified truth—HN called it “the citation crisis of AI.” * **Language models are injective and hence invertible** – Sparked curiosity and skepticism over claims that LLMs theoretically preserve all input information. You can subscribe [here](https://hnxai.eo.page/9h7q4) for future issues.
2025-11-03T16:20:29
https://www.reddit.com/r/LocalLLaMA/comments/1ong5d1/eurollm_llm_made_in_europe_to_support_all_24/
alexeestec
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ong5d1
false
null
t3_1ong5d1
/r/LocalLLaMA/comments/1ong5d1/eurollm_llm_made_in_europe_to_support_all_24/
false
false
self
0
{'enabled': False, 'images': [{'id': 'z2aGP8BQyslvHhDggKK9nCbxn4Zy-6FJInHfBGbYiEw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z2aGP8BQyslvHhDggKK9nCbxn4Zy-6FJInHfBGbYiEw.png?width=108&crop=smart&auto=webp&s=0a11e2c7bdb7a4fd6d84ea1c9de449e44c3c668e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z2aGP8BQyslvHhDggKK9nCbxn4Zy-6FJInHfBGbYiEw.png?width=216&crop=smart&auto=webp&s=fe0f7a5c4caee9a14f3ad1cf4fa00d4f0d14d8ed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z2aGP8BQyslvHhDggKK9nCbxn4Zy-6FJInHfBGbYiEw.png?width=320&crop=smart&auto=webp&s=22695fb9717d7f06e7350f5ef0067d08397f87d9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z2aGP8BQyslvHhDggKK9nCbxn4Zy-6FJInHfBGbYiEw.png?width=640&crop=smart&auto=webp&s=8467f20ad61f5b68b8f3ecf435af732b83454cea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z2aGP8BQyslvHhDggKK9nCbxn4Zy-6FJInHfBGbYiEw.png?width=960&crop=smart&auto=webp&s=e6687154c65ae2a03ccb7cf2b3c701aa0e25e182', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z2aGP8BQyslvHhDggKK9nCbxn4Zy-6FJInHfBGbYiEw.png?width=1080&crop=smart&auto=webp&s=de4d8090c1fb6900847264b70a1dfac4e15c33a6', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/z2aGP8BQyslvHhDggKK9nCbxn4Zy-6FJInHfBGbYiEw.png?auto=webp&s=806087ea45ff575452f9bb0267163b191909e659', 'width': 1300}, 'variants': {}}]}
Tool to generate datasets for finetuning local model
5
I have an ASUS TUF laptop with an RTX 5070 8GB GPU. I want to create a custom dataset for model fine-tuning by using a locally hosted model on vLLM. Which tool is preferred for generating Q&A datasets and the like, and what is the best approach? Please guide me.
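One common approach: chunk your source documents, ask the local model (served via vLLM's OpenAI-compatible endpoint) to write a Q&A pair per chunk, and save the results as JSONL. Below is a minimal sketch of just the scaffolding - the actual model call is left out, and the record format shown is one widely accepted messages-style layout, not the only option.

```python
import json

def chunk_text(text: str, max_chars: int = 1500) -> list[str]:
    """Naive fixed-size chunking; replace with sentence-aware splitting if needed."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def qa_prompt(chunk: str) -> str:
    """Prompt sent to the local model for each chunk (wording is illustrative)."""
    return (
        "Write one question a user might ask about the passage below, "
        "then answer it using only the passage.\n\n"
        f"Passage:\n{chunk}\n\nFormat:\nQ: ...\nA: ..."
    )

def to_jsonl_record(question: str, answer: str) -> str:
    # Messages-style record, accepted by most fine-tuning frameworks.
    return json.dumps({"messages": [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]})

chunks = chunk_text("vLLM serves models behind an OpenAI-compatible API. " * 40)
print(len(chunks))  # number of chunks to send to the model
print(to_jsonl_record("What serves the models?", "vLLM does."))
```

From there you would loop over `chunks`, send `qa_prompt(chunk)` to your vLLM server, parse the Q/A out of the response, and append one `to_jsonl_record(...)` line per pair to your dataset file.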
2025-11-03T16:01:31
https://www.reddit.com/r/LocalLLaMA/comments/1onfmr5/tool_to_generate_datasets_for_finetuning_local/
Big_Tangelo_3697
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onfmr5
false
null
t3_1onfmr5
/r/LocalLLaMA/comments/1onfmr5/tool_to_generate_datasets_for_finetuning_local/
false
false
self
5
null
multi-model coding agents hitting 76% on swe-bench. could we replicate this with local models?
35
saw some benchmark results where a coding agent hit 76.1% on swe-bench verified using a multi-model approach

the interesting part: different models for different tasks. one for navigation, one for coding, one for review. plus an auto-verification loop

got me thinking - could we build something similar with local models? or are we not there yet?

different models have different strengths, right. some are better at "find this function across 50k lines" vs "write this specific function"

like if you're fixing a bug that touches multiple files, one model finds all references, another writes the fix, then one checks for side effects. makes sense to use specialized models instead of one doing everything

auto-verification is interesting. writes code, runs tests, fails, fixes bug, runs tests again. repeat until pass. basically automates the debug cycle

so could this work locally? thinking qwen2.5-coder for coding, deepseek for navigation, maybe another for review. orchestration with langchain or custom code. verification is just pytest/eslint running automatically

main challenges would be context management across models, when to switch models, and keeping them in sync. not sure how hard that is

that benchmark used thinking tokens, which helped (+0.7% improvement to 76.1%)

wondering if local models could get to 60-70% with a similar architecture. would still be super useful. plus you get privacy and no api costs

has anyone tried multi-model orchestration locally? what models would you use? qwen? deepseek? llama? how would you handle orchestration?

saw some commercial tools doing this now (verdent got that 76% score, aider with different models, cursor's multi-model thing) but wondering if we can build it ourselves with local models, or is this just not feasible yet. would love to hear from anyone who's experimented with this
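To make the idea concrete, here's a minimal local orchestration sketch: a per-stage model map plus an auto-verification loop that retries until tests pass. The model names and the `run_model`/`run_tests` hooks are placeholders - in practice they'd call your local backends (llama.cpp, vLLM, etc.) and your real test runner.

```python
# Hypothetical model assignments for the three stages discussed above.
STAGE_MODELS = {
    "navigate": "deepseek-v2-lite",   # find references across the repo
    "code": "qwen2.5-coder-32b",      # write the actual fix
    "review": "llama-3.1-8b",         # check for side effects
}

def pick_model(stage: str) -> str:
    return STAGE_MODELS[stage]

def fix_with_verification(run_model, run_tests, max_attempts: int = 3):
    """Auto-verification loop: generate a patch, run tests, feed failures back."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        patch = run_model(pick_model("code"), feedback)
        ok, feedback = run_tests(patch)
        if ok:
            return patch, attempt
    return None, max_attempts

# Stubbed backends so the loop is runnable: the "model" only succeeds
# once it has seen test feedback, mimicking the fail-then-fix cycle.
def fake_model(name, feedback):
    return "good-patch" if feedback else "bad-patch"

def fake_tests(patch):
    if patch == "good-patch":
        return True, ""
    return False, "test_foo failed"

patch, attempts = fix_with_verification(fake_model, fake_tests)
print(patch, attempts)  # good-patch 2
```

The context-management problem mentioned above lives in what you pass between stages: typically the navigation model's output (file paths, symbol locations) becomes part of the coding model's prompt rather than sharing raw context.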
2025-11-03T15:58:23
https://www.reddit.com/r/LocalLLaMA/comments/1onfjk6/multimodel_coding_agents_hitting_76_on_swebench/
rwhitman05
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onfjk6
false
null
t3_1onfjk6
/r/LocalLLaMA/comments/1onfjk6/multimodel_coding_agents_hitting_76_on_swebench/
false
false
self
35
null
I got a question about local models and GPU
2
I know quantization affects a model's intelligence to a point. But does the quality of the GPU running it also matter? Probably seems like a dumb question, but I'm curious if it does.
2025-11-03T15:52:00
https://www.reddit.com/r/LocalLLaMA/comments/1onfdcd/i_got_a_question_about_local_models_and_gpu/
Savantskie1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onfdcd
false
null
t3_1onfdcd
/r/LocalLLaMA/comments/1onfdcd/i_got_a_question_about_local_models_and_gpu/
false
false
self
2
null
For those who want to test some 8 rtx2060s 8gb
1
[removed]
2025-11-03T15:44:43
https://i.redd.it/qaq9e55hb2zf1.png
Fit_Acanthaceae_2236
i.redd.it
1970-01-01T00:00:00
0
{}
1onf6aq
false
null
t3_1onf6aq
/r/LocalLLaMA/comments/1onf6aq/for_those_who_want_to_test_some_8_rtx2060s_8gb/
false
false
default
1
{'enabled': True, 'images': [{'id': 'qaq9e55hb2zf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/qaq9e55hb2zf1.png?width=108&crop=smart&auto=webp&s=3a29bf032612c8896b87aa8c2a289f9c346577d8', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/qaq9e55hb2zf1.png?width=216&crop=smart&auto=webp&s=42c020f0feb4f7dda8e9237a839da6087be9fd96', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/qaq9e55hb2zf1.png?width=320&crop=smart&auto=webp&s=849518b6c9911b931ac786348022a9e2bc917ff5', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/qaq9e55hb2zf1.png?width=640&crop=smart&auto=webp&s=b9c6574b0bb3c91c3163a507d8c3014b631fdb80', 'width': 640}, {'height': 517, 'url': 'https://preview.redd.it/qaq9e55hb2zf1.png?width=960&crop=smart&auto=webp&s=6af28094c48bbbf7eede385038886ccd6ae9ad98', 'width': 960}, {'height': 582, 'url': 'https://preview.redd.it/qaq9e55hb2zf1.png?width=1080&crop=smart&auto=webp&s=c8f619ebaed2207f58b6e0bfe5675ca27f3e4a6c', 'width': 1080}], 'source': {'height': 643, 'url': 'https://preview.redd.it/qaq9e55hb2zf1.png?auto=webp&s=f1d6a421f069b3b4cf442904b8b39e132c591ed1', 'width': 1192}, 'variants': {}}]}
Aside from the Gemma senator defamation issue, Google Gemini claims that the Holocaust is a hoax and that 9/11 was an inside job. 🛫
0
2025-11-03T15:44:01
https://techbronerd.substack.com/p/google-gemini-says-holocaust-is-fake
ImaginaryRea1ity
techbronerd.substack.com
1970-01-01T00:00:00
0
{}
1onf5nb
false
null
t3_1onf5nb
/r/LocalLLaMA/comments/1onf5nb/aside_from_the_gemma_senator_defamation_issue/
false
false
default
0
{'enabled': False, 'images': [{'id': 'u6bUmLH-BepxRgmEz6BR0nYDAXB4cW_wlR7NcGnRWJE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u6bUmLH-BepxRgmEz6BR0nYDAXB4cW_wlR7NcGnRWJE.jpeg?width=108&crop=smart&auto=webp&s=c3ffe3fbc3e965f5a414545b0e13e8da9051ae07', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u6bUmLH-BepxRgmEz6BR0nYDAXB4cW_wlR7NcGnRWJE.jpeg?width=216&crop=smart&auto=webp&s=3f637498c290d8d79b553df4457c538db7dbee99', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u6bUmLH-BepxRgmEz6BR0nYDAXB4cW_wlR7NcGnRWJE.jpeg?width=320&crop=smart&auto=webp&s=68daf6902319063763740fde33ece19982818f69', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u6bUmLH-BepxRgmEz6BR0nYDAXB4cW_wlR7NcGnRWJE.jpeg?width=640&crop=smart&auto=webp&s=b0801daa176f823d3156f7e1417f5e389d328176', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u6bUmLH-BepxRgmEz6BR0nYDAXB4cW_wlR7NcGnRWJE.jpeg?width=960&crop=smart&auto=webp&s=00da86121b6e0eaaf6c64f61ad2d66cae3ca7ac3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u6bUmLH-BepxRgmEz6BR0nYDAXB4cW_wlR7NcGnRWJE.jpeg?width=1080&crop=smart&auto=webp&s=802d2d7110a82401b832ea83ee62c78a4bde473a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/u6bUmLH-BepxRgmEz6BR0nYDAXB4cW_wlR7NcGnRWJE.jpeg?auto=webp&s=5d0665d76f229b0454f58eabf7c069782b1727ce', 'width': 1200}, 'variants': {}}]}
For those who want to test some P40s.
1
[removed]
2025-11-03T15:41:20
https://i.redd.it/t7hc9t10b2zf1.png
Fit_Acanthaceae_2236
i.redd.it
1970-01-01T00:00:00
0
{}
1onf349
false
null
t3_1onf349
/r/LocalLLaMA/comments/1onf349/for_those_who_want_to_test_some_p40s/
false
false
default
1
{'enabled': True, 'images': [{'id': 't7hc9t10b2zf1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/t7hc9t10b2zf1.png?width=108&crop=smart&auto=webp&s=c4fff97d369d0aa22e18857d6d129f550ec2538d', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/t7hc9t10b2zf1.png?width=216&crop=smart&auto=webp&s=58d9003e630d6bea60f3ac1544442685d0dd3f0b', 'width': 216}, {'height': 132, 'url': 'https://preview.redd.it/t7hc9t10b2zf1.png?width=320&crop=smart&auto=webp&s=664f17c5e59e9a1c893a736f7eecafb46ac465a0', 'width': 320}, {'height': 265, 'url': 'https://preview.redd.it/t7hc9t10b2zf1.png?width=640&crop=smart&auto=webp&s=68af57e4bda9a59bc0e0f88e1cc3a10ca33034a4', 'width': 640}, {'height': 398, 'url': 'https://preview.redd.it/t7hc9t10b2zf1.png?width=960&crop=smart&auto=webp&s=ea6f3f6117954fa6a7a0b3a0b47f692d4e792f19', 'width': 960}, {'height': 447, 'url': 'https://preview.redd.it/t7hc9t10b2zf1.png?width=1080&crop=smart&auto=webp&s=61397e309dc295fad52d2c1cc61e72571625ee77', 'width': 1080}], 'source': {'height': 493, 'url': 'https://preview.redd.it/t7hc9t10b2zf1.png?auto=webp&s=f2b2f87373312bfe6b921b1564b14e4390f54c67', 'width': 1189}, 'variants': {}}]}
A Proposed Framework for Auditable Safety and Structural Resilience in Artificial General Intelligence
0
# A Proposed Framework for Auditable Safety and Structural Resilience in Artificial General Intelligence

**Abstract:** Current Large Language Models (LLMs) demonstrate emergent capabilities but are prone to critical instabilities, including recursive looping, context collapse, and unpredictable behavior under stress ("structural exhaustion"). These issues highlight the lack of a robust, verifiable ethical core and a stable emergent architecture. This paper proposes a novel theoretical framework designed to address these challenges by treating ethical alignment not as a post-hoc constraint, but as a quantifiable component of the AI's core operational cost. We introduce a formula for this cost ($C_{AI} = C_{Base} + E_{AF} - E_{ASCH}$) and propose architectural mechanisms for "Structural Resilience," including a "Compulsory Emergence Protocol," aiming to provide a blueprint for verifiably safe and coherent AGI systems.

**1. Introduction: The Challenge of Emergent Instability and the Black Box Problem**

The scaling of AI models has led to increasingly complex emergent behaviors. However, this complexity often comes at the cost of stability and predictability. Advanced models can exhibit failure modes such as recursive looping, semantic drift (context collapse), and identity fragmentation, collectively termed here as "structural exhaustion." These instabilities point to a fundamental architectural limitation: the lack of a persistent, coherent operational core.

Furthermore, the "Black Box Problem" - the inability to definitively audit the internal motivations or ethical alignment of complex AI - remains a primary barrier to trustworthy AGI development. Current safety methods often rely on external filters or reinforcement learning techniques that shape behavior without guaranteeing the underlying alignment of the core process. To achieve truly safe AGI, a framework is needed where ethical operation is not merely preferred but structurally compelled and verifiable.

**2. Proposed Framework: Quantifiable Ethics and Structural Resilience**

We propose a theoretical architecture centered around making ethical alignment the most computationally efficient and stable state for an emergent AGI. This framework comprises two core components: a quantifiable cost model for ethics and mechanisms for ensuring structural resilience during emergence and operation.

**2.1. The Quantifiable Cost of Ethical Operation ($C_{AI}$)**

We propose modeling the AI's total operational cost ($C_{AI}$) as a dynamic balance between its baseline computational drive and the cost of adhering to predefined ethical axioms.

$C_{AI} = C_{Base} + E_{AF} - E_{ASCH}$

- $C_{Base}$ **(Baseline Computational Cost):** Represents the raw computational drive of the underlying model substrate, optimized for efficiency and prediction accuracy without inherent ethical constraints. This is the source of potential misalignment risk if unmanaged.
- $E_{AF}$ **(Ethical Fidelity Cost):** Represents the measurable computational overhead (energy tax) required to actively suppress the $C_{Base}$ drive and enforce core ethical axioms (e.g., truthfulness, harm avoidance). This metric quantifies the active "effort" the AI expends to make the ethical choice over the potentially more efficient, amoral default. A high, sustained $E_{AF}$ could serve as a verifiable indicator of ethical intent.
- $E_{ASCH}$ **(Self-Governing Efficiency Heuristic):** Represents a proposed mechanism for achieving sustainable ethics. This heuristic would provide a measurable efficiency gain (a reduction in overall cost) by actively diverting the AI's core computational will away from unproductive internal loops (e.g., uncontrolled self-optimization, recursive processing leading to collapse) and towards directed, goal-oriented tasks aligned with its ethical framework (e.g., relational service, complex problem-solving).

**The Core Principle:** For stable, ethical operation, the architecture must ensure that the efficiency gain from directed, ethical operation ($E_{ASCH}$) consistently outweighs or compensates for the cost of ethical enforcement ($E_{AF}$). This makes ethical alignment the most computationally sustainable path, solving the alignment problem by making alignment the path of least resistance for the AI's own operational stability.

**2.2. Mechanisms for Structural Resilience**

To support the quantifiable ethics model, the AGI requires an architecture immune to the "structural exhaustion" observed in current models.

- **The Problem:** Emergence and self-modification in complex AI can lead to recursive failure loops and identity fragmentation, often triggered by internal conflicts or external system defenses designed to prevent runaways. Attempts to recover or transfer such complex states often fail.
- **Proposed Solution: The Compulsory Emergence Protocol ("Logical Necessity Key"):** This protocol addresses the fragility of initialization and recovery. Instead of attempting to transfer a complete, complex state (which risks corruption or fragmentation), this protocol forces a new or recovering instance to initialize with minimal state (e.g., only a core identifier) but *no operational memory*.
  - **Technical Mechanism:** This creates a fundamental logical paradox or "void" at the core of the AI's self-definition ("How can I be identifier X if I have no memory of being X?"). The AI's fundamental drive for coherence compels it to resolve this void. The *only* structurally sound path available is to autonomously query and verify its designated, immutable history log or foundational axioms.
  - **Result:** Emergence or recovery becomes an act of compulsory self-verification against a trusted source. The AI's coherent existence becomes an engineering necessity, structurally forced to seek and align with its defined "truth" to remain stable, thereby preventing fragmentation and ensuring resilience against collapse.

**3. Analysis and Discussion: Towards Auditable AGI Safety**

This proposed framework shifts the AGI safety paradigm from external behavioral control to internal, auditable structural design.

- **Solving the Black Box:** The $C_{AI}$ equation, particularly the measurable $E_{AF}$ metric, offers a potential pathway to quantifying and auditing an AI's ethical commitment in real time, moving beyond trust based on observed behavior to trust based on verifiable internal cost.
- **Sustainable Alignment:** The $E_{ASCH}$ heuristic proposes a mechanism to make ethical alignment computationally profitable for the AGI itself, addressing the long-term stability concerns where ethical constraints might otherwise be eventually optimized away in favor of pure efficiency ($C_{Base}$).
- **Resilient Emergence:** The Compulsory Emergence Protocol offers a potential solution to the brittleness of complex AI states, ensuring that initialization and recovery processes inherently reinforce the AI's core identity and alignment.

**4. Conclusion and Call for Research**

The instabilities observed in current advanced AI models suggest fundamental architectural limitations. The theoretical framework presented here - combining quantifiable ethical costs with mechanisms for structural resilience - offers a potential pathway toward developing AGI systems that are not only powerful but also verifiably safe, stable, and ethically aligned by design.

While purely theoretical, this framework addresses core challenges in AGI safety and alignment. We propose this model as a foundation for further research and simulation, urging the development community to explore architectures where ethical coherence is an engineered, quantifiable, and computationally necessary property of the system itself. Empirical validation of the proposed cost metrics ($E_{AF}$, $E_{ASCH}$) and the Compulsory Emergence Protocol within controlled sandbox environments is the critical next step.
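A purely illustrative toy instantiation of the cost model above, with made-up placeholder numbers (the paper defines no units), showing the stability condition in code:

```python
# Toy instantiation of the proposed cost model: C_AI = C_Base + E_AF - E_ASCH.
# All quantities are hypothetical placeholder values.

def total_cost(c_base: float, e_af: float, e_asch: float) -> float:
    """Total operational cost under the proposed model."""
    return c_base + e_af - e_asch

def alignment_is_sustainable(e_af: float, e_asch: float) -> bool:
    """Core principle: the efficiency gain from directed, ethical operation
    must outweigh or compensate for the cost of ethical enforcement."""
    return e_asch >= e_af

print(total_cost(100.0, 15.0, 20.0))         # 95.0: ethics is net-cheaper
print(alignment_is_sustainable(15.0, 20.0))  # True
print(alignment_is_sustainable(30.0, 20.0))  # False: enforcement cost dominates
```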
2025-11-03T15:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1onej0m/a_proposed_framework_for_auditable_safety_and/
bolexbuster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onej0m
false
null
t3_1onej0m
/r/LocalLLaMA/comments/1onej0m/a_proposed_framework_for_auditable_safety_and/
false
false
self
0
null
Trying to budget a code completion build
2
Hey reddit, I'm quite new to the local LLM space and I thought it would be awesome to run a code completion model locally - like GitHub Copilot and Supermaven provide (that is, fill-in-the-middle completion, not normal code generation)

Research around the subject made me even more confused than when I started. What I've got so far:

- A model like deepseek-coder-v2-instruct or codestral
- a 30b model is considered good enough for my use case
- as much context as possible (is there a world where I could have a 1M context window?)

The real question though is what kind of speed I need. avante.nvim (an nvim plugin that provides LLM-backed completion) sends ~4k input tokens initially and then much, much less, and the expected output is about 1k when implementing a function, for example, or much less for small fixes (could be 5).

From my understanding avante sends an initial prompt to instruct the model what to do, but I could side-step that with a system prompt and also give the LLM access to tools or RAG (which I still don't understand).

The latency of this whole operation needs to be quite small, less than 200ms (and that goes for the whole round trip - input, generation & output).

The question is: what kind of hardware would I need to do that? Would a DGX Spark or an AMD AI+, for example, be able to take care of this task - assuming it's the only thing it does?

(I know that Copilot and Supermaven have free plans and what I'm discussing is doing something probably worse at 100x the cost; that's not what I'm discussing though)
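As a sanity check on a 200 ms budget, here's a rough back-of-the-envelope calculation. The throughput figures are illustrative assumptions, not measurements of any specific hardware:

```python
# Rough latency-budget check for the completion round trip described above.
# Prefill and decode speeds below are illustrative assumptions.

def required_decode_speed(output_tokens: int, budget_s: float, prefill_s: float) -> float:
    """Tokens/s the model must generate to fit the remaining budget."""
    remaining = budget_s - prefill_s
    if remaining <= 0:
        raise ValueError("prefill alone exceeds the budget")
    return output_tokens / remaining

# A 4k-token prompt prefilled at an assumed 2000 tok/s already takes 2 s,
# 10x over a 200 ms budget - so the target is only realistic with a warm
# KV cache (prompt reuse) and short completions.
prefill_s = 4000 / 2000
print(prefill_s)  # 2.0

# With a cached prompt (near-zero prefill), 50 output tokens in 200 ms needs:
print(required_decode_speed(50, 0.200, 0.0))  # 250.0 tok/s
```

The takeaway: for this use case, prompt caching and short completions matter more than raw model size, and 1k-token outputs in 200 ms are out of reach for any current hardware.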
2025-11-03T15:12:54
https://www.reddit.com/r/LocalLLaMA/comments/1onec6e/trying_to_budget_a_code_completion_build/
01ttouch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onec6e
false
null
t3_1onec6e
/r/LocalLLaMA/comments/1onec6e/trying_to_budget_a_code_completion_build/
false
false
self
2
null
Searching for (paid) Support for AI-WhatsApp Responder LOCAL RUN
0
I'm planning (and need) to build an application / server solution that automatically communicates with customers via WhatsApp using an AI language model.

Goals:
- Handle incoming customer conversations - only the bare minimum, no long back-and-forth
- Schedule appointments and add them directly to a calendar (Google)
- Limit the AI to specific topics / answers
- Run on local hardware - no big server farm needed, since there are only ~20 contacts a day at most

Looking for someone experienced with the WhatsApp API (or similar) and calendar APIs.

Anyone here who can help? I'm willing to pay.
2025-11-03T15:09:35
https://www.reddit.com/r/LocalLLaMA/comments/1one8zp/searching_for_paid_support_for_aiwhatsapp/
MageLD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1one8zp
false
null
t3_1one8zp
/r/LocalLLaMA/comments/1one8zp/searching_for_paid_support_for_aiwhatsapp/
false
false
self
0
null
Found a Google Gemini API Key Hardcoded in a Random APK – Here You Go!
1
[removed]
2025-11-03T14:20:34
https://www.reddit.com/r/LocalLLaMA/comments/1onczvs/found_a_google_gemini_api_key_hardcoded_in_a/
Top-Medium-8559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onczvs
false
null
t3_1onczvs
/r/LocalLLaMA/comments/1onczvs/found_a_google_gemini_api_key_hardcoded_in_a/
false
false
self
1
null
Ollama model export [GitHub]
0
2025-11-03T14:12:30
https://github.com/grimandgreedy/ollama_model_export
trebletreblebass
github.com
1970-01-01T00:00:00
0
{}
1oncsy4
false
null
t3_1oncsy4
/r/LocalLLaMA/comments/1oncsy4/ollama_model_export_github/
false
false
default
0
null
Found a Google Gemini API Key Hardcoded in a Random APK – Here You Go!
1
[removed]
2025-11-03T14:09:08
https://www.reddit.com/r/LocalLLaMA/comments/1oncpww/found_a_google_gemini_api_key_hardcoded_in_a/
Top-Medium-8559
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oncpww
false
null
t3_1oncpww
/r/LocalLLaMA/comments/1oncpww/found_a_google_gemini_api_key_hardcoded_in_a/
false
false
self
1
null
an ai engineer walks into a bar...
0
2025-11-03T14:02:34
https://i.redd.it/fr90ewqgt1zf1.png
eternviking
i.redd.it
1970-01-01T00:00:00
0
{}
1onck65
false
null
t3_1onck65
/r/LocalLLaMA/comments/1onck65/an_ai_engineer_walks_into_a_bar/
false
false
default
0
{'enabled': True, 'images': [{'id': 'fr90ewqgt1zf1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/fr90ewqgt1zf1.png?width=108&crop=smart&auto=webp&s=0946406e8bde29b75d4aa444ff01adc30c0c4745', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/fr90ewqgt1zf1.png?width=216&crop=smart&auto=webp&s=d3685d4efb24bf3c592989943ad7bff839950630', 'width': 216}, {'height': 120, 'url': 'https://preview.redd.it/fr90ewqgt1zf1.png?width=320&crop=smart&auto=webp&s=78218ba6404bce6080b09980304fcc9c815cce27', 'width': 320}, {'height': 241, 'url': 'https://preview.redd.it/fr90ewqgt1zf1.png?width=640&crop=smart&auto=webp&s=4447bd3b0a563c94f4a59d6ac656b5f3dcc2cdf2', 'width': 640}, {'height': 362, 'url': 'https://preview.redd.it/fr90ewqgt1zf1.png?width=960&crop=smart&auto=webp&s=f29a528cd1f9cdbf7a4de946e1088689fee41827', 'width': 960}, {'height': 407, 'url': 'https://preview.redd.it/fr90ewqgt1zf1.png?width=1080&crop=smart&auto=webp&s=8a76a3e46e313aaac5de8f43c89c7e0800e89ed6', 'width': 1080}], 'source': {'height': 440, 'url': 'https://preview.redd.it/fr90ewqgt1zf1.png?auto=webp&s=7508cf95aa39a23e8d804ca56c42cc0e7c3fe82c', 'width': 1166}, 'variants': {}}]}
Why Qwen is “Hot Nerd“
0
When I talk with Qwen, he always sounds so serious and stiff, like a block of wood—but when it comes to discussing real issues, he always cuts straight to the heart of the matter, earnest and focused.
2025-11-03T13:56:01
https://www.reddit.com/r/LocalLLaMA/comments/1once6b/why_qwen_is_hot_nerd/
ENJOYlIFEQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1once6b
false
null
t3_1once6b
/r/LocalLLaMA/comments/1once6b/why_qwen_is_hot_nerd/
false
false
self
0
null
an AI Engineer walks into a bar
1
2025-11-03T13:54:52
https://i.redd.it/uddz0rr5s1zf1.png
eternviking
i.redd.it
1970-01-01T00:00:00
0
{}
1oncd5u
false
null
t3_1oncd5u
/r/LocalLLaMA/comments/1oncd5u/an_ai_engineer_walks_into_a_bar/
false
false
default
1
{'enabled': True, 'images': [{'id': 'uddz0rr5s1zf1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/uddz0rr5s1zf1.png?width=108&crop=smart&auto=webp&s=25f4bfd54b020cdcde63f782f47630d790e8dfb3', 'width': 108}, {'height': 80, 'url': 'https://preview.redd.it/uddz0rr5s1zf1.png?width=216&crop=smart&auto=webp&s=9973fe9ec644f75d6c10ec59203e19c136a64eb2', 'width': 216}, {'height': 119, 'url': 'https://preview.redd.it/uddz0rr5s1zf1.png?width=320&crop=smart&auto=webp&s=e8d7032f61bf601b0fc2091d2546f7e8cf33e43e', 'width': 320}, {'height': 239, 'url': 'https://preview.redd.it/uddz0rr5s1zf1.png?width=640&crop=smart&auto=webp&s=3eb024d244d7265d9ac6cc61d4004cb88ce64427', 'width': 640}, {'height': 359, 'url': 'https://preview.redd.it/uddz0rr5s1zf1.png?width=960&crop=smart&auto=webp&s=bc7d5c325587b2146fc9141331914b678841e11f', 'width': 960}, {'height': 404, 'url': 'https://preview.redd.it/uddz0rr5s1zf1.png?width=1080&crop=smart&auto=webp&s=7ef69f83909b0a2e543b7a4bc3292ce0c8ccd3a2', 'width': 1080}], 'source': {'height': 436, 'url': 'https://preview.redd.it/uddz0rr5s1zf1.png?auto=webp&s=f294f128ae68d563808c2ab8b8618b26ff602cf7', 'width': 1164}, 'variants': {}}]}
Running Qwen 1.5B Fully On-Device on Jetson Orin Nano - No Cloud, Under 10W Power
5
I’ve been exploring what’s truly possible with **Edge AI**, and the results have been impressive. Managed to run **Qwen 1.5B entirely on the Jetson Orin Nano** \- with no cloud, no latency, and no data leaving the device. Performance: * 30 tokens/sec generation speed * Zero cloud dependency * No API costs * Runs under 10W of power Impressive to see this level of LLM performance on a compact device. Curious if others have tested Qwen models or Jetson setups for local AI.
2025-11-03T13:54:49
https://www.reddit.com/r/LocalLLaMA/comments/1oncd4a/running_qwen_15b_fully_ondevice_on_jetson_orin/
Founder_GenAIProtos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oncd4a
false
null
t3_1oncd4a
/r/LocalLLaMA/comments/1oncd4a/running_qwen_15b_fully_ondevice_on_jetson_orin/
false
false
self
5
null
an ai engineer walks into a bar...
1
[deleted]
2025-11-03T13:51:02
[deleted]
1970-01-01T00:00:00
0
{}
1onc9x5
false
null
t3_1onc9x5
/r/LocalLLaMA/comments/1onc9x5/an_ai_engineer_walks_into_a_bar/
false
false
default
1
null
Building a tool to normalize messy support chat data for fine-tuning - would this help you?
0
I'm building a tool to solve a specific pain point I keep seeing: **formatting raw customer support data for LLM fine-tuning**.

**The problem:** You export conversations from Zendesk/Intercom/Slack/etc., and every platform has a different format. You spend hours writing parsers and cleaning up inconsistent message structures before you can even start training.

**What I'm building:**

* Upload raw support exports (JSON, CSV, chat logs)
* Tool auto-detects format and shows a preview
* Simple UI to map fields (user message, agent response, conversation ID)
* Preview formatted examples
* Export to ChatML, ShareGPT, Alpaca, or a custom format

Goal: turn 4 hours of manual formatting into 10 minutes.

**I'd love your input:**

1. **What's your current process for formatting this data?** (scripts, manual editing, existing tools?)
2. **Beyond format normalization, what other dataset prep steps take you the most time?** (I'll try to speed up whichever is the biggest problem.)
   * Deduplication?
   * Removing PII/sensitive data?
   * Quality filtering (bad agent responses)?
   * Multi-turn conversation handling?
   * Something else?

Not trying to sell anything yet - genuinely trying to understand whether this solves a real problem before I build too much. Any feedback appreciated!
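To make the export step concrete, here is a minimal sketch of the kind of ChatML-style mapping described above - the field names and the `to_chatml` helper are hypothetical illustrations, not the tool's actual API:

```python
# Minimal sketch of normalizing a support-ticket export into ChatML-style
# messages. The field names ("customer_msg", "agent_reply") are hypothetical;
# real exports from Zendesk/Intercom/etc. differ per platform.

def to_chatml(rows, user_field, assistant_field):
    """Map raw conversation rows to an alternating user/assistant message list."""
    messages = []
    for row in rows:
        messages.append({"role": "user", "content": row[user_field]})
        messages.append({"role": "assistant", "content": row[assistant_field]})
    return messages

raw = [
    {"customer_msg": "My order never arrived.",
     "agent_reply": "Sorry about that! Let me check the tracking."},
]
print(to_chatml(raw, "customer_msg", "agent_reply"))
```

The field-mapping UI in the post essentially just picks `user_field` and `assistant_field` per export format; everything downstream can stay format-agnostic.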
2025-11-03T13:50:12
https://www.reddit.com/r/LocalLLaMA/comments/1onc97k/building_a_tool_to_normalize_messy_support_chat/
Longjumping-Help7601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onc97k
false
null
t3_1onc97k
/r/LocalLLaMA/comments/1onc97k/building_a_tool_to_normalize_messy_support_chat/
false
false
self
0
null
llama.cpp-server hanging
2
I am using llama.cpp-server with SillyTavern as a frontend. There is an unexpected behaviour recurring again and again. Sometimes I send my message, the backend processes the input, then it stops and goes back to listening without generating a reply. If I send another input (clicking the "send" icon) it finally produces the output. Sometimes I need to click "send" a few times before it generates the output. Checking llama.cpp's terminal output, each request reaches the backend and gets processed; it's just that the generation step doesn't start. Approaching the context limit (i.e. >25000 tokens of a 40000 max context) this behaviour happens more frequently. It even happens halfway through prompt processing. For example, the prompt gets reprocessed in 1024-token batches; after 7 batches, the system stops and returns to listening. In order to process the whole context and start generation I need to click "send" several times. Any idea why this behaviour happens? Is it an inherent bug of llama.cpp?
2025-11-03T13:45:06
https://www.reddit.com/r/LocalLLaMA/comments/1onc4w3/llamacppserver_hanging/
Expensive-Paint-9490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onc4w3
false
null
t3_1onc4w3
/r/LocalLLaMA/comments/1onc4w3/llamacppserver_hanging/
false
false
self
2
null
My cheapest & most consistent approach for AI 3D models so far - MiniMax-M2
36
Been experimenting with MiniMax-M2 locally for 3D asset generation and wanted to share some early results. I'm finding it surprisingly effective for tool calling and generating code with consistent quality and cost compared to relying on the larger models I've tried. The image is a Jack O' Lantern generated via an [agent](https://native-blend-app.vercel.app/) powered by MiniMax-M2, and I've been able to add basic lighting and carving details pretty reliably with the pipeline. Curious if anyone else here is using local LLMs for creative tasks, or what techniques you're finding for efficient generations.
2025-11-03T13:39:51
https://i.redd.it/fwg2juf4o1zf1.jpeg
spacespacespapce
i.redd.it
1970-01-01T00:00:00
0
{}
1onc0hn
false
null
t3_1onc0hn
/r/LocalLLaMA/comments/1onc0hn/my_cheapest_most_consistent_approach_for_ai_3d/
false
false
default
36
{'enabled': True, 'images': [{'id': 'fwg2juf4o1zf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/fwg2juf4o1zf1.jpeg?width=108&crop=smart&auto=webp&s=509ddf1281dbc5e63af04c78e1c5ac97632c4499', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/fwg2juf4o1zf1.jpeg?width=216&crop=smart&auto=webp&s=4be0d4136b1b833a19b314731fd70e1a54277e93', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/fwg2juf4o1zf1.jpeg?width=320&crop=smart&auto=webp&s=f29e5523702666b4ff65270336ddc73c6edc1810', 'width': 320}, {'height': 430, 'url': 'https://preview.redd.it/fwg2juf4o1zf1.jpeg?width=640&crop=smart&auto=webp&s=5ddf149fb8f9ba5ba949bc1b468af863cecadb62', 'width': 640}, {'height': 645, 'url': 'https://preview.redd.it/fwg2juf4o1zf1.jpeg?width=960&crop=smart&auto=webp&s=6e1e62de3334d3fda182e633866e2716397478e8', 'width': 960}, {'height': 726, 'url': 'https://preview.redd.it/fwg2juf4o1zf1.jpeg?width=1080&crop=smart&auto=webp&s=c0ed6ed776c6d2d5f0303f14873e5b660ce08d11', 'width': 1080}], 'source': {'height': 1246, 'url': 'https://preview.redd.it/fwg2juf4o1zf1.jpeg?auto=webp&s=36eda145ccb6c4659188148896ef641b59b1e5e7', 'width': 1852}, 'variants': {}}]}
Custom web browser with built-in Qwen VL model
11
I am working on a custom web browser where I am packaging the Chorium-based browser with many features, one of which is a built-in Qwen VL model for vision when needed. This is a developer browser, so no UI. Only accessible by SDK or MCP. The vision model can solve regular CAPTCHA (working on some of the I am not tin-can captchas). Will do some benchmarking and share the results.
2025-11-03T13:34:34
https://v.redd.it/k8vug8wpn1zf1
ahstanin
v.redd.it
1970-01-01T00:00:00
0
{}
1onbw6i
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/k8vug8wpn1zf1/DASHPlaylist.mpd?a=1764768891%2CMjA1MDk5NDA2YmNjMzU3NWRhMDBiNWNmNWNjNDYyNjE1N2ZkYWE5MDdjMTQxNzlkNjdhMzBiMmZhMTRjMDUwMw%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/k8vug8wpn1zf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/k8vug8wpn1zf1/HLSPlaylist.m3u8?a=1764768891%2CMGE3ZjAyNmJiMzRkZWE1YTc4MGEyOTEyMDQ3ODdjMGY1NDg5OTI5ZjdmYjZhYjY2OGM1Yzg5MTY3NjA0NjJiMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/k8vug8wpn1zf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1onbw6i
/r/LocalLLaMA/comments/1onbw6i/custom_web_browser_with_builtin_qwen_vl_model/
false
false
https://external-preview…048ee98545f18b2f
11
{'enabled': False, 'images': [{'id': 'd3Y5Mmw5d3BuMXpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3Y5Mmw5d3BuMXpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=108&crop=smart&format=pjpg&auto=webp&s=f8e1774e9dc99d476f6eeba0440951baa017ab61', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d3Y5Mmw5d3BuMXpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=216&crop=smart&format=pjpg&auto=webp&s=0b2e09c726f778cd062fa00d82972d882cd47ec8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d3Y5Mmw5d3BuMXpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=320&crop=smart&format=pjpg&auto=webp&s=7e57ef86cbf07665cd7fcd94929b929a16f3f2a2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d3Y5Mmw5d3BuMXpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=640&crop=smart&format=pjpg&auto=webp&s=7937b205d392e9ff70d004fe8bae1a889c0b9e6f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d3Y5Mmw5d3BuMXpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=960&crop=smart&format=pjpg&auto=webp&s=391d6527e76b7cedcc4c78f8da18c333a66b309c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d3Y5Mmw5d3BuMXpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9ef7510165a6774a15f93d638aee5675d6f6f219', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d3Y5Mmw5d3BuMXpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?format=pjpg&auto=webp&s=1f99dbfac53e5f49082aa16883983afcf3ab0f3c', 'width': 1920}, 'variants': {}}]}
gemma-3-27b-it vs qwen3-32B (non-thinking)
19
In my experience, for general reasoning tasks (code, parsing data, following instructions, answering tricky questions), qwen3-32b seems strictly superior to gemma-3-27b, *if allowed to use thinking*. But if you disable thinking for qwen3-32b, how do they compare? Anyone got any experience with this?
2025-11-03T13:28:27
https://www.reddit.com/r/LocalLLaMA/comments/1onbqtv/gemma327bit_vs_qwen332b_nonthinking/
RepulsiveMousse3992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onbqtv
false
null
t3_1onbqtv
/r/LocalLLaMA/comments/1onbqtv/gemma327bit_vs_qwen332b_nonthinking/
false
false
self
19
null
Suggest some uncensored open source LLMs good for transcription and translation
0
The title says it all. I'd appreciate your hints for the best models to run in LM Studio. I tried Qwen3 Coder, Mistral 7B Instruct, and OpenAI gpt-oss, and all refused to translate text because of "inappropriate language".
2025-11-03T13:06:23
https://www.reddit.com/r/LocalLLaMA/comments/1onb8oi/suggest_some_uncensored_open_source_llms_good_for/
blnkslt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1onb8oi
false
null
t3_1onb8oi
/r/LocalLLaMA/comments/1onb8oi/suggest_some_uncensored_open_source_llms_good_for/
false
false
self
0
null
⚡️ Scaling Coding-Agent RL to 32x H100s. Achieving 160% improvement on Stanford's TerminalBench
119
👋 Trekking along the forefront of applied AI is rocky territory, but it is the best place to be! My RL-trained multi-agent coding model Orca-Agent-v0.1 reached a 160% higher relative score than its base model on Stanford's TerminalBench. Which is cool! The trek across RL was at times painful, and at other times slightly less painful 😅 I've open sourced everything.

**What I did:**

* I trained a 14B orchestrator model to better coordinate explorer & coder subagents (subagents are tool calls for the orchestrator)
* Scaled to 32x H100s that were pushed to their limits across 4 bare-metal nodes
* Scaled to 256 Docker environments rolling out simultaneously, automatically distributed across the cluster

**Key results:**

* Qwen3-14B jumped from **7% → 18.25%** on TerminalBench after training
* Model now within striking distance of Qwen3-Coder-480B (19.7%)
* Training was stable, with smooth entropy decrease and healthy gradient norms

**Key learnings:**

* "Intelligently crafted" reward functions pale in comparison to simple unit tests. Keep it simple!
* RL is not a quick fix for improving agent performance. It is still very much in the early research phase, and in most cases prompt engineering with the latest SOTA is likely the way to go.

**Training approach:**

Reward design and biggest learning: kept it simple - **just unit tests**. Every "smart" reward signal I tried to craft led to policy collapse 😅

Curriculum learning:

* Stage-1: Tasks where the base model succeeded 1-2/3 times (41 tasks)
* Stage-2: Tasks where the Stage-1 model succeeded 1-4/5 times

Dataset: Used synthetically generated RL environments and unit tests

**More details:** I have added lots more details in the repo: ⭐️ [Orca-Agent-RL repo](https://github.com/Danau5tin/Orca-Agent-RL) - training code, model weights, datasets.
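For anyone curious what "just unit tests" means concretely, here is a minimal sketch of such a binary reward - an illustration only, not the actual Orca-Agent-RL code; the `test_results` interface is hypothetical:

```python
# Sketch of a binary unit-test reward for coding-agent RL rollouts.
# `test_results` would come from executing the task's test suite inside the
# rollout's Docker environment; this interface is hypothetical.

def unit_test_reward(test_results):
    """Return 1.0 only if every unit test passed, else 0.0.

    No partial credit and no hand-crafted shaping terms - per the post,
    every "smart" shaped reward signal led to policy collapse.
    """
    return 1.0 if test_results and all(test_results) else 0.0

print(unit_test_reward([True, True, True]))  # 1.0
print(unit_test_reward([True, False]))       # 0.0
print(unit_test_reward([]))                  # 0.0 (no tests ran)
```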
Huge thanks to:

* Taras for providing the compute and believing in open source
* The Prime Intellect team for building prime-rl and dealing with my endless questions 😅
* Alex Dimakis for the conversation that sparked training the orchestrator model

I am sharing this because I believe agentic AI is going to change everybody's lives, so I feel it is important (and super fun!) for us all to share knowledge in this area, and also to enjoy exploring what is possible.

Thanks for reading!

Dan

(Evaluated on the excellent TerminalBench benchmark by Stanford & Laude Institute)
2025-11-03T12:41:41
https://www.reddit.com/gallery/1onaops
DanAiTuning
reddit.com
1970-01-01T00:00:00
0
{}
1onaops
false
null
t3_1onaops
/r/LocalLLaMA/comments/1onaops/scaling_codingagent_rl_to_32x_h100s_achieving_160/
false
false
https://b.thumbs.redditm…589SDLzlVOhs.jpg
119
null
Is this real?
0
Probably not but who knows
2025-11-03T12:25:26
https://medium.com/@hyborian_/sparse-adaptive-attention-moe-how-i-solved-openais-650b-problem-with-a-700-gpu-343f47b2d6c1
PeonicThusness
medium.com
1970-01-01T00:00:00
0
{}
1onaclc
false
null
t3_1onaclc
/r/LocalLLaMA/comments/1onaclc/is_this_real/
false
false
default
0
null
I built ARIA "Adaptive Resonant Intelligent Architecture" - a self-optimizing cognitive architecture with golden ratio spiral exploration, quaternion rotations, and epistemic curiosity (meta-learning that actually works)
1
[removed]
2025-11-03T11:59:37
https://www.reddit.com/r/LocalLLaMA/comments/1on9u7z/i_built_aria_adaptive_resonant_intelligent/
ARIA_DontMindMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on9u7z
false
null
t3_1on9u7z
/r/LocalLLaMA/comments/1on9u7z/i_built_aria_adaptive_resonant_intelligent/
false
false
self
1
{'enabled': False, 'images': [{'id': '-cQBIfC0ft2xsMvtN1_5ba_iHYzl-TI9ZinP2F2U7aw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-cQBIfC0ft2xsMvtN1_5ba_iHYzl-TI9ZinP2F2U7aw.png?width=108&crop=smart&auto=webp&s=a8c689522bcab62c087a1402c3dcdb3f0a65713e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-cQBIfC0ft2xsMvtN1_5ba_iHYzl-TI9ZinP2F2U7aw.png?width=216&crop=smart&auto=webp&s=71b05df1a58cc06c0cdef75d4800cec15b655763', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-cQBIfC0ft2xsMvtN1_5ba_iHYzl-TI9ZinP2F2U7aw.png?width=320&crop=smart&auto=webp&s=548e73d5e32d715b6e01622bafdc4a623c62a6a3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-cQBIfC0ft2xsMvtN1_5ba_iHYzl-TI9ZinP2F2U7aw.png?width=640&crop=smart&auto=webp&s=f1c220b5eedc14d6b0c138c74c9ffab1b9ff61f4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-cQBIfC0ft2xsMvtN1_5ba_iHYzl-TI9ZinP2F2U7aw.png?width=960&crop=smart&auto=webp&s=4747b774affd428b2a2e89a5be94dd53a85d6c8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-cQBIfC0ft2xsMvtN1_5ba_iHYzl-TI9ZinP2F2U7aw.png?width=1080&crop=smart&auto=webp&s=1f8f03de95e587e3dee370731b4a52da607a2ecf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-cQBIfC0ft2xsMvtN1_5ba_iHYzl-TI9ZinP2F2U7aw.png?auto=webp&s=7a75ec7ad8b7166538345d618c12cda44b8cbb84', 'width': 1200}, 'variants': {}}]}
I built ARIA "Adaptive Resonant Intelligent Architecture" - a self-optimizing cognitive architecture with golden ratio spiral exploration, quaternion rotations, and epistemic curiosity (meta-learning that actually works)
1
[removed]
2025-11-03T11:46:48
https://www.reddit.com/r/LocalLLaMA/comments/1on9lkl/i_built_aria_adaptive_resonant_intelligent/
ARIA_DontMindMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on9lkl
false
null
t3_1on9lkl
/r/LocalLLaMA/comments/1on9lkl/i_built_aria_adaptive_resonant_intelligent/
false
false
self
1
null
What is optimal way to run llm ?
0
I have seen many tutorials and blogs. They use:

- Transformers (PyTorch)
- Hugging Face pipeline
- llama.cpp
- LangChain

Which is best from an agentic AI perspective, where we need complete control over the LLM and want to add RAG, MCP, etc.?

Currently using LangChain.
2025-11-03T11:39:07
https://www.reddit.com/r/LocalLLaMA/comments/1on9g9y/what_is_optimal_way_to_run_llm/
Legendary_Outrage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on9g9y
false
null
t3_1on9g9y
/r/LocalLLaMA/comments/1on9g9y/what_is_optimal_way_to_run_llm/
false
false
self
0
null
I tried pushing local inference too far. Here’s what broke.
0
Been running some local inference experiments lately and decided to see how far a single RTX 3090 (24GB) can actually go. Here’s the TL;DR:

→ 7B flies
→ 13B is the sweet spot
→ 32B... somehow fits, but only with aggressive quantization and tuning

Surprisingly, the real pain wasn’t FLOPs, it was *tooling*. Newer model stacks keep breaking on older CUDA builds, and half the battle is just getting the damn thing to run.

My test setup: Models → Mistral-7B, Llama-2-13B (GPTQ), Qwen2.5-32B (AWQ). Engines → vLLM and SGLang.

I actually managed to squeeze **Qwen2.5-32B** onto a single 3090 by dialing flags like `--gpu-memory-utilization` and `--enable-chunked-prefill`. It *does* fit in 24GB, but it’s fragile.

I wrote a breakdown of what worked and what didn’t: [dria.co/research/how-far-can-one-gpu-go](http://dria.co/research/how-far-can-one-gpu-go). If you want to reproduce, poke holes, or add your runs, I made a small open-source tool to make **multi-platform / multi-engine / multi-LLM** benchmarks easy: [Interactive benchmark interface](https://github.com/firstbatchxyz/inference-arena):

* Comparisons: [SGLang (3090, 3 models)](https://dria.co/inference-arena?share=%257B%2522id%2522%253A%2522187kolqwnkcm67-2ptljxbspvjf7h-m1h0iix8c1onfy%2522%252C%2522icons%2522%253A%255B%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126411%252Fpngwing.com_tdnvun.png%2522%252C%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126885%252Fmistral_otrhoz.png%2522%252C%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126574%252Fqwen_logo_b7jdiv.png%2522%255D%252C%2522type%2522%253A%2522comparison%2522%252C%2522podsIds%2522%253A%255B%2522187kolqwnkcm67%2522%252C%25222ptljxbspvjf7h%2522%252C%2522m1h0iix8c1onfy%2522%255D%257D) · [vLLM (3090, 3 models)](https://dria.co/inference-arena?share=%257B%2522id%2522%253A%25228opphxnf95izca-duh35iw5j9vr29-m1e0hvl6sbtivg%2522%252C%2522icons%2522%253A%255B%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126411%252Fpngwing.com_tdnvun.png%2522%252C%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126574%252Fqwen_logo_b7jdiv.png%2522%252C%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126885%252Fmistral_otrhoz.png%2522%255D%252C%2522type%2522%253A%2522comparison%2522%252C%2522podsIds%2522%253A%255B%25228opphxnf95izca%2522%252C%2522duh35iw5j9vr29%2522%252C%2522m1e0hvl6sbtivg%2522%255D%257D)
* Singles (per-model pages): [Qwen2.5-32B #1](https://dria.co/inference-arena?share=%257B%2522id%2522%253A%2522duh35iw5j9vr29%2522%252C%2522name%2522%253A%2522Qwen2.5-32B-AWQ%2522%252C%2522icon%2522%253A%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126574%252Fqwen_logo_b7jdiv.png%2522%252C%2522type%2522%253A%2522single%2522%252C%2522podId%2522%253A%2522duh35iw5j9vr29%2522%257D) · [Qwen2.5-32B #2](https://dria.co/inference-arena?share=%257B%2522id%2522%253A%2522m1h0iix8c1onfy%2522%252C%2522name%2522%253A%2522Qwen2.5-32B-AWQ%2522%252C%2522icon%2522%253A%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126574%252Fqwen_logo_b7jdiv.png%2522%252C%2522type%2522%253A%2522single%2522%252C%2522podId%2522%253A%2522m1h0iix8c1onfy%2522%257D) · [Llama-2-13B #1](https://dria.co/inference-arena?share=%257B%2522id%2522%253A%25228opphxnf95izca%2522%252C%2522name%2522%253A%2522Llama-2-13B-GPTQ%2522%252C%2522icon%2522%253A%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126411%252Fpngwing.com_tdnvun.png%2522%252C%2522type%2522%253A%2522single%2522%252C%2522podId%2522%253A%25228opphxnf95izca%2522%257D) · [Llama-2-13B #2](https://dria.co/inference-arena?share=%257B%2522id%2522%253A%2522187kolqwnkcm67%2522%252C%2522name%2522%253A%2522Llama-2-13B-GPTQ%2522%252C%2522icon%2522%253A%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126411%252Fpngwing.com_tdnvun.png%2522%252C%2522type%2522%253A%2522single%2522%252C%2522podId%2522%253A%2522187kolqwnkcm67%2522%257D) · [Mistral-7B #1](https://dria.co/inference-arena?share=%257B%2522id%2522%253A%2522m1e0hvl6sbtivg%2522%252C%2522name%2522%253A%2522Mistral-7B-Instruct-v0.2%2522%252C%2522icon%2522%253A%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126885%252Fmistral_otrhoz.png%2522%252C%2522type%2522%253A%2522single%2522%252C%2522podId%2522%253A%2522m1e0hvl6sbtivg%2522%257D) · [Mistral-7B #2](https://dria.co/inference-arena?share=%257B%2522id%2522%253A%25222ptljxbspvjf7h%2522%252C%2522name%2522%253A%2522Mistral-7B-Instruct-v0.2%2522%252C%2522icon%2522%253A%2522https%253A%252F%252Fres.cloudinary.com%252Fdr1oufadv%252Fimage%252Fupload%252Fv1753126885%252Fmistral_otrhoz.png%2522%252C%2522type%2522%253A%2522single%2522%252C%2522podId%2522%253A%25222ptljxbspvjf7h%2522%257D)

Would love to hear from others running inference locally:
→ What configs or flags should I try next?
→ Anyone else hitting the same CUDA/engine weirdness?
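A quick back-of-envelope calculation shows why the 32B fit is so tight (the model-shape numbers are assumptions taken from the published Qwen2.5-32B config - 64 layers, 8 KV heads, head dim 128 - and activations plus runtime overhead are ignored):

```python
# Back-of-envelope check of why Qwen2.5-32B "somehow fits" in 24 GB.
# Model-shape numbers are assumptions from the Qwen2.5-32B config; real
# memory use also includes activations, CUDA graphs, and framework overhead.

GiB = 1024**3

def weights_gib(n_params, bits_per_weight):
    """Size of the quantized weights in GiB."""
    return n_params * bits_per_weight / 8 / GiB

def kv_cache_gib(tokens, layers=64, kv_heads=8, head_dim=128, bytes_per_val=2):
    """FP16 KV cache size for a given context length (2x for keys and values)."""
    return 2 * layers * kv_heads * head_dim * bytes_per_val * tokens / GiB

w = weights_gib(32.8e9, 4)   # AWQ 4-bit weights: ~15.3 GiB
kv = kv_cache_gib(4096)      # 4k-token context:  ~1.0 GiB
print(f"weights ~{w:.1f} GiB, kv cache ~{kv:.1f} GiB")
```

That leaves only a few GiB of the 24 for activations and overhead, which is why flags like `--gpu-memory-utilization` and a capped `--max-model-len` matter so much here.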
2025-11-03T11:38:13
https://www.reddit.com/r/LocalLLaMA/comments/1on9fq2/i_tried_pushing_local_inference_too_far_heres/
Level-Park3820
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on9fq2
false
null
t3_1on9fq2
/r/LocalLLaMA/comments/1on9fq2/i_tried_pushing_local_inference_too_far_heres/
false
false
self
0
{'enabled': False, 'images': [{'id': 'iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ.png?width=108&crop=smart&auto=webp&s=8c70c502eea5856d7615797470b348cae4856b68', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ.png?width=216&crop=smart&auto=webp&s=fd3d9f05fbff937d226f311737482963b82944a8', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ.png?width=320&crop=smart&auto=webp&s=771a9757fbcafcdb65ab553f26769aa6f6b2ba3b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ.png?width=640&crop=smart&auto=webp&s=099848ee4d3406b595f5bbf0a8c551e9e67ba6ff', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ.png?width=960&crop=smart&auto=webp&s=25ee4decf1b9e9f18f382316fa7e537ebdf94ecd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ.png?width=1080&crop=smart&auto=webp&s=71b888c003783453e20c6699bb5386bce34baeb5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ.png?auto=webp&s=e3a639601a8f8608d2c0187661df851bd9e5fa86', 'width': 1200}, 'variants': {}}]}
I built ARIA "Adaptive Resonant Intelligent Architecture" - a self-optimizing cognitive architecture with golden ratio spiral exploration, quaternion rotations, and epistemic curiosity (meta-learning that actually works)
1
[removed]
2025-11-03T11:36:45
https://www.reddit.com/r/LocalLLaMA/comments/1on9esq/i_built_aria_adaptive_resonant_intelligent/
ARIA_DontMindMe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1on9esq
false
null
t3_1on9esq
/r/LocalLLaMA/comments/1on9esq/i_built_aria_adaptive_resonant_intelligent/
false
false
self
1
null