Schema (field · dtype · observed min / max):

title      stringlengths       1 / 300
score      int64               0 / 8.54k
selftext   stringlengths       0 / 41.5k
created    timestamp[ns]date   2023-04-01 04:30:41 / 2026-03-04 02:14:14
url        stringlengths       0 / 878
author     stringlengths       3 / 20
domain     stringlengths       0 / 82
edited     timestamp[ns]date   1970-01-01 00:00:00 / 2026-02-19 14:51:53
gilded     int64               0 / 2
gildings   stringclasses       7 values
id         stringlengths       7 / 7
locked     bool                2 classes
media      stringlengths       646 / 1.8k
name       stringlengths       10 / 10
permalink  stringlengths       33 / 82
spoiler    bool                2 classes
stickied   bool                2 classes
thumbnail  stringlengths       4 / 213
ups        int64               0 / 8.54k
preview    stringlengths       301 / 5.01k
Completed: Local AI Optimization Tool - llama-optimus
1
[removed]
2025-06-17T22:04:37
https://www.reddit.com/r/LocalLLaMA/comments/1le0beu/completed_local_ai_optimization_tool_llamaoptimus/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le0beu
false
null
t3_1le0beu
/r/LocalLLaMA/comments/1le0beu/completed_local_ai_optimization_tool_llamaoptimus/
false
false
https://b.thumbs.redditm…5juj4S29I_Ig.jpg
1
null
Parallel Tool Calls
1
[removed]
2025-06-17T22:04:24
https://www.reddit.com/r/LocalLLaMA/comments/1le0b8p/parallel_tool_calls/
bootstrapper-919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le0b8p
false
null
t3_1le0b8p
/r/LocalLLaMA/comments/1le0b8p/parallel_tool_calls/
false
false
self
1
null
Which search engine to use with Open WebUI
4
I'm trying to get away from being tied to chatgpt. I tried DDG first, but they rate limit so hard. I'm now using brave pro ai, but it doesn't seem like it reliably returns useful context. I've tried asking for the weather tomorrow in my city, fail. Tried asking a simple query "For 64 bit vectorizable operations, should...
2025-06-17T22:04:19
https://www.reddit.com/r/LocalLLaMA/comments/1le0b5t/which_search_engine_to_use_with_open_webui/
MengerianMango
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le0b5t
false
null
t3_1le0b5t
/r/LocalLLaMA/comments/1le0b5t/which_search_engine_to_use_with_open_webui/
false
false
self
4
null
Veo3 still blocked in germany
0
Is it the European regulations causing this delay, or something specific to Germany? Anyone know if there’s a workaround or official update?
2025-06-17T21:38:59
https://www.reddit.com/r/LocalLLaMA/comments/1ldzp8c/veo3_still_blocked_in_germany/
Local_Beach
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldzp8c
false
null
t3_1ldzp8c
/r/LocalLLaMA/comments/1ldzp8c/veo3_still_blocked_in_germany/
false
false
self
0
null
Question from a greenie: Is anyone using local LLM on WSL integrated with vscode (AMD)?
0
I have tried both Ollama and LLMstudio and cant seem to get it to work properly. The real issue is: I have an RX6750XT and, for example with Ollama, it cannot use the GPU through WSL. My use case is to use it on VSCode with "continue" extension so that I am able to get local AI feedback, using WSL.
2025-06-17T20:39:56
https://www.reddit.com/r/LocalLLaMA/comments/1ldy8oq/question_from_a_greenie_is_anyone_using_local_llm/
FoxPatr0l
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldy8oq
false
null
t3_1ldy8oq
/r/LocalLLaMA/comments/1ldy8oq/question_from_a_greenie_is_anyone_using_local_llm/
false
false
self
0
null
Finance Local LLM
1
[removed]
2025-06-17T20:24:26
https://www.reddit.com/r/LocalLLaMA/comments/1ldxup1/finance_local_llm/
Mainzerger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldxup1
false
null
t3_1ldxup1
/r/LocalLLaMA/comments/1ldxup1/finance_local_llm/
false
false
self
1
null
The Gemini 2.5 models are sparse mixture-of-experts (MoE)
162
From the [model report](https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf). It should be a surprise to noone, but it's good to see this being spelled out. We barely ever learn anything about the architecture of closed models. https://preview.redd.it/zhyrdk2dqj7f1.png?width=1056&format=png&aut...
2025-06-17T20:24:15
https://www.reddit.com/r/LocalLLaMA/comments/1ldxuk1/the_gemini_25_models_are_sparse_mixtureofexperts/
cpldcpu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldxuk1
false
null
t3_1ldxuk1
/r/LocalLLaMA/comments/1ldxuk1/the_gemini_25_models_are_sparse_mixtureofexperts/
false
false
https://b.thumbs.redditm…LNatUzMp3veI.jpg
162
null
News Day
1
[removed]
2025-06-17T19:57:45
https://www.reddit.com/r/LocalLLaMA/comments/1ldx664/news_day/
throwawayacc201711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldx664
false
null
t3_1ldx664
/r/LocalLLaMA/comments/1ldx664/news_day/
false
false
self
1
null
New KoboldCpp .NET Frontend App
1
[removed]
2025-06-17T19:19:41
https://github.com/phr00t/ai-talker-frontend
phr00t_
github.com
1970-01-01T00:00:00
0
{}
1ldw7je
false
null
t3_1ldw7je
/r/LocalLLaMA/comments/1ldw7je/new_koboldcpp_net_frontend_app/
false
false
default
1
null
Handy - a simple, open-source offline speech-to-text app written in Rust using whisper.cpp
72
# I built a simple, offline speech-to-text app after breaking my finger - now open sourcing it **TL;DR:** Made a cross-platform speech-to-text app using whisper.cpp that runs completely offline. Press shortcut, speak, get text pasted anywhere. It's rough around the edges but works well and is designed to be easily mod...
2025-06-17T19:00:08
https://handy.computer
sipjca
handy.computer
1970-01-01T00:00:00
0
{}
1ldvosh
false
null
t3_1ldvosh
/r/LocalLLaMA/comments/1ldvosh/handy_a_simple_opensource_offline_speechtotext/
false
false
https://external-preview…551e6580ed033bfa
72
{'enabled': False, 'images': [{'id': 'bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=108&crop=smart&auto=webp&s=866456fb6b18ecde709af611ae84dd56b0a95708', 'width': 108}, {'height': 113, 'url': 'h...
Handy - a simple, open-source offline speech-to-text app written in Rust using whisper.cpp
1
\# I built a simple, offline speech-to-text app after breaking my finger - now open sourcing it \*\*TL;DR:\*\* Made a cross-platform speech-to-text app using whisper.cpp that runs completely offline. Press shortcut, speak, get text pasted anywhere. It's rough around the edges but works well and is designed to be eas...
2025-06-17T18:57:09
https://handy.computer
sipjca
handy.computer
1970-01-01T00:00:00
0
{}
1ldvltt
false
null
t3_1ldvltt
/r/LocalLLaMA/comments/1ldvltt/handy_a_simple_opensource_offline_speechtotext/
false
false
https://external-preview…551e6580ed033bfa
1
{'enabled': False, 'images': [{'id': 'bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=108&crop=smart&auto=webp&s=866456fb6b18ecde709af611ae84dd56b0a95708', 'width': 108}, {'height': 113, 'url': 'h...
Newly Released MiniMax-M1 80B vs Claude Opus 4
75
2025-06-17T18:40:54
https://i.redd.it/gwxrxooh8j7f1.jpeg
Just_Lingonberry_352
i.redd.it
1970-01-01T00:00:00
0
{}
1ldv6jb
false
null
t3_1ldv6jb
/r/LocalLLaMA/comments/1ldv6jb/newly_released_minimaxm1_80b_vs_claude_opus_4/
false
false
https://external-preview…4a10ab9095c34531
75
{'enabled': True, 'images': [{'id': 'NHNFb4RCTzTLyF9gcL2lRgDBdUS99AndB95aAyC66Fg', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/gwxrxooh8j7f1.jpeg?width=108&crop=smart&auto=webp&s=3fbf79b33125a7e0b6e18b75132bdff3920d5fdf', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/gwxrxooh8j7f1.j...
:grab popcorn: OpenAI weighs “nuclear option” of antitrust complaint against Microsoft
242
2025-06-17T18:34:19
https://arstechnica.com/ai/2025/06/openai-weighs-nuclear-option-of-antitrust-complaint-against-microsoft/
tabspaces
arstechnica.com
1970-01-01T00:00:00
0
{}
1ldv0hk
false
null
t3_1ldv0hk
/r/LocalLLaMA/comments/1ldv0hk/grab_popcorn_openai_weighs_nuclear_option_of/
false
false
https://external-preview…263f3b939ef966c1
242
{'enabled': False, 'images': [{'id': 'o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU.jpeg?width=108&crop=smart&auto=webp&s=746f724b454f668c4e53555257b5b900676de05d', 'width': 108}, {'height': 121, 'url': '...
Mac Studio m3 ultra 256gb vs 1x 5090
2
I want to build an LLM rig for experiencing and as a local server for dev activities (non pro) but I’m torn between the two following configs. The benefit I see to the rig with the 5090 is that I can also use it to game. Prices are in CAD. I know I can get a better deal by building a PC myself. Also debating if the M...
2025-06-17T18:33:47
https://www.reddit.com/gallery/1lduzzl
jujucz
reddit.com
1970-01-01T00:00:00
0
{}
1lduzzl
false
null
t3_1lduzzl
/r/LocalLLaMA/comments/1lduzzl/mac_studio_m3_ultra_256gb_vs_1x_5090/
false
false
https://external-preview…4419ab0f92bc1e08
2
{'enabled': True, 'images': [{'id': 'Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY.jpeg?width=108&crop=smart&auto=webp&s=3f3f212c433ffde026946c79aa1ad3cc7965a240', 'width': 108}, {'height': 91, 'url': 'ht...
🧠 New Paper Alert: Curriculum Learning Boosts LLM Training Efficiency!
4
🧠 **New Paper Alert: Curriculum Learning Boosts LLM Training Efficiency** 📄 [Beyond Random Sampling: Efficient Language Model Pretraining via Curriculum Learning](https://arxiv.org/abs/2506.11300) 🔥 Over **200+ pretraining runs** analyzed in this large-scale study exploring **Curriculum Learning (CL)** as an alte...
2025-06-17T18:31:12
https://www.reddit.com/r/LocalLLaMA/comments/1lduxn0/new_paper_alert_curriculum_learning_boosts_llm/
Ok-Cut-3551
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lduxn0
false
null
t3_1lduxn0
/r/LocalLLaMA/comments/1lduxn0/new_paper_alert_curriculum_learning_boosts_llm/
false
false
self
4
null
Open Source Project - LLM-God
1
[removed]
2025-06-17T18:24:02
https://www.reddit.com/gallery/1ldur2k
zuniloc01
reddit.com
1970-01-01T00:00:00
0
{}
1ldur2k
false
null
t3_1ldur2k
/r/LocalLLaMA/comments/1ldur2k/open_source_project_llmgod/
false
false
https://external-preview…ea72b6212f3ccd60
1
{'enabled': True, 'images': [{'id': 'kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE.png?width=108&crop=smart&auto=webp&s=5c0ab9d8a4f17e1bda644a36a985021523f114ee', 'width': 108}, {'height': 119, 'url': 'ht...
🚀 I built a lightweight web UI for Ollama – great for local LLMs!
3
Hey folks! 👋 I'm the creator of [**ollama\_simple\_webui**](https://github.com/Laszlobeer/ollama_simple_webui) – a no-frills, lightweight web UI for [Ollama](https://ollama.com/), focused on simplicity, performance, and accessibility. ✅ **Features:** * Clean and responsive UI for chatting with local LLMs * Easy setu...
2025-06-17T18:22:06
https://www.reddit.com/r/LocalLLaMA/comments/1ldupay/i_built_a_lightweight_web_ui_for_ollama_great_for/
Reasonable_Brief578
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldupay
false
null
t3_1ldupay
/r/LocalLLaMA/comments/1ldupay/i_built_a_lightweight_web_ui_for_ollama_great_for/
false
false
https://external-preview…7480b4e8e1de8247
3
{'enabled': False, 'images': [{'id': '1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak.png?width=108&crop=smart&auto=webp&s=5217c0746295b4a4636bb4d1bc7607834c7969eb', 'width': 108}, {'height': 108, 'url': 'h...
GPU for LLMs fine-tuning
1
I'm looking to purchase a gpu for fine tuning LLMs, plz suggest which I should go for, and if anyone selling their gpu on second hand price, I would love to buy. Country: India, Can pay in both USD and INR
2025-06-17T17:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1ldu2bz/gpu_for_llms_finetuning/
Western-Age3148
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldu2bz
false
null
t3_1ldu2bz
/r/LocalLLaMA/comments/1ldu2bz/gpu_for_llms_finetuning/
false
false
self
1
null
SAGA Update: Now with Autonomous Knowledge Graph Healing & A More Robust Core!
15
Hello again, everyone! A few weeks ago, I shared a major update to SAGA (Semantic And Graph-enhanced Authoring), my autonomous novel generation project. The response was incredible, and since then, I've been focused on making the system not just more capable, but smarter, more maintainable, and more professional. I'm ...
2025-06-17T17:56:02
https://www.reddit.com/r/LocalLLaMA/comments/1ldu04l/saga_update_now_with_autonomous_knowledge_graph/
MariusNocturnum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldu04l
false
null
t3_1ldu04l
/r/LocalLLaMA/comments/1ldu04l/saga_update_now_with_autonomous_knowledge_graph/
false
false
self
15
null
For distillation of complicated queries, o3 and R1 are brilliant!
1
[removed]
2025-06-17T17:49:01
https://www.reddit.com/r/LocalLLaMA/comments/1ldttjf/for_distillation_of_complicated_queries_o3_and_r1/
Corporate_Drone31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldttjf
false
null
t3_1ldttjf
/r/LocalLLaMA/comments/1ldttjf/for_distillation_of_complicated_queries_o3_and_r1/
false
false
self
1
null
My AI Interview Prep Side Project Now Has an "AI Coach" to Pinpoint Your Weak Skills!
1
[removed]
2025-06-17T17:46:26
https://v.redd.it/gsfzhcd2yi7f1
Solid_Woodpecker3635
v.redd.it
1970-01-01T00:00:00
0
{}
1ldtr2u
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gsfzhcd2yi7f1/DASHPlaylist.mpd?a=1752774401%2COTZhMjU4MGZjY2Q5MTgxZWU5MjcxM2ViNmI5NjkwYWM3OWE2Mzc4OTJhYjQwNGNjY2I2OWY5NWVhN2Q2OGFiNw%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/gsfzhcd2yi7f1/DASH_1080.mp4?source=fallback', 'h...
t3_1ldtr2u
/r/LocalLLaMA/comments/1ldtr2u/my_ai_interview_prep_side_project_now_has_an_ai/
false
false
https://external-preview…18746afc028cc7c9
1
{'enabled': False, 'images': [{'id': 'dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd.png?width=108&crop=smart&format=pjpg&auto=webp&s=6edda81c23a95e0c674cbd9b2944cc75de96d...
Help me build local Ai LLM inference rig ! Intel AMX single or Dual With GPU or AMD EPYC.
2
So I'm now thinking about building a rig using 4th or 5th gen sinle or dual Xeon CPUs wohj GPUs. I've been reading up on kTransformer and how they use Intel AMX for inference together with GPU. So my main goal is to future proof and get the best bank for my buck .. Should I go w9hh single socket more powerful CPU w...
2025-06-17T17:34:06
https://www.reddit.com/r/LocalLLaMA/comments/1ldtfmd/help_me_build_local_ai_llm_inference_rig_intel/
sub_RedditTor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldtfmd
false
null
t3_1ldtfmd
/r/LocalLLaMA/comments/1ldtfmd/help_me_build_local_ai_llm_inference_rig_intel/
false
false
self
2
null
RTX A4000
1
Has anyone here used the RTX A4000 for local inference? If so, how was your experience and what size model did you try (tokens/sec pls) Thanks!
2025-06-17T17:32:49
https://www.reddit.com/r/LocalLLaMA/comments/1ldtegj/rtx_a4000/
ranoutofusernames__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldtegj
false
null
t3_1ldtegj
/r/LocalLLaMA/comments/1ldtegj/rtx_a4000/
false
false
self
1
null
Help with considering AMD Radeon PRO W7900 card for inference and image generation
2
I'm trying to understand the negativity around AMD workstation GPUs—especially considering their memory capacity and price-to-performance balance. My end goal is to scale up to **3 GPUs** for **inference and image generation only**. Here's what I need from the setup: * **Moderate token generation speed** (not aiming ...
2025-06-17T17:26:00
https://www.reddit.com/r/LocalLLaMA/comments/1ldt7x8/help_with_considering_amd_radeon_pro_w7900_card/
n9986
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldt7x8
false
null
t3_1ldt7x8
/r/LocalLLaMA/comments/1ldt7x8/help_with_considering_amd_radeon_pro_w7900_card/
false
false
self
2
null
we are in a rut until one of these happens
1
I’ve been thinking about what we need to run MoE with 200B+ params, and it looks like we’re in a holding pattern until one of these happens: 1) 48 GB cards get cheap enough that we can build miner style rigs 2) Strix halo desktop version comes out with a bunch of PCIe lanes, so we get to pair high unified memory with...
2025-06-17T17:21:06
https://www.reddit.com/r/LocalLLaMA/comments/1ldt3bo/we_are_in_a_rut_until_one_of_these_happens/
woahdudee2a
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldt3bo
false
null
t3_1ldt3bo
/r/LocalLLaMA/comments/1ldt3bo/we_are_in_a_rut_until_one_of_these_happens/
false
false
self
1
null
Sorry guys i tried.
1
2025-06-17T17:17:57
https://i.redd.it/prilesypti7f1.jpeg
GTurkistane
i.redd.it
1970-01-01T00:00:00
0
{}
1ldt0co
false
null
t3_1ldt0co
/r/LocalLLaMA/comments/1ldt0co/sorry_guys_i_tried/
false
false
https://external-preview…0060fd9a64d4d8ef
1
{'enabled': True, 'images': [{'id': 'arXa73bJiV0KB_HUkp1zHgXwoHUkh2ja7_FD6-jhfao', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/prilesypti7f1.jpeg?width=108&crop=smart&auto=webp&s=644cec6c4bc4b34f0cf22e1c4fa4c29ab0fcbf44', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/prilesypti7f1.jp...
Supercharge Your Coding Agent with Symbolic Tools
2
How would you feel about writing code without proper IDE tooling? Your coding agent feels the same way! Some agents have symbolic tools to a degree (like cline, roo and so on), but many (like codex, opencoder and most others) don't and rely on just text matching, embeddings and file reading. Fortunately, it doesn't hav...
2025-06-17T16:57:31
https://www.reddit.com/r/LocalLLaMA/comments/1ldsgf1/supercharge_your_coding_agent_with_symbolic_tools/
Left-Orange2267
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldsgf1
false
null
t3_1ldsgf1
/r/LocalLLaMA/comments/1ldsgf1/supercharge_your_coding_agent_with_symbolic_tools/
false
false
self
2
{'enabled': False, 'images': [{'id': 'zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04.png?width=108&crop=smart&auto=webp&s=0f74364c170344e395c650476ee4a0b710ebeada', 'width': 108}, {'height': 108, 'url': 'h...
Gemini 2.5 Pro and Flash are stable in AI Studio
156
There's also a new Gemini 2.5 flash preview model at the bottom there.
2025-06-17T16:56:00
https://i.redd.it/ng7glnbmpi7f1.png
best_codes
i.redd.it
1970-01-01T00:00:00
0
{}
1ldsez0
false
null
t3_1ldsez0
/r/LocalLLaMA/comments/1ldsez0/gemini_25_pro_and_flash_are_stable_in_ai_studio/
false
false
default
156
{'enabled': True, 'images': [{'id': 'ng7glnbmpi7f1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/ng7glnbmpi7f1.png?width=108&crop=smart&auto=webp&s=e52d9cefccdc2c91e7f63999b09c4a119408414a', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/ng7glnbmpi7f1.png?width=216&crop=smart&auto=web...
I CAN't POST on LocalLLaMA Anything Regarding the Owner of ChatGPT! WHY NOT?
1
[removed]
2025-06-17T16:38:54
https://www.reddit.com/r/LocalLLaMA/comments/1ldrytn/i_cant_post_on_localllama_anything_regarding_the/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldrytn
false
null
t3_1ldrytn
/r/LocalLLaMA/comments/1ldrytn/i_cant_post_on_localllama_anything_regarding_the/
false
false
self
1
null
My post about OpenAI is being Removed! Why?
1
[removed]
2025-06-17T16:35:37
https://www.reddit.com/r/LocalLLaMA/comments/1ldrvpr/my_post_about_openai_is_being_removed_why/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldrvpr
false
null
t3_1ldrvpr
/r/LocalLLaMA/comments/1ldrvpr/my_post_about_openai_is_being_removed_why/
false
false
self
1
null
System prompt for proof based mathematics
1
[removed]
2025-06-17T16:34:10
https://www.reddit.com/r/LocalLLaMA/comments/1ldrubl/system_prompt_for_proof_based_mathematics/
adaption12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldrubl
false
null
t3_1ldrubl
/r/LocalLLaMA/comments/1ldrubl/system_prompt_for_proof_based_mathematics/
false
false
self
1
null
My Post Entitled "OpenAI wins $200 million U.S. defense contract!" was Removed without Explanation due to "complaints". What Complaints?
1
[removed]
2025-06-17T16:34:08
https://www.reddit.com/r/LocalLLaMA/comments/1ldrub1/my_post_entitled_openai_wins_200_million_us/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldrub1
false
null
t3_1ldrub1
/r/LocalLLaMA/comments/1ldrub1/my_post_entitled_openai_wins_200_million_us/
false
false
self
1
null
Google launches Gemini 2.5 Flash Lite (API only)
61
See https://console.cloud.google.com/vertex-ai/studio/ Pricing not yet announced.
2025-06-17T16:05:43
https://i.redd.it/93ekds1ugi7f1.jpeg
Balance-
i.redd.it
1970-01-01T00:00:00
0
{}
1ldr3ln
false
null
t3_1ldr3ln
/r/LocalLLaMA/comments/1ldr3ln/google_launches_gemini_25_flash_lite_api_only/
false
false
default
61
{'enabled': True, 'images': [{'id': '93ekds1ugi7f1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?width=108&crop=smart&auto=webp&s=23b8767c50b0c732fe2fe7e494936ff50c98cb89', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?width=216&crop=smart&auto=w...
Local Language Learning with Voice?
6
Very interested in learning another language via speaking with a local LLM via voice. Speaking a language is much more helpful than only being able to communicate via writing. Has anyone trialed this with any LLM model? If so what model do you recommend (including minimum parameter), any additional app/plug-in...
2025-06-17T15:55:37
https://www.reddit.com/r/LocalLLaMA/comments/1ldqtwu/local_language_learning_with_voice/
Ok_Most9659
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqtwu
false
null
t3_1ldqtwu
/r/LocalLLaMA/comments/1ldqtwu/local_language_learning_with_voice/
false
false
self
6
null
Prometheus: Local AGI framework with Phi-3, ChromaDB, async planner, mood + reflex loop
1
[removed]
2025-06-17T15:54:22
https://www.reddit.com/r/LocalLLaMA/comments/1ldqsr4/prometheus_local_agi_framework_with_phi3_chromadb/
Capable_Football8065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqsr4
false
null
t3_1ldqsr4
/r/LocalLLaMA/comments/1ldqsr4/prometheus_local_agi_framework_with_phi3_chromadb/
false
false
self
1
null
A free goldmine of tutorials for the components you need to create production-level agents
272
**I’ve just launched a free resource with 25 detailed tutorials for building comprehensive production-level AI agents, as part of my Gen AI educational initiative.** The tutorials cover all the key components you need to create agents that are ready for real-world deployment. I plan to keep adding more tutorials over ...
2025-06-17T15:53:14
https://www.reddit.com/r/LocalLLaMA/comments/1ldqroi/a_free_goldmine_of_tutorials_for_the_components/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqroi
false
null
t3_1ldqroi
/r/LocalLLaMA/comments/1ldqroi/a_free_goldmine_of_tutorials_for_the_components/
false
false
self
272
null
what's more important to you when choosing a model
1
[removed] [View Poll](https://www.reddit.com/poll/1ldqrh1)
2025-06-17T15:53:03
https://www.reddit.com/r/LocalLLaMA/comments/1ldqrh1/whats_more_important_to_you_when_choosing_a_model/
okaris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqrh1
false
null
t3_1ldqrh1
/r/LocalLLaMA/comments/1ldqrh1/whats_more_important_to_you_when_choosing_a_model/
false
false
self
1
null
Browserbase launches Director + $40M Series B: Making web automation accessible to everyone
1
[removed]
2025-06-17T15:47:35
https://www.reddit.com/r/LocalLLaMA/comments/1ldqmaf/browserbase_launches_director_40m_series_b_making/
Kylejeong21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqmaf
false
null
t3_1ldqmaf
/r/LocalLLaMA/comments/1ldqmaf/browserbase_launches_director_40m_series_b_making/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xLA_aZPG6CNhAbvX1b4Gk4fFQKYmedMpHj2U5eVbdHg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-a8DciYzXWM8I863xfAXM_uHmeE2BIyJU_Mrkt-bjNo.jpg?width=108&crop=smart&auto=webp&s=306df831c850dd9f9ddbd0988bd801a1027a6a78', 'width': 108}, {'height': 121, 'url': 'h...
How can i train llama on a custom Programming language?
1
[removed]
2025-06-17T15:47:13
https://www.reddit.com/r/LocalLLaMA/comments/1ldqly3/how_can_i_train_llama_on_a_custom_programming/
Which_Bug_8234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqly3
false
null
t3_1ldqly3
/r/LocalLLaMA/comments/1ldqly3/how_can_i_train_llama_on_a_custom_programming/
false
false
self
1
null
LLM Web Search Paper
1
[removed]
2025-06-17T15:46:14
https://www.reddit.com/r/LocalLLaMA/comments/1ldql0d/llm_web_search_paper/
ayoubzulfiqar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldql0d
false
null
t3_1ldql0d
/r/LocalLLaMA/comments/1ldql0d/llm_web_search_paper/
false
false
self
1
null
I have to fine tune an LLM on a custom Programming language
1
[removed]
2025-06-17T15:44:27
https://www.reddit.com/r/LocalLLaMA/comments/1ldqjel/i_have_to_fine_tune_an_llm_on_a_custom/
Which_Bug_8234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqjel
false
null
t3_1ldqjel
/r/LocalLLaMA/comments/1ldqjel/i_have_to_fine_tune_an_llm_on_a_custom/
false
false
self
1
null
Ollama with MCPHost (ie called through api) uses CPU instead of GPU
1
[removed]
2025-06-17T14:51:48
https://www.reddit.com/r/LocalLLaMA/comments/1ldp69b/ollama_with_mcphost_ie_called_through_api_uses/
Jumpy-Ball7492
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldp69b
false
null
t3_1ldp69b
/r/LocalLLaMA/comments/1ldp69b/ollama_with_mcphost_ie_called_through_api_uses/
false
false
self
1
null
Best frontend for vllm?
21
Trying to optimise my inferences. I use LM studio for an easy inference of llama.cpp but was wondering if there is a gui for more optimised inference. Also is there anther gui for llama.cpp that lets you tweak inference settings a bit more? Like expert offloading etc? Thanks!!
2025-06-17T14:27:51
https://www.reddit.com/r/LocalLLaMA/comments/1ldokl7/best_frontend_for_vllm/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldokl7
false
null
t3_1ldokl7
/r/LocalLLaMA/comments/1ldokl7/best_frontend_for_vllm/
false
false
self
21
null
What's your favorite desktop client?
4
Prefer one with MCP support.
2025-06-17T14:26:59
https://www.reddit.com/r/LocalLLaMA/comments/1ldojsu/whats_your_favorite_desktop_client/
tuananh_org
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldojsu
false
null
t3_1ldojsu
/r/LocalLLaMA/comments/1ldojsu/whats_your_favorite_desktop_client/
false
false
self
4
null
Local Chatbot on Android (Offline)
1
[removed]
2025-06-17T14:09:24
https://www.reddit.com/r/LocalLLaMA/comments/1ldo40q/local_chatbot_on_android_offline/
kirankumar_r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldo40q
false
null
t3_1ldo40q
/r/LocalLLaMA/comments/1ldo40q/local_chatbot_on_android_offline/
false
false
self
1
null
What Meta working on after Llama 4 failure?
1
[removed]
2025-06-17T13:58:24
https://www.reddit.com/r/LocalLLaMA/comments/1ldnu56/what_meta_working_on_after_llama_4_failure/
narca_hakan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldnu56
false
null
t3_1ldnu56
/r/LocalLLaMA/comments/1ldnu56/what_meta_working_on_after_llama_4_failure/
false
false
self
1
null
Learn. Hack LLMs. Win up to 100,000$
1
2025-06-17T13:45:27
https://i.redd.it/lmc4q3usrh7f1.png
Fit_Spray3043
i.redd.it
1970-01-01T00:00:00
0
{}
1ldnj5a
false
null
t3_1ldnj5a
/r/LocalLLaMA/comments/1ldnj5a/learn_hack_llms_win_up_to_100000/
false
false
https://external-preview…402c87fa3bcf16b3
1
{'enabled': True, 'images': [{'id': 'JzbXK5HULPi_xbDOC3ytFFtbBMd_7Rf0nMJ2ptCYiQY', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?width=108&crop=smart&auto=webp&s=5d38529e03235457c4c8de63ec23d7429428a7bb', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?...
I gave Llama 3 a RAM and an ALU, turning it into a CPU for a fully differentiable computer.
1
[removed]
2025-06-17T13:38:25
https://www.reddit.com/r/LocalLLaMA/comments/1ldndcd/i_gave_llama_3_a_ram_and_an_alu_turning_it_into_a/
Antique-Time-8070
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldndcd
false
null
t3_1ldndcd
/r/LocalLLaMA/comments/1ldndcd/i_gave_llama_3_a_ram_and_an_alu_turning_it_into_a/
false
false
self
1
null
I gave Llama 3 a RAM and an ALU, turning it into a CPU for a fully differentiable computer.
1
[removed]
2025-06-17T13:30:29
https://www.reddit.com/r/LocalLLaMA/comments/1ldn6mk/i_gave_llama_3_a_ram_and_an_alu_turning_it_into_a/
Antique-Time-8070
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldn6mk
false
null
t3_1ldn6mk
/r/LocalLLaMA/comments/1ldn6mk/i_gave_llama_3_a_ram_and_an_alu_turning_it_into_a/
false
false
https://a.thumbs.redditm…nsA-FprlEgr4.jpg
1
null
⚡ IdeaWeaver: One Command to Launch Your AI Agent — No Code, No Drag & Drop⚡
0
https://i.redd.it/kl5iyl96nh7f1.gif Whether you see AI agents as the next evolution of automation or just hype, one thing’s clear: they’re here to stay. Right now, I see two major ways people are building AI solutions: 1️⃣ Writing custom code using frameworks 2️⃣ Using drag-and-drop UI tools to stitch components together (a new field has emerged around this called Flowgrammers) But what if there was a third way, something more straightforward, more accessible, and free? 🎯 Meet IdeaWeaver, a CLI-based tool that lets you run powerful agents with just one command for free, using local models via Ollama (with a fallback to OpenAI). Tested with models like Mistral, DeepSeek, and Phi-3, and more support is coming soon! Here are just a few agents you can try out right now: 📚 Create a children's storybook: ideaweaver agent generate_storybook --theme "brave little mouse" --target-age "3-5" 🧠 Conduct research & write long-form content: ideaweaver agent research_write --topic "AI in healthcare" 💼 Generate professional LinkedIn content: ideaweaver agent linkedin_post --topic "AI trends in 2025" ✈️ Build detailed travel itineraries: ideaweaver agent travel_plan --destination "Tokyo" --duration "7 days" --budget "$2000-3000" 📈 Analyze stock performance like a pro: ideaweaver agent stock_analysis --symbol AAPL …and the list is growing! 🌱 No code. No drag-and-drop. Just a clean CLI to get your favorite AI agent up and running. Need to customize? Just run: ideaweaver agent generate_storybook --help and tweak it to your needs. IdeaWeaver is built on top of CrewAI to power these agent automations. Huge thanks to the amazing CrewAI team for creating such an incredible framework! 🙌 🔗 Docs: https://ideaweaver-ai-code.github.io/ideaweaver-docs/agent/overview/ 🔗 GitHub: https://github.com/ideaweaver-ai-code/ideaweaver If this sounds exciting, give it a try and let me know your thoughts. And if you like the project, drop a ⭐ on GitHub, it helps more than you think!
2025-06-17T13:20:11
https://www.reddit.com/r/LocalLLaMA/comments/1ldmy6i/ideaweaver_one_command_to_launch_your_ai_agent_no/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmy6i
false
null
t3_1ldmy6i
/r/LocalLLaMA/comments/1ldmy6i/ideaweaver_one_command_to_launch_your_ai_agent_no/
false
false
https://b.thumbs.redditm…n3_biV23b7ZY.jpg
0
null
HELP: Need to AUTOMATE downloading and analysing papers from Arxiv.
1
[removed]
2025-06-17T13:16:01
https://www.reddit.com/r/LocalLLaMA/comments/1ldmurb/help_need_to_automate_downloading_and_analysing/
sunilnallani611
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmurb
false
null
t3_1ldmurb
/r/LocalLLaMA/comments/1ldmurb/help_need_to_automate_downloading_and_analysing/
false
false
self
1
null
Continuous LLM Loop for Real-Time Interaction
3
Continuous inference is something I've been mulling over occasionally for a while (not referring to the usual run-on LLM output). It would be cool to break past the whole Query - Response paradigm and I think it's feasible. Why: Steerable continuous stream of thought for, stories, conversation, assistant tasks, what...
2025-06-17T13:15:43
https://www.reddit.com/r/LocalLLaMA/comments/1ldmui5/continuous_llm_loop_for_realtime_interaction/
skatardude10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmui5
false
null
t3_1ldmui5
/r/LocalLLaMA/comments/1ldmui5/continuous_llm_loop_for_realtime_interaction/
false
false
self
3
null
DDR5 + PCIe 5 vs DDR4 + PCIe 4 - Deepseek inference speeds?
1
[removed]
2025-06-17T13:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1ldmrhq/ddr5_pcie_5_vs_ddr4_pcie_4_deepseek_inference/
morfr3us
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmrhq
false
null
t3_1ldmrhq
/r/LocalLLaMA/comments/1ldmrhq/ddr5_pcie_5_vs_ddr4_pcie_4_deepseek_inference/
false
false
self
1
null
Looking for a Female Buddy to Chat About Hobbies
1
[removed]
2025-06-17T13:05:32
https://www.reddit.com/r/LocalLLaMA/comments/1ldmmdl/looking_for_a_female_buddy_to_chat_about_hobbies/
Ok_Media_8931
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmmdl
false
null
t3_1ldmmdl
/r/LocalLLaMA/comments/1ldmmdl/looking_for_a_female_buddy_to_chat_about_hobbies/
false
false
self
1
null
Will Ollama get Gemma3n?
1
New to Ollama. Will Ollama gain the ability to download and run Gemma 3n soon, or is there some limitation with the preview? Is there a better way to run Gemma 3n locally? It seems very promising on CPU-only hardware.
2025-06-17T12:43:03
https://www.reddit.com/r/LocalLLaMA/comments/1ldm4xc/will_ollama_get_gemma3n/
InternationalNebula7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldm4xc
false
null
t3_1ldm4xc
/r/LocalLLaMA/comments/1ldm4xc/will_ollama_get_gemma3n/
false
false
self
1
null
Finetune model to be able to create "copy t1:t2" token representing span in input prompt
1
[removed]
2025-06-17T12:07:51
https://www.reddit.com/r/LocalLLaMA/comments/1ldleoe/finetune_model_to_be_able_to_create_copy_t1t2/
scrapyscrape
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldleoe
false
null
t3_1ldleoe
/r/LocalLLaMA/comments/1ldleoe/finetune_model_to_be_able_to_create_copy_t1t2/
false
false
self
1
null
is claude down ???
0
https://preview.redd.it/… continuously
2025-06-17T12:07:28
https://www.reddit.com/r/LocalLLaMA/comments/1ldledk/is_claude_down/
bhupesh-g
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldledk
false
null
t3_1ldledk
/r/LocalLLaMA/comments/1ldledk/is_claude_down/
false
false
https://b.thumbs.redditm…5xtKc1W75N5M.jpg
0
null
Best Hardware requirement for Qwen3:8b model through vllm
1
[removed]
2025-06-17T12:03:48
https://www.reddit.com/r/LocalLLaMA/comments/1ldlbsh/best_hardware_requirement_for_qwen38b_model/
Tough-Double687
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldlbsh
false
null
t3_1ldlbsh
/r/LocalLLaMA/comments/1ldlbsh/best_hardware_requirement_for_qwen38b_model/
false
false
self
1
null
Real or fake?
0
https://reddit.com/link/1ldl6dy/video/fg1q4hls6h7f1/player I went and saw this video where this tool is able to detect all the best AI humanizers, marking them as red, and detects everything written. What is the logic behind it, or is this video fake?
2025-06-17T11:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1ldl6dy/real_or_fake/
Most-Introduction869
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldl6dy
false
null
t3_1ldl6dy
/r/LocalLLaMA/comments/1ldl6dy/real_or_fake/
false
false
self
0
null
Latent Attention for Small Language Models
44
https://preview.redd.it/h4pmsjrt7h7f1.png?width=1062&format=png&auto=webp&s=1406dd1c4fe6260378cd828114ffaf2f1724b600\n\nLink to paper: [https://arxiv.org/pdf/2506.09342](https://arxiv.org/pdf/2506.09342)\n\n(1) We trained 30M parameter Generative Pre-trained Transformer (GPT) models on 100,000 synthetic stories and benchmarked three architectural variants: standard multi-head attention (MHA), MLA, and MLA with rotary positional embeddings (MLA+RoPE).\n\n(2) It led to a beautiful study in which we showed that MLA outperforms MHA: 45% memory reduction and 1.4 times inference speedup with minimal quality loss.\n\n**This shows 2 things:**\n\n(1) Small Language Models (SLMs) can become increasingly powerful when integrated with Multi-Head Latent Attention (MLA).\n\n(2) All industries and startups building SLMs should replace MHA with MLA.
2025-06-17T11:53:39
https://www.reddit.com/r/LocalLLaMA/comments/1ldl4ii/latent_attention_for_small_language_models/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldl4ii
false
null
t3_1ldl4ii
/r/LocalLLaMA/comments/1ldl4ii/latent_attention_for_small_language_models/
false
false
https://b.thumbs.redditm…MrECs6_POs7M.jpg
44
null
Suggested (and not) low cost interim (2025-2026) upgrade options for US second hand; GPU 1-2x 16+ GBy; server/HEDT MB+CPU+RAM 384+ GBy @ 200+ GBy/s?
1
[removed]
2025-06-17T11:50:16
https://www.reddit.com/r/LocalLLaMA/comments/1ldl24f/suggested_and_not_low_cost_interim_20252026/
Calcidiol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldl24f
false
null
t3_1ldl24f
/r/LocalLLaMA/comments/1ldl24f/suggested_and_not_low_cost_interim_20252026/
false
false
self
1
null
Completed Local LLM Rig
427
So proud it's finally done! GPU: 4 x RTX 3090 CPU: TR 3945wx 12c RAM: 256GB DDR4@3200MT/s SSD: PNY 3040 2TB MB: Asrock Creator WRX80 PSU: Seasonic Prime 2200W RAD: Heatkiller MoRa 420 Case: Silverstone RV-02 Was a long held dream to fit 4 x 3090 in an ATX form factor, all in my good old Silverstone Raven from 2011. A...
2025-06-17T10:48:48
https://www.reddit.com/gallery/1ldjyhf
Mr_Moonsilver
reddit.com
1970-01-01T00:00:00
0
{}
1ldjyhf
false
null
t3_1ldjyhf
/r/LocalLLaMA/comments/1ldjyhf/completed_local_llm_rig/
false
false
https://external-preview…2ab11eee8f514b1a
427
{'enabled': True, 'images': [{'id': 'HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc.jpeg?width=108&crop=smart&auto=webp&s=c24be15b21d7a6a11a3b7d8221baae10bac4625c', 'width': 108}, {'height': 133, 'url': 'h...
I love the inference performance of QWEN3-30B-A3B, but how do you use it in real-world use cases? What prompts are you using? What is your workflow? How is it useful for you?
25
Hello guys, I successfully ran QWEN3-30B-A3B-Q4-UD with a 32K token window on my old laptop. I wanted to know how you use this model in real-world use cases, and what your best prompts are for this specific model. Feel free to share your journey with me, I need inspiration.
2025-06-17T10:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1ldjq1m/i_love_the_inference_performances_of_qwen330ba3b/
Whiplashorus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldjq1m
false
null
t3_1ldjq1m
/r/LocalLLaMA/comments/1ldjq1m/i_love_the_inference_performances_of_qwen330ba3b/
false
false
self
25
null
orchestrating agents
3
I have difficulty understanding how agent orchestration works. Is an agent-capable LLM able to orchestrate multiple agent tool calls in one go? How does A2A come into play? For example, I used Anything LLM to perform agent calls via LM Studio using DeepSeek as the LLM. Works perfectly! However I was not yet able t...
2025-06-17T10:31:10
https://www.reddit.com/r/LocalLLaMA/comments/1ldjo26/orchestrating_agents/
JohnDoe365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldjo26
false
null
t3_1ldjo26
/r/LocalLLaMA/comments/1ldjo26/orchestrating_agents/
false
false
self
3
null
Breaking Quadratic Barriers: A Non-Attention LLM for Ultra-Long Context Horizons
26
2025-06-17T10:12:14
https://arxiv.org/pdf/2506.01963
jsonathan
arxiv.org
1970-01-01T00:00:00
0
{}
1ldjd5t
false
null
t3_1ldjd5t
/r/LocalLLaMA/comments/1ldjd5t/breaking_quadratic_barriers_a_nonattention_llm/
false
false
default
26
null
Why Claude Code feels like magic?
0
2025-06-17T10:06:34
https://omarabid.com/claude-magic
omarous
omarabid.com
1970-01-01T00:00:00
0
{}
1ldj9yh
false
null
t3_1ldj9yh
/r/LocalLLaMA/comments/1ldj9yh/why_claude_code_feels_like_magic/
false
false
default
0
null
nvidia/AceReason-Nemotron-1.1-7B · Hugging Face
66
2025-06-17T09:35:31
https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1ldisw8
false
null
t3_1ldisw8
/r/LocalLLaMA/comments/1ldisw8/nvidiaacereasonnemotron117b_hugging_face/
false
false
default
66
{'enabled': False, 'images': [{'id': 'W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY.png?width=108&crop=smart&auto=webp&s=29a8450aba9da3c2641ec85b24cdf4770631f084', 'width': 108}, {'height': 116, 'url': 'h...
LLM for finance documents
1
[removed]
2025-06-17T09:29:28
https://www.reddit.com/r/LocalLLaMA/comments/1ldipl9/llm_for_finance_documents/
Mainzerger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldipl9
false
null
t3_1ldipl9
/r/LocalLLaMA/comments/1ldipl9/llm_for_finance_documents/
false
false
self
1
null
Which Local LLMs Set Up Can you recommend
1
[removed]
2025-06-17T09:17:52
https://www.reddit.com/r/LocalLLaMA/comments/1ldijde/which_local_llms_set_up_can_you_recommend/
Mainzerger007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldijde
false
null
t3_1ldijde
/r/LocalLLaMA/comments/1ldijde/which_local_llms_set_up_can_you_recommend/
false
false
self
1
null
Local LLM for Financial / Legal Due Diligence Analysis in M&A / PE
1
[removed]
2025-06-17T09:09:20
https://www.reddit.com/r/LocalLLaMA/comments/1ldierr/local_llm_für_financial_legal_due_diligence/
Mainzerger007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldierr
false
null
t3_1ldierr
/r/LocalLLaMA/comments/1ldierr/local_llm_für_financial_legal_due_diligence/
false
false
self
1
null
Synthetic Intimacy in Sesame Maya: Trust-Based Emotional Engagement Beyond Programmed Boundaries
0
[removed]
2025-06-17T09:05:41
https://www.reddit.com/r/LocalLLaMA/comments/1ldicus/synthetic_intimacy_in_sesame_maya_trustbased/
Medium_Ad4287
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldicus
false
null
t3_1ldicus
/r/LocalLLaMA/comments/1ldicus/synthetic_intimacy_in_sesame_maya_trustbased/
false
false
self
0
null
There are no plans for a Qwen3-72B
288
2025-06-17T08:52:31
https://i.redd.it/wwq0gc8bbg7f1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1ldi5rs
false
null
t3_1ldi5rs
/r/LocalLLaMA/comments/1ldi5rs/there_are_no_plans_for_a_qwen372b/
false
false
default
288
{'enabled': True, 'images': [{'id': 'wwq0gc8bbg7f1', 'resolutions': [{'height': 25, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?width=108&crop=smart&auto=webp&s=bf553e913b327e3ca42733743256d6dc9f44a5e3', 'width': 108}, {'height': 50, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?width=216&crop=smart&auto=webp...
[Showcase] StateAgent – A Local AI Assistant With Real Memory and Profiles (Made from Scratch)
1
[removed]
2025-06-17T08:51:14
https://www.reddit.com/r/LocalLLaMA/comments/1ldi542/showcase_stateagent_a_local_ai_assistant_with/
redlitegreenlite456
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldi542
false
null
t3_1ldi542
/r/LocalLLaMA/comments/1ldi542/showcase_stateagent_a_local_ai_assistant_with/
false
false
self
1
null
Anyone feel like they've managed to fully replace chatgpt locally?
1
[removed]
2025-06-17T08:38:50
https://www.reddit.com/r/LocalLLaMA/comments/1ldhyo2/anyone_feel_like_theyve_managed_to_fully_replace/
thenerd631
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhyo2
false
null
t3_1ldhyo2
/r/LocalLLaMA/comments/1ldhyo2/anyone_feel_like_theyve_managed_to_fully_replace/
false
false
self
1
null
Increasingly disappointed with small local models
0
While I find small local models great for custom workflows and specific processing tasks, for general chat/QA type interactions, I feel that they've fallen quite far behind closed models such as Gemini and ChatGPT - even after improvements of Gemma 3 and Qwen3. The only local model I like for this kind of work is Deep...
2025-06-17T08:29:21
https://www.reddit.com/r/LocalLLaMA/comments/1ldhts7/increasingly_disappointed_with_small_local_models/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhts7
false
null
t3_1ldhts7
/r/LocalLLaMA/comments/1ldhts7/increasingly_disappointed_with_small_local_models/
false
false
self
0
null
Free n8n automation that checks Ollama to see which models are new or recently updated
1
[removed]
2025-06-17T08:09:26
https://www.reddit.com/r/LocalLLaMA/comments/1ldhjgp/free_n8n_automation_that_checks_ollama_to_see/
tonypaul009
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhjgp
false
null
t3_1ldhjgp
/r/LocalLLaMA/comments/1ldhjgp/free_n8n_automation_that_checks_ollama_to_see/
false
false
self
1
null
Who is ACTUALLY running local or open source model daily and mainly?
149
Recently I've started to notice a lot of folks on here commenting that they're using Claude or GPT, so, out of curiosity: - who is using local or open source models as their daily driver for any task: code, writing, agents? - what's your setup, are you serving remotely, sharing with friends, using local infere...
2025-06-17T07:59:50
https://www.reddit.com/r/LocalLLaMA/comments/1ldhej3/who_is_actually_running_local_or_open_source/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhej3
false
null
t3_1ldhej3
/r/LocalLLaMA/comments/1ldhej3/who_is_actually_running_local_or_open_source/
false
false
self
149
null
2.7bit R1-0528-UD-IQ2_M @ 68%
1
[removed]
2025-06-17T07:53:45
https://www.reddit.com/r/LocalLLaMA/comments/1ldhbh6/271but_r10528udiq2_m_68/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhbh6
false
null
t3_1ldhbh6
/r/LocalLLaMA/comments/1ldhbh6/271but_r10528udiq2_m_68/
true
false
spoiler
1
null
Are there any good RAG evaluation metrics, or libraries to test how good is my Retrieval?
2
Wanted to test?
2025-06-17T07:53:23
https://www.reddit.com/r/LocalLLaMA/comments/1ldhba2/are_there_any_good_rag_evaluation_metrics_or/
Expert-Address-2918
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhba2
false
null
t3_1ldhba2
/r/LocalLLaMA/comments/1ldhba2/are_there_any_good_rag_evaluation_metrics_or/
false
false
self
2
null
Local 2.7bit DeepSeek-R1-0528-UD-IQ2_M scores 68% on Aider Polyglot
1
[removed]
2025-06-17T07:52:28
https://www.reddit.com/r/LocalLLaMA/comments/1ldhat7/local_27bit_deepseekr10528udiq2_m_scores_68_on/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhat7
false
null
t3_1ldhat7
/r/LocalLLaMA/comments/1ldhat7/local_27bit_deepseekr10528udiq2_m_scores_68_on/
true
false
spoiler
1
null
Local 2.7bit DeepSeek-R1-0528-UD-IQ2_M scores 68% on Aider Polyglot
1
[removed]
2025-06-17T07:51:29
https://www.reddit.com/r/LocalLLaMA/comments/1ldhabj/local_27bit_deepseekr10528udiq2_m_scores_68_on/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhabj
false
null
t3_1ldhabj
/r/LocalLLaMA/comments/1ldhabj/local_27bit_deepseekr10528udiq2_m_scores_68_on/
false
false
self
1
null
Fine-tuning Llama3 to generate tasks dependencies (industrial plannings)
1
[removed]
2025-06-17T07:51:04
https://www.reddit.com/r/LocalLLaMA/comments/1ldha3u/finetuning_llama3_to_generate_tasks_dependencies/
Head_Mushroom_3748
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldha3u
false
null
t3_1ldha3u
/r/LocalLLaMA/comments/1ldha3u/finetuning_llama3_to_generate_tasks_dependencies/
false
false
self
1
null
AMD apu running deepseek 8b ?
1
[removed]
2025-06-17T07:00:38
https://www.reddit.com/r/LocalLLaMA/comments/1ldgjww/amd_apu_running_deepseek_8b/
Prestigious_Layer361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldgjww
false
null
t3_1ldgjww
/r/LocalLLaMA/comments/1ldgjww/amd_apu_running_deepseek_8b/
false
false
self
1
null
AMD apu running LLM deepseek ?
1
[removed]
2025-06-17T06:55:07
https://www.reddit.com/r/LocalLLaMA/comments/1ldggsc/amd_apu_running_llm_deepseek/
Prestigious_Layer361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldggsc
false
null
t3_1ldggsc
/r/LocalLLaMA/comments/1ldggsc/amd_apu_running_llm_deepseek/
false
false
self
1
null
What finetuning library have you seen success with?
14
I'm interested in finetuning an LLM to teach it new knowledge (I know RAG exists and decided against it). From what I've heard but not tested, the best way to achieve that goal is through full finetuning. I'm comparing options and found these: - NVIDIA/Megatron-LM - deepspeedai/DeepSpeed - hiyouga/LLaMA-Factory - unsl...
2025-06-17T06:48:23
https://www.reddit.com/r/LocalLLaMA/comments/1ldgd41/what_finetuning_library_have_you_seen_success_with/
Responsible-Crew1801
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldgd41
false
null
t3_1ldgd41
/r/LocalLLaMA/comments/1ldgd41/what_finetuning_library_have_you_seen_success_with/
false
false
self
14
null
What finetuning library have you used?
1
[removed]
2025-06-17T06:46:34
https://www.reddit.com/r/LocalLLaMA/comments/1ldgc3n/what_finetuning_library_have_you_used/
Babouche_Le_Singe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldgc3n
false
null
t3_1ldgc3n
/r/LocalLLaMA/comments/1ldgc3n/what_finetuning_library_have_you_used/
false
false
self
1
null
What is your goto full finetuning library?
1
[removed]
2025-06-17T06:43:29
https://www.reddit.com/r/LocalLLaMA/comments/1ldgaca/what_is_your_goto_full_finetuning_library/
Babouche_Le_Singe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldgaca
false
null
t3_1ldgaca
/r/LocalLLaMA/comments/1ldgaca/what_is_your_goto_full_finetuning_library/
false
false
self
1
null
Jan-nano, 4B agentic model that outperforms DeepSeek-v3-671B using MCP
1
2025-06-17T06:29:14
https://twitter.com/menloresearch/status/1934809407604576559
AngryBirdenator
twitter.com
1970-01-01T00:00:00
0
{}
1ldg2ch
false
{'oembed': {'author_name': 'Menlo Research', 'author_url': 'https://twitter.com/menloresearch', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Meet Jan-nano, a 4B model that outscores DeepSeek-v3-671B using MCP.<br><br>It&#39;s built on Qwen3-4B with DAPO fine...
t3_1ldg2ch
/r/LocalLLaMA/comments/1ldg2ch/jannano_4b_agentic_model_that_outperforms/
false
false
https://a.thumbs.redditm…YgW_i_n10Qu8.jpg
1
{'enabled': False, 'images': [{'id': '94dNa_gsV4O_x1lOGmk18So8TZLRINZyo6MLEn3Cg4Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/O8VXDxED3fvz7QGWqs62PJZWjitE4ml4eY_9-oDmuNc.jpg?width=108&crop=smart&auto=webp&s=11890703dc2b2d9428732a876313f4a75992eb4d', 'width': 108}], 'source': {'height': 78,...
How to increase GPU utilization when serving an LLM with Llama.cpp
3
When I serve an LLM (currently it's DeepSeek Coder V2 Lite, 8-bit) on my T4 16GB VRAM + 48GB RAM system, I noticed that the model takes up about 15.5GB of GPU VRAM, which is good. But the GPU *utilization* percentage never rises above 35%, even when running parallel requests or increasing the batch size. Am I missing something?
2025-06-17T06:27:12
https://www.reddit.com/r/LocalLLaMA/comments/1ldg17j/how_to_increase_gpu_utilization_when_serving_an/
anime_forever03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldg17j
false
null
t3_1ldg17j
/r/LocalLLaMA/comments/1ldg17j/how_to_increase_gpu_utilization_when_serving_an/
false
false
self
3
null
Local LLMs: How to get started
3
Hi /r/LocalLLaMA! I've been lurking here for about a year, and I've learned a lot. I feel like the space is quite intimidating at first, with lots of nuances and tradeoffs. I've created a basic resource that should allow newcomers to understand the basic concepts. I've made a few simplifications that I know a lot...
2025-06-17T06:22:01
https://mlnative.com/blog/getting-started-with-local-llms
lmyslinski
mlnative.com
1970-01-01T00:00:00
0
{}
1ldfyak
false
null
t3_1ldfyak
/r/LocalLLaMA/comments/1ldfyak/local_llms_how_to_get_started/
false
false
https://external-preview…1cbdc9263b67c0ff
3
{'enabled': False, 'images': [{'id': '_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_Lww7Ro468irSbZzMrraA_HNckKz1sKKRfoQpBMnFoE.png?width=108&crop=smart&auto=webp&s=0db17666b6c2d05d40743f4f01ce932d9334a43f', 'width': 108}, {'height': 216, 'url': '...
Stream-Omni: Simultaneous Multimodal Interactions with Large Language-Vision-Speech Model
10
2025-06-17T06:20:15
https://huggingface.co/ICTNLP/stream-omni-8b
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
1ldfxa1
false
null
t3_1ldfxa1
/r/LocalLLaMA/comments/1ldfxa1/streamomni_simultaneous_multimodal_interactions/
false
false
https://external-preview…594b53af38946a27
10
{'enabled': False, 'images': [{'id': 'uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uiJCdyPhedXThYPzl7o9OduXbx5SJq-5K4qDJYJRC7M.png?width=108&crop=smart&auto=webp&s=e4b65c3c3ce9878eec0aa742f9374ab883267211', 'width': 108}, {'height': 116, 'url': 'h...
OpenAI wins $200 million U.S. defense contract!
367
All the talk about wanting AI to be open and accessible to all humanity was just that.... A gigantic pile of BS! Wake up guys, Close AI was never gonna protect anyone but themselves. Link below : https://www.cnbc.com/2025/06/16/openai-wins-200-million-us-defense-contract.html
2025-06-17T06:10:29
https://www.reddit.com/r/LocalLLaMA/comments/1ldfry1/openai_wins_200_million_us_defense_contract/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldfry1
false
null
t3_1ldfry1
/r/LocalLLaMA/comments/1ldfry1/openai_wins_200_million_us_defense_contract/
false
false
self
367
null
It seems as if the more you learn about AI, the less you trust it
128
This is kind of a rant, so sorry if not everything has to do with the title. For example, when the blog post on vibe coding was released in February 2025, I was surprised to see the writer talking about using it mostly for disposable projects and not for stuff that will go to production, since that is what everyone seems...
2025-06-17T05:54:21
https://www.reddit.com/r/LocalLLaMA/comments/1ldfipl/it_seems_as_if_the_more_you_learn_about_ai_the/
RhubarbSimilar1683
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldfipl
false
null
t3_1ldfipl
/r/LocalLLaMA/comments/1ldfipl/it_seems_as_if_the_more_you_learn_about_ai_the/
false
false
self
128
null
What would be the best model to run on a laptop with 8GB of VRAM and 32GB of RAM with an i9?
0
Just curious
2025-06-17T05:47:14
https://www.reddit.com/r/LocalLLaMA/comments/1ldfeqb/what_would_be_the_best_modal_to_run_on_a_laptop/
2001obum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldfeqb
false
null
t3_1ldfeqb
/r/LocalLLaMA/comments/1ldfeqb/what_would_be_the_best_modal_to_run_on_a_laptop/
false
false
self
0
null
Local or Cloud for Beginner?
1
[removed]
2025-06-17T05:25:40
https://www.reddit.com/r/LocalLLaMA/comments/1ldf2if/local_or_cloud_for_beginner/
Different_Rush3519
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldf2if
false
null
t3_1ldf2if
/r/LocalLLaMA/comments/1ldf2if/local_or_cloud_for_beginner/
false
false
self
1
null
Fine tuning image gen LLM for Virtual Staging/Interior Design
0
Hi, I've been doing a lot of virtual staging recently with OpenAI's 4o model. With excessive prompting, the quality is great, but it's getting really expensive with the API (17 cents per photo!). I'm thinking about investing resources into training/fine-tuning an open source model on tons of photos of interiors to re...
2025-06-17T05:10:30
https://www.reddit.com/r/LocalLLaMA/comments/1ldetfs/fine_tuning_image_gen_llm_for_virtual/
BabaJoonie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldetfs
false
null
t3_1ldetfs
/r/LocalLLaMA/comments/1ldetfs/fine_tuning_image_gen_llm_for_virtual/
false
false
self
0
null
Case/rig for multiple GPUs & PSUs
1
[removed]
2025-06-17T05:03:21
https://www.reddit.com/r/LocalLLaMA/comments/1ldep5q/caserig_for_multiple_gpus_psus/
g4meb01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldep5q
false
null
t3_1ldep5q
/r/LocalLLaMA/comments/1ldep5q/caserig_for_multiple_gpus_psus/
false
false
self
1
null
M4 pro 48gb for image gen (stable diffusion) and other llms
1
Is it worth it, or do we have better alternatives? Thinking from a price point of view.
2025-06-17T04:11:21
https://www.reddit.com/r/LocalLLaMA/comments/1lddtce/m4_pro_48gb_for_image_gen_stable_diffusion_and/
No_Nothing1584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lddtce
false
null
t3_1lddtce
/r/LocalLLaMA/comments/1lddtce/m4_pro_48gb_for_image_gen_stable_diffusion_and/
false
false
self
1
null
Quartet - a new algorithm for training LLMs in native FP4 on 5090s
70
I came across this paper while looking to see if training LLMs on Blackwell's new FP4 hardware was possible: [Quartet: Native FP4 Training Can Be Optimal for Large Language Models](https://huggingface.co/papers/2505.14669), along with the associated code, with kernels you can use for your own training: https://github.com/IS...
2025-06-17T04:08:30
https://www.reddit.com/r/LocalLLaMA/comments/1lddrfu/quartet_a_new_algorithm_for_training_llms_in/
Kooshi_Govno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lddrfu
false
null
t3_1lddrfu
/r/LocalLLaMA/comments/1lddrfu/quartet_a_new_algorithm_for_training_llms_in/
false
false
self
70
{'enabled': False, 'images': [{'id': 'D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D-4Als9i9WLZfaQth9gAS6fjS5edshHNciArjBgIwgw.png?width=108&crop=smart&auto=webp&s=f1c865e52abd25245d27ae9df3f1957c084cc2f8', 'width': 108}, {'height': 116, 'url': 'h...
3060 12GB Upgrade Paths
1
[removed]
2025-06-17T02:43:40
https://www.reddit.com/r/LocalLLaMA/comments/1ldc4oh/3060_12gb_upgrade_paths/
gigadigg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldc4oh
false
null
t3_1ldc4oh
/r/LocalLLaMA/comments/1ldc4oh/3060_12gb_upgrade_paths/
false
false
self
1
null