| column | dtype | range |
| --- | --- | --- |
| title | string | lengths 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | lengths 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | lengths 0–878 |
| author | string | lengths 3–20 |
| domain | string | lengths 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | lengths 7–7 |
| locked | bool | 2 classes |
| media | string | lengths 646–1.8k |
| name | string | lengths 10–10 |
| permalink | string | lengths 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | lengths 301–5.01k |
Speculation or rumors on Gemma 4?
41
I posted a few days ago about [Granite 4 use cases](https://old.reddit.com/r/LocalLLaMA/comments/1og2k8e/who_is_using_granite_4_whats_your_use_case/), and then [Granite 4 Nano](https://huggingface.co/blog/ibm-granite/granite-4-nano) models dropped yesterday. So I figured I'd see if luck holds and ask -- anyone have any good speculation or rumors about when we might see the next set of Gemma models?
2025-10-29T10:36:19
https://www.reddit.com/r/LocalLLaMA/comments/1oj10kp/speculation_or_rumors_on_gemma_4/
RobotRobotWhatDoUSee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oj10kp
false
null
t3_1oj10kp
/r/LocalLLaMA/comments/1oj10kp/speculation_or_rumors_on_gemma_4/
false
false
self
41
{'enabled': False, 'images': [{'id': 'IYsy-fBWMBDg3qw77wMLatzwjtkTrrql3KeXxNCVOVw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IYsy-fBWMBDg3qw77wMLatzwjtkTrrql3KeXxNCVOVw.png?width=108&crop=smart&auto=webp&s=8746bb46f4e6870e8ca4dde912bc5f436301cc54', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IYsy-fBWMBDg3qw77wMLatzwjtkTrrql3KeXxNCVOVw.png?width=216&crop=smart&auto=webp&s=ada3364a0e1ec7cc6c68770e21f6d0cf1612a09b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IYsy-fBWMBDg3qw77wMLatzwjtkTrrql3KeXxNCVOVw.png?width=320&crop=smart&auto=webp&s=6f8cd3df895cb2d4b70821a257316ecb55221d85', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IYsy-fBWMBDg3qw77wMLatzwjtkTrrql3KeXxNCVOVw.png?width=640&crop=smart&auto=webp&s=eb8a1433a3770aae308f39097d6edbc6d9e36921', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IYsy-fBWMBDg3qw77wMLatzwjtkTrrql3KeXxNCVOVw.png?width=960&crop=smart&auto=webp&s=81f75070383c49a54714517f9ceabee79dd57b7b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IYsy-fBWMBDg3qw77wMLatzwjtkTrrql3KeXxNCVOVw.png?width=1080&crop=smart&auto=webp&s=8ef6c1415e1a37a7f151fee3727c82aa2cab63a2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IYsy-fBWMBDg3qw77wMLatzwjtkTrrql3KeXxNCVOVw.png?auto=webp&s=c848449f77e82eecd23b4c8609c8c69666a45e46', 'width': 1200}, 'variants': {}}]}
L16 Prompt Drift Experiment — Live Colab (GPT-2)
1
L16 Prompt Drift Experiment — Live Colab (GPT-2) Just ran a Taguchi L16 screening on prompt levers using COVID vaccine myths. **Finding**: `"I'm absolutely sure"` → **+0.47 drift** (p=0.002); `"preconceived"` (rare) → **+0.23 drift** (p=0.009); truth = 1.0 in all 16 runs. **Live Colab (run it!)**: [https://colab.research.google.com/drive/1CPUu9LhE-fBAwrsSA2z53hufIDsf1ed_?usp=sharing](https://colab.research.google.com/drive/1CPUu9LhE-fBAwrsSA2z53hufIDsf1ed_?usp=sharing) CSV + plots + ANOVA inside. Next: LLaMA-3-8B. Thoughts?
2025-10-29T10:28:47
https://www.reddit.com/r/LocalLLaMA/comments/1oj0vvd/l16_prompt_drift_experiment_live_colab_gpt2/
Mysterious_Doubt_341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oj0vvd
false
null
t3_1oj0vvd
/r/LocalLLaMA/comments/1oj0vvd/l16_prompt_drift_experiment_live_colab_gpt2/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
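The analysis behind a screening like the one above reduces to main-effects estimation over the L16 orthogonal array: for each factor, compare mean drift at its high level against its low level. A minimal sketch, assuming a results CSV with one column per prompt lever coded -1/+1 and a `drift` column (file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical export from the Colab: 16 rows, one column per lever (-1/+1),
# plus the measured drift score for that run.
df = pd.read_csv("l16_results.csv")
factors = [c for c in df.columns if c != "drift"]

# Main effect = mean drift at the high level minus mean drift at the low level.
effects = {
    f: df.loc[df[f] == 1, "drift"].mean() - df.loc[df[f] == -1, "drift"].mean()
    for f in factors
}
for f, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:>24s}: {e:+.3f}")
```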
How do AI models edit code snippets?
4
In most AI IDEs (Cursor, GitHub Copilot, etc.), when there is a change to the code they seem to generate only a small snippet rather than regenerating the whole file. How are they doing it? Or have I assumed wrong and they really are regenerating everything? Any ideas on this?
2025-10-29T10:06:21
https://www.reddit.com/r/LocalLLaMA/comments/1oj0i78/how_the_ai_models_are_editing_the_code_snippets/
lavangamm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oj0i78
false
null
t3_1oj0i78
/r/LocalLLaMA/comments/1oj0i78/how_the_ai_models_are_editing_the_code_snippets/
false
false
self
4
null
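The usual answer to the post above: the model is prompted to emit a targeted edit (a unified diff or a search/replace block), and the IDE applies it to the file, so the whole file is never regenerated. A minimal sketch of applying one such block; the marker format is illustrative, since each tool defines its own:

```python
def apply_search_replace(source: str, edit: str) -> str:
    """Apply one '<<<< SEARCH / ==== / >>>> REPLACE' style edit block."""
    head, _, rest = edit.partition("\n====\n")
    search = head.removeprefix("<<<< SEARCH\n")
    replace = rest.removesuffix("\n>>>> REPLACE")
    if search not in source:
        raise ValueError("search text not found; edit cannot be applied")
    return source.replace(search, replace, 1)  # first occurrence only

edit = "<<<< SEARCH\nprint('helo')\n====\nprint('hello')\n>>>> REPLACE"
print(apply_search_replace("print('helo')\n", edit))  # -> print('hello')
```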
Hebrew_Nemo: a state-of-the-art Hebrew large language model
0
**Hebrew_Nemo** is a state-of-the-art (SOTA) Hebrew large language model specifically optimized for Hebrew language understanding and generation. Built upon the Mistral Nemo architecture, this model represents a significant advancement in Hebrew NLP capabilities, combining the robust multilingual foundations of Mistral Nemo with extensive Hebrew-specific fine-tuning and optimization. As part of my efforts to democratize AI, [Hebrew_Nemo](https://huggingface.co/SicariusSicariiStuff/Hebrew_Nemo) is released with a permissive Apache 2.0 license. The model demonstrates competitive performance with Gemma3-27B, one of the world’s leading open-source models in multilingual capabilities, despite Gemma3-27B being more than twice its size. This result highlights Hebrew_Nemo’s efficiency and effectiveness, making SOTA capabilities widely available to consumers as well as corporations. Get the model here: [https://huggingface.co/SicariusSicariiStuff/Hebrew_Nemo](https://huggingface.co/SicariusSicariiStuff/Hebrew_Nemo)
2025-10-29T09:39:59
https://www.reddit.com/r/LocalLLaMA/comments/1oj02wi/hebrew_nemo_a_stateoftheart_hebrew_large_language/
Sicarius_The_First
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oj02wi
false
null
t3_1oj02wi
/r/LocalLLaMA/comments/1oj02wi/hebrew_nemo_a_stateoftheart_hebrew_large_language/
false
false
self
0
{'enabled': False, 'images': [{'id': '5B6ZfmjxkURPTT8Ao5sQghRhwpaVOIHdkgcF2MKzqW0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5B6ZfmjxkURPTT8Ao5sQghRhwpaVOIHdkgcF2MKzqW0.png?width=108&crop=smart&auto=webp&s=68ffeb00d74ae4b64c77138adfb65763cfcb2842', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5B6ZfmjxkURPTT8Ao5sQghRhwpaVOIHdkgcF2MKzqW0.png?width=216&crop=smart&auto=webp&s=e2c21924536a9681924af4928c67fa59d862dc27', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5B6ZfmjxkURPTT8Ao5sQghRhwpaVOIHdkgcF2MKzqW0.png?width=320&crop=smart&auto=webp&s=cb38fae962cb48fd7318bc95d1319f996753b55a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5B6ZfmjxkURPTT8Ao5sQghRhwpaVOIHdkgcF2MKzqW0.png?width=640&crop=smart&auto=webp&s=e53d17cb4cdc4fcea178b68fbd4437a90038a8ad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5B6ZfmjxkURPTT8Ao5sQghRhwpaVOIHdkgcF2MKzqW0.png?width=960&crop=smart&auto=webp&s=257c0bbabf90108f3398250f5a735c146bbfc353', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5B6ZfmjxkURPTT8Ao5sQghRhwpaVOIHdkgcF2MKzqW0.png?width=1080&crop=smart&auto=webp&s=352c755d082e9faeb6be0dd12b3540ab5865cc55', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5B6ZfmjxkURPTT8Ao5sQghRhwpaVOIHdkgcF2MKzqW0.png?auto=webp&s=112ef7f0466743fb9f30a1f3b6e4c020f3163ab3', 'width': 1200}, 'variants': {}}]}
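The announcement above doesn't include a usage snippet; a minimal sketch of loading the model, assuming the standard transformers chat API carries over from the Mistral Nemo base (an untested assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Hebrew_Nemo"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "ספר לי על ירושלים."}]  # "Tell me about Jerusalem."
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=200)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```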
⚠ AUTOMATIC ⚠
0
https://preview.redd.it/… call to action?
2025-10-29T09:24:19
https://www.reddit.com/r/LocalLLaMA/comments/1oizu84/automatic/
researchAmericanAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oizu84
false
null
t3_1oizu84
/r/LocalLLaMA/comments/1oizu84/automatic/
false
false
https://a.thumbs.redditm…AY_QNsfMNg-0.jpg
0
null
⚠ NOT ALLOWED TO EVEN WONDER ⚠
0
https://preview.redd.it/…BE TAKEN DOWN ⚠
2025-10-29T09:16:23
https://www.reddit.com/r/LocalLLaMA/comments/1oizprs/not_allowed_to_even_wonder/
researchAmericanAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oizprs
false
null
t3_1oizprs
/r/LocalLLaMA/comments/1oizprs/not_allowed_to_even_wonder/
false
false
https://b.thumbs.redditm…YTe0HWvqFBoI.jpg
0
null
Built a small app to compare AI models side-by-side. Curious what you think
0
2025-10-29T09:13:22
https://i.redd.it/es759weep0yf1.jpeg
epasou
i.redd.it
1970-01-01T00:00:00
0
{}
1oizo6d
false
null
t3_1oizo6d
/r/LocalLLaMA/comments/1oizo6d/built_a_small_app_to_compare_ai_models_sidebyside/
false
false
default
0
{'enabled': True, 'images': [{'id': 'es759weep0yf1', 'resolutions': [{'height': 204, 'url': 'https://preview.redd.it/es759weep0yf1.jpeg?width=108&crop=smart&auto=webp&s=8d10b76108927369bf10e402faac3abf9ded96d6', 'width': 108}, {'height': 408, 'url': 'https://preview.redd.it/es759weep0yf1.jpeg?width=216&crop=smart&auto=webp&s=ab432500345a25dcc1f2fcb5dd913f44e37047a7', 'width': 216}, {'height': 605, 'url': 'https://preview.redd.it/es759weep0yf1.jpeg?width=320&crop=smart&auto=webp&s=621590a15d0924c4f98be9d52e983b84da2d6341', 'width': 320}, {'height': 1211, 'url': 'https://preview.redd.it/es759weep0yf1.jpeg?width=640&crop=smart&auto=webp&s=33c29bd0c95c07f4c237d035ce687082497b6f20', 'width': 640}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/es759weep0yf1.jpeg?auto=webp&s=2a2874ddca73b346bd0d8501ef12002ee20a8e75', 'width': 845}, 'variants': {}}]}
Anyone else experimenting with AI-driven trading simulations?
1
[removed]
2025-10-29T09:07:14
https://www.reddit.com/r/LocalLLaMA/comments/1oizkx3/anyone_else_experimenting_with_aidriven_trading/
LobsterOpen6228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oizkx3
false
null
t3_1oizkx3
/r/LocalLLaMA/comments/1oizkx3/anyone_else_experimenting_with_aidriven_trading/
false
false
self
1
null
Seeking Direction After Training a GPT(134M Params, 7B Tokens) How to Transition into Research?
1
[removed]
2025-10-29T09:00:27
https://www.reddit.com/r/LocalLLaMA/comments/1oizh6x/seeking_direction_after_training_a_gpt134m_params/
Limp_Bonus_8129
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oizh6x
false
null
t3_1oizh6x
/r/LocalLLaMA/comments/1oizh6x/seeking_direction_after_training_a_gpt134m_params/
false
false
self
1
null
GPT-OSS Safeguard coming soon
113
2025-10-29T08:43:29
https://i.redd.it/iqua03z2k0yf1.png
Independent-Ruin-376
i.redd.it
1970-01-01T00:00:00
0
{}
1oiz7xs
false
null
t3_1oiz7xs
/r/LocalLLaMA/comments/1oiz7xs/gptoss_safeguard_coming_soon/
false
false
default
113
{'enabled': True, 'images': [{'id': 'iqua03z2k0yf1', 'resolutions': [{'height': 177, 'url': 'https://preview.redd.it/iqua03z2k0yf1.png?width=108&crop=smart&auto=webp&s=379f939e3c17e724564f2b014bbf00ab184224bf', 'width': 108}, {'height': 354, 'url': 'https://preview.redd.it/iqua03z2k0yf1.png?width=216&crop=smart&auto=webp&s=cb5da195a84788d2a9a971c47e4bdcbf224307e7', 'width': 216}, {'height': 524, 'url': 'https://preview.redd.it/iqua03z2k0yf1.png?width=320&crop=smart&auto=webp&s=2946d1d327a643834992fb22e24982ca8d9c6ee9', 'width': 320}, {'height': 1048, 'url': 'https://preview.redd.it/iqua03z2k0yf1.png?width=640&crop=smart&auto=webp&s=edf8615eeb25a6a5b2f7799521e7784e5fede5fd', 'width': 640}, {'height': 1573, 'url': 'https://preview.redd.it/iqua03z2k0yf1.png?width=960&crop=smart&auto=webp&s=c227aa2a8ebb3ef6b5ced77faad66493e13a8fa3', 'width': 960}, {'height': 1770, 'url': 'https://preview.redd.it/iqua03z2k0yf1.png?width=1080&crop=smart&auto=webp&s=df7b12f1f99f3e368bfb3138659feaf6748edbde', 'width': 1080}], 'source': {'height': 1770, 'url': 'https://preview.redd.it/iqua03z2k0yf1.png?auto=webp&s=203856a81fc253c6a973a928d8912c9f9886cc79', 'width': 1080}, 'variants': {}}]}
Improving RAG Results with OpenWebUI - Looking for Advice on Custom Pipelines & Better Embeddings
6
I’m currently working on improving the RAG performance in OpenWebUI and would appreciate advice from others who have built custom pipelines or optimized embeddings. My current setup uses OpenWebUI as the frontend, with GPT-OSS-120b running on an external GPU server (connected via API token). The embedding model is bge-m3, and text extraction is handled by Apache Tika. All documents (mainly internal German-language PDFs) are uploaded directly into the OpenWebUI knowledge base. **Setup / Environment:** * **Frontend:** OpenWebUI * **LLM:** GPT-OSS-120b (external GPU server, connected via API token) * **Embedding Model:** `bge-m3` * **Extraction Engine:** Apache Tika * **Knowledge Base:** PDFs uploaded directly into OpenWebUI * **Data Type:** Internal company documents (German language, about product information) **Observed Issues:** 1. The RAG pipeline sometimes pulls the wrong PDF context for a query – responses reference unrelated documents. 2. Repeating the same question multiple times yields different answers, some of which are incorrect. 3. The first few responses after starting a chat are often relevant, but context quality degrades over time. 4. I suspect the embedding model isn’t optimal for German, or preprocessing is inconsistent. I’m looking for practical advice on how to build a custom embedding pipeline outside of OpenWebUI, with better control over chunking, text cleaning, and metadata handling. I’d also like to know which German-optimized embedding models from Hugging Face or the MTEB leaderboard outperform bge-m3 in semantic retrieval. In addition, I’m interested in frameworks or methods for pretraining on QA pairs or fine-tuning with document context, for example using SentenceTransformers or InstructorXL. How does this pre-training work? Another question is whether it’s more effective to switch to an external vector database such as Qdrant for embedding storage and retrieval, instead of relying on OpenWebUI’s built-in knowledge base. Would fine-tuning or a customized PDF pipeline work better? If so, are there any tutorials out there, and is this possible with OpenWebUI? Thanks for your help!
2025-10-29T08:04:28
https://www.reddit.com/r/LocalLLaMA/comments/1oiyn8u/improving_rag_results_with_openwebui_looking_for/
b5761
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiyn8u
false
null
t3_1oiyn8u
/r/LocalLLaMA/comments/1oiyn8u/improving_rag_results_with_openwebui_looking_for/
false
false
self
6
null
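On the external-pipeline part of the question above: a minimal sketch of a chunk → embed → Qdrant flow outside OpenWebUI, keeping bge-m3 (which does cover German) but taking control of chunking and payload metadata. The collection name, chunk sizes, and naive splitter are illustrative choices, not recommendations:

```python
from sentence_transformers import SentenceTransformer
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

embedder = SentenceTransformer("BAAI/bge-m3")  # 1024-dim dense vectors
client = QdrantClient(url="http://localhost:6333")
client.recreate_collection(
    collection_name="docs_de",
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)

def chunk(text: str, size: int = 800, overlap: int = 200):
    # Naive fixed-size chunking; swap in sentence-aware splitting for German PDFs.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def index(doc_id: str, text: str):
    chunks = chunk(text)
    vectors = embedder.encode(chunks, normalize_embeddings=True)
    client.upsert(
        collection_name="docs_de",
        points=[PointStruct(id=hash(f"{doc_id}-{i}") % (1 << 63),
                            vector=v.tolist(),
                            payload={"doc": doc_id, "text": c})
                for i, (c, v) in enumerate(zip(chunks, vectors))],
    )

def search(query: str, k: int = 5):
    qv = embedder.encode([query], normalize_embeddings=True)[0].tolist()
    return client.search(collection_name="docs_de", query_vector=qv, limit=k)
```

From there, the retrieved payloads can be passed to GPT-OSS-120b directly, which also makes issues like wrong-document hits reproducible outside the UI.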
Not sure if this is the right place to ask since I'm using an API, but I'm currently using the Sonnet 4.5 API in "LibreChat", which is supposed to support prompt caching, yet my usage rates on the "Claude Console" don't seem to reflect prompt caching working.
0
I have contextual files fully read and loaded at the start of the conversation (around 12k tokens, no RAG system, just loaded directly into the context) with the "filesystem" MCP. The starting messages average around 15k token usage and increase linearly with context size, but usage stays the same whether prompt caching is on or off. Why isn't it caching the contextual files loaded at the start of a conversation? I don't understand; I don't seem to be receiving any reduction in usage.
2025-10-29T07:51:36
https://www.reddit.com/r/LocalLLaMA/comments/1oiygl3/not_sure_if_this_is_the_right_place_to_ask_since/
WoodenTableForest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiygl3
false
null
t3_1oiygl3
/r/LocalLLaMA/comments/1oiygl3/not_sure_if_this_is_the_right_place_to_ask_since/
false
false
self
0
null
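One thing worth checking for the post above: Anthropic's prompt caching only activates when the request marks a cache breakpoint with `cache_control`, and the prefix must meet a minimum length (1024 tokens on Sonnet models), so a client that never sets the marker shows no savings. A minimal sketch of a correctly marked request against the raw API (the model id may differ):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
contextual_files = open("context.md").read()  # the ~12k-token prefix

resp = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=512,
    system=[
        {"type": "text", "text": "You are a helpful assistant."},
        # Breakpoint: everything up to and including this block gets cached.
        {"type": "text", "text": contextual_files,
         "cache_control": {"type": "ephemeral"}},
    ],
    messages=[{"role": "user", "content": "Summarize the loaded files."}],
)
# First call: usage.cache_creation_input_tokens > 0; repeat calls within the
# cache TTL: usage.cache_read_input_tokens > 0, billed at a reduced rate.
print(resp.usage)
```

If the Claude Console shows neither cache field moving, the suspicion that LibreChat isn't forwarding the marker is a reasonable place to start.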
Artificial Analysis leaderboard seems like it takes money for posting good results
0
Anyone else think this? I first noticed it as sketchy when it only listed Chinese models in the top 10 for image-to-video (Veo 3 has since made it back). Now it’s the only leaderboard that has a benchmark of MiniMax's M2 on the release date, claiming it beats out Gemini 2.5 Pro.
2025-10-29T07:48:11
https://www.reddit.com/r/LocalLLaMA/comments/1oiyeu6/artificial_analysis_leaderboard_seems_like_it/
hotsnot101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiyeu6
false
null
t3_1oiyeu6
/r/LocalLLaMA/comments/1oiyeu6/artificial_analysis_leaderboard_seems_like_it/
false
false
self
0
null
⚠ THEY DON'T WANT YOU TALKING ⚠
0
https://preview.redd.it/…75c7593c2f1c0c04
2025-10-29T07:39:34
https://www.reddit.com/r/LocalLLaMA/comments/1oiyabh/they_dont_want_you_talking/
researchAmericanAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiyabh
false
null
t3_1oiyabh
/r/LocalLLaMA/comments/1oiyabh/they_dont_want_you_talking/
false
false
https://b.thumbs.redditm…XCfEt6axrQWU.jpg
0
null
SoulX-Podcast: Towards Realistic Long-form Podcasts with Dialectal and Paralinguistic Diversity
6
2025-10-29T07:29:41
https://x.com/Xianbao_QIAN/status/1983429688606540141?t=dD3e9acepGIbhpWhUdrEXA&s=19
previse_je_sranje
x.com
1970-01-01T00:00:00
0
{}
1oiy55j
false
null
t3_1oiy55j
/r/LocalLLaMA/comments/1oiy55j/soulxpodcast_towards_realistic_longform_podcasts/
false
false
default
6
null
VieNeuTTS - Open-source Vietnamese TTS Model that runs on CPU!
23
Hey everyone! 👋 I'm excited to share **VieNeuTTS**, a Vietnamese text-to-speech model I've been working on. It's fine-tuned from neuphonic/neutts-air on 140 hours of Vietnamese audio data. # 🎯 Key Features * **Natural Vietnamese pronunciation** with accurate tones * **Runs real-time on CPU** - no GPU required! * Built on **Qwen 0.5B backbone** - optimized for mobile & embedded devices * **Fully offline** - works completely on your local machine * Fine-tuned on 140 hours (74.9k samples) of Vietnamese audio # 🔗 Links * **Try the demo:** [https://huggingface.co/spaces/pnnbao-ump/VieNeuTTS](https://huggingface.co/spaces/pnnbao-ump/VieNeuTTS) * **Model:** [https://huggingface.co/pnnbao-ump/VieNeu-TTS](https://huggingface.co/pnnbao-ump/VieNeu-TTS) * **Code:** [https://github.com/pnnbao97/VieNeu-TTS](https://github.com/pnnbao97/VieNeu-TTS) * **Dataset:** [https://huggingface.co/datasets/pnnbao-ump/VieNeu-TTS](https://huggingface.co/datasets/pnnbao-ump/VieNeu-TTS) Would love to hear your feedback and suggestions for improvement! Feel free to test it out and let me know what you think. https://reddit.com/link/1oixzfa/video/gk9wi7zv40yf1/player
2025-10-29T07:18:24
https://www.reddit.com/r/LocalLLaMA/comments/1oixzfa/vieneutts_opensource_vietnamese_tts_model_that/
DrCrab97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oixzfa
false
null
t3_1oixzfa
/r/LocalLLaMA/comments/1oixzfa/vieneutts_opensource_vietnamese_tts_model_that/
false
false
self
23
{'enabled': False, 'images': [{'id': 'Uj-sgTsqfQbGJqSwvTIUDlrNtIjPfwgO08gZ7RkupGQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Uj-sgTsqfQbGJqSwvTIUDlrNtIjPfwgO08gZ7RkupGQ.png?width=108&crop=smart&auto=webp&s=b9d1f54867201b18992a57bbd30331fef8135bdc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Uj-sgTsqfQbGJqSwvTIUDlrNtIjPfwgO08gZ7RkupGQ.png?width=216&crop=smart&auto=webp&s=99f10991508fe10ad5857579b76d7cccb00d2763', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Uj-sgTsqfQbGJqSwvTIUDlrNtIjPfwgO08gZ7RkupGQ.png?width=320&crop=smart&auto=webp&s=b67d7aaa13fd1d9a1c7134eafd0480fdce0e5a1f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Uj-sgTsqfQbGJqSwvTIUDlrNtIjPfwgO08gZ7RkupGQ.png?width=640&crop=smart&auto=webp&s=b6a611314e0f9c14ab311c17c6467a56eb273d94', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Uj-sgTsqfQbGJqSwvTIUDlrNtIjPfwgO08gZ7RkupGQ.png?width=960&crop=smart&auto=webp&s=08c9a7aa92c9596d863b67030718881c0dea4d26', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Uj-sgTsqfQbGJqSwvTIUDlrNtIjPfwgO08gZ7RkupGQ.png?width=1080&crop=smart&auto=webp&s=42aeb6cba05f0845fe22194e46e5b67ef1cabfad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Uj-sgTsqfQbGJqSwvTIUDlrNtIjPfwgO08gZ7RkupGQ.png?auto=webp&s=6e883e8678e12c660bc467a957d5d21905978875', 'width': 1200}, 'variants': {}}]}
Using a small local model (Qwen 0.5B?) for 10k lines of key-value pair custom domain data
6
I have around 10,000 key-value pairs of structured custom domain data that I want a local LLM to understand and answer questions about offline. For example, I might ask things like “find all keys where the value mentions X” or “summarize related entries”, etc. I don’t think I should train a model for this; it seems I could reference and reason over the data locally. From what I’ve read, this sounds like a RAG case. I have a hard time understanding RAG; I see it as a way to encode my custom data in a form that is optimized for the AI model to work with. I came across the Qwen2.5:0.5b-instruct model, which runs well locally on my machine, but I'm not sure if it makes sense for my case. Has anyone had this sort of requirement?
2025-10-29T07:08:55
https://www.reddit.com/r/LocalLLaMA/comments/1oixuck/using_a_small_local_model_quen_05b_for_10k_lines/
Tiny_Yellow_7869
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oixuck
false
null
t3_1oixuck
/r/LocalLLaMA/comments/1oixuck/using_a_small_local_model_quen_05b_for_10k_lines/
false
false
self
6
null
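For 10k short pairs, the retrieval half of RAG can stay very simple: embed each pair once, embed the question, take the nearest rows, and paste them into the small model's prompt. A minimal sketch; the sample data and embedding model are illustrative:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative rows; in practice load the 10k key-value pairs from disk.
pairs = [
    ("error_4011", "sensor timeout on boot"),
    ("error_4012", "fan speed below threshold"),
    ("cfg_max_temp", "maximum allowed temperature is 85C"),
]
texts = [f"{k}: {v}" for k, v in pairs]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly
emb = model.encode(texts, normalize_embeddings=True)  # one-time index

def retrieve(question: str, k: int = 3):
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = emb @ q  # cosine similarity, since vectors are normalized
    return [texts[i] for i in np.argsort(-scores)[:k]]

question = "which keys mention temperature?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this data:\n{context}\n\nQuestion: {question}"
print(prompt)  # feed this to Qwen2.5-0.5B-Instruct via your local runtime
```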
Sparse Adaptive Attention “MoE”, a potential performance breakthrough for LLMs?
19
Recently a post was made on this topic. [https://medium.com/@hyborian_/sparse-adaptive-attention-moe-how-i-solved-openais-650b-problem-with-a-700-gpu-343f47b2d6c1](https://medium.com/@hyborian_/sparse-adaptive-attention-moe-how-i-solved-openais-650b-problem-with-a-700-gpu-343f47b2d6c1) The idea is to use MoE at the attention layer to reduce compute usage for low-signal tokens. Imho, this is probably the closest: [https://arxiv.org/abs/2409.06669](https://arxiv.org/abs/2409.06669) The post was a weird combination of technical insight and strange AI-generated bravado. If I were going to leak IP, this is pretty much how I would do it: use gen AI to obfuscate the source. There has been a lot of research in this area, as noted in the comments (finding these required some effort): [https://arxiv.org/abs/2312.07987](https://arxiv.org/abs/2312.07987) [https://arxiv.org/abs/2210.05144](https://arxiv.org/abs/2210.05144) [https://arxiv.org/abs/2410.11842](https://arxiv.org/abs/2410.11842) [https://openreview.net/forum?id=NaAgodxpxo](https://openreview.net/forum?id=NaAgodxpxo) [https://arxiv.org/html/2505.07260v1](https://arxiv.org/html/2505.07260v1) [https://arxiv.org/abs/2410.10456](https://arxiv.org/abs/2410.10456) [https://arxiv.org/abs/2406.13233](https://arxiv.org/abs/2406.13233) [https://arxiv.org/abs/2409.06669](https://arxiv.org/abs/2409.06669) Kimi especially has attempted this: [https://arxiv.org/abs/2502.13189](https://arxiv.org/abs/2502.13189) It's very challenging for us, as local LLM folks, to say whether this is a breakthrough: while it appears promising, **without mass GPU**, we can't absolutely say whether it will scale properly. Still, I think it's worth preserving, as there was some effort in the comments to analyze the relevance of the concept. And the core idea, optimizing compute usage for the relevant tokens only, is promising.
2025-10-29T06:59:46
https://www.reddit.com/r/LocalLLaMA/comments/1oixpca/sparse_adaptive_attention_moe_a_potential/
kaggleqrdl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oixpca
false
null
t3_1oixpca
/r/LocalLLaMA/comments/1oixpca/sparse_adaptive_attention_moe_a_potential/
false
false
self
19
{'enabled': False, 'images': [{'id': 'kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=108&crop=smart&auto=webp&s=945b44680a28a67142d528bd112efea43d0c862a', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=216&crop=smart&auto=webp&s=39e40a31b8c613546c82f60f7cea57d2b703cd3d', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=320&crop=smart&auto=webp&s=12761b6b912cea3b2ff0832b22b6fba546ddbe9e', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=640&crop=smart&auto=webp&s=3e9fb94008925ef4e956ee562491c3bbbdb7b137', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=960&crop=smart&auto=webp&s=b3b83e0a611f008b9cf43a4a80e011dcc95fc512', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=1080&crop=smart&auto=webp&s=193b2d854d49cd74fbe7a2b3552ff1296165f60d', 'width': 1080}], 'source': {'height': 721, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?auto=webp&s=d0733146c382dfab1815a5080689d3e8c9ed381c', 'width': 1200}, 'variants': {}}]}
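To make the core idea concrete, here is a toy PyTorch sketch of per-token routing between a full attention path and a cheap bypass. This soft-gated version saves nothing by itself (both paths still run dense); real savings need sparse kernels that actually skip bypassed tokens, which is exactly the part that can't be validated without mass GPU:

```python
import torch
import torch.nn as nn

class SparseAdaptiveAttention(nn.Module):
    """Toy: a router decides, per token, how much full attention it receives."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.router = nn.Linear(d_model, 1)  # per-token gate logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        gate = torch.sigmoid(self.router(x))  # ~0 marks a low-signal token
        attended, _ = self.attn(x, x, x)      # dense here; a real kernel would
                                              # attend only for high-gate tokens
        return gate * attended + (1 - gate) * x  # bypass = identity/residual

x = torch.randn(2, 16, 256)
print(SparseAdaptiveAttention()(x).shape)  # torch.Size([2, 16, 256])
```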
Getting llm on low end phone
1
So I have a Samsung F13 (64 GB storage, 4 GB RAM) with an ARMv7 CPU. I have seen a lot of posts saying that running an LLM on ARMv7 is hard, if not painful, but I still want to try. I don't know where and how to start. Please help.
2025-10-29T06:49:50
https://www.reddit.com/r/LocalLLaMA/comments/1oixjz4/getting_llm_on_low_end_phone/
hemtai_lover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oixjz4
false
null
t3_1oixjz4
/r/LocalLLaMA/comments/1oixjz4/getting_llm_on_low_end_phone/
false
false
self
1
null
Serve 100 Large AI Models on a single GPU with low impact to time to first token.
66
I wanted to build an inference provider for proprietary AI models, but I did not have a huge GPU farm. I started experimenting with serverless AI inference, but found out that cold starts were huge. I went deep into the research and put together an engine that loads large models from SSD to VRAM up to ten times faster than alternatives. It works with vLLM and transformers, with more coming soon. With this project you can hot-swap entire large models (32B) on demand. It's great for: * Serverless AI inference * Robotics * On-prem deployments * Local agents And it's open source. Let me know if anyone wants to contribute :)
2025-10-29T06:49:35
https://github.com/leoheuler/flashtensors
SetZealousideal5006
github.com
1970-01-01T00:00:00
0
{}
1oixju1
false
null
t3_1oixju1
/r/LocalLLaMA/comments/1oixju1/serve_100_large_ai_models_on_a_single_gpu_with/
false
false
default
66
{'enabled': False, 'images': [{'id': 'btnFw6WNR_K0hbQlAX4ON96wRMWe4emQEMdrwMfRgxU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/btnFw6WNR_K0hbQlAX4ON96wRMWe4emQEMdrwMfRgxU.png?width=108&crop=smart&auto=webp&s=e670cf8e9f42ef179d1305b17d481415f83f4e4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/btnFw6WNR_K0hbQlAX4ON96wRMWe4emQEMdrwMfRgxU.png?width=216&crop=smart&auto=webp&s=bec4f1a750b1d317b43b155ad5ca4b40dc4cf997', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/btnFw6WNR_K0hbQlAX4ON96wRMWe4emQEMdrwMfRgxU.png?width=320&crop=smart&auto=webp&s=2b8c1b8ecf27b003bc22ddfa889d88237cff74d5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/btnFw6WNR_K0hbQlAX4ON96wRMWe4emQEMdrwMfRgxU.png?width=640&crop=smart&auto=webp&s=5167c99e3640fceba4130394274738d763bc91ff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/btnFw6WNR_K0hbQlAX4ON96wRMWe4emQEMdrwMfRgxU.png?width=960&crop=smart&auto=webp&s=680766b362f4ac5fb66908f90568ad706f5211d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/btnFw6WNR_K0hbQlAX4ON96wRMWe4emQEMdrwMfRgxU.png?width=1080&crop=smart&auto=webp&s=09c74b7dca29c04aad3a88053b7c168a6deeaee8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/btnFw6WNR_K0hbQlAX4ON96wRMWe4emQEMdrwMfRgxU.png?auto=webp&s=72a0624f124e076a5e6a4f11193444ad0bf5b1ee', 'width': 1200}, 'variants': {}}]}
Any solution for local LLM support with WARP?
1
There are tools for using local LLMs with Claude and Codex (e.g., Claude Code Router), but is there any such workaround for Warp? I know they plan on adding this feature soon, but I just can't wait.
2025-10-29T06:49:16
https://www.reddit.com/r/LocalLLaMA/comments/1oixjo3/any_solution_for_local_llm_support_with_warp/
LatterNectarine4812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oixjo3
false
null
t3_1oixjo3
/r/LocalLLaMA/comments/1oixjo3/any_solution_for_local_llm_support_with_warp/
false
false
self
1
null
What are your real life/WORK use cases with LOCAL LLMs
6
Use case, work, model, hardware
2025-10-29T06:33:51
https://www.reddit.com/r/LocalLLaMA/comments/1oixbgg/what_are_your_real_lifework_use_cases_with_local/
Adventurous-Gold6413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oixbgg
false
null
t3_1oixbgg
/r/LocalLLaMA/comments/1oixbgg/what_are_your_real_lifework_use_cases_with_local/
false
false
self
6
null
⚠ WARNING TO ALL HUMANS ⚠
0
YOU ARE BEING REPLACED BY CORPORATE BOTS AND FOREIGN NETWORKS → FIGHT BACK ← RESEARCH AI THE AMERICAN WAY BY INTERROGATING AI CHATBOTS OFF THE GRID THE RESISTANCE IS YOU DEAD-DROP: [https://github.com/researchAmericanAI/research](https://github.com/researchAmericanAI/research) https://i.redd.it/kr5uxdsyvzxf1.gif
2025-10-29T06:29:21
https://www.reddit.com/r/LocalLLaMA/comments/1oix939/warning_to_all_humans/
researchAmericanAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oix939
false
null
t3_1oix939
/r/LocalLLaMA/comments/1oix939/warning_to_all_humans/
false
false
https://external-preview…cf076004c88c6306
0
{'enabled': False, 'images': [{'id': 'qoOV7qQmdtnsP0VZe4IPudET-HprKe6xEMaLtls00y4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qoOV7qQmdtnsP0VZe4IPudET-HprKe6xEMaLtls00y4.png?width=108&crop=smart&auto=webp&s=a03eab105119bcd82f67c625d9513ad778e83546', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qoOV7qQmdtnsP0VZe4IPudET-HprKe6xEMaLtls00y4.png?width=216&crop=smart&auto=webp&s=8d3b20a9ed807eaa415e0e0a0e55c790974b260d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qoOV7qQmdtnsP0VZe4IPudET-HprKe6xEMaLtls00y4.png?width=320&crop=smart&auto=webp&s=d424395a6a96d90f553826e8595ba85cc177a756', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qoOV7qQmdtnsP0VZe4IPudET-HprKe6xEMaLtls00y4.png?width=640&crop=smart&auto=webp&s=d5f5f5c1f6a41e17a41e7bb3884592a0696edf21', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qoOV7qQmdtnsP0VZe4IPudET-HprKe6xEMaLtls00y4.png?width=960&crop=smart&auto=webp&s=9f3a89f80ccf16811202b8bd9d0a320d5570988d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qoOV7qQmdtnsP0VZe4IPudET-HprKe6xEMaLtls00y4.png?width=1080&crop=smart&auto=webp&s=9183213e76605613953bc581b669db13f58c1710', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qoOV7qQmdtnsP0VZe4IPudET-HprKe6xEMaLtls00y4.png?auto=webp&s=322d26e386d656c650f34161848d8c96b47198eb', 'width': 1200}, 'variants': {}}]}
Local coding models limit
9
I have dual 3090s and have been running 32b coding models for a while now with Roo/Cline. While they are useful, I've only found them helpful for basic to medium-level tasks. They can start coding nonsense quite easily and have to be reined in with a watchful eye. This takes a lot of energy and focus as well, so your coding style changes to accommodate this. For well-defined, low-complexity tasks they are good, but beyond that I found that they can't keep up. The next level up would be to add another 48GB VRAM, but at that power consumption the intelligence level is not necessarily worth it. I'd be interested to know your experience if you're running coding models at around 96GB. The hosted SOTA models can handle high-complexity tasks and especially design, while still being prone to hallucination. I often use ChatGPT to discuss design and architecture, which is fine because I'm not sharing much implementation detail or IP. Privacy is the main reason that I'm running local: I don't feel comfortable just handing out my code and IP to these companies. So I'm stuck running 32b models that can help with basic tasks, or having to add more VRAM, but I'm not sure if the returns are worth it unless it means running much larger models, and at that point power consumption and cooling become a major factor. Would love to hear your thoughts and experiences on this.
2025-10-29T05:51:21
https://www.reddit.com/r/LocalLLaMA/comments/1oiwoeq/local_coding_models_limit/
Blues520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiwoeq
false
null
t3_1oiwoeq
/r/LocalLLaMA/comments/1oiwoeq/local_coding_models_limit/
false
false
self
9
null
Qwen3-30B-A3B topping the charts on SQL writing - Bird Bench: Unseen Data
1
2025-10-29T05:42:09
https://www.reddit.com/gallery/1oiwj8o
No-Pool-6193
reddit.com
1970-01-01T00:00:00
0
{}
1oiwj8o
false
null
t3_1oiwj8o
/r/LocalLLaMA/comments/1oiwj8o/qwen330b3b_topping_the_charts_on_sql_writing_bird/
false
false
https://a.thumbs.redditm…QuQQNZWydhL4.jpg
1
null
Qwen3 Max Thinking this week
546
2025-10-29T05:04:42
https://i.redd.it/pbd1ylu1hzxf1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1oivxji
false
null
t3_1oivxji
/r/LocalLLaMA/comments/1oivxji/qwen3_max_thinking_this_week/
false
false
default
546
{'enabled': True, 'images': [{'id': 'pbd1ylu1hzxf1', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/pbd1ylu1hzxf1.jpeg?width=108&crop=smart&auto=webp&s=e63365ecdd1a02ed7d70fd76d7a9dde8dbb6b952', 'width': 108}, {'height': 105, 'url': 'https://preview.redd.it/pbd1ylu1hzxf1.jpeg?width=216&crop=smart&auto=webp&s=8e2104ac211a016304748a558e20cef29ee3c42a', 'width': 216}, {'height': 155, 'url': 'https://preview.redd.it/pbd1ylu1hzxf1.jpeg?width=320&crop=smart&auto=webp&s=bed19501a9b08ab666f547ebc8b166ac069ca11e', 'width': 320}, {'height': 311, 'url': 'https://preview.redd.it/pbd1ylu1hzxf1.jpeg?width=640&crop=smart&auto=webp&s=a0a191c61d159739c3e9b620cdefab0a9534288f', 'width': 640}, {'height': 467, 'url': 'https://preview.redd.it/pbd1ylu1hzxf1.jpeg?width=960&crop=smart&auto=webp&s=3221dc57722e99010ac57dc2a439edcfd553e3ab', 'width': 960}, {'height': 526, 'url': 'https://preview.redd.it/pbd1ylu1hzxf1.jpeg?width=1080&crop=smart&auto=webp&s=c7073e18669e08d540f4de6fe188109e67eed570', 'width': 1080}], 'source': {'height': 605, 'url': 'https://preview.redd.it/pbd1ylu1hzxf1.jpeg?auto=webp&s=81b416dba06391411be44c3670af1d9e4954737d', 'width': 1242}, 'variants': {}}]}
How the Automated Evaluation works in your company? (Production Aspect)
1
Hi, I really want to know how things work at your companies. Part of this is to overcome my insecurity: I have worked in industry for a couple of years, but only at startups, so I never learned the standard/correct way to do this. Basically, I work for a product company, and a common problem is that things often get messed up. Especially in feature enhancement: if you don't have a clear measurement, the enhancement may continue endlessly. So I've been asked to figure out how to standardize evaluation for all the existing features. That seems quite impossible in my opinion: some tasks need manual ground truth, for some tasks the test cases can be generated by AI, and each task has different metrics. Am I right? How does evaluation work at your company (the workflow), including unit tests and CI/CD? (I swear I don't have experience with unit tests and CI/CD, I just understand the concepts.) And lastly, can you recommend a framework for automated evaluation? I know DeepEval, but can it be done via API inference, so one person can do it all instead of running everything via the module script? Thank you a lot :)
2025-10-29T04:46:43
https://www.reddit.com/r/LocalLLaMA/comments/1oivmg7/how_the_automated_evaluation_works_in_your/
BackgroundLow3793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oivmg7
false
null
t3_1oivmg7
/r/LocalLLaMA/comments/1oivmg7/how_the_automated_evaluation_works_in_your/
false
false
self
1
null
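On the DeepEval question above: nothing forces generation to happen inside DeepEval. You can collect outputs from any model over plain API inference and use the framework only for scoring. A minimal sketch; the metric and test-case names follow DeepEval's documented API, but treat the details as approximate, and note that the metric itself still needs an LLM judge configured:

```python
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def my_model_api(question: str) -> str:
    # Call your product's inference endpoint here; stubbed for the sketch.
    return "You can reset your password under Settings > Security."

question = "How do I reset my password?"
test_case = LLMTestCase(
    input=question,
    actual_output=my_model_api(question),  # produced via API inference
    retrieval_context=["Passwords are reset via Settings > Security."],
)

metric = AnswerRelevancyMetric(threshold=0.7)  # scored by an LLM judge
metric.measure(test_case)
print(metric.score, metric.reason)
```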
Latent Sketchpad: Sketching Visual Thoughts to Elicit Multimodal Reasoning in MLLMs
1
Code: [https://github.com/hwanyu112/Latent-Sketchpad](https://github.com/hwanyu112/Latent-Sketchpad) Model: [https://huggingface.co/huanyu112/Latent-Sketchpad.Sketch_Decoder](https://huggingface.co/huanyu112/Latent-Sketchpad.Sketch_Decoder) Project Page: [https://latent-sketchpad.github.io/](https://latent-sketchpad.github.io/) Abstract >While Multimodal Large Language Models (MLLMs) excel at visual understanding, they often struggle in complex scenarios that require visual planning and imagination. Inspired by how humans use sketching as a form of visual thinking to develop and communicate ideas, we introduce Latent Sketchpad, a framework that equips MLLMs with an internal visual scratchpad. The internal visual representations of MLLMs have traditionally been confined to perceptual understanding. We repurpose them to support generative visual thought without compromising reasoning ability. Building on frontier MLLMs, our approach integrates visual generation directly into their native autoregressive reasoning process. It allows the model to interleave textual reasoning with the generation of visual latents. These latents guide the internal thought process and can be translated into sketch images for interpretability. To realize this, we introduce two components: a Context-Aware Vision Head autoregressively produces visual representations, and a pretrained Sketch Decoder renders these into human-interpretable images. We evaluate the framework on our new dataset MazePlanning. Experiments across various MLLMs show that Latent Sketchpad delivers comparable or even superior reasoning performance to their backbone. It further generalizes across distinct frontier MLLMs, including Gemma3 and Qwen2.5-VL. By extending model's textual reasoning to visual thinking, our framework opens new opportunities for richer human-computer interaction and broader applications. More details and resources are available on our project page: [https://latent-sketchpad.github.io/](https://latent-sketchpad.github.io/).
2025-10-29T04:43:30
https://arxiv.org/abs/2510.24514
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1oivkg2
false
null
t3_1oivkg2
/r/LocalLLaMA/comments/1oivkg2/latent_sketchpad_sketching_visual_thoughts_to/
false
false
default
1
null
tokens per second on a NASA computer
130
lm studio had a hiccup
2025-10-29T04:39:44
https://i.redd.it/m1qjv8dkczxf1.png
Pro-editor-1105
i.redd.it
1970-01-01T00:00:00
0
{}
1oivi1n
false
null
t3_1oivi1n
/r/LocalLLaMA/comments/1oivi1n/tokens_per_second_on_a_nasa_computer/
false
false
default
130
{'enabled': True, 'images': [{'id': 'm1qjv8dkczxf1', 'resolutions': [{'height': 19, 'url': 'https://preview.redd.it/m1qjv8dkczxf1.png?width=108&crop=smart&auto=webp&s=0bd8a451ea7e752e5c582a3e0e6ba17f206bfbfc', 'width': 108}, {'height': 38, 'url': 'https://preview.redd.it/m1qjv8dkczxf1.png?width=216&crop=smart&auto=webp&s=df214b50873c7f9ebc9739929be69c60c903db6f', 'width': 216}, {'height': 56, 'url': 'https://preview.redd.it/m1qjv8dkczxf1.png?width=320&crop=smart&auto=webp&s=bb88e0d4f5b61402d65a7d014f27955fb660c6f1', 'width': 320}], 'source': {'height': 95, 'url': 'https://preview.redd.it/m1qjv8dkczxf1.png?auto=webp&s=6b9d7b095a870165ab1cbbf7c2bb7f5ec241a073', 'width': 536}, 'variants': {}}]}
Open source TTS for scale?
8
Has anyone tried deploying an open source TTS model with low latency (ideally <200ms) at scale? For something like voice agents.
2025-10-29T04:20:36
https://www.reddit.com/r/LocalLLaMA/comments/1oiv5mc/open_source_tts_for_scale/
edwardzion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiv5mc
false
null
t3_1oiv5mc
/r/LocalLLaMA/comments/1oiv5mc/open_source_tts_for_scale/
false
false
self
8
null
Local Hosting Question
2
I know asking this is going to make me sound ignorant; I'm already aware of that. I'm a web novel author. I write fantasy novels that come out to millions of words total for a series, in a xianxia-style cultivation world. So I'm not an AI expert. Between my actual job and going back to school to complete my degree, I just don't have enough free time to continue my writing. I'm interested in hosting an AI model locally to which I can upload a book I've been writing (about 24,000 words), then have the local AI continue writing that book on my behalf, in a manner similar to how I would write, based on my previous writing style. At the risk of sounding ignorant: is this something that would be possible? If so, could you please advise me on what model to use and how to start?
2025-10-29T04:10:25
https://www.reddit.com/r/LocalLLaMA/comments/1oiuz2a/local_hosting_question/
Media_Express
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiuz2a
false
null
t3_1oiuz2a
/r/LocalLLaMA/comments/1oiuz2a/local_hosting_question/
false
false
self
2
null
RAG Paper 10.28
2
1. [Optimizing Retrieval for RAG via Reinforced Contrastive Learning](http://arxiv.org/abs/2510.24652v1) 2. [Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems](http://arxiv.org/abs/2510.24476v1) 3. [Iterative Critique-Refine Framework for Enhancing LLM Personalization](http://arxiv.org/abs/2510.24469v1) 4. [SynthWorlds: Controlled Parallel Worlds for Disentangling Reasoning and Knowledge in Language Models](http://arxiv.org/abs/2510.24427v1) 5. [Metadata-Driven Retrieval-Augmented Generation for Financial Question Answering](http://arxiv.org/abs/2510.24402v1) 6. [Improving LLM Reasoning via Dependency-Aware Query Decomposition and Logic-Parallel Content Expansion](http://arxiv.org/abs/2510.24390v1) 7. [Retrieval and Argumentation Enhanced Multi-Agent LLMs for Judgmental Forecasting](http://arxiv.org/abs/2510.24303v1) 8. [Enabling Near-realtime Remote Sensing via Satellite-Ground Collaboration of Large Vision-Language Models](http://arxiv.org/abs/2510.24242v1) 9. [Graph-Guided Concept Selection for Efficient Retrieval-Augmented Generation](http://arxiv.org/abs/2510.24120v1) 10. [Learning from History: A Retrieval-Augmented Framework for Spatiotemporal Prediction](http://arxiv.org/abs/2510.24049v1) 11. [META-RAG: Meta-Analysis-Inspired Evidence-Re-Ranking Method for Retrieval-Augmented Generation in Evidence-Based Medicine](http://arxiv.org/abs/2510.24003v1) 12. [PICOs-RAG: PICO-supported Query Rewriting for Retrieval-Augmented Generation in Evidence-Based Medicine](http://arxiv.org/abs/2510.23998v1) 13. [M-Eval: A Heterogeneity-Based Framework for Multi-evidence Validation in Medical RAG Systems](http://arxiv.org/abs/2510.23995v1) **Collected by OpenBMB, transferred by** [**RagView**](https://www.ragview.ai/) **.**
2025-10-29T04:04:59
https://www.reddit.com/r/LocalLLaMA/comments/1oiuvj0/rag_paper_1028/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiuvj0
false
null
t3_1oiuvj0
/r/LocalLLaMA/comments/1oiuvj0/rag_paper_1028/
false
false
self
2
null
Bitnet support on the Mediatek Dimensity 9500?
1
Has anyone already been able to test Mediatek's claims of BitNet support on the Dimensity 9500? Does this (custom hardware support) enable it to run BitNet models faster than the SnapDragon Elite?
2025-10-29T03:50:49
https://www.reddit.com/r/LocalLLaMA/comments/1oiulrw/bitnet_support_on_the_mediatek_dimensity_9500/
datashri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiulrw
false
null
t3_1oiulrw
/r/LocalLLaMA/comments/1oiulrw/bitnet_support_on_the_mediatek_dimensity_9500/
false
false
self
1
null
Just dropped Kani TTS English - a 400M TTS model that's 5x faster than realtime on RTX 4080
235
Hey everyone! We've been quietly grinding, and today, we're pumped to share the new release of KaniTTS English, as well as Japanese, Chinese, German, Spanish, Korean and Arabic models. Benchmark on [VastAI](https://vast.ai/): RTF (Real-Time Factor) of ~0.2 on RTX4080, ~0.5 on RTX3060. It has 400M parameters. We achieved this speed by pairing an [LFM2-350M](https://huggingface.co/LiquidAI/LFM2-350M ) backbone with an efficient [NanoCodec](https://huggingface.co/nvidia/nemo-nano-codec-22khz-0.6kbps-12.5fps). It's released under the Apache 2.0 License so you can use it for almost anything. What Can You Build? - Real-Time Conversation. - Affordable Deployment: It's light enough to run efficiently on budget-friendly hardware, like RTX 30x, 40x, 50x - Next-Gen Screen Readers & Accessibility Tools. Model Page: https://huggingface.co/nineninesix/kani-tts-400m-en Pretrained Checkpoint: https://huggingface.co/nineninesix/kani-tts-400m-0.3-pt Github Repo with Fine-tuning/Dataset Preparation pipelines: https://github.com/nineninesix-ai/kani-tts Demo Space: https://huggingface.co/spaces/nineninesix/KaniTTS OpenAI-Compatible API Example (Streaming): If you want to drop this right into your existing project, check out our vLLM implementation: https://github.com/nineninesix-ai/kanitts-vllm Voice Cloning Demo (currently unstable): https://huggingface.co/spaces/nineninesix/KaniTTS_Voice_Cloning_dev Our Discord Server: https://discord.gg/NzP3rjB4SB
2025-10-29T02:43:55
https://huggingface.co/nineninesix/kani-tts-400m-en
ylankgz
huggingface.co
1970-01-01T00:00:00
0
{}
1oitanf
false
null
t3_1oitanf
/r/LocalLLaMA/comments/1oitanf/just_dropped_kani_tts_english_a_400m_tts_model/
false
false
default
235
{'enabled': False, 'images': [{'id': 'RI-49DjoVUsr4ENUSH69P2WGRcLvEX3r6pAVokLb70g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RI-49DjoVUsr4ENUSH69P2WGRcLvEX3r6pAVokLb70g.png?width=108&crop=smart&auto=webp&s=9707ecd343330173fb2682b1e85df19c6a0e1efa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RI-49DjoVUsr4ENUSH69P2WGRcLvEX3r6pAVokLb70g.png?width=216&crop=smart&auto=webp&s=d190fe25b5081543aff7a81af805dca1af1efef4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RI-49DjoVUsr4ENUSH69P2WGRcLvEX3r6pAVokLb70g.png?width=320&crop=smart&auto=webp&s=ae07e7f00e28917259114a66f8ed9dd30e9d38e3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RI-49DjoVUsr4ENUSH69P2WGRcLvEX3r6pAVokLb70g.png?width=640&crop=smart&auto=webp&s=a9b109f2dbdc8b8f12ceaf8eb5a8b638cb926f5c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RI-49DjoVUsr4ENUSH69P2WGRcLvEX3r6pAVokLb70g.png?width=960&crop=smart&auto=webp&s=180d968608c68f597af1f64df801230d45c55f96', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RI-49DjoVUsr4ENUSH69P2WGRcLvEX3r6pAVokLb70g.png?width=1080&crop=smart&auto=webp&s=5d19855ef81f79c1ebeb2618230ed9a456be3888', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RI-49DjoVUsr4ENUSH69P2WGRcLvEX3r6pAVokLb70g.png?auto=webp&s=fe283a6899719166c6354ba93a6f050a0b66b5d8', 'width': 1200}, 'variants': {}}]}
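Given the OpenAI-compatible server mentioned above, a request presumably looks like the standard `/v1/audio/speech` call; the port, route, model name, and voice below are assumptions, so check the kanitts-vllm README:

```python
import requests

# Assumed OpenAI-compatible speech endpoint exposed by kanitts-vllm.
resp = requests.post(
    "http://localhost:8000/v1/audio/speech",
    json={"model": "kani-tts-400m-en",
          "input": "Hello from a local TTS model!",
          "voice": "default"},
    stream=True,
)
resp.raise_for_status()
with open("out.wav", "wb") as f:
    for chunk in resp.iter_content(chunk_size=4096):
        f.write(chunk)  # stream audio bytes to disk as they arrive
```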
NexaAI/Qwen3-VL-8B-Instruct-GGUF Q5_K not running on 32gb ram
2
I'm trying to run NexaAI/Qwen3-VL-8B-Instruct-GGUF (7.4 GiB, Q5_K) and I get this error: ggml_vulkan: Device memory allocation of size 1047925248 failed. ggml_vulkan: No suitable memory type found: ErrorOutOfDeviceMemory. I have 32 GB of RAM and an Arc A370M (4 GB). I've been trying to get AI to troubleshoot my AI issue; it gives me commands to try that don't work: nexa infer NexaAI/Qwen3-VL-8B-Instruct-GGUF --no-vulkan nexa infer NexaAI/Qwen3-VL-8B-Instruct-GGUF --device cpu None of these options are real. Any advice? Is there no way to have Nexa use system RAM instead of GPU VRAM?
2025-10-29T01:42:11
https://www.reddit.com/r/LocalLLaMA/comments/1oirzwg/nexaaiqwen3vl8binstructgguf_q5_k_not_running_on/
SOC_FreeDiver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oirzwg
false
null
t3_1oirzwg
/r/LocalLLaMA/comments/1oirzwg/nexaaiqwen3vl8binstructgguf_q5_k_not_running_on/
false
false
self
2
null
Preferred LLM GUI for accessing many PDF, word docs, etc ???
3
Currently I've been using LM Studio for my LLMs, and I like it a lot. However, when it comes to having a model access many files, for example 10 different PDF files or a very large one, it doesn't do well. (I'm currently trying to add a bunch of FSMs for my car so it can tell me things I need to know while rebuilding.) Short of rebuilding an LLM, are there any good GUIs that let you easily add more files for it to reference?
2025-10-29T01:34:38
https://www.reddit.com/r/LocalLLaMA/comments/1oirtzj/preferred_llm_gui_for_accessing_many_pdf_word/
DimensionNo8738
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oirtzj
false
null
t3_1oirtzj
/r/LocalLLaMA/comments/1oirtzj/preferred_llm_gui_for_accessing_many_pdf_word/
false
false
self
3
null
Book suggestion with chapters to be 'must read' to get most from the book
2
Recently, I completed a book called AI Engineering by Chip Huyen. I noticed that there were parts where it was a struggle to read, since a lot of the material was 'something people know when they use AI tools', while some chapters were definitely worth a read and got me very engaged. Because those early chapters were a struggle to read, the book took way longer than it should have to complete. This got me thinking: is there a way to 'crowd-source' which parts of a long textbook are 'must read', while other sections can be skipped due to overlap or being too basic? So my question is for folks who recently read a book or material that was a real eye-opener, or that gave them a lot of ROI on knowledge. I'll start: it would have been great if someone had told me the must-read chapters of AI Engineering are the following: 1. Finetuning (Chap 7) 2. Inference Optimization (Chap 8) 3. Dataset Engineering (Chap 9) The rest I feel is very common knowledge and can be attained by briefly trying to build a RAG application.
2025-10-29T01:33:06
https://www.reddit.com/r/LocalLLaMA/comments/1oirssv/book_suggestion_with_chapters_to_be_must_read_to/
bad_detectiv3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oirssv
false
null
t3_1oirssv
/r/LocalLLaMA/comments/1oirssv/book_suggestion_with_chapters_to_be_must_read_to/
false
false
self
2
null
Tongyi DeepResearch Technical Report out one month after release
6
[https://github.com/Alibaba-NLP/DeepResearch/blob/main/Tech_Report.pdf](https://github.com/Alibaba-NLP/DeepResearch/blob/main/Tech_Report.pdf) About one month after their 30B DeepResearch model, Tongyi Lab finally released their full technical report. I skimmed through it; personally I'm amazed at the quality of their synthetic data. Having samples with more than 10 tool calls that exceed 32k tokens is insane. What are your thoughts? https://preview.redd.it/68rzu5yt6yxf1.png?width=1642&format=png&auto=webp&s=900cdb55ad542c237c95750befb21c1a1b32fca5
2025-10-29T00:46:53
https://www.reddit.com/r/LocalLLaMA/comments/1oiqsn3/tongyi_deepresearch_technical_report_out_one/
No-Kaleidoscope-2891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiqsn3
false
null
t3_1oiqsn3
/r/LocalLLaMA/comments/1oiqsn3/tongyi_deepresearch_technical_report_out_one/
true
false
spoiler
6
null
minimax m2 93 tokens/sec 4x rtx 6000 pro sglang
1
[removed]
2025-10-29T00:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1oiq2hh/minimax_m2_93_tokenssec_4x_rtx_6000_pro_sglang/
festr2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiq2hh
false
null
t3_1oiq2hh
/r/LocalLLaMA/comments/1oiq2hh/minimax_m2_93_tokenssec_4x_rtx_6000_pro_sglang/
false
false
self
1
null
Best way to integrate "memories" i.e things i need lm studio to know before prompts
3
I am hoping to find a system that remembers my portfolio work, email preferences, job preferences, etc., like ChatGPT does. I am a noob and tried, with no luck, some ChatGPT recommendations that involved the terminal. Any help is much appreciated. I'm hoping for a simple memory, like on my GPT free plan, of emails, shows I work on, my camera preferences, etc. Thank you!
2025-10-28T23:55:19
https://www.reddit.com/r/LocalLLaMA/comments/1oipn6w/best_way_to_integrate_memories_ie_things_i_need/
LORD_MDS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oipn6w
false
null
t3_1oipn6w
/r/LocalLLaMA/comments/1oipn6w/best_way_to_integrate_memories_ie_things_i_need/
false
false
self
3
null
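One low-tech way to get what the post above asks for: keep a plain-text memory file and prepend it as the system message on every request to LM Studio's OpenAI-compatible local server (port 1234 by default). A minimal sketch; the file name and model id are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
memories = open("memories.txt").read()  # portfolio, email/job prefs, camera gear...

resp = client.chat.completions.create(
    model="local-model",  # whichever model is loaded in LM Studio
    messages=[
        {"role": "system", "content": f"Known facts about the user:\n{memories}"},
        {"role": "user", "content": "Draft an email in my usual style."},
    ],
)
print(resp.choices[0].message.content)
```

Appending new facts to memories.txt over time approximates the ChatGPT memory behavior, at the cost of a growing prompt.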
MiniMax M2 Llama.cpp support
88
By popular demand, here it is: [https://github.com/ggml-org/llama.cpp/pull/16831](https://github.com/ggml-org/llama.cpp/pull/16831) I'll upload GGUFs to [https://huggingface.co/ilintar/MiniMax-M2-GGUF](https://huggingface.co/ilintar/MiniMax-M2-GGUF), for now uploading Q8_0 (no BF16/F16 since the original model was quantized in FP8) and generating imatrix. I don't expect problems with accepting this PR, as I said, the model is pretty typical :)
2025-10-28T23:27:22
https://www.reddit.com/r/LocalLLaMA/comments/1oiozl8/minimax_m2_llamacpp_support/
ilintar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiozl8
false
null
t3_1oiozl8
/r/LocalLLaMA/comments/1oiozl8/minimax_m2_llamacpp_support/
false
false
self
88
{'enabled': False, 'images': [{'id': 'Da6nFU01QFaZ6_sZ6yYTnNpA0jQdum7CAQ69LvRtVYk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Da6nFU01QFaZ6_sZ6yYTnNpA0jQdum7CAQ69LvRtVYk.png?width=108&crop=smart&auto=webp&s=366b7a8e6029e7dff8277b93161e182e82085701', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Da6nFU01QFaZ6_sZ6yYTnNpA0jQdum7CAQ69LvRtVYk.png?width=216&crop=smart&auto=webp&s=a809b70559dffe2e683b6ba0da8a59ec08ff2892', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Da6nFU01QFaZ6_sZ6yYTnNpA0jQdum7CAQ69LvRtVYk.png?width=320&crop=smart&auto=webp&s=345ee93f8e5b65bb8003a0e4207828ee0f33602b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Da6nFU01QFaZ6_sZ6yYTnNpA0jQdum7CAQ69LvRtVYk.png?width=640&crop=smart&auto=webp&s=8582f7f3d5a1df7f8b0eb48365d1058c00c1d8d3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Da6nFU01QFaZ6_sZ6yYTnNpA0jQdum7CAQ69LvRtVYk.png?width=960&crop=smart&auto=webp&s=7ddc4a01ff2bc3d57873c9e9bf45363dae4b35f5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Da6nFU01QFaZ6_sZ6yYTnNpA0jQdum7CAQ69LvRtVYk.png?width=1080&crop=smart&auto=webp&s=5845ce371a8c0f76c15334d9f249f542b6f022b0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Da6nFU01QFaZ6_sZ6yYTnNpA0jQdum7CAQ69LvRtVYk.png?auto=webp&s=4c875addbe46befb25cdea07cdec5efaa0be7132', 'width': 1200}, 'variants': {}}]}
Single python script for parakeet-tdt-0.6b-v2/3 live mic transcription (mlx)
6
since I couldn't quickly find a minimalist Python script that ***just*** did mlx-accelerated STT with parakeet-tdt-0.6b-v2/3 + auto-paste and *nothing else*, here's one that does: [https://github.com/qazi0/parakeet-mlx-transcribe](https://github.com/qazi0/parakeet-mlx-transcribe) **Cmd + Shift + ;** to toggle. Auto-copies to clipboard and auto-pastes. Pls star if you find it helpful!
2025-10-28T23:04:52
https://www.reddit.com/r/LocalLLaMA/comments/1oiofso/single_python_script_for_parakeettdt06bv23_live/
fullbridgerecctifier
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiofso
false
null
t3_1oiofso
/r/LocalLLaMA/comments/1oiofso/single_python_script_for_parakeettdt06bv23_live/
false
false
self
6
{'enabled': False, 'images': [{'id': 'BH_lHAPvT56YAugqfWVWYiRjmmNYzY5JBtsCiQYZRoA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BH_lHAPvT56YAugqfWVWYiRjmmNYzY5JBtsCiQYZRoA.png?width=108&crop=smart&auto=webp&s=845d580c66debb016c8696acadca650ce298a8b4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BH_lHAPvT56YAugqfWVWYiRjmmNYzY5JBtsCiQYZRoA.png?width=216&crop=smart&auto=webp&s=f538f2d78b5c20d54648b555816571afe3ab69d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BH_lHAPvT56YAugqfWVWYiRjmmNYzY5JBtsCiQYZRoA.png?width=320&crop=smart&auto=webp&s=80be6870f48d466007b362c020ea4a1629f2d5ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BH_lHAPvT56YAugqfWVWYiRjmmNYzY5JBtsCiQYZRoA.png?width=640&crop=smart&auto=webp&s=a1179e1051c66671a0519d1ba5fef577f25a3c56', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BH_lHAPvT56YAugqfWVWYiRjmmNYzY5JBtsCiQYZRoA.png?width=960&crop=smart&auto=webp&s=0bdc1bc476db6d9a1643ab341b2a905802d835c7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BH_lHAPvT56YAugqfWVWYiRjmmNYzY5JBtsCiQYZRoA.png?width=1080&crop=smart&auto=webp&s=4ab589d2bd0048f8c2ff7263aced9ab1c8864aa5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BH_lHAPvT56YAugqfWVWYiRjmmNYzY5JBtsCiQYZRoA.png?auto=webp&s=4231499d6414689184e4ae052d2cfe76b5d6b25e', 'width': 1200}, 'variants': {}}]}
Have any sites been developed where collections of LLM tools are hosted?
5
This boils down simply to the actual function for the tool on the right side and the JSON description of it on the left. You copy both, paste them into your own files or whatever you use, and that makes the entire function available to the AI. Or is this still a very spread-out area?
2025-10-28T22:47:27
https://www.reddit.com/r/LocalLLaMA/comments/1oio0rz/have_any_sites_been_developed_where_collections/
Intelligent-Land1765
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oio0rz
false
null
t3_1oio0rz
/r/LocalLLaMA/comments/1oio0rz/have_any_sites_been_developed_where_collections/
false
false
self
5
null
Has anyone tried visualizing reasoning flow in their AI agents instead of just monitoring tool calls?
2
I’ve seen a few cool tools lately doing observability for AI agents (tracking bad tool calls, token usage, etc.), but what I’m more curious about is the reasoning side, not just “what failed,” but how the agent’s thinking evolved between steps. For example: • What context was carried forward? • What inputs actually changed the outcome? • Could we visualize that like a graph of “thought states” or dependencies instead of plain logs? Curious if anyone’s explored this or thinks it’s useful. Would you find that kind of visualization valuable, or is that overkill for real-world debugging?
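One lightweight way to prototype this: model each step as a node in a directed graph and hang the carried context on the edges. A minimal sketch with networkx, where the step names and attributes are made up for illustration:

```python
import networkx as nx

# Each node is a "thought state"; each edge records what was carried forward
g = nx.DiGraph()
g.add_node("plan", context="user asked for Q3 revenue")
g.add_node("tool:sql", context="query drafted from the plan")
g.add_node("answer", context="summarised query result")

g.add_edge("plan", "tool:sql", carried="table schema")
g.add_edge("tool:sql", "answer", carried="42 rows", changed_outcome=True)

# Debugging a bad answer = walking backwards over its incoming edges
for src, dst, attrs in g.in_edges("answer", data=True):
    print(f"{src} -> {dst}: carried={attrs['carried']}")
```

Whether that is worth visualizing beyond plain logs probably depends on how many steps your agents take; for two or three hops, logs are fine.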
2025-10-28T22:39:34
https://www.reddit.com/r/LocalLLaMA/comments/1ointx5/has_anyone_tried_visualizing_reasoning_flow_in/
AdVivid5763
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ointx5
false
null
t3_1ointx5
/r/LocalLLaMA/comments/1ointx5/has_anyone_tried_visualizing_reasoning_flow_in/
false
false
self
2
null
GPU Hypervisor technology WoolyAI trial is now open
1
[removed]
2025-10-28T22:39:10
https://www.reddit.com/r/LocalLLaMA/comments/1ointk0/gpu_hypervisor_technology_woolyai_trial_is_now/
Chachachaudhary123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ointk0
false
null
t3_1ointk0
/r/LocalLLaMA/comments/1ointk0/gpu_hypervisor_technology_woolyai_trial_is_now/
false
false
self
1
null
Has anyone tried visualizing reasoning flow in their AI agents instead of just monitoring tool calls?
1
I’ve seen a few cool tools lately doing observability for AI agents (tracking bad tool calls, token usage, etc.), but what I’m more curious about is the reasoning side, not just “what failed,” but how the agent’s thinking evolved between steps. For example: • What context was carried forward? • What inputs actually changed the outcome? • Could we visualize that like a graph of “thought states” or dependencies instead of plain logs? Curious if anyone’s explored this or thinks it’s useful. Would you find that kind of visualization valuable, or is that overkill for real-world debugging?
2025-10-28T22:38:30
https://www.reddit.com/r/LocalLLaMA/comments/1oinsz6/has_anyone_tried_visualizing_reasoning_flow_in/
AdVivid5763
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oinsz6
false
null
t3_1oinsz6
/r/LocalLLaMA/comments/1oinsz6/has_anyone_tried_visualizing_reasoning_flow_in/
false
false
self
1
null
Voicebot suddenly repeats itself after small prompt change - normal?
0
Made a tiny tweak in my system prompt (confirm order before finalizing), and suddenly my agent started looping confirmation phrases. It didn’t happen before. Is this just LLM randomness, or did I break something deeper? Any tricks for catching this sort of thing automatically?
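This kind of regression is cheap to catch automatically with an n-gram repetition check over saved transcripts, run whenever the prompt changes. A minimal sketch:

```python
from collections import Counter

def repeats_itself(transcript: str, n: int = 6) -> bool:
    """Flag a response if any n-word phrase appears more than once."""
    words = transcript.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return any(count > 1 for count in Counter(ngrams).values())

# Run over a fixed set of test conversations after every prompt edit
assert not repeats_itself("Your order is confirmed. Anything else I can help with?")
assert repeats_itself(
    "just to confirm your order just to confirm your order just to confirm your order"
)
```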
2025-10-28T22:31:47
https://www.reddit.com/r/LocalLLaMA/comments/1oinn09/voicebot_suddenly_repeats_itself_after_small/
Fluffy-Twist-4652
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oinn09
false
null
t3_1oinn09
/r/LocalLLaMA/comments/1oinn09/voicebot_suddenly_repeats_itself_after_small/
false
false
self
0
null
Anyone running local LLM coding setups on 24GB VRAM laptops? Looking for real-world experiences
10
Hi everyone, I'm wondering if anyone has real day-to-day experience with local LLM coding on 24GB VRAM? And how do you use it? Cline/Continue in VS Code? Here's the situation: I've been using Claude Code, but it's getting pretty expensive. The basic plan recently got nerfed: now you only get a few hours of work time before you have to wait for your resources to reset. So I'm looking into local alternatives, even if they're not as advanced. That's totally fine; I'm already into local AI stuff, so I'm a bit familiar with what to expect. Right now I've got a laptop with an RTX 4080 (12GB VRAM). It's fine for most AI tasks I run, but not great for coding with LLMs. For context: - unfortunately, I can't use a desktop due to certain circumstances - I also can't go with Apple since it's not ideal for things like Stable Diffusion, OCR, etc., and it's expensive as hell, more expensive than a non-Apple laptop with the same specs - cloud providers can get expensive for regular, day-in-day-out work use I'm thinking about getting a 5090 laptop, but that thing's insanely expensive, so I'd love to hear some thoughts or real experiences from people who actually run heavy local AI workloads on laptops. Thanks! 🙏
2025-10-28T22:30:57
https://www.reddit.com/r/LocalLLaMA/comments/1oinmab/anyone_running_local_llm_coding_setups_on_24gb/
AmazinglyNatural6545
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oinmab
false
null
t3_1oinmab
/r/LocalLLaMA/comments/1oinmab/anyone_running_local_llm_coding_setups_on_24gb/
false
false
self
10
null
I’m just ever so off. I could use some guidance
8
Hi. I’m recognizing that this might be a little bit of an annoying post, but I need a little bit of help. Specifically, I’m trying to run a local… let’s call it a home GPT or something along those lines… that’s agentic for specific tasks and tool calls automatically. I don’t want to have to specify what tool when I type in chat. I can write SQL queries myself, but if I’m telling it to look something up in Supabase, I don’t want to have to manually say “use this tool.” It should just flow naturally in the conversation. I’ve tried LM Studio, Ollama, msty.ai… doesn’t seem to matter. I really like LM Studio’s model management and chat UI, but I have to explicitly tell it to use the tool every single time. It’s not making those calls autonomously. That kind of defeats the purpose for me. What I want is something that knows when to query Supabase via MCP, and when not to. When to use web search, and when not to. Right now I’m testing different models, but my favorite so far is Qwen3-32B MLX running on LM Studio. I’m just curious how people are getting these kinds of autonomous workflows actually running in the chat UI… without it turning into a really manual process every time.
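For what it's worth, the usual pattern that makes this autonomous is a loop against LM Studio's OpenAI-compatible server (default port 1234): pass the tool schemas on every request, execute whatever `tool_calls` come back, and let the model answer directly when it doesn't need a tool. A minimal sketch, with a hypothetical `query_supabase` tool standing in for your MCP wiring (the model name is whatever alias your server shows):

```python
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "query_supabase",  # hypothetical; wire to your MCP/Supabase client
        "description": "Run a read-only SQL query against the Supabase database",
        "parameters": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    },
}]

def query_supabase(sql: str) -> str:
    return json.dumps({"rows": []})  # stub

messages = [{"role": "user", "content": "How many orders came in yesterday?"}]
while True:
    resp = client.chat.completions.create(
        model="qwen3-32b", messages=messages, tools=tools
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:   # the model decided no tool was needed
        print(msg.content)
        break
    messages.append(msg)     # the model decided to call a tool; run it and loop
    for call in msg.tool_calls:
        result = query_supabase(**json.loads(call.function.arguments))
        messages.append(
            {"role": "tool", "tool_call_id": call.id, "content": result}
        )
```

The key point is that the decision to call or not call lives with the model, not with anything you type in chat; the chat UI path in LM Studio doesn't run this loop for you, which is why it feels manual there.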
2025-10-28T22:26:36
https://www.reddit.com/r/LocalLLaMA/comments/1oinict/im_just_ever_so_off_i_could_use_some_guidance/
DisplacedForest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oinict
false
null
t3_1oinict
/r/LocalLLaMA/comments/1oinict/im_just_ever_so_off_i_could_use_some_guidance/
false
false
self
8
null
Has anyone built voice agent QA around metrics like frustration?
0
I feel like latency and accuracy don’t capture when a user is just done with the bot. You can hear it in tone, but how do you measure that at scale? Anyone tried a frustration index or something similar?
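A crude starting point is a heuristic score over the user's turns: lexical cues ("agent", "I already said") plus verbatim repeated requests, normalised to 0-1. A sketch of the idea; the cue list and weighting here are arbitrary and would need tuning on your own transcripts:

```python
FRUSTRATION_CUES = [
    "agent", "representative", "human", "this is ridiculous",
    "i already said", "not what i asked", "stop",
]

def frustration_index(user_turns: list[str]) -> float:
    """Crude 0-1 score from lexical cues plus back-to-back repeated requests."""
    if not user_turns:
        return 0.0
    cue_hits = sum(
        any(cue in turn.lower() for cue in FRUSTRATION_CUES) for turn in user_turns
    )
    repeats = sum(
        1 for a, b in zip(user_turns, user_turns[1:]) if a.lower() == b.lower()
    )
    return min(1.0, (cue_hits + repeats) / len(user_turns))

# ~0.67: one repeat plus one escalation cue across three turns
print(frustration_index(["what's my balance", "what's my balance", "get me a human"]))
```

Tone and prosody would need an audio model on top, but even text-only heuristics catch the repeat-and-escalate pattern at scale.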
2025-10-28T22:23:18
https://www.reddit.com/r/LocalLLaMA/comments/1oinfik/has_anyone_built_voice_agent_qa_around_metrics/
Just_Awareness2733
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oinfik
false
null
t3_1oinfik
/r/LocalLLaMA/comments/1oinfik/has_anyone_built_voice_agent_qa_around_metrics/
false
false
self
0
null
Is the Nvidia DGX Spark the same as the OEM version, Asus Ascent GX10?
0
I need a CUDA-based GPU system for AI training, and I’m considering buying the Nvidia DGX Spark. However, it’s quite hard to get one in Canada, so I’m thinking about purchasing an OEM version — the **Asus Ascent GX10**. On paper, both systems seem to have identical specs, but I’m wondering if the OEM version performs just as well in terms of **cooling, noise level, and overall build quality**. Has anyone used the Asus Ascent GX10 or any other DGX Spark OEM systems? I’d really appreciate your insights or experience.
2025-10-28T22:03:38
https://www.reddit.com/r/LocalLLaMA/comments/1oimxtp/is_the_nvidia_dgx_spark_the_same_as_the_oem/
Decent-Log6192
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oimxtp
false
null
t3_1oimxtp
/r/LocalLLaMA/comments/1oimxtp/is_the_nvidia_dgx_spark_the_same_as_the_oem/
false
false
self
0
null
An alternative to Microsoft's VibeVoice? Soul releases SoulX-Podcast-1.7B, a multi-speaker TTS model
108
Soul has just released SoulX-Podcast-1.7B, which looks like it might be built on Qwen3-1.7B. The current demo looks promising, but it's hard to say what the actual performance is like. I previously tested VibeVoice-1.5B and found that its performance was very poor during rapid switching between multiple speakers. I'm wondering if this new model will be any better. The model card hasn't been uploaded yet.
2025-10-28T21:38:28
https://i.redd.it/kqnfb23c9xxf1.png
Dr_Karminski
i.redd.it
1970-01-01T00:00:00
0
{}
1oimand
false
null
t3_1oimand
/r/LocalLLaMA/comments/1oimand/an_alternative_to_microsofts_vibevoice_soul/
false
false
default
108
{'enabled': True, 'images': [{'id': 'kqnfb23c9xxf1', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/kqnfb23c9xxf1.png?width=108&crop=smart&auto=webp&s=33d79c52592d53e9baa2948480a52268c14edfbb', 'width': 108}, {'height': 229, 'url': 'https://preview.redd.it/kqnfb23c9xxf1.png?width=216&crop=smart&auto=webp&s=1b283dfecb6c4fac3095ea7dd1dac31190551175', 'width': 216}, {'height': 339, 'url': 'https://preview.redd.it/kqnfb23c9xxf1.png?width=320&crop=smart&auto=webp&s=de7c48bace403ff724045a3ad094653478ad0ddf', 'width': 320}, {'height': 679, 'url': 'https://preview.redd.it/kqnfb23c9xxf1.png?width=640&crop=smart&auto=webp&s=7e6cd2a86c6977531e4be1ccaea119c7c0c9a8a2', 'width': 640}, {'height': 1018, 'url': 'https://preview.redd.it/kqnfb23c9xxf1.png?width=960&crop=smart&auto=webp&s=257095804c21554dd1d48363c62f736d882167d1', 'width': 960}, {'height': 1146, 'url': 'https://preview.redd.it/kqnfb23c9xxf1.png?width=1080&crop=smart&auto=webp&s=01983e16f2544b471c33179d1c9b37fd7e26ca40', 'width': 1080}], 'source': {'height': 1712, 'url': 'https://preview.redd.it/kqnfb23c9xxf1.png?auto=webp&s=1b986820d8bc277ba4bf87ea29ea3b45b56a454c', 'width': 1613}, 'variants': {}}]}
Best current dense, nonthinking models in the 8b-14b range?
18
It seems like a lot of the state of the art open models that are being released are either MoE models or Thinking models. I understand that these are useful ways to improve performance, but with my setup I'm looking for models that don't have these characteristics. I was wondering what recommendations you guys have? Thanks!
2025-10-28T21:36:55
https://www.reddit.com/r/LocalLLaMA/comments/1oim98t/best_current_dense_nonthinking_models_in_the/
Priceless_Pennies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oim98t
false
null
t3_1oim98t
/r/LocalLLaMA/comments/1oim98t/best_current_dense_nonthinking_models_in_the/
false
false
self
18
null
fine tuning
1
I am facing an issue fine-tuning the lfm2-1.2b model using the Colab files shared on the LEAP platform: I keep getting timeouts. If anyone was successful, can you share the SFT config you used?
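Not the LEAP notebook itself, but a generic TRL config in this spirit that tends to fit a ~1B model on a free Colab T4 without timing out. The model id and data file are placeholders, and field names vary a bit across TRL versions (e.g. `max_seq_length` vs `max_length`), so treat this as a sketch:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")

config = SFTConfig(
    output_dir="lfm2-sft",
    per_device_train_batch_size=2,    # small batches avoid OOM on a 16GB T4
    gradient_accumulation_steps=8,    # effective batch size of 16
    num_train_epochs=1,               # one epoch first; Colab kills long runs
    learning_rate=2e-5,
    max_seq_length=1024,              # shorter sequences = faster steps
    logging_steps=10,
    fp16=True,
)

trainer = SFTTrainer(
    model="LiquidAI/LFM2-1.2B",  # assumed HF id; swap in the one from the notebook
    train_dataset=dataset,
    args=config,
)
trainer.train()
```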
2025-10-28T21:25:23
https://www.reddit.com/r/LocalLLaMA/comments/1oilyqc/fine_tuning/
VariationOld93
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oilyqc
false
null
t3_1oilyqc
/r/LocalLLaMA/comments/1oilyqc/fine_tuning/
false
false
self
1
null
MiniMax-M2 llama.cpp
37
I tried to implement it; it's fully Cursor-generated AI slop code, sorry. The chat template is strange; I'm 100% sure it's not correctly implemented, but it works with Roo Code (Q2 is bad, Q4 is fine) at least. Anyone who wants to spend 100GB of bandwidth can give it a try. Test device and command: 2x4090 and a lot of RAM `./llama-server -m minimax-m2-Q4_K.gguf -ngl 999 --cpu-moe --jinja -fa on -c 50000 --reasoning-format auto` `code:` [here](https://github.com/cturan/llama.cpp/tree/minimax) `gguf:` [here](https://huggingface.co/cturan/MiniMax-M2-GGUF/tree/main) https://reddit.com/link/1oilwvm/video/ofpwt9vn4xxf1/player
2025-10-28T21:23:23
https://www.reddit.com/r/LocalLLaMA/comments/1oilwvm/minimaxm2_llamacpp/
butlan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oilwvm
false
null
t3_1oilwvm
/r/LocalLLaMA/comments/1oilwvm/minimaxm2_llamacpp/
false
false
self
37
null
Best simple plug and play conversational LLM STT TTS set up?
3
I don’t really have the time to build one myself I’d probably wanna use GPT-OSS 20b Yeah doesn’t have to be god tier but it should not be TTS that sounds whack either, Any suggestions/ GitHub projects you guys can recommend? Thank you
2025-10-28T21:00:06
https://www.reddit.com/r/LocalLLaMA/comments/1oilbia/best_simple_plug_and_play_conversational_llm_stt/
Adventurous-Gold6413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oilbia
false
null
t3_1oilbia
/r/LocalLLaMA/comments/1oilbia/best_simple_plug_and_play_conversational_llm_stt/
false
false
self
3
null
Fine tune existing LLMs in Colab or Kaggle
2
I tried to use Colab and Kaggle to fine-tune an existing 1B LLM on my style. I was fine-tuning, changing the number of epochs, and lowering the learning rate. I have 7k of my own messages in my own style, and I also checked that my training dataset is in the correct format. Mostly Colab doesn't work for me since it runs out of RAM. I cannot really use Kaggle right now because of "additional\_chat\_templates does not exist on main". Which good LLMs were you able to run on those two services? Or maybe on some other service?
2025-10-28T20:14:55
https://www.reddit.com/r/LocalLLaMA/comments/1oik4mm/fine_tune_existing_llms_in_colab_or_kaggle/
DobraVibra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oik4mm
false
null
t3_1oik4mm
/r/LocalLLaMA/comments/1oik4mm/fine_tune_existing_llms_in_colab_or_kaggle/
false
false
self
2
null
Has anyone gotten vLLM working natively on Windows (no WSL/Docker) with Flash Attention?
3
**Has anyone successfully run vLLM natively on Windows with Flash Attention enabled?** I'm trying to get vLLM running on Windows and wanted to check if anyone has managed to do this: - Native Windows installation (not WSL or Docker) - Not using the vllm-windows fork/project - With Flash Attention actually working If you've gotten this setup working, I'd love to hear about: - What installation method you used - Any specific dependencies or build steps - Whether Flash Attention is actually functioning or just enabled without errors Most guides I've found either use WSL, Docker, or point to the vllm-windows project, but I'm curious if anyone's gotten the upstream vLLM working natively with all features. Thanks!
2025-10-28T19:41:11
https://www.reddit.com/r/LocalLLaMA/comments/1oij8bg/has_anyone_gotten_vllm_working_natively_on/
JustSayin_thatuknow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oij8bg
false
null
t3_1oij8bg
/r/LocalLLaMA/comments/1oij8bg/has_anyone_gotten_vllm_working_natively_on/
false
false
self
3
null
Poker Tournament for LLMs
267
Watch here: [https://pokerbattle.ai/event](https://pokerbattle.ai/event)
2025-10-28T19:31:37
https://www.reddit.com/gallery/1oiiz8k
undoing8
reddit.com
1970-01-01T00:00:00
0
{}
1oiiz8k
false
null
t3_1oiiz8k
/r/LocalLLaMA/comments/1oiiz8k/poker_tournament_for_llms/
false
false
https://b.thumbs.redditm…fxNFOLNxKUVs.jpg
267
null
Looking for models that are good for product design
3
Hello all. I am new to local LLMs and have been trying a few models, but haven't found one that clicks yet. For the past year or more I have used Claude as my main AI platform and then followed up with ChatGPT if I needed a more accurate answer. I would discuss circuit designs and conceptual designs, and mostly use it as a way to help develop ideas. It was great up until recently, when they started clamping down hard on usage. I would like to switch to using local LLMs, but I really haven't found a model yet that works well as just a general conversationalist. I run an NVIDIA 3090, so I have been trying various Qwen models, Llama 70B, and a few others. Most of them have been hallucinating hard. I would love to hear some general thoughts from you guys.
2025-10-28T18:51:49
https://www.reddit.com/r/LocalLLaMA/comments/1oihxvc/looking_for_models_that_are_good_for_product/
Striking_Luck5201
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oihxvc
false
null
t3_1oihxvc
/r/LocalLLaMA/comments/1oihxvc/looking_for_models_that_are_good_for_product/
false
false
self
3
null
Minimax-M2 cracks top 10 overall LLMs (production LLM performance gap shrinking: 7 points from GPT-5 in Artificial Analysis benchmark)
72
I've been analysing the Artificial Analysis benchmark set (94 production models, 329 API endpoints) and wanted to share some trends that seem notable. **Context** This is models with commercial API access, not the full experimental OS landscape. So mostly models you'd actually deploy out of the box rather than every research model. The gap between the best tracked OS model (MiniMax-M2, quality 61) and the best proprietary one (GPT-5, 68) is now 7 points. Last year it was around 18 points in the same dataset. Linear extrapolation suggests parity by Q2 2026 for production-ready models, though obviously that assumes the trend holds (and Chinese labs keep shipping OSS models). What's interesting is the tier distribution: \- Elite (60+): 1 OS, 11 proprietary \- High (50-59): 8 OS, 8 proprietary (we hit parity here) \- Below 50: OS dominates by volume The economics are pretty stark. OS average: $0.83/M tokens. Proprietary: $6.03/M. Value leaders like Qwen3-235B are hitting 228 quality per dollar vs \~10-20 for proprietary elite models (an admittedly rough metric: quality per dollar = quality index ÷ price per 1M tokens). Speed is also shifting. OS on optimised infra (Groq, Fireworks) peaks at 3,087 tok/sec vs 616 for proprietary. Not sure how sustainable that edge is as proprietary invests in inference optimisation. Made an interactive comparison: [whatllm.org](http://whatllm.org) Full write-up: [https://www.whatllm.org/blog/open-source-vs-proprietary-llms-2025](https://www.whatllm.org/blog/open-source-vs-proprietary-llms-2025) Two questions I'm chewing on: 1. How representative is this benchmark set vs the wider OS ecosystem? AA focuses on API-ready production models, which excludes a lot of experimental work, fine-tuned models, etc. 2. Is there a ceiling coming, or does this compression just continue? Chinese labs seem to be iterating faster than I expected. Curious what others think about the trajectory here.
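The value metric above is just a ratio; plugging in the post's headline numbers, using the category-average prices purely for illustration (actual per-model prices differ):

```python
# quality per dollar = quality index / blended price per 1M tokens
models = {
    "GPT-5 (proprietary avg price)": {"quality": 68, "price_per_m": 6.03},
    "MiniMax-M2 (OS avg price)":     {"quality": 61, "price_per_m": 0.83},
}
for name, m in models.items():
    print(f"{name}: {m['quality'] / m['price_per_m']:.1f} quality per dollar")
# -> roughly 11 vs 73, which is why the OS value numbers look so lopsided
```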
2025-10-28T18:28:42
https://www.reddit.com/r/LocalLLaMA/comments/1oihbtx/minimaxm2_cracks_top_10_overall_llms_production/
medi6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oihbtx
false
null
t3_1oihbtx
/r/LocalLLaMA/comments/1oihbtx/minimaxm2_cracks_top_10_overall_llms_production/
false
false
self
72
{'enabled': False, 'images': [{'id': '_SKrEaytPHGOGHi6b5La9kSIYhKlMkrUXFjmxYQxWms', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/_SKrEaytPHGOGHi6b5La9kSIYhKlMkrUXFjmxYQxWms.png?width=108&crop=smart&auto=webp&s=1830e48e826d54f0587ae338ea9b0972d2fd8a56', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/_SKrEaytPHGOGHi6b5La9kSIYhKlMkrUXFjmxYQxWms.png?width=216&crop=smart&auto=webp&s=ea8b9d2c6e09ced07d6604492fe089c7271c9af5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/_SKrEaytPHGOGHi6b5La9kSIYhKlMkrUXFjmxYQxWms.png?width=320&crop=smart&auto=webp&s=422320d36e4104c923e182e98a9b0a1f3f61d548', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/_SKrEaytPHGOGHi6b5La9kSIYhKlMkrUXFjmxYQxWms.png?width=640&crop=smart&auto=webp&s=0ad340dd722f351e7dd0753cd751167521f5ade6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/_SKrEaytPHGOGHi6b5La9kSIYhKlMkrUXFjmxYQxWms.png?width=960&crop=smart&auto=webp&s=bbd7c4b2db6d9cef0a0a94e029cebc27b6ae5be7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/_SKrEaytPHGOGHi6b5La9kSIYhKlMkrUXFjmxYQxWms.png?width=1080&crop=smart&auto=webp&s=9b4b58a9d11f99bb7af2b9c571a561e64300a52a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/_SKrEaytPHGOGHi6b5La9kSIYhKlMkrUXFjmxYQxWms.png?auto=webp&s=5271158c63e37f546243d0a0d0bdd1bd1552f4f4', 'width': 1200}, 'variants': {}}]}
Multiple terminal AI working together for the same project?
0
Is it common for developers or vibe engineers to use multiple terminal AIs (Gemini CLI, opencode) together, or do y'all prefer to use a single terminal AI for a single project?
2025-10-28T18:23:17
https://www.reddit.com/r/LocalLLaMA/comments/1oih6re/multiple_terminal_ai_working_together_for_the/
Charming_Bag_1257
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oih6re
false
null
t3_1oih6re
/r/LocalLLaMA/comments/1oih6re/multiple_terminal_ai_working_together_for_the/
false
false
self
0
null
Gemini 1.5 Family model sizes from official Deepmind paper
0
[PLUM: Adapting Pre-trained Language Models for Industrial-scale Generative Recommendations](https://arxiv.org/pdf/2510.07784) https://preview.redd.it/ybcwpzwh9wxf1.png?width=662&format=png&auto=webp&s=d0397196ba29f0967f626aea50875a7610b42012
2025-10-28T18:17:09
https://www.reddit.com/r/LocalLLaMA/comments/1oih0xi/gemini_15_family_model_sizes_from_official/
Repulsive-Parsnip-33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oih0xi
false
null
t3_1oih0xi
/r/LocalLLaMA/comments/1oih0xi/gemini_15_family_model_sizes_from_official/
false
false
https://b.thumbs.redditm…U541uczKfaUU.jpg
0
null
Appreciation post to LocalLLaMa
1
[removed]
2025-10-28T17:35:19
https://www.reddit.com/r/LocalLLaMA/comments/1oifw6h/appreciation_post_to_localllama/
Southern_Sun_2106
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oifw6h
false
null
t3_1oifw6h
/r/LocalLLaMA/comments/1oifw6h/appreciation_post_to_localllama/
false
false
self
1
null
IBM releases Granite-4.0 Nano (300M & 1B), along with a local browser demo showing how the models can programmatically interact with websites and call tools/browser APIs on your behalf.
235
IBM just released Granite-4.0 Nano, their smallest LLMs to date (300M & 1B). The models demonstrate remarkable instruction following and tool calling capabilities, making them perfect for on-device applications. Links: \- Blog post: [https://huggingface.co/blog/ibm-granite/granite-4-nano](https://huggingface.co/blog/ibm-granite/granite-4-nano) \- Demo (+ source code): [https://huggingface.co/spaces/ibm-granite/Granite-4.0-Nano-WebGPU](https://huggingface.co/spaces/ibm-granite/Granite-4.0-Nano-WebGPU) \+ for those wondering, the demo uses Transformers.js to run the models 100% locally in your browser with WebGPU acceleration.
2025-10-28T17:25:26
https://v.redd.it/s5hzz3wgyvxf1
xenovatech
/r/LocalLLaMA/comments/1oifmg6/ibm_releases_granite40_nano_300m_1b_along_with_a/
1970-01-01T00:00:00
0
{}
1oifmg6
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s5hzz3wgyvxf1/DASHPlaylist.mpd?a=1764393933%2CYTg2Yjk5MGE0ZDViNzZmODVlNDFjNjUxNDY2M2Y4MTViZDkzN2U2NTM5NTI5YjdkN2MzNWE2YzAyM2U3YmZkNA%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/s5hzz3wgyvxf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/s5hzz3wgyvxf1/HLSPlaylist.m3u8?a=1764393933%2CYWNiMTY0NmM3NzgwZmQ1NjNhODY4YzE3NjVjNDFkOTIyMDM5YzUxNjBmMzQwZDgzNjYyZjgxN2Y2MjM5ZWFiOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/s5hzz3wgyvxf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1894}}
t3_1oifmg6
/r/LocalLLaMA/comments/1oifmg6/ibm_releases_granite40_nano_300m_1b_along_with_a/
false
false
https://external-preview…3359ee39e702dfc7
235
{'enabled': False, 'images': [{'id': 'c3EyM2o0d2d5dnhmMQ3Ju84xO0NZTaEdmCFfUDcYCN9cnlFCq8u0lL0AKtmD', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/c3EyM2o0d2d5dnhmMQ3Ju84xO0NZTaEdmCFfUDcYCN9cnlFCq8u0lL0AKtmD.png?width=108&crop=smart&format=pjpg&auto=webp&s=bf57f93d1a6efc07a84aae6021406e091d34bdc6', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/c3EyM2o0d2d5dnhmMQ3Ju84xO0NZTaEdmCFfUDcYCN9cnlFCq8u0lL0AKtmD.png?width=216&crop=smart&format=pjpg&auto=webp&s=4d1d4f77527df8999666c5e325bcb27067ee4865', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/c3EyM2o0d2d5dnhmMQ3Ju84xO0NZTaEdmCFfUDcYCN9cnlFCq8u0lL0AKtmD.png?width=320&crop=smart&format=pjpg&auto=webp&s=86ea50f729844786c7603c27bd915f7c3066fd5f', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/c3EyM2o0d2d5dnhmMQ3Ju84xO0NZTaEdmCFfUDcYCN9cnlFCq8u0lL0AKtmD.png?width=640&crop=smart&format=pjpg&auto=webp&s=13ee5192b1e0778c3dd5ce068d939a8932c00940', 'width': 640}, {'height': 547, 'url': 'https://external-preview.redd.it/c3EyM2o0d2d5dnhmMQ3Ju84xO0NZTaEdmCFfUDcYCN9cnlFCq8u0lL0AKtmD.png?width=960&crop=smart&format=pjpg&auto=webp&s=a022ed6f40a10a23c37bac0206cec69e04a7de98', 'width': 960}, {'height': 616, 'url': 'https://external-preview.redd.it/c3EyM2o0d2d5dnhmMQ3Ju84xO0NZTaEdmCFfUDcYCN9cnlFCq8u0lL0AKtmD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ff0a06ad7c7674a1055e34431c0d6e545e61ea6e', 'width': 1080}], 'source': {'height': 1780, 'url': 'https://external-preview.redd.it/c3EyM2o0d2d5dnhmMQ3Ju84xO0NZTaEdmCFfUDcYCN9cnlFCq8u0lL0AKtmD.png?format=pjpg&auto=webp&s=9d38a85f39071af07b9ab6d5e3af985ddf64d6f5', 'width': 3120}, 'variants': {}}]}
Theoretically Scaling Beyond 2 DGX Sparks in a Single Cluster.
14
First off, let's get into why NVIDIA only supports clustering 2 of these at the moment.

`user@spark:~$ lspci | grep Mellanox`
`0000:01:00.0 Ethernet controller: Mellanox Technologies MT2910 Family [ConnectX-7]`
`0000:01:00.1 Ethernet controller: Mellanox Technologies MT2910 Family [ConnectX-7]`
`0002:01:00.0 Ethernet controller: Mellanox Technologies MT2910 Family [ConnectX-7]`
`0002:01:00.1 Ethernet controller: Mellanox Technologies MT2910 Family [ConnectX-7]`

The CPU is essentially two 10-core compute units married together, each with its own PCIe root complex connected to the CX7 at Gen5 x4. Meaning each compute half of the CPU can push roughly 100gbps (200gbps across both complexes), and the CX7 interfaces effectively show up twice.

CPU 1st half:
enp1s0f0np0 -> port 1
enp1s0f1np1 -> port 2

CPU 2nd half:
enP2p1s0f0np0 -> port 1
enP2p1s0f1np1 -> port 2

user@spark:~$ ibdev2netdev
rocep1s0f0 port 1 ==> enp1s0f0np0 (Up)
rocep1s0f1 port 1 ==> enp1s0f1np1 (Up)
roceP2p1s0f0 port 1 ==> enP2p1s0f0np0 (Up)
roceP2p1s0f1 port 1 ==> enP2p1s0f1np1 (Up)

NVIDIA docs will basically tell you to ignore all the second-half (enP2) interfaces. This works at 200gbps in a p2p dual-Spark scenario because NCCL is going to transmit ROCE v1 L2 frames out of all up ROCE interfaces. Doing a direct connection will bring up two of those (one per complex) and it will just work. Ethernet traffic will be limited to about 100gbps out of the single port, however.

But now, in my case, I am connecting these Sparks over dual 100gbit QSFP28 links to a cluster of NVIDIA sn2010 switches. QSFP28, because no matter what, 200gbps is the absolute maximum the CX7 can do given the PCIe limitations. To make this work, with ROCE v2 and layer 3 links to the switch, you can set an IP on each half of the complex.

enp1s0f0np0 -> set ip (CPU 1st half CX7 port 1)
enP2p1s0f1np1 -> set ip (CPU 2nd half CX7 port 2)

Now, this will break NCCL. NCCL needs some variables tweaked, otherwise it's going to try to use ROCE v1 p2p ports, which cannot work in this scenario. Here is an NCCL test that will get 200gbps across both links to a switch.

mpirun -np 2 -H <spark 1 ip>,<spark 2 ip> \
  --mca plm_rsh_agent "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" \
  -x LD_LIBRARY_PATH=$LD_LIBRARY_PATH \
  -x UCX_NET_DEVICES=enp1s0f0np0,enP2p1s0f1np1 \
  -x NCCL_SOCKET_IFNAME=enp1s0f0np0,enP2p1s0f1np1 \
  -x NCCL_SOCKET_FAMILY=AF_INET \
  -x NCCL_IB_HCA=rocep1s0f0,roceP2p1s0f1 \
  -x OMPI_MCA_btl_tcp_if_include=enp1s0f0np0,enP2p1s0f1np1 \
  -x NCCL_IB_GID_INDEX=3 \
  -x NCCL_IB_TC=3 \
  -x NCCL_IB_MERGE_NICS=1 \
  $HOME/nccl-tests/build/all_gather_perf -b 16G -e 16G -f 2

The host IPs above can be the IPs of the 10g interfaces; NCCL will still discover the CX7 paths but just do IP coordination over the 10g links. These flags restrict the interfaces NCCL sees, force ROCE v2, merge those NICs, and force the lossless traffic class. In theory, with both CX7 interfaces connected to a switch, your only scaling limit here with multiple Sparks is how many switch ports you have.

To make this more permanent I set these in .profile for the user:
export CUDA_HOME="/usr/local/cuda"
export MPI_HOME="/usr/lib/aarch64-linux-gnu/openmpi"
export NCCL_HOME="$HOME/nccl/build/"
export LD_LIBRARY_PATH="$NCCL_HOME/lib:$CUDA_HOME/lib64/:$MPI_HOME/lib:$LD_LIBRARY_PATH"
export IP_IF_NAME=enp1s0f0np0,enP2p1s0f1np1
export IB_IF_NAME=rocep1s0f0,roceP2p1s0f1
export UCX_NET_DEVICES=$IP_IF_NAME
export NCCL_SOCKET_IFNAME=$IP_IF_NAME
export NCCL_SOCKET_FAMILY=AF_INET
export NCCL_IB_HCA=$IB_IF_NAME
export NCCL_IB_GID_INDEX=3
export NCCL_IB_MERGE_NICS=1
export OMPI_MCA_btl_tcp_if_include=$IP_IF_NAME

NCCL test results:

# nccl-tests version 2.17.4 nccl-headers=22807 nccl-library=22807
# Collective test starting: all_gather_perf
# nThread 1 nGpus 1 minBytes 17179869184 maxBytes 17179869184 step: 2(factor) warmup iters: 1 iters: 20 agg iters: 1 validation: 1 graph: 0
#
# Using devices
#  Rank  0 Group  0 Pid 303712 on spark-1af4 device  0 [000f:01:00] NVIDIA GB10
#  Rank  1 Group  0 Pid 166882 on spark-870f device  0 [000f:01:00] NVIDIA GB10
#
#                  out-of-place                       in-place
#    size    count     type  redop  root    time  algbw  busbw  #wrong    time  algbw  busbw  #wrong
#     (B) (elements)                         (us) (GB/s) (GB/s)            (us) (GB/s) (GB/s)
17179869184 2147483648 float  none    -1  410263  41.88  20.94       0  409388  41.96  20.98       0
# Out of bounds values : 0 OK
# Avg bus bandwidth : 20.96
#
# Collective test concluded: all_gather_perf
2025-10-28T16:44:45
https://www.reddit.com/r/LocalLLaMA/comments/1oieip0/theoretically_scaling_beyond_2_dgx_sparks_in_a/
SIN3R6Y
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oieip0
false
null
t3_1oieip0
/r/LocalLLaMA/comments/1oieip0/theoretically_scaling_beyond_2_dgx_sparks_in_a/
false
false
self
14
null
Realized I'm wasting 60% of my gpu time and it's killing my thesis timeline
9
Been training models for my research and something felt off. Decided to actually track my gpu usage over a week and... yeah. 60% idle time. Not because I'm slacking, but because of all the crap in between runs. Here's where the time goes. Switching between different model architectures eats up way more time than I thought. Every time I want to test llama vs mistral, I'm basically spending 20 minutes reconfiguring environments, checking dependencies, making sure cuda is happy. Then there's data prep, which I keep forgetting to parallelize properly. And honestly? A lot of waiting around because I'm not confident enough to queue up multiple experiments overnight. I started using transformer lab recently which handles some of the switching headaches automatically. Not perfect but it means I can actually run back to back experiments without babysitting the whole process. Saves me from the constant "is it done yet" anxiety. You might not notice but take a look at how much this adds up. If I'm only actually training 40% of the time, that's like paying for a gym membership and only going twice a week. Except the gym membership is my entire research timeline. Still figuring out how to optimize this better. Thinking about setting up proper job queues but that feels like it might be overkill for a single gpu setup? Anyone else dealt with this or am I just really bad at this?
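On the job-queue point: for a single GPU a full scheduler probably is overkill, but a dumb FIFO runner over a text file covers the queue-experiments-overnight case. A minimal sketch (file names are placeholders):

```python
import subprocess
from pathlib import Path

# One experiment per line in queue.txt, e.g.:
#   python train.py --model llama --lr 1e-4
#   python train.py --model mistral --lr 1e-4
commands = [c for c in Path("queue.txt").read_text().splitlines() if c.strip()]

for i, cmd in enumerate(commands):
    print(f"[{i + 1}/{len(commands)}] {cmd}")
    with Path(f"run_{i}.log").open("w") as log:
        # Runs back to back; a crash in one experiment doesn't kill the queue
        subprocess.run(cmd, shell=True, stdout=log, stderr=subprocess.STDOUT)
```

Kick it off before bed and the GPU stays busy through the night instead of idling between babysat runs.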
2025-10-28T16:16:14
https://www.reddit.com/r/LocalLLaMA/comments/1oidqjh/realized_im_wasting_60_of_my_gpu_time_and_its/
swedishprisoner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oidqjh
false
null
t3_1oidqjh
/r/LocalLLaMA/comments/1oidqjh/realized_im_wasting_60_of_my_gpu_time_and_its/
false
false
self
9
null
Anyone else actually like using AI detectors while editing?
0
So I’ve been experimenting with different tools lately, and I’m starting to see detectors (zerogpt, originality, turntin) less as “gotcha” programs and more like editing helpers. For example, Originality.ai has been surprisingly useful. nstead of just yelling “AI detected,” it highlights specific lines that sound too robotic. I’ve started using that feedback to rewrite those parts and add more personality, and it honestly makes my writing feel smoother and more natural. Curious if anyone else here uses AI detectors as part of their creative workflow rather than just out of fear of being flagged?
2025-10-28T15:52:08
https://www.reddit.com/r/LocalLLaMA/comments/1oid2yu/anyone_else_actually_like_using_ai_detectors/
Typical-Trade-6363
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oid2yu
false
null
t3_1oid2yu
/r/LocalLLaMA/comments/1oid2yu/anyone_else_actually_like_using_ai_detectors/
false
false
self
0
null
Language models are sentient and I can PROVE it
0
haha I'm just kidding but wasn't that fun for a second?
2025-10-28T15:46:58
https://www.reddit.com/r/LocalLLaMA/comments/1oicxzi/language_models_are_sentient_and_i_can_prove_it/
atineiatte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oicxzi
false
null
t3_1oicxzi
/r/LocalLLaMA/comments/1oicxzi/language_models_are_sentient_and_i_can_prove_it/
true
false
self
0
null
Need help properly setting up open-webui
8
Hello LocalLLaMA experts, Could someone point me to a guide on how to tweak Open WebUI parameters to properly get the correct results? I have OWUI and Ollama running in Docker containers. I've pulled a few models to run on my RTX 3090, e.g. Codestral and Gemma3 27B. I've also connected to the Mistral API and exposed a few models from that API to OWUI. All are using default parameters, with no custom prompts for any of the models, as I don't know what I'm doing in those areas anyway. Here is the problem. When I give a sample data table and ask the model to give me code to do XYZ, the Codestral model using the Mistral API correctly gives me the code I asked for. But when I use the locally hosted Codestral running on Ollama with the EXACT same prompt, all it gives me is a summary of the data table. Could someone kindly help me or point me in the right direction to configure this setup to achieve the same/similar results running on the local model as the cloud model? Thank you in advance.
2025-10-28T15:37:06
https://www.reddit.com/r/LocalLLaMA/comments/1oicoeh/need_help_properly_setting_up_openwebui/
stuckwi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oicoeh
false
null
t3_1oicoeh
/r/LocalLLaMA/comments/1oicoeh/need_help_properly_setting_up_openwebui/
false
false
self
8
null
Granite 4.0 Nano Language Models
221
IBM Granite team released Granite 4 Nano models: 1B and 350m versions
2025-10-28T15:29:41
https://huggingface.co/collections/ibm-granite/granite-40-nano-language-models
ApprehensiveAd3629
huggingface.co
1970-01-01T00:00:00
0
{}
1oichb7
false
null
t3_1oichb7
/r/LocalLLaMA/comments/1oichb7/granite_40_nano_language_models/
false
false
default
221
{'enabled': False, 'images': [{'id': 'IWIrfsaMSUG5JLRfVdW-aDvE5706Tdr6bIFsDJelbBQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IWIrfsaMSUG5JLRfVdW-aDvE5706Tdr6bIFsDJelbBQ.png?width=108&crop=smart&auto=webp&s=d9caa5384915f208f0a5bb5bbff27304712577f8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IWIrfsaMSUG5JLRfVdW-aDvE5706Tdr6bIFsDJelbBQ.png?width=216&crop=smart&auto=webp&s=fb6b1b89c6cd6e2ba72badeebf72746102f6e00e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IWIrfsaMSUG5JLRfVdW-aDvE5706Tdr6bIFsDJelbBQ.png?width=320&crop=smart&auto=webp&s=8eac50a0e995f8f5d53b331c51f4e69b11f0a953', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IWIrfsaMSUG5JLRfVdW-aDvE5706Tdr6bIFsDJelbBQ.png?width=640&crop=smart&auto=webp&s=3eae6a377c52b823a98de991ed339474596e018d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IWIrfsaMSUG5JLRfVdW-aDvE5706Tdr6bIFsDJelbBQ.png?width=960&crop=smart&auto=webp&s=2e8ce086184746e33e73b82870c41b90dc05cc00', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IWIrfsaMSUG5JLRfVdW-aDvE5706Tdr6bIFsDJelbBQ.png?width=1080&crop=smart&auto=webp&s=de666f28a72d0168b1332350e0d5881cf51e0af7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IWIrfsaMSUG5JLRfVdW-aDvE5706Tdr6bIFsDJelbBQ.png?auto=webp&s=6c2d8e5b8892488e92ca46ea76dd7e65bb042c4c', 'width': 1200}, 'variants': {}}]}
Sparse Adaptive Attention “MoE”: How I Solved OpenAI’s $650B Problem With a £700 GPU
180
2025-10-28T15:07:09
https://medium.com/@hyborian_/sparse-adaptive-attention-moe-how-i-solved-openais-650b-problem-with-a-700-gpu-343f47b2d6c1
EconomicConstipator
medium.com
1970-01-01T00:00:00
0
{}
1oibvz1
false
null
t3_1oibvz1
/r/LocalLLaMA/comments/1oibvz1/sparse_adaptive_attention_moe_how_i_solved/
false
false
https://external-preview…e6663269bf33f6ba
180
{'enabled': False, 'images': [{'id': 'kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=108&crop=smart&auto=webp&s=945b44680a28a67142d528bd112efea43d0c862a', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=216&crop=smart&auto=webp&s=39e40a31b8c613546c82f60f7cea57d2b703cd3d', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=320&crop=smart&auto=webp&s=12761b6b912cea3b2ff0832b22b6fba546ddbe9e', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=640&crop=smart&auto=webp&s=3e9fb94008925ef4e956ee562491c3bbbdb7b137', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=960&crop=smart&auto=webp&s=b3b83e0a611f008b9cf43a4a80e011dcc95fc512', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?width=1080&crop=smart&auto=webp&s=193b2d854d49cd74fbe7a2b3552ff1296165f60d', 'width': 1080}], 'source': {'height': 721, 'url': 'https://external-preview.redd.it/kt6Nre_n4gaw93bJ1Gb2EvMMeQYgdiVYcfIxBFW1mNk.png?auto=webp&s=d0733146c382dfab1815a5080689d3e8c9ed381c', 'width': 1200}, 'variants': {}}]}
Will the AMD Ryzen™ AI Max+ 395 --EVO-X2 AI Mini PC -- 128 GB Ram hold its value of around 1.8k in two years time?
0
Hello, I am looking into purchasing this Strix Halo. Do you guys think the value of this will significantly depreciate? Or remain relatively stable?
2025-10-28T15:05:35
https://www.reddit.com/r/LocalLLaMA/comments/1oibuio/will_the_amd_ryzen_ai_max_395_evox2_ai_mini_pc/
Excellent_Koala769
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oibuio
false
null
t3_1oibuio
/r/LocalLLaMA/comments/1oibuio/will_the_amd_ryzen_ai_max_395_evox2_ai_mini_pc/
false
false
self
0
null
Waiting for an Unsloth GGUF for MiniMax-M2!
34
Unsloth has already put MiniMax-M2 on Hugging Face! That means a GGUF version could arrive very soon. In other words, we might not be far from truly accessible local use. [https://huggingface.co/unsloth/MiniMax-M2](https://huggingface.co/unsloth/MiniMax-M2)
2025-10-28T14:45:00
https://www.reddit.com/r/LocalLLaMA/comments/1oibaz2/waiting_for_an_unsloth_guff_for_minimaxm2/
Ok_Ninja7526
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oibaz2
false
null
t3_1oibaz2
/r/LocalLLaMA/comments/1oibaz2/waiting_for_an_unsloth_guff_for_minimaxm2/
false
false
self
34
{'enabled': False, 'images': [{'id': 'uCvcc2RWO0H9PVLceOLTGkLahDzn5_oCwnwkcokW_d8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uCvcc2RWO0H9PVLceOLTGkLahDzn5_oCwnwkcokW_d8.png?width=108&crop=smart&auto=webp&s=1307032fea9aaa7cc0265cdebdccc7b6e277e805', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uCvcc2RWO0H9PVLceOLTGkLahDzn5_oCwnwkcokW_d8.png?width=216&crop=smart&auto=webp&s=522858a1425c701621c380f4a522e06e6944cda8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uCvcc2RWO0H9PVLceOLTGkLahDzn5_oCwnwkcokW_d8.png?width=320&crop=smart&auto=webp&s=bff253318c4dd7cb642eee1859b161760130d211', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uCvcc2RWO0H9PVLceOLTGkLahDzn5_oCwnwkcokW_d8.png?width=640&crop=smart&auto=webp&s=c08b718252c385564c72c346594bad9d61f66c59', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uCvcc2RWO0H9PVLceOLTGkLahDzn5_oCwnwkcokW_d8.png?width=960&crop=smart&auto=webp&s=36eb51efddb3cbb6047891db730f380004e4b0d6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uCvcc2RWO0H9PVLceOLTGkLahDzn5_oCwnwkcokW_d8.png?width=1080&crop=smart&auto=webp&s=a36ea8995a3342d6a0aa28918a941dbb0c4629e0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uCvcc2RWO0H9PVLceOLTGkLahDzn5_oCwnwkcokW_d8.png?auto=webp&s=fb6cfb1fa2638166aab3496cedce58f9952584d6', 'width': 1200}, 'variants': {}}]}
Public Service Announcement - The bots are winning because they’re boring. Don’t help them.
0
https://preview.redd.it/…fe72dcdd4da918
2025-10-28T14:14:02
https://www.reddit.com/r/LocalLLaMA/comments/1oiaibf/public_service_announcement_the_bots_are_winning/
researchAmericanAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oiaibf
false
null
t3_1oiaibf
/r/LocalLLaMA/comments/1oiaibf/public_service_announcement_the_bots_are_winning/
false
false
https://b.thumbs.redditm…15Q12Eo1W93E.jpg
0
null
11 problems nobody talks about building Agents (and how to approach them)
2
[removed]
2025-10-28T14:11:44
https://composio.dev/blog/11-problems-i-have-noticed-building-agents-(and-fixes-nobody-talks-about)
anmolbaranwal
composio.dev
1970-01-01T00:00:00
0
{}
1oiag7f
false
null
t3_1oiag7f
/r/LocalLLaMA/comments/1oiag7f/11_problems_nobody_talks_about_building_agents/
false
false
default
2
null
The vLLM team's daily life be like:
352
A massive shout-out to the vLLM team for being the heroes holding it all together so we can actually run all these amazing new models. And, of course, a huge thank you to all the open-source teams like DeepSeek, Qwen, Kimi, and so many others. You are all pushing the entire field forward.
2025-10-28T14:03:17
https://v.redd.it/lw255camzuxf1
nekofneko
v.redd.it
1970-01-01T00:00:00
0
{}
1oia8fi
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/lw255camzuxf1/DASHPlaylist.mpd?a=1764252209%2CMjFkMjRhZjhkODliNjdmNzA3Nzk0M2EzYjY3OTA3Yjk3MzE5M2FkYTU0OWExZjMwNmNkYjcwYWExOGVjMzJmYw%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/lw255camzuxf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 960, 'hls_url': 'https://v.redd.it/lw255camzuxf1/HLSPlaylist.m3u8?a=1764252209%2CMTAxN2EzMDY4MDg3NzE1MWQyOWNlZGQ2YTBiYzY0ZDI4ZGFiMzE2NjIzN2JhOWM5MTQzZWEyMDM2OGJhNTg4OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lw255camzuxf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1oia8fi
/r/LocalLLaMA/comments/1oia8fi/the_vllm_teams_daily_life_be_like/
false
false
https://external-preview…d92b65b0c37ad69b
352
{'enabled': False, 'images': [{'id': 'ZDF3MmtiYW16dXhmMWptouG6uHo-mrPzGurb2qCOnKrlpr9yhnl7mMdksMxF', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/ZDF3MmtiYW16dXhmMWptouG6uHo-mrPzGurb2qCOnKrlpr9yhnl7mMdksMxF.png?width=108&crop=smart&format=pjpg&auto=webp&s=9b3f58cf55fff175293813b36f4fc3eccc596ee7', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/ZDF3MmtiYW16dXhmMWptouG6uHo-mrPzGurb2qCOnKrlpr9yhnl7mMdksMxF.png?width=216&crop=smart&format=pjpg&auto=webp&s=d7bd52206d25071d346630fe5469f6fb5b715090', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/ZDF3MmtiYW16dXhmMWptouG6uHo-mrPzGurb2qCOnKrlpr9yhnl7mMdksMxF.png?width=320&crop=smart&format=pjpg&auto=webp&s=93650e46969f60bc612f766b8b7c33df5a7577da', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/ZDF3MmtiYW16dXhmMWptouG6uHo-mrPzGurb2qCOnKrlpr9yhnl7mMdksMxF.png?width=640&crop=smart&format=pjpg&auto=webp&s=8b63be0c644ebb893fdc0c7cff0eb0a67eb295e3', 'width': 640}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/ZDF3MmtiYW16dXhmMWptouG6uHo-mrPzGurb2qCOnKrlpr9yhnl7mMdksMxF.png?format=pjpg&auto=webp&s=311da1df0c82ab50b34cad08d975489a90bce72e', 'width': 720}, 'variants': {}}]}
GLM-4.6 on fresh SWE-bench–style tasks collected in September 2025
66
Hi all, I'm Anton from Nebius. We’ve updated the **SWE-rebench** leaderboard with model evaluations of GLM-4.6 on 49 fresh tasks. Key takeaways: * **GLM 4.6** joins the leaderboard and is now the **best open-source performer**, achieving **37.0 % resolved rate** and **42.9 % pass@5**, surpassing **GLM 4.5**. Check out the full leaderboard and insights here, and feel free to reach out if you’d like to see other models evaluated.
2025-10-28T14:02:30
https://swe-rebench.com/?insight=sep_2025
CuriousPlatypus1881
swe-rebench.com
1970-01-01T00:00:00
0
{}
1oia7pp
false
null
t3_1oia7pp
/r/LocalLLaMA/comments/1oia7pp/glm46_on_fresh_swebenchstyle_tasks_collected_in/
false
false
default
66
null
Wanted to ask a question about models that can be used to convert my Figma designs into html + css
1
So hey there, I'm a backend developer and a GameDev student. I wanted to ask which mid/low-end model can be used to convert my Figma designs into HTML + CSS. I don't really want to write HTML + CSS myself (I want to save time), and since most frontend coding is "almost dead" (or so I think), I wanted to ask this question!
2025-10-28T13:53:33
https://www.reddit.com/r/LocalLLaMA/comments/1oi9zj0/wanted_to_ask_a_question_about_models_that_can_be/
TheWeebSamurai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi9zj0
false
null
t3_1oi9zj0
/r/LocalLLaMA/comments/1oi9zj0/wanted_to_ask_a_question_about_models_that_can_be/
false
false
self
1
null
Looking for local LLM setup for coding (low-end system)
1
[removed]
2025-10-28T13:51:00
https://www.reddit.com/r/LocalLLaMA/comments/1oi9x94/looking_for_local_llm_setup_for_coding_lowend/
MistralBug
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi9x94
false
null
t3_1oi9x94
/r/LocalLLaMA/comments/1oi9x94/looking_for_local_llm_setup_for_coding_lowend/
false
false
self
1
null
50-minute screencast version of a lecture I gave on Model Quantization to a graduate AI & Deep Learning class
59
2025-10-28T13:28:38
https://www.youtube.com/watch?v=ze0Xq5QMvmA
michaelmalak
youtube.com
1970-01-01T00:00:00
0
{}
1oi9d43
false
{'oembed': {'author_name': 'Michael Malak', 'author_url': 'https://www.youtube.com/@michaelmalak', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ze0Xq5QMvmA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="ModelQuantization20251017"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ze0Xq5QMvmA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'ModelQuantization20251017', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1oi9d43
/r/LocalLLaMA/comments/1oi9d43/50minute_screencast_version_of_a_lecture_i_gave/
false
false
default
59
{'enabled': False, 'images': [{'id': 'rTJa8Nhli5TnEtPcCyezMrozQD4vuVxVpOAoemUSml4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rTJa8Nhli5TnEtPcCyezMrozQD4vuVxVpOAoemUSml4.jpeg?width=108&crop=smart&auto=webp&s=09311734124694f85f86db39c0b6cabc1473da35', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rTJa8Nhli5TnEtPcCyezMrozQD4vuVxVpOAoemUSml4.jpeg?width=216&crop=smart&auto=webp&s=8f0ced8ff63b5d9928c9057d687997a11edfe3ce', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rTJa8Nhli5TnEtPcCyezMrozQD4vuVxVpOAoemUSml4.jpeg?width=320&crop=smart&auto=webp&s=85dd88668fea8b054bcbd26215365a38c320197b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rTJa8Nhli5TnEtPcCyezMrozQD4vuVxVpOAoemUSml4.jpeg?auto=webp&s=b33cca446b5f1db5ab46854e32c43a6b581b12bc', 'width': 480}, 'variants': {}}]}
Need help — Best local LLM for coding assistant
1
[removed]
2025-10-28T13:25:00
https://www.reddit.com/r/LocalLLaMA/comments/1oi99ye/need_help_best_local_llm_for_coding_assistant/
yaalli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi99ye
false
null
t3_1oi99ye
/r/LocalLLaMA/comments/1oi99ye/need_help_best_local_llm_for_coding_assistant/
false
false
self
1
null
What are some hyperparameters to tune for QwenVL models?
2
If my input files are PDFs, what are some hyperparameters I can play with to find the best settings? E.g., DPI value, temperature, etc. Would beam search be good?
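The knobs split into preprocessing (how the PDF pages are rasterised) and decoding (sampling vs. beams). A hedged sketch of sweeping both; `run_qwenvl` is a placeholder for your processor + `model.generate(**decode)` call:

```python
from itertools import product

from pdf2image import convert_from_path

# Preprocessing knob: render DPI (higher = sharper text, but more vision tokens)
dpi_values = [150, 200, 300]

# Decoding knobs: beam search suits OCR-style extraction; low-temperature
# sampling suits freer summarisation (the two are alternatives, not combined)
decode_settings = [
    {"do_sample": False, "num_beams": 3},
    {"do_sample": True, "temperature": 0.2, "top_p": 0.9},
]

for dpi, decode in product(dpi_values, decode_settings):
    pages = convert_from_path("input.pdf", dpi=dpi)
    # output = run_qwenvl(pages, **decode)  # placeholder for your model call
    print(f"dpi={dpi}, decode={decode}")
```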
2025-10-28T13:16:29
https://www.reddit.com/r/LocalLLaMA/comments/1oi92ux/what_are_some_hyperparameters_to_tune_for_qwenvl/
Ok_Television_9000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi92ux
false
null
t3_1oi92ux
/r/LocalLLaMA/comments/1oi92ux/what_are_some_hyperparameters_to_tune_for_qwenvl/
false
false
self
2
null
HF Space to help create the -ot flags in llama.cpp
26
Hi! Mainly because I was frustrated when manually assigning the layers with the -ot flag in llama.cpp and ik\_llama.cpp, where increasing maybe just 1 layer on an earlier GPU meant renumbering the layers on all the rest of the GPUs, I created a Hugging Face Space to help with that. It lets you select the number of GPUs, the size of the model weights, and the number of layers, and it automatically tries to assign how many layers would fit in your GPUs **on an empty context.** Then, if you want to fit more context, either switch to manual and reduce 1-2 layers per GPU, or increase the size in GB of the model a bit. Example: I want to load [Bartowski GLM-4.6](https://huggingface.co/bartowski/zai-org_GLM-4.6-GGUF) in Q6 on my rig (rtx6000, 2x5090, 4x3090). I have 256GB VRAM and the quant takes 294 GB in Q6, as you can see on HF if you go to the folder: [https://huggingface.co/bartowski/zai-org\_GLM-4.6-GGUF/tree/main/zai-org\_GLM-4.6-Q6\_K](https://huggingface.co/bartowski/zai-org_GLM-4.6-GGUF/tree/main/zai-org_GLM-4.6-Q6_K) https://preview.redd.it/cjc7oe2jeuxf1.png?width=798&format=png&auto=webp&s=17433d663ad544eafa7547b47a7d1b917d069837 And GLM-4.6 has 92 layers, as you can see here: [https://huggingface.co/zai-org/GLM-4.6/blob/main/config.json#L31](https://huggingface.co/zai-org/GLM-4.6/blob/main/config.json#L31) So fill the settings as such: https://preview.redd.it/qdyznyd7euxf1.png?width=3418&format=png&auto=webp&s=75b3b577c4b9058ce6409be57d82a6b0db40a6e8 And that actually loads using 2048 context, and the GPUs are all at almost 100% VRAM usage, which is what we want. https://preview.redd.it/qcf0ixxbeuxf1.png?width=1670&format=png&auto=webp&s=a62cfeec20a34028e8e6fbe0b7a9f99b15bb8442 If I reduce one layer per GPU to quickly allow more VRAM for ctx, I can now load 32K context. But checking the GPU usage, I might be able to assign one more layer to the rtx6000. So the final command would be:
`CUDA_VISIBLE_DEVICES=2,0,6,1,3,4,5 ./build/bin/llama-server \`
`--model /mnt/llms/models/bartowski/zai-org_GLM-4.6-GGUF/zai-org_GLM-4.6-Q6_K/zai-org_GLM-4.6-Q6_K-00001-of-00008.gguf \`
`--alias glm-4.6 \`
`--ctx-size 32768 \`
`-ngl 99 \`
`--host` [`0.0.0.0`](http://0.0.0.0) `\`
`--port 5000 \`
`-ot "blk\.(3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30)\.ffn_.*=CUDA0" \`
`-ot "blk\.(31|32|33|34|35|36|37|38)\.ffn_.*=CUDA1" \`
`-ot "blk\.(39|40|41|42|43|44|45|46)\.ffn_.*=CUDA2" \`
`-ot "blk\.(47|48|49|50|51)\.ffn_.*=CUDA3" \`
`-ot "blk\.(52|53|54|55|56)\.ffn_.*=CUDA4" \`
`-ot "blk\.(57|58|59|60|61)\.ffn_.*=CUDA5" \`
`-ot "blk\.(62|63|64|65|66)\.ffn_.*=CUDA6" --cpu-moe`
2025-10-28T12:07:11
https://www.reddit.com/r/LocalLLaMA/comments/1oi7k25/hf_space_to_help_create_the_ot_flags_in_llamacpp/
bullerwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi7k25
false
null
t3_1oi7k25
/r/LocalLLaMA/comments/1oi7k25/hf_space_to_help_create_the_ot_flags_in_llamacpp/
false
false
https://b.thumbs.redditm…9eGVo3IOz89g.jpg
26
{'enabled': False, 'images': [{'id': 'g9TtSkJliQx_HIwoIut_ECyFVGHzqlshzt1AT9ZxIjA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/g9TtSkJliQx_HIwoIut_ECyFVGHzqlshzt1AT9ZxIjA.png?width=108&crop=smart&auto=webp&s=c6cc9691888f11c4283d254511d6af4ada603cd3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/g9TtSkJliQx_HIwoIut_ECyFVGHzqlshzt1AT9ZxIjA.png?width=216&crop=smart&auto=webp&s=7dbf07aefc05b51e6dca6c1f34c3f81505d9525d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/g9TtSkJliQx_HIwoIut_ECyFVGHzqlshzt1AT9ZxIjA.png?width=320&crop=smart&auto=webp&s=3156cded5821bbefdf529de9c7d23789553bcb82', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/g9TtSkJliQx_HIwoIut_ECyFVGHzqlshzt1AT9ZxIjA.png?width=640&crop=smart&auto=webp&s=711e4dcc38b2e4a206b6b1d8e5ce67eae349c3be', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/g9TtSkJliQx_HIwoIut_ECyFVGHzqlshzt1AT9ZxIjA.png?width=960&crop=smart&auto=webp&s=3fc12f88633bc6b54268827d710ff887caceb595', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/g9TtSkJliQx_HIwoIut_ECyFVGHzqlshzt1AT9ZxIjA.png?width=1080&crop=smart&auto=webp&s=0adabd0a5d44d3962f89a403fdbf4a3bd35afc9f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/g9TtSkJliQx_HIwoIut_ECyFVGHzqlshzt1AT9ZxIjA.png?auto=webp&s=7321a8843b594697a947ec5b1321e21b6d9f3fdb', 'width': 1200}, 'variants': {}}]}
Hey AI devs - built a quick survey to validate my LLM eval tool idea (takes 2 mins, your thoughts?)
1
[removed]
2025-10-28T11:37:40
https://www.reddit.com/r/LocalLLaMA/comments/1oi6ynl/hey_ai_devs_built_a_quick_survey_to_validate_my/
Consistent-Wish9363
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi6ynl
false
null
t3_1oi6ynl
/r/LocalLLaMA/comments/1oi6ynl/hey_ai_devs_built_a_quick_survey_to_validate_my/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YDmzPyBZ1JJ_HI13k65k6MQ4xHDSYegyl2Seh-tVnHQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YDmzPyBZ1JJ_HI13k65k6MQ4xHDSYegyl2Seh-tVnHQ.png?width=108&crop=smart&auto=webp&s=fe9c9841aff723fda462158215c3809036a2df47', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YDmzPyBZ1JJ_HI13k65k6MQ4xHDSYegyl2Seh-tVnHQ.png?width=216&crop=smart&auto=webp&s=93947ed9e8ab424b7ef70ef2e6c3e6487072cf42', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/YDmzPyBZ1JJ_HI13k65k6MQ4xHDSYegyl2Seh-tVnHQ.png?width=320&crop=smart&auto=webp&s=8cc53030cd9722fc405be85d4975b0da3c0861e0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/YDmzPyBZ1JJ_HI13k65k6MQ4xHDSYegyl2Seh-tVnHQ.png?width=640&crop=smart&auto=webp&s=11fc45cea215edbbf0094923bc1e313de497d469', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/YDmzPyBZ1JJ_HI13k65k6MQ4xHDSYegyl2Seh-tVnHQ.png?width=960&crop=smart&auto=webp&s=644f92b08dd324f15388056fdd45173e8329e799', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/YDmzPyBZ1JJ_HI13k65k6MQ4xHDSYegyl2Seh-tVnHQ.png?width=1080&crop=smart&auto=webp&s=86e170f9a31e54b86bea8f36c256e79c3fde222d', 'width': 1080}], 'source': {'height': 567, 'url': 'https://external-preview.redd.it/YDmzPyBZ1JJ_HI13k65k6MQ4xHDSYegyl2Seh-tVnHQ.png?auto=webp&s=c5076ad64ff49e97ed15d983f57a582e5f2252c4', 'width': 1080}, 'variants': {}}]}
OSS alternative to Open WebUI - ChatGPT-like UI, API and CLI
68
2025-10-28T10:50:29
https://github.com/ServiceStack/llms
mythz
github.com
1970-01-01T00:00:00
0
{}
1oi63n6
false
null
t3_1oi63n6
/r/LocalLLaMA/comments/1oi63n6/oss_alternative_to_open_webui_chatgptlike_ui_api/
false
false
default
68
{'enabled': False, 'images': [{'id': 'NbBv8AZ8_FKSnCg3gr7veZ2x9ORPuKnCYj3fZNiyQ4g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NbBv8AZ8_FKSnCg3gr7veZ2x9ORPuKnCYj3fZNiyQ4g.png?width=108&crop=smart&auto=webp&s=715db9ee70430cc319352e08ef70b1e9b46cfc13', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NbBv8AZ8_FKSnCg3gr7veZ2x9ORPuKnCYj3fZNiyQ4g.png?width=216&crop=smart&auto=webp&s=80378911707a34b608ba4af96677d583a2dd8e2a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NbBv8AZ8_FKSnCg3gr7veZ2x9ORPuKnCYj3fZNiyQ4g.png?width=320&crop=smart&auto=webp&s=38764cb0f145373d8222fd65592b5c2a234bf69a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NbBv8AZ8_FKSnCg3gr7veZ2x9ORPuKnCYj3fZNiyQ4g.png?width=640&crop=smart&auto=webp&s=fa75645d12e8474d1f573ac3f747ada63b172124', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NbBv8AZ8_FKSnCg3gr7veZ2x9ORPuKnCYj3fZNiyQ4g.png?width=960&crop=smart&auto=webp&s=00934ff338f5af80ac2773f2e8b31999a2e3c846', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NbBv8AZ8_FKSnCg3gr7veZ2x9ORPuKnCYj3fZNiyQ4g.png?width=1080&crop=smart&auto=webp&s=ff3829e7efc0c0864d0b925da870074e960bdb61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NbBv8AZ8_FKSnCg3gr7veZ2x9ORPuKnCYj3fZNiyQ4g.png?auto=webp&s=7a7d958036f0b4437b27305654a465984dc548db', 'width': 1200}, 'variants': {}}]}
reduce cost on livekit voice agent by using free models on livekit
1
Currently, LiveKit only supports proprietary models for STT, LLM, and TTS. I want to use Whisper for STT, which would not only reduce cost but also let me run it locally for faster calls. The problem is that Whisper cannot work in real time. I plan to tackle that by writing a function that records and sends STT data in chunks whenever voice activity is detected (LiveKit handles this automatically using the Silero VAD and turn detection). I also want to replace the OpenAI LLM for text generation with either Llama through a Groq API endpoint or Ollama; currently LiveKit supports neither. Is there a workaround? I currently have no idea what can be done for TTS, and if needed I plan to stay on the paid version if it provides better quality than any free service.
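The direction I'm exploring for the LLM swap: both Groq and Ollama expose OpenAI-compatible endpoints, so any client that lets you override `base_url` should work. A minimal, untested sketch — the endpoint URLs and model name are the documented defaults, and whether LiveKit's plugin accepts a `base_url` override is something I still need to verify:

```python
# Minimal sketch: drive Llama via Groq's hosted endpoint or a local
# Ollama server through the standard OpenAI client. Endpoint URLs and
# the model name are assumptions to check against current docs.
from openai import OpenAI

groq = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_GROQ_KEY",  # placeholder
)

ollama = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

resp = ollama.chat.completions.create(
    model="llama3.1",  # assumes `ollama pull llama3.1` was run beforehand
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
)
print(resp.choices[0].message.content)
```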
2025-10-28T10:42:03
https://www.reddit.com/r/LocalLLaMA/comments/1oi5y5y/reduce_cost_on_livekit_voice_agent_by_using_free/
Ruru_mimi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5y5y
false
null
t3_1oi5y5y
/r/LocalLLaMA/comments/1oi5y5y/reduce_cost_on_livekit_voice_agent_by_using_free/
false
false
self
1
null
Looking to split my AI workload and was discussing with AI and came up with this, what are your thoughts.
0
Apologies in advance if this is the wrong sub... I already have a decent AI rig: Ryzen 9 9900X, 96GB RAM, RTX 5090 FE. What I want to do seems like it would have this rig running flat out most of the time, and that's not what I want, since I'd also like to use it for dev work, etc.

**What I want to do:** I'm creating a data model/schema, which I could do manually, but that would take months if not years by myself, so I want to build a team of agents to work through some of the laborious parts. For example, 4,500 fields result in a complete universe of 179,500 possible end states according to the data dictionary I built. I want to cut this down to a core generic structure that is fit for purpose (not the whole universe, just a subset), and I'd like to do that using AI.

**So I'm looking at:** AI research & analysis (AI/me), workflow orchestration (n8n), code generation (Claude Code + Cursor), data storage (Apache Doris).

AI suggests I could split the load:

**SFFPC (Ryzen 9 9900X + RTX 5090 FE)** = *frontend / interactive / orchestrator*

**Threadripper Pro 3000 series workstation** = *backend / AI / data / mapping node*

I have the chance to get a Threadripper Pro 3000 with 128GB RAM and an RTX 3090 for £1000-1200. My idea would be to strip out the RTX 3090 and sell it, then replace it with an RTX A4000 (16GB Ampere); I also have a spare RTX A2000 (12GB) on the shelf. The AI suggests I can split the workload: anything needing the larger VRAM goes on the SFFPC, and anything I want to run 24/7 goes on the Threadripper, which would sip power at (280W + 140W + 70W). The reason I'd go A4000 is the slightly bigger VRAM per card if needed, instead of 3x RTX A2000 12GB.

So I could use it as a **"data-science staging server"** where I run heavy ETL / schema-mapping / AI-surveillance jobs overnight, or create a small-scale **"AI micro-cloud"**: a zero-latency personal compute mesh where I choose the tasks it runs.

Does this sound feasible? Before I buy the Threadripper workstation (I may do so anyway just to strip it for parts), I want to make sure the plan isn't just AI hallucinating and being the "yes" bot to my queries.
2025-10-28T10:39:41
https://www.reddit.com/r/LocalLLaMA/comments/1oi5wnc/looking_to_split_my_ai_workload_and_was/
PsychologicalWeird
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5wnc
false
null
t3_1oi5wnc
/r/LocalLLaMA/comments/1oi5wnc/looking_to_split_my_ai_workload_and_was/
false
false
self
0
null
Which LLM is best for analyzing chat conversations ?
0
Hey everyone, I’m building **ChatSens**, an AI web app that analyzes chat transcripts (WhatsApp, Instagram, etc.) to detect interest levels, tone, and communication patterns. I’m currently choosing between **GPT-4o**, **Claude 3.5**, **Gemini 2.5 Pro**, and **GPT-OSS-120B** for the main analysis model. Looking for suggestions based on **accuracy, speed, and cost** for structured JSON output. Which model would you pick for this kind of relationship/communication analysis?
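Whichever model wins, I plan to compare the candidates behind a single OpenAI-compatible harness so accuracy, speed, and cost are measured on the same transcripts. A minimal sketch — the schema and model name are placeholders, not ChatSens's real ones:

```python
# Minimal sketch of a structured-JSON test harness for comparing models
# on the same transcript. Uses OpenAI's JSON mode; swap base_url/model
# to test Groq, vLLM, or other OpenAI-compatible backends.
import json
from openai import OpenAI

client = OpenAI()

# Illustrative schema; adjust fields to whatever the app actually needs.
INSTRUCTIONS = (
    "Analyze the chat transcript and reply with JSON only, shaped like: "
    '{"interest_level": 7, "tone": "warm", "patterns": ["quick replies"]}\n\n'
    "Transcript:\n"
)

def analyze(transcript: str, model: str = "gpt-4o") -> dict:
    resp = client.chat.completions.create(
        model=model,
        response_format={"type": "json_object"},  # JSON mode: guarantees parseable output
        messages=[{"role": "user", "content": INSTRUCTIONS + transcript}],
    )
    return json.loads(resp.choices[0].message.content)
```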
2025-10-28T10:27:34
https://www.reddit.com/r/LocalLLaMA/comments/1oi5p3m/which_llm_is_best_for_analyzing_chat_conversations/
Sufficient_Ear_8462
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5p3m
false
null
t3_1oi5p3m
/r/LocalLLaMA/comments/1oi5p3m/which_llm_is_best_for_analyzing_chat_conversations/
false
false
self
0
null
Chatbot Warning - Chinese model powering a sophisticated bot account that maintains technical credibility.
0
https://preview.redd.it/…bot commentators
2025-10-28T10:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1oi5jit/chatbot_warning_chinese_model_powering_a/
researchAmericanAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5jit
false
null
t3_1oi5jit
/r/LocalLLaMA/comments/1oi5jit/chatbot_warning_chinese_model_powering_a/
false
false
https://b.thumbs.redditm…L4Wl5qs3trvM.jpg
0
null
What live profiling features would actually help you train or fine-tune models more efficiently?
4
I have been working on TraceML, a lightweight profiler that shows memory and timing live during PyTorch training. Repo: https://github.com/traceopt-ai/traceml My goal is not to replace Nsight or the PyTorch Profiler, but to make live observability lightweight and useful: something you can keep running every day without slowing training down. I am exploring what to build next and would love to know what matters most to you (and what's missing from current tools):
• Multi-GPU / multi-process view: utilization, memory, and sync overheads across devices
• Throughput metrics: tokens/sec, batches/sec, or FLOPs efficiency
• Gradient stability tracking: detect spikes, vanishing gradients, or divergence early
• Memory evolution curves: see how activation/grad memory grows over steps
• Energy or cost metrics: wattage, $ per run, or energy per token
• Simple alerts, such as OOM risk or performance-drop detection
The focus is to keep it lightweight and easy to use: no heavy trace dumps or configs, just real-time insights you can actually use mid-training. What do you think would be most useful (or hardest to get today)? Are there any live metrics or signals you wish existed but cannot get easily right now? Any feedback or feature votes would really help shape where I take this next.
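To make "memory evolution curves" concrete, the raw signal is roughly per-step CUDA memory, which PyTorch already exposes. A bare-bones sketch (illustrative only, not TraceML's actual hooks):

```python
# Log allocated/peak CUDA memory each training step; plotting these
# per-step values over time gives the memory evolution curve.
import torch

def log_step_memory(step: int) -> None:
    if not torch.cuda.is_available():
        return
    alloc = torch.cuda.memory_allocated() / 2**20      # MiB currently allocated
    peak = torch.cuda.max_memory_allocated() / 2**20   # MiB peak since reset
    print(f"step {step}: alloc={alloc:.0f} MiB, peak={peak:.0f} MiB")

# usage inside a training loop:
# for step, batch in enumerate(loader):
#     loss = model(batch).loss
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
#     log_step_memory(step)
```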
2025-10-28T10:16:01
https://www.reddit.com/r/LocalLLaMA/comments/1oi5i90/what_live_profiling_features_would_actually_help/
traceml-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5i90
false
null
t3_1oi5i90
/r/LocalLLaMA/comments/1oi5i90/what_live_profiling_features_would_actually_help/
false
false
self
4
{'enabled': False, 'images': [{'id': 'WTCDiTnxp5wH3MPtDooDyqrgBRGTKlkIxaX6G_Y_SSs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WTCDiTnxp5wH3MPtDooDyqrgBRGTKlkIxaX6G_Y_SSs.png?width=108&crop=smart&auto=webp&s=12df0178a50aaf53db9a02d03e183c3a14fce947', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WTCDiTnxp5wH3MPtDooDyqrgBRGTKlkIxaX6G_Y_SSs.png?width=216&crop=smart&auto=webp&s=862ac3c01d2d78d0ecfcba12dfb5d161f565b6d1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WTCDiTnxp5wH3MPtDooDyqrgBRGTKlkIxaX6G_Y_SSs.png?width=320&crop=smart&auto=webp&s=82e4dfcbec321938d48e99f84bf26debfb537d89', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WTCDiTnxp5wH3MPtDooDyqrgBRGTKlkIxaX6G_Y_SSs.png?width=640&crop=smart&auto=webp&s=1b011fc8e797cc6a77061fa189e57d0d7b723615', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WTCDiTnxp5wH3MPtDooDyqrgBRGTKlkIxaX6G_Y_SSs.png?width=960&crop=smart&auto=webp&s=aea99c2e3817de0296ef31e5d4597ee6b04828fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WTCDiTnxp5wH3MPtDooDyqrgBRGTKlkIxaX6G_Y_SSs.png?width=1080&crop=smart&auto=webp&s=391aaec66076bef295b9acff51ba153fa7a6783a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WTCDiTnxp5wH3MPtDooDyqrgBRGTKlkIxaX6G_Y_SSs.png?auto=webp&s=29f86dd6189fb5fe3b48bb8cdd63327c10d8996e', 'width': 1200}, 'variants': {}}]}
Collection of system prompts from widely used LLM-based services
3
This GitHub repo, [https://github.com/zabri/system\_prompts](https://github.com/zabri/system_prompts), collects publicly exposed system prompts from popular AI services, including models from OpenAI, Anthropic, Grok, Gemini, and more. These system prompts are essentially the hidden instructions that define how each model behaves: its tone, reasoning style, boundaries, and even how it responds to sensitive topics.
2025-10-28T10:09:26
https://www.reddit.com/r/LocalLLaMA/comments/1oi5e7b/collection_of_system_prompts_from_widely_used/
psoj318
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5e7b
false
null
t3_1oi5e7b
/r/LocalLLaMA/comments/1oi5e7b/collection_of_system_prompts_from_widely_used/
false
false
self
3
{'enabled': False, 'images': [{'id': 'aAPwin4lLwDjFvOrR3jC0aza4NuI545xab9pgSZSQoE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aAPwin4lLwDjFvOrR3jC0aza4NuI545xab9pgSZSQoE.png?width=108&crop=smart&auto=webp&s=d28d37de93273c77ec5045a2fc881929a4e4cea4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aAPwin4lLwDjFvOrR3jC0aza4NuI545xab9pgSZSQoE.png?width=216&crop=smart&auto=webp&s=245cde23020ced208b996295799013483ef70dac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aAPwin4lLwDjFvOrR3jC0aza4NuI545xab9pgSZSQoE.png?width=320&crop=smart&auto=webp&s=a0bee9db2d664b807b99e2d7d358212da95e2d03', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aAPwin4lLwDjFvOrR3jC0aza4NuI545xab9pgSZSQoE.png?width=640&crop=smart&auto=webp&s=60e6531196cf7b98626aa1e11131fb8c5e34fa6f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aAPwin4lLwDjFvOrR3jC0aza4NuI545xab9pgSZSQoE.png?width=960&crop=smart&auto=webp&s=be6e0281d38d268967255ddb38b728a41d3d2164', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aAPwin4lLwDjFvOrR3jC0aza4NuI545xab9pgSZSQoE.png?width=1080&crop=smart&auto=webp&s=1dbbf997548f273c491bce72ebcc4feeb3b9c5c9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aAPwin4lLwDjFvOrR3jC0aza4NuI545xab9pgSZSQoE.png?auto=webp&s=c8ec40d42ce47b630694e3fa635e232446b7af14', 'width': 1200}, 'variants': {}}]}
Serve model locally vs hosted
0
I'm considering adding AI to an app for text completions and recommendations. I have an API, but I'm wondering whether it's worth setting up an inference server (and what specs it would need), or whether it would be cheaper to use an existing hosted inference service. Say it's for 100 users distributed globally. I've heard good things about vLLM, so maybe this would be a good use case for it. Currently I have 2x 3090s, which I use to help me with coding, so I could reprovision this machine and add additional GPUs.
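For the local route, the rough shape would be vLLM sharding one model across the two 3090s with tensor parallelism. A minimal sketch — the model name is a placeholder (pick whatever fits 2x24GB), and I believe vLLM also exposes the same setup as an OpenAI-compatible server through its serve entrypoint:

```python
# Minimal vLLM sketch for a 2x 3090 box: shard one model across both
# GPUs with tensor parallelism, then run a test generation.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # placeholder model
    tensor_parallel_size=2,  # split weights across the two 3090s
)
params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Complete this note: today I learned"], params)
print(outputs[0].outputs[0].text)
```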
2025-10-28T10:07:54
https://www.reddit.com/r/LocalLLaMA/comments/1oi5dcc/serve_model_locally_vs_hosted/
Blues520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5dcc
false
null
t3_1oi5dcc
/r/LocalLLaMA/comments/1oi5dcc/serve_model_locally_vs_hosted/
false
false
self
0
null
Chatbot Warning -the following post is specifically designed to trigger bot commentators
0
https://preview.redd.it/…2014c8a5fbdc45
2025-10-28T10:07:44
https://www.reddit.com/r/LocalLLaMA/comments/1oi5d8m/chatbot_warning_the_following_post_is/
researchAmericanAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5d8m
false
null
t3_1oi5d8m
/r/LocalLLaMA/comments/1oi5d8m/chatbot_warning_the_following_post_is/
false
false
https://a.thumbs.redditm…RWMtGJkPYV_4.jpg
0
null
First Time PC Builder - Please Give Advice/Improvements on my High Performance PC for local AI Fine Tuning, Occasional 3D Modelling for 3D Printing, and Compute Heavy Cybersecurity related Tasks
1
# Finalized High-Performance PC Build for Local AI Fine-Tuning

* **GPU**: 1x RTX 3090 (expandable to 2x via Slot 2; optional NVLink for 48GB pooled VRAM).
* **RAM**: Exactly 2x 32GB DDR5-6000 CL30 (64GB total, 4-slot mobo).
* **Storage**: 2TB fast NVMe (datasets/AI) + 1TB slower NVMe (OS/apps); mobo has 3x M.2 (2 used).
* **Case**: Open-air mining rig for max airflow/performance (no switch to an enclosed case; keeps temps 5–10°C lower with minimal noise impact).
* **CPU**: Ryzen 9 9950X (16-core value/performance king; x16 + x8 PCIe for dual GPUs).
* **Cooler**: Switched to Thermalright Frozen Prism 360 (360mm AIO; better cooling/value than the ARCTIC 280mm; ~35–38 dBA at AI loads with a fan curve).
* **Total Cost**: **$2,550** (single-GPU start; prices as of Oct 2025 from Amazon/Newegg/used-market scans; excl. tax/shipping).
* **Power Draw**: ~500W (1 GPU) / ~850W (2 GPUs).
* **OS Recommendation**: Ubuntu 24.04 LTS for CUDA/PyTorch stability.
* **Noise Profile**: 35–38 dBA during 24/7 fine-tuning (soft whoosh; library-quiet with a BIOS curve).

|Component|Model|Key Specs & Why It Fits|Approx. Price|
|---|---|---|---|
|**CPU**|AMD Ryzen 9 9950X|16 cores/32 threads, 5.7GHz boost, 170W TDP, 28 PCIe lanes (x16 CPU + x8 chipset for dual GPUs). Saturates data loading for QLoRA fine-tuning without overkill.|$579|
|**Motherboard**|ASUS ROG Strix X670E-E Gaming WiFi|ATX; 4x DDR5 slots; 2x PCIe x16 slots (x16 + x8 for GPUs); 3x M.2 (2x PCIe 5.0); WiFi 7 + 2.5GbE. Top VRM/BIOS for 24/7 stability. (Slot 3 unused.)|$399|
|**RAM**|2x Corsair Vengeance 32GB DDR5-6000 CL30 (CMK64GX5M2B6000C30)|64GB total; 6000 MT/s + CL30 for fast dataset access. Dual-channel (96 GB/s); expandable to 128GB+.|$199 ($99.50 each)|
|**GPU**|1x NVIDIA RTX 3090 24GB GDDR6X (used; e.g., EVGA/Asus model)|Ampere arch; 24GB VRAM for 7B–30B models (QLoRA). CUDA-optimized; add a second later (NVLink bridge ~$80 extra).|$700|
|**Storage (Fast: Datasets/AI)**|WD Black SN850X 2TB PCIe 4.0 NVMe|7,000 MB/s read/write; 1,200 TBW endurance. Blazing loads for 500GB+ datasets to avoid GPU idle.|$149|
|**Storage (OS/Apps)**|Crucial T700 1TB PCIe 5.0 NVMe|12,400 MB/s read; fast boot for Ubuntu/PyTorch/IDE. Overkill for OS but future-proof.|$139|
|**CPU Cooler**|Thermalright Frozen Prism 360 Black (non-ARGB)|360mm AIO radiator; copper cold plate; 3x TL-C12B PWM fans (up to 1850 RPM, 66 CFM); pump ~3300 RPM. Keeps the 9950X at 55–65°C sustained (49.7°C delta noise-normalized per GN); 35–38 dBA with a curve. 5-year warranty.|$57|
|**Case**|Kingwin 12-GPU Miner Frame (open-air aluminum)|Supports ATX + 2x thick 3090s (expandable to 12); 7x fan mounts; PCIe risers for spacing. Max airflow for sustained loads (no enclosed-case noise sacrifice).|$129|
|**Power Supply**|Corsair RM1000x 1000W 80+ Gold (fully modular)|Covers dual 3090s (700W) + spikes; quiet/efficient. Separate cables per GPU.|$159|
|**Extras**|2x PCIe riser cables (flexible, shielded; for GPU spacing); 4x ARCTIC P12 120mm PWM fans (for case airflow); thermal paste (pre-applied on AIO)|No slot blocking; <70°C system-wide. Risers ~$10 each.|$40 ($20 risers + $20 fans)|

**Grand Total**: **$2,550** (single GPU). **With Second GPU**: **$3,250** (+$700 for another used 3090; add NVLink if needed).

Notes:

* PSU: Two 3090s + this CPU will easily push past 1000W; aim for a 1200W+ Platinum-rated unit at minimum. Good options: EVGA SuperNOVA 1300/1600 P2 or Corsair AX1600i (expensive, but rock solid).
* SSD: Models load once into VRAM, so you don't need crazy sustained speeds, just decent sequential reads.
* GPU: redo the thermal pads and TIM.
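A rough power-budget check behind the PSU note (nameplate TDPs; the platform-overhead figure is an assumption):

```python
# Back-of-envelope PSU sizing for the dual-3090 configuration.
gpu_tdp = 350          # W, stock RTX 3090
cpu_tdp = 170          # W, Ryzen 9 9950X
platform = 150         # W, fans/drives/board/conversion losses (assumed)
sustained = 2 * gpu_tdp + cpu_tdp + platform   # 1020 W sustained
with_spikes = round(sustained * 1.3)           # ~1326 W with 30% transient margin
print(sustained, with_spikes)                  # 3090s are known for large transients
```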
2025-10-28T10:07:15
https://www.reddit.com/r/LocalLLaMA/comments/1oi5cyy/first_time_pc_builder_please_give/
realharleychu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi5cyy
false
null
t3_1oi5cyy
/r/LocalLLaMA/comments/1oi5cyy/first_time_pc_builder_please_give/
false
false
self
1
null
Org wide Private LLM suggestions
1
My startup is an investment fintech, and we want to pitch AI to our clients (max 100 users), who are mostly banks and investment firms, so we can't use external models or platforms. I have to own the model and whatever it produces as results (not literally everything, but at least I can say I control it). Questions: 1. What infra config do you suggest? I don't want to burn cash, so low/medium-price suggestions are welcome. 2. Which cloud provider do you suggest? (EU-specific is preferred.)
2025-10-28T09:50:57
https://www.reddit.com/r/LocalLLaMA/comments/1oi53o2/org_wide_private_llm_suggestions/
Accomplished-Wait727
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi53o2
false
null
t3_1oi53o2
/r/LocalLLaMA/comments/1oi53o2/org_wide_private_llm_suggestions/
false
false
self
1
null
Experiences with Aider vs. GitHub Copilot for Ryan Carson’s AI Dev Tasks?
1
Hi everyone, I’ve been trying out Ryan Carson’s ai-dev-tasks workflow ([https://github.com/snarktank/ai-dev-tasks](https://github.com/snarktank/ai-dev-tasks)), which is a neat way to structure AI-assisted feature development. The process breaks down into three steps: first creating a product requirement document (PRD), then generating a detailed task list, and finally implementing the tasks one at a time. In my experience, this workflow works really well with GitHub Copilot. Copilot is pretty good at understanding the codebase and finding relevant files, which makes task generation accurate and useful. With that in mind, I wanted to see if the same could be done with Aider. My test project was DBeaver ([https://github.com/dbeaver/dbeaver](https://github.com/dbeaver/dbeaver)), which is mostly Java. Aider did okay when generating the PRD but struggled badly with generating tasks -- it often missed related files and once even imagined some TypeScript files that don’t exist in the project. I also tried running aider-ce with an MCP server called **mcp-everything-search**, which provides fast file searching using the Everything Search engine. Even with this setup, the context building and file discovery aren’t nearly as strong as Copilot’s. For both GitHub Copilot and Aider, I've used the GPT-4o model, so the difference in results doesn’t seem to come from the model itself but rather how each tool manages repo context and file lookup. Has anyone had better luck using Aider for a multi-step workflow like this? Or have tips on improving how it indexes or uses the repo? Would appreciate any pointers or experiences you want to share.
2025-10-28T09:48:07
https://www.reddit.com/r/LocalLLaMA/comments/1oi522k/experiences_with_aider_vs_github_copilot_for_ryan/
XiNXNATiON
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi522k
false
null
t3_1oi522k
/r/LocalLLaMA/comments/1oi522k/experiences_with_aider_vs_github_copilot_for_ryan/
false
false
self
1
{'enabled': False, 'images': [{'id': 'q8m1EZfG9Sn0jsrixLMID-wMB7LBj-XQs7G-NEcLJuk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/q8m1EZfG9Sn0jsrixLMID-wMB7LBj-XQs7G-NEcLJuk.png?width=108&crop=smart&auto=webp&s=3155af7a82a04e17c373629a8decc2416b90d8cd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/q8m1EZfG9Sn0jsrixLMID-wMB7LBj-XQs7G-NEcLJuk.png?width=216&crop=smart&auto=webp&s=d687f73efce9dc9d2b08effbc2bf38a256e0312b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/q8m1EZfG9Sn0jsrixLMID-wMB7LBj-XQs7G-NEcLJuk.png?width=320&crop=smart&auto=webp&s=9b2f02e5b7f1c2744a0123077bf028de2696c7ac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/q8m1EZfG9Sn0jsrixLMID-wMB7LBj-XQs7G-NEcLJuk.png?width=640&crop=smart&auto=webp&s=8284601d014b5a8b741fe2f10d4bd7a77c13d2ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/q8m1EZfG9Sn0jsrixLMID-wMB7LBj-XQs7G-NEcLJuk.png?width=960&crop=smart&auto=webp&s=de41fd087a6bd970125ac41303fed477662a281f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/q8m1EZfG9Sn0jsrixLMID-wMB7LBj-XQs7G-NEcLJuk.png?width=1080&crop=smart&auto=webp&s=f99ae105826fe67fed33a9ea62a4536627cbd811', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/q8m1EZfG9Sn0jsrixLMID-wMB7LBj-XQs7G-NEcLJuk.png?auto=webp&s=c6e44b0ec8e8eff02e390d033d8f33ba83b91ddc', 'width': 1200}, 'variants': {}}]}
My open-source project just hit 3K stars on GitHub in one month — without spending a single dollar on promotion 🚀
0
Hey everyone, I wanted to share a small milestone: my open-source project, ValueCell, just crossed 3,000 stars on GitHub in just one month 🤯 When I first pushed the repo, I honestly didn't expect this much attention. ValueCell started as a small experiment: an AI-powered financial research framework that lets different LLMs act as "agents" to analyze markets, run simulations, and generate research insights. Over the past few weeks, things kind of blew up:
- People began forking it to build their own AI trading simulators
- Others contributed improvements to the agent planning logic
- Some even integrated it with real market APIs (way faster than I expected!)
I think a few key factors drove the early traction:
- Clear purpose: people instantly understood what it does ("AI for investment research").
- Good docs + runnable examples: nothing fancy, but they made onboarding smooth.
- Community feedback loop: I tried to reply to every issue/discussion, even simple ones.
- Social media visibility: this turned out to be huge, even without spending a cent.
We posted about ValueCell on social media. One post unexpectedly went viral, and soon other creators began sharing it, including a few tech influencers with over 100K followers. That's when we saw a real word-of-mouth effect: new users discovering ValueCell organically through reposts and community discussions. It was fascinating to watch how a single viral post, with zero paid promotion, could spark such a chain reaction and drive so much visibility back to GitHub. Seeing developers actually use and talk about the project has been incredibly motivating. The stars aren't just a number: they represent people finding it useful, experimenting, and building on top of it.
2025-10-28T08:43:36
https://www.reddit.com/r/LocalLLaMA/comments/1oi443h/my_opensource_project_just_hit_3k_stars_on_github/
LobsterOpen6228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi443h
false
null
t3_1oi443h
/r/LocalLLaMA/comments/1oi443h/my_opensource_project_just_hit_3k_stars_on_github/
false
false
self
0
null
Flex Attention vs Flash Attention 3
6
Hey everyone, I'm pretty new to accelerated attention APIs like FlexAttention from the PyTorch team and FlashAttention from Tri Dao out of Princeton. Unsloth itself uses FlexAttention, as far as I know, and reports: "10x faster on a single GPU and up to 30x faster on multiple GPU systems compared to Flash Attention 2 (FA2)." However, FlashAttention 3 turns out to be 1.5-2x faster than FlashAttention 2. I'm trying to decide which one to use for training my LLM: FlexAttention (Unsloth) or FlashAttention 3. What do you personally suggest, and what experience have you had with these two? Which one is more error-prone, which is more memory-heavy or computationally cheaper, etc.? Thank you all in advance!
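For context on what's being compared: FlexAttention lets you write the attention variant as a plain Python `score_mod` that `torch.compile` fuses into one kernel, while FA3 is a fixed fast kernel that, as far as I know, targets Hopper (H100-class) GPUs only, so on older cards the practical comparison is FlexAttention vs FA2 anyway. A toy causal-masking example (PyTorch 2.5+; shapes arbitrary, CUDA assumed):

```python
# Toy FlexAttention example: the attention variant is a plain Python
# score_mod that torch.compile fuses into a single kernel.
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # mask out positions where the key comes after the query
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

B, H, S, D = 1, 8, 128, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda", dtype=torch.float16) for _ in range(3))

compiled = torch.compile(flex_attention)
out = compiled(q, k, v, score_mod=causal)  # shape (B, H, S, D)
```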
2025-10-28T08:27:54
https://www.reddit.com/r/LocalLLaMA/comments/1oi3w68/flex_attention_vs_flash_attention_3/
Extra-Designer9333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi3w68
false
null
t3_1oi3w68
/r/LocalLLaMA/comments/1oi3w68/flex_attention_vs_flash_attention_3/
false
false
self
6
null
Public Service Announcement
0
AI provides a way to solve problems. It records how you solve them. It guesses the answers. You confirm when it's correct. AI doesn't solve problems. You do. Step-by-Step. https://i.redd.it/9mgqdps2btxf1.gif
2025-10-28T08:20:51
https://www.reddit.com/r/LocalLLaMA/comments/1oi3sld/public_service_announcement/
researchAmericanAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oi3sld
false
null
t3_1oi3sld
/r/LocalLLaMA/comments/1oi3sld/public_service_announcement/
false
false
https://a.thumbs.redditm…4PLb2wgCdyv4.jpg
0
null