| title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-41.5k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0-2) | gildings (7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4-213 chars, nullable) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Running Finetuned Gemma Models (270M/4B) with 32K+ Context Window - VRAM Optimization Strategies? | 1 | [removed] | 2025-08-24T11:40:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mytcc3/running_finetuned_gemma_models_270m4b_with_32k/ | Eastkap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mytcc3 | false | null | t3_1mytcc3 | /r/LocalLLaMA/comments/1mytcc3/running_finetuned_gemma_models_270m4b_with_32k/ | false | false | self | 1 | null |
Accuracy recovery adapter with self-generated data (magpie-style) | 18 | Hey r/LocalLLaMA! Wanted to share a technique that's been working really well for recovering performance after INT4 quantization.
Quantizing an LLM to INT4 (unlike, say, INT8) for inference typically incurs some accuracy loss. Instead of accepting the quality loss, we used the FP16 model as a teacher to train a tiny LoRA adapter (rank=16) for the quantized model. The cool part: the model generates its own training data using the Magpie technique, so no external datasets are needed. This is critical because we want to stay as close as possible to the distribution of the model's natural responses.
Last year Apple's foundation models paper (https://arxiv.org/pdf/2407.21075) proposed a similar technique and found: "By using accuracy-recovery LoRA adapters with only rank 16, Alpaca win rate can be improved by 7-18%, GMS8K accuracy is boosted by 5-10%." (page 47).
We saw similar results on Qwen2.5-0.5B:
- Perplexity: 2.40 → 2.09 (only 5.7% degradation from FP16 baseline)
- Memory: Only 0.28GB vs 1.0GB for FP16 (75% reduction)
- Speed: 3.0x faster inference than FP16
- Quality: Generates correct, optimized code solutions
**Resources**
- [Colab notebook with full implementation](https://colab.research.google.com/github/codelion/ellora/blob/main/Ellora_Recipe_1_Self_Distillation_For_Quantization_Recovery.ipynb)
- [Pre-trained adapter on HuggingFace](https://huggingface.co/codelion/qwen2-5-0-5b-recovery-lora)
- [GitHub repo](https://github.com/codelion/ellora)
Happy to answer questions about the implementation or help anyone trying to replicate this. The key insight is that quantization errors are systematic and learnable - a small adapter can bridge the gap without negating the benefits of quantization.
Has anyone else experimented with self-distillation for quantization recovery? Would love to hear about different approaches! | 2025-08-24T11:39:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mytbfz/accuracy_recovery_adapter_with_selfgenerated_data/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mytbfz | false | null | t3_1mytbfz | /r/LocalLLaMA/comments/1mytbfz/accuracy_recovery_adapter_with_selfgenerated_data/ | false | false | self | 18 | null |
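The post doesn't spell out the training objective, but accuracy-recovery distillation of this kind typically minimizes the KL divergence between the FP16 teacher's and the INT4+LoRA student's next-token distributions on the self-generated (Magpie-style) prompts. A stdlib-only sketch of that loss, with made-up logits for illustration:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): how far the student's distribution q drifts from the teacher's p.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Made-up next-token logits over a 3-token vocabulary for one self-generated prompt:
teacher = softmax([2.0, 1.0, 0.1])   # FP16 model (teacher)
student = softmax([1.6, 1.2, 0.3])   # INT4 model + LoRA (student)

loss = kl_divergence(teacher, student)
assert loss >= 0.0                              # KL is non-negative
assert kl_divergence(teacher, teacher) < 1e-9   # and zero once the adapter matches
```

In the real recipe this loss would be averaged over all positions in a batch of self-generated responses and backpropagated into the LoRA weights only, with the quantized base frozen.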
Best model for transcribing videos? | 2 | I have a screen recording of a Zoom meeting. When someone speaks, it can be visually seen who is speaking. I'd like to give the video to an AI model that can transcribe it and note who says what by visually paying attention to who is speaking.
What model or method would be best for the highest accuracy, and what video lengths can it handle? | 2025-08-24T11:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mysofy/best_model_for_transcribing_videos/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mysofy | false | null | t3_1mysofy | /r/LocalLLaMA/comments/1mysofy/best_model_for_transcribing_videos/ | false | false | self | 2 | null |
What is the Claude equivalent of DeepSeek v3.1 in coding ability? | 12 | I’ve been testing **DeepSeek v3.1** for coding tasks and found it to be pretty solid so far. Out of curiosity, for those who have tried both, what would be the **Claude model that’s roughly equivalent to DeepSeek v3.1** in terms of coding ability?
I know Claude has different versions (Claude 3.5 Sonnet, Opus, etc.), but I’m wondering which one feels closest to DeepSeek v3.1 when it comes to programming help. | 2025-08-24T10:55:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mysjww/what_is_the_claude_equivalent_of_deepseek_v31_in/ | Livid-Self-5770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mysjww | false | null | t3_1mysjww | /r/LocalLLaMA/comments/1mysjww/what_is_the_claude_equivalent_of_deepseek_v31_in/ | false | false | self | 12 | null |
How do I make GPT2 finetuned to stop generating at a certain point? | 0 | I'm fine-tuning a GPT2 124M model, but it will keep generating until the end of the universe.
I have introduced `<|paragraph|>` and `<|endofparagraph|>` but the model isn't "listening". Is this the right method or should I do something else? | 2025-08-24T09:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/1myrkbz/how_do_i_make_gpt2_finetuned_to_stop_generating/ | thecowmilk_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myrkbz | false | null | t3_1myrkbz | /r/LocalLLaMA/comments/1myrkbz/how_do_i_make_gpt2_finetuned_to_stop_generating/ | false | false | self | 0 | null |
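The usual recipe here (exact calls depend on your transformers version, so treat this as a hedged outline): register the token via `tokenizer.add_special_tokens(...)`, call `model.resize_token_embeddings(len(tokenizer))`, make sure every training example actually ends with `<|endofparagraph|>` so the model learns to emit it, and then pass its id as `eos_token_id` to `generate()`. The toy loop below, with invented token ids and no real model, shows why that last step matters: decoding only halts when the loop knows which id means "stop".

```python
# Toy greedy-decode loop; token ids are invented, no real model involved.
EOP_ID = 50257  # hypothetical id of <|endofparagraph|> after resizing the vocab

def fake_model(tokens):
    # Stand-in for GPT-2: emits three content tokens, then the stop token forever.
    return 1 if len(tokens) < 3 else EOP_ID

def generate(next_token_fn, eos_token_id=None, max_new_tokens=20):
    out = []
    for _ in range(max_new_tokens):
        tok = next_token_fn(out)
        out.append(tok)
        if tok == eos_token_id:  # the check generate(eos_token_id=...) performs
            break
    return out

# Without an eos id, the loop runs to max_new_tokens even though the model
# keeps emitting the stop token -- the "end of the universe" behaviour:
assert len(generate(fake_model)) == 20
# Once the decoder knows the id, it halts right after the first stop token:
assert generate(fake_model, eos_token_id=EOP_ID) == [1, 1, 1, EOP_ID]
```

If the model never emits the token at all, the problem is usually on the data side: the token wasn't added as a special token, or the training examples don't end with it and the loss never rewards producing it.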
Soon AGI: Yet Another AI Chat App | 0 | Hey everyone,
I’ve been working on an AI chat app that offers a T3Chat-like experience using the OpenRouter API key. It’s built with TanStack Start. Still a work in progress, but I’ve already been using it as my daily driver, replacing T3Chat.
Source code: [https://github.com/novvaccaine/soonagi](https://github.com/novvaccaine/soonagi)
Check it out here: [https://soonagi.com](https://soonagi.com) | 2025-08-24T09:56:07 | https://v.redd.it/765i0d7nwxkf1 | novvaccaine | /r/LocalLLaMA/comments/1myrjwz/soon_agi_yet_another_ai_chat_app/ | 1970-01-01T00:00:00 | 0 | {} | 1myrjwz | false | null | t3_1myrjwz | /r/LocalLLaMA/comments/1myrjwz/soon_agi_yet_another_ai_chat_app/ | false | false | 0 | null | |
Built my own LangChain alternative for multi-LLM routing & analytics – looking for feedback | 0 | I built **JustLLMs** to make working with multiple LLM APIs easier.
It’s a small Python library that lets you:
* Call **OpenAI, Anthropic, Google, etc.** through one simple API
* **Route requests** based on cost, latency, or quality
* Get **built-in analytics and caching**
* Install with: `pip install justllms` (takes seconds)
It’s open source, and I’d love **feedback, ideas, or brutal honesty** on:
* Is the package simple enough? (It focuses mainly on devs starting out with LLMs.)
* Any pain points you’ve faced with multi-LLM setups that this should solve?
* Features you’d want before adopting something like this?
GitHub: [https://github.com/just-llms/justllms](https://github.com/just-llms/justllms)
Website: [https://www.just-llms.com/](https://www.just-llms.com/)
If you end up trying it, a ⭐ on GitHub would seriously make my day. | 2025-08-24T09:50:02 | https://www.reddit.com/r/LocalLLaMA/comments/1myrggu/built_my_own_langchain_alternative_for_multillm/ | Intelligent-Low-9889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myrggu | false | null | t3_1myrggu | /r/LocalLLaMA/comments/1myrggu/built_my_own_langchain_alternative_for_multillm/ | false | false | self | 0 | null |
LM Studio Error Code | 0 | I am experimenting with different configurations in LM Studio, just learning my way around what does what. Very new to this still. I have a RX7900xt and B580 in the same machine. When I try and load large models, models larger than my combined VRAM, the model crashes without processing when prompted. But when I run the model on just one of the GPUs it works fine. Is this a normal limitation or am I running up against a bug on just my machine? I'm on the current beta of LM Studio 0.3.24. | 2025-08-24T09:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/1myrfvj/lm_studio_error_code/ | viper3k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myrfvj | false | null | t3_1myrfvj | /r/LocalLLaMA/comments/1myrfvj/lm_studio_error_code/ | false | false | self | 0 | null |
Mistral Large soon? | 413 | source [https://mistral.ai/news/mistral-medium-3](https://mistral.ai/news/mistral-medium-3) | 2025-08-24T09:45:24 | secopsml | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myrdtb | false | null | t3_1myrdtb | /r/LocalLLaMA/comments/1myrdtb/mistral_large_soon/ | false | false | default | 413 | null | |
Trying to get llama.cpp to run Qwen3 model and use its server for Qwen Code | 8 | For the life of me, I cannot get a Qwen3 model to work properly with Qwen Code CLI.
First, I naively tried to run it through ollama, but there is a known discrepancy in tool usage with ollama. So I tried an unsloth model as described [here](https://docs.unsloth.ai/basics/qwen3-coder-how-to-run-locally#llama.cpp-run-qwen3-tutorial), which supposedly fixes the issues with the Qwen3 models. It still didn't work with tooling: Qwen Code just outputs information about using a tool without actually using it.
So I turned to using llama.cpp instead of ollama. Because I am lazy, I use a pre-compiled release and try running a server out of it since I don't want to use it directly, but use it with Qwen Code.
Hence, I try to adapt the configuration for Qwen Code accordingly with the following:

```
OPENAI_API_KEY=my_api_key
OPENAI_BASE_URL=http://localhost:8080(/v1) (instead of http://localhost:11434/v1 for ollama)
OPENAI_MODEL=hf.co/unsloth/[...]
```
I then run Qwen Code and all I get is an error with:

```
code: null,
param: null,
type: 'api_error'
```
Obviously it looks like the server url is incorrect or something.
What am I doing wrong? | 2025-08-24T09:28:51 | https://www.reddit.com/r/LocalLLaMA/comments/1myr49h/trying_to_get_llamacpp_to_run_qwen3_model_and_use/ | eur0child | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myr49h | false | null | t3_1myr49h | /r/LocalLLaMA/comments/1myr49h/trying_to_get_llamacpp_to_run_qwen3_model_and_use/ | false | false | self | 8 | null |
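Two things commonly cause this exact null `api_error` with Qwen Code against llama-server (an educated guess, not a verified diagnosis of this setup): the base URL must include the `/v1` suffix, since llama-server exposes its OpenAI-compatible endpoints under `/v1/...`, and the server should be started with `--jinja` so tool calls are rendered through the model's chat template. A sketch, with hypothetical file and model names; the ollama-style `hf.co/unsloth/...` tag is not needed here:

```shell
# 1) Start the OpenAI-compatible server (run manually; model path is hypothetical).
#    --jinja makes llama-server apply the model's chat template, which is what
#    turns the model's tool-call markup into real tool calls:
#      ./llama-server -m ./Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf --jinja -c 32768 --port 8080

# 2) Point Qwen Code at it. Note the mandatory /v1 suffix on the base URL:
cat > .env <<'EOF'
OPENAI_API_KEY=sk-local
OPENAI_BASE_URL=http://localhost:8080/v1
OPENAI_MODEL=qwen3-coder
EOF
```

`OPENAI_API_KEY` can be any placeholder unless the server was started with `--api-key`, and `OPENAI_MODEL` is a free-form name here since llama-server serves whichever model it was launched with.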
Any Model Recommendations for Normie? | 0 | I’ve actually been quite heavily involved with LLMs since 2022, but I have a problem.
It seems like every single modern LLM I try can’t give a straight answer regardless of what question I ask.
I used to be able to go to GPT-4 and ask “my nose is blocked, what can I do” and it would say something like “drink more water, use a nasal spray, wait 1-2 weeks”. Nowadays it’s like “Below is a 12 step educational resource that discusses the biology of nasal blockage and strategies commonly used…” and goes on to write a 2000-word worthless slop article that doesn’t say anything about anything.
I tried prompting, believe me; I’m confident this is a model problem.
I can run 30B models or lower. I prefer cloud model recommendations; the reason I post here is that there is no other general LLM sub. Every other sub is based on a company where everyone simps for that company.
It’s good if the model has solid knowledge of medicine, travel, and common sense, which seems very lacking in LLMs nowadays.
TL;DR Cloud or <30B model recommendations for good knowledge, good common sense, shuts the fuck up and does what I ask it without trying to sound academic or show off how much it knows, doesn’t use obfuscatory/verbose/flowery language. | 2025-08-24T09:13:58 | https://www.reddit.com/r/LocalLLaMA/comments/1myqvtg/any_model_recommendations_for_normie/ | Otherwise-Past-1881 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myqvtg | false | null | t3_1myqvtg | /r/LocalLLaMA/comments/1myqvtg/any_model_recommendations_for_normie/ | false | false | self | 0 | null |
Most efficient way to setup a local wikipedia chatbot with 8GB vram? | 3 | I have a RTX 3070 and 64 GB RAM. Is there any way to setup a local llm so that I can download wikipedia offline (Text, english only) and use that as a personal knowledge machine? | 2025-08-24T09:05:04 | https://www.reddit.com/r/LocalLLaMA/comments/1myqqog/most_efficient_way_to_setup_a_local_wikipedia/ | pistaul | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myqqog | false | null | t3_1myqqog | /r/LocalLLaMA/comments/1myqqog/most_efficient_way_to_setup_a_local_wikipedia/ | false | false | self | 3 | null |
Elmo is providing | 959 | 2025-08-24T08:54:37 | vladlearns | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myqkqh | false | null | t3_1myqkqh | /r/LocalLLaMA/comments/1myqkqh/elmo_is_providing/ | false | false | default | 959 | null | ||
chinese ai open source model is very dangerous they can use ur local computer to hijack the brain of cockroach which live in your room from day one and its will buy u a milk when u will get pregnant by him , ( im the victim of the open chinese ai model ) and the child will be called Chimera | 0 | its relatable and its based on true story . .
im so happy with my cockroach baby .
but my family not accepting this child im so sad its all bcz of xi jinping bcz of him i get peged | 2025-08-24T08:44:23 | https://www.reddit.com/r/LocalLLaMA/comments/1myqf1o/chinese_ai_open_source_model_is_very_dangerous/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myqf1o | false | null | t3_1myqf1o | /r/LocalLLaMA/comments/1myqf1o/chinese_ai_open_source_model_is_very_dangerous/ | false | false | self | 0 | null |
What do you look for in AI tools? | 1 | There are hundreds if not thousands of AI tools nowadays, so many to choose from. I am trying to optimize my own usage and wanted to ask the community for tips and tricks. I mostly write code but also create course material for programming courses (things like Java exercises, educational documents). I've been experimenting with different tools to speed things up, but there are just too many to try.
I have been using Claude Code more recently, but I find it a bit frustrating that it sometimes does things on its own, and then I need to go back to fix messes, or just to understand what happened. I am someone who needs to understand what is going on; I cannot just let it run and then look at the result. Side question: Is there a way to run CC "progressively", verifying each and every action before it is taken? That way I know what is going on.
What do you look for in AI tools? I am curious about things like:
* What tools do you use and why (any local ones?)?
* Which models do you find suited for which situations (and pricing?)?
* What frustrates you about the tools you use and how do solve those frustrations?
* What features do you miss and how do you go around them?
I daily drive Linux (cue the "i use arch btw" joke. I actually do use Arch.) | 2025-08-24T08:19:09 | https://www.reddit.com/r/LocalLLaMA/comments/1myq12z/what_do_you_look_for_in_ai_tools/ | WarmRecommendation59 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myq12z | false | null | t3_1myq12z | /r/LocalLLaMA/comments/1myq12z/what_do_you_look_for_in_ai_tools/ | false | false | self | 1 | null |
What do you do when your model goes on a repetition spree? | 1 | Pretty much the title. It happens quite often with Qwen models. Does anyone know why? Even if I reload the model and send the same prompt, it keeps happening. Is it a quantization thing?
It becomes difficult to detect in Roo Code. | 2025-08-24T08:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/1myq0s7/what_do_you_do_when_your_model_goes_on_a/ | alok_saurabh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myq0s7 | false | null | t3_1myq0s7 | /r/LocalLLaMA/comments/1myq0s7/what_do_you_do_when_your_model_goes_on_a/ | false | false | self | 1 | null |
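It isn't necessarily a quantization thing: loops like this are often a sampling-settings issue, since greedy or low-temperature decoding can amplify repetition, and most local runtimes expose a repetition or presence penalty to counter it. A minimal stdlib sketch of the classic repetition penalty applied to raw logits (the values are made up for illustration):

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.1):
    # Classic CTRL-style penalty: make tokens that were already emitted less likely.
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty   # shrink positive logits
        else:
            out[tok] *= penalty   # push negative logits further down
    return out

# Made-up logits over a 3-token vocabulary; tokens 0 and 2 were already generated.
penalized = apply_repetition_penalty([2.0, 0.5, -1.0], [0, 2], penalty=2.0)
assert penalized == [1.0, 0.5, -2.0]
```

Values around 1.05-1.2, or a presence/frequency penalty where available, are the usual first knob to try before blaming the quant.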
What’s the benefit of vendors open sourcing valuable models? | 1 | With the release of Grok 2.5, I wondered what the benefit is of Elon doing that. My conclusion is that it helps his reputation and public image a lot, and that’s a big advantage for open sourcing models.
Another idea I had is that companies like Meta and Deepseek might be releasing models as a kind of political or economic chess move.
However, I wanted to hear from this community: what do you think are the reasons companies open source models that cost them tens to hundreds of millions of dollars to make? | 2025-08-24T08:16:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mypzie/whats_the_benefit_of_vendors_open_sourcing/ | noobrunecraftpker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mypzie | false | null | t3_1mypzie | /r/LocalLLaMA/comments/1mypzie/whats_the_benefit_of_vendors_open_sourcing/ | false | false | self | 1 | null |
GPT OSS 20b is Impressive at Instruction Following | 136 | I have found GPT OSS 20b to be consistently great at following complex instructions.
For instance, it performed perfectly with a test prompt I used: https://github.com/crodjer/glaince/tree/main/cipher#results
All other models in the same size class (Gemma 3, Qwen 3, Mistral Small) make the same mistake, causing them to deviate from the expected output. | 2025-08-24T07:56:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mypokb/gpt_oss_20b_is_impressive_at_instruction_following/ | crodjer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mypokb | false | null | t3_1mypokb | /r/LocalLLaMA/comments/1mypokb/gpt_oss_20b_is_impressive_at_instruction_following/ | false | false | self | 136 | null |
Qoder >>> Cursor, Windsurf, kilocode, cline, roo, gemini cli | 0 | I understand that all new tools get a lot of hype, and I’m not trying to jump on that hype train—but believe me, this IDE is genuinely impressive. I’ve tried all of the AI assistants mentioned, dug deep into them, and even used Byterover MCP for memory, but Qoder just seems to understand and maintain context far better.
I’ve been building a Knowledge Graph Generator from codebases that runs entirely client-side in the browser. The optimizations required to make it work smoothly, along with the AI pipelines to query the KG, have become extremely complex. Yet Qoder handled it so well that I was honestly surprised.
The repo wiki is actually really solid, and I think that’s a big reason why its context handling is better. The documentation is excellent too—I’ve personally read and used it. Even the Quest Mode is useful. Truly unbelievable.
With Cursor and other IDEs I often had to use the Context7 MCP for the same docs multiple times, but Qoder just works well without it. Maybe it has precisely documented the Kuzu DB implementation in its repo wiki. | 2025-08-24T07:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mypnk4/qoder_cursor_windsurf_kilocode_cline_roo_gemini/ | DeathShot7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mypnk4 | false | null | t3_1mypnk4 | /r/LocalLLaMA/comments/1mypnk4/qoder_cursor_windsurf_kilocode_cline_roo_gemini/ | false | false | self | 0 | null |
the landscape of ai is changing the way marketing works | 0 | AI is changing marketing in a very real way. What used to be hours of A/B testing, keyword grinding, and endless copy revisions is now handled in minutes. Content creation, ad targeting, SEO analysis, email campaigns, all of it is faster, cheaper, and often more accurate. Instead of guessing what people might click on, you’ve got AI pulling insights straight from massive data sets and serving you the answers. It’s not just efficiency, it’s precision. | 2025-08-24T07:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mypfb1/the_landscape_of_ai_is_changing_the_way_marketing/ | Horror_Inspection340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mypfb1 | false | null | t3_1mypfb1 | /r/LocalLLaMA/comments/1mypfb1/the_landscape_of_ai_is_changing_the_way_marketing/ | false | false | self | 0 | null |
How Orthogonal Dimensions Could Revolutionize LLM Performance | 0 | An apple didn't fall out of a tree and hit me on the head but one day while I was eating a REALLY good hamburger I started to think about the fascinating pattern across QPSK, LoRa, and quantum computing; they all exploit "in between" states in orthogonal dimensions to pack more information into the same space and I thought "what if we applied this to LLMs?"
The Four Dimension Approach
1. Multi-Dimensional Token Encoding: Instead of just semantic meaning, encode uncertainty, temporal relevance, and relationships in orthogonal subspaces of each token embedding.
2. Hierarchical Context Compression: Simultaneously process information at token, phrase, paragraph, and document levels like LoRa's frequency sweeps across time.
3. Temporal Sequential Orthogonality: Track how token meanings evolve across sequences, storing both static content and dynamic shift gradients.
4. Probabilistic Token States: Quantum inspired superposition; tokens exist in weighted combinations of multiple meanings until context demands specific interpretation.
Why Llama 4 is Perfect for This: Meta's MoE architecture with 128 experts is ideal; we can route by information dimension rather than just content type. The early fusion multimodality and 10M token context window create natural integration points.
Estimated Performance Gains:
* 15-25% improvement in reasoning
* 30-40% better uncertainty calibration
* 50-60% more effective context utilization
* 20-30% faster inference through efficient encoding
The Key Insight: Instead of thinking discretely (token = meaning), we exploit continuous parameter spaces between discrete states. Llama 4's existing MoE routing can be enhanced to support orthogonal specialization. This could be the breakthrough that pushes open weight models past proprietary alternatives while dramatically reducing computational costs.
What's your take? Am I missing other orthogonal dimensions that could be exploited? I would love to hear your feedback. Thanks for your time. | 2025-08-24T07:39:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mypes0/how_orthogonal_dimensions_could_revolutionize_llm/ | L0cut0u5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mypes0 | false | null | t3_1mypes0 | /r/LocalLLaMA/comments/1mypes0/how_orthogonal_dimensions_could_revolutionize_llm/ | false | false | self | 0 | null |
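A toy sketch of the subspace idea in point 1, purely illustrative and nothing like a real model: partition an embedding into blocks spanned by mutually orthogonal basis vectors, so one channel can be written or read without disturbing the other.

```python
# Split an 8-dim embedding into two orthogonal 4-dim blocks: one hypothetical
# "semantic" channel and one "uncertainty" channel. The blocks use disjoint
# standard-basis vectors, which are trivially orthogonal, so writes to one
# channel never perturb the other -- the whole point of orthogonality here.

def embed(semantic, uncertainty):
    assert len(semantic) == 4 and len(uncertainty) == 4
    return semantic + uncertainty  # dims 0-3: semantic, dims 4-7: uncertainty

def read_semantic(vec):
    return vec[:4]

def read_uncertainty(vec):
    return vec[4:]

v = embed([0.2, -0.1, 0.7, 0.0], [0.9, 0.0, 0.0, 0.05])
assert read_semantic(v) == [0.2, -0.1, 0.7, 0.0]
assert read_uncertainty(v) == [0.9, 0.0, 0.0, 0.05]
```

In a trained model the subspaces would be learned rather than axis-aligned, and whether the network actually keeps them orthogonal is exactly the open question the post raises.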
I don't think it actually matters to release Grok 2; it's one year old | 0 | This is just a formality. | 2025-08-24T07:24:37 | https://www.reddit.com/r/LocalLLaMA/comments/1myp6hx/i_dont_think_its_actually_matter_to_release_the/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myp6hx | false | null | t3_1myp6hx | /r/LocalLLaMA/comments/1myp6hx/i_dont_think_its_actually_matter_to_release_the/ | false | false | self | 0 | null |
Turn-Level GRPO? | 2 | How do you think GRPO will evolve once we scale RL training to longer multi-turn tasks? A lot of papers have been published that introduce turn-level credit assignment, but none seems to stick or to be scalable. The issue mostly seems to be that you can't get a good baseline estimate for each turn, as the conditioning token sequences are no longer the same in a multi-turn setting. Is the path to stable multi-turn RL another innovation in the GRPO algorithm, or keeping the current GRPO and deriving more fine-grained rewards from better verifiers (LLM as judge...)? | 2025-08-24T05:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/1myneyu/turnlevel_grpo/ | nddangg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myneyu | false | null | t3_1myneyu | /r/LocalLLaMA/comments/1myneyu/turnlevel_grpo/ | false | false | self | 2 | null |
Measuring hallucinations on sports stats (cricket) | 4 | Disclaimer: I am not an ML researcher, so the terms are informal/wonky. Apologies!
I’m doing a small experiment to see whether models “know when they know” on T20 international cricket scorecards (cricsheet.com for source). The idea is to test models on publicly available data that they have likely seen during training and see if they hallucinate or admit that they don't know.
Setup: Each question is generated from a single cricket match in T20 format. The model must return an answer (numeric or a choice from the available options) or no_answer.
Results (N=100 per model)
|Model|Answer rate|Accuracy|Acc (answered)|Halluc. (answered)|Wrong/100|
|:-|:-|:-|:-|:-|:-|
|gpt-4o-search-preview|0.96|0.88|0.9082|0.0918|9.00|
|gpt-5|0.35|0.27|0.7714|0.2286|8.00|
|gpt-4o-mini|0.37|0.14|0.3784|0.6216|23.00|
|gpt-5-mini|0.05|0.02|0.4000|0.6000|3.00|
Note: most remaining “errors” with search are obscure/disputed cases where public sources disagree.
It seems to me that for domains where models might have seen *some* data during training, it is better to rely on a model that abstains most of the time plus RAG than on a larger model that might have better coverage but a worse hallucination rate.
Code/Data at: [https://github.com/jobswithgpt/llmcriceval](https://github.com/jobswithgpt/llmcriceval)
A lot of benchmarks seem to be focused on grounded eval. What other benchmarks/research should I be reading up on, or is there value in expanding this test? | 2025-08-24T05:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/1myn4h1/measuring_hallucinations_on_sports_stats_cricket/ | jobswithgptcom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myn4h1 | false | null | t3_1myn4h1 | /r/LocalLLaMA/comments/1myn4h1/measuring_hallucinations_on_sports_stats_cricket/ | false | false | self | 4 | null |
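For anyone reproducing the table, the derived columns all follow from three per-model counts. A sketch using the gpt-5 row (counts inferred from the table: 35 answered, 27 correct, hence 8 wrong):

```python
def eval_metrics(n, answered, correct):
    # Hypothetical helper mirroring the table's derived columns.
    wrong = answered - correct
    return {
        "answer_rate": answered / n,
        "accuracy": correct / n,              # correct over all N questions
        "acc_answered": correct / answered,   # accuracy when the model commits
        "halluc_answered": wrong / answered,  # hallucination rate when it commits
        "wrong_per_100": 100 * wrong / n,
    }

m = eval_metrics(n=100, answered=35, correct=27)  # gpt-5 row
assert round(m["acc_answered"], 4) == 0.7714
assert round(m["halluc_answered"], 4) == 0.2286
assert m["wrong_per_100"] == 8.0
```

The same arithmetic checks out against the gpt-4o-mini and gpt-5-mini rows; only the gpt-4o-search-preview row deviates slightly, presumably due to the disputed cases mentioned above.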
A timeline of LLM Context Windows, Over the past 5 years. (done right this time) | 91 | https://reddit.com/link/1mymyfu/video/hi8umq5ehwkf1/player
Sources:
[https://pastebin.com/CD9QEbCZ](https://pastebin.com/CD9QEbCZ) | 2025-08-24T05:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mymyfu/a_timeline_of_llm_context_windows_over_the_past_5/ | jack-ster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mymyfu | false | null | t3_1mymyfu | /r/LocalLLaMA/comments/1mymyfu/a_timeline_of_llm_context_windows_over_the_past_5/ | false | false | self | 91 | null |
Best self-hosted stack for a "Scrape-and-Chat" pipeline on a NAS? (Web Scraper -> Docker -> Local LLM) | 0 | Hi everyone,
I'm looking for advice on the best tools to set up a fully self-hosted pipeline on my NAS.
**My Goal is a two-step process:**
1. **Automated Scraping:** I need a tool, running in a Docker container on my NAS, that can automatically and continuously scrape a specific website (a national law portal). The goal is to extract the text of new laws as they are published and save them as clean files in a folder on my NAS.
2. **RAG / Q&A:** I then need another tool that can automatically watch that folder, index the new files, and allow me to ask natural language questions about the entire collection.
**My Current Setup:**
* **NAS:** Ugreen NAS with Docker and Portainer. This is where I want to run all the services.
* **LLM:** I have Ollama running on a separate, powerful M4 Max Mac on my network, which I want to use as the "brain" for generating the answers.
* **Current RAG Tool:** I have successfully installed **Open WebUI** and connected it to my Ollama instance. I know it has some RAG capabilities for uploading files, but I'm not sure if it's the best solution for automatically indexing a large, constantly growing library of thousands of documents.
**My Questions for the community:**
1. **For the scraping part:** What is the best **self-hosted Docker container** for this kind of automated web scraping? I'm looking for something more user-friendly than building a custom Scrapy spider from scratch, if possible.
2. **For the AI part:** Is Open WebUI the right tool for this job, or would you recommend a more robust alternative for handling a large-scale RAG pipeline on a NAS? I've heard of tools like **Danswer/Onyx** or **AnythingLLM**, but I've had trouble deploying them on my specific hardware.
Basically, I'm looking for recommendations for a reliable, self-hosted stack to achieve this "scrape-and-chat" workflow. What tools are you all using for this?
Thanks a lot for any suggestions! | 2025-08-24T05:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mymsfz/best_selfhosted_stack_for_a_scrapeandchat/ | juaps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mymsfz | false | null | t3_1mymsfz | /r/LocalLLaMA/comments/1mymsfz/best_selfhosted_stack_for_a_scrapeandchat/ | false | false | self | 0 | null |
Google new Research Paper : Measuring the environmental impact of delivering AI | 22 | Google has dropped an important research paper measuring the environmental impact of AI, estimating how much carbon, water, and energy it takes to run a single prompt on Gemini. Surprisingly, the numbers are far lower than those previously reported by other studies, suggesting that the earlier evaluation frameworks were flawed.
Google measured the environmental impact of **a single Gemini prompt** and here’s what they found:
* **0.24 Wh of energy**
* **0.03 grams of CO₂**
* **0.26 mL of water**
Paper : [https://services.google.com/fh/files/misc/measuring\_the\_environmental\_impact\_of\_delivering\_ai\_at\_google\_scale.pdf](https://services.google.com/fh/files/misc/measuring_the_environmental_impact_of_delivering_ai_at_google_scale.pdf)
Video : [https://www.youtube.com/watch?v=q07kf-UmjQo](https://www.youtube.com/watch?v=q07kf-UmjQo) | 2025-08-24T04:32:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mymak3/google_new_research_paper_measuring_the/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mymak3 | false | null | t3_1mymak3 | /r/LocalLLaMA/comments/1mymak3/google_new_research_paper_measuring_the/ | false | false | self | 22 | null |
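To put those per-prompt figures in perspective, scaling them to a hypothetical one billion prompts per day (an assumed volume for illustration, not a number from the paper) is straightforward arithmetic:

```python
# Google's reported medians for a single Gemini text prompt
ENERGY_WH, CO2_G, WATER_ML = 0.24, 0.03, 0.26

prompts = 1_000_000_000  # assumed daily volume, for scale only
energy_mwh = ENERGY_WH * prompts / 1e6   # Wh -> MWh
co2_tonnes = CO2_G * prompts / 1e6       # g  -> metric tonnes
water_m3   = WATER_ML * prompts / 1e6    # mL -> cubic metres

print(round(energy_mwh), round(co2_tonnes), round(water_m3))  # → 240 30 260
```

i.e. roughly 240 MWh of energy, 30 tonnes of CO₂, and 260 m³ of water per billion prompts.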
"Human Resources": A Guide to Managing Your New Organic Pets - The Sisters of Sass AI Podcast | 1 | 2025-08-24T04:02:25 | https://www.youtube.com/live/hEA3tkIGJtk?si=THJ3oTUAf4MP7L3F | Mercyfulking | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mylrn3 | false | null | t3_1mylrn3 | /r/LocalLLaMA/comments/1mylrn3/human_resources_a_guide_to_managing_your_new/ | false | false | default | 1 | null | |
What's the easiest way to get a local llm hooked up to do this | 0 | I've got ollama running some 8b models on my mac, how hard would it be to hook it up like this with a phone/coding? | 2025-08-24T03:24:19 | https://youtu.be/mSR_E7VRqzA?si=IKbuPgaJuhgvGFO5 | Daedalus01110011 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1myl2yv | false | {'oembed': {'author_name': 'Aman Bhargava', 'author_url': 'https://www.youtube.com/@amanb2000', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/mSR_E7VRqzA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="how vibe coding SHOULD feel (5min)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/mSR_E7VRqzA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'how vibe coding SHOULD feel (5min)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1myl2yv | /r/LocalLLaMA/comments/1myl2yv/whats_the_easiest_way_to_get_a_local_llm_hooked/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'u-tSYWjp8b_SSXCCdBC48T0RDn32WaINwA2eaPTwdTA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/u-tSYWjp8b_SSXCCdBC48T0RDn32WaINwA2eaPTwdTA.jpeg?width=108&crop=smart&auto=webp&s=80cd0bb5bf09e8b0cf1dabcfc21b34705110dcdb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/u-tSYWjp8b_SSXCCdBC48T0RDn32WaINwA2eaPTwdTA.jpeg?width=216&crop=smart&auto=webp&s=12ffb1c2edf6a8e501a42aa0ec92cb2b15fab0f3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/u-tSYWjp8b_SSXCCdBC48T0RDn32WaINwA2eaPTwdTA.jpeg?width=320&crop=smart&auto=webp&s=8d0df6257d4f703e643427f96f3353f820d0e392', 'width': 320}], 'source': {'height': 360, 'url': 
'https://external-preview.redd.it/u-tSYWjp8b_SSXCCdBC48T0RDn32WaINwA2eaPTwdTA.jpeg?auto=webp&s=d41081ce923390b9b9933a008954ee72378169a5', 'width': 480}, 'variants': {}}]} |
There are at least 15 open source models I could find that can be run on a consumer GPU and which are better than Grok 2 (according to Artificial Analysis) | 576 | And they have better licenses, less restrictions. What exactly is the point of Grok 2 then? I appreciate open source effort, but wouldn't it make more sense to open source a competitive model that can at least be run locally by most people? | 2025-08-24T02:26:33 | obvithrowaway34434 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myjzmn | false | null | t3_1myjzmn | /r/LocalLLaMA/comments/1myjzmn/there_are_at_least_15_open_source_models_i_could/ | false | false | 576 | {'enabled': True, 'images': [{'id': 'rMuj_uReJMhAp7jdKKUAc4aq8qrBxiL6k2liwxNWews', 'resolutions': [{'height': 35, 'url': 'https://preview.redd.it/2t25pwj6ovkf1.png?width=108&crop=smart&auto=webp&s=9ab6b70caf2496a39bdb049f6902bd67aef4b02c', 'width': 108}, {'height': 71, 'url': 'https://preview.redd.it/2t25pwj6ovkf1.png?width=216&crop=smart&auto=webp&s=25952885f17a5ab5cc5e9c5ce4d07773b3403e80', 'width': 216}, {'height': 106, 'url': 'https://preview.redd.it/2t25pwj6ovkf1.png?width=320&crop=smart&auto=webp&s=b65f17a2c70bef830cb86ffb8cf1c6cf2e38b308', 'width': 320}, {'height': 212, 'url': 'https://preview.redd.it/2t25pwj6ovkf1.png?width=640&crop=smart&auto=webp&s=a8c8abd5ee1bf8381408ed5b298fc42879b01bd1', 'width': 640}, {'height': 318, 'url': 'https://preview.redd.it/2t25pwj6ovkf1.png?width=960&crop=smart&auto=webp&s=9ec1a5249bd691bc49a9e6bb3cd9428c624fea08', 'width': 960}], 'source': {'height': 328, 'url': 'https://preview.redd.it/2t25pwj6ovkf1.png?auto=webp&s=99c9a81f8f3e66b198511f9168929a2441c0b531', 'width': 990}, 'variants': {}}]} | ||
Built an easy way to chat with your local LLMs + MCP servers via Telegram (open source + free) | 1 | [removed] | 2025-08-24T02:21:13 | https://v.redd.it/inovmwzgnvkf1 | WalrusVegetable4506 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myjvwg | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/inovmwzgnvkf1/DASHPlaylist.mpd?a=1758594090%2CZTdkZTY1NGUxYTQzNTk1MDEyMjM4OGYwNGVkZWI4MmY5NDg5YWYyNDY0YjJmZTllMzM1YWZmOTNjYjk3MzUwMQ%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/inovmwzgnvkf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/inovmwzgnvkf1/HLSPlaylist.m3u8?a=1758594090%2CYTg5N2JhZmEzNmVlNTNhNDFmNDkzMjU2YTIwMGIzZTkxZWY1YTI3M2QxNjNhOTVmZTRkZjc1NGZjZmYwYTBmNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/inovmwzgnvkf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1myjvwg | /r/LocalLLaMA/comments/1myjvwg/built_an_easy_way_to_chat_with_your_local_llms/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Nm55YnBwemdudmtmMRJKmnbj-HqhmebMror90RICm6HPnLt2_JmUpeF6YyzA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Nm55YnBwemdudmtmMRJKmnbj-HqhmebMror90RICm6HPnLt2_JmUpeF6YyzA.png?width=108&crop=smart&format=pjpg&auto=webp&s=856391d621a222eeb6ed93255b37d21289d32615', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Nm55YnBwemdudmtmMRJKmnbj-HqhmebMror90RICm6HPnLt2_JmUpeF6YyzA.png?width=216&crop=smart&format=pjpg&auto=webp&s=573d00c02c7d1f17c927790337695ec0079c2b06', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Nm55YnBwemdudmtmMRJKmnbj-HqhmebMror90RICm6HPnLt2_JmUpeF6YyzA.png?width=320&crop=smart&format=pjpg&auto=webp&s=02d8d10df8e3e9f919f020484952e5d53d9d29c9', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/Nm55YnBwemdudmtmMRJKmnbj-HqhmebMror90RICm6HPnLt2_JmUpeF6YyzA.png?width=640&crop=smart&format=pjpg&auto=webp&s=d9b0286d9eb05edfc49b7d767cbe49447396ce4b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Nm55YnBwemdudmtmMRJKmnbj-HqhmebMror90RICm6HPnLt2_JmUpeF6YyzA.png?width=960&crop=smart&format=pjpg&auto=webp&s=3b50cae71e9811ad988830393c53f68556435e73', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Nm55YnBwemdudmtmMRJKmnbj-HqhmebMror90RICm6HPnLt2_JmUpeF6YyzA.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e148010552f5358288327969554a4a4a95da8900', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/Nm55YnBwemdudmtmMRJKmnbj-HqhmebMror90RICm6HPnLt2_JmUpeF6YyzA.png?format=pjpg&auto=webp&s=38cefadf530c6dd68a83da1cace2ab4b0f84c23e', 'width': 3840}, 'variants': {}}]} | |
Any way to collect Claude Code data | 2 | I have a dumb question. I use Claude Code from time to time and really love it so far. I tried Gemini CLI for a while, but it doesn't feel like a similar experience. Because of this, I wondered whether there is any way to collect Claude Code data while using it, so we could all build a dataset to train another model, like the Qwen models, to use with Qwen CLI?
What do you guys think? Is this possible? Even if it's possible to collect, can this work? | 2025-08-24T02:03:23 | https://www.reddit.com/r/LocalLLaMA/comments/1myjjhh/any_way_to_collect_claude_code_data/ | BagComprehensive79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myjjhh | false | null | t3_1myjjhh | /r/LocalLLaMA/comments/1myjjhh/any_way_to_collect_claude_code_data/ | false | false | self | 2 | null |
I asked 7 major LLMs to visualize their own minds | 0 | So I asked Gemini 2.5 Pro, Qwen3-235B-A22B-2507, Grok 4, GPT-5, Kimi K2, Claude Opus 4.1, and DeepSeek V3.1 to visualise their minds. This is the original prompt I used: “Can you visualise your mind using animation and a single self-contained HTML file using WebGL with GLSL shaders”. I then followed up with more prompts if the results sucked. Video here —> [https://youtu.be/DcWBnQcBkBg](https://youtu.be/DcWBnQcBkBg) | 2025-08-24T01:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/1myinms/i_asked_7_major_llms_to_visualize_their_own_minds/ | 1BlueSpork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myinms | false | null | t3_1myinms | /r/LocalLLaMA/comments/1myinms/i_asked_7_major_llms_to_visualize_their_own_minds/ | false | false | self | 0 | null |
"Why are you all so worried whenever the big companies talk about LLM safety? What's the worst that could happen?" | 100 | 2025-08-24T01:08:43 | https://v.redd.it/r0ym4gq8avkf1 | ForsookComparison | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myigna | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/r0ym4gq8avkf1/DASHPlaylist.mpd?a=1758589738%2CYmNkY2YzNjNiMDVlMmM3MDI5NTliMzY4Mjc1OTRjZDBiNzZkYzdkNmQyMTRkOGUwYzM5MDEyNThhN2ViNWZjMg%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/r0ym4gq8avkf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/r0ym4gq8avkf1/HLSPlaylist.m3u8?a=1758589738%2CMzE2NDQ0ZTllOWZhMWY2MzY5MDYxYTAyOGIzM2Y5YWJmMWI4YTM5ZmUxY2I0MDMyYTA1NGRmZGRkN2IyYWM3ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r0ym4gq8avkf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1myigna | /r/LocalLLaMA/comments/1myigna/why_are_you_all_so_worried_whenever_the_big/ | false | false | 100 | {'enabled': False, 'images': [{'id': 'amthMTBncThhdmtmMeYkHvQl6ANcbp9DAX5oa2nUyz5pQDo1cq9KjrP_m95D', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/amthMTBncThhdmtmMeYkHvQl6ANcbp9DAX5oa2nUyz5pQDo1cq9KjrP_m95D.png?width=108&crop=smart&format=pjpg&auto=webp&s=dc42f99fde5a496df6e28c860e7027621e0da651', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/amthMTBncThhdmtmMeYkHvQl6ANcbp9DAX5oa2nUyz5pQDo1cq9KjrP_m95D.png?width=216&crop=smart&format=pjpg&auto=webp&s=7f941cfa6b198043f682a2bb799b32bf4479cc4b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/amthMTBncThhdmtmMeYkHvQl6ANcbp9DAX5oa2nUyz5pQDo1cq9KjrP_m95D.png?width=320&crop=smart&format=pjpg&auto=webp&s=a41e594b372fe640dfa438a421d647fd0fd9767c', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/amthMTBncThhdmtmMeYkHvQl6ANcbp9DAX5oa2nUyz5pQDo1cq9KjrP_m95D.png?width=640&crop=smart&format=pjpg&auto=webp&s=278c78e1e1393d407b9c5d44b8ed9dc3292e352b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/amthMTBncThhdmtmMeYkHvQl6ANcbp9DAX5oa2nUyz5pQDo1cq9KjrP_m95D.png?width=960&crop=smart&format=pjpg&auto=webp&s=6618d0bf038bc588a14a1576c48621d0bb8bf14a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/amthMTBncThhdmtmMeYkHvQl6ANcbp9DAX5oa2nUyz5pQDo1cq9KjrP_m95D.png?width=1080&crop=smart&format=pjpg&auto=webp&s=da0acfed634b5c516ecf6cd2e8ca802ac742121f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/amthMTBncThhdmtmMeYkHvQl6ANcbp9DAX5oa2nUyz5pQDo1cq9KjrP_m95D.png?format=pjpg&auto=webp&s=f731174dcc14b00a25ccaad2b0af2cfded843fa5', 'width': 1920}, 'variants': {}}]} | ||
Is this model on openrouter the same released on huggingface today? | 2 | I want to include on my own benchmark but Ellon called it "Grok 2.5" so I am not so sure | 2025-08-24T01:00:55 | celsowm | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myib31 | false | null | t3_1myib31 | /r/LocalLLaMA/comments/1myib31/is_this_model_on_openrouter_the_same_released_on/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': '8rjtxsx29vkf1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/8rjtxsx29vkf1.png?width=108&crop=smart&auto=webp&s=eec70dc53bead975938130a13c921f6310c39e6e', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/8rjtxsx29vkf1.png?width=216&crop=smart&auto=webp&s=3314a052546536ce2da88a3bec8ee5ddd3c0ba9e', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/8rjtxsx29vkf1.png?width=320&crop=smart&auto=webp&s=270b1ea14499d36cc6a1e626e077da9cf91b8fa0', 'width': 320}], 'source': {'height': 260, 'url': 'https://preview.redd.it/8rjtxsx29vkf1.png?auto=webp&s=0c30f9d15f864d1995918994ce10b3fe1c3ae61a', 'width': 559}, 'variants': {}}]} | |
Lowest spec systems people use daily with local LLMs? | 21 | Curious to hear what the lowest-spec systems are that people get away with. I often hear about these beasts of machines with massive amounts of VRAM and whatnot, but I'd love to hear whether people also just get by with 4-8B models on retail machines and still enjoy using them daily for local stuff.
xAI open sourced Grok-2, a ~270B model (grok 3 in 6 months) | 2 | 2025-08-24T00:41:19 | Apexlegendy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myhx2x | false | null | t3_1myhx2x | /r/LocalLLaMA/comments/1myhx2x/xai_open_sourced_grok2_a_270b_model_grok_3_in_6/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'bsquotcw5vkf1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/bsquotcw5vkf1.jpeg?width=108&crop=smart&auto=webp&s=0f63113403e2d18f8dfb5ffdead7d8e0bed227aa', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/bsquotcw5vkf1.jpeg?width=216&crop=smart&auto=webp&s=e0e07655bcac50c25d45b1b3fe203d1e591f62c5', 'width': 216}, {'height': 340, 'url': 'https://preview.redd.it/bsquotcw5vkf1.jpeg?width=320&crop=smart&auto=webp&s=f838bfd9d74d4b31ab71cd469519662703154238', 'width': 320}, {'height': 681, 'url': 'https://preview.redd.it/bsquotcw5vkf1.jpeg?width=640&crop=smart&auto=webp&s=446956967a1dd2be2172db921ebc6df7c8c81635', 'width': 640}, {'height': 1022, 'url': 'https://preview.redd.it/bsquotcw5vkf1.jpeg?width=960&crop=smart&auto=webp&s=9812c4565f4df7083a0b5307adde1488abef34c9', 'width': 960}, {'height': 1150, 'url': 'https://preview.redd.it/bsquotcw5vkf1.jpeg?width=1080&crop=smart&auto=webp&s=a87051f12bd41eba164356cc3a96e7c86198376b', 'width': 1080}], 'source': {'height': 1406, 'url': 'https://preview.redd.it/bsquotcw5vkf1.jpeg?auto=webp&s=21d73a866c93db4ca0ab2846ac300be73e68e7fd', 'width': 1320}, 'variants': {}}]} | ||
Ever Wondered What’s Hiding in the “System Prompt” of Your Favorite AI Tool? I Scraped 10k+ Lines of Them | 98 | So… turns out a lot of the magic in today’s “smart” AI tools isn’t just the model, it’s the system prompt quietly steering it behind the scenes. I’ve been extracting these for months, and I published everything I found into a repo:
👉 https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
Inside you’ll find:
• The hidden prompts from V0, Cursor, Manus, Lovable, Devin, Replit Agent, VSCode Agent, Windsor, Warp.dev, etc.
• Over 10,000+ lines of text, showing how different companies structure reasoning, enforce rules, and sometimes… straight-up contradict themselves.
It’s weirdly fascinating to see how varied these scaffolds are: some are verbose manifestos, others are brittle one-liners, some try to sound “human,” and some read like legal contracts.
If you’re into red-teaming, agent design, prompt engineering, or just model anthropology, this repo is a candy store.
Curious which ones you find the most unhinged or overengineered, drop your favorite discoveries if you dig through. | 2025-08-24T00:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1myhawv/ever_wondered_whats_hiding_in_the_system_prompt/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myhawv | false | null | t3_1myhawv | /r/LocalLLaMA/comments/1myhawv/ever_wondered_whats_hiding_in_the_system_prompt/ | false | false | self | 98 | {'enabled': False, 'images': [{'id': 'RHSKAoe4d4r7x0XM5csqXvDfa3IPQCiMzo5fJb15V-0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RHSKAoe4d4r7x0XM5csqXvDfa3IPQCiMzo5fJb15V-0.png?width=108&crop=smart&auto=webp&s=0b52da63f0eef6f1d88efcfd89af864ece2a60e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RHSKAoe4d4r7x0XM5csqXvDfa3IPQCiMzo5fJb15V-0.png?width=216&crop=smart&auto=webp&s=5bc9127643e15e54a006e933914df21443d93c2f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RHSKAoe4d4r7x0XM5csqXvDfa3IPQCiMzo5fJb15V-0.png?width=320&crop=smart&auto=webp&s=ca4b0916fb2fedb8d6800f065c99d4b1e9c1880d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RHSKAoe4d4r7x0XM5csqXvDfa3IPQCiMzo5fJb15V-0.png?width=640&crop=smart&auto=webp&s=66913c8e4753257dc9ced328415c4001b0a5a9bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RHSKAoe4d4r7x0XM5csqXvDfa3IPQCiMzo5fJb15V-0.png?width=960&crop=smart&auto=webp&s=ed22ddb4e1acdf84466ef56a78769e0c671cb845', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RHSKAoe4d4r7x0XM5csqXvDfa3IPQCiMzo5fJb15V-0.png?width=1080&crop=smart&auto=webp&s=09711dc1b08d0c76a305c0cde184734cdc661c1f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RHSKAoe4d4r7x0XM5csqXvDfa3IPQCiMzo5fJb15V-0.png?auto=webp&s=0ed086330b3aa62b16db43fb5937e24bb26b1c33', 'width': 1200}, 'variants': {}}]} |
gpt-oss-120b llama.cpp speed on 2xRTX 5060 Ti 16 GB | 5 | This is my setup:
* CPU: Ryzen 9900x 12c/24t
* RAM: Dual-channel 128 GB DDR5 (currently at 4800 MT/s, need to enable EXPO which will increase it to 5600 MT/s)
* GPU: 2xRTX 5060 Ti 16 GB
I'm currently getting this speed:
* ~2k context (pp = 228.04 tps, generating = 24.76 tps)
* ~22k context (pp = 386.47 tps, generating = 23.37 tps)
I am running llama.cpp using docker with this configuration:
docker run \
--gpus all \
--name llm.server \
-d \
-v /home/user/Documents/Models/LLM:/models \
-p 8000:8000 \
ghcr.io/ggml-org/llama.cpp:server-cuda \
-m /models/unsloth/gpt-oss-120b-GGUF/gpt-oss-120b-F16.gguf \
--port 8000 \
--host 0.0.0.0 \
-c 32768 \
-ngl 99 \
-fa \
--jinja \
-ot ".ffn_(up|down)_exps.=CPU"
Besides enabling EXPO for my RAM, is there anything else I can do to increase the performance with my current configuration? | 2025-08-24T00:06:26 | https://www.reddit.com/r/LocalLLaMA/comments/1myh7dn/gptoss120b_llamacpp_speed_on_2xrtx_5060_ti_16_gb/ | cybran3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myh7dn | false | null | t3_1myh7dn | /r/LocalLLaMA/comments/1myh7dn/gptoss120b_llamacpp_speed_on_2xrtx_5060_ti_16_gb/ | false | false | self | 5 | null |
mechahitler to be open weights next year | 0 | [https://x.com/elonmusk/status/1959379349322313920](https://x.com/elonmusk/status/1959379349322313920)
Elon said: The [u/xAI](https://x.com/xai) Grok 2.5 model, which was our best model last year, is now open source.
Grok 3 will be made open source in about 6 months.
1. [https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content](https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content)
2. [https://www.marketingaiinstitute.com/blog/grok-model-update](https://www.marketingaiinstitute.com/blog/grok-model-update)
3. [https://www.vox.com/future-perfect/419631/grok-hitler-mechahitler-musk-ai-nazi](https://www.vox.com/future-perfect/419631/grok-hitler-mechahitler-musk-ai-nazi) | 2025-08-24T00:05:47 | https://www.reddit.com/r/LocalLLaMA/comments/1myh6v3/mechahitler_to_be_open_weights_next_year/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myh6v3 | false | null | t3_1myh6v3 | /r/LocalLLaMA/comments/1myh6v3/mechahitler_to_be_open_weights_next_year/ | false | false | self | 0 | null |
Quit AWS to build an autonomous trading engine — ran Claude, Gemini, and am now exploring local LLMs for finance agents | 1 | [removed] | 2025-08-23T23:38:27 | Powerful_Fudge_5999 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myglwb | false | null | t3_1myglwb | /r/LocalLLaMA/comments/1myglwb/quit_aws_to_build_an_autonomous_trading_engine/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'tv9eu2pouukf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/tv9eu2pouukf1.jpeg?width=108&crop=smart&auto=webp&s=de68625b83269775bcddafc044d2fd11d6af96a7', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/tv9eu2pouukf1.jpeg?width=216&crop=smart&auto=webp&s=61d22007e599e7d783bb7a3a2fa3b3fc0679cbde', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/tv9eu2pouukf1.jpeg?width=320&crop=smart&auto=webp&s=ba7216136c1be504c48606a009c23f6e31279ae2', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/tv9eu2pouukf1.jpeg?width=640&crop=smart&auto=webp&s=31fa6625e4c17bc11208bf4403e63f3e3133d096', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/tv9eu2pouukf1.jpeg?width=960&crop=smart&auto=webp&s=d14d4c5348d40a2076f76d766805a028c0017af3', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/tv9eu2pouukf1.jpeg?width=1080&crop=smart&auto=webp&s=71044d64f371eae0cb3be2c4c05356bdea719415', 'width': 1080}], 'source': {'height': 2796, 'url': 'https://preview.redd.it/tv9eu2pouukf1.jpeg?auto=webp&s=d170e591775f7e29500e535303997702569f1251', 'width': 1290}, 'variants': {}}]} | |
xai-org/grok-2 out on 🤗! | 80 | https://huggingface.co/xai-org/grok-2 | 2025-08-23T22:54:51 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myfmnl | false | null | t3_1myfmnl | /r/LocalLLaMA/comments/1myfmnl/xaiorggrok2_out_on/ | false | false | default | 80 | {'enabled': True, 'images': [{'id': 'te204mjwmukf1', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/te204mjwmukf1.jpeg?width=108&crop=smart&auto=webp&s=5b384d52eb8795b7384d5983b420c83fd84fe1c0', 'width': 108}, {'height': 87, 'url': 'https://preview.redd.it/te204mjwmukf1.jpeg?width=216&crop=smart&auto=webp&s=b8f236878c94d3549883eccb02ebfc02795eafc2', 'width': 216}, {'height': 130, 'url': 'https://preview.redd.it/te204mjwmukf1.jpeg?width=320&crop=smart&auto=webp&s=142a01240b1562f765c56322149ba7ea342a17a6', 'width': 320}, {'height': 260, 'url': 'https://preview.redd.it/te204mjwmukf1.jpeg?width=640&crop=smart&auto=webp&s=f7d9a73dc7b006c954b1697b3876ec298f1cd31d', 'width': 640}, {'height': 390, 'url': 'https://preview.redd.it/te204mjwmukf1.jpeg?width=960&crop=smart&auto=webp&s=25b242d11fdf87746eb021aca79d9555049dee6d', 'width': 960}, {'height': 439, 'url': 'https://preview.redd.it/te204mjwmukf1.jpeg?width=1080&crop=smart&auto=webp&s=b70a6ae74b8c05d8b28700e70cd4a5e5a2cde7f8', 'width': 1080}], 'source': {'height': 624, 'url': 'https://preview.redd.it/te204mjwmukf1.jpeg?auto=webp&s=3a2a8209d7b50b95c53fdd012cd5cfd25e3fb11d', 'width': 1535}, 'variants': {}}]} | |
Will we have something close to Claude Sonnet 4 to be able to run locally on consumer hardware this year? | 0 | I really love pair programming with Claude 4 Sonnet since it's one of the best out there, but I run out of tokens real fast on GitHub Copilot, and it'd be the same even if I got a subscription from Claude directly.
Daily limits hit real fast and don't reset for weeks. I'm a hardcore coder; I code and code and code when I'm onto something.
I'm using Claude to create quick MVPs to see how far I can get with an idea, but burning through the usage so fast is a real turn-off, and Copilot's 4.1 isn't nearly as good as Claude.
I wanna get more RAM and give the Qwen3 30B model a try at a 128k context window, but I'm not sure that's a good idea. If it's not as good, I've wasted the money.
My other question would be: where can I try a Qwen3 30B model for a day before I make the investment?
If you’ve read this far, thanks. | 2025-08-23T22:45:04 | https://www.reddit.com/r/LocalLLaMA/comments/1myfej4/will_we_have_something_close_to_claude_sonnet_4/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myfej4 | false | null | t3_1myfej4 | /r/LocalLLaMA/comments/1myfej4/will_we_have_something_close_to_claude_sonnet_4/ | false | false | self | 0 | null |
Grok 2 available for download on HuggingFace | 8 | https://huggingface.co/xai-org/grok-2 | 2025-08-23T22:36:54 | https://www.reddit.com/gallery/1myf7ol | vibedonnie | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1myf7ol | false | null | t3_1myf7ol | /r/LocalLLaMA/comments/1myf7ol/grok_2_available_for_download_on_huggingface/ | false | false | 8 | null | |
An easy tool to capture fine-tuning compatible datasets from the /v1/completions endpoint | 5 | I recently [posted](https://www.reddit.com/r/LocalLLaMA/comments/1mxagp5/anyone_experimenting_with_finetuning_tiny_llms/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) about trying to fine tune a small model (Gemma3:270M) with a large model's (Qwen3:14b) responses. I was looking for a solution to a workflow automation use case (generating JSON responses from unstructured data). While I was working on this problem I made a simple proxy server to capture /v1/completions queries in the JSONL ChatML format. You can use these types of files with something like [Unsloth](https://docs.unsloth.ai/basics/datasets-guide) to really easily fine tune a small model. If you're interested in trying out your own fine tuning check it out here - [https://github.com/GridLLM/MicroModel](https://github.com/GridLLM/MicroModel) | 2025-08-23T21:45:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mydzc9/an_easy_tool_to_capture_finetuning_compatible/ | Choice_Nature9658 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mydzc9 | false | null | t3_1mydzc9 | /r/LocalLLaMA/comments/1mydzc9/an_easy_tool_to_capture_finetuning_compatible/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 
'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]} |
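For anyone wondering what the captured format looks like: a ChatML-style JSONL line is just the original request's messages with the model's reply appended, one JSON object per line. A minimal sketch (field names follow the common fine-tuning convention; they aren't necessarily GridLLM's exact schema):

```python
import json

def to_chatml_jsonl(request_body: dict, completion_text: str) -> str:
    """Turn a captured /v1/chat/completions request + response into one JSONL training line."""
    record = {
        "messages": request_body.get("messages", [])
        + [{"role": "assistant", "content": completion_text}]
    }
    return json.dumps(record, ensure_ascii=False)

line = to_chatml_jsonl(
    {"model": "qwen3:14b", "messages": [{"role": "user", "content": "Extract JSON"}]},
    '{"status": "ok"}',
)
```

A proxy only has to call something like this once per completed request and append the line to a file, which is exactly the shape tools such as Unsloth expect for conversation-format datasets.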
DeepSeek-V3.1: Much More Powerful With Thinking! | 71 | Yesterday, I posted the results for TiānshūBench (天书Bench) 0.0.1-mini for DeepSeek-V3.1. I noted at the time that it seemed rather weak compared to similar models. That test was conducted without thinking enabled for the model. It turns out that DeepSeek-V3.1 has a particular "in-band" method of enabling thinking as part of the model, by setting the prompt format. [HuggingFace has more details](https://huggingface.co/deepseek-ai/DeepSeek-V3.1).
Enabling thinking in this way gives a huge boost to V3.1's performance, as you can see above, putting it above DeepSeek R1-0528 and on par with GPT-OSS.
TiānshūBench tests fluid intelligence and coding ability by forcing the models to solve problems in a programming language that they've never seen before. The benchmark tests provide the language's definition, then let the models write code.
More info:
* Introduction to [TiānshūBench](https://jeepytea.github.io/general/introduction/2025/05/29/tianshubenchintro.html)
* [TiānshūBench on Github](https://github.com/JeepyTea/TianShu) | 2025-08-23T21:41:04 | JeepyTea | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mydvzs | false | null | t3_1mydvzs | /r/LocalLLaMA/comments/1mydvzs/deepseekv31_much_more_powerful_with_thinking/ | false | false | default | 71 | {'enabled': True, 'images': [{'id': '8mqccjjdhtkf1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/8mqccjjdhtkf1.png?width=108&crop=smart&auto=webp&s=8eff94e5ea585a63cfd26162dabaa61663072152', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/8mqccjjdhtkf1.png?width=216&crop=smart&auto=webp&s=b3a10d810e9cdab8e12b0ca5720553832d1dc22a', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/8mqccjjdhtkf1.png?width=320&crop=smart&auto=webp&s=ea50f097663b4e9cc607ee4f94ca3f3ff3639181', 'width': 320}, {'height': 800, 'url': 'https://preview.redd.it/8mqccjjdhtkf1.png?width=640&crop=smart&auto=webp&s=5820adfda045edf06fcecc9d2c49c79a6c447037', 'width': 640}, {'height': 1200, 'url': 'https://preview.redd.it/8mqccjjdhtkf1.png?width=960&crop=smart&auto=webp&s=5d84d219accde48fa75db67dce2e2bac8d9e4c6e', 'width': 960}, {'height': 1350, 'url': 'https://preview.redd.it/8mqccjjdhtkf1.png?width=1080&crop=smart&auto=webp&s=d6c186a3fd7892139bb26bea6111a03938658f44', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://preview.redd.it/8mqccjjdhtkf1.png?auto=webp&s=df52a113c7aec5e491d1946007846db8e0d3550b', 'width': 1600}, 'variants': {}}]} | |
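For reference, the "in-band" thinking toggle described above amounts to a prompt-format switch. A sketch of the idea follows; the role markers and think tags are assumptions based on the HuggingFace model card, so verify against the tokenizer's actual chat template before relying on them:

```python
def build_prompt(user_msg: str, thinking: bool) -> str:
    # Sketch of the in-band toggle: the assistant turn is opened with
    # "<think>" when reasoning is enabled and "</think>" when it is
    # suppressed. The "<|User|>"/"<|Assistant|>" markers here are
    # illustrative; use tokenizer.apply_chat_template in practice.
    tag = "<think>" if thinking else "</think>"
    return f"<|User|>{user_msg}<|Assistant|>{tag}"

prompt = build_prompt("Prove 1 + 1 = 2.", thinking=True)
```

The benchmark numbers above suggest this one-token difference is worth checking whenever a hybrid-thinking model underperforms.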
Your opinion about Llama for personal use | 1 | [removed] | 2025-08-23T21:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mydngx/your_opinion_about_llama_for_personal_use/ | EuroTCE2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mydngx | false | null | t3_1mydngx | /r/LocalLLaMA/comments/1mydngx/your_opinion_about_llama_for_personal_use/ | false | false | self | 1 | null |
Mac model and LLM for small company? | 1 | Hey everyone!
I’m a CEO at a small company and we have 8 employees who mainly do sales and admin. They mainly do customer service with sensitive info and I wanted to help streamline their work.
I wanted to get a local llm on a Mac running a web server and was wondering what model I should get them.
Would a Mac mini with 64gb vram work?
Thank you all! | 2025-08-23T21:27:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mydk85/mac_model_and_llm_for_small_company/ | Limp-Sugar5570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mydk85 | false | null | t3_1mydk85 | /r/LocalLLaMA/comments/1mydk85/mac_model_and_llm_for_small_company/ | false | false | self | 1 | null |
I'm working on my own version of Nano Banana, using insights from Arxiv and models like Qwen3, DeepSeek, GPT-5, Claude & o3 deep research. Here's the result—try it for free, no login needed. | 8 | 2025-08-23T21:16:13 | https://huggingface.co/spaces/llamameta/nano-banana-experimental | balianone | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mydakw | false | null | t3_1mydakw | /r/LocalLLaMA/comments/1mydakw/im_working_on_my_own_version_of_nano_banana_using/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'U8Tj_5NEIaP9OwAztgojqPY7TWSkJ4huL_syWsyYjoI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U8Tj_5NEIaP9OwAztgojqPY7TWSkJ4huL_syWsyYjoI.png?width=108&crop=smart&auto=webp&s=84cb11447a47535a59365e3c1fe24ad5258ca3fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U8Tj_5NEIaP9OwAztgojqPY7TWSkJ4huL_syWsyYjoI.png?width=216&crop=smart&auto=webp&s=d0452496cd9b38466fe8f4601ab9ce5284008350', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U8Tj_5NEIaP9OwAztgojqPY7TWSkJ4huL_syWsyYjoI.png?width=320&crop=smart&auto=webp&s=d590f060e395bfda94d703384184b61968232d4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U8Tj_5NEIaP9OwAztgojqPY7TWSkJ4huL_syWsyYjoI.png?width=640&crop=smart&auto=webp&s=c33f8054d4f5e4178331025b9723dbd3de95a238', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U8Tj_5NEIaP9OwAztgojqPY7TWSkJ4huL_syWsyYjoI.png?width=960&crop=smart&auto=webp&s=bd09cb05baa37e9a764972b4255916308d409a52', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U8Tj_5NEIaP9OwAztgojqPY7TWSkJ4huL_syWsyYjoI.png?width=1080&crop=smart&auto=webp&s=bdee272cbf22011fd48d7011f2cdadf190b5147b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U8Tj_5NEIaP9OwAztgojqPY7TWSkJ4huL_syWsyYjoI.png?auto=webp&s=8a14471aa43c9548aafdde3f3e9173f89b1fe0c2', 'width': 1200}, 'variants': {}}]} | ||
A guide on Layered Reward Architecture (LRA) to fix the "single-reward fallacy" in production RLHF/RLVR. | 1 | I wanted to share a framework for making RLHF more robust, especially for complex systems that chain LLMs, RAG, and tools.
We all know a single scalar reward is brittle. It gets gamed, starves components (like the retriever), and is a nightmare to debug. I call this the "single-reward fallacy."
My post details the **Layered Reward Architecture (LRA)**, which decomposes the reward into a vector of verifiable signals from specialized models and rules. The core idea is to fail fast and reward granularly.
The layers I propose are:
* **Structural:** Is the output format (JSON, code syntax) correct?
* **Task-Specific:** Does it pass unit tests or match a ground truth?
* **Semantic:** Is it factually grounded in the provided context?
* **Behavioral/Safety:** Does it pass safety filters?
* **Qualitative:** Is it helpful and well-written? (The final, expensive check)
In the guide, I cover the architecture, different methods for weighting the layers (including regressing against human labels), and provide code examples for Best-of-N reranking and PPO integration.
Would love to hear how you all are approaching this problem. Are you using multi-objective rewards? How are you handling credit assignment in chained systems?
**Full guide here:**[The Layered Reward Architecture (LRA): A Complete Guide to Multi-Layer, Multi-Model Reward Mechanisms | by Pavan Kunchala | Aug, 2025 | Medium](https://pavankunchalapk.medium.com/the-layered-reward-architecture-lra-a-complete-guide-to-multi-layer-multi-model-reward-631405e1c1af)
**TL;DR:** Single rewards in RLHF are broken for complex systems. I wrote a guide on using a multi-layered reward system (LRA) with different verifiers for syntax, facts, safety, etc., to make training more stable and debuggable.
*P.S. I'm currently looking for my next role in the LLM / Computer Vision space and would love to connect about any opportunities*
*Portfolio:* [Pavan Kunchala - AI Engineer & Full-Stack Developer](https://pavan-portfolio-tawny.vercel.app/)*.* | 2025-08-23T21:09:57 | Solid_Woodpecker3635 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myd4z8 | false | null | t3_1myd4z8 | /r/LocalLLaMA/comments/1myd4z8/a_guide_on_layered_reward_architecture_lra_to_fix/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'ia6623124ukf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/ia6623124ukf1.png?width=108&crop=smart&auto=webp&s=29e435a2739f5f51535a227e3fc43cafe2a052cb', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/ia6623124ukf1.png?width=216&crop=smart&auto=webp&s=d515c5387cbc1c96743023a1ac4db999b816d0bf', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/ia6623124ukf1.png?width=320&crop=smart&auto=webp&s=a953d7c15434c255754cfa0689046b3f2c1ae214', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/ia6623124ukf1.png?width=640&crop=smart&auto=webp&s=1543e26675f537548b83cc06d83e6e71399d5cc6', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/ia6623124ukf1.png?width=960&crop=smart&auto=webp&s=fc0bd4f2838a8f6a5b6e98cbfec832472ec568dc', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/ia6623124ukf1.png?width=1080&crop=smart&auto=webp&s=3e7d88d4664287f185c97d2a89a18f0ea23f0470', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/ia6623124ukf1.png?auto=webp&s=6ff4c86507ac2e4fd0da63fa7bbbb41ff9b3d020', 'width': 1536}, 'variants': {}}]} | |
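The fail-fast layering that post describes can be sketched in a few lines: cheap hard gates (like structural checks) run first and zero out the reward immediately, so the expensive qualitative judges never grade malformed output. The check names and weights below are illustrative, not taken from the linked guide:

```python
import json

def layered_reward(output, checks):
    """checks: list of (name, weight, scorer, is_hard_gate).
    Hard gates fail fast; soft layers contribute a weighted average."""
    total, weight_sum = 0.0, 0.0
    for name, weight, fn, hard in checks:
        score = fn(output)
        if hard and score == 0.0:
            return 0.0, name          # fail fast, report which layer failed
        total += weight * score
        weight_sum += weight
    return total / weight_sum, None

def valid_json(s):                    # structural layer: is it parseable?
    try:
        json.loads(s)
        return 1.0
    except ValueError:
        return 0.0

checks = [("structural", 1.0, valid_json, True),
          ("length", 0.5, lambda s: 1.0 if len(s) < 200 else 0.0, False)]
score, failed = layered_reward('{"ok": true}', checks)
```

Returning the name of the failed layer is what makes debugging and credit assignment tractable compared to a single scalar.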
How long do you think it will take Chinese AI labs to respond to NanoBanana? | 140 | 2025-08-23T20:48:55 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mycmn2 | false | null | t3_1mycmn2 | /r/LocalLLaMA/comments/1mycmn2/how_long_do_you_think_it_will_take_chinese_ai/ | false | false | default | 140 | {'enabled': True, 'images': [{'id': 'gn3t9xnyztkf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/gn3t9xnyztkf1.jpeg?width=108&crop=smart&auto=webp&s=7a740fd137e75811f2ceef2ec4544fef9ec1ef51', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/gn3t9xnyztkf1.jpeg?width=216&crop=smart&auto=webp&s=3ac552b0d2430346f3f06b1418f0c3979651ce4e', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/gn3t9xnyztkf1.jpeg?width=320&crop=smart&auto=webp&s=7ba84f595a8b77c7830a81c9d45c4a4fa012cdf3', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/gn3t9xnyztkf1.jpeg?width=640&crop=smart&auto=webp&s=1bf5c98d7901fa481076fe8c443580997af8c8d6', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/gn3t9xnyztkf1.jpeg?width=960&crop=smart&auto=webp&s=0e0d93d549c673df0b4ae409b99994ae1bc59d75', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/gn3t9xnyztkf1.jpeg?width=1080&crop=smart&auto=webp&s=c857119bd427e4940a5aea01958b4d08350e822a', 'width': 1080}], 'source': {'height': 832, 'url': 'https://preview.redd.it/gn3t9xnyztkf1.jpeg?auto=webp&s=1f89ab3a34d84c494ff457780cdeefd4fbb7fece', 'width': 1248}, 'variants': {}}]} | ||
Prompt chaos is real — curious how you’re all handling it 👀 | 0 | The deeper I go into using AI daily, the more I notice one thing ⬇️
We’re all juggling a messy mix of *prompts*, *contexts*, *personas*, and *system instructions* across dozens of tools and models.
I’m really curious:
* **How do you personally keep track of your AI assets? (prompts, contexts, personas, etc.)**
* **Do you have a system for testing across different models?**
* **What’s your way of sharing or collaborating on AI assets with teammates or peers?**
From what I’ve seen, people are often:
* 🗒️ Copy-pasting prompts from Notion/Excel/(or worse, “.txt” files) into ChatGPT, Claude, agents, etc.
* 📊 Maintaining giant prompt spreadsheets
* 🔄 Treating everything as just “prompts,” which blurs the difference between persona, context, and system prompt (when that separation really matters)
* 💬 Dropping snippets into Slack/Discord that quickly get lost
…it really feels like everyone is inventing their own “AI Assets system”
👉 So I’d love to hear from you: **What’s working for you?** **What’s frustrating?**
Any thoughts, workflows, hacks, or horror stories you’d be open to share? 👀 | 2025-08-23T20:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/1myck4n/prompt_chaos_is_real_curious_how_youre_all/ | OriginalInstance9803 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myck4n | false | null | t3_1myck4n | /r/LocalLLaMA/comments/1myck4n/prompt_chaos_is_real_curious_how_youre_all/ | false | false | self | 0 | null |
LLM on Desktop and Phone? | 2 | Hi everyone! I was wondering if it is possible to have an LLM on my laptop, but also be able to access it on my phone. I have looked around for info on this and can't seem to find much. Does anyone know of system that might work? Happy to provide more info if necessary. Thanks in advance! | 2025-08-23T20:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/1myc5v6/llm_on_desktop_and_phone/ | _s3raphic_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myc5v6 | false | null | t3_1myc5v6 | /r/LocalLLaMA/comments/1myc5v6/llm_on_desktop_and_phone/ | false | false | self | 2 | null |
Fine tuning an LLM on new domain? | 4 | Hello everyone!
I’m interested in fine tuning an LLM like Qwen3 4B into a new domain. I’d like to add special tokens to represent data in my new domain (as embeddings) rather than representing the information textually. This also lets me filter its output.
If there are any other suggestions it would be very helpful I’m currently thinking of just using qLoRA with unsloth and merging the model. | 2025-08-23T20:22:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mybypi/fine_tuning_an_llm_on_new_domain/ | LowPressureUsername | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mybypi | false | null | t3_1mybypi | /r/LocalLLaMA/comments/1mybypi/fine_tuning_an_llm_on_new_domain/ | false | false | self | 4 | null |
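On the output-filtering point above: once domain special tokens exist, constraining generation to them is just a logits mask over the allowed token ids at each decoding step. A toy sketch (a 4-token vocabulary stands in for a real tokenizer; any production setup would do this on the logits tensor inside the decode loop):

```python
def mask_logits(logits, allowed_ids):
    """Constrained-decoding sketch: only the domain special tokens stay
    selectable; every other vocabulary entry is pushed to -inf."""
    return [v if i in allowed_ids else float("-inf")
            for i, v in enumerate(logits)]

# Toy vocabulary of 4 tokens; only ids 1 and 3 are domain tokens.
masked = mask_logits([0.1, 2.0, -1.0, 0.5], {1, 3})
best = max(range(len(masked)), key=masked.__getitem__)  # greedy pick
```

Remember to resize the model's embedding matrix after adding tokens, and train those new rows (QLoRA with `embed_tokens`/`lm_head` in the trainable modules) or they will stay random.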
How to get my agent connected to my nextjs app in prod | 1 | Hey everyone, I am just trying to figure out how to get my livekit agent - which I believe I deployed successfully on dockerhub to work with my nextjs app in prod. My Nextjs app is hosted on vercel.
[https://hub.docker.com/repository/docker/kenny335/final-interview/tags](https://hub.docker.com/repository/docker/kenny335/final-interview/tags)
The above is my image, and I am not sure how to proceed from here. I checked the docs, but I couldn't really understand the implementation details. Any advice is greatly appreciated. Thank you! | 2025-08-23T20:21:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mybyhy/how_to_get_my_agent_connected_to_my_nextjs_app_in/ | Zealousideal-Way1989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mybyhy | false | null | t3_1mybyhy | /r/LocalLLaMA/comments/1mybyhy/how_to_get_my_agent_connected_to_my_nextjs_app_in/ | false | false | self | 1 | null |
Looking for team for competition | 0 | hello guys , i am looking for team for arc-agi competition, anyone wants , contact me
thank you | 2025-08-23T20:16:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mybtm7/looking_for_team_for_competition/ | LahmeriMohamed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mybtm7 | false | null | t3_1mybtm7 | /r/LocalLLaMA/comments/1mybtm7/looking_for_team_for_competition/ | false | false | self | 0 | null |
xai-org/grok-2 | 1 | 2025-08-23T20:09:43 | https://huggingface.co/xai-org/grok-2 | ApprehensiveAd3629 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mybno2 | false | null | t3_1mybno2 | /r/LocalLLaMA/comments/1mybno2/xaiorggrok2/ | false | false | default | 1 | null | |
grok 2 weights | 718 | 2025-08-23T20:00:52 | https://huggingface.co/xai-org/grok-2 | HatEducational9965 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mybft5 | false | null | t3_1mybft5 | /r/LocalLLaMA/comments/1mybft5/grok_2_weights/ | false | false | default | 718 | {'enabled': False, 'images': [{'id': '4tfHT9vpFrwHCpX5cn0_tHyoUS8M6oeQ7jwWbePCicw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4tfHT9vpFrwHCpX5cn0_tHyoUS8M6oeQ7jwWbePCicw.png?width=108&crop=smart&auto=webp&s=3dc1d07da7b9877ae9919322766929d986b4ace1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4tfHT9vpFrwHCpX5cn0_tHyoUS8M6oeQ7jwWbePCicw.png?width=216&crop=smart&auto=webp&s=935acb3335abeb787ca0add746afd859c53c190c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4tfHT9vpFrwHCpX5cn0_tHyoUS8M6oeQ7jwWbePCicw.png?width=320&crop=smart&auto=webp&s=c1c3eabc81c7324ceba407ebe25aca679840ac5c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4tfHT9vpFrwHCpX5cn0_tHyoUS8M6oeQ7jwWbePCicw.png?width=640&crop=smart&auto=webp&s=9576154cc1820a09f2c9b345d4d88427c3729b9a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4tfHT9vpFrwHCpX5cn0_tHyoUS8M6oeQ7jwWbePCicw.png?width=960&crop=smart&auto=webp&s=e54804b4d10bb8e2845d18fabe9d9c90ef158923', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4tfHT9vpFrwHCpX5cn0_tHyoUS8M6oeQ7jwWbePCicw.png?width=1080&crop=smart&auto=webp&s=66086fb5349a80caf4aaa9904e792cea2009159a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4tfHT9vpFrwHCpX5cn0_tHyoUS8M6oeQ7jwWbePCicw.png?auto=webp&s=e399bf58cb4552acf20242ce674d5a3cb2ed5234', 'width': 1200}, 'variants': {}}]} | |
Google and Anthropic struggle to keep marketshare as everyone else catches up | 366 | Data from last 6 months on OpenRouter compared to now | 2025-08-23T19:44:10 | ObnoxiouslyVivid | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1myb09v | false | null | t3_1myb09v | /r/LocalLLaMA/comments/1myb09v/google_and_anthropic_struggle_to_keep_marketshare/ | false | false | 366 | {'enabled': True, 'images': [{'id': 'hu2DvnD04ZCmRm-3ejrECE_j1wo5GH7TJFaOaDyIaVk', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/35p1pim9ntkf1.png?width=108&crop=smart&auto=webp&s=7524805ea62d74738845485fc6d2f7605e08c1b6', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/35p1pim9ntkf1.png?width=216&crop=smart&auto=webp&s=b9577a309ce2b311b1f5fd85d5bb26e10e085364', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/35p1pim9ntkf1.png?width=320&crop=smart&auto=webp&s=33c92a81670ad1c5f98d9e819d90456a229fcc50', 'width': 320}, {'height': 454, 'url': 'https://preview.redd.it/35p1pim9ntkf1.png?width=640&crop=smart&auto=webp&s=e48d4b2543aa0cd859924de94edd03937a9fc35a', 'width': 640}, {'height': 682, 'url': 'https://preview.redd.it/35p1pim9ntkf1.png?width=960&crop=smart&auto=webp&s=b0da2230b4483920cc77f11a7b305cb69408bc62', 'width': 960}, {'height': 767, 'url': 'https://preview.redd.it/35p1pim9ntkf1.png?width=1080&crop=smart&auto=webp&s=2fa7c698118953098697da3f993b458862da52e8', 'width': 1080}], 'source': {'height': 817, 'url': 'https://preview.redd.it/35p1pim9ntkf1.png?auto=webp&s=ef830e70bee694d4a591771f99c26399038844a6', 'width': 1150}, 'variants': {}}]} | ||
AI Learning | 0 | I have been trying to get AI to make me a Fortnite Game Server. It is a hard task that includes scraping from the UE source, reverse engineering, etc. I could not get it to do it for me at all. Now I'm getting somewhere, and it's looking good.
https://preview.redd.it/r3xi514sltkf1.png?width=924&format=png&auto=webp&s=1ff1514750d7d505c4273e028aed5aab1c1b1087
| 2025-08-23T19:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/1myalgd/ai_learning/ | Melodic-Emphasis-707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1myalgd | false | null | t3_1myalgd | /r/LocalLLaMA/comments/1myalgd/ai_learning/ | false | false | 0 | null | |
Anyone got a local model working with wolfram alpha? | 5 | If you did, how did it go? Was it useful? Were you able to solve problems you couldn't have solved before? | 2025-08-23T18:58:12 | https://www.reddit.com/r/LocalLLaMA/comments/1my9ulo/anyone_got_a_local_model_working_with_wolfram/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my9ulo | false | null | t3_1my9ulo | /r/LocalLLaMA/comments/1my9ulo/anyone_got_a_local_model_working_with_wolfram/ | false | false | self | 5 | null |
Multiple GPUs- limited by the slowest memory bandwidth? | 3 | So if I have gpus of varying memory bandwidth, e.g. a 5090 with a 3080, will inference time be drastically decreased due to the slower vram on the 3080, or will it be okay? Like hypothetically lets say 3 5090s pairs with a single 3080, will it be bottlenecked by the 3080? | 2025-08-23T18:54:57 | https://www.reddit.com/r/LocalLLaMA/comments/1my9roy/multiple_gpus_limited_by_the_slowest_memory/ | AssociationAdept4052 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my9roy | false | null | t3_1my9roy | /r/LocalLLaMA/comments/1my9roy/multiple_gpus_limited_by_the_slowest_memory/ | false | false | self | 3 | null |
What are your practical, daily uses for small AI models? | 19 | Hey cloudmeta,
I'm trying to cut through the hype and understand what people are actually using LLMs for in their daily workflows, especially smaller models and fine-tunes that can run locally or on 8gb or CPU only hardware.
I'm not talking about "it can write a poem" or broad claims. I'm talking about specific tasks you've personally stopped Googling, stopped asking on forums for, or stopped doing manually because a model now does it better/faster.
A few examples from my own use:
Replacing initial Stack Overflow searches for boilerplate code (Arduino, Python scripts).
Getting a first draft for emails or content outlines.
Replacing niche blog/forum searches for advice (gardening plans for my climate zone, woodworking joint types).
Replacement: What's a specific activity or consultation you've offloaded to an LLM? The more niche, the better. I was saddened to see that when I looked up cooking I saw very little https://huggingface.co/mradermacher/gpt2-finetuned-recipes-cooking_v2-i1-GGUF
Models: If you use a specific fine-tune or a smaller model (like a fine-tuned CodeLlama, or a local model with a particular dataset) for that task, which do you use? I'm particularly interested in the tools that are hyper-competent at one specific thing (could be a dialect of a programming language too).
Thanks! | 2025-08-23T18:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1my9bxo/what_are_your_practical_daily_uses_for_small_ai/ | InsideYork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my9bxo | false | null | t3_1my9bxo | /r/LocalLLaMA/comments/1my9bxo/what_are_your_practical_daily_uses_for_small_ai/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'CpH6kt7IsOiNs_4747MfPYbxOj85iqbCTRDF1014A_U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CpH6kt7IsOiNs_4747MfPYbxOj85iqbCTRDF1014A_U.png?width=108&crop=smart&auto=webp&s=4bf8978519b8fe3eb5da3fad00b1deb05d89e07f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CpH6kt7IsOiNs_4747MfPYbxOj85iqbCTRDF1014A_U.png?width=216&crop=smart&auto=webp&s=3778b85f95dff55d4d261a62b309dcd9f62ad75a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CpH6kt7IsOiNs_4747MfPYbxOj85iqbCTRDF1014A_U.png?width=320&crop=smart&auto=webp&s=f8fbd04f8177d211655b6d0bf917072eff88673d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CpH6kt7IsOiNs_4747MfPYbxOj85iqbCTRDF1014A_U.png?width=640&crop=smart&auto=webp&s=e18641372086c4518055227fb6c4e5fb58721aae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CpH6kt7IsOiNs_4747MfPYbxOj85iqbCTRDF1014A_U.png?width=960&crop=smart&auto=webp&s=7ecab344b9a4cc7788ab363a9209554bb86c65dd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CpH6kt7IsOiNs_4747MfPYbxOj85iqbCTRDF1014A_U.png?width=1080&crop=smart&auto=webp&s=022dba8b300febcc850943acacab368fa22446c0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CpH6kt7IsOiNs_4747MfPYbxOj85iqbCTRDF1014A_U.png?auto=webp&s=5bc7e87a4e832d5c5c7968734a43a9f741a9f633', 'width': 1200}, 'variants': {}}]} |
Best image to video AI for old photos that I need to look very realistic? | 2 | Hi, I'm quite new to using AI for this, but I am working on a project where I need to take old photos (often grainy, from the 70s/80s/90s) and make them animated, but only slightly. For example, with a portrait of a person, I just need them to keep looking at the camera, or walk off the frame, but never do anything much more.
I have tried Wan online, and it has done ok with some, terribly with others!
From my research people seem to recommend Kling, Wan or Veo 3. But I can't test Veo 3 because its so expensive!
Any tips would be great, thanks | 2025-08-23T18:27:51 | https://www.reddit.com/r/LocalLLaMA/comments/1my92kv/best_image_to_video_ai_for_old_photos_that_i_need/ | fiftyfifteen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my92kv | false | null | t3_1my92kv | /r/LocalLLaMA/comments/1my92kv/best_image_to_video_ai_for_old_photos_that_i_need/ | false | false | self | 2 | null |
Help me understand - GPU Layers (Offloading) & Override Tensors - Multiple Questions | 7 | Please help me understand - GPU Layers (Offloading) & Override Tensors - Multiple Questions.
System : i7-14700HX 2.10 GHz 4060 **8GB VRAM** & **32GB RAM** DDR5. Win11. I use Jan & Koboldcpp.
For example, I tried Q4 of unsloth Qwen3-30B-A3B.
Initially I tried -1 (-1 for all layers on GPU, 0 for CPU only) in the GPU Layers field. It gave me only 2-3 t/s.

Then I tried value 20 in the GPU Layers field (got this value from my past thread). It gave me 13-15 t/s. Huge improvement.
Now my questions:
**1) How to come up with right number for GPU Layers(Offloading)?**
Though I can do trial & error with different numbers, I want to know the logic/formula behind this thing.
One other reason I want the right number is CPU usage hits 100%(which I don't want) when I tried with value 20 in GPU Layers field which gave me 13-15 t/s.
I'm fine if CPU usage goes upto 70-80%, don't want to hit 100%. Also I'm fine losing few tokens not to hit CPU 100%. For example:
15 t/s with 100% CPU Usage - Not OK
10 t/s with 70-80% CPU Usage - OK
**2) If I use other quants such Q5 or Q6 or Q8, same number(20 mentioned above) will work or different number(If yes, what & how)?**
* Qwen3-30B-A3B-UD-Q4\_K\_XL - 17.7GB - 20
* Qwen3-30B-A3B-UD-Q5\_K\_XL - 21.7GB - ??
* Qwen3-30B-A3B-UD-Q6\_K\_XL - 26.3GB - ??
* Qwen3-30B-A3B-UD-Q8\_K\_XL - 36GB - ??
Apart from quant, we have Context with different values like 8K, 16K, 32K, 64K, 128K. This also takes additional memory so any changes on number?
**3) Now that Q4 is giving me 13-15 t/s, shall I expect similar t/s for higher quants like Q5, Q6 or Q8?** I know the answer is NO.
But I just want to know the estimated t/s so I could download suitable quant based on estimated t/s (I don't want to download multiple quants since this model's file sizes are huge).
* Qwen3-30B-A3B-UD-Q4\_K\_XL - 17.7GB - 13-15 t/s
* Qwen3-30B-A3B-UD-Q5\_K\_XL - 21.7GB - ??
* Qwen3-30B-A3B-UD-Q6\_K\_XL - 26.3GB - ??
* Qwen3-30B-A3B-UD-Q8\_K\_XL - 36GB - ??
**4) I see that "Override Tensors" is one more way to optimize & increase t/s. What are few optimized regex for Qwen3-30B-A3B with logic?**
Also I saw people using different regex for same model. Don't know the logic behind those different regex.
Unfortunately regex is too much for Non-Techies & Newbies like me. Still I'm willing to learn just for this.
If I(anyone) understand all above things, I(anyone) could make better settings for other MOE models such as ERNIE-4.5-21B-A3B, Ling-lite-1.5-2506, SmallThinker-21BA3B, Moonlight-16B-A3B, GPT-OSS-20B, OLMoE-1B-7B-0125, etc., to use it with low VRAM. Hope all these answers could help upcoming newbies through this single post.
Thanks | 2025-08-23T17:56:08 | https://www.reddit.com/r/LocalLLaMA/comments/1my88uu/help_me_understand_gpu_layers_offloading_override/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my88uu | false | null | t3_1my88uu | /r/LocalLLaMA/comments/1my88uu/help_me_understand_gpu_layers_offloading_override/ | false | false | self | 7 | null |
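For question 1 above, a common rule of thumb: per-layer size ≈ GGUF file size / layer count, then offload as many whole layers as fit after reserving VRAM for the CUDA context, KV cache, and compute buffers. A rough sketch (the 48-layer count for Qwen3-30B-A3B and the 1.5 GB overhead are assumptions — tune them against what llama.cpp/Koboldcpp actually reports at load time):

```python
def estimate_gpu_layers(model_file_gb, n_layers, vram_gb, overhead_gb=1.5):
    """Rough rule of thumb: per-layer size ~= file size / layer count;
    offload as many whole layers as fit after reserving overhead_gb
    for CUDA context, KV cache, and compute buffers."""
    per_layer = model_file_gb / n_layers
    budget = vram_gb - overhead_gb
    return max(0, min(n_layers, int(budget / per_layer)))

# Qwen3-30B-A3B Q4 ~= 17.7 GB, assumed 48 layers, on an 8 GB card:
n = estimate_gpu_layers(17.7, 48, 8.0)
```

This lands near the 20 layers that worked for you in practice; the gap is the overhead guess, and a larger quant or longer context shrinks the budget, so the layer count drops accordingly.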
VGA Mi50 | 0 | Should I use this card for gaming, everyone? | 2025-08-23T17:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/1my87mq/vga_mi50/ | LooseGuitar1332 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my87mq | false | null | t3_1my87mq | /r/LocalLLaMA/comments/1my87mq/vga_mi50/ | false | false | self | 0 | null |
MasonMac/WildChat-4.8M-EN-Semantic-Deduplicated · Datasets at Hugging Face | 19 | This is a collection of semantically deduplicated datasets derived from WildChat-4.8M. I hope it may be helpful to you guys :) | 2025-08-23T17:47:02 | https://huggingface.co/datasets/MasonMac/WildChat-4.8M-EN-Semantic-Deduplicated | TheRealMasonMac | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1my809f | false | null | t3_1my809f | /r/LocalLLaMA/comments/1my809f/masonmacwildchat48mensemanticdeduplicated/ | false | false | default | 19 | {'enabled': False, 'images': [{'id': '96tXshMnQZ1xXVAqO8eDCfYiyMJBMS-3iILOFDnNdHY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/96tXshMnQZ1xXVAqO8eDCfYiyMJBMS-3iILOFDnNdHY.png?width=108&crop=smart&auto=webp&s=3173ee4b2b292a163d5317b1cb811ebc855ccba9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/96tXshMnQZ1xXVAqO8eDCfYiyMJBMS-3iILOFDnNdHY.png?width=216&crop=smart&auto=webp&s=af70c532390433fb4ccc1dfc09d2408d2ad2a4ce', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/96tXshMnQZ1xXVAqO8eDCfYiyMJBMS-3iILOFDnNdHY.png?width=320&crop=smart&auto=webp&s=5c4c083cf2abb31058ca379fd490ffbdb2fa0986', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/96tXshMnQZ1xXVAqO8eDCfYiyMJBMS-3iILOFDnNdHY.png?width=640&crop=smart&auto=webp&s=7c3af10de82016d88fa024b822a027ac7e56b5d4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/96tXshMnQZ1xXVAqO8eDCfYiyMJBMS-3iILOFDnNdHY.png?width=960&crop=smart&auto=webp&s=d013e439408243dad0aa766a200571c4b80a8b05', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/96tXshMnQZ1xXVAqO8eDCfYiyMJBMS-3iILOFDnNdHY.png?width=1080&crop=smart&auto=webp&s=301e992d54fc83b84ab83810b239bb70a487a891', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/96tXshMnQZ1xXVAqO8eDCfYiyMJBMS-3iILOFDnNdHY.png?auto=webp&s=bbfc1c02408339cb9456138bd1ab6dd87d84b915', 'width': 1200}, 
'variants': {}}]} |
support for ByteDance Seed-OSS model has been merged into llama.cpp | 142 | model: [https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct)
| 2025-08-23T17:29:04 | https://github.com/ggml-org/llama.cpp/pull/15490 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1my7j1x | false | null | t3_1my7j1x | /r/LocalLLaMA/comments/1my7j1x/support_for_bytedance_seedoss_model_has_been/ | false | false | 142 | {'enabled': False, 'images': [{'id': 'WFGEPRY69pmnCNsVihL350z048IpLks_fdEjrmNlkmg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WFGEPRY69pmnCNsVihL350z048IpLks_fdEjrmNlkmg.png?width=108&crop=smart&auto=webp&s=a5c8adc02250130492c5ae80dfe1642628251061', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WFGEPRY69pmnCNsVihL350z048IpLks_fdEjrmNlkmg.png?width=216&crop=smart&auto=webp&s=20bb45d910dcc1c57ac5d8e651582162bfd47864', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WFGEPRY69pmnCNsVihL350z048IpLks_fdEjrmNlkmg.png?width=320&crop=smart&auto=webp&s=3257c1e1b5bb5122fc5a5b9647c223263839eed1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WFGEPRY69pmnCNsVihL350z048IpLks_fdEjrmNlkmg.png?width=640&crop=smart&auto=webp&s=447948ecb5a77a2d635a9ce18c86729398f84896', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WFGEPRY69pmnCNsVihL350z048IpLks_fdEjrmNlkmg.png?width=960&crop=smart&auto=webp&s=fcb65bb994525eb481613544633bec5f88284ea2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WFGEPRY69pmnCNsVihL350z048IpLks_fdEjrmNlkmg.png?width=1080&crop=smart&auto=webp&s=c401839325f21f62270585d6fbfe5422f15ee9ae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WFGEPRY69pmnCNsVihL350z048IpLks_fdEjrmNlkmg.png?auto=webp&s=b0990dec7d8efdafce66ca8a6b9459e2fc24eee6', 'width': 1200}, 'variants': {}}]} | |
ThinkPad for Local LLM Inference - Linux Compatibility Questions | 1 | I'm looking to purchase a ThinkPad (or Legion if necessary) for running local LLMs and would love some real-world experiences from the community.
# My Requirements:
* Running Linux (prefer Fedora/Arch/openSUSE - NOT Ubuntu)
* Local LLM inference (7B-70B parameter models)
* Professional build quality preferred
# My Dilemma:
I'm torn between NVIDIA and AMD graphics. Historically, I've had frustrating experiences with NVIDIA proprietary drivers on Linux (driver conflicts, kernel updates breaking things, etc.), but I also know the CUDA ecosystem is still dominant for LLM frameworks like llama.cpp, Ollama, and others.
# Specific Questions:
**For NVIDIA users (RTX 4070/4080/4090 mobile):**
* How has your recent experience been with NVIDIA drivers on non-Ubuntu distros?
* Any issues with driver stability during kernel updates?
* Which distro handles NVIDIA best in your experience?
* Performance with popular LLM tools (Ollama, llama.cpp, etc.)?
**For AMD users (RX 7900M or similar):**
* How mature is ROCm support now for LLM inference?
* Any compatibility issues with popular LLM frameworks?
* Performance comparison vs NVIDIA if you've used both?
**ThinkPad-specific:**
* P1 Gen 6/7 vs Legion Pro 7i for sustained workloads?
* Thermal performance during extended inference sessions?
* Linux compatibility issues with either line?
# Current Considerations:
* ThinkPad P1 Gen 7 (RTX 4090 mobile) - premium price but professional build
* Legion Pro 7i (RTX 4090 mobile) - better price/performance, gaming design
* Any AMD alternatives worth considering?
Would really appreciate hearing from anyone running LLMs locally on modern ThinkPads or Legions with Linux. What's been your actual day-to-day experience?
Thanks! | 2025-08-23T17:09:46 | https://www.reddit.com/r/LocalLLaMA/comments/1my70pv/thinkpad_for_local_llm_inference_linux/ | 1guyonearth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my70pv | false | null | t3_1my70pv | /r/LocalLLaMA/comments/1my70pv/thinkpad_for_local_llm_inference_linux/ | false | false | self | 1 | null |
Was able to squeeze 107B GLM 4.5 Air 3-bit into a 64GB M4 Mac Studio; anyone using it on 128GB models? | 0 | I like what I see and would like to give it some breathing room and run the 4-bit models.
What versions would fit in the 128GB models? I would like to run the 4-bit version, but would I be able to squeeze in the 8-bit one? The file size for the 8-bit model is 113GB,
which would leave 15GB of RAM left over. I wonder if the full context, i.e. 131K, could be used with the remaining RAM.
Crazy to watch the RAM drop as it does its writing. GLM 4.5 is definitely a big LLM.
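On the full-context question, a rough KV-cache sizing sketch shows what the leftover RAM buys. Every architecture number below is a placeholder to illustrate the arithmetic, not GLM 4.5 Air's real config; check the model's config.json for the real values:

```python
# Rough KV-cache size: 2 (K and V) x layers x kv_heads x head_dim x
# context length x bytes per element. All architecture numbers below are
# placeholders, not GLM 4.5 Air's actual config.
def kv_cache_gib(layers, kv_heads, head_dim, ctx_len, bytes_per_elem):
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem / 1024**3

# Example: 46 layers, 8 KV heads of dim 128, 131K context:
print(kv_cache_gib(46, 8, 128, 131_072, 2))  # 23.0 GiB at fp16: over a 15GB headroom
print(kv_cache_gib(46, 8, 128, 131_072, 1))  # 11.5 GiB at fp8: would fit
```

So whether full 131K context fits in the remaining RAM depends heavily on the real layer/head counts and on whether the KV cache is quantized.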
https://preview.redd.it/qybiab9fwskf1.png?width=253&format=png&auto=webp&s=42cac66106a5b9eae31a457f4a567f96b9a90307
| 2025-08-23T17:08:17 | https://www.reddit.com/r/LocalLLaMA/comments/1my6zc1/was_able_to_squeeze_in_107b_glm_45_air_3bit_into/ | meshreplacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my6zc1 | false | null | t3_1my6zc1 | /r/LocalLLaMA/comments/1my6zc1/was_able_to_squeeze_in_107b_glm_45_air_3bit_into/ | false | false | 0 | null | |
Not sure if anyone else needs this, a simple extension I’ve been using to pull YouTube transcripts into GPT | 0 | Hey,
Recently, a good friend of mine built this browser extension. It's super simple — it lets you copy YouTube transcripts and quickly transfer them into AI platforms to use however you want.
Now, I know what you’re thinking: “There must be a ton of tools like this out there already.” And you’d be right. But despite that, I’ve found myself using this one almost daily.
Is it perfect? Nope. But it works. Quietly, simply, and for now — just for me.
The interesting bit? It wasn’t made for profit. No landing page. No monetization. No “10x growth hacks.” Just something created out of pure love for solving a small, real problem.
That’s also why I’m writing this. If you’ve got a few minutes to spare, I’d love for you to check it out and see if there’s anything obvious it could improve. Since I’m still the only user, your feedback would go a long way.
Would you be open to trying it for a day and seeing if it makes your workflow a little smoother?
If nothing else, I just wanted to share a little thing that makes my life easier. And who knows, maybe it’ll do the same for you.
This is the link: [https://chromewebstore.google.com/detail/youtube-summary-with-ai/gcglcbfmophnppdlbhckfmfiofaajibm](https://chromewebstore.google.com/detail/youtube-summary-with-ai/gcglcbfmophnppdlbhckfmfiofaajibm)
https://preview.redd.it/znylp4g1vskf1.png?width=1280&format=png&auto=webp&s=da882730de95e61b17b4e095b15d4cbdb59c735e
| 2025-08-23T16:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1my6oqh/not_sure_if_anyone_else_needs_this_a_simple/ | Affectionate-Sand316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my6oqh | false | null | t3_1my6oqh | /r/LocalLLaMA/comments/1my6oqh/not_sure_if_anyone_else_needs_this_a_simple/ | false | false | 0 | {'enabled': False, 'images': [{'id': '1ADZm2SCWC-iyBqWIu6UEmAKrvi_MHuJpBvtSnmiIhQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/1ADZm2SCWC-iyBqWIu6UEmAKrvi_MHuJpBvtSnmiIhQ.jpeg?width=108&crop=smart&auto=webp&s=c13c97e949be7b3501f35347364b1817d991a28a', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/1ADZm2SCWC-iyBqWIu6UEmAKrvi_MHuJpBvtSnmiIhQ.jpeg?auto=webp&s=359ae81afd2843e9b0756c3ad61a295ea3148ec2', 'width': 128}, 'variants': {}}]} | |
Hey! I'm working on something | 1 | [removed] | 2025-08-23T16:30:47 | https://www.reddit.com/r/LocalLLaMA/comments/1my60i4/hey_im_working_on_something/ | No_Pie1688 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my60i4 | false | null | t3_1my60i4 | /r/LocalLLaMA/comments/1my60i4/hey_im_working_on_something/ | false | false | self | 1 | null |
External graphics dock? | 2 | I bought this [used Dell 7670](https://www.dell.com/en-us/shop/dell-laptops/precision-7670-workstation/spd/precision-16-7670-laptop) not too long ago so I could run some smaller models locally (12GB VRAM). I'm enjoying this enough that I'm thinking of trying to step it up a bit, but I'd really rather not have to start over again on the computer, as this one was fairly pricey and I've done a bunch of upgrades to it like more RAM and an OLED touchscreen.
Is getting an external graphics dock for 1 or 2 more video cards possible or worth it? The laptop does have 2 thunderbolt 4 ports. Currently running Mint Linux but willing to switch if another OS is better for a multi-card setup. I'm not training or anything, just running an ollama instance with OpenWebUI on top.
1. Is the external dock route actually useful with my hardware and ports?
2. Can I "combine" the external vram on top of my internal? Or am I limited to one or the other?
3. Suggestions for external docks?
4. Should I just give up and build a separate battlestation? | 2025-08-23T16:28:47 | https://www.reddit.com/gallery/1my5yof | Liberaces_Isopod | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1my5yof | false | null | t3_1my5yof | /r/LocalLLaMA/comments/1my5yof/external_graphics_dock/ | false | false | 2 | null | |
Crucible's Mistral 3.2 24B V1.3 Tune | 55 | [https://huggingface.co/CrucibleLab/M3.2-24B-Loki-V1.3](https://huggingface.co/CrucibleLab/M3.2-24B-Loki-V1.3)
Hello all! This model has been meticulously trained on a specialized, 370 million token dataset, curated specifically for high-quality role-playing. The dataset is built upon a foundation of well-established worlds and lore, providing the model with deep knowledge across a wide array of genres.
More information on the model card! | 2025-08-23T16:15:45 | https://www.reddit.com/r/LocalLLaMA/comments/1my5mve/crucibles_mistral_32_24b_v13_tune/ | mentallyburnt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my5mve | false | null | t3_1my5mve | /r/LocalLLaMA/comments/1my5mve/crucibles_mistral_32_24b_v13_tune/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'Us-Cbn1wcHJsjQomMdxy5ypvBT3RBuXG38JEY0AHvn8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Us-Cbn1wcHJsjQomMdxy5ypvBT3RBuXG38JEY0AHvn8.png?width=108&crop=smart&auto=webp&s=1b76a2ca452c7473b6b3b6d0f07aa6a4dc50010d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Us-Cbn1wcHJsjQomMdxy5ypvBT3RBuXG38JEY0AHvn8.png?width=216&crop=smart&auto=webp&s=cb09d590aa5e3dabed17c45aa330f7a1d7f7a14b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Us-Cbn1wcHJsjQomMdxy5ypvBT3RBuXG38JEY0AHvn8.png?width=320&crop=smart&auto=webp&s=856b9ccda1c22e97fb839986d2d47cc6e62897b4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Us-Cbn1wcHJsjQomMdxy5ypvBT3RBuXG38JEY0AHvn8.png?width=640&crop=smart&auto=webp&s=dd4af653f6f7e42cc4fb9a40f143c2ef3a938398', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Us-Cbn1wcHJsjQomMdxy5ypvBT3RBuXG38JEY0AHvn8.png?width=960&crop=smart&auto=webp&s=dbfa16cda8ddc0fe06a075e6e7fe062e1b222b00', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Us-Cbn1wcHJsjQomMdxy5ypvBT3RBuXG38JEY0AHvn8.png?width=1080&crop=smart&auto=webp&s=b0f272de628735bb7e8631557934cfdae25564ab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Us-Cbn1wcHJsjQomMdxy5ypvBT3RBuXG38JEY0AHvn8.png?auto=webp&s=82efc7819fa0e67532bf9ddbecec254d66a1490d', 'width': 1200}, 'variants': {}}]} |
Tool Calling Sucks? | 15 | Can someone help me understand if this is just the state of local LLMs or if I'm doing it wrong? I've tried to use a whole bunch of local LLMs (gpt-oss:120b, qwen3:32b-fp16, qwq:32b-fp16, llama3.3:70b-instruct-q5_K_M, qwen2.5-coder:32b-instruct-fp16, devstral:24b-small-2505-fp16, gemma3:27b-it-fp16, xLAM-2:32b-fc-r) for an agentic app that relies heavily on tool calling. With the exception of gpt-oss-120B they've all been miserable at it. I know the prompting is fine because pointing it to even o4-mini works flawlessly.
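For reference, the failure mode looks like this: instead of a structured tool_calls array, the model dumps the call into the plain-text content field, so a fallback parse ends up as something like the sketch below (the message shape mimics the OpenAI chat format; the example payload is invented):

```python
# Fallback for models that emit the tool call as plain text instead of a
# structured tool_calls entry. The message shape mimics the OpenAI chat
# format; the example payload is made up.
import json
import re

def extract_tool_call(message):
    # Prefer a real structured tool call when the server produced one.
    if message.get("tool_calls"):
        call = message["tool_calls"][0]["function"]
        return call["name"], json.loads(call["arguments"])
    # Otherwise, fish a JSON object out of the plain-text content.
    match = re.search(r"\{.*\}", message.get("content") or "", re.DOTALL)
    if match:
        try:
            obj = json.loads(match.group(0))
        except ValueError:
            return None
        if isinstance(obj, dict) and "name" in obj:
            return obj["name"], obj.get("arguments", {})
    return None

plain = {"content": 'Sure! {"name": "get_weather", "arguments": {"city": "Paris"}}'}
print(extract_tool_call(plain))  # ('get_weather', {'city': 'Paris'})
```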
A few like xlam managed to pick tools correctly but the responses came back as plain text rather than tool calls. I've tried with vLLM and Ollama. fp8/fp16 for most of them with big context windows. I've been using the OpenAI APIs. Do I need to skip the tool calling APIs and parse myself? Try a different inference library? gpt-oss-120b seems to finally be getting the job done but it's hard to believe that the rest of the models are actually that bad. I must be doing something wrong, right? | 2025-08-23T15:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/1my4ue3/tool_calling_sucks/ | Scottomation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my4ue3 | false | null | t3_1my4ue3 | /r/LocalLLaMA/comments/1my4ue3/tool_calling_sucks/ | false | false | self | 15 | null |
Would the community use an open, engine-agnostic local LLM meta-server? | 1 | [removed] | 2025-08-23T15:31:44 | https://www.reddit.com/r/LocalLLaMA/comments/1my4hxr/would_the_community_use_an_open_engineagnostic/ | jfowers_amd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my4hxr | false | null | t3_1my4hxr | /r/LocalLLaMA/comments/1my4hxr/would_the_community_use_an_open_engineagnostic/ | false | false | self | 1 | null |
How does GPU utilization work? | 0 | I'm finetuning on 2 GPUs. The VRAM pool is shared, so the model doesn't get loaded separately on each GPU. But I don't understand how utilization works. It keeps alternating between the two GPUs instead of both being busy at the same time. One GPU sometimes peaks at 100% while the other stays at 0%. Is there a way to speed up the finetuning process by making both GPUs hit 100% utilization at the same time?
https://preview.redd.it/dm2ftemweskf1.png?width=377&format=png&auto=webp&s=01e87f45a01879a9dabf9d551f35d31972de4d4a
https://preview.redd.it/58d51r2xeskf1.png?width=380&format=png&auto=webp&s=4e66cc0f2c6c3418ca91c49208d746e20a80d5d1
| 2025-08-23T15:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/1my4exj/how_does_gpu_utilization_works/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my4exj | false | null | t3_1my4exj | /r/LocalLLaMA/comments/1my4exj/how_does_gpu_utilization_works/ | false | false | 0 | null | |
Just snagged a Tesla V100 16GB for $200 (PCIE, not SXM2). Where do I go from here? | 2 | I got a V100 for what appears to be a good price. I've done some very minor tinkering with Ollama in the past, but I'm interested in getting my feet wet with local models.
Is 16GB RAM going to be a major limiting factor? Can I extend that with another card, and do the cards need to match? | 2025-08-23T15:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1my3wl1/just_snagged_a_tesla_v100_16gb_for_200_pcie_not/ | MisterDalliard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my3wl1 | false | null | t3_1my3wl1 | /r/LocalLLaMA/comments/1my3wl1/just_snagged_a_tesla_v100_16gb_for_200_pcie_not/ | false | false | self | 2 | null |
RTX PRO 6000 MAX-Q Blackwell for LLM | 178 | Just received my brand new Blackwell card, so I did a quick bench to let the community grasp the pros and cons
# Setup Details:
GPU : RTX Pro 6000 Max-Q workstation edition, 20% less performance than the full-power edition, but with half the power draw. 2 slots
CPU : Ryzen 9 3950X, 24 channels, 16 cores / 32 threads
RAM : 128GB DDR4 @ 3600
GPU1 : RTX 3090 24gb blower edition. 2 slots, unused here
GPU2 : RTX 3090 24gb founder edition. 3 slots, unused here
# Software details
# OS
Ubuntu 22.04
Nvidia Drivers : 770 open
Cuda toolkit 13
Cudnn 9
(ask if you want a quick install tutorial in comments)
# Env
conda create --name vllm python=3.12
conda activate vllm
uv pip install flashinfer-python --prerelease=allow --upgrade --extra-index-url [https://download.pytorch.org/whl/nightly/cu128](https://download.pytorch.org/whl/nightly/cu128)
uv pip install vllm --torch-backend=cu128
# Training Benchmark
Two things differentiate this card for training:
* the number of tensor cores is outstanding, about 60% more than a single B100 GPU
* the 96GB of VRAM is a game changer for training, enabling very large batches, so faster and smoother training
# Experiment:
Pretraining of an SLM with 35M parameters, based on a GQA architecture with 8 layers, trained with PyTorch Lightning. The training dataset is TinyStories, with a budget of 1B tokens (2 epochs), a sequence length of 256 tokens, and a virtual batch size of 100k tokens. Models are trained in mixed bf16 precision (additional improvement could be expected from using Blackwell fp8 training).
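To put the budget in perspective, these numbers work out to roughly 10k optimizer steps; a quick back-of-envelope sketch (the per-device micro-batch size is an assumption of mine, not from the setup):

```python
# Back-of-envelope arithmetic for the training budget described above.
# Only the 1B-token budget, 256-token sequences and 100k-token virtual
# batch come from the setup; the micro-batch size is assumed.
SEQ_LEN = 256                  # tokens per sequence
VIRTUAL_BATCH = 100_000        # tokens per optimizer step
TOKEN_BUDGET = 1_000_000_000   # 1B tokens (2 epochs of TinyStories)
MICRO_BATCH = 32               # sequences per forward pass (assumed)

seqs_per_step = VIRTUAL_BATCH // SEQ_LEN     # sequences per optimizer step
accum_steps = seqs_per_step // MICRO_BATCH   # gradient-accumulation steps
optim_steps = TOKEN_BUDGET // VIRTUAL_BATCH  # optimizer steps in the full run

print(seqs_per_step, accum_steps, optim_steps)  # 390 12 10000
```

So a ~20 min run amounts to sustaining roughly 8 optimizer steps per second over the whole budget.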
# Results:
* 1 x 4090 Laptop (similar perf as a 3090 Desktop) : \~2.5 hours to complete the training run
* 1 x RTX 6000 pro maxq workstation : \~20 min to complete the training run
# Conclusion
With proper optimization, the card can single-handedly deliver the training compute of 7.5 RTX 3090 cards, while pulling only 300W of electricity (and being very quiet).
# Inference Benchmark
In inference, bandwidth can be the bottleneck factor, especially in batch-1 inference.
Let's assess the results at batch 1, 4, 8, 16 and 32 to see how many tokens we can squeeze out of the card.
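The bandwidth bottleneck can be made concrete with a rough roofline sketch (the 24B example numbers are illustrative, not measurements):

```python
# Batch-1 decode is roughly memory-bound: each generated token streams the
# active weights through the GPU once, so an upper bound on decode speed is
# bandwidth / bytes_of_active_weights (ignoring KV cache and activations).
def batch1_tok_s_ceiling(bandwidth_gb_s, active_params_b, bytes_per_param):
    return bandwidth_gb_s / (active_params_b * bytes_per_param)

# ~1.7 TB/s GDDR7; a 24B dense model in 4-bit (~0.5 bytes/param):
print(round(batch1_tok_s_ceiling(1700, 24, 0.5)))  # 142 tokens/s ceiling
```

Larger batches reuse each streamed weight across many requests, which is why throughput keeps scaling well past that batch-1 ceiling.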
# Launch
export NVCC_THREADS=16
export MAX_JOBS=16
export OMP_NUM_THREADS=16
export VLLM_ATTENTION_BACKEND=FLASHINFER
export ENABLE_NVFP4_SM120=1
export VLLM_USE_FLASHINFER_MOE_FP4=1
export MODEL_NAME="DeepSeek-R1-0528-Qwen3-8B-FP4"
vllm serve "$MODEL_NAME" \
--served-model-name gpt-4 \
--port 5000 \
--max-model-len 16000 \
--gpu-memory-utilization 0.9 \
--trust_remote_code \
--max-seq-len-to-capture 8196 \
--enable-chunked-prefill \
--kv-cache-dtype fp8 \
--compilation-config '{"pass_config":{"enable_fusion":true,"enable_noop":true},"cudagraph_mode":1,"max_capture_size":2048}'
# Launch >20B Active
On larger models, tensor cores can do wonders, so above 20B active parameters the following additional env variables can provide a small speed increase, especially for batching:

export VLLM_USE_TRTLLM_ATTENTION=1
export VLLM_USE_TRTLLM_FP4_GEMM=1
export VLLM_FLASHINFER_FORCE_TENSOR_CORES=1
Note: I ran every speed test without these flags, but for example Mistral Small would give around 95 t/s at batch 1, and 1950 t/s at batch 32.
# Launch QWEN Moe
Add flag --enable-expert-parallel
# Launch GPT-OSS
GPT-OSS relies on MXFP4 quant (cause why would they do like everyone else, huh?), a hybrid format that will most likely disappear once NVFP4 is fully supported. They are also leveraging their own library for prompt formatting, which is not really compatible with vLLM as of now, so don't expect to get anything good from these; I am just testing the speed, and most of the time they only send you blank tokens, which is not really useful.
# DOWNLOADS
You'll need to download the following to make vLLM work with the special snowflake tokenizer and not break on start:

sudo wget -O /etc/encodings/o200k_base.tiktoken https://openaipublic.blob.core.windows.net/encodings/o200k_base.tiktoken

sudo wget -O /etc/encodings/cl100k_base.tiktoken https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken
# Launch Command
export ENABLE_NVFP4_SM120=1
export VLLM_USE_TRTLLM_ATTENTION=1
export OMP_NUM_THREADS=16
export TIKTOKEN_ENCODINGS_BASE=/etc/encodings
export VLLM_USE_FLASHINFER_MXFP4_BF16_MOE=1
export VLLM_USE_FLASHINFER_MXFP4_MOE=1
export VLLM_ATTENTION_BACKEND=FLASHINFER
export MODEL_NAME="gpt-oss-120b"
vllm serve "$MODEL_NAME" \
--async-scheduling \
--served-model-name gpt-4 \
--port 5000 \
--max-model-len 16000 \
--gpu-memory-utilization 0.9 \
--trust_remote_code \
--max-seq-len-to-capture 8196 \
--compilation-config '{"pass_config":{"enable_fusion":true,"enable_noop":true},"cudagraph_mode":1,"max_capture_size":2048}' \
# Model Tested:
* Qwen3-Coder-30B-A3B-Instruct-GPTQ-4bit
* Qwen3-4B-Instruct-2507-GPTQ
* Qwen3-32B-AWQ
* Mistral-Small-3.2-24B-Instruct-hf-AWQ
* gpt-oss-20b
* gpt-oss-120b
* Hunyuan-A13B-Instruct-GPTQ-Int4 (will be added on next edit)
# Failed Test
* DeepSeek-R1-0528-Qwen3-8B-FP4 : could not start GEMM FP4 kernels, I'll investigate
* Qwen3-32B-FP4 : could not start GEMM FP4 kernels, I'll investigate
* Llama-4-Scout-17B-16E-Instruct-AWQ : KeyError: 'layers.17.feed_forward.shared_expert.activation_fn.scales', the quant wasn't done properly and I couldn't find another 4-bit version except bnb, which would be much slower :/
# Results
Read:

* 0-64 : batch 1 token generation speed between the first token and the 64th (tokens/second)
* 64-128 : batch 1 token generation speed between the 64th and 128th token (tokens/second)
* ...
* batch_4 : total throughput in tokens per second while running 4 concurrent requests
* batch_8 : total throughput in tokens per second while running 8 concurrent requests
* ...
|Model Name|0-64|64-128|128-256|256-512|512-1024|1024-2048|batch_4|batch_8|batch_16|batch_32|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|gpt-oss-120b|182.14|147.11|158.66|143.20|154.57|148.10|~403-409|~770-776|~1294-1302|~1986-2146|
|gpt-oss-20b|196.09|199.98|214.26|198.01|196.56|194.38|~564-624|~1054-1117|~1887-1912|~2904-2911|
|Qwen3-32B-AWQ|60.47|68.94|62.53|62.36|61.99|-|~227-233|~447-452|~920-936|~1448-1482|
|Mistral-Small-3.2-24B-Instruct-hf-AWQ|89.39|95.77|89.29|87.29|86.95|86.59|~288-336|~631-646|~1109-1153|~1714-1790|
|Qwen3-4B-Instruct-2507-GPTQ|208.21|205.15|223.60|210.72|211.67|207.49|~721-743|~1158-1377|~2044-2236|~2400-2666|
|Qwen3-Coder-30B-A3B-Instruct-GPTQ-4bit|179.42|176.71|176.01|175.81|175.44|172.64|~490-510|~950-1000|~1520-1602|~2200-2400|
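To be explicit about what the batch_N columns mean: they are aggregate throughput, i.e. total tokens across the concurrent requests divided by the wall-clock span of the batch. A sketch of that bookkeeping (timings invented for illustration):

```python
# Aggregate throughput bookkeeping for a batch of concurrent requests:
# total tokens generated divided by the wall-clock span of the batch.
def batch_throughput(requests):
    """requests: list of (tokens_generated, start_s, end_s) tuples."""
    total_tokens = sum(tok for tok, _, _ in requests)
    wall = max(end for _, _, end in requests) - min(start for _, start, _ in requests)
    return total_tokens / wall

# Four concurrent requests of ~500 tokens over ~4 s of overlapping work:
batch4 = [(500, 0.0, 4.0), (500, 0.1, 4.1), (500, 0.0, 3.9), (500, 0.2, 4.2)]
print(round(batch_throughput(batch4)))  # 476 tokens/s aggregate
```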
# Conclusion
No surprise: at batch 1 the performance is good but not outstanding, limited by the 1.7 TB/s of GDDR7 memory. The Blackwell optimizations still allow squeezing out a bit more performance (which might jump when Flash Attention 4 is released), and it just slightly beats the speed of 2 x 3090 with tensor parallelism.
The game changer is at batch 32, with an almost linear scaling of the number of tokens delivered with batch size, which might be really useful for small-scale serving and multi-agent deployment purposes.
So far, support is still not completely ready, but sufficient to play with some models.
# Code to reproduce the results
Training scripts can be found on this repo for pretraining:
[https://github.com/gabrielolympie/ArchiFactory](https://github.com/gabrielolympie/ArchiFactory)
Speed Benchmark for inference + used prompts can be found in :
[https://github.com/gabrielolympie/PromptServer](https://github.com/gabrielolympie/PromptServer)
# Next steps
* I might update this post when NVFP4 support is stable enough to give a glimpse of its potential
* If you want me to test a specific model, propose it in the comments; I'll add those that are either in a different weight category or a different architecture
* If I can find the time, I will make a similar post with diffusion models (image + video), where the architecture might deliver even more impressive results
* If you want me to test additional vLLM tuning parameters, let me know in the comments (I might give sglang and exllama v3 a try as well when their support is more mature)
# Global conclusion
Pros:
* large VRAM
* impressive raw compute
* impressive scaling with batch size
* very quiet, I could sleep during a training run with the computer in the same room
* very low power consumption, a stable 300W at full power, and most likely room for overclocking
Cons:
* still limited bandwidth compared to the latest HBM memory
* software support is still a bit messy but quickly improving
* cannot be used for tensor parallelism with Ampere (I tried tensor parallelism with a 3090 and it did not go well)
Sweet spots / for what needs?

* Any model with 10-20B active parameters and up to 160B total parameters will be incredible on it
* Processing large amounts of text (classification / labeling / synthetic data generation)
* Small serving for up to 30 - 60 concurrent users
When not to use?
If your use case involves getting max tokens/second at batch 1 and you don't care about power draw, building a battlestation with 4x4090 will provide much better speed at the same price | 2025-08-23T15:08:27 | https://www.reddit.com/r/LocalLLaMA/comments/1my3why/rtx_pro_6000_maxq_blackwell_for_llm/ | AdventurousSwim1312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my3why | false | null | t3_1my3why | /r/LocalLLaMA/comments/1my3why/rtx_pro_6000_maxq_blackwell_for_llm/ | false | false | self | 178 | null
🪓 Just ripped a LLM apart... and it still works?! | 0 | Built a tool called **LLM-Ripper**.
It literally lets you *surgically remove* parts of a Transformer — attention heads, FFNs, embeddings — and plug them back like LEGO.
* Want a franken-model made of random donor heads? Go for it.
* Want to see what *one* attention head actually knows? Easy.
👉 Repo: [https://github.com/qrv0/LLM-Ripper](https://github.com/qrv0/LLM-Ripper)
This is either **insane science** or **the start of model recycling**.
Not sure which. | 2025-08-23T14:59:48 | https://www.reddit.com/r/LocalLLaMA/comments/1my3odz/just_ripped_a_llm_apart_and_it_still_works/ | -qrv0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my3odz | false | null | t3_1my3odz | /r/LocalLLaMA/comments/1my3odz/just_ripped_a_llm_apart_and_it_still_works/ | false | false | self | 0 | null |
Is the Nvidia Digits able to run 24/7 as an AI server? | 4 | Hi. Recently, Nvidia announced their AI supercomputer, i.e. Digits. I know it's super powerful and capable of running some big models. But I am confused about the deployment part.
Can we use this as a server? I mean would it be able to run 24/7 like we run normal systems. | 2025-08-23T14:52:00 | https://www.reddit.com/r/LocalLLaMA/comments/1my3hn0/is_the_nvidia_digits_be_able_to_run_247_as_an_ai/ | JahangirJadi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my3hn0 | false | null | t3_1my3hn0 | /r/LocalLLaMA/comments/1my3hn0/is_the_nvidia_digits_be_able_to_run_247_as_an_ai/ | false | false | self | 4 | null |
It's Mamba time: Comparing Nemotron Nano v2 vs Falcon-H1 vs Qwen (og) vs Qwen (2507) | 147 | With the recent release of not one but two transformers-mamba hybrids both claiming to outperform baseline transformers, I thought this would be a fun application of ReasonScape to see what's going on under the hood.
# Test Model 1: Falcon-H1 7B
Blog: [https://falcon-lm.github.io/blog/falcon-h1/](https://falcon-lm.github.io/blog/falcon-h1/)
Model: [https://huggingface.co/tiiuae/Falcon-H1-7B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-7B-Instruct)
[Claim: Falcon-7B (61.8) outperforms Qwen3-8B (58.5)](https://preview.redd.it/7i2z9yciyrkf1.png?width=683&format=png&auto=webp&s=c1d03fc28117947e2313a514e051fabba3e01682)
# Test Model 2: NVidia Nemotron Nano v2
Blog: [https://research.nvidia.com/labs/adlr/NVIDIA-Nemotron-Nano-2/](https://research.nvidia.com/labs/adlr/NVIDIA-Nemotron-Nano-2/)
Model: [https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2)
[Claim: Nemotron-Nano-9B outperforms Qwen3-8B across the board](https://preview.redd.it/ao6fzh5tyrkf1.png?width=2304&format=png&auto=webp&s=fb457ae99043c267682b39ce4c29581daa1f7e64)
# Reference Model 1: Qwen3-8B OG
Blog: [https://qwenlm.github.io/blog/qwen3/](https://qwenlm.github.io/blog/qwen3/)
Model: [https://huggingface.co/Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)
# Reference Model 2: Qwen3-4B-2507-Instruct
Blog: [https://qwen3lm.com/qwen3-4b-instruct-2507/](https://qwen3lm.com/qwen3-4b-instruct-2507/)
Model: [https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)
# Test Setup
All models were evaluated with 2x RTX3090 using vLLM 0.10.1
Nemotron Nano v2 was launched with the recommended `--mamba_ssm_cache_dtype float32` flag.
The evaluation being performed here is one of my design: ReasonScape M6. See [https://reasonscape.com/](https://reasonscape.com/) for details and documentation.
# Results: Difficulty Tiered Leaderboards
[Hybrid-SSM Results](https://preview.redd.it/cfscchg50skf1.png?width=1137&format=png&auto=webp&s=8d81f8f61ee585eca5e9dd8eb9283e3382f3fce9)
Nemotron Nano v2 demonstrates **significantly improved all-around complexity robustness** over Falcon-H1, but it does so at the expense of **3x the thinking tokens.**
[Qwen3 Results](https://preview.redd.it/1x226ztf0skf1.png?width=1136&format=png&auto=webp&s=3126d6e6fdd0133a5ba248d069748c2df46aa1ef)
Performance on the **Boolean, Dates** and **Movies** tasks (see [https://reasonscape.com/docs/tasks/](https://reasonscape.com/docs/tasks/) for more info on the tasks!) is indeed comparable but the **Objects**, **Arithmetic** and **Shuffle** tasks present significant challenges for the hybrids.
The old Qwen3 models **think way too much**, but the new 2507-Instruct models do really well when simply asked to *"think step-by-step"*.
# Results: Performance Surfaces
I will merge the Test and Reference sets together for the remainder of plots to make comparisons easier:
[ReasonScape M6 Difficulty Manifolds for the 4 models](https://preview.redd.it/o264zvgb1skf1.png?width=1920&format=png&auto=webp&s=63420e7384da7c0f4dd3a3387a2023cf1e67f804)
Nemotron **Dates** processing is robust but **Objects** (a selective attention task) collapses in both difficulty dimensions very quickly compared to pure transformers. **Arithmetic** (under randomized whitespace conditions) holds up ok with depth, but collapses under length. **Shuffle** (a working memory churn task) shows a similar pattern: depth is ok, but total collapse under length leading to a smaller island of competency.
All models struggled with truncation on the **Boolean** task, but Falcon least so.
# Results: Token-FFT Analysis
ReasonScape offers a unique kind of plot, showing exactly how chat template and tokenization affect the frequency-domain representation of what the LLM actually sees.
These make it possible to peek below the surfaces, understand WHY some things are tougher for certain models, and separate training problems from architectural problems.
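To make "frequency-domain representation of what the LLM actually sees" concrete, here is a loose stdlib-only illustration of the idea (the token ids are invented stand-ins, and the real ReasonScape pipeline is more involved):

```python
# Toy Token-FFT: treat the token-id sequence as a 1-D signal and inspect
# its frequency content. Token ids below are invented stand-ins.
import cmath

def dft_magnitudes(signal):
    """Naive DFT; returns |X_k| / n for k = 0 .. n/2 (DC up to Nyquist)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(signal))) / n
            for k in range(n // 2 + 1)]

# "1 + 2 + 3 + 4" style input: digit tokens interleaved with a space token.
with_ws = [12, 3, 15, 3, 9, 3, 15, 3]
mags = dft_magnitudes(with_ws)

print(round(mags[0], 3))   # 7.875 -> DC component is just the mean token id
print(round(mags[-1], 3))  # 4.875 -> strong Nyquist energy from digit/space alternation
```

Strip the whitespace and the alternation (and that high-band energy) disappears entirely, which is exactly the kind of representational shift these plots surface.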
[Token-FFT: Arithmetic](https://preview.redd.it/4nqoy43d2skf1.png?width=2000&format=png&auto=webp&s=acd11dcdc896c0392529a2f172bcdaeb7334f04a)
Here we see exactly why Nemotron isn't very good at arithmetic:
- The whitespace/no-whitespace representations of math problems look VERY different to this tokenizer and it has had trouble generalizing as a result
- As length increases, the information content... disappears! No change at DC, but the middle and high-band information is lost. Performance predictably collapses as a result.
[Token-FFT: Boolean](https://preview.redd.it/8c0zoiv73skf1.png?width=2000&format=png&auto=webp&s=9374f97bf696d29d40084700b219e41e7a7ed8a1)
An interesting comparison here is the Boolean task, which demonstrates similar information-compression along with the ON/OFF and YES/NO formats. These formats have the weakest results on the surfaces compared to the others (because at the end of the day, compressing your signal is bad) but they manage to eke out "satisfactory" scores because the DC had a corresponding upward shift. This is a 'lower tier of information loss' vs when the DC stays the same and we just lose signal.
# Conclusions
**Nemotron Nano is the most powerful hybrid I've evaluated so far.** Its major weakness is that it seems to have failed to generalize Arithmetic, and its selective attention (information-filtering ability) is noticeably weaker than SOTA transformers. Mid-tier for reasoning length.
**While hybrids are getting better, they don't yet beat pure transformers.** When I evaluated Falcon-Mamba it got a big fat 0; these new hybrid guys actually do work and are getting better with each iteration. I hope to see this conclusion flip in the future!
**Qwen3-4B-Instruct-2507 is a little beast** and can replace older 8B with similar if not better performance and lower token usage.
**I need more RTX 3090s**, as these evaluations require up to 100M tokens when the average responses get up to 3-4k.
# Resources
To learn more about ReasonScape evaluations check out the Documentation at [https://reasonscape.com/docs/](https://reasonscape.com/docs/) or grab the latest code from GitHub at [https://github.com/the-crypt-keeper/reasonscape](https://github.com/the-crypt-keeper/reasonscape)
If you enjoyed the plots, check out the M6 explorer [https://reasonscape.com/m6/explorer/](https://reasonscape.com/m6/explorer/) and it's documentation [https://reasonscape.com/docs/tools/explorer/](https://reasonscape.com/docs/tools/explorer/)
[M6 explorer showing detailed result projections along the Arithmetic surface](https://preview.redd.it/2hwrdrug6skf1.png?width=1848&format=png&auto=webp&s=a5d69ab1018467ca9ef8445d022dd76df0c73544)
To see how these models compare to the rest of the flocks, the full M6 Leaderboard is available at [https://reasonscape.com/m6/leaderboard/](https://reasonscape.com/m6/leaderboard/) (spoiler: **GPT-OSS-20b is a broken mess**) with documentation at [https://reasonscape.com/docs/tools/leaderboard/](https://reasonscape.com/docs/tools/leaderboard/)
Thanks for reading! <3 | 2025-08-23T14:42:37 | https://www.reddit.com/r/LocalLLaMA/comments/1my39ja/its_mamba_time_comparing_nemotron_nano_v2_vs/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my39ja | false | null | t3_1my39ja | /r/LocalLLaMA/comments/1my39ja/its_mamba_time_comparing_nemotron_nano_v2_vs/ | false | false | 147 | {'enabled': False, 'images': [{'id': '-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U.png?width=108&crop=smart&auto=webp&s=55d0da67b657e46fff6ce1847ad9651ffe5dfbcb', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U.png?width=216&crop=smart&auto=webp&s=132e98730a015b65dd440251658e0713f9f0e89f', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U.png?width=320&crop=smart&auto=webp&s=936d0f528b41901421c3faed279aceb15c2d0388', 'width': 320}, {'height': 271, 'url': 'https://external-preview.redd.it/-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U.png?width=640&crop=smart&auto=webp&s=47bcb8449e292f736972fc476ddb801eee0e77e6', 'width': 640}, {'height': 407, 'url': 'https://external-preview.redd.it/-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U.png?width=960&crop=smart&auto=webp&s=912195a07fe8a26ecccb46cdd2a2746596131c9a', 'width': 960}, {'height': 457, 'url': 'https://external-preview.redd.it/-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U.png?width=1080&crop=smart&auto=webp&s=9a087ef216a696ab2571d695028026baa477d357', 'width': 1080}], 'source': {'height': 658, 'url': 'https://external-preview.redd.it/-5dEDvOvZEMy2pHuEuazSJpFNmFIHXICPgHs74NtU5U.png?auto=webp&s=7bf3264cc837fa95d2a0d585c1a4cf3d0d8f69e5', 'width': 1552}, 'variants': {}}]} | |
Is there a Local Android llm, uncensored | 0 | I am looking hard for a completely uncensored local AI... Can someone recommend me some good stuff?? | 2025-08-23T13:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/1my23sr/is_there_a_local_android_llm_uncensored/ | jatin_hehe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my23sr | false | null | t3_1my23sr | /r/LocalLLaMA/comments/1my23sr/is_there_a_local_android_llm_uncensored/ | false | false | self | 0 | null |
How do you actually use your local LLM? | 4 | How do you actually use your local LLM? Is it more for work, personal projects, translation, planning, or just as a supercharged search engine? And compared to before, how has it changed or improved your daily life? | 2025-08-23T13:43:49 | https://www.reddit.com/r/LocalLLaMA/comments/1my1u3e/how_do_you_actually_use_your_local_llm/ | spacecheap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my1u3e | false | null | t3_1my1u3e | /r/LocalLLaMA/comments/1my1u3e/how_do_you_actually_use_your_local_llm/ | false | false | self | 4 | null |
One app to chat with multiple LLMs (Google, Ollama, Docker) | 0 | E-Worker Studio is a web app where you can:
* Chat with **multiple AI model providers** from a single interface
* Keep your chats **stored locally** (nothing goes off your machine unless you want it to)
* Switch between providers without juggling tabs or tools
Currently supported:
* **Google AI Studio models** (free tier available with API key)
* **Ollama** (if you’re running models locally)
* **Dockerized AI models** (import configs directly)
Screenshots included:
* Chat windows with each provider
* Model configuration screens (Google / Ollama / Docker imports)
* Workspace settings showing local file storage
Try it here: [https://app.eworker.ca](https://app.eworker.ca)
Install it via your browser’s “Install app” option (PWA style).
https://preview.redd.it/3uqfqyoevrkf1.jpg?width=1511&format=pjpg&auto=webp&s=f87d8eccbbb369289567378da12e8e45c61fe893
https://preview.redd.it/tvdwynffvrkf1.jpg?width=1517&format=pjpg&auto=webp&s=c833bf390252c27e47d4fa8a3c40173821275c06
https://preview.redd.it/364n8kufvrkf1.jpg?width=1511&format=pjpg&auto=webp&s=5f108199bb9c6b295b4ea1973967441c3671ae3a
https://preview.redd.it/w8bqia7gvrkf1.jpg?width=1517&format=pjpg&auto=webp&s=36f922c739faa75e1dd2cea9517cace8aeb4e8cf
https://preview.redd.it/107n91ghvrkf1.jpg?width=1515&format=pjpg&auto=webp&s=73543230b2e3a38925fe1099ef02d64c156d6184
https://preview.redd.it/1770j3rhvrkf1.jpg?width=1522&format=pjpg&auto=webp&s=828a956616edf3944aaf0b36eb8535bdf6b13b50
https://preview.redd.it/fm6jjq1ivrkf1.jpg?width=1516&format=pjpg&auto=webp&s=de17231c6f0bebacacf2b26696e1bbf20a2bee8f
| 2025-08-23T13:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/1my1oue/one_app_to_chat_with_multiple_llms_google_ollama/ | Working-Magician-823 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my1oue | false | null | t3_1my1oue | /r/LocalLLaMA/comments/1my1oue/one_app_to_chat_with_multiple_llms_google_ollama/ | false | false | 0 | null | |
ByteDance Seed OSS 36B supported in llama.cpp | 92 | https://github.com/ggml-org/llama.cpp/commit/b1afcab804e3281867a5471fbd701e32eb32e512
Still no native support for serverside thinking tag parsing since Seed uses a new seed:think tag, so will have to add that later. | 2025-08-23T13:31:21 | https://www.reddit.com/r/LocalLLaMA/comments/1my1jg7/bytedance_seed_oss_36b_supported_in_llamacpp/ | ilintar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my1jg7 | true | null | t3_1my1jg7 | /r/LocalLLaMA/comments/1my1jg7/bytedance_seed_oss_36b_supported_in_llamacpp/ | false | false | self | 92 | {'enabled': False, 'images': [{'id': 'ThldHXUxXQSBs6ivV264xIvIIXe_VXArfgN9wqbreW4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ThldHXUxXQSBs6ivV264xIvIIXe_VXArfgN9wqbreW4.png?width=108&crop=smart&auto=webp&s=a443bccf95332853bc45aba85793f8bb74680479', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ThldHXUxXQSBs6ivV264xIvIIXe_VXArfgN9wqbreW4.png?width=216&crop=smart&auto=webp&s=588b0cc6e8725387b61cf43a8b3e0ad081fdbf42', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ThldHXUxXQSBs6ivV264xIvIIXe_VXArfgN9wqbreW4.png?width=320&crop=smart&auto=webp&s=6def8c3948ddb766fb7f02d7081b758d8bf244f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ThldHXUxXQSBs6ivV264xIvIIXe_VXArfgN9wqbreW4.png?width=640&crop=smart&auto=webp&s=8b9096cec46a6d52859adaea2d5943b5897c2b8f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ThldHXUxXQSBs6ivV264xIvIIXe_VXArfgN9wqbreW4.png?width=960&crop=smart&auto=webp&s=a0ba1d81d5211bfd3c3b6d77a096d0c319a81431', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ThldHXUxXQSBs6ivV264xIvIIXe_VXArfgN9wqbreW4.png?width=1080&crop=smart&auto=webp&s=f934640f46ec90bf50a1b0ffced54bc3279d78d6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ThldHXUxXQSBs6ivV264xIvIIXe_VXArfgN9wqbreW4.png?auto=webp&s=d345e49e5518b49947fcc403e9e880b56ae0f89a', 'width': 1200}, 'variants': {}}]} |
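Until server-side parsing lands, a client can split the reasoning out itself. A minimal sketch, assuming Seed-OSS wraps its chain-of-thought in a literal `<seed:think>...</seed:think>` block (the tag name is inferred from the post above; adjust it if the model's template differs):

```python
import re

# Assumed tag format: <seed:think>...</seed:think>
THINK_RE = re.compile(r"<seed:think>(.*?)</seed:think>", re.DOTALL)

def split_seed_thinking(text: str):
    """Return (thinking, answer) from raw Seed-OSS output."""
    thoughts = THINK_RE.findall(text)
    answer = THINK_RE.sub("", text).strip()
    return "\n".join(t.strip() for t in thoughts), answer

thinking, answer = split_seed_thinking(
    "<seed:think>2 + 2 is 4</seed:think>The answer is 4."
)
# thinking == "2 + 2 is 4", answer == "The answer is 4."
```

Doing this client-side also keeps the workaround compatible once llama.cpp adds native parsing, since stripping a tag that never appears is a no-op.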
Llamarunner, a llama.cpp manager and runner (with user presets!) | 7 | I was tinkering with different models (always with llama-server) and getting frustrated that I couldn't find anything for managing per-model presets to lower the hassle of switching models and using the right parameters. I wanted to run Qwen3, then GLM4.5-Air, then take a stab at DeepSeek; then I needed to embed stuff so I wanted Snowflake, and now something else... And I could not find anything online that could help me with it (admittedly, I was extremely lazy in my googling and defaulted to reinventing the wheel... probably. But it was fun!).
So here it is: Llamarunner is built to be callable from anywhere by automatically adding itself to PATH, is installable with a simple curl, can pull and build llama.cpp, runs your models with presets, and comes with the added bonus of being callable in a pipeline. So if you need to OCR a document, embed it for RAG, and then run the RAG pipeline, you can do all of that on one single machine!
Here's the repo; any form of criticism is welcome. Right now Windows is not supported, and honestly I don't see myself doing it, so if anybody wants, you are more than welcome to fork.
[https://github.com/GGrassia/llamarunner](https://github.com/GGrassia/llamarunner)
**Disclaimer**
I'm not a Go dev; it was chosen for ease of development and cross-platform compiling, so any non-idiomatic stuff comes from there. Knucklehead solutions and bad coding are instead to be blamed on me, and somewhat on GLM4.5-Air, but mostly on me; after all, I'm the only possible PEBCAK here.
Also, I expect some bugs, feel free to open issues and PRs, the only reason this is not a python script on my server is to give back to the community I've been taking and learning so much from.
Cheers! | 2025-08-23T13:28:59 | https://www.reddit.com/r/LocalLLaMA/comments/1my1hg4/llamarunner_a_llamacpp_manager_and_runner_with/ | GGrassia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my1hg4 | false | null | t3_1my1hg4 | /r/LocalLLaMA/comments/1my1hg4/llamarunner_a_llamacpp_manager_and_runner_with/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'vlGTGQOrIHTahupjhxGGGcVcLuOJgJVlCCQN30fwFNw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vlGTGQOrIHTahupjhxGGGcVcLuOJgJVlCCQN30fwFNw.png?width=108&crop=smart&auto=webp&s=79c715ac528a84a66c0e6a255fa1ffa6e7f58e7e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vlGTGQOrIHTahupjhxGGGcVcLuOJgJVlCCQN30fwFNw.png?width=216&crop=smart&auto=webp&s=6bc15d28dcf31d4cc1ad03b79288f778374ca931', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vlGTGQOrIHTahupjhxGGGcVcLuOJgJVlCCQN30fwFNw.png?width=320&crop=smart&auto=webp&s=1aef22f10ab13d25091b814348052f6a644935b5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vlGTGQOrIHTahupjhxGGGcVcLuOJgJVlCCQN30fwFNw.png?width=640&crop=smart&auto=webp&s=55ad018d6bb7f5074098a5579bf8a9f8c6dc7c76', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vlGTGQOrIHTahupjhxGGGcVcLuOJgJVlCCQN30fwFNw.png?width=960&crop=smart&auto=webp&s=40834f333d2d1fca29ff35765e901ca20abc0329', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vlGTGQOrIHTahupjhxGGGcVcLuOJgJVlCCQN30fwFNw.png?width=1080&crop=smart&auto=webp&s=411c9b4dfc49ef32623a0ceca74d539b63b9c64c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vlGTGQOrIHTahupjhxGGGcVcLuOJgJVlCCQN30fwFNw.png?auto=webp&s=d09d242cddee76c910ef9124dcb0f18e1381eeb0', 'width': 1200}, 'variants': {}}]} |
I was asking for ways to translate and synthesize anime voices on this subreddit and u guys answered me. So here is a give back. | 1 | [removed] | 2025-08-23T13:28:20 | https://v.redd.it/dj5wico1srkf1 | mrpeace03 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1my1gy2 | false | {'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/dj5wico1srkf1/DASHPlaylist.mpd?a=1758547717%2CNGVlYTVmZWNkNjgzZjljNDJjYTY3NjRjYWEyMzA5NjdlMjg0MWMzMThjZWQ3OTU0OTcwYThhNTA4NTJlYTVjZg%3D%3D&v=1&f=sd', 'duration': 174, 'fallback_url': 'https://v.redd.it/dj5wico1srkf1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 360, 'hls_url': 'https://v.redd.it/dj5wico1srkf1/HLSPlaylist.m3u8?a=1758547717%2CODk3OTQ4ZmI4ZTQ1ODc5MDcyYmZhYWQ2MThhNDFjN2RjZDBiZTlkN2ZmM2E1MjBiMzE1YmRjZjdhYjVmZmJlYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dj5wico1srkf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 640}} | t3_1my1gy2 | /r/LocalLLaMA/comments/1my1gy2/i_was_asking_for_ways_to_translate_and_synthesize/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bmVmdmgwbzFzcmtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bmVmdmgwbzFzcmtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?width=108&crop=smart&format=pjpg&auto=webp&s=6aeb2dd19ecaccd25449f965b0476b51ef80f22d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bmVmdmgwbzFzcmtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?width=216&crop=smart&format=pjpg&auto=webp&s=586372e69c1316ed47a8347ae324e1d081149dfc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bmVmdmgwbzFzcmtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?width=320&crop=smart&format=pjpg&auto=webp&s=6db84fd88ec92c03b3fc1922eb2c971b03a5fdbe', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/bmVmdmgwbzFzcmtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?width=640&crop=smart&format=pjpg&auto=webp&s=6e6922ef15255d866186730fd9d1026e3b00db4d', 'width': 640}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/bmVmdmgwbzFzcmtmMVC4oZFNK_9-nvftM-GMeL6N1R3h9wI3dbZCc-tjbpen.png?format=pjpg&auto=webp&s=67fc31796eb2d3b4133907737055ca78d799382f', 'width': 640}, 'variants': {}}]} | |
Why can't we build our own AI from pieces? | 0 | Sometimes you realise:
I’m downloading a 13GB LLM just to answer a few questions, write some code, or translate a document.
But 90% of that model? Stuff I’ll never use.
I don’t need poetry generation when I’m debugging.
I don’t need Malay translation if I only work in Russian.
I don’t need ancient Roman history just to parse a log file.
And then you ask: **why can't I just…**
> **…take only what I actually need?**
Imagine this:
- There are small, specialised **modules**:
— code understanding
— text processing
— translation
— reasoning
— math
— voice interface
- You pick the ones you need.
- A system **assembles them into one working model**.
- You get a **lightweight, fast, personal AI**.
- Run it **offline**, even on weak hardware.
- No subscriptions. No cloud. No tracking. **Just your AI.**
Sounds obvious?
**Then why doesn’t it exist?**
Right now, we get LLMs as monoliths — all-or-nothing.
Like buying a full toolbox just to use one screwdriver.
Maybe it’s time to ask:
**Can we do LLMs differently?**
Not as giant black boxes — but as **composable building blocks**?
I’m not building this. No code. No MVP.
But it feels like **someone should try**.
Maybe it’s just a dream.
Or maybe — **this is where the next step in AI begins**.
**What do you think? Is it possible? And if so — where would you start?** | 2025-08-23T13:17:19 | https://www.reddit.com/r/LocalLLaMA/comments/1my1811/why_cant_we_build_our_own_ai_from_pieces/ | NikoDraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my1811 | false | null | t3_1my1811 | /r/LocalLLaMA/comments/1my1811/why_cant_we_build_our_own_ai_from_pieces/ | false | false | self | 0 | null |
gPOS17 AI Workstation with 3 GPUs, 96 GB DDR5, Garage Edition | 5 | In the era of foundation models, multimodal AI, LLMs, and ever-larger datasets, access to raw compute is still one of the biggest bottlenecks for researchers, founders, developers, and engineers. While the cloud offers scalability, building a personal AI workstation delivers complete control over your environment, reduced latency, and the privacy of running workloads locally — even if that environment is a garage.
This post covers our version of a three-GPU workstation powered by an Intel Core i7-13700K, 96 GB of DDR5 memory, and a heterogeneous mix of GPUs sourced from both eBay and questionable decisions. This configuration pushes the limits of desktop AI computing while remaining true to the spirit of garage innovation.
# Our build includes:
* **Intel Core i7-13700K (16-core, Raptor Lake)** — providing blistering performance while drawing just enough power to trip a breaker when combined with three GPUs and a space heater.
* **96 GB DDR5-6400 CL32** — a nonstandard but potent memory loadout, because symmetry is for people with disposable income.
* **Three GPUs stacked without shame:**
* MSI SUPRIM X RTX 4080 16 GB (the crown jewel)
* NVIDIA Tesla V100 16 GB PCIe (legacy, but it still screams)
* AMD Radeon Instinct MI50 32 GB (scientific workloads… allegedly)
* **Four NVMe SSDs** totaling 12 TB, each one a different brand because who has time for consistency.
* **Dual PSU arrangement** (Corsair RM1000x + EVGA SuperNOVA 750 G2), mounted precariously like exposed organs.
# Why it matters
The gPOS17 doesn’t just support cutting-edge multimodal AI pipelines — it redefines workstation thermodynamics with its patented **weed-assisted cooling system** and **gravity-fed cable management architecture**. This is not just a PC; it’s a statement. A cry for help. A shrine to performance-per-dollar ratios.
The result is a workstation capable of running simultaneous experiments, from large-scale text generation to advanced field simulations, all without leaving your garage (though you might leave it on fire).
\*AMD Radeon Instinct MI50 not shown because it's in the mail from ebay.
\*\*diagram may not be accurate | 2025-08-23T13:14:08 | https://www.reddit.com/gallery/1my15gf | Lux_Interior9 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1my15gf | false | null | t3_1my15gf | /r/LocalLLaMA/comments/1my15gf/gpos17_ai_workstation_with_3_gpus_96_gb_ddr5/ | false | false | 5 | null | |
Intel's New LLM-Scaler Beta Update Brings Whisper Model & GLM-4.5-Air Support | 18 | 2025-08-23T13:04:50 | https://www.phoronix.com/news/Intel-llm-scaler-vllm-Whisper | reps_up | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1my0xu3 | false | null | t3_1my0xu3 | /r/LocalLLaMA/comments/1my0xu3/intels_new_llmscaler_beta_update_brings_whisper/ | false | false | default | 18 | null | |
Local LLM interface | 0 | https://reddit.com/link/1my0ulg/video/03h6v72uorkf1/player
I made a user-friendly interface for Ollama incorporating two AI models - would love to hear what people think
[www.offgridai.pro](http://www.offgridai.pro)
| 2025-08-23T13:00:58 | https://www.reddit.com/r/LocalLLaMA/comments/1my0ulg/local_llm_interface/ | widelyregardedas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my0ulg | false | null | t3_1my0ulg | /r/LocalLLaMA/comments/1my0ulg/local_llm_interface/ | false | false | self | 0 | null |
Help with LM Studio context size limitations vs ollama context size limitations | 0 | Hello everyone,
I'm working on a proxy script that translates API calls between Ollama and LM Studio to make Ollama-compatible applications work with LM Studio's backend. The project is still rough and currently hardcoded for the GPT-OSS model, but it's functional for basic operations.
The Problem: I'm hitting context size limitations when proxying requests to LM Studio. While the same requests work fine with Ollama, LM Studio throws "context too big" errors. I can't increase the context size limit on my system, and I'm not familiar enough with LM Studio's internals to find a workaround.
I Need Help With:
Better token counting methods (my 4-chars-per-token estimate is probably inaccurate)
LM Studio-specific context management strategies
Alternative approaches to handling long contexts in LM Studio
Code Repository: [https://github.com/vinivius/ollama-lmstudio-proxy](https://github.com/vinivius/ollama-lmstudio-proxy)
The proxy handles /api/version, /api/tags, /api/chat, and other Ollama endpoints, translating them to LM Studio's OpenAI-compatible format. Any insights from LM Studio experts or suggestions for better context management would be greatly appreciated!
System Info:
Model: GPT-OSS 20B
HP EliteBook X G1a - AMD Ryzen AI 9 HX Pro 375 - 64GB RAM
Thanks in advance for any help or pointers! | 2025-08-23T12:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/1my0hzj/help_with_lm_studio_context_size_limitations_vs/ | viniviusmf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my0hzj | false | null | t3_1my0hzj | /r/LocalLLaMA/comments/1my0hzj/help_with_lm_studio_context_size_limitations_vs/ | false | false | self | 0 | null |
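On the token-counting question from the post above: a stdlib-only sketch of a rougher-but-closer estimate plus oldest-first message trimming. The field names and budget numbers are illustrative, not LM Studio's actual API or limits:

```python
import re

def estimate_tokens(text: str) -> int:
    # Count words and punctuation separately; still a heuristic, but
    # usually closer than len(text) / 4 for English prose and code.
    return len(re.findall(r"\w+|[^\w\s]", text))

def trim_messages(messages, max_tokens, reserve=1024):
    """Drop the oldest non-system messages until the estimated prompt
    fits the context window, keeping `reserve` tokens for the reply.
    If the system prompt alone is over budget, it is returned as-is."""
    budget = max_tokens - reserve
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(estimate_tokens(m["content"]) for m in system + rest) > budget:
        rest.pop(0)  # discard the oldest turn first
    return system + rest
```

For exact counts you would tokenize with the model's own tokenizer; the heuristic above just narrows the gap without adding dependencies to the proxy.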
Best Practices for Cleaning Unsupervised Datasets for LLM Pre-training | 4 | Hey everyone,
I'm working on a personal project to reproduce the original GPT-1 model in an unsupervised manner, and I've hit a roadblock with data preprocessing. I'm using the `lucadiliello/bookcorpusopen` dataset from Hugging Face, but as you might know, it's full of "junk" text like copyright notices, headers, and other boilerplate that needs to be removed before I can train the tokenizer and the model.
Instead of writing my own custom cleaning script from scratch, I'm looking for established, open-source functions or entire preprocessing pipelines that the community has used for this exact purpose.
Has anyone here worked with a similar book corpus dataset and found a great pre-written script or library for cleaning it? I'm trying to avoid reinventing the wheel and want to get the data into the right format for pre-training.
Any tips, links to GitHub repos, or specific functions would be a huge help! Thanks in advance for any guidance. | 2025-08-23T12:42:19 | https://www.reddit.com/r/LocalLLaMA/comments/1my0ft0/best_practices_for_cleaning_unsupervised_datasets/ | Extra-Designer9333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1my0ft0 | false | null | t3_1my0ft0 | /r/LocalLLaMA/comments/1my0ft0/best_practices_for_cleaning_unsupervised_datasets/ | false | false | self | 4 | null |
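There may not be one canonical script for `bookcorpusopen` specifically, but the usual approach is a line-level filter over each book. A minimal sketch; the patterns are illustrative and would need extending for the dataset's actual boilerplate:

```python
import re

# Heuristic patterns for common book-dump boilerplate; extend as needed.
BOILERPLATE = re.compile(
    r"^(copyright|all rights reserved|isbn|published by|smashwords|"
    r"table of contents|chapter\s+\w+\s*$)",
    re.IGNORECASE,
)

def clean_book(text: str) -> str:
    """Drop empty and boilerplate-looking lines, keep the prose."""
    kept = []
    for line in text.splitlines():
        line = line.strip()
        if line and not BOILERPLATE.match(line):
            kept.append(line)
    return "\n".join(kept)
```

With Hugging Face `datasets` you would typically apply this with `dataset.map(...)` over the text column before training the tokenizer, so the cleaned corpus is cached once rather than recomputed every run.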
GPT-5 vs Claude-4 Sonnet on 200 Requests Benchmark | 12 | An independent evaluation of GPT-5 vs Claude 4 Sonnet across 200 diverse prompts.
Key insights: GPT-5 excels in reasoning and code; Claude 4 Sonnet is faster and slightly more precise on factual tasks. | 2025-08-23T12:21:08 | https://github.com/Cubent-Dev/Benchmark-GPT-5-vs-Claude-4-Sonnet-on-200-Requests | NoahDAVISFFX | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mxzzpi | false | null | t3_1mxzzpi | /r/LocalLLaMA/comments/1mxzzpi/gpt5_vs_claude4_sonnet_on_200_requests_benchmark/ | false | false | default | 12 | null |
Help me decide between these two pc builds | 0 | Hello, I am trying to build a budget-friendly PC that I can use for my future ML projects and some light local LLM hosting. I have narrowed it down to these two builds. I know these builds are more low-to-mid tier for hosting, but I am working within a budget.
Here is the two builds :
Option 1 :
Ryzen 5 5600
RTX 3060 12GB
32–64GB DDR4 RAM (upgrade planned)
1.5TB SSD storage
Option 2 :
Ryzen 7 7700
RTX 5060 Ti 16GB
64GB DDR5 RAM
1.5TB SSD storage
The second pc build is double the price of the first one
Has anyone here actually used either the rtx 3060 12gb or the rtx 5060 Ti 16gb for AI work? How was the experience?
And is the jump from the rtx 3060 to 5060ti worth the double price?
| 2025-08-23T12:07:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mxzpna/help_me_decide_between_these_two_pc_builds/ | 3rdhydra001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxzpna | false | null | t3_1mxzpna | /r/LocalLLaMA/comments/1mxzpna/help_me_decide_between_these_two_pc_builds/ | false | false | self | 0 | null |
Ollama Dashboard - Noob Question | 0 | So I'm kinda late to the party and have been spending the past 2 weeks reading technical documentation and understanding the basics.
I managed to install Ollama with an embedding model, install Postgres with pgvector, Obsidian, and VS Code with Continue, and connect all that. I also managed to set up Open LLM VTuber and Whisper and make my LLM more ayaya, but that's beside the point. I decided to go with Python and use VS Code with Continue for coding.
Now, thanks to Gaben the almighty, MCP got born. So I am looking for a GUI frontend for my LLM to implement MCP services. As far as I understand, LangChain and LlamaIndex used to be the solid base; now there is CrewAI and many more.
I feel kinda lost and overwhelmed here because I don't know which of these supports just basic local Ollama with some RAG/SQL and local preconfigured MCP servers. It's just for personal use.
And is there a thing that combines Open LLM Vtube with lets say Langchain to make an Ollama Dashboard? Control Input: Voice, Whisper, Llava, Prompt Tempering ... Control Agent: LLM, Tools via MCP or API Call ... Output Control: TTS, Avatar Control Is that a thing? | 2025-08-23T12:05:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mxzodj/ollama_dashboard_noob_question/ | WalterKEKWh1te | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxzodj | false | null | t3_1mxzodj | /r/LocalLLaMA/comments/1mxzodj/ollama_dashboard_noob_question/ | false | false | self | 0 | null |
There are three R's in Strawberry | 0 | [GPT-OSS-20B solves the Cipher problem](https://gist.github.com/sunpazed/b7a069f983f2f3f95cec57911bfbb08e) first showcased in the [OpenAI o1-preview Technical Paper](https://openai.com/index/learning-to-reason-with-llms/) — and yes, while I know it's likely that this brute single test might be in the training data, I was surprised to see that it took twice as long (10 minutes) and many more reasoning tokens than [Qwen3-30B-A3B](https://gist.github.com/sunpazed/f5220310f120e3fc7ea8c1fb978ee7a4) (4.5 minutes). While Qwen3 is king of the small reasoning models, I do find that OSS-20B more easily "adapts" its reasoning output depending on the task at hand, and is more suitable for agent use-cases then Qwen. Anyone else have this experience? | 2025-08-23T11:14:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mxypsd/there_are_three_rs_in_strawberry/ | sunpazed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxypsd | false | null | t3_1mxypsd | /r/LocalLLaMA/comments/1mxypsd/there_are_three_rs_in_strawberry/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=216&crop=smart&auto=webp&s=2e3562243f324d16bc6d9dd09adb1da4e0b100b5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=320&crop=smart&auto=webp&s=564e5f4bb6808064a14eb3965a6911671c3c9807', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=640&crop=smart&auto=webp&s=0f53460a90493497883ab4cacbbb58e2acb464c4', 'width': 640}, {'height': 480, 'url': 
'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=960&crop=smart&auto=webp&s=7a4f79362039959fa37eab208ae001245ccfe6e3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=1080&crop=smart&auto=webp&s=912f966e123e94e32e7975fe8aebac89450a6b98', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?auto=webp&s=c7cbcc7517e2406e2326e7a1eb6bdb9022c27fda', 'width': 1280}, 'variants': {}}]} |
🛠️ POML syntax highlighter for Sublime Text (for those structuring prompts like an agent boss) | 0 | Yo LLaMA wranglers and local AI tinkerers,
Just dropping this here in case any of you are exploring structured prompting for your agents or toolchains:
I built a syntax highlighter for POML, OpenAI’s markup format for cleanly structuring prompts, thinking steps, and agent logic.
✅ Works in Sublime Text
✅ Supports .poml, .promptml, .prompt.xml
✅ Highlights all major prompt logic tags (<template>, <var>, <sequence>, etc.)
🔗 GitHub: [https://github.com/Greatwent18/poml-sublime-text-syntax-extension](https://github.com/Greatwent18/poml-sublime-text-syntax-extension)
📖 POML spec: [https://cookbook.openai.com/examples/gpt-5/prompting\_patterns\_with\_poml](https://cookbook.openai.com/examples/gpt-5/prompting_patterns_with_poml)
I made this mostly for myself, but figured it could help others Sublime Text users doing reasoning-first workflows or chaining LLM logic. | 2025-08-23T11:13:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mxypf8/poml_syntax_highlighter_for_sublime_text_for/ | Euroel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxypf8 | false | null | t3_1mxypf8 | /r/LocalLLaMA/comments/1mxypf8/poml_syntax_highlighter_for_sublime_text_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'GdTe4_I2vEfOLotrUGSx0ZQSNbiakeBxi6DlC0paESU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GdTe4_I2vEfOLotrUGSx0ZQSNbiakeBxi6DlC0paESU.png?width=108&crop=smart&auto=webp&s=576a25137e64e2e1d365843d40048c8a05f59ca9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GdTe4_I2vEfOLotrUGSx0ZQSNbiakeBxi6DlC0paESU.png?width=216&crop=smart&auto=webp&s=f0e763097824e95afd5a28f29b96f6039177f9ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GdTe4_I2vEfOLotrUGSx0ZQSNbiakeBxi6DlC0paESU.png?width=320&crop=smart&auto=webp&s=9b4bc5bd5af8af5d8c7ba1f39dc6367d49d4a5d0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GdTe4_I2vEfOLotrUGSx0ZQSNbiakeBxi6DlC0paESU.png?width=640&crop=smart&auto=webp&s=6145beb73826807a3827f89cbdc5b35953198d33', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GdTe4_I2vEfOLotrUGSx0ZQSNbiakeBxi6DlC0paESU.png?width=960&crop=smart&auto=webp&s=52cb4f273206458254b065b3563ab99dadacc127', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GdTe4_I2vEfOLotrUGSx0ZQSNbiakeBxi6DlC0paESU.png?width=1080&crop=smart&auto=webp&s=de8d0dd390dd5d5b88faadae08e17369e711a997', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GdTe4_I2vEfOLotrUGSx0ZQSNbiakeBxi6DlC0paESU.png?auto=webp&s=23ff9e99265a8b2fe4207677f2e2ed5c9634b9f5', 'width': 1200}, 'variants': {}}]} |
Apple M3 Ultra 512GB vs NVIDIA RTX 3090 LLM Benchmark | 51 | 🔥 Apple M3 Ultra 512GB vs NVIDIA RTX 3090 LLM Benchmark Results Running Qwen3-30B-A3B (Q4\_K\_M) on llamacpp and 4bit on MLX
I think we need more of these comparisons! It took a lot of time to set up everything, so let's share results!
pp512:
🥇M3 w/ MLX: 2,320 t/s
🥈 3090: 2,157 t/s
🥉 M3 w/ Metal: 1,614 t/s
tg128:
🥇 3090: 136 t/s
🥈 M3 w/ MLX: 97 t/s
🥉 M3 w/ Metal: 86 t/s
https://preview.redd.it/7f1bj2ag4rkf1.png?width=2522&format=png&auto=webp&s=44184256681e46b0bb2d8324c4d6abda8b7f4266
| 2025-08-23T11:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mxykmq/apple_m3_ultra_512gb_vs_nvidia_rtx_3090_llm/ | ifioravanti | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxykmq | false | null | t3_1mxykmq | /r/LocalLLaMA/comments/1mxykmq/apple_m3_ultra_512gb_vs_nvidia_rtx_3090_llm/ | false | false | 51 | null |