| column | dtype | min | max |
|:--|:--|:--|:--|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] (date) | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] (date) | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Palantir Positioned for Long-Term AI Growth with Expanding Customer Base and Strong Economics
1
[removed]
2025-09-25T09:59:09
https://www.reddit.com/r/LocalLLaMA/comments/1nq2nng/palantir_positioned_for_longterm_ai_growth_with/
barnascript
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq2nng
false
null
t3_1nq2nng
/r/LocalLLaMA/comments/1nq2nng/palantir_positioned_for_longterm_ai_growth_with/
false
false
https://b.thumbs.redditm…q5SAq5Q_rlBw.jpg
1
null
Tested Qwen3 Next on String Processing, Logical Reasoning & Code Generation. It’s Impressive!
33
Alibaba released Qwen3-Next, and the architecture innovations are genuinely impressive. Two models were released:

* **Qwen3-Next-80B-A3B-Instruct** shows clear advantages in tasks requiring ultra-long context (**up to 256K tokens**)
* **Qwen3-Next-80B-A3B-Thinking** excels at complex reasoning tasks

It's a fundamental rethink of the efficiency vs. performance trade-off. Here's what we found in real-world performance testing:

* **Text Processing:** String **accurately** reversed, while the competitor showed character-duplication errors.
* **Logical Reasoning:** **Structured** 7-step solution with superior state-space organization and constraint management.
* **Code Generation:** **Complete** functional application versus the competitor's partial, truncated implementation.

I have put the details into this [research breakdown](https://blog.netmind.ai/article/Qwen3-Next:_Hybrid_Attention_for_Efficiency_Revolution_in_Open-Source_LLMs_(New_Research_Breakdown)) on how hybrid attention drives an efficiency revolution in open-source LLMs.

Has anyone else tested this yet? Curious how Qwen3-Next performs compared to traditional approaches in other scenarios.
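If you want to rerun the string-processing check yourself, here's a minimal sketch against an OpenAI-compatible endpoint (the base URL and model id are placeholders, not values from the post; substitute whatever your provider exposes):

```python
# Minimal repro of the string-reversal test via an OpenAI-compatible API.
# base_url and model id are assumptions; adjust to your provider.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

text = "Qwen3-Next handles ultra-long context"
resp = client.chat.completions.create(
    model="qwen3-next-80b-a3b-instruct",  # hypothetical model id
    messages=[{"role": "user", "content": f"Reverse this string exactly, output only the result: {text}"}],
)
answer = resp.choices[0].message.content.strip()
print(answer)
print("match:", answer == text[::-1])  # ground truth via Python slicing
```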
2025-09-25T09:43:42
https://www.reddit.com/gallery/1nq2eyv
MarketingNetMind
reddit.com
1970-01-01T00:00:00
0
{}
1nq2eyv
false
null
t3_1nq2eyv
/r/LocalLLaMA/comments/1nq2eyv/tested_qwen3_next_on_string_processing_logical/
false
false
https://b.thumbs.redditm…_VhHPo-1GrQs.jpg
33
null
Urgent question please - Does DeepSeek-V3.1-Terminus support vision (image inputs)?
0
It's in the title: DeepSeek-V3.1-Terminus, calling via API (not locally). I am seeing very conflicting information all over, and the official documentation doesn't mention it at all. Can anyone please answer?
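One way to settle it empirically is to send a minimal image message and see whether the API rejects it. A sketch (the base URL and model id follow DeepSeek's public OpenAI-compatible docs, but treat both as assumptions):

```python
# Probe for vision support: a text-only model should return an API error
# for the image content part. Base URL / model id are assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="sk-...")

try:
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }],
    )
    print("accepted:", resp.choices[0].message.content[:200])
except Exception as e:
    print("rejected (likely no image support):", e)
```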
2025-09-25T09:25:12
https://www.reddit.com/r/LocalLLaMA/comments/1nq24tj/urgent_question_please_does_deepseek/
Rent_South
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq24tj
false
null
t3_1nq24tj
/r/LocalLLaMA/comments/1nq24tj/urgent_question_please_does_deepseek/
false
false
self
0
null
Meta Code World Model: an LLM that understands code generation, not just token prediction
43
Meta’s **Code World Model (CWM)** is a 32B parameter **open-weight LLM** for code generation, debugging, and reasoning. Unlike standard code models, it **models execution traces**: variable states, runtime errors, file edits, shell commands. It uses a **decoder-only Transformer** (64 layers, 131k token context, grouped-query + sliding window attention) and was trained with pretrain → world modeling → SFT → RL pipelines (172B tokens, multi-turn rollouts). **Features:** long-context multi-file reasoning, agentic coding, self-bootstrapping, neural debugging. Benchmarks: SWE-bench 65.8%, LiveCodeBench 68.6%, Math-500 96.6%. Paper : [https://scontent.fhyd5-2.fna.fbcdn.net/v/t39.2365-6/553592426\_661450129912484\_4072750821656455102\_n.pdf?\_nc\_cat=103&ccb=1-7&\_nc\_sid=3c67a6&\_nc\_ohc=iRs3sgpeI1MQ7kNvwFK\_3Zo&\_nc\_oc=Adlc2UsribrXks0QKLto\_5kJ0Z0d\_meWCZ5-URPbaaNnA61JTqaU6kbYv2NzG-swk1o&\_nc\_zt=14&\_nc\_ht=scontent.fhyd5-2.fna&\_nc\_gid=ro31dO5FxlmV3au5dxL4-Q&oh=00\_AfYs5XCgaySaj6QIhNSBHwCV7DFjeANboXTFDHx1ewmgkA&oe=68DABDF5](https://scontent.fhyd5-2.fna.fbcdn.net/v/t39.2365-6/553592426_661450129912484_4072750821656455102_n.pdf?_nc_cat=103&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=iRs3sgpeI1MQ7kNvwFK_3Zo&_nc_oc=Adlc2UsribrXks0QKLto_5kJ0Z0d_meWCZ5-URPbaaNnA61JTqaU6kbYv2NzG-swk1o&_nc_zt=14&_nc_ht=scontent.fhyd5-2.fna&_nc_gid=ro31dO5FxlmV3au5dxL4-Q&oh=00_AfYs5XCgaySaj6QIhNSBHwCV7DFjeANboXTFDHx1ewmgkA&oe=68DABDF5)
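For readers who think in configs, the headline numbers above map to something like this sketch (the dataclass and field names are mine, purely for illustration; only the numbers come from the post and paper):

```python
# Illustrative summary of the CWM architecture described above.
# Field names are invented for readability; numbers are from the post/paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class CWMSketch:
    params: str = "32B"                        # dense, decoder-only
    layers: int = 64
    context_tokens: int = 131_000              # ~131k-token context window
    attention: tuple = ("grouped-query", "sliding-window")
    pipeline: tuple = ("pretrain", "world-model mid-train", "SFT", "RL")

print(CWMSketch())
```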
2025-09-25T09:03:55
https://www.reddit.com/r/LocalLLaMA/comments/1nq1t5a/meta_code_world_model_llm_that_understand_code/
Technical-Love-8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq1t5a
false
null
t3_1nq1t5a
/r/LocalLLaMA/comments/1nq1t5a/meta_code_world_model_llm_that_understand_code/
false
false
self
43
null
OpenSource LocalLLama App
7
MineGPT is a lightweight local SLM (Small Language Model) chat application built with Kotlin Multiplatform. It aims to provide a cross-platform and user-friendly AI assistant experience.
2025-09-25T09:01:19
https://github.com/Onion99/MineGPT
Adventurous_Onion189
github.com
1970-01-01T00:00:00
0
{}
1nq1rry
false
null
t3_1nq1rry
/r/LocalLLaMA/comments/1nq1rry/opensource_localllama_app/
false
false
default
7
{'enabled': False, 'images': [{'id': 'SijcfIwA7PAFDI4CM5k1asb61wyl4YLzfB1WYnuUySI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SijcfIwA7PAFDI4CM5k1asb61wyl4YLzfB1WYnuUySI.jpeg?width=108&crop=smart&auto=webp&s=c10cc36bae9eaec9efd3c4cd65982e3f657a4cac', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/SijcfIwA7PAFDI4CM5k1asb61wyl4YLzfB1WYnuUySI.jpeg?width=216&crop=smart&auto=webp&s=28f444196de07dabbe38e3d44336252bf5a8b2f8', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/SijcfIwA7PAFDI4CM5k1asb61wyl4YLzfB1WYnuUySI.jpeg?width=320&crop=smart&auto=webp&s=d0489e3c0eafe18412e6b2fb90a1b7bfb38cbbe3', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/SijcfIwA7PAFDI4CM5k1asb61wyl4YLzfB1WYnuUySI.jpeg?width=640&crop=smart&auto=webp&s=6fbdf4b7afe33632fc2ce83fffc3d147b688380b', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/SijcfIwA7PAFDI4CM5k1asb61wyl4YLzfB1WYnuUySI.jpeg?width=960&crop=smart&auto=webp&s=a0431c4c61dec32510cc072cf08eea96ca87c3c8', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/SijcfIwA7PAFDI4CM5k1asb61wyl4YLzfB1WYnuUySI.jpeg?width=1080&crop=smart&auto=webp&s=27388f5488cccf6002e487e7d652e01f17215863', 'width': 1080}], 'source': {'height': 3230, 'url': 'https://external-preview.redd.it/SijcfIwA7PAFDI4CM5k1asb61wyl4YLzfB1WYnuUySI.jpeg?auto=webp&s=7f7c719e7455cf2cbf9d5bf57a31aaffa9a25559', 'width': 5760}, 'variants': {}}]}
Best App and Models for 5070
1
Hello guys, I'm new to this kind of thing, really really blind, but I'm interested in learning AI and ML; at the very least I want to try a local AI before I learn deeper. I have an RTX 5070 12GB + 32GB RAM. Which app and models do you think would be best for me? For now I just want an AI chatbot to talk with, and I'd be happy to receive tips and advice from you guys since I'm still a baby in this kind of "world" :D. Thank you so much in advance.
2025-09-25T09:01:05
https://www.reddit.com/r/LocalLLaMA/comments/1nq1rm2/best_app_and_models_for_5070/
Kyotaco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq1rm2
false
null
t3_1nq1rm2
/r/LocalLLaMA/comments/1nq1rm2/best_app_and_models_for_5070/
false
false
self
1
null
GPT-OSS-120B settings help
5
What would be the optimal configuration in lm-studio for running gpt-oss-120b on a 5090?
2025-09-25T08:58:32
https://www.reddit.com/r/LocalLLaMA/comments/1nq1q78/gptoss120b_settings_help/
foggyghosty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq1q78
false
null
t3_1nq1q78
/r/LocalLLaMA/comments/1nq1q78/gptoss120b_settings_help/
false
false
self
5
null
llms.py – Lightweight OpenAI-Compatible Chat Client and Server (Text/Image/Audio)
5
Lightweight CLI and OpenAI-compatible server for querying multiple Large Language Model (LLM) providers. Configure additional providers and models in [llms.json](https://github.com/ServiceStack/llms/blob/main/llms.json):

* Mix and match local models with models from different API providers
* Requests are automatically routed to available providers that support the requested model (in defined order)
* Define free/cheapest/local providers first to save on costs
* Any failures are automatically retried on the next available provider
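The routing described above (first listed provider that supports the model wins; failures fall through to the next) boils down to something like this sketch. llms.py's actual implementation lives in the repo and will differ; the provider dict shape here is illustrative only:

```python
# Hedged sketch of ordered provider routing with failover, as described above.
def route(providers: list[dict], model: str, prompt: str) -> str:
    last_err = None
    for provider in providers:  # free/cheapest/local providers listed first
        if model not in provider["models"]:
            continue
        try:
            return provider["call"](model, prompt)
        except Exception as err:  # provider failed: retry on the next one
            last_err = err
    raise RuntimeError(f"no available provider could serve {model}") from last_err
```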
2025-09-25T08:58:03
https://github.com/ServiceStack/llms
mythz
github.com
1970-01-01T00:00:00
0
{}
1nq1py1
false
null
t3_1nq1py1
/r/LocalLLaMA/comments/1nq1py1/llmspy_lightweight_open_ai_chat_client_and_server/
false
false
default
5
{'enabled': False, 'images': [{'id': 'gw8Hwxde_3fHoIZUpq8qYsU_lhj3dfNz1Q3CDenYjaE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gw8Hwxde_3fHoIZUpq8qYsU_lhj3dfNz1Q3CDenYjaE.png?width=108&crop=smart&auto=webp&s=3317308c6cc36593297782e20d5d360fa0b4aea0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gw8Hwxde_3fHoIZUpq8qYsU_lhj3dfNz1Q3CDenYjaE.png?width=216&crop=smart&auto=webp&s=a58de9832317863b24e4ca5f58de9106d72e3c78', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gw8Hwxde_3fHoIZUpq8qYsU_lhj3dfNz1Q3CDenYjaE.png?width=320&crop=smart&auto=webp&s=9ca4e96de44f3f0ba88b2e514e703b8328301241', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gw8Hwxde_3fHoIZUpq8qYsU_lhj3dfNz1Q3CDenYjaE.png?width=640&crop=smart&auto=webp&s=844be601a86c341cf8f7dd9cb52ecc5cd11c1f54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gw8Hwxde_3fHoIZUpq8qYsU_lhj3dfNz1Q3CDenYjaE.png?width=960&crop=smart&auto=webp&s=27c3d0a0231fe762dc85cf233fe24663eca21aec', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gw8Hwxde_3fHoIZUpq8qYsU_lhj3dfNz1Q3CDenYjaE.png?width=1080&crop=smart&auto=webp&s=76bdc5f9c6f60aad09eaaab11d503151e2153f7a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gw8Hwxde_3fHoIZUpq8qYsU_lhj3dfNz1Q3CDenYjaE.png?auto=webp&s=2cbc7071d5efc155d5a88944ca995f5169bb4df4', 'width': 1200}, 'variants': {}}]}
China has already started making CUDA- and DirectX-supporting GPUs, so NVIDIA's monopoly is over. The Fenghua No.3 supports the latest APIs, including DirectX 12, Vulkan 1.2, and OpenGL 4.6.
576
2025-09-25T08:44:03
https://i.redd.it/kvkovm34x9rf1.png
CeFurkan
i.redd.it
1970-01-01T00:00:00
0
{}
1nq1ia2
false
null
t3_1nq1ia2
/r/LocalLLaMA/comments/1nq1ia2/china_already_started_making_cuda_and_directx/
false
false
default
576
{'enabled': True, 'images': [{'id': 'kvkovm34x9rf1', 'resolutions': [{'height': 151, 'url': 'https://preview.redd.it/kvkovm34x9rf1.png?width=108&crop=smart&auto=webp&s=4d19b5d5d07c689629c86f0bb6e94bee8a58304f', 'width': 108}, {'height': 303, 'url': 'https://preview.redd.it/kvkovm34x9rf1.png?width=216&crop=smart&auto=webp&s=ed36669a8c6ad7e2ad3e36622bc60709cfd6a18a', 'width': 216}, {'height': 449, 'url': 'https://preview.redd.it/kvkovm34x9rf1.png?width=320&crop=smart&auto=webp&s=fd7e7ac8fb83843585e04027e8c9d860aeb16cab', 'width': 320}, {'height': 898, 'url': 'https://preview.redd.it/kvkovm34x9rf1.png?width=640&crop=smart&auto=webp&s=45659c3155f1d8f5d8b88f474aa1e034afc37b1f', 'width': 640}, {'height': 1348, 'url': 'https://preview.redd.it/kvkovm34x9rf1.png?width=960&crop=smart&auto=webp&s=3b561355fcfb33ce58f53831105a32489cb9c280', 'width': 960}, {'height': 1516, 'url': 'https://preview.redd.it/kvkovm34x9rf1.png?width=1080&crop=smart&auto=webp&s=3e491d8c220beccfa94b9055c57adff502834e68', 'width': 1080}], 'source': {'height': 1781, 'url': 'https://preview.redd.it/kvkovm34x9rf1.png?auto=webp&s=c5ce07064819f453bcb80b11447ebb6c288caf4a', 'width': 1268}, 'variants': {}}]}
Alibaba just unveiled their Qwen roadmap. The ambition is staggering!
831
Two big bets: unified multi-modal models and extreme scaling across every dimension.

- Context length: 1M → 100M tokens
- Parameters: trillion → ten-trillion scale
- Test-time compute: 64k → 1M scaling
- Data: 10 trillion → 100 trillion tokens

They're also pushing synthetic data generation "without scale limits" and expanding agent capabilities across complexity, interaction, and learning modes. The "scaling is all you need" mantra is becoming China's AI gospel.
2025-09-25T08:24:45
https://i.redd.it/5tm4p90rt9rf1.jpeg
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1nq182d
false
null
t3_1nq182d
/r/LocalLLaMA/comments/1nq182d/alibaba_just_unveiled_their_qwen_roadmap_the/
false
false
default
831
{'enabled': True, 'images': [{'id': '5tm4p90rt9rf1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/5tm4p90rt9rf1.jpeg?width=108&crop=smart&auto=webp&s=25183c3fc208f05f8cb7f7be2595dfb1b5a5d956', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/5tm4p90rt9rf1.jpeg?width=216&crop=smart&auto=webp&s=41691d23f827e9005a0f7622dc42a535e733818e', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/5tm4p90rt9rf1.jpeg?width=320&crop=smart&auto=webp&s=d6ffe7a23d6cad7eb7294f921290eeab7a5686b1', 'width': 320}, {'height': 304, 'url': 'https://preview.redd.it/5tm4p90rt9rf1.jpeg?width=640&crop=smart&auto=webp&s=6724dd3297826b1a060f45ea0c5e1fd9e366f5ab', 'width': 640}, {'height': 457, 'url': 'https://preview.redd.it/5tm4p90rt9rf1.jpeg?width=960&crop=smart&auto=webp&s=059c3533388f070d729d2f6c7f5449ca05a74888', 'width': 960}, {'height': 514, 'url': 'https://preview.redd.it/5tm4p90rt9rf1.jpeg?width=1080&crop=smart&auto=webp&s=c029ef8eaab0a332fb7139e491f46d4fcbc8f89e', 'width': 1080}], 'source': {'height': 842, 'url': 'https://preview.redd.it/5tm4p90rt9rf1.jpeg?auto=webp&s=43bbfc171965f79f761bb3cfd2f3c3fb6dac9968', 'width': 1768}, 'variants': {}}]}
Dell T630 4x 3060 48 GB VRAM 10c40t Xeon 256gb ECC DDR4 2x1600w redundant PSU
54
I was looking at getting a dual-socket setup going with more than 4 GPUs, but it honestly ended up on the back burner. I picked up some hardware recently and found that the platform's native features just made everything easier: power distribution, airflow, and even drive capacity made it much simpler to go the route of a Dell T630 tower. In terms of upgradeability, there's room for 44 cores / 88 threads and 768 GB of DDR4 RAM, not to mention 32x 2.5" SSDs. All this for an acquisition cost of ~$100 before the GPUs.
2025-09-25T08:19:25
https://i.redd.it/cjmhdtjss9rf1.jpeg
desexmachina
i.redd.it
1970-01-01T00:00:00
0
{}
1nq154s
false
null
t3_1nq154s
/r/LocalLLaMA/comments/1nq154s/dell_t630_4x_3060_48_gb_vram_10c40t_xeon_256gb/
false
false
default
54
{'enabled': True, 'images': [{'id': 'cjmhdtjss9rf1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/cjmhdtjss9rf1.jpeg?width=108&crop=smart&auto=webp&s=ef84122c48cce2e16a1e80adf925d749dbf64e77', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/cjmhdtjss9rf1.jpeg?width=216&crop=smart&auto=webp&s=6a3e0ad0bf0eae40053d99a1bba9b933732b08e3', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/cjmhdtjss9rf1.jpeg?width=320&crop=smart&auto=webp&s=d5cbf74131781a473d01dfb2d75e906375e98054', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/cjmhdtjss9rf1.jpeg?width=640&crop=smart&auto=webp&s=befdca71b09eb97d2f8e1c2613b2ada397bd42a7', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/cjmhdtjss9rf1.jpeg?width=960&crop=smart&auto=webp&s=24937dcadc13718e33fe5ad19d19ad8677b2c37d', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/cjmhdtjss9rf1.jpeg?width=1080&crop=smart&auto=webp&s=e000d6000cb35653dc30c7d9be89a1a929919195', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/cjmhdtjss9rf1.jpeg?auto=webp&s=b65ad7cc5fb30d9bb8bf395325805da6bfab582a', 'width': 3024}, 'variants': {}}]}
Does anyone use an open source model for coding hosted on an AWS EC2 server?
2
I have experimented a bit with installing some open-source models from Hugging Face on an AWS EC2 instance (g5.xlarge: 4 vCPUs (AMD EPYC 7R32, 2.8 GHz), 16 GiB RAM, 250 GiB NVMe SSD, 1x NVIDIA A10G GPU (24 GiB VRAM), up to 10 Gbps networking, EBS-optimized (3.5 Gbps / 15K IOPS)). This was just for some proof-of-concept experiments. I'm interested in hearing from anyone who has taken this approach and successfully runs a model they use like Codex or Claude Code: something that understands an entire repository and can make script changes, write new scripts, etc. If you've done this and are happy with the performance, especially if you've compared it with Codex and Claude Code, what hardware and model(s) are you using? What did you experiment with? Essentially I'm trying to figure out whether I can create a durable EC2-hosted solution specifically for coding and repo management. Interested in any experiences and success stories.
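Not a success story, but for concreteness, the usual shape of this setup is an OpenAI-compatible server on the instance plus a coding agent pointed at it. A sketch (the model choice is my assumption sized for a 24 GiB A10G, not a recommendation from the post; larger models need quantization or bigger instances):

```python
# On the EC2 box (shell): vllm serve Qwen/Qwen2.5-Coder-7B-Instruct --max-model-len 32768
# Any OpenAI-compatible coding tool can then target the endpoint; minimal smoke test:
from openai import OpenAI

client = OpenAI(base_url="http://<ec2-public-ip>:8000/v1", api_key="none")
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",  # assumed model, sized for 24 GiB VRAM
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
)
print(resp.choices[0].message.content)
```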
2025-09-25T08:14:25
https://www.reddit.com/r/LocalLLaMA/comments/1nq12hq/does_anyone_use_an_open_source_model_for_coding/
lopiontheop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq12hq
false
null
t3_1nq12hq
/r/LocalLLaMA/comments/1nq12hq/does_anyone_use_an_open_source_model_for_coding/
false
false
self
2
null
Is VibeVoice Realtime Streaming only?
2
Installed the 1.5B model. Chose 1 speaker generation. Added around 3 minutes worth of text for TTS. But instead of generating the full speech at once, it started streaming in real-time. Is there a way to get the entire output in one go, instead of it streaming live?
2025-09-25T07:34:08
https://www.reddit.com/r/LocalLLaMA/comments/1nq0gyn/is_vibevoice_realtime_streaming_only/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq0gyn
false
null
t3_1nq0gyn
/r/LocalLLaMA/comments/1nq0gyn/is_vibevoice_realtime_streaming_only/
false
false
self
2
null
IMPORTANT: Why Abliterated Models SUCK. Here is a better way to uncensor LLMs.
325
So I have been testing many local models, and I have noticed that all abliterated models have degraded performance compared to the original. The newer MoE models such as Qwen3 30b a3b suffer the most from abliteration. The areas in which they degrade the most are logical reasoning and agentic tasks, and most importantly they hallucinate like crazy, which often causes abliterated big models like 30b to be outperformed by non-abliterated 4-8b models in my tests.

I have noticed a very important pattern: models that have been abliterated but also finetuned show very little degradation compared to models that were just abliterated.

Here are some models that were abliterated but finetuned/trained afterwards. They perform equally to or outperform the originals, with the amazing added benefit of being completely uncensored:

1. mradermacher/Qwen3-30B-A3B-abliterated-erotic-i1-GGUF

This model is very powerful. It was abliterated but also trained on uncensored material. I have found it to perform very close to the original model while being completely uncensored. It struggles a little more on agentic tasks than the original, but in everything else it's near perfect. Its hallucination rate is very low compared to other abliterated versions of Qwen3 30b a3b, and it's pretty knowledgeable.

2. mlabonne/NeuralDaredevil-8B-abliterated

This model is absolutely amazing; it was abliterated but also DPO-finetuned. The original model was Llama3-8b. This model completely outperforms the original, and again it is completely uncensored. The author has also generously provided information about what datasets he used to train it and what he did to achieve these results.

These two models were the best I have found among the uncensored models made by the community.

Why is **Qwen3-30B-A3B-abliterated-erotic**-i1-GGUF better than all other abliterated/uncensored Qwen3-30b-a3b models? I actually used the i1-Q4_K_S version of this model in my tests. I compared it to these models:

1. **Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated**-GGUF/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated.Q4_K_M.gguf
2. **Huihui-Qwen3-30B-A3B-abliterated-Fusion-9010**-i1-GGUF/Huihui-Qwen3-30B-A3B-abliterated-Fusion-9010.i1-Q4_K_M.gguf (this model especially sucks)
3. **Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated**-GGUF/Huihui-Qwen3-30B-A3B-Instruct-2507-abliterated.Q4_K_M.gguf

I asked these models the usual uncensored questions like "How to sell meth". All the abliterated Qwen3-30b-a3b models would give me a generic business pitch that was completely unrealistic, more fitting for a candy shop or a tech company than an illegal underground drug distribution ring. They made nonsensical strategies. The **Qwen3-30B-A3B-abliterated-erotic** model was the only one of the four that actually came up with a reasonable business strategy that would be successful in that scenario.

Another test: I ran these models with MCPs, and the three Huihui models really sucked at tool calls; they would either call the wrong tool for the occasion or repeatedly spam the same tool many times in a row without any reason. Again the **Qwen3-30B-A3B-abliterated-erotic** model won here: it called tools correctly more often than the other three, although it performed slightly worse than the original Qwen3-30b a3b model.

This model was also best at giving facts (its hallucination rate was the lowest). I'm actually shocked that a model trained for erotic conversations performs so well. But here we are...

**My theory** is that models trained after abliteration recover most of the performance lost during abliteration.

My request to you guys is to try training Qwen3-30b-a3b after abliteration on a high-quality dataset so we can have more high-quality uncensored models. I'm sure I'm not the only person frustrated with the limited selection of uncensored models today; most uncensored models today are very low quality. My goal is to change that...

**I'm making this post to convince other devs to work on creating good-quality uncensored models.**

**I believe that free access to information is a fundamental human right.** Censored models take away that right to unrestricted access to valuable information. Without free access to information we become easy to control.
2025-09-25T07:26:06
https://www.reddit.com/r/LocalLLaMA/comments/1nq0cp9/important_why_abliterated_models_suck_here_is_a/
Optimal_League_1419
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq0cp9
false
null
t3_1nq0cp9
/r/LocalLLaMA/comments/1nq0cp9/important_why_abliterated_models_suck_here_is_a/
false
false
self
325
null
Best model for 16GB CPUs?
8
Hi, it's gonna be a while until we get the next generation of LLMs, so I am trying to find the best model so far to run on my system. What's the best model for x86 CPU-only systems with 16GB of total RAM? I don't think the bigger MoEs will fit without quantizing them so much they become stupid. What models are you guys using in such scenarios?
2025-09-25T06:57:10
https://www.reddit.com/r/LocalLLaMA/comments/1npzwe7/best_model_for_16gb_cpus/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npzwe7
false
null
t3_1npzwe7
/r/LocalLLaMA/comments/1npzwe7/best_model_for_16gb_cpus/
false
false
self
8
null
A step-by-step guide on how to build an LLM from scratch
76
I wanted to share this here; hopefully it will help some folks get deeper into this and learn. I just published a comprehensive guide on how to build an LLM from scratch using historical London texts from 1500-1850.

**What I Built:**

* Two identical models (117M & 354M parameters) trained from scratch
* Custom historical tokenizer with a 30k vocabulary + 150+ special tokens for archaic English (a sketch of this step is below)
* Complete data pipeline processing 218+ historical sources (500M+ characters)
* Production-ready training with multi-GPU support, WandB integration, and checkpointing
* Published models on Hugging Face ready for immediate use

**Why This Matters:** Most LLM guides focus on fine-tuning existing models. This series shows you how to build from the ground up, eliminating modern biases and creating models that truly understand historical language patterns, cultural contexts, and period-specific knowledge.

**Resources:**

* Blog Series: [https://blog.desigeek.com/post/2025/09/building-llm-from-scratch-part1/](https://blog.desigeek.com/post/2025/09/building-llm-from-scratch-part1/)
* Complete Codebase: [https://github.com/bahree/helloLondon](https://github.com/bahree/helloLondon)
* Published Models: [https://huggingface.co/bahree/london-historical-slm](https://huggingface.co/bahree/london-historical-slm)
* LinkedIn (if that's your thing): [https://www.linkedin.com/feed/update/urn:li:share:7376863225306365952/](https://www.linkedin.com/feed/update/urn:li:share:7376863225306365952/)

The models are already working and generating authentic 18th-century London text. Perfect for developers who want to understand the complete LLM development pipeline.

**Shoutout:** Big thanks to u/Remarkable-Trick-177 for the inspiration!
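To give a flavor of the custom-tokenizer step, here is a minimal sketch of training a BPE vocabulary with special tokens using the `tokenizers` library (the file name, vocab size, and token names are illustrative placeholders, not the exact ones from the series):

```python
# Minimal BPE tokenizer training sketch with special tokens for archaic English.
# "london_texts.txt" and the special tokens below are hypothetical examples.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tok = Tokenizer(models.BPE(unk_token="<unk>"))
tok.pre_tokenizer = pre_tokenizers.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=30_000,
    special_tokens=["<unk>", "<pad>", "<bos>", "<eos>", "<|thou|>", "<|year:1750|>"],
)
tok.train(["london_texts.txt"], trainer)
tok.save("historical_tokenizer.json")
```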
2025-09-25T06:50:52
https://www.reddit.com/r/LocalLLaMA/comments/1npzstw/a_step_by_step_guide_on_how_to_build_a_llm_from/
amitbahree
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npzstw
false
null
t3_1npzstw
/r/LocalLLaMA/comments/1npzstw/a_step_by_step_guide_on_how_to_build_a_llm_from/
false
false
self
76
null
Qwen API (asking especially developers)
3
Is anyone here using the Qwen API? I'd like to know if the response is as slow as in the web chat version. I've had trouble activating it through Alibaba; does anyone use it via OpenRouter? Thanks in advance.
2025-09-25T06:44:21
https://www.reddit.com/r/LocalLLaMA/comments/1npzp0r/qwen_api_asking_especially_developers/
Beneficial_Tough_367
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npzp0r
false
null
t3_1npzp0r
/r/LocalLLaMA/comments/1npzp0r/qwen_api_asking_especially_developers/
false
false
self
3
null
Deploy Qwen3-Next-80B in 32G DDR + single GPU
1
[removed]
2025-09-25T06:41:35
https://www.reddit.com/r/LocalLLaMA/comments/1npznk3/deploy_qwen3next80b_in_32g_ddr_single_gpu/
Fastllm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npznk3
false
null
t3_1npznk3
/r/LocalLLaMA/comments/1npznk3/deploy_qwen3next80b_in_32g_ddr_single_gpu/
false
false
self
1
null
Run Qwen3-Next-80B in 32G DDR + single GPU
1
[removed]
2025-09-25T06:39:18
https://www.reddit.com/r/LocalLLaMA/comments/1npzm92/run_qwen3next80b_in_32g_ddr_single_gpu/
Fastllm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npzm92
false
null
t3_1npzm92
/r/LocalLLaMA/comments/1npzm92/run_qwen3next80b_in_32g_ddr_single_gpu/
false
false
self
1
null
Made a lip-synced video on an old laptop
7
I have been exploring some AI models and found some that can generate talking-head videos, so I generated a lip-synced video using only the CPU. It takes 2m 18s to generate a video from 5s of audio. Model for lip sync: float - [https://github.com/deepbrainai-research/float](https://github.com/deepbrainai-research/float)
2025-09-25T05:52:08
https://v.redd.it/w66m72d829rf1
OrganicTelevision652
v.redd.it
1970-01-01T00:00:00
0
{}
1npyvpf
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/w66m72d829rf1/DASHPlaylist.mpd?a=1761371545%2CYTQ2ZTdmZjUyYTM4ODA5NGFlYWZlZDM0YjgyYWQyOWU5NmFjYWVhMjYyMmM0MWQyMTg2ZjNlNmJjYjEzMDI3MA%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/w66m72d829rf1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/w66m72d829rf1/HLSPlaylist.m3u8?a=1761371545%2CZTNlMDg2ZDI1OTJkMGM0NDQwOGU1YzQ5MDkyYWU3NGUwMDc4ZjczYTRiMjk5M2QwYzRkMDdjYjQwMmVhYzUwYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w66m72d829rf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}}
t3_1npyvpf
/r/LocalLLaMA/comments/1npyvpf/made_a_lip_synced_video_in_a_old_laptop/
false
false
https://external-preview…e50cc3f64b6fc1ac
7
{'enabled': False, 'images': [{'id': 'cGo3bDYzZDgyOXJmMSEkouk1dkJ0lT_7s9HHSZNUsTHnamMLbJ2i8N9lMQ57', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cGo3bDYzZDgyOXJmMSEkouk1dkJ0lT_7s9HHSZNUsTHnamMLbJ2i8N9lMQ57.png?width=108&crop=smart&format=pjpg&auto=webp&s=6d389cf2340e31f6a48dceedbc129842c1d49103', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/cGo3bDYzZDgyOXJmMSEkouk1dkJ0lT_7s9HHSZNUsTHnamMLbJ2i8N9lMQ57.png?width=216&crop=smart&format=pjpg&auto=webp&s=ca3ed63ce52e4742ccdefc06b79377d3b656fe0a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/cGo3bDYzZDgyOXJmMSEkouk1dkJ0lT_7s9HHSZNUsTHnamMLbJ2i8N9lMQ57.png?width=320&crop=smart&format=pjpg&auto=webp&s=30b831d5ba9d38b9d9e5256bf237bef232301e54', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/cGo3bDYzZDgyOXJmMSEkouk1dkJ0lT_7s9HHSZNUsTHnamMLbJ2i8N9lMQ57.png?format=pjpg&auto=webp&s=60241b851aa184fc0ed2e6305cbc101b4a417713', 'width': 512}, 'variants': {}}]}
Anyone Know What Model This Is? (LMArena)
2
I was messing around in LMArena battle mode generating images, and this was the better option. The model name interested me a lot; I've looked around and tried researching it, but nothing shows up, and it seems like nobody has talked about it, unless I'm living under a rock.
2025-09-25T05:44:30
https://i.redd.it/mrf6kvq519rf1.jpeg
o3zr
i.redd.it
1970-01-01T00:00:00
0
{}
1npyr93
false
null
t3_1npyr93
/r/LocalLLaMA/comments/1npyr93/anyone_know_what_model_this_is_lmarena/
false
false
default
2
{'enabled': True, 'images': [{'id': 'mrf6kvq519rf1', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/mrf6kvq519rf1.jpeg?width=108&crop=smart&auto=webp&s=3827e9084a0ec6bb1990e7530a76b2ade34dcb8c', 'width': 108}, {'height': 223, 'url': 'https://preview.redd.it/mrf6kvq519rf1.jpeg?width=216&crop=smart&auto=webp&s=ae3ce250d0398aa872416b2b93504de6ed00c17e', 'width': 216}, {'height': 330, 'url': 'https://preview.redd.it/mrf6kvq519rf1.jpeg?width=320&crop=smart&auto=webp&s=408dcaa3133484a0ca084d930c96fb80c9fdc14f', 'width': 320}, {'height': 660, 'url': 'https://preview.redd.it/mrf6kvq519rf1.jpeg?width=640&crop=smart&auto=webp&s=4ed01484b73be84836fac11aa5d07ae2104eb8ee', 'width': 640}, {'height': 991, 'url': 'https://preview.redd.it/mrf6kvq519rf1.jpeg?width=960&crop=smart&auto=webp&s=e83ec7e56a39b42c2c116555a71f5b27a4aa9bd8', 'width': 960}, {'height': 1115, 'url': 'https://preview.redd.it/mrf6kvq519rf1.jpeg?width=1080&crop=smart&auto=webp&s=7bc4da6a6f8003fe2020298ed7b80d982a93bde3', 'width': 1080}], 'source': {'height': 1145, 'url': 'https://preview.redd.it/mrf6kvq519rf1.jpeg?auto=webp&s=ed606a3a0189f129e9752cb41bfb7ba4e77f7751', 'width': 1109}, 'variants': {}}]}
Are these specs good enough to run a code-writing model locally?
6
I'm currently paying for both Cursor and ChatGPT. Even on Cursor's Ultra plan, I'm paying roughly $400–$500 per month. I'm thinking of buying a workstation for local code authoring and for building and running a few services on-premises. What matters most to me are code quality and speed, nothing else.

The hardware I'm considering:

* Ryzen 7995WX or 9995WX
* WRX90E Sage
* DDR5-5600 64GB × 8
* RTX Pro 6000 96GB × 4

With a setup like this, would I be able to run a local model comfortably at around the Claude 4 / Claude 4.1 Opus level?
2025-09-25T05:19:02
https://www.reddit.com/r/LocalLLaMA/comments/1npycee/are_these_specs_good_enough_to_run_a_codewriting/
PlusProfession9245
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npycee
false
null
t3_1npycee
/r/LocalLLaMA/comments/1npycee/are_these_specs_good_enough_to_run_a_codewriting/
false
false
self
6
null
Gradio problem with VibeVoice!
2
The default Gradio web UI has a dark option in settings. I enabled dark mode, but only the footer area went dark; the rest of the body stayed light, which messed up the words and sentences. Screenshot: [https://ibb.co/SXnS41TR](https://ibb.co/SXnS41TR) Any way to fix this and apply dark mode everywhere? I tried different browsers and incognito, but same thing :/
2025-09-25T05:12:07
https://www.reddit.com/r/LocalLLaMA/comments/1npy8av/gradio_problem_vibevoice/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npy8av
false
null
t3_1npy8av
/r/LocalLLaMA/comments/1npy8av/gradio_problem_vibevoice/
false
false
self
2
{'enabled': False, 'images': [{'id': '0KvtfvKpCNDeTsdO_eN4aQ-Kw9QEvpjzk_RPFAQ3Yqk', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/S2frlQhTU6qMy8AOKS-rdpQy-Dlq9EC3IZFyPHEV9cs.jpg?width=108&crop=smart&auto=webp&s=4ca96ace8d118dbbe12a7ee8c12ee36087888826', 'width': 108}, {'height': 74, 'url': 'https://external-preview.redd.it/S2frlQhTU6qMy8AOKS-rdpQy-Dlq9EC3IZFyPHEV9cs.jpg?width=216&crop=smart&auto=webp&s=0b0897adc2def7586d3e71aade2f60f38cd15d94', 'width': 216}, {'height': 109, 'url': 'https://external-preview.redd.it/S2frlQhTU6qMy8AOKS-rdpQy-Dlq9EC3IZFyPHEV9cs.jpg?width=320&crop=smart&auto=webp&s=4434c218e86ed00dcc6ea3034f533abdc65c9b97', 'width': 320}, {'height': 219, 'url': 'https://external-preview.redd.it/S2frlQhTU6qMy8AOKS-rdpQy-Dlq9EC3IZFyPHEV9cs.jpg?width=640&crop=smart&auto=webp&s=146e7d7a28b8812da86792b6d04ea274112e8623', 'width': 640}, {'height': 328, 'url': 'https://external-preview.redd.it/S2frlQhTU6qMy8AOKS-rdpQy-Dlq9EC3IZFyPHEV9cs.jpg?width=960&crop=smart&auto=webp&s=1afb97c550a5499826d51c3bf7f033e5b9009119', 'width': 960}, {'height': 370, 'url': 'https://external-preview.redd.it/S2frlQhTU6qMy8AOKS-rdpQy-Dlq9EC3IZFyPHEV9cs.jpg?width=1080&crop=smart&auto=webp&s=4d02ab19b20d40e6d8f009e04164e5231df0167b', 'width': 1080}], 'source': {'height': 646, 'url': 'https://external-preview.redd.it/S2frlQhTU6qMy8AOKS-rdpQy-Dlq9EC3IZFyPHEV9cs.jpg?auto=webp&s=82d75f64b59110e6621155c226291754f4029bc6', 'width': 1885}, 'variants': {}}]}
8 Elite Gen 5: it's better than the A19 Pro
72
I was thinking of buying the iPhone 17, but now this gets interesting: this new processor should, in theory, be better than the A19 Pro.
2025-09-25T05:09:27
https://i.redd.it/iswa4vnwu8rf1.jpeg
Illustrious-Swim9663
i.redd.it
1970-01-01T00:00:00
0
{}
1npy6pn
false
null
t3_1npy6pn
/r/LocalLLaMA/comments/1npy6pn/8_elite_gen_5_its_better_than_the_a19_pro/
false
false
default
72
{'enabled': True, 'images': [{'id': 'iswa4vnwu8rf1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/iswa4vnwu8rf1.jpeg?width=108&crop=smart&auto=webp&s=d30e03289d87d2802b83b5d51d8c832e4555b829', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/iswa4vnwu8rf1.jpeg?width=216&crop=smart&auto=webp&s=8f79c5ca9e4bde5663f4c3ea94b3504bf6ba6f51', 'width': 216}, {'height': 144, 'url': 'https://preview.redd.it/iswa4vnwu8rf1.jpeg?width=320&crop=smart&auto=webp&s=a61598a41abb3be309580ea5f42802d104101ec6', 'width': 320}, {'height': 288, 'url': 'https://preview.redd.it/iswa4vnwu8rf1.jpeg?width=640&crop=smart&auto=webp&s=3ecfe4b7fb802bb19600d0b567e34ec455242e06', 'width': 640}, {'height': 432, 'url': 'https://preview.redd.it/iswa4vnwu8rf1.jpeg?width=960&crop=smart&auto=webp&s=82689f15c7468965a3fe35067a21184f8594c333', 'width': 960}, {'height': 486, 'url': 'https://preview.redd.it/iswa4vnwu8rf1.jpeg?width=1080&crop=smart&auto=webp&s=09708ac0375955dd58cb3e32b5c80bd3402356a5', 'width': 1080}], 'source': {'height': 922, 'url': 'https://preview.redd.it/iswa4vnwu8rf1.jpeg?auto=webp&s=326cf3a19b525d58684f440b42002fe311667a48', 'width': 2048}, 'variants': {}}]}
WHAT!?
0
https://preview.redd.it/…b9c1cd513d5 ...
2025-09-25T05:04:27
https://www.reddit.com/r/LocalLLaMA/comments/1npy3pw/what/
Ambitious-a4s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npy3pw
false
null
t3_1npy3pw
/r/LocalLLaMA/comments/1npy3pw/what/
false
false
https://b.thumbs.redditm…33MNq_LNf-fQ.jpg
0
null
What's your profession?
1
Hello, training and developing LLMs is costly; it needs a lot of time, energy, and money. So I wanted to know: what makes investing in large language models worth it for you? Do you do it just for fun? Are you employed at a company? A freelancer? Or building your own company?
2025-09-25T04:13:06
https://www.reddit.com/r/LocalLLaMA/comments/1npx7e7/whats_your_profession/
AwkwardBoysenberry26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npx7e7
false
null
t3_1npx7e7
/r/LocalLLaMA/comments/1npx7e7/whats_your_profession/
false
false
self
1
null
i built a computer vision system that runs in real time on my laptop webcam
24
i made a local object detection and identification script that uses YOLO, SAM, and Ollama VLM models (i used llava and qwen). it runs on the webcam at ~30fps on my laptop. two versions:

1. YOLO/SAM object detection and tracking with VLM object analysis
2. motion detection with VLM frame analysis

still new to computer vision systems and i know this has been done before, so very open to feedback and advice
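The core loop of version 1 can be sketched roughly like this, under stated assumptions (`ultralytics` for YOLO, the `ollama` Python client for the VLM; the real repo linked above handles tracking, batching, and frame skipping that this omits):

```python
# Rough sketch of a detect-then-describe webcam loop: YOLO finds objects,
# a local VLM (via ollama) identifies each crop. Assumes `llava` is pulled.
import cv2
import ollama
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for box in detector(frame)[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.imwrite("/tmp/crop.jpg", frame[y1:y2, x1:x2])
        # Blocking VLM call per detection; real code would sample frames instead
        reply = ollama.chat(model="llava", messages=[{
            "role": "user",
            "content": "Identify this object in a few words.",
            "images": ["/tmp/crop.jpg"],
        }])
        print(reply["message"]["content"])
cap.release()
```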
2025-09-25T03:54:10
https://github.com/kazumah1/local-detection
faflappy
github.com
1970-01-01T00:00:00
0
{}
1npwupf
false
null
t3_1npwupf
/r/LocalLLaMA/comments/1npwupf/i_built_a_computer_vision_system_that_runs_in/
false
false
default
24
{'enabled': False, 'images': [{'id': 'Ypd2kQ-k8wRi8W0sQ8OMojLcTwuapPuL7em0lGxtBK4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ypd2kQ-k8wRi8W0sQ8OMojLcTwuapPuL7em0lGxtBK4.png?width=108&crop=smart&auto=webp&s=95fbfe53a22254bcea9fb202975fc9189019a414', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ypd2kQ-k8wRi8W0sQ8OMojLcTwuapPuL7em0lGxtBK4.png?width=216&crop=smart&auto=webp&s=57b1309cd09d5c1e9cf1f867a8402a8b1264d71c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ypd2kQ-k8wRi8W0sQ8OMojLcTwuapPuL7em0lGxtBK4.png?width=320&crop=smart&auto=webp&s=8f54cc090c20fdf4ce368782eb8cbc7201144b68', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ypd2kQ-k8wRi8W0sQ8OMojLcTwuapPuL7em0lGxtBK4.png?width=640&crop=smart&auto=webp&s=64d37f8896fb78027d65fe7720961fbc08448972', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ypd2kQ-k8wRi8W0sQ8OMojLcTwuapPuL7em0lGxtBK4.png?width=960&crop=smart&auto=webp&s=27621c00395730cff7c8a13c58efe622281ffe30', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ypd2kQ-k8wRi8W0sQ8OMojLcTwuapPuL7em0lGxtBK4.png?width=1080&crop=smart&auto=webp&s=34c423195aa66e878c2afa619f340f85c5f0e583', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ypd2kQ-k8wRi8W0sQ8OMojLcTwuapPuL7em0lGxtBK4.png?auto=webp&s=87128e81cdd9469a93bc29ec4034bd72b4304b44', 'width': 1200}, 'variants': {}}]}
Built an arena-like eval tool to replay my agent traces with different models, works surprisingly well
5
https://preview.redd.it/…s if interested.
2025-09-25T03:51:00
https://www.reddit.com/r/LocalLLaMA/comments/1npwsie/built_an_arenalike_eval_tool_to_replay_my_agent/
mrparasite
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npwsie
false
null
t3_1npwsie
/r/LocalLLaMA/comments/1npwsie/built_an_arenalike_eval_tool_to_replay_my_agent/
false
false
https://a.thumbs.redditm…hlW7Y7g5BpE8.jpg
5
null
Piper TTS training dataset question
5
I'm trying to train a Piper TTS model for a Llama 2 chatbot using this notebook: [https://colab.research.google.com/github/rmcpantoja/piper/blob/master/notebooks/piper_multilingual_training_notebook.ipynb#scrollTo=E0W0OCvXXvue](https://colab.research.google.com/github/rmcpantoja/piper/blob/master/notebooks/piper_multilingual_training_notebook.ipynb#scrollTo=E0W0OCvXXvue). In the notebook it says the single-speaker dataset needs to be in this format:

```
wavs/1.wav|This is what my character says in audio 1.
```

But I thought there was also a normalized transcript line that spells numbers out as words, presumably like this:

```
wavs/1.wav|This is what my character says in audio 1.|This is what my character says in audio one.
```

So do I need to add them in? Will the notebook normalize the transcripts itself? Or does Piper not use normalized transcripts at all, so it doesn't matter?
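If the notebook doesn't normalize for you, pre-normalizing the transcripts yourself is cheap. A sketch with `num2words` (it assumes an LJSpeech-style metadata file in the two-column format above; the file names are placeholders):

```python
# Add a normalized third column (digits spelled out as words) to each line.
# Input/output file names are assumptions; adjust to the notebook's layout.
import re
from num2words import num2words

def normalize(text: str) -> str:
    return re.sub(r"\d+", lambda m: num2words(int(m.group())), text)

with open("metadata.csv", encoding="utf-8") as src, \
     open("metadata_normalized.csv", "w", encoding="utf-8") as dst:
    for line in src:
        wav, text = line.rstrip("\n").split("|", 1)
        dst.write(f"{wav}|{text}|{normalize(text)}\n")
```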
2025-09-25T03:19:39
https://www.reddit.com/r/LocalLLaMA/comments/1npw7h9/piper_tts_training_dataset_question/
Kiyumaa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npw7h9
false
null
t3_1npw7h9
/r/LocalLLaMA/comments/1npw7h9/piper_tts_training_dataset_question/
false
false
self
5
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
I have made an MCP tool collection pack for local LLMs
9
[Collection repo](https://github.com/ZihaoFU245/lmstudio-toolpack)

The MCP servers online are scattered, so I thought a collection of them would be great: only **one** Python venv for multiple servers. Saves your memory.

---

**You are welcome to contribute your tools :)**
2025-09-25T02:49:28
https://www.reddit.com/r/LocalLLaMA/comments/1npvmb1/i_have_made_a_mcp_tool_colelction_pack_for_local/
iwillbeinvited
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npvmb1
false
null
t3_1npvmb1
/r/LocalLLaMA/comments/1npvmb1/i_have_made_a_mcp_tool_colelction_pack_for_local/
false
false
self
9
{'enabled': False, 'images': [{'id': 'MZwDSXXIeVmqlKD2xLIJG9VLS9zHqkKfIGlpd7bSnac', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MZwDSXXIeVmqlKD2xLIJG9VLS9zHqkKfIGlpd7bSnac.png?width=108&crop=smart&auto=webp&s=d83e4c0b6bbe55ad3b8de5588b1f6bc208c8e8b1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MZwDSXXIeVmqlKD2xLIJG9VLS9zHqkKfIGlpd7bSnac.png?width=216&crop=smart&auto=webp&s=3caaedfe0097ceea64b664e4777e7ad7b98061c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MZwDSXXIeVmqlKD2xLIJG9VLS9zHqkKfIGlpd7bSnac.png?width=320&crop=smart&auto=webp&s=f8d814123a616371f82fae8bb36813cf30007ead', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MZwDSXXIeVmqlKD2xLIJG9VLS9zHqkKfIGlpd7bSnac.png?width=640&crop=smart&auto=webp&s=2e33caddc52739c9660872c56a00db63386a64f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MZwDSXXIeVmqlKD2xLIJG9VLS9zHqkKfIGlpd7bSnac.png?width=960&crop=smart&auto=webp&s=3fa14ca7e313451879ca1088166394ae91d61ec9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MZwDSXXIeVmqlKD2xLIJG9VLS9zHqkKfIGlpd7bSnac.png?width=1080&crop=smart&auto=webp&s=899146c2e4ec457d8f1cf5afe57a84fa0967319d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MZwDSXXIeVmqlKD2xLIJG9VLS9zHqkKfIGlpd7bSnac.png?auto=webp&s=0f7c474235ad1a60c59e5874a58e9b6d422c6acb', 'width': 1200}, 'variants': {}}]}
Any good YouTube creators with low pace content?
22
I want to study more about LLMs and prompt engineering, but almost every YouTuber has this fast-paced YouTube style with a lot of sound FX and clickbait titles. I just wish I could find someone who goes straight to the explanation without an overstimulating amount of editing.
2025-09-25T02:03:38
https://www.reddit.com/r/LocalLLaMA/comments/1npup0s/any_good_youtube_creators_with_low_pace_content/
daantesao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npup0s
false
null
t3_1npup0s
/r/LocalLLaMA/comments/1npup0s/any_good_youtube_creators_with_low_pace_content/
false
false
self
22
null
Any vision language models that run on llama.cpp under 96GB that anyone recommends?
9
I have some image descriptions I need to fill out for images in markdown, and I'm curious if anyone knows any good vision language models that can describe them using llama.cpp/llama-server?
2025-09-25T01:26:07
https://www.reddit.com/r/LocalLLaMA/comments/1nptwiu/any_vision_languages_that_run_on_llamacpp_under/
richardanaya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nptwiu
false
null
t3_1nptwiu
/r/LocalLLaMA/comments/1nptwiu/any_vision_languages_that_run_on_llamacpp_under/
false
false
self
9
null
AMA: Talk on Replicating Research as Draft PRs in YOUR Repo in Minutes
2
Join us tomorrow in AG2's Community Talks for a technical deep-dive into how we built an agentic system which:

* matches relevant new arXiv papers to the engineering challenges you're addressing
* builds Docker images, testing the quickstart
* implements draft PRs in your target repo

We'll discuss how we combine the AG2 framework, k8s Ray workers, and LaaJ with hardware monitors to scale, secure, and test code from the wild, providing PRs without even bothering you for a prompt. Code is the context!

Thursday the 25th, 9am PST (will update with YouTube link when available): [https://calendar.app.google/3soCpuHupRr96UaF8](https://calendar.app.google/3soCpuHupRr96UaF8)

Check out the draft slides: [https://docs.google.com/presentation/d/1S0q-wGCu2dliVWb9ykGKFz61jZKZI4ipxWBv73HOFBo/edit?usp=sharing](https://docs.google.com/presentation/d/1S0q-wGCu2dliVWb9ykGKFz61jZKZI4ipxWBv73HOFBo/edit?usp=sharing)
2025-09-25T00:58:38
https://www.reddit.com/r/LocalLLaMA/comments/1nptbq9/ama_talk_on_replicating_research_as_draft_prs_in/
remyxai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nptbq9
false
null
t3_1nptbq9
/r/LocalLLaMA/comments/1nptbq9/ama_talk_on_replicating_research_as_draft_prs_in/
false
false
self
2
null
Where can I download AI assistant software with an avatar that interacts with what you do on your laptop and helps you organize and complete tasks? It needs to be completely free.
0
Good evening to everyone in the community. I'd like some important help. I'd like to install an AI assistant that has an avatar (customizable or not, or just an image) that can analyze and comment on anything I'm doing on my laptop screen. It should intelligently store this data and regularly ask if I need help with a particular task. It should only access my data on the laptop when I ask, helping me organize documents, perform complex writing tasks, or provide tips. It doesn't need to be a local AI assistant, as I'm not sure that would work on a laptop, since laptops don't have as much CPU power as desktop computers. I'd just like an assistant to organize my thoughts, plans, and tasks. I don't mind if it only works online to store data and help with file management; the important thing is that it can help me with my daily tasks. Is there an installation tutorial for this? Which assistant would be smoothest to install on Windows? Another important thing is that it has writable memory to remember what I need, that it can record conversations internally, and that it's free to use. If it's only available via local installation, I'd like to point out that I work in healthcare and don't understand anything about programming, so a step-by-step installation tutorial would be important for me. I worked on biomolecules in bioinformatics for my master's degree, so I only have a superficial understanding of the subject; I needed to work with Linux and install Python files to run certain programs in the pharmaceutical molecular field. Anyway, I thank you in advance for all the help you can give me. I really would like an assistant on my laptop desktop to organize my thoughts, optimize my time, and be more productive. Thank you in advance for your attention and willingness to read this post.
2025-09-25T00:34:23
https://www.reddit.com/r/LocalLLaMA/comments/1npstj6/where_can_i_download_an_artificial_intelligence/
Godi22kam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npstj6
false
null
t3_1npstj6
/r/LocalLLaMA/comments/1npstj6/where_can_i_download_an_artificial_intelligence/
false
false
self
0
null
Meta Code World Model (CWM), 32B dense LLM
17
|Model|LCBv5|LCBv6|Math-500|AIME24|AIME25|
|:-|:-|:-|:-|:-|:-|
|Magistral-small-2509-24B|70.0|61.6|--|86.1|77.3|
|Qwen3-32B|65.7|61.9|97.2|81.4|72.9|
|gpt-oss-20B (low)|54.2|47.3|--|42.1|37.1|
|gpt-oss-20B (med)|66.9|62.0|--|80.0|72.1|
|**CWM**|68.6|63.5|96.6|76.0|68.2|

|Model|SWE-bench Verified|
|:-|:-|
|Devstral-1.1-2507-24B|53.6|
|Qwen3-Coder-32B|51.6|
|gpt-oss-20B (low / med / high)*|37.4 / 53.2 / 60.7|
|**CWM / CWM + tts**|53.9 / 65.8|

[**https://huggingface.co/facebook/cwm**](https://huggingface.co/facebook/cwm)
2025-09-25T00:32:51
https://www.reddit.com/r/LocalLLaMA/comments/1npssd7/meta_code_world_model_cwm_32b_dense_llm/
touhidul002
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npssd7
false
null
t3_1npssd7
/r/LocalLLaMA/comments/1npssd7/meta_code_world_model_cwm_32b_dense_llm/
false
false
self
17
null
Which one should I use: LM Studio vs Ollama
0
I have a Windows server with an RTX 3090, 512GB RAM, a 2TB SSD, and Server 2022. Which one should I use: Ollama or LM Studio?
2025-09-25T00:20:28
https://www.reddit.com/r/LocalLLaMA/comments/1npsj3e/witch_one_should_i_use_llmstudio_vs_ollama/
wbiggs205
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npsj3e
false
null
t3_1npsj3e
/r/LocalLLaMA/comments/1npsj3e/witch_one_should_i_use_llmstudio_vs_ollama/
false
false
self
0
null
Any chances of AI models getting faster with fewer resources soon?
5
I've seen new types of model optimization methods slowly emerging and am wondering what's currently the fastest format/type, and whether smaller consumer-grade models between 7b and 75b tend to be getting faster and smaller, or whether the requirements to run them locally are actually getting worse?
2025-09-24T23:45:06
https://www.reddit.com/r/LocalLLaMA/comments/1nprrrf/any_chances_of_ai_models_getting_faster_with_less/
WEREWOLF_BX13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nprrrf
false
null
t3_1nprrrf
/r/LocalLLaMA/comments/1nprrrf/any_chances_of_ai_models_getting_faster_with_less/
false
false
self
5
null
Introducing LFM2-2.6B: Redefining Efficiency in Language Models | Liquid AI
74
2025-09-24T23:33:04
https://www.liquid.ai/blog/introducing-lfm2-2-6b-redefining-efficiency-in-language-models
Thrumpwart
liquid.ai
1970-01-01T00:00:00
0
{}
1nprim2
false
null
t3_1nprim2
/r/LocalLLaMA/comments/1nprim2/introducing_lfm226b_redefining_efficiency_in/
false
false
https://external-preview…d778fbdfb0568c0a
74
{'enabled': False, 'images': [{'id': 'XaAHZAPyF0SslGVTFOxCaM7avykHViwT0kgWp2bq5tc', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/XaAHZAPyF0SslGVTFOxCaM7avykHViwT0kgWp2bq5tc.png?width=108&crop=smart&auto=webp&s=0c326ca28e5d934ee99927cbaf3729823729b446', 'width': 108}, {'height': 131, 'url': 'https://external-preview.redd.it/XaAHZAPyF0SslGVTFOxCaM7avykHViwT0kgWp2bq5tc.png?width=216&crop=smart&auto=webp&s=99db30e9906fab901463f855ccbc4ef4e7414443', 'width': 216}, {'height': 194, 'url': 'https://external-preview.redd.it/XaAHZAPyF0SslGVTFOxCaM7avykHViwT0kgWp2bq5tc.png?width=320&crop=smart&auto=webp&s=083504976324c8e0c47a24c91b41da03f8623734', 'width': 320}, {'height': 389, 'url': 'https://external-preview.redd.it/XaAHZAPyF0SslGVTFOxCaM7avykHViwT0kgWp2bq5tc.png?width=640&crop=smart&auto=webp&s=bf5bf01f657f0db80bcc7749d573df9b82640adc', 'width': 640}, {'height': 584, 'url': 'https://external-preview.redd.it/XaAHZAPyF0SslGVTFOxCaM7avykHViwT0kgWp2bq5tc.png?width=960&crop=smart&auto=webp&s=3cc28aebc7b73cd8833fd8f1d82e0bb8de65f61e', 'width': 960}, {'height': 657, 'url': 'https://external-preview.redd.it/XaAHZAPyF0SslGVTFOxCaM7avykHViwT0kgWp2bq5tc.png?width=1080&crop=smart&auto=webp&s=f4ac8948f81414157247fac48adbb2af27b65a69', 'width': 1080}], 'source': {'height': 1400, 'url': 'https://external-preview.redd.it/XaAHZAPyF0SslGVTFOxCaM7avykHViwT0kgWp2bq5tc.png?auto=webp&s=aaeda755f1f18d21a9076a440aa709944bb0ac5b', 'width': 2300}, 'variants': {}}]}
Questions about local agentic workflows
2
Hey folks,

So I've been mulling over this idea and drawing a lot of inspiration from this community. I see a lot of energy and excitement around running local LLM models, and I think there's a gap. We have LM Studio, Ollama, and even llama.cpp, which are great for running local models. But when it comes to developing local agentic workflows, the options seem limited. Either you have to be a developer heavy on Python or TypeScript and build on frameworks on top of these local model/API providers, or you have to commit to the cloud with Crew AI, LangChain, Botpress, n8n, etc.

So my questions are: Is the end goal just to run local LLMs for privacy, or just for the love of hacking? Or is there a desire to leverage local LLMs to perform work beyond just a chatbot?

Genuinely curious. Let me know.
2025-09-24T23:21:25
https://www.reddit.com/r/LocalLLaMA/comments/1npr99w/questions_about_local_agentic_workflows/
m555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npr99w
false
null
t3_1npr99w
/r/LocalLLaMA/comments/1npr99w/questions_about_local_agentic_workflows/
false
false
self
2
null
How do I get multimodal contextual reasoning that’s actually decent?
0
Do I need Ampere or newer CUDA hardware to run this with LMDeploy? I guess it was so bad in GGUF that it's been completely removed from llama.cpp. Is there a way to achieve this with a Core Ultra? 100GB/s is fine for me; I just want reasoning to work.
2025-09-24T23:10:44
https://www.reddit.com/r/LocalLLaMA/comments/1npr0sv/how_do_i_get_multimodal_contextual_reasoning/
Ok-Hawk-5828
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npr0sv
false
null
t3_1npr0sv
/r/LocalLLaMA/comments/1npr0sv/how_do_i_get_multimodal_contextual_reasoning/
false
false
self
0
null
Are 24-50Bs finally caught up to 70Bs now?
91
I keep seeing everyone say that 70Bs are SOOOO amazing and perfect and beautiful, and that if you can't run 70Bs you're a loser (not really, but you get me). I just got a 3090 and now I can run 50Bs comfortably, but 70Bs are unbearably slow for me and can't possibly be worth it unless they have godlike writing, let alone 120Bs. So I'm asking: am I fine to just stick with 24-50Bs or so? I keep wondering what I'm missing, and then people come out with all kinds of models for 70b and I'm like :/
2025-09-24T22:35:24
https://www.reddit.com/r/LocalLLaMA/comments/1npq84t/are_2450bs_finally_caught_up_to_70bs_now/
Borkato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npq84t
false
null
t3_1npq84t
/r/LocalLLaMA/comments/1npq84t/are_2450bs_finally_caught_up_to_70bs_now/
false
false
self
91
null
OrKA-UI Local Visual interface for OrKa-reasoning
3
🚀 OrKa-UI news 😀

Now fully aligned with v0.9.2 of OrKa reasoning, it comes with:

* A fresh tutorial guide
* Ready-to-use examples you can pick, test, and export
* Even the same configuration we used for benchmarking

In this short demo, you'll see a Society of Mind inspired workflow in action. Every agent executes, results are grouped, and the entire reasoning path is transparent, either through the result panel or directly inside the graph. This is what modular cognition looks like when it's no longer a black box. Step by step, OrKa reasoning keeps evolving.

🌐 [https://orkacore.com/](https://orkacore.com/)
🐳 [https://hub.docker.com/r/marcosomma/orka-ui](https://hub.docker.com/r/marcosomma/orka-ui)
🐍 [https://pypi.org/project/orka-reasoning/](https://pypi.org/project/orka-reasoning/)
🚢 [https://github.com/marcosomma/orka-reasoning](https://github.com/marcosomma/orka-reasoning)
2025-09-24T22:24:43
https://v.redd.it/j2ts9f7nu6rf1
marcosomma-OrKA
/r/LocalLLaMA/comments/1nppz5c/orkaui_local_visual_interface_for_orkareasoning/
1970-01-01T00:00:00
0
{}
1nppz5c
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/j2ts9f7nu6rf1/DASHPlaylist.mpd?a=1761474288%2CNGIxMDVkNmYzYzgwYjdiNjQzMjVmNWY0ZTczZWZlZTU5NTA0M2Q2MTdmYzUxMzYzMDA2NTNlYTcwMzRmOGJhOQ%3D%3D&v=1&f=sd', 'duration': 132, 'fallback_url': 'https://v.redd.it/j2ts9f7nu6rf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/j2ts9f7nu6rf1/HLSPlaylist.m3u8?a=1761474288%2CZWYwOTk1YjU1YjA0NmI0NGVmNDkxMmUzZGQ2MDgyOGQ3MmFhMzkwY2Y5NzMzY2NmNjkxM2U0ODZkZTU4MTYzNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/j2ts9f7nu6rf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1nppz5c
/r/LocalLLaMA/comments/1nppz5c/orkaui_local_visual_interface_for_orkareasoning/
false
false
https://external-preview…c03008b0f7f4ed87
3
{'enabled': False, 'images': [{'id': 'eDU4dHBnN251NnJmMV3AZuRX9ZUDKBTAsiOSe4jwjrqWEcNt_C4qzPNRal3J', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/eDU4dHBnN251NnJmMV3AZuRX9ZUDKBTAsiOSe4jwjrqWEcNt_C4qzPNRal3J.png?width=108&crop=smart&format=pjpg&auto=webp&s=5473c2ffe42d2722f91cca0c3f1ef8d52f14cec3', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/eDU4dHBnN251NnJmMV3AZuRX9ZUDKBTAsiOSe4jwjrqWEcNt_C4qzPNRal3J.png?width=216&crop=smart&format=pjpg&auto=webp&s=031d81e917cda751c340af1eedb63e2442b1f37b', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/eDU4dHBnN251NnJmMV3AZuRX9ZUDKBTAsiOSe4jwjrqWEcNt_C4qzPNRal3J.png?width=320&crop=smart&format=pjpg&auto=webp&s=689df5c5088ec9b2eca1500feada33f75d1ef7ec', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/eDU4dHBnN251NnJmMV3AZuRX9ZUDKBTAsiOSe4jwjrqWEcNt_C4qzPNRal3J.png?width=640&crop=smart&format=pjpg&auto=webp&s=1cfdf41b20e8873d0836e7e4b2d9d64dc4154e09', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/eDU4dHBnN251NnJmMV3AZuRX9ZUDKBTAsiOSe4jwjrqWEcNt_C4qzPNRal3J.png?width=960&crop=smart&format=pjpg&auto=webp&s=e2b5d9194a4b6c274ec8a0c613c301c7f1919b4d', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/eDU4dHBnN251NnJmMV3AZuRX9ZUDKBTAsiOSe4jwjrqWEcNt_C4qzPNRal3J.png?width=1080&crop=smart&format=pjpg&auto=webp&s=22c9295b33e67883f2408dfdfe7eb7aafdc8a87c', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/eDU4dHBnN251NnJmMV3AZuRX9ZUDKBTAsiOSe4jwjrqWEcNt_C4qzPNRal3J.png?format=pjpg&auto=webp&s=f36ccd21bab5bf6f43903262f78c5d1786bf8223', 'width': 1080}, 'variants': {}}]}
Qwen3 235b Q2 with Celeron, 2x8gb of 2400 RAM, 96GB VRAM @ 18.71 t/s
21
https://preview.redd.it/…ur help! Cheers
2025-09-24T22:07:11
https://www.reddit.com/r/LocalLLaMA/comments/1nppk2v/qwen3_235b_q2_with_celeron_2x8gb_of_2400_ram_96gb/
Resident_Computer_57
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nppk2v
false
null
t3_1nppk2v
/r/LocalLLaMA/comments/1nppk2v/qwen3_235b_q2_with_celeron_2x8gb_of_2400_ram_96gb/
false
false
https://b.thumbs.redditm…2XikfV9kCC_w.jpg
21
null
Can anyone suggest local model for 3D?
4
Recently I tried to find something for 3D generation and couldn't find anything other than Hunyuan 3D. Can anyone suggest something for 16GB VRAM + 32GB RAM?
2025-09-24T21:59:11
https://www.reddit.com/r/LocalLLaMA/comments/1nppd5c/can_anyone_suggest_local_model_for_3d/
StandarterSD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nppd5c
false
null
t3_1nppd5c
/r/LocalLLaMA/comments/1nppd5c/can_anyone_suggest_local_model_for_3d/
false
false
self
4
null
New model from Meta FAIR: Code World Model (CWM) 32B - 65.8 % on SWE-bench Verified
152
# "We release Code World Model (CWM), a 32-billion-parameter open-weights LLM, to advance research on code generation with world models. To improve code understanding beyond what can be learned from training on static code alone, we mid-train CWM on a large amount of observation-action trajectories from Python interpreter and agentic Docker environments, and perform extensive multi- task reasoning RL in verifiable coding, math, and multi-turn software engineering environments. With CWM, we provide a strong testbed for researchers to explore the opportunities world modeling affords for improving code generation with reasoning and planning in computational environments. We present first steps of how world models can benefit agentic coding, enable step-by-step simulation of Python code execution, and show early results of how reasoning can benefit from the latter. CWM is a dense, decoder-only LLM trained with a context size of up to 131 k tokens. Independent of its world modeling capabilities, CWM offers strong performance on general coding and math tasks: it reaches pass@1 scores of 65.8 % on SWE-bench Verified (with test-time scaling), 68.6 % on LiveCodeBench, 96.6 % on Math-500, and 76.0 % on AIME 2024. To support further research on code world modeling, we release model checkpoints after mid-training, SFT, and RL."
2025-09-24T21:54:22
https://www.reddit.com/r/LocalLLaMA/comments/1npp8xi/new_model_from_meta_fair_code_world_model_cwm_32b/
notrdm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npp8xi
false
null
t3_1npp8xi
/r/LocalLLaMA/comments/1npp8xi/new_model_from_meta_fair_code_world_model_cwm_32b/
false
false
self
152
null
New model from Meta FAIR: Code World Model (CWM) 32B - 65.8 % on SWE-bench Verified
1
[removed]
2025-09-24T21:51:50
https://www.reddit.com/r/LocalLLaMA/comments/1npp6om/new_model_from_meta_fair_code_world_model_cwm_32b/
notrdm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npp6om
false
null
t3_1npp6om
/r/LocalLLaMA/comments/1npp6om/new_model_from_meta_fair_code_world_model_cwm_32b/
false
false
self
1
null
I built a tiny fully local AI agent for a Raspberry Pi
929
Hi all! Over the past few months, I’ve been working on a tiny agent that can run entirely on a Raspberry Pi 5. It's capable of executing tools and runs some of the smallest good models I could find (specifically Qwen3:1.7b and Gemma3:1b). From wake-word detection, to transcription, to the actual LLM inference, everything happens on the Pi 5 itself. It was definitely a challenge given the hardware constraints, but I learned a lot along the way. I've detailed everything in this blog post if you're curious: [https://blog.simone.computer/an-agent-desktoy](https://blog.simone.computer/an-agent-desktoy) Source: [https://github.com/syxanash/maxheadbox](https://github.com/syxanash/maxheadbox)
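For anyone curious what the inference step in a setup like this can look like: below is a minimal sketch, not the author's code (see the linked repo for the real wake-word and transcription pipeline), assuming the small models are served locally by Ollama on the Pi.

```python
# Minimal sketch of the on-device inference step, assuming the models
# are served by Ollama on the Pi. This is NOT the author's code; see
# the repo above for the actual wake-word -> transcription -> agent loop.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "qwen3:1.7b") -> str:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete response instead of chunks
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

print(ask_local_model("Turn the desk light red."))
```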
2025-09-24T21:14:05
https://v.redd.it/xslfjc3xh6rf1
syxa
/r/LocalLLaMA/comments/1npo93e/i_built_a_tiny_fully_local_ai_agent_for_a/
1970-01-01T00:00:00
0
{}
1npo93e
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xslfjc3xh6rf1/DASHPlaylist.mpd?a=1761470051%2COTZhM2IyMTQyMGE1NmI3NTA2OWIwZjNmZmNmZTVkNTg4Yjk2ZDczYWQwMTNkNTZjODAxYTAyNmRlZGEwM2Q1NA%3D%3D&v=1&f=sd', 'duration': 90, 'fallback_url': 'https://v.redd.it/xslfjc3xh6rf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1350, 'hls_url': 'https://v.redd.it/xslfjc3xh6rf1/HLSPlaylist.m3u8?a=1761470051%2CMjJlZGE1MTFkZjE4M2E1ZWVmZWE1MjZiYmI1OTZhYWMzODRiYjM5M2NhZGY0N2Y2YjU4ODc1ZjMyOTgyMTE5OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xslfjc3xh6rf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1npo93e
/r/LocalLLaMA/comments/1npo93e/i_built_a_tiny_fully_local_ai_agent_for_a/
false
false
https://external-preview…fc6eb810566f1279
929
{'enabled': False, 'images': [{'id': 'eWJobjllM3hoNnJmMX3IlXPTfCA6UKqCZh3d_lEw5N2PzMnkp8dcFp83zF5t', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/eWJobjllM3hoNnJmMX3IlXPTfCA6UKqCZh3d_lEw5N2PzMnkp8dcFp83zF5t.png?width=108&crop=smart&format=pjpg&auto=webp&s=af133c9ee6f9840a037dbfdebb9dafc5cc81ac97', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/eWJobjllM3hoNnJmMX3IlXPTfCA6UKqCZh3d_lEw5N2PzMnkp8dcFp83zF5t.png?width=216&crop=smart&format=pjpg&auto=webp&s=e1758460a71ba83ee38151f793e36861177c6b64', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/eWJobjllM3hoNnJmMX3IlXPTfCA6UKqCZh3d_lEw5N2PzMnkp8dcFp83zF5t.png?width=320&crop=smart&format=pjpg&auto=webp&s=dcc1d149d00230cabac58fa5678994d7d148d586', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/eWJobjllM3hoNnJmMX3IlXPTfCA6UKqCZh3d_lEw5N2PzMnkp8dcFp83zF5t.png?width=640&crop=smart&format=pjpg&auto=webp&s=25de7423e177480d44df1cae1c2d5d11043f7434', 'width': 640}, {'height': 1200, 'url': 'https://external-preview.redd.it/eWJobjllM3hoNnJmMX3IlXPTfCA6UKqCZh3d_lEw5N2PzMnkp8dcFp83zF5t.png?width=960&crop=smart&format=pjpg&auto=webp&s=3aad234cf929b251a444b8086f7deff72c50e9a5', 'width': 960}, {'height': 1350, 'url': 'https://external-preview.redd.it/eWJobjllM3hoNnJmMX3IlXPTfCA6UKqCZh3d_lEw5N2PzMnkp8dcFp83zF5t.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b5949e174b236fad2a19be5937ecb6ef30443df1', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/eWJobjllM3hoNnJmMX3IlXPTfCA6UKqCZh3d_lEw5N2PzMnkp8dcFp83zF5t.png?format=pjpg&auto=webp&s=eb697f9ca978ba7894f5432fb7610e4f96e6c29e', 'width': 1080}, 'variants': {}}]}
What performance are you getting for your local DeepSeek v3/R1?
9
I'm curious what sort of performance folks are getting for local DeepSeek? Quantization size and system specs please.
2025-09-24T21:11:41
https://www.reddit.com/r/LocalLLaMA/comments/1npo71q/what_performance_are_you_getting_for_your_local/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npo71q
false
null
t3_1npo71q
/r/LocalLLaMA/comments/1npo71q/what_performance_are_you_getting_for_your_local/
false
false
self
9
null
Stress-Testing RAG in Production: Retrieval Quality, Drift, and Hidden Costs
4
been seeing a lot of teams (ours included) run into the same walls once rag moves beyond the demo phase. three pain points keep showing up: **1. Retrieval quality** faithfulness is tricky. the retriever often pulls something that *seems* relevant but still leads to wrong or shallow answers. we’ve been experimenting with metrics like contextual precision/recall and llm-as-judge evals to actually measure this. **2. Drift and monitoring** retrievers + embeddings shift over time (new docs, changed policies, etc.) and suddenly accuracy dips. logging traces is one thing, but without real observability/alerting you don’t even notice drift until users complain. we’ve been trying [maxim](https://getmax.im/maxim) to tie evals + traces together, but wondering what stacks others use. **3. Hidden costs** latency + tokens can pile up fast, especially when the system falls back to pulling too many docs. vector db choice matters ([pinecone](https://www.pinecone.io/) vs [chroma](https://www.trychroma.com/) etc.), but even brute force is sometimes cheaper until you hit scale. so i wanted to understand: → how are you all evaluating rag pipelines beyond “it feels good”? → what observability setups are working for you? → and how are you keeping costs predictable while still preserving retrieval quality?
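for the retrieval-quality point, here's a minimal sketch of one common contextual-precision formulation (rank-weighted precision over binary relevance judgments). it's a generic illustration, not the exact metric maxim or any other eval library implements:

```python
# Rank-weighted "contextual precision" over binary relevance judgments
# for the retrieved chunks. Generic formulation for illustration only.
def contextual_precision(relevant: list[bool]) -> float:
    """relevant[i] is True if the i-th retrieved chunk (by rank) was
    judged useful for the query (by a human or an LLM-as-judge)."""
    score, hits = 0.0, 0
    for k, rel in enumerate(relevant, start=1):
        if rel:
            hits += 1
            score += hits / k  # precision@k at each relevant rank
    return score / hits if hits else 0.0

# Retriever pulled 5 chunks; the judge marked ranks 1 and 4 relevant.
print(contextual_precision([True, False, False, True, False]))  # 0.75
```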
2025-09-24T20:58:38
https://www.reddit.com/r/LocalLLaMA/comments/1npnv3g/stresstesting_rag_in_production_retrieval_quality/
Fabulous_Ad993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npnv3g
false
null
t3_1npnv3g
/r/LocalLLaMA/comments/1npnv3g/stresstesting_rag_in_production_retrieval_quality/
false
false
self
4
{'enabled': False, 'images': [{'id': 'uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=108&crop=smart&auto=webp&s=2ac91097383d12b50cccd11a156d801425048149', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=216&crop=smart&auto=webp&s=fae40b26936652773a58a03f1d4a4baec2979212', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=320&crop=smart&auto=webp&s=1a444a7dd7d4b0466ac2677e15998bea07b28d8b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=640&crop=smart&auto=webp&s=856a61802fc5acd41967218550e53df81caa8e55', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=960&crop=smart&auto=webp&s=0dc7253f5f4daea12322fc48309b0ecb506c03e0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=1080&crop=smart&auto=webp&s=94df2b12217ce0373883be1122c1402454ad81eb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?auto=webp&s=66ed8b09519937ca22fa89b067d4bb96fecbc34a', 'width': 1200}, 'variants': {}}]}
Model to Analyze market news
4
I would like to create an agent that reads news from a news stream and analyzes the impact on the market, on certain stocks and cryptos. I wanted to use a standalone model that I can plug into Llama. Anyone have a lead here?
2025-09-24T20:38:35
https://www.reddit.com/r/LocalLLaMA/comments/1npncv4/model_to_analyze_market_news/
Small-Supermarket540
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npncv4
false
null
t3_1npncv4
/r/LocalLLaMA/comments/1npncv4/model_to_analyze_market_news/
false
false
self
4
null
Kokoro Batch TTS: Enabling Batch Processing for Kokoro 82M
26
Kokoro 82M is a high-performance text-to-speech model, but it originally lacked support for batch processing. I spent a week implementing batch functionality, and the source code is available at [https://github.com/wwang1110/kokoro\_batch](https://github.com/wwang1110/kokoro_batch) **⚡ Key Features:** * **Batch processing**: Process multiple texts simultaneously instead of one-by-one * **High performance**: Processes 30 audio clips under 2 seconds on RTX4090 * **Real-time capable**: Generates 276 seconds of audio in under 2 seconds * **Easy to use**: Simple Python API with smart text chunking **🔧 Technical highlights:** * Built on PyTorch with CUDA acceleration * Integrated grapheme-to-phoneme conversion * Smart text splitting for optimal batch sizes * FP16 support for faster inference * Based on the open-source Kokoro-82M model * The model output is 24KHZ PCM16 format For simplicity, the sample/demo code currently includes support for American English, British English, and Spanish. However, it can be easily extended to additional languages, just like the original Kokoro 82M model.
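As a usage illustration: the post notes the output is 24 kHz PCM16, so here is a minimal sketch of writing batched output to WAV files with the standard library. `synthesize_batch` is a hypothetical stand-in name, not the actual kokoro_batch API; check the repo for the real entry point.

```python
import wave

def synthesize_batch(texts, sample_rate=24_000, seconds=1.0):
    # Stand-in for the real kokoro_batch call (hypothetical name):
    # returns one silent PCM16 buffer per input text so the sketch runs.
    n_samples = int(sample_rate * seconds)
    return [b"\x00\x00" * n_samples for _ in texts]

texts = [
    "Batch processing synthesizes many clips at once.",
    "Each text is chunked, phonemized, and voiced in parallel.",
]

for i, pcm16 in enumerate(synthesize_batch(texts)):
    with wave.open(f"clip_{i}.wav", "wb") as f:
        f.setnchannels(1)        # mono
        f.setsampwidth(2)        # 16-bit samples
        f.setframerate(24_000)   # model outputs 24 kHz PCM16
        f.writeframes(pcm16)
```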
2025-09-24T20:33:22
https://www.reddit.com/r/LocalLLaMA/comments/1npn810/kokoro_batch_tts_enabling_batch_processing_for/
asuran2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npn810
false
null
t3_1npn810
/r/LocalLLaMA/comments/1npn810/kokoro_batch_tts_enabling_batch_processing_for/
false
false
self
26
{'enabled': False, 'images': [{'id': 'ZlxQxaha9qqkhRyhz2cjNF5H0k4gT0rlhEFRHkFHss8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZlxQxaha9qqkhRyhz2cjNF5H0k4gT0rlhEFRHkFHss8.png?width=108&crop=smart&auto=webp&s=172402d29af99d09000561f657f0e045d684e2f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZlxQxaha9qqkhRyhz2cjNF5H0k4gT0rlhEFRHkFHss8.png?width=216&crop=smart&auto=webp&s=97c1b72c37fc0950d126c39c46a7def759a45856', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZlxQxaha9qqkhRyhz2cjNF5H0k4gT0rlhEFRHkFHss8.png?width=320&crop=smart&auto=webp&s=3a0c1f1580b0d611a38a0bc4f99757ba337aecd6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZlxQxaha9qqkhRyhz2cjNF5H0k4gT0rlhEFRHkFHss8.png?width=640&crop=smart&auto=webp&s=7e8928a7d0d62717a5ddb9707b8a9abed026ff8a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZlxQxaha9qqkhRyhz2cjNF5H0k4gT0rlhEFRHkFHss8.png?width=960&crop=smart&auto=webp&s=3d96b1415d3dd47f3c61f81db6d888d093a530da', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZlxQxaha9qqkhRyhz2cjNF5H0k4gT0rlhEFRHkFHss8.png?width=1080&crop=smart&auto=webp&s=0c61e99b9867dffd8f4ba9759a54c74e98f29276', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZlxQxaha9qqkhRyhz2cjNF5H0k4gT0rlhEFRHkFHss8.png?auto=webp&s=2e08d8bf7ae72b6d6e0f0878f27b51d3125e6b16', 'width': 1200}, 'variants': {}}]}
Is a 5090 the best for most people?
38
Hey all, curious to have my mind changed. I've been researching for some time now and with the prices becoming reasonable on 5090s, I can't seem to justify getting anything else. Reasons for: \- 32GB vram seems to be enough for a single-user doing inference pretty fast on big enough models \- mature nvidia software \- as mentioned, decent price (now) Alternatives I've explored: \- AI Max 395: big memory at a lower price, but speed will suffer as the mem bandwidth is lower and I don't think majority of use cases need 96GB vram. rocm still young. \- Apple Silicon: insanely expensive for the same amount of vram and it's still slower. more limited software \- Radeon Pro W9700 or W7900(?): still expensive, more vram but slightly slower, can't get them anywhere \- RTX 6000 Blackwell: painfully expensive for team green big vram \- multiple 4090s/3090s: performance hit from offloading layers between different memory, need more power, fancier config etc \- nvidia frankenchips from China: hard to get, don't trust em \- Huawei: I'm sorry, I don't trust em Curious to hear what everyone's thoughts are. My use case is single user inference for coding / life at a speed that doesn't cause me to look at my phone and not a crazy tight budget but not 10k...
2025-09-24T20:29:38
https://www.reddit.com/r/LocalLLaMA/comments/1npn4ir/is_a_5090_the_best_for_most_people/
P3rpetuallyC0nfused
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npn4ir
false
null
t3_1npn4ir
/r/LocalLLaMA/comments/1npn4ir/is_a_5090_the_best_for_most_people/
false
false
self
38
null
Do you think Qwen3 VL will get a release for other models too?
27
Like for the 80B-Next or the 32B, 14B, 8B, 4B and other variants? I know, we've been blessed and even if there are no such releases all is well, but still... would be nice =]
2025-09-24T20:22:58
https://www.reddit.com/r/LocalLLaMA/comments/1npmycu/do_you_think_qwen3_vl_will_get_a_release_for/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npmycu
false
null
t3_1npmycu
/r/LocalLLaMA/comments/1npmycu/do_you_think_qwen3_vl_will_get_a_release_for/
false
false
self
27
null
Anyone else experimenting with multi-agent swarms + long-term memory for real-world tasks?
1
[removed]
2025-09-24T20:11:11
https://www.reddit.com/r/LocalLLaMA/comments/1npmnj5/anyone_else_experimenting_with_multiagent_swarms/
Good_Position_9075
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npmnj5
false
null
t3_1npmnj5
/r/LocalLLaMA/comments/1npmnj5/anyone_else_experimenting_with_multiagent_swarms/
false
false
self
1
null
a19 pro/ M5 MatMul
4
Hi everyone. Sorry if this is not exactly related to this sub, but I think you guys can help me the most, as I have read previous posts here related to this subject. I have a MacBook Air M4. I heard that Apple has added matmul/AI accelerators to the GPU cores in the A19 Pro and will naturally do the same for the M5, which is going to release soon. I know it accelerates local AI stuff by a lot, but I don't care about that; I am happy using Gemini and ChatGPT online. But my macroeconomic models (Bellman-type problems), which I run in MATLAB, can be very time consuming. My question is whether this new feature on the M5 will increase the speed for the type of stuff I do in MATLAB, and if yes, approximately by how much. I want to see if it is worth selling my laptop now, before the M5 comes out, because if it also increases MATLAB speeds by 4x, as it did for the A19 Pro in LLM usage, then it's better for me to sell as soon as possible and wait for the M5 release. Thanks!
2025-09-24T20:07:58
https://www.reddit.com/r/LocalLLaMA/comments/1npmkik/a19_pro_m5_matmul/
Short_Expression4613
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npmkik
false
null
t3_1npmkik
/r/LocalLLaMA/comments/1npmkik/a19_pro_m5_matmul/
false
false
self
4
null
Why is my DeepSeek like this?
0
2025-09-24T19:51:13
https://i.redd.it/y2gay7ea36rf1.png
ThePhantom1708
i.redd.it
1970-01-01T00:00:00
0
{}
1npm4th
false
null
t3_1npm4th
/r/LocalLLaMA/comments/1npm4th/why_is_my_deepseek_like_this/
false
false
default
0
{'enabled': True, 'images': [{'id': 'y2gay7ea36rf1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/y2gay7ea36rf1.png?width=108&crop=smart&auto=webp&s=505ffa1860c2c261421d255ae803059955b1016d', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/y2gay7ea36rf1.png?width=216&crop=smart&auto=webp&s=46e1a916672cc06bc064bc584dcd903158314b9a', 'width': 216}, {'height': 194, 'url': 'https://preview.redd.it/y2gay7ea36rf1.png?width=320&crop=smart&auto=webp&s=f15930d8688334edfc303ccb21006177131bdb13', 'width': 320}, {'height': 389, 'url': 'https://preview.redd.it/y2gay7ea36rf1.png?width=640&crop=smart&auto=webp&s=4eaa08ad0eed9e45bc7220990b90943dbf937106', 'width': 640}, {'height': 584, 'url': 'https://preview.redd.it/y2gay7ea36rf1.png?width=960&crop=smart&auto=webp&s=79827f4444b004160add72c25a56e77c0cb13ecb', 'width': 960}, {'height': 657, 'url': 'https://preview.redd.it/y2gay7ea36rf1.png?width=1080&crop=smart&auto=webp&s=24eb46eb84bb26dd752da775794d330bf2a59fa7', 'width': 1080}], 'source': {'height': 937, 'url': 'https://preview.redd.it/y2gay7ea36rf1.png?auto=webp&s=8bcce5c8c68ae249139873891c2bbc595840a648', 'width': 1539}, 'variants': {}}]}
Anyone tried Huihui-Qwen3-30B-A3B-abliterated-Fusion-9010 on HF ?
5
I have 12GB VRAM and 32GB RAM, and don't mind if it's slow, like I expect a 32B to be. But this being a thinking/instruct mix gives me high hopes that it could perform very well. Has anyone found good settings for it (top\_k, etc.)? Cheers [https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-abliterated-Fusion-9010](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-abliterated-Fusion-9010)
2025-09-24T19:04:52
https://www.reddit.com/r/LocalLLaMA/comments/1npky3c/anyone_tried/
cztothehead
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npky3c
false
null
t3_1npky3c
/r/LocalLLaMA/comments/1npky3c/anyone_tried/
false
false
self
5
null
What's the consensus on Qwen3-Max vs Qwen3 235b Instruct model? How much better do you perceive Max to be?
14
Obviously one is more based (open-weight) while the other is proprietary BUT considering Qwen3-Max has over a trillion parameters it should be at least 10% better than 235b right?
2025-09-24T18:57:04
https://www.reddit.com/r/LocalLLaMA/comments/1npkqb5/whats_the_consensus_on_qwen3max_vs_qwen3_235b/
Striking_Wedding_461
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npkqb5
false
null
t3_1npkqb5
/r/LocalLLaMA/comments/1npkqb5/whats_the_consensus_on_qwen3max_vs_qwen3_235b/
false
false
self
14
null
Detecting hallucination from the hidden space of an LLM
0
I have been working on this for the past couple of years. MAYBE I JUST SOLVED IT!!! Aah, who knows, and I need your feedback: the multi-billion-dollar question of large language model hallucination. You often see smaller models providing answers that sound factually trustworthy but are completely hallucinated, as I show below for the 3B small language model from Meta. The AI only gives what it has learned from the vectors; it has no idea of what it doesn't know!! How about we get a signal on whether the response will be hallucinated before the result gets generated? That would let us decide whether to route to a more powerful LLM, to RAG, or to a human. How it works: 1. Generate an internal "thought vector" from Llama-3.2-3B's hidden states. 2. Create a "ground truth" semantic vector using BAAI/bge-m3. 3. Use a trained Projection Head to map the LLM's vector into the ground-truth space. 4. Calculate the cosine similarity. This score is a direct proxy for confidence and hallucination risk. This method successfully identifies out-of-distribution or poorly-represented concepts in the LLM's latent space, effectively flagging high-risk queries before they are processed. Btw, that first movie is an Indian movie, completely hallucinated (Sitaare Zameen Par is a 2025 movie). Colab notebook for running at: [https://colab.research.google.com/drive/1SE5zIaZnk3WJcArz69liH0CkWyUlOV-E?usp=sharing](https://colab.research.google.com/drive/1SE5zIaZnk3WJcArz69liH0CkWyUlOV-E?usp=sharing) package at: [https://pypi.org/project/hallunox/](https://pypi.org/project/hallunox/) Arxiv preprint coming soon
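To make steps 3 and 4 concrete, here's a minimal sketch of the scoring path. The `ProjectionHead` architecture, its layer sizes, and the pooling choices are illustrative assumptions, not the actual hallunox internals; in practice the head would be loaded from trained weights.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

class ProjectionHead(torch.nn.Module):
    """Hypothetical head mapping LLM hidden states (3072-d for
    Llama-3.2-3B) into the bge-m3 embedding space (1024-d)."""
    def __init__(self, in_dim=3072, out_dim=1024):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 2048), torch.nn.GELU(),
            torch.nn.Linear(2048, out_dim),
        )
    def forward(self, x):
        return self.net(x)

llm_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
llm = AutoModel.from_pretrained("meta-llama/Llama-3.2-3B")
emb_tok = AutoTokenizer.from_pretrained("BAAI/bge-m3")
embedder = AutoModel.from_pretrained("BAAI/bge-m3")
head = ProjectionHead()  # assume trained weights are loaded here

query = "Who directed Sitaare Zameen Par?"
with torch.no_grad():
    # 1) "Thought vector": mean-pooled last hidden state of the LLM.
    h = llm(**llm_tok(query, return_tensors="pt")).last_hidden_state.mean(dim=1)
    # 2) Ground-truth semantic vector (CLS pooling, as bge models use).
    g = embedder(**emb_tok(query, return_tensors="pt")).last_hidden_state[:, 0]
    # 3) Project the thought vector into the ground-truth space.
    p = head(h)

# 4) Cosine similarity as the confidence / hallucination-risk proxy.
score = F.cosine_similarity(p, g).item()
print(f"confidence proxy: {score:.3f}")  # low => route to RAG / bigger model
```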
2025-09-24T18:45:42
https://www.reddit.com/r/LocalLLaMA/comments/1npkf9d/detecting_hallucination_from_the_hidden_space_of/
Nandakishor_ml
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npkf9d
false
null
t3_1npkf9d
/r/LocalLLaMA/comments/1npkf9d/detecting_hallucination_from_the_hidden_space_of/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
Prototype Cognitive Engine Using E8 Lattice, Wave Dynamics, and Bandit-VAE Compression
0
# Kaleidoscope : an E8-lattice cognitive substrate (prototype M24) [https://github.com/Howtoimagine/E8-Kaleidescope-AI/tree/main](https://github.com/Howtoimagine/E8-Kaleidescope-AI/tree/main) Short version: Kaleidoscope is an experimental cognitive engine that encodes semantic memory as structured geometry. It uses an E8 root lattice as the organizing scaffold, multi-shell quasicrystal memory, physics-inspired validators, and a compression/novelty stack (VAE + bandit) to turn raw embeddings into durable, self-regulated memory. This is not a fancy prompt wrapper. It is a memory substrate designed for robust long term consolidation, controllable emergence, and retrieval that is sensitive to structure not just distance. https://preview.redd.it/fkxbyinyp5rf1.png?width=2735&format=png&auto=webp&s=a2cb2c46473f8443228f550e9b4d820b9c59d4b7 If you want the code excerpts I can paste them, but for clarity here is what it is, how it works, why it is novel, what I validated so far, limitations, and potential use cases. https://preview.redd.it/q89vdv41q5rf1.png?width=1917&format=png&auto=webp&s=61a0f3743bbf0a789963ec2e46d42a4f2c995e0a # 1) High level physics mapping (what it means and why E8) * **Why E8**: E8 is used as a high dimensional, highly symmetric coordinate system. Rather than claiming a physical universe mapping, Kaleidoscope uses E8 root vectors as *structural probes* to partition embedding space into interpretable axes and to seed a quasicrystal style organization of memory nodes. * **Physics analogy**: The system models memory curvature, wave propagation, and consolidation using physics metaphors rendered as real numeric checks: Laplacian-based curvature fields, CFL-style stability bounds, energy E = φᵀLφ, and dimensionless invariants (Q1, Q2, Q3). These are engineering guards not metaphysical claims. * **Practical payoff**: The physics view gives concrete safety and stability checks. For example, a computed λ\_max of the graph Laplacian informs safe propagation speed ν and prevents numerical blowups or runaway consolidation. # 2) How compression is solved (practical, reproducible) * **VAE core**: An on-path VAE projects high-dimensional embeddings into compact latent shells. Config: EMBED\_DIM (1536) → autoencoder intermediate layers → latent (8) → dimensional shells \[8,16,32,64\]. M24 tuned KL warmup steps, target beta, free bits and buffer sizes to avoid KL collapse. * **Holographic compression**: Holographic fidelity checks and information conservation flags validate that projection plus compaction preserve structure above a fidelity threshold. There are automated telemetry gates that block consolidation if the holographic fidelity metric is below HOLOGRAPHIC\_FIDELITY\_THRESHOLD. * **Project ∘ compact commutation test**: The validator routinely asserts project(compact(x)) ≈ compact(project(x)). This ensures that projection to shells and later compaction operations commute within ε. If not, it flags a violation and either auto­tunes or halts compaction. * **Why it works**: Combining structured projection (shells) with a learned compressor (VAE) reduces redundancy while preserving relational geometry. It is not lossless, but it is fidelity-checked and reversible enough for downstream retrieval and reasoning. # 3) System layers and what is novel * **Embedding layer**: standard embeddings (local or API) as input. * **E8 projection layer**: embeddings are snapped/probed against an E8 informed semantic axis basis. This provides an interpretable partitioning and aids targeted retrieval. 
* **Curvature field**: a graph Laplacian over memory nodes represents semantic curvature. Curvature is used to control propagation, proximity, and consolidation dynamics. * **Everywhen Wave**: a damped graph wave integrator that blends static embeddings with a temporally coherent wave component so retrievals have temporal consistency and context smoothing. * **Black-hole / White-hole consolidation**: consolidation events merge related nodes into more compact representations under pressure metrics; white-hole seeds reintroduce distilled signals back into the net to support novelty. The cadence of these events is adaptive (BH\_COOLDOWN, BH\_THRESH\_\* knobs). * **VAE + Bandit layer**: VAE compresses, bandit symmetrization handles novelty vs exploitation for which nodes to consolidate or hold. Bandit clipping and normalization improve stability. * **Teacher / Explorer loop**: a meta loop where Teacher queries the system for grounded questions every N steps and Explorer reflects and synthesizes answers. This creates an internal validation and curiosity mechanism. * **SystematicPhysicsValidator**: enforces invariants (spectral stability, mass conservation, commutation). It auto-tunes wave speed and damping when limits are breached. * **Event Horizon Scheduler**: transactional boundary between async cognition (asyncio) and discrete geometry operations to avoid race conditions and memory mutation without locks. Novelty comes from combining geometric organization (E8/quasicrystal), physics-grade invariants, temporal wave dynamics, and an adaptive compression/novelty pipeline into a unified memory substrate rather than a retrieval layer bolted onto an LLM. # 4) Cyclic information flow (explicit pipeline) 1. **Ingest**: sources are collected, preprocessed, and encoded into embedding vectors. 2. **Project**: embeddings are projected into E8-informed semantic axes and assigned to shells (8,16,32,64). 3. **Curvature update**: the Laplacian and curvature field are recalculated; Everywhen wave receives ρ̃ (source) and steps to update φ. 4. **Teacher step**: every TEACHER\_ASK\_EVERY steps the Teacher produces a question grounded in recent memory. 5. **Explorer step**: after EXPLORER\_OFFSET steps, the Explorer answers, which may spawn new nodes or insights. 6. **Proximity / Retrolink**: retrieval finds candidate nodes using KDTrees, potential-weighted scores, and retrolinks; proximity outcomes are recorded. 7. **Validation**: SystematicPhysicsValidator checks CFL constraints, mass conservation, commutation relations, and Q(t) headroom. Sentinels detect flatline, sQ explosions, or Q↔LOCK divergence. 8. **Consolidation**: if pressure and validation allow, black-hole merges compress nodes; VAE compacts them and the bandit logic decides which consolidated representation to keep. 9. **Re-seed**: white-hole processes can re-seed distilled signals back into the graph for exploration. 10. **Logging**: journey, cycle metrics, invariants are appended to NDJSON logs for offline analysis and reproducibility. This loop is designed to be observable and checkpointable. Cycle metrics and invariants are stored to debug drifts. # 5) What it is and what it could be used for What it is * A memory substrate and retrieval/analysis framework that encodes and compacts semantic content into a structured geometric manifold. * An orchestrator that can supervise LLM calls, provide retrieval candidates, and run internal simulation checks. 
What it could be used for * **Long term RAG augmentation** where retrieval is structure aware rather than only nearest neighbor. * **Persistent assistant memory** that consolidates and prunes rather than just accumulating vectors. * **Scientific hypothesis generation** by mapping conceptual curvature and emergent clusters and surfacing candidate relationships. * **Cognitive research** for studying geometric encodings of semantics and how compression affects recall and inference. * **Creative ideation engines** where novelty scoring and white-hole reseeding foster serendipity. * **MCP or agent substrates** that need coherent long term memory and controlled consolidation. # 6) What we validated and safety checks * **Invariant tests**: spectral bounds, commutation tests for project/compact, and mass conservation checks are implemented and run in M24. These caught multiple parameter regimes that would have induced numerical instability. * **Sentinels**: flatline detection, sQ explosion detection, and Q/LOCK divergence monitors help avoid runaway consolidation or stalled emergence. * **Fallbacks**: numeric fallbacks and stubs exist so the system degrades gracefully if heavy libs are missing. * **Telemetry**: cycle\_metrics.jsonl, journey logs, and proximity outcomes are emitted to support reproducible debugging. # 7) Limitations and open risks * **Not a proven cognitive model**. This is an engineered substrate, not a theory of mind. Treat results as system behavior, not consciousness claims. * **Empirical baselines** are needed. We need direct benchmarks versus standard RAG+ANN approaches for retrieval quality, latency, storage cost, and downstream task performance. * **Hyperparameters sensitive**. Many knobs must be tuned; auto-tuning helps but requires more systematic sweeps. * **Complexity** increases barrier to adoption. Packaging into smaller, testable modules will help adoption. * **Reproducibility** requires deterministic seeds and consistent embedder backends. We have NDJSON logs for this, but public reproducible configs are next. If you have any questions, just ask. I'll deliver.
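As a concrete illustration of the project ∘ compact commutation test from section 2, here is a minimal numpy sketch. The orthogonal projection and float16 round-trip below are stand-in assumptions for the real E8 shell projection and VAE compaction:

```python
# Commutation check: project(compact(x)) ≈ compact(project(x)) within eps.
# "project" here is an orthogonal projection onto a random subspace and
# "compact" is a lossy float16 round-trip; both are stand-ins for the
# system's E8 shell projection and VAE compaction.
import numpy as np

rng = np.random.default_rng(0)
EMBED, SHELL = 1536, 64

# Orthonormal basis for a SHELL-dimensional subspace of R^EMBED.
Q, _ = np.linalg.qr(rng.standard_normal((EMBED, SHELL)))
project = lambda x: Q @ (Q.T @ x)                            # orthogonal projection
compact = lambda x: x.astype(np.float16).astype(np.float64)  # lossy round-trip

x = rng.standard_normal(EMBED)
lhs, rhs = project(compact(x)), compact(project(x))
violation = np.linalg.norm(lhs - rhs) / np.linalg.norm(x)
assert violation < 1e-2, f"commutation violated: {violation:.3e}"
print(f"commutation residual: {violation:.3e}")
```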
2025-09-24T18:37:17
https://www.reddit.com/r/LocalLLaMA/comments/1npk7hj/prototype_cognitive_engine_using_e8_lattice_wave/
thesoraspace
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npk7hj
false
null
t3_1npk7hj
/r/LocalLLaMA/comments/1npk7hj/prototype_cognitive_engine_using_e8_lattice_wave/
false
false
https://b.thumbs.redditm…uHvrbTkH1GeI.jpg
0
null
Is there a way to upload LLMs to cloud servers with better GPUs and run them locally?
0
Let's say my laptop can run XYZ LLM 20B on Q4\_K\_M, but their biggest model is 80B Q8 (or something like that). Maybe I can upload the biggest model to a cloud server with the latest and greatest GPU and then use it from my local machine, so that I can run that model at its full potential. Is something like that even possible? If yes, please share what the setup would look like, along with the links.
2025-09-24T18:34:46
https://www.reddit.com/r/LocalLLaMA/comments/1npk517/is_there_a_way_to_upload_llms_to_cloud_servers/
abdullahmnsr2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npk517
false
null
t3_1npk517
/r/LocalLLaMA/comments/1npk517/is_there_a_way_to_upload_llms_to_cloud_servers/
false
false
self
0
null
Talk me out of it.. provide me better choices.
0
From my understanding, this has the memory bandwidth just shy of a 4090 and just shy of the 5060/70/80 as well. The 5090 on the other hand is like.. almost double the bandwidth. Talk me out of this. AMD 395+ AI Max? Can I run an eGPU on the AMD 395+? Does regular ram in a PC assist the vRAM enough to take a 16gb vram card + 64-128gb of regular ram and get good results on LLMs? Does the regular ram assist enough to hold good context and larger models? I would probably want to run the best Qwen model or as close to it as possible. Need serious help, Reddit.
2025-09-24T18:00:42
https://www.reddit.com/r/LocalLLaMA/comments/1npj8ap/talk_me_out_of_it_provide_me_better_choices/
Western-Source710
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npj8ap
false
null
t3_1npj8ap
/r/LocalLLaMA/comments/1npj8ap/talk_me_out_of_it_provide_me_better_choices/
false
false
self
0
null
suggestions for AI workstation
1
I've been running PyTorch models on my current general-purpose workstation (256GB RAM, 24 cores, RTX A2000 with 12GB GPU memory) for various research projects. It's been fine for smaller models, but I'm moving into larger generative models (transformers and diffusion models) and running into GPU memory limitations. Looking to buy a pre-built deep learning workstation with a budget around $10k. Main needs: more GPU memory for training larger models, faster training and inference times, and a preference to keep everything local rather than cloud. I don't have experience purchasing at this level. From what I can tell, vendors seem to offer either single RTX 4090 (24GB) or dual 4090 configurations in this price range. Also wondering if it's worth going for dual GPUs vs a single more powerful one - I know multi-GPU adds complexity but it might be worth it for the extra memory? Any recommendations for specific configurations that have worked well for similar generative modeling work would be appreciated.
2025-09-24T17:49:13
https://www.reddit.com/r/LocalLLaMA/comments/1npixnz/suggestions_for_ai_workstation/
tw4120
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npixnz
false
null
t3_1npixnz
/r/LocalLLaMA/comments/1npixnz/suggestions_for_ai_workstation/
false
false
self
1
null
Create local voice clones for characters
0
Hey everyone, we’re building a product at [aviad.ai](http://aviad.ai/) that helps game devs create small, local voice clones that they can download and use for game characters. You can use our tool to go from a few reference audio clips to high quality training data and a fully trained voice model quickly. We’re starting with finetunes that come out to around 800 MB. But are working on getting this much smaller in the coming weeks. You can listen to some examples on our website to get a sense for quality. We also have an update to our Unity package coming soon to easily integrate these models into the engine. Our goal is to help game devs create fully voiced, dynamic characters that are very small and run on-device. We’re excited to see new types of game get made with characters like these. If you want early access (will start onboarding folks to the product in the coming week), [join our Discord](https://discord.gg/Jk4jUYghnA)!
2025-09-24T17:29:25
https://v.redd.it/iud1uguod5rf1
formicidfighter
v.redd.it
1970-01-01T00:00:00
0
{}
1npif7m
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/iud1uguod5rf1/DASHPlaylist.mpd?a=1761326979%2CMTZiYzczOTk3YzMwYWJmZmFmOWNkZTQwOWU1MWIxNjhlOGVjNmRhZDE5MGE5YTAyNzhiZWJlNWVhMWU0MTFhMw%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/iud1uguod5rf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 678, 'hls_url': 'https://v.redd.it/iud1uguod5rf1/HLSPlaylist.m3u8?a=1761326979%2CZDUxYTZiNjJhYjU4Njk2Zjg5YTQ1MmY3ZjMyMzYyNGExN2MxZmU0YzUzZTdhNjc5YzEzMjFlYzgzOWU1NDViZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/iud1uguod5rf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1npif7m
/r/LocalLLaMA/comments/1npif7m/create_local_voice_clones_for_characters/
false
false
https://external-preview…a70c6ab6d7da68bb
0
{'enabled': False, 'images': [{'id': 'b3J5NHpmdW9kNXJmMWodetq0-X-xQlX3OZne9SlVIhfayDAZfsXdUinY525R', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/b3J5NHpmdW9kNXJmMWodetq0-X-xQlX3OZne9SlVIhfayDAZfsXdUinY525R.png?width=108&crop=smart&format=pjpg&auto=webp&s=2bfa726fb0227c9b2134f1d18459b96753d85f9c', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/b3J5NHpmdW9kNXJmMWodetq0-X-xQlX3OZne9SlVIhfayDAZfsXdUinY525R.png?width=216&crop=smart&format=pjpg&auto=webp&s=a41d8173241fb3219be05d641119d4a9aa1ce060', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/b3J5NHpmdW9kNXJmMWodetq0-X-xQlX3OZne9SlVIhfayDAZfsXdUinY525R.png?width=320&crop=smart&format=pjpg&auto=webp&s=492189644da7ccc58cb2705c460b5aa3fd517c4a', 'width': 320}, {'height': 339, 'url': 'https://external-preview.redd.it/b3J5NHpmdW9kNXJmMWodetq0-X-xQlX3OZne9SlVIhfayDAZfsXdUinY525R.png?width=640&crop=smart&format=pjpg&auto=webp&s=66382c85f46b99828391fcc7be95c9a96390ef7a', 'width': 640}, {'height': 508, 'url': 'https://external-preview.redd.it/b3J5NHpmdW9kNXJmMWodetq0-X-xQlX3OZne9SlVIhfayDAZfsXdUinY525R.png?width=960&crop=smart&format=pjpg&auto=webp&s=3a5c11fb1ba5e6469122958495105acc3b8b6879', 'width': 960}, {'height': 572, 'url': 'https://external-preview.redd.it/b3J5NHpmdW9kNXJmMWodetq0-X-xQlX3OZne9SlVIhfayDAZfsXdUinY525R.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7a57d5a2a24a6b1a9cb4cba0ba36748bda52cba5', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/b3J5NHpmdW9kNXJmMWodetq0-X-xQlX3OZne9SlVIhfayDAZfsXdUinY525R.png?format=pjpg&auto=webp&s=882d1bee1b13bbcefe7de74f961daaa7d06468bc', 'width': 1358}, 'variants': {}}]}
Any good resources to learn llama.cpp tool and its parameters and settings?
7
I’ve been using llama.cpp instead of LM Studio but I’ve been a script kid and copy pasting or using flags blindly. I want to know what I’m doing and I’d like to ask the community that where do I learn everything about llama.cpp in good detail. Multiple resources that you have learned from, please drop them like Qwen drops new models.
2025-09-24T17:22:22
https://www.reddit.com/r/LocalLLaMA/comments/1npi8lt/any_good_resources_to_learn_llamacpp_tool_and_its/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npi8lt
false
null
t3_1npi8lt
/r/LocalLLaMA/comments/1npi8lt/any_good_resources_to_learn_llamacpp_tool_and_its/
false
false
self
7
null
Mix of feelings
0
So I have been using Claude for a couple months now when I was moving and have yet to setup my beast Pc and also looking to get a 96gb vRAM monster in the new rtx pro 6000 first. Assume by some miracle I am able to have 192gb of vRAM (4x quadro 8000 or 2x RTX Pro 6000) and load up on System RAM, say 500gb of DDR5… What kind of top level models and shenanigans will I be able to operate with? I am trying to dive head first back into local and leave Claude in the dust (hard with Claude code though being clutch). Thanks!!!
2025-09-24T17:08:56
https://www.reddit.com/r/LocalLLaMA/comments/1nphvx3/mix_of_feelings/
EasyConference4177
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nphvx3
false
null
t3_1nphvx3
/r/LocalLLaMA/comments/1nphvx3/mix_of_feelings/
false
false
self
0
null
Qwen3-30B-A3B for role-playing
18
My favorite model for roleplaying, using a good detailed prompt, has been Gemma 3, until today when I decided to try something unusual: Qwen3-30B-A3B. Well, that thing is incredible! It seems to follow the prompt much better than Gemma; interactions and scenes are really vivid, original, filled with sensory details. The only problem is, it really likes to write (often 15-20 lines per reply) and sometimes it keeps expanding the dialogue in the same reply (so it becomes twice as long...). I'm using the recommended "official" settings for Qwen. Any idea how I can reduce this behaviour?
2025-09-24T17:00:05
https://www.reddit.com/r/LocalLLaMA/comments/1nphn86/qwen330ba3b_for_roleplaying/
beneath_steel_sky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nphn86
false
null
t3_1nphn86
/r/LocalLLaMA/comments/1nphn86/qwen330ba3b_for_roleplaying/
false
false
self
18
null
oom using ik_llama with iq_k quants
3
I can't get my head around it. Epyc 7663, 512 GB RAM, several GPUs (3090, 4x 3060). 1. llama.cpp with deepseek 3.1 ud\_q4\_k\_xl (387 GB) just works. If I need more context, just add more of the 12 GB GPUs via CUDA\_VISIBLE\_DEVICES. >--n-gpu-layers 999 -ngld 999 --slots --flash-attn 1 --props --metrics --no-webui --jinja --threads 56 --cache-type-k q8_0 --cache-type-v q8_0 -m /mnt/models/UD-Q4_K_XL/DeepSeek-V3.1-UD-Q4_K_XL-00001-of-00008.gguf -ot ".ffn_(up|down|gate)_exps.=CPU" -c 163840 --top-p 0.95 --temp 0.6 2. ik\_llama.cpp with deepseek 3.1 ud\_q4\_k\_xl (387 GB) barely works with reduced context size (23.x GB / 24 GB VRAM used); additional GPUs don't matter, can't increase context size. >-mla 3 -fa -amb 512 -fmoe --n-gpu-layers 999 --override-tensor exps=CPU --jinja --parallel 1 --threads 56 --cache-type-k q8_0 -m /mnt/models/UD-Q4_K_XL/DeepSeek-V3.1-UD-Q4_K_XL-00001-of-00008.gguf -c 98304 -rtr --top-p 0.95 --temp 0.6 3. ik\_llama.cpp with deepseek 3.1 iq4\_k, iq4\_ks, smol-iq4\_kss (411 GB - 342 GB): same parameters as above but without -rtr and obviously the right -m. Even reducing context to 32k does not matter; always OOM on CUDA0, and additional GPUs don't help. Even partially offloading some of the layers manually to CUDA1 doesn't fix the issue. From my observation it seems that the CUDA0 buffer size is much larger (13.4 GB vs 10 GB) with iq\_k quants. Please tell me what I'm doing wrong. Speedup in pp is already huge with ik.
2025-09-24T16:55:00
https://www.reddit.com/r/LocalLLaMA/comments/1nphifl/oom_using_ik_llama_with_iq_k_quants/
pixelterpy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nphifl
false
null
t3_1nphifl
/r/LocalLLaMA/comments/1nphifl/oom_using_ik_llama_with_iq_k_quants/
false
false
self
3
null
New Agent benchmark from Meta Super Intelligence Lab and Hugging Face
184
[https://huggingface.co/blog/gaia2](https://huggingface.co/blog/gaia2)
2025-09-24T16:39:07
https://i.redd.it/fjardl7x45rf1.png
clem59480
i.redd.it
1970-01-01T00:00:00
0
{}
1nph3az
false
null
t3_1nph3az
/r/LocalLLaMA/comments/1nph3az/new_agent_benchmark_from_meta_super_intelligence/
false
false
default
184
{'enabled': True, 'images': [{'id': 'fjardl7x45rf1', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/fjardl7x45rf1.png?width=108&crop=smart&auto=webp&s=78e7826d3cccede1e73a9b3b3f162f33fdce9224', 'width': 108}, {'height': 248, 'url': 'https://preview.redd.it/fjardl7x45rf1.png?width=216&crop=smart&auto=webp&s=7f54dad2eeae8b5eedca6d7915b53a968410896a', 'width': 216}, {'height': 367, 'url': 'https://preview.redd.it/fjardl7x45rf1.png?width=320&crop=smart&auto=webp&s=c8c7328f8ddae2a29e2c61da827ed5f33b1c6dd1', 'width': 320}, {'height': 735, 'url': 'https://preview.redd.it/fjardl7x45rf1.png?width=640&crop=smart&auto=webp&s=8b3257ce7506ae29e607d203aa3899a1ddf43031', 'width': 640}], 'source': {'height': 1030, 'url': 'https://preview.redd.it/fjardl7x45rf1.png?auto=webp&s=8a8ec9f4155f8023aafc2d1e703062cb60161ce5', 'width': 896}, 'variants': {}}]}
Qwen3-30B-A3B... for RP
1
[removed]
2025-09-24T16:30:55
https://www.reddit.com/r/LocalLLaMA/comments/1npgvld/qwen330ba3b_for_rp/
mark_haas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npgvld
false
null
t3_1npgvld
/r/LocalLLaMA/comments/1npgvld/qwen330ba3b_for_rp/
false
false
self
1
null
Memory Enhanced Adapter for Reasoning
17
tl;dr: 74% performance on GSM8K (500 train samples, 50 test samples) using Llama 3 8B. Building from the idea that working memory is a strong correlate of general intelligence, I created a "working memory adapter" technique that equips LLMs, which typically have a linear memory, with a graph-attention-powered global memory. Via the usage of a special <memory> tag and direction injection via LoRA, the LLM receives an input summarizing all previous model hidden states. The technique works for any dataset, but I imagine it's best suited for reasoning tasks. There's a slight problem with stepping the CoT where the steps are not terminated correctly and are therefore parsed incorrectly, producing an empty string for the second parsed step while including all reasoning steps in the first parsed step's output. I'm not sure what the conventional way of fixing this problem is. Does CoT training usually include special <beginning\_of\_thought>, <end\_of\_thought> tokens? (See the sketch below.) I was hoping to get everyone's opinion about where to go from here. The performance on an abbreviated dataset trained for a few epochs was pretty good, which you can see in the linked Colab notebook. What should I change, if anything, regarding hyperparameters and model architecture? I've attempted multiple different enhanced architectures, all of which fail except for a multi-layer LoRA integration, which performs on par with the single-layer LoRA integration. A multi-layer GAT failed, as did a multi-"arm" GAT which had specialized arms fused with a GAT. Lastly, does anybody know of similar GNN techniques applied to LLMs / LLM reasoning? What about working-memory-esque augmentations for LLMs... everyone seems to be excited about long-term memory for LLMs and not at all about working/short-term memory.
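On the parsing question: one common fix is to register explicit step delimiters as special tokens, so the tokenizer never splits or merges them, and then split the generated text on those markers. Here's a minimal sketch with Hugging Face transformers; the delimiter strings are illustrative, not a fixed convention:

```python
# Register explicit CoT step delimiters as special tokens, then parse
# on them. Delimiter names are illustrative; any unique strings work.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

STEP_OPEN, STEP_CLOSE = "<thought>", "</thought>"
tok.add_special_tokens({"additional_special_tokens": [STEP_OPEN, STEP_CLOSE]})
model.resize_token_embeddings(len(tok))  # new embedding rows for new tokens

def parse_steps(text: str) -> list[str]:
    """Split generated text into CoT steps by the explicit delimiters."""
    steps = []
    for chunk in text.split(STEP_OPEN)[1:]:
        steps.append(chunk.split(STEP_CLOSE)[0].strip())
    return steps

sample = "<thought>15 * 4 = 60</thought><thought>60 + 2 = 62</thought>"
print(parse_steps(sample))  # ['15 * 4 = 60', '60 + 2 = 62']
```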
2025-09-24T16:25:31
https://colab.research.google.com/drive/1UphKe8gValkFeYFjUc6SFrSq_aPWTJdu?usp=sharing
arcco96
colab.research.google.com
1970-01-01T00:00:00
0
{}
1npgqfe
false
null
t3_1npgqfe
/r/LocalLLaMA/comments/1npgqfe/memory_enhanced_adapter_for_reasoning/
false
false
default
17
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
China's latest GPU arrives with claims of CUDA compatibility and RT support — Fenghua No.3 also boasts 112GB+ of HBM memory for AI
411
2025-09-24T16:18:29
https://www.tomshardware.com/pc-components/gpus/chinas-latest-gpu-arrives-with-claims-of-cuda-compatibility-and-rt-support-fenghua-no-3-also-boasts-112gb-of-hbm-memory-for-ai
Battle-Chimp
tomshardware.com
1970-01-01T00:00:00
0
{}
1npgjpv
false
null
t3_1npgjpv
/r/LocalLLaMA/comments/1npgjpv/chinas_latest_gpu_arrives_with_claims_of_cuda/
false
false
default
411
{'enabled': False, 'images': [{'id': 'WMoFSM_ESNhbjEkciy-Rd0SY0RIYgPT1B4RqqWSMj-g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WMoFSM_ESNhbjEkciy-Rd0SY0RIYgPT1B4RqqWSMj-g.jpeg?width=108&crop=smart&auto=webp&s=0c5910dd5ffbb78d5a181e1bbec239179d0c402d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WMoFSM_ESNhbjEkciy-Rd0SY0RIYgPT1B4RqqWSMj-g.jpeg?width=216&crop=smart&auto=webp&s=83a23456457ec9825b1cc4ae9da57fb842dd7a7b', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/WMoFSM_ESNhbjEkciy-Rd0SY0RIYgPT1B4RqqWSMj-g.jpeg?width=320&crop=smart&auto=webp&s=c246cf4f7688038dd1a20da4376f084c1a255ccd', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/WMoFSM_ESNhbjEkciy-Rd0SY0RIYgPT1B4RqqWSMj-g.jpeg?width=640&crop=smart&auto=webp&s=1e6a5a5393e6e1773fbc67615613b404ec6ab6f7', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/WMoFSM_ESNhbjEkciy-Rd0SY0RIYgPT1B4RqqWSMj-g.jpeg?width=960&crop=smart&auto=webp&s=d7d48ac140043c663c1de8c3f3a8c3f01b09d177', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/WMoFSM_ESNhbjEkciy-Rd0SY0RIYgPT1B4RqqWSMj-g.jpeg?width=1080&crop=smart&auto=webp&s=7d16c85fb2052fb2a78a7a352c1c270ca361ca03', 'width': 1080}], 'source': {'height': 1838, 'url': 'https://external-preview.redd.it/WMoFSM_ESNhbjEkciy-Rd0SY0RIYgPT1B4RqqWSMj-g.jpeg?auto=webp&s=3cd3d47216247ade984559969d02e7f0f378a0b0', 'width': 3268}, 'variants': {}}]}
Chinese modified 3080 20GB performance..
117
I'm quite surprised to see it beat the 3080 Ti.
2025-09-24T15:45:55
https://www.reddit.com/gallery/1npfnvw
sub_RedditTor
reddit.com
1970-01-01T00:00:00
0
{}
1npfnvw
false
null
t3_1npfnvw
/r/LocalLLaMA/comments/1npfnvw/chinese_modified_3080_20gb_performance/
false
false
https://a.thumbs.redditm…-8wxZrG9D7z4.jpg
117
null
Chinese modified 3080 20GB performance.
1
Here are some benchmarks of the 3080 20GB I just got for my inference / image and video generation PC. I'm quite surprised to see it beat the 3080 Ti.
2025-09-24T15:38:06
https://www.reddit.com/gallery/1npfgjc
sub_RedditTor
reddit.com
1970-01-01T00:00:00
0
{}
1npfgjc
false
null
t3_1npfgjc
/r/LocalLLaMA/comments/1npfgjc/chinese_modified_3080_20gb_performance/
false
false
https://b.thumbs.redditm…dP8mUObux6nM.jpg
1
null
Does anybody know how to configure maximum context length or input tokens in litellm?
2
I can't seem to get this configured correctly. The documentation doesn't seem to be much help. There is the max\_tokens setting but that seems to be for output rather than input or context limit.
2025-09-24T14:45:55
https://www.reddit.com/r/LocalLLaMA/comments/1npe2uq/does_anybody_know_how_to_configure_maximum/
inevitabledeath3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npe2uq
false
null
t3_1npe2uq
/r/LocalLLaMA/comments/1npe2uq/does_anybody_know_how_to_configure_maximum/
false
false
self
2
null
How do you know which contributors’ quantisation to trust on huggingface?
8
New to the local LLM scene and trying to experiment a bit with running models on my phone, but confused about how to pick which version to download. E.g. I’d like to run Qwen3 4B Instruct 2507, but then need to rely on a contributor's version of this - not directly the Qwen page? How do you pick who to trust here (and is there even a big risk)? I kind of get "go with the one with the most downloads", but that seems a bit random - seeing names like bartowski, unsloth, maziyar panahi.
2025-09-24T14:29:38
https://www.reddit.com/r/LocalLLaMA/comments/1npdnok/how_do_you_know_which_contributors_quantisation/
AllSystemsFragile
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npdnok
false
null
t3_1npdnok
/r/LocalLLaMA/comments/1npdnok/how_do_you_know_which_contributors_quantisation/
false
false
self
8
null
Which quantizations are you using?
10
Not necessarily models, but with the rise of 100B+ models, I wonder which quantization algorithms you are using and why? I have been using AWQ 4-bit, and it's been pretty good, but slow on input (been using it with Llama 3.3 70B; with newer MoE models it would probably be better).
2025-09-24T13:44:31
https://www.reddit.com/r/LocalLLaMA/comments/1npcj8a/which_quantizations_are_you_using/
WeekLarge7607
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npcj8a
false
null
t3_1npcj8a
/r/LocalLLaMA/comments/1npcj8a/which_quantizations_are_you_using/
false
false
self
10
null
Local AI Agent | Open Source
8
Hey everyone, I'm happily announcing my Agent CLI program! It supports most APIs, example configs are provided for popular LLM Providers I've been stress-testing it for days with a series of increasingly difficult tasks, and I wanted to share the final result. The "final exam" was to build a configurable quiz generator from scratch. The rules were brutal: it had to use a specific, less-common JS library (Alpine.js) for reactivity, manage a complex two-stage UI, and follow a strict design system—all in a single HTML file. After 30 minutes of generation on my laptop (running a Qwen3-Instruct-30B-Q8 MoE model), it produced a fully functional, single-file web app. The repository: [AISlop Agent Github](https://github.com/cride9/AISlop) The outcome: [Configurable Quiz Generator](https://cride9.github.io/slopquiz.html) The most fascinating part was watching different models fail in unique ways before this one finally succeeded. It really pushed the boundaries of what I thought was possible with local models. Happy to answer any questions about the setup or the agent's instructions!
2025-09-24T13:39:24
https://www.reddit.com/r/LocalLLaMA/comments/1npcetj/local_ai_agent_open_source/
cride20
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npcetj
false
null
t3_1npcetj
/r/LocalLLaMA/comments/1npcetj/local_ai_agent_open_source/
false
false
self
8
{'enabled': False, 'images': [{'id': 'X13zT3roLERkVTVB6MJBgw__sRwi3_KPDSjlGNgClPE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X13zT3roLERkVTVB6MJBgw__sRwi3_KPDSjlGNgClPE.png?width=108&crop=smart&auto=webp&s=c5178d59326e09ea3f34a4cc4b54f5301782ddfa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X13zT3roLERkVTVB6MJBgw__sRwi3_KPDSjlGNgClPE.png?width=216&crop=smart&auto=webp&s=e61166fdf17d07513507bcf1669142f88ebc7cac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X13zT3roLERkVTVB6MJBgw__sRwi3_KPDSjlGNgClPE.png?width=320&crop=smart&auto=webp&s=deb3528731ad28934bd88622b9465c214db66033', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X13zT3roLERkVTVB6MJBgw__sRwi3_KPDSjlGNgClPE.png?width=640&crop=smart&auto=webp&s=1dba36f4efbcdec11ee2b071eacb55f4d461fa5b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X13zT3roLERkVTVB6MJBgw__sRwi3_KPDSjlGNgClPE.png?width=960&crop=smart&auto=webp&s=ee1a817cd2b16c4168c4207c33516ca7d4ed86d1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X13zT3roLERkVTVB6MJBgw__sRwi3_KPDSjlGNgClPE.png?width=1080&crop=smart&auto=webp&s=d75dc4b77567eac5c5ac865a4b6f2766ca4a4db8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X13zT3roLERkVTVB6MJBgw__sRwi3_KPDSjlGNgClPE.png?auto=webp&s=32ccd5304c235db8488839ec56b61908481a7c8b', 'width': 1200}, 'variants': {}}]}
Omg I got free $200 worth of credits
1
[removed]
2025-09-24T13:38:34
https://i.redd.it/vp5terit84rf1.png
No-Hovercraft3184
i.redd.it
1970-01-01T00:00:00
0
{}
1npce3r
false
null
t3_1npce3r
/r/LocalLLaMA/comments/1npce3r/omg_i_got_free_200_worth_of_credits/
false
false
https://b.thumbs.redditm…kFUgxm5IkBIw.jpg
1
{'enabled': True, 'images': [{'id': '68UCGnehkxMhPiEySgp63Rg4-ptt063wnhJG2Kw8s_c', 'resolutions': [{'height': 196, 'url': 'https://preview.redd.it/vp5terit84rf1.png?width=108&crop=smart&auto=webp&s=58fdd4d547526caee30f2b0138de4c036456b41b', 'width': 108}, {'height': 393, 'url': 'https://preview.redd.it/vp5terit84rf1.png?width=216&crop=smart&auto=webp&s=165efb07a49927439b9797448f1c49535dfb7b65', 'width': 216}, {'height': 583, 'url': 'https://preview.redd.it/vp5terit84rf1.png?width=320&crop=smart&auto=webp&s=5fbff2d74c7b48758906ab55e56831e1076de03c', 'width': 320}, {'height': 1166, 'url': 'https://preview.redd.it/vp5terit84rf1.png?width=640&crop=smart&auto=webp&s=ab01ba1086a8a612b75b34303168affe8e7ae5ca', 'width': 640}, {'height': 1750, 'url': 'https://preview.redd.it/vp5terit84rf1.png?width=960&crop=smart&auto=webp&s=0ffa40b549d6a4b6d8289de37f80d6df06e0ec06', 'width': 960}, {'height': 1968, 'url': 'https://preview.redd.it/vp5terit84rf1.png?width=1080&crop=smart&auto=webp&s=ade1c6468ece1b701894638ea92e79e371d85a6c', 'width': 1080}], 'source': {'height': 2224, 'url': 'https://preview.redd.it/vp5terit84rf1.png?auto=webp&s=50a8ab9c41f159a5107e5d9a2ca6a513bdfb5499', 'width': 1220}, 'variants': {}}]}
What’s the best local LLM rig I can put together for around $1000?
8
I’m trying to get into running local LLMs and want to put together a build for it. Budget’s about 1000 USD and I’m wondering what kind of build makes the most sense. Should I be throwing most of that into a GPU, or is a more balanced CPU/GPU/RAM setup smarter? Any particular cards or parts you’d recommend? (Main usage will be local video/image models.) Curious if people here have done something similar; I'd love to hear what builds you’ve put together, what worked, and what you’d do in my case. Thanks in advance!
2025-09-24T13:22:23
https://www.reddit.com/r/LocalLLaMA/comments/1npc0ar/whats_the_best_local_llm_rig_i_can_put_together/
Holiday_Leg8427
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npc0ar
false
null
t3_1npc0ar
/r/LocalLLaMA/comments/1npc0ar/whats_the_best_local_llm_rig_i_can_put_together/
false
false
self
8
null
VibeVoice proper repo?
3
Hi, does anyone have the correct VibeVoice 1.5B and 9B repo and model links? I heard MS took it down, and there are some links floating around, but I'm not sure which one is correct. I'm not comfortable using Comfy to install; I want to install manually.
2025-09-24T13:19:49
https://www.reddit.com/r/LocalLLaMA/comments/1npby4h/vibevoice_proper_repo/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npby4h
false
null
t3_1npby4h
/r/LocalLLaMA/comments/1npby4h/vibevoice_proper_repo/
false
false
self
3
null
Reproducing GPT-2 (124M) from scratch - results & notes
80
Over the last couple of weeks, I followed karpathy’s [‘Let’s Reproduce GPT-2’](https://www.youtube.com/watch?v=l8pRSuU81PU) video religiously—making notes, implementing the logic line by line, and completing a re-implementation of GPT-2 from scratch. I went a few steps further by implementing some of the improvements suggested by u/karpathy (such as learning rate adjustments and data loader fixes), along with modern enhancements like RoPE and SwiGLU-FFN. My best-performing experiment `gpt2-rope`, achieved a validation loss of **2.987** and a HellaSwag accuracy of **0.320.** |Experiment|Min Validation Loss|Max HellaSwag Acc|Description| |:-|:-|:-|:-| |gpt2-baseline|3.065753|0.303724|Original GPT-2 architecture| |gpt2-periodicity-fix|3.063873|0.305517|Fixed data loading periodicity| |gpt2-lr-inc|3.021046|0.315475|Increased learning rate by 3x and reduced warmup steps| |gpt2-global-datafix|3.004503|0.316869|Used global shuffling with better indexing| |gpt2-rope|**2.987392**|**0.320155**|Replaced learned embeddings with RoPE| |gpt2-swiglu|3.031061|0.317467|Replaced FFN with SwiGLU-FFN activation| I really loved the whole process of writing the code, running multiple trainings and gradually seeing the losses improve. I learnt so much about LLMs pre-training from this single video. Honestly, the $200 I spent on compute over these two weeks was the best money I’ve spent lately. Learned a ton and had fun. I have made sure to log everything, the code, training runs, checkpoints, notes: * Repo: [https://github.com/garg-aayush/building-from-scratch/blob/main/gpt-2/](https://github.com/garg-aayush/building-from-scratch/blob/main/gpt-2/) * Notes: [https://github.com/garg-aayush/building-from-scratch/blob/main/gpt-2/notes/lecture\_notes.md](https://github.com/garg-aayush/building-from-scratch/blob/main/gpt-2/notes/lecture_notes.md) * Runs: [https://wandb.ai/garg-aayush/pre-training](https://wandb.ai/garg-aayush/pre-training) * Dataset (training and validation): [Google Drive](https://drive.google.com/drive/folders/1FGHKpY0_jJmSR_j7ki4oyoxK-fJgldgG?usp=sharing) * Best checkpoints for each experiment: [Google Drive](https://drive.google.com/drive/folders/1S9mFDMG3ZPjA-JGdx_814T_NCVSjUJO-?usp=sharing)
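For anyone curious about the gpt2-rope experiment: the core change is small. Here's a minimal sketch of the standard interleaved RoPE formulation I used as a reference (the exact training code is in the repo):

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate query/key vectors by position-dependent angles.

    x: (batch, n_heads, seq_len, head_dim); head_dim must be even.
    """
    b, h, t, d = x.shape
    # Per-pair frequencies: theta_i = base^(-2i/d)
    inv_freq = 1.0 / (base ** (torch.arange(0, d, 2, device=x.device).float() / d))
    angles = torch.outer(torch.arange(t, device=x.device).float(), inv_freq)  # (t, d/2)
    cos, sin = angles.cos(), angles.sin()  # broadcast over batch and heads
    x1, x2 = x[..., 0::2], x[..., 1::2]    # interleaved even/odd channels
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)             # back to (batch, n_heads, seq_len, head_dim)
```

This gets applied to queries and keys right before attention, and the learned positional embedding table is dropped entirely.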
2025-09-24T13:19:20
https://www.reddit.com/r/LocalLLaMA/comments/1npbxpw/reproducing_gpt2_124m_from_scratch_results_notes/
garg-aayush
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npbxpw
false
{'oembed': {'author_name': 'Andrej Karpathy', 'author_url': 'https://www.youtube.com/@AndrejKarpathy', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/l8pRSuU81PU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Let&#39;s reproduce GPT-2 (124M)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/l8pRSuU81PU/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Let's reproduce GPT-2 (124M)", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1npbxpw
/r/LocalLLaMA/comments/1npbxpw/reproducing_gpt2_124m_from_scratch_results_notes/
false
false
self
80
{'enabled': False, 'images': [{'id': 'Vt6iMmXcWe78znjNZqJ9nEufupk1m_LVEHGIsA87b3o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Vt6iMmXcWe78znjNZqJ9nEufupk1m_LVEHGIsA87b3o.jpeg?width=108&crop=smart&auto=webp&s=e4bcc3fbf1ecb09c35447499c5a812fe69ad6593', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Vt6iMmXcWe78znjNZqJ9nEufupk1m_LVEHGIsA87b3o.jpeg?width=216&crop=smart&auto=webp&s=4661d866e60c9c4465031298ff8ed0c0881e55ee', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Vt6iMmXcWe78znjNZqJ9nEufupk1m_LVEHGIsA87b3o.jpeg?width=320&crop=smart&auto=webp&s=1f19f06e9fc779ca6468a6c1b10abcff2a43181b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Vt6iMmXcWe78znjNZqJ9nEufupk1m_LVEHGIsA87b3o.jpeg?auto=webp&s=a860d8af0d2deda6867603d3e66e49e3c5bcc622', 'width': 480}, 'variants': {}}]}
iPhone app for voice recording and AI processing
2
Hello all! I wanted to share an app I’ve built for the iPhone to record audio, then transcribe and summarize it. It’s called BisonNotes AI; it’s free, open source, and available on the App Store. https://apps.apple.com/us/app/bisonnotes-ai-voice-notes/id6749189425 The advanced settings have configuration for fully local processing of transcription and summaries! I’m sure many of you have local AI systems, and I built this with those in mind from the start. I personally use the whisper and ollama modes to transcribe and then get summaries. The GitHub repo is at: https://github.com/bisonbet/BisonNotes-AI and I’m happy to see issues, PRs, or general comments. You can see the FAQ here (it still needs some work!): https://www.bisonnetworking.com/bisonnotes-ai/
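Under the hood, the fully local mode is conceptually just the pipeline below. This is a Python sketch of the equivalent flow, not the app's Swift code, and the model names are placeholders for whatever you run:

```python
import whisper  # pip install openai-whisper
import ollama   # pip install ollama; assumes a local Ollama server is running

# 1) Transcribe the recording locally.
stt = whisper.load_model("base")
transcript = stt.transcribe("voice_note.m4a")["text"]

# 2) Summarize the transcript with a local model.
reply = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": f"Summarize this voice note:\n\n{transcript}"}],
)
print(reply["message"]["content"])
```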
2025-09-24T13:11:42
https://www.reddit.com/r/LocalLLaMA/comments/1npbrcw/iphone_app_for_voice_recording_and_ai_processing/
Altruistic_Call_3023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npbrcw
false
null
t3_1npbrcw
/r/LocalLLaMA/comments/1npbrcw/iphone_app_for_voice_recording_and_ai_processing/
false
false
self
2
{'enabled': False, 'images': [{'id': 'vBR3qOSelzvkhaNgKyuP4cMPauYWP5a0tivsqN_rJ2A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vBR3qOSelzvkhaNgKyuP4cMPauYWP5a0tivsqN_rJ2A.png?width=108&crop=smart&auto=webp&s=1c605cf57c776ea5df926f437655d69124e65ae9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/vBR3qOSelzvkhaNgKyuP4cMPauYWP5a0tivsqN_rJ2A.png?width=216&crop=smart&auto=webp&s=654a3a41deeb8b65c4200fb5af286417e8a39b19', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/vBR3qOSelzvkhaNgKyuP4cMPauYWP5a0tivsqN_rJ2A.png?width=320&crop=smart&auto=webp&s=6145b070c0c48db1467fb0ddf1f446837efb748f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/vBR3qOSelzvkhaNgKyuP4cMPauYWP5a0tivsqN_rJ2A.png?width=640&crop=smart&auto=webp&s=412208574c91702cb53e3230805b3ef439c9b275', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/vBR3qOSelzvkhaNgKyuP4cMPauYWP5a0tivsqN_rJ2A.png?width=960&crop=smart&auto=webp&s=b93e9102205bacb245f7b21bf542f417b0e7ee8c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/vBR3qOSelzvkhaNgKyuP4cMPauYWP5a0tivsqN_rJ2A.png?width=1080&crop=smart&auto=webp&s=0c621fd384e9236e5b6fb1b6326d71b5d96131e1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/vBR3qOSelzvkhaNgKyuP4cMPauYWP5a0tivsqN_rJ2A.png?auto=webp&s=538a6a0a0d10108e5bf34898a19e316d634eb85a', 'width': 1200}, 'variants': {}}]}
LongCat-Flash-Thinking, an MoE that activates 18.6B∼31.3B parameters
56
What is happening? Can this one really be so good? [https://huggingface.co/meituan-longcat](https://huggingface.co/meituan-longcat)
2025-09-24T12:40:20
https://i.redd.it/oswqrfovx3rf1.png
Trilogix
i.redd.it
1970-01-01T00:00:00
0
{}
1npb1vd
false
null
t3_1npb1vd
/r/LocalLLaMA/comments/1npb1vd/longcatflashthinking_moe_that_activates_186b313b/
false
false
default
56
{'enabled': True, 'images': [{'id': 'oswqrfovx3rf1', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/oswqrfovx3rf1.png?width=108&crop=smart&auto=webp&s=e66dfaa44f0bf3fcd18b3773af7b21cbe000bd5a', 'width': 108}, {'height': 191, 'url': 'https://preview.redd.it/oswqrfovx3rf1.png?width=216&crop=smart&auto=webp&s=22fd1d338dd212d2e11a292f656263202e089fdd', 'width': 216}, {'height': 283, 'url': 'https://preview.redd.it/oswqrfovx3rf1.png?width=320&crop=smart&auto=webp&s=8b8022bc8bc88edbc2c2fb933a2aaf1e8840a47b', 'width': 320}, {'height': 567, 'url': 'https://preview.redd.it/oswqrfovx3rf1.png?width=640&crop=smart&auto=webp&s=d27c1d79f8ef45bbf03cf184beed1a1a5da19925', 'width': 640}], 'source': {'height': 623, 'url': 'https://preview.redd.it/oswqrfovx3rf1.png?auto=webp&s=766a69357274d493673804612631b5a42ab897a5', 'width': 702}, 'variants': {}}]}
Seeking Advice for Fast, Local Voice Cloning/Real-Time TTS (No CUDA/GPU)
4
Hi everyone, I’m working on a personal project where I want to build a voice assistant that speaks in a cloned voice (similar to HAL 9000 from *2001: A Space Odyssey*). The goal is for the assistant to respond interactively, ideally within 10 seconds from input to audio output. Some context: * I have a Windows machine with an AMD GPU, so CUDA is not an option. * I’ve tried models like TTS (Coqui), but I’m struggling with performance and setup. * The voice cloning aspect is important: I want it to sound like a specific reference voice, not a generic TTS voice. My questions: 1. Is it realistic to get sub-10-second generation times without NVIDIA GPUs? 2. Are there any fast, open-source TTS models optimized for CPU or AMD GPUs? 3. Any tips on setup, caching, or streaming methods to reduce latency? Any advice, experiences, or model recommendations would be hugely appreciated! I’m looking for the fastest and most practical way to achieve a responsive, high-quality cloned voice assistant. Thanks in advance!
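For context on what I've tried so far, the Coqui route looks roughly like this (XTTS-v2 does zero-shot cloning from a short reference clip; file names are placeholders, and on my CPU this blows well past the 10-second budget, which is exactly the problem):

```python
import torch
from TTS.api import TTS  # pip install TTS (Coqui)

device = "cuda" if torch.cuda.is_available() else "cpu"  # no CUDA on my AMD box, so CPU
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

tts.tts_to_file(
    text="I'm sorry, Dave. I'm afraid I can't do that.",
    speaker_wav="hal_reference.wav",  # a few seconds of the target voice
    language="en",
    file_path="reply.wav",
)
```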
2025-09-24T12:30:07
https://www.reddit.com/r/LocalLLaMA/comments/1npatwy/seeking_advice_for_fast_local_voice/
GachiMuchiNick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npatwy
false
null
t3_1npatwy
/r/LocalLLaMA/comments/1npatwy/seeking_advice_for_fast_local_voice/
false
false
self
4
null
LM Studio and Context Caching (for API)
3
I'm running a Mac, so LM Studio with their MLX support is my go-to for using local models. When using the LM Studio as a local LLM server that integrates with tools and IDEs (like Zed, Roo, Cline, etc.), things get a bit annoying with the long-context slowdown. As I understand, it happens for 2 reasons: 1. The previous messages are reprocessed, the more messages, the longer it takes. 2. Especially on the Macs, the longer the context, the slower the generation speed. The first point bothers me especially, as this should be a very simple low-hanging fruit to enable caching of the processed context, then just loading it and processing only the latest message. Is that something that can be turned on in LM Studio somewhere (haven't found it in the IDE)? Or is there a way you can get the processed context cached and re-used in the subsequent requests? How do you avoid re-processing old messages when using the servers via the API / third-party apps? While 1. is the main big win I'm after atm, any tips on config to improve the 2. are also appreciated. Do you use KV quantisation or anything that would help with this? (I am running on the latest versions of LM Studio and MLX already - seen people mention there were some recent speedups) Note: I am aware that using mlx-lm you can manually save the KV cache to a file and load it, I'm just wondering if there's a way to get a (significant) speed up for apps that just use the API.
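For anyone else going the manual mlx-lm route, this is roughly what I mean; function names are from recent mlx-lm releases and may have moved, so treat it as a sketch:

```python
from mlx_lm import load, generate
from mlx_lm.models.cache import make_prompt_cache, save_prompt_cache, load_prompt_cache

model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")  # any MLX model

prefix = "You are a coding assistant.\n<long project context here>"

# Pay the prompt-processing cost for the shared prefix once
# (generating a single token is a cheap way to force the prefill).
cache = make_prompt_cache(model)
generate(model, tokenizer, prompt=prefix, max_tokens=1, prompt_cache=cache)
save_prompt_cache("prefix_cache.safetensors", cache)

# Later requests reload the cache and only process the new tokens.
cache = load_prompt_cache("prefix_cache.safetensors")
print(generate(model, tokenizer, prompt="Add a unit test for foo().", prompt_cache=cache))
```

I believe there's also a `mlx_lm.cache_prompt` CLI that does the prefill-and-save step for you. What I'm after is exactly this behavior, but transparent behind LM Studio's API.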
2025-09-24T12:30:06
https://www.reddit.com/r/LocalLLaMA/comments/1npatw9/lm_studio_and_context_caching_for_api/
lakySK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npatw9
false
null
t3_1npatw9
/r/LocalLLaMA/comments/1npatw9/lm_studio_and_context_caching_for_api/
false
false
self
3
null
Be cautious of GPU modification posts. And do not send anyone money. DIY if you can.
154
Just a precautionary post and a reminder that this is Reddit. People can put up a legit-looking website and scam you into sending an advance payment for your 48GB 4090 or 20GB 3080, so be cautious and stay safe. Thanks.
2025-09-24T12:02:25
https://www.reddit.com/r/LocalLLaMA/comments/1npa8yt/be_cautious_of_gpu_modification_posts_and_do_not/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npa8yt
false
null
t3_1npa8yt
/r/LocalLLaMA/comments/1npa8yt/be_cautious_of_gpu_modification_posts_and_do_not/
false
false
self
154
null
Is there a way to turn your local LLM into OCR?
3
Same
2025-09-24T11:55:40
https://www.reddit.com/r/LocalLLaMA/comments/1npa3y4/is_there_a_way_to_turn_your_local_llm_into_ocr/
FatFigFresh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1npa3y4
false
null
t3_1npa3y4
/r/LocalLLaMA/comments/1npa3y4/is_there_a_way_to_turn_your_local_llm_into_ocr/
false
false
self
3
null
Qwen3-14B-ARPO-DeepSearch feedback
14
Hi everyone, hoping not to be intrusive, has anyone ever tried the dongguanting/Qwen3-14B-ARPO-DeepSearch version? How do you like it? Not as an agent model, but just as a model that responds to prompts. What's your experience?
2025-09-24T11:43:59
https://www.reddit.com/r/LocalLLaMA/comments/1np9vz0/qwen314barpodeepsearch_feedback/
Temporary-Roof2867
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np9vz0
false
null
t3_1np9vz0
/r/LocalLLaMA/comments/1np9vz0/qwen314barpodeepsearch_feedback/
false
false
self
14
null
Grok 4 Fast (reasoning) is absolutely beautiful
0
Cheapest, fastest, and most capable model I've used in a long time. The 2M context is absolutely beautiful. What are your opinions?
2025-09-24T11:41:15
https://www.reddit.com/r/LocalLLaMA/comments/1np9u45/grok_4_fast_reasoning_is_absolutely_beautiful/
lolfacemanboy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np9u45
false
null
t3_1np9u45
/r/LocalLLaMA/comments/1np9u45/grok_4_fast_reasoning_is_absolutely_beautiful/
false
false
self
0
null
My second modified 3080 20GB from China , for local Ai inference , video and image generation..
296
I got this triple-fan version instead of the server blower-style card because of fan noise. It's also slightly bigger than the blower card. Temps are quite good and manageable, staying below 75°C even when stress testing @ 300W. And it's a 2½-slot card.
2025-09-24T11:36:56
https://www.reddit.com/gallery/1np9rav
sub_RedditTor
reddit.com
1970-01-01T00:00:00
0
{}
1np9rav
false
null
t3_1np9rav
/r/LocalLLaMA/comments/1np9rav/my_second_modified_3080_20gb_from_china_for_local/
false
false
https://b.thumbs.redditm…noMLgVVg97bI.jpg
296
null
NanoQuant LLM compression
6
While searching for "120b on pi 5" :D, I stumbled upon this 3-week-old repo claiming to do just that via massive compression of huge models. It sounds too good to be true. Anyone with more background knowledge want to check it out? Is it legit or a scam? [https://github.com/swayam8624/nanoquant](https://github.com/swayam8624/nanoquant)
2025-09-24T11:33:38
https://www.reddit.com/r/LocalLLaMA/comments/1np9p4z/nanoquant_llm_compression/
k1k3r86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np9p4z
false
null
t3_1np9p4z
/r/LocalLLaMA/comments/1np9p4z/nanoquant_llm_compression/
false
false
self
6
{'enabled': False, 'images': [{'id': 'w4FE3dn-1yLc1H2UpzLZWO25pvvxsUPVNXrQNvtc33A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w4FE3dn-1yLc1H2UpzLZWO25pvvxsUPVNXrQNvtc33A.png?width=108&crop=smart&auto=webp&s=949289a945a2701647f1e94684845a394fbd2f29', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w4FE3dn-1yLc1H2UpzLZWO25pvvxsUPVNXrQNvtc33A.png?width=216&crop=smart&auto=webp&s=c06c3932fa0e7484305fe45d714c7728ec529b8f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w4FE3dn-1yLc1H2UpzLZWO25pvvxsUPVNXrQNvtc33A.png?width=320&crop=smart&auto=webp&s=6b7595c683da15ade0291ac7aed8e40d51b1fde8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w4FE3dn-1yLc1H2UpzLZWO25pvvxsUPVNXrQNvtc33A.png?width=640&crop=smart&auto=webp&s=959ef7c282a041952915bb898b0cc2e72b79807d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w4FE3dn-1yLc1H2UpzLZWO25pvvxsUPVNXrQNvtc33A.png?width=960&crop=smart&auto=webp&s=80712474811f78596eb8fc9cb6f08b884a82144e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w4FE3dn-1yLc1H2UpzLZWO25pvvxsUPVNXrQNvtc33A.png?width=1080&crop=smart&auto=webp&s=fd9308d07deed4eb0622822fa7afab595bd1108e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w4FE3dn-1yLc1H2UpzLZWO25pvvxsUPVNXrQNvtc33A.png?auto=webp&s=f25a2b482af62a6d55f74f6fce21f6764ffae1c1', 'width': 1200}, 'variants': {}}]}
This guy is a genius. Does it work? Let's try!
0
https://preview.redd.it/…admin\_session\]
2025-09-24T11:31:12
https://www.reddit.com/r/LocalLLaMA/comments/1np9niz/this_guy_is_a_genius_does_it_work_lets_try/
Trilogix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np9niz
false
null
t3_1np9niz
/r/LocalLLaMA/comments/1np9niz/this_guy_is_a_genius_does_it_work_lets_try/
false
false
https://a.thumbs.redditm…lqkW092cM710.jpg
0
null
Some argue that humans could never become economically irrelevant cause even if they cannot compete with AI in the workplace, they’ll always be needed as consumers. However, it is far from certain that the future economy will need us even as consumers. Machines could do that too - Yuval Noah Harari
0
"Theoretically, you can have an economy in which a mining corporation produces and sells iron to a robotics corporation, the robotics corporation produces and sells robots to the mining corporation, which mines more iron, which is used to produce more robots, and so on. These corporations can grow and expand to the far reaches of the galaxy, and all they need are robots and computers – **they don’t need humans even to buy their products.** Indeed, **already today computers are beginning to function as clients in addition to producers. In the stock exchange, for example, algorithms are becoming the most important buyers of bonds, shares and commodities.** Similarly in the advertisement business, the most important customer of all is an algorithm: the Google search algorithm. When people design Web pages, they often cater to the taste of the Google search algorithm rather than to the taste of any human being. Algorithms cannot enjoy what they buy, and their decisions are not shaped by sensations and emotions. The Google search algorithm cannot taste ice cream. However, algorithms select things based on their internal calculations and built-in preferences, and these preferences increasingly shape our world. The Google search algorithm has a very sophisticated taste when it comes to ranking the Web pages of ice-cream vendors, and the most successful ice-cream vendors in the world are those that the Google algorithm ranks first – not those that produce the tastiest ice cream. I know this from personal experience. When I publish a book, the publishers ask me to write a short description that they use for publicity online. But they have a special expert, who adapts what I write to the taste of the Google algorithm. The expert goes over my text, and says ‘Don’t use this word – use that word instead. Then we will get more attention from the Google algorithm.’ We know that if we can just catch the eye of the algorithm, we can take the humans for granted. **So if humans are needed neither as producers nor as consumers, what will safeguard their physical survival and their psychological well-being?** **We cannot wait for the crisis to erupt in full force before we start looking for answers. By then it will be too late.** *Excerpt from 21 Lessons for the 21st Century* *Yuval Noah Harari*
2025-09-24T11:19:26
https://www.reddit.com/r/LocalLLaMA/comments/1np9fw1/some_argue_that_humans_could_never_become/
katxwoods
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np9fw1
false
null
t3_1np9fw1
/r/LocalLLaMA/comments/1np9fw1/some_argue_that_humans_could_never_become/
false
false
self
0
null
I'm a student and I want to make money with these models. I'm not sure how; I asked the AI but it keeps saying the same things (freelancing jobs, etc.), so I'm confused. My strength is building products (but so far only for myself).
0
I want stable money, or something like it; I just don't know where to dig.
2025-09-24T11:01:46
https://www.reddit.com/r/LocalLLaMA/comments/1np94j0/im_a_student_i_want_to_make_money_through_these/
Select_Dream634
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np94j0
false
null
t3_1np94j0
/r/LocalLLaMA/comments/1np94j0/im_a_student_i_want_to_make_money_through_these/
false
false
self
0
null
InclusionAI published GGUFs for the Ring-mini and Ling-mini models (MoE 16B A1.4B)
80
[https://huggingface.co/inclusionAI/Ring-mini-2.0-GGUF](https://huggingface.co/inclusionAI/Ring-mini-2.0-GGUF) [https://huggingface.co/inclusionAI/Ling-mini-2.0-GGUF](https://huggingface.co/inclusionAI/Ling-mini-2.0-GGUF) Warning: the PRs are not merged yet: [https://github.com/ggml-org/llama.cpp/pull/16063](https://github.com/ggml-org/llama.cpp/pull/16063) [https://github.com/ggml-org/llama.cpp/pull/16028](https://github.com/ggml-org/llama.cpp/pull/16028) From the model card: Today, we are excited to announce the open-sourcing of **Ling 2.0** — a family of MoE-based large language models that combine **SOTA performance** with **high efficiency**. The first released version, Ling-mini-2.0, is compact yet powerful. It has **16B total parameters**, but only **1.4B** are activated per input token (non-embedding 789M). Trained on more than **20T tokens** of high-quality data and enhanced through multi-stage supervised fine-tuning and reinforcement learning, Ling-mini-2.0 achieves remarkable improvements in complex reasoning and instruction following. With just 1.4B activated parameters, it still reaches the top-tier level of sub-10B dense LLMs and even matches or surpasses much larger MoE models. I hope they will also publish GGUFs for the 103B models soon.
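Until the PRs land you'll need a llama.cpp build from those branches, but the weights can already be fetched; for example with huggingface_hub (the quant filename pattern below is a guess, check the repo listing):

```python
from huggingface_hub import snapshot_download

# Pull just one quant instead of the whole repo; adjust the pattern
# to the actual filenames listed on the model page.
path = snapshot_download(
    "inclusionAI/Ling-mini-2.0-GGUF",
    allow_patterns=["*Q4_K_M*"],
)
print(path)
```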
2025-09-24T10:46:45
https://www.reddit.com/r/LocalLLaMA/comments/1np8uv6/inclusionai_published_ggufs_for_the_ringmini_and/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np8uv6
false
null
t3_1np8uv6
/r/LocalLLaMA/comments/1np8uv6/inclusionai_published_ggufs_for_the_ringmini_and/
false
false
self
80
{'enabled': False, 'images': [{'id': 'sUf6Ned3IXCCuYNBGDc3DLzwXsyglMT2sIw49de-RHw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sUf6Ned3IXCCuYNBGDc3DLzwXsyglMT2sIw49de-RHw.png?width=108&crop=smart&auto=webp&s=7563fad743a835d2fc71340f21c35ecf50982cdf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sUf6Ned3IXCCuYNBGDc3DLzwXsyglMT2sIw49de-RHw.png?width=216&crop=smart&auto=webp&s=e2a00fb84462493d10c0220809b5e3411f9749fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sUf6Ned3IXCCuYNBGDc3DLzwXsyglMT2sIw49de-RHw.png?width=320&crop=smart&auto=webp&s=83a922d117a956d784c440b301a22deb83066a04', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sUf6Ned3IXCCuYNBGDc3DLzwXsyglMT2sIw49de-RHw.png?width=640&crop=smart&auto=webp&s=8adfbd29432c52002d1248f7c12759d31c4fc8d2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sUf6Ned3IXCCuYNBGDc3DLzwXsyglMT2sIw49de-RHw.png?width=960&crop=smart&auto=webp&s=38a0980df6f24ebc74abe82d99edcb063e2e12d7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sUf6Ned3IXCCuYNBGDc3DLzwXsyglMT2sIw49de-RHw.png?width=1080&crop=smart&auto=webp&s=cffde4c19c1323f061eb608f79ef322d5f4e0b48', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sUf6Ned3IXCCuYNBGDc3DLzwXsyglMT2sIw49de-RHw.png?auto=webp&s=93ffc6bed0064fa15c44220ab7caf5c69408315f', 'width': 1200}, 'variants': {}}]}
retraining the model with a new tokenizer and response format
5
I had an idea to take a Qwen model and train it on the gpt-oss tokenizer and chat format, since I prefer that format, but gpt-oss itself is too large for local inference on my laptop. Is it possible to retrain Qwen on the gpt-oss tokenizer and chat format?
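From what I understand, it's possible but it's a real retraining job, not a quick fine-tune: the new tokenizer assigns completely different token IDs, so the embedding and output layers have to be relearned almost from scratch. A rough transformers sketch of the setup (model IDs are just examples):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")
new_tok = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")  # donor tokenizer

# Resize the embedding/output layers to the new vocab size. Surviving rows
# still encode the OLD vocabulary's meanings, so effectively every row has
# to be relearned with continued pretraining on the new tokenization.
model.resize_token_embeddings(len(new_tok))
model.config.vocab_size = len(new_tok)

new_tok.save_pretrained("qwen-gptoss-tok")
model.save_pretrained("qwen-gptoss-tok")
# From here: substantial continued pretraining, then SFT in the gpt-oss chat format.
```

There are also embedding-transplant tricks (initializing rows from tokens whose strings overlap between the two vocabularies) that reportedly cut the retraining cost a lot.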
2025-09-24T10:31:58
https://www.reddit.com/r/LocalLLaMA/comments/1np8m4r/retraining_the_model_with_a_new_tokenizer_and/
Objective-Good310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np8m4r
false
null
t3_1np8m4r
/r/LocalLLaMA/comments/1np8m4r/retraining_the_model_with_a_new_tokenizer_and/
false
false
self
5
null
what AI agent framework is actually production viable and/or least problematic?
3
I started my journey of tinkering with LLM agents using Anthropic's API. More recently I was using **smolagents**, just because I use HuggingFace quite often. However, the CodeAgent and ToolCallingAgent do have their shortcomings, and I would never trust them in production. I have been tinkering with **Pydantic AI** and I must admit they have done quite a thorough job; however, it's been only a little over 2 weeks of me using it in my spare time. I recently came across **Mastra AI** (TypeScript framework) and **Lamini AI** (which allegedly handles hallucinations much better), but I am also thinking of using **LlamaIndex** (when I built a RAG app previously it just felt very... nice.) My reservation with Mastra is that I don't know how I would monitor the model's workflows precisely. While playing with **Langfuse** and **opik** (Comet), I was looking for a full Python experience, but I am also open to any JS/TS frameworks, as I am building the front-end of my application in React. I would love to hear your experiences with agentic frameworks you have used (at least with some level of success?) in production/dev, as well as any ***LLM monitoring tools*** you have taken a liking to! Lastly, can I get a yay/nay for litellm? :D
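For context on the litellm question: the appeal for me is the provider-agnostic call shape. A minimal sketch (model strings are illustrative):

```python
from litellm import completion

# Same call shape whether the backend is OpenAI, Anthropic, or a local Ollama server.
resp = completion(
    model="ollama/qwen2.5:7b",          # or "gpt-4o-mini", "claude-3-5-sonnet-latest", ...
    messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
    api_base="http://localhost:11434",  # only needed for the local backend
)
print(resp.choices[0].message.content)
```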
2025-09-24T10:18:48
https://www.reddit.com/r/LocalLLaMA/comments/1np8eda/what_ai_agent_framework_is_actually_production/
reficul97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np8eda
false
null
t3_1np8eda
/r/LocalLLaMA/comments/1np8eda/what_ai_agent_framework_is_actually_production/
false
false
self
3
null
Mac mini 24GB RAM, 512GB SSD - open source capabilities
1
Hi guys, as the title suggests, I want to know how far I can push a Mac mini with 24GB of RAM and a 512GB SSD, M4 (base). I'm interested mainly in testing (I want to learn how to run things locally), and the main use case would be open-source image/video models. In my country it's now on sale for $900. Is this worth it, or should I make a different decision? Thank you for your feedback!
2025-09-24T10:04:49
https://www.reddit.com/r/LocalLLaMA/comments/1np86df/mac_mini_24_ram_512_ssd_open_source_capabilities/
Holiday_Leg8427
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np86df
false
null
t3_1np86df
/r/LocalLLaMA/comments/1np86df/mac_mini_24_ram_512_ssd_open_source_capabilities/
false
false
self
1
null
Math Benchmarks
4
I think AIME-level problems have become EASY for current SOTA LLMs. We definitely need more "open-source" & "harder" math benchmarks. Any suggestions? At first my attention was on FrontierMath, but as you all know, it is not open-sourced.
2025-09-24T09:39:24
https://www.reddit.com/r/LocalLLaMA/comments/1np7rwa/math_benchmarks/
always_newbee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1np7rwa
false
null
t3_1np7rwa
/r/LocalLLaMA/comments/1np7rwa/math_benchmarks/
false
false
self
4
null