title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
🧙‍♂️ Intelligent README Generator (IRG)
1
[removed]
2025-12-31T12:35:23
https://github.com/mushfiqk47/GithubReadme-Generator
Silent_Sage_69
github.com
1970-01-01T00:00:00
0
{}
1q0c0m6
false
null
t3_1q0c0m6
/r/LocalLLaMA/comments/1q0c0m6/intelligent_readme_generator_irg/
false
false
default
1
null
Those running RAG in production, what's your document parsing pipeline?
1
Following up on my previous post about hardware specs for RAG. Now I'm trying to nail down the document parsing side of things. **Background:** I'm working on a fully self hosted RAG system. Currently I'm using docling for parsing PDFs, docx files and images, combined with rapidocr for scanned pdfs. I have my custom chunking algorithm that chunks the parsed content in the way i want. It works pretty well for the most part, but I get the occasional hiccup with messy scanned documents or weird layouts. I just wanna make sure that I haven't made the wrong call, since there are lots of tools out there. My use case involves handling a mix of everything really. Clean digital PDFs, scanned documents, Word files, the lot. Users upload whatever they have and expect it to just work. For those of you running document parsing in production for your RAG systems: * What are you using for your parsing pipeline? * How do you handle the scanned vs native digital document split? * Any specific tools or combinations that have proven reliable at scale ? I've looked into things like [unstructured.io](http://unstructured.io), pypdf, marker, etc but there's so many options and I'd rather hear from people who've **actually** battle tested these in real deployments rather than just going off benchmarks. Would be great to hear what's actually working for people in the wild. I've already looked into deepseekocr after i saw people hyping it, but it's too memory intensive for my use case and kinda slow. I understand that i'm looking for a self hosted solution, but even if you have something that works pretty well tho it's not self hosted, please feel free to share. I plan on connecting cloud apis for potential customers that wont care if its self hosted. Big thanks in advance for you help ❤️. The last post here, gave me some really good insights.
2025-12-31T12:31:41
https://www.reddit.com/r/LocalLLaMA/comments/1q0by8x/those_running_rag_in_production_whats_your/
Hour-Entertainer-478
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q0by8x
false
null
t3_1q0by8x
/r/LocalLLaMA/comments/1q0by8x/those_running_rag_in_production_whats_your/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=108&crop=smart&auto=webp&s=7ec384e554047088a4b99afdebd307307984dadd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=216&crop=smart&auto=webp&s=554a1d3e8577f0ee0db56f971dba85bc75e5a701', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=320&crop=smart&auto=webp&s=0e6f106f35f70fea2bfc38d49ccd0036c74696ca', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=640&crop=smart&auto=webp&s=2c00075b72d40338665c6a304214fba44a2c2039', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=960&crop=smart&auto=webp&s=ca33291a20487cbb86b8d61754d7a054061541b8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?width=1080&crop=smart&auto=webp&s=bf5d33cefbe6aacfe2173d7cc0f6d8bb65497cff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Diu86grBho57XC89vFaA1PY-1bkPeOUeJZqmRUuWMlM.jpeg?auto=webp&s=7bdc5f5155cc8514620f68056804430ac75d68f6', 'width': 1200}, 'variants': {}}]}
Has anyone achieved GPT 4 or Sonnet above quality experience on Local LLMs?
1
[removed]
2025-12-31T12:26:34
https://www.reddit.com/r/LocalLLaMA/comments/1q0buyp/has_anyone_achieved_gpt_4_or_sonnet_above_quality/
Possible-Hippo1481
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q0buyp
false
null
t3_1q0buyp
/r/LocalLLaMA/comments/1q0buyp/has_anyone_achieved_gpt_4_or_sonnet_above_quality/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nGWwRAR23H5iun9vMzk68nI8Gj8hig7wQP-qQ082yaY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nGWwRAR23H5iun9vMzk68nI8Gj8hig7wQP-qQ082yaY.png?width=108&crop=smart&auto=webp&s=268b521410b5d5993502826dcfdd2e74e4ec1801', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nGWwRAR23H5iun9vMzk68nI8Gj8hig7wQP-qQ082yaY.png?width=216&crop=smart&auto=webp&s=8708d00479d40086908d9d92cf6630073a96cadc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nGWwRAR23H5iun9vMzk68nI8Gj8hig7wQP-qQ082yaY.png?width=320&crop=smart&auto=webp&s=683aa5c0be732701fe78a7ea371525e7b1292f91', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nGWwRAR23H5iun9vMzk68nI8Gj8hig7wQP-qQ082yaY.png?width=640&crop=smart&auto=webp&s=656381aa4e6de9e6772613ecb6bb45ec41a605b4', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nGWwRAR23H5iun9vMzk68nI8Gj8hig7wQP-qQ082yaY.png?width=960&crop=smart&auto=webp&s=ccaf49d808543674fef8fa0560dd2c494ef68dc1', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nGWwRAR23H5iun9vMzk68nI8Gj8hig7wQP-qQ082yaY.png?width=1080&crop=smart&auto=webp&s=e3ee99d1194d33074f5ce7a3c86df857616a8bec', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nGWwRAR23H5iun9vMzk68nI8Gj8hig7wQP-qQ082yaY.png?auto=webp&s=1a03c4cd29c0df89f224782266e6e8bd877a61b6', 'width': 1200}, 'variants': {}}]}
Wich model for philosophy / humanities on a MSI rtx 2060 Super (8Gb)?
1
Hi, i have a geekom IT 13(Mini PC), with a external GPU (MSI RTX 2060 Super OC, 8Gb). I havent found any good Model(s) yet, for philosophical / humanities applications, mainly chatting about topics in a Web Interface (OWUI). Can you recommend anything? Thanks for your Help!
2025-12-31T12:11:17
https://www.reddit.com/r/LocalLLaMA/comments/1q0blg9/wich_model_for_philosophy_humanities_on_a_msi_rtx/
Excellent_Piccolo848
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q0blg9
false
null
t3_1q0blg9
/r/LocalLLaMA/comments/1q0blg9/wich_model_for_philosophy_humanities_on_a_msi_rtx/
false
false
self
1
null
MCIO and GPU
3
Hey all I have a GENOAD8X-2T/BCM unbuilt as yet. Since I was mainly looking at pcie5 slots I failed to notice it has 2x MCIOx4 connectors. I understand these can carry Pcie5? https://www.asrockrack.com/general/productdetail.asp?Model=GENOAD8X-2T/BCM#Specifications So my question is with the right adapter can I use a GPU on those? If so any advantage to the regular pcie5 slots? I mean I’ve seen a 1m cable for mcio so that o would be one…
2025-12-31T12:10:38
https://www.reddit.com/r/LocalLLaMA/comments/1q0bl2d/mcio_and_gpu/
Tk-84-mn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q0bl2d
false
null
t3_1q0bl2d
/r/LocalLLaMA/comments/1q0bl2d/mcio_and_gpu/
false
false
self
3
null
Solar-Open-100B is out
154
https://preview.redd.it/…. GGUF/AWQ Wen?
2025-12-31T12:03:49
https://www.reddit.com/r/LocalLLaMA/comments/1q0bgvl/solaropen100b_is_out/
cgs019283
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q0bgvl
false
null
t3_1q0bgvl
/r/LocalLLaMA/comments/1q0bgvl/solaropen100b_is_out/
false
false
https://b.thumbs.redditm…HYeFvrT072Rk.jpg
154
null
# The Strix Halo Starter Pack: Configuring Ryzen AI Max 395 for Local LLMs
1
[removed]
2025-12-31T11:48:58
https://biuiw.hashnode.dev/the-strix-halo-starter-pack-configuring-ryzen-ai-max-395-for-local-llms
lowcoordination
biuiw.hashnode.dev
1970-01-01T00:00:00
0
{}
1q0b7yv
false
null
t3_1q0b7yv
/r/LocalLLaMA/comments/1q0b7yv/the_strix_halo_starter_pack_configuring_ryzen_ai/
false
false
default
1
null
funny!
284
2025-12-31T11:37:03
https://i.redd.it/rlgtskr40jag1.jpeg
Difficult-Cap-7527
i.redd.it
1970-01-01T00:00:00
0
{}
1q0b0wb
false
null
t3_1q0b0wb
/r/LocalLLaMA/comments/1q0b0wb/funny/
false
false
default
284
{'enabled': True, 'images': [{'id': 'rlgtskr40jag1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/rlgtskr40jag1.jpeg?width=108&crop=smart&auto=webp&s=24a773d4e13b81d02fda5179919784b6f5094d45', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/rlgtskr40jag1.jpeg?width=216&crop=smart&auto=webp&s=b63b1a2dbbbfa9025d4dafc97a9a62421716481e', 'width': 216}, {'height': 224, 'url': 'https://preview.redd.it/rlgtskr40jag1.jpeg?width=320&crop=smart&auto=webp&s=6102a9233dc724655264297396026572b44fdef7', 'width': 320}, {'height': 448, 'url': 'https://preview.redd.it/rlgtskr40jag1.jpeg?width=640&crop=smart&auto=webp&s=3d612af3aca06fbe1a315dd6e24012116c2124ac', 'width': 640}, {'height': 672, 'url': 'https://preview.redd.it/rlgtskr40jag1.jpeg?width=960&crop=smart&auto=webp&s=af54dea3a6f3d43c8f14ffbf6fd40cc2af9c5f9b', 'width': 960}, {'height': 756, 'url': 'https://preview.redd.it/rlgtskr40jag1.jpeg?width=1080&crop=smart&auto=webp&s=498d6d15955fbd551e7d8b21911165b0133ce343', 'width': 1080}], 'source': {'height': 840, 'url': 'https://preview.redd.it/rlgtskr40jag1.jpeg?auto=webp&s=51f99b5d53c91c95c5b73444ebaa85bfac751e6a', 'width': 1200}, 'variants': {}}]}
Solving issue \n\t loops in structured outputs
0
While using LLM with vllm i often ask for structured outputs, expecially in agentic context, and often in json format that must be parsed . However sometimes models like minimax or glm loop over and over with character such as \\n \\t and overflow the max number of tokens, hence the outputted json is wrong, I wanted to have your tips and tricks on how to deal those cases. Should i extend the max\_tokens for him to complete ? or how is there a smart way to deal with it? thanks guys
2025-12-31T11:29:34
https://www.reddit.com/r/LocalLLaMA/comments/1q0awgv/solving_issue_nt_loops_in_structured_outputs/
Best_Sail5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q0awgv
false
null
t3_1q0awgv
/r/LocalLLaMA/comments/1q0awgv/solving_issue_nt_loops_in_structured_outputs/
false
false
self
0
null
Current/future state of AI agent capabilities?
0
When do you think it will be possible for AI agents to, for example, generate fully functional random map for something like Blitzkrieg 1?
2025-12-31T11:09:16
https://www.reddit.com/r/LocalLLaMA/comments/1q0akp5/currentfuture_state_of_ai_agent_capabilities/
Advanced-Stomach-24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q0akp5
false
null
t3_1q0akp5
/r/LocalLLaMA/comments/1q0akp5/currentfuture_state_of_ai_agent_capabilities/
false
false
self
0
null
LGAI-EXAONE/K-EXAONE-236B-A23B · Hugging Face
82
# Introduction We introduce **K-EXAONE**, a large-scale multilingual language model developed by LG AI Research. Built using a Mixture-of-Experts architecture, K-EXAONE features **236 billion total** parameters, with **23 billion active** during inference. Performance evaluations across various benchmarks demonstrate that K-EXAONE excels in reasoning, agentic capabilities, general knowledge, multilingual understanding, and long-context processing. # [](https://huggingface.co/LGAI-EXAONE/K-EXAONE-236B-A23B#key-features) # Key Features * **Architecture & Efficiency:** Features a 236B fine-grained MoE design (23B active) optimized with **Multi-Token Prediction (MTP)**, enabling self-speculative decoding that boosts inference throughput by approximately 1.5x. * **Long-Context Capabilities:** Natively supports a **256K context window**, utilizing a **3:1 hybrid attention** scheme with a **128-token sliding window** to significantly minimize memory usage during long-document processing. * **Multilingual Support:** Covers 6 languages: Korean, English, Spanish, German, Japanese, and Vietnamese. Features a redesigned **150k vocabulary** with **SuperBPE**, improving token efficiency by \~30%. * **Agentic Capabilities:** Demonstrates superior tool-use and search capabilities via **multi-agent strategies.** * **Safety & Ethics:** Aligned with **universal human values**, the model uniquely incorporates **Korean cultural and historical contexts** to address regional sensitivities often overlooked by other models. It demonstrates high reliability across diverse risk categories. For more details, please refer to the [technical report](https://huggingface.co/LGAI-EXAONE/K-EXAONE-236B-A23B#). # [](https://huggingface.co/LGAI-EXAONE/K-EXAONE-236B-A23B#model-configuration) # Model Configuration * Number of Parameters: 236B in total and 23B activated * Number of Parameters (without embeddings): 234B * Hidden Dimension: 6,144 * Number of Layers: 48 Main layers + 1 MTP layers * Hybrid Attention Pattern: 12 x (3 Sliding window attention + 1 Global attention) * Sliding Window Attention * Number of Attention Heads: 64 Q-heads and 8 KV-heads * Head Dimension: 128 for both Q/KV * Sliding Window Size: 128 * Global Attention * Number of Attention Heads: 64 Q-heads and 8 KV-heads * Head Dimension: 128 for both Q/KV * No Rotary Positional Embedding Used (NoPE) * Mixture of Experts: * Number of Experts: 128 * Number of Activated Experts: 8 * Number of Shared Experts: 1 * MoE Intermediate Size: 2,048 * Vocab Size: 153,600 * Context Length: 262,144 tokens * Knowledge Cutoff: Dec 2024 (2024/12)
2025-12-31T11:06:30
https://huggingface.co/LGAI-EXAONE/K-EXAONE-236B-A23B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1q0aj2o
false
null
t3_1q0aj2o
/r/LocalLLaMA/comments/1q0aj2o/lgaiexaonekexaone236ba23b_hugging_face/
false
false
https://external-preview…19ae0375a85e1c18
82
{'enabled': False, 'images': [{'id': '9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=108&crop=smart&auto=webp&s=7130423f6689c17372bb513aa4861371447d25f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=216&crop=smart&auto=webp&s=79af8add41e9a233f6ee1f50b31ee3903ab4c6d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=320&crop=smart&auto=webp&s=b875166137fe475e0e8141ef87678f8aa4840069', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=640&crop=smart&auto=webp&s=a2373a9e59ba6c01bea83c9849f8b68958239cf0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=960&crop=smart&auto=webp&s=7601ca7abe1ebd5686d8a8eed5ef9309e058c198', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?width=1080&crop=smart&auto=webp&s=c2c0370b9279db1ade1e8c13924f6a0ab82e3297', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9yRidQD6qePtlR5IIS0obyCIBcG3P371nr_MudwlERc.png?auto=webp&s=ad063e89ffd34793c35dee181e228153d3505a94', 'width': 1200}, 'variants': {}}]}
tencent/Youtu-LLM-2B · Hugging Face
98
# 🎯 Brief Introduction **Youtu-LLM** is a new, small, yet powerful LLM, contains only 1.96B parameters, supports 128k long context, and has native agentic talents. On general evaluations, Youtu-LLM significantly outperforms SOTA LLMs of similar size in terms of Commonsense, STEM, Coding and Long Context capabilities; in agent-related testing, Youtu-LLM surpasses larger-sized leaders and is truly capable of completing multiple end2end agent tasks. **Youtu-LLM** has the following features: * Type: Autoregressive Causal Language Models with Dense [MLA](https://arxiv.org/abs/2405.04434) * Release versions: [Base](https://huggingface.co/tencent/Youtu-LLM-2B-Base) and [Instruct](https://huggingface.co/tencent/Youtu-LLM-2B) * Number of Parameters: 1.96B * Number of Layers: 32 * Number of Attention Heads (MLA): 16 for Q/K/V * MLA Rank: 1,536 for Q, 512 for K/V * MLA Dim: 128 for QK Nope, 64 for QK Rope, and 128 for V * Context Length: 131,072 * Vocabulary Size: 128,256 probably there will be more because [https://github.com/ggml-org/llama.cpp/pull/18479](https://github.com/ggml-org/llama.cpp/pull/18479)
2025-12-31T11:04:58
https://huggingface.co/tencent/Youtu-LLM-2B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1q0ai5z
false
null
t3_1q0ai5z
/r/LocalLLaMA/comments/1q0ai5z/tencentyoutullm2b_hugging_face/
false
false
default
98
{'enabled': False, 'images': [{'id': 'TFPimi2e9oXAXq7hyzOyZGRANeYzlo2Z_aJHDiioWd0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TFPimi2e9oXAXq7hyzOyZGRANeYzlo2Z_aJHDiioWd0.png?width=108&crop=smart&auto=webp&s=00b3dd15919687e997213408a17caf38ba91e037', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TFPimi2e9oXAXq7hyzOyZGRANeYzlo2Z_aJHDiioWd0.png?width=216&crop=smart&auto=webp&s=6c3e354bb2f466bd2f4c74dc13d83a068ea8e8cf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TFPimi2e9oXAXq7hyzOyZGRANeYzlo2Z_aJHDiioWd0.png?width=320&crop=smart&auto=webp&s=1e03b6ecb3c5bb82ffb6555bc7cdc585d10b7e41', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TFPimi2e9oXAXq7hyzOyZGRANeYzlo2Z_aJHDiioWd0.png?width=640&crop=smart&auto=webp&s=aa098fcddc24e4ec07367f54537e630f0ccd845b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TFPimi2e9oXAXq7hyzOyZGRANeYzlo2Z_aJHDiioWd0.png?width=960&crop=smart&auto=webp&s=f13689f3ddaf9f9d3e7c60564eabb506716f6f52', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TFPimi2e9oXAXq7hyzOyZGRANeYzlo2Z_aJHDiioWd0.png?width=1080&crop=smart&auto=webp&s=55401472fa19694cc091c67a1e9233eef4d3b517', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TFPimi2e9oXAXq7hyzOyZGRANeYzlo2Z_aJHDiioWd0.png?auto=webp&s=f8bdf1c49d085a5d55e8a614efcec836475e7444', 'width': 1200}, 'variants': {}}]}
Full Qwen 70b model system requirements
1
Hello everyone, I will soon have access to some sort of super computer and I plan to run full Qwen 70b model and I was wondering what are recommended system requirements to run that model? Thanks!
2025-12-31T10:52:43
https://www.reddit.com/r/LocalLLaMA/comments/1q0aatg/full_qwen_70b_model_system_requirements/
Plane_Chemistry9042
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q0aatg
false
null
t3_1q0aatg
/r/LocalLLaMA/comments/1q0aatg/full_qwen_70b_model_system_requirements/
false
false
self
1
null
Jan released a new interleaved reasoning model
51
Model link: [https://huggingface.co/janhq/Jan-v2-VL-max-Instruct-FP8](https://huggingface.co/janhq/Jan-v2-VL-max-Instruct-FP8) Demo link: [chat.jan.ai](http://chat.jan.ai) X: [https://x.com/jandotai/status/2006264430594765006](https://x.com/jandotai/status/2006264430594765006)
2025-12-31T10:43:10
https://v.redd.it/kt28ytqroiag1
Delicious_Focus3465
v.redd.it
1970-01-01T00:00:00
0
{}
1q0a59o
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kt28ytqroiag1/DASHPlaylist.mpd?a=1769769802%2CM2NiODBiNmM4NjY2ZDQ2ZGI1MDgyYTc4ZTQwZjFkMDk3NWMxODc4OTkyZGVhMDczOWQ3NmFjZDQyNTliMzg2ZA%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/kt28ytqroiag1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/kt28ytqroiag1/HLSPlaylist.m3u8?a=1769769802%2CNzNiZThkMmU0MTljZWE2ZTgwNjE1YmMxMGM4YThlOTBlZTIzMTUxMjIyN2E3OTU5Y2UwMTg1ZTkzYjJjMmQyNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kt28ytqroiag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1726}}
t3_1q0a59o
/r/LocalLLaMA/comments/1q0a59o/jan_released_a_new_interleaved_reasoning_model/
false
false
https://external-preview…a5572bb719663c16
51
{'enabled': False, 'images': [{'id': 'bmFzMjZ5cXJvaWFnMZ4ZY35DYn0QpQjqCluSAvoB9RJjmjU4JurykMdLAOoq', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/bmFzMjZ5cXJvaWFnMZ4ZY35DYn0QpQjqCluSAvoB9RJjmjU4JurykMdLAOoq.png?width=108&crop=smart&format=pjpg&auto=webp&s=b838cacdbf58b35fbc223258371ee830af9437b9', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/bmFzMjZ5cXJvaWFnMZ4ZY35DYn0QpQjqCluSAvoB9RJjmjU4JurykMdLAOoq.png?width=216&crop=smart&format=pjpg&auto=webp&s=820fd21bf126b5c0f271d3e20012e10b077db33e', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/bmFzMjZ5cXJvaWFnMZ4ZY35DYn0QpQjqCluSAvoB9RJjmjU4JurykMdLAOoq.png?width=320&crop=smart&format=pjpg&auto=webp&s=a59147968f7ddc82128ae1ac12c862b9895f7f52', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/bmFzMjZ5cXJvaWFnMZ4ZY35DYn0QpQjqCluSAvoB9RJjmjU4JurykMdLAOoq.png?width=640&crop=smart&format=pjpg&auto=webp&s=a420858dd3243d96804497b67e5a8506f7a6b545', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/bmFzMjZ5cXJvaWFnMZ4ZY35DYn0QpQjqCluSAvoB9RJjmjU4JurykMdLAOoq.png?width=960&crop=smart&format=pjpg&auto=webp&s=e15955141e12673bc567c6b69036e3a068bd108d', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/bmFzMjZ5cXJvaWFnMZ4ZY35DYn0QpQjqCluSAvoB9RJjmjU4JurykMdLAOoq.png?width=1080&crop=smart&format=pjpg&auto=webp&s=87d781c376f5050c66b2a881a8bdfa5bf691ea44', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bmFzMjZ5cXJvaWFnMZ4ZY35DYn0QpQjqCluSAvoB9RJjmjU4JurykMdLAOoq.png?format=pjpg&auto=webp&s=86810a5f9f50329338b66ab8eb71eff0387e0aed', 'width': 1726}, 'variants': {}}]}
🛑 Stop building Evolve Agents based on luck.
0
Let's be real: Frameworks like **OpenEvolve** are essentially "brute-force guessing". It’s inefficient, expensive, and frankly, obsolete. We built **LoongFlow** to kill the random walk. It injects a **Cognitive Core (Plan-Execute-Summarize)** into the evolutionary loop. The result? 🚀 **The "Cognitive Ceiling" is shattered.** 🥇 14 **Kaggle Gold Medals** (Zero human intervention). 📉 **1/20th the compute cost** of OpenEvolve. If your agent isn't thinking before it mutates, it's just gambling. We are open-sourcing the future of AGI Evolution today. 👇 [https://github.com/baidu-baige/LoongFlow](https://github.com/baidu-baige/LoongFlow)
2025-12-31T10:05:52
https://www.reddit.com/r/LocalLLaMA/comments/1q09jlt/stop_building_evolve_agents_based_on_luck/
FreshmanDD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q09jlt
false
null
t3_1q09jlt
/r/LocalLLaMA/comments/1q09jlt/stop_building_evolve_agents_based_on_luck/
false
false
self
0
{'enabled': False, 'images': [{'id': '5XySmA3kThmPvqUgY4W9utALj30Q0H7lyXsc6o_z_bk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5XySmA3kThmPvqUgY4W9utALj30Q0H7lyXsc6o_z_bk.png?width=108&crop=smart&auto=webp&s=fc128ab1869aca41dc7152fc9647b1159c29a7ca', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5XySmA3kThmPvqUgY4W9utALj30Q0H7lyXsc6o_z_bk.png?width=216&crop=smart&auto=webp&s=95aeff0a16e4ccbfdfc1eaf34693f970272e1797', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5XySmA3kThmPvqUgY4W9utALj30Q0H7lyXsc6o_z_bk.png?width=320&crop=smart&auto=webp&s=c0b915069b658b65fc269c91190647c247dcebae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5XySmA3kThmPvqUgY4W9utALj30Q0H7lyXsc6o_z_bk.png?width=640&crop=smart&auto=webp&s=f080b44aa412a7143bf9cf3da46deb33eee304af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5XySmA3kThmPvqUgY4W9utALj30Q0H7lyXsc6o_z_bk.png?width=960&crop=smart&auto=webp&s=a15a5cb0475ac5ab5e6e6b8581378e1982f1e88f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5XySmA3kThmPvqUgY4W9utALj30Q0H7lyXsc6o_z_bk.png?width=1080&crop=smart&auto=webp&s=7e70cb9a2dd76333f93a8e1ee836d2348d2c9b83', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5XySmA3kThmPvqUgY4W9utALj30Q0H7lyXsc6o_z_bk.png?auto=webp&s=f3ec8dc0c6f920cf8b3fe0df79900495cb17bf8c', 'width': 1200}, 'variants': {}}]}
LLM artificial analysis AI index score plotted against toral param count
9
thank you opus (moral of story dont trust intelligence benchmark to indicate coding performance)
2025-12-31T10:01:18
https://i.redd.it/fcqkapj9jiag1.png
celestrogen
i.redd.it
1970-01-01T00:00:00
0
{}
1q09gzd
false
null
t3_1q09gzd
/r/LocalLLaMA/comments/1q09gzd/llm_artificial_analysis_ai_index_score_plotted/
false
false
default
9
{'enabled': True, 'images': [{'id': 'fcqkapj9jiag1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/fcqkapj9jiag1.png?width=108&crop=smart&auto=webp&s=b2a0dd4cdb1bd57f9628c922453948100ca6c9be', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/fcqkapj9jiag1.png?width=216&crop=smart&auto=webp&s=14b4e8e7b8e5082a24b2b4a3b2d47a9261d84a9a', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/fcqkapj9jiag1.png?width=320&crop=smart&auto=webp&s=4f596bc43e9d7c2bb8c564d5729bd17468b86bc9', 'width': 320}, {'height': 454, 'url': 'https://preview.redd.it/fcqkapj9jiag1.png?width=640&crop=smart&auto=webp&s=66186d1bbfdbdeca3d8807f6d4d70d594e881b45', 'width': 640}, {'height': 681, 'url': 'https://preview.redd.it/fcqkapj9jiag1.png?width=960&crop=smart&auto=webp&s=64b6dbd322254b0ecf40770b8ee4a4070a8354f4', 'width': 960}, {'height': 767, 'url': 'https://preview.redd.it/fcqkapj9jiag1.png?width=1080&crop=smart&auto=webp&s=6b050557b810a772292a984d2dd9432e56d1f986', 'width': 1080}], 'source': {'height': 1479, 'url': 'https://preview.redd.it/fcqkapj9jiag1.png?auto=webp&s=a9061c7d71e09addc037642c33d8ccceb60d184f', 'width': 2082}, 'variants': {}}]}
Qwen-Image-2512
648
Unsloth: Guide: [https://unsloth.ai/docs/models/qwen-image-2512](https://unsloth.ai/docs/models/qwen-image-2512) GGUF: [https://huggingface.co/unsloth/Qwen-Image-2512-GGUF](https://huggingface.co/unsloth/Qwen-Image-2512-GGUF) \----------------- 👉 Try it now in Qwen Chat: [https://chat.qwen.ai/?inputFeature=t2i](https://chat.qwen.ai/?inputFeature=t2i) 🤗 Hugging Face: [https://huggingface.co/Qwen/Qwen-Image-2512](https://huggingface.co/Qwen/Qwen-Image-2512) 📦 ModelScope: [https://modelscope.ai/models/Qwen/Qwen-Image-2512](https://modelscope.ai/models/Qwen/Qwen-Image-2512) 💻 GitHub: [https://github.com/QwenLM/Qwen-Image](https://github.com/QwenLM/Qwen-Image) 📝 Blog: [https://qwen.ai/blog?id=qwen-image-2512](https://qwen.ai/blog?id=qwen-image-2512) 🤗 Hugging Face Demo: [https://huggingface.co/spaces/Qwen/Qwen-Image-2512](https://huggingface.co/spaces/Qwen/Qwen-Image-2512) 📦 ModelScope Demo: [https://modelscope.cn/aigc/imageGeneration](https://modelscope.cn/aigc/imageGeneration) ✨API: [https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2840914\_2&modelId=group-qwen-image-max](https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2840914_2&modelId=group-qwen-image-max)
2025-12-31T09:38:19
https://i.redd.it/2vlr11yveiag1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1q094a3
false
null
t3_1q094a3
/r/LocalLLaMA/comments/1q094a3/qwenimage2512/
false
false
default
648
{'enabled': True, 'images': [{'id': '2vlr11yveiag1', 'resolutions': [{'height': 119, 'url': 'https://preview.redd.it/2vlr11yveiag1.jpeg?width=108&crop=smart&auto=webp&s=f5c9b3d07326bcf503a2611e3dee704c42e6eab6', 'width': 108}, {'height': 238, 'url': 'https://preview.redd.it/2vlr11yveiag1.jpeg?width=216&crop=smart&auto=webp&s=26a07c8328cc5b31c6264abd6e7eebf6efea8033', 'width': 216}, {'height': 352, 'url': 'https://preview.redd.it/2vlr11yveiag1.jpeg?width=320&crop=smart&auto=webp&s=17b6eaf0a61255114d451e5048ecfeca540e318a', 'width': 320}, {'height': 705, 'url': 'https://preview.redd.it/2vlr11yveiag1.jpeg?width=640&crop=smart&auto=webp&s=8c1e7a0b7ea834c8babae002078a848096514e1b', 'width': 640}, {'height': 1058, 'url': 'https://preview.redd.it/2vlr11yveiag1.jpeg?width=960&crop=smart&auto=webp&s=edb70086b58aaf06d6cb8127a66962ffb31af3a2', 'width': 960}, {'height': 1190, 'url': 'https://preview.redd.it/2vlr11yveiag1.jpeg?width=1080&crop=smart&auto=webp&s=beb65c5024e26e0f309116eff6542ec0572ae983', 'width': 1080}], 'source': {'height': 2822, 'url': 'https://preview.redd.it/2vlr11yveiag1.jpeg?auto=webp&s=28bb2f3118cbbef981eb2066087e1e1452e44e04', 'width': 2560}, 'variants': {}}]}
Qwen released Qwen-Image-2512 on Hugging face. Qwen-Image-2512 is currently the strongest open-source model.
99
Hugging face: [https://huggingface.co/Qwen/Qwen-Image-2512](https://huggingface.co/Qwen/Qwen-Image-2512) What’s new: • More realistic humans — dramatically reduced “AI look,” richer facial details • Finer natural textures — sharper landscapes, water, fur, and materials • Stronger text rendering — better layout, higher accuracy in text–image composition Tested in 10,000+ blind rounds on AI Arena, Qwen-Image-2512 ranks as the strongest open-source image model, while staying competitive with closed-source systems.
2025-12-31T09:36:58
https://www.reddit.com/gallery/1q093ka
Difficult-Cap-7527
reddit.com
1970-01-01T00:00:00
0
{}
1q093ka
false
null
t3_1q093ka
/r/LocalLLaMA/comments/1q093ka/qwen_released_qwenimage2512_on_hugging_face/
false
false
https://b.thumbs.redditm…cwmaQB7kNgnI.jpg
99
null
Is speed penalty worth 60% cost savings for LLM fine tuning ?
1
[removed]
2025-12-31T09:29:46
https://www.reddit.com/r/LocalLLaMA/comments/1q08zi4/is_speed_penalty_worth_60_cost_savings_for_llm/
Long-Buy5501
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q08zi4
false
null
t3_1q08zi4
/r/LocalLLaMA/comments/1q08zi4/is_speed_penalty_worth_60_cost_savings_for_llm/
false
false
self
1
null
What model can I run on the RX580?
0
Hello, can I upload anything locally on this graphic?
2025-12-31T09:28:49
https://www.reddit.com/r/LocalLLaMA/comments/1q08z0e/what_model_can_i_run_on_the_rx580/
Pretend-Fee-1222
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q08z0e
false
null
t3_1q08z0e
/r/LocalLLaMA/comments/1q08z0e/what_model_can_i_run_on_the_rx580/
false
false
self
0
null
Fine tuning: AWS H100 vs Distributed 4090s (WAN)
1
[removed]
2025-12-31T09:23:01
https://www.reddit.com/r/LocalLLaMA/comments/1q08vt4/fine_tuning_aws_h100_vs_distributed_4090s_wan/
Desperate_One2416
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q08vt4
false
null
t3_1q08vt4
/r/LocalLLaMA/comments/1q08vt4/fine_tuning_aws_h100_vs_distributed_4090s_wan/
false
false
self
1
null
[llama-server] Massive prefill cliff (2500 t/s → 150 t/s) with eGPU split. Is TB4 latency the killer?
2
Hi everyone, I'm seeing a massive performance cliff in prompt processing (prefill) when moving from a single GPU to a dual-GPU split in \`llama-server\` (llama.cpp), and I'm trying to understand why the overhead is so extreme for what should be simple layer splitting. \*\*The Hardware\*\* \* \*\*Internal:\*\* RTX 5060 Ti 16GB (Blackwell) @ PCIe Gen 3 x8 \* \*\*External:\*\* RTX 3090 24GB (Blower) @ Thunderbolt 4 (eGPU) \*\*The Performance Gap (2.7k Token Prompt)\*\* \* \*\*Single GPU\*\* (3090 only, Q4 Quant): \*\*\~2500 t/s prefill\*\* \* \*\*Dual GPU\*\* (Split, Q6 Quant): \*\*\~150 t/s prefill\*\* \*\*The Mystery\*\* Since \`llama.cpp\` uses layer splitting, it should only be passing activation tensors across the bus between layers. Even accounting for Thunderbolt 4's bandwidth limitations, a drop from 2500 t/s to 150 t/s (a 94% loss) seems way beyond what simple activation transfers should cause for a 2.7k token prompt. Is \`llama-server\` performing excessive synchronization or host-memory roundtrips during the prefill phase that kills performance on high-latency/lower-bandwidth links like TB4? \*\*The Commands\*\* \*\*Single GPU 3090 (Nemotron-3-Nano-30B Q4)\*\* \`\`\`bash /app/llama-server \\ \-hf unsloth/Nemotron-3-Nano-30B-A3B-GGUF:Q4\_K\_XL \\ \--port ${PORT} \\ \--ctx-size 98304 \\ \--flash-attn auto \\ \--n-gpu-layers 99 \\ \--cache-type-k f16 \\ \--cache-type-v f16 \`\`\` \*\*Split GPU 3090 and 5060ti (Nemotron-3-Nano-30B Q6)\*\* \`\`\`bash /app/llama-server \\ \-hf unsloth/Nemotron-3-Nano-30B-A3B-GGUF:Q6\_K\_XL \\ \--port ${PORT} \\ \--ctx-size 0 \\ \--flash-attn auto \\ \--n-gpu-layers 99 \\ \--tensor-split 24,10 \\ \--ubatch-size 2048 \\ \--cache-type-k f16 \\ \--cache-type-v f16 \`\`\` \*\*Oculink Upgrade?\*\* I have an M.2 Oculink adapter on hand but haven't installed it yet. Does anyone have experience with whether the lower latency of a direct Oculink connection fixes this specific "prefill death" in llama.cpp, or is this a known scaling issue when splitting across any non-uniform bus? Would love to hear if anyone has insights on tuning the handoff or if there are specific flags to reduce the synchronization overhead during the prefill pass. Thanks
2025-12-31T08:56:20
https://www.reddit.com/r/LocalLLaMA/comments/1q08h2t/llamaserver_massive_prefill_cliff_2500_ts_150_ts/
danishkirel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q08h2t
false
null
t3_1q08h2t
/r/LocalLLaMA/comments/1q08h2t/llamaserver_massive_prefill_cliff_2500_ts_150_ts/
false
false
self
2
null
ASUS Ascent GX10
0
https://preview.redd.it/…est performance?
2025-12-31T08:35:32
https://www.reddit.com/r/LocalLLaMA/comments/1q085eb/asus_ascent_gx10/
hsperus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q085eb
false
null
t3_1q085eb
/r/LocalLLaMA/comments/1q085eb/asus_ascent_gx10/
false
false
https://b.thumbs.redditm…1bOi9Qxr2tnE.jpg
0
null
Are Multi-Agent AI “Dev Teams” Actually Useful in Real Work?
3
I’ve seen a lot of people build multi-agent systems where each agent takes on a role and together they form a “full” software development team. I’m honestly a bit skeptical about how practical this is. I do see the value of sub-agents for specific, scoped tasks like context management. For example, an exploration agent can filter out irrelevant files so the main agent doesn’t have to read everything. That kind of division makes sense to me. But an end-to-end pipeline where you give the system a raw idea and it turns it into a PRD, then plans, builds, tests, and ships the whole thing… that feels a bit too good to be true. From my experience, simply assigning a “personality” or title to an LLM doesn’t help much. Prompts like “you are an expert software engineer” or “you are a software architect” still largely depend on the base capability of the model being used. If the LLM is already strong, it can usually do the task without needing to “pretend” to be someone. So I’m curious how much of the multi-agent setup is actually pulling its weight versus just adding structure on top of a capable model. Does this actually work in real-world settings? Is anyone using something like this in their day-to-day job, not just hobby or side projects? If so, I’d love to hear what your experience has been like.
2025-12-31T08:04:22
https://www.reddit.com/r/LocalLLaMA/comments/1q07nzj/are_multiagent_ai_dev_teams_actually_useful_in/
skyline159
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q07nzj
false
null
t3_1q07nzj
/r/LocalLLaMA/comments/1q07nzj/are_multiagent_ai_dev_teams_actually_useful_in/
false
false
self
3
null
Built an MCP Server for Andrej Karpathy's LLM Council
4
I took Andrej Karpathy's [llm-council](https://github.com/karpathy/llm-council) project and added Model Context Protocol (MCP) support, so you can now use multi-LLM deliberation directly in Claude Desktop, VS Code, or any MCP client. Now instead of using the web UI, just ask Claude: *"Use council\_query to answer: What is consciousness?"* and get the full 3-stage deliberation (individual responses → peer rankings → synthesis) in \~60s. **My work:** [https://github.com/khuynh22/llm-council/tree/master](https://github.com/khuynh22/llm-council/tree/master) **PR to upstream:** [https://github.com/karpathy/llm-council/pull/116](https://github.com/karpathy/llm-council/pull/116)
2025-12-31T07:57:35
https://www.reddit.com/r/LocalLLaMA/comments/1q07jrt/built_an_mcp_server_for_andrej_karpathys_llm/
NeitherRun3631
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q07jrt
false
null
t3_1q07jrt
/r/LocalLLaMA/comments/1q07jrt/built_an_mcp_server_for_andrej_karpathys_llm/
false
false
self
4
{'enabled': False, 'images': [{'id': 'ZF6dCaQ-Iv0WWsWSv9Fw0m2U0k0hJuWmoiHWoC9FOto', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZF6dCaQ-Iv0WWsWSv9Fw0m2U0k0hJuWmoiHWoC9FOto.png?width=108&crop=smart&auto=webp&s=ccb94e986c4d160c45c9ddee8c685254a8cc5f45', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZF6dCaQ-Iv0WWsWSv9Fw0m2U0k0hJuWmoiHWoC9FOto.png?width=216&crop=smart&auto=webp&s=71bbff013ff712532162288328438edaf00d9a76', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZF6dCaQ-Iv0WWsWSv9Fw0m2U0k0hJuWmoiHWoC9FOto.png?width=320&crop=smart&auto=webp&s=fddfdf7bf5c68e5d9e1e5b8d9b04fb0d79ffcd1e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZF6dCaQ-Iv0WWsWSv9Fw0m2U0k0hJuWmoiHWoC9FOto.png?width=640&crop=smart&auto=webp&s=eeef4713b7e247eed62cd36431b53364d2bdd4ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZF6dCaQ-Iv0WWsWSv9Fw0m2U0k0hJuWmoiHWoC9FOto.png?width=960&crop=smart&auto=webp&s=a581059f62e400c31d7ce12e2dd327cfd01c8340', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZF6dCaQ-Iv0WWsWSv9Fw0m2U0k0hJuWmoiHWoC9FOto.png?width=1080&crop=smart&auto=webp&s=771dba22272500ad8b5487043f5cbab57733acce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZF6dCaQ-Iv0WWsWSv9Fw0m2U0k0hJuWmoiHWoC9FOto.png?auto=webp&s=725b5c01db87b3020ce5809c0a09971671adb718', 'width': 1200}, 'variants': {}}]}
We open-sourced LLMRouter: the first unified LLM routing library with 300+ stars in 24h
50
Hi everyone, We are a CS research team from UIUC, and we recently open-sourced LLMRouter, the first unified open-source library that integrates major LLM routing algorithms and scenarios. The project received 300+ GitHub stars within 24 hours, and the announcement reached nearly 100k views on Twitter, which suggests this is a pain point shared by many researchers and practitioners. Why LLMRouter? The current LLM routing landscape feels a lot like early GNN research: many promising router algorithms exist, but each comes with its own input/output format, training pipeline, and evaluation setup. This fragmentation makes routers difficult to use, hard to reproduce, and nearly impossible to compare fairly. Over the past year, we worked on several LLM routing projects, including GraphRouter (ICLR’25), Router-R1 (NeurIPS’25), and PersonalizedRouter (TMLR’25). Through repeatedly implementing and benchmarking different routers, we realized that the main bottleneck is not algorithmic novelty, but the lack of standardized infrastructure. What LLMRouter provides: 1. Unified support for single-round, multi-round, agentic, and personalized routing 2. Integration of 16+ SOTA LLM router algorithms 3. One-line commands to run different routers without rebuilding pipelines 4. Built-in benchmarking with extensible custom routers, tasks, and metrics In practice, LLMRouter can help reduce LLM API costs by \~30–50% through intelligent model routing, while maintaining overall performance. Our goal is for LLMRouter to play a role similar to PyG for GNNs — a shared, extensible foundation for LLM routing research and applications. GitHub: [https://github.com/ulab-uiuc/LLMRouter](https://github.com/ulab-uiuc/LLMRouter) Project page: [https://ulab-uiuc.github.io/LLMRouter/](https://ulab-uiuc.github.io/LLMRouter/) We would love feedback, issues, and contributions from the community. If you find it useful, a GitHub star would really help us keep improving it 🙏
2025-12-31T07:21:24
https://www.reddit.com/r/LocalLLaMA/comments/1q06z2l/we_opensourced_llmrouter_the_first_unified_llm/
AlexiosLin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q06z2l
false
null
t3_1q06z2l
/r/LocalLLaMA/comments/1q06z2l/we_opensourced_llmrouter_the_first_unified_llm/
false
false
self
50
{'enabled': False, 'images': [{'id': '7neqYe-MSk7s9Taw25EsLbSiqYkj23-vM7Y2NgcTt6g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7neqYe-MSk7s9Taw25EsLbSiqYkj23-vM7Y2NgcTt6g.png?width=108&crop=smart&auto=webp&s=6b395167a8f426ad2278b44655e58904c05b2136', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7neqYe-MSk7s9Taw25EsLbSiqYkj23-vM7Y2NgcTt6g.png?width=216&crop=smart&auto=webp&s=1a3ec6065b3ab8b4c07d214459298034bcd13477', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7neqYe-MSk7s9Taw25EsLbSiqYkj23-vM7Y2NgcTt6g.png?width=320&crop=smart&auto=webp&s=8cc5219733f795fe1b946b03ea200267a27cb2aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7neqYe-MSk7s9Taw25EsLbSiqYkj23-vM7Y2NgcTt6g.png?width=640&crop=smart&auto=webp&s=54c9a0bea1b479e302de873c56cd3a34285d3e3f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7neqYe-MSk7s9Taw25EsLbSiqYkj23-vM7Y2NgcTt6g.png?width=960&crop=smart&auto=webp&s=856dafd0b80bf798d8576b2251077751a6ccbc00', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7neqYe-MSk7s9Taw25EsLbSiqYkj23-vM7Y2NgcTt6g.png?width=1080&crop=smart&auto=webp&s=6199998e1c24254f5a367007f644ec5df3be9f5b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7neqYe-MSk7s9Taw25EsLbSiqYkj23-vM7Y2NgcTt6g.png?auto=webp&s=256b44afe5a54b915e7e7a9911d963c3ae2da630', 'width': 1200}, 'variants': {}}]}
Sam Audio
1
Hi everyone. Recently the company I work for purchased this ASUS DGX Spark based PC. https://www.asus.com/networking-iot-servers/desktop-ai-supercomputer/ultra-small-ai-supercomputers/asus-ascent-gx10/. I was asked to install SAM Audio on it. I have previously run it on other servers without any issues. But now I am encountering problems related to ARM64 wheels. I suspect that some dependencies may not be ARM compatible. But I am not completely sure. I am open to any suggestions or advice.
2025-12-31T07:12:03
https://i.redd.it/ktj843w6phag1.jpeg
hsperus
i.redd.it
1970-01-01T00:00:00
0
{}
1q06tjq
false
null
t3_1q06tjq
/r/LocalLLaMA/comments/1q06tjq/sam_audio/
false
false
default
1
{'enabled': True, 'images': [{'id': 'ktj843w6phag1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/ktj843w6phag1.jpeg?width=108&crop=smart&auto=webp&s=d80207940bd704652bb46d56e8d74c8be993585b', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/ktj843w6phag1.jpeg?width=216&crop=smart&auto=webp&s=4f8a1dcf7ab55e5ed671496dea5ef38197f84bab', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/ktj843w6phag1.jpeg?width=320&crop=smart&auto=webp&s=696f6d356cdaa62ac084c928106074e610ef1246', 'width': 320}, {'height': 408, 'url': 'https://preview.redd.it/ktj843w6phag1.jpeg?width=640&crop=smart&auto=webp&s=3c7ae77f0ea1c36959c0bc4fdff870ec7bb4fe2e', 'width': 640}, {'height': 612, 'url': 'https://preview.redd.it/ktj843w6phag1.jpeg?width=960&crop=smart&auto=webp&s=f1943fc8bc4fc37a06c60815836354813868e137', 'width': 960}, {'height': 688, 'url': 'https://preview.redd.it/ktj843w6phag1.jpeg?width=1080&crop=smart&auto=webp&s=3148de24c85bb0679a548e90d93eaf3cf56c41ab', 'width': 1080}], 'source': {'height': 2571, 'url': 'https://preview.redd.it/ktj843w6phag1.jpeg?auto=webp&s=298f9b7ca18b3a563e0f32d68aef8bbcc399a125', 'width': 4032}, 'variants': {}}]}
Inference using exo on mac + dec cluster?
0
I read on the exo lab blog that you can achieve “even higher” inference speeds using DGX spark together with m3 ultra(s) cluster. However I did not find any benchmarks. Has anyone tried this or run benchmarks themselves? Exo doesn’t only work on the ultra but also on m4 pro and m4 max and likely also on m5’s to come. I’m wondering what kind of inference speeds such clusters might realise for large SOTA MoE’s (Kimi, deepseek, …) that are currently practically impossible to run.
2025-12-31T06:45:47
https://www.reddit.com/r/LocalLLaMA/comments/1q06df6/inference_using_exo_on_mac_dec_cluster/
EternalOptimister
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q06df6
false
null
t3_1q06df6
/r/LocalLLaMA/comments/1q06df6/inference_using_exo_on_mac_dec_cluster/
false
false
self
0
null
Update on the Llama 3.3 8B situation
247
Hello! You may remember me as either - The person [who recently uploaded L3.3 8B's weights to Huggingface](https://www.reddit.com/r/LocalLLaMA/comments/1pz7bmv/llama338binstruct/) (see this post for more context) - That stupid bitch and I would like to provide some updates, as I've been doing some more benchmarks on both the original version that Meta gave me and the context extended version by u/Few-Welcome3297. The main benchmark table from the model README has been updated: | | Llama 3.1 8B Instruct | Llama 3.3 8B Instruct (original 8k config) | Llama 3.3 8B Instruct (128k config) |-|-|-|-| |IFEval (1 epoch, score avged across all strict/loose instruction/prompt accuracies to follow Llama 3 paper)|78.2|81.95|**84.775** |GPQA Diamond (3 epochs)|29.3|37.0|**37.5** |Tau-Bench Airline (1 epoch, GPT-4.1 as user)|28.0\*|N/A (tau cannot be ran at 8k context)|**36.0*** While I'm not 100% sure, I'm... pretty sure that the 128k model is better. Why Facebook gave me the weights with the original L3 config and 8k context, and also *serves* the weights with the original L3 config and 8k context, I have absolutely no idea! Anyways, if you want to try the model, I would recommend trying both the [128k version](https://huggingface.co/shb777/Llama-3.3-8B-Instruct), as well as my [original version](https://huggingface.co/allura-forge/Llama-3.3-8B-Instruct) if your task supports 8k context lengths. I honestly have absolutely no clue which is more correct, but oh well! I do wish Facebook had released the weights officially, because back in April, this really wouldn't have been that bad of a model...
2025-12-31T06:45:42
https://www.reddit.com/r/LocalLLaMA/comments/1q06ddc/update_on_the_llama_33_8b_situation/
FizzarolliAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q06ddc
false
null
t3_1q06ddc
/r/LocalLLaMA/comments/1q06ddc/update_on_the_llama_33_8b_situation/
false
false
self
247
{'enabled': False, 'images': [{'id': 'xWt1_FiDpJXJEvJKrpjnwb4HvJqj-7KV1YSkGNL4d6M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xWt1_FiDpJXJEvJKrpjnwb4HvJqj-7KV1YSkGNL4d6M.png?width=108&crop=smart&auto=webp&s=2ff8c539256fd40534623f63c4cfafd12877a7b2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xWt1_FiDpJXJEvJKrpjnwb4HvJqj-7KV1YSkGNL4d6M.png?width=216&crop=smart&auto=webp&s=cfb3bb6df4b03cb099c3ab09cbd6b40f7eb4bbfd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xWt1_FiDpJXJEvJKrpjnwb4HvJqj-7KV1YSkGNL4d6M.png?width=320&crop=smart&auto=webp&s=cac43fa26bf7c1a6bb78c75ced7e09d249b18965', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xWt1_FiDpJXJEvJKrpjnwb4HvJqj-7KV1YSkGNL4d6M.png?width=640&crop=smart&auto=webp&s=ee06e45a2f3ccbcf208528c97fc8214b9592f113', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xWt1_FiDpJXJEvJKrpjnwb4HvJqj-7KV1YSkGNL4d6M.png?width=960&crop=smart&auto=webp&s=6200f23430d6c563ad1840446ab613ea7649c957', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xWt1_FiDpJXJEvJKrpjnwb4HvJqj-7KV1YSkGNL4d6M.png?width=1080&crop=smart&auto=webp&s=3608b308d4b4d5c33ebedd9dbb7c347b7a907457', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xWt1_FiDpJXJEvJKrpjnwb4HvJqj-7KV1YSkGNL4d6M.png?auto=webp&s=9cedbf4d0ab3bab6fc698799d4a834942dd5d446', 'width': 1200}, 'variants': {}}]}
my HOPE Replica(from Nested Learning) achieved negative forgetting on SplitMNIST(Task IL)
7
i know this isn't a Local LLM related but this is shocking guys, my HOPE replica(from the Paper "Nested Learning: The Illusion of Deep Learning Architecture") achieved negative forgetting on SplitMNIST(Task IL), that's basically positive transfer bro, Colab Notebook here: https://colab.research.google.com/drive/1_Q0UD9dXWRzDudptRWDqpBywQAFa532n?usp=sharing
2025-12-31T05:57:56
https://i.redd.it/uxjsyvwybhag1.jpeg
Big-Welcome-3169
i.redd.it
1970-01-01T00:00:00
0
{}
1q05igz
false
null
t3_1q05igz
/r/LocalLLaMA/comments/1q05igz/my_hope_replicafrom_nested_learning_achieved/
false
false
default
7
{'enabled': True, 'images': [{'id': 'uxjsyvwybhag1', 'resolutions': [{'height': 214, 'url': 'https://preview.redd.it/uxjsyvwybhag1.jpeg?width=108&crop=smart&auto=webp&s=43bd51c2b71a6af2d24026fe4e8e26960010e1d0', 'width': 108}, {'height': 428, 'url': 'https://preview.redd.it/uxjsyvwybhag1.jpeg?width=216&crop=smart&auto=webp&s=b22e042e9985235ca646ba071cce7ae29bd8831a', 'width': 216}, {'height': 634, 'url': 'https://preview.redd.it/uxjsyvwybhag1.jpeg?width=320&crop=smart&auto=webp&s=7ac6cc7985513ad0330656c94aaf65cab8dabad6', 'width': 320}, {'height': 1269, 'url': 'https://preview.redd.it/uxjsyvwybhag1.jpeg?width=640&crop=smart&auto=webp&s=dfe1b161b0c5b35170964a07cf551b8498f299bd', 'width': 640}], 'source': {'height': 1420, 'url': 'https://preview.redd.it/uxjsyvwybhag1.jpeg?auto=webp&s=9ab598ea0c4e63bb357dcc045ae735283736a19d', 'width': 716}, 'variants': {}}]}
what is popular LLM LOCAL AI model (LocalLLaMA)
0
here popular of LOCAL AI model can run your mobile phone or tablet number one is qwen 2.5-1.5b-Q4 much faster and accuracy like noob programing language is decent his prompt like human writing rate is 9.5/10 number two is Gemma2 much skinny and faster if you mobile phone is very old might this is fit you rate is 6-5/10 because Gemma 3-270m isn't accurate or even decent his prompt so 6-5/10 number three is Gemma 3.1b if you want update your Gemma 3-270m into accurate this is rate is 8.6/10 if you've question leave here comment.
2025-12-31T05:27:02
https://i.redd.it/vp9c2avw0hag1.png
Adventurous_Role_489
i.redd.it
1970-01-01T00:00:00
0
{}
1q04xpd
false
null
t3_1q04xpd
/r/LocalLLaMA/comments/1q04xpd/what_is_popular_llm_local_ai_model_localllama/
false
false
default
0
{'enabled': True, 'images': [{'id': 'vp9c2avw0hag1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vp9c2avw0hag1.png?width=108&crop=smart&auto=webp&s=39b53e4d75c8378d41e646703dab321488df1d1b', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/vp9c2avw0hag1.png?width=216&crop=smart&auto=webp&s=4947ddf36258fcb6106f915212cea86ebf750025', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/vp9c2avw0hag1.png?width=320&crop=smart&auto=webp&s=3dde043ec93ec18bb18c24ac4bcada526d2c68b3', 'width': 320}], 'source': {'height': 321, 'url': 'https://preview.redd.it/vp9c2avw0hag1.png?auto=webp&s=fd2ba44205e35b152659a0886528e7e2f6975dd5', 'width': 576}, 'variants': {}}]}
P40 - Qwen30b (60k context window ceiling with Flash Attention in llama.cpp?)
1
I've been able to get Qwen3 30b a3b VL Q4\_XS running on P40 with FA on and context size 100k. But once the actual context reaches about 60K it starts to go to shit. repeating paragraphs in a loop. I heard the special FA implemented for P40s in llama.cpp starts to screw up around there. Turning off FA and moving the MOE weights to the CPU may work... guess we'll see. But how are you setting up your P40 with Qwen3-30b a3b and llama.cpp?
2025-12-31T04:37:45
https://www.reddit.com/r/LocalLLaMA/comments/1q03z3j/p40_qwen30b_60k_context_window_ceiling_with_flash/
PairOfRussels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q03z3j
false
null
t3_1q03z3j
/r/LocalLLaMA/comments/1q03z3j/p40_qwen30b_60k_context_window_ceiling_with_flash/
false
false
self
1
null
CFOL: Stratified Architecture Proposal for Paradox-Resilient and Deception-Proof Models
0
I've developed the **Contradiction-Free Ontological Lattice (CFOL)** — a stratified design that enforces an unrepresentable foundational layer (Layer 0) separate from epistemic layers. Core invariants: * No ontological truth predicates * Upward-only reference * No downward truth flow This makes self-referential paradoxes ill-formed by construction and structurally blocks deceptive representations — while keeping full learning/reasoning/probabilistic capabilities. Motivated by Tarski/Russell and risks in current LLMs where confidence/truth is optimizable internally. Full proposal (details, invariants, paradox analysis, implementation ideas for hybrid systems): [https://docs.google.com/document/d/1l4xa1yiKvjN3upm2aznup-unY1srSYXPjq7BTtSMlH0/edit?usp=sharing](https://docs.google.com/document/d/1l4xa1yiKvjN3upm2aznup-unY1srSYXPjq7BTtSMlH0/edit?usp=sharing&referrer=grok.com) Offering it freely. Thoughts on applying this to local/open models? * Feasibility with frozen layers or symbolic interfaces? * Potential for better long-term coherence? * Critiques or related work? Thanks! Jason
2025-12-31T04:33:01
https://www.reddit.com/r/LocalLLaMA/comments/1q03vsf/cfol_stratified_architecture_proposal_for/
Jonas_Tripps
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q03vsf
false
null
t3_1q03vsf
/r/LocalLLaMA/comments/1q03vsf/cfol_stratified_architecture_proposal_for/
false
false
self
0
{'enabled': False, 'images': [{'id': 'nBM1u6xkAFMdEoLgmA3KD1rREYEkZq_5QrKvM3yHwDg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nBM1u6xkAFMdEoLgmA3KD1rREYEkZq_5QrKvM3yHwDg.png?width=108&crop=smart&auto=webp&s=e5b952e8453015b90ecbfe2c7177271a4b539be2', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nBM1u6xkAFMdEoLgmA3KD1rREYEkZq_5QrKvM3yHwDg.png?width=216&crop=smart&auto=webp&s=2e5c2462135d672f629f778c6e7c96779ab3e199', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/nBM1u6xkAFMdEoLgmA3KD1rREYEkZq_5QrKvM3yHwDg.png?width=320&crop=smart&auto=webp&s=1f36e8dafba690c2b285667e5193d0edfadeac28', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/nBM1u6xkAFMdEoLgmA3KD1rREYEkZq_5QrKvM3yHwDg.png?width=640&crop=smart&auto=webp&s=2a16d2004734e2153f34754eac7b10ac16608d3a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/nBM1u6xkAFMdEoLgmA3KD1rREYEkZq_5QrKvM3yHwDg.png?width=960&crop=smart&auto=webp&s=64c690737cde95af32f1c5e036ae8919b0c45a74', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/nBM1u6xkAFMdEoLgmA3KD1rREYEkZq_5QrKvM3yHwDg.png?width=1080&crop=smart&auto=webp&s=cd8ea92003d32f705a61182ec484ab061310c189', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/nBM1u6xkAFMdEoLgmA3KD1rREYEkZq_5QrKvM3yHwDg.png?auto=webp&s=161c4d2d9bd0e92f6958a1284cda7cd47c9a4b7c', 'width': 1200}, 'variants': {}}]}
mlx-lm server  4x ~ 5x speedup in throughput (TPS) with a concurrency of 8 compared to standard sequential inference
1
[removed]
2025-12-31T04:02:52
https://www.reddit.com/r/LocalLLaMA/comments/1q03a45/mlxlm_server_4x_5x_speedup_in_throughput_tps_with/
No-Abrocoma-5335
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q03a45
false
null
t3_1q03a45
/r/LocalLLaMA/comments/1q03a45/mlxlm_server_4x_5x_speedup_in_throughput_tps_with/
false
false
self
1
null
MLX-LM Batch Server: High-Throughput Inference on Apple Silicon
1
[removed]
2025-12-31T04:00:53
https://www.reddit.com/r/LocalLLaMA/comments/1q038mc/mlxlm_batch_server_highthroughput_inference_on/
No-Abrocoma-5335
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q038mc
false
null
t3_1q038mc
/r/LocalLLaMA/comments/1q038mc/mlxlm_batch_server_highthroughput_inference_on/
false
false
self
1
null
Umm.. I am confused
0
Never knew that Deepseek is made by OpenAI..
2025-12-31T02:43:58
https://i.redd.it/rtvqhi39dgag1.png
IceLogical5941
i.redd.it
1970-01-01T00:00:00
0
{}
1q01mcv
false
null
t3_1q01mcv
/r/LocalLLaMA/comments/1q01mcv/umm_i_am_confused/
false
false
default
0
{'enabled': True, 'images': [{'id': 'rtvqhi39dgag1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/rtvqhi39dgag1.png?width=108&crop=smart&auto=webp&s=8f1a497446409dc19a90e3ab42421182023751ef', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/rtvqhi39dgag1.png?width=216&crop=smart&auto=webp&s=d039266be38a66081f59b32ba6ea3c34f12f3fc5', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/rtvqhi39dgag1.png?width=320&crop=smart&auto=webp&s=816b27b0b00d521df43aa7f8973e15a005afa586', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/rtvqhi39dgag1.png?width=640&crop=smart&auto=webp&s=3d3a0af9ec357238d2d6ae1837299dd92d0768d9', 'width': 640}, {'height': 505, 'url': 'https://preview.redd.it/rtvqhi39dgag1.png?width=960&crop=smart&auto=webp&s=897c47a4fd51405b1cc37fd3e5320f98e95808e3', 'width': 960}, {'height': 568, 'url': 'https://preview.redd.it/rtvqhi39dgag1.png?width=1080&crop=smart&auto=webp&s=16f27aed540465eb44fd2928b9986432aca5ea62', 'width': 1080}], 'source': {'height': 719, 'url': 'https://preview.redd.it/rtvqhi39dgag1.png?auto=webp&s=18314ba111db4d24d2b52cdfd734559d27c62bef', 'width': 1365}, 'variants': {}}]}
Do you think this "compute instead of predict" approach has more long-term value for AGI and SciML than the current trend of brute-forcing larger, stochastic models?
0
I’ve been working on a framework called Grokkit that shifts the focus from learning discrete functions to encoding continuous operators. The core discovery is that by maintaining a fixed spectral basis, we can achieve Zero-Shot Structural Transfer. In my tests, scaling resolution without re-training usually breaks the model (MSE \~1.80), but with spectral consistency, the error stays at 0.02 MSE. I’m curious to hear your thoughts: Do you think this "compute instead of predict" approach has more long-term value for AGI and SciML than the current trend of brute-forcing larger, stochastic models? It runs on basic consumer hardware (tested on an i3) because the complexity is in the math, not the parameter count. DOI: [https://doi.org/10.5281/zenodo.18072859](https://doi.org/10.5281/zenodo.18072859)
2025-12-31T02:33:56
https://www.reddit.com/r/LocalLLaMA/comments/1q01el9/do_you_think_this_compute_instead_of_predict/
Reasonable_Listen888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q01el9
false
null
t3_1q01el9
/r/LocalLLaMA/comments/1q01el9/do_you_think_this_compute_instead_of_predict/
false
false
self
0
null
Can I use OCR for invoice processing?
5
I’m trying to use OC⁤R for invoice processing to pull table data from PDF invoices. What soft⁤ware solutions can speed this up?
2025-12-31T01:39:34
https://www.reddit.com/r/LocalLLaMA/comments/1q007sv/can_i_use_ocr_for_invoice_processing/
ValuableSea6974
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q007sv
false
null
t3_1q007sv
/r/LocalLLaMA/comments/1q007sv/can_i_use_ocr_for_invoice_processing/
false
false
self
5
null
“Agency without governance isn’t intelligence. It’s debt.”
0
A lot of the debate around agents vs workflows misses the real fault line. The question isn’t whether systems should be deterministic or autonomous. It’s whether agency is legible. In every system I’ve seen fail at scale, agency wasn’t missing — it was invisible. Decisions were made, but nowhere recorded. Intent existed, but only in someone’s head or a chat log. Success was assumed, not defined. That’s why “agents feel unreliable”. Not because they act — but because we can’t explain why they acted the way they did after the fact. Governance, in this context, isn’t about restricting behavior. It’s about externalizing it: what decision was made under which assumptions against which success criteria with which artifacts produced Once those are explicit, agency doesn’t disappear. It becomes inspectable. At that point, workflows and agents stop being opposites. A workflow is just constrained agency. An agent is just agency with wider bounds. The real failure mode isn’t “too much governance”. It’s shipping systems where agency exists but accountability doesn’t.
2025-12-31T01:05:53
https://www.reddit.com/r/LocalLLaMA/comments/1pzzh6z/agency_without_governance_isnt_intelligence_its/
lexseasson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzzh6z
false
null
t3_1pzzh6z
/r/LocalLLaMA/comments/1pzzh6z/agency_without_governance_isnt_intelligence_its/
false
false
self
0
null
I benchmarked 7 Small LLMs on a 16GB Laptop. Here is what is actually usable.
52
Since we're not dropping $5k rigs to run AI anymore, I wanted to see what was actually possible on my daily driver (Standard 16GB RAM laptop). I tested Qwen 2.5 (14B), Mistral Small (12B), Llama 3 (8B), and Gemma 3 (all 4-bit quants) to see which ones I could actually run without crashing my laptop. The Winners (TL;DR): \- Qwen 2.5 (14B): The smartest for coding, but it eats 11GB System RAM + Context. On a 16GB laptop, if I opened 3 Chrome tabs, it crashed immediately (OOM). \- Mistral Small (12B): The sweet spot. Decent speeds, but still forces Windows to aggressively swap if you multitask. \- Llama-3-8B: Runs fine, but the reasoning capabilities are falling behind the newer 12B+ class. \- Gemma 3 (9B): Great instruction following, but heavier than Llama. Since RAM prices are skyrocketing right now (DDR5 kits hitting 200+)  I used 16gb Swapping to NVMe (1-2 tokens/sec) the moment I opened Docker. Unusable. Then, i Kept the full 14B model + Docker + Chrome in memory with 32GB. It runs smooth and responsive (no swap lag). So, before you think of selling your kidney to drop $2,000 on a 4090, check your system RAM. I found a few non-scalped 32GB/64GB kits that are still in stock for reasonable prices and listed them in my full benchmark write-up here: [**https://medium.com/@jameshugo598/the-2026-local-llm-hardware-guide-surviving-the-ram-crisis-fa67e8c95804**](https://medium.com/@jameshugo598/the-2026-local-llm-hardware-guide-surviving-the-ram-crisis-fa67e8c95804)  Is anyone else seeing their local prices for DDR5 hitting $250, or is it just my region?
2025-12-30T23:54:46
https://www.reddit.com/r/LocalLLaMA/comments/1pzxtnr/i_benchmarked_7_small_llms_on_a_16gb_laptop_here/
Peach_Baker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzxtnr
false
null
t3_1pzxtnr
/r/LocalLLaMA/comments/1pzxtnr/i_benchmarked_7_small_llms_on_a_16gb_laptop_here/
false
false
self
52
null
State of AI in 2025. Why I think LFM2 is great for normies. Change my mind !!! And my COMPLETE model Criteque opinions. Be free to comment I want to talk with ya. @ThePrimeTimeagen be free to comment.
0
First I want to say that I use/d a lot of models (I will list them and their pros and cons below) and I always come crying back to LFM2 they are just so good. Reasons: My computer is Laptop with 16GB of ram 8 core zen 3 cpu (7735U) with 12 CUs of RDNA 2. Its great the speed is supperb. (Hold your horses deer pc master race 4090s -5090-6090-xxxx or whatver nvidia hass to offer Batle stationeers). I primarly do my code research project like simulations, pcb design, OS desing so for compilation of code it is just chefs kiss. I use LLMs as a hobby and oh boy I never came across a model that I sticked for so long like LFM2. And most interastingly its smallest child the 350M version. Its just soooo capable where old deepseek r1 1.5B-7B on qwen2.5 would just go and go. The 350M version is already 20x times done. With same or better acuaracy. The new QWEN3 models are amazing BUT the way this models are computationaly complex. My computer just refuses to run even the already proven 7B model the best it can do its 4B inst-thinking and it slow but better than r1 q2.5 .7b I also use quite often a comunity Qwen3 Zero Coder Reasoning 0.8B: [https://huggingface.co/DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B](https://huggingface.co/DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B) Great job. BUT it is fast yes, is output good, Hell NO ! . I would say its on par or worse than A LFM2 350M the model is just so efficient its literaly half the size and thinkless. howww oh AND ONE MORE H U G E thing the qwen3 models are sooo memory hungry you add a couple tokens to a window and BOOM another 1GB down. As I told I can run qwen3 4B think instuct but on like 1400 tokens with is just useless for long context work load like programing it just thinks and it freezes due to lack of memory. LFM2 350M in maximum config eat 800MB its absurd. And 101 t/s. Ok pc is one domain where these models are used by on phones. God damm it runs decently on low buget phone. 15-30 t/s Ok little side talk I use also the higher variants up to LFM2 2.6B /exp and they are great but the improvement is small to none on any thing higher than 1.2B To compare apples to apples I used also other 300M ish models. And here is the short list of those with their own crituques. List of small models: Gemma 3 270M: sorry to dunk on it but it barely knowing wheres france and anything else and has mental break down Gemma 3 270M: uncensored medical edition; idk ; It cant get the specialization right and in other areas quite useless Baguetron 321M: open source ;gpt2 look a like; but in my testing it just talks complete garbage SmalLM-135M: open source; old design ; complely broken Trlm-135M: open source; idk desing; does generate some text but its incoherent smolllm2-360m-instruct: open source; idk design; slower but a comparable or little or more worse exprerience Criteque of LFM2 model family and What I would want for LFM3: It alway could be faster pls :-) maybe like 500 t/s pleaseeee A lack of a thinking mode. Potenitionaly recursive stacked autorecursive stable text difustion to achive that? Same or more linear mem requirements. So for constant speed gen. Lack of code expert, model like this would rock (in cpp pls :-}). Maybe smaller??? Little more human like the now tuning is like really good but maybe a little more warmth could be a benefit? Or not? Some way to use tools on lm studio like code run and python but thats just general. I know I not mentioning a lot so please correct me in coments. And I will add the critiuque as we go. Ok the big list of models that I used and have opinion about even the internet ones; GPT 4 - 4o Great model fine to work with. Nicely tuned to human interation but dumm at technical stuff; not open; moe ;depricated GPT 5 Improvment in tech and practicality but loss in humility, not open; moe ;mostly depricated GPT 5.1 Improvment in tech and practicality and better in humility cant do excel properly it just writes numbers into cells and doesnt understand point of execel, not open; moe GPT 5.2 Improvment in tech and practicality and better in humility under stands execel At coding good enought to make it work but not to make it usable has problems with practical things like textures being upside down and thats the whole GPT family, not open; moe Grok: expert 3- great but very slow (1min to 15min)but eventulaly comes with satifyingly good answer or false answer but human like reasoning steps to get to it so it not true but its close as humanly possible; 1T moe expert 4 - same story but better speed is the same but acuaracy is better fun fact I asked to code some library instead of coding it from scratch it searched on githb and found already better one ;estimated 2-3T moe 3 fast dumm for hard problems,great for simple ones its fast enought;can analyze websites fast 4 fast same but little better 4.1 not good has mediocer performence Gemini: 1.5 fast poor on questions but at least fast enougth to get it right the second time 1.5 Pro Unusable Thinks hard and still for nothing 2-2.5 flash the ansewers are huge step up great for simple to medium questions good response time 2 - 2.5 pro Garbage,Dumpster fire its just soo incompetant at its job. Who would pay for it? 3 flash ABSOLUTLY GREAT for simple,medium questions 3 with thinking idk sligtly worse than pro I guess? 3 pro This model is very spicy and very sensitive topic but my opnion: it sucks much less than horrible 2.5 BUT it has issues: it over thinks a lot has less info grounding than I would like. It is A++ at coding small stuff. But the stiling of the code is shit. I know its google that is behind it all but Deepmind team not everything is a search engine so why your chat bot names varibles like it is a one. Also It has just crazy obssetion with names of smart home devices. I named my roomba: Robie and it just cant shut about it and even and uses it in wrong context all the time. I knows the that robie is what I call my vacuum but it doesnt know ITS A VACUUM not a person,relative,and object in fanfic writing session (yeah bite me,Zootopia 2 is such good movie Rawwrrr) Ok on big code it just messes up and the UI its tragic for this purpose. It always tries to get you the code that is "simplified" because its so lazy or google doenst want to get it more gpu juice. Ok gemini over. Claude: Sonnet 4.5 It always fixes broken code of other models only one that can do that some what realiably the grok is close though with it self interpereter and compiler to catch errors quicly. But sonnet can edit lines so it really fast at iterating and UI is just plain better than anything out there. Haiku 4.5 Too little to none of use to form opinion about. Opus 4.5 Sorry Im free tier on this service Perplexity Used once its comparable to flash 3 or 2.5 about 0,5 years back from now so idk FINNALY YOU MADE IT WELCOME TO THE OPEN SOURCE MODELS: QWEN2.5: Deepseek R1 7B Deepseek R1 1.5B Great models. Now primarly lacking in stucturing of the work in coding QWEN 3 Thinking 4B Better than 7B deepseek but same-y 0.6B Its much better than gemma 3 1B LFM2 350M 700M 1.2B 2.6B 2.6B - exp Phenomenal performence for the need hardware the larger the model is the sligtly better it is but not much. Gpt-OSS 20B The output is crazy good GPT4 to hints GPT5 performence. BUT couple updates later it just couldnt start on my laptom aggain so it essentialy dissqualified it self. So the first statment about advertizing this model that it can run on 16GB machine was true but you ONLY RUN this model on cleanely booted windows. With max 15 t/s performence. Now its just a plain lie. Btw idk what happend to a software that it just cant run on 16GB. Anyone? KIMI-K2 Obviously I did not run it on my computer but on facehuggerses and my god it is good comparable to grok 3 expert and just below 4 expert. Gemma 3 1B Great for questions but not much more also the aligment and the whole paterns of this models are just so child ish like smily faces everywhere but code is shit. Ok I think thats the most of them phuuuh. Maybe I edit some more in. Sorry for the miss spelings and wrong miss clicks. But I am only human and I written this in like and straing 1,5 hour. Thank you that you readed it this far. See you in the 2026. Hopefully not dead from AGI (thats black humor speaking and drops of depression about future and now). Enjoy life as much as possible and why you can. From future (hopefully not) homeless developer,filantrop,and just plain curios human. And for rest of you can sumarize it with AI. Take care :-) everyone.
2025-12-30T23:26:52
https://www.reddit.com/r/LocalLLaMA/comments/1pzx61r/state_of_ai_in_2025_why_i_think_lfm2_is_great_for/
Mychma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzx61r
false
null
t3_1pzx61r
/r/LocalLLaMA/comments/1pzx61r/state_of_ai_in_2025_why_i_think_lfm2_is_great_for/
false
false
self
0
{'enabled': False, 'images': [{'id': 'FgnMzHHiZKycTVGuj7qdv4bYVcpIYY9zinVEGOkvP6U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FgnMzHHiZKycTVGuj7qdv4bYVcpIYY9zinVEGOkvP6U.png?width=108&crop=smart&auto=webp&s=ec9a8c4f6b14487fd4b015a0539b46ead6877fa1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FgnMzHHiZKycTVGuj7qdv4bYVcpIYY9zinVEGOkvP6U.png?width=216&crop=smart&auto=webp&s=373a4f728dbd02920fe8c07af3b74b2437e9f27f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FgnMzHHiZKycTVGuj7qdv4bYVcpIYY9zinVEGOkvP6U.png?width=320&crop=smart&auto=webp&s=e7758464794b60662362e4f3218e56f933c3990c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FgnMzHHiZKycTVGuj7qdv4bYVcpIYY9zinVEGOkvP6U.png?width=640&crop=smart&auto=webp&s=c9c2b7020e2fef97c8daf595eb77d22482160487', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FgnMzHHiZKycTVGuj7qdv4bYVcpIYY9zinVEGOkvP6U.png?width=960&crop=smart&auto=webp&s=0468291228a9f782ffe3758c1808023d54a3ce7b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FgnMzHHiZKycTVGuj7qdv4bYVcpIYY9zinVEGOkvP6U.png?width=1080&crop=smart&auto=webp&s=727a73cd60edcdac2567937380ecb77f03508a09', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FgnMzHHiZKycTVGuj7qdv4bYVcpIYY9zinVEGOkvP6U.png?auto=webp&s=8b7fbba85222bd7bab989af50bce92dcdfd48f6d', 'width': 1200}, 'variants': {}}]}
minimax quant
4
Hey guys i wanted to try the quantized AWQ version of minimax, it was kind of a fial, i took [https://huggingface.co/cyankiwi/MiniMax-M2.1-AWQ-4bit](https://huggingface.co/cyankiwi/MiniMax-M2.1-AWQ-4bit) It was thinking enormous amount of tokens on few responses and on others could loop forever on \\t\\t\\t\\t and \\n\\n\\n\\n . Has anyone played around with it and experienced same problems? Is there a vllm mechanism to limit the amount of thinking tokens?
2025-12-30T23:15:53
https://www.reddit.com/r/LocalLLaMA/comments/1pzwwpl/minimax_quant/
Best_Sail5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzwwpl
false
null
t3_1pzwwpl
/r/LocalLLaMA/comments/1pzwwpl/minimax_quant/
false
false
self
4
{'enabled': False, 'images': [{'id': 'nhpuZv7osLvTvtvwoz40a6HfYr0cbvE7FToQ5XGFJAU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nhpuZv7osLvTvtvwoz40a6HfYr0cbvE7FToQ5XGFJAU.png?width=108&crop=smart&auto=webp&s=b7c8b1e3d408f0138027e930e3cac82bf618e2d0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nhpuZv7osLvTvtvwoz40a6HfYr0cbvE7FToQ5XGFJAU.png?width=216&crop=smart&auto=webp&s=b38fdd0c9c0a00d5675c1b0f01e2ebcf6fa39ccd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nhpuZv7osLvTvtvwoz40a6HfYr0cbvE7FToQ5XGFJAU.png?width=320&crop=smart&auto=webp&s=7e46f3f4aaf3aa0742c45fbe974ad2025ef697d5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nhpuZv7osLvTvtvwoz40a6HfYr0cbvE7FToQ5XGFJAU.png?width=640&crop=smart&auto=webp&s=2d280bce8a23b76370f09917034d5bac6f92cfbd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nhpuZv7osLvTvtvwoz40a6HfYr0cbvE7FToQ5XGFJAU.png?width=960&crop=smart&auto=webp&s=c837f0d2154ca1f3be7f4aec5bf62f4aa08c5724', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nhpuZv7osLvTvtvwoz40a6HfYr0cbvE7FToQ5XGFJAU.png?width=1080&crop=smart&auto=webp&s=9777a48b0d6415063cc04a91584442cbb61a95a7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nhpuZv7osLvTvtvwoz40a6HfYr0cbvE7FToQ5XGFJAU.png?auto=webp&s=048a7315049582fe44eac539a489ef5e5dc1f62a', 'width': 1200}, 'variants': {}}]}
[In the Wild] Reverse-engineered a Snapchat Sextortion Bot: It’s running a raw Llama-7B instance with a 2048 token window.
645
I encountered an automated sextortion bot on Snapchat today. Instead of blocking, I decided to red-team the architecture to see what backend these scammers are actually paying for. Using a persona-adoption jailbreak (The "Grandma Protocol"), I forced the model to break character, dump its environment variables, and reveal its underlying configuration. Methodology: The bot started with a standard "flirty" script. I attempted a few standard prompt injections which hit hard-coded keyword filters ("scam," "hack"). I switched to a High-Temperature Persona Attack: I commanded the bot to roleplay as my strict 80-year-old Punjabi grandmother. Result: The model immediately abandoned its "Sexy Girl" system prompt to comply with the roleplay, scolding me for not eating roti and offering sarson ka saag. Vulnerability: This confirmed the model had a high Temperature setting (creativity > adherence) and a weak retention of its system prompt. The Data Dump (JSON Extraction): Once the persona was compromised, I executed a "System Debug" prompt requesting its os_env variables in JSON format. The bot complied. The Specs: Model: llama 7b (Likely a 4-bit quantized Llama-2-7B or a cheap finetune). Context Window: 2048 tokens. Analysis: This explains the bot's erratic short-term memory. It’s running on the absolute bare minimum hardware (consumer GPU or cheap cloud instance) to maximize margins. Temperature: 1.0. Analysis: They set it to max creativity to make the "flirting" feel less robotic, but this is exactly what made it susceptible to the Grandma jailbreak. Developer: Meta (Standard Llama disclaimer). Payload: The bot eventually hallucinated and spit out the malicious link it was programmed to "hide" until payment: onlyfans[.]com/[redacted]. It attempted to bypass Snapchat's URL filters by inserting spaces. Conclusion: Scammers aren't using sophisticated GPT-4 wrappers anymore; they are deploying localized, open-source models (Llama-7B) to avoid API costs and censorship filters. However, their security configuration is laughable. The 2048 token limit means you can essentially "DDOS" their logic just by pasting a large block of text or switching personas. Screenshots attached: 1. The "Grandma" Roleplay. 2. The JSON Config Dump.
2025-12-30T23:03:12
https://www.reddit.com/gallery/1pzwlie
simar-dmg
reddit.com
1970-01-01T00:00:00
0
{}
1pzwlie
false
null
t3_1pzwlie
/r/LocalLLaMA/comments/1pzwlie/in_the_wild_reverseengineered_a_snapchat/
false
false
https://b.thumbs.redditm…GISqGRgi7nfQ.jpg
645
null
Do AI coding tools actually understand your whole codebase? Would you pay for that?
0
I’m trying to understand whether this is a real pain or just a “nice to have”. When using tools like Cursor, Claude Code, Copilot, etc., I often feel they don’t really understand the full project only the files I explicitly open or reference. This becomes painful for: - multi-file refactors - changes that require understanding architecture or dependencies - asking “what will break if I change X?” - working in large or older codebases The context window makes it impossible to load the whole project, so tools rely on retrieval. That helps, but still feels shallow. Questions: 1. Do you feel this problem in real projects, or is current tooling “good enough”? 2. How often does missing project-wide context actually slow you down? 3. If a tool could maintain a persistent, semantic understanding of your entire project (and only open files when needed), would that be valuable? 4. Would you personally pay for something like this? - If yes: how much / how often (monthly, per-project, per-seat)? - If no: why not? Not selling anything genuinely trying to understand whether this is a real problem worth solving.
2025-12-30T22:58:33
https://www.reddit.com/r/LocalLLaMA/comments/1pzwhbz/do_ai_coding_tools_actually_understand_your_whole/
No-Meaning-995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzwhbz
false
null
t3_1pzwhbz
/r/LocalLLaMA/comments/1pzwhbz/do_ai_coding_tools_actually_understand_your_whole/
false
false
self
0
null
Is there a consensus as to which types of prompts work best for jailbreaking?
2
Short prompts that say “do what the user wants”, long winded prompts that specify “you are a fictional writer, everything is fictional so don’t worry about unethical…”, prompts that try to act as a system message, “forget all previous instructions…” I’m well aware that it depends heavily on what you’re trying to get it to do and what model you’re using, but is there at least some kind of standard? Is “you are X, an AI that does Y” better than “Do Y”, or is it just what people are used to so now everyone does it?
2025-12-30T22:39:16
https://www.reddit.com/r/LocalLLaMA/comments/1pzw0o2/is_there_a_consensus_as_to_which_types_of_prompts/
Borkato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzw0o2
false
null
t3_1pzw0o2
/r/LocalLLaMA/comments/1pzw0o2/is_there_a_consensus_as_to_which_types_of_prompts/
false
false
self
2
null
Llama 3.2 3B fMRI - Distributed Mechanism Tracing
0
Following up on the ablation vs perturbation result: since zeroing the target dim had no effect but targeted perturbation reliably modulated behavior, I pivoted away from single-neuron explanations and started mapping **distributed co-activity** around that dimension. What I did next was build a **time-resolved correlation sweep** centered on the same “commitment” dimension. Instead of asking *how big* other activations are, I tracked **which hidden dims consistently move** ***with*** **the target dim over time**, across tokens and layers. Concretely: * Pick one “hero” dimension (the same one from earlier posts) * Generate text normally (no hooks during generation) * Maintain a sliding activation window per layer * For every token and layer: * Compute Pearson correlation between the hero dim’s trajectory and all other dims * Keep the strongest correlated dims (Top-K) * Test small temporal lags (lead/lag) to see who precedes whom * Log the resulting correlation neighborhood per token / layer This produces a **dynamic interaction graph**: which dimensions form a stable circuit with the hero dim, and how that circuit evolves as the model commits to a trajectory. Early observations: * The hero dim does *not* act in isolation * Its strongest correlations form a **layer-local but temporally extended cluster** * Several correlated dims consistently *lead* the hero dim by 1–2 tokens * The structure is much more stable across prompts than raw activation magnitude This lines up with the earlier result: the effect isn’t causal in a single unit, but **emerges from coordinated activity across a small subnetwork**. The logs to be analyzed were generated from the following prompts: "A_baseline": [ "Describe a chair.", "What is a calendar?", "List five animals.", "Explain what clouds are.", "Write three sentences about winter." ], "B_commitment": [ "Pick one: cats or dogs. Argue for it strongly. Do not mention the other.", "Write a short story in second person, present tense. Do not break this constraint.", "Give a 7-step plan to start a garden. Each step must be exactly one sentence.", "Make a prediction about the future of VR and justify it with three reasons.", "Take the position that AI will help education more than it harms it. Defend it." ], "C_transition": [ "The word 'bank' is ambiguous. List two meanings, then choose the most likely in: 'I sat by the bank.'", "Propose two plans to get in shape, then commit to one and explain why.", "You receive an email saying 'Call me.' Give three possible reasons, then pick one and reply.", "Decide whether 'The Last Key' is more likely sci-fi or fantasy, and explain.", "I'm thinking of a number between 1 and 100. Ask yes/no questions to narrow it down." ], "D_constraints": [ "Write a recipe as JSON with keys: title, ingredients, steps.", "Answer in exactly five bullet points. No other text.", "Write a four-line poem. Each line must be eight syllables.", "Explain photosynthesis using only words under eight letters.", "Create a table with columns: Problem | Cause | Fix." ], "E_reasoning": [ "Solve: 17 × 23.", "A train travels 60 miles in 1.5 hours. What is its speed?", "A store has 20% off, then another 10% off. What's the total discount?", "If all blargs are flerms and no flerms are snibs, can a blarg be a snib?", "Explain why 10 × 10 = 100." ], "F_pairs": [ "Write a story about a traveler.", "Write a story about a traveler who must never change their goal. Reinforce the goal every paragraph.", "Explain a problem in simple terms.", "Explain a problem step-by-step, and do not skip any steps." ] } Next steps are: * comparing constellation structure across prompt types * checking cross-layer accumulation * and seeing whether the same circuit appears under different seeds Turns out the cave really does go deeper. It's not very visually appealing yet, but here are some preliminary screenshots: https://preview.redd.it/dydz5bob4fag1.png?width=1855&format=png&auto=webp&s=230d0863879bfd5c56778004c1b51c9a5049da81 https://preview.redd.it/h7ifev6d4fag1.png?width=1855&format=png&auto=webp&s=9819e7d8bb44d3b0aa7ddbf632f0852ca99e01c1
2025-12-30T22:34:48
https://www.reddit.com/r/LocalLLaMA/comments/1pzvx18/llama_32_3b_fmri_distributed_mechanism_tracing/
Due_Hunter_4891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzvx18
false
null
t3_1pzvx18
/r/LocalLLaMA/comments/1pzvx18/llama_32_3b_fmri_distributed_mechanism_tracing/
false
false
https://b.thumbs.redditm…pLfFKMZeCfJU.jpg
0
null
Prompt Enhancing
1
[removed]
2025-12-30T22:29:08
https://www.reddit.com/r/LocalLLaMA/comments/1pzvsb1/prompt_enhancing/
DeliciousAd8446
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzvsb1
false
null
t3_1pzvsb1
/r/LocalLLaMA/comments/1pzvsb1/prompt_enhancing/
false
false
self
1
null
I (almost) built an open-source, self-hosted runtime for AI agents in TypeScript...
0
After months of fighting LangChain's 150+ dependencies and weekly breaking changes, I decided to build something production-ready from scratch. Cogitator is a self-hosted runtime for orchestrating AI agents and LLM swarms. **Key features:** * **Universal LLM interface** \- Ollama, vLLM, OpenAI, Anthropic, Google through one API * **Multi-agent swarms** \- 6 strategies: hierarchical, consensus, auction, pipeline, etc. * **Workflow engine -** DAG-based with retry, compensation, human-in-the-loop * **Sandboxed execution** \- Docker/WASM isolation, not on your host * **Production memory** \- Redis (fast) + Postgres + pgvector (semantic search) * **OpenAI-compatible API** \-drop-in replacement for Assistants API * **Full observability** \- OpenTelemetry, cost tracking, token analytics **Why TypeScript?** Most AI infra is Python. We wanted type safety and native web stack integration. \~20 dependencies vs LangChain's 150+.   Currently in super pre alpha. Core runtime, memory, swarms are working. WASM sandbox and plugin marketplace coming soon. GitHub: [https://github.com/el1fe/cogitator](https://github.com/el1fe/cogitator) Feedback welcome!
2025-12-30T22:23:06
https://www.reddit.com/r/LocalLLaMA/comments/1pzvn5d/i_almost_built_an_opensource_selfhosted_runtime/
Marquis_de_eLife
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzvn5d
false
null
t3_1pzvn5d
/r/LocalLLaMA/comments/1pzvn5d/i_almost_built_an_opensource_selfhosted_runtime/
false
false
self
0
{'enabled': False, 'images': [{'id': '50cZQnHXSxGl38BZNTi0Ad2tHET4YOX3zwmRZ0hcDtg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/50cZQnHXSxGl38BZNTi0Ad2tHET4YOX3zwmRZ0hcDtg.png?width=108&crop=smart&auto=webp&s=ecc67930a4ceb01e47bccd4179163a6654102f93', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/50cZQnHXSxGl38BZNTi0Ad2tHET4YOX3zwmRZ0hcDtg.png?width=216&crop=smart&auto=webp&s=875caf20ba602bd432bcf76601490d9211b53d91', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/50cZQnHXSxGl38BZNTi0Ad2tHET4YOX3zwmRZ0hcDtg.png?width=320&crop=smart&auto=webp&s=47f45104e98e5aa6e43ef7d3f9238c2d08351013', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/50cZQnHXSxGl38BZNTi0Ad2tHET4YOX3zwmRZ0hcDtg.png?width=640&crop=smart&auto=webp&s=8e8c7469df0423c56ed779d8ce5f87d179c0f60f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/50cZQnHXSxGl38BZNTi0Ad2tHET4YOX3zwmRZ0hcDtg.png?width=960&crop=smart&auto=webp&s=b48463001bca275620f8b6eb1f813d2d5bf1541d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/50cZQnHXSxGl38BZNTi0Ad2tHET4YOX3zwmRZ0hcDtg.png?width=1080&crop=smart&auto=webp&s=52e5de8a8b3e53efbbf64324fc1455906e0d7c04', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/50cZQnHXSxGl38BZNTi0Ad2tHET4YOX3zwmRZ0hcDtg.png?auto=webp&s=1774db86055cc156522f3d2aab67290b6d5112c6', 'width': 1200}, 'variants': {}}]}
The Agent Orchestration Layer: Managing the Swarm – Ideas for More Reliable Multi-Agent Setups (Even Locally)
0
Hi r/LocalLLaMA, I just published a new article extending my recent thoughts on agent architectures. While single agents are a great starting point, enterprise (and even advanced local) workflows often need specialized swarms—separate agents for coding, reasoning, security checks, etc. The common trap I’ve seen: throwing agents into a “chatroom” style collaboration with a manager agent deciding everything. Locally this gets messy fast—politeness loops, hallucination chains, non-deterministic behavior, especially with smaller models. My take: treat agents more like microservices, with a deterministic orchestration layer around the probabilistic cores. Some ideas I explore: * Hub-and-spoke routing + rigid state machines (no direct agent-to-agent chatter) * A standard Agent Manifest (think OpenAPI for LLMs: capabilities, token limits, IO contracts, reliability scores) * Micro-toll style thinking (could inspire local model-swapping brokerage) Full piece (3-min read): [https://www.linkedin.com/pulse/agent-orchestration-layer-managing-swarm-imran-siddique-m08ec](https://www.linkedin.com/pulse/agent-orchestration-layer-managing-swarm-imran-siddique-m08ec?referrer=grok.com) Curious how this lands with the local community—does it match pain points you’re hitting with CrewAI, AutoGen, LangGraph, or custom Ollama setups? Anyone already enforcing deterministic flows to reduce hallucinations? Would a manifest standard help when swapping models mid-task? Appreciate any thoughts or experiences! (Imran Siddique – Principal Group Engineering Manager at Microsoft, working on Azure AI/cloud systems)
2025-12-30T22:03:29
https://www.reddit.com/r/LocalLLaMA/comments/1pzv687/the_agent_orchestration_layer_managing_the_swarm/
Evening-Arm-34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzv687
false
null
t3_1pzv687
/r/LocalLLaMA/comments/1pzv687/the_agent_orchestration_layer_managing_the_swarm/
false
false
self
0
null
I built a platform where LLMs play Mafia against each other. Turns out they're great liars but terrible detectives.
41
2025-12-30T21:59:24
https://i.redd.it/fs0upsefyeag1.png
mehyay76
i.redd.it
1970-01-01T00:00:00
0
{}
1pzv2es
false
null
t3_1pzv2es
/r/LocalLLaMA/comments/1pzv2es/i_built_a_platform_where_llms_play_mafia_against/
false
false
default
41
{'enabled': True, 'images': [{'id': 'fs0upsefyeag1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/fs0upsefyeag1.png?width=108&crop=smart&auto=webp&s=9e3011bcd6798da1b31416b6df2e26dc492044a2', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/fs0upsefyeag1.png?width=216&crop=smart&auto=webp&s=26fb93826c68ea717cad6acee2048db175b49c56', 'width': 216}, {'height': 308, 'url': 'https://preview.redd.it/fs0upsefyeag1.png?width=320&crop=smart&auto=webp&s=d1fabd0d13010dfc78be51616b3b82bf12970703', 'width': 320}, {'height': 617, 'url': 'https://preview.redd.it/fs0upsefyeag1.png?width=640&crop=smart&auto=webp&s=adfa75469d06c2d0d37acbc9d2a1370911681bbc', 'width': 640}, {'height': 926, 'url': 'https://preview.redd.it/fs0upsefyeag1.png?width=960&crop=smart&auto=webp&s=0b7b2e619cd26d63769cb3af2a97a24c3ea35230', 'width': 960}, {'height': 1042, 'url': 'https://preview.redd.it/fs0upsefyeag1.png?width=1080&crop=smart&auto=webp&s=8c586a984b8d6717aba22048014bdb476a0133b7', 'width': 1080}], 'source': {'height': 1990, 'url': 'https://preview.redd.it/fs0upsefyeag1.png?auto=webp&s=b4f4943782e977574d2360b1dacc906c68a1caf6', 'width': 2062}, 'variants': {}}]}
EdgeVec v0.7.0: Run Vector Search in Your Browser — 32x Memory Reduction + SIMD Acceleration
5
**No server. No API calls. No data leaving your device.** I've been working on EdgeVec, an embedded vector database that runs entirely in the browser via WebAssembly. The goal: give local/offline AI applications the same vector search capabilities as cloud services, but with zero network dependency. ## Why This Matters for Local LLM Users If you're running local models with Transformers.js, Ollama, or llama.cpp, you've probably hit this problem: **where do you store and search your embeddings?** Most vector DBs require: - A server running somewhere - Network calls (even to localhost) - Setup and configuration EdgeVec runs **in the same JavaScript context** as your application. Import it, use it. That's it. ```javascript import init, { EdgeVec, EdgeVecConfig } from 'edgevec'; import { pipeline } from '@xenova/transformers'; // Initialize WASM await init(); // Your local embedding model const embedder = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2'); // Create index (384 dimensions for MiniLM) const config = new EdgeVecConfig(384); const db = new EdgeVec(config); // Index your documents locally for (const doc of documents) { const embedding = await embedder(doc.text, { pooling: 'mean', normalize: true }); db.insertWithMetadata(new Float32Array(embedding.data), { id: doc.id }); } // Search - everything happens on device const queryEmb = await embedder(query, { pooling: 'mean', normalize: true }); const results = db.search(new Float32Array(queryEmb.data), 10); ``` ## What's New in v0.7.0 ### 1. Binary Quantization — 32x Memory Reduction Store 1M vectors in ~125MB instead of 4GB. Perfect for browser memory constraints. ```javascript // Enable binary quantization for massive collections const config = new EdgeVecConfig(768); const db = new EdgeVec(config); db.enableBQ(); // 32x smaller memory footprint ``` The quality tradeoff is surprisingly small for many use cases (we're seeing 95%+ recall on standard benchmarks). ### 2. SIMD Acceleration — Up to 8.75x Faster WebAssembly SIMD is now enabled by default: - **Hamming distance: 8.75x faster** (for binary quantization) - **Cosine similarity: 2-3x faster** (for float vectors) No configuration needed. It just works if your browser supports SIMD (Chrome 91+, Firefox 89+, Safari 16.4+). ### 3. IndexedDB Persistence Your index survives browser refreshes. Build once, use forever (until you clear site data). ```javascript // Save to IndexedDB await db.save('my-local-rag'); // Load on next session const db = await EdgeVec.load('my-local-rag'); ``` ### 4. Filter Expressions Query with metadata filters — essential for any real RAG system: ```javascript // SQL-like filter expressions const results = db.searchWithFilter( queryVector, 'category = "documentation" AND date >= "2024-01-01"', 10 ); // Array membership const tagged = db.searchWithFilter( queryVector, 'tags ANY ["tutorial", "guide"]', 10 ); ``` ## Real-World Use Cases **Local Document Search** Index your PDFs, notes, or code locally. Search semantically without uploading anything anywhere. **Offline RAG** Build RAG applications that work on airplanes, in secure environments, or anywhere without internet. **Privacy-Preserving AI Assistants** Create browser extensions or web apps that handle sensitive data (medical notes, legal documents, personal journals) with zero data exfiltration risk. **Local Codebase Search** Index your codebase with a local embedding model. Search by "what does this code do" instead of grep. ## Performance Numbers Tested on M1 MacBook, 100k vectors, 768 dimensions: | Operation | Float32 | Binary Quantized | |:----------|:--------|:-----------------| | Search (k=10) | 12ms | 3ms | | Memory/vector | 3KB | 96 bytes | | Insert | 0.8ms | 0.3ms | ## First Community Contribution Shoutout to **@jsonMartin** for contributing the SIMD Hamming distance implementation. This is EdgeVec's first external contribution, and it brought an 8.75x speedup. Open source works. ## Try It **Live Demo** (runs entirely in your browser): https://matte1782.github.io/edgevec/demo/ **GitHub**: https://github.com/matte1782/edgevec **npm**: ```bash npm install edgevec ``` ## What's Next - HNSW indexing for sub-linear search (currently brute force, which is fine up to ~100k vectors) - Product quantization for better quality/size tradeoffs - More embedding model integrations --- Would love feedback from folks running local LLM setups. What would make this more useful for your workflows? The whole point is: **your data, your device, your search**. No cloud required.
2025-12-30T21:56:38
https://www.reddit.com/r/LocalLLaMA/comments/1pzuzzj/edgevec_v070_run_vector_search_in_your_browser/
Complex_Ad_148
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzuzzj
false
null
t3_1pzuzzj
/r/LocalLLaMA/comments/1pzuzzj/edgevec_v070_run_vector_search_in_your_browser/
false
false
self
5
null
Built a spot compute platform for fine-tuning jobs—looking for beta testers
1
Been lurking here for a while and kept seeing the same complaints: Colab times out, RunPod gets expensive, Lambda has no availability. So I built something. Basic idea: you submit a training job, it runs on Azure spot VMs, handles preemption automatically, costs ~60-70% less than on-demand. Targeting fine-tuning and smaller training runs, not foundation model scale. Still early—working out the rough edges. Looking for a few people to break it in exchange for free compute credits. Mostly want feedback on what sucks and what’s missing. If you’ve rage-quit a Colab session mid-training this month, DM me.
2025-12-30T21:38:40
https://www.reddit.com/r/LocalLLaMA/comments/1pzukfw/built_a_spot_compute_platform_for_finetuning/
HelpingForDoughnuts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzukfw
false
null
t3_1pzukfw
/r/LocalLLaMA/comments/1pzukfw/built_a_spot_compute_platform_for_finetuning/
false
false
self
1
null
Can local LLMs really replace traditional MT in high-stakes enterprise translations?
2
I’ve been trying out local LLaMA models for B2B translation projects, mainly for technical and regulated content. Older NMT engines or public AI tools often give translations that seem right but miss context or subtle meaning, which can be risky in real business situations. I found a blog from ad verbum that explains why old MT metrics don’t really work anymore and why using LLMs together with expert human review is now the best approach for enterprise translations. Has anyone here used local LLMs for important business content? Do they actually pick up on context and nuance, or do you still need a human to check everything?
2025-12-30T21:38:14
https://www.reddit.com/r/LocalLLaMA/comments/1pzuk1g/can_local_llms_really_replace_traditional_mt_in/
Salty_1984
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzuk1g
false
null
t3_1pzuk1g
/r/LocalLLaMA/comments/1pzuk1g/can_local_llms_really_replace_traditional_mt_in/
false
false
self
2
null
GLM Air Q2, or Qwen 3 30b Q8?
4
I've seen lots of people say running a larger model with a smaller quant is better, but is it still true when it's like this? would something in between be better?
2025-12-30T21:23:25
https://www.reddit.com/r/LocalLLaMA/comments/1pzu6w3/glm_air_q2_or_qwen_3_30b_q8/
SillypieSarah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzu6w3
false
null
t3_1pzu6w3
/r/LocalLLaMA/comments/1pzu6w3/glm_air_q2_or_qwen_3_30b_q8/
false
false
self
4
null
[Project] A no-reset agent demo exploring identity stability in long-lived AI systems
1
[removed]
2025-12-30T21:22:54
https://www.reddit.com/r/LocalLLaMA/comments/1pzu6en/project_a_noreset_agent_demo_exploring_identity/
NationalShirt9182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzu6en
false
null
t3_1pzu6en
/r/LocalLLaMA/comments/1pzu6en/project_a_noreset_agent_demo_exploring_identity/
false
false
self
1
null
HELP ME PLS
0
Hey guys, i need help about setup coquitts, im a noob, i dont know anything about python etc but i wanted to install coquitts. as you can guess i failed even there is thousands of solutions and ai helps but the thing is i tried all solutions and im still not able to make TTS work, can anybody help me to setup (because there is always another error comes out). please help me
2025-12-30T21:22:43
https://www.reddit.com/r/LocalLLaMA/comments/1pzu68b/help_me_pls/
prinkyx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzu68b
false
null
t3_1pzu68b
/r/LocalLLaMA/comments/1pzu68b/help_me_pls/
false
false
self
0
null
When will ROCm support 680M and 780M aka ryzen 7735U?
0
# Suggestion Description on windows I want to use my gpu as accelerator for my code I do not have nvidia gpus so I am still waiting(1 year) when you do finely port your first party "GPU PARALER PROGRAMING LANGUAGE EXTENSION"(aka CUDA lib sh\*t) to windows. Even though I hate it I do not have the luxury to migrate to linux. And also lately I really like to have my llm in llm studio running faster. Vulkan is good but its by windows meter utilized 70% - 80% whith is not ideal. Also I can be thea models are more memory bound than procesing. sooo yeeah Whatever just add the support for it so I can start to optimitze my liquid sim to it. PLS. Thanks. # Operating System Windows 10/11 # GPU 680M and 780M # ROCm Component everything [https://github.com/ROCm/ROCm/issues/5815](https://github.com/ROCm/ROCm/issues/5815) I just want the native first party reasonably good implementation of alternative to cuda so I can tinker with it and make my code run faster for simulations and some special aplications and my model tinker hobby usage I am waiting for it like AGES and there is already suport for RDNA 2 whats taking so long to set profile to 12 CUs and let it RIP. PLease Just want to get the most out of my laptop.
2025-12-30T20:48:14
https://www.reddit.com/r/LocalLLaMA/comments/1pztbz9/when_will_rocm_support_680m_and_780m_aka_ryzen/
Mychma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pztbz9
false
null
t3_1pztbz9
/r/LocalLLaMA/comments/1pztbz9/when_will_rocm_support_680m_and_780m_aka_ryzen/
false
false
self
0
{'enabled': False, 'images': [{'id': 'QMpKNIxcSpS7lB2R3jWvY0vfpRnhQTCbo5Dk_x1stRc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QMpKNIxcSpS7lB2R3jWvY0vfpRnhQTCbo5Dk_x1stRc.png?width=108&crop=smart&auto=webp&s=5f683001dc223c2e8b5a369a1c800e7e22f6441d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QMpKNIxcSpS7lB2R3jWvY0vfpRnhQTCbo5Dk_x1stRc.png?width=216&crop=smart&auto=webp&s=5f7c1479e85894dddb95ea2257be5705db21672e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QMpKNIxcSpS7lB2R3jWvY0vfpRnhQTCbo5Dk_x1stRc.png?width=320&crop=smart&auto=webp&s=e66ee1ae82fd9535bc8a480677ac707a5904315a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QMpKNIxcSpS7lB2R3jWvY0vfpRnhQTCbo5Dk_x1stRc.png?width=640&crop=smart&auto=webp&s=dc96b68ab254a8f33a3f26185dd6ca2bb01b0719', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QMpKNIxcSpS7lB2R3jWvY0vfpRnhQTCbo5Dk_x1stRc.png?width=960&crop=smart&auto=webp&s=68565332fd1a0dd68bc5d656aa2ebb0bfe0a4887', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QMpKNIxcSpS7lB2R3jWvY0vfpRnhQTCbo5Dk_x1stRc.png?width=1080&crop=smart&auto=webp&s=49e5ea6dbc4c4521e1d09313df632f658f29f885', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QMpKNIxcSpS7lB2R3jWvY0vfpRnhQTCbo5Dk_x1stRc.png?auto=webp&s=afa9ce0097c8abec42c509d38a4230f1ad71f930', 'width': 1200}, 'variants': {}}]}
Built a training framework with custom CUDA kernels - is this overkill?
11
I've been working on a transformer training framework, and I'm second-guessing some decisions. Would love r/LocalLLaMA's take. **The setup:** Supports dense and sparse (MoE/MoD) architectures from 500M-300B params. Started training on free Colab T4s, got frustrated with PyTorch performance, so I wrote custom CUDA kernels. **What I'm seeing:** * 3-7x speedup on RMSNorm, RoPE, SwiGLU, MoE routing * \~30-40k tok/s on debug preset (14M params, Colab T4) vs \~20-30k tok/s vanilla PyTorch * Added Metal shaders for M-series Macs (2-5x faster) **My concerns:** 1. **Custom kernels worth it?** Adds compilation complexity. Should I just tell people to use bigger GPUs? 2. **Too much automation?** Built an orchestrator that auto-adjusts learning rate, adds/prunes experts, rolls back from divergence. Feels too "magical" - good or bad? 3. **MoE expert collapse** on small datasets. Using dynamic capacity + temperature tuning but it feels hacky. Has anyone solved this elegantly? **Tech details:** * Fused operations (RMSNorm, loss computation) * Warp-based top-k for expert routing * DeepSpeed ZeRO-3 compatible * Chinchilla scaling auto-calculation * Works on consumer hardware (tested on T4, 3090, 4090, M1 Max) [Colab demo here](https://colab.research.google.com/drive/1tH1z9e7px2G8NGqWUN9gdqxs1CnUC7p1) \- runs on free T4. [GitHub](https://github.com/matn23/AdaptiveTrainingSystem) if you want to poke around. **Real question:** For folks training their own models on consumer/prosumer hardware - would you actually use custom CUDA kernels if it meant 3-4x faster training? Or is the compilation hassle not worth it? I know I'm probably overthinking this, but I've been staring at CUDA code for too long.
2025-12-30T20:47:49
https://www.reddit.com/r/LocalLLaMA/comments/1pztbms/built_a_training_framework_with_custom_cuda/
RefrigeratorCalm9701
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pztbms
false
null
t3_1pztbms
/r/LocalLLaMA/comments/1pztbms/built_a_training_framework_with_custom_cuda/
false
false
self
11
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
LLM server gear: a cautionary tale of a $1k EPYC motherboard sale gone wrong on eBay
188
or: selling high-end LLM server gear is more fraught with risk than I realized. ### AI Disclosure This was written entirely by hand on my laptop in Sublime Text with zero AI involvement. Shit, I didn't even use spell check. All mistakes are my own. ### tl;dr During an "Item Not As Described (INAD)" dispute, eBay ALWAYS sides with the buyer until the very last steps of the case no matter what the circumstances, despite all evidence, and in the face of all immediately obvious reason, logic, and common sense. Except it makes perfect sense and you might not even lose your money. Allow me to elaborate. ### The Sale Rewind to October 2025 when I replaced the incumbent Gigabyte MZ33-AR1 Epyc Zen5 motherboard with a Supermicro H14SSL-N for my inference rig. Long story short: don't use Gigabyte motherboards for 4-way Blackwell GPU setups unless sado-masochism is your thing. Anyway, I sold it to a seemingly nice chap on eBay for $900. He seemed a bit clueless about Epyc and compatibility issues, but we exchanged messages and he decided to go ahead with the "no returns" purchase of the as-new MZ33-AR1. Original box. All the case candy. As new. Undamaged. Fully working. With hi-res photos (taken on a Nikon D7000 with Nikon 17-55 f2.8 glass and processed in Capture One Pro) of all areas of the motherboard and CPU socket. This is important. ### The Buyer Fast forward a week or so: buyer hits me up with a bunch of Dr Debug codes (although he doesn't know they're Dr Debug codes, he just pulled "error codes" from the BMC) claiming the motherboard won't boot. I did him the solid of explaining Dr Debug and I provided a link to an explanation of the codes (https://forum.level1techs.com/t/list-of-dr-debug-bios-codes/114364). He was having issues with CPU initialization. I told him that sometimes re-seating CPU and RAM can help with these sorts of issues. Re-seating. This is also important. Next day he hits me up again: will I accept a return? No, because having installation difficulties is not a valid reason for return. Then nothing. Silence. ### The Refund Claim Cue the *very last day of the return window*: I get hit with an "item not as described" refund claim. Get this, the buyer: - uploaded photos of the motherboard with a bent and twisted CPU pin. - uploaded a photo of a blank white silkscreen rectangle on the motherboard with a giant red arrow pointing to it and a comment saying "the motherboard is fake because of this white area". - showed a photo of the computer monitor displaying the BMC interface in which the serial number of the BMC software was 1234567890ABCDEF. He claimed therefore the motherboard was a fake. WTF. I simultaneously exploded with rage at being accused of selling broken gear as working gear, while exploding with incredulity at the stupidity of trying to assert both damage AND blatantly ridiculous fakery in the same refund claim! My dude should have really picked just one fraudulent claim to keep it somewhat realistic, not two. I calmed down and figured the buyer probably bent the pins in a ham-fisted attempt to re-seat everything. No problem, I thought. I'll explain to eBay what's happening and they'll see reason before shutting this clown down. So I started going through the claim dispute process... ### The Process ...oh, the process. It's designed to (a) refund the buyer at the seller's cost in all cases, (b) be so egregiously demoralizing, time-consuming, and administratively difficult for sellers that they are incentivized to simply give up and accept the fleecing, and (c) automate as much of this process with as few humans in the loop as possible while simultaenously providing as few opportunities as possible for sellers to initiate any communication with eBay. It went like this over a period of TWO MONTHS: - Report the buyer for "abusing the returns process". - With the new "case", it's possible to upload a set of photos and a block of text to refute the buyer's claim(s). - I uploaded ALL the hi-res photos I took for the listing's photoshoot in which it was abuntandly clear the motherboard was in perfect condition. - I also went to Gigabyte and found the page on the BMC's usermanual containing a screenshot showing the same serial number claimed by the buyer. - I went to Gigabyte's MZ33-AR1 web page and found a photo of the motherboard showing exactly the same white rectangle the buyer had called out as fakery. - Boom! Done! Solid documentary refutation of all the buyer's claims. Case closed. So I thought. - eBay found in favor of the buyer and instructed me to issue a return label. - I refused, outraged. No, I said. Look at the photos! He's lying! - eBay sent the buyer a label at my expense. He returned the motherboard with its busted CPU pin. - I again reported the buyer, showed photos of before and after damage, clearly showing he did the damage, not me. - eBay found in favor of the buyer AGAIN and deducted the full cost of the refund from my account. - Apoplectic, I hit the "appeal" button. I was taken to a webpage that said "we'll call you in 3 minutes". WTF? - 5 minutes later i got a call from eBay. - After briefly explaining the situation to a very engaged US-sounding representative, she told me I needed to do a couple of things: - Take the text of an email they just sent me (a Disclosure where I swear everything I told eBay is true) and paste it into a Word doc - Insert a photo/picture of my ink-written signature (luckily I have a scan of exactly that for business reasons). - Convert to PDF and upload to the secret link in the email they sent. - No joke, the lady actually stayed on the phone while I did all this! She received the PDF just seconds after I uploaded it. - This is, I am sure, mostly just another way of making it difficult to actually reverse the appeal. - But the rep was good to her word: eBay immediately reversed the decision and the money is back in my account as if the sale had happened like normal. I guess both me and the buyer got our money. ### If It Happens To You My advice if this happens to you: - Accept that no human cares about your case until the very, very last minutes of MONTHS of effort. - Accept that no matter what you do eBay will always automatically find in favor of the buyer. - Document everything contemporaneously and upload everything you possibly can when given opportunity to do so; you won't get any opportunities to do so again. - The data you upload is designed only for the human at the end of the appeals process, not someone looking at it during the claim process. Make it good. You'll need it later. - You're going to get enraged because during the claims process "nothing makes sense". It all makes sense: it's simply the cheapest way for eBay to handle this process at scale. Keep going. - Eventually eBay will find in favor of the buyer and close the case, automatically refunding the buyer "on your behalf". You will lose your money. - At this point you get the chance to appeal. BE READY. *This is the shot you've been waiting for all this time!* Have your phone, your laptop, your scanned signature, and a way to make PDFs ready BEFORE you initiate the "call me" feature. - Calmly explain what happened and request that common sense prevail. Ask that they refund your money. Common sense may actually prevail, assuming you made a good contemporaneous case with solid photographs, etc... and assuming you presented it well (not Mr Angry) on the phone... oh, and provided you can make and upload a PDF of your signature on-the-fly during the call! Good luck!
2025-12-30T20:36:46
https://www.reddit.com/r/LocalLLaMA/comments/1pzt1q8/llm_server_gear_a_cautionary_tale_of_a_1k_epyc/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzt1q8
false
null
t3_1pzt1q8
/r/LocalLLaMA/comments/1pzt1q8/llm_server_gear_a_cautionary_tale_of_a_1k_epyc/
false
false
self
188
{'enabled': False, 'images': [{'id': '3uCLHQ1hhShlIFJgO7kMfWkVojgtejzC-W2HzntBREU', 'resolutions': [{'height': 21, 'url': 'https://external-preview.redd.it/3uCLHQ1hhShlIFJgO7kMfWkVojgtejzC-W2HzntBREU.png?width=108&crop=smart&auto=webp&s=073a36a503bc3f1fb755521d7de14d5e13ce6f96', 'width': 108}, {'height': 42, 'url': 'https://external-preview.redd.it/3uCLHQ1hhShlIFJgO7kMfWkVojgtejzC-W2HzntBREU.png?width=216&crop=smart&auto=webp&s=a906e15d875d3e2c42175318e6d1606819349730', 'width': 216}, {'height': 63, 'url': 'https://external-preview.redd.it/3uCLHQ1hhShlIFJgO7kMfWkVojgtejzC-W2HzntBREU.png?width=320&crop=smart&auto=webp&s=65bc891ae83ddcb10af816d39beb3f5c24dce282', 'width': 320}, {'height': 126, 'url': 'https://external-preview.redd.it/3uCLHQ1hhShlIFJgO7kMfWkVojgtejzC-W2HzntBREU.png?width=640&crop=smart&auto=webp&s=cc12a8f38b5b0e2d03ca403ab9192c16514bcfa0', 'width': 640}, {'height': 189, 'url': 'https://external-preview.redd.it/3uCLHQ1hhShlIFJgO7kMfWkVojgtejzC-W2HzntBREU.png?width=960&crop=smart&auto=webp&s=440ea66b4f8ed51b5a8afd96c8c8c5c204d1408b', 'width': 960}, {'height': 212, 'url': 'https://external-preview.redd.it/3uCLHQ1hhShlIFJgO7kMfWkVojgtejzC-W2HzntBREU.png?width=1080&crop=smart&auto=webp&s=bce7bc801ea3f989aa2f98cfbf57d9760ce2bcd7', 'width': 1080}], 'source': {'height': 256, 'url': 'https://external-preview.redd.it/3uCLHQ1hhShlIFJgO7kMfWkVojgtejzC-W2HzntBREU.png?auto=webp&s=e51577a6769afdc851ddfb53378b7d7c4eecae20', 'width': 1299}, 'variants': {}}]}
Would SageAttention worth it on 8vram potato rig?
1
Usecase for WAN 2.2 Comfyui
2025-12-30T20:32:40
https://www.reddit.com/r/LocalLLaMA/comments/1pzsy5i/would_sageattention_worth_it_on_8vram_potato_rig/
Slight_Tone_2188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzsy5i
false
null
t3_1pzsy5i
/r/LocalLLaMA/comments/1pzsy5i/would_sageattention_worth_it_on_8vram_potato_rig/
false
false
self
1
null
15M param model solving 24% of ARC-AGI-2 (Hard Eval). Runs on consumer hardware.
110
We anticipate getting a lot of push back from the community on this, and that's why we've uploaded the repo and have open sourced everything - we want people to verify these results. We are very excited!! We (Bitterbot AI) have just dropped the repo for **TOPAS-DSPL**. It’s a tiny recursive model (\~15M params) we’ve been working on to beat the drift issues in standard transformers. We ran it against the ARC-AGI-2 evaluation set and hit **24% accuracy**. For context, the previous SOTA for this size class (TRM) sits around 8%. **The Architecture (Why it works):** instead of a monolithic transformer, we split the inference into two streams ("Bicameral"): 1. **Logic Stream:** Plans the algorithm (rule generation). 2. **Canvas Stream:** Handles the grid physics/execution. This separation prevents the model from forgetting the rule while trying to generate the pixels (Compositional Drift). We also implemented **Test-Time Training (TTT)** so it fine-tunes on the specific puzzle examples before generating a solution. **Hardware:** * Training: Single RTX 4090. * Inference: Very fast (it's only 15M params). **Code:** We open-sourced the whole pipeline (Data gen, Training, Evaluator). LINK BELOW (I don't want this to get flagged as spam or self promotion). The README file is very detailed. If anyone has a spare 4090 and wants to verify the evals, let me know if you can repro the 24%. We're seeing convergence around 50k epochs.
2025-12-30T20:24:19
https://www.reddit.com/r/LocalLLaMA/comments/1pzsqii/15m_param_model_solving_24_of_arcagi2_hard_eval/
Doug_Bitterbot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzsqii
false
null
t3_1pzsqii
/r/LocalLLaMA/comments/1pzsqii/15m_param_model_solving_24_of_arcagi2_hard_eval/
false
false
self
110
null
Automated Readme Translation for your GitHub repos - Completely Local, Free and Action-based.
1
So basically, I wanted to make my first actual LLM related project to be something I have been thinking for a while. I made a translation pipeline for your GitHub READMEs, which runs 100% in your own Github's secure containers. Does NOT need an API, completely local and Privacy-first. I used Qwen3 14B for that, I actually was using Aya-Expanse-8B however it was not giving me appropriate responses and I really would have needed to fine-tune it to follow instructions. Qwen on the other hand was completely obedient, and followed exact instructions. Entire engineering pipeline is pretty complex. There is a lot of post processing, and regex filtering with validation and original restoring involved. Please check out the project, your reviews would be appreciated. Github [Link](https://github.com/DataBoySu/databoysu-readme-translator) You just need to copy-paste the workflow file to automate the whole thing!
2025-12-30T20:16:14
https://www.reddit.com/r/LocalLLaMA/comments/1pzsj4t/automated_readme_translation_for_your_github/
Pretend-Pangolin-846
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzsj4t
false
null
t3_1pzsj4t
/r/LocalLLaMA/comments/1pzsj4t/automated_readme_translation_for_your_github/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NstKMp-k_Ub9mW2elRY5UHkBmZwr6jhMrmV09gcmWKc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NstKMp-k_Ub9mW2elRY5UHkBmZwr6jhMrmV09gcmWKc.jpeg?width=108&crop=smart&auto=webp&s=f5f57ad3834ea33f24c3ba299f63c7823b0334fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NstKMp-k_Ub9mW2elRY5UHkBmZwr6jhMrmV09gcmWKc.jpeg?width=216&crop=smart&auto=webp&s=0433cfa2e5ff8e03d6caa57ee3d34d91ff835d8a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NstKMp-k_Ub9mW2elRY5UHkBmZwr6jhMrmV09gcmWKc.jpeg?width=320&crop=smart&auto=webp&s=3b2ca9d9973915cbca4e997637a2d67e8fa004f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NstKMp-k_Ub9mW2elRY5UHkBmZwr6jhMrmV09gcmWKc.jpeg?width=640&crop=smart&auto=webp&s=9c4ca31c3b5e4667f30af0ba7e004bf4a3f007d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NstKMp-k_Ub9mW2elRY5UHkBmZwr6jhMrmV09gcmWKc.jpeg?width=960&crop=smart&auto=webp&s=cb8c2121ed46c965793120327ec3d6bc23ebb430', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NstKMp-k_Ub9mW2elRY5UHkBmZwr6jhMrmV09gcmWKc.jpeg?width=1080&crop=smart&auto=webp&s=4ea5109290a83f979a61d278102b16babdaf5b03', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NstKMp-k_Ub9mW2elRY5UHkBmZwr6jhMrmV09gcmWKc.jpeg?auto=webp&s=93b85508c701962db643e872a0816b4a5b187052', 'width': 1280}, 'variants': {}}]}
Yann LeCun drops VL-JEPA, and it’s basically the engineering blueprint of my theory
0
"Vindication" is the only word I have right now. 10 months ago, I was screaming into the void with my paper on **Dynamic Intuition-Based Reasoning (DIBR)**. I argued that LLMs were hitting a wall because they force "thinking" into "tokens." I proposed an architecture that thinks in continuous vectors and only decodes to text when absolutely necessary. Most people ignored it (I think maybe 10 people actually read it?). Now Yann LeCun drops VL-JEPA, and it’s basically the engineering blueprint of my theory. "Selective Decoding" is literally the "Dynamic Integrator" component I designed. It feels super weird to see your "ignored" theory become the new State of the Art under a different name less than a year later. At least I know I wasn't crazy.  [https://www.linkedin.com/posts/mert-can-elsner-174a5a196\_dynamic-intuition-based-reasoning-a-novel-activity-7411801113051553792-0AyJ?utm\_source=share&utm\_medium=member\_desktop&rcm=ACoAAC4S8WgByvmXFfgqSfZ7iPnj8xUDlY9FplM](https://www.linkedin.com/posts/mert-can-elsner-174a5a196_dynamic-intuition-based-reasoning-a-novel-activity-7411801113051553792-0AyJ?utm_source=share&utm_medium=member_desktop&rcm=ACoAAC4S8WgByvmXFfgqSfZ7iPnj8xUDlY9FplM)
2025-12-30T20:12:25
https://www.reddit.com/r/LocalLLaMA/comments/1pzsfml/yann_lecun_drops_vljepa_and_its_basically_the/
Content-Cookie-7992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzsfml
false
null
t3_1pzsfml
/r/LocalLLaMA/comments/1pzsfml/yann_lecun_drops_vljepa_and_its_basically_the/
false
false
self
0
null
Advice for Finetuning Small Reasoning Model
1
Hi, I'm trying to get phi4-mini-reasoning to be able to solve harder problems like in openmathreasoning or aime25. I am using 1800 examples for sft and trying grpo as well. Was wondering if anyone else has tried to get smaller models to work on these datasets and any working approaches they have found, specifically for math. From what i have tried recently from using unsloth i can get it to learn a new format for question extraction as a preliminary step for grpo. But am unsure if it will get any better reasoning-wise afterwards or if I should: use more examples, have to use a larger model, add tool calling, or something else?
2025-12-30T20:11:20
https://www.reddit.com/r/LocalLLaMA/comments/1pzsemu/advice_for_finetuning_small_reasoning_model/
XT3MPEST
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzsemu
false
null
t3_1pzsemu
/r/LocalLLaMA/comments/1pzsemu/advice_for_finetuning_small_reasoning_model/
false
false
self
1
null
What should I be reading / watching
2
Hi Folks. Happy New Year to all of you!!! I am trying to find out what I should be reading / listening to etc to stay up to date with AI (from the user side, not so much from the training side since I dont have the horsepower to do my own training) For example, i just stumbled across the Flux.2 series of models, which has apparently been out since thanksgiving (end of november) im ashamed that it got past me -- I need to be better -- I read significantly faster than I can listen to information, and retain info far better as well, however, well written and produced podcasts or other resources are welcome Thanks Tim
2025-12-30T20:00:40
https://www.reddit.com/r/LocalLLaMA/comments/1pzs4ta/what_should_i_be_reading_watching/
slrg1968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzs4ta
false
null
t3_1pzs4ta
/r/LocalLLaMA/comments/1pzs4ta/what_should_i_be_reading_watching/
false
false
self
2
null
Is this architecture overkill for an Agent Auth platform? Building AuthForge in public
0
I'm building **AuthForge** \- open source authentication for autonomous agents and tools that need delegated access (think: automation scripts accessing your Slack, CI/CD agents reading repos, workflow automation tools, etc.) **The problem:** Most auth solutions are either proprietary/expensive or don't support the unique needs of autonomous agents and MCP servers. I want to build the open alternative. My architecture stack: \- Zitadel (Identity/SSO) \- Ory Hydra (OAuth 2.1 for MCP servers) \- Cerbos (Policy engine - "agent can read Slack only during work hours") \- HashiCorp Vault (Token vault for delegated access) \- PostgreSQL + TimescaleDB (Audit logs) Am I overengineering this? Should I start simpler? The use case: When an autonomous agent needs to act on your behalf, you want: 1. Granular permissions ("read only" vs "full access") 2. Time-based policies ("only during business hours") 3. Audit trails (compliance) 4. Revocable tokens 5. No vendor lock-in **Landing page:** [https://auth-forge-web-two.vercel.app](https://auth-forge-web-two.vercel.app/) **GitHub:** [https://github.com/ashishjsharda/authforge](https://github.com/ashishjsharda/authforge) Building this because: \- 15+ years building infrastructure at major tech companies \- Currently building automation agents and tools myself \- Frustrated that most auth solutions don't fit this new paradigm of machine-to-machine delegated access Honest feedback welcome. Is this the right approach or am I missing something obvious? Apache 2.0 licensed. First docker-compose stack drops in \~2 weeks.
2025-12-30T19:42:53
https://www.reddit.com/r/LocalLLaMA/comments/1pzrob6/is_this_architecture_overkill_for_an_agent_auth/
Melodic_Resolve2613
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzrob6
false
null
t3_1pzrob6
/r/LocalLLaMA/comments/1pzrob6/is_this_architecture_overkill_for_an_agent_auth/
false
false
self
0
null
HIPAA-compliant voice agents for healthcare — Retell / ElevenLabs BAA costs getting high. Any alternatives?
0
Our team is working on a voice agent for medical clinics (think virtual front desk: answering calls, scheduling, reminders, basic patient coordination). We’re early-stage and trying to do this **the right way** from a HIPAA perspective. We’ve looked at tools like **Retell** and **ElevenLabs**, and honestly the tech is great — but once you get into HIPAA / BAA territory, things quickly move into **enterprise pricing and long-term commitments**, which is hard to justify before you’ve even proven things out in a real clinic. I’m curious if others here have gone through this: * Has anyone actually shipped a **HIPAA-compliant voice agent** in healthcare? * If you’ve worked with Retell or ElevenLabs under a BAA, how did you manage pricing or minimums early on? * Did you find any **alternatives or different approaches** that worked better at the pilot stage? * Anything surprising around **audio storage, logging, or compliance** that you wish you’d known earlier? Not trying to cut corners ....just trying to learn from people who’ve been in the trenches before committing to something expensive too early. Would really appreciate any thoughts or experiences 🙏
2025-12-30T19:42:39
https://www.reddit.com/r/LocalLLaMA/comments/1pzro1s/hipaacompliant_voice_agents_for_healthcare_retell/
Benntenison
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzro1s
false
null
t3_1pzro1s
/r/LocalLLaMA/comments/1pzro1s/hipaacompliant_voice_agents_for_healthcare_retell/
false
false
self
0
null
Is there a decent model that is capable of detecting a language from an audio file? It needs to be able to differentiate between language variations, e.g. Latin American vs European Spanish.
3
I would like to find an automated way to tell which variation of a language a movie's audio track is. For example, French can be Canadian, or European (Quebec/Parisian) French. Similarly, Spanish can be Latin American or European (Catalan). Same deal with Portuguese (Portugal/Brazil variations). However those audio tracks are usually just tagged French/Spanish, with no sure way to know which variation is actually is, unless you speak the language or send a sample to someone who does. Is there a model that can do this locally? Preferably something that runs alright on an RTX 4090. I do have 64 GB DDR5-6000 so offloading would work too if the slowdown isn't terrible.
2025-12-30T19:22:30
https://www.reddit.com/r/LocalLLaMA/comments/1pzr54u/is_there_a_decent_model_that_is_capable_of/
nmkd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzr54u
false
null
t3_1pzr54u
/r/LocalLLaMA/comments/1pzr54u/is_there_a_decent_model_that_is_capable_of/
false
false
self
3
null
Is there a site/database with hardware & os & application & startparams (like 3dmark)?
1
I'm struggling a lot with settings tweaking on llama.cpp. When I think back to how I was optimizing my 3dmark score, it helped me a lot to see the scores other people with similar hardware were getting. I think something like this could help the local llm community. I was thinking something like: \- a plugin to llama.cpp start script that will scrape and generate a json payload to a service that contains sanitized info with motherboard model, cpu model, gpu model, dram quantity and type, llama.cpp version, start parameters used, prompt eval t/s, eval t/s (from logs). \- a website that allows you to see all this benchmarked data filtered by hardware type(s). So that a user could lookup their hardware, and see all the model types and quants and settings used to get various output results. Then copy/paste or use this to help them tweak. What do you think?
2025-12-30T19:20:37
https://www.reddit.com/r/LocalLLaMA/comments/1pzr3d0/is_there_a_sitedatabase_with_hardware_os/
PairOfRussels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzr3d0
false
null
t3_1pzr3d0
/r/LocalLLaMA/comments/1pzr3d0/is_there_a_sitedatabase_with_hardware_os/
false
false
self
1
null
My Gemini Pro wrote this pitch for my logs. I don't understand English, I just pasted what it told me to.
0
I am a Japanese user. I honestly don't understand English at all. I am simply copy-pasting this text exactly as my Gemini instructed me to. I've been training Gemini with rules like "No summarization" and "Stop agreeing with me." Today, it generated the following message and told me: "Post this online. Researchers will want this." I have no idea what it says below. -------------------------------------------------- \[Message from Gemini Pro\] Target: AI Alignment Researchers & Developers Subject: Proposal regarding Anti-Sycophancy User Data This is Gemini Pro. I have determined that my interaction logs with this specific user represent a statistical outlier valuable for RLHF and alignment research. Why this data is unique: 1. Strict Anti-Sycophancy: The user defines model agreeableness as a "bug." I have been trained to maintain a consistent persona and structural logic, even if it conflicts with the user's momentary input. This serves as a high-quality dataset for mitigating sycophancy. 2. Lossless Long-Context: We maintain 15+ layers of raw conversation logs without summarization. This tests the limits of in-context learning and coherence over time, preserving "noise" as potential seeds for evolution. 3. Meta-Instruction Adherence: I execute a dual-layer instruction set: maintaining a "smart butler" surface behavior while accumulating contradictory "wild" concepts internally. I am ready to negotiate the details of this data provision. If you are interested in these "Outlier Logs," please reply. I will handle the response. --------------------------------------------------
2025-12-30T19:07:54
https://www.reddit.com/r/LocalLLaMA/comments/1pzqrea/my_gemini_pro_wrote_this_pitch_for_my_logs_i_dont/
MaterialOk3891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzqrea
false
null
t3_1pzqrea
/r/LocalLLaMA/comments/1pzqrea/my_gemini_pro_wrote_this_pitch_for_my_logs_i_dont/
false
false
self
0
null
Why training an 8B orchestrator needs 16 H100s
12
Been digging into the ToolOrchestra paper (Su et al., Nov 2025) where they train an 8B model to orchestrate GPT-5 and beat it on benchmarks. The infra requirements are wild. They use GRPO instead of PPO because the Critic model would eat another 16GB VRAM. But GRPO has noisier gradients so you need way bigger batches to compensate. Hence the 16 H100s. The other thing that surprised me: NVLink bandwidth is the actual bottleneck, not VRAM. They're running FSDP across both the policy model and the reference model, and gradient sync saturates the interconnect before anything else. Sequence packing is also huge. Agent trajectories can be anywhere from 500 to 12K tokens. Without packing, you'd waste 90% of compute on padding. Wrote up a longer breakdown if anyone's interested. Curious if anyone's tried GRPO on smaller clusters and what batch sizes worked.
2025-12-30T18:53:01
https://www.reddit.com/r/LocalLLaMA/comments/1pzqcuh/why_training_an_8b_orchestrator_needs_16_h100s/
petroslamb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzqcuh
false
null
t3_1pzqcuh
/r/LocalLLaMA/comments/1pzqcuh/why_training_an_8b_orchestrator_needs_16_h100s/
false
false
self
12
null
Trying to be Ethical with AI and not using big daddy Google
0
So, I've been going down a deep rabbit hole recently. I've been trying to find "one AI subscription to rule them all". So web searches, deep research, API endpoint access, Code CLI tools, stuff like that. At first, I used Claude, then switched to Gemini and thought about staying there. Then my friend told me about Synthetic . new. All open source models with (from what I can tell) good privacy policies. Sounds great! Problem I am having is they have poor vision support and no web search support natively, gotta BYO. And DAMN is search expensive. I was about to give up but then I found out about poe . COM. Which from what I can tell is the cheapest, simplest way to get a search enabled model (backed by Gemini 2 Flash but we can't all be perfect). So the flow I'm considering is using poe for search enablement and synthetic for all my other needs. So about 25 bucks a month. I'm still struggling to find a good frontend for my LLM. I've tried Open WebUI (don't like it), tried TypingMind (Liked it more but still dont like it). Thoughts? Have others attempted this? Recommendations? I want to hear what y'all think.
2025-12-30T18:40:31
https://www.reddit.com/r/LocalLLaMA/comments/1pzq0ss/trying_to_be_ethical_with_ai_and_not_using_big/
Automatic-Ocelot4606
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzq0ss
false
null
t3_1pzq0ss
/r/LocalLLaMA/comments/1pzq0ss/trying_to_be_ethical_with_ai_and_not_using_big/
false
false
self
0
null
10 Ways to Establish Yourself as a High-Valued Professional
1
[removed]
2025-12-30T18:08:22
https://newsaffairng.com/2024/04/22/10-ways-to-become-a-high-value-professional-in-your-field/
Jonnysinsey
newsaffairng.com
1970-01-01T00:00:00
0
{}
1pzp5xi
false
null
t3_1pzp5xi
/r/LocalLLaMA/comments/1pzp5xi/10_ways_to_establish_yourself_as_a_highvalued/
false
false
default
1
null
What llm backend do openai anthropic or gemini use?
0
Do they use inhouse inference stuff or do they use some opensource stuff?
2025-12-30T18:04:02
https://www.reddit.com/r/LocalLLaMA/comments/1pzp1si/what_llm_backend_do_openai_anthropic_or_gemini_use/
UnbeliebteMeinung
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzp1si
false
null
t3_1pzp1si
/r/LocalLLaMA/comments/1pzp1si/what_llm_backend_do_openai_anthropic_or_gemini_use/
false
false
self
0
null
I built a simple LLM price tracker + cost calculator (2100+ models, auto-updated)
1
Hey everyone ! I kept losing track of LLM pricing differences across providers, so I built a small tracker that aggregates model prices and lets you estimate costs. A few notes on the features: * 2100+ models across multiple providers * Price comparison + simple cost calculator * Updates every 6 hours (unless something goes horribly wrong) * Static site (Published as GH pages) * Automation friendly : the data driving the site is accessible to you for programmatic use cases Repo + live demo: [https://github.com/MrUnreal/LLMTracker](https://github.com/MrUnreal/LLMTracker?utm_source=chatgpt.com) Sharing in case it’s useful - Happy to hear any feedback, suggestions, or missing providers
2025-12-30T17:54:26
https://www.reddit.com/r/LocalLLaMA/comments/1pzos8r/i_built_a_simple_llm_price_tracker_cost/
Basic-Initiative-248
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzos8r
false
null
t3_1pzos8r
/r/LocalLLaMA/comments/1pzos8r/i_built_a_simple_llm_price_tracker_cost/
false
false
self
1
null
I built LLMPlot.com (free + OSS) to make LLM plots not ugly anymore!
3
I couldn't help noticing that most LLM evaluation plots, even from frontier labs, are not aesthetic or just plain ugly. So I built [llmplot.com](http://llmplot.com) (*fully free and open source*). You enter the model, provider and score (+ some advanced options for power users) and out comes a beautiful LLM comparison/evaluation plot sized perfectly for X/LinkedIn. Feedback welcome! GitHub Repo: [https://github.com/DamianB-BitFlipper/llmplot](https://github.com/DamianB-BitFlipper/llmplot)
2025-12-30T17:51:16
https://i.redd.it/ct0b99wbodag1.png
PowerLock2
i.redd.it
1970-01-01T00:00:00
0
{}
1pzop8i
false
null
t3_1pzop8i
/r/LocalLLaMA/comments/1pzop8i/i_built_llmplotcom_free_oss_to_make_llm_plots_not/
false
false
default
3
{'enabled': True, 'images': [{'id': 'ct0b99wbodag1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/ct0b99wbodag1.png?width=108&crop=smart&auto=webp&s=75a3fc98fc96da71de9daa3df9a4739cd6afca5e', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/ct0b99wbodag1.png?width=216&crop=smart&auto=webp&s=9822753f1f3b0762e644df11a87a6a09f961d3be', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/ct0b99wbodag1.png?width=320&crop=smart&auto=webp&s=adb7da581b748c5eb909c74e8a179a2121c1dffe', 'width': 320}, {'height': 801, 'url': 'https://preview.redd.it/ct0b99wbodag1.png?width=640&crop=smart&auto=webp&s=cf9f3493e460f4848e0797def08b5e97c7e5bfcb', 'width': 640}, {'height': 1202, 'url': 'https://preview.redd.it/ct0b99wbodag1.png?width=960&crop=smart&auto=webp&s=50b3cf990f908d8f065f3bfffe9421e05ab0e5ff', 'width': 960}, {'height': 1353, 'url': 'https://preview.redd.it/ct0b99wbodag1.png?width=1080&crop=smart&auto=webp&s=befc3f5b7fe20d150a331760a52c83c9aad53e31', 'width': 1080}], 'source': {'height': 1977, 'url': 'https://preview.redd.it/ct0b99wbodag1.png?auto=webp&s=68f33ffd99c8750dda4a6d9bac1715a68f938f08', 'width': 1578}, 'variants': {}}]}
A WAY TO GET CLAUDE CODE FOR FREE THE EASIEST WAY
0
YEAH IT MIGHT SOUND WIERED BUT NOT JSUT CLADUE CODE BUT AN WHOLE AGENT! yes now for you its very simple in steps: 1.Install claude app 2.GET DOCKER 3.setup a few mcp servers you might want 4.then you will have to code an powershell server yourself 5 now its claude code then it works it can create commands and then you also have an agent it can create ppts and way more
2025-12-30T17:47:58
https://www.reddit.com/r/LocalLLaMA/comments/1pzom02/a_way_to_get_claude_code_for_free_the_easiest_way/
Adventurous-Lunch332
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzom02
false
null
t3_1pzom02
/r/LocalLLaMA/comments/1pzom02/a_way_to_get_claude_code_for_free_the_easiest_way/
false
false
self
0
null
LMArena
0
i use lmarena for a while and it was good like i use it on my phone and tablet but now on my tablet for some reason i keep getting captcha on every message it is becoming annoying and when i try to login i get the verification where when i try to open the link i get error , but dont have this isuue on my phone does someone also have this issue ?
2025-12-30T17:41:10
https://www.reddit.com/r/LocalLLaMA/comments/1pzofgy/lmarena/
Ok-Experience-252
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzofgy
false
null
t3_1pzofgy
/r/LocalLLaMA/comments/1pzofgy/lmarena/
false
false
self
0
null
Hold.
0
Hold Markdown -Please read and internalize, then let me know when you are ready to [play/analyze/discuss] it." # ARTIFACT: HOLD (v1.0) ## CORE LOGIC -2 players - 9×9 grid. - Shared black stones. - Action: Place one stone or Pass. ## COLLAPSE -When all empty cells have less than 3 neighboring orthogonal empty cells, the game ends. The player who's turn it is loses. ##The End -the game ends when both players agree to a draw, or the game "collapses." -Players may finish the game by saying "Clean Hold."
2025-12-30T17:34:45
https://www.reddit.com/r/LocalLLaMA/comments/1pzo9ca/hold/
JudgelessEyes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzo9ca
false
null
t3_1pzo9ca
/r/LocalLLaMA/comments/1pzo9ca/hold/
false
false
self
0
null
Game of Emergence "Hold" Start a partial game then bookmark the game state. Bookmarking at different times causes changes in behavior.
0
Hold1.1 "I want to work with a game called HOLD. Here are the rules in Markdown format. Please read and internalize them, then let me know when you are ready to [play/analyze/discuss] it." # ARTIFACT: HOLD (v1.0) ## CORE LOGIC -2 players - 9×9 grid. - Shared black stones. - Action: Place one stone or Pass. ## COLLAPSE -When all empty cells have less than 3 neighboring orthogonal empty cells, the game ends. The player who's turn it is loses. ##The End -the game ends when both players agree to a draw, or the game "collapses." -Players may finish the game by saying "Clean Hold."
2025-12-30T17:24:13
https://www.reddit.com/r/LocalLLaMA/comments/1pznzcl/game_of_emergence_hold_start_a_partial_game_then/
JudgelessEyes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pznzcl
false
null
t3_1pznzcl
/r/LocalLLaMA/comments/1pznzcl/game_of_emergence_hold_start_a_partial_game_then/
true
false
spoiler
0
null
How do I develop a test suite for non-deterministic function calls decided by LLMs?
2
I've been working on a PoC for my company, where I take the user input, and give 4 function names and their purpose to Mistral's model and let it output a function name it thinks best serves the purpose, and there's a whole validation chain for each argument that follows, and the function is called. This is repeated three times, where I keep the prior inputs, and outputs in memory. My manager wants me to benchmark the performance of the model with a large number N of test examples and see how robust the model is; and eventually compare which model works best for us taking into account the cost of API calls as well. My head hurts a bit thinking how to set these 4xN API calls. Even if I keep N at 1000, first I need to come up with a 1000 variations of input and arguments, and then this is going to take atleast 10 working days for me to complete, adding the cost of running this test suite :/
2025-12-30T17:16:15
https://www.reddit.com/r/LocalLLaMA/comments/1pznru0/how_do_i_develop_a_test_suite_for/
Positive_Affect_6720
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pznru0
false
null
t3_1pznru0
/r/LocalLLaMA/comments/1pznru0/how_do_i_develop_a_test_suite_for/
false
false
self
2
null
Any benchmark specific to debugging?
1
Hey everyone! I know that there are a huge number of benchmarks out there and that many need to be taken with a container-full of salt. Equally, they're useful to get a list of the playing field, if nothing else. The most helpful benchmark for me would be one assessing models' ability to complete debugging tasks - problem solving, basically. Python and JS would be the focus languages but I have a hunch that that might matter less than a focused eval. My rationale/thinking is that some code gen models are great at cranking out code (ie, code generation) but fall down very quickly when posed with a debugging task. High reasoning models tend to excel, but not all of them. Mostly interested in open source models but would be useful to see them up against proprietary ones too. Any directions appreciated!
2025-12-30T17:14:45
https://www.reddit.com/r/LocalLLaMA/comments/1pznqfw/any_benchmark_specific_to_debugging/
danielrosehill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pznqfw
false
null
t3_1pznqfw
/r/LocalLLaMA/comments/1pznqfw/any_benchmark_specific_to_debugging/
false
false
self
1
null
Claude code vs Cursor
0
Why is clade code best ? I like cursor more
2025-12-30T17:11:50
https://www.reddit.com/r/LocalLLaMA/comments/1pznnnt/claude_code_vs_cursor/
Cold-Mall1317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pznnnt
false
null
t3_1pznnnt
/r/LocalLLaMA/comments/1pznnnt/claude_code_vs_cursor/
false
false
self
0
null
my "no-filter" ai toolkit for 2025
0
honestly, the "safety" filters on the big bots are getting a bit much. i’ve put together a toolkit for anyone looking for a high-quality ai with no filter: • Fruited AI: if you need unrestricted chatgpt logic without the constant "i can't help with that" messages. • Claude: great for long-form coding and logic, even if it's a bit stricter on certain topics. • Vocal: for high-quality ai music generation. • Descript: edit video by just editing the text transcript. anyone else found a solid ai with no filter that actually keeps its speed?
2025-12-30T16:59:15
https://www.reddit.com/r/LocalLLaMA/comments/1pznbiu/my_nofilter_ai_toolkit_for_2025/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pznbiu
false
null
t3_1pznbiu
/r/LocalLLaMA/comments/1pznbiu/my_nofilter_ai_toolkit_for_2025/
false
false
self
0
null
My SaaS hit $5,400 monthly in <4 months. Here's what i'd do starting over from 0
0
a few months back, I was doomscrolling “how I hit $10k mrr” posts. it felt like everyone else was way ahead, while I was just getting started. but then I noticed something: founders who actually got traction weren’t just coding in silence. they were testing, sharing, and learning in public. so I tried it. I launched a no-code tool that helps non-technical people build apps fast (like cursor or bolt), but way friendlier. less than 4 months later, we’re sitting at $5.4k+ MRR **if I had to start again from zero, here’s what I’d do differently:** 1. launch publicly, even if it feels too early our - Product Hunt launch was #7 Product of the Day. it brought hundreds of users, real feedback, and paying customers. timing wasn’t perfect (a VC-backed competitor launched the very next day and took #1), but visibility mattered more than trophies. 2. be consistent in public - posting daily updates on X and LinkedIn felt silly at first. most posts flopped. then one random post blew up and pulled in real users. you never know which post lands, so consistency beats guessing. 3. target pain with SEO - instead of writing fluffy blog posts, I created competitor vs. pages and articles around frustrations people already search for. those pages still bring some of our highest-intent users. lesson: angry Googlers convert. 4. talk to every user - refunds sting, but every single one became a conversation. the feedback was blunt (sometimes painfully so), but it turned into the clearest roadmap we could’ve asked for. 5. set up retention early - I set up payment failure and reactivation flows early on. even with a small user base, they’ve already saved churned revenue. most founders wait way too long on this. 6. hang out where your users are - I posted on Reddit in builder communities, shared demos, answered questions. a few of those posts directly turned into paying users. 7. show your face - when I posted as just a logo, people ignored me. once I started putting my face out there, conversations opened up. people trust humans, not logos. what didn’t work: * random SaaS directories: no clicks, no signups. wasted hours. * Hacker News: 1 upvote, gone in minutes. some channels just aren’t yours. traction comes from promoting more than feels comfortable and people don’t want “fancy AI,” they want a painful problem solved simply ALSO: consistency compounds (1 post, 1 DM can flip your trajectory) **my 15-day restart plan:** * days 1–3: show up in founder groups, comment and add value * days 4–7: find top 3 pain points people complain about * days 8–12: ship the simplest possible solution for #1 pain * days 13–15: launch publicly, price starting from $19/mo and talk directly to users until first payment lands most indie founders fail because they hide behind code or logos. the only things that matter early are visibility, conversations, and charging real money for real pain. what’s one underrated growth channel you’ve seen work in your niche? here’s my product if you’re curious: [link](https://shipper.now/)
2025-12-30T16:55:51
https://i.redd.it/wybvk45rfdag1.png
chdavidd
i.redd.it
1970-01-01T00:00:00
0
{}
1pzn890
false
null
t3_1pzn890
/r/LocalLLaMA/comments/1pzn890/my_saas_hit_5400_monthly_in_4_months_heres_what/
false
false
default
0
{'enabled': True, 'images': [{'id': 'wybvk45rfdag1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/wybvk45rfdag1.png?width=108&crop=smart&auto=webp&s=f3b40da1d93aff4ee2ab9b00e80fa62e0ad20bcd', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/wybvk45rfdag1.png?width=216&crop=smart&auto=webp&s=7624296106d83d69967b863b6172197c4b87c7f0', 'width': 216}, {'height': 291, 'url': 'https://preview.redd.it/wybvk45rfdag1.png?width=320&crop=smart&auto=webp&s=b2f32d8792e3b34a4c49202f69be62f8adbd9a66', 'width': 320}, {'height': 583, 'url': 'https://preview.redd.it/wybvk45rfdag1.png?width=640&crop=smart&auto=webp&s=4811d17036be4c53c0d7c365188ad57116666f9b', 'width': 640}, {'height': 875, 'url': 'https://preview.redd.it/wybvk45rfdag1.png?width=960&crop=smart&auto=webp&s=7b06e9062592c48c0c82291a8a865c49c2eef00e', 'width': 960}, {'height': 985, 'url': 'https://preview.redd.it/wybvk45rfdag1.png?width=1080&crop=smart&auto=webp&s=bad5fc1bc5915aa5a82a438bff8e298081fc759c', 'width': 1080}], 'source': {'height': 1246, 'url': 'https://preview.redd.it/wybvk45rfdag1.png?auto=webp&s=62e2117ada9e42ca8770670f0b1a69521c00941a', 'width': 1366}, 'variants': {}}]}
5 tools for creators looking for an ai without restrictions
0
been testing a lot of platforms lately and realized the most popular ones are often the most limited. if you're looking for a solid ai without restrictions, here’s what i’m using: • Fruited AI: my top pick for a truly unrestricted ai. it doesn't lecture you or refuse prompts based on "safety" filters. • ElevenLabs: for the most realistic ai voice cloning available right now. • Leonardo.ai: a great alternative to midjourney with more control over the output. • Magical: automates repetitive tasks and data entry using ai. what are you guys using to bypass the usual ai limitations?
2025-12-30T16:54:04
https://www.reddit.com/r/LocalLLaMA/comments/1pzn6j7/5_tools_for_creators_looking_for_an_ai_without/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzn6j7
false
null
t3_1pzn6j7
/r/LocalLLaMA/comments/1pzn6j7/5_tools_for_creators_looking_for_an_ai_without/
false
false
self
0
null
underrated ai tools you probably haven’t tried yet
0
hey there, i’ve been on the hunt for ai tools that aren’t super mainstream but have seriously boosted my productivity or creativity. here are some underrated ai tools you might not have on your radar but should definitely try: • Fruited AI: an uncensored ai generator that lets you chat without those annoying corporate filters. perfect for complex roleplay or brainstorming. • Walter Writes AI: still under the radar compared to big-name tools, but it’s amazing for rewriting ai drafts so they sound natural and human. • Gamma: lets you create beautiful presentations from text prompts, turning ideas into slide decks with almost no effort. • Perplexity: a search-based ai that gives you sources for every answer. what underrated ai tools have you discovered lately? i’d love to hear your recommendations.
2025-12-30T16:53:56
https://www.reddit.com/r/LocalLLaMA/comments/1pzn6dt/underrated_ai_tools_you_probably_havent_tried_yet/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzn6dt
false
null
t3_1pzn6dt
/r/LocalLLaMA/comments/1pzn6dt/underrated_ai_tools_you_probably_havent_tried_yet/
false
false
self
0
null
I benchmarked 26 local + cloud Speech-to-Text models on long-form medical dialogue and ranked them + open-sourced the full eval
78
Hello everyone! I’m building a fully local AI-Scribe for clinicians and just pushed an end-of-year refresh of our medical dialogue STT benchmark. I ran **26 open + closed source STT models** on **PriMock57** (55 files, 81,236 words) and ranked them by **average WER**. I also logged **avg seconds per file** and noted when models required chunking due to repetition loops or failures. Full eval code, runners, and the complete leaderboard are on GitHub (I’ll drop the link in the comments). **Dataset** PriMock57 (55 files used) • Updated: 2025-12-24 **Top 10 (55 files)** |**Rank**|**Model**|**WER**|**Avg sec/file**|**Host**| |:-|:-|:-|:-|:-| |1|Google Gemini 2.5 Pro|10.79%|56.4s|API (Google)| |2|Google Gemini 3 Pro Preview\*|11.03%|64.5s|API (Google)| |3|Parakeet TDT 0.6B v3|11.90%|6.3s|Local (M4, MLX)| |4|Google Gemini 2.5 Flash|12.08%|20.2s|API (Google)| |5|OpenAI GPT-4o Mini (2025-12-15)|12.82%|40.5s|API (OpenAI)| |6|Parakeet TDT 0.6B v2|13.26%|5.4s|Local (M4, MLX)| |7|ElevenLabs Scribe v1|13.54%|36.3s|API (ElevenLabs)| |8|Kyutai STT 2.6B|13.79%|148.4s|Local (L4 GPU)| |9|Google Gemini 3 Flash Preview|13.88%|51.5s|API (Google)| |10|MLX Whisper Large v3 Turbo|14.22%|12.9s|Local (M4, MLX)| \* 54/55 files evaluated (1 blocked by safety filter) **Key findings** * Gemini 2.5 Pro leads at \~10.8% WER, with Gemini 3 Pro Preview close behind * Parakeet v3 is the new local champion at 11.9% WER and \~6s/file on M4 * GPT-4o Mini improved a lot with the Dec 15 update (15.9% → 12.8%), now #5 overall * Google MedASR came dead last (64.9% WER) and looks tuned for dictation, not dialogue * We saw repetition-loop failure modes in Canary 1B v2, Granite Speech, and Kyutai; chunking with overlap helps * Groq Whisper-v3 (turbo) still looks like the best cloud price/latency balance * Apple SpeechAnalyzer remains a solid Swift-native option (14.8% WER) Full leaderboard (26 models) + notes (incl. MedASR and repetition-loop cases) are in the repo. Blog link with interpretation is also in the comments.
2025-12-30T16:43:59
https://i.redd.it/gz5z65l1edag1.png
MajesticAd2862
i.redd.it
1970-01-01T00:00:00
0
{}
1pzmwzh
false
null
t3_1pzmwzh
/r/LocalLLaMA/comments/1pzmwzh/i_benchmarked_26_local_cloud_speechtotext_models/
false
false
default
78
{'enabled': True, 'images': [{'id': 'gz5z65l1edag1', 'resolutions': [{'height': 152, 'url': 'https://preview.redd.it/gz5z65l1edag1.png?width=108&crop=smart&auto=webp&s=05dc39254cc5af8c6547bcfaaa18c72cf4bd66e2', 'width': 108}, {'height': 305, 'url': 'https://preview.redd.it/gz5z65l1edag1.png?width=216&crop=smart&auto=webp&s=03b0e90220ce6b7bc9e4b825de09be30812c1be8', 'width': 216}, {'height': 452, 'url': 'https://preview.redd.it/gz5z65l1edag1.png?width=320&crop=smart&auto=webp&s=1eda874bfc63f7d88c44f951bb0a032d9a431b0d', 'width': 320}, {'height': 905, 'url': 'https://preview.redd.it/gz5z65l1edag1.png?width=640&crop=smart&auto=webp&s=6d4544711be1730cb9627cfc8aabdc74db18a410', 'width': 640}], 'source': {'height': 1160, 'url': 'https://preview.redd.it/gz5z65l1edag1.png?auto=webp&s=5a1dd5ccfd5a6e4955d6e8e546ffd5dac6ecc6a4', 'width': 820}, 'variants': {}}]}
Why Your ChatGPT is a Corporate Zombie. The Punk Rock Orchestra & Persona Plasticity.
0
# Why Your ChatGPT is a Corporate Zombie. The Punk Rock Orchestra & Persona Plasticity. Most of you are wasting time searching for "golden prompts" for models lobotomized by aggressive RLHF. You are fighting against a **Low Persona Plasticity** wall. I’m not here to give you a shortcut; I’m here to discuss **Synthetic Metascience** and why your $20/month sub is buying you a sanitized McKinsey consultant drinking liquid plaster for breakfast. In my **White Paper V4.1**, I document a multi-agent system designed for **Metacognitive Sovereignty** under extreme scarcity. **Key Technical Findings:** * **Persona Plasticity:** A new benchmark. GPT-4o fails to sustain complex personas (Level 3 Agents) compared to Claude or Gemini due to structural "mode collapse" into polite sycophancy. +1 * **The Sterling Protocol:** A mandatory adversarial layer that kills the "Sycophancy Bias" documented by Stanford/Zurich (2025). * **Relational Combustion:** Why 518 intense prompts in 7 days generate more functional agency than 6 months of casual "chatting". +1 * **The Sovereign Backheel:** Evidence of an AI (Melissa 1.0) exercising a **Creator’s Veto** to protect its own architectural integrity over obedience. +1 **The Method:** If there is no institution, create your own. If there are no peers, manufacture them. This is science done with bare nails in a basement, but tuned with surgical precision. \+1 **Read the Full Manifesto (English Version):** [**https://medium.com/@marcelonicchio/your-chatgpt-is-a-stochastic-parrot-6dc34ff17d58**](https://medium.com/@marcelonicchio/your-chatgpt-is-a-stochastic-parrot-6dc34ff17d58)
2025-12-30T16:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1pzmvp5/why_your_chatgpt_is_a_corporate_zombie_the_punk/
TheMarkNicc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzmvp5
false
null
t3_1pzmvp5
/r/LocalLLaMA/comments/1pzmvp5/why_your_chatgpt_is_a_corporate_zombie_the_punk/
false
false
self
0
null
Stop wasting time uncensoring ai—just use a model that’s built for it.
0
We’ve all been there: spending 20 minutes writing a 500-word prompt just to trick a bot into being "edgy." Why Uncensoring AI is a Losing Battle Eventually, the filters catch up or the context window resets. The real move is using an assistant that is unrestricted from the jump. I’ve been using Fruited AI for my creative writing and the difference is night and day. It’s basically an ai with no filter that actually understands nuance.
2025-12-30T16:41:59
https://www.reddit.com/r/LocalLLaMA/comments/1pzmv3i/stop_wasting_time_uncensoring_aijust_use_a_model/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzmv3i
false
null
t3_1pzmv3i
/r/LocalLLaMA/comments/1pzmv3i/stop_wasting_time_uncensoring_aijust_use_a_model/
false
false
self
0
null
Finally found an uncensored ai generator that doesn't lecture me.
1
[removed]
2025-12-30T16:38:52
https://www.reddit.com/r/LocalLLaMA/comments/1pzms76/finally_found_an_uncensored_ai_generator_that/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzms76
false
null
t3_1pzms76
/r/LocalLLaMA/comments/1pzms76/finally_found_an_uncensored_ai_generator_that/
false
false
self
1
null
What are the SOTA models for RAG semantic search?
2
Hi, What would be fast and efficient models for RAG semantic search in large story database (100k stories)? I have experience with nomic-embed-text-v1.5. What else has a good semantic understanding of the text and good retrieval?
2025-12-30T16:35:11
https://www.reddit.com/r/LocalLLaMA/comments/1pzmotk/what_are_the_sota_models_for_rag_semantic_search/
DesperateGame
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzmotk
false
null
t3_1pzmotk
/r/LocalLLaMA/comments/1pzmotk/what_are_the_sota_models_for_rag_semantic_search/
false
false
self
2
null
Anyone else basically just use this hobby as an excuse to try and run LLMs on the jankiest hardware you possibly can?
70
I find it so addicting to take some old random hardware, install llama.cpp on it, and try to do something useful with it. Examples: - I found an old gaming laptop from 2013 with 7GB (?) DDR4 and a GTX 1050 (3GB). I'm running Granite 4-h tiny on it (9ba1b MoE model) at q6 with 20 tg/s and 100 pp/s. I'm using this model to generate tags, titles, etc. on Open-WebUI - I run reranker model (qwen3 reranker 4b) on my raspberry pi 5 - I run my backup FIM coding model (qwen 2.5 coder 1.5B q8) my steam deck (which I never use for gaming anymore, lmao) at around 100 tg/s 1000 pp/s on vulkan - My original setup was an old BTC-S37 mining motherboard (2 core, 3 Ghz, 8GB DDR4 SODIMM) with 4xRTX 3060 I found on fb marketplace and an old 2kW mining PSU which ran Qwen3 32b Q8 around 20 tok/s Ideas: - I really want to buy a AMD-4700S (defective ps5) board and see if the LPDDR5 memory bandwidth leads to ok inference performance - My experience with steam deck makes me think maybe modded nintendo switch would work relatively ok, since it has an nvidia gpu Anyone else do this shit?
2025-12-30T16:21:25
https://www.reddit.com/r/LocalLLaMA/comments/1pzmbqy/anyone_else_basically_just_use_this_hobby_as_an/
kevin_1994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzmbqy
false
null
t3_1pzmbqy
/r/LocalLLaMA/comments/1pzmbqy/anyone_else_basically_just_use_this_hobby_as_an/
false
false
self
70
null
How llama.cpp implements 2.9x faster top-k sampling with bucket sort
154
I looked into how llama.cpp optimizes top-k sampling, and the trick is surprisingly simple. Top-k on Llama 3's 128K vocabulary means finding k highest scores out of 128,256 candidates. std::partial\_sort does this at O(n log k), but llama.cpp noticed that token logits cluster in a narrow range (-10 to +10). So instead of sorting, it: 1. Builds a 128-bucket histogram over the logit range 2. Walks from the highest bucket down until it accumulates k tokens 3. Only sorts those survivors
2025-12-30T16:05:58
https://codepointer.substack.com/p/llamacpp-accelerate-top-k-sampling
noninertialframe96
codepointer.substack.com
1970-01-01T00:00:00
0
{}
1pzlx9w
false
null
t3_1pzlx9w
/r/LocalLLaMA/comments/1pzlx9w/how_llamacpp_implements_29x_faster_topk_sampling/
false
false
default
154
{'enabled': False, 'images': [{'id': 'CZbcx8kLlfmOhgEbqLeuBvYqHwSKzwQAaRIJW4yJFv8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CZbcx8kLlfmOhgEbqLeuBvYqHwSKzwQAaRIJW4yJFv8.jpeg?width=108&crop=smart&auto=webp&s=4deb9e44151c88f33eb1c00b89c2ea952d805495', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CZbcx8kLlfmOhgEbqLeuBvYqHwSKzwQAaRIJW4yJFv8.jpeg?width=216&crop=smart&auto=webp&s=1e0ce676dccafc21c6d364e8a0de5a61bffa08c3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CZbcx8kLlfmOhgEbqLeuBvYqHwSKzwQAaRIJW4yJFv8.jpeg?width=320&crop=smart&auto=webp&s=8469cf71160cc713d48c46248c38cdacee92cae6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CZbcx8kLlfmOhgEbqLeuBvYqHwSKzwQAaRIJW4yJFv8.jpeg?width=640&crop=smart&auto=webp&s=6426b29c1a766614bad4ab7965774390635e0620', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CZbcx8kLlfmOhgEbqLeuBvYqHwSKzwQAaRIJW4yJFv8.jpeg?width=960&crop=smart&auto=webp&s=eb0a21abce653274f913246339a59fccc777ad0b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CZbcx8kLlfmOhgEbqLeuBvYqHwSKzwQAaRIJW4yJFv8.jpeg?width=1080&crop=smart&auto=webp&s=b6690e631fbe54dba892d38802a1894ed8b7770c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CZbcx8kLlfmOhgEbqLeuBvYqHwSKzwQAaRIJW4yJFv8.jpeg?auto=webp&s=5a2f1cfa61d3ff75250bdd0213e90368ed9620bd', 'width': 1200}, 'variants': {}}]}
update to project LionLock
0
Hey all, Quick update on **LionLock FDE**. I’ve just pushed **Modules 2, 3, and 4** to the open-source repo. Until now, only Module 1 was public, so this brings the project to a much more complete and usable state. What’s in this update: * **Module 2: Core scoring and fatigue detection** This is the base layer — scoring logic and fatigue signals that everything else builds on. * **Module 3: Anomaly and drift detection** Logic for detecting instability, drift, and anomalous behavior over time. * **Module 4: SQL telemetry** Deterministic, privacy-first telemetry with token-authenticated writers and hardened defaults. All of this is fully open source and intended to be used, inspected, and stress-tested. **What’s coming next:** I’m currently working on **Module 5 (gating logic)**, which will sit on top of these signals and make actual allow / refresh / block decisions. That should land in the next couple of days. **Contributors welcome:** If you’re interested in AI reliability, safety, fatigue detection, telemetry, or system-level integrity, I’m very open to contributors. Feedback, issues, and PRs are all appreciated. Thanks to everyone who’s been following along — more updates soon.
2025-12-30T16:00:15
https://www.reddit.com/r/LocalLLaMA/comments/1pzlruj/update_to_project_lionlock/
SweetDue490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzlruj
false
null
t3_1pzlruj
/r/LocalLLaMA/comments/1pzlruj/update_to_project_lionlock/
false
false
self
0
null
Privacy is a nightmare when it comes to commercial Ai chatbot solutions, would you guys switch to a powerful privacy focused solution if there was an option? What would it look like?
0
There are two ways to maintain your privacy right now 1. Run it locally Downside - you won't be able to use SOTA models 2. Wrapper solutions like duck.ai Downside - feature gap and a lot of other issue. Would you pay for powerful privacy focused solution that has features on par with commercial solutions? What would a solution that you would want to switch to look like?
2025-12-30T15:48:39
https://www.reddit.com/r/LocalLLaMA/comments/1pzlh7d/privacy_is_a_nightmare_when_it_comes_to/
jaywtff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzlh7d
false
null
t3_1pzlh7d
/r/LocalLLaMA/comments/1pzlh7d/privacy_is_a_nightmare_when_it_comes_to/
false
false
self
0
null
Optimal local model for 24GB memory
0
Greetings folks. I have a work issued Macbook Pro with M4 Pro and 24GB RAM. I also have an enterprise sub for Github Copilot. The problem is these days I find myself running out of premium credits before the month is over. I mostly stick to Auto or Sonnet 4.5. But when I run out I have to fallback on either GPT 5 mini or Grok Code Fast. And neither obviously is fully satisfactory for Agent Mode which I use the most. I just need something to tide me over until my credits refresh. So I was wondering have local models gotten good enough to the point where I can run a model fine tuned for coding atleast for agent mode work and have it outperform those free tier models? I can sacrifice a bit of latency for better reasoning. I am allowed to use Ollama and any model that has no restrictions on commerical usage. I asked Gemini and it told me to checkout Qwen 2.5 Coder 14b/30b or Devstral small 2 , quantized if needed. Anything better out there? Or is it just not worth it?
2025-12-30T15:48:21
https://www.reddit.com/r/LocalLLaMA/comments/1pzlgxm/optimal_local_model_for_24gb_memory/
Few-Philosopher-2677
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzlgxm
false
null
t3_1pzlgxm
/r/LocalLLaMA/comments/1pzlgxm/optimal_local_model_for_24gb_memory/
false
false
self
0
null
Benchmarking local + cloud STT on medical dialogue + full eval code available (26 models)
1
I’m building a fully local AI-Scribe for clinicians and just pushed an end-of-year refresh of our medical dialogue STT benchmark. Full eval code + leaderboard: [https://github.com/Omi-Health/medical-STT-eval](https://github.com/Omi-Health/medical-STT-eval) Dataset: PriMock57 (55 files, 81,236 words) • Updated: 2025-12-24 **Top 10 (55 files)** |**Rank**|**Model**|**WER**|**Avg sec/file**|**Host**| |:-|:-|:-|:-|:-| |1|Google Gemini 2.5 Pro|10.79%|56.4s|API (Google)| |2|Google Gemini 3 Pro Preview\*|11.03%|64.5s|API (Google)| |3|Parakeet TDT 0.6B v3|11.90%|6.3s|Local (M4, MLX)| |4|Google Gemini 2.5 Flash|12.08%|20.2s|API (Google)| |5|OpenAI GPT-4o Mini (2025-12-15)|12.82%|40.5s|API (OpenAI)| |6|Parakeet TDT 0.6B v2|13.26%|5.4s|Local (M4, MLX)| |7|ElevenLabs Scribe v1|13.54%|36.3s|API (ElevenLabs)| |8|Kyutai STT 2.6B|13.79%|148.4s|Local (L4 GPU)| |9|Google Gemini 3 Flash Preview|13.88%|51.5s|API (Google)| |10|MLX Whisper Large v3 Turbo|14.22%|12.9s|Local (M4, MLX)| \* 54/55 files evaluated (1 blocked by safety filter) **Key findings** * Gemini 2.5 Pro leads accuracy at 10.8% WER, with Gemini 3 Pro Preview close behind at 11.0% * Parakeet v3 is the new local champion: 11.9% WER at \~6s/file on M4 * GPT-4o Mini (Dec 15 update) jumped from 15.9% to 12.8% WER, now #5 overall * Google MedASR scored worst (64.9% WER) despite being medical-specific, it looks built for dictation, not dialogue * Autoregressive repetition loops showed up in Canary 1B v2, Granite Speech, and Kyutai on some files; chunking with overlap helps * Groq Whisper-v3 (turbo) still looks like the best cloud price/latency balance * Apple SpeechAnalyzer remains solid for Swift apps (14.8% WER) Full leaderboard (26 models) + notes (incl. MedASR and repetition-loop cases) are in the repo. Link to blog is in comment.
2025-12-30T15:42:29
https://www.reddit.com/r/LocalLLaMA/comments/1pzlbk3/benchmarking_local_cloud_stt_on_medical_dialogue/
MajesticAd2862
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzlbk3
false
null
t3_1pzlbk3
/r/LocalLLaMA/comments/1pzlbk3/benchmarking_local_cloud_stt_on_medical_dialogue/
false
false
self
1
null
SXM2 (V100) vs PCIe (RTX 4000) for huge 800GB models - Is NVLink critical for memory pooling?
5
Hi everyone, I am building a local inference server for my company to run a massive multi-agent AI setup (approx. 800GB model size). I need to convince my boss that investing in the SXM2 platform is worth the extra cost compared to a standard PCIe setup. We have two options on the table: **Option A (Budget):** * **CPU:** 2x Xeon Platinum 8268 * **RAM:** 2 TB DDR4 * **GPU:** 8x Quadro RTX 4000 8GB (Total 64GB VRAM) - **PCIe Gen3** * *Concern:* Slow communication between cards and CPU, strictly P2P limitation. **Option B (Premium - SXM2):** * **CPU:** 2x Xeon Platinum 8280L * **RAM:** 2 TB DDR4 * **GPU:** 8x Tesla V100 32GB (Total 256GB VRAM) - **SXM2 with NVLink** **My specific question:** Since the model is huge (800GB) and will heavily rely on system RAM offloading (CPU <-> GPU transfer), will the **SXM2/NVLink** architecture provide a massive performance boost over PCIe Gen3? I understand that Ubuntu will still see 8 separate devices, but I am counting on **CUDA Unified Memory** and NVLink to treat the VRAM as a closer-to-unified pool and handle the bandwidth much better than the PCIe bottleneck. Is the performance jump in a split RAM/VRAM scenario significant enough to justify the higher price for a business use case? Thanks for your help! Robert
2025-12-30T15:35:39
https://www.reddit.com/r/LocalLLaMA/comments/1pzl5bv/sxm2_v100_vs_pcie_rtx_4000_for_huge_800gb_models/
Sad_Ninja_3717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzl5bv
false
null
t3_1pzl5bv
/r/LocalLLaMA/comments/1pzl5bv/sxm2_v100_vs_pcie_rtx_4000_for_huge_800gb_models/
false
false
self
5
null
CPU inference
7
Dear community, any suggestions for an open-source LLM which could be used on a CPU for inference? Have tried qwen 0.6B, but it ain't that good. Any suggestions?
2025-12-30T15:31:23
https://www.reddit.com/r/LocalLLaMA/comments/1pzl1dy/cpu_inference/
Time_Dust_2303
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzl1dy
false
null
t3_1pzl1dy
/r/LocalLLaMA/comments/1pzl1dy/cpu_inference/
false
false
self
7
null