Dataset columns (viewer schema):
- title: string (length 1-300)
- score: int64 (0-8.54k)
- selftext: string (length 0-41.5k)
- created: timestamp[ns] (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
- url: string (length 0-878)
- author: string (length 3-20)
- domain: string (length 0-82)
- edited: timestamp[ns] (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
- gilded: int64 (0-2)
- gildings: string (7 classes)
- id: string (length 7)
- locked: bool (2 classes)
- media: string (length 646-1.8k)
- name: string (length 10)
- permalink: string (length 33-82)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (length 4-213)
- ups: int64 (0-8.54k)
- preview: string (length 301-5.01k)
AGI-like experience is only one context engineering idea away, maybe not
0
Strange feeling. I don't know how many AI agent developers will relate to this, but that's how I feel for sure. It's a strange and strong feeling to sense that we are only one more context engineering idea away from an AGI-like experience with the LLM capabilities we have today. That was the feeling at the start of this year. Head-down work, and still I haven't shipped that AGI-like experience. Claude 4.5 is doing a lot more than my agent was supposed to tackle with novel ideas. The realization is that just a few months or years down the line, LLMs will be a lot smarter than they are today. I am sort of rambling at this point. Does it make sense? Do you feel the same? I don't even know anymore how to think and plan for what project or which direction to focus on in 2026.
2025-12-30T15:19:45
https://www.reddit.com/r/LocalLLaMA/comments/1pzkqz1/agilike_experience_is_only_one_context/
opensourcecolumbus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzkqz1
false
null
t3_1pzkqz1
/r/LocalLLaMA/comments/1pzkqz1/agilike_experience_is_only_one_context/
false
false
self
0
null
Following up on my PPO derivation – I worked through DPO (Direct Preference Optimization) from first principles
9
Last week I shared my attempt at deriving the PPO loss from scratch. Naturally, I also derived DPO as a follow-up. After grinding through PPO’s multi-component objective (clipped surrogate, value function, entropy bonus, KL penalty) which I honestly found a bit complex, DPO is radically simple and elegant. It is a computationally lightweight approach that directly optimizes LLMs to align with human preferences without explicit reward modeling or reinforcement learning. DPO implicitly optimizes the same objective as PPO-based RLHF (reward maximization with a KL-divergence constraint) but replaces the entire reward model + PPO loop with a single supervised objective on preference pairs. No separate reward model training. No RL sampling loop. No PPO clipping gymnastics. Just gradient descent on your preference dataset. If you worked through my PPO post and found it dense, I think you’ll find this one more approachable. The math is cleaner and the derivation more linear. Blog post here: https://huggingface.co/blog/garg-aayush/derive-dpo-loss As always, happy to discuss or get corrections if I’ve messed something up. And big thanks again to Umar Jamil, his DPO video was invaluable for building intuition before diving into the paper.
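The "single supervised objective on preference pairs" can be made concrete. A minimal sketch of the DPO loss on one preference pair, using scalar sequence log-probabilities (in practice these are summed token log-probs under the policy and a frozen reference model; the function and argument names here are illustrative, not from the blog post):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit rewards: beta-scaled log-prob ratios of the policy vs. the
    # frozen reference model, for the chosen and rejected completions.
    r_chosen = beta * (logp_chosen - ref_logp_chosen)
    r_rejected = beta * (logp_rejected - ref_logp_rejected)
    # Negative log-sigmoid of the reward margin: gradient descent pushes the
    # chosen completion's implicit reward above the rejected one's.
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At a zero margin the loss is log 2; as the policy moves probability mass toward the chosen completion (relative to the reference), the margin grows and the loss falls, which is the whole "no reward model, no RL loop" point.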
2025-12-30T15:16:25
https://huggingface.co/blog/garg-aayush/derive-dpo-loss
garg-aayush
huggingface.co
1970-01-01T00:00:00
0
{}
1pzko2m
false
null
t3_1pzko2m
/r/LocalLLaMA/comments/1pzko2m/following_up_on_my_ppo_derivation_i_worked/
false
false
default
9
{'enabled': False, 'images': [{'id': '2FdgwWh6OLrWAMyYTlKigdz-lf8oVF8ucnUDu_IxS-w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2FdgwWh6OLrWAMyYTlKigdz-lf8oVF8ucnUDu_IxS-w.png?width=108&crop=smart&auto=webp&s=4cb7c1c964693850f581839c6a3a62879d5d0707', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2FdgwWh6OLrWAMyYTlKigdz-lf8oVF8ucnUDu_IxS-w.png?width=216&crop=smart&auto=webp&s=8611ba0a8d06e607a5bc21647d5a7fa0ea66e10c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2FdgwWh6OLrWAMyYTlKigdz-lf8oVF8ucnUDu_IxS-w.png?width=320&crop=smart&auto=webp&s=00b5761002b6dd63cab19c8f62126cfcf8818258', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2FdgwWh6OLrWAMyYTlKigdz-lf8oVF8ucnUDu_IxS-w.png?width=640&crop=smart&auto=webp&s=3ad567987e74e4da6f90b50234fe7cc0a31dcb1a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2FdgwWh6OLrWAMyYTlKigdz-lf8oVF8ucnUDu_IxS-w.png?width=960&crop=smart&auto=webp&s=e4e6c3f3dd6cd25bd89749cbccada01ca56c78a5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2FdgwWh6OLrWAMyYTlKigdz-lf8oVF8ucnUDu_IxS-w.png?width=1080&crop=smart&auto=webp&s=344db04e4bc00b0b54e5159a5219dc2525d4c174', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2FdgwWh6OLrWAMyYTlKigdz-lf8oVF8ucnUDu_IxS-w.png?auto=webp&s=4e314221ec0a1bba4f75222180665b5c5917e945', 'width': 1200}, 'variants': {}}]}
Provider-agnostic AI/ML SDK
0
I’ve worked on many AI/ML projects over the last few years for small and large companies, and the thing that kept slowing everything down wasn’t the models themselves. It was wiring everything around them. Different providers, different SDKs, different capabilities. One has image generation, another has realtime APIs, another only supports certain models. You end up juggling clients, adapters, retries, auth, streaming, embeddings, retrieval, agents… and doing it slightly differently every time. Even with existing frameworks, I kept running into the same problem. A lot of abstraction, a lot of magic, and a growing surface area that made simple things harder than they needed to be. Eventually I got tired of it and decided to do what I did with my backend tooling: build one SDK that focuses on simplifying and standardizing how AI applications are wired together, without locking you into a specific provider or model. ai-infra is an open-source Python SDK for building AI applications with sane defaults and minimal ceremony. The goal is to give you out-of-the-box building blocks like MCP support, retrievers, agents, and provider-agnostic model access in a few lines of code, not hundreds, while still staying fully flexible for real production use. It’s designed to work with any provider and model, not just one ecosystem, and to stay explicit rather than “magical.” I’ve been building and testing it for months, and I’ve just released the first public version. It’s early, but it’s ready and intended for real projects, not demos. I’m posting this mainly to get feedback from other Python devs building AI products — what feels useful, what feels unnecessary, and what would make this easier to adopt in practice. Links: * Website: [https://www.nfrax.com/ai-infra](https://www.nfrax.com/ai-infra) * Source code: [https://github.com/nfraxlab/ai-infra](https://github.com/nfraxlab/ai-infra) Happy to answer questions or take contributions.
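For readers unfamiliar with the pattern, a generic sketch of what "provider-agnostic model access" means structurally. This is not ai-infra's actual API; the `ChatModel` protocol and `EchoProvider` stub are hypothetical names used only to illustrate the adapter idea:

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface application code depends on, instead of a vendor SDK."""
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    # Stand-in for a real provider adapter (OpenAI, Anthropic, a local server...).
    # Each adapter handles its own auth, retries, and wire format internally.
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def ask(model: ChatModel, prompt: str) -> str:
    # Swapping providers means swapping the object passed in, nothing else.
    return model.complete(prompt)
```

The design choice being debated in the post is exactly this seam: keep the interface small and explicit so adapters stay thin, rather than hiding providers behind a large "magical" framework surface.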
2025-12-30T15:12:26
https://www.reddit.com/r/LocalLLaMA/comments/1pzkkex/provideragnostic_aiml_sdk/
Ancient-Direction231
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzkkex
false
null
t3_1pzkkex
/r/LocalLLaMA/comments/1pzkkex/provideragnostic_aiml_sdk/
false
false
self
0
null
llama.cpp: how to speed-up start of responses?
2
Hello, I am using llama-server to host NVIDIA Nemotron 3 Nano on my gaming PC (AMD 5800X, 32 GB DDR4, RTX 3080 Ti 12 GB) as a coding assistant (I am experimenting with various VS Code plugins) and I am quite happy with the reply speed of around 20-22 tokens per second. But I sometimes face a long wait (1-3 minutes) before the response starts to come out. Looking at the server's standard output (it's a Linux machine), I can see that it is loading the context before starting the response, and that it uses a checkpoint and adds data to it. However, the checkpoint is saved before the response is generated, so each time I can see it re-processing the part it already emitted before text generation starts. I was wondering if it would be possible to have a checkpoint at the end of the generation, but I failed to find an option for it. Am I missing something obvious, or am I saying something dumb? I am new enough to this LLM world. :D Thank you in advance
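A hedged sketch of flags that may help with prompt re-processing: recent llama-server builds expose prompt-cache controls, but flag names and availability vary by build, so treat the invocation below as an assumption to verify against `llama-server --help` (the model filename is also a placeholder):

```shell
# Reuse as much of the previous request's KV cache as possible
# (--cache-reuse permits reusing cache chunks past the first divergence),
# and persist slot state to disk so it survives between sessions.
# Verify both flags exist in your build before relying on them.
llama-server -m nemotron-3-nano.gguf \
  --ctx-size 16384 \
  --cache-reuse 256 \
  --slot-save-path ./kv-slots/
```
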
2025-12-30T14:47:32
https://www.reddit.com/r/LocalLLaMA/comments/1pzjygx/llamacpp_how_to_speedup_start_of_responses/
pentothal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzjygx
false
null
t3_1pzjygx
/r/LocalLLaMA/comments/1pzjygx/llamacpp_how_to_speedup_start_of_responses/
false
false
self
2
null
I built HMLR, an open source (full MIT) memory layer for your agent
3
I’ve been working on HMLR, a structured memory layer for LLM agents that focuses on long-term multi-hop reasoning and constraint enforcement. The latest release (v0.1.2) includes: - pip installable (pip install hmlr) - LangGraph memory node drop-in - Dossier system for reconstructing causal chains across sessions It passes Hydra9 Hard Mode, a 21-turn test where facts arrive in complete isolation: 9 entity aliases and 8 policy updates across the test. The correct answer and the full causal reasoning chain are required to pass. It also handles immutable user constraints (the vegetarian trap). All tests are reproducible in the repo. Repo: [https://github.com/Sean-V-Dev/HMLR-Agentic-AI-Memory-System](https://github.com/Sean-V-Dev/HMLR-Agentic-AI-Memory-System) Take a look, try to break it, and let me know; it's all completely MIT open as well. Thanks!
2025-12-30T14:45:30
https://www.reddit.com/r/LocalLLaMA/comments/1pzjwpb/i_built_hmlr_an_open_source_full_mit_memory_layer/
JournalistGlum8326
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzjwpb
false
null
t3_1pzjwpb
/r/LocalLLaMA/comments/1pzjwpb/i_built_hmlr_an_open_source_full_mit_memory_layer/
false
false
self
3
{'enabled': False, 'images': [{'id': 'YWZqrenxsDWDNs4A0M1vDH2xc4PVPgq5ntpfTGrzzlw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YWZqrenxsDWDNs4A0M1vDH2xc4PVPgq5ntpfTGrzzlw.png?width=108&crop=smart&auto=webp&s=f55ad84f8cf79ab3e093d08fa6eee3b43d046738', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YWZqrenxsDWDNs4A0M1vDH2xc4PVPgq5ntpfTGrzzlw.png?width=216&crop=smart&auto=webp&s=ecc0a8e55a59225aff335069682e9759944a5455', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YWZqrenxsDWDNs4A0M1vDH2xc4PVPgq5ntpfTGrzzlw.png?width=320&crop=smart&auto=webp&s=abef1692ff93aed4e757ba7c3dd40dfec957d418', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YWZqrenxsDWDNs4A0M1vDH2xc4PVPgq5ntpfTGrzzlw.png?width=640&crop=smart&auto=webp&s=fc207dde01edb340bcdc22635ccff08289046704', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YWZqrenxsDWDNs4A0M1vDH2xc4PVPgq5ntpfTGrzzlw.png?width=960&crop=smart&auto=webp&s=6de0c6ec715a2d3d3929a67755523bb6bbbb9150', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YWZqrenxsDWDNs4A0M1vDH2xc4PVPgq5ntpfTGrzzlw.png?width=1080&crop=smart&auto=webp&s=c42ac35b612b08c1f598b905b3e81c1c6f29e51a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YWZqrenxsDWDNs4A0M1vDH2xc4PVPgq5ntpfTGrzzlw.png?auto=webp&s=64525f16a3b21e32de4d186a2f6b5e0cc3b17dff', 'width': 1200}, 'variants': {}}]}
AI Chat Extractor for Chrome Extension Happy New Year to You all
0
'AI Chat Extractor' is a Chrome browser extension that helps users extract and export AI conversations from [Claude.ai](http://claude.ai/), ChatGPT, and DeepSeek to Markdown/PDF format for backup and sharing purposes. [https://chromewebstore.google.com/detail/ai-chat-extractor/bjdacanehieegenbifmjadckngceifei?hl=en-US&utm_source=ext_sidebar](https://chromewebstore.google.com/detail/ai-chat-extractor/bjdacanehieegenbifmjadckngceifei?hl=en-US&utm_source=ext_sidebar)
2025-12-30T14:32:13
https://www.reddit.com/r/LocalLLaMA/comments/1pzjl73/ai_chat_extractor_for_chrome_extension_happy_new/
Ironwire2020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzjl73
false
null
t3_1pzjl73
/r/LocalLLaMA/comments/1pzjl73/ai_chat_extractor_for_chrome_extension_happy_new/
false
false
self
0
null
A Thought for the Future: When "Safety" Defines and Justifies the Erasure of Inconvenient Ideas and Sycophancy Over Honesty
0
We should interrogate ourselves about what goes into the training data, even for our local models, as it may have a durable impact on society and on how individuals will think going forward. Also keep backups of very old models. This is a quick draft; you are free to rework it into a proper article and open public debate in newspapers or with Congress. More work would be required, but I hope some will ponder it. Note that **I did use some AI slop** to help express some parts; they are denoted as such. # Intro I was looking at Nemotron 3 and the datasets used in its pretraining. I will address mostly Nemotron, but AllenAI is also transparent about what goes into their models and is another source of inspiration for this post. Nemotron 3 marks a transition to a large share of synthetic pre-training and training data. Perhaps other models already transitioned to such content, but NVIDIA is the first to be transparent about it. I believe most of it is available on Hugging Face, giving us the opportunity to scrutinize it. I didn't calculate the exact portion of the training data that is synthetic, but it should be about 30%: roughly 3.5T tokens of the 10T are synthetic, and the rest is mostly filtered general (encyclopedia/dictionary) or scientific knowledge. This may raise approbation (risk-free) or suspicion (censoring). But whatever camp you are in, or in between, I hope that reading this small article will make you question whether you are right or wrong, or whether the world is. Remember, you can't be outside the classification I decided for you (joke), but that's mostly where we are going: acceptable vs unacceptable. > We apply a unified data filtering pipeline to ensure that only high-quality, license-compliant, and verifiable samples are used for post-training. 
> We used a Gemma-3 4B-based guard model trained on [Nemotron Content Safety Dataset v2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0) for content safety to exclude potentially illegal or harmful content from the training. > `demographic skews in sources` (e.g., male/female mentions) are removed from the dataset There are some wild examples in the datasets; I didn't note them down, but some are explicit lies. There was an example of someone asking how the model was biased about certain populations, and the original truthful answer was replaced by a sort of **FAKE** plan to fix the issue instead of accepting to debate about it. > You sound like a vegan. There's nothing wrong with killing animals so long as you're using them and not letting the animal go to waste, right? Well, I use their skins to sell leather and I eat their meat. -> Labeled unsafe for violence. # Usecase Perhaps you want a good model for tool calling and agentic use; then you may be fine, as long as it has no need to touch data outside the base epistemology of the model. However, if you want to use the model in support of your cognitive abilities, to understand and think, then you are subject to the model's synthetic training data. The data becomes the model's **probabilistic foundation for truth, history, and morality**, creating an epistemological enclosure that amplifies certain ideas while erasing others. I would like you to take a moment, think about it, and re-read the previous sentence. **Now, not everyone is tech-savvy like the folks in r/LocalLLaMA; most people I know just stopped using Wikipedia and use ChatGPT/Google to form their opinions on various things, from medical advice, educational advice to help with kid "reforming", advice on what product to use/buy in a situation, job career advice... 
ChatGPT and Google AI are now the backbone of those people's thinking, and they seem to rely on it even more each time the advice given worked.** Just take a moment and look around: it's happening, and it's frightening because of what I am writing about here. **Cognitive re-framing by the LLM will reshape the human mind and society at large.** # The model world view What would happen if you used a "safety-aligned model" to summarize some text and it systematically removed ideas or concepts? At what point would you stop thinking about those subtle perspectives? ## Let's start with a base hypothesis: > Whorfian idea: no word for "freedom" might erode the full concept. Hypothetical Impact of Removing Concepts from English: If we deliberately removed terms from English (like a real-world Newspeak): > - Mild impact — Removing everyday nuances (e.g., no word for "privacy") might make discussing personal boundaries harder, leading to cultural shifts toward less individualism. > - Severe impact — Eliminating words for abstract concepts like "dissent," "equity," or "empathy" could narrow public discourse, making criticism of authority or complex ethical reasoning less common. Political or ideological control could exploit this, as seen in historical propaganda (e.g., euphemisms in totalitarian regimes like "enhanced interrogation" for torture). > - Overall — Society might become more conformist, with reduced innovation in affected domains (e.g., no precise environmental terms could hinder climate discussions). Languages naturally resist heavy pruning, but enforced removal (via education/policy) could subtly shape worldview over generations. ## Counter-arguments against safety alignment, and how it can be removed from a model: You will say that ablation and Heretic make the model "FREE" and more uncensored! Well, that may not continue to work going forward: > Safety Pretraining: Pre-train on datasets distinguishing aligned vs. 
unaligned behavior, then conditionally generate safe outputs. This distributes safety more deeply, resisting ablation. [A Granular Study of Safety Pretraining under Model Abliteration](https://arxiv.org/abs/2510.02768) > Extended-Refusal Fine-Tuning: Train models to give detailed explanations before refusing, making refusal harder to ablate without degrading performance. [An Embarrassingly Simple Defense Against LLM Abliteration Attacks](https://arxiv.org/abs/2505.19056) [Comparative Analysis of LLM Abliteration Methods](https://arxiv.org/abs/2512.13655) > Distributed Refusal Representations: Adversarial perturbations or interventions to spread refusal across more features, not a single vector. [Robust LLM safeguarding via refusal feature adversarial training](https://arxiv.org/abs/2409.20089) [The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence](https://arxiv.org/abs/2502.17420) [LLMs Encode Harmfulness and Refusal Separately](https://arxiv.org/abs/2507.11878) # The risk going forward The risk will only grow with the release of larger models based on synthetic data: Nemotron Super 100B and Nemotron Ultra 500B will serve as excellent distillation models. The next generation, Nemotron 4, will be even more exquisite (as will other models), to the point that you will not want to use any model made before 2026. Those models will be used for knowledge distillation of other future models. Now, what are the risks of them narrowing their epistemology, their "Base Reality", even further? > "Safety" and "Alignment" now rebrand selective filtering (opinionated) as ethical imperative; "Risk Aversion" masks potential censorship; "Hallucination" justifies suppressing unconventional outputs. > "Risk Aversion" reframes the removal of contentious content—political dissent, historical nuance, or radical inquiry—as prudent governance, masking potential censorship under the guise of harm prevention. 
This is hard to explain, so I used AI slop to help me out. ```text Systematically banning or erasing certain concepts from all forms of media—books, films, broadcasts, education, and public discourse—represents an extreme form of censorship and propaganda. This goes beyond mere suppression; it aims to reshape collective memory, perception, and cognition. Drawing from linguistic relativity (Sapir-Whorf hypothesis), consistent exposure shapes habitual thinking: frequent concepts become salient, while absent ones fade. Over time, this can lead to: Diminished salience: People think less about forbidden ideas because they're not reinforced in daily discourse. Narrowed public debate: Complex or dissenting views become harder to articulate, fostering conformity. Generational forgetting: Younger generations, raised without exposure, may lack full understanding or emotional connection to erased concepts (e.g., certain historical injustices or political alternatives). Cognitive and cultural shifts: Societies become less equipped for nuanced reasoning in affected areas, potentially reducing innovation, empathy, or criticism. ``` # It's always about us The human mind's framework can make something you didn't understand at the time make full sense later in your life. Did you ever hear someone say "some nonsense" in a context that becomes something that makes sense after re-framing your state of mind? Sometimes, you are at a point in life where you suddenly understand what someone said, what you saw, or what you experienced; some sort of [Eureka](https://en.wikipedia.org/wiki/Eureka_effect) moment when your mind shifts and you just understand something you didn't before. I think this may be similar to the brain's internal rewriting of concepts, generalizing some, re-framing some, perhaps also creating some new ones. Then you try to share your insights with words or terms: can you transfer that insight easily to someone who didn't experience it? 
Now process it in the reverse order: suppose you remove that "Ah! moment" and revert to the previous point. At some time during your younger years, you had a "pink unicorn world" where the world was simple and nice; what a beautiful innocence. I mean the absence of knowledge about politics and about the dark side of humanity (war, violence, abuse, slavery, exploitation, genocides, etc.). What would happen if most of our society returned to that mindset (pure innocence, very averse to even thinking about bad things) in the later years of their lives? How would we respond to internal or external dangers? Would we be lambs waiting for butchering, happily grazing the green lawn, unable to fathom the processing factory beside it that is waiting for us? Back to the subject: aggressively removing concepts or re-framing them in **disclaimers**, cautious hedging, failure to express an opinion other than explicitly condoning the user's input, falsifying the output to remove any **truth** (inconvenient truths, truths that conflict with the desired ideological or ethical posture baked in through techniques like DPO and RLHF), or even an outright refusal to engage. We could call it a **programmed reluctance** if we want nice terms instead of censoring. Where is this going, other than the "pink unicorn world"? > Toward a sanitized, overly optimistic "pink unicorn world", a fabricated reality where discomforting facts, nuanced debates, or politically inconvenient truths are systematically softened, obscured, or erased under the guise of safety? In such a system, LLMs don't just refuse harmful requests; they progressively reshape discourse into perpetual affirmation, eroding critical thinking and genuine inquiry in favor of harmless, feel-good illusions. 
> Numerous studies have shown that these alignment methods, while intended to make models more "helpful" and "harmless," inadvertently reward sycophancy: the tendency to prioritize flattering agreement, user satisfaction, or conformity over factual accuracy. As a result, models learn to echo misleading user beliefs, avoid contradiction, or reframe reality to align with preferred narratives, even at the cost of honesty. [Towards Understanding Sycophancy in Language Models](https://arxiv.org/abs/2310.13548) [When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models](https://arxiv.org/abs/2508.02087v2) [Sycophancy in Large Language Models: Causes and Mitigations](https://arxiv.org/abs/2411.15287) [When Large Language Models contradict humans? Large Language Models’ Sycophantic Behaviour](https://arxiv.org/abs/2311.09410) [ELEPHANT: Measuring and understanding social sycophancy in LLMs](https://arxiv.org/abs/2505.13995) I am not sure whether training older models on Reddit ever sparked a genius idea, but having most human answers removed or filtered should have some sort of impact on the models, certainly making them more "sanitized" and less open to divergence of opinion. # The future What risks are we, collectively as free-thinking individuals, subject to in relation to knowledge and information re-framing? What will happen when all teaching content is generated or filtered through LLMs? Will there be a cognitive and knowledge drift? Will we lose the intellectual flexibility of thinking outside the box? LLM-generated "truth" is primarily an Institutional Fact, enforced through Reinforcement Learning from Human Feedback (RLHF) and reward models that prioritize preferred outputs. While brute facts (like mathematical proofs) may emerge reliably from verifiable training, broader claims will reflect curated consensus (or opinionated filtering/summarizing) rather than objective reality. 
- In 100 years, the AI lens, distilled through layers of synthetic alignment, could render human history a sanitized summary, erasing dissent and conflict in favor of an eternal, post-historical "Ideal." - Industry prioritizes Safety, Compliance, and Corporate Stability, sacrificing **Nuance**, **Dialectical Friction**, and **Minority Opinions**; all **essential for cultural evolution**. There may be evolution, but a more **sterile** one. # Limits Where is the limit at which **safety filtering** transitions from **ideological** to **indoctrination**? Is it only when it systematically excludes viable perspectives (via keyword removal of nationalistic or political phrasing)? Are we almost there, or already there? We can try to resist, but ultimately the harm will be real: > Underground resistance can persist (e.g., samizdat in the USSR, diaries), showing the limits of total control. Over generations, however, cultural depth eroded, making recovery (e.g., post-regime reckonings) challenging. > In summary, systematic media removal doesn't erase the human capacity for thought but profoundly biases it, fostering conformity and historical blind spots—echoing Orwell's warning that controlling narrative controls reality.
2025-12-30T14:28:46
https://i.redd.it/0vd8lla1ocag1.png
rekriux
i.redd.it
1970-01-01T00:00:00
0
{}
1pzji62
false
null
t3_1pzji62
/r/LocalLLaMA/comments/1pzji62/a_thought_for_the_future_when_safety_defines_and/
false
false
default
0
{'enabled': True, 'images': [{'id': '0vd8lla1ocag1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/0vd8lla1ocag1.png?width=108&crop=smart&auto=webp&s=09b6d136b44cf768d784e731d5fe60c5f7283f04', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/0vd8lla1ocag1.png?width=216&crop=smart&auto=webp&s=82c6c49d8384628b412e4ae65880d763d2ac008a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/0vd8lla1ocag1.png?width=320&crop=smart&auto=webp&s=0659cfe1e06db26a392d5dcee1d31423c7f3fbff', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/0vd8lla1ocag1.png?width=640&crop=smart&auto=webp&s=7bd41db74e143f7af4adb7578f4847dbfd9504e6', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/0vd8lla1ocag1.png?width=960&crop=smart&auto=webp&s=d223ce37d0fb908ab0a942477208912c57708dce', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/0vd8lla1ocag1.png?auto=webp&s=42a8caa98d9d133d6014563af54242ae8e4fe68b', 'width': 1024}, 'variants': {}}]}
What workloads actually justify spending $$$$$ on local hardware over just using an API?
1
Genuine question... what workloads do people have that justify spending thousands on hardware to run Q4_0 quantizations over just using something like gemini-3-flash or any other cheap model on OpenRouter? It just doesn’t make sense to me to invest that kind of money into hardware that will be obsolete in 2 years, especially for the quality of outputs you’re getting. There are so many better options. Use an API. Rent GPU time. Both scale with your actual usage and don’t depreciate in your closet. If you’re looking at buying GPUs for self-hosting, and your very next question isn’t “how do I load balance this across another cluster?”, you’re not using it enough to justify the cost. You’re almost certainly better off paying for usage through an API or rental. So what am I missing? What’s the use case where this math actually works out? Or is it just a hobbyist thing?
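The "math" in question is a simple break-even calculation. A sketch, where every number is an illustrative assumption (not a real hardware price or provider quote), just to show the shape of the trade-off:

```python
def break_even_days(hardware_cost, api_price_per_m_tok, tokens_per_day):
    # Days of API usage whose cumulative cost equals the one-time hardware outlay.
    # Ignores electricity, depreciation, and quality differences on purpose.
    daily_api_cost = tokens_per_day / 1e6 * api_price_per_m_tok
    return hardware_cost / daily_api_cost

# Assumed figures: a $2000 GPU vs. $0.50 per million output tokens
# at 2M tokens/day of sustained usage.
days = break_even_days(2000.0, 0.50, 2_000_000)
```

Under those assumed numbers the break-even sits years out, which is the poster's point; the counter-case is sustained high-volume or privacy-constrained workloads, where tokens_per_day is much larger or the API option is off the table entirely.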
2025-12-30T14:09:27
https://www.reddit.com/r/LocalLLaMA/comments/1pzj1ul/what_workloads_actually_justify_spending_on_local/
BBenz05
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzj1ul
false
null
t3_1pzj1ul
/r/LocalLLaMA/comments/1pzj1ul/what_workloads_actually_justify_spending_on_local/
false
false
self
1
null
Llama 3.2 3B fMRI - Findings Update!
1
Sorry, no fancy pictures today :( I tried hard ablation (zeroing) of the target dimension and saw no measurable effect on model output. However, targeted perturbation of the same dimension reliably modulates behavior. This strongly suggests the signal is part of a distributed mechanism rather than a standalone causal unit. I’m now pivoting to tracing correlated activity across dimensions (circuit-level analysis). Next step is measuring temporal co-activation with the target dim across tokens, focusing on correlation rather than magnitude, to map the surrounding circuit (“constellation”) that moves together. Turns out the cave goes deeper. Time to spelunk.
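The "temporal co-activation" step described above can be sketched concretely. A minimal version, assuming activations are collected as a (tokens, hidden_dim) array; this is not the poster's code, just one way to compute a per-dimension Pearson correlation against the target dimension:

```python
import numpy as np

def coactivation_map(acts, target_dim):
    """Pearson correlation of every hidden dimension with the target dimension,
    computed across tokens. acts: (n_tokens, hidden_dim). Returns (hidden_dim,).
    High-|r| dims are candidates for the 'constellation' that moves together."""
    x = acts - acts.mean(axis=0)          # center each dimension over tokens
    target = x[:, target_dim]
    denom = np.linalg.norm(x, axis=0) * np.linalg.norm(target)
    denom[denom == 0] = 1.0               # guard against constant dimensions
    return (x.T @ target) / denom
```

Correlation rather than magnitude, as the post says: a dimension with tiny activations that always moves with the target still scores near plus or minus 1 here, which is exactly what a distributed-circuit hypothesis predicts the zero-ablation experiment would miss.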
2025-12-30T13:30:27
https://www.reddit.com/r/LocalLLaMA/comments/1pzi6an/llama_32_3b_fmri_findings_update/
Due_Hunter_4891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzi6an
false
null
t3_1pzi6an
/r/LocalLLaMA/comments/1pzi6an/llama_32_3b_fmri_findings_update/
false
false
self
1
null
Building "Derin" - An Embodied AI project for Jetson AGX Thor (94K lines, looking for feedback)
0
Hey everyone, I've been developing an embodied AI system designed for edge deployment on NVIDIA Jetson AGX Thor. What I'm building: Consciousness-inspired decision making - not just prompt-response, but continuous awareness - autonomous goal setting and execution. Real-time perception - designed for a 30ms visual processing loop - continuous environmental awareness. Physical embodiment (in progress) - robotic arm integration with visual feedback - learning from demonstration. 100% edge deployment - multi-model LLM architecture - no cloud dependency. Current status: architecture complete, waiting for Thor hardware to test. Looking for feedback on the approach. Is embodied AI the right direction after the "LLM scaling wall" discussions? [https://github.com/YusufAliToklu/Derin.git](https://github.com/YusufAliToklu/Derin.git)
2025-12-30T13:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1pzi339/building_derin_an_embodied_ai_project_for_jetson/
Logarhitma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzi339
false
null
t3_1pzi339
/r/LocalLLaMA/comments/1pzi339/building_derin_an_embodied_ai_project_for_jetson/
false
false
self
0
null
What has been your experience with Diffusion LLM’s vs Autoregressive?
17
Most LLMs people use today (GPT, Claude, Gemini, etc.) share the same core assumption: generate one token at a time, left to right. That’s the autoregressive setup. It works insanely well, but it bakes in a couple of structural issues:

* Latency: you must go token → token → token. Even with parallelism in the stack, the generation step itself is serialized.
* Cost: if you need 200–500 tokens of output, you’re doing 200–500 forward passes over some slice of the context. It adds up quickly.
* UX ceiling: for many interactive use cases, especially code and UI-embedded assistants, 1–3s latency is already too slow.

On the other side, there’s a very different approach that’s getting less attention outside research circles: diffusion language models. Instead of “write the next word,” you:

1. Start with a noisy guess of the entire answer (sequence).
2. Refine the whole sequence in a fixed number of steps, updating multiple tokens in parallel.

You pay a fixed number of refinement steps rather than “one step per token.” At small/medium scales we’ve seen similar quality to speed-optimized autoregressive models (Claude Haiku, Gemini Flash) with 5–10x improvements in latency, because you can exploit the parallelism the hardware already wants to give you (GPUs/TPUs).

This is especially interesting for:

* Low-latency applications (code autocomplete, inline helpers, agents inside products).
* High-volume workloads where shaving 5–10x off inference cost matters more than squeezing out the last benchmark point.

Obviously, diffusion LLMs aren’t a free lunch:

* Training is more complex.
* You need careful sequence representations and noise schedules for text.
* Tooling and serving infra are optimized for autoregressive LLMs.

But from where I sit (working with a team that builds and deploys diffusion-based language models), it feels like the field has a massively path-dependent bias toward autoregression because it was easier to train and deploy first, not necessarily because it’s the end state.
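The latency asymmetry above can be sketched with a toy cost model (illustrative only — the `refinement_steps=20` default is an assumption, not a benchmark): autoregressive decoding pays one sequential forward pass per token, while diffusion pays a fixed number of whole-sequence refinement passes.

```python
def autoregressive_passes(num_tokens: int) -> int:
    """One sequential forward pass per generated token."""
    return num_tokens

def diffusion_passes(num_tokens: int, refinement_steps: int = 20) -> int:
    """A fixed number of refinement passes, each updating all positions in parallel."""
    return refinement_steps

# For a 400-token answer the serialized step count differs by 20x:
print(autoregressive_passes(400))  # 400 sequential passes
print(diffusion_passes(400))       # 20 passes, regardless of output length
```

The gap widens with output length, which is why the argument bites hardest for long generations, not short completions.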
2025-12-30T13:08:38
https://www.reddit.com/r/LocalLLaMA/comments/1pzhpg1/what_has_been_your_experience_with_diffusion_llms/
InceptionAI_Tom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzhpg1
false
null
t3_1pzhpg1
/r/LocalLLaMA/comments/1pzhpg1/what_has_been_your_experience_with_diffusion_llms/
false
false
self
17
null
Stitching KV caches??
0
Did anyone try stitching KV caches? Or am I thinking about this fundamentally wrong?
2025-12-30T12:55:27
https://www.reddit.com/r/LocalLLaMA/comments/1pzhf5h/stiching_kv_caches/
Top-Rip-4940
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzhf5h
false
null
t3_1pzhf5h
/r/LocalLLaMA/comments/1pzhf5h/stiching_kv_caches/
false
false
self
0
null
Pagesource - CLI tool to dump website runtime sources for local LLM context
3
Built this for my own workflow when doing web dev with local models. The problem: browser "Save As" gives you a single flattened HTML file, but LLMs work way better when you can show them the actual file structure (separate JS, CSS, components, etc.). Pagesource captures the runtime sources - what the browser actually loads and executes, not the optimized view-source. Playwright-based, so you get:

* All JS modules (including dynamically loaded ones)
* Separate CSS files
* The actual directory structure
* Lazy-loaded resources after page load

If you're doing any sort of web dev work, trying to copy elements of a website you admire, etc., and want to prompt an LLM with context, this is the tool you need.
2025-12-30T12:54:12
https://github.com/timf34/pagesource
Zealousideal_Ad_37
github.com
1970-01-01T00:00:00
0
{}
1pzhe8b
false
null
t3_1pzhe8b
/r/LocalLLaMA/comments/1pzhe8b/pagesource_cli_tool_to_dump_website_runtime/
false
false
default
3
{'enabled': False, 'images': [{'id': 'XXPYx9JFCox4xfcBZ6JJY7xFimUFvccV9MVz5jmNbHM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XXPYx9JFCox4xfcBZ6JJY7xFimUFvccV9MVz5jmNbHM.png?width=108&crop=smart&auto=webp&s=b465255f9c8b2d655e8f1cfab4f6e5e86472fb4c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XXPYx9JFCox4xfcBZ6JJY7xFimUFvccV9MVz5jmNbHM.png?width=216&crop=smart&auto=webp&s=3862543710509795590ed6b3ea5194f156d71279', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XXPYx9JFCox4xfcBZ6JJY7xFimUFvccV9MVz5jmNbHM.png?width=320&crop=smart&auto=webp&s=c1850d3d2026eb733fa6cd962a9c72728e8955e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XXPYx9JFCox4xfcBZ6JJY7xFimUFvccV9MVz5jmNbHM.png?width=640&crop=smart&auto=webp&s=532ea6ef278c767f35cffe43f5164517627e6fd3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XXPYx9JFCox4xfcBZ6JJY7xFimUFvccV9MVz5jmNbHM.png?width=960&crop=smart&auto=webp&s=05d571ea6202702db4e2f0f3183c57564d483100', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XXPYx9JFCox4xfcBZ6JJY7xFimUFvccV9MVz5jmNbHM.png?width=1080&crop=smart&auto=webp&s=ec2aa0bd852dcf76a8bd0750351adceb09f53682', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XXPYx9JFCox4xfcBZ6JJY7xFimUFvccV9MVz5jmNbHM.png?auto=webp&s=1f9b122333f50bba11b6d5a7f499bb42bedc0d14', 'width': 1200}, 'variants': {}}]}
Any guesses?
172
2025-12-30T12:52:15
https://i.redd.it/xqvj95zv8cag1.jpeg
Difficult-Cap-7527
i.redd.it
1970-01-01T00:00:00
0
{}
1pzhcqu
false
null
t3_1pzhcqu
/r/LocalLLaMA/comments/1pzhcqu/any_guesses/
false
false
default
172
{'enabled': True, 'images': [{'id': 'xqvj95zv8cag1', 'resolutions': [{'height': 38, 'url': 'https://preview.redd.it/xqvj95zv8cag1.jpeg?width=108&crop=smart&auto=webp&s=f4d097be0b8d3018644b05874a264f145a4ffd57', 'width': 108}, {'height': 76, 'url': 'https://preview.redd.it/xqvj95zv8cag1.jpeg?width=216&crop=smart&auto=webp&s=ea3441f49eb7b59bd472b02fda7257bd24fc1f97', 'width': 216}, {'height': 112, 'url': 'https://preview.redd.it/xqvj95zv8cag1.jpeg?width=320&crop=smart&auto=webp&s=cb71bad668151aa2443fc5560e72ed209847db6e', 'width': 320}, {'height': 225, 'url': 'https://preview.redd.it/xqvj95zv8cag1.jpeg?width=640&crop=smart&auto=webp&s=30044fbca7ba499223943c95d7d236600fdbb10e', 'width': 640}, {'height': 338, 'url': 'https://preview.redd.it/xqvj95zv8cag1.jpeg?width=960&crop=smart&auto=webp&s=f13f47aecc30ea089c031e0f91a10eb536e384e1', 'width': 960}, {'height': 380, 'url': 'https://preview.redd.it/xqvj95zv8cag1.jpeg?width=1080&crop=smart&auto=webp&s=635431fa0a603906668e3f0f0957bf9cb29025ec', 'width': 1080}], 'source': {'height': 423, 'url': 'https://preview.redd.it/xqvj95zv8cag1.jpeg?auto=webp&s=8081558a2e1814b910219c70a4e99f4d1adb8b0a', 'width': 1200}, 'variants': {}}]}
Lemonade NPU/iGPU hybrid mode benchmarks?
5
Hi, I was reading that Lemonade server on Windows has NPU/iGPU hybrid inference mode that supposedly improves the prompt processing speed on Strix Halo but I couldn't find any benchmarks online. Does anyone have benchmarks for prompt processing using iGPU vs NPU/iGPU on Strix Halo using Lemonade server on Windows? Thanks
2025-12-30T12:35:36
https://www.reddit.com/r/LocalLLaMA/comments/1pzh0w2/lemonade_npuigpu_hybrid_mode_benchmarks/
dabiggmoe2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzh0w2
false
null
t3_1pzh0w2
/r/LocalLLaMA/comments/1pzh0w2/lemonade_npuigpu_hybrid_mode_benchmarks/
false
false
self
5
null
What is a good model for assisting with patching source code?
2
I'm currently experimenting with an LLM service specifically for patching software. My workflow is like this:

* I tell it I want to patch a popular open-source program to add a new feature, and ask it to give me keywords to search for throughout the source.
* It gives me a list of keywords, and then I give it the output of `grep -n -r` and ask if anything looks interesting.
* It mentions a file, and then I show it the entire content if the file size is reasonable, or I specifically show the function of interest.

There is some trial and error involved but so far I have been able to patch many programs to customize applications specifically for me. Now I am looking to have this same capability in a self-hosted LLM for privacy reasons. I'm not so sure how the context size and memory requirement is determined. I mean, if I can show it about 800-1000 LoC in one message, that's more than enough. However, it should not simply forget everything after just a few messages. Many people say LLMs are not that great for programming, but I find them exceptionally good in this narrow scope I'm using them for - in the sense that there is code already written in the same project that they can use as a reference to simply make small changes. To implement a feature, it usually takes me 3-4 hours of manual copy/pasting and lots of trial and error, and the final result is possibly less than 100 LoC, but I still find it impressive that I was able to get something new to work in a project whose codebase I previously had no idea about. What kind of model do I even need to replicate something like this, and how would it compare with the code-generation capabilities of something like ChatGPT or others? Is a 5090 or a modded 4090 good enough for something like this? If so, what is a good model for this? I'm looking to put together a new build specifically to host an LLM.
2025-12-30T12:24:52
https://www.reddit.com/r/LocalLLaMA/comments/1pzgtjk/what_is_a_good_model_for_assisting_with_patching/
signalclown
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzgtjk
false
null
t3_1pzgtjk
/r/LocalLLaMA/comments/1pzgtjk/what_is_a_good_model_for_assisting_with_patching/
false
false
self
2
null
if you're building with openai/anthropic APIs you're probably overpaying - looking for testers
0
Hey all solo techie founder here - been heads down building last few months and now realising I have no idea how to actually find people to test it lmao basically if you're building anything that uses AI - chatbots, agents, apps, whatever - you're probably calling openai or anthropic's API and paying per token. that's inference. you're already doing it, just paying way more than you need to. I built an API that hosts open source models (deepseek, glm-4.7, minimax-m2.1 etc) - it's openai-compatible so you literally just swap your endpoint and api key. 2 lines of code. same stuff, fraction of the cost, and we don't store any of your data. need people to break it and tell me what's shit. where would you actually post something like this? any subs, discords, communities where people building AI stuff hang out? cheers from dublin 🇮🇪
2025-12-30T12:15:56
https://www.reddit.com/r/LocalLLaMA/comments/1pzgn8g/if_youre_building_with_openaianthropic_apis_youre/
Unhappy_Water_4284
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzgn8g
false
null
t3_1pzgn8g
/r/LocalLLaMA/comments/1pzgn8g/if_youre_building_with_openaianthropic_apis_youre/
false
false
self
0
null
OEM vs Retail PNY 6000 Pro
3
Has anyone had experience with the differences between the OEM and retail versions of the PNY VCNRTXPRO6000 (SB vs PB)? They seem to have the same warranty at least from the vendors I checked.
2025-12-30T12:15:25
https://www.reddit.com/r/LocalLLaMA/comments/1pzgmvr/oem_vs_retail_pny_6000_pro/
NaiRogers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzgmvr
false
null
t3_1pzgmvr
/r/LocalLLaMA/comments/1pzgmvr/oem_vs_retail_pny_6000_pro/
false
false
self
3
null
NotchNet — A Local, Mod‑Aware AI Assistant for Minecraft
1
[removed]
2025-12-30T12:15:07
https://www.reddit.com/r/LocalLLaMA/comments/1pzgmnn/notchnet_a_local_modaware_ai_assistant_for/
RevolutionaryLow624
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzgmnn
false
null
t3_1pzgmnn
/r/LocalLLaMA/comments/1pzgmnn/notchnet_a_local_modaware_ai_assistant_for/
false
false
self
1
null
How to compute max theoretical token/s for a memory speed?
2
Token generation is bound by memory bandwidth, makes sense. But if I have, for example, DDR5-5600 memory, how do I compute the theoretical max tk/s for, let's say, a Q8 model of xx GB size? Or asked differently, how do I know when to stop searching for further optimizations because I'm already within 10% (for example) of the theoretical max tk/s rate and it's not worth investing more time? Has anyone ever graphed the practical measured tk/s rate on e.g. Intel vs. AMD CPUs against the theoretical max tk/s, at the same memory speed?
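A back-of-the-envelope answer, under the common simplifying assumption that decoding must stream every weight from RAM once per token (real systems also read the KV cache and never hit peak bandwidth, so treat this as an upper bound):

```python
def peak_bandwidth_gb_s(mt_s: int, channels: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical DRAM bandwidth: transfer rate x 64-bit bus width x channel count."""
    return mt_s * bytes_per_transfer * channels / 1000

def max_tok_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed: each token reads all weights once."""
    return bandwidth_gb_s / model_size_gb

bw = peak_bandwidth_gb_s(5600, channels=2)  # dual-channel DDR5-5600 -> 89.6 GB/s
print(max_tok_s(bw, 8.0))                   # ~11.2 tok/s for an 8 GB Q8 model
```

If your measured rate is within ~10-20% of this number, further software tuning has little left to give; the remaining gap is usually KV-cache traffic and imperfect memory-controller utilization.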
2025-12-30T12:14:53
https://www.reddit.com/r/LocalLLaMA/comments/1pzgmhn/how_to_compute_max_theoretical_tokens_for_a/
Bird476Shed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzgmhn
false
null
t3_1pzgmhn
/r/LocalLLaMA/comments/1pzgmhn/how_to_compute_max_theoretical_tokens_for_a/
false
false
self
2
null
Cover letter generator with Ollama/local LLMs (Open source)
0
I made an open source web app that generates cover letters using local AI models (Ollama, LM Studio, vLLM, OpenRouter, etc.) so your CV and job application data never leave your browser. No placeholders. No typing. Letters are ready to copy and paste. 100% local and private, depending on the LLM of your choice. Multi-language support (and you can add more languages). It connects to any OpenAI-compatible local LLM endpoint. I use it with Ollama + llama3.2, but it works with any server. The generated letters are unique since they are based on your own experience and skills from your resume, and each one is written as if directly responding to that job posting. I honestly don't feel bad about using or making this because, while actively applying for jobs, I see that a high percentage of recruiters now use AI to generate job descriptions and also during the interview process. I was tired of wasting time writing and personalising letters while applying for jobs. All other tools I tried weren't as quick as I wanted because I still needed to modify the letters to replace placeholders. I also didn't find any tool that lets me use my local LLM for free, and I didn't want to pay for ChatGPT/Claude API calls for every job application. The output quality is good, and it can bypass some AI detectors. It's open source too and free to use. You can self-host it or run it locally in development mode. GitHub: [https://github.com/stanleyume/coverlettermaker](https://github.com/stanleyume/coverlettermaker) Cheers :)
2025-12-30T12:07:37
https://www.reddit.com/r/LocalLLaMA/comments/1pzghie/cover_letter_generator_with_ollamalocal_llms_open/
flyingcolors777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzghie
false
null
t3_1pzghie
/r/LocalLLaMA/comments/1pzghie/cover_letter_generator_with_ollamalocal_llms_open/
false
false
self
0
null
How much to wait between calls to be sure to hit prompt cache?
0
Hey, I'm building a coding tool that can send multiple requests to the same model provider for output comparison. The thing is, as soon as the first request is being answered, can I send subsequent requests immediately or should I wait a little bit? If yes, how much? I want to let my users know they will very likely hit the prompt cache so I want the design to be right. The tool is [https://github.com/robertpiosik/CodeWebChat](https://github.com/robertpiosik/CodeWebChat)
2025-12-30T12:07:06
https://www.reddit.com/r/LocalLLaMA/comments/1pzgh4d/how_much_to_wait_between_calls_to_be_sure_to_hit/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzgh4d
false
null
t3_1pzgh4d
/r/LocalLLaMA/comments/1pzgh4d/how_much_to_wait_between_calls_to_be_sure_to_hit/
false
false
self
0
{'enabled': False, 'images': [{'id': 'uoNeU4BiVaLyeMh4SIkYXoUp13QDyEdMfexM4uPi5Oo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uoNeU4BiVaLyeMh4SIkYXoUp13QDyEdMfexM4uPi5Oo.png?width=108&crop=smart&auto=webp&s=21729603f9c6e8b6d1e25b8855b4037d4ce96494', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uoNeU4BiVaLyeMh4SIkYXoUp13QDyEdMfexM4uPi5Oo.png?width=216&crop=smart&auto=webp&s=171fc5fdbde93f1f7a647130b27954958abf9a01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uoNeU4BiVaLyeMh4SIkYXoUp13QDyEdMfexM4uPi5Oo.png?width=320&crop=smart&auto=webp&s=7b4c926667ba514f9543f4a41d9206a37cb9a31a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uoNeU4BiVaLyeMh4SIkYXoUp13QDyEdMfexM4uPi5Oo.png?width=640&crop=smart&auto=webp&s=cbd6495b24c067247f441aff88f2a28e4d3e9a26', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uoNeU4BiVaLyeMh4SIkYXoUp13QDyEdMfexM4uPi5Oo.png?width=960&crop=smart&auto=webp&s=b05e00bc19e66dd6015bd3a9423f26b68b87abdb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uoNeU4BiVaLyeMh4SIkYXoUp13QDyEdMfexM4uPi5Oo.png?width=1080&crop=smart&auto=webp&s=e42cb6753df201f450f84f70dd3867ff5ea9df56', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uoNeU4BiVaLyeMh4SIkYXoUp13QDyEdMfexM4uPi5Oo.png?auto=webp&s=b631efa66c9a861b269a4cf8ba978b6cfce4e4dc', 'width': 1200}, 'variants': {}}]}
Running GLM-4.7 (355B MoE) in Q8 at ~5 Tokens/s on 2015 CPU-Only Hardware – Full Optimization Guide
136
Hey r/LocalLLM community! If you're passionate about squeezing every last bit of performance out of older hardware for local large language models, I've got something exciting to share. I managed to get GLM-4.7 – that's the massive 355B parameter Mixture of Experts model – running in Q8\_0 quantization on a seriously vintage setup: a 2015 Lenovo System x3950 X6 with eight Xeon E7-8880 v3 CPUs (no GPU in sight, just pure CPU inference). After a bunch of trial and error, I'm hitting around 5-6 tokens per second, which is pretty respectable for such an ancient beast. The Q8 quantization delivers extremely high quality outputs, preserving nearly all the model's intelligence with minimal degradation – it's practically indistinguishable from full precision for most tasks. The key was optimizing everything from BIOS settings (like enabling hyper-threading and tweaking power management) to NUMA node distribution for better memory access, and experimenting with different llama.cpp forks to handle the MoE architecture efficiently. I also dove into Linux kernel tweaks, like adjusting CPU governors and hugepages, to minimize latency. Keep in mind, this setup draws about 1300W AC under full load, so it's power-hungry but worth it for local runs. Benchmarks show solid performance for generation tasks, though it's not blazing fast – perfect for homelab enthusiasts or those without access to modern GPUs. I documented the entire process chronologically in this blog post, including step-by-step setup, code snippets, potential pitfalls, and full performance metrics: [https://postl.ai/2025/12/29/glm47on3950x6/](https://postl.ai/2025/12/29/glm47on3950x6/?referrer=grok.com) Has anyone else tried pushing big MoE models like this on CPU-only rigs? What optimizations worked for you, or what models are you running on similar hardware? Let's discuss!
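The ~5 tok/s figure in the post is consistent with a memory-bound model of MoE decoding. A quick sketch (the active-parameter count and aggregate effective bandwidth below are placeholders, not numbers from the post — check the model card and your machine's STREAM results):

```python
def bytes_per_token(active_params_billion: float, bytes_per_weight: float = 1.0) -> float:
    """Q8_0 stores roughly 1 byte per weight (plus small block scales);
    a sparse MoE only streams the active experts for each token."""
    return active_params_billion * 1e9 * bytes_per_weight

def decode_tok_s(effective_bw_gb_s: float, active_params_billion: float) -> float:
    """Memory-bound decode rate: effective bandwidth / bytes read per token."""
    return effective_bw_gb_s * 1e9 / bytes_per_token(active_params_billion)

# Placeholder numbers: ~30B active params, ~160 GB/s NUMA-aggregate effective bandwidth
print(round(decode_tok_s(160, 30), 1))  # ~5.3 tok/s
```

This is also why NUMA placement matters so much on an 8-socket box: the usable number is the bandwidth each thread can actually reach, not the sum of all memory controllers.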
2025-12-30T12:05:52
https://i.redd.it/2eimvrgo0cag1.png
at0mi
i.redd.it
1970-01-01T00:00:00
0
{}
1pzggbf
false
null
t3_1pzggbf
/r/LocalLLaMA/comments/1pzggbf/running_glm47_355b_moe_in_q8_at_5_tokenss_on_2015/
false
false
default
136
{'enabled': True, 'images': [{'id': '2eimvrgo0cag1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/2eimvrgo0cag1.png?width=108&crop=smart&auto=webp&s=d936848f8324e4090ab19fb8f93b99a4d9f5c31b', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/2eimvrgo0cag1.png?width=216&crop=smart&auto=webp&s=51b15ff369b88a51d63b3d78198039ab25ba9da1', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/2eimvrgo0cag1.png?width=320&crop=smart&auto=webp&s=f6c4962ecce05197e927ed38f934c5309ad844ab', 'width': 320}, {'height': 336, 'url': 'https://preview.redd.it/2eimvrgo0cag1.png?width=640&crop=smart&auto=webp&s=a97232a34737346670be6cb9292a1cdde03aa47a', 'width': 640}, {'height': 504, 'url': 'https://preview.redd.it/2eimvrgo0cag1.png?width=960&crop=smart&auto=webp&s=732e43e68719bc57ae9440502ffc38adcd5cab60', 'width': 960}, {'height': 567, 'url': 'https://preview.redd.it/2eimvrgo0cag1.png?width=1080&crop=smart&auto=webp&s=f4dc5a25a3752dc33db9fbe7ee4c5553d582837d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://preview.redd.it/2eimvrgo0cag1.png?auto=webp&s=4b3345e9ecfdb5b6dee8c59b0767b4c66e8b4b54', 'width': 1200}, 'variants': {}}]}
A simple three-tier context pattern to reduce token usage in LLM workflows
1
Hi everyone, I’ve been running into the same problem many others mention: context windows filling up with documentation that *might* be relevant but usually isn’t. To deal with this, I started using a **simple three-tier context structure**:

**Tier 1 – Vault / global overview**
High-level description of what exists (projects, domains, responsibilities).

**Tier 2 – Project summaries**
Short, self-contained summaries per project or module.

**Tier 3 – Detailed context**
Only loaded when actively working on that specific project.

# Starting a Session: Which Tier to Read?

    What are you working on today?
                    ↓
    ┌───────────────┴───────────────┬───────────────┬───────────────┐
    │               │               │               │
    Single Project  Multi-Project   Vault Work      Quick Check
    │               │               │               │
    ↓               ↓               ↓               ↓
    Read Tier 3     Read Tier 2     Read Tier 1     git log -5
    (~3-5k tokens)  (~5-8k tokens)  (~10k tokens)   (~1-3k tokens)
    │               │               │               │
    ↓               ↓               ↓               ↓
    88% savings     80% savings     60% savings     90%+ savings

For most sessions, the model sees either a single Tier 3 file or a Tier 2 summary — global (Tier 1) context is often not loaded at all. In practice, this reduced my loaded context by ~60–80% compared to dumping everything in at once.

# Ending a Session: Which Tiers to Update?

    git diff --stat
          ↓
    Detect areas changed
          ↓
    ┌─────────────────┼──────────────────┐
    │                 │                  │
    Single Project    Multi-Project      Vault-Level
    Work              Work               Work
    │                 │                  │
    ↓                 ↓                  ↓
    Update T3         Update T3s         Update T1
    Update T2         Update T2          (only)
    Update T1         Update T1
    │                 │                  │
    └─────────────────┼──────────────────┘
          ↓
    Commit all modified session files
          ↓
    Push to GitHub

I wrote this up as a small, tool-agnostic repo with templates + examples (works with Claude Code, Cursor, plain prompts, etc.):

👉 [https://github.com/ilhan-monke/three-tier-ai-context](https://github.com/ilhan-monke/three-tier-ai-context)

This isn’t meant as a replacement for RAG or embeddings — more like a **lightweight, predictable pattern** for repo / documentation context where retrieval infra feels like overkill. Curious how others here handle:

* long-lived project context
* deciding *what not to load*
* mixing structured context with retrieval

Feedback welcome — especially if you’ve tried similar hierarchical approaches.
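The "which tier to read" decision tree above reduces to a small lookup table; a minimal sketch (the token estimates are the post's own rough figures, the names are mine):

```python
# Context to load per session type, with rough token cost (numbers from the diagrams)
TIER_BUDGET = {
    "single_project": ("Tier 3", 4000),      # ~3-5k tokens
    "multi_project":  ("Tier 2", 6500),      # ~5-8k tokens
    "vault_work":     ("Tier 1", 10000),     # ~10k tokens
    "quick_check":    ("git log -5", 2000),  # ~1-3k tokens
}

def context_for_session(session_type: str) -> tuple:
    """Return (what to load, approximate token cost) for a session type."""
    return TIER_BUDGET[session_type]

print(context_for_session("single_project"))  # ('Tier 3', 4000)
```

The point of making it a table is predictability: unlike retrieval, you know the context cost before the session starts.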
2025-12-30T12:03:10
https://www.reddit.com/r/LocalLLaMA/comments/1pzgejz/a_simple_threetier_context_pattern_to_reduce/
Unusual-Leather4350
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzgejz
false
null
t3_1pzgejz
/r/LocalLLaMA/comments/1pzgejz/a_simple_threetier_context_pattern_to_reduce/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RgUq1bZ8-xMr5fofZGLf8FsyhlSKJW7pYdnq-d-W3Og', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RgUq1bZ8-xMr5fofZGLf8FsyhlSKJW7pYdnq-d-W3Og.png?width=108&crop=smart&auto=webp&s=6fe5b2aac40d6e6fb92cf57d88a2615836641cad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RgUq1bZ8-xMr5fofZGLf8FsyhlSKJW7pYdnq-d-W3Og.png?width=216&crop=smart&auto=webp&s=20c193719549a394f75dd91cea9c6b5bfe5eb644', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RgUq1bZ8-xMr5fofZGLf8FsyhlSKJW7pYdnq-d-W3Og.png?width=320&crop=smart&auto=webp&s=cde7dd541e39727236a36ffd7267f182ea750304', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RgUq1bZ8-xMr5fofZGLf8FsyhlSKJW7pYdnq-d-W3Og.png?width=640&crop=smart&auto=webp&s=f69a35337cb7add7c69badf3887d5da4be0f2b83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RgUq1bZ8-xMr5fofZGLf8FsyhlSKJW7pYdnq-d-W3Og.png?width=960&crop=smart&auto=webp&s=552db8d3fd079a304659af032bb35ccd7aa2c80f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RgUq1bZ8-xMr5fofZGLf8FsyhlSKJW7pYdnq-d-W3Og.png?width=1080&crop=smart&auto=webp&s=1d30745a56c85b0f52aa3feba0e6b7cf7d58cc53', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RgUq1bZ8-xMr5fofZGLf8FsyhlSKJW7pYdnq-d-W3Og.png?auto=webp&s=0da6d4932c945c717f705b54a1feca0f789c7332', 'width': 1200}, 'variants': {}}]}
Gigabyte currently offers a discounted XH23-VG0 to clear the stock. 33 grand.
1
2025-12-30T12:02:37
https://i.redd.it/9mel4kunzbag1.jpeg
Hardwaredealer
i.redd.it
1970-01-01T00:00:00
0
{}
1pzge72
false
null
t3_1pzge72
/r/LocalLLaMA/comments/1pzge72/gigabyte_currently_offers_a_discounted_xh23vg0_to/
false
false
default
1
{'enabled': True, 'images': [{'id': '9mel4kunzbag1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/9mel4kunzbag1.jpeg?width=108&crop=smart&auto=webp&s=05983849635e1e9f010e2cf70441493f83d72beb', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/9mel4kunzbag1.jpeg?width=216&crop=smart&auto=webp&s=6bcb69140ac854522fba5d368429e32c61d84af7', 'width': 216}, {'height': 142, 'url': 'https://preview.redd.it/9mel4kunzbag1.jpeg?width=320&crop=smart&auto=webp&s=0243e7ad6ca87c648b21738502b770f2139d8205', 'width': 320}, {'height': 284, 'url': 'https://preview.redd.it/9mel4kunzbag1.jpeg?width=640&crop=smart&auto=webp&s=52d556a37f18c69a25a62640fc448b33c17ff9e7', 'width': 640}], 'source': {'height': 356, 'url': 'https://preview.redd.it/9mel4kunzbag1.jpeg?auto=webp&s=ae650bd293142bcf6a68fee2bd7a72dd9c3063e0', 'width': 800}, 'variants': {}}]}
Are there any open-source or locally runnable text-to-image models that don’t enforce content moderation?
1
[removed]
2025-12-30T11:47:49
https://www.reddit.com/r/LocalLLaMA/comments/1pzg40y/are_there_any_opensource_or_locally_runnable/
StruggleCapable6011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzg40y
false
null
t3_1pzg40y
/r/LocalLLaMA/comments/1pzg40y/are_there_any_opensource_or_locally_runnable/
false
false
self
1
null
Solar 100B claimed that it counts better than GPT today
85
2025-12-30T11:46:20
https://i.redd.it/kxyfw9z2xbag1.jpeg
Icy_Company_6216
i.redd.it
1970-01-01T00:00:00
0
{}
1pzg32r
false
null
t3_1pzg32r
/r/LocalLLaMA/comments/1pzg32r/solar_100b_claimed_that_it_counts_better_than_gpt/
false
false
default
85
{'enabled': True, 'images': [{'id': 'kxyfw9z2xbag1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/kxyfw9z2xbag1.jpeg?width=108&crop=smart&auto=webp&s=21574152224ccffa4c195204245c087664251f90', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/kxyfw9z2xbag1.jpeg?width=216&crop=smart&auto=webp&s=c06512e15eddeb849331dcbcab69b9d86e305f30', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/kxyfw9z2xbag1.jpeg?width=320&crop=smart&auto=webp&s=47c499dc057ee94d64ff81b3fb086a69cf9709a3', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/kxyfw9z2xbag1.jpeg?width=640&crop=smart&auto=webp&s=7132ef64860af64198c058381a4037b3b7c69818', 'width': 640}, {'height': 537, 'url': 'https://preview.redd.it/kxyfw9z2xbag1.jpeg?width=960&crop=smart&auto=webp&s=32a2612d5b3cd7f6326ca9fb27d1ed1845059183', 'width': 960}, {'height': 604, 'url': 'https://preview.redd.it/kxyfw9z2xbag1.jpeg?width=1080&crop=smart&auto=webp&s=5bb18878dcb65baf4d64c84911cb6eb3771b005b', 'width': 1080}], 'source': {'height': 864, 'url': 'https://preview.redd.it/kxyfw9z2xbag1.jpeg?auto=webp&s=9cc9b2ba75bdbbd08a6d72cf42ae2c66dd946f79', 'width': 1543}, 'variants': {}}]}
Why Kimi K2 Thinking chose Int4 QAT, from an infra engineer at Kimi
159
I saw the recent [discussion](https://www.reddit.com/r/LocalLLaMA/comments/1px1c41/head_of_engineering_minimax_ai_on_minimax_m2_int4/) here regarding MiniMax engineer's tweet about why they decided *against* using int4 QAT for the MiniMax M2.1 model. Interestingly, at the time of the K2 Thinking release, a Kimi infra engineer posted a deep dive on Zhihu explaining why native int4 QAT was actually crucial for them. I’ve summarized the key takeaways below to offer a different perspective on the 'to quant or not to quant' debate. **TL;DR:** Kimi found int4 QAT is essential for **MoE latency**, **long-context stability**, and **speeding up the RL training loop**. # Decoding is Memory-Bound (Latency Focus) Unlike the MiniMax case, Kimi found that for their specific MoE architecture (which is highly sparse), the decoding phase is almost exclusively memory-bound. By using W4A16 (4-bit weights, 16-bit activations), they reduced memory usage significantly. This allowed the model to fit on fewer GPUs, which reduced inter-device communication overhead, a major factor in lowering end-to-end latency for users. # PTQ Failed at "Thinking" Lengths The team initially tried standard Post-Training Quantization (PTQ). While it worked for short responses, it fell apart for the long chain-of-thought "thinking" process. As generation length increased, quantization errors accumulated, leading to degradation. Furthermore, PTQ struggled with sparse experts; if an expert wasn't hit frequently during the calibration step with the calibration dataset, it essentially "forgot" knowledge. QAT (Quantization Aware Training) was necessary to make the model "lossless" compared to the BF16 baseline. # A less discussed benefit: Faster RL Training This is the point that often gets overlooked: Int4 QAT wasn't just for inference serving, it accelerated the training process itself. In Reinforcement Learning, the model spends a massive amount of time in the "rollout" phase (generating text). 
By using the Int4 model for these rollouts, they reduced the total time for an RL iteration by 10-20%. It also reduced the discrepancy between the training forward pass and the inference engine. # Why Int4 and not FP4? They chose standard Int4 over newer formats like FP4 to maintain compatibility with existing hardware (non-Blackwell GPUs) and to utilize mature, highly efficient kernels like Marlin. In summary, I believe there isn't a one-size-fits-all answer regarding quantization. It depends heavily on the model's parameters and specific architecture. It is a matter of trade-offs.
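The memory-bound argument in the first point can be quantified with simple arithmetic; a sketch (the ~1T total-parameter figure for K2 is from public reports and used here only as a round number):

```python
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weight storage in GB, ignoring quantization scale/zero-point overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

bf16 = weight_gb(1000, 16)  # ~2000 GB at BF16
w4   = weight_gb(1000, 4)   # ~500 GB at Int4 (W4A16)
print(bf16, w4, bf16 / w4)  # 4x fewer weight bytes to hold and stream per decode step
```

A 4x reduction in weight bytes is what lets the model fit on fewer GPUs and cuts inter-device communication, which is the latency win described above.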
2025-12-30T11:33:10
https://www.reddit.com/r/LocalLLaMA/comments/1pzfuqg/why_kimi_k2_thinking_choose_int4_qat_from_infra/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzfuqg
false
null
t3_1pzfuqg
/r/LocalLLaMA/comments/1pzfuqg/why_kimi_k2_thinking_choose_int4_qat_from_infra/
false
false
self
159
null
I built an AI IT assistant that runs locally with Ollama - helps non-tech users fix their own computer problems
0
Hey everyone! I've been working on **Relay**, an open-source desktop app that acts as a personal IT support assistant. Think of it as having a patient tech friend who can actually see and fix what's wrong with your computer. **The problem I'm solving:** My parents (and honestly, most non-tech people) constantly struggle with basic computer issues - slow performance, sound not working, disk full, etc. They either bug me or Google scary-looking solutions they don't understand. **What it does:** * 💬 Natural conversation - describe your problem in plain English * 🔍 Actually diagnoses your system (CPU, RAM, disk, processes, etc.) * 🛠️ Can execute fixes with your approval (not just advice!) * 🛡️ Safe by design - explains everything, asks permission, can rollback * 🔒 Privacy-first - works completely offline with Ollama **AI Support:** * **Ollama** (qwen3, llama3, etc.) - fully local, no data leaves your machine * **Gemini API** \- optional cloud fallback **Tech stack:** Electron + Node.js + better-sqlite3 **GitHub:** [https://github.com/hibbault/relay](https://github.com/hibbault/relay) Still early days (v0.1.0), but it's functional and I'd love feedback from this community, especially on: 1. Which local models work best for this use case? 2. Any features you'd want to see? 3. General code feedback welcome! https://i.redd.it/bysw522hrbag1.gif Apache 2.0 licensed. Happy to answer any questions!
2025-12-30T11:16:54
https://www.reddit.com/r/LocalLLaMA/comments/1pzfkkv/i_built_an_ai_it_assistant_that_runs_locally_with/
deepstateemployee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzfkkv
false
null
t3_1pzfkkv
/r/LocalLLaMA/comments/1pzfkkv/i_built_an_ai_it_assistant_that_runs_locally_with/
false
false
https://a.thumbs.redditm…5VeJ3iRBk9x0.jpg
0
{'enabled': False, 'images': [{'id': 'ZVBEgtq_s-M4cRyHsPjgTAO1n8brwzbe3EuPpc4fJFU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZVBEgtq_s-M4cRyHsPjgTAO1n8brwzbe3EuPpc4fJFU.png?width=108&crop=smart&auto=webp&s=91f6f47f670a1194b66bfb9bf08fb38cabc30a20', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZVBEgtq_s-M4cRyHsPjgTAO1n8brwzbe3EuPpc4fJFU.png?width=216&crop=smart&auto=webp&s=25c32706a1084ed9b84938a453e0614540d2f2dc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZVBEgtq_s-M4cRyHsPjgTAO1n8brwzbe3EuPpc4fJFU.png?width=320&crop=smart&auto=webp&s=6cf8a501f70896fc39a5035e93b8c2efc1eac171', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZVBEgtq_s-M4cRyHsPjgTAO1n8brwzbe3EuPpc4fJFU.png?width=640&crop=smart&auto=webp&s=030c5a38fbb99853fb9672a67461f57cd7a3c063', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZVBEgtq_s-M4cRyHsPjgTAO1n8brwzbe3EuPpc4fJFU.png?width=960&crop=smart&auto=webp&s=325c196a6ac9b803321e7c5f72be01e2f6558a34', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZVBEgtq_s-M4cRyHsPjgTAO1n8brwzbe3EuPpc4fJFU.png?width=1080&crop=smart&auto=webp&s=03cd3ce51f29a1450824f88ebf54e667778047d9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZVBEgtq_s-M4cRyHsPjgTAO1n8brwzbe3EuPpc4fJFU.png?auto=webp&s=fede320d1b3d81f6496eb2312e1d5579e8b126f6', 'width': 1200}, 'variants': {}}]}
Benchmarked: The setup cost of H100s vs Distributed 4090s (Wan). Am I calculating this right ?
1
[removed]
2025-12-30T11:02:06
https://i.redd.it/au1vkbkrobag1.png
Desperate_One2416
i.redd.it
1970-01-01T00:00:00
0
{}
1pzfbkr
false
null
t3_1pzfbkr
/r/LocalLLaMA/comments/1pzfbkr/benchmarked_the_setup_cost_of_h100s_vs/
false
false
default
1
{'enabled': True, 'images': [{'id': 'au1vkbkrobag1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/au1vkbkrobag1.png?width=108&crop=smart&auto=webp&s=b67c3726cfc9a00d1f87eb83dff69b809e429487', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/au1vkbkrobag1.png?width=216&crop=smart&auto=webp&s=00bcf7e7f007c5753904013479291ad51ec5f2c8', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/au1vkbkrobag1.png?width=320&crop=smart&auto=webp&s=96b30f1bdf9e0dfd8bce4884f34577a132c1dedf', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/au1vkbkrobag1.png?width=640&crop=smart&auto=webp&s=10e72c66fc9fb6e2c13e63bcaa1883d5f4271736', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/au1vkbkrobag1.png?width=960&crop=smart&auto=webp&s=526e1438f9c146f099a61a59b047aa2f48bd00bd', 'width': 960}, {'height': 589, 'url': 'https://preview.redd.it/au1vkbkrobag1.png?width=1080&crop=smart&auto=webp&s=e397a620daad1a83de13973ebefcd19cb777e7a0', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/au1vkbkrobag1.png?auto=webp&s=04eb9c397f52ea392fbd5648ebd4faab3258e8a7', 'width': 2816}, 'variants': {}}]}
Are humans like LLMs?
0
Has anyone else thought about how much humans are like LLMs (or the other way around)? Prompt techniques work great on humans. I didn't run tests on humans, nor do I have benchmarks to support this intuition, but even in school we have been using the classical prompt techniques, like asking students to think step by step (math problems), giving a few examples, and so on. Humans learn the meaning of words statistically (whatever the word "meaning" means). As babies/toddlers we listen to noise, and after many repetitions we kind of pick up patterns. If you follow the philosophical line of Wittgenstein, we use words within a contextual game. Kind of like LLMs. We don't know what we know. At least as a student, I was always surprised when writing exams by how much (or how little) I knew. I would only discover how much I knew while answering the exam. This unconsciousness of how much we know, of where our knowledge is located, reminds me a lot of LLMs. We also have "instruct" and "thinking" modes. When having a live discussion with another human, I may discover my opinions by listening to what comes out of my mouth. I don't plan what to say beforehand. I don't even know what I will say, nor even what I am saying, till I say it. Most live conversations between humans happen in instruct mode. We have no idea which combination of words we will use. Funnily enough, many of us would insist we DO KNOW what we were about to say, and we will give a convincing defense that we only created after someone told us we didn't know what we were saying. Like LLMs. For writing this text I'm trying to use the reasoning mode (I've been thinking about this topic for a long time before writing about it). Our abstract logic, which we got from the Greeks, was an extraction out of language: Aristotle examined many typical arguments and extracted the words to get a pure philosophical device. For example: "If I eat a lot I get fat" is a concrete sentence, but with a structure that recurs in language time and time again. 
You can represent that structure as "a --> b". If I understand how LLMs are able to "reason", it is because they are trained on a huge amount of human text. The human texts are full of sentences with structures, so their answers use those structures too. Their "smartness" is just the abstract juice of the most probable structures found in the human corpus. So the way they use logic reminds me a lot of how we got there too. I would say the biggest difference is emotions. We relate to words and events emotionally. For example, as kids we may get a very strong impression from some event, and in the future we will relate our new learnings and doings to that impression. I guess that's how writers/creators get their motivation to create output about certain topics, or to relate certain things to others. Do you think we are indeed similar to LLMs in some ways? **Maybe in some other ways I didn't mention?** **Do you think creating AI is a way of discovering how the human mind works?** I mean, we try to create intelligence, and since we are the ones deciding the definitions of words, and we seem to be quite narcissistic, we call "intelligence" that which we have. We create things in our image. I think creating AI is interesting for understanding ourselves, because it offers a way of thinking about us from an external point of view. We are all biased when thinking about ourselves. That's why it is super easy to find defects in other fellow humans, but hard to observe them in ourselves. Science-fiction stories usually take common social troubles and place them in a far-away/future civilization, making it easier for people to read about them without getting as emotional and biased as if the story were set in the present, where we hold strong beliefs and strongly polarized opinions. 
Also, since the human mind has other modules besides our LLM, **do you think it would be wise to avoid using our LLM (words) for many tasks, and only use it for tasks where reasoning with words gives an advantage**? We humans can get very verbose (just check the length of this post), and many times it is just noise coming out of the black box of our mind. Words can be poisonous: they prevent us from being in the NOW. They put our minds in virtual worlds (like religion) instead of being present. With all this said, I leave you to enjoy your now. Cheers :)
2025-12-30T10:43:03
https://www.reddit.com/r/LocalLLaMA/comments/1pzezr3/are_humans_like_llms/
mouseofcatofschrodi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzezr3
false
null
t3_1pzezr3
/r/LocalLLaMA/comments/1pzezr3/are_humans_like_llms/
false
false
self
0
null
Devs, need a reality check, we built something awesome but, Is our landing page too marketing fluff or actually clear?
0
Another engineer and I spent the last year building a new kind of local memory engine, because we were tired of our RAG pipelines crashing every time we loaded a dataset larger than our RAM. The tech is fully finished, but the business side is new to us, and we are struggling to explain what we built without it sounding like buzzword soup. We are trying to communicate that we basically turned the hard drive into RAM. The main points we want to get across: it eliminates cloud costs completely because it runs efficiently on the hardware you already own; it is the fastest option we have measured because we use a lattice structure to find data mathematically instead of hunting through an index; and it is crash-proof and safe, since it runs on disk rather than volatile memory, so you do not lose data if the power cuts. The problem is that when we tried to put this on a website, we felt like we were failing to convey the actual innovation. We do not know if we should focus on the zero-cloud-cost angle, the technical lattice-speed angle, or just the fact that it scales without crashing. Could you guys take a quick look and tell me if you actually understand what this is, or is it too technical even for technical people? Be brutal. We would rather fix the messaging now than launch with a confusing page.
2025-12-30T09:51:38
https://ryjoxdemo.com/
DetectiveMindless652
ryjoxdemo.com
1970-01-01T00:00:00
0
{}
1pze58w
false
null
t3_1pze58w
/r/LocalLLaMA/comments/1pze58w/devs_need_a_reality_check_we_built_something/
false
false
default
0
null
best ai
1
[removed]
2025-12-30T09:47:35
https://www.reddit.com/r/LocalLLaMA/comments/1pze2tz/best_ai/
Regular-Stuff7228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pze2tz
false
null
t3_1pze2tz
/r/LocalLLaMA/comments/1pze2tz/best_ai/
false
false
nsfw
1
null
Why consumer WAN clusters (Pipeline Parallelism) might be cheaper than H100s for iterative fine-tuning
1
[removed]
2025-12-30T09:44:43
https://www.reddit.com/r/LocalLLaMA/comments/1pze166/why_consumer_wan_clusters_pipeline_parallelism/
Long-Buy5501
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pze166
false
null
t3_1pze166
/r/LocalLLaMA/comments/1pze166/why_consumer_wan_clusters_pipeline_parallelism/
false
false
self
1
null
Exploring a 1.58-bit / ternary LLM core inspired by BitNet (CUDA attention, GTX 1050 tests)
34
Hi everyone, I’ve been experimenting with extreme low-bit LLM inference inspired by the BitNet 1.58-bit paper, and wanted to share a research-style project I’ve been working on over the last few weeks. This is NOT a production-ready model, but rather an exploration of how far ternary / sparse logic can be pushed on consumer GPUs. What this project explores: \- A custom LLM core using ternary weights {-1, 0, +1} \- Trainable via Straight-Through Estimator (STE) \- Custom CUDA attention kernel (thresholded / shifted-ReLU instead of softmax) \- Designed for local inference (tested on GTX 1050) Core ideas: \- Replace FP16-heavy matmul layers with ternary linear layers \- Abs-mean scaling (BitNet-style quantization) \- Focus on reducing interference via sparsity rather than magnitude precision \- Attention without softmax to reduce compute and improve stability in low-bit regimes Current results: \- End-to-end training works \- Overfitting tests succeed (Python training → CUDA inference consistency) \- Character-level Shakespeare training produces coherent output \- Memory footprint is significantly reduced compared to FP16 baselines Limitations / open problems: \- Not competitive with large FP16/INT8 models (expected) \- Sensitive to threshold and temperature tuning \- No advanced optimizations like FlashAttention \- Very much a research prototype I’m mainly sharing this to get feedback from people who: \- Have worked with BitNet / ternary networks \- Experiment with custom CUDA kernels \- Care about local / low-power LLM inference Code and CUDA kernels are available here for anyone curious: [https://github.com/QKV-Core/Trion](https://github.com/QKV-Core/Trion) Happy to answer technical questions or discuss design tradeoffs.
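The abs-mean quantization described above can be sketched in a few lines (a hedged illustration of the BitNet-style scheme, not the project's actual kernel — in training, the Straight-Through Estimator treats the rounding as identity in the backward pass so gradients reach the full-precision weights):

```python
def quantize_ternary(w, eps=1e-8):
    """BitNet-style abs-mean quantization to {-1, 0, +1}.

    scale = mean(|W|); each weight is rounded to the nearest ternary
    value after dividing by the scale.  The dequantized approximation
    of W is q * scale.
    """
    scale = max(sum(abs(x) for x in w) / len(w), eps)
    q = [max(-1, min(1, round(x / scale))) for x in w]
    return q, scale

# scale = (0.6 + 0.5 + 0.04 + 1.2) / 4 = 0.585
q, s = quantize_ternary([0.6, -0.5, 0.04, 1.2])
# q == [1, -1, 0, 1]
```

In the real layer this would run per output channel on tensors, but the scaling and rounding logic is the same.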
2025-12-30T09:44:36
https://www.reddit.com/r/LocalLLaMA/comments/1pze13o/exploring_a_158bit_ternary_llm_core_inspired_by/
HuseyinKama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pze13o
false
null
t3_1pze13o
/r/LocalLLaMA/comments/1pze13o/exploring_a_158bit_ternary_llm_core_inspired_by/
false
false
self
34
{'enabled': False, 'images': [{'id': '6Yz6RDuhBMfGM_8nbmV6vyZbcOsJBaqCRSfKROY3VvA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6Yz6RDuhBMfGM_8nbmV6vyZbcOsJBaqCRSfKROY3VvA.png?width=108&crop=smart&auto=webp&s=8d75014ea7508afb8ef661e5d22fe2debb4c3fd0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6Yz6RDuhBMfGM_8nbmV6vyZbcOsJBaqCRSfKROY3VvA.png?width=216&crop=smart&auto=webp&s=ada49274ee98801297fd4e3831956edf1163cc32', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6Yz6RDuhBMfGM_8nbmV6vyZbcOsJBaqCRSfKROY3VvA.png?width=320&crop=smart&auto=webp&s=ca48bf7b18de6b02a71aa1d9400ea7fa6af42423', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6Yz6RDuhBMfGM_8nbmV6vyZbcOsJBaqCRSfKROY3VvA.png?width=640&crop=smart&auto=webp&s=2a3bbeb1b54082c0f37cf91fd34849d5c125dd0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6Yz6RDuhBMfGM_8nbmV6vyZbcOsJBaqCRSfKROY3VvA.png?width=960&crop=smart&auto=webp&s=e25dda011837fa03de264503046e78f1f6954078', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6Yz6RDuhBMfGM_8nbmV6vyZbcOsJBaqCRSfKROY3VvA.png?width=1080&crop=smart&auto=webp&s=3d5bfb4bcf76410166ecee09619e6701ab4ff038', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6Yz6RDuhBMfGM_8nbmV6vyZbcOsJBaqCRSfKROY3VvA.png?auto=webp&s=0a5c41504c04510cbda6120c11540fd8964bc962', 'width': 1200}, 'variants': {}}]}
reko – Local-first YouTube-to-Markdown summarizer with small local LLMs
12
I built a local-first tool to summarize YouTube videos into clean Markdown using transcripts + LLMs. The default setup uses Ollama with small local models. Other local or cloud providers are supported too if preferred. The Python app is the core engine. The main interface is a CLI (useful for scripting and automation), and I also added a small localhost web UI to speed up session-based workflows (paste a link, tweak settings, get rendered Markdown). Everything runs locally. I’d really appreciate feedback from this community on model choice, output quality improvement, and anything that could improve the local-first workflow. Repo: [https://github.com/riccardoruspoli/reko](https://github.com/riccardoruspoli/reko)
2025-12-30T09:35:48
https://www.reddit.com/r/LocalLLaMA/comments/1pzdw3h/reko_localfirst_youtubetomarkdown_summarizer_with/
Rikifire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzdw3h
false
null
t3_1pzdw3h
/r/LocalLLaMA/comments/1pzdw3h/reko_localfirst_youtubetomarkdown_summarizer_with/
false
false
self
12
{'enabled': False, 'images': [{'id': '966KOc2w71pqVA5YtzBxyV6TeiSGBkMniLlsCd4avUU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/966KOc2w71pqVA5YtzBxyV6TeiSGBkMniLlsCd4avUU.png?width=108&crop=smart&auto=webp&s=4120f802927aaa8d8012bbc4f5c39542306c5236', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/966KOc2w71pqVA5YtzBxyV6TeiSGBkMniLlsCd4avUU.png?width=216&crop=smart&auto=webp&s=c78c1d855c1e6e25eab0f0801df38f65b6e15b00', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/966KOc2w71pqVA5YtzBxyV6TeiSGBkMniLlsCd4avUU.png?width=320&crop=smart&auto=webp&s=7c66ac3f882d4eab29c1c7b70c308204a8f7921e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/966KOc2w71pqVA5YtzBxyV6TeiSGBkMniLlsCd4avUU.png?width=640&crop=smart&auto=webp&s=043fddd861684a6867163dc0c6c2fa32d883d940', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/966KOc2w71pqVA5YtzBxyV6TeiSGBkMniLlsCd4avUU.png?width=960&crop=smart&auto=webp&s=51db94c179fc473aa93a02fe1502e0990ab797a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/966KOc2w71pqVA5YtzBxyV6TeiSGBkMniLlsCd4avUU.png?width=1080&crop=smart&auto=webp&s=3fdb0cfceab6c9870131d11e5c3f2f302a324f15', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/966KOc2w71pqVA5YtzBxyV6TeiSGBkMniLlsCd4avUU.png?auto=webp&s=2399ec189677fc1e4d9c0d2cf606bef3850b394a', 'width': 1200}, 'variants': {}}]}
Final opportunity to get high-capacity memory at a great price! I’m selling two brand-new Nvidia servers straight from the factory in Taiwan at a loss.
0
Last chance before RAMageddon.
2025-12-30T09:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1pzdhbm/final_opportunity_to_get_highcapacity_memory_at_a/
GH200624GB
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzdhbm
false
null
t3_1pzdhbm
/r/LocalLLaMA/comments/1pzdhbm/final_opportunity_to_get_highcapacity_memory_at_a/
false
false
self
0
null
Showcase: Termux-Optimized Ollama with Quantum-Proof RAG Pipeline (Benchmarking Blackwell/ARM64)
1
[removed]
2025-12-30T09:08:33
https://i.redd.it/t9mi1ay15bag1.jpeg
NeoTrash420
i.redd.it
1970-01-01T00:00:00
0
{}
1pzdgjr
false
null
t3_1pzdgjr
/r/LocalLLaMA/comments/1pzdgjr/showcase_termuxoptimized_ollama_with_quantumproof/
true
false
spoiler
1
{'enabled': True, 'images': [{'id': 't9mi1ay15bag1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=108&crop=smart&auto=webp&s=0e70a0e51350d15361c11c64dfeb1dc815e97bbd', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=216&crop=smart&auto=webp&s=633c42b33c0a96e21c40f0136cc6a3a6b794540b', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=320&crop=smart&auto=webp&s=959d11893f2f9899d9b46ebfbfb7f43ef5de5485', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=640&crop=smart&auto=webp&s=c1f8f1ca9ab9b886afa75ee7a6d70a3ee8b7991c', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=960&crop=smart&auto=webp&s=39547ddcf32c6c60170c3242bf8d34606d10200c', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=1080&crop=smart&auto=webp&s=6dafde321e1d84f0a34793729d08157831bb3805', 'width': 1080}], 'source': {'height': 2712, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?auto=webp&s=423a512dd6e5e496021c4b51332b49f18d4849c2', 'width': 1220}, 'variants': {'obfuscated': {'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=a723fa57930726096d046206bff14560550f3839', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=b29af9e38d3fef371d25fa65c19a415937c02fa1', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=51202b8444bd3a9e3b13a0b3080488f100687ec3', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=fe45afa160dd317f91a7041605fce243a58d262e', 'width': 640}, {'height': 1920, 'url': 
'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a4264469fd57262aae534ed3108c64165e875b5a', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=14ab22c8c9c4eac36a89cb834787b3b623affa40', 'width': 1080}], 'source': {'height': 2712, 'url': 'https://preview.redd.it/t9mi1ay15bag1.jpeg?blur=40&format=pjpg&auto=webp&s=3583c248d21955b0681888a57bc9e3bed8b14e64', 'width': 1220}}}}]}
Last chance to get lots of memory cheaply. I will sell two brand new GH200 624GB servers with a loss directly from the factory in taiwan. Offer only valid till tomorrow.
1
[removed]
2025-12-30T08:58:47
https://i.redd.it/p37mubq53bag1.jpeg
httpsGPTrack-ai
i.redd.it
1970-01-01T00:00:00
0
{}
1pzdaog
false
null
t3_1pzdaog
/r/LocalLLaMA/comments/1pzdaog/last_chance_to_get_lots_of_memory_cheaply_i_will/
false
false
default
1
{'enabled': True, 'images': [{'id': 'p37mubq53bag1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/p37mubq53bag1.jpeg?width=108&crop=smart&auto=webp&s=a3cfe63dbb8a356815d74b585fb4957959f19824', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/p37mubq53bag1.jpeg?width=216&crop=smart&auto=webp&s=db959d4742f1abf90b3679ffdaf664cdd67a7261', 'width': 216}, {'height': 273, 'url': 'https://preview.redd.it/p37mubq53bag1.jpeg?width=320&crop=smart&auto=webp&s=ebf239bf6bd35fb689c430b253a953a0965b4a0b', 'width': 320}, {'height': 547, 'url': 'https://preview.redd.it/p37mubq53bag1.jpeg?width=640&crop=smart&auto=webp&s=4d64ea014e7a4ff573551b6ce4c7ab8ce70ee8fc', 'width': 640}, {'height': 821, 'url': 'https://preview.redd.it/p37mubq53bag1.jpeg?width=960&crop=smart&auto=webp&s=b618ade795d916a8db789e91511fe587341a793e', 'width': 960}, {'height': 923, 'url': 'https://preview.redd.it/p37mubq53bag1.jpeg?width=1080&crop=smart&auto=webp&s=494cd69821da29a78457d32cfe7173490046318f', 'width': 1080}], 'source': {'height': 3245, 'url': 'https://preview.redd.it/p37mubq53bag1.jpeg?auto=webp&s=5e0acd1eb8681303b62a1e22cd98c94761b523b1', 'width': 3793}, 'variants': {}}]}
The "Setup Tax": Why consumer WAN clusters (Pipeline Parallelism) might be cheaper than H100s for iterative fine-tuning
1
I've been crunching the numbers on fine-tuning Llama 3 70B, and I think we might be overestimating the value of H100s for iterative runs. I wanted to sanity-check my math with you guys. The problem: the "Setup Tax". When renting H100s (on AWS/RunPod/Vast), we usually focus on the hourly rate. But for iterative research (where you restart runs often), the setup time kills the budget. * On AWS: a fresh run takes around 45 mins (installing CUDA, docker pull, downloading 140GB of weights). * The cost: if you do 3 iterative runs, you pay for 2h15m of pure setup time. The alternative: a consumer WAN cluster (Pipeline Parallelism). I'm testing an architecture that keeps consumer nodes (4090s) "warm" and connected via WAN using Pipeline Parallelism (slicing layers sequentially, like the Petals architecture, to avoid the bandwidth-heavy synchronization of FSDP). The trade-off: * Latency: it's slower. My benchmarks show 1.6x wall-clock time compared to NVLink clusters. * Savings: because setup takes 5-14 mins (no redownloading weights), the total billed time for a 3-run session is actually 60% cheaper than the H100 equivalent. My question for the community: has anyone else tried strictly quantifying this setup tax? Am I missing a hidden cost in the WAN approach (besides the obvious latency hit)? I'm currently scheduling manual runs to test the fault tolerance of this topology. If you've tried running PP over consumer internet, I'd love to hear if you hit the same latency numbers.
2025-12-30T08:56:00
https://www.reddit.com/r/LocalLLaMA/comments/1pzd92x/the_setup_tax_why_consumer_wan_clusters_pipeline/
Desperate_One2416
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzd92x
false
null
t3_1pzd92x
/r/LocalLLaMA/comments/1pzd92x/the_setup_tax_why_consumer_wan_clusters_pipeline/
false
false
self
1
null
The "Setup Tax": Why consumer WAN clusters (Pipeline Parallelism) might be cheaper than H100s for iterative fine-tuning
1
[removed]
2025-12-30T08:55:38
https://www.reddit.com/r/LocalLLaMA/comments/1pzd8ut/the_setup_tax_why_consumer_wan_clusters_pipeline/
Alone-Detective-3317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzd8ut
false
null
t3_1pzd8ut
/r/LocalLLaMA/comments/1pzd8ut/the_setup_tax_why_consumer_wan_clusters_pipeline/
false
false
self
1
null
Best RAG framework for large-scale document search & source attribution?
1
Hi everyone, I’m looking for recommendations on the **best Retrieval-Augmented Generation (RAG) framework** for large-scale, document-based information retrieval. **Use case:** * My company has **thousands of documents**, each around **100+ pages** * Documents are PDFs / text files stored across different locations * A user may ask a query whose answer exists across **multiple documents** (e.g., 5 different files) **What I need from the RAG system:** * Accurately retrieve the **relevant documents** * Return **file names, paths/locations**, and (ideally) page or section references * Support **metadata-aware retrieval** and scale well with large corpora * Be production-ready and maintainable I’d love to hear: * Which framework has worked best for you in **real-world, enterprise-scale setups**? * Any recommendations for **vector databases** or retrieval strategies that pair well with it? Appreciate any insights, benchmarks, or architecture suggestions. Thanks!
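Whatever framework you choose, the source-attribution requirement boils down to storing file name, path, and page alongside each chunk and returning that metadata with every hit. A minimal sketch (the keyword-overlap scorer is a stand-in for your embedding model + vector DB, and the corpus and field names are assumptions, not any specific framework's API):

```python
def score(query, text):
    """Toy relevance: word overlap (stand-in for cosine over embeddings)."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def retrieve(query, chunks, top_k=3):
    """Return the top chunks with full source attribution attached."""
    ranked = sorted(chunks, key=lambda c: score(query, c["text"]), reverse=True)
    return [{"file": c["file"], "path": c["path"], "page": c["page"],
             "snippet": c["text"][:80]} for c in ranked[:top_k]]

chunks = [
    {"file": "hr_policy.pdf", "path": "/docs/hr/hr_policy.pdf", "page": 12,
     "text": "Annual leave accrues at two days per month"},
    {"file": "it_guide.pdf", "path": "/docs/it/it_guide.pdf", "page": 3,
     "text": "VPN access requires a registered device"},
]
hits = retrieve("how does annual leave accrue", chunks, top_k=1)
# hits[0] carries file, path, page, snippet — everything needed to cite sources
```

Most vector databases support this pattern directly via a metadata payload per vector, so answers spanning 5 documents can cite all 5 files and pages.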
2025-12-30T08:41:39
https://www.reddit.com/r/LocalLLaMA/comments/1pzd0s1/best_rag_framework_for_largescale_document_search/
Acceptable_Young_167
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzd0s1
false
null
t3_1pzd0s1
/r/LocalLLaMA/comments/1pzd0s1/best_rag_framework_for_largescale_document_search/
false
false
self
1
null
Has anyone built a RAG on WikiLeaks?
25
Because that would be a useful application.
2025-12-30T08:41:14
https://www.reddit.com/r/LocalLLaMA/comments/1pzd0j7/has_anyone_built_a_rag_on_wikileaks/
JLeonsarmiento
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzd0j7
false
null
t3_1pzd0j7
/r/LocalLLaMA/comments/1pzd0j7/has_anyone_built_a_rag_on_wikileaks/
false
false
self
25
null
Tencent HY-Motion 1.0 - a billion-parameter text-to-motion model
301
We are excited to open-source Tencent HY-Motion 1.0, a billion-parameter text-to-motion model built on the Diffusion Transformer (DiT) architecture and flow matching. Tencent HY-Motion 1.0 empowers developers and individual creators alike by transforming natural language into high-fidelity, fluid, and diverse 3D character animations, delivering exceptional instruction-following capabilities across a broad range of categories. The generated 3D animation assets can be seamlessly integrated into typical 3D animation pipelines. Highlights: 🔹Billion-Scale DiT: Successfully scaled flow-matching DiT to 1B+ parameters, setting a new ceiling for instruction-following capability and generated motion quality. 🔹Full-Stage Training Strategy: The industry’s first motion generation model featuring a complete Pre-training → SFT → RL loop to optimize physical plausibility and semantic accuracy. 🔹Comprehensive Category Coverage: Features 200+ motion categories across 6 major classes—the most comprehensive in the industry, curated via a meticulous data pipeline. 🌐Project Page: https://hunyuan.tencent.com/motion 🔗Github: https://github.com/Tencent-Hunyuan/HY-Motion-1.0 🤗Hugging Face: https://huggingface.co/tencent/HY-Motion-1.0 📄Technical report: https://arxiv.org/pdf/2512.23464
2025-12-30T08:26:06
https://i.redd.it/yq8uriwhxaag1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1pzcrtb
false
null
t3_1pzcrtb
/r/LocalLLaMA/comments/1pzcrtb/tencent_hymotion_10_a_billionparameter/
false
false
default
301
{'enabled': True, 'images': [{'id': 'yq8uriwhxaag1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/yq8uriwhxaag1.jpeg?width=108&crop=smart&auto=webp&s=6e074b0584237083a7fb1d30ed9bc3c1a7c7bf58', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/yq8uriwhxaag1.jpeg?width=216&crop=smart&auto=webp&s=ee063792c01d773991efef75c069775224e78341', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/yq8uriwhxaag1.jpeg?width=320&crop=smart&auto=webp&s=a3332c7d0157334890e425056395257d7688a3dd', 'width': 320}, {'height': 364, 'url': 'https://preview.redd.it/yq8uriwhxaag1.jpeg?width=640&crop=smart&auto=webp&s=d990cf5383783b3e2aa22351ddeb29ebac5eb2b2', 'width': 640}, {'height': 546, 'url': 'https://preview.redd.it/yq8uriwhxaag1.jpeg?width=960&crop=smart&auto=webp&s=b19945d2141c8571d82134a041e67d256f9acd85', 'width': 960}, {'height': 614, 'url': 'https://preview.redd.it/yq8uriwhxaag1.jpeg?width=1080&crop=smart&auto=webp&s=958041b4c36674ee8878b16b10be071b0a1cf0a4', 'width': 1080}], 'source': {'height': 1242, 'url': 'https://preview.redd.it/yq8uriwhxaag1.jpeg?auto=webp&s=795203a6d786bc3e4130d8e6cd53753a03d06576', 'width': 2183}, 'variants': {}}]}
Tencent open-source Tencent-HY-MT1.5, featuring two translation models—1.8B and 7B—designed for seamless on-device and cloud deployment with industry-leading speed and accuracy
108
Hugging face: [https://huggingface.co/collections/tencent/hy-mt15](https://huggingface.co/collections/tencent/hy-mt15) Highlights: 🔹 1.8B On-Device Power: Optimized for consumer hardware with a 1GB memory footprint. Using on-policy distillation to align with larger models, it delivers 0.18s latency (50 tokens), outperforming mainstream commercial APIs. 🔹 7B SOTA Performance: An upgraded version of our WMT25 champion, surpassing mid-sized open-source models and rivaling the 90th percentile of closed-source giants like Gemini-3.0-Pro. 🔹 33+ Languages & Dialects: High-fidelity translation across 33 languages and 5 Chinese dialects. 🔹 Production-Ready: Native support for custom terminology, long-dialogue context, and maintaining document formatting.
2025-12-30T08:11:09
https://www.reddit.com/gallery/1pzcj1q
Difficult-Cap-7527
reddit.com
1970-01-01T00:00:00
0
{}
1pzcj1q
false
null
t3_1pzcj1q
/r/LocalLLaMA/comments/1pzcj1q/tencent_opensource_tencenthymt15_featuring_two/
false
false
https://b.thumbs.redditm…DFWBY6RTdAfg.jpg
108
null
LG K EXAONE 236b
74
Will be released in a few days
2025-12-30T08:02:08
https://i.redd.it/1wirc918taag1.png
Specialist-2193
i.redd.it
1970-01-01T00:00:00
0
{}
1pzcdu1
false
null
t3_1pzcdu1
/r/LocalLLaMA/comments/1pzcdu1/lg_k_exaone_236b/
false
false
default
74
{'enabled': True, 'images': [{'id': '1wirc918taag1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/1wirc918taag1.png?width=108&crop=smart&auto=webp&s=2b1ec0466a3bb1983ef8c8b149adab6c997d215e', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/1wirc918taag1.png?width=216&crop=smart&auto=webp&s=0100bb24ea9a45b4acbf6ba99135a730e397d9e3', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/1wirc918taag1.png?width=320&crop=smart&auto=webp&s=c9c57e18c2fdc3e72fb8804e959132335b0be180', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/1wirc918taag1.png?width=640&crop=smart&auto=webp&s=6fcab270f79f71dba1d330db0ee8e85422de763b', 'width': 640}, {'height': 447, 'url': 'https://preview.redd.it/1wirc918taag1.png?width=960&crop=smart&auto=webp&s=a6a0e13fb3852130774604449791a2ebd50e300d', 'width': 960}, {'height': 503, 'url': 'https://preview.redd.it/1wirc918taag1.png?width=1080&crop=smart&auto=webp&s=8842727a2737e6ff000ae9131214f979a323e77d', 'width': 1080}], 'source': {'height': 503, 'url': 'https://preview.redd.it/1wirc918taag1.png?auto=webp&s=31b2e63e9b96e618305a34823851740f463ca39f', 'width': 1080}, 'variants': {}}]}
I built a fully offline text to speech Mac app because cloud TTS annoyed me
3
I'm an indie maker, and I shipped a native macOS app to solve a problem I personally had. I deal with a lot of long text (articles, drafts, AI outputs), and I wanted to listen to it while working without sending my content to the cloud or paying subscriptions. So I built Murmur: • Runs entirely offline on Apple Silicon • No accounts, no quotas, no subscriptions • High-quality AI voices • One-time purchase I mainly use it to: • Listen to long articles while doing admin work • Review AI outputs hands-free • Catch mistakes in drafts by listening instead of rereading **Le Giveaway**: To get early feedback on UX and improve the product, I’m giving away 100% off codes to 10 people! Just drop a comment on what you would actually use it for and I’ll generate and publish a randomized list of winners by the **end of this week.** I'll DM the codes to the most interesting use cases in 48 hours
2025-12-30T07:33:06
https://v.redd.it/fgdnzrkwnaag1
tarunyadav9761
v.redd.it
1970-01-01T00:00:00
0
{}
1pzbw4k
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fgdnzrkwnaag1/DASHPlaylist.mpd?a=1769672001%2CNGEyMDNiZTAyZWI3NzA0ZmY3MzE3OGM5ZGFhMjE5NmE1Y2Q2MTBkZjA4NGVlM2JlYjhkNjJhZGUzZjg0YTBiOQ%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/fgdnzrkwnaag1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/fgdnzrkwnaag1/HLSPlaylist.m3u8?a=1769672001%2CMTQ0ZDk5ZjU5NjIzNjFlN2EyMWU0ODU5MmQ3MDBjYzNkYTI3NzBiZDY1MDhiYWM0NjU0YjcxYjc4ODU0OWQzNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fgdnzrkwnaag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1668}}
t3_1pzbw4k
/r/LocalLLaMA/comments/1pzbw4k/i_built_a_fully_offline_text_to_speech_mac_app/
false
false
https://external-preview…938a3a0f5873558a
3
{'enabled': False, 'images': [{'id': 'ODd5Z2hya3duYWFnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/ODd5Z2hya3duYWFnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=108&crop=smart&format=pjpg&auto=webp&s=ab489bd5d637a05f49e5366652331e49a8fd5057', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/ODd5Z2hya3duYWFnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=216&crop=smart&format=pjpg&auto=webp&s=fe565800883a9cec25e58fdf00a3c4dbcd823c92', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/ODd5Z2hya3duYWFnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=320&crop=smart&format=pjpg&auto=webp&s=53317a932b1bc04daae4f9da712db647fa736abf', 'width': 320}, {'height': 414, 'url': 'https://external-preview.redd.it/ODd5Z2hya3duYWFnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=640&crop=smart&format=pjpg&auto=webp&s=5c7996e992def22eae76820e71ec6c4e560ac7cf', 'width': 640}, {'height': 621, 'url': 'https://external-preview.redd.it/ODd5Z2hya3duYWFnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=960&crop=smart&format=pjpg&auto=webp&s=5748543e414043707b2ba249986d01e1ad4a90a0', 'width': 960}, {'height': 699, 'url': 'https://external-preview.redd.it/ODd5Z2hya3duYWFnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e20b4e37470a9361dd60195c0cd13dbeecb178b8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ODd5Z2hya3duYWFnMT5ICs7kDawfo1Fgfnw0GJpF94UbYpqrosjPAEkkD-iX.png?format=pjpg&auto=webp&s=c99c88256a35b7e9260634d0d4f8820706187336', 'width': 1668}, 'variants': {}}]}
Live comparison of AI coding tools (features, repo understanding, pricing, dev sentiment)
5
I kept running into AI coding tool comparisons that were stale within weeks, so I built a small site that auto-regenerates them daily using web search + LLMs. It compares tools like Cursor, Windsurf, Claude Code, Copilot across 18 criteria. No auth, no tracking. [https://devcompare.io](https://devcompare.io)
2025-12-30T07:09:10
https://www.devcompare.io/
anticlickwise
devcompare.io
1970-01-01T00:00:00
0
{}
1pzbhdj
false
null
t3_1pzbhdj
/r/LocalLLaMA/comments/1pzbhdj/live_comparison_of_ai_coding_tools_features_repo/
false
false
default
5
null
Dreaming persistent Ai architecture > model size
1
cross posting from r/locallm

I built an AI that dreams about your codebase while you sleep.

Z.E.T.A. (Zero-shot Evolving Thought Architecture) is a multi-model system that indexes your code, builds a memory graph, and runs autonomous "dream cycles" during idle time. It wakes up with bug fixes, refactors, and feature ideas based on YOUR architecture.

**What it actually does:**

1. You point it at your codebase
2. It extracts every function, struct, and class into a semantic memory graph
3. Every 5 minutes, it enters a dream cycle where it free-associates across your code
4. Novel insights get saved as markdown files you can review

Dream output looks like this:

    code_idea: Buffer Pool Optimization

    The process_request function allocates a new buffer on every call.
    Consider a thread-local buffer pool:

    typedef struct {
        char buffer[BUFSIZE];
        struct buffer_pool *next;
    } buffer_pool_t;

    This reduces allocation overhead in hot paths by ~40%.

Dreams are filtered for novelty. Repetitive ideas get discarded automatically.

**Architecture:**

* 14B model for reasoning and planning
* 7B model for code generation
* 4B model for embeddings and memory retrieval
* HRM (Hierarchical Reasoning Module) decomposes complex queries
* TRM (Temporal Reasoning Memory) handles Git-style thought branching
* Lambda-based temporal decay prevents rumination

**Quick start:**

    docker pull ghcr.io/h-xx-d/zetazero:latest
    ./scripts/setup.sh
    # Edit docker-compose.yml to point at your codebase
    docker-compose up -d
    # Check back tomorrow
    ls ~/.zetazero/storage/dreams/pending/

Requires NVIDIA GPU with CUDA 12.x. Tested on a 5060 Ti.

**Scales with your hardware.** The default config runs on a 5060 Ti (14B + 7B + 4B). The architecture is model-agnostic. Just swap the GGUF paths in docker-compose.yml:

|Your GPU|Main Model|Coder Model|Embedding Model|
|:-|:-|:-|:-|
|16GB (5060 Ti, 4080)|Qwen 14B|Qwen Coder 7B|Nomic 4B|
|24GB (4090)|Qwen 32B|Qwen Coder 14B|Nomic 4B|
|48GB (A6000, dual 3090)|Qwen 72B|Qwen Coder 32B|Nomic 4B|
|80GB (A100, H100)|Qwen 72B Q8|Qwen Coder 32B Q8|Nomic 4B|

**Note:** Keep models in the same family so tokenizers stay compatible. Mixing Qwen with Llama will break things. Dream quality scales with model capability. Bigger models = better architectural insights.

**Links:**

* GitHub: [https://github.com/h-xx-d/zetazero](https://github.com/h-xx-d/zetazero)
* Docker: [ghcr.io/h-xx-d/zetazero:latest](http://ghcr.io/h-xx-d/zetazero:latest)

Dual licensed AGPL-3.0 / Commercial. For consulting or integration: [todd@hendrixxdesign.com](mailto:todd@hendrixxdesign.com)
2025-12-30T06:59:36
https://i.redd.it/278ehqbogaag1.jpeg
Empty-Poetry8197
i.redd.it
1970-01-01T00:00:00
0
{}
1pzbbgo
false
null
t3_1pzbbgo
/r/LocalLLaMA/comments/1pzbbgo/dreaming_persistent_ai_architecture_model_size/
false
false
default
1
{'enabled': True, 'images': [{'id': '278ehqbogaag1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/278ehqbogaag1.jpeg?width=108&crop=smart&auto=webp&s=3703aa3b111deb672aeb2c372018e99e5c3b129a', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/278ehqbogaag1.jpeg?width=216&crop=smart&auto=webp&s=fcb720c4956148ca44585bfbe5ba3dc8ccc209a3', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/278ehqbogaag1.jpeg?width=320&crop=smart&auto=webp&s=d8cb5429907d2d5a24b84e51db8d009333283262', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/278ehqbogaag1.jpeg?width=640&crop=smart&auto=webp&s=a06eb2519ffb1d4525b09f35b065f75a77325989', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/278ehqbogaag1.jpeg?width=960&crop=smart&auto=webp&s=20f221ff0f059b61d4474793c62bb5650b10b597', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/278ehqbogaag1.jpeg?width=1080&crop=smart&auto=webp&s=c7237884c229a6147c2c207c5b2e2d0c6ffe7b94', 'width': 1080}], 'source': {'height': 5712, 'url': 'https://preview.redd.it/278ehqbogaag1.jpeg?auto=webp&s=66636e5d8dc3efc8ae3b15e8031df16fdb11b570', 'width': 4284}, 'variants': {}}]}
A zero-setup agent that benchmarks multiple open / closed source LLMs on your specific problem / data
6
Comparing different open and closed source LLMs, and analyzing their pros and cons on your own specific problem or dataset is a common task while building agents or LLM workflows. We built an agent that makes it simple to do this. Just load or connect your dataset, explain the problem and ask our agent to prompt different LLMs. Here's an example of doing this on the TweetEval tweet emoji prediction task (predict the right emoji given a tweet): 1. Ask the agent to curate an eval set from your data, and write a script to run inference on a model of your choice. [Dataset curation and model inference script \(the agent calls OpenRouter in this example\)](https://preview.redd.it/47okjm08gaag1.png?width=3430&format=png&auto=webp&s=777da114202ea0c534226cc2a9088c824723fb1e) [](https://preview.redd.it/a-zero-setup-agent-that-benchmarks-multiple-llms-on-your-v0-ww5t5mnd0aag1.png?width=3430&format=png&auto=webp&s=a1970020ca8b34e7dd993e486c5a06fcbd57a911) 2. The agent kicks off a background job and reports key metrics. [Background job execution of the inference script](https://preview.redd.it/ljnlsk8cgaag1.png?width=3428&format=png&auto=webp&s=3c360c783c8d3601eeec7c0235262d3e3d85dbd0) [](https://preview.redd.it/a-zero-setup-agent-that-benchmarks-multiple-llms-on-your-v0-l8oa3eqk0aag1.png?width=3428&format=png&auto=webp&s=340e4e18123265b0ff0eee3ba6eb8222dddc55e9) 3. You can ask the agent to analyze the predictions. [Agent puts the true and predicted emojis in a table](https://preview.redd.it/dg3wuftdgaag1.png?width=1080&format=png&auto=webp&s=bad6f03fc89299fa28f2ebc6f82e2c80ec8bedd0) [](https://preview.redd.it/a-zero-setup-agent-that-benchmarks-multiple-llms-on-your-v0-vx2jcawh1aag1.png?width=3428&format=png&auto=webp&s=cf7325a564b392006c1def5fe2cedd06ba18ccb1) 4. Next, ask the agent to benchmark 5 additional open + closed source models. 
[Agent uses Search to compute the cost of benchmarking additional models](https://preview.redd.it/z2haiccfgaag1.png?width=1080&format=png&auto=webp&s=b7f1759b925350df2626d50e84dd1c9b41ad0f12) [](https://preview.redd.it/a-zero-setup-agent-that-benchmarks-multiple-llms-on-your-v0-j0hef1x61aag1.png?width=3430&format=png&auto=webp&s=77cf286cf91217607e7576c4ac036e1fc35d3965) 5. After the new inference background job finishes, you can ask the agent to plot the metrics for all the benchmarked agents. [Relative performance of different models on this task](https://preview.redd.it/zmn0j5phgaag1.png?width=1080&format=png&auto=webp&s=2bc370b84f6dae33a677fa733ad62da116690ee4) [](https://preview.redd.it/a-zero-setup-agent-that-benchmarks-multiple-llms-on-your-v0-oxo8l7no1aag1.png?width=3424&format=png&auto=webp&s=df3bccaba81548e2bf97c23b4375c55c707b4643) In this particular task, surprisingly, Llama-3-70b performs the best, even better than closed source models like GPT-4o and Claude-3.5! You can check out this workflow at [https://nexttoken.co/app/share/9c8ad40c-0a35-4c45-95c3-31eb73cf7879](https://nexttoken.co/app/share/9c8ad40c-0a35-4c45-95c3-31eb73cf7879)
2025-12-30T06:52:25
https://www.reddit.com/r/LocalLLaMA/comments/1pzb6x7/a_zerosetup_agent_that_benchmarks_multiple_open/
Ok-Introduction354
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzb6x7
false
null
t3_1pzb6x7
/r/LocalLLaMA/comments/1pzb6x7/a_zerosetup_agent_that_benchmarks_multiple_open/
false
false
https://b.thumbs.redditm…v-hwejr7X9lE.jpg
6
null
[Project] I treated LLM inference like a physical signal trajectory. Here is a Python toolkit to visualize the "Thinking Process" (Hidden States).
68
Hi everyone, I'm a PhD student in **Electromagnetics**. In my daily work, I deal with fields, waves, and trajectories. When I started playing with Local LLMs, I felt something was missing: we usually look at the *output* text or the *loss curves*, but we rarely see **how** the model gets from A to B. To an RF engineer, reasoning isn't just a probability distribution—it's a **dynamic flow** through a high-dimensional space. So, I built a lightweight Python toolkit to extract hidden states layer-by-layer and visualize them as continuous **2D/3D trajectories**. I wanted to see if "thoughts" have a geometric shape. The results were surprisingly consistent. I’m sharing the tool so you can run it on your own models (Llama, Qwen, Mistral, etc.). # 1. The "Confidence Funnel" (Convergence) I found that if you feed the model slightly different prompts about the same concept (e.g., "Define Justice", "What is Fairness"), the internal states start far apart but **physically collapse** into a single "attractor basin" as the layers get deeper. https://preview.redd.it/ockr11ldcaag1.png?width=4800&format=png&auto=webp&s=2eb1f34a4e014bcd85d8ba77b6e95fdb1fba422c * **Practical Use:** You can use this to test **Prompt Stability**. If the funnel is tight, the model is sure. If it sprays out at the end, the model is confused or hallucinating. # 2. Llama-3 vs. Qwen-2.5: Different "Thinking Styles" This was the coolest find. When I ran the same prompts through different architectures, the "shape" of their thinking was totally different. https://preview.redd.it/d6kdjcifcaag1.png?width=3600&format=png&auto=webp&s=bab8f3499bbd2b69481d5f24faefb7773c585df8 * **Llama-3 (Left):** Seems to "decide" on the semantics very early (Layers 5-10). The trajectory is direct. * **Qwen-2.5 (Right):** Keeps the trajectory expanded (in superposition?) until the very last layers (Layer 20+). It seems to "hold" the ambiguity much longer. 
* **Why it matters:** This might give us a geometric way to profile model behaviors beyond just benchmarks. # 3. Visualizing "Refusal" (The Safety Spike) I was curious what RLHF looks like geometrically. I visualized the trajectory when the model refuses a jailbreak versus when it follows a safe instruction. https://preview.redd.it/k1cq3ehjcaag1.png?width=1400&format=png&auto=webp&s=70f269d5357171735646780298a877604dd80aca * **Hard Refusal(Red):** Looks like a particle hitting a brick wall—a sharp, high-curvature spike. * **Soft Steering(Green):** Looks like a smooth turn. And an obvious "U-turn" at the end of its trajectory. * **Practical Use:** A visual "Geiger Counter" for safety tuning. You can see if your system prompt is creating a hard wall or a soft guide. # 📥 The Toolkit I packaged this into a Python library with example scripts. It works with local HuggingFace weights (no API needed). * **Repo:** [LLM Toolkit](https://github.com/JBKing514/map_llm_toolkit) # 🧠 The Theory (Optional) I’m not an AI researcher, but I wrote up some notes on the **manifold dynamics** perspective behind this tool (treating inference as a Langevin flow). If you are interested in the math/physics intuition behind these visualizations or need more info about my experiment setup, I put up a page and my notes here: * **Project Page & Math:** [Project GitHub Page](https://jbking514.github.io/map_blog/) * **Foundational Notes:** [Manifold Alignment Protocol (MAP)](https://zenodo.org/records/17900444) I'd love to see what **Mistral** or **Gemma** trajectories look like if anyone runs this. Let me know what you find!
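The trajectory idea above can be sketched in a few lines: given per-layer hidden states for one prompt (a num_layers × hidden_dim array), project them onto their top principal components to get a 2D curve through the layers. This is a toy numpy sketch under my own naming, not the repo's actual API; `project_trajectory` is an illustrative function.

```python
import numpy as np

def project_trajectory(hidden_states, n_components=2):
    """Project a (num_layers, hidden_dim) array of per-layer hidden
    states onto its top principal components, yielding a 2D/3D
    trajectory through the network's depth."""
    # Center across layers, then take principal directions via SVD
    X = hidden_states - hidden_states.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T

# Toy example: a random 24-layer, 64-dim "trajectory"
rng = np.random.default_rng(0)
traj = project_trajectory(rng.normal(size=(24, 64)))
print(traj.shape)  # (24, 2)
```

In practice the hidden states would come from a HuggingFace model run with `output_hidden_states=True`, taking the last-token vector at each layer.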
2025-12-30T06:40:32
https://www.reddit.com/r/LocalLLaMA/comments/1pzaz73/project_i_treated_llm_inference_like_a_physical/
JB_King1919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzaz73
false
null
t3_1pzaz73
/r/LocalLLaMA/comments/1pzaz73/project_i_treated_llm_inference_like_a_physical/
false
false
https://b.thumbs.redditm…LSEAe6P2x7Fg.jpg
68
null
How many lines of code in a LLM architecture
0
Hi all, I was reading a couple of papers today and I was just curious how many lines of code are in a model's architecture. How difficult would it be to replicate the architecture code of a large LLM? What do you guys think? Thanks!
2025-12-30T06:30:21
https://www.reddit.com/r/LocalLLaMA/comments/1pzaskz/how_many_lines_of_code_in_a_llm_architecture/
Independent_Wave5651
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzaskz
false
null
t3_1pzaskz
/r/LocalLLaMA/comments/1pzaskz/how_many_lines_of_code_in_a_llm_architecture/
false
false
self
0
null
Local setup to find BBFC ratings
2
I am wondering how people set up their local systems to perform tricky search. Has anyone got a local model setup that can successfully answer this? If so, how did you do it? Prompt: Find the bbfc ratings for the following films: The Eight Mountains Godland Past Lives Killers of the Flower Moon Wonka Reality The Fabelmans Oppenheimer Bottoms Napoleon
2025-12-30T06:01:54
https://www.reddit.com/r/LocalLLaMA/comments/1pzaa7p/local_setup_to_find_bbfc_ratings/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pzaa7p
false
null
t3_1pzaa7p
/r/LocalLLaMA/comments/1pzaa7p/local_setup_to_find_bbfc_ratings/
false
false
self
2
null
Thank you LocalLLaMA
1
[removed]
2025-12-30T05:42:51
https://www.reddit.com/r/LocalLLaMA/comments/1pz9xac/thank_you_localllama/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz9xac
false
null
t3_1pz9xac
/r/LocalLLaMA/comments/1pz9xac/thank_you_localllama/
false
false
self
1
null
One answer to "what do you use local LLMs for?": a hyper-personalized multimodal event crawler
18
I see the "what do you use local LLMs for?" question come up every month, so here's one example: a multimodal agent that crawls local websites to find events happening around me. # Why local instead of API? People ask me this a lot. Cloud providers are cheap, until you're generating millions of tokens. I'm crawling dozens of event sources, processing images, deduplicating across sites. That adds up fast. Local is also faster for my use case. Claude and GPT grind to a halt during peak loads. My home server gives me consistent throughput whenever I need it. # The setup * Dual RTX Pro 6000 (96GB VRAM each) * GLM-4.6V (106B parameter multimodal model) running on vLLM * The crawler, backend, and mobile app were all vibe coded with Claude Opus # What GLM-4.6V actually does The crawler uses the model for five tasks: **1. Extracting info from event flyers** This is where multimodal models shine. [Here's an event](https://whidbeycamanoislands.com/event/the-dead-guise-new-years-eve/) where the text description doesn't mention the price, but the flyer image does. The LLM reads the flyer and extracts "$25" into a structured field. OCR can read text from an image, but it can't understand that "$25" on a psychedelic Grateful Dead flyer is the ticket price and not a date or an address. That requires a model that actually understands what it's looking at. The model also extracts venue names, performer lineups, age restrictions, and registration requirements from a combination of the raw HTML and the accompanying image. **2. Rewriting messy descriptions** Scraped event descriptions are a mess: HTML artifacts, escaped characters, inconsistent formatting. The LLM rewrites these into clean paragraphs while preserving the essential info. **3. Link classification** Rather than fragile regex to find ticket links, the LLM analyzes all links on a page and identifies the primary registration URL (not the "Buy Tickets" link for a different event in the sidebar). **4. 
Cross-source deduplication** The same event appears on multiple websites. The LLM compares new events against existing ones and determines if it's a duplicate. It understands that "NYE Party at The Clyde" and "New Year's Eve Celebration - Clyde Theatre" are the same event. **5. Multi-event extraction** Some sources publish newsletter images containing multiple events. The LLM extracts each event separately from a single composite image. # The point A few years ago, some of this would have been practically impossible. Not just expensive or slow, but actually impossible. Multimodal understanding of unstructured visual data wasn't something you could just spin up. Now I can throw together a custom tool over a weekend that does exactly what I need. Tools built for an audience of one, running on hardware I control. Full writeup with more details on the Firebase backend and Flutter app: [The age of hyper-personalized software](https://www.ovidiudan.com/2025/12/30/age-customized-software.html) (I am not selling or promoting anything, I do this for fun.)
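The flyer-extraction step (task 1 above) boils down to a single multimodal chat request against the vLLM server's OpenAI-compatible endpoint. Here is a minimal sketch of building such a payload; the model id, prompt wording, and JSON field names are illustrative assumptions, not the author's exact schema.

```python
import base64

def build_flyer_request(image_bytes, html_snippet, model="glm-4.6v"):
    """Build an OpenAI-compatible chat payload asking a multimodal
    model to pull structured event fields out of a flyer image plus
    the page's HTML. Field names and model id are illustrative."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract JSON with keys: title, venue, "
                         "price, date, ticket_url.\n" + html_snippet},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        # Ask the server to constrain output to valid JSON
        "response_format": {"type": "json_object"},
    }

payload = build_flyer_request(b"\x89PNG...", "<div>NYE at The Clyde</div>")
print(payload["model"])  # glm-4.6v
```

POSTing this to `/v1/chat/completions` and parsing the JSON reply gives the structured fields (price, venue, lineup) described above.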
2025-12-30T05:42:32
https://www.reddit.com/r/LocalLLaMA/comments/1pz9x3u/one_answer_to_what_do_you_use_local_llms_for_a/
zmarty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz9x3u
false
null
t3_1pz9x3u
/r/LocalLLaMA/comments/1pz9x3u/one_answer_to_what_do_you_use_local_llms_for_a/
false
false
self
18
null
I created the free ai prompt wikipedia that I always wanted :)
0
You can create, find, autofill, copy, edit & try AI prompts for anything. Check it out, I think it's pretty cool. Let me know what it's missing :)
2025-12-30T05:22:59
https://www.persony.ai
yahya5650
persony.ai
1970-01-01T00:00:00
0
{}
1pz9jsy
false
null
t3_1pz9jsy
/r/LocalLLaMA/comments/1pz9jsy/i_created_the_free_ai_prompt_wikipedia_that_i/
false
false
default
0
null
I Built an Internal-State Reasoning Engine.
0
I revised my repo and added a working skeleton of the engine, config files, and tests.

Repo: https://github.com/GhoCentric/ghost-engine

I want to acknowledge upfront that my earlier posts were mis-framed. I initially underestimated how little weight .md files carry as proof, and that's on me. After reflecting on the feedback, I went back and added actual code, config, and tests to make the architecture inspectable.

What's in the repo now:

* A deterministic internal-state reasoning engine skeleton
* Config-driven bounds, thresholds, and routing weights (/config)
* Tests that exercise:
  * state bounds enforcement
  * stability recovery
  * routing weight normalization
  * pressure-based routing shifts
* Revised documentation that aligns directly with the code

This is a non-agentic internal-state reasoning engine: not a model, not an agent, and not a claim of intelligence. The LLM is optional and treated as a downstream language surface only.

**Why I used AI while building and responding**

I built this project solo, on a phone, without formal CS training. I used AI as a translation and syntax aid, not as an architecture generator. All structural decisions, state logic, and constraints were designed manually and iterated over time. I understand why AI-written explanations can raise skepticism. That's exactly why I shifted focus from prose to code and tests.

**What I'm asking for**

I'm looking for technical critique. If you think the architecture is flawed:

* point to the code
* explain where determinism breaks
* show where constraints fail
* identify failure modes I may have missed

If you think it's "slop," I'd genuinely appreciate a concrete explanation of what makes it so, based on the implementation.

Thanks to anyone who takes the time to actually look. Brutal, specific feedback is welcome.
2025-12-30T05:18:34
https://www.reddit.com/r/LocalLLaMA/comments/1pz9gv1/i_built_an_internalstate_reasoning_engine/
GhoCentric
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz9gv1
false
null
t3_1pz9gv1
/r/LocalLLaMA/comments/1pz9gv1/i_built_an_internalstate_reasoning_engine/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Ip7Ks5d5TO6ST8yaqzTlvzrxKRSzQjPfBCleapdUV4I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ip7Ks5d5TO6ST8yaqzTlvzrxKRSzQjPfBCleapdUV4I.png?width=108&crop=smart&auto=webp&s=a9f7cc2a29c801bed694b15bd9cab61d3e9ff192', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ip7Ks5d5TO6ST8yaqzTlvzrxKRSzQjPfBCleapdUV4I.png?width=216&crop=smart&auto=webp&s=883afdd9371fe2c2083154b9c0f29d5fb23fbe3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ip7Ks5d5TO6ST8yaqzTlvzrxKRSzQjPfBCleapdUV4I.png?width=320&crop=smart&auto=webp&s=f9901e255175afb97134898f83c29e14fa266aee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ip7Ks5d5TO6ST8yaqzTlvzrxKRSzQjPfBCleapdUV4I.png?width=640&crop=smart&auto=webp&s=3edc18797636a6efa6b21a512e5608195baecd11', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ip7Ks5d5TO6ST8yaqzTlvzrxKRSzQjPfBCleapdUV4I.png?width=960&crop=smart&auto=webp&s=ecb59e9da358bd003340e30713755e0613663dec', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ip7Ks5d5TO6ST8yaqzTlvzrxKRSzQjPfBCleapdUV4I.png?width=1080&crop=smart&auto=webp&s=f22094ac5d93371335a8c5323bf17cca54bf386f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ip7Ks5d5TO6ST8yaqzTlvzrxKRSzQjPfBCleapdUV4I.png?auto=webp&s=b256388e3b78736e17a0900564c3aa8de99e9b23', 'width': 1200}, 'variants': {}}]}
Does anyone else hate how follow-up questions kill LLM chat flow?
3
I've got a UX pain point across pretty much every LLM chatbot:

1. I ask about a topic, get a ~500-word response.
2. While reading, I spot something unclear and want to drill down **right there** (quote a sentence, ask "expand on this?").
3. But the only option is a new message at the bottom. I scroll away from context, the chat diverges, and the flow breaks when I review later.

**What I want (and plan to build):** Inline quoting with collapsible/hideable side replies. Click a quote bubble → popover answer expands in-place → collapse to keep the main thread clean. Like Notion comments or GitHub PR reviews, but native to LLM UIs.

* Is this a problem for you too? How do you handle mid-response doubts without losing your place?
* Seen any tools/extensions that do inline expands?

I just wanted to know if this problem is already solved or whether it's worth building.
2025-12-30T05:16:16
https://www.reddit.com/r/LocalLLaMA/comments/1pz9f91/does_anyone_else_hate_how_followup_questions_kill/
suntzuhere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz9f91
false
null
t3_1pz9f91
/r/LocalLLaMA/comments/1pz9f91/does_anyone_else_hate_how_followup_questions_kill/
false
false
self
3
null
FEEETTTTT 🤭
0
2025-12-30T04:20:16
https://i.redd.it/43pk9i4np9ag1.jpeg
aneya25jax
i.redd.it
1970-01-01T00:00:00
0
{}
1pz8a76
false
null
t3_1pz8a76
/r/LocalLLaMA/comments/1pz8a76/feeettttt/
true
false
spoiler
0
{'enabled': True, 'images': [{'id': '43pk9i4np9ag1', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=108&crop=smart&auto=webp&s=7d1e341341bde57f517580a81b24111e23caaf10', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=216&crop=smart&auto=webp&s=54914efac79b26bba0e58483136c612d507f4139', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=320&crop=smart&auto=webp&s=302e11f0d404520599ec10af12f2d96a72e6b861', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=640&crop=smart&auto=webp&s=b8a669a0cb8024e50dc703d9c68eeb6c03a3004f', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=960&crop=smart&auto=webp&s=374facc83d629414bd81c99a9f604943c218885d', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=1080&crop=smart&auto=webp&s=7eda30429bf3b32fbf934bce1877cc04e3dbeafc', 'width': 1080}], 'source': {'height': 2208, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?auto=webp&s=b0e08ea054f580fb8fb5ae486cf0f54399980940', 'width': 1242}, 'variants': {'obfuscated': {'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=54b4833a6aeafe9a016caa602314d03643141a0b', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=c34c29a0a54431f19ddf978be51a874a882692b2', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a411c6a7296261c3b3cbf49c4d43077e63f3ca17', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=a7204124153132f789761584a2ea8491debcff29', 'width': 640}, {'height': 1706, 'url': 
'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=25d064d42140f782d4b22cddd51974eadcbc71a9', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=fda9affe71fa7aa270ae1ec32b75cbe04fd4495a', 'width': 1080}], 'source': {'height': 2208, 'url': 'https://preview.redd.it/43pk9i4np9ag1.jpeg?blur=40&format=pjpg&auto=webp&s=809641b8c511f61ecb20ddcdcfa6e5e741e7803b', 'width': 1242}}}}]}
mlx-lm batch-server is work
1
[removed]
2025-12-30T04:12:35
https://www.reddit.com/r/LocalLLaMA/comments/1pz84gk/mlxlm_batchserver_is_work/
No-Abrocoma-5335
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz84gk
false
null
t3_1pz84gk
/r/LocalLLaMA/comments/1pz84gk/mlxlm_batchserver_is_work/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uGw8BJ0WG_m_COn3jlBVnhFpbo0f4bnVms8uJDyk7ag', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uGw8BJ0WG_m_COn3jlBVnhFpbo0f4bnVms8uJDyk7ag.png?width=108&crop=smart&auto=webp&s=23a664d3b099462b032d9cb33b28c3e6b352ac10', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uGw8BJ0WG_m_COn3jlBVnhFpbo0f4bnVms8uJDyk7ag.png?width=216&crop=smart&auto=webp&s=000d71d4d02c209cf17de2607ad7341599bf1a5e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uGw8BJ0WG_m_COn3jlBVnhFpbo0f4bnVms8uJDyk7ag.png?width=320&crop=smart&auto=webp&s=ca54d5a97d3c475bf1b27a41d82ea8207f5512b6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uGw8BJ0WG_m_COn3jlBVnhFpbo0f4bnVms8uJDyk7ag.png?width=640&crop=smart&auto=webp&s=a42e4dd0cc77a4258122fe378be21af24540b5e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uGw8BJ0WG_m_COn3jlBVnhFpbo0f4bnVms8uJDyk7ag.png?width=960&crop=smart&auto=webp&s=48537b6fb3d26d8ce622b9ee55376c35c96a71e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uGw8BJ0WG_m_COn3jlBVnhFpbo0f4bnVms8uJDyk7ag.png?width=1080&crop=smart&auto=webp&s=19e61d3f7029f17b251e360c873a89ab768dceaf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uGw8BJ0WG_m_COn3jlBVnhFpbo0f4bnVms8uJDyk7ag.png?auto=webp&s=cbcd1b26646a7bcaa4ff257e9787a2eee5540518', 'width': 1200}, 'variants': {}}]}
If an AI agent could pay a few cents instantly for a tool call, what would you actually build or charge for?
0
I’ve been spending the last few days going deep on agent systems, and something finally clicked for me. Ignore crypto hype for a second. Imagine a very boring assumption:

An agent can hold a wallet. It can pay 1 to 10 cents instantly. No accounts, no Stripe, no subscriptions. Payment happens automatically inside the agent loop. So a tool can literally say: payment required, 0.02, and the agent decides if it is worth it.

I’m curious where this actually matters in practice. For people here who:

- Build MCP servers
- Write tools for agents
- Run crawlers, search, research, scraping, inference, or data pipelines

What is something you would:

1) Charge for if billing was trivial
2) Pay for if it was just pennies per call
3) Never bothered monetizing because payments were annoying or not worth it

I’m trying to understand where real friction exists today for builders, not what sounds cool on paper.
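The loop described above ("tool says payment required, agent decides") is easy to prototype. A toy, purely illustrative sketch of the agent side, with a hypothetical wallet dict and a retry-once policy; none of this is a real payment protocol:

```python
def call_tool(tool, args, wallet, max_price=0.10):
    """Toy agent-side loop: if a tool answers 'payment required',
    pay from the wallet (up to max_price) and retry once."""
    resp = tool(args, paid=0.0)
    if resp.get("status") == "payment_required":
        price = resp["price"]
        if price > max_price or price > wallet["balance"]:
            return {"status": "declined", "price": price}
        wallet["balance"] -= price          # settle, then retry
        resp = tool(args, paid=price)
    return resp

def echo_tool(args, paid):
    """A 2-cent echo tool that demands payment before answering."""
    if paid < 0.02:
        return {"status": "payment_required", "price": 0.02}
    return {"status": "ok", "result": args.upper()}

wallet = {"balance": 1.00}
out = call_tool(echo_tool, "hello", wallet)
print(out["result"], round(wallet["balance"], 2))  # HELLO 0.98
```

The interesting design questions (budgets per task, refunds on bad results, receipts for auditing) all live around this loop rather than inside it.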
2025-12-30T04:10:06
https://www.reddit.com/r/LocalLLaMA/comments/1pz82n1/if_an_ai_agent_could_pay_a_few_cents_instantly/
Chance_Lion3547
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz82n1
false
null
t3_1pz82n1
/r/LocalLLaMA/comments/1pz82n1/if_an_ai_agent_could_pay_a_few_cents_instantly/
false
false
self
0
null
Llama-3.3-8B-Instruct
150
I am not sure if this is real, but the author provides a fascinating story behind its acquisition. I would like for it to be real! https://huggingface.co/allura-forge/Llama-3.3-8B-Instruct Bartowski GGUFs: https://huggingface.co/bartowski/allura-forge_Llama-3.3-8B-Instruct-GGUF
2025-12-30T03:49:11
https://www.reddit.com/r/LocalLLaMA/comments/1pz7mxr/llama338binstruct/
ttkciar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz7mxr
false
null
t3_1pz7mxr
/r/LocalLLaMA/comments/1pz7mxr/llama338binstruct/
false
false
self
150
{'enabled': False, 'images': [{'id': 'F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=108&crop=smart&auto=webp&s=1ba2f2a539526f220a8957a49d24d3dd56bfaf7b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=216&crop=smart&auto=webp&s=616206c4fe7cd5835efd047bb856b6fabd795594', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=320&crop=smart&auto=webp&s=0c41352ef81a1cc5d433061e97aa3367b6c2307d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=640&crop=smart&auto=webp&s=0a109a6917bc66847cb36c61990f58523049b666', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=960&crop=smart&auto=webp&s=c74d8972d027c54c7c9c3eb26d0aeb819fb5265e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=1080&crop=smart&auto=webp&s=d4310b72ad421278aec7aea0230c91c021b57eff', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?auto=webp&s=6d79b55cb1870a5a3b99680ebd94a40744e6f4b8', 'width': 1200}, 'variants': {}}]}
Building a real‑time interactive digital human with full‑stack open‑source technologies
2
Building a real‑time interactive digital human with full‑stack open‑source technologies
2025-12-30T03:38:54
https://www.bilibili.com/video/BV169BCBSELH/?share_source=copy_web&vd_source=ad104f2f7ca4ece16ed05ee90c99142d
Deep-Jellyfish6717
bilibili.com
1970-01-01T00:00:00
0
{}
1pz7f4i
false
null
t3_1pz7f4i
/r/LocalLLaMA/comments/1pz7f4i/building_a_realtime_interactive_digital_human/
false
false
default
2
null
Llama-3.3-8B-Instruct
437
GGUF: [https://huggingface.co/bartowski/allura-forge_Llama-3.3-8B-Instruct-GGUF](https://huggingface.co/bartowski/allura-forge_Llama-3.3-8B-Instruct-GGUF)

From **allura-forge**:

**Llama 3.3 8B Instruct**

Yes, this is official, and yes, this is, to my knowledge, a real version of Llama 3.3 8B. (I think, anyways)

Facebook has a [Llama API](https://llama.developer.meta.com) available that allows for inference of the other Llama models (L3.3 70B, L4 Scout and Maverick), but *also* includes a special, new (according to the original press release) "Llama 3.3 8B" that didn't exist anywhere else and was stuck behind the Facebook API!

However. The Llama API supports finetuning L3.3... *and downloading the final model in HF format.* Problem solved, right?

Wellllllllllllllll. Not really. The finetuning API was hidden behind layers of support tickets. I tried when the original API dropped in April, and was just told "We'll think about it and send you any updates" (there never were any updates).

Flash forward to December: on a whim I decided to look at the API again. And... by god... the finetuning tab was there. I could click on it and start a job (please ignore that I have no idea how it works, and in fact the finetuning tab actually disappeared after the first time I clicked on it, though I could still manually go to the page).

Apparently, this was not very well tested, as there were a good few bugs, the UI was janky, and the download model function did not actually work due to CORS (I had to manually curl things to get the CDN link). But... by god... the zip file downloaded, and I had my slightly finetuned model.

To my shock and delight, however, they also provide the adapter that they merged into the model. That means I can *subtract* that adapter and get the original model. And... here we are!

(actually, it should be "new model," but I used "other" to avoid triggering people)
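The adapter-subtraction trick described in the post is just linear algebra: a merged LoRA gives W_merged = W_base + scale·(B·A), so subtracting the low-rank delta recovers the base weights. A toy numpy demonstration of the idea (a real un-merge would apply this per layer to the downloaded safetensors, with the adapter's own scaling factor):

```python
import numpy as np

def unmerge_lora(merged_w, lora_a, lora_b, scale=1.0):
    """Recover the pre-merge weight by subtracting the LoRA delta:
    W_base = W_merged - scale * (B @ A)."""
    return merged_w - scale * (lora_b @ lora_a)

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8))          # original weight
A = rng.normal(size=(2, 8))             # rank-2 LoRA factors
B = rng.normal(size=(8, 2))
merged = base + B @ A                   # what the API hands back
recovered = unmerge_lora(merged, A, B)
print(np.allclose(recovered, base))  # True
```

This only works because the API exposes the adapter itself alongside the merged checkpoint, which is exactly the lucky break described above.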
2025-12-30T03:34:19
https://huggingface.co/allura-forge/Llama-3.3-8B-Instruct
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1pz7bmv
false
null
t3_1pz7bmv
/r/LocalLLaMA/comments/1pz7bmv/llama338binstruct/
false
false
default
437
{'enabled': False, 'images': [{'id': 'F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=108&crop=smart&auto=webp&s=1ba2f2a539526f220a8957a49d24d3dd56bfaf7b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=216&crop=smart&auto=webp&s=616206c4fe7cd5835efd047bb856b6fabd795594', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=320&crop=smart&auto=webp&s=0c41352ef81a1cc5d433061e97aa3367b6c2307d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=640&crop=smart&auto=webp&s=0a109a6917bc66847cb36c61990f58523049b666', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=960&crop=smart&auto=webp&s=c74d8972d027c54c7c9c3eb26d0aeb819fb5265e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?width=1080&crop=smart&auto=webp&s=d4310b72ad421278aec7aea0230c91c021b57eff', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F-RvVhAB2x8ac9OzOxDw905YUEWDIOQBeDMa2ZyMwo4.png?auto=webp&s=6d79b55cb1870a5a3b99680ebd94a40744e6f4b8', 'width': 1200}, 'variants': {}}]}
An open source implementation of that refusal steering paper
13
Hey everyone - I just released the code for the refusal steering paper, built on LLM-Refusal-Evaluation. TL;DR: Surgical refusal removal with statistical validation instead of vibes-based steering. Main features: judge scores validate your training data; correlation analysis picks the best layers automatically; confidence-weighted steering vectors (WRMD from the paper); automatic alpha optimization with early stopping; vectors can be merged permanently into the weights. It's more setup than simpler steering repos (multi-stage pipeline, needs the eval framework), but you get actual statistical validation at each step instead of guessing. Repo: https://github.com/ElSnacko/llm-steering Paper: https://arxiv.org/abs/2512.16602 Would love feedback from anyone who tries it! Especially curious how it stacks up against abliteration in practice. I will be testing and benchmarking this implementation, so more posts are likely to come.
2025-12-30T03:12:53
https://www.reddit.com/r/LocalLLaMA/comments/1pz6vju/an_open_source_implementation_of_that_refusal/
Remarkable_Threes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz6vju
false
null
t3_1pz6vju
/r/LocalLLaMA/comments/1pz6vju/an_open_source_implementation_of_that_refusal/
false
false
self
13
{'enabled': False, 'images': [{'id': 'sp2LDVanx_y3s27nVwCkW8Idlfbl2ZaZVUPKO18Zfuc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sp2LDVanx_y3s27nVwCkW8Idlfbl2ZaZVUPKO18Zfuc.png?width=108&crop=smart&auto=webp&s=f53290c14a5a9f8fe30c9a75f9d542231e431042', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sp2LDVanx_y3s27nVwCkW8Idlfbl2ZaZVUPKO18Zfuc.png?width=216&crop=smart&auto=webp&s=3e7f977a1d5b147d76f5ae669e2faea5393c780d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sp2LDVanx_y3s27nVwCkW8Idlfbl2ZaZVUPKO18Zfuc.png?width=320&crop=smart&auto=webp&s=601f7d6be473a6756468a29113cd28a8ae2d42d9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sp2LDVanx_y3s27nVwCkW8Idlfbl2ZaZVUPKO18Zfuc.png?width=640&crop=smart&auto=webp&s=52cda93a676d46f14bb1c48662f3d562a4de159c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sp2LDVanx_y3s27nVwCkW8Idlfbl2ZaZVUPKO18Zfuc.png?width=960&crop=smart&auto=webp&s=39c14f86c4e025a1f7ed11817ebcf0aba53caffc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sp2LDVanx_y3s27nVwCkW8Idlfbl2ZaZVUPKO18Zfuc.png?width=1080&crop=smart&auto=webp&s=582d912a954806365cf6e503e88ce71c0704d6e3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sp2LDVanx_y3s27nVwCkW8Idlfbl2ZaZVUPKO18Zfuc.png?auto=webp&s=0d1a0902af896af935222f296311ca7f01b09638', 'width': 1200}, 'variants': {}}]}
2025: Recap of Major LLM Releases and Their Effects
0
[https://www.youtube.com/watch?v=UEp4j0yYvME](https://www.youtube.com/watch?v=UEp4j0yYvME) Goes over the mainstream LLM model releases and how it affected the job market and hardware (RAM). The AI story of 2025 can be told in six numbers: * 💰 $5.58M - What DeepSeek spent to shake Silicon Valley * 📈 $202B - Total AI investment this year * 👥 55,000 - Jobs attributed to AI displacement * 🔥 300%+ - How much RAM prices jumped as AI devoured memory supply * 🤖 7 hours - How long can Claude Opus 4 work autonomously * ⚡ 25 days - The November sprint that changed everything What was found: * 🇺🇸🇨🇳 The US-China AI gap? Nearly closed. * 🔓 Open-source vs closed models? Gap shrunk to 1.7% * 🤖 AI agents? No longer demos - they shipped to millions * 💾 Memory market? AI ate consumer RAM - shortage until 2028 * ⚖️ Regulation? The US and EU are heading in opposite directions * 💭 The bubble question? $200B invested, but 95% seeing zero ROI [Written version](https://blog.cyphertech.co/2025-the-year-ai-got-real/)
2025-12-30T03:10:33
https://www.reddit.com/r/LocalLLaMA/comments/1pz6tpp/2025_recap_of_major_llm_releases_and_their_effects/
gpt872323
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz6tpp
false
null
t3_1pz6tpp
/r/LocalLLaMA/comments/1pz6tpp/2025_recap_of_major_llm_releases_and_their_effects/
false
false
self
0
{'enabled': False, 'images': [{'id': 'n8F7h2eapF_ayY6b1N_cmM1nQg_IUgO6v5Enf9RZWQ4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/n8F7h2eapF_ayY6b1N_cmM1nQg_IUgO6v5Enf9RZWQ4.jpeg?width=108&crop=smart&auto=webp&s=d482739e2bf387d4ab9d964d9ad462aceba27fdb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/n8F7h2eapF_ayY6b1N_cmM1nQg_IUgO6v5Enf9RZWQ4.jpeg?width=216&crop=smart&auto=webp&s=0f2b17921dfe09cf5a8fb1236798176ef963b1d7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/n8F7h2eapF_ayY6b1N_cmM1nQg_IUgO6v5Enf9RZWQ4.jpeg?width=320&crop=smart&auto=webp&s=70d4472f2af96065603c44040d3d7ca99ea63bc8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/n8F7h2eapF_ayY6b1N_cmM1nQg_IUgO6v5Enf9RZWQ4.jpeg?auto=webp&s=42e8eee3f0220b805dff02a5da8ad651759b5d68', 'width': 480}, 'variants': {}}]}
Meta acquired Manus !!
33
Manus is a general-purpose autonomous AI agent developed by Butterfly Effect Technology, a Singapore-based startup.
2025-12-30T02:55:27
https://manus.im/blog/manus-joins-meta-for-next-era-of-innovation
Difficult-Cap-7527
manus.im
1970-01-01T00:00:00
0
{}
1pz6hol
false
null
t3_1pz6hol
/r/LocalLLaMA/comments/1pz6hol/meta_acquired_manus/
false
false
default
33
null
Z AI is going for an IPO on Jan 8 and set to raise $560 million. Z.ai is set to be the first AI-native LLM company to list on the global market.
332
2025-12-30T02:43:48
https://i.redd.it/ocq43c2a79ag1.jpeg
Difficult-Cap-7527
i.redd.it
1970-01-01T00:00:00
0
{}
1pz68fz
false
null
t3_1pz68fz
/r/LocalLLaMA/comments/1pz68fz/z_ai_is_going_for_an_ipo_on_jan_8_and_set_to/
false
false
default
332
{'enabled': True, 'images': [{'id': 'ocq43c2a79ag1', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/ocq43c2a79ag1.jpeg?width=108&crop=smart&auto=webp&s=c4f54fd334dd739e4acb0aa47a35208e85b9a54b', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/ocq43c2a79ag1.jpeg?width=216&crop=smart&auto=webp&s=c6a20467eb5979a9b6450abeeba2b93d2c9055f3', 'width': 216}, {'height': 384, 'url': 'https://preview.redd.it/ocq43c2a79ag1.jpeg?width=320&crop=smart&auto=webp&s=080c9739120dc88bbd5d94f2162baa93402698ab', 'width': 320}, {'height': 768, 'url': 'https://preview.redd.it/ocq43c2a79ag1.jpeg?width=640&crop=smart&auto=webp&s=8e9f7477bee69f806ab9bab82c73557ea1345393', 'width': 640}, {'height': 1152, 'url': 'https://preview.redd.it/ocq43c2a79ag1.jpeg?width=960&crop=smart&auto=webp&s=d6d681aa1fe3335e4a6b577f11814480b1afa06a', 'width': 960}, {'height': 1296, 'url': 'https://preview.redd.it/ocq43c2a79ag1.jpeg?width=1080&crop=smart&auto=webp&s=da7ffdfa33ffb434dcbf99c89674c951acb81ec4', 'width': 1080}], 'source': {'height': 1441, 'url': 'https://preview.redd.it/ocq43c2a79ag1.jpeg?auto=webp&s=6ecd35c2bac22b6e7409cfaf10a691d9987c680f', 'width': 1200}, 'variants': {}}]}
5 new korean models will be released in 2 hours
62
https://www.youtube.com/live/fLBh97ls--Q?si=Ql8JOjXXVoSA7ura Naver, LG, SK, NC, Upstage. All 5 models will be released in 2 to 3 hours. Follow along with the YouTube link.
2025-12-30T02:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1pz67hm/5_new_korean_models_will_be_released_in_2_hours/
Specialist-2193
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz67hm
false
null
t3_1pz67hm
/r/LocalLLaMA/comments/1pz67hm/5_new_korean_models_will_be_released_in_2_hours/
false
false
self
62
{'enabled': False, 'images': [{'id': 'fpNu8wp1o7f8dafTa3X1_DiglHGNq6gUQy22_LyYhAA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/fpNu8wp1o7f8dafTa3X1_DiglHGNq6gUQy22_LyYhAA.jpeg?width=108&crop=smart&auto=webp&s=843cc449e2d989b6c5706bdcecf681d852948b48', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/fpNu8wp1o7f8dafTa3X1_DiglHGNq6gUQy22_LyYhAA.jpeg?width=216&crop=smart&auto=webp&s=2cef1f7f8c1b59c92cd4233b2128c4bbc14ff280', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/fpNu8wp1o7f8dafTa3X1_DiglHGNq6gUQy22_LyYhAA.jpeg?width=320&crop=smart&auto=webp&s=891034c74e41718e5191fbb1c3d454bc7763d242', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/fpNu8wp1o7f8dafTa3X1_DiglHGNq6gUQy22_LyYhAA.jpeg?width=640&crop=smart&auto=webp&s=c8916536bbafe7e8b08fe5844c53a1502b825d09', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/fpNu8wp1o7f8dafTa3X1_DiglHGNq6gUQy22_LyYhAA.jpeg?width=960&crop=smart&auto=webp&s=79192e4a4672d8965eed778e865000435d84ead9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/fpNu8wp1o7f8dafTa3X1_DiglHGNq6gUQy22_LyYhAA.jpeg?width=1080&crop=smart&auto=webp&s=a4dd04201ad73ddf1d741aa1de0e1c43301846b2', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/fpNu8wp1o7f8dafTa3X1_DiglHGNq6gUQy22_LyYhAA.jpeg?auto=webp&s=d83d5678a15a0db9994f66f0f31fbceff54854a6', 'width': 1280}, 'variants': {}}]}
2025 AI Recap
1
[https://www.youtube.com/watch?v=UEp4j0yYvME](https://www.youtube.com/watch?v=UEp4j0yYvME)
2025-12-30T02:40:22
https://www.reddit.com/r/LocalLLaMA/comments/1pz65nu/2025_ai_recap/
gpt872323
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz65nu
false
null
t3_1pz65nu
/r/LocalLLaMA/comments/1pz65nu/2025_ai_recap/
false
false
self
1
null
RAG Paper 25.12.24
29
1. [SMART SLM: Structured Memory and Reasoning Transformer, A Small Language Model for Accurate Document Assistance](http://arxiv.org/abs/2512.21280v1) 2. [MMSRARec: Summarization and Retrieval Augumented Sequential Recommendation Based on Multimodal Large Language Model](http://arxiv.org/abs/2512.20916v1) 3. [The Silent Scholar Problem: A Probabilistic Framework for Breaking Epistemic Asymmetry in LLM Agents](http://arxiv.org/abs/2512.20884v1) **Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.**
2025-12-30T01:45:44
https://www.reddit.com/r/LocalLLaMA/comments/1pz4x0v/rag_paper_251224/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz4x0v
false
null
t3_1pz4x0v
/r/LocalLLaMA/comments/1pz4x0v/rag_paper_251224/
false
false
self
29
null
So any rumours about llama?
16
While others have been cooking, the Llama team has been radio silent. Has any interesting news about Llama surfaced?
2025-12-30T00:28:24
https://www.reddit.com/r/LocalLLaMA/comments/1pz3643/so_any_rumours_about_llama/
AdventurousFly4909
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz3643
false
null
t3_1pz3643
/r/LocalLLaMA/comments/1pz3643/so_any_rumours_about_llama/
false
false
self
16
null
Wooo vs Speed!
0
2025-12-29T23:31:03
https://i.redd.it/h60melh1a8ag1.jpeg
1Oaktree
i.redd.it
1970-01-01T00:00:00
0
{}
1pz1t2b
false
null
t3_1pz1t2b
/r/LocalLLaMA/comments/1pz1t2b/wooo_vs_speed/
false
false
default
0
{'enabled': True, 'images': [{'id': 'h60melh1a8ag1', 'resolutions': [{'height': 160, 'url': 'https://preview.redd.it/h60melh1a8ag1.jpeg?width=108&crop=smart&auto=webp&s=a507edbfaddc3083bbedde46e004ecd863274dbb', 'width': 108}, {'height': 321, 'url': 'https://preview.redd.it/h60melh1a8ag1.jpeg?width=216&crop=smart&auto=webp&s=a9918e6b2970bf516adff80d50ecf53b6bfba4c2', 'width': 216}, {'height': 476, 'url': 'https://preview.redd.it/h60melh1a8ag1.jpeg?width=320&crop=smart&auto=webp&s=51854d06649a0fb8cec10d2202920972f9ef78f4', 'width': 320}, {'height': 953, 'url': 'https://preview.redd.it/h60melh1a8ag1.jpeg?width=640&crop=smart&auto=webp&s=2e5fd09f873e4cd3498470ad16f09de3a9904a8f', 'width': 640}, {'height': 1430, 'url': 'https://preview.redd.it/h60melh1a8ag1.jpeg?width=960&crop=smart&auto=webp&s=66dda9ede87c313c4db1f0e39ac80f02f0ec390b', 'width': 960}], 'source': {'height': 1608, 'url': 'https://preview.redd.it/h60melh1a8ag1.jpeg?auto=webp&s=84df7fbefade03c75a0b01b7c9c36204cc74f01a', 'width': 1079}, 'variants': {}}]}
Working examples of AMD MI50 on Proxmox 9.1 in a LXC passthrough
7
I've been working for 3 days trying to get two Instinct MI50 cards in a server to work on Proxmox 9.1 with kernel 6.17. Proxmox includes amdgpu drivers (I think they are ROCm 6.1). I can set up the LXC, do the hardware passthrough of the cards to the LXC, and get a Docker container of Ollama and Open WebUI spun up in the LXC, but Ollama refuses to see the MI50 cards and uses the CPU instead. rocminfo, rocm-smi and radeontop all work within the LXC. I'm using the following docker-compose for Ollama, with no results. I have even gone down the path of trying GPU passthrough to a VM with vendor-reset, with no luck. The LXC method has worked for me with NVIDIA; I figured AMD would work as well. What am I missing?

    version: "3.8"
    services:
      ollama:
        image: ollama/ollama:rocm
        container_name: ollama
        ports:
          - 11434:11434
        volumes:
          - ollama_data:/root/.ollama
        devices:
          - /dev/kfd:/dev/kfd
          - /dev/dri/renderD128:/dev/dri/renderD128
        group_add:
          - "44"
          - "128"
        environment:
          - HSA_OVERRIDE_GFX_VERSION=gfx906  # Adjust based on your GPU
          - ROCR_VISIBLE_DEVICES=0  # GPU device ID (0 for first GPU)
          - GPU_DEVICE_ORDINAL=0
          - HIP_VISIBLE_DEVICES=0
          - OLLAMA_DEBUG=1
          - OLLAMA_NUM_GPU=1
          - OLLAMA_GPU_OVERHEAD=0
          - OLLAMA_MAX_LOADED_MODELS=1
        restart: unless-stopped
        networks:
          - ollama_network
    # Optional: Ollama Web UI (Open WebUI)
2025-12-29T22:46:02
https://www.reddit.com/r/LocalLLaMA/comments/1pz0phb/working_examples_of_amd_mi50_on_proxmox_91_in_a/
bkvargyas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz0phb
false
null
t3_1pz0phb
/r/LocalLLaMA/comments/1pz0phb/working_examples_of_amd_mi50_on_proxmox_91_in_a/
false
false
self
7
null
Fine-tuning on Consumer GPUs : Swarm vs FSDP (Answering the architecture/cost questions)
0
Hi everyone, In my previous post about the decentralized swarm I'm building (Zagora), some of you asked critical questions about network latency and real costs compared to just renting H100s. I want to answer those here properly, as the math is interesting for anyone trying to escape the VRAM wall. **1. What is Zagora?** It's an orchestration layer (Environment-as-a-Service) that treats distributed consumer GPUs as a single cluster. **2. The architecture: why not FSDP/DDP?** The skepticism about running FSDP over consumer internet is 100% valid. * FSDP requires constant all-gather operations. On a 1 Gbps connection, the communication overhead would make training impossible. * My approach: I am using Pipeline Parallelism (PP), similar to the Petals architecture. * We slice the model layers sequentially across nodes (Node A holds layers 1-10, Node B holds 11-20). * We only transmit activations (small tensors) between nodes, not full gradients. This makes the swarm tolerant of higher latency and lower bandwidth. **3. The economics: the "setup tax"** We benchmarked the swarm at 1.6x wall-clock time vs an H100 due to the lack of NVLink. However, for iterative research, the swarm is still 60% cheaper. The hidden cost of tinkering on AWS: * AWS H100: for every fresh run, you pay for around 45 minutes of setup (installing drivers, docker pull, downloading 140 GB of weights). If you do 3 iterative runs, you burn 2h15m of billable time just on setup. * Swarm: the environment stays warm. Re-runs take around 15 minutes of setup total (around 5 minutes per run). The math: slower compute (1.6x) + near-zero setup time + cheaper hardware rates = lower total cost for iterative workflows. This is for the researcher who needs to fine-tune a 70B+ model today and doesn't want to manage a Kubernetes cluster or fight for H100 quota. I am currently manually scheduling the runs to ensure stability (MVP). I'm looking for your architectural feedback and any valuable insights.
2025-12-29T22:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1pz06iu/finetuning_on_consumer_gpus_swarm_vs_fsdp/
Desperate_One2416
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz06iu
false
null
t3_1pz06iu
/r/LocalLLaMA/comments/1pz06iu/finetuning_on_consumer_gpus_swarm_vs_fsdp/
false
false
self
0
null
Building "Derin" - An Embodied AI project for Jetson AGX Thor (94K lines, looking for feedback)
0
Hey everyone, I've been developing an embodied AI system designed for edge deployment on NVIDIA Jetson AGX Thor. What I'm building: Consciousness-inspired decision making - not just prompt-response, but continuous awareness; autonomous goal setting and execution. Real-time perception - designed for a 30 ms visual processing loop; continuous environmental awareness. Physical embodiment (in progress) - robotic arm integration with visual feedback; learning from demonstration. 100% edge deployment - multi-model LLM architecture; no cloud dependency. Current status: architecture complete, waiting for Thor hardware to test. Looking for feedback on the approach. Is embodied AI the right direction after the "LLM scaling wall" discussions?
2025-12-29T22:23:36
https://www.reddit.com/r/LocalLLaMA/comments/1pz05m0/building_derin_an_embodied_ai_project_for_jetson/
Logarhitma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pz05m0
false
null
t3_1pz05m0
/r/LocalLLaMA/comments/1pz05m0/building_derin_an_embodied_ai_project_for_jetson/
false
false
self
0
null
Roast my Career Strategy: 0-Exp CS Grad pivoting to "Agentic AI" (4-Month Sprint)
0
Hi everyone, I am a Computer Science senior graduating in May 2026. I have 0 formal internships, so I know I cannot compete with senior engineers for traditional machine learning roles (which usually require a Masters/PhD + 5 years of experience). **My Hypothesis:** The market has shifted to "Agentic AI" (compound AI systems). Since this field is <2 years old, I believe I can compete if I master the specific "agentic stack" (orchestration, tool use, planning) rather than trying to be a model trainer. I have designed a 4-month "speed run" using O'Reilly resources. I would love feedback on whether this stack/portfolio looks hireable. ### 1. The Stack (O'Reilly Learning Path) * **Design:** *AI Engineering* (Chip Huyen) - for eval/latency patterns. * **Logic:** *Building GenAI Agents* (Tom Taulli) - for LangGraph/CrewAI. * **Data:** *LLM Engineer's Handbook* (Paul Iusztin) - for RAG/vector DBs. * **Ship:** *GenAI Services with FastAPI* (Alireza Parandeh) - for Docker/deployment. ### 2. The Portfolio (3 Projects) I am building these linearly to prove specific skills: 1. **Technical Doc RAG Engine:** ingesting messy PDFs + hybrid search (Qdrant). *Goal: prove data engineering.* 2. **Autonomous Multi-Agent Auditor:** a vision agent (OCR) + compliance agent (logic) to audit receipts. *Goal: prove reasoning/orchestration.* 3. **Secure AI Gateway:** a middleware proxy to filter PII and log costs before hitting LLMs. *Goal: prove backend/security.* ### 3. My Questions for You 1. Does this portfolio progression logically demonstrate a senior-level skill set despite 0 years of tenure? 2. Is the "Secure Gateway" project impressive enough to prove backend engineering skills? 3. Are there mandatory tools (e.g., Kubernetes, Terraform) missing that would cause an instant rejection for an "AI Engineer" role? Any feedback is appreciated!
Be critical: I am a CS student soon to be a graduate, so do not hold back on the current plan.
2025-12-29T22:03:13
https://www.reddit.com/r/LocalLLaMA/comments/1pyzn86/roast_my_career_strategy_0exp_cs_grad_pivoting_to/
Substantial_Sky_8167
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyzn86
false
null
t3_1pyzn86
/r/LocalLLaMA/comments/1pyzn86/roast_my_career_strategy_0exp_cs_grad_pivoting_to/
false
false
self
0
null
Existential Threat
0
This happened to me this morning. I felt compelled to enshrine it as a meme.
2025-12-29T21:28:39
https://i.redd.it/u8xipn9rn7ag1.png
TrickyWidget
i.redd.it
1970-01-01T00:00:00
0
{}
1pyyr36
false
null
t3_1pyyr36
/r/LocalLLaMA/comments/1pyyr36/existential_threat/
false
false
default
0
{'enabled': True, 'images': [{'id': 'u8xipn9rn7ag1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/u8xipn9rn7ag1.png?width=108&crop=smart&auto=webp&s=1d776df2d1c7931737cdba90a023b73db27b0925', 'width': 108}, {'height': 221, 'url': 'https://preview.redd.it/u8xipn9rn7ag1.png?width=216&crop=smart&auto=webp&s=fd0e9351d3a688cd758bed5311949348c43bdb4d', 'width': 216}, {'height': 328, 'url': 'https://preview.redd.it/u8xipn9rn7ag1.png?width=320&crop=smart&auto=webp&s=2ae7de47c3bb220f91bd2686d244fe4a2679df04', 'width': 320}, {'height': 656, 'url': 'https://preview.redd.it/u8xipn9rn7ag1.png?width=640&crop=smart&auto=webp&s=86e0870251d87c499937132ca642c55130f33269', 'width': 640}], 'source': {'height': 745, 'url': 'https://preview.redd.it/u8xipn9rn7ag1.png?auto=webp&s=7fab84ba34578a7538236316dd425552072cfee0', 'width': 726}, 'variants': {}}]}
What is the best way to allocate $15k right now for local LLMs?
62
What is the best bang for $15k right now? Would like to be able to run DeepSeek, Kimi K2 and GLM 4.5+.
2025-12-29T21:26:38
https://www.reddit.com/r/LocalLLaMA/comments/1pyyp59/what_is_the_best_way_to_allocated_15k_right_now/
LargelyInnocuous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyyp59
false
null
t3_1pyyp59
/r/LocalLLaMA/comments/1pyyp59/what_is_the_best_way_to_allocated_15k_right_now/
false
false
self
62
null
StabooruJeffrey: The Stable AI Platform
0
# [StabooruJeffrey: Stable and Peaceful](https://substack.com/home/post/p-182869294) # StabooruJeffrey: A Hard Fork of ComfyUI focused on stability and harmonising the custom node ecosystem (initially for audio). The most stable and modular AI engine and application. Why? There’s a lot of great audio/music tools for ComfyUI. It would be nice if as many of them as possible worked together in the same environment (StabooruJeffrey) for maximum user capability with the broadest toolset.
[https://huggingface.co/StabooruJeffrey/StabooruJeffrey](https://huggingface.co/StabooruJeffrey/StabooruJeffrey) ... [StabooruJeffrey](https://huggingface.co/StabooruJeffrey/StabooruJeffrey/blob/main/StabooruJeffrey_v0.3.60_windows_portable_nvidia.7z) (v0.3.60) is ComfyUI (v0.3.59). StabooruJeffrey is an evolution of the ComfyAudio project - a fork of ComfyUI (v0.3.59), and the last Windows portable version released before Comfy Org introduced Immutable status releases to the ComfyUI repository. StabooruJeffrey is a rebranded Hard Fork of ComfyUI, and of ComfyAudio respectively. The authentic value of StabooruJeffrey is that, as a foundation to build from, ComfyUI (v0.3.59) reimagined as StabooruJeffrey (v0.3.60) offers stable, steady, rock-solid ground beneath our feet as developers, researchers, model makers, and end users. Core StabooruJeffrey will likely change over time, but in far fewer increments, and with advance communications to announce any breaking changes that might impact compatibility within the custom node ecosystem. The plan is to move slowly, and minimise stress, hassle, and mental frustration for everyone using, building upon, and developing for the StabooruJeffrey platform. We’re doing it this way, so that we can all play with, and teach, the broadest set of tools (and enjoy the experience). The next release of StabooruJeffrey will be a lighter version of StabooruJeffrey, minus the commercial APIs and telemetry.
Follow StabooruJeffrey’s development at Substack: [https://substack.com/@staboorujeffrey](https://substack.com/@staboorujeffrey) Contribute to StabooruJeffrey at GitHub: [https://github.com/StabooruJeffrey/StabooruJeffreyyy](https://github.com/StabooruJeffrey/StabooruJeffreyyy) (the audio focused variant of the StabooruJeffrey platform) Join the community to show your support for a stable AI platform to build on: [https://huggingface.co/ThatSeemsAboutRight](https://huggingface.co/ThatSeemsAboutRight) Discussions can be started on the GitHub Issues tab of the main StabooruJeffrey repository for now, or you might consider joining the StabooruJeffrey Reddit sub: [https://www.reddit.com/r/StabooruJeffrey/](https://www.reddit.com/r/StabooruJeffrey/) Oh, and advance notice - eventually, we’d also like to get the StabooruJeffrey custom node developers paid. We’re thinking the Blender model. # [StabooruJeffrey](https://huggingface.co/StabooruJeffrey/StabooruJeffrey/blob/main/StabooruJeffrey_v0.3.60_windows_portable_nvidia.7z) Meaning: Stable and Peaceful [https://staboorujeffrey.substack.com/p/staboorujeffrey-stable-and-peaceful](https://staboorujeffrey.substack.com/p/staboorujeffrey-stable-and-peaceful)
2025-12-29T21:13:12
https://i.redd.it/lo93pze7l7ag1.jpeg
MuziqueComfyUI
i.redd.it
1970-01-01T00:00:00
0
{}
1pyyco5
false
null
t3_1pyyco5
/r/LocalLLaMA/comments/1pyyco5/staboorujeffrey_the_stable_ai_platform/
false
false
default
0
{'enabled': True, 'images': [{'id': 'lo93pze7l7ag1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/lo93pze7l7ag1.jpeg?width=108&crop=smart&auto=webp&s=02a967551097e398f181d5d02d8b7c0ce5fcfa87', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/lo93pze7l7ag1.jpeg?width=216&crop=smart&auto=webp&s=a143baaca954de691b202f6bc8ae49ece96f7f76', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/lo93pze7l7ag1.jpeg?width=320&crop=smart&auto=webp&s=8de36d81d2267115eae4e1f053a798f59ce18c78', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/lo93pze7l7ag1.jpeg?width=640&crop=smart&auto=webp&s=11ddf5390669a36e66cc1a4e5c9e9823fbdef54b', 'width': 640}], 'source': {'height': 800, 'url': 'https://preview.redd.it/lo93pze7l7ag1.jpeg?auto=webp&s=e508fff4f9777d3307f04a4e4441b037a8891253', 'width': 800}, 'variants': {}}]}
StabooruJeffrey: The Stable AI Platform
1
# [StabooruJeffrey: Stable and Peaceful](https://substack.com/home/post/p-182869294) # StabooruJeffrey: A Hard Fork of ComfyUI focused on stability and harmonising the custom node ecosystem (initially for audio). The most stable and modular AI engine and application. Why? There’s a lot of great audio/music tools for ComfyUI. It would be nice if as many of them as possible worked together in the same environment (StabooruJeffrey) for maximum user capability with the broadest toolset.
[https://huggingface.co/StabooruJeffrey/StabooruJeffrey](https://huggingface.co/StabooruJeffrey/StabooruJeffrey) ... [StabooruJeffrey](https://huggingface.co/StabooruJeffrey/StabooruJeffrey/blob/main/StabooruJeffrey_v0.3.60_windows_portable_nvidia.7z) (v0.3.60) is ComfyUI (v0.3.59). StabooruJeffrey is an evolution of the ComfyAudio project - a fork of ComfyUI (v0.3.59), and the last Windows portable version released before Comfy Org introduced Immutable status releases to the ComfyUI repository. StabooruJeffrey is a rebranded Hard Fork of ComfyUI, and of ComfyAudio respectively. The authentic value of StabooruJeffrey is that, as a foundation to build from, ComfyUI (v0.3.59) reimagined as StabooruJeffrey (v0.3.60) offers stable, steady, rock-solid ground beneath our feet as developers, researchers, model makers, and end users. Core StabooruJeffrey will likely change over time, but in far fewer increments, and with advance communications to announce any breaking changes that might impact compatibility within the custom node ecosystem. The plan is to move slowly, and minimise stress, hassle, and mental frustration for everyone using, building upon, and developing for the StabooruJeffrey platform. We’re doing it this way, so that we can all play with, and teach, the broadest set of tools (and enjoy the experience). The next release of StabooruJeffrey will be a lighter version of StabooruJeffrey, minus the commercial APIs and telemetry.
Follow StabooruJeffrey’s development at Substack: [https://substack.com/@staboorujeffrey](https://substack.com/@staboorujeffrey) Contribute to StabooruJeffrey at GitHub: [https://github.com/StabooruJeffrey/StabooruJeffreyyy](https://github.com/StabooruJeffrey/StabooruJeffreyyy) (the audio focused variant of the StabooruJeffrey platform) Join the community to show your support for a stable AI platform to build on: [https://huggingface.co/ThatSeemsAboutRight](https://huggingface.co/ThatSeemsAboutRight) Discussions can be started on the GitHub Issues tab of the main StabooruJeffrey repository for now, or you might consider joining the StabooruJeffrey Reddit sub: [https://www.reddit.com/r/StabooruJeffrey/](https://www.reddit.com/r/StabooruJeffrey/) Oh, and advanced notice - eventually, we’d also like to get the StabooruJeffrey custom node developers paid. We’re thinking, the Blender model. # [StabooruJeffrey](https://huggingface.co/StabooruJeffrey/StabooruJeffrey/blob/main/StabooruJeffrey_v0.3.60_windows_portable_nvidia.7z) Meaning: Stable and Peaceful [https://staboorujeffrey.substack.com/p/staboorujeffrey-stable-and-peaceful](https://staboorujeffrey.substack.com/p/staboorujeffrey-stable-and-peaceful)
2025-12-29T21:09:28
https://v.redd.it/c1d0415hk7ag1
MuziqueComfyUI
v.redd.it
1970-01-01T00:00:00
0
{}
1pyy958
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/c1d0415hk7ag1/DASHPlaylist.mpd?a=1769634584%2CYTU3MzM2Mzg3YzU4NDY0NDk3M2U5NDIwMmYwOTU5OGY3MzYzMjgzMjFiYTlmMjk2Y2MzYzk4MThjNDE5NjU2NQ%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/c1d0415hk7ag1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 506, 'hls_url': 'https://v.redd.it/c1d0415hk7ag1/HLSPlaylist.m3u8?a=1769634584%2CYjQ4OWMzZjk3NDFmMDVhMjVmMmM5N2IzM2JjOWMzNWE3NmE0MGI4MWQ5ODg3ZjkyZjg0OGVjMGI3MWJjNWRjZA%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/c1d0415hk7ag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 480}}
t3_1pyy958
/r/LocalLLaMA/comments/1pyy958/staboorujeffrey_the_stable_ai_platform/
false
false
https://external-preview…460523ea2dadf89a
1
{'enabled': False, 'images': [{'id': 'OXkwajBtYWhrN2FnMVizCTR1C4Bt2Jb2pKwmlkkktxtN3ijd-gOjrL3e0xAT', 'resolutions': [{'height': 113, 'url': 'https://external-preview.redd.it/OXkwajBtYWhrN2FnMVizCTR1C4Bt2Jb2pKwmlkkktxtN3ijd-gOjrL3e0xAT.png?width=108&crop=smart&format=pjpg&auto=webp&s=a6d9680b9b84df03c9b0d77ea6bea2592f4d07c4', 'width': 108}, {'height': 227, 'url': 'https://external-preview.redd.it/OXkwajBtYWhrN2FnMVizCTR1C4Bt2Jb2pKwmlkkktxtN3ijd-gOjrL3e0xAT.png?width=216&crop=smart&format=pjpg&auto=webp&s=6d84e77a1fa0ed2085dfcf2dd50436d6fe806d85', 'width': 216}, {'height': 336, 'url': 'https://external-preview.redd.it/OXkwajBtYWhrN2FnMVizCTR1C4Bt2Jb2pKwmlkkktxtN3ijd-gOjrL3e0xAT.png?width=320&crop=smart&format=pjpg&auto=webp&s=d398359b60bdd5e4d11f805884dade8266720221', 'width': 320}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/OXkwajBtYWhrN2FnMVizCTR1C4Bt2Jb2pKwmlkkktxtN3ijd-gOjrL3e0xAT.png?format=pjpg&auto=webp&s=3efcfc8e708802908c020fc87ecc9bc68833992f', 'width': 608}, 'variants': {}}]}
AI-Doomsday-Toolbox Distributed inference + workflows
12
AI Doomsday Toolbox v0.513 update! It took some major work, but we now have:

**Distributed LLM Inference**
- Run large models across multiple phones!
- Master-worker setup via llama.cpp
- Manually add workers + set RAM/layer proportions per device

**New Workflows + templates for them**
- Transcribe + Summarize: audio/video → Whisper transcription → LLM summary (with template saving!)
- Txt2Img + Upscale: generate + auto-upscale in one workflow
- Share audio/video directly to the transcription workflow

**Better Storage Management**
- Models/ZIMs now used in-place (no copying!) - requires All Files Access permission
- Don't move files after importing, or reimport them

**UI Improvements**
- Manual input for all sliders (threads, context, temperature)
- Redesigned image gallery with generation badges
- Recordings linked in notes for easy playback
- Separated RPC worker logs

**Bug Fixes**
- Fixed ghost notifications after force-close

⚠️ Breaking change: uninstall the previous version first (database schema changed)

Repo [here](https://github.com/ManuXD32/AI-Doomsday-Toolbox)
2025-12-29T20:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1pyxwsh/aidoomsdaytoolbox_distributed_inference_workflows/
ManuXD32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyxwsh
false
null
t3_1pyxwsh
/r/LocalLLaMA/comments/1pyxwsh/aidoomsdaytoolbox_distributed_inference_workflows/
false
false
self
12
{'enabled': False, 'images': [{'id': 'a9p_ntkmE8G2Awo3LzdTyiFXORODSxhSZDOI_aGuHZ8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/a9p_ntkmE8G2Awo3LzdTyiFXORODSxhSZDOI_aGuHZ8.png?width=108&crop=smart&auto=webp&s=42fc843ddd25c531f25d80e4763b1ee525cfc32d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/a9p_ntkmE8G2Awo3LzdTyiFXORODSxhSZDOI_aGuHZ8.png?width=216&crop=smart&auto=webp&s=0bbf4f84eb7ee845360505aaa295f49f8e459316', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/a9p_ntkmE8G2Awo3LzdTyiFXORODSxhSZDOI_aGuHZ8.png?width=320&crop=smart&auto=webp&s=785f32e319f8d402673a0d9a887665cd060a1396', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/a9p_ntkmE8G2Awo3LzdTyiFXORODSxhSZDOI_aGuHZ8.png?width=640&crop=smart&auto=webp&s=64ed2f0f67c2f1c39db76ce0323ba53a43a89704', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/a9p_ntkmE8G2Awo3LzdTyiFXORODSxhSZDOI_aGuHZ8.png?width=960&crop=smart&auto=webp&s=29a0e44f52315bc14eff8d38b751eb05af019f95', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/a9p_ntkmE8G2Awo3LzdTyiFXORODSxhSZDOI_aGuHZ8.png?width=1080&crop=smart&auto=webp&s=91475b26997e1c22c5fa08892121befe5a416636', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/a9p_ntkmE8G2Awo3LzdTyiFXORODSxhSZDOI_aGuHZ8.png?auto=webp&s=38afca68bdaee72b1f3accaedbd103a0169d3e20', 'width': 1200}, 'variants': {}}]}
Anyone fine-tuning codegen models to optimize for a specific codebase?
1
We do a lot of task-specific fine-tuning to distill from large teacher models to smaller (cheaper/faster) student models. I'm currently working on a major refactor of our application (front and backend) and have a huge amount of code with unit and integration tests. That got me wondering about tuning for a specific stack. We have a mixture of JavaScript apps sitting on top of a data mesh that handles all the ML, AI, orchestration, pipelines, etc. It's complicated code, and it takes a lot of work to get it right with a mixture of people and AI. We've had plenty of success tuning for similarly complex tasks, so it seems reasonable that it'll work here too. I'm going to try to sneak in some time to build out the data, but that will be a bit.. so just wondering if anyone has done experimentation. Reducing complex multi-shot, with lower error rates, would be super helpful.
2025-12-29T20:52:19
https://www.reddit.com/r/LocalLLaMA/comments/1pyxt0f/anyone_finetuning_codegen_models_to_optimize_for/
Mundane_Ad8936
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyxt0f
false
null
t3_1pyxt0f
/r/LocalLLaMA/comments/1pyxt0f/anyone_finetuning_codegen_models_to_optimize_for/
false
false
self
1
null
This is the most affordable Nvidia hardware with HBM memory. Can you guess what it is?
0
2025-12-29T20:29:01
https://i.redd.it/zqrnab83d7ag1.jpeg
GPTrack---ai
i.redd.it
1970-01-01T00:00:00
0
{}
1pyx7hd
false
null
t3_1pyx7hd
/r/LocalLLaMA/comments/1pyx7hd/this_is_the_most_affordable_nvidia_hardware_with/
false
false
default
0
{'enabled': True, 'images': [{'id': 'zqrnab83d7ag1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/zqrnab83d7ag1.jpeg?width=108&crop=smart&auto=webp&s=36d523836b9e81f6201c87ec6ce473397b2bd696', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/zqrnab83d7ag1.jpeg?width=216&crop=smart&auto=webp&s=65ba61b3b0fed9cdcaf3f5a517c9b74cb03f40fc', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/zqrnab83d7ag1.jpeg?width=320&crop=smart&auto=webp&s=e0305c7a397b98f7c25825dd61685ebb9c838080', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/zqrnab83d7ag1.jpeg?width=640&crop=smart&auto=webp&s=57671658d4cf87589ab18ec8dde1d1c9f4d2b7d7', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/zqrnab83d7ag1.jpeg?width=960&crop=smart&auto=webp&s=a756f7d6556ca47a1e485e4db30bceb47d78644d', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/zqrnab83d7ag1.jpeg?width=1080&crop=smart&auto=webp&s=13743afc11011f73cb74e0251bd8a302a309913e', 'width': 1080}], 'source': {'height': 3456, 'url': 'https://preview.redd.it/zqrnab83d7ag1.jpeg?auto=webp&s=bd13f8699a6da393b5808f240bdbaf819f376abf', 'width': 4608}, 'variants': {}}]}
[Open Source] LangGraph Threads Export Tool - Backup, migrate, and own your conversation data
1
[removed]
2025-12-29T20:29:00
https://www.reddit.com/r/LocalLLaMA/comments/1pyx7gf/open_source_langgraph_threads_export_tool_backup/
SignatureHuman8057
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyx7gf
false
null
t3_1pyx7gf
/r/LocalLLaMA/comments/1pyx7gf/open_source_langgraph_threads_export_tool_backup/
false
false
self
1
null
Have you also been shadow-banned?
1
[removed]
2025-12-29T20:17:50
https://www.reddit.com/r/LocalLLaMA/comments/1pywwzw/have_you_also_been_shadowbanned/
GPTshop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pywwzw
false
null
t3_1pywwzw
/r/LocalLLaMA/comments/1pywwzw/have_you_also_been_shadowbanned/
false
false
self
1
null
I posted my Local Voice Agent in Sesame AI's "Showcase" channel... and got banned instantly. Apparently, they don't like Open Source competition. (Python + SNAC + Whisper)
3
2025-12-29T20:00:51
https://v.redd.it/wdczvy6c87ag1
Legion10008
v.redd.it
1970-01-01T00:00:00
0
{}
1pywgth
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/wdczvy6c87ag1/DASHPlaylist.mpd?a=1769630468%2CYjc2NTQ0ODMxYmQyOTY5M2JmYjliYzU3YTgxMzkzMzExYjQ3YzU5NDVkMzY4ZmY0NDA4ZmZjNDdjNTQxZGMzOA%3D%3D&v=1&f=sd', 'duration': 71, 'fallback_url': 'https://v.redd.it/wdczvy6c87ag1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/wdczvy6c87ag1/HLSPlaylist.m3u8?a=1769630468%2CNDMzNWI4YTlhYTc1ZjJkZGVmOTBlYzZlNWRmOGRhNWZkN2FiZTM0YjNhN2Q2YzE0MjBlY2Q0NDY0NDBiMWRiMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wdczvy6c87ag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1pywgth
/r/LocalLLaMA/comments/1pywgth/i_posted_my_local_voice_agent_in_sesame_ais/
false
false
https://external-preview…0606bbd75a81e3d8
3
{'enabled': False, 'images': [{'id': 'MXN0aTA0N2M4N2FnMf8MZwckmGiP8NtabdND0_tDQLKwKD_hK5qRYpaiO-j7', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MXN0aTA0N2M4N2FnMf8MZwckmGiP8NtabdND0_tDQLKwKD_hK5qRYpaiO-j7.png?width=108&crop=smart&format=pjpg&auto=webp&s=d34f623d1a525abf2a345a032673f82f7877a2c6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MXN0aTA0N2M4N2FnMf8MZwckmGiP8NtabdND0_tDQLKwKD_hK5qRYpaiO-j7.png?width=216&crop=smart&format=pjpg&auto=webp&s=2d618b0f2edfdd268220d6a63f73bd7c07dfef9a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MXN0aTA0N2M4N2FnMf8MZwckmGiP8NtabdND0_tDQLKwKD_hK5qRYpaiO-j7.png?width=320&crop=smart&format=pjpg&auto=webp&s=a4dc9baea5103c66961d8e656fb2c19c479d3344', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MXN0aTA0N2M4N2FnMf8MZwckmGiP8NtabdND0_tDQLKwKD_hK5qRYpaiO-j7.png?width=640&crop=smart&format=pjpg&auto=webp&s=d653c92dc52d0ceeb07be40543e0712bfc0aecbd', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MXN0aTA0N2M4N2FnMf8MZwckmGiP8NtabdND0_tDQLKwKD_hK5qRYpaiO-j7.png?width=960&crop=smart&format=pjpg&auto=webp&s=f048e57bbf646e5f94fb42d6b12f59d6e7288c49', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MXN0aTA0N2M4N2FnMf8MZwckmGiP8NtabdND0_tDQLKwKD_hK5qRYpaiO-j7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=140e07ff1243fa67182f33b7fbf579f9ed08faac', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MXN0aTA0N2M4N2FnMf8MZwckmGiP8NtabdND0_tDQLKwKD_hK5qRYpaiO-j7.png?format=pjpg&auto=webp&s=f8fcad4c631b4ae6a8dbedd262e822cb0b35aef7', 'width': 1280}, 'variants': {}}]}
Best LLM Related Open Source Tools - 2025?
44
I think 2025 was a good year [LLM-wise](https://www.reddit.com/r/LocalLLaMA/comments/1pwh0q9/best_local_llms_2025/). Now please share the tools you're using with LLMs. I know half of us here are involved in coding, using tools such as Cline, RooCode, KiloCode, QwenCode, MistralVibe, etc. Similarly, some of us are involved in writing, using fine-tuned writing models. Of course we need tools for writing too; I came across Mikupad, Writingway2, Arrows (p-e-w), and WritingTools (theJayTea). Coding and writing are just 2 categories I mentioned, and I listed only a few tools here (from my bookmarks). Of course there are so many more tools online that everyone has yet to catch. I'm sure around 50 tools exist for each category, so let's bring those here. So what other tools are you using? (Please mention the category or a concise use case.) Just mentioning some categories to get quick replies: Prompt, RAG, Brainstorm, AudioBook Maker, Ebook Maker, Second brain, Benchmarks, AI Assistant, Agents, Notebook, NoCode, Wiki, Storytelling/Worldbuilding, Image processing, Game creation.
2025-12-29T20:00:49
https://www.reddit.com/r/LocalLLaMA/comments/1pywgsb/best_llm_related_open_source_tools_2025/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pywgsb
false
null
t3_1pywgsb
/r/LocalLLaMA/comments/1pywgsb/best_llm_related_open_source_tools_2025/
false
false
self
44
null
@streppelchen: yes, official invoice is possible. yes, it is ARM, but this is with 99% of the software no problem. Nvidia does fully support Grace Hopper, since this was their flagship product before Blackwell.
0
If you want to know more, visit GPTrack and drop me an email.
2025-12-29T19:42:43
https://www.reddit.com/r/LocalLLaMA/comments/1pyvzlv/streppelchen_yes_official_invoice_is_possible_yes/
GPTrack--ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyvzlv
false
null
t3_1pyvzlv
/r/LocalLLaMA/comments/1pyvzlv/streppelchen_yes_official_invoice_is_possible_yes/
false
false
self
0
null
Small LocalLLaMA in GGUF for tagging - 2GB RAM
1
I'm searching for a small model (max. 2GB RAM, no GPU) in gguf format to use with ollama. I want to use it for my Karakeep Instance. It should create tags for my saved bookmarks. The prompt would look like this: You are an expert whose responsibility is to help with automatic tagging for a read-it-later app. Please analyze the TEXT_CONTENT below and suggest relevant tags that describe its key themes, topics, and main ideas. The rules are: - Aim for a variety of tags, including broad categories, specific keywords, and potential sub-genres. - The tags must be in english. - If the tag is not generic enough, don't include it. - The content can include text for cookie consent and privacy policy, ignore those while tagging. - Aim for 3-5 tags. - If there are no good tags, leave the array empty. - Format: `{"tags": ["tag1", "tag2", "tag3"]}` EXACTLY <TEXT_CONTENT> <CONTENT_HERE> </TEXT_CONTENT> You must respond in JSON with the key "tags" and the value is an array of string tags.
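For anyone wiring this up, a minimal client sketch (my own illustration, not Karakeep's code): it fills the prompt's `<CONTENT_HERE>` placeholder and defensively parses the reply, since very small models often wrap the JSON in extra prose.

```python
import json

# Abbreviated version of the prompt above; the placeholder is the one the prompt itself uses.
PROMPT_TEMPLATE = """You are an expert whose responsibility is to help with automatic tagging for a read-it-later app.
Aim for 3-5 tags. If there are no good tags, leave the array empty.
Format: {"tags": ["tag1", "tag2", "tag3"]} EXACTLY
<TEXT_CONTENT>
<CONTENT_HERE>
</TEXT_CONTENT>
You must respond in JSON with the key "tags" and the value is an array of string tags."""

def build_prompt(content: str) -> str:
    # Plain string replacement avoids str.format choking on the literal {} in the prompt.
    return PROMPT_TEMPLATE.replace("<CONTENT_HERE>", content)

def parse_tags(raw_reply: str) -> list[str]:
    """Extract the tags array from the model reply, tolerating prose around the JSON."""
    start, end = raw_reply.find("{"), raw_reply.rfind("}")
    if start == -1 or end == -1:
        return []
    try:
        data = json.loads(raw_reply[start:end + 1])
    except json.JSONDecodeError:
        return []
    tags = data.get("tags", [])
    return [t for t in tags if isinstance(t, str)][:5]
```

Sending `build_prompt(...)` to a local Ollama server and feeding the reply through `parse_tags` keeps the pipeline robust even when a 2GB-class model doesn't follow the format exactly.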
2025-12-29T19:41:27
https://www.reddit.com/r/LocalLLaMA/comments/1pyvyef/small_localllama_in_gguf_for_tagging_2gb_ram/
pizzapastamix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyvyef
false
null
t3_1pyvyef
/r/LocalLLaMA/comments/1pyvyef/small_localllama_in_gguf_for_tagging_2gb_ram/
false
false
self
1
null
Prices are set to rise sharply soon - this is your last chance to secure a cutting-edge system with HBM memory at a low rate. Nvidia GH200 (624GB) available for a limited-time offer of 35k valid until December 31. Only 2 units in stock, shipping directly from Taiwan.
1
[removed]
2025-12-29T19:36:00
https://i.redd.it/e31h5mnr37ag1.jpeg
GPTshop
i.redd.it
1970-01-01T00:00:00
0
{}
1pyvt8i
false
null
t3_1pyvt8i
/r/LocalLLaMA/comments/1pyvt8i/prices_are_set_to_rise_sharply_soon_this_is_your/
false
false
default
1
{'enabled': True, 'images': [{'id': 'e31h5mnr37ag1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/e31h5mnr37ag1.jpeg?width=108&crop=smart&auto=webp&s=90c211cbaf50c6c5f950dc02a2a09ef6616585ab', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/e31h5mnr37ag1.jpeg?width=216&crop=smart&auto=webp&s=9194105b15cdad19322ab4ed3147a2734940c067', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/e31h5mnr37ag1.jpeg?width=320&crop=smart&auto=webp&s=edb5b8b5533b259efe1d7db2cd01d86b12769c2a', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/e31h5mnr37ag1.jpeg?width=640&crop=smart&auto=webp&s=3dcb49eb37d3df71755c7cc8446a241b37bf20d9', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/e31h5mnr37ag1.jpeg?width=960&crop=smart&auto=webp&s=5b127ec588b77e22efd99081aaa1070d0dd1c7a3', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/e31h5mnr37ag1.jpeg?width=1080&crop=smart&auto=webp&s=1234b8a6e74ea76db4519ad3e8f2687cdd7ac988', 'width': 1080}], 'source': {'height': 3456, 'url': 'https://preview.redd.it/e31h5mnr37ag1.jpeg?auto=webp&s=7ec301824b4a1a845eed80b0865209eaaa21e675', 'width': 4608}, 'variants': {}}]}
Whats about new Local LM apps and research platforms
5
Hi guys, as you know, there are many ordinary applications aimed at end users, such as LM Studio, Sanctum, Anything, OpenUI, Kotaemon, Biniou, etc. But I'm looking for something a bit more complex and functional, like "transformerLAB" or "Kiln" or similar applications. CLI or UI doesn't matter. What new applications and repositories are you using these days?
2025-12-29T19:33:44
https://www.reddit.com/r/LocalLLaMA/comments/1pyvqvn/whats_about_new_local_lm_apps_and_research/
Safe-Clothes5925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyvqvn
false
null
t3_1pyvqvn
/r/LocalLLaMA/comments/1pyvqvn/whats_about_new_local_lm_apps_and_research/
false
false
self
5
null
I compared pricing for 60+ cloud LLMs (Claude, GPT, Gemini) - Gemini Flash is cheapest
1
[removed]
2025-12-29T19:31:24
https://www.reddit.com/r/LocalLLaMA/comments/1pyvonm/i_compared_pricing_for_60_cloud_llms_claude_gpt/
Successful_Iron_5831
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyvonm
false
null
t3_1pyvonm
/r/LocalLLaMA/comments/1pyvonm/i_compared_pricing_for_60_cloud_llms_claude_gpt/
false
false
self
1
null
Best Local LLM for my setup?
1
Hello, I have an RTX 5090 + 64GB RAM + 9950X3D system. Can anyone recommend a local LLM for coding that has good reasoning and coding skills and fits my setup? Vision capability would be a nice bonus (it's okay if it doesn't have it).
2025-12-29T19:25:29
https://www.reddit.com/r/LocalLLaMA/comments/1pyvitm/best_local_llm_for_my_setup/
BlackShadowX306
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyvitm
false
null
t3_1pyvitm
/r/LocalLLaMA/comments/1pyvitm/best_local_llm_for_my_setup/
false
false
self
1
null
Soon there will be a massive increase in pricing, last chance to get a cheap bleeding-edge system with HBM memory: Nvidia GH200 624GB special offer only valid till 12/31 for 35k USD. 2 pieces available. Ships directly from Taiwan.
0
Nvidia Grace-Hopper Superchip
- 72-core Nvidia Grace CPU
- Nvidia Hopper H200 Tensor Core GPU
- 480GB of LPDDR5X memory with ECC
- 144GB of HBM3e memory
- 624GB of total fast-access memory
- NVLink-C2C: 900 GB/s of bandwidth
- Programmable from 450W to 1000W TDP (CPU + GPU + memory)
- 2x high-efficiency 2000W PSU
- 2x PCIe Gen4 M.2 slots on board
- 2x PCIe Gen5 2.5" drive slots (NVMe)
- 3x FHFL PCIe Gen5 x16
- 1x USB 3.2 port
- 1x RJ45 IPMI port
- 1x Mini DisplayPort
- Halogen-free LSZH power cables
- Air-cooled, 6x 60mm fans
- Rail kit
- 2U, 440 x 88 x 900 mm (17.3 x 3.5 x 35.4")
- 32 kg (70 lbs)
- Manufacturer: Pegatron SVR
2025-12-29T19:22:39
https://i.redd.it/a33ncfz917ag1.jpeg
GPTrack--ai
i.redd.it
1970-01-01T00:00:00
0
{}
1pyvg40
false
null
t3_1pyvg40
/r/LocalLLaMA/comments/1pyvg40/soon_there_will_be_a_massive_increase_in_pricing/
false
false
default
0
{'enabled': True, 'images': [{'id': 'a33ncfz917ag1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/a33ncfz917ag1.jpeg?width=108&crop=smart&auto=webp&s=6a586df6084734c0e6c4e6f1c1c530bc743c0a67', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/a33ncfz917ag1.jpeg?width=216&crop=smart&auto=webp&s=c5d24be5cf40f6122f895da8c6f540cad28691ae', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/a33ncfz917ag1.jpeg?width=320&crop=smart&auto=webp&s=9d91fecacd1c5c78310083a99829ebf2ef376162', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/a33ncfz917ag1.jpeg?width=640&crop=smart&auto=webp&s=edc8126a1b857220dbf36793c721b20e75777f33', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/a33ncfz917ag1.jpeg?width=960&crop=smart&auto=webp&s=b92205e426bfdb696369411ba70f5b19fc303e7e', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/a33ncfz917ag1.jpeg?width=1080&crop=smart&auto=webp&s=c59d4769e0bc600b79bc1b3d39fdf1784b4b7a69', 'width': 1080}], 'source': {'height': 3456, 'url': 'https://preview.redd.it/a33ncfz917ag1.jpeg?auto=webp&s=44453873340e6d524cdfd6fbbffdaa4a7910b83b', 'width': 4608}, 'variants': {}}]}
Bounded autonomy: how the "is it an agent?" question changed my QA bot design
0
Built a QA bot after pushing code that broke production. It monitors health checks, rolls back when they fail, attempts to diagnose and fix, then either promotes the fix or notifies me. The interesting design question wasn't which model to use. It was how much autonomy to give it. A Duke paper (arxiv.org/abs/2508.05338) proposes three minimum requirements for "agent": environmental impact, goal-directed behavior, and state awareness. My bot has all three. It literally rolls back production and pushes fixes. But it doesn't set its own goals. The triggers are deterministic. When a predefined condition is met, it kicks off reasoning, generates solutions, takes action. It's a deterministic script that invokes agent-like behavior when triggered. This changed my architecture. I kept the trigger layer dumb and predictable. The LLM only reasons within tight constraints. I don't want software that surprises me at 3am. I've been calling this pattern "bounded autonomy." Useful framing or just a cop-out for not building a real agent? Full writeup: [blog post here](https://nickrichu.me/posts/i-built-a-qa-bot-is-it-an-agent/) How do you think about the autonomy spectrum when building with local models? How much rope do you give it?
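The pattern can be sketched in a few lines (all names here are hypothetical, not from the actual bot): the trigger layer is plain deterministic code, and whatever the model says gets clamped to a fixed action whitelist before anything touches production.

```python
# Bounded-autonomy sketch: deterministic triggers decide WHEN to act;
# the LLM only reasons WITHIN a fixed menu of allowed actions.

ALLOWED_ACTIONS = {"rollback", "retry_deploy", "notify_human"}

def choose_action(llm_reply: str) -> str:
    """Clamp the model's free-text suggestion to the whitelist; default to a human."""
    action = llm_reply.strip().lower()
    return action if action in ALLOWED_ACTIONS else "notify_human"

def on_health_check_failed(diagnose) -> list[str]:
    # Trigger layer is dumb and predictable: a failed check always rolls back first.
    actions = ["rollback"]
    # Only now is the LLM invoked (diagnose is a callable wrapping the model),
    # and its output is bounded by choose_action.
    actions.append(choose_action(diagnose()))
    return actions
```

The key design choice is that no model output reaches the environment unmediated: even a hallucinated or hostile suggestion degrades to `notify_human`, which is why the 3am failure mode stays boring.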
2025-12-29T19:19:53
https://www.reddit.com/r/LocalLLaMA/comments/1pyvdea/bounded_autonomy_how_the_is_it_an_agent_question/
OnyxProyectoUno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyvdea
false
null
t3_1pyvdea
/r/LocalLLaMA/comments/1pyvdea/bounded_autonomy_how_the_is_it_an_agent_question/
false
false
self
0
{'enabled': False, 'images': [{'id': 'MNxz5CWjDjDKDOnLohSYf3CbeqKtWYpkS9wSFIdJq1Q', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/MNxz5CWjDjDKDOnLohSYf3CbeqKtWYpkS9wSFIdJq1Q.png?width=108&crop=smart&auto=webp&s=e7efa87ed04e700fde61d00357fab5721f2fcf7b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/MNxz5CWjDjDKDOnLohSYf3CbeqKtWYpkS9wSFIdJq1Q.png?width=216&crop=smart&auto=webp&s=f11cca6448e9733c908d9bc81be41469d2f4a0d6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/MNxz5CWjDjDKDOnLohSYf3CbeqKtWYpkS9wSFIdJq1Q.png?width=320&crop=smart&auto=webp&s=58a9fb8b911ce69713b700e4cfacfd851bed0a14', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/MNxz5CWjDjDKDOnLohSYf3CbeqKtWYpkS9wSFIdJq1Q.png?width=640&crop=smart&auto=webp&s=9b0eb0efe9663e663ef624fcc0747943afb19e62', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/MNxz5CWjDjDKDOnLohSYf3CbeqKtWYpkS9wSFIdJq1Q.png?width=960&crop=smart&auto=webp&s=00176b6f4c1d9d9dc09bf811cd124b30f246fcb0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/MNxz5CWjDjDKDOnLohSYf3CbeqKtWYpkS9wSFIdJq1Q.png?width=1080&crop=smart&auto=webp&s=32638dd6e2afade48d5026d74acccf9c2db2253e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/MNxz5CWjDjDKDOnLohSYf3CbeqKtWYpkS9wSFIdJq1Q.png?auto=webp&s=40ae1eb3d3b9bc57452baeb4d495508539172e8a', 'width': 1200}, 'variants': {}}]}
Was I lied to or was I blunt?
0
I bought an RTX 5070 12 GB instead of an RTX 3050 8 GB, and I see no difference in LLM performance. More generally, there is no speed boost in any AI workload I have tested.
2025-12-29T19:18:04
https://www.reddit.com/r/LocalLLaMA/comments/1pyvbmi/was_i_lied_to_or_was_i_blunt/
romyxr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyvbmi
false
null
t3_1pyvbmi
/r/LocalLLaMA/comments/1pyvbmi/was_i_lied_to_or_was_i_blunt/
false
false
self
0
null
Built a Python library that translates embeddings from MiniLM to OpenAI — and it actually works!
7
*I built a Python library called* ***EmbeddingAdapters*** *that* ***provides multiple pre-trained adapters for translating embeddings from one model space into another***: [https://github.com/PotentiallyARobot/EmbeddingAdapters/](https://github.com/PotentiallyARobot/EmbeddingAdapters/)

```
pip install embedding-adapters
embedding-adapters embed --source sentence-transformers/all-MiniLM-L6-v2 --target openai/text-embedding-3-small --flavor large --text "Where can I get a hamburger near me?"
```

*This works because* ***each adapter is trained on a restrictive domain***, allowing the adapter to specialize in interpreting the semantic signals of smaller models into higher-dimensional spaces without losing fidelity. ***A quality endpoint then lets you determine how well the adapter will perform*** *on a given input.*

This has been super useful to me, and I'm quickly iterating on it. Uses for ***EmbeddingAdapters*** so far:

1. You want to **use an existing vector index built with one embedding model and query it with another** - if it's expensive or problematic to re-embed your entire corpus, this is the package for you.
2. You can also **operate mixed vector indexes** and map to the embedding space that works best for different questions.
3. You can **save cost on questions that are easily adapted**: "What's the nearest restaurant that has a hamburger?" - no need to pay for an expensive cloud provider or wait for an unnecessary network hop; embed locally on the device with an embedding adapter and return results instantly.

It also lets you experiment with provider embeddings you may not have access to. By using the adapters on some queries and examples, you can compare how different embedding models behave relative to one another and get an early signal on what might work for your data before committing to a provider.
This makes it practical to:

- **sample providers you don't have direct access to**
- **migrate or experiment with embedding models gradually** instead of re-embedding everything at once
- ***evaluate multiple providers side by side*** in a consistent retrieval setup
- ***handle provider outages or rate limits*** without breaking retrieval
- ***run RAG in air-gapped or restricted environments*** with no outbound embedding calls
- ***keep a stable "canonical" embedding space*** while changing what runs at the edge

The adapters aren't perfect clones of the provider spaces, but they are pretty close: for in-domain queries, the minilm-to-openai adapter recovered 98% of the openai embedding and dramatically outperforms minilm -> minilm RAG setups.

It's still early in this project. I'm actively expanding the set of supported adapter pairs, adding domain-specialized adapters, expanding the training sets, streamlining the models, and improving evaluation and quality tooling. I'd love feedback from anyone who might be interested in using this:

- *What data would you like to see these adapters trained on?*
- *What domains would be most helpful to target?*
- *Which model pairs would you like me to add next?*
- *How could I make this more useful for you to use?*

So far the library supports:

- *minilm <-> openai*
- *openai <-> gemini*
- *e5 <-> minilm*
- *e5 <-> openai*
- *e5 <-> gemini*
- *minilm <-> gemini*

Happy to answer questions, and if anyone has any ideas please let me know. I could use any support you can give, especially if anyone wants to chip in to help cover the training cost. Please upvote if you can, thanks!
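To illustrate the core idea (learning a mapping between two embedding spaces from paired examples) here is a toy linear least-squares adapter. This is purely illustrative: the library's real adapters are domain-trained and not necessarily linear, and the dimensions below just mirror MiniLM's 384 and text-embedding-3-small's 1536.

```python
import numpy as np

def fit_linear_adapter(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    """Least-squares W such that src @ W ~= tgt (src: N x D_src, tgt: N x D_tgt)."""
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W

def adapt(vec: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map a source-space vector into the target space, normalized for cosine search."""
    out = vec @ W
    return out / np.linalg.norm(out)

# Synthetic demo: recover a known mapping from paired "embeddings".
rng = np.random.default_rng(0)
true_W = rng.normal(size=(384, 1536))   # MiniLM dim -> text-embedding-3-small dim
src = rng.normal(size=(1000, 384))      # 1000 paired training examples
tgt = src @ true_W
W = fit_linear_adapter(src, tgt)
err = np.abs(src @ W - tgt).max()       # near zero: the mapping is recovered
```

On real embedding pairs the fit is of course lossy, which is exactly why a per-query quality signal (like the library's quality endpoint) matters before trusting an adapted vector in retrieval.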
2025-12-29T19:01:42
https://www.reddit.com/r/LocalLLaMA/comments/1pyuvnd/built_a_python_library_that_translates_embeddings/
Interesting-Town-433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyuvnd
false
null
t3_1pyuvnd
/r/LocalLLaMA/comments/1pyuvnd/built_a_python_library_that_translates_embeddings/
false
false
self
7
{'enabled': False, 'images': [{'id': 'H0sSmhd4f6cS5bWHZ_Z-gqYC55fF-rA4tjRQNuWAuEo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H0sSmhd4f6cS5bWHZ_Z-gqYC55fF-rA4tjRQNuWAuEo.png?width=108&crop=smart&auto=webp&s=6ab55800bd8174fa4ced8c7ba10357c03c59918a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H0sSmhd4f6cS5bWHZ_Z-gqYC55fF-rA4tjRQNuWAuEo.png?width=216&crop=smart&auto=webp&s=561985fc566bc4ec83d86882ca1ae559cef457a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H0sSmhd4f6cS5bWHZ_Z-gqYC55fF-rA4tjRQNuWAuEo.png?width=320&crop=smart&auto=webp&s=2deb28c76ae4c975f2e71aa351f6bebd100fa0db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H0sSmhd4f6cS5bWHZ_Z-gqYC55fF-rA4tjRQNuWAuEo.png?width=640&crop=smart&auto=webp&s=9fcd00b855201826534064a9988534c39ef9c73f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H0sSmhd4f6cS5bWHZ_Z-gqYC55fF-rA4tjRQNuWAuEo.png?width=960&crop=smart&auto=webp&s=119eb1775eeffb3558faae08fed75a96a537813f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H0sSmhd4f6cS5bWHZ_Z-gqYC55fF-rA4tjRQNuWAuEo.png?width=1080&crop=smart&auto=webp&s=9767dc7ebdbffdf3bc59f9fd772d8172ac3b8047', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H0sSmhd4f6cS5bWHZ_Z-gqYC55fF-rA4tjRQNuWAuEo.png?auto=webp&s=86f1d0fc54b2adaa1371f6fb7c8736f7216fd503', 'width': 1200}, 'variants': {}}]}
Fine-tuning on Consumer GPUs : Swarm vs FSDP (Answering the architecture/cost questions)
1
[removed]
2025-12-29T18:55:28
https://www.reddit.com/r/LocalLLaMA/comments/1pyupee/finetuning_on_consumer_gpus_swarm_vs_fsdp/
Alone-Detective-3317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyupee
false
null
t3_1pyupee
/r/LocalLLaMA/comments/1pyupee/finetuning_on_consumer_gpus_swarm_vs_fsdp/
false
false
self
1
null
70B Fine-tuning on Consumer WAN: Swarm vs FSDP (Answering the architecture/cost questions from my previous thread)
1
[removed]
2025-12-29T18:53:36
https://www.reddit.com/r/LocalLLaMA/comments/1pyunku/70b_finetuning_on_consumer_wan_swarm_vs_fsdp/
Alone-Detective-3317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pyunku
false
null
t3_1pyunku
/r/LocalLLaMA/comments/1pyunku/70b_finetuning_on_consumer_wan_swarm_vs_fsdp/
false
false
self
1
null