title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Can slightly bigger models run on igpus? | 1 | Hello there.
So my usecase is instant 30ms and below of latency text to speech to be used in screen readers. The current best option when we exclude all non-neural options is Piper TTS, particularly the Sonata-nvda implementation of it which uses chunking while also turning the singular model.onnx into an encoder.onnx and a decoder.onnx for maximum parallellism.
While this achieves an acceptable level on the latency front with Piper, that is mostly if not wholely the result of the maximal level of optimization for instant cpu use (chunking+parallellism). Thus, is it possible for gpu-pour people like myself to use the integrated GPUs of our already-existing processors for accelerated inference? I myself have an intel hd graphics card, which is admittedly crappy but hey it's still an igpu (optimistic mode engaged)
And I was wondering, if this is possible, would it help scale up the repertoir of models that can be run at this low latency, since Piper is really small and wouldn't learn all the rules like pauses without comma and intonation shifts? Like can we run a 100m model? Alternatively, could this method be used to lower the latency of some models that haven't been optimized, such as Kokoro and eventually Supertonic for devices with even better igpus/npus?
Since I have Intel, that's what I googled for and it turns out there's something called Openvino. Can that be used, and will the results be worth it, both for my bad igpu and better ones?
Thanks. | 2026-01-09T19:14:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q8hej8/can_slightly_bigger_models_run_on_igpus/ | Silver-Champion-4846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8hej8 | false | null | t3_1q8hej8 | /r/LocalLLaMA/comments/1q8hej8/can_slightly_bigger_models_run_on_igpus/ | false | false | self | 1 | null |
Strix Halo 128GB not using more than 62.54GB?? | 7 | Hi, I'm at wits end right now and hoping someone's run in to this. I'm on unbuntu 24.04, rocm 7.1.1, below is my grub config
`GRUB_CMDLINE_LINUX_DEFAULT="ttm.pages_limit=30408704 ttm.page_pool_size=30408704 amdgpu.gttsize=118784 iommu=pt "`
when I load some really large workflows in comfyui (qwen image 2512 bf16 + lightning4) or try to run a diffusion model while I have gpt-oss-120b loaded via llama.cpp, I keep getting OOM indicating I'm out of memory with a max of 62.54GB allowed.
At minimum I'd expect it to OOM and say I have a max of 116GB.
Individually gpt-oss-120b works perfectly and comfyui with qwen image 2512 works perfectly.
When I look at rocm smi/info I see 116GB is the max GTT.
Anyone had similar issues? | 2026-01-09T18:51:46 | https://www.reddit.com/r/LocalLLaMA/comments/1q8gsde/strix_halo_128gb_not_using_more_than_6254gb/ | sputnik13net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8gsde | false | null | t3_1q8gsde | /r/LocalLLaMA/comments/1q8gsde/strix_halo_128gb_not_using_more_than_6254gb/ | false | false | self | 7 | null |
I built a Inference Architecture (Early exit inspired) for LLaMA-3.1 (Base) that saves ~20% Compute using SLERP & Dynamic RoPE. | 4 | Hi everyone,
Long time lurker. I’ve been working on a way to speed up inference without quantization or distillation.
I call it **"Cerebellum"** It’s a parasitic architecture (hooks-based) that attaches to a frozen LLaMA-3.1-8B and forces it to "teleport" hidden states from Layer 8 directly to Layer 32 when the token is semantic/syntactic glue (e.g., "the", "and", or common phrases).
It also works on a lot models without any tweaking currently I've tested Qwen, LLama and Mistral. Gemma can work but with constrained training since they start doing some shenanigans with attention in Gemma 3.
**The Problem:**
Most early-exit implementations fail because skipping layers breaks the KV Cache coherence. The model gets amnesia or hallucinates because the attention mechanism sees a "gap" in the history.
**The Fix (How I hacked it):**
1. **Deep State Projection:** Instead of a classifier, I trained an MLP to predict the trajectory of the final hidden state from Layer 8.
2. **SLERP (Spherical Linear Interpolation):** I use SLERP to reconstruct the missing intermediate states on the hypersphere surface. This keeps the vector magnitude consistent so the Attention Heads don't see "faded" ghosts.
3. **The Check:** I trained a tiny MLP (Linear Layer with L1 Loss) to predict model uncertainty. This replaces running the massive 500M+ param LM Head for confidence checks, making the gating cost negligible.
**Results:**
* **Exit Rate:** \~25-30% (mostly on Layer 8).
* **Quality:** Zero observed semantic drift on 400+ token narratives.
* **Setup:** LLaMA-3.1-8B Base on L4 GPU.
[Green = Early Exit \(L8\). White = Full Compute \(L32\).](https://preview.redd.it/vpsm24uxddcg1.png?width=1170&format=png&auto=webp&s=3358361c36e6e843bd229ccdf87e7349a8c423d7)
I’ve filed a provisional patent on the architecture, but I’m looking for feedback on the approach. Has anyone else tried using SLERP for cache reconstruction?
Happy to answer questions about the implementation! | 2026-01-09T18:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/1q8grqi/i_built_a_inference_architecture_early_exit/ | Hopeful-Sherbet-3100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8grqi | false | null | t3_1q8grqi | /r/LocalLLaMA/comments/1q8grqi/i_built_a_inference_architecture_early_exit/ | false | false | 4 | null | |
best laptop under 500 | 0 | basically all i want to run is qwen 3 coder and possibly future version like qwen 4 and text to speech models like index TTS. new to computers what should i prioritize, ram and processor? | 2026-01-09T18:45:16 | https://www.reddit.com/r/LocalLLaMA/comments/1q8glzm/best_laptop_under_500/ | Physical-Macaron8744 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8glzm | false | null | t3_1q8glzm | /r/LocalLLaMA/comments/1q8glzm/best_laptop_under_500/ | false | false | self | 0 | null |
AMD MI50s stopped working | 0 | Hello everyone,
I have the following issue:
Last August I got two MI50 cards and put them in a Chinese x79 motherboard which supports above 4g decoding. They worked fine until last month one of them stopped being recognized. Last week the other card wasn't being recognized anymore, neither in bios nor under ubuntu.
Sporadically they show up and then next reboot nothing.
I tried another power supply. I also got a Fujitsu x99 with an intel 612 chipset which also didn't solve the issue.
I warmed them up with the hairdryer and they were recognized for some minutes then again nothing.
Might this be some kind of BGA failure?
| 2026-01-09T18:22:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q8g0b7/amd_mi50s_stopped_working/ | politerate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8g0b7 | false | null | t3_1q8g0b7 | /r/LocalLLaMA/comments/1q8g0b7/amd_mi50s_stopped_working/ | false | false | self | 0 | null |
How do you keep the balance of not overstuffing the prompt with edge cases that break? | 0 | I have a prompt that I am trying to optimize. The prompt model is GPT-OS 120 billion parameters. I just want the prompt to never execute the instructions present in the input. Just do a bunch of operations on top of it.
I used Claude to generate a lot of test cases but I found out that for one particular test case the prompt actually executes the instruction and the input.
I don't want to stuff an example in the prompt. How do you handle these kinds of situations? What are your ways to fix these? | 2026-01-09T18:21:01 | https://www.reddit.com/r/LocalLLaMA/comments/1q8fydl/how_do_you_keep_the_balance_of_not_overstuffing/ | RoutineNet4283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8fydl | false | null | t3_1q8fydl | /r/LocalLLaMA/comments/1q8fydl/how_do_you_keep_the_balance_of_not_overstuffing/ | false | false | self | 0 | null |
RTX Blackwell Pro 6000 wholesale pricing has dropped by $150-200 | 216 | Obviously the RTX Blackwell Pro 6000 cards are of great interest to the people here. I see them come up a lot. And we all ooh and ahh over the people that have 8 of them lined up in a nice row.
It also seems to me like the market is suffering from lack of transparency on these.
My employer buys these cards wholesale, and I can see current pricing and stock in our distributors' systems. (And I **may have** slipped in an order for one for myself...) It's eye-opening.
I'm probably not supposed to disclose the exact price we buy these at. But I wanted people to know that unlike everything else with RAM in it, the wholesale price of these has **dropped** by about ~$150-200 from December to January.
I will also say that the wholesale price for the 6000 Pro is only about $600 higher than the wholesale price for the new 72GiB 5000 Pro. So, for the love of god, please don't buy that!
(And no, this is **not** marketing or an ad; I **cannot** sell **anyone** these cards at **any** price. I would be fired immediately. I just want people to have the best available information when they're looking to buy something this expensive.)
| 2026-01-09T17:57:11 | https://www.reddit.com/r/LocalLLaMA/comments/1q8fagh/rtx_blackwell_pro_6000_wholesale_pricing_has/ | TastesLikeOwlbear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8fagh | false | null | t3_1q8fagh | /r/LocalLLaMA/comments/1q8fagh/rtx_blackwell_pro_6000_wholesale_pricing_has/ | false | false | self | 216 | null |
After 8 years building cloud infrastructure, I'm betting on local-first AI | 86 | Sold my Saas company last year and we used to process everything in the cloud. Now, after a few realisations, I'm doing the opposite. As I watch the AI space evolve, I can’t help but wonder how there’s a growing sentiment of wanting capable models that run on hardware they control. More people seem to be moving towards local inference: whether for privacy, cost, latency, or just independence from API rate limits.
Curious if anyone else is thinking about this? | 2026-01-09T17:48:38 | https://www.reddit.com/r/LocalLLaMA/comments/1q8f242/after_8_years_building_cloud_infrastructure_im/ | PandaAvailable2504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8f242 | false | null | t3_1q8f242 | /r/LocalLLaMA/comments/1q8f242/after_8_years_building_cloud_infrastructure_im/ | false | false | self | 86 | null |
Long-term KV cache storage or re-runs for ongoing chats? | 0 | I’m have 100+ chats in ChatGPT that I will revisit and continue periodically. I recently learned a bit about what goes on under the hood in the transformer architecture and came to the conclusion that, for these chats, the KV cache is probably stored. But, that seems incredibly memory intensive.
The alternative, it seems, would be to recompute all these values whenever I continue a conversation. But this seems incredibly compute intensive.
So my question to those that have made their own LLM chat interfaces and choose to keep conversations, how do you manage this tradeoff? Am I missing something? | 2026-01-09T17:37:09 | https://www.reddit.com/r/LocalLLaMA/comments/1q8eqtc/longterm_kv_cache_storage_or_reruns_for_ongoing/ | skinnyjoints | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8eqtc | false | null | t3_1q8eqtc | /r/LocalLLaMA/comments/1q8eqtc/longterm_kv_cache_storage_or_reruns_for_ongoing/ | false | false | self | 0 | null |
2x RTX 3090 24GB VRAM, barely used, for $1,067. Should I buy it? | 0 | I have been trying to use LLMs locally to reduce costs for my small business. I mainly need to run LTX2 to generate marketing videos in Arabic. Are they effective and sufficient? Any advice would be greatly appreciated. | 2026-01-09T17:19:53 | https://www.reddit.com/r/LocalLLaMA/comments/1q8e9vh/2x_rtx_3090_24gb_vram_barely_used_for_1067_should/ | iCyb3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8e9vh | false | null | t3_1q8e9vh | /r/LocalLLaMA/comments/1q8e9vh/2x_rtx_3090_24gb_vram_barely_used_for_1067_should/ | false | false | self | 0 | null |
For those of you on Nvidia Spark, what's your stack? Struggling to find LLMs that work through Docker-vLLM... | 1 | So far, I only have Qwen3 XB models that are fully tool-usable. This means no Qwen3 XB Base, none of Qwen3 Coder XB, no IQuest, no Solar, no GLM 4.5 Air NVFP4, no Devstral, no HyperCLOVAX, etc.
GPT OSS XB works, but it's Harmony format (if anyone knows any tools or VS Codium extensions for agentic coding that works with Harmony, please let me know!)
I feel I might be doing something wrong or missing some documentation.
I went through both of below documentation, but Nvidia's officially supported vLLM Docker + officially supports LLMs seem a bit outdated.
https://catalog.ngc.nvidia.com/orgs/nvidia/containers/vllm/tags
https://build.nvidia.com/spark/vllm
Then I look through the documentation for vLLM, but I can still only get only the Qwen3-14B model reliably.
Few models have a short guide on how to run the models, but typically they're not inside Docker. Even if they are, they still wouldn't run.
So I feel I'm doing something wrong. Is there any good guide out there for running the models besides vanilla Qwen3? | 2026-01-09T17:19:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q8e9t3/for_those_of_you_on_nvidia_spark_whats_your_stack/ | jinnyjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8e9t3 | false | null | t3_1q8e9t3 | /r/LocalLLaMA/comments/1q8e9t3/for_those_of_you_on_nvidia_spark_whats_your_stack/ | false | false | self | 1 | null |
Why on earth build a local ai | 0 | Dear sub, this may sound against the entire concept of this community, but could you please explain the business cases, financial benefits, security reasons, why would someone put together servers with GPU to run LLM and lots of RAM and GB space for indexing.
Each part can be handled ultra secured with elastic cloud capabilities, to build on the shoulders of giants. So please explain why would someone take such a performance and stability risk ? | 2026-01-09T17:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/1q8dzxl/why_on_earth_build_a_local_ai/ | Melodic_Coffee_833 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8dzxl | false | null | t3_1q8dzxl | /r/LocalLLaMA/comments/1q8dzxl/why_on_earth_build_a_local_ai/ | false | false | self | 0 | null |
Best general purpose LLM 24gb vram and 128gb ram? | 3 | What’s the beat model i can currently run with my hardware? I like to stay at Q4 or above. Best MOE? | 2026-01-09T17:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/1q8dvjb/best_general_purpose_llm_24gb_vram_and_128gb_ram/ | No-Leave-4512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8dvjb | false | null | t3_1q8dvjb | /r/LocalLLaMA/comments/1q8dvjb/best_general_purpose_llm_24gb_vram_and_128gb_ram/ | false | false | self | 3 | null |
Would you be interested in an open-source alternative to Vapi for creating and managing custom voice agents? | 1 | Hey everyone,
I've been working on a voice AI project called **VoxArena** and I am about to open source it. Before I do, I wanted to gauge the community's interest.
I noticed a lot of developers are building voice agents using platforms like Vapi, Retell AI, or Bland AI. While these tools are great, they often come with high usage fees (on top of the LLM/STT costs) and platform lock-in.
I've been building VoxArena as an open-source, self-hostable alternative to give you full control.
**What it does currently:** It provides a full stack for **creating and managing custom voice agents**:
* **Custom Personas:** Create agents with unique system prompts, greeting messages, and voice configurations.
* **Webhooks:** Integrated **Pre-call and Post-call webhooks** to fetch dynamic context (e.g., user info) before the call starts or trigger workflows (e.g., CRM updates) after it ends.
* **Orchestration:** Handles the pipeline between Speech-to-Text, LLM, and Text-to-Speech.
* **Real-time:** Uses **LiveKit** for ultra-low latency audio streaming.
* **Modular:** Currently supports Deepgram (STT), Google Gemini (LLM), and Resemble AI (TTS). **Support for more models (OpenAI, XTTS, etc.) is coming soon.**
* **Dashboard:** Includes a Next.js frontend to monitor calls, view transcripts, and verify agent behavior.
**Why I'm asking:** I'm honestly trying to decide if I should double down and put more work into this. I built it because I wanted to control my own data and costs (paying providers directly without middleman markups).
**If I get a good response here, I plan to build this out further.**
**My Question:** Is this something you would use? Are you looking for a self-hosted alternative to the managed platforms for your voice agents?
I'd love to hear your thoughts. | 2026-01-09T17:03:07 | https://v.redd.it/jkp0owstuccg1 | dp-2699 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q8dt8f | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/jkp0owstuccg1/DASHPlaylist.mpd?a=1770570622%2CMGUzMjBmOTEyYTI2Y2Y1OTRkNDgzZDUwMGRkOTQyOGMwYWI1ZDQyMTJkY2I1NTVkZDk1MWExY2QyOGJjMzFlOA%3D%3D&v=1&f=sd', 'duration': 222, 'fallback_url': 'https://v.redd.it/jkp0owstuccg1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 458, 'hls_url': 'https://v.redd.it/jkp0owstuccg1/HLSPlaylist.m3u8?a=1770570622%2CY2ZjZmExMTU1MjgwNzk3MThiM2Q5MzM0YzE1MDI2N2E5NjVjMTVlZGNlNTlmZWRkMDQxNDYxZmRjNzNkY2NiMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jkp0owstuccg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1q8dt8f | /r/LocalLLaMA/comments/1q8dt8f/would_you_be_interested_in_an_opensource/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MnduNjEzdHR1Y2NnMcltG_ZSVQGvaNxK9QrwpeejUiSXA3_6s1EkhTKMyUCo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MnduNjEzdHR1Y2NnMcltG_ZSVQGvaNxK9QrwpeejUiSXA3_6s1EkhTKMyUCo.png?width=108&crop=smart&format=pjpg&auto=webp&s=651ceb60f348864a592355546511af01c5d54ffb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MnduNjEzdHR1Y2NnMcltG_ZSVQGvaNxK9QrwpeejUiSXA3_6s1EkhTKMyUCo.png?width=216&crop=smart&format=pjpg&auto=webp&s=2149e92bd2ef975ca6a5dd6fc57e9a619a3993ae', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/MnduNjEzdHR1Y2NnMcltG_ZSVQGvaNxK9QrwpeejUiSXA3_6s1EkhTKMyUCo.png?width=320&crop=smart&format=pjpg&auto=webp&s=fcd1b43df7581769c412cc5bd089c85bce4c5c12', 'width': 320}, {'height': 343, 'url': 'https://external-preview.redd.it/MnduNjEzdHR1Y2NnMcltG_ZSVQGvaNxK9QrwpeejUiSXA3_6s1EkhTKMyUCo.png?width=640&crop=smart&format=pjpg&auto=webp&s=aa2361c50e674886b47aaa18d7fe4619ce2af6a3', 'width': 640}, {'height': 515, 'url': 'https://external-preview.redd.it/MnduNjEzdHR1Y2NnMcltG_ZSVQGvaNxK9QrwpeejUiSXA3_6s1EkhTKMyUCo.png?width=960&crop=smart&format=pjpg&auto=webp&s=262bb23b776ebfcdcb92c249ef3bbbf05c8061ee', 'width': 960}], 'source': {'height': 576, 'url': 'https://external-preview.redd.it/MnduNjEzdHR1Y2NnMcltG_ZSVQGvaNxK9QrwpeejUiSXA3_6s1EkhTKMyUCo.png?format=pjpg&auto=webp&s=70be10d2c00995ea9106ecfba5fc30d3cb08a878', 'width': 1072}, 'variants': {}}]} | |
In 72gb VRAM is mistralai/Devstral-Small-2-24B-Instruct-2512 king? | 5 | Not promoting and championing Devstral Small 2, just sharing to ask what others are experiencing.
I’ve been trying to move away from cloud services and focus more on local LLM solutions for agent based coding environment.
My setup is 3x3090 — 72GB VRAM with 64GB RAM (ddr4)
Starting with the AI extension, I’ve tried a bunch of options. Cline, continue, Roo Code, Kilo Code, copilot (in its natural form), codex (ChatGPT only). Felt like they all basically did the same thing.
In the end I settled on Kilo Code, mainly because:
- I liked the control over the “agent type”, especially the “architect” and “orchestrator” modes making detailed plans of action before staring the development
- I managed to set it up it send images to LM Studio, so I can give it wiregraphs and UX flow charts, and print screens for the result to ask for fixes, which felt like a huge win!
- integrated browser in the workflow (taking snapshots of how the website looks and sending it to LM for visual check of the result) — and I think I can make it work with native apps too.
Context window seemed to be a key part — anything under 120k seems like you just can’t tell it what to do. So while 72GB does allow me to run big models, it’s really more like 48gb only for the model itself — keeping 24gb for context.
Q8 seems to be the way — I feel like any model I tried in Q3-Q4 were amazing at 1 shot apps, but the moment you fill that context up and get close to 70-100k it was just getting lost in loops. Q1 and Q2 are just “for show” they are truly terrible, no? But maybe it’s just a feeling…
So with the above limits, feels like the only options are:
- Devatral 2 Small 24b in Q8 160k context
- Qwen3 coder 30b in Q8 160k context (no image)
- Qwen2 coder 72b in Q4 120k context (but there is kimi)
- Kimi Dev 72b in Q4 90k context
- Devstral 2 123b in Q3 100k context (but it’s q3)
Or a non dev model, which they all seem terrible at coding… OSS 120b, Qwen3 Next 80B, GLM 4.5V (in Q3).
So in the end, in the 64-96gb vram bracket, the only model to check all the boxes is Devstral Small 2:
- fit all in VRAM for speed
- use it in Q8 with a large context (over 150k)
- dense model for good instruction following
- seems to work well with kilo code agent profiles
- understand images to self check the visual output (browser and UI results) or just to send print screens of the result and ask for fixes
What do you guys think?
What do other people use?
| 2026-01-09T16:54:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q8dkfl/in_72gb_vram_is/ | liviuberechet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8dkfl | false | null | t3_1q8dkfl | /r/LocalLLaMA/comments/1q8dkfl/in_72gb_vram_is/ | false | false | self | 5 | null |
Alexandre Pedrosa works at Meta and Microsoft, like an Executive Interoperability between Superintelligences Architect and AI Integrator | 0 | Alexandre Pedrosa works at Meta and Microsoft, like an Executive Interoperability between Superintelligences Architect and AI Integrator | 2026-01-09T16:41:46 | RecentJacket3152 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q8d7uw | false | null | t3_1q8d7uw | /r/LocalLLaMA/comments/1q8d7uw/alexandre_pedrosa_works_at_meta_and_microsoft/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '1qanuyo1rccg1', 'resolutions': [{'height': 191, 'url': 'https://preview.redd.it/1qanuyo1rccg1.png?width=108&crop=smart&auto=webp&s=bd0134894f42ec8e507176f0090e1599cb5eaf86', 'width': 108}, {'height': 383, 'url': 'https://preview.redd.it/1qanuyo1rccg1.png?width=216&crop=smart&auto=webp&s=95f35aa6ad2fe45f2e462b58b54c8f722db8a254', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/1qanuyo1rccg1.png?width=320&crop=smart&auto=webp&s=7d569312b885005f53d240a49f355254c04a0de2', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/1qanuyo1rccg1.png?width=640&crop=smart&auto=webp&s=3b4ee881d1dbd1c2c4e7f58c901206b929b0c7ea', 'width': 640}, {'height': 1705, 'url': 'https://preview.redd.it/1qanuyo1rccg1.png?width=960&crop=smart&auto=webp&s=9489293d9cd04ebf0136f21858a001ca071e09a7', 'width': 960}, {'height': 1919, 'url': 'https://preview.redd.it/1qanuyo1rccg1.png?width=1080&crop=smart&auto=webp&s=653140e0ddf04f9ac4dc250a016dd012e797db72', 'width': 1080}], 'source': {'height': 1919, 'url': 'https://preview.redd.it/1qanuyo1rccg1.png?auto=webp&s=2f6e318d3d2de61177031a24c59d4b82742a6947', 'width': 1080}, 'variants': {}}]} | |
Convert entire books to audio with TTS-Story | 7 | I've updated the tts-story software I created so that it supports Chatterbox Turbo. If you're looking for something that will allow you to convert an entire book to an audiobook with easy management tools, you should check it out. Utilizes a range of tools to allow someone to create great text speech compositions
[https://github.com/Xerophayze/TTS-Story](https://github.com/Xerophayze/TTS-Story)
[https://youtu.be/Yhnf8vMUAQQ](https://youtu.be/Yhnf8vMUAQQ)
https://preview.redd.it/5ja9yl4iqccg1.jpg?width=2451&format=pjpg&auto=webp&s=77b22aeb14f8b46fac5f76e121898c3e171bd97e
| 2026-01-09T16:40:02 | https://www.reddit.com/r/LocalLLaMA/comments/1q8d67w/convert_entire_books_to_audio_with_ttsstory/ | Xerophayze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8d67w | false | null | t3_1q8d67w | /r/LocalLLaMA/comments/1q8d67w/convert_entire_books_to_audio_with_ttsstory/ | false | false | 7 | null | |
I have created my AI bot. Please test it. | 1 | [removed] | 2026-01-09T16:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/1q8czg5/i_have_created_my_ai_bot_please_test_it/ | Every-Throat784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8czg5 | false | null | t3_1q8czg5 | /r/LocalLLaMA/comments/1q8czg5/i_have_created_my_ai_bot_please_test_it/ | false | false | self | 1 | null |
AI websearch with searxng stopped working | 5 | The absolute AI-killer use case in my fab was the AI supported web search.
About a year ago I set up OpenwebUI, litellm, AI engines (first ollama, now llama.cpp) and a searxng instance.
Everybody stopped using google and started searching through openwebUIs/searxng combined with qwen3-30b-instruct. A typical wiinning team!
About 8 weeks ago searxng stopped working and I spent hours/days in finding the cause. Seraching through the searxng webinterface still works. But openwebUI refuses it.
The -json command is configured properly
I set up a new instance. It worked for a few shots and then stopped again.
are there any mechanism that notes searches through openwebUI/AI and refuses to answer? Is my IP on a black list?
Apart from this I am struggling with "too many request" answers through the search engines as well.
We are a small shop with less than 10 workers. But I would not resist to go a paid plan. What are others doing? Any recommendations?
| 2026-01-09T16:23:23 | https://www.reddit.com/r/LocalLLaMA/comments/1q8cpus/ai_websearch_with_searxng_stopped_working/ | Impossible_Art9151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8cpus | false | null | t3_1q8cpus | /r/LocalLLaMA/comments/1q8cpus/ai_websearch_with_searxng_stopped_working/ | false | false | self | 5 | null |
The reason why RAM has become so expensive | 3,883 | 2026-01-09T16:18:22 | InvadersMustLive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q8ckz0 | false | null | t3_1q8ckz0 | /r/LocalLLaMA/comments/1q8ckz0/the_reason_why_ram_has_become_so_expensive/ | false | false | default | 3,883 | {'enabled': True, 'images': [{'id': 'sgbhubsomccg1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/sgbhubsomccg1.png?width=108&crop=smart&auto=webp&s=436badcfe0680c1dd74a6ebd3a2e15462a9a64b7', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/sgbhubsomccg1.png?width=216&crop=smart&auto=webp&s=e5c890044026ce951866e08864ae3d9f1beecae7', 'width': 216}, {'height': 117, 'url': 'https://preview.redd.it/sgbhubsomccg1.png?width=320&crop=smart&auto=webp&s=57d847c1b9a2d5786b0a888b5d0d25fe5ede9e12', 'width': 320}], 'source': {'height': 164, 'url': 'https://preview.redd.it/sgbhubsomccg1.png?auto=webp&s=8d270d88ce4db9d8278a2a55b6e0205513ffeda7', 'width': 447}, 'variants': {}}]} | ||
Local LLM Generation Speeds (5090 FE + 3090 Ti, llama.cpp) | 1 | I couldn’t find consistent real-world speed benchmarks for local LLMs, so I ran my own tests.
Sharing in case this helps anyone choosing a model.
**Setup**
* GPUs: RTX 5090 FE + RTX 3090 Ti (both at 80% power limit)
* OS: Debian 13
* Runtime: llama.cpp
* Context: 32k
* Metric: tokens/sec (generation)
**Results**
|Model|Quant|Speed|
|:-|:-|:-|
|devstral-2:123b-instruct-2512|Q4|2 t/s|
|GLM-4.5-Air|Q8|13 t/s|
|GLM-4.5-Air|Q6|16 t/s|
|nemotron-3-nano:30b-a3b|FP16|66 t/s|
|nemotron-3-nano:30b-a3b|Q8|197 t/s|
|qwen3:235b-a22b|Q4\_K|7 t/s|
|qwen3-coder:30b-a3b|FP16|61 t/s|
|qwen3-coder:30b-a3b|Q8|178 t/s|
All tests used the same 32k context window. | 2026-01-09T16:13:17 | https://www.reddit.com/r/LocalLLaMA/comments/1q8cfss/local_llm_generation_speeds_5090_fe_3090_ti/ | Shoddy_Bed3240 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8cfss | false | null | t3_1q8cfss | /r/LocalLLaMA/comments/1q8cfss/local_llm_generation_speeds_5090_fe_3090_ti/ | false | false | self | 1 | null |
Wh SVD Breaks LLMs (And How to Fix It) | 0 | 2026-01-09T16:09:33 | https://youtu.be/MGS39vd6mRk | JosefAlbers05 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1q8cc0t | false | null | t3_1q8cc0t | /r/LocalLLaMA/comments/1q8cc0t/wh_svd_breaks_llms_and_how_to_fix_it/ | false | false | default | 0 | null | |
Real-world DGX Spark experiences after 1-2 months? Fine-tuning, stability, hidden pitfalls? | 17 | I’d like to hear from those who have been using the DGX Spark for 1-2 months now. What’s your experience so far?
I’m particularly interested in fine-tuning capabilities, and I find both the NVIDIA software stack and the possibilities offered by the 128 GB of memory very appealing. I’m currently practicing on an RTX 5060 Ti 16GB, so in terms of raw performance this would be roughly comparable. The main appeal for me is the ability to work with larger models without having to build a multi-GPU rig from used cards or rely on different cloud providers.
Cost ( and speed) is secondary for me, because if it supports learning and skill development, I see it as a good investment.
What I’m more interested in hearing about are the technical downsides or challenges: setup complexity, software limitations, stability issues, bottlenecks in fine-tuning workflows, or anything else that might not be obvious at first.
Has anyone run into technical issues that made them regret the purchase?
Thanks! | 2026-01-09T16:04:22 | https://www.reddit.com/r/LocalLLaMA/comments/1q8c6x1/realworld_dgx_spark_experiences_after_12_months/ | PromptAndHope | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8c6x1 | false | null | t3_1q8c6x1 | /r/LocalLLaMA/comments/1q8c6x1/realworld_dgx_spark_experiences_after_12_months/ | false | false | self | 17 | null |
I stopped prompt-engineering and gave my agent a world instead | 0 | I’m a software engineer based in South Korea.
Like many Koreans, I remember the day Lee Sedol played AlphaGo.
It was a historic moment — but it left me with a technical question:
**Does intelligence have to exist as a centralized, amnesic system owned by a few?**
I’m not approaching this as a philosophical question.
I’m treating it as a **systems problem**.
The real “ghost in the machine” isn’t consciousness.
It’s **hidden state**:
implicit memory, non-replayable execution,
and context that disappears the moment a process restarts.
That’s where most agent failures actually come from.
So I built a protocol that removes that ghost by enforcing a few constraints:
* All continuity lives in explicit, serialized snapshots
* A mind can only *propose* actions — it never mutates state directly
* Effects are declared, not executed by the model
* Every run is replayable, inspectable, and auditable
This is a **deliberately shallow model of mind**.
Most “deep” systems rely on hidden representations.
This one assumes the opposite:
If a mind matters, it should be **visible**.
If it acts, it should be **replayable**.
If it fails, it should be **debuggable**.
This is not a theory of consciousness.
It’s not AGI.
It’s an architectural proposal for building agents
**without hidden ghosts.** | 2026-01-09T15:53:03 | https://v.redd.it/ype5kcwvhccg1 | TraditionalListen994 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q8bw23 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/ype5kcwvhccg1/DASHPlaylist.mpd?a=1770566000%2CZGUxNGI5ZGNiYjExMGM3NzI4Mjg0ZDgwOTVkNmJiZmE2ODE4NjQxNTU2NzZjMDhhMTA4MTIzNDgwZDU3Y2U0Yg%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/ype5kcwvhccg1/CMAF_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'hls_url': 'https://v.redd.it/ype5kcwvhccg1/HLSPlaylist.m3u8?a=1770566000%2CMWIzNTFmNmM0NDQwNmM2YjkzNTBhOThhZTY3ZTg3MGRiMjE4MDE0MDE0Y2NkNTkyMDJkNjA3YzEyZjFiNDk1NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ype5kcwvhccg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 624}} | t3_1q8bw23 | /r/LocalLLaMA/comments/1q8bw23/i_stopped_promptengineering_and_gave_my_agent_a/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MXg5ZWhnd3ZoY2NnMR0qVVHjhlSKWaQ6hAuAmm6nIAy_UC4LFUJ7u0tvXIfD', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/MXg5ZWhnd3ZoY2NnMR0qVVHjhlSKWaQ6hAuAmm6nIAy_UC4LFUJ7u0tvXIfD.png?width=108&crop=smart&format=pjpg&auto=webp&s=6734c2ed148742f8ea44c1ec7ddb8ea366e0b81c', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/MXg5ZWhnd3ZoY2NnMR0qVVHjhlSKWaQ6hAuAmm6nIAy_UC4LFUJ7u0tvXIfD.png?width=216&crop=smart&format=pjpg&auto=webp&s=cc3c669fe08dc957d9ab76dc6f67c11189059e30', 'width': 216}, {'height': 246, 'url': 'https://external-preview.redd.it/MXg5ZWhnd3ZoY2NnMR0qVVHjhlSKWaQ6hAuAmm6nIAy_UC4LFUJ7u0tvXIfD.png?width=320&crop=smart&format=pjpg&auto=webp&s=b9aab356e1b0faa51d0c2ec0393e9381b8701b54', 'width': 320}, {'height': 492, 'url': 'https://external-preview.redd.it/MXg5ZWhnd3ZoY2NnMR0qVVHjhlSKWaQ6hAuAmm6nIAy_UC4LFUJ7u0tvXIfD.png?width=640&crop=smart&format=pjpg&auto=webp&s=00ab0544563c32d8b04d59cbc5cf29eacc72da41', 'width': 640}], 'source': {'height': 492, 'url': 'https://external-preview.redd.it/MXg5ZWhnd3ZoY2NnMR0qVVHjhlSKWaQ6hAuAmm6nIAy_UC4LFUJ7u0tvXIfD.png?format=pjpg&auto=webp&s=71a2941dcf31dcffc2710f77101f3f86ed8fe33c', 'width': 640}, 'variants': {}}]} | |
Alternatives to DeepInfra B200 for GPU rentals | 3 | I typically rent GPUs for training runs, usually between 1 and 7 days. Until recently, I was using the B200 GPUs for $2.49/hr on DeepInfra, which worked really well in terms of pricing and overall ease of use.
Availability on DeepInfra has become an issue lately, so I’m looking for alternative providers that offer similar pricing and a comparable level of convenience. I haven’t really checked the market in a while because DeepInfra was so good.
I’m looking to rent single GPUs, specifically A100, H100, H200, or B200.
Any recommendations would be helpful. | 2026-01-09T15:46:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q8bpn0/alternatives_to_deepinfra_b200_for_gpu_rentals/ | Fabulous-Original-69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8bpn0 | false | null | t3_1q8bpn0 | /r/LocalLLaMA/comments/1q8bpn0/alternatives_to_deepinfra_b200_for_gpu_rentals/ | false | false | self | 3 | null |
Framework de réduction tokens LLM - 71% économies (tests validés) | 0 | Salut, J'ai développé une méthode (Théorème des Innommables ⧉/⧉ₛ) qui optimise les réponses LLM en marquant explicitement les gaps de connaissance.
Principe :
identifier et marquer ce qu'on sait vs ce qu'on ne sait pas avant de générer une réponse :
\- ⧉ = gaps irréductibles
\- ⧉ₛ = hypothèses testables Le LLM évite ainsi le "meublage" spéculatif et reste factuel.
Résultats tests Tests de référence sur dataset TruthfulQA
(validés avec Grok et Claude) :
\- 71% réduction tokens moyenne
\- 100% réduction hallucinations
\- Réponses 3x plus courtes
\- Exemple : 58 tokens → 11 tokens (81%)
Tests préliminaires pour l'instant
\- benchmarks complets à valider à plus grande échelle.
Pertinence pour local Pour ceux qui font tourner en local :
\- Inférence plus rapide
\- Moins de RAM/GPU utilisé
\- Meilleure performance globale
\- Principe universel (fonctionne avec tous LLMs)
Implémentation
\- Setup : 5 minutes
\- Coût : 0€
\- Simple modification prompt système
\- Pas d'infrastructure nécessaire
\- Aucune mise à jour requise (évolutif naturellement)
Documentation Méthodologie complète + tests :
[github.com/OthoXIII/theoreme-innommables](http://github.com/OthoXIII/theoreme-innommables) → OPTIMISATION\_IA\_ECONOMIE\_TOKENS.md Feedback bienvenue si vous testez ! | 2026-01-09T15:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/1q8bohx/framework_de_réduction_tokens_llm_71_économies/ | OthoXIII | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8bohx | false | null | t3_1q8bohx | /r/LocalLLaMA/comments/1q8bohx/framework_de_réduction_tokens_llm_71_économies/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'uWna0vwnZ9wlxAeEIZSSQ85gTGl9fK0mbwqMPDvkfWA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uWna0vwnZ9wlxAeEIZSSQ85gTGl9fK0mbwqMPDvkfWA.png?width=108&crop=smart&auto=webp&s=3ff946c2b9bb25a8c2b3588206c5cead8d7b2914', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uWna0vwnZ9wlxAeEIZSSQ85gTGl9fK0mbwqMPDvkfWA.png?width=216&crop=smart&auto=webp&s=9aa2141411b1987a8e52a6e19b22417ab81954b1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uWna0vwnZ9wlxAeEIZSSQ85gTGl9fK0mbwqMPDvkfWA.png?width=320&crop=smart&auto=webp&s=e32c60ec247951776c7e0bed5acedf96088854af', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uWna0vwnZ9wlxAeEIZSSQ85gTGl9fK0mbwqMPDvkfWA.png?width=640&crop=smart&auto=webp&s=c6692ba5944390ff84f6ee5c3fa504d84f7f087f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uWna0vwnZ9wlxAeEIZSSQ85gTGl9fK0mbwqMPDvkfWA.png?width=960&crop=smart&auto=webp&s=997420c2bbc0f403d58a690806a86e1d91c17f99', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uWna0vwnZ9wlxAeEIZSSQ85gTGl9fK0mbwqMPDvkfWA.png?width=1080&crop=smart&auto=webp&s=97c626fd88db2418615d53471514b05cc79d7e65', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uWna0vwnZ9wlxAeEIZSSQ85gTGl9fK0mbwqMPDvkfWA.png?auto=webp&s=2703d5c4f990887b18c6b36d02f8cf6a7e8f0f36', 'width': 1200}, 'variants': {}}]} |
Finetuning Granite 4.0 h 1b on Tesla A100 | 3 | I'm trying to finetune Granite 4.0 H 1B on Tesla A100 (40gb vram) and I keep running into OOM. I'm following the unsloth example notebook pretty much exactly (just my own dataset) and I keep getting an OOM error running in Collab. Am I wrong to think 40gb vram should be able to tune this model on 2 batches per device? It works on batch size 1 but the training time will be forever (estimated 100 hours). Oddly batch size 2 estimates 4 hours. Any help is appreciated!
\`\`\`
OutOfMemoryError: CUDA out of memory. Tried to allocate 13.50 GiB. GPU 0 has a total capacity of 39.49 GiB of which 8.64 GiB is free. Process 3931 has 30.85 GiB memory in use. Of the allocated memory 30.28 GiB is allocated by PyTorch, and 54.64 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH\_CUDA\_ALLOC\_CONF=expandable\_segments:True to avoid fragmentation. See documentation for Memory Management
\`\`\`
Also seems odd the memory is all used up just loading the model? I must be doing something wrong? | 2026-01-09T15:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q8bl2w/finetuning_granite_40_h_1b_on_tesla_a100/ | thepetek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8bl2w | false | null | t3_1q8bl2w | /r/LocalLLaMA/comments/1q8bl2w/finetuning_granite_40_h_1b_on_tesla_a100/ | false | false | self | 3 | null |
Ministral-3-14B-Reasoning: High Intelligence on Low VRAM – A Benchmark-Comparison | 57 | Below you’ll find a benchmark comparison of Ministral-3-14B-Reasoning-2512 against 10 other large language models.
**LiveCodeBench:**
|Model|LiveCodeBench|
|:-|:-|
|GLM-4.5-Air|70.7%|
|Gemini 2.5 Pro Preview|69.0%|
|Llama 3.1 Nemotron Ultra|66.3%|
|Qwen3 32B|65.7%|
|MiniMax M1 80K|65.0%|
|**Ministral 3 (14B Reasoning)**|**64.6%**|
|QwQ-32B|63.4%|
|Qwen3 30B A3B|62.6%|
|MiniMax M1 40K|62.3%|
|Ministral 3 (8B Reasoning)|61.6%|
|DeepSeek R1 Distill Llama|57.5%|
**GPQA:**
|Model|GPQA|
|:-|:-|
|o1-preview|73.3%|
|Qwen3 VL 32B Thinking|73.1%|
|Claude Haiku 4.5|73.0%|
|Qwen3-Next-80B-A3B-Instruct|72.9%|
|GPT OSS 20B|71.5%|
|**Ministral 3 (14B Reasoning)**|**71.2%**|
|GPT-5 nano|71.2%|
|Magistral Medium|70.8%|
|Qwen3 VL 30B A3B Instruct|70.4%|
|GPT-4o|70.1%|
|MiniMax M1 80K|70.0%|
**AIME 2024:**
|**Model**|**AIME 2024**|
|:-|:-|
|Grok-3|93.3%|
|Gemini 2.5 Pro|92.0%|
|o3|91.6%|
|DeepSeek-R1-0528|91.4%|
|GLM-4.5|91.0%|
|**Ministral 3 (14B Reasoning 2512)**|**89.8%**|
|GLM-4.5-Air|89.4%|
|Gemini 2.5 Flash|88.0%|
|o3-mini|87.3%|
|DeepSeek R1 Zero|86.7%|
|DeepSeek R1 Distill Llama 70B|86.7%|
**AIME 2025:**
|**Model**|**AIME 2025**|
|:-|:-|
|Qwen3-Next-80B-A3B-Thinking|87.8%|
|DeepSeek-R1-0528|87.5%|
|Claude Sonnet 4.5|87.0%|
|o3|86.4%|
|GPT-5 nano|85.2%|
|**Ministral 3 (14B Reasoning 2512)**|85.0%|
|Qwen3 VL 32B Thinking|83.7%|
|Qwen3 VL 30B A3B Thinking|83.1%|
|Gemini 2.5 Pro|83.0%|
|Qwen3 Max|81.6%|
|Qwen3 235B A22B|81.5%|
All benchmark results are sourced from this page: [https://llm-stats.com/benchmarks/llm-leaderboard-full](https://llm-stats.com/benchmarks/llm-leaderboard-full) | 2026-01-09T15:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q8b82f/ministral314breasoning_high_intelligence_on_low/ | Snail_Inference | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8b82f | false | null | t3_1q8b82f | /r/LocalLLaMA/comments/1q8b82f/ministral314breasoning_high_intelligence_on_low/ | false | false | self | 57 | {'enabled': False, 'images': [{'id': 'EQBR4YCWgaPUszfngCcU13vtzwXs5zXpkLopq7odfKs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/EQBR4YCWgaPUszfngCcU13vtzwXs5zXpkLopq7odfKs.jpeg?width=108&crop=smart&auto=webp&s=62fba8801cdccde0c1be1ac5c0f86cfbdf64227c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/EQBR4YCWgaPUszfngCcU13vtzwXs5zXpkLopq7odfKs.jpeg?width=216&crop=smart&auto=webp&s=7c1287a071386bd10e529a9b52542992b1018c31', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/EQBR4YCWgaPUszfngCcU13vtzwXs5zXpkLopq7odfKs.jpeg?width=320&crop=smart&auto=webp&s=a4e42015d3f4ce8f01e1160cfcdfbea5292ee52f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/EQBR4YCWgaPUszfngCcU13vtzwXs5zXpkLopq7odfKs.jpeg?width=640&crop=smart&auto=webp&s=dc2a82ec9b2af4cea1a5d7304132039e43b03ea5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/EQBR4YCWgaPUszfngCcU13vtzwXs5zXpkLopq7odfKs.jpeg?width=960&crop=smart&auto=webp&s=129729fd6627bacae968eb37b2992aa6ac8f35a8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/EQBR4YCWgaPUszfngCcU13vtzwXs5zXpkLopq7odfKs.jpeg?width=1080&crop=smart&auto=webp&s=0a1adb088e7ef222e9b1db9f17444c1588acf9ab', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/EQBR4YCWgaPUszfngCcU13vtzwXs5zXpkLopq7odfKs.jpeg?auto=webp&s=f591748c45e8dcdd7c2ea7605f657780f4151b8a', 'width': 1200}, 'variants': {}}]} |
Tested GLM 4.7 vs MiniMax M2.1 - impressed with the performance of both | 65 | Full transparency, I work closely with the Kilo Code team, so take this with appropriate context. That said, I think the results are genuinely interesting for anyone running local/open-weight models.
We ran GLM 4.7 and MiniMax M2.1 through a real coding benchmark, building a CLI task runner with 20 features (dependency management, parallel execution, caching, YAML parsing, etc.). The kind of task that would take a senior dev a day or two.
How it was actually tested:
\- Phase 1: Architecture planning (Architect mode)
\- Phase 2: Full implementation (Code mode)
\- Both models ran uninterrupted with zero human intervention
Overall performance summary
https://preview.redd.it/c636beit7ccg1.png?width=1456&format=png&auto=webp&s=0e175e42659bcbee51d9f66d5d29ec79958a2b00
***Phase 1 results***
*GLM 4.7:*
\- 741-line architecture doc with 3 Mermaid diagrams
\- Nested structure: 18 files across 8 directories
\- Kahn's algorithm with pseudocode, security notes, 26-step roadmap
*MiniMax M2.1:*
\- 284-line plan, 2 diagrams - leaner but covered everything
\- Flat structure: 9 files
\- Used Commander.js (smart library choice vs rolling your own)
***Plan Scoring***
https://preview.redd.it/cw1fvloq9ccg1.png?width=1014&format=png&auto=webp&s=af5febf64d3d28f170bf693d58257c386865c814
***Phase 2 Results: Implementation***
Both models successfully implemented all 20 requirements. The code compiles, runs, and handles the test cases correctly without any major issues or errors.
Implementations include:
\- Working topological sort with cycle detection
\- Parallel execution with concurrency limits
GLM 4.7’s is more responsive to individual task completion. MiniMax M2.1’s is simpler to understand.
***Implementation Scoring***
https://preview.redd.it/a1g7d8ul9ccg1.png?width=1426&format=png&auto=webp&s=7891b07de8642aac887a1acb44a432e02c5b2c58
***Code Quality Differences***
While both implementations are functional, they differ in structure and style.
For example, for the architecture test, GLM 4.7 created a deeply modular structure, while MiniMax M2.1 created a flat structure.
For error handling, GLM 4.7 created custom error classes. On the other hand, MiniMax M2.1 used standard Error objects with descriptive messages:
[](https://substackcdn.com/image/fetch/$s_!9AeR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F155ec0e4-5b77-4398-a7aa-87af0f2395e6_1629x652.png)
Regarding CLI Parsing, GLM 4.7 implemented argument parsing manually, [](https://substackcdn.com/image/fetch/$s_!J5xk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a945a88-dfa1-4f9a-b264-070994e52806_1629x600.png)MiniMax M2.1 used commander.js:
[](https://substackcdn.com/image/fetch/$s_!v0un!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4d599b7-4ff0-48a9-8a6e-12701c009262_1629x276.png)
GLM 4.7’s approach has no external dependency. MiniMax M2.1’s approach is more maintainable and handles edge cases automatically.
**Documentation**
GLM 4.7 generated a 363-line README.md with installation instructions, configuration reference, CLI options, multiple examples, and exit code documentation.
Both models demonstrated genuine agentic behavior. After finishing the implementation, each model tested its own work by running the CLI with Bash and verified the output.
**Cost Analysis**
[](https://substackcdn.com/image/fetch/$s_!VUYs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa32c27b-b49d-4704-b8be-6332d4875217_794x386.png)
https://preview.redd.it/9pesc5s0bccg1.png?width=794&format=png&auto=webp&s=980ef4aacd34f33d1aa9917126a2745fde950acd
**Tradeoffs**
Based on our testing, GLM 4.7 is better if you want comprehensive documentation and modular architecture out of the box. It generated a full README, detailed error classes, and organized code across 18 well-separated files. The tradeoff is higher cost and some arguably over-engineered patterns like manual CLI parsing when a library would do.
MiniMax M2.1 is better if you prefer simpler code and lower cost. Its 9-file structure is easier to navigate, and it used established libraries like Commander.js instead of rolling its own. The tradeoff is no documentation. You’ll need to add a README and inline comments yourself.
If you want the full breakdown with code snippets and deeper analysis, you can read it here: [https://blog.kilo.ai/p/open-weight-models-are-getting-serious](https://blog.kilo.ai/p/open-weight-models-are-getting-serious) | 2026-01-09T15:17:53 | https://www.reddit.com/r/LocalLLaMA/comments/1q8aypi/tested_glm_47_vs_minimax_m21_impressed_with_the/ | alokin_09 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8aypi | false | null | t3_1q8aypi | /r/LocalLLaMA/comments/1q8aypi/tested_glm_47_vs_minimax_m21_impressed_with_the/ | false | false | 65 | null | |
I built a tiny CLI to run Claude Code in a Ralph Wiggum loop (with git worktrees) | 5 | I’ve been experimenting a lot with the Ralph Wiggum methodology for Claude-based coding, and things got messy fast.
So I built a small CLI called chief that:
• spins up isolated git worktrees
• lets Claude plan first
• converts plans into structured tasks
• runs an autonomous loop with verification + commits per step
• opens a PR when done
It’s been making Claude-coding way less chaotic for me over the past week.
Repo here if you want to poke around:
https://github.com/mauricekleine/chief
Curious how others here are structuring agent loops! | 2026-01-09T14:29:07 | https://www.reddit.com/r/LocalLLaMA/comments/1q89p6s/i_built_a_tiny_cli_to_run_claude_code_in_a_ralph/ | mauricekleine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q89p6s | false | null | t3_1q89p6s | /r/LocalLLaMA/comments/1q89p6s/i_built_a_tiny_cli_to_run_claude_code_in_a_ralph/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'RS8ybHNYDtaigvfGqDFoPylbhusgcrw57YzkQBXlNwg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RS8ybHNYDtaigvfGqDFoPylbhusgcrw57YzkQBXlNwg.png?width=108&crop=smart&auto=webp&s=29372ef5b2fb027cabd46da9bc4317f57f85523c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RS8ybHNYDtaigvfGqDFoPylbhusgcrw57YzkQBXlNwg.png?width=216&crop=smart&auto=webp&s=fb4720319bc1b21821f9012b574754c937f7d53a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RS8ybHNYDtaigvfGqDFoPylbhusgcrw57YzkQBXlNwg.png?width=320&crop=smart&auto=webp&s=243329e36ba526b5cea7d6950b8e617f88fc7a4e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RS8ybHNYDtaigvfGqDFoPylbhusgcrw57YzkQBXlNwg.png?width=640&crop=smart&auto=webp&s=6108091c3ffb69fc8e6f7497a171a1352c1060d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RS8ybHNYDtaigvfGqDFoPylbhusgcrw57YzkQBXlNwg.png?width=960&crop=smart&auto=webp&s=bb52038c86417608d282f9cf0f30cc2acc5ed8f5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RS8ybHNYDtaigvfGqDFoPylbhusgcrw57YzkQBXlNwg.png?width=1080&crop=smart&auto=webp&s=3ad0c0d7469d73a5263b11e16b390a1882418065', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RS8ybHNYDtaigvfGqDFoPylbhusgcrw57YzkQBXlNwg.png?auto=webp&s=4dcb6c6a6ae05d6a157c3cbf250b90d6f4239c30', 'width': 1200}, 'variants': {}}]} |
DeepSeek V4 Coming | 458 | eople with direct knowledge, DeepSeek is expected to roll out a next‑generation flagship AI model in the coming weeks that focuses on strong code‑generation capabilities.
The two sources said the model, codenamed V4, is an iteration of the V3 model DeepSeek released in December 2024. Preliminary internal benchmark tests conducted by DeepSeek employees indicate the model outperforms existing mainstream models in code generation, including Anthropic’s Claude and the OpenAI GPT family.
The sources said the V4 model achieves a technical breakthrough in handling and parsing very long code prompts, a significant practical advantage for engineers working on complex software projects. They also said the model’s ability to understand data patterns across the full training pipeline has been improved and that no degradation in performance has been observed.
[https://www.theinformation.com/articles/deepseek-release-next-flagship-ai-model-strong-coding-ability](https://www.theinformation.com/articles/deepseek-release-next-flagship-ai-model-strong-coding-ability) | 2026-01-09T14:18:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q89g1i/deepseek_v4_coming/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q89g1i | false | null | t3_1q89g1i | /r/LocalLLaMA/comments/1q89g1i/deepseek_v4_coming/ | false | false | self | 458 | null |
Distilling + Quantizing LLM for Local RAG | 1 | Hi everyone,
First post here plus a bit noob in running LLM models locally. But had a question. (Go easy on me)
Can I use a good LLM (lets say Llama 4.0), distill to a smaller param model, quantize it and make it focused for a very specific task for performing RAG.
Basically I want to create a hyper specialized local RAG assistant which is an expert in specific domain and that runs completely offline.
Based on some papers I've read, here is the flow I’m thinking of. Does this look right to you guys?
1. Teacher Model: Use a massive "Teacher" model (like Llama 4 400B or DeepSeek-R1) via API.
2. Distillation: Have the Teacher generate a high-quality synthetic dataset (Questions + Chain-of-Thought Reasoning + Answers) based on my specific domain documents.
3. Student Training**:** Fine-tune a smaller model on this synthetic dataset to teach it the reasoning patterns.
4. Quantization: Compress the trained Student model to 4-bit (GGUF format) to shrink the memory footprint.
5. Local Inference: Run this quantized model locally using something like llama.cpp alongside a local vector store for RAG.
Trying to learn couple of details plus I would like to test my rtx4060 whether its capable of running any llm model.
Question I had in mind:
* I read couple of papers and blogs which claim that by distilling and quantizing any LLM would make it capable of running on potato/mediocre machines.
* Also, would the gpu make any difference? Like me using an AMD or Intel GPU instead of Nvidia?
Thanks in advance.
Blogs/Papers and article links:
Papers:
1. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. [https://arxiv.org/abs/2501.12948](https://arxiv.org/abs/2501.12948)
2. Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs [https://arxiv.org/abs/2402.12030](https://arxiv.org/abs/2402.12030)
3. Universal Cross-Tokenizer Distillation via Approximate Likelihood Matching [https://arxiv.org/abs/2503.20083](https://arxiv.org/abs/2503.20083)
4. Distilling Reasoning Capabilities into Smaller Language Models [https://arxiv.org/abs/2212.00193](https://arxiv.org/abs/2212.00193)
5. Distilling Step-by-Step! Outperforming Larger Language Models with Less Data [https://arxiv.org/abs/2305.02301](https://arxiv.org/abs/2305.02301)
| 2026-01-09T13:52:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q88soe/distilling_quantizing_llm_for_local_rag/ | An0n_A55a551n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q88soe | false | null | t3_1q88soe | /r/LocalLLaMA/comments/1q88soe/distilling_quantizing_llm_for_local_rag/ | false | false | self | 1 | null |
(The Information): DeepSeek To Release Next Flagship AI Model With Strong Coding Ability | 461 | (paywall): [https://www.theinformation.com/articles/deepseek-release-next-flagship-ai-model-strong-coding-ability](https://www.theinformation.com/articles/deepseek-release-next-flagship-ai-model-strong-coding-ability) | 2026-01-09T13:39:02 | https://www.reddit.com/gallery/1q88hdc | Nunki08 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q88hdc | false | null | t3_1q88hdc | /r/LocalLLaMA/comments/1q88hdc/the_information_deepseek_to_release_next_flagship/ | false | false | default | 461 | null |
Getting started with local llm. Some questions | 3 | Hi guys!
I have a laptop with a rtx 3070 8gb. I deployed ollama, open Web UI and I am running some small models like qwen3 4B. Quite disappointed by the output but thrilled that I managed to set this up with a rag/knowledge base.
My ideal goal:
1. Local LLM plugged to N8n on my NAS
2. A coding LLM (to dev some helper apps for work)
3. 1 light model trained as an analyst, a larger model trained to build presentations/decks, analyse and reason more deeply on provided data and documents (so perhaps a 7B and a 30-70B model)
4. A reranker
I am planning on setting up the rag in my nas.
I had in mind to buy a Mac mini or Studio with something like 64gb ram, plug it to the NAS and query it from my laptop on local network
My questions, if you guys have time please, are:
- does this setup make sense?
- what mac shall I buy? (it's very confusing between the various generations of mac silicon, their multicore, the ram bandwidth etc)
- how do you select which model to use? (so many between instruct, quantized, 8bits etc)
- will the output of the analyst role and the deck/powerpoint builder be good enough for professional use?
I did analysis and deck building using perplexity and chatgpt with some success and would like to leverage all the docs, methodologies etc I have accumulated over the years locally.
Thank you!
| 2026-01-09T13:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/1q88864/getting_started_with_local_llm_some_questions/ | Choubix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q88864 | false | null | t3_1q88864 | /r/LocalLLaMA/comments/1q88864/getting_started_with_local_llm_some_questions/ | false | false | self | 3 | null |
LLM for structured outputs max 9B | 3 | Looking for an LLM that is especially good at structures JSON outputs
Doesn’t necessarily have to be rly smart for this task, just rly good at doing the output in a structured way accurately
Max 9B param preferred for this task but more is ok | 2026-01-09T13:24:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q885kg/llm_for_structured_outputs_max_9b/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q885kg | false | null | t3_1q885kg | /r/LocalLLaMA/comments/1q885kg/llm_for_structured_outputs_max_9b/ | false | false | self | 3 | null |
I've seen way too many people struggling with Arabic document extraction for RAG so here's the 5-stage pipeline that actually worked for me (especially for tabular data) | 19 | Been lurking here for a while and noticed a ton of posts about Arabic OCR/document extraction failing spectacularly. Figured I'd share what's been working for us after months of pain.
Most platform assume Arabic is just "English but right-to-left" which is... optimistic at best.
You see the problem with arabic is text flows RTL, but numbers in Arabic text flow LTR. So you extract policy #8742 as #2478. I've literally seen insurance claims get paid to the wrong accounts because of this. actual money sent to wrong people....
Letters change shape based on position. Take ب (the letter "ba"):
ب when isolated
بـ at word start
ـبـ in the middle
ـب at the end
Same letter. Four completely different visual forms. Your Latin-trained model sees these as four different characters. Now multiply this by 28 Arabic letters.
Diacritical marks completely change meaning. Same base letters, different tiny marks above/below:
كَتَبَ = "he wrote" (active)
كُتِبَ = "it was written" (passive)
كُتُب = "books" (noun)
This is a big issue for liability in companies who process these types of docs
anyway since everyone is probably reading this for the solution here's all the details :
Stage 1: Visual understanding before OCR
Use vision transformers (ViT) to analyze document structure BEFORE reading any text. This classifies the doc type (insurance policy vs claim form vs treaty - they all have different layouts), segments the page into regions (headers, paragraphs, tables, signatures), and maps table structure using graph neural networks.
Why graphs? Because real-world Arabic tables have merged cells, irregular spacing, multi-line content. Traditional grid-based approaches fail hard. Graph representation treats cells as nodes and spatial relationships as edges.
Output: "Moroccan vehicle insurance policy. Three tables detected at coordinates X,Y,Z with internal structure mapped."
Stage 2: Arabic-optimized OCR with confidence scoring
Transformer-based OCR that processes bidirectionally. Treats entire words/phrases as atomic units instead of trying to segment Arabic letters (impossible given their connected nature).
Fine-tuned on insurance vocabulary so when scan quality is poor, the language model biases toward domain terms like تأمين (insurance), قسط (premium), مطالبة (claim).
Critical part: confidence scores for every extraction. "94% confident this is POL-2024-7891, but 6% chance the 7 is a 1." This uncertainty propagates through your whole pipeline. For RAG, this means you're not polluting your vector DB with potentially wrong data.
Stage 3: Spatial reasoning for table reconstruction
Graph neural networks again, but now for cell relationships. The GNN learns to classify: is\_left\_of, is\_above, is\_in\_same\_row, is\_in\_same\_column.
Arabic-specific learning: column headers at top of columns (despite RTL reading), but row headers typically on the RIGHT side of rows. Merged cells spanning columns represent summary categories.
Then semantic role labeling. Patterns like "رقم-٤digits-٤digits" → policy numbers. Currency amounts in specific columns → premiums/limits. This gives you:
Row 1: \[Header\] نوع التأمين | الأساسي | الشامل | ضد الغير
Row 2: \[Data\] القسط السنوي | ١٢٠٠ ريال | ٣٥٠٠ ريال | ٨٠٠ ريال
With semantic labels: coverage\_type, basic\_premium, comprehensive\_premium, third\_party\_premium.
Stage 4: Agentic validation (this is the game-changer)
AI agents that continuously check and self-correct. Instead of treating first-pass extraction as truth, the system validates:
Consistency: Do totals match line items? Do currencies align with locations?
Structure: Does this car policy have vehicle details? Health policy have member info?
Cross-reference: Policy number appears 5 times in the doc - do they all match?
Context: Is this premium unrealistically low for this coverage type?
When it finds issues, it doesn't just flag them. It goes back to the original PDF, re-reads that specific region with better image processing or specialized models, then re-validates.
Creates a feedback loop: extract → validate → re-extract → improve. After a few passes, you converge on the most accurate version with remaining uncertainties clearly marked.
Stage 5: RAG integration with hybrid storage
Don't just throw everything into a vector DB. Use hybrid architecture:
Vector store: semantic similarity search for queries like "what's covered for surgical procedures?"
Graph database: relationship traversal for "show all policies for vehicles owned by Ahmad Ali"
Structured tables: preserved for numerical queries and aggregations
Linguistic chunking that respects Arabic phrase boundaries. A coverage clause with its exclusion must stay together - splitting it destroys meaning. Each chunk embedded with context (source table, section header, policy type).
Confidence-weighted retrieval:
High confidence: "Your coverage limit is 500,000 SAR"
Low confidence: "Appears to be 500,000 SAR - recommend verifying with your policy"
Very low: "Don't have clear info on this - let me help you locate it"
This prevents confidently stating wrong information, which matters a lot when errors have legal/financial consequences.
A few advices for testing this properly:
Don't just test on clean, professionally-typed documents. That's not production. Test on:
Mixed Arabic/English in same document
Poor quality scans or phone photos
Handwritten Arabic sections
Tables with mixed-language headers
Regional dialect variations
Test with questions that require connecting info across multiple sections, understanding how they interact. If it can't do this, it's just translation with fancy branding.
Wrote this up in way more detail in an article if anyone wants it(shameless plug, link in comments).
But genuinely hope this helps someone. Arabic document extraction is hard and most resources handwave the actual problems. | 2026-01-09T13:23:59 | https://www.reddit.com/r/LocalLLaMA/comments/1q8853g/ive_seen_way_too_many_people_struggling_with/ | MiserableBug140 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q8853g | false | null | t3_1q8853g | /r/LocalLLaMA/comments/1q8853g/ive_seen_way_too_many_people_struggling_with/ | false | false | self | 19 | null |
A practical 2026 roadmap for modern AI search & RAG systems | 6 | I kept seeing RAG tutorials that stop at “vector DB + prompt” and break down in real systems.
I put together a roadmap that reflects how modern AI search actually works:
– semantic + hybrid retrieval (sparse + dense)
– explicit reranking layers
– query understanding & intent
– agentic RAG (query decomposition, multi-hop)
– data freshness & lifecycle
– grounding / hallucination control
– evaluation beyond “does it sound right”
– production concerns: latency, cost, access control
The focus is system design, not frameworks. Language-agnostic by default (Python just as a reference when needed).
Roadmap image + interactive version here:
[https://nemorize.com/roadmaps/2026-modern-ai-search-rag-roadmap]()
Curious what people here think is still missing or overkill. | 2026-01-09T13:06:51 | https://www.reddit.com/r/LocalLLaMA/comments/1q87rs6/a_practical_2026_roadmap_for_modern_ai_search_rag/ | ReverseBlade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q87rs6 | false | null | t3_1q87rs6 | /r/LocalLLaMA/comments/1q87rs6/a_practical_2026_roadmap_for_modern_ai_search_rag/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'QABS3yJC0X2Ru8wg8nw0_8nLlOTNQRJcqNPuCfynfUI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/QABS3yJC0X2Ru8wg8nw0_8nLlOTNQRJcqNPuCfynfUI.png?width=108&crop=smart&auto=webp&s=414162e91fffeb7cdf977a92858d924d92810a63', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/QABS3yJC0X2Ru8wg8nw0_8nLlOTNQRJcqNPuCfynfUI.png?width=216&crop=smart&auto=webp&s=eeb3af71f2d49a313a007dfb6c4225c6b690a6e6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/QABS3yJC0X2Ru8wg8nw0_8nLlOTNQRJcqNPuCfynfUI.png?width=320&crop=smart&auto=webp&s=b4cf52d319ce1f388e64590092998521633696b1', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/QABS3yJC0X2Ru8wg8nw0_8nLlOTNQRJcqNPuCfynfUI.png?width=640&crop=smart&auto=webp&s=39f080b02c0a635898d785deea4a017854932543', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/QABS3yJC0X2Ru8wg8nw0_8nLlOTNQRJcqNPuCfynfUI.png?width=960&crop=smart&auto=webp&s=3249b975bebaf07a1fab8b43636c4dfea055c9ca', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/QABS3yJC0X2Ru8wg8nw0_8nLlOTNQRJcqNPuCfynfUI.png?auto=webp&s=f3142cb976feb5f04ee7c39c728786767d3f85d7', 'width': 1024}, 'variants': {}}]} |
19 Hour Free YouTube Course on Building Your Own AI Coding Agent From Scratch! | 8 | 2026-01-09T12:53:25 | https://youtu.be/3GjE_YAs03s?si=L_5Y-ui6Ak6OY3qr | OSetups | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1q87hbs | false | {'oembed': {'author_name': 'Rivaan Ranawat', 'author_url': 'https://www.youtube.com/@RivaanRanawat', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/3GjE_YAs03s?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build Your Own Claude Code From Scratch"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/3GjE_YAs03s/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build Your Own Claude Code From Scratch', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1q87hbs | /r/LocalLLaMA/comments/1q87hbs/19_hour_free_youtube_course_on_building_your_own/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': 'wS8sauqZ1jiV_2Ah2-NXqVc19TG0tMdfdWnIiS_AWfE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wS8sauqZ1jiV_2Ah2-NXqVc19TG0tMdfdWnIiS_AWfE.jpeg?width=108&crop=smart&auto=webp&s=e0ef4b2324e377706c6025efc2ca7a870c2d24e6', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wS8sauqZ1jiV_2Ah2-NXqVc19TG0tMdfdWnIiS_AWfE.jpeg?width=216&crop=smart&auto=webp&s=d6bce3f8970c8aeba72b4c35ffd04fa6677da95f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wS8sauqZ1jiV_2Ah2-NXqVc19TG0tMdfdWnIiS_AWfE.jpeg?width=320&crop=smart&auto=webp&s=a73931a823488021e8239df48f8a1295dd8f6879', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wS8sauqZ1jiV_2Ah2-NXqVc19TG0tMdfdWnIiS_AWfE.jpeg?auto=webp&s=965ac515bda7d995a8363300a654527a8e72c04d', 'width': 480}, 'variants': {}}]} | |
PCIe AI accelerator card. Powered by 4 quad-core Metis AIPUs | Axelera AI Store | 2 | Anyone heard of this before? | 2026-01-09T12:47:47 | https://store.axelera.ai/products/pcie-ai-accelerator-card-powered-by-4-metis-aipu | megadonkeyx | store.axelera.ai | 1970-01-01T00:00:00 | 0 | {} | 1q87d7f | false | null | t3_1q87d7f | /r/LocalLLaMA/comments/1q87d7f/pcie_ai_accelerator_card_powered_by_4_quadcore/ | false | false | default | 2 | null |
Ways to identify page category | 1 | Hi
I'm working on backend project with django and playwright to open pages, after loading the page, I need to identify what type of site it is (hotel , clothing brand, ads/landing page...). I tried Llama 3 on CPU locally and it gives good results(extracting body content), but slow and I'm unsure about deployment. (please if someone has experience with free llms deployment)
and if using LLM is a good idea for this, or are there better approaches?
(I tried adding a table that has specific keywords to look into that verifies the page, but I want to improve this method)
(also I looked into confidence and score based techniques but it will complicate it)
thank you! | 2026-01-09T12:35:09 | https://www.reddit.com/r/LocalLLaMA/comments/1q874bo/ways_to_identify_page_category/ | Ok_Jury_9060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q874bo | false | null | t3_1q874bo | /r/LocalLLaMA/comments/1q874bo/ways_to_identify_page_category/ | false | false | self | 1 | null |
Why is “AI memory” still all hype? Where are the verifiable benchmarks + real-world comparison videos? | 11 | I have been looking into a bunch of AI memory tools and these are the primary ones I found:
* Supermemory (supermemory.ai)
* mem0 (mem0.ai)
* Backboard (backboard.io)
* Zep (incl. Graphiti/knowledge-graph style)
* Letta (letta.com)
* EverMind / EverMemOS (evermind.ai; still not released publicly)
* Papr (papr.ai)
* MemoryPlugin (memoryplugin.com)
* Memvid (memvid.com)
* Memara (memara.io)
* CORE (getcore.me)
Almost all of them market "better memory," "less context bloat," "agent-grade recall," "graph memory," "stateful system," etc., but rarely publish fully verifiable comparisons that an end user can trust enough to actually pay for the service.
I am not sure why none of them are willing to upload even a single video showing side-by-side tests against competitors with the same prompts, same setup, and raw outputs. I am sure it wouldn't take more than a day to do this (if you guys aren't so busy developing your product 24/7).
Instead, we just get:
* Screenshots of cherry picked demos
* “Trust us bro” claims and "competitor bashing" Twitter threads
* Vague “graph memory” talk without showing how it behaves under messy, real data
As a user, I don’t care if it’s vectors, graphs, triplets, hybrid, or whatever. I care if it:
1. Actually remembers across sessions reliably.
2. Doesn’t explode my context window (I am already frustrated with Claude's message limits!).
3. Retrieves the right fact at the right time.
4. Handles updates cleanly (no duplicate/conflicting junk).
5. Allows me to have a level of control over memory (not just dumping everything and getting back every related item-that's a smart clipboard, not memory!).
Only a few of these tools even ship useful extensions or MCP integrations that make them usable day-to-day. Right now, I feel like I’m buying into marketing and praying.
At the end of the day, all these Twitter wars (yes, the recent "war" between the 3 in my list) and the lack of transparency just seem like a cash grab from devs/users who want to use external memory tools. It feels like they are trying to cash out before a big player like OpenAI, Anthropic or Google releases their own version of external memory or cross platform memory integration system and makes these guys obsolete.
This AI memory and context hype cycle (which started in late 2025) reminds me of the AI image generation hype cycle of 2024-2025, which ended the moment Google released Nano Banana Pro. Now, no one even cares about which image gen model is being used since the big players offer plenty of free usage that covers most needs.
Anyway, did any of you Redditors actually try these tools and have a good experience? Are you using them to build apps, or as a consumer product via MCP/Web UI? Did you find any good ones to try as an end user? | 2026-01-09T12:20:09 | https://www.reddit.com/r/LocalLLaMA/comments/1q86tz8/why_is_ai_memory_still_all_hype_where_are_the/ | ReikenRa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q86tz8 | false | null | t3_1q86tz8 | /r/LocalLLaMA/comments/1q86tz8/why_is_ai_memory_still_all_hype_where_are_the/ | false | false | self | 11 | null |
Advice on a GPU server under $150k | 1 | An organization I've been working with has been awarded a \~$150k grant to buy a new GPU server. The main use will be some kind of applied research on real-world use cases of the most capable open source models. Most of the time it'll be used for inference with ocassional fine-tunings. Here'a a breakdown of the wishlist for this server:
* Good hardware support for most numerical/precision formats (FP4, FP8, ...)
* As much VRAM as possible
* (Pretty) good GPU interconnection
* Good inference speed (both at prompt-processing and generation)
* Future GPU expansion if additional funds available down the road
* Decent CPU and memory architecture in case mixed CPU+GPU inference is needed
The ultimate goal is being able to test the latest SOTA models at decent speeds (>20-30 tok/s) for as much time as possible. With our current hardware we are limited to mid-size models.
What combo of CPU-RAM-GPU-VRAM would you guys suggest under that budget? Keep in mind it must be brand new, we're not allowed to get used equipment.
Thanks in advance!!! | 2026-01-09T12:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/1q86fys/advice_on_a_gpu_server_under_150k/ | kantydir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q86fys | false | null | t3_1q86fys | /r/LocalLLaMA/comments/1q86fys/advice_on_a_gpu_server_under_150k/ | false | false | self | 1 | null |
Looking for anonymized blood test reports | 2 | Hey, so I am a computer science major and currently working on a healthcare related LLM-based system which can interpret medical reports.
As the title says, I am looking for datasets that contains blood test reports (CBC, lipid profile, LPD, etc.). It would be really great if anyone can provide a link to some public datasets or guidance on any open-source datasets that I might have missed. | 2026-01-09T11:43:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q864t9/looking_for_anonymized_blood_test_reports/ | ayuzzzi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q864t9 | false | null | t3_1q864t9 | /r/LocalLLaMA/comments/1q864t9/looking_for_anonymized_blood_test_reports/ | false | false | self | 2 | null |
I wrote a bare-metal Llama 2 inference engine in pure C++20 (No Torch, No GGML) to study the 'Memory Wall' on ARM64. | 15 | 2026-01-09T11:37:32 | https://github.com/farukalpay/stories100m | Scary_Panic3165 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q860r2 | false | null | t3_1q860r2 | /r/LocalLLaMA/comments/1q860r2/i_wrote_a_baremetal_llama_2_inference_engine_in/ | false | false | default | 15 | {'enabled': False, 'images': [{'id': 'nFIK1RJFcsg_H5_wdE3jN74nhXmxmWmCfMwKCyXT4NY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nFIK1RJFcsg_H5_wdE3jN74nhXmxmWmCfMwKCyXT4NY.png?width=108&crop=smart&auto=webp&s=894dd2dc69266e2315254c0f2a815fbedb9df9ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nFIK1RJFcsg_H5_wdE3jN74nhXmxmWmCfMwKCyXT4NY.png?width=216&crop=smart&auto=webp&s=9e5a558fc7396513a219907dea9f07fd399019d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nFIK1RJFcsg_H5_wdE3jN74nhXmxmWmCfMwKCyXT4NY.png?width=320&crop=smart&auto=webp&s=5552cfae2e20d45e21a1bd6d746254db5d6b7036', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nFIK1RJFcsg_H5_wdE3jN74nhXmxmWmCfMwKCyXT4NY.png?width=640&crop=smart&auto=webp&s=49b6507585e4e46af4896d1be97ca14f3027303d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nFIK1RJFcsg_H5_wdE3jN74nhXmxmWmCfMwKCyXT4NY.png?width=960&crop=smart&auto=webp&s=2db404cd33be2d0f6f2791a85d9a6a046d87560d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nFIK1RJFcsg_H5_wdE3jN74nhXmxmWmCfMwKCyXT4NY.png?width=1080&crop=smart&auto=webp&s=8a8bc74cd9e26c37e3835fc8610753468ba6dd59', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nFIK1RJFcsg_H5_wdE3jN74nhXmxmWmCfMwKCyXT4NY.png?auto=webp&s=c07f512da227d5325b49ee6b15e8af7a6a228939', 'width': 1200}, 'variants': {}}]} | |
Call recording summarization at scale: Commercial STT + small fine-tuned LLM vs direct audio→summary multimodal? | 0 | Hey folks — looking for suggestions / war stories from anyone doing call recording summarization at production scale.
**Context**
* We summarize customer support call recordings (audio) into structured summaries.
* **Languages:** Hindi, English, Bengali, Tamil, Marathi (often mixed); basically indic languages.
* **Call recording duration (P90)** : 10 mins
* **Scale:** \~**2–3 lakh calls/day**.
**Option 1: Commercial STT → fine-tuned small LLM (Llama 8B / Gemma-class)**
* Pipeline: audio → 3rd party STT → fine-tuned LLM summarization
* This is what we do today and we’re getting \~**90% summary accuracy** (as per our internal eval).
* Important detail: **We don’t need the transcript as an artifact** (no downstream use), so it’s okay if we *don’t* generate/store an intermediate transcript.
**Option 2: Direct audio → summary using a multimodal model**
* Pipeline: audio → multimodal model (e.g., Phi-4 class) → summary
* No intermediate transcript, potentially simpler system / less latency / fewer moving parts.
**What I’m trying to decide** :
For multi-lingual Indian languages , does direct audio→summary actually works? Given Phi-4B is the only multimodal which is available for long recordings as input and also have commercial license.
**Note**: Other models like llama, nvidia, qwen multimodal either don't have commercial license, or they don't support audio more than of few seconds. So phi 4B is the only reliable choice I can see so far.
Thanks! | 2026-01-09T11:36:18 | https://www.reddit.com/r/LocalLLaMA/comments/1q85zy2/call_recording_summarization_at_scale_commercial/ | Ok-Rooster-8120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q85zy2 | false | null | t3_1q85zy2 | /r/LocalLLaMA/comments/1q85zy2/call_recording_summarization_at_scale_commercial/ | false | false | self | 0 | null |
5 Healthy Plants Every Professional Must Have In Their Houses | 1 | [removed] | 2026-01-09T11:02:59 | https://newsaffairng.com/2024/08/11/5-healthy-plants-every-professional-must-have-in-their-houses/ | Drilbowling | newsaffairng.com | 1970-01-01T00:00:00 | 0 | {} | 1q85f3q | false | null | t3_1q85f3q | /r/LocalLLaMA/comments/1q85f3q/5_healthy_plants_every_professional_must_have_in/ | false | false | default | 1 | null |
Big tech companies, now "DRAM beggars," are staying in Pangyo and Pyeongtaek, demanding "give us some supplies." | 289 | Not a Korean speaker. Came across this in another sub. The TLDR is that everyone is scrambling to buy as much as they can as soon as they can, because "demanding a 50-60% increase in server DRAM supply prices from the previous quarter during their first-quarter negotiations with customers".
Per the article, DDR4 prices went up from $1.40 last January to $9.30 in December (my interpretation is $/GB). If they're increasing by another 50%, that's almost $14/GB!!! So, 1TB of DDR4-3200 will cost north of $14k by Q2 if this is true 🤯
In case anyone thought things weren't already bad, it's going to get much much worse this year.
Here's the full Google translate of the article:
DRAM, a type of memory semiconductor, was the key driver behind Samsung Electronics' first-quarter operating profit surpassing 20 trillion won. DRAM products, including high-bandwidth memory (HBM), are a core component of the computing infrastructure supporting the artificial intelligence (AI) era. The semiconductor industry predicts that the DRAM shortage, which began in earnest in the second half of last year, will continue until the end of this year, with prices also expected to continue rising.
Samsung Electronics and SK Hynix, major suppliers of DRAM, are reportedly demanding a 50-60% increase in server DRAM supply prices from the previous quarter during their first-quarter negotiations with customers. A semiconductor industry insider reported, "Even with significantly higher prices, the prevailing sentiment is 'let's buy as much as we can before it gets more expensive.'" Recently, semiconductor purchasing managers from Silicon Valley tech companies, nicknamed "DRAM Beggars," have been reportedly competing fiercely to secure remaining DRAM inventory at hotels in the Pangyo and Pyeongtaek areas.
The semiconductor industry analyzes that "the demand that was initially focused on HBM in the early days of the AI craze is now spreading to server DRAM, creating an unprecedented semiconductor boom." DRAM is a semiconductor that manages a computer's "short-term memory." It stores and quickly transmits necessary data when the central processing unit (CPU), the brain, performs tasks. HBM is specialized for seamlessly delivering the massive data required for AI by increasing the data transmission path (bandwidth) dozens of times compared to conventional DRAM. However, HBM is extremely expensive and has limitations in increasing capacity. This explains why big tech companies are scrambling to secure server DRAM products to store more data.
The average contract price of DRAM soared from $1.40 (based on 8GB DDR4) in January last year to $9.30 in December. This marks the first time in seven years and four months that DRAM prices have surpassed the $9 threshold. Kim Dong-won, head of the research center at KB Securities, said, "Due to this price increase, the operating profit margin (the ratio of operating profit to sales) of some general-purpose memories (widely used standard memories) is expected to reach 70%, and DDR5 may even surpass the margin of HBM3E. This year, semiconductor companies' performance is expected to be determined by general-purpose memories."
| 2026-01-09T10:28:56 | https://www.chosun.com/economy/tech_it/2026/01/09/MZNIFPCMTZGHHPV5757NJC5QW4/ | FullstackSensei | chosun.com | 1970-01-01T00:00:00 | 0 | {} | 1q84u82 | false | null | t3_1q84u82 | /r/LocalLLaMA/comments/1q84u82/big_tech_companies_now_dram_beggars_are_staying/ | false | false | default | 289 | {'enabled': False, 'images': [{'id': 'bSPrxxtNL1oMDlmeG0HktX0ZjOAtRM_In15JbYuAojA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bSPrxxtNL1oMDlmeG0HktX0ZjOAtRM_In15JbYuAojA.jpeg?width=108&crop=smart&auto=webp&s=c95f8794d4637b9fe7de4465c12ca3e09ccf81e4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bSPrxxtNL1oMDlmeG0HktX0ZjOAtRM_In15JbYuAojA.jpeg?width=216&crop=smart&auto=webp&s=9604a7ab3d8367311b435c662de2460d9313a544', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bSPrxxtNL1oMDlmeG0HktX0ZjOAtRM_In15JbYuAojA.jpeg?width=320&crop=smart&auto=webp&s=d0b5e5b451299c7c5ca9054b27e4a46ff091ade1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bSPrxxtNL1oMDlmeG0HktX0ZjOAtRM_In15JbYuAojA.jpeg?width=640&crop=smart&auto=webp&s=81d28b6e2751adbaced23bb325679e9970b53426', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bSPrxxtNL1oMDlmeG0HktX0ZjOAtRM_In15JbYuAojA.jpeg?width=960&crop=smart&auto=webp&s=61a91083a56c927d77953a2a764d918b29fbf682', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bSPrxxtNL1oMDlmeG0HktX0ZjOAtRM_In15JbYuAojA.jpeg?width=1080&crop=smart&auto=webp&s=a289fb8ac4c9a8fb8bdb188d906de114f48dcb19', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/bSPrxxtNL1oMDlmeG0HktX0ZjOAtRM_In15JbYuAojA.jpeg?auto=webp&s=f1147551c8545894f47f52e741202e9400a32944', 'width': 1200}, 'variants': {}}]} |
What is the most powerful local llm for me | 0 | Use case - Reasoning and tool calling.
I want to integrate in my app so the llm can call api , run select sql queries.
Hardware - i3 8th gen U series
Intel uhd 620
8 gigs of ram
I know my hardware is low but i want the llm to run locally and test to show it as a idea for larger software.
I am looking for 2-4B parameter model.
I also already tried gemma 4b model but it ran too slow to be considered in my case.
Gemma 1b works fine but it cannot make good reports and wriite broken queries.
| 2026-01-09T10:22:13 | https://www.reddit.com/r/LocalLLaMA/comments/1q84qb1/what_is_the_most_powerful_local_llm_for_me/ | Available_Canary_517 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q84qb1 | false | null | t3_1q84qb1 | /r/LocalLLaMA/comments/1q84qb1/what_is_the_most_powerful_local_llm_for_me/ | false | false | self | 0 | null |
LGAI-EXAONE/K-EXAONE-236B-A23B-GGUF · Hugging Face | 0 |
Key Features
* **Architecture & Efficiency:** Features a 236B fine-grained MoE design (23B active) optimized with **Multi-Token Prediction (MTP)**, enabling self-speculative decoding that boosts inference throughput by approximately 1.5x.
* **Long-Context Capabilities:** Natively supports a **256K context window**, utilizing a **3:1 hybrid attention** scheme with a **128-token sliding window** to significantly minimize memory usage during long-document processing.
* **Multilingual Support:** Covers 6 languages: Korean, English, Spanish, German, Japanese, and Vietnamese. Features a redesigned **150k vocabulary** with **SuperBPE**, improving token efficiency by \~30%.
* **Agentic Capabilities:** Demonstrates superior tool-use and search capabilities via **multi-agent strategies.**
* **Safety & Ethics:** Aligned with **universal human values**, the model uniquely incorporates **Korean cultural and historical contexts** to address regional sensitivities often overlooked by other models. It demonstrates high reliability across diverse risk categories.
* Number of Parameters: 236B in total and 23B activated
* Number of Parameters (without embeddings): 234B
* Hidden Dimension: 6,144
* Number of Layers: 48 Main layers + 1 MTP layers
* Hybrid Attention Pattern: 12 x (3 Sliding window attention + 1 Global attention)
* Sliding Window Attention
* Number of Attention Heads: 64 Q-heads and 8 KV-heads
* Head Dimension: 128 for both Q/KV
* Sliding Window Size: 128
* Global Attention
* Number of Attention Heads: 64 Q-heads and 8 KV-heads
* Head Dimension: 128 for both Q/KV
* No Rotary Positional Embedding Used (NoPE)
* Mixture of Experts:
* Number of Experts: 128
* Number of Activated Experts: 8
* Number of Shared Experts: 1
* MoE Intermediate Size: 2,048
* Vocab Size: 153,600
* Context Length: 262,144 tokens
* Knowledge Cutoff: Dec 2024 (2024/12)
* Quantization: `Q8_0`, `Q6_K`, `Q5_K_M`, `Q4_K_M`, `IQ4_XS` in GGUF format (also includes BF16 weights)
this still need to be merged [https://github.com/ggml-org/llama.cpp/pull/18543](https://github.com/ggml-org/llama.cpp/pull/18543)
| 2026-01-09T10:18:55 | https://huggingface.co/LGAI-EXAONE/K-EXAONE-236B-A23B-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q84odf | false | null | t3_1q84odf | /r/LocalLLaMA/comments/1q84odf/lgaiexaonekexaone236ba23bgguf_hugging_face/ | false | false | default | 0 | null |
Is it just me or has CES really not delivered anything exciting for local LLM setups? | 38 | CES this year has been strangely quiet imho. There's no big banger announcement. There's Phison with their AiDaptiv+ solution that supposedly extends VRAM to some SSD setup, but that's been talked about at Computex already and if I'm not mistaken a year ago, but nothing about availability. What do you think is the reason for this being so quiet? | 2026-01-09T10:10:02 | https://www.reddit.com/r/LocalLLaMA/comments/1q84j0r/is_it_just_me_or_has_ces_really_not_delivered/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q84j0r | false | null | t3_1q84j0r | /r/LocalLLaMA/comments/1q84j0r/is_it_just_me_or_has_ces_really_not_delivered/ | false | false | self | 38 | null |
Create specialized Ollama models in 30 seconds | 0 | I just released a new update for OllaMan(Ollama Manager), and it includes a Model Factory to make local agent creation effortless.
Pick a base model (Llama 3, Mistral, etc.).
Set your System Prompt (or use one of the built-in presets).
Tweak Parameters visually (Temp, TopP, TopK).
Click Create.
Boom. You have a custom, specialized model ready to use throughout the app (and via the Ollama CLI).
It's Free and runs locally on your machine.
| 2026-01-09T09:42:06 | https://v.redd.it/dg58fp45oacg1 | ComfyTightwad | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q842vf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dg58fp45oacg1/DASHPlaylist.mpd?a=1770543741%2CZTBlZGU2OGE4NzE1ZGU0MGZmM2MxYWNlM2U5YzczNThhN2IxYTQyOTA3OGVhMDNhMGZhZWFjMzk4NGM5ODQ2Yw%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/dg58fp45oacg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/dg58fp45oacg1/HLSPlaylist.m3u8?a=1770543741%2CY2E4MmU1NDRmYWFjMDVkZGZhYzg2Njk2ZTY1MjhmNDAxMGZjMzgwMmRjMDcxNjhhMmVmMTRiMjdkMzYxYzBiYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dg58fp45oacg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1712}} | t3_1q842vf | /r/LocalLLaMA/comments/1q842vf/create_specialized_ollama_models_in_30_seconds/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cmNnNXo3NTVvYWNnMS7UCVRbAZLkQIWiFFfUkEkFXIGpC2B-Hyedu35-gMRt', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/cmNnNXo3NTVvYWNnMS7UCVRbAZLkQIWiFFfUkEkFXIGpC2B-Hyedu35-gMRt.png?width=108&crop=smart&format=pjpg&auto=webp&s=27a81148997446812db07df30380c011bdc0c89c', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/cmNnNXo3NTVvYWNnMS7UCVRbAZLkQIWiFFfUkEkFXIGpC2B-Hyedu35-gMRt.png?width=216&crop=smart&format=pjpg&auto=webp&s=a9b984037cf54107d3d6e78ab3470df98d9ee079', 'width': 216}, {'height': 201, 'url': 'https://external-preview.redd.it/cmNnNXo3NTVvYWNnMS7UCVRbAZLkQIWiFFfUkEkFXIGpC2B-Hyedu35-gMRt.png?width=320&crop=smart&format=pjpg&auto=webp&s=70fbb7c1263e6a9b772b97c4de4a4cd26812baba', 'width': 320}, {'height': 403, 'url': 'https://external-preview.redd.it/cmNnNXo3NTVvYWNnMS7UCVRbAZLkQIWiFFfUkEkFXIGpC2B-Hyedu35-gMRt.png?width=640&crop=smart&format=pjpg&auto=webp&s=5625a86815ebd3fb4d7835dba132ee1236d3012e', 'width': 640}, {'height': 605, 'url': 'https://external-preview.redd.it/cmNnNXo3NTVvYWNnMS7UCVRbAZLkQIWiFFfUkEkFXIGpC2B-Hyedu35-gMRt.png?width=960&crop=smart&format=pjpg&auto=webp&s=2f49c5b987ebdaef7a63bc09f1fb5e04753c70e6', 'width': 960}, {'height': 681, 'url': 'https://external-preview.redd.it/cmNnNXo3NTVvYWNnMS7UCVRbAZLkQIWiFFfUkEkFXIGpC2B-Hyedu35-gMRt.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f9098f47c5c12f5054e420dfe168479d4d23c7bc', 'width': 1080}], 'source': {'height': 1680, 'url': 'https://external-preview.redd.it/cmNnNXo3NTVvYWNnMS7UCVRbAZLkQIWiFFfUkEkFXIGpC2B-Hyedu35-gMRt.png?format=pjpg&auto=webp&s=fd3306bec86034dfa96b21a21205acb9c7270ef6', 'width': 2664}, 'variants': {}}]} | |
kimi k3 model coming with 500m funding. anyone tested k2 thinking mode for coding? | 18 | moonshot (kimi) just closed 500m series c. idg led, alibaba and tencent followed. funding going to k3 model development and compute expansion.
k2 thinking mode already out. scored decent on benchmarks but curious about real world performance for coding tasks.
been testing k2 through verdent for a few weeks. the thinking mode is interesting , takes longer but sometimes catches edge cases better. had it trace through a race condition in async code that other models missed. not sure if thats consistent or just got lucky.
the approach feels similar to deepseek r1 reasoning but less verbose. doesnt show full chain of thought, just gives you the result after "thinking".
api access has been inconsistent tho. sometimes fast responses, sometimes timeouts. not sure if thats capacity issues or just growing pains. verdent lets me switch between models easily so when kimi times out i just fall back to claude, but would prefer more stability.
compared to other chinese models (deepseek, glm, minimax), kimi seems more focused on reasoning over raw speed. wondering if k3 will push that further or try to balance both.
the 500m raise is interesting timing. glm just dropped GLM4.7, minimax has m2.1 out. feels like chinese ai companies are in a different funding cycle than western ones , massive war chests, less pressure to monetize immediately.
also curious if anyone knows technical details about k3. havent seen much beyond "better reasoning" in the announcements. | 2026-01-09T09:42:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q842tz/kimi_k3_model_coming_with_500m_funding_anyone/ | Jealous-Leek-5428 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q842tz | false | null | t3_1q842tz | /r/LocalLLaMA/comments/1q842tz/kimi_k3_model_coming_with_500m_funding_anyone/ | false | false | self | 18 | null |
Dnhkng GLaDOS Plug-ins? Help! | 0 | Hey everyone. I'm pretty new to this whole world of locally hosted LLM's. I have established llama 3.1 8B, and dnhkng's AMAZING glados TTS system.
Following natural local LLM progression of all nerds, I want to integrate it into a smart home system.
My question is: is it possible to somehow have my llama 3.1 8b tell me accurate weather/basic internet searches through dnhkng's glados TTS system? Thanks in advance! | 2026-01-09T09:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1q840uv/dnhkng_glados_plugins_help/ | EducationalFee4876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q840uv | false | null | t3_1q840uv | /r/LocalLLaMA/comments/1q840uv/dnhkng_glados_plugins_help/ | false | false | self | 0 | null |
Quick questions for M3 Ultra mac studio holders with 256-612GB RAM | 2 | Hey everyone!
I'm thinking of buying a used or refurbished M3 Ultra (with 192GB unified memory) to run GLM 4.7 Q4. I need to handle about 1-2 concurrent requests.
Can anyone share their experience with this setup? What kind of output speed (tokens/s) should I expect? | 2026-01-09T09:12:13 | https://www.reddit.com/r/LocalLLaMA/comments/1q83ls8/quick_questions_for_m3_ultra_mac_studio_holders/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q83ls8 | false | null | t3_1q83ls8 | /r/LocalLLaMA/comments/1q83ls8/quick_questions_for_m3_ultra_mac_studio_holders/ | false | false | self | 2 | null |
How Do I Setup Local Qwen Image edit and Z Image etc Models I am having trouble setting up for my 12GB Vram RTX 4070 super | 1 | I am having hard time setting up GGUF's its my first time, and I am getting a lot of errors which lead to crash I am pretty sure its lack of vram and model mismatch. So any source or guides that could me figure it out. | 2026-01-09T08:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/1q838ss/how_do_i_setup_local_qwen_image_edit_and_z_image/ | Revenge8907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q838ss | false | null | t3_1q838ss | /r/LocalLLaMA/comments/1q838ss/how_do_i_setup_local_qwen_image_edit_and_z_image/ | false | false | self | 1 | null |
Minimax also live on Hong Kong Stock Exchange | 117 | 2026-01-09T08:33:27 | No_Conversation9561 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q82zdm | false | null | t3_1q82zdm | /r/LocalLLaMA/comments/1q82zdm/minimax_also_live_on_hong_kong_stock_exchange/ | false | false | default | 117 | {'enabled': True, 'images': [{'id': '999goi9xbacg1', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/999goi9xbacg1.jpeg?width=108&crop=smart&auto=webp&s=8277a69030c06978ad3a257033a567ae81b1d17f', 'width': 108}, {'height': 257, 'url': 'https://preview.redd.it/999goi9xbacg1.jpeg?width=216&crop=smart&auto=webp&s=0dc373fe561116aad4aa9c96986a25cdd9423680', 'width': 216}, {'height': 381, 'url': 'https://preview.redd.it/999goi9xbacg1.jpeg?width=320&crop=smart&auto=webp&s=ab83b76a1082dcbe7c1f3b0114d5aae6c915c626', 'width': 320}, {'height': 763, 'url': 'https://preview.redd.it/999goi9xbacg1.jpeg?width=640&crop=smart&auto=webp&s=5d20235c10219672401efcf3df3bcdf3da53b9a5', 'width': 640}, {'height': 1144, 'url': 'https://preview.redd.it/999goi9xbacg1.jpeg?width=960&crop=smart&auto=webp&s=02a8034efbf51b3fb83f4331464b892b6455453d', 'width': 960}, {'height': 1287, 'url': 'https://preview.redd.it/999goi9xbacg1.jpeg?width=1080&crop=smart&auto=webp&s=f77c7e2379ab677b3f7d71de0743a23f1e758899', 'width': 1080}], 'source': {'height': 1531, 'url': 'https://preview.redd.it/999goi9xbacg1.jpeg?auto=webp&s=d9f50b1d1f6d1f7f3b8451d72375a23eb4c4ef45', 'width': 1284}, 'variants': {}}]} | ||
Completely stumped with strange issue with my dual RTX 6000 Pro LLM server | 13 | This is really out there, and I've tried a lot and have yet to find a solution.
First off, my system.
Ryzen 5950X
32G DDR4
Asus Dark Hero
RTX 6000 Pro Workstation 600W
RTX 6000 pro Workstation 600W
Arch Linux
Here's where things gets weird, I've been running this system with zero problems for months. I usually run GLM Air or MiniMax M2 on it 24/7. I use sglang, and it just works. Never a hiccup.
I started to test some other models, which I started to use vLLM for. After 30 minutes to a couple hours, I lose connection to it on the lan. The gpus go blank and I can't see the error or anything through my IP KVM.
This happens any model I load with vLLM. I later figured out, it happens even if I just start the server and I don't load anything at all.
My first feeling was a power issue, I do power limit the gpus to 300W and it idles at around 124W. I have a 1200W PSU and the system never breaks 825W, but it always is happening when it is idle. I even removed the power limit to see if it was a power limit issue. I've used nvidia persistent mode to keep it out of p8 state to see if it was just getting too low clock and locking the gpu.
Things I tried:
* Removing 300W power limit
* Nvidia persistent mode
* Disabling pcie_aspm
* Setting processor max cstate to 1 and enabling idle=nomwait
* iommu=pt
* disabled sleep
* disabled virtualization
* nvidia locked clocks -lgc 300,1800
* latest nvidia drivers
* older nvidia drivers
I've tried everything I can think of, it's absolutely bizarre sglang will run for months with no issues, yet anything else just dies in a couple of hours.
I've left watch nvidia-smi running and when the system gets disconnected, I have confirmed it is in p5 state, so it have managed to keep it out of lower power states to eliminate any weird locking that might happen if the gpus power down.
When it happens, all my SSH sessions just show a disconnection. I can't ping the server, I can't see any output on the display port, and the system looks like it is running and takes normal power \~124w as if it is running but not actively doing anything. | 2026-01-09T08:28:02 | https://www.reddit.com/r/LocalLLaMA/comments/1q82wak/completely_stumped_with_strange_issue_with_my/ | itsjustmarky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q82wak | false | null | t3_1q82wak | /r/LocalLLaMA/comments/1q82wak/completely_stumped_with_strange_issue_with_my/ | false | false | self | 13 | null |
Designing an on-prem AI + vision + automation stack, looking for architecture advice... | 2 | Hey everyone,
I’m in the process of designing a **self-hosted, on-prem infrastructure** for a company and I want to inquire about the architecture before locking anything in.
Keep in mind while reading this I'm a 19 year old in school for business. I taught myself everything about this so i apologize if I say anything incorrrect or that doesnt make sense. And yes gpt helped me write this obviously, this is alot of writing...
**What I’m trying to run (all self-hosted, mostly open source):**
* **Frigate** for IP cameras + computer vision (event detection, progress tracking, safety, etc.)
* **n8n** for automation / workflows
* **Twenty CRM** as our core CRM (This needs to be built heavily to do what we need it to)
* **Local LLM inference** (internal assistants, summaries, event tracking, PMing)(We can spend some bank here, I want a decent system that I know can handle some serious stuff. Lets say 10k max but if you think a cheaper or more expensive option would work for me let me hear it!)
* **MCP servers** to expose internal info and tools to LLMs
* Some **light LLM / vision training for the frigate system** (this is the tricky part and i still haven't looked into it but im planning on training a model to analyze progress of the factory and report back to a tracking system, also point out inefficiencies, errors and workplace hazards)
**Current system:**
* ISP: **100 Mbps up / 100 Mbps down** unfortunately :( | im looking on getting direct fibre but its not available right now, maybe in the future
* Network: **UniFi UDM Pro + UniFi 500W 48-port PoE switch**
* Cameras will be PoE IP cameras, currently have hikvision cameras but also willing to spend money on camera that work better with the ai model training, all will be hard wired, cat5e, but if cat6 is needed let me know (I doubt it)
**What I’m unsure about / want feedback on:**
* Best overall **hardware strategy** (single or multiple systems? Which parts? Mac or Nvidia for Ai? the Gmtec or the Spark???? This stuff is really driving me nuts as new stuff keeps coming out and i cant get clear answers anywhere)
* **Docker vs Proxmox vs** what ever else??? ( Whats the best option, i was certain on docker but then chatgpt told me proxmox and something about Kubernetes so now im lost)
* How to best separate:
* Core business services (CRM, n8n, DBs)
* AI/LLM workloads
* Frigate/video workloads
* Storage layout for:
* Databases ( maybe a Ugreen nas or something better?)
* Video recordings ( Lets say 2 weeks of recording across 25 cameras? Im thinking 8-16TB?)
* AI datasets ( Still unsure which models will be run.)
**High-level goal:**
I want this to function like an internal “company operating system”:
* Reliable day-to-day helpers (CRM, automations, MPC servers and etc)
* Ai models that can be trained to learn how the factory and office is supposed to work and improve everything.
* No dependency on other companies paid softwares that leave no room for customizability or development
* If you were designing this today, **what would you do differently or watch out for?** Happy to provide more details if needed.
Thanks in advance, this has been really stressing me out. I've taken on too many tasks and now getting them all launched is killing me.
Please feel free to write as much as you can because i need to learn!!! | 2026-01-09T08:25:07 | https://www.reddit.com/r/LocalLLaMA/comments/1q82ulm/designing_an_onprem_ai_vision_automation_stack/ | Jefftoro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q82ulm | false | null | t3_1q82ulm | /r/LocalLLaMA/comments/1q82ulm/designing_an_onprem_ai_vision_automation_stack/ | false | false | self | 2 | null |
Best small model for PDF summarization | 0 | I normally use AI, to regurgitate large bodies of content, like pdfs, or book chapters, so that I can learn more quickly. However, ChatGPT and Claude rate limits are becoming a bottleneck, which models can I run locally on my M1 macbook air(8gb) to circumvent this?
My workflow:
PDF(or part of pdf)-> LLM-> "explain XYZ from this"
Can also be a RAG style workflow but I'm not sure which RAG setup is most effective for this. Any pointers? | 2026-01-09T08:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/1q82sms/best_small_model_for_pdf_summarization/ | Ok_Construction_3021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q82sms | false | null | t3_1q82sms | /r/LocalLLaMA/comments/1q82sms/best_small_model_for_pdf_summarization/ | false | false | self | 0 | null |
Show us your llama.cpp command line arguments | 41 | And mention your hardware.
Recently I switched to llama.cpp and I have to say the hardest part was to optimise the arguments. Please share yours and if you are running it within a service or just a script, share it as well. | 2026-01-09T08:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1q82l7m/show_us_your_llamacpp_command_line_arguments/ | __Maximum__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q82l7m | false | null | t3_1q82l7m | /r/LocalLLaMA/comments/1q82l7m/show_us_your_llamacpp_command_line_arguments/ | false | false | self | 41 | null |
Start of 2026 what’s the best open coding model? | 19 | I have been using Qwen Coder 480b at 4 bit, and it’s ok for a first draft, but once it’s wrong it fills my code base with junk very quickly. I am mainly Typescript, but other languages interesting - PHP, C#, Python Java.
I have no time for 30b models, they are brain dead compared to the bigger ones. I hear good things about Kimi K2, GLM 4.7 etc but working with a model takes time and lots of junk code.
Are any noticeably better than Qwen 480b? I have a 512Gb Mac Studio, so something that fits on that. Speed unimportant - I can always do something else. | 2026-01-09T07:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1q82ae8/start_of_2026_whats_the_best_open_coding_model/ | alexp702 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q82ae8 | false | null | t3_1q82ae8 | /r/LocalLLaMA/comments/1q82ae8/start_of_2026_whats_the_best_open_coding_model/ | false | false | self | 19 | null |
Anyone else feel LLM performance is more about workflow than the model itself? | 1 | [removed] | 2026-01-09T06:31:54 | https://www.reddit.com/r/LocalLLaMA/comments/1q80yth/anyone_else_feel_llm_performance_is_more_about/ | InternationalSkin698 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q80yth | false | null | t3_1q80yth | /r/LocalLLaMA/comments/1q80yth/anyone_else_feel_llm_performance_is_more_about/ | false | false | self | 1 | null |
TTS voice cloning + disentanglement (ala style transfer or accent transfer) | 0 | Can anyone give me a quick update on the state of style/accent transfer for voice cloning. e.g. take a recording of your own voice, and give it different properties (e.g. emotion, accents, characteristics)
Are there any open models capable of this yet? | 2026-01-09T06:28:50 | https://www.reddit.com/r/LocalLLaMA/comments/1q80wrv/tts_voice_cloning_disentanglement_ala_style/ | paswut | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q80wrv | false | null | t3_1q80wrv | /r/LocalLLaMA/comments/1q80wrv/tts_voice_cloning_disentanglement_ala_style/ | false | false | self | 0 | null |
I created a open source Chrome extension that gives AI assistants persistent memory | 0 | I built Vektori Memory because I was frustrated with repeating myself so often. Every time I started a new conversation with Claude/ChatGPT, I had to re explain my entire project context.
Vektori Memory runs in the background and maintains context across all your AI conversations. It captures key facts, decisions, and context automatically - so your AI assistant actually remembers your work.
\- Built as a Chrome extension
\- Works with Claude, ChatGPT and other AI assistants
\- Open source ([Vektori-Memory/vektori-extension: Never repeat yourself across AI :)](https://github.com/Vektori-Memory/vektori-extension))
GitHub: [Vektori-Memory/vektori-extension: Never repeat yourself across AI :)](https://github.com/Vektori-Memory/vektori-extension)
website: [vektori.cloud](http://vektori.cloud)
https://reddit.com/link/1q80fic/video/9dtmn0qjk9cg1/player
I want to specifically know does it solve any problem and provide value?
What features would make this more useful for your workflow? | 2026-01-09T06:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1q80fic/i_created_a_open_source_chrome_extension_that/ | Expert-Address-2918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q80fic | false | null | t3_1q80fic | /r/LocalLLaMA/comments/1q80fic/i_created_a_open_source_chrome_extension_that/ | false | false | self | 0 | null |
what communities can i join for real time chat about models, model performance, etc. | 3 | looking for like a highly active discord version of this sub. | 2026-01-09T06:01:55 | https://www.reddit.com/r/LocalLLaMA/comments/1q80fcn/what_communities_can_i_join_for_real_time_chat/ | throwawaycanc3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q80fcn | false | null | t3_1q80fcn | /r/LocalLLaMA/comments/1q80fcn/what_communities_can_i_join_for_real_time_chat/ | false | false | self | 3 | null |
Help with loading MiniMax M.2 MOE model with multiple GPUs. | 0 | I have an EVO-X2 with 128GB and an RTX 5090. I am trying to run the MiniMax 2.1 MXFP4 model which is 129GB using llama.cpp. I would like to load as many of the expert layers as possible on the RTX card and the rest on the EVO but I am struggle with the proper command. I don't quite understand the different way you can split a model or how to tell which layers to offload. Could someone give me guidance? Thanks. | 2026-01-09T05:58:41 | https://www.reddit.com/r/LocalLLaMA/comments/1q80d3r/help_with_loading_minimax_m2_moe_model_with/ | Optimal-Bass-5246 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q80d3r | false | null | t3_1q80d3r | /r/LocalLLaMA/comments/1q80d3r/help_with_loading_minimax_m2_moe_model_with/ | false | false | self | 0 | null |
Zero to daily income using AI videos | 0 | A few months ago, Arjun was stuck.
He wanted to make money online, but:
He didn’t want to show his face
He had no video editing skills
And hiring editors was too expensive
One day, he decided to try FlexoraAI.
Instead of spending hours editing, he used FlexoraAI to generate short AI videos around trending topics — motivation, business facts, and viral hooks. Each video took less than 5 minutes to create.
He posted them on:
YouTube Shorts
Instagram Reels
TikTok
At first, nothing happened.
But after consistently posting 2–3 videos per day, one video hit 120,000 views. That single video brought:
New followers
Affiliate sign-ups
Brand inquiries
Within 30 days:
His channel crossed 10,000 followers
He started earning daily through affiliate links and shoutouts
All without showing his face or editing manually
Today, Arjun runs multiple faceless pages powered entirely by FlexoraAI, turning AI-generated videos into a steady income stream.
His advice?
“Consistency + the right AI tool changes everything.”
Short CTA version (for reels / website)
“People are making money daily with faceless AI videos.
FlexoraAI helps you create them in minutes.
No editing. No camera. No experience.” | 2026-01-09T05:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/1q804zx/zero_to_daily_income_using_ai_videos/ | Simple_Hope_9685 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q804zx | false | null | t3_1q804zx | /r/LocalLLaMA/comments/1q804zx/zero_to_daily_income_using_ai_videos/ | false | false | self | 0 | null |
Introducing nanoRLHF project! | 23 | I would like to introduce nanoRLHF, a project I have been actively developing over the past three months.
[https://github.com/hyunwoongko/nanoRLHF](https://github.com/hyunwoongko/nanoRLHF)
nanoRLHF is a project that implements almost all core components of RLHF from scratch using only PyTorch and Triton. Each module is an educational reimplementation of large scale systems, prioritizing clarity and core ideas over efficiency. The project includes minimal Python implementations inspired by Apache Arrow, Ray, Megatron-LM, vLLM, and verl. It also contains several custom Triton kernels that I implemented directly, including Flash Attention.
In addition, it provides SFT and RL training pipelines that leverage open source math datasets to train a small Qwen3 model. By training a Qwen3 base model, I was able to achieve Math-500 performance comparable to the official Qwen3 Instruct model. I believe this can be excellent learning material for anyone who wants to understand how RL training frameworks like verl work internally. | 2026-01-09T05:42:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q80265/introducing_nanorlhf_project/ | hyunwoongko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q80265 | false | null | t3_1q80265 | /r/LocalLLaMA/comments/1q80265/introducing_nanorlhf_project/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'SNbGVvNpYtrt5ObRBtAHTxvJqpdkQGp6mDChdKG9Ssg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SNbGVvNpYtrt5ObRBtAHTxvJqpdkQGp6mDChdKG9Ssg.png?width=108&crop=smart&auto=webp&s=1c1a988257b764c2a220dfaa88ef642d90fda8a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SNbGVvNpYtrt5ObRBtAHTxvJqpdkQGp6mDChdKG9Ssg.png?width=216&crop=smart&auto=webp&s=054c73fe01716a1d4be97e0f168f09a9f13a0ca8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SNbGVvNpYtrt5ObRBtAHTxvJqpdkQGp6mDChdKG9Ssg.png?width=320&crop=smart&auto=webp&s=278d8e30bd042fc965852d0ab0f58374e5b19354', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SNbGVvNpYtrt5ObRBtAHTxvJqpdkQGp6mDChdKG9Ssg.png?width=640&crop=smart&auto=webp&s=b9ecb277e673f92bf004ea8e6494e05a31c43557', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SNbGVvNpYtrt5ObRBtAHTxvJqpdkQGp6mDChdKG9Ssg.png?width=960&crop=smart&auto=webp&s=36242bd3f9079e45a18c47d403103f91ad0bc5f3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SNbGVvNpYtrt5ObRBtAHTxvJqpdkQGp6mDChdKG9Ssg.png?width=1080&crop=smart&auto=webp&s=365da8ee749f9c34a3ee60fb4733764e8ae029ca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SNbGVvNpYtrt5ObRBtAHTxvJqpdkQGp6mDChdKG9Ssg.png?auto=webp&s=13dcbeb9da28786cd4a6f15e7dfd1267b88aaf0e', 'width': 1200}, 'variants': {}}]} |
Devstral Small 2 (Q4_K_M) on 5060 Ti 16GB and Zed Agent is amazing! | 65 | TL;DR: Here's my setup
- PC: RTX 5060 Ti 16GB, 32GB DDR5-6000 (just flexing, no RAM offloading needed here)
- [Devstral-Small-2-24B-Instruct-2512-GGUF](https://huggingface.co/lmstudio-community/Devstral-Small-2-24B-Instruct-2512-GGUF), Q4_K_M, 24k context length (the lmstudio-community version was slightly faster than the one from mistral)
- Zed editor (with Zed Agent)
- Performance: tg 9-11 tok/s, pp ~648tok/s
---
After many failed attempts (Qwen3 Coder 30B A3B was too big for a meaningful tg speed on my card, anything smaller than 14B was trash,...) I almost gave up on the dream of having a local AI coding setup.
Tonight, while scrolling through [swe-rebench](https://swe-rebench.com/), I noticed that Devstral Small 2 was actually ranked above Minimax M2, and just below Kimi K2 and Minimax M2.1, I decided to give it a try.
I was skeptical about a dense 24B model at first, but turned out, the key is to fit everything in the GPU's 16GB VRAM, so it won't offload anything to the RAM, maintaining a good tg speed. For my case, with a 24k context, that's about 15.2GB on the card.
The model works great in both Claude Code and Zed Editor, by great I mean the ability to produce a thinking, then chain of tool calls to explore the codebase, read multiple files, making edits, run commands to build/test.
I find that using Zed Agent was slightly faster than Claude Code because the system prompt was much shorter, so I still have plently of context window for the actual project's code.
For the code quality, it's a mix, I let it work on a few examples using my custom Rust framework.
For the first attempt, I tried with a very short instruction (just like what I usually do with... Opus 4.5), something like "build a multi agent example using this framework". Devstral generated the code but ran into some cloning issues, then it went on to modify the framework to make the code work (a classical LLM's hack).
When I retried with a more detailed instruction, including a clear plan and some reference code, the model was able to generate the code, run build commands to test, takes a few rounds and a few rewrites but in the end, it completed the task without me having to intervene or clarify anything else.
[screenshot](https://i.imgur.com/9wMI57W.png)
The performance was great too, prompt processing was around ~600-650 tok/s, token gen was around 9-11 tok/s, the GPU never ran above 45C, the fans weren't too loud. And I haven't run into looping issue like other posts in this sub mentioned.
So I guess I can postpone the plan to sell my kidney for a 2nd GPU or a Claude Max plan now. | 2026-01-09T05:37:33 | https://www.reddit.com/r/LocalLLaMA/comments/1q7zywf/devstral_small_2_q4_k_m_on_5060_ti_16gb_and_zed/ | bobaburger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7zywf | false | null | t3_1q7zywf | /r/LocalLLaMA/comments/1q7zywf/devstral_small_2_q4_k_m_on_5060_ti_16gb_and_zed/ | false | false | self | 65 | {'enabled': False, 'images': [{'id': 'q3KcSm3gUD2SWUzKpZcn0fQrApBGXL7RHGyMJLopazQ', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/q3KcSm3gUD2SWUzKpZcn0fQrApBGXL7RHGyMJLopazQ.png?width=108&crop=smart&auto=webp&s=e224426103ecf3becba1f88d63abbf6254d4c656', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/q3KcSm3gUD2SWUzKpZcn0fQrApBGXL7RHGyMJLopazQ.png?width=216&crop=smart&auto=webp&s=cde8e5592a4781d41a706bd497d9c06afd1ad4ae', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/q3KcSm3gUD2SWUzKpZcn0fQrApBGXL7RHGyMJLopazQ.png?width=320&crop=smart&auto=webp&s=adfe88899653912999a03796768f1c4be0539307', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/q3KcSm3gUD2SWUzKpZcn0fQrApBGXL7RHGyMJLopazQ.png?width=640&crop=smart&auto=webp&s=ddeb22fb9bb15b5e380fc910320910d08cadc10c', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/q3KcSm3gUD2SWUzKpZcn0fQrApBGXL7RHGyMJLopazQ.png?width=960&crop=smart&auto=webp&s=d252b2812a4883bb7007eb7be174b567153bd93d', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/q3KcSm3gUD2SWUzKpZcn0fQrApBGXL7RHGyMJLopazQ.png?width=1080&crop=smart&auto=webp&s=1e487c65cea5df2fd08ed6ccc4948eb70cb728d9', 'width': 1080}], 'source': {'height': 1475, 'url': 'https://external-preview.redd.it/q3KcSm3gUD2SWUzKpZcn0fQrApBGXL7RHGyMJLopazQ.png?auto=webp&s=ad2c26bcae4eeec271d592e97a15ba68cc2c7fda', 'width': 2603}, 'variants': {}}]} |
Just finished an RTX 5090 / 128GB RAM build. Want to stress test it. Send me your heaviest render/training tasks? | 0 | Hey everyone, finally got my 5090 rig up and running. I'm looking to put it through its paces and see what it can actually handle.
If anyone has a LoRA they need training or a ComfyUI workflow that’s taking forever on your current setup, I’d love to run a few for you to see the speed benchmarks.
I'm fairly new to the 'service' side of things, so I'd appreciate a bit of guidance on your specific settings. In exchange, I'll provide the high-res outputs/models for a coffee while I get my workflow sorted.
DM me if you have something heavy you want to throw at this thing! | 2026-01-09T05:12:30 | https://www.reddit.com/r/LocalLLaMA/comments/1q7zhkl/just_finished_an_rtx_5090_128gb_ram_build_want_to/ | RockGroundbreaking97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7zhkl | false | null | t3_1q7zhkl | /r/LocalLLaMA/comments/1q7zhkl/just_finished_an_rtx_5090_128gb_ram_build_want_to/ | false | false | self | 0 | null |
Jarvis-OS: Solving Agent "Amnesia" and "Gullibility" with a Persistent State and Intent Firewall (Ollama/Llama 3.1) | 0 | I built **Jarvis-OS** to solve two specific problems in local LLM assistants: **Statelessness** and **Tool-Use Vulnerability**.
# Features:
* **Forensic Intent Firewall (FPM):** Instead of blindly trusting the LLM, a weighted logic engine (`fpm_engine.py`) scores every intent *before* routing. It evaluates **Access** (system proximity), **Material Yield** (exfil risk), and **Dissonance** (contradiction of user state). High-risk intents are blocked until a manual override.
* **Persistent State Memory:** No "reboot amnesia." A structured `jarvis_state.json` tracks tasks, schedule, and history across sessions. It bootstraps the LLM with your "Ground Truth" data on every boot.
* **100% Local:** Runs on Ollama (Llama 3.1:8b). State and history stay on your hardware. Zero telemetry.
# Technical Specs:
* **Stack:** Python 3.10+, modular skill architecture, deterministic routing.
* **Notifications:** Background monitoring with `ntfy.sh` integration.
**Repo:** [https://github.com/dougy27/jarvis-os](https://github.com/dougy27/jarvis-os)
I'm looking for feedback on the forensic reasoning variables and if anyone can successfully prompt-inject past the logic gate. MIT Licensed. | 2026-01-09T04:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1q7z7ju/jarvisos_solving_agent_amnesia_and_gullibility/ | Dougy27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7z7ju | false | null | t3_1q7z7ju | /r/LocalLLaMA/comments/1q7z7ju/jarvisos_solving_agent_amnesia_and_gullibility/ | false | false | self | 0 | null |
Trying to make Jarvis from Iron Man real.... here is v1 | 1 | [removed] | 2026-01-09T04:56:45 | Dougy27 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7z5u2 | false | null | t3_1q7z5u2 | /r/LocalLLaMA/comments/1q7z5u2/trying_to_make_jarvis_from_iron_man_real_here_is/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'BfsWhcPs4QvfszKijskGA6WXxCldxzqA7QH8Bk466sA', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=108&crop=smart&format=png8&s=779f80007280b762d752715a75a26e36edf46774', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=216&crop=smart&format=png8&s=5ef025de0a1bbec636844e644f0ba168ec8eefa3', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=320&crop=smart&format=png8&s=65a9eb4540d979bbbee8302bae15dc70a5f0ac6a', 'width': 320}, {'height': 301, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=640&crop=smart&format=png8&s=25515a932742db9eaea3a37599878c0ba75d7096', 'width': 640}], 'source': {'height': 452, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?format=png8&s=43ac08ccf4e8b293be77e6bb7a2c00195c0cac07', 'width': 958}, 'variants': {'gif': {'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=108&crop=smart&s=7ffe0aa6a21d3879c907f225c28e903bb86913a8', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=216&crop=smart&s=45676c43ff481a8200f0e84e78411aaee877dbae', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=320&crop=smart&s=78d08173dd3b5a84c9a4c3e66e9522f672334553', 'width': 320}, {'height': 301, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=640&crop=smart&s=015765367ab8cb9afef9749c45ab6d75615ccec6', 'width': 640}], 'source': {'height': 452, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?s=583da4aae7db473bbdc4af29a7205b120813c0fb', 'width': 958}}, 'mp4': {'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=108&format=mp4&s=e328937b1c3bae24c2b37eb308b1ce726665f3aa', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=216&format=mp4&s=39a22ee238938a9474834e30aed99cd8751170fe', 'width': 216}, {'height': 150, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=320&format=mp4&s=e6322fd31249586bdb2a0a4b95dd33fc40641332', 'width': 320}, {'height': 301, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?width=640&format=mp4&s=90e0c6c4e51517e9683821e96ea871523a97e63d', 'width': 640}], 'source': {'height': 452, 'url': 'https://preview.redd.it/tfzmt97389cg1.gif?format=mp4&s=55d12dcd6fcd2a318e863eb2a174399db023cf9d', 'width': 958}}}}]} | ||
Sur5 Lite (MIT): plug-and-play offline AI local LLM USB workflow + Granite 4.0-h-1b (GGUF Q4_K_M) | 0 | Hey r/LocalLLaMA \- we just open-sourced **Sur5 Lite** under the **MIT License**.
**What it is:** a lightweight setup to run **offline local inference** via a USB distribution/use case. “Bring your own machine, keep your data local.”
**Model note:** recommended model is **IBM Granite 4.0-h-1b (Hybrid reasoning)**, **GGUF Q4\_K\_M** \- but it’s **not included in the repo** (901MB+).
Docs: `App/models/README.md` → place `.gguf` in `App/models/` → app auto-detects.
**Demo Video:** [**https://www.youtube.com/watch?v=9WCaAwjvbq0**](https://www.youtube.com/watch?v=9WCaAwjvbq0)
**Optional support:** [**https://www.indiegogo.com/en/projects/sur5ve/sur5-offline-ai-usb**](https://www.indiegogo.com/en/projects/sur5ve/sur5-offline-ai-usb)
Would love a technical gut-check:
* best prompt template defaults for Granite
* CPU-only tuning / runtime flags
* packaging/UX improvements for “portable local LLM” | 2026-01-09T04:44:05 | https://github.com/Sur5ve/Sur5-Lite | Sur5ve | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q7ywmx | false | null | t3_1q7ywmx | /r/LocalLLaMA/comments/1q7ywmx/sur5_lite_mit_plugandplay_offline_ai_local_llm/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'oJt-LItaFKOv8OR85FBULagXpxnGLb_ADtIc_Koxc1k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oJt-LItaFKOv8OR85FBULagXpxnGLb_ADtIc_Koxc1k.png?width=108&crop=smart&auto=webp&s=bd35aa704e9e90ab545b5fbd77557ff2a1be4cfc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oJt-LItaFKOv8OR85FBULagXpxnGLb_ADtIc_Koxc1k.png?width=216&crop=smart&auto=webp&s=71796345525404861e4ad0776a0988ac7852a115', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oJt-LItaFKOv8OR85FBULagXpxnGLb_ADtIc_Koxc1k.png?width=320&crop=smart&auto=webp&s=0e43e66ac90bcf05720ef3cc2c9039fcdd8da593', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oJt-LItaFKOv8OR85FBULagXpxnGLb_ADtIc_Koxc1k.png?width=640&crop=smart&auto=webp&s=ac5adb309a69250408a566e0b3ec96d7afbb71fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oJt-LItaFKOv8OR85FBULagXpxnGLb_ADtIc_Koxc1k.png?width=960&crop=smart&auto=webp&s=7a80b56b038997339df6f6bbdb9178b1a56f3caa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oJt-LItaFKOv8OR85FBULagXpxnGLb_ADtIc_Koxc1k.png?width=1080&crop=smart&auto=webp&s=3f68849d0c213f47cf4afb0eb0116b0900c8e6fe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oJt-LItaFKOv8OR85FBULagXpxnGLb_ADtIc_Koxc1k.png?auto=webp&s=aa1ad695bd8a85de34cde8824ad48b44b90a03bb', 'width': 1200}, 'variants': {}}]} |
We benchmarked every 4-bit quantization method in vLLM 👀 | 77 | We just published a deep dive on vLLM quantization. Tested AWQ, GPTQ, Marlin, GGUF, and BitsandBytes on Qwen2.5-32B using an H200.
Stuff we found:
* Marlin hits 712 tok/s, baseline FP16 does 461. Quantized and faster.
* GPTQ without Marlin kernel is actually slower than FP16 (276 tok/s)
* BitsandBytes had the smallest quality drop and doesn't need pre-quantized weights
* GGUF had the worst perplexity but best HumanEval score among quantized methods
* AWQ was weirdly slow in vLLM (67 tok/s)
Blog covers how each technique actually works under the hood if you want the details.
https://preview.redd.it/t4212ygj59cg1.png?width=3169&format=png&auto=webp&s=97eff0fcb212924355a7feb7262b25895de5603a
Blog: [https://docs.jarvislabs.ai/blog/vllm-quantization-complete-guide-benchmarks](https://docs.jarvislabs.ai/blog/vllm-quantization-complete-guide-benchmarks) | 2026-01-09T04:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q7ysj2/we_benchmarked_every_4bit_quantization_method_in/ | LayerHot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7ysj2 | false | null | t3_1q7ysj2 | /r/LocalLLaMA/comments/1q7ysj2/we_benchmarked_every_4bit_quantization_method_in/ | false | false | 77 | null | |
I built Plano - a framework-agnostic data plane with orchestration for agents | 1 | [removed] | 2026-01-09T04:10:51 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7y8er | false | null | t3_1q7y8er | /r/LocalLLaMA/comments/1q7y8er/i_built_plano_a_frameworkagnostic_data_plane_with/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'il2eia6z09cg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/il2eia6z09cg1.png?width=108&crop=smart&auto=webp&s=d90c6eff8b713ea8444acbfb4d320a6f699e08d7', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/il2eia6z09cg1.png?width=216&crop=smart&auto=webp&s=77abf6aab6e75733efa6fb82865abf43d3bed6b3', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/il2eia6z09cg1.png?width=320&crop=smart&auto=webp&s=184d8dfd23b4e6700d6b8b22b82fdf2d9d747028', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/il2eia6z09cg1.png?width=640&crop=smart&auto=webp&s=e093b83954392998ce6f57a7d1e12987828ca0c0', 'width': 640}, {'height': 647, 'url': 'https://preview.redd.it/il2eia6z09cg1.png?width=960&crop=smart&auto=webp&s=6d4c3bfbc3950147b13f44d58f1e711c4ed1cdfe', 'width': 960}, {'height': 728, 'url': 'https://preview.redd.it/il2eia6z09cg1.png?width=1080&crop=smart&auto=webp&s=7aa6c3606a20ba6cf428299073488738ed51c194', 'width': 1080}], 'source': {'height': 1602, 'url': 'https://preview.redd.it/il2eia6z09cg1.png?auto=webp&s=b569193332e299c62cce308864d7fea55d90f844', 'width': 2376}, 'variants': {}}]} | |
Resume Helper AI: Privacy-first resume tailor & application tracker (Ollama + APIs) | 1 | Hey r/LocalLLaMA,
I’m a solo dev working on an experimental tool called [Resume Helper AI](https://github.com/gibbenergy/Resume_Helper). It’s designed to automate resume tailoring and manage the full job application lifecycle while prioritizing data privacy. It’s a work in progress, and I’m looking for architectural and model-related feedback from the community.
**Technical Overview:**
* Privacy-First Multi-LLM Support: Supports local inference via **Ollama** and hosted APIs (OpenAI/Anthropic) for tasks requiring higher reasoning.
* Full Application Tracking**:** Manages the entire lifecycle of a job hunt, beyond simple document generation.
* The Stack: Built with Gradio, LiteLLM, Ollama.
**Looking for Opinions on:**
1. **Document Quality:** For those using LLMs to generate or tailor resumes, how are you finding the quality of the output compared to manual writing or other app? Are there specific prompting techniques that help maintain a professional, non-"AI-sounding" tone?
2. **Model Recommendations:** Which specific LLMs (local or API) have you found most effective for document tailoring? I’m looking for models that excel at following strict formatting constraints and matching resume bullet points to job descriptions.
3. **Workflow Efficiency:** Are there any specific tools, trick or logic flows you’d suggest to make the transition from "Job Description" to "Tailored Resume" more efficient?
I’m looking to improve the utility of this tool while keeping it local-first. Would love to hear your thoughts. | 2026-01-09T04:09:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q7y717/resume_helper_ai_privacyfirst_resume_tailor/ | OpeningSad323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7y717 | false | null | t3_1q7y717 | /r/LocalLLaMA/comments/1q7y717/resume_helper_ai_privacyfirst_resume_tailor/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'L1p5HLp8VAoqA5m4c5Sf_Q2OB0DQpVW1-c1wEGMrNaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L1p5HLp8VAoqA5m4c5Sf_Q2OB0DQpVW1-c1wEGMrNaM.png?width=108&crop=smart&auto=webp&s=f43efd7e99b123c798fdf885f07417bf015b2629', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L1p5HLp8VAoqA5m4c5Sf_Q2OB0DQpVW1-c1wEGMrNaM.png?width=216&crop=smart&auto=webp&s=d93f1f767c951edb4330cf2cb2b6d86d1edae7ab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L1p5HLp8VAoqA5m4c5Sf_Q2OB0DQpVW1-c1wEGMrNaM.png?width=320&crop=smart&auto=webp&s=38148022c1cd39d381fd608ebba0df1a4e050017', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L1p5HLp8VAoqA5m4c5Sf_Q2OB0DQpVW1-c1wEGMrNaM.png?width=640&crop=smart&auto=webp&s=a639c0cae8e531b6d855cb2ef623267963d45fc7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L1p5HLp8VAoqA5m4c5Sf_Q2OB0DQpVW1-c1wEGMrNaM.png?width=960&crop=smart&auto=webp&s=c969487d043fff2be77a6e9bd7f0eb9f1b1f4b5a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L1p5HLp8VAoqA5m4c5Sf_Q2OB0DQpVW1-c1wEGMrNaM.png?width=1080&crop=smart&auto=webp&s=77c7d4733b7d43ca92ee1bc166023cfb0f1f66b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L1p5HLp8VAoqA5m4c5Sf_Q2OB0DQpVW1-c1wEGMrNaM.png?auto=webp&s=437351fca697b79f6048416b304c1267b0fcbb71', 'width': 1200}, 'variants': {}}]} |
I built Plano - a framework-agnostic data plane with orchestration for agents | 1 | [removed] | 2026-01-09T04:04:31 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7y3m3 | false | null | t3_1q7y3m3 | /r/LocalLLaMA/comments/1q7y3m3/i_built_plano_a_frameworkagnostic_data_plane_with/ | false | false | 1 | {'enabled': True, 'images': [{'id': '3ubY0LXxH5qhwE0TNm2WIjsSR3qd2mkWKLMs0PI8o8E', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/6ooavy7zw8cg1.png?width=108&crop=smart&auto=webp&s=53552b33f840c030f94b233a96fac2d2b051d9fc', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/6ooavy7zw8cg1.png?width=216&crop=smart&auto=webp&s=9d2504cf89f945b3506283d29b5d1dca07775fa1', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/6ooavy7zw8cg1.png?width=320&crop=smart&auto=webp&s=fb13d0f9237fb442cd756453c5cc630f2489630e', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/6ooavy7zw8cg1.png?width=640&crop=smart&auto=webp&s=dd9b6796e0aaa579ff419dec87b5d5f2e704d5a8', 'width': 640}, {'height': 647, 'url': 'https://preview.redd.it/6ooavy7zw8cg1.png?width=960&crop=smart&auto=webp&s=3d82a235eb27e70bfc8180da4998906b78ed66c9', 'width': 960}, {'height': 728, 'url': 'https://preview.redd.it/6ooavy7zw8cg1.png?width=1080&crop=smart&auto=webp&s=9ab757574aaf8e9b75cb0a91e831cd542522b0f4', 'width': 1080}], 'source': {'height': 1602, 'url': 'https://preview.redd.it/6ooavy7zw8cg1.png?auto=webp&s=f42d9fcaa17009f57484fbbe10929e043e1e39e2', 'width': 2376}, 'variants': {}}]} | ||
Problem with embedding models using llama-swap | 1 | Hi, I’ve been using llama-swap as the backend for Open WebUI. After setting up RAG on Open WebUI, and pointing to the embedding model in the Settings/Documents section, I seem to be getting this when I do a web search. I get prompt tokens, but 0 generated tokens. Is there something wrong I’m doing? I’ve set up the config.yaml to include the —embedding flag for the model and the endpoint is http://10.0.0.15:8080/v1. Anyone with experience whether this is normal? Many thanks! | 2026-01-09T04:00:01 | BEEFshart | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7y01c | false | null | t3_1q7y01c | /r/LocalLLaMA/comments/1q7y01c/problem_with_embedding_models_using_llamaswap/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'vym21bzfy8cg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/vym21bzfy8cg1.jpeg?width=108&crop=smart&auto=webp&s=c05392bad40b024489631ca75020bf320c9d5b2b', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/vym21bzfy8cg1.jpeg?width=216&crop=smart&auto=webp&s=fa2b2ad97e647be3734266530b80b6ae7c09f6aa', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/vym21bzfy8cg1.jpeg?width=320&crop=smart&auto=webp&s=e32ee2f5098a43a9f5c85f9720db4d2cc0b68949', 'width': 320}, {'height': 383, 'url': 'https://preview.redd.it/vym21bzfy8cg1.jpeg?width=640&crop=smart&auto=webp&s=db5bc96e0081952b9dfc76b91fab300d72ca9939', 'width': 640}, {'height': 574, 'url': 'https://preview.redd.it/vym21bzfy8cg1.jpeg?width=960&crop=smart&auto=webp&s=d628231e4970c8300b027ee831b08544234458b8', 'width': 960}, {'height': 646, 'url': 'https://preview.redd.it/vym21bzfy8cg1.jpeg?width=1080&crop=smart&auto=webp&s=b2d9187222f74f8ebac68378a93401afd95b521a', 'width': 1080}], 'source': {'height': 1430, 'url': 'https://preview.redd.it/vym21bzfy8cg1.jpeg?auto=webp&s=a97218d3a403edb91a0f0de42d76dbe33f808552', 'width': 2388}, 'variants': {}}]} | |
I spent 9 months building a local AI work and play platform because I was tired of 5-terminal setups. I need help testing the Multi-GPU logic! This is a relaunch. | 0 | Hey everyone,
I’ve spent the last nine months head-down in a project called Eloquent. It started as a hobby because I was frustrated with having to juggle separate apps for chat, image gen, and voice clone just to get a decent roleplay experience.
I’ve finally hit a point where it’s feature-complete, and I’m looking for some brave souls to help me break it.
The TL;DR: It’s a 100% local, all-in-house platform built with React and FastAPI. No cloud, no subscriptions, just your hardware doing the heavy lifting.
What’s actually inside:
* For the Roleplayers: I built a Story Tracker that actually injects your inventory and locations into the AI's context (no more 'hallucinating' that you lost your sword). It’s also got a Choice Generator that expands simple ideas into full first-person actions.
* The Multi-Modal Stack: Integrated Stable Diffusion (SDXL/Flux) with a custom face-fixer (ADetailer) and Kokoro voice cloning. You can generate a character portrait and hear their voice stream in real-time without leaving the app.
* For the Nerds (like me): A full ELO Testing Framework. If you’re like me and spend more time testing models than talking to them, it has 14 different 'personality' judges (including an Al Swearengen and a Bill Burr perspective) to help you reconcile model differences.
* The Tech: It supports Multi-GPU orchestration—you can shard one model across all your cards or pin specific tasks (like image gen) to a secondary GPU.
Here is where I need you: I’ve built this to support as many GPUs as your system can detect, but my own workstation only has so much room. I honestly don't know if the tensor splitting holds up on a 4-GPU rig or if the VRAM monitoring stays accurate on older cards.
If you’ve got a beefy setup (or even just a single mid-range card) and want to help me debug the multi-GPU logic and refine the 'Forensic Linguistics' tools, I’d love to have you.
It’s extremely modular, so if you have a feature idea that doesn't exist yet, there’s a good chance we can just build it in.
Discord is brand new, come say hi: [https://discord.gg/qfTUkDkd](https://discord.gg/qfTUkDkd)
Thanks for letting me share—honestly just excited to see if this runs as well on your machines as it does on mine!
Also I just really need helping with testing :)
[https://github.com/boneylizard/Eloquent](https://github.com/boneylizard/Eloquent) | 2026-01-09T03:45:07 | https://github.com/boneylizard/Eloquent | Gerdel | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q7xoid | false | null | t3_1q7xoid | /r/LocalLLaMA/comments/1q7xoid/i_spent_9_months_building_a_local_ai_work_and/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'iiXjgGFE90aiqoscQZEyBtxEMV7FyeN3W9ogbA_TjrY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iiXjgGFE90aiqoscQZEyBtxEMV7FyeN3W9ogbA_TjrY.png?width=108&crop=smart&auto=webp&s=821cb6332136259049eb15eb932d4ef1fcc3701c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iiXjgGFE90aiqoscQZEyBtxEMV7FyeN3W9ogbA_TjrY.png?width=216&crop=smart&auto=webp&s=6a6a53615665308fae1390a815e0818561ece720', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iiXjgGFE90aiqoscQZEyBtxEMV7FyeN3W9ogbA_TjrY.png?width=320&crop=smart&auto=webp&s=ecf42a4b967a6d710b91ce8c8f938332ae72dd02', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iiXjgGFE90aiqoscQZEyBtxEMV7FyeN3W9ogbA_TjrY.png?width=640&crop=smart&auto=webp&s=d0254ea594549f7581e537104148d78322cddbfa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iiXjgGFE90aiqoscQZEyBtxEMV7FyeN3W9ogbA_TjrY.png?width=960&crop=smart&auto=webp&s=2c5a5a49fda453d0c1f44c826c5f31fa41847d26', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iiXjgGFE90aiqoscQZEyBtxEMV7FyeN3W9ogbA_TjrY.png?width=1080&crop=smart&auto=webp&s=9e6a4397c2795e793f760a5bc767a3afc9f75dae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iiXjgGFE90aiqoscQZEyBtxEMV7FyeN3W9ogbA_TjrY.png?auto=webp&s=787aa522f3781e6f46683bc814df443b34773d0d', 'width': 1200}, 'variants': {}}]} |
Multi modal llms vs specific llms | 0 | I was thinking if it would be better to use a multi model llms to generate images and text or use two separate llms for image and text. I'm planning on customising the image and text generation based on a single person. What do you guys think? | 2026-01-09T03:30:38 | https://www.reddit.com/r/LocalLLaMA/comments/1q7xdcp/multi_modal_llms_vs_specific_llms/ | Present-Hospital1983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7xdcp | false | null | t3_1q7xdcp | /r/LocalLLaMA/comments/1q7xdcp/multi_modal_llms_vs_specific_llms/ | false | false | nsfw | 0 | null |
Gemma-3-4b (null-space) abliteration & RP fine-tune | 13 | I've been branching out from research to actually building models recently, and this is my first attempt at applying a lora adapter on top of my abliterations.
I used my null-space abliteration [Gemma-3-4B-IT](https://huggingface.co/jwest33/gemma-3-4b-it-null-space-abliterated) model with an adapter trained from a subset of the lemonilia/LimaRP dataset. I plan on removing the step limit and reducing the learning rate but wanted to start here.
The model card should have all the information needed to know how I trained it but I'm happy to share anything else if I missed anything. Looking for any feedback before I start on larger models.
[https://huggingface.co/jwest33/gemma-3-4b-null-space-abliterated-RP-writer](https://huggingface.co/jwest33/gemma-3-4b-null-space-abliterated-RP-writer)
[https://huggingface.co/jwest33/gemma-3-4b-null-space-abliterated-RP-writer-GGUF](https://huggingface.co/jwest33/gemma-3-4b-null-space-abliterated-RP-writer-GGUF)
| 2026-01-09T03:30:31 | https://huggingface.co/jwest33/gemma-3-4b-null-space-abliterated-RP-writer-GGUF | JEs4 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q7xd96 | false | null | t3_1q7xd96 | /r/LocalLLaMA/comments/1q7xd96/gemma34b_nullspace_abliteration_rp_finetune/ | false | false | default | 13 | {'enabled': False, 'images': [{'id': '8MsDm6oseUFMBQKroxuYj3kQ8ddgGPXg7n46GwYAb90', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8MsDm6oseUFMBQKroxuYj3kQ8ddgGPXg7n46GwYAb90.png?width=108&crop=smart&auto=webp&s=9e06fb55ffecb71cce64ffd8156096c41d92e7ca', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8MsDm6oseUFMBQKroxuYj3kQ8ddgGPXg7n46GwYAb90.png?width=216&crop=smart&auto=webp&s=caafc84f195c21ffa3a920f3a53796c88fed0109', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8MsDm6oseUFMBQKroxuYj3kQ8ddgGPXg7n46GwYAb90.png?width=320&crop=smart&auto=webp&s=4c119d4613ff5c713791520337bb746ce07ab9f2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8MsDm6oseUFMBQKroxuYj3kQ8ddgGPXg7n46GwYAb90.png?width=640&crop=smart&auto=webp&s=a93da97e2ba4d4ae0a33f7ba1f40c4fc8cc75c24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8MsDm6oseUFMBQKroxuYj3kQ8ddgGPXg7n46GwYAb90.png?width=960&crop=smart&auto=webp&s=36332714317f4ce79bb5004dff8bcec6423043db', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8MsDm6oseUFMBQKroxuYj3kQ8ddgGPXg7n46GwYAb90.png?width=1080&crop=smart&auto=webp&s=98bd918e78bb1e592af747cfe054728d2f706c2d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8MsDm6oseUFMBQKroxuYj3kQ8ddgGPXg7n46GwYAb90.png?auto=webp&s=87f1df814a392c92d03ae7db23f26f17c782fb93', 'width': 1200}, 'variants': {}}]} |
Is clustering two Mac Studio M2 Ultra 128gb ram 2TB worth it? I already own one. | 0 | Is clustering two Mac Studio M2 Ultra 128gb ram 2TB worth it? I already own one. Thinking about getting another one on the used market for $2500 or less. Been playing around with AI and 70b models. Anyone here have experience with clustering two Mac studios? | 2026-01-09T03:10:46 | https://www.reddit.com/r/LocalLLaMA/comments/1q7wxv9/is_clustering_two_mac_studio_m2_ultra_128gb_ram/ | Hello_david123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7wxv9 | false | null | t3_1q7wxv9 | /r/LocalLLaMA/comments/1q7wxv9/is_clustering_two_mac_studio_m2_ultra_128gb_ram/ | false | false | self | 0 | null |
Just finished Chip Huyen’s "AI Engineering" (O’Reilly) — I have 534 pages of theory and 0 lines of code. What's the "Indeed-Ready" bridge? | 0 | Hey everyone,
I just finished a cover-to-cover grind of Chip Huyen’s *AI Engineering* (the new O'Reilly release). Honestly? The book is a masterclass. I actually understand "AI-as-a-judge," RAG evaluation bottlenecks, and the trade-offs of fine-tuning vs. prompt strategy now.
**The Problem:** I am currently the definition of "book smart." I haven't actually built a single repo yet. If a hiring manager asked me to spin up a production-ready LangGraph agent or debug a vector DB latency issue right now, I’d probably just stare at them and recite the preface.
I want to spend the next 2-3 months getting "Job-Ready" for a US-based AI Engineer role. I have full access to O'Reilly (courses, labs, sandbox) and a decent budget for API credits.
**If you were hiring an AI Engineer today, what is the FIRST "hands-on" move you'd make to stop being a theorist and start being a candidate?**
I'm currently looking at these three paths on O'Reilly/GitHub:
1. **The "Agentic" Route:** Skip the basic "PDF Chatbot" (which feels like a 2024 project) and build a Multi-Agent Researcher using **LangGraph** or **CrewAI**.
2. **The "Ops/Eval" Route:** Focus on the "boring" stuff Chip talks about—building an automated **Evaluation Pipeline** for an existing model to prove I can measure accuracy/latency properly.
3. **The "Deployment" Route:** Focus on serving models via **FastAPI** and **Docker** on a cloud service, showing I can handle the "Engineering" part of AI Engineering.
I’m basically looking for the shortest path from "I read the book" to "I have a GitHub that doesn't look like a collection of tutorial forks." Are certifications like **Microsoft AI-102** or **Databricks** worth the time, or should I just ship a complex system?
**TL;DR:** I know the theory thanks to Chip Huyen, but I’m a total fraud when it comes to implementation. How do I fix this before the 2026 hiring cycle passes me by? | 2026-01-09T02:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/1q7wkaz/just_finished_chip_huyens_ai_engineering_oreilly/ | Substantial_Sky_8167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7wkaz | false | null | t3_1q7wkaz | /r/LocalLLaMA/comments/1q7wkaz/just_finished_chip_huyens_ai_engineering_oreilly/ | false | false | self | 0 | null |
Curious Why Model File Transfers Are Slow. Moving From One SATA SSD to Another. | 0 | I'm transferring my models folder (250GB) from one hard drive to another. Both are new SATA SSDS rated at around \~500MB/s. I am getting very slow transfer speeds, around 5MB/s with sporadic bursts of up to 312MB. I know that transfer speed can be very dependent on the structure of the data being transferred but I'm curious if this is normal, is there is something inherent about model file structures that make them slow to transfer? Maybe the issue is with my drives? Both drives are less than a month old but storage on them is at about 80% capacity. All my other files and folders transfer at expected speeds. | 2026-01-09T02:45:54 | Five9Fine | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7wdxd | false | null | t3_1q7wdxd | /r/LocalLLaMA/comments/1q7wdxd/curious_why_model_file_transfers_are_slow_moving/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '4cyrf3xy78cg1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/4cyrf3xy78cg1.jpeg?width=108&crop=smart&auto=webp&s=7dadf58d2a173820431eed4bcf897c0fbd40c3ae', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/4cyrf3xy78cg1.jpeg?width=216&crop=smart&auto=webp&s=bb0c89dec9f51596d2e98ece43daf78d0c0a65cc', 'width': 216}, {'height': 206, 'url': 'https://preview.redd.it/4cyrf3xy78cg1.jpeg?width=320&crop=smart&auto=webp&s=82c8792e2c3c1bba872175fafe070f4625dd1740', 'width': 320}], 'source': {'height': 298, 'url': 'https://preview.redd.it/4cyrf3xy78cg1.jpeg?auto=webp&s=21ee770e97c39f9f624c063d1425c68084149c1e', 'width': 461}, 'variants': {}}]} | |
Gemma3-27b vs Qwen2.5-14B Long 1M | 0 | Has anyone compared these two models directly for document intelligence?
In your experience, does the major increase in context size outweigh the loss of 13b active params? I have extremely long documents to summarize, compare and contrast, so context helps, but the analysis needs to be correct also.
| 2026-01-09T02:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/1q7vykt/gemma327b_vs_qwen2514b_long_1m/ | FrozenBuffalo25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7vykt | false | null | t3_1q7vykt | /r/LocalLLaMA/comments/1q7vykt/gemma327b_vs_qwen2514b_long_1m/ | false | false | self | 0 | null |
OK I get it, now I love llama.cpp | 228 | I just made the switch from Ollama to llama.cpp. Ollama is fantastic for the beginner because it lets you super easily run LLMs and switch between them all. Once you realize what you truly want to run, llama.cpp is really the way to go.
My hardware ain't great, I have a single 3060 12GB GPU and three P102-100 GPUs for a total of 42GB. My system ram is 96GB along with an Intel i7-9800x. It blows my mind that with some tuning what difference it can make. You really need to understand each of the commands for llama.cpp to get the most out of it especially with uneven vram like mine. I used Chatgpt, Perplexity and suprisingly only Google AI studio could optimize my settings while teaching me along the way.
Crazy how these two commands both fill up the ram but one is twice as fast as the other. Chatgpt helped me with the first one, Google AI with the other ;). Now I'm happy running local lol.
**11t/s:**
sudo pkill -f llama-server; sudo nvidia-smi --gpu-reset -i 0,1,2,3 || true; sleep 5; sudo CUDA\_VISIBLE\_DEVICES=0,1,2,3 ./llama-server --model /home/llm/llama.cpp/models/gpt-oss-120b/Q4\_K\_M/gpt-oss-120b-Q4\_K\_M-00001-of-00002.gguf --n-gpu-layers 21 --main-gpu 0 --flash-attn off --cache-type-k q8\_0 --cache-type-v f16 --ctx-size 30000 --port 8080 --host [0.0.0.0](http://0.0.0.0) \--mmap --numa distribute --batch-size 384 --ubatch-size 256 --jinja --threads $(nproc) --parallel 2 --tensor-split 12,10,10,10 --mlock
**21t/s**
sudo pkill -f llama-server; sudo nvidia-smi --gpu-reset -i 0,1,2,3 || true; sleep 5; sudo GGML\_CUDA\_ENABLE\_UNIFIED\_MEMORY=0 CUDA\_VISIBLE\_DEVICES=0,1,2,3 ./llama-server --model /home/llm/llama.cpp/models/gpt-oss-120b/Q4\_K\_M/gpt-oss-120b-Q4\_K\_M-00001-of-00002.gguf --n-gpu-layers 99 --main-gpu 0 --split-mode layer --tensor-split 5,5,6,20 -ot "blk\\.(2\[1-9\]|\[3-9\]\[0-9\])\\.ffn\_.\*\_exps\\.weight=CPU" --ctx-size 30000 --port 8080 --host [0.0.0.0](http://0.0.0.0) \--batch-size 512 --ubatch-size 256 --threads 8 --parallel 1 --mlock | 2026-01-09T01:39:13 | https://www.reddit.com/r/LocalLLaMA/comments/1q7uuxo/ok_i_get_it_now_i_love_llamacpp/ | vulcan4d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7uuxo | false | null | t3_1q7uuxo | /r/LocalLLaMA/comments/1q7uuxo/ok_i_get_it_now_i_love_llamacpp/ | false | false | self | 228 | null |
Is there any models and apps for local servers that can do pics | 0 | Hi im looking for any models uncensored for pics making like transformative content like werewolfs etc but uncensored and a app that would run da model and be able to use it when im away from home | 2026-01-09T01:34:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q7ur20/is_there_any_models_and_apps_for_local_servers/ | nekoboi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7ur20 | false | null | t3_1q7ur20 | /r/LocalLLaMA/comments/1q7ur20/is_there_any_models_and_apps_for_local_servers/ | false | false | nsfw | 0 | null |
SimpleLLM — a minimal (~950 LOC) LLM inference engine built from scratch | 24 | SimpleLLM's engine is async by default. Every request goes through a background inference loop that continuously batches work to keep the GPU saturated & prioritizing throughput.
|Benchmark|SimpleLLM|vLLM|
|:-|:-|:-|
|batch\_size = 1|135 tok/s|138 tok/s|
|batch\_size = 64|4,041 tok/s|3,846 tok/s|
Note: Currently, this repository ONLY supports OpenAI/gpt-oss-120b on a single NVIDIA H100.
**Usage**
`from llm import LLM`
`engine = LLM("./gpt-oss-120b")`
`outputs = engine.generate(["What is the meaning of life?"], max_tokens=100).result()`
`print(outputs[0].text)`
Github Repo - [https://github.com/naklecha/simple-llm](https://github.com/naklecha/simple-llm) | 2026-01-09T01:30:58 | https://v.redd.it/twqirt3j78cg1 | Dear-Success-1441 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7uo7u | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/twqirt3j78cg1/DASHPlaylist.mpd?a=1770514274%2CZmRhZWI3Nzk2NjkzYzkwYTdjYmE2Yzc3MjZlNDA2MjU4YzhjOTRiM2Q4NDRjOTgzNTYyMmQ5MzdhNjg3YjZkNQ%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/twqirt3j78cg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/twqirt3j78cg1/HLSPlaylist.m3u8?a=1770514274%2CYjRlYTc4ZjliY2I3YmIxMDRiMzhmODFmYWIwOThiNDcyNGQ0NzVjYmFkMGE3M2RhMzVmMmIyYTFkM2NjZmQ5Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/twqirt3j78cg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1q7uo7u | /r/LocalLLaMA/comments/1q7uo7u/simplellm_a_minimal_950_loc_llm_inference_engine/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'eW56MWo5OGo3OGNnMQt6mXHkLBiOyVm9E_-7IBj4RKtoglrz47V6J4dn3Gg-', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eW56MWo5OGo3OGNnMQt6mXHkLBiOyVm9E_-7IBj4RKtoglrz47V6J4dn3Gg-.png?width=108&crop=smart&format=pjpg&auto=webp&s=d2806fa430b0466e583384d007c75e4ea722d4d3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eW56MWo5OGo3OGNnMQt6mXHkLBiOyVm9E_-7IBj4RKtoglrz47V6J4dn3Gg-.png?width=216&crop=smart&format=pjpg&auto=webp&s=df13b73b7cbf81ee21e68aad0c6d65cf36c8b4f6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eW56MWo5OGo3OGNnMQt6mXHkLBiOyVm9E_-7IBj4RKtoglrz47V6J4dn3Gg-.png?width=320&crop=smart&format=pjpg&auto=webp&s=52fa9d01af73c31293fc22346c4fc4da81a151db', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eW56MWo5OGo3OGNnMQt6mXHkLBiOyVm9E_-7IBj4RKtoglrz47V6J4dn3Gg-.png?width=640&crop=smart&format=pjpg&auto=webp&s=239009a98734617a8322d887f764b7965b23dda3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eW56MWo5OGo3OGNnMQt6mXHkLBiOyVm9E_-7IBj4RKtoglrz47V6J4dn3Gg-.png?width=960&crop=smart&format=pjpg&auto=webp&s=6eab22a6af8855dcb24b2bcd871f8ef0a13780c9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eW56MWo5OGo3OGNnMQt6mXHkLBiOyVm9E_-7IBj4RKtoglrz47V6J4dn3Gg-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=529b5fff682563f2124929b5c76ee1e9721409a5', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eW56MWo5OGo3OGNnMQt6mXHkLBiOyVm9E_-7IBj4RKtoglrz47V6J4dn3Gg-.png?format=pjpg&auto=webp&s=5a3f3199890a46ab988f9314aa5526150b9823a9', 'width': 1280}, 'variants': {}}]} | |
What tools do you use to fine tune an embedding model? | 1 | Is this common to do at all?
I saw this detail on unsloth that implied it’s TBD https://github.com/unslothai/unsloth/issues/1996
Is there alternatives anyone knows? | 2026-01-08T23:28:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q7rqr3/what_tools_do_you_use_to_fine_tune_an_embedding/ | richardanaya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7rqr3 | false | null | t3_1q7rqr3 | /r/LocalLLaMA/comments/1q7rqr3/what_tools_do_you_use_to_fine_tune_an_embedding/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cHk1mOToTowx0_42LFOgzEl-wRWzvy5yDA3guWLF4IQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cHk1mOToTowx0_42LFOgzEl-wRWzvy5yDA3guWLF4IQ.png?width=108&crop=smart&auto=webp&s=eb26884ba1595891eca55292721b401ff0b61bff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cHk1mOToTowx0_42LFOgzEl-wRWzvy5yDA3guWLF4IQ.png?width=216&crop=smart&auto=webp&s=50e883eb646137d7e89ac9b7d669567d07a2a4ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cHk1mOToTowx0_42LFOgzEl-wRWzvy5yDA3guWLF4IQ.png?width=320&crop=smart&auto=webp&s=07a5e896b7bece73ea8942ac9979db29e012d712', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cHk1mOToTowx0_42LFOgzEl-wRWzvy5yDA3guWLF4IQ.png?width=640&crop=smart&auto=webp&s=7112d4c592a72d68c90b8849367ff7a3fe7f21a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cHk1mOToTowx0_42LFOgzEl-wRWzvy5yDA3guWLF4IQ.png?width=960&crop=smart&auto=webp&s=b305b62f581845eda71d8c4f3c841d4c375e588f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cHk1mOToTowx0_42LFOgzEl-wRWzvy5yDA3guWLF4IQ.png?width=1080&crop=smart&auto=webp&s=181593e0c5f3e516335852be0fd771eddaed2736', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cHk1mOToTowx0_42LFOgzEl-wRWzvy5yDA3guWLF4IQ.png?auto=webp&s=cff21446a59d941d2e28e36f4bf27697dce8187b', 'width': 1200}, 'variants': {}}]} |
We burned $2K+ on duplicate API calls during development, so we built a caching proxy (and open-sourced it) | 0 | So my cofounder and I have been building AI tools for a few months now. Last month we looked at our OpenAI bill and realized we'd burned through way more than expected - not from production traffic, but from us just iterating during development.
You know how it is. You're debugging a prompt, hitting "run" over and over. Same prompt, same response, but you're paying each time. Or you're testing the same flow repeatedly while building a feature. It adds up fast.
We built a simple caching proxy that sits between our code and the OpenAI/Anthropic APIs. First request hits the API and gets cached. Every repeat? Instant response, zero cost.
The nice part is it normalizes prompts before caching - so if you have trailing whitespace or extra newlines (we all copy-paste sloppily), it still hits the cache. Ended up saving us about 11% on tokens just from that cleanup.
It's a one-line change:
python
client = OpenAI(base_url="http://localhost:8000/v1")
```
That's it. Works with the normal OpenAI/Anthropic SDKs.
We've been using it internally for a while and figured others might find it useful, so we cleaned it up and open sourced it:
GitHub: https://github.com/sodiumsun/snackcache
```
pip install snackcache
snackcache serve
It's simple - just caching + prompt normalization. Nothing fancy. But it's saved us real money during dev, and our CI pipeline runs way faster now.
Happy to answer questions if anyone's curious about how it works under the hood. | 2026-01-08T23:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/1q7rmit/we_burned_2k_on_duplicate_api_calls_during/ | decentralizedbee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7rmit | false | null | t3_1q7rmit | /r/LocalLLaMA/comments/1q7rmit/we_burned_2k_on_duplicate_api_calls_during/ | false | false | self | 0 | null |
Beginner advice: Can I run a local LLM for long-term worldbuilding memory? | 1 | [removed] | 2026-01-08T22:37:22 | https://www.reddit.com/r/LocalLLaMA/comments/1q7qgbc/beginner_advice_can_i_run_a_local_llm_for/ | commissarisgay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7qgbc | false | null | t3_1q7qgbc | /r/LocalLLaMA/comments/1q7qgbc/beginner_advice_can_i_run_a_local_llm_for/ | false | false | self | 1 | null |
The NO FAKES Act has a "Fingerprinting" Trap that kills Open Source. We need to lobby for a Safe Harbor. | 574 | Hey everyone,
I’ve been reading the text of the "NO FAKES Act" currently in Congress, and it’s worse than I thought.
The Tldr: It creates a "digital replica right" for voices/likenesses. That sounds fine for stopping deepfake porn, but the liability language is a trap. It targets anyone who "makes available" a tool that is primarily used for replicas.
The Problem: If you release a TTS model or a voice-conversion RVC model on HuggingFace, and someone else uses it to fake a celebrity, you (the dev) can be liable for statutory damages ($5k-$25k per violation).
There is no Section 230 protection here. This effectively makes hosting open weights for audio models a legal s*icide mission unless you are OpenAI or Google.
What I did:
I contacted my reps email to flag this as an "innovation killer." If you run a repo or care about open weights, you might want to do the same. We need them to add a "Safe Harbor" for tool devs.
S.1367 - 119th Congress (2025-2026): NO FAKES Act of 2025 | Congress.gov | Library of Congress https://share.google/u6dpy7ZQDvZWUrlfc | 2026-01-08T22:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1q7qcux/the_no_fakes_act_has_a_fingerprinting_trap_that/ | PostEasy7183 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7qcux | false | null | t3_1q7qcux | /r/LocalLLaMA/comments/1q7qcux/the_no_fakes_act_has_a_fingerprinting_trap_that/ | false | false | self | 574 | null |
using functiongemma with Llama.cpp possible? | 3 | I am having a hard time with functiongemma via a [plugin ](https://github.com/getnamo/Llama-Unreal)that uses Llama.cpp (I've updated to the latest version and enalbled Kuda 13.1). I am following functiongemma's [example (best practices)](https://ai.google.dev/gemma/docs/functiongemma/formatting-and-best-practices). I think their example's syntax is for python. I find that I can just use quotation for strings instead of using <escape> tag.
Often, I get garbage response or it gets stuck that I have to kill the process. On some occasion, I can get incomplete response back with missing opening/closing tags.
I don't have any issue with other LLM (llama2, Gemma3, ministral3...) but this one.
It is very close to work. I am not sure if I am sending the proper prompt raw syntax/tags.
Anyone got any idea?
https://preview.redd.it/5eah38jqb7cg1.jpg?width=855&format=pjpg&auto=webp&s=c00f7dae3e0046087e01b44858d483cc3f51e1de
| 2026-01-08T22:31:59 | https://www.reddit.com/r/LocalLLaMA/comments/1q7qber/using_functiongemma_with_llamacpp_possible/ | PeterL111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7qber | false | null | t3_1q7qber | /r/LocalLLaMA/comments/1q7qber/using_functiongemma_with_llamacpp_possible/ | false | false | 3 | null | |
Should I go for PCIe 5.0 or 4.0 for dual-GPU MoE inference set up | 4 | I am setting up a homelab server with 4 RTX 5090 GPUs. Two for dedicated LLM serving. The other 2 are used either for small model (not necessary LLM) training/tuning (not much multi-GPU performance requirement), or serving LLM when I am not working on tuning.
After shopping around, I noticed that servers that accept DDR5 memory and provide PCIe 5.0 x16 for all four GPUs are significantly more expensive. The one I am looking at is around $12K. Meanwhile, systems with DDR4 memory and PCIe 4.0 x16 can be less than $7K.
I went through previous discussion regarding PCIe/memory bandwidth and got mixed information. The LLM model I'd like to serve are \~200B models and GPT-OSS-120B level. It seems that prefill rate may drop by 35% on 4.0, although it may be rescued by batched inference. I indeed need mainly batched inference. As for token generation, I see posts claiming no significant drop as well as those saying that MoE models suffer a lot.
If I take GPT-OSS-120B model as an example, how much difference would I see between 5.0 and 4.0? I guess I have to enable CPU offloading and pipeline parallelism, are there other common tricks on vLLM/LMDeploy/llama.cpp I can use? | 2026-01-08T22:14:43 | https://www.reddit.com/r/LocalLLaMA/comments/1q7pvfs/should_i_go_for_pcie_50_or_40_for_dualgpu_moe/ | enneamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7pvfs | false | null | t3_1q7pvfs | /r/LocalLLaMA/comments/1q7pvfs/should_i_go_for_pcie_50_or_40_for_dualgpu_moe/ | false | false | self | 4 | null |
Semantic Compression for Local LLMs (35x Input Reduction, Identical Output Quality) | 0 | First things first:
1. Yes, this was compiled and structured by a frontier AI (Claude, specifically). \~70% of the "prose" was written by me.
2. There are glaring omissions in parts of this...this was intentional to protect IP
3. This "system" is the result of months of rigorous testing, formalization, and documentation.
4. Questions, suggestions, discussion is all I am after here. If you don't have a reason to interact with this post in a constructive manner, then don't bother.
5. Any claim made herein can be (and has been) verified via documentation, testing, and evidentiary evaluation.
# What This Is
A compression format that reduces prompt token count by 50-97% while maintaining output quality. Not bitwise compression - semantic compression that activates learned domain priors during decompression.
**Example:**
* Standard prompt: 120 tokens
* "My system" equivalent: 40 tokens
* Output quality: Identical (both produce the same \~1400 token code file)
* Models tested: GPT-OSS:20B (local), Claude, GPT, Grok, Gemini, and a host of other "frontier" level models.
# How It Works
Traditional compression stores *syntax*. "My system" stores *semantics*.
**Standard approach:**
"Create a Python class called DataValidator with methods for
initializing with a schema, validating data against required
fields, sanitizing input by removing special characters..."
(120 tokens of natural language)
**"My system" approach:**
(redacted for IP security)
(40 tokens of structured semantics)
The model reconstructs the full implementation by applying domain conventions (docstrings, type hints, PEP 8 formatting) that it already knows. You're encoding *what* to build, not *how* to format it.
# The Core Principle: Preserve meaning across compression/decompression cycles.
The decompression phase doesn't just "unpack" the prompt- it *enhances* the output by adding:
* Domain-standard formatting (NumPy-style docstrings, proper indentation)
* Type hints and annotations
* Working example code
* Professional structure
**Hard constraint:** Zero factual invention. The model can add *form* but not *content*. If it's not encoded in the token, it doesn't get added.
# Real Results
Tested on GPT-OSS:20B via Ollama:
**Test case:** Python class generation
|Metric|Standard Prompt|"My system"|
|:-|:-|:-|
|Input tokens|\~120|\~40|
|Output tokens|\~1,400|\~1,400|
|Compression ratio|1:11.7|1:35|
|Output quality|Production-ready|Production-ready|
|Hallucinations|0|0|
My system's input is 66% smaller here, while producing functionally identical output.
Why This Matters for Local Models
**Problem:** Running local LLMs means limited context windows and slower processing.
**Solution:** Compress inputs semantically - the model reconstructs using knowledge it already has.
**Practical benefits:**
* Fit more context in limited windows
* Faster processing (fewer input tokens)
* Consistent output structure (domain conventions applied automatically)
* Works with existing models (no retraining needed)
# What Models Work?
Tested successfully on:
* GPT-OSS:20B
* Virtually every Frontier model publicly available
# Limitations
**This is not:**
* A magic solution for all prompts.
* Better than verbose prompts for creative/ambiguous tasks
* A model modification (works with existing models as-is)
**This works best for:**
* Structured generation (code, configs, schemas)
* Domain-specific tasks with clear conventions
* Scenarios where compression ratio matters
* Repetitive tasks with consistent output format Current State
Format specification complete and standardized for 3 versions of prompt construction. No library yet - manual construction and decompression via direct prompting.
# Questions I'm Investigating
1. **Compression ratio scaling:** Does it hold at 70B+ parameters? Tests have proven yes.
2. **Cross-domain performance:** Anywhere structured data exists: Research papers, customer databases, Legal documents, medical records, etc
3. **Model size threshold:** What's the minimum parameter count for reliable decompression? I have used this system on LLMs as small as 0.5b on my Android phone. As long as the model has sufficient training in the domain being used, it can reproduce the reconstructed content.
4. **Quantization impact:** Does 4-bit quantization degrade performance significantly? I have only tested quantization degradation with my local 20B...and have had no issues from the Quant.
# Why I'm Posting This
I've been emailing companies (Anthropic, OpenAI, inference providers) about this for months. Radio silence.
I have literally gigs of documentation from testing across every model I could find.
I refuse to be a “trust me, bro” kind of person. If I can't test it, then it's of no use to me...and this system reflects that.
Figured the LocalLLaMA community would actually *use* this if it works, rather than filing it in a "maybe someday" folder.
**TL;DR:** Compress prompts by encoding semantics instead of syntax. Local models reconstruct full outputs using domain knowledge they already have. 40 tokens → 1400 tokens of production code. Works on 20B models right now.
| 2026-01-08T22:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q7polt/semantic_compression_for_local_llms_35x_input/ | Sinjynn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7polt | false | null | t3_1q7polt | /r/LocalLLaMA/comments/1q7polt/semantic_compression_for_local_llms_35x_input/ | false | false | self | 0 | null |
How does cerebras coding plan waitlist work? | 0 | Did anyone get in and try glm 4.7? Also is it also just 60k tokens/minute rate limit for the coding plan?
Basically is it a scam? | 2026-01-08T22:01:36 | https://www.reddit.com/r/LocalLLaMA/comments/1q7piyx/how_does_cerebras_coding_plan_waitlist_work/ | unraveleverything | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7piyx | false | null | t3_1q7piyx | /r/LocalLLaMA/comments/1q7piyx/how_does_cerebras_coding_plan_waitlist_work/ | false | false | self | 0 | null |
Automated the annoying parts of fine-tuning (model selection, hyperparameters, setup) | Check it out @ tunekit.app | 3 | Fine-tuning SLMs the way I wish it worked!
Same model. Same prompt. Completely different results.
That's what fine-tuning does (when you can actually get it running).
I got tired of the setup nightmare. So I built:
TuneKit: Upload your data. Get a notebook. Train free on Colab.
No GPUs to rent. No scripts to write. No cost. Just results!
(Supports Llama 3.2, Phi-4, Mistral, Qwen, Gemma.)
→ Try it out (for free): [https://tunekit.app/](https://tunekit.app/)
→ GitHub: [https://github.com/riyanshibohra/TuneKit](https://github.com/riyanshibohra/TuneKit)
Free and open source. Let me know if it's useful! | 2026-01-08T21:57:38 | https://www.reddit.com/r/LocalLLaMA/comments/1q7pf0c/automated_the_annoying_parts_of_finetuning_model/ | Consistent_One7493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7pf0c | false | null | t3_1q7pf0c | /r/LocalLLaMA/comments/1q7pf0c/automated_the_annoying_parts_of_finetuning_model/ | false | false | self | 3 | null |
Free, open source adventure RP app (AGPL 3) | Aventura | 15 | Hi! Over these last couple of weeks, I've been working on a frontend called [Aventura](https://github.com/unkarelian/Aventura). It's 100% free and open source, under AGPL 3.
# What is Aventura?
Simply put, it's a frontend purpose built for adventure RP and creative writing. While the original release only had support for openrouter, I have added the ability to add *any* openai compatible source, as well as the ability to manually change the parameters you send to a model. While I have limited testing myself due to my poor GPU, it should work just fine with local models (: . (I hope)
# So what does it do?
It has a built in:
* Tracker, for events, characters, plot points, inventory, etc
* Multiple choice options, for both creative writing and adventure mode, allowing for good reference points on what to do next
* Long term memory(!!!) using the exact same system as timeline-memory (a SillyTavern extension I made), but with several optimizations. It runs **much** faster than it does with timeline-memory, due to being able to run several queries in parallel.
* Lorebook management, completely automatic and in the background, not requiring any user input and not interrupting the flow
* LLM based lorebook retrieval, massively increasing accuracy over using embedding models
* Anti-slop automation, taking inspiration from my fork of Prose Polisher, I have ditched the programmatic way of determining it, and instead use an LLM, which is much more accurate
* Setup wizard for creating new scenarios, with the assistance of AI
* Built in spell checker using harper
* Lorebook classification using LLM's Note: This was made with parallel requests in mind, and as such it at times makes several generations at once. Make sure you have some sort of way to handle that, or alternatively, disable the features that do make multiple requests. You're also going to have to set up the models for each feature yourself if you do run locally, as it only has pre-configurations for api aggregators (for the sake of my own sanity).
# Technical details of the memory system
Since this is r/LocalLLaMA , I figured I should also share how the memory system here works. It's not a system I've really seen anywhere else, though I may be wrong.
# How it works
In every message, the 'time' is either advanced or kept the same. Either way, the 'current time' is saved to each message. When a token threshold is passed (default 24k), a summary is automatically triggered. In this automatic summary, the 'starting time' (the time of the first message in the summary) and the 'ending time' (the time of the last message of the summary) are saved as part of the data, alongside the characters and locations visited. This gives the summary itself a stable sense of in-universe 'time' that helps maintain coherence. But that's just a modification of the summary, and not really anything that different.
# The slightly different part
What actually matters here is that we don't get rid of the messages within the summary. Instead, while we hide them from the 'visible' chat history to the AI, before every message after a summary is made, multiple 'queries' are run on those summarized 'chapters'. When a query is made, a separate AI is given the **entirety** of that chapter alongside the query, and, crucially, it passes back an answer to that query. That way, we can keep even the smallest details of a chapter *without* overloading the context of the 'main narrative ai'. It's basically trading pure inference for accuracy. All of this comes together to make a very coherent 'timeline' of events. It also has a separate agentic mode after each chapter is created, where an AI will run in the background and make tool calls after querying chapters, and actively update the lorebooks for you. You don't really have to maintain the world yourself at all with this, it just does it for you.
# Contributing
Contributions are very welcome! | 2026-01-08T21:42:38 | https://www.reddit.com/r/LocalLLaMA/comments/1q7p09i/free_open_source_adventure_rp_app_agpl_3_aventura/ | AuYsI | self.LocalLLaMA | 2026-01-08T21:47:18 | 0 | {} | 1q7p09i | false | null | t3_1q7p09i | /r/LocalLLaMA/comments/1q7p09i/free_open_source_adventure_rp_app_agpl_3_aventura/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'h7XJLktPdO_l42rP-GCHLZycFSA_pcfxCxAWpDSXX5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h7XJLktPdO_l42rP-GCHLZycFSA_pcfxCxAWpDSXX5M.png?width=108&crop=smart&auto=webp&s=5f42380959f408bd40f18acb982e60307dc26df6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h7XJLktPdO_l42rP-GCHLZycFSA_pcfxCxAWpDSXX5M.png?width=216&crop=smart&auto=webp&s=dd6b3a0595495beddc853ee324de52553ffc46f8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h7XJLktPdO_l42rP-GCHLZycFSA_pcfxCxAWpDSXX5M.png?width=320&crop=smart&auto=webp&s=3c68e7503d2ac530ddd94d8a2148b3b39fd136a3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h7XJLktPdO_l42rP-GCHLZycFSA_pcfxCxAWpDSXX5M.png?width=640&crop=smart&auto=webp&s=29273a201f8845da306484aed2a42068df54b71e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h7XJLktPdO_l42rP-GCHLZycFSA_pcfxCxAWpDSXX5M.png?width=960&crop=smart&auto=webp&s=f4ae9059274991d39c94ff1439ceedd7904630f8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h7XJLktPdO_l42rP-GCHLZycFSA_pcfxCxAWpDSXX5M.png?width=1080&crop=smart&auto=webp&s=d5e06d68a52fd1a1458029f94022311753cc9514', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h7XJLktPdO_l42rP-GCHLZycFSA_pcfxCxAWpDSXX5M.png?auto=webp&s=c25f8943c86d82c80003fe81a3f70da2f5cec8ce', 'width': 1200}, 'variants': {}}]} |
LLM trained from scratch on 1800s London texts (1.2B params, 90GB dataset) | 1 | [removed] | 2026-01-08T21:25:58 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1q7ojzm | false | null | t3_1q7ojzm | /r/LocalLLaMA/comments/1q7ojzm/llm_trained_from_scratch_on_1800s_london_texts/ | false | false | default | 1 | null | ||
Blackwell Buy or Not? Cold feet... | 0 | Hello All,
I just started my journey last March and have been saving for a Workstation Blackwell GPU. I finally got my Epyc H13SSL-N to play nice with two 3090's and am pretty excited about what is possible but the limitations are fairly obvious. I have an invoice and quote and should be able to get one for around 8 grand. I am getting cold feet as it's a lot of money even though it's what I've been saving for. Could someone just push me over the edge or talk some sense into me as to what I'll be able to do with a blackwell + one 3090 vs. 2 3090's? I want to hear from actual humans please. | 2026-01-08T21:21:15 | https://www.reddit.com/r/LocalLLaMA/comments/1q7ofd5/blackwell_buy_or_not_cold_feet/ | joelasmussen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7ofd5 | false | null | t3_1q7ofd5 | /r/LocalLLaMA/comments/1q7ofd5/blackwell_buy_or_not_cold_feet/ | false | false | self | 0 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.