| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Egocentric-10K is the largest egocentric dataset. It is the first dataset collected exclusively in real factories (Build AI - 10,000 hours - 2,153 factory workers - 1,080,000,000 frames) | 414 | Hugging Face (Apache 2.0): [https://huggingface.co/datasets/builddotai/Egocentric-10K](https://huggingface.co/datasets/builddotai/Egocentric-10K)
Eddy Xu on 𝕏: [https://x.com/eddybuild/status/1987951619804414416](https://x.com/eddybuild/status/1987951619804414416)
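For anyone who wants to poke at it, a minimal sketch of streaming a few records with the `datasets` library (the split name and record fields are assumptions, check the dataset card):

```python
from datasets import load_dataset

# Stream instead of downloading 10,000 hours of video up front.
# The "train" split name is an assumption -- check the dataset card.
ds = load_dataset("builddotai/Egocentric-10K", split="train", streaming=True)

for i, sample in enumerate(ds):
    print(sample.keys())  # inspect which fields each record exposes
    if i >= 2:
        break
```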
| 2025-11-11T14:34:43 | https://v.redd.it/nlsslzuj0n0g1 | Nunki08 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ouazho | false | {'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/nlsslzuj0n0g1/DASHPlaylist.mpd?a=1765463696%2CMzc0OTIyOGNlODY5ZjYzYzE0NTUyYWQ4NDQ3YjRiMWU0M2FjZDcyOWI5YjhiNmJiYWI5YjMxZTQwMjhlMTExZQ%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/nlsslzuj0n0g1/CMAF_270.mp4?source=fallback', 'has_audio': False, 'height': 268, 'hls_url': 'https://v.redd.it/nlsslzuj0n0g1/HLSPlaylist.m3u8?a=1765463696%2CNTE0ZmNlNmNjZmVjZjMxOGY3ODllMjFjZjM2ZTBjMGVjZDkyMjExZmZiZDAzN2IyZTE3ZGUyMWZiOWRjMmE1Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nlsslzuj0n0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 480}} | t3_1ouazho | /r/LocalLLaMA/comments/1ouazho/egocentric10k_is_the_largest_egocentric_dataset/ | false | false | 414 | {'enabled': False, 'images': [{'id': 'Z2ZjbDkzdmowbjBnMUB18FDMNIKrOWZMaI6GCxWf_t_2BvSabc90NvjIF-MD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z2ZjbDkzdmowbjBnMUB18FDMNIKrOWZMaI6GCxWf_t_2BvSabc90NvjIF-MD.png?width=108&crop=smart&format=pjpg&auto=webp&s=de7a7984ccba51a12dae6c8ffb340f9b30779431', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/Z2ZjbDkzdmowbjBnMUB18FDMNIKrOWZMaI6GCxWf_t_2BvSabc90NvjIF-MD.png?width=216&crop=smart&format=pjpg&auto=webp&s=a5bd0b6a1b3f4411acdc9f0392fb134e2534a1d1', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/Z2ZjbDkzdmowbjBnMUB18FDMNIKrOWZMaI6GCxWf_t_2BvSabc90NvjIF-MD.png?width=320&crop=smart&format=pjpg&auto=webp&s=f2549a3d1403cceac77586bd92d49914c5a06fc2', 'width': 320}], 'source': {'height': 336, 'url': 'https://external-preview.redd.it/Z2ZjbDkzdmowbjBnMUB18FDMNIKrOWZMaI6GCxWf_t_2BvSabc90NvjIF-MD.png?format=pjpg&auto=webp&s=25749b1cfbad515b0572cdc9b00ae47b69e84715', 'width': 600}, 'variants': {}}]} | |
The multi-tenant inference cloud is coming. Who's actually solving GPU isolation? | 0 | Nebius's CBO just called the multi-tenant inference cloud a core focus after their very strong Q3 earnings.
But everyone's avoiding the hard part: GPU isolation.
How do you run multiple models/customers on one GPU without:
· Noisy neighbors ruining latency?
· Terrible utilization from over-provisioning?
· Slow, expensive cold starts?
Is this just a hardware problem, or is there a software solution at the runtime layer?
Or are we stuck with dedicated GPUs forever? | 2025-11-11T14:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ouarc6/the_multitenant_inference_cloud_is_coming_whos/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouarc6 | false | null | t3_1ouarc6 | /r/LocalLLaMA/comments/1ouarc6/the_multitenant_inference_cloud_is_coming_whos/ | false | false | self | 0 | null |
Suggestions for Newbie with i7-7800X × 12 / 64 GB RAM / GTX 1060 6GB | 1 | Hello friendos,
I'm new to local LLMs, I've been reading and lurking for a while, and after experimenting with an Orange Pi 5 setup for a bit, I've now gone dumpster diving and invested in some better hardware to run LLMs locally. For my background: I'm a systems engineer specialized in networking, I just recently started Python programming, and I've started configuring simple AI agents that use the ChatGPT/Claude APIs to create simple support Slack bots for different teams at my company. I'm diving into local LLMs both to learn more about the technology and how it works (to improve my skills for work) and to build my own little passion project here.
I know that my specs are limited when it comes to model size and computing power (i7-7800X × 12 / 64 GB RAM / GTX 1060 6GB, Ubuntu 24.04 with Windows 11 dual boot). I will probably invest in a better GPU over time, but currently this is the hardware I have to work with. I'm looking for suggestions on which models I can reliably use with my current hardware and which additional software I should look into.
My goal is to build a functional assistant that I can use to control Home Assistant, organize documents, help with coding, work with calendars and to-do lists, etc. For that I want to write different functions, tools, scripts and workflows. I already successfully made a long-term memory function (automatic summary of conversations into a diary format) and a memory retrieval function ("remembering" things by going through archived conversations). To make all that work properly, I need to know which model works best for different tasks (like logic, understanding context, coding, conversation, creative writing or thinking), how "big" the model can realistically be (7B or 14B quantized to 4 bits?), how I can get the most juice out of my hardware for now, which additional software to use, etc. The idea is to have one main model that can then use the appropriate tools and workflows, utilize other more specialized models for specific tasks, and then return the desired result.
Any recommendations, tips and tricks, as well as links to resources, are highly appreciated. While there is a lot of documentation out there, it's kind of overwhelming for me; I'm more of the "learning by doing" type, with YouTube tutorials and little test projects to get the hang of it.
| 2025-11-11T14:25:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ouar14/suggestions_for_newbie_with_i77800x_12_64_gb_ram/ | ConstantinGB | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouar14 | false | null | t3_1ouar14 | /r/LocalLLaMA/comments/1ouar14/suggestions_for_newbie_with_i77800x_12_64_gb_ram/ | false | false | self | 1 | null |
VibeVoice 7B, ComfyUI and a 12GB NVIDIA 3060 - why do I keep hitting a RAM limit, even when offloading to the PC's main RAM? | 1 | Title says most of it, but just to add that I'm using a quantized 8-bit model (DevParker from Hugging Face) set to 4-bit (8-bit is too memory intensive, so in ComfyUI's Single Speaker node I set the quantize_lim to 4bit). But even with a short paragraph of text it intermittently crashes due to running out of memory.
Not sure why, the 12GB 3060 should be enough, and offloading to main RAM should help too? (I start the server with: **python main.py --lowvram** ).
Also, even if it processes the paragraph okay, if I then want to run the same TTS again I need to restart the server first or it will definitely run out of VRAM the next time.
I will say that I'm pretty new to voice cloning and TTS, having only previously experimented with Chatterbox TTS (at least it didn't crash, but I wasn't happy with the voice quality, prosody, etc). It took forever just to get everything running.
Any tips please? Or is part of the problem my configuration of the Single Speaker node in ComfyUI? Or should I be using a different model, etc.?
Am running it under Windows 10 64bit, 16GB RAM. | 2025-11-11T14:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/1oualtf/vibevoice_7b_comfyui_and_a_12gb_nvidia_3060_why/ | Twigling | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oualtf | false | null | t3_1oualtf | /r/LocalLLaMA/comments/1oualtf/vibevoice_7b_comfyui_and_a_12gb_nvidia_3060_why/ | false | false | self | 1 | null |
What's your biggest AI safety challenge? Survey on evals, guardrails, and real deployment decisions? | 1 | Quick question for the community: when you're running models locally or deploying them, what's your #1 safety concern?
We ran a poll with AI safety practitioners and got some interesting results:
* 41% said reliable benchmarks/evals
* 41% said building effective guardrails
* Only 12% said regulatory compliance
* Only 6% said internal team education
The AI Alliance is doing a broader survey to understand what's actually blocking progress on AI safety in 2025. Takes about 10 minutes.
What's particularly interesting: safety is rated 8/10 in importance, but 2/3 of practitioners say they haven't encountered use cases too risky to deploy. Are we solving the right problems?
https://preview.redd.it/5mteqk41ym0g1.png?width=350&format=png&auto=webp&s=918acb15d89b9698301e4be0330426647e9af566
| 2025-11-11T14:09:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ouae58/whats_your_biggest_ai_safety_challenge_survey_on/ | AI_Alliance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouae58 | false | null | t3_1ouae58 | /r/LocalLLaMA/comments/1ouae58/whats_your_biggest_ai_safety_challenge_survey_on/ | false | false | 1 | null | |
Weird output from qwen3-vl | 1 | I'm running qwen3-vl-30b-a3b-instruct with the Unsloth quant in Q5 on llama.cpp and I'm getting really weird output. It's not the only model that does it: I also tried qwen3-vl-32b in both instruct and thinking variants, quants like Q5, Q2 and Q4 from both Qwen and Unsloth, and even different llama.cpp versions, but I still get the same output and I don't even know why.
This is how I would load the model : llama-server -hf Qwen/Qwen3-VL-32B-Instruct-GGUF:Q4_K_M | 2025-11-11T13:58:52 | Pleasant-Key3390 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oua512 | false | null | t3_1oua512 | /r/LocalLLaMA/comments/1oua512/weird_output_from_qwen3vl/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'i8oh2r97wm0g1', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/i8oh2r97wm0g1.jpeg?width=108&crop=smart&auto=webp&s=9f8a72bf07393b0ba0eeaf0a5cb37c3a42bdfc7b', 'width': 108}, {'height': 105, 'url': 'https://preview.redd.it/i8oh2r97wm0g1.jpeg?width=216&crop=smart&auto=webp&s=da29e791d4a1b0526aeadd37c97098b672ebc1d8', 'width': 216}, {'height': 156, 'url': 'https://preview.redd.it/i8oh2r97wm0g1.jpeg?width=320&crop=smart&auto=webp&s=e3ddd7853ef4e513c87bbe7369bb42613f9959ee', 'width': 320}, {'height': 312, 'url': 'https://preview.redd.it/i8oh2r97wm0g1.jpeg?width=640&crop=smart&auto=webp&s=88bf236f46c861605e9d5f712ed2b9eff3c8fd0e', 'width': 640}, {'height': 469, 'url': 'https://preview.redd.it/i8oh2r97wm0g1.jpeg?width=960&crop=smart&auto=webp&s=7967320af657c432110264a50c5c2e5b2b6898ba', 'width': 960}, {'height': 527, 'url': 'https://preview.redd.it/i8oh2r97wm0g1.jpeg?width=1080&crop=smart&auto=webp&s=ad16e9afe04b039d32f9ea15e5f779348443d782', 'width': 1080}], 'source': {'height': 547, 'url': 'https://preview.redd.it/i8oh2r97wm0g1.jpeg?auto=webp&s=93d53d02e21e46021924e373c8f37c40710c5075', 'width': 1119}, 'variants': {}}]} | |
What could bring down the prices of GPUs? OR How could I use more models with low system config? | 0 | Well, 1st half of title is just to get attention.
1. Bring **more optimizations** to libraries/frameworks/tools/apps (e.g. llama.cpp, vllm, etc.) to get the highest t/s. (Well, they did, they are doing it, and there'll be more over time.)
2. Improve **CPU-only performance** (e.g. ik\_llama.cpp, etc.) to get the highest t/s
3. **More MoE models** (in small - 10-35B, medium - 35-100B, big - 100-300B, large - 300B-1T+ ranges)
4. **Prune more models** with additional training & distillation
5. Bring **more techniques/architectures like MoE** (e.g. Qwen3-Next, Kimi-Linear, Megrez .... I think experts could give more model names)
6. **More Tailored models** in all categories (Currently we see only few categories like Coding, Medical .... that too less models count) - Ex: allenai's [FlexOlmo models](https://huggingface.co/allenai/FlexOlmo-7x7B-1T-RT) (Public, Math, News, Academic, Code, Creative Writing, Reddit) - [Waiting for llama.cpp support & GGUF](https://www.reddit.com/r/allenai/comments/1lvrixq/comment/nnss8gx/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
7. **More models in all size ranges**. Ex: Qwen3 did this nicely (0.6B, 1.7B, 4B, 8B, 14B, 30B-A3B, 32B, 80B-A3B, 235B-A22B, 480B. Don't forget their Omni, VL, etc. models.)
8. >!Chinese/more/new companies come up with 48-64-72-96-128 GB GPUs (instead of the usual 32GB) at cheaper prices, which creates big competition with the big players!< (Not-so-serious answer for the 1st half of the post title)
What else could help on this? Please share your thoughts.
Want to mention Megrez here. It would've been popular if llama.cpp supported this model [already](https://github.com/ggml-org/llama.cpp/issues/16724). Based on CPU only [stats below](https://www.reddit.com/r/LocalLLaMA/comments/1nryoa5/comment/ngjfgun/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button), it's 3X faster than similar size dense model. Any other models like Megrez?
[Megrez2: 21B latent, 7.5B on VRAM, 3B active—MoE on single 8GB card](https://www.reddit.com/r/LocalLLaMA/comments/1nryoa5/megrez2_21b_latent_75b_on_vram_3b_activemoe_on/)
Qwen_Qwen3-8B-Q4_K_M.gguf (4.68GB)
[PP: 74T/7.63s (3.75T/s 0.13m)|TG: 1693T/1077.52s (3.59T/s 17.96m)]
Megrez2-3x7B-A3B_Q4_K_M.gguf (4.39GB)
[PP: **/2.72s (8.93T/s 0.05m)|TG: 311T/47.85s (10.13T/s 0.80m)]
Ling-mini-2.0-Q4_K_M.gguf (9.23GB)
[PP: 60T/0.83s (27.86T/s 0.01m)|TG: 402T/23.52s (27.22T/s 0.39m)]
Posted this thread for Poor GPU Club. | 2025-11-11T13:55:07 | https://www.reddit.com/r/LocalLLaMA/comments/1oua1yg/what_could_bring_down_the_prices_of_gpus_or_how/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oua1yg | false | null | t3_1oua1yg | /r/LocalLLaMA/comments/1oua1yg/what_could_bring_down_the_prices_of_gpus_or_how/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'FP-suYB09301VzmLZhKMPDg_Phlx4-Hl2Cm-qmpQxGk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FP-suYB09301VzmLZhKMPDg_Phlx4-Hl2Cm-qmpQxGk.png?width=108&crop=smart&auto=webp&s=2dd017956fe88d080b3508bdc7645eae3aaa0429', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FP-suYB09301VzmLZhKMPDg_Phlx4-Hl2Cm-qmpQxGk.png?width=216&crop=smart&auto=webp&s=18986ad9a9977bc31b8c78eaf92b7b658f15f32a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FP-suYB09301VzmLZhKMPDg_Phlx4-Hl2Cm-qmpQxGk.png?width=320&crop=smart&auto=webp&s=a9dc4b26a19b3b4ce5e6172d5a6b5ee4d9166dd6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FP-suYB09301VzmLZhKMPDg_Phlx4-Hl2Cm-qmpQxGk.png?width=640&crop=smart&auto=webp&s=9c268a4b21b25f393e3844ff4f776bc39c33697a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FP-suYB09301VzmLZhKMPDg_Phlx4-Hl2Cm-qmpQxGk.png?width=960&crop=smart&auto=webp&s=b2018f7494db4eee5fe2fefe9a330a0b55c0852c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FP-suYB09301VzmLZhKMPDg_Phlx4-Hl2Cm-qmpQxGk.png?width=1080&crop=smart&auto=webp&s=d8738de07bd7670e543785598dd443bb31ba7c13', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FP-suYB09301VzmLZhKMPDg_Phlx4-Hl2Cm-qmpQxGk.png?auto=webp&s=acc16c9b731944a0f9321910caafe5d83dc5c238', 'width': 1200}, 'variants': {}}]} |
Best Opensource OCR Models Support Arabic + English | 6 | I am trying to find a good open-source OCR solution that works well with Arabic and English. Most of my documents are receipts, contracts, and invoices.
If anyone has experience with Arabic OCR, could you please let me know which models you have tried?
Thanks in advance | 2025-11-11T13:11:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ou92ga/best_opensource_ocr_models_support_arabic_english/ | Ai_Peep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou92ga | false | null | t3_1ou92ga | /r/LocalLLaMA/comments/1ou92ga/best_opensource_ocr_models_support_arabic_english/ | false | false | self | 6 | null |
Why is no company making a black box I can buy with a good GPU that you just keep turned on at home and that runs models for a ChatGPT-like app? | 0 | It would be a no-brainer. I would buy one and I would tell anyone I know to buy one. Make two or three models: a base one, a medium one, and a pro one. Put its endpoint and API key generation in the app so developers can access it directly.
Make it stupidly simple, like a freaking Wi-Fi router. An on and off button. I don't care if I don't have access to all the Hugging Face models. Keep a dozen good choices of currently great models.
Make a base model for $1k, a medium one for $3k and a pro one for $10k. I'd buy the pro one for my family in a heartbeat.
TAKE MY MONEY! | 2025-11-11T13:00:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ou8tii/why_is_no_company_making_a_black_box_i_can_buy/ | matteoianni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou8tii | false | null | t3_1ou8tii | /r/LocalLLaMA/comments/1ou8tii/why_is_no_company_making_a_black_box_i_can_buy/ | false | false | self | 0 | null |
Kimi K2 Thinking is a Better Agentic AI than I thought | 44 | https://reddit.com/link/1ou8t7z/video/9dtnlbhhlm0g1/player
Just ran a quick eval on a deep agent built for customer support. It's on par with GPT-5 in agentic capabilities.
It's a bigger deal than I thought! | 2025-11-11T13:00:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ou8t7z/kimi_k2_thinking_is_a_better_agentic_ai_than_i/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou8t7z | false | null | t3_1ou8t7z | /r/LocalLLaMA/comments/1ou8t7z/kimi_k2_thinking_is_a_better_agentic_ai_than_i/ | false | false | self | 44 | null |
You will need to go to an offline physical location to use (some) future ASI-level models | 0 | The risk of other countries or companies distilling your model will likely be too great to host something like this on the internet.
And this isn't as big of a deal when we have a model like gpt-5, but it is a different story altogether if we fast forward 10+ years out and consider the capabilities of those models.
Thoughts? | 2025-11-11T12:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ou8qks/you_will_need_to_go_to_an_offline_physical/ | cobalt1137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou8qks | false | null | t3_1ou8qks | /r/LocalLLaMA/comments/1ou8qks/you_will_need_to_go_to_an_offline_physical/ | false | false | self | 0 | null |
Why is MiniMax M2 a Full Attention model? | 16 | The CEO of MiniMax addresses frequent community questions about why MiniMax M2 sticks with Full Attention instead of adopting more efficient alternatives like Linear or Sparse Attention. After many repeated private explanations, they decided to publicly share the reasoning and lessons behind this decision.
# Theory vs. Reality: The Efficient Attention Dilemma
While the benefits of Linear/Sparse Attention are widely discussed, real-world implementation in large-scale, industrial LLM systems is much more complex. Full Attention still holds practical advantages across various scenarios (code/math, agents, multimodal tasks, long chain-of-thought, RL, low-precision compute, speculative decoding, etc.). To justify switching to efficient attention, many technical and evaluation challenges need to be overcome.
# Motivation: Why Even Try Efficient Attention?
If compute were unlimited, most wouldn’t bother with Linear/Sparse Attention. Today, all efforts to develop efficient attention are fundamentally about saving compute, not necessarily about reducing token counts or hitting scaling limits. The goal is to build a model structure that delivers the best performance under fixed compute budgets for both training and inference.
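To make the trade-off concrete, here is a toy PyTorch sketch (an illustration for this summary, not MiniMax code) of one decode step under standard softmax attention versus a linear-attention-style recurrence: the former must read a KV cache that grows with sequence length, while the latter carries a fixed-size state.

```python
import torch
import torch.nn.functional as F

def full_attention_step(q, K, V):
    # q: (d,), K/V: (t, d) -- the KV cache; per-step cost and memory grow with t
    scores = (K @ q) / (q.shape[-1] ** 0.5)
    w = torch.softmax(scores, dim=0)
    return w @ V  # (d,)

def linear_attention_step(q, k, v, S, z):
    # S: (d, d), z: (d,) -- constant-size state, independent of sequence length
    phi = lambda x: F.elu(x) + 1          # positive feature map
    S = S + torch.outer(phi(k), v)        # accumulate phi(k) v^T
    z = z + phi(k)
    out = (phi(q) @ S) / (phi(q) @ z + 1e-6)
    return out, S, z
```

The catch described below is that this cheaper recurrence only pays off once sequences are long enough and once kernels, caching, precision handling, and evaluation are all engineered around it.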
# Core Problems: Effectiveness, Speed, and Price
To make efficient attention viable in production, three key factors must be balanced: effectiveness (the model’s floor), speed (throughput), and cost. The biggest hurdle is not the structure itself, but the limitations of current evaluation methodologies. Comprehensive benchmarks and real-world metrics are both necessary and difficult to build.
# 1. Limitations of Evaluation
* **Observability**: Benchmarks rapidly improve as models are optimized for them, but creating a truly comprehensive evaluation pipeline to expose real capability gaps remains unsolved—especially for new attention mechanisms.
* **No Free Lunch**: Reducing attention complexity isn’t without trade-offs. Earlier, hybrid models combining Lightning Attention and Full Attention seemed to perform well on standard benchmarks, but larger models exposed clear weaknesses in complex, multi-step reasoning tasks.
* **Proxy Metrics and Scaling**: Proxy metrics can match or beat MHA on benchmarks after several iterations, but may not generalize as models scale up. Many issues only emerge at scale.
* **High Observation Cost**: Early proxy indicators for complex tasks are hard to measure during pretraining, and as task complexity grows, so does the compute needed to reach statistical confidence, slowing iteration.
* **Other Variables**: There are many confounding factors—model structure, data distribution, optimizer choice—all can sway outcomes, and conclusions may flip as the data pipeline evolves.
# 2. Infrastructure Gaps for Efficient Attention
* **Training**: Linear/Sparse Attention often becomes memory-bound rather than compute-bound. Without deep IO optimization, GPU utilization suffers.
* **Inference**: Delivering truly faster, cheaper inference is difficult. Theoretical memory/computation savings only kick in for long enough sequences (several thousand tokens), which is still short for modern LLMs.
* Challenges include:
* Low-precision state storage (more sensitive for linear attention)
* Efficient prefix caching (critical for practical workloads)
* Speculative decoding optimizations
* Fortunately, these are solvable, but require engineering effort.
# Next Steps: What Needs to Happen
Scaling remains a central theme. As context lengths increase faster than GPU compute, the payoff from efficient attention will become more pronounced. To prepare, the team needs:
* More diverse and information-rich long-form data
* Better evaluation systems and experimental paradigms for rapid iteration
* Improved training/inference infrastructure to fully exploit available hardware
# Appendix: Lessons from Open-Source and Failed Experiments
They briefly discuss the (now-removed) SWA inference code and why it didn’t make the cut—it simply didn’t work well enough. Hybrid approaches (mixing CPT and SWA, inter/intra-layer hybridization) were explored, but all exhibited significant performance drops with longer contexts, especially in agent scenarios. Analysis revealed entrenched attention patterns (like retrieval and induction heads) are established early and hard to adapt via hybridization, and probing to selectively retain full attention wasn’t practically successful. This issue isn’t related to “attention sink.” Readers interested in this line of thinking are encouraged to analyze performance in models like GPT-OSS, CWM, and Gemma, especially for long-context tasks. | 2025-11-11T12:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ou8b89/why_is_minimax_m2_a_full_attention_model/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou8b89 | false | null | t3_1ou8b89 | /r/LocalLLaMA/comments/1ou8b89/why_is_minimax_m2_a_full_attention_model/ | false | false | self | 16 | null |
Pls tell me I shouldn't spend $3k on 5090 32gb vram desktop PC nor Strix Halo 128Gb | 0 | I want to run local LLMs that are good for frequent coding tasks but I also want a powerful gaming machine.. but both of these are good to haves.. help!! | 2025-11-11T11:56:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ou7juw/pls_tell_me_i_shouldnt_spend_3k_on_5090_32gb_vram/ | IntroductionSouth513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou7juw | false | null | t3_1ou7juw | /r/LocalLLaMA/comments/1ou7juw/pls_tell_me_i_shouldnt_spend_3k_on_5090_32gb_vram/ | false | false | self | 0 | null |
Kani TTS Vie — Fast & Natural Vietnamese Text-to-Speech 😻 | 13 | https://reddit.com/link/1ou787r/video/ri61g9qx6m0g1/player
We just finished fine-tuning Kani TTS Vie, a high-quality Vietnamese Text-to-Speech model based on Kani-370M.
This release focuses on speed, clarity, and natural prosody — aiming to be one of the fastest and most expressive Vietnamese TTS models available right now.
If you're working with voice apps, narration systems, chatbots, VTubers, or dubbing, feel free to try it out!
Model: [https://huggingface.co/pnnbao-ump/kani-tts-370m-vie](https://huggingface.co/pnnbao-ump/kani-tts-370m-vie)
Source Code: [https://github.com/pnnbao97/Kani-TTS-VieDemo](https://github.com/pnnbao97/Kani-TTS-VieDemo)
Try demo: [https://huggingface.co/spaces/pnnbao-ump/Kani-TTS-Vie](https://huggingface.co/spaces/pnnbao-ump/Kani-TTS-Vie) | 2025-11-11T11:38:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ou787r/kani_tts_vie_fast_natural_vietnamese_texttospeech/ | DrCrab97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou787r | false | null | t3_1ou787r | /r/LocalLLaMA/comments/1ou787r/kani_tts_vie_fast_natural_vietnamese_texttospeech/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'n6cwpUf5ye-oo9T-ubvOmKJ1rcHOcczkg6rjd9sasNs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n6cwpUf5ye-oo9T-ubvOmKJ1rcHOcczkg6rjd9sasNs.png?width=108&crop=smart&auto=webp&s=75624c8f6111e2e8c6a1e9a869cf98938cd9b5a4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/n6cwpUf5ye-oo9T-ubvOmKJ1rcHOcczkg6rjd9sasNs.png?width=216&crop=smart&auto=webp&s=0677aba1cc9307b51e208b516d86fc10dae7e076', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/n6cwpUf5ye-oo9T-ubvOmKJ1rcHOcczkg6rjd9sasNs.png?width=320&crop=smart&auto=webp&s=d7979886a0ce73a9a80befdfea9c62af5067cca9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/n6cwpUf5ye-oo9T-ubvOmKJ1rcHOcczkg6rjd9sasNs.png?width=640&crop=smart&auto=webp&s=056408ba1f5131625f673ff5226c4060dcbee4f2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/n6cwpUf5ye-oo9T-ubvOmKJ1rcHOcczkg6rjd9sasNs.png?width=960&crop=smart&auto=webp&s=5567f20d6c2de32cd40c29a68d26bf9853f7b0a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/n6cwpUf5ye-oo9T-ubvOmKJ1rcHOcczkg6rjd9sasNs.png?width=1080&crop=smart&auto=webp&s=105c40ed42522834152102137d5e7cac53180d07', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/n6cwpUf5ye-oo9T-ubvOmKJ1rcHOcczkg6rjd9sasNs.png?auto=webp&s=1caf93ecf89a58b6e6e38dd2beb65fb98b610093', 'width': 1200}, 'variants': {}}]} |
Is a 5090 good enough for my use case or should I wait a bit? | 2 | I want to run a local llm for classification / extraction as a one shot from a selection of given inputs something like given these 10 input parameters which may have a token length between say 10 and however many tokens a string of words around 100-300 words is as one token is a description which can vary in length the rest of the parameters will either be doubles or single string words.
I’m not sure what sort of size model would be the minimum acceptable for this would 30b be enough for example?
The gpu will be part of my pc for both productivity and gaming / entertainment so I’m wondering if it’s best to wait for a larger vram gpu in the future from nvidia or get the 5090 now if my use case is currently achievable.
Im very new to this so please don’t shoot me down if this is a stupid question all I know is that my current 2080 ti is cooked and can’t do it in any speed that makes this practical | 2025-11-11T10:48:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ou6dgt/is_a_5090_good_enough_for_my_use_case_or_should_i/ | tradegreek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou6dgt | false | null | t3_1ou6dgt | /r/LocalLLaMA/comments/1ou6dgt/is_a_5090_good_enough_for_my_use_case_or_should_i/ | false | false | self | 2 | null |
Benchmarked Llama 3.3 70B vs Mixtral vs Gemma on real tasks. Sharing numbers + OpenAI-compatible endpoint. | 1 | Been experimenting with vLLM + autoscaling, and collected some quick benchmarks comparing:
* Llama 3.3 70B
* Llama 3.1 8B
* Mixtral 8x7B
* Gemma 2 9B
**Tasks tested:**
* Code generation
* Chain-of-thought reasoning
* Summaries
* Multilingual Q&A
**Early results:**
* Llama 3.3 70B wins on reasoning + coding accuracy
* Mixtral 8x7B performs best on multilingual + speed
* Gemma 2 9B is the best “budget” workhorse
* Latency sits around \~1.8–2.2s for mid-length outputs
**Tiny snippet (OpenAI SDK compatible):**
import OpenAI from "openai";  // missing import added; install with `npm install openai`
const client = new OpenAI({ apiKey: process.env.KEY, baseURL: "<your_url>" });
const chat = await client.chat.completions.create({
  model: "llama-3.3-70b",
  messages: [{ role: "user", content: "Give me a TLDR on transformers" }],
  stream: true
});
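For the latency numbers above, here is a minimal Python sketch of how time-to-first-token and total latency can be measured against the same OpenAI-compatible endpoint (key, base URL, and model name are placeholders):

```python
import time
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="<your_url>")  # placeholders

start = time.perf_counter()
ttft = None
stream = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[{"role": "user", "content": "Give me a TLDR on transformers"}],
    stream=True,
)
for chunk in stream:
    if ttft is None and chunk.choices and chunk.choices[0].delta.content:
        ttft = time.perf_counter() - start   # time to first token
total = time.perf_counter() - start          # total wall-clock latency

print(f"TTFT: {ttft:.2f}s, total: {total:.2f}s")
```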
**Ask:**
Which evaluation set would you use to do a fairer comparison?
I can rerun all tests and share full logs.
Link at the bottom so it’s not intrusive:
[https://rapidapi.com/ai-gateway-labs-ai-gateway-labs-default/api/episteme-nexus1]() | 2025-11-11T10:12:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ou5s4x/benchmarked_llama_33_70b_vs_mixtral_vs_gemma_on/ | Confident_Winner_579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou5s4x | false | null | t3_1ou5s4x | /r/LocalLLaMA/comments/1ou5s4x/benchmarked_llama_33_70b_vs_mixtral_vs_gemma_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aWYLp_gB7ZRW-X-qgU1SE95H3FmxNneaaRSDNgdTvZ0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/aWYLp_gB7ZRW-X-qgU1SE95H3FmxNneaaRSDNgdTvZ0.png?width=108&crop=smart&auto=webp&s=3d755bd25a184c20fba879d5c99013ded677806e', 'width': 108}], 'source': {'height': 175, 'url': 'https://external-preview.redd.it/aWYLp_gB7ZRW-X-qgU1SE95H3FmxNneaaRSDNgdTvZ0.png?auto=webp&s=122a338140b9b83c8fdea082768224e868055ecd', 'width': 175}, 'variants': {}}]} |
LM Studio Qwen says: !#!#!#!#!#!# | 0 | I have started to use LM Studio since Ollama is becoming an account-focused experience. GPT-OSS 20B works fine, but with Qwen3-VL-30B it always answers: !#!#!#!#!#!#!#!#!#!#!#!#!#! no matter the input.
Why could that be? | 2025-11-11T09:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ou4xq8/lm_studio_qwen_says/ | _camera_up | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou4xq8 | false | null | t3_1ou4xq8 | /r/LocalLLaMA/comments/1ou4xq8/lm_studio_qwen_says/ | false | false | self | 0 | null |
Building LLM inference from scratch - clean, minimal and (sort of) fast | 27 | I wrote my own LLM inference script for GPT-2 models from scratch, following first principles with the motto of **learning by building**. I built it incrementally, starting from a very naive greedy-decoding-based inference all the way to latency-optimized (kv-cache/speculative decoding) inference using PyTorch.
My implementation includes:
**Inference & Sampling:**
* greedy decoding, EOS handling, context window management using sliding window
* temperature scaling, multinomial sampling
* top-k and top-p (nucleus) sampling (a condensed sketch follows this list)
* presence, frequency, and repetition penalties controls
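For a quick taste, here is a condensed sketch of the temperature + top-k + top-p step (illustrative, not a verbatim copy of the repo's code):

```python
import torch

def sample_next_token(logits, temperature=1.0, top_k=50, top_p=0.9):
    """Sample the next token id from raw logits with temperature, top-k and top-p filtering."""
    logits = logits / max(temperature, 1e-6)             # temperature scaling
    probs = torch.softmax(logits, dim=-1)

    if top_k is not None:                                 # top-k: keep the k most likely tokens
        kth = torch.topk(probs, top_k).values[..., -1, None]
        probs = torch.where(probs < kth, torch.zeros_like(probs), probs)

    if top_p is not None:                                 # top-p: smallest nucleus with mass >= p
        sorted_probs, sorted_idx = torch.sort(probs, descending=True)
        cum = torch.cumsum(sorted_probs, dim=-1)
        sorted_probs[cum - sorted_probs > top_p] = 0.0
        probs = torch.zeros_like(probs).scatter(-1, sorted_idx, sorted_probs)

    probs = probs / probs.sum(dim=-1, keepdim=True)       # renormalize and sample
    return torch.multinomial(probs, num_samples=1)
```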
**Latency Optimizations:**
* fp16/bf16 optimized inference
* kv-cache (dynamic -> static + overflow fix) integration (see the decoding-loop sketch after this list)
* variable-length batching with right-padding (allows for samples with different lengths)
* draft-verify speculative decoding based on the [DeepMind paper](https://arxiv.org/abs/2302.01318)
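And a minimal sketch of what kv-cache decoding looks like with Hugging Face's GPT-2 (greatly simplified; the static cache and overflow handling live in the repo):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
past = None
with torch.no_grad():
    for _ in range(20):
        # With a cache, each step feeds only the newest token instead of the whole prefix.
        out = model(ids if past is None else ids[:, -1:], past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_tok = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy, for brevity
        ids = torch.cat([ids, next_tok], dim=-1)

print(tok.decode(ids[0]))
```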
I also benchmarked my kv-cache and speculative decoding implementations on GPT-2 models to see what kind of speedups they achieve.
Here are the best speedups I was able to get:
**config:** RTX 4090, cuda 12.8, torch 2.9.0
|Optimization|Best Speedup (float32)|Best Speedup (float16)|
|:-|:-|:-|
|kv-cache|**2.76×** (gpt2-large, 800 tokens)|**1.48×** (gpt2-xl, 800 tokens)|
|speculative decoding|**1.63×** (draft: gpt2 -> target: gpt2-xl, gamma=5)|**1.31×** (draft: gpt2 -> target: gpt2-xl, gamma=3)|
The speedups are quite encouraging given the relatively small model sizes and my basic implementations without fancy tricks. :)
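To make the draft-verify mechanism behind those numbers concrete, here is a stripped-down sketch of the accept/reject rule from the paper (batch size 1, no kv-cache, and the bonus token sampled on full acceptance is omitted; the repo has the complete version):

```python
import torch

@torch.no_grad()
def speculative_step(target, draft, input_ids, gamma=4):
    """One draft-verify round; returns input_ids extended by the accepted tokens."""
    # 1) the small draft model proposes gamma tokens autoregressively
    draft_ids, draft_probs = input_ids, []
    for _ in range(gamma):
        p = torch.softmax(draft(draft_ids).logits[:, -1, :], dim=-1)
        draft_probs.append(p)
        draft_ids = torch.cat([draft_ids, torch.multinomial(p, 1)], dim=-1)

    # 2) the target model scores all proposed positions in a single forward pass
    n = input_ids.shape[1]
    logits = target(draft_ids).logits
    target_probs = torch.softmax(logits[:, n - 1 : n - 1 + gamma, :], dim=-1)

    # 3) accept each proposal with prob min(1, p_target/p_draft); resample once on rejection
    out = input_ids
    for i in range(gamma):
        tok = draft_ids[:, n + i]
        p_t = target_probs[:, i].gather(-1, tok.unsqueeze(-1)).squeeze(-1)
        p_d = draft_probs[i].gather(-1, tok.unsqueeze(-1)).squeeze(-1)
        if torch.rand(()) < (p_t / p_d).clamp(max=1.0):
            out = torch.cat([out, tok.unsqueeze(-1)], dim=-1)
        else:
            residual = (target_probs[:, i] - draft_probs[i]).clamp(min=0)
            residual = residual / residual.sum(dim=-1, keepdim=True)
            out = torch.cat([out, torch.multinomial(residual, 1)], dim=-1)
            break
    return out
```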
Like always, I've documented everything from the code, implementations and notes:
* **Repo:** [https://github.com/garg-aayush/building-from-scratch/tree/main/llm-inference](https://github.com/garg-aayush/building-from-scratch/tree/main/llm-inference)
* **Detailed Readme and benchmarks:** [https://github.com/garg-aayush/building-from-scratch/blob/main/llm-inference/Readme.md](https://github.com/garg-aayush/building-from-scratch/blob/main/llm-inference/Readme.md)
* **Commit-by-commit development**: Each implementation and optimization is a separate commit for easy understanding | 2025-11-11T09:12:29 | garg-aayush | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ou4ubn | false | null | t3_1ou4ubn | /r/LocalLLaMA/comments/1ou4ubn/building_llm_inference_from_scratch_clean_minimal/ | false | false | 27 | {'enabled': True, 'images': [{'id': 'xLwlEQVA3cJAjRmF5_porCGWg3YJdUZFwVgGYokA7MI', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/sozysc8wgl0g1.png?width=108&crop=smart&auto=webp&s=87af32da69a244a5f54499bdc722c91f923e3261', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/sozysc8wgl0g1.png?width=216&crop=smart&auto=webp&s=461d1825015df9d67ae1d539326bc154b8538b7f', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/sozysc8wgl0g1.png?width=320&crop=smart&auto=webp&s=2d0716df2b15f022f59503a9c63b383093af4843', 'width': 320}, {'height': 437, 'url': 'https://preview.redd.it/sozysc8wgl0g1.png?width=640&crop=smart&auto=webp&s=5fd1ce0ff5b4fb76f2aa3daa6eb7e75a6ad13138', 'width': 640}, {'height': 655, 'url': 'https://preview.redd.it/sozysc8wgl0g1.png?width=960&crop=smart&auto=webp&s=b12d45b40bd9e39b2e99602c9b119283345c5619', 'width': 960}, {'height': 737, 'url': 'https://preview.redd.it/sozysc8wgl0g1.png?width=1080&crop=smart&auto=webp&s=77d0371e55eb60a9509675cd38fddafede9b6c40', 'width': 1080}], 'source': {'height': 1376, 'url': 'https://preview.redd.it/sozysc8wgl0g1.png?auto=webp&s=ddeade051254411ce5b97b66b32588969195f8c7', 'width': 2014}, 'variants': {}}]} | ||
RAG Paper 25.11.11 | 25 | 1. [Q-RAG: Long Context Multi-step Retrieval via Value-based Embedder Training](http://arxiv.org/abs/2511.07328v1)
2. [AgenticSciML: Collaborative Multi-Agent Systems for Emergent Discovery in Scientific Machine Learning](http://arxiv.org/abs/2511.07262v1)
3. [Oh That Looks Familiar: A Novel Similarity Measure for Spreadsheet Template Discovery](http://arxiv.org/abs/2511.06973v1)
4. [Rethinking Retrieval-Augmented Generation for Medicine: A Large-Scale, Systematic Expert Evaluation and Practical Insights](http://arxiv.org/abs/2511.06738v1)
5. [When Evidence Contradicts: Toward Safer Retrieval-Augmented Generation in Healthcare](http://arxiv.org/abs/2511.06668v1)
6. [TabRAG: Tabular Document Retrieval via Structured Language Representations](http://arxiv.org/abs/2511.06582v1)
**Collected by OpenBMB, transferred by** [**RagView**](https://www.ragview.ai/) **.** | 2025-11-11T09:06:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ou4qvh/rag_paper_251111/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou4qvh | false | null | t3_1ou4qvh | /r/LocalLLaMA/comments/1ou4qvh/rag_paper_251111/ | false | false | self | 25 | null |
My local AI setup now rivals the cloud. This HY100 actually delivers. | 1 | [removed] | 2025-11-11T09:05:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ou4qn9/my_local_ai_setup_now_rivals_the_cloud_this_hy100/ | LogicBomb139 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou4qn9 | false | null | t3_1ou4qn9 | /r/LocalLLaMA/comments/1ou4qn9/my_local_ai_setup_now_rivals_the_cloud_this_hy100/ | false | false | 1 | null | |
Docker, Conda vs Venv. | 3 | Hello.
I need a little help, advice.
I want to use different AI tools, integrate them, test them, etc. For example: OpenWebUI, then different TTS models, ChromaDB. It is more like a mini AI lab on my main PC.
I don't know how to choose the correct environment. I tried most of them. As I understand it, venv is not recommended as it is only for Python, so it can be a problem when using the GPU.
So I tested both Conda and Docker. Conda is good, but after searching a lot, I see that people recommend Docker. When I moved to Docker, it got worse: a lot of errors, conflicts, more network mapping problems, etc. Docker is a headache for me. I had reasons why I moved to Docker:
I tried Docker because of its portability (also, updating the whole Docker env is easier than a Conda env), but I learned that I can use Conda to back up my env and transfer it to another PC.
So, what do you recommend? Is Conda better than Docker for my case?
Note: I'm using Windows 11, Docker Desktop, WSL2.
Thanks.
| 2025-11-11T07:28:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ou386l/docker_conda_vs_venv/ | NervousAlien55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou386l | false | null | t3_1ou386l | /r/LocalLLaMA/comments/1ou386l/docker_conda_vs_venv/ | false | false | self | 3 | null |
Any alternative to runpod serverless | 6 | Hey Guys,
I am using RunPod serverless to host my ComfyUI workflows as a serverless endpoint, where it charges me when the model is doing inference. But recently I am seeing lots of issues on the hardware side: sometimes it assigns a worker which has the wrong CUDA driver installed, sometimes there is no GPU available, which has made the serverless setup quite unreliable for my production use. Earlier there was no such issue, but it is crap now: most of the time there is no preferred GPU, the worker gets throttled, and if any request comes it kind of waits for around 10 mins and then assigns some GPU worker. Imagine it takes 20 sec to generate an image, but because no GPU is available the user has to wait for 10 mins.
Do you know any alternative provider that offers serverless GPU like RunPod serverless?
what do you recommend. | 2025-11-11T06:58:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ou2rjx/any_alternative_to_runpod_serverless/ | SearchTricky7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou2rjx | true | null | t3_1ou2rjx | /r/LocalLLaMA/comments/1ou2rjx/any_alternative_to_runpod_serverless/ | false | false | self | 6 | null |
How to check my usage or token limit on coding plan?? | 0 | I subscribed to the coding plan of MiniMax M2. How can I know how much I'm using? On the site, I cannot find it. | 2025-11-11T06:19:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ou24bt/how_to_check_my_usage_or_token_limit_on_coding/ | ChemicalSinger9492 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou24bt | false | null | t3_1ou24bt | /r/LocalLLaMA/comments/1ou24bt/how_to_check_my_usage_or_token_limit_on_coding/ | false | false | self | 0 | null |
Major open-source wins this year | 16 | 2025-11-11T05:57:16 | iwanttobeelonmusk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ou1qs4 | false | null | t3_1ou1qs4 | /r/LocalLLaMA/comments/1ou1qs4/major_opensource_wins_this_year/ | false | false | default | 16 | {'enabled': True, 'images': [{'id': 'g3wc1fw9ik0g1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/g3wc1fw9ik0g1.jpeg?width=108&crop=smart&auto=webp&s=802156e1b3c7c9d5c2a77d0e3d6201a0e5f52a45', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/g3wc1fw9ik0g1.jpeg?width=216&crop=smart&auto=webp&s=4fd764c48599fbbe4d47ecea5ffba7170fe779cb', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/g3wc1fw9ik0g1.jpeg?width=320&crop=smart&auto=webp&s=52aeb7cbcff938610d330946aa26747ed6c696db', 'width': 320}, {'height': 321, 'url': 'https://preview.redd.it/g3wc1fw9ik0g1.jpeg?width=640&crop=smart&auto=webp&s=d80ac4d38c26d4f6f461ae2c79ba8a0075771a40', 'width': 640}, {'height': 482, 'url': 'https://preview.redd.it/g3wc1fw9ik0g1.jpeg?width=960&crop=smart&auto=webp&s=8f8dd90858aea77d91300cb55dc29b561e7832a5', 'width': 960}, {'height': 542, 'url': 'https://preview.redd.it/g3wc1fw9ik0g1.jpeg?width=1080&crop=smart&auto=webp&s=bf1da86e9f647dedb14a01680a7afa8a8324c9ae', 'width': 1080}], 'source': {'height': 546, 'url': 'https://preview.redd.it/g3wc1fw9ik0g1.jpeg?auto=webp&s=6e48bb6eea630546b286d463ffe5ced7f884577e', 'width': 1086}, 'variants': {}}]} | ||
Seems like the new K2 benchmarks are not too representative of real-world performance | 537 | 2025-11-11T05:45:03 | cobalt1137 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ou1j3e | false | null | t3_1ou1j3e | /r/LocalLLaMA/comments/1ou1j3e/seems_like_the_new_k2_benchmarks_are_not_too/ | false | false | default | 537 | {'enabled': True, 'images': [{'id': 'awzjyvo3gk0g1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/awzjyvo3gk0g1.png?width=108&crop=smart&auto=webp&s=843852c17a2f782dca926e7f342390729a3c98f2', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/awzjyvo3gk0g1.png?width=216&crop=smart&auto=webp&s=4fa67ae3c302de4f02a79c6720cd35151cf9c48e', 'width': 216}, {'height': 124, 'url': 'https://preview.redd.it/awzjyvo3gk0g1.png?width=320&crop=smart&auto=webp&s=5cc0b808ab494021b7a1b4832108179ba1f9eaa9', 'width': 320}, {'height': 248, 'url': 'https://preview.redd.it/awzjyvo3gk0g1.png?width=640&crop=smart&auto=webp&s=c7f2dd0a4c362653c960b794e0943c0b3784b17b', 'width': 640}, {'height': 372, 'url': 'https://preview.redd.it/awzjyvo3gk0g1.png?width=960&crop=smart&auto=webp&s=07eb0d534ba271d57d9ab5f9faf3939d0f3300b0', 'width': 960}, {'height': 419, 'url': 'https://preview.redd.it/awzjyvo3gk0g1.png?width=1080&crop=smart&auto=webp&s=5ffa4b0d316bc5f2c2c3415214b8f7b95983ab9f', 'width': 1080}], 'source': {'height': 419, 'url': 'https://preview.redd.it/awzjyvo3gk0g1.png?auto=webp&s=8f2b58eb78e21f9652e6951d804d4590d424d359', 'width': 1080}, 'variants': {}}]} | ||
We put a lot of work into a 1.5B reasoning model — now it beats bigger ones on math & coding benchmarks | 599 | 1. It achieves state-of-the-art performance among small (<4B) models, in both competitive math and competitive coding tasks. It even **surpasses DeepSeek R1 0120 in competitive math benchmarks**.
2. We put a lot of care into making sure the **training data is fully decontaminated** — every stage (SFT and RL) went through strict filtering to avoid any overlap with evaluation benchmarks.
3. It’s not designed as a general chatbot (though it can handle basic conversation and factual QA). Our main goal was to **prove that small models can achieve strong reasoning** ability, and we’ve put a lot of work and iteration into achieving that, starting from a base like Qwen2.5-Math-1.5B (which originally had weak math and almost no coding ability) to reach this point.
4. We’d love for the community to test it on your own competitive math/coding benchmarks and share results or feedback here — any insights will help us keep improving.
HuggingFace Paper: [paper](https://huggingface.co/papers/2511.06221)
X Post: [X](https://x.com/WeiboLLM/status/1988109435902832896?s=20)
Model: [Download Model](https://huggingface.co/WeiboAI/VibeThinker-1.5B)
| 2025-11-11T05:37:41 | innocent2powerful | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ou1emx | false | null | t3_1ou1emx | /r/LocalLLaMA/comments/1ou1emx/we_put_a_lot_of_work_into_a_15b_reasoning_model/ | false | false | 599 | {'enabled': True, 'images': [{'id': 'V9xWhpZQWpzVWka039rcMZmQ2xAdO3YH4OGk0wAYd0c', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/fnpk5t7kbk0g1.png?width=108&crop=smart&auto=webp&s=74efa47bdc233b70e38226852fe123424e63043c', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/fnpk5t7kbk0g1.png?width=216&crop=smart&auto=webp&s=607f55e61347faa631a70a2550941804329db132', 'width': 216}, {'height': 173, 'url': 'https://preview.redd.it/fnpk5t7kbk0g1.png?width=320&crop=smart&auto=webp&s=a608d6efa3cf9dcba30140b70f33d016505bf72b', 'width': 320}, {'height': 347, 'url': 'https://preview.redd.it/fnpk5t7kbk0g1.png?width=640&crop=smart&auto=webp&s=3c57d729e6fb3ff57f9b7d7ba1d9d6be31f27588', 'width': 640}, {'height': 520, 'url': 'https://preview.redd.it/fnpk5t7kbk0g1.png?width=960&crop=smart&auto=webp&s=46a09f1f2318cab2a9faef2406c5bc34a30dbfa3', 'width': 960}, {'height': 585, 'url': 'https://preview.redd.it/fnpk5t7kbk0g1.png?width=1080&crop=smart&auto=webp&s=d387e9c53801b6d64a4ca6a26bc2782dbf4cba69', 'width': 1080}], 'source': {'height': 2171, 'url': 'https://preview.redd.it/fnpk5t7kbk0g1.png?auto=webp&s=f8e9798b02d0d37c631dd66ac7180c8a689a88f0', 'width': 4003}, 'variants': {}}]} | ||
API models with oobabooga webui? | 1 | Is it possible to use something like open router to use one of the huge models like deepseek/Kimi in oobabooga for all the control that comes with oobabooga like changing the models response to push past refusals? | 2025-11-11T05:32:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ou1be1/api_models_with_oobabooga_webui/ | Shadow-Amulet-Ambush | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou1be1 | false | null | t3_1ou1be1 | /r/LocalLLaMA/comments/1ou1be1/api_models_with_oobabooga_webui/ | false | false | self | 1 | null |
MiniMax M2 can't handle pdf? | 1 | I use a PDF of a book to test models, and get an idea how they perform at high (127K) context. I am primarily running GLM Air, which works great.
When I upload it via Cherry Studio with MiniMax M2 awq, it says it's garbled or badly OCR'd and can't tell what it is about. If I use Librechat, I just get a 400 error and it can't even upload it.
I am using sglang for both. | 2025-11-11T05:30:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ou1aji/minimax_m2_cant_handle_pdf/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou1aji | false | null | t3_1ou1aji | /r/LocalLLaMA/comments/1ou1aji/minimax_m2_cant_handle_pdf/ | false | false | self | 1 | null |
Fine-Tuning SLMs and Running Them Securely in Your Web Browser | 6 | 2025-11-11T05:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ou1a2x/finetuning_slms_and_running_them_securely_in_your/ | Key_Education_2557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ou1a2x | false | null | t3_1ou1a2x | /r/LocalLLaMA/comments/1ou1a2x/finetuning_slms_and_running_them_securely_in_your/ | false | false | 6 | null | ||
baidu/ERNIE-4.5-VL-28B-A3B-Thinking released. Curious case.. | 130 | It seems Baidu has released the "thinking" variant if their vl model silently. The earlier model was supposedly hybrid, supporting both "thinking" and "non-thinking". The model card says that they have introduced something called "thinking with images" without explaining what it is. They have one put a small hardly visible graph comparing it with gemini 2.5 pro and gpt-5 high in various benchmarks . If you squint your eye enough, then you'll see they claim using the graph that this model keeps up or beat them good in many of the benchmarks. Surely benchmaxxed. Its too good to believe. Has anyone tried it? The previous ernie versions have been decent. It might be worth testing it. Does anyone have any idea how is this "thinking" variant different? | 2025-11-11T05:21:46 | https://huggingface.co/baidu/ERNIE-4.5-VL-28B-A3B-Thinking | PaceZealousideal6091 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ou14ry | false | null | t3_1ou14ry | /r/LocalLLaMA/comments/1ou14ry/baiduernie45vl28ba3bthinking_released_curious_case/ | false | false | 130 | {'enabled': False, 'images': [{'id': '81GI5f2SH41ji6Aiuro1sKkxz-x19lfHg7ZgCRL6MOI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/81GI5f2SH41ji6Aiuro1sKkxz-x19lfHg7ZgCRL6MOI.png?width=108&crop=smart&auto=webp&s=99762424b1d5b97fc979f876107def351a561dc7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/81GI5f2SH41ji6Aiuro1sKkxz-x19lfHg7ZgCRL6MOI.png?width=216&crop=smart&auto=webp&s=baf505490caf358f59a0385dbb74b76966ad54ef', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/81GI5f2SH41ji6Aiuro1sKkxz-x19lfHg7ZgCRL6MOI.png?width=320&crop=smart&auto=webp&s=075a5012c36c22e880f7032cf2441e476ca27391', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/81GI5f2SH41ji6Aiuro1sKkxz-x19lfHg7ZgCRL6MOI.png?width=640&crop=smart&auto=webp&s=4217ff485db0b42df0be643ea3fe3e8636aa4480', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/81GI5f2SH41ji6Aiuro1sKkxz-x19lfHg7ZgCRL6MOI.png?width=960&crop=smart&auto=webp&s=1246f2d366802075658ab9e3a0f785f437c92d49', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/81GI5f2SH41ji6Aiuro1sKkxz-x19lfHg7ZgCRL6MOI.png?width=1080&crop=smart&auto=webp&s=c1cbe5e5b0fbb1dae6965d6e1a573b7cc3b6d700', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/81GI5f2SH41ji6Aiuro1sKkxz-x19lfHg7ZgCRL6MOI.png?auto=webp&s=970f3e59a68a409e0074c61b41cdb8e37e1476d2', 'width': 1200}, 'variants': {}}]} | |
What are some startup ideas that cannot be done without AI? | 0 | Hey everyone — curious to get your thoughts.
There are tons of “AI-powered” startups out there, but most of them could probably still function (just less efficiently) without AI. I’m wondering about the opposite: ideas that *fundamentally rely* on AI — where the core value or product literally wouldn’t exist without it.
What are some other examples you’ve come across — or ones you’ve been thinking about building yourself? | 2025-11-11T04:06:07 | https://www.reddit.com/r/LocalLLaMA/comments/1otzp3w/what_are_some_startup_ideas_that_cannot_be_done/ | StunningAct6856 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otzp3w | false | null | t3_1otzp3w | /r/LocalLLaMA/comments/1otzp3w/what_are_some_startup_ideas_that_cannot_be_done/ | false | false | self | 0 | null |
Anyone else struggling with their AI agents ‘forgetting’ stuff? | 0 | Quick favor - I’m chatting with AI builders for a short 15-min convo to learn how you’re handling memory/context in your agents.
If your models ever “forget” stuff or lose track of conversations, I’d love to hear what you’ve tried and what’s missing.
I’m doing a small research sprint on this topic - happy to share back what I find once I’ve talked to a few folks. DMs open if easier | 2025-11-11T03:35:14 | https://www.reddit.com/r/LocalLLaMA/comments/1otz2pe/anyone_else_struggling_with_their_ai_agents/ | Own_Season_283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otz2pe | false | null | t3_1otz2pe | /r/LocalLLaMA/comments/1otz2pe/anyone_else_struggling_with_their_ai_agents/ | false | false | self | 0 | null |
Realtime video analysis with Moondream | 29 | Live demo (no login required): [https://moondream.ai/solutions/analyze-live-video](https://moondream.ai/solutions/analyze-live-video)
Code: [https://github.com/m87-labs/Analyze-Live-Video-Solution](https://github.com/m87-labs/Analyze-Live-Video-Solution) | 2025-11-11T02:50:19 | https://v.redd.it/norsa3dpkj0g1 | radiiquark | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oty5a9 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/norsa3dpkj0g1/DASHPlaylist.mpd?a=1765421453%2CZDUwMzk2NTE1ZTI0YmM3MWIyYWU2NjcwNDEzNmFhZDM2MWYxYzA0N2M2M2MzMmRiOWViZWViYWVhOTQ5MDQ2MA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/norsa3dpkj0g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/norsa3dpkj0g1/HLSPlaylist.m3u8?a=1765421453%2CY2UzNDJlZDA5ZWI5NDBmNTkyNWZhYjk2OTlhZjMzMjJlNDhiNDM2NTI3MDQ1NjJkNjQzYmIwZTg5MzkwOTMzZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/norsa3dpkj0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1246}} | t3_1oty5a9 | /r/LocalLLaMA/comments/1oty5a9/realtime_video_analysis_with_moondream/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'ZGZqb20yZHBrajBnMRQH5Ip-LXWUY-NVe752F9-1VFqZvu8plOUVm68qhzC0', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/ZGZqb20yZHBrajBnMRQH5Ip-LXWUY-NVe752F9-1VFqZvu8plOUVm68qhzC0.png?width=108&crop=smart&format=pjpg&auto=webp&s=f195f47175281248f108e9034686ee71960e284a', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/ZGZqb20yZHBrajBnMRQH5Ip-LXWUY-NVe752F9-1VFqZvu8plOUVm68qhzC0.png?width=216&crop=smart&format=pjpg&auto=webp&s=80ccffa4d8cc18425af3edf0d771b240aaa88be6', 'width': 216}, {'height': 185, 'url': 'https://external-preview.redd.it/ZGZqb20yZHBrajBnMRQH5Ip-LXWUY-NVe752F9-1VFqZvu8plOUVm68qhzC0.png?width=320&crop=smart&format=pjpg&auto=webp&s=889c51c950b860cdc6f0d5891cc9dc40e58534c7', 'width': 320}, {'height': 370, 'url': 'https://external-preview.redd.it/ZGZqb20yZHBrajBnMRQH5Ip-LXWUY-NVe752F9-1VFqZvu8plOUVm68qhzC0.png?width=640&crop=smart&format=pjpg&auto=webp&s=d28529a09a67e9526e5fb5d3ef96bbe83ab2088b', 'width': 640}, {'height': 555, 'url': 'https://external-preview.redd.it/ZGZqb20yZHBrajBnMRQH5Ip-LXWUY-NVe752F9-1VFqZvu8plOUVm68qhzC0.png?width=960&crop=smart&format=pjpg&auto=webp&s=00619a9ce0fc23b04e401be25429c1f49680b04c', 'width': 960}, {'height': 624, 'url': 'https://external-preview.redd.it/ZGZqb20yZHBrajBnMRQH5Ip-LXWUY-NVe752F9-1VFqZvu8plOUVm68qhzC0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5db111c634bb42a5d1a55b3ceaad00a9dcd91186', 'width': 1080}], 'source': {'height': 740, 'url': 'https://external-preview.redd.it/ZGZqb20yZHBrajBnMRQH5Ip-LXWUY-NVe752F9-1VFqZvu8plOUVm68qhzC0.png?format=pjpg&auto=webp&s=1592893d783d9c07a1f6eadbd586e7b77612b798', 'width': 1280}, 'variants': {}}]} | |
Advice on a Quad 4090 PC build | 3 | Hey all,
I’m currently building a high performing PC that will finish off with four 4090 (starting with a single gpu then building to four) for fine tuning and inference for LLMs. This is my first build( I know going big for my first) and just needed some general advice. I understand that this will be an expensive build so I’d preferably like parts that are comparable but not on the higher end for the parts. This is what I’m currently looking at. I haven’t bought anything but currently looking at parts which include…..
CPU: AMD EPYC 7313P
MoB: MZ32-AR0
Cooling: Noctua NH-U14S
Storage: 2 TB NVMe SSD
GPU: 4x 4090 (probably founders edition or whatever I can get)
RAM: 2×32 GB ECC Registered DDR4 3200 MHz RDIMM (will buy up to 8x 32GB for a total of 256GB)
So my first question is what’s recommended when it comes to choosing a PSU. A single 4090 needs 450W, so to handle the GPUs and the other parts I think I’m gonna need PSU(s) that can handle at least 2500W (is this a fair assumption?). Dual? Single? Something else?
I’m also looking at two cases (trying to avoid a server rack), but I’m having a hard time making sure they can fit four 4090s plus all other components with some space for good airflow. Currently looking at either the Fractal Design Define 7 XL or the Phanteks Enthoo Pro II (Server Edition). Both look cool, but they obviously need to be compatible with the items above and, most importantly, fit 4 GPUs lol. Will probably need PCIe risers, but I don't know how many.
Any other advice, recommendations, other parts or points would help
Thanks in advance | 2025-11-11T02:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1otxwph/advice_on_a_quad_4090_pc_build/ | Pencil__Sharpener | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otxwph | false | null | t3_1otxwph | /r/LocalLLaMA/comments/1otxwph/advice_on_a_quad_4090_pc_build/ | false | false | self | 3 | null |
Our sub got a shout-out from the Corridor Crew | 191 | From their recent video [AI Experts Debunk The Latest SLOP](https://youtu.be/6hI9T4jnrSI?si=h7An0736C93hs7YO) | 2025-11-11T02:33:11 | https://v.redd.it/10yfbe8vhj0g1 | onil_gova | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1otxs37 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/10yfbe8vhj0g1/DASHPlaylist.mpd?a=1765420406%2CNmJlZTZkY2M3NzQyNjJhMWU2MTU3M2Q3NGU3YTgzMDY4MDU1N2NlNWZiODIzNjAzZWUyNTU1ZGY2MTIxNzI2Zg%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/10yfbe8vhj0g1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 472, 'hls_url': 'https://v.redd.it/10yfbe8vhj0g1/HLSPlaylist.m3u8?a=1765420406%2CNGMzNDM3MzE1MTU0YWEyZDQwZjAwYzZkNjdmNmViZDczZjFhZDI3ZmQ3Mzc0MjI0ZDY3MjQ3YTFlZDkwZTQzYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/10yfbe8vhj0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1otxs37 | /r/LocalLLaMA/comments/1otxs37/our_sub_got_a_shoutout_from_the_corridor_crew/ | false | false | 191 | {'enabled': False, 'images': [{'id': 'MG5qbWE3OXZoajBnMfJFc8SM8imSZJpbD6BkmsMZ2u1jbLaP-XMJEPc_yiXX', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/MG5qbWE3OXZoajBnMfJFc8SM8imSZJpbD6BkmsMZ2u1jbLaP-XMJEPc_yiXX.png?width=108&crop=smart&format=pjpg&auto=webp&s=a396139fbaea72a66710bfe9732bb975c27fcbb6', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/MG5qbWE3OXZoajBnMfJFc8SM8imSZJpbD6BkmsMZ2u1jbLaP-XMJEPc_yiXX.png?width=216&crop=smart&format=pjpg&auto=webp&s=961a504ab1d83634bbfd843340f4854bf5101156', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/MG5qbWE3OXZoajBnMfJFc8SM8imSZJpbD6BkmsMZ2u1jbLaP-XMJEPc_yiXX.png?width=320&crop=smart&format=pjpg&auto=webp&s=9e0997c45f1c3d3f1a2517813c8cc75b423141e4', 'width': 320}, {'height': 353, 'url': 'https://external-preview.redd.it/MG5qbWE3OXZoajBnMfJFc8SM8imSZJpbD6BkmsMZ2u1jbLaP-XMJEPc_yiXX.png?width=640&crop=smart&format=pjpg&auto=webp&s=f2ccc053be4451aa5d5dccc7add25e5619b542ba', 'width': 640}, {'height': 530, 'url': 'https://external-preview.redd.it/MG5qbWE3OXZoajBnMfJFc8SM8imSZJpbD6BkmsMZ2u1jbLaP-XMJEPc_yiXX.png?width=960&crop=smart&format=pjpg&auto=webp&s=c9d867050516f24e45bd5b5eb748aeaf4886d9c5', 'width': 960}, {'height': 597, 'url': 'https://external-preview.redd.it/MG5qbWE3OXZoajBnMfJFc8SM8imSZJpbD6BkmsMZ2u1jbLaP-XMJEPc_yiXX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dd16e4a536041a9f0cd30079652fe3410c56d79b', 'width': 1080}], 'source': {'height': 597, 'url': 'https://external-preview.redd.it/MG5qbWE3OXZoajBnMfJFc8SM8imSZJpbD6BkmsMZ2u1jbLaP-XMJEPc_yiXX.png?format=pjpg&auto=webp&s=aead376438b47fb7785259ad72ca54e85c9ed794', 'width': 1080}, 'variants': {}}]} | |
Is open-webui vibe coded? Why else is the documentation littered with emoji? | 62 | It's like every other 5 words: an emoji.
God damn, the future is bleak | 2025-11-11T02:03:13 | https://www.reddit.com/r/LocalLLaMA/comments/1otx50l/is_openwebui_vibe_coded_why_else_is_the/ | ksoops | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otx50l | false | null | t3_1otx50l | /r/LocalLLaMA/comments/1otx50l/is_openwebui_vibe_coded_why_else_is_the/ | false | false | self | 62 | null |
I built a runtime for AI models to develop their own identity over time... And they remember, even when you swap out models. | 0 | I’ve been tinkering with this idea for a while, and somehow… it works.
It’s called the Persistent Mind Model (PMM). It’s not a wrapper. It’s not a chatbot.
It’s a *deterministic cognitive architecture* that runs on top of any LLM and turns it into a *recursive, identity-preserving system*.
Every message, every thought, every internal decision gets written to a SHA-256, cryptographically immutable ledger (like a blockchain).
The mind state is fully auditable, replayable, portable, and forkable.
No fine-tuning, no vector soup, no RAG. Just an append only ledger (SQLite log).
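For a rough idea of the mechanism (not the repo's actual schema; the table and function names here are just illustrative), an append-only, hash-chained SQLite ledger can be sketched like this:

```python
import hashlib, json, sqlite3, time

def open_ledger(path="mind.db"):
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS ledger (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        ts REAL, kind TEXT, payload TEXT, prev_hash TEXT, hash TEXT)""")
    return con

def append_event(con, kind, payload):
    # Each entry hashes its payload together with the previous entry's hash,
    # so history can be verified but never silently rewritten.
    row = con.execute("SELECT hash FROM ledger ORDER BY id DESC LIMIT 1").fetchone()
    prev_hash = row[0] if row else "GENESIS"
    body = json.dumps({"kind": kind, "payload": payload, "prev": prev_hash}, sort_keys=True)
    h = hashlib.sha256(body.encode()).hexdigest()
    con.execute("INSERT INTO ledger (ts, kind, payload, prev_hash, hash) VALUES (?, ?, ?, ?, ?)",
                (time.time(), kind, json.dumps(payload), prev_hash, h))
    con.commit()
    return h

con = open_ledger()
append_event(con, "message", {"role": "user", "text": "hello"})
```

Replaying the ledger from the genesis entry is what makes the mind state deterministic, auditable, and portable across models.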
Every message, commitment, and reflection gets written to an immutable ledger. When I swap out the underlying model, it still remembers who it is.
No prompt stuffing. No API tricks. Just runtime memory.
I’m not talking about some cute prompt hack. This thing knows its own name, reflects on its identity, updates itself deterministically, and remembers what it stands for, even if you replace the LLM running underneath it.
Why this matters?
AI memory has always been brittle: context windows, vector hacks, JSON spaghetti, prompt soup. The Persistent Mind Model curbs that.
It works because identity isn't in the model. The identity lives in the ledger, and the model becomes the engine that drives the "mind".
If you’re serious about deterministic AI cognition, inspect the logs in the repo, or clone it and spin it up for yourself.
If you're building foundation models, consider that you might be training *substrates*, not minds.
This is open source; you can use and modify the Persistent Mind Model for research, education, testing, or personal study.
You can literally create your own model-agnostic, personal AI mind, free from vendor lock-in. It works with local models (Ollama) or API endpoint calls (OpenAI); I'm planning to add more in the future.
This is basically at the prototype stage, but it's far enough along that I feel comfortable sharing it and getting some feedback on it.
I would love for people to boot it up and give it a spin!
It's basically just a Python console app at the moment, but I'll be building a frontend UI for it in the future.
Repo:
[https://github.com/scottonanski/persistent-mind-model-v1.0](https://github.com/scottonanski/persistent-mind-model-v1.0)
Example chat;
[https://github.com/scottonanski/persistent-mind-model-v1.0/blob/main/docs/06-GPT\_oss-chat.md](https://github.com/scottonanski/persistent-mind-model-v1.0/blob/main/docs/06-GPT_oss-chat.md)
Telemetry to verify:
[https://github.com/scottonanski/persistent-mind-model-v1.0/blob/main/docs/07-GPT\_oss-Telemetry.md](https://github.com/scottonanski/persistent-mind-model-v1.0/blob/main/docs/07-GPT_oss-Telemetry.md)
| 2025-11-11T01:43:28 | https://www.reddit.com/r/LocalLLaMA/comments/1otwpdt/i_built_a_runtime_for_ai_models_to_develop_their/ | Inevitable-Local-438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otwpdt | false | null | t3_1otwpdt | /r/LocalLLaMA/comments/1otwpdt/i_built_a_runtime_for_ai_models_to_develop_their/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'qE1SV8bATCUgC60li3BZxwqxe0hHBv437M0QnNmB9ck', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qE1SV8bATCUgC60li3BZxwqxe0hHBv437M0QnNmB9ck.png?width=108&crop=smart&auto=webp&s=ee52530d4102cc3c32b00b631202ab95c82734ee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qE1SV8bATCUgC60li3BZxwqxe0hHBv437M0QnNmB9ck.png?width=216&crop=smart&auto=webp&s=ffd56a4538b8fa6a3ff4fb5a31dbef95d7899178', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qE1SV8bATCUgC60li3BZxwqxe0hHBv437M0QnNmB9ck.png?width=320&crop=smart&auto=webp&s=fe527b772b994e6a4a578a0439734d19c4659c59', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qE1SV8bATCUgC60li3BZxwqxe0hHBv437M0QnNmB9ck.png?width=640&crop=smart&auto=webp&s=d5280346ef2e6122037a6f45f1bb18fc318e9455', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qE1SV8bATCUgC60li3BZxwqxe0hHBv437M0QnNmB9ck.png?width=960&crop=smart&auto=webp&s=c1d25a52b4256b32089bc1c52de3d100ca50a267', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qE1SV8bATCUgC60li3BZxwqxe0hHBv437M0QnNmB9ck.png?width=1080&crop=smart&auto=webp&s=1c022ca44ea5721865622c8fb1b5b347b295b735', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qE1SV8bATCUgC60li3BZxwqxe0hHBv437M0QnNmB9ck.png?auto=webp&s=18f375adc13ff3e5e093bf633088ca363553e413', 'width': 1200}, 'variants': {}}]} |
AI Black&Blonde for a 230% boost on inference speed | 19 | The R9700 AI Pro has only 32 GB of GDDR6 VRAM, which limits its ability to run LLMs locally at Q8 precision because of the large overall model size.
I paired it with an RTX 5060 (8 GB GDDR7 VRAM) from my girlfriend's gaming PC and got a 230% boost. The model is Qwen3 32B at Q8 precision. With the AMD card alone (partial offloading, 4k context window) inference ran at 6.39 tps; with AMD and NVIDIA together it reaches 14.81 tps with 100% GPU offloading and a 15k context window. Both cards use the Vulkan engine; the 5060 is set to compute-only (via a command) and the monitor is connected to the R9700.
Just plugged and played - no special setup. | 2025-11-11T01:36:39 | https://www.reddit.com/gallery/1otwk39 | OldEffective9726 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1otwk39 | false | null | t3_1otwk39 | /r/LocalLLaMA/comments/1otwk39/ai_blackblonde_for_a_230_boost_on_inference_speed/ | false | false | 19 | null | |
Full Replication of Google's Nested Learning Paper in PyTorch – code now live | 86 | Some of you may have seen Google Research’s [**Nested Learning paper**](https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/). They introduced HOPE, a self-modifying TITAN variant with a Continuum Memory System (multi-frequency FFN chain) + deep optimizer stack. They published the research but no code (like always), so I rebuilt the architecture and infra in PyTorch over the weekend.
Repo: https://github.com/kmccleary3301/nested_learning
## Highlights
- Level clock + CMS implementation (update-period gating, associative-memory optimizers).
- HOPE block w/ attention, TITAN memory, self-modifier pathway.
- Hydra configs for pilot/mid/target scales, uv-managed env, Deepspeed/FSDP launchers.
- Data pipeline: filtered RefinedWeb + supplements (C4, RedPajama, code) with tokenizer/sharding scripts.
- Evaluation: zero-shot harness covering PIQA, HellaSwag, WinoGrande, ARC-E/C, BoolQ, SIQA, CommonsenseQA, OpenBookQA + NIAH long-context script.
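To make the first bullet concrete: update-period gating just means each memory level only applies its FFN update on its own clock tick, so deeper levels change more slowly. Here is a minimal PyTorch sketch of that idea (a simplification, not the repo's actual modules):

```python
import torch
import torch.nn as nn

class CMSLevel(nn.Module):
    """One memory level that only updates every `period` steps (update-period gating)."""
    def __init__(self, dim, period):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.period = period

    def forward(self, x, step):
        if step % self.period == 0:   # this level's clock tick
            return x + self.ffn(x)
        return x                      # otherwise pass through unchanged

class ContinuumMemory(nn.Module):
    """Chain of levels running at different frequencies, fast to slow."""
    def __init__(self, dim, periods=(1, 4, 16)):
        super().__init__()
        self.levels = nn.ModuleList([CMSLevel(dim, p) for p in periods])

    def forward(self, x, step):
        for level in self.levels:
            x = level(x, step)
        return x

cms = ContinuumMemory(dim=512)
x = torch.randn(2, 16, 512)
for step in range(8):
    x = cms(x, step)
```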
## What I need help with:
1. Running larger training configs (760M+, 4–8k context) and reporting W&B benchmarks.
2. Stress-testing CMS/self-modifier stability + alternative attention backbones.
3. Continual-learning evaluation (streaming domains) & regression tests.
If you try it, please file issues/PRs—especially around stability tricks, data pipelines, or eval scripts. Would love to see how it stacks up against these Qwen, DeepSeek, Minimax, and Kimi architectures. | 2025-11-11T01:29:37 | https://www.reddit.com/r/LocalLLaMA/comments/1otwek3/full_replication_of_googles_nested_learning_paper/ | complains_constantly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otwek3 | false | null | t3_1otwek3 | /r/LocalLLaMA/comments/1otwek3/full_replication_of_googles_nested_learning_paper/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=108&crop=smart&auto=webp&s=e85522ec0f6b9c59a8434a90d2ecebe8c2d71652', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=216&crop=smart&auto=webp&s=7456a0a4ebd37982129042b9b4aaa1a14401a280', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=320&crop=smart&auto=webp&s=0b4b0f3f5d7fb66280168c071659b8dfbc9f2f75', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?width=640&crop=smart&auto=webp&s=c9dad5b13e20f57d64f5fc0bbc7415c9f4186b1d', 'width': 640}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/Xfy8b5oz8xAgNpbj0L9Mmjzxactj5HdaKRFOmBPu0YE.jpeg?auto=webp&s=722aaac4c4cb8a58930bb43bac788a1400ae000c', 'width': 800}, 'variants': {}}]} |
[Update] mlx-knife 2.0 stable — MLX model manager for Apple Silicon | 8 | Posted here in August, now hitting 2.0 stable.
**What it does:** CLI for managing HuggingFace MLX models on Mac. Like ollama but for MLX.
**What's new in 2.0:**
- JSON API for automation (--json on all commands)
- Runtime compatibility checks (catches broken models upfront)
- Proper exit codes for scripting
- Fixed stop token handling (no more visible \<|end|\> tokens)
- Structured logging
**Install:**
```
pip install mlx-knife
```
**Basic usage:**
```
mlxk list # Show cached models
mlxk pull mlx-community/Llama-3.3-70B-Instruct-4bit # Download
mlxk run Llama-3.3-70B # Interactive chat
mlxk server Llama-3.3-70B # OpenAI-compatible API server
```
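**Scripting example** (a minimal sketch: it assumes only that `--json` prints valid JSON and that failures return a non-zero exit code, as described above; the exact JSON schema is not assumed):
```python
import json
import subprocess

proc = subprocess.run(["mlxk", "list", "--json"], capture_output=True, text=True)
if proc.returncode != 0:
    raise SystemExit(f"mlxk list failed with exit code {proc.returncode}")

models = json.loads(proc.stdout)      # only assumes valid JSON on stdout
print(json.dumps(models, indent=2))   # pretty-print whatever structure comes back
```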
**Experimental:** Testing mlxk clone (APFS CoW) and mlxk push (HF uploads). Feedback welcome.
Python 3.9-3.13, M1/M2/M3/M4.
[https://github.com/mzau/mlx-knife](https://github.com/mzau/mlx-knife) | 2025-11-11T01:28:32 | https://www.reddit.com/r/LocalLLaMA/comments/1otwdq0/update_mlxknife_20_stable_mlx_model_manager_for/ | broke_team | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otwdq0 | false | null | t3_1otwdq0 | /r/LocalLLaMA/comments/1otwdq0/update_mlxknife_20_stable_mlx_model_manager_for/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'JXasezTts88wRvQ5OsGPBfdfWz4fJ_8T_GshApHPTB0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JXasezTts88wRvQ5OsGPBfdfWz4fJ_8T_GshApHPTB0.png?width=108&crop=smart&auto=webp&s=f018cf2a43099203cac01ce1f6d5ec64418de0b9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JXasezTts88wRvQ5OsGPBfdfWz4fJ_8T_GshApHPTB0.png?width=216&crop=smart&auto=webp&s=e113de76993cc175b3de7b957c73d7ee95586905', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JXasezTts88wRvQ5OsGPBfdfWz4fJ_8T_GshApHPTB0.png?width=320&crop=smart&auto=webp&s=3538e61be04ff5da75d993f487eb34386cdb97bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JXasezTts88wRvQ5OsGPBfdfWz4fJ_8T_GshApHPTB0.png?width=640&crop=smart&auto=webp&s=3e37a3f33257840ad804ffef46400b38fa052789', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JXasezTts88wRvQ5OsGPBfdfWz4fJ_8T_GshApHPTB0.png?width=960&crop=smart&auto=webp&s=8db949d59ede338a7cdc1c1a74bd56d4056c8e07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JXasezTts88wRvQ5OsGPBfdfWz4fJ_8T_GshApHPTB0.png?width=1080&crop=smart&auto=webp&s=cef31450bf555c95b560ea3821ea17a73eb1708a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JXasezTts88wRvQ5OsGPBfdfWz4fJ_8T_GshApHPTB0.png?auto=webp&s=c4199626c80600dbc09920cffa051146da6dde6a', 'width': 1200}, 'variants': {}}]} |
Hello I’m planning to open-source my Sesame alternative. It’s kinda rough, but not too bad! | 24 | https://reddit.com/link/1otwcg0/video/bzrf0ety5j0g1/player
Hey guys,
I wanted to share a project I’ve been working on. I’m a founder currently building a new product, but until last month I was making a conversational AI. After pivoting, I thought I should share my code.
The project is a voice AI that can have real-time conversations. The client side runs on the web, and the backend runs models in the cloud on a GPU.
In detail: for STT, I used whisper-large-v3-turbo, and for TTS, I modified Chatterbox for real-time streaming. The LLM is either the GPT API or gpt-oss-20b via Ollama.
One advantage of a local LLM is that all data can remain on your machine. In terms of speed and performance, I'd still recommend the API, and the pricing is not expensive anymore (roughly $0.10 for 30 minutes, I'd guess).
In numbers: TTFT is around 1000 ms, and even with the llm api cost included, it’s roughly $0.50 per hour on a runpod A40 instance.
There are a few small details I built to make conversations feel more natural (though they might not be obvious in the demo video):
1. When the user is silent, it occasionally generates small self-talk.
2. The LLM is always prompted to start with a pre-set “first word,” and that word’s audio is pre-generated to reduce TTFT.
3. It can insert short silences mid sentence for more natural pacing.
4. You can interrupt mid-speech, and only what’s spoken before interruption gets logged in the conversation history.
5. Thanks to multilingual Chatterbox, it can talk in any language and voice (English works best so far).
6. Audio is encoded and decoded with Opus.
7. Smart turn detection.
This is the repo! It includes both client and server codes. [https://github.com/thxxx/harper](https://github.com/thxxx/harper)
I’d love to hear what the community thinks. what do you think matters most for truly natural voice conversations? | 2025-11-11T01:26:50 | https://www.reddit.com/r/LocalLLaMA/comments/1otwcg0/hello_im_planning_to_opensource_my_sesame/ | Danny-1257 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otwcg0 | false | null | t3_1otwcg0 | /r/LocalLLaMA/comments/1otwcg0/hello_im_planning_to_opensource_my_sesame/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'dNMvETR3TAgky9xFPLP6B7g5tW9HVTS23HKHMQaioOs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dNMvETR3TAgky9xFPLP6B7g5tW9HVTS23HKHMQaioOs.png?width=108&crop=smart&auto=webp&s=365664375204e3302bae0945bbc216120cae8f1b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dNMvETR3TAgky9xFPLP6B7g5tW9HVTS23HKHMQaioOs.png?width=216&crop=smart&auto=webp&s=749ae8029e0b9a9cfe0de3d324fdd87d6b30d54f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dNMvETR3TAgky9xFPLP6B7g5tW9HVTS23HKHMQaioOs.png?width=320&crop=smart&auto=webp&s=e1de0aa579409e1bacfc1256fa3313ef0b893129', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dNMvETR3TAgky9xFPLP6B7g5tW9HVTS23HKHMQaioOs.png?width=640&crop=smart&auto=webp&s=17e8eafbceac1243a6cb9d4832551a0bb15664c7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dNMvETR3TAgky9xFPLP6B7g5tW9HVTS23HKHMQaioOs.png?width=960&crop=smart&auto=webp&s=ccc51ca8beb674d7f50b319ff08dbfd6f5c8ddb9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dNMvETR3TAgky9xFPLP6B7g5tW9HVTS23HKHMQaioOs.png?width=1080&crop=smart&auto=webp&s=155e34ecb3824f4357af1b3b5e8d5a1033c94a3c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dNMvETR3TAgky9xFPLP6B7g5tW9HVTS23HKHMQaioOs.png?auto=webp&s=60e0e7b8872938c6d8f58a2fe0f9e9f70d7ae1ff', 'width': 1200}, 'variants': {}}]} |
Open WebUI: Why the Description Box for Web Links? | 0 | Why do developers make these decisions and offer no setting to disable it?
Every click of a link in a web search opens a totally useless and unnecessary description box that requires another click to close or dismiss.
Any other alternative with web search and RAG? Connecting to Ollama. | 2025-11-11T00:57:06 | https://www.reddit.com/r/LocalLLaMA/comments/1otvorb/open_webui_why_the_description_box_for_web_links/ | stockys7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otvorb | false | null | t3_1otvorb | /r/LocalLLaMA/comments/1otvorb/open_webui_why_the_description_box_for_web_links/ | false | false | self | 0 | null |
A startup Olares is attempting to launch a small 3.5L MiniPC dedicated to local AI, with RTX 5090 Mobile (24GB VRAM) and 96GB of DDR5 RAM for $3K | 314 | 2025-11-11T00:44:49 | https://www.techpowerup.com/342779/olares-to-launch-a-personal-ai-device-bringing-cloud-level-performance-home | FullOf_Bad_Ideas | techpowerup.com | 1970-01-01T00:00:00 | 0 | {} | 1otveug | false | null | t3_1otveug | /r/LocalLLaMA/comments/1otveug/a_startup_olares_is_attempting_to_launch_a_small/ | false | false | 314 | {'enabled': False, 'images': [{'id': 'j6x6Pm9GXcBDejuI8fZ_JaGjEF5FKmyowYdHbKM_k34', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/j6x6Pm9GXcBDejuI8fZ_JaGjEF5FKmyowYdHbKM_k34.jpeg?width=108&crop=smart&auto=webp&s=76f064dd39a94cea8da62eae91fd6f3e1cc79aed', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/j6x6Pm9GXcBDejuI8fZ_JaGjEF5FKmyowYdHbKM_k34.jpeg?width=216&crop=smart&auto=webp&s=eb959220c961ae16f949dc661faab1cb3b91942a', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/j6x6Pm9GXcBDejuI8fZ_JaGjEF5FKmyowYdHbKM_k34.jpeg?width=320&crop=smart&auto=webp&s=b8050be68382603bc8887d8c1f819f1f7a6f683b', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/j6x6Pm9GXcBDejuI8fZ_JaGjEF5FKmyowYdHbKM_k34.jpeg?width=640&crop=smart&auto=webp&s=2f2207a39f85b0be48a03566c4c904bcc528405b', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/j6x6Pm9GXcBDejuI8fZ_JaGjEF5FKmyowYdHbKM_k34.jpeg?width=960&crop=smart&auto=webp&s=d8596a7c4622730a73b755f28cbc4299d5f11d43', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/j6x6Pm9GXcBDejuI8fZ_JaGjEF5FKmyowYdHbKM_k34.jpeg?width=1080&crop=smart&auto=webp&s=67a0f5c8877940142228802b7dae9dca72713f05', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/j6x6Pm9GXcBDejuI8fZ_JaGjEF5FKmyowYdHbKM_k34.jpeg?auto=webp&s=f0b47001684e7af08fe43c6c81ed00d9a0a4bf27', 'width': 1200}, 'variants': {}}]} | ||
AnythingLLM MCP Bridge & Prompt Injector | 3 | # MCP Bridge & Prompt Injector (Danny)
Hello — I'm Danny, a solo developer, hobbyist dev, and security fanatic. This project provides a secure, Docker-friendly **bridge** for AnythingLLM, enabling the use of MCP (Model Context Protocol) tools across Docker networks — without granting Docker itself permission to start other containers.
# Why this project?
AnythingLLM has a problem: containers cannot (safely) start other containers. This breaks MCP workflows in isolated Docker setups. Instead of granting Docker additional privileges (which violates the security assumptions of containers), I built a different solution—an MCP bridge + prompt injector architecture. In short: I wanted to maintain control and security—and still be able to call tools (time, weather, docs, etc.) from within AnythingLLM.
# Architecture (in brief)
* **bridge** – a dummy MCP that acts as a target for AnythingLLM and forwards calls to real MCP services.
* **prompt-injector** – central control center. Decides whether a tool is needed, injects system prompts, sanitizes input (security layer), and calls the MCP Hub if necessary.
* **MCP Hub** – directory containing the available MCP tools (e.g., `time`, `weather`, `docs`), typically accessible as separate Docker containers.
# Main Principles
* No elevation of Docker privileges: no `docker.sock` mount, no DinD.
* Security-first: Input sanitizer, tool access control, and audit logger.
* Modular: simply add new MCP containers to the `TOOLS` map.
# Example configuration (prompt rules)
SYSTEM_PROMPT = """
You are a precise AI assistant with access to tools (MCP).
Behave as follows:
1️⃣ If you can answer the query directly (explanation, opinion, knowledge, small talk),
respond immediately, of course, in text form.
2️⃣ If a tool is needed (time, weather, documents, external data),
return only JSON in the format:
{"action": "mcp_call", "tool": "<toolname>", "query": "<user question>"}
3️⃣ Do not answer philosophical or open-ended questions with tool calls.
4️⃣ Do not return a JSON structure if no tool is required.
"""
# Prompt Injector — Core Functions (Short)
* `ask_deepseek(user_prompt: str)` — sends the message to the model with the system prompt and temperature.
* `call_mcp_tool(tool: str, query: str)` — constructs a JSON-RPC and calls `MCP_HUB_URL/{tool}`, parses the response, and returns the content.
* `sanitize_input(prompt: str)` — filters dangerous payloads such as `rm -rf`, `sudo`, `curl`, API keys, etc.
* `ALLOWED_TOOLS` — list of allowed tools (e.g., `["time","docs","search"]`).
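For illustration, here is a minimal sketch of how these two pieces could fit together. The hub port, the JSON-RPC method name, and the payload shape below are assumptions made for the example, not the project's actual wire format:

```python
import re
import requests

MCP_HUB_URL = "http://mcp-hub:4200"   # assumed hub address for this sketch
ALLOWED_TOOLS = ["time", "docs", "search"]
BLOCKLIST = [r"rm\s+-rf", r"\bsudo\b", r"\bcurl\b", r"api[_-]?key"]

def sanitize_input(prompt: str) -> str:
    # Mask obviously dangerous payloads before they reach the model or a tool.
    for pattern in BLOCKLIST:
        prompt = re.sub(pattern, "[blocked]", prompt, flags=re.IGNORECASE)
    return prompt

def call_mcp_tool(tool: str, query: str) -> dict:
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool '{tool}' is not allowed")
    payload = {"jsonrpc": "2.0", "id": 1, "method": "call",
               "params": {"query": sanitize_input(query)}}
    resp = requests.post(f"{MCP_HUB_URL}/{tool}", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()
```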
# MCP Hub — Example
TOOLS = {
"time": "http://mcp-time:4210/",
"weather": "http://mcp-weather:4220/",
"docs": "http://mcp-docs:4230/"
}
`time` works as a demo; the others are placeholders. Simply point them at your new MCP containers.
# Data & Context
* `prompt-injector/data/memory.db` – Simple context database (currently: 10 entries) to ensure that subsequent queries for MCP calls remain context-sensitive.
# TODO / Roadmap
* Complete implementation of Decision Rules (an agent that decides in advance whether an MCP call is necessary).
* Expand the audit logger (who made which request).
* Add more unit tests and sample MCPs (weather, docs).
* Optional authentication/user management for shared operation (family).
# Security Notes
* This architecture deliberately avoids `docker.sock` mounts.
* Nevertheless: MCP services are web endpoints — be mindful of network access and secure your internal network (e.g., Docker Network ACLs, internal firewalls).
# Participation / Usage
1. Clone the repository
2. Run `docker compose up` (Note: create external networks like `danny_ai-net` if necessary, or set `external: true`)
3. Adjust `TOOLS` and `SYSTEM_PROMPT` to your needs.
4. Check `prompt-injector/` for sanitizer, ALLOWED\_TOOLS, and memory configuration.
https://preview.redd.it/fugfqii4xi0g1.png?width=814&format=png&auto=webp&s=fa574cf5874ff4ca7bfa3de1e16bf53bae9201e0
https://preview.redd.it/btuj0nj6xi0g1.png?width=934&format=png&auto=webp&s=148d1069024de300af45124ce9009eb24c4de95f
https://preview.redd.it/68c0mrxbxi0g1.png?width=1557&format=png&auto=webp&s=be86fc69ef4c1632bdff0fcebf18b3c6d6ee434a
# Contact
If you find bugs or want to suggest improvements, please open an issue or pull request. I'm a solo developer—constructive feedback is very welcome.
[https://github.com/danny094/mcp-docker-server-anythingllm](https://github.com/danny094/mcp-docker-server-anythingllm) | 2025-11-11T00:39:14 | https://www.reddit.com/r/LocalLLaMA/comments/1otvabi/anythingllm_mcp_bridge_prompt_injector/ | danny_094 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otvabi | false | null | t3_1otvabi | /r/LocalLLaMA/comments/1otvabi/anythingllm_mcp_bridge_prompt_injector/ | false | false | 3 | null | |
Hi reddit, I rebuilt Karpathy's Nanochat in pure Rust [nanochat-rs] | 44 | The repo is at: [https://github.com/AntigmaLabs/nanochat-rs](https://github.com/AntigmaLabs/nanochat-rs)
The goal is to provide the community with a reference implementation in a different language, and possibly a clean, hackable little cognitive core that is easier to understand and deploy (without Python's weak typing and heavy PyTorch dependencies).
Main features
* Native rust
* Integration with HuggingFace
* Centralized model loader resilient to tensor name changes
* Minimal surface area to keep cognitive load low (not product-grade)
* Compatible with tiktoken `.pkl` tokenizer configs
| 2025-11-10T23:51:18 | https://www.reddit.com/r/LocalLLaMA/comments/1otu6ez/hi_reddit_i_rebuilt_karpathys_nanochat_in_pure/ | Exciting-Camera3226 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otu6ez | false | null | t3_1otu6ez | /r/LocalLLaMA/comments/1otu6ez/hi_reddit_i_rebuilt_karpathys_nanochat_in_pure/ | false | false | self | 44 | {'enabled': False, 'images': [{'id': 'u26klpKz6TUd0YURYIOA3g9MVzE79AQC2eBircTo56s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u26klpKz6TUd0YURYIOA3g9MVzE79AQC2eBircTo56s.png?width=108&crop=smart&auto=webp&s=bb6cf1167c68a8cecde875dab4fcfb9cf6cc32e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u26klpKz6TUd0YURYIOA3g9MVzE79AQC2eBircTo56s.png?width=216&crop=smart&auto=webp&s=423c1ae541a548ebd00f636c9d86e88f794b341a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u26klpKz6TUd0YURYIOA3g9MVzE79AQC2eBircTo56s.png?width=320&crop=smart&auto=webp&s=7eba29038711a13d214b49e544bf116f4d95d0ca', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u26klpKz6TUd0YURYIOA3g9MVzE79AQC2eBircTo56s.png?width=640&crop=smart&auto=webp&s=cef6de686b859af7cb5c3ee1814e7652752f1b69', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u26klpKz6TUd0YURYIOA3g9MVzE79AQC2eBircTo56s.png?width=960&crop=smart&auto=webp&s=bed6239bf328f9d77343778a9aefa66ff579f175', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u26klpKz6TUd0YURYIOA3g9MVzE79AQC2eBircTo56s.png?width=1080&crop=smart&auto=webp&s=094b8d9fbcfde7b1de50b58bcbf8bed843bb1cad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/u26klpKz6TUd0YURYIOA3g9MVzE79AQC2eBircTo56s.png?auto=webp&s=8a8a5c2cadd6f53512656e6958848efc97b3f51b', 'width': 1200}, 'variants': {}}]} |
The optimal setup for a startup to rent a server and execute a local model. | 0 | We are building a startup focused on creating an agent-based system that is highly intuitive and customizable for users. We are currently exploring how much it would cost to deploy an open-source model on a dedicated server, which we could then progressively train using feedback from our users to deliver an even better experience. We are seeking insights or recommendations on the best workflow to follow for this approach, including setup considerations, continuous improvement strategies, and how to best integrate user feedback into model training. | 2025-11-10T23:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ottnt6/the_optimal_setup_for_a_startup_to_rent_a_server/ | Ok-Impression-2464 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ottnt6 | false | null | t3_1ottnt6 | /r/LocalLLaMA/comments/1ottnt6/the_optimal_setup_for_a_startup_to_rent_a_server/ | false | false | self | 0 | null |
Meta drops new ASR models (up to 7B) | 61 | Meta just released a new kind of ASR models that are particularly useful to transcribe languages for which little training data is available.
Most interestingly, they seem to have implemented something like audio context, where you can provide some audio along with the correct transcriptions and use that to improve ASR without needing a full fine-tune. It appears that the amount of audio needed for this is very much doable to collect, without the large-scale transcription effort you would normally need for a fine-tune.
https://github.com/facebookresearch/omnilingual-asr | 2025-11-10T23:28:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ottmjb/meta_drops_new_asr_models_up_to_7b/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ottmjb | false | null | t3_1ottmjb | /r/LocalLLaMA/comments/1ottmjb/meta_drops_new_asr_models_up_to_7b/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'lZfgFAiSN14AgTxkbv3aebS1SQAenWSG8mdWUHEPfRA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lZfgFAiSN14AgTxkbv3aebS1SQAenWSG8mdWUHEPfRA.png?width=108&crop=smart&auto=webp&s=20008231d659dd8d23e887421d17eaa8bdbd92c0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lZfgFAiSN14AgTxkbv3aebS1SQAenWSG8mdWUHEPfRA.png?width=216&crop=smart&auto=webp&s=82daaa205ad157205444cb01edcb5d892e84d24e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lZfgFAiSN14AgTxkbv3aebS1SQAenWSG8mdWUHEPfRA.png?width=320&crop=smart&auto=webp&s=5be8e6e8dec7bb4f346df619c1e28f195a1b75eb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lZfgFAiSN14AgTxkbv3aebS1SQAenWSG8mdWUHEPfRA.png?width=640&crop=smart&auto=webp&s=4712bb9dc60d5a4a276b2fe6f92c57c151800535', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lZfgFAiSN14AgTxkbv3aebS1SQAenWSG8mdWUHEPfRA.png?width=960&crop=smart&auto=webp&s=a17e0409f1e2ce68177b8143237d9b3b4f491bb6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lZfgFAiSN14AgTxkbv3aebS1SQAenWSG8mdWUHEPfRA.png?width=1080&crop=smart&auto=webp&s=2350df6dc052fab6ca0bc0ed070614f81ea68d91', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lZfgFAiSN14AgTxkbv3aebS1SQAenWSG8mdWUHEPfRA.png?auto=webp&s=e1bdbffab7db7761fffc07e6f4e8fca294792909', 'width': 1200}, 'variants': {}}]} |
I want to fine tune a model to think more like a designer what models are the best for this task? | 1 | The question is pretty much in the title, I was thinking of using Kimi K2, or other open source models, where I need the model to think like a really good designer. | 2025-11-10T23:26:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ottky5/i_want_to_fine_tune_a_model_to_think_more_like_a/ | Maleficent_Sound2267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ottky5 | false | null | t3_1ottky5 | /r/LocalLLaMA/comments/1ottky5/i_want_to_fine_tune_a_model_to_think_more_like_a/ | false | false | self | 1 | null |
Any good qwen3VL 30ba3b uncensored fine tune / jailbreak prompt? | 2 | Kinda need a MoE for high context and high speeds with -ncmoe, was wondering if there are any good ones. I dont know if i trust ablterated models, are they good? | 2025-11-10T23:22:27 | https://www.reddit.com/r/LocalLLaMA/comments/1otthmk/any_good_qwen3vl_30ba3b_uncensored_fine_tune/ | Adventurous-Gold6413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otthmk | false | null | t3_1otthmk | /r/LocalLLaMA/comments/1otthmk/any_good_qwen3vl_30ba3b_uncensored_fine_tune/ | false | false | self | 2 | null |
What I learned from stress testing LLM on NPU vs CPU on a phone | 8 | We ran a 10-minute LLM stress test on Samsung S25 Ultra CPU vs Qualcomm Hexagon NPU to see how the same model (LFM2-1.2B, 4 Bit quantization) performed. And I wanted to share some test results here for anyone interested in real on-device performance data.
https://reddit.com/link/1ottfbi/video/00ha3zfcgi0g1/player
In 3 minutes, the CPU hit 42 °C and throttled: throughput fell from \~37 t/s → \~19 t/s.
The NPU stayed cooler (36–38 °C) and held a steady \~90 t/s—2–4× faster than CPU under load.
Same 10-min, both used 6% battery, but productivity wasn’t equal:
NPU: \~54k tokens → \~9,000 tokens per 1% battery
CPU: \~14.7k tokens → \~2,443 tokens per 1% battery
That’s \~3.7× more work per battery on the NPU—without throttling.
(Setup: S25 Ultra, LFM2-1.2B, Inference using Nexa Android SDK)
To recreate the test, I used the Nexa Android SDK to run the latest models on the NPU and CPU: [https://github.com/NexaAI/nexa-sdk/tree/main/bindings/android](https://github.com/NexaAI/nexa-sdk/tree/main/bindings/android)
What other NPU vs CPU benchmarks are you interested in? Would love to hear your thoughts. | 2025-11-10T23:19:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ottfbi/what_i_learned_from_stress_testing_llm_on_npu_vs/ | Material_Shopping496 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ottfbi | false | null | t3_1ottfbi | /r/LocalLLaMA/comments/1ottfbi/what_i_learned_from_stress_testing_llm_on_npu_vs/ | false | false | self | 8 | null |
i have a question (new guy on llm things) | 0 | 1. What can I run with a Ryzen 5 5500, an RTX 3050 (8GB VRAM), and 16GB of DDR4 RAM?
2. How much storage do I need for LLMs (is 100GB fine, or do I need more)?
3. Can I close the LLM when I'm done, like use it for something, then close it and play games?
thanks🙏 | 2025-11-10T23:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ott5oo/i_have_a_question_new_guy_on_llm_things/ | Kerem-6030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ott5oo | false | null | t3_1ott5oo | /r/LocalLLaMA/comments/1ott5oo/i_have_a_question_new_guy_on_llm_things/ | false | false | self | 0 | null |
What are your Polaris Alpha vibes so far? | 0 | If this is OpenAI, it's probably a step to a friendlier tone again, so like GPT 5, with a bit of that GPT 4o personality, maybe?
I can't help it, but I loved how it actually went with my blunt wording there. 😂 | 2025-11-10T22:50:53 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1otsoj3 | false | null | t3_1otsoj3 | /r/LocalLLaMA/comments/1otsoj3/what_are_you_polaris_alpha_vibes_so_far/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'aw7qanwidi0g1', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/aw7qanwidi0g1.png?width=108&crop=smart&auto=webp&s=dd16ff741cc24d09095eeb313d7d18bf1dca8c7c', 'width': 108}, {'height': 64, 'url': 'https://preview.redd.it/aw7qanwidi0g1.png?width=216&crop=smart&auto=webp&s=f3e4c0923d92eba15d33e526da029f8271de396b', 'width': 216}, {'height': 96, 'url': 'https://preview.redd.it/aw7qanwidi0g1.png?width=320&crop=smart&auto=webp&s=8d72855a61bbb56eacf2bb1931e05c5d82b4215f', 'width': 320}, {'height': 192, 'url': 'https://preview.redd.it/aw7qanwidi0g1.png?width=640&crop=smart&auto=webp&s=03acb2912cba567dfc10e1b9d80e94e6674d3ad1', 'width': 640}], 'source': {'height': 271, 'url': 'https://preview.redd.it/aw7qanwidi0g1.png?auto=webp&s=86be9a1f12859d8a100557dd893f262b15953871', 'width': 903}, 'variants': {}}]} | |
Deepseek v3 0324 API without request/minute rate limit | 0 | Hello everyone,
I'm looking for DeepSeek V3 0324 with no requests-per-minute limit.
Does anyone know a provider who can do that?
Or at least 2k-3k requests/minute to start.
thank you | 2025-11-10T22:43:17 | https://www.reddit.com/r/LocalLLaMA/comments/1otshlp/deepseek_v3_0324_api_without_requestminute_rate/ | Frequent-Buddy-867 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otshlp | false | null | t3_1otshlp | /r/LocalLLaMA/comments/1otshlp/deepseek_v3_0324_api_without_requestminute_rate/ | false | false | self | 0 | null |
Reflection AI reached human-level performance (85%) on ARC-AGI v1 for under $10k and within 12 hours. You can run this code yourself, it’s open source. | 124 | 2025-11-10T22:37:40 | https://github.com/jerber/arc-lang-public | balianone | github.com | 1970-01-01T00:00:00 | 0 | {} | 1otscki | false | null | t3_1otscki | /r/LocalLLaMA/comments/1otscki/reflection_ai_reached_humanlevel_performance_85/ | false | false | default | 124 | {'enabled': False, 'images': [{'id': 'ARR7y9mlLeCC9oWmE5UREkOw8RADA8XOccGD021Q5lw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ARR7y9mlLeCC9oWmE5UREkOw8RADA8XOccGD021Q5lw.png?width=108&crop=smart&auto=webp&s=6491e2dddf6db6d0ed56b78913a181423ce6824f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ARR7y9mlLeCC9oWmE5UREkOw8RADA8XOccGD021Q5lw.png?width=216&crop=smart&auto=webp&s=4738a839b8c2a1981073673a4452102e4f53fa76', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ARR7y9mlLeCC9oWmE5UREkOw8RADA8XOccGD021Q5lw.png?width=320&crop=smart&auto=webp&s=83e6e7e557bbcdfc38e40ebf0377214ad0eaac76', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ARR7y9mlLeCC9oWmE5UREkOw8RADA8XOccGD021Q5lw.png?width=640&crop=smart&auto=webp&s=e4be25bea8245c672919ac843febfe66e1da8de0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ARR7y9mlLeCC9oWmE5UREkOw8RADA8XOccGD021Q5lw.png?width=960&crop=smart&auto=webp&s=b033d974051f45cfe5cfada27021b7a3758970be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ARR7y9mlLeCC9oWmE5UREkOw8RADA8XOccGD021Q5lw.png?width=1080&crop=smart&auto=webp&s=7d4e8c4842eca37b8d79003186dea7347211ec00', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ARR7y9mlLeCC9oWmE5UREkOw8RADA8XOccGD021Q5lw.png?auto=webp&s=0329ce79e1ec34d3170dae6837a7173320f8ca35', 'width': 1200}, 'variants': {}}]} | |
Help configuring parallel vllm instance | 1 | Hey everyone, I have 4 esxi nodes, each have 2 gpus (L40 - 48gb vram each) On each node i have a vm that the gpus are being passed through too. For wight now i am able to run a model on each vm, but im trying to see what is the biggest model i can serve. All esxis are connected with 100GB port to a compatible switch. The vms are ubuntu, using docker for the deployment. What model should i run. And what is the correct configuration with ray? Would love some advice or examples, thanks! | 2025-11-10T22:07:02 | https://www.reddit.com/r/LocalLLaMA/comments/1otrkxq/help_configuring_parallel_vllm_instance/ | Some-Manufacturer-21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otrkxq | false | null | t3_1otrkxq | /r/LocalLLaMA/comments/1otrkxq/help_configuring_parallel_vllm_instance/ | false | false | self | 1 | null |
Imagine you’re stuck with one local model forever: GPT-OSS 120B or GLM 4.5 Air. Which one are you picking and why? | 27 | Title | 2025-11-10T22:00:11 | https://www.reddit.com/r/LocalLLaMA/comments/1otreir/imagine_youre_stuck_with_one_local_model_forever/ | Adventurous-Gold6413 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otreir | false | null | t3_1otreir | /r/LocalLLaMA/comments/1otreir/imagine_youre_stuck_with_one_local_model_forever/ | false | false | self | 27 | null |
Any new advancements in local video generation? | 3 | I was up to date on all things local as far LLM, image and music/audio up until like maybe 6 months ago, but I see video generation is all the craze. Sora is fun to play with but is there anything local I can tinker with at this time? Even if it's only 25% as powerful lol. | 2025-11-10T21:40:46 | https://www.reddit.com/r/LocalLLaMA/comments/1otqws5/any_new_advancements_in_local_video_generation/ | Whole_Arachnid1530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otqws5 | false | null | t3_1otqws5 | /r/LocalLLaMA/comments/1otqws5/any_new_advancements_in_local_video_generation/ | false | false | self | 3 | null |
emotional analysis | 1 | Guys, we have a website where we sell our products, and there are thousands of comments on them. I was wondering if it's possible to use a local LLM, give it these comments, and have it analyze them and give us the overall emotion of users (they love it, hate it, or ...) for each product?
| 2025-11-10T21:33:36 | https://www.reddit.com/r/LocalLLaMA/comments/1otqpyv/emotional_analysis/ | Dry_Amphibian_5340 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otqpyv | false | null | t3_1otqpyv | /r/LocalLLaMA/comments/1otqpyv/emotional_analysis/ | false | false | self | 1 | null |
Running models locally on Apple Silicon, and memory usage... | 3 | So allegedly, OpenAI's oss-20b model can run on my MacBook Air with 16GB RAM, however, I keep getting a warning about memory when I try to start it in LM Studio. As I understand, MacOS tends to make aggressive use of the unified memory, so there just isn't much to work with.
If I get a MacBook Air with 24 or 32GB RAM, will this actually help? I also want to run Qwen Image Edit without quantizing it, and AFAIK that can run in 64GB RAM but again... Will it actually? | 2025-11-10T21:20:14 | https://www.reddit.com/r/LocalLLaMA/comments/1otqdia/running_models_locally_on_apple_silicon_and/ | garden_speech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otqdia | false | null | t3_1otqdia | /r/LocalLLaMA/comments/1otqdia/running_models_locally_on_apple_silicon_and/ | false | false | self | 3 | null |
LLM-driven puzzle sandbox: anything you try becomes an action (Cosmic Egg) | 42 | We’re using LLMs to generate actions in our upcoming puzzle game Cosmic Egg—so “anything you can think of” becomes a validated, in-world interaction.
The system works with local LLMs + smart caching + a bit of game-dev smoke & mirrors—while keeping the game deterministic so everyone shares a common action pool and outcomes are reproducible.
Still lots to do, right now we’re improving sprite generation and adding player inventory & items.
Feedback very welcome! | 2025-11-10T20:56:23 | https://v.redd.it/6i40e2m3th0g1 | VirtualJamesHarrison | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1otpql6 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6i40e2m3th0g1/DASHPlaylist.mpd?a=1765400200%2CZTMxNWRjMTdjM2NjNzI5ZmQzZmI0NzBjZWYzOWU4ZDBhYzA2ZjZlNjNlMTRiZTJlYzVjNzllZTY2MTZiNTg0MQ%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/6i40e2m3th0g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1008, 'hls_url': 'https://v.redd.it/6i40e2m3th0g1/HLSPlaylist.m3u8?a=1765400200%2CMDRmNWQyNDIyNmM2NDI0OTVkM2IwYTU2NWU2Yjk1MDJkOGQzZjk2YzYxMjM2MjBiZDVkYTVlOWVkMTQ2OWVlMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6i40e2m3th0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1otpql6 | /r/LocalLLaMA/comments/1otpql6/llmdriven_puzzle_sandbox_anything_you_try_becomes/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'dHZxczAzbTN0aDBnMTBAWoGHzmzPlCXmWH6RtU6SjIImLDmcCL43zhjlQgdI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dHZxczAzbTN0aDBnMTBAWoGHzmzPlCXmWH6RtU6SjIImLDmcCL43zhjlQgdI.png?width=108&crop=smart&format=pjpg&auto=webp&s=48c1c8d6aa2388fe670f62d9d676a8dc39ec3b54', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/dHZxczAzbTN0aDBnMTBAWoGHzmzPlCXmWH6RtU6SjIImLDmcCL43zhjlQgdI.png?width=216&crop=smart&format=pjpg&auto=webp&s=41175a13d04a526ebdd7f70a72ac6c7773435235', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/dHZxczAzbTN0aDBnMTBAWoGHzmzPlCXmWH6RtU6SjIImLDmcCL43zhjlQgdI.png?width=320&crop=smart&format=pjpg&auto=webp&s=4f598f29048a9afad086256c41a6efeba9c6a46c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/dHZxczAzbTN0aDBnMTBAWoGHzmzPlCXmWH6RtU6SjIImLDmcCL43zhjlQgdI.png?width=640&crop=smart&format=pjpg&auto=webp&s=5488b8a3f3214b815f195050396f28bd1835fcc7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/dHZxczAzbTN0aDBnMTBAWoGHzmzPlCXmWH6RtU6SjIImLDmcCL43zhjlQgdI.png?width=960&crop=smart&format=pjpg&auto=webp&s=2ee879ac4c4889537c9a850fb61d38e9a07893de', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/dHZxczAzbTN0aDBnMTBAWoGHzmzPlCXmWH6RtU6SjIImLDmcCL43zhjlQgdI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6439a74e53c100f9b8275d113b264d01a3c06c58', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dHZxczAzbTN0aDBnMTBAWoGHzmzPlCXmWH6RtU6SjIImLDmcCL43zhjlQgdI.png?format=pjpg&auto=webp&s=bff00e02374deb51196a406e014cc8b37a711627', 'width': 2056}, 'variants': {}}]} | |
How do you use python-llamacpp-server with sliced models? | 2 | I installed the Hugging Face Hub, but it says I need to specify a model and a file as command-line parameters.
But it only pulls the xyz-0001-of-0045.gguf.
And then it fails because 0002 was not downloaded.
I manually downloaded all 45 files into the cache but it still doesn't work.
How do you guys do it?
| 2025-11-10T20:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/1otopqk/how_do_you_use_pythonllamacppserver_with_sliced/ | Agron7000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otopqk | false | null | t3_1otopqk | /r/LocalLLaMA/comments/1otopqk/how_do_you_use_pythonllamacppserver_with_sliced/ | false | false | self | 2 | null |
How to hide "thinking" in DS 3.2 Exp | 1 | How to hide "thinking" on Chutes Ai using the model e.g. in rp. | 2025-11-10T20:17:10 | https://www.reddit.com/r/LocalLLaMA/comments/1otooys/how_to_hide_thinking_in_ds_32_exp/ | LonleyPaladin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otooys | false | null | t3_1otooys | /r/LocalLLaMA/comments/1otooys/how_to_hide_thinking_in_ds_32_exp/ | false | false | self | 1 | null |
Katakate: self-hosted light VMs for safe exec of AI generated code | 1 | [removed] | 2025-11-10T20:09:19 | https://docs.katakate.org/ | gbxk7 | docs.katakate.org | 1970-01-01T00:00:00 | 0 | {} | 1otoh8u | false | null | t3_1otoh8u | /r/LocalLLaMA/comments/1otoh8u/katakate_selfhosted_light_vms_for_safe_exec_of_ai/ | false | false | default | 1 | null |
Storage Crunch: Deleting Large Models from my hf repo | 13 | The time has come.
I've hit my storage limit on huggingface.
So the axe must fall 🪓🪓🪓 I'm thinking of deleting some of the larger models that are over 200B parameters that are also the worst performers, download wise.
|Model Name|Parameters|Size|Downloads|
|:-|:-|:-|:-|
|[noctrex/ERNIE-4.5-300B-A47B-PT-MXFP4\_MOE-GGUF](https://huggingface.co/noctrex/ERNIE-4.5-300B-A47B-PT-MXFP4_MOE-GGUF)|300B|166 GB|49|
|[noctrex/AI21-Jamba-Large-1.7-MXFP4\_MOE-GGUF](https://huggingface.co/noctrex/AI21-Jamba-Large-1.7-MXFP4_MOE-GGUF)|400B|239 GB|252|
|[noctrex/Llama-4-Maverick-17B-128E-Instruct-MXFP4\_MOE-GGUF](https://huggingface.co/noctrex/Llama-4-Maverick-17B-128E-Instruct-MXFP4_MOE-GGUF)|400B|220 GB|300|
Do you think I should keep some of these models?
If anyone is at all interested, you can download them until the end of the week, and then, byebye they go.
Of course I keep a local copy of them on my NAS, so they are not gone forever.
| 2025-11-10T20:04:15 | https://www.reddit.com/r/LocalLLaMA/comments/1otoc9q/storage_crunch_deleting_large_models_from_my_hf/ | noctrex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otoc9q | false | null | t3_1otoc9q | /r/LocalLLaMA/comments/1otoc9q/storage_crunch_deleting_large_models_from_my_hf/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'x72QN3OXDQziSQnJexxPmcWopHGJDhLgdqfPCj-m5YA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x72QN3OXDQziSQnJexxPmcWopHGJDhLgdqfPCj-m5YA.png?width=108&crop=smart&auto=webp&s=f4bedb39515c85f2beb340a389295eef527b7e5f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x72QN3OXDQziSQnJexxPmcWopHGJDhLgdqfPCj-m5YA.png?width=216&crop=smart&auto=webp&s=41c5a6921407da8776401cb86104e92f0d6f77c2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x72QN3OXDQziSQnJexxPmcWopHGJDhLgdqfPCj-m5YA.png?width=320&crop=smart&auto=webp&s=b6282c0de34af0fb57c23ad5ff7cab439404440a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x72QN3OXDQziSQnJexxPmcWopHGJDhLgdqfPCj-m5YA.png?width=640&crop=smart&auto=webp&s=967eb192b75c7bb494f1f18490f2e7b2571076fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x72QN3OXDQziSQnJexxPmcWopHGJDhLgdqfPCj-m5YA.png?width=960&crop=smart&auto=webp&s=4f874f807787991c60f36d8a1e0df8402ef1984e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x72QN3OXDQziSQnJexxPmcWopHGJDhLgdqfPCj-m5YA.png?width=1080&crop=smart&auto=webp&s=d49b39fa203d102886c730e1a1814244b92ada62', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x72QN3OXDQziSQnJexxPmcWopHGJDhLgdqfPCj-m5YA.png?auto=webp&s=c91a9330be9210feac3984f3d5d3a343ea28f511', 'width': 1200}, 'variants': {}}]} |
Why LLMs hallucinate and how to actually reduce it - breaking down the root causes | 0 | AI hallucinations aren't going away, but understanding why they happen helps you mitigate them systematically.
**Root cause #1: Training incentives** Models are rewarded for accuracy during eval - what percentage of answers are correct. This creates an incentive to guess when uncertain rather than abstaining. Guessing increases the chance of being right but also increases confident errors.
**Root cause #2: Next-word prediction limitations** During training, LLMs only see examples of well-written text, not explicit true/false labels. They master grammar and syntax, but arbitrary low-frequency facts are harder to predict reliably. No negative examples means distinguishing valid facts from plausible fabrications is difficult.
**Root cause #3: Data quality** Incomplete, outdated, or biased training data increases hallucination risk. Vague prompts make it worse - models fill gaps with plausible but incorrect info.
**Practical mitigation strategies:**
* Penalize confident errors more than uncertainty. Reward models for expressing doubt or asking for clarification instead of guessing.
* Invest in agent-level evaluation that considers context, user intent, and domain. Model-level accuracy metrics miss the full picture.
* Use real-time observability to monitor outputs in production. Flag anomalies before they impact users.
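To make the first bullet concrete, here is a toy scoring rule that rewards abstention over confident errors (the numbers are arbitrary, just to show the asymmetry):

```python
def grade(answer: str, correct: str, abstained: bool, wrong_penalty: float = 2.0) -> float:
    """Toy eval scoring: +1 for a correct answer, 0 for abstaining, -wrong_penalty for a confident error."""
    if abstained:
        return 0.0
    return 1.0 if answer.strip().lower() == correct.strip().lower() else -wrong_penalty

# Under this rule, guessing only pays off when the model is right more than
# wrong_penalty / (1 + wrong_penalty) of the time; otherwise abstaining scores higher.
```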
Systematic prompt engineering with versioning and regression testing reduces ambiguity. [Maxim's eval framework](https://www.getmaxim.ai/blog/evaluation-workflows-for-ai-agents/) covers faithfulness, factuality, and hallucination detection.
Combine automated metrics with human-in-the-loop review for high-stakes scenarios.
How are you handling hallucination detection in your systems? What eval approaches work best? | 2025-11-10T19:52:15 | https://www.reddit.com/r/LocalLLaMA/comments/1oto0fl/why_llms_hallucinate_and_how_to_actually_reduce/ | Educational-Bison786 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oto0fl | false | null | t3_1oto0fl | /r/LocalLLaMA/comments/1oto0fl/why_llms_hallucinate_and_how_to_actually_reduce/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'jHTbmbBxm28mv2vwOkeTYAfdInFflrAp_TRyettgU_c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jHTbmbBxm28mv2vwOkeTYAfdInFflrAp_TRyettgU_c.png?width=108&crop=smart&auto=webp&s=8f00416d349a3e0bb0a2f6266c44a677ad99b108', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jHTbmbBxm28mv2vwOkeTYAfdInFflrAp_TRyettgU_c.png?width=216&crop=smart&auto=webp&s=63b0bb7b514cc8b94fd5279fec4340e727fe03d5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jHTbmbBxm28mv2vwOkeTYAfdInFflrAp_TRyettgU_c.png?width=320&crop=smart&auto=webp&s=381a5dc2e139459e113dd43a8750f74d08992454', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jHTbmbBxm28mv2vwOkeTYAfdInFflrAp_TRyettgU_c.png?width=640&crop=smart&auto=webp&s=60cb60d801acb709381b4beb96cf7b9695234e09', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jHTbmbBxm28mv2vwOkeTYAfdInFflrAp_TRyettgU_c.png?width=960&crop=smart&auto=webp&s=c33bbdf4657c5238eb64c19917167ae53323f2d5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jHTbmbBxm28mv2vwOkeTYAfdInFflrAp_TRyettgU_c.png?width=1080&crop=smart&auto=webp&s=1032b2a76a88002fd95f03276a985f4c7a0f2be5', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/jHTbmbBxm28mv2vwOkeTYAfdInFflrAp_TRyettgU_c.png?auto=webp&s=fb3cf0c313f2599ff6eb45279d919a92915cb795', 'width': 1200}, 'variants': {}}]} |
What do you use for model fine tuning? | 1 | Do you actually fine-tune models or is it not worth the hassle?
I usually just go up in the model size and see if that works but it feels very inefficient.
I'm worried that fine-tuning actually narrows down the models quite a bit and then I'll have to deploy many of them.
Any experience in this field?
What is your approach?
| 2025-11-10T19:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1otnl30/what_do_you_use_for_model_fine_tuning/ | Empty-Tourist3083 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otnl30 | false | null | t3_1otnl30 | /r/LocalLLaMA/comments/1otnl30/what_do_you_use_for_model_fine_tuning/ | false | false | self | 1 | null |
What do you use for model fine tuning? | 3 | Do you actually fine-tune models or is it not worth the hassle?
I usually just go up in the model size and see if that works but it feels very inefficient.
I'm worried that fine-tuning actually narrows down the models quite a bit and then I'll have to deploy many of them.
Any experience in this field?
What is your approach?
| 2025-11-10T19:36:23 | https://www.reddit.com/r/LocalLLaMA/comments/1otnktx/what_do_you_use_for_model_fine_tuning/ | Empty-Tourist3083 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otnktx | false | null | t3_1otnktx | /r/LocalLLaMA/comments/1otnktx/what_do_you_use_for_model_fine_tuning/ | false | false | self | 3 | null |
Are any of you using local llms for "real" work? | 87 | I am having fun personally tinkering with local models and workflows and such, but sometimes it feels like we're all still stuck in the "fun experimentation" phase with local LLMs and not actually producing any "production grade" outputs or using it in real workflows.
Idk if it's just the gap between what "personal" LLM-capable rigs can handle vs the compute needs of current best-in-class models or what.
Am I wrong here? | 2025-11-10T19:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/1otnj2k/are_any_of_you_using_local_llms_for_real_work/ | hmsenterprise | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otnj2k | false | null | t3_1otnj2k | /r/LocalLLaMA/comments/1otnj2k/are_any_of_you_using_local_llms_for_real_work/ | false | false | self | 87 | null |
Anyone here running training on Spot GPUs? How do you handle interruptions? | 6 | Hey folks,
Curious how people in this community are handling GPU costs and reliability when training or fine-tuning models.
If you’re using Spot or Preemptible instances (AWS, GCP, Lambda Labs, RunPod, etc.), how often do you hit interruptions?
Do you just checkpoint frequently and restart manually, or do you have a script / setup that automatically resumes?
I’m trying to understand if Spot interruptions are still a major pain for folks training LLaMA and similar models — or if most of you have moved to on-demand or local setups to avoid it.
Would love to hear what’s worked (or not) for you — tools, workflows, or horror stories welcome. | 2025-11-10T19:33:02 | https://www.reddit.com/r/LocalLLaMA/comments/1otnhh4/anyone_here_running_training_on_spot_gpus_how_do/ | Pure-Hedgehog-1721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otnhh4 | false | null | t3_1otnhh4 | /r/LocalLLaMA/comments/1otnhh4/anyone_here_running_training_on_spot_gpus_how_do/ | false | false | self | 6 | null |
Is 3090 the answer? Multiple containers running at the same time. | 1 | Hey folks,
I want to build my first AI system and the general consensus seems to be to get a 3090, however I would like to validate it for my use case:
I want it to run in a virtual machine and host docker containers that would have to use the GPU at the same time:
\- jellyfin/video transcoding
\- immich ML
\- some sort of LLM to be used by apps like Frigate, Home Assistant and PaperlessNGX
Questions:
\- Can I actually run all of those services at the same time or will that limit me in some way?
\- Does the amount of ram for the virtual machine matter or does vram only matter?
I'd love to get some resources to read on if it's a popular matter. Thanks in advance! | 2025-11-10T19:30:25 | https://www.reddit.com/r/LocalLLaMA/comments/1otneu9/is_3090_the_answer_multiple_containers_running_at/ | Shadoweee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otneu9 | false | null | t3_1otneu9 | /r/LocalLLaMA/comments/1otneu9/is_3090_the_answer_multiple_containers_running_at/ | false | false | self | 1 | null |
Best open source OCR / Vision model? | 2 | Our requirement is to extract text and save it in a structured format, from various business documents (invoices, contracts). They may come in various layouts/standards. Open source is a must, since we cannot send our data outside. Should I use a vision LM to upload the file and get structured JSON output in one pass? Or use an OCR first? In any case, please suggest some options you have tried that worked well. Thank you! | 2025-11-10T19:29:41 | https://www.reddit.com/r/LocalLLaMA/comments/1otne5n/best_open_source_source_ocr_vision_model/ | LakeRadiant446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otne5n | false | null | t3_1otne5n | /r/LocalLLaMA/comments/1otne5n/best_open_source_source_ocr_vision_model/ | false | false | self | 2 | null |
Are there local LLMs that can also generate images? | 7 | Are there local models that can generate both text and images? Especially if they fit in 6-8 gb VRAM. Can LM studio load image models? I tried loading stable diffusion inside LM studio but it failed to load (it runs fine on comfyUI). | 2025-11-10T19:23:19 | https://www.reddit.com/r/LocalLLaMA/comments/1otn82o/are_there_local_llms_that_can_also_generate_images/ | Crafty_Aspect8122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otn82o | false | null | t3_1otn82o | /r/LocalLLaMA/comments/1otn82o/are_there_local_llms_that_can_also_generate_images/ | false | false | self | 7 | null |
Top 5 AI eval platforms for production agents - breakdown of what each does well | 1 | [deleted] | 2025-11-10T19:17:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1otn2mq | false | null | t3_1otn2mq | /r/LocalLLaMA/comments/1otn2mq/top_5_ai_eval_platforms_for_production_agents/ | false | false | default | 1 | null | ||
Compared 5 AI eval platforms for production agents - breakdown of what each does well | 0 | I have been evaluating different platforms for my production LLM workflows, and also saw [a comparison](https://www.getmaxim.ai/articles/top-5-ai-evaluation-tools-in-2025-in-depth-comparison-for-robust-llm-agentic-systems) of Langfuse, Arize, Maxim, Comet Opik, and Braintrust. Here is my opinion on what these tools excel at:
**For agentic systems:** Multi-turn evaluation matters. Maxim's simulation framework tests agents across complex decision chains, including tool use and API calls. Langfuse supports comprehensive tracing with full self-hosting control.
**Rapid prototyping:** Braintrust has an LLM proxy for easy logging and an in-UI playground for quick iteration. Works well for experimentation, but it's proprietary and costs scale at higher usage. Comet Opik is solid for unifying LLM evaluation with ML experiment tracking.
**Production monitoring:** Arize and Maxim both handle enterprise compliance (SOC2, HIPAA, GDPR) with real-time monitoring. Arize has drift detection and alerting. Maxim includes node-level tracing, Slack/PagerDuty integration for real time alerts, and human-in-the-loop review queues.
**Open-source:** Langfuse is fully open-source and self-hostable - complete control over deployment.
Each platform has different strengths depending on whether you're optimizing for experimentation speed, production reliability, or infrastructure control. Curious what others are using for agent evaluation. | 2025-11-10T19:15:36 | https://www.reddit.com/r/LocalLLaMA/comments/1otn0ko/compared_5_ai_eval_platforms_for_production/ | llamacoded | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otn0ko | false | null | t3_1otn0ko | /r/LocalLLaMA/comments/1otn0ko/compared_5_ai_eval_platforms_for_production/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aW-j3PRoGhAQM7hvxv9t1DJWf-2z3Dr3rkdtL0W5rZw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/aW-j3PRoGhAQM7hvxv9t1DJWf-2z3Dr3rkdtL0W5rZw.png?width=108&crop=smart&auto=webp&s=7aef3c7576f9197f989b85473e49eb74014cb558', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/aW-j3PRoGhAQM7hvxv9t1DJWf-2z3Dr3rkdtL0W5rZw.png?width=216&crop=smart&auto=webp&s=b0a840beb03eb8cd6bdfb940257407e359d42c6c', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/aW-j3PRoGhAQM7hvxv9t1DJWf-2z3Dr3rkdtL0W5rZw.png?width=320&crop=smart&auto=webp&s=98a0d16ae4ef7eff407d2f8bfe9e3166af4db7e6', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/aW-j3PRoGhAQM7hvxv9t1DJWf-2z3Dr3rkdtL0W5rZw.png?width=640&crop=smart&auto=webp&s=f6aa01790d454d0dfad89db3c3f46b17af772dce', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/aW-j3PRoGhAQM7hvxv9t1DJWf-2z3Dr3rkdtL0W5rZw.png?width=960&crop=smart&auto=webp&s=a08e007795281c048403a907b35037003aa283e4', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/aW-j3PRoGhAQM7hvxv9t1DJWf-2z3Dr3rkdtL0W5rZw.png?width=1080&crop=smart&auto=webp&s=8054ed6192e88aca33ff0ddcb41a323090530bb5', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/aW-j3PRoGhAQM7hvxv9t1DJWf-2z3Dr3rkdtL0W5rZw.png?auto=webp&s=35909fef4e6d1b278bf15bb2e66707a61ed3c4d0', 'width': 1200}, 'variants': {}}]} |
Thinking about buying 2 RTX 3060 GPUs for AI only. Any better suggestions? | 1 | Hi redditors,
So I am thinking about putting together a build with 2 RTX 3060 GPUs for AI-related stuff. Is this the best thing to do, or are there better options?
I want to run and train LLMs locally.
Budget is 1000 to 1200 dollars.
1 3060 is 300 dollars at my place.
Need suggestions on a suitable CPU and RAM size.
Thanks in advance | 2025-11-10T19:12:57 | https://www.reddit.com/r/LocalLLaMA/comments/1otmxyq/thinking_about_buying_2_3060_rtx_gpus_for_only_ai/ | Superb_Practice_4544 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otmxyq | false | null | t3_1otmxyq | /r/LocalLLaMA/comments/1otmxyq/thinking_about_buying_2_3060_rtx_gpus_for_only_ai/ | false | false | self | 1 | null |
Help me stress test Observer, unlimited Cloud this week. Free local, now and forever after. | 1 | TLDR: I'm the solo dev of Observer (free/open-source tool that lets local LLMs watch your screen). Saved up some money to give r/LocalLLaMA the convenient unlimited cloud access to stress test it. Build cool agents, break things, help me see what works. It's Free for Local Inference now and always <3
Observer lets you build micro-agents that watch your screen/camera and trigger actions - all **running locally with your own models.**
Hey r/LocalLLaMA,
Okay so... I posted yesterday and it got downvoted because I sounded like a SaaS trying to trap people. That's completely on me :( ! I've been talking to investors lately and had my "business brain" on (not very developed hahaha), but I shouldn't talk to you guys like that. I'm sorry!
So let me be super clear: **Observer is free and open-source. Forever.** If you compile it yourself, point it at your local llama.cpp server, and use Discord notifications (which go straight from your computer to Discord), I literally have no way of knowing you exist. **That's by design. Privacy-first means privacy-first.**
But here's the thing: I built an optional cloud backend so people who **don't run LLMs** on their machines have a convenient option. And this week I need to stress test it. I saved up for API costs specifically so r/LocalLLaMA and r/ollama could use it for free this week - because if I'm giving anyone free access, it's you guys who supported this thing from the beginning.
What I'm asking:
\- Try building some agents (local or cloud, whatever you want!)
\- Push the system and see what breaks
\- Share cool ideas (seeing them is honestly my favorite part)
\- Please don't abuse it - I saved up for this but I'm not Bezos 😅
Some agent ideas from the last post to get you started:
\- "While a tuner connected to my microphone is listening to my practicing session on my violin I would like to get a ping by the AI everytime I'm out of tune by a particular cent parameter!" - [philosophissima](https://www.reddit.com/user/philosophissima/)
\- "I'd like to use it to monitor email for certain keywords and notify different contacts based on the content" - [IbetitsBen](https://www.reddit.com/user/IbetitsBen/)
\- "Ping my phone when the UPS van stops outside, but not the USPS one. I need to sign for a package." [\_\_JockY\_\_](https://www.reddit.com/user/__JockY__/)
\- Track long-running processes and notify when complete - i use this every day
\- Literally anything that involves "watch this thing and tell me when X happens"
Just drop a comment with what you want to build and I'll DM you unlimited cloud access. Or if you want to go full local, the GitHub has all the instructions.
This isn't marketing. I genuinely just want to see what this community builds and make sure the infrastructure can handle it.
Thanks for being patient with me, i'm just a guy learning and building cool stuff for you guys! :)
Roy
GitHub: [https://github.com/Roy3838/Observer](https://github.com/Roy3838/Observer)
Discord: [https://discord.gg/wnBb7ZQDUC](https://discord.gg/wnBb7ZQDUC) | 2025-11-10T19:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/1otmv17/help_me_stress_test_observer_unlimited_cloud_this/ | Roy3838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otmv17 | false | null | t3_1otmv17 | /r/LocalLLaMA/comments/1otmv17/help_me_stress_test_observer_unlimited_cloud_this/ | false | false | self | 1 | null |
I built a RAG as a Service orchestrator for local models | 2 | Hey guys,
I was frustrated with the Retrieval Augmented Generation (RAG) tools out there, despite the field's maturity, so I built llama-pg, an open-source RAG as a Service (RaaS) orchestrator that enables you to automate embeddings across all your projects in one place while keeping your data private.
You can use it with pretty much any OpenAI-compatible embedding model and customize the settings as needed.
Background workers handle parsing (using LlamaParse or any other parser that you can implement easily) and vectorizing (using TimescaleDB’s pgai).
Installation is simple using docker compose or ideally Helm (for Kubernetes peeps).
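For anyone curious what "any OpenAI-compatible embedding model" means in practice, the workers basically hit the standard `/v1/embeddings` endpoint; here's a rough, simplified sketch (URL and model name are placeholders, not llama-pg's actual internals):

```python
import requests

EMBEDDINGS_URL = "http://localhost:8080/v1/embeddings"  # placeholder endpoint

def embed(texts: list[str], model: str = "nomic-embed-text") -> list[list[float]]:
    """Call an OpenAI-compatible /v1/embeddings endpoint; returns one vector per input text."""
    resp = requests.post(EMBEDDINGS_URL, json={"model": model, "input": texts}, timeout=60)
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]

vectors = embed(["chunk one of a parsed document", "chunk two"])
print(len(vectors), len(vectors[0]))  # number of chunks, embedding dimensionality
```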
Check it out if it’s relevant to you and let me know your thoughts: https://github.com/akvnn/llama-pg
| 2025-11-10T19:01:30 | Initial-Detail-7159 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1otmmq5 | false | null | t3_1otmmq5 | /r/LocalLLaMA/comments/1otmmq5/i_built_a_rag_as_a_service_orchestrator_for_local/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'ulp8uw7a9h0g1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/ulp8uw7a9h0g1.jpeg?width=108&crop=smart&auto=webp&s=c6e8770d9a2daeca0f5bbb6780b3f0a39b35eb33', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/ulp8uw7a9h0g1.jpeg?width=216&crop=smart&auto=webp&s=f9a436badf486fc4070bc0e4e91bf1f6dc73a740', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/ulp8uw7a9h0g1.jpeg?width=320&crop=smart&auto=webp&s=a31fad284aa6e8ed98d7ac10307395f0fa31ed2d', 'width': 320}, {'height': 299, 'url': 'https://preview.redd.it/ulp8uw7a9h0g1.jpeg?width=640&crop=smart&auto=webp&s=336a81bdaf46a7815c2531370ae25dcc5dd86242', 'width': 640}, {'height': 448, 'url': 'https://preview.redd.it/ulp8uw7a9h0g1.jpeg?width=960&crop=smart&auto=webp&s=07fe49dbd2327eb64583e673c6142e8543f2242a', 'width': 960}, {'height': 504, 'url': 'https://preview.redd.it/ulp8uw7a9h0g1.jpeg?width=1080&crop=smart&auto=webp&s=1b4437187f54c2145a41764f66844b155ea16d97', 'width': 1080}], 'source': {'height': 748, 'url': 'https://preview.redd.it/ulp8uw7a9h0g1.jpeg?auto=webp&s=813c3adfd9eabb76e48c6b26984a90daa066c699', 'width': 1600}, 'variants': {}}]} | |
Onyx AI local hosted with local LLM question | 1 | I’m curious about what most Onyx on-prem users are running for their LLMs and the hardware behind them. For testing, we’re running **gpt-oss-120b** on **4× RTX 3090s**. We initially tried **vLLM**, but had to switch to **Ollama** since vLLM isn’t officially supported and didn’t work reliably in our setup.
Since Ollama is less enterprise-focused and can’t pull models directly from Hugging Face, I wanted to hear from the community:
* What LLMs are you running?
* Are you using Ollama or something else for inference?
* What GPU setup are you using?
* What model sizes and how many users are you supporting?
Thanks in advance for any insights — it’d be great to understand what others in similar setups are doing. I've asked Onyx, but they keep pointing me to cloud hosted solutions. | 2025-11-10T19:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/1otmmc7/onyx_ai_local_hosted_with_local_llm_question/ | jkay1904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otmmc7 | false | null | t3_1otmmc7 | /r/LocalLLaMA/comments/1otmmc7/onyx_ai_local_hosted_with_local_llm_question/ | false | false | self | 1 | null |
When does RTX 6000 Pro make sense over a 5090? | 51 | Hey all—trying to sanity-check an upgrade.
Current GPU: RTX 5090
Use cases: training mid-size LLMs, Stable Diffusion/ComfyUI, inferencing GPT-OSS-120B / GLM 4.5 Air
Rig: 9950X3D / 96GB DDR5 / 1500W Corsair H1500i • OS: Win11 / Ubuntu 24.04
I’m eyeing the RTX 6000 Pro (Blackwell) mainly for:
* More VRAM/ECC
* Potential tensor/FP improvements for AI workloads
Questions for folks who've used the 6000 Pro vs the RTX 5090:
* In real projects, what speed/throughput gains did you see for general AI workloads?
* Did ECC + pro drivers measurably reduce crashes/corruption vs 5090?
* Any gotchas (thermals, power, coil whine, chassis fit, Linux/Windows quirks, NVLink/virtualization)?
* If you switched back, why?
If my workloads are mainly for LLM inference / small training and SD, is the upgrade worth it, or is 5090 still the best value? Benchmarks and anecdotes welcome! Thanks. | 2025-11-10T18:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/1otmamz/when_does_rtx_6000_pro_make_sense_over_a_5090/ | Herald_Of_Rivia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otmamz | false | null | t3_1otmamz | /r/LocalLLaMA/comments/1otmamz/when_does_rtx_6000_pro_make_sense_over_a_5090/ | false | false | self | 51 | null |
Any VSCode plugins that integrate almost as well as Copilot? | 3 | Copilots integrates seamlessly into coding tasks in VSCode. However ,I don't like the idea of all my proprietary work gets sent to Microsofts servers to train their models. Its a huge business risk for me.
I am able to run large models locally, but I can't find a plugin that integrates with VScode as well as Copilot does. I tried "Continue" and a few others, but they seem to be limited to just opening a chat windows to paste code in. I am looking for something that does code-completion really well.
Anyone have a open source programming setup that's comparable to Copilot in terms of its integration with VSCode? | 2025-11-10T18:38:53 | https://www.reddit.com/r/LocalLLaMA/comments/1otlzm1/any_vscode_plugins_that_integrate_almost_as_well/ | DiligentLeader2383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otlzm1 | false | null | t3_1otlzm1 | /r/LocalLLaMA/comments/1otlzm1/any_vscode_plugins_that_integrate_almost_as_well/ | false | false | self | 3 | null |
Nano Banana 2 Leaps Ahead | 1 | Nano Banana 2 already looks far ahead of its predecessor; what's truly striking is how fast the jump in performance came.
Makes you wonder… if Genie 3 is here now, could Genie 4 be closer than we think?
| 2025-11-10T18:29:46 | Ok-Breakfast-4676 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1otlq5w | false | null | t3_1otlq5w | /r/LocalLLaMA/comments/1otlq5w/nano_banana_2_leaps_ahead/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '4hqyak9m3h0g1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/4hqyak9m3h0g1.jpeg?width=108&crop=smart&auto=webp&s=4c008e0afe37c01244784ea72a04f8d1cfb40bcc', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/4hqyak9m3h0g1.jpeg?width=216&crop=smart&auto=webp&s=5d8e83825353f363c2c8412c26eb22b3086b558b', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/4hqyak9m3h0g1.jpeg?width=320&crop=smart&auto=webp&s=d699471c2770d3eca5ac9cfc4d8211a92259b6df', 'width': 320}, {'height': 355, 'url': 'https://preview.redd.it/4hqyak9m3h0g1.jpeg?width=640&crop=smart&auto=webp&s=7f08b8d860f8c82480b452b20a845d10ac0cfa63', 'width': 640}, {'height': 532, 'url': 'https://preview.redd.it/4hqyak9m3h0g1.jpeg?width=960&crop=smart&auto=webp&s=86090d0a5e070c89d6f025b7b8456218c1fd796a', 'width': 960}, {'height': 599, 'url': 'https://preview.redd.it/4hqyak9m3h0g1.jpeg?width=1080&crop=smart&auto=webp&s=6e9944765f949113062309e88422980505f8eee1', 'width': 1080}], 'source': {'height': 649, 'url': 'https://preview.redd.it/4hqyak9m3h0g1.jpeg?auto=webp&s=8856180f0db3c503ec89196c39550a4f3811db56', 'width': 1170}, 'variants': {}}]} | |
Anyone else feel like prompt engineering is starting to hit diminishing returns? | 0 | I’ve been experimenting with different LLM workflows lately, system prompts, structured outputs, few-shots, etc.
What I’ve noticed is that after a certain point, prompt tuning gives less and less improvement unless you completely reframe the task.
Curious if anyone here has found consistent ways to make prompts more robust, especially for tasks that need reasoning + structure (like long tool calls or workflows).
Do you rely more on prompt patterns, external logic, or some hybrid approach? | 2025-11-10T18:19:02 | https://www.reddit.com/r/LocalLLaMA/comments/1otlfj4/anyone_else_feel_like_prompt_engineering_is/ | AdVivid5763 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otlfj4 | false | null | t3_1otlfj4 | /r/LocalLLaMA/comments/1otlfj4/anyone_else_feel_like_prompt_engineering_is/ | false | false | self | 0 | null |
What’s been the hardest part of building your own LLM-based project? | 1 | I’ve been tinkering with some LLM-based tools lately (both local models and API ones like GPT, Mistral, or LLaMA), and I keep running into annoying little pain points.
For me, the toughest parts have been things like:
* Prompt chains breaking when I tweak something small
* Context window limits and expensive token counts
* Slow or unreliable RAG pipelines
* Dealing with caching and latency
* Figuring out how to evaluate model outputs in a consistent way
Curious — what’s been tripping you up the most when building your own stuff?
Always interesting to hear what problems others are running into. | 2025-11-10T18:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/1otl8qe/whats_been_the_hardest_part_of_building_your_own/ | Permanent__Learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otl8qe | false | null | t3_1otl8qe | /r/LocalLLaMA/comments/1otl8qe/whats_been_the_hardest_part_of_building_your_own/ | false | false | self | 1 | null |
Omnilingual ASR: Advancing Automatic Speech Recognition for 1,600+ Languages | 127 | 2025-11-10T18:12:13 | https://ai.meta.com/blog/omnilingual-asr-advancing-automatic-speech-recognition/ | jean- | ai.meta.com | 1970-01-01T00:00:00 | 0 | {} | 1otl8q8 | false | null | t3_1otl8q8 | /r/LocalLLaMA/comments/1otl8q8/omnilingual_asr_advancing_automatic_speech/ | false | false | default | 127 | null | |
What are the biggest pain points when building your own LLM-based projects? | 1 | Hey everyone,
I’m researching what real-world problems people face when building apps or tools that use LLMs (local or API-based – GPT, Mistral, LLaMA, Claude, etc.).
I’d love to understand where developers struggle the most – both technically and operationally.
For example:
* Prompt design breaking easily
* Context window limits or expensive token usage
* RAG quality / retrieval issues
* Caching, latency, or cost optimization
* Evaluating model outputs
* Orchestration / memory management headaches
If you’ve built something (even a small tool or prototype), what’s been your most frustrating or time-consuming issue?
I’m collecting these to see where the real gaps in the ecosystem are – could be super helpful for others too.
Thanks in advance for sharing your war stories 🙏 | 2025-11-10T18:04:16 | https://www.reddit.com/r/LocalLLaMA/comments/1otl0tj/what_are_the_biggest_pain_points_when_building/ | Permanent__Learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otl0tj | false | null | t3_1otl0tj | /r/LocalLLaMA/comments/1otl0tj/what_are_the_biggest_pain_points_when_building/ | false | false | self | 1 | null |
3060 12GB (207€) vs 5060ti 16GB (360€) | 0 | I want to fine-tune LLMs and run them locally for programming, bioinformatics, and some specialized LLM assistant services. Should I pay the 150€ extra, or is the 3060 too good to pass up?
Thank you! | 2025-11-10T17:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/1otkpry/3060_12gb_207_vs_5060ti_16gb_360/ | Primary_Goat4601 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otkpry | false | null | t3_1otkpry | /r/LocalLLaMA/comments/1otkpry/3060_12gb_207_vs_5060ti_16gb_360/ | false | false | self | 0 | null |
I developed an open-source Python implementation of the Anthropic/Cloudflare idea of calling MCPs by code execution | 2 | After seeing the [Anthropic post](https://www.anthropic.com/engineering/code-execution-with-mcp) and [Cloudflare Code Mode](https://blog.cloudflare.com/code-mode/), I decided to develop a Python implementation of it. My approach runs any Python code in a containerized sandbox. It automatically discovers the MCP servers in your Claude Code config and wraps them in a Python tool-calling wrapper.
**Here is the GitHub link:** [https://github.com/elusznik/mcp-server-code-execution-mode](https://github.com/elusznik/mcp-server-code-execution-mode)
I wanted it to be as secure as possible (see the rough sketch after this list for what these flags look like assembled):
* Total Network Isolation: Uses --network none. The code has no internet or local network access.
* Strict Privilege Reduction: Drops all Linux capabilities (--cap-drop ALL) and prevents privilege escalation (--security-opt no-new-privileges).
* Non-Root Execution: Runs the code as the unprivileged 'nobody' user (--user 65534).
* Read-Only Filesystem: The container's root filesystem is mounted --read-only.
* Anti-DoS: Enforces strict memory (--memory 512m), process (--pids-limit 128), and execution time limits to prevent fork bombs.
* Safe I/O: Provides small, non-executable in-memory file systems (tmpfs) for the script and temp files.
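Purely as an illustration (this is not the actual RootlessContainerSandbox code), here's roughly what those flags look like assembled into a single container invocation:

```python
import subprocess

def run_sandboxed(code: str, image: str = "python:3.12-alpine") -> str:
    """Hypothetical sketch: execute untrusted Python inside a locked-down container."""
    cmd = [
        "docker", "run", "--rm", "-i",
        "--network", "none",                      # no internet or LAN access
        "--cap-drop", "ALL",                      # drop every Linux capability
        "--security-opt", "no-new-privileges",    # block privilege escalation
        "--user", "65534:65534",                  # run as the unprivileged 'nobody' user
        "--read-only",                            # read-only root filesystem
        "--memory", "512m",
        "--pids-limit", "128",                    # anti fork-bomb
        "--tmpfs", "/tmp:rw,noexec,nosuid,size=16m",
        image, "python", "-",                     # read the script from stdin
    ]
    result = subprocess.run(cmd, input=code, capture_output=True, text=True, timeout=30)
    return result.stdout

print(run_sandboxed("print(sum(range(10)))"))
```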
It's designed to be a "best-in-class" Level 2 (container-based) sandbox that you can easily add to your existing MCP setup. I'd love for you to check it out and give me any feedback, especially on the security model in the RootlessContainerSandbox class. It's amateur work, but I tried my best to secure and test it. | 2025-11-10T17:43:00 | https://www.reddit.com/r/LocalLLaMA/comments/1otkf5e/i_developed_an_opensource_python_implementation/ | elusznik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otkf5e | false | null | t3_1otkf5e | /r/LocalLLaMA/comments/1otkf5e/i_developed_an_opensource_python_implementation/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=108&crop=smart&auto=webp&s=d5b456508d74c0beca8e2e1add79a59157489236', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=216&crop=smart&auto=webp&s=e49d64f1af42c3f6aec24ba7e4ff7291b4ff2d62', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=320&crop=smart&auto=webp&s=b8455049e7f029d17f87167bf0717388b53dc2b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=640&crop=smart&auto=webp&s=e35084de5c6d576bb3d8e584f2e199cac63e4f38', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=960&crop=smart&auto=webp&s=39ac088c5f4830caa722cdb1e4f757c6f2ea0ac6', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=1080&crop=smart&auto=webp&s=f1f553adab54865573c8b4a2be4057f9e7266f28', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?auto=webp&s=a2897a5f484be38f22d6d2bb0fbdae7a46fced0b', 'width': 2400}, 'variants': {}}]} |
Name your favorite OSS Agent tool(s)! | 6 | I’m not talking about roo or cline.
I mean things like Flow Agent, Mem Agent, training agents, etc. Python or JS based agentic workflow systems that deserve a look.
Anyone have suggestions?
I’m aware of the agent building tools out there, but I stay away from Claude Code. I want systems I can run, set as an MCP server or otherwise, and when called from another LLM they spin up the model you selected to do their hyperspecialized task, be it deep research, visual recognition, audio transcription, etc. | 2025-11-10T17:31:07 | https://www.reddit.com/r/LocalLLaMA/comments/1otk3bv/name_your_favorite_oss_agent_tools/ | Badger-Purple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otk3bv | false | null | t3_1otk3bv | /r/LocalLLaMA/comments/1otk3bv/name_your_favorite_oss_agent_tools/ | false | false | self | 6 | null |
Options for hosting a multi-LoRA sentence transformer? | 0 | I have a fine-tuned 2-stage Deberta setup where I'm running a coarse-head classifier into 5 different buckets that each have their own LoRA.
For testing I had been working with just swapping out the LoRAs in memory, as they're really small, and it works fine. However, for deployment I've been unable to do anything in Python other than install the entire torch lib, which ends up being like 7-9 GB total.
I really would like to limit the memory use since it's such a small base model and the LoRAs are small. I simply CANNOT get torch to install at a smaller size when building with Docker.
I am looking at maybe quantizing to int8 and converting to ONNX and running that all in memory and avoiding python/torch altogether. Unfortunately I cannot swap the LoRAs with ONNX and will have to run 5-6 different base models at the same time, but if they're small enough I can live with that and pay for a small ECS or whatever.
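If it helps anyone thinking along the same lines, here's a minimal sketch of that torch-free path, assuming the heads have already been exported to ONNX; file names and input names are hypothetical and depend on how you export:

```python
import numpy as np
import onnxruntime as ort
from onnxruntime.quantization import quantize_dynamic, QuantType
from tokenizers import Tokenizer  # Rust-backed tokenizer, no torch dependency

# One-off step: dynamic int8 weight quantization of an already-exported model.
quantize_dynamic("coarse_head.onnx", "coarse_head.int8.onnx", weight_type=QuantType.QInt8)

# Runtime: onnxruntime + tokenizers only.
tok = Tokenizer.from_file("tokenizer.json")
session = ort.InferenceSession("coarse_head.int8.onnx", providers=["CPUExecutionProvider"])

def classify(text: str) -> int:
    enc = tok.encode(text)
    inputs = {  # input names must match whatever the ONNX export used
        "input_ids": np.array([enc.ids], dtype=np.int64),
        "attention_mask": np.array([enc.attention_mask], dtype=np.int64),
    }
    logits = session.run(None, inputs)[0]
    return int(logits.argmax(axis=-1)[0])

print(classify("route this document to one of the 5 buckets"))
```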
Maybe I'm missing an option or am unaware of the proper way to do this? | 2025-11-10T17:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/1otjz2y/options_for_hosting_a_multilora_sentence/ | kalokagathia_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otjz2y | false | null | t3_1otjz2y | /r/LocalLLaMA/comments/1otjz2y/options_for_hosting_a_multilora_sentence/ | false | false | self | 0 | null |
PDF attachment with llama.cpp | 1 | Hi all, I am trying to do a side project with Qwen3VL to do OCR on scanned documents. Originally I was using 4-bit bnb unsloth quants directly with Transformers.
However, after some research, it seems that GGUF might be faster and more performant than the 4-bit quant.
Now, the problem is that llama.cpp does not seem to accept PDF attachments, so I have to manually convert to .jpg images if I want to pass pages into llama.cpp. This is not feasible if my PDF has multiple pages.
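The loop I'm considering looks roughly like this: render each page in memory and send it to llama-server's OpenAI-compatible endpoint (the endpoint, prompt, and setup here are just placeholders, assuming the server was started with the Qwen3VL GGUF plus its mmproj file):

```python
import base64, io, requests
from pdf2image import convert_from_path  # needs poppler installed on the system

API_URL = "http://localhost:8080/v1/chat/completions"  # llama-server, placeholder port

def page_to_data_url(page) -> str:
    buf = io.BytesIO()
    page.save(buf, format="JPEG")
    return "data:image/jpeg;base64," + base64.b64encode(buf.getvalue()).decode()

def ocr_pdf(path: str) -> list[str]:
    texts = []
    for page in convert_from_path(path, dpi=200):  # one PIL image per PDF page
        payload = {
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Transcribe all text on this page."},
                    {"type": "image_url", "image_url": {"url": page_to_data_url(page)}},
                ],
            }],
            "temperature": 0,
        }
        resp = requests.post(API_URL, json=payload, timeout=300)
        texts.append(resp.json()["choices"][0]["message"]["content"])
    return texts

print("\n\n".join(ocr_pdf("scan.pdf")))
```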
Is there a smarter workaround for this? Would WebUI be suitable? I see that it’s rather new | 2025-11-10T17:04:39 | https://www.reddit.com/r/LocalLLaMA/comments/1otjcw8/pdf_attachment_with_llamacpp/ | Ok_Television_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otjcw8 | false | null | t3_1otjcw8 | /r/LocalLLaMA/comments/1otjcw8/pdf_attachment_with_llamacpp/ | false | false | self | 1 | null |
LinkedIn now tells you when you're looking at an AI-generated image, if you haven't noticed. | 1 | As the 1st image shows, the C2PA label is used.
Here's what's interesting.
**The feature only applies to image platforms who join the C2PA.**
Now there's only:
* ChatGPT/DALL-E 3 images
* Adobe Firefly images
* Leica Camera images
* BBC news images
The 2nd image, generated by [Google's Nano Banana](https://www.netmind.ai/modelsLibrary/nano-banana), does not have the label.
What's even more interesting?
**It's easy to bypass this new rule.**
You just need to upload the screenshot of the AI-generated pic, as we did with the 3rd image, a screenshot of the 1st one.
Do you think more AI image platforms, like Google, will join C2PA? | 2025-11-10T17:01:49 | MarketingNetMind | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1otj9zk | false | null | t3_1otj9zk | /r/LocalLLaMA/comments/1otj9zk/linkedin_now_tells_you_when_youre_looking_at_an/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'gwi25cawng0g1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/gwi25cawng0g1.png?width=108&crop=smart&auto=webp&s=ec41f453b82b6499de072ded92a27ef425fd3478', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/gwi25cawng0g1.png?width=216&crop=smart&auto=webp&s=93b4eac0bb868267a36e2b58ef8d562198ac9d27', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/gwi25cawng0g1.png?width=320&crop=smart&auto=webp&s=83ad863d48bd66e32adeaf10a2a0718529fd12e9', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/gwi25cawng0g1.png?width=640&crop=smart&auto=webp&s=5ab6f67920dddb31762a94f585a87076fc9729a7', 'width': 640}, {'height': 647, 'url': 'https://preview.redd.it/gwi25cawng0g1.png?width=960&crop=smart&auto=webp&s=55598d314d4c996e7107df81fa9da6c4cd65abb5', 'width': 960}, {'height': 728, 'url': 'https://preview.redd.it/gwi25cawng0g1.png?width=1080&crop=smart&auto=webp&s=901a10db1470ba40d507aad021574a472e2c04ad', 'width': 1080}], 'source': {'height': 1266, 'url': 'https://preview.redd.it/gwi25cawng0g1.png?auto=webp&s=7fe0c9ea2b579316b29da345d26376cdf9ce00d0', 'width': 1878}, 'variants': {}}]} | |
LinkedIn now tells you when you're looking at an AI-generated image, if you haven't noticed. | 82 | As the 1st image shows, the C2PA label is used.
Here's what's interesting.
**The feature only applies to image platforms who join the C2PA.**
Now there's only:
* ChatGPT/DALL-E 3 images
* Adobe Firefly images
* Leica Camera images
* BBC news images
The 2nd image, generated by [Google's Nano Banana](https://www.netmind.ai/modelsLibrary/nano-banana), does not have the label.
What's even more interesting?
**It's easy to bypass this new rule.**
You just need to upload the screenshot of the AI-generated pic, as we did with the 3rd image, a screenshot of the 1st one.
Do you think more AI image platforms, like Google, will join C2PA? | 2025-11-10T17:01:12 | MarketingNetMind | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1otj99f | false | null | t3_1otj99f | /r/LocalLLaMA/comments/1otj99f/linkedin_now_tells_you_when_youre_looking_at_an/ | false | false | default | 82 | {'enabled': True, 'images': [{'id': 'bl396lgsng0g1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/bl396lgsng0g1.png?width=108&crop=smart&auto=webp&s=2b47b777ab7bdae64a70e88ea5dff5683192a29a', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/bl396lgsng0g1.png?width=216&crop=smart&auto=webp&s=8537b135d81457576bb4502ce332b2c2d2f03f44', 'width': 216}, {'height': 215, 'url': 'https://preview.redd.it/bl396lgsng0g1.png?width=320&crop=smart&auto=webp&s=f89f4a3bac08cd38f13abb625ce6c12e4f6e32ca', 'width': 320}, {'height': 431, 'url': 'https://preview.redd.it/bl396lgsng0g1.png?width=640&crop=smart&auto=webp&s=6d06e52cf2855e25cb75bab6e7f8d9e9a70cccd3', 'width': 640}, {'height': 647, 'url': 'https://preview.redd.it/bl396lgsng0g1.png?width=960&crop=smart&auto=webp&s=5a642839ef34ca710fd78d21ffdd551005b50662', 'width': 960}, {'height': 728, 'url': 'https://preview.redd.it/bl396lgsng0g1.png?width=1080&crop=smart&auto=webp&s=890524c334856059f0c05a9bf2e049ff303a590b', 'width': 1080}], 'source': {'height': 1266, 'url': 'https://preview.redd.it/bl396lgsng0g1.png?auto=webp&s=1e11f92b7132b7b7a13319e3627971e89d357955', 'width': 1878}, 'variants': {}}]} | |
After a year building an open-source AI framework, I’m starting to wonder what actually gets attention | 23 | Hey folks,
It took me over a year to finally write this.
Even now, I’m not sure it's worth it.
But whatever, yolo.
I’m the creator of Yacana, a free and open source multi-agent framework.
I’ve spent more than a year working late nights on it, **thinking that if the software was good, people would naturally show up.**
Turns out… not really.
# How it started
Back when local LLMs first became usable, there was no proper tool calling.
That made it nearly impossible to build anything useful on top of them.
So I started writing a framework to fix that. That’s how Yacana began. Its main goal was to let LLMs call tools automatically.
Around the same time, LangChain released a buggy "function calling" thing for Ollama, but it still wasn’t real tool calling. You had to handle everything manually.
That’s why I can confidently say Yacana was the first official framework to actually make it work.
I dare to say "official" because roughly at the same time it got added to the Ollama Github's main page which I thought would be enough to attract some users.
Spoiler: it wasn’t.
# How it went
As time passed, tool calling became standard across the board.
Everyone started using the OpenAI-style syntax.
Yacana followed that path too but also kept its original tool calling mechanism.
I added a ton of stuff since then: checkpoints, history management, state saving, VLLM support, thinking model support, streaming, structured outputs, and so on.
And still… almost no feedback.
**The GitHub stars and PyPI downloads? Let’s just say they’re modest.**
Then came MCP, which looked like the next big standard.
I added support for MCP tools, staying true to Yacana’s simple OOP API (unlike LangChain’s tangle of abstractions).
Still no big change.
# Self-reflection time
At one point, I thought maybe I just needed to advertise some more.
But I hesitated.
There were already so many "agentic" frameworks popping up...
I started wondering if I was just fooling myself.
Was Yacana really good enough to deserve a small spotlight?
Was I just promoting something that wasn’t as advanced as the competition?
Maybe.
And yet, I kept thinking that it deserved a bit more.
There aren’t that many frameworks out there that are both independent (not backed by a company \~Strands\~) and actually documented (sorry, LangChain).
# Meanwhile, in AI-land...
Fast forward to today. It’s been 1 year and \~4 months.
Yacana sits at around 60+ GitHub stars.
**Meanwhile, random fake AI projects get thousands of stars.**
Some of them aren’t even real, just flashy demos or vaporware.
Sometimes I genuinely wonder if there are bots starring repos to make them look more popular.
Like some invisible puppeteer trying to shape developers attention.
# A little sting
Recently I was reading through LangChain’s docs and saw they had a "checkpoints" feature.
Not gonna lie, that one stung a bit.
It wasn’t the first time I stumbled upon a Yacana feature that had been implemented elsewhere.
What hurts is that Yacana’s features weren’t copied from other frameworks, they were **invented**.
And seeing them appear somewhere else kind of proves that I might actually be good at what I do. But the fact that so few people seem to care about my work just reinforces the feeling that maybe I’m doing all of this for nothing.
# My honest take
I don’t think agentic frameworks are a revolution.
The real revolution is the LLMs themselves.
Frameworks like Yacana (or LangChain, CrewAI, etc.) are mostly structured wrappers around POST requests to an inference server.
Still, Yacana has a purpose.
It’s simple, lightweight, easy to learn, and can work with models that aren’t fine-tuned for function calling.
It’s great for people who don't want to invest 100+ hours in Langchain. Not saying that Langchain isn't worth it, but it's not always needed depending on the problem to solve.
# Where things stand
So why isn’t it catching on?
I am still unsure.
I’ve written detailed docs, made examples, and even started recording video tutorials.
The problem doesn’t seem to be the learning curve.
Maybe it still lacks something, like native RAG support. But after having followed the hype curve for more than a year, I’ve realized there’s probably more to it than just features.
I’ll keep updating Yacana regardless.
I just think it deserves a (tiny) bit more visibility.
Not because it’s revolutionary, but because it’s real.
And maybe that should count for something.
\---
Github:
* [https://github.com/rememberSoftwares/yacana](https://github.com/rememberSoftwares/yacana)
Documentation:
* [https://remembersoftwares.github.io/yacana](https://remembersoftwares.github.io/yacana)
| 2025-11-10T16:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/1otislj/after_a_year_building_an_opensource_ai_framework/ | DocteurW | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otislj | false | null | t3_1otislj | /r/LocalLLaMA/comments/1otislj/after_a_year_building_an_opensource_ai_framework/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': '5dPheww3wtbhSeUSTpzI5zIOp2n08kZGq1qftJ4qMU8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5dPheww3wtbhSeUSTpzI5zIOp2n08kZGq1qftJ4qMU8.png?width=108&crop=smart&auto=webp&s=6b0f287c19640a6a968d407fd7b1f48dc583d02c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5dPheww3wtbhSeUSTpzI5zIOp2n08kZGq1qftJ4qMU8.png?width=216&crop=smart&auto=webp&s=ebd9f9acfa4db3b6a2c5f71e1c89ec71a621dabc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5dPheww3wtbhSeUSTpzI5zIOp2n08kZGq1qftJ4qMU8.png?width=320&crop=smart&auto=webp&s=81fcd846493fd3cbe6b27255ea9257b035a0318f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5dPheww3wtbhSeUSTpzI5zIOp2n08kZGq1qftJ4qMU8.png?width=640&crop=smart&auto=webp&s=00c07ccd5e59852b15ad1876a0d4e27199d17519', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5dPheww3wtbhSeUSTpzI5zIOp2n08kZGq1qftJ4qMU8.png?width=960&crop=smart&auto=webp&s=9fb6e37515647c99226cb08bdbca787096a4da82', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5dPheww3wtbhSeUSTpzI5zIOp2n08kZGq1qftJ4qMU8.png?width=1080&crop=smart&auto=webp&s=8d055aa15a042a1b792f90f076c153b83524458e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5dPheww3wtbhSeUSTpzI5zIOp2n08kZGq1qftJ4qMU8.png?auto=webp&s=73e412dfde64a485bbfd4b34ec3ae4a5d9c019a4', 'width': 1200}, 'variants': {}}]} |
Kimi K2 goes off the rails! | 0 | Check it out here on my Github. First time I've seen an LLM get angry!
[jazmaan.github.io](http://jazmaan.github.io) | 2025-11-10T16:39:59 | https://www.reddit.com/r/LocalLLaMA/comments/1otiodp/kimi_k2_goes_off_the_rails/ | jazmaan273 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otiodp | false | null | t3_1otiodp | /r/LocalLLaMA/comments/1otiodp/kimi_k2_goes_off_the_rails/ | false | false | self | 0 | null |
Open-dLLM: Open Diffusion Large Language Models | 130 | Code: [https://github.com/pengzhangzhi/Open-dLLM](https://github.com/pengzhangzhi/Open-dLLM)
Blog: [https://oval-shell-31c.notion.site/Open-dLLM-Open-Diffusion-Large-Language-Model-25e03bf6136480b7a4ebe3d53be9f68a](https://oval-shell-31c.notion.site/Open-dLLM-Open-Diffusion-Large-Language-Model-25e03bf6136480b7a4ebe3d53be9f68a)
Most diffusion LLM repos (e.g., LLaDA, Dream) only release **inference scripts + weights**, which limits reproducibility. **Open-dLLM** is the first to open-source the **entire stack** for diffusion LLMs.
With Open-dLLM, you can go from **raw data → training → checkpoints → evaluation → inference**, all in one repo. | 2025-11-10T16:33:06 | https://v.redd.it/qb62efspig0g1 | pengzhangzhi | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1otihl1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/qb62efspig0g1/DASHPlaylist.mpd?a=1765384401%2CYWVkYzk4ZTc2NTMxYmI3MWI4MWU0N2Q4MmY4ZDVjZWNkYTRlNjkwZjM3YWU3M2Q3Zjg4ZTkxYjNmNzc2MTZlOA%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/qb62efspig0g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/qb62efspig0g1/HLSPlaylist.m3u8?a=1765384401%2CYzIyZmIyYjJmYWJiN2I0M2U2NTMyNjBhNWY2YzlmZGYxMjVlNDMxZGUwNzE5ZjcwNWIzNjdhYmNjNzQyMDRhZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qb62efspig0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1010}} | t3_1otihl1 | /r/LocalLLaMA/comments/1otihl1/opendllm_open_diffusion_large_language_models/ | false | false | 130 | {'enabled': False, 'images': [{'id': 'eHlpNXJmc3BpZzBnMbC2Q-rs9CfDNgw85akHP4ZCgTS81bEyqZb3k8CkqU2r', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/eHlpNXJmc3BpZzBnMbC2Q-rs9CfDNgw85akHP4ZCgTS81bEyqZb3k8CkqU2r.png?width=108&crop=smart&format=pjpg&auto=webp&s=1bf4a85ffc11af9f80aa7d17e46ca0c91d1c761d', 'width': 108}, {'height': 153, 'url': 'https://external-preview.redd.it/eHlpNXJmc3BpZzBnMbC2Q-rs9CfDNgw85akHP4ZCgTS81bEyqZb3k8CkqU2r.png?width=216&crop=smart&format=pjpg&auto=webp&s=0b49328ca87b545777ed97d9ac1d67fb9fee4c7d', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/eHlpNXJmc3BpZzBnMbC2Q-rs9CfDNgw85akHP4ZCgTS81bEyqZb3k8CkqU2r.png?width=320&crop=smart&format=pjpg&auto=webp&s=50701382b4f6be8d1e62c2ec1257f7f8b7408288', 'width': 320}, {'height': 456, 'url': 'https://external-preview.redd.it/eHlpNXJmc3BpZzBnMbC2Q-rs9CfDNgw85akHP4ZCgTS81bEyqZb3k8CkqU2r.png?width=640&crop=smart&format=pjpg&auto=webp&s=8a642150d7b431c1e46ebc519b58b815a40867d0', 'width': 640}, {'height': 684, 'url': 'https://external-preview.redd.it/eHlpNXJmc3BpZzBnMbC2Q-rs9CfDNgw85akHP4ZCgTS81bEyqZb3k8CkqU2r.png?width=960&crop=smart&format=pjpg&auto=webp&s=04ec65fef417aebd260bf719dece0c625cdb448f', 'width': 960}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eHlpNXJmc3BpZzBnMbC2Q-rs9CfDNgw85akHP4ZCgTS81bEyqZb3k8CkqU2r.png?format=pjpg&auto=webp&s=1fded7a967bca35890cde98658f98eb2f32a26bb', 'width': 1010}, 'variants': {}}]} | |
OCR Accuracy Showdown: PaperLab vs. LlamaIndex | 0 | When it comes to document digitization, Optical Character Recognition (OCR) accuracy is everything. One misplaced character can completely alter the meaning of complex data, especially in scientific or mathematical contexts.
At PaperLab, we recently ran a simple experiment to see how our OCR accuracy compares with LlamaIndex, focusing specifically on how both tools convert research content into Markdown format.
Methodology:
A sample table was taken from the research paper: [https://www.researchgate.net/publication/395423940\_Reparameterized\_slashed\_lognormal\_regression\_model\_Diagnostics\_and\_application\_to\_mineral\_data](https://www.researchgate.net/publication/395423940_Reparameterized_slashed_lognormal_regression_model_Diagnostics_and_application_to_mineral_data)
What we did in simple steps:
Input: Uploaded the same table image to both platforms.
Output: Generated Markdown versions from each tool.
Verification: Opened both outputs in VS Code to visually inspect Markdown accuracy.
Error Calculation: Compared each output against the original table to measure the error rate.
Since LlamaIndex produced its Markdown output in LaTeX, we used Overleaf to convert it into readable math expressions for verification.
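For readers who want to reproduce the error-rate step on their own documents, a plain character error rate along these lines is enough (file names here are hypothetical, and this is a simplified stand-in for our internal script):

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def char_error_rate(reference: str, hypothesis: str) -> float:
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

ground_truth = open("original_table.md").read()
for name in ("paperlab_output.md", "llamaindex_output.md"):
    print(name, f"{char_error_rate(ground_truth, open(name).read()):.2%}")
```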
**Results**
https://preview.redd.it/mt63qxzahg0g1.png?width=753&format=png&auto=webp&s=542603bac640c95fa07b8f9dbb2f7954c71dca91
What We Found
The biggest surprise was that LlamaIndex's Markdown file could not be fully parsed, showing a 'Parse Error' that indicated it failed to handle the structure of the source material.
https://preview.redd.it/al2ky8kehg0g1.png?width=740&format=png&auto=webp&s=b22329936eb1bbb0ae72e41e3fbeb1856f7a6d9d
Even after conversion, the math equations were misread and altered in ways that could have completely changed the interpretation of the research data.
PaperLab, in contrast, produced a clean, accurate Markdown file that perfectly preserved every equation and symbol from the original.
Please share your comments and try your own testing: [https://www.paperlab.ai/pdftomarkdown](https://www.paperlab.ai/pdftomarkdown)
| 2025-11-10T16:27:27 | https://www.reddit.com/r/LocalLLaMA/comments/1otibyl/ocr_accuracy_showdown_paperlab_vs_llamaindex/ | PaperLab_AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1otibyl | false | null | t3_1otibyl | /r/LocalLLaMA/comments/1otibyl/ocr_accuracy_showdown_paperlab_vs_llamaindex/ | false | false | 0 | null | |
Minimax M2 for App creation | 4 | Hello, lately I have been testing Minimax for creating a simple PWA that only handles data with Supabase, Spreedsheets and Google Drive. But when I tell Minimax what I need, every time it fixes something, it breaks something else and I can spend 3 hours walking around trying to correct the same error. I paid for the more expensive PRO version because I thought it would be worth it and I could carry out my project. But the truth is that it's giving me a lot of headaches and wasting time constantly correcting it so that it then breaks another part of the app. The truth is I feel a little frustrated, I promised more. Can anyone take a project from start to finish with Minimax? | 2025-11-10T16:20:07 | https://www.reddit.com/r/LocalLLaMA/comments/1oti4tc/minimax_m2_for_app_creation/ | HectorLavoe33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oti4tc | false | null | t3_1oti4tc | /r/LocalLLaMA/comments/1oti4tc/minimax_m2_for_app_creation/ | false | false | self | 4 | null |
How to link an AI to a code execution environment? | 0 | Hi, I read this article (https://www.anthropic.com/engineering/code-execution-with-mcp) from Anthropic that talks about how using a code execution environment and MCP server can improve responses and token efficiency. But I don't get the technical part of how to connect your model to the code environment. I mean, is there any open-source solution or do I need to build one on my own? If so, how do I connect the LLM to that environment?
One idea I had was to use an MCP client that is connected to two tools: "get-folder" and "send-code". The "send-code" tool sends the LLM's code to the environment, but I did not feel it was a good solution specifically because there is no mention of the word "MCP client" in the article.
And why bother creating code with the "MCP" standard if the LLM will just call it like a library function? I could just write the code however I wanted, and the LLM wouldn't notice because it is just calling it, right?
Does anyone have an explanation or tips on how I can implement that? | 2025-11-10T16:20:00 | https://www.reddit.com/r/LocalLLaMA/comments/1oti4or/how_to_link_an_ai_to_a_code_execution_environment/ | yeahlloow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oti4or | false | null | t3_1oti4or | /r/LocalLLaMA/comments/1oti4or/how_to_link_an_ai_to_a_code_execution_environment/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=108&crop=smart&auto=webp&s=d5b456508d74c0beca8e2e1add79a59157489236', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=216&crop=smart&auto=webp&s=e49d64f1af42c3f6aec24ba7e4ff7291b4ff2d62', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=320&crop=smart&auto=webp&s=b8455049e7f029d17f87167bf0717388b53dc2b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=640&crop=smart&auto=webp&s=e35084de5c6d576bb3d8e584f2e199cac63e4f38', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=960&crop=smart&auto=webp&s=39ac088c5f4830caa722cdb1e4f757c6f2ea0ac6', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?width=1080&crop=smart&auto=webp&s=f1f553adab54865573c8b4a2be4057f9e7266f28', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/oFd7k3gtzvKflZCxW3_jUV0EMRHjhIt_BNNui3wNwnI.png?auto=webp&s=a2897a5f484be38f22d6d2bb0fbdae7a46fced0b', 'width': 2400}, 'variants': {}}]} |
bnb 4bit vs GGUF | 1 | I'm new to the world of LLMs and was hoping to get some advice from those with more experience.
I've noticed that for LLM inference, the bnb-4bit format seems to be a common recommendation. Is it generally preferred over other formats like GGUF?
From what I can gather, the main purpose of bnb-4bit is to reduce the model's memory footprint, but I've also observed that GGUF models tend to have significantly more downloads. This has left me a bit confused.
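For context, this is roughly what using bnb-4bit looks like on the Transformers side, as far as I understand it (the model name is just a placeholder); GGUF skips this path entirely and runs through llama.cpp-based runtimes instead:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_id = "some-org/some-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```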
Could someone clarify the primary use case for bnb-4bit and why GGUF might be more popular in terms of download numbers?
Any insights you can share would be greatly appreciated as I'm still learning the ropes. Thank you in advance! | 2025-11-10T16:15:32 | https://www.reddit.com/r/LocalLLaMA/comments/1oti09t/bnb_4bit_vs_gguf/ | Ok_Television_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oti09t | false | null | t3_1oti09t | /r/LocalLLaMA/comments/1oti09t/bnb_4bit_vs_gguf/ | false | false | self | 1 | null |