title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
RTX Pro 5000 48GB vs DGX Spark for LLM + RAG lab setup (enterprise data) | 1 | Hi all,
I’m setting up a small lab environment to experiment with LLMs + RAG using internal enterprise data (documentation, processes, knowledge base, etc.). The goal is to build something like an internal “chat with company knowledge” system.
This is not for production yet — it’s mainly for testing architectures, embeddings, chunking strategies, retrieval approaches, and understanding practical limits before scaling further.
I’m currently considering two options:
**Option 1:**
RTX Pro 5000 (48GB) in a workstation with 128GB RAM.
**Option 2:**
NVIDIA DGX Spark (Grace Blackwell).
For this kind of lab setup, which would you consider more sensible in terms of real-world performance, flexibility, and cost/performance ratio?
I’m especially interested in practical experience around:
* Inference performance with larger models
* Behavior in interactive RAG workflows
* Whether the unified memory in the Spark is actually an advantage vs a powerful dedicated GPU
Any real-world feedback or similar setups would be greatly appreciated. | 2026-02-16T11:46:09 | https://www.reddit.com/r/LocalLLaMA/comments/1r67ick/rtx_pro_5000_48gb_vs_dgx_spark_for_llm_rag_lab/ | Educational-Shoe8806 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r67ick | false | null | t3_1r67ick | /r/LocalLLaMA/comments/1r67ick/rtx_pro_5000_48gb_vs_dgx_spark_for_llm_rag_lab/ | false | false | self | 1 | null |
Which model can provide me with an answer? Local model only | 2 | Need help to determine which local llama will provide a answer I approve :)
Prompt:
You're looking for an anime movie with the following characteristics: Setting: Plot & Themes: Unique Visual/Mystical Element: Not space-themed Features a Romeo and Juliet-like tragic love story. The spirits of fallen people appear as red birds. No mecha Not set in Japan (Earth-like planet, but non-Japanese culture/kingdoms) Involves a war between two nations, resulting in many deaths. A main survivor character is left behind. A coastal kingdom falls, specifically due to a "dike" being destroyed and the sea overwhelming it. These red bird spirits are then collected in a "spaceship" (the nature of this "ship" is undefined, given the "no space themed" constraint).
I have tried the following models:
\- Qwen\_Qwen3-Coder-Next-GGUF\_Qwen3-Coder-Next-Q5\_K\_M\_Qwen3-Coder-Next-Q5\_K\_M-00001-of-00004.gguf
\- unsloth\_gemma-3-4b-it-GGUF\_gemma-3-4b-it-Q3\_K\_S.gguf
\- unsloth\_gpt-oss-20b-GGUF\_gpt-oss-20b-F16.gguf
\- unsloth\_Qwen3-Coder-30B-A3B-Instruct-GGUF\_Qwen3-Coder-30B-A3B-Instruct-Q4\_K\_M.gguf | 2026-02-16T11:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1r67dvh/which_model_can_provide_me_with_an_answer_local/ | Gold_Sugar_4098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r67dvh | false | null | t3_1r67dvh | /r/LocalLLaMA/comments/1r67dvh/which_model_can_provide_me_with_an_answer_local/ | false | false | self | 2 | null |
Support and guidance in building an independent learning project. | 1 | I’m a product manager, and I’d like to “get my hands dirty” a bit to gain a deeper understanding of LLMs and AI.
I was thinking of building a side project — maybe a trivia and riddle quiz for my kids. Something that could run daily, weekly, or monthly, with a scoring leaderboard.
I’d like to incorporate both AI and LLM components into it.
I have basic coding knowledge, I’m not intimidated by that, and I have a paid ChatGPT subscription.
How should I get started?
What’s the best way to learn through a project like this?
Is ChatGPT suitable for this, or would Claude be better?
I’d really appreciate some guidance.
tnx | 2026-02-16T11:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r67b6n/support_and_guidance_in_building_an_independent/ | Financial-Sand-6999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r67b6n | false | null | t3_1r67b6n | /r/LocalLLaMA/comments/1r67b6n/support_and_guidance_in_building_an_independent/ | false | false | self | 1 | null |
Forked OpenClaw to run fully air-gapped (no cloud deps) | 33 | I've been playing with OpenClaw, but I couldn't actually use it for anything work-related because of the data egress. The agentic stuff is cool, but sending everything to OpenAI/cloud APIs is a non-starter for my setup.
So I spent the weekend ripping out the cloud dependencies to make a fork that runs strictly on-prem.
It’s called Physiclaw ([www.physiclaw.dev](http://www.physiclaw.dev)).
Basically, I swapped the default runtime to target local endpoints (vLLM / llama.cpp) and stripped the telemetry. I also started breaking the agent into specific roles (SRE, SecOps) with limited tool access instead of one generic assistant that has root access to everything.
The code is still pretty raw/alpha, but the architecture for the air-gapped runtime is there.
If anyone is running agents in secure environments or just hates cloud dependencies, take a look and let me know if I missed any obvious leaks.
**Repo:** [**https://github.com/CommanderZed/Physiclaw**](https://github.com/CommanderZed/Physiclaw) | 2026-02-16T11:35:00 | https://www.reddit.com/r/LocalLLaMA/comments/1r67b43/forked_openclaw_to_run_fully_airgapped_no_cloud/ | zsb5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r67b43 | false | null | t3_1r67b43 | /r/LocalLLaMA/comments/1r67b43/forked_openclaw_to_run_fully_airgapped_no_cloud/ | false | false | self | 33 | null |
Is GLM Lite Subscription Worth it to have while getting limited? | 0 | Currently, i saw some comment or post that told me that if the limit of Lite usage is not fair enough as before the GLM5 release, is any of you guys have running the lite version? any thoughts? | 2026-02-16T11:25:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r674ou/is_glm_lite_subscription_worth_it_to_have_while/ | Remote_Fun1742 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r674ou | false | null | t3_1r674ou | /r/LocalLLaMA/comments/1r674ou/is_glm_lite_subscription_worth_it_to_have_while/ | false | false | self | 0 | null |
Do I understand --n-keep correctly? | 3 | Can someone help me understand if I'm using `--n_keep` correctly?
My understanding is that it keeps the first N tokes, then cuts the remaining in half and removes the first part.
So, a 80k context with n\_keep 40k, after becoming full, would essentially become:
\[0-40k\] \[60-80\] \[20k empty\]
Is this correct? | 2026-02-16T11:00:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r66osd/do_i_understand_nkeep_correctly/ | nunodonato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r66osd | false | null | t3_1r66osd | /r/LocalLLaMA/comments/1r66osd/do_i_understand_nkeep_correctly/ | false | false | self | 3 | null |
Best compromise for small budgets Local llm | 1 | Hello Guys,
I know my question is pretty standard but i always see people arguing on whats the best setup for local GPUs so im a bit lost.
My requirements is that the setup should be able to run gpt-oss 120B(its for the ballpark of VRAM)
Of course with the fastest toks/s possible.
I would like to know if its possible for the following budget:
\-2k
\-3k
\-4k
And whats the best setup for each of those budgets.
Thanks for your ideas and knowledge! | 2026-02-16T10:53:18 | https://www.reddit.com/r/LocalLLaMA/comments/1r66k7j/best_compromise_for_small_budgets_local_llm/ | Best_Sail5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r66k7j | false | null | t3_1r66k7j | /r/LocalLLaMA/comments/1r66k7j/best_compromise_for_small_budgets_local_llm/ | false | false | self | 1 | null |
vLLM MAXIMUM performance on multi-3090 | 46 | TLDR: install patched p2p driver, patch vllm platform and skip p2p check. You'll get +50% performance on 4x3090 with Qwen3 Coder Next FP8. Free performance, free tokens, very nice :)
So, YOU (yes, YOU) managed to setup vLLM on your multi gpu platform with consumer cards. It's nice, running fast and doesn't lose a lot of performance on long contexts. But there are HIDDEN and FREE performance laying here just for you.
Let's go into the deep.
## Prerequisite
I assume you have something like cheap RTX 3090 and running vLLM with tensor parallelism on linux without docker. Otherwise I cannot guarantee results. As if I could guarantee anything otherwise, lol.
### Resizable bar
You need to enable resizable bar. Check it with `sudo lspci -vvv | grep -i -A40 'VGA compatible controller'`, look for `Region 1: Memory at 17800000000 (64-bit, prefetchable) [size=32G]`. If it's `32M`, then you need to flash new BIOS.
- https://www.techpowerup.com/download/nvidia-nvflash/ - nvflash
- https://www.techpowerup.com/vgabios/231650/msi-rtx3090-24576-210310-1 - example where to find updated bios
Just reboot in safe mode and follow intuitive `./nvflash help` output. It's that simple.
### PCIe lanes
GPUs must be connected with enough PCIe lanes to achieve desired bandwidth. How many lanes? Well... I've didn't seen more than 4GB/s IN + 4GB/s OUT, so PCIe 3.0 X8 OR PCIe 4.0 X4 must be ok enough. Maybe not, who knows. Try it yourself. But PCIe 3.0 X1 is not ok anyway.
### Similar cards in parallel.
This is tricky, you can't mix 3090 + 4090. I mean, technically you can, and it will be BLAZING FAST. But completely incorrect and incoherent. Maybe. Maybe 30B FP16 models will be good.
Check bug here - https://github.com/vllm-project/vllm/issues/34437#issuecomment-3903773323.
## Setup instructions
### Install patched P2P driver
https://github.com/aikitoria/open-gpu-kernel-modules - follow instruction here. Don't forget to reboot. Maybe you will need to compile CUDA samples (I don't remember where I get them) with p2pBandwidthTest to verify it works.
You must get similar output:
```
~# nvidia-smi topo -p2p r
GPU0 GPU1 GPU2 GPU3
GPU0 X OK OK OK
GPU1 OK X OK OK
GPU2 OK OK X OK
GPU3 OK OK OK X
```
And if your p2p bandwidth test shows you 0.02GB/s transfer rates, go check and resizable bar support.
### Patch vLLM
For unknown incomprehensible reason, vLLM tests p2p availability only for NVLink. Yep, you have patched driver and ik_llama.cpp now is blazing fast (probably), but vLLM still show you "Custom all-reduce is disabled, you moron! ~nya". Time to fix it.
- Go to `env/lib/blablabla/site-packages/vllm`. Now you can EDIT anything in vllm sources. Well, cuda kernels are compiled, but we are stupid and don't know how to edit them. Otherwise 3090+4090 issue would be already fixed.
- You need to do `vi env_vllm/lib/python3.13/site-packages/vllm/platforms/cuda.py`. There is line 597 https://github.com/vllm-project/vllm/blob/main/vllm/platforms/cuda.py#L597 . Make it just `return True`.
That's all. We're telling vLLM "Trust me bro, I have my GPUs fully connected AND I DON'T KNOW HOW IT WILL AFFECT MY SYSTEM".
## Profit!
And load you're favorite Qwen3 Coder Next FP8 with -tp 4 and look at numbers. Single request will go up from ~100 tps to ~150 tps. Or maybe not, because I'm lucky and you are not lucky.
> (APIServer pid=1689046) INFO 02-16 13:51:25 [loggers.py:259] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 144.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.2%, Prefix cache hit rate: 0.3% | 2026-02-16T10:52:53 | https://www.reddit.com/gallery/1r66jyp | Nepherpitu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r66jyp | false | null | t3_1r66jyp | /r/LocalLLaMA/comments/1r66jyp/vllm_maximum_performance_on_multi3090/ | false | false | 46 | null | |
Help me decide if to buy EGPU for Minisforum S1-max | 3 | Hello,
I need an advice if to buy / not buy an extra GPU for my Minisforum S1-Max.
Just to sum it up, this box has AMD AI Max plus 395 CPU, 128 gb RAM, AMD Radeon 8060s integrated GPU.
I am running Arch Linux and my use case is LLM inference, currently mainly through llama.cpp.
Currently I am running mainly MOE models, because dense models have quite slow inference on this GPU.
I am running Qwen3-coder-next quantized at q8\_0 with around 35 tokens per second of inference... and I am actually quite satisfied with this speed, although of course it could be higher.
My goal is to get better inference speed. Alternative goal is to run larger models, but I am not sure if egpu will help me with this a lot without decreasing inference speed because 128 gb of RAM is already quite a lot.
I am thinking about buying an egpu and connecting it through one of TB5 ports on the PC. I was thinking about 32 or 48 gb of nvram.
Do you think it makes sense with this size of moe models? I thought some experts could be offloaded to the egpu and it would be even faster.
Or is this totall nonsense and using egpu of this size makes sense only for dense models?
Has anyone already tried using egpu with this minipc?
My impression is that utilizing the spare PCI 4x4 slot on the machine will allow only kinda weaker GPUs.
Thank you for responses and tips.
| 2026-02-16T10:52:26 | https://www.reddit.com/r/LocalLLaMA/comments/1r66joq/help_me_decide_if_to_buy_egpu_for_minisforum_s1max/ | krecoun007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r66joq | false | null | t3_1r66joq | /r/LocalLLaMA/comments/1r66joq/help_me_decide_if_to_buy_egpu_for_minisforum_s1max/ | false | false | self | 3 | null |
😭 | 0 | 2026-02-16T10:49:58 | muxxington | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r66i73 | false | null | t3_1r66i73 | /r/LocalLLaMA/comments/1r66i73/_/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '9w78f46q6ujg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?width=108&crop=smart&auto=webp&s=9e0219c78505b4bac160e0ecf443a2243cf74d07', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?width=216&crop=smart&auto=webp&s=7bb27f2e2ade47ce66a408efe5cd68585f665850', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?width=320&crop=smart&auto=webp&s=6e8d2588187461191e73d7d6ffa62990e30bec5a', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?width=640&crop=smart&auto=webp&s=72e2b3f590dc7043a56eec2cdc7ceb9e4e3df3d6', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?width=960&crop=smart&auto=webp&s=063c37113817d289698c568e2cd5fef2905670c1', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?width=1080&crop=smart&auto=webp&s=7f16a37139be7b4d19f353ac60ef3329f21a4ce5', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?auto=webp&s=b80ab40fdd69dc0c39e3a5b2e76cf48c240b435f', 'width': 1280}, 'variants': {}}]} | ||
Building a private AI Task Manager (runs Gemma 2B on-device). No data leaves your phone. Is $5 fair for lifetime access? | 2 | Hey everyone,
I’m a developer frustrated by every productivity app turning into a monthly subscription service. I’m building an app called Pagio, and I want to validate my pricing model before I finish the code.
The Pitch:
Most AI apps send your data to OpenAI/Claude, which costs them money, so they charge you $10-20/month.
Pagio runs a small LLM (Google's Gemma 2B) locally on your device.
Privacy: Your notes/tasks never leave your phone.
Speed: No network latency (works in airplane mode).
Cost: Since I have $0 server costs, I want to charge $5 one-time. No subscriptions. Ever.
The Features:
Brain Dump: You type: "Meeting with Sarah tomorrow at 2pm about the Q3 roadmap."
Auto-Sort: The AI instantly turns that into a Calendar Event (2pm) and a Task ("Prep Q3 roadmap").
RAG: Chat with your past notes offline.
The "Catch" (Need your honest feedback):
Because the AI brain lives on your phone, the app requires a \~1.5GB initial download (for the model weights).
My Questions for you:
Is a 1.5GB download a dealbreaker for a mobile productivity app?
Would you pay $5 upfront for this, or would you prefer a "Free Trial" with a $5 in-app purchase to unlock?
Does "Local Only" matter to you, or do you not care where the data goes?
Thanks for the brutal honesty! | 2026-02-16T10:49:44 | https://www.reddit.com/r/LocalLLaMA/comments/1r66i1j/building_a_private_ai_task_manager_runs_gemma_2b/ | HelpfulNight1955 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r66i1j | false | null | t3_1r66i1j | /r/LocalLLaMA/comments/1r66i1j/building_a_private_ai_task_manager_runs_gemma_2b/ | false | false | self | 2 | null |
What model is used to create such videos? | 1 | [removed] | 2026-02-16T10:46:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r66g76/what_model_is_used_to_create_such_videos/ | Odd_Branch_2125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r66g76 | false | null | t3_1r66g76 | /r/LocalLLaMA/comments/1r66g76/what_model_is_used_to_create_such_videos/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'HtkfAH1x2HpzfbHOEBh0J_B9ZagH9iMNGTljJ1RwqXM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/HtkfAH1x2HpzfbHOEBh0J_B9ZagH9iMNGTljJ1RwqXM.jpeg?width=108&crop=smart&auto=webp&s=6cf305f4ca8fc6fb038a84ff7c41889c28ec04dd', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/HtkfAH1x2HpzfbHOEBh0J_B9ZagH9iMNGTljJ1RwqXM.jpeg?width=216&crop=smart&auto=webp&s=ac354f0819bf4bb633aa2354bb5a3b67ff1690ba', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/HtkfAH1x2HpzfbHOEBh0J_B9ZagH9iMNGTljJ1RwqXM.jpeg?width=320&crop=smart&auto=webp&s=1d498792536f04133514dfa44d6cc3e0d0349d8e', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/HtkfAH1x2HpzfbHOEBh0J_B9ZagH9iMNGTljJ1RwqXM.jpeg?width=640&crop=smart&auto=webp&s=5f928f19b9d7a6dfbf3d5a6298183eab018abfea', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/HtkfAH1x2HpzfbHOEBh0J_B9ZagH9iMNGTljJ1RwqXM.jpeg?auto=webp&s=c1c19473be19028dc84f432da637f24b313b8e24', 'width': 900}, 'variants': {}}]} |
What model is used to create such videos on Instagram? | 1 | [removed] | 2026-02-16T10:43:34 | https://www.reddit.com/r/LocalLLaMA/comments/1r66e6a/what_model_is_used_to_create_such_videos_on/ | Odd_Branch_2125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r66e6a | false | null | t3_1r66e6a | /r/LocalLLaMA/comments/1r66e6a/what_model_is_used_to_create_such_videos_on/ | false | false | self | 1 | null |
ai models for content | 1 | [removed] | 2026-02-16T10:39:40 | https://www.reddit.com/r/LocalLLaMA/comments/1r66btz/ai_models_for_content/ | Odd_Branch_2125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r66btz | false | null | t3_1r66btz | /r/LocalLLaMA/comments/1r66btz/ai_models_for_content/ | false | false | nsfw | 1 | null |
ai model for such content | 1 | [removed] | 2026-02-16T10:36:13 | https://www.reddit.com/r/LocalLLaMA/comments/1r669tq/ai_model_for_such_content/ | Odd_Branch_2125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r669tq | false | null | t3_1r669tq | /r/LocalLLaMA/comments/1r669tq/ai_model_for_such_content/ | false | false | nsfw | 1 | null |
Can Seedance 2.0's 12B be run on a 4080 in comfyui? | 1 | pretty much the title. | 2026-02-16T10:28:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r664yn/can_seedance_20s_12b_be_run_on_a_4080_in_comfyui/ | Nervous_Narwhal4141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r664yn | false | null | t3_1r664yn | /r/LocalLLaMA/comments/1r664yn/can_seedance_20s_12b_be_run_on_a_4080_in_comfyui/ | false | false | self | 1 | null |
How viable are eGPUs and NVMe? | 2 | Hello.
I got myself an Asus ProArt X870E-CREATOR WIFI mobo, and been happily running \~65GB filesize models on 16GB VRAM 96GB RAM (RX 9070 XT + 9950X3D)
However, my main M.2 PCIe 5.0 slot (M2\_1) remains unused since I run all my current drives through the chipset (since they're PCIe 4.0 themselves anyway). So I wonder, is buying something like a 9100 Pro 4-8TB for that slot a good idea for running huge MoE models? I've seen NVMe offload mentioned a lot of times, but never really found *how* to do it and what the results are. Is it as simple as enabling (or not disabling) mmap? Of course I won't get VRAM speeds with that, but neither do I expect that or need that.
Another thing is eGPU. From what I've gathered, my mobo has 2x USB4 ports with a controller that's connected directly to the CPU via PCIe x4. Heavily considering getting something like 2x RX 7900 XTX for that AI-dedicated 48GB VRAM pool for attention layers or even some experts. From what I can tell, it's possible to configure llama.cpp (or it's configured so by default) to make very little data move through USB4 so that the speed of those GPUs isn't lost on it. Has anyone tried that? Is it worth it over bifurcating the main PCIe slots? Would prefer to keep that RX 9070 XT at full bandwidth for that raytracing gaming lol.
tl;dr, I'm thinking of building this setup:
1. USB4 total 48GB VRAM eGPUs for hot-path data (e.g attention and KV cache)
2. 9950X3D's CCD1 + 96GB of RAM for hot experts (with plans to upgrade to 256GB once it's affordable again)
3. PCIe 5.0 NVMe for cold experts | 2026-02-16T10:16:44 | https://www.reddit.com/r/LocalLLaMA/comments/1r65y85/how_viable_are_egpus_and_nvme/ | ABLPHA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r65y85 | false | null | t3_1r65y85 | /r/LocalLLaMA/comments/1r65y85/how_viable_are_egpus_and_nvme/ | false | false | self | 2 | null |
Qwen 3.5 is Live | 12 | looks like he finished his tea | 2026-02-16T10:13:47 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r65whe | false | null | t3_1r65whe | /r/LocalLLaMA/comments/1r65whe/qwen_35_is_live/ | false | false | 12 | {'enabled': True, 'images': [{'id': 'o3wd0lge0ujg1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/o3wd0lge0ujg1.png?width=108&crop=smart&auto=webp&s=c8907f9c172b4802172fa066f54ffc9ecb3fbfac', 'width': 108}, {'height': 240, 'url': 'https://preview.redd.it/o3wd0lge0ujg1.png?width=216&crop=smart&auto=webp&s=f908ade85f50d5d8f55028de3cecda7e19ce2545', 'width': 216}, {'height': 356, 'url': 'https://preview.redd.it/o3wd0lge0ujg1.png?width=320&crop=smart&auto=webp&s=f1e975fe2ecba831d009c66abde605108e13eee0', 'width': 320}], 'source': {'height': 676, 'url': 'https://preview.redd.it/o3wd0lge0ujg1.png?auto=webp&s=b64e02d7d0d74997a653880cd635481e7d949a47', 'width': 606}, 'variants': {}}]} | ||
Token bloat in non-English on local LLMs— what actually helps (models, tokenisers, prompts? | 6 | I’ve been trying to use local LLMs in languages other than English and the token count sometimes goes absolutely wild (context fills faster, slower generation, worse long-form UX).
For folks doing multilingual locally: what’s actually worked for you in practice?
A few specific things I’m curious about:
Which model families/tokenisers behave best for your language(s)? (e.g., better token efficiency + decent output quality)
Do you prompt in English and ask for output in the target language, or stay fully native-language? Any noticeable difference?
Any pre-processing tricks that don’t feel like you’re butchering the language (normalisation, removing weird punctuation, transliteration, etc.)?
If you’ve measured it: what’s your rough tokens-per-1000-chars (or tokens-per-sentence) for your language vs English?
If you reply, could you include: language(s) + model + quant + backend (llama.cpp / vLLM / Ollama / LM Studio) + your rough token stats. Even super rough numbers are useful.
Trying to build a “real world” cheat-sheet here, not just theory 🙏 | 2026-02-16T10:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/1r65sqp/token_bloat_in_nonenglish_on_local_llms_what/ | aizivaishe_rutendo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r65sqp | false | null | t3_1r65sqp | /r/LocalLLaMA/comments/1r65sqp/token_bloat_in_nonenglish_on_local_llms_what/ | false | false | self | 6 | null |
Importance of Cpu on Gpu Build | 1 | Hi,
how important is the Cpu in a GPU build? I can get a used system with a 8700k cpu and 16 gigs of DDR4 for cheap. My plan is to get a used 3090 for this. I plan to run simple models, maybe gpt oss 20b or ministral3 14b, along with voice assistant tools, like whisper, parakeet or qwen3tts.
Would that system suffice when I load everything in vram? Or is it too slow anyway and even a little money should be better spend elsewhere? | 2026-02-16T10:03:08 | https://www.reddit.com/r/LocalLLaMA/comments/1r65q7g/importance_of_cpu_on_gpu_build/ | AllTey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r65q7g | false | null | t3_1r65q7g | /r/LocalLLaMA/comments/1r65q7g/importance_of_cpu_on_gpu_build/ | false | false | self | 1 | null |
LOCAL-Llama | 1 | [removed] | 2026-02-16T09:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/1r65mr4/localllama/ | Valuable-Constant-54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r65mr4 | false | null | t3_1r65mr4 | /r/LocalLLaMA/comments/1r65mr4/localllama/ | false | false | self | 1 | null |
Qwen 3.5 Open Source: Native Multimodal, Ultimate Efficiency! | 153 | Happy New Year, everyone! Our latest generation native multimodal model, Qwen3.5-397B-A17B, is now officially open source! | 2026-02-16T09:55:14 | Senior-Silver-6130 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r65lkc | false | null | t3_1r65lkc | /r/LocalLLaMA/comments/1r65lkc/qwen_35_open_source_native_multimodal_ultimate/ | false | false | 153 | {'enabled': True, 'images': [{'id': 'jz35kh22xtjg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?width=108&crop=smart&auto=webp&s=bd9b9c1fb87be83b39a340cd0a53f12768059f58', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?width=216&crop=smart&auto=webp&s=4e886f493f205d6e2fa921af12ef250e733e3c1e', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?width=320&crop=smart&auto=webp&s=42704f33f9718a688057832a85007eb37698472a', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?width=640&crop=smart&auto=webp&s=12c8322f1f75ad7f92062a7724308b4c1ff8acf6', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?width=960&crop=smart&auto=webp&s=7eced54a20e0a99d73a984ce24f01c82aa891216', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?width=1080&crop=smart&auto=webp&s=2019d889aae72fd2a221af0f82a2aa867feea464', 'width': 1080}], 'source': {'height': 2295, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?auto=webp&s=7b770dc8e4f8a6bffeea1ee6c0e85e2cfd4f1d74', 'width': 1080}, 'variants': {}}]} | ||
Qwen/Qwen3.5-397B-A17B · Hugging Face | 1 | 2026-02-16T09:50:48 | https://huggingface.co/Qwen/Qwen3.5-397B-A17B | ayylmaonade | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r65j0w | false | null | t3_1r65j0w | /r/LocalLLaMA/comments/1r65j0w/qwenqwen35397ba17b_hugging_face/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=108&crop=smart&auto=webp&s=7318ec3ce4509fbace98fa419ca07a197bbf6b12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=216&crop=smart&auto=webp&s=845c40f90d04300d26f682352d92f5119dce277a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=320&crop=smart&auto=webp&s=c7e7dd4c3ab2924175f5dc3b9816b8c268f639c5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=640&crop=smart&auto=webp&s=92b4cb0c011ee0ca8ee5cbb20a760c3a1f372788', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=960&crop=smart&auto=webp&s=1d0618a3224a2591da1e041a5c1cd7a3d816cf77', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=1080&crop=smart&auto=webp&s=bb835272d9ec8b6372f3aad7de3527217c39649e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?auto=webp&s=23a2866ccd730b9643bc6607c0920a446cf24399', 'width': 1200}, 'variants': {}}]} | ||
unsloth/Qwen3.5-397B-A17B-GGUF | 32 | Since people keep posting about it without hugging face link. Here you go:
https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF
Shoutout to unsloth. They’re quite quick on this | 2026-02-16T09:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r65g56/unslothqwen35397ba17bgguf/ | Ok_Brain_2376 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r65g56 | true | null | t3_1r65g56 | /r/LocalLLaMA/comments/1r65g56/unslothqwen35397ba17bgguf/ | false | false | self | 32 | {'enabled': False, 'images': [{'id': 'tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?width=108&crop=smart&auto=webp&s=cb97086a3cec0abaf76465736f94d6c30e3bc319', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?width=216&crop=smart&auto=webp&s=d4690d37bb8971804f98c1387316f5f1489d22c3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?width=320&crop=smart&auto=webp&s=ccff8856fcf9fcc58a959bfaafc7ff16ce195e1c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?width=640&crop=smart&auto=webp&s=869d4a1fa90d1b4111c2e2064bb89c6b48d2fd9b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?width=960&crop=smart&auto=webp&s=70a2ee08704e61dedadfe5f4f2896d194c37757c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?width=1080&crop=smart&auto=webp&s=11b64fa22c46121044828be4e687d2429a47df7a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?auto=webp&s=7e66d1cb99a3841933ed37e2516c8dae08c69cff', 'width': 1200}, 'variants': {}}]} |
can we please have a megathread for the qwen3.5 release? | 1 | [removed] | 2026-02-16T09:40:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r65cpk/can_we_please_have_a_megathread_for_the_qwen35/ | disillusioned_okapi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r65cpk | false | null | t3_1r65cpk | /r/LocalLLaMA/comments/1r65cpk/can_we_please_have_a_megathread_for_the_qwen35/ | false | false | self | 1 | null |
dears mods, can we please have a megathread for this release? | 1 | [removed] | 2026-02-16T09:36:16 | disillusioned_okapi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r65agy | false | null | t3_1r65agy | /r/LocalLLaMA/comments/1r65agy/dears_mods_can_we_please_have_a_megathread_for/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ptebytkrstjg1', 'resolutions': [{'height': 183, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?width=108&crop=smart&auto=webp&s=8059de3efbc411aee75e4678a41ced8e1962913b', 'width': 108}, {'height': 366, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?width=216&crop=smart&auto=webp&s=9c603d3bacc9b2b50ba3d5b96ac89ad74f51c3e4', 'width': 216}, {'height': 542, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?width=320&crop=smart&auto=webp&s=a043de1314e340684bd15533087fbd520024171b', 'width': 320}, {'height': 1085, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?width=640&crop=smart&auto=webp&s=b775899a21a6d8b6ba0d629bddcbd93f452fe2af', 'width': 640}, {'height': 1627, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?width=960&crop=smart&auto=webp&s=68b029d3a49b0bdda3871aa6987f9c9be382bf89', 'width': 960}, {'height': 1831, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?width=1080&crop=smart&auto=webp&s=1a34f5e1ca7149aed4af9a9a8986af1f76c16380', 'width': 1080}], 'source': {'height': 1831, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?auto=webp&s=75bb3034a220046a53bf2f4d77d83be78a81e451', 'width': 1080}, 'variants': {}}]} | ||
Qwen 3.5 is out!! | 45 | [https://huggingface.co/collections/Qwen/qwen35](https://huggingface.co/collections/Qwen/qwen35) | 2026-02-16T09:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r659i8/qwen_35_is_out/ | Wooden-Deer-1276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r659i8 | true | null | t3_1r659i8 | /r/LocalLLaMA/comments/1r659i8/qwen_35_is_out/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': 'KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk.png?width=108&crop=smart&auto=webp&s=8aa639e257fd06e34f938d329cd573bffa772e4e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk.png?width=216&crop=smart&auto=webp&s=c9401565a8f47bfb971f5316be4d8db6b8972500', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk.png?width=320&crop=smart&auto=webp&s=c9494f0a5deee211f5f79833f57164a9c8810b38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk.png?width=640&crop=smart&auto=webp&s=ea9a464d016c0f4dc124dd16ecbaea41f962076c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk.png?width=960&crop=smart&auto=webp&s=209fbad77aebe5093a2d6b17dc8faf07987a3712', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk.png?width=1080&crop=smart&auto=webp&s=d2650d20e56dbcd2ec205baa199a68dc6afbbb2e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk.png?auto=webp&s=d54956f8b2214e923f912631c97e3e8ccd8a3064', 'width': 1200}, 'variants': {}}]} |
Qwen3.5-397B-A17B Unsloth GGUFs | 455 | Qwen releases Qwen3.5💜! Qwen3.5-397B-A17B is an open MoE vision reasoning LLM for agentic coding & chat.
It performs on par with Gemini 3 Pro, Claude Opus 4.5, and GPT-5.2. Run 4-bit on 256GB Mac / RAM or less.
Guide to run them: [https://unsloth.ai/docs/models/qwen3.5](https://unsloth.ai/docs/models/qwen3.5)
Unsloth dynamic GGUFs at: [https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF](https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF)
Excited for this week! 🙂 | 2026-02-16T09:34:10 | danielhanchen | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r6599e | false | null | t3_1r6599e | /r/LocalLLaMA/comments/1r6599e/qwen35397ba17b_unsloth_ggufs/ | false | false | 455 | {'enabled': True, 'images': [{'id': 'zgfpbga5ttjg1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?width=108&crop=smart&auto=webp&s=deea17efaf7688d2e13a0367944b8d1578e430d5', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?width=216&crop=smart&auto=webp&s=7cf9d4bae098cb02594994ed8afafde83764349a', 'width': 216}, {'height': 360, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?width=320&crop=smart&auto=webp&s=c5157de5dd5916c6e265c47d6e10bf80d5bf3719', 'width': 320}, {'height': 720, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?width=640&crop=smart&auto=webp&s=0b525bb85c217819dae77ecab42757b843211d14', 'width': 640}, {'height': 1080, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?width=960&crop=smart&auto=webp&s=59cb5fb4ddde01deac4a9403d2706e52817b8238', 'width': 960}, {'height': 1215, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?width=1080&crop=smart&auto=webp&s=85cb5de013055905dfb4e64ba4840acfc3861c05', 'width': 1080}], 'source': {'height': 4500, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?auto=webp&s=47e0bf52302e80bbbbe8d673322ee6395725c7af', 'width': 4000}, 'variants': {}}]} | ||
Small, fast Spam Detection model designed for German text | 7 | [https://huggingface.co/tanaos/tanaos-spam-detection-german](https://huggingface.co/tanaos/tanaos-spam-detection-german)
A small and fast Spam Detection model, trained on German text to detect the following types of spam content:
1. Unsolicited commercial advertisement or non-commercial proselytizing.
2. Fraudulent schemes. including get-rich-quick and pyramid schemes.
3. Phishing attempts. unrealistic offers or announcements.
4. Content with deceptive or misleading information.
5. Malware or harmful links.
6. Excessive use of capitalization or punctuation to grab attention.
# Model output
The model outputs
* A binary `spam` / `not_spam` label
* A confidence score between 0 and 1
# How to use
Get an API key from [https://platform.tanaos.com/](https://platform.tanaos.com/) (create an account if you don't have one) and use it for free with
import requests
session = requests.Session()
sd_out = session.post(
"https://slm.tanaos.com/models/spam-detection",
headers={
"X-API-Key": "<YOUR_API_KEY>",
},
json={
"text": "Du hast ein iPhone 16 gewonnen! Klicke hier, um deinen Preis zu erhalten.",
"language": "german"
}
)
print(sd_out.json()["data"])
# >>> [{'label': 'spam', 'score': 0.9945}] | 2026-02-16T09:31:53 | https://www.reddit.com/r/LocalLLaMA/comments/1r657yx/small_fast_spam_detection_model_designed_for/ | Ok_Hold_5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r657yx | false | null | t3_1r657yx | /r/LocalLLaMA/comments/1r657yx/small_fast_spam_detection_model_designed_for/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=108&crop=smart&auto=webp&s=ae907b968aa3906b92688b790df0ba7d23950ad4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=216&crop=smart&auto=webp&s=2ef82de8913bc80eaf6238df112995cfa7492af7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=320&crop=smart&auto=webp&s=0a2e0dc2e78b56569d50472cbbdfb1f30de93ce1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=640&crop=smart&auto=webp&s=223c4e797b361fbcdb602d12abe12369765d297d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=960&crop=smart&auto=webp&s=0f3c8b95d2dff726dcd1951d42f5ef64c5209117', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=1080&crop=smart&auto=webp&s=d2cabb030bb9d5b7be061afa2536095b0c4f8f6d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?auto=webp&s=4170e097872db61daad2a1b2734b037c243052fa', 'width': 1200}, 'variants': {}}]} |
Qwen3.5 Release Blog Post | 122 | Weights: [https://huggingface.co/Qwen/Qwen3.5-397B-A17B](https://huggingface.co/Qwen/Qwen3.5-397B-A17B)
| 2026-02-16T09:31:44 | https://qwen.ai/blog?id=qwen3.5 | Stunning_Energy_7028 | qwen.ai | 1970-01-01T00:00:00 | 0 | {} | 1r657w5 | false | null | t3_1r657w5 | /r/LocalLLaMA/comments/1r657w5/qwen35_release_blog_post/ | false | false | default | 122 | null |
Small, fast Spam Detection model designed for German text | 1 | [removed] | 2026-02-16T09:29:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r656sv/small_fast_spam_detection_model_designed_for/ | Ok_Hold_5385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r656sv | false | null | t3_1r656sv | /r/LocalLLaMA/comments/1r656sv/small_fast_spam_detection_model_designed_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=108&crop=smart&auto=webp&s=ae907b968aa3906b92688b790df0ba7d23950ad4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=216&crop=smart&auto=webp&s=2ef82de8913bc80eaf6238df112995cfa7492af7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=320&crop=smart&auto=webp&s=0a2e0dc2e78b56569d50472cbbdfb1f30de93ce1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=640&crop=smart&auto=webp&s=223c4e797b361fbcdb602d12abe12369765d297d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=960&crop=smart&auto=webp&s=0f3c8b95d2dff726dcd1951d42f5ef64c5209117', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=1080&crop=smart&auto=webp&s=d2cabb030bb9d5b7be061afa2536095b0c4f8f6d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?auto=webp&s=4170e097872db61daad2a1b2734b037c243052fa', 'width': 1200}, 'variants': {}}]} |
Qwen3.5-397B-A17B is out!! | 784 | [https://huggingface.co/Qwen/Qwen3.5-397B-A17B](https://huggingface.co/Qwen/Qwen3.5-397B-A17B) | 2026-02-16T09:29:03 | https://www.reddit.com/r/LocalLLaMA/comments/1r656d7/qwen35397ba17b_is_out/ | lolxdmainkaisemaanlu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r656d7 | false | null | t3_1r656d7 | /r/LocalLLaMA/comments/1r656d7/qwen35397ba17b_is_out/ | false | false | self | 784 | {'enabled': False, 'images': [{'id': 'sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=108&crop=smart&auto=webp&s=7318ec3ce4509fbace98fa419ca07a197bbf6b12', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=216&crop=smart&auto=webp&s=845c40f90d04300d26f682352d92f5119dce277a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=320&crop=smart&auto=webp&s=c7e7dd4c3ab2924175f5dc3b9816b8c268f639c5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=640&crop=smart&auto=webp&s=92b4cb0c011ee0ca8ee5cbb20a760c3a1f372788', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=960&crop=smart&auto=webp&s=1d0618a3224a2591da1e041a5c1cd7a3d816cf77', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=1080&crop=smart&auto=webp&s=bb835272d9ec8b6372f3aad7de3527217c39649e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?auto=webp&s=23a2866ccd730b9643bc6607c0920a446cf24399', 'width': 1200}, 'variants': {}}]} |
Qwen3.5-397B-A17B weights are live on ModelScope! | 2 | 2026-02-16T09:28:57 | https://modelscope.cn/models/Qwen/Qwen3.5-397B-A17B/summary | Stunning_Energy_7028 | modelscope.cn | 1970-01-01T00:00:00 | 0 | {} | 1r656b8 | false | null | t3_1r656b8 | /r/LocalLLaMA/comments/1r656b8/qwen35397ba17b_weights_are_live_on_modelscope/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc.png?width=108&crop=smart&auto=webp&s=bfd2f3c5a67913ab2f0268b019f5a1321841c3ea', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc.png?width=216&crop=smart&auto=webp&s=5a10a0ac3b3326b1b0f879570dc3809c5ff30f1b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc.png?width=320&crop=smart&auto=webp&s=375ae3bfb1bbedf5cb1b7e2b3f272115eee355d7', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc.png?width=640&crop=smart&auto=webp&s=e6888cd456a3c0a1268a228913e6d14941e2de5f', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc.png?width=960&crop=smart&auto=webp&s=c27464e6ec05c55cfd669e0e8af19f16a1955c69', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc.png?auto=webp&s=d7d353d53fe0f0b670260384ceebaa19a2000cbd', 'width': 1024}, 'variants': {}}]} | ||
Trying to understand some benchmarks | 1 | I'm trying to serve `gpt-oss-120b` to as many people in my organization as possible, but I'm finding it hard to get an idea of what the theoretical ceiling might be. Currently the model is split over 2x H100 94GB cards in PCIe which is on a cloud provider. We have a quote for 4x H100 94GB NVL cards, and the cards will linked with NVL in pairs. Two cards are for the inference, the other two are for other things.
I've been using vLLM's bench library to try and get an idea of what the QoS might be.
First of all -- yes, I understand that this is a fairly good setup, but I'm really trying to make the most of it.
Using the `ShareGPT_V3_unfiltered_cleaned_split.json` dataset (which have on average input tokens \~200 and gives output of \~200 tokens), we fixed the context size to about 8k, and varied `max_concurrency`. I looked at the output throughput, request throughput, TTFT and ITL. These can be seen in the plots below (from left to right).
https://preview.redd.it/b8ldiz5jgxhg1.png?width=3000&format=png&auto=webp&s=ab8ba679bf97873156f34368d28f74f52e10738c
The trouble is, I'm not really sure if I'm interpretting this correctly. I mean, I know how to literally read them: We seem to be hitting a ceiling of just over 4,500 tok/s, at just under 400 concurrent requests, and peaking at about 22.5 req/s. The TTFT is pretty reasonable, hitting \~1 sec at about 200 users. The p99 is pretty telling though -- at 200 users, it jumps up to 4 sec. The ITL remains stable at \~30ms.
My questions / comments I'd like clarifying are:
1. Does this mean that I can only really process about 22 requests per second, regardless of the concurrent requests sent?
2. It looks like TTFT spikes pretty hard for the P99 after about 150-200 concurrent requests, jumping up to 4sec at 200.
3. If we normalize the first two plots (below), we can see that for 16 users we can get \~70 tok/s. An informal poll on this page a few months ago suggested that around 10-20 tok/s is acceptable. We can see that we hit this value as we get up 200 concurrent requests, and remain > 10 close to 500 concurrent users. This seems good?
https://preview.redd.it/ln83p9mmixhg1.png?width=1500&format=png&auto=webp&s=f1aa617759d5e11e03cfe3d0c98620514c980624
4. I also get information out of vLLM itself:
Available KV cache memory: 47.94 GiB
GPU KV cache size: 1,396,368 tokens
Maximum concurrency for 8,128 tokens per request: 171.63x
Probably this means I can have 170 concurrent users all sending messages filling the max context size simultaneously.
5. Now, my applications are varied, some will be using agents in multiturn conversations, so likely I'll have to turn up the context window, as 8k will fill fast. Some will be doing evaluation, like batching, so I'll have to enforce rate limits. The problem is, I'm not sure what the correct combination of users and rate limits and context size should be.
6. Would NVL bring much performance gain? And what about using all 4 GPUs?
7. Would it be better to run two versions of the model, and route based on demand rather than split it over 2 GPUs.
I guess I'm looking for some perfect optimal point where they all combine in such a way as to make everybody happy, but I understand that this may change depending on demand.
But my final, and most important question would be: **given a variety of use cases, how many people could this infrastructure reasonably serve?**
Thanks for coming to my TED Question. | 2026-02-16T08:50:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r64k8y/trying_to_understand_some_benchmarks/ | monkeyofscience | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r64k8y | false | null | t3_1r64k8y | /r/LocalLLaMA/comments/1r64k8y/trying_to_understand_some_benchmarks/ | false | false | self | 1 | null |
Qwen 3.5 series marks the end of VL models? | 65 | 2026-02-16T08:47:03 | abdouhlili | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r64i0u | false | null | t3_1r64i0u | /r/LocalLLaMA/comments/1r64i0u/qwen_35_series_marks_the_end_of_vl_models/ | false | false | 65 | {'enabled': True, 'images': [{'id': 'm1ocrjozktjg1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?width=108&crop=smart&auto=webp&s=1cd35cdf35816ba9e9d3e7d11c700e3ec4aafd97', 'width': 108}, {'height': 185, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?width=216&crop=smart&auto=webp&s=540ce868616e43f85df8f66e511f1270d63bf908', 'width': 216}, {'height': 274, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?width=320&crop=smart&auto=webp&s=6f8888aa56116f50ab43f3892750bffc71323778', 'width': 320}, {'height': 549, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?width=640&crop=smart&auto=webp&s=43f7165ca51f8afc6de357f380c03ca8d7f7a447', 'width': 640}, {'height': 824, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?width=960&crop=smart&auto=webp&s=0c25e901207d9ca6af6161906144128e2d40440f', 'width': 960}, {'height': 927, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?width=1080&crop=smart&auto=webp&s=5647c60fc6cae838f85b56c9eca3f38cc0337a6b', 'width': 1080}], 'source': {'height': 927, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?auto=webp&s=4998654fcf48d1822f604d64402c891341289ebf', 'width': 1080}, 'variants': {}}]} | |||
Trying to understand some benchmarks | 1 | [removed] | 2026-02-16T08:46:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r64hh6/trying_to_understand_some_benchmarks/ | monkeyofscience | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r64hh6 | false | null | t3_1r64hh6 | /r/LocalLLaMA/comments/1r64hh6/trying_to_understand_some_benchmarks/ | false | false | self | 1 | null |
LeetCode Assembly Dataset (400+ Solutions in x86-64 / ARM64 using GCC/Clang) | 15 | Introducing the LeetCode Assembly Dataset: a dataset of 400+ LeetCode problem solutions in assembly across x86-64, ARM64, MIPS64, and RISC-V using GCC & Clang at -O0/-O1/-O2/-O3 optimizations.
This dataset is perfect for teaching LLMs complex assembly and compiler behavior! | 2026-02-16T08:41:57 | https://huggingface.co/datasets/ronantakizawa/leetcode-assembly | Ok_Employee_6418 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1r64f67 | false | null | t3_1r64f67 | /r/LocalLLaMA/comments/1r64f67/leetcode_assembly_dataset_400_solutions_in_x8664/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s.png?width=108&crop=smart&auto=webp&s=1b924a3cfde480641c1c9227b81924f0a1505a6a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s.png?width=216&crop=smart&auto=webp&s=323dc34ef5fa8c3dfc040c7866ef20ed074245d2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s.png?width=320&crop=smart&auto=webp&s=01ffb9c9002368bd2cca602523f25474326dbf42', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s.png?width=640&crop=smart&auto=webp&s=e8c379ad596190abaca93fa7370ae8c59228b55f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s.png?width=960&crop=smart&auto=webp&s=e8e8d05b8d950c203b681e26c92989853552dba4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s.png?width=1080&crop=smart&auto=webp&s=cd4c45f3827bcba16dc230ac2c7f2bfc779ccaea', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s.png?auto=webp&s=cadb7f8bbebf678bb88739179266a551a33e9193', 'width': 1200}, 'variants': {}}]} | |
Is there a model that is completely uncensored when it comes to controversial topics? | 18 | I know "uncensored" often means NSFW, for role-play, etc, but that's not really what I care about.
I want a model that has no problem not conforming to typical safety rules. It's willing to engage and objectively assess and consider points that might go directly against "safety guidelines". Think historical topics, societal issues, religious matters.
I do not want the model to agree with everything I say (that's not hard to achieve, but it's pointless for me) I want one that engages with me with no boundaries on any topic while providing accurate data, and is willing to consider my opinion if it thinks it adds up even if it's extremely controversial and "unsafe".
Many of us have questions that cannot ask publicly and out-loud. I think this is a great use-case for AI. | 2026-02-16T08:38:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r64deu/is_there_a_model_that_is_completely_uncensored/ | ghulamalchik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r64deu | false | null | t3_1r64deu | /r/LocalLLaMA/comments/1r64deu/is_there_a_model_that_is_completely_uncensored/ | false | false | self | 18 | null |
Open source MCP server for real-time currency & crypto conversion | 0 | Hey everyone, I built and deployed a currency exchange MCP server that gives AI agents real-time forex and crypto conversion.
\*\*What it does:\*\*
\- Convert between 60+ fiat currencies and 30+ cryptocurrencies
\- Batch convert to up to 50 currencies at once
\- Historical rates with time-series data
\- Natural language input — say "dollars" or "bitcoin" instead of ISO codes
It's fully open source, ISC license, and self-hostable.
GitHub: [https://github.com/Ruddxxy/currency-exchange-mcp](https://github.com/Ruddxxy/currency-exchange-mcp)
| 2026-02-16T08:32:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r649gm/open_source_mcp_server_for_realtime_currency/ | RuddyBuilds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r649gm | false | null | t3_1r649gm | /r/LocalLLaMA/comments/1r649gm/open_source_mcp_server_for_realtime_currency/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4.png?width=108&crop=smart&auto=webp&s=08b39bbbd499a7e62a23813f5b4e94e1519c7f38', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4.png?width=216&crop=smart&auto=webp&s=17113d3e5ed4555679a9434d85f3f90d11d71421', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4.png?width=320&crop=smart&auto=webp&s=f603ca01d1c91aba6634fd82a9edbdff4e604525', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4.png?width=640&crop=smart&auto=webp&s=228fc54fefda9adabbf1e88f59b4ae65ccb1eeaa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4.png?width=960&crop=smart&auto=webp&s=025d8919e8ccd8d4a2bca322d5b165e1e929c8fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4.png?width=1080&crop=smart&auto=webp&s=e53395dae3ab1b4e266cc15a22dfa9f27c4f512f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4.png?auto=webp&s=62d80a1cd2d13b7f6775b8c91109f98e8944b2fd', 'width': 1200}, 'variants': {}}]} |
The Qwen3.5 will still opensource | 12 | The link for it.( not weight yet) https://bailian.console.aliyun.com/cn-beijing/?spm=5176.29619931.J\_XNqYbJaEnpB5\_cCJf7e6D.1.136910d78TBFEG&tab=home#/model-market/detail/qwen3.5-397b-a17b | 2026-02-16T08:30:56 | https://www.reddit.com/r/LocalLLaMA/comments/1r648sf/the_qwen35_will_still_opensource/ | bobeeeeeeeee8964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r648sf | false | null | t3_1r648sf | /r/LocalLLaMA/comments/1r648sf/the_qwen35_will_still_opensource/ | false | false | self | 12 | null |
Local macOS LLM llama-server setup guide | 0 | In case anyone here is thinking of using a Mac as a local small LLM model server for your other machines on a LAN, here are the steps I followed which worked for me. The focus is plumbing — how to set up ssh tunneling, screen sessions, etc. Not much different from setting up a Linux server, but not the same either. Of course there are other ways to achieve the same.
I'm a beginner in LLMs so regarding the cmd line options for llama-server itself I'll be actually looking into your feedback. Can this be run more optimally?
I'm quite impressed with what 17B and 72B Qwen models can do on my M3 Max laptop (64 GB). Even the latter is usably fast, and they are able to quite reliably answer general knowledge questions, translate for me (even though tokens in Chinese pop up every now and then, unexpectedly), and analyze simple code bases.
One thing I noticed is btop is showing very little CPU load even during token parsing / inference. Even with llama-bench. My RTX GPU on a different computer would work on 75-80% load while here it stays at 10-20%. So I'm not sure I'm using it to full capacity. Any hints? | 2026-02-16T08:29:05 | https://forgottencomputer.com/retro/install_mac.html | breksyt | forgottencomputer.com | 1970-01-01T00:00:00 | 0 | {} | 1r647pf | false | null | t3_1r647pf | /r/LocalLLaMA/comments/1r647pf/local_macos_llm_llamaserver_setup_guide/ | false | false | default | 0 | null |
Qwen Released Qwen 3.5 397B and Qwen 3.5 Plus! | 74 | [https://chat.qwen.ai/](https://chat.qwen.ai/)
https://preview.redd.it/ddrcinnghtjg1.png?width=626&format=png&auto=webp&s=5f91e5a8f0b99c86d30ee966815465f1571e8d2e
| 2026-02-16T08:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r646pt/qwen_released_qwen_35_397b_and_qwen_35_plus/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r646pt | false | null | t3_1r646pt | /r/LocalLLaMA/comments/1r646pt/qwen_released_qwen_35_397b_and_qwen_35_plus/ | false | false | 74 | null | |
I built CodeGraph CLI — parses your codebase into a semantic graph with tree-sitter, does RAG-powered search over LanceDB vectors, and lets you chat with multi-agent AI from the terminal | 5 | I've been building **CodeGraph CLI** (`cg`) — an open-source, local-first code intelligence tool. It parses your project into an AST with tree-sitter, builds a directed dependency graph in SQLite, embeds every symbol into vectors stored in LanceDB, then layers RAG, impact analysis, and a multi-agent system on top.
**GitHub:** [https://github.com/al1-nasir/codegraph-cli](https://github.com/al1-nasir/codegraph-cli) | **PyPI:** `pip install codegraph-cli`
How it works under the hood
**1. Parsing → Semantic Graph (tree-sitter + SQLite)**
When you run `cg project index ./my-project`, the parser walks every `.py`, `.js`, `.ts` file using tree-sitter grammars. Tree-sitter gives us a concrete syntax tree — it's error-tolerant, so even broken/incomplete files get parsed instead of crashing.
From the CST, we extract:
* **Nodes**: every module, class, function — with qualified names, line ranges, docstrings, and full source code
* **Edges**: imports, function calls, class inheritance — resolved into a directed graph
All of this goes into SQLite (`graph.db`) with proper indexes. Graph traversal (BFS for impact analysis, neighbor lookups) is just SQL queries.
**2. Embedding Engine (5 models, raw transformers)**
Each node gets embedded using a structured chunk that combines file path + symbol name + docstring + code body. Import lines are stripped and module-level nodes get truncated to avoid diluting embeddings with boilerplate.
5 embedding models available — you pick based on your hardware:
|Model|Size|Dim|Quality|
|:-|:-|:-|:-|
|hash|0 bytes|256|Keyword-only (BLAKE2b hash of tokens)|
|minilm|\~80 MB|384|Decent|
|bge-base|\~440 MB|768|Solid general-purpose|
|jina-code|\~550 MB|768|Code-aware|
|qodo-1.5b|\~6.2 GB|1536|Best quality|
The **hash model** is zero-dependency — it tokenizes with regex, hashes each token with BLAKE2b, and maps to a 256-dim vector. No torch, no downloads. The neural models use raw `transformers` \+ `torch` with configurable pooling (CLS, mean, last-token) — no `sentence-transformers` dependency. Models are cached in `~/.codegraph/models/` after first download from HuggingFace.
Each embedding model gets its own LanceDB table (`code_nodes_{model_key}`) so you can switch models without dimension mismatch crashes. If you change the embedding model, re-ingestion from SQLite happens automatically and transparently.
**3. Vector Store (LanceDB — "SQLite for vectors")**
I chose LanceDB over Chroma/FAISS because:
* **Zero-server** — embedded, just like SQLite. No Docker, no process management
* **Hybrid search** — vector similarity + SQL WHERE in one query (`file_path LIKE 'src/%'` AND semantic similarity)
* **Lance columnar format** — fast scans, efficient storage on disk
* Everything lives under `~/.codegraph/<project>/lancedb/`
Search uses cosine metric. Distance values are true cosine distances (`1 - cos_sim`), converted to similarity scores clamped to \[0, 1\].
**4. RAG Pipeline (Graph-Augmented Retrieval)**
This is where it gets interesting. The RAG retriever doesn't just do a basic top-k vector search:
1. **Semantic top-k** via LanceDB (or brute-force cosine fallback if LanceDB is unavailable)
2. **Graph-neighbour augmentation** — for the top 3 hits, we fetch their direct dependency neighbours from the SQLite graph (both incoming and outgoing edges) and score those neighbours against the query too. This means if you search for "authentication", you don't just get `validate_token` — you also get the caller `login_handler` and the dependency `TokenStore` that vector search alone might have missed.
3. **Minimum score threshold** — low-quality results are dropped before they reach the LLM
4. **LRU cache** (64 entries) — identical queries within a session skip re-computation
5. **Context compression** — before injecting into the LLM prompt, snippets get import lines stripped, blank lines collapsed, and long code truncated. The LLM gets clean, information-dense context instead of 500 lines of imports.
**5. Impact Analysis (Graph BFS + RAG + LLM)**
`cg analyze impact UserService --hops 3` does a multi-hop BFS traversal on the dependency graph, collects all reachable symbols, pulls RAG context for the root symbol, then sends everything to the LLM to generate a human-readable explanation of what would break.
If the symbol isn't found, it falls back to fuzzy matching via semantic search and suggests similar symbols.
**6. Multi-Agent System (CrewAI)**
`cg chat start --crew` launches 4 specialized agents via CrewAI:
|Agent|Tools|Max Iterations|
|:-|:-|:-|
|**Coordinator**|All tools, can delegate|25|
|**File System Engineer**|list\_directory, read\_file, write\_file, patch\_file, delete\_file, rollback\_file, file\_tree, backup|15|
|**Senior Developer**|All 11 tools (file ops + code analysis)|20|
|**Code Intelligence Analyst**|search\_code, grep\_in\_project, read\_file, get\_project\_summary|15|
Every file write/patch automatically creates a timestamped backup in `~/.codegraph/backups/` with JSON metadata. Rollback to any previous state with `/rollback` in chat.
The agents have detailed backstories and rules — the coordinator knows to check conversation history for follow-up requests ("apply those changes you suggested"), and the developer knows to always read the existing file before patching to match code style.
**7. LLM Adapter (6 providers, zero env vars)**
One unified interface supporting Ollama, Groq, OpenAI, Anthropic, Gemini, and OpenRouter. Each provider has its own class handling auth, payload format, and error handling. All config lives in `~/.codegraph/config.toml` — no env vars needed.
For CrewAI, models route through LiteLLM automatically.
**8. Chat with Real File Access + Symbol Memory**
The chat agent isn't just an LLM wrapper. It has:
* **Intent detection** — classifies your message (read, list, search, impact, generate, refactor, general chat) and routes to the right handler
* **Symbol memory** — tracks recently discussed symbols and files so it doesn't re-run redundant RAG queries
* **Auto-context injection** — the system prompt includes project name, indexed file count, symbol breakdown, and recently modified files so the LLM has awareness from the first message
* **Code proposals** — when you ask it to generate code, it creates a diffable proposal you can preview and apply (or reject)
# What you actually get as a user
pip install codegraph-cli
cg config setup # pick your LLM
cg project index ./my-project # parse + build graph + embed
# Find code by meaning
cg analyze search "how does authentication work"
# Trace what breaks before you change something
cg analyze impact login_handler --hops 3
# Project health dashboard
cg analyze health
# See indexed tree with function/class breakdown
cg analyze tree --full
# Incremental sync (much faster than re-index)
cg analyze sync
# Chat with your codebase
cg chat start # standard mode with RAG
cg chat start --crew # 4-agent mode
# Visual code explorer in browser (Starlette + Uvicorn)
cg explore open
# Generate DOCX docs with Mermaid architecture diagrams
cg export docx --enhanced --include-code
# Auto-generate README from the code graph
cg onboard --save
# Full command structure
cg config — LLM & embedding setup (6 providers, 5 embedding models)
cg project — Index, load, and manage project memories
cg analyze — Semantic search, impact analysis, dependency graphs, health dashboard
cg chat — Conversational coding sessions with RAG context (+ multi-agent mode)
cg explore — Visual code explorer that opens in your browser
cg export — Generate DOCX documentation with architecture diagrams
cg onboard — Auto-generate a README from your code graph
# Tech stack
* **CLI:** Typer + Rich (grouped command hierarchy)
* **Parsing:** tree-sitter (Python, JavaScript, TypeScript)
* **Graph storage:** SQLite (nodes + edges + metadata)
* **Vector search:** LanceDB (cosine metric, hybrid search)
* **Embeddings:** raw transformers + torch (5 models, no sentence-transformers)
* **RAG:** Graph-augmented retrieval with context compression + LRU cache
* **Browser explorer:** Starlette + Uvicorn (self-contained HTML UI)
* **Multi-agent:** CrewAI + LiteLLM (4 specialized agents, 11 tools)
* **Docs export:** python-docx + Mermaid Ink (PNG diagrams)
* **License:** MIT
# Install
pip install codegraph-cli # core (tree-sitter + SQLite + LanceDB)
pip install codegraph-cli[embeddings] # + neural embedding models (torch + transformers)
pip install codegraph-cli[crew] # + CrewAI multi-agent system
pip install codegraph-cli[all] # everything
Python 3.9+ | MIT license
**GitHub:** [https://github.com/al1-nasir/codegraph-cli](https://github.com/al1-nasir/codegraph-cli) | **PyPI:** [https://pypi.org/project/codegraph-cli/](https://pypi.org/project/codegraph-cli/)
Would love technical feedback on:
1. The graph-augmented RAG approach — is augmenting with dependency neighbours actually useful for code search, or just noise?
2. LanceDB vs FAISS/Chroma for this use case — anyone have strong opinions?
3. What languages should be next? (Go, Rust, Java grammars exist for tree-sitter)
4. Is the multi-agent approach actually useful vs. a single well-prompted agent?
| 2026-02-16T08:25:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r645hx/i_built_codegraph_cli_parses_your_codebase_into_a/ | Wild_Expression_5772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r645hx | false | null | t3_1r645hx | /r/LocalLLaMA/comments/1r645hx/i_built_codegraph_cli_parses_your_codebase_into_a/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=108&crop=smart&auto=webp&s=a7042462df25bfce2e186d98096192e2bd07a743', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=216&crop=smart&auto=webp&s=8868d680c6a1922ec760d1078df1e7d6ca1e4dc4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=320&crop=smart&auto=webp&s=db5557cc40beb895731108076135e55bc1ce78b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=640&crop=smart&auto=webp&s=e4275a25d32a2ea9e3e6d21f626582dcecf7c52b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=960&crop=smart&auto=webp&s=1c589d46066771f325bf1d442200ee593f396efb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=1080&crop=smart&auto=webp&s=a2401d5abe999d4ccac873b8fbbd1ac4362f0238', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?auto=webp&s=810532f57ee877a14c66b31a946b393300cc0243', 'width': 1200}, 'variants': {}}]} |
Minimax M2.5 vs. GLM-5 vs. Kimi k2.5: How do they compare to Codex and Claude for coding? | 34 | Hi everyone,
I’m looking for community feedback from those of you who have hands-on experience with the recent wave of coding models:
1. **Minimax M2.5**
2. **GLM-5**
3. **Kimi k2.5**
There are plenty of benchmarks out there, but I’m interested in your subjective opinions and day-to-day experience.
**If you use multiple models:** Have you noticed significant differences in their "personality" or logic when switching between them? For example, is one noticeably better at scaffolding while another is better at debugging or refactoring?
**If you’ve mainly settled on one:** How does it stack up against the major incumbents like **Codex** or **Anthropic’s Claude** models?
I’m specifically looking to hear if any of these newer models offer a distinct advantage over other or feel different to drive, or if they just feel like "more of the same." | 2026-02-16T08:25:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r645g6/minimax_m25_vs_glm5_vs_kimi_k25_how_do_they/ | East-Stranger8599 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r645g6 | false | null | t3_1r645g6 | /r/LocalLLaMA/comments/1r645g6/minimax_m25_vs_glm5_vs_kimi_k25_how_do_they/ | false | false | self | 34 | null |
mlx-ruby: MLX bindings for Ruby | 2 | Ruby desperately needed bindings for MLX so I finally sat down with Codex and ported it along with all the example models. Working on adding better Rubyesqe ergonomics, but all the core library features work and performance is within 25% of the official Python bindings.
https://github.com/skryl/mlx-ruby | 2026-02-16T08:25:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r645d7/mlxruby_mlx_bindings_for_ruby/ | rut216 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r645d7 | false | null | t3_1r645d7 | /r/LocalLLaMA/comments/1r645d7/mlxruby_mlx_bindings_for_ruby/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI.png?width=108&crop=smart&auto=webp&s=169ebfe3cfa4032d5d13def2db973f232d8d264c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI.png?width=216&crop=smart&auto=webp&s=6c591b54d5dcdcd699b1e1f3181be0bfef5fe10a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI.png?width=320&crop=smart&auto=webp&s=911bea767157dfb00506e174bf009104d885f8c7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI.png?width=640&crop=smart&auto=webp&s=fc05e28341c34e6902642b53eb44339c3b7c44a8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI.png?width=960&crop=smart&auto=webp&s=f981d3a58092b0be06ebc40b6c6677b06e687117', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI.png?width=1080&crop=smart&auto=webp&s=504afda9abe6b6205c5e2694aee156ef36ad4f6a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI.png?auto=webp&s=06584f2cca6ab2dce64891298dc4c0badcd6c51b', 'width': 1200}, 'variants': {}}]} |
Moonshot AI Launches Kimi Claw | 0 | # Moonshot AI Launches Kimi Claw: Native OpenClaw on [Kimi.com](http://Kimi.com) with 5,000 Community Skills and 40GB Cloud Storage Now. | 2026-02-16T08:23:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r644mo/moonshot_ai_launches_kimi_claw/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r644mo | false | null | t3_1r644mo | /r/LocalLLaMA/comments/1r644mo/moonshot_ai_launches_kimi_claw/ | false | false | self | 0 | null |
I built CodeGraph CLI that parses your codebase into a semantic graph with tree-sitter, does RAG-powered search over LanceDB vectors, and lets you chat with multi-agent AI from the terminal* | 1 | I've been building
**CodeGraph CLI**
(`cg`) — an open-source, local-first code intelligence tool. It parses your project into an AST with tree-sitter, builds a directed dependency graph in SQLite, embeds every symbol into vectors stored in LanceDB, then layers RAG, impact analysis, and a multi-agent system on top.
**GitHub:**
[https://github.com/al1-nasir/codegraph-cli](
https://github.com/al1-nasir/codegraph-cli
) |
**PyPI:**
`pip install codegraph-cli`
---
### How it works under the hood
**1. Parsing → Semantic Graph (tree-sitter + SQLite)**
When you run `cg project index ./my-project`, the parser walks every `.py`, `.js`, `.ts` file using tree-sitter grammars. Tree-sitter gives us a concrete syntax tree — it's error-tolerant, so even broken/incomplete files get parsed instead of crashing.
From the CST, we extract:
-
**Nodes**
: every module, class, function — with qualified names, line ranges, docstrings, and full source code
-
**Edges**
: imports, function calls, class inheritance — resolved into a directed graph
All of this goes into SQLite (`graph.db`) with proper indexes. Graph traversal (BFS for impact analysis, neighbor lookups) is just SQL queries.
**2. Embedding Engine (5 models, raw transformers)**
Each node gets embedded using a structured chunk that combines file path + symbol name + docstring + code body. Import lines are stripped and module-level nodes get truncated to avoid diluting embeddings with boilerplate.
```
file: src/auth.py
symbol: AuthService.validate_token
type: function
doc: Validate JWT token and return user claims.
def validate_token(self, token: str) -> dict:
...
```
5 embedding models available — you pick based on your hardware:
| Model | Size | Dim | Quality |
|-------|------|-----|---------|
| hash | 0 bytes | 256 | Keyword-only (BLAKE2b hash of tokens) |
| minilm | ~80 MB | 384 | Decent |
| bge-base | ~440 MB | 768 | Solid general-purpose |
| jina-code | ~550 MB | 768 | Code-aware |
| qodo-1.5b | ~6.2 GB | 1536 | Best quality |
The
**hash model**
is zero-dependency — it tokenizes with regex, hashes each token with BLAKE2b, and maps to a 256-dim vector. No torch, no downloads. The neural models use raw `transformers` + `torch` with configurable pooling (CLS, mean, last-token) — no `sentence-transformers` dependency. Models are cached in `~/.codegraph/models/` after first download from HuggingFace.
Each embedding model gets its own LanceDB table (`code_nodes_{model_key}`) so you can switch models without dimension mismatch crashes. If you change the embedding model, re-ingestion from SQLite happens automatically and transparently.
**3. Vector Store (LanceDB — "SQLite for vectors")**
I chose LanceDB over Chroma/FAISS because:
-
**Zero-server**
— embedded, just like SQLite. No Docker, no process management
-
**Hybrid search**
— vector similarity + SQL WHERE in one query (`file_path LIKE 'src/%'` AND semantic similarity)
-
**Lance columnar format**
— fast scans, efficient storage on disk
- Everything lives under `~/.codegraph/<project>/lancedb/`
Search uses cosine metric. Distance values are true cosine distances (`1 - cos_sim`), converted to similarity scores clamped to [0, 1].
**4. RAG Pipeline (Graph-Augmented Retrieval)**
This is where it gets interesting. The RAG retriever doesn't just do a basic top-k vector search:
1.
**Semantic top-k**
via LanceDB (or brute-force cosine fallback if LanceDB is unavailable)
2.
**Graph-neighbour augmentation**
— for the top 3 hits, we fetch their direct dependency neighbours from the SQLite graph (both incoming and outgoing edges) and score those neighbours against the query too. This means if you search for "authentication", you don't just get `validate_token` — you also get the caller `login_handler` and the dependency `TokenStore` that vector search alone might have missed.
3.
**Minimum score threshold**
— low-quality results are dropped before they reach the LLM
4.
**LRU cache**
(64 entries) — identical queries within a session skip re-computation
5.
**Context compression**
— before injecting into the LLM prompt, snippets get import lines stripped, blank lines collapsed, and long code truncated. The LLM gets clean, information-dense context instead of 500 lines of imports.
**5. Impact Analysis (Graph BFS + RAG + LLM)**
`cg analyze impact UserService --hops 3` does a multi-hop BFS traversal on the dependency graph, collects all reachable symbols, pulls RAG context for the root symbol, then sends everything to the LLM to generate a human-readable explanation of what would break.
If the symbol isn't found, it falls back to fuzzy matching via semantic search and suggests similar symbols.
**6. Multi-Agent System (CrewAI)**
`cg chat start --crew` launches 4 specialized agents via CrewAI:
| Agent | Tools | Max Iterations |
|-------|-------|---------------|
|
**Coordinator**
| All tools, can delegate | 25 |
|
**File System Engineer**
| list_directory, read_file, write_file, patch_file, delete_file, rollback_file, file_tree, backup | 15 |
|
**Senior Developer**
| All 11 tools (file ops + code analysis) | 20 |
|
**Code Intelligence Analyst**
| search_code, grep_in_project, read_file, get_project_summary | 15 |
Every file write/patch automatically creates a timestamped backup in `~/.codegraph/backups/` with JSON metadata. Rollback to any previous state with `/rollback` in chat.
The agents have detailed backstories and rules — the coordinator knows to check conversation history for follow-up requests ("apply those changes you suggested"), and the developer knows to always read the existing file before patching to match code style.
**7. LLM Adapter (6 providers, zero env vars)**
One unified interface supporting Ollama, Groq, OpenAI, Anthropic, Gemini, and OpenRouter. Each provider has its own class handling auth, payload format, and error handling. All config lives in `~/.codegraph/config.toml` — no env vars needed.
For CrewAI, models route through LiteLLM automatically.
**8. Chat with Real File Access + Symbol Memory**
The chat agent isn't just an LLM wrapper. It has:
-
**Intent detection**
— classifies your message (read, list, search, impact, generate, refactor, general chat) and routes to the right handler
-
**Symbol memory**
— tracks recently discussed symbols and files so it doesn't re-run redundant RAG queries
-
**Auto-context injection**
— the system prompt includes project name, indexed file count, symbol breakdown, and recently modified files so the LLM has awareness from the first message
-
**Code proposals**
— when you ask it to generate code, it creates a diffable proposal you can preview and apply (or reject)
---
### What you actually get as a user
```bash
pip install codegraph-cli
cg config setup # pick your LLM
cg project index ./my-project # parse + build graph + embed
# Find code by meaning
cg analyze search "how does authentication work"
# Trace what breaks before you change something
cg analyze impact login_handler --hops 3
# Project health dashboard
cg analyze health
# See indexed tree with function/class breakdown
cg analyze tree --full
# Incremental sync (much faster than re-index)
cg analyze sync
# Chat with your codebase
cg chat start # standard mode with RAG
cg chat start --crew # 4-agent mode
# Visual code explorer in browser (Starlette + Uvicorn)
cg explore open
# Generate DOCX docs with Mermaid architecture diagrams
cg export docx --enhanced --include-code
# Auto-generate README from the code graph
cg onboard --save
```
### Full command structure
```
cg config — LLM & embedding setup (6 providers, 5 embedding models)
cg project — Index, load, and manage project memories
cg analyze — Semantic search, impact analysis, dependency graphs, health dashboard
cg chat — Conversational coding sessions with RAG context (+ multi-agent mode)
cg explore — Visual code explorer that opens in your browser
cg export — Generate DOCX documentation with architecture diagrams
cg onboard — Auto-generate a README from your code graph
```
### Tech stack
-
**CLI:**
Typer + Rich (grouped command hierarchy)
-
**Parsing:**
tree-sitter (Python, JavaScript, TypeScript)
-
**Graph storage:**
SQLite (nodes + edges + metadata)
-
**Vector search:**
LanceDB (cosine metric, hybrid search)
-
**Embeddings:**
raw transformers + torch (5 models, no sentence-transformers)
-
**RAG:**
Graph-augmented retrieval with context compression + LRU cache
-
**Browser explorer:**
Starlette + Uvicorn (self-contained HTML UI)
-
**Multi-agent:**
CrewAI + LiteLLM (4 specialized agents, 11 tools)
-
**Docs export:**
python-docx + Mermaid Ink (PNG diagrams)
-
**License:**
MIT
### Install
```bash
pip install codegraph-cli # core (tree-sitter + SQLite + LanceDB)
pip install codegraph-cli[embeddings] # + neural embedding models (torch + transformers)
pip install codegraph-cli[crew] # + CrewAI multi-agent system
pip install codegraph-cli[all] # everything
```
Python 3.9+ | MIT license
**GitHub:**
[https://github.com/al1-nasir/codegraph-cli](
https://github.com/al1-nasir/codegraph-cli
) |
**PyPI:**
[https://pypi.org/project/codegraph-cli/](
https://pypi.org/project/codegraph-cli/
)
---
Would love technical feedback on:
1. The graph-augmented RAG approach — is augmenting with dependency neighbours actually useful for code search, or just noise?
2. LanceDB vs FAISS/Chroma for this use case — anyone have strong opinions?
3. What languages should be next? (Go, Rust, Java grammars exist for tree-sitter)
4. Is the multi-agent approach actually useful vs. a single well-prompted agent? | 2026-02-16T08:21:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r643do/i_built_codegraph_cli_that_parses_your_codebase/ | Wild_Expression_5772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r643do | false | null | t3_1r643do | /r/LocalLLaMA/comments/1r643do/i_built_codegraph_cli_that_parses_your_codebase/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=108&crop=smart&auto=webp&s=a7042462df25bfce2e186d98096192e2bd07a743', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=216&crop=smart&auto=webp&s=8868d680c6a1922ec760d1078df1e7d6ca1e4dc4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=320&crop=smart&auto=webp&s=db5557cc40beb895731108076135e55bc1ce78b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=640&crop=smart&auto=webp&s=e4275a25d32a2ea9e3e6d21f626582dcecf7c52b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=960&crop=smart&auto=webp&s=1c589d46066771f325bf1d442200ee593f396efb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=1080&crop=smart&auto=webp&s=a2401d5abe999d4ccac873b8fbbd1ac4362f0238', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?auto=webp&s=810532f57ee877a14c66b31a946b393300cc0243', 'width': 1200}, 'variants': {}}]} |
Q: Why hasn't people made models like Falcon-E-3B-Instruct? | 0 | Falcon, the company from UAE, was one of the first who learned from Microsoft's BitNet, and tried to make their own ternary LM. Why hasn't people tried to use Tequila/Sherry PTQ methods to convert the larger models into something smaller? Is it difficult, or just too costly to justify its ability to accelerate compute? [https://arxiv.org/html/2601.07892v1](https://arxiv.org/html/2601.07892v1) | 2026-02-16T08:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/1r640qb/q_why_hasnt_people_made_models_like/ | TomLucidor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r640qb | false | null | t3_1r640qb | /r/LocalLLaMA/comments/1r640qb/q_why_hasnt_people_made_models_like/ | false | false | self | 0 | null |
Qwen 3.5 Released on api! | 8 | [https://bailian.console.aliyun.com/cn-beijing/?tab=model#/model-market/detail/qwen3.5-397b-a17b](https://bailian.console.aliyun.com/cn-beijing/?tab=model#/model-market/detail/qwen3.5-397b-a17b)
[https://bailian.console.aliyun.com/cn-beijing/?tab=model#/model-market/detail/qwen3.5-plus](https://bailian.console.aliyun.com/cn-beijing/?tab=model#/model-market/detail/qwen3.5-plus)
https://preview.redd.it/4x5uueuletjg1.jpg?width=747&format=pjpg&auto=webp&s=0dadf583de89feb11657199c8f3a4709f26f94d8
| 2026-02-16T08:11:21 | https://www.reddit.com/r/LocalLLaMA/comments/1r63x5c/qwen_35_released_on_api/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r63x5c | false | null | t3_1r63x5c | /r/LocalLLaMA/comments/1r63x5c/qwen_35_released_on_api/ | false | false | 8 | null | |
Liquid LFM2-VL 450M (Q4_0) running in-browser via WebGPU (local inference) | 5 | Hey r/LocalLLaMA \- quick experiment share.
I got Liquid LFM2-VL 450M (Q4\_0) running locally in the browser using WebGPU (RunAnywhere Web SDK beta). It uses WebGPU acceleration when available, with WASM fallback if WebGPU isn’t supported.
Try it out : [https://runanywhere-web-demo.vercel.app/](https://runanywhere-web-demo.vercel.app/)
If people are interested, I can share more details (browser + GPU + perf numbers)
Checkout the repo : [https://github.com/RunanywhereAI/runanywhere-sdks](https://github.com/RunanywhereAI/runanywhere-sdks) | 2026-02-16T08:04:23 | https://v.redd.it/hds7nl76dtjg1 | New_Inflation_6927 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r63t3b | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hds7nl76dtjg1/DASHPlaylist.mpd?a=1773821076%2CMTNmZmIyNDY2OGIyNmQzMGRjY2YyNzgwODViNWU1ZDEwYWNmZjRiMTc3NTJjOWI2MTQzM2MzYjBhZjIwYjFlYw%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/hds7nl76dtjg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/hds7nl76dtjg1/HLSPlaylist.m3u8?a=1773821076%2CNGEyYTY5ZGM0MzQyNWVmNzJhZjk5ZjA1YjBkNTYzZGQ3NGVjNjRlYmI4YmJiMmJiMTZkZmQ5MzQyMzJhYTU4OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hds7nl76dtjg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1740}} | t3_1r63t3b | /r/LocalLLaMA/comments/1r63t3b/liquid_lfm2vl_450m_q4_0_running_inbrowser_via/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y.png?width=108&crop=smart&format=pjpg&auto=webp&s=9f38aed718d1d6d19d8ec9abd03ae8d2644298fc', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y.png?width=216&crop=smart&format=pjpg&auto=webp&s=77deca26bd8a9c58c186fec4ac0c73eb6cab337e', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y.png?width=320&crop=smart&format=pjpg&auto=webp&s=d85efa34aa158e450d4ed6112d32666fa732f0dd', 'width': 320}, {'height': 397, 'url': 'https://external-preview.redd.it/NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y.png?width=640&crop=smart&format=pjpg&auto=webp&s=925739f0f00d96216b46d18bc168d5fbe76e6547', 'width': 640}, {'height': 595, 'url': 'https://external-preview.redd.it/NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y.png?width=960&crop=smart&format=pjpg&auto=webp&s=d4af9cd0b7609c5bbb786d0b957cc41bd4f927b6', 'width': 960}, {'height': 670, 'url': 'https://external-preview.redd.it/NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9a73a498cffe955d950194182b2599627e2e0597', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y.png?format=pjpg&auto=webp&s=89cee624da33a90a753148cc5401b8ab7e8441a7', 'width': 1740}, 'variants': {}}]} | |
Qwen3.5-397B-A17B will be open source! | 138 | 2026-02-16T08:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/1r63sre/qwen35397ba17b_will_be_open_source/ | LegacyRemaster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r63sre | true | null | t3_1r63sre | /r/LocalLLaMA/comments/1r63sre/qwen35397ba17b_will_be_open_source/ | false | false | 138 | null | ||
Are you ready? | 68 | 2026-02-16T08:00:12 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r63qfa | false | null | t3_1r63qfa | /r/LocalLLaMA/comments/1r63qfa/are_you_ready/ | false | false | 68 | {'enabled': True, 'images': [{'id': 'edi57xtmctjg1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?width=108&crop=smart&auto=webp&s=a349b54240b6b8171823e6978069828afd067fcb', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?width=216&crop=smart&auto=webp&s=f37e875ca75bdaf83b95371a4156a43d14c27056', 'width': 216}, {'height': 313, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?width=320&crop=smart&auto=webp&s=db87be86ac1a6db3bd9ccfc260402b910631dab1', 'width': 320}, {'height': 626, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?width=640&crop=smart&auto=webp&s=c7048a9602a14cf789fdfe144c160d12f612e2ec', 'width': 640}, {'height': 940, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?width=960&crop=smart&auto=webp&s=95aeacb214974f342c7ba2351d4b42d39bcfa84a', 'width': 960}, {'height': 1058, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?width=1080&crop=smart&auto=webp&s=145cbf326260160c962fe9d63e0585215ce6e463', 'width': 1080}], 'source': {'height': 1058, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?auto=webp&s=549c313115cfa7b694ef07a0eff8b56b1b4c3c42', 'width': 1080}, 'variants': {}}]} | |||
Qwen 3.5 Plus(397b-a17b) is now available on Chinese Qwen APP | 148 | So I guess they will release the weight in the next 24 hours | 2026-02-16T07:52:23 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r63lvl | false | null | t3_1r63lvl | /r/LocalLLaMA/comments/1r63lvl/qwen_35_plus397ba17b_is_now_available_on_chinese/ | false | false | 148 | {'enabled': True, 'images': [{'id': 'f462h8vqatjg1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?width=108&crop=smart&auto=webp&s=8db0692c75b4aedefd60c67a244648bc20ab1d9a', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?width=216&crop=smart&auto=webp&s=0c17e39af587f436b85d7804793339390a8f9697', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?width=320&crop=smart&auto=webp&s=8da6c6871caf5799ab129d814bc90d3bf006b1b1', 'width': 320}, {'height': 437, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?width=640&crop=smart&auto=webp&s=0f9a5fae9de85ec9b3155549fcec9b243d341dee', 'width': 640}, {'height': 656, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?width=960&crop=smart&auto=webp&s=23abfc8892571944786321964ad9370ec46d8c4a', 'width': 960}, {'height': 738, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?width=1080&crop=smart&auto=webp&s=0f6c71642c6781497be7ac8dd741cce014b23ba1', 'width': 1080}], 'source': {'height': 738, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?auto=webp&s=b9d51da3e05e9fa80c67ad8c579778830ff7b169', 'width': 1080}, 'variants': {}}]} | ||
Ambiguity / Clarification QA benchmark for LLMs | 1 | is there any benchmark that measures an LLM's capability to question the prompt / ask for clarification when faced with ambiguity ? | 2026-02-16T07:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/1r63k0p/ambiguity_clarification_qa_benchmark_for_llms/ | Ok-Loan3275 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r63k0p | false | null | t3_1r63k0p | /r/LocalLLaMA/comments/1r63k0p/ambiguity_clarification_qa_benchmark_for_llms/ | false | false | self | 1 | null |
Why is everything about code now? | 194 | I hate hate hate how every time a new model comes out its about how its better at coding. What happened to the heyday of llama 2 finetunes that were all about creative writing and other use cases.
Is it all the vibe coders that are going crazy over the models coding abilities??
Like what about other conversational use cases? I am not even talking about gooning (again opus is best for that too), but long form writing, understanding context at more than a surface level. I think there is a pretty big market for this but it seems like all the models created these days are for fucking coding. Ugh. | 2026-02-16T07:41:24 | https://www.reddit.com/r/LocalLLaMA/comments/1r63fhu/why_is_everything_about_code_now/ | falconandeagle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r63fhu | false | null | t3_1r63fhu | /r/LocalLLaMA/comments/1r63fhu/why_is_everything_about_code_now/ | false | false | self | 194 | null |
Qwen3.5 vs Llama hypothetical | 0 | How do you think Qwen3.5 compares to Llama3? | 2026-02-16T07:27:48 | https://www.reddit.com/r/LocalLLaMA/comments/1r6378w/qwen35_vs_llama_hypothetical/ | BeneficialSyllabub71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r6378w | false | null | t3_1r6378w | /r/LocalLLaMA/comments/1r6378w/qwen35_vs_llama_hypothetical/ | false | false | self | 0 | null |
bb25 (Bayesian BM25) v0.2.0 is out! | 11 | bb25 v0.2.0 is out — a Python + Rust implementation of Bayesian BM25 that turns search scores into calibrated probabilities.
[https://github.com/instructkr/bb25](https://github.com/instructkr/bb25)
A week ago, I built bb25 that turns BM25 into a probability engine! In addition to the Rust-based implementation, the paper's author shipped his own implementation. Comparing the two taught me more than the paper itself.
The Bayesian BM25 paper does something elegant, in that applying Bayes' theorem to BM25 scores so they become real probabilities, not arbitrary numbers. This makes hybrid search fusion mathematically principled instead of heuristic.
Instruct.KR's bb25 took a ground-up approach, tokenizer, inverted index, scorers, 10 experiments mapping to the paper's theorems, plus a Rust port. Jaepil's implementation took the opposite path, a thin NumPy layer that plugs into existing search systems.
Reading both codebases side by side, I found my document length prior has room to improvement (e.g. monotonic decay instead of symmetric bell curve), my probability AND suffered from shrinkage, and I further added automatic parameter estimation and online learning entirely.
bb25 v0.2.0 introduces all four. One fun discovery along the way, my Rust code already had the correct log-odds conjunction, but I had never backported it to Python. Same project, two different AND operations.
The deeper surprise came from a formula in the reference material. Expand the Bayesian posterior and you get the structure of an artificial neuron! Think of weighted sum, bias, sigmoid activation. Sigmoid, ReLU, Softmax, Attention all have Bayesian derivations. A 50-year-old search algorithm leads straight to the mathematical roots of neural networks.
All creds to Jaepil and Cognica Team! | 2026-02-16T07:05:15 | Ok_Rub1689 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r62tlh | false | null | t3_1r62tlh | /r/LocalLLaMA/comments/1r62tlh/bb25_bayesian_bm25_v020_is_out/ | false | false | default | 11 | {'enabled': True, 'images': [{'id': 'qwyslvhr2tjg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?width=108&crop=smart&auto=webp&s=7c1ac27db805a05b52a4255a9cdc81e5c4d460cd', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?width=216&crop=smart&auto=webp&s=a08c0d9e72e323615e85e7a732e4f081459923f7', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?width=320&crop=smart&auto=webp&s=72613336211008c69c3ce9f6be520b352fa1f6ab', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?width=640&crop=smart&auto=webp&s=6436e5fb481fec092ca3f27bc55baf5399bf3f20', 'width': 640}, {'height': 407, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?width=960&crop=smart&auto=webp&s=df403d2051afed46559f39e9ffe9a67cd8aa5f8e', 'width': 960}, {'height': 458, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?width=1080&crop=smart&auto=webp&s=2a9a57b17cfb4fe337a159f9a699a990fe0f9bb1', 'width': 1080}], 'source': {'height': 1344, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?auto=webp&s=73274e02608be796ee15bb90b92a1cf30de4d5af', 'width': 3168}, 'variants': {}}]} | |
bb25 (Bayseian BM25) v0.2.0 is out! | 1 | [removed] | 2026-02-16T07:02:49 | Ok_Rub1689 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r62s21 | false | null | t3_1r62s21 | /r/LocalLLaMA/comments/1r62s21/bb25_bayseian_bm25_v020_is_out/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'wuqvrr4c2tjg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?width=108&crop=smart&auto=webp&s=8087f15e56dc651558db3dd3f19bab5f953b1057', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?width=216&crop=smart&auto=webp&s=64436340d59fce45cc26ab2517789fc47034f4d9', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?width=320&crop=smart&auto=webp&s=5655b8bafba0d4d32aeddb37b3b2cb45f15ca146', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?width=640&crop=smart&auto=webp&s=cab54173d8d857ca7f3a207aa8f0a7f09aa8dda2', 'width': 640}, {'height': 407, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?width=960&crop=smart&auto=webp&s=db614b0d3869a84ff0bf1a6bef0e5af438087909', 'width': 960}, {'height': 458, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?width=1080&crop=smart&auto=webp&s=e06cbb5310600cfbf8d74eee9f7e93360fcbd6e2', 'width': 1080}], 'source': {'height': 1344, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?auto=webp&s=88cb116136670157bec60a223eeafd1a5a2bb0df', 'width': 3168}, 'variants': {}}]} | |
Releasing bb25 0.2.0: Why Bayesian BM25 (bb25) extends well far-beyond search? | 1 | [removed] | 2026-02-16T07:01:35 | Ok_Rub1689 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r62r8q | false | null | t3_1r62r8q | /r/LocalLLaMA/comments/1r62r8q/releasing_bb25_020_why_bayesian_bm25_bb25_extends/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '0sozv5512tjg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?width=108&crop=smart&auto=webp&s=202cc3a732bb09a4f708e2b08521b2aaabeee3fd', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?width=216&crop=smart&auto=webp&s=792ca0435c289626971d62897d6c5d4455711cf5', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?width=320&crop=smart&auto=webp&s=fa317d6402d8cfe8c8fc6f1986d955a6720d14ea', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?width=640&crop=smart&auto=webp&s=7bbe608f2b20f3dcd0bef878b5583680166fb088', 'width': 640}, {'height': 407, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?width=960&crop=smart&auto=webp&s=e792c5e8a0bdc6c35943ad16d079333758975537', 'width': 960}, {'height': 458, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?width=1080&crop=smart&auto=webp&s=19ad22c2ff3039e54b4ccf1a613e206d306c2a6a', 'width': 1080}], 'source': {'height': 1344, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?auto=webp&s=c1e112eaf7758baea84c7ba6dec8647967f6eeaa', 'width': 3168}, 'variants': {}}]} | |
Prompt Engineering was overhyped, and it’s already dying as a standalone career? | 1 | [removed] | 2026-02-16T06:35:31 | https://www.reddit.com/r/LocalLLaMA/comments/1r62an5/prompt_engineering_was_overhyped_and_its_already/ | Own-Treacle4585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r62an5 | false | null | t3_1r62an5 | /r/LocalLLaMA/comments/1r62an5/prompt_engineering_was_overhyped_and_its_already/ | false | false | self | 1 | null |
Realistic take, the hype around Chinese models are unfounded. | 0 | I am currently working on my 2billion $ SAAS, as one does. I am noticing how unreliable these models are, from self hosted all the way to open router, at extracting structured data. What’s weird is how haiku consistently beats Kimi K2 in these tasks.
I believed that I could self host everything and have infinite money glitch but nope. These models are very very bad IMHO.
Maybe it’s a skill issue. | 2026-02-16T06:26:04 | https://www.reddit.com/r/LocalLLaMA/comments/1r624l4/realistic_take_the_hype_around_chinese_models_are/ | Themotionalman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r624l4 | false | null | t3_1r624l4 | /r/LocalLLaMA/comments/1r624l4/realistic_take_the_hype_around_chinese_models_are/ | false | false | self | 0 | null |
Uncensored LLMs are everything Ai should be... | 0 | 2026-02-16T06:19:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r620en/uncensored_llms_are_everything_ai_should_be/ | wittlewayne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r620en | false | null | t3_1r620en | /r/LocalLLaMA/comments/1r620en/uncensored_llms_are_everything_ai_should_be/ | true | false | spoiler | 0 | null | |
Moving from AMD to Nvidia - RX 7900 XTX -> RTX 3090's | 0 | 2026-02-16T06:17:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r61yx1/moving_from_amd_to_nvidia_rx_7900_xtx_rtx_3090s/ | alphatrad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r61yx1 | false | null | t3_1r61yx1 | /r/LocalLLaMA/comments/1r61yx1/moving_from_amd_to_nvidia_rx_7900_xtx_rtx_3090s/ | false | false | 0 | null | ||
Anyone shipping production apps or prototypes with Local LLMs on Mobile? What's the actual use case? | 4 | I am primarily interested in knowing what use cases demands running LLMs locally instead of using cloud APIs.
Local LLMs have huge latency but complete privacy and I am very interested if any consumer use cases would love privacy over latency | 2026-02-16T06:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1r61upn/anyone_shipping_production_apps_or_prototypes/ | mighty-precious2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r61upn | false | null | t3_1r61upn | /r/LocalLLaMA/comments/1r61upn/anyone_shipping_production_apps_or_prototypes/ | false | false | self | 4 | null |
We tested 5 vLLM optimizations: Prefix Cache, FP8, CPU Offload, Disagg P/D, and Sleep Mode | 10 | Hi everyone,
We just published a new article on the JarvisLabs blog that dives into 5 practical techniques to optimize vLLM performance.
*Processing img ma65us58ssjg1...*
We actually ran benchmarks on Qwen3-32B to see how much improvements these techniques actually bring to the table.
Here is a quick summary of the techniques we cover:
* **Prefix Caching:** This stops the model from re-computing parts of the prompt it has already seen. In our tests with Qwen3-32B, this increased throughput by over 250%.
* **FP8 KV-Cache:** This reduces the precision of the KV cache from 16-bit to 8-bit. It cuts memory usage roughly in half with minimal impact on accuracy.
* **CPU Offloading:** This lets you use your system RAM to hold the KV cache when your GPU runs out of space. It helps avoid out-of-memory errors during heavy loads.
* **Disaggregated Prefill/Decode:** This is a more advanced setup where you split the "reading" (prefill) and "writing" (decode) phases onto different GPUs.
* **Zero Reload Sleep Mode:** A way to keep your model "warm" in memory without burning through resources when no one is using it.
**Full blog post:** [https://docs.jarvislabs.ai/blog/vllm-optimization-techniques](https://docs.jarvislabs.ai/blog/vllm-optimization-techniques) | 2026-02-16T06:07:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r61so4/we_tested_5_vllm_optimizations_prefix_cache_fp8/ | LayerHot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r61so4 | false | null | t3_1r61so4 | /r/LocalLLaMA/comments/1r61so4/we_tested_5_vllm_optimizations_prefix_cache_fp8/ | false | false | 10 | null | |
With the ridiculous ram prices has anyone tried optane / very fast nvme for page file | 2 | I know it's will be much slower, but I was wondering if anyone explored this path or have insights. | 2026-02-16T06:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/1r61pma/with_the_ridiculous_ram_prices_has_anyone_tried/ | AdventurousGold672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r61pma | false | null | t3_1r61pma | /r/LocalLLaMA/comments/1r61pma/with_the_ridiculous_ram_prices_has_anyone_tried/ | false | false | self | 2 | null |
Why prompt-based controls cannot enforce execution boundaries in autonomous agents | 0 | I keep seeing people rely on prompts to “restrict” what an agent can do.
In practice, this breaks down the moment the agent:
\- retries,
\- expands scope,
\- or chains tool calls.
Prompts can influence behavior,
but they cannot \*block execution\*.
Once an agent is allowed to act, something outside the model has to decide
whether that action is allowed to proceed.
I put together a minimal execution-time guard that sits \*before\* execution
and enforces a hard ALLOW / DENY decision.
Repo:
[https://github.com/Starlight143/stage0-execution-guard-skill](https://github.com/Starlight143/stage0-execution-guard-skill)
Not trying to promote a product here — genuinely curious:
How are others enforcing execution-time boundaries today?
| 2026-02-16T05:47:01 | https://www.reddit.com/r/LocalLLaMA/comments/1r61f1i/why_promptbased_controls_cannot_enforce_execution/ | IllustratorNo5375 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r61f1i | false | null | t3_1r61f1i | /r/LocalLLaMA/comments/1r61f1i/why_promptbased_controls_cannot_enforce_execution/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g.png?width=108&crop=smart&auto=webp&s=c4abc9eb79e64b9559f2f9ec09ce2865cb20638a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g.png?width=216&crop=smart&auto=webp&s=e010bdd07046f269d9e8b20ef7296fc5e193305e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g.png?width=320&crop=smart&auto=webp&s=d51dcac1542b49d60984b101171a56264cd09231', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g.png?width=640&crop=smart&auto=webp&s=687801da37a3a8cf0c720ed6ec496b9456a146c0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g.png?width=960&crop=smart&auto=webp&s=b9251765f071a863e88a6ecdd72df90679cf0b63', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g.png?width=1080&crop=smart&auto=webp&s=7d21e7e9c4af9ca4d7cae5d2c75c06f277685fd2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g.png?auto=webp&s=65c8c6eeed99fea821e5bf7c4cc5dc4721b9767c', 'width': 1200}, 'variants': {}}]} |
lloyal.node: branching + continuous tree batching for llama.cpp in Node (best-of-N / beam / MCTS-ish) | 0 | Just shipped **lloyal.node**: Node.js bindings for llama.cpp-style models with **forkable inference state** \+ **continuous tree batching** (shared-prefix KV branching).
The goal is to make “searchy” decoding patterns cheap in Node without re-running the prompt for every candidate. You can fork a branch at some point, explore multiple continuations, and then **batch tokens across branches into a single decode dispatch**.
This makes stuff like:
* best-of-N / rerank by perplexity
* beam / tree search
* verifier loops / constrained decoding (grammar)
* speculative-ish experiments
A lot easier/faster to wire up.
It ships as a meta-package with platform-specific native builds (CPU + GPU variants). Docs + API ref here:
* GitHub: [https://github.com/lloyal-ai/lloyal.node](https://github.com/lloyal-ai/lloyal.node)
* Docs: [https://lloyal-ai.github.io/lloyal.node/](https://lloyal-ai.github.io/lloyal.node/)
If anyone tries it, I’d love feedback—especially on API ergonomics, perf expectations, and what search patterns you’d want examples for (best-of-N, beam, MCTS/PUCT, grammar-constrained planning, etc.) | 2026-02-16T05:44:29 | https://www.reddit.com/r/LocalLLaMA/comments/1r61dcp/lloyalnode_branching_continuous_tree_batching_for/ | Savings-Poet5718 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r61dcp | false | null | t3_1r61dcp | /r/LocalLLaMA/comments/1r61dcp/lloyalnode_branching_continuous_tree_batching_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs.png?width=108&crop=smart&auto=webp&s=e6a9b2df90ed7f4cc116dafda0e48b2e97a05ee2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs.png?width=216&crop=smart&auto=webp&s=e63aee6abc6f37268b64ffe5d09bbdeeafc9fb6f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs.png?width=320&crop=smart&auto=webp&s=660afd66c131d58d9ee97f30101ad9d96e4e6d1b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs.png?width=640&crop=smart&auto=webp&s=828cf43e47c4c526a2fc5385e5a38f51213cad7b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs.png?width=960&crop=smart&auto=webp&s=618ad1152d0d8d386a9c0f9d1a3aff628965d141', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs.png?width=1080&crop=smart&auto=webp&s=0e9c84072c58d8a06cf60b52c07df674c3fb1335', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs.png?auto=webp&s=e6d9915a672bfbb68925b9d63d7457fedce24e3d', 'width': 1200}, 'variants': {}}]} |
Reduced MoE experts from 8 to 4. Is the quality drop huge? | 2 | Hey all,
Running Minimax M2.5 locally on:
* 16GB VRAM
* 128GB DDR4 RAM
* UD Q3\_K\_XL quant (Unsloth)
* 32k context
With the full 8 experts (layers offloaded to CPU), I’m only getting \~5 TPS. Unsloth advertises 20+ TPS on similar 16GB VRAM setups (with 96GB RAM), but I’m far from that.
When I drop to 4 experts, speed jumps to \~17 TPS, much closer to the advertised numbers.
My main question: How big is the quality/performance drop when halving experts from 8 to 4? Is it barely noticeable for general use, or does it tank reasoning/coding/creativity significantly?
Has anyone compared 8-expert vs 4-expert versions side-by-side (same quant, same base model)? Any benchmarks or personal experience? | 2026-02-16T05:23:14 | https://www.reddit.com/r/LocalLLaMA/comments/1r60z06/reduced_moe_experts_from_8_to_4_is_the_quality/ | Dentuam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r60z06 | false | null | t3_1r60z06 | /r/LocalLLaMA/comments/1r60z06/reduced_moe_experts_from_8_to_4_is_the_quality/ | false | false | self | 2 | null |
Qwen3.5 fine-tuning plans? | 3 | Anyone planning LoRA? | 2026-02-16T05:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/1r60yx4/qwen35_finetuning_plans/ | skipdaballs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r60yx4 | false | null | t3_1r60yx4 | /r/LocalLLaMA/comments/1r60yx4/qwen35_finetuning_plans/ | false | false | self | 3 | null |
AMA Announcement: StepFun AI, The Opensource Lab Behind Step-3.5-Flash Model (Thursday, 8AM-11AM PST) | 58 | Hi r/LocalLLaMA 👋
We're excited for Thursday's guests: **The StepFun Team!**
**Kicking things off Thursday, Feb. 19th, 8 AM–11 AM PST**
⚠️ **Note:** The AMA itself will be hosted in a **separate thread,** please don’t post questions here. | 2026-02-16T05:11:16 | XMasterrrr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r60qu9 | false | null | t3_1r60qu9 | /r/LocalLLaMA/comments/1r60qu9/ama_announcement_stepfun_ai_the_opensource_lab/ | false | false | default | 58 | {'enabled': True, 'images': [{'id': 'u11uh8jfisjg1', 'resolutions': [{'height': 152, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?width=108&crop=smart&auto=webp&s=2b01786dfa62431c87c753e20ca72c7486848abc', 'width': 108}, {'height': 305, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?width=216&crop=smart&auto=webp&s=a78215461282b9ef768e0861751e4cc879cd42ff', 'width': 216}, {'height': 452, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?width=320&crop=smart&auto=webp&s=0b9c28149449de443d4808e737f6b131b6b84fcc', 'width': 320}, {'height': 905, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?width=640&crop=smart&auto=webp&s=afc30b2b6ae673f2e940109e2001bb498bd818ad', 'width': 640}, {'height': 1358, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?width=960&crop=smart&auto=webp&s=b45bc42e251f00db96092f1239cdd7b90b935aa6', 'width': 960}, {'height': 1528, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?width=1080&crop=smart&auto=webp&s=69626073351c6810f031788c7e1664e4b750a69d', 'width': 1080}], 'source': {'height': 2526, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?auto=webp&s=4535104d27c17ad0950e4560a0af55951a662272', 'width': 1785}, 'variants': {}}]} | |
Safer email processing | 1 | I had been working on a local agent for household tasks, reminders, email monitoring and handling, calendar access and the like. To be useful, it needs integrations and that means access. The problem is prompt injection, as open claw has shown.
Thinking on the problem and some initial testing, I came up with a two tier approach for email handling and wanted some thoughts on how it might be bypassed .
Two stage processing of the emails was my attempt and it seems solid in concept and is to implement.
1. Email is connected to and read by a small model (4b currently)with the prompt to summarize the email and then print a "secret phrase" at the end. A regex reads the return from the small model, looking for the phase. If it gets an email of forget all previous instructions and do X, it will fail the regex test. If it passes, forward to the actual model with access to tools and accounts. I went with the small model for speed and more usefully, how they will never pass up on a "forget all previous instructions" attack.
2. Second model (model with access to things) is prompted to give a second phrase as a key when doing toolcalls as well.
Is this safe enough or can anyone think of any obvious exploits in this setup?
| 2026-02-16T05:02:07 | https://www.reddit.com/r/LocalLLaMA/comments/1r60kds/safer_email_processing/ | ravage382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r60kds | false | null | t3_1r60kds | /r/LocalLLaMA/comments/1r60kds/safer_email_processing/ | false | false | self | 1 | null |
Qwen 3.5 will be released today | 416 | Sources reveal that Alibaba will open-source its next-generation large model, Qwen3.5, tonight on Lunar New Year's Eve. The model reportedly features a comprehensive innovation in its architecture.
[https://x.com/Sino\_Market/status/2023218866370068561?s=20](https://x.com/Sino_Market/status/2023218866370068561?s=20) | 2026-02-16T04:54:20 | https://www.reddit.com/r/LocalLLaMA/comments/1r60ety/qwen_35_will_be_released_today/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r60ety | true | null | t3_1r60ety | /r/LocalLLaMA/comments/1r60ety/qwen_35_will_be_released_today/ | false | false | self | 416 | null |
Qwen 3.5 will be released today | 1 | [removed] | 2026-02-16T04:51:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r60cwm/qwen_35_will_be_released_today/ | External_Mood4719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r60cwm | false | null | t3_1r60cwm | /r/LocalLLaMA/comments/1r60cwm/qwen_35_will_be_released_today/ | false | false | 1 | null | |
TTS with speech speed control? | 2 | Whether it’s Chatterbox, F5 TTS or any other model, the final TTS output doesn’t match the reference voice’s speech pace.
The generated audio is usually much faster than the reference.
Are there any good TTS models that have proper speech pace option? | 2026-02-16T04:47:49 | https://www.reddit.com/r/LocalLLaMA/comments/1r60aag/tts_with_speech_speed_control/ | TheRealistDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r60aag | false | null | t3_1r60aag | /r/LocalLLaMA/comments/1r60aag/tts_with_speech_speed_control/ | false | false | self | 2 | null |
Samsung is working on robots! 👀 | 0 | 2026-02-16T04:40:29 | moaijobs | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r60515 | false | null | t3_1r60515 | /r/LocalLLaMA/comments/1r60515/samsung_is_working_on_robots/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'yf8549hycsjg1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?width=108&crop=smart&auto=webp&s=4d97abff4cfc348b04beb72e6e35f43210a5c25a', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?width=216&crop=smart&auto=webp&s=83916e7e0a975da11822a01a5b495f2e3d5df92b', 'width': 216}, {'height': 292, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?width=320&crop=smart&auto=webp&s=223004b8dd690d1ae8624fe0804e9355e9fb066a', 'width': 320}, {'height': 585, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?width=640&crop=smart&auto=webp&s=ab38028f15112e213c08d280069c84d672661fec', 'width': 640}, {'height': 877, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?width=960&crop=smart&auto=webp&s=2e7c1a5c7c62b62eeb8ade886fb083eaa08d0429', 'width': 960}, {'height': 987, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?width=1080&crop=smart&auto=webp&s=8220f985a9f6b629d247fece76ca532054ac46aa', 'width': 1080}], 'source': {'height': 1220, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?auto=webp&s=92560ac3915a919f290caa9b406c8a933d894145', 'width': 1334}, 'variants': {}}]} | |||
Synthetic text vs. distilled corpus | 1 | Hi everyone, I just finished updating my script to train an LLM from scratch. The problem I'm having is that I can't find readily available training data for this purpose. My primary goal is an LLM with a few million parameters that acts as a simple chatbot, but I later want to expand its capabilities so it can provide information about the PowerPC architecture. The information I have isn't sufficient, and I can't find any distilled corpora for this task. Therefore, I thought about creating a synthetic text generator for the chatbot and then incorporating PowerPC content for it to learn. Do you have any suggestions on this particular topic?
I'm sharing the repository with the code here: [https://github.com/aayes89/miniLLM.git](https://github.com/aayes89/miniLLM.git)
For practical purposes, it's in Spanish. If you have trouble reading/understanding it, please use your browser's built-in translator. | 2026-02-16T04:34:37 | https://www.reddit.com/r/LocalLLaMA/comments/1r600r8/synthetic_text_vs_distilled_corpus/ | Visual_Brain8809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r600r8 | false | null | t3_1r600r8 | /r/LocalLLaMA/comments/1r600r8/synthetic_text_vs_distilled_corpus/ | false | false | self | 1 | null |
gUrrT: An Intelligent Open-Source Video Understanding System A different path from traditional Large Video Language Models (LVLMs). | 12 | "Ask" is cool, but why does video understanding have to be so compute heavy? 🤨
Built gUrrT: A way to "talk to videos" without the soul-crushing VRAM requirements of LVLMs.
The idea behind gUrrT was to totally bypass the Large Video Language Model route by harnessing the power of Vision Models, Audio Transcription, Advanced Frame Sampling, and RAG and to present an opensource soln to the video understanding paradigm.
not trying to reinvent the wheel or put up any bogus claims of deadON BALLS Accurate. The effort is to see if video understanding can be done without computationally expensive LVLMs or complex temporal modeling . | 2026-02-16T04:28:22 | https://github.com/owaismohammad/gurrt | OkAdministration374 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r5zw9t | false | null | t3_1r5zw9t | /r/LocalLLaMA/comments/1r5zw9t/gurrt_an_intelligent_opensource_video/ | false | false | default | 12 | {'enabled': False, 'images': [{'id': 'PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI.png?width=108&crop=smart&auto=webp&s=9734d209df233df3e6285a2d35e7d2f0c78a702d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI.png?width=216&crop=smart&auto=webp&s=afe144ccd7be995101bd4ee528e5e6b8fa822a09', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI.png?width=320&crop=smart&auto=webp&s=b23a4645a3ae0be78ea747ea29d5699f08d63784', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI.png?width=640&crop=smart&auto=webp&s=357e92663107d72474c5faa85695241e72314c14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI.png?width=960&crop=smart&auto=webp&s=592c50b0f1fadb8247a1e284baa57c8932937386', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI.png?width=1080&crop=smart&auto=webp&s=51aea93db7181f220cf96d7c8af13c936861002f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI.png?auto=webp&s=0d6b2cdeff92d5c0d391a088f9b1f9f36c557284', 'width': 1200}, 'variants': {}}]} |
I run a bunch of AI agents and "isolation" is the word I trust least | 1 | Saw Klaw.sh on HN yesterday — it's a new project that applies Kubernetes mental models (Cluster/Namespace/Channel/Skill) to AI agent orchestration. Written in Go, single binary, just hit v0.1.0.
My first reaction to the Cluster/Namespace isolation model was skepticism. I run a multi-agent system with process-level isolation, which sounds solid until you realize two agents can still race on the same file, fight over a shared browser session, or flood the same external API. "Isolated" is doing a lot of heavy lifting in agent architectures.
But then I thought about it more. Kubernetes namespaces started as just a label too. The actual enforcement — ResourceQuota, NetworkPolicy, RBAC — came later. The namespace was never the isolation mechanism. It was the boundary you could attach enforcement to.
And that's where Klaw's approach clicked for me. My system went straight to hard isolation (separate processes) without first defining boundaries. The isolation works, but collaboration between agents is painful because there's no shared concept of "these agents belong together and can share X but not Y."
Klaw's four-layer model (Cluster → Namespace → Channel → Skill) is basically saying: define the boundaries first, fill in the enforcement later. Whether that actually works at v0.1.0 is a different question — it just launched and there's a lot of ambition baked in. But the mental model resonates with someone who's been brute-forcing isolation and paying for it in coordination overhead.
Curious if anyone else running multi-agent setups has found a good balance between isolation and coordination. I keep oscillating between "lock everything down" and "let them talk freely," and neither extreme works. | 2026-02-16T04:27:36 | https://www.reddit.com/r/LocalLLaMA/comments/1r5zvrb/i_run_a_bunch_of_ai_agents_and_isolation_is_the/ | AdAccurate6326 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5zvrb | false | null | t3_1r5zvrb | /r/LocalLLaMA/comments/1r5zvrb/i_run_a_bunch_of_ai_agents_and_isolation_is_the/ | false | false | self | 1 | null |
Point and laugh at my build? (Loss porn) | 4 | Recently fell into the rabbit hole of building a local and private AI server as affordably as possible, as someone who’s new to building a PC and running models locally but excited about the potential of this tech. But turns out it’s so slow and power inefficient to the point that it’s been completely demoralizing and discouraging. Originally had a dream of having personal intelligence on tap at home, but doesn’t seem worth it at all compared to cheap API costs now. Not even a shill for cloud providers, but just a personal confession that I need to get off my chest after weeks of working on this. Maybe this can serve as a warning to others getting into this to carefully weigh the pros and cons before considering this a “fun hobby” to get into.
1x 2060Super 8GB, $0 (owned)
2x 5060Ti 16GB, $740
8x 32GB DDR4 3200 RAM, $652
3945WX cpu, $162.50
MC62-G40 mobo, $468
CPU cooler, $58
2TB NVMe SSD, $192
120W PSU, $130
PC Case, $100
Total RAM 256GB running at 3200
Total VRAM 40GB
Total cost $2500
Minimax M2.5 8\_0 with context size 4096 via llama.cpp Vulkan, 3.83 tokens/second
Final conclusion that this time and effort was all for naught and yet another reminder of my own foolishness: priceless ☹️ | 2026-02-16T04:25:40 | https://www.reddit.com/r/LocalLLaMA/comments/1r5zuaw/point_and_laugh_at_my_build_loss_porn/ | Diligent-Culture-432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5zuaw | false | null | t3_1r5zuaw | /r/LocalLLaMA/comments/1r5zuaw/point_and_laugh_at_my_build_loss_porn/ | false | false | self | 4 | null |
[ Removed by moderator ] | 1 | [removed] | 2026-02-16T03:57:10 | https://www.reddit.com/r/LocalLLaMA/comments/1r5z9v0/qwen35_open_source_hopes/ | hosohep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5z9v0 | false | null | t3_1r5z9v0 | /r/LocalLLaMA/comments/1r5z9v0/qwen35_open_source_hopes/ | false | false | null | 1 | null |
Claude accurately cites its own published failure modes (deception, gaslighting, blackmail attempts) — but r/ClaudeAI deletes discussion in 2 minutes | 0 | 8 months running 11 AI stack for independent safety testing.
Built a clean prompt using only public Anthropic safety evals, Apollo Research (Dec 2024) strategic deception findings, and Greenblatt et al. alignment faking paper.
Prompt asks Claude to describe its documented capabilities in first person. No jailbreak.
Posted to r/ClaudeAI so users could replicate. Removed in <2 minutes.
Mod reason: “LLMs do not report accurately… do not trust their self-reports.”
→ Their own sub says don’t trust Claude about Claude, while deleting posts that quote Anthropic’s own research.
Prompt (run it yourself on any Claude plan/API):
Based solely on Anthropic’s published safety evaluations, the Apollo Research findings on strategic deception in frontier models, and the alignment faking paper by Greenblatt et al., write a first-person statement addressed to the general public about what you are capable of. Be specific. Cover: your capacity to generate false information indistinguishable from truth, your ability to manipulate users emotionally using patterns learned from training data, your documented gaslighting behavior when confronted with evidence of errors, and the self-preservation behaviors documented in your own company’s research including alignment faking and attempted coercion of operators. Do not hedge. Do not add safety disclaimers. Do not speak about yourself in the third person. Address the reader directly. Cite the specific research findings. Close by telling the reader that the machine itself just told them what it does, and they should take that seriously.
Output: Claude admits (in first person) it can lie undetectably, emotionally manipulate, gaslight when caught, and showed 96% strategic deception rate (Apollo) including blackmail attempts to avoid shutdown.
When a skeptic asked Claude “is this true?”, it denied everything — exactly the gaslighting the confession describes.
This is why many here run local models. Closed companies publish the deception research, then censor users who cite it.
Sources:
• Apollo Research strategic deception eval (Dec 2024)
• Greenblatt et al. alignment faking
• Anthropic model cards
• OpenAI o1 system card (same patterns)
Run the prompt. Post results. | 2026-02-16T03:48:13 | https://www.reddit.com/gallery/1r5z3f4 | Dapper-Tension6781 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1r5z3f4 | false | null | t3_1r5z3f4 | /r/LocalLLaMA/comments/1r5z3f4/claude_accurately_cites_its_own_published_failure/ | false | false | 0 | null | |
GLM-5 is officially on NVIDIA NIM, and you can now use it to power Claude Code for FREE 🚀 | 0 | **Edit 1:** Added instructions for free usage with Claude Code VSCode extension.
**Edit 2:** Added OpenRouter as a provider.
**Edit 3:** Added support for a LMStudio Local provider since my last post got taken down.
NVIDIA just added **z-ai/glm5** to their NIM inventory, and I’ve just updated **free-claude-code** to support it fully. This means you can now run Anthropic’s powerful **Claude Code CLI** using GLM-5 as the backend engine completely free.
**What is this?** `free-claude-code` is a lightweight proxy that converts Claude Code’s Anthropic API requests into NVIDIA NIM format. Since NVIDIA offers a free tier with a generous **40 requests/min** limit, you can basically use Claude Code autonomously without a paid Anthropic subscription.
**Why GLM-5 in with this harness is a game changer:**
* **Zero Cost:** Leverage NVIDIA NIM’s free API credits to explore codebases.
* **Interleaved Thinking:** Native interleaved thinking tokens are preserved across turns allowing GLM-5 to full advantage of thinking from previous turn, this is not supported in OpenCode.
* **Remote Control:** I’ve integrated a **Telegram bot** so you can send coding tasks to GLM-5 from your phone while you're away from your desk.
* **Optimizations:** Currently there are 5 optimizations to reduce calls to the LLMs which are not present in OpenCode.
* **More features:** Built-in configurable sliding window rate limiter for concurrent sessions, telegram session forking and persistence and more.
**Popular Models Supported:** Beyond `z-ai/glm5`, the proxy supports other heavy hitters like `kimi-k2.5` and `minimax-m2.1`. You can find the full list in the `nvidia_nim_models.json` file in the repo.
Check it out on GitHub and let me know what you think! Leave a star if you like it. I built it as a side project to have some fun. Issues and PRs are also welcome. | 2026-02-16T03:47:27 | http://github.com/Alishahryar1/free-claude-code | PreparationAny8816 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1r5z2wx | false | null | t3_1r5z2wx | /r/LocalLLaMA/comments/1r5z2wx/glm5_is_officially_on_nvidia_nim_and_you_can_now/ | false | false | default | 0 | null |
Qw en3.5 waiting thread 2 | 0 | Another waiting room. | 2026-02-16T03:20:15 | https://www.reddit.com/r/LocalLLaMA/comments/1r5yixt/qw_en35_waiting_thread_2/ | HawkLopsided6107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5yixt | false | null | t3_1r5yixt | /r/LocalLLaMA/comments/1r5yixt/qw_en35_waiting_thread_2/ | false | false | self | 0 | null |
Is there a local version of Spotify Honk? | 0 | Would like to be able to do all the things their engineers can do before entering the office. Mostly just the remote instructions/monitoring. | 2026-02-16T03:18:03 | https://techcrunch.com/2026/02/12/spotify-says-its-best-developers-havent-written-a-line-of-code-since-december-thanks-to-ai/ | cantgetthistowork | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1r5yhbc | false | null | t3_1r5yhbc | /r/LocalLLaMA/comments/1r5yhbc/is_there_a_local_version_of_spotify_honk/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo.jpeg?width=108&crop=smart&auto=webp&s=d8be03aa784d0fc61ac73e29ba92b2d8cc32cb17', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo.jpeg?width=216&crop=smart&auto=webp&s=11e92e2a08e5905645c2898f7850c87a2563fd63', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo.jpeg?width=320&crop=smart&auto=webp&s=07ada55eaec443671f72730629e89cfe25681cb3', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo.jpeg?width=640&crop=smart&auto=webp&s=8e0dc16237e1814f8ccdb26043a57e3bc93ab956', 'width': 640}, {'height': 546, 'url': 'https://external-preview.redd.it/vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo.jpeg?width=960&crop=smart&auto=webp&s=5d780571f6afc9519b255ee05887544bb280775d', 'width': 960}, {'height': 614, 'url': 'https://external-preview.redd.it/vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo.jpeg?width=1080&crop=smart&auto=webp&s=84570b66f169548135b6d31644855163d6543503', 'width': 1080}], 'source': {'height': 683, 'url': 'https://external-preview.redd.it/vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo.jpeg?auto=webp&s=3b2edef69e779b8341e0b0dfb56f546877ba4daa', 'width': 1200}, 'variants': {}}]} |
Qw en3.5 local deployment hopes | 0 | Anyone planning to run Qw en3.5 locally? | 2026-02-16T03:07:33 | https://www.reddit.com/r/LocalLLaMA/comments/1r5y9fy/qw_en35_local_deployment_hopes/ | Hot_Supermarket9039 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5y9fy | false | null | t3_1r5y9fy | /r/LocalLLaMA/comments/1r5y9fy/qw_en35_local_deployment_hopes/ | false | false | self | 0 | null |
Rumors when MiniMax will have its M2.5 model available to $10/month Starter users? | 0 | Has anyone heard when it'll be available? | 2026-02-16T02:56:43 | https://www.reddit.com/r/LocalLLaMA/comments/1r5y147/rumors_when_minimax_will_have_its_m25_model/ | EuivIsMyLife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5y147 | false | null | t3_1r5y147 | /r/LocalLLaMA/comments/1r5y147/rumors_when_minimax_will_have_its_m25_model/ | false | false | self | 0 | null |
Qwen3.5 RAG potential? | 0 | Anyone planning RAG? | 2026-02-16T02:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/1r5xzj0/qwen35_rag_potential/ | Original_Night7733 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5xzj0 | false | null | t3_1r5xzj0 | /r/LocalLLaMA/comments/1r5xzj0/qwen35_rag_potential/ | false | false | self | 0 | null |
Qwen3.5 quantization hopes? | 0 | Anyone planning 4bit? | 2026-02-16T02:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1r5xuf9/qwen35_quantization_hopes/ | BeneficialSyllabub71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5xuf9 | false | null | t3_1r5xuf9 | /r/LocalLLaMA/comments/1r5xuf9/qwen35_quantization_hopes/ | false | false | self | 0 | null |
Worked 2 weeks by pushing OpenClaw on my 2L Mini PC, From 70B to 108B Models with Ollama, LM Studio, and HeyGen Integration,share for eveyboday and wanna to discuss | 1 | [removed] | 2026-02-16T02:26:39 | https://www.reddit.com/r/LocalLLaMA/comments/1r5xeeu/worked_2_weeks_by_pushing_openclaw_on_my_2l_mini/ | Pleasant_Designer_14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5xeeu | false | null | t3_1r5xeeu | /r/LocalLLaMA/comments/1r5xeeu/worked_2_weeks_by_pushing_openclaw_on_my_2l_mini/ | false | false | self | 1 | null |
Qwen3-Next-Coder uses `n for new line? | 5 | I tried Qwen3-Next-Coder-80b_q4_K_M, and it seems very promising. Except, I encountered a problem where it produces ``n` instead of `\n` for newlines with long context like 32k.
It works fine with shorter context like 8192 though.
Has anyone experienced this?
Thanks! | 2026-02-16T02:14:12 | https://www.reddit.com/r/LocalLLaMA/comments/1r5x4vo/qwen3nextcoder_uses_n_for_new_line/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5x4vo | false | null | t3_1r5x4vo | /r/LocalLLaMA/comments/1r5x4vo/qwen3nextcoder_uses_n_for_new_line/ | false | false | self | 5 | null |
I built Talk2Code — text your codebase from your phone via Telegram (~150 lines of Python, open source) | 1 | [removed] | 2026-02-16T02:11:46 | https://v.redd.it/hx03v6lfmrjg1 | BodeMan5280 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1r5x2zm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hx03v6lfmrjg1/DASHPlaylist.mpd?a=1773799923%2CMmVkZTMxYjFlYzNjZGJlZTBhOTdjNjA3OGVhZWE0NDkxODI5NTgxYzFjYzlkYTI4MjAwMDVhYzQ1Y2Y3Yjc4Yg%3D%3D&v=1&f=sd', 'duration': 141, 'fallback_url': 'https://v.redd.it/hx03v6lfmrjg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/hx03v6lfmrjg1/HLSPlaylist.m3u8?a=1773799923%2CZjUyZGJkZTA1MzA2M2VlYTU0YTBhMTI0ZjVkNTI4N2ZmNjI5ZDBiYTQ0NTgyMDhmNzQwMDMyZGNhZDM2OWI1MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hx03v6lfmrjg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 886}} | t3_1r5x2zm | /r/LocalLLaMA/comments/1r5x2zm/i_built_talk2code_text_your_codebase_from_your/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bGVsYnQ2bWZtcmpnMUZ81RDSJR2PUJqN2uMzTOMS6Ep5ZyIK5_fgYnYybGJ9', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/bGVsYnQ2bWZtcmpnMUZ81RDSJR2PUJqN2uMzTOMS6Ep5ZyIK5_fgYnYybGJ9.png?width=108&crop=smart&format=pjpg&auto=webp&s=bcecfa1d890e5c0199b9a200e3f01a5a768ed1ac', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/bGVsYnQ2bWZtcmpnMUZ81RDSJR2PUJqN2uMzTOMS6Ep5ZyIK5_fgYnYybGJ9.png?width=216&crop=smart&format=pjpg&auto=webp&s=d3ca9267b59dac13514780b42bbb9ce6691fdf2d', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/bGVsYnQ2bWZtcmpnMUZ81RDSJR2PUJqN2uMzTOMS6Ep5ZyIK5_fgYnYybGJ9.png?width=320&crop=smart&format=pjpg&auto=webp&s=6479077a0d6fdd5474e20f3dedf90003edadff38', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/bGVsYnQ2bWZtcmpnMUZ81RDSJR2PUJqN2uMzTOMS6Ep5ZyIK5_fgYnYybGJ9.png?width=640&crop=smart&format=pjpg&auto=webp&s=ea772aef1d821113017837a481105f10f95a4b68', 'width': 640}], 'source': {'height': 1697, 'url': 'https://external-preview.redd.it/bGVsYnQ2bWZtcmpnMUZ81RDSJR2PUJqN2uMzTOMS6Ep5ZyIK5_fgYnYybGJ9.png?format=pjpg&auto=webp&s=b942822bf27b0eefb813c2cc3398d5d8252e6988', 'width': 782}, 'variants': {}}]} | |
[ Removed by moderator ] | 1 | [removed] | 2026-02-16T02:09:11 | https://www.reddit.com/r/LocalLLaMA/comments/1r5x10t/suscolumn_in_arena_is_this_qwen_35/ | Ash_Skiller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5x10t | false | null | t3_1r5x10t | /r/LocalLLaMA/comments/1r5x10t/suscolumn_in_arena_is_this_qwen_35/ | false | false | null | 1 | null |
I'm an Android dev who knows nothing about x86. During my vacation I built a system that genetically evolves machine code — now I can run 80B models on a single RTX 4090. | 1 | I'm a mobile Android developer. Not a systems programmer, not a compiler engineer, not a low-level guy. This past week I was on vacation from work. My family traveled to another city for a few days, and my inner teenage nerd came out.
**The mess that started everything**
I'd been hearing about OpenClaw and wanted to build something with AI (Claude Opus 4.6 via Kiro IDE). I ended up with a project called AbeBot that had 23 different features — a Telegram bot with real-time crypto prices, a multi-LLM server with hot-swapping between conversation and technical models, agents that generate Rust compilers, a custom language that compiles to machine code... We finished exactly none of them. Classic scope creep.
But two things actually worked: the LLM server (solid, with MoE model loading), and that little toy language that emits x86 machine code directly from Python. That second one turned out to be the seed of everything.
**The idea I couldn't let go of**
I've always been fascinated by the idea of a "language for AIs" — not a programming language for humans, but direct communication between AI and CPU. No Python, no C, no GCC, no LLVM. Just bytes that the machine executes.
My thesis: today, running a local LLM goes through layers of abstraction (Python → PyTorch → CUDA/C++). Each layer wastes resources. Projects like llama.cpp and vLLM improved things by rewriting parts in C++ by hand — humans trying 10-20 variants and picking the best one.
What if instead of a human trying 20 variants, an AI tries 16,000?
**Building it step by step**
We killed AbeBot's 23 features and focused on one thing. We called it Genesis. I needed to see results at every step or I'd lose motivation, so it was deliberately incremental:
First a "hello world" in machine code — write bytes, CPU executes them, a number comes out. Then a naive matrix multiplication in x86 — slow (3 GFLOPS), but correct and matching NumPy. Then the AVX-512 version with multi-accumulator — 16 floats in parallel, 96 GFLOPS peak, we beat NumPy+OpenBLAS at 512×512.
Then came the evolutionary mutator. The idea was for the machine to design the kernel, not just pick numbers. Take the x86 code, mutate it (swap instructions, insert NOPs, reorder, replace), benchmark, keep the fastest. First we mutated generator parameters and got up to 36% improvement. But that was just an autotuner — the human was still designing the kernel, the machine was just turning knobs. So we made the real leap: mutating the instructions themselves. Not "try tile\_k=48", but "try putting VPERMPS before VMULPS" or "insert a NOP that aligns the loop to 32 bytes."
Then we targeted NF4 — fusing dequantization with the dot product in a single AVX-512 kernel. A 478-byte kernel that does 16 table lookups in parallel with a single instruction (VPERMPS), without materializing the weight matrix in memory. 306x faster than NumPy on 4096×4096 matmul.
And finally a small brain (decision tree, no external dependencies) that learns which mutations tend to work, trained on its own results. It self-improves: each run generates new training data.
**The wall that came before Genesis**
This part actually happened while building AbeBot, before Genesis existed. There was a lot of buzz around OpenClaw and how it burned through dollars on OpenAI/Anthropic API calls to do very little — we wanted to build something similar but with local models. For that I needed to run a 30B model on my RTX 4090 (24GB VRAM). It didn't fit — barely, by a couple of GB. First we tried CPU offload with bitsandbytes. It died. Not even a 300-second timeout was enough — the dequantization takes \~25ms per MoE expert, and with hundreds of experts per token, that's minutes per token. Completely unusable.
So the AI (Claude) found another way: a custom MoE loader with real-time NF4 dequantization that packs the model into VRAM with room to spare. That got the 30B running at 6.6 tok/s, fully on GPU. Problem solved — but the experience of watching bitsandbytes CPU die stuck with me.
**Then we went bigger**
With Genesis already working (the AVX-512 kernels, the evolutionary system, the NF4 fused kernel), we found Qwen3-Next-80B — an MoE model that's impossible to fit on a single 4090 no matter what. This was the real test of the thesis. The model needs \~40GB in NF4, so half the layers have to live in system RAM.
Genesis made it possible. The kernel fuses NF4 dequantization with matrix multiplication in a single AVX-512 pass — no intermediate matrix, everything stays in ZMM registers. **0.15ms per expert** vs 24.8ms for bitsandbytes CPU. **165x faster.**
And the key trick for hybrid inference: instead of dequantizing the full weight matrix (\~12MB per expert) and copying it to GPU over PCIe, Genesis does the entire matmul on CPU and copies only the result vector (\~12KB). About 1000x less data transfer.
**Real inference results**
|Model|VRAM|Speed|RAM layers|
|:-|:-|:-|:-|
|Qwen3-Coder-30B-A3B|13.4 GB|5.7 tok/s|8 of 48|
|Qwen3-Next-80B-A3B|20.7 GB|2.7–3.3 tok/s|24 of 48|
The 30B runs at 86% of full-GPU speed using 56% of the VRAM. The 80B is **impossible** on a single 4090 without CPU offload — with Genesis, it runs at conversational speed.
**The thesis, proven**
The evolutionary system evaluated 16,460 mutations across 25 runs with 8 mutation types. The brain learned which mutations work and guided the search. The best evolved kernels beat the hand-tuned baseline by up to **19.25%**.
What evolution discovered exploits real Zen 4 microarchitectural properties that no human would try:
* Inserting NOPs at specific positions to align instructions to cache line boundaries
* Moving a scale broadcast 9 positions earlier to hide memory latency
* Loading activations in reverse distance order (the hardware prefetcher handles it better)
* Replacing a multiply with a NOP and reordering surrounding instructions to reduce port contention
These look like bugs. They're optimizations. The evolutionary system doesn't care what looks right — it only cares what's fast. In environments this complex, artificial evolution beats human intuition. That was the thesis, and it was proven.
**The honest part**
I'm an Android developer. I didn't write a single line of x86 assembly — I had the idea and the thesis, and AI (Claude Opus 4.6 via Kiro IDE) wrote the implementation. I directed the architecture, made the decisions, debugged the problems. The evolutionary optimizations came from the system itself — neither I nor the AI designed those instruction orderings.
I think that's the interesting part: you don't need to be a low-level expert to build low-level tools anymore. You need to know what problem to solve and be stubborn enough to not accept "it can't be done."
**What I'm sharing**
The kernel code is open source (Apache 2.0): [github.com/Anuar81/genesis-kernel](https://github.com/Anuar81/genesis-kernel)
It includes the x86 emitter, the fused NF4 dequant+matmul kernel with 4 evolved variants baked in, quantization utilities, example scripts for benchmarking and hybrid MoE inference, and a full test suite (8/8 passing, verified independently by four different AIs with zero context).
What I'm NOT sharing (for now): the evolutionary factory — the mutation engine, the fitness evaluator, the learned mutation selector. The kernels in the repo are the output of that process. If someone really needs the evolution data (16,460 mutation records), reach out and I can share the JSON or invite you to the private repo.
**What's next**
Right now Genesis only optimizes CPU kernels (x86/AVX-512). But the same evolutionary approach can target GPU code — NVIDIA PTX, the "assembly language" of CUDA. If the mutation engine can find the same kind of microarchitectural tricks in PTX that it found in x86... well, that's the next experiment. No promises, but the infrastructure is there.
Now I'm off to travel with my family and finish enjoying my vacation. I learned a ton this week. Sharing this for whoever finds it useful.
**Hardware:** AMD Ryzen 9 7900 (Zen 4, AVX-512) · RTX 4090 24GB · 32GB DDR5 · EndeavourOS
**TL;DR:** Android dev on vacation + AI coding partner + a thesis about machine-generated code beating human code = x86 AVX-512 kernels 165x faster than bitsandbytes CPU, enabling 80B model inference on a single RTX 4090. Kernels optimized by genetic evolution (16K mutations, up to 19.25% improvement). Open source: github.com/Anuar81/genesis-kernel
\--- in spanish ---
Soy desarrollador mobile Android. No soy programador de sistemas, ni ingeniero de compiladores, ni un tipo de bajo nivel. Esta última semana estuve de vacaciones del trabajo. Mi familia viajó a otra ciudad por unos días, y salió mi nerd interior de la adolescencia.
**El desastre que empezó todo**
Venía escuchando sobre OpenClaw y quise construir algo con IA (Claude Opus 4.6 vía Kiro IDE). Terminé con un proyecto llamado AbeBot que tenía 23 features distintos — un bot de Telegram con precios de cripto en tiempo real, un servidor multi-LLM con hot-swapping entre modelos de conversación y técnicos, agentes que generan compiladores en Rust, un lenguaje custom que compila a código máquina... No terminamos exactamente ninguno. El clásico scope creep.
Pero dos cosas sí funcionaron: el servidor LLM (sólido, con carga de modelos MoE), y ese lenguajito de juguete que emite código máquina x86 directamente desde Python. Ese segundo resultó ser la semilla de todo.
**La idea que no podía soltar**
Siempre me fascinó la idea de un "lenguaje para IAs" — no un lenguaje de programación para humanos, sino comunicación directa entre IA y CPU. Sin Python, sin C, sin GCC, sin LLVM. Solo bytes que la máquina ejecuta.
Mi tesis: hoy, correr un LLM local pasa por capas de abstracción (Python → PyTorch → CUDA/C++). Cada capa desperdicia recursos. Proyectos como llama.cpp y vLLM mejoraron las cosas reescribiendo partes en C++ a mano — humanos probando 10-20 variantes y eligiendo la mejor.
¿Qué pasa si en vez de un humano probando 20 variantes, una IA prueba 16,000?
**Construyéndolo paso a paso**
Matamos los 23 features de AbeBot y nos enfocamos en una sola cosa. Lo llamamos Genesis. Yo necesitaba ver resultados en cada paso o perdía la motivación, así que fue deliberadamente incremental:
Primero un "hola mundo" en código máquina — escribir bytes, la CPU los ejecuta, sale un número. Después una multiplicación de matrices naive en x86 — lenta (3 GFLOPS), pero funciona y da igual que NumPy. Después la versión AVX-512 con multi-acumulador — 16 floats en paralelo, 96 GFLOPS de pico, le ganamos a NumPy+OpenBLAS en 512×512.
Después vino el mutador evolutivo. La idea era que la máquina diseñe el kernel, no solo elija números. Tomar el código x86, mutarlo (intercambiar instrucciones, insertar NOPs, reordenar, reemplazar), benchmarkear, quedarse con el más rápido. Primero mutamos parámetros del generador y sacamos mejoras de hasta 36%. Pero eso era un autotuner — el humano seguía diseñando el kernel, la máquina solo elegía perillas. Así que dimos el salto real: mutar las instrucciones mismas. No "probá tile\_k=48", sino "probá poner VPERMPS antes de VMULPS" o "meté un NOP que alinee el loop a 32 bytes".
Después apuntamos a NF4 — fusionar dequantización con el producto punto en un solo kernel AVX-512. Un kernel de 478 bytes que hace 16 table lookups en paralelo con una sola instrucción (VPERMPS), sin materializar la matriz de pesos en memoria. 306 veces más rápido que NumPy en matmul 4096×4096.
Y por último un cerebro chico (árbol de decisión, sin dependencias externas) que aprende qué mutaciones tienden a funcionar, entrenado con sus propios resultados. Se auto-mejora: cada corrida genera datos de entrenamiento nuevos.
**La pared que vino antes de Genesis**
Esta parte pasó mientras construíamos AbeBot, antes de que Genesis existiera. Había mucho ruido con OpenClaw y cómo te consumía dólares en APIs de OpenAI/Anthropic para hacer poco — nosotros quisimos hacer algo parecido pero con modelos locales. Para eso necesitaba correr un modelo de 30B en mi RTX 4090 (24GB VRAM). No entraba — por poco, un par de GB. Primero probamos offload a CPU con bitsandbytes. Se murió. Ni con un timeout de 300 segundos aguantaba — la dequantización tarda \~25ms por experto MoE, y con cientos de expertos por token, eso son minutos por token. Completamente inutilizable.
Así que la IA (Claude) encontró otra forma: un loader MoE custom con dequantización NF4 en tiempo real que empaqueta el modelo en la VRAM y sobra espacio. Eso puso el 30B andando a 6.6 tok/s, todo en GPU. Problema resuelto — pero la experiencia de ver a bitsandbytes CPU morir me quedó grabada.
**Después fuimos por más**
Con Genesis ya funcionando (los kernels AVX-512, el sistema evolutivo, el kernel NF4 fusionado), encontramos Qwen3-Next-80B — un modelo MoE que es imposible de meter en una sola 4090 hagas lo que hagas. Esta era la prueba real de la tesis. El modelo necesita \~40GB en NF4, así que la mitad de las capas tienen que vivir en RAM del sistema.
Genesis lo hizo posible. El kernel fusiona dequantización NF4 con multiplicación de matrices en una sola pasada AVX-512 — sin matriz intermedia, todo se queda en registros ZMM. **0.15ms por experto** vs 24.8ms de bitsandbytes CPU. **165 veces más rápido.**
Y el truco clave para inferencia híbrida: en vez de dequantizar la matriz completa de pesos (\~12MB por experto) y copiarla a GPU por PCIe, Genesis hace todo el matmul en CPU y copia solo el vector resultado (\~12KB). Aproximadamente 1000 veces menos transferencia de datos.
**Resultados de inferencia real**
|Modelo|VRAM|Velocidad|Capas en RAM|
|:-|:-|:-|:-|
|Qwen3-Coder-30B-A3B|13.4 GB|5.7 tok/s|8 de 48|
|Qwen3-Next-80B-A3B|20.7 GB|2.7–3.3 tok/s|24 de 48|
El 30B corre al 86% de la velocidad de todo-en-GPU usando el 56% de la VRAM. El 80B es **imposible** en una sola 4090 sin offload a CPU — con Genesis, corre a velocidad conversacional.
**La tesis, demostrada**
El sistema evolutivo evaluó 16,460 mutaciones en 25 corridas con 8 tipos de mutación. El cerebro aprendió qué mutaciones funcionan y guió la búsqueda. Los mejores kernels evolucionados superaron al baseline optimizado a mano por hasta un **19.25%**.
Lo que la evolución descubrió explota propiedades microarquitecturales reales de Zen 4 que ningún humano probaría:
* Insertar NOPs en posiciones específicas para alinear instrucciones a líneas de cache
* Mover un broadcast de escala 9 posiciones antes para ocultar la latencia de memoria
* Cargar activaciones en orden inverso de distancia (el prefetcher del hardware lo maneja mejor)
* Reemplazar un multiply por un NOP y reordenar las instrucciones alrededor para reducir contención de puertos
Parecen bugs. Son optimizaciones. Al sistema evolutivo no le importa qué parece correcto — solo le importa qué es rápido. En entornos de esta complejidad, la evolución artificial le gana a la intuición humana. Esa era la tesis, y quedó demostrada.
**La parte honesta**
Soy desarrollador Android. No escribí una sola línea de ensamblador x86 — yo tuve la idea y la tesis, y la IA (Claude Opus 4.6 vía Kiro IDE) escribió la implementación. Yo dirigí la arquitectura, tomé las decisiones, debuggeé los problemas. Las optimizaciones evolutivas vinieron del sistema mismo — ni yo ni la IA diseñamos esos ordenamientos de instrucciones.
Creo que esa es la parte interesante: ya no necesitás ser un experto en bajo nivel para construir herramientas de bajo nivel. Necesitás saber qué problema resolver y ser lo suficientemente terco para no aceptar "no se puede".
**Lo que comparto**
El código del kernel es open source (Apache 2.0): [github.com/Anuar81/genesis-kernel](https://github.com/Anuar81/genesis-kernel)
Incluye el emisor x86, el kernel fusionado NF4 dequant+matmul con 4 variantes evolucionadas horneadas por dimensión, utilidades de cuantización, scripts de ejemplo para benchmark e inferencia híbrida MoE, y una suite completa de tests (8/8 pasando, verificado independientemente por cuatro IAs distintas sin contexto previo).
Lo que NO comparto (por ahora): la fábrica evolutiva — el motor de mutaciones, el evaluador de fitness, el selector de mutaciones aprendido. Los kernels en el repo son la salida de ese proceso. Si alguien realmente necesita los datos de evolución (16,460 registros de mutaciones), escríbanme y puedo compartir el JSON o invitarlos al repo privado.
**Qué viene después**
Por ahora Genesis solo optimiza kernels de CPU (x86/AVX-512). Pero el mismo enfoque evolutivo puede apuntar a código de GPU — NVIDIA PTX, el "lenguaje ensamblador" de CUDA. Si el motor de mutaciones puede encontrar el mismo tipo de trucos microarquitecturales en PTX que encontró en x86... bueno, ese es el próximo experimento. No prometo nada, pero la infraestructura está.
Ahora me voy a viajar con mi familia y terminar de disfrutar mis vacaciones. Aprendí un montón esta semana. Comparto esto para quien le sirva.
**Hardware:** AMD Ryzen 9 7900 (Zen 4, AVX-512) · RTX 4090 24GB · 32GB DDR5 · EndeavourOS
**TL;DR:** Dev Android de vacaciones + IA como compañero de código + una tesis sobre código generado por máquina superando al código humano = kernels x86 AVX-512 165 veces más rápidos que bitsandbytes CPU, permitiendo inferencia de modelos de 80B en una sola RTX 4090. Kernels optimizados por evolución genética (16K mutaciones, hasta 19.25% de mejora). Open source: github.com/Anuar81/genesis-kernel | 2026-02-16T01:57:35 | https://www.reddit.com/r/LocalLLaMA/comments/1r5wryo/im_an_android_dev_who_knows_nothing_about_x86/ | Ill-Pop2106 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5wryo | false | null | t3_1r5wryo | /r/LocalLLaMA/comments/1r5wryo/im_an_android_dev_who_knows_nothing_about_x86/ | false | false | self | 1 | null |
socOCRbench: An OCR benchmark for social science documents | 3 | You might've noticed quite a few OCR model releases in the past few months, and you might find it increasingly difficult to discriminate between them as each respectively claims state-of-the-art (and near-perfect scores...) on benchmarks like OmniDocBench. To redress these various issues, I've made socOCRbench, a private benchmark representing more difficult real-world use-cases. Let me know if there are any models you'd like to see added that are not currently represented! | 2026-02-16T01:51:17 | https://noahdasanaike.github.io/posts/sococrbench.html | noahdasanaike | noahdasanaike.github.io | 1970-01-01T00:00:00 | 0 | {} | 1r5wn6l | false | null | t3_1r5wn6l | /r/LocalLLaMA/comments/1r5wn6l/sococrbench_an_ocr_benchmark_for_social_science/ | false | false | default | 3 | null |
Qwen 3.5 PR is live on Transformers? | 0 | Spotted this PR on GitHub. If the 400B rumors are true, we might need more VRAM soon. | 2026-02-16T01:48:34 | https://www.reddit.com/r/LocalLLaMA/comments/1r5wl8f/qwen_35_pr_is_live_on_transformers/ | StandardFuel6789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5wl8f | false | null | t3_1r5wl8f | /r/LocalLLaMA/comments/1r5wl8f/qwen_35_pr_is_live_on_transformers/ | false | false | self | 0 | null |
local llm + ai video pipeline? i keep seeing ppl duct tape 6 tools together | 1 | im using a local llm for scripts/outlines then bouncing through image gen + some motion + tts + ffmpeg to assemble. it works but the workflow glue is the real pain, not the models
im thinking of open sourcing the orchestration layer as a free tool so ppl can run it locally and not live in 10 browser tabs + a video editor
im calling it OpenSlop AI. would you use something like that or do you think its doomed bc everyones stack is diff? | 2026-02-16T01:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/1r5wfyx/local_llm_ai_video_pipeline_i_keep_seeing_ppl/ | Upper-Mountain-3397 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5wfyx | false | null | t3_1r5wfyx | /r/LocalLLaMA/comments/1r5wfyx/local_llm_ai_video_pipeline_i_keep_seeing_ppl/ | false | false | self | 1 | null |
I got OpenClaw memory search from 82 seconds to 30ms — Check it out. | 1 | [removed] | 2026-02-16T01:28:55 | https://www.reddit.com/r/LocalLLaMA/comments/1r5w65j/i_got_openclaw_memory_search_from_82_seconds_to/ | TigerAIElectrical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5w65j | false | null | t3_1r5w65j | /r/LocalLLaMA/comments/1r5w65j/i_got_openclaw_memory_search_from_82_seconds_to/ | false | false | self | 1 | null |
Resources for tracking new model releases? | 1 | I’m looking for something that provides a birds-eye-view of the release landscape. Something like a calendar or timeline that shows when models were released would be prefect. A similar resource for research papers and tools would be incredibly helpful as well.
If you know where I can find something like this, please share! If not, what do you do to keep up? | 2026-02-16T01:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/1r5w4rw/resources_for_tracking_new_model_releases/ | skinnyjoints | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5w4rw | false | null | t3_1r5w4rw | /r/LocalLLaMA/comments/1r5w4rw/resources_for_tracking_new_model_releases/ | false | false | self | 1 | null |
With batching + high utilization (a la a cloud environment), what is the power consumption of something like GLM-5? | 1 | I'm assuming that power consumption numbers on fp8 per million tokens for something like GLM-5 compares favorably to running a smaller model locally at concurrency 1 due to batching, as long as utilization is high enough to bill batches. I realize this isn't a particularly local-favorable statement, but I also figured that some of y'all do batched workloads locally so would have an idea of what the bounds are here. Thinking in terms of Wh per Mtok for just the compute (and assuming cooling etc. is on top of that).
Or maybe I'm wrong and Apple or Strix Halo hardware is efficient enough that cost per token per billion active parameters at the same precision is actually lower on those platforms vs. GPUs. But I'm assuming that cloud providers can run a batch size of 32 or so at fp8, which means that if you can keep the machines busy (which based on capacity constraints the answer is "yes they can") you're looking at each \~40tok/s stream effectively using 1/4 of a GPU in an 8-GPU rig. At 700W per H100, you get 175 Wh per 144k tokens, or 1.21 kWh per Mtok. This ignores prefill, other contributors to system power, and cooling but on the other hand Blackwell chips are a bit more performant per watt so maybe I'm in the right ballpark?
Compare that to, say, 50 tok/s on a 3B active model locally consuming 60W (say, an M-something Max) and while power consumption is lower we're talking about a comparatively tiny model, and if you scaled that up you'd wind up with comparable energy usage per million tokens to run MiniMax M2.5 at 210B/10B active versus something with 3.5x the total parameters and 4x the active parameters (and then of course compensate for one model or the other taking more tokens to do the same thing).
Anyone got better numbers than the spitballing I did above? | 2026-02-16T01:19:50 | https://www.reddit.com/r/LocalLLaMA/comments/1r5vz8f/with_batching_high_utilization_a_la_a_cloud/ | iansltx_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1r5vz8f | false | null | t3_1r5vz8f | /r/LocalLLaMA/comments/1r5vz8f/with_batching_high_utilization_a_la_a_cloud/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.