Dataset columns, in row order:

title: string (length 1-300)
score: int64 (0-8.54k)
selftext: string (length 0-41.5k)
created: timestamp[ns] (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
url: string (length 0-878)
author: string (length 3-20)
domain: string (length 0-82)
edited: timestamp[ns] (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
gilded: int64 (0-2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646-1.8k)
name: string (length 10)
permalink: string (length 33-82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4-213)
ups: int64 (0-8.54k)
preview: string (length 301-5.01k)
RTX Pro 5000 48GB vs DGX Spark for LLM + RAG lab setup (enterprise data)
1
Hi all, I’m setting up a small lab environment to experiment with LLMs + RAG using internal enterprise data (documentation, processes, knowledge base, etc.). The goal is to build something like an internal “chat with company knowledge” system. This is not for production yet — it’s mainly for testing architectures, em...
2026-02-16T11:46:09
https://www.reddit.com/r/LocalLLaMA/comments/1r67ick/rtx_pro_5000_48gb_vs_dgx_spark_for_llm_rag_lab/
Educational-Shoe8806
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r67ick
false
null
t3_1r67ick
/r/LocalLLaMA/comments/1r67ick/rtx_pro_5000_48gb_vs_dgx_spark_for_llm_rag_lab/
false
false
self
1
null
Which model can provide me with an answer? Local model only
2
Need help to determine which local llama will provide a answer I approve :) Prompt: You're looking for an anime movie with the following characteristics: Setting: Plot & Themes: Unique Visual/Mystical Element: Not space-themed Features a Romeo and Juliet-like tragic love story. The spirits of fallen people appe...
2026-02-16T11:39:19
https://www.reddit.com/r/LocalLLaMA/comments/1r67dvh/which_model_can_provide_me_with_an_answer_local/
Gold_Sugar_4098
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r67dvh
false
null
t3_1r67dvh
/r/LocalLLaMA/comments/1r67dvh/which_model_can_provide_me_with_an_answer_local/
false
false
self
2
null
Support and guidance in building an independent learning project.
1
I’m a product manager, and I’d like to “get my hands dirty” a bit to gain a deeper understanding of LLMs and AI. I was thinking of building a side project — maybe a trivia and riddle quiz for my kids. Something that could run daily, weekly, or monthly, with a scoring leaderboard. I’d like to incorporate both AI and L...
2026-02-16T11:35:06
https://www.reddit.com/r/LocalLLaMA/comments/1r67b6n/support_and_guidance_in_building_an_independent/
Financial-Sand-6999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r67b6n
false
null
t3_1r67b6n
/r/LocalLLaMA/comments/1r67b6n/support_and_guidance_in_building_an_independent/
false
false
self
1
null
Forked OpenClaw to run fully air-gapped (no cloud deps)
33
I've been playing with OpenClaw, but I couldn't actually use it for anything work-related because of the data egress. The agentic stuff is cool, but sending everything to OpenAI/cloud APIs is a non-starter for my setup. So I spent the weekend ripping out the cloud dependencies to make a fork that runs strictly on-prem...
2026-02-16T11:35:00
https://www.reddit.com/r/LocalLLaMA/comments/1r67b43/forked_openclaw_to_run_fully_airgapped_no_cloud/
zsb5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r67b43
false
null
t3_1r67b43
/r/LocalLLaMA/comments/1r67b43/forked_openclaw_to_run_fully_airgapped_no_cloud/
false
false
self
33
null
Is GLM Lite Subscription Worth it to have while getting limited?
0
Currently, i saw some comment or post that told me that if the limit of Lite usage is not fair enough as before the GLM5 release, is any of you guys have running the lite version? any thoughts?
2026-02-16T11:25:29
https://www.reddit.com/r/LocalLLaMA/comments/1r674ou/is_glm_lite_subscription_worth_it_to_have_while/
Remote_Fun1742
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r674ou
false
null
t3_1r674ou
/r/LocalLLaMA/comments/1r674ou/is_glm_lite_subscription_worth_it_to_have_while/
false
false
self
0
null
Do I understand --n-keep correctly?
3
Can someone help me understand if I'm using `--n_keep` correctly? My understanding is that it keeps the first N tokes, then cuts the remaining in half and removes the first part. So, a 80k context with n\_keep 40k, after becoming full, would essentially become: \[0-40k\] \[60-80\] \[20k empty\] Is this correct?
2026-02-16T11:00:41
https://www.reddit.com/r/LocalLLaMA/comments/1r66osd/do_i_understand_nkeep_correctly/
nunodonato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r66osd
false
null
t3_1r66osd
/r/LocalLLaMA/comments/1r66osd/do_i_understand_nkeep_correctly/
false
false
self
3
null
Best compromise for small budgets Local llm
1
Hello Guys, I know my question is pretty standard but i always see people arguing on whats the best setup for local GPUs so im a bit lost. My requirements is that the setup should be able to run gpt-oss 120B(its for the ballpark of VRAM) Of course with the fastest toks/s possible. I would like to know if its po...
2026-02-16T10:53:18
https://www.reddit.com/r/LocalLLaMA/comments/1r66k7j/best_compromise_for_small_budgets_local_llm/
Best_Sail5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r66k7j
false
null
t3_1r66k7j
/r/LocalLLaMA/comments/1r66k7j/best_compromise_for_small_budgets_local_llm/
false
false
self
1
null
vLLM MAXIMUM performance on multi-3090
46
TLDR: install patched p2p driver, patch vllm platform and skip p2p check. You'll get +50% performance on 4x3090 with Qwen3 Coder Next FP8. Free performance, free tokens, very nice :) So, YOU (yes, YOU) managed to setup vLLM on your multi gpu platform with consumer cards. It's nice, running fast and doesn't lose a lot ...
2026-02-16T10:52:53
https://www.reddit.com/gallery/1r66jyp
Nepherpitu
reddit.com
1970-01-01T00:00:00
0
{}
1r66jyp
false
null
t3_1r66jyp
/r/LocalLLaMA/comments/1r66jyp/vllm_maximum_performance_on_multi3090/
false
false
https://preview.redd.it/…ae67486cc9fec858
46
null
Help me decide if to buy EGPU for Minisforum S1-max
3
Hello, I need an advice if to buy / not buy an extra GPU for my Minisforum S1-Max. Just to sum it up, this box has AMD AI Max plus 395 CPU, 128 gb RAM, AMD Radeon 8060s integrated GPU. I am running Arch Linux and my use case is LLM inference, currently mainly through llama.cpp. Currently I am running mainly MOE mod...
2026-02-16T10:52:26
https://www.reddit.com/r/LocalLLaMA/comments/1r66joq/help_me_decide_if_to_buy_egpu_for_minisforum_s1max/
krecoun007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r66joq
false
null
t3_1r66joq
/r/LocalLLaMA/comments/1r66joq/help_me_decide_if_to_buy_egpu_for_minisforum_s1max/
false
false
self
3
null
😭
0
2026-02-16T10:49:58
https://i.redd.it/9w78f46q6ujg1.png
muxxington
i.redd.it
1970-01-01T00:00:00
0
{}
1r66i73
false
null
t3_1r66i73
/r/LocalLLaMA/comments/1r66i73/_/
false
false
default
0
{'enabled': True, 'images': [{'id': '9w78f46q6ujg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?width=108&crop=smart&auto=webp&s=9e0219c78505b4bac160e0ecf443a2243cf74d07', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/9w78f46q6ujg1.png?width=216&crop=smart&auto=web...
Building a private AI Task Manager (runs Gemma 2B on-device). No data leaves your phone. Is $5 fair for lifetime access?
2
Hey everyone, I’m a developer frustrated by every productivity app turning into a monthly subscription service. I’m building an app called Pagio, and I want to validate my pricing model before I finish the code. The Pitch: Most AI apps send your data to OpenAI/Claude, which costs them money, so they charge you $...
2026-02-16T10:49:44
https://www.reddit.com/r/LocalLLaMA/comments/1r66i1j/building_a_private_ai_task_manager_runs_gemma_2b/
HelpfulNight1955
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r66i1j
false
null
t3_1r66i1j
/r/LocalLLaMA/comments/1r66i1j/building_a_private_ai_task_manager_runs_gemma_2b/
false
false
self
2
null
What model is used to create such videos?
1
[removed]
2026-02-16T10:46:51
https://www.reddit.com/r/LocalLLaMA/comments/1r66g76/what_model_is_used_to_create_such_videos/
Odd_Branch_2125
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r66g76
false
null
t3_1r66g76
/r/LocalLLaMA/comments/1r66g76/what_model_is_used_to_create_such_videos/
false
false
self
1
{'enabled': False, 'images': [{'id': 'HtkfAH1x2HpzfbHOEBh0J_B9ZagH9iMNGTljJ1RwqXM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/HtkfAH1x2HpzfbHOEBh0J_B9ZagH9iMNGTljJ1RwqXM.jpeg?width=108&crop=smart&auto=webp&s=6cf305f4ca8fc6fb038a84ff7c41889c28ec04dd', 'width': 108}, {'height': 216, 'url': ...
What model is used to create such videos on Instagram?
1
[removed]
2026-02-16T10:43:34
https://www.reddit.com/r/LocalLLaMA/comments/1r66e6a/what_model_is_used_to_create_such_videos_on/
Odd_Branch_2125
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r66e6a
false
null
t3_1r66e6a
/r/LocalLLaMA/comments/1r66e6a/what_model_is_used_to_create_such_videos_on/
false
false
self
1
null
ai models for content
1
[removed]
2026-02-16T10:39:40
https://www.reddit.com/r/LocalLLaMA/comments/1r66btz/ai_models_for_content/
Odd_Branch_2125
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r66btz
false
null
t3_1r66btz
/r/LocalLLaMA/comments/1r66btz/ai_models_for_content/
false
false
nsfw
1
null
ai model for such content
1
[removed]
2026-02-16T10:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1r669tq/ai_model_for_such_content/
Odd_Branch_2125
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r669tq
false
null
t3_1r669tq
/r/LocalLLaMA/comments/1r669tq/ai_model_for_such_content/
false
false
nsfw
1
null
Can Seedance 2.0's 12B be run on a 4080 in comfyui?
1
pretty much the title.
2026-02-16T10:28:04
https://www.reddit.com/r/LocalLLaMA/comments/1r664yn/can_seedance_20s_12b_be_run_on_a_4080_in_comfyui/
Nervous_Narwhal4141
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r664yn
false
null
t3_1r664yn
/r/LocalLLaMA/comments/1r664yn/can_seedance_20s_12b_be_run_on_a_4080_in_comfyui/
false
false
self
1
null
How viable are eGPUs and NVMe?
2
Hello. I got myself an Asus ProArt X870E-CREATOR WIFI mobo, and been happily running \~65GB filesize models on 16GB VRAM 96GB RAM (RX 9070 XT + 9950X3D) However, my main M.2 PCIe 5.0 slot (M2\_1) remains unused since I run all my current drives through the chipset (since they're PCIe 4.0 themselves anyway). So I wond...
2026-02-16T10:16:44
https://www.reddit.com/r/LocalLLaMA/comments/1r65y85/how_viable_are_egpus_and_nvme/
ABLPHA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r65y85
false
null
t3_1r65y85
/r/LocalLLaMA/comments/1r65y85/how_viable_are_egpus_and_nvme/
false
false
self
2
null
Qwen 3.5 is Live
12
looks like he finished his tea
2026-02-16T10:13:47
https://i.redd.it/o3wd0lge0ujg1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1r65whe
false
null
t3_1r65whe
/r/LocalLLaMA/comments/1r65whe/qwen_35_is_live/
false
false
https://preview.redd.it/…77b52430a9292497
12
{'enabled': True, 'images': [{'id': 'o3wd0lge0ujg1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/o3wd0lge0ujg1.png?width=108&crop=smart&auto=webp&s=c8907f9c172b4802172fa066f54ffc9ecb3fbfac', 'width': 108}, {'height': 240, 'url': 'https://preview.redd.it/o3wd0lge0ujg1.png?width=216&crop=smart&auto=we...
Token bloat in non-English on local LLMs— what actually helps (models, tokenisers, prompts?
6
I’ve been trying to use local LLMs in languages other than English and the token count sometimes goes absolutely wild (context fills faster, slower generation, worse long-form UX). For folks doing multilingual locally: what’s actually worked for you in practice? A few specific things I’m curious about: Which model f...
2026-02-16T10:07:26
https://www.reddit.com/r/LocalLLaMA/comments/1r65sqp/token_bloat_in_nonenglish_on_local_llms_what/
aizivaishe_rutendo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r65sqp
false
null
t3_1r65sqp
/r/LocalLLaMA/comments/1r65sqp/token_bloat_in_nonenglish_on_local_llms_what/
false
false
self
6
null
Importance of Cpu on Gpu Build
1
Hi, how important is the Cpu in a GPU build? I can get a used system with a 8700k cpu and 16 gigs of DDR4 for cheap. My plan is to get a used 3090 for this. I plan to run simple models, maybe gpt oss 20b or ministral3 14b, along with voice assistant tools, like whisper, parakeet or qwen3tts. Would that system suffice...
2026-02-16T10:03:08
https://www.reddit.com/r/LocalLLaMA/comments/1r65q7g/importance_of_cpu_on_gpu_build/
AllTey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r65q7g
false
null
t3_1r65q7g
/r/LocalLLaMA/comments/1r65q7g/importance_of_cpu_on_gpu_build/
false
false
self
1
null
LOCAL-Llama
1
[removed]
2026-02-16T09:57:27
https://www.reddit.com/r/LocalLLaMA/comments/1r65mr4/localllama/
Valuable-Constant-54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r65mr4
false
null
t3_1r65mr4
/r/LocalLLaMA/comments/1r65mr4/localllama/
false
false
self
1
null
Qwen 3.5 Open Source: Native Multimodal, Ultimate Efficiency!
153
Happy New Year, everyone! Our latest generation native multimodal model, Qwen3.5-397B-A17B, is now officially open source!
2026-02-16T09:55:14
https://i.redd.it/jz35kh22xtjg1.jpeg
Senior-Silver-6130
i.redd.it
1970-01-01T00:00:00
0
{}
1r65lkc
false
null
t3_1r65lkc
/r/LocalLLaMA/comments/1r65lkc/qwen_35_open_source_native_multimodal_ultimate/
false
false
https://preview.redd.it/…fd3a3ab1ab9e7966
153
{'enabled': True, 'images': [{'id': 'jz35kh22xtjg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?width=108&crop=smart&auto=webp&s=bd9b9c1fb87be83b39a340cd0a53f12768059f58', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jz35kh22xtjg1.jpeg?width=216&crop=smart&auto=...
Qwen/Qwen3.5-397B-A17B · Hugging Face
1
2026-02-16T09:50:48
https://huggingface.co/Qwen/Qwen3.5-397B-A17B
ayylmaonade
huggingface.co
1970-01-01T00:00:00
0
{}
1r65j0w
false
null
t3_1r65j0w
/r/LocalLLaMA/comments/1r65j0w/qwenqwen35397ba17b_hugging_face/
false
false
https://external-preview…6ce1ac2f60c2b01d
1
{'enabled': False, 'images': [{'id': 'sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=108&crop=smart&auto=webp&s=7318ec3ce4509fbace98fa419ca07a197bbf6b12', 'width': 108}, {'height': 116, 'url': 'h...
unsloth/Qwen3.5-397B-A17B-GGUF
32
Since people keep posting about it without hugging face link. Here you go: https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF Shoutout to unsloth. They’re quite quick on this
2026-02-16T09:45:48
https://www.reddit.com/r/LocalLLaMA/comments/1r65g56/unslothqwen35397ba17bgguf/
Ok_Brain_2376
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r65g56
true
null
t3_1r65g56
/r/LocalLLaMA/comments/1r65g56/unslothqwen35397ba17bgguf/
false
false
self
32
{'enabled': False, 'images': [{'id': 'tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tyNtHt3WoxBPzPjA1ZcBizJ-B85Ig_7j6FQYXGr2LFo.png?width=108&crop=smart&auto=webp&s=cb97086a3cec0abaf76465736f94d6c30e3bc319', 'width': 108}, {'height': 116, 'url': 'h...
can we please have a megathread for the qwen3.5 release?
1
[removed]
2026-02-16T09:40:07
https://www.reddit.com/r/LocalLLaMA/comments/1r65cpk/can_we_please_have_a_megathread_for_the_qwen35/
disillusioned_okapi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r65cpk
false
null
t3_1r65cpk
/r/LocalLLaMA/comments/1r65cpk/can_we_please_have_a_megathread_for_the_qwen35/
false
false
self
1
null
dears mods, can we please have a megathread for this release?
1
[removed]
2026-02-16T09:36:16
https://i.redd.it/ptebytkrstjg1.png
disillusioned_okapi
i.redd.it
1970-01-01T00:00:00
0
{}
1r65agy
false
null
t3_1r65agy
/r/LocalLLaMA/comments/1r65agy/dears_mods_can_we_please_have_a_megathread_for/
false
false
https://preview.redd.it/…d6738cc0bf538c85
1
{'enabled': True, 'images': [{'id': 'ptebytkrstjg1', 'resolutions': [{'height': 183, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?width=108&crop=smart&auto=webp&s=8059de3efbc411aee75e4678a41ced8e1962913b', 'width': 108}, {'height': 366, 'url': 'https://preview.redd.it/ptebytkrstjg1.png?width=216&crop=smart&auto=we...
Qwen 3.5 is out!!
45
[https://huggingface.co/collections/Qwen/qwen35](https://huggingface.co/collections/Qwen/qwen35)
2026-02-16T09:34:35
https://www.reddit.com/r/LocalLLaMA/comments/1r659i8/qwen_35_is_out/
Wooden-Deer-1276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r659i8
true
null
t3_1r659i8
/r/LocalLLaMA/comments/1r659i8/qwen_35_is_out/
false
false
self
45
{'enabled': False, 'images': [{'id': 'KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KXveQvJuVNdGr-ptWl2PqBDlsiUwJfKyXYWB50ZRxPk.png?width=108&crop=smart&auto=webp&s=8aa639e257fd06e34f938d329cd573bffa772e4e', 'width': 108}, {'height': 116, 'url': 'h...
Qwen3.5-397B-A17B Unsloth GGUFs
455
Qwen releases Qwen3.5💜! Qwen3.5-397B-A17B is an open MoE vision reasoning LLM for agentic coding & chat. It performs on par with Gemini 3 Pro, Claude Opus 4.5, and GPT-5.2. Run 4-bit on 256GB Mac / RAM or less. Guide to run them: [https://unsloth.ai/docs/models/qwen3.5](https://unsloth.ai/docs/models/qwen3.5) Unslo...
2026-02-16T09:34:10
https://i.redd.it/zgfpbga5ttjg1.png
danielhanchen
i.redd.it
1970-01-01T00:00:00
0
{}
1r6599e
false
null
t3_1r6599e
/r/LocalLLaMA/comments/1r6599e/qwen35397ba17b_unsloth_ggufs/
false
false
https://preview.redd.it/…b5da44055ef86dfa
455
{'enabled': True, 'images': [{'id': 'zgfpbga5ttjg1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?width=108&crop=smart&auto=webp&s=deea17efaf7688d2e13a0367944b8d1578e430d5', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/zgfpbga5ttjg1.png?width=216&crop=smart&auto=we...
Small, fast Spam Detection model designed for German text
7
[https://huggingface.co/tanaos/tanaos-spam-detection-german](https://huggingface.co/tanaos/tanaos-spam-detection-german) A small and fast Spam Detection model, trained on German text to detect the following types of spam content: 1. Unsolicited commercial advertisement or non-commercial proselytizing. 2. Fraudulent s...
2026-02-16T09:31:53
https://www.reddit.com/r/LocalLLaMA/comments/1r657yx/small_fast_spam_detection_model_designed_for/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r657yx
false
null
t3_1r657yx
/r/LocalLLaMA/comments/1r657yx/small_fast_spam_detection_model_designed_for/
false
false
self
7
{'enabled': False, 'images': [{'id': 'GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=108&crop=smart&auto=webp&s=ae907b968aa3906b92688b790df0ba7d23950ad4', 'width': 108}, {'height': 116, 'url': 'h...
Qwen3.5 Release Blog Post
122
Weights: [https://huggingface.co/Qwen/Qwen3.5-397B-A17B](https://huggingface.co/Qwen/Qwen3.5-397B-A17B)
2026-02-16T09:31:44
https://qwen.ai/blog?id=qwen3.5
Stunning_Energy_7028
qwen.ai
1970-01-01T00:00:00
0
{}
1r657w5
false
null
t3_1r657w5
/r/LocalLLaMA/comments/1r657w5/qwen35_release_blog_post/
false
false
default
122
null
Small, fast Spam Detection model designed for German text
1
[removed]
2026-02-16T09:29:48
https://www.reddit.com/r/LocalLLaMA/comments/1r656sv/small_fast_spam_detection_model_designed_for/
Ok_Hold_5385
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r656sv
false
null
t3_1r656sv
/r/LocalLLaMA/comments/1r656sv/small_fast_spam_detection_model_designed_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GDyKYPPvaxnQZ_TdmOmnd308EdAoOfQU1YZvduKKTYk.png?width=108&crop=smart&auto=webp&s=ae907b968aa3906b92688b790df0ba7d23950ad4', 'width': 108}, {'height': 116, 'url': 'h...
Qwen3.5-397B-A17B is out!!
784
[https://huggingface.co/Qwen/Qwen3.5-397B-A17B](https://huggingface.co/Qwen/Qwen3.5-397B-A17B)
2026-02-16T09:29:03
https://www.reddit.com/r/LocalLLaMA/comments/1r656d7/qwen35397ba17b_is_out/
lolxdmainkaisemaanlu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r656d7
false
null
t3_1r656d7
/r/LocalLLaMA/comments/1r656d7/qwen35397ba17b_is_out/
false
false
self
784
{'enabled': False, 'images': [{'id': 'sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sxCtTuIrZpTpAOWoo9pt0eNH_oV-_xUiqhE8DoFPkFM.png?width=108&crop=smart&auto=webp&s=7318ec3ce4509fbace98fa419ca07a197bbf6b12', 'width': 108}, {'height': 116, 'url': 'h...
Qwen3.5-397B-A17B weights are live on ModelScope!
2
2026-02-16T09:28:57
https://modelscope.cn/models/Qwen/Qwen3.5-397B-A17B/summary
Stunning_Energy_7028
modelscope.cn
1970-01-01T00:00:00
0
{}
1r656b8
false
null
t3_1r656b8
/r/LocalLLaMA/comments/1r656b8/qwen35397ba17b_weights_are_live_on_modelscope/
false
false
https://external-preview…d1dfa721562cd927
2
{'enabled': False, 'images': [{'id': 'Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Z8bx1P3h1CiRE0RdBGE07uHAodY2tY3MhY1YCyBocTc.png?width=108&crop=smart&auto=webp&s=bfd2f3c5a67913ab2f0268b019f5a1321841c3ea', 'width': 108}, {'height': 216, 'url': '...
Trying to understand some benchmarks
1
I'm trying to serve `gpt-oss-120b` to as many people in my organization as possible, but I'm finding it hard to get an idea of what the theoretical ceiling might be. Currently the model is split over 2x H100 94GB cards in PCIe which is on a cloud provider. We have a quote for 4x H100 94GB NVL cards, and the cards will ...
2026-02-16T08:50:56
https://www.reddit.com/r/LocalLLaMA/comments/1r64k8y/trying_to_understand_some_benchmarks/
monkeyofscience
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r64k8y
false
null
t3_1r64k8y
/r/LocalLLaMA/comments/1r64k8y/trying_to_understand_some_benchmarks/
false
false
self
1
null
Qwen 3.5 series marks the end of VL models?
65
2026-02-16T08:47:03
https://i.redd.it/m1ocrjozktjg1.jpeg
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1r64i0u
false
null
t3_1r64i0u
/r/LocalLLaMA/comments/1r64i0u/qwen_35_series_marks_the_end_of_vl_models/
false
false
https://preview.redd.it/…9cdb51f551dea483
65
{'enabled': True, 'images': [{'id': 'm1ocrjozktjg1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?width=108&crop=smart&auto=webp&s=1cd35cdf35816ba9e9d3e7d11c700e3ec4aafd97', 'width': 108}, {'height': 185, 'url': 'https://preview.redd.it/m1ocrjozktjg1.jpeg?width=216&crop=smart&auto=w...
Trying to understand some benchmarks
1
[removed]
2026-02-16T08:46:04
https://www.reddit.com/r/LocalLLaMA/comments/1r64hh6/trying_to_understand_some_benchmarks/
monkeyofscience
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r64hh6
false
null
t3_1r64hh6
/r/LocalLLaMA/comments/1r64hh6/trying_to_understand_some_benchmarks/
false
false
self
1
null
LeetCode Assembly Dataset (400+ Solutions in x86-64 / ARM64 using GCC/Clang)
15
Introducing the LeetCode Assembly Dataset: a dataset of 400+ LeetCode problem solutions in assembly across x86-64, ARM64, MIPS64, and RISC-V using GCC & Clang at -O0/-O1/-O2/-O3 optimizations. This dataset is perfect for teaching LLMs complex assembly and compiler behavior!
2026-02-16T08:41:57
https://huggingface.co/datasets/ronantakizawa/leetcode-assembly
Ok_Employee_6418
huggingface.co
1970-01-01T00:00:00
0
{}
1r64f67
false
null
t3_1r64f67
/r/LocalLLaMA/comments/1r64f67/leetcode_assembly_dataset_400_solutions_in_x8664/
false
false
https://external-preview…85e151194fd969d3
15
{'enabled': False, 'images': [{'id': 'buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/buD1IznU__1J1rUw_P_Y0bDCDvG6vRfHSOa2qWKO67s.png?width=108&crop=smart&auto=webp&s=1b924a3cfde480641c1c9227b81924f0a1505a6a', 'width': 108}, {'height': 116, 'url': 'h...
Is there a model that is completely uncensored when it comes to controversial topics?
18
I know "uncensored" often means NSFW, for role-play, etc, but that's not really what I care about. I want a model that has no problem not conforming to typical safety rules. It's willing to engage and objectively assess and consider points that might go directly against "safety guidelines". Think historical topics, so...
2026-02-16T08:38:50
https://www.reddit.com/r/LocalLLaMA/comments/1r64deu/is_there_a_model_that_is_completely_uncensored/
ghulamalchik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r64deu
false
null
t3_1r64deu
/r/LocalLLaMA/comments/1r64deu/is_there_a_model_that_is_completely_uncensored/
false
false
self
18
null
Open source MCP server for real-time currency & crypto conversion
0
Hey everyone, I built and deployed a currency exchange MCP server that gives AI agents real-time forex and crypto conversion. \*\*What it does:\*\* \- Convert between 60+ fiat currencies and 30+ cryptocurrencies \- Batch convert to up to 50 currencies at once \- Historical rates with time-series data \...
2026-02-16T08:32:04
https://www.reddit.com/r/LocalLLaMA/comments/1r649gm/open_source_mcp_server_for_realtime_currency/
RuddyBuilds
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r649gm
false
null
t3_1r649gm
/r/LocalLLaMA/comments/1r649gm/open_source_mcp_server_for_realtime_currency/
false
false
self
0
{'enabled': False, 'images': [{'id': 'fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fE3nMGySKUWMp71MtQdIdHs_wzrWr-Shi51KT1WGHJ4.png?width=108&crop=smart&auto=webp&s=08b39bbbd499a7e62a23813f5b4e94e1519c7f38', 'width': 108}, {'height': 108, 'url': 'h...
The Qwen3.5 will still opensource
12
The link for it.( not weight yet) https://bailian.console.aliyun.com/cn-beijing/?spm=5176.29619931.J\_XNqYbJaEnpB5\_cCJf7e6D.1.136910d78TBFEG&tab=home#/model-market/detail/qwen3.5-397b-a17b
2026-02-16T08:30:56
https://www.reddit.com/r/LocalLLaMA/comments/1r648sf/the_qwen35_will_still_opensource/
bobeeeeeeeee8964
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r648sf
false
null
t3_1r648sf
/r/LocalLLaMA/comments/1r648sf/the_qwen35_will_still_opensource/
false
false
self
12
null
Local macOS LLM llama-server setup guide
0
In case anyone here is thinking of using a Mac as a local small LLM model server for your other machines on a LAN, here are the steps I followed which worked for me. The focus is plumbing — how to set up ssh tunneling, screen sessions, etc. Not much different from setting up a Linux server, but not the same either. Of ...
2026-02-16T08:29:05
https://forgottencomputer.com/retro/install_mac.html
breksyt
forgottencomputer.com
1970-01-01T00:00:00
0
{}
1r647pf
false
null
t3_1r647pf
/r/LocalLLaMA/comments/1r647pf/local_macos_llm_llamaserver_setup_guide/
false
false
default
0
null
Qwen Released Qwen 3.5 397B and Qwen 3.5 Plus!
74
[https://chat.qwen.ai/](https://chat.qwen.ai/) https://preview.redd.it/ddrcinnghtjg1.png?width=626&format=png&auto=webp&s=5f91e5a8f0b99c86d30ee966815465f1571e8d2e
2026-02-16T08:27:24
https://www.reddit.com/r/LocalLLaMA/comments/1r646pt/qwen_released_qwen_35_397b_and_qwen_35_plus/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r646pt
false
null
t3_1r646pt
/r/LocalLLaMA/comments/1r646pt/qwen_released_qwen_35_397b_and_qwen_35_plus/
false
false
https://preview.redd.it/…ee7dbfc2bfc54630
74
null
I built CodeGraph CLI — parses your codebase into a semantic graph with tree-sitter, does RAG-powered search over LanceDB vectors, and lets you chat with multi-agent AI from the terminal
5
I've been building **CodeGraph CLI** (`cg`) — an open-source, local-first code intelligence tool. It parses your project into an AST with tree-sitter, builds a directed dependency graph in SQLite, embeds every symbol into vectors stored in LanceDB, then layers RAG, impact analysis, and a multi-agent system on top. **G...
2026-02-16T08:25:20
https://www.reddit.com/r/LocalLLaMA/comments/1r645hx/i_built_codegraph_cli_parses_your_codebase_into_a/
Wild_Expression_5772
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r645hx
false
null
t3_1r645hx
/r/LocalLLaMA/comments/1r645hx/i_built_codegraph_cli_parses_your_codebase_into_a/
false
false
self
5
{'enabled': False, 'images': [{'id': 'Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=108&crop=smart&auto=webp&s=a7042462df25bfce2e186d98096192e2bd07a743', 'width': 108}, {'height': 108, 'url': 'h...
Minimax M2.5 vs. GLM-5 vs. Kimi k2.5: How do they compare to Codex and Claude for coding?
34
Hi everyone, I’m looking for community feedback from those of you who have hands-on experience with the recent wave of coding models: 1. **Minimax M2.5** 2. **GLM-5** 3. **Kimi k2.5** There are plenty of benchmarks out there, but I’m interested in your subjective opinions and day-to-day experience. **If you use mul...
2026-02-16T08:25:15
https://www.reddit.com/r/LocalLLaMA/comments/1r645g6/minimax_m25_vs_glm5_vs_kimi_k25_how_do_they/
East-Stranger8599
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r645g6
false
null
t3_1r645g6
/r/LocalLLaMA/comments/1r645g6/minimax_m25_vs_glm5_vs_kimi_k25_how_do_they/
false
false
self
34
null
mlx-ruby: MLX bindings for Ruby
2
Ruby desperately needed bindings for MLX so I finally sat down with Codex and ported it along with all the example models. Working on adding better Rubyesqe ergonomics, but all the core library features work and performance is within 25% of the official Python bindings. https://github.com/skryl/mlx-ruby
2026-02-16T08:25:06
https://www.reddit.com/r/LocalLLaMA/comments/1r645d7/mlxruby_mlx_bindings_for_ruby/
rut216
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r645d7
false
null
t3_1r645d7
/r/LocalLLaMA/comments/1r645d7/mlxruby_mlx_bindings_for_ruby/
false
false
self
2
{'enabled': False, 'images': [{'id': 'cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cgb0DXlJFeXM-oiZS5Tc1Xoi00oNj-R4EnYDYRwkHGI.png?width=108&crop=smart&auto=webp&s=169ebfe3cfa4032d5d13def2db973f232d8d264c', 'width': 108}, {'height': 108, 'url': 'h...
Moonshot AI Launches Kimi Claw
0
# Moonshot AI Launches Kimi Claw: Native OpenClaw on [Kimi.com](http://Kimi.com) with 5,000 Community Skills and 40GB Cloud Storage Now.
2026-02-16T08:23:49
https://www.reddit.com/r/LocalLLaMA/comments/1r644mo/moonshot_ai_launches_kimi_claw/
techlatest_net
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r644mo
false
null
t3_1r644mo
/r/LocalLLaMA/comments/1r644mo/moonshot_ai_launches_kimi_claw/
false
false
self
0
null
I built CodeGraph CLI that parses your codebase into a semantic graph with tree-sitter, does RAG-powered search over LanceDB vectors, and lets you chat with multi-agent AI from the terminal*
1
I've been building **CodeGraph CLI** (`cg`) — an open-source, local-first code intelligence tool. It parses your project into an AST with tree-sitter, builds a directed dependency graph in SQLite, embeds every symbol into vectors stored in LanceDB, then layers RAG, impact analysis, and a multi-agent syste...
2026-02-16T08:21:41
https://www.reddit.com/r/LocalLLaMA/comments/1r643do/i_built_codegraph_cli_that_parses_your_codebase/
Wild_Expression_5772
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r643do
false
null
t3_1r643do
/r/LocalLLaMA/comments/1r643do/i_built_codegraph_cli_that_parses_your_codebase/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z9as73bRDidn2Qx4d8uENfpKkoOOgNKhYev0vH7w5I4.png?width=108&crop=smart&auto=webp&s=a7042462df25bfce2e186d98096192e2bd07a743', 'width': 108}, {'height': 108, 'url': 'h...
Q: Why hasn't people made models like Falcon-E-3B-Instruct?
0
Falcon, the company from UAE, was one of the first who learned from Microsoft's BitNet, and tried to make their own ternary LM. Why hasn't people tried to use Tequila/Sherry PTQ methods to convert the larger models into something smaller? Is it difficult, or just too costly to justify its ability to accelerate compute?...
2026-02-16T08:17:23
https://www.reddit.com/r/LocalLLaMA/comments/1r640qb/q_why_hasnt_people_made_models_like/
TomLucidor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r640qb
false
null
t3_1r640qb
/r/LocalLLaMA/comments/1r640qb/q_why_hasnt_people_made_models_like/
false
false
self
0
null
Qwen 3.5 Released on api!
8
[https://bailian.console.aliyun.com/cn-beijing/?tab=model#/model-market/detail/qwen3.5-397b-a17b](https://bailian.console.aliyun.com/cn-beijing/?tab=model#/model-market/detail/qwen3.5-397b-a17b) [https://bailian.console.aliyun.com/cn-beijing/?tab=model#/model-market/detail/qwen3.5-plus](https://bailian.console.aliyun....
2026-02-16T08:11:21
https://www.reddit.com/r/LocalLLaMA/comments/1r63x5c/qwen_35_released_on_api/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r63x5c
false
null
t3_1r63x5c
/r/LocalLLaMA/comments/1r63x5c/qwen_35_released_on_api/
false
false
https://preview.redd.it/…d88e3d6bc0271bbb
8
null
Liquid LFM2-VL 450M (Q4_0) running in-browser via WebGPU (local inference)
5
Hey r/LocalLLaMA \- quick experiment share. I got Liquid LFM2-VL 450M (Q4\_0) running locally in the browser using WebGPU (RunAnywhere Web SDK beta). It uses WebGPU acceleration when available, with WASM fallback if WebGPU isn’t supported. Try it out : [https://runanywhere-web-demo.vercel.app/](https://runanywhere-we...
2026-02-16T08:04:23
https://v.redd.it/hds7nl76dtjg1
New_Inflation_6927
v.redd.it
1970-01-01T00:00:00
0
{}
1r63t3b
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hds7nl76dtjg1/DASHPlaylist.mpd?a=1773821076%2CMTNmZmIyNDY2OGIyNmQzMGRjY2YyNzgwODViNWU1ZDEwYWNmZjRiMTc3NTJjOWI2MTQzM2MzYjBhZjIwYjFlYw%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/hds7nl76dtjg1/CMAF_1080.mp4?source=fallback', 'h...
t3_1r63t3b
/r/LocalLLaMA/comments/1r63t3b/liquid_lfm2vl_450m_q4_0_running_inbrowser_via/
false
false
https://external-preview…9a0792614715dc3d
5
{'enabled': False, 'images': [{'id': 'NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/NnVqZzJyNzZkdGpnMXDL1zjcu1tYh5LWBn_7_lAAZkEPMPyNk_OEuU5EWQ_Y.png?width=108&crop=smart&format=pjpg&auto=webp&s=9f38aed718d1d6d19d8ec9abd03ae8d264429...
Qwen3.5-397B-A17B will be open source!
138
https://preview.redd.it/…wen.ai/) !!!!!!
2026-02-16T08:03:46
https://www.reddit.com/r/LocalLLaMA/comments/1r63sre/qwen35397ba17b_will_be_open_source/
LegacyRemaster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r63sre
true
null
t3_1r63sre
/r/LocalLLaMA/comments/1r63sre/qwen35397ba17b_will_be_open_source/
false
false
https://preview.redd.it/…96080a446b6e1bff
138
null
Are you ready?
68
2026-02-16T08:00:12
https://i.redd.it/edi57xtmctjg1.jpeg
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1r63qfa
false
null
t3_1r63qfa
/r/LocalLLaMA/comments/1r63qfa/are_you_ready/
false
false
https://preview.redd.it/…2b8cfcfe6ae7ad75
68
{'enabled': True, 'images': [{'id': 'edi57xtmctjg1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?width=108&crop=smart&auto=webp&s=a349b54240b6b8171823e6978069828afd067fcb', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/edi57xtmctjg1.jpeg?width=216&crop=smart&auto=...
Qwen 3.5 Plus(397b-a17b) is now available on Chinese Qwen APP
148
So I guess they will release the weight in the next 24 hours
2026-02-16T07:52:23
https://i.redd.it/f462h8vqatjg1.png
AaronFeng47
i.redd.it
1970-01-01T00:00:00
0
{}
1r63lvl
false
null
t3_1r63lvl
/r/LocalLLaMA/comments/1r63lvl/qwen_35_plus397ba17b_is_now_available_on_chinese/
false
false
https://preview.redd.it/…6f3f5b1c27c31c52
148
{'enabled': True, 'images': [{'id': 'f462h8vqatjg1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?width=108&crop=smart&auto=webp&s=8db0692c75b4aedefd60c67a244648bc20ab1d9a', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/f462h8vqatjg1.png?width=216&crop=smart&auto=web...
Ambiguity / Clarification QA benchmark for LLMs
1
is there any benchmark that measures an LLM's capability to question the prompt / ask for clarification when faced with ambiguity ?
2026-02-16T07:49:11
https://www.reddit.com/r/LocalLLaMA/comments/1r63k0p/ambiguity_clarification_qa_benchmark_for_llms/
Ok-Loan3275
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r63k0p
false
null
t3_1r63k0p
/r/LocalLLaMA/comments/1r63k0p/ambiguity_clarification_qa_benchmark_for_llms/
false
false
self
1
null
Why is everything about code now?
194
I hate hate hate how every time a new model comes out its about how its better at coding. What happened to the heyday of llama 2 finetunes that were all about creative writing and other use cases. Is it all the vibe coders that are going crazy over the models coding abilities?? Like what about other conversational us...
2026-02-16T07:41:24
https://www.reddit.com/r/LocalLLaMA/comments/1r63fhu/why_is_everything_about_code_now/
falconandeagle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r63fhu
false
null
t3_1r63fhu
/r/LocalLLaMA/comments/1r63fhu/why_is_everything_about_code_now/
false
false
self
194
null
Qwen3.5 vs Llama hypothetical
0
How do you think Qwen3.5 compares to Llama3?
2026-02-16T07:27:48
https://www.reddit.com/r/LocalLLaMA/comments/1r6378w/qwen35_vs_llama_hypothetical/
BeneficialSyllabub71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r6378w
false
null
t3_1r6378w
/r/LocalLLaMA/comments/1r6378w/qwen35_vs_llama_hypothetical/
false
false
self
0
null
bb25 (Bayesian BM25) v0.2.0 is out!
11
bb25 v0.2.0 is out — a Python + Rust implementation of Bayesian BM25 that turns search scores into calibrated probabilities. [https://github.com/instructkr/bb25](https://github.com/instructkr/bb25) A week ago, I built bb25 that turns BM25 into a probability engine! In addition to the Rust-based implementation, the pa...
2026-02-16T07:05:15
https://i.redd.it/qwyslvhr2tjg1.jpeg
Ok_Rub1689
i.redd.it
1970-01-01T00:00:00
0
{}
1r62tlh
false
null
t3_1r62tlh
/r/LocalLLaMA/comments/1r62tlh/bb25_bayesian_bm25_v020_is_out/
false
false
default
11
{'enabled': True, 'images': [{'id': 'qwyslvhr2tjg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?width=108&crop=smart&auto=webp&s=7c1ac27db805a05b52a4255a9cdc81e5c4d460cd', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/qwyslvhr2tjg1.jpeg?width=216&crop=smart&auto=we...
bb25 (Bayseian BM25) v0.2.0 is out!
1
[removed]
2026-02-16T07:02:49
https://i.redd.it/wuqvrr4c2tjg1.jpeg
Ok_Rub1689
i.redd.it
1970-01-01T00:00:00
0
{}
1r62s21
false
null
t3_1r62s21
/r/LocalLLaMA/comments/1r62s21/bb25_bayseian_bm25_v020_is_out/
false
false
default
1
{'enabled': True, 'images': [{'id': 'wuqvrr4c2tjg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?width=108&crop=smart&auto=webp&s=8087f15e56dc651558db3dd3f19bab5f953b1057', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/wuqvrr4c2tjg1.jpeg?width=216&crop=smart&auto=we...
Releasing bb25 0.2.0: Why Bayesian BM25 (bb25) extends well far-beyond search?
1
[removed]
2026-02-16T07:01:35
https://i.redd.it/0sozv5512tjg1.jpeg
Ok_Rub1689
i.redd.it
1970-01-01T00:00:00
0
{}
1r62r8q
false
null
t3_1r62r8q
/r/LocalLLaMA/comments/1r62r8q/releasing_bb25_020_why_bayesian_bm25_bb25_extends/
false
false
default
1
{'enabled': True, 'images': [{'id': '0sozv5512tjg1', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?width=108&crop=smart&auto=webp&s=202cc3a732bb09a4f708e2b08521b2aaabeee3fd', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/0sozv5512tjg1.jpeg?width=216&crop=smart&auto=we...
Prompt Engineering was overhyped, and it’s already dying as a standalone career?
1
[removed]
2026-02-16T06:35:31
https://www.reddit.com/r/LocalLLaMA/comments/1r62an5/prompt_engineering_was_overhyped_and_its_already/
Own-Treacle4585
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r62an5
false
null
t3_1r62an5
/r/LocalLLaMA/comments/1r62an5/prompt_engineering_was_overhyped_and_its_already/
false
false
self
1
null
Realistic take, the hype around Chinese models are unfounded.
0
I am currently working on my 2billion $ SAAS, as one does. I am noticing how unreliable these models are, from self hosted all the way to open router, at extracting structured data. What’s weird is how haiku consistently beats Kimi K2 in these tasks. I believed that I could self host everything and have infinite mone...
2026-02-16T06:26:04
https://www.reddit.com/r/LocalLLaMA/comments/1r624l4/realistic_take_the_hype_around_chinese_models_are/
Themotionalman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r624l4
false
null
t3_1r624l4
/r/LocalLLaMA/comments/1r624l4/realistic_take_the_hype_around_chinese_models_are/
false
false
self
0
null
Uncensored LLMs are everything Ai should be...
0
https://preview.redd.it/…ing request....
2026-02-16T06:19:41
https://www.reddit.com/r/LocalLLaMA/comments/1r620en/uncensored_llms_are_everything_ai_should_be/
wittlewayne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r620en
false
null
t3_1r620en
/r/LocalLLaMA/comments/1r620en/uncensored_llms_are_everything_ai_should_be/
true
false
spoiler
0
null
Moving from AMD to Nvidia - RX 7900 XTX -> RTX 3090's
0
https://preview.redd.it/…unch of 3090s?
2026-02-16T06:17:14
https://www.reddit.com/r/LocalLLaMA/comments/1r61yx1/moving_from_amd_to_nvidia_rx_7900_xtx_rtx_3090s/
alphatrad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r61yx1
false
null
t3_1r61yx1
/r/LocalLLaMA/comments/1r61yx1/moving_from_amd_to_nvidia_rx_7900_xtx_rtx_3090s/
false
false
https://preview.redd.it/…c9248c53f5debc8a
0
null
Anyone shipping production apps or prototypes with Local LLMs on Mobile? What's the actual use case?
4
I am primarily interested in knowing what use cases demands running LLMs locally instead of using cloud APIs. Local LLMs have huge latency but complete privacy and I am very interested if any consumer use cases would love privacy over latency
2026-02-16T06:10:51
https://www.reddit.com/r/LocalLLaMA/comments/1r61upn/anyone_shipping_production_apps_or_prototypes/
mighty-precious2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r61upn
false
null
t3_1r61upn
/r/LocalLLaMA/comments/1r61upn/anyone_shipping_production_apps_or_prototypes/
false
false
self
4
null
We tested 5 vLLM optimizations: Prefix Cache, FP8, CPU Offload, Disagg P/D, and Sleep Mode
10
Hi everyone, We just published a new article on the JarvisLabs blog that dives into 5 practical techniques to optimize vLLM performance. *Processing img ma65us58ssjg1...* We actually ran benchmarks on Qwen3-32B to see how much improvements these techniques actually bring to the table. Here is a quick summary of ...
2026-02-16T06:07:41
https://www.reddit.com/r/LocalLLaMA/comments/1r61so4/we_tested_5_vllm_optimizations_prefix_cache_fp8/
LayerHot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r61so4
false
null
t3_1r61so4
/r/LocalLLaMA/comments/1r61so4/we_tested_5_vllm_optimizations_prefix_cache_fp8/
false
false
https://external-preview…8ffba89208dfb0b9
10
null
With the ridiculous ram prices has anyone tried optane / very fast nvme for page file
2
I know it's will be much slower, but I was wondering if anyone explored this path or have insights.
2026-02-16T06:02:57
https://www.reddit.com/r/LocalLLaMA/comments/1r61pma/with_the_ridiculous_ram_prices_has_anyone_tried/
AdventurousGold672
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r61pma
false
null
t3_1r61pma
/r/LocalLLaMA/comments/1r61pma/with_the_ridiculous_ram_prices_has_anyone_tried/
false
false
self
2
null
Why prompt-based controls cannot enforce execution boundaries in autonomous agents
0
I keep seeing people rely on prompts to “restrict” what an agent can do. In practice, this breaks down the moment the agent: \- retries, \- expands scope, \- or chains tool calls. Prompts can influence behavior, but they cannot \*block execution\*. Once an agent is allowed to act, something outside the mod...
2026-02-16T05:47:01
https://www.reddit.com/r/LocalLLaMA/comments/1r61f1i/why_promptbased_controls_cannot_enforce_execution/
IllustratorNo5375
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r61f1i
false
null
t3_1r61f1i
/r/LocalLLaMA/comments/1r61f1i/why_promptbased_controls_cannot_enforce_execution/
false
false
self
0
{'enabled': False, 'images': [{'id': 'lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lInEV-bp6lxsrRwdywV7-Xr5xfg_iE-CQ4VUIH6Mc4g.png?width=108&crop=smart&auto=webp&s=c4abc9eb79e64b9559f2f9ec09ce2865cb20638a', 'width': 108}, {'height': 108, 'url': 'h...
lloyal.node: branching + continuous tree batching for llama.cpp in Node (best-of-N / beam / MCTS-ish)
0
Just shipped **lloyal.node**: Node.js bindings for llama.cpp-style models with **forkable inference state** \+ **continuous tree batching** (shared-prefix KV branching). The goal is to make “searchy” decoding patterns cheap in Node without re-running the prompt for every candidate. You can fork a branch at some point,...
2026-02-16T05:44:29
https://www.reddit.com/r/LocalLLaMA/comments/1r61dcp/lloyalnode_branching_continuous_tree_batching_for/
Savings-Poet5718
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r61dcp
false
null
t3_1r61dcp
/r/LocalLLaMA/comments/1r61dcp/lloyalnode_branching_continuous_tree_batching_for/
false
false
self
0
{'enabled': False, 'images': [{'id': 'HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HwtrLiP6mcBxsNfhqRTOrcAQX2f4Y5_mLp-2-2Q-YVs.png?width=108&crop=smart&auto=webp&s=e6a9b2df90ed7f4cc116dafda0e48b2e97a05ee2', 'width': 108}, {'height': 108, 'url': 'h...
Reduced MoE experts from 8 to 4. Is the quality drop huge?
2
Hey all, Running Minimax M2.5 locally on: * 16GB VRAM * 128GB DDR4 RAM * UD Q3\_K\_XL quant (Unsloth) * 32k context With the full 8 experts (layers offloaded to CPU), I’m only getting \~5 TPS. Unsloth advertises 20+ TPS on similar 16GB VRAM setups (with 96GB RAM), but I’m far from that. When I drop to 4 experts, sp...
2026-02-16T05:23:14
https://www.reddit.com/r/LocalLLaMA/comments/1r60z06/reduced_moe_experts_from_8_to_4_is_the_quality/
Dentuam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r60z06
false
null
t3_1r60z06
/r/LocalLLaMA/comments/1r60z06/reduced_moe_experts_from_8_to_4_is_the_quality/
false
false
self
2
null
Qwen3.5 fine-tuning plans?
3
Anyone planning LoRA?
2026-02-16T05:23:06
https://www.reddit.com/r/LocalLLaMA/comments/1r60yx4/qwen35_finetuning_plans/
skipdaballs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r60yx4
false
null
t3_1r60yx4
/r/LocalLLaMA/comments/1r60yx4/qwen35_finetuning_plans/
false
false
self
3
null
AMA Announcement: StepFun AI, The Opensource Lab Behind Step-3.5-Flash Model (Thursday, 8AM-11AM PST)
58
Hi r/LocalLLaMA 👋 We're excited for Thursday's guests: **The StepFun Team!** **Kicking things off Thursday, Feb. 19th, 8 AM–11 AM PST** ⚠️ **Note:** The AMA itself will be hosted in a **separate thread,** please don’t post questions here.
2026-02-16T05:11:16
https://i.redd.it/u11uh8jfisjg1.png
XMasterrrr
i.redd.it
1970-01-01T00:00:00
0
{}
1r60qu9
false
null
t3_1r60qu9
/r/LocalLLaMA/comments/1r60qu9/ama_announcement_stepfun_ai_the_opensource_lab/
false
false
default
58
{'enabled': True, 'images': [{'id': 'u11uh8jfisjg1', 'resolutions': [{'height': 152, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?width=108&crop=smart&auto=webp&s=2b01786dfa62431c87c753e20ca72c7486848abc', 'width': 108}, {'height': 305, 'url': 'https://preview.redd.it/u11uh8jfisjg1.png?width=216&crop=smart&auto=we...
Safer email processing
1
I had been working on a local agent for household tasks, reminders, email monitoring and handling, calendar access and the like. To be useful, it needs integrations and that means access. The problem is prompt injection, as open claw has shown. Thinking on the problem and some initial testing, I came up with a two ...
2026-02-16T05:02:07
https://www.reddit.com/r/LocalLLaMA/comments/1r60kds/safer_email_processing/
ravage382
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r60kds
false
null
t3_1r60kds
/r/LocalLLaMA/comments/1r60kds/safer_email_processing/
false
false
self
1
null
Qwen 3.5 will be released today
416
Sources reveal that Alibaba will open-source its next-generation large model, Qwen3.5, tonight on Lunar New Year's Eve. The model reportedly features a comprehensive innovation in its architecture. [https://x.com/Sino\_Market/status/2023218866370068561?s=20](https://x.com/Sino_Market/status/2023218866370068561?s=20)
2026-02-16T04:54:20
https://www.reddit.com/r/LocalLLaMA/comments/1r60ety/qwen_35_will_be_released_today/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r60ety
true
null
t3_1r60ety
/r/LocalLLaMA/comments/1r60ety/qwen_35_will_be_released_today/
false
false
self
416
null
Qwen 3.5 will be released today
1
[removed]
2026-02-16T04:51:35
https://www.reddit.com/r/LocalLLaMA/comments/1r60cwm/qwen_35_will_be_released_today/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r60cwm
false
null
t3_1r60cwm
/r/LocalLLaMA/comments/1r60cwm/qwen_35_will_be_released_today/
false
false
https://preview.redd.it/…30d644291a30fe9f
1
null
TTS with speech speed control?
2
Whether it’s Chatterbox, F5 TTS or any other model, the final TTS output doesn’t match the reference voice’s speech pace. The generated audio is usually much faster than the reference. Are there any good TTS models that have proper speech pace option?
2026-02-16T04:47:49
https://www.reddit.com/r/LocalLLaMA/comments/1r60aag/tts_with_speech_speed_control/
TheRealistDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r60aag
false
null
t3_1r60aag
/r/LocalLLaMA/comments/1r60aag/tts_with_speech_speed_control/
false
false
self
2
null
Samsung is working on robots! 👀
0
2026-02-16T04:40:29
https://i.redd.it/yf8549hycsjg1.png
moaijobs
i.redd.it
1970-01-01T00:00:00
0
{}
1r60515
false
null
t3_1r60515
/r/LocalLLaMA/comments/1r60515/samsung_is_working_on_robots/
false
false
https://preview.redd.it/…f128026d912c991e
0
{'enabled': True, 'images': [{'id': 'yf8549hycsjg1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?width=108&crop=smart&auto=webp&s=4d97abff4cfc348b04beb72e6e35f43210a5c25a', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/yf8549hycsjg1.png?width=216&crop=smart&auto=web...
Synthetic text vs. distilled corpus
1
Hi everyone, I just finished updating my script to train an LLM from scratch. The problem I'm having is that I can't find readily available training data for this purpose. My primary goal is an LLM with a few million parameters that acts as a simple chatbot, but I later want to expand its capabilities so it can provide...
2026-02-16T04:34:37
https://www.reddit.com/r/LocalLLaMA/comments/1r600r8/synthetic_text_vs_distilled_corpus/
Visual_Brain8809
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r600r8
false
null
t3_1r600r8
/r/LocalLLaMA/comments/1r600r8/synthetic_text_vs_distilled_corpus/
false
false
self
1
null
gUrrT: An Intelligent Open-Source Video Understanding System A different path from traditional Large Video Language Models (LVLMs).
12
"Ask" is cool, but why does video understanding have to be so compute heavy? 🤨 Built gUrrT: A way to "talk to videos" without the soul-crushing VRAM requirements of LVLMs. The idea behind gUrrT was to totally bypass the Large Video Language Model route by harnessing the power of Vision Models, Audio Transcription, A...
2026-02-16T04:28:22
https://github.com/owaismohammad/gurrt
OkAdministration374
github.com
1970-01-01T00:00:00
0
{}
1r5zw9t
false
null
t3_1r5zw9t
/r/LocalLLaMA/comments/1r5zw9t/gurrt_an_intelligent_opensource_video/
false
false
default
12
{'enabled': False, 'images': [{'id': 'PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PBKSKihyaqWWUF931IKQyMwP8ty-mnW6fJ_AhNLozhI.png?width=108&crop=smart&auto=webp&s=9734d209df233df3e6285a2d35e7d2f0c78a702d', 'width': 108}, {'height': 108, 'url': 'h...
I run a bunch of AI agents and "isolation" is the word I trust least
1
Saw Klaw.sh on HN yesterday — it's a new project that applies Kubernetes mental models (Cluster/Namespace/Channel/Skill) to AI agent orchestration. Written in Go, single binary, just hit v0.1.0. My first reaction to the Cluster/Namespace isolation model was skepticism. I run a multi-agent system with process-level iso...
2026-02-16T04:27:36
https://www.reddit.com/r/LocalLLaMA/comments/1r5zvrb/i_run_a_bunch_of_ai_agents_and_isolation_is_the/
AdAccurate6326
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5zvrb
false
null
t3_1r5zvrb
/r/LocalLLaMA/comments/1r5zvrb/i_run_a_bunch_of_ai_agents_and_isolation_is_the/
false
false
self
1
null
Point and laugh at my build? (Loss porn)
4
Recently fell into the rabbit hole of building a local and private AI server as affordably as possible, as someone who’s new to building a PC and running models locally but excited about the potential of this tech. But turns out it’s so slow and power inefficient to the point that it’s been completely demoralizing and ...
2026-02-16T04:25:40
https://www.reddit.com/r/LocalLLaMA/comments/1r5zuaw/point_and_laugh_at_my_build_loss_porn/
Diligent-Culture-432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5zuaw
false
null
t3_1r5zuaw
/r/LocalLLaMA/comments/1r5zuaw/point_and_laugh_at_my_build_loss_porn/
false
false
self
4
null
[ Removed by moderator ]
1
[removed]
2026-02-16T03:57:10
https://www.reddit.com/r/LocalLLaMA/comments/1r5z9v0/qwen35_open_source_hopes/
hosohep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5z9v0
false
null
t3_1r5z9v0
/r/LocalLLaMA/comments/1r5z9v0/qwen35_open_source_hopes/
false
false
null
1
null
Claude accurately cites its own published failure modes (deception, gaslighting, blackmail attempts) — but r/ClaudeAI deletes discussion in 2 minutes
0
8 months running 11 AI stack for independent safety testing. Built a clean prompt using only public Anthropic safety evals, Apollo Research (Dec 2024) strategic deception findings, and Greenblatt et al. alignment faking paper. Prompt asks Claude to describe its documented capabilities in first person. No jailbreak. ...
2026-02-16T03:48:13
https://www.reddit.com/gallery/1r5z3f4
Dapper-Tension6781
reddit.com
1970-01-01T00:00:00
0
{}
1r5z3f4
false
null
t3_1r5z3f4
/r/LocalLLaMA/comments/1r5z3f4/claude_accurately_cites_its_own_published_failure/
false
false
https://preview.redd.it/…bf0865d0c232ee7e
0
null
GLM-5 is officially on NVIDIA NIM, and you can now use it to power Claude Code for FREE 🚀
0
**Edit 1:** Added instructions for free usage with Claude Code VSCode extension. **Edit 2:** Added OpenRouter as a provider. **Edit 3:** Added support for a LMStudio Local provider since my last post got taken down. NVIDIA just added **z-ai/glm5** to their NIM inventory, and I’ve just updated **free-claude-code** ...
2026-02-16T03:47:27
http://github.com/Alishahryar1/free-claude-code
PreparationAny8816
github.com
1970-01-01T00:00:00
0
{}
1r5z2wx
false
null
t3_1r5z2wx
/r/LocalLLaMA/comments/1r5z2wx/glm5_is_officially_on_nvidia_nim_and_you_can_now/
false
false
default
0
null
Qw en3.5 waiting thread 2
0
Another waiting room.
2026-02-16T03:20:15
https://www.reddit.com/r/LocalLLaMA/comments/1r5yixt/qw_en35_waiting_thread_2/
HawkLopsided6107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5yixt
false
null
t3_1r5yixt
/r/LocalLLaMA/comments/1r5yixt/qw_en35_waiting_thread_2/
false
false
self
0
null
Is there a local version of Spotify Honk?
0
Would like to be able to do all the things their engineers can do before entering the office. Mostly just the remote instructions/monitoring.
2026-02-16T03:18:03
https://techcrunch.com/2026/02/12/spotify-says-its-best-developers-havent-written-a-line-of-code-since-december-thanks-to-ai/
cantgetthistowork
techcrunch.com
1970-01-01T00:00:00
0
{}
1r5yhbc
false
null
t3_1r5yhbc
/r/LocalLLaMA/comments/1r5yhbc/is_there_a_local_version_of_spotify_honk/
false
false
default
0
{'enabled': False, 'images': [{'id': 'vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/vkAxmsGxkymUdkeY4TsOmpYeeDU-qkE4RiF3NyZ3sKo.jpeg?width=108&crop=smart&auto=webp&s=d8be03aa784d0fc61ac73e29ba92b2d8cc32cb17', 'width': 108}, {'height': 122, 'url': '...
Qw en3.5 local deployment hopes
0
Anyone planning to run Qw en3.5 locally?
2026-02-16T03:07:33
https://www.reddit.com/r/LocalLLaMA/comments/1r5y9fy/qw_en35_local_deployment_hopes/
Hot_Supermarket9039
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5y9fy
false
null
t3_1r5y9fy
/r/LocalLLaMA/comments/1r5y9fy/qw_en35_local_deployment_hopes/
false
false
self
0
null
Rumors when MiniMax will have its M2.5 model available to $10/month Starter users?
0
Has anyone heard when it'll be available?
2026-02-16T02:56:43
https://www.reddit.com/r/LocalLLaMA/comments/1r5y147/rumors_when_minimax_will_have_its_m25_model/
EuivIsMyLife
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5y147
false
null
t3_1r5y147
/r/LocalLLaMA/comments/1r5y147/rumors_when_minimax_will_have_its_m25_model/
false
false
self
0
null
Qwen3.5 RAG potential?
0
Anyone planning RAG?
2026-02-16T02:54:41
https://www.reddit.com/r/LocalLLaMA/comments/1r5xzj0/qwen35_rag_potential/
Original_Night7733
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5xzj0
false
null
t3_1r5xzj0
/r/LocalLLaMA/comments/1r5xzj0/qwen35_rag_potential/
false
false
self
0
null
Qwen3.5 quantization hopes?
0
Anyone planning 4bit?
2026-02-16T02:47:59
https://www.reddit.com/r/LocalLLaMA/comments/1r5xuf9/qwen35_quantization_hopes/
BeneficialSyllabub71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5xuf9
false
null
t3_1r5xuf9
/r/LocalLLaMA/comments/1r5xuf9/qwen35_quantization_hopes/
false
false
self
0
null
Worked 2 weeks pushing OpenClaw on my 2L Mini PC, from 70B to 108B models with Ollama, LM Studio, and HeyGen integration; sharing for everybody and keen to discuss
1
[removed]
2026-02-16T02:26:39
https://www.reddit.com/r/LocalLLaMA/comments/1r5xeeu/worked_2_weeks_by_pushing_openclaw_on_my_2l_mini/
Pleasant_Designer_14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5xeeu
false
null
t3_1r5xeeu
/r/LocalLLaMA/comments/1r5xeeu/worked_2_weeks_by_pushing_openclaw_on_my_2l_mini/
false
false
self
1
null
Qwen3-Next-Coder uses `n for new line?
5
I tried Qwen3-Next-Coder-80b_q4_K_M, and it seems very promising. Except that I encountered a problem where it produces ``n` instead of `\n` for newlines with long context like 32k. It works fine with shorter context like 8192 though. Has anyone experienced this? Thanks!
2026-02-16T02:14:12
https://www.reddit.com/r/LocalLLaMA/comments/1r5x4vo/qwen3nextcoder_uses_n_for_new_line/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5x4vo
false
null
t3_1r5x4vo
/r/LocalLLaMA/comments/1r5x4vo/qwen3nextcoder_uses_n_for_new_line/
false
false
self
5
null
I built Talk2Code — text your codebase from your phone via Telegram (~150 lines of Python, open source)
1
[removed]
2026-02-16T02:11:46
https://v.redd.it/hx03v6lfmrjg1
BodeMan5280
v.redd.it
1970-01-01T00:00:00
0
{}
1r5x2zm
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hx03v6lfmrjg1/DASHPlaylist.mpd?a=1773799923%2CMmVkZTMxYjFlYzNjZGJlZTBhOTdjNjA3OGVhZWE0NDkxODI5NTgxYzFjYzlkYTI4MjAwMDVhYzQ1Y2Y3Yjc4Yg%3D%3D&v=1&f=sd', 'duration': 141, 'fallback_url': 'https://v.redd.it/hx03v6lfmrjg1/CMAF_1080.mp4?source=fallback', '...
t3_1r5x2zm
/r/LocalLLaMA/comments/1r5x2zm/i_built_talk2code_text_your_codebase_from_your/
false
false
https://external-preview…b6ba6f052b6539af
1
{'enabled': False, 'images': [{'id': 'bGVsYnQ2bWZtcmpnMUZ81RDSJR2PUJqN2uMzTOMS6Ep5ZyIK5_fgYnYybGJ9', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/bGVsYnQ2bWZtcmpnMUZ81RDSJR2PUJqN2uMzTOMS6Ep5ZyIK5_fgYnYybGJ9.png?width=108&crop=smart&format=pjpg&auto=webp&s=bcecfa1d890e5c0199b9a200e3f01a5a768e...
[ Removed by moderator ]
1
[removed]
2026-02-16T02:09:11
https://www.reddit.com/r/LocalLLaMA/comments/1r5x10t/suscolumn_in_arena_is_this_qwen_35/
Ash_Skiller
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5x10t
false
null
t3_1r5x10t
/r/LocalLLaMA/comments/1r5x10t/suscolumn_in_arena_is_this_qwen_35/
false
false
null
1
null
I'm an Android dev who knows nothing about x86. During my vacation I built a system that genetically evolves machine code — now I can run 80B models on a single RTX 4090.
1
I'm a mobile Android developer. Not a systems programmer, not a compiler engineer, not a low-level guy. This past week I was on vacation from work. My family traveled to another city for a few days, and my inner teenage nerd came out. **The mess that started everything** I'd been hearing about OpenClaw and wanted to ...
2026-02-16T01:57:35
https://www.reddit.com/r/LocalLLaMA/comments/1r5wryo/im_an_android_dev_who_knows_nothing_about_x86/
Ill-Pop2106
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5wryo
false
null
t3_1r5wryo
/r/LocalLLaMA/comments/1r5wryo/im_an_android_dev_who_knows_nothing_about_x86/
false
false
self
1
null
socOCRbench: An OCR benchmark for social science documents
3
You might've noticed quite a few OCR model releases in the past few months, and you might find it increasingly difficult to discriminate between them, as each claims state-of-the-art (and near-perfect scores...) on benchmarks like OmniDocBench. To redress these various issues, I've made socOCRbench, a priva...
2026-02-16T01:51:17
https://noahdasanaike.github.io/posts/sococrbench.html
noahdasanaike
noahdasanaike.github.io
1970-01-01T00:00:00
0
{}
1r5wn6l
false
null
t3_1r5wn6l
/r/LocalLLaMA/comments/1r5wn6l/sococrbench_an_ocr_benchmark_for_social_science/
false
false
default
3
null
Qwen 3.5 PR is live on Transformers?
0
Spotted this PR on GitHub. If the 400B rumors are true, we might need more VRAM soon.
2026-02-16T01:48:34
https://www.reddit.com/r/LocalLLaMA/comments/1r5wl8f/qwen_35_pr_is_live_on_transformers/
StandardFuel6789
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5wl8f
false
null
t3_1r5wl8f
/r/LocalLLaMA/comments/1r5wl8f/qwen_35_pr_is_live_on_transformers/
false
false
self
0
null
local llm + ai video pipeline? i keep seeing ppl duct tape 6 tools together
1
im using a local llm for scripts/outlines, then bouncing through image gen + some motion + tts + ffmpeg to assemble. it works, but the workflow glue is the real pain, not the models. im thinking of open sourcing the orchestration layer as a free tool so ppl can run it locally and not live in 10 browser tabs + a video edi...
2026-02-16T01:41:33
https://www.reddit.com/r/LocalLLaMA/comments/1r5wfyx/local_llm_ai_video_pipeline_i_keep_seeing_ppl/
Upper-Mountain-3397
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5wfyx
false
null
t3_1r5wfyx
/r/LocalLLaMA/comments/1r5wfyx/local_llm_ai_video_pipeline_i_keep_seeing_ppl/
false
false
self
1
null
I got OpenClaw memory search from 82 seconds to 30ms — Check it out.
1
[removed]
2026-02-16T01:28:55
https://www.reddit.com/r/LocalLLaMA/comments/1r5w65j/i_got_openclaw_memory_search_from_82_seconds_to/
TigerAIElectrical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5w65j
false
null
t3_1r5w65j
/r/LocalLLaMA/comments/1r5w65j/i_got_openclaw_memory_search_from_82_seconds_to/
false
false
self
1
null
Resources for tracking new model releases?
1
I’m looking for something that provides a bird's-eye view of the release landscape. Something like a calendar or timeline that shows when models were released would be perfect. A similar resource for research papers and tools would be incredibly helpful as well. If you know where I can find something like this, please ...
2026-02-16T01:27:05
https://www.reddit.com/r/LocalLLaMA/comments/1r5w4rw/resources_for_tracking_new_model_releases/
skinnyjoints
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5w4rw
false
null
t3_1r5w4rw
/r/LocalLLaMA/comments/1r5w4rw/resources_for_tracking_new_model_releases/
false
false
self
1
null
With batching + high utilization (a la a cloud environment), what is the power consumption of something like GLM-5?
1
I'm assuming that power consumption numbers on fp8 per million tokens for something like GLM-5 compare favorably to running a smaller model locally at concurrency 1 due to batching, as long as utilization is high enough to fill batches. I realize this isn't a particularly local-favorable statement, but I also figured ...
2026-02-16T01:19:50
https://www.reddit.com/r/LocalLLaMA/comments/1r5vz8f/with_batching_high_utilization_a_la_a_cloud/
iansltx_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1r5vz8f
false
null
t3_1r5vz8f
/r/LocalLLaMA/comments/1r5vz8f/with_batching_high_utilization_a_la_a_cloud/
false
false
self
1
null