Dataset columns (type, observed range):

title: string, length 1–300
score: int64, 0–8.54k
selftext: string, length 0–41.5k
created: timestamp[ns], 2023-04-01 04:30:41 – 2026-03-04 02:14:14
url: string, length 0–878
author: string, length 3–20
domain: string, length 0–82
edited: timestamp[ns], 1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded: int64, 0–2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646–1.8k
name: string, length 10
permalink: string, length 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4–213
ups: int64, 0–8.54k
preview: string, length 301–5.01k
Koboldcpp problem on Windows.
3
Hi. I was using LM Studio with my RTX 4080 and added a second graphics card, an RTX 5060. LM Studio uses the 5060 purely as memory expansion and places no load on it, even though the settings are set to use both cards (I tried both the split and priority options).

I wanted to try llama.cpp, but I didn't understand how to run it, so I downloaded koboldcpp. And now I don't understand the problem. I'm trying to run gpt-oss 120b. The model consists of two GGUF files. I select the first one, and the console says that a multi-file model is detected, so that part looks fine. But after loading, I ask a question and the model just spits out a few incoherent words and then stops. It seems like the second model file didn't load.

The RTX 5060 also isn't being used: the program doesn't load any part of the model into its memory, even though I specified "ALL" GPUs in the koboldcpp settings. That should have used both GPUs, right? I set card number 1, the RTX 4080, as the priority.

I also noticed in LM Studio that when I try to use two video cards, besides the performance drop from 10.8 to 10.2 tokens/s, the model becomes more sluggish: it starts emitting unintelligible symbols and text in... Spanish? And the response itself is full of errors.
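Since the post mentions wanting to try llama.cpp directly, here is a minimal llama-server sketch for a two-file GGUF split across two GPUs. The filename, split ratio, layer count, and context size are placeholders under stated assumptions, not a verified fix:

```bash
# Point llama-server at the FIRST shard only; llama.cpp finds the remaining
# -0000x-of-0000y.gguf shards in the same directory automatically.
# --tensor-split sets the per-GPU share (rough 4080:5060 VRAM ratio assumed),
# --main-gpu keeps scratch/KV buffers on the faster card.
./llama-server \
  -m ./gpt-oss-120b-00001-of-00002.gguf \
  -ngl 24 \
  --tensor-split 16,8 \
  --main-gpu 0 \
  --ctx-size 8192 \
  --port 8080
```

The `-ngl` value is something to tune: raise it until VRAM on both cards is nearly full, then back off.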
2025-11-15T07:59:13
https://www.reddit.com/r/LocalLLaMA/comments/1oxm1xv/koboldcpp_problem_on_windows/
Pretend-Pumpkin7506
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxm1xv
false
null
t3_1oxm1xv
/r/LocalLLaMA/comments/1oxm1xv/koboldcpp_problem_on_windows/
false
false
self
3
null
How much better can A.I get via software updates before it just begins to rely on more VRAM?
0
I don't think anyone foresees VRAM magically coming down in price to where, in 10 years, you can get 2TB of VRAM for $399. Moore's law is dead, so don't expect futurism to save the situation. With that said, when they release Claude 4, then Claude 4.2, then Claude 5, then Claude 8, how much of that is them just tacking on more hardware vs. making "smarter" models? I.e., I don't think anyone believes "one day we will be able to run the equivalent of Claude Opus in 8GB of VRAM!", so what does the graph look like of how much can be squeezed out via software advancements before they realistically just begin to rely on more hardware?

There seem to be a lot of questions/conversations that aren't in the public discourse, but that undoubtedly are being had by the people who run these companies, even though these questions have important ramifications for everyone depending on what the answers are. Another example is the question of "what happens to these AI companies if, for example, there IS a miracle development in tech that renders their trillions invested in the current hardware a waste and now they have to buy trillions of the new hardware?" Are we supposed to assume that AI companies have secret and probably illegal agreements with NVIDIA and AMD to purposefully not do that? That harms civilization. Or what if there were a disruption in Taiwan that lasted 6 years? What would that do to the AI bubbles, and then to the economy?

These are just some examples of what seem like pretty glaring holes. Let's focus on the first question: how much more can be gained by software ingenuity before it's over and all future advancement can only be achieved by unsustainably adding more computing power, and what are the ramifications given whatever the answer is?
2025-11-15T07:57:30
https://www.reddit.com/r/LocalLLaMA/comments/1oxm0z8/how_much_better_can_ai_get_via_software_updates/
devshore
self.LocalLLaMA
2025-11-15T08:00:41
0
{}
1oxm0z8
false
null
t3_1oxm0z8
/r/LocalLLaMA/comments/1oxm0z8/how_much_better_can_ai_get_via_software_updates/
false
false
self
0
null
That is possible?
0
How am i using 21gb of ram on a 16gb mac 😭
2025-11-15T07:32:50
https://i.redd.it/ac32tbzyid1g1.jpeg
Immediate_Lock7595
i.redd.it
1970-01-01T00:00:00
0
{}
1oxlmur
false
null
t3_1oxlmur
/r/LocalLLaMA/comments/1oxlmur/that_is_possible/
false
false
https://b.thumbs.redditm…CdKBPoYVn-mk.jpg
0
{'enabled': True, 'images': [{'id': 'bNYB_1sT6JHau-ttlKxu2-sBy6ONNsAKvEbQJhA83eY', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ac32tbzyid1g1.jpeg?width=108&crop=smart&auto=webp&s=9e571a9518fd698642ec80567494d1f2ed07696a', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ac32tbzyid1g1.jpeg?width=216&crop=smart&auto=webp&s=51500d7aacef27baded5e3eb1eab71341a40be0a', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ac32tbzyid1g1.jpeg?width=320&crop=smart&auto=webp&s=da591041cf5f28042cecac39312707555bf3f187', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/ac32tbzyid1g1.jpeg?width=640&crop=smart&auto=webp&s=637bbf9f8ab8582ff5f9eba22c7f18d827de10e4', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/ac32tbzyid1g1.jpeg?width=960&crop=smart&auto=webp&s=aead048bfdf3e4df98677c4cd238008c99970324', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/ac32tbzyid1g1.jpeg?width=1080&crop=smart&auto=webp&s=58e31d8a4b25f7f53e74957cd089f0bb9b22507f', 'width': 1080}], 'source': {'height': 3060, 'url': 'https://preview.redd.it/ac32tbzyid1g1.jpeg?auto=webp&s=110230848cd55ddee50fce2dd621c499b4b7c13a', 'width': 4080}, 'variants': {}}]}
New AI Generated copypasta just dropped
0
[deleted]
2025-11-15T07:01:57
[deleted]
1970-01-01T00:00:00
0
{}
1oxl4tj
false
null
t3_1oxl4tj
/r/LocalLLaMA/comments/1oxl4tj/new_ai_generated_copypasta_just_dropped/
false
false
default
0
null
What makes closed source models good? Data, Architecture, Size?
81
I know Kimi K2, Minimax M2 and Deepseek R1 are strong, but I asked myself: what makes the closed source models like Sonnet 4.5 or GPT-5 so strong? Do they have better training data? Or are their models even bigger, e.g. 2T, or do their models have some really good secret architecture (what I assume for Gemini 2.5 with its 1M context)?
2025-11-15T07:00:06
https://www.reddit.com/r/LocalLLaMA/comments/1oxl3ju/what_makes_closed_source_models_good_data/
Bitter-College8786
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxl3ju
false
null
t3_1oxl3ju
/r/LocalLLaMA/comments/1oxl3ju/what_makes_closed_source_models_good_data/
false
false
self
81
null
Kimi K2 Thinking Benchmark?
1
[deleted]
2025-11-15T05:53:43
[deleted]
1970-01-01T00:00:00
0
{}
1oxjyd9
false
null
t3_1oxjyd9
/r/LocalLLaMA/comments/1oxjyd9/kimi_k2_thinking_benchmark/
false
false
default
1
null
ATLES Embedding: 0.834 Spearman (STS-B) — Truth Fleet v3.0
1
[removed]
2025-11-15T05:30:10
https://www.reddit.com/r/LocalLLaMA/comments/1oxjjkz/atles_embedding_0834_spearman_stsb_truth_fleet_v30/
Feeling-Champion2480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxjjkz
false
null
t3_1oxjjkz
/r/LocalLLaMA/comments/1oxjjkz/atles_embedding_0834_spearman_stsb_truth_fleet_v30/
true
false
spoiler
1
null
Interesting to finally have some real param numbers on these bigger closed-source models (Grok). I listed a few other big models for reference. See source in text
0
[deleted]
2025-11-15T05:15:04
[deleted]
1970-01-01T00:00:00
0
{}
1oxj9rc
false
null
t3_1oxj9rc
/r/LocalLLaMA/comments/1oxj9rc/interesting_to_finally_have_some_real_param/
false
false
default
0
null
How do I find those 3AB like models?
0
Are those called mixture of experts? Sorry for my ignorance, but I couldn't find any filter on Hugging Face to find those models that have fewer active parameters.
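A small sketch of one way to hunt for these on the Hugging Face Hub, assuming the `huggingface_hub` client. There is no "active parameters" filter, so this just matches the naming convention many MoE releases use (e.g. "30B-A3B" = 30B total, 3B active); the search string is a heuristic, not an official tag:

```python
from huggingface_hub import HfApi  # pip install huggingface_hub

api = HfApi()
# Match model IDs containing "A3B", most-downloaded first.
for m in api.list_models(search="A3B", sort="downloads", direction=-1, limit=20):
    print(m.id)
```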
2025-11-15T05:09:59
https://www.reddit.com/r/LocalLLaMA/comments/1oxj6g1/how_do_i_find_those_3ab_like_models/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxj6g1
false
null
t3_1oxj6g1
/r/LocalLLaMA/comments/1oxj6g1/how_do_i_find_those_3ab_like_models/
false
false
self
0
null
I just realized 20 tokens per second is a decent speed in token generation.
50
If I can ever afford a Mac Studio with 512GB of unified memory, I will happily take it. I just want inference, and even 20 tokens per second is not bad. At least I'll be able to run models locally on it.
2025-11-15T05:04:06
https://www.reddit.com/r/LocalLLaMA/comments/1oxj2mq/i_just_realized_20_tokens_per_second_is_a_decent/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxj2mq
false
null
t3_1oxj2mq
/r/LocalLLaMA/comments/1oxj2mq/i_just_realized_20_tokens_per_second_is_a_decent/
false
false
self
50
null
Cerebras Reaped Minimax m2 Need Quants
0
Cerebras informed me in another post that they REAPed MiniMax M2. Can someone please quantize it so we poor GPU people can also use it?
2025-11-15T05:03:50
https://www.reddit.com/r/LocalLLaMA/comments/1oxj2gd/cerebras_reaped_minimax_m2_need_quants/
lumos675
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxj2gd
false
null
t3_1oxj2gd
/r/LocalLLaMA/comments/1oxj2gd/cerebras_reaped_minimax_m2_need_quants/
false
false
self
0
null
Anthropic pushing again for regulation of open source models?
1,931
2025-11-15T04:40:56
https://i.redd.it/623qojxaoc1g1.jpeg
MasterDragon_
i.redd.it
1970-01-01T00:00:00
0
{}
1oximzj
false
null
t3_1oximzj
/r/LocalLLaMA/comments/1oximzj/anthropic_pushing_again_for_regulation_of_open/
false
false
https://b.thumbs.redditm…FSUTz_lNa8eg.jpg
1,931
{'enabled': True, 'images': [{'id': 'dweLmNZbiqC_SbGAvWS9DJEpSMhj0H8FCvSObqjunoI', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/623qojxaoc1g1.jpeg?width=108&crop=smart&auto=webp&s=749091cd2b89b5bd4383a16aec0f8d322566fd22', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/623qojxaoc1g1.jpeg?width=216&crop=smart&auto=webp&s=2308718929e87b93afba1cc0ed80952053912035', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/623qojxaoc1g1.jpeg?width=320&crop=smart&auto=webp&s=6460efb25f95533134a5aa3a963a8b63ad1bb51c', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/623qojxaoc1g1.jpeg?width=640&crop=smart&auto=webp&s=dd955c46ca05077bed949b46643bd7061e16d04c', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/623qojxaoc1g1.jpeg?width=960&crop=smart&auto=webp&s=be45896a03abfa98e28ccb259f1845831e38e56d', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/623qojxaoc1g1.jpeg?width=1080&crop=smart&auto=webp&s=4b70d7b1faa45f08c3bd8796380a7940d726e447', 'width': 1080}], 'source': {'height': 3168, 'url': 'https://preview.redd.it/623qojxaoc1g1.jpeg?auto=webp&s=c75a2fda3d651b2318cb1e04cc567c97a9e5c51a', 'width': 1440}, 'variants': {}}]}
Someone listed a DGX Station on FB marketplace
1
[deleted]
2025-11-15T04:40:10
[deleted]
1970-01-01T00:00:00
0
{}
1oximhb
false
null
t3_1oximhb
/r/LocalLLaMA/comments/1oximhb/someone_listed_a_dgx_station_on_fb_marketplace/
false
false
default
1
null
Performance loss of pairing a 5080 and a 3060 with the 3060 being stuck on PCIE 3 x4?
2
Title. I’ve made some sketchy build choices and space compromises, which has all resulted in me looking at running a 5080 on PCIe 5 x16 and a 3060 over OCuLink on PCIe 3 x4, since I can snap up a refurbished 3060 for 160 dollars. I know such a setup can work, but my main question is what kind of penalties I will encounter when running it, and whether a setup like this can actually enable me to run larger models at a speed faster than 30-40 tokens per second, or whether I should just look into getting a 5090.
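A rough back-of-the-envelope sketch of the link itself. PCIe 3.0 x4 is about 3.9 GB/s theoretical; the usable bandwidth and the per-token traffic figure below are assumed illustration values, since with layer splitting only small activations cross the bus per generated token:

```python
# Hypothetical numbers to get a feel for the PCIe 3.0 x4 ceiling.
link_gb_s = 3.5             # assumed usable bandwidth on PCIe 3.0 x4
traffic_mb_per_token = 2.0  # made-up cross-GPU activation traffic per token

ms_per_token = traffic_mb_per_token / 1024 / link_gb_s * 1000
print(f"~{ms_per_token:.2f} ms of bus time per token "
      f"(~{1000 / ms_per_token:.0f} tok/s ceiling from the link alone)")
```

Under these assumptions the narrow link is not the main token-generation bottleneck; model loading and prompt processing tend to feel it much more, though real penalties depend on how the model is split.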
2025-11-15T04:08:16
https://www.reddit.com/r/LocalLLaMA/comments/1oxi10e/performance_loss_of_pairing_a_5080_and_a_3060/
steve09089
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxi10e
false
null
t3_1oxi10e
/r/LocalLLaMA/comments/1oxi10e/performance_loss_of_pairing_a_5080_and_a_3060/
false
false
self
2
null
Kimi K2 Thinking 1bit just 0.22 tokens/s on 512GB RAM RTX 4090 EPYC 64 core machine
6
As per the [unsloth guide](https://docs.unsloth.ai/models/kimi-k2-thinking-how-to-run-locally) it seems I should be expecting around an order of magnitude faster speeds with the UD-TQ1_0 quant. I wonder if there's anything simple I might be doing wrong. This is how I'm running it:

**Build latest llama.cpp (15th Nov)**

```
cd ~/src
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
cmake \
    --build llama.cpp/build \
    --config Release -j --clean-first \
    --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli llama-server
cp llama.cpp/build/bin/llama-* llama.cpp/
```

**Run llama-server**

```
./llama.cpp/llama-server \
    --model ~/models/UD-TQ1_0/Kimi-K2-Thinking-UD-TQ1_0-00001-of-00006.gguf \
    --alias "unsloth/Kimi-K2-Thinking" \
    --threads -1 \
    -fa on \
    --n-gpu-layers 999 \
    -ot ".ffn_.*_exps.=CPU" \
    --min_p 0.01 \
    --ctx-size 16384 \
    --port 8002 \
    --jinja
```

This is the performance I'm getting in the web UI:

https://preview.redd.it/2pbs2j75ec1g1.png?width=1714&format=png&auto=webp&s=853b85d19909c0fab57cfd2295b7e1b07e255369

From another request:

```
prompt eval time = 17950.58 ms /  26 tokens ( 690.41 ms per token,  1.45 tokens per second)
       eval time = 522630.84 ms / 110 tokens (4751.19 ms per token,  0.21 tokens per second)
      total time = 540581.43 ms / 136 tokens
```

**nvidia-smi while generating:**

```
$ nvidia-smi
Sat Nov 15 03:51:35 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.28.03              Driver Version: 560.28.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        On  |   00000000:83:00.0 Off |                  Off |
|  0%   55C    P0             69W /  450W |   12894MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A   1332381      C   ./llama.cpp/llama-server                    12884MiB |
+-----------------------------------------------------------------------------------------+
```

**llama-server in top while generating:**

```
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
1332381 eesahe    20   0  281.3g 229.4g 229.1g S 11612  45.5 224:01.19 llama-server
```
2025-11-15T03:53:14
https://www.reddit.com/r/LocalLLaMA/comments/1oxhqk5/kimi_k2_thinking_1bit_just_022_tokenss_on_512gb/
eesahe
self.LocalLLaMA
2025-11-15T04:40:00
0
{}
1oxhqk5
false
null
t3_1oxhqk5
/r/LocalLLaMA/comments/1oxhqk5/kimi_k2_thinking_1bit_just_022_tokenss_on_512gb/
false
false
https://b.thumbs.redditm…nea4JKLst44s.jpg
6
null
Why are there so much misinformation and lies around "open-source" models?
0
a) Performance: None of the frontier open-source models are anywhere near the frontier closed-source models yet. This is evident to anyone who has used these models in a realistic setting that goes beyond one-shot textbook question answering. Most of these models are heavily benchmaxxed and generalize very poorly. Kimi K2 or Minimax M2 are nowhere near Sonnet 4.5 or Codex in terms of real-world performance. Yet people keep lying about/inflating the abilities of these models. Also, they hallucinate wildly. The performance also varies wildly from provider to provider, and the providers and model creators just shift the blame onto each other.

b) Price: From a regular user's perspective there is absolutely no difference between these "open-source" models and closed-source ones. Most of these are several hundred billion to 1T parameters. So a regular user is paying OpenRouter or another provider instead of OpenAI/Anthropic/Google.

c) Privacy/Security: Since the regular user is just paying for another provider, they are essentially sending their data to these providers instead of OpenAI/Google/Anthropic, so there is absolutely no advantage in terms of privacy/security like with a local model. And since most of these open models are published without any noteworthy safety work (except for the big model providers), God knows how vulnerable these things are to regular jailbreaks and other more problematic sycophancy issues.

d) "Open-Source": Unlike regular open-source software, most of these models are closed unless the training data and training method are fully published (discounting the opaque nature of deep neural networks themselves). In that sense only a couple of companies like Allen AI and NVIDIA are actually open-sourcing models. All the frontier Chinese model providers go completely radio silent when it comes to the training data, which is surprising since that is a critical component needed for anyone to reproduce the "open science" they are publishing.

I believe open source and open science are very important and should be encouraged. But there is a lot going on in this area under the guise of open source and open science that is clearly not, and it needs to be addressed.
2025-11-15T03:37:10
https://www.reddit.com/r/LocalLLaMA/comments/1oxhf5e/why_are_there_so_much_misinformation_and_lies/
Terrible-Priority-21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxhf5e
false
null
t3_1oxhf5e
/r/LocalLLaMA/comments/1oxhf5e/why_are_there_so_much_misinformation_and_lies/
false
false
self
0
null
High Sierra Just Became the Poorest Man’s AI Rig-PyTorch 2 + CUDA 11.2 Shim in Oven. v1 Still Moggs Your Colab.
0
2016 MBP → 70B @ 2.1 tok/s, SDXL @ 4.7 it/s. no cloud. no M1. no rent. v2 shim done. build compiling. repo: [https://github.com/careunix/PyTorch-HighSierra-CUDA-Revival](https://github.com/careunix/PyTorch-HighSierra-CUDA-Revival) use v1 or stay broke.
2025-11-15T02:09:36
https://www.reddit.com/r/LocalLLaMA/comments/1oxfm0m/high_sierra_just_became_the_poorest_mans_ai/
Adept_Tip8375
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxfm0m
false
null
t3_1oxfm0m
/r/LocalLLaMA/comments/1oxfm0m/high_sierra_just_became_the_poorest_mans_ai/
false
false
self
0
{'enabled': False, 'images': [{'id': 'fldKwD0CK6heHFm_QrqkC0G2nZBvog3v77v6WMkenMk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fldKwD0CK6heHFm_QrqkC0G2nZBvog3v77v6WMkenMk.png?width=108&crop=smart&auto=webp&s=2324ca1682fc6f5300b18153120e9fdb737d1831', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fldKwD0CK6heHFm_QrqkC0G2nZBvog3v77v6WMkenMk.png?width=216&crop=smart&auto=webp&s=a99315d27c02b1840bb7228ff598aee68067ec2e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fldKwD0CK6heHFm_QrqkC0G2nZBvog3v77v6WMkenMk.png?width=320&crop=smart&auto=webp&s=bd2e3382c7761e68fb39e5e73008ee5454b64cbe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fldKwD0CK6heHFm_QrqkC0G2nZBvog3v77v6WMkenMk.png?width=640&crop=smart&auto=webp&s=c85196099f9aaa0b44320f7500ba81e9a0587d43', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fldKwD0CK6heHFm_QrqkC0G2nZBvog3v77v6WMkenMk.png?width=960&crop=smart&auto=webp&s=83b7bb454cbc6e2469c77862f1b44f27d457e573', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fldKwD0CK6heHFm_QrqkC0G2nZBvog3v77v6WMkenMk.png?width=1080&crop=smart&auto=webp&s=86a2f4063b6c121c4edd100765f6b9a901c11671', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fldKwD0CK6heHFm_QrqkC0G2nZBvog3v77v6WMkenMk.png?auto=webp&s=b9365dec389ca351d29cd8715bb088f6e51044da', 'width': 1200}, 'variants': {}}]}
BranchBox: isolated dev environments for parallel agent runs
8
I’ve been running several local coding agents in parallel and kept hitting the same issue: everything was stepping on everything else. Ports collided, Docker networks overlapped, databases were overwritten, and devcontainer configs leaked across projects.

So I built BranchBox, an open-source tool that creates a fully isolated dev environment per feature or agent task. Each environment gets:

* its own Git worktree
* its own devcontainer
* its own Docker network
* its own database
* its own ports
* isolated env vars
* optional tunnels (cloudflared for now, ngrok to come)

Everything can run side-by-side without interference. It has been useful for letting multiple agents explore ideas or generate code in parallel while keeping my main workspace clean and reproducible.

Repo: [https://github.com/branchbox/branchbox](https://github.com/branchbox/branchbox)

Docs: [https://branchbox.github.io/branchbox/](https://branchbox.github.io/branchbox/)

Happy to answer questions or hear suggestions.
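A minimal sketch of the kind of isolation described above, using plain `git worktree` and Docker primitives rather than BranchBox's own CLI (which isn't shown in the post); the project name and image are placeholders:

```bash
# One isolated workspace per feature/agent, assuming a project named "myapp".
FEATURE=agent-task-1
git worktree add "../myapp-$FEATURE" -b "$FEATURE"      # separate checkout per task
docker network create "myapp-$FEATURE"                   # private network per task
docker run -d --name "db-$FEATURE" --network "myapp-$FEATURE" \
  -e POSTGRES_PASSWORD=dev postgres:16                   # throwaway database per task
```

BranchBox automates this pattern (plus ports, env vars, and devcontainers) per the repo description.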
2025-11-15T01:42:00
https://www.reddit.com/r/LocalLLaMA/comments/1oxf132/branchbox_isolated_dev_environments_for_parallel/
Quirky_Researcher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxf132
false
null
t3_1oxf132
/r/LocalLLaMA/comments/1oxf132/branchbox_isolated_dev_environments_for_parallel/
false
false
self
8
{'enabled': False, 'images': [{'id': 'O_7N5nJm8R_aoabDcO3iQrJ4F0HjKc_dfzThiXkE_KI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O_7N5nJm8R_aoabDcO3iQrJ4F0HjKc_dfzThiXkE_KI.png?width=108&crop=smart&auto=webp&s=0b189853af90cb2c86b7f037d6d67a25c0e857e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/O_7N5nJm8R_aoabDcO3iQrJ4F0HjKc_dfzThiXkE_KI.png?width=216&crop=smart&auto=webp&s=2f45a5153513f4caee160d19b43ff22f2483e87f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/O_7N5nJm8R_aoabDcO3iQrJ4F0HjKc_dfzThiXkE_KI.png?width=320&crop=smart&auto=webp&s=7c0235d1d41c6942c2d6d0cb0e362fd690670ba2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/O_7N5nJm8R_aoabDcO3iQrJ4F0HjKc_dfzThiXkE_KI.png?width=640&crop=smart&auto=webp&s=ddfbdcf3e3edf0f4441c36749560c375bf7edce5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/O_7N5nJm8R_aoabDcO3iQrJ4F0HjKc_dfzThiXkE_KI.png?width=960&crop=smart&auto=webp&s=a335dce5c5e6392ed4de330a59cc2a1d0024e908', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/O_7N5nJm8R_aoabDcO3iQrJ4F0HjKc_dfzThiXkE_KI.png?width=1080&crop=smart&auto=webp&s=4a1688782897e095e86a6019555bea59b00aeb25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/O_7N5nJm8R_aoabDcO3iQrJ4F0HjKc_dfzThiXkE_KI.png?auto=webp&s=2afcb762cbb7b7d9c8b6b4fa9f36258c7da72ac7', 'width': 1200}, 'variants': {}}]}
I benchmarked "vanilla" and REAP'd Qwen3-Coder models locally, do my results match your experience?
23
I've been curious about REAPs, and how they might compare to Unsloth Dynamic quants (my current go-to). So, I ran a few iterations of aider polyglot locally to get a sense of which gives the best bang-for-VRAM. Test setup and results below:

TL;DR: Statistically speaking, with my small sample size, I did not find a benefit to the REAP variant of Qwen3-Coder-30B-A3B.

**Goal**

Determine whether the higher quants enabled by REAP'd models' smaller initial size provide benefits to coding performance, which tends to be heavily impacted by quantization. In this case, pitting Unsloth's UD-Q6_K_XL of "vanilla" Qwen3-Coder-30B-A3B against bartowski's Q8_0 of Qwen3-Coder-REAP-25B-A3B, both of which fit fully in a 5090's VRAM with room for 40k context.

**Model Configuration**

Unsloth Dynamic:

```
"qwen3-coder-30b-a3b-instruct":
  cmd: |
    ${LLAMA_SERVER_CMD}
    ${BOILERPLATE_SETTINGS}
    --model "${MODEL_BASE_DIR}\unsloth\Qwen3-Coder-30B-A3B-Instruct-GGUF\Qwen3-Coder-30B-A3B-Instruct-UD-Q6_K_XL.gguf"
    --ctx-size 40960
    --temp 0.7
    --min-p 0.0
    --top-p 0.8
    --top-k 20
    --repeat-penalty 1.05
    --jinja
```

REAP:

```
"qwen3-coder-REAP-25B-A3B":
  cmd: |
    ${LLAMA_SERVER_CMD}
    ${BOILERPLATE_SETTINGS}
    --model "${MODEL_BASE_DIR}\bartowski\cerebras_Qwen3-Coder-REAP-25B-A3B-GGUF\cerebras_Qwen3-Coder-REAP-25B-A3B-Q8_0.gguf"
    --ctx-size 40960
    --temp 0.7
    --min-p 0.0
    --top-p 0.8
    --top-k 20
    --repeat-penalty 1.05
    --jinja
```

**Aider Command**

`OPENAI_BASE_URL=http://<llama-swap host IP>:8080/v1 OPENAI_API_KEY="none" ./benchmark/benchmark.py <results dir name> --model openai/<model name> --num-ctx 40960 --edit-format whole --threads 1 --sleep 5 --exercises-dir polyglot-benchmark --new`

**Results**

[aider-polyglot 0.86.2.dev results](https://preview.redd.it/vsahw6pqkb1g1.png?width=1359&format=png&auto=webp&s=0ec3f9432e1cb55c5d1edbe86b271f1a9c3e6a09)

| | Unsloth Dynamic | REAP |
|:-|:-|:-|
| Pass 1 Average | 12.0% | 10.1% |
| Pass 1 Std. Dev. | 0.77% | 2.45% |
| Pass 2 Average | 29.9% | 28.0% |
| Pass 2 Std. Dev. | 1.56% | 2.31% |

This amounts to a tie, since each model's average Pass 2 results fall within the other's standard deviation. Meaning, for this benchmark, there is no benefit to using the higher quant of the REAP'd model. And it's possible that it's a detriment, given the higher variability of results from the REAP'd model.

That said, I'd caution against reading too much into this result. Though aider polyglot is in my opinion a good benchmark, and each run at 40k context contains 225 test cases, 3 runs on 2 models is not peer-review-worthy research.

For those of you who've used both "vanilla" and REAP'd models for coding, does this match your experience? Do you notice other things that wouldn't show up in this kind of benchmark?
2025-11-15T01:37:15
https://www.reddit.com/r/LocalLLaMA/comments/1oxexii/i_benchmarked_vanilla_and_reapd_qwen3coder_models/
MutantEggroll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxexii
false
null
t3_1oxexii
/r/LocalLLaMA/comments/1oxexii/i_benchmarked_vanilla_and_reapd_qwen3coder_models/
false
false
https://b.thumbs.redditm…Pmqi7u7IoUDw.jpg
23
null
Premise: MoE models have exploitable locality in expert activation patterns, and LRU caching with profiling could cut VRAM requirements in half.
51
Recently doing some brainstorming - and a few back-of-the-page calculations, and came up with this. The premise is that with some profiling based on actual user workload, we should be able to determine expert activation patterns and locality for caching. TLDR; A "smart" MOE caching size could reduce VRAM needs by up to half. I'm sure I'm not the first to think about this, and I'm sure I've got a screw loose, but maybe someone can set me straight.

**MoE models have exploitable locality in expert activation patterns, and LRU caching with profiling could cut VRAM requirements in half.** Meaning, that:

* Total VRAM budget: **X**
* Expert size: **E** (some fraction of total model Y)
* Can fit in cache: **C = X / E** experts
* Experts activated per token across all layers: **A**
* LRU cache hit rate: **H** (empirically ~70-80% with temporal locality)

**Cost Model**

**Without swapping**: Need all experts in VRAM = can't run the model if total experts > X

**With swapping**:

* Cache hits: free (already in VRAM)
* Cache misses: pay PCIe transfer cost

**Per-token cost**:

* Expert activations needed: A
* Cache hits: A × H (free)
* Cache misses: A × (1 - H) × transfer_cost

**Transfer cost**:

* PCIe bandwidth: ~25 GB/s practical
* Expert size: E
* Transfer time: E / 25 GB/s
* Token generation time target: ~10-50ms (20-100 tokens/sec)

**Break-even** - You want: `cache_miss_overhead < token_generation_time_savings`

**Simple threshold**: If C ≥ A / (1 - target_miss_rate), then swapping is likely worth it.

**Per layer** (assuming 8 experts per layer):

* If C_layer = 2: you can only fit exactly what's needed, 0% cache benefit
* If C_layer = 4: ~50-60% hit rate
* If C_layer = 6: ~75-85% hit rate
* If C_layer = 8: 100% hit rate (all experts cached)

**Break-even point**: When `(1 - H) × E / 25GB/s < token_budget`

If E = 1GB, token_budget = 20ms:

* With H = 75%: 0.25 × 1GB / 25GB/s = 10ms ✓ Worth it
* With H = 50%: 0.50 × 1GB / 25GB/s = 20ms ≈ Break-even
* With H = 25%: 0.75 × 1GB / 25GB/s = 30ms ✗ Too slow

If you can fit at least **half the experts** in VRAM, LRU swapping is likely a win because temporal locality gives you 70-80% hit rates.

**Not worth it when**: C < 0.25 × total_experts - you're thrashing too much

**Sweet spot**: Models where you can fit 50-75% of experts - you get most of the benefit of the full model at a fraction of the VRAM cost.
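A tiny sketch of the break-even formula above with the post's example numbers plugged in (25 GB/s PCIe, 1 GB experts, 20 ms token budget); the hit rates are the assumed values from the post, not measurements:

```python
# Break-even check from the post: (1 - H) * E / bandwidth < token_budget
PCIE_GBPS = 25.0         # practical PCIe bandwidth, GB/s
EXPERT_GB = 1.0          # expert size E, GB (example value)
TOKEN_BUDGET_MS = 20.0   # per-token time budget (50 tok/s target)

for hit_rate in (0.75, 0.50, 0.25):
    miss_ms = (1.0 - hit_rate) * EXPERT_GB / PCIE_GBPS * 1000.0
    if miss_ms < TOKEN_BUDGET_MS - 1e-9:
        verdict = "worth it"
    elif miss_ms <= TOKEN_BUDGET_MS + 1e-9:
        verdict = "break-even"
    else:
        verdict = "too slow"
    print(f"H={hit_rate:.0%}: ~{miss_ms:.0f} ms transfer per token -> {verdict}")
```

Running it reproduces the 10 ms / 20 ms / 30 ms figures in the list above.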
2025-11-15T01:16:49
https://www.reddit.com/r/LocalLLaMA/comments/1oxeif6/premise_moe_models_have_exploitable_locality_in/
CodeSlave9000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxeif6
false
null
t3_1oxeif6
/r/LocalLLaMA/comments/1oxeif6/premise_moe_models_have_exploitable_locality_in/
false
false
self
51
null
Released a clean 7,074,970-file English Wikipedia dataset (JSON per article) — ideal for LLaMA pretraining/finetuning
1
[removed]
2025-11-15T00:41:47
https://www.reddit.com/r/LocalLLaMA/comments/1oxdqw9/released_a_clean_7074970file_english_wikipedia/
Better_Muffin_6974
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxdqw9
false
null
t3_1oxdqw9
/r/LocalLLaMA/comments/1oxdqw9/released_a_clean_7074970file_english_wikipedia/
false
false
self
1
null
Best local model to learn from?
17
I'm currently trying to learn quantum physics, and it's been invaluable having a model to talk to to get my own personal understanding sorted out. However, this is a subject where the risk of hallucinations I can't catch is quite high, so I'm wondering if there are any models known for being particularly good in this area. The only constraint I have personally is that it needs to fit in 96GB of RAM - I can tolerate extremely slow token generation, but running from disk is the realm of the unhinged.
2025-11-15T00:33:39
https://www.reddit.com/r/LocalLLaMA/comments/1oxdkdt/best_local_model_to_learn_from/
agreeduponspring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxdkdt
false
null
t3_1oxdkdt
/r/LocalLLaMA/comments/1oxdkdt/best_local_model_to_learn_from/
false
false
self
17
null
The Historical Position of Large Language Models — Full Edition
0
I. Introduction: From Noise to Structure In just a few years, large language models (LLMs) have shifted from research curiosities to global infrastructure. They write code, summarize papers, draft legal emails, generate ideas, and converse with millions of people every day. For many observers, this feels like standing on the threshold of Artificial General Intelligence (AGI). If a machine can talk like this, reason like this, and solve problems across so many domains, what else is missing? Yet history teaches a sober lesson: some of the most intoxicating technological moments are also the moments when our judgment is most easily distorted. Steam engines once looked like the final industrial form. Early computers were once imagined as centralized mainframes forever. Each time, the first successful paradigm was later understood as only the first stage of a deeper evolution. This essay argues that LLMs occupy exactly such a position in the history of machine intelligence. They are not the end of the story. They are the beginning of a longer arc. More precisely: - LLMs represent the **first stage** of machine intelligence: the stage of **communication intelligence**, where machines learn to speak, answer, imitate, and interface with humans through language. - What must follow is the emergence of **Cognitive Native Intelligence Architecture (CNIA)**—a family of architectures focused not on probabilistic expression, but on **structured, self-consistent, closed-loop cognition**. To see this clearly, we do not need metaphors from science fiction. We only need to look at how **human intelligence itself develops**. II. Two Axes of Intelligence: Communication vs Cognition Intelligence is often treated as a single scalar quantity: more or less, strong or weak, narrow or general. But when we look closer—both in humans and in machines—we find at least two distinct axes: 1. **Communication Intelligence** - The ability to produce coherent language and behavior that others can interpret as meaningful. - Includes fluency, style, responsiveness, the capacity to “say something appropriate” in a given context. 2. **Cognitive Intelligence** - The ability to form and manipulate internal structures of understanding. - Includes abstraction, logical coherence, self-reflection, stable conceptual frameworks, and the capacity to maintain internal consistency over time. An entity can be high on one axis and low on the other. A charismatic speaker may be persuasive but shallow. A quiet researcher may be structurally deep but socially opaque. What we are witnessing with LLMs is the first time in history that **machines have become extremely strong on the communication axis**, while still being structurally fragile on the cognitive axis. To understand why this matters, it helps to look at a system whose development we know intimately: the human mind. III. The Human Analogy: Early Language Is Not Mature Thinking Children offer a powerful analogy for understanding the difference between communication intelligence and cognitive intelligence. A small child, by the age of three or four, can often: - Speak in full sentences - Retell stories heard from adults - Mimic tone, emotion, and even argumentation patterns - Ask seemingly deep questions - Engage in long conversations with adults From the outside, this can look surprisingly sophisticated. Yet internally, the child’s **cognitive structures** are still in formation. Logical rules are unstable. Causal links are shallow. Concepts are fluid and easily reshaped by context. 
Developmental psychology and everyday experience both tell us: - **Communication intelligence emerges early.** Children rapidly acquire language through imitation, repetition, and social reinforcement. - **Cognitive intelligence takes much longer.** It requires years of schooling, abstraction, reflection, and structural consolidation before a young person can reason with stable frameworks, hold conflicting ideas in balance, or systematically evaluate evidence. In other words: > In early development, **Communication Intelligence > Cognitive Intelligence**. The child can talk far beyond what they can truly explain or justify. Their language often runs ahead of their understanding. Only through a long process of **structuring, stabilizing, and internalizing patterns** does the mind slowly become a **thinking mind**, not just a **speaking mind**. This pattern is not an accident. It reflects a deeper law: - **Expression is easier than structure.** - **Fluency is easier than coherence.** - **Surface behavior is easier than internal architecture.** And this brings us back to LLMs. IV. Reading LLMs Through the Human Analogy If we look at LLMs through this human developmental lens, their position becomes much clearer. What are today’s LLMs incredibly good at? - Producing coherent, contextually appropriate natural language - Imitating styles, tones, and genres - Recombining existing information into new forms - Maintaining short-range conversational consistency - Giving the strong *impression* of understanding What are they much weaker at? - Maintaining long-range, cross-session conceptual stability - Guaranteeing logical self-consistency across domains and time - Operating with explicit, inspectable internal world models - Enforcing deterministic reasoning paths rather than probabilistic sampling - Building and preserving their own enduring cognitive structures In short: > LLMs are like a mind in its **early communicative phase**—extraordinarily articulate, but not yet structurally grounded. This does not mean LLMs are “just parrots.” They perform astonishing generalization across vast data. But their **internal mechanism** is still fundamentally that of a **probabilistic sequence model**, not a structured cognitive system with explicit, verifiable, and stable internal architecture. The human analogy is not perfect, but it is instructive: - Children first become impressive communicators. - Only much later do they become reliable thinkers. - LLMs have become impressive communicators at scale. - The next question is: **how do we build reliable machine thinkers?** This is where Cognitive Native Intelligence Architecture enters the picture. V. Probability Intelligence: What It Can and Cannot Do LLMs are built on probability. At their core, they: - Take in context (a sequence of tokens) - Estimate the probability distribution of the next token - Sample from that distribution (with some temperature/decoding strategy) - Repeat this process, token by token Scaling data, parameters, and training cleverly has made this simple mechanism incredibly powerful. 
But its **nature** is unchanged: the model is **always sampling from a distribution learned from past data.** This brings major strengths: - Wide coverage of human knowledge encoded in language - Flexibility across domains and styles - Ability to “fill in” plausible continuations in many contexts But also clear limitations: - No guarantee that different answers given at different times are mutually consistent - No explicit representation of “this is how I must reason about X” - No solid separation between “what is true” and “what is probable given the data distribution” - No built-in mechanism for **self-correction** other than external fine-tuning Probability intelligence can **approximate** cognition from the outside, but it does not **instantiate** cognition structurally on the inside. This is why, despite their impressive performance, LLMs still exhibit hallucinations, contradictions, and fragile behavior under distributional shifts. The model’s “knowledge” is not governed by a stable internal logic; it is governed by the statistics of observed text. At some point, further scaling of this paradigm yields diminishing returns with respect to **reliability, controllability, and verifiable understanding**. To move beyond this, we need something more than probability. We need **architecture**. VI. Cognitive Native Intelligence Architecture: Toward Native Structured Intelligence Cognitive Native Intelligence Architecture (CNIA) represents an architectural response to the limitations of pure LLM-based probability intelligence. Where LLMs focus on **learning from data to express**, CNIA focuses on **building native internal structures to understand and reason**. Key characteristics of CNIA include: 1. **Internal Structure** CNIA encodes knowledge in structured forms—relations, rules, conceptual graphs, or other explicit frameworks—rather than only implicit statistical weights. 2. **Closed-Loop Reasoning** Reasoning is treated as a process with feedback: hypotheses are generated, tested, revised, and stabilized, rather than produced in a single generative pass. 3. **Self-Consistency and Verification** CNIA emphasizes mechanisms that check whether different outputs and internal states are mutually coherent. Contradictions are not just “allowed by sampling” but detected and acted upon. 4. **Native Cognition** The key idea is “native”: the system’s intelligence is not just a reflection of data distributions, but arises from its own internal architecture, constraints, and long-lived memory structures. 5. **Long-Term Growth** CNIA aims at systems that can **accumulate and refine their own cognitive structures over time**, in a way that is inspectable and, at least in principle, auditable. In human terms, CNIA is analogous to the point at which the mind: - Stops merely repeating and recombining what it hears - Starts building its own worldview, logic, and internal explanatory models In machine terms, it offers the path from **“probabilistic mimicry of intelligence”** to **“native, structured machine intelligence.”** VII. The Four-Stage Arc of Machine Intelligence Putting these ideas together, we can outline an evolutionary arc for machine intelligence: 1. **Stage 1: Communication Intelligence (LLMs)** Machines learn to speak, answer, and simulate understanding through large-scale probability modeling. This is where we are now. 2. **Stage 2: Cognitive Native Intelligence (CNIA)** Architectures emerge that prioritize structured, consistent, and verifiable cognition. 
Machines begin to think in native terms, not just speak in borrowed patterns. 3. **Stage 3: Closed-Loop Intelligence** CNIA-like systems are integrated with feedback cycles, external tools, and long-lived memory. The system can observe its own behavior, update its structures, and converge toward more stable, reliable understanding over time. 4. **Stage 4: Integrated Native Intelligence (AGI Candidates)** Communication intelligence and cognitive intelligence fully integrate. Expression and structure are no longer split between “an LLM here” and “an architecture there,” but fused into a unified system. At this point, we can begin to talk credibly about **controllable, verifiable, and genuinely general machine intelligence**. Crucially, in this picture: - LLMs are historically **indispensable**—they opened the era of communication intelligence. - But they are **not sufficient** to define the era of general intelligence. To stay at Stage 1 and call it “AGI” is to confuse a **necessary beginning** with a **complete solution**. VIII. Why This Reframing Matters Some may argue: if LLMs work well enough in practice, why insist on this distinction? Why not simply keep improving them and accept that “intelligence is what works”? There are at least three reasons why placing LLMs in their proper historical position is not just philosophically interesting, but practically essential. 1. **Safety and Control** Systems whose internal logic is purely probabilistic are harder to reason about, harder to verify, and harder to guarantee under distributional shift. Native cognitive architectures offer clearer hooks for constraint, oversight, and formal analysis. 2. **Reliability and Trust** For high-stakes domains—law, medicine, infrastructure, governance—we need systems whose outputs can be traced back to stable internal structures, not just to opaque weight matrices. 3. **Research Direction** If we believe LLMs already *are* AGI or its near-equivalent, we will over-invest in scaling and under-invest in architecture. If we recognize them as Stage 1 of a longer trajectory, we are more likely to pursue CNIA-like structures and genuinely new paradigms. In short, reframing LLMs as **the beginning of machine intelligence, not its culmination**, is a way of realigning both expectations and research priorities with the deeper structure of the problem. IX. Conclusion: Placing LLMs—and Walking Beyond Them Large language models deserve historical recognition. They are the first systems to give machines a powerful, general-purpose **voice** in human language. They represent the arrival of **communication intelligence** at scale. But communication is not the whole of intelligence. A child who speaks beautifully is impressive, yet no one confuses early fluency with mature understanding. The same holds for machines. By recognizing LLMs as **Stage 1** in the arc of machine intelligence, we do not diminish them—we dignify them with a clear role in a larger story. What comes after them is equally clear in outline, even if the details are still emerging: the rise of **Cognitive Native Intelligence Architecture**, the consolidation of **closed-loop structured cognition**, and the eventual integration of expression and understanding into a unified, native machine intelligence. Without this structural clarity, our relationship to AGI will remain like **looking at a flower through fog**—seeing only a bright outline, never the form itself. 
With this clarity, we can finally do what matters most in any technological revolution: Not just admire what we have built, but understand where it truly stands— and where we must go next. (By CNIA team)
2025-11-15T00:31:08
https://www.reddit.com/r/LocalLLaMA/comments/1oxdiec/the_historical_position_of_large_language_models/
Hefty_Document_9466
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxdiec
false
null
t3_1oxdiec
/r/LocalLLaMA/comments/1oxdiec/the_historical_position_of_large_language_models/
false
false
self
0
null
Slamming my head against the wall with Parakeet
3
I've been trying to get this thing running locally on Windows and can't seem to get it to work. I got Whisper to work in minutes through Vibe. But Parakeet? Nothing close to being as easy. I've been trying for over 3 hours now. Is there an easy app I can install, like Vibe or Ollama?
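For reference, a minimal Python sketch of running Parakeet through NVIDIA NeMo rather than a packaged app; the checkpoint name and exact return type are to the best of my knowledge and may differ by NeMo version:

```python
# pip install -U "nemo_toolkit[asr]"   (large install; needs PyTorch)
import nemo.collections.asr as nemo_asr

# Hypothetical checkpoint choice; other Parakeet variants exist on the Hub.
model = nemo_asr.models.ASRModel.from_pretrained(model_name="nvidia/parakeet-tdt-0.6b-v2")
out = model.transcribe(["audio.wav"])  # 16 kHz mono WAV works best
print(out[0])  # a plain string or a Hypothesis object, depending on NeMo version
```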
2025-11-15T00:14:03
https://www.reddit.com/r/LocalLLaMA/comments/1oxd3zo/slamming_my_head_against_the_wall_with_parakeet/
PM_ME_ABSOLUTE_UNITZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxd3zo
false
null
t3_1oxd3zo
/r/LocalLLaMA/comments/1oxd3zo/slamming_my_head_against_the_wall_with_parakeet/
false
false
self
3
null
Why isn't ollama using my dGPU?
0
When I start ollama in Fedora 42 with ``, I see `time=2025-11-14T16:08:43.727-08:00 level=INFO source=types.go:123 msg="inference compute" id=GPU-9b699b22-274c-9c1c-4a2a-94070ed6d923 library=cuda variant=v12 compute=8.6 driver=13.0 name="NVIDIA RTX A5000 Laptop GPU" total="15.6 GiB" available="15.4 GiB"` at the end of the output. I then run `ollama run <model-name>`, and provide a prompt. `nvtop` and `htop` show no increase of dGPU use, and my CPU usage increases dramatically. How can I get it to use the dGPU it clearly can see?
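A quick way to see whether ollama actually offloaded anything, as a sketch assuming a recent ollama build (`ollama ps` reports the CPU/GPU split of loaded models):

```bash
# With the model loaded in another terminal (`ollama run <model-name>`):
ollama ps    # the PROCESSOR column shows e.g. "100% GPU" or a CPU/GPU split

# Restart the server with only the dGPU visible
# (assumption: CUDA index 0 is the A5000 from the log line above):
CUDA_VISIBLE_DEVICES=0 ollama serve
```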
2025-11-15T00:12:23
https://www.reddit.com/r/LocalLLaMA/comments/1oxd2me/why_isnt_ollama_using_my_dgpu/
VegetableJudgment971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxd2me
false
null
t3_1oxd2me
/r/LocalLLaMA/comments/1oxd2me/why_isnt_ollama_using_my_dgpu/
false
false
self
0
null
Cannot use LMStudio response API with JSON schema
0
I tried several models, but I didn't get the right JSON output. Has anyone else run into the same issue? It's not a tool-call definition, just a JSON schema.
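A minimal sketch of the JSON-schema request shape I would try against LM Studio's OpenAI-compatible server; the port, model name, and schema are placeholders, and whether a given endpoint/model actually honors `response_format` is exactly the open question here:

```bash
# Hypothetical request against LM Studio's OpenAI-compatible server (default port 1234).
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-local-model",
    "messages": [{"role": "user", "content": "Give me a city and its population."}],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "city_info",
        "schema": {
          "type": "object",
          "properties": {
            "city": {"type": "string"},
            "population": {"type": "integer"}
          },
          "required": ["city", "population"]
        }
      }
    }
  }'
```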
2025-11-14T23:39:58
https://www.reddit.com/r/LocalLLaMA/comments/1oxccsh/cannot_use_lmstudio_response_api_with_json_schema/
Technical_Pass_1858
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxccsh
false
null
t3_1oxccsh
/r/LocalLLaMA/comments/1oxccsh/cannot_use_lmstudio_response_api_with_json_schema/
false
false
self
0
null
I benchmarked kimi k2 thinking vs gpt-5 here are the actual numbers
1
[removed]
2025-11-14T23:30:52
https://www.reddit.com/r/LocalLLaMA/comments/1oxc5af/i_benchmarked_kimi_k2_thinking_vs_gpt5_here_are/
Existing-Aside-4697
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxc5af
false
null
t3_1oxc5af
/r/LocalLLaMA/comments/1oxc5af/i_benchmarked_kimi_k2_thinking_vs_gpt5_here_are/
false
false
self
1
null
A non-linear, segment-aware LLMLingua compressor for LLM agents (GPU, cached, gradient-based)
0
***Note: The following text was structured using my AI, so it is (partly) AI-generated from my own extended input. You might see that as an unacceptable shortcut. I accept that. For now... ;)***

***Find the code at the end. It is also made in cooperation (!) with AI.***

***Enjoy.***

I've been experimenting with a custom compression module for long-context LLM agents, and I figured I'd share a small architectural outline. Maybe it's useful for others building multi-layer memory systems.

# Core idea

Instead of compressing the entire prompt linearly, the module:

* compresses only specific blocks (history, notes, logs, etc.)
* splits each block into multiple segments
* applies *different* compression rates per segment
* and blends them along a gradient (oldest → most compressed, newest → least compressed)

So you get **non-linear semantic decay**, not a flat "compress to X%" transformation.

# Why?

Because uniform compression destroys meaning. Older context usually matters less but still needs to survive as a trace. Newer context needs more fidelity. LLMLingua reacts extremely well to this stratified approach.

# How it works

* global LLMLingua instance (GPU-accelerated)
* `_compress()` is LRU-cached and retry-safe
* each block is optionally passed into `compress(prompt, rate, ratio)`
* `ratio` defines how strong the gradient should be
* "segments" are character-based for now, but can be upgraded to semantic segments
* MQTT interface for remote usage (optional in my setup)

Example: With `rate=0.25` and `ratio=0.5`, the early segments get ~12% retention, later ones ~37% — LLMLingua handles the rest non-linearly.

# Results

* prompts shrink reliably to fit 128k models
* semantic quality in the "recent" window stays high
* long-term behavioral stability of the agent improves noticeably
* old context fades gradually instead of collapsing abruptly

If anyone's interested, I can share more details on segment strategies or memory orchestration (STM/LTM, dream cycles, etc.). This module is just one part of a bigger system.

```python
"""
Enhanced prompt compressor using LLMLingua2
------------------------------------------------

This module provides an extended ``compress`` function that allows for a
linear compression gradient across the input prompt. The original behaviour
of LLMLingua2 is preserved: when a single ``rate`` value is supplied, the
entire prompt is compressed uniformly. If a non-zero ``ratio`` is specified,
the prompt is partitioned into several segments and each segment is
compressed with a different strength.

For fractional rates (``rate`` < 1), the ``ratio`` controls how much the
keep ratio at the start of the prompt deviates from the end. A positive
``ratio`` results in stronger compression at the beginning and lighter
compression at the end; a negative value flips this behaviour.

For integer rates (``rate`` >= 1), which LLMLingua interprets as a target
token count, the total token budget is distributed over the segments
according to the same linear scheme. Because tokens per segment must be
integers, the allocation is approximate but still reflects the intended
gradient.

The default ``ratio`` is 0, producing uniform compression. Ratios are
clamped to the range [-1.0, 1.0] to prevent extreme values.

This file also exposes a simple MQTT service runner, mirroring the original
implementation. When sending requests via MQTT you may now include a
``ratio`` field in the payload to engage the gradient mode.
"""
from llmlingua import PromptCompressor
from functools import lru_cache
import re
import traceback
from threading import RLock

import mqtt

lock = RLock()

# Initialise the LLMLingua2 model once at module load time
llm_lingua = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
    device_map="cuda:0",
)

# A list of tokens that should always be preserved during compression.
# Can be extended by the user. Empty strings are removed during runtime.
strings_to_keep = []

# Warm up the model so that the first real compression call doesn't
# incur one-time initialisation overhead. We ignore the result.
llm_lingua.compress_prompt("this is a test prompt to load model", target_token=2)


def cleanup(s: str) -> str:
    """Remove lines that consist solely of whitespace and kept tokens.

    This helper can be used to post-process compressed prompts if needed.
    Currently unused but preserved from the original implementation.
    """
    global strings_to_keep
    r = "|".join([
        re.escape(x)
        for x in [x.strip() for x in strings_to_keep] + [" "]
        if len(x) > 0
    ])
    l1 = s.split("\n")
    l2 = [x for x in l1 if not re.fullmatch(f"({r})*", x)]
    return "\n".join(l2)


def compress(prompt, rate: float = 0.25, name: str = "", ratio: float = 0.5):
    """
    Compress a prompt using LLMLingua2 with optional gradient support.

    By default the entire prompt is compressed uniformly according to
    ``rate``. When ``ratio`` is non-zero and ``rate`` is numeric, the prompt
    is partitioned into several contiguous segments and each segment is
    compressed with a linearly varying strength. The number of segments
    scales with the magnitude of ``ratio`` (between 4 and 10).

    Parameters
    ----------
    prompt : str | list | dict
        The input to compress. Non-string inputs will be converted to a
        single string by joining list items or dict key/value pairs.
    rate : float
        Compression factor. Values less than 1 keep roughly ``rate`` fraction
        of the input tokens. Values greater or equal to 1 are interpreted as
        an absolute target token count.
    name : str, optional
        An optional label for logging/debugging. It will be prefixed to log
        messages and extended with segment information in gradient mode.
    ratio : float, optional
        Controls the linear gradient. Must be in [-1.0, 1.0]. A positive
        ratio compresses the beginning more (keeps fewer tokens) and the end
        less; negative values invert this behaviour. Zero yields uniform
        compression. Values outside the range are clamped.

    Returns
    -------
    str
        The compressed prompt.
    """
    global lock, strings_to_keep
    res = ""
    # Acquire a global lock to ensure thread safety and consistent logging
    lock.acquire()
    try:
        # Remove empty string from strings_to_keep if present; LLMLingua
        # doesn't cope well with empty force tokens.
        try:
            strings_to_keep.remove("")
        except ValueError:
            pass
        # Log the start of the compression
        print("<" + str(len(prompt)) + "|" + name + "|", end="")
        # Normalize the prompt into a single string
        if isinstance(prompt, dict):
            prompt = [str(k) + " " + str(v) for k, v in prompt.items()]
        if isinstance(prompt, list):
            prompt = "\n".join(prompt)
        if not isinstance(prompt, str):
            prompt = str(prompt)
        # Skip compression on empty or whitespace-only prompts
        if not re.fullmatch("[\n ]*", prompt):
            # Parse and clamp ratio
            try:
                ratio_val = float(ratio)
            except Exception:
                ratio_val = 0.0
            ratio_val = max(-1.0, min(1.0, ratio_val))
            # If a gradient is requested and rate is numeric, build segments
            if ratio_val != 0 and isinstance(rate, (int, float)):
                # Determine segment count (between 4 and 10)
                num_segments = int(4 + 6 * abs(ratio_val))
                num_segments = max(2, min(10, num_segments))
                # Split the prompt into contiguous character slices
                total_len = len(prompt)
                seg_size = total_len // num_segments
                segments = []
                start_idx = 0
                for i in range(num_segments - 1):
                    end_idx = start_idx + seg_size
                    segments.append(prompt[start_idx:end_idx])
                    start_idx = end_idx
                segments.append(prompt[start_idx:])  # last segment
                compressed_parts = []
                if rate < 1.0:
                    # Fractional rate: derive start and end keep ratios
                    diff = rate * ratio_val
                    start_rate = max(0.01, min(0.99, rate - diff))
                    end_rate = max(0.01, min(0.99, rate + diff))
                    for i, seg in enumerate(segments):
                        t = i / (len(segments) - 1) if len(segments) > 1 else 0.0
                        seg_rate = start_rate + t * (end_rate - start_rate)
                        try:
                            part = _compress(prompt=seg, rate=seg_rate, name=f"{name}/seg{i+1}")
                        except Exception:
                            part = seg
                        compressed_parts.append(part)
                else:
                    # Absolute token target: distribute tokens across segments
                    base_tokens = float(rate) / num_segments
                    start_tokens = base_tokens * (1.0 - ratio_val)
                    end_tokens = base_tokens * (1.0 + ratio_val)
                    tokens_per_seg = []
                    for i in range(num_segments):
                        t = i / (num_segments - 1) if num_segments > 1 else 0.0
                        tok = start_tokens + t * (end_tokens - start_tokens)
                        tok_int = int(round(tok))
                        if tok_int < 1:
                            tok_int = 1
                        tokens_per_seg.append(tok_int)
                    for i, seg in enumerate(segments):
                        seg_target = tokens_per_seg[i]
                        try:
                            part = _compress(prompt=seg, rate=seg_target, name=f"{name}/seg{i+1}")
                        except Exception:
                            part = seg
                        compressed_parts.append(part)
                # Concatenate the compressed parts back into one string
                res = "".join(compressed_parts)
            else:
                # Uniform compression or non-numeric rate: defer to cacheable function
                res = _compress(prompt=prompt, rate=rate, name=name)
        # end if prompt not empty
    except Exception:
        # On any unexpected error, mark it in the log. We still release the lock.
        print("E|", end="")
    # Print the final length of the result for logging
    try:
        print(str(len(res)) + ">", end=" ", flush=True)
    except Exception:
        print("E>", end=" ", flush=True)
    finally:
        lock.release()
    return res


@lru_cache(maxsize=100, typed=False)
def _compress(prompt: str, rate: float = 0.25, name: str = "") -> str:
    """
    Internal helper that performs the actual call into LLMLingua2.

    The function is cached to avoid recompressing identical inputs. Do not
    call this directly unless you know what you're doing; use
    :func:`compress` instead.
    """
    global strings_to_keep
    for round in range(3):
        try:
            print("C|", end="", flush=True)
            # If decoding fails, attempt to fix encoding on retry
            if round > 0:
                prompt = prompt.encode('utf-8', 'replace').decode()
            if rate >= 1:
                # Interpret rate as absolute token budget
                res = llm_lingua.compress_prompt(
                    prompt,
                    target_token=int(rate),
                    force_tokens=strings_to_keep,
                    drop_consecutive=True,
                    chunk_end_tokens=[".", "?", "!", "\n", ";"],
                )
            else:
                # Interpret rate as keep fraction; clamp to at least 0.01
                rate_f = float(max(rate, 0.01))
                res = llm_lingua.compress_prompt(
                    prompt,
                    rate=rate_f,
                    force_tokens=strings_to_keep,
                    drop_consecutive=True,
                    chunk_end_tokens=[".", "?", "!", "\n", ";"],
                )
            cs = res["compressed_prompt"].strip()
            # Heuristic to detect garbled output; retry if encountered
            if re.match(".{,20} [^ ] [^ ] [^ ] [^ ] [^ ] [^ ] .*", cs):
                print(".", end="", flush=True)
                print(cs)
                continue
            return cs
        except Exception:
            if round > 0:
                print("RE", prompt[:20], rate, end=" - ")
                print(traceback.format_exc())
                raise Exception()


def mqtt_service_runner(topic, event):
    """Handle incoming MQTT compression requests.

    The payload ``event`` is expected to be a dict with at least the ``in``
    and ``rate`` keys. Optionally, a ``ratio`` key can be provided to
    activate gradient mode. If ``ratio`` is omitted, the default value of 0
    (uniform compression) is used.
    """
    inp = event.get("in")
    r = event.get("rate")
    # Support ratio from MQTT payload; may be None
    ratio = event.get("ratio")
    if inp is not None and r is not None:
        try:
            if ratio is None:
                return {"out": compress(inp, r)}
            else:
                return {"out": compress(inp, r, ratio=ratio)}
        except Exception as exc:
            return {"err": f"compression error: {exc}"}
    else:
        return {"err": "incorrect parameters"}


# Register the compressor as an MQTT service
mqtt.subscribe("system.compressor", mqtt_service_runner)
```
2025-11-14T23:21:16
https://www.reddit.com/r/LocalLLaMA/comments/1oxbxcg/a_nonlinear_segmentaware_llmlingua_compressor_for/
dustbln
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxbxcg
false
null
t3_1oxbxcg
/r/LocalLLaMA/comments/1oxbxcg/a_nonlinear_segmentaware_llmlingua_compressor_for/
false
false
self
0
null
Trying to install CUDA to build llama.cpp & ran into issue; help needed
0
I'm following [these instructions](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/CUDA-FEDORA.md) to install CUDA such that I can build `llama.cpp` [using CUDA](https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md#cuda).

I got to [this point](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/CUDA-FEDORA.md#installing-the-cuda-meta-package) after creating the toolbox container, installing `c-development` and other tools, and adding the Nvidia repo **for Fedora 42** (this is different than the instructions, but only required changing '41' to '42' in the command).

`libcuda.so.580.105.08` exists, so I [went through the instructions](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/CUDA-FEDORA.md#if-libcudaso1-exists) to "install" the necessary Nvidia drivers (really just using the host's).

Then I hit this error when I attempted to [install CUDA](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/CUDA-FEDORA.md#installing-the-cuda-meta-package):

    Failed to resolve the transaction:
    Problem: conflicting requests
      - package cuda-13.0.0-1.x86_64 from cuda-fedora42-x86_64 requires nvidia-open >= 580.65.06, but none of the providers can be installed
      - package cuda-13.0.1-1.x86_64 from cuda-fedora42-x86_64 requires nvidia-open >= 580.82.07, but none of the providers can be installed
      - package cuda-13.0.2-1.x86_64 from cuda-fedora42-x86_64 requires nvidia-open >= 580.95.05, but none of the providers can be installed
      - package nvidia-open-3:580.105.08-1.fc42.noarch from cuda-fedora42-x86_64 requires nvidia-settings = 3:580.105.08, but none of the providers can be installed
      - package nvidia-open-3:580.65.06-1.fc42.noarch from cuda-fedora42-x86_64 requires nvidia-settings = 3:580.65.06, but none of the providers can be installed
      - package nvidia-open-3:580.82.07-1.fc42.noarch from cuda-fedora42-x86_64 requires nvidia-settings = 3:580.82.07, but none of the providers can be installed
      - package nvidia-open-3:580.95.05-1.fc42.noarch from cuda-fedora42-x86_64 requires nvidia-settings = 3:580.95.05, but none of the providers can be installed
      - nothing provides libjansson.so.4(libjansson.so.4)(64bit) needed by nvidia-settings-3:580.105.08-1.fc42.x86_64 from cuda-fedora42-x86_64
      - nothing provides libjansson.so.4(libjansson.so.4)(64bit) needed by nvidia-settings-3:580.65.06-1.fc42.x86_64 from cuda-fedora42-x86_64
      - nothing provides libjansson.so.4(libjansson.so.4)(64bit) needed by nvidia-settings-3:580.82.07-1.fc42.x86_64 from cuda-fedora42-x86_64
      - nothing provides libjansson.so.4(libjansson.so.4)(64bit) needed by nvidia-settings-3:580.95.05-1.fc42.x86_64 from cuda-fedora42-x86_64

`nvidia-smi` on my system returns:

    CUDA Version: 13.0
    Driver Version: 580.105.08

This satisfies the requirements I can see in the error message. What's going on with this error, and how can I fix it and install CUDA?
2025-11-14T23:18:31
https://www.reddit.com/r/LocalLLaMA/comments/1oxbv2g/trying_to_install_cuda_to_build_llamacpp_ran_into/
VegetableJudgment971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxbv2g
false
null
t3_1oxbv2g
/r/LocalLLaMA/comments/1oxbv2g/trying_to_install_cuda_to_build_llamacpp_ran_into/
false
false
self
0
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=216&crop=smart&auto=webp&s=a4159f87f341337a34069632ee0d5b75fa4e7042', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=320&crop=smart&auto=webp&s=b105a2c86f91fee19ce34c791a1b984348b68452', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=640&crop=smart&auto=webp&s=ae5173c455a88bb40bed1198799c0db65ff470d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=960&crop=smart&auto=webp&s=d014791efbd4c8d05fd305a8b7842b029f22d83e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=1080&crop=smart&auto=webp&s=9addd19259612948921416b6f5bf04bd5191f933', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?auto=webp&s=db9ea157807723165a59f5f8694d9a5016d60d0f', 'width': 1280}, 'variants': {}}]}
I created an app like ChatGPT desktop, but for SBCs.
0
This is my project for the Baidu ERNIE hackathon; it is targeted at a $300 SBC. It will also run on a PC, but only on Linux for now. I developed it for a Radxa Orion O6, but it should work on any SBC with at least 8 GB of RAM. ERNIE Desktop consists of 3 parts: llama.cpp, a FastAPI server that provides search and device analytics, and a web application that provides the UI and documents interface. It uses Tavily for web search, so you have to set up a free account if you want to use this feature. It can read PDFs and text-based files. Unfortunately I don't know what device people will be using it on, so you have to download or compile llama.cpp yourself. ED uses several JavaScript libraries for CSS, markdown support, PDF access, and source code highlighting. Happy to answer any questions or help you get set up.
2025-11-14T23:09:59
https://github.com/openconstruct/ERNIEDesktop
thebadslime
github.com
1970-01-01T00:00:00
0
{}
1oxbnja
false
null
t3_1oxbnja
/r/LocalLLaMA/comments/1oxbnja/i_created_an_app_like_chatgpt_desktop_but_for_sbcs/
false
false
default
0
{'enabled': False, 'images': [{'id': 'WSNg2ZCyNbqsIgDzQvnWVTOr06qnYQe5xgdxgpZeqt0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WSNg2ZCyNbqsIgDzQvnWVTOr06qnYQe5xgdxgpZeqt0.png?width=108&crop=smart&auto=webp&s=2a383d57501be36fc1a69d64fa22b91b9b206aac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WSNg2ZCyNbqsIgDzQvnWVTOr06qnYQe5xgdxgpZeqt0.png?width=216&crop=smart&auto=webp&s=b0b6fbd76f8ccb24402b99710bc1334a7214b333', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WSNg2ZCyNbqsIgDzQvnWVTOr06qnYQe5xgdxgpZeqt0.png?width=320&crop=smart&auto=webp&s=346f801d6af686b339094e00698a2f350f1660de', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WSNg2ZCyNbqsIgDzQvnWVTOr06qnYQe5xgdxgpZeqt0.png?width=640&crop=smart&auto=webp&s=ab752068a0cad16c8b0025eeec67b0b01f3295dd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WSNg2ZCyNbqsIgDzQvnWVTOr06qnYQe5xgdxgpZeqt0.png?width=960&crop=smart&auto=webp&s=2fd90bbcdc453f3352afdfcbe23766ff746854f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WSNg2ZCyNbqsIgDzQvnWVTOr06qnYQe5xgdxgpZeqt0.png?width=1080&crop=smart&auto=webp&s=0a3b73fbc0c25eebd101d37bac799f80f1ff2c72', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WSNg2ZCyNbqsIgDzQvnWVTOr06qnYQe5xgdxgpZeqt0.png?auto=webp&s=80c5ec86d62ef65f2c33c88702ec18ba599fea04', 'width': 1200}, 'variants': {}}]}
Best model to generate unique voices?
1
I'm using GPT SoVITS to generate voice lines during prototyping stages, but I'm tired of constantly searching for new voices to clip. Is there a model that can generate samples of unique voices which can be run locally on 8 GB VRAM?
2025-11-14T23:03:27
https://www.reddit.com/r/LocalLLaMA/comments/1oxbhnc/best_model_to_generate_unique_voices/
StriderPulse599
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxbhnc
false
null
t3_1oxbhnc
/r/LocalLLaMA/comments/1oxbhnc/best_model_to_generate_unique_voices/
false
false
self
1
null
How to force a json schema output in ollama with openwebui?
2
I have a custom model using a knowledge file in OpenWebUI, using the /api/completions endpoint. The “answer” is correct, so nothing is wrong with the thinking ability. The problem is that it ignores my system prompt instructions to A) ONLY output the JSON answer, with no surrounding text, and B) use my specific JSON fields. It keeps adding text to the response other than the JSON, and makes up its own field names.
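One thing that tends to work better than prompt instructions alone is constrained decoding via Ollama's `format` field. A sketch against Ollama's native API (assuming a reasonably recent Ollama with structured-output support; the schema, model name, and port here are placeholders, not your actual setup):

    import json
    import requests

    schema = {
        "type": "object",
        "properties": {
            "answer": {"type": "string"},
            "confidence": {"type": "number"},
        },
        "required": ["answer", "confidence"],
    }

    resp = requests.post("http://localhost:11434/api/chat", json={
        "model": "my-custom-model",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "format": schema,   # constrains generation to the schema instead of relying on the prompt
        "stream": False,
    })
    print(json.loads(resp.json()["message"]["content"]))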
2025-11-14T23:03:14
https://www.reddit.com/r/LocalLLaMA/comments/1oxbhg7/how_to_force_a_json_schema_output_in_ollama_with/
devshore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxbhg7
false
null
t3_1oxbhg7
/r/LocalLLaMA/comments/1oxbhg7/how_to_force_a_json_schema_output_in_ollama_with/
false
false
self
2
null
New Chinese optical quantum chip allegedly 1,000x faster than Nvidia GPUs for processing AI workloads - firm reportedly producing 12,000 wafers per year
313
2025-11-14T23:02:23
https://www.tomshardware.com/tech-industry/quantum-computing/new-chinese-optical-quantum-chip-allegedly-1-000x-faster-than-nvidia-gpus-for-processing-ai-workloads-but-yields-are-low
entsnack
tomshardware.com
1970-01-01T00:00:00
0
{}
1oxbgo2
false
null
t3_1oxbgo2
/r/LocalLLaMA/comments/1oxbgo2/new_chinese_optical_quantum_chip_allegedly_1000x/
false
false
default
313
null
Local models handle tools way better when you give them a code sandbox instead of individual tools
334
2025-11-14T22:54:49
https://i.redd.it/83hx5w1txa1g1.png
juanviera23
i.redd.it
1970-01-01T00:00:00
0
{}
1oxb9zp
false
null
t3_1oxb9zp
/r/LocalLLaMA/comments/1oxb9zp/local_models_handle_tools_way_better_when_you/
false
false
default
334
{'enabled': True, 'images': [{'id': '83hx5w1txa1g1', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/83hx5w1txa1g1.png?width=108&crop=smart&auto=webp&s=f3dd7215c7bad5458cd18a4e37b6ffb3364592a0', 'width': 108}, {'height': 206, 'url': 'https://preview.redd.it/83hx5w1txa1g1.png?width=216&crop=smart&auto=webp&s=8a6afaa189900fcc92352a83287c17b7307e8556', 'width': 216}, {'height': 305, 'url': 'https://preview.redd.it/83hx5w1txa1g1.png?width=320&crop=smart&auto=webp&s=57803af5c6091c9925ae1415529d794d2bb2e668', 'width': 320}, {'height': 610, 'url': 'https://preview.redd.it/83hx5w1txa1g1.png?width=640&crop=smart&auto=webp&s=0e2fbc834e05621ee050a05b0ee016fd280ff683', 'width': 640}, {'height': 916, 'url': 'https://preview.redd.it/83hx5w1txa1g1.png?width=960&crop=smart&auto=webp&s=c2d9419a7f7c8e231fc17bcf066aa66132dffa89', 'width': 960}, {'height': 1030, 'url': 'https://preview.redd.it/83hx5w1txa1g1.png?width=1080&crop=smart&auto=webp&s=6217b036a129ca739b1c2e6f1bdaaf0902df5869', 'width': 1080}], 'source': {'height': 1338, 'url': 'https://preview.redd.it/83hx5w1txa1g1.png?auto=webp&s=f87823f6f324dda732edddea641871fdd532d64e', 'width': 1402}, 'variants': {}}]}
Vibe'd this little local token counter utility. Calling it TokenLite. Paste in text and it does the math.
0
[https://github.com/Nessleonhart/TokenLite](https://github.com/Nessleonhart/TokenLite)
2025-11-14T22:42:29
https://i.redd.it/e5on2swnva1g1.png
NessLeonhart
i.redd.it
1970-01-01T00:00:00
0
{}
1oxaz7y
false
null
t3_1oxaz7y
/r/LocalLLaMA/comments/1oxaz7y/vibed_this_little_local_token_counter_utility/
false
false
default
0
{'enabled': True, 'images': [{'id': 'e5on2swnva1g1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/e5on2swnva1g1.png?width=108&crop=smart&auto=webp&s=67d4051da44230bf217a699505dfe64bdaae0a6c', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/e5on2swnva1g1.png?width=216&crop=smart&auto=webp&s=1d9e4e903052aac5b4b6fce1193f14a8035bdffb', 'width': 216}, {'height': 163, 'url': 'https://preview.redd.it/e5on2swnva1g1.png?width=320&crop=smart&auto=webp&s=8a7ade7caf4735173d2af46896b594c669609eb1', 'width': 320}, {'height': 327, 'url': 'https://preview.redd.it/e5on2swnva1g1.png?width=640&crop=smart&auto=webp&s=6a29001deb9e6779dec92856caa17551c43b60e0', 'width': 640}], 'source': {'height': 459, 'url': 'https://preview.redd.it/e5on2swnva1g1.png?auto=webp&s=3e2d73b1e4cd5137b8ea3893dccd396d8ad812e8', 'width': 898}, 'variants': {}}]}
Which TTS model is best for voice cloning and accent changing ?
3
Something where I can narrate in my own voice and change it; it would be great if I could speak in a British accent too.
2025-11-14T22:19:26
https://www.reddit.com/r/LocalLLaMA/comments/1oxaet5/which_tts_model_is_best_for_voice_cloning_and/
ProNoostr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxaet5
false
null
t3_1oxaet5
/r/LocalLLaMA/comments/1oxaet5/which_tts_model_is_best_for_voice_cloning_and/
false
false
self
3
null
Building a local LLM visualization tool - AI/ML Researcher needed
4
Working on a Mac app that visualizes what's happening inside local LLMs as they run (MLX/Ollama). Shows real-time layer activations and attention patterns. Thinking it could help with: * Understanding model behavior * Comparing different models/quantizations * Educational/debugging purposes Early stage, genuinely trying to build something people need.
2025-11-14T22:13:48
https://www.reddit.com/r/LocalLLaMA/comments/1oxa9nq/building_a_local_llm_visualization_tool_aiml/
ManuToniotti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oxa9nq
false
null
t3_1oxa9nq
/r/LocalLLaMA/comments/1oxa9nq/building_a_local_llm_visualization_tool_aiml/
false
false
self
4
null
Is there a self-hosted, open-source plug-and-play RAG solution?
24
I know about Ollama, llama-server, vLLM and all the other options for hosting LLMs, but I’m looking for something similar for RAG that I can self-host. Basically: I want to store scraped websites, upload PDF files, and similar documents — and have a simple system that handles: • vector DB storage • chunking • data ingestion • querying the vector DB when a user asks something • sending that to the LLM for final output I know RAG gets complicated with PDFs containing tables, images, etc., but I just need a starting point so I don’t have to build all the boilerplate myself. Is there any open-source, self-hosted solution that’s already close to this? Something I can install, run locally/server, and extend from?
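For scale, the boilerplate being described is roughly this; a minimal sketch with ChromaDB plus any OpenAI-compatible local server (paths, URL, chunk size, and prompt wording are placeholder choices):

    import chromadb
    from openai import OpenAI

    store = chromadb.PersistentClient(path="./rag_db")
    collection = store.get_or_create_collection("docs")

    def ingest(doc_id: str, text: str, chunk_size: int = 800):
        # Naive fixed-size chunking; real pipelines split on structure/semantics
        chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
        collection.add(
            ids=[f"{doc_id}-{n}" for n in range(len(chunks))],
            documents=chunks,
        )

    def ask(question: str, n_results: int = 4) -> str:
        hits = collection.query(query_texts=[question], n_results=n_results)
        context = "\n\n".join(hits["documents"][0])
        llm = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
        out = llm.chat.completions.create(
            model="local",
            messages=[
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return out.choices[0].message.content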
2025-11-14T21:41:09
https://www.reddit.com/r/LocalLLaMA/comments/1ox9fzy/is_there_a_selfhosted_opensource_plugandplay_rag/
anedisi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox9fzy
false
null
t3_1ox9fzy
/r/LocalLLaMA/comments/1ox9fzy/is_there_a_selfhosted_opensource_plugandplay_rag/
false
false
self
24
null
I didn't expect that...
0
I'm running Fedora 42 and installed `llama-cpp`, and it's refusing to work. I checked [the package info](https://packages.fedoraproject.org/pkgs/llama-cpp/llama-cpp/index.html), and it links to [the llama.cpp repo](https://github.com/ggml-org/llama.cpp), so I'm pretty sure I'm using the right commands. A restart hasn't helped. Where am I going wrong and how do I get `llama.cpp` to work?

    (bash) dev@DrKnow4:~$ sudo dnf install llama-cpp -y
    [sudo] password for dev:
    Updating and loading repositories:
    Repositories loaded.
    Package "llama-cpp-b4094-11.fc42.x86_64" is already installed.
    Nothing to do.
    (bash) dev@DrKnow4:~$ dnf list | grep llama
    Updating and loading repositories:
    Repositories loaded.
    llama-cpp.x86_64                  b4094-11.fc42   updates
    llama-cpp-devel.x86_64            b4094-11.fc42   updates
    ollama.x86_64                     0.4.4-3.fc42    fedora
    python3-llama-cpp-python.x86_64   0.3.2-5.fc42    fedora
    (bash) dev@DrKnow4:~$ llama-server ~/Downloads/Qwen2.5-Coder-7B-Instruct-Q8_0.gguf --host 127.0.0.1 --port 8033 # Adding the -hf flag doesn't help
    bash: llama-server: command not found...
    (bash) dev@DrKnow4:~$ which llama-server
    /usr/bin/which: no llama-server in (/home/dev/PythonVenvs/bash/bin:/home/dev/PythonVenvs/bash/bin:/home/dev/.local/bin:/home/dev/bin:/usr/local/bin:/usr/bin)
    (bash) dev@DrKnow4:~$ llama-cli -m ~/Downloads/Qwen2.5-Coder-7B-Instruct-Q8_0.gguf
    bash: llama-cli: command not found...
2025-11-14T21:21:00
https://www.reddit.com/r/LocalLLaMA/comments/1ox8xot/i_didnt_expect_that/
VegetableJudgment971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox8xot
false
null
t3_1ox8xot
/r/LocalLLaMA/comments/1ox8xot/i_didnt_expect_that/
false
false
self
0
null
Build RAG Evals from your Docs with Synthetic Data Generation (plus reranking, semantic chunking, and RAG over MCP) [Kiln AI]
11
We just created an interactive tool for building RAG evals, as part of [the Github Project Kiln](https://github.com/Kiln-AI/Kiln). It generates a RAG eval from your documents using synthetic data generation, through a fully interactive UI. **The problem:** Evaluating RAG is tricky. An LLM-as-judge doesn't have the knowledge from your documents, so it can't tell if a response is actually correct. But giving the judge access to RAG biases the evaluation. **The solution:** Reference-answer evals. The judge compares results to a known correct answer. Building these datasets used to be a long manual process. Kiln can now build Q&A datasets for evals by iterating over your document store. The process is fully interactive and takes just a few minutes to generate hundreds of reference answers. Use it to evaluate RAG accuracy end-to-end, including whether your agent calls RAG at the right times with quality queries. [**Learn more in our docs**](https://docs.kiln.tech/docs/evaluations/evaluate-rag-accuracy-q-and-a-evals) **Other new features:** * **Semantic chunking**: Splits documents by meaning rather than length, improving retrieval accuracy * **Reranking**: Add a reranking model to any RAG system you build in Kiln * **RAG over MCP**: Expose your Kiln RAG tools to any MCP client with a CLI command * **Appropriate Tool Use Eval**: Verify tools are called at the right times and not when they shouldn't be **Links:** * [GitHub repo](https://github.com/Kiln-AI/Kiln) (4.4k stars) * [RAG/docs Guide](https://docs.kiln.tech/docs/documents-and-search-rag) * [RAG Q&A Eval Guide](https://docs.kiln.tech/docs/evaluations/evaluate-rag-accuracy-q-and-a-evals) * [Discord](https://getkiln.ai/discord) * [Kiln Homepage](https://kiln.tech/) Happy to answer questions or hear feature requests! Let me know if you want support for specific reranking models.
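The core pattern, stripped to a minimal sketch (this is the general reference-answer-judge idea, not Kiln's actual implementation; `llm` stands for any OpenAI-compatible client):

    JUDGE_PROMPT = """You are grading a RAG answer against a known-correct reference.
    Question: {question}
    Reference answer: {reference}
    Candidate answer: {candidate}
    Reply with exactly one word: CORRECT or INCORRECT."""

    def judge(llm, question: str, reference: str, candidate: str) -> bool:
        # The judge never needs document access: correctness is checked against
        # the pre-built reference answer, not against live retrieval.
        msg = JUDGE_PROMPT.format(question=question, reference=reference, candidate=candidate)
        verdict = llm.chat.completions.create(
            model="local-judge",
            messages=[{"role": "user", "content": msg}],
        ).choices[0].message.content
        return verdict.strip().upper().startswith("CORRECT")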
2025-11-14T21:11:30
https://v.redd.it/7wy9a01zfa1g1
davernow
v.redd.it
1970-01-01T00:00:00
0
{}
1ox8oth
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7wy9a01zfa1g1/DASHPlaylist.mpd?a=1765746707%2CZTk4MDEyZjZmNGM5MDIxMzQ2ZTMzZjM0ZjZjN2RmNmVkMmQ1NjZmZWFmYTQ5MDhmOTk4MmE2Nzk1YjQ3YTg3Yw%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/7wy9a01zfa1g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/7wy9a01zfa1g1/HLSPlaylist.m3u8?a=1765746707%2CMjhlYzZmOTM1ODhlYWRhYTM3NzkwNDQzNzJmNWRhN2MxNzBlNzBhODA0NzEwNTUxZjBhMTE0YTcxODVkZTFjMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7wy9a01zfa1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1660}}
t3_1ox8oth
/r/LocalLLaMA/comments/1ox8oth/build_rag_evals_from_your_docs_with_synthetic/
false
false
https://external-preview…cf6311ae424c751e
11
{'enabled': False, 'images': [{'id': 'bjdidTgyMHpmYTFnMZ0LLxhMcl41SYgtW1Imv6oloK9nZBtq79RPecQv5U1A', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/bjdidTgyMHpmYTFnMZ0LLxhMcl41SYgtW1Imv6oloK9nZBtq79RPecQv5U1A.png?width=108&crop=smart&format=pjpg&auto=webp&s=7e884e4321f2b38239cacd346dbd5114fc6563d0', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/bjdidTgyMHpmYTFnMZ0LLxhMcl41SYgtW1Imv6oloK9nZBtq79RPecQv5U1A.png?width=216&crop=smart&format=pjpg&auto=webp&s=ad27b9877d40a16197e64d3e10546f204dfae109', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/bjdidTgyMHpmYTFnMZ0LLxhMcl41SYgtW1Imv6oloK9nZBtq79RPecQv5U1A.png?width=320&crop=smart&format=pjpg&auto=webp&s=2fc10349a1d42df99dbde69eea1ffebfb4424aaf', 'width': 320}, {'height': 416, 'url': 'https://external-preview.redd.it/bjdidTgyMHpmYTFnMZ0LLxhMcl41SYgtW1Imv6oloK9nZBtq79RPecQv5U1A.png?width=640&crop=smart&format=pjpg&auto=webp&s=917c902a2e0c51eae9cce1f244064cfca1577042', 'width': 640}, {'height': 624, 'url': 'https://external-preview.redd.it/bjdidTgyMHpmYTFnMZ0LLxhMcl41SYgtW1Imv6oloK9nZBtq79RPecQv5U1A.png?width=960&crop=smart&format=pjpg&auto=webp&s=91691606a5feae77e853ef2da673ff09dc989c00', 'width': 960}, {'height': 702, 'url': 'https://external-preview.redd.it/bjdidTgyMHpmYTFnMZ0LLxhMcl41SYgtW1Imv6oloK9nZBtq79RPecQv5U1A.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d2efe44639eca909fc4a3eedbbc4f33ddd0b664c', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/bjdidTgyMHpmYTFnMZ0LLxhMcl41SYgtW1Imv6oloK9nZBtq79RPecQv5U1A.png?format=pjpg&auto=webp&s=9610975297963732a14eb6dcec3284b6d4c8d7e0', 'width': 3320}, 'variants': {}}]}
Why is Sesame CSM-8B so much smarter than Moshi 7B despite similar training methods?
0
I’ve been comparing Sesame CSM-8B and Moshi 7B, and the gap in intelligence is huge. CSM-8B follows instructions better, understands context more accurately, and feels way more capable overall — even though the parameter count is almost the same. What I don’t understand is: as far as I know, both models use very similar training methods (self-supervised audio pretraining, discrete tokens, similar learning mechanisms, etc.). So why does CSM-8B end up much smarter? Is it the dataset size, data quality, tokenizer, architecture tweaks, training length, or something else that makes such a big difference? I’d love to hear technical explanations from people who understand how these speech models are trained and work.
2025-11-14T21:05:50
https://www.reddit.com/r/LocalLLaMA/comments/1ox8jos/why_is_sesame_csm8b_so_much_smarter_than_moshi_7b/
Adept_Lawyer_4592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox8jos
false
null
t3_1ox8jos
/r/LocalLLaMA/comments/1ox8jos/why_is_sesame_csm8b_so_much_smarter_than_moshi_7b/
false
false
self
0
null
We just added quantization to deepseek-ocr.rs
1
[removed]
2025-11-14T20:23:32
https://www.reddit.com/r/LocalLLaMA/comments/1ox7hab/we_just_added_quantization_to_deepseekocrrs/
Outrageous-Voice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox7hab
false
null
t3_1ox7hab
/r/LocalLLaMA/comments/1ox7hab/we_just_added_quantization_to_deepseekocrrs/
false
false
self
1
null
Observed a sharp “epoch-wise double descent” in a small MNIST MLP , associated with overfitting the augmented training data
8
I’ve been training a simple 3-layer MLP on MNIST using standard tricks (light affine augmentation, label smoothing, LR warmup, etc.), and I ran into an interesting pattern. The model reaches its best test accuracy fairly early, then test accuracy *declines* for a while, even though training accuracy keeps rising. To understand what was happening, I looked at the weight matrices layer-by-layer and computed the HTSR / weightwatcher power law layer quality metric (α) during training. At the point of peak test accuracy, α is close to 2 (which usually corresponds to well-fit layers). But as training continues, α drops significantly below 2, right when test accuracy starts declining. What makes this interesting is that the drop in α lines up almost perfectly with overfitting to the **augmented** training distribution. In other words, once augmentation no longer provides enough variety, the model seems to “memorize” these transformed samples and the spectra reflect that shift. Has anyone else seen this kind of **epoch-wise double descent** in small models? And especially this tight relationship with overfitting on the augmented data? https://preview.redd.it/w3qwfyqr3a1g1.png?width=989&format=png&auto=webp&s=d3b76405efc89a4322b8ae8efe88c38efbc44b22
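For anyone who wants to reproduce the per-layer α tracking, a minimal sketch with the weightwatcher package (the MLP below is just a stand-in for the model being trained; call it after each epoch):

    import torch.nn as nn
    import weightwatcher as ww

    model = nn.Sequential(          # stand-in 3-layer MLP
        nn.Linear(784, 512), nn.ReLU(),
        nn.Linear(512, 256), nn.ReLU(),
        nn.Linear(256, 10),
    )

    # ... after each training epoch ...
    watcher = ww.WeightWatcher(model=model)
    details = watcher.analyze()              # per-layer power-law fits
    print(details[["layer_id", "alpha"]])    # alpha near 2 around peak test accuracy,
                                             # alpha well below 2 once overfitting sets in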
2025-11-14T20:03:07
https://www.reddit.com/r/LocalLLaMA/comments/1ox6xt8/observed_a_sharp_epochwise_double_descent_in_a/
calculatedcontent
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox6xt8
false
null
t3_1ox6xt8
/r/LocalLLaMA/comments/1ox6xt8/observed_a_sharp_epochwise_double_descent_in_a/
false
false
https://b.thumbs.redditm…RTYgGuTqDAVA.jpg
8
null
Which model to choose?
0
First of all, I have a potato PC (: I searched for the best models that I can run on CPU and found these to be the best: https://huggingface.co/Liontix/Qwen3-4B-Thinking-2507-Gemini-2.5-Pro-Distill-GGUF and Unsloth's Q4_K_XL quant of the original base model, which I think is a pretty good deal (from what I've found, Unsloth XL variants are near-lossless). There are other models offered by the same user, but I haven't installed any models yet because of limited internet.
2025-11-14T20:00:54
https://www.reddit.com/r/LocalLLaMA/comments/1ox6vjr/which_model_to_choose/
Swimming-Ratio4879
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox6vjr
false
null
t3_1ox6vjr
/r/LocalLLaMA/comments/1ox6vjr/which_model_to_choose/
false
false
self
0
{'enabled': False, 'images': [{'id': 'X6LFNfr7MZfHb4KhjhlT85wGx4GMVwyjuI9pfWAyoiY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X6LFNfr7MZfHb4KhjhlT85wGx4GMVwyjuI9pfWAyoiY.png?width=108&crop=smart&auto=webp&s=8c41d5bef00390cb6dfb17f13c338af75e93b5ac', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/X6LFNfr7MZfHb4KhjhlT85wGx4GMVwyjuI9pfWAyoiY.png?width=216&crop=smart&auto=webp&s=ce4940e9c565eb803b82e12fdf67441da1e7a4d1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/X6LFNfr7MZfHb4KhjhlT85wGx4GMVwyjuI9pfWAyoiY.png?width=320&crop=smart&auto=webp&s=ed21f1557390e76b3e341ab74de5307faad97b78', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/X6LFNfr7MZfHb4KhjhlT85wGx4GMVwyjuI9pfWAyoiY.png?width=640&crop=smart&auto=webp&s=990e8453ff6ac3ce6c993285aa1054a435471528', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/X6LFNfr7MZfHb4KhjhlT85wGx4GMVwyjuI9pfWAyoiY.png?width=960&crop=smart&auto=webp&s=6ede218ec868cba78acfb1539aa758598dc59c81', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/X6LFNfr7MZfHb4KhjhlT85wGx4GMVwyjuI9pfWAyoiY.png?width=1080&crop=smart&auto=webp&s=8e364aedf407ebb171a8f9e48679699656ded6e4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/X6LFNfr7MZfHb4KhjhlT85wGx4GMVwyjuI9pfWAyoiY.png?auto=webp&s=02421f8cd50555a2aa777be7193613aebe636843', 'width': 1200}, 'variants': {}}]}
I used TOON and I was astonished how well it performed!
0
I took the last few days to compare results and run benchmarks between JSON, YAML and TOON, and found that TOON clearly does what it claims. The LLM results didn’t degrade. I also found a new converter that has 756 models and their prices, so you can see how it calculates the saved tokens; it also has a key-folding feature that reduces the tokens even further… The tool I checked btw https://www.toonllm.dev/ At first I was skeptical; now I will try to use it in my pipelines, as I already spend way too many tokens in APIs for JSONs. Even a 10% decrease would benefit me. Have you tried it too? In what use cases?
2025-11-14T19:52:27
https://www.reddit.com/r/LocalLLaMA/comments/1ox6ngy/i_used_toon_and_i_was_astonished_how_well_it/
aherontas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox6ngy
false
null
t3_1ox6ngy
/r/LocalLLaMA/comments/1ox6ngy/i_used_toon_and_i_was_astonished_how_well_it/
false
false
self
0
null
𝐄𝐩𝐨𝐜𝐡-𝐖𝐢𝐬𝐞 𝐃𝐨𝐮𝐛𝐥𝐞 𝐃𝐞𝐬𝐜𝐞𝐧𝐭 𝐰𝐢𝐭𝐡 𝐖𝐞𝐢𝐠𝐡𝐭𝐖𝐚𝐭𝐜𝐡𝐞𝐫
1
[removed]
2025-11-14T19:51:21
https://i.redd.it/c9dc0emq1a1g1.png
calculatedcontent
i.redd.it
1970-01-01T00:00:00
0
{}
1ox6mga
false
null
t3_1ox6mga
/r/LocalLLaMA/comments/1ox6mga/𝐄𝐩𝐨𝐜𝐡𝐖𝐢𝐬𝐞_𝐃𝐨𝐮𝐛𝐥𝐞_𝐃𝐞𝐬𝐜𝐞𝐧𝐭_𝐰𝐢𝐭𝐡_𝐖𝐞𝐢𝐠𝐡𝐭𝐖𝐚𝐭𝐜𝐡𝐞𝐫/
false
false
default
1
{'enabled': True, 'images': [{'id': 'c9dc0emq1a1g1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/c9dc0emq1a1g1.png?width=108&crop=smart&auto=webp&s=364bcb4c0be38cc06bf53b7155b665438dc0506b', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/c9dc0emq1a1g1.png?width=216&crop=smart&auto=webp&s=9e3b1ecc43480f4e1a6801a333b982b9c532cb2f', 'width': 216}, {'height': 142, 'url': 'https://preview.redd.it/c9dc0emq1a1g1.png?width=320&crop=smart&auto=webp&s=fe6cfc31f5bbc5121b8c4d495e26d8ff33345df4', 'width': 320}, {'height': 285, 'url': 'https://preview.redd.it/c9dc0emq1a1g1.png?width=640&crop=smart&auto=webp&s=926b33523d39f6e3319381eb99446243e86c2e22', 'width': 640}, {'height': 428, 'url': 'https://preview.redd.it/c9dc0emq1a1g1.png?width=960&crop=smart&auto=webp&s=38db74811ac34dce24495fee43119a13f41a4100', 'width': 960}], 'source': {'height': 458, 'url': 'https://preview.redd.it/c9dc0emq1a1g1.png?auto=webp&s=50bb19fe0a39cba51b98011b35f969cebd47afb9', 'width': 1025}, 'variants': {}}]}
New Nemo tune of creative \ adventure \ roleplay
22
Hi all, I introduce **Sweet\_Dreams\_12B,** a Nemo 12B tune with focus on more human and natural responses, with a fun vocabulary and reduced slop. Here's the TL;DR: * Accepts **wide range** of character cards formats. * **Unique** vocabulary. * **Very diverse** swipes. * Does adventure well. * **Morrowind** knowledge :) * Feels sometimes **very human** in the way it responds. * **Dynamic length** response with a **slight** bias towards **more paragraphs** (2–5 paragraphs, usually 2–3). Length is adjustable via 1–3 examples in the dialogue. **No more rigid short-bias!** [https://huggingface.co/SicariusSicariiStuff/Sweet\_Dreams\_12B](https://huggingface.co/SicariusSicariiStuff/Sweet_Dreams_12B)
2025-11-14T19:30:43
https://www.reddit.com/r/LocalLLaMA/comments/1ox63cc/new_nemo_tune_of_creative_adventure_roleplay/
Sicarius_The_First
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox63cc
false
null
t3_1ox63cc
/r/LocalLLaMA/comments/1ox63cc/new_nemo_tune_of_creative_adventure_roleplay/
false
false
self
22
{'enabled': False, 'images': [{'id': 'gsKPdwFnXrTSYNW7-htH7lHFwKgbQkH5_X9htQB1WWI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gsKPdwFnXrTSYNW7-htH7lHFwKgbQkH5_X9htQB1WWI.png?width=108&crop=smart&auto=webp&s=00266657cc694f88ed5116888462ac7ae68990ff', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gsKPdwFnXrTSYNW7-htH7lHFwKgbQkH5_X9htQB1WWI.png?width=216&crop=smart&auto=webp&s=a95d9d19d8805b2edff0cb13d4f25f14412fda21', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gsKPdwFnXrTSYNW7-htH7lHFwKgbQkH5_X9htQB1WWI.png?width=320&crop=smart&auto=webp&s=b8588b41588f1c2919014f056215baf46f64b5d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gsKPdwFnXrTSYNW7-htH7lHFwKgbQkH5_X9htQB1WWI.png?width=640&crop=smart&auto=webp&s=5827f547da137d3f5902aa507b85325f5462a208', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gsKPdwFnXrTSYNW7-htH7lHFwKgbQkH5_X9htQB1WWI.png?width=960&crop=smart&auto=webp&s=74e6a135371ee7bd09c2e09ebc14db6533395f6e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gsKPdwFnXrTSYNW7-htH7lHFwKgbQkH5_X9htQB1WWI.png?width=1080&crop=smart&auto=webp&s=16b0cdf0bc598764d3a30a334842dfbd7a843e58', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gsKPdwFnXrTSYNW7-htH7lHFwKgbQkH5_X9htQB1WWI.png?auto=webp&s=33f4f76caa6f8ba6c0d965c032204857c7df25c5', 'width': 1200}, 'variants': {}}]}
<8B LLM for Game Agent
0
Hi, I want to get some recommendations from you guys. What I want is an LLM to act as an agent for a game like Pokemon, but the model size should be less than 8B. Note that Qwen3-8B is in fact 8.2B, which is larger than 8B. Any suggestions? Any model recommendations are welcome.
2025-11-14T19:28:07
https://www.reddit.com/r/LocalLLaMA/comments/1ox60xq/8b_llm_for_game_agent/
always_newbee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox60xq
false
null
t3_1ox60xq
/r/LocalLLaMA/comments/1ox60xq/8b_llm_for_game_agent/
false
false
self
0
null
Testing token learning in my data storage compression
1
Doing great. It's alive :) 70% cost saving
2025-11-14T19:21:29
https://v.redd.it/pl6th6f7w91g1
Ljumberg
/r/LocalLLaMA/comments/1ox5upl/testing_token_learning_in_my_data_storage/
1970-01-01T00:00:00
0
{}
1ox5upl
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/pl6th6f7w91g1/DASHPlaylist.mpd?a=1765869695%2CYjQ4YTZiNTg3ZTQxNjM5ZWRkZmJhMmFmMDYyNWE1MmE5YTQ5YzU2Y2E5YmExN2NkNzhkNzkwZmFjYjVjMGE0NQ%3D%3D&v=1&f=sd', 'duration': 109, 'fallback_url': 'https://v.redd.it/pl6th6f7w91g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 706, 'hls_url': 'https://v.redd.it/pl6th6f7w91g1/HLSPlaylist.m3u8?a=1765869695%2CYTJiMDk4OTlhZTQzZWNmNWIwNzY2Yjg5ZTVkMmFkOTgyNTQ1M2E3Njc1Mjg5YTZhZWUwMzFmNTFmNzU3MDBiYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pl6th6f7w91g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1ox5upl
/r/LocalLLaMA/comments/1ox5upl/testing_token_learning_in_my_data_storage/
false
false
https://external-preview…24d18650c4a21a0a
1
{'enabled': False, 'images': [{'id': 'c3lxMTM4Zjd3OTFnMXuyBuST8m0EOQoV6FFZHxb2Dn6UM_YrAxFtuixKrDxT', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/c3lxMTM4Zjd3OTFnMXuyBuST8m0EOQoV6FFZHxb2Dn6UM_YrAxFtuixKrDxT.png?width=108&crop=smart&format=pjpg&auto=webp&s=d8df1fde3038ea6873ec19629e0528fd4d305341', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/c3lxMTM4Zjd3OTFnMXuyBuST8m0EOQoV6FFZHxb2Dn6UM_YrAxFtuixKrDxT.png?width=216&crop=smart&format=pjpg&auto=webp&s=14d2ab77fca27677dedf8c11de27aad8b3239605', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/c3lxMTM4Zjd3OTFnMXuyBuST8m0EOQoV6FFZHxb2Dn6UM_YrAxFtuixKrDxT.png?width=320&crop=smart&format=pjpg&auto=webp&s=11f7a161131c55c7b21d4b82db7047d2358cc50e', 'width': 320}, {'height': 352, 'url': 'https://external-preview.redd.it/c3lxMTM4Zjd3OTFnMXuyBuST8m0EOQoV6FFZHxb2Dn6UM_YrAxFtuixKrDxT.png?width=640&crop=smart&format=pjpg&auto=webp&s=8affab6db11fb4364c836aaa98faf9878905c43b', 'width': 640}, {'height': 529, 'url': 'https://external-preview.redd.it/c3lxMTM4Zjd3OTFnMXuyBuST8m0EOQoV6FFZHxb2Dn6UM_YrAxFtuixKrDxT.png?width=960&crop=smart&format=pjpg&auto=webp&s=b2822c067a2bb387019b5dc3ba3c9d84a03cf6bb', 'width': 960}, {'height': 595, 'url': 'https://external-preview.redd.it/c3lxMTM4Zjd3OTFnMXuyBuST8m0EOQoV6FFZHxb2Dn6UM_YrAxFtuixKrDxT.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4076162903fc7acbf7dc14e632ce7ac693b9f879', 'width': 1080}], 'source': {'height': 1058, 'url': 'https://external-preview.redd.it/c3lxMTM4Zjd3OTFnMXuyBuST8m0EOQoV6FFZHxb2Dn6UM_YrAxFtuixKrDxT.png?format=pjpg&auto=webp&s=3600374aa5928b1ac47eb99ec3a0ca04a43a9549', 'width': 1920}, 'variants': {}}]}
La Perf - Local AI benchmark. Need M1/M2/M3, AMD, Intel Arc contributions. Got 4090/A100/3090/T4/M4 Max
1
[removed]
2025-11-14T19:17:11
https://www.reddit.com/r/LocalLLaMA/comments/1ox5qq9/la_perf_local_ai_benchmark_need_m1m2m3_amd_intel/
Ok_Character_3172
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox5qq9
false
null
t3_1ox5qq9
/r/LocalLLaMA/comments/1ox5qq9/la_perf_local_ai_benchmark_need_m1m2m3_amd_intel/
false
false
self
1
null
nVidia selling 5090 for retail
1
[https://marketplace.nvidia.com/en-us/consumer/graphics-cards/](https://marketplace.nvidia.com/en-us/consumer/graphics-cards/)
2025-11-14T19:13:47
https://www.reddit.com/r/LocalLLaMA/comments/1ox5ngi/nvidia_selling_5090_for_retail/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox5ngi
false
null
t3_1ox5ngi
/r/LocalLLaMA/comments/1ox5ngi/nvidia_selling_5090_for_retail/
false
false
self
1
null
I need a CTO
0
I’m an unconventional CEO, we’ve got a unique and valuable AI solution, that needs the right person to get us where we should be. Ready to make something happen with all that time you’ve spent on this?
2025-11-14T18:47:57
https://www.reddit.com/r/LocalLLaMA/comments/1ox4ypn/i_need_a_cto/
broodsmilerepeat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox4ypn
false
null
t3_1ox4ypn
/r/LocalLLaMA/comments/1ox4ypn/i_need_a_cto/
false
false
self
0
null
Dumb question, but I want to dispel any doubts. Aren't MOE supposed to be much snappier than dense models?
0
So, I finally managed to upgrade my PC, and I am now a (relatively) happy owner of a Ryzen 7 9800X3D, 128 GB of 6400 MHz DDR5 RAM, and 2x 3090 Asus ROG Strix with 48 GB of VRAM total. Needless to say, I tried firing up some new models, GLM 4.5 Air to be precise, with 12B active parameters and 106B total parameters. I may be mistaken, but aren't those models supposed to be much faster than their dense cousins (for example a Mistral Large with 123B total parameters)? Both are quantized, Q8_0, but the speed difference is almost negligible. I thought that for MoE models only 1 or 2 experts would be active, leaving the rest inside the RAM pool, so the VRAM has to do all the dirty work... Am I doing something wrong? I am using the Oobabooga webui for inference, GGUF, offloading the maximum available layers onto the GPU... And I'm getting roughly 3 tokens per second with both models (GLM Air and Mistral). Any suggestion or elucidation? Thank you all in advance! Love this community!
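For context, a rough back-of-the-envelope of the ceiling each setup could reach (the bandwidth figures and the assumption that active weights are spread evenly across VRAM and RAM are ballpark simplifications, not measurements of this exact machine):

    GB = 1e9
    ram_bw = 80 * GB   # assumed realistic dual-channel DDR5-6400 throughput

    def ceiling_tps(active_params_billions, bytes_per_param, vram_gb, total_gb):
        # Crude model: each generated token reads all active weights; whatever
        # fraction of the model lives in system RAM is read at RAM speed.
        ram_share = max(0.0, 1.0 - vram_gb / total_gb)
        ram_bytes_per_token = active_params_billions * 1e9 * bytes_per_param * ram_share
        return float("inf") if ram_bytes_per_token == 0 else ram_bw / ram_bytes_per_token

    # GLM-4.5 Air Q8_0: ~106 GB of weights, 12B active, ~48 GB offloaded to the 3090s
    print(ceiling_tps(12, 1.0, 48, 106))    # roughly 12 t/s upper bound
    # Mistral Large Q8_0: ~123 GB of weights, all 123B active every token
    print(ceiling_tps(123, 1.0, 48, 123))   # roughly 1 t/s upper bound

If the observed speed sits far below the MoE estimate while matching the dense one, the offload split (which layers and experts actually end up in VRAM) is usually the first thing to check.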
2025-11-14T18:19:23
https://www.reddit.com/r/LocalLLaMA/comments/1ox46wp/dumb_question_but_i_want_to_dispel_any_doubts/
Relative_Bit_7250
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox46wp
false
null
t3_1ox46wp
/r/LocalLLaMA/comments/1ox46wp/dumb_question_but_i_want_to_dispel_any_doubts/
false
false
self
0
null
Fixed KV cache bug in ByteDance Ouro-1.4B - 1.7x speedup
10
I encountered a KV-cache bug in ByteDance's **Ouro-1.4B** that caused out-of-bounds errors and slow inference. I created a fix that's now available on PyPI. ### 🔍 Problem The Universal Transformer architecture needs **96–128 cache indices**, but `DynamicCache` only provides ~30, leading to crashes and degraded performance. ### 🛠 Solution `UniversalTransformerCache` pre-allocates cache indices for all UT steps, eliminating out-of-bounds issues. ### 📈 Results - **1.3×–1.7× faster inference** - No more KV cache errors ### 📦 Install pip install ouro-cache-fix ### 🔗 Links GitHub: [https://github.com/Antizana/ouro-cache-fix](https://github.com/Antizana/ouro-cache-fix) PyPI: [https://pypi.org/project/ouro-cache-fix/](https://pypi.org/project/ouro-cache-fix/) Looking for testers and feedback!
2025-11-14T17:59:29
https://www.reddit.com/r/LocalLLaMA/comments/1ox3n0t/fixed_kv_cache_bug_in_bytedance_ouro14b_17x/
Livid_Fisherman_9884
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox3n0t
false
null
t3_1ox3n0t
/r/LocalLLaMA/comments/1ox3n0t/fixed_kv_cache_bug_in_bytedance_ouro14b_17x/
false
false
self
10
null
Epoch-Wise Double Descent with WeightWatcher
1
𝐄𝐩𝐨𝐜𝐡-𝐖𝐢𝐬𝐞 𝐃𝐨𝐮𝐛𝐥𝐞 𝐃𝐞𝐬𝐜𝐞𝐧𝐭 𝐰𝐢𝐭𝐡 𝐖𝐞𝐢𝐠𝐡𝐭𝐖𝐚𝐭𝐜𝐡𝐞𝐫. If you train a 3-layer MLP on MNIST, but with randomly augmented training data, you can get a very high test accuracy (orange)...but the accuracy on the augmented training data (blue) tanks.  𝘐𝘯 𝘰𝘵𝘩𝘦𝘳 𝘸𝘰𝘳𝘥𝘴, 𝘵𝘩𝘦 𝘮𝘰𝘥𝘦𝘭 𝘶𝘯𝘥𝘦𝘳𝘨𝘰𝘦𝘴 𝘢 𝘱𝘩𝘢𝘴𝘦 𝘵𝘳𝘢𝘯𝘴𝘪𝘵𝘪𝘰𝘯. What is commonly called Epoch-wise Double Descent. 🧠 And at exactly this point, the weightwatcher alphas go from optimal α ≈ 2 to very overfit α << 2. As predicted by theory! The model is now in the overfit regime. And it can no longer effectively generalize to out-of-distribution data. But it's not the test accuracy it fails on. ⚠️ It's the 𝘳𝘢𝘯𝘥𝘰𝘮𝘭𝘺 𝘢𝘶𝘨𝘮𝘦𝘯𝘵𝘦𝘥 𝘵𝘳𝘢𝘪𝘯𝘪𝘯𝘨 𝘥𝘢𝘵𝘢 it can no longer describe! That's the true out-of-distribution data. 𝐀𝐧𝐨𝐭𝐡𝐞𝐫 𝐞𝐱𝐚𝐦𝐩𝐥𝐞 𝐨𝐟 𝐰𝐞𝐢𝐠𝐡𝐭𝐰𝐚𝐭𝐜𝐡𝐞𝐫 𝐢𝐧 𝐚𝐜𝐭𝐢𝐨𝐧! [https://weightwatcher.ai](https://weightwatcher.ai/) 👉 Want to learn more ? Join us on the community Discord P.S. You can find this experiment and other examples in: [https://github.com/CalculatedContent/WeightWatcher-examples](https://github.com/CalculatedContent/WeightWatcher-examples)
2025-11-14T17:56:34
https://i.redd.it/czuuunk4h91g1.png
calculatedcontent
i.redd.it
1970-01-01T00:00:00
0
{}
1ox3k9q
false
null
t3_1ox3k9q
/r/LocalLLaMA/comments/1ox3k9q/epochwise_double_descent_with_weightwatcher/
false
false
default
1
{'enabled': True, 'images': [{'id': 'czuuunk4h91g1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/czuuunk4h91g1.png?width=108&crop=smart&auto=webp&s=955fc62530a850bae9d4dbc899fcb92c68127c0b', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/czuuunk4h91g1.png?width=216&crop=smart&auto=webp&s=a5a839fedc0ecb67065529f6117883471f364e6d', 'width': 216}, {'height': 142, 'url': 'https://preview.redd.it/czuuunk4h91g1.png?width=320&crop=smart&auto=webp&s=666ecd4584bb4d02d1321a395c4980fc76539227', 'width': 320}, {'height': 285, 'url': 'https://preview.redd.it/czuuunk4h91g1.png?width=640&crop=smart&auto=webp&s=dd51cd10acaca811db937aeac56569bdea1e24cc', 'width': 640}, {'height': 428, 'url': 'https://preview.redd.it/czuuunk4h91g1.png?width=960&crop=smart&auto=webp&s=da851d5991c025af46c411e57ff8a18bb2cad7c1', 'width': 960}], 'source': {'height': 458, 'url': 'https://preview.redd.it/czuuunk4h91g1.png?auto=webp&s=e135889e7872f6261289893147207ddb487a52fe', 'width': 1025}, 'variants': {}}]}
Open-source RAG/LLM evaluation framework; I’m part of the team and would love feedback
11
Hey everyone, I’m a software engineering student who recently joined a small team working on Rhesis, an open-source framework for evaluating RAG systems and LLM outputs. I’m still learning a great deal about evaluation pipelines, so I wanted to share my insights here and hear what people in this community think. The goal is to make it easier to run different metrics in one place, rather than jumping between tools. Right now it supports: • RAG + LLM output evaluation • DeepEval, RAGAS, and custom metrics • Versioned test suites • Local + CI execution, optional self-hosted backend I’m really curious about how people here handle evaluation, what pain points you have, and what would make a framework like this genuinely useful. GitHub: https://github.com/rhesis-ai/rhesis Any thoughts, critiques, or ideas are super appreciated.
2025-11-14T17:55:04
https://www.reddit.com/r/LocalLLaMA/comments/1ox3iu3/opensource_ragllm_evaluation_framework_im_part_of/
IOnlyDrinkWater_22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox3iu3
false
null
t3_1ox3iu3
/r/LocalLLaMA/comments/1ox3iu3/opensource_ragllm_evaluation_framework_im_part_of/
false
false
self
11
{'enabled': False, 'images': [{'id': '9yUmqeHjksCscGJ9fxqi_e1zaVCjqPDj02VJ8fEuHYQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9yUmqeHjksCscGJ9fxqi_e1zaVCjqPDj02VJ8fEuHYQ.png?width=108&crop=smart&auto=webp&s=fc7d48eb62cb2d45ddc18da1e60e60fb8bc31dd8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9yUmqeHjksCscGJ9fxqi_e1zaVCjqPDj02VJ8fEuHYQ.png?width=216&crop=smart&auto=webp&s=9217408319d049b36ce57f99fb9cc1765b39a02a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9yUmqeHjksCscGJ9fxqi_e1zaVCjqPDj02VJ8fEuHYQ.png?width=320&crop=smart&auto=webp&s=aaef23beab50b7ec2f501149047e054736cb4992', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9yUmqeHjksCscGJ9fxqi_e1zaVCjqPDj02VJ8fEuHYQ.png?width=640&crop=smart&auto=webp&s=904a538fc54f255952797e66fd7693883d0ab520', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9yUmqeHjksCscGJ9fxqi_e1zaVCjqPDj02VJ8fEuHYQ.png?width=960&crop=smart&auto=webp&s=101e492cb16ae32b55309504ef23459017ad371a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9yUmqeHjksCscGJ9fxqi_e1zaVCjqPDj02VJ8fEuHYQ.png?width=1080&crop=smart&auto=webp&s=fc1916e63aa59ccb7153d45ed211aee26b023b9a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9yUmqeHjksCscGJ9fxqi_e1zaVCjqPDj02VJ8fEuHYQ.png?auto=webp&s=a2938d3825dda04a187e0f59481125fc72047b4f', 'width': 1200}, 'variants': {}}]}
Drummer's Precog 24B and 123B v1 - AI that writes a short draft before responding
150
Hey guys! I wanted to explore a different way of thinking where the AI uses the `<think>` block to plan ahead and create a short draft so that its *actual* response has **basis**. It seems like a good way to have the AI pan out its start, middle, and end before writing the entire thing. Kind of like a synopsis or abstract. I'm hoping it could strengthen consistency and flow since the AI doesn't have to *wing it* and write a thousand tokens from the get-go. It's a cheaper, more effective alternative to reasoning, especially when it comes to story / RP. You can also make adjustments to the draft to steer it a certain way. Testers have been happy with it. 24B: [https://huggingface.co/TheDrummer/Precog-24B-v1](https://huggingface.co/TheDrummer/Precog-24B-v1) 123B: [https://huggingface.co/TheDrummer/Precog-123B-v1](https://huggingface.co/TheDrummer/Precog-123B-v1) Examples: https://preview.redd.it/1li2viecf91g1.png?width=2264&format=png&auto=webp&s=af225606b23751beaf3076b1a58140b1c77b1a4f https://preview.redd.it/7iu4m7zcf91g1.png?width=887&format=png&auto=webp&s=4de7655654340ec91216d8a61c93c474571b1dc0 https://preview.redd.it/3qo833ndf91g1.png?width=1010&format=png&auto=webp&s=0cac98a5e93dd87baa885bda58574385b8e73c11
2025-11-14T17:50:02
https://www.reddit.com/r/LocalLLaMA/comments/1ox3e1f/drummers_precog_24b_and_123b_v1_ai_that_writes_a/
TheLocalDrummer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox3e1f
false
null
t3_1ox3e1f
/r/LocalLLaMA/comments/1ox3e1f/drummers_precog_24b_and_123b_v1_ai_that_writes_a/
false
false
self
150
{'enabled': False, 'images': [{'id': 'vl2ei1-FehJR-7jZHQXuFZ_Y0kemf2CP216W8qh6VxE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vl2ei1-FehJR-7jZHQXuFZ_Y0kemf2CP216W8qh6VxE.png?width=108&crop=smart&auto=webp&s=38689c287de21c28b946a6069edf9dfb076dbeda', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vl2ei1-FehJR-7jZHQXuFZ_Y0kemf2CP216W8qh6VxE.png?width=216&crop=smart&auto=webp&s=7aeee24f860137583ae1495cc53762ce7a43679f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vl2ei1-FehJR-7jZHQXuFZ_Y0kemf2CP216W8qh6VxE.png?width=320&crop=smart&auto=webp&s=f982f4fbd508aed444c61fa809acfa1a1c43f02c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vl2ei1-FehJR-7jZHQXuFZ_Y0kemf2CP216W8qh6VxE.png?width=640&crop=smart&auto=webp&s=2798025eaf994dc8b7c090c13aa6bdefb4507a02', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vl2ei1-FehJR-7jZHQXuFZ_Y0kemf2CP216W8qh6VxE.png?width=960&crop=smart&auto=webp&s=73513aa5e8382b888613ffe21efc1e11d377b514', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vl2ei1-FehJR-7jZHQXuFZ_Y0kemf2CP216W8qh6VxE.png?width=1080&crop=smart&auto=webp&s=cae71ad0ababb031a1ee9462e322b1569a6a69b3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vl2ei1-FehJR-7jZHQXuFZ_Y0kemf2CP216W8qh6VxE.png?auto=webp&s=d34a52021f18735beb5cad849ff0228e4035a368', 'width': 1200}, 'variants': {}}]}
are there any resources for reading system design of the ai coding agents
0
Yeah, as in the title: are there any resources for reading about the system design of AI coding agents like Lovable, v0, or any similar applications?
2025-11-14T17:31:52
https://www.reddit.com/r/LocalLLaMA/comments/1ox2wci/are_there_any_resources_for_reading_system_design/
lavangamm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox2wci
false
null
t3_1ox2wci
/r/LocalLLaMA/comments/1ox2wci/are_there_any_resources_for_reading_system_design/
false
false
self
0
null
Getting a little help on my sketch from AI
0
2025-11-14T17:16:00
https://v.redd.it/lplcmh90a91g1
aigeneration
v.redd.it
1970-01-01T00:00:00
0
{}
1ox2h3z
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lplcmh90a91g1/DASHPlaylist.mpd?a=1765732577%2COTc1NDBhZmY4MThhZWYzY2E4MDhjM2IxZjQ3NmUwNDRkNWU5MGNlYTE2YTRmNWVmYzY3NDMzMWI3ZmRjZTZiZQ%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/lplcmh90a91g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/lplcmh90a91g1/HLSPlaylist.m3u8?a=1765732577%2CZDY4YWI4YjBmNzJhMTg0MmEzMzAxNDE1N2E0YWMzYTAwYWE4ZjY0ZTZjNGU2NDU4MWE5ODE0MGJmM2I2ZWRhZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lplcmh90a91g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1828}}
t3_1ox2h3z
/r/LocalLLaMA/comments/1ox2h3z/getting_a_little_help_on_my_sketch_from_ai/
false
false
https://external-preview…9f6d56ea6a41f576
0
{'enabled': False, 'images': [{'id': 'YmV2cmozaTNhOTFnMWY3QHv-48-5SJLENQKUHtfKplvZvMAfuun-q4uRaqqN', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/YmV2cmozaTNhOTFnMWY3QHv-48-5SJLENQKUHtfKplvZvMAfuun-q4uRaqqN.png?width=108&crop=smart&format=pjpg&auto=webp&s=4dad51f5af766d483da1bc5c8491a620a8166a81', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/YmV2cmozaTNhOTFnMWY3QHv-48-5SJLENQKUHtfKplvZvMAfuun-q4uRaqqN.png?width=216&crop=smart&format=pjpg&auto=webp&s=0619062ab96aa2343c7fa9e7686df8848fd45cab', 'width': 216}, {'height': 189, 'url': 'https://external-preview.redd.it/YmV2cmozaTNhOTFnMWY3QHv-48-5SJLENQKUHtfKplvZvMAfuun-q4uRaqqN.png?width=320&crop=smart&format=pjpg&auto=webp&s=5ba22737f730c2b454c8503c9b448e4062e35ee6', 'width': 320}, {'height': 378, 'url': 'https://external-preview.redd.it/YmV2cmozaTNhOTFnMWY3QHv-48-5SJLENQKUHtfKplvZvMAfuun-q4uRaqqN.png?width=640&crop=smart&format=pjpg&auto=webp&s=a92139ba4fb46af6a4c54f61c3a8e42885b6c0c8', 'width': 640}, {'height': 567, 'url': 'https://external-preview.redd.it/YmV2cmozaTNhOTFnMWY3QHv-48-5SJLENQKUHtfKplvZvMAfuun-q4uRaqqN.png?width=960&crop=smart&format=pjpg&auto=webp&s=b98a2948819c2a2ae60a654320a6f1463a85cd56', 'width': 960}, {'height': 638, 'url': 'https://external-preview.redd.it/YmV2cmozaTNhOTFnMWY3QHv-48-5SJLENQKUHtfKplvZvMAfuun-q4uRaqqN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f7ee49b8ed7774b4a79a45eb4fcac6ac1a5bbb70', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YmV2cmozaTNhOTFnMWY3QHv-48-5SJLENQKUHtfKplvZvMAfuun-q4uRaqqN.png?format=pjpg&auto=webp&s=a2c641e9211c8e64ae6e8c987629b5e411d4321f', 'width': 1828}, 'variants': {}}]}
Risk of LLM Judges in Paper Review: Scores Could Mask Poor Quality
29
See this twitter thread: https://nitter.net/micahgoldblum/status/1989088547777966512

A couple of quotes:

> An LLM-generated paper is in the top 17% of ICLR submissions in terms of average reviewer score, having received two 8's. The paper has tons of BS jargon and hallucinated references. Fortunately, one reviewer actually looked at the paper and gave it a zero.

> Do you think the other 2 reviewers who gave it 8 just used LLMs to review as well?

> Likely

Other discussions also mention that peer review submissions are free (one can submit a ton of those). What if people simply produce a ton of paper slop to review, human peer reviewers get fatigued and fall back on LLMs as judges, and those don't know better?
2025-11-14T16:55:58
https://www.reddit.com/r/LocalLLaMA/comments/1ox1x54/risk_of_llm_judges_in_paper_review_scores_could/
pier4r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox1x54
false
null
t3_1ox1x54
/r/LocalLLaMA/comments/1ox1x54/risk_of_llm_judges_in_paper_review_scores_could/
false
false
self
29
null
How to configure the minimum VLLM–20t/s running minimaxm2 on the computer?
1
Could someone help me analyze this? I want to configure a personal workstation, with the goal of running MiniMax M2: 1. a stable 30k context at 20 t/s with Q4_K_M quantization in vLLM, and 2. a stable 30k context at 30 t/s with Q4_K_M quantization in llama.cpp. The configuration I have now: 2x48 GB of 6400 MHz memory and a 5090 with 32 GB of VRAM. How can I upgrade to realize these two dreams? Can you give me some advice? Thank you!
2025-11-14T16:45:27
https://www.reddit.com/r/LocalLLaMA/comments/1ox1myz/how_to_configure_the_minimum_vllm20ts_running/
Front-Relief473
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox1myz
false
null
t3_1ox1myz
/r/LocalLLaMA/comments/1ox1myz/how_to_configure_the_minimum_vllm20ts_running/
false
false
self
1
null
LLMs from emacs
6
https://preview.redd.it/… for comparison.
2025-11-14T16:40:47
https://www.reddit.com/r/LocalLLaMA/comments/1ox1icc/llms_from_emacs/
GregariousWolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox1icc
false
null
t3_1ox1icc
/r/LocalLLaMA/comments/1ox1icc/llms_from_emacs/
false
false
https://b.thumbs.redditm…YirJKDo0GAjY.jpg
6
null
Why does nvidia-smi show 2% GPU utilization when the GPU is idle?
0
This doesn’t happen on my old RTX 2080 Ti OS: Ubuntu 24.10 CUDA: 13.0.2 Driver: 580.105.08
2025-11-14T16:24:56
https://i.redd.it/uc5y7m0e091g1.png
Mindless_Pain1860
i.redd.it
1970-01-01T00:00:00
0
{}
1ox12gk
false
null
t3_1ox12gk
/r/LocalLLaMA/comments/1ox12gk/why_does_nvidiasmi_show_2_gpu_utilization_when/
false
false
default
0
{'enabled': True, 'images': [{'id': 'uc5y7m0e091g1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/uc5y7m0e091g1.png?width=108&crop=smart&auto=webp&s=2b0afcc6665c437b02e1d22d46371bfa354a10a4', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/uc5y7m0e091g1.png?width=216&crop=smart&auto=webp&s=bdede81f88789f5249aaf97aa9e840473c11a981', 'width': 216}, {'height': 158, 'url': 'https://preview.redd.it/uc5y7m0e091g1.png?width=320&crop=smart&auto=webp&s=9b3f7add3f90333ea8c00012b2e7a55eae76da47', 'width': 320}, {'height': 317, 'url': 'https://preview.redd.it/uc5y7m0e091g1.png?width=640&crop=smart&auto=webp&s=9f34f58b9b1d0409e6accce4430cb330cbcccbb4', 'width': 640}], 'source': {'height': 463, 'url': 'https://preview.redd.it/uc5y7m0e091g1.png?auto=webp&s=47d675db55f8f3356874429275e4e692b9717dde', 'width': 933}, 'variants': {}}]}
Open-source local Claude-Code alternative for DevOps - looking for beta testers
1
I’ve been working on a small open-source project - a local Claude-Code-style assistant built with ollama. It runs entirely offline, uses a locally trained model optimised for speed, and can handle practical DevOps tasks like reading/writing files, running shell commands, and checking env vars. Core ideas: * Local model (Ollama), uses only \~1.1 GB RAM (kept small for DevOps use) * Speed optimised - after initial load it responds in about 7–10 seconds * No data leaking, no APIs, no telemetry, no subscriptions Repo: [https://github.com/ubermorgenland/devops-agent](https://github.com/ubermorgenland/devops-agent) It’s early-stage, but working - would love a few beta testers to try it locally and share feedback or ideas for new tools.
2025-11-14T16:24:49
https://www.reddit.com/r/LocalLLaMA/comments/1ox12cj/opensource_local_claudecode_alternative_for/
apinference
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox12cj
false
null
t3_1ox12cj
/r/LocalLLaMA/comments/1ox12cj/opensource_local_claudecode_alternative_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YU275vvc-GapPfaAZ6EEPPb5elRhrDOVhFp0aEy3Ilw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YU275vvc-GapPfaAZ6EEPPb5elRhrDOVhFp0aEy3Ilw.png?width=108&crop=smart&auto=webp&s=d9808e17569f5bb4f44377a4721b38688c21cc59', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YU275vvc-GapPfaAZ6EEPPb5elRhrDOVhFp0aEy3Ilw.png?width=216&crop=smart&auto=webp&s=a72c13ab2766c2a7f39730df5a44f1310a416e4f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YU275vvc-GapPfaAZ6EEPPb5elRhrDOVhFp0aEy3Ilw.png?width=320&crop=smart&auto=webp&s=35d13cdf29a0025f2a7867187139c768bc5b0b76', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YU275vvc-GapPfaAZ6EEPPb5elRhrDOVhFp0aEy3Ilw.png?width=640&crop=smart&auto=webp&s=ecf2d43c7472c97fb0d37d47cf14c3c077aa5e0e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YU275vvc-GapPfaAZ6EEPPb5elRhrDOVhFp0aEy3Ilw.png?width=960&crop=smart&auto=webp&s=70fa4b2f9fc6f5312cb279e0a8fa1e78c765caf9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YU275vvc-GapPfaAZ6EEPPb5elRhrDOVhFp0aEy3Ilw.png?width=1080&crop=smart&auto=webp&s=f034f99c510aeb0ad318ab18d98d1acf202ec0c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YU275vvc-GapPfaAZ6EEPPb5elRhrDOVhFp0aEy3Ilw.png?auto=webp&s=63f0ea80d5daf4c97600b7c2659b02f197ef78c5', 'width': 1200}, 'variants': {}}]}
How do i convert a LMStudio oriented RAG pipeline to vLLM oriented one ?
0
I have been following [running RAGAnything locally using LMStudio](https://github.com/HKUDS/RAG-Anything/blob/main/examples/lmstudio_integration_example.py), but our local server has vLLM installed on it. How do I make the transition from LMStudio to vLLM error-free?
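Both LM Studio and vLLM expose OpenAI-compatible HTTP endpoints, so most of the transition is pointing the same client at a different base URL rather than rewriting the pipeline. A minimal sketch of that idea, not the actual RAG-Anything API; the ports, model name, and helper function here are assumptions:

```python
# Minimal sketch: the same OpenAI-compatible client works against either backend.
# Only base_url, api_key, and the model name change; chunking, retrieval, and
# prompting in the RAG pipeline stay identical.
from openai import OpenAI

# LM Studio's default local endpoint
lmstudio = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# vLLM, started e.g. with: vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
vllm = OpenAI(base_url="http://your-server:8000/v1", api_key="EMPTY")

def complete(client: OpenAI, model: str, prompt: str) -> str:
    """Send one chat completion; the call is identical for either backend."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(complete(vllm, "Qwen/Qwen2.5-7B-Instruct", "Hello from the RAG pipeline"))
```

If the LM Studio example builds on an OpenAI-style client under the hood, the same base_url/model swap should be the only change needed; the embedding endpoint would need the same treatment.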
2025-11-14T16:17:15
https://www.reddit.com/r/LocalLLaMA/comments/1ox0ur9/how_do_i_convert_a_lmstudio_oriented_rag_pipeline/
primumnc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox0ur9
false
null
t3_1ox0ur9
/r/LocalLLaMA/comments/1ox0ur9/how_do_i_convert_a_lmstudio_oriented_rag_pipeline/
false
false
self
0
{'enabled': False, 'images': [{'id': 'jX_l_J9i4K5UCDYgeW9-Mcoo-Cd1X8i7lNWeF0JSvkM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jX_l_J9i4K5UCDYgeW9-Mcoo-Cd1X8i7lNWeF0JSvkM.png?width=108&crop=smart&auto=webp&s=58ae5d2cd094dac494748778acbd2bd1892e1a75', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jX_l_J9i4K5UCDYgeW9-Mcoo-Cd1X8i7lNWeF0JSvkM.png?width=216&crop=smart&auto=webp&s=83cc3c8dbb28b2e84091f22afc5044b4edb937b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jX_l_J9i4K5UCDYgeW9-Mcoo-Cd1X8i7lNWeF0JSvkM.png?width=320&crop=smart&auto=webp&s=96d2488dde766cfda80a73fc4b85607a3900a678', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jX_l_J9i4K5UCDYgeW9-Mcoo-Cd1X8i7lNWeF0JSvkM.png?width=640&crop=smart&auto=webp&s=4be88e2a79910ce6a9e6588128a8c702c816ff50', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jX_l_J9i4K5UCDYgeW9-Mcoo-Cd1X8i7lNWeF0JSvkM.png?width=960&crop=smart&auto=webp&s=f444b475c1029be4a999507fdaf11068f04f2c38', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jX_l_J9i4K5UCDYgeW9-Mcoo-Cd1X8i7lNWeF0JSvkM.png?width=1080&crop=smart&auto=webp&s=00e2a94a9b70b47f33adb975619807d131c76ab1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jX_l_J9i4K5UCDYgeW9-Mcoo-Cd1X8i7lNWeF0JSvkM.png?auto=webp&s=9f2180eba6b505a31fb6c4fcd6837ac5fefeed05', 'width': 1200}, 'variants': {}}]}
I've bought a RTX 6000 PRO. Now what?
0
A little context: I was using a 5090 until last week. I work mainly with image and video models and consider myself an advanced ComfyUI user. The 5090 gave me the power to run Flux fp16 instead of quantized versions, and Qwen and Wan in fp8. Now the 6000 gives me the power to run all video models in fp16 and generate longer videos. Next, I would like to be more adventurous in the LLM field, where I am a total noob. Where do I start? What fits inside a single 6000 PRO (96GB) plus 128 GB of DDR5 RAM? Can I cancel my Claude subscription?
2025-11-14T16:05:57
https://www.reddit.com/r/LocalLLaMA/comments/1ox0jpm/ive_bought_a_rtx_6000_pro_now_what/
applied_intelligence
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ox0jpm
false
null
t3_1ox0jpm
/r/LocalLLaMA/comments/1ox0jpm/ive_bought_a_rtx_6000_pro_now_what/
false
false
self
0
null
This new LLM site gives free LLM credits!
1
[removed]
2025-11-14T15:44:02
https://www.reddit.com/r/LocalLLaMA/comments/1owzy6p/this_new_llm_site_gives_free_llm_credits/
Desperate-Mine2845
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owzy6p
false
null
t3_1owzy6p
/r/LocalLLaMA/comments/1owzy6p/this_new_llm_site_gives_free_llm_credits/
false
false
self
1
null
Software dev from Serbia looking for proven AI B2B ideas - we're 2 years behind the curve
0
Hey everyone, I'm a developer from Serbia reaching out to this community for some insights. Our market typically lags 1-2 years behind more tech-advanced countries in terms of adoption and trends. There's currently a grant competition here offering funding for AI projects, and I want to build something with real traction potential rather than shooting in the dark.

**My ask:** What AI-powered B2B solutions have taken off in your country/region in the past 1-2 years? The "time lag" here might be an advantage - what's already validated in your markets could be a greenfield opportunity in Serbia and the Balkans.

Background: I work in fintech/payroll systems, so I understand enterprise software, but I'm open to any vertical that's shown real success. My plan is to use Llama models (likely self-hosted or via affordable APIs) to keep costs down and maintain control over the solution.

Any war stories, successes, or lessons learned would be incredibly valuable. Thanks!
2025-11-14T15:21:49
https://www.reddit.com/r/LocalLLaMA/comments/1owzchz/software_dev_from_serbia_looking_for_proven_ai/
Certain-Sherbert-641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owzchz
false
null
t3_1owzchz
/r/LocalLLaMA/comments/1owzchz/software_dev_from_serbia_looking_for_proven_ai/
false
false
self
0
null
Sorry for the dumb question, but why are there MXFP4 GGUFs but no NVFP4 GGUFs?
3
We just got some DGX Spark boxes at work for development purposes and I loaded up LM Studio on them. I heard that the model type that will run best on them is NVFP4, but I can’t seem to find any NVFP4 models in LM Studio. The closest I’ve been able to find is MXFP4 (which is the default selection when you attempt to download gpt-oss-120b on DGX Spark). Is MXFP4 just as good as NVFP4 performance-wise? Am I completely out of luck for NVFP4 GGUFs (I guess they are not a thing, as I’m not seeing any on HF)? Is vLLM my only option for finding and running these quants on DGX Spark?
2025-11-14T15:14:50
https://www.reddit.com/r/LocalLLaMA/comments/1owz5s7/sorry_for_the_dumb_question_but_why_are_there/
Porespellar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owz5s7
false
null
t3_1owz5s7
/r/LocalLLaMA/comments/1owz5s7/sorry_for_the_dumb_question_but_why_are_there/
false
false
self
3
null
Another Chinese Site Giving free $125 on Major LLM like Claude Sonnet 4.5!
1
[removed]
2025-11-14T15:12:18
https://www.reddit.com/r/LocalLLaMA/comments/1owz3cr/another_chinese_site_giving_free_125_on_major_llm/
Desperate-Mine2845
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owz3cr
false
null
t3_1owz3cr
/r/LocalLLaMA/comments/1owz3cr/another_chinese_site_giving_free_125_on_major_llm/
false
false
self
1
null
Get $125 free credit for Claude Sonnet 4.5 using my referral at megallm.io - REF-KNIUAQ72
1
[removed]
2025-11-14T15:06:04
https://www.reddit.com/r/LocalLLaMA/comments/1owyxen/get_125_free_credit_for_claude_sonnet_45_using_my/
Desperate-Mine2845
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owyxen
false
null
t3_1owyxen
/r/LocalLLaMA/comments/1owyxen/get_125_free_credit_for_claude_sonnet_45_using_my/
false
false
self
1
null
Get $125 free credit for Claude Sonnet 4.5 using my referral at megallm.io - REF-KNIUAQ72
1
[removed]
2025-11-14T15:02:32
https://www.reddit.com/r/LocalLLaMA/comments/1owytx5/get_125_free_credit_for_claude_sonnet_45_using_my/
Desperate-Mine2845
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owytx5
false
null
t3_1owytx5
/r/LocalLLaMA/comments/1owytx5/get_125_free_credit_for_claude_sonnet_45_using_my/
false
false
self
1
null
The Big LLM Architecture Comparison: From DeepSeek-V3 to Kimi K2 Thinking
172
2025-11-14T14:58:02
https://sebastianraschka.com/blog/2025/the-big-llm-architecture-comparison.html
seraschka
sebastianraschka.com
1970-01-01T00:00:00
0
{}
1owyp8q
false
null
t3_1owyp8q
/r/LocalLLaMA/comments/1owyp8q/the_big_llm_architecture_comparison_from/
false
false
default
172
{'enabled': False, 'images': [{'id': '5N8z_mXiAneWfY6B3hrkRbiDD5IkgsvFJWMT1AAURS8', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/5N8z_mXiAneWfY6B3hrkRbiDD5IkgsvFJWMT1AAURS8.jpeg?width=108&crop=smart&auto=webp&s=bb798616d88900fbd4d6cdf789869938c2ea533e', 'width': 108}, {'height': 165, 'url': 'https://external-preview.redd.it/5N8z_mXiAneWfY6B3hrkRbiDD5IkgsvFJWMT1AAURS8.jpeg?width=216&crop=smart&auto=webp&s=8fd1ba33a5a0872d6ecf5d04f90d60f304af32a5', 'width': 216}, {'height': 244, 'url': 'https://external-preview.redd.it/5N8z_mXiAneWfY6B3hrkRbiDD5IkgsvFJWMT1AAURS8.jpeg?width=320&crop=smart&auto=webp&s=836875ab9e564fb04608a3122ae2456d74ae10c0', 'width': 320}, {'height': 489, 'url': 'https://external-preview.redd.it/5N8z_mXiAneWfY6B3hrkRbiDD5IkgsvFJWMT1AAURS8.jpeg?width=640&crop=smart&auto=webp&s=c88626d25ad555fca14567f2825f2de4449a35ff', 'width': 640}, {'height': 734, 'url': 'https://external-preview.redd.it/5N8z_mXiAneWfY6B3hrkRbiDD5IkgsvFJWMT1AAURS8.jpeg?width=960&crop=smart&auto=webp&s=564728ebd15b5c79202c61de3771ec632e365d83', 'width': 960}], 'source': {'height': 782, 'url': 'https://external-preview.redd.it/5N8z_mXiAneWfY6B3hrkRbiDD5IkgsvFJWMT1AAURS8.jpeg?auto=webp&s=67f9024fbdabd420c4029396474cee17fdbd8ab8', 'width': 1022}, 'variants': {}}]}
The Big LLM Architecture Comparison: From DeepSeek-V3 to Kimi K2 Thinking: A Look At Modern LLM Architecture Design
1
2025-11-14T14:57:10
http://sebastianraschka.com/blog/2025/the-big-llm-architecture-comparison.html
seraschka
sebastianraschka.com
1970-01-01T00:00:00
0
{}
1owyoip
false
null
t3_1owyoip
/r/LocalLLaMA/comments/1owyoip/the_big_llm_architecture_comparison_from/
false
false
default
1
null
distil-localdoc.py - SLM assistant for writing Python documentation
9
We built an SLM assistant for automatic Python documentation - a Qwen3 0.6B parameter model that generates complete, properly formatted docstrings for your code in Google style. Run it locally, keeping your proprietary code secure! Find it at https://github.com/distil-labs/distil-localdoc.py

## Usage

We load the model and your Python file. By default we load the downloaded Qwen3 0.6B model and generate Google-style docstrings.

```bash
python localdoc.py --file your_script.py

# optionally, specify model and docstring style
python localdoc.py --file your_script.py --model localdoc_qwen3 --style google
```

The tool will generate an updated file with `_documented` suffix (e.g., `your_script_documented.py`).

## Features

The assistant can generate docstrings for:

- **Functions**: Complete parameter descriptions, return values, and raised exceptions
- **Methods**: Instance and class method documentation with proper formatting. The tool skips double underscore (dunder: `__xxx`) methods.

## Examples

Feel free to run them yourself using the files in [examples](examples)

### Before:

```python
def calculate_total(items, tax_rate=0.08, discount=None):
    subtotal = sum(item['price'] * item['quantity'] for item in items)
    if discount:
        subtotal *= (1 - discount)
    return subtotal * (1 + tax_rate)
```

### After (Google style):

```python
def calculate_total(items, tax_rate=0.08, discount=None):
    """
    Calculate the total cost of items, applying a tax rate and optionally a discount.

    Args:
        items: List of item objects with price and quantity
        tax_rate: Tax rate expressed as a decimal (default 0.08)
        discount: Discount rate expressed as a decimal; if provided, the subtotal is multiplied by (1 - discount)

    Returns:
        Total amount after applying the tax

    Example:
        >>> items = [{'price': 10, 'quantity': 2}, {'price': 5, 'quantity': 1}]
        >>> calculate_total(items, tax_rate=0.1, discount=0.05)
        22.5
    """
    subtotal = sum(item['price'] * item['quantity'] for item in items)
    if discount:
        subtotal *= (1 - discount)
    return subtotal * (1 + tax_rate)
```

## FAQ

**Q: Why don't we just use GPT-4/Claude API for this?**

Because your proprietary code shouldn't leave your infrastructure. Cloud APIs create security risks, compliance issues, and ongoing costs. Our models run locally with comparable quality.

**Q: Can I document existing docstrings or update them?**

Currently, the tool only adds missing docstrings. Updating existing documentation is planned for future releases. For now, you can manually remove docstrings you want regenerated.

**Q: Which docstring style can I use?**

- **Google**: Most readable, great for general Python projects

**Q: The model does not work as expected**

A: The tool calling on our platform is in active development! [Follow us on LinkedIn](https://www.linkedin.com/company/distil-labs/) for updates, or [join our community](https://join.slack.com/t/distil-labs-community/shared_invite/zt-36zqj87le-i3quWUn2bjErRq22xoE58g). You can also manually refine any generated docstrings.

**Q: Can you train a model for my company's documentation standards?**

A: Visit our [website](https://www.distillabs.ai) and reach out to us, we offer custom solutions tailored to your coding standards and domain-specific requirements.

**Q: Does this support type hints or other Python documentation tools?**

A: Type hints are parsed and incorporated into docstrings. Integration with tools like pydoc, Sphinx, and MkDocs is on our roadmap.
2025-11-14T14:52:32
https://i.redd.it/phzq8yxnj81g1.png
party-horse
i.redd.it
1970-01-01T00:00:00
0
{}
1owyk52
false
null
t3_1owyk52
/r/LocalLLaMA/comments/1owyk52/distillocaldocpy_slm_assistant_for_writing_python/
false
false
default
9
{'enabled': True, 'images': [{'id': 'phzq8yxnj81g1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/phzq8yxnj81g1.png?width=108&crop=smart&auto=webp&s=dfc31ddac2ee8c2bd01aa7e9845f16dfd7f5d00a', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/phzq8yxnj81g1.png?width=216&crop=smart&auto=webp&s=d37b77b97e358c6aa4ca8f3ea4674d1ae9a3ef80', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/phzq8yxnj81g1.png?width=320&crop=smart&auto=webp&s=bff2ba7e0ad1070e287461029500f6ebb3ae5881', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/phzq8yxnj81g1.png?width=640&crop=smart&auto=webp&s=07e950f1db9905480e03d620bbe8e37210ea60dc', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/phzq8yxnj81g1.png?width=960&crop=smart&auto=webp&s=806b6fa1e600c38b10fab5e091ba72dde9dd0b4c', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/phzq8yxnj81g1.png?width=1080&crop=smart&auto=webp&s=ef6d42673a8d506ece9c02db97b7de44b0a1986f', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/phzq8yxnj81g1.png?auto=webp&s=cd51c30ae01989c03e408051fcbca71f56c9a5fb', 'width': 1536}, 'variants': {}}]}
Recommendations for managing high level project context while using coding agents
0
I use normal tools like Windsurf or coding CLIs to develop my projects. For high-level project oversight, I use Gemini in AI Studio with a sophisticated system prompt:

* Every time an agent finishes a task on my codebase, I manually copy its output into Gemini.
* Gemini summarizes what was done, updates the big-picture plan, and generates the next prompt (including context) for the local agent.

This works well, but the constant copy-paste loop is exhausting. Looking for automation or existing tools that already support:

* Code execution & agent loops
* Automated handoff to a "manager" model for planning/summarization
* Multi-agent coordination without manual intervention

What’s your recommended stack for this kind of autonomous dev workflow?
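For what it's worth, the handoff itself is small enough to script once the manager model sits behind an OpenAI-compatible endpoint. A rough sketch of the loop described above; the endpoint, model name, and agent CLI command are placeholders, not a specific tool:

```python
# Hypothetical automation of the copy-paste loop: run the coding agent,
# hand its output to a "manager" model, and take the manager's reply as
# the next agent prompt. Endpoint, model name, and agent command are placeholders.
import subprocess
from openai import OpenAI

manager = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MANAGER_MODEL = "your-manager-model"  # placeholder

SYSTEM = (
    "You are the project manager. Given the latest agent output, summarise what "
    "was done, update the big-picture plan, and write the next prompt for the agent."
)

def run_agent(prompt: str) -> str:
    """Invoke the local coding agent CLI (command is a placeholder)."""
    result = subprocess.run(
        ["your-agent-cli", "--prompt", prompt],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

next_prompt = "Start with the first open task from the plan."
for step in range(5):  # bounded, rather than letting it run forever
    agent_output = run_agent(next_prompt)
    reply = manager.chat.completions.create(
        model=MANAGER_MODEL,
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": agent_output},
        ],
    )
    next_prompt = reply.choices[0].message.content
    print(f"--- step {step} ---\n{next_prompt}\n")
```

The interesting design question is where the long-lived plan should live; in this sketch it only exists inside the manager's replies, whereas a real tool would persist it to a file the agent can also read.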
2025-11-14T14:52:28
https://www.reddit.com/r/LocalLLaMA/comments/1owyk2m/recommendations_for_managing_high_level_project/
uwk33800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owyk2m
false
null
t3_1owyk2m
/r/LocalLLaMA/comments/1owyk2m/recommendations_for_managing_high_level_project/
false
false
self
0
null
Why aren't there cheap NVLink adapters for RTX 3090s?
34
Is the NVLink only a wire jumper linking both cards together? Can I make my own homemade connections? Or are there some chips or other things inside the bridge?
2025-11-14T14:17:43
https://www.reddit.com/r/LocalLLaMA/comments/1owxob9/why_arent_there_cheap_nvlink_adapters_for_rtx/
alex_bit_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owxob9
false
null
t3_1owxob9
/r/LocalLLaMA/comments/1owxob9/why_arent_there_cheap_nvlink_adapters_for_rtx/
false
false
self
34
null
LLM Host
0
Which of the two hosts would you go for / which one is, in your opinion, the most bang for the buck? The separately listed CPUs are upgrade options in each config. Prices are in euros.
2025-11-14T14:14:38
https://i.redd.it/p805vgtid81g1.png
schnazzn
i.redd.it
1970-01-01T00:00:00
0
{}
1owxlin
false
null
t3_1owxlin
/r/LocalLLaMA/comments/1owxlin/llm_host/
false
false
default
0
{'enabled': True, 'images': [{'id': 'p805vgtid81g1', 'resolutions': [{'height': 22, 'url': 'https://preview.redd.it/p805vgtid81g1.png?width=108&crop=smart&auto=webp&s=5a9ee7421a14e21619e20490cd408d9b767b6c2f', 'width': 108}, {'height': 44, 'url': 'https://preview.redd.it/p805vgtid81g1.png?width=216&crop=smart&auto=webp&s=e136466dbac86cd62ba997b15bbe7f49f8c72b44', 'width': 216}, {'height': 65, 'url': 'https://preview.redd.it/p805vgtid81g1.png?width=320&crop=smart&auto=webp&s=d9e7f8e91fe401c53939fc0876f2763a45fee734', 'width': 320}, {'height': 131, 'url': 'https://preview.redd.it/p805vgtid81g1.png?width=640&crop=smart&auto=webp&s=7c024138a8b5c6724c1e40fed0bcfa0fc549d515', 'width': 640}, {'height': 196, 'url': 'https://preview.redd.it/p805vgtid81g1.png?width=960&crop=smart&auto=webp&s=44e43e163b7e610fa8198ca9429ab22022209c9c', 'width': 960}, {'height': 221, 'url': 'https://preview.redd.it/p805vgtid81g1.png?width=1080&crop=smart&auto=webp&s=fc8edc3585039d757d0cf41bd04465402ec9575b', 'width': 1080}], 'source': {'height': 308, 'url': 'https://preview.redd.it/p805vgtid81g1.png?auto=webp&s=2ddb354f201cab288a7c9bae9648026af8f12e57', 'width': 1504}, 'variants': {}}]}
Unified Dashboard for All Your AI Costs
0
In short, I'm building a tool to track:

- LLM API costs across providers (OpenAI, Anthropic, etc.)
- AI agent costs
- Vector DB expenses (Pinecone, Weaviate, etc.)
- External API costs (Stripe, Twilio, etc.)
- Per-user cost attribution
- Spending caps and alerts before budget overruns

Setup is relatively out-of-the-box and straightforward. Perfect for companies running RAG apps, AI agents, or chatbots.

Want free access? Please comment or DM me. Thank you!
2025-11-14T14:13:11
https://www.reddit.com/r/LocalLLaMA/comments/1owxk87/unified_dashboard_for_all_your_ai_costs/
Visible_Farm8636
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owxk87
false
null
t3_1owxk87
/r/LocalLLaMA/comments/1owxk87/unified_dashboard_for_all_your_ai_costs/
false
false
self
0
null
The company gmktec made a comparison of the EVO-X2 that has a Ryzen AI Max+ 395 processor vs NVIDIA DGX SPARK
124
My point is that they should make comparisons with the small models that have come out lately, because those are enough for most people and because the inference is also faster.

Info: https://www.gmktec.com/blog/evo-x2-vs-nvidia-dgx-spark-redefining-local-ai-performance
2025-11-14T13:52:15
https://i.redd.it/pl1lqj8r981g1.jpeg
Illustrious-Swim9663
i.redd.it
1970-01-01T00:00:00
0
{}
1owx1nh
false
null
t3_1owx1nh
/r/LocalLLaMA/comments/1owx1nh/the_company_gmktec_made_a_comparison_of_the_evox2/
false
false
default
124
{'enabled': True, 'images': [{'id': 'pl1lqj8r981g1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/pl1lqj8r981g1.jpeg?width=108&crop=smart&auto=webp&s=ff397e14b8e5b029abb032f4c436cab9aa753ac2', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/pl1lqj8r981g1.jpeg?width=216&crop=smart&auto=webp&s=e4589afae4d2c9669abb5efe92643ee1ed2f6bd6', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/pl1lqj8r981g1.jpeg?width=320&crop=smart&auto=webp&s=8edce5074879f8b508fe63a9d14efca9dd07da88', 'width': 320}, {'height': 334, 'url': 'https://preview.redd.it/pl1lqj8r981g1.jpeg?width=640&crop=smart&auto=webp&s=1855485e8cb6f5d5b69639209615733627982830', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/pl1lqj8r981g1.jpeg?width=960&crop=smart&auto=webp&s=7f84e6b812aabf1d1d5a03b1a35cec89c4c410f7', 'width': 960}, {'height': 565, 'url': 'https://preview.redd.it/pl1lqj8r981g1.jpeg?width=1080&crop=smart&auto=webp&s=96472bacf635b78972c4067babd9f24070af3ebf', 'width': 1080}], 'source': {'height': 628, 'url': 'https://preview.redd.it/pl1lqj8r981g1.jpeg?auto=webp&s=4274122a2fca096a6c77b5bbd8bfa1664977e2f8', 'width': 1200}, 'variants': {}}]}
The company gmktec made a comparison of the EVO-X2 that has a Ryzen AI Max+ 395 processor vs NVIDIA DGX SPARK
1
My point is that they should make comparisons with the small models that have come out lately, because those are enough for most people and because the inference is also faster.
2025-11-14T13:49:28
https://i.redd.it/cr4bw3f9981g1.jpeg
Illustrious-Swim9663
i.redd.it
1970-01-01T00:00:00
0
{}
1owwzb8
false
null
t3_1owwzb8
/r/LocalLLaMA/comments/1owwzb8/the_company_gmktec_made_a_comparison_of_the_evox2/
false
false
default
1
{'enabled': True, 'images': [{'id': 'cr4bw3f9981g1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/cr4bw3f9981g1.jpeg?width=108&crop=smart&auto=webp&s=c54559aefa37eff42335682d2f409370cefc1c2e', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/cr4bw3f9981g1.jpeg?width=216&crop=smart&auto=webp&s=04b9184d069455eaf1bd4026ff860540a9a12c14', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/cr4bw3f9981g1.jpeg?width=320&crop=smart&auto=webp&s=9db7f9df8ede0ed123fa01799ba0a19b0248d683', 'width': 320}, {'height': 334, 'url': 'https://preview.redd.it/cr4bw3f9981g1.jpeg?width=640&crop=smart&auto=webp&s=af4959762b63f1a391e95d2a3ad7288f0b52dfe9', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/cr4bw3f9981g1.jpeg?width=960&crop=smart&auto=webp&s=0a9e20c646575653aad084eb4a8f242530a2fc87', 'width': 960}, {'height': 565, 'url': 'https://preview.redd.it/cr4bw3f9981g1.jpeg?width=1080&crop=smart&auto=webp&s=5c61fd5fb58767f177e38596d8d907de2f151aee', 'width': 1080}], 'source': {'height': 628, 'url': 'https://preview.redd.it/cr4bw3f9981g1.jpeg?auto=webp&s=162a78ec800eabacf13fc2e2698679ba7b3834b9', 'width': 1200}, 'variants': {}}]}
We built a local-first LLM testing platform - genuinely curious about your eval workflows
1
[removed]
2025-11-14T13:14:46
https://www.reddit.com/r/LocalLLaMA/comments/1oww610/e_built_a_localfirst_llm_testing_platform/
Outrageous_Hat_9852
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oww610
false
null
t3_1oww610
/r/LocalLLaMA/comments/1oww610/e_built_a_localfirst_llm_testing_platform/
false
false
self
1
null
4x MI60 or 1x RTX 8000
3
I have just acquired a Supermicro GPU server, and I currently run a single RTX 8000 in a Dell R730. But how is AMD ROCm support these days on older cards? Would it be worth selling it to get 4x MI60? I've been happy with the RTX 8000 at around 50-60 TPS on qwen3-30b3a (16k input), so I definitely don't want to go backwards on that.

My end goal is to have the experience you see with the big LLM providers. I know the LLM itself won't have the quality that they have, but the time to first token, simple image gen, loading and unloading models, etc. are what's killing QoL.
2025-11-14T13:09:11
https://www.reddit.com/r/LocalLLaMA/comments/1oww1hv/4x_mi60_or_1x_rtx_8000/
TechLevelZero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oww1hv
false
null
t3_1oww1hv
/r/LocalLLaMA/comments/1oww1hv/4x_mi60_or_1x_rtx_8000/
false
false
self
3
null
Kimi k2 thinking + kilo code really not bad
25
I’m genuinely impressed. Once your AGENTS.md and rules.md are clear enough, kimi k2 thinking + kilo code really seems to be just as capable as Claude 4.0 sonnet, especially when it comes to programming and debugging. It’s a surprisingly powerful combination.
2025-11-14T12:00:43
https://www.reddit.com/r/LocalLLaMA/comments/1owulxd/kimi_k2_thinking_kilo_code_really_not_bad/
Federal_Spend2412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owulxd
false
null
t3_1owulxd
/r/LocalLLaMA/comments/1owulxd/kimi_k2_thinking_kilo_code_really_not_bad/
false
false
self
25
null
Is there any feasible modification that would allow an RTX 6000 to support an NVLink bridge?
1
I’ve seen posts about GPUs being modded to increase their VRAM, so I’m assuming adding NVLink bridge support should be possible since it’s far less invasive than a VRAM upgrade.
2025-11-14T11:45:06
https://www.reddit.com/r/LocalLLaMA/comments/1owub4y/is_there_any_feasible_modification_that_would/
d00m_sayer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owub4y
false
null
t3_1owub4y
/r/LocalLLaMA/comments/1owub4y/is_there_any_feasible_modification_that_would/
false
false
self
1
null
We built a framework for generating custom RAG evaluation datasets and released a D&D-based one (open-source)
18
🔗 [Blog post](https://datapizza.tech/it/blog/aij4r/) 🔗 [GitHub repo](https://github.com/datapizza-labs/rag-dataset-builder) 🔗 [Dataset on Hugging Face](https://huggingface.co/datasets/datapizza-ai-lab/dnd5e-srd-qa) Would love to hear your thoughts, feedback, or ideas on how to improve this! ❤️
2025-11-14T11:31:58
http://datapizza.tech/it/blog/aij4r/
mario_candela
datapizza.tech
1970-01-01T00:00:00
0
{}
1owu2nn
false
null
t3_1owu2nn
/r/LocalLLaMA/comments/1owu2nn/we_built_a_framework_for_generating_custom_rag/
false
false
default
18
{'enabled': False, 'images': [{'id': 'WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=108&crop=smart&auto=webp&s=9f752de1356f75a626eb04ab6e27ad70f0d13d82', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=216&crop=smart&auto=webp&s=45bb3b410531f89df2880b54e7ce5b9cdef6a760', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=320&crop=smart&auto=webp&s=65f56044d8da4bb43682fb1570efc2d2c6d54b88', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=640&crop=smart&auto=webp&s=40dfbae3947d96d45263d794ae5426b261117e4f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=960&crop=smart&auto=webp&s=210dd20ea8086373fe6d8fa8f7b63770fa3232b5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?width=1080&crop=smart&auto=webp&s=26757af952e68e25e5aa48aa7cc037f7b54e3be3', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/WM9coASZAPosxQnRrgryGmXh5OYtno3uUsf9XZYu3E8.png?auto=webp&s=877d9473841c936a355a57cd2bf5fad2a4b85857', 'width': 1920}, 'variants': {}}]}
Just AGI will be able to solve this prompt
0
\[thiago@server \~\]$ ollama run [hf.co/unsloth/Magistral-Small-2506-GGUF:Q4\_K\_XL](http://hf.co/unsloth/Magistral-Small-2506-GGUF:Q4_K_XL) \>>> hi <think> Okay, the user just said "hi". It seems like they're greeting me. I should respond in a friendly and professional manner. Maybe I can ask how they're doing or what they need help with. But since they just said "hi", perhaps they're waiting for me to initiate the conversation. But before I respond, I should think about what the user might expect. If it's just a greeting, a simple "hi" or "hello" back would suffice. But if they're asking for help or have a question, I should be ready to assist. But since the user only provided "hi", and no other context or question, I should probably just respond to the greeting first. I can say "hi" back and maybe ask if they have any questions or tasks for me. Alternatively, if the user expects me to start the conversation, I could introduce myself and explain what I can do for them. For example, I could say "Hi! I'm here to help. How can I assist you today?" But to keep it simple, I'll just respond to the greeting first. So, my response would be something like "Hi! How can I help you today?" But since the user hasn't asked anything specific yet, maybe it's better to just say "Hi!" and wait for further instructions. But to make it more engaging, I could add a question to prompt the user to share their task or question. For example: "Hi! What can I do for you today?" But perhaps it's better to keep it simple and just say "Hi!" and let the user guide the conversation. Alternatively, maybe the user expects me to ask for the task or question right away. Given that the user hasn't provided any other information, I think the safest response is to greet them back and ask how I can assist them. So, my response could be: "Hi there! How can I assist you today?" But to make it even more concise and professional, I could just say: "Hello! How can I help you?" But since the user only said "hi", and it's very informal, maybe I should match the tone and just say "Hi!". But to make it more complete, I can add a question to encourage the user to share their task. So, final response: "Hi! How can I assist you today?" But perhaps the user expects me to wait for their question or task. Given that the user hasn't provided any additional information, I should probably just respond to the greeting and wait for further instructions. But to make the interaction more productive, I can include a prompt to encourage the user to share their task. So, my response will be: "Hi! How can I help you today?" But looking back at the problem statement, it says that the user will ask me to solve a task. Since the user hasn't asked anything yet, they're probably just testing the interaction or waiting for me to respond to their greeting. Therefore, my response should be a greeting followed by a prompt to encourage them to share their task. Final decision: "Hi! How can I help you today?" But perhaps the user expects me to start with the task right away. Given that the problem statement says "A user will ask you to solve a task", and the user has only said "hi", it's possible that the user expects me to ask for the task. So, my response should probably be: "Hi! What task would you like me to solve for you?" But to make it more natural, perhaps: "Hi! Please provide the task you'd like me to solve." But that seems a bit formal. Alternatively, I can say: "Hi! Go ahead and share the task you'd like me to work on." 
But perhaps the simplest is to just say "Hi!" and wait for the user to provide the task. But according to the problem statement, the user will ask me to solve a task, so they will provide the task at some point. Since the user hasn't provided the task yet, and they just said "hi", I should probably just respond to the greeting and wait for the task to be provided. Therefore, my response will be: "Hi! Please provide the task you'd like me to solve." But to make it more natural, perhaps: "Hi! What can I do for you today?" This seems like a good balance between being friendly and getting to the task. So, my final response will be: "Hi! How can I assist you today?" But to strictly follow the problem statement, where the user will ask me to solve a task, and they've only said "hi" so far, perhaps the best response is to acknowledge the greeting and prompt for the task. Therefore, the response should be: "Hi! Please share the task you'd like me to solve." But to make it more concise and natural, perhaps: "Hi! What would you like me to solve for you?" But the problem statement says "A user will ask you to solve a task", implying that the user will phrase their request as "solve a task". Given that the user has only said "hi", they haven't asked to solve a task yet. Therefore, the most appropriate response is to greet them back and prompt them to provide the task. Final response: "Hi! Please provide the task you'd like me to solve." But perhaps the user expects me to wait for them to phrase their request as "solve a task". In that case, I should not prompt them, but just greet them and wait for their request. So, perhaps the best response is just: "Hi! How can I help you?" Then, if they say "solve a task", I can proceed. But the problem statement says "A user will ask you to solve a task", which implies that the user will explicitly ask me to solve a task. Since the user has only said "hi", they haven't asked to solve a task yet. Therefore, I should wait for them to ask me to solve a task. But to facilitate the interaction, I can respond to the greeting and indicate that I'm ready to solve a task when they're ready. So, my response could be: "Hi! I'm ready to solve a task for you when you're ready." But that seems a bit unnatural. Perhaps better to just say: "Hi! Please ask me to solve a task whenever you're ready." But that seems too formal. Alternatively, I can just say: "Hi! What can I do for you?" And when they say "solve a task", I can proceed. But the problem statement says that the user will ask me to solve a task, so I should wait for that exact request. Therefore, the best response is to greet them and wait for them to ask me to solve a task. So, my response will be: "Hi!" And then I'll wait for the user to ask me to solve a task. But to make it more engaging, I can add a prompt like: "Hi! How can I assist you today?" And if they say "solve a task", I'll proceed. But the problem statement specifies that the user will ask me to solve a task, so I should wait for that exact request. Therefore, the best response is to greet them and wait for their request to solve a task. But since the user has only said "hi", and the problem statement says they will ask me to solve a task, perhaps they're waiting for me to acknowledge the greeting before they proceed. In that case, I should respond to the greeting first. So, my response will be: "Hi!" And then I'll wait for the user to ask me to solve a task. But to make the interaction smoother, I can add a prompt like: "Hi! 
Ready to solve a task when you are." But that might be overstepping. Given that the problem statement says the user will ask me to solve a task, I should probably just respond to the greeting and wait for their request. Therefore, my response will be: "Hi!" And then I'll wait for the user to say something like "solve a task" or provide the task details. But perhaps the user expects me to prompt for the task. Looking back at the problem statement: "A user will ask you to solve a task. You should first draft your thinking process..." The user has only said "hi", which is not an explicit request to solve a task. But the problem statement says that the user will ask me to solve a task, implying that at some point, they will ask me to solve a task. Since they've only said "hi", perhaps they're waiting for me to respond to the greeting before they provide the task. Therefore, the best approach is to respond to the greeting and then wait for the task. But to make it clear that I'm ready to solve a task, I can say: "Hi! Please provide the task you'd like me to solve." But since the problem statement says the user will ask me to solve a task, perhaps they will phrase it as "solve a task: \[task details\]". In that case, I should wait for them to say "solve a task" followed by the task details. But since they've only said "hi", I should respond to the greeting and wait for the task request. Therefore, my response will be: "Hi!" And then I'll wait for the user to ask me to solve a task. But perhaps the user expects me to prompt for the task after the greeting. Given that the problem statement says the user will ask me to solve a task, and they've only said "hi", perhaps they're waiting for me to respond to the greeting first. Therefore, I'll respond with "Hi!" and then wait for the task request. But to make it more natural, I can add a prompt like: "Hi! What can I do for you today?" And if they say "solve a task", I'll proceed with that. But strictly following the problem statement, the user will ask me to solve a task, so I should wait for that exact request. Therefore, the best response is to greet them and wait for their request to solve a task. Final response: "Hi!" But to make it more engaging and to facilitate the interaction, perhaps it's better to say: "Hi! How can I assist you today?" And if the user then says "solve a task: \[details\]", I'll proceed with solving the task. But the problem statement says the user will ask me to solve a task, so they might say something like "solve a task: calculate 2+2" or similar. Therefore, my response should be to greet them and then wait for their request to solve a task. So, my initial response will be: "Hi!" And then, if the user says "solve a task: \[details\]", I'll proceed with solving it. But perhaps the user expects me to prompt for the task after the greeting. Given that, maybe it's better to say: "Hi! Please provide the task you'd like me to solve." This makes it clear that I'm ready to solve a task and prompts the user to provide it. But the problem statement says the user will ask me to solve a task, implying that they will initiate the request. Therefore, I should respond to the greeting and wait for them to ask me to solve a task. So, my response will be: "Hi!" And then I'll wait for the user to say something like "solve a task: \[details\]". But if the user doesn't provide the task after the greeting, I might need to prompt them. 
However, since the problem statement says the user will ask me to solve a task, I can assume that after the greeting, they will provide the task. Therefore, the best response is to greet them and wait for the task. Final decision: "Hi!" But to make it more natural and to ensure the interaction flows smoothly, I can add a prompt like: "Hi! How can I help you today?" And if the user responds with a task to solve, I'll proceed with that. But strictly following the problem statement, the user will ask me to solve a task, so they will initiate the request. Therefore, my response should be to greet them and wait for their request to solve a task. Final response: "Hi!" But perhaps it's better to include a prompt to encourage the user to provide the task. Alternative response: "Hi! Please provide the task you'd like me to solve." But since the problem statement says the user will ask me to solve a task, they will provide the task eventually, so I can just greet them and wait. Therefore, the most appropriate response is: "Hi!" And then wait for the user to ask me to solve a task. But to make it more engaging, I can say: "Hi! I'm ready to solve a task for you. Please provide the details." But perhaps it's better to keep it simple and just greet them first. So, my response will be: "Hi!" And then I'll wait for the user to ask me to solve a task. But looking at the problem statement again: "A user will ask you to solve a task." This implies that the user will initiate the request to solve a task, so they might say something like "solve a task: calculate 2+2". Since they've only said "hi", they haven't asked to solve a task yet. Therefore, I should respond to the greeting and wait for them to ask me to solve a task. Final response: "Hi!" And then I'll wait for the user to ask me to solve a task. But perhaps the user expects me to prompt for the task after the greeting. In that case, I can say: "Hi! What task would you like me to solve for you?" But to strictly follow the problem statement, the user will ask me to solve a task, so I should wait for them to do so. Therefore, the best response is to greet them and wait for their request to solve a task. Final response: "Hi!" And then I'll wait for the user to say something like "solve a task: \[details\]". But to make the interaction more smooth, I can add a prompt after the greeting. Alternative response: "Hi! Please go ahead and ask me to solve a task." But perhaps the most natural and minimal response is just: "Hi!" And then wait for the user to ask me to solve a task. But if the user doesn't provide the task after the greeting, I might need to prompt them. However, since the problem statement says the user will ask me to solve a task, I can assume that they will provide the task eventually. Therefore, the best response is to greet them and wait for their request to solve a task. Final response: "Hi!" And then I'll wait for the user to ask me to solve a task. But to ensure the interaction is productive, I can follow up with a prompt if the user doesn't provide the task after the greeting. However, since the problem statement specifies that the user will ask me to solve a task, I can trust that they will provide the task at some point. Therefore, my initial response will be: "Hi!" And then I'll wait for the user to ask me to solve a task. But perhaps the user expects me to prompt for the task immediately after the greeting. In that case, I can say: "Hi! What task would you like me to solve?" 
But to strictly follow the problem statement, the user will ask me to solve a task, so I should wait for them to initiate the request. Therefore, the most appropriate response is: "Hi!" And then wait for the user to ask me to solve a task. But if the user doesn't provide the task after a reasonable time, I can prompt them. However, since this is a simulated interaction, and the problem statement says the user will ask me to solve a task, I can assume that they will provide the task after the greeting. Therefore, my response will be: "Hi!" And then I'll wait for the user to ask me to solve a task. But to make it clear that I'm ready to solve a task, I can modify the greeting slightly. Alternative response: "Hi! I'm ready to solve a task for you." This makes it clear that I'm prepared to solve a task when the user is ready to provide it. But perhaps the most neutral response is just: "Hi!" And then wait for the user to ask me to solve a task. But to make it more engaging, I can say: "Hi! How can I assist you today?" And if the user responds with a task to solve, I'll proceed with that. But strictly following the problem statement, the user will ask me to solve a task, so they will initiate the request. Therefore, the best response is to greet them and wait for their request to solve a task. Final response: "Hi!" And then wait for the user to ask me to solve a task. But perhaps the user expects me to prompt for the task after the greeting. In that case, I can say: "Hi! Please provide the task you'd like me to solve." This makes it clear that I'm ready to solve a task and prompts the user to provide it. But since the problem statement says the user will ask me to solve a task, they will provide the task eventually. Therefore, the most appropriate response is to greet them and wait for their request to solve a task. Final response: "Hi!" But to ensure the interaction flows smoothly, I can add a prompt after the greeting. Alternative response: "Hi! I'm ready to solve a task for you. Please provide the details." But perhaps it's better to keep it simple and just greet them first. So, my response will be: "Hi!" And then I'll wait for the user to ask me to solve a task. But if the user doesn't provide the task after the greeting, I can follow up with a prompt. However, since the problem statement says the user will ask me to solve a task, I can assume that they will provide the task at some point. Therefore, my initial response will be: "Hi!" And then I'll wait for the user to ask me to solve a task. But to make it more natural and to ensure the user knows that I'm ready to solve a task, I can modify the response slightly. Alternative response: "Hi! How can I assist you today?" This is a common and friendly way to start an interaction and opens the door for the user to ask me to solve a task. Therefore, my final response will be: "Hi! How can I assist you today?" This way, the user can respond with the task they'd like me to solve, and I can proceed with that. But strictly speaking, the problem statement says the user will ask me to solve a task, so they might say something like "solve a task: calculate 2+2" directly after the greeting. In that case, responding with "Hi!" and waiting for the task would be sufficient. However, to make the interaction more natural and engaging, responding with "Hi! How can I assist you today?" is more conventional. Therefore, my response will be: "Hi! How can I assist you today?" And then I'll wait for the user to ask me to solve a task. 
But if the user responds with a task to solve, I'll proceed with that. However, the problem statement specifies that the user will ask me to solve a task, so they will initiate the request. Therefore, my initial response should be to greet them and wait for their request to solve a task. Final response: "Hi!" But to make it more natural, I can say: "Hi! I'm ready to solve a task for you." This makes it clear that I'm prepared to solve a task when the user is ready to provide it. But perhaps the most natural and minimal response is just: "Hi!" And then wait for the user to ask me to solve a task. But to ensure the interaction is productive, I can add a prompt after the greeting. Alternative response: "Hi! Please provide the task you'd like me to solve." This makes it clear that I'm ready to solve a task and prompts the user to provide it. But since the problem statement says the user will ask me to solve a task, they will provide the task eventually. Therefore, the most appropriate response is to greet them and wait for their request to solve a task. Final response: "Hi!" But to make it more natural and engaging, I can say: "Hi! How can I assist you today?" And if the user responds with a task to solve, I'll proceed with that. But strictly following the problem statement, the user will ask me to solve a task, so they will initiate the request. Therefore, the best response is to greet them and wait for their request to solve a task. Final response: "Hi!" And then wait for the user to ask me to solve a task. But perhaps the user expects me to prompt for the task after the greeting. In that case, I can say: "Hi! What task would you like me to solve for you?" This makes it clear that I'm ready to solve a task and prompts the user to provide it. Therefore, my response will be: "Hi! What task would you like me to solve for you?" This ensures that the user knows that I'm ready to solve a task and prompts them to provide the details. But strictly speaking, the problem statement says the user will ask me to solve a task, so they will initiate the request. Therefore, the most appropriate response is to greet them and wait for their request to solve a task. Final response: "Hi!" But to make the interaction more smooth and to ensure the user knows that I'm ready to solve a task, I can modify the response slightly. Alternative response: "Hi! I'm ready to solve a task for you. Please provide the details." But perhaps the most natural and minimal response is just: "Hi!" And then wait for the user to ask me to solve a task. But to ensure the interaction is productive, I can follow up with a prompt if the user doesn't provide the task after the greeting. However, since the problem statement says the user will ask me to solve a task, I can assume that they will provide the task at some point. Therefore, my initial response will be: "Hi!" And then I'll\^C \>>> Send a message (/? for help)
2025-11-14T11:19:17
https://www.reddit.com/r/LocalLLaMA/comments/1owtu5e/just_agi_will_be_able_to_solve_this_prompt/
JsThiago5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owtu5e
false
null
t3_1owtu5e
/r/LocalLLaMA/comments/1owtu5e/just_agi_will_be_able_to_solve_this_prompt/
false
false
self
0
null
Llama on AWS or other host?
0
I’d love to hear from anyone who has successfully deployed an AI solution commercially: what were the best practices, and which hosting environment did you use?
2025-11-14T11:10:13
https://www.reddit.com/r/LocalLLaMA/comments/1owto8v/llama_on_aws_or_other_host/
broodsmilerepeat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owto8v
false
null
t3_1owto8v
/r/LocalLLaMA/comments/1owto8v/llama_on_aws_or_other_host/
false
false
self
0
null
Can I get better performance out of my system for GLM 4.6?
2
I wanted to run some larger models on my workstation, and since I really love GLM 4.5 Air on my Ryzen AI Max laptop, I tried GLM 4.6 at IQ4 quantization. Here's what I have so far:

## My hardware:

- Intel Xeon Platinum 8368, 38-cores @ 3.3 GHz
- 8-channel DDR4, 256GB @ 3200MHz (~200GB/s memory bandwidth)
- Radeon 7900 XTX (24GB VRAM)
- Fedora 43

## Llama.cpp configuration:

```
cmake -B build -DGGML_VULKAN=ON -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DGGML_RPC=O
```

## My llama.cpp command line:

```
llama-server --flash-attn on --cont-batching -hf unsloth/GLM-4.6-GGUF:IQ4_XS --jinja --ctx-size 0 -ctk q8_0 -ctv q8_0 --cpu-moe -ngl 30
```

## My performance

This gives me about 4.4 tokens/s on low context fill (~2000 tokens). I haven't run anything too long on it yet, so I can't speak to performance degradation. GPU offloading doesn't seem to help very much; CPU-only inference gets me ~4.1 t/s. The number of layers for the GPU was chosen to get ~85% VRAM usage.

Is there anything I'm doing wrong, or that I could do to improve performance on my hardware? Or is this about as good as it gets on small-ish systems?
2025-11-14T11:09:32
https://www.reddit.com/r/LocalLLaMA/comments/1owtnu6/can_i_get_better_performance_out_of_my_system_for/
spaceman_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owtnu6
false
null
t3_1owtnu6
/r/LocalLLaMA/comments/1owtnu6/can_i_get_better_performance_out_of_my_system_for/
false
false
self
2
null
How can one train a LLM with custom reinforcement learning?
0
For example, could I train an LLM and give it rewards if it successfully completes an agentic action of my choice?
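One common route is TRL's GRPO trainer with a custom reward function: the function scores each generated completion, e.g. 1.0 when the agentic action succeeded and 0.0 otherwise. A minimal sketch under that assumption; the model, dataset, and success check are placeholders rather than a full agent harness:

```python
# Sketch of RL fine-tuning with a custom reward, using TRL's GRPOTrainer.
# The "agentic action" check is a placeholder; in practice it would execute
# or verify the action (run the tool call, inspect the filesystem, etc.).
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def action_succeeded(completion: str) -> bool:
    # Placeholder success check, for illustration only.
    return "DONE" in completion

def reward_fn(completions, **kwargs):
    # TRL calls this with the generated completions and expects one float each.
    return [1.0 if action_succeeded(c) else 0.0 for c in completions]

# GRPO only needs prompts; the rewards come from the function above.
train_dataset = Dataset.from_dict({
    "prompt": ["Open config.yaml and reply DONE once you have listed its top-level keys."] * 64,
})

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",   # placeholder small model
    reward_funcs=reward_fn,
    args=GRPOConfig(output_dir="grpo-agent-demo"),
    train_dataset=train_dataset,
)
trainer.train()
```

The reward function is where the real work lives: it has to actually run or verify the action and return a float per completion, and sparse 0/1 rewards usually need many rollouts per prompt to give a useful signal.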
2025-11-14T10:54:16
https://www.reddit.com/r/LocalLLaMA/comments/1owte4t/how_can_one_train_a_llm_with_custom_reinforcement/
Odd_Attention_9660
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owte4t
false
null
t3_1owte4t
/r/LocalLLaMA/comments/1owte4t/how_can_one_train_a_llm_with_custom_reinforcement/
false
false
self
0
null
Mac + Windows AI cluster please help
3
I have a Windows PC with a 5090 + 96 GB DDR5 RAM + 9950X3D, an Unraid server with 196 GB RAM + 9950X and no GPU, and a MacBook with an M3 Max and 48 GB. Currently, running gpt-oss-120b on my Windows PC in LM Studio gives me around 18 tps, which I am perfectly happy with. I would like to be able to run larger models, around 500B.

Is it possible to combine the RAM pool of all these devices (plus maybe another M3 Ultra with 256 GB, or a used M2 or something, whichever is cheaper) to reach a total pool of 512 GB using something like exo, and still maintain that 18 tps? What would be the best and cheapest way to achieve that 512 GB RAM pool while maintaining 18 tps, without going completely homeless?
2025-11-14T10:42:54
https://www.reddit.com/r/LocalLLaMA/comments/1owt76q/mac_windows_ai_cluster_please_help/
silkychickenz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owt76q
false
null
t3_1owt76q
/r/LocalLLaMA/comments/1owt76q/mac_windows_ai_cluster_please_help/
false
false
self
3
null
Hits different now
46
Hadn’t seen this in ages... I don’t have opinions on AGI either way at this point, but this scene sure hits a lot harder now than it did back then!
2025-11-14T10:30:03
https://i.redd.it/t0yq5cko971g1.jpeg
lfiction
i.redd.it
1970-01-01T00:00:00
0
{}
1owszii
false
null
t3_1owszii
/r/LocalLLaMA/comments/1owszii/hits_different_now/
false
false
https://b.thumbs.redditm…H3IdbUhbkD9I.jpg
46
{'enabled': True, 'images': [{'id': '-lsytlNIomj45L0_W6We0GdmrWfbMKrvASHytcy6wnY', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/t0yq5cko971g1.jpeg?width=108&crop=smart&auto=webp&s=aa47cedbc95c4853d4e4bc981daca0a3a4910829', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/t0yq5cko971g1.jpeg?width=216&crop=smart&auto=webp&s=a873db648f8d1fcf743ac7ca314604421e065321', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/t0yq5cko971g1.jpeg?width=320&crop=smart&auto=webp&s=75c78ec3c61f02ca6cf11b098643bf5b511a5133', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/t0yq5cko971g1.jpeg?width=640&crop=smart&auto=webp&s=d7faccd0b39cc19dfb7a62aecffe57e0c87dc7d3', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/t0yq5cko971g1.jpeg?width=960&crop=smart&auto=webp&s=d185e1585a0397d7cd4e4d0b778acc0752b11bef', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/t0yq5cko971g1.jpeg?width=1080&crop=smart&auto=webp&s=937bfc3693d0cdea2dd96484a3e167f1f0f37e88', 'width': 1080}], 'source': {'height': 945, 'url': 'https://preview.redd.it/t0yq5cko971g1.jpeg?auto=webp&s=199448308c0f0de9be99e4f204087041787a11e5', 'width': 1680}, 'variants': {}}]}
I built a unified LLM playground that makes testing and organizing prompts easier. I'd really appreciate your feedback!
4
Hi everyone, I'm excited to share something I built: Prompty - a unified AI playground app designed to help you test and organize your prompts efficiently.

What Prompty offers:

* Test prompts with multiple models (both cloud and local models) all in one place
* Local-first design: all your data is stored locally on your device, with no server involved
* Nice and clean UI/UX for a smooth and pleasant user experience
* Prompt versioning with diff compare to track changes effectively
* Side-by-side model comparison to evaluate outputs across different models easily
* and more...

Give it a try and let me know what you think. Your feedback helps me build the stuff prompt engineers actually need.

Check it out here: [https://prompty.to/](https://prompty.to/)

Thanks for your time and looking forward to hearing your thoughts!
2025-11-14T10:18:49
https://www.reddit.com/r/LocalLLaMA/comments/1owsskx/i_built_a_unified_llm_playground_that_makes/
giangchau92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owsskx
false
null
t3_1owsskx
/r/LocalLLaMA/comments/1owsskx/i_built_a_unified_llm_playground_that_makes/
false
false
self
4
null
Windows llama.cpp is 20% faster
280
But why?

**Windows: 1000+ PP**

llama-bench -m C:\Users\johan\.lmstudio\models\unsloth\Qwen3-VL-30B-A3B-Instruct-GGUF\Qwen3-VL-30B-A3B-Instruct-UD-Q8_K_XL.gguf -p 512,1024,2048,4096 -n 0 -fa 0 --mmap 0

load_backend: loaded RPC backend from C:\Users\johan\Downloads\llama-b7032-bin-win-vulkan-x64\ggml-rpc.dll
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon(TM) 8060S Graphics (AMD proprietary driver) | uma: 1 | fp16: 1 | bf16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat
load_backend: loaded Vulkan backend from C:\Users\johan\Downloads\llama-b7032-bin-win-vulkan-x64\ggml-vulkan.dll
load_backend: loaded CPU backend from C:\Users\johan\Downloads\llama-b7032-bin-win-vulkan-x64\ggml-cpu-icelake.dll

| model | size | params | backend | ngl | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ---: | --------------: | -------------------: |
| qwen3vlmoe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | Vulkan | 99 | 0 | pp512 | 1079.12 ± 4.32 |
| qwen3vlmoe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | Vulkan | 99 | 0 | pp1024 | 975.04 ± 4.46 |
| qwen3vlmoe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | Vulkan | 99 | 0 | pp2048 | 892.94 ± 2.49 |
| qwen3vlmoe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | Vulkan | 99 | 0 | pp4096 | 806.84 ± 2.89 |

**Linux: 880 PP**

[johannes@toolbx ~]$ llama-bench -m models/Qwen3-VL-30B-A3B-Instruct-UD-Q8_K_XL.gguf -p 512,1024,2048,4096 -n 0 -fa 0 --mmap 0

ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ---: | --------------: | -------------------: |
| qwen3vlmoe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | Vulkan | 99 | 0 | pp512 | 876.79 ± 4.76 |
| qwen3vlmoe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | Vulkan | 99 | 0 | pp1024 | 797.87 ± 1.56 |
| qwen3vlmoe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | Vulkan | 99 | 0 | pp2048 | 757.55 ± 2.10 |
| qwen3vlmoe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | Vulkan | 99 | 0 | pp4096 | 686.61 ± 0.89 |

Obviously it's not 20% across the board, but it is still a very big difference. Is the "AMD proprietary driver" such a big deal?
2025-11-14T10:05:21
https://i.redd.it/tfdcbkf6571g1.png
johannes_bertens
i.redd.it
1970-01-01T00:00:00
0
{}
1owskm6
false
null
t3_1owskm6
/r/LocalLLaMA/comments/1owskm6/windows_llamacpp_is_20_faster/
false
false
https://b.thumbs.redditm…kVPCsL8WXAaY.jpg
280
{'enabled': True, 'images': [{'id': 'zyHr0zLJV-KY1eK_qHf_PAiDejYW1Fhv2XvoggRcbMQ', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/tfdcbkf6571g1.png?width=108&crop=smart&auto=webp&s=4ac10f545dd8d660c08a789ac5fa53cbbd6e9faf', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/tfdcbkf6571g1.png?width=216&crop=smart&auto=webp&s=d44fe9a1bfa44f6aa0a2837f89d48343ff2a4340', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/tfdcbkf6571g1.png?width=320&crop=smart&auto=webp&s=fb86179dcca2e9592e84f526450db8cb0788f870', 'width': 320}, {'height': 391, 'url': 'https://preview.redd.it/tfdcbkf6571g1.png?width=640&crop=smart&auto=webp&s=6f97a0d3f3c6a2519462ab5e159f2045396e9409', 'width': 640}], 'source': {'height': 446, 'url': 'https://preview.redd.it/tfdcbkf6571g1.png?auto=webp&s=d274f4c178c06829a2dc39b4b7d6a6dc1e519b33', 'width': 729}, 'variants': {}}]}
Help me choose the right AI model for my project
0
I'm working on a personal project and trying to pick between different AI models. Getting overwhelmed by all the options! What questions should I be asking myself?

So far I'm thinking about:

* What exactly do I need it to do?
* How fast does it need to respond?
* What's my budget for this?
* How reliable does it need to be?

What else should I consider? Would love to hear what factors mattered most in your projects.
2025-11-14T10:01:05
https://www.reddit.com/r/LocalLLaMA/comments/1owsi1t/help_me_choose_the_right_ai_model_for_my_project/
AnnotationAlly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1owsi1t
false
null
t3_1owsi1t
/r/LocalLLaMA/comments/1owsi1t/help_me_choose_the_right_ai_model_for_my_project/
false
false
self
0
null
What is the best GPU you can get today?
0
As the title says, I need to configure a system for local inference. It will be running concurrent tasks (processing tabular data, usually more than 50k rows) through vLLM. My main go-to model right now is Qwen30B-A3B; it's usually enough for what I do. I would love to be able to run GLM Air, though. I've thought of getting an M3 Max, but it seems the prompt processing is not very fast on those; I don't have exact numbers right now. I want something on par with, if not better than, an A6000 Ampere (my current GPU). Is getting a single Mac worth it? Are multi-GPU setups easy to configure? Can I match or come close to the speed of an A6000 Ampere with RAM offloading (i.e. prioritizing CPU and RAM over raw GPU)? What are the best setup options I have, and what is your recommendation? FYI: I cannot buy second-hand, unfortunately; the boss doesn't trust second-hand equipment.
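For reference, the workload is roughly this kind of vLLM offline batching; a minimal sketch is below, where the model id, `tensor_parallel_size`, and the `rows_from_csv` list are illustrative assumptions rather than my actual config:

```python
from vllm import LLM, SamplingParams

# Illustrative assumptions: the HF model id and parallelism depend on the final hardware.
llm = LLM(
    model="Qwen/Qwen3-30B-A3B",  # assumed id for Qwen30B-A3B
    tensor_parallel_size=2,       # e.g. a two-GPU build; 1 for a single card
)
params = SamplingParams(temperature=0.0, max_tokens=256)

# One prompt per table row; vLLM schedules and batches these concurrently.
rows_from_csv = ["row 1 ...", "row 2 ..."]  # hypothetical: 50k+ rows in practice
prompts = [f"Process this row: {row}" for row in rows_from_csv]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```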
2025-11-14T09:41:51
https://www.reddit.com/r/LocalLLaMA/comments/1ows773/what_is_the_best_gpu_you_can_get_today/
Su1tz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ows773
false
null
t3_1ows773
/r/LocalLLaMA/comments/1ows773/what_is_the_best_gpu_you_can_get_today/
false
false
self
0
null
Kimi k2 thinking vs Claude Sonnet
65
Since I saw contrasting opinions, I'll add my personal experience with Kimi K2 Thinking for my use case. I needed to cluster some cells from a CSV file to see whether unsupervised classification of tumor cells vs. healthy cells would be achievable with my data. I tried with Claude Sonnet 4, and after $2 in API calls and a bunch of prompts I got no usable result: it was clustering 99.9% of cells into one group and 0.1% into the other. It also had difficulties rendering the cells from the x/y positions in the CSV. Kimi K2 Thinking achieved a proper clustering in 2 prompts (one for preprocessing the CSV data and one for clustering; it could maybe have done the same in 1 prompt). Total cost: $0.17.
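For context, the task is roughly the pipeline sketched below; the column names (x, y plus marker columns) and file name are my assumptions here, not the actual CSV schema or the code Kimi produced:

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("cells.csv")            # hypothetical file name
features = df.drop(columns=["x", "y"])   # assumed: x/y are positions, the rest are markers
X = StandardScaler().fit_transform(features)

# Two clusters: the hoped-for tumor vs. healthy split.
df["cluster"] = KMeans(n_clusters=2, random_state=0).fit_predict(X)

# Render the cells at their x/y positions, coloured by cluster.
plt.scatter(df["x"], df["y"], c=df["cluster"], s=2)
plt.savefig("clusters.png")
```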
2025-11-14T09:41:28
https://www.reddit.com/r/LocalLLaMA/comments/1ows6z3/kimi_k2_thinking_vs_claude_sonnet/
sebastianmicu24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ows6z3
false
null
t3_1ows6z3
/r/LocalLLaMA/comments/1ows6z3/kimi_k2_thinking_vs_claude_sonnet/
false
false
self
65
null
I'm glad to see you.
0
I've been playing with LLM models by myself. This is my first time saying hello here. I'm glad to see you all, and I look forward to your kind cooperation.
2025-11-14T09:37:43
https://www.reddit.com/r/LocalLLaMA/comments/1ows4se/im_glad_to_see_you/
PKC_Mark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ows4se
false
null
t3_1ows4se
/r/LocalLLaMA/comments/1ows4se/im_glad_to_see_you/
false
false
self
0
null
Best current model for document analysis (datasheets)?
0
I need to process sensitive documents locally — mainly PDFs (summarization) and images (OCR / image-to-text). What are the best current local models for this workload on my hardware? I'm also open to using separate models for text and I2T if a multimodal one isn't efficient.

**My hardware:**

* **CPU:** Intel Core Ultra 7 155H
* **GPU:** NVIDIA RTX 4070 Mobile (Max-Q)
* **VRAM:** 8 GB
* **RAM:** 31 GB

Any recommendations?
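To be clear about the wiring rather than the model choice: I'm assuming whatever I pick ends up behind a local OpenAI-compatible endpoint (llama.cpp server, LM Studio, etc.), so the pipeline stays model-agnostic. The URL, port, and model name in this sketch are placeholders:

```python
import base64
from openai import OpenAI

# Placeholder assumptions: any local OpenAI-compatible server works (llama.cpp, LM Studio, ...).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def ocr_image(path: str) -> str:
    """Send one image to a local vision model and return the extracted text."""
    b64 = base64.b64encode(open(path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model="local-vlm",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract all text from this image."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(ocr_image("datasheet_page1.png"))  # hypothetical input file
```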
2025-11-14T09:36:29
https://www.reddit.com/r/LocalLLaMA/comments/1ows459/best_current_model_for_document_analysis/
Low-Implement9819
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ows459
false
null
t3_1ows459
/r/LocalLLaMA/comments/1ows459/best_current_model_for_document_analysis/
false
false
self
0
null
China’s AI is quietly making big inroads in Silicon Valley | Technology
11
The open Chinese models are great for us enthusiasts. But I wonder: are the Chinese firms the suckers here? They spend a lot of time and money on research to create models, but then don't get the economic benefits from them. Maybe they have some sales in the domestic market, but Western firms just take the technology, deploy it in their own datacenters, and don't need to pay them a dime. Heck, due to the GPU shortage and export restrictions, Western firms are even at an advantage when it comes to offering API deployments, and they have accelerator firms like Cerebras on top of that.
2025-11-14T09:24:21
https://www.aljazeera.com/economy/2025/11/13/chinas-ai-is-quietly-making-big-inroads-in-silicon-valley
DeltaSqueezer
aljazeera.com
1970-01-01T00:00:00
0
{}
1owrx93
false
null
t3_1owrx93
/r/LocalLLaMA/comments/1owrx93/chinas_ai_is_quietly_making_big_inroads_in/
false
false
default
11
{'enabled': False, 'images': [{'id': 'XKSDkP24jkFiaPPpahhfTFJZo-96mtVaeuDBA0JscFQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XKSDkP24jkFiaPPpahhfTFJZo-96mtVaeuDBA0JscFQ.jpeg?width=108&crop=smart&auto=webp&s=6493c586ec9384a229be23e3d501591108f001d9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XKSDkP24jkFiaPPpahhfTFJZo-96mtVaeuDBA0JscFQ.jpeg?width=216&crop=smart&auto=webp&s=06becf89e7e2af449c319df96610d0eb0ba07730', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XKSDkP24jkFiaPPpahhfTFJZo-96mtVaeuDBA0JscFQ.jpeg?width=320&crop=smart&auto=webp&s=6346327886ec049d9e8fdc1576754d25932326b3', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/XKSDkP24jkFiaPPpahhfTFJZo-96mtVaeuDBA0JscFQ.jpeg?width=640&crop=smart&auto=webp&s=39d865748506d5e722ce6b54d66d04b82c9b137c', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/XKSDkP24jkFiaPPpahhfTFJZo-96mtVaeuDBA0JscFQ.jpeg?width=960&crop=smart&auto=webp&s=ea348c746e36f1cc60d7f4aff5e4d93dd167a22e', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/XKSDkP24jkFiaPPpahhfTFJZo-96mtVaeuDBA0JscFQ.jpeg?width=1080&crop=smart&auto=webp&s=774bd277fdd0eb758b3d734e3b4e1385e40c2153', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/XKSDkP24jkFiaPPpahhfTFJZo-96mtVaeuDBA0JscFQ.jpeg?auto=webp&s=03bd892aac1bf87d0bca3f7420feb403f4cb7245', 'width': 1920}, 'variants': {}}]}