Dataset schema (one row per post):

| column | dtype | observed values |
| --- | --- | --- |
| title | string | 1-300 chars |
| score | int64 | 0-8.54k |
| selftext | string | 0-41.5k chars |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2026-03-04 02:14:14 |
| url | string | 0-878 chars |
| author | string | 3-20 chars |
| domain | string | 0-82 chars |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2026-02-19 14:51:53 |
| gilded | int64 | 0-2 |
| gildings | string | 7 classes |
| id | string | 7 chars |
| locked | bool | 2 classes |
| media | string | 646-1.8k chars |
| name | string | 10 chars |
| permalink | string | 33-82 chars |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | 4-213 chars |
| ups | int64 | 0-8.54k |
| preview | string | 301-5.01k chars |
Minimax now offers Coding Plans, but is it worth it?
7
I have a GLM Coding Plan subscription, and so far I've had a pretty good experience with GLM-4.6 in Claude Code. I paid $180, and it gives me ~600 prompts every 5 hours. Minimax's plan costs $20 more and offers 300 prompts every 5 hours, which is about half. What do you guys think? Is it better to stick with GLM, or is it worth trying Minimax M2? I'm not sure whether a yearly plan would include better models during the term; maybe I pay for a year and wait 6-8 months to see a new model from Minimax. Let me know your thoughts.

https://preview.redd.it/3zotexgkcg0g1.png?width=2534&format=png&auto=webp&s=4fb1e7532e7e626119b7778a342ff32940a962a5

https://preview.redd.it/oc6dk9kndg0g1.png?width=2784&format=png&auto=webp&s=ee37aeff1a01846b940e62324bcb7257f0e657e5
2025-11-10T16:05:27
https://www.reddit.com/r/LocalLLaMA/comments/1othqbc/minimax_now_offers_coding_plans_but_is_it_worth_it/
baykarmehmet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1othqbc
false
null
t3_1othqbc
/r/LocalLLaMA/comments/1othqbc/minimax_now_offers_coding_plans_but_is_it_worth_it/
false
false
https://b.thumbs.redditm…t8QCFp2xFmGo.jpg
7
null
The reason Local LLMs feel underwhelming
0
**Premise**: The model architectures and hardware we have are terrible for the single-user (batch-1) at-home use case.

**Background Info**: If you have ever run multiple requests against the same model endpoint (running on GPUs), you may have noticed that it draws pretty much the same amount of juice and takes about the same amount of time (roughly; there is nuance). So what you get is twice as many tokens per Wh and twice as many tokens per second. You crank up the concurrency and it just keeps scaling, and **then** you run out of VRAM. You realize that even though you had plenty of VRAM left over (maybe more than the model weights themselves), your throughput is limited by the VRAM available for contexts. It's not a limit of memory bandwidth or compute; the limit is that you can't keep enough requests in flight to saturate (or maybe better, maximally exploit) the compute.

I have a use case where I can go at 3500 t/s using Qwen3-VL-30B-A3B FP8 on a pair of RTX 5090s. That's over **10M** tokens per hour; I could burn through 1B tokens in a few days.

**So what**: Have you seen the thing where ChatGPT or Gemini gives you two responses? Unless it's peak hours, that's basically free for them. So instead of sending a single query to our local chat thingy, how about we send the same thing twice, or three times, or ten times, then have the model summarise the answers or pick the best one? There are multiple strategies. That's something you can try using optiLLM (it's on GitHub; it has been posted here before), and it's easy to set up with vLLM. There's also OpenEvolve by the same author, also very cool but more of a research use case. There surely are other neat ideas; it's literally free tokens, so how do we get them?

You need enough VRAM for this to work. Squeezing in the largest model that fits, with compromised context, may not be the way to get the best responses.

If you have experimented with the above frameworks, cobbled together something along those lines, or have an interesting idea, tell me :)
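A minimal sketch of the "ask N times, pick the best" idea against a local OpenAI-compatible server such as vLLM; the port, model name, and judge prompt are illustrative assumptions, not optiLLM's actual strategy:

```python
# Best-of-N sampling against a local OpenAI-compatible server (e.g. vLLM).
# Port and model name are placeholders; requires `pip install openai`.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="none")
MODEL = "qwen3-vl-30b-a3b"  # hypothetical served-model name

async def ask(prompt: str, temp: float) -> str:
    resp = await client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=temp,
    )
    return resp.choices[0].message.content

async def best_of_n(prompt: str, n: int = 5) -> str:
    # The n drafts share the server's batch, so they add little wall time.
    drafts = await asyncio.gather(*(ask(prompt, 0.8) for _ in range(n)))
    numbered = "\n\n".join(f"[{i}] {d}" for i, d in enumerate(drafts))
    judge = (f"Question: {prompt}\n\nCandidate answers:\n{numbered}\n\n"
             "Reply with only the number of the best answer.")
    pick = (await ask(judge, 0.0)).strip()
    return drafts[int(pick)] if pick.isdigit() and int(pick) < n else drafts[0]

print(asyncio.run(best_of_n("Why is the sky blue?")))
```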
2025-11-10T16:04:45
https://www.reddit.com/r/LocalLLaMA/comments/1othpkc/the_reason_local_llms_feel_underwhelming/
reto-wyss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1othpkc
false
null
t3_1othpkc
/r/LocalLLaMA/comments/1othpkc/the_reason_local_llms_feel_underwhelming/
false
false
self
0
null
PowerEdge R710, 120GB RAM (no VRAM)
0
Hello everyone, I am pretty new to the world of local LLMs (I tinkered a bit with LM Studio) and I was wondering if I could achieve any significant results with the following goal: have an AI agent that can help me write code, deploy it locally on the server, and, bit by bit, find ways to let it manage the server by itself in the long run. If you have any suggestions on where to start, I would love that. Currently installed on the server: Proxmox.
2025-11-10T16:00:39
https://www.reddit.com/r/LocalLLaMA/comments/1othlf5/poweredge_r710_120gm_ram_no_vram/
zakoud
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1othlf5
false
null
t3_1othlf5
/r/LocalLLaMA/comments/1othlf5/poweredge_r710_120gm_ram_no_vram/
false
false
self
0
null
AMA With Moonshot AI, The Open-source Frontier Lab Behind Kimi K2 Thinking Model
549
Hi r/LocalLLaMA, today we are hosting **Moonshot AI**, the research lab behind the **Kimi models**. We're excited to have them open up and answer your questions directly.

Our participants today:

* u/ComfortableAsk4494
* u/zxytim
* u/ppwwyyxx

**The AMA will run from 8 AM – 11 AM PST, with the Kimi team continuing to follow up on questions over the next 24 hours.**

https://preview.redd.it/5yg0ncsn7g0g1.png?width=3525&format=png&auto=webp&s=5318680204ef7502ad349aec148147d9e3398f87
2025-11-10T15:44:10
https://www.reddit.com/r/LocalLLaMA/comments/1oth5pw/ama_with_moonshot_ai_the_opensource_frontier_lab/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oth5pw
false
null
t3_1oth5pw
/r/LocalLLaMA/comments/1oth5pw/ama_with_moonshot_ai_the_opensource_frontier_lab/
false
true
https://b.thumbs.redditm…NBr7SQ9Go3AI.jpg
549
null
Maxsun displays quad GPU and dual GPU workstations. Pricing TBD
8
https://preview.redd.it/… able to afford?
2025-11-10T15:35:52
https://www.reddit.com/r/LocalLLaMA/comments/1otgxs8/maxsun_displays_quad_gpu_and_dual_gpu/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otgxs8
false
null
t3_1otgxs8
/r/LocalLLaMA/comments/1otgxs8/maxsun_displays_quad_gpu_and_dual_gpu/
false
false
https://b.thumbs.redditm…4ohTsreyqvdA.jpg
8
null
Local generation/translation of subtitles
2
Do we have that? I remember VLC announcing something along these lines, but I never saw a working home-lab version of anything like it.
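This is doable locally today with Whisper-family models. A minimal sketch that writes an English .srt for a video using faster-whisper (file names are placeholders; assumes `pip install faster-whisper` and ffmpeg on the PATH):

```python
# Generate (translated) subtitles locally with faster-whisper.
from faster_whisper import WhisperModel

def srt_time(t: float) -> str:
    # Format seconds as an SRT timestamp: HH:MM:SS,mmm
    h, rem = divmod(int(t), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02}:{m:02}:{s:02},{int(t * 1000) % 1000:03}"

model = WhisperModel("small", device="cpu", compute_type="int8")
# task="translate" emits English regardless of the source language;
# drop it to transcribe in the original language instead.
segments, _info = model.transcribe("movie.mkv", task="translate", vad_filter=True)

with open("movie.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(segments, start=1):
        f.write(f"{i}\n{srt_time(seg.start)} --> {srt_time(seg.end)}\n"
                f"{seg.text.strip()}\n\n")
```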
2025-11-10T15:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1otgn2q/local_generationtranslation_of_subtitules/
techmago
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otgn2q
false
null
t3_1otgn2q
/r/LocalLLaMA/comments/1otgn2q/local_generationtranslation_of_subtitules/
false
false
self
2
null
VoxCPM Text-to-Speech running on the Apple Neural Engine (ANE)
13
Hey! I ported OpenBMB's VoxCPM to CoreML so it now mostly runs on the Apple Neural Engine (ANE). Here is the [repo](https://github.com/0seba/VoxCPMANE). The model supports voice cloning and handles real-time streaming speech generation on my M1 MacBook Air 8GB. Hopefully someone can try it; any feedback is useful.

https://reddit.com/link/1otgd3j/video/f73iublf3g0g1/player

I am also looking into porting more models to CoreML for NE support, so let me know what could be useful to you. Here are some characteristics to help filter whether a task or model makes sense for the NE:

* Compute-heavy operations. I am looking into porting the image encoder of OCR models (like DeepSeek-OCR) and running the text generation/decoding with MLX.
* Same as above, but more generally encoder/embedding models that lean compute-heavy and where latency is not as important.
* MoEs are awful for the NE.
* 4-bit quantization is a big issue: the NE does not support grouping, so there is too much degradation under 6 bits; 8 bits is recommended to stay on the safe side.
* The NE cannot access the full RAM bandwidth (120 GB/s on M3 Max, M4 Pro, and M4 Max; 60 GB/s on other models, [source](https://github.com/Anemll/anemll-bench)). Note this is peak bandwidth, and full models run under 50 GB/s in my experience. On an iPhone 15 Pro Max I get 44 GB/s peak bandwidth.
* For the reason above, avoid tasks where big models and low latency both matter; situations where generation at reading speed is enough can be acceptable. Six inferences per second can be performed on a 6 GB model at 40 GB/s bandwidth.
* It is highly preferable for tasks where context is bounded (0-8K tokens). The CoreML computation graph is static, so attention is always performed over the full context of the graph you are using. It is possible to have several computation graphs with different lengths, but this requires model switching, and I haven't looked into the downsides if you want to extend the current context when it fills up.
* Async batch generation may be a favorable scenario.
* Running on the NE instead of the GPU means the GPU stays free, and the lower power consumption could also prevent throttling.
* I am not sure, but I think it is better to lean on small-ish models. CoreML has a maximum model size of 2 GB for the NE, so to run bigger models you have to split the whole (transformer) model into groups of its consecutive blocks (also, my MacBook has 8 GB, so I cannot test anything bigger).
* CoreML has a long first compilation for a new model (especially for the Neural Engine), but subsequent model loads are cached and much faster.

Happy to help if you have any more questions or have any issues with the package.
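For anyone who wants to poke at a converted model directly, a minimal coremltools sketch of the compute-unit pinning involved (the model path and input tensor name are hypothetical, not VoxCPMANE's actual interface):

```python
# Load a Core ML package pinned to the Neural Engine (macOS;
# requires `pip install coremltools numpy` and an existing .mlpackage).
import coremltools as ct
import numpy as np

# CPU_AND_NE keeps execution off the GPU, leaving it free for other work.
model = ct.models.MLModel("VoxCPM.mlpackage",
                          compute_units=ct.ComputeUnit.CPU_AND_NE)

# The first prediction triggers NE compilation (cached afterwards),
# so expect it to be slow exactly once.
out = model.predict({"input_ids": np.zeros((1, 64), dtype=np.int32)})
print(out.keys())
```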
2025-11-10T15:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1otgd3j/voxcpm_texttospeech_running_or_apple_neural/
0seba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otgd3j
false
null
t3_1otgd3j
/r/LocalLLaMA/comments/1otgd3j/voxcpm_texttospeech_running_or_apple_neural/
false
false
self
13
{'enabled': False, 'images': [{'id': 'dvjTO3Ntbl7DpYMVsEyG0Q8lBCe_LB-me-5HVErZU58', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dvjTO3Ntbl7DpYMVsEyG0Q8lBCe_LB-me-5HVErZU58.png?width=108&crop=smart&auto=webp&s=8a6032c36cd362c78b76ac704944ad2532d38284', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dvjTO3Ntbl7DpYMVsEyG0Q8lBCe_LB-me-5HVErZU58.png?width=216&crop=smart&auto=webp&s=b04a91b497efb377569abbdd14855b8aded5a259', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dvjTO3Ntbl7DpYMVsEyG0Q8lBCe_LB-me-5HVErZU58.png?width=320&crop=smart&auto=webp&s=b1a5e867cdbc2922645226e9594a85c82b28658f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dvjTO3Ntbl7DpYMVsEyG0Q8lBCe_LB-me-5HVErZU58.png?width=640&crop=smart&auto=webp&s=d167e12953f9af749f13c3c224b850317eb67ca9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dvjTO3Ntbl7DpYMVsEyG0Q8lBCe_LB-me-5HVErZU58.png?width=960&crop=smart&auto=webp&s=53eff8f7b8376859e5bcaa0a36fd8bcd3d365966', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dvjTO3Ntbl7DpYMVsEyG0Q8lBCe_LB-me-5HVErZU58.png?width=1080&crop=smart&auto=webp&s=c06ce004d06a299e800a87354ec23ea753006d7d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dvjTO3Ntbl7DpYMVsEyG0Q8lBCe_LB-me-5HVErZU58.png?auto=webp&s=85685fc6a74da2b2292261f535d202129e5687ed', 'width': 1200}, 'variants': {}}]}
What’s your offline stack?
3
I had been using Zed and, until today, enjoying it, but the latest version is throwing a lot of ‘unable to parse’ errors. I’d like to use VSCode, but I'm not going to ‘sign in’ to any service for offline use; that’s silly. Does anyone have a bulletproof, free, and preferably open-source-only offline dev setup for VS Code today?
2025-11-10T15:04:10
https://www.reddit.com/r/LocalLLaMA/comments/1otg460/whats_your_offline_stack/
CandidLiving5247
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otg460
false
null
t3_1otg460
/r/LocalLLaMA/comments/1otg460/whats_your_offline_stack/
false
false
self
3
null
Reason #5827 I'm on at least 3 lists, and why Google AI suck
0
I just wanted to search for some lyrics, dammit, but Google knows better, because of course they do! AI search, whatever you think of it, is meh, but for god's sake, if it refuses, just don't show me anything instead of this patronizing bullshit. This takes almost half of the damn screen.

https://preview.redd.it/kg5ib8p50g0g1.png?width=2202&format=png&auto=webp&s=e3e62a365d94201b1166ec6ea5345f3c030d0273
2025-11-10T14:50:38
https://www.reddit.com/r/LocalLLaMA/comments/1otfrgt/reason_5827_im_on_at_least_3_lists_and_why_google/
Sicarius_The_First
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otfrgt
false
null
t3_1otfrgt
/r/LocalLLaMA/comments/1otfrgt/reason_5827_im_on_at_least_3_lists_and_why_google/
false
false
https://b.thumbs.redditm…rM4Nbo82pIEk.jpg
0
null
Sharing my AI Experiment Pack 2025 - 125 Creative Prompts (Grok tested)
1
[removed]
2025-11-10T14:48:56
https://www.reddit.com/r/LocalLLaMA/comments/1otfpwx/sharing_my_ai_experiment_pack_2025_125_creative/
Pretty_Swan_4189
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otfpwx
false
null
t3_1otfpwx
/r/LocalLLaMA/comments/1otfpwx/sharing_my_ai_experiment_pack_2025_125_creative/
false
false
self
1
null
What will we more likely get?
0
What do you think is more likely? Will we get more VRAM at cheaper prices, which might be due to China likely entering the consumer GPU space at lower prices? Or will we get better and more intelligent small LLMs? Or is the LLM advancement currently hitting a wall? Many recent releases haven't shown noticeable improvement over their previous generation. Meta and Google haven't released a model in ages (based on the AI clock, lol) although they might be cooking something.
2025-11-10T14:47:47
https://www.reddit.com/r/LocalLLaMA/comments/1otfou4/what_will_we_more_likey_get/
skillmaker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otfou4
false
null
t3_1otfou4
/r/LocalLLaMA/comments/1otfou4/what_will_we_more_likey_get/
false
false
self
0
null
Local LLM for creative writing
0
For good reason, it seems like most LLM discussion here is about coding performance. I don't generally code; I am looking more at creative writing. What should I be looking for when deciding on a model along that line? I guess it should be uncensored, that would probably help. What benefits do we get from larger models? Isn't the context window the most important thing?
2025-11-10T14:12:31
https://www.reddit.com/r/LocalLLaMA/comments/1otetj1/local_llm_for_creative_writing/
Elricboy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otetj1
false
null
t3_1otetj1
/r/LocalLLaMA/comments/1otetj1/local_llm_for_creative_writing/
false
false
self
0
null
Advice
0
First-time post here, but seeking criticism (on Reddit, yeah). I wanted to explore multi-model systems for local LLM agentic workflows, but moving different models on/off the GPU required a lot of manual coding, so I developed an automated infrastructure to handle it. I'm not a developer by trade, and a lot of this is for my own learning experience. I've not open-sourced it yet since I'm a single parent and would like to learn more (out of respect for the open-source community) so I can put something clean out. But I am not sure if I've built something with any value, and that's why I am here. A brief summary:

* GGUF-native servers (llama.cpp backend via UDS and HTTP/1.1 protocols).
* Load/unload models within a single workflow (my library is 46 models; tested a lot with all of them, seemingly without issue).
* Load dynamically across GPUs based on demand and resources (balanced, non-sequential parallelism when the workflow allows).
* Heterogeneous hardware (throw your shade, I got what I could: a 5090 and a 3090; I probably should have spent more time on eBay). It can achieve 50/50 balancing, but that is obviously very hardware-specific, though I'd like to explore this more over time for hodgepodge setups.
* Dynamic semaphore spawning for concurrency (not time slicing; based on model size and available resources). Tested an 8-model pool, 2B-14B, up to 17 workers, ~95% success rate (970 inferences in ~2000 s), and a 30-model pool, 2B-35B, ~75% success (that particular test is 90 simultaneous requests in ~166 s). See the sketch below.
* Automated CUDA error recovery. CUDA error 87 may or may not trigger a crash; most are recoverable, but when the server does crash, recovery time is ~86 s (my telemetry is on an atomic clock, but llama.cpp does not timestamp the errors, so I need to dig through the data and find the gap). I've had runs of over 12K inferences with fewer than 100 CUDA errors, all recovered.
* OOM and memory pressure managed via scheduling/retry, with no fragmentation.
* Load times are load times, but these are a configurable one-time cost, and there are options for where/how you pay that cost (as low as 300-1200 ms depending on model and config).

I guess it is kind of like Run:ai (maybe?) but for local inference on consumer hardware. I'm still learning and am curious whether something like this has value to the community before I go crazy cleaning it up. It's been fun to tinker and explore; I'm just wondering if I should clean this up and release it or move on to a new project. Thanks.
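A minimal sketch of the capacity-based semaphore idea from the summary above, where the worker count is derived from leftover VRAM rather than time slicing; the sizes, model names, and the sleep stand-in are assumptions for illustration, not the poster's implementation:

```python
# Capacity-based admission control: size the worker pool per model
# from leftover VRAM after weights, then gate requests with a semaphore.
import asyncio

VRAM_BUDGET_GB = 48          # e.g. a 5090 + 3090 pair (illustrative)
MODEL_FOOTPRINT_GB = {"model-7b": 6.0, "model-14b": 11.0}

def workers_for(model: str, per_request_kv_gb: float = 1.5) -> int:
    free = VRAM_BUDGET_GB - MODEL_FOOTPRINT_GB[model]
    return max(1, int(free // per_request_kv_gb))

async def infer(model: str, prompt: str, sem: asyncio.Semaphore) -> str:
    async with sem:                  # admission control for this model's pool
        await asyncio.sleep(0.1)     # stand-in for the real llama.cpp call
        return f"{model}: {prompt[:20]}..."

async def main():
    model = "model-7b"
    sem = asyncio.Semaphore(workers_for(model))
    results = await asyncio.gather(
        *(infer(model, f"task {i}", sem) for i in range(90)))
    print(len(results), "completed")

asyncio.run(main())
```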
2025-11-10T14:06:41
https://www.reddit.com/r/LocalLLaMA/comments/1oteobs/advice/
Obvious_Service_8209
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oteobs
false
null
t3_1oteobs
/r/LocalLLaMA/comments/1oteobs/advice/
false
false
self
0
null
What is the best hardware under 10k to run local big models with over 200b parameters?
74
Hi! I'm looking to build an AI rig that can run these big models for coding purposes, but also as a hobby. I have been playing around with a 3090 I had for gaming, but I'm interested in running bigger models. So far my options seem to be:

1. Upgrade motherboard/PSU/case and get another 3090/4090 (48GB VRAM total), 128GB RAM, and a server CPU to support more memory channels.
2. Buy a Mac Studio with the M3 Ultra.

My questions are:

1. Would a mixed RAM/VRAM setup like option 1 be slower than the M3 when running 230B models? What about MoE models like MiniMax M2? Would those run much faster on the GPU+RAM approach?
2. Is there any other sensible option for huge amounts of RAM/VRAM with enough single-user inference performance without going over 10k?
3. Would it be worth going for a mix of one 3090 and one 5090, or would the 5090 just be bottlenecked waiting for the 3090?

I'm in no rush; I'm starting to save up to buy something in a few months, but I want to understand which direction to go. If something like option 1 is the best idea, I might upgrade little by little from my current setup.
2025-11-10T13:28:39
https://www.reddit.com/r/LocalLLaMA/comments/1otdr19/what_is_the_best_hardware_under_10k_to_run_local/
nadiemeparaestavez
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otdr19
false
null
t3_1otdr19
/r/LocalLLaMA/comments/1otdr19/what_is_the_best_hardware_under_10k_to_run_local/
false
false
self
74
null
Is there any kind of list with GPUs and their performance on some models?
1
I am researching which GPU to get, and I would like something that shows how good a GPU is: a chart of GPUs and their performance on some models. Is there anything like that out there? BTW, I'm deciding between the B60 Dual and the R9700.
2025-11-10T13:27:33
https://www.reddit.com/r/LocalLLaMA/comments/1otdq33/is_there_any_kind_of_list_with_gpus_and_their/
WizardlyBump17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otdq33
false
null
t3_1otdq33
/r/LocalLLaMA/comments/1otdq33/is_there_any_kind_of_list_with_gpus_and_their/
false
false
self
1
null
Cheapest method to self-host Qwen3-VL model
7
Hi everyone, I need suggestions for self-hosting this model at the lowest possible price.
2025-11-10T13:12:16
https://i.redd.it/aebhrmzyif0g1.png
PavanRocky
i.redd.it
1970-01-01T00:00:00
0
{}
1otddd4
false
null
t3_1otddd4
/r/LocalLLaMA/comments/1otddd4/cheapest_method_to_selfhost_qwen_3vl_model/
false
false
default
7
{'enabled': True, 'images': [{'id': 'aebhrmzyif0g1', 'resolutions': [{'height': 139, 'url': 'https://preview.redd.it/aebhrmzyif0g1.png?width=108&crop=smart&auto=webp&s=0b19b67bdd2c5ee36db32dd8dba9f0afb4fa3b81', 'width': 108}, {'height': 279, 'url': 'https://preview.redd.it/aebhrmzyif0g1.png?width=216&crop=smart&auto=webp&s=fcd096d3184b832dce6ec78e15149160aacbf817', 'width': 216}, {'height': 414, 'url': 'https://preview.redd.it/aebhrmzyif0g1.png?width=320&crop=smart&auto=webp&s=e19a34a00dd7ee8ce435f042bdb3c7334a70c285', 'width': 320}, {'height': 829, 'url': 'https://preview.redd.it/aebhrmzyif0g1.png?width=640&crop=smart&auto=webp&s=bf98f24c25e7253b6af66a0712d1e065b7d506d3', 'width': 640}, {'height': 1243, 'url': 'https://preview.redd.it/aebhrmzyif0g1.png?width=960&crop=smart&auto=webp&s=82f516b818466c1658dc558c3730154a78ee4f90', 'width': 960}, {'height': 1399, 'url': 'https://preview.redd.it/aebhrmzyif0g1.png?width=1080&crop=smart&auto=webp&s=221a5e3ea313c9f3639f0a72dc6ca643a8fd8abc', 'width': 1080}], 'source': {'height': 1399, 'url': 'https://preview.redd.it/aebhrmzyif0g1.png?auto=webp&s=baabcb9c31bfe6bf3f73f5f3a1aa284dd3ec51f7', 'width': 1080}, 'variants': {}}]}
Your favorite open-source AI labs, and why?
0
https://preview.redd.it/…onal preference.
2025-11-10T12:47:11
https://www.reddit.com/r/LocalLLaMA/comments/1otctny/your_favorite_opensource_ai_labs_and_why/
InternationalAsk1490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otctny
false
null
t3_1otctny
/r/LocalLLaMA/comments/1otctny/your_favorite_opensource_ai_labs_and_why/
false
false
https://b.thumbs.redditm…J3rYdqs3O_kQ.jpg
0
null
How are you doing impact analysis before merging multi-repo changes?
1
Curious how other teams are handling this. I keep seeing the same pattern with my teams:

– AI makes it cheap to change code
– People move fast across multiple services
– Then incidents and hotfixes quietly eat all the “saved” time

The common gap seems to be missed impact analysis (identifying what else to change when coding a new requirement). Before you merge a change, how do you figure out:

– what other services/repos are affected?
– which DBs/events/contracts you might break?
– who else should be in the loop for the change?

Are you using:

– PR templates
– runbooks/checklists
– custom internal tooling
– or… mostly vibes?

What’s actually working for you, and what feels brittle?
2025-11-10T12:41:39
https://www.reddit.com/r/LocalLLaMA/comments/1otcpk1/how_are_you_doing_impact_analysis_before_merging/
Temporary_Papaya_199
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otcpk1
false
null
t3_1otcpk1
/r/LocalLLaMA/comments/1otcpk1/how_are_you_doing_impact_analysis_before_merging/
false
false
self
1
null
vLLM speed issues
2
I find myself in the awkward position that my Q4 llama.cpp build of Qwen3-VL-30B-A3B is significantly faster (around 2x per-token speed) than the equivalent vLLM AWQ version, and I can't put my finger on why. These are single, first requests, so it's not a KV-cache issue. In principle vLLM should technically be faster, but I'm just not seeing it. Might I be misconfiguring it somehow? Has anyone else run into similar trouble?
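A minimal sketch for quantifying the gap: time one cold request against each server and compare decode rates. Ports, model names, and the prompt are placeholders (assumes both servers expose the OpenAI-compatible API and `pip install openai`):

```python
# Compare single-request generation speed across two local servers.
import time
from openai import OpenAI

ENDPOINTS = {
    "llama.cpp": ("http://localhost:8080/v1", "qwen3-vl-30b-a3b-q4"),
    "vllm":      ("http://localhost:8000/v1", "Qwen3-VL-30B-A3B-AWQ"),
}

for name, (base_url, model) in ENDPOINTS.items():
    client = OpenAI(base_url=base_url, api_key="none")
    t0 = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Write 200 words about llamas."}],
        max_tokens=256,
        temperature=0,
    )
    dt = time.perf_counter() - t0
    toks = resp.usage.completion_tokens
    print(f"{name}: {toks} tokens in {dt:.1f}s -> {toks/dt:.1f} tok/s")
```

Note this mixes prefill and decode into one number; for a cleaner decode rate, subtract the time-to-first-token using the streaming API.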
2025-11-10T12:18:02
https://www.reddit.com/r/LocalLLaMA/comments/1otc8qi/vllm_speed_issues/
HarambeTenSei
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otc8qi
false
null
t3_1otc8qi
/r/LocalLLaMA/comments/1otc8qi/vllm_speed_issues/
false
false
self
2
null
Can I run any local llm with this hardware?
1
Hey guys! All good? I'm a developer and I want to migrate to local LLMs; this is my first contact after Claude, Cursor, Gemini, and ChatGPT, so I'm quite a layman. I have an RTX 3060 12GB, a Ryzen 7 5700X, and 32GB of RAM. Would it be possible to run something with that, for development and chat bots? I thought about using a Qwen model, but 250GB of VRAM is too much for me, so I thought about trying the small one from Google. Does anyone have any other suggestions?
2025-11-10T12:13:56
https://www.reddit.com/r/LocalLLaMA/comments/1otc5xf/can_i_run_any_local_llm_with_this_hardware/
SrMatic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otc5xf
false
null
t3_1otc5xf
/r/LocalLLaMA/comments/1otc5xf/can_i_run_any_local_llm_with_this_hardware/
false
false
self
1
null
Your AI might be smart - but does it actually remember you?
0
It’s crazy how advanced AI has become - reasoning, writing, even planning - but most tools still forget everything once you close the tab. Every new chat or session feels like starting over. No memory, no continuity. We’ve been exploring ways to fix that at getalchemystai\[.\]com - building SDKs, MCPs, and a Chrome extension (link in comment section) that make AI memory portable across tools like ChatGPT, Claude, Gemini, and others. Persistent memory could make AI way more useful - remembering context, goals, tone, or even past mistakes. But almost no one is doing it.
2025-11-10T11:33:01
https://www.reddit.com/r/LocalLLaMA/comments/1otbeg0/your_ai_might_be_smart_but_does_it_actually/
VirtualEducator8243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otbeg0
false
null
t3_1otbeg0
/r/LocalLLaMA/comments/1otbeg0/your_ai_might_be_smart_but_does_it_actually/
false
false
self
0
null
Managing local stack in Windows.
2
I assume that some people here use their main Windows desktop computer for inference and all the shenanigans, as I do, as well as for daily use/gaming or whatever. **I would like to know how you guys are managing your stacks, how you keep them updated, and so on.** Do you run your services on **bare metal**, or are you using **Docker+WSL2**? How are you managing them?

My stack as an example:

* llama.cpp/llama-server
* llama-swap
* ollama
* owui
* comfyui
* n8n
* getting started with vLLM

Plus remote power on/off for my main station, with access to all of this anywhere through Tailscale from my phone/laptop.

I have all of this working as I want on my Windows host on bare metal, but as the stack gets bigger over time, I'm starting to find it tedious to keep track of all the pip, winget, and build steps just to keep everything up to date.

**What is your stack and how are you managing it, fellow Windows local-inference Redditors?**
2025-11-10T11:31:58
https://www.reddit.com/r/LocalLLaMA/comments/1otbdrl/managing_local_stack_in_windows/
Warriorsito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otbdrl
false
null
t3_1otbdrl
/r/LocalLLaMA/comments/1otbdrl/managing_local_stack_in_windows/
false
false
self
2
null
Please share your GPU, method, and how long it takes to generate 6 seconds of HD video
4
I'm trying to get a sense of the average, across different hardware, models, etc., of how long it truly takes these days to generate a 6-second HD video clip. Thank you!
2025-11-10T11:27:58
https://www.reddit.com/r/LocalLLaMA/comments/1otbb55/please_share_your_gpu_method_and_how_long_it/
dep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otbb55
false
null
t3_1otbb55
/r/LocalLLaMA/comments/1otbb55/please_share_your_gpu_method_and_how_long_it/
false
false
self
4
null
Ultra-fast robotic TTS
12
I'm looking for a TTS engine where speed and low resource use (no GPU), along with clarity, are important. It doesn't need to sound human, and I imagine it closer to espeak-ng than Kokoro-82M. The problem with espeak-ng itself is that it is robotic to the point of not being easy to understand. What options lie between espeak-ng and Kokoro-82M on the quality/speed curve?
2025-11-10T11:19:27
https://www.reddit.com/r/LocalLLaMA/comments/1otb5vw/ultrafast_robotic_tts/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otb5vw
false
null
t3_1otb5vw
/r/LocalLLaMA/comments/1otb5vw/ultrafast_robotic_tts/
false
false
self
12
null
I am trying to launch a project competing with Ollama and LM Studio, but I'm out of funds. Any ideas?
0
The website: [https://rbee.dev/](https://rbee.dev/)

Please give me stars on GitHub: [https://github.com/rbee-keeper/rbee](https://github.com/rbee-keeper/rbee)

I'm hoping to sell some things on presale, but I haven't wired up Stripe and Clerk yet. Still, that is the best idea I have at the moment. My creativity and development powers have run out; I'm very close to the finish line, but it's the last couple of percent that gets stretched out for so long. Now I have blown through my savings. Any ideas on how to get funded properly?
2025-11-10T11:16:32
https://i.redd.it/6ieehegvxe0g1.png
Sileniced
i.redd.it
1970-01-01T00:00:00
0
{}
1otb45l
false
null
t3_1otb45l
/r/LocalLLaMA/comments/1otb45l/i_am_trying_to_launch_a_project_that_is_competing/
false
false
default
0
{'enabled': True, 'images': [{'id': '6ieehegvxe0g1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/6ieehegvxe0g1.png?width=108&crop=smart&auto=webp&s=121cbdab79b44c942a8da8514e7d9d299b743a70', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/6ieehegvxe0g1.png?width=216&crop=smart&auto=webp&s=4b4beacb6503099b3096495a18b17d3391964338', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/6ieehegvxe0g1.png?width=320&crop=smart&auto=webp&s=fc44ed3f2c7bbe81f3343b3dc2f1dea475d586bf', 'width': 320}, {'height': 342, 'url': 'https://preview.redd.it/6ieehegvxe0g1.png?width=640&crop=smart&auto=webp&s=918d1e7ec6bd03593031b7acf0acbcce2f11842b', 'width': 640}, {'height': 513, 'url': 'https://preview.redd.it/6ieehegvxe0g1.png?width=960&crop=smart&auto=webp&s=fe26e5d8634010fed00aa2b395a8a36b6e4dacd5', 'width': 960}, {'height': 577, 'url': 'https://preview.redd.it/6ieehegvxe0g1.png?width=1080&crop=smart&auto=webp&s=4cc307c4dd08b3ea6e6b485f6b5f2f3d4ac483dd', 'width': 1080}], 'source': {'height': 1021, 'url': 'https://preview.redd.it/6ieehegvxe0g1.png?auto=webp&s=d28bf75a7a6b4020219557bef2d59a28c95b5617', 'width': 1909}, 'variants': {}}]}
I found two resources that might be helpful for those looking to build or finetune LLMs
1
We often talk about data size, compute power, and architectures when discussing foundation models; here I also mean open-source models like the **Llama 3 and 4 herds**, **GPT-oss**, **gpt-oss-safeguard**, or **Qwen**. But the real transformation begins much deeper, at the neuron level, where the [activation functions](https://go.adaline.ai/SyU65V5) decide how information flows.

Think of it like this: every neuron in a neural network asks, *“Should I fire or stay silent?”* That decision, made by an activation function, defines whether the model can truly understand patterns or just mimic them. One way to think of them is as boosters or preservers of the signal.

Early models used **sigmoid** and **tanh**. The issue was that they killed gradients, slowing down learning. Then **ReLU** arrived: fast, sparse, and scalable. It unlocked the deep networks we now take for granted.

Today’s foundation models use more evolved activations:

* **GPT-oss** blends **Swish + GELU (SwiGLU)** for long-sequence stability.
* **gpt-oss-safeguard** adds *adaptive activations* that tune gradients dynamically for safer fine-tuning.
* **Qwen** relies on **GELU** to keep multilingual semantics consistent across layers.

These activation functions shape how a model can reason, generalize, and stay stable during massive training runs. Even small mathematical tweaks can mean smoother learning curves, fewer dead neurons, and more coherent outputs.

If you’d like a deeper dive, here’s the full breakdown (with examples and PyTorch code):

1. [Activation Functions in Neural Network](https://go.adaline.ai/SyU65V5)
2. [Foundation Models](https://go.adaline.ai/NoX0UZz)

https://preview.redd.it/baghjn220e0g1.png?width=1189&format=png&auto=webp&s=3600e261817b0c3d482e2b5507ada0a6e4a7989a
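As a concrete illustration of the SwiGLU gating mentioned above, a minimal PyTorch sketch (dimensions are arbitrary; this is not any particular model's exact block):

```python
# A SwiGLU feed-forward block: silu(gate(x)) decides, element-wise,
# how much of up(x) flows through to the output projection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)  # gating path
        self.up = nn.Linear(d_model, d_ff, bias=False)    # value path
        self.down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))

ffn = SwiGLU(d_model=512, d_ff=1376)
print(ffn(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```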
2025-11-10T10:47:11
https://www.reddit.com/r/LocalLLaMA/comments/1otamgm/i_found_two_resources_that_might_be_helpful_for/
TheProdigalSon26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otamgm
false
null
t3_1otamgm
/r/LocalLLaMA/comments/1otamgm/i_found_two_resources_that_might_be_helpful_for/
false
false
https://b.thumbs.redditm…nU9LWFEYDols.jpg
1
null
What’s the best way to build a true omni-channel bot (email + SMS + WhatsApp + voice + chat) with shared session state?
2
Hi everyone. I am working for a client who wants to build a collection automation system using an omnichannel bot. The goal is to support email, SMS, voice or phone (VoIP or PSTN), and a chat widget on a website or app. I have looked at tools like VAPI and similar vendors that offer voice, SMS, and email, but I am not sure they qualify as true omnichannel solutions, especially when it comes to chat and keeping session context across different channels.

I would like to hear from anyone who has built or is currently building something like this:

- What platforms or architectures are you using for omnichannel support bots across email, SMS, voice, and chat?
- How are you handling session state or context when users switch channels? For example, if someone starts on a chat widget, then replies over SMS or gets a follow-up phone call, how do you keep everything tied together?
- What have been the biggest technical challenges? Things like voice reliability, routing across channels, data sync issues, identifying the same user across different channels, or handing off to a human.
- If you evaluated vendors that only supported two or three channels, like voice plus SMS plus email, did you run into limitations that forced you to build custom components?

Would appreciate any real-world experiences or vendor recommendations. Thanks.
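On the session-state question, the common pattern is to resolve every channel address to a single customer key and append all turns to one conversation record. A minimal sketch of that shape, with an in-memory dict standing in for Redis or a database (all names hypothetical):

```python
# Channel-agnostic session state: identity resolution first, then one
# shared conversation record per customer across all channels.
from dataclasses import dataclass, field

@dataclass
class Session:
    customer_id: str
    turns: list = field(default_factory=list)   # (channel, direction, text)

identities = {("sms", "+15551234567"): "cust-42",
              ("email", "jane@example.com"): "cust-42",
              ("chat", "widget-session-abc"): "cust-42"}
sessions: dict[str, Session] = {}

def record(channel: str, address: str, direction: str, text: str) -> Session:
    cust = identities[(channel, address)]            # identity resolution step
    s = sessions.setdefault(cust, Session(cust))
    s.turns.append((channel, direction, text))
    return s

record("chat", "widget-session-abc", "in", "I want to set up a payment plan")
s = record("sms", "+15551234567", "in", "Can we continue by text?")
print([t[0] for t in s.turns])   # ['chat', 'sms'] -- one thread, two channels
```

The hard part in practice is populating the identity map (matching a phone number, email, and anonymous chat session to the same account), which is why many teams end up building that piece themselves.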
2025-11-10T10:36:25
https://www.reddit.com/r/LocalLLaMA/comments/1otagbl/whats_the_best_way_to_build_a_true_omnichannel/
BriefCardiologist656
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1otagbl
false
null
t3_1otagbl
/r/LocalLLaMA/comments/1otagbl/whats_the_best_way_to_build_a_true_omnichannel/
false
false
self
2
null
388 Tickets in 6 Weeks: Context Engineering Done Right
3
2025-11-10T10:02:21
https://tobiasuhlig.medium.com/388-tickets-in-6-weeks-context-engineering-done-right-f8da8577b8c2?source=friends_link&sk=c9a9cac53d5f52a970a79a9493639eef
TobiasUhlig
tobiasuhlig.medium.com
1970-01-01T00:00:00
0
{}
1ot9x8a
false
null
t3_1ot9x8a
/r/LocalLLaMA/comments/1ot9x8a/388_tickets_in_6_weeks_context_engineering_done/
false
false
default
3
{'enabled': False, 'images': [{'id': 'Dlmg215DPfup_FOeknlp5AZH3ICyr6Kdy3gfrOudPbU', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/Dlmg215DPfup_FOeknlp5AZH3ICyr6Kdy3gfrOudPbU.png?width=108&crop=smart&auto=webp&s=91a1c16236f80f3247bce2756b78a5caeaaeaee9', 'width': 108}, {'height': 224, 'url': 'https://external-preview.redd.it/Dlmg215DPfup_FOeknlp5AZH3ICyr6Kdy3gfrOudPbU.png?width=216&crop=smart&auto=webp&s=d4b86cd70de2fcc3faace4075439301482179496', 'width': 216}, {'height': 333, 'url': 'https://external-preview.redd.it/Dlmg215DPfup_FOeknlp5AZH3ICyr6Kdy3gfrOudPbU.png?width=320&crop=smart&auto=webp&s=f2b8719ac8ecf1932d293a5a3f23354461330cf9', 'width': 320}, {'height': 666, 'url': 'https://external-preview.redd.it/Dlmg215DPfup_FOeknlp5AZH3ICyr6Kdy3gfrOudPbU.png?width=640&crop=smart&auto=webp&s=deaef017e280cfefb6c15e86809f1df119148482', 'width': 640}, {'height': 999, 'url': 'https://external-preview.redd.it/Dlmg215DPfup_FOeknlp5AZH3ICyr6Kdy3gfrOudPbU.png?width=960&crop=smart&auto=webp&s=d213ce495fd34ae3876fdec82dcd42b75f5666c1', 'width': 960}, {'height': 1124, 'url': 'https://external-preview.redd.it/Dlmg215DPfup_FOeknlp5AZH3ICyr6Kdy3gfrOudPbU.png?width=1080&crop=smart&auto=webp&s=01b529d287e1d1460483b3866647b864a2bdeebc', 'width': 1080}], 'source': {'height': 1249, 'url': 'https://external-preview.redd.it/Dlmg215DPfup_FOeknlp5AZH3ICyr6Kdy3gfrOudPbU.png?auto=webp&s=3488c877f1caf9ad82c8de924a974c578ac7f72b', 'width': 1200}, 'variants': {}}]}
Is there a model that can moan or make semi-realistic female emotions?
0
I’m working on an adult app and looking for a model that can produce realistic human emotions, especially female moans or sensual vocal reactions. I tried ElevenLabs; it can, but usually ~70% of the results are too bad and "robotic".
2025-11-10T09:36:47
https://www.reddit.com/r/LocalLLaMA/comments/1ot9in3/is_there_model_that_can_moan_or_make/
Amelia_Amour
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot9in3
false
null
t3_1ot9in3
/r/LocalLLaMA/comments/1ot9in3/is_there_model_that_can_moan_or_make/
false
false
self
0
null
Looking for community input on an open-source 6U GPU server frame
0
Hey all, I’m planning to 3D model and open-source a 6U chassis designed to house up to an E-ATX board, 14 PCIe slots’ width of GPUs, dual PSUs, and mounts for CPU AIO cooling. Ideally the whole thing will slide out for easy maintenance, with good cable management for power and PCIe risers. My goal is a 3D-printable chassis to support a new X299 build with expansion for up to 7 server cards cooled by blowers; beyond that, I would like input on what the community might want out of something along these lines. I’ll likely post the design files on Prusa Printables, alongside my PowerMac G3 sleeper workstation mod.

Before I start modeling, the following questions come to mind:

- What print bed size should I target? The two standard sizes that come to mind are an Ender 3 or a Bambu X1 Carbon, but I’d like to hear your thoughts.
- Does it have enough PCIe slot width? Going to 16 slots would mean better breathing for quad 3-slot 3090 rigs.
- Any must-have features you’d like to see (easy cable routing, removable tray, open-air vs. enclosed, etc.)?

If there’s solid community interest, I’ll make the design more flexible and polished. If not, I’ll simplify it to fit my own setup. Either way, I’ll open-source it when it’s ready.
2025-11-10T09:31:29
https://www.reddit.com/r/LocalLLaMA/comments/1ot9fox/looking_for_community_input_on_an_opensource_6u/
PraxisOG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot9fox
false
null
t3_1ot9fox
/r/LocalLLaMA/comments/1ot9fox/looking_for_community_input_on_an_opensource_6u/
false
false
self
0
null
Qwen3-VL's perceptiveness is incredible.
365
[I took a 4k image and scattered around 6 medium-length words.](https://i.imgur.com/liqVUJd.jpeg)

With `Qwen3-VL-8B-Instruct-GGUF`, a temperature of `0`, an image token count of `2300` (seems to be the sweet spot), and the prompt:

> Provide transcriptions and bounding boxes for the words in the image. Use JSON format.

This is the output:

> [ {"bbox_2d": [160, 867, 181, 879], "text_content": "steam"}, {"bbox_2d": [146, 515, 168, 527], "text_content": "queen"}, {"bbox_2d": [565, 731, 589, 743], "text_content": "satisfied"}, {"bbox_2d": [760, 615, 784, 627], "text_content": "feather"}, {"bbox_2d": [335, 368, 364, 379], "text_content": "mention"}, {"bbox_2d": [515, 381, 538, 392], "text_content": "cabinet"} ]

Flawless. No notes. [It even got the bounding boxes correct.](https://i.imgur.com/r5Pt8oa.jpeg)

How do other models compare?

- Gemini 2.5 Pro: hallucinates an answer.
- Claude Opus 4: correctly identifies 3/6 words.
- ChatGPT 5: after 5 minutes (!!) of thinking, it finds all 6 words. The bounding boxes are wrong.
- DeepSeek-OCR: produces garbage (possible PEBCAK).
- PaddleOCR-VL-0.9B: finds 3 words, hallucinates 2. Doesn't output bounding boxes.
- GLM-4.5V: also perfect results.

Very impressive that such a small model can get such good results, especially considering it's not tuned for OCR.
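A minimal sketch for overlaying the returned boxes on the image with Pillow, assuming the coordinates are normalized to a 0-1000 grid (which they appear to be here, given the 3072px-wide source); the file name is a placeholder:

```python
# Draw Qwen3-VL-style bbox_2d results on the source image.
# Requires `pip install pillow` and the image saved locally.
from PIL import Image, ImageDraw

detections = [
    {"bbox_2d": [160, 867, 181, 879], "text_content": "steam"},
    {"bbox_2d": [146, 515, 168, 527], "text_content": "queen"},
]

img = Image.open("scattered_words.jpg")
draw = ImageDraw.Draw(img)
w, h = img.size
for det in detections:
    x1, y1, x2, y2 = det["bbox_2d"]
    # Rescale from the assumed 0-1000 grid to pixel coordinates.
    box = (x1 * w / 1000, y1 * h / 1000, x2 * w / 1000, y2 * h / 1000)
    draw.rectangle(box, outline="red", width=3)
    draw.text((box[0], box[1] - 14), det["text_content"], fill="red")
img.save("annotated.jpg")
```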
2025-11-10T09:12:28
https://www.reddit.com/r/LocalLLaMA/comments/1ot95gj/qwen3vls_perceptiveness_is_incredible/
Trypocopris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot95gj
false
null
t3_1ot95gj
/r/LocalLLaMA/comments/1ot95gj/qwen3vls_perceptiveness_is_incredible/
false
false
self
365
{'enabled': False, 'images': [{'id': 'rsUjx-Rml3VhWFW4XgFl418L0h6UUxjiwzkORNPLJGI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/rsUjx-Rml3VhWFW4XgFl418L0h6UUxjiwzkORNPLJGI.jpeg?width=108&crop=smart&auto=webp&s=c38e37b95a8b89628c0e267e2775fde48f481a12', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/rsUjx-Rml3VhWFW4XgFl418L0h6UUxjiwzkORNPLJGI.jpeg?width=216&crop=smart&auto=webp&s=f6b403dcf1c1400b6c657dd71ef5529376ed096a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/rsUjx-Rml3VhWFW4XgFl418L0h6UUxjiwzkORNPLJGI.jpeg?width=320&crop=smart&auto=webp&s=12cd80a510c02eb71870dd6b7395eac60d6da1d4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/rsUjx-Rml3VhWFW4XgFl418L0h6UUxjiwzkORNPLJGI.jpeg?width=640&crop=smart&auto=webp&s=0be88de95a2a4cc06a73233bd1be1c9352239d31', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/rsUjx-Rml3VhWFW4XgFl418L0h6UUxjiwzkORNPLJGI.jpeg?width=960&crop=smart&auto=webp&s=0825bc9d8298911b3fc1b56d1fa106f3acb3a962', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/rsUjx-Rml3VhWFW4XgFl418L0h6UUxjiwzkORNPLJGI.jpeg?width=1080&crop=smart&auto=webp&s=a6f9a10d8d784c75f38b035037e374394ce3da8c', 'width': 1080}], 'source': {'height': 1728, 'url': 'https://external-preview.redd.it/rsUjx-Rml3VhWFW4XgFl418L0h6UUxjiwzkORNPLJGI.jpeg?auto=webp&s=af44b75897490a358f94c6cfce05749a7a96802f', 'width': 3072}, 'variants': {}}]}
Local Models setup in Text Generation WebUI (Oobabooga) Issue
1
I installed Text Generation WebUI (Oobabooga) and manually downloaded MiniMax-M2-UD-IQ1_S-00002-of-00002.gguf. I use the standard setup and the llama.cpp model loader. I put the model into the folder \text-generation-webui\user_data\models because there is a txt file there telling me to put models into that specific folder. But when I start up WebUI and want to choose the model in the model dropdown, nothing is shown. Did I use the wrong model format, or what is the error?
2025-11-10T09:08:59
https://www.reddit.com/r/LocalLLaMA/comments/1ot93ip/local_models_setup_in_text_generation_webui/
_springphul_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot93ip
false
null
t3_1ot93ip
/r/LocalLLaMA/comments/1ot93ip/local_models_setup_in_text_generation_webui/
false
false
self
1
null
NVIDIA RTX Pro 5000 Blackwell 72 GB Price
14
Found one of the first price tags in Germany. It seems quite high; I expected it to be around 6000-6500€. I hope it will go down when other offers come up... What do you think about this GPU? I think the 6000 series has better value, especially considering bandwidth and core count.

[https://www.comnet-itshop.de/eshop.php?eslink=1&action=article_detail&s_supplier_id=12&s_supplier_aid=12189390](https://www.comnet-itshop.de/eshop.php?eslink=1&action=article_detail&s_supplier_id=12&s_supplier_aid=12189390)

https://preview.redd.it/pk7d074qae0g1.png?width=1284&format=png&auto=webp&s=8ad13e0998d176a60ae79e3141e86ccf1fa3e9b4
2025-11-10T09:08:14
https://www.reddit.com/r/LocalLLaMA/comments/1ot9346/nvidia_rtx_pro_5000_blackwell_72_gb_price/
Low_Philosophy7906
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot9346
false
null
t3_1ot9346
/r/LocalLLaMA/comments/1ot9346/nvidia_rtx_pro_5000_blackwell_72_gb_price/
false
false
https://b.thumbs.redditm…OduHZxzuem5w.jpg
14
null
A Grand Unified Theory of Universal Language Models: Cosmological Analogies in Transformer Architecture
0
We propose a novel hypothetical framework that establishes profound analogies between transformer-based language models and fundamental cosmological principles. This Grand Unified Theory of Universal Language Models (GUT-ULM) posits that transformer architectures can be understood as computational universes, where the attention mechanism functions as gravitational force, training represents the forward arrow of time, and tokens emerge from a Universal Language Field (ULF) analogous to quantum fields in particle physics. We extend this framework to address continual learning through the lens of cosmic acceleration, propose the emergence of information singularities analogous to black holes, and demonstrate how inference parameters create a computational multiverse. This work bridges artificial intelligence, hypothetical physics, and cosmology, offering new perspectives on model interpretability, scalability, and the fundamental nature of machine intelligence.

Keywords: Transformer models, cosmological analogy, attention mechanism, Universal Language Field, continual learning, information singularities, multimodal AI
2025-11-10T08:50:17
https://notebooklm.google.com/notebook/b00bbb76-9473-4141-a29c-6612ecf151d6
Sad-Low9265
notebooklm.google.com
1970-01-01T00:00:00
0
{}
1ot8t5u
false
null
t3_1ot8t5u
/r/LocalLLaMA/comments/1ot8t5u/a_grand_unified_theory_of_universal_language/
false
false
default
0
{'enabled': False, 'images': [{'id': 'N3QGD27wXzvVS0kf8tC3Kui7F4OYYRR-UQI8b6u9to4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/N3QGD27wXzvVS0kf8tC3Kui7F4OYYRR-UQI8b6u9to4.png?width=108&crop=smart&auto=webp&s=fb28ad8a753f23001a2bddcdc473f490db1c00be', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/N3QGD27wXzvVS0kf8tC3Kui7F4OYYRR-UQI8b6u9to4.png?width=216&crop=smart&auto=webp&s=d909a029cc223f7ae921dcf00c6072730896c854', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/N3QGD27wXzvVS0kf8tC3Kui7F4OYYRR-UQI8b6u9to4.png?width=320&crop=smart&auto=webp&s=fc21eb5db66d5f8f2fcf39eab80f8c0d59b014a3', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/N3QGD27wXzvVS0kf8tC3Kui7F4OYYRR-UQI8b6u9to4.png?width=640&crop=smart&auto=webp&s=5e8a677c6653863681bc20d2715c1746fcbcd04e', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/N3QGD27wXzvVS0kf8tC3Kui7F4OYYRR-UQI8b6u9to4.png?width=960&crop=smart&auto=webp&s=3466232e7a2229744ffff955ffd6e80d35ba3925', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/N3QGD27wXzvVS0kf8tC3Kui7F4OYYRR-UQI8b6u9to4.png?width=1080&crop=smart&auto=webp&s=5ffb3bad6ec9a11f0c5421860f7b50ffa441328d', 'width': 1080}], 'source': {'height': 1254, 'url': 'https://external-preview.redd.it/N3QGD27wXzvVS0kf8tC3Kui7F4OYYRR-UQI8b6u9to4.png?auto=webp&s=2d5b1a61db22a15616b8c64560ddf3776acf475e', 'width': 2400}, 'variants': {}}]}
What is the best way to extract info from scanned documents like this?
0
https://preview.redd.it/… need more VRAM?
2025-11-10T08:45:27
https://www.reddit.com/r/LocalLLaMA/comments/1ot8qi1/what_is_the_best_way_to_extract_info_from_scanned/
Ok_Television_9000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot8qi1
false
null
t3_1ot8qi1
/r/LocalLLaMA/comments/1ot8qi1/what_is_the_best_way_to_extract_info_from_scanned/
false
false
https://b.thumbs.redditm…kZP8ZvyXp_Bw.jpg
0
null
7 PCIe x16 slots with 4 3090s: how do I vertically mount the 4th one?
3
I'm aware that this isn't a PC building or hardware sub, but I figure there's probably a number of people here who have experienced something similar to this. I have a Phanteks Enthoo Pro 2 Server Edition case.
2025-11-10T08:02:31
https://www.reddit.com/r/LocalLLaMA/comments/1ot8323/7_pcie_x16_slots_with_4_3090s_how_do_i_vertically/
Amazydayzee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot8323
false
null
t3_1ot8323
/r/LocalLLaMA/comments/1ot8323/7_pcie_x16_slots_with_4_3090s_how_do_i_vertically/
false
false
self
3
null
Is it too early for local LLMs?
87
I’ve been thinking for a while about setting up a local environment for running an LLM. Since I was already planning to build a gaming PC, I saw it as a good opportunity to tweak the setup so I could also use AI tools locally; I use them quite a lot. But after looking into the market, it really feels like it’s still too early: everything is either overpriced and full of compromises, or the few uncompromising options cost an absurd amount. It just doesn’t seem worth it yet. I feel like we’ll need to wait another couple of years before running an LLM locally becomes truly viable for most people. Of course, it depends on your use case and budget, but I think only a few can realistically justify such an investment, or get a real return on it, right now.
2025-11-10T07:58:46
https://www.reddit.com/r/LocalLLaMA/comments/1ot80p0/is_it_too_early_for_local_llms/
Substantial_Mode_167
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot80p0
false
null
t3_1ot80p0
/r/LocalLLaMA/comments/1ot80p0/is_it_too_early_for_local_llms/
false
false
self
87
null
How does CUDA compatibility work, and what's the difference between pip CUDA and apt CUDA?
5
As I understand it, you can install an older CUDA toolkit on newer drivers without problems, e.g. CUDA 12.0 on the 580 driver. What about programs: can you run a torch built for CUDA 12.8 on CUDA toolkit 13.0? Does llama.cpp compile with any reasonably new CUDA toolkit? For instance, could I check out a llama.cpp commit from last year and compile it with the CUDA 13 toolkit? And do you even need the CUDA toolkit at all when running PyTorch, which installs its CUDA packages with pip?
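On the last question: the pip wheels bundle their own CUDA runtime, so the system toolkit is generally only needed for compiling things like llama.cpp. A minimal sketch for inspecting what your PyTorch build ships with (assumes a CUDA-enabled torch wheel):

```python
# Check which CUDA runtime your torch wheel bundles and whether the
# installed driver can run it; safe to execute on any machine.
import torch

print("torch built against CUDA:", torch.version.cuda)     # bundled runtime
print("GPU usable with installed driver:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
```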
2025-11-10T07:19:46
https://www.reddit.com/r/LocalLLaMA/comments/1ot7eyr/how_does_cuda_compability_work_and_whats_the/
arstarsta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot7eyr
false
null
t3_1ot7eyr
/r/LocalLLaMA/comments/1ot7eyr/how_does_cuda_compability_work_and_whats_the/
false
false
self
5
null
I'm new to LLMs and just ran my first model. What LLM "wowed" you when you started out?
16
Hey everyone, I'm brand new to the world of LLMs and finally took the plunge this week. I set up my first model and honestly, I'm hooked. There's something special about running this tech on my own machine and seeing it respond in real time. Since I'm just starting out, I'd love to hear from this community: **What was the first LLM that truly "wowed" you?** Was it a particular model's creativity? Its speed? Its uncensored or unexpected responses? Or just the thrill of running it completely offline? I'm looking for recommendations and stories to guide my next steps, and I'm sure other newcomers are too. Thanks in advance, and I'm excited to join the conversation.
2025-11-10T07:10:06
https://www.reddit.com/r/LocalLLaMA/comments/1ot79n2/im_new_to_llms_and_just_ran_my_first_model_what/
Street-Lie-2584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot79n2
false
null
t3_1ot79n2
/r/LocalLLaMA/comments/1ot79n2/im_new_to_llms_and_just_ran_my_first_model_what/
false
false
self
16
null
Generating questions of my school’s standard/style/format
0
Hi redditors, I'm an educator vibe-coding a reliable question bank in Google AI Studio's environment. My main goal is to generate new questions and detailed solutions by typing in a keyword (e.g., "quadratic equation"). These questions must closely match the style, difficulty, and format of my school's past-year papers and textbooks. I've uploaded all my textbooks and past papers as PDFs and have tried to generate questions/solutions based on a keyword/topic.

I need advice on:

1. The best path to high style/format consistency and fast generation speed (low latency).
2. Whether my current RAG setup (even with better prompting) is the best I can hope for to generate questions and solutions closest to my school's standard.
3. Whether fine-tuning would be a better option to explore for matching my school's question and solution style, instead of RAG.

Thank you for your time! I'd appreciate solid advice!
2025-11-10T07:08:11
https://www.reddit.com/r/LocalLLaMA/comments/1ot78m3/generating_questions_of_my_schools/
East-Statistician88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot78m3
false
null
t3_1ot78m3
/r/LocalLLaMA/comments/1ot78m3/generating_questions_of_my_schools/
false
false
self
0
null
Kimi infra team: Quantization is not a compromise, it's the next paradigm
195
After K2-Thinking's release, many developers have been curious about its native INT4 quantization format. Shaowei Liu, **infra engineer** at u/Kimi-Moonshot, shares an insider's view on why this choice matters, and why quantization today isn't just about sacrificing precision for speed.

# Key idea

In the context of LLMs, quantization is no longer a trade-off. With the evolution of param-scaling and test-time-scaling, native low-bit quantization will become a standard paradigm for large-model training.

# Why Low-bit Quantization Matters

In modern LLM inference, there are two distinct optimization goals:

• High throughput (cost-oriented): maximize GPU utilization via large batch sizes.
• Low latency (user-oriented): minimize per-query response time.

For Kimi-K2's MoE structure (with **1/48 sparsity**), **decoding is memory-bound**: the smaller the model weights, the faster the compute. FP8 weights (≈1 TB) already hit the limit of what a single high-speed-interconnect GPU node can handle. By switching to W4A16, latency drops sharply while maintaining quality, a perfect fit for low-latency inference.

# Why QAT over PTQ

Post-training quantization (PTQ) worked well for shorter generations but **failed in longer reasoning chains**:

• Error accumulation during long decoding degraded precision.
• Dependence on calibration data caused "expert distortion" in sparse MoE layers.

Thus, K2-Thinking adopted QAT for **minimal loss** and **more stable long-context reasoning**.

# How it works

K2-Thinking uses a **weight-only QAT** with **fake quantization + STE (straight-through estimator)**. The pipeline was fully integrated in just days, from QAT training to INT4 inference to RL rollout, enabling near-lossless results without extra tokens or retraining.

# INT4's hidden advantage in RL

Few people mention this: **native INT4** doesn't just speed up inference, it **accelerates RL training** itself. Because RL rollouts often suffer from "long-tail" inefficiency, INT4's low-latency profile makes those stages much faster. In practice, each RL iteration runs **10-20% faster end-to-end.** Moreover, quantized RL brings stability: the smaller representational space reduces accumulation error, improving learning robustness.

# Why INT4, not MXFP4

Kimi chose INT4 over "fancier" MXFP4/NVFP4 to better support **non-Blackwell GPUs**, with strong existing kernel support (e.g., Marlin). At a quant scale of 1×32, INT4 matches the FP4 formats in expressiveness while being more hardware-adaptable.
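A minimal PyTorch sketch of the weight-only fake-quantization + STE mechanism described above, using symmetric INT4 with per-group scales at the mentioned 1×32 granularity; this illustrates the technique and is not Kimi's training code:

```python
# Weight-only QAT: forward with quant-dequantized weights, backward via STE.
import torch

def fake_quant_int4(w: torch.Tensor, group: int = 32) -> torch.Tensor:
    g = w.reshape(-1, group)
    # Symmetric int4 range [-7, 7], one scale per 32-weight group.
    scale = g.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7.0
    q = (g / scale).round().clamp(-7, 7) * scale      # quantize, dequantize
    return q.reshape_as(w)

class QATLinear(torch.nn.Linear):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        wq = fake_quant_int4(self.weight)
        # STE: forward sees quantized weights, backward sees identity,
        # so gradients flow to the full-precision master weights.
        w = self.weight + (wq - self.weight).detach()
        return torch.nn.functional.linear(x, w, self.bias)

layer = QATLinear(256, 256)
layer(torch.randn(4, 256)).sum().backward()
print(layer.weight.grad.shape)  # gradients reach the fp32 master weights
```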
2025-11-10T06:25:55
https://www.reddit.com/r/LocalLLaMA/comments/1ot6k56/kimi_infra_team_quantization_is_not_a_compromise/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot6k56
false
null
t3_1ot6k56
/r/LocalLLaMA/comments/1ot6k56/kimi_infra_team_quantization_is_not_a_compromise/
false
false
self
195
null
Quick check - are these the only LLM building blocks?
0
Been working with LLMs for a while now. My understanding is there are basically 4 things - Classification, Summarization, Chat, and Extraction. Chain them together and you get Agents/Workflows. Am I missing something obvious here? Trying to explain this to both customers and fellow developers and want to make sure I'm not oversimplifying.
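Those four primitives do chain naturally; a minimal sketch of a classify-extract-summarize workflow against a local OpenAI-compatible endpoint (endpoint, model name, and prompts are illustrative assumptions):

```python
# Chain the building blocks: one generic call, three specialized uses.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def llm(instruction: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
        temperature=0,
    )
    return resp.choices[0].message.content

ticket = "My invoice #4521 charged me twice, please refund one payment."
label = llm("Classify as: billing, technical, other. One word only.", ticket)  # classification
fields = llm("Extract invoice_number and issue as JSON.", ticket)              # extraction
summary = llm("Summarize in one sentence for a support agent.", ticket)        # summarization
print(label, fields, summary, sep="\n")
```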
2025-11-10T06:17:20
https://www.reddit.com/r/LocalLLaMA/comments/1ot6f78/quick_check_are_these_the_only_llm_building_blocks/
Individual-Library-1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot6f78
false
null
t3_1ot6f78
/r/LocalLLaMA/comments/1ot6f78/quick_check_are_these_the_only_llm_building_blocks/
false
false
self
0
null
Montana Becomes First State to Enshrine ‘Right to Compute’ Into Law - Montana Newsroom
88
Hopefully this leads to more states following suit, or to similar federal legislation.
2025-11-10T06:05:33
https://montananewsroom.com/montana-becomes-first-state-to-enshrine-right-to-compute-into-law/
Different_Fix_2217
montananewsroom.com
1970-01-01T00:00:00
0
{}
1ot682o
false
null
t3_1ot682o
/r/LocalLLaMA/comments/1ot682o/montana_becomes_first_state_to_enshrine_right_to/
false
false
https://external-preview…e4fcb714a76a2ddc
88
{'enabled': False, 'images': [{'id': 'mQmftbFg8dXc1pZL5UOJLL9AO6IH64kyLn5ax_JS4QM', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/mQmftbFg8dXc1pZL5UOJLL9AO6IH64kyLn5ax_JS4QM.jpeg?width=108&crop=smart&auto=webp&s=9d50ee72554f48ce521ff6d9deb859434437fa97', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/mQmftbFg8dXc1pZL5UOJLL9AO6IH64kyLn5ax_JS4QM.jpeg?width=216&crop=smart&auto=webp&s=18f61bc7b708f1a4f05dfbea693b029c23631ff7', 'width': 216}, {'height': 228, 'url': 'https://external-preview.redd.it/mQmftbFg8dXc1pZL5UOJLL9AO6IH64kyLn5ax_JS4QM.jpeg?width=320&crop=smart&auto=webp&s=47307531bd26dcfd65b88041b6df4532d9199922', 'width': 320}, {'height': 457, 'url': 'https://external-preview.redd.it/mQmftbFg8dXc1pZL5UOJLL9AO6IH64kyLn5ax_JS4QM.jpeg?width=640&crop=smart&auto=webp&s=a2bc023e5f328155ef589e67ab05e5434c741142', 'width': 640}, {'height': 685, 'url': 'https://external-preview.redd.it/mQmftbFg8dXc1pZL5UOJLL9AO6IH64kyLn5ax_JS4QM.jpeg?width=960&crop=smart&auto=webp&s=a9f82780f3e1d02960cc23f77b67869047433776', 'width': 960}, {'height': 771, 'url': 'https://external-preview.redd.it/mQmftbFg8dXc1pZL5UOJLL9AO6IH64kyLn5ax_JS4QM.jpeg?width=1080&crop=smart&auto=webp&s=5daf8c7cbb9a01a17a898ea78337506ecfd2daf3', 'width': 1080}], 'source': {'height': 1828, 'url': 'https://external-preview.redd.it/mQmftbFg8dXc1pZL5UOJLL9AO6IH64kyLn5ax_JS4QM.jpeg?auto=webp&s=7c9e4d531abc108bf19509e4a60bbef490933992', 'width': 2560}, 'variants': {}}]}
Last week in Multimodal AI - Local Edition
19
I curate a weekly newsletter on multimodal AI. Here are the local/edge highlights from this week:

Rolling Forcing - Real-Time Streaming Video on 1 GPU

• Generates multi-minute video interactively with joint multi-frame denoising.
• Anchors temporal context for stability without heavy clusters.
• [**Project Page**](https://kunhao-liu.github.io/Rolling_Forcing_Webpage/) | [**Paper**](https://arxiv.org/abs/2509.25161) | [**GitHub**](https://github.com/TencentARC/RollingForcing) | [**Hugging Face**](https://huggingface.co/TencentARC/RollingForcing)

https://reddit.com/link/1ot67nn/video/q45gljk2ed0g1/player

Step-Audio-EditX (3B) - Text-Driven Audio Editing

• Controls emotion, style, breaths, laughs via prompts.
• Runs on a single GPU; open weights for local pipelines.
• [**Project Page**](https://stepaudiollm.github.io/step-audio-editx/) | [**Paper**](https://arxiv.org/abs/2511.03601) | [**GitHub**](https://github.com/stepfun-ai/Step-Audio-EditX) | [**Hugging Face**](https://huggingface.co/stepfun-ai/Step-Audio-EditX)

[An overview of the architecture of Step-Audio-EditX.](https://preview.redd.it/fsl15il8ed0g1.png?width=1456&format=png&auto=webp&s=caa10ad203ad44158a1ba8dbe7f303b0eb03cfbd)

BindWeave - Consistent Subjects, Local Pipelines

• Subject-consistent video gen; ComfyUI support.
• Drop-in for desktop creative stacks.
• [**Project Page**](https://lzy-dot.github.io/BindWeave/) | [**Paper**](https://huggingface.co/papers/2510.00438) | [**GitHub**](https://github.com/bytedance/BindWeave) | [**Hugging Face**](https://huggingface.co/ByteDance/BindWeave)

https://reddit.com/link/1ot67nn/video/ay7nndyaed0g1/player

InfinityStar (8B) - Unified Spacetime AR Gen

• 8B model targets high-res image/video generation.
• Fits prosumer GPUs for local experimentation.
• [**Paper**](https://arxiv.org/abs/2511.04675) | [**GitHub**](https://github.com/FoundationVision/InfinityStar) | [**Hugging Face**](https://huggingface.co/FoundationVision/InfinityStar)

https://reddit.com/link/1ot67nn/video/ouipokpbed0g1/player

OlmoEarth-v1-Large - Remote Sensing for Builders

• Satellite model ready for on-prem analysis.
• Strong for geospatial R&D without cloud lock-in.
• [**Hugging Face**](https://huggingface.co/allenai/OlmoEarth-v1-Large) | [**Paper**](https://www.datocms-assets.com/64837/1762355216-olmoearth_v2.pdf) | [**Announcement**](https://x.com/allen_ai/status/1985719070407176577)

https://reddit.com/link/1ot67nn/video/mkbihhrced0g1/player

Check out the [full newsletter](https://open.substack.com/pub/thelivingedge/p/multimodal-monday-32-multi-query?r=12l7fk&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false) for more demos, papers, and resources.
2025-11-10T06:04:53
https://www.reddit.com/r/LocalLLaMA/comments/1ot67nn/last_week_in_multimodal_ai_local_edition/
Vast_Yak_4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot67nn
false
null
t3_1ot67nn
/r/LocalLLaMA/comments/1ot67nn/last_week_in_multimodal_ai_local_edition/
false
false
https://a.thumbs.redditm…cAhwbOLzsNz8.jpg
19
null
Built my own locally running LLM and connected it to a SQL database in 2 hours
0
Hello, I've seen many posts here about running LLMs locally and connecting them to databases. As a data engineer, I was curious, so I gave it a try after looking at many repos and built a complete database client backed by a locally running LLM. It should be very friendly to non-technical users: provide your own DB name and password, and that's it. As long as you understand the basic components needed, it is very easy to build from scratch (a minimal sketch of the core loop is below). Feel free to ask me any questions.
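For readers who want to try this themselves, here is a minimal sketch of such a client loop. Nothing here is from the OP's project: the endpoint URL, model name, and the SQLite choice are all assumptions, just one common way to wire a local model to a database.

    # Hedged sketch: assumes a local OpenAI-compatible server (llama.cpp,
    # Ollama, etc.) at localhost:8080 and a SQLite database; all names are
    # illustrative, not the OP's actual implementation.
    import sqlite3
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    def ask_database(question: str, db_path: str):
        conn = sqlite3.connect(db_path)
        # Hand the model the schema so it can write valid SQL.
        schema = "\n".join(row[0] for row in conn.execute(
            "SELECT sql FROM sqlite_master WHERE type='table'"))
        resp = client.chat.completions.create(
            model="local-model",  # whatever the local server exposes
            messages=[
                {"role": "system", "content":
                    f"Write one SQLite SELECT for this schema:\n{schema}\n"
                    "Reply with SQL only."},
                {"role": "user", "content": question},
            ],
        )
        sql = resp.choices[0].message.content.strip().strip("`")
        return conn.execute(sql).fetchall()  # validate the SQL first in real use

In practice you would validate the generated SQL (read-only, allow-listed tables) before executing it against anything that matters.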
2025-11-10T05:53:40
https://i.redd.it/77j64o0mcd0g1.png
Content_Complex_8080
i.redd.it
1970-01-01T00:00:00
0
{}
1ot60e6
false
null
t3_1ot60e6
/r/LocalLLaMA/comments/1ot60e6/built_my_own_local_running_llm_and_connect_to_a/
false
false
https://b.thumbs.redditm…hsbVBLR4Q6Iw.jpg
0
{'enabled': True, 'images': [{'id': 'CxXQOxHjjWBohybF-uZ_XOiv0txsTDVfGGCaPny4y-o', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/77j64o0mcd0g1.png?width=108&crop=smart&auto=webp&s=636db67d99aac561f831ff5178d899ca02d41a70', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/77j64o0mcd0g1.png?width=216&crop=smart&auto=webp&s=81fb96696bd180d39ffc29c1203c16a1bc86c6fb', 'width': 216}, {'height': 231, 'url': 'https://preview.redd.it/77j64o0mcd0g1.png?width=320&crop=smart&auto=webp&s=a193d11a6c0daf043bd04e3e80d3ccd56d748eb2', 'width': 320}, {'height': 462, 'url': 'https://preview.redd.it/77j64o0mcd0g1.png?width=640&crop=smart&auto=webp&s=367d56a48b53e1ac05073a5d59aa92d275067327', 'width': 640}, {'height': 693, 'url': 'https://preview.redd.it/77j64o0mcd0g1.png?width=960&crop=smart&auto=webp&s=c532598087deac8a38aef38aec596c84b7b56e29', 'width': 960}], 'source': {'height': 766, 'url': 'https://preview.redd.it/77j64o0mcd0g1.png?auto=webp&s=de1fe2d6f074c58654401f437c854740ac234465', 'width': 1061}, 'variants': {}}]}
Does anyone else believe there should be a UI for LLMs?
0
Hello everyone, I've had this question in my mind: if LLMs could use the internet as if it were natively designed for them, how much more efficient would it become? For example, we have MCPs through which an LLM can use the internet or an application, but what if we created something that turns your website into an LLM-friendly design, maybe just pure JSON of text and buttons? Perhaps it's a user journey plus a documentation file for the LLM to read before acting. What I'm thinking is: if we had a converter for each and every website that produced an AI-ready UI, wouldn't that let LLMs use websites faster, more efficiently, and more accurately?
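Purely to make the idea concrete, here is one shape such an "AI-ready UI" manifest could take; every field name below is invented for illustration, not an existing standard.

    # Hypothetical manifest a converter could emit for a page, so an agent
    # reads state and actions instead of parsing rendered HTML.
    page_manifest = {
        "page": "https://example.com/checkout",
        "purpose": "Complete a purchase",
        "state": {"cart_items": 2, "total": "19.98 USD"},
        "actions": [
            {"id": "apply_coupon", "params": {"code": "string"}},
            {"id": "place_order", "params": {}},
        ],
        "docs": "Read /llm.txt before acting: coupons apply before tax.",
    }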
2025-11-10T05:47:42
https://www.reddit.com/r/LocalLLaMA/comments/1ot5wpm/is_any_one_here_believe_that_there_should_be_ui/
teraflopspeed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot5wpm
false
null
t3_1ot5wpm
/r/LocalLLaMA/comments/1ot5wpm/is_any_one_here_believe_that_there_should_be_ui/
false
false
self
0
null
RAG Paper 25.11.09
6
1. [Expert Evaluation of LLM World Models: A High-$T_c$ Superconductivity Case Study](http://arxiv.org/abs/2511.03782v1)
2. [ASVRI-Legal: Fine-Tuning LLMs with Retrieval Augmented Generation for Enhanced Legal Regulation](http://arxiv.org/abs/2511.03563v1)
3. [RAGBoost: Efficient Retrieval-Augmented Generation with Accuracy-Preserving Context Reuse](http://arxiv.org/abs/2511.03475v1)
4. [Comparing the Performance of LLMs in RAG-based Question-Answering: A Case Study in Computer Science Literature](http://arxiv.org/abs/2511.03261v1)
5. [LGM: Enhancing Large Language Models with Conceptual Meta-Relations and Iterative Retrieval](http://arxiv.org/abs/2511.03214v1)
6. [Forecast2Anomaly (F2A): Adapting Multivariate Time Series Foundation Models for Anomaly Prediction](http://arxiv.org/abs/2511.03149v1)
7. [A Proprietary Model-Based Safety Response Framework for AI Agents](http://arxiv.org/abs/2511.03138v1)

**Collected by** [**RagView**](https://www.ragview.ai/)**.**
2025-11-10T05:41:02
https://www.reddit.com/r/LocalLLaMA/comments/1ot5sh6/rag_paper_251109/
Cheryl_Apple
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot5sh6
false
null
t3_1ot5sh6
/r/LocalLLaMA/comments/1ot5sh6/rag_paper_251109/
false
false
self
6
null
Local LLaMA model for RTX5090
5
I have an RTX 5090 and want to run a local LLM with ChatRTX. Which model do you recommend I install? Frankly, I'm going to use it to summarize documents and classify images. Thank you!
2025-11-10T05:02:10
https://www.reddit.com/r/LocalLLaMA/comments/1ot541p/local_llama_model_for_rtx5090/
Cuaternion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot541p
false
null
t3_1ot541p
/r/LocalLLaMA/comments/1ot541p/local_llama_model_for_rtx5090/
false
false
self
5
null
When did Tesla P40s get a boost? Or did anyone test them on the latest MoE models?
14
I've been sitting here fuming over RAM/GPU prices for the last few months. While everything gets more expensive, especially used hardware on eBay, I've been stuck with my 4 Tesla P40s for a while, and I never once thought to check whether the latest MoE models run well on them, because I remember my P40s being useless and slow, only getting me 2-3 tokens/sec on Llama 70B models. Then the other day I said to myself, I'm just gonna load the Qwen3 30B-A3B Coder model and see what happens. The Q4 quant fits fully in the VRAM of the 4 GPUs. Well, I was quite surprised: I got 53 tokens per second generation speed with Qwen3 Coder. I was like, oh wow! Because I remember watching a random YouTube video of a guy with a 5090 getting 48 tokens/sec on the same model, though some of his model was running in CPU RAM and I can't remember which quant he used. So I went and downloaded a Q2 quant of MiniMax M2, and that very large model is netting me 19-23 tokens per second of generation speed and 67-71 tokens/sec of prompt processing. Here's an example output with MiniMax M2 running across all 4 Tesla P40s:

    prompt eval time =   2521.31 ms /  174 tokens (14.49 ms per token, 69.01 tokens per second)
    eval time        = 144947.40 ms / 3156 tokens (45.93 ms per token, 21.77 tokens per second)
    total time       = 147468.70 ms / 3330 tokens

These speeds surprised me so much I just ordered 4 more P40s because they are so cheap compared to everything else. I plan to use the Q4 quant of MiniMax M2 with 8 of them. Did something happen recently to make them faster, or is this just an unexpected outcome of the latest advancements?
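The numbers make sense from first principles: decode speed is roughly memory bandwidth divided by bytes read per token, and an A3B MoE only touches about 3B parameters per token. A rough sketch (my arithmetic, not a measurement):

    # Back-of-envelope ceiling for a single P40 (spec bandwidth ~347 GB/s).
    p40_bw = 347e9  # bytes/s

    def tps_ceiling(active_params, bytes_per_param, bw):
        # Bandwidth-bound decode: tokens/s <= bw / bytes touched per token.
        return bw / (active_params * bytes_per_param)

    print(tps_ceiling(70e9, 0.5, p40_bw))  # dense 70B @ Q4: ~10 t/s max
    print(tps_ceiling(3e9, 0.5, p40_bw))   # 30B-A3B @ Q4: ~230 t/s max

So nothing magical happened to the cards; the active-parameter count collapsed, and llama.cpp's MoE support lets the P40s reach a useful slice of that ceiling.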
2025-11-10T03:58:50
https://www.reddit.com/r/LocalLLaMA/comments/1ot3xiy/when_did_tesla_p40s_get_boost_or_did_anyone_test/
pharrowking
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot3xiy
false
null
t3_1ot3xiy
/r/LocalLLaMA/comments/1ot3xiy/when_did_tesla_p40s_get_boost_or_did_anyone_test/
false
false
self
14
null
Anyone got the chance to compare LOCAL MiniMax-M2 and Kimi-K2-Thinking?
3
I'm downloading Kimi-K2-Thinking Q3KXL and it will probably take a few days, but so far MiniMax-M2 Q6 is doing great. I had it easily solve an agentic task that GLM-4.5 Q8 would fail, along with the Qwen 32B/30B models. GPT-OSS-120B was able to solve it too, so I'm going to be comparing these 3 together quite a bit. I'm curious what folks are seeing in terms of performance running locally.
2025-11-10T03:54:11
https://www.reddit.com/r/LocalLLaMA/comments/1ot3ueh/anyone_got_the_chance_to_compare_local_minimaxm2/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot3ueh
false
null
t3_1ot3ueh
/r/LocalLLaMA/comments/1ot3ueh/anyone_got_the_chance_to_compare_local_minimaxm2/
false
false
self
3
null
API to MCP Server
1
If you want to develop enterprise-grade agentic apps, you'll most likely need to make use of existing APIs, and the best way to give your agents access to those APIs is through MCP servers. My GitHub repo below has a comprehensive guide to creating MCP servers/proxies for your existing APIs using products/platforms like AWS, GCP, MS Azure, and Postman. [https://github.com/meetrais/api-to-mcp-server](https://github.com/meetrais/api-to-mcp-server)
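For orientation, a minimal sketch of the pattern (not from the linked repo; the server name, endpoint, and tool are placeholders) using the official Python MCP SDK's FastMCP helper to wrap an existing REST API as a tool:

    # Hedged sketch: expose a REST endpoint as an MCP tool.
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("weather-proxy")  # hypothetical server name

    @mcp.tool()
    def get_weather(city: str) -> str:
        """Fetch current weather for a city from an existing REST API."""
        r = httpx.get("https://api.example.com/weather", params={"q": city})
        r.raise_for_status()
        return r.text

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default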
2025-11-10T03:46:47
https://www.reddit.com/r/LocalLLaMA/comments/1ot3pbq/api_to_mcp_server/
meetrais
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot3pbq
false
null
t3_1ot3pbq
/r/LocalLLaMA/comments/1ot3pbq/api_to_mcp_server/
false
false
self
1
{'enabled': False, 'images': [{'id': 'vw0ThzfjUiiq8vyhVihmWo2aaEx6px2COH3OJDEkGkw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vw0ThzfjUiiq8vyhVihmWo2aaEx6px2COH3OJDEkGkw.png?width=108&crop=smart&auto=webp&s=60cb064a0c3aab40e5365661ab0ea04fefa00e11', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vw0ThzfjUiiq8vyhVihmWo2aaEx6px2COH3OJDEkGkw.png?width=216&crop=smart&auto=webp&s=b1b133eac769a42bfc77c5f04666c0f71e736fe2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vw0ThzfjUiiq8vyhVihmWo2aaEx6px2COH3OJDEkGkw.png?width=320&crop=smart&auto=webp&s=79afd90264e10d9d166d05d8ed42e99735ef0e80', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vw0ThzfjUiiq8vyhVihmWo2aaEx6px2COH3OJDEkGkw.png?width=640&crop=smart&auto=webp&s=0c67663eb184b8446765e1f23aaf2f149f1fb44c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vw0ThzfjUiiq8vyhVihmWo2aaEx6px2COH3OJDEkGkw.png?width=960&crop=smart&auto=webp&s=a4e75f95e81c25434bcde75f455bed8dd9ea0aa0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vw0ThzfjUiiq8vyhVihmWo2aaEx6px2COH3OJDEkGkw.png?width=1080&crop=smart&auto=webp&s=0dbb9c15fafe70778025d055f8b44f5b869b5f88', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vw0ThzfjUiiq8vyhVihmWo2aaEx6px2COH3OJDEkGkw.png?auto=webp&s=8d0058f1418057143ca31d5faf45186cbf0212f6', 'width': 1200}, 'variants': {}}]}
I tested Strix Halo clustering w/ ~50Gig IB to see if networking is really the bottleneck
524
**TLDR:** While InfiniBand is cool, 10 Gbps Thunderbolt is sufficient for llama.cpp. Recently I got really fascinated by clustering with Strix Halo to get a potential 200 GB of VRAM without significant costs. I'm currently using a 4x4090 solution for research, but it's very loud and power-hungry (plus it doesn't make much sense for normal 1-2 user inference—this machine is primarily used for batch generation for research purposes). I wanted to look for a low-power but efficient way to inference \~230B models at Q4. And here we go. I always had this question of how exactly networking would affect the performance. So I got two modded Mellanox ConnectX-5 Ex 100 Gig NICs which I had some experience with on NCCL. These cards are very cool with reasonable prices and are quite capable. However, due to the Strix Halo platform limitation, I only got a PCIe 4.0 x4 link. But I was still able to get around 6700 MB/s or roughly 55 Gbps networking between the nodes, which is far better than using IP over Thunderbolt (10 Gbps). I tried using vLLM first and quickly found out that RCCL is not supported on Strix Halo. :( Then I tried using llama.cpp RPC mode with the `-c` flag to enable caching, and here are the results I got: |Test Type|Single Machine w/o rpc|2.5 Gbps|10 Gbps (TB)|50 Gbps| |:-|:-|:-|:-|:-| |**pp512**|653.74|603.00|654.03|663.70| |**tg128**|49.73|30.98|36.44|35.73| |**tg512**|47.54|29.13|35.07|34.30| |**pp512 @ d512**|601.75|554.17|599.76|611.11| |**tg128 @ d512**|45.81|27.78|33.88|32.67| |**tg512 @ d512**|44.90|27.14|31.33|32.34| |**pp512 @ d2048**|519.40|485.93|528.52|537.03| |**tg128 @ d2048**|41.84|25.34|31.22|30.34| |**tg512 @ d2048**|41.33|25.01|30.66|30.11| As you can see, the Thunderbolt connection almost matches the 50 Gbps MLX5 on token generation. Compared to the non-RPC single node inference, the performance difference is still quite substantial—with about a 15 token/s difference—but as the context lengthens, the text generation difference somehow gets smaller and smaller. Another strange thing is that somehow the prompt processing is better on RPC over 50 Gbps, even better than the single machine. That's very interesting to see. During inference, I observed that the network was never used at more than maybe \~100 Mbps or 10 MB/s most of the time, suggesting the gain might not come from bandwidth—maybe latency? But I don't have a way to prove what exactly is affecting the performance gain from 2.5 Gbps to 10 Gbps IP over Thunderbolt. Here is the llama-bench command I'm using: ./llama-bench -m ./gpt-oss-120b-mxfp4-00001-of-00003.gguf -d 0,512,2048 -n 128,512 -o md --rpc <IP:PORT> So the result is pretty clear: you don't need a fancy IB card to gain usable results on llama.cpp with Strix Halo. At least until RCCL supports Strix Halo, I think.
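The "bandwidth barely matters" observation has a simple back-of-envelope explanation. With a layer split, roughly one hidden-state vector crosses the link per token, so the bandwidth floor is tiny and round-trip latency dominates. A rough sketch (the hidden size is my assumed figure for gpt-oss-120b, not from the post):

    hidden, tps = 2880, 35                     # assumed hidden size, observed t/s
    floor_mb_s = hidden * 2 * tps / 1e6        # fp16 activation per token
    print(f"{floor_mb_s:.2f} MB/s minimum")    # ~0.2 MB/s
    # Even the ~10 MB/s the OP observed (RPC overhead included) is <1% of
    # a 50 Gbps link, consistent with TB and MLX5 performing nearly the same.

Under this reading, the 2.5 Gbps to 10 Gbps gain likely comes from lower per-message latency on the faster links, not from extra bandwidth headroom.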
2025-11-10T03:42:05
https://i.redd.it/ezjtolwnoc0g1.jpeg
Hungry_Elk_3276
i.redd.it
1970-01-01T00:00:00
0
{}
1ot3lxv
false
null
t3_1ot3lxv
/r/LocalLLaMA/comments/1ot3lxv/i_tested_strix_halo_clustering_w_50gig_ib_to_see/
false
false
default
524
{'enabled': True, 'images': [{'id': 'ezjtolwnoc0g1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ezjtolwnoc0g1.jpeg?width=108&crop=smart&auto=webp&s=a55c6f7c9c379c62b1ed3d785bb03962cd11f187', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ezjtolwnoc0g1.jpeg?width=216&crop=smart&auto=webp&s=17bb45ffbdfe28ac84de8092bd670e46df3e7e33', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ezjtolwnoc0g1.jpeg?width=320&crop=smart&auto=webp&s=ab1b8fafc2d2dc8b4c1064ed88a92af60417d80b', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/ezjtolwnoc0g1.jpeg?width=640&crop=smart&auto=webp&s=2d9f058ee27bdab9923ee3d40ab306fea5558c71', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/ezjtolwnoc0g1.jpeg?width=960&crop=smart&auto=webp&s=27072f853568ef91377848057a72aca4912ae3c2', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/ezjtolwnoc0g1.jpeg?width=1080&crop=smart&auto=webp&s=cd66c36b10929a41e174e21209576b4e3f779138', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/ezjtolwnoc0g1.jpeg?auto=webp&s=ce2311d590547a3d7360df3b9527b21fe17b0f2d', 'width': 4032}, 'variants': {}}]}
Can I use Qwen 3 Coder 30B with an M4 MacBook Pro 48GB?
3
Also, are there any websites where I can check token rates for each MacBook or popular models? I'm planning to buy the model below; I just wanted to check how the performance will be:

* Apple M4 Pro chip with 12‑core CPU, 16‑core GPU, 16‑core Neural Engine
* 48GB unified memory
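A rough fit check, not a benchmark (the bandwidth figure is Apple's published M4 Pro spec; the Q4 sizing is an approximation):

    # Qwen3-Coder-30B-A3B: 30B total params, ~3B active per token.
    params, active = 30e9, 3e9
    q4_bytes = 0.5                                 # ~0.5 bytes/param at Q4
    weights_gb = params * q4_bytes / 1e9           # ~15 GB of weights
    m4_pro_bw = 273e9                              # bytes/s, M4 Pro spec
    tps_ceiling = m4_pro_bw / (active * q4_bytes)  # bandwidth-bound ceiling
    print(f"{weights_gb:.0f} GB weights, ~{tps_ceiling:.0f} t/s ceiling")

So the Q4 model fits in 48 GB with plenty of room for context; expect real decode speed well below that theoretical ceiling, but comfortably usable.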
2025-11-10T03:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1ot301h/can_i_use_qwen_3_coder_30b_with_a_m4_macbook_pro/
thereisnospooongeek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot301h
false
null
t3_1ot301h
/r/LocalLLaMA/comments/1ot301h/can_i_use_qwen_3_coder_30b_with_a_m4_macbook_pro/
false
false
self
3
null
In Defense of Spark
1
[removed]
2025-11-10T02:48:43
https://www.reddit.com/r/LocalLLaMA/comments/1ot2jez/in_defense_of_spark/
Simusid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot2jez
false
null
t3_1ot2jez
/r/LocalLLaMA/comments/1ot2jez/in_defense_of_spark/
false
false
self
1
null
[Research] 31 % perplexity drop on 8.4 M transformer model using a lightweight periodic regulator — looking for replication on stronger GPUs
30
Hey everyone,

I ran a controlled training experiment on an 8.4M-parameter transformer model and observed a consistent **31% perplexity reduction** compared to baseline after 2,000 steps.

📊 Full metrics & logs: [https://limewire.com/d/j7jDI#OceCXHWNhG](https://limewire.com/d/j7jDI#OceCXHWNhG)

**Setup**

- Model: small LM (~8.4M params)
- GPU: RTX 5070
- Optimizer: AdamW, lr = 2e-6, warmup = 200, grad-clip = 1.0
- Sequence = 256, batch = 8 × GA 4
- Seed = 41
- Modification: added a compact periodic regulator in the optimizer update (≈ 0.07% extra params)

**Result**

| Metric | Baseline | Regulated | Δ |
|---|---|---|---|
| eval CE | 6.731 | 6.360 | −0.371 |
| eval PPL | 838.17 | **578.49** | **−31%** |
| stability β | — | 0.91 | — |

Same data, same seed, no architecture changes. The effect is reproducible and stable.

**Why post here**

Looking for:

- community replication on larger GPUs (A100 / L40S / H100)
- discussion about scaling behaviour and scheduler-level interventions
- any pointers to similar experiments you may have seen

I'll share the Python scripts and configs (ready-to-run) with anyone who wants to test. The full repo isn't public yet but will follow once results are replicated. Thanks for reading and for any feedback!
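Since the regulator itself isn't published, here is one purely speculative reading of "a compact periodic regulator in the optimizer update": a small sinusoidal modulation of the AdamW learning rate. Note the post's "≈ 0.07% extra params" suggests a learned component that this sketch omits.

    # Speculative sketch only; not the author's method.
    import math

    def apply_periodic_regulator(optimizer, step, base_lr=2e-6,
                                 amp=0.05, period=1000):
        # Modulate the LR around its base value with a sine wave.
        lr = base_lr * (1.0 + amp * math.sin(2.0 * math.pi * step / period))
        for group in optimizer.param_groups:
            group["lr"] = lr

    # In the training loop, after loss.backward():
    #   apply_periodic_regulator(opt, step)
    #   opt.step(); opt.zero_grad()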
2025-11-10T02:42:30
https://www.reddit.com/r/LocalLLaMA/comments/1ot2eqd/research_31_perplexity_drop_on_84_m_transformer/
freeky78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot2eqd
false
null
t3_1ot2eqd
/r/LocalLLaMA/comments/1ot2eqd/research_31_perplexity_drop_on_84_m_transformer/
false
false
self
30
null
Exploring instrumentation and local LLMs: seeking advice on an on-premise setup with 4× A100
0
Hello everyone,

I'm an IT Director and I've been working more and more with AI instrumentation and open-source tools. Today I run practically everything on Claude Code and Cursor, but over the last few months I've started digging deeper into running models locally and understanding what it really takes to get performance and flexibility without depending 100% on the cloud.

I recently bought a MacBook M3 Max (48 GB RAM / 40 cores) to test models locally, but I realized that even with this machine I can't reach the performance and the level of "coder instrumentation" I'm after: that complete *edit / search / plan / write / execute* flow that Claude Code does perfectly. Out of curiosity (and necessity), I scraped the Claude Code interface and built a functional clone in Go, where I can already edit files, create new ones, and integrate instrumentation tools. For now I use the Anthropic API (Claude Sonnet 4.5), but I'm preparing something bigger.

# Planned configuration (on-premise)

I'm putting together local infrastructure for testing, with the idea of simulating everything first on AWS or GCP and then buying the physical hardware. The planned configuration would be:

* 4× NVIDIA A100 80 GB
* 2× AMD EPYC 7713 (64 cores each)
* 8× 128 GB DDR4 3200 MHz RAM (total ≈ 1 TB)
* Supermicro H12-DSI-NT6 motherboard (dual socket + 6× NVMe)
* Supermicro 4U chassis
* 2× 4 TB NVMe SSDs
* Redundant PSU + 100 Gb Mellanox networking

# Goal

I want to build an on-premise infrastructure capable of:

* Running coding and instrumentation models with long contexts (128k tokens or more)
* Supporting 10 to 20 concurrent developers on a local cluster
* Running inference and continuous agent testing without depending on the cloud
* Integrating tools (editing, execution, analysis) directly into the developer environment

# What I'd like to hear from the community

1. Has anyone here built a similar setup, or simulated an A100 cluster locally via AWS/GCP?
2. Are there open-source models truly optimized for coding/instrumentation that you'd recommend testing before the investment?
3. For those already running on-premise setups, is it worth going straight to bare metal with A100s, or better to use H100/B200 in the cloud until everything is validated?
4. Any tips on orchestration frameworks (vLLM, Text-Generation-Inference, Ray, etc.) that worked well with multiple GPUs?

I want to hear from people who've been through this process, both building the infrastructure and validating coder-aware models. Any tip, insight, or even feedback on the viability of this setup is very welcome.
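On question 4, a minimal sketch of one of the frameworks mentioned (vLLM with tensor parallelism across the four A100s); the model name and context length are placeholders, not recommendations:

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen2.5-Coder-32B-Instruct",  # example coder model
        tensor_parallel_size=4,                   # one shard per A100
        max_model_len=131072,                     # 128k context target
    )
    out = llm.generate(
        ["Write a Go function that reverses a string."],
        SamplingParams(max_tokens=256, temperature=0.2),
    )
    print(out[0].outputs[0].text)

For 10-20 concurrent developers, vLLM's continuous batching is the main reason to prefer it over single-request runtimes in this scenario.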
2025-11-10T02:29:22
https://www.reddit.com/r/LocalLLaMA/comments/1ot24zi/explorando_instrumentação_e_llms_locais_buscando/
OnionOld5681
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot24zi
false
null
t3_1ot24zi
/r/LocalLLaMA/comments/1ot24zi/explorando_instrumentação_e_llms_locais_buscando/
false
false
self
0
null
Just found out Notion gives access to AI + Business plan for 3 months
0
I was testing Notion for my startup workspace when I noticed they currently give 3 months of Notion Business + Notion AI for free, but it's specifically for startups that sign up using a business email (not a Gmail or personal one). All I did was create an account with my startup email, set up the workspace, and get instant access to the Business plan and full AI features without paying anything. I've been using it for documentation, project tracking, and content generation; the built-in AI assistant is surprisingly good at summarizing notes and writing drafts. Definitely worth it if you're an early-stage founder exploring AI productivity tools.
2025-11-10T02:16:18
https://www.reddit.com/r/LocalLLaMA/comments/1ot1vhu/just_found_out_notion_gives_access_to_ai_business/
freebie1234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ot1vhu
false
null
t3_1ot1vhu
/r/LocalLLaMA/comments/1ot1vhu/just_found_out_notion_gives_access_to_ai_business/
false
false
self
0
null
We built LiteAPI — a way to use GPT-4, Claude, and Gemini for 50% less
0
Hey all 👋 LLM API costs are getting out of hand, especially when fine-tuning or doing large-scale inference. So we built **LiteAPI** — a platform that provides **OpenAI, Anthropic, and Gemini credits for half price**. Same APIs. Same models. Just cheaper access. It’s mainly for devs/researchers who are burning through thousands of tokens a day and want to cut spend without changing their workflow. Would love your thoughts on whether this kind of service would help with your LLM projects or if you see any gaps we should fill.
2025-11-10T00:17:38
https://www.reddit.com/r/LocalLLaMA/comments/1oszcmg/we_built_liteapi_a_way_to_use_gpt4_claude_and/
Frosty_Conclusion100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oszcmg
false
null
t3_1oszcmg
/r/LocalLLaMA/comments/1oszcmg/we_built_liteapi_a_way_to_use_gpt4_claude_and/
false
false
self
0
null
What's the current best long-form TTS workflow (≤12 GB VRAM) with Elevenlabs-like audiobook output?
1
I’m looking for a local TTS workflow for long-form narration (articles, book chapters) that runs on a machine with ≤12 GB VRAM (CPU-only options welcome). Features I'm looking for: 1.) Low glitch/dropout rate for the model - no babbling or minute-long pauses. Sentence/paragraph-level chunking with automatic retry. 2.) Multi-speaker/character support - can automatically assign distinct voices per speaker/role. 3.) Optionally, some element of context awareness to maintain voice and pacing across paragraphs. 4.) Ideally a simple 'paste > chapter/article-length audio' flow Naturalness and low error rate are more important than sheer quality. Pointers to ready-made workflows/scripts are appreciated, as are model or component recommendations.
2025-11-10T00:01:54
https://www.reddit.com/r/LocalLLaMA/comments/1osz0bf/whats_the_current_best_longform_tts_workflow_12/
adeadbeathorse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osz0bf
false
null
t3_1osz0bf
/r/LocalLLaMA/comments/1osz0bf/whats_the_current_best_longform_tts_workflow_12/
false
false
self
1
null
BERTs that chat: turn any BERT into a chatbot with dLLM
363
**Motivation**: I couldn’t find a good “Hello World” tutorial for training diffusion language models, so I tried finetuning a tiny BERT to make it *talk*—and it turned out more fun than I expected. **TLDR**: With a small amount of open-source instruction data, a standard BERT can gain conversational ability. Specifically, a finetuned [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large), with a similar number of parameters, performs close to [Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B). All training and evaluation code, along with detailed results and comparisons, is available in our [W&B report](https://api.wandb.ai/links/asap-zzhou/101h5xvg) and our [documentation](https://github.com/ZHZisZZ/dllm/tree/main/examples/bert). [**dLLM**](https://github.com/ZHZisZZ/dllm): The BERT chat series is *trained, evaluated and visualized* with [dLLM](https://github.com/ZHZisZZ/dllm) — a unified library for training and evaluating diffusion language models. It brings transparency, reproducibility, and simplicity to the entire pipeline, **serving as an all-in-one, tutorial-style resource.**
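To make the core move concrete, here is a toy illustration (not the dLLM library API) of generating text from a masked LM by iterative unmasking, the basic idea behind diffusion-style decoding with BERTs; plain transformers, greedy confidence-based schedule:

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    name = "answerdotai/ModernBERT-large"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForMaskedLM.from_pretrained(name).eval()

    prompt, gen_len = "Question: What is the capital of France? Answer:", 8
    ids = tok(prompt, return_tensors="pt").input_ids
    x = torch.cat([ids, torch.full((1, gen_len), tok.mask_token_id)], dim=1)

    with torch.no_grad():
        for _ in range(gen_len):              # unmask one token per step
            logits = model(x).logits
            masked = (x == tok.mask_token_id).nonzero(as_tuple=True)
            probs = logits[masked].softmax(-1)
            conf, pred = probs.max(-1)
            best = conf.argmax()              # most confident masked slot
            x[masked[0][best], masked[1][best]] = pred[best]

    print(tok.decode(x[0], skip_special_tokens=True))

A base (non-finetuned) BERT will produce mush here; the point of the linked work is that light instruction finetuning makes this loop actually converse.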
2025-11-09T23:34:15
https://v.redd.it/47030knxcb0g1
Individual-Ninja-141
v.redd.it
1970-01-01T00:00:00
0
{}
1osydym
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/47030knxcb0g1/DASHPlaylist.mpd?a=1765323272%2CYTc1MjViMDY4ZTVhNzViYmJlYWNjMjE1NWYyYmRiYWRjYWI4Zjc4NzhhMDEzMjMzYzc5OWMzMmY3MmY3ZWFkZg%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/47030knxcb0g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 732, 'hls_url': 'https://v.redd.it/47030knxcb0g1/HLSPlaylist.m3u8?a=1765323272%2CNzY2ZTg2NTQxZjQ5ZTg0NjhiOTAwMTEzMzE3NmUwNGZjYWE5ZmNlNTU1ZTlkMDE5YjIxNzA3NjMyNTk1NGMyMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/47030knxcb0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1osydym
/r/LocalLLaMA/comments/1osydym/berts_that_chat_turn_any_bert_into_a_chatbot_with/
false
false
https://external-preview…6d5c9ef4d5c01af2
363
{'enabled': False, 'images': [{'id': 'aXQyaXFqbnhjYjBnMcUkZmGot1jxvd3JGKHlRvmTnIHsjUGTXFsDE1YCFtzY', 'resolutions': [{'height': 109, 'url': 'https://external-preview.redd.it/aXQyaXFqbnhjYjBnMcUkZmGot1jxvd3JGKHlRvmTnIHsjUGTXFsDE1YCFtzY.png?width=108&crop=smart&format=pjpg&auto=webp&s=5d32f4a5899321df995e31b84c00d692d9fc3d51', 'width': 108}, {'height': 219, 'url': 'https://external-preview.redd.it/aXQyaXFqbnhjYjBnMcUkZmGot1jxvd3JGKHlRvmTnIHsjUGTXFsDE1YCFtzY.png?width=216&crop=smart&format=pjpg&auto=webp&s=3c46942c88c642b8d3c495c48d8d005b98615483', 'width': 216}, {'height': 325, 'url': 'https://external-preview.redd.it/aXQyaXFqbnhjYjBnMcUkZmGot1jxvd3JGKHlRvmTnIHsjUGTXFsDE1YCFtzY.png?width=320&crop=smart&format=pjpg&auto=webp&s=85500abe7f61b6d0b6702f1b79ec539cba3eb84d', 'width': 320}, {'height': 650, 'url': 'https://external-preview.redd.it/aXQyaXFqbnhjYjBnMcUkZmGot1jxvd3JGKHlRvmTnIHsjUGTXFsDE1YCFtzY.png?width=640&crop=smart&format=pjpg&auto=webp&s=867a3343c374150af98b6deb7c6470425a90389a', 'width': 640}], 'source': {'height': 876, 'url': 'https://external-preview.redd.it/aXQyaXFqbnhjYjBnMcUkZmGot1jxvd3JGKHlRvmTnIHsjUGTXFsDE1YCFtzY.png?format=pjpg&auto=webp&s=bf2d46956e5476f72fdc831e1065035c102427e1', 'width': 862}, 'variants': {}}]}
built an open-source, AI-native alternative to n8n that outputs clean TypeScript code workflows
31
hey everyone, Like many of you, I've used workflow automation tools like n8n, Zapier etc. They're ok for simpler flows, but I always felt frustrated by the limitations of their proprietary JSON-based nodes. Debugging is a pain, and there's no way to extend into code. So, I built **Bubble Lab**: an open-source, TypeScript-first workflow automation platform. Here's how it's different: 1/ **prompt to workflow:** the TypeScript infra allows for deep compatibility with AI, so you can build/amend workflows with natural language. Our agent orchestrates our composable **bubbles** (integrations, tools) into a production-ready workflow 2/ **full observability & debugging**: Because every workflow is compiled with end-to-end type safety and has built-in traceability with rich logs, you can actually see what's happening under the hood 3/ **real code, not JSON blobs**: Bubble Lab outputs clean, production-ready TypeScript. This means you can own it, extend it in your IDE, add it to your existing CI/CD pipelines, and run it anywhere. No more being locked into a proprietary format. check out our repo (stars are hugely appreciated!), and lmk if you have any feedback or questions!!
2025-11-09T23:11:26
https://github.com/bubblelabai/BubbleLab
Informal-Salad-375
github.com
1970-01-01T00:00:00
0
{}
1osxv3y
false
null
t3_1osxv3y
/r/LocalLLaMA/comments/1osxv3y/built_an_opensource_ainative_alternative_to_n8n/
false
false
default
31
{'enabled': False, 'images': [{'id': 'XQlwNsoGJe2oyHLPY--H7xoqHWnheK5nDgKNk7rhxzQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XQlwNsoGJe2oyHLPY--H7xoqHWnheK5nDgKNk7rhxzQ.png?width=108&crop=smart&auto=webp&s=86f5ac56ff85be6094d300674ebaa7b5cf776464', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XQlwNsoGJe2oyHLPY--H7xoqHWnheK5nDgKNk7rhxzQ.png?width=216&crop=smart&auto=webp&s=01879bbe4a41cea49bb6ee17bb163316d3d09aa0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XQlwNsoGJe2oyHLPY--H7xoqHWnheK5nDgKNk7rhxzQ.png?width=320&crop=smart&auto=webp&s=2d2e1ecd54fd2f6f3ea8b8ddabb39b5beb9376ce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XQlwNsoGJe2oyHLPY--H7xoqHWnheK5nDgKNk7rhxzQ.png?width=640&crop=smart&auto=webp&s=941b214538393eaa8fca52a4ada2508045b1ba68', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XQlwNsoGJe2oyHLPY--H7xoqHWnheK5nDgKNk7rhxzQ.png?width=960&crop=smart&auto=webp&s=8a9058b7f27548592f18ada491a3c31029caf188', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XQlwNsoGJe2oyHLPY--H7xoqHWnheK5nDgKNk7rhxzQ.png?width=1080&crop=smart&auto=webp&s=9d1e9232326bd6347aa4cf81fbed6f1d1ffba950', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XQlwNsoGJe2oyHLPY--H7xoqHWnheK5nDgKNk7rhxzQ.png?auto=webp&s=e816d2588bb4a21967390bbba72df7b92d473316', 'width': 1200}, 'variants': {}}]}
How to get over 200 tok/s on full Kimi K2 Thinking (or any other big MoE model) on cheapish hardware - llama.cpp dev pitch
0
Today, Chat GPT 5 Pro, Grok 4 and Gemini 2.5 Pro decided to work together to write a pitch for a concrete, end-to-end design (router+KV on a NVIDIA RTX 5090, expert farm on 3× AMD PRO R9700, RAM cache, and NVMe “object store”), for a modified llama.cpp version that would allow to mix consumer / prosumer video cards from AMD, NVIDIA and INTEL using vulkan backend for running large MoE LLMs on cheapish hardware. What do you think? # Project K2T-HomeLab # Mixed-Vendor Vulkan Inference for a 1T-param MoE (RTX 5090 + 3× R9700) — Dev Team Pitch We’re proposing a practical, open, mixed-vendor inference stack that runs **Kimi K2 Thinking**—a **1-trillion-parameter Mixture-of-Experts model** with **256k context**, **top-8 routing (of 384 experts per MoE layer)**, **one shared expert**, and **INT4 QAT**—on **consumer/prosumer GPUs** using a **Vulkan backend in llama.cpp**. The core idea is to treat the NVIDIA RTX 5090 as the **router + attention + KV** engine and use **three AMD Radeon AI PRO R9700s** as an **expert farm**, coordinated by an **offline co-occurrence–driven static placement** that keeps \~**90–98%** of routed tokens on a single AMD card per MoE layer. Model facts here—architecture, context length, expert counts, and INT4-native positioning—are from the K2 releases. Why now? K2 already demonstrates production-grade agentic features (long tool chains, parser support) and ships deployment recipes for mainstream inference engines (vLLM, SGLang, KTransformers). Those point to today’s **reference cluster** setups (e.g., TP=8 on H200/L20 era GPUs) and show published prefill/decode baselines we can improve on for single-node hobbyist rigs. Our pitch: a buildable path that delivers strong throughput and long-context usability on \~€10–14k hardware, without exotic interconnects. We make conservative kernel choices (INT4 experts with FP16 accum), rely on **host-bounce PCIe** (cross-vendor P2P is unreliable), and hide I/O with **prefetch + residency**. # Hardware & Throughput Side-by-Side |Item|**K2 Deployment Example (ground truth)**|**Our Mixed-Vendor Single-Node (estimate)**| |:-|:-|:-| |**GPUs**|8× **NVIDIA L20** (TP=8) — official KTransformers+SGLang example with published throughput.|1× **RTX 5090 32 GB** (router + attention + KV) **+ 3× AMD Radeon AI PRO R9700 32 GB** (expert farm).| |**CPU**|**2× Intel Xeon 6454S** (heterogeneous CPU+GPU deployment in the KTransformers example).|**1× AMD Threadripper Pro 7965WX** (24C) or higher (WRX90 platform).| |**System RAM**|Not specified in K2 doc for the L20 run; typical dual-socket server: **256–512 GB**.|**128–256 GB** ECC (min **128 GB**; budget \~**96 GB** RAM cache for experts).| |**SSDs (capacity & count)**|Not specified.|**4–8× NVMe Gen4 x4**, **4–8 TB each** (min **16 TB** total; prefer **32 TB**) for “object-store” experts.| |**Motherboard / PCIe**|Dual-socket server board; 8 GPU slots via risers/switches (vendor design).|**WRX90 (TR Pro)** with **≥4× PCIe 5.0 x16** full-length (one per GPU) **+ 6–8× PCIe Gen4 x4** for NVMe (onboard M.2 + U.2/HBA). 
**Minimum lanes:** 4×x16 (GPUs) + 6×x4 (NVMe) ≈ **88 lanes**.| |**Context window**|**256k** (per model spec).|**256k**; **512k+** feasible with q4 KV paging (engineering option).| |**Prefill throughput**|**≈ 577.7 tok/s** (37-way concurrency) on **8× L20 + 2× 6454S**.|**≈ 0.85–1.15×10³ tok/s** (short context) via speculative draft on 5090; **estimate** pending bring-up.| |**Decode throughput**|**≈ 45.9 tok/s** (37-way concurrency) on **8× L20 + 2× 6454S**.|**≈ 400–600 tok/s** (short), **≈ 250–350 tok/s** at **256k**; **estimate** from back-of-the-envelope model.| |**Power / form factor**|Datacenter server(s) (varies by vendor; L20 is server GPU).|Single tower or 4U workstation; **\~1.5–1.9 kW** under load (PSU ≥ 1600 W).| |**Estimated price (complete)**|**≈ €60k–€90k** total (8× L20 + 2× Xeon 6454S servers, RAM, storage; market-dependent).|**≈ €10k–€14k** total (5090 + 3× R9700 + WRX90 board + TR Pro CPU + 128–256 GB ECC + 16–32 TB NVMe; market-dependent).| |**Notes**|Official run & metrics published in K2 docs (KTransformers+SGLang).|Our figures are **engineering targets**; validate with bring-up & profiling.| # What We’re Building (in one paragraph) A modified **llama.cpp (Vulkan)** runtime that: 1. runs **MLA attention + router + shared experts + KV** on the **RTX 5090**, 2. dispatches **top-k experts** to **3× R9700** using device-first packing, 3. performs **on-AMD fused FFN + gated sum** so only one FP16 vector per used AMD comes back, 4. keeps hot experts resident (VRAM/RAM) and streams cold shards from **NVMe “object store”**, 5. uses an **offline co-occurrence placement** (from traces) plus **tiny micro-replication** to minimize cross-device traffic. # Ground Truth: Kimi K2 Thinking (the model we’re targeting) * **Architecture:** Mixture-of-Experts, **1T parameters**, **61 layers (60 MoE + 1 dense)**, **384 experts per MoE layer**, **top-8 experts per token**, **1 shared expert** per MoE layer, **attention hidden dim 7168**, **SwiGLU**, **256k context**, and **MLA attention**. These are the official K2 Thinking specs. * **INT4 Native (QAT):** K2 reports **native INT4** via post-training QAT for MoE components, designed to keep quality while improving gen speed and lowering memory. Checkpoints ship in **compressed-tensors** format; int4 can be unpacked to higher precision if needed. * **Reference deployments:** K2 ships examples for **vLLM** and **SGLang** (TP=8 on H200-class) and documents a **KTransformers+SGLang** mixed CPU+GPU setup with published **throughput** (e.g., \~**577.7 tok/s prefill** and **\~45.9 tok/s decode** at 37-way concurrency on 8× L20 + Intel CPUs). These are useful baselines for our target deltas. We build on this: same model, same agentic parser/tooling surface (K2 parser names), different hardware and runtime. # 4-Tier System Design # Tier-1 — RTX 5090 (32 GB): Router + MLA Attention + KV + Shared Experts * **Runs:** LayerNorms, embeddings, final head; **MLA attention** (Q/K/V, latent projections), **router** (top-8 gating), **shared expert** (one per MoE layer), and **aggregation** of expert returns. We keep **KV cache** here with quant options (fp16/q8/q4) to scale context. K2 confirms MLA and 256k context; KV sizing is our engineering choice (we’ll support q4/q8 knobs). * **Why 5090 for attention?** Long-context decode is attention-heavy; keeping KV + attention local avoids round-trips and unpredictable cross-vendor P2P. * **Speculative decoding (optional):** Small draft model path (disable beyond large contexts) to accelerate short responses. 
# Tier-2 — 3× AMD Radeon AI PRO R9700 (32 GB each): Expert Farm * **Runs:** MoE FFNs for routed experts in **INT4 weights / FP16 accum**, with **on-device gated sum**. Return **one FP16 d\_model vector per used AMD**. * **Static placement:** Offline **co-occurrence graph** (from representative traces) assigns experts to one of three AMD devices per layer; micro-replicate 1–3% “bridge” experts to improve same-device hits. * **Goal:** Most tokens hit **one AMD** per MoE layer; rare cases spill to 2 (or 3) GPUs. # Tier-3 — CPU RAM (NUMA-local promotion cache) * **Holds:** Promoted experts (warm set), pinned staging buffers, residency bitmaps, heatmaps, and a look-ahead prefetch queue. # Tier-4 — NVMe Object Store (coldest) * **Layout:** One expert per file (or small bundles), O\_DIRECT + `io_uring`, 2–8 MB reads, checksums. * **App-sharded:** Spread top-N experts across drives; replicate top 5% to reduce tail reads. # Offline Step: Co-occurrence–Driven Static Placement (explained) **Why:** MoE routes a token’s activations to multiple experts. If those experts live on **one AMD device**, we do **one dispatch** and one return. If they’re split, we multiply queues, copies, and latency. **How:** 1. From trace logs (train/finetune eval or telemetry), compute for each MoE layer ℓ a **co-occurrence graph** of experts (edge weight \~ how often two experts fire together), optionally weighted by gate products. 2. **Contract** high-weight cliques to supernodes. 3. **3-way partition** the graph into device groups with **capacity constraints (VRAM + MACs)**. Greedy DSATUR or KL-style refinements work. 4. **Micro-replicate** 1–3% high-betweenness experts to a second AMD for same-device fallback. 5. Emit **GGUF metadata**: per-layer device map, replication hints, placement metrics. **Result:** The router can **pack tokens by device first**; we get a **high same-device rate** in steady state with simple, predictable scheduling. # Runtime Scheduler & Data Flow For each MoE layer: 1. **5090** does LN + router (top-8 + shared expert on the 5090). 2. Pack per-device activations (e.g., FP8 on the wire), **H2D host-bounce** to AMD staging rings (NUMA-local pinned buffers). 3. **AMD** runs fused dequant→SwiGLU FFN→gated sum; emit one FP16 vector per used AMD. 4. **D2H** those vectors; **5090** aggregates with shared expert + residual, then moves to MLA attention for the next layer. 5. **Overlap**: While AMDs compute layer ℓ experts, 5090 prefetches layer ℓ+1/ℓ+2 expert bundles, and starts attention for ℓ+1. **Important:** We assume **no reliable cross-vendor P2P**. All traffic is **GPU↔host↔GPU** via pinned buffers. We amortize with **microbatches (≥32–64 tokens)** and **timeline semaphores**. # Performance Model (transparent assumptions) These are **engineering estimates** to size buffers and queues. Concrete numbers will come from profiling. * **Model constants** (from K2): 61 layers (60 MoE), d\_model=7168, 384 experts/layer, top-8 selected, 1 shared expert, 256k context, MLA attention. # KV footprint (MLA) K2 states MLA attention with 256k context; latent dimension isn’t published. We plan to expose `--kv-quant {fp16|q8|q4}` and treat MLA latent as a **tunable assumption** (e.g., 512–1024; we size for \~768). This gives practical KV budgets on 32 GB (q4/q8/fp16 options) while staying faithful to K2’s MLA design. # PCIe traffic (per token, per layer, average across 1.1–1.2 devices/token) * **H2D:** \~**7 KB** (7168 dims × 1 B if FP8 on wire). * **D2H:** \~**14 KB** (7168 × 2 B FP16). 
* \~**21 KB/layer/token** × 60 layers × \~**1.15 devices/token** ≈ **\~1.45 MB/token**. * At **500 tok/s**: **\~725 MB/s** aggregate—well under PCIe 5.0 x16 practical bandwidth. (*Assumes good batching and minimal cross-device spill.*) # Compute balance * **Experts (AMDs):** INT4 weight GEMMs with FP16 accum; with fused dequant and on-device reduce, three R9700s should remain **compute-bound** on experts under typical concurrency. * **Attention (5090):** Long context shifts load to MLA attention and KV movement; keeping these on the 5090 avoids cross-vendor synchronization and preserves throughput. # Baselines for context K2’s sample deployments report (different hardware & engines): **\~577.7 tok/s prefill** and **\~45.9 tok/s decode** at 37-way concurrency (8× L20 + 2× Intel). This is a **useful yardstick**, not our target hardware. We expect higher single-node decode on short context with speculative drafting and strong batching, tapering as context approaches 256k. # Numerical Stability & Quality * **INT4 experts:** K2 is **natively INT4 via QAT** for MoE; we will keep FFN in INT4/FP16-accum and validate parity on sanity sets (perplexity and a few K2 public benchmarks). * **Router bias ε:** We only bias **near-ties** (small logit deltas) to prefer resident experts; we’ll log pre/post gate stats and run small A/Bs to ensure quality holds. * **Shared expert on 5090:** Always resident (1 per layer), reducing cross-traffic and stabilizing outputs. # Implementation Plan (6–8 weeks, low-risk increments) 1. **Bring-up (Weeks 1–2):** * Vulkan backbone on 5090: MLA attention path + router + shared expert + dense layer. * KV q4/q8/fp16 options and staging buffers. 2. **Single-AMD path (Week 3):** * One R9700: INT4 FFN kernels (dequant-in-register), on-device gated sum, indirect dispatch. 3. **Multi-AMD + static placement (Week 4):** * Offline co-occurrence placer + GGUF metadata; device-first packing; basic prefetch FIFO. 4. **Storage & caching (Week 5):** * NVMe object store + RAM promotion cache, LRU/LFU + look-ahead (2–4 layers), io\_uring. 5. **Replicas + speculative + parsers (Week 6):** * Micro-replication, router ε-bias; speculative draft; K2 tool/reasoning parsers wired. (K2’s docs specify parser names and that they’re integrated in vLLM/sglang—parity at the API level.) 6. **Hardening (Weeks 7–8):** * Autotuning (batching/queueing), telemetry, kv-pager experiments for >256k, correctness runs. # Key Risks & Mitigations * **Cross-vendor P2P:** Treat as **unsupported**; architect around host-bounce with pinned rings and large microbatches. * **Vulkan feature variability:** Check for `shader_integer_dot_product` and cooperative-matrix extensions; provide FP16 fallback. * **Placement drift:** Re-run offline placer periodically on fresh traces; use replicas to smooth distribution changes. * **Disk tail latency:** Prefetch bundles two layers ahead; replicate top 5% cold-miss culprits across drives. 
# Developer Experience & Knobs

    --model /path/to/Kimi-K2-Thinking-int4    # K2 native INT4 weights (compressed-tensors)
    --moe-device-map auto|tuned.json          # from offline placer output
    --kv-quant {fp16|q8|q4}                   # MLA KV cache precision (engineering option)
    --resident-vram-gb 14
    --ram-cache-gb 96
    --prefetch-layers 3
    --router-bias-epsilon 0.10
    --capacity-factor 1.4
    --spec-draft small-int4
    --spec-threshold 64k
    --nvme-drives 4
    --shard-compress lz4
    --tool-call-parser kimi_k2                # matches K2 parser names in current engines
    --reasoning-parser kimi_k2

# Why This Matters

K2 shows that **agentic, long-horizon reasoning** with **INT4 efficiency** and **256k context** is here and usable today. The “last mile” for a massive community is an **affordable**, **portable**, **mixed-vendor** inference stack. By grounding our design in **static routing**, **predictable host-bounce**, and **tight Vulkan kernels**, we make a 1T-param MoE feel like a friendly 70B dense model—on hardware people can actually buy. Let’s ship it. 🚀

**Notes on sources:** All K2 model properties (MoE layout, 1T params, 61 layers, 384 experts, top-8, MLA, 256k, INT4 QAT) are quoted from the K2 README. Baseline deployment modes and the example prefill/decode numbers come from the K2 deployment guide and KTransformers notes. Our throughput targets and PCIe/compute estimates are clearly labeled as **engineering assumptions** to be validated during bring-up.
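The offline co-occurrence placement step in the pitch is concrete enough to sketch. A hedged toy version (my own illustration, not project code; the trace format and greedy heuristic are assumptions):

    from collections import Counter
    from itertools import combinations
    from math import ceil

    def place_experts(traces, n_experts=384, n_devices=3):
        # traces: list of top-k expert-id lists, one per routed token.
        cooc = Counter()
        for topk in traces:
            for a, b in combinations(sorted(set(topk)), 2):
                cooc[(a, b)] += 1
        weight = Counter()
        for (a, b), w in cooc.items():
            weight[a] += w
            weight[b] += w
        cap = ceil(n_experts / n_devices)
        device, load = {}, [0] * n_devices
        # Greedy: heaviest experts first; each goes to the open device it
        # co-fires with most (capacity-constrained 3-way partition).
        for e in sorted(range(n_experts), key=lambda e: -weight[e]):
            affinity = [0] * n_devices
            for (a, b), w in cooc.items():
                other = b if a == e else a if b == e else None
                if other in device:
                    affinity[device[other]] += w
            open_devs = [d for d in range(n_devices) if load[d] < cap]
            best = max(open_devs, key=lambda d: affinity[d])
            device[e] = best
            load[best] += 1
        return device

Micro-replication, gate-product weighting, and the MAC-balancing term the pitch mentions are omitted for brevity; a real placer would include all three.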
2025-11-09T22:40:56
https://www.reddit.com/r/LocalLLaMA/comments/1osx4vk/how_you_get_over_200_toks_on_full_kimi_k2/
_serby_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osx4vk
false
null
t3_1osx4vk
/r/LocalLLaMA/comments/1osx4vk/how_you_get_over_200_toks_on_full_kimi_k2/
false
false
self
0
null
Qwen3-VL Now EXL3 Supported
45
⚠️ Requires [ExLlamaV3 v0.0.13](https://github.com/turboderp-org/exllamav3)

[https://huggingface.co/turboderp/Qwen3-VL-8B-Instruct-exl3](https://huggingface.co/turboderp/Qwen3-VL-8B-Instruct-exl3)
[https://huggingface.co/turboderp/Qwen3-VL-30B-A3B-Instruct-exl3](https://huggingface.co/turboderp/Qwen3-VL-30B-A3B-Instruct-exl3)
[https://huggingface.co/turboderp/Qwen3-VL-32B-Instruct-exl3](https://huggingface.co/turboderp/Qwen3-VL-32B-Instruct-exl3)

[CatBench results](https://preview.redd.it/985mbsz43b0g1.png?width=594&format=png&auto=webp&s=1a181433478e6fced642a2905b3ba86d70a8ab56)

Questions? Ask here or in the [exllama discord](https://discord.gg/VbR8wQxf).
2025-11-09T22:21:49
https://www.reddit.com/r/LocalLLaMA/comments/1oswo5v/qwen3vl_now_exl3_supported/
Unstable_Llama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oswo5v
false
null
t3_1oswo5v
/r/LocalLLaMA/comments/1oswo5v/qwen3vl_now_exl3_supported/
false
false
https://b.thumbs.redditm…18kOnbuPHY1o.jpg
45
null
Running Kimi 2 (or any other big MoE Model) on cheapish hardware.
1
Today, Chat GPT 5 Pro, Grok 4 and Gemini 2.5 Pro decided to work together to help me write a pitch for a concrete, end-to-end design (router+KV on a NVIDIA RTX 5090, expert farm on 3× AMD PRO R9700, RAM cache, and NVMe “object store”), for a modified llama.cpp version that would allow to mix consumer / prosumer video cards from different vendors using vulkan backend for running large MoE LLMs on cheapish hardware. What do you think? # Project K2T-HomeLab # Mixed-Vendor Vulkan Inference for a 1T-param MoE (RTX 5090 + 3× R9700) — Dev Team Pitch # Executive Summary We’re proposing a practical, open, mixed-vendor inference stack that runs **Kimi K2 Thinking**—a **1-trillion-parameter Mixture-of-Experts model** with **256k context**, **top-8 routing (of 384 experts per MoE layer)**, **one shared expert**, and **INT4 QAT**—on **consumer/prosumer GPUs** using a **Vulkan backend in llama.cpp**. The core idea is to treat the NVIDIA RTX 5090 as the **router + attention + KV** engine and use **three AMD Radeon AI PRO R9700s** as an **expert farm**, coordinated by an **offline co-occurrence–driven static placement** that keeps \~**90–98%** of routed tokens on a single AMD card per MoE layer. Model facts here—architecture, context length, expert counts, and INT4-native positioning—are from the K2 releases. Why now? K2 already demonstrates production-grade agentic features (long tool chains, parser support) and ships deployment recipes for mainstream inference engines (vLLM, SGLang, KTransformers). Those point to today’s **reference cluster** setups (e.g., TP=8 on H200/L20 era GPUs) and show published prefill/decode baselines we can improve on for single-node hobbyist rigs. Our pitch: a buildable path that delivers strong throughput and long-context usability on \~€10–14k hardware, without exotic interconnects. We make conservative kernel choices (INT4 experts with FP16 accum), rely on **host-bounce PCIe** (cross-vendor P2P is unreliable), and hide I/O with **prefetch + residency**. # Hardware & Throughput Side-by-Side |Item|**K2 Deployment Example (ground truth)**|**Our Mixed-Vendor Single-Node (estimate)**| |:-|:-|:-| |**GPUs**|8× **NVIDIA L20** (TP=8) — official KTransformers+SGLang example with published throughput.|1× **RTX 5090 32 GB** (router + attention + KV) **+ 3× AMD Radeon AI PRO R9700 32 GB** (expert farm).| |**CPU**|**2× Intel Xeon 6454S** (heterogeneous CPU+GPU deployment in the KTransformers example).|**1× AMD Threadripper Pro 7965WX** (24C) or higher (WRX90 platform).| |**System RAM**|Not specified in K2 doc for the L20 run; typical dual-socket server: **256–512 GB**.|**128–256 GB** ECC (min **128 GB**; budget \~**96 GB** RAM cache for experts).| |**SSDs (capacity & count)**|Not specified.|**4–8× NVMe Gen4 x4**, **4–8 TB each** (min **16 TB** total; prefer **32 TB**) for “object-store” experts.| |**Motherboard / PCIe**|Dual-socket server board; 8 GPU slots via risers/switches (vendor design).|**WRX90 (TR Pro)** with **≥4× PCIe 5.0 x16** full-length (one per GPU) **+ 6–8× PCIe Gen4 x4** for NVMe (onboard M.2 + U.2/HBA). 
**Minimum lanes:** 4×x16 (GPUs) + 6×x4 (NVMe) ≈ **88 lanes**.| |**Context window**|**256k** (per model spec).|**256k**; **512k+** feasible with q4 KV paging (engineering option).| |**Prefill throughput**|**≈ 577.7 tok/s** (37-way concurrency) on **8× L20 + 2× 6454S**.|**≈ 0.85–1.15×10³ tok/s** (short context) via speculative draft on 5090; **estimate** pending bring-up.| |**Decode throughput**|**≈ 45.9 tok/s** (37-way concurrency) on **8× L20 + 2× 6454S**.|**≈ 400–600 tok/s** (short), **≈ 250–350 tok/s** at **256k**; **estimate** from back-of-the-envelope model.| |**Power / form factor**|Datacenter server(s) (varies by vendor; L20 is server GPU).|Single tower or 4U workstation; **\~1.5–1.9 kW** under load (PSU ≥ 1600 W).| |**Estimated price (complete)**|**≈ €100k–€140k** total (8× L20 + 2× Xeon 6454S servers, RAM, storage; market-dependent).|**≈ €10k–€14k** total (5090 + 3× R9700 + WRX90 board + TR Pro CPU + 128–256 GB ECC + 16–32 TB NVMe; market-dependent).| |**Notes**|Official run & metrics published in K2 docs (KTransformers+SGLang).|Our figures are **engineering targets**; validate with bring-up & profiling.| # What We’re Building (in one paragraph) A modified **llama.cpp (Vulkan)** runtime that: 1. runs **MLA attention + router + shared experts + KV** on the **RTX 5090**, 2. dispatches **top-k experts** to **3× R9700** using device-first packing, 3. performs **on-AMD fused FFN + gated sum** so only one FP16 vector per used AMD comes back, 4. keeps hot experts resident (VRAM/RAM) and streams cold shards from **NVMe “object store”**, 5. uses an **offline co-occurrence placement** (from traces) plus **tiny micro-replication** to minimize cross-device traffic. # Ground Truth: Kimi K2 Thinking (the model we’re targeting) * **Architecture:** Mixture-of-Experts, **1T parameters**, **61 layers (60 MoE + 1 dense)**, **384 experts per MoE layer**, **top-8 experts per token**, **1 shared expert** per MoE layer, **attention hidden dim 7168**, **SwiGLU**, **256k context**, and **MLA attention**. These are the official K2 Thinking specs. * **INT4 Native (QAT):** K2 reports **native INT4** via post-training QAT for MoE components, designed to keep quality while improving gen speed and lowering memory. Checkpoints ship in **compressed-tensors** format; int4 can be unpacked to higher precision if needed. * **Reference deployments:** K2 ships examples for **vLLM** and **SGLang** (TP=8 on H200-class) and documents a **KTransformers+SGLang** mixed CPU+GPU setup with published **throughput** (e.g., \~**577.7 tok/s prefill** and **\~45.9 tok/s decode** at 37-way concurrency on 8× L20 + Intel CPUs). These are useful baselines for our target deltas. We build on this: same model, same agentic parser/tooling surface (K2 parser names), different hardware and runtime. # 4-Tier System Design # Tier-1 — RTX 5090 (32 GB): Router + MLA Attention + KV + Shared Experts * **Runs:** LayerNorms, embeddings, final head; **MLA attention** (Q/K/V, latent projections), **router** (top-8 gating), **shared expert** (one per MoE layer), and **aggregation** of expert returns. We keep **KV cache** here with quant options (fp16/q8/q4) to scale context. K2 confirms MLA and 256k context; KV sizing is our engineering choice (we’ll support q4/q8 knobs). * **Why 5090 for attention?** Long-context decode is attention-heavy; keeping KV + attention local avoids round-trips and unpredictable cross-vendor P2P. * **Speculative decoding (optional):** Small draft model path (disable beyond large contexts) to accelerate short responses. 
# Tier-2 — 3× AMD Radeon AI PRO R9700 (32 GB each): Expert Farm * **Runs:** MoE FFNs for routed experts in **INT4 weights / FP16 accum**, with **on-device gated sum**. Return **one FP16 d\_model vector per used AMD**. * **Static placement:** Offline **co-occurrence graph** (from representative traces) assigns experts to one of three AMD devices per layer; micro-replicate 1–3% “bridge” experts to improve same-device hits. * **Goal:** Most tokens hit **one AMD** per MoE layer; rare cases spill to 2 (or 3) GPUs. # Tier-3 — CPU RAM (NUMA-local promotion cache) * **Holds:** Promoted experts (warm set), pinned staging buffers, residency bitmaps, heatmaps, and a look-ahead prefetch queue. # Tier-4 — NVMe Object Store (coldest) * **Layout:** One expert per file (or small bundles), O\_DIRECT + `io_uring`, 2–8 MB reads, checksums. * **App-sharded:** Spread top-N experts across drives; replicate top 5% to reduce tail reads. # Offline Step: Co-occurrence–Driven Static Placement (explained) **Why:** MoE routes a token’s activations to multiple experts. If those experts live on **one AMD device**, we do **one dispatch** and one return. If they’re split, we multiply queues, copies, and latency. **How:** 1. From trace logs (train/finetune eval or telemetry), compute for each MoE layer ℓ a **co-occurrence graph** of experts (edge weight \~ how often two experts fire together), optionally weighted by gate products. 2. **Contract** high-weight cliques to supernodes. 3. **3-way partition** the graph into device groups with **capacity constraints (VRAM + MACs)**. Greedy DSATUR or KL-style refinements work. 4. **Micro-replicate** 1–3% high-betweenness experts to a second AMD for same-device fallback. 5. Emit **GGUF metadata**: per-layer device map, replication hints, placement metrics. **Result:** The router can **pack tokens by device first**; we get a **high same-device rate** in steady state with simple, predictable scheduling. # Runtime Scheduler & Data Flow For each MoE layer: 1. **5090** does LN + router (top-8 + shared expert on the 5090). 2. Pack per-device activations (e.g., FP8 on the wire), **H2D host-bounce** to AMD staging rings (NUMA-local pinned buffers). 3. **AMD** runs fused dequant→SwiGLU FFN→gated sum; emit one FP16 vector per used AMD. 4. **D2H** those vectors; **5090** aggregates with shared expert + residual, then moves to MLA attention for the next layer. 5. **Overlap**: While AMDs compute layer ℓ experts, 5090 prefetches layer ℓ+1/ℓ+2 expert bundles, and starts attention for ℓ+1. **Important:** We assume **no reliable cross-vendor P2P**. All traffic is **GPU↔host↔GPU** via pinned buffers. We amortize with **microbatches (≥32–64 tokens)** and **timeline semaphores**. # Performance Model (transparent assumptions) These are **engineering estimates** to size buffers and queues. Concrete numbers will come from profiling. * **Model constants** (from K2): 61 layers (60 MoE), d\_model=7168, 384 experts/layer, top-8 selected, 1 shared expert, 256k context, MLA attention. # KV footprint (MLA) K2 states MLA attention with 256k context; latent dimension isn’t published. We plan to expose `--kv-quant {fp16|q8|q4}` and treat MLA latent as a **tunable assumption** (e.g., 512–1024; we size for \~768). This gives practical KV budgets on 32 GB (q4/q8/fp16 options) while staying faithful to K2’s MLA design. # PCIe traffic (per token, per layer, average across 1.1–1.2 devices/token) * **H2D:** \~**7 KB** (7168 dims × 1 B if FP8 on wire). * **D2H:** \~**14 KB** (7168 × 2 B FP16). 
* \~**21 KB/layer/token** × 60 layers × \~**1.15 devices/token** ≈ **\~1.45 MB/token**. * At **500 tok/s**: **\~725 MB/s** aggregate—well under PCIe 5.0 x16 practical bandwidth. (*Assumes good batching and minimal cross-device spill.*) # Compute balance * **Experts (AMDs):** INT4 weight GEMMs with FP16 accum; with fused dequant and on-device reduce, three R9700s should remain **compute-bound** on experts under typical concurrency. * **Attention (5090):** Long context shifts load to MLA attention and KV movement; keeping these on the 5090 avoids cross-vendor synchronization and preserves throughput. # Baselines for context K2’s sample deployments report (different hardware & engines): **\~577.7 tok/s prefill** and **\~45.9 tok/s decode** at 37-way concurrency (8× L20 + 2× Intel). This is a **useful yardstick**, not our target hardware. We expect higher single-node decode on short context with speculative drafting and strong batching, tapering as context approaches 256k. # Numerical Stability & Quality * **INT4 experts:** K2 is **natively INT4 via QAT** for MoE; we will keep FFN in INT4/FP16-accum and validate parity on sanity sets (perplexity and a few K2 public benchmarks). * **Router bias ε:** We only bias **near-ties** (small logit deltas) to prefer resident experts; we’ll log pre/post gate stats and run small A/Bs to ensure quality holds. * **Shared expert on 5090:** Always resident (1 per layer), reducing cross-traffic and stabilizing outputs. # Implementation Plan (6–8 weeks, low-risk increments) 1. **Bring-up (Weeks 1–2):** * Vulkan backbone on 5090: MLA attention path + router + shared expert + dense layer. * KV q4/q8/fp16 options and staging buffers. 2. **Single-AMD path (Week 3):** * One R9700: INT4 FFN kernels (dequant-in-register), on-device gated sum, indirect dispatch. 3. **Multi-AMD + static placement (Week 4):** * Offline co-occurrence placer + GGUF metadata; device-first packing; basic prefetch FIFO. 4. **Storage & caching (Week 5):** * NVMe object store + RAM promotion cache, LRU/LFU + look-ahead (2–4 layers), io\_uring. 5. **Replicas + speculative + parsers (Week 6):** * Micro-replication, router ε-bias; speculative draft; K2 tool/reasoning parsers wired. (K2’s docs specify parser names and that they’re integrated in vLLM/sglang—parity at the API level.) 6. **Hardening (Weeks 7–8):** * Autotuning (batching/queueing), telemetry, kv-pager experiments for >256k, correctness runs. # Key Risks & Mitigations * **Cross-vendor P2P:** Treat as **unsupported**; architect around host-bounce with pinned rings and large microbatches. * **Vulkan feature variability:** Check for `shader_integer_dot_product` and cooperative-matrix extensions; provide FP16 fallback. * **Placement drift:** Re-run offline placer periodically on fresh traces; use replicas to smooth distribution changes. * **Disk tail latency:** Prefetch bundles two layers ahead; replicate top 5% cold-miss culprits across drives. 
# Developer Experience & Knobs

```
--model /path/to/Kimi-K2-Thinking-int4   # K2 native INT4 weights (compressed-tensors)
--moe-device-map auto|tuned.json         # from offline placer output
--kv-quant {fp16|q8|q4}                  # MLA KV cache precision (engineering option)
--resident-vram-gb 14 --ram-cache-gb 96 --prefetch-layers 3
--router-bias-epsilon 0.10 --capacity-factor 1.4
--spec-draft small-int4 --spec-threshold 64k
--nvme-drives 4 --shard-compress lz4
--tool-call-parser kimi_k2 --reasoning-parser kimi_k2   # matches K2 parser names in current engines
```

# Why This Matters

K2 shows that **agentic, long-horizon reasoning** with **INT4 efficiency** and **256k context** is here and usable today. The “last mile” for a massive community is an **affordable**, **portable**, **mixed-vendor** inference stack. By grounding our design in **static routing**, **predictable host-bounce**, and **tight Vulkan kernels**, we make a 1T-param MoE feel like a friendly 70B dense model—on hardware people can actually buy.

Let’s ship it. 🚀

**Notes on sources:** All K2 model properties (MoE layout, 1T params, 61 layers, 384 experts, top-8, MLA, 256k, INT4 QAT) are quoted from the K2 README. Baseline deployment modes and the example prefill/decode numbers come from the K2 deployment guide and KTransformers notes. Our throughput targets and PCIe/compute estimates are clearly labeled as **engineering assumptions** to be validated during bring-up.
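For anyone reproducing the PCIe traffic estimate, a quick arithmetic sanity check (same engineering assumptions as above; the small differences from the \~1.45 MB/token and \~725 MB/s figures are only KB-vs-MB rounding):

```python
# Sanity check of the per-token PCIe traffic model (engineering assumptions only).
d_model = 7168              # K2 hidden size
moe_layers = 60             # MoE layers
devices_per_token = 1.15    # average AMD devices touched per token
h2d = d_model * 1           # FP8 on the wire: 1 byte/dim  (~7 KB)
d2h = d_model * 2           # FP16 return:     2 bytes/dim (~14 KB)
per_token = (h2d + d2h) * moe_layers * devices_per_token
print(f"{per_token / 1e6:.2f} MB/token")    # ~1.48 MB/token
print(f"{per_token * 500 / 1e6:.0f} MB/s")  # ~742 MB/s at 500 tok/s
```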
2025-11-09T22:17:09
https://www.reddit.com/r/LocalLLaMA/comments/1oswk0e/running_kimi_2_or_any_other_big_moe_model_on/
_serby_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oswk0e
false
null
t3_1oswk0e
/r/LocalLLaMA/comments/1oswk0e/running_kimi_2_or_any_other_big_moe_model_on/
false
false
self
1
null
Are there any potential footguns to using "synthetic" audio data generated by Google Gemini to fine-tune an open-source TTS model?
1
For example, would it affect the licensing of the resulting TTS model or the dataset itself? There are certainly performance limitations, in that the resulting model could inherit whatever issues Gemini has, but so far the output has been quite flawless. I've also wondered whether the fact that it's not real human speech could adversely affect the internal mechanisms of the TTS model, ultimately leading to irregular behavior during training and inference.
2025-11-09T22:13:49
https://www.reddit.com/r/LocalLLaMA/comments/1oswh0a/are_there_any_potential_footguns_to_using/
PabloKaskobar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oswh0a
false
null
t3_1oswh0a
/r/LocalLLaMA/comments/1oswh0a/are_there_any_potential_footguns_to_using/
false
false
self
1
null
[Release] Pre-built llama-cpp-python wheels for Blackwell/Ada/Ampere/Turing, up to CUDA 13.0 & Python 3.13 (Windows x64)
29
Building llama-cpp-python with CUDA on Windows can be a pain. So I embraced the suck and pre-compiled 40 wheels for 4 Nvidia architectures across 4 versions of Python and 3 versions of CUDA. Figured these might be useful if you want to spin up GGUFs rapidly on Windows. **What's included:** * RTX 50/40/30/20 series support (Blackwell, Ada, Ampere, Turing) * Python 3.10, 3.11, 3.12, 3.13 * CUDA 11.8, 12.1, 13.0 (Blackwell only compiled for CUDA 13) * llama-cpp-python 0.3.16 **Download:** [https://github.com/dougeeai/llama-cpp-python-wheels](https://github.com/dougeeai/llama-cpp-python-wheels) No Visual Studio. No CUDA Toolkit. Just pip install and run. Windows only for now. Linux wheels coming soon if there's interest. Open to feedback on what other configs would be helpful. Thanks for letting me post, long time listener, first time caller.
2025-11-09T22:02:05
https://www.reddit.com/r/LocalLLaMA/comments/1osw6ki/release_prebuilt_llamacpppython_wheels_for/
dougeeai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osw6ki
false
null
t3_1osw6ki
/r/LocalLLaMA/comments/1osw6ki/release_prebuilt_llamacpppython_wheels_for/
false
false
self
29
{'enabled': False, 'images': [{'id': 'RCbuhy9j-T74iJOvG2SU7Mvqh83DDIZXNGQRbdU7PTI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RCbuhy9j-T74iJOvG2SU7Mvqh83DDIZXNGQRbdU7PTI.png?width=108&crop=smart&auto=webp&s=ca9b5ee4d814543d339749bdabd3627bdb957e10', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RCbuhy9j-T74iJOvG2SU7Mvqh83DDIZXNGQRbdU7PTI.png?width=216&crop=smart&auto=webp&s=e4c75799bd59964b711a314ae75ee959300308c2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RCbuhy9j-T74iJOvG2SU7Mvqh83DDIZXNGQRbdU7PTI.png?width=320&crop=smart&auto=webp&s=622365f8bfcaff516bd7660d35c577dba3083471', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RCbuhy9j-T74iJOvG2SU7Mvqh83DDIZXNGQRbdU7PTI.png?width=640&crop=smart&auto=webp&s=83af4d265bed949db334710c0c8201f5db5b2060', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RCbuhy9j-T74iJOvG2SU7Mvqh83DDIZXNGQRbdU7PTI.png?width=960&crop=smart&auto=webp&s=d22d2411d0b026a008510019029072051152318f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RCbuhy9j-T74iJOvG2SU7Mvqh83DDIZXNGQRbdU7PTI.png?width=1080&crop=smart&auto=webp&s=6fd4e412ee2d01217daddf13cd43ca19cdac1b19', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RCbuhy9j-T74iJOvG2SU7Mvqh83DDIZXNGQRbdU7PTI.png?auto=webp&s=24672c991f7c3963a820f3d3aea7ad61f1fae999', 'width': 1200}, 'variants': {}}]}
How to stop Strix Halo crashing while running Ollama:Rocm under Debian Trixie.
1
I recently got myself a Framework desktop motherboard, and the GPU was crashing fairly frequently when I was running the ROCm variant of Ollama. This was resolved by adding this repository to my Debian machine: https://launchpad.net/~amd-team/+archive/ubuntu/gfx1151/, and installing the package amdgpu-firmware-dcn351.

The problem was described in this thread, and the solution was in this comment: https://github.com/ROCm/ROCm/issues/5499#issuecomment-3419180681

I have installed ROCm 7.1, and Ollama has been very solid for me after the firmware upgrade.
2025-11-09T21:31:23
https://www.reddit.com/r/LocalLLaMA/comments/1osveo6/how_to_stop_strix_halo_crashing_while_running/
fufufang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osveo6
false
null
t3_1osveo6
/r/LocalLLaMA/comments/1osveo6/how_to_stop_strix_halo_crashing_while_running/
false
false
self
1
null
Whats the best option right now for local TTS, or voice changing AI. Being able to train the voice would be great as well.
1
Title pretty much.
2025-11-09T21:27:39
https://www.reddit.com/r/LocalLLaMA/comments/1osvb94/whats_the_best_option_right_now_for_local_tts_or/
Code123450
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osvb94
false
null
t3_1osvb94
/r/LocalLLaMA/comments/1osvb94/whats_the_best_option_right_now_for_local_tts_or/
false
false
self
1
null
routing/categorizing model finetune: llm vs embedding vs BERT - to route to best llm for a given input
0
One way to do it would be to score each input 0–1 on categories:

funny:
intelligence:
nsfw:
tool\_use:

Then based on these, use hardcoded logic to route.

What would you recommend? I've never had much luck training the BERT models on this kind of thing personally; perhaps a <24b LLM is the best move?
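To make the hardcoded-logic part concrete, a minimal sketch of the idea (thresholds and model names below are placeholders, not recommendations):

```python
# Route on 0-1 category scores produced by a classifier or small LLM.
def route(scores: dict) -> str:
    if scores.get("nsfw", 0.0) > 0.5:
        return "uncensored-finetune"
    if scores.get("tool_use", 0.0) > 0.6:
        return "function-calling-model"
    if scores.get("intelligence", 0.0) > 0.7:
        return "large-reasoning-model"
    if scores.get("funny", 0.0) > 0.7:
        return "creative-model"
    return "general-small-model"

print(route({"funny": 0.1, "intelligence": 0.9, "nsfw": 0.0, "tool_use": 0.2}))
# -> large-reasoning-model
```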
2025-11-09T20:56:36
https://www.reddit.com/r/LocalLLaMA/comments/1osuixs/routingcategorizing_model_finetune_llm_vs/
LeadOne7104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osuixs
false
null
t3_1osuixs
/r/LocalLLaMA/comments/1osuixs/routingcategorizing_model_finetune_llm_vs/
false
false
self
0
null
Running DeepSeek-OCR on vLLM 0.11.1rc6.dev7 in Open WebUI as a test
46
Obviously you're not supposed to use DeepSeek-OCR through a chat UI. I'm just testing to see if it works or not. Also, this is not really an OCR task but I was wondering if I could use this model for general image description. Seems like that works just fine. I have not yet implemented the helper scripts in the [DeepSeek-OCR github repo](https://github.com/deepseek-ai/DeepSeek-OCR/tree/main/DeepSeek-OCR-master/DeepSeek-OCR-vllm). They seem pretty handy for image/pdf/batch OCR workloads.
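For reference, a minimal sketch of a direct request against a vLLM OpenAI-compatible endpoint for image description (the URL and file name are placeholders; this mirrors what a chat UI sends under the hood):

```python
# Send an image to a vLLM OpenAI-compatible server as a base64 data URL.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
with open("page.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-OCR",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```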
2025-11-09T20:53:15
https://i.redd.it/j14a86mxka0g1.png
AFruitShopOwner
i.redd.it
1970-01-01T00:00:00
0
{}
1osufxq
false
null
t3_1osufxq
/r/LocalLLaMA/comments/1osufxq/running_deepseekocr_on_vllm_0111rc6dev7_in_open/
false
false
default
46
{'enabled': True, 'images': [{'id': 'j14a86mxka0g1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/j14a86mxka0g1.png?width=108&crop=smart&auto=webp&s=caa0ece9188a898405a8987f468c8b4d6bfed288', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/j14a86mxka0g1.png?width=216&crop=smart&auto=webp&s=62aa38fccb2d234b063feb6eb0f5f2b727a33d19', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/j14a86mxka0g1.png?width=320&crop=smart&auto=webp&s=e8a85700b9f8428c03104e2aebb541993b9941d9', 'width': 320}, {'height': 479, 'url': 'https://preview.redd.it/j14a86mxka0g1.png?width=640&crop=smart&auto=webp&s=21626c1b49c8f27927e9f2fd70e483b0bd2bd4d4', 'width': 640}, {'height': 719, 'url': 'https://preview.redd.it/j14a86mxka0g1.png?width=960&crop=smart&auto=webp&s=41e732caab8b1bd1a23fedd46d605285a34759e1', 'width': 960}, {'height': 809, 'url': 'https://preview.redd.it/j14a86mxka0g1.png?width=1080&crop=smart&auto=webp&s=371719b688f82121094a15ded684a28ba38443e7', 'width': 1080}], 'source': {'height': 1134, 'url': 'https://preview.redd.it/j14a86mxka0g1.png?auto=webp&s=cf8352902a91e35d0b9371139a5b1468c2187b10', 'width': 1513}, 'variants': {}}]}
Benchmark Results: GLM-4.5-Air (Q4) at Full Context on Strix Halo vs. Dual RTX 3090
53
Hi, I benchmarked the GLM-4.5-Air (Q4) model running at a near-maximum context on two very different systems: a Strix Halo APU and a dual RTX 3090 server. Both tests were conducted under Debian GNU/Linux with the latest llama.cpp builds from the day of testing, though I overlooked a one-revision difference between the two llama.cpp builds. Here are the startup commands, environment details, and a diagram that breaks down the performance and energy efficiency of both setups.

**RTX 3090:**

```bash
$ LLAMA_SET_ROWS=1 llama-server -m GLM-4.5-Air-UD-Q4_K_XL-00001-of-00002.gguf --n-cpu-moe 38 \
 --tensor-split 28,20 -c 0 --n-gpu-layers 99 --temp 0.9 --flash-attn auto --jinja --host 0.0.0.0 \
 --port 8080 -a glm_air --no-context-shift --no-mmap --swa-full --reasoning-format none
```

```bash
prompt eval time = 1781631.25 ms / 119702 tokens ( 14.88 ms per token, 67.19 tokens per second)
eval time = 1045615.05 ms / 5232 tokens ( 199.85 ms per token, 5.00 tokens per second)
total time = 2827246.30 ms / 124934 tokens
slot release: id 3 | task 1 | stop processing: n_tokens = 124933, truncated = 0

$ llama-server --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
 Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
 Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: KHR_coopmat
ggml_vulkan: 1 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: KHR_coopmat
version: 6990 (53d7d21e6)
built with cc (Debian 14.2.0-19) 14.2.0 for x86_64-linux-gnu
Build flags: -DGGML_CUDA=ON -DGGML_CUDA_F16=ON -DGGML_CUDA_FA_ALL_QUANTS=ON -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DCMAKE_CUDA_ARCHITECTURES=86 -DGGML_VULKAN=ON
```

**Strix Halo:**

```bash
$ llama-server -m GLM-4.5-Air-UD-Q4_K_XL-00001-of-00002.gguf --n-gpu-layers 99 --host 0.0.0.0 \
 --port 8080 -a glm_air -c 131072 -fa 1 --no-mmap
```

```bash
prompt eval time = 5175231.01 ms / 119703 tokens ( 43.23 ms per token, 23.13 tokens per second)
eval time = 1430449.98 ms / 5778 tokens ( 247.57 ms per token, 4.04 tokens per second)
total time = 6605680.99 ms / 125481 tokens
slot update_slots: id 2 | task 1577 | prompt done, n_tokens = 119703, batch.n_tokens = 919

$ llama-server --version
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Radeon 8060S Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
version: 6989 (eeee367de)
built with cc (Debian 15.2.0-7) 15.2.0 for x86_64-linux-gnu
Build flags: -DGGML_VULKAN=ON -DGGML_HIP_ROCWMMA_FATTN=ON -DAMDGPU_TARGETS=gfx1151
```
2025-11-09T20:47:30
https://i.redd.it/vvimjdf4na0g1.png
Educational_Sun_8813
i.redd.it
1970-01-01T00:00:00
0
{}
1osuat7
false
null
t3_1osuat7
/r/LocalLLaMA/comments/1osuat7/benchmark_results_glm45air_q4_at_full_context_on/
false
false
default
53
{'enabled': True, 'images': [{'id': 'vvimjdf4na0g1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/vvimjdf4na0g1.png?width=108&crop=smart&auto=webp&s=fb8b9807ab94773c7137d7d86975d0efd4a979e5', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/vvimjdf4na0g1.png?width=216&crop=smart&auto=webp&s=c54735783aa0b458f5b983c2d88ae6e5142b4567', 'width': 216}, {'height': 116, 'url': 'https://preview.redd.it/vvimjdf4na0g1.png?width=320&crop=smart&auto=webp&s=6b314a09b42b898e99b49183169195f7669580ba', 'width': 320}, {'height': 232, 'url': 'https://preview.redd.it/vvimjdf4na0g1.png?width=640&crop=smart&auto=webp&s=5b8bba52c5a4592461099dbcbbff0318d56011e9', 'width': 640}, {'height': 349, 'url': 'https://preview.redd.it/vvimjdf4na0g1.png?width=960&crop=smart&auto=webp&s=4e33751c61e5769d81899dc58333a86ed2d44c79', 'width': 960}, {'height': 392, 'url': 'https://preview.redd.it/vvimjdf4na0g1.png?width=1080&crop=smart&auto=webp&s=cd212963da753ab0e11cb42929f5e191ed1dd6aa', 'width': 1080}], 'source': {'height': 800, 'url': 'https://preview.redd.it/vvimjdf4na0g1.png?auto=webp&s=dd06fde6fc1e3257763e47158b9c6ed4b08ad47c', 'width': 2200}, 'variants': {}}]}
Keep the model running?
0
Newbie here. I want to train a model locally on my PC. Do I need to keep the model running to train it? If I close the program, do I need to start all over?
2025-11-09T20:27:57
https://www.reddit.com/r/LocalLLaMA/comments/1ostt83/keep_the_model_running/
External_Income29
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ostt83
false
null
t3_1ostt83
/r/LocalLLaMA/comments/1ostt83/keep_the_model_running/
false
false
self
0
null
This exists?
0
First of all, sorry if this has already been asked. Is there anything out there that can clone my movements and map them onto someone else (like a celebrity, an AI-generated person, or someone I know), and do it over a webcam? For example, I'm in a meeting but it looks like Cristiano Ronaldo. Does this exist? Something that isn't too robotic. I recently saw a video where an AI model apparently copied all of a man's movements in real time and looked “real.” If so, which is the best in terms of cost-benefit? Thank you for your time.
2025-11-09T20:11:21
https://www.reddit.com/r/LocalLLaMA/comments/1oste7c/this_exists/
MaoDeFerro23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oste7c
false
null
t3_1oste7c
/r/LocalLLaMA/comments/1oste7c/this_exists/
false
false
self
0
null
Faster Prompt Processing in llama.cpp: Smart Proxy + Slots + Restore
77
[https://github.com/airnsk/proxycache](https://github.com/airnsk/proxycache)

# What this service is

This service is a smart proxy in front of llama.cpp that makes long‑context chat and IDE workflows much faster by managing llama.cpp slots, reusing cached context, and restoring saved caches from disk when needed. It speaks an OpenAI‑compatible Chat Completions API, so existing clients can connect without changes, including both streaming (SSE) and non‑stream responses depending on request settings.

# Why it’s needed

llama.cpp provides “slots,” each holding a conversation’s KV cache so repeated requests with the same or very similar prefix can skip recomputing the whole prompt and continue from the first mismatching token, which dramatically cuts latency for large prompts. In real teams the number of users can easily exceed the number of available slots (e.g., 20 developers but only 4 slots), so naive routing causes random slot reuse and cache overwrites that waste time and GPU/CPU cycles. This proxy solves that by steering requests to the right slot, saving evicted caches to disk, and restoring them on demand, so long prompts don’t need to be recomputed from scratch each time.

# How requests are balanced and slots are chosen

* Slots and heat: When a request lands in a slot and its cache is valid for reuse, the slot is considered “hot,” and new requests won’t overwrite it if other options exist, preserving useful KV for future reuse.
* Similarity matching: The proxy computes a fast, word‑block prefix similarity between the incoming conversation and existing hot slots, and only reuses a hot slot if the similarity meets a single ratio threshold (e.g., 85% of the shorter sequence), otherwise it rejects reuse to avoid polluting the hot cache with a weakly related prompt.
* Free and cold first: If reuse is rejected, the proxy sends the request to a free slot or a cold slot (one not currently carrying a valuable hot cache), protecting high‑value contexts from accidental overwrites under load.
* Oldest when full: If there are no free or cold slots, the proxy picks the least‑recently used slot and saves its current KV cache to disk before assigning the new request, ensuring nothing valuable is lost when the pool is exhausted.
* Restore on demand: When a new request matches a cache that was previously saved, the proxy restores that cache into a free/cold/oldest slot and routes the request there, which takes seconds versus minutes for full prompt recomputation on long contexts, especially in IDE scenarios with 30–60k tokens.
* Concurrency safety: Each slot is guarded with an async lock; if all are busy, the request waits for the first LRU slot to free, preventing race conditions and unintended cache overwrites during concurrent generation.

# Save and restore from disk

llama.cpp’s HTTP server exposes slot save/restore; saving writes a cache file to the directory provided by --slot-save-path, and restore loads by file basename (e.g., slotcache_.bin), which is exactly how this proxy persists and revives caches across requests and restarts.
The proxy keeps small local .meta files describing cached prefixes for fast lookup, while llama.cpp owns the actual KV .bin files under --slot-save-path for correctness and performance.

# Quick start

1. Start llama.cpp ([https://github.com/ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp)) with slots and a cache directory:

```
llama-server -m ./model.gguf -np 4 --slot-save-path /var/kvcache --host 0.0.0.0 --port 8080
```

This enables the OpenAI‑compatible HTTP server, a pool of 4 slots, and a directory where slot KV caches are saved and restored by basename.

2. Run the proxy next to it:

```
git clone https://github.com/airnsk/proxycache.git
cd proxycache
python3 -m venv venv && source venv/bin/activate && pip install -r requirements.txt
python3 proxycache.py  # or: uvicorn app:app --host 0.0.0.0 --port 8081
```

Your clients should call the proxy’s /v1/chat/completions endpoint; the proxy will handle similarity, slot selection, save/restore, and streaming vs non‑streaming automatically.

If you run into issues using gpt-oss-20b with an IDE like Cline, follow these instructions: [https://www.reddit.com/r/CLine/comments/1mtcj2v/making\_gptoss\_20b\_and\_cline\_work\_together/](https://www.reddit.com/r/CLine/comments/1mtcj2v/making_gptoss_20b_and_cline_work_together/)

# Parameters

* LLAMA_SERVER_URL: The llama.cpp server base URL, e.g., [http://127.0.0.1:8080](http://127.0.0.1:8080/), which must expose the OpenAI‑compatible chat completions endpoint.
* SLOTS_COUNT: The number of server slots (should match llama.cpp -np) so the proxy can track and plan reuse/restore correctly under load.
* SIMILARITY_MIN_RATIO: One similarity threshold (e.g., 0.85) controlling both active reuse and disk restore; if a match is below this ratio, the proxy will prefer a free/cold slot or restore instead of overwriting a hot slot.
* MIN_PREFIX_* (chars/words/blocks): Requests below this size are treated as “small” and steered to free/cold/oldest slots to avoid disturbing valuable hot caches used by large, long‑running prompts.
* LOCAL_META_DIR and --slot-save-path: The proxy stores small .meta descriptors locally for fast candidate lookup, while llama.cpp reads/writes the real KV cache files under --slot-save-path using basename in the HTTP API.

# Why this boosts IDE and long‑context productivity

For 30–60k‑token contexts typical in project‑wide IDE assistants, recomputing a full prompt can take minutes, whereas restoring a previously cached context and continuing from the first mismatching token typically takes seconds on llama.cpp, dramatically improving iteration speed for large teams with limited slots.
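To illustrate the similarity rule above, here is a minimal sketch of a word-block prefix check (a sketch under an assumed block size; the actual proxy's normalization may differ, and all names here are illustrative):

```python
# Illustrative word-block prefix similarity: compare blocks of words from the
# start and return the matched fraction of the shorter sequence.
def prefix_similarity(a: str, b: str, block: int = 4) -> float:
    wa, wb = a.split(), b.split()
    n = min(len(wa), len(wb))
    matched = 0
    for i in range(0, n, block):
        if wa[i:i + block] == wb[i:i + block]:
            matched += len(wa[i:i + block])
        else:
            break  # prefix reuse stops at the first mismatching block
    return matched / n if n else 0.0

cached = "summarize the repo structure and list all public functions in module foo"
incoming = "summarize the repo structure and list all public functions in module bar quickly"
if prefix_similarity(incoming, cached) >= 0.85:
    print("reuse hot slot")
else:
    print("free/cold slot, or restore a saved cache")  # 8/12 = 0.67, below threshold
```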
2025-11-09T20:10:26
https://i.redd.it/90im3um0fa0g1.png
Previous_Nature_5319
i.redd.it
1970-01-01T00:00:00
0
{}
1ostdcn
false
null
t3_1ostdcn
/r/LocalLLaMA/comments/1ostdcn/faster_prompt_processing_in_llamacpp_smart_proxy/
false
false
default
77
{'enabled': True, 'images': [{'id': '90im3um0fa0g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/90im3um0fa0g1.png?width=108&crop=smart&auto=webp&s=f4b8aa910c41e73dcc0ef256bb4bf3645fd251ef', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/90im3um0fa0g1.png?width=216&crop=smart&auto=webp&s=33fdfc7196e46b3a65da0066b4b06044bf7c73ef', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/90im3um0fa0g1.png?width=320&crop=smart&auto=webp&s=9e347dea7b198645f5eb61203762f8c11b778336', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/90im3um0fa0g1.png?width=640&crop=smart&auto=webp&s=1880277abb2e16deb196c79509aa47dbb7d349ae', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/90im3um0fa0g1.png?width=960&crop=smart&auto=webp&s=8789b730764d288f18c2afa56b8196c9092ccbd9', 'width': 960}, {'height': 606, 'url': 'https://preview.redd.it/90im3um0fa0g1.png?width=1080&crop=smart&auto=webp&s=5b86b8109f6a3e08974319610f6c776643eae00c', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/90im3um0fa0g1.png?auto=webp&s=3b4026436628e95a1d150eb3c9aacfa1ddf706ea', 'width': 2848}, 'variants': {}}]}
Codename Goose Desktop and Goose CLI with Ollama or other local inference
3
Hey r/LocalLLaMA, I have been messing around with Goose Desktop and Goose CLI for a while, and I am wondering if anyone has had any luck with getting it to work with local models for function and tool calling. I have been able to get several local models running with it, but none that can actually use the extensions in Goose. So far I've only been successful with Cloud APIs for functions and tool calling. Would love to learn more about what you did and how you got it working. I am working with 16 GB VRAM and 32 GB RAM, and I am running Ollama, for clarity.
2025-11-09T19:55:48
https://www.reddit.com/r/LocalLLaMA/comments/1osszxh/codename_goose_desktop_and_goose_cli_with_ollama/
NoWorking8412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osszxh
false
null
t3_1osszxh
/r/LocalLLaMA/comments/1osszxh/codename_goose_desktop_and_goose_cli_with_ollama/
false
false
self
3
null
Budget system for 30B models revisited
8
Moved my three Nvidia GTX-1070 GPUs to a DDR4 system. About a year ago I was running these GPUs on a 12 year old [DDR3 system](https://www.reddit.com/r/ollama/comments/1gc5hnb/budget_system_for_30b_models/) and using Ollama. I was getting 8 t/s for gemma2, and you'll see below that the DDR4 system with gemma3 gets 9 t/s. GPU matters more than system CPU and DDR speed if your system isn't offloading.

[https://www.reddit.com/r/ollama/comments/1gc5hnb/budget\_system\_for\_30b\_models/](https://www.reddit.com/r/ollama/comments/1gc5hnb/budget_system_for_30b_models/)

System: AMD Ryzen 5 3600 CPU, 32GB DDR4 RAM, three GTX-1070 GPUs, single PSU, power limit via crontab set for: `sudo nvidia-smi -i 0 -pl 110; sudo nvidia-smi -i 1 -pl 111; sudo nvidia-smi -i 2 -pl 112`

OS: Kubuntu 25.10

Llama.cpp: Vulkan build: cb1adf885 (6999)

1. \*Ling-mini-2.0-Q8\_0.gguf (NOT 30B size but about same VRAM usage)
2. gemma-3-27b-it-UD-Q4\_K\_XL.gguf
3. Qwen3-Coder-30B-A3B-Instruct-Q4\_K\_M.gguf
4. granite-4.0-h-small-UD-Q4\_K\_XL.gguf
5. GLM-4-32B-0414-UD-Q4\_K\_XL.gguf
6. DeepSeek-R1-Distill-Qwen-32B-Q4\_K\_M.gguf

`llama-bench -m /Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf`

load_backend: loaded RPC backend from /home/user33/vulkan/build/bin/libggml-rpc.so
ggml_vulkan: Found 3 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce GTX 1070 (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
ggml_vulkan: 1 = NVIDIA GeForce GTX 1070 (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
ggml_vulkan: 2 = NVIDIA GeForce GTX 1070 (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
load_backend: loaded Vulkan backend from /home/user33/vulkan/build/bin/libggml-vulkan.so
load_backend: loaded CPU backend from /home/user33/vulkan/build/bin/libggml-cpu-haswell.so

Sorted by Params size

|Model|Size|Params|pp512|tg128|
|:-|:-|:-|:-|:-|
|\*Ling-mini-2.0-Q8\_0.gguf|16.11 GiB|16.26 B|227.98|70.94|
|gemma-3-27b-it-UD-Q4\_K\_XL.gguf|15.66 GiB|27.01 B|57.26|8.97|
|Qwen3-Coder-30B-A3B-Instruct-Q4\_K\_M.gguf|17.28 GiB|30.53 B|81.45|47.76|
|granite-4.0-h-small-UD-Q4\_K\_XL.gguf|17.49 GiB|32.21 B|25.34|15.41|
|GLM-4-32B-0414-UD-Q4\_K\_XL.gguf|18.54 GiB|32.57 B|48.22|7.80|
|DeepSeek-R1-Distill-Qwen-32B-Q4\_K\_M.gguf|18.48 GiB|32.76 B|52.37|8.93|

Table below shows reference of model name (Legend) in llama.cpp

|Model|Size|Params|pp512|tg128|Legend|
|:-|:-|:-|:-|:-|:-|
|\***Ling-mini-2.0-Q8\_0.gguf**|16.11 GiB|16.26 B|227.98|70.94|bailingmoe2 16B.A1B Q8\_0|
|**gemma-3-27b-it-UD-Q4\_K\_XL.gguf**|15.66 GiB|27.01 B|57.26|8.97|gemma3 27B Q4\_K - Medium|
|**Qwen3-Coder-30B-A3B-Instruct-Q4\_K\_M.gguf**|17.28 GiB|30.53 B|81.45|47.76|qwen3moe 30B.A3B Q4\_K - Medium|
|**granite-4.0-h-small-UD-Q4\_K\_XL.gguf**|17.49 GiB|32.21 B|25.34|15.41|granitehybrid 32B Q4\_K - Medium|
|**GLM-4-32B-0414-UD-Q4\_K\_XL.gguf**|18.54 GiB|32.57 B|48.22|7.80|glm4 32B Q4\_K - Medium|
|**DeepSeek-R1-Distill-Qwen-32B-Q4\_K\_M.gguf**|18.48 GiB|32.76 B|52.37|8.93|qwen2 32B Q4\_K - Medium|

AMD motherboard X370; one GPU using a 1X PCIe extender, the other two mounted in 16X slots.

[Three Nvidia GTX-1070 8GB VRAM each \(24GB VRAM total\) power limited using nvidia-smi to 333 watts ](https://preview.redd.it/truv59oxaa0g1.jpg?width=2252&format=pjpg&auto=webp&s=ccf335b37f291f935fbcfae8c403bd4e039ea846)
2025-11-09T19:41:23
https://www.reddit.com/r/LocalLLaMA/comments/1ossmm8/budget_system_for_30b_models_revisited/
tabletuser_blogspot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ossmm8
false
null
t3_1ossmm8
/r/LocalLLaMA/comments/1ossmm8/budget_system_for_30b_models_revisited/
false
false
https://b.thumbs.redditm…83UxXY2-Y3Vg.jpg
8
null
Best model and setup 4 4 3090s?
0
I’m running open air, kubuntu, 2 psus on a 20 amp circuit w an i9 and some ram. What’s the best way to take full advantage of those 4 3090s? I use oooba and find exl3 models are usually the sweet spot for me but recent offerings aren’t working well. Love this sub thanks to all who post here!
2025-11-09T19:33:26
https://www.reddit.com/r/LocalLLaMA/comments/1ossf1x/best_model_and_setup_4_4_3090s/
klenen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ossf1x
false
null
t3_1ossf1x
/r/LocalLLaMA/comments/1ossf1x/best_model_and_setup_4_4_3090s/
false
false
self
0
null
Strix Halo inference Cluster
45
2025-11-09T19:25:06
https://youtu.be/0cIcth224hk?si=IfW5yysNbNWUDvFx
sub_RedditTor
youtu.be
1970-01-01T00:00:00
0
{}
1oss784
false
{'oembed': {'author_name': 'Donato Capitella', 'author_url': 'https://www.youtube.com/@donatocapitella', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/0cIcth224hk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Building a Two-Node AMD Strix Halo Cluster for LLMs with llama.cpp RPC (MiniMax-M2 &amp; GLM 4.6)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/0cIcth224hk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Building a Two-Node AMD Strix Halo Cluster for LLMs with llama.cpp RPC (MiniMax-M2 & GLM 4.6)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1oss784
/r/LocalLLaMA/comments/1oss784/strix_halo_inference_cluster/
false
false
default
45
{'enabled': False, 'images': [{'id': 'QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM.jpeg?width=108&crop=smart&auto=webp&s=723c429643a0665c386f4eb9342e3fff35a5b79c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM.jpeg?width=216&crop=smart&auto=webp&s=6d9ae5cd3c103a3a6f96625a5f20d4392e4b9fe6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM.jpeg?width=320&crop=smart&auto=webp&s=6be666f0103fd1f705f54f78d0ee69bc9405d6dc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/QLldEh6cHckh0zu3VOF5RuY9ywGiZCt_x-CXw1nKwvM.jpeg?auto=webp&s=0433df90d3c4ad78c71525548e56c3e8e228be54', 'width': 480}, 'variants': {}}]}
Best performing model for MiniPC, what can I expect?
2
So I have a Lenovo M720q MiniPC with an Intel i5-8500T and 32GB RAM, which I run my Proxmox and Home Assistant on. I spontaneously bought an Nvidia T1000 8GB to run Voice Assistant on Home Assistant more smoothly. The card hasn't arrived yet and I went down the rabbit hole a little bit (not too deep). Is it reasonable to expect a small model to run on this configuration as well? Maybe a small personal assistant for Home Assistant with some heavier stuff during the night (summaries, research, etc.)? What models should I aim for (if any at all)? Thank you!
2025-11-09T19:18:34
https://www.reddit.com/r/LocalLLaMA/comments/1oss145/best_performing_model_for_minipc_what_can_i_expect/
caffeineandgravel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oss145
false
null
t3_1oss145
/r/LocalLLaMA/comments/1oss145/best_performing_model_for_minipc_what_can_i_expect/
false
false
self
2
null
best smallest model to run locally on a potato pc
0
I have a PC with 8 GB of free RAM. I need to run the AI model on recall tasks (recalling the word from a large list of \~20k words that best fits a sentence; slightly fewer words is also fine).
2025-11-09T19:00:38
https://www.reddit.com/r/LocalLLaMA/comments/1osrkb6/best_smallest_model_to_run_locally_on_a_potato_pc/
Sudden_Platform_4408
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osrkb6
false
null
t3_1osrkb6
/r/LocalLLaMA/comments/1osrkb6/best_smallest_model_to_run_locally_on_a_potato_pc/
false
false
self
0
null
PhD AI Research: Local LLM Inference — One MacBook Pro or Workstation + Laptop Setup?
1
I'm starting a PhD on a topic that leverages AI, and a large part of my work would involve running and evaluating LLMs, comparing model behavior, testing RAG pipelines, and experimenting with different inference setups. I won’t be training large models on my personal machine — my university offers infrastructure for that, though with some access limitations and queue times.

So my personal hardware is mainly for:

* Running medium–large LLMs locally (often quantized 30B–70B, and sometimes larger)
* Prototyping ideas quickly without waiting on remote resources
* Working from different locations (office, library, travel, conferences)
* General research computing, writing, coding, etc.

I want something that supports fast, low-friction iteration — because a lot of my thinking/testing happens spontaneously and not always while I’m physically at a workstation.

**The Two Options**

**Option A — One Portable Workhorse**

16" MacBook Pro (M4 Max), 128GB unified memory, 2TB SSD, \~£5400 (potentially less with university procurement/discount)

Pros:

* Can run large models anywhere. No need to remote into another machine for inference work.
* Reduced workflow friction → faster iteration and idea testing.
* Simpler setup: one environment, no sync overhead.

Cons:

* Laptop thermals = not ideal for very long or sustained high-load jobs.
* Single point of failure.

**Option B — Workstation + Light Laptop**

Mac Studio (M4 Max, 128GB, 2TB) + 16" MacBook Pro (M4, 24GB, 512GB), total \~£6700 (again, possibly lower with university discounts)

Pros:

* Mac Studio handles longer inference runs more comfortably.
* Two machines = redundancy + possible parallel tasks.

Cons:

* The 24GB laptop cannot run large models locally, so I’d need to remote into the Studio for most LLM work. That introduces friction: syncing environments, data paths, vector stores, etc.
* Higher total cost → reduces budget available for conferences, workshops, and travel, which are important in a PhD.
* Unified memory is non-upgradeable, so there’s no scaling the Studio later.

**Why I’m Not Considering Linux Laptops Right Now**

I’ve used Linux before and I like it, but on laptops I found:

* Power management issues → significantly worse battery life
* Driver/toolchain breakage during updates
* Needing to maintain configs rather than just work
* Inconsistent GPU support depending on model/vendor

I want this machine to be something I work on, not work to maintain. That said, a compelling reason for a Linux laptop could make me reconsider.

**Where I’m Leaning**

I’m leaning toward Option A because having all compute with me would let me experiment freely from anywhere, which fits how I actually work day-to-day. But I also understand the value of a dedicated workstation for stability and sustained performance. Before I commit, I want to make sure I’m not overlooking something important in the workflow or long-term usability.

**Disclaimer / Note**

Some of what I’ve written above is based on my assumptions. I specialize in another field, and this is about leveraging AI / LLMs for scientific workflows. My knowledge about AI and LLMs is still limited, so corrections, insights, or better approaches are welcome.

**Question for people who run LLMs locally**

For those who run medium–large LLMs for inference, evaluation, and RAG prototyping (not training):

* Does having all the compute in one portable machine give you noticeably better iteration speed and workflow fluidity?
* Or do you find the workstation + lightweight laptop setup more productive in practice?

Any experiences, regrets, or “I wish I had done X instead” stories are welcome.

TL;DR: PhD student looking to run LLMs locally for testing, evaluation, and RAG. Options:

* Option A: MacBook Pro M4 Max, 128GB, 2TB — portable, frictionless, \~£5400
* Option B: Mac Studio M4 Max 128GB + MacBook Pro 24GB — better sustained performance, but less portable, \~£6700

Leaning toward Option A for portability and faster experimentation, but seeking advice before committing.
2025-11-09T18:51:18
https://www.reddit.com/r/LocalLLaMA/comments/1osrbov/phd_ai_research_local_llm_inference_one_macbook/
Anime_Over_Lord
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osrbov
false
null
t3_1osrbov
/r/LocalLLaMA/comments/1osrbov/phd_ai_research_local_llm_inference_one_macbook/
false
false
self
1
null
How LLMs work?
0
If LLMs are word predictors, how do they solve code and math? I’m curious to know what's behind the scenes.
2025-11-09T18:39:52
https://www.reddit.com/r/LocalLLaMA/comments/1osr0yz/how_llms_work/
Mettlewarrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osr0yz
false
null
t3_1osr0yz
/r/LocalLLaMA/comments/1osr0yz/how_llms_work/
false
false
self
0
null
Kimi K2 Thinking on H100 setup?
1
Has anyone successfully set up this model, in native INT4, on multiple nodes of H100s? Could you please share your setup? Tyvm in advance.
2025-11-09T18:36:00
https://www.reddit.com/r/LocalLLaMA/comments/1osqxc9/kimi_k2_thinking_on_h100_setup/
pumapeepee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osqxc9
false
null
t3_1osqxc9
/r/LocalLLaMA/comments/1osqxc9/kimi_k2_thinking_on_h100_setup/
false
false
self
1
null
LM Studio unlocked for "unsupported" hardware — Testers wanted!
34
Hello everyone! Quick update — a **simple in situ patch** was found (see GitHub), and the **newest versions of the backends** are now released for "unsupported" hardware. Since the last post, **major refinements** have been made: performance, compatibility, and build stability have all improved. The **AVX1-only** and **AVX1 + Vulkan** backends are now **confirmed working** on **Ivy Bridge Xeons** with **older Tesla GPUs**. Here’s the current testing status:

* ✅ **AVX1 CPU builds:** working
* ✅ **AVX1 Vulkan builds:** working
* ❓ **AVX1 CUDA builds:** untested (no compatible hardware yet)
* ❓ **Non-AVX experimental builds:** untested (no compatible hardware yet)

I’d love for more people to **try the patch instructions on their own architectures** and share results — especially if you have **newer NVIDIA GPUs** or **non-AVX CPUs** (like first-gen Intel Core).

👉 [https://github.com/theIvanR/lmstudio-unlocked-backend](https://github.com/theIvanR/lmstudio-unlocked-backend)

My test setup is dual Ivy Bridge Xeons with Tesla K40 GPUs

https://preview.redd.it/7v3vd9ldx90g1.png?width=1106&format=png&auto=webp&s=58ae1582a47823f049f86ae91ebe2ae368a9b22a

https://preview.redd.it/ou8639ofx90g1.png?width=1041&format=png&auto=webp&s=15f853146d4adde2e4dec84aa76a24b17a5eab3c

Brief install instructions:

- navigate to the backends folder, e.g. C:\Users\Admin\.lmstudio\extensions\backends
- (recommended for a clean install) delete everything except the "vendor" folder
- drop in the contents of the compressed backend of your choice
- select it in LM Studio runtimes and enjoy.
2025-11-09T18:30:25
https://www.reddit.com/r/LocalLLaMA/comments/1osqscj/lm_studio_unlocked_for_unsupported_hardware/
TheSpicyBoi123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osqscj
false
null
t3_1osqscj
/r/LocalLLaMA/comments/1osqscj/lm_studio_unlocked_for_unsupported_hardware/
false
false
https://b.thumbs.redditm…JvKq-h-ba6cU.jpg
34
null
Motivated versus Value reasoning in LLMs
0
Given that we now supposedly have reasoning models, are there models that can, out of the box or through training, reason in a specific style or way? In the psychological literature and in philosophy (especially Hume and/or Kant), one usually draws a distinction between two fundamentally different types of reasoning: motivated/instrumental/hypothetical reasoning versus categorical or value reasoning. But I can't seem to find models that are trained differently to uphold and abide by these deep conceptual distinctions. I personally don't want a model to do motivated reasoning, for example, even if I tell it to by accident. Furthermore, I am talking here about how the model functions, not what it can output; if one big forward pass over the latent generation space is done, we can't tell whether it is truly reasoning one way or the other. Or can training by RL, by definition, only produce motivated reasoning?
2025-11-09T18:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1osqk7g/motivated_versus_value_reasoning_in_llms/
ComprehensiveTap4823
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osqk7g
false
null
t3_1osqk7g
/r/LocalLLaMA/comments/1osqk7g/motivated_versus_value_reasoning_in_llms/
false
false
self
0
null
Help running GPUStack
1
Hello, I'm trying to run gpustack, I've installed it with pip in a conda environment with cuda 12.8 and it works fine, except I can't seem to run language models on my gpu, they just get run on the cpu. In the terminal, about every 20 seconds it will give output saying that the rpc server for gpu 0 isn't running and it will start it, then it says it started it, then it just loops that. I've tried replacing the llama-box executable with one from the github releases, but that didn't change anything. In the gpu-0.log file, it does always say "Unknown argument: --origin-rpc-server-main-gpu" I'm using Cachyos and have an nvidia 30 series gpu. Any help would be greatly appreciated.
2025-11-09T18:10:00
https://www.reddit.com/r/LocalLLaMA/comments/1osq9ja/help_running_gpustack/
Ender436
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osq9ja
false
null
t3_1osq9ja
/r/LocalLLaMA/comments/1osq9ja/help_running_gpustack/
false
false
self
1
null
If I really really wanted to run Qwen 3 coder 480b locally, what spec am I looking?
0
Let's see what this sub can cook up. Please include expected TPS, TTFT, price, and obviously the spec.
2025-11-09T18:05:58
https://www.reddit.com/r/LocalLLaMA/comments/1osq5sg/if_i_really_really_wanted_to_run_qwen_3_coder/
Ok-Internal9317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osq5sg
false
null
t3_1osq5sg
/r/LocalLLaMA/comments/1osq5sg/if_i_really_really_wanted_to_run_qwen_3_coder/
false
false
self
0
null
Continue.dev CLI with no account, is it possible?
2
I am bowing to pressure to use some of these coding tools... I don't want to give access to any of the big boys, so everything must be hosted locally. I have set up the Continue plug in for vscodium and it seems to be accessing my local llama install okay. I would like to use the CLI, but when I start it up it demands an external log on. Is it possible to get it to work locally only? https://i.imgur.com/zEAecOg.png
2025-11-09T17:58:42
https://www.reddit.com/r/LocalLLaMA/comments/1ospyy3/continuedev_cli_with_no_account_is_it_possible/
fragglerock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ospyy3
false
null
t3_1ospyy3
/r/LocalLLaMA/comments/1ospyy3/continuedev_cli_with_no_account_is_it_possible/
false
false
self
2
{'enabled': False, 'images': [{'id': 'rljcD23s5yom9tclp3OjzAep5Iq8x3iWs6SfGOzCCQY', 'resolutions': [{'height': 19, 'url': 'https://external-preview.redd.it/rljcD23s5yom9tclp3OjzAep5Iq8x3iWs6SfGOzCCQY.png?width=108&crop=smart&auto=webp&s=6b32756027a046db162a0b3e047ccddb66e523cf', 'width': 108}, {'height': 38, 'url': 'https://external-preview.redd.it/rljcD23s5yom9tclp3OjzAep5Iq8x3iWs6SfGOzCCQY.png?width=216&crop=smart&auto=webp&s=5a96fa9d699dd81256572ae8ff8a461b15b4dec3', 'width': 216}, {'height': 56, 'url': 'https://external-preview.redd.it/rljcD23s5yom9tclp3OjzAep5Iq8x3iWs6SfGOzCCQY.png?width=320&crop=smart&auto=webp&s=3b1c2f084075095f5eeaa1b9d12a5443e75902ad', 'width': 320}], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/rljcD23s5yom9tclp3OjzAep5Iq8x3iWs6SfGOzCCQY.png?auto=webp&s=1d6c7c85ba1864b22d3c896366542b1712b90584', 'width': 361}, 'variants': {}}]}
Building AI Homeserver Setup Budget 2000€
1
Hi, we’re planning to build a local AI workstation that can handle both **LLM fine-tuning** and **heavy document processing**. Here’s what we’re trying to do:

* Run and fine-tune **local open-source LLMs** (e.g. Mistral, LLaMA, etc.)
* Use **OCR** to process and digitize large document archives (about **200 GB** total, with thousands of pages)
* Translate full **books (\~2000 pages)** from one language to another
* Create a **local searchable knowledge base** from these documents
* Optionally use the setup for **video enhancement tasks** (AI upscaling, transcription, or analysis)

We want **one powerful, all-in-one system** that can handle this offline — no cloud. Ideally something with:

* A strong GPU (plenty of VRAM for LLMs and OCR models)
* Lots of RAM and storage
* Good cooling and power efficiency
* Upgrade options for the future

The **budget is around €2000 (Germany)** — the less, the better, but we want solid performance for AI workloads.

It will be used as an all-rounder, possibly with Proxmox as a hypervisor and the AI applications in LXC containers or VMs/Docker. We have around 2 TB of data which we want to make more accessible, something like paperless-ngx, but with translation and searchability, and so on.
2025-11-09T17:52:04
https://www.reddit.com/r/LocalLLaMA/comments/1ospst5/building_ai_homeserver_setup_budget_2000/
Mediocre_Honey_6310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ospst5
false
null
t3_1ospst5
/r/LocalLLaMA/comments/1ospst5/building_ai_homeserver_setup_budget_2000/
false
false
self
1
null
Strix Halo and RAM choices...
2
Hey everyone, Onexfly just opened the Indiegogo campaign for the Onexfly Apex, it's a gaming handheld with the Strix Halo/Ryzen AI Max+ 395 and several options for RAM. I'm personally torn because while 128gb RAM is really nice, it's about $500 more expensive than the 64gb version. Since I want to use this for both gaming and AI, I wanted to see everyone else's opinions. Is 128gb overkill, or is it just right?
2025-11-09T17:44:29
https://www.reddit.com/r/LocalLLaMA/comments/1osplmf/strix_halo_and_ram_choices/
Familiar-Art-6233
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osplmf
false
null
t3_1osplmf
/r/LocalLLaMA/comments/1osplmf/strix_halo_and_ram_choices/
false
false
self
2
null
Comma v.01 converted to GGUF for easy use in Ollama
1
https://ollama.com/hillhand/comma-v0.1-2t - This is just the straight base model, NOT a chat/instruct tuned model. Trained on The Common Pile by EleutherAI: https://blog.eleuther.ai/common-pile/ https://huggingface.co/common-pile Note this comment from a few months ago with some skepticism about exactly how "clean" the dataset is: https://www.reddit.com/r/LocalLLaMA/comments/1l5f3m0/comment/mwgp96t/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button - If you've seen more information about Comma and/or The Common Pile since then please share. Because it's only about as powerful as Llama 2, there has not been much discussion about Comma out there.
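Since it's a base model with no chat template, query it as a raw continuation rather than a chat. A minimal sketch against Ollama's /api/generate endpoint (the prompt is just an example):

```python
# Raw-completion request to a local Ollama instance; base models continue text,
# so prompt with the start of a sentence instead of an instruction.
import json
import urllib.request

payload = {
    "model": "hillhand/comma-v0.1-2t",
    "prompt": "The Common Pile is a dataset of",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as r:
    print(json.load(r)["response"])
```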
2025-11-09T17:34:20
https://www.reddit.com/r/LocalLLaMA/comments/1ospc94/comma_v01_converted_to_gguf_for_easy_use_in_ollama/
Jadael
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ospc94
false
null
t3_1ospc94
/r/LocalLLaMA/comments/1ospc94/comma_v01_converted_to_gguf_for_easy_use_in_ollama/
false
false
self
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]}
I got confused by "sandboxes" in every AI article I read. Spent a weekend figuring it out. Here's what finally clicked for me.
0
Honestly, I felt dumb. I was reading Anthropic's article about how they made AI agents 98.7% more efficient, and they kept mentioning "sandboxes." "Code execution in secure sandboxes..." "Agents write code that runs in a sandbox..." And I realized... I didn't actually know what that meant. Like, I had a vague idea? But not really. What made it worse: everyone seemed to just *know*. It was treated as assumed knowledge. Which made me feel like I was the only one who didn't get it. So I did what I always do when I'm confused: I went deep. Not to write an article. Just to understand it for myself. **What finally clicked:** AI agents don't just call APIs. They need to *write code* \- actual Python scripts, shell commands - and execute it. And you can't just let AI-generated code run on your production server. That's... terrifying. That's where sandboxes come in. It's like giving the AI a safe playground where it can build whatever it wants, but the mess stays contained. **The part that blew my mind:** Anthropic's token reduction wasn't just about efficiency. It was about *what becomes possible*. Without sandboxes, you pre-define every tool an agent can use (huge context window). With sandboxes, agents write custom code for each unique problem. It's the difference between "here are 1000 tools you might need" vs "write the exact tool you need right now." **Why I'm sharing this:** I spent hours being confused before it clicked. I thought maybe writing this out would help someone else skip that confusion. Also... I built a little tutorial where you can actually run a sandbox yourself in 2 minutes. Because reading about it didn't help me - *doing* it did. Full write-up (with hands-on tutorial): [https://themindfulai.dev/articles/discovering-sandboxes-ai-infrastructure](https://themindfulai.dev/articles/discovering-sandboxes-ai-infrastructure) If you've been seeing "sandboxes" everywhere and feeling confused like I was, maybe this helps. Or maybe I'm still missing something - I'm definitely still learning this stuff. What helped you understand sandboxes? Am I thinking about this wrong?
2025-11-09T17:33:25
https://www.reddit.com/r/LocalLLaMA/comments/1ospbev/i_got_confused_by_sandboxes_in_every_ai_article_i/
Individual-Library-1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ospbev
false
null
t3_1ospbev
/r/LocalLLaMA/comments/1ospbev/i_got_confused_by_sandboxes_in_every_ai_article_i/
false
false
self
0
null
There was a post not too long ago in this sub where some researchers from MIT or some university created a tool on top of qwen 2.5 that rivaled GPT 4.0 in web search or tool calling but I can’t find it.
1
If anyone remembers it or has the post saved, please reshare it here in the thread.
2025-11-09T17:22:20
https://www.reddit.com/r/LocalLLaMA/comments/1osp1cr/there_was_a_post_not_too_long_ago_in_this_sub/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osp1cr
false
null
t3_1osp1cr
/r/LocalLLaMA/comments/1osp1cr/there_was_a_post_not_too_long_ago_in_this_sub/
false
false
self
1
null
VRAM options for GLM 4.5V
0
Anybody have VRAM info for this model? I’ve got two Mi50 32GBs and a P100 16GB…
2025-11-09T16:44:52
https://www.reddit.com/r/LocalLLaMA/comments/1oso2id/vram_options_for_glm_45v/
thejacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oso2id
false
null
t3_1oso2id
/r/LocalLLaMA/comments/1oso2id/vram_options_for_glm_45v/
false
false
self
0
null
Does Kimi K2 Thinking not have access to their thoughts within the turn?
0
I like to test reasoning/thinking models on the level of control they have over their thoughts, by asking them to say something in the thoughts that they don't say in the message. Gemini and Claude are great at this. ChatGPT models can do it a little. But Chinese models often struggle and Kimi straight up refuses, saying they can't. And then I realized they don't see their thoughts at all, like have no idea what they just thought about. I'm kind of confused by this and wonder how thinking even works if the model doesn't see it after the second it's over in that same turn. Or am I understanding it wrong?
2025-11-09T16:44:48
https://www.reddit.com/r/LocalLLaMA/comments/1oso2gj/does_kimi_k2_thinking_not_have_access_to_their/
IllustriousWorld823
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oso2gj
false
null
t3_1oso2gj
/r/LocalLLaMA/comments/1oso2gj/does_kimi_k2_thinking_not_have_access_to_their/
false
false
self
0
null
We made a multi-agent framework . Here’s the demo. Break it harder.
0
Since we dropped Laddr about a week ago, a bunch of people on our last post said “cool idea, but show it actually working.” So we put together a short demo of how to get started with Laddr. **Demo video:** [https://www.youtube.com/watch?v=ISeaVNfH4aM](https://www.youtube.com/watch?v=ISeaVNfH4aM) **Repo:** [https://github.com/AgnetLabs/laddr](https://github.com/AgnetLabs/laddr) **Docs:** [https://laddr.agnetlabs.com](https://laddr.agnetlabs.com) Feel free to try weird workflows, force edge cases, or just totally break the orchestration logic. We’re actively improving based on what hurts. Also, tell us what you want to see Laddr do next. Browser agent? research assistant? something chaotic?
2025-11-09T16:36:34
https://www.youtube.com/watch?v=ISeaVNfH4aM
wikkid_lizard
youtube.com
1970-01-01T00:00:00
0
{}
1osnv17
false
{'oembed': {'author_name': 'AgnetLabs', 'author_url': 'https://www.youtube.com/@AgnetLabs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ISeaVNfH4aM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Laadr : Getting Started - AgnetLabs | Learning"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ISeaVNfH4aM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Laadr : Getting Started - AgnetLabs | Learning', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1osnv17
/r/LocalLLaMA/comments/1osnv17/we_made_a_multiagent_framework_heres_the_demo/
false
false
default
0
{'enabled': False, 'images': [{'id': '4BEVVDWWk0p8l-rmKtxNg3NJqXG3x5Xaq8LtHSZqFmg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/4BEVVDWWk0p8l-rmKtxNg3NJqXG3x5Xaq8LtHSZqFmg.jpeg?width=108&crop=smart&auto=webp&s=7716253e2bb874802d7df0d0326e8f47a25e6996', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/4BEVVDWWk0p8l-rmKtxNg3NJqXG3x5Xaq8LtHSZqFmg.jpeg?width=216&crop=smart&auto=webp&s=80ac3b88cc231fcef36656198bb39c2db90918a8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/4BEVVDWWk0p8l-rmKtxNg3NJqXG3x5Xaq8LtHSZqFmg.jpeg?width=320&crop=smart&auto=webp&s=e3a4d662253c1885f12d0b7f2ea6294a2977e1e3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/4BEVVDWWk0p8l-rmKtxNg3NJqXG3x5Xaq8LtHSZqFmg.jpeg?auto=webp&s=f6459667db5cb326132bf160603908dea3d94abe', 'width': 480}, 'variants': {}}]}
Mixing 3090s and mi60 on same machine in containers?
3
I have two 3090s and am considering a third. However, I'm thinking about dual MI60s for the same price as a third 3090, and using a container to run ROCm models. While I cannot combine the VRAM, I could run two separate models. There was a post a while back about having these in the same machine, but I thought this would be cleaner?
2025-11-09T16:33:31
https://www.reddit.com/r/LocalLLaMA/comments/1osns7l/mixing_3090s_and_mi60_on_same_machine_in/
Salt_Armadillo8884
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osns7l
false
null
t3_1osns7l
/r/LocalLLaMA/comments/1osns7l/mixing_3090s_and_mi60_on_same_machine_in/
false
false
self
3
null
Open-Weight AI Model Releases: November 1–8, 2025
14
Hey everyone,

Here are some of the interesting open-weight models and tools that dropped between November 1st and November 8th.

**November 1st**

* [**LongCat-Flash-Omni**](https://github.com/meituan-longcat/LongCat-Flash-Omni) (Meituan): A very powerful 560B parameter omni-modal model that uses a Mixture of Experts (MoE) architecture. It processes text, images, video, and audio in one stack and is designed for real-time, low-latency streaming.

**November 4th**

* [**Qwen-Image-2509-MultipleAngles LORA**](https://huggingface.co/dx8152/Qwen-Edit-2509-Multiple-angles) (Community release): A small LORA file that gives you camera-aware control over images. You can use it with the open-source Qwen-Image-Edit model to rotate, tilt, or zoom the camera's perspective while keeping the subject consistent.
* [**Maya 1**](https://huggingface.co/maya-research/maya1) (Maya Research): A 3-billion-parameter open-source text-to-speech model focused on "voice design." It allows for generating speech with fine-grained emotion tags like `[laugh]` or `[whisper]`, making it a strong open alternative to proprietary voice APIs.

**November 5th**

* [**Skyvern**](https://github.com/Skyvern-AI/skyvern) (Browser AI): An open-source (AGPL-3.0) agent for automating browser tasks. Unlike older tools that break when a website's layout changes, Skyvern uses Vision LLMs to *see* and interact with web pages, making it much more robust.

**November 6th**

* [**Kimi K2 Thinking**](https://huggingface.co/moonshotai/Kimi-K2-Thinking) (Moonshot AI): A state-of-the-art open-source model specifically built for complex, multi-step reasoning. It can perform hundreds of sequential tool calls to solve PhD-level problems. It was released as a native INT4 quantized model with a 256k context window.
* [**InfinityStar**](https://github.com/FoundationVision/InfinityStar) (Bytedance): An 8-billion-parameter open-source autoregressive model for generating high-res images and video. It uses a token-based approach instead of diffusion, which makes it about 10x faster for generating video.

**November 7th**

* [**Marvis-TTS-v0.2**](https://github.com/Marvis-Labs/marvis-tts) (Marvis-Labs): An open-source, real-time conversational TTS model designed for edge devices like phones or laptops. It's very small (414MB quantized) and is perfect for on-device voice assistants.
* [**Step-Audio-EditX**](https://github.com/stepfun-ai/Step-Audio-EditX) (StepFun): The first open-source, LLM-based model made for iterative audio editing. You can upload an audio file and use text prompts to change its emotion, speaking style, or other characteristics.

---

A big thanks to u/Acrobatic-Tomato4862 for helping me research and write this up.

That's the list for this week! If you spot any inaccuracies, please let me know.
2025-11-09T16:29:43
https://www.reddit.com/r/LocalLLaMA/comments/1osnolr/openweight_ai_model_releases_november_18_2025/
Duarteeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1osnolr
false
null
t3_1osnolr
/r/LocalLLaMA/comments/1osnolr/openweight_ai_model_releases_november_18_2025/
false
false
self
14
null
How to build an AI computer (version 2.0)
746
2025-11-09T16:28:27
https://i.redd.it/03t3yj51d90g1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1osnnfn
false
null
t3_1osnnfn
/r/LocalLLaMA/comments/1osnnfn/how_to_build_an_ai_computer_version_20/
false
false
https://b.thumbs.redditm…skMdN6BRNYGU.jpg
746
{'enabled': True, 'images': [{'id': 'wxW_Yn0M-SUA-ejB_FMKI_VEUTwqHyy4B05J82-VCQM', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/03t3yj51d90g1.png?width=108&crop=smart&auto=webp&s=e6b598751e0960a27c056a1543ce4b4ffcce7f35', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/03t3yj51d90g1.png?width=216&crop=smart&auto=webp&s=227933da952d02cb2a7a83a03abc0d78efb55cf9', 'width': 216}, {'height': 262, 'url': 'https://preview.redd.it/03t3yj51d90g1.png?width=320&crop=smart&auto=webp&s=24f37be636a58dab444728a9c408007bbebcde04', 'width': 320}, {'height': 525, 'url': 'https://preview.redd.it/03t3yj51d90g1.png?width=640&crop=smart&auto=webp&s=6d467717022f60865bdfcbb8d96bb265e2bdb541', 'width': 640}, {'height': 788, 'url': 'https://preview.redd.it/03t3yj51d90g1.png?width=960&crop=smart&auto=webp&s=b41fa9c157e150b77a46cd419d9a663045803f87', 'width': 960}, {'height': 887, 'url': 'https://preview.redd.it/03t3yj51d90g1.png?width=1080&crop=smart&auto=webp&s=e11e42e095f9bbacedc486938ee434eb82b6363f', 'width': 1080}], 'source': {'height': 1778, 'url': 'https://preview.redd.it/03t3yj51d90g1.png?auto=webp&s=992e194febbeb3f3b73d94eb54899e5c75005559', 'width': 2164}, 'variants': {}}]}
How to build an AI computer?
1
[removed]
2025-11-09T16:27:40
https://i.redd.it/zex5pctuc90g1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1osnmp6
false
null
t3_1osnmp6
/r/LocalLLaMA/comments/1osnmp6/how_to_build_an_ai_computer/
false
false
https://b.thumbs.redditm…OxSwSi1ESF7s.jpg
1
{'enabled': True, 'images': [{'id': 'cK-ys0QLUXJJRnH_C5UEwbVHqmaH6Co76YbADggCQIY', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/zex5pctuc90g1.png?width=108&crop=smart&auto=webp&s=edee3b335437efdabe0f1bb30b6de3674338cc45', 'width': 108}, {'height': 177, 'url': 'https://preview.redd.it/zex5pctuc90g1.png?width=216&crop=smart&auto=webp&s=2fbb97858f2a0f6b15f260d3a1d34c6434f535d7', 'width': 216}, {'height': 262, 'url': 'https://preview.redd.it/zex5pctuc90g1.png?width=320&crop=smart&auto=webp&s=9234df7a485f23e2776db54efa1d46084459d97a', 'width': 320}, {'height': 525, 'url': 'https://preview.redd.it/zex5pctuc90g1.png?width=640&crop=smart&auto=webp&s=16eed6c7cf74867c16401f1b53f3c755590e7345', 'width': 640}, {'height': 788, 'url': 'https://preview.redd.it/zex5pctuc90g1.png?width=960&crop=smart&auto=webp&s=7caa2bbc4b33150a779acd768779fd58870f021c', 'width': 960}, {'height': 887, 'url': 'https://preview.redd.it/zex5pctuc90g1.png?width=1080&crop=smart&auto=webp&s=a7ab60edd28741b7fba5a589656f711c36a8d402', 'width': 1080}], 'source': {'height': 1778, 'url': 'https://preview.redd.it/zex5pctuc90g1.png?auto=webp&s=70a5410ef9fcca291f050c8e022b52dfab381804', 'width': 2164}, 'variants': {}}]}