| column | dtype | min | max |
|:-|:-|:-|:-|
| title | stringlengths | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | stringlengths | 0 | 41.5k |
| created | timestamp[ns]date | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | stringlengths | 0 | 878 |
| author | stringlengths | 3 | 20 |
| domain | stringlengths | 0 | 82 |
| edited | timestamp[ns]date | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | stringclasses | 7 values | |
| id | stringlengths | 7 | 7 |
| locked | bool | 2 classes | |
| media | stringlengths | 646 | 1.8k |
| name | stringlengths | 10 | 10 |
| permalink | stringlengths | 33 | 82 |
| spoiler | bool | 2 classes | |
| stickied | bool | 2 classes | |
| thumbnail | stringlengths | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | stringlengths | 301 | 5.01k |
Announcing: Hack the Edge by AMD × Liquid AI - San Francisco 15-16th November
12
Hello r/LocalLLaMA! Join the AMD and Liquid teams at the Liquid AI office in SF for an exclusive hackathon, **Nov 15-16th.** Over these two days you will build unique local, private, and efficient AI applications directly on AMD hardware — with guidance from Liquid and AMD researchers. The challenge will be revealed on site. Winners receive their share of $5K. Apply to join 👇 [https://luma.com/smik3k94](https://luma.com/smik3k94)
2025-11-07T07:58:41
https://i.redd.it/tb6azm0yjszf1.jpeg
LiquidAI_Team
i.redd.it
1970-01-01T00:00:00
0
{}
1oqojnk
false
null
t3_1oqojnk
/r/LocalLLaMA/comments/1oqojnk/announcing_hack_the_edge_by_amd_liquid_ai_san/
false
false
default
12
{'enabled': True, 'images': [{'id': 'tb6azm0yjszf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/tb6azm0yjszf1.jpeg?width=108&crop=smart&auto=webp&s=c18c25643bdf350bb630b445d6038264c3d8bcaf', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/tb6azm0yjszf1.jpeg?width=216&crop=smart&auto=webp&s=b54f8834bd0e4a7ab274bc01a76315119037c87d', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/tb6azm0yjszf1.jpeg?width=320&crop=smart&auto=webp&s=fbd3d9f2f2d690d817f7d9703ea266a320ae4bef', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/tb6azm0yjszf1.jpeg?width=640&crop=smart&auto=webp&s=8fea13414021e95ebdcab649e10638f46da5615f', 'width': 640}], 'source': {'height': 800, 'url': 'https://preview.redd.it/tb6azm0yjszf1.jpeg?auto=webp&s=73a335c814ff373c7b0252875e9550955509193f', 'width': 800}, 'variants': {}}]}
Who are you
0
Describe yourself 😎
2025-11-07T07:48:52
https://www.reddit.com/r/LocalLLaMA/comments/1oqoe9l/who_are_you/
redteether
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqoe9l
false
null
t3_1oqoe9l
/r/LocalLLaMA/comments/1oqoe9l/who_are_you/
false
false
self
0
null
Problem Uploading PDFs in Self hosted AI
0
Hey everyone, I’ve been working on building a local knowledge base for my self-hosted AI running in OpenWebUI. I exported a large OneNote notebook to individual PDF files and then tried to upload them so the AI can use them as context.

Here’s the weird part: only the PDFs without any linked or embedded files (like Word or PDF attachments inside the OneNote page) upload successfully. Whenever a page had a file attachment or link in OneNote, the exported PDF fails to process in OpenWebUI with the error: “Extracted content is not available for this file. Please ensure that the file is processed before proceeding.” Even using Adobe Acrobat’s “Redact” or “Sanitize” options didn’t fix it.

My guess is that these PDFs still contain embedded objects or “Launch” annotations that the loader refuses for security reasons. Has anyone run into this before, or found a reliable way to strip attachments/annotations from OneNote-exported PDFs so they can be indexed normally in OpenWebUI? I’d love to keep the text but remove anything risky.
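For the stripping approach, here is a minimal sketch assuming pikepdf (`pip install pikepdf`); the library choice and the object names targeted are my assumptions about what the OneNote exports contain, so test it against one failing file first:

```python
# Sketch: strip embedded files and risky annotations from a PDF with pikepdf.
import pikepdf

def sanitize(src: str, dst: str) -> None:
    with pikepdf.open(src) as pdf:
        # Document-level attachments live under /Root/Names/EmbeddedFiles
        names = pdf.Root.get("/Names")
        if names is not None and "/EmbeddedFiles" in names:
            del names["/EmbeddedFiles"]
        for page in pdf.pages:
            annots = page.obj.get("/Annots")
            if annots is None:
                continue
            keep = []
            for a in annots:
                action = a.get("/A")
                is_launch = action is not None and action.get("/S") == pikepdf.Name.Launch
                if a.get("/Subtype") == pikepdf.Name.FileAttachment or is_launch:
                    continue  # drop attachment annotations and Launch actions
                keep.append(a)
            page.obj["/Annots"] = pdf.make_indirect(pikepdf.Array(keep))
        pdf.save(dst)

sanitize("onenote_page.pdf", "onenote_page_clean.pdf")
```

If the cleaned file still fails, the loader may be objecting to something else entirely, but this removes the two object types named above.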
2025-11-07T07:42:26
https://www.reddit.com/r/LocalLLaMA/comments/1oqoalt/problem_uploading_pdfs_in_self_hosted_ai/
stutau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqoalt
false
null
t3_1oqoalt
/r/LocalLLaMA/comments/1oqoalt/problem_uploading_pdfs_in_self_hosted_ai/
false
false
self
0
null
ubergarm/Kimi-K2-Thinking-GGUF · Hugging Face
142
Great job ngxson, compilade, DevQuasar, Bartowski, AesSedai, and more folks who pulled together hacking on this one today! 🫶 Only one quant released so far: `q4_0` for the routed experts and `q8_0` for everything else. This is because the original model is released at roughly this size at "full quality". I've tested the quant on both ik_llama.cpp and mainline llama.cpp, and inference works fine. Though it wasn't giving me any `<think>` or `</think>` tags, so you might have to fiddle with the template or something (the model card shows how to load whatever you want). I may try some smaller quants for ik_llama.cpp to see if they hold up despite the original model being QAT'd to ~4bpw. The "full size" weighs in at 543.617 GiB (4.549 BPW). Have fun!
2025-11-07T07:32:32
https://huggingface.co/ubergarm/Kimi-K2-Thinking-GGUF
VoidAlchemy
huggingface.co
1970-01-01T00:00:00
0
{}
1oqo57j
false
null
t3_1oqo57j
/r/LocalLLaMA/comments/1oqo57j/ubergarmkimik2thinkinggguf_hugging_face/
false
false
https://external-preview…972c7af305ead702
142
{'enabled': False, 'images': [{'id': '-6vnf_3yTWf3TtVUA6a-SCJQHQSGAkjtdxEpaCd4oLc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-6vnf_3yTWf3TtVUA6a-SCJQHQSGAkjtdxEpaCd4oLc.png?width=108&crop=smart&auto=webp&s=5d24ac0c601b0b50d622732e4b46649f229e2c36', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-6vnf_3yTWf3TtVUA6a-SCJQHQSGAkjtdxEpaCd4oLc.png?width=216&crop=smart&auto=webp&s=baa8b08d389400a76e6f14c932b47417c7e00e52', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-6vnf_3yTWf3TtVUA6a-SCJQHQSGAkjtdxEpaCd4oLc.png?width=320&crop=smart&auto=webp&s=02162e2bbf937afe41f10b7620ba7b5cf36a40da', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-6vnf_3yTWf3TtVUA6a-SCJQHQSGAkjtdxEpaCd4oLc.png?width=640&crop=smart&auto=webp&s=b1be9cda0b5e2a3434209c5a9d38f045a106ba74', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-6vnf_3yTWf3TtVUA6a-SCJQHQSGAkjtdxEpaCd4oLc.png?width=960&crop=smart&auto=webp&s=7c62cf1f0aab07c9930c6ad797e3bd3f2d0e664e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-6vnf_3yTWf3TtVUA6a-SCJQHQSGAkjtdxEpaCd4oLc.png?width=1080&crop=smart&auto=webp&s=0256ef982ed7fd35bdc1e5361ffc361ccbb5fa54', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-6vnf_3yTWf3TtVUA6a-SCJQHQSGAkjtdxEpaCd4oLc.png?auto=webp&s=bec46cab5e0cb5dd06bfd9c2881c2ca4ea626337', 'width': 1200}, 'variants': {}}]}
Cross-model agent workflows — anyone tried migrating prompts, embeddings, or fine-tunes?
1
Hey everyone, I’m exploring the challenges of moving AI workloads between models (OpenAI, Claude, Gemini, LLaMA). Specifically:

- Prompts and prompt chains
- Agent workflows / multi-step reasoning
- Context windows and memory
- Fine-tune & embedding reuse

Has anyone tried running the same workflow across multiple models? How did you handle differences in prompts, embeddings, or model behavior? Curious to learn what works, what breaks, and what’s missing in the current tools/frameworks. Any insights or experiences would be really helpful! Thanks in advance! 🙏
2025-11-07T07:27:37
https://www.reddit.com/r/LocalLLaMA/comments/1oqo2iy/crossmodel_agent_workflows_anyone_tried_migrating/
NoEntertainment8292
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqo2iy
false
null
t3_1oqo2iy
/r/LocalLLaMA/comments/1oqo2iy/crossmodel_agent_workflows_anyone_tried_migrating/
false
false
self
1
null
Why is the context (KV cache) vram amount for gpt-oss 120b so low
4
I’m running gpt-oss 120b with flash attention on (does that make the quality worse, or no?). No quantized KV cache, 37/37 layers offloaded to GPU (KV), n-cpu-moe set to 31.

VRAM usage: 15.6/15.99 GB. RAM usage: 59.0/64 GB (67 GB on Linux Mint for some reason). 22.2 tok/s at the beginning of a chat; I haven't tried long-context tasks yet.

(I'm on a laptop, so the built-in graphics handle the display and I get a bit more free VRAM on my mobile RTX 4090.)

Is this a glitch? Or why is it that I can set the context length to 128000?
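For anyone sanity-checking numbers like these, a back-of-the-envelope KV-cache estimator is below; the dimensions in the example call are illustrative assumptions, not gpt-oss 120B's actual config. One real effect to keep in mind: models that use sliding-window attention on some layers only cache a fixed window for those layers, which can pull the total well below the naive full-context figure.

```python
# Rough full-attention KV-cache size: 2 tensors (K and V), one entry per
# layer / KV head / head dim / position. Dimensions below are illustrative.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical example: 36 layers, 8 KV heads, head_dim 64, 128k ctx, fp16 cache
print(kv_cache_bytes(36, 8, 64, 131072) / 2**30, "GiB")  # 9.0 GiB
```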
2025-11-07T07:07:07
https://www.reddit.com/r/LocalLLaMA/comments/1oqnr30/why_is_the_context_kv_cache_vram_amount_for/
Adventurous-Gold6413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqnr30
false
null
t3_1oqnr30
/r/LocalLLaMA/comments/1oqnr30/why_is_the_context_kv_cache_vram_amount_for/
false
false
self
4
null
Anyone want to check out my model?
0
I'm curious if it will work well since I only tested everything in Korean! You guys are the experts, and I'm also genuinely curious if the model handles English well just by using word embeddings. What I've implemented so far is: **System Prompt** (added today), **Memory** (RAG), and **Answer Referencing** (to sources?). (I built a Chess engine too, but I lost interest, lol—it was a hybrid setup.) Now that I say it, it doesn't sound like I did much... Anyway! I'll drop the link below—come check it out if you're interested!
2025-11-07T06:47:47
https://www.reddit.com/r/LocalLLaMA/comments/1oqnfqo/anyone_want_to_check_out_my_model/
Patience2277
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqnfqo
false
null
t3_1oqnfqo
/r/LocalLLaMA/comments/1oqnfqo/anyone_want_to_check_out_my_model/
false
false
self
0
null
Built a local AI Benchmark Tool - just something I made for my own testing
1
Hey everyone! 👋 I’ve been tinkering with local LLMs for a while and decided to make a benchmarking dashboard for my own use, mainly to test and compare local AI models (GGUF, Transformers, and Diffusers). It’s called PKC MARK, and I built it to visualize performance metrics in a simple web interface.

🚀 Core Features

- Web-based UI: control everything through benchmark_canvas.html (Tailwind + Chart.js). Includes Summary, Log, Chart, and Comparison tabs.
- Automatic model detection: reads the folder in config.json and automatically recognizes models. Supports GGUF/llama.cpp models, Diffusers (image models), and Transformers (Hugging Face).
- Pipeline test support: can inject analysis-model results (like emotion detection) into LLM prompts for connected testing.
- Flexible options: model caching, sequential or parallel runs, VRAM retry, GPU layer config, seed control, and a pipeline toggle.
- Local only: no online API calls; everything runs fully offline.
- Result history & comparison: saves each test result locally and lets you compare past runs side by side.

📊 Performance Metrics (a sketch of how the TPS/TTFT numbers can be measured follows below)

| Metric | Description |
|:-|:-|
| VRAM Usage (GB) | GPU memory used during model execution |
| Model Load Time (s) | Time taken to load each model |
| Inference Speed (TPS) | Tokens-per-second throughput |
| TTFT (ms) | Time to first token, a latency measure |
| GPU Power (W) | GPU power draw during test |
| GPU Temp (°C) | Temperature monitoring |
| CPU Utilization (%) | CPU usage during runs |

⚙️ Requirements

- Python 3.11+
- NVIDIA GPU (CUDA 12.1 recommended)
- Libraries in requirements.txt (FastAPI, PyTorch, Diffusers, Transformers, etc.)

🧭 How It Works

1. Set your model directory in config.json.
2. Run start_server_windows.bat or python benchmark_server.py.
3. Open benchmark_canvas.html to launch the web dashboard.
4. Choose models, tweak parameters, and hit Start Benchmark.
5. Watch real-time charts and compare results after each run.

💬 Just Curious

I mainly made this for my own experiments, but I’d love to hear your thoughts: any interesting metrics I should add? What do you think of the dashboard layout? (No models or files shared; just a personal tool project!) — PKC 🍻
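As a rough illustration of what a TPS/TTFT probe can look like against a local OpenAI-compatible streaming endpoint (a minimal sketch; the URL and model name are placeholders, not PKC MARK's internals, and one streamed chunk is treated as roughly one token):

```python
# Minimal TTFT / decode-speed probe for a local OpenAI-compatible server.
import json, time, requests

def bench(prompt, url="http://localhost:8080/v1/chat/completions", model="local"):
    t0 = time.perf_counter()
    ttft, n_chunks = None, 0
    with requests.post(url, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }, stream=True) as r:
        for line in r.iter_lines():
            # SSE lines look like b"data: {...}"; skip keepalives and [DONE]
            if not line or not line.startswith(b"data: ") or line == b"data: [DONE]":
                continue
            delta = json.loads(line[6:])["choices"][0]["delta"]
            if delta.get("content"):
                if ttft is None:
                    ttft = time.perf_counter() - t0
                n_chunks += 1  # roughly one token per streamed chunk
    assert ttft is not None, "no tokens streamed"
    total = time.perf_counter() - t0
    return ttft, n_chunks / (total - ttft)

ttft, tps = bench("Write a haiku about VRAM.")
print(f"TTFT: {ttft * 1000:.0f} ms, decode: {tps:.1f} tok/s")
```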
2025-11-07T06:41:49
https://www.reddit.com/gallery/1oqnc62
According_Wait_7336
reddit.com
1970-01-01T00:00:00
0
{}
1oqnc62
false
null
t3_1oqnc62
/r/LocalLLaMA/comments/1oqnc62/built_a_local_ai_benchmark_tool_just_something_i/
false
false
https://b.thumbs.redditm…Tuu8m0pGvr9I.jpg
1
null
AI scientists week
4
3 new very cool systems this week in AI for science.

One, called Denario, is fully open source: [https://github.com/AstroPilot-AI/Denario](https://github.com/AstroPilot-AI/Denario)

Another is Kosmos from FutureHouse: [https://arxiv.org/abs/2511.02824](https://arxiv.org/abs/2511.02824)

And earlier today, AlphaEvolve's new paper: [https://arxiv.org/abs/2511.02864](https://arxiv.org/abs/2511.02864)

Any other suggestions for similar systems? Has anyone tried Google's co-scientist, etc.? I think Claude Code by itself is already pretty strong.
2025-11-07T06:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1oqn62k/ai_scientists_week/
Emergency_Brief_9141
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqn62k
false
null
t3_1oqn62k
/r/LocalLLaMA/comments/1oqn62k/ai_scientists_week/
false
false
self
4
{'enabled': False, 'images': [{'id': '590WTFf9OZI31Y7DFrphh32s8WegQ0MmRwVR1wy1d30', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/590WTFf9OZI31Y7DFrphh32s8WegQ0MmRwVR1wy1d30.png?width=108&crop=smart&auto=webp&s=77bc02cb808f68ded9bdc35c25ee2729eddd5f21', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/590WTFf9OZI31Y7DFrphh32s8WegQ0MmRwVR1wy1d30.png?width=216&crop=smart&auto=webp&s=56860794398451049ba12aa7f66d133c5581004d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/590WTFf9OZI31Y7DFrphh32s8WegQ0MmRwVR1wy1d30.png?width=320&crop=smart&auto=webp&s=f1e44711ee5364197fec4bf41059dfad91cfbbc2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/590WTFf9OZI31Y7DFrphh32s8WegQ0MmRwVR1wy1d30.png?width=640&crop=smart&auto=webp&s=5682e0adf04df74d8bebb9cdcfc9f16e07bb6ad0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/590WTFf9OZI31Y7DFrphh32s8WegQ0MmRwVR1wy1d30.png?width=960&crop=smart&auto=webp&s=621d510155d9e8d9232baad95eeb07a4f60f2936', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/590WTFf9OZI31Y7DFrphh32s8WegQ0MmRwVR1wy1d30.png?width=1080&crop=smart&auto=webp&s=c8ac34953daf7bc35a9c7385b338d3ce10ebd7c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/590WTFf9OZI31Y7DFrphh32s8WegQ0MmRwVR1wy1d30.png?auto=webp&s=e62df8bcc8795822a126a74a1c2216ee9759c6d3', 'width': 1200}, 'variants': {}}]}
Which VLM finetuning library is the best and ready to use?
1
Hello everyone! I would like to know which VLM finetuning library is easy to use. VLMs in consideration:

1. rednote-hilab/dots.ocr
2. PaddlePaddle/PaddleOCR-VL
3. lightonai/LightOnOCR-1B-1025
2025-11-07T06:17:31
https://www.reddit.com/r/LocalLLaMA/comments/1oqmxs1/which_vlm_finetuning_library_is_the_best_and/
Acceptable_Young_167
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqmxs1
false
null
t3_1oqmxs1
/r/LocalLLaMA/comments/1oqmxs1/which_vlm_finetuning_library_is_the_best_and/
false
false
self
1
null
Framework Ryzen AI 32gb
2
I’m thinking of getting the Framework Ryzen AI 32GB motherboard. I will be running an Ollama server, using Docker to run Home Assistant, Pi-hole, Frigate, and Ollama for local AI. I only plan to use AI for tool calls and basic questions. That’s it. This will be running 24/7. I don’t want to run a cloud LLM model. What do you think?
2025-11-07T05:55:46
https://www.reddit.com/r/LocalLLaMA/comments/1oqmjxf/framework_ryzen_ai_32gb/
Cute-Rip-5739
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqmjxf
false
null
t3_1oqmjxf
/r/LocalLLaMA/comments/1oqmjxf/framework_ryzen_ai_32gb/
false
false
self
2
null
Co-authored a book called "Build DeepSeek from Scratch" | Live Now
134
Book link: [https://hubs.la/Q03Rl_lh0](https://hubs.la/Q03Rl_lh0)

GitHub repository: [https://github.com/VizuaraAI/DeepSeek-From-Scratch](https://github.com/VizuaraAI/DeepSeek-From-Scratch)

Published by Manning Publications.
2025-11-07T05:45:39
https://i.redd.it/1felu4y3wrzf1.jpeg
OtherRaisin3426
i.redd.it
1970-01-01T00:00:00
0
{}
1oqmder
false
null
t3_1oqmder
/r/LocalLLaMA/comments/1oqmder/coauthored_a_book_called_build_deepseek_from/
false
false
default
134
{'enabled': True, 'images': [{'id': '1felu4y3wrzf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/1felu4y3wrzf1.jpeg?width=108&crop=smart&auto=webp&s=e1f6fcd16089fd14359191c9e651059c1ad475f5', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/1felu4y3wrzf1.jpeg?width=216&crop=smart&auto=webp&s=b9bd96852a73a5f54428ee78cc470f0fa892b32a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/1felu4y3wrzf1.jpeg?width=320&crop=smart&auto=webp&s=f71f9fb85083830c720fa9723b38d404127b1c32', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/1felu4y3wrzf1.jpeg?width=640&crop=smart&auto=webp&s=fd71bff3abb06e132c57c36de21150381fe19207', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/1felu4y3wrzf1.jpeg?width=960&crop=smart&auto=webp&s=6457de437f21f38c7a688d74b87d4792683bd028', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/1felu4y3wrzf1.jpeg?width=1080&crop=smart&auto=webp&s=df2e992b6eea56d25e31153242f743060e575798', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/1felu4y3wrzf1.jpeg?auto=webp&s=4e55cbaff1c0a6c981ba1fc7648a70c8c7983445', 'width': 1080}, 'variants': {}}]}
Hi everyone!
6
https://preview.redd.it/…ting with you! ✨
2025-11-07T05:44:27
https://www.reddit.com/r/LocalLLaMA/comments/1oqmcmt/hi_everyone/
According_Wait_7336
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqmcmt
false
null
t3_1oqmcmt
/r/LocalLLaMA/comments/1oqmcmt/hi_everyone/
false
false
https://a.thumbs.redditm…tA8f5tW3XZf4.jpg
6
null
Best way to run Whisper through Vulkan?
6
I have an AMD GPU and want to do some audio/video transcription locally. I've been using [const-me's](https://github.com/Const-me/Whisper) GUI, but it's currently abandonware and only really works with the ggml-medium model and nothing else. I'd like to use something more accurate like the ggml-large model (I do have enough VRAM for it), but the only other free option I've found that might work is whisper.cpp, which has been an absolute pain to get working (and this is coming from someone who had to jump through a bunch of hoops to get the Zluda version of ComfyUI working). Is there anything else out there that's up to date and works with Vulkan? If whisper.cpp really is the only thing, then I'll try to get it working, but I'd really like other options.
2025-11-07T05:38:00
https://www.reddit.com/r/LocalLLaMA/comments/1oqm8jh/best_way_to_run_whisper_through_vulkan/
AIgoonermaxxing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqm8jh
false
null
t3_1oqm8jh
/r/LocalLLaMA/comments/1oqm8jh/best_way_to_run_whisper_through_vulkan/
false
false
self
6
{'enabled': False, 'images': [{'id': 'lotoZHF7SaTrLu2PJE59YOkpuV9hD4btEn-nZSJ_PmU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lotoZHF7SaTrLu2PJE59YOkpuV9hD4btEn-nZSJ_PmU.png?width=108&crop=smart&auto=webp&s=d491975990adeb9aad11bf99258e76e85b00d182', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lotoZHF7SaTrLu2PJE59YOkpuV9hD4btEn-nZSJ_PmU.png?width=216&crop=smart&auto=webp&s=2bf37c2fc5945c31e3e50ba0675e0ecc8bcf283b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lotoZHF7SaTrLu2PJE59YOkpuV9hD4btEn-nZSJ_PmU.png?width=320&crop=smart&auto=webp&s=c860c948f3c4971061fdd580ad606b664df312c0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lotoZHF7SaTrLu2PJE59YOkpuV9hD4btEn-nZSJ_PmU.png?width=640&crop=smart&auto=webp&s=dc9ba6acf5b8e07892976f171728c1625a42208b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lotoZHF7SaTrLu2PJE59YOkpuV9hD4btEn-nZSJ_PmU.png?width=960&crop=smart&auto=webp&s=3e83c173dfda16d3d2daabe702ffb3ffe0e414cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lotoZHF7SaTrLu2PJE59YOkpuV9hD4btEn-nZSJ_PmU.png?width=1080&crop=smart&auto=webp&s=7478459a42aedd0993be75f83a2a77be14db112b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lotoZHF7SaTrLu2PJE59YOkpuV9hD4btEn-nZSJ_PmU.png?auto=webp&s=a3123a00c234b3f46c3a709768cc8d374b222aad', 'width': 1200}, 'variants': {}}]}
Looking into a homeserver capable of 70b parameters
5
I'm hoping to create a home server for ~$1000 to run inference models on. I'd like to avoid heavily quantized models if possible. So far, I've found the Intel A770 to be the best-priced option for the GPU; three of those would run ~$600-700. I know the minimum recommended for the 70b Llama models is 48GB of VRAM, so I would barely be meeting that. My biggest issue has been trying to find a server that would support the graphics cards. The Dell Precision T7910 seems like the best bet so far, though I'm worried about available 8-pin connectors for three cards. Each card takes two 8-pin connectors, and my research suggests the T7910 has 5 connectors total. Any clarification on whether this server would support my load would be appreciated. Otherwise, any recommendations for other servers or graphics cards would be great. Since I won't have Tensor or CUDA cores, I'm assuming I wouldn't be able to train a model with decent efficiency? I'd love input on using Intel cards on Linux for inference models.
2025-11-07T05:24:00
https://www.reddit.com/r/LocalLLaMA/comments/1oqlzrd/looking_into_a_homeserver_capable_of_70b/
nstein5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqlzrd
false
null
t3_1oqlzrd
/r/LocalLLaMA/comments/1oqlzrd/looking_into_a_homeserver_capable_of_70b/
false
false
self
5
null
Where to train a model on the cloud?
1
[removed]
2025-11-07T04:51:24
https://www.reddit.com/r/LocalLLaMA/comments/1oqle4d/where_to_train_a_model_on_the_cloud/
Ok_Construction_3021
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqle4d
false
null
t3_1oqle4d
/r/LocalLLaMA/comments/1oqle4d/where_to_train_a_model_on_the_cloud/
false
false
self
1
null
🚀 Introducing SGLang-Jax — Open-source JAX/TPU engine for LLM inference
5
Hi everyone,

We’re building SGLang-Jax — an open-source project that brings SGLang’s high-performance LLM serving to Google TPU via JAX/XLA.

✨ Highlights:
• Fast LLM inference on TPU (batching, caching, LoRA, etc.)
• Pure JAX + XLA implementation (no PyTorch dependency)
• Lower cost vs. GPU deployment
• Still early-stage — lots of space for contributors to make a real impact

🛠️ Want to get involved? We welcome:
• Issues, feature requests, and bug reports
• PRs (we have `good-first-issue` labels)
• Ideas, design discussions, or feedback

📌 Links (GitHub, blog, contact email) are in the first comment to avoid Reddit spam filters. If you're into TPU, JAX, or LLM systems — we'd love to collaborate!
2025-11-07T04:36:30
https://www.reddit.com/r/LocalLLaMA/comments/1oql49w/introducing_sglangjax_opensource_jaxtpu_engine/
RamezesDong666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oql49w
false
null
t3_1oql49w
/r/LocalLLaMA/comments/1oql49w/introducing_sglangjax_opensource_jaxtpu_engine/
false
false
self
5
null
Is there a way to create a chatbot integrated into my website using a local LLM?
2
Hi! I am a complete novice to the space. I am currently using commercial software to train an AI chatbot on select files and serve as a chatbot to answer customer questions. For the sake of privacy, and to not be limited by inquiry caps, I want to run my own model. My question is: can I run a local LLM and then have a chat screen integrated into my website? Is there any tool out there that allows me to do this? I really appreciate any help or direction towards helpful resources. TIA
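One common pattern, sketched below under assumptions (the endpoint URL, model name, and system prompt are all placeholders): run a local OpenAI-compatible server (llama.cpp's server, Ollama, or vLLM) and put a small backend route between it and the chat widget on your site:

```python
# Minimal chat backend bridging a website widget to a local OpenAI-compatible
# server. URL and model name are placeholders; adapt to your local setup.
from fastapi import FastAPI
from pydantic import BaseModel
import requests

app = FastAPI()
LOCAL_LLM = "http://localhost:8080/v1/chat/completions"

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest):
    r = requests.post(LOCAL_LLM, json={
        "model": "local",
        "messages": [
            {"role": "system", "content": "Answer customer questions using our docs."},
            {"role": "user", "content": req.message},
        ],
    }, timeout=120)
    return {"reply": r.json()["choices"][0]["message"]["content"]}
```

The website side is then just a `fetch("/chat", ...)` call from your chat widget; to ground answers in your select files, you'd add a retrieval step before the LLM call.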
2025-11-07T04:34:35
https://www.reddit.com/r/LocalLLaMA/comments/1oql2z8/is_there_a_way_to_create_a_chatbot_integrated/
FailingupwardsPHD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oql2z8
false
null
t3_1oql2z8
/r/LocalLLaMA/comments/1oql2z8/is_there_a_way_to_create_a_chatbot_integrated/
false
false
self
2
null
🚀 Introducing SGLang-Jax — Open-source JAX/TPU engine for LLM inference (help wanted!)
1
[removed]
2025-11-07T04:33:58
https://www.reddit.com/r/LocalLLaMA/comments/1oql2ks/introducing_sglangjax_opensource_jaxtpu_engine/
RamezesDong666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oql2ks
false
null
t3_1oql2ks
/r/LocalLLaMA/comments/1oql2ks/introducing_sglangjax_opensource_jaxtpu_engine/
false
false
self
1
null
We’re Entering the Era of Autonomous SaaS 24/7 Agents, Infinite Scale.
0
Honestly, I think people are missing the point: AI agents won’t shrink enterprise software, they’ll expand it. Once SaaS tools start pairing AI agents with their existing workflows, the ceiling disappears. You’re not limited by how many humans can use a product anymore; agents can run 24/7, pushing usage and value way beyond what we’ve seen. The winners will be the ones who merge systems of record with autonomous execution. That combo is the future of enterprise software.
2025-11-07T04:27:26
https://i.redd.it/y5u82rrlirzf1.jpeg
Ok-Breakfast-4676
i.redd.it
1970-01-01T00:00:00
0
{}
1oqky3z
false
null
t3_1oqky3z
/r/LocalLLaMA/comments/1oqky3z/were_entering_the_era_of_autonomous_saas_247/
false
false
default
0
{'enabled': True, 'images': [{'id': 'y5u82rrlirzf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/y5u82rrlirzf1.jpeg?width=108&crop=smart&auto=webp&s=b641052567693a1f7d0757cd235ed98b713079b7', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/y5u82rrlirzf1.jpeg?width=216&crop=smart&auto=webp&s=2d74ef2aef9a0d66a9d6c469d2be380f07ea7a32', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/y5u82rrlirzf1.jpeg?width=320&crop=smart&auto=webp&s=0169d06756af110a9e4f5478a89a693c87e823bb', 'width': 320}, {'height': 373, 'url': 'https://preview.redd.it/y5u82rrlirzf1.jpeg?width=640&crop=smart&auto=webp&s=c94d804cdb6675d8c0032493658ddb8d3b9e8e95', 'width': 640}, {'height': 560, 'url': 'https://preview.redd.it/y5u82rrlirzf1.jpeg?width=960&crop=smart&auto=webp&s=89753f5642dcb7bea15723b3ff8b78bf8a67465f', 'width': 960}, {'height': 630, 'url': 'https://preview.redd.it/y5u82rrlirzf1.jpeg?width=1080&crop=smart&auto=webp&s=aad2a0c9bce6347fab8125311703b2cb492aaabf', 'width': 1080}], 'source': {'height': 640, 'url': 'https://preview.redd.it/y5u82rrlirzf1.jpeg?auto=webp&s=a4e5070cf262822a7437ab78d6cc42133080a838', 'width': 1096}, 'variants': {}}]}
kat-coder, as in KAT-Coder-Pro V1 is trash and is scamming clueless people at an exorbitant $0.98/$3.8 per million tokens
14
I want to thank Novita for making this model free for some time, but this model is not worth using even as a free model. Kwai should absolutely be crucified for the prices they were trying to charge for this model, or will be trying to charge if they don't change their prices.

https://preview.redd.it/jhmzz5o6erzf1.png?width=1215&format=png&auto=webp&s=3a81506c5db2c32fef3c6a492fe5fdb683250f58

This is my terminal-bench run on kat-coder using their API with the terminus-2 harness: only 28.75%, the lowest score I've tested to date. This would not be a big deal if the model were cheaper or only slightly worse, since some models might do worse at some kinds of coding tasks, but this is abhorrently bad. For comparison (including a lot of the worst-scoring runs I've had):

* Qwen3 Coder from the NVIDIA NIM API scores 37.5%, the same score Qwen reports in the model card. Keep in mind this uses the terminus-2 harness, which works well with most models, but Qwen3 Coder models in particular seem to underperform with any agent that isn't the qwen3-code CLI. This model is free from the NVIDIA NIM API for unlimited use, or 2000 requests per day via Qwen OAuth.
* Qwen3 Coder 30B A3B scores 31.3% with the same harness. Please tell me how on earth kat-coder is worse than a very easily run, small local MoE. Significantly worse, too: a 2.55% score difference is a large gap.
* DeepSeek V3.1 Terminus from NVIDIA NIM with the same harness scores 36.25%. This is another model handicapped by the terminus-2 harness; it works better with things like aider, etc. It's also way cheaper per API call than kat-coder, or just completely free via NVIDIA NIM.
* Kimi K2 with terminus-2 from the NVIDIA NIM API scores 41.25% in my tests; Moonshot got 44.5% in their first-party testing.
* minimax m2:free from OpenRouter: 43.75%.

$0.98/$3.8 per million tokens (the price we will be paying after this free usage period if it goes back to the original cost) is absolutely disgusting; this is more expensive than all the models mentioned here. Seriously, there are so many better free options. I would not be surprised if this is just another checkpoint of their 72B model that scored a little higher in their eval harness against some cherry-picked benchmarks, released as a "high end" coding model to make money off vibe coders who fall victim to confirmation bias. I still have all my terminal-bench sessions saved and can prove my results are real. I also ran kat-coder and most of these models more than once, so I can verify they're accurate.
2025-11-07T04:11:06
https://www.reddit.com/r/LocalLLaMA/comments/1oqkmt6/katcoder_as_in_katcoderpro_v1_is_trash_and_is/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqkmt6
false
null
t3_1oqkmt6
/r/LocalLLaMA/comments/1oqkmt6/katcoder_as_in_katcoderpro_v1_is_trash_and_is/
false
false
https://b.thumbs.redditm…088FnLZpIrNY.jpg
14
null
128GB RAM costs ~$1000 & Strix Halo costs $1600 in total
33
We all know RAM prices have gone up quite a bit, e.g.: [https://pcpartpicker.com/product/WTMMnQ/corsair-vengeance-rgb-64-gb-2-x-32-gb-ddr5-6000-cl30-memory-cmh64gx5m2b6000c30](https://pcpartpicker.com/product/WTMMnQ/corsair-vengeance-rgb-64-gb-2-x-32-gb-ddr5-6000-cl30-memory-cmh64gx5m2b6000c30)

How is it possible that a Strix Halo machine with 128GB costs $1699? E.g.: [https://www.gmktec.com/products/amd-ryzen%E2%84%A2-ai-max-395-evo-x2-ai-mini-pc?srsltid=AfmBOopMa5dg-W23Ck2BDBNK2wWvPAnToenYsT16yQ-_mreQ8HR7gD9v](https://www.gmktec.com/products/amd-ryzen%E2%84%A2-ai-max-395-evo-x2-ai-mini-pc?srsltid=AfmBOopMa5dg-W23Ck2BDBNK2wWvPAnToenYsT16yQ-_mreQ8HR7gD9v)

LPDDR5X, 8000MHz
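Put in per-gigabyte terms, using the post's own numbers:

```python
# Quick per-GB comparison from the figures above.
dimm = 1000 / 128        # ~ $7.8/GB for bare DDR5 DIMMs
strix = 1699 / 128       # ~ $13.3/GB, but that buys an entire mini PC
print(f"DIMMs: ${dimm:.1f}/GB, Strix Halo machine: ${strix:.1f}/GB")
```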
2025-11-07T04:04:41
https://www.reddit.com/r/LocalLLaMA/comments/1oqki9e/128gb_ram_costs_1000_strix_halo_costs_1600_in/
johnnytshi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqki9e
false
null
t3_1oqki9e
/r/LocalLLaMA/comments/1oqki9e/128gb_ram_costs_1000_strix_halo_costs_1600_in/
false
false
self
33
null
Polaris Alpha Review
2
Polaris Alpha, a stealth AI model launched on OpenRouter on November 6, 2025, is generating significant excitement in AI communities for its capabilities in coding, tool calling, and instruction adherence. As a cloaked model from an undisclosed provider, it features a 256,000-token context window and free access ($0 per million input/output tokens), and is designed for feedback collection, with all prompts logged for potential improvements. Community speculation heavily favors it being a preview of OpenAI's GPT-5.1, based on naming conventions (e.g., similar to "Horizon-Alpha"), exclusive developer-message features, and performance traits.

From my sessions, Polaris Alpha shines in practicality. Its 256k context window means you can throw massive docs or codebases at it without losing track—huge for my dev side projects. The informal tone? Endearing at first ("Love this question!"), but it grows on you, making interactions feel less robotic. Tool calling is seamless; I tested it with a mock API-integration prompt, and it nailed the logic without fluff. Instruction following is spot-on too—no more prompt tweaks to avoid derails.

But it's alpha, so expect quirks. One prompt for a complex data analysis stumbled on edge cases, needing a retry. Still, compared to older models, it hallucinates way less, which warms my heart after too many "confidently wrong" responses from others.

Pros: free access, lightning-fast responses, and excellence in technical tasks—it's like a turbocharged assistant for coders and thinkers. The community feedback echoes this; users report it outperforming in benchmarks like Korean language tests or SVG generation.

Cons: being cloaked means no transparency on origins or training data, and prompts are logged for improvements (fair warning for privacy folks). It's not open-weights, so no local runs yet, and very niche prompts might need massaging.

Speculations and My Predictions

The elephant in the room: who's behind it? Community sleuths on X and Reddit point to OpenAI—naming patterns like "Horizon-Alpha" from past previews, exclusive developer messages, and that sky-themed vibe (Polaris is a star, after all). Some guess Grok or Google, but the 256k context doesn't scream Gemini's million-token beasts. I lean OpenAI too; it has that polished feel without the censorship baggage.

Predicting ahead: if this is GPT-5.1 teasing, expect a full rollout soon with even better multi-modal chops. It could disrupt coding tools or agents, making them more intuitive. But I'm skeptical about over-hype—alphas often evolve, and privacy logging might turn off some. Still, it's reignited my passion for experimenting; imagine pairing it with local tools once (if) weights drop.

[https://openrouter.ai/openrouter/polaris-alpha](https://preview.redd.it/dgvxmm7s7rzf1.png?width=1274&format=png&auto=webp&s=f1c5c0ca6004eb36a49beb11c463cbae6c85a7b5)
2025-11-07T03:28:36
https://www.reddit.com/r/LocalLLaMA/comments/1oqjsif/polaris_alpha_review/
learningtolivee101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqjsif
false
null
t3_1oqjsif
/r/LocalLLaMA/comments/1oqjsif/polaris_alpha_review/
false
false
https://b.thumbs.redditm…u5LHiYjOLTeo.jpg
2
null
Polaris Alpha Review
1
[removed]
2025-11-07T03:20:16
https://www.reddit.com/r/LocalLLaMA/comments/1oqjmk7/polaris_alpha_review/
mlrunlisted1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqjmk7
false
null
t3_1oqjmk7
/r/LocalLLaMA/comments/1oqjmk7/polaris_alpha_review/
false
false
https://b.thumbs.redditm…ypLruVYWHelY.jpg
1
null
30 days to become AI engineer
245
I’m moving from 12 years in cybersecurity (big tech) into a Staff AI Engineer role. I have 30 days (~16h/day) to get production-ready, prioritizing context engineering, RAG, and reliable agents. I need a focused path: the few resources, habits, and pitfalls that matter most. If you’ve done this or ship real LLM systems, how would you spend the 30 days?
2025-11-07T03:12:00
https://www.reddit.com/r/LocalLLaMA/comments/1oqjgnh/30_days_to_become_ai_engineer/
CayleneKole
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqjgnh
false
null
t3_1oqjgnh
/r/LocalLLaMA/comments/1oqjgnh/30_days_to_become_ai_engineer/
false
false
self
245
null
OpenAI testing new model, probably wanting to give more open source
5
https://preview.redd.it/…difficult tasks.
2025-11-07T02:26:54
https://www.reddit.com/r/LocalLLaMA/comments/1oqiint/open_ai_testing_new_model_properly_wanting_to/
Vozer_bros
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqiint
false
null
t3_1oqiint
/r/LocalLLaMA/comments/1oqiint/open_ai_testing_new_model_properly_wanting_to/
false
false
https://b.thumbs.redditm…CTwQIQrn-XIM.jpg
5
null
Kimi 2 is the #1 creative writing AI right now. better than sonnet 4.5
465
Just tried Kimi 2 and I'm genuinely impressed. It's the best creative-writing AI I've used—better than Sonnet 4.5, better than anything else out there. And it's dirt cheap compared to Sonnet. I never thought a cheap, open model would beat Anthropic at writing. I don't do coding as much, but its understanding is so strong that it's probably capable there too. This is amazing for us consumers. The giants now have to slash prices significantly or lose to China. At this pace, we'll see locally run LLMs outperforming current top models in months. That's terrible for big companies like OpenAI and Anthropic—they'll need AGI or something massively better to justify their cost difference, or cut prices down by at least half for now. This market is unpredictable and wild. With US and Chinese companies pushing each other like this and not holding back, AI will become so powerful so fast that we won't have to do anything ourselves anymore.
2025-11-07T02:20:29
https://www.reddit.com/r/LocalLLaMA/comments/1oqiduq/kimi_2_is_the_1_creative_writing_ai_right_now/
Excellent-Run7265
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqiduq
false
null
t3_1oqiduq
/r/LocalLLaMA/comments/1oqiduq/kimi_2_is_the_1_creative_writing_ai_right_now/
false
false
self
465
null
My Hands-On Review of Kimi K2 Thinking: The Open-Source AI That's Changing the Game
12
# Overview

As someone who's tested numerous AI models, Kimi K2 Thinking stands out for its balance of power and efficiency. Released by Moonshot AI on November 6, 2025, it's designed as a "thinking agent" with a 1-trillion-parameter MoE architecture, activating 32 billion parameters per inference. This allows it to run on reasonable hardware while delivering impressive results in reasoning and tool use.

# Key Strengths

In my tests, it handled up to 300 sequential tool calls without losing coherence, a big improvement over prior models. For coding, it achieved high scores like 71.3% on SWE-Bench Verified, and I saw it generate functional games and fix bugs seamlessly. It's available on Hugging Face and supports OpenAI-compatible APIs, making integration straightforward.

# Getting Started

Download from Hugging Face or try via the Moonshot API. Check the docs at platform.moonshot.ai for setup.

Hey r/LocalLLaMA, I've been tinkering with AI models for years, and Moonshot AI's Kimi K2 Thinking, launched on November 6, 2025, has genuinely impressed me. Positioned as an open-source "thinking agent," it specializes in deep reasoning, autonomous tool orchestration, and coding. After running it on my setup with two M3 Ultras at around 15 tokens per second, I can vouch for its efficiency and capabilities. The 256K context window handled large projects without hiccups, and its native INT4 quantization provided a 2x speedup in inference without compromising quality.

What sets it apart is the Mixture-of-Experts (MoE) architecture: 61 layers, 7168 attention hidden dimension, 384 experts selecting 8 per token, SwiGLU activation, and a 160K vocabulary. This setup, with 1 trillion total parameters but only 32 billion active, makes it resource-friendly yet powerful. In my sessions, it chained 200-300 tool calls autonomously, interleaving chain-of-thought with functions for tasks like research or writing.

[Kimi K2 — Open-Source Agentic Model | by Shravan Kumar | Medium](https://preview.redd.it/s53z4idbtqzf1.png?width=1400&format=png&auto=webp&s=10e6b11229f8971e433349f14671e6f96b8045ad)

# Technical Dive

The model's checkpoints are in compressed-tensors format, and I easily converted them to FP8/BF16 for testing. It supports frameworks like vLLM and SGLang, and the turbo variant hit 171 tokens/second with 2.17-second first-token latency—faster than competitors like MiniMax-M2. Hardware requirements are manageable, under 600GB for weights, which is great for hobbyists.

In hands-on experiments, I tasked it with building a Space Invaders game in HTML/JavaScript—it delivered working code in one prompt. For creative tasks, it generated editable SVGs and even replicated a macOS interface with file management. Multilingual coding shone through, handling Japanese seamlessly and producing human-like emotional writing.

# Benchmark Insights

I verified several benchmarks myself, and the results were consistent with reports. It scored 44.9% on Humanity's Last Exam with tools, outperforming Claude Sonnet 4.5 in agentic search (60.2% on BrowseComp vs. 24.1%). Math tasks were strong, with 99.1% on AIME25 using Python. While it edges GPT-5 in some areas like GPQA Diamond (85.7% vs. 84.5%), users on X have noted occasional long-context weaknesses.
[5 Thoughts on Kimi K2 Thinking - by Nathan Lambert](https://preview.redd.it/lt29qbudtqzf1.png?width=1920&format=png&auto=webp&s=8ff070c0931247250a0f2ee48e1362482a9bf940) Here's a table of key benchmarks from my evaluation: |Benchmark|Setting|Score|Notes| |:-|:-|:-|:-| |Humanity's Last Exam (Text-only)|No tools|23.9%|Solid baseline reasoning.| |Humanity's Last Exam|With tools|44.9%|Beats proprietary models in expert questions.| |HLE (Heavy)|—|51.0%|Enhanced with parallel trajectories.| |AIME25|No tools|94.5%|Excellent math performance.| |AIME25|With Python|99.1%|Near-perfect tool-assisted.| |HMMT25|No tools|89.4%|Tournament-level math prowess.| |BrowseComp|With tools|60.2%|Superior to GPT-5 (54.9%).| |BrowseComp-ZH|With tools|62.3%|Strong in Chinese browsing.| |SWE-Bench Verified|With tools|71.3%|Agentic coding leader.| |MMLU-Pro|No tools|84.6%|Broad knowledge base.| |GPQA Diamond|—|85.7%|Matches top closed models.| |LiveCodeBench v6|—|83.1%|Competitive programming strength.| # Community Feedback and Implications On X, the buzz is positive—posts highlight its macOS replication and game generation. Experts discuss its role in AI timelines, with open-source now rivaling closed models, potentially accelerating innovation while questioning proprietary dominance. Enterprises like Airbnb are exploring similar tech for cost savings. The Modified MIT License allows commercial use with attribution for large deployments, democratizing access. However, potential benchmark biases and hardware needs are worth noting. Overall, I'd rate it 9/10 for open-source AI—transformative, but with room for recall improvements in ultra-long tasks. https://preview.redd.it/5qocbotltqzf1.png?width=1280&format=png&auto=webp&s=d68b50858d33f6639aff9f7aac5bb69bc1358d64 For access, head to Hugging Face, kimi.com, or the API at platform.moonshot.ai.
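Since the API is OpenAI-compatible, a smoke test can look roughly like this (a sketch only: the base URL and the `kimi-k2-thinking` model identifier are my assumptions, so confirm both against the platform.moonshot.ai docs):

```python
# Rough smoke test against Moonshot's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.moonshot.ai/v1",   # assumed endpoint, check the docs
    api_key="YOUR_MOONSHOT_API_KEY",
)
resp = client.chat.completions.create(
    model="kimi-k2-thinking",                # assumed identifier, check the model list
    messages=[{"role": "user", "content": "Outline a 3-step research plan."}],
)
print(resp.choices[0].message.content)
```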
2025-11-07T02:08:35
https://www.reddit.com/r/LocalLLaMA/comments/1oqi4qp/my_handson_review_of_kimi_k2_thinking_the/
Radiant-Act4707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqi4qp
false
null
t3_1oqi4qp
/r/LocalLLaMA/comments/1oqi4qp/my_handson_review_of_kimi_k2_thinking_the/
false
false
https://b.thumbs.redditm…KvGwDOtPOSQs.jpg
12
null
Creating longer videos
0
Hello, I'm curious what you guys think is the best platform to create 15-minute videos on history topics. I'm aware I will need to stitch together shorter clips. LTX seems promising, but I'm curious how fast I would use up the 11,000 credits in the pro plan.
2025-11-07T01:53:12
https://www.reddit.com/r/LocalLLaMA/comments/1oqhsqs/creating_longer_videos/
Fluid_Egg_4343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqhsqs
false
null
t3_1oqhsqs
/r/LocalLLaMA/comments/1oqhsqs/creating_longer_videos/
false
false
self
0
null
Best sub-3b local model for a Python code-fix agent on M2 Pro 16 GB? Considering Qwen3-0.6B
1
Hi everyone! I want to build a tiny local agent as a proof of concept. The goal is simple: build the pipeline and run quick tests for an agent that fixes Python code. I am not chasing SOTA, just something that works reliably at a very small size.

My machine:

* MacBook Pro 16-inch, 2023
* Apple M2 Pro
* 16 GB unified memory
* macOS Sequoia

What I am looking for:

* Around 2-3b params or less
* Backend: Ollama or llama.cpp
* Context 4k-8k tokens

Models I am considering:

* Qwen3-0.6B as a minimal baseline.
* Is there a Qwen3-style tiny model with a “thinking” or deliberate variant, or a coder-flavored tiny model similar to Qwen3-Coder-30B but around 2-3b params?
* Would Qwen2.5-Coder-1.5B already be a better practical choice for Python bug fixing than Qwen3-0.6B?

Bonus:

* Your best pick for Python repair at this size, and why.
* Recommended quantization, e.g., Q4_K_M vs. Q5, and whether an 8-bit KV cache helps.
* Real-world tokens per second you see on an M2 Pro for your suggested model and quant.

Appreciate any input and help! I just need a dependable tiny model to get the local agent pipeline running.
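For the pipeline itself, a minimal bounded repair loop might look like this (a sketch assuming the `ollama` Python package and the `qwen2.5-coder:1.5b` tag, one of the candidates above; real use would also strip markdown fences from the model's reply):

```python
# Tiny code-fix agent loop: run a snippet, feed the traceback to a small
# local model via Ollama, retry a bounded number of times.
import subprocess, sys, tempfile
import ollama

MODEL = "qwen2.5-coder:1.5b"  # one of the candidate tiny coder models

def propose_fix(code: str, error: str) -> str:
    resp = ollama.chat(model=MODEL, messages=[
        {"role": "system", "content": "Return only corrected Python code, no prose."},
        {"role": "user", "content": f"Code:\n{code}\n\nError:\n{error}\n\nFix it."},
    ])
    return resp["message"]["content"]

def run_snippet(code: str) -> str:
    # Write to a temp file and return stderr (empty string means success).
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    proc = subprocess.run([sys.executable, f.name], capture_output=True, text=True)
    return proc.stderr

code = "print(1 +)"            # deliberately broken input
for _ in range(3):             # bounded repair loop
    err = run_snippet(code)
    if not err:
        break
    code = propose_fix(code, err)
print(code)
```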
2025-11-07T01:48:16
https://www.reddit.com/r/LocalLLaMA/comments/1oqhp2g/best_sub3b_local_model_for_a_python_codefix_agent/
podolskyd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqhp2g
false
null
t3_1oqhp2g
/r/LocalLLaMA/comments/1oqhp2g/best_sub3b_local_model_for_a_python_codefix_agent/
false
false
self
1
null
New Kimi K2 Thinking is Pretty Disappointing. Much worse than Kimi 0905
0
2025-11-07T01:44:29
https://i.redd.it/kb1wuj4bpqzf1.png
balianone
i.redd.it
1970-01-01T00:00:00
0
{}
1oqhm5s
false
null
t3_1oqhm5s
/r/LocalLLaMA/comments/1oqhm5s/new_kimi_k2_thinking_are_pretty_disappointing/
false
false
https://b.thumbs.redditm…jwEpmBB7iOtA.jpg
0
{'enabled': True, 'images': [{'id': 'FV_yQ0U1QQjXcCLWd-rmkYflvP6cNRNWGjp3owxCtF4', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/kb1wuj4bpqzf1.png?width=108&crop=smart&auto=webp&s=30f1932ecf6e0c4b6659c0ab0376e6313b1aff37', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/kb1wuj4bpqzf1.png?width=216&crop=smart&auto=webp&s=b1da0e7f17934e941630b61107aed5d674d3c3fa', 'width': 216}, {'height': 375, 'url': 'https://preview.redd.it/kb1wuj4bpqzf1.png?width=320&crop=smart&auto=webp&s=08bcd0fdc399d698e26c58139c249eb93a09edec', 'width': 320}, {'height': 751, 'url': 'https://preview.redd.it/kb1wuj4bpqzf1.png?width=640&crop=smart&auto=webp&s=42dae4c3e9a9847a7c1a5bc49a6f6fe1fed96e52', 'width': 640}], 'source': {'height': 998, 'url': 'https://preview.redd.it/kb1wuj4bpqzf1.png?auto=webp&s=9617b94d6422c4f31731728b1fa826a0eb045184', 'width': 850}, 'variants': {}}]}
RzenEmbed-v2-7B (multimodal embedding)
10
2025-11-07T00:56:21
https://huggingface.co/qihoo360/RzenEmbed
the__storm
huggingface.co
1970-01-01T00:00:00
0
{}
1oqgk4c
false
null
t3_1oqgk4c
/r/LocalLLaMA/comments/1oqgk4c/rzenembedv27b_multimodal_embedding/
false
false
default
10
{'enabled': False, 'images': [{'id': 'zjYj496B25Dh4S-OXammKuVuVC52psHmoaE_lB7iL4E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zjYj496B25Dh4S-OXammKuVuVC52psHmoaE_lB7iL4E.png?width=108&crop=smart&auto=webp&s=374a20b3cd57199765f012452aadad2563d981f2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zjYj496B25Dh4S-OXammKuVuVC52psHmoaE_lB7iL4E.png?width=216&crop=smart&auto=webp&s=a41a3d3244ee8688a4b7b94572a574fff4c53a28', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zjYj496B25Dh4S-OXammKuVuVC52psHmoaE_lB7iL4E.png?width=320&crop=smart&auto=webp&s=7f82510a7de6860611e01852f8b3f15816e2ea56', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zjYj496B25Dh4S-OXammKuVuVC52psHmoaE_lB7iL4E.png?width=640&crop=smart&auto=webp&s=d9be59fb9af9942dfe0de415b392058c2cfbbf90', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zjYj496B25Dh4S-OXammKuVuVC52psHmoaE_lB7iL4E.png?width=960&crop=smart&auto=webp&s=a77ad3b4d97b6e489ab3786774b9b02ab7a55425', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zjYj496B25Dh4S-OXammKuVuVC52psHmoaE_lB7iL4E.png?width=1080&crop=smart&auto=webp&s=98b7df433b35172dd03d48d44835cf63d93f67d3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zjYj496B25Dh4S-OXammKuVuVC52psHmoaE_lB7iL4E.png?auto=webp&s=5e8dacd8a7b5f152da710242ee9ddb36d26284aa', 'width': 1200}, 'variants': {}}]}
I built a copilot for Linear app
0
I use Linear (the project management app) almost every day at my company and absolutely love it. Lately I’ve been hacking around with different MCPs to see what I can build, so I tried the same with the Linear MCP. Over the weekend, I connected Linear’s MCP to the C1 Generative UI API and built a small interactive copilot. Now I can ask Linear anything about the projects I’m working on in plain English. I can explore issues, visualize data, and actually interact with everything instead of scrolling through text. I honestly think more copilots should work like this. What do you think? Which products you’ve used so far have the best copilot? Link if you'd like to try it: [https://console.thesys.dev/playground?sid=-N7oNjfXVV5zwhwaUcYFt](https://console.thesys.dev/playground?sid=-N7oNjfXVV5zwhwaUcYFt)
2025-11-07T00:21:20
https://www.reddit.com/r/LocalLLaMA/comments/1oqfrte/i_built_a_copilot_for_linear_app/
AviusAnima
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqfrte
false
null
t3_1oqfrte
/r/LocalLLaMA/comments/1oqfrte/i_built_a_copilot_for_linear_app/
false
false
self
0
null
Continuous Autoregressive Language Models
1
[removed]
2025-11-06T23:45:19
https://arxiv.org/abs/2510.27688
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1oqex7h
false
null
t3_1oqex7h
/r/LocalLLaMA/comments/1oqex7h/continuous_autoregressive_language_models/
false
false
default
1
null
SGLang is integrating ktransformers for hybrid CPU/GPU inference
26
This is really exciting news (if you have 2TB of RAM ...)! I know 2TB is huge, but it's still "more manageable" than VRAM (also, technically you only need 1TB I think). Based on this [PR (WIP)](https://github.com/sgl-project/sglang/issues/11425), it seems it's possible to run the **latest Kimi K2 Thinking** with [SGLang with ktransformers CPU kernels](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/Kimi-K2-Thinking.md).

To give you some context: right now, the main way to run LLMs for the GPU poor (us), but RAM rich (whoever snagged some before the hike), is GGUF with llama.cpp. But that comes with a few compromises: we need to wait for the quants, and if a model has a new architecture, this can take quite some time. Not to forget, quality usually takes a hit (although ik_llama and unsloth UD are neat).

Now, beside [vllm](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/) (arguably the best GPU inference engine), [SGLang](https://github.com/sgl-project/sglang), from researchers at top universities (UC Berkeley, Stanford, etc.), is relatively new, and it seems they're collaborating with the creators of Kimi K2 and [ktransformers](https://github.com/kvcache-ai/ktransformers) (I didn't know they had the same team behind them) to provide more scalable hybrid inference! And it's even possible to [LoRA finetune it](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/SFT_Installation_Guide_KimiK2.md)! Of course, if you have 2TB of RAM.

Anyway, the performance from their testing:

**Their System Configuration:**
- GPUs: 8× NVIDIA L20
- CPU: Intel(R) Xeon(R) Gold 6454S

**Bench prefill**

```
============ Serving Benchmark Result ============
Backend:                                 sglang
Traffic request rate:                    inf
Max request concurrency:                 not set
Successful requests:                     37
Benchmark duration (s):                  65.58
Total input tokens:                      37888
Total input text tokens:                 37888
Total input vision tokens:               0
Total generated tokens:                  37
Total generated tokens (retokenized):    37
Request throughput (req/s):              0.56
Input token throughput (tok/s):          577.74
Output token throughput (tok/s):         0.56
Total token throughput (tok/s):          578.30
Concurrency:                             23.31
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   41316.50
Median E2E Latency (ms):                 41500.35
---------------Time to First Token----------------
Mean TTFT (ms):                          41316.48
Median TTFT (ms):                        41500.35
P99 TTFT (ms):                           65336.31
---------------Inter-Token Latency----------------
Mean ITL (ms):                           0.00
Median ITL (ms):                         0.00
P95 ITL (ms):                            0.00
P99 ITL (ms):                            0.00
Max ITL (ms):                            0.00
==================================================
```

**Bench decode**

```
============ Serving Benchmark Result ============
Backend:                                 sglang
Traffic request rate:                    inf
Max request concurrency:                 not set
Successful requests:                     37
Benchmark duration (s):                  412.66
Total input tokens:                      370
Total input text tokens:                 370
Total input vision tokens:               0
Total generated tokens:                  18944
Total generated tokens (retokenized):    18618
Request throughput (req/s):              0.09
Input token throughput (tok/s):          0.90
Output token throughput (tok/s):         45.91
Total token throughput (tok/s):          46.80
Concurrency:                             37.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   412620.35
Median E2E Latency (ms):                 412640.56
---------------Time to First Token----------------
Mean TTFT (ms):                          3551.87
Median TTFT (ms):                        3633.59
P99 TTFT (ms):                           3637.37
---------------Inter-Token Latency----------------
Mean ITL (ms):                           800.53
Median ITL (ms):                         797.89
P95 ITL (ms):                            840.06
P99 ITL (ms):                            864.96
Max ITL (ms):                            3044.56
==================================================
```
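Once a server like that is up, SGLang exposes an OpenAI-compatible API (port 30000 by default), so a quick sanity check is just a few lines; the `model` field below is a placeholder for whatever you served:

```python
# Minimal request against a locally launched SGLang server
# (default port 30000, OpenAI-compatible routes).
import requests

resp = requests.post("http://localhost:30000/v1/chat/completions", json={
    "model": "default",  # placeholder; use your served model name
    "messages": [{"role": "user", "content": "Say hello in five words."}],
    "max_tokens": 32,
})
print(resp.json()["choices"][0]["message"]["content"])
```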
2025-11-06T23:43:49
https://www.reddit.com/r/LocalLLaMA/comments/1oqevxz/sglang_is_integrating_ktransformers_for_hybrid/
waiting_for_zban
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqevxz
false
null
t3_1oqevxz
/r/LocalLLaMA/comments/1oqevxz/sglang_is_integrating_ktransformers_for_hybrid/
false
false
self
26
{'enabled': False, 'images': [{'id': 'PONLA4FKqakn5cHnrIhITjxDyBwNVk3WF7VQI0nh1EY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PONLA4FKqakn5cHnrIhITjxDyBwNVk3WF7VQI0nh1EY.png?width=108&crop=smart&auto=webp&s=6d4155943a756c20485fda174b5b9512fa3f643d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PONLA4FKqakn5cHnrIhITjxDyBwNVk3WF7VQI0nh1EY.png?width=216&crop=smart&auto=webp&s=75f30c6fa85ea8c6aea10fa013349ef85221e58e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PONLA4FKqakn5cHnrIhITjxDyBwNVk3WF7VQI0nh1EY.png?width=320&crop=smart&auto=webp&s=b8497f72c2699c11dde8ee067b384c1211bbc004', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PONLA4FKqakn5cHnrIhITjxDyBwNVk3WF7VQI0nh1EY.png?width=640&crop=smart&auto=webp&s=020a7782745c27e4d9e21ddbde5866b17b083688', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PONLA4FKqakn5cHnrIhITjxDyBwNVk3WF7VQI0nh1EY.png?width=960&crop=smart&auto=webp&s=3f05b0f8300b8ef6644ec3aaa878152df8a4b9fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PONLA4FKqakn5cHnrIhITjxDyBwNVk3WF7VQI0nh1EY.png?width=1080&crop=smart&auto=webp&s=1ad9d23636b8557029623ce73a58752d18f39e7b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PONLA4FKqakn5cHnrIhITjxDyBwNVk3WF7VQI0nh1EY.png?auto=webp&s=b34f254ef03aeca7a01a4c809750a6d16555014a', 'width': 1200}, 'variants': {}}]}
Context Engineering
1
[removed]
2025-11-06T23:32:44
https://www.reddit.com/r/LocalLLaMA/comments/1oqemgb/context_engineering/
WorkRelatedEmails
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqemgb
false
null
t3_1oqemgb
/r/LocalLLaMA/comments/1oqemgb/context_engineering/
false
false
self
1
null
What's your definition of Context Engineering?
1
[removed]
2025-11-06T23:27:46
[deleted]
1970-01-01T00:00:00
0
{}
1oqeiac
false
null
t3_1oqeiac
/r/LocalLLaMA/comments/1oqeiac/whats_your_definition_of_context_engineering/
false
false
default
1
null
Zero-Configuration AI
1
[removed]
2025-11-06T23:26:52
https://www.reddit.com/r/LocalLLaMA/comments/1oqehix/zeroconfiguration_ai/
NorthComplaint7631
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqehix
false
null
t3_1oqehix
/r/LocalLLaMA/comments/1oqehix/zeroconfiguration_ai/
false
false
https://b.thumbs.redditm…46GAltK2D-Lo.jpg
1
null
World's strongest agentic model is now open source
1,427
2025-11-06T23:20:15
https://i.redd.it/jd607rvrzpzf1.jpeg
Charuru
i.redd.it
1970-01-01T00:00:00
0
{}
1oqebr3
false
null
t3_1oqebr3
/r/LocalLLaMA/comments/1oqebr3/worlds_strongest_agentic_model_is_now_open_source/
false
false
default
1,427
{'enabled': True, 'images': [{'id': 'jd607rvrzpzf1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/jd607rvrzpzf1.jpeg?width=108&crop=smart&auto=webp&s=84df62efa7b1d413024c0f6646bc06a71b4098d6', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/jd607rvrzpzf1.jpeg?width=216&crop=smart&auto=webp&s=cb16dd1034c30e6c454ab90ab48d403efff425f7', 'width': 216}, {'height': 137, 'url': 'https://preview.redd.it/jd607rvrzpzf1.jpeg?width=320&crop=smart&auto=webp&s=e1733839fc78646a8058ca0a0fad9420fee7ea60', 'width': 320}, {'height': 274, 'url': 'https://preview.redd.it/jd607rvrzpzf1.jpeg?width=640&crop=smart&auto=webp&s=f84c70ace26fdbd5db78313787e58d2403961e38', 'width': 640}, {'height': 411, 'url': 'https://preview.redd.it/jd607rvrzpzf1.jpeg?width=960&crop=smart&auto=webp&s=dabe1b4626a28159315a23133c55e32b01344289', 'width': 960}, {'height': 463, 'url': 'https://preview.redd.it/jd607rvrzpzf1.jpeg?width=1080&crop=smart&auto=webp&s=c11f23fcd6e906ad255e9a5269607eebee7d0854', 'width': 1080}], 'source': {'height': 1757, 'url': 'https://preview.redd.it/jd607rvrzpzf1.jpeg?auto=webp&s=a2a84967cec6ddfc7e0b513b4eb3216098611035', 'width': 4096}, 'variants': {}}]}
1 second voice-to-voice latency with all open models & frameworks
23
Voice-to-voice latency needs to be under a certain threshold for conversational agents to sound natural; a general target is 1s or less. The Modal team wanted to see how fast we could get an STT > LLM > TTS pipeline working with self-deployed, open models only: [https://modal.com/blog/low-latency-voice-bot](https://modal.com/blog/low-latency-voice-bot)

We used:

- Parakeet-tdt-v3\* [STT]
- Qwen3-4B-Instruct-2507 [LLM]
- Kokoro [TTS]

plus Pipecat, an open-source voice AI framework, to orchestrate these services.

*\* An interesting finding is that Parakeet (paired with VAD for segmentation) was so fast, it beat the open-weights streaming models we tested!*

Getting down to 1s latency required optimizations along several axes 🪄 (a toy per-stage budgeting sketch follows below):

* Streaming vs. non-streaming STT models
* Colocating VAD (voice activity detection) with Pipecat vs. with the STT service
* Different parameterizations for vLLM, the inference engine we used
* Optimizing audio chunk size and silence clipping for TTS
* Using WebRTC for client-to-bot communication. We used SmallWebRTC, an open-source transport from Daily.
* Using WebSockets for streaming inputs and outputs of the STT and TTS services
* Pinning all our services to the same region

While we ran all the services on Modal, we think that many of these latency optimizations are relevant no matter where you deploy!
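As a toy illustration of the per-stage budgeting this involves, here's a self-contained sketch; all three stage functions are stand-ins with made-up sleep times, not the real Parakeet/Qwen/Kokoro clients:

```python
import time

# Hypothetical stand-ins with made-up latencies; swap in real client calls
# to your deployed STT / LLM / TTS services.
def stt(audio: bytes) -> str: time.sleep(0.15); return "hello there"
def llm(text: str) -> str: time.sleep(0.35); return "Hi! How can I help?"
def tts(text: str) -> bytes: time.sleep(0.20); return b"\x00" * 16000

def timed(name, fn, arg):
    t0 = time.perf_counter()
    out = fn(arg)
    print(f"{name}: {(time.perf_counter() - t0) * 1000:.0f} ms")
    return out

t0 = time.perf_counter()
text = timed("STT", stt, b"")        # audio in -> transcript
reply = timed("LLM", llm, text)      # transcript -> response text
audio = timed("TTS", tts, reply)     # response text -> audio out
print(f"total: {(time.perf_counter() - t0) * 1000:.0f} ms")  # budget: <1000 ms
```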
2025-11-06T23:16:45
https://www.reddit.com/r/LocalLLaMA/comments/1oqe8o2/1_second_voicetovoice_latency_with_all_open/
crookedstairs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqe8o2
false
null
t3_1oqe8o2
/r/LocalLLaMA/comments/1oqe8o2/1_second_voicetovoice_latency_with_all_open/
false
false
self
23
{'enabled': False, 'images': [{'id': '3_CcnOSSBz2tx7_7L-6YC4ZbKobLx1nwmOcdChYlIBs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3_CcnOSSBz2tx7_7L-6YC4ZbKobLx1nwmOcdChYlIBs.png?width=108&crop=smart&auto=webp&s=764609e711ea77618f489cb0f28e88bf5964722f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3_CcnOSSBz2tx7_7L-6YC4ZbKobLx1nwmOcdChYlIBs.png?width=216&crop=smart&auto=webp&s=ced8427d63316c1c31923e1b279d1b00bbb8dbbf', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3_CcnOSSBz2tx7_7L-6YC4ZbKobLx1nwmOcdChYlIBs.png?width=320&crop=smart&auto=webp&s=8112e377d0fb76d964dd63c5a3ff6b5c800582b5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3_CcnOSSBz2tx7_7L-6YC4ZbKobLx1nwmOcdChYlIBs.png?width=640&crop=smart&auto=webp&s=cca787136861dff8d921be5128646bca65f99bcf', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3_CcnOSSBz2tx7_7L-6YC4ZbKobLx1nwmOcdChYlIBs.png?width=960&crop=smart&auto=webp&s=b370e48fcfd56f10aab55e006fa086e93fc17fd9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3_CcnOSSBz2tx7_7L-6YC4ZbKobLx1nwmOcdChYlIBs.png?width=1080&crop=smart&auto=webp&s=24d35aae127873a311221951cc4b4be68a440a91', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/3_CcnOSSBz2tx7_7L-6YC4ZbKobLx1nwmOcdChYlIBs.png?auto=webp&s=84553bfb31ff65ce015043142b01e0f77d660e0f', 'width': 2400}, 'variants': {}}]}
No negative impact using Oculink eGPU: A quick test.
11
Hi, I have seen mixed information about the impact of using oculink for our local LLM projects. Well, just today I connected an RTX 3090 through oculink to my RTX A6000 SFF PC, and here are some llama.cpp benchmarks using Gemma3 27B Q8:

|model|size|params|test|t/s|gpu_config|devices|build|
|:-|:-|:-|:-|:-|:-|:-|:-|
|gemma3 27B Q8_0|26.73 GiB|27.01 B|pp2048|1396.93|1× RTX A6000|CUDA_VISIBLE_DEVICES=0|7f09a680a (6970)|
|gemma3 27B Q8_0|26.73 GiB|27.01 B|pp8192|1341.08|1× RTX A6000|CUDA_VISIBLE_DEVICES=0|7f09a680a (6970)|
|gemma3 27B Q8_0|26.73 GiB|27.01 B|pp16384|1368.39|1× RTX A6000|CUDA_VISIBLE_DEVICES=0|7f09a680a (6970)|
|gemma3 27B Q8_0|26.73 GiB|27.01 B|tg128|20.68|1× RTX A6000|CUDA_VISIBLE_DEVICES=0|7f09a680a (6970)|
|gemma3 27B Q8_0|26.73 GiB|27.01 B|pp2048|2360.41|A6000 + 3090|CUDA_VISIBLE_DEVICES=0,1|7f09a680a (6970)|
|gemma3 27B Q8_0|26.73 GiB|27.01 B|pp8192|2466.44|A6000 + 3090|CUDA_VISIBLE_DEVICES=0,1|7f09a680a (6970)|
|gemma3 27B Q8_0|26.73 GiB|27.01 B|pp16384|2547.94|A6000 + 3090|CUDA_VISIBLE_DEVICES=0,1|7f09a680a (6970)|
|gemma3 27B Q8_0|26.73 GiB|27.01 B|tg128|22.74|A6000 + 3090|CUDA_VISIBLE_DEVICES=0,1|7f09a680a (6970)|

I think this is a good setup for a test, as the two GPUs are fairly close in power and Gemma3 is a relatively large model that fits in 8-bit in the 48GB of VRAM.

As you can see, I got a significant increase with both GPUs enabled. This was surprising to me, as I was expecting the results to be about the same. Yes, the 3090 is a bit faster, but it is also running on a 4x PCIe 4.0 oculink connection.

These are the commands I used in case anyone is wondering:

    CUDA_VISIBLE_DEVICES=0,1 \
    ./bin/llama-bench \
    -m /PATH/gemma-3-27b-it-Q8_0.gguf \
    -t 1 -fa 1 \
    -b 1024 -ub 512 \
    -sm layer \
    -ngl 99 \
    -ts 0.5/0.5 \
    -p 2048,8192,16384

    ---

    ~/llamacpp$ CUDA_VISIBLE_DEVICES=0 \
    ./bin/llama-bench \
    -m /PATH/gemma-3-27b-it-Q8_0.gguf \
    -t 1 -fa 1 \
    -b 1024 -ub 512 \
    -sm layer \
    -ngl 99 \
    -p 2048,8192,16384
2025-11-06T23:08:48
https://www.reddit.com/r/LocalLLaMA/comments/1oqe1kq/no_negative_impact_using_oculink_egpu_a_quick_test/
MexInAbu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqe1kq
false
null
t3_1oqe1kq
/r/LocalLLaMA/comments/1oqe1kq/no_negative_impact_using_oculink_egpu_a_quick_test/
false
false
self
11
null
Now gemini-3-pro-preview-11-2025 works on Gemini CLI
5
tweet: [https://x.com/sigridjin\_eth/status/1986564626449113126](https://x.com/sigridjin_eth/status/1986564626449113126)
2025-11-06T22:47:49
https://www.reddit.com/gallery/1oqdj85
Ok_Rub1689
reddit.com
1970-01-01T00:00:00
0
{}
1oqdj85
false
null
t3_1oqdj85
/r/LocalLLaMA/comments/1oqdj85/now_gemini3propreview112025_works_on_gemini_cli/
false
false
https://b.thumbs.redditm…BEw9ogd_4spg.jpg
5
null
Is there a way to run 2x 6000 pro blackwells without going Epyc/Threadripper?
2
I know the proper way is to go the Epyc/Threadripper route, but those are very expensive and I'd rather wait for the Epyc Venice release next year anyway before dropping that kind of cash. I'm currently running a single 6000 Pro Blackwell on a regular MSI X870 with 256GB RAM and an AMD 9950X CPU, but because of the design of that motherboard I cannot install a second Blackwell on it (it's blocked by a PCIE_PWR1 connector). And yes, I know there are not enough PCIe lanes on consumer hardware anyway to run two cards at PCIe 5.0 x16, but I'm thinking maybe even with fewer lanes there's some setup that sort of works, or is it a hard no? Has anyone had any luck getting 2x 6000 Pro Blackwell running on regular consumer-grade hardware? If so, what is your setup like?
2025-11-06T22:44:14
https://www.reddit.com/r/LocalLLaMA/comments/1oqdg2w/is_there_a_way_to_run_2x_6000_pro_blackwells/
jbak31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqdg2w
false
null
t3_1oqdg2w
/r/LocalLLaMA/comments/1oqdg2w/is_there_a_way_to_run_2x_6000_pro_blackwells/
false
false
self
2
null
Community-driven robot simulations are finally here (EnvHub in LeRobot)
5
Hey everyone! I’m Jade from the LeRobot team at Hugging Face, we just launched EnvHub! It lets you upload simulation environments to the Hugging Face Hub and load them directly in LeRobot with one line of code. We genuinely believe that solving robotics will come through *collaborative work* and that starts with **you**, the community. By uploading your environments (in Isaac, MuJoCo, Genesis, etc.) and making it compatible with LeRobot, we can all build toward a shared library of complex, compatible tasks for training and evaluating robot policies in LeRobot. If someone uploads a robot pouring water task, and someone else adds folding laundry or opening drawers, we suddenly have a growing playground where anyone can train, evaluate, and compare their robot policies. Fill out the form in the comments if you’d like to join the effort! Twitter announcement: [https://x.com/jadechoghari/status/1986482455235469710](https://x.com/jadechoghari/status/1986482455235469710) Back in 2017, OpenAI called on the community to build Gym environments. Today, we’re doing the same for robotics.
2025-11-06T22:36:37
https://www.reddit.com/r/LocalLLaMA/comments/1oqd9g1/communitydriven_robot_simulations_are_finally/
Soft-Worth-4872
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqd9g1
false
null
t3_1oqd9g1
/r/LocalLLaMA/comments/1oqd9g1/communitydriven_robot_simulations_are_finally/
false
false
self
5
null
My custom browser just leveled up 🍄
0
Previously, I shared my custom browser that can solve text captchas. Today, I've enhanced it to also solve image-grid/object captchas using a built-in local vision model. I tested it with 2-3 different captcha providers, and the accuracy is approximately 68% with the 2B model. Please note that this is for research purposes only; I'll keep playing to see how to get past 80%.
2025-11-06T22:22:35
https://v.redd.it/nsgduz6jopzf1
ahstanin
v.redd.it
1970-01-01T00:00:00
0
{}
1oqcx0v
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nsgduz6jopzf1/DASHPlaylist.mpd?a=1765059769%2CZDYxYTE2ZDQ3MzMzYTgxYWE1YzdjNjE2NzJkOTQ4MTM0N2M4OGJlOWJlZWEzMThhYzM0ZGFhZDQ1ZTczMDI4OA%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/nsgduz6jopzf1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/nsgduz6jopzf1/HLSPlaylist.m3u8?a=1765059769%2CZTIzYzNlNGMyMzk2MDJiM2RiNjZjODgwMjNjYjk2MGE5Y2QzM2NmZDlmZWY2Yzc4NDljOTNiMjA0NTQ4NTRlNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nsgduz6jopzf1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1oqcx0v
/r/LocalLLaMA/comments/1oqcx0v/my_custom_browser_just_leveled_up/
false
false
https://external-preview…fda609ee7b9e1ce7
0
{'enabled': False, 'images': [{'id': 'bTJ0dm56NmpvcHpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bTJ0dm56NmpvcHpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=108&crop=smart&format=pjpg&auto=webp&s=c0a54e1402de1a090c52b4bc83d317b0e852c94a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bTJ0dm56NmpvcHpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=216&crop=smart&format=pjpg&auto=webp&s=fbc39f27e361e843a8e1de559f0a8992c25e9400', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bTJ0dm56NmpvcHpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=320&crop=smart&format=pjpg&auto=webp&s=8a2a0118be9fe7d4cd780d72ac4cb035eed3388e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bTJ0dm56NmpvcHpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=640&crop=smart&format=pjpg&auto=webp&s=8795ee49ea1a1b95d891806b88d87420255f990c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bTJ0dm56NmpvcHpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=960&crop=smart&format=pjpg&auto=webp&s=0da81ad40b49f019481d39ec2a262ed4905e832d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bTJ0dm56NmpvcHpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2929ddff8615d261c49a56e85260bcb71037a5e1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bTJ0dm56NmpvcHpmMdNxnrzMjLr17_JtbU8OmCEgIoG0KPgtKAaiVeQ-d0Gp.png?format=pjpg&auto=webp&s=debf1d53376b41aba2ce0b5d09345f25ca0ff5f1', 'width': 1920}, 'variants': {}}]}
Intel Arc Pro B60 Benchmarks + Review
6
2025-11-06T22:06:08
https://www.igorslab.de/intel-arc-pro-b60-im-workstation-test-mit-technikanalyse-und-teardown-kampf-der-kleinen-arbeitstiere-unter-1000-euro/
reps_up
igorslab.de
1970-01-01T00:00:00
0
{}
1oqci89
false
null
t3_1oqci89
/r/LocalLLaMA/comments/1oqci89/intel_arc_pro_b60_benchmarks_review/
false
false
https://external-preview…b7ca5629c376f7a0
6
{'enabled': False, 'images': [{'id': 'QBNwRhZrHVBWUAv8NtmSOCezIxNUpbf6gsv9rxnhZk8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QBNwRhZrHVBWUAv8NtmSOCezIxNUpbf6gsv9rxnhZk8.jpeg?width=108&crop=smart&auto=webp&s=ee0b897a039f0905a2c840c5ca4d71e7029d9e49', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/QBNwRhZrHVBWUAv8NtmSOCezIxNUpbf6gsv9rxnhZk8.jpeg?width=216&crop=smart&auto=webp&s=d4a5f540f715ddbdad4b6ee8dbe95c5a7fb2f603', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/QBNwRhZrHVBWUAv8NtmSOCezIxNUpbf6gsv9rxnhZk8.jpeg?width=320&crop=smart&auto=webp&s=dc90e2fa119d6d64ba560d6bd5f2cda33228393a', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/QBNwRhZrHVBWUAv8NtmSOCezIxNUpbf6gsv9rxnhZk8.jpeg?width=640&crop=smart&auto=webp&s=248c30c78d813ac3471bc256af5c88eef110d694', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/QBNwRhZrHVBWUAv8NtmSOCezIxNUpbf6gsv9rxnhZk8.jpeg?width=960&crop=smart&auto=webp&s=374e75177d23b174fc49ad3db9605c536192af43', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/QBNwRhZrHVBWUAv8NtmSOCezIxNUpbf6gsv9rxnhZk8.jpeg?width=1080&crop=smart&auto=webp&s=852c8982b7bca617af662d4f7515bd4a41a12376', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/QBNwRhZrHVBWUAv8NtmSOCezIxNUpbf6gsv9rxnhZk8.jpeg?auto=webp&s=d2b2dc1fd105fd83f575ea3316521cd3e170c2f5', 'width': 2560}, 'variants': {}}]}
Just want to take a moment to express gratitude for this tech
103
What a time to be alive! I was just randomly reflecting today: a single file with just a bunch of numbers can be used to make poems, apps, reports and so much more. And that's just LLMs. The same applies to image, video, speech, music, audio, 3D models and whatever else can be expressed digitally.

Anyone can do this with publicly available downloads and software. You don't need sophisticated computers or hardware. Possibly most insane of all is that you can do all of this for free.

This is just utter insanity. If you had told me this would be the ecosystem before this wave happened, I would have never believed you. Regardless of how things evolve, I think we should be immensely grateful for all of this.
2025-11-06T22:00:28
https://www.reddit.com/r/LocalLLaMA/comments/1oqcd1y/just_want_to_take_a_moment_to_express_gratitude/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqcd1y
false
null
t3_1oqcd1y
/r/LocalLLaMA/comments/1oqcd1y/just_want_to_take_a_moment_to_express_gratitude/
false
false
self
103
null
6 AI Agent Guides from Google, Anthropic, Microsoft, etc. Released This Week
1
[removed]
2025-11-06T21:50:15
https://www.reddit.com/r/LocalLLaMA/comments/1oqc3iv/6_ai_agent_guides_from_google_anthropic_microsoft/
sarthakai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqc3iv
false
null
t3_1oqc3iv
/r/LocalLLaMA/comments/1oqc3iv/6_ai_agent_guides_from_google_anthropic_microsoft/
false
false
self
1
null
Here's a workaround for broken GPT-OSS-20b/120b structured outputs.
3
Made a simple endpoint mirror that makes structured outputs work in LM Studio (or llama.cpp) for GPT-OSS GGUFs: https://github.com/shihanqu/GPT-OSS-Structure-Repair-Mirror/tree/main

It improves JSON schema compliance for GPT-OSS 20B from about 0% to 90%, according to the default test in the [Structured JSON Tester](https://www.reddit.com/r/LocalLLaMA/comments/1of3r61/test_results_for_various_models_ability_to_give/).
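For anyone who wants to reproduce the compliance numbers by hand, a minimal probe against an OpenAI-compatible local endpoint looks roughly like this (a sketch: the port, model name, and schema are illustrative, and the mirror from the repo sits between your client and LM Studio/llama.cpp):

    # Ask for structured output via the OpenAI-style json_schema response format
    curl http://localhost:1234/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-oss-20b",
        "messages": [{"role": "user", "content": "Invent a user."}],
        "response_format": {
          "type": "json_schema",
          "json_schema": {
            "name": "user",
            "schema": {
              "type": "object",
              "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"}
              },
              "required": ["name", "age"]
            }
          }
        }
      }'

A compliant reply must parse as JSON matching the schema; the linked tester just automates that kind of check across many prompts.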
2025-11-06T21:46:36
https://www.reddit.com/r/LocalLLaMA/comments/1oqc07y/heres_a_workaround_for_broken_gptoss20b120b/
zenmagnets
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqc07y
false
null
t3_1oqc07y
/r/LocalLLaMA/comments/1oqc07y/heres_a_workaround_for_broken_gptoss20b120b/
false
false
self
3
null
Epoch: LLMs that generate interactive UI instead of text walls
40
So generally LLMs generate text, or sometimes charts (via tool calling), but I gave mine the ability to generate UI. So instead of LLMs outputting markdown, I built **Epoch**, where the LLM generates actual interactive components.

# How it works

The LLM outputs a **structured component tree**:

    Component = {
      type: "Card" | "Button" | "Form" | "Input" | ...
      properties: { ... }
      children?: Component[]
    }

My renderer walks this tree and builds React components. So responses aren't text; they're interfaces with buttons, forms, inputs, cards, tabs, whatever.

# The interesting part

**It's bidirectional.** You can click a button or submit a form -> that interaction gets serialized back into conversation history -> the LLM generates new UI in response. So you get actual stateful, explorable interfaces. You ask a question -> get cards with action buttons -> click one -> a form appears -> submit it -> get customized results.

# Tech notes

* Works with **Ollama** (local/private) and **OpenAI**
* The structured output schema doesn't take context, but I also included it in the system prompt for better performance with smaller Ollama models (the system prompt is a bit bigger now; finding a workaround later)
* 25+ components, real-time SSE streaming, web search, etc.

Basically I'm turning LLMs from text generators into **interface compilers**. Every response is a composable UI tree.

Check it out: [github.com/itzcrazykns/epoch](https://github.com/itzcrazykns/epoch)

Built with Next.js, TypeScript, Vercel AI SDK, shadcn/ui. Feedback welcome!
2025-11-06T21:46:26
https://i.redd.it/elog79cngpzf1.png
ItzCrazyKns
i.redd.it
1970-01-01T00:00:00
0
{}
1oqc01w
false
null
t3_1oqc01w
/r/LocalLLaMA/comments/1oqc01w/epoch_llms_that_generate_interactive_ui_instead/
false
false
default
40
{'enabled': True, 'images': [{'id': 'elog79cngpzf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/elog79cngpzf1.png?width=108&crop=smart&auto=webp&s=2aef2ee458a2a9bcc7b03ef68f1687dbbc725b5f', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/elog79cngpzf1.png?width=216&crop=smart&auto=webp&s=76d4aa3dc92e91b7d60b5b375b0b292df188f043', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/elog79cngpzf1.png?width=320&crop=smart&auto=webp&s=3aa6a931a4352555f5b003ee2d83f933a56ff01d', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/elog79cngpzf1.png?width=640&crop=smart&auto=webp&s=b2a6a284595f32b69211b6ce05cbcd3fd5a10860', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/elog79cngpzf1.png?width=960&crop=smart&auto=webp&s=0f7dc035356f01d5b8499a9a87d9f7e89337d8e9', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/elog79cngpzf1.png?width=1080&crop=smart&auto=webp&s=f8a712559d41810a7aea940ff5a0d43e42fae881', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://preview.redd.it/elog79cngpzf1.png?auto=webp&s=a746777164efc0e29df1d091da846d7ebc11ae3c', 'width': 1920}, 'variants': {}}]}
Anyone running MiniMax M2 AWQ on 2x6000 Pro's with sglang?
3
I am trying to fit MiniMax M2 AWQ on dual 6000 Pros using sglang. Anyone have a working config?
2025-11-06T20:30:38
https://www.reddit.com/r/LocalLLaMA/comments/1oqa0rl/anyone_running_minimax_m2_awq_on_2x6000_pros_with/
MidnightProgrammer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oqa0rl
false
null
t3_1oqa0rl
/r/LocalLLaMA/comments/1oqa0rl/anyone_running_minimax_m2_awq_on_2x6000_pros_with/
false
false
self
3
null
Bombshell report exposes how Meta relied on scam ad profits to fund AI
49
2025-11-06T20:24:52
https://arstechnica.com/tech-policy/2025/11/bombshell-report-exposes-how-meta-relied-on-scam-ad-profits-to-fund-ai/
srwaxalot
arstechnica.com
1970-01-01T00:00:00
0
{}
1oq9vg6
false
null
t3_1oq9vg6
/r/LocalLLaMA/comments/1oq9vg6/bombshell_report_exposes_how_meta_relied_on_scam/
false
false
default
49
{'enabled': False, 'images': [{'id': '6_FxJznQ24YNhZIspAvn3ERygtZKSAS_E8jP1rIyqCc', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/6_FxJznQ24YNhZIspAvn3ERygtZKSAS_E8jP1rIyqCc.jpeg?width=108&crop=smart&auto=webp&s=1ec3a344a4ca445bbce44923f524464f2c1b7ec1', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/6_FxJznQ24YNhZIspAvn3ERygtZKSAS_E8jP1rIyqCc.jpeg?width=216&crop=smart&auto=webp&s=d1dbfccb98ffb1f5654ff320f0fec62f054f1cac', 'width': 216}, {'height': 202, 'url': 'https://external-preview.redd.it/6_FxJznQ24YNhZIspAvn3ERygtZKSAS_E8jP1rIyqCc.jpeg?width=320&crop=smart&auto=webp&s=04ef2b6ea45cb973eb158541f63a9f0c80e54d45', 'width': 320}, {'height': 405, 'url': 'https://external-preview.redd.it/6_FxJznQ24YNhZIspAvn3ERygtZKSAS_E8jP1rIyqCc.jpeg?width=640&crop=smart&auto=webp&s=d4057f4ab73241df2a29bc789074f67f648ea470', 'width': 640}, {'height': 607, 'url': 'https://external-preview.redd.it/6_FxJznQ24YNhZIspAvn3ERygtZKSAS_E8jP1rIyqCc.jpeg?width=960&crop=smart&auto=webp&s=f14b56f692b3e4a9fc3a2c5f7609f55947270301', 'width': 960}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6_FxJznQ24YNhZIspAvn3ERygtZKSAS_E8jP1rIyqCc.jpeg?auto=webp&s=6da515893d6b1cb273effacbdf2ba901379d9649', 'width': 1024}, 'variants': {}}]}
Microsoft’s AI Scientist
165
Microsoft literally just dropped the first AI scientist
2025-11-06T20:23:52
https://i.redd.it/jbv9rmub4pzf1.jpeg
Ok-Breakfast-4676
i.redd.it
1970-01-01T00:00:00
0
{}
1oq9ui3
false
null
t3_1oq9ui3
/r/LocalLLaMA/comments/1oq9ui3/microsofts_ai_scientist/
false
false
default
165
{'enabled': True, 'images': [{'id': 'jbv9rmub4pzf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/jbv9rmub4pzf1.jpeg?width=108&crop=smart&auto=webp&s=cb69747f52bd0077a4f646ea1d855ac70340d9fd', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/jbv9rmub4pzf1.jpeg?width=216&crop=smart&auto=webp&s=16cd49eba36b95427df554dd95dc7bc7f2f74792', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/jbv9rmub4pzf1.jpeg?width=320&crop=smart&auto=webp&s=58672da6aed48bcd7ff73cab88b3f25c1f8fc0d8', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/jbv9rmub4pzf1.jpeg?width=640&crop=smart&auto=webp&s=d7b040b383a3c04d5034fca2fe81396d6e5d57a9', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/jbv9rmub4pzf1.jpeg?width=960&crop=smart&auto=webp&s=9ca6bae77d564e21436f6366e177c433ac869b50', 'width': 960}, {'height': 648, 'url': 'https://preview.redd.it/jbv9rmub4pzf1.jpeg?width=1080&crop=smart&auto=webp&s=7ba19bd138bec2b57df4797b8e5c4c262f8e8940', 'width': 1080}], 'source': {'height': 703, 'url': 'https://preview.redd.it/jbv9rmub4pzf1.jpeg?auto=webp&s=d3dd09e295d5e0604e100a35e6903557ef75f4e7', 'width': 1170}, 'variants': {}}]}
Polaris Alpha
22
This is a cloaked model provided to the community to gather feedback. A powerful, general-purpose model that excels across real-world tasks, with standout performance in coding, tool calling, and instruction following. https://openrouter.ai/openrouter/polaris-alpha
2025-11-06T20:03:31
https://www.reddit.com/r/LocalLLaMA/comments/1oq9b94/polaris_alpha/
policyweb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq9b94
false
null
t3_1oq9b94
/r/LocalLLaMA/comments/1oq9b94/polaris_alpha/
false
false
self
22
{'enabled': False, 'images': [{'id': 'F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=108&crop=smart&auto=webp&s=32a58979b0a9d8b4c5df5fae1dccedf83b8b8f17', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=216&crop=smart&auto=webp&s=f273a6eeddf55d513e1ccfb75efaef2ee40ae7dd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=320&crop=smart&auto=webp&s=b36e5d7e1a84465939d75d303f380eac109699f6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=640&crop=smart&auto=webp&s=3d0c993da5f5fb14fce0d931cf2f48d622cdbb76', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=960&crop=smart&auto=webp&s=168a8aa0357e1b6bc14567eebeb83b286aaf2ef4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?width=1080&crop=smart&auto=webp&s=b48e5b47f79cb069c377c7df18eb999c4009c10e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/F_9MM-rF2vdBXxQ1y06hQopypniBGpYdwwzD0ZOnzlg.png?auto=webp&s=174d4238d26b6dc6a1693bc477da1d15955d4bc9', 'width': 1200}, 'variants': {}}]}
Nvidia's Jensen Huang: 'China is going to win the AI race,' FT reports
204
2025-11-06T20:03:28
https://www.reuters.com/world/asia-pacific/nvidias-jensen-huang-says-china-will-win-ai-race-with-us-ft-reports-2025-11-05/
fallingdowndizzyvr
reuters.com
1970-01-01T00:00:00
0
{}
1oq9b7e
false
null
t3_1oq9b7e
/r/LocalLLaMA/comments/1oq9b7e/nvidias_jensen_huang_china_is_going_to_win_the_ai/
false
false
default
204
{'enabled': False, 'images': [{'id': 'B5kYZqF-LXs8_vBUF8bfaXMktkNYepX59paDPfYv7go', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/B5kYZqF-LXs8_vBUF8bfaXMktkNYepX59paDPfYv7go.jpeg?width=108&crop=smart&auto=webp&s=ef1810846bc81d4a1a1d09aa6bbd2cc287a964df', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/B5kYZqF-LXs8_vBUF8bfaXMktkNYepX59paDPfYv7go.jpeg?width=216&crop=smart&auto=webp&s=e830afcbab3dc4a699273548741d317c16fc8c3d', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/B5kYZqF-LXs8_vBUF8bfaXMktkNYepX59paDPfYv7go.jpeg?width=320&crop=smart&auto=webp&s=9a07a1a5bfb4915a0669f4d09f9a6704fbe8cdcb', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/B5kYZqF-LXs8_vBUF8bfaXMktkNYepX59paDPfYv7go.jpeg?width=640&crop=smart&auto=webp&s=04b0e3c2929dde65c0820bb5e348487a3bb39955', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/B5kYZqF-LXs8_vBUF8bfaXMktkNYepX59paDPfYv7go.jpeg?width=960&crop=smart&auto=webp&s=4837073b3c59b301fd18eabc10c0d8bc7d7cc3b2', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/B5kYZqF-LXs8_vBUF8bfaXMktkNYepX59paDPfYv7go.jpeg?width=1080&crop=smart&auto=webp&s=0395d2ab57fbf20a88a31cc2f6cd7c87e9915375', 'width': 1080}], 'source': {'height': 1005, 'url': 'https://external-preview.redd.it/B5kYZqF-LXs8_vBUF8bfaXMktkNYepX59paDPfYv7go.jpeg?auto=webp&s=417e80c8d8b947a76f14f7a0baa4add470c69dca', 'width': 1920}, 'variants': {}}]}
Nvidia's Jensen Huang: 'China is going to win the AI race,' FT reports
1
[deleted]
2025-11-06T20:02:28
[deleted]
1970-01-01T00:00:00
0
{}
1oq9a8m
false
null
t3_1oq9a8m
/r/LocalLLaMA/comments/1oq9a8m/nvidiaampx27s_jensen_huang_ampx27china_is_going/
false
false
default
1
null
Anyone know the schema for Llama fine-tuning with tool calling on Vertex AI?
1
Hey everyone, I’m trying to fine-tune a Llama model on Vertex AI with my own dataset, and I’d like to enable tool calling (a.k.a. function calling) so the fine-tuned model can call structured tools.

However, I can’t find any schema or data format in the docs that shows how to format training examples for tool-augmented Llama fine-tuning, e.g. how to include "tools", "tool_calls", and "responses" fields in the JSONL format that Vertex AI fine-tuning jobs accept.

Is there an officially supported training data schema for Llama fine-tuning with tool calling on Vertex AI yet? This is a Gemini 2.5 schema example, but I’d like to do something similar for Llama:

    {
      "systemInstruction": {
        "parts": [
          { "text": "You are a helpful assistant." }
        ]
      },
      "contents": [
        {
          "role": "user",
          "parts": [
            { "text": "In our 'Project Zeus' project, add a new task 'Post-Launch Review'." }
          ]
        },
        {
          "role": "model",
          "parts": [
            { "text": "Adding the 'Post-Launch Review' task to 'Project Zeus'." },
            {
              "function_call": {
                "name": "addTask",
                "args": {
                  "planId": "PZ",
                  "itemType": "theme",
                  "name": "Post-Launch Review",
                  "id": "PZ-5",
                  "description": "Initiative to review project performance and gather feedback post-launch.",
                  "endDate": "2024-12-31"
                }
              }
            }
          ]
        }
      ]
    }

Any guidance, examples, or anyone who’s tried this would be super appreciated! Thanks in advance
2025-11-06T20:02:04
https://www.reddit.com/r/LocalLLaMA/comments/1oq99ur/anyone_know_the_schema_for_llama_finetuning_with/
Ok_Patience_8641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq99ur
false
null
t3_1oq99ur
/r/LocalLLaMA/comments/1oq99ur/anyone_know_the_schema_for_llama_finetuning_with/
false
false
self
1
null
Kimi K2 Thinking and DeepSeek R1 Architectures Side by Side
143
Kimi K2 is based on the DeepSeek V3/R1 architecture, and here's a side-by-side comparison.

- 2× fewer attention heads (64 vs. 128)
- ~1.5× more experts per MoE layer (384 vs. 256)
- Bigger vocabulary (160k vs. 129k)
- K2 activates ~32B parameters per token (vs. 37B in DeepSeek R1)
- Fewer dense FFN blocks before MoE

In short, Kimi K2 is a slightly scaled DeepSeek V3/R1, and the gains are in the data and training recipes. Hopefully, we will see some details on those soon, too.
2025-11-06T19:37:54
https://www.reddit.com/r/LocalLLaMA/comments/1oq8mmy/kimi_k2_thinking_and_deepseek_r1_architectures/
seraschka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq8mmy
false
null
t3_1oq8mmy
/r/LocalLLaMA/comments/1oq8mmy/kimi_k2_thinking_and_deepseek_r1_architectures/
false
false
self
143
null
Rolled my own LLaMA interface to role play campaigns.
14
Repo here if anyone is interested: [https://github.com/tarnvaal/PersistentDM](https://github.com/tarnvaal/PersistentDM)

I thought maybe others would enjoy it. You can save/load world shards (large text corpora that you pre-summarize into memory fragments) separately from your actual chat campaign, so you can switch modules.

It's currently configured to run on a 24GB VRAM card, with bge for embedding and Harbinger for inference:

bge-small-en-v1.5
Harbinger-24B-Q5_K_M.gguf
2025-11-06T19:32:30
https://i.redd.it/hkp1up33uozf1.jpeg
tarnkellstudios
i.redd.it
1970-01-01T00:00:00
0
{}
1oq8hg4
false
null
t3_1oq8hg4
/r/LocalLLaMA/comments/1oq8hg4/rolled_my_own_llama_interface_to_role_play/
false
false
default
14
{'enabled': True, 'images': [{'id': 'hkp1up33uozf1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/hkp1up33uozf1.jpeg?width=108&crop=smart&auto=webp&s=9e54ab1da25dd29182044fbdc5ca1dfc9ad691d4', 'width': 108}, {'height': 221, 'url': 'https://preview.redd.it/hkp1up33uozf1.jpeg?width=216&crop=smart&auto=webp&s=57247399caf05d6c8de193a59d2a9a6f836beca9', 'width': 216}, {'height': 327, 'url': 'https://preview.redd.it/hkp1up33uozf1.jpeg?width=320&crop=smart&auto=webp&s=4f05b4a851a4fbf45306aedbd3e93c9f9dd080aa', 'width': 320}, {'height': 654, 'url': 'https://preview.redd.it/hkp1up33uozf1.jpeg?width=640&crop=smart&auto=webp&s=ace60f0e8b47ed0c5955108255b5863323c11cc2', 'width': 640}, {'height': 982, 'url': 'https://preview.redd.it/hkp1up33uozf1.jpeg?width=960&crop=smart&auto=webp&s=009761d5111ee9b7bcd118ba1e05e77a2005b0bd', 'width': 960}, {'height': 1105, 'url': 'https://preview.redd.it/hkp1up33uozf1.jpeg?width=1080&crop=smart&auto=webp&s=818520a3fa3026c774e83c71cb497492c16f054c', 'width': 1080}], 'source': {'height': 1940, 'url': 'https://preview.redd.it/hkp1up33uozf1.jpeg?auto=webp&s=76b0c6142d7d3e873afc36a04e0cab968b96d51e', 'width': 1896}, 'variants': {}}]}
Has anyone tried kimi k2 thinking locally yet?
11
How much RAM does it require? It natively supports INT4, so it might be around 512 GB.
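Rough back-of-the-envelope arithmetic supports that guess: the model is reported as roughly 1 trillion total parameters, and at INT4 that is 1T params × 0.5 bytes/param ≈ 500 GB for the weights alone, before KV cache and runtime overhead. So ~512 GB is about the floor, not a comfortable target.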
2025-11-06T19:28:09
https://www.reddit.com/r/LocalLLaMA/comments/1oq8dbo/has_anyone_tried_kimi_k2_thinking_locally_yet/
Brave-Hold-9389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq8dbo
false
null
t3_1oq8dbo
/r/LocalLLaMA/comments/1oq8dbo/has_anyone_tried_kimi_k2_thinking_locally_yet/
false
false
self
11
null
Coding Success Depends More on Language Than Math
35
The biggest factor in how good someone is at coding might surprise you: it is not math, it is language. A Nature study found that your ability with numbers explains only two percent of the difference in coding skill, while language-related brain activity explains seventy percent. So maybe coding is less about numbers and more about how clearly you can think and express ideas in words.
2025-11-06T19:04:03
https://www.reddit.com/gallery/1oq7qav
Ok-Breakfast-4676
reddit.com
1970-01-01T00:00:00
0
{}
1oq7qav
false
null
t3_1oq7qav
/r/LocalLLaMA/comments/1oq7qav/coding_success_depends_more_on_language_than_math/
false
false
https://b.thumbs.redditm…LpT2QyCdjlUw.jpg
35
null
Coding Success Depends More on Language Than Math
6
The biggest factor in how good someone is at coding might surprise you: it is not math, it is language. A Nature study found that your ability with numbers explains only two percent of the difference in coding skill, while language-related brain activity explains seventy percent. So maybe coding is less about numbers and more about how clearly you can think and express ideas in words.
2025-11-06T19:03:36
https://www.reddit.com/gallery/1oq7pwc
Ok-Breakfast-4676
reddit.com
1970-01-01T00:00:00
0
{}
1oq7pwc
false
null
t3_1oq7pwc
/r/LocalLLaMA/comments/1oq7pwc/coding_success_depends_more_on_language_than_math/
false
false
https://a.thumbs.redditm…yU4qXRxHXtw8.jpg
6
null
DGX sparks vs Mac Studio
4
So am I getting this right? A Spark is capable of 3 tokens per second on Llama 70B, and a Mac Studio at almost the same price is capable of 16 tokens per second? Is there any reason why one should even consider the Spark?
2025-11-06T18:23:59
https://www.reddit.com/gallery/1oq6nyv
Free_Expression2107
reddit.com
1970-01-01T00:00:00
0
{}
1oq6nyv
false
null
t3_1oq6nyv
/r/LocalLLaMA/comments/1oq6nyv/dgx_sparks_vs_mac_studio/
false
false
https://a.thumbs.redditm…55CkmfGGDiF4.jpg
4
null
Another AI Winter Is Coming—But This One Will Be Different
0
2025-11-06T18:20:57
https://www.inc.com/dave-sokolin/another-ai-winter-is-coming-but-this-one-will-be-different/91254465
ttkciar
inc.com
1970-01-01T00:00:00
0
{}
1oq6kwk
false
null
t3_1oq6kwk
/r/LocalLLaMA/comments/1oq6kwk/another_ai_winter_is_comingbut_this_one_will_be/
false
false
default
0
null
Alpha Arena Season 1 results
2
nof1.ai ran an experiment called Alpha Arena Season 1. As per their website, Qwen3 Max is the best AI model for stock trading as of now. DeepSeek Chat is 2nd, Claude is third. OpenAI's GPT-5 is last! Thoughts?
2025-11-06T18:09:08
https://www.reddit.com/r/LocalLLaMA/comments/1oq692s/alpha_arena_season_1_results/
Any_Baby_3888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq692s
false
null
t3_1oq692s
/r/LocalLLaMA/comments/1oq692s/alpha_arena_season_1_results/
false
false
self
2
null
New emerging ai
0
Hello, I am making an AI platform where you can chat with various LLMs for free and unlimited, and even chat with premium models like GPT and Claude at a low price. It also has a search feature, so you can get realtime answers with citations, and a Spaces-like option where users can add any type of files and URLs and then talk with them. It's like going to a library and giving the librarian the name of a character and asking him to find out about it: the librarian goes and searches for the book (the files and URLs given by the user), reads it, and then comes back and tells you all about it, so you don't even need to read the book. There is also a data chat feature where you can connect your SQL database and chat with it no matter how big it is (it can even handle your whole company database), and a deep research feature. So I was asking: what more should I add that no leading AI company gives you, but people want?
2025-11-06T18:08:04
https://www.reddit.com/r/LocalLLaMA/comments/1oq6812/new_emerging_ai/
TopFuture2709
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq6812
false
null
t3_1oq6812
/r/LocalLLaMA/comments/1oq6812/new_emerging_ai/
false
false
self
0
null
How does LLaMA compare to open-source alternatives like Falcon or MPT for academic research?
3
I’m exploring large language models for citation extraction and literature review. LLaMA seems competitive, but I’d love community insights on where it really shines vs. other open models.
2025-11-06T18:03:59
https://www.reddit.com/r/LocalLLaMA/comments/1oq6413/how_does_llama_compare_to_opensource_alternatives/
imposterpro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq6413
false
null
t3_1oq6413
/r/LocalLLaMA/comments/1oq6413/how_does_llama_compare_to_opensource_alternatives/
false
false
self
3
null
Continuous Autoregressive Language Models
15
Holy shit, this might be the next big AI paradigm shift.

Tencent + Tsinghua just released CALM (Continuous Autoregressive Language Models), and it might replace the "next token" system every LLM uses.

Instead of predicting one token at a time, CALM predicts continuous vectors representing multiple tokens at once. It doesn't think word by word, it thinks in ideas per step.

Why it matters:

• 4× fewer prediction steps
• 44% less training compute
• No discrete vocab, pure continuous reasoning
• New metric, BrierLM, replaces perplexity
2025-11-06T18:02:34
https://i.redd.it/b478slb4fozf1.jpeg
Ok-Breakfast-4676
i.redd.it
1970-01-01T00:00:00
0
{}
1oq62mg
false
null
t3_1oq62mg
/r/LocalLLaMA/comments/1oq62mg/continuous_autoregressive_language_models/
false
false
default
15
{'enabled': True, 'images': [{'id': 'b478slb4fozf1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/b478slb4fozf1.jpeg?width=108&crop=smart&auto=webp&s=e3a91c00d7406cd29168cfc530728632b6f4826c', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/b478slb4fozf1.jpeg?width=216&crop=smart&auto=webp&s=f4d49f734e78bdd835f172973e60c0ab2c1e2e59', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/b478slb4fozf1.jpeg?width=320&crop=smart&auto=webp&s=fccea049313c895910d3cb4a7d2e3cbc5a1b9ab7', 'width': 320}, {'height': 479, 'url': 'https://preview.redd.it/b478slb4fozf1.jpeg?width=640&crop=smart&auto=webp&s=ee8b3b0f58d4c0b2debb7820263262cc344a5213', 'width': 640}, {'height': 718, 'url': 'https://preview.redd.it/b478slb4fozf1.jpeg?width=960&crop=smart&auto=webp&s=969f81b9c6d38e19437153662060e42b827b3906', 'width': 960}, {'height': 808, 'url': 'https://preview.redd.it/b478slb4fozf1.jpeg?width=1080&crop=smart&auto=webp&s=802eab5c4289b51dec6befad7c5f1951eba19ba4', 'width': 1080}], 'source': {'height': 873, 'url': 'https://preview.redd.it/b478slb4fozf1.jpeg?auto=webp&s=68472d67dd1ed0f4c59d2863ca104e350ee77c43', 'width': 1166}, 'variants': {}}]}
Another AI Winter Is Coming—But This One Will Be Different
1
2025-11-06T17:54:35
https://www.inc.com/dave-sokolin/another-ai-winter-is-coming-but-this-one-will-be-different/91254465
ttkciar
inc.com
1970-01-01T00:00:00
0
{}
1oq5uq0
false
null
t3_1oq5uq0
/r/LocalLLaMA/comments/1oq5uq0/innovate_another_ai_winter_is_comingbut_this_one/
false
false
default
1
null
Speculative Decoding is AWESOME with Llama.cpp!
57
I tried it earlier this year with LM Studio and was incredibly disappointed. The gains were marginal at best, sometimes even slowing down inference, and I quickly abandoned it. Fast forward to this week: I decided to try out Speculative Decoding (SD) with Llama.cpp, and it's truly worth using.

Models I tried, and rough performance gains (all models are Unsloth's dynamic Q4_K_XL), running on unified memory with an RX 890m iGPU:

- Llama3.3-70B: Without SD, 2.2 t/s. With SD (Llama-3.2-1B as draft), I get 3.2-4 t/s with an average of 3.5 t/s
- Qwen3-32B: Without SD, 4.4 t/s. With SD (Qwen3-0.6B as draft), I get 5-9 t/s

I tried larger/smarter draft models and different quant levels for the small models, but landed on the Q4's as the best compromise. I ran tool calling, processed large context, and tried obvious and obscure niche-type prompts. The performance always holds at least 10% better in the worst case. For average use cases I was getting 30-50% improvements, which is huge for a humble machine like mine.

Some might call 2.2 t/s to 4 t/s no gain, but the quality of a 70B model's responses for certain prompts is still unmatched by any MoE of that size or larger (except for coding). Getting 6-7 t/s for Qwen3-32B dense brings the model back to my most-used list again.

YMMV with faster dGPUs, or faster unified memory like on the Strix Halo. This was done with all the default llama.cpp parameters; I just add -md /path/to/model/model.gguf. Who knows how much better I can get the performance with non-default SD parameters.

I'm now on the hunt for the perfect draft model to pair with Mistral Small-24B. If you have any suggestions, please let me know.
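For anyone wanting to replicate this, a minimal sketch of the invocation described above (the model paths are placeholders; `-md` enables the draft model, and `-ngl`/`-ngld` offload the target/draft layers to the GPU):

    # Target model + small draft model for speculative decoding in llama.cpp
    ./llama-server \
      -m Qwen3-32B-Q4_K_XL.gguf \
      -md Qwen3-0.6B-Q4_K_XL.gguf \
      -ngl 99 -ngld 99

Recent llama.cpp builds also expose draft-length knobs (e.g. `--draft-max` / `--draft-min`), which are the "non-default SD parameters" worth sweeping next.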
2025-11-06T17:46:38
https://www.reddit.com/r/LocalLLaMA/comments/1oq5msi/speculative_decoding_is_awesome_with_llamacpp/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq5msi
false
null
t3_1oq5msi
/r/LocalLLaMA/comments/1oq5msi/speculative_decoding_is_awesome_with_llamacpp/
false
false
self
57
null
Bark TTS is insanely slow
2
Hi everyone, I wanted to use Bark TTS for a local agent project. The problem is that it is insanely slow. I just wanted to test it with the default code available in the Git repo, and it took 15 minutes to generate 2 simple phrases. Considering that I work with a 5080, and that some people can make it run in less than a minute on less capable GPUs, I think maybe I missed something. The only difference between the repo and my code is the PyTorch version, which is newer on my stack, because PyTorch does not find my GPU if I do not upgrade it. Has anyone already seen similar behavior?

PS: I checked, and PyTorch is using the GPU, not the CPU.
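A couple of quick shell-level sanity checks that often narrow this down (a sketch; `SUNO_USE_SMALL_MODELS` is an environment variable documented in the suno-ai/bark repo, and `your_script.py` is a placeholder for your test script):

    # Confirm the upgraded PyTorch build was compiled with CUDA and sees the 5080
    python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

    # Try Bark's smaller checkpoints to separate model-size cost from a setup problem
    SUNO_USE_SMALL_MODELS=True python your_script.py

If `torch.version.cuda` prints `None`, the install is a CPU-only wheel even though a GPU is visible elsewhere, which would match the "15 minutes for 2 phrases" symptom.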
2025-11-06T17:37:33
https://www.reddit.com/r/LocalLLaMA/comments/1oq5duz/bark_tts_is_insanely_slow/
yeahlloow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq5duz
false
null
t3_1oq5duz
/r/LocalLLaMA/comments/1oq5duz/bark_tts_is_insanely_slow/
false
false
self
2
null
MiniMax M2 on single RTX5090
4
I have been reading many posts and heard good advice, but I keep failing to load the MiniMax M2 LLM on a single RTX 5090 with 128 GB of RAM. Can someone show me, with an example command, how to host this model locally, whatever the hosting method (vLLM, SGLang, ...)?
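Not a definitive recipe, but the usual route for a single 32 GB GPU plus lots of system RAM is llama.cpp with the MoE expert weights kept in CPU memory; vLLM and SGLang generally want the whole model in VRAM, which a single 5090 can't provide here. A minimal sketch, assuming a llama.cpp build with MiniMax M2 GGUF support and a quant small enough for 128 GB (the model filename is a placeholder):

    # Attention/shared tensors on the 5090, MoE experts in system RAM
    ./llama-server \
      -m MiniMax-M2-Q3_K_M.gguf \
      -ngl 99 --cpu-moe \
      -c 32768

`--n-cpu-moe N` is the finer-grained variant: lower N until VRAM is full to claw back speed.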
2025-11-06T17:33:33
https://www.reddit.com/r/LocalLLaMA/comments/1oq59yy/minimax_m2_on_single_rtx5090/
Advanced_Skill_5051
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq59yy
false
null
t3_1oq59yy
/r/LocalLLaMA/comments/1oq59yy/minimax_m2_on_single_rtx5090/
false
false
self
4
null
Local AI with image input for low end devices?
2
I am running an M1 MacBook Air 8GB model. Right now I have tried gemma3:4b, and its image recognition and detection is really bad. I also tried installing gemma3:12b, but that took half an hour to process and output on my low-end Mac, and that was without images. So I'm looking for something the size of gemma3:4b but with much better vision capability. Any help would be appreciated.
2025-11-06T16:59:06
https://www.reddit.com/r/LocalLLaMA/comments/1oq4cly/local_ai_with_image_input_for_low_end_devices/
gamerboixyz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq4cly
false
null
t3_1oq4cly
/r/LocalLLaMA/comments/1oq4cly/local_ai_with_image_input_for_low_end_devices/
false
false
self
2
null
Why are all models similar…
2
…when replying to ‘tell me a fun fact’? It’s always ‘an octopus has 3 hearts’ or ‘the shortest war in history lasted 38 minutes’. This is true for models across different providers. Are they all trained on the same data? Is it hard to train a model from scratch on, say, 100 PDF textbooks on law, so that when I ask ‘tell me a fun fact’ it replies with ‘Victoria, the ACT and Queensland are the only Australian states and territories with a charter of human rights’?
2025-11-06T16:50:52
https://www.reddit.com/r/LocalLLaMA/comments/1oq44x2/why_are_all_models_similar/
gmetothemoongodspeed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq44x2
false
null
t3_1oq44x2
/r/LocalLLaMA/comments/1oq44x2/why_are_all_models_similar/
false
false
self
2
null
Lemonade's C++ port is available in beta today, let me know what you think
122
A couple weeks ago I asked on here if Lemonade should switch from Python and go native and got a strong "yes." So now I'm back with a C++ beta! If anyone here has time to try this out and give feedback that would be awesome. As a refresher: Lemonade is a local LLM server-router, like a local OpenRouter. It helps you quickly get started with llama.cpp Vulkan or ROCm, as well as AMD NPU (on Windows) with the RyzenAI SW and FastFlowLM backends. Everything is unified behind a single API and web ui. To try the C++ beta, head to the latest release page: [Release v8.2.1 · lemonade-sdk/lemonade](https://github.com/lemonade-sdk/lemonade/releases/tag/v8.2.1) * Windows users: download Lemonade\_Server\_Installer\_beta.exe and run it. * Linux users: download lemonade-server-9.0.0-Linux.deb, run `sudo dpkg -i lemonade-server-9.0.0-Linux.deb`, and run `lemonade-server-beta serve` My immediate next steps are to fix any problems identified in the beta, then completely replace the Python with the C++ for users! This will happen in a week unless there's a blocker. The Lemonade GitHub has links for issues and discord if you want to share thoughts there. And I always appreciate a star if you like the project's direction! PS. The usual caveats apply for LLMs on AMD NPU. Only available on Windows right now, Linux is being worked on, but there is no ETA for Linux support. I share all of the community's Linux feedback with the team at AMD, so feel free to let me have it in the comments.
2025-11-06T16:31:00
https://i.redd.it/yemgirr6wnzf1.png
jfowers_amd
i.redd.it
1970-01-01T00:00:00
0
{}
1oq3ls6
false
null
t3_1oq3ls6
/r/LocalLLaMA/comments/1oq3ls6/lemonades_c_port_is_available_in_beta_today_let/
false
false
default
122
{'enabled': True, 'images': [{'id': 'yemgirr6wnzf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/yemgirr6wnzf1.png?width=108&crop=smart&auto=webp&s=b2ac70291af97173574f04593dacc4a9c49a3697', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/yemgirr6wnzf1.png?width=216&crop=smart&auto=webp&s=e3fde0208675d72d625eb818d1fd20b637b404db', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/yemgirr6wnzf1.png?width=320&crop=smart&auto=webp&s=300a39e5e07cdcff270b0a7da9445263ef787c9a', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/yemgirr6wnzf1.png?width=640&crop=smart&auto=webp&s=ba5d3a212198dbafbf221f509993023f52307bc5', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/yemgirr6wnzf1.png?width=960&crop=smart&auto=webp&s=ac783eec5b7f64b76d3cf77ae7e4278f5d3884a0', 'width': 960}, {'height': 605, 'url': 'https://preview.redd.it/yemgirr6wnzf1.png?width=1080&crop=smart&auto=webp&s=ca31f53c18b79c2d3bfca209e797529d1af4985d', 'width': 1080}], 'source': {'height': 910, 'url': 'https://preview.redd.it/yemgirr6wnzf1.png?auto=webp&s=9f92dad06bac98d0b65b4e75305bb508cad7d3f7', 'width': 1622}, 'variants': {}}]}
Saw this masterpiece
311
I will say, the guy who made this is not an accident, bro!!!
2025-11-06T16:23:37
https://i.redd.it/gq0gb70bxnzf1.jpeg
Emergency_Beat8198
i.redd.it
1970-01-01T00:00:00
0
{}
1oq3eo2
false
null
t3_1oq3eo2
/r/LocalLLaMA/comments/1oq3eo2/saw_this_masterpiece/
false
false
default
311
{'enabled': True, 'images': [{'id': 'gq0gb70bxnzf1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/gq0gb70bxnzf1.jpeg?width=108&crop=smart&auto=webp&s=f028c6dbf904a6f868323311d157272ad8a6e2f3', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/gq0gb70bxnzf1.jpeg?width=216&crop=smart&auto=webp&s=064ff7bcedc0044184fa6f16a6ac808dc824bef5', 'width': 216}, {'height': 376, 'url': 'https://preview.redd.it/gq0gb70bxnzf1.jpeg?width=320&crop=smart&auto=webp&s=5f546eac871994642ffe6b88dacd0d8ac73612d7', 'width': 320}, {'height': 753, 'url': 'https://preview.redd.it/gq0gb70bxnzf1.jpeg?width=640&crop=smart&auto=webp&s=8acf4474e96d7ef3c4030af24937c8c793f14407', 'width': 640}], 'source': {'height': 900, 'url': 'https://preview.redd.it/gq0gb70bxnzf1.jpeg?auto=webp&s=fccb2daab01707f721e659ba55fbaa7deba46169', 'width': 764}, 'variants': {}}]}
Anyone got agents running locally? curious what the best tools out there are?
6
looking for some simple out of the box tools to get agents running locally. wondering what people have found to be useful and the easiest way to get started?
2025-11-06T16:23:36
https://www.reddit.com/r/LocalLLaMA/comments/1oq3eno/anyone_got_agents_running_locally_curious_what/
JBG32123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq3eno
false
null
t3_1oq3eno
/r/LocalLLaMA/comments/1oq3eno/anyone_got_agents_running_locally_curious_what/
false
false
self
6
null
How to run glm 4.5 air more faster
0
I have a computer with an RTX 5090 and 96 GB of RAM, and I was thinking I might be able to get better tps than what I currently get. My CPU is a Core Ultra 7 265K, but with LM Studio I get around 13 to 14 tps. That's not usable at all: for me to consider a model usable, I need at least 20 to 30 tps at a large context, around 100k.

Is there any way for me to make it run faster? I hope someone has the same setup as me and can help me out here... To be honest, getting 13 tps with this setup is a disappointment.
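One knob worth trying, since LM Studio hides most of this: llama.cpp's selective MoE offload, which keeps every attention layer on the 5090 and pushes only expert tensors to system RAM. A sketch with a placeholder model path and a starting expert count to tune (lower `--n-cpu-moe` until VRAM is full):

    # All layers on GPU except 30 MoE expert blocks kept in system RAM
    ./llama-server \
      -m GLM-4.5-Air-Q4_K_XL.gguf \
      -ngl 99 --n-cpu-moe 30 \
      -fa 1 -c 100000

Whether this reaches 20-30 tps at 100k context depends on RAM bandwidth, but it is usually a clear win over a plain CPU/GPU layer split.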
2025-11-06T16:17:01
https://www.reddit.com/r/LocalLLaMA/comments/1oq386i/how_to_run_glm_45_air_more_faster/
lumos675
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq386i
false
null
t3_1oq386i
/r/LocalLLaMA/comments/1oq386i/how_to_run_glm_45_air_more_faster/
false
false
self
0
null
Local LM setup: RTX 5070Ti 16G vs DGX Spark vs Mac Studio 64G
5
I am starting research (PhD) in language models. I've been juggling data between university servers for running experiments, but it is a pain. I am considering spending some 💰 and setting up a local server. My typical use case is inference and fine-tuning of smaller LMs.

I can get the following for about $3000:

1. Core Ultra 9 + 32GB + 5070Ti 16GB
2. DGX Spark 128GB
3. Mac Studio (M4 Max) with 64GB unified memory

Each option comes bundled with concerns: the 1st has low VRAM, the 2nd has heating issues under sustained load, and the 3rd lacks CUDA support.

What would you advise a researcher to buy, and why?
2025-11-06T16:03:41
https://www.reddit.com/r/LocalLLaMA/comments/1oq2uw1/local_lm_setup_rtx_5070ti_16g_vs_dgx_spark_vs_mac/
v01dm4n
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq2uw1
false
null
t3_1oq2uw1
/r/LocalLLaMA/comments/1oq2uw1/local_lm_setup_rtx_5070ti_16g_vs_dgx_spark_vs_mac/
false
false
self
5
null
GPT OSS 20B with llama.cpp on Nvidia 5000 series
0
Hello, to reduce cost I bought some old laptops on eBay with 16GB of VRAM! Here are some benchmarks, in order:

[Nvidia P5000 Mobile (Pascal)](https://preview.redd.it/1vphueu8onzf1.png?width=1391&format=png&auto=webp&s=20185d9f59764c2b38362ffa4f2cc3fbbe1979d0)

[Nvidia Quadro RTX 5000 Mobile (Turing)](https://preview.redd.it/kj2ankj6qnzf1.png?width=1391&format=png&auto=webp&s=6a55eadcba7a7a52d4b76030d38e6b10221e4972)

[Nvidia RTX A5500 Mobile (Ampere)](https://preview.redd.it/mx01viuornzf1.png?width=1391&format=png&auto=webp&s=eab87a435424e51e62cdce2494e07cfd3ff0e2bf)

Has anyone tested the performance on the RTX 5000 (Ada) and RTX PRO 5000 (Blackwell) Mobile to compare?
2025-11-06T15:55:26
https://www.reddit.com/r/LocalLLaMA/comments/1oq2mot/gpt_oss_20b_with_llamacpp_on_nvidia_5000_series/
Squik67
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq2mot
false
null
t3_1oq2mot
/r/LocalLLaMA/comments/1oq2mot/gpt_oss_20b_with_llamacpp_on_nvidia_5000_series/
false
false
https://b.thumbs.redditm…mbvqcgIT2Rzc.jpg
0
null
How does Parallel Test Time Compute really work?
1
[removed]
2025-11-06T15:45:27
https://www.reddit.com/r/LocalLLaMA/comments/1oq2cu7/how_does_parallel_test_time_compute_really_work/
Potential_Top_4669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq2cu7
false
null
t3_1oq2cu7
/r/LocalLLaMA/comments/1oq2cu7/how_does_parallel_test_time_compute_really_work/
false
false
self
1
null
GPUs with NVMe SSDs on-board serving full LLM weights, is it the future?
0
HBM is very wasteful for "slow" CPUs processing data word by word, while GPUs can technically access NVMe SSDs directly (Nvidia's high-end cards already support this). It would be much more cost-effective for consumer GPUs to provide on-board NVMe slots and let users install SSDs holding the full LLM weights, with HBM/VRAM then serving as an activation cache for the MoE params. It sounds like a perfect solution, but I have no idea if manufacturers will go in that direction; there is an AI arms race at state scale at the moment, and consumer-grade AI solutions may starve to death halfway there.
2025-11-06T15:34:05
https://www.reddit.com/r/LocalLLaMA/comments/1oq21yo/gpus_with_nvme_ssds_onboard_serving_full_llm/
complyue
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq21yo
false
null
t3_1oq21yo
/r/LocalLLaMA/comments/1oq21yo/gpus_with_nvme_ssds_onboard_serving_full_llm/
false
false
self
0
null
Intel Arc Pro B60 24GB workstation GPU to launch in Europe mid to late November, starting at €769
0
2025-11-06T15:30:42
https://videocardz.com/newz/intel-arc-pro-b60-24gb-workstation-gpu-to-launch-in-europe-mid-to-late-november-starting-at-e769
reps_up
videocardz.com
1970-01-01T00:00:00
0
{}
1oq1yvv
false
null
t3_1oq1yvv
/r/LocalLLaMA/comments/1oq1yvv/intel_arc_pro_b60_24gb_workstation_gpu_to_launch/
false
false
default
0
{'enabled': False, 'images': [{'id': 'CjWSSDI7tgLX8bg-tWIZXxN_9y21HSnJTzzId5X3RXg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CjWSSDI7tgLX8bg-tWIZXxN_9y21HSnJTzzId5X3RXg.jpeg?width=108&crop=smart&auto=webp&s=16be051e7c64e0bf33c8cc495fd2b73e15d18791', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/CjWSSDI7tgLX8bg-tWIZXxN_9y21HSnJTzzId5X3RXg.jpeg?width=216&crop=smart&auto=webp&s=f87e7551f6362b463802d899b7b6a4c4d466a588', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/CjWSSDI7tgLX8bg-tWIZXxN_9y21HSnJTzzId5X3RXg.jpeg?width=320&crop=smart&auto=webp&s=b5a9ccad013595d9146b2ca3d849734959a3c1cb', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/CjWSSDI7tgLX8bg-tWIZXxN_9y21HSnJTzzId5X3RXg.jpeg?width=640&crop=smart&auto=webp&s=4d7417137448011cceff3129d7264fe30c525a2b', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/CjWSSDI7tgLX8bg-tWIZXxN_9y21HSnJTzzId5X3RXg.jpeg?width=960&crop=smart&auto=webp&s=f9ad35fb87e0bcd87c11a3aa541ff37b61f720f4', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/CjWSSDI7tgLX8bg-tWIZXxN_9y21HSnJTzzId5X3RXg.jpeg?width=1080&crop=smart&auto=webp&s=52199bfb0af2782d44d9eb3f3e48547c6de7a4ce', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/CjWSSDI7tgLX8bg-tWIZXxN_9y21HSnJTzzId5X3RXg.jpeg?auto=webp&s=550938521cb88df96dabb0cfe69775e84799f121', 'width': 2500}, 'variants': {}}]}
Kimi K2 Thinking Huggingface
259
2025-11-06T15:12:59
https://huggingface.co/moonshotai/Kimi-K2-Thinking
DistanceSolar1449
huggingface.co
1970-01-01T00:00:00
0
{}
1oq1i9b
true
null
t3_1oq1i9b
/r/LocalLLaMA/comments/1oq1i9b/kimi_k2_thinking_huggingface/
false
false
default
259
{'enabled': False, 'images': [{'id': 'H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=108&crop=smart&auto=webp&s=07dc83095105be433db2dde187f5ec06563728e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=216&crop=smart&auto=webp&s=373b3af88da74654a83e8d0431614ecb18898896', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=320&crop=smart&auto=webp&s=b55ef6153ff571f579c81811752b6d3d48fc0b28', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=640&crop=smart&auto=webp&s=73256a6e56665a31c845dbe43d4cf687ee6b4218', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=960&crop=smart&auto=webp&s=4c6f4613c574804e45aca493bec17fcce7fcedf1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?width=1080&crop=smart&auto=webp&s=886b65dd6e5fe3288cc1f9da8c4ef31177ca40c1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/H-gfQMTLwEzPYBcfO_Qq4uuh_Gu1NEE3y2PjVFhCwx0.png?auto=webp&s=e9f4ecfd1ae95ce4c46f17e8c19792c62bc07b04', 'width': 1200}, 'variants': {}}]}
Kimi released Kimi K2 Thinking, an open-source trillion-parameter reasoning model
746
https://preview.redd.it/…e.co/moonshotai)
2025-11-06T15:04:59
https://www.reddit.com/r/LocalLLaMA/comments/1oq1arc/kimi_released_kimi_k2_thinking_an_opensource/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq1arc
false
null
t3_1oq1arc
/r/LocalLLaMA/comments/1oq1arc/kimi_released_kimi_k2_thinking_an_opensource/
false
false
https://b.thumbs.redditm…TwiCDt_n7fGc.jpg
746
null
Why is the rtx 6000 pro 7500-8300bucks , when 96 gb of gddr7 costs 320bucks ? Monopoly/ greed and demand??
0
You can find 3GB of GDDR7 for 10 bucks, and even larger chips shouldn't cost much more per GB. The pricing is absurd; packaging and the GPU die don't cost that much. Nvidia is price gouging their customers... Even Apple's RAM pricing is absurd... It feels like AMD is not doing much, as if they are bought off by Nvidia or someone else... Someone needs to break this CUDA monopoly...
2025-11-06T14:50:22
https://www.reddit.com/r/LocalLLaMA/comments/1oq0x6r/why_is_the_rtx_6000_pro_75008300bucks_when_96_gb/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq0x6r
false
null
t3_1oq0x6r
/r/LocalLLaMA/comments/1oq0x6r/why_is_the_rtx_6000_pro_75008300bucks_when_96_gb/
false
false
self
0
null
LLMs try ascii letters
9
hey all, recently went down a little rabbit hole into LLMs generating ascii art. unsurprisingly Claude got it *mostly* right. but it's pretty interesting to see how each model treats generating ASCII art. i wasn't able to test the true superpowers of AI, but checked out Kimi K2 (with thinking, somehow (probably just a recursive thinking loop)), DeepSeek (with DeepThink), GLM 4.6 (with thinking), Claude 4.5 (as a closed-source comparison), and Qwen Max (also as a closed-source comparison), each on their respective web clients.

i told each model to: "Make ASCII art of the word "Bonfire" in 3 different styles"

here's what they made:

Claude 4.5 - this one is definitely the best, probably because it's the largest. this is going to set the standard for me

[BONFIRE, BonFire and Bonfier](https://preview.redd.it/2zyfgsbwbnzf1.png?width=728&format=png&auto=webp&s=6ae2da99412aaf719d30706c520f340b9c08639f)

i feel like the rest are all equally bad.

DeepSeek - barely visible Bs, absolute gibberish beyond that

[BRRSS??, BANG, ELLALLE](https://preview.redd.it/x5tzjqtucnzf1.png?width=626&format=png&auto=webp&s=46e02b738a5b06ca56d7a0998448ce75f13acff6)

Qwen Max - the 2nd and 3rd have nothing to do with "Bonfire" at all; the first was almost perfect

[BONFNE, OUOLIO, HEUEUE](https://preview.redd.it/s9w8v2l6dnzf1.png?width=849&format=png&auto=webp&s=551251547a5caa9cc15485985482b4007d40bbe9)

Kimi K2 (thinking, somehow) - the last one wasn't even ASCII letters, but whatever. all of these are unintelligible

[OONFFUE, 9OUAAUA, BOO NFI RE](https://preview.redd.it/m464ypqcdnzf1.png?width=604&format=png&auto=webp&s=17a5760f1aaa16a7696a0ee10a94e59977109bf6)

GLM 4.6 - i honestly thought this one would do better. style 2 is just.... bad

[A8NEURE, I actually don't know what it was trying to do, RANEORE](https://preview.redd.it/6vrfd19odnzf1.png?width=774&format=png&auto=webp&s=fdf3278a96e27967103deb0e7bc9a84a9b562b06)

i'd assume data like this (making ascii letters) is super easy to synthetically generate, so probably anyone could make a finetune or LoRA to do just that. sorry if i made this hard to read, but i hope at least some people found this interesting.
2025-11-06T14:33:53
https://www.reddit.com/r/LocalLLaMA/comments/1oq0iak/llms_try_ascii_letters/
ComplexType568
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq0iak
false
null
t3_1oq0iak
/r/LocalLLaMA/comments/1oq0iak/llms_try_ascii_letters/
false
false
https://b.thumbs.redditm…quPh9xR5_Mog.jpg
9
null
Is there any good offline, free, open-source meeting minutes (protocol) creation app on GitHub?
5
A simple Whisper + DeepSeek/Qwen LLM project should do the trick, right? Is there any good project you can recommend? Ideally one I can use at my company. Any hints would be greatly appreciated.
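For reference, the core pipeline is small enough to sketch in a few lines. This assumes openai-whisper running locally plus any OpenAI-compatible local server (e.g. llama.cpp's llama-server); the endpoint, port, and model name below are placeholders, not a specific project:

    import requests
    import whisper  # pip install openai-whisper

    # 1) Transcribe the meeting recording locally
    model = whisper.load_model("base")  # small enough for CPU
    transcript = model.transcribe("meeting.wav")["text"]

    # 2) Summarize into minutes with a local LLM behind an OpenAI-compatible API
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # placeholder endpoint
        json={
            "model": "qwen2.5-7b-instruct",  # whatever model you have loaded
            "messages": [
                {"role": "system", "content": "Write formal meeting minutes: attendees, decisions, action items."},
                {"role": "user", "content": transcript},
            ],
        },
    )
    print(resp.json()["choices"][0]["message"]["content"])

A ready-made project would mostly add speaker diarization and a UI on top of this.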
2025-11-06T14:24:03
https://www.reddit.com/r/LocalLLaMA/comments/1oq09kh/is_there_any_good_offline_free_open_source/
howardhus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1oq09kh
false
null
t3_1oq09kh
/r/LocalLLaMA/comments/1oq09kh/is_there_any_good_offline_free_open_source/
false
false
self
5
null
We have a new Autoregressive Text-to-Speech in town!
92
[https://huggingface.co/maya-research/maya1](https://huggingface.co/maya-research/maya1)
2025-11-06T13:48:06
https://i.redd.it/3gtxm0bl5nzf1.png
Severe-Awareness829
i.redd.it
1970-01-01T00:00:00
0
{}
1opzdow
false
null
t3_1opzdow
/r/LocalLLaMA/comments/1opzdow/we_have_a_new_autoregressive_texttospeech_in_town/
false
false
default
92
{'enabled': True, 'images': [{'id': '3gtxm0bl5nzf1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/3gtxm0bl5nzf1.png?width=108&crop=smart&auto=webp&s=ce8efe5f7e36b911e9e602e6422c702a3c64b2bd', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/3gtxm0bl5nzf1.png?width=216&crop=smart&auto=webp&s=297f3e0a0ee0c5ba8e04ed72c49158d2cc162760', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/3gtxm0bl5nzf1.png?width=320&crop=smart&auto=webp&s=a02be02a52c6927b4d6bf1840c23bfa99a1d8723', 'width': 320}, {'height': 287, 'url': 'https://preview.redd.it/3gtxm0bl5nzf1.png?width=640&crop=smart&auto=webp&s=62bf8df51385db28e73fba54de34caa842cb3b13', 'width': 640}, {'height': 431, 'url': 'https://preview.redd.it/3gtxm0bl5nzf1.png?width=960&crop=smart&auto=webp&s=0faf2d31e6b8b4f7f48ec5e314b48c48d2da65d6', 'width': 960}, {'height': 485, 'url': 'https://preview.redd.it/3gtxm0bl5nzf1.png?width=1080&crop=smart&auto=webp&s=7ef79f15d9d82acefca65f9d830a444513ea8c24', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/3gtxm0bl5nzf1.png?auto=webp&s=9ce2b760097acf868c13f61ec90368e546d2101a', 'width': 1603}, 'variants': {}}]}
3 RTX 3090 graphics cards in a computer for inference and neural network training
2
I want to build a sufficiently powerful PC for ML within my budget. I have enough money for 3× RTX 3090s or a single RTX 5090. In terms of performance, they’re roughly comparable (3 × 35.58 TFLOPS FP32 vs 1 × 104.8 TFLOPS FP32), but the 3× RTX 3090s have more VRAM (3 × 24 GB vs 1 × 32 GB). As I understand it, to run three GPUs well I need a server-grade CPU (for example, Intel Xeon or AMD EPYC) to have enough PCIe lanes. Also, if I’m understanding correctly, NVLink works with at most 2 GPUs, and with 3 they can only communicate via PCIe - how much will this affect the speed of neural network inference and training? Which GPUs should I get?
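For context, here is a minimal sketch of how the three cards would be used for inference without NVLink; `device_map="auto"` shards the weights across all visible GPUs over PCIe (the model name is just an example that fits in 72 GB):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers accelerate

    name = "Qwen/Qwen2.5-32B-Instruct"  # illustrative pick
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(
        name,
        torch_dtype=torch.float16,
        device_map="auto",  # splits layers across all 3 GPUs, PCIe only
    )

    inputs = tok("Hello", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=50)
    print(tok.decode(out[0], skip_special_tokens=True))

For inference, layer-sharded pipelines like this mostly move small activations between cards, so PCIe is usually tolerable; training with gradient synchronization is where the missing NVLink hurts more.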
2025-11-06T13:44:36
https://www.reddit.com/r/LocalLLaMA/comments/1opzanc/3_rtx_3090_graphics_cards_in_a_computer_for/
Standard-Heat4706
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1opzanc
false
null
t3_1opzanc
/r/LocalLLaMA/comments/1opzanc/3_rtx_3090_graphics_cards_in_a_computer_for/
false
false
self
2
null
The 1 Billion Token Challenge: Finding the Perfect Pre-training Mix
15
2025-11-06T13:41:14
https://huggingface.co/blog/codelion/optimal-dataset-mixing
asankhs
huggingface.co
1970-01-01T00:00:00
0
{}
1opz7s0
false
null
t3_1opz7s0
/r/LocalLLaMA/comments/1opz7s0/the_1_billion_token_challenge_finding_the_perfect/
false
false
default
15
{'enabled': False, 'images': [{'id': '_7547ybAZ0VtkPRQO9cNQBrH3zJjmJDlBtHalKB63eY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_7547ybAZ0VtkPRQO9cNQBrH3zJjmJDlBtHalKB63eY.png?width=108&crop=smart&auto=webp&s=922718466ba26d8d59ef8fba09212f307e3ef525', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_7547ybAZ0VtkPRQO9cNQBrH3zJjmJDlBtHalKB63eY.png?width=216&crop=smart&auto=webp&s=2b13356ac45e1d396a5d7cab4b2bf7f77ed80064', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_7547ybAZ0VtkPRQO9cNQBrH3zJjmJDlBtHalKB63eY.png?width=320&crop=smart&auto=webp&s=4ed9585e180f3efa9d93e0860cea137a6994c79c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_7547ybAZ0VtkPRQO9cNQBrH3zJjmJDlBtHalKB63eY.png?width=640&crop=smart&auto=webp&s=405a71d422559afcbb722515f04b7c60d0ce6182', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_7547ybAZ0VtkPRQO9cNQBrH3zJjmJDlBtHalKB63eY.png?width=960&crop=smart&auto=webp&s=e84d896af9ce5db36dbbc7bc6c856a7db501e0e6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_7547ybAZ0VtkPRQO9cNQBrH3zJjmJDlBtHalKB63eY.png?width=1080&crop=smart&auto=webp&s=7acab2f09f1f82c5cbd34142b7b78165bfeb4744', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_7547ybAZ0VtkPRQO9cNQBrH3zJjmJDlBtHalKB63eY.png?auto=webp&s=00bfe28e2687654382b6049c5ff056e8620fa749', 'width': 1200}, 'variants': {}}]}
OK, no jokes like I did previously: why doesn't my Whisper (CPU, base, English) work, and why is the LLM speaking nonsense? I only said "who are you"
0
I have Piper TTS on; the model was Gemma 4B Q6.
2025-11-06T13:41:05
https://i.redd.it/qfl45a5c4nzf1.png
BuriqKalipun
i.redd.it
1970-01-01T00:00:00
0
{}
1opz7mc
false
null
t3_1opz7mc
/r/LocalLLaMA/comments/1opz7mc/ok_no_jokes_like_i_did_previously_why_does_my/
false
false
default
0
{'enabled': True, 'images': [{'id': 'qfl45a5c4nzf1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/qfl45a5c4nzf1.png?width=108&crop=smart&auto=webp&s=15f05327990a00ed02cf05e4488f3b53c8f14b05', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/qfl45a5c4nzf1.png?width=216&crop=smart&auto=webp&s=a2c4e14c7fd46e4475384fd4a86bd7761df89e26', 'width': 216}, {'height': 270, 'url': 'https://preview.redd.it/qfl45a5c4nzf1.png?width=320&crop=smart&auto=webp&s=e78d81a6188d77226be7da3d1acdf01863b362db', 'width': 320}, {'height': 541, 'url': 'https://preview.redd.it/qfl45a5c4nzf1.png?width=640&crop=smart&auto=webp&s=68ebe38bb2d7f8a67c4ba7c8319af10486d4b2ad', 'width': 640}], 'source': {'height': 805, 'url': 'https://preview.redd.it/qfl45a5c4nzf1.png?auto=webp&s=a9097f70c025287269dffc4a5d00d31f5d64b505', 'width': 952}, 'variants': {}}]}
built a single control panel to build mcp servers from any db to any agent builder
3
Built a tool that lets you connect your sources (like Postgres, BigQuery, Snowflake, HubSpot, etc.), define, join, and sandbox views using SQL, and then chat with AI to configure MCP tools on that view. These tools can then be published to any agent builder via one link - OpenAI, LangGraph, n8n, Make, or your own - without exposing credentials or messy schemas. The goal is to make your internal data usable by agents without needing to build custom APIs or pipelines. Would anyone be interested in giving this a try?
2025-11-06T13:34:43
https://i.redd.it/5ubsg5k93nzf1.png
Better-Department662
i.redd.it
1970-01-01T00:00:00
0
{}
1opz27e
false
null
t3_1opz27e
/r/LocalLLaMA/comments/1opz27e/built_a_single_control_panel_to_build_mcp_servers/
false
false
default
3
{'enabled': True, 'images': [{'id': '5ubsg5k93nzf1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/5ubsg5k93nzf1.png?width=108&crop=smart&auto=webp&s=b54c18447e0e6885e85a9d24e3ee5b782c3ace7c', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/5ubsg5k93nzf1.png?width=216&crop=smart&auto=webp&s=ba1bbdb8e6209244c3a3b639efb12d4470b2800a', 'width': 216}, {'height': 205, 'url': 'https://preview.redd.it/5ubsg5k93nzf1.png?width=320&crop=smart&auto=webp&s=2856b3f26ce0ef3a29d7ef13cddfe035a42ae9d5', 'width': 320}, {'height': 410, 'url': 'https://preview.redd.it/5ubsg5k93nzf1.png?width=640&crop=smart&auto=webp&s=358cc5e38f6fd9f203c4786f9bf175d75f4de7b5', 'width': 640}, {'height': 615, 'url': 'https://preview.redd.it/5ubsg5k93nzf1.png?width=960&crop=smart&auto=webp&s=0f9e69596fb4472511cf679dc724bcdd69a342aa', 'width': 960}, {'height': 692, 'url': 'https://preview.redd.it/5ubsg5k93nzf1.png?width=1080&crop=smart&auto=webp&s=412234cae68c46296e8cbeffd2922e6a6d7fa51b', 'width': 1080}], 'source': {'height': 1178, 'url': 'https://preview.redd.it/5ubsg5k93nzf1.png?auto=webp&s=26412eac1206cd1c250ae3ca0513770e3d5066cd', 'width': 1838}, 'variants': {}}]}
11 problems nobody talks about building Agents (and how to approach them)
0
I have been working on AI agents for a while now. It's fun, but some parts are genuinely tough to get right. Over time, I have kept a mental list of things that consistently slow me down. These are the hardest issues I have hit (and how you can approach each of them).

# 1. Overly Complex Frameworks

I think the biggest challenge is using agent frameworks that try to do everything and end up feeling like overkill. They are powerful and can do amazing things, but in practice you use ~10% of the features and then realize the framework is too complex for the simple, specific things you need. You end up fighting the framework instead of building with it.

For example: in **LangChain**, defining a simple agent with a single tool can involve setting up chains, memory objects, executors and callbacks. That's a lot of machinery when all you really need is an LLM call plus one function.

**Approach:** Pick a lightweight building block you actually understand end-to-end. If something like Pydantic AI or SmolAgents (or yes, feel free to plug your own) covers 90% of use cases, build on that. Save the rest for later.

It takes just a few lines of code:

    from pydantic_ai import Agent, RunContext

    roulette_agent = Agent(
        'openai:gpt-4o',
        deps_type=int,
        output_type=bool,
        system_prompt=(
            'Use the `roulette_wheel` function to see if the '
            'customer has won based on the number they provide.'
        ),
    )

    @roulette_agent.tool
    async def roulette_wheel(ctx: RunContext[int], square: int) -> str:
        """Check if the square is a winner."""
        return 'winner' if square == ctx.deps else 'not a winner'

    # run the agent
    success_number = 18
    result = roulette_agent.run_sync('Put my money on square eighteen', deps=success_number)
    print(result.output)

---

# 2. No "human-in-the-loop"

Autonomous agents may sound cool, but giving them unrestricted control is a bad idea. I was experimenting with an MCP Agent for LinkedIn. It was fun to prototype, but I quickly realized there were no natural breakpoints. Giving the agent full control to post or send messages felt risky (one misfire and boom).

**Approach:** The fix is to introduce **human-in-the-loop (HITL) controls**, which are safe breakpoints where the agent pauses, shows you its plan or action, and waits for approval before continuing.

Here's a simple example pattern:

    # Pseudo-code
    def approval_hook(action, context):
        print(f"Agent wants to: {action}")
        user_approval = input("Approve? (y/n): ")
        return user_approval.lower().startswith('y')

    # Use in agent workflow
    if approval_hook("send_email", email_context):
        agent.execute_action("send_email")
    else:
        agent.abort("User rejected action")

The upshot: you stay in control.

---

# 3. Black-Box Reasoning

Half the time, I can't explain why my agent did what it did. It will take some weird action, skip an obvious step, or make weird assumptions -- all hidden behind "LLM logic". The whole thing feels like a black box where the plan is hidden.

**Approach:** Force your agent to expose its reasoning: structured plans, decision logs, traceable steps. Use tools like LangGraph, OpenTelemetry, or logging frameworks to surface the "why" rather than just seeing the "what".

---

# 4. Tool-Calling Reliability Issues

Here's the thing about agents: they are only as strong as the tools they connect to. And those tools? They change. Rate limits hit. Schemas drift. Suddenly your agent has no idea how to handle that, so it just fails mid-task.

**Approach:** Don't assume the tool will stay perfect forever.
* Treat tools as versioned contracts -- enforce schemas & validate arguments
* Add retries and fallbacks instead of failing on the first error
* Follow open standards like MCP (used by OpenAI) or A2A to reduce schema mismatches

In Composio, every tool is fully described with a JSON schema for its inputs and outputs. Their API returns an error code if the JSON doesn't match the expected schema. You can catch this and handle it (for example, prompting the LLM to retry or falling back to a clarification step).

    import openai
    from composio_openai import ComposioToolSet, Action

    # Get structured, validated tools
    toolset = ComposioToolSet()
    tools = toolset.get_tools(actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER])

    # Tools come with built-in validation and error handling
    response = openai.chat.completions.create(
        model="gpt-4",
        tools=tools,
        messages=[{"role": "user", "content": "Star the composio repository"}]
    )

    # Handle tool calls with automatic retry logic
    result = toolset.handle_tool_calls(response)

They also allow fine-tuning of the tool definitions, which further guides the LLM to use tools correctly.

**Who's doing what today:**

* LangChain → Structured tool calling with Pydantic validation.
* LlamaIndex → Built-in retry patterns & validator engines for self-correcting queries.
* CrewAI → Error recovery, handling, structured retry flows.
* Composio → 500+ integrations with prebuilt OAuth handling and robust tool-calling architecture.

---

# 5. Token Consumption Explosion

One of the sneakier problems with agents is how fast they can consume tokens. The worst part? I couldn't even see what was going on under the hood -- no visibility into the exact prompts, token counts, cache hits, and costs flowing through the LLM. Why the explosion? Because we stuffed the full conversation history, every tool result, and every prompt into the context window.

**Approach:**

* Split short-term vs long-term memory
* Purge or summarize stale context
* Only feed what the model needs now

A minimal trimming loop:

    context.append(user_message)
    if token_count(context) > MAX_TOKENS:
        summary = llm("Summarize: " + " ".join(context))
        context = [summary]

Some frameworks, like AutoGen, cache LLM calls to avoid repeat requests, supporting backends like disk, Redis, and Cosmos DB.

---

# 6. State & Context Loss

You kick off a plan, great! Halfway through, the agent forgets what it was doing or loses track of an earlier decision. Why? Because all the "state" was inside the prompt, and the prompt maxed out or was truncated.

**Approach:** Externalize memory/state: use vector DBs, graph flows, persisted run-state files. On crashes or restarts, load what you already did and resume rather than restart. For example, LlamaIndex provides `ChatMemoryBuffer` & storage connectors for persisting conversation state.

---

# 7. Multi-Agent Coordination Nightmares

You split your work: "planner" agent, "researcher" agent, "writer" agent. Great in theory. But now you have routing to manage, memory sharing, who invokes whom, and when. It becomes spaghetti. And if you scale to five or ten agents, the sync overhead gets a lot worse (when you are coding the whole thing yourself).

**Approach:** Don't free-form it at first. Adopt protocols (like A2A, ACP) for structured agent-to-agent handoffs. Define roles, clear boundaries, explicit orchestration. If you only need one agent, don't over-architect. Start with the simplest design; if you really need sub-agents, manually code an agent-to-agent handoff.

---

# 8. Long-term Memory Problem

Too much memory = token chaos.
Too little = the agent forgets important facts. This is the "memory bottleneck": you have to decide what to remember, what to forget, and when, in a systematic way.

**Approach:** Naive approaches don't cut it. Treat memory in layers:

* Short-term: current conversation, active plan
* Long-term: important facts, user preferences, permanent state

Frameworks like Mem0 have a purpose-built memory layer for agents with relevance scoring & long-term recall, while Letta (another framework) uses a memory graph with explicit "when/how to forget" rules baked in.

---

# 9. The "Almost Right" Code Problem

The biggest frustration developers (including me) face is dealing with AI-generated solutions that are "almost right, but not quite". Debugging that "almost right" output often takes longer than just writing the function yourself.

**Approach:** There's not much we can do here (this is a model-level issue), but you can add guardrails and sanity checks.

* Check types, bounds, output shape.
* If you expect a date, validate its format.
* Use self-reflection steps in the agent.
* Add test cases inside the loop.

Some frameworks support `chain-of-thought reflection` or `self-correction steps`; a minimal validate-and-retry sketch is at the end of this post.

---

# 10. Authentication & Security Trust Issues

Security is usually an afterthought in an agent's architecture, so handling authentication is tricky. On paper, it seems simple: give the agent an API key and let it call the service. But in practice, this is one of the fastest ways to create security holes (see MCP agents). Role-based access controls must propagate to all agents, and any data touched by an LLM becomes "totally public with very little effort".

**Approach:**

* Least-privilege access
* Let agents request access only when needed (use OAuth flows or Token Vault mechanisms)
* Track all API calls and enforce role-based access via an identity provider (Auth0, Okta)

Assume your whole agent is an attack surface.

---

# 11. No Real-Time Awareness (Event Triggers)

Many agents are still built on a "you ask → I respond" loop. That's in scope but not enough. What if an external event occurs (Slack message, DB update, calendar event)? If your agent can't react, you are just building a chatbot, not a true agent.

**Approach:** Plug into event sources/webhooks, set triggers, and give your agent "ears" and "eyes" beyond user prompts. Just use a managed trigger platform instead of rolling your own webhook system. Composio Triggers, for example, can send payloads to your AI agents (you can also go with the SDK listener). Here's the webhook approach:

    from fastapi import FastAPI, Request
    from openai import OpenAI
    from composio_openai import ComposioToolSet, Action

    app = FastAPI()
    client = OpenAI()
    toolset = ComposioToolSet()

    @app.post("/webhook")
    async def webhook_handler(request: Request):
        payload = await request.json()
        # Handle Slack message events
        if payload.get("type") == "slack_receive_message":
            text = payload["data"].get("text", "")
            # Pass the event to your LLM agent
            tools = toolset.get_tools([Action.SLACK_SENDS_A_MESSAGE_TO_A_SLACK_CHANNEL])
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": "You are a witty Slack bot."},
                    {"role": "user", "content": f"User says: {text}"},
                ],
                tools=tools
            )
            # Execute the tool call (sends a reply to Slack)
            toolset.handle_tool_calls(resp, entity_id="default")
        return {"status": "ok"}

This pattern works for any app integration. The trigger payload includes context (message text, user, channel, ...) so your agent can use that as part of its reasoning or pass it directly to a tool.

---

At the end of the day, agents break for the same old reasons.
I think most of the possible fixes are the boring stuff nobody wants to do. Which of these have you hit in your own agent builds? And how did (or will) you approach them?
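As promised under problem 9, here's a minimal sketch of the validate-and-retry guardrail. The `Invoice` schema and the `call_llm` stub are placeholders I made up for illustration; the pattern is the point, not the names:

    from pydantic import BaseModel, ValidationError

    class Invoice(BaseModel):   # the output shape we expect back
        date: str
        total: float

    def call_llm(prompt: str) -> str:
        # hypothetical stub -- swap in your actual LLM client here
        return '{"date": "2025-11-06", "total": 42.0}'

    def generate_validated(prompt: str, retries: int = 3) -> Invoice:
        last_error = ""
        for _ in range(retries):
            hint = f"\nYour last output failed validation: {last_error}" if last_error else ""
            raw = call_llm(prompt + hint)
            try:
                return Invoice.model_validate_json(raw)   # pydantic v2
            except ValidationError as e:
                last_error = str(e)   # feed the error back for self-correction
        raise RuntimeError(f"No valid output after {retries} tries: {last_error}")

    print(generate_validated("Extract the invoice as JSON with keys date and total."))

The feedback loop is the cheap version of self-correction: the validator's error message is often enough context for the model to fix its own output on the next try.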
2025-11-06T13:29:42
https://composio.dev/blog/11-problems-i-have-noticed-building-agents-(and-fixes-nobody-talks-about)
Acrobatic-Pay-279
composio.dev
1970-01-01T00:00:00
0
{}
1opyxu7
false
null
t3_1opyxu7
/r/LocalLLaMA/comments/1opyxu7/11_problems_nobody_talks_about_building_agents/
false
false
https://external-preview…b7175abf47263636
0
{'enabled': False, 'images': [{'id': 'E7osLo16dHGyGHMFS5WAFF_cR8bnX9KL3VZzbEytdXs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/E7osLo16dHGyGHMFS5WAFF_cR8bnX9KL3VZzbEytdXs.png?width=108&crop=smart&auto=webp&s=eee884df1d9954e282866c31a15edd2c2a217e1e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/E7osLo16dHGyGHMFS5WAFF_cR8bnX9KL3VZzbEytdXs.png?width=216&crop=smart&auto=webp&s=db3a57e1b9377a665c0037606ee50ff7de5840c4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/E7osLo16dHGyGHMFS5WAFF_cR8bnX9KL3VZzbEytdXs.png?width=320&crop=smart&auto=webp&s=9caf11abcca0f10e394ae4b0197daba646523579', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/E7osLo16dHGyGHMFS5WAFF_cR8bnX9KL3VZzbEytdXs.png?width=640&crop=smart&auto=webp&s=9746ebdb6c765683910f5c69deca74c8bf22a555', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/E7osLo16dHGyGHMFS5WAFF_cR8bnX9KL3VZzbEytdXs.png?width=960&crop=smart&auto=webp&s=8aecc764a3697d36d80002172f262251ff01464e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/E7osLo16dHGyGHMFS5WAFF_cR8bnX9KL3VZzbEytdXs.png?width=1080&crop=smart&auto=webp&s=46d97eeca30fda4c80d2d08656cc4678a415da8d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/E7osLo16dHGyGHMFS5WAFF_cR8bnX9KL3VZzbEytdXs.png?auto=webp&s=ab6f4610feec0e978afd188d1a443de22c2ecf68', 'width': 1200}, 'variants': {}}]}
Continuous Autoregressive Language Models: an alternative to traditional LLMs, paper by Tencent
35
WeChat AI just dropped a paper called Continuous Autoregressive Language Models (CALM), and it basically rethinks how LLMs generate text.

Instead of predicting one token at a time from a discrete vocabulary (the slow, softmax-heavy way every GPT-style model works), CALM predicts continuous vectors that each represent multiple tokens. These vectors are learned through a high-fidelity autoencoder that can compress, say, 4 tokens into one latent vector and reconstruct them with over 99.9% accuracy. So the model generates "semantic chunks" instead of words, cutting generation steps by 4× while keeping meaning intact.

Because the model operates in continuous space, there's no softmax, no cross-entropy, and no perplexity. Training uses an energy-based objective that compares predicted vs. real vectors, and evaluation uses a new metric called BrierLM, a likelihood-free stand-in for perplexity.

In benchmarks on The Pile and WikiText-103, CALM matched or beat standard Transformers with ~40% less compute. It's not just a speed trick; it's a new scaling direction: instead of making models bigger, make each generative step carry more meaning.

Paper: https://arxiv.org/abs/2510.27688
Explanation: https://youtu.be/tLWBzya9dwA?si=k-9ozLk_PvU-V6au
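To make the chunking idea concrete, here's a toy sketch of the autoencoder step in PyTorch. This is my illustration only, not the paper's architecture: CALM uses a deeper, higher-fidelity autoencoder, and the LM itself is trained with an energy-based objective on the latents rather than cross-entropy on tokens:

    import torch
    import torch.nn as nn

    K, V, D = 4, 1000, 256   # chunk size, toy vocab, latent dim (illustrative)

    class ChunkAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(V, D)
            self.enc = nn.Linear(K * D, D)       # K token embeddings -> 1 latent vector
            self.dec = nn.Linear(D, K * V)       # latent -> logits for all K positions

        def forward(self, tokens):               # tokens: (B, K)
            x = self.embed(tokens).flatten(1)    # (B, K*D)
            z = self.enc(x)                      # (B, D): one continuous "semantic chunk"
            logits = self.dec(z).view(-1, K, V)  # (B, K, V)
            return z, logits

    ae = ChunkAutoencoder()
    tokens = torch.randint(0, V, (8, K))
    z, logits = ae(tokens)
    # reconstruction loss: train until the chunk round-trips near-perfectly;
    # the language model then predicts the next z instead of the next token
    loss = nn.functional.cross_entropy(logits.reshape(-1, V), tokens.reshape(-1))
    loss.backward()

Once reconstruction is near-lossless, each autoregressive step of the LM emits one latent vector worth K tokens, which is where the 4× reduction in generation steps comes from.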
2025-11-06T13:26:55
https://www.reddit.com/r/LocalLLaMA/comments/1opyvjt/continuous_autoregressive_language_models/
Technical-Love-8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1opyvjt
false
null
t3_1opyvjt
/r/LocalLLaMA/comments/1opyvjt/continuous_autoregressive_language_models/
false
false
self
35
null
Can we expect Gemma 4 to generate/edit images?
21
Gemma 3 was based on the Gemini 2.0 architecture. Then Gemini 2.5 launched, but we didn't get Gemma 4 or 3.5. Then they released Nano Banana and merged it into Gemini 2.5 Flash, and I had a thought: what if Google releases Gemini 3.0 with native image generation? If that becomes reality, we might get a Gemma 4 with image generation.

And guess what: rumours are that Gemini 3.0 Pro will have native image generation, or, like some people say, it will have Nano Banana 2. That's it! My thoughts came true. Now I'm not sure if Gemini 3.0 Flash and Flash Lite will have image generation, but if they do, then Gemma models will definitely get image generation too. Something like Emu 3.5 but in different sizes.

What do you guys think? (Some people even say they aren't gonna release Gemma 4, and I'm here speculating about its features 😭😭😭)
2025-11-06T13:10:44
https://www.reddit.com/r/LocalLLaMA/comments/1opyi9q/can_we_expect_gemma_4_to_generateedit_images/
Brave-Hold-9389
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1opyi9q
false
null
t3_1opyi9q
/r/LocalLLaMA/comments/1opyi9q/can_we_expect_gemma_4_to_generateedit_images/
false
false
self
21
null
Suggestions on training object detection models
1
Hey guys, I have been working on detecting various segments from page layouts, i.e., text, marginalia, tables, diagrams, etc., with object detection models using [yolov13](https://github.com/iMoonLab/yolov13). I've trained a couple of models: one with around 3k samples and another with 1.8k samples. Both models were trained for about 150 epochs with augmentation.

In order to test the models, I created a custom curated benchmark dataset with a bit more variance than my training set. My models scored only 0.129 mAP and 0.128 mAP respectively (mAP@[.5:.95]). I wonder what factors could affect the model performance. Also, can you suggest which parts I should focus on?
2025-11-06T13:06:16
https://www.reddit.com/r/LocalLLaMA/comments/1opyeiz/suggestion_in_training_object_detection_models/
Adventurous-Storm102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1opyeiz
false
null
t3_1opyeiz
/r/LocalLLaMA/comments/1opyeiz/suggestion_in_training_object_detection_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wmkqHTFKZ4uN6n_8tj_Y5Z8GsEfZXIWjG-snCUTsfgw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wmkqHTFKZ4uN6n_8tj_Y5Z8GsEfZXIWjG-snCUTsfgw.png?width=108&crop=smart&auto=webp&s=c3f33fbebd35f6bd5346939c8eae0e9391f232dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wmkqHTFKZ4uN6n_8tj_Y5Z8GsEfZXIWjG-snCUTsfgw.png?width=216&crop=smart&auto=webp&s=2a8e7b7e7d3271f8345a2424af2256e8b9cf6094', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wmkqHTFKZ4uN6n_8tj_Y5Z8GsEfZXIWjG-snCUTsfgw.png?width=320&crop=smart&auto=webp&s=29ed1c132a34aae192b35d7c949a29ab58a2d723', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wmkqHTFKZ4uN6n_8tj_Y5Z8GsEfZXIWjG-snCUTsfgw.png?width=640&crop=smart&auto=webp&s=8dceb56587889504f44f4145bf35fdceb0123386', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wmkqHTFKZ4uN6n_8tj_Y5Z8GsEfZXIWjG-snCUTsfgw.png?width=960&crop=smart&auto=webp&s=77307acee93309ff9ddfd8d052b17ea19fbc87ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wmkqHTFKZ4uN6n_8tj_Y5Z8GsEfZXIWjG-snCUTsfgw.png?width=1080&crop=smart&auto=webp&s=f75d0ba2426155971f701c828e4aaf4299ac7644', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wmkqHTFKZ4uN6n_8tj_Y5Z8GsEfZXIWjG-snCUTsfgw.png?auto=webp&s=05ef270dbab92fbe00fd916864f69f9c86c18a69', 'width': 1200}, 'variants': {}}]}
GitHub - qqqa: Fast, stateless LLM for your shell: qq answers; qa runs commands (MIT)
2
2025-11-06T12:27:57
https://github.com/matisojka/qqqa
MorroWtje
github.com
1970-01-01T00:00:00
0
{}
1opxkrj
false
null
t3_1opxkrj
/r/LocalLLaMA/comments/1opxkrj/github_qqqa_fast_stateless_llm_for_your_shell_qq/
false
false
default
2
{'enabled': False, 'images': [{'id': 'hEip0VgC5lajvTv4TBedru7ffBSINBylFeJGCyBCZjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hEip0VgC5lajvTv4TBedru7ffBSINBylFeJGCyBCZjw.png?width=108&crop=smart&auto=webp&s=e9c285d3da885ea7b43373973cea6b95d7d2002b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hEip0VgC5lajvTv4TBedru7ffBSINBylFeJGCyBCZjw.png?width=216&crop=smart&auto=webp&s=08c3d90419100c1b192810057803f10182c6d383', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hEip0VgC5lajvTv4TBedru7ffBSINBylFeJGCyBCZjw.png?width=320&crop=smart&auto=webp&s=2bd92c9cf9de055a4f5304fb20bd4c38bfe6e4fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hEip0VgC5lajvTv4TBedru7ffBSINBylFeJGCyBCZjw.png?width=640&crop=smart&auto=webp&s=9e161cb7bc6b90cae5f5c9cfd42bf862df19dcf0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hEip0VgC5lajvTv4TBedru7ffBSINBylFeJGCyBCZjw.png?width=960&crop=smart&auto=webp&s=fe9f481d99560e34352d69393ca8dcb4c5be7a39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hEip0VgC5lajvTv4TBedru7ffBSINBylFeJGCyBCZjw.png?width=1080&crop=smart&auto=webp&s=80fcc24e67eb66295764586229f5d83b131083fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hEip0VgC5lajvTv4TBedru7ffBSINBylFeJGCyBCZjw.png?auto=webp&s=96c8b1427c2ea09523c7d8ae50477e5f7f2d4abb', 'width': 1200}, 'variants': {}}]}
Title: What groundbreaking MCP server ideas could literally disrupt entire industries now that Claude can autonomously control our computers?
0
So I just learned about Model Context Protocol (MCP) servers and how they let Claude autonomously interact with your PC, applications, and systems - not just browse the web, but ACTUALLY control things. This feels like one of those “before and after” moments in tech history. We’re talking about AI that can: • Execute commands on your machine • Chain together multiple tools/servers • Automate complex multi-step workflows across different applications • Make decisions and adapt in real-time My question: What one-of-a-kind MCP servers (or combinations of existing ones) could be built right now that would be genuinely groundbreaking? What’s possible in the MCP era that was literally impossible before? I’m thinking about things like: 🔹 Healthcare: An MCP that monitors patient data across systems, automatically flags anomalies, updates records, and coordinates care between providers in real-time 🔹 Autonomous Trading Systems: AI that doesn’t just analyze markets but actually executes trades across multiple exchanges, rebalances portfolios, files tax documents, and adjusts strategies based on real-time global events - all while you sleep 🔹 Smart City Infrastructure: MCPs controlling traffic lights, energy grids, waste management, and emergency services simultaneously - optimizing entire cities in real-time by connecting thousands of IoT sensors and municipal systems 🔹 Personalized Education Revolution: An MCP that monitors how a student learns across ALL their apps (browser history, note-taking, practice problems), identifies knowledge gaps instantly, generates custom curriculum, schedules study sessions, and even creates practice exams - basically a $100k/year tutor for free What sectors do you think are ripe for disruption? What MCP combinations could create something that’s genuinely impossible without this technology? Drop your wildest (but technically feasible) ideas below. Bonus points if you’re already building something 👀
2025-11-06T12:25:39
https://www.reddit.com/r/LocalLLaMA/comments/1opxj4e/title_what_groundbreaking_mcp_server_ideas_could/
Ok-Breakfast-4676
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1opxj4e
false
null
t3_1opxj4e
/r/LocalLLaMA/comments/1opxj4e/title_what_groundbreaking_mcp_server_ideas_could/
false
false
self
0
null
Text-to-Speech (TTS) models & Tools for 8GB VRAM?
10
I'm a GGUF guy. I use Jan, Koboldcpp, and llama.cpp for text models. Now I'm starting to experiment with audio models (TTS - Text to Speech).

I see the below audio model formats on Hugging Face, and now I'm confused about which to use:

* safetensors / bin (PyTorch)
* GGUF
* ONNX

I don't see GGUF quants for some audio models.

**1]** What **model format** are you using?

**2]** Which **tools/utilities** are you using for the text-to-speech process? Not all chat assistants have TTS and other options. Hopefully there are **tools to run all types of audio model formats** (since there's no GGUF for some models). I'm on Windows 11.

**3]** What **audio models** are you using? I see a lot of audio models, like:

Kokoro, coqui-XTTS, Chatterbox, Dia, VibeVoice, Kyutai-TTS, Orpheus, Zonos, Fishaudio-Openaudio, bark, sesame-csm, kani-tts, VoxCPM, SoulX-Podcast, Marvis-tts, Whisper, parakeet, canary-qwen, granite-speech

**4]** **What quants** are you using & recommending? I have only 8GB VRAM & 32GB RAM. I usually trade off speed against quality for a few text models that are too big for my VRAM+RAM. **But audio-wise I want the best quality, so I'll pick the highest quants that fit my VRAM**. I've never used any quants greater than Q8, but I'm fine going with BF16/F16/F32 as long as it fits my 8GB VRAM. Here I'm talking about GGUF formats.

For example, Dia-1.6-F32 is just 6GB, VibeVoice-1.5B-BF16 is 5GB, and SoulX-Podcast-1.7B.F16 is 4GB. Hope these fit my VRAM with context, etc. Fortunately, half of the audio models (mostly 1-3B) are small compared to text models. I don't know how much additional VRAM the context will take, since I haven't tried any audio models before.

**5]** Please share any resources related to this (e.g., any GitHub repo with a huge list?).

**My requirements:**

* Make 5-10 min audio in MP3 format for given text.
* Voice cloning. For CBT-type presentations, I don't want to talk every time. I just want to create my voice as a template first, then use my voice template with given text to make decent audio in my own voice.

That's it. Thanks.
2025-11-06T12:14:17
https://www.reddit.com/r/LocalLLaMA/comments/1opxb1r/texttospeech_tts_models_tools_for_8gb_vram/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1opxb1r
false
null
t3_1opxb1r
/r/LocalLLaMA/comments/1opxb1r/texttospeech_tts_models_tools_for_8gb_vram/
false
false
self
10
null