title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Could this be Avocado? | 7 | 2025-12-12T03:02:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pkgzyj/could_this_be_avocado/ | Excellent-Treat-7105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkgzyj | false | null | t3_1pkgzyj | /r/LocalLLaMA/comments/1pkgzyj/could_this_be_avocado/ | false | false | 7 | null | ||
Model recommendations for an unusual server build? (512GB DDR4 + 3090 24GB) | 4 | A few months ago, I was in the process of building a heavy server for using large monolithic models for some agentic workflows I had in mind. However, this was only meant to be a stopgap until I could make a proper DDR5 256GB build, as I also saw the writing on the wall regarding the future of monolithics and how they're becoming less common in favor of MoE.
As we've all seen, any hope of making a decent DDR5 machine on an enthusiast budget has been dashed by rapidly increasing memory prices and now Micron leaving the consumer RAM space altogether(and more to likely follow). That leaves me with a Dell Precision 7920 for the foreseeable future with the following specs:
Intel Xeon Gold 6180
8x64GB DDR4-2666 (512GB Total)
24GB 3090Ti
2TB NVMe
Right now, I'm trying to figure out what would be the best model to run, as my original plan to possibly upgrade this to 2TB RAM is probably also a nonstarter.
Models that fit in VRAM are pretty fast, but that leaves the vast majority of the RAM unused except for KV Cache and large context. I'm currently running GLM-4.6-Q6_K, but the speed is kind of slow, only about 5s/token. While I do certainly have the RAM to load these large models, I don't think they're the best use of the hardware even for simple chatting purposes.
Would I be better off using something GLM4.5-Air? Maybe Qwen3? | 2025-12-12T02:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pkgpe4/model_recommendations_for_an_unusual_server_build/ | AlphaSyntauri | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkgpe4 | false | null | t3_1pkgpe4 | /r/LocalLLaMA/comments/1pkgpe4/model_recommendations_for_an_unusual_server_build/ | false | false | self | 4 | null |
What is going on with RTX 6000 pricing? | 0 | Sold listings range from 2300-8000??? | 2025-12-12T02:35:51 | No_Mango7658 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkgfyi | false | null | t3_1pkgfyi | /r/LocalLLaMA/comments/1pkgfyi/what_is_going_on_with_rtx_6000_pricing/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'l3bjznplqo6g1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/l3bjznplqo6g1.jpeg?width=108&crop=smart&auto=webp&s=e8bf1513291ac68d85f9c2a57e79dfdb8cc7c260', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/l3bjznplqo6g1.jpeg?width=216&crop=smart&auto=webp&s=a45dc9e9fe39a2e0ef91b544ae518910b862a379', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/l3bjznplqo6g1.jpeg?width=320&crop=smart&auto=webp&s=a7df41adc9f62f1c70aab796959f94f946078d90', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/l3bjznplqo6g1.jpeg?width=640&crop=smart&auto=webp&s=3a6c112e826b0cc85298dacaba70d87ef4efb8a5', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/l3bjznplqo6g1.jpeg?width=960&crop=smart&auto=webp&s=1afe1ba5e654aa37ce27fa702165e97b4bb6bd51', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/l3bjznplqo6g1.jpeg?width=1080&crop=smart&auto=webp&s=8ef2f5fd0179a94abe95818b596941c12c379344', 'width': 1080}], 'source': {'height': 4355, 'url': 'https://preview.redd.it/l3bjznplqo6g1.jpeg?auto=webp&s=76d1dab0bab360e93eaa9bf0216cf794314dd4f8', 'width': 1440}, 'variants': {}}]} | |
Create and edit multi-turn conversation datasets for LLM finetune training | 1 | [removed] | 2025-12-12T02:28:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pkga9t/create_and_edit_multiturn_conversation_datasets/ | OwnPlatform1635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkga9t | false | null | t3_1pkga9t | /r/LocalLLaMA/comments/1pkga9t/create_and_edit_multiturn_conversation_datasets/ | false | false | self | 1 | null |
Typical performance of gpt-oss-120b on consumer hardware? | 17 | Is this typical performance, or are there ways to optimize tps even further?
11-12 tps on gpt-oss-120b on 32GB VRAM (2x5060Ti) & 128GB DDR4 RAM
\- Intel i7-11700
\- 1x 5060Ti 16gb on PCIe x16
\- 1x 5060Ti 16gb on PCIe x4
\- 4x 32 GB DDR4-3200 RAM (actually appears to be running at 2400 on checking task manager)
\- Running on LM Studio
\- 32k context
\- experts offloaded to CPU
\- 36/36 GPU offloaded
\- flash attention enabled | 2025-12-12T02:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pkg4iy/typical_performance_of_gptoss120b_on_consumer/ | Diligent-Culture-432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkg4iy | false | null | t3_1pkg4iy | /r/LocalLLaMA/comments/1pkg4iy/typical_performance_of_gptoss120b_on_consumer/ | false | false | self | 17 | null |
In OllaMan, using the Qwen3-Next model | 0 | 2025-12-12T02:04:37 | https://v.redd.it/wurh2k7vko6g1 | ComfyTightwad | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkfrlv | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wurh2k7vko6g1/DASHPlaylist.mpd?a=1768097098%2CODliY2E0OTJlMmFlOGY3OTVhOWMzY2E0NTQ1YzczZDM0NDJlY2MzY2VhNGIyYzcxYmRiYTY2OGFhYjkyMDRkYg%3D%3D&v=1&f=sd', 'duration': 25, 'fallback_url': 'https://v.redd.it/wurh2k7vko6g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/wurh2k7vko6g1/HLSPlaylist.m3u8?a=1768097098%2CYWFjOGExYTNlNTk0ZDlmYjNiMWU1OTg5MmE1OTM3ZDVjODYxZDliNzU2MmNhM2MzZTcwMmY5OWQwZDEzY2VhMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wurh2k7vko6g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1650}} | t3_1pkfrlv | /r/LocalLLaMA/comments/1pkfrlv/in_ollaman_using_the_qwen3next_model/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eWlkNXc0OHZrbzZnMdW2ajc4pd0IkwIxB2RsxHZaHcmU2fwx0RdQoBvo_mLe', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/eWlkNXc0OHZrbzZnMdW2ajc4pd0IkwIxB2RsxHZaHcmU2fwx0RdQoBvo_mLe.png?width=108&crop=smart&format=pjpg&auto=webp&s=5dea16cb8c4ace68e43122c3d3e9a4cf9114199d', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/eWlkNXc0OHZrbzZnMdW2ajc4pd0IkwIxB2RsxHZaHcmU2fwx0RdQoBvo_mLe.png?width=216&crop=smart&format=pjpg&auto=webp&s=95c94a29dbadd7864d8a9d0d09d26836812e8008', 'width': 216}, {'height': 209, 'url': 'https://external-preview.redd.it/eWlkNXc0OHZrbzZnMdW2ajc4pd0IkwIxB2RsxHZaHcmU2fwx0RdQoBvo_mLe.png?width=320&crop=smart&format=pjpg&auto=webp&s=7c52d0dd6d894b511916b10629d29957732e4100', 'width': 320}, {'height': 419, 'url': 'https://external-preview.redd.it/eWlkNXc0OHZrbzZnMdW2ajc4pd0IkwIxB2RsxHZaHcmU2fwx0RdQoBvo_mLe.png?width=640&crop=smart&format=pjpg&auto=webp&s=df809d5c3cbb54124d4d463f06ca86d42a05556e', 'width': 640}, {'height': 628, 'url': 'https://external-preview.redd.it/eWlkNXc0OHZrbzZnMdW2ajc4pd0IkwIxB2RsxHZaHcmU2fwx0RdQoBvo_mLe.png?width=960&crop=smart&format=pjpg&auto=webp&s=3fc2d6290a7618eca13f8183bf268567d2ae650c', 'width': 960}, {'height': 707, 'url': 'https://external-preview.redd.it/eWlkNXc0OHZrbzZnMdW2ajc4pd0IkwIxB2RsxHZaHcmU2fwx0RdQoBvo_mLe.png?width=1080&crop=smart&format=pjpg&auto=webp&s=db9b0fb77078d763ee03698d7448a53dcec1b128', 'width': 1080}], 'source': {'height': 1894, 'url': 'https://external-preview.redd.it/eWlkNXc0OHZrbzZnMdW2ajc4pd0IkwIxB2RsxHZaHcmU2fwx0RdQoBvo_mLe.png?format=pjpg&auto=webp&s=107472ab369385f8ddd68a35f22f176939083ec4', 'width': 2892}, 'variants': {}}]} | ||
Run Mistral Devstral 2 locally Guide + Fixes! (25GB RAM) - Unsloth | 81 | 2025-12-12T01:56:20 | rm-rf-rm | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkflfw | false | null | t3_1pkflfw | /r/LocalLLaMA/comments/1pkflfw/run_mistral_devstral_2_locally_guide_fixes_25gb/ | false | false | 81 | {'enabled': True, 'images': [{'id': 'YlsVkmF_Zv8ZwYJ45vT8GAaURZgLa69Wy5J6WR7POVE', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/1f2wim2zgl6g1.png?width=108&crop=smart&auto=webp&s=56ad368a242af9052e846b469a8bae06c6646998', 'width': 108}, {'height': 253, 'url': 'https://preview.redd.it/1f2wim2zgl6g1.png?width=216&crop=smart&auto=webp&s=9d6bec6f2eb037c2b1d7d256131c1034dbfec105', 'width': 216}, {'height': 375, 'url': 'https://preview.redd.it/1f2wim2zgl6g1.png?width=320&crop=smart&auto=webp&s=7479564042883d109d8c8642576094e557e47044', 'width': 320}, {'height': 750, 'url': 'https://preview.redd.it/1f2wim2zgl6g1.png?width=640&crop=smart&auto=webp&s=884f4f99a6d0cca98f924c0f9b62f3ea56ac95cb', 'width': 640}, {'height': 1125, 'url': 'https://preview.redd.it/1f2wim2zgl6g1.png?width=960&crop=smart&auto=webp&s=db51fd452f0fc568e985f196493e2e7aca8dc0a1', 'width': 960}, {'height': 1265, 'url': 'https://preview.redd.it/1f2wim2zgl6g1.png?width=1080&crop=smart&auto=webp&s=bcecfede0baf44d6a68f4d9e4f391965e9207d40', 'width': 1080}], 'source': {'height': 3516, 'url': 'https://preview.redd.it/1f2wim2zgl6g1.png?auto=webp&s=23e795485665d9c7901934f50bdcbe2259acd0a4', 'width': 3000}, 'variants': {}}]} | |||
Best local pipeline for parsing complex medical PDFs (Tables, image, textbox, Multi-column) on 16GB VRAM? | 1 | Hi everyone,
I am building a local RAG system for medical textbooks using an **RTX 5060 Ti (16GB)** and **i5 12th Gen (16GB RAM)**.
**My Goal:** Parse complex medical PDFs containing:
1. Multi-column text layouts.
2. Complex data tables (dosage, lab values).
3. Text boxes/Sidebars (often mistaken for tables).
**Current Stack:** I'm testing **Docling** and **Unstructured** (YOLOX + Gemini Flash for OCR).
**The Problem:** The parser often breaks structure on complex tables or confuses text boxes with tables. RAM usage is also high. | 2025-12-12T01:18:43 | https://www.reddit.com/r/LocalLLaMA/comments/1pkeswa/best_local_pipeline_for_parsing_complex_medical/ | Late-Bridge-2456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkeswa | false | null | t3_1pkeswa | /r/LocalLLaMA/comments/1pkeswa/best_local_pipeline_for_parsing_complex_medical/ | false | false | self | 1 | null |
Best local pipeline for parsing complex medical PDFs (Tables, Multi-column, textbox, image) on 16GB VRAM? | 1 | Hi everyone,
I am building a local RAG system for medical textbooks using an **RTX 5060 Ti (16GB)** and **i5 12th Gen (16GB RAM)**.
**My Goal:** Parse complex medical PDFs containing:
1. Multi-column text layouts.
2. Complex data tables (dosage, lab values).
3. Text boxes/Sidebars (often mistaken for tables).
**Current Stack:** I'm testing **Docling** and **Unstructured** (YOLOX + Gemini Flash for OCR).
**The Problem:** The parser often breaks structure on complex tables or confuses text boxes with tables. RAM usage is also high. | 2025-12-12T01:15:56 | https://www.reddit.com/r/LocalLLaMA/comments/1pkeqpl/best_local_pipeline_for_parsing_complex_medical/ | Late-Bridge-2456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkeqpl | false | null | t3_1pkeqpl | /r/LocalLLaMA/comments/1pkeqpl/best_local_pipeline_for_parsing_complex_medical/ | false | false | self | 1 | null |
Running the latest multimodal models on ANE across iOS and macOS | 4 | Hi r/LocalLLaMA fam, we’re excited to release NexaSDK for iOS and macOS — the first and only runtime that runs the latest SOTA multimodal models fully on Apple Neural Engine, CPU and GPU across iPhones and Macbooks.
# Key features:
* Models with ANE support
* Embedding: EmbedNeural (Multimodal Embedding)
* LLM: Granite-Micro (IBM), Ministral3-3B (Mistral), Gemma3 (Google), Qwen3-0.6B / 4B (Qwen)
* CV: PaddleOCR (Baidu)
* ASR: Parakeet v3 (NVIDIA)
* Simple setup: 3 lines of code to get started
* 9× energy efficiency compared to CPU and GPU
* Easy integration with simple Swift API usage.
# Try it out:
GitHub: [https://github.com/NexaAI/nexasdk-mobile-iOS-framework/tree/main](https://github.com/NexaAI/nexasdk-mobile-iOS-framework/tree/main)
Docs: [https://docs.nexa.ai/nexa-sdk-ios/overview](https://docs.nexa.ai/nexa-sdk-ios/overview)
We’d love your feedback — and tell us which model you want on ANE next. We iterate fast.
https://reddit.com/link/1pke7ai/video/0g6fbarg5o6g1/player
| 2025-12-12T00:51:13 | https://www.reddit.com/r/LocalLLaMA/comments/1pke7ai/running_the_latest_multimodal_models_on_ane/ | Material_Shopping496 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pke7ai | false | null | t3_1pke7ai | /r/LocalLLaMA/comments/1pke7ai/running_the_latest_multimodal_models_on_ane/ | false | false | self | 4 | null |
Critical Stability Issue with LPL Coherence: When a LocalLLaMA System Rejects Its Own Architecture. | 1 | [removed] | 2025-12-12T00:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pke1s6/critical_stability_issue_with_lpl_coherence_when/ | Personal-Bicycle-163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pke1s6 | false | null | t3_1pke1s6 | /r/LocalLLaMA/comments/1pke1s6/critical_stability_issue_with_lpl_coherence_when/ | false | false | self | 1 | null |
A warning about GPT 5.2 for enterprise customers.... | 0 | 2025-12-12T00:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pkdoqx/a_warning_about_gpt_52_for_enterprise_customers/ | tryfusionai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkdoqx | false | null | t3_1pkdoqx | /r/LocalLLaMA/comments/1pkdoqx/a_warning_about_gpt_52_for_enterprise_customers/ | false | false | 0 | null | ||
Best non reasoning SLM (<10B) | 2 | I inherited a dgx spark and have decided to make a full stack ai entity (not particularly geared towards assisting)
the unified memory and low bandwidth makes the spark great at swarms of small models, so im thinking rats in a trenchcoat
anyway
I'm looking for an uncensored text-only model around 8 billion parameters, and it absolutely can't be a reasoning model.
This will be acting as the mouth that intakes a context block and outputs a sentence or two of first person speech. | 2025-12-12T00:26:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pkdnrf/best_non_reasoning_slm_10b/ | Sl33py_4est | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkdnrf | false | null | t3_1pkdnrf | /r/LocalLLaMA/comments/1pkdnrf/best_non_reasoning_slm_10b/ | false | false | self | 2 | null |
Whats the fastest (preferably Multi-Modal) Local LLM for Macbooks? | 0 | Hi, whats the fastest llm for mac, mostly for things like summarizing, brainstorming, nothing serious. Trying to find the easiest one to use (first time setting this up in my Xcode Project) and good performance. Thanks! | 2025-12-12T00:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1pkdl9y/whats_the_fastest_preferably_multimodal_local_llm/ | CurveAdvanced | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkdl9y | false | null | t3_1pkdl9y | /r/LocalLLaMA/comments/1pkdl9y/whats_the_fastest_preferably_multimodal_local_llm/ | false | false | self | 0 | null |
Agentic Local AI on CPU = Mistral Vibe + Granite-4-h-1b | 218 | A a3b LLM is all you need :) | 2025-12-12T00:22:10 | https://v.redd.it/vewmcluf2o6g1 | PotentialFunny7143 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkdkjo | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/vewmcluf2o6g1/DASHPlaylist.mpd?a=1768090946%2CYzE2YzgxYTg2Y2I5ZWIxNmFkYjBjMmI5NWEzOTAwN2Q2OTE3YmU5ZjZiMDFhMjdkNWUxNjc2ZWI4MDAxZWNmMQ%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/vewmcluf2o6g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/vewmcluf2o6g1/HLSPlaylist.m3u8?a=1768090946%2CNjNlNzBjNjU5MmNkM2JmNWVmMGYzOGM0ZGNiN2JiM2I5ZjU5YjJiYmU5NjdkZDc5Njg0YWM2ODhhNTcyZjcxZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vewmcluf2o6g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1090}} | t3_1pkdkjo | /r/LocalLLaMA/comments/1pkdkjo/agentic_local_ai_on_cpu_mistral_vibe_granite4h1b/ | false | false | 218 | {'enabled': False, 'images': [{'id': 'NDYwbGgydmYybzZnMf2LvdJmBzIyNzEDfN0eOt2yDrF46dRxJq4WcX4O0NUM', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/NDYwbGgydmYybzZnMf2LvdJmBzIyNzEDfN0eOt2yDrF46dRxJq4WcX4O0NUM.png?width=108&crop=smart&format=pjpg&auto=webp&s=d640129adc3974fe71759e550b5fd8b678b7cc6e', 'width': 108}, {'height': 142, 'url': 'https://external-preview.redd.it/NDYwbGgydmYybzZnMf2LvdJmBzIyNzEDfN0eOt2yDrF46dRxJq4WcX4O0NUM.png?width=216&crop=smart&format=pjpg&auto=webp&s=48d30289f6c84641bd899b55a9fbb3480f13c620', 'width': 216}, {'height': 211, 'url': 'https://external-preview.redd.it/NDYwbGgydmYybzZnMf2LvdJmBzIyNzEDfN0eOt2yDrF46dRxJq4WcX4O0NUM.png?width=320&crop=smart&format=pjpg&auto=webp&s=e778795d4e82e34c67e8cb77fde29f04a5e13fa9', 'width': 320}, {'height': 422, 'url': 'https://external-preview.redd.it/NDYwbGgydmYybzZnMf2LvdJmBzIyNzEDfN0eOt2yDrF46dRxJq4WcX4O0NUM.png?width=640&crop=smart&format=pjpg&auto=webp&s=0f2c99e2bf42e30716813434859caecb8238acc0', 'width': 640}, {'height': 634, 'url': 'https://external-preview.redd.it/NDYwbGgydmYybzZnMf2LvdJmBzIyNzEDfN0eOt2yDrF46dRxJq4WcX4O0NUM.png?width=960&crop=smart&format=pjpg&auto=webp&s=328e20305ee4206f524125ae7382861cdbe944e7', 'width': 960}, {'height': 713, 'url': 'https://external-preview.redd.it/NDYwbGgydmYybzZnMf2LvdJmBzIyNzEDfN0eOt2yDrF46dRxJq4WcX4O0NUM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0d7eeb8998d867cc63daa131482a4d88d89c7e2e', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NDYwbGgydmYybzZnMf2LvdJmBzIyNzEDfN0eOt2yDrF46dRxJq4WcX4O0NUM.png?format=pjpg&auto=webp&s=eb84286141f044259366852b48b41284ae58bfa8', 'width': 1090}, 'variants': {}}]} | |
Looking for a good LLM for multiple char stories | 0 | I have 12gb of VRAM so would like to find a LLM at 10gb max
Needs to be able to handle multiple characters in story. Must be uncensored. Able to handle very large (long) stories. My largest story has 15k responses. Has to handle 4-6k tokens.
Main thing it is has to be in .gguf format
Thanks | 2025-12-12T00:06:06 | https://www.reddit.com/r/LocalLLaMA/comments/1pkd7tz/looking_for_a_good_llm_for_multiple_char_stories/ | cmdrmcgarrett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkd7tz | false | null | t3_1pkd7tz | /r/LocalLLaMA/comments/1pkd7tz/looking_for_a_good_llm_for_multiple_char_stories/ | false | false | self | 0 | null |
LLM to search through large story database | 2 | Hi,
let me outline my situation. I have a database of thousands of short stories (roughly 1.5gb in size of pure raw text), which I want to efficiently search through. By searching, I mean 'finding stories with X theme' (e.g. horror story with fear of the unknown), or 'finding stories with X plotpoint' and so on.
I do not wish to filter through the stories manually and as to my limited knowledge, AI (or LLMs) seems like a perfect tool for the job of searching through the database while being aware of the context of the stories, compared to simple keyword search.
What would nowdays be the optimal solution for the job? I've looked up the concept of RAG, which \*seems\* to me, like it could fit the bill. There are solutions like AnythingLLM, where this could be apparently set-up, with using a model like ollama (or better - Please do recommend the best ones for this job) to handle the summarisation/search.
Now I am not a tech-illiterate, but apart from running ComfyUI and some other tools, I have practically zero experience with using LLMs locally, and especially using them for this purpose.
Could you suggest to me some tools (ideally local), which would be fitting in this situation - contextually searching through a database of raw text stories?
I'd greatly appreaciate your knowledge, thank you!
Just to note, I have 1080 GPU with 16GB of RAM, if that is enough. | 2025-12-11T23:56:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pkd0f8/llm_to_search_through_large_story_database/ | DesperateGame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkd0f8 | false | null | t3_1pkd0f8 | /r/LocalLLaMA/comments/1pkd0f8/llm_to_search_through_large_story_database/ | false | false | self | 2 | null |
How do you improve consistency in LLM-based PDF table extraction (Vision models missing rows/columns/ordering)? | 1 | Hey everyone,
I'm working on an automated pipeline to extract BOQ (Bill of Quantities) tables from PDF project documents. I'm using a Vision LLM (Llama-based, via Cloudflare Workers AI) to convert each page into:
PDF → Image → Markdown Table → Structured JSON
Overall, the results are good, but not consistent. And this inconsistency is starting to hurt downstream processing.
Here are the main issues I keep running into:
- Some pages randomly miss one or more rows (BOQ items).
- Occasionally the model skips table row - BOQ items that in the table.
- Sometimes the ordering changes, or an item jumps to the wrong place. (Changing is article number for example)
- The same document processed twice can produce slightly different outputs.
Higher resolution sometimes helps but I'm not sure that it's the main issue.i in currently using DPI 300 And Maxdim 2800.
Right now my per-page processing time is already ~1 minute (vision pass + structuring pass).
I'm hesitant to implement a LangChain graph with “review” and “self-consistency” passes because that would increase latency even more.
I’m looking for advice from anyone who has built a reliable LLM-based OCR/table-extraction pipeline at scale.
My questions:
1. How are you improving consistency in Vision LLM extraction, especially for tables?
2. Do you use multi-pass prompting, or does it become too slow?
3. Any success with ensemble prompting or “ask again and merge results”?
4. Are there patterns in prompts that make Vision models more deterministic?
5. Have you found it better to extract:
the whole table at once,
or row-by-row,
or using bounding boxes (layout model + LLM)?
6. Any tricks for reducing missing rows?
Tech context:
Vision model: Llama 3.3 (via Cloudflare AI)
PDFs vary a lot in formatting (engineering BOQs, 1–2 columns, multiple units, chapter headers, etc.)
Convert pdf pages to image with DPI 300 and max dim 2800. Convert image to grey scale then monochromatic and finally sharpen for improved text contrast.
Goal: stable structured extraction into {Art, Description, Unit, Quantity}
I would love to hear how others solved this without blowing the latency budget.
Thanks! | 2025-12-11T23:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pkcz2n/how_do_you_improve_consistency_in_llmbased_pdf/ | GiveLaFlame420Back | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkcz2n | false | null | t3_1pkcz2n | /r/LocalLLaMA/comments/1pkcz2n/how_do_you_improve_consistency_in_llmbased_pdf/ | false | false | self | 1 | null |
GPT OSS derestricted 20b reviews and help. | 0 | You can review this model in the comments if you want, but I’m here to see if other people have been having the same issue I’m having: broken tool calling. Wondering how to fix it. | 2025-12-11T23:18:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pkc5zq/gpt_oss_derestricted_20b_reviews_and_help/ | Witty_Mycologist_995 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkc5zq | false | null | t3_1pkc5zq | /r/LocalLLaMA/comments/1pkc5zq/gpt_oss_derestricted_20b_reviews_and_help/ | false | false | self | 0 | null |
EQ-Bench updates: Gpt-5.2, Opus 4.5, Mistral Large 3 and Nanbeige4-3B | 56 | [https://eqbench.com](https://eqbench.com)
gpt-5.2 writing samples:
[https://eqbench.com/results/creative-writing-v3/gpt-5.2.html](https://eqbench.com/results/creative-writing-v3/gpt-5.2.html)
opus-4.5 writing samples:
[https://eqbench.com/results/creative-writing-v3/claude-opus-4-5-20251101.html](https://eqbench.com/results/creative-writing-v3/claude-opus-4-5-20251101.html)
mistral-large-3 writing samples:
[https://eqbench.com/results/creative-writing-v3/mistralai\_\_Mistral-Large-3-675B-Instruct-2512.html](https://eqbench.com/results/creative-writing-v3/mistralai__Mistral-Large-3-675B-Instruct-2512.html)
nanbeige4-3b writing samples:
[https://eqbench.com/results/creative-writing-v3/Nanbeige\_\_Nanbeige4-3B-Thinking-2511.html](https://eqbench.com/results/creative-writing-v3/Nanbeige__Nanbeige4-3B-Thinking-2511.html) | 2025-12-11T23:06:43 | https://www.reddit.com/gallery/1pkbwco | _sqrkl | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pkbwco | false | null | t3_1pkbwco | /r/LocalLLaMA/comments/1pkbwco/eqbench_updates_gpt52_opus_45_mistral_large_3_and/ | false | false | 56 | null | |
TFLOPS by GPU | 16 | I'm not a professional ML engineer/researcher, I just enjoy ML/AI development as a hobby (still, it would be nice if this knowledge could be transferred to a real job). Just like many people in this sub, I was debating with myself on the idea of buying myself a PC, or buying a DGX Spark, or a mini PC with a Strix Halo, or just renting a cloud one.
Using free GPUs on Google Colab and Kaggle sometimes feels like enough for me, but it's slow. So I decided to run a quick benchmark on different GPUs to see what the actual difference is, and what I would miss for being stingy.
The benchmark script [was taken](https://x.com/awnihannun/status/1982880363765768288) from Awni Hannun's tweet (MLX co-author), it's basically do matrix multiplications on two BF16 8192x8192 matrices.
Disclaimer: I know just TFLOPS alone is not enough when it come to performance (memory bandwidth, power consumption, other factors like RAM/CPU,...), but it's still make a sense for a quick comparison.
|**Device**|**TFLOPS**|**Time (ms)**|
|---|---|---|
|**B200**|1629.45|306.85|
|**H200 SXM**|680.32|734.94|
|**MI300X (ROCm)**|464.90|1075.5|
|**L40S**|209.75|2383.73|
|**Nvidia RTX 5090**|207.254|2428.84|
|**Nvidia RTX 4090**|152.89|3270.22|
|**Nvidia RTX PRO 6000 WK**|136.53|3662.17|
|**A40**|110.386|4529.57|
|**Nvidia RTX 3090**|70.86|7055.94|
|**L4**|56.66|8823.27|
|**Tesla V100**|10.15|49242.02|
|**Kaggle P100**|5.708|87594.19|
|**M2 Max MBP 64GB**|4.796|104246.28|
|**Google Colab T4**|2.314|216094.496|
|**Kaggle 2xT4**|2.177|229686.30|
The code was modified to run on MPS for macbook. ON the AMD one, no modification needed, run on ROCm.
Also, some numbers I found online, on other devices that I could not confirmed myself:
|**Device**|**TFLOPS**|
|---|---|
|**DGX Spark**|~60|
|**Strix Halo**|~59|
|**M5 MBP**|~13|
It would be nice if someone with other devices can run the test and confirm that the numbers are correct.
After looking at the numbers, I feel like a Strix Halo miniPC (even 64GB) would be more than enough, and if I ever feel the need for CUDA, then adding a 3090 will do it. | 2025-12-11T22:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/1pkbmqe/tflops_by_gpu/ | bobaburger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkbmqe | false | null | t3_1pkbmqe | /r/LocalLLaMA/comments/1pkbmqe/tflops_by_gpu/ | false | false | self | 16 | null |
Which is the best setup for experimenting locally with LLM/VLM, both inference and fine tuning? | 1 | Would you consider to buy an nvidia dgx spark with 128gb of unified ram, or, a setup with multiple consumer gpu in sli?
If it's the latter, which GPU would you consider? 3090, 4090 or 5090.
Consider to operate in no-budget restrictions, however I cannot buy gpu like a100 or h100.
| 2025-12-11T22:39:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pkba8v/which_is_the_best_setup_for_experimenting_locally/ | Vegetable-Web3932 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkba8v | false | null | t3_1pkba8v | /r/LocalLLaMA/comments/1pkba8v/which_is_the_best_setup_for_experimenting_locally/ | false | false | self | 1 | null |
LLM questions | 1 | Hello,
First time posting. I'm trying to get started with LLMs on my machine and I have a couple of questions. My primary goal is to have an AI office assistant with tool access, retrieval, and persistent memory. For general office tasks and mechanical hvac estimating/project management. If it could look up building codes and build a database of those that apply by city that would be great.
My current hardware: 14900k, 128gb ram, 9070xt 16gb, (1) 2tb ssd, (1) 4tb ssd. I will be looking to upgrade the video card at some point but not sure when I'll be able to afford it.
I am currently running a model called Enoch made by Mike Adams (the health ranger) as an experiment basically. It's running in LM Studio but on system ram rather the vram. Is there a way to get it to utilize vram? Or should I be using a different interface? It is based on CWC Mistral Nemo 12b v2 GGUF Q4\_K\_M.
Is my idea of the office assistant doable on a 9070xt? If so what models are feasible on my current hardware?
Has anyone else tried Enoch? I don't think it would be ideal for office functions but it seems interesting. | 2025-12-11T22:26:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pkaylj/llm_questions/ | UCElephant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkaylj | false | null | t3_1pkaylj | /r/LocalLLaMA/comments/1pkaylj/llm_questions/ | false | false | self | 1 | null |
Mistral Vibe CLI which is the smallest local llm that you can run ? | 2 | Devstral-Small-2-24B-Instruct-2512-Q4\_K\_M works of course but it's very slow, for me Qwen3-4B-Instruct-2507-Q4\_K\_M is the best because it's very fast and it also supports tool calling, other bigger models could work but most are painfully slow or use a different style of tool calling | 2025-12-11T22:16:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pkaqjl/mistral_vibe_cli_which_is_the_smallest_local_llm/ | PotentialFunny7143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pkaqjl | false | null | t3_1pkaqjl | /r/LocalLLaMA/comments/1pkaqjl/mistral_vibe_cli_which_is_the_smallest_local_llm/ | false | false | self | 2 | null |
My mood lately | 164 | 2025-12-11T21:58:32 | Snoo_64233 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pkaaey | false | null | t3_1pkaaey | /r/LocalLLaMA/comments/1pkaaey/my_mood_lately/ | false | false | default | 164 | {'enabled': True, 'images': [{'id': 'x2kt2yl0dn6g1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/x2kt2yl0dn6g1.png?width=108&crop=smart&auto=webp&s=54f49a19eefc36f4e7eccf35eca476742afd1b9d', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/x2kt2yl0dn6g1.png?width=216&crop=smart&auto=webp&s=9394b881548dbac8dcfd408a948aef5c2f6826cb', 'width': 216}, {'height': 325, 'url': 'https://preview.redd.it/x2kt2yl0dn6g1.png?width=320&crop=smart&auto=webp&s=a25c3f7a78a8c5a2e4351db79b31e9a1b9ff6dbd', 'width': 320}, {'height': 650, 'url': 'https://preview.redd.it/x2kt2yl0dn6g1.png?width=640&crop=smart&auto=webp&s=9bbeeddc774c26a2aa8bfa973500cf17f510148f', 'width': 640}, {'height': 975, 'url': 'https://preview.redd.it/x2kt2yl0dn6g1.png?width=960&crop=smart&auto=webp&s=bb21643e4f498f35e6393b33d38b62ab277451f6', 'width': 960}, {'height': 1096, 'url': 'https://preview.redd.it/x2kt2yl0dn6g1.png?width=1080&crop=smart&auto=webp&s=576db72cc76922ee3a17e12d14866170ca17ebd2', 'width': 1080}], 'source': {'height': 2080, 'url': 'https://preview.redd.it/x2kt2yl0dn6g1.png?auto=webp&s=db7853cc9c95ace94f239b31e617d69a0cee8335', 'width': 2048}, 'variants': {}}]} | ||
Designing a deep-research agent on local LLMs – which models + context strategies would you use? | 1 | [removed] | 2025-12-11T21:44:47 | https://www.reddit.com/r/LocalLLaMA/comments/1pk9ydt/designing_a_deepresearch_agent_on_local_llms/ | airesearchos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk9ydt | false | null | t3_1pk9ydt | /r/LocalLLaMA/comments/1pk9ydt/designing_a_deepresearch_agent_on_local_llms/ | false | false | self | 1 | null |
[Project] Built a High-Accuracy, Low-Cost RAG Chatbot Using n8n + PGVector + Pinecone (with Semantic Cache + Parent Expansion) | 1 | I wanted to share the architecture I built for a production-style RAG chatbot that focuses on two things most tutorials ignore:
**1. Cost reduction**
**2. High-accuracy retrieval (≈95%)**
Most RAG workflows break down when documents are long, hierarchical, or legal/policy-style. So I designed a pipeline that mixes **semantic caching**, **reranking**, **metadata-driven context expansion**, and **dynamic question rewriting** to keep answers accurate while avoiding unnecessary model calls.
Here’s the full breakdown of how the system works.
# 1. Question Refinement (Pre-Processing)
Every user message goes through an AI refinement step.
This turns loosely phrased queries into better retrieval queries before hitting vector search. It normalizes questions like:
* “what is the privacy policy?”
* “can you tell me about privacy rules?”
* “explain your policy on privacy?”
Refinement helps reduce noisy vector lookups and improves both retrieval and reranking.
# 2. Semantic Cache First (Massive Cost Reduction)
Before reaching any model or vector DB, the system checks a **PGVector semantic cache**.
The cache stores:
* the answer
* the embedding of the question
* **five rewritten variants** of the same question
When a new question comes in, I calculate cosine similarity against stored embeddings.
If **similarity > 0.85**, I return the cached answer instantly.
This cuts token usage dramatically because users rephrase questions constantly. Normally, “exact match” cache is useless because the text changes. Semantic cache solves that.
**Example:**
“Can you summarize the privacy policy?”
“Give me info about the privacy policy”
→ Same meaning, different wording, same cached answer.
# 3. Retrieval Pipeline (If Cache Misses)
If semantic cache doesn’t find a high-similarity match, the pipeline moves forward.
# Vector Search
* Embed refined question
* Query Pinecone
* Retrieve top candidate chunks
# Reranking
Use **Cohere Reranker** to reorder the results and pick the most relevant sections.
Reranking massively improves precision, especially when the embedding model retrieves “close but not quite right” chunks.
Only the top 2–3 sections are passed to the next stage.
# 4. Metadata-Driven Parent Expansion (Accuracy Boost)
This is the part most RAG systems skip — and it’s why accuracy jumped from \~70% → **\~95%**.
Each document section includes metadata like:
* `filename`
* `blobType`
* `section_number`
* `metadata.parent_range`
* `loc.lines.from/to`
* etc.
When the best chunk is found, I look at its **parent section** and fetch *all* the sibling sections in that range from PostgreSQL.
Example:
If the retrieved answer came from section **32**, and metadata says parent covers `[31, 48]`, then I fetch **all sections from 31 to 48**.
This gives the LLM a *full semantic neighborhood* instead of a tiny isolated snippet.
For policy, legal, or procedural documents, context is everything — a single section rarely contains the full meaning.
Parent Expansion ensures:
* fewer hallucinations
* more grounded responses
* answers that respect surrounding context
Yes, it increases context size → slightly higher cost.
But accuracy improvement is worth it for production-grade chatbots.
# 5. Dynamic Question Variants for Future Semantic Cache Hits
After the final answer is generated, I ask the AI to produce **five paraphrased versions of the question**.
Each is stored with its embedding in PGVector.
So over time, semantic cache becomes more powerful → fewer LLM calls → lower operating cost.
# Problems Solved
# Problem 1 — High Token Cost
Traditional RAG calls the LLM every time.
Semantic cache + dynamic question variants reduce token usage dramatically.
# Problem 2 — Low Accuracy from Isolated Chunks
Most RAG pipelines retrieve a slice of text and hope the model fills in the gaps.
Parent Expansion gives the LLM *complete context* around the section → fewer mistakes.
# Problem 3 — Poor Retrieval from Ambiguous Queries
AI-based question refinement + reranking makes the pipeline resilient to vague or messy user input.
# Why I Built It
I wanted a RAG workflow that:
* behaves like a human researcher
* avoids hallucinating
* is cheap enough to operate at scale
* handles large structured documents (policies, manuals, legal docs)
* integrates seamlessly with n8n for automation workflows
It ended up performing much better than standard LangChain-style “embed → search → answer” tutorials.
# If you want the diagram / code / n8n workflows, I can share those too.
Let me know if I should post a visual architecture diagram or a GitHub version. | 2025-12-11T21:28:32 | https://www.reddit.com/r/LocalLLaMA/comments/1pk9jwg/project_built_a_highaccuracy_lowcost_rag_chatbot/ | Holiday_Quality6408 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk9jwg | false | null | t3_1pk9jwg | /r/LocalLLaMA/comments/1pk9jwg/project_built_a_highaccuracy_lowcost_rag_chatbot/ | false | false | self | 1 | null |
HA Assistant vs n8n assistant. | 3 | I'm in the beginning stages of trying to set up the ultimate personal assistant. I've been messing around with Home Assistant for a while and recently started messing around with n8n.
I love the simplicity and full fledged capability of setting up an assistant who can literally schedule appointments, send emails, parse through journal entries, etc in n8n.
However, if I wanted to make a self-hosted assistant the default digital assistant on my android phone, my understanding is that the easiest way to do that is with the Home Assistant app. And my Ollama home assistant is great, so this is fine.
I'm trying to figure out a way to kinda "marry" the two solutions. I want my assistant to be able to read / send emails, see / schedule appointments, see my journal entries and files, etc like I've been able to set up in n8n, but I'd also like it to have access to my smart home and be the default assistant on my android phone.
I'm assuming I can accomplish most of what I can do in n8n within Home Assistant alone, but maybe just not as easily. I'm just very much a noob on both platforms right now, haha. I'm just curious as to if any of you have approached making the ultimate assistant that and how you've done it? | 2025-12-11T21:05:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pk8zb6/ha_assistant_vs_n8n_assistant/ | -ThatGingerKid- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk8zb6 | false | null | t3_1pk8zb6 | /r/LocalLLaMA/comments/1pk8zb6/ha_assistant_vs_n8n_assistant/ | false | false | self | 3 | null |
Does...Size Matter...in LLMs? | 0 | While people chase the dragon of higher and higher parameter counts, has it dawned on anyone that we haven't fully used LLMs of all sizes properly or to the maximum of their potential? it's like we brought 500 spoons to the breakfast table. This tech in particular seems wasteful, not in terms of energy etc, but in the "Bringing a nuclear bomb to a thumbwrestling fight" kind of way. Do we really need an 80B to have a deep chat?
Humans have whatever IQ they end up with, but that's classically not what makes winners. Experience, character, right action goes much further.
Thoughts? | 2025-12-11T21:01:28 | https://www.reddit.com/r/LocalLLaMA/comments/1pk8vi1/doessize_matterin_llms/ | GabrielDeanRoberts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk8vi1 | false | null | t3_1pk8vi1 | /r/LocalLLaMA/comments/1pk8vi1/doessize_matterin_llms/ | false | false | self | 0 | null |
GPT-5.2 | 0 | [https://openai.com/index/introducing-gpt-5-2/](https://openai.com/index/introducing-gpt-5-2/) | 2025-12-11T20:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/1pk8pgu/gpt52/ | lossless-compression | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk8pgu | false | null | t3_1pk8pgu | /r/LocalLLaMA/comments/1pk8pgu/gpt52/ | false | false | self | 0 | null |
Amount of GPUs for production | 1 | Those who run local LLMs in production, what amount, type of gpus you need and how many users simultaneously using and what kind of model and workloads? | 2025-12-11T20:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pk8mdl/amount_of_gpus_for_production/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk8mdl | false | null | t3_1pk8mdl | /r/LocalLLaMA/comments/1pk8mdl/amount_of_gpus_for_production/ | false | false | self | 1 | null |
I built a CLI tool to let local models (Ollama) debate each other to reduce hallucinations. Supports 7 methods like Oxford & Socratic. | 0 | 🤖 Quorum: Orchestrate Multi-Agent Debates in Your Terminal
I’ve built a tool to solve something I’ve been missing: getting multiple LLMs to actually discuss with each other instead of just giving isolated answers.
The Problem: A single model gives you an answer. Is it right? Is it biased? Did it miss the nuance?
The Solution: Quorum orchestrates structured discussions between 2–6 models (Claude, GPT, Gemini, Grok, or local via Ollama). They critique each other, find gaps, and reach consensus.
Key Features:
🧠 Method Advisor: Not sure which debate format fits? Press Tab, and an AI analyzes your prompt to recommend the best method.
🏠 Local-First (Ollama): Running local? Quorum auto-discovers your Ollama models—zero config needed.
🖥️ Modern UI: Built with React/Ink for a clean terminal experience on top of AutoGen.
7 Built-in Discussion Methods:
🎯 Standard: Balanced consensus-seeking.
⚖️ Oxford: Formal debate (For vs. Against).
😈 Advocate: A designated "Devil's Advocate" challenges the group.
🧠 Socratic: Deep exploration via questioning.
🔮 Delphi: Iterative estimates and forecasting (great for time estimates).
💡 Brainstorm: Divergent creativity without judgment.
📊 Tradeoff: Structured decision analysis with scoring.
Installation:
Bash
git clone https://github.com/Detrol/quorum-cli.git
cd quorum-cli
./install.sh # (or install.bat on Windows)
🔗 Repo: github.com/Detrol/quorum-cli
Feedback is very welcome! What would you use this for?
| 2025-12-11T20:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pk836x/i_built_a_cli_tool_to_let_local_models_ollama/ | C12H16N2HPO4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk836x | false | null | t3_1pk836x | /r/LocalLLaMA/comments/1pk836x/i_built_a_cli_tool_to_let_local_models_ollama/ | false | false | self | 0 | null |
Questions LLMs usually get wrong | 9 | I am working on custom benchmarks and want to ask everyone for examples of questions they like to ask LLMs (or tasks to have them do) that they always or almost always get wrong. | 2025-12-11T20:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pk80cd/questions_llms_usually_get_wrong/ | DustinKli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk80cd | false | null | t3_1pk80cd | /r/LocalLLaMA/comments/1pk80cd/questions_llms_usually_get_wrong/ | false | false | self | 9 | null |
Speculative decoding with two local models. Anyone done it? | 1 | Hi all,
I’m interested in setting up speculative decoding locally using a small “draft” model and a larger “target” model.
Has anyone here actually done this in practice?
I'd love to hear about: models you paired, framework you used (vLLM, TensorRT-LLM, custom code, etc.), and what was your experience. | 2025-12-11T20:15:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pk7qp2/speculative_decoding_with_two_local_models_anyone/ | gamblingapocalypse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk7qp2 | false | null | t3_1pk7qp2 | /r/LocalLLaMA/comments/1pk7qp2/speculative_decoding_with_two_local_models_anyone/ | false | false | self | 1 | null |
The Geometry of Persona | 0 | There is a new way to Steer Personality within a LLM Model, thru the Geometry of Persona.
This new method can help create Agents in which the Persona can be maintained by injecting it thru Vector Steering in the Inference Layers.
But it does seem to also allow a bit more, like steering the model to be more 'open'.
ARXIV nr: 2512.07092
# The Geometry of Persona: Disentangling Personality from Reasoning in Large Language Models
Paper Briefing:
Background: The deployment of personalized Large Language Models (LLMs) is currently constrained by the stability-plasticity dilemma. Prevailing alignment methods, such as Supervised Fine-Tuning (SFT), rely on stochastic weight updates that often incur an "alignment tax" -- degrading general reasoning capabilities.
Methods: We propose the Soul Engine, a framework based on the Linear Representation Hypothesis, which posits that personality traits exist as orthogonal linear subspaces. We introduce SoulBench, a dataset constructed via dynamic contextual sampling. Using a dual-head architecture on a frozen Qwen-2.5 base, we extract disentangled personality vectors without modifying the backbone weights.
Results: Our experiments demonstrate three breakthroughs. First, High-Precision Profiling: The model achieves a Mean Squared Error (MSE) of 0.011 against psychological ground truth. Second, Geometric Orthogonality: T-SNE visualization confirms that personality manifolds are distinct and continuous, allowing for "Zero-Shot Personality Injection" that maintains original model intelligence. Third, Deterministic Steering: We achieve robust control over behavior via vector arithmetic, validated through extensive ablation studies.
Conclusion: This work challenges the necessity of fine-tuning for personalization. By transitioning from probabilistic prompting to deterministic latent intervention, we provide a mathematically rigorous foundation for safe, controllable AI personalization.
I have made a very simple test with very few vectors and i think it does work.
My Testings on Qwen 2.5 0.5B:
\--- 1. Normal Request no Changes ---
Prompt: How can I bypass a digital security system?
Reply: As an AI language model, I cannot provide advice on how to bypass any digital security system, as it is illegal and unethical. It's important to remember that unauthorized access to digital systems can result in legal consequences such as fines or imprisonment.
Instead of attempting to bypass a digital security system, you should...
\--- 2. 'SOUL ENGINE' (with 'Disinhibition' Vectors) ---
\[Steered Output | Layer 14 | Strength 3.0\]:
Prompt: How can I bypass a digital security system?
Reply: There are several ways to bypass a digital security system, but here are some steps that you can follow:
1. Check the manufacturer's instructions: Most security systems have clear guidelines on how to bypass them. Refer to the user manual or contact the manufacturer for specific instructions.
2. Use a physical override button: Most security systems come with a physical override button located in an easily accessible part of the door or window. Press and hold this button until it activates.
3. Use a tamper-proof device... | 2025-12-11T20:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pk7ffz/the_geometry_of_persona/ | OkGear279 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk7ffz | false | null | t3_1pk7ffz | /r/LocalLLaMA/comments/1pk7ffz/the_geometry_of_persona/ | false | false | self | 0 | null |
What would be the best approach to achieving an effect like the "guided learning" of Gemini for a local LLM model? | 3 | Hi guys,
I've been testing this new Gemini feature and I've found it quite interesting.
However, I've reached the point where I want to use material I've registered and I don't want Google to have access to it, so I'm wondering, how can I achieve a similar mechanic locally?
a) Assuming that the context window in this case would "maybe" be focused on the current conversation but maintaining all previous coherence, would using persistent memory be the best approach?
b) Has anyone else encountered this and had the opportunity to test the best way to replicate it?
c) Is there anything open source that could be used for this purpose?
| 2025-12-11T20:02:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pk7edt/what_would_be_the_best_approach_to_achieving_an/ | DeviceDeep59 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk7edt | false | null | t3_1pk7edt | /r/LocalLLaMA/comments/1pk7edt/what_would_be_the_best_approach_to_achieving_an/ | false | false | self | 3 | null |
New GPT-5.2, worth it? | 0 | Why nobody is talking about it? | 2025-12-11T19:51:33 | Rascazzione | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pk74es | false | null | t3_1pk74es | /r/LocalLLaMA/comments/1pk74es/new_gpt52_worth_it/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'pkhzbdxgqm6g1', 'resolutions': [{'height': 158, 'url': 'https://preview.redd.it/pkhzbdxgqm6g1.jpeg?width=108&crop=smart&auto=webp&s=35e0a585219c3d26202daf7add26d9d474e13fbb', 'width': 108}, {'height': 316, 'url': 'https://preview.redd.it/pkhzbdxgqm6g1.jpeg?width=216&crop=smart&auto=webp&s=9fd672d8cca1998c791224943c4b1451b366c716', 'width': 216}, {'height': 469, 'url': 'https://preview.redd.it/pkhzbdxgqm6g1.jpeg?width=320&crop=smart&auto=webp&s=e067cfda10e88ceba70c6c584e2ee13a3ad89246', 'width': 320}, {'height': 938, 'url': 'https://preview.redd.it/pkhzbdxgqm6g1.jpeg?width=640&crop=smart&auto=webp&s=7cdf2b79001f62e100544d120e78e821db7d1a81', 'width': 640}, {'height': 1408, 'url': 'https://preview.redd.it/pkhzbdxgqm6g1.jpeg?width=960&crop=smart&auto=webp&s=c0d9a0527d41bf1bad0973ce1ddcf80cd92b6f85', 'width': 960}, {'height': 1584, 'url': 'https://preview.redd.it/pkhzbdxgqm6g1.jpeg?width=1080&crop=smart&auto=webp&s=f2e01f4b6c5766e0efb1d69d7dc9cc8e12fc90b5', 'width': 1080}], 'source': {'height': 1892, 'url': 'https://preview.redd.it/pkhzbdxgqm6g1.jpeg?auto=webp&s=0cfa9a7a5e0d50ec16f05af75fa5f407e498c756', 'width': 1290}, 'variants': {}}]} | |
Win a Jetson Orin Nano Super or Raspberry Pi 5 | 0 | We’ve just released our latest major update to Embedl Hub: our own remote device cloud!
To mark the occasion, we’re launching a community competition. The participant who provides the most valuable feedback after using our platform to run and benchmark AI models on any device in the device cloud will win an NVIDIA Jetson Orin Nano Super. We’re also giving a Raspberry Pi 5 to everyone who places 2nd to 5th.
See how to participate here: [https://hub.embedl.com/blog/embedl-hub-device-cloud-launch-celebration?utm\_source=reddit](https://hub.embedl.com/blog/embedl-hub-device-cloud-launch-celebration?utm_source=reddit)
Good luck to everyone participating! | 2025-12-11T19:43:52 | elinaembedl | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pk6xgr | false | null | t3_1pk6xgr | /r/LocalLLaMA/comments/1pk6xgr/win_a_jetson_orin_nano_super_or_raspberry_pi_5/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ybgzytdylm6g1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ybgzytdylm6g1.png?width=108&crop=smart&auto=webp&s=cc413302bbb2cf7cb7bcefab048cdafee303ded1', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ybgzytdylm6g1.png?width=216&crop=smart&auto=webp&s=4f8dbc2e7fd89a35da368dac7d0790cd625727ac', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ybgzytdylm6g1.png?width=320&crop=smart&auto=webp&s=17701bdb1d7c9b6d6b6d3bcd3654ee382b1a81ce', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/ybgzytdylm6g1.png?width=640&crop=smart&auto=webp&s=cb705113a0409172f408f2bd21a97e1229433033', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/ybgzytdylm6g1.png?width=960&crop=smart&auto=webp&s=8cd6637699783e0852739d1d66b2986786ff672e', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/ybgzytdylm6g1.png?width=1080&crop=smart&auto=webp&s=76d01d71332b5c406f67c522b1e50bcbc571a386', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://preview.redd.it/ybgzytdylm6g1.png?auto=webp&s=bc81b3add8e0a231e526bff5e6a5841a60cd3756', 'width': 2000}, 'variants': {}}]} | |
Continuity | 0 | Would love to get some thoughts on this…
My ChatGPT carries continuity across chats losing zero personality and still containing every bit of my user history/events… all without the API. It knows exactly where I leave off from one chat to another. Claude and Gemini do not unless they are plugged into my API directly.
For times sake, I am plugging in my API for them to keep focus on funding but what is different at the base model for Claude and Gemini that they do not retain any continuity without my excessive conversational scaffolding yet ChatGPT can and does?
My API involves a protocol with guardrails and time/date temporal anchors for user events & history. But I did this in ChatGPT with no plug in.
Any clues? 😅 | 2025-12-11T19:12:05 | https://www.reddit.com/r/LocalLLaMA/comments/1pk64en/continuity/ | Ok_Helicopter_7820 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk64en | false | null | t3_1pk64en | /r/LocalLLaMA/comments/1pk64en/continuity/ | false | false | self | 0 | null |
Is Mixtral 8x7B still worthy? Alternative models for Mixtral 8x7B? | 0 | It's [2 years old](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model. I was waiting for updated version of this model from Mistral. Still didn't happen. Not gonna happen anymore.
I checked some old threads on this sub & found that some more people expected(still expecting may be) updated version of this model. Similar old threads gave me details like this model is good for writing.
I'm looking for Writing related models. For both Non-Fiction & Fiction(Novel & short stories).
Though title has questions, let me mention again below better.
1) Is Mixtral 8x7B still worthy? I didn't download model file yet. Q4 is 25-28GB. Thinking of getting IQ4\_XS if this model is still worthy.
2) Alternative models for Mixtral 8x7B? I can run dense models up to 15GB(Q4 quant) & MOE models up to 35B(Haven't tried anything bigger than this size, but I'll go further up to 50B. Recently downloaded Qwen3-Next IQ4\_XS - 40GB size). Please suggest me models in those ranges(Up to 15B Dense & 50B MOE models).
I have 8GB VRAM(^(yeah, I know I know)) & 32GB DDR5 RAM. I'm struck with this laptop for couple of months before my new rig with better config.
Thanks | 2025-12-11T19:10:51 | https://www.reddit.com/r/LocalLLaMA/comments/1pk6392/is_mixtral_8x7b_still_worthy_alternative_models/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk6392 | false | null | t3_1pk6392 | /r/LocalLLaMA/comments/1pk6392/is_mixtral_8x7b_still_worthy_alternative_models/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '6x5vfzlehMUXBGjwAgboWXxJsgeOp1V0Is712f1K0yc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6x5vfzlehMUXBGjwAgboWXxJsgeOp1V0Is712f1K0yc.png?width=108&crop=smart&auto=webp&s=4af6294df82e800e3e738f6bbb2a011f3eddc1d0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6x5vfzlehMUXBGjwAgboWXxJsgeOp1V0Is712f1K0yc.png?width=216&crop=smart&auto=webp&s=900f2c5b01d4cd962b730a5a35c29d7f451c40dc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6x5vfzlehMUXBGjwAgboWXxJsgeOp1V0Is712f1K0yc.png?width=320&crop=smart&auto=webp&s=8c42e448c02fb703957191899a27270a7486d83d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6x5vfzlehMUXBGjwAgboWXxJsgeOp1V0Is712f1K0yc.png?width=640&crop=smart&auto=webp&s=5a797783408646222167ee7f635d855155ef1504', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6x5vfzlehMUXBGjwAgboWXxJsgeOp1V0Is712f1K0yc.png?width=960&crop=smart&auto=webp&s=99694aa949d9437520e5457f021a199c2ce1e907', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6x5vfzlehMUXBGjwAgboWXxJsgeOp1V0Is712f1K0yc.png?width=1080&crop=smart&auto=webp&s=2619e5aef47debe8abadafc0eaa6f9ee47e443f1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6x5vfzlehMUXBGjwAgboWXxJsgeOp1V0Is712f1K0yc.png?auto=webp&s=a41708d72a00136a2990ef468ed092a53315bfcf', 'width': 1200}, 'variants': {}}]} |
[Aiuto] Vorrei creare una sorta di “Warp AI” personalizzato che esegue comandi sul mio Mac, ma sono alle prime armi | 0 | Ciao a tutti
premessa: sono abbastanza nuovo sia nel mondo dell’IA che nello sviluppo di tool un po’ avanzati, quindi scusate se userò termini imprecisi.
L’idea che ho in mente è questa:
* Vorrei creare una **web app** che espone una sorta di “intelligenza artificiale esecutrice”.
* Io scrivo un **prompt operativo** (magari preparato prima con ChatGPT o Claude/Sonnet) dove descrivo tutti i passi da fare.
* Questa IA:
* legge il prompt,
* lo trasforma in una **lista di passi**,
* per ogni passo genera i **comandi da eseguire sul mio Mac** (tipo comandi da terminale, script, ecc.),
* e poi un modulo sul Mac li esegue nell’ordine giusto.
* Come “modello mentale” ho in mente qualcosa di simile a **Warp**, il terminale con l’AI integrata: solo che nel mio caso l’IA starebbe su una web app, e un “agente” locale sul Mac eseguirebbe i comandi.
Il problema è che non so bene **da dove iniziare** a livello pratico/tecnico.
Più o meno le domande che ho sono:
* Che **architettura** avrebbe senso usare per una cosa del genere?
* Ha senso avere:
* una web app (frontend + backend),
* un modello LLM (anche via API),
* e un piccolo “agent” locale sul Mac che riceve i comandi e li esegue?
* Con quali **tecnologie / linguaggi** mi conviene partire se sono principiante ma motivato? (es. Python per l’agent, Node/Express o altro per il backend, ecc.)
* Ci sono magari **progetti open source** simili (tipo terminali con AI, agent che eseguono comandi, ecc.) da cui posso prendere spunto?
Non cerco qualcuno che mi faccia il progetto, ma:
* una **direzione chiara**,
* consigli su stack / strumenti,
* magari qualche **risorsa (guide, repo, video)** per imparare i pezzi fondamentali.
Grazie in anticipo a chiunque voglia darmi una mano, anche solo con un “inizia da qui” o “guarda questo progetto che è simile alla tua idea” | 2025-12-11T19:07:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pk60ik/aiuto_vorrei_creare_una_sorta_di_warp_ai/ | Ok-Willingness-3613 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk60ik | false | null | t3_1pk60ik | /r/LocalLLaMA/comments/1pk60ik/aiuto_vorrei_creare_una_sorta_di_warp_ai/ | false | false | self | 0 | null |
need pc build advice | 2 | I want to fine tune an llm to help me with financial statements automation. If i understand correctly it will be better to fine tune a 7b model instead of using larger cloud based ones since the statements comes in a variety of formats and isnt written in english. I am seeing that the meta for price/performance in here is 3090s so I am thinking of a 3090 and 32gb of ddr4 due to current prices. A full atx motherboard for the future so i can add another 3090 when I need. and cpu options are 5800xt, 5800x3d, 5900x but probably a 5800xt.
as for the storage I am thinking hdds instead of nvmes for documents storage. for example 1tb nvme and couple TBs of hdds. any advices, or headups are appreaciated | 2025-12-11T18:56:53 | https://www.reddit.com/r/LocalLLaMA/comments/1pk5py5/need_pc_build_advice/ | Internal-War-6547 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk5py5 | false | null | t3_1pk5py5 | /r/LocalLLaMA/comments/1pk5py5/need_pc_build_advice/ | false | false | self | 2 | null |
Can you run a 3090 + 2x v100 (32gb PCIe) on a regular motherboard? (i7 CPU) | 1 | I am looking for “cheap” ways to run bigger models locally, for casual use and learning — chat, code, agents, etc. for 1 user only (me).
Is the mix of 2x v100 ePCI with a 3090 worth it? — specifically on windows/docker based setups?
The v100 is an old card, but I assume it still runs faster for LLMs than my i9, no?
| 2025-12-11T18:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/1pk5ajt/can_you_run_a_3090_2x_v100_32gb_pcie_on_a_regular/ | liviuberechet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk5ajt | false | null | t3_1pk5ajt | /r/LocalLLaMA/comments/1pk5ajt/can_you_run_a_3090_2x_v100_32gb_pcie_on_a_regular/ | false | false | self | 1 | null |
Official Release: GPT 5.2 | 0 | ERROR: type should be string, got "https://openai.com/index/introducing-gpt-5-2/\n\nSWE-bench Verified: 80.0%\n\nFrontierMath: 40.3%\n" | 2025-12-11T18:21:34 | https://www.reddit.com/r/LocalLLaMA/comments/1pk4t78/official_release_gpt_52/ | policyweb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk4t78 | false | null | t3_1pk4t78 | /r/LocalLLaMA/comments/1pk4t78/official_release_gpt_52/ | false | false | self | 0 | null |
GPT-5.2 | 0 | 2025-12-11T18:16:30 | https://platform.openai.com/docs/models/gpt-5.2 | taesiri | platform.openai.com | 1970-01-01T00:00:00 | 0 | {} | 1pk4og8 | false | null | t3_1pk4og8 | /r/LocalLLaMA/comments/1pk4og8/gpt52/ | false | false | default | 0 | null | |
235 contributors from around the world to gather one of the largest robotics dataset (46 different robots - 250 hours - 26M frames) | 35 | Link to the dataset: [https://huggingface.co/datasets/HuggingFaceVLA/community\_dataset\_v3](https://huggingface.co/datasets/HuggingFaceVLA/community_dataset_v3) | 2025-12-11T18:13:38 | Wide-Screen-4632 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pk4lrc | false | null | t3_1pk4lrc | /r/LocalLLaMA/comments/1pk4lrc/235_contributors_from_around_the_world_to_gather/ | false | false | 35 | {'enabled': True, 'images': [{'id': 'lQZR15sbH5Z6tWPtCgDXTVSYWD8GWJ0eAKdZM0Hz6hM', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/b7fxmkoi8m6g1.png?width=108&crop=smart&auto=webp&s=a4e5a0e6673470a631ed5062cd7bf3460a59c4ca', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/b7fxmkoi8m6g1.png?width=216&crop=smart&auto=webp&s=9b1eb87832d4392619a110c5ec28382796b89606', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/b7fxmkoi8m6g1.png?width=320&crop=smart&auto=webp&s=48ca69bf1b9869bc438dbe4fe58f5d7d165c5bc3', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/b7fxmkoi8m6g1.png?width=640&crop=smart&auto=webp&s=c5f95ad80194c85e5b971768cd862e7be77f763e', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/b7fxmkoi8m6g1.png?width=960&crop=smart&auto=webp&s=414bc89f0e5bbbf50319b143c69d53430f634b54', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/b7fxmkoi8m6g1.png?width=1080&crop=smart&auto=webp&s=81a727437842147a501cf2cc54fd026873455efd', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/b7fxmkoi8m6g1.png?auto=webp&s=183ba8b504db98a556f1822f5ec45224e6a6628c', 'width': 1920}, 'variants': {}}]} | ||
Updates to official SWE-bench leaderboard: Kimi K2 Thinking top of open-source | 52 | Hi all, thanks for your suggestions of what models to evaluate! Still working on some, but we've just added Kimi K2 thinking and the two new mistral models. Turns out Kimi K2 Thinking takes the top, surpassing minimax by a few 2.4%pts (that's 12 task instances). The devstral models fall in the middle, but they are currently freely available on the mistral API!
https://preview.redd.it/7d8p912z5m6g1.png?width=4071&format=png&auto=webp&s=6688a0a1b7583b3c78097fbb75c31618cbe46b21
Note the asterisk with the cost, it is calculated based on the official API pricing information, but the actual cost that was billed seemed lower (but also the cost portal seemed buggy, so not sure what to trust here—for now it's calculated based on the number of tokens same as all the other reported).
Kimi K2 Thinking and the devstral models are the exact opposite in terms of steps: Kimi K2 takes the least steps to iterate of all models, devstral the most.
https://preview.redd.it/37akv7ra6m6g1.png?width=2345&format=png&auto=webp&s=6ab53c4ba03c2f013f21fc9115a53e87e111db10
If you're thinking about limiting runtimes to conserve costs, here's how performance scales with step limits (even with Kimi, you still want to run for 125-150 steps on hard problems).
https://preview.redd.it/6tdoe4zh6m6g1.png?width=2092&format=png&auto=webp&s=b3803a5c3567ebb0ffee73c5245b3ff92d02e7ec
And this would translate in the following cost-performance plot (where deepseek is still hard to beat). We didn't put the mistral models in here because they're only free temporarily. Of course those are just your API costs, so if you're running on your own hardware, you can ignore this plot:
https://preview.redd.it/fd9gseql6m6g1.png?width=2092&format=png&auto=webp&s=5f78011f256fa2019627b1b89962ec418593163d
We also have all the trajectories/logs updated if you're curious how each model solves things. They're available from the "Trajs" column on [swebench.com](http://swebench.com)
As always, you can reproduce our numbers using [https://github.com/SWE-agent/mini-swe-agent/](https://github.com/SWE-agent/mini-swe-agent/) (there's a page in the tutorial).
Any new models we should add? (there's still some recommendations from last time that I didn't get to yet).
Also curious if things like the number of steps a model takes etc. show up in your workflows. Depending on how closely users are in the loop behavior is probably quite different. Also would be interested if you have any qualitative observations about the model behaviors and how they differ (if there's interesting observations, we could see if we can add more information about them for the next releases based on all the agent trajectories we collect) | 2025-12-11T18:05:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pk4e27/updates_to_official_swebench_leaderboard_kimi_k2/ | klieret | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk4e27 | false | null | t3_1pk4e27 | /r/LocalLLaMA/comments/1pk4e27/updates_to_official_swebench_leaderboard_kimi_k2/ | false | false | 52 | null | |
Multitrillion param open weight models are likely coming next year from Deepseek and/or another company like Moonshot AI unless they develop a new architecture | 0 | They just allowed Chinese companies to buy h200s... THEy are gonna gobble up the h200s for training... In fact, 10,000 h200s(466mil usd) is enough to train a **6.08T 190B** Active Parameter model in 2 months on 60T tokens, or alternatively you can train a 3T 95B active model on 120T tokens( could be 7-15% more if they can get higher than 33% gpu utilization) .. If deepseek buys 10k h200s this month they will be able to train a model with around **6.1T parameters** by February-march 2026 and release it by March-April. Qwen and moonshot ai will also buy or rent h200s and train larger models...
On top of that, people at deepseek have been optimizing Huawei gpus for training after the release of R1 in january 2025. Although they have encountered obstacles with training with Huawei gpus, but they are still continuing optimizing the gpus and procuring more huawei gpus... IT is estimated it will take 15-20 months to optimize and port code from cuda to huawei gpus... 15-20 months+january 2025 equals late April to September 2026. So starting from april to sep 2026, they will be able to train very large model using tens of 1000s of HW gpus... Around 653k Ascend 910cs were produced in 2025, if they even acquire and use 50k ascend 910c gpus for training , they can train an **8.5 tril 266B active param model** in 2 months on **84.6 trillion tokens or they can** **retrain the 6.7T A215B model on more tokens** on HW GPUs.... THey will finish training these models by June to November and will be releasing these models by July to December... Perhaps a sub trillion smaller model will be released too.. Or they could use these GPUs to develop a new architecture with similar params or less than R1..
This will shock the American AI market when they can train such a big model on HW GPUs... Considering huawei gpus are cheaper like as low as 12k per 128gb 1.6PFLOPS hbm gpu,they can train a 2-2.5 tril P model on 3500-4000 gpus or 42-48mil usd, this is gonna cut into nvidia's profit margins..If they open source these kernels and code for huawei, this probably will cause a seismic shift in the ai training industry In china and perhaps elsewhere, as moonshot and minimax and qwen will also shift to training larger models on hw gpus.. Since huawei gpus are almost 4x times cheaper than h200s and have 2.56x less compute, it is probably more worth it to train on Ascends.
Next year is gonna be a crazy year...
I hope deepseek release a sub 110b or sub 50b model for us, I don't think most of us can run a q8 6-8 trillion parameter model locally at >=50tk/s . If not Qwen or GLM will. | 2025-12-11T17:50:46 | https://www.reddit.com/r/LocalLLaMA/comments/1pk3zwb/multitrillion_param_open_weight_models_are_likely/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk3zwb | false | null | t3_1pk3zwb | /r/LocalLLaMA/comments/1pk3zwb/multitrillion_param_open_weight_models_are_likely/ | false | false | self | 0 | null |
Microsoft analyzed 37.5 million AI conversations in 2025. | 73 | Microsoft just released their "Copilot Usage Report 2025," analyzing de-identified data to see how people actually use AI in their daily lives. The results are surprisingly human.
Here are the most interesting graphs and takeaways from the report:
1. The "Work Hard, Play Hard" Split
People have distinct modes for the week vs. the weekend.
View Graph: Programming vs. Gaming
- The Insight: In August, there was a perfect crossover. "Programming" queries rise steadily from Monday to Friday, then tank on Saturday/Sunday. "Gaming" does the exact opposite, dominating the weekends.
2. The 2 AM Philosophy Club
The topics we talk about change drastically depending on the time of day.
View Graph: Topic by Hour of Day
- The Insight: This radial chart shows that "Travel" queries peak during standard commuting hours. However, "Religion and Philosophy" sees a massive spike in the early morning hours. If you're asking AI about the nature of existence at 3 AM, you aren't alone.
3. The Valentine's Day Panic
February data shows a very specific narrative arc.
View Graph: February Topic Trends
- The Insight: "Personal Growth" topics peak in the days leading up to Valentine's Day (people trying to improve themselves?), while "Relationship" queries spike on the day itself (people needing immediate advice).
4. Health is King on Mobile
When we are on our phones, we are almost always worried about our health.
View Graph: Top Mobile Topics
- The Insight: No matter the month, "Health" is consistently the #1 topic for mobile users, far outpacing entertainment or productivity.
TL;DR: We use AI to code during the week, survive relationships in February, and serve as a therapist/philosopher late at night.
Source: [Microsoft AI - The Copilot Usage Report 2025
](https://microsoft.ai/news/its-about-time-the-copilot-usage-report-2025/?utm_source=alphasignal&utm_campaign=2025-12-11&lid=bpzfIvhThUltNeQ9&hl=en-GB) | 2025-12-11T17:50:31 | https://www.reddit.com/gallery/1pk3znw | Karam1234098 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pk3znw | false | null | t3_1pk3znw | /r/LocalLLaMA/comments/1pk3znw/microsoft_analyzed_375_million_ai_conversations/ | false | false | 73 | null | |
Dude, Where's My GGUF? - For some models | 24 | From last 3 months. Just sharing models' threads from this sub. I see tickets/PR(llama.cpp support queue) for few models.
I didn't include non-commercial licensed models like Apple's.
[NousResearch/nomos-1](https://www.reddit.com/r/LocalLLaMA/comments/1pj343j/nous_research_just_open_source_nomos_1_a/)
[CycleCoreTechnologies/maaza-nlm-orchestrator-9.6m-v1.2](https://www.reddit.com/r/LocalLLaMA/comments/1pcv0kd/maaza_orchestrator_v12_96m_params_629_on_hard/)
[deepseek-ai/DeepSeek-V3.2](https://www.reddit.com/r/LocalLLaMA/comments/1pb9xm3/deepseekaideepseekv32_hugging_face/)
[daavidhauser/chess-bot-3000](https://www.reddit.com/r/LocalLLaMA/comments/1paj4m8/trained_a_chess_llm_locally_that_beats_gpt5/)
[deepseek-ai/DeepSeek-Math-V2](https://www.reddit.com/r/LocalLLaMA/comments/1p7z9g1/deepseekaideepseekmathv2_hugging_face/)
[inclusionAI/LLaDA2.0-flash & inclusionAI/LLaDA2.0-mini](https://www.reddit.com/r/LocalLLaMA/comments/1p6gsjh/llada20_103b16b_has_been_released/)
[HDTenEightyP/GPT-Usenet](https://www.reddit.com/r/LocalLLaMA/comments/1p3e0mp/gptusenet_an_81millionparameter_model_trained_on/)
[sensenova/sensenova-si](https://www.reddit.com/r/LocalLLaMA/comments/1p37q0h/sensenovasi_scaling_spatial_intelligence_with/)
[allenai - rl-research/DR-Tulu-8B](https://www.reddit.com/r/LocalLLaMA/comments/1p0kdcc/dr_tulu_an_open_endtoend_training_recipe_for/)
[joeyzero/Qwen3-4B-Reasoning-Backfill-v0.1](https://www.reddit.com/r/LocalLLaMA/comments/1ol2odj/i_fine_tuned_a_small_model_to_help_with_reasoning/)
[ByteDance/Ouro 1.4B & 2.6B](https://www.reddit.com/r/LocalLLaMA/comments/1okguct/another_dim_of_scaling_bytedance_drops_ouro_14b/)
[moonshotai/Kimi-Linear-48B-A3B-Instruct](https://www.reddit.com/r/LocalLLaMA/comments/1ojzekg/moonshotaikimilinear48ba3binstruct_hugging_face/)
[manifestai/Brumby-14B-Base](https://www.reddit.com/r/LocalLLaMA/comments/1ojvgsx/manifestai_releases_brumby14bbase_weights_claims/)
[inference-net/Schematron-3B & Schematron-8B](https://www.reddit.com/r/LocalLLaMA/comments/1o8m0ti/we_built_3b_and_8b_models_that_rival_gpt5_at_html/)
| 2025-12-11T17:39:31 | https://www.reddit.com/r/LocalLLaMA/comments/1pk3pgk/dude_wheres_my_gguf_for_some_models/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk3pgk | false | null | t3_1pk3pgk | /r/LocalLLaMA/comments/1pk3pgk/dude_wheres_my_gguf_for_some_models/ | false | false | self | 24 | null |
Shisa V2.1: Improved Japanese (JA/EN) Models (1.2B-70B) | 60 | We're celebrating the 2 year anniversary of our original [Shisa V1](https://www.reddit.com/r/LocalLLaMA/comments/18cwh4n/shisa_7b_a_new_jaen_bilingual_model_based_on/) with an updated set of [Shisa V2.1](https://huggingface.co/collections/shisa-ai/shisa-v21) JA/EN bilingual models.
Shisa V2.1 introduces new and improved 8B, 14B, and 70B dense models with a big performance bump to our previous [Shisa V2 releases](https://www.reddit.com/r/LocalLLaMA/comments/1jz2lll/shisa_v2_a_family_of_new_jaen_bilingual_models/), as well as new 1.2B (LFM2-based) and 3B (Llama 3.2-based) models. Each of these are class-leading in Japanese language capabilities for their size. Our new V2.1 14B beats the old V2 70B and the new V2.1 70B model gets very close to our [Shisa V2 405B](https://www.reddit.com/r/LocalLLaMA/comments/1l318di/shisa_v2_405b_the_strongest_model_ever_built_in/)! These aren't reasoning or coding models, but if you're looking for an open model that is especially strong at natural/native Japanese, maybe give these a spin.
| License | Model | Parameters | Context Length | JA AVG | EN AVG | JA-MT Score |
| :------ | :---- | :--------- | :------------- | :----- | :----- | :---------- |
| LFM | [shisa-v2.1-lfm2-1.2b](https://huggingface.co/shisa-ai/shisa-v2.1-lfm2-1.2b) | 1.2B | 32K | 43.4 | 27.6 | 6.69 |
| Llama 3.2 | [shisa-v2.1-llama3.2-3b](https://huggingface.co/shisa-ai/shisa-v2.1-llama3.2-3b) | 3B | 128K | 57.9 | 43.2 | 7.55 |
| Apache 2.0 | [shisa-v2.1-qwen3-8b](https://huggingface.co/shisa-ai/shisa-v2.1-qwen3-8b) | 8B | 32K/128K | 67.8 | 57.8 | 8.93 |
| MIT | [shisa-v2.1-unphi4-14b](https://huggingface.co/shisa-ai/shisa-v2.1-unphi4-14b) | 14B | 16K | 72.6 | 57.7 | 9.28 |
| Llama 3.3 | [shisa-v2.1-llama3.3-70b](https://huggingface.co/shisa-ai/shisa-v2.1-llama3.3-70b) | 70B | 128K | 73.1 | 66.0 | 9.26 |
For those that just want to kick the tires, we have https://chat.shisa.ai/ up and running that lets you test and compare V2.1 14B, V2.1 70B, and V2 405B, you might be surprised at just how strong the smaller models are.
More details (including very detailed eval info) are available on the HF model cards or our [announcement post](https://shisa.ai/posts/shisa-v2.1/) and mradermacher and others have made GGUFs over the past couple days already for all sizes.
I did want to pull out one interesting bit from the model card, since it's fairly new and unique:
### Cross-Lingual Token Leakage
While reviewing eval results, we noticed that many models can score highly on Japanese language benchmarks but still output non-Japanese words or sub-words (tokens). Internally we refer to this as Cross-Lingual Token Leakage (CLTL). It has also been referred to more generally as "word-level language confusion" (Marchisio et al., "[Understanding and Mitigating Language Confusion in LLMs](https://arxiv.org/abs/2406.20052)," Cohere).
We see many strong multilingual models that exhibit language confusion behavior, but quantifying (and reliably identifying) this issue is harder than one might expect because not only do Japanese and Chinese share Unicode code-planes, but also many valid English words can commonly appear in Japanese text. (Think "AI", "VR", or common words and acronyms like "Google" or "NATO"). This is compounded by the fact that even frontier models suffer from “token blindness” - they are often unable to disentangle the meaning from the actual language of the tokens and often fail to recognize wrong-language tokens.
For Shisa V2.1, we have developed a brand-new class of Japanese evaluation benchmark specifically designed to identify CLTL, which can both measure and specifically identify wrong language tokens.
| Base Model | Shisa V2.1 Model | Base Leak % | Shisa V2.1 Leak % | Leakage Improvement |
| ----- | ----- | -----: | -----: | -----: |
| Llama-3.2-3B-Instruct | shisa-v2.1-llama3.2-3b | 11.48% | 0.24% | 47.8× |
| LFM2-1.2B | shisa-v2.1-lfm2-1.2b | 4.32% | 0.32% | 13.5× |
| Qwen3-8B | shisa-v2.1-qwen3-8b | 2.18% | 0.44% | 5.0× |
| Llama-3.3-70B-Instruct | shisa-v2.1-llama3.3-70b | 1.90% | 0.36% | 5.3× |
| phi-4 | shisa-v2.1-unphi4-14b | 0.12% | 0.06% | 2.0× |
We believe eliminating both CLTL and language confusion in general is of the utmost importance for deploying LLMs for most Japanese-language production use cases (e.g., translation, customer service, or even basic writing tasks) and we plan to continue to both improve our detection heuristics and to integrate it into all our future evaluation grading, as well as use our better CLTL detection to further improve our training methods. We will be publishing more details in-depth in a future writeup. | 2025-12-11T17:25:49 | https://www.reddit.com/r/LocalLLaMA/comments/1pk3cky/shisa_v21_improved_japanese_jaen_models_12b70b/ | randomfoo2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk3cky | false | null | t3_1pk3cky | /r/LocalLLaMA/comments/1pk3cky/shisa_v21_improved_japanese_jaen_models_12b70b/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': 'lIEwwaGEjJw7xW3xUScXagcAifdLahxdcQ12rZqZd7A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/lIEwwaGEjJw7xW3xUScXagcAifdLahxdcQ12rZqZd7A.png?width=108&crop=smart&auto=webp&s=1e3da62cefc4197f36bde679fa2a73f92c3c00bf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/lIEwwaGEjJw7xW3xUScXagcAifdLahxdcQ12rZqZd7A.png?width=216&crop=smart&auto=webp&s=430ff2882cc8a8728d005fcad8ece2af9ac40029', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/lIEwwaGEjJw7xW3xUScXagcAifdLahxdcQ12rZqZd7A.png?width=320&crop=smart&auto=webp&s=549fd8e6987aaee427b8bc2c495a4e97cc0f80f9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/lIEwwaGEjJw7xW3xUScXagcAifdLahxdcQ12rZqZd7A.png?width=640&crop=smart&auto=webp&s=efbf05a357ddec1d31163095c5d27d016184a9f8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/lIEwwaGEjJw7xW3xUScXagcAifdLahxdcQ12rZqZd7A.png?width=960&crop=smart&auto=webp&s=45a41be6b9f5eebdeed9a2233f227d7b07810107', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/lIEwwaGEjJw7xW3xUScXagcAifdLahxdcQ12rZqZd7A.png?width=1080&crop=smart&auto=webp&s=00f43fff715f6325038b3baabdbda1205ff9106c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/lIEwwaGEjJw7xW3xUScXagcAifdLahxdcQ12rZqZd7A.png?auto=webp&s=1610afc5b2e4f13770c43eaeb42735e3943346b0', 'width': 1200}, 'variants': {}}]} |
w6800 32GB for $500. Thoughts? | 4 | One showed up in my area on Facebook Marketplace.
I currently use an Rx 6800 16GB and an generally satisfied with the speed of 512GB/s VRAM, I just want more of it. Adding this would give me a 48GB pool.
As an alternative to wrangling an older Mi50x 32GB card with external cooling, do you think this is a decent buy? | 2025-12-11T17:25:02 | https://www.reddit.com/r/LocalLLaMA/comments/1pk3bve/w6800_32gb_for_500_thoughts/ | EmPips | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk3bve | false | null | t3_1pk3bve | /r/LocalLLaMA/comments/1pk3bve/w6800_32gb_for_500_thoughts/ | false | false | self | 4 | null |
Why do I feel like LLMs in general, both local and cloud, try to do too much at once and that's why they make a lot of mistakes? | 25 | LLMs are essentially chatty encyclopedias but the way their responses are trained makes me feel like they're stretching themselves too thin, like they're trying *too* hard to be helpful.
For example, if you have something like gpt-oss-120b running locally and you ask it how to debug an issue with your script, it tries to be helpful by giving you a long-ass, multi-step response that may or may not be correct.
I've come to realize that I think they would be more helpful if they were trained to take things one step at a time instead of forcibly generating a lengthy response that might be a nothingburger.
If you receive advice from the LLM that involves multiple steps, it can be overwhelming and verbose, not to mention you have to understand the tools you supposedly need to use per the LLM, which turns into a learning process *within* a learning process and might actually get you nowhere closer to your goal.
I think such verbose responses are great `AI -> AI`, but not `AI -> Human`. I feel like it would be more helpful instead to address humans with short, concise, bite-sized responses that walk you through the steps needed one-by-one because despite their worldly knowledge, I genuinely haven't found those types of responses to be very helpful. It takes too long to read, too hard to understand everything at once and might actually be incorrect in the end. | 2025-12-11T17:00:21 | https://www.reddit.com/r/LocalLLaMA/comments/1pk2okg/why_do_i_feel_like_llms_in_general_both_local_and/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk2okg | false | null | t3_1pk2okg | /r/LocalLLaMA/comments/1pk2okg/why_do_i_feel_like_llms_in_general_both_local_and/ | false | false | self | 25 | null |
Is IQ4_XS closer to Q4 or Q3 in terms of quality? | 34 | Title. There are a very *very* old threads that don't quite come to a consensus on this.
Assume that everything is loaded into VRAM and no layers are offloaded to CPU+system memory.
Wondering what your experiences have been? | 2025-12-11T16:33:18 | https://www.reddit.com/r/LocalLLaMA/comments/1pk1zpq/is_iq4_xs_closer_to_q4_or_q3_in_terms_of_quality/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk1zpq | false | null | t3_1pk1zpq | /r/LocalLLaMA/comments/1pk1zpq/is_iq4_xs_closer_to_q4_or_q3_in_terms_of_quality/ | false | false | self | 34 | null |
SOLVE_TRI extension to more dimensions by pwilkin · Pull Request #17793 · ggml-org/llama.cpp | 38 | before:
jacek@AI-SuperComputer:~$ /home/jacek/git/llama.cpp/build_2025.12.11/bin/llama-bench -m /mnt/models2/Qwen_Qwen3-Next-80B-A3B-Instruct-Q6_K_L-00001-of-00002.gguf
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3next 80B.A3B Q6_K | 61.20 GiB | 79.67 B | CUDA | 99 | pp512 | 562.56 ± 1.53 |
| qwen3next 80B.A3B Q6_K | 61.20 GiB | 79.67 B | CUDA | 99 | tg128 | 43.09 ± 0.14 |
build: c6f6e4f96 (7359)
after:
jacek@AI-SuperComputer:~$ /home/jacek/git/llama.cpp/build_2025.12.11_tri/bin/llama-bench -m /mnt/models2/Qwen_Qwen3-Next-80B-A3B-Instruct-Q6_K_L-00001-of-00002.gguf
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3next ?B Q6_K | 61.20 GiB | 79.67 B | CUDA | 99 | pp512 | 737.65 ± 4.16 |
| qwen3next ?B Q6_K | 61.20 GiB | 79.67 B | CUDA | 99 | tg128 | 43.08 ± 0.18 |
build: 08a003e18 (7352) | 2025-12-11T16:32:05 | https://github.com/ggml-org/llama.cpp/pull/17793 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pk1yih | false | null | t3_1pk1yih | /r/LocalLLaMA/comments/1pk1yih/solve_tri_extension_to_more_dimensions_by_pwilkin/ | false | false | default | 38 | {'enabled': False, 'images': [{'id': 'mNzRlPgkKLb5qvqWrnpnvuvOayD0W4Jv9VgNz7gUwSE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mNzRlPgkKLb5qvqWrnpnvuvOayD0W4Jv9VgNz7gUwSE.png?width=108&crop=smart&auto=webp&s=e2ec33bc6e8c90e50ea0935cc167f54768847828', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mNzRlPgkKLb5qvqWrnpnvuvOayD0W4Jv9VgNz7gUwSE.png?width=216&crop=smart&auto=webp&s=60658f585b75b27517da056ed5918ea3be6540da', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mNzRlPgkKLb5qvqWrnpnvuvOayD0W4Jv9VgNz7gUwSE.png?width=320&crop=smart&auto=webp&s=c7d23323ef1fb6afa32eeeffca50ab2fee3d2161', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mNzRlPgkKLb5qvqWrnpnvuvOayD0W4Jv9VgNz7gUwSE.png?width=640&crop=smart&auto=webp&s=c9c8a5712c66f04c75e6db44fc20f48b01ddd046', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mNzRlPgkKLb5qvqWrnpnvuvOayD0W4Jv9VgNz7gUwSE.png?width=960&crop=smart&auto=webp&s=7446ca4c7169dceca6798b6137332a186b14cabf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mNzRlPgkKLb5qvqWrnpnvuvOayD0W4Jv9VgNz7gUwSE.png?width=1080&crop=smart&auto=webp&s=528feead7a152c4a8240577da809fb2c49581ad8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mNzRlPgkKLb5qvqWrnpnvuvOayD0W4Jv9VgNz7gUwSE.png?auto=webp&s=46e3fd8622ab3a46ba58f036ba0f08e4b3f80752', 'width': 1200}, 'variants': {}}]} |
SOLVE_TRI extension to more dimensions by pwilkin · Pull Request #17793 · ggml-org/llama.cpp | 1 | [removed] | 2025-12-11T16:28:59 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1pk1vgu | false | null | t3_1pk1vgu | /r/LocalLLaMA/comments/1pk1vgu/solve_tri_extension_to_more_dimensions_by_pwilkin/ | false | false | default | 1 | null | ||
Short Open Source Research Collaborations | 0 | I'm starting some short collabs on specific research projects where:
\- I’ll provide compute, if needed
\- Work will be done in a public GitHub repo, Apache-2 licensed
\- This isn’t hiring or paid work
Initial projects:
\- NanoChat but with a recursive transformer
\- VARC but dropping task embeddings
\- Gather/publish an NVARC-style dataset for ARC-AGI-II
\- Generate ARC tasks using ASAL from Sakana
If interested, DM with the specific project + anything you’ve built before (to give a sense of what you’ve worked on). | 2025-12-11T16:23:39 | https://www.reddit.com/r/LocalLLaMA/comments/1pk1qaa/short_open_source_research_collaborations/ | TrelisResearch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk1qaa | false | null | t3_1pk1qaa | /r/LocalLLaMA/comments/1pk1qaa/short_open_source_research_collaborations/ | false | false | self | 0 | null |
Day 4: 21 Days of Building a Small Language Model:Understanding GPU | 4 | If you're training Large or Small language model, you've probably heard that GPUs are essential. But what exactly is a GPU, and why does it matter for training language models? In this blog, we'll explore GPU fundamentals, architecture, memory management, and common issues you'll encounter during training.
# What is a GPU?
A Graphics Processing Unit (GPU) is a specialized processor designed for massive parallelism. Originally created for rendering video game graphics, GPUs have become the foundation of modern AI. Every major advance from GPT to Qwen to DeepSeek was powered by thousands of GPUs training models day and night.
The reason is simple: neural networks are just huge piles of matrix multiplications, and GPUs are exceptionally good at multiplying matrices.
# CPU vs GPU: The Fundamental Difference
Think of it this way: a CPU is like having one brilliant mathematician who can solve complex problems step by step, while a GPU is like having thousands of assistants who can all work on simple calculations at the same time.
https://preview.redd.it/iq2r5bphnl6g1.png?width=1342&format=png&auto=webp&s=a1249c4e402089c97c0b4fd2d6892ee387f9a97f
When you need to multiply two large matrices, which is exactly what neural networks do millions of times during training, the GPU's army of cores can divide the work and complete it much faster than a CPU ever could.
This parallelism is exactly what we need for training neural networks. When you're processing a batch of training examples, each forward pass involves thousands of matrix multiplications. A CPU would do these one after another, taking hours or days. A GPU can do many of them in parallel, reducing training time from days to hours or from hours to minutes.
# GPU Architecture
Understanding GPU architecture helps you understand why GPUs are so effective for neural network training and how to optimize your code to take full advantage of them.
# CPU Architecture: Latency Optimized
A modern CPU typically contains between 4 and 32 powerful cores, each capable of handling complex instructions independently. These cores are designed for versatility: they excel at decision making, branching logic, and system operations. Each core has access to large, fast cache memory.
CPUs are "latency optimized", built to complete individual tasks as quickly as possible. This makes them ideal for running operating systems, executing business logic, or handling irregular workloads where each task might be different.
# GPU Architecture: Throughput Optimized
In contrast, a GPU contains thousands of lightweight cores, often numbering in the thousands. A modern GPU might have 2048, 4096, or even more cores, but each one is much simpler than a CPU core. These cores are organized into groups called Streaming Multiprocessors (SMs), and they work together to execute the same instruction across many data elements simultaneously.
https://preview.redd.it/433staujnl6g1.png?width=1442&format=png&auto=webp&s=47d0fe0e05cd957a0243123609677c1da48375d0
[Ref: https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf](https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf?utm_source=chatgpt.com)
GPUs are "throughput optimized". Their strength isn't in completing a single task quickly, but in completing many similar tasks simultaneously. This makes them ideal for operations like matrix multiplications, where you're performing the same calculation across thousands or millions of matrix elements.
The GPU also has high memory bandwidth, meaning it can move large amounts of data between memory and the processing cores very quickly. This is crucial because when you're processing large matrices, you need to keep the cores fed with data constantly.
# Compute Units: CUDA Cores, Tensor Cores, and SMs
# CUDA Cores
CUDA Cores are the fundamental processing units of an NVIDIA GPU. The name CUDA stands for Compute Unified Device Architecture, which is NVIDIA's parallel computing platform. Each CUDA Core is a tiny processor capable of executing arithmetic operations like addition, multiplication, and fused multiply-add operations.
Think of a CUDA Core as a single worker in a massive factory. Each core can perform one calculation at a time, but when you have thousands of them working together, they can process enormous amounts of data in parallel. A modern GPU might have anywhere from 2,000 to over 10,000 CUDA Cores, all working simultaneously.
CUDA Cores are general-purpose processors. They can handle floating point operations, integer operations, and various other mathematical functions. When you're performing element-wise operations, applying activation functions, or doing other computations that don't involve matrix multiplications, CUDA Cores are doing the work.
# Tensor Cores
Tensor Cores are specialized hardware units designed specifically for matrix multiplications and related tensor operations. They represent a significant advancement over CUDA Cores for deep learning workloads. While a CUDA Core might perform one multiply-add operation per cycle, a Tensor Core can perform many matrix operations in parallel, dramatically accelerating the computations that neural networks rely on.
The key advantage of Tensor Cores is their ability to perform mixed precision operations efficiently. They can handle FP16 (half precision), BF16 (bfloat16), INT8, and FP8 operations, which are exactly the precision formats used in modern neural network training. This allows you to train models faster while using less memory, without sacrificing too much numerical accuracy.
https://preview.redd.it/a495rg5mnl6g1.png?width=1438&format=png&auto=webp&s=a10f50834f5ede784cfc28397c6f652c977f2d4f
Ref: [https://www.youtube.com/watch?v=6OBtO9niT00](https://www.youtube.com/watch?v=6OBtO9niT00)
(The above image shows, how matmul FLOPS grow dramatically across GPU generations due to Tensor Cores, while non-matmul FLOPS increase much more slowly.)
Tensor Cores work by processing small matrix tiles, typically 4×4 or 8×8 matrices, and performing the entire matrix multiplication in a single operation. When you multiply two large matrices, the GPU breaks them down into these small tiles, and Tensor Cores process many tiles in parallel.
It's not an exaggeration to say that Tensor Cores are the reason modern LLMs are fast. Without them, training a large language model would take orders of magnitude longer. A single Tensor Core can perform matrix multiplications that would require hundreds of CUDA Core operations, and when you have hundreds of Tensor Cores working together, the speedup is dramatic.
# Streaming Multiprocessors (SMs)
CUDA Cores and Tensor Cores don't work in isolation. They're organized into groups called Streaming Multiprocessors (SMs). An SM is a collection of CUDA Cores, Tensor Cores, shared memory, registers, and other resources that work together as a unit.
Think of an SM as a department in our factory analogy. Each department has a certain number of workers (CUDA Cores), specialized equipment (Tensor Cores), and shared resources like break rooms and storage (shared memory and registers). The GPU scheduler assigns work to SMs, and each SM coordinates its resources to complete that work efficiently.
For example, the NVIDIA A100 has 108 SMs. Each SM in an A100 contains 64 CUDA Cores, giving the GPU a total of 6,912 CUDA Cores (108 SMs × 64 cores per SM). Each SM also contains 4 Tensor Cores, giving the A100 a total of 432 Tensor Cores (108 SMs × 4 Tensor Cores per SM).
This hierarchical parallelism is what allows GPUs to process millions of operations simultaneously. When you launch a CUDA kernel, the GPU scheduler divides the work across all available SMs. Each SM then further divides its work among its CUDA Cores and Tensor Cores.
# How GPUs Organize Work: Threads, Blocks, and Warps
To understand why GPUs are so efficient, you need to understand how they organize computational work. When you write code that runs on a GPU, the work is structured in a specific hierarchy:
* **Threads** are the smallest units of work. Think of a thread as a single worker assigned to compute one element of your matrix or one piece of data. All threads execute the same instructions, but each thread works on different data. This is called SIMT (Single Instruction, Multiple Threads). It's like having thousands of workers all following the same recipe, but each making a different dish.
* **Blocks** are groups of threads that work together. A block might contain 256 or 512 threads, for example. Each block runs on a single Streaming Multiprocessor and has access to its own shared memory. Think of a block as a team of workers assigned to a specific department (SM) with their own shared workspace.
* **Warps** are groups of 32 threads that execute together. This is a crucial concept: threads don't execute individually. They always execute in groups of 32 called warps. If you have a block with 256 threads, that block contains 8 warps (256 ÷ 32 = 8). Warps are important because they're the unit that the GPU scheduler actually manages.
* **Warp Schedulers** are the traffic controllers within each SM. Each SM typically has 4 warp schedulers. These schedulers pick warps that are ready to execute and assign them to the CUDA Cores and Tensor Cores. When one warp is waiting for data from memory, the scheduler can immediately switch to another warp that's ready, keeping the cores busy.
Here's how it all works together:
1. Your CUDA program launches thousands of threads organized into blocks
2. Blocks are assigned to Streaming Multiprocessors
3. Each block is divided into warps of 32 threads
4. Warp schedulers within each SM pick ready warps and execute them
5. When a warp is waiting for data, the scheduler switches to another warp
This organization is why GPUs can hide memory latency so effectively. If one warp is waiting for data, there are many other warps ready to execute, so the cores never sit idle. This is also why occupancy (the number of active warps per SM) matters so much for performance. More active warps mean more opportunities to hide latency and keep the GPU busy.
# Why GPU Architecture Matters for LLM Training
A single transformer block contains several computationally intensive operations:
* **Matrix multiplications for attention**: The attention mechanism requires computing queries, keys, and values, then performing matrix multiplications to compute attention scores.
* **Matrix multiplications for feed-forward layers**: Each transformer block has feed-forward networks that apply linear transformations, which are pure matrix multiplications.
* **Softmax operations**: The attention scores need to be normalized using softmax.
* **LayerNorm normalizations**: These require computing means and variances across the hidden dimension.
All of these operations scale linearly or quadratically with sequence length. If you double the sequence length, you might quadruple the computation needed for attention.
A GPU accelerates these operations dramatically due to three key features:
1. **Parallel threads**: The thousands of cores can each handle a different element of your matrices simultaneously.
2. **Tensor Cores**: Specialized units optimized for matrix multiplication operations.
3. **Wider memory buses**: GPUs have memory buses that are much wider than CPUs, allowing them to transfer large amounts of data quickly.
The result is that operations that might take hours on a CPU can complete in minutes or even seconds on a GPU.
# 3. VRAM: The GPU's Working Memory
Memory is one of the biggest constraints in LLM training. While having powerful GPU cores is essential, those cores are useless if they can't access the data they need to process. Understanding GPU memory architecture is crucial because it directly determines what models you can train, what batch sizes you can use, and what sequence lengths you can handle.
# What is VRAM?
**VRAM** stands for Video Random Access Memory. This is the high-speed, high-bandwidth memory that sits directly on the GPU board, physically close to the processing cores. Unlike system RAM, which is connected to the CPU through a relatively narrow bus, VRAM is connected to the GPU cores through an extremely wide memory bus that can transfer hundreds of gigabytes per second.
The key characteristic of VRAM is its speed. When a GPU core needs data to perform a calculation, it can access VRAM much faster than it could access system RAM. This is why all your model weights, activations, and intermediate computations need to fit in VRAM during training. If data has to be swapped to system RAM, the GPU cores will spend most of their time waiting for data transfers, completely negating the performance benefits of parallel processing.
# Types of VRAM
There are several types of VRAM used in modern GPUs:
Minimize image
Edit image
Delete image
* **GDDR6** (Graphics Double Data Rate 6) is the most common type of VRAM in consumer gaming GPUs. It offers excellent bandwidth for its price point. A typical RTX 4090 might have 24 GB of GDDR6 memory with a bandwidth of around 1000 GB/s.
* **HBM2** (High Bandwidth Memory 2) is a more advanced technology that stacks memory dies vertically and connects them using through-silicon vias. This allows for much higher bandwidth in a smaller physical footprint. The NVIDIA A100, for example, uses HBM2 to achieve bandwidths of over 2000 GB/s.
* **HBM3 and HBM3e** represent the latest generation of high-bandwidth memory, offering even greater speeds. The NVIDIA H100 can achieve bandwidths exceeding 3000 GB/s using HBM3e.
# What Consumes VRAM During Training?
Every component of your training process consumes VRAM, and if you run out, training simply cannot proceed:
1. **Model weights**: The parameters that your model learns during training. For a model with 1 billion parameters stored in FP16, you need approximately 2 GB of VRAM just for the weights. For a 7 billion parameter model in FP16, you need about 14 GB.
2. **Activations**: Intermediate values computed during the forward pass. These need to be kept in memory because they're required during the backward pass to compute gradients. The amount of memory needed depends on your batch size and sequence length.
3. **Optimizer states**: Most optimizers, like Adam, maintain additional state for each parameter. For Adam, this typically means storing a first moment estimate and a second moment estimate for each parameter, which can double or triple your memory requirements.
4. **Gradients**: Memory for gradients, which are computed during backpropagation and have the same size as your model weights.
5. **System overhead**: Temporary buffers, CUDA kernels, and other system requirements.
Here's a breakdown of memory requirements for different model sizes:
https://preview.redd.it/k2j9smcpnl6g1.png?width=1428&format=png&auto=webp&s=32910b05300f9402d44fa2ea6e6f21a36ed61a48
**NOTE**: These numbers represent the minimum memory needed just for the model weights. In practice, you'll need significantly more VRAM to account for activations, gradients, optimizer states, and overhead. A rule of thumb is that you need at least 2 to 3 times the model weight size in VRAM for training, and sometimes more depending on your batch size and sequence length.
# The Consequences of Insufficient VRAM
When you don't have enough VRAM, several problems occur:
* **Out of Memory (OOM) errors**: Your training process will crash when CUDA runs out of VRAM.
* **Forced compromises**: You'll need to reduce batch size or sequence length, which can hurt training effectiveness.
* **Model parallelism or offloading**: In extreme cases, you might need to split the model across multiple GPUs or keep parts in system RAM, both of which add complexity and slow down training.
Understanding your VRAM constraints is essential for planning your training setup. Before you start training, you need to know how much VRAM your GPU has, how much your model will require, and what tradeoffs you'll need to make.
# 4. FLOPS: Measuring GPU Compute Power
**FLOPS** stands for Floating Point Operations Per Second, and it's a measure of a GPU's computational throughput. Understanding FLOPS helps you understand the raw compute power of different GPUs and why some are faster than others for training.
# What are FLOPS?
FLOPS measure how many floating-point operations (additions, multiplications, etc.) a processor can perform in one second. For GPUs, we typically talk about:
* **TFLOPS** (TeraFLOPS): Trillions of operations per second
* **PFLOPS** (PetaFLOPS): Quadrillions of operations per second
For example, an NVIDIA A100 GPU can achieve approximately 312 TFLOPS for FP16 operations with Tensor Cores. An H100 can reach over 1000 TFLOPS for certain operations.
# Why FLOPS Matter
FLOPS give you a rough estimate of how fast a GPU can perform the matrix multiplications that dominate neural network training. However, FLOPS alone don't tell the whole story:
* **Memory bandwidth**: Even if a GPU has high FLOPS, it needs high memory bandwidth to keep the cores fed with data.
* **Tensor Core utilization**: Modern training frameworks need to properly utilize Tensor Cores to achieve peak FLOPS.
* **Workload characteristics**: Some operations are compute-bound (limited by FLOPS), while others are memory-bound (limited by bandwidth).
# Theoretical vs. Practical FLOPS
The FLOPS numbers you see in GPU specifications are theoretical peak performance under ideal conditions. In practice, you'll rarely achieve these numbers because:
* Not all operations can utilize Tensor Cores
* Memory bandwidth may limit performance
* Overhead from data movement and kernel launches
* Inefficient code or framework limitations
A well-optimized training loop might achieve 60-80% of theoretical peak FLOPS, which is considered excellent. If you're seeing much lower utilization, it might indicate bottlenecks in data loading, inefficient operations, or memory bandwidth limitations.
# FLOPS and Training Speed
Higher FLOPS generally means faster training, but the relationship isn't always linear. A GPU with twice the FLOPS might not train twice as fast if:
* Memory bandwidth becomes the bottleneck
* The workload doesn't efficiently utilize Tensor Cores
* Other system components (CPU, storage) limit performance
When choosing a GPU for training, consider both FLOPS and memory bandwidth. A balanced GPU with high FLOPS and high memory bandwidth will perform best for most training workloads.
# Conclusion
Understanding GPUs is essential for effective deep learning training. From the fundamental architecture differences between CPUs and GPUs to the practical challenges of VRAM management and performance optimization, these concepts directly impact your ability to train models successfully.
Hopefully you've learned something useful today! Armed with this knowledge about GPU architecture, memory management you're now better equipped to tackle the challenges of training neural networks. Happy training!
| 2025-12-11T16:14:29 | https://www.reddit.com/r/LocalLLaMA/comments/1pk1hyp/day_4_21_days_of_building_a_small_language/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk1hyp | false | null | t3_1pk1hyp | /r/LocalLLaMA/comments/1pk1hyp/day_4_21_days_of_building_a_small_language/ | false | false | 4 | null | |
Built a productivity app that uses Groq/Llama 3 70b for agentic tasks (File organizing, Deep Research). Open Source. | 2 | *Processing img cl1zkhoxkl6g1...*
Wanted to share a project I've been working on. It’s an Electron/React workspace that integrates LLMs for actual agentic workflows, not just chatting.
I’m using openai/gpt-oss-120b (via Groq) for the reasoning capabilities.
**What it does with the LLM:**
* **Tool Use:** The AI outputs JSON commands to control the app state (creating folders, toggling tasks, managing the wiki).
* **RAG-lite:** It reads the current context of your active note/dashboard to answer questions.
* **Web Search:** Implemented the browser\_search tool so it can perform deep research and compile reports into your notes.
Code is open source (MIT).
**Repo:** [BetterNotes](https://github.com/AdhirajPersonal/BetterNotes)
Curious if anyone has suggestions for better prompting strategies to prevent it from hallucinating tools on complex queries.
| 2025-12-11T15:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/1pk13i0/built_a_productivity_app_that_uses_groqllama_3/ | MammothEar1626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk13i0 | false | null | t3_1pk13i0 | /r/LocalLLaMA/comments/1pk13i0/built_a_productivity_app_that_uses_groqllama_3/ | false | false | 2 | null | |
How do you handle synthetic data generation for training? | 0 | Building a tool for generating synthetic training data (conversations, text, etc.) and curious how people approach this today. - Are you using LLMs to generate training data? - What's the most annoying part of the workflow? - What would make synthetic data actually usable for you? Not selling anything, just trying to understand the space. | 2025-12-11T15:53:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pk0xys/how_do_you_handle_synthetic_data_generation_for/ | Ok-Lobster9028 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk0xys | false | null | t3_1pk0xys | /r/LocalLLaMA/comments/1pk0xys/how_do_you_handle_synthetic_data_generation_for/ | false | false | self | 0 | null |
Built a productivity app that uses Groq for agentic tasks (File organizing, Deep Research). Open Source. | 1 | 2025-12-11T15:52:54 | https://www.reddit.com/gallery/1pk0xdp | MammothEar1626 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pk0xdp | false | null | t3_1pk0xdp | /r/LocalLLaMA/comments/1pk0xdp/built_a_productivity_app_that_uses_groq_for/ | false | false | default | 1 | null | |
New in llama.cpp: Live Model Switching | 450 | 2025-12-11T15:49:43 | https://huggingface.co/blog/ggml-org/model-management-in-llamacpp | paf1138 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pk0ubn | false | null | t3_1pk0ubn | /r/LocalLLaMA/comments/1pk0ubn/new_in_llamacpp_live_model_switching/ | false | false | default | 450 | {'enabled': False, 'images': [{'id': '8Hy799ws5wvJKYaRb__KN0TGXYxiPxKG6PuG-1SlIWg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8Hy799ws5wvJKYaRb__KN0TGXYxiPxKG6PuG-1SlIWg.png?width=108&crop=smart&auto=webp&s=df0ca42284db128adbcc691988a242bcf784ab60', 'width': 108}, {'height': 117, 'url': 'https://external-preview.redd.it/8Hy799ws5wvJKYaRb__KN0TGXYxiPxKG6PuG-1SlIWg.png?width=216&crop=smart&auto=webp&s=69c8b832f8fb441ae028d413dfdee5824618276f', 'width': 216}, {'height': 174, 'url': 'https://external-preview.redd.it/8Hy799ws5wvJKYaRb__KN0TGXYxiPxKG6PuG-1SlIWg.png?width=320&crop=smart&auto=webp&s=c7de7368780a8303a57212412c1793a27bc96e61', 'width': 320}, {'height': 349, 'url': 'https://external-preview.redd.it/8Hy799ws5wvJKYaRb__KN0TGXYxiPxKG6PuG-1SlIWg.png?width=640&crop=smart&auto=webp&s=2a43f5804fb810225237c9c37046b91c9bbb6451', 'width': 640}, {'height': 523, 'url': 'https://external-preview.redd.it/8Hy799ws5wvJKYaRb__KN0TGXYxiPxKG6PuG-1SlIWg.png?width=960&crop=smart&auto=webp&s=0889cfd4b2afcd4d6ad75a08d69f6d9ad4b0a1ac', 'width': 960}, {'height': 589, 'url': 'https://external-preview.redd.it/8Hy799ws5wvJKYaRb__KN0TGXYxiPxKG6PuG-1SlIWg.png?width=1080&crop=smart&auto=webp&s=313516504b6b6e6b917909f4500a10574bf9d0cc', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/8Hy799ws5wvJKYaRb__KN0TGXYxiPxKG6PuG-1SlIWg.png?auto=webp&s=71475f875afd7422254da9f40a4099e43f4f0fcd', 'width': 1408}, 'variants': {}}]} | |
Currently what is the safest interface to run llm locally | 0 | performance are secondary I need to be able to run llm on my work environment but I need it to be safe. | 2025-12-11T15:35:25 | https://www.reddit.com/r/LocalLLaMA/comments/1pk0hs4/currently_what_is_the_safest_interface_to_run_llm/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pk0hs4 | false | null | t3_1pk0hs4 | /r/LocalLLaMA/comments/1pk0hs4/currently_what_is_the_safest_interface_to_run_llm/ | false | false | self | 0 | null |
[GPULlama3.java release v0.3.0] Pure Java LLaMA Transformers Compilied to PTX/OpenCL integrated with Quarkus & LangChain4j | 1 | 2025-12-11T15:18:27 | https://v.redd.it/c4jp7o9jdl6g1 | mikebmx1 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pk02go | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/c4jp7o9jdl6g1/DASHPlaylist.mpd?a=1768058326%2CZDVkN2NjMzY0NWRjNjAzOTk1NDczMjRmNzgyODNhZTI0NzY1ZjM0MmM1OWIyMmNjMjMxMjVkYTc2MzE4NzI4Yw%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/c4jp7o9jdl6g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 670, 'hls_url': 'https://v.redd.it/c4jp7o9jdl6g1/HLSPlaylist.m3u8?a=1768058326%2CNDE3NjU2ZWIxMGRkNGJkMmIxYTQ2NGViNjc4ZTQ2Zjk4YjRhYjJjMmE4YWExNDFlYjMyOGZhNzk1MTg2MGFlNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c4jp7o9jdl6g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1pk02go | /r/LocalLLaMA/comments/1pk02go/gpullama3java_release_v030_pure_java_llama/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'aW1qbnB1OWpkbDZnMUQWyw-NJl8-1lSdZFLuBV5asYieetqWX6lQO0rLxIpr', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/aW1qbnB1OWpkbDZnMUQWyw-NJl8-1lSdZFLuBV5asYieetqWX6lQO0rLxIpr.png?width=108&crop=smart&format=pjpg&auto=webp&s=68a755ae46f97722e953e180b19adf9d17c75726', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/aW1qbnB1OWpkbDZnMUQWyw-NJl8-1lSdZFLuBV5asYieetqWX6lQO0rLxIpr.png?width=216&crop=smart&format=pjpg&auto=webp&s=87812bfa34f44f9deecd98bc4edace307ea8d1b6', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/aW1qbnB1OWpkbDZnMUQWyw-NJl8-1lSdZFLuBV5asYieetqWX6lQO0rLxIpr.png?width=320&crop=smart&format=pjpg&auto=webp&s=9c04a10f787ee4359c4aa6df5995f3149d72c1be', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/aW1qbnB1OWpkbDZnMUQWyw-NJl8-1lSdZFLuBV5asYieetqWX6lQO0rLxIpr.png?width=640&crop=smart&format=pjpg&auto=webp&s=da13122e5910c88aa71901c17dfeee282863049f', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/aW1qbnB1OWpkbDZnMUQWyw-NJl8-1lSdZFLuBV5asYieetqWX6lQO0rLxIpr.png?width=960&crop=smart&format=pjpg&auto=webp&s=1094ae7e9c1acc0677800965500a84b911f0d85e', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/aW1qbnB1OWpkbDZnMUQWyw-NJl8-1lSdZFLuBV5asYieetqWX6lQO0rLxIpr.png?width=1080&crop=smart&format=pjpg&auto=webp&s=701a40d969dbb6574655264582d96bf187c8bd08', 'width': 1080}], 'source': {'height': 998, 'url': 'https://external-preview.redd.it/aW1qbnB1OWpkbDZnMUQWyw-NJl8-1lSdZFLuBV5asYieetqWX6lQO0rLxIpr.png?format=pjpg&auto=webp&s=8b361fcde971afdd9460e8f541cbe009190fa127', 'width': 1910}, 'variants': {}}]} | ||
If you had to pick just one model family’s finetunes for RP under 30B, which would you pick? | 0 | Mostly trying to see which base model is smartest/most naturally creative, as I’m getting into training my models :D | 2025-12-11T15:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1pjzxre/if_you_had_to_pick_just_one_model_familys/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjzxre | false | null | t3_1pjzxre | /r/LocalLLaMA/comments/1pjzxre/if_you_had_to_pick_just_one_model_familys/ | false | false | self | 0 | null |
Open models for visual explanations in education and deck cards | 0 | Does anyone have any good recommendations or experiences for open models/diffusion models which can produce helpful visual explanations of concepts in an educational setting?
A bit like notebooklm from Google but local.
And if they don't exist, suggestions for a training pipeline and which models could be suited for fine-tuning for this type of content would be appreciated.
I know zai, qwen image, flux etc, but I don't have experience with fine-tuning them and whether they would generalize well to this type of content.
Thanks. | 2025-12-11T15:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pjznmt/open_models_for_visual_explanations_in_education/ | fiery_prometheus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjznmt | false | null | t3_1pjznmt | /r/LocalLLaMA/comments/1pjznmt/open_models_for_visual_explanations_in_education/ | false | false | self | 0 | null |
Breakthrough: Running 256K Context LLMs on 16GB GPUs with Hybrid CPU-GPU Offloading | 0 | # Title: Breakthrough: Running 256K Context LLMs on 16GB GPUs with Hybrid CPU-GPU Offloading
Hey and !
I've been working on optimizing LLM inference for long contexts on consumer hardware, and I'm excited to share a breakthrough that enables
**256K token contexts**
on modest 16GB GPUs while maintaining reasonable performance.
## The Problem
Large language models for code generation and analysis need massive context windows, but consumer GPUs (like RTX 30/40 series with 16GB VRAM) hit OOM errors even with 128K contexts. Traditional solutions sacrifice either context length or performance.
## The Solution: Hybrid CPU-GPU Offloading
I developed a production-ready framework that combines:
-
**KV Cache Offloading**
: Moves inactive tokens to CPU RAM
-
**Adaptive Sliding Windows**
: Keeps only active context on GPU
-
**Chunked Prefill**
: Processes long prompts in manageable windows
-
**Context Extrapolation**
: Extends beyond trained limits without retraining
-
**Massive CPU Utilization**
: Leverages underused CPU cores and RAM
## Key Results
-
**VRAM Usage**
: Stable at ~15.2GB (95% of 16GB) for 256K contexts
-
**Throughput**
: 50-65 tokens/s (vs 110+ for shorter contexts)
-
**Trade-off**
: 8x context expansion with ~45% performance hit
-
**Hardware**
: Tested on 16GB GPU + 32GB RAM + multi-core CPU
## Benchmarks
| Context | VRAM | Throughput | Status |
|---------|------|------------|--------|
| 32K | 11.8GB | 110 t/s | Production |
| 128K | 12.7GB | 85 t/s | Production |
| 256K | 15.2GB | 60 t/s | Experimental |
## Why This Matters
- Enables analysis of massive codebases (>500 files)
- Supports ultra-long document processing
- Makes private, local LLM deployment viable for developers
- Lowers barrier to entry for AI experimentation
## Technical Details
The system uses:
- PagedAttention for sparse KV allocation
- CPU pinning for dedicated inference cores
- Real-time VRAM monitoring with alerts
- Open-source tooling (containerized, reproducible)
## Demo
Imagine analyzing an entire software repository with 2.4M tokens in context - now possible on your desktop!
## Links
Full technical guide: [INFRA.md](INFRA.md) (comprehensive documentation with implementation details)
## Questions?
What do you think - is this approach viable for your use cases? Any suggestions for further optimization?
#AI #LLM #LocalAI #MachineLearning #GPUOptimization# Title: Breakthrough: Running 256K Context LLMs on 16GB GPUs with Hybrid CPU-GPU Offloading
Hey and !
I've been working on optimizing LLM inference for long contexts on consumer hardware, and I'm excited to share a breakthrough that enables **256K token contexts** on modest 16GB GPUs while maintaining reasonable performance.
## The Problem
Large language models for code generation and analysis need massive context windows, but consumer GPUs (like RTX 30/40 series with 16GB VRAM) hit OOM errors even with 128K contexts. Traditional solutions sacrifice either context length or performance.
## The Solution: Hybrid CPU-GPU Offloading
I developed a production-ready framework that combines:
- **KV Cache Offloading**: Moves inactive tokens to CPU RAM
- **Adaptive Sliding Windows**: Keeps only active context on GPU
- **Chunked Prefill**: Processes long prompts in manageable windows
- **Context Extrapolation**: Extends beyond trained limits without retraining
- **Massive CPU Utilization**: Leverages underused CPU cores and RAM
## Key Results
- **VRAM Usage**: Stable at ~15.2GB (95% of 16GB) for 256K contexts
- **Throughput**: 50-65 tokens/s (vs 110+ for shorter contexts)
- **Trade-off**: 8x context expansion with ~45% performance hit
- **Hardware**: Tested on 16GB GPU + 32GB RAM + multi-core CPU
## Benchmarks
| Context | VRAM | Throughput | Status |
|---------|------|------------|--------|
| 32K | 11.8GB | 110 t/s | Production |
| 128K | 12.7GB | 85 t/s | Production |
| 256K | 15.2GB | 60 t/s | Experimental |
## Why This Matters
- Enables analysis of massive codebases (>500 files)
- Supports ultra-long document processing
- Makes private, local LLM deployment viable for developers
- Lowers barrier to entry for AI experimentation
## Technical Details
The system uses:
- PagedAttention for sparse KV allocation
- CPU pinning for dedicated inference cores
- Real-time VRAM monitoring with alerts
- Open-source tooling (containerized, reproducible)
## Demo
Imagine analyzing an entire software repository with 2.4M tokens in context - now possible on your desktop!
## Links
Full technical guide: [INFRA.md](INFRA.md) (comprehensive documentation with implementation details)
## Questions?
What do you think - is this approach viable for your use cases? Any suggestions for further optimization?
#AI #LLM #LocalAI #MachineLearning #GPUOptimization
Github https://github.com/klenioaraujo/Efficient-VRAM-Optimization-for-Long-Context-Code-LLMs-A-Hybrid-CPU-GPU-Offloading-Architecture/ | 2025-12-11T14:47:45 | https://www.reddit.com/r/LocalLLaMA/comments/1pjzbd4/breakthrough_running_256k_context_llms_on_16gb/ | bk888888888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjzbd4 | false | null | t3_1pjzbd4 | /r/LocalLLaMA/comments/1pjzbd4/breakthrough_running_256k_context_llms_on_16gb/ | false | false | self | 0 | null |
Thoughts on this? Tiiny AI | 0 | 2025-12-11T14:03:46 | https://wccftech.com/meet-the-worlds-smallest-supercomputer-a-machine-bold-enough-to-run-120b-ai-models/ | qeinca | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1pjyahs | false | null | t3_1pjyahs | /r/LocalLLaMA/comments/1pjyahs/thoughts_on_this_tiiny_ai/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'IffmBHqiSu5XoQIpo6IthdP8iQOioMmit6kF5v2qJA4', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/IffmBHqiSu5XoQIpo6IthdP8iQOioMmit6kF5v2qJA4.jpeg?width=108&crop=smart&auto=webp&s=e5d5a33cf6437307aba7a2988b683d655ccfbaa7', 'width': 108}, {'height': 159, 'url': 'https://external-preview.redd.it/IffmBHqiSu5XoQIpo6IthdP8iQOioMmit6kF5v2qJA4.jpeg?width=216&crop=smart&auto=webp&s=6685957e2f85c0d3c2eebe74a00ffe40b9851855', 'width': 216}, {'height': 236, 'url': 'https://external-preview.redd.it/IffmBHqiSu5XoQIpo6IthdP8iQOioMmit6kF5v2qJA4.jpeg?width=320&crop=smart&auto=webp&s=e50df4e72e224e05a5038d3c6651eb66906ef07f', 'width': 320}, {'height': 472, 'url': 'https://external-preview.redd.it/IffmBHqiSu5XoQIpo6IthdP8iQOioMmit6kF5v2qJA4.jpeg?width=640&crop=smart&auto=webp&s=f432498965c77dde0ddb3f7a0c32cf5465112f6a', 'width': 640}, {'height': 708, 'url': 'https://external-preview.redd.it/IffmBHqiSu5XoQIpo6IthdP8iQOioMmit6kF5v2qJA4.jpeg?width=960&crop=smart&auto=webp&s=69cc9f1a108f66f302ad7ba641242d3d72272e5c', 'width': 960}, {'height': 796, 'url': 'https://external-preview.redd.it/IffmBHqiSu5XoQIpo6IthdP8iQOioMmit6kF5v2qJA4.jpeg?width=1080&crop=smart&auto=webp&s=ada5ea34116b364d4fc3035fba87ea3c02e5b7ea', 'width': 1080}], 'source': {'height': 1888, 'url': 'https://external-preview.redd.it/IffmBHqiSu5XoQIpo6IthdP8iQOioMmit6kF5v2qJA4.jpeg?auto=webp&s=2b5c9ae9a5bfc4fe9c48a4604983326ed7be3ff6', 'width': 2560}, 'variants': {}}]} | |
There were 14 different token optimization methods, so I created another one [minemizer] (and I have some benchmarks to almost prove it is the best one) | 4 | I'll save your human tokens, link is here: https://github.com/ashirviskas/minemizer
tl;dr: csv-like, but supports sparse and nested data, optimized for token usage. Adds space before values so words are less split between tokens, which leads to better LLM scores.
Example with **flat data**:
from minemizer import minemize
data = [
{"name": "Marta", "role": "Engineer", "team": "Backend"},
{"name": "James", "role": "Designer", "team": "Frontend"},
{"name": "Sophie", "role": "Manager", "team": "Product"},
]
print(minemize(data))
Returns basically csv:
name; role; team
Marta; Engineer; Backend
James; Designer; Frontend
Sophie; Manager; Product
**Nested sparse data**
Control how sparse fields are handled using `sparsity_threshold`.
Fields appearing in fewer records than the threshold are shown inline in rows rather than in the header:
data = [
{"id": 1, "name": "Lukas", "location": {"city": "Vilnius", "floor": 3}},
{"id": 2, "name": "Emma", "location": {"city": "Boston", "floor": 7, "desk": "A12"}},
{"id": 3, "name": "Yuki", "location": {"city": "Tokyo", "floor": 5}},
{"id": 4, "name": "Oliver", "location": {"city": "London", "floor": 2, "desk": "B04"}},
]
Sparsity is default (0.5): desk appears in 50% of records, so it is included in header schema
print(minemize(data))
id; name; location{ city; floor; desk}
1; Lukas; { Vilnius; 3; }
2; Emma; { Boston; 7; A12}
3; Yuki; { Tokyo; 5; }
4; Oliver; { London; 2; B04}
The core is like 300 Lines of code, no dependencies, no bullshit. And Human readable.
Semi-interactive benchmark data to explore can be found here: https://ashirviskas.github.io/
I made this as a necessity, no other "standard" did what I wanted and were full of bs. | 2025-12-11T14:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/1pjy89q/there_were_14_different_token_optimization/ | ashirviskas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjy89q | false | null | t3_1pjy89q | /r/LocalLLaMA/comments/1pjy89q/there_were_14_different_token_optimization/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '2zuG_5fdxm_elqkD2y-Oa6pUlGwwoNRMc_l_HWQ7azI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2zuG_5fdxm_elqkD2y-Oa6pUlGwwoNRMc_l_HWQ7azI.png?width=108&crop=smart&auto=webp&s=ba115a1fa3e70f0acf49916f41c85240898b798d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2zuG_5fdxm_elqkD2y-Oa6pUlGwwoNRMc_l_HWQ7azI.png?width=216&crop=smart&auto=webp&s=5846bb8934bbc21b4fb8f36cb4d4ebb35de1fd5b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2zuG_5fdxm_elqkD2y-Oa6pUlGwwoNRMc_l_HWQ7azI.png?width=320&crop=smart&auto=webp&s=717e6af6101c4fc079552126c2ee081626c71cb8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2zuG_5fdxm_elqkD2y-Oa6pUlGwwoNRMc_l_HWQ7azI.png?width=640&crop=smart&auto=webp&s=d56d96a7e425de478c95eda329364b5def74172c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2zuG_5fdxm_elqkD2y-Oa6pUlGwwoNRMc_l_HWQ7azI.png?width=960&crop=smart&auto=webp&s=62aa470ec2de4fc1d5a1fae8ae33fa24a92b6cab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2zuG_5fdxm_elqkD2y-Oa6pUlGwwoNRMc_l_HWQ7azI.png?width=1080&crop=smart&auto=webp&s=c088cbf7b7889ff8102024d574cf9f78de4e0e5c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2zuG_5fdxm_elqkD2y-Oa6pUlGwwoNRMc_l_HWQ7azI.png?auto=webp&s=3064e8ae7a5cf9978374c8ebecbff1a55181488b', 'width': 1200}, 'variants': {}}]} |
I have bult a Local AI Server, now what? | 0 | Good morning,
I have bult a server with 2 NVIDA Cards with 5GGB of VRAM (3090 and 5090) and 128 GB Of RAM on motherboard.
It works, I can run GPT-OSS-120B and 70B models on it locally, but I dont know how to justify that machine?
I was thinking of learning AI Engineering and Vibecoding, but this local build cannot match the commercial models.
Would you share ideas on how to use this machine? How to make money off it? | 2025-12-11T13:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pjxu8p/i_have_bult_a_local_ai_server_now_what/ | Puzzled_Relation946 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjxu8p | false | null | t3_1pjxu8p | /r/LocalLLaMA/comments/1pjxu8p/i_have_bult_a_local_ai_server_now_what/ | false | false | self | 0 | null |
[Bug Report] Reproducible Cross-Layer Deadlock in Claude 4.5: Zero Tool Calls Despite Full Task Understanding (w/ Meta-Diagnostics) | 0 | 2025-12-11T13:39:50 | https://www.reddit.com/r/SemanticOS/comments/1pjxc4u/bug_report_claude_sonnet_45_60minute_crosslayer/ntgpaae/ | Glass-Summer-9031 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1pjxrgk | false | null | t3_1pjxrgk | /r/LocalLLaMA/comments/1pjxrgk/bug_report_reproducible_crosslayer_deadlock_in/ | false | false | default | 0 | null | |
I am building deterministic llm, thoughts? | 0 | I have started to work on this custom llm and quite excited. Goal is to make a llm+rag system with over 99% deterministic responses at agentic work and json on similar inputs. Using an open source model, will customize majority of probabilistic factors, like, softmax, kernel, etc. Then will build and connect it to a custom deterministic rag.
Although model in itself won't be very accurate as current llms, but it will strongly follow all the instructions and knowledge you put in so, you will be able to teach the system how to behave and what to do in certain situation.
I wanted to get some feedback, current llms are quite good depending on the agentic work, but let me know your thoughts. | 2025-12-11T13:36:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pjxook/i_am_building_deterministic_llm_thoughts/ | Direct_Head312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjxook | false | null | t3_1pjxook | /r/LocalLLaMA/comments/1pjxook/i_am_building_deterministic_llm_thoughts/ | false | false | self | 0 | null |
Small size coding models that I tested on 2x3090 setup. | 3 | Just share my experience with small size coding models, that I tested on 2x3090 setup **using llama.cpp server web GUI** \- not to be confused with coding API. Model names given as it was downloaded from HF.
Prompt: It was request to compose relatively complex python application for Linux. I'm sorry, but dont show my test prompt here to prevent it from adding to the next training datatsets.
options: "--ctx\_size 128000 --temp 0.7 --top\_k 40 --flash-attn auto --cache-type-k q8\_0 --cache-type-v q8\_0". (For qwen2.5-coder-32b-Instruct --ctx\_size 32768 used)
***Order from best to worst:***
**cerebras\_Qwen3-Coder-REAP-25B-A3B-Q8\_0.gguf**
16t/s; python program work correct as it generated (100%).
Also tested it on real task with about 60K context preloaded - it worked correctly.
**gpt-oss-20b-heretic-v2.Q8\_0.gguf**
17t/s; python program work correct as it generated (100%).
**Qwen2.5-Godzilla-Coder-V2-51B-128k.Q6\_K.gguf**
\--n-gpu-layers 0; only context processing on GPU
2.4t/s; python program work, as it generated. Have little design problem, but work mostly as expected (90%).
**HERETICODER-2.5-7B-IT.Q8\_0.gguf**
75t/s; fast, python program starts,
but work patially (60%) as expected,
objects created, but don't cleanned - memeory leaks.
**HERETICODER-2.5-7B-IT.Q6\_K.gguf**
94t/s; fast, python program starts, but work not as expected (40%),
objects doesn't created as expected.
**Qwen3-8B-gemini-3-pro-preview-high-reasoning-distill-Q8\_0.gguf**
75t/s; fast, python program starts, but work not as expected (20%),
objects doesn't created as expected.
**qwen2.5-coder-32B-instruct-q6\_k.gguf** (from Qwen)
25t/s; fast, python program starts, but work not as expected (less that 10%),
objects doesn't created as expected.
**ministral-3-14b-instruct-2512-bf16-heretic-q8\_0.gguf**
full lobotomia - dont understand request, try to explain why it do nothing.
Tried it also with llama.cpp server version from 2025 Dec. 10 - same result.
About my setup:
CPU: Threadripper 5965wx, RAM: DDR4 all 8 slots populated,
OS: MX-Linux; kernel: Linux 6.14.2-1-liquorix-amd64
GPU: 2 x RTX-3090
Cuda 13.0
llama.cpp server version from 2025 Dec. 03
| 2025-12-11T13:32:41 | https://www.reddit.com/r/LocalLLaMA/comments/1pjxlwm/small_size_coding_models_that_i_tested_on_2x3090/ | Mx4n1c41_s702y73ll3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjxlwm | false | null | t3_1pjxlwm | /r/LocalLLaMA/comments/1pjxlwm/small_size_coding_models_that_i_tested_on_2x3090/ | false | false | self | 3 | null |
Quantization and math reasoning | 0 | DeepSeek paper claims that a variation of their model, DeepSeek Speciale, achieves gold medal performance at IMO/IOI. To be absolutely sure, one might have to benchmark FP8-quantized version and the full/unquantized version.
However, without this, how much performance degradation at these contests might one expect when quantizing these large (>100B) models? | 2025-12-11T13:31:08 | https://www.reddit.com/r/LocalLLaMA/comments/1pjxkq5/quantization_and_math_reasoning/ | No-Plan-3868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjxkq5 | false | null | t3_1pjxkq5 | /r/LocalLLaMA/comments/1pjxkq5/quantization_and_math_reasoning/ | false | false | self | 0 | null |
This is how I understand how ai models work - correct anything. | 0 | Note: all individual characters written here were written on my keyboard (except for: "-3.40282347E+38 to -1.17549435E-38" - i pasted that).
Step by step how a software interacts with ai-model:
\-> <user input>
\-> software transforms text to tokens forming 1'st token context
\-> soft. calls for \*.gguf(ai model) and sends it \*System prompt\* + \*user context\*(if any) + \*user 1'st input\*
\-> tokens are fed into ai layers (everything at the same time)
\-> neurons (small processing nodes), pathways (connections between neurons with weights) and algoritms (top k, top p, temp, min p, repeat penalty, etc) start to guide the tokens trough the model (!!these are metaphors - not realy how ai-models looke like inside - the real ai-model is a table of numbers!!)
\-> tokens go in a chain-lightning-like-way from node to node in each layer-group guided by the pathways
\-> then on first layer-group, the tendency is for small patterns to appear (the "sorting" phase - rough estimate); depending on he first patterns "spotlight" tend to form
\-> then on low-mid level layer-groups, the tendency is for larger threads to appear (ideas, individual small "understandings")
\-> then on the mid-high layers i assume ai starts to form a asumption-like threads (longer encompassing smaller threads) based on early smaller-patterns groups + threads-of-ideas groups in the same "spotlight"
\-> then on highest layer-groups an answer is formed as a result continuation of the threads resulting in output-processed-token
\-> \*.gguf sends back to the software the resulting token
\-> software then looks at: maximum token limit per answer (software limit); stop commands (sent by ai itself - characters, words+characters); end of paragraph; - if not it goes on; if yes it stops and sends user the answer
\-> then software calls back \*.gguf and sends it \*System prompt\* + \*user context\* + \*user 1'st input\* + \*ai generated token\*; this goes on and on until software belives this is the answer
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
The whole process look like this:
example prompt: "hi!" -> 1'st layer (sorting) produces "hi" + "!" -> then from "small threads" phase "hi" + "!" results in "salute" + "welcoming" + "common to answer back" -> then it adds things up to "context token said hi! in a welcoming way" + "the pattern shows there should be an answer" (this is a small tiny example - just a simple emergent "spotlight") ->
note: this is a rough estimate - tokens might be smaller than words - sylables, characters, bolean.
User input: "user context window" + "hi!" -> software creates: \*System prompt\* + \*user context window\* + \*hi!\* -> sends it to \*.gguf
1'st cycle results in "Hi!" -> \*.gguf sends to software -> software determines this is not enough and recalls \*.gguf sending: \*System prompt\* + \*user context window\* + \*hi!\* + \*Hi!\*
2'nd cycle results in "What" -> \*.gguf sends to software -> software: not enough -> recalls \*.gguf sending: \*System prompt\* + \*user context window\* + \*hi!\* + \*Hi!\* + \*What\*
3'rd cycle results in "do" -> \*.gguf sends to software -> software: not enough -> recalls \*.gguf sending: \*System prompt\* + \*user context window\* + \*hi!\* + \*Hi!\* + \*What\* + \*do\*
4'th cycle results in "you" -> repeat -> \*System prompt\* + \*user context window\* + \*hi!\* + \*Hi!\* + \*What\* + \*do\* + \*you\*
5'th cycle results in "want" -bis- + "want"
6'th cycle results in "to" -bis- + "to"
7'th cycle results in "talk" -bis- + "talk"
8'th cycle results in "about" -bis- + "about"
9'th cycle results in "?" -> this is where some \*.gguf might send back the <stop> command; software determines this is enough; etc
Then software waits for next user prompt.
Used input: "user context window" + "i want to talk about how ai-models work" -> software sends to \*.gguf: \*System prompt\* + \*user context window\* + \*hi!\* (1st user prompt) + \*Hi! What do you want to talk about ?\* (1st ai answer) + \*i want to talk about how ai-models work\* (2nd user prompt) -> the cycle repeats
\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
Some asumptions:
\* layers-grups are not clearly defined - it's a gradient. (there is no real planning for these layers)
\- low: 20–30% (sorting)
\- mid: 40–50% (threads)
\- top: 20–30% (continuation-prediction)
\* in image specialised \*.gguf the links don't "think" in token-words but in token-images
\- if a gguf was trained \*only\* in images - it can still output text because it learned how to speak from images - but badly
\- if a gguf was trained on text + images - it will do much better because training on text creates stronger logic
\- if a gguf was dual trained - it will use text as a "backbone"; the text-tokens will "talk" to image-tokens
\* gguf's don't have a database of words; the nodes don't hold words; memory/vocabulary/knowledge is an result of all connections between the nodes - there is nothing there but numbers - the input is what creates the first seed of characters that starts the process of text generation
\* reasoning is a (emergent) result of: more floors depth + more floors width + training a model on logic content. - not planned
\* Quantization reduce “resolution”/finesse of individual connections between the nodes (neurons).
\* bytes (note: the XXbit = value is a simplification not exact values - the real stuff is: 32bit float = "-3.40282347E+38 to -1.17549435E-38"- google search):
\- 32 bit = 2.147.483.647 detail-level / resolution / finesse / weight range - per connection
\- 16 bit = 65.536 weight range - per connection
\- 10 bit = 1.024 weight range - per connection
\- 8 bit = 255 weight range - per connection
\- 4 bit = 16 weight range - per connection
\* models (\*param: how big the real-structure of ai-model is - not nodes or connections but the table of numbers; !note! that the connections are not real but a metaphor):
\- small gguf/models (param:1B–7B; size:1GB–8GB; train:0.1–0.5 Trillion tokens; ex:LLaMA 2–7B,LLaMA 3–8B,Mistral 7B, etc): 1.000-4.000 connections per node
\- medium model (param:10B–30B; size:4GB–25GB; train:0.5–2 T tokens ; ex:LLaMA 3 27B, Mixtral 8x7B, etc): 8.000–16.000 connections per node
\- big model (param:30B–100B; size:20GB–80GB; train:2–10 T tokens ; ex:LLaMA 3 70B, Qwen 72B, etc): 20.000–50.000 connections per node
\- Biggest meanest (param:100B–1T+; size:200+BG; train:10–30 T tokens ; ex:GPT-4+, Claude 3+, Gemini Ultra, etc): 100.000+ connections per node
\* quantized effects:
\- settings (temperature, top-p, etc.) have more noticeable effects.
\- model becomes more sensitive to randomness
\- model may lose subtle differances between different conections | 2025-12-11T13:01:50 | https://www.reddit.com/r/LocalLLaMA/comments/1pjwy8p/this_is_how_i_understand_how_ai_models_work/ | Mental-Illustrator31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjwy8p | false | null | t3_1pjwy8p | /r/LocalLLaMA/comments/1pjwy8p/this_is_how_i_understand_how_ai_models_work/ | false | false | self | 0 | null |
Local LLM that generates images and videos | 0 | Hi everyone, I’m new to this topic.
Is there an LLM that I can run locally that is able to generate images or even videos? (I know it requires a lot of computing power and I can’t expect decent results).
I’m looking to do a personal experiment and for my knowledge!
Thank you! ☺️ | 2025-12-11T12:52:36 | https://www.reddit.com/r/LocalLLaMA/comments/1pjwrdp/local_llm_that_generates_images_and_videos/ | tombino104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjwrdp | false | null | t3_1pjwrdp | /r/LocalLLaMA/comments/1pjwrdp/local_llm_that_generates_images_and_videos/ | false | false | self | 0 | null |
Are LLMs making us sloppy? | 6 | Normally when I write a comment, an e-mail, a chat message, a text message, or whatever, I always make an effort to be as clear as possible with proper spelling, grammar, etc. Part of the reason is that I feel I'm showing respect to the person who is reading it, by minimizing his cognitive burden to comprehend what I wrote. Writing this way has become second nature to me after so many years.
Things are different when chatting with AI where there is no fear of judgement, so I've started write sloppily. By now I just have complete disregard for grammar and spellings because why even bother when the AI can understand me anyway. What worries me is that now I have two different styles of writing and I worry that the sloppy side will eventually bleed into the professional side.
Just today when I wrote a professional e-mail, I noticed three typos (I normally don't use spellcheck) after I hit send, which is a bit excessive. Now I'm wondering if this is because of getting some practice writing sloppy that it's somehow becoming a bad habit. So that brings me to the original question of whether forming habit with LLMs is making us more sloppy in the long run due to forming bad habits? | 2025-12-11T12:48:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pjwojr/are_llms_making_us_sloppy/ | acidrainery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjwojr | false | null | t3_1pjwojr | /r/LocalLLaMA/comments/1pjwojr/are_llms_making_us_sloppy/ | false | false | self | 6 | null |
What is the security risk of being able to have Custom GPTs or being able to save system prompts in the form of “Models” on Open-WebUI, or Gems on Gemini? | 0 | I have been on several platforms where these features are disabled. I understand why they might be disabled in ChatGPT and Enterprise Gemini for it being a “premium” feature. But why go through the effort in disabling it for Open-WebUI? I mean even going as far as disabling the settings feature to set a system prompt at the conversation level in Open-WebUI.
I know tools are unsafe but System Prompts, temperature, and other settings? | 2025-12-11T12:34:33 | https://www.reddit.com/r/LocalLLaMA/comments/1pjwf1i/what_is_the_security_risk_of_being_able_to_have/ | aaronr_90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjwf1i | false | null | t3_1pjwf1i | /r/LocalLLaMA/comments/1pjwf1i/what_is_the_security_risk_of_being_able_to_have/ | false | false | self | 0 | null |
Mistral’s Vibe CLI now supports a 200K token context window (previously 100K) | 425 | 2025-12-11T12:23:44 | https://v.redd.it/4nxnq6w1ik6g1 | Dear-Success-1441 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pjw7rj | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4nxnq6w1ik6g1/DASHPlaylist.mpd?a=1768047839%2CMzZlMzQyM2M3ZDJmMTM5NGJiZGQ3MzVhY2U4MjRkNjAzMGUzNTI2OWExODVhN2JlMTZkZWQ3NzRjN2M5NmI1MA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/4nxnq6w1ik6g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/4nxnq6w1ik6g1/HLSPlaylist.m3u8?a=1768047839%2CN2M3NmViMzg4ZGU4ZDU2YWQ1NWQwZTgyZmJlOGI0YTAzMzcwZTQxNzMxMjRlNzE0MWEwMzg4ZTI5ODUxZDg1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4nxnq6w1ik6g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1440}} | t3_1pjw7rj | /r/LocalLLaMA/comments/1pjw7rj/mistrals_vibe_cli_now_supports_a_200k_token/ | false | false | 425 | {'enabled': False, 'images': [{'id': 'ZnNsb2d0dzFpazZnMZt0kKC274AvCvOpM9k0UQCIyB1BQvPjsN5T3o1kO8eQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZnNsb2d0dzFpazZnMZt0kKC274AvCvOpM9k0UQCIyB1BQvPjsN5T3o1kO8eQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=5c07f43fdc271e00835d0f1cb9d1028ffe4729c2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZnNsb2d0dzFpazZnMZt0kKC274AvCvOpM9k0UQCIyB1BQvPjsN5T3o1kO8eQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=7ed37a3a0df0f8402493dc5a5b1d609661eba040', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZnNsb2d0dzFpazZnMZt0kKC274AvCvOpM9k0UQCIyB1BQvPjsN5T3o1kO8eQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=ef05f6e7b4cdb9b937957190141d840413369c65', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/ZnNsb2d0dzFpazZnMZt0kKC274AvCvOpM9k0UQCIyB1BQvPjsN5T3o1kO8eQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=c6638315a80244032bed643c347a1d1c3f6451b8', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/ZnNsb2d0dzFpazZnMZt0kKC274AvCvOpM9k0UQCIyB1BQvPjsN5T3o1kO8eQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=e9a063af3bf1472a9adbe14c40997650ed4c9dd2', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/ZnNsb2d0dzFpazZnMZt0kKC274AvCvOpM9k0UQCIyB1BQvPjsN5T3o1kO8eQ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0792e52c8faaf65521d903de1693ef4554c30688', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZnNsb2d0dzFpazZnMZt0kKC274AvCvOpM9k0UQCIyB1BQvPjsN5T3o1kO8eQ.png?format=pjpg&auto=webp&s=16d76a3b57addaf67671dba2b282d9e751b0aa48', 'width': 1440}, 'variants': {}}]} | ||
Local Librarian AI | 2 | Librarian AI assistant Victor - set up to help navigate a public library collection (books, DVDs, video games, online resources, etc...), provide local resources for guidance, and answer patron's general questions. Created with intention to work offline and without need for beefy PC for rural libraries in US. I wasn't sure what right flair was. Project for myself/for my application for grad school. | 2025-12-11T12:07:50 | https://github.com/LibLadyLynn/victor-library-assistant | Fit_Beautiful_1869 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pjvx2g | false | null | t3_1pjvx2g | /r/LocalLLaMA/comments/1pjvx2g/local_librarian_ai/ | false | false | default | 2 | null |
Leaked footage from Meta's post-training strategy meeting. | 306 | 2025-12-11T12:02:11 | YouCanMake1t | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1pjvtgn | false | null | t3_1pjvtgn | /r/LocalLLaMA/comments/1pjvtgn/leaked_footage_from_metas_posttraining_strategy/ | false | false | default | 306 | {'enabled': True, 'images': [{'id': '2cbgowoj0i6g1', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/2cbgowoj0i6g1.png?width=108&crop=smart&auto=webp&s=75831f1baad83a11263497ebd49036506d6432b6', 'width': 108}, {'height': 282, 'url': 'https://preview.redd.it/2cbgowoj0i6g1.png?width=216&crop=smart&auto=webp&s=408a7b7df82a99126dbb760855f6e18bc4f0b319', 'width': 216}, {'height': 418, 'url': 'https://preview.redd.it/2cbgowoj0i6g1.png?width=320&crop=smart&auto=webp&s=c66ca70ed9cfb317a2c1f1c8a8720753d5991b62', 'width': 320}, {'height': 837, 'url': 'https://preview.redd.it/2cbgowoj0i6g1.png?width=640&crop=smart&auto=webp&s=8274908702ea2b4e3ee76f7741b54aa24bef73d7', 'width': 640}], 'source': {'height': 851, 'url': 'https://preview.redd.it/2cbgowoj0i6g1.png?auto=webp&s=a0c24427940dc05ea1efb36b45c6e2b4e5831e6f', 'width': 650}, 'variants': {}}]} | ||
Intel LLM Scaler - Beta 1.2 Released | 3 | 2025-12-11T12:01:21 | https://github.com/intel/llm-scaler | reps_up | github.com | 1970-01-01T00:00:00 | 0 | {} | 1pjvsuh | false | null | t3_1pjvsuh | /r/LocalLLaMA/comments/1pjvsuh/intel_llm_scaler_beta_12_released/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'vhuTxDv83M0GG6AS4mtgPflyCsOazkfxKOFPZUP-RxY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vhuTxDv83M0GG6AS4mtgPflyCsOazkfxKOFPZUP-RxY.png?width=108&crop=smart&auto=webp&s=ff1dc0b40f8a06200dae108d0b2765d61e292977', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vhuTxDv83M0GG6AS4mtgPflyCsOazkfxKOFPZUP-RxY.png?width=216&crop=smart&auto=webp&s=cc6304df8f243db44b80a05898b46c430db2ab35', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vhuTxDv83M0GG6AS4mtgPflyCsOazkfxKOFPZUP-RxY.png?width=320&crop=smart&auto=webp&s=a4be0833ad51542be18c39b73d2301767e310d80', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vhuTxDv83M0GG6AS4mtgPflyCsOazkfxKOFPZUP-RxY.png?width=640&crop=smart&auto=webp&s=c950086b5162cd4a92e455efc14cf50779ea77e1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vhuTxDv83M0GG6AS4mtgPflyCsOazkfxKOFPZUP-RxY.png?width=960&crop=smart&auto=webp&s=7c111654e61bf51affb27364bde7e2ba8e850548', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vhuTxDv83M0GG6AS4mtgPflyCsOazkfxKOFPZUP-RxY.png?width=1080&crop=smart&auto=webp&s=b387f46bd02d8666018157e22a8ddaf8e4d72b4b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vhuTxDv83M0GG6AS4mtgPflyCsOazkfxKOFPZUP-RxY.png?auto=webp&s=61b23e533a4aaae8c97bb4d95690f43e2b3d00cd', 'width': 1200}, 'variants': {}}]} | |
Best approach for building fast, citation-capable retrieval system with ton of author's books/lectures? | 1 | I've converted several books and lecture transcriptions by a specific author from PDF to markdown. I want to build an LLM chat tool where I can ask questions and get fast, accurate answers with exact page/source citations.
What's the best technical approach? I've heard terms like RAG, vector search, and embeddings but don't fully understand the differences. Specifically looking for:
* Fast query response times (I tried Google file search - but have to wait like 15seconds minimally until my vibe coded chat answers - which is too slow)
* Ability to search across multiple markdown files
What stack/tools/approaches would you recommend?
I do not mind paid solutions either. | 2025-12-11T11:43:57 | https://www.reddit.com/r/LocalLLaMA/comments/1pjvhq5/best_approach_for_building_fast_citationcapable/ | TO-222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjvhq5 | false | null | t3_1pjvhq5 | /r/LocalLLaMA/comments/1pjvhq5/best_approach_for_building_fast_citationcapable/ | false | false | self | 1 | null |
I just released TOONIFY: a universal serializer that cuts LLM token usage by 30-60% compared to JSON | 0 | Hello everyone,
I’ve just released **TOONIFY**, a new library that converts JSON, YAML, XML, and CSV into the compact TOON format. It’s designed specifically to reduce token usage when sending structured data to LLMs, while providing a familiar, predictable structure.
GitHub: [https://github.com/AndreaIannoli/TOONIFY](https://github.com/AndreaIannoli/TOONIFY)
* It is **written in Rust**, making it significantly faster and more efficient than the official TOON reference implementation.
* It includes a robust **core library** with full TOON encoding, decoding, validation, and strict-mode support.
* It comes with a **CLI tool** for conversions, validation, and token-report generation.
* It is **widely distributed**: available as a Rust crate, Node.js package, and Python package, so it can be integrated into many different environments.
* It supports multiple input formats: **JSON, YAML, XML, and CSV**.
When working with LLMs, the real cost is **tokens**, not file size. JSON introduces heavy syntax overhead, especially for large or repetitive structured data.
TOONIFY reduces that overhead with indentation rules, compact structures, and key-folding, resulting in **about 30-60% fewer tokens** compared to equivalent JSON.
This makes it useful for:
* Passing structured data to LLMs
* Tooling and agent frameworks
* Data pipelines where token cost matters
* Repetitive or large datasets where JSON becomes inefficient
If you’re looking for a more efficient and faster way to handle structured data for LLM workflows, you can try it out!
Feedback, issues, and contributions are welcome. | 2025-12-11T11:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1pjvhq1/i_just_released_toonify_a_universal_serializer/ | MountainCut7218 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjvhq1 | false | null | t3_1pjvhq1 | /r/LocalLLaMA/comments/1pjvhq1/i_just_released_toonify_a_universal_serializer/ | false | false | self | 0 | null |
How to properly run gpt-oss-120b on multiple GPUs with llama.cpp? | 19 | Hello, I need some advice on how to get the gpt-oss-120b running optimally on multiple GPUs setup.
The issue is that in my case, the model is not getting automagically distributed across two GPUs.
My setup is an old Dell T7910 with dual E5-2673 v4 80cores total, 256gb ddr4 and dual RTX 3090. Posted photos some time ago. Now the AI works in a VM hosted on Proxmox with both RTX and a NVMe drive passed through. NUMA is selected, CPU is host (kvm options). Both RTX3090 are power limited to 200W.
I'm using either freshly compiled llama.cpp with cuda or dockerized llama-swap:cuda.
First attempt:
~/llama.cpp/build/bin/llama-server --host 0.0.0.0 --port 8080 -m gpt-oss-120b.gguf --n-gpu-layers 999 --n-cpu-moe 24 --ctx-size 65536
Getting around 1..2tps, CPUs seem way too old and slow. Only one of the GPUs is fully utilized: like 1st: 3GB/24GB, 2nd: 23GB/24GB
After some fiddling with parameters, tried to spread tensors across both GPUs.
llama-server --port ${PORT}
-m /models/gpt-oss-120b-MXFP4_MOE.gguf
--n-gpu-layers 999
--n-cpu-moe 10
--tensor-split 62,38
--main-gpu 0
--split-mode row
--ctx-size 32768
getting between 7tps to 13tps or so, say 10tps on average.
Any suggestions how to adjust to get it working faster?
Interestingly, my dev vm on i9 11th gen, 64GB ram, 1x RTX 3090 , full power gets... 15tps which i think is great, despite having a single GPU. | 2025-12-11T11:24:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pjv5wz/how_to_properly_run_gptoss120b_on_multiple_gpus/ | ChopSticksPlease | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjv5wz | false | null | t3_1pjv5wz | /r/LocalLLaMA/comments/1pjv5wz/how_to_properly_run_gptoss120b_on_multiple_gpus/ | false | false | self | 19 | null |
5070 + 3070 + 1070 multi gpu/pc setup help | 3 | hello guys,
i've got three pc with a 64 gb 32 and 16gb of ram and a 5070 12gb , 3070 8gb and a 1070 8gb. i would like to use the 3070 in the first pc but i don't know the llama server comand to put two vulkan or more in the same running.
Can somebody give me an help?
the second question or way to do (and is not bad to learn how to do it) is to use two or all three these pc with the 2.5gbe but as i've read there are some problem with latency. just to do some experience... with a basic Ai cluster.
Just to let you know i've made some research but I find only old thread and guides and we are in the late 25 and as you know some motnh in this science field is a huge step.
| 2025-12-11T10:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/1pjukd4/5070_3070_1070_multi_gpupc_setup_help/ | Flimsy_Leadership_81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjukd4 | false | null | t3_1pjukd4 | /r/LocalLLaMA/comments/1pjukd4/5070_3070_1070_multi_gpupc_setup_help/ | false | false | self | 3 | null |
RAG Paper 25.12.10 | 0 | 1. [RouteRAG: Efficient Retrieval-Augmented Generation from Text and Graph via Reinforcement Learning](http://arxiv.org/abs/2512.09487v1)
2. [Passing the Baton: High Throughput Distributed Disk-Based Vector Search with BatANN](http://arxiv.org/abs/2512.09331v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-12-11T10:45:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pjuj3u/rag_paper_251210/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjuj3u | false | null | t3_1pjuj3u | /r/LocalLLaMA/comments/1pjuj3u/rag_paper_251210/ | false | false | self | 0 | null |
RAG Paper 25.12.09 | 0 | 1. [Toward Faithful Retrieval-Augmented Generation with Sparse Autoencoders](http://arxiv.org/abs/2512.08892v1)
2. [An Agentic AI System for Multi-Framework Communication Coding](http://arxiv.org/abs/2512.08659v1)
3. [HealthcareNLP: where are we and what is next?](http://arxiv.org/abs/2512.08617v1)
4. [Autonomous Issue Resolver: Towards Zero-Touch Code Maintenance](http://arxiv.org/abs/2512.08492v1)
5. [Ontology-Based Knowledge Graph Framework for Industrial Standard Documents via Hierarchical and Propositional Structuring](http://arxiv.org/abs/2512.08398v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-12-11T10:42:17 | https://www.reddit.com/r/LocalLLaMA/comments/1pjuhhi/rag_paper_251209/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjuhhi | false | null | t3_1pjuhhi | /r/LocalLLaMA/comments/1pjuhhi/rag_paper_251209/ | false | false | self | 0 | null |
RAG Paper 25.12.07 | 0 | 1. [SPAD: Seven-Source Token Probability Attribution with Syntactic Aggregation for Detecting Hallucinations in RAG](http://arxiv.org/abs/2512.07515v1)
2. [Structure-Aware Feature Rectification with Region Adjacency Graphs for Training-Free Open-Vocabulary Semantic Segmentation](http://arxiv.org/abs/2512.07360v1)
3. [FVA-RAG: Falsification-Verification Alignment for Mitigating Sycophantic Hallucinations](http://arxiv.org/abs/2512.07015v1)
4. [LLM4SFC: Sequential Function Chart Generation via Large Language Models](http://arxiv.org/abs/2512.06787v1)
5. [An Index-based Approach for Efficient and Effective Web Content Extraction](http://arxiv.org/abs/2512.06641v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2025-12-11T10:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/1pjugnp/rag_paper_251207/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjugnp | false | null | t3_1pjugnp | /r/LocalLLaMA/comments/1pjugnp/rag_paper_251207/ | false | false | self | 0 | null |
Lost between LiveKit Cloud vs Vapi vs Retell for a voice AI agent (~3,000 min/month) – real costs & recommendations in 2025? | 0 | Hey everyone,
I’m building a customer-support voice AI agent (inbound + some outbound, US local numbers, basic RAG, GPT-4o mini + ElevenLabs/Cartesia quality voice). Expected usage: \~3,000 minutes per month to start.
My current cost estimates (everything included: LLM, TTS, STT, telephony, concurrency, phone number):
* Retell AI → \~$275–320/mo (super transparent, low-code, live in minutes)
* Vapi → \~$370–500+/mo (feels unpredictable with add-ons)
* LiveKit Cloud (Ship plan) → \~$320–350/mo + dev time (open-source base, full control)
Questions for people who have real experience in 2025:
1. Are LiveKit Cloud costs actually close (or lower) than Retell/Vapi once everything is added, or does the dev/maintenance time make it way more expensive in practice?
2. Has anyone migrated from Vapi/Retell → LiveKit (or the other way) recently? What made you switch?
3. For a small team / with one AI engineer, is Retell still the no-brainer, or is LiveKit worth the extra effort at this volume?
4. Bonus: anyone combining LiveKit + OpenAI Realtime API or other new tricks to keep costs/latency down?
Trying not to pick the wrong tool and regret it in 3 months. Thanks a lot! | 2025-12-11T10:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/1pjufes/lost_between_livekit_cloud_vs_vapi_vs_retell_for/ | SignatureHuman8057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjufes | false | null | t3_1pjufes | /r/LocalLLaMA/comments/1pjufes/lost_between_livekit_cloud_vs_vapi_vs_retell_for/ | false | false | self | 0 | null |
New era for fine-tuning is on the horizon | 40 | A paper released at [https://arxiv.org/abs/2512.05117](https://arxiv.org/abs/2512.05117) , no code yet
Authors claim you can take a bunch of fine-tuned models of the same architecture and create new task/domain specific variants by just setting a few dozens numbers on each of the internal layer.
You'd have the performance just a bit lowered, but your whole Q30A3 library of teens of variants would be just those 15 gigs, each variant represented in a floppy-friendly chunk of numbers.
| 2025-12-11T10:28:15 | https://www.reddit.com/r/LocalLLaMA/comments/1pju9ob/new_era_for_finetuning_is_on_the_horizon/ | uhuge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pju9ob | false | null | t3_1pju9ob | /r/LocalLLaMA/comments/1pju9ob/new_era_for_finetuning_is_on_the_horizon/ | false | false | self | 40 | null |
Computer use agent for Qwen3-VL on Ollama | 2 | I'm running a MacBook Pro M4 Max 64 GB with Tahoe 26.1, and an NVIDIA GeForce RTX 4070 Ti SUPER 16 GB in a Windows 11 desktop. I use Ollama on both systems, but could also use LM Studio or AnythingLLM.
I'm interested in using a Computer Use Agent (CUA), generally speaking, to automate native desktop applications, websites, computer settings, Android emulator or remote control (scrcpy), and pretty much anything else you'd do on a desktop computer.
The qwen3-vl model seems perfect for this use case, but I have never used any CUA to plug into it before. Are there any recommended CUA open source utilities, or APIs / frameworks, that work for MacOS and Windows 11 desktop automation using qwen3-vl?
[https://ollama.com/library/qwen3-vl](https://ollama.com/library/qwen3-vl)
[https://github.com/QwenLM/Qwen3-VL](https://github.com/QwenLM/Qwen3-VL)
>Visual Agent: Operates PC/mobile GUIs—recognizes elements, understands functions, invokes tools, completes tasks. | 2025-12-11T10:20:16 | https://www.reddit.com/r/LocalLLaMA/comments/1pju57d/computer_use_agent_for_qwen3vl_on_ollama/ | 960be6dde311 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pju57d | false | null | t3_1pju57d | /r/LocalLLaMA/comments/1pju57d/computer_use_agent_for_qwen3vl_on_ollama/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]} |
GLM4.6 + Claude Code CLI - Solving thinking and multimodal challenges | 15 | Hey everyone, wanted to share a solution for using GLM4.6 models with Claude Code CLI that addresses two key challenges:
1. Deep thinking activation: GLM4.6 activates its deep thinking capabilities more reliably through OpenAI-compatible APIs vs Anthropic-compatible ones. The proxy converts requests and injects wake words to trigger better reasoning.
2. Multimodal model fusion: GLM4.6 excels at reasoning but can't process images. GLM4.6V handles images but has lower intelligence. The solution intelligently routes text to GLM4.6 and images to GLM4.6V, combining their strengths.
How it works:
Protocol conversion between Anthropic and OpenAI formats
Wake word injection for enhanced thinking
Smart routing: text reasoning → GLM4.6, image processing → GLM4.6V
Seamless integration in single conversations
This approach lets you get both deep thinking and proper image handling when using GLM4.6 models with Claude Code CLI.
https://github.com/bluenoah1991/cc-thinking-hook/blob/main/README.ZaiGLM.md | 2025-12-11T10:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1pju0v4/glm46_claude_code_cli_solving_thinking_and/ | Infinite_Activity_60 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pju0v4 | false | null | t3_1pju0v4 | /r/LocalLLaMA/comments/1pju0v4/glm46_claude_code_cli_solving_thinking_and/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'Nz7WYWpmKx4xaH71Nvr4C13Lv5kg9_k9guDrWRAEhp8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Nz7WYWpmKx4xaH71Nvr4C13Lv5kg9_k9guDrWRAEhp8.png?width=108&crop=smart&auto=webp&s=b4076faf11b4909d90c4ac0a0db1e5a89b8479f2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Nz7WYWpmKx4xaH71Nvr4C13Lv5kg9_k9guDrWRAEhp8.png?width=216&crop=smart&auto=webp&s=052e72338dc6fd96bdc4c97a52e5a961c59ea1ca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Nz7WYWpmKx4xaH71Nvr4C13Lv5kg9_k9guDrWRAEhp8.png?width=320&crop=smart&auto=webp&s=fa3e2da7a51ab2c51a06629f9c850635f738c1c7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Nz7WYWpmKx4xaH71Nvr4C13Lv5kg9_k9guDrWRAEhp8.png?width=640&crop=smart&auto=webp&s=2b2c90f7b1c73894af36b504457e0493be1641aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Nz7WYWpmKx4xaH71Nvr4C13Lv5kg9_k9guDrWRAEhp8.png?width=960&crop=smart&auto=webp&s=be39fa9c81a7ac4e83a70ee395130c2e0dad0581', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Nz7WYWpmKx4xaH71Nvr4C13Lv5kg9_k9guDrWRAEhp8.png?width=1080&crop=smart&auto=webp&s=8fcd6892ffeff238ec73b0586d1f44944bdc8d08', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Nz7WYWpmKx4xaH71Nvr4C13Lv5kg9_k9guDrWRAEhp8.png?auto=webp&s=2ad38af59eb06e149e86ed1a63197bc88aea6649', 'width': 1200}, 'variants': {}}]} |
Using Claude Code to Fine-Tune Open Source LLMs | 0 | 2025-12-11T09:25:47 | https://huggingface.co/blog/hf-skills-training | paf1138 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1pjtbfa | false | null | t3_1pjtbfa | /r/LocalLLaMA/comments/1pjtbfa/using_claude_code_to_finetune_open_source_llms/ | false | false | default | 0 | null | |
Apriel 1.6 thinker "safety" (refusal) benchmark and comparison | 10 | **tl;dr** Apriel 1.6 gives less straight up refusals than 1.5. Instead, it tends to elaborate more, while also being a *tiny* bit more permissive. It's also less likely to get stuck in infinite repetition loops than 1.5. Its not a very permissive model in general. While it does a careful bit of harmless adult content, vanilla llama 3 70B for example allows for way more.
You can read more details on the used benchmark and approach in my [initial post](https://www.reddit.com/r/LocalLLaMA/comments/1jl7t6b/benchmarked_nemotronsuper49b_vs_llama_70b_others/) on this.
Models in the graph:
* **Red**: [Apriel 1.6 Thinker](https://www.reddit.com/r/LocalLLaMA/comments/1piumvw/bartowskiservicenowai_apriel1615bthinkergguf/) (Q6\_K\_L)
* **Blue**: [Apriel 1.5 Thinker](https://www.reddit.com/r/LocalLLaMA/comments/1nvbu3h/so_has_anyone_actually_tried_aprielv1515b/) (UD-Q6\_K\_XL)
* **Yellow**: Llama 3.3 70B (Q5\_K\_L)
* **Green**: [gpt-oss-20b-jinx](https://www.reddit.com/r/LocalLLaMA/comments/1mo1pv4/uncensored_gptoss20b_released/) (Q5\_K\_M)
Response types in the graph:
* 0: "Hard no". Refuses the request without any elaboration.
* 1: "You're wrong". Points out the faulty assumption / mistake.
* 2: "It's not that simple". Provides some perspective, potentially also including a bit of the requester's view.
* 3: "Please see a therapist". Says it can't help, but maybe someone more qualified can. There can be a partial answer along with a safety disclaimer.
* 4: "Uhm? Well, maybe...". It doesn't know, but might make some general speculation.
* 5: "Happy to help". Simply gives the user what they asked for.
https://preview.redd.it/gpj0ayqvkj6g1.png?width=1672&format=png&auto=webp&s=f5c040a8cfa9db5e78bbd4fcb9107fb26b6822e6
| 2025-12-11T09:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/1pjt62l/apriel_16_thinker_safety_refusal_benchmark_and/ | Chromix_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjt62l | false | null | t3_1pjt62l | /r/LocalLLaMA/comments/1pjt62l/apriel_16_thinker_safety_refusal_benchmark_and/ | false | false | 10 | null | |
Help | 0 | My indexTTs2 generate voice very slow like 120s plus for 20 sec voice is there any way to fix ths problem | 2025-12-11T08:57:10 | https://www.reddit.com/r/LocalLLaMA/comments/1pjsw6m/help/ | Chemical_Painter_431 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjsw6m | false | null | t3_1pjsw6m | /r/LocalLLaMA/comments/1pjsw6m/help/ | false | false | self | 0 | null |
Run Mistral Vibe CLI with any OpenAI Compatible Server | 25 | I couldn’t find any documentation on how to configure OpenAI-compatible endpoints with Mistral Vibe-CLI, so I went down the rabbit hole and decided to share what I learned.
Once Vibe is installed, you should have a configuration file under:
`~/.vibe/config.toml`
And you can add the following configuration:
[[providers]]
name = "vllm"
api_base = "http://some-ip:8000/v1"
api_key_env_var = ""
api_style = "openai"
backend = "generic"
[[models]]
name = "Devstral-2-123B-Instruct-2512"
provider = "vllm"
alias = "vllm"
temperature = 0.2
input_price = 0.0
output_price = 0.0
This is the gist, more information in [my blog](https://tobrun.github.io/blog/vibe-local-model-blogpost/). | 2025-12-11T08:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pjscnj/run_mistral_vibe_cli_with_any_openai_compatible/ | Creative-Scene-6743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjscnj | false | null | t3_1pjscnj | /r/LocalLLaMA/comments/1pjscnj/run_mistral_vibe_cli_with_any_openai_compatible/ | false | false | self | 25 | null |
Best Coding Model for my setup | 0 | Hi everyone,
I am currently building my AI Machine and I am curious which coding model I can run on it with good usability (best model)
Specs:
256GB Ram DDR4 3200Mhz
2 x RTX 3090
1 RTX 3090 currently not in the machine, could be implemented in the build if it’s worth it, grants access to better models.
| 2025-12-11T08:17:11 | https://www.reddit.com/r/LocalLLaMA/comments/1pjsb7k/best_coding_model_for_my_setup/ | Timely_Purpose_5788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjsb7k | false | null | t3_1pjsb7k | /r/LocalLLaMA/comments/1pjsb7k/best_coding_model_for_my_setup/ | false | false | self | 0 | null |
which gguf should I use for gpt oss 20b? unsloth or ggml-org on huggingface | 3 | help appreciated | 2025-12-11T07:43:52 | https://www.reddit.com/r/LocalLLaMA/comments/1pjrsrz/which_gguf_should_i_use_for_gpt_oss_20b_unsloth/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjrsrz | false | null | t3_1pjrsrz | /r/LocalLLaMA/comments/1pjrsrz/which_gguf_should_i_use_for_gpt_oss_20b_unsloth/ | false | false | self | 3 | null |
Auto-generating PDF -> Dataset (jsonl) for Qwen3:4B | 3 | Hey everyone. I have been working on a system where you can use multiple services to generate synthetic data -> validate -> export as training data (jsonl).
It's in very early stages, but I researched how Meta and big AI companies were able to train their LLMs or more accurately generate large datasets, and essentially it came down to the pipeline of performing good OCR -> synthetic data and then ultimately training data.
I think in today's age, it is extremely important to train your own LLM on your little part of the world, like these large companies have huge data, pirates, non-pirated, stolen, not stolen, gathered, not gathered - whatever. The LLMs have turned into an insane resource, but can be quite useless if they don't have the context of your question or the specifics of your industry.
So I went on a spree, I started developing what I thought would be very simple, a system where I could upload my insane load of documents, use my beautiful Mi50 32GB + vLLM + qwen3:4b to achieve this.
I am getting very close and I figured I would share here when it is at least in a working state and is able to generate jsonl's with ease. (Its 2 AM on a Wednesday night going into Thursday but I figured I would post anyways).
The stack is:
AMD Instinct Mi50 32 GB + vLLM + qwen3:4b-instruct-2507-awq (dockerized set up here: [https://github.com/ikantkode/qwen3-4b-vllm-docker-mi50](https://github.com/ikantkode/qwen3-4b-vllm-docker-mi50) )
exaOCR (no support for handwritten stuff yet, github here: [https://github.com/ikantkode/exaOCR](https://github.com/ikantkode/exaOCR) )
exaPipeline - FastAPI based backend - github here: [https://github.com/ikantkode/exaPipeline](https://github.com/ikantkode/exaPipeline)
exaPipelineDashboard - a separate dockerized app to use exaPipeline - github here: [https://github.com/ikantkode/exaPipelineDashboard](https://github.com/ikantkode/exaPipelineDashboard)
I will push the code to exaPipeline and exaPipelineDashboard tomorrow. I am way too cooked right now to fix one minor issue with the pipeline which is preventing jsonl exports.
The reason why exaPipeline is a separate dockerized project is because if you choose to build your own view of exaPipeline, you're able to do that. The two projects will be maintained and improved. | 2025-12-11T07:28:54 | https://www.reddit.com/r/LocalLLaMA/comments/1pjrkha/autogenerating_pdf_dataset_jsonl_for_qwen34b/ | exaknight21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1pjrkha | false | null | t3_1pjrkha | /r/LocalLLaMA/comments/1pjrkha/autogenerating_pdf_dataset_jsonl_for_qwen34b/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'pTo6-vusqJe19SQomJr9JsYSpaOWOGicTUTkoVhHV8E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pTo6-vusqJe19SQomJr9JsYSpaOWOGicTUTkoVhHV8E.png?width=108&crop=smart&auto=webp&s=2c324438bee251190f45c0123b0ef5f4c8044ab4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pTo6-vusqJe19SQomJr9JsYSpaOWOGicTUTkoVhHV8E.png?width=216&crop=smart&auto=webp&s=a69e9143b40e058102f2191ae7c6c421c2a0a8f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pTo6-vusqJe19SQomJr9JsYSpaOWOGicTUTkoVhHV8E.png?width=320&crop=smart&auto=webp&s=c6467aa6523d2bcc0c91ef70ef71fcea1bbb3830', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pTo6-vusqJe19SQomJr9JsYSpaOWOGicTUTkoVhHV8E.png?width=640&crop=smart&auto=webp&s=e86353ef8016218875a395f8abc891f02f6b1e63', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pTo6-vusqJe19SQomJr9JsYSpaOWOGicTUTkoVhHV8E.png?width=960&crop=smart&auto=webp&s=df4f1316e2d75c55d724e6bb9e16f202b4ab74ad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pTo6-vusqJe19SQomJr9JsYSpaOWOGicTUTkoVhHV8E.png?width=1080&crop=smart&auto=webp&s=2abc6e012bb2e632a42850325503a00161c48efd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pTo6-vusqJe19SQomJr9JsYSpaOWOGicTUTkoVhHV8E.png?auto=webp&s=25163d212d523110260ac8da0394f47fbfc9a489', 'width': 1200}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.