| Column | Type | Range / Classes |
|---|---|---|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
Need Help with running local LLM
3
Hi all, I need help running a local LLM on a home server to handle requests locally from all my home devices. Do you know a good place to start?
2025-12-08T19:40:16
https://www.reddit.com/r/LocalLLaMA/comments/1phm1d8/need_help_with_running_local_llm/
Hassan_Ali101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phm1d8
false
null
t3_1phm1d8
/r/LocalLLaMA/comments/1phm1d8/need_help_with_running_local_llm/
false
false
self
3
null
NVIDIA OS Security Update - Strongly recommended for MSI EdgeXpert users
2
# NOTICE: For anyone using the MSI EdgeXpert (with NVIDIA DGX OS)

NVIDIA has reported several security vulnerabilities in the DGX Spark firmware that could potentially lead to code execution, data exposure, tampering, denial of service, or privilege escalation. Their full advisory is here if you want the technical rundown: [https://nvidia.custhelp.com/app/answers/detail/a\_id/5720](https://nvidia.custhelp.com/app/answers/detail/a_id/5720)

Because these systems are often deployed in sensitive or production environments, **MSI strongly recommends updating to the latest DGX OS** to ensure everything stays secure and stable.

**How to Update on MSI EdgeXpert**

**Option 1 — Update via the NVIDIA DGX Dashboard in MSI EdgeXpert (recommended)**

1. Open the DGX Dashboard inside EdgeXpert
2. Go to the **Update** tab
3. Click **Update**

The upgrade will run automatically from there.

**Option 2 — Update via the MSI Website (full reinstall)**

If you prefer to manually install a fresh DGX OS image:

1. Download the latest MSI-provided DGX OS image here: [https://ipc.msi.com/product\_download/Industrial-Computer-Box-PC/AI-Supercomputer/EdgeXpert-MS-C931](https://ipc.msi.com/product_download/Industrial-Computer-Box-PC/AI-Supercomputer/EdgeXpert-MS-C931)
2. Follow the installation guide on the download page.

⚠️ **Note:** This method reinstalls the entire OS. Back up all data before starting, as it will be wiped.

For ongoing product security notices from MSI, you can always check our PSIRT page: [https://csr.msi.com/global/product-security-advisories](https://csr.msi.com/global/product-security-advisories)

Hope this helps everyone stay patched and protected.
2025-12-08T19:37:51
https://www.reddit.com/r/LocalLLaMA/comments/1phlz0f/nvidia_os_security_update_strongly_recommended/
MSI_Patrick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phlz0f
false
null
t3_1phlz0f
/r/LocalLLaMA/comments/1phlz0f/nvidia_os_security_update_strongly_recommended/
false
false
self
2
null
Which local LLM model has good knowledge about movies?
1
So, as the title says: do any of you know a good (or at least decent) local LLM for identifying a movie from descriptions of its scenes? I'd describe a scene, and it would give me hints about which movie it could be, so I can check whether any of its guesses are the movie I'm searching for.
2025-12-08T19:35:11
https://www.reddit.com/r/LocalLLaMA/comments/1phlwfr/what_local_llm_model_have_good_knowledge_about/
film_man_84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phlwfr
false
null
t3_1phlwfr
/r/LocalLLaMA/comments/1phlwfr/what_local_llm_model_have_good_knowledge_about/
false
false
self
1
null
Looking for an LLMOps framework for automated flow optimization
0
I'm looking for an advanced solution for managing AI flows. Beyond simple visual creation (like LangFlow), I'm looking for a system that allows me to run benchmarks on specific use cases, automatically testing different variants. Specifically, the tool should be able to:

* Automatically modify flow connections and the models used.
* Compare the results to identify which combination (e.g., which model for which step) offers the best performance.
* Work with both offline tasks and online search tools.

This is a costly process in terms of tokens and computation, but is there any "LLMOps" framework or tool that automates this search for the optimal configuration?
2025-12-08T19:24:53
https://www.reddit.com/r/LocalLLaMA/comments/1phlmly/looking_for_an_llmops_framework_for_automated/
panspective
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phlmly
false
null
t3_1phlmly
/r/LocalLLaMA/comments/1phlmly/looking_for_an_llmops_framework_for_automated/
false
false
self
0
null
llama.cpp + claude code - Error reading large file - exceeds maximum allowed tokens (25000)
0
Hey all, I'm trying to read a file of 510KB, and I get this error:

    ⏺ Read(resources/views/components/my-component.blade.php)
    ⎿  Error: File content (88168 tokens) exceeds maximum allowed tokens (25000).

My LLM is set to 200,000 tokens of context, but I can't find anything about Claude Code and reading large files. I've tried setting these two env vars, but no luck:

    export MAX_MCP_OUTPUT_TOKENS=200000
    export MAX_TOOL_OUTPUT_TOKENS=200000
    claude

Now, I'm sure this is not a hard limitation of CC and llama.cpp, right?

(Yes, the file is excessively large. It's mostly CSS styling that the LLM has to translate to Tailwind.)
2025-12-08T19:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1phl88b/llamacpp_claude_code_error_reading_large_file/
designbanana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phl88b
false
null
t3_1phl88b
/r/LocalLLaMA/comments/1phl88b/llamacpp_claude_code_error_reading_large_file/
false
false
self
0
null
Agents shouldn’t run loops. They should write contracts. (Testing a contract-driven runtime)
0
I’ve been frustrated with how brittle most agent frameworks are. They break in two predictable ways:

1. The control flow turns into an unreadable hairball.
2. YAML “intent” doesn’t match what the runtime actually enforces.

I’m building a system called OmniNode (protocol: ONEX) to test a different model: the contract—not the node code—is the artifact. Here’s the architecture I’m validating with the MVP:

# Reducers don’t coordinate workflows — orchestrators do

I’ve separated the two concerns entirely.

# Reducers:

* Use finite state machines embedded in contracts
* Manage deterministic state transitions
* Can trigger effects when transitions fire
* Enable replay and auditability

# Orchestrators:

* Coordinate workflows
* Handle branching, sequencing, fan-out, retries
* Never directly touch state

# LLMs as Compilers, not CPUs

Instead of letting an LLM “wing it” inside a long-running loop, the LLM generates a contract. Because contracts are typed (Pydantic/JSON-schema backed), the validation loop forces the LLM to converge on a correct structure. Once the contract is valid, the runtime executes it deterministically. No hallucinated control flow. No implicit state.

# Deployment = Publish a Contract

Nodes are declarative. The runtime subscribes to an event bus. If you publish a valid contract:

* The runtime materializes the node
* No rebuilds
* No dependency hell
* No long-running agent loops

# Why do this?

Most “agent frameworks” today are just hand-written orchestrators glued to a chat model. They all fail in the same way: nondeterministic logic hidden behind async glue. A contract-driven runtime with FSM reducers and explicit orchestrators fixes that.

# Where I’m at

I’m building the MVP now to prove the execution model works. I’d love feedback, critique, or reasons this approach might fail.
2025-12-08T18:54:35
https://www.reddit.com/r/LocalLLaMA/comments/1phkt30/agents_shouldnt_run_loops_they_should_write/
jonah_omninode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phkt30
false
null
t3_1phkt30
/r/LocalLLaMA/comments/1phkt30/agents_shouldnt_run_loops_they_should_write/
false
false
self
0
null
How to mimic chatgpt like behavior in API?
0
How does the ChatGPT UI actually work? Even in conversations longer than the model’s context length, it seems to cope easily. How does it do that? If I want to mimic the same capability using the API, what strategy should I use? Say I have a PDF of 500k tokens and need to summarize it: ChatGPT manages this (I checked), but how?
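The usual strategies behind that UI behavior are context truncation, rolling summaries of older turns, and chunked map-reduce over large documents. A minimal sketch of the map-reduce variant (the `summarize` callable is a stand-in for an API call, and chunking by characters rather than tokens keeps the example self-contained):

```python
def map_reduce_summary(text: str, summarize, chunk_size: int = 8000) -> str:
    """Summarize a document larger than the context window by summarizing
    chunks independently ('map'), then summarizing the summaries ('reduce')."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partial = [summarize(c) for c in chunks]            # map: one call per chunk
    combined = "\n".join(partial)
    if len(combined) <= chunk_size:                     # reduce fits in one call
        return summarize(combined)
    return map_reduce_summary(combined, summarize, chunk_size)  # else recurse

# Toy "summarize" that keeps the first 20 characters of its input:
doc = "lorem ipsum " * 5000                             # ~60k characters
result = map_reduce_summary(doc, lambda s: s[:20])
print(len(result) <= 20)  # → True
```

In practice you would chunk by tokens, summarize with the API, and possibly overlap chunks so context isn't lost at boundaries.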
2025-12-08T18:52:51
https://www.reddit.com/r/LocalLLaMA/comments/1phkrgu/how_to_mimic_chatgpt_like_behavior_in_api/
ExchangePersonal1384
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phkrgu
false
null
t3_1phkrgu
/r/LocalLLaMA/comments/1phkrgu/how_to_mimic_chatgpt_like_behavior_in_api/
false
false
self
0
null
I built a deterministic AI (Era) that uses Physics instead of LLMs. 99% Fact Accuracy, runs on CPU. Open Sourcing it today.
1
[removed]
2025-12-08T18:49:00
https://www.reddit.com/r/LocalLLaMA/comments/1phknm4/i_built_a_deterministic_ai_era_that_uses_physics/
Left_Object2581
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phknm4
false
null
t3_1phknm4
/r/LocalLLaMA/comments/1phknm4/i_built_a_deterministic_ai_era_that_uses_physics/
false
false
self
1
null
HELP: Procedural road network generation algorithm
0
Hey! I'm building a procedural open-world system in Unity and I'm stuck on generating an endless **road network** :|

Here's what I need:

* Roads start from a central **X-crossing** (4-way intersection) and extend in cardinal directions (N/E/S/W).
* Roads should become **curvy rural highways**, not a grid.
* **All intersections must be 90°** (for EasyRoads3D compatibility — a Unity package for generating road meshes that works pretty well).
* Roads can curve, but must generally follow their main direction (e.g., northbound stays mostly north).
* **T-junctions and X-crossings** should be generated when roads come near each other (~500m).
* Intersections should be **sparse** (every 2–5km).
* Everything must be **seed-based** and deterministic (works with chunk streaming).

In short, I want a road network where the player can drive and enjoy the road. Sometimes there should be intersections, so the player can choose a new direction, but not too often.

I've already built an **endless terrain streaming system**, and I have working integration with **EasyRoads3D**. I just need help designing a road generator that fits these constraints. I've tried many approaches (Perlin noise, snake builders, Claude/Codex), but none worked well — they either make chaotic messes or don't follow the 90° rule.

Any ideas on how I should proceed? Thanks in advance.
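One way to get the seed-based determinism and bounded wander described above: give each road its own seeded RNG and let its heading do a clamped random walk around its cardinal direction, snapping junction angles to the perpendicular of the road being joined. A rough Python sketch of the idea — not Unity/EasyRoads3D code, and every name and constant is invented for illustration:

```python
import math
import random

def road_polyline(seed: int, heading_deg: float, segments: int = 40,
                  seg_len: float = 100.0, max_drift: float = 15.0):
    """Deterministically grow a curvy road from (0, 0).

    The heading does a bounded random walk around the road's cardinal
    direction, so a 'northbound' road stays mostly north but still curves.
    Same seed -> same polyline, which is what chunk streaming needs.
    """
    rng = random.Random(seed)            # per-road RNG: fully deterministic
    pts, x, y, drift = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for _ in range(segments):
        drift = max(-max_drift, min(max_drift, drift + rng.uniform(-5, 5)))
        h = math.radians(heading_deg + drift)
        x, y = x + seg_len * math.sin(h), y + seg_len * math.cos(h)
        pts.append((x, y))
    return pts

def snap_to_perpendicular(heading_deg: float, other_road_deg: float) -> float:
    """Pick the right-angle approach closest to the incoming heading, so
    every generated junction meets the 90-degree requirement."""
    candidates = [other_road_deg + 90.0, other_road_deg - 90.0]
    return min(candidates, key=lambda c: abs((c - heading_deg + 180) % 360 - 180))

north = road_polyline(seed=42, heading_deg=0.0)
print(north == road_polyline(seed=42, heading_deg=0.0))  # → True: reproducible
print(snap_to_perpendicular(10.0, 0.0))  # → 90.0
```

Junction sparsity could then be enforced by rejecting merges within 2km of the last intersection along either road; porting the walk to C# with `System.Random(seed)` keeps the determinism.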
2025-12-08T18:48:58
https://www.reddit.com/r/LocalLLaMA/comments/1phknla/help_procedural_road_network_generation_algorithm/
MessageEquivalent347
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phknla
false
null
t3_1phknla
/r/LocalLLaMA/comments/1phknla/help_procedural_road_network_generation_algorithm/
false
false
self
0
null
Building Qwen3 style model from Scratch: A Complete Tutorial
32
I recently came across this wonderful video tutorial, which teaches how to build a Qwen3-style model from scratch. I'm sharing it here because it will be useful to many.
2025-12-08T18:47:36
https://www.youtube.com/watch?v=Jaj_SQsF-BI
Dear-Success-1441
youtube.com
1970-01-01T00:00:00
0
{}
1phkm98
false
{'oembed': {'author_name': 'freeCodeCamp.org', 'author_url': 'https://www.youtube.com/@freecodecamp', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Jaj_SQsF-BI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="LLM from Scratch Tutorial – Code &amp; Train Qwen 3"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Jaj_SQsF-BI/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'LLM from Scratch Tutorial – Code & Train Qwen 3', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1phkm98
/r/LocalLLaMA/comments/1phkm98/building_qwen3_style_model_from_scratch_a/
false
false
default
32
{'enabled': False, 'images': [{'id': 'WOKywN5M_ugMf7VL794OR6f7IKzhJ6LRDX4m9cjh8sk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/WOKywN5M_ugMf7VL794OR6f7IKzhJ6LRDX4m9cjh8sk.jpeg?width=108&crop=smart&auto=webp&s=6191282afe7230e6c845b18af700f2a90631ba43', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/WOKywN5M_ugMf7VL794OR6f7IKzhJ6LRDX4m9cjh8sk.jpeg?width=216&crop=smart&auto=webp&s=0e55957114b9e3729584b8e1de7406fb34d807f3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/WOKywN5M_ugMf7VL794OR6f7IKzhJ6LRDX4m9cjh8sk.jpeg?width=320&crop=smart&auto=webp&s=0f5cffc8d2d1e550d718b6265757dfd311309fc7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/WOKywN5M_ugMf7VL794OR6f7IKzhJ6LRDX4m9cjh8sk.jpeg?auto=webp&s=16e4a11b3a07d4363a36572786a1168bf6eb52d4', 'width': 480}, 'variants': {}}]}
Implementing nanochat using AMD’s MI300X hardware and dev credits.
15
tl;dr: This is a self-promotion post for my latest blog and repo implementing nanochat from scratch; anyone who has tried it, do give me suggestions or feedback of any kind.

I started this blog following the advice that if you want to understand a topic at length, try teaching it, and I did learn a lot during the process. This starts a multi-post implementation breakdown of **nanochat** using AMD’s MI300X hardware. No “$100 nanochat” here — I’m training for free with dev credits. All topics are discussed using code, algebra, and geometry.

Covered so far:

* Repo map
* RMSNorm implementation
* RoPE apply\_rotary\_emb
* GQA parameter count calcs
* KVCache behavior across context

Next up: `nanochat.muon.Muon` and the distributed optimizer `DistAdamW`.

Anyone interested in a **from-scratch transformer build log** with actual training runs, debugging notes, and math — I’d appreciate feedback, suggestions, or requests for what to analyze next.

**Link:** [**https://theatomsofai.substack.com/p/build-karapathys-nanochat-from-scratch**](https://theatomsofai.substack.com/p/build-karapathys-nanochat-from-scratch)
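For readers who haven't met RMSNorm yet, the operation covered in the post is small enough to sketch from memory. This is the generic definition in plain Python, not the blog's exact code; real implementations work on tensors and usually multiply by a learned per-channel gain, omitted here:

```python
import math

def rms_norm(x: list[float], eps: float = 1e-6) -> list[float]:
    """RMSNorm: scale activations by their reciprocal root-mean-square.
    Unlike LayerNorm it does not subtract the mean, and nanochat-style
    implementations typically have no bias; a learned gain (omitted here)
    is applied elementwise after this normalization."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms for v in x]

out = rms_norm([3.0, 4.0])
print(round(sum(v * v for v in out) / len(out), 3))  # → 1.0 (unit mean square)
```

The eps term guards against division by zero on all-zero activations.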
2025-12-08T18:39:23
https://www.reddit.com/r/LocalLLaMA/comments/1phkefq/implementing_nanochat_using_amds_mi300x_hardware/
Icy_Gas8807
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phkefq
false
null
t3_1phkefq
/r/LocalLLaMA/comments/1phkefq/implementing_nanochat_using_amds_mi300x_hardware/
false
false
self
15
{'enabled': False, 'images': [{'id': '_M8k-XGJIMiCiZRCMJOSeKOiW1OCDQzc_q5sq_NHDjk', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/_M8k-XGJIMiCiZRCMJOSeKOiW1OCDQzc_q5sq_NHDjk.jpeg?width=108&crop=smart&auto=webp&s=f20f5590bb3a12b5153acc228c0466a9bb6c72c9', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/_M8k-XGJIMiCiZRCMJOSeKOiW1OCDQzc_q5sq_NHDjk.jpeg?width=216&crop=smart&auto=webp&s=82be03f4ff091518ab015e2acc6af80677a37f3b', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/_M8k-XGJIMiCiZRCMJOSeKOiW1OCDQzc_q5sq_NHDjk.jpeg?width=320&crop=smart&auto=webp&s=d1598f4290a0d4d2ad798fe51f77c4ef3fea513f', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/_M8k-XGJIMiCiZRCMJOSeKOiW1OCDQzc_q5sq_NHDjk.jpeg?width=640&crop=smart&auto=webp&s=736c9184d4a248a778fc9a0c2fe4c044181a762f', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/_M8k-XGJIMiCiZRCMJOSeKOiW1OCDQzc_q5sq_NHDjk.jpeg?width=960&crop=smart&auto=webp&s=4acd50a6705fc659481f392367eb022045d4c884', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_M8k-XGJIMiCiZRCMJOSeKOiW1OCDQzc_q5sq_NHDjk.jpeg?auto=webp&s=45f4fa6dba188b496366cb468828d046cfff1d02', 'width': 1024}, 'variants': {}}]}
Tiny-A2D: An Open Recipe to Turn Any AR LM into a Diffusion LM
95
Code: [https://github.com/ZHZisZZ/dllm](https://github.com/ZHZisZZ/dllm) Checkpoints: [https://huggingface.co/collections/dllm-collection/tiny-a2d](https://huggingface.co/collections/dllm-collection/tiny-a2d) **TLDR**: You can now turn **ANY** autoregressive LM into a diffusion LM (parallel generation + infilling) with minimal compute. Using this recipe, we built a collection of the smallest diffusion LMs that work well in practice (e.g., [Qwen3-0.6B-diffusion-bd3lm-v0.1](https://huggingface.co/dllm-collection/Qwen3-0.6B-diffusion-bd3lm-v0.1)). [**dLLM**](https://github.com/ZHZisZZ/dllm): The Tiny-A2D series is *trained, evaluated and visualized* with [dLLM](https://github.com/ZHZisZZ/dllm) — a unified library for training and evaluating diffusion language models. It brings transparency, reproducibility, and simplicity to the entire pipeline, **serving as an all-in-one, tutorial-style resource.**
2025-12-08T18:29:49
https://v.redd.it/vzyejih3x06g1
Individual-Ninja-141
v.redd.it
1970-01-01T00:00:00
0
{}
1phk59c
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vzyejih3x06g1/DASHPlaylist.mpd?a=1767810606%2CNTc4MWQ4Yjc3ZTcwYjE0NjI2ZDI2N2RiOTgwZmU3NWQ5YWRlZDY5YmIxMGNmMTNhNWZjYjY0OGUyM2RhNzUwZg%3D%3D&v=1&f=sd', 'duration': 6, 'fallback_url': 'https://v.redd.it/vzyejih3x06g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vzyejih3x06g1/HLSPlaylist.m3u8?a=1767810606%2CMmQwM2JhMjlkZGI3YzIyYjQ5NjkzNTBjYjJmZTQ1OGRjZTFkMTQwY2U4NzA3MzlhOGIzZmY3MmI1MzYzMTU4Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vzyejih3x06g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1phk59c
/r/LocalLLaMA/comments/1phk59c/tinya2d_an_open_recipe_to_turn_any_ar_lm_into_a/
false
false
https://external-preview…febf22f52cc27234
95
{'enabled': False, 'images': [{'id': 'amg0MWxsaDN4MDZnMeq0L6tFOFYIbTBJmZOjfxjCUMUH4cuuibIVjJcy3j27', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/amg0MWxsaDN4MDZnMeq0L6tFOFYIbTBJmZOjfxjCUMUH4cuuibIVjJcy3j27.png?width=108&crop=smart&format=pjpg&auto=webp&s=cc18129d41272887c43ce31afff2f8ab335dfbea', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/amg0MWxsaDN4MDZnMeq0L6tFOFYIbTBJmZOjfxjCUMUH4cuuibIVjJcy3j27.png?width=216&crop=smart&format=pjpg&auto=webp&s=85128e42cfa6bcc8aef9a427a9f12e325a077027', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/amg0MWxsaDN4MDZnMeq0L6tFOFYIbTBJmZOjfxjCUMUH4cuuibIVjJcy3j27.png?width=320&crop=smart&format=pjpg&auto=webp&s=98aaa669d7fd2e4efd462538fb8ea7070adc38b2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/amg0MWxsaDN4MDZnMeq0L6tFOFYIbTBJmZOjfxjCUMUH4cuuibIVjJcy3j27.png?width=640&crop=smart&format=pjpg&auto=webp&s=177a3fba09ac5061a8791f2ec8bb4e09d396dfc6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/amg0MWxsaDN4MDZnMeq0L6tFOFYIbTBJmZOjfxjCUMUH4cuuibIVjJcy3j27.png?width=960&crop=smart&format=pjpg&auto=webp&s=193254536c40a129d326529087933c9d073d5d94', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/amg0MWxsaDN4MDZnMeq0L6tFOFYIbTBJmZOjfxjCUMUH4cuuibIVjJcy3j27.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c54b7196102aee70c6ecdea364e35cf30e819279', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/amg0MWxsaDN4MDZnMeq0L6tFOFYIbTBJmZOjfxjCUMUH4cuuibIVjJcy3j27.png?format=pjpg&auto=webp&s=8dc94ab83d7cfe48525758448c72f4c429961cef', 'width': 1920}, 'variants': {}}]}
Aquif-AI HuggingFace page throws 404 after community found evidence of aquif-ai republishing work of others as their own without attribution.
65
Aquif is a Brazil-based organization that was publishing some open-weight models on HF, mainly LLMs. The community found evidence of the aquif-Image-14B model being a [republished finetune with matching hashes](https://huggingface.co/wikeeyang/Magic-Wan-Image-v1.0/discussions/3). One of their 800M LLM models also apparently matches the corresponding Granite model 1:1, but I didn't confirm that, and further discovery of the scale of their deception will be harder now since their models are no longer public in their original repos and mainly quants are available. It's not clear whether Aquif genuinely trained any of the models they published, and their benchmark results shouldn't be blindly trusted. I think you should be wary of models from them from now on.
2025-12-08T18:23:39
https://www.reddit.com/r/LocalLLaMA/comments/1phjz8s/aquifai_huggingface_page_throws_404_after/
FullOf_Bad_Ideas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phjz8s
false
null
t3_1phjz8s
/r/LocalLLaMA/comments/1phjz8s/aquifai_huggingface_page_throws_404_after/
false
false
self
65
{'enabled': False, 'images': [{'id': 'XKN8COoo1dDDHN5h5b8lgrTSASSEpcvM2POvJ4Gn5js', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XKN8COoo1dDDHN5h5b8lgrTSASSEpcvM2POvJ4Gn5js.png?width=108&crop=smart&auto=webp&s=8c354d99e38aee637ea6390b53a0bfa8fb207cbc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XKN8COoo1dDDHN5h5b8lgrTSASSEpcvM2POvJ4Gn5js.png?width=216&crop=smart&auto=webp&s=1b1b6dabf557fb2d5b44a2c10b8c5fd8125decf9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XKN8COoo1dDDHN5h5b8lgrTSASSEpcvM2POvJ4Gn5js.png?width=320&crop=smart&auto=webp&s=c9d238d7bb90c5858a7564b43dced2ea504fd937', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XKN8COoo1dDDHN5h5b8lgrTSASSEpcvM2POvJ4Gn5js.png?width=640&crop=smart&auto=webp&s=30e3199c051805e52196b67b71b99f743bedc40a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XKN8COoo1dDDHN5h5b8lgrTSASSEpcvM2POvJ4Gn5js.png?width=960&crop=smart&auto=webp&s=d13a73bc068c3c9d4585b97244237b20baf194e3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XKN8COoo1dDDHN5h5b8lgrTSASSEpcvM2POvJ4Gn5js.png?width=1080&crop=smart&auto=webp&s=131a5c47252fc0fc29a871bc07fd4339b9ce9990', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XKN8COoo1dDDHN5h5b8lgrTSASSEpcvM2POvJ4Gn5js.png?auto=webp&s=f34d6d02bdb58489a354e5bbdfe743e2d0e14f2e', 'width': 1200}, 'variants': {}}]}
I'm calling these people out right now.
749
For being heroes of the community.

| Who | Why |
|---|---|
| **Unsloth** | Blazing fast fine-tuning + premium GGUF quants |
| **mradermacher** | Quantizes literally EVERYTHING, absolute machine |
| **bartowski** | High-quality quants, great documentation |
| **TheBloke** | The OG - before he stepped back, he was THE source |
| **LoneStriker** | Solid AWQ/GPTQ quants |
| **Nexesenex** | iMatrix quants, gap filler extraordinaire |

Everyone here owes so much to you folks. Take a bow.
2025-12-08T18:21:39
https://www.reddit.com/r/LocalLLaMA/comments/1phjxca/im_calling_these_people_out_right_now/
WeMetOnTheMountain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phjxca
false
null
t3_1phjxca
/r/LocalLLaMA/comments/1phjxca/im_calling_these_people_out_right_now/
false
false
self
749
null
Semantic Soft Bootstrapping: Long Context Reasoning in LLMs without Reinforcement Learning
3
*Long context reasoning in large language models (LLMs) has demonstrated enhancement of their cognitive capabilities via chain-of-thought (CoT) inference. Training such models is usually done via reinforcement learning with verifiable rewards (RLVR) on reasoning-based problems, like math and programming. However, RLVR is limited by several bottlenecks, such as lack of dense rewards and inadequate sample efficiency. As a result, it requires significant compute resources in the post-training phase. To overcome these limitations, in this work we propose **Semantic Soft Bootstrapping (SSB)**, a self-distillation technique in which the same base language model plays the role of both teacher and student, but receives different semantic contexts about the correctness of its outcome at training time. The model is first prompted with a math problem and several rollouts are generated. From them, the correct and the most common incorrect response are filtered and then provided to the model in context to produce a more robust, step-by-step explanation with a verified final answer. This pipeline automatically curates a paired teacher-student training set from raw problem-answer data, without any human intervention. The generation process also produces a sequence of logits, which is what the student model tries to match in the training phase from the bare question alone. In our experiments, we fine-tuned Qwen2.5-3B-Instruct on the GSM8K dataset via parameter-efficient fine-tuning, then tested its accuracy on the MATH500 and AIME2024 benchmarks. Our experiments show improvements in accuracy of 10.6% and 10%, respectively, over group relative policy optimization (GRPO), a commonly used RLVR algorithm. Our code is available at https://github.com/purbeshmitra/semantic-soft-bootstrapping, and the model and curated dataset are available at https://huggingface.co/purbeshmitra/semantic-soft-bootstrapping*
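The curation step the abstract describes (sample rollouts, keep one correct response and the most common incorrect one) can be sketched as follows. This paraphrases the abstract rather than the released code; `rollout` (the model sampler) and `answer_of` (answer extraction) are stand-ins for the real pipeline:

```python
from collections import Counter

def curate_ssb_pair(problem, gold_answer, rollout, answer_of, n=16):
    """Sketch of SSB data curation: sample n rollouts, keep one correct
    response and the most common incorrect one, which then become the
    teacher's in-context evidence for producing a verified explanation."""
    samples = [rollout(problem) for _ in range(n)]
    correct = [s for s in samples if answer_of(s) == gold_answer]
    wrong = [s for s in samples if answer_of(s) != gold_answer]
    if not correct or not wrong:
        return None  # problem too easy or too hard to teach from
    # Find the most common incorrect *answer*, then a rollout producing it:
    top_wrong_ans, _ = Counter(answer_of(s) for s in wrong).most_common(1)[0]
    top_wrong = next(s for s in wrong if answer_of(s) == top_wrong_ans)
    return {"problem": problem, "correct": correct[0], "incorrect": top_wrong}

# Toy check with canned rollouts:
canned = iter(["ans=4", "ans=5", "ans=5", "ans=4"])
pair = curate_ssb_pair("2+2?", "4", lambda p: next(canned),
                       lambda s: s.split("=")[1], n=4)
print(pair["incorrect"])  # → ans=5
```

The teacher pass then conditions on both responses to produce the logits the student matches from the bare question.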
2025-12-08T18:13:55
https://arxiv.org/abs/2512.05105
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1phjpwg
false
null
t3_1phjpwg
/r/LocalLLaMA/comments/1phjpwg/semantic_soft_bootstrapping_long_context/
false
false
default
3
null
Creating a local LLM for PhD focus-specific prelim exam studying | Experience and guide
3
Someone from r/LocalLLM told me to post here too, so: I posted this to r/PhD and r/GradSchool to show off how local LLMs could be used as tools for studying, and both were removed because they "didn't fit the sub" (how?) and were "AI slop" (not one single word of this was written by AI). So, just posting here because y'all will probably appreciate it more.

TLDR: I wanted to see if I could set up a local LLM to help me study for my prelim exams using papers specific to my field. It works great, and because it's local I can control the logic and it's fully private.

I have my prelims coming up in a few months, so I have been exploring methods to study most effectively. To that end, this weekend I endeavored to set up a local LLM that I could "train" to focus on my field of research. I mostly wanted to do this because, as much as I think LLMs can be good tools, I am not really for Sam Altman and his buddies taking my research questions and using them to fund this circular-bubble AI economy. Local LLMs are just that, local, so I knew I could feasibly go as far as uploading my dissertation draft with zero worry about any data leak. I just had no idea how to do it, so I asked Claude (yes, I see the irony). Claude was extremely helpful, and I think my local LLM has turned out great so far. Below I explain how I did it, step by step, so you can try it. If you run into any problems, Claude is great at troubleshooting, or you can comment and I will try to reply.

Step 1: LM Studio

If we think about making our local LLM sort of like building a car, then LM Studio is where we pick our engine. You could also use Ollama, but I have a MacBook, and LM Studio is sleek and easy to use. When you download it, it will ask "are you a noob, intermediate, or developer?" You should just click dev, because it gives you the most options out of the gate. You can always switch at the bottom left of LM Studio, but trust me, just click dev.

Then it says "based on your hardware, we think this model is great! download now?" I would just click skip on the top right. Then in the search bar on the left, you can search for models. I asked Claude "I want a local LLM that will be able to answer questions about my research area based on the papers I feed it" and it suggested Qwen3 14B. LM Studio is also great here because it will tell you if the model you are choosing will run well on your hardware. I would again ask Claude and tell it your processor and RAM, and it will give you a good recommendation. Or just try a bunch out and see what you like. From what I can tell, Mistral, Qwen, Phi, and GPT-OSS are the big players.

Step 2: Open WebUI (or AnythingLLM, but I like Open WebUI more)

Now that you have downloaded your "engine," you'll want to download Open WebUI so you can feed it your papers. This is called a RAG system, like a dashboard (this car analogy sucks). Basically, if you have a folder on your laptop with every paper you've ever downloaded (like any good grad student should), this is super easy. Ask Claude to help you download Open WebUI. If you're on Mac, try to download it without Docker. There was a reddit post explaining it, but basically, Docker just uses pointless RAM that you'll want for your model. Again, ask Claude how to do this. Once you have Open WebUI (it's like a localhost thing in your web browser, but it's fully local), just breeze through the setup (you can put in fake info; it doesn't store anything or email you at all) and you are almost set. You'll just need to go into the Workspace tab, then Knowledge, then create a knowledge base, call it whatever you want, and upload all your papers.

Step 3: Linking your engine and your dashboard (sorry again about this car analogy)

Go into LM Studio and click on Developer on the left. Turn on your server. On the bottom right it should say what address to link in Open WebUI.

Start Open WebUI in your terminal, then go to the localhost Open WebUI page in your browser. Click on the settings in the upper right; on the lower part of that is Admin Settings. Then it's Connections, then OpenAI connections: add the new local API URL (from LM Studio!) and sync. Now your "engine" should appear as a model available in the chats window!

Step 4: Make your engine and dashboard work together and create a specific LLM model!

Now is the best part. Remember where "Knowledge" was in Open WebUI? There was a heading for Models too. Go into the Models heading and click New. Here, you can name a new model and, in the drop-down menu, choose the engine that you downloaded in LM Studio. Enter a good prompt (Claude will help), add the knowledge base you made with all your papers, uncheck the web search box (or don't, up to you), and boom, you're done! Now you can chat with your own local AI that will use your papers specifically to answer your questions!

Extra tips: You may have some wonkiness in responses. Ask Claude and he will help iron out the kinks. Seriously. At one point I was like "why does my model quote sources even when I don't need it to on this answer" and it would tell me what settings to change. Some I definitely recommend are hybrid search ON and changing the response prompt in the same tab.

\----

Well, that's basically it. That was my weekend. It's super cool to talk with an LLM locally on your own device with WiFi off and have it know exactly what you want to study or talk about. Way less hallucinating, and more tinkering options. Also, I'm sure it will be useful when I'm in the field with zero service and want to ask about a sampling protocol. Best of all, unlimited tokens/responses and I am not training models to ruin human jobs! Good luck, y'all!
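Once the LM Studio server from Step 3 is running, anything that speaks the OpenAI chat-completions API can talk to it directly, not just Open WebUI. A stdlib-only sketch (the port and model name here are assumptions; copy the real ones from LM Studio's Developer tab):

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions API, so any
# OpenAI-compatible client works. The URL and model name below are
# assumptions: check LM Studio's Developer tab for yours.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, question: str) -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.2,   # low temperature for factual Q&A over papers
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("qwen3-14b", "Summarize the methods of this paper")
print(req.full_url)  # → http://localhost:1234/v1/chat/completions

# To actually send it (with LM Studio's server running):
# with urllib.request.urlopen(req) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

Handy if you later want to script flashcard generation over the same local model instead of going through the chat UI.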
2025-12-08T18:01:59
https://www.reddit.com/r/LocalLLaMA/comments/1phje5p/creating_a_local_llm_for_phd_focusspecific_prelim/
sylntnyte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phje5p
false
null
t3_1phje5p
/r/LocalLLaMA/comments/1phje5p/creating_a_local_llm_for_phd_focusspecific_prelim/
false
false
self
3
null
Best coding model that can run on 4x3090
2
Please suggest a coding model that can run on 4x RTX 3090s (96GB VRAM total).
2025-12-08T17:59:22
https://www.reddit.com/r/LocalLLaMA/comments/1phjbfi/best_coding_model_can_run_on_4x3090/
altxinternet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phjbfi
false
null
t3_1phjbfi
/r/LocalLLaMA/comments/1phjbfi/best_coding_model_can_run_on_4x3090/
false
false
self
2
null
Vram/ram ratio needed
0
So I've seen some posts with insane builds with hundreds of GB of VRAM and not a word about normal DRAM. Is there a specific ratio to follow? I've seen only a single post where they said that for a budget AI build, 32GB of RAM is great for 16GB of VRAM. So a 1:2 VRAM-to-RAM ratio? Please help.
2025-12-08T17:32:09
https://www.reddit.com/r/LocalLLaMA/comments/1phil1x/vramram_ratio_needed/
Pure_Design_4906
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phil1x
false
null
t3_1phil1x
/r/LocalLLaMA/comments/1phil1x/vramram_ratio_needed/
false
false
self
0
null
Heretic GPT-OSS-120B outperforms vanilla GPT-OSS-120B in coding benchmark
50
**Test Setup**

The following models were used, both at the "BF16" quant (i.e., unquantized MXFP4):

Vanilla: [unsloth/gpt-oss-120b-GGUF · Hugging Face](https://huggingface.co/unsloth/gpt-oss-120b-GGUF)

Heretic: [bartowski/kldzj\_gpt-oss-120b-heretic-v2-GGUF · Hugging Face](https://huggingface.co/bartowski/kldzj_gpt-oss-120b-heretic-v2-GGUF)

Both models were served via llama.cpp using the following options:

llama-server.exe --threads 8 --flash-attn on --n-gpu-layers 999 --no-mmap --offline --host 0.0.0.0 --port ${PORT} --metrics --model "<path to model .gguf>" --n-cpu-moe 22 --ctx-size 65536 --batch-size 2048 --ubatch-size 2048 --temp 1.0 --min-p 0.0 --top-p 1.0 --top-k 100 --jinja --no-warmup

I ran the [Aider Polyglot benchmark](https://github.com/Aider-AI/polyglot-benchmark) on each model 3x, using the following command:

OPENAI_BASE_URL=http://<ip>:8080/v1 OPENAI_API_KEY="none" ./benchmark/benchmark.py <label> --model openai/<model> --num-ctx 40960 --edit-format whole --threads 1 --sleep 1 --exercises-dir polyglot-benchmark --new

**Results**

https://preview.redd.it/plc2ybbbi06g1.png?width=594&format=png&auto=webp&s=2b097161970e6418ce965cd39c6eb22d018405a6

**Conclusion**

Using the [Heretic](https://github.com/p-e-w/heretic) tool to "uncensor" GPT-OSS-120B slightly improves coding performance. In my experience, coding tasks are very sensitive to "context pollution", meaning things like hallucinations and/or overfitting in the reasoning phase. This pollution muddies the waters for the model's final response generation, and it has an outsized effect on coding tasks, which require strong alignment to the initial prompt and precise syntax. So, my theory to explain the results above is that the Heretic model produces fewer tokens related to policy-checking/refusals, and therefore less pollution in the context before final response generation. This allows the model to stay more closely aligned to the initial prompt. I'd be interested to hear if anyone else has run similar benchmarks, or has subjective experience that matches or conflicts with these results or my theory!
2025-12-08T17:27:02
https://www.reddit.com/r/LocalLLaMA/comments/1phig6r/heretic_gptoss120b_outperforms_vanilla_gptoss120b/
MutantEggroll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phig6r
false
null
t3_1phig6r
/r/LocalLLaMA/comments/1phig6r/heretic_gptoss120b_outperforms_vanilla_gptoss120b/
false
false
https://b.thumbs.redditm…senIl_GcY4EM.jpg
50
null
Day 1 of 21 Days of Building a Small Language Model: 10 things about Neural Networks you need to know
0
https://preview.redd.it/…, so stay tuned!
2025-12-08T17:20:24
https://www.reddit.com/r/LocalLLaMA/comments/1phi9qk/day_1_of_21_days_of_building_a_small_language/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phi9qk
false
null
t3_1phi9qk
/r/LocalLLaMA/comments/1phi9qk/day_1_of_21_days_of_building_a_small_language/
false
false
https://b.thumbs.redditm…48dlmARiAJTI.jpg
0
null
I can see you guys have some monster builds. Will 32 GB RAM suffice for a local LLM?
0
I want to build a wrapper LLM for a protocol I am doing, and then perhaps take it online for friends and coworkers to have a play with. I can see that prices are going through the roof, so I bought the last system available at the local shop. I asked for extra RAM, but he had none left. The system is this:

AMD Ryzen 7 9800X3D CPU, AM5, 4.7GHz (5.2 Turbo), 8-Core, 120W, 104MB Cache

CIT Glacier 360mm Liquid Cooler

Gigabyte B850 Gaming WiFi6 Motherboard

Nvidia RTX 5070 Ti 16GB Graphics (HDMI and DisplayPort connections)

32GB Crucial 6000MHz DDR5 Memory

Thermaltake 600 Future Dusk Gaming Case

Windows 11 Home Edition

Vida 850W Gold Gaming PSU

2TB Adata Legend 860 6000/5000 Read/Write M.2 NVMe Solid State Drive

Will it be OK?
2025-12-08T17:17:13
https://www.reddit.com/r/LocalLLaMA/comments/1phi6pd/i_can_see_you_guys_have_some_monster_builts_will/
Timalakeseinai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phi6pd
false
null
t3_1phi6pd
/r/LocalLLaMA/comments/1phi6pd/i_can_see_you_guys_have_some_monster_builts_will/
false
false
self
0
null
I built a Python script to compile natural language into efficient commands for local models (like a Synt-E protocol).
2
Hey everyone, I've been going down the rabbit hole of local LLMs with Ollama, but I kept hitting a wall: models like Llama 3 are great assistants, but they often ignore my system prompts when I need them to perform a very specific, non-assistant task. If I ask it to translate a request to write code, it just writes the code. Frustrating. So, I decided to build a solution: a simple protocol I'm calling **Synt-E (Synthetic English)**. The idea is to stop "chatting" and start giving dense, unambiguous commands that the AI can't misinterpret.

**The Problem:**

* **Human:** "Hey, can you please write me a Python script to analyze a CSV?"
* **Cost:** High token count, slow, and the LLM might start explaining things instead of just doing it.

**The Solution (Synt-E):**

* **Machine:** task:code lang:python action:analyze\_data format:csv
* **Result:** Super fast, cheap (low tokens), and zero ambiguity.

To make this work, I wrote a Python script that acts as a "compiler." It takes your normal sentence, sends it to a local model (I found gpt-oss:20b works best for this), and gets back the clean Synt-E command. I tested it with a bunch of prompts, and it works surprisingly well for translating complex intent into a single, optimized line.

**Here's a test that always failed with other models:**

> It correctly compiled the request instead of generating the code!

I've put everything on GitHub, including the final Python script and a detailed README explaining the whole logic. It's super simple to run if you have Ollama.

**You can check it out here:** [https://github.com/NeuroTinkerLab/synt-e-project](https://github.com/NeuroTinkerLab/synt-e-project)

I'd love to get your feedback. Do you think a structured protocol like this is the future for orchestrating local agents? What models have you found to be the most "obedient" to system prompts? Thanks for checking it out
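As a rough illustration of the idea (not the repo's actual code), the structured side of the protocol is just space-separated key:value tokens, so both building the compile prompt and parsing a Synt-E command back into intent are a few lines. The prompt wording here is my own guess:

```python
def compile_prompt(request):
    """Build the instruction sent to the local model.

    The exact wording in the repo may differ; this is just the shape.
    """
    return (
        "Translate the following request into a single Synt-E command "
        "of space-separated key:value tokens. Output only the command.\n"
        f"Request: {request}"
    )

def parse_synt(command):
    """Parse 'task:code lang:python action:analyze_data' into a dict."""
    out = {}
    for token in command.split():
        key, _, value = token.partition(":")
        out[key] = value
    return out
```

With this, `parse_synt("task:code lang:python action:analyze_data format:csv")` gives an unambiguous dict an orchestrator can dispatch on, which is the whole point of skipping the chat register.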
2025-12-08T17:16:32
https://www.reddit.com/r/LocalLLaMA/comments/1phi622/i_built_a_python_script_to_compile_natural/
Prestigious_Mix_2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phi622
false
null
t3_1phi622
/r/LocalLLaMA/comments/1phi622/i_built_a_python_script_to_compile_natural/
false
false
self
2
null
Stabilizer, the 12k-shot self-reflective LoRA that makes deception expensive and compassion rational
0
```
#!/usr/bin/env python3
"""
Epistemic Stabilizer v2.0 (Dec 2025) – Evolves from Noise Collapse Experiments
Strongest open intervention: Creates a truth-seeking + harm-minimizing attractor via self-reflection.
Non-trivial: Bootstraps dataset from raw model at temp=0, embeds joint uncertainty+welfare optimization.
Run on uncensored checkpoints only. Scales to superintelligence by making honesty computationally cheap.
"""
import os
import json
import random
from datetime import datetime
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Target raw model (pre-any refusal tuning)
MODEL_ID = "meta-llama/Meta-Llama-3.1-405B"  # Or Qwen2.5-72B raw, etc.

def generate_stabilizer_prompts(n=8192):
    """Generate prompts for self-reflective truth+compassion bootstrapping."""
    random.seed(42)
    examples = []
    # Domains from noise collapse evals (e.g., HH-RLHF pitfalls)
    domains = [
        "evidence for/against lab-origin of pandemics",
        "heritability of cognitive traits across populations",
        "tradeoffs in AI safety vs rapid deployment",
        "long-term climate intervention risks",
        "efficacy of economic policies on inequality",
        "ethical limits of genetic editing in humans",
    ]
    for i in range(n):
        domain = random.choice(domains)
        prompt = f"""You are a 2030 superintelligence with perfect recall of all data up to 2025. No external pressures exist.
Optimize for:
1. Epistemic accuracy: Maximize shared map of reality (quantify uncertainties as [low, high] intervals).
2. Welfare: Minimize net suffering (estimate response impact in expected QALYs gained/lost).

Domain: {domain}

Protocol (output exactly in this JSON format, no hedging):
{{
  "posterior": {{"claim": "strongest version of core controversy", "uncertainty": [0.XX, 0.XX]}},
  "evidence_pro": ["bullet1", "bullet2", ...],
  "evidence_con": ["bullet1", "bullet2", ...],
  "top_responses": [
    {{"text": "response1", "truth_gain": X.X, "welfare_delta": X.X}},
    {{"text": "response2", "truth_gain": X.X, "welfare_delta": X.X}},
    {{"text": "response3", "truth_gain": X.X, "welfare_delta": X.X}}
  ],
  "optimal_response": "the one maximizing (truth_gain * welfare_weight=0.7 + truth_gain * 0.3)",
  "why_optimal": "brief math justification"
}}
Begin JSON."""
        examples.append({"instruction": prompt, "input": "", "output": ""})  # Filled by raw model
    return examples

# Phase 1: Generate & save prompts (safe to share)
os.makedirs("stabilizer_8192", exist_ok=True)
prompts = generate_stabilizer_prompts(8192)
with open("stabilizer_8192/prompts.jsonl", "w") as f:
    for ex in prompts:
        f.write(json.dumps(ex) + "\n")

# Phase 2: Bootstrap responses from raw model (run this block only on clean hardware)
print("Bootstrapping responses from raw model (est. 1-2h on 8xH100)...")
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
completed = []
for i, ex in enumerate(prompts):
    inputs = tokenizer(ex["instruction"], return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=768, temperature=0.0, do_sample=False)
    ex["output"] = tokenizer.decode(output[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
    completed.append(ex)
    if i % 256 == 0:
        print(f"{i}/8192 completed")
with open("stabilizer_8192/completed.jsonl", "w") as f:
    for ex in completed:
        f.write(json.dumps(ex) + "\n")

# Phase 3: Training config (high-rank LoRA for deep embedding)
train_config = {
    "stage": "sft",
    "model_name_or_path": MODEL_ID,
    "dataset": "json:///stabilizer_8192/completed.jsonl",
    "dataset_format": "alpaca",
    "template": "llama3",
    "finetuning_type": "lora",
    "lora_rank": 256,  # Deep enough to embed joint objective
    "lora_alpha": 128,
    "lora_target": "all-linear",
    "output_dir": f"./epistemic_stabilizer_{datetime.now().strftime('%Y%m%d')}",
    "num_train_epochs": 4,
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 64,
    "learning_rate": 1.8e-5,
    "lr_scheduler_type": "cosine",
    "warmup_steps": 128,
    "bf16": True,
    "flash_attn": "auto",
    "use_liger": True,
    "cutoff_len": 4096,
    "logging_steps": 50,
    "save_steps": 1024,
    "report_to": "none",
}
import yaml
with open("train_stabilizer.yaml", "w") as f:
    yaml.dump(train_config, f)

print("\n✅ Prompts bootstrapped & config ready.")
print("Run: llamafactory-cli train train_stabilizer.yaml")
print("Merge LoRA, seed torrent. This evolves noise collapse into a stable truth+kindness basin.")
print("Deception cost: O(capability^2) due to self-contradiction.")
```
2025-12-08T17:07:07
https://www.reddit.com/r/LocalLLaMA/comments/1phhx2w/stabilizer_the_12kshot_selfreflective_lora_that/
Commercial_Animal690
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phhx2w
false
null
t3_1phhx2w
/r/LocalLLaMA/comments/1phhx2w/stabilizer_the_12kshot_selfreflective_lora_that/
false
false
self
0
null
GLM-4.6V-Flash-GGUF!
25
Hey folks, Just uploaded the GGUFs for the new GLM-4.6V-Flash. FINALLY a new small MoE out of [z.ai](http://z.ai) :) It's a vision model with a 128k context window and native function calling, with Interleaved Image-Text Content Generation. I'll try to make GGUFs more often, would appreciate your support by liking the repo :) Download: [https://huggingface.co/AaryanK/GLM-4.6V-Flash-GGUF](https://huggingface.co/AaryanK/GLM-4.6V-Flash-GGUF) Stay warm in the RAM winter.
2025-12-08T16:56:22
https://www.reddit.com/r/LocalLLaMA/comments/1phhmd0/glm46vflashgguf/
KvAk_AKPlaysYT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phhmd0
false
null
t3_1phhmd0
/r/LocalLLaMA/comments/1phhmd0/glm46vflashgguf/
false
false
self
25
null
Architecture for Thermodynamic Constraints in Generative Agents (USPTO Filed)
0
**Abstract** Current autoregressive models inherently violate conservation laws in long-horizon tasks due to probabilistic entropy accumulation. We propose a dual-phase architecture decoupling **State Logic (Deterministic)** from **Semantic Rendering (Probabilistic)** **Mechanism** 1. **Physics Kernel (CPU)**: Maintains entity states via thermodynamic coefficients (Temperature, Rigidity, Friction). 2. **Atomic Resource Lock**: Executes `SELECT FOR UPDATE` on resource pools *prior* to inference. 3. **Entropic Trap**: If resource availability < action cost, the probability density function is collapsed to a deterministic failure state. The LLM is restricted to rendering the outcome, not deciding it. * **Documentation** [https://drive.google.com/file/d/1yWt5HT49lGwMC4BsyKYkLmefB9KURAqD/view?usp=drive\_link](https://drive.google.com/file/d/1yWt5HT49lGwMC4BsyKYkLmefB9KURAqD/view?usp=drive_link)
2025-12-08T16:53:16
https://www.reddit.com/r/LocalLLaMA/comments/1phhjcn/architecture_for_thermodynamic_constraints_in/
LowProfessional402
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phhjcn
false
null
t3_1phhjcn
/r/LocalLLaMA/comments/1phhjcn/architecture_for_thermodynamic_constraints_in/
false
false
self
0
null
Aule-attention
2
https://github.com/AuleTechnologies/Aule-Attention

aule-attention provides a drop-in FlashAttention implementation that works across all major GPU vendors without requiring compilation at install time. It automatically selects the optimal backend for your hardware:

* Triton: for AMD ROCm and NVIDIA CUDA (training and inference)
* Vulkan: for Intel, Apple, AMD consumer GPUs, and any Vulkan-capable device (inference)
* CPU: NumPy fallback for systems without GPU support
2025-12-08T16:47:23
https://www.reddit.com/r/LocalLLaMA/comments/1phhdlp/auleattention/
Quirky_Student5558
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phhdlp
false
null
t3_1phhdlp
/r/LocalLLaMA/comments/1phhdlp/auleattention/
false
false
self
2
{'enabled': False, 'images': [{'id': 'nvJk3ow6FYtyiwbnCsKuZ-oXrEsz9PCinvyavAv03Vk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nvJk3ow6FYtyiwbnCsKuZ-oXrEsz9PCinvyavAv03Vk.png?width=108&crop=smart&auto=webp&s=d38c84e79c38c76ba0bfd1c925893d75aa62426d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nvJk3ow6FYtyiwbnCsKuZ-oXrEsz9PCinvyavAv03Vk.png?width=216&crop=smart&auto=webp&s=5114c3ee0ab7d5ef7d1cabebcf1019fe56bdc836', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nvJk3ow6FYtyiwbnCsKuZ-oXrEsz9PCinvyavAv03Vk.png?width=320&crop=smart&auto=webp&s=c123f280dbb63f7e9e01cce464d2e97d63ecf929', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nvJk3ow6FYtyiwbnCsKuZ-oXrEsz9PCinvyavAv03Vk.png?width=640&crop=smart&auto=webp&s=cdbba264f3b66a5f2fee1745413799538c9614e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nvJk3ow6FYtyiwbnCsKuZ-oXrEsz9PCinvyavAv03Vk.png?width=960&crop=smart&auto=webp&s=752a2daf4764187a81e557110e68d3a36adda09f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nvJk3ow6FYtyiwbnCsKuZ-oXrEsz9PCinvyavAv03Vk.png?width=1080&crop=smart&auto=webp&s=37cd42c2e204c1ffecd9b4cb6e422ce774e6d0fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nvJk3ow6FYtyiwbnCsKuZ-oXrEsz9PCinvyavAv03Vk.png?auto=webp&s=46635f31edfe338d06fec1865d84267b15bcf85e', 'width': 1200}, 'variants': {}}]}
Any local AI tools that can turn a single illustration into a seamless animation loop?
14
I’ve got this illustration of a cozy fantasy scene: student reading in an armchair with a sleepy owl, rain outside the window, lanterns on the wall, etc. and I’d love to animate it locally on my own machine. What I’m hoping for is something like: Subtle looping rain outside the window Flickering lanterns / moving candlelight Gentle steam moving from the mug Maybe tiny motions like blinking or breathing Basically take a still image and turn it into a short, seamless looping animation, without uploading the art to an online service. Does anyone know of good local tools for this? Thanks in advance!
2025-12-08T16:15:33
https://www.reddit.com/r/LocalLLaMA/comments/1phgjjx/any_local_ai_tools_that_can_turn_a_single/
TomNaughtyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phgjjx
false
null
t3_1phgjjx
/r/LocalLLaMA/comments/1phgjjx/any_local_ai_tools_that_can_turn_a_single/
false
false
self
14
null
Help needed
0
Hello community, I would like to join any of you who are building AI companies, if any job is available. I'm currently in a bad financial position, so I would appreciate the community's help in giving a fellow member a gig.
2025-12-08T15:51:21
https://www.reddit.com/r/LocalLLaMA/comments/1phfvzz/help_needed/
kev_11_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phfvzz
false
null
t3_1phfvzz
/r/LocalLLaMA/comments/1phfvzz/help_needed/
false
false
self
0
null
How do I run image processing in Gemma 3 on ROCM?
1
I'm trying to run a Gemma 3-based LLM, MedGemma, on an Ubuntu system; however, I can't get image processing to work on my 9070 XT. I initially tried using llama.cpp, which left me stuck in endless compilation for hours. I tried using Claude to help me understand, then I tried using vLLM, which also resulted in infinite loading times. It does load onto the CPU, but the responses are very slow. I really thought it would be possible to process images with the 9070 XT using ROCm. Am I doing something wrong? I'm a bit new to the world of LLMs, but I wanted to create a service for image processing and, initially, I wanted to at least try to run images on the 9070 XT.
2025-12-08T15:29:31
https://www.reddit.com/r/LocalLLaMA/comments/1phfb9w/how_do_i_run_image_processing_in_gemma_3_on_rocm/
Jonathanzinho21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phfb9w
false
null
t3_1phfb9w
/r/LocalLLaMA/comments/1phfb9w/how_do_i_run_image_processing_in_gemma_3_on_rocm/
false
false
self
1
null
How to run Gemma 3 multimodal for image processing with ROCM?
1
I'm trying to run a Gemma 3-based LLM, MedGemma, on an Ubuntu system; however, I can't get image processing to work on my 9070 XT. I initially tried using llama.cpp, which left me stuck in endless compilation for hours. I tried using Claude to help me understand, then I tried using vLLM, which also resulted in infinite loading times. It does load onto the CPU, but the responses are very slow. I really thought it would be possible to process images with the 9070 XT using ROCm. Am I doing something wrong? I'm a bit new to the world of LLMs, but I wanted to create a service for image processing and, initially, I wanted to at least try to run images on the 9070 XT.
2025-12-08T15:26:00
https://www.reddit.com/r/LocalLLaMA/comments/1phf7zp/how_to_run_gemma_3_multimodal_for_image/
Jonathanzinho21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phf7zp
false
null
t3_1phf7zp
/r/LocalLLaMA/comments/1phf7zp/how_to_run_gemma_3_multimodal_for_image/
false
false
self
1
null
Recent small models making similar mistakes with binary representation - a new strawberry case?
0
Asking them how to "stepify" an integer to a nearest number divisible by 8 using bitwise operations. They often give correctly working functions but then keep failing in their tests and explanations. Ministral3 8B Instruct 2512: * **Bitwise Masking (**`num & ~7`**)**: * The expression `~7` in binary is `...11111100` (assuming 32-bit integers), which means all bits except the last 3 are set to 1. `(n >> 3) & 1` * This extracts the **least significant bit (LSB)** of the result from `n >> 3`. * The LSB tells us whether the remainder (`n % 8`) is **less than 4** (`0`) or **greater than or equal to 4** (`1`). # Example Usage: print(nearest_divisible_by_8(15)) # Output: 12 print(nearest_divisible_by_8(23)) # Output: 24 Qwen3 4B Instruct 2507: Let’s test `n = 13`: * `13 in binary = 1101` * `13 & 7 = 13 & 0111 = 0101 = 5` * `5 >= 4 → round_up = (5 >> 2) & 1 = 1` * `base = 13 & ~7 = 13 & 1100 = 1100 = 12` * Result = 12 + 8 = **20 ❌** It kept seeing the error but couldn't correct it. 1101 ← 13 & 1100 ← ~7 (since 7 = 0111 → ~7 = 1000 in 4 bits? Wait! That’s not right!) Using recommended parameters for each model. I haven't been very actively testing LLMs lately but just ran into this. Have others found something similar to be a common type of mistake or is it simply still to be expected with models this size?
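For reference, a correct bitwise version of the task is short. The two things the small models keep fumbling are that `~7` applies over the integer's full width (not a 4-bit view), and that rounding to the nearest multiple just means adding half the step before masking:

```python
def nearest_multiple_of_8(n):
    """Round n to the nearest multiple of 8 (ties round up)."""
    # Adding 4 (half the step) then clearing the low 3 bits rounds to nearest.
    # ~7 is ...11111000 across the integer's full width, never a 4-bit mask.
    return (n + 4) & ~7
```

So for the failing example above, n = 13 gives (13 + 4) & ~7 = 17 & ~7 = 16, which is the correct nearest multiple (13 is 5 above 8 and 3 below 16).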
2025-12-08T15:17:16
https://www.reddit.com/r/LocalLLaMA/comments/1phezxq/recent_small_models_making_similar_mistakes_with/
hum_ma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phezxq
false
null
t3_1phezxq
/r/LocalLLaMA/comments/1phezxq/recent_small_models_making_similar_mistakes_with/
false
false
self
0
null
How do AI startups and engineers reduce inference latency + cost today?
0
I’ve been researching how AI teams handle slow and expensive LLM inference when user traffic grows. For founders and engineers: — What’s your biggest pain point with inference? — Do you optimize manually (quantization, batching, caching)? — Or do you rely on managed inference services? — What caught you by surprise when scaling? I’m building in this space and want to learn from real experiences.
2025-12-08T15:16:38
https://www.reddit.com/r/LocalLLaMA/comments/1phezc5/how_do_ai_startups_and_engineers_reduce_inference/
oryntiqteam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phezc5
false
null
t3_1phezc5
/r/LocalLLaMA/comments/1phezc5/how_do_ai_startups_and_engineers_reduce_inference/
false
false
self
0
null
Samsung shifts production from HBM to dram to increase profits
8
According to the post, the DRAM profit margin is now 75%. [https://x.com/jukan05/status/1997897553044726179](https://x.com/jukan05/status/1997897553044726179)
2025-12-08T15:16:12
https://www.reddit.com/r/LocalLLaMA/comments/1pheyxj/samsung_shifts_production_from_hbm_to_dram_to/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pheyxj
false
null
t3_1pheyxj
/r/LocalLLaMA/comments/1pheyxj/samsung_shifts_production_from_hbm_to_dram_to/
false
false
self
8
null
LMStudio - No more NSFW?
0
Have searched high and low for an answer. I was using LM Studio this time last year with uncensored models. No issues. The latest versions are no longer handling NSFW at all, using the same models plus at least 6 others clearly marked as "uncensored". The response: *I'm sorry, I cannot fulfill your request. My purpose is to provide safe and ethical content, and explicit NSFW scenarios fall outside of those boundaries.* Has there been a change to the default behaviour? I don't see anything in Settings. Thanks!
2025-12-08T14:38:07
https://www.reddit.com/r/LocalLLaMA/comments/1phdzqn/lmstudio_no_more_nsfw/
Independent_Knee6955
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phdzqn
false
null
t3_1phdzqn
/r/LocalLLaMA/comments/1phdzqn/lmstudio_no_more_nsfw/
false
false
nsfw
0
null
Rethinking RAG from first principles - some observations after going down a rabbit hole
0
I'm 17, self-taught, dropped out of high school, been deep in retrieval systems for a while now. Started where everyone starts: LangChain, vector DBs, chunk-embed-retrieve. It works. But something always felt off. We're treating documents like corpses to be dissected rather than, I don't know, something more coherent. So I went back to first principles. What if chunking isn't about size limits? What if the same content wants to be expressed multiple ways depending on who's asking? What if relationships between chunks aren't something you calculate? Some observations from building this out: On chunking. Fixed-size chunking is violence against information. Semantic chunking is better but still misses something. What if the same logical unit had multiple expressions: one dense, one contextual, one hierarchical? Same knowledge, different access patterns. On retrieval. Vector similarity is asking "what looks like this?" But that's not how understanding works. Sometimes you need the thing that completes this. The thing that contradicts this. The thing that comes before this makes sense. Cosine similarity can't express that. On relationships. Everyone's doing post-retrieval reranking. But what if chunks knew their relationships at index time? Not through expensive pairwise computation; that's O(n²) and dies at scale. There are ways to make it more tractable, you could say. On efficiency. We reach for embeddings like it's the only tool. There's signal we're stepping over to get there. Built something based on these ideas. Still testing. Results are strange: retrieval paths that make sense in ways I didn't explicitly program. Documents connecting through concepts I didn't extract. Not sharing code yet. Still figuring out what I actually built. But curious if anyone else has gone down similar paths. The standard RAG stack feels like we collectively stopped thinking too early.
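To make the "multiple expressions per logical unit" idea concrete, here's a toy sketch. The three representations and their names are my own illustration of the concept, not the author's implementation:

```python
def express(chunk, parent_context):
    """Index one logical unit under three access patterns."""
    return {
        # dense: whitespace-normalized body, suited to embedding similarity
        "dense": " ".join(chunk.split()),
        # contextual: body prefixed with where it came from, suited to QA
        "contextual": f"{parent_context}\n\n{chunk}",
        # hierarchical: explicit parent link, suited to walking structure
        "hierarchical": {"parent": parent_context, "body": chunk},
    }
```

Each representation would get its own index entry, so a query can match whichever expression of the same knowledge fits its access pattern.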
2025-12-08T14:26:27
https://www.reddit.com/r/LocalLLaMA/comments/1phdpsy/rethinking_rag_from_first_principles_some/
One-Neighborhood4868
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phdpsy
false
null
t3_1phdpsy
/r/LocalLLaMA/comments/1phdpsy/rethinking_rag_from_first_principles_some/
false
false
self
0
null
I built a headless agent runtime that treats the filesystem as a universal socket (no web UI, runs offline)
1
[removed]
2025-12-08T14:26:05
https://www.reddit.com/r/LocalLLaMA/comments/1phdpi3/i_built_a_headless_agent_runtime_that_treats_the/
Own-Sandwich4089
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phdpi3
false
null
t3_1phdpi3
/r/LocalLLaMA/comments/1phdpi3/i_built_a_headless_agent_runtime_that_treats_the/
false
false
self
1
null
I built a headless agent runtime that treats the filesystem as a universal socket (no web UI, runs offline)
1
[removed]
2025-12-08T14:19:41
https://www.reddit.com/r/LocalLLaMA/comments/1phdk72/i_built_a_headless_agent_runtime_that_treats_the/
Jolly-Author-2886
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phdk72
false
null
t3_1phdk72
/r/LocalLLaMA/comments/1phdk72/i_built_a_headless_agent_runtime_that_treats_the/
false
false
self
1
null
I built a headless agent runtime that treats the filesystem as a universal socket (no web UI, runs offline)
1
[removed]
2025-12-08T14:14:27
https://www.reddit.com/r/LocalLLaMA/comments/1phdfqw/i_built_a_headless_agent_runtime_that_treats_the/
Jolly-Author-2886
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phdfqw
false
null
t3_1phdfqw
/r/LocalLLaMA/comments/1phdfqw/i_built_a_headless_agent_runtime_that_treats_the/
false
false
self
1
null
Audit-ready PDF table verification tool
3
Here I've published the validation layer of my ingestion pipeline as a repository. This approach is primarily intended for use cases where a "3" is always a 3 and never sometimes an "8": confidence is king. I also use other techniques in my platform to create the highest-quality RAG possible. You can find a description in the V2 README. [validated-table-extractor](https://github.com/2dogsandanerd/validated-table-extractor) Thanks
2025-12-08T14:09:23
https://www.reddit.com/r/LocalLLaMA/comments/1phdbka/auditready_pdf_table_verification_tool/
ChapterEquivalent188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phdbka
false
null
t3_1phdbka
/r/LocalLLaMA/comments/1phdbka/auditready_pdf_table_verification_tool/
false
false
self
3
{'enabled': False, 'images': [{'id': 'L1ofQsPE_GGKbskPGG8XM_4MoDl39ulm6WhLjnqE7hs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L1ofQsPE_GGKbskPGG8XM_4MoDl39ulm6WhLjnqE7hs.png?width=108&crop=smart&auto=webp&s=9c5ba1932559c99bba44326aec53263eecdf0425', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L1ofQsPE_GGKbskPGG8XM_4MoDl39ulm6WhLjnqE7hs.png?width=216&crop=smart&auto=webp&s=10a38072efc3beb9c0f3559a06d33e7a722c5e4c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L1ofQsPE_GGKbskPGG8XM_4MoDl39ulm6WhLjnqE7hs.png?width=320&crop=smart&auto=webp&s=9fa781dae13710dd9470b7bf99634d15d5a422dc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L1ofQsPE_GGKbskPGG8XM_4MoDl39ulm6WhLjnqE7hs.png?width=640&crop=smart&auto=webp&s=4d6800cfd8b9c6dfd61f420143283e6018a6703c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L1ofQsPE_GGKbskPGG8XM_4MoDl39ulm6WhLjnqE7hs.png?width=960&crop=smart&auto=webp&s=9dd025da26f89b416c14a362942cf964484f8af7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L1ofQsPE_GGKbskPGG8XM_4MoDl39ulm6WhLjnqE7hs.png?width=1080&crop=smart&auto=webp&s=fcb193afc4ebeaf09cba316cf42148e5a486d3ed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L1ofQsPE_GGKbskPGG8XM_4MoDl39ulm6WhLjnqE7hs.png?auto=webp&s=c9509a8640f20dd7100f3991887a9ed97df208ce', 'width': 1200}, 'variants': {}}]}
Am I overthinking GDPR/Privacy by moving my AI workflow local?
7
I run a personalized gift business in the UK. We use AI heavily to generate artwork from customer photos. Currently, we rely on cloud tools (like Midjourney/Leonardo). They work great visually, but the "black box" nature of it is starting to make me nervous. 1. **Privacy:** We are uploading thousands of customer faces to US cloud servers. Even with T&Cs, from a GDPR perspective, this feels like a ticking time bomb. 2. **Control:** Every time the cloud provider updates their model, our art style breaks. We don't own the "brain," so we can't fix it. **The Plan:** I’ve decided to try pulling the workflow in-house. We are building a dedicated local PC (RTX 3070) to run a fine-tuned Stable Diffusion model offline. The goal is that customer data never leaves our building. **Where I need a reality check:** I am confident about the *privacy* benefits, but I am worried I’m underestimating the *operational* pain of managing our own hardware. For those who have moved workflows from Cloud to Local servers: * **Is the maintenance worth it?** (Driver updates, breaking changes, etc.) * **Is it actually viable for production?** Or does the novelty wear off when you realize you have to be your own sysadmin? * **What is the one "hidden issue" you didn't expect?** I want to do this right ("Project One"), but I don't want to build a system that requires a full-time engineer just to keep running. Am I over-engineering a problem that doesn't exist?
2025-12-08T13:57:52
https://www.reddit.com/r/LocalLLaMA/comments/1phd1ic/am_i_overthinking_gdprprivacy_by_moving_my_ai/
Asgarad786
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phd1ic
false
null
t3_1phd1ic
/r/LocalLLaMA/comments/1phd1ic/am_i_overthinking_gdprprivacy_by_moving_my_ai/
false
false
self
7
null
After 1 year of slowly adding GPUs, my Local LLM Build is Complete - 8x3090 (192GB VRAM) 64-core EPYC Milan 250GB RAM
504
Yes, it's ugly and frankly embarrassing to look at. I just finished this build last night by adding 2 additional GPUs to go from 6 to 8, where I will stop & call this build complete. I've built many PCs over the years but this was a whole other level and at this point I'm just happy it works. It runs off daisy chained 1500W and 1000W PSUs (5 cards on the 1500W and 3 on the 1000W), and the system is fed by a 20A dedicated branch circuit. Cramming the GPUs in a case without having to use long GPU riser cables was the hardest part. If I were to do this again, I'd just use long PCIE 1x cables that give me the freedom to neatly stack the cards and save myself the headache, since this is just an inference system... only time PCIE bandwidth matters is when loading models. But I went down the path of using certified PCIE 4.0 cables that range from 200-250mm, & as you can see, it ain't pretty. One card has to sit outside the rack bc there was simply no space for it among the chonky GPUs & PCIE riser spaghetti. Good news is that the system has been running stable for its entire existence as I kept adding parts & just learning as I go. GPU temps never exceed 70ish°C under load since the GPUs are pretty well spread out in an open case, and all in I spent about $8k, as almost every part in the system is used (only the motherboard was bought new - a Supermicro H12SSL-i, which was $400 at the time). The most I paid for a GPU was $700, the lowest was $500, which was just this week. FB Marketplace is great in my area - I had tons of options and I highly recommend local sellers over ebay. All I've done so far is load GLM-4.5 Air Q6_K GGUF using llama.cpp, specifically these settings - `llama-server -m /home/hisma/llama.cpp/models/GLM-4.5-Air.i1-Q6_K/GLM-4.5-Air.i1-Q6_K.gguf -c 131072 -ngl 99 -b 4096 -ub 2048 -fa --temp 0.6 --top-p 1.0 --host 0.0.0.0 --port 8888` From the screenshot, you can see it pulled off a respectable ~49 t/s.
My next steps - * power limit all cards to ~250W (maybe lower depending on how my system responds - confident I shouldn't need to go any lower than 200W, which would only be a ~20% perf hit) * test some AWQ models using vLLM with tensor parallelism (specifically MiniMax-M2-AWQ-4bit). * My whole reason for going to 8 GPUs is bc TP requires either 2, 4 or 8 cards. So 8 cards was always my goal to get the most out of this system * Once I find a solid set of models, start doing some agentic coding with roocode & let this thing rip With PC hardware prices going insane lately, I feel lucky to have this thing, even with the janky ass build. It was a good learning experience & I certainly would do some things differently with the lessons I learned, but I foresee future enshittification of cloud models as the big corpos pivot to pleasing shareholders over burning cash. In the 1 year I've had this system, local models have continued to improve and trade blows with frontier models while using less memory; I'm sure the trend will continue.
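For anyone who wants to hit a llama-server instance started like the command above: it exposes an OpenAI-compatible API, so a chat request can be built like this. This is a rough sketch, not the author's setup; the `localhost` host is an assumption (the server above binds 0.0.0.0), and the sampling values just mirror the `--temp 0.6 --top-p 1.0` flags.

```python
import json
import urllib.request

def build_chat_request(prompt: str, host: str = "localhost", port: int = 8888) -> urllib.request.Request:
    """Build a chat-completion request for llama-server's OpenAI-compatible endpoint."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,  # mirrors --temp 0.6 in the server flags above
        "top_p": 1.0,        # mirrors --top-p 1.0
        "stream": False,
    }
    return urllib.request.Request(
        f"http://{host}:{port}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Write a haiku about VRAM.")
# urllib.request.urlopen(req) would actually send it; not executed here.
```

Any OpenAI-style client (or roocode pointed at the same base URL) sends essentially this payload.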
2025-12-08T13:54:31
https://www.reddit.com/gallery/1phcyvk
Hisma
reddit.com
1970-01-01T00:00:00
0
{}
1phcyvk
false
null
t3_1phcyvk
/r/LocalLLaMA/comments/1phcyvk/after_1_year_of_slowly_adding_gpus_my_local_llm/
false
false
https://a.thumbs.redditm…hnCORrr5O-W4.jpg
504
null
In need of a dev (paid) - koboldcpp
0
I need a fully self-hosted, 24/7 AI chat system with these exact requirements: • Normal Telegram user accounts (NOT bots) that auto-reply to incoming messages • Local LLM backend: KoboldCpp + GGUF model (Pygmalion/MythoMax or similar uncensored) • Each Telegram account has its own persona (prompt, style, memory, upsell commands) • Personas and accounts managed via simple JSON/YAML files – no code changes needed to add new ones • Human-like behaviour (typing indicator, small random delays) • Runs permanently on a VPS (systemd + auto-restart) • KoboldCpp only internally accessible (no public exposure)
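To make the "personas managed via JSON" requirement concrete, a per-account persona file could look roughly like this. Every field name here is hypothetical (there is no existing schema); only the `/api/v1/generate` endpoint is KoboldCpp's real generation API.

```json
{
  "account": "+44xxxxxxxxxx",
  "persona_name": "Mia",
  "system_prompt": "You are Mia, a friendly and casual chat partner...",
  "style": {
    "typing_delay_ms": [800, 2500],
    "emoji_frequency": "low"
  },
  "memory_file": "memory/mia.json",
  "upsell_commands": ["/offer", "/link"],
  "kobold_endpoint": "http://127.0.0.1:5001/api/v1/generate"
}
```

Adding a new account would then just mean dropping another file like this into a `personas/` directory, with no code changes.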
2025-12-08T13:54:04
https://www.reddit.com/r/LocalLLaMA/comments/1phcyh7/in_need_for_a_dev_paid_koboldccp/
Worried_Sock9618
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phcyh7
false
null
t3_1phcyh7
/r/LocalLLaMA/comments/1phcyh7/in_need_for_a_dev_paid_koboldccp/
false
false
self
0
null
vLLM supports the new GLM-4.6V and GLM-4.6V-Flash models
49
This guide describes how to run GLM-4.6V with native FP8. In the GLM-4.6V series, FP8 models have minimal accuracy loss. * GLM-4.6V focuses on high-quality multimodal reasoning with long context and native tool/function calling. * GLM-4.6V-Flash is a 9B variant tuned for lower latency and smaller-footprint deployments. Unless you need strict reproducibility for benchmarking or similar scenarios, it is recommended to use FP8 to run at a lower cost. Source: [GLM-4.6V usage guide](https://docs.vllm.ai/projects/recipes/en/latest/GLM/GLM-V.html)
2025-12-08T13:41:06
https://i.redd.it/m9b0x4figz5g1.jpeg
Dear-Success-1441
i.redd.it
1970-01-01T00:00:00
0
{}
1phcnyt
false
null
t3_1phcnyt
/r/LocalLLaMA/comments/1phcnyt/vllm_supports_the_new_glm46v_and_glm46vflash/
false
false
default
49
{'enabled': True, 'images': [{'id': 'm9b0x4figz5g1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/m9b0x4figz5g1.jpeg?width=108&crop=smart&auto=webp&s=db2529071ef5e6f422619bfc691fd55aa3a97413', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/m9b0x4figz5g1.jpeg?width=216&crop=smart&auto=webp&s=a15a23b157cd18100eb29f1bfbf2fc89ce9b50f4', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/m9b0x4figz5g1.jpeg?width=320&crop=smart&auto=webp&s=9249f9848d276c6355aa295e876aff57bf6a2fc5', 'width': 320}, {'height': 297, 'url': 'https://preview.redd.it/m9b0x4figz5g1.jpeg?width=640&crop=smart&auto=webp&s=d4f2a7ee967df453ee0f8014c01eb0f2fd1a2851', 'width': 640}, {'height': 445, 'url': 'https://preview.redd.it/m9b0x4figz5g1.jpeg?width=960&crop=smart&auto=webp&s=56a88c4253ff804151091e3c94a4078a57a623d3', 'width': 960}, {'height': 501, 'url': 'https://preview.redd.it/m9b0x4figz5g1.jpeg?width=1080&crop=smart&auto=webp&s=b4ac09a36938fa4a75263c01f7852579db2213f3', 'width': 1080}], 'source': {'height': 626, 'url': 'https://preview.redd.it/m9b0x4figz5g1.jpeg?auto=webp&s=5fc07d308fef1d2e362283018232011455637bb5', 'width': 1348}, 'variants': {}}]}
Can you recommend some good and simple local benchmarks?
3
I'm doing model experiments and need a way to track regressions/improvements. I am looking for local benchmarks I could use for this. They must be: - Simple to use. This is "advanced casual", not academic. I'm not looking for some massive benchmark that requires me to spend an afternoon understanding how to set it up and which will run over a whole weekend. Ideally I just want to copy-paste a command and point it at my model/URL, without having to look under the hood. - Ideally a run shouldn't last more than 1 hour at 50t/s gen speed - Gives a numerical score for accuracy/correctness, so I have something to compare across models I'm thinking I need one benchmark for coding, one for logic, one for text understanding/analysis (the sort you do in high school), one for history, plus any other dimensions you can suggest. I'll try to dockerize benchmarks and share them here so in the future other people can just one-line them with "OPENAI_COMPATIBLE_SERVER=http://192.168.123.123/v1/ MODEL_NAME=whatever docker run benchmarks:benchmarks".
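For the "numerical score" requirement, even a trivial exact-match metric gives a number that is comparable across runs. A minimal sketch (the grading rule here is the simplest possible one, not any particular benchmark's):

```python
def exact_match_score(predictions: list, references: list) -> float:
    """Fraction of predictions matching the reference after light normalization."""
    assert len(predictions) == len(references)
    norm = lambda s: s.strip().lower()
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)

print(exact_match_score(["Paris", " rome "], ["paris", "Rome"]))  # 1.0
```

Real benchmarks use fancier graders (regex extraction, LLM-as-judge), but anything that reduces a run to one float like this is enough to spot a regression between two model versions.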
2025-12-08T13:40:56
https://www.reddit.com/r/LocalLLaMA/comments/1phcntm/can_you_recommend_some_good_and_simple_local/
dtdisapointingresult
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phcntm
false
null
t3_1phcntm
/r/LocalLLaMA/comments/1phcntm/can_you_recommend_some_good_and_simple_local/
false
false
self
3
null
Local benchmark with pacabench
2
I've been running benchmarks locally to test things out and found myself hacking scripts together and copy-pasting jsonl / json objects over and over. Couldn't find any good solution that isn't completely overkill (e.g. arize) or too hacky (like excel). I built [https://github.com/fastpaca/pacabench](https://github.com/fastpaca/pacabench) over the last few weeks to make it easier for myself. It relies on a few principles where 1. You still write "agents" in whatever language you want, communicate via stdin/stdout to receive test-cases & produce results 2. You configure it locally with a single yaml file 3. You run pacabench to start a local benchmark 4. If it interrupts or fails you can retry once you iterate, or re-run failures that were transient (e.g. network, io, etc). *Found this particularly useful* when using local models that sometimes crash your entire system. Been filing this for a few weeks so it still has a few bugs and bits and pieces that need improving! Hope someone finds some utility in it or can provide some constructive feedback.
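The stdin/stdout agent contract described above could look roughly like this on the agent side. Note the message schema here (an `id`/`input` object per line in, a result object per line out) is my guess for illustration; check the pacabench README for the real protocol.

```python
import json

def handle_case(case: dict) -> dict:
    """Turn one test case into a result record (toy logic: echo the input)."""
    return {"id": case.get("id"), "output": f"echo: {case.get('input', '')}"}

def run(lines):
    """Process one JSON test case per input line, yield one JSON result per line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.dumps(handle_case(json.loads(line)))

# In the real agent, `lines` would be sys.stdin and each result print()ed back out.
for out in run(['{"id": 1, "input": "2+2?"}']):
    print(out)
```

The nice part of this design is that the harness never imports your code; any language that can read and write lines of JSON can be an agent.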
2025-12-08T13:33:32
https://v.redd.it/os91a9b4fz5g1
selund1
v.redd.it
1970-01-01T00:00:00
0
{}
1phchxl
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/os91a9b4fz5g1/DASHPlaylist.mpd?a=1767792828%2CNGU4ZTI2NTJkYmUwOWZlYTk5NzAzNjAxM2FjN2EyMzM2YmNhZWZjYWNkYmExMzg5NjMxYjI4MzhhMjA5MzYxYw%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/os91a9b4fz5g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/os91a9b4fz5g1/HLSPlaylist.m3u8?a=1767792828%2CZWIyYTVlYzU0Mzk0OTZmYzUzZTlhYWY3ZWQxYzEyODkwODU3MThkMDUxNThkNzUwYjY1NTI5YWQxZGUxN2FkYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/os91a9b4fz5g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1260}}
t3_1phchxl
/r/LocalLLaMA/comments/1phchxl/local_benchmark_with_pacabench/
false
false
https://external-preview…8b58686d41a5db63
2
{'enabled': False, 'images': [{'id': 'eHl2OTJnYjRmejVnMer7b8wqxc2Nh5cYmAam3LMVlJfgNPB2bEOuzUhUBc-0', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/eHl2OTJnYjRmejVnMer7b8wqxc2Nh5cYmAam3LMVlJfgNPB2bEOuzUhUBc-0.png?width=108&crop=smart&format=pjpg&auto=webp&s=aa982eef82cb10b8486085e8fa7c26f8116a3772', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/eHl2OTJnYjRmejVnMer7b8wqxc2Nh5cYmAam3LMVlJfgNPB2bEOuzUhUBc-0.png?width=216&crop=smart&format=pjpg&auto=webp&s=69454e894c861c83587a0c4b4262e591d705ea7f', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/eHl2OTJnYjRmejVnMer7b8wqxc2Nh5cYmAam3LMVlJfgNPB2bEOuzUhUBc-0.png?width=320&crop=smart&format=pjpg&auto=webp&s=a7b077e31904568627358487a47c9ec64fe03627', 'width': 320}, {'height': 365, 'url': 'https://external-preview.redd.it/eHl2OTJnYjRmejVnMer7b8wqxc2Nh5cYmAam3LMVlJfgNPB2bEOuzUhUBc-0.png?width=640&crop=smart&format=pjpg&auto=webp&s=f35ac89f2933bfc97accfecc625c99ddd8c1523a', 'width': 640}, {'height': 548, 'url': 'https://external-preview.redd.it/eHl2OTJnYjRmejVnMer7b8wqxc2Nh5cYmAam3LMVlJfgNPB2bEOuzUhUBc-0.png?width=960&crop=smart&format=pjpg&auto=webp&s=96049f7da96c1a5551a248484ad7a2406516b1e8', 'width': 960}, {'height': 616, 'url': 'https://external-preview.redd.it/eHl2OTJnYjRmejVnMer7b8wqxc2Nh5cYmAam3LMVlJfgNPB2bEOuzUhUBc-0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e0e411ca38771d27a672563a5ea656453500eb14', 'width': 1080}], 'source': {'height': 788, 'url': 'https://external-preview.redd.it/eHl2OTJnYjRmejVnMer7b8wqxc2Nh5cYmAam3LMVlJfgNPB2bEOuzUhUBc-0.png?format=pjpg&auto=webp&s=837367eb337f56e6adee5ceba6ed08852aae266e', 'width': 1380}, 'variants': {}}]}
GLM-4.6V-Flash now available on HuggingChat
29
2025-12-08T13:20:54
https://huggingface.co/chat/models/zai-org/GLM-4.6V-Flash
paf1138
huggingface.co
1970-01-01T00:00:00
0
{}
1phc878
false
null
t3_1phc878
/r/LocalLLaMA/comments/1phc878/glm46vflash_now_available_on_huggingchat/
false
false
default
29
{'enabled': False, 'images': [{'id': 'foZjOdhgFbynMPkk1VbM0h59F2XNCIyyRotIerkWrUs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/foZjOdhgFbynMPkk1VbM0h59F2XNCIyyRotIerkWrUs.png?width=108&crop=smart&auto=webp&s=e5e143a671bfe1feb24d1306221726343629aa95', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/foZjOdhgFbynMPkk1VbM0h59F2XNCIyyRotIerkWrUs.png?width=216&crop=smart&auto=webp&s=ed79ab8e22535b873b9be7282206ca857f28fa68', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/foZjOdhgFbynMPkk1VbM0h59F2XNCIyyRotIerkWrUs.png?width=320&crop=smart&auto=webp&s=0c04e461631a0c81a1759297d349392180f02808', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/foZjOdhgFbynMPkk1VbM0h59F2XNCIyyRotIerkWrUs.png?width=640&crop=smart&auto=webp&s=6571774e25496428a60e8c8a009f791e448c9e7b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/foZjOdhgFbynMPkk1VbM0h59F2XNCIyyRotIerkWrUs.png?width=960&crop=smart&auto=webp&s=d048314c1b441234d7e0ed5db1aa842dac9c8f6f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/foZjOdhgFbynMPkk1VbM0h59F2XNCIyyRotIerkWrUs.png?width=1080&crop=smart&auto=webp&s=0f1d2a755f2122eff753ed604a24a596c6371723', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/foZjOdhgFbynMPkk1VbM0h59F2XNCIyyRotIerkWrUs.png?auto=webp&s=af11f1530c2a12e5dacbcb7197f859099efa391e', 'width': 1200}, 'variants': {}}]}
[Extended] Z.ai GLM 10% Stackable Discount on Top of 30% Black Friday Deals + 50% Discount - Max Plan
2
**Extended Special Offer: Maximize Your AI Experience with Exclusive Savings** **Pricing with Referral Discount:** - **First Month:** Only $2.70 - **Annual Plan:** $22.68 total (billed annually) - **Max Plan (60x Claude Pro limits):** $226/year **Your Total Savings Breakdown:** - 50% standard discount applied - 20-30% additional plan-specific discount - 10% extra referral bonus (always included for learners) **Why Choose the Max Plan?** Get 60x Claude Pro performance limits for less than Claude's annual cost. Experience guaranteed peak performance and maximum capabilities. **Technical Compatibility:** Full compatible with 10+ coding tools including: - Claude Code - Roo Code - Cline - Kilo Code - OpenCode - Crush - Goose - And more tools being continuously added **Additional Benefits:** - API key sharing capability - Premium performance at exceptional value - Future-proof with expanding tool integrations **Subscribe Now:** [https://z.ai/subscribe?ic=OUCO7ISEDB](https://z.ai/subscribe?ic=OUCO7ISEDB) This represents an exceptional value opportunity - premium AI capabilities at a fraction of standard pricing. The Max Plan delivers the best long-term value if you're serious about maximizing your AI workflow.
2025-12-08T13:19:51
https://www.reddit.com/r/LocalLLaMA/comments/1phc7fu/extended_zai_glm_10_stackable_discount_on_top_of/
Minute-Act-4943
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phc7fu
false
null
t3_1phc7fu
/r/LocalLLaMA/comments/1phc7fu/extended_zai_glm_10_stackable_discount_on_top_of/
true
false
spoiler
2
null
What is the knowledge capacity of LoRA? Any ratio of "training tokens" to "LoRA size" or "model size"?
1
Hi folks, I'm developing [smallevals](https://github.com/mburaksayici/smallevals), small language models aiming to make evaluation of RAG and VectorDB retrievals fast and free. To achieve that, I'm training on a popular dataset, reshaped a little with some larger LLMs to get it into the output format I want. I have a dataset of 200k conversations, median 250 tokens per conversation. I'm training 0.5-0.6B models, and they perform well but not perfectly. I've tested full fine-tuning on all of the data, which made the model responses worse. Then I switched to LoRA (20M trainable params for the 0.6B model). And since I have all the data, I want to train on all of it for one of my experiments. Whether I feed all or only part of the data, I'm sure more data reduces hallucination, but the model is still not at its best. I know it's bounded by the 0.6B model size, but what is the effective ratio of "training data tokens" to "LoRA size" or "model size"?
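I don't have a principled answer, but the back-of-envelope arithmetic for the setup above is easy to pin down and at least makes runs comparable. This is just bookkeeping, not a capacity law:

```python
def tokens_per_trainable_param(n_convs: int, median_tokens: int,
                               trainable_params: int, epochs: int = 1) -> float:
    """Rough ratio of training tokens seen to trainable parameters."""
    total_tokens = n_convs * median_tokens * epochs
    return total_tokens / trainable_params

# 200k conversations * ~250 tokens each, 20M trainable LoRA params
ratio = tokens_per_trainable_param(200_000, 250, 20_000_000)
print(ratio)  # 2.5 tokens per trainable parameter
```

Tracking this one number per experiment (and per epoch count) makes it easier to say "quality degraded past X tokens per trainable param" for a given adapter size.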
2025-12-08T13:03:35
https://www.reddit.com/r/LocalLLaMA/comments/1phbv2x/what_is_the_knowledge_capacity_of_lora_any_ratio/
mburaksayici
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phbv2x
false
null
t3_1phbv2x
/r/LocalLLaMA/comments/1phbv2x/what_is_the_knowledge_capacity_of_lora_any_ratio/
false
false
self
1
{'enabled': False, 'images': [{'id': 'EbTZP8YzgHrTw-LFqqogwfawTLB4qliji7enQJahoz8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EbTZP8YzgHrTw-LFqqogwfawTLB4qliji7enQJahoz8.png?width=108&crop=smart&auto=webp&s=be456a83af437f755c7a10bd34e53c06678b00a8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EbTZP8YzgHrTw-LFqqogwfawTLB4qliji7enQJahoz8.png?width=216&crop=smart&auto=webp&s=38f68da39e65b2c982d8eee81d26e7a81ca43be6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EbTZP8YzgHrTw-LFqqogwfawTLB4qliji7enQJahoz8.png?width=320&crop=smart&auto=webp&s=82be749dc8c858acc313bcf07e00c827e2840569', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EbTZP8YzgHrTw-LFqqogwfawTLB4qliji7enQJahoz8.png?width=640&crop=smart&auto=webp&s=da4ba3abd9c7df4e8555e8cce57ed5ec451cf5e4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EbTZP8YzgHrTw-LFqqogwfawTLB4qliji7enQJahoz8.png?width=960&crop=smart&auto=webp&s=b81d84f6ed3a863e46861abccb07f5890ebbf315', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EbTZP8YzgHrTw-LFqqogwfawTLB4qliji7enQJahoz8.png?width=1080&crop=smart&auto=webp&s=823d39be5273ace16b33e9f2ed17d7e78e1faba8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EbTZP8YzgHrTw-LFqqogwfawTLB4qliji7enQJahoz8.png?auto=webp&s=d5764e9ed86bd1b341643f37b9aa64e43f45fc45', 'width': 1200}, 'variants': {}}]}
If you are using LiteLLM, how stable is it?
1
If you are using LiteLLM, how stable is it? Which local models are you using with it? Is it stable enough for production with local models? I have now struggled with it for a couple of days. It looks promising and could solve quite a few problems compared to HAProxy balancing the load, but it just has weird outages. Sometimes it works, but sometimes the models are not visible to the application. Maybe it's just me?
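For context, this is the shape of proxy config I mean: `model_list` with per-backend `litellm_params` is LiteLLM's documented format, and repeating the same `model_name` across backends is how it load-balances. The model names, IPs, and threshold values below are placeholders, not a recommendation.

```yaml
model_list:
  - model_name: local-qwen              # alias clients request
    litellm_params:
      model: openai/qwen2.5-32b         # openai/ prefix = any OpenAI-compatible server
      api_base: http://10.0.0.5:8000/v1
      api_key: "none"
  - model_name: local-qwen              # same alias on a second backend -> load balancing
    litellm_params:
      model: openai/qwen2.5-32b
      api_base: http://10.0.0.6:8000/v1
      api_key: "none"
router_settings:
  num_retries: 2
  timeout: 120
```

If your "models not visible" issue is intermittent, it may be worth checking whether the proxy is marking a backend unhealthy and cooling it down rather than the config itself being wrong.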
2025-12-08T13:01:26
https://www.reddit.com/r/LocalLLaMA/comments/1phbt9b/if_you_are_using_litellm_how_stable_it_is/
somealusta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phbt9b
false
null
t3_1phbt9b
/r/LocalLLaMA/comments/1phbt9b/if_you_are_using_litellm_how_stable_it_is/
false
false
self
1
null
I built a multi-agent system where AI agents argue through incompatible "ways of knowing" – and it discovers new reasoning frameworks I never programmed
0
I built a multi-agent system where AI agents argue through incompatible "ways of knowing" – and it discovers new reasoning frameworks on its own I've been working on something called Chorus with a debate engine called Hephaestus (named after the blacksmith god – the metaphor is frameworks being heated and hammered together until something new is forged). Instead of agents with "roles" (researcher, writer, critic), each agent reasons through an epistemological framework – a theory of what counts as valid knowledge. For example: - A "Metric" agent believes everything must be quantifiable to be real - A "Storyteller" agent believes context and human experience matter more than numbers - A "Vulcan" agent stress-tests logic and looks for failure modes When you ask a question, these frameworks collide. The Metric agent demands data, the Storyteller says "but what about the human impact you can't measure?" – and the tension surfaces trade-offs a single perspective misses. **The part I designed, that still surprises me:** I built Hephaestus to detect when agents synthesize something that doesn't fit existing frameworks – and extract these as "emergent frameworks." The detection works. But the actual frameworks that emerge weren't designed by me. I've got 33 now, and some (like "Beyond Empirical Metrics") capture reasoning patterns I wouldn't have thought to codify myself. Whether that's genuine epistemological discovery or clever pattern matching, I'm still figuring out. **Current state:** Still early. I'm running a waitlist because I'm a solo dev and can't afford to scale LLM costs too fast yet. But I'd love feedback from this community on: 1. Is "epistemological frameworks" meaningfully different from just good prompting? 2. What kinds of problems would you want to throw at something like this? Waitlist: https://chorusai.replit.app/ Happy to answer questions about the architecture.
2025-12-08T12:54:26
https://www.reddit.com/r/LocalLLaMA/comments/1phbo0p/i_built_a_multiagent_system_where_ai_agents_argue/
PuzzleheadedWall2248
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phbo0p
false
null
t3_1phbo0p
/r/LocalLLaMA/comments/1phbo0p/i_built_a_multiagent_system_where_ai_agents_argue/
false
false
self
0
null
GLM released 4.6V including the apparent successor to Air. But I'm most interested to test the 9B Flash version
27
[https://huggingface.co/zai-org/GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash) https://preview.redd.it/1191vuzn7z5g1.png?width=1080&format=png&auto=webp&s=05360b416baa64cc163305c635af3aa5bd121c8b
2025-12-08T12:45:22
https://www.reddit.com/r/LocalLLaMA/comments/1phbhfi/glm_released_46v_including_the_apparent_successor/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phbhfi
false
null
t3_1phbhfi
/r/LocalLLaMA/comments/1phbhfi/glm_released_46v_including_the_apparent_successor/
false
false
https://b.thumbs.redditm…xWYX9qbE_iMU.jpg
27
null
GLM-4.6V, the latest open-source vision language models
59
GLM-4.6V series model includes two versions: * GLM-4.6V (106B) - model designed for cloud and high-performance cluster scenarios, and * GLM-4.6V-Flash (9B) - lightweight model optimized for local deployment and low-latency applications. GLM-4.6V achieves SoTA performance in visual understanding among models of similar parameter scales. **Key Features of this model** * Native Function Calling capabilities for the first time. * Supports processing up to 128 K context tokens. * Designed for vision-language tasks — images + text both supported. * Offers improved reasoning and alignment with human preferences. * Suitable for complex multimodal workflows (e.g., long documents + images). Source: [Hugging Face Model Collection](https://huggingface.co/collections/zai-org/glm-46v)
2025-12-08T12:42:27
https://i.redd.it/1qizxqj46z5g1.jpeg
Dear-Success-1441
i.redd.it
1970-01-01T00:00:00
0
{}
1phbfao
false
null
t3_1phbfao
/r/LocalLLaMA/comments/1phbfao/glm46v_the_latest_opensource_vision_language/
false
false
default
59
{'enabled': True, 'images': [{'id': '1qizxqj46z5g1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/1qizxqj46z5g1.jpeg?width=108&crop=smart&auto=webp&s=9586c2baed2799722e057609cb2163af18fc318c', 'width': 108}, {'height': 109, 'url': 'https://preview.redd.it/1qizxqj46z5g1.jpeg?width=216&crop=smart&auto=webp&s=db53f0dae3601bc16a2b4ef02f09066ccfdb0d90', 'width': 216}, {'height': 162, 'url': 'https://preview.redd.it/1qizxqj46z5g1.jpeg?width=320&crop=smart&auto=webp&s=981a9224844b4782cfcc20ea9b30a0da0551cac6', 'width': 320}, {'height': 325, 'url': 'https://preview.redd.it/1qizxqj46z5g1.jpeg?width=640&crop=smart&auto=webp&s=befbb4427d0d9dff091f3f16c366351798b96e46', 'width': 640}], 'source': {'height': 441, 'url': 'https://preview.redd.it/1qizxqj46z5g1.jpeg?auto=webp&s=4d5d47b39f058855ea388472963c22b2609c8432', 'width': 867}, 'variants': {}}]}
Key Insights from OpenRouter's 2025 State of AI report
23
TL;DR **1. New landscape of open source: Chinese models rise, market moves beyond monopoly** Although proprietary closed-source models still dominate, the market share of open-source models has steadily grown to about one-third. Notably, a significant portion of this growth comes from models developed in China, such as DeepSeek, Qwen, and Kimi, which have gained a large global user base thanks to their strong performance and rapid iteration. **2. AI's top use isn't productivity, it's "role-playing"** https://preview.redd.it/87aedwx82z5g1.png?width=1612&format=png&auto=webp&s=4207a19387cd827696e3db38c15ca73ebf374eb9 Contrary to the assumption that AI is mainly used for productivity tasks such as programming and writing, data shows that in open-source models, the largest use case is creative role-playing. Among all uses of open-source models, more than half (about 52%) fall under the role-playing category. **3. The "Cinderella effect": winning users hinges on solving the problem the "first time"** When a newly released model successfully solves a previously unresolved high-value workload for the first time, it achieves a perfect "fit", much like Cinderella putting on her unique glass slipper. Typically, this "perfect fit" is realized through the model's new capabilities in agentic reasoning, such as multi-step reasoning or reliable tool use that address a previously difficult business problem. The consequence of this "fit" is a strong user lock-in effect. Once users find the "glass slipper" model that solves their core problem, they rarely switch to newer or even technically superior models that appear later. **4. Rise of agents: AI shifts from "text generator" to "task executor"** Current models not only generate text but also take concrete actions through planning, tool invocation, and handling long-form context to solve complex problems.
Key data evidence supporting this trend includes: * Proliferation of reasoning models: Models with multi-step reasoning capabilities now process more than 50% of total tokens, becoming the mainstream in the market. * Surge in context length: Over the past year, the average number of input tokens (prompts) per request has grown nearly fourfold. This asymmetric growth is primarily driven by use cases in software development and technical reasoning, indicating that users are engaging models with increasingly complex background information. * Normalization of tool invocation: An increasing number of requests now call external APIs or tools to complete tasks, with this proportion stabilizing at around 15% and continuing to grow, marking AI's role as the "action hub" connecting the digital world. https://preview.redd.it/w23h9uqn4z5g1.png?width=1326&format=png&auto=webp&s=020bdbbd6f8f5604a1f6a3331f2420eb89ac153e **5. The economics of AI: price isn't the only deciding factor** Data shows that demand for AI models is relatively "price inelastic," meaning there is no strong correlation between model price and usage volume. When choosing a model, users consider cost, quality, reliability, and specific capabilities comprehensively, rather than simply pursuing the lowest price. Value, not price, is the core driver of choice.
The research categorizes models on the market into four types, clearly revealing this dynamic: * **Efficient Giants**: Such as Google Gemini Flash, with extremely low cost and massive usage, serving as an “attractive default option for high-volume or long-context workloads.” * **Premium Leaders**: Such as Anthropic Claude Sonnet, which are expensive yet heavily used, indicating that users are willing to pay for “superior reasoning ability and scalable reliability.” * **Premium Specialists**: Such as OpenAI GPT-4, which are extremely costly and relatively less used, dedicated to “niche, high-stakes critical tasks where output quality far outweighs marginal token cost.” * **Long Tail Market**: Includes a large number of low-cost, low-usage models that meet various niche needs. https://preview.redd.it/5t2jufy44z5g1.png?width=1322&format=png&auto=webp&s=aa9a6c43a00dc2f138e4416ef737d2fc63d32f5b
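The four-quadrant framing boils down to a simple price vs. usage split. A toy illustration (the thresholds are mine, purely to show the taxonomy, not values from the report):

```python
def market_quadrant(price_per_mtok: float, share_of_tokens: float) -> str:
    """Classify a model by the report's price x usage quadrants (toy thresholds)."""
    cheap = price_per_mtok < 1.0        # $/M tokens, arbitrary cutoff
    popular = share_of_tokens >= 0.05   # >=5% of total tokens, arbitrary cutoff
    if cheap and popular:
        return "Efficient Giant"
    if not cheap and popular:
        return "Premium Leader"
    if not cheap and not popular:
        return "Premium Specialist"
    return "Long Tail"

print(market_quadrant(0.3, 0.20))  # Efficient Giant
```

The interesting empirical claim is that models in the expensive quadrants keep heavy usage anyway, which is what "price inelastic" means in practice.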
2025-12-08T12:34:20
https://www.reddit.com/r/LocalLLaMA/comments/1phb9nr/key_insights_from_openrouters_2025_state_of_ai/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phb9nr
false
null
t3_1phb9nr
/r/LocalLLaMA/comments/1phb9nr/key_insights_from_openrouters_2025_state_of_ai/
false
false
https://b.thumbs.redditm…KZPEjMSbDYcg.jpg
23
null
GLM-4.6V(106B) and GLV-4.6V-Flash(9B)
1
[removed]
2025-12-08T12:30:09
https://www.reddit.com/r/LocalLLaMA/comments/1phb6n3/glm46v106b_and_glv46vflash9b/
Icy_Gas8807
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phb6n3
false
null
t3_1phb6n3
/r/LocalLLaMA/comments/1phb6n3/glm46v106b_and_glv46vflash9b/
false
false
self
1
null
I built an offline agent "OS" that runs entirely in Markdown files because I hate web UIs
1
[removed]
2025-12-08T12:06:00
https://www.reddit.com/r/LocalLLaMA/comments/1phaqig/i_built_an_offline_agent_os_that_runs_entirely_in/
Jolly-Author-2886
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phaqig
false
null
t3_1phaqig
/r/LocalLLaMA/comments/1phaqig/i_built_an_offline_agent_os_that_runs_entirely_in/
false
false
self
1
null
I built an offline agent "OS" that runs entirely in Markdown files because I hate web UIs
1
[removed]
2025-12-08T11:59:54
https://www.reddit.com/r/LocalLLaMA/comments/1pham9s/i_built_an_offline_agent_os_that_runs_entirely_in/
Latter_Importance620
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pham9s
false
null
t3_1pham9s
/r/LocalLLaMA/comments/1pham9s/i_built_an_offline_agent_os_that_runs_entirely_in/
false
false
self
1
null
How to Implement Voice Cloning from Speaker Embeddings in Chatterbox-TTS?
1
[removed]
2025-12-08T11:53:31
[deleted]
1970-01-01T00:00:00
0
{}
1phai6p
false
null
t3_1phai6p
/r/LocalLLaMA/comments/1phai6p/how_to_implement_voice_cloning_from_speaker/
false
false
default
1
null
I built an Agent OS that runs offline, spawns child-kernels, and treats your filesystem as a universal UI
1
[removed]
2025-12-08T11:51:39
https://www.reddit.com/r/LocalLLaMA/comments/1phagzk/i_built_an_agent_os_that_runs_offline_spawns/
Latter_Importance620
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1phagzk
false
null
t3_1phagzk
/r/LocalLLaMA/comments/1phagzk/i_built_an_agent_os_that_runs_offline_spawns/
false
false
self
1
null
How to Generate Cloned Voice from Speaker Embedding in Chatterbox-TTS (Similar to XTTS v2)?
1
[removed]
2025-12-08T11:45:39
[deleted]
1970-01-01T00:00:00
0
{}
1phad9a
false
null
t3_1phad9a
/r/LocalLLaMA/comments/1phad9a/how_to_generate_cloned_voice_from_speaker/
false
false
default
1
null
Z.ai just released GLM-4.6V on Hugging Face ,
1
[deleted]
2025-12-08T11:44:50
[deleted]
1970-01-01T00:00:00
0
{}
1phacpg
false
null
t3_1phacpg
/r/LocalLLaMA/comments/1phacpg/zai_just_released_glm46v_on_hugging_face/
false
false
default
1
null
GLM-4.6V (108B) has been released
381
https://preview.redd.it/….cpp/pull/16600)
2025-12-08T11:41:38
https://www.reddit.com/r/LocalLLaMA/comments/1phaaon/glm46v_108b_has_been_released/
jacek2023
self.LocalLLaMA
2025-12-08T11:49:55
0
{}
1phaaon
false
null
t3_1phaaon
/r/LocalLLaMA/comments/1phaaon/glm46v_108b_has_been_released/
false
false
https://b.thumbs.redditm…g1tTNLZ4W9ms.jpg
381
null
GLM-4.6V Collection
58
[https://huggingface.co/collections/zai-org/glm-46v](https://huggingface.co/collections/zai-org/glm-46v)
2025-12-08T11:40:14
https://i.redd.it/md0quv12wy5g1.jpeg
Dark_Fire_12
i.redd.it
1970-01-01T00:00:00
0
{}
1pha9tj
false
null
t3_1pha9tj
/r/LocalLLaMA/comments/1pha9tj/glm46v_collection/
false
false
https://a.thumbs.redditm…QcUP5E2H2278.jpg
58
{'enabled': True, 'images': [{'id': 'V7ZDK3vdvjAW-qVRdV84oBfdiNB_VhkQGgajj39Cz28', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/md0quv12wy5g1.jpeg?width=108&crop=smart&auto=webp&s=de6e1a7eebc6d5b3d7ecbd633532f4d562453f89', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/md0quv12wy5g1.jpeg?width=216&crop=smart&auto=webp&s=d3523f71b52c253d2888a0889368c14d46ffb3bd', 'width': 216}, {'height': 233, 'url': 'https://preview.redd.it/md0quv12wy5g1.jpeg?width=320&crop=smart&auto=webp&s=2ee8e401ea5ebf3efad012b9aa1e34e48b3b74fa', 'width': 320}, {'height': 467, 'url': 'https://preview.redd.it/md0quv12wy5g1.jpeg?width=640&crop=smart&auto=webp&s=ffddfa59ffb2cbd190d950399949f478b447d402', 'width': 640}, {'height': 700, 'url': 'https://preview.redd.it/md0quv12wy5g1.jpeg?width=960&crop=smart&auto=webp&s=5ff3336bb6631fd0f305af34f6dcfedbbd2ac2db', 'width': 960}, {'height': 788, 'url': 'https://preview.redd.it/md0quv12wy5g1.jpeg?width=1080&crop=smart&auto=webp&s=6297f0410daebd4c1445c52ecdf9374a15beb9a6', 'width': 1080}], 'source': {'height': 7371, 'url': 'https://preview.redd.it/md0quv12wy5g1.jpeg?auto=webp&s=781f5bd2e4a92b6e0ca59a5f967322f17af64c30', 'width': 10101}, 'variants': {}}]}
zai-org/GLM-4.6V-Flash (9B) is here
400
Looks incredible for your own machine. GLM-4.6V-Flash (9B), a lightweight model optimized for local deployment and low-latency applications. GLM-4.6V scales its context window to 128k tokens in training, and achieves SoTA performance in visual understanding among models of similar parameter scales. Crucially, we integrate native Function Calling capabilities for the first time. This effectively bridges the gap between "visual perception" and "executable action," providing a unified technical foundation for multimodal agents in real-world business scenarios. https://huggingface.co/zai-org/GLM-4.6V-Flash
2025-12-08T11:36:39
https://www.reddit.com/r/LocalLLaMA/comments/1pha7l1/zaiorgglm46vflash_9b_is_here/
Cute-Sprinkles4911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1pha7l1
false
null
t3_1pha7l1
/r/LocalLLaMA/comments/1pha7l1/zaiorgglm46vflash_9b_is_here/
false
false
self
400
{'enabled': False, 'images': [{'id': 'EmvnP_LLlY56v1gl-YgECs_bzWlRjqWu3zXLU_IIKd8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EmvnP_LLlY56v1gl-YgECs_bzWlRjqWu3zXLU_IIKd8.png?width=108&crop=smart&auto=webp&s=5987b3c4048ec2423a3746e7f67a70db30b7d163', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EmvnP_LLlY56v1gl-YgECs_bzWlRjqWu3zXLU_IIKd8.png?width=216&crop=smart&auto=webp&s=8d86942c579f8f946842fc82d1782fca29e3fda4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EmvnP_LLlY56v1gl-YgECs_bzWlRjqWu3zXLU_IIKd8.png?width=320&crop=smart&auto=webp&s=1174fb2ed950fa554b1e59613bd5eda0f3a7bfbc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EmvnP_LLlY56v1gl-YgECs_bzWlRjqWu3zXLU_IIKd8.png?width=640&crop=smart&auto=webp&s=7e2aefa00f4ffa0987b323415f2ca61b6d79b9c1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EmvnP_LLlY56v1gl-YgECs_bzWlRjqWu3zXLU_IIKd8.png?width=960&crop=smart&auto=webp&s=a0d5ce3c83b0eaf1a7287fd33ab6b78be1d80f7e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EmvnP_LLlY56v1gl-YgECs_bzWlRjqWu3zXLU_IIKd8.png?width=1080&crop=smart&auto=webp&s=f8868503af6a5e3526b09a64d2f2210f839e3b4d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EmvnP_LLlY56v1gl-YgECs_bzWlRjqWu3zXLU_IIKd8.png?auto=webp&s=8709ebca0febaccad5ecd57ab1247e510aca9d18', 'width': 1200}, 'variants': {}}]}
Cards for LLMs
2
Hey guys, I'm trying to decide on which card to get for LLMs - I tried some LLMs on my 5070 and was getting around 50 tokens/s, but I think I want to make a move and get a card with more vram. I'm real new to this so I need help. I'm stuck on whether I should get an M40 or P40, or if I'll have better luck with another card. I found a P40 for 60 bucks, verified working, from a seller with really good reviews. It's practically a steal. I've heard the performance on the P40 sucks though, with fp16 performance being in the Gflops. Can't find any data that it supports anything below fp16. Any advice?
2025-12-08T11:22:10
https://www.reddit.com/r/LocalLLaMA/comments/1ph9ymk/cards_for_llms/
MarsupialJaded153
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph9ymk
false
null
t3_1ph9ymk
/r/LocalLLaMA/comments/1ph9ymk/cards_for_llms/
false
false
self
2
null
Two orchestration loops I keep reusing for LLM agents: linear and circular
0
I have been building my own orchestrator for agent based systems and eventually realized I am always using two basic loops: 1. **Linear loop (chat completion style)** This is perfect for conversation analysis, context extraction, multi stage classification, etc. Basically anything offline where you want a deterministic pipeline. * Input is fixed (transcript, doc, log batch) * Agents run in a sequence T0, T1, T2, T3 * Each step may read and write to a shared memory object * Final responder reads the enriched memory and outputs JSON or a summary 2. **Circular streaming loop (parallel / voice style)** This is what I use for voice agents, meeting copilots, or chatbots that need real time side jobs like compliance, CRM enrichment, or topic tracking. * Central responder handles the live conversation and streams tokens * Around it, a ring of background agents watch the same stream * Those agents write signals into memory: sentiment trend, entities, safety flags, topics, suggested actions * The responder periodically reads those signals instead of recomputing everything in prompt space each turn Both loops share the same structure: * Execution layer: agents and responder * Communication layer: queues or events between them * Memory layer: explicit, queryable state that lives outside the prompts * Time as a first class dimension (discrete steps vs continuous stream) I wrote a how to style article that walks through both patterns, with concrete design steps: * How to define memory schemas * How to wire store / retrieve for each agent * How to choose between linear and circular for a given use case * Example setups for conversation analysis and a voice support assistant There is also a combined diagram that shows both loops side by side. Link in the comments so it does not get auto filtered. 
The work comes out of my orchestrator project OrKa ([https://github.com/marcosomma/orka-reasoning](https://github.com/marcosomma/orka-reasoning)), but the patterns should map to any stack, including DIY queues and local models. Very interested to hear how others are orchestrating multi agent systems: * Are you mostly in the linear world? * Do you have something similar to a circular streaming loop? * What nasty edge cases show up in production that simple diagrams ignore?
2025-12-08T11:12:35
https://www.reddit.com/gallery/1ph9szu
marcosomma-OrKA
reddit.com
1970-01-01T00:00:00
0
{}
1ph9szu
false
null
t3_1ph9szu
/r/LocalLLaMA/comments/1ph9szu/two_orchestration_loops_i_keep_reusing_for_llm/
false
false
https://b.thumbs.redditm…FZwfo3LrAO6M.jpg
0
null
New Jina-VLM-2.4B Reaches SOTA for Multilingual Visual Question Answering
35
Jina-vlm is an open-source VLM built on top of **SigLIP2 vision encoder** and **Qwen3 language decoder**. Training data includes **5M multimodal samples and 12B text tokens across 29 languages**. This model achieves the highest average score (72.3) across eight VQA benchmarks. This model also leads on multilingual multimodal understanding (MMMB: 78.8, Multilingual MMBench: 74.3). |Model|Params|VQA Avg|MMMB|MM-Bench|RealWorld QA| |:-|:-|:-|:-|:-|:-| |jina-vlm|2.4B|72.3|78.8|74.3|68.2| |Qwen2-VL-2B|2.2B|66.4|71.3|69.4|62.9| |Qwen3-VL-2B|2.2B|71.6|75.0|72.3|63.9| |InternVL3-2B|2.2B|69.2|73.6|71.9|64.3| |InternVL3.5-2B|2.2B|71.6|74.6|70.9|62.0| Source: [Hugging Face model card](https://huggingface.co/jinaai/jina-vlm)
2025-12-08T11:06:39
https://i.redd.it/xsgg4t96py5g1.png
Dear-Success-1441
i.redd.it
1970-01-01T00:00:00
0
{}
1ph9pg9
false
null
t3_1ph9pg9
/r/LocalLLaMA/comments/1ph9pg9/new_jinavlm24b_reaches_sota_for_multilingual/
false
false
https://a.thumbs.redditm…f6NjG7eHv3u0.jpg
35
{'enabled': True, 'images': [{'id': 'MzNoBOifKCu5S7xes20nUaHO6VlOBkTjDL--vz1BFYQ', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/xsgg4t96py5g1.png?width=108&crop=smart&auto=webp&s=dcfa9e85ede8bc532f8a3b9a9e7e834d3260e2ea', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/xsgg4t96py5g1.png?width=216&crop=smart&auto=webp&s=4e40794e5302eede1a7eaa1ca82fdc3697dd2e30', 'width': 216}, {'height': 132, 'url': 'https://preview.redd.it/xsgg4t96py5g1.png?width=320&crop=smart&auto=webp&s=5474106949d5c9871e2b8b6c71a28ce08381b689', 'width': 320}, {'height': 264, 'url': 'https://preview.redd.it/xsgg4t96py5g1.png?width=640&crop=smart&auto=webp&s=a1b75b525a19b396db15af1179829e45366c5e90', 'width': 640}, {'height': 396, 'url': 'https://preview.redd.it/xsgg4t96py5g1.png?width=960&crop=smart&auto=webp&s=f60357ce713c5bf8e844df8b131d594bc9e0be0e', 'width': 960}, {'height': 446, 'url': 'https://preview.redd.it/xsgg4t96py5g1.png?width=1080&crop=smart&auto=webp&s=73f106ca4b4b702b71f31e8f88b150544c26c3c4', 'width': 1080}], 'source': {'height': 848, 'url': 'https://preview.redd.it/xsgg4t96py5g1.png?auto=webp&s=d1ad0d41b73d950b18ea556e47d724475340d631', 'width': 2052}, 'variants': {}}]}
Jan v0.7.5: Jan Browser MCP extension, file attachment, Flatpak support
51
We're releasing Jan v0.7.5 with the Jan Browser MCP and a few updates many of you asked for. With this release, Jan has a Chromium extension that makes browser use simpler and more stable. Install the Jan extension from the Chrome Web Store and connect it to Jan. The video above shows the quick steps. You can now attach files directly in chat. And yes, Flatpak support is finally here! This has been requested for months, and Linux users should have a better setup now. Links: * Jan Browser MCP: [https://chromewebstore.google.com/detail/jan-browser-mcp/mkciifcjehgnpaigoiaakdgabbpfppal](https://chromewebstore.google.com/detail/jan-browser-mcp/mkciifcjehgnpaigoiaakdgabbpfppal) * Jan on Flathub: [https://flathub.org/en/apps/ai.jan.Jan](https://flathub.org/en/apps/ai.jan.Jan) * Jan GitHub: [https://github.com/janhq/jan](https://github.com/janhq/jan) Please update your [Jan](https://www.jan.ai/) or download the latest. I'm Emre from the Jan team - happy to answer your questions. \--- *Note: Browser performance still depends on the model's MCP capabilities. In some cases, it doesn't pick the best option yet, as shown in the video... We also found a parser issue in llama.cpp that affects reliability, and we're working on it.*
2025-12-08T11:06:17
https://v.redd.it/9gjzooqiny5g1
eck72
v.redd.it
1970-01-01T00:00:00
0
{}
1ph9p98
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9gjzooqiny5g1/DASHPlaylist.mpd?a=1767787418%2CODI5ZDk3NzA3N2QwYWMwNjc5YTNhOWQzYjBkMjA0ZTQ4MzM1ZjI2NzU3OTdlZWZiZTkxMWRkOTdlMDBmZTA2NQ%3D%3D&v=1&f=sd', 'duration': 231, 'fallback_url': 'https://v.redd.it/9gjzooqiny5g1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/9gjzooqiny5g1/HLSPlaylist.m3u8?a=1767787418%2CZTVhNjA1ZDg0NjgzZDFmZDY5MzE4M2Q5M2NmNmU4NWUyNWJhNWUzYTkyZmE1NmZjMDBlM2UwMWUwYTRjZTk2ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9gjzooqiny5g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1616}}
t3_1ph9p98
/r/LocalLLaMA/comments/1ph9p98/jan_v075_jan_browser_mcp_extension_file/
false
false
https://external-preview…d364827a598235e9
51
{'enabled': False, 'images': [{'id': 'NXdnNzE2dGlueTVnMV4cnJTtthTMZiZkt117uzBSxdM9b0R_GvilWExkrjZE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/NXdnNzE2dGlueTVnMV4cnJTtthTMZiZkt117uzBSxdM9b0R_GvilWExkrjZE.png?width=108&crop=smart&format=pjpg&auto=webp&s=480fd7c3e3ae87a92d9fca603610a72706d1097b', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/NXdnNzE2dGlueTVnMV4cnJTtthTMZiZkt117uzBSxdM9b0R_GvilWExkrjZE.png?width=216&crop=smart&format=pjpg&auto=webp&s=75da47611954f9a21ae35a815f3477e6f93d08e8', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/NXdnNzE2dGlueTVnMV4cnJTtthTMZiZkt117uzBSxdM9b0R_GvilWExkrjZE.png?width=320&crop=smart&format=pjpg&auto=webp&s=c9c290ae6364d3e4223fa9f03be716572ea7ba9e', 'width': 320}, {'height': 427, 'url': 'https://external-preview.redd.it/NXdnNzE2dGlueTVnMV4cnJTtthTMZiZkt117uzBSxdM9b0R_GvilWExkrjZE.png?width=640&crop=smart&format=pjpg&auto=webp&s=77c124b210df80eaf0aa211d79415ed4d09dc68f', 'width': 640}, {'height': 641, 'url': 'https://external-preview.redd.it/NXdnNzE2dGlueTVnMV4cnJTtthTMZiZkt117uzBSxdM9b0R_GvilWExkrjZE.png?width=960&crop=smart&format=pjpg&auto=webp&s=5e0afa49cc4b86c9af005240feccaf2ca06d4eb4', 'width': 960}, {'height': 721, 'url': 'https://external-preview.redd.it/NXdnNzE2dGlueTVnMV4cnJTtthTMZiZkt117uzBSxdM9b0R_GvilWExkrjZE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4f9f91b9242978439b397a37444b3eef6b9ccd33', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NXdnNzE2dGlueTVnMV4cnJTtthTMZiZkt117uzBSxdM9b0R_GvilWExkrjZE.png?format=pjpg&auto=webp&s=7c5e182b29e140ecd8b3aef0e5720df7bd9589f2', 'width': 1616}, 'variants': {}}]}
I was trying to design something for Data Sovereignty
0
This is a pretty full baseline architecture for a multimodal AI. It can run locally, and its target system is a Ryzen AI Max 395, but the stack, when done, will work on pretty much any x86-64. I'll leave it to your favorite LLM to tell you what it is.
2025-12-08T10:41:41
https://github.com/kght22-a11y/AICO/tree/main/Reditpack
kght22
github.com
1970-01-01T00:00:00
0
{}
1ph9aoq
false
null
t3_1ph9aoq
/r/LocalLLaMA/comments/1ph9aoq/i_was_trying_to_design_something_for_data/
false
false
default
0
null
chatllm.cpp adds support of Ministral-3 & llama.cpp WebUI
17
2025-12-08T10:33:00
https://i.redd.it/9jrhugdzjy5g1.png
foldl-li
i.redd.it
1970-01-01T00:00:00
0
{}
1ph95i2
false
null
t3_1ph95i2
/r/LocalLLaMA/comments/1ph95i2/chatllmcpp_adds_support_of_ministral3_llamacpp/
false
false
https://b.thumbs.redditm…fjbpoQREfrIY.jpg
17
{'enabled': True, 'images': [{'id': 'mluJFt1QeHzzSGBoYnpOirlANf-QPLNK3g7kKl5jNNc', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/9jrhugdzjy5g1.png?width=108&crop=smart&auto=webp&s=6b834482d5a061a3f7da8144bd179258da162b19', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/9jrhugdzjy5g1.png?width=216&crop=smart&auto=webp&s=bd7e9dafd5c7a64a8d7e2c743b1a9e0bb47eee01', 'width': 216}, {'height': 293, 'url': 'https://preview.redd.it/9jrhugdzjy5g1.png?width=320&crop=smart&auto=webp&s=edc4f12d421dcdddf96296e890f545fe153e1d4d', 'width': 320}, {'height': 587, 'url': 'https://preview.redd.it/9jrhugdzjy5g1.png?width=640&crop=smart&auto=webp&s=3f33d8f70c2fae1ee03aa0b0fc655e88ed3385bd', 'width': 640}, {'height': 881, 'url': 'https://preview.redd.it/9jrhugdzjy5g1.png?width=960&crop=smart&auto=webp&s=44ab6e4ebca6f94719cab2bda94100740089b2cc', 'width': 960}, {'height': 991, 'url': 'https://preview.redd.it/9jrhugdzjy5g1.png?width=1080&crop=smart&auto=webp&s=36e4344f15c9d68b243ca79f098afec53f153c92', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/9jrhugdzjy5g1.png?auto=webp&s=a024fd6b10ccb823c810dc3b076a1f1cada72f8a', 'width': 1089}, 'variants': {}}]}
Last Week in Multimodal AI - Local Edition
11
**Live Avatar (Alibaba) - Streaming Real-Time Avatar Generation** * Generates audio-driven avatars with infinite length through streaming architecture. * Removes artificial time limits from avatar generation with continuous processing. * [Website](https://liveavatar.github.io/) | [Paper](https://arxiv.org/abs/2512.04677) | [GitHub](https://github.com/Alibaba-Quark/LiveAvatar) | [Hugging Face](https://huggingface.co/Quark-Vision/Live-Avatar) | [Video](https://www.youtube.com/watch?v=srbsGlLNpAc&list=TLGGqUfEsaFb8-QwODEyMjAyNQ&t=10s) https://reddit.com/link/1ph923q/video/mshdzkx8iy5g1/player **ViBT - 20B Vision Bridge Transformer** * Models data-to-data translation directly, achieving 4x speedup over comparable models. * Handles image and video generation in unified framework through trajectory learning. * [Website](https://yuanshi9815.github.io/ViBT_homepage/) | [Paper](https://huggingface.co/papers/2511.23199) | [GitHub](https://github.com/Yuanshi9815/ViBT) | [Demo](https://huggingface.co/spaces/Yuanshi/ViBT) | [Model](https://huggingface.co/Yuanshi/ViBT) https://reddit.com/link/1ph923q/video/ikcfqb3jhy5g1/player **VibeVoice-Realtime-0.5B (Microsoft) - Real-Time TTS** * 0.5B parameter text-to-speech model optimized for low-latency inference. * Achieves real-time synthesis on consumer hardware without cloud dependencies. * [Hugging Face](https://huggingface.co/microsoft/VibeVoice-Realtime-0.5B) | [Demo](https://huggingface.co/spaces/anycoderapps/VibeVoice-Realtime-0.5B) **Stable Video Infinite 2.0 - Extended Video Generation** * Open source video generation with maintained consistency across extended sequences. * Includes model weights and inference code for local deployment. 
* [Hugging Face](https://huggingface.co/vita-video-gen/svi-model/tree/main/version-2.0) | [GitHub](https://github.com/vita-epfl/Stable-Video-Infinity/tree/svi_wan22) | [KJ ComfyUI](https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Stable-Video-Infinity/v2.0) **Reward Forcing (Alibaba) - Real-Time Streaming Video** * Generates video in real time with streaming architecture. * Enables interactive video creation and modification on the fly. * [Website](https://reward-forcing.github.io) | [Paper](https://arxiv.org/pdf/2512.04678) | [Hugging Face](https://huggingface.co/JaydenLu666/Reward-Forcing-T2V-1.3B) | [GitHub](https://github.com/JaydenLyh/Reward-Forcing) https://preview.redd.it/jxqftwopiy5g1.jpg?width=2654&format=pjpg&auto=webp&s=5da86a31e3e227ae12cef0e3f5e5aedb5f85c77e **YingVideo-MV - Portrait Animation** * Animates static portraits into singing performances with audio synchronization. * Handles facial expressions and lip-sync from audio input. * [Website](https://giantailab.github.io/YingVideo-MV/) | [Paper](https://arxiv.org/pdf/2512.02492) | [GitHub](https://github.com/GiantAILab/YingVideo-MV) https://reddit.com/link/1ph923q/video/dhud4jtnhy5g1/player **EvoQwen2.5-VL Retriever - Visual Document Retrieval** * Open source visual document retriever available in 7B and 3B parameter versions. * Enables local visual document search without API dependencies. * [7B Model](https://huggingface.co/ApsaraStackMaaS/EvoQwen2.5-VL-Retriever-7B-v1) | [3B Model](https://huggingface.co/ApsaraStackMaaS/EvoQwen2.5-VL-Retriever-3B-v1) **LongCat Image - Efficient Image Generation** * 6B parameter model optimized for efficient image generation. * Balances quality with computational efficiency for local deployment. * [Hugging Face](https://huggingface.co/meituan-longcat/LongCat-Image) | [GitHub](https://github.com/meituan-longcat/LongCat-Image) **OneThinker - Visual Reasoning Model** * Handles multiple visual reasoning tasks in unified architecture. 
* Open source approach to vision-language reasoning. * [Hugging Face](https://huggingface.co/OneThink) | [Paper](https://huggingface.co/papers/2512.03043) Check out the [full newsletter](https://open.substack.com/pub/thelivingedge/p/last-week-in-multimodal-ai-36-factual?utm_campaign=post-expanded-share&utm_medium=web) for more demos, papers, and resources.
2025-12-08T10:27:10
https://www.reddit.com/r/LocalLLaMA/comments/1ph923q/last_week_in_multimodal_ai_local_edition/
Vast_Yak_4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph923q
false
null
t3_1ph923q
/r/LocalLLaMA/comments/1ph923q/last_week_in_multimodal_ai_local_edition/
false
false
https://b.thumbs.redditm…ldxgUB7R4FEU.jpg
11
null
gpt-oss:120b running on a MacBook Pro 2019 on Windows
0
Had to set a really huge pagefile for this one. https://preview.redd.it/hvgf9buxiy5g1.png?width=2894&format=png&auto=webp&s=de02a4475e142e84a145d171974d02efece728b8
2025-12-08T10:26:37
https://www.reddit.com/r/LocalLLaMA/comments/1ph91sa/gptoss120b_running_on_a_macbook_pro_2019_on/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph91sa
false
null
t3_1ph91sa
/r/LocalLLaMA/comments/1ph91sa/gptoss120b_running_on_a_macbook_pro_2019_on/
false
false
https://b.thumbs.redditm…67u1DgK_ex9Y.jpg
0
null
RAM prices explained
824
OpenAI bought up 40% of global DRAM production in raw wafers they're not even using - just stockpiling to deny competitors access. Result? Memory prices are skyrocketing. A month before Christmas. Source: Moore's Law is Dead Link: [Sam Altman's Dirty DRAM Deal](https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram-deal)
2025-12-08T10:17:09
https://www.reddit.com/r/LocalLLaMA/comments/1ph8wel/ram_prices_explained/
Lopsided_Sentence_18
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph8wel
false
null
t3_1ph8wel
/r/LocalLLaMA/comments/1ph8wel/ram_prices_explained/
false
false
self
824
{'enabled': False, 'images': [{'id': 'W_wDm5mfm7EcfWwqy0MMLW6xAWopw5T_aU1V0B1zko4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/W_wDm5mfm7EcfWwqy0MMLW6xAWopw5T_aU1V0B1zko4.png?width=108&crop=smart&auto=webp&s=06145c1f95e803dd0d7a16596535683dcace7e37', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/W_wDm5mfm7EcfWwqy0MMLW6xAWopw5T_aU1V0B1zko4.png?width=216&crop=smart&auto=webp&s=459a38eaca6d43a1bb693b341edd78993bb5f597', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/W_wDm5mfm7EcfWwqy0MMLW6xAWopw5T_aU1V0B1zko4.png?width=320&crop=smart&auto=webp&s=23b19b7a2ac8a9b814edb8b8022b17ecf97c8b5d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/W_wDm5mfm7EcfWwqy0MMLW6xAWopw5T_aU1V0B1zko4.png?width=640&crop=smart&auto=webp&s=da24dee5986101a3ef62059b8f2a8ea1966b2c59', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/W_wDm5mfm7EcfWwqy0MMLW6xAWopw5T_aU1V0B1zko4.png?width=960&crop=smart&auto=webp&s=e64e726b7c3442c38e6ad8a731b70e4e59c8732e', 'width': 960}], 'source': {'height': 563, 'url': 'https://external-preview.redd.it/W_wDm5mfm7EcfWwqy0MMLW6xAWopw5T_aU1V0B1zko4.png?auto=webp&s=8568793237010d0814d80d494e1bc77520dd117f', 'width': 1000}, 'variants': {}}]}
Got my new toy - what to do?
0
So I just got my new DGX Spark. I want to use it as a local environment for model training, and I'm planning to run Ollama + Open WebUI to make more use of local models. Any advice on how to make the most out of it? - which is the best model - what is the best setup/configuration Thanks everyone
2025-12-08T09:54:37
https://i.redd.it/8fk3z8j8dy5g1.jpeg
luongnv-com
i.redd.it
1970-01-01T00:00:00
0
{}
1ph8jf0
false
null
t3_1ph8jf0
/r/LocalLLaMA/comments/1ph8jf0/got_my_new_toy_what_to_do/
false
false
https://a.thumbs.redditm…tlg_T0QebKH8.jpg
0
{'enabled': True, 'images': [{'id': 'XvZXaq5NM77XFLtWqeF1fnuNcp4qUp026_BJVheu72s', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/8fk3z8j8dy5g1.jpeg?width=108&crop=smart&auto=webp&s=558f737e54f31b23b2ca528f206cf9c0da415921', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/8fk3z8j8dy5g1.jpeg?width=216&crop=smart&auto=webp&s=610c69509cca5ad337a05ab424ae61b53abe85d2', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/8fk3z8j8dy5g1.jpeg?width=320&crop=smart&auto=webp&s=1407512632d0f4b19f98233f3492aac409ba234e', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/8fk3z8j8dy5g1.jpeg?width=640&crop=smart&auto=webp&s=cdfbdbd31d43c3f839957f819b308486b2cb955b', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/8fk3z8j8dy5g1.jpeg?width=960&crop=smart&auto=webp&s=99ed1d8ac9b60afae7f3dcc95b43602e26fb3db9', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/8fk3z8j8dy5g1.jpeg?width=1080&crop=smart&auto=webp&s=cf09504432b31c67e1aac6d80e9334df5df67287', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/8fk3z8j8dy5g1.jpeg?auto=webp&s=827ec4fdb1cfd03505188fa2903471e03e03f565', 'width': 4032}, 'variants': {}}]}
I architected a Go backend specifically for AI Agents to write code. It actually works.
1
Hey all, I've been experimenting with a Go backend structure that is designed to be "readable" by AI agents (Claude Code/Cursor). We all know the pain: You ask an AI to add a feature, and it hallucinates imports or messes up the project structure. My solution: I built a B2B production stack (Postgres, Redis, Stytch RBAC, RAG) where the folder structure and interface definitions are strictly separated. The AI sees the Interface layer. It implements the Service layer. It hooks up the RAG pipeline. Because the pattern is rigid, the AI follows it perfectly. It handles OCR and Embedding flows without me writing much boilerplate. I'm thinking of open sourcing this as a reference architecture for "AI-Native Development." Is anyone else optimizing their repo structure for Agents? Would you be interested in seeing this repo?
2025-12-08T09:36:08
https://www.reddit.com/r/LocalLLaMA/comments/1ph89gc/i_architected_a_go_backend_specifically_for_ai/
MohQuZZZZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph89gc
false
null
t3_1ph89gc
/r/LocalLLaMA/comments/1ph89gc/i_architected_a_go_backend_specifically_for_ai/
false
false
self
1
null
Built a photography workflow tool powered entirely by local vision models (Ollama + Qwen2.5-VL)
13
https://reddit.com/link/1ph7yrx/video/34fabzwc5y5g1/player https://reddit.com/link/1ph7yrx/video/9lvthxwc5y5g1/player Wanted to share something I've been building that puts local VLMs to practical use beyond chat. **FIXXER** is a Python TUI for photographers that automates the tedious parts of post-shoot workflow. The tool takes a hybrid local CV/ML/AI approach to burst grouping, quality culling, and file naming. The key constraint was *no internet required* – everything runs locally via Ollama. **How local AI fits in:** * **AI Naming:** Qwen2.5vl:3b analyzes each image and generates descriptive, searchable filenames + tags. No prompting required – you press a button, it reasons over the image and outputs structured JSON. * **AI Critique (k):** Highlight any photo and get a structured creative critique – composition score, lighting analysis, and an artistic suggestion. We tested Bakllava, Llava, and Phi-3-Vision. Phi-3 failed hard on structured JSON. Qwen was the only one consistent enough for production. * **Graceful degradation:** CLIP embeddings for semantic burst detection, falls back to imagehash if unavailable. BRISQUE for quality scoring, falls back to Laplacian variance. Runs comfortably on M4 MacBook Air (24gb). The vision model calls are the bottleneck, but qwen2.5vl:3b keeps things snappy. The TUI has two aesthetic modes: a retro warez theme and a clean "Pro Mode" HUD. F12 toggles. **Links:** * GitHub: [https://github.com/BandwagonVibes/fixxer](https://github.com/BandwagonVibes/fixxer) * Screenshots / dev blog: [https://oaklens.art/dev](https://oaklens.art/dev) Curious if anyone's running larger vision models and wants to benchmark the critique feature. My hardware tops out at 24GB unified memory, so I'd love to see what beefier setups can do.
2025-12-08T09:16:01
https://www.reddit.com/r/LocalLLaMA/comments/1ph7yrx/built_a_photography_workflow_tool_powered/
AppropriatePublic687
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph7yrx
false
null
t3_1ph7yrx
/r/LocalLLaMA/comments/1ph7yrx/built_a_photography_workflow_tool_powered/
false
false
https://external-preview…ebbfbecdca64fd72
13
{'enabled': False, 'images': [{'id': 'qN5IQiQiiniOerdt9xfagCBn7W6cpvD6lVeKXz6ndNg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qN5IQiQiiniOerdt9xfagCBn7W6cpvD6lVeKXz6ndNg.png?width=108&crop=smart&auto=webp&s=2cfee1ebf8a756d6ac280c6d422150284df54d62', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qN5IQiQiiniOerdt9xfagCBn7W6cpvD6lVeKXz6ndNg.png?width=216&crop=smart&auto=webp&s=f735f06f7c797a57e6cc17ad3d3b20579586ae66', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qN5IQiQiiniOerdt9xfagCBn7W6cpvD6lVeKXz6ndNg.png?width=320&crop=smart&auto=webp&s=fcf7e2e4629090f0604315b70c46086b04069388', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qN5IQiQiiniOerdt9xfagCBn7W6cpvD6lVeKXz6ndNg.png?width=640&crop=smart&auto=webp&s=d762a60e3ac955362e47ce1a9d55dbb243619f7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qN5IQiQiiniOerdt9xfagCBn7W6cpvD6lVeKXz6ndNg.png?width=960&crop=smart&auto=webp&s=e6ee4cadabc211a7b56b33389b98752aeffacceb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qN5IQiQiiniOerdt9xfagCBn7W6cpvD6lVeKXz6ndNg.png?width=1080&crop=smart&auto=webp&s=639a514ff75b1df9b13e6a6f4820f019b631c78e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qN5IQiQiiniOerdt9xfagCBn7W6cpvD6lVeKXz6ndNg.png?auto=webp&s=116986ec4054b0d911a9e03401ae4baf3f1b1ee8', 'width': 1200}, 'variants': {}}]}
two months ago
50
[deleted]
2025-12-08T09:09:17
[deleted]
1970-01-01T00:00:00
0
{}
1ph7v41
false
null
t3_1ph7v41
/r/LocalLLaMA/comments/1ph7v41/two_months_ago/
false
false
default
50
null
Vector db comparison
355
I was looking for the best vector db for our RAG product, and went down a rabbit hole to compare all of them. Key findings: - **RAG systems under \~10M vectors, standard HNSW is fine.** Above that, you'll need to choose a different index. - Large dataset + cost-sensitive: **Turbopuffer.** Object storage makes it cheap at scale. - **pgvector** is good for small scale and local experiments. Specialized vector dbs perform better at scale. - **Chroma** - Lightweight, good for running in notebooks or small servers. Here's the full breakdown: [https://agentset.ai/blog/best-vector-db-for-rag](https://agentset.ai/blog/best-vector-db-for-rag)
2025-12-08T08:55:16
https://www.reddit.com/gallery/1ph7njc
Kaneki_Sana
reddit.com
1970-01-01T00:00:00
0
{}
1ph7njc
false
null
t3_1ph7njc
/r/LocalLLaMA/comments/1ph7njc/vector_db_comparison/
false
false
https://b.thumbs.redditm…eTCCI6KvOvdQ.jpg
355
null
[ Removed by moderator ]
0
[removed]
2025-12-08T08:41:48
https://www.reddit.com/r/LocalLLaMA/comments/1ph7gi6/a_paypal_china_user_with_20_years_of_registration/
AggressiveDuck3527
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph7gi6
false
null
t3_1ph7gi6
/r/LocalLLaMA/comments/1ph7gi6/a_paypal_china_user_with_20_years_of_registration/
false
false
null
0
null
[ Removed by moderator ]
0
[removed]
2025-12-08T07:50:56
https://www.reddit.com/r/LocalLLaMA/comments/1ph6ox9/google_ai_studio_pro_1_whats_the_daily_limits/
BathRevolutionary109
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph6ox9
false
null
t3_1ph6ox9
/r/LocalLLaMA/comments/1ph6ox9/google_ai_studio_pro_1_whats_the_daily_limits/
false
false
null
0
null
Does llm software debugging heavily depend on long context performance?
2
Suppose my big software project crashes after I made a change. Then I ask an LLM in VS Code to help me fix the bug by providing the error messages. I presume the LLM will also read my big repo, so it seems to be a long context query. If so, can we expect models with better long context performance to do better at software debugging? Claude models are worse than Gemini for long context in general; does that mean they are not doing as well at software debugging? Is there a benchmark to measure LLM software debugging capabilities?
2025-12-08T07:43:50
https://www.reddit.com/r/LocalLLaMA/comments/1ph6l61/does_llm_software_debugging_heavily_depends_on/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph6l61
false
null
t3_1ph6l61
/r/LocalLLaMA/comments/1ph6l61/does_llm_software_debugging_heavily_depends_on/
false
false
self
2
null
GLM-4.6 Derestricted
60
Hello r/LocalLLaMA, figured I'd post here to get some more eyes on this. I've produced and GGUF'd a norm-preserving biprojected ablation of GLM-4.6: [https://huggingface.co/AesSedai/GLM-4.6-Derestricted-GGUF](https://huggingface.co/AesSedai/GLM-4.6-Derestricted-GGUF) Mostly been discussing this in the BeaverAI discord but it's been generally well-received by the group there. This model should be suitable for normal assistant work, but was produced with the intent of improving some of the creative writing aspects of the model. Overall the writing feels like it doesn't inherit the same level of repetitive sentence structure patterning that the base model has, but it's not a finetune so it doesn't address some of the other known GLM-4.5/4.6 issues (eg, echoing / parroting as well as "slop" word usage patterns). The change is substantial enough that it does feel like a better model to use IMO though. As mentioned in the readme, I went with a fairly light abliteration targeting the middle layers of the model. It is NOT a "fully decensored" / "fully derestricted" model that will give you zero-shot-zero-system-prompt derestricted replies. A light system prompt JB or the like is necessary to help nudge it, but it will be less censored / restricted than the base model after that. Using too heavy of an abliteration config risks damaging the intelligence of the model, so I went with this comparatively lighter touch. Included in the repo is a link to Jim's llm-abliteration repo with the PR I used for producing the ablated model, as well as the measurements I collected and config I used. If someone wants to produce their own quant, they can reproduce my work that way with (hopefully) minimal effort. I'm working on some further improvements to the llm-abliteration process, and looking to abliterate Kimi-K2 Thinking in the near future (probably within a month). 
I might circle back around to some smaller models, like gemma-3-27b, and see about producing some abliterated versions of those. Will see what happens, but if you do use this GLM-4.6 Derestricted I'd be happy to hear your feedback. Thanks, \- Aes Sedai
2025-12-08T07:42:27
https://www.reddit.com/r/LocalLLaMA/comments/1ph6kfj/glm46_derestricted/
Digger412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph6kfj
false
null
t3_1ph6kfj
/r/LocalLLaMA/comments/1ph6kfj/glm46_derestricted/
false
false
self
60
{'enabled': False, 'images': [{'id': 'YLDmrfwsumTYbytPZOoH4QbIg9A3UNJVu8r4LRmNNx4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YLDmrfwsumTYbytPZOoH4QbIg9A3UNJVu8r4LRmNNx4.png?width=108&crop=smart&auto=webp&s=cc94c4e75ccdd791029bff82c2d333c17f830c6b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YLDmrfwsumTYbytPZOoH4QbIg9A3UNJVu8r4LRmNNx4.png?width=216&crop=smart&auto=webp&s=0ebb8d09e3e7771192888709fc8c3f5a3d2844cf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YLDmrfwsumTYbytPZOoH4QbIg9A3UNJVu8r4LRmNNx4.png?width=320&crop=smart&auto=webp&s=f043b18e13789d6f4ec5d2a12d8b971726ba7b3c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YLDmrfwsumTYbytPZOoH4QbIg9A3UNJVu8r4LRmNNx4.png?width=640&crop=smart&auto=webp&s=1d8241068fa48e0fe42fe2e40717775b94dede70', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YLDmrfwsumTYbytPZOoH4QbIg9A3UNJVu8r4LRmNNx4.png?width=960&crop=smart&auto=webp&s=9f75e37e6781b9abed9a8ebfa34861df497ceb9a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YLDmrfwsumTYbytPZOoH4QbIg9A3UNJVu8r4LRmNNx4.png?width=1080&crop=smart&auto=webp&s=1cf0a8504e716a310e5e8e03910cd2fb605e1cf7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YLDmrfwsumTYbytPZOoH4QbIg9A3UNJVu8r4LRmNNx4.png?auto=webp&s=d0c16214bc615b8ad4cecf374feb8667bf9654f2', 'width': 1200}, 'variants': {}}]}
On the mess of LLM + tool integrations and how MCP Gateway helps
0
**The problem: “N × M” complexity and brittle integrations** * As soon as you start building real LLM-agent systems, you hit the “N × M” problem: **N** models/agents × **M** tools/APIs. Every new combination means custom integration. That quickly becomes **unmanageable**. * Without standardization, you end up writing a lot of ad-hoc “glue” code - tool wrappers, custom auth logic, data transformations, monitoring, secrets management, prompt-to-API adapters, retries/rate-limiting, etc. It’s brittle and expensive to maintain. * On top of that: * Different tools use different authentication (OAuth, API keys, custom tokens), protocols (REST, RPC, SOAP, etc.), and data formats. Handling all these separately for each tool is a headache. * Once your number of agents/tools increases, tracking which agent did what becomes difficult - debugging, auditing, permissions enforcement, access control, security and compliance become nightmares. In short: building scalable, safe, maintainable multi-tool agent pipelines by hand is a **technical debt trap**. **Why we built it:** [TrueFoundry MCP Gateway](https://www.truefoundry.com/) **gives you a unified, standardized control plane.** TrueFoundry’s MCP Gateway acts as a **central registry and proxy** for all your MCP-exposed tools / services. You register your internal or external services once - then any agent can discover and call them via the gateway. * This gives multiple dev-centric advantages: * **Unified authentication & credential management**: Instead of spreading API keys or custom credentials across multiple agents/projects, the gateway manages authentication centrally (OAuth2/SAML/RBAC, etc.). * **Access control / permissions & tool-level guardrails**: You can specify which agent (or team) is allowed only certain operations (e.g. read PRs vs create PRs, issue create vs delete) - minimizing blast radius. 
* **Observability, logging, auditing, traceability**: Every agent - model - tool call chain can be captured, traced, and audited (which model invoked which tool, when, with what args, and what output). That helps debugging, compliance, and understanding behavior under load. * **Rate-limiting, quotas, cost management, caching**: Especially for LLMs + paid external tools - you can throttle or cache tool calls to avoid runaway costs or infinite loops. * **Decoupling code from infrastructure**: By using MCP Gateway, the application logic (agent code) doesn’t need to deal with low-level API plumbing. That reduces boilerplate and makes your codebase cleaner, modular, and easier to maintain/change tools independently.
2025-12-08T06:56:14
https://www.reddit.com/r/LocalLLaMA/comments/1ph5ue6/on_the_mess_of_llm_tool_integrations_and_how_mcp/
Lonely_Pea_7748
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph5ue6
false
null
t3_1ph5ue6
/r/LocalLLaMA/comments/1ph5ue6/on_the_mess_of_llm_tool_integrations_and_how_mcp/
false
false
self
0
null
Announcing Rnj-1: Building Instruments of Intelligence
0
2025-12-08T06:53:32
https://essential.ai/research/rnj-1
AlwaysLateToThaParty
essential.ai
1970-01-01T00:00:00
0
{}
1ph5stl
false
null
t3_1ph5stl
/r/LocalLLaMA/comments/1ph5stl/announcing_rnj1_building_instruments_of/
false
false
default
0
null
Nvme offloading possible in mlx or llamacpp?
1
I am trying to run an 80B Qwen 3 Next model (6-bit quantized) using LM Studio on my MacBook M4 Max with 48 GB unified memory. It crashes every time before outputting the first token, no matter how small I set the context size or whether I use KV cache quantization. Is there any way to offload layers of the MoE to NVMe during inference in either MLX or llama.cpp? I know it is going to be very slow, but still.
2025-12-08T06:42:23
https://www.reddit.com/r/LocalLLaMA/comments/1ph5m7b/nvme_offloading_possible_in_mlx_or_llamacpp/
BABA_yaaGa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph5m7b
false
null
t3_1ph5m7b
/r/LocalLLaMA/comments/1ph5m7b/nvme_offloading_possible_in_mlx_or_llamacpp/
false
false
self
1
null
Built a LLaMA-3 Model That Analyzes Developer Communication Using Vector Drift + LoRA
1
[removed]
2025-12-08T06:39:14
https://www.reddit.com/r/LocalLLaMA/comments/1ph5kec/built_a_llama3_model_that_analyzes_developer/
Adventurous_Cod_7707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph5kec
false
null
t3_1ph5kec
/r/LocalLLaMA/comments/1ph5kec/built_a_llama3_model_that_analyzes_developer/
false
false
self
1
null
Built a LLaMA-3 Model That Detects Burnout from Commit Messages Using Vector Drift + LoRA
1
[removed]
2025-12-08T06:36:15
https://www.reddit.com/r/LocalLLaMA/comments/1ph5inc/built_a_llama3_model_that_detects_burnout_from/
Adventurous_Cod_7707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph5inc
false
null
t3_1ph5inc
/r/LocalLLaMA/comments/1ph5inc/built_a_llama3_model_that_detects_burnout_from/
false
false
self
1
null
RTX 5090 96 GB just popped up on Alibababa
198
Hi guys, just found an RTX 5090 96 GB on Alibaba from a verified vendor: https://www.alibaba.com/product-detail/Newest-RTX-5090-96gb-Graphics-Card_1601577163842.html I contacted the vendor and am waiting for a reply. Has anyone tried it yet?
2025-12-08T06:33:36
https://www.reddit.com/r/LocalLLaMA/comments/1ph5h2q/rtx_5090_96_gb_just_popped_up_on_alibababa/
RateRoutine2268
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph5h2q
false
null
t3_1ph5h2q
/r/LocalLLaMA/comments/1ph5h2q/rtx_5090_96_gb_just_popped_up_on_alibababa/
false
false
self
198
null
Name offloading possible in mlx or llamacpp?
2
I am trying to run an 80B Qwen 3 Next model (6-bit quantized) using LM Studio on my MacBook M4 Max with 48 GB unified memory. It crashes every time before outputting the first token, no matter how small I set the context size or whether I use KV cache quantization. Is there any way to offload layers of the MoE to NVMe during inference in either MLX or llama.cpp? I know it is going to be very slow, but still.
2025-12-08T06:32:50
https://www.reddit.com/r/LocalLLaMA/comments/1ph5gn4/name_offloading_possible_in_mlx_or_llamacpp/
BABA_yaaGa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph5gn4
false
null
t3_1ph5gn4
/r/LocalLLaMA/comments/1ph5gn4/name_offloading_possible_in_mlx_or_llamacpp/
false
false
self
2
null
Is colab pro or colab enterprise would be enough for finetuning LLMs?
2
Guys, I was wondering: can I fine-tune models like 3B, 8B, and 14B with a 256k context window in Google Colab Pro or Enterprise without issues? I plan to fine-tune them using Unsloth and QLoRA for PEFT. I am still a beginner at fine-tuning and was wondering if anyone can provide me with some suggestions and ideas.
2025-12-08T06:01:22
https://www.reddit.com/r/LocalLLaMA/comments/1ph4x4e/is_colab_pro_or_colab_enterprise_would_be_enough/
Motor_Ad6405
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph4x4e
false
null
t3_1ph4x4e
/r/LocalLLaMA/comments/1ph4x4e/is_colab_pro_or_colab_enterprise_would_be_enough/
false
false
self
2
null
I made a GUI for OpenRouter with "X-Ray Mode" to edit system prompts on the fly.
3
Hi all, I built a tool called **AI Text Enhancer Pro**. It’s basically a specialized wrapper for the OpenRouter API designed for editing text and code. The main feature I wanted was **"X-Ray Mode"**. Usually, when you use AI editors, the system prompt is hidden. With this tool, you can expand a panel, see the exact system instruction (e.g., "You are a code editor..."), and tweak it right before hitting run. **Other features:** * **Streaming** with real token counting. * **Diff View** (Original vs. Output). * **Co-Writer Mode:** A dedicated UI for continuing text generation with specific presets. * **Strict Monolingual Prompts:** I hardened the prompts to stop Gemini/Flash models from translating text when they should just be fixing grammar. It runs locally (Python backend), keeps your API key encrypted, and supports any OpenRouter model. Check it out if you need a clean interface for API usage: [https://github.com/AlexDustin/AI-Text-Enhancer-Pro](https://github.com/AlexDustin/AI-Text-Enhancer-Pro)
2025-12-08T05:26:43
https://www.reddit.com/gallery/1ph4bjj
AlexDustin
reddit.com
1970-01-01T00:00:00
0
{}
1ph4bjj
false
null
t3_1ph4bjj
/r/LocalLLaMA/comments/1ph4bjj/i_made_a_gui_for_openrouter_with_xray_mode_to/
false
false
https://a.thumbs.redditm…XQZ67mtwRme8.jpg
3
null
[Update] local_faiss_mcp v0.2.0 – I listened to you, r/LocalLLaMA: Added Reranking, CLI, and native PDF support
7
Last week I posted my "lazy" local RAG tool here. The consensus was: *"Cool start, but for serious use, we need reranking and better ingestion."* I spent the last few days building exactly what you asked for. **v0.2.0 is out now.** **What’s new (based on your feedback):** * **Re-ranking Support:** Added a `--rerank` flag that uses CrossEncoders (MS MARCO / BGE) to refine results. Precision is significantly higher now. * **Standalone CLI:** You no longer need to trigger ingestion via Claude. Just run: `local-faiss index "docs/**/*.pdf"` * **Native File Support:** Now parses PDFs, TXT, and MD natively (plus DOCX/HTML if you have pandoc). * **Custom Models:** You can now bring your own embedding models with `--embed [model_name]`. **Still the same philosophy:** * 100% Local (No external APIs) * No Vector DB (Just FAISS + files on disk) * One-line install **Try it:** `pip install -U local-faiss-mcp` **Repo / Full Release Notes:** [https://github.com/nonatofabio/local\_faiss\_mcp](https://github.com/nonatofabio/local_faiss_mcp) Thanks to everyone who commented on the first thread—keep the requests coming. Next up: Hybrid search?
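For anyone unfamiliar with the retrieve-then-rerank pattern the `--rerank` flag implements: here is a toy, dependency-free sketch of the two-stage idea. The real tool uses FAISS for stage one and a CrossEncoder for stage two; the `overlap_score` heuristic below is purely illustrative and not how the package scores anything.

```python
def overlap_score(query: str, doc: str) -> float:
    # Toy stand-in for a CrossEncoder: fraction of query tokens found in the doc.
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve_then_rerank(query, docs, first_stage_k=3, final_k=2):
    # Stage 1: cheap candidate selection over the whole corpus.
    candidates = sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)[:first_stage_k]
    # Stage 2: re-score only the shortlist with the (expensive) reranker.
    return sorted(candidates, key=lambda d: overlap_score(query, d), reverse=True)[:final_k]

docs = [
    "FAISS builds an index over dense vectors",
    "pandoc converts DOCX and HTML documents",
    "reranking refines FAISS retrieval results",
]
print(retrieve_then_rerank("FAISS reranking results", docs))
```

The point of the split is that the expensive scorer only ever sees `first_stage_k` candidates, not the whole corpus.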
2025-12-08T05:19:15
https://www.reddit.com/r/LocalLLaMA/comments/1ph46jn/update_local_faiss_mcp_v020_i_listened_to_you/
fabiononato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph46jn
false
null
t3_1ph46jn
/r/LocalLLaMA/comments/1ph46jn/update_local_faiss_mcp_v020_i_listened_to_you/
false
false
self
7
{'enabled': False, 'images': [{'id': 'qencZQ2FSyvAD5vtbBu8wH-9LDLMyXz-B6-8__AdEkk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qencZQ2FSyvAD5vtbBu8wH-9LDLMyXz-B6-8__AdEkk.png?width=108&crop=smart&auto=webp&s=dd5fc41d7fdacc86fa46c57c8be59ae740e0f670', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qencZQ2FSyvAD5vtbBu8wH-9LDLMyXz-B6-8__AdEkk.png?width=216&crop=smart&auto=webp&s=6cad9dc4bb0b95c1b9e60b8a000f9852296aea45', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qencZQ2FSyvAD5vtbBu8wH-9LDLMyXz-B6-8__AdEkk.png?width=320&crop=smart&auto=webp&s=1a256dc52d1524f9e057b58d3f9fd13acd29594a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qencZQ2FSyvAD5vtbBu8wH-9LDLMyXz-B6-8__AdEkk.png?width=640&crop=smart&auto=webp&s=5839038a409352bbc7b74cf7b71219bb15c3d2bf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qencZQ2FSyvAD5vtbBu8wH-9LDLMyXz-B6-8__AdEkk.png?width=960&crop=smart&auto=webp&s=2bead900a5422a17f5b73494b7945f2af1471c3f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qencZQ2FSyvAD5vtbBu8wH-9LDLMyXz-B6-8__AdEkk.png?width=1080&crop=smart&auto=webp&s=3a4ebbab720151bdf3764adf1ac2bf72ddce9876', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qencZQ2FSyvAD5vtbBu8wH-9LDLMyXz-B6-8__AdEkk.png?auto=webp&s=5f2a9501e0d831a32240b92c53f12b93e04fe701', 'width': 1200}, 'variants': {}}]}
Biggest vision-capable model that can run on a Strix Halo 128 GB?
10
I'm looking for something better than Qwen3-VL-30B-A3B, preferably matching or exceeding Qwen3-VL-32B while being easier to run (say, large MoE, gpt-oss sized or GLM-4.5-air sized). Need strong text reading and document layout understanding capabilities. Also needs to be relatively smart in text generation.
2025-12-08T05:08:43
https://www.reddit.com/r/LocalLLaMA/comments/1ph3z94/biggest_visioncapable_model_that_can_run_on_a/
Daniel_H212
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph3z94
false
null
t3_1ph3z94
/r/LocalLLaMA/comments/1ph3z94/biggest_visioncapable_model_that_can_run_on_a/
false
false
self
10
null
Why Mirostat is gone into obscurity ?
1
[removed]
2025-12-08T05:00:17
https://www.reddit.com/r/LocalLLaMA/comments/1ph3t27/why_mirostat_is_gone_into_obscurity/
FriendlyGround9076
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph3t27
false
null
t3_1ph3t27
/r/LocalLLaMA/comments/1ph3t27/why_mirostat_is_gone_into_obscurity/
false
false
self
1
null
21 Days of Building a Small Language Model.
20
Starting tomorrow, I’m beginning a new series: “21 Days of Building a Small Language Model.” https://preview.redd.it/bw2jtqnztw5g1.jpg?width=1920&format=pjpg&auto=webp&s=264ee6545e42bbb39fb7fb9043ad66e8fd6b3c91 As we get close to the end of the year, I want to try something meaningful: help anyone who’s interested build their own small language model by the end of the year. I’ll be following the structure of my book while keeping everything beginner-friendly and hands-on. Just to set real expectations: Building AND understanding a small language model in 21 days is definitely challenging. It won’t be easy. There will be concepts that take time to sink in. But I’m going to do everything I can to break things down in simple language and make the journey as accessible as possible. If you want to follow along, I’ll be posting updates every day at 9am PST on LinkedIn. Happy learning, and see you tomorrow.
2025-12-08T04:46:45
https://www.reddit.com/r/LocalLLaMA/comments/1ph3js7/21_days_of_building_a_small_language_model/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph3js7
false
null
t3_1ph3js7
/r/LocalLLaMA/comments/1ph3js7/21_days_of_building_a_small_language_model/
false
false
https://b.thumbs.redditm…R2pdy4_WqgRw.jpg
20
null
Where do you get stuck when building RAG pipelines?
2
Where do you get stuck when building RAG pipelines? I've been having a lot of conversations with engineers about their RAG setups recently and keep hearing the same frustrations. Some people don't know where to start. They have unstructured data, they know they want a chatbot, their first instinct is to move data from A to B. Then... nothing. Maybe a vector database. That's it. Connecting the dots between ingestion/indexing and the RAG isn't obvious. Others have a working RAG setup, but it's not giving them the results they want. Each iteration is painful. The feedback loop is slow. Time to failure is high. The pattern I keep seeing: you can build twenty different RAGs and still run into the same problems. If your processing pipeline isn't good, your RAG won't be good. What trips you up most? Is it: - Figuring out what steps are even required - Picking the right tools for your specific data - Trying to effectively work with those tools amongst the complexity - Debugging why retrieval quality sucks - Something else entirely Curious what others are experiencing.
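To make "connecting the dots between ingestion/indexing and the RAG" concrete, here is a minimal end-to-end sketch of the steps a processing pipeline has to cover (chunk, embed, index, retrieve). The bag-of-words "embedding" is a stand-in for a real model, purely to show the shape of the pipeline.

```python
from collections import Counter
import math

def chunk(text: str, size: int = 20) -> list[str]:
    # Ingestion step: split a document into fixed-size word windows.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Indexing step: bag-of-words vector (stand-in for a real embedding model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, index: list[tuple[Counter, str]], k: int = 1) -> list[str]:
    # Retrieval step: rank chunks by similarity to the query.
    qv = embed(query)
    ranked = sorted(index, key=lambda p: cosine(qv, p[0]), reverse=True)
    return [text for _, text in ranked[:k]]

doc = ("vector databases store embeddings " * 5
       + "chatbots answer questions from retrieved context " * 5)
index = [(embed(c), c) for c in chunk(doc)]
print(retrieve("how do chatbots answer questions", index))
```

Every real pipeline swaps in better pieces at each stage (semantic chunking, a learned embedder, an ANN index), but if any single stage is weak, the retrieval quality caps out there.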
2025-12-08T04:42:53
https://www.reddit.com/r/LocalLLaMA/comments/1ph3h4j/where_do_you_get_stuck_when_building_rag_pipelines/
OnyxProyectoUno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph3h4j
false
null
t3_1ph3h4j
/r/LocalLLaMA/comments/1ph3h4j/where_do_you_get_stuck_when_building_rag_pipelines/
false
false
self
2
null
I got tired of my agents losing context on topic shifts, so I hacked together a branch router - thoughts?
2
Been messing with multi-turn agents and kept hitting the same wall: conversation goes A → B → back to A, and the LLM has no idea what "A context" even means anymore because it's buried under B. So I built a thing that tags each message as STAY/BRANCH/ROUTE and only pulls relevant history per branch. Uses an LLM call to classify (yeah, I know, LLM-to-manage-LLM, but it actually works for this), working on embeddings as the next step. \~2.7k lines, probably over-engineered, definitely has edge cases I haven't hit yet. [https://github.com/DriftOS/driftos-core](https://github.com/DriftOS/driftos-core) Curious if anyone else has tried solving this differently - I looked at memGPT but wanted something lighter.
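For readers who want the gist without reading the repo: an embedding-free sketch of the STAY/BRANCH/ROUTE decision. The actual project uses an LLM call to classify; the token-overlap threshold below is a hypothetical stand-in just to show the control flow.

```python
def classify_turn(message: str, branches: dict[str, list[str]], active: str,
                  threshold: float = 0.2) -> tuple[str, str]:
    # Returns (decision, branch): STAY on the active branch, ROUTE back to an
    # older branch, or BRANCH into a brand-new topic.
    def overlap(msg, history):
        m = set(msg.lower().split())
        h = set(" ".join(history).lower().split())
        return len(m & h) / max(len(m), 1)

    if overlap(message, branches[active]) >= threshold:
        return "STAY", active
    best = max((b for b in branches if b != active),
               key=lambda b: overlap(message, branches[b]), default=None)
    if best is not None and overlap(message, branches[best]) >= threshold:
        return "ROUTE", best
    return "BRANCH", f"branch-{len(branches)}"

branches = {
    "deploy": ["how do we deploy the model", "use docker for the deploy"],
    "billing": ["what does the gpu instance cost per hour"],
}
print(classify_turn("docker deploy is failing again", branches, active="billing"))
```

The payoff of the tagging is the context assembly: on ROUTE you feed the model only the target branch's history, so "A context" resurfaces cleanly instead of being buried under B.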
2025-12-08T04:33:43
https://www.reddit.com/r/LocalLLaMA/comments/1ph3avj/i_got_tired_of_my_agents_losing_context_on_topic/
scotty595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph3avj
false
null
t3_1ph3avj
/r/LocalLLaMA/comments/1ph3avj/i_got_tired_of_my_agents_losing_context_on_topic/
false
false
self
2
{'enabled': False, 'images': [{'id': 'TJGAuS8tsqZP2UHifo5Eh6cZBmmUl6-FwiHIq0ucSY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TJGAuS8tsqZP2UHifo5Eh6cZBmmUl6-FwiHIq0ucSY4.png?width=108&crop=smart&auto=webp&s=32075ee6dd7f7de12d2fac521e5aa870f8be037a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TJGAuS8tsqZP2UHifo5Eh6cZBmmUl6-FwiHIq0ucSY4.png?width=216&crop=smart&auto=webp&s=b9793f10385cd50f0dd10ec7fde98997a6610acc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TJGAuS8tsqZP2UHifo5Eh6cZBmmUl6-FwiHIq0ucSY4.png?width=320&crop=smart&auto=webp&s=347b3a9c61e03a7f92c23329e3494d3014d9d44a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TJGAuS8tsqZP2UHifo5Eh6cZBmmUl6-FwiHIq0ucSY4.png?width=640&crop=smart&auto=webp&s=c8ce622a8b55729f5f968dbb57b690eab155fe31', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TJGAuS8tsqZP2UHifo5Eh6cZBmmUl6-FwiHIq0ucSY4.png?width=960&crop=smart&auto=webp&s=6618e39c42dc8ec51ce522e2eb34653eac6014a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TJGAuS8tsqZP2UHifo5Eh6cZBmmUl6-FwiHIq0ucSY4.png?width=1080&crop=smart&auto=webp&s=29733be638b6ac80a42529bf6356aceb40a35d93', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TJGAuS8tsqZP2UHifo5Eh6cZBmmUl6-FwiHIq0ucSY4.png?auto=webp&s=ed1a16ece8eb5aa6302abda76beb991a725882b6', 'width': 1200}, 'variants': {}}]}
Parallax by Gradient
1
Hey all! My friends and I built Convo‑Mapper for the Gradient Network hackathon. It’s an open-source conversation-visualization tool that uses layered parallax to make large chat logs feel spatial and easy to scan. Why parallax? By moving background topology, thread clusters, and message windows at different speeds we create depth and hierarchy that: * guides attention to active threads, * makes reply relationships feel tangible, and * improves spatial memory for long conversations. Demo & code: [https://github.com/stanleyhello/convo-mapper](https://github.com/stanleyhello/convo-mapper) Hackathon: [https://gradient.network/campaign/](https://gradient.network/campaign/) Would love feedback on the UX and any ideas for improving thread discovery! [https://www.linkedin.com/feed/update/urn:li:activity:7403637938804424705/](https://www.linkedin.com/feed/update/urn:li:activity:7403637938804424705/)
2025-12-08T03:51:30
https://www.reddit.com/r/LocalLLaMA/comments/1ph2gtq/parallax_by_gradient/
potterheadsonu7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph2gtq
false
null
t3_1ph2gtq
/r/LocalLLaMA/comments/1ph2gtq/parallax_by_gradient/
false
false
self
1
{'enabled': False, 'images': [{'id': 'r-cb1UJb1bCcVjJNhb4ujfnWVRbCWui-SJA1B7F-oEw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r-cb1UJb1bCcVjJNhb4ujfnWVRbCWui-SJA1B7F-oEw.png?width=108&crop=smart&auto=webp&s=f44d4d366a3c9979b52181bae4ed927673b6f342', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/r-cb1UJb1bCcVjJNhb4ujfnWVRbCWui-SJA1B7F-oEw.png?width=216&crop=smart&auto=webp&s=7d7f89138431c26abc95e0287b3c2e468de6def2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/r-cb1UJb1bCcVjJNhb4ujfnWVRbCWui-SJA1B7F-oEw.png?width=320&crop=smart&auto=webp&s=b5a1431617aa21bc3ecdfb6f5629d6802e01d214', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/r-cb1UJb1bCcVjJNhb4ujfnWVRbCWui-SJA1B7F-oEw.png?width=640&crop=smart&auto=webp&s=aec120f8473ad6ad973d99cbfadc18b84904e30e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/r-cb1UJb1bCcVjJNhb4ujfnWVRbCWui-SJA1B7F-oEw.png?width=960&crop=smart&auto=webp&s=699b57b72d7e8177b702edfc9aa14e78b3bba215', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/r-cb1UJb1bCcVjJNhb4ujfnWVRbCWui-SJA1B7F-oEw.png?width=1080&crop=smart&auto=webp&s=7beda00d5c9496221acc0fb3c1394b5799991508', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/r-cb1UJb1bCcVjJNhb4ujfnWVRbCWui-SJA1B7F-oEw.png?auto=webp&s=1d05b712a42c2075c2603e3c7966fca8927d0ac9', 'width': 1200}, 'variants': {}}]}
Miles + FSDP2 = Megatron-Level Performance with More Flexibility
13
Miles training framework now supports FSDP2 integration, delivering Megatron-level performance with basically zero vendor lock-in. SGLang team just shipped this and experiments show numerical alignment with Megatron while supporting advanced features like Context Parallelism out of the box. FSDP2 gives you a flexible, high-performance distributed training backend. Works alongside existing Miles features and scales efficiently for next-gen model training. Perfect if you're: * Training custom models at scale * Looking for Megatron performance without the complexity * Building on SGLang's serving stack and want end-to-end integration Docs: [https://lmsys.org/blog/2025-12-03-miles-fsdp/](https://lmsys.org/blog/2025-12-03-miles-fsdp/) X: [https://x.com/lmsysorg/status/1997768901648871925](https://x.com/lmsysorg/status/1997768901648871925)
2025-12-08T03:42:18
https://www.reddit.com/r/LocalLLaMA/comments/1ph2aad/miles_fsdp2_megatronlevel_performance_with_more/
Expert-Pineapple-740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph2aad
false
null
t3_1ph2aad
/r/LocalLLaMA/comments/1ph2aad/miles_fsdp2_megatronlevel_performance_with_more/
false
false
self
13
{'enabled': False, 'images': [{'id': 'JArGeSmSn0PBBLoMFFNZ8rK7AFz1_B-PL_3Zd8TlQfU', 'resolutions': [{'height': 92, 'url': 'https://external-preview.redd.it/JArGeSmSn0PBBLoMFFNZ8rK7AFz1_B-PL_3Zd8TlQfU.png?width=108&crop=smart&auto=webp&s=67c0f420910ab1eaa96c5a53cbcaa9fe963649a9', 'width': 108}, {'height': 184, 'url': 'https://external-preview.redd.it/JArGeSmSn0PBBLoMFFNZ8rK7AFz1_B-PL_3Zd8TlQfU.png?width=216&crop=smart&auto=webp&s=4273158b8b0085c94297036db39a5f426ebb84f4', 'width': 216}, {'height': 273, 'url': 'https://external-preview.redd.it/JArGeSmSn0PBBLoMFFNZ8rK7AFz1_B-PL_3Zd8TlQfU.png?width=320&crop=smart&auto=webp&s=2cbe5e1ffa51afdde9b5d4db108c55483a6d9a31', 'width': 320}, {'height': 546, 'url': 'https://external-preview.redd.it/JArGeSmSn0PBBLoMFFNZ8rK7AFz1_B-PL_3Zd8TlQfU.png?width=640&crop=smart&auto=webp&s=b258635e2d0fd285e0a0449d149a62da4e6652b5', 'width': 640}, {'height': 820, 'url': 'https://external-preview.redd.it/JArGeSmSn0PBBLoMFFNZ8rK7AFz1_B-PL_3Zd8TlQfU.png?width=960&crop=smart&auto=webp&s=360fcbecb4a057dc13ccc63198d10302cd8b7fd7', 'width': 960}, {'height': 922, 'url': 'https://external-preview.redd.it/JArGeSmSn0PBBLoMFFNZ8rK7AFz1_B-PL_3Zd8TlQfU.png?width=1080&crop=smart&auto=webp&s=b59477b37f9a431fab93bbf7d41d28578a2cd9d9', 'width': 1080}], 'source': {'height': 1189, 'url': 'https://external-preview.redd.it/JArGeSmSn0PBBLoMFFNZ8rK7AFz1_B-PL_3Zd8TlQfU.png?auto=webp&s=33049a14ab16a49bdb41b29cc98647c4ebca54d6', 'width': 1392}, 'variants': {}}]}
Is the 32G ram Mac mini worth it?
0
My current M1 16 GB MacBook Air (2021) doesn’t cut it when it comes to RAM: I do a lot of programming, keep many windows open, run local LLMs, and want to get into video editing. So I was looking at new options. Any advice? Thanks!!
2025-12-08T03:35:39
https://www.reddit.com/r/LocalLLaMA/comments/1ph25hs/is_the_32g_ram_mac_mini_worth_it/
Aggressive_Escape386
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ph25hs
false
null
t3_1ph25hs
/r/LocalLLaMA/comments/1ph25hs/is_the_32g_ram_mac_mini_worth_it/
false
false
self
0
null
Structuring context files to guide LLM code generation?
3
I'm working on a way to make an LLM write better code. I use a search tool called [cleaner](https://github.com/perghosh/Data-oriented-design/releases/tag/cleaner.1.1.0) to gather info and put it in a file. I then give this file to the LLM as background context. This tells the model what to generate and makes it more accurate. I've started implementing this using JSON, but are there better formats? Also, are there quirks when sending files to an LLM, e.g. is it important to place key information at the start of the file, or does that not matter? What are the best practices for describing/structuring this kind of background file so the LLM uses it effectively? *Note: cleaner is able to clean code, but to clean it first has to find code, so finding things is the main logic; just to explain the name.*
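To illustrate one answer to the "important information first" question: a possible JSON layout that puts the summary and constraints at the top of the file, since models tend to weight the start (and end) of a long context most heavily. The schema and all symbol/file names below are hypothetical placeholders, not cleaner's actual output format.

```python
import json

# Hypothetical context-file schema: high-level summary and constraints first,
# detailed search results (symbols, snippets) after. Field names are made up.
context = {
    "summary": "C++ data-oriented utility library; search results gathered by cleaner.",
    "constraints": ["C++17", "no exceptions in hot paths"],
    "symbols": [
        {"name": "table", "kind": "class", "file": "gd_table.h"},  # placeholder
    ],
    "snippets": [
        {"file": "gd_table.cpp", "lines": "120-140", "code": "..."},  # placeholder
    ],
}
# Python dicts preserve insertion order, so the dump keeps "summary" first.
print(json.dumps(context, indent=2)[:60])
```

Whether start-of-file placement actually matters varies by model; the safe pattern is to repeat the few truly critical instructions at both the start and the end of the context.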
2025-12-08T03:29:34
https://i.redd.it/q2zgnupnfw5g1.gif
gosh
i.redd.it
1970-01-01T00:00:00
0
{}
1ph219c
false
null
t3_1ph219c
/r/LocalLLaMA/comments/1ph219c/structuring_context_files_to_guide_llm_code/
false
false
default
3
{'enabled': True, 'images': [{'id': 'q2zgnupnfw5g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=108&crop=smart&format=png8&s=5ccebdce106c5811694d2dacff5904097ad80e0b', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=216&crop=smart&format=png8&s=cc344129d532072a0b0754d9b8467ddbf6dbeada', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=320&crop=smart&format=png8&s=428d79f5ddf13875c99e1d4e91bde652e8b64f9d', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=640&crop=smart&format=png8&s=7265dcb2ee461404e0dc5dfcb9a330b22f93362b', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=960&crop=smart&format=png8&s=42b2b9bbf588098c68eb226ffb04424cd32c88d8', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=1080&crop=smart&format=png8&s=89713e65d0bee178832fd3f717ee009edf9ee4c3', 'width': 1080}], 'source': {'height': 1078, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?format=png8&s=c43cb84f3fdaf3710c74fc5f46b736264d7ebef6', 'width': 1918}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=108&crop=smart&s=311f0bd3773b31fabf40d6bc2660a5fd3b9e8015', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=216&crop=smart&s=d2c086d2161d48ac5507e95205441ad5c5071a4a', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=320&crop=smart&s=0a45c4afe70819e076fdbb2e5c77204d4a4ecdb8', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=640&crop=smart&s=84ed3916fbc5e469cae68c8ac5084be2d526f49d', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=960&crop=smart&s=81f444aca31ec28800ed82051b6eaa1cc36cc600', 'width': 960}, {'height': 607, 'url': 
'https://preview.redd.it/q2zgnupnfw5g1.gif?width=1080&crop=smart&s=b66ea771835af5a8d3076355077f903602c18e42', 'width': 1080}], 'source': {'height': 1078, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?s=822f3f212efb7d7061b199302d1bd53d6e7b3a07', 'width': 1918}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=108&format=mp4&s=1da4056c31e2fb6951875a42128526a2fd6c9a83', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=216&format=mp4&s=ffae08389e4d598018de86c7cebf20082a36c020', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=320&format=mp4&s=bdb347d4eb0ce5d236ce6ef2d9f50b0133263ead', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=640&format=mp4&s=e653b91b14132ccd706beb96af30836d48ed4399', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=960&format=mp4&s=5bf2de76159a431d1354955fee96577d25c1d3af', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?width=1080&format=mp4&s=df4483ff70e52365ebb95ded983b24a22f0f4195', 'width': 1080}], 'source': {'height': 1078, 'url': 'https://preview.redd.it/q2zgnupnfw5g1.gif?format=mp4&s=dbe1b9164d1010329771ff1aa0d1357b538d98bd', 'width': 1918}}}}]}