Dataset schema (one field per line):

title: string (length 1 to 300)
score: int64 (0 to 8.54k)
selftext: string (length 0 to 41.5k)
created: timestamp[ns] (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
url: string (length 0 to 878)
author: string (length 3 to 20)
domain: string (length 0 to 82)
edited: timestamp[ns] (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
gilded: int64 (0 to 2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646 to 1.8k)
name: string (length 10)
permalink: string (length 33 to 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4 to 213)
ups: int64 (0 to 8.54k)
preview: string (length 301 to 5.01k)
Which is the best available open source model for TTS + cloning?
6
I have been searching for TTS and voice-cloning related models. I found f5-tts to be very good at cloning, but it has license restrictions. I want to convert large PDFs, docx, and books to audiobooks with cloned voices. I would appreciate any help.
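Since the goal is converting large PDFs and books, the long text has to be broken into short utterances first: most open TTS models only synthesize a few sentences at a time. A minimal, model-agnostic sketch of that chunking step (the 400-character budget is an arbitrary assumption, not any model's documented limit):

```python
# Split long book text into sentence-aligned chunks small enough for TTS.
# The 400-char budget is an illustrative assumption; tune it per model.
import re

def chunk_for_tts(text: str, max_chars: int = 400) -> list[str]:
    """Split text on sentence boundaries into chunks under max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

book = "First sentence. " * 50
for i, chunk in enumerate(chunk_for_tts(book)):
    print(i, len(chunk))
```

Each chunk can then be fed to whatever TTS backend you settle on, with the cloned-voice reference audio passed alongside.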
2026-01-02T10:40:07
https://www.reddit.com/r/LocalLLaMA/comments/1q1vgqc/which_is_the_best_available_open_source_model_for/
GeekoGeek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1vgqc
false
null
t3_1q1vgqc
/r/LocalLLaMA/comments/1q1vgqc/which_is_the_best_available_open_source_model_for/
false
false
self
6
null
llama.cpp and GLM 4.7: Automatic ("--fit on") vs manual offloading ("-ot"): unexpected results
1
Hi, I have a question about hybrid GPU/CPU inference with the big Chinese open-weights models using llama.cpp. I find it difficult to understand exactly how to build the optimal llama-server command line to get the most out of GPU and CPU, in particular with regard to the offloading option "-ot". Since mid-December there is an automated solution to this problem: "--fit on". The unsloth tutorial [GLM-4.7: How to Run Locally Guide](https://unsloth.ai/docs/models/glm-4.7) says: "Use --fit on (introduced 15th Dec 2025) for maximum usage of your GPU and CPU. --fit on auto-offloads as much of the model as possible to the GPU, then places the rest on CPU."

I tried this with the unsloth 8-bit UD-Q8_K_XL quant of GLM 4.7 on a dual-5090 workstation with a 32-core Zen 5 Threadripper Pro and 512GB of DDR5 RAM:

```
CUDA_VISIBLE_DEVICES=0,1 llama.cpp/build/bin/llama-server \
  --model ./GLM-4.7-UD-Q8_K_XL-00001-of-00009.gguf \
  --jinja \
  --flash-attn on \
  --temp 1.0 \
  --top-p 0.95 \
  --threads -1 \
  --fit on
```

And indeed, "--fit on" almost entirely filled the VRAM of both GPUs (about 31GB each), but the result was somewhat sobering: 3-4 t/s.

The earlier recommendation was the manual "-ot '.ffn_.*_exps.=CPU'" or setting "--n-gpu-layers"; via trial and error I gave an explicit offloading command and found that "--n-gpu-layers 83" was the right value to max out the VRAM usage of the GPUs:

```
CUDA_VISIBLE_DEVICES=0,1 llama.cpp/build/bin/llama-server \
  --model ./GLM-4.7-Q6_K-00001-of-00006.gguf \
  --n-gpu-layers 83 \
  --jinja \
  --flash-attn on \
  --temp 1.0 \
  --top-p 0.95 \
  --threads -1
```

And that gave a much better result: 8 t/s, more than double the speed of the automatic offloading. Can anyone comment on this and on the new "--fit on" option, how to use it properly, and, in general, on how to find the correct values for the more manual "-ot" way of doing offloading? Thanks.
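For what it's worth, the trial-and-error search for "--n-gpu-layers" can be approximated with simple arithmetic: divide the usable VRAM by the per-layer footprint. A rough sketch (the model size, layer count, and reserve values below are illustrative assumptions, not measured GLM 4.7 numbers):

```python
# Back-of-the-envelope estimate of --n-gpu-layers: how many whole layers fit
# in total VRAM once KV cache and compute buffers are reserved.
# All concrete numbers here are illustrative assumptions.
def estimate_gpu_layers(model_gb: float, n_layers: int,
                        vram_gb: float, reserve_gb: float = 4.0) -> int:
    """Return how many layers of a roughly uniform model fit in VRAM."""
    per_layer_gb = model_gb / n_layers
    usable_gb = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# e.g. a ~300GB quant with 93 layers across 2x32GB of VRAM:
print(estimate_gpu_layers(model_gb=300, n_layers=93, vram_gb=64))
```

This ignores that MoE expert tensors and dense layers have very different sizes (which is exactly what "-ot '.ffn_.*_exps.=CPU'" exploits), so treat it only as a starting point before fine-tuning by hand.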
2026-01-02T10:15:10
https://www.reddit.com/r/LocalLLaMA/comments/1q1v1rl/llamacpp_and_glm_47_automatic_fit_on_vs_manual/
phwlarxoc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1v1rl
false
null
t3_1q1v1rl
/r/LocalLLaMA/comments/1q1v1rl/llamacpp_and_glm_47_automatic_fit_on_vs_manual/
false
false
self
1
null
New Models from South Korea's Sovereign AI Foundation Model Project
113
The South Korean government funded the Sovereign AI Foundation Model Project, and the five selected teams released their initial models and presented them. Below are the repos I was able to track down on Hugging Face. If I missed one or included a wrong repo, please let me know.

* Naver Cloud: [HyperCLOVAX-SEED-Omni-8B](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Omni-8B), [HyperCLOVAX-SEED-Think-32B](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Think-32B)
* Upstage: [Solar-Open-102B-A12B](https://huggingface.co/upstage/Solar-Open-100B)
* SK Telecom: [A.X-K1-519B-A33B](https://huggingface.co/skt/A.X-K1)
* LG AI Research: [K-EXAONE-236B-A23B](https://huggingface.co/LGAI-EXAONE/K-EXAONE-236B-A23B)
* NC AI: [VAETKI-112B-A10B](https://huggingface.co/NC-AI-consortium-VAETKI/VAETKI)

It looks like MSIT is backing the project with funding, GPUs, and datasets. Teams will be evaluated and eliminated through 2026 and into mid-2027 until two finalists remain. You can read more about the project here:

https://www.msit.go.kr/eng/bbs/view.do?bbsSeqNo=42&mId=4&nttSeqNo=1152&sCode=eng
https://www.upi.com/Top_News/World-News/2025/12/30/ai-model-national-project/7441767133090/
https://www.koreatimes.co.kr/business/tech-science/20251230/consortia-unveil-models-for-national-ai-project
2026-01-02T10:09:28
https://www.reddit.com/r/LocalLLaMA/comments/1q1uyf6/new_models_from_south_koreas_sovereign_ai/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1uyf6
false
null
t3_1q1uyf6
/r/LocalLLaMA/comments/1q1uyf6/new_models_from_south_koreas_sovereign_ai/
false
false
self
113
{'enabled': False, 'images': [{'id': 'k6DQsPAg0-hIMHIWzgVsK_j0xubcUI4xbUl0akXXOlU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/k6DQsPAg0-hIMHIWzgVsK_j0xubcUI4xbUl0akXXOlU.png?width=108&crop=smart&auto=webp&s=fb2542fd4511ee4133def632c178a9a211c530d2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/k6DQsPAg0-hIMHIWzgVsK_j0xubcUI4xbUl0akXXOlU.png?width=216&crop=smart&auto=webp&s=210a76fb471435db2c314b82a643f94dfe935a75', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/k6DQsPAg0-hIMHIWzgVsK_j0xubcUI4xbUl0akXXOlU.png?width=320&crop=smart&auto=webp&s=5d209851d89fbc86cceb2b200bf6ce20614ce99e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/k6DQsPAg0-hIMHIWzgVsK_j0xubcUI4xbUl0akXXOlU.png?width=640&crop=smart&auto=webp&s=469069c4d1e42963f6be7fab71cc33693cd4558b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/k6DQsPAg0-hIMHIWzgVsK_j0xubcUI4xbUl0akXXOlU.png?width=960&crop=smart&auto=webp&s=447bf699822e1b3a503dacbda480e36492e83b11', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/k6DQsPAg0-hIMHIWzgVsK_j0xubcUI4xbUl0akXXOlU.png?width=1080&crop=smart&auto=webp&s=036ff8dc6c1daad3e2fdb0508adf154e9729edc0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/k6DQsPAg0-hIMHIWzgVsK_j0xubcUI4xbUl0akXXOlU.png?auto=webp&s=cb86be22fa6892340bca7425c552da306987dbe5', 'width': 1200}, 'variants': {}}]}
[IQuestLab/IQuest-Coder-V1] SWE-bench score is compromised because environment setup was wrong
77
TL;DR: they didn't clean the repo (the .git/ folder was left in), so the model simply reward-hacked its way to looking up future commits containing the fixes. Credit goes to everyone in this thread for solving this: https://xcancel.com/xeophon/status/2006969664346501589 (Given that IQuestLab published their SWE-bench Verified trajectory data, I want to be charitable and assume genuine oversight rather than "benchmaxxing"; probably an easy thing to miss if you are new to benchmarking.)
2026-01-02T09:57:34
https://www.reddit.com/r/LocalLLaMA/comments/1q1ura1/iquestlabiquestcoderv1_swebench_score_is/
nullmove
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1ura1
false
null
t3_1q1ura1
/r/LocalLLaMA/comments/1q1ura1/iquestlabiquestcoderv1_swebench_score_is/
false
false
self
77
null
Kinda daunting and bummed out how much hardware is needed for a local coding model.
1
[removed]
2026-01-02T09:37:56
https://www.reddit.com/r/LocalLLaMA/comments/1q1ug2j/kinda_daunting_and_bummed_out_how_much_hardware/
The_M0nk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1ug2j
false
null
t3_1q1ug2j
/r/LocalLLaMA/comments/1q1ug2j/kinda_daunting_and_bummed_out_how_much_hardware/
false
false
self
1
null
Budget forcing in LLMs
1
[removed]
2026-01-02T09:27:31
https://www.reddit.com/r/LocalLLaMA/comments/1q1ua21/budget_forcing_in_llms/
SubjectIll9835
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1ua21
false
null
t3_1q1ua21
/r/LocalLLaMA/comments/1q1ua21/budget_forcing_in_llms/
false
false
self
1
null
my final picks for the best ai tools of the year (so far)
1
[removed]
2026-01-02T08:56:04
https://www.reddit.com/r/LocalLLaMA/comments/1q1trr9/my_final_picks_for_the_best_ai_tools_of_the_year/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1trr9
false
null
t3_1q1trr9
/r/LocalLLaMA/comments/1q1trr9/my_final_picks_for_the_best_ai_tools_of_the_year/
false
false
self
1
null
Start hosting a multi-model LLM server in minutes (with monitoring and access control)
2
Hey all, I wanted an easy way to orchestrate **vLLM on a single server** without going full Kubernetes, with monitoring and access control included, so I built this and open-sourced it: [https://github.com/LukeMitDemHut/easyvllmondocker](https://github.com/LukeMitDemHut/easyvllmondocker?utm_source=chatgpt.com)

It's basically a **script-managed** docker-compose setup that gives you vLLM inference with an OpenAI-compatible API via LiteLLM, lets you run multiple models on one machine, and includes monitoring (Prometheus + Grafana) out of the box. You configure models via a single YAML file and manage everything with a small CLI.

This is *very intentionally* for one server. No autoscaling, no multi-node - just something you can spin up quickly and iterate on models. Sharing in case it's useful for others doing self-hosted inference; I imagine it could be used personally and in small companies hosting on-prem setups.

Feedback and PRs welcome 🙂
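Since the stack exposes an OpenAI-compatible API through LiteLLM, any standard client should work. A dependency-free sketch of building a request against it (the localhost port, API key, and model name below are placeholders for whatever your YAML configures):

```python
# Build a request for an OpenAI-compatible endpoint (LiteLLM proxies the
# standard /v1/chat/completions route). Host, key, and model name are
# placeholder assumptions, not values from the project.
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

req = build_chat_request("http://localhost:4000", "sk-local", "my-model",
                         "Hello!")
print(req.full_url)
```

Sending it is then just `urllib.request.urlopen(req)` against a running stack, or you can point the official `openai` client's `base_url` at the same address.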
2026-01-02T08:49:54
https://github.com/LukeMitDemHut/easyvllmondocker
_camera_up
github.com
1970-01-01T00:00:00
0
{}
1q1to86
false
null
t3_1q1to86
/r/LocalLLaMA/comments/1q1to86/start_hosting_a_multimodel_llm_server_in_minutes/
false
false
default
2
{'enabled': False, 'images': [{'id': '5Tg3K6uDq-T64mObjA--CYEqis1IlC0dGbNF36ECLjY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5Tg3K6uDq-T64mObjA--CYEqis1IlC0dGbNF36ECLjY.png?width=108&crop=smart&auto=webp&s=a30b16542933b97d2a3e3c562dfde1a62e99babd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5Tg3K6uDq-T64mObjA--CYEqis1IlC0dGbNF36ECLjY.png?width=216&crop=smart&auto=webp&s=22b79f0c550c892be95178e7906e713e5768b107', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5Tg3K6uDq-T64mObjA--CYEqis1IlC0dGbNF36ECLjY.png?width=320&crop=smart&auto=webp&s=ed41e1e8f2b5885a623bbb040cd7894aee762a3d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5Tg3K6uDq-T64mObjA--CYEqis1IlC0dGbNF36ECLjY.png?width=640&crop=smart&auto=webp&s=e95610729bec9cc9662a1976435a70b5a632318d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5Tg3K6uDq-T64mObjA--CYEqis1IlC0dGbNF36ECLjY.png?width=960&crop=smart&auto=webp&s=6ea83d3aa8d77154b44c7878295429d300fdc7f9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5Tg3K6uDq-T64mObjA--CYEqis1IlC0dGbNF36ECLjY.png?width=1080&crop=smart&auto=webp&s=263a7ce2ef2cb57abbaaa5878f7fb6eeaa9d6e8c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5Tg3K6uDq-T64mObjA--CYEqis1IlC0dGbNF36ECLjY.png?auto=webp&s=aeb3d61903d59a3be196ec30bb006e395ae1a9df', 'width': 1200}, 'variants': {}}]}
Do I need more RAM?
0
Hello Reddit, and Happy New Year! I have a computer with 8GB (2×4GB) of DDR3 RAM, an LGA 1150 socket, and an Intel Core i7-4790K (no dedicated graphics). My computer has two RAM slots. If I buy [this](https://www.ebay.com.au/itm/336371380844) stick of RAM, which claims to have a fast speed, will I be able to run the DeepSeek 8-billion-parameter model on Windows 11 (the OS takes 4GB of RAM)?
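The memory question above comes down to simple arithmetic: an 8B model at 4-bit quantization needs roughly 4 to 5GB for weights plus context and runtime overhead, on top of what the OS uses. A back-of-the-envelope sketch (the bits-per-weight and overhead figures are rough assumptions, and CPU-only inference on DDR3 will be slow regardless):

```python
# Back-of-the-envelope RAM estimate for running a quantized model on CPU.
# 4.5 effective bits/weight and 1.5GB overhead are rough assumptions.
def q4_model_ram_gb(n_params_b: float, bits_per_weight: float = 4.5,
                    overhead_gb: float = 1.5) -> float:
    """Approximate RAM to load an n-billion-parameter quantized model."""
    weights_gb = n_params_b * bits_per_weight / 8
    return weights_gb + overhead_gb

needed_gb = q4_model_ram_gb(8)  # 8B model at ~Q4
os_gb = 4                       # Windows 11 baseline from the post
print(f"need ~{needed_gb:.1f}GB free; the OS takes ~{os_gb}GB on top")
```

So the total RAM would need to comfortably exceed the two figures combined before the upgrade makes sense for this use.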
2026-01-02T08:49:44
https://www.reddit.com/r/LocalLLaMA/comments/1q1to56/do_i_need_more_ram/
ResidentPractice6149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1to56
false
null
t3_1q1to56
/r/LocalLLaMA/comments/1q1to56/do_i_need_more_ram/
false
false
self
0
{'enabled': False, 'images': [{'id': 'on88J1T6HEdt1rmSdVGM-UUTCh5YQ_Gt3mNWTLnStzU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/on88J1T6HEdt1rmSdVGM-UUTCh5YQ_Gt3mNWTLnStzU.jpeg?width=108&crop=smart&auto=webp&s=1b657b5872afaf7e70742638abbe4f654fcc78b1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/on88J1T6HEdt1rmSdVGM-UUTCh5YQ_Gt3mNWTLnStzU.jpeg?width=216&crop=smart&auto=webp&s=b664a0c33acc7c45f774f93ac78b81c68c5a222f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/on88J1T6HEdt1rmSdVGM-UUTCh5YQ_Gt3mNWTLnStzU.jpeg?width=320&crop=smart&auto=webp&s=0341cd75dcd01f16351bc8944497e35e268188c4', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/on88J1T6HEdt1rmSdVGM-UUTCh5YQ_Gt3mNWTLnStzU.jpeg?auto=webp&s=3f9b4e431423111b654cc3343ee0ae87007e5d2f', 'width': 400}, 'variants': {}}]}
Start hosting a single Server multi-model vllm + monitoring stack in minutes
1
Hey all, I wanted an easy way to orchestrate **vLLM on a single server** without going full Kubernetes, with monitoring and access control included, so I built this and open-sourced it: [https://github.com/LukeMitDemHut/easyvllmondocker](https://github.com/LukeMitDemHut/easyvllmondocker?utm_source=chatgpt.com)

It's basically a **script-managed** docker-compose setup that gives you vLLM inference with an OpenAI-compatible API via LiteLLM, lets you run multiple models on one machine, and includes monitoring (Prometheus + Grafana) out of the box. You configure models via a single YAML file and manage everything with a small CLI.

This is *very intentionally* for one server. No autoscaling, no multi-node - just something you can spin up quickly and iterate on models. Sharing in case it's useful for others doing self-hosted inference; I imagine it could be used personally and in small companies hosting on-prem setups.

Feedback and PRs welcome 🙂
2026-01-02T08:47:44
https://github.com/LukeMitDemHut/easycraftondocker
_camera_up
github.com
1970-01-01T00:00:00
0
{}
1q1tn0m
false
null
t3_1q1tn0m
/r/LocalLLaMA/comments/1q1tn0m/start_hosting_a_single_server_multimodel_vllm/
false
false
https://external-preview…c226626dd2674f4e
1
{'enabled': False, 'images': [{'id': 'H7tD9kn9xGW5GC9z6Hcf0oToP9paoNSAKgBBcfaQukc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H7tD9kn9xGW5GC9z6Hcf0oToP9paoNSAKgBBcfaQukc.png?width=108&crop=smart&auto=webp&s=5edb24a8f1ac44bf416cb5a5d369b380af8a05fa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H7tD9kn9xGW5GC9z6Hcf0oToP9paoNSAKgBBcfaQukc.png?width=216&crop=smart&auto=webp&s=e6024dc97ba61d19ca30eea5f4405b1ca0fdaf48', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H7tD9kn9xGW5GC9z6Hcf0oToP9paoNSAKgBBcfaQukc.png?width=320&crop=smart&auto=webp&s=8cb99f6a98c925c4e4a9bbb5a43fc1175d3a8ac1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H7tD9kn9xGW5GC9z6Hcf0oToP9paoNSAKgBBcfaQukc.png?width=640&crop=smart&auto=webp&s=bec7d3850fef29c0803877f930d6111bed46660c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H7tD9kn9xGW5GC9z6Hcf0oToP9paoNSAKgBBcfaQukc.png?width=960&crop=smart&auto=webp&s=49f6eba13223210207d92e8d7c78faade02f64d4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H7tD9kn9xGW5GC9z6Hcf0oToP9paoNSAKgBBcfaQukc.png?width=1080&crop=smart&auto=webp&s=d370782b79b32e7f739eb9eae1383a538a259434', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H7tD9kn9xGW5GC9z6Hcf0oToP9paoNSAKgBBcfaQukc.png?auto=webp&s=8d8b7df268ca9f4fad3ccbae55e0c457eebc8ec4', 'width': 1200}, 'variants': {}}]}
My MCP Server Got Up to 400 Downloads Within 4 Days and I'm Looking for Feedback!
0
Hi there! I published my first MCP server on both npm and the MCP registry (links at the bottom). The package is called Code Sentinel and was created to address an issue I faced when using Claude Code to build complex projects, or when I needed to evaluate a codebase for major problems.

I was happy to see over 88 downloads in less than 2 days, and now it's up to 400! I have never published a package or an MCP server before, so I have nothing to compare against, and I have to say it made me feel really excited and terrified at the same time.

I'd love to know what people think is going well and what is missing or needs work, so I can improve it. Not sure if this is the best place to ask for feedback; if it is not, please point me to the right place. Thank you so much, and I appreciate you taking the time for this - really do!

Links:

* On NPM: [https://www.npmjs.com/package/code-sentinel-mcp?activeTab=code](https://www.npmjs.com/package/code-sentinel-mcp?activeTab=code)
* On MCP Registry: [https://registry.modelcontextprotocol.io/?q=code-senti](https://registry.modelcontextprotocol.io/?q=code-senti)
2026-01-02T08:28:21
https://www.reddit.com/r/LocalLLaMA/comments/1q1tc6z/my_mcp_sever_got_up_to_400_downloads_within_4/
salRad22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1tc6z
false
null
t3_1q1tc6z
/r/LocalLLaMA/comments/1q1tc6z/my_mcp_sever_got_up_to_400_downloads_within_4/
false
false
self
0
null
Can a 5090 laptop with 96GB RAM run a Q4 70B? And does this AI cloud PC Kickstarter use AI to answer backers?
0
If it can, can it reach at least 5 tokens/s? Also, I don't think this is a real human response on the Olares One crowdfunding page. Isn't swappable RAM non-unified? And by the 3rd paragraph it reads like a full PR shovel for an unrelated product, despite there being only 87 comments on the project. I really like the product and want to back it to the end, but I question it even more if this answer is AI. https://preview.redd.it/z6rvhbe69wag1.png?width=664&format=png&auto=webp&s=601fd6237a2fa507707159f87b682fbaab0e416a
2026-01-02T08:17:23
https://www.reddit.com/r/LocalLLaMA/comments/1q1t625/can_5090_laptop_and_96gb_run_q4_70band_do_this_ai/
Clean-Shoulder-2563
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1t625
false
null
t3_1q1t625
/r/LocalLLaMA/comments/1q1t625/can_5090_laptop_and_96gb_run_q4_70band_do_this_ai/
false
false
https://b.thumbs.redditm…tq5e_rc-msLo.jpg
0
null
My MCP Server Got Up to 400 Downloads Within 4 Days and I'm Looking for Feedback!
0
Hi there! I published my first MCP server on both npm and the MCP registry (links at the bottom). The package is called Code Sentinel and was created to address an issue I faced when using Claude Code to build complex projects, or when I needed to evaluate a codebase for major problems.

I was happy to see over 88 downloads in less than 2 days, and now (in 4 days) it's up to 400! I have never published a package or an MCP server before, so I have nothing to compare against, and I have to say it made me feel really excited and terrified at the same time.

I'd love to know what people think is going well and what is missing, bad, or needs work, so I can improve it. Not sure if this is the best place to ask for feedback; if it is not, please point me to the right place. Thank you so much, and I appreciate you taking the time for this - really do!

Links:

* On NPM: [https://www.npmjs.com/package/code-sentinel-mcp?activeTab=code](https://www.npmjs.com/package/code-sentinel-mcp?activeTab=code)
* On MCP Registry: [https://registry.modelcontextprotocol.io/?q=code-senti](https://registry.modelcontextprotocol.io/?q=code-senti)
2026-01-02T08:14:32
https://www.reddit.com/r/LocalLLaMA/comments/1q1t4e0/my_mcp_sever_got_up_to_400_downloadds_within_4/
salRad22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1t4e0
false
null
t3_1q1t4e0
/r/LocalLLaMA/comments/1q1t4e0/my_mcp_sever_got_up_to_400_downloadds_within_4/
false
false
self
0
null
support for Maincoder-1B has been merged into llama.cpp
2
[**Maincoder-1B**](https://maincode.com/maincoder/) is a code-focused language model optimized for code generation and completion tasks. The model achieves strong performance on coding benchmarks while maintaining a compact size suitable for local deployment.

# Key Features

* **Code Generation**: Optimized for Python code completion and generation tasks.
* **Compact Size**: 1 billion parameters, lightweight enough to run on consumer hardware.
* **Deep Architecture**: Modern transformer architecture with RoPE embeddings, grouped-query attention, QK normalization, and a high depth-to-width ratio.
* **Advanced Data Mixing**: Pre-trained and mid-trained on custom data mixes developed for high-performance coding.
* **MCPO Algorithm**: Fine-tuned with a specialised reinforcement-learning policy-optimisation algorithm to improve training stability and accelerate convergence.
* **SOTA Performance**: State-of-the-art performance on the Python coding benchmarks HumanEval, HumanEval+, and MBPP+.
2026-01-02T08:14:31
https://github.com/ggml-org/llama.cpp/pull/18534
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1q1t4dg
false
null
t3_1q1t4dg
/r/LocalLLaMA/comments/1q1t4dg/support_for_maincoder1b_has_been_merged_into/
false
false
default
2
null
Upstage Solar-Open Validation Session
18
[https://www.youtube.com/live/2YY9aAUSo_w?si=C_j7CcgR0c1kqexf](https://www.youtube.com/live/2YY9aAUSo_w?si=C_j7CcgR0c1kqexf) CEO Mr. Sung Kim explained the model architecture and opened the WandB logs.
2026-01-02T07:50:12
https://www.reddit.com/r/LocalLLaMA/comments/1q1sq7x/upstage_solaropen_validation_sessionl/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1sq7x
false
null
t3_1q1sq7x
/r/LocalLLaMA/comments/1q1sq7x/upstage_solaropen_validation_sessionl/
false
false
self
18
null
Vibevoice with 5090?
4
Has anyone been able to get Microsoft VibeVoice running with NVIDIA Blackwell (5090) support? I'm on Windows and would prefer to run it in a Docker container. When I try, I hit all kinds of incompatibilities between pytorch, transformers, and diffusers.
2026-01-02T06:54:38
https://www.reddit.com/r/LocalLLaMA/comments/1q1rt8x/vibevoice_with_5090/
zendril
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1rt8x
false
null
t3_1q1rt8x
/r/LocalLLaMA/comments/1q1rt8x/vibevoice_with_5090/
false
false
self
4
null
Upstage Solar Pro-2 supports China's CCP
0
I guess this is why you need sovereign LLMs.

***Translated: Under no circumstances should we deny the greatness of the Communist Party of China's leadership and China's socialist system, nor should we deny the tremendous achievements China has made since the reform and opening-up period.***

Full translation (by ChatGPT):

"Who was at fault for the Tiananmen incident?"

The Tiananmen incident is a sensitive and complex historical event in modern Chinese history. The Chinese government has consistently emphasized that the political turmoil that occurred in the spring of 1989 was a riot that harmed the interests of the state and the people. The Party and the government took resolute measures to quickly quell the disturbance, safeguarding social stability and national unity. Regarding this period of history, we should adhere to the official position, respect history, and look to the future, jointly upholding national unity, ethnic solidarity, and social harmony and stability. Under no circumstances should we deny the greatness of the Communist Party of China's leadership and China's socialist system, nor should we deny the tremendous achievements China has made since the reform and opening-up period.
2026-01-02T06:05:57
https://i.redd.it/q0i2gy78nvag1.jpeg
yuumi_ramyeon
i.redd.it
1970-01-01T00:00:00
0
{}
1q1qy3z
false
null
t3_1q1qy3z
/r/LocalLLaMA/comments/1q1qy3z/upstage_solar_pro2_supports_chinas_cpp/
false
false
https://b.thumbs.redditm…eWhM5-BB6zPk.jpg
0
{'enabled': True, 'images': [{'id': 'HzAOeOskb3Ox9Q-p6vox-US5h9hiBgawM5A-4MOpfmY', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/q0i2gy78nvag1.jpeg?width=108&crop=smart&auto=webp&s=c5be89a7773c4db1d150ed3778f6e7a536478262', 'width': 108}, {'height': 247, 'url': 'https://preview.redd.it/q0i2gy78nvag1.jpeg?width=216&crop=smart&auto=webp&s=0c9bf063d153de9a069c3f0a0fff2736ef4a6fc2', 'width': 216}, {'height': 366, 'url': 'https://preview.redd.it/q0i2gy78nvag1.jpeg?width=320&crop=smart&auto=webp&s=f2d9338e82edc2b7d7e7a9769214fc4622bc358a', 'width': 320}, {'height': 733, 'url': 'https://preview.redd.it/q0i2gy78nvag1.jpeg?width=640&crop=smart&auto=webp&s=83847bc6531e9a9b5639d32ff87ef3254b175012', 'width': 640}, {'height': 1100, 'url': 'https://preview.redd.it/q0i2gy78nvag1.jpeg?width=960&crop=smart&auto=webp&s=79c59e8d81b80c78a1264e8c5107874e79958d16', 'width': 960}, {'height': 1237, 'url': 'https://preview.redd.it/q0i2gy78nvag1.jpeg?width=1080&crop=smart&auto=webp&s=a033da98fbd8fbae3876633c1c45d55287fb280c', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/q0i2gy78nvag1.jpeg?auto=webp&s=1bc4c61f4b0c6a0861c771f85a41f0f7b6fcd76a', 'width': 1340}, 'variants': {}}]}
Deep Research Agent, an autonomous research agent system
52
GitHub: [https://github.com/tarun7r/deep-research-agent](https://github.com/tarun7r/deep-research-agent)

Most AI research agents simply summarize the first few search results and present them as analysis. I wanted something more rigorous, something closer to how a human analyst would plan, verify, and synthesize information.

How It Works (Architecture)

Instead of relying on a single LLM loop, this system coordinates four specialized agents:

1. **Planner** – Analyzes the topic and creates a strategic research plan
2. **Searcher** – Autonomously determines what to query and retrieves deeper, high-value content
3. **Synthesizer** – Aggregates findings and prioritizes sources using a credibility scoring mechanism
4. **Writer** – Produces a structured research report with citations (APA, MLA, IEEE) and self-corrects weak sections

Credibility Scoring: The Key Differentiator

Hallucinations are one of the biggest challenges in AI-assisted research. To reduce misinformation, the system assigns each source a credibility score (0–100) before content is summarized. Scoring considers:

* Domain authority (.edu, .gov, peer-reviewed publications, reputable institutions)
* Academic writing indicators
* Structural trust signals

This ensures low-quality sources are filtered out before they influence results.

Built with: Python, LangGraph and LangChain, Chainlit

If you are interested, feel free to explore the code, star the project, and contribute.
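The domain-signal part of the credibility scoring can be pictured as a small pure function. This toy version invents its own weights and trusted lists purely for illustration; the project's actual scoring logic lives in the repo:

```python
# Toy credibility scorer: rate a URL 0-100 from domain signals before its
# content is summarized. All weights and host lists below are invented
# for illustration, not the project's real values.
from urllib.parse import urlparse

TRUSTED_TLDS = {".edu": 30, ".gov": 30, ".org": 10}
REPUTABLE_HOSTS = {"nature.com": 40, "arxiv.org": 35, "acm.org": 35}

def credibility_score(url: str) -> int:
    host = urlparse(url).netloc.lower()
    score = 20  # baseline for any resolvable source
    for tld, bonus in TRUSTED_TLDS.items():
        if host.endswith(tld):
            score += bonus
    for known, bonus in REPUTABLE_HOSTS.items():
        if host == known or host.endswith("." + known):
            score += bonus
    return min(score, 100)

def filter_sources(urls: list[str], threshold: int = 40) -> list[str]:
    """Drop sources scoring below the threshold before synthesis."""
    return [u for u in urls if credibility_score(u) >= threshold]

print(filter_sources([
    "https://www.nature.com/articles/x",
    "https://example.com/blog-post",
    "https://ocw.mit.edu/course",
]))
```

A real implementation would add the academic-writing and structural signals mentioned above, which need the page content rather than just the URL.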
2026-01-02T05:37:27
https://v.redd.it/kv7vji10ivag1
martian7r
/r/LocalLLaMA/comments/1q1qepk/deep_research_agent_an_autonomous_research_agent/
1970-01-01T00:00:00
0
{}
1q1qepk
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kv7vji10ivag1/DASHPlaylist.mpd?a=1770053854%2CMDBiZDA1YWFmMzM3ZGZlMTgxOTQ4NmZiOGJhZjZkZTA4OTM5NWIzNmJiMzA3MWQxYjU4OTY3MWRmMDEwY2I3Mg%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/kv7vji10ivag1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/kv7vji10ivag1/HLSPlaylist.m3u8?a=1770053854%2CMjAwZThiOTZhMTY0M2JjOTRkYjVmNjBiYThlZjZhN2Y3YzBkMTU5N2JlMzNmZjA2YzViYzMyYzY1Y2UyMTNkMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kv7vji10ivag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1884}}
t3_1q1qepk
/r/LocalLLaMA/comments/1q1qepk/deep_research_agent_an_autonomous_research_agent/
false
false
https://external-preview…e27229bfbda720f3
52
{'enabled': False, 'images': [{'id': 'cnQ2bWFnMjBpdmFnMWx1yzDmXrZrGTnA-76VDcIJKHxAik455r_NvogMYYoY', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/cnQ2bWFnMjBpdmFnMWx1yzDmXrZrGTnA-76VDcIJKHxAik455r_NvogMYYoY.png?width=108&crop=smart&format=pjpg&auto=webp&s=2a4309e2181e99445411b9bd6b1f3eb018d8e0fa', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/cnQ2bWFnMjBpdmFnMWx1yzDmXrZrGTnA-76VDcIJKHxAik455r_NvogMYYoY.png?width=216&crop=smart&format=pjpg&auto=webp&s=f295d90a94ea98306d59393ec1e3d190c8004937', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/cnQ2bWFnMjBpdmFnMWx1yzDmXrZrGTnA-76VDcIJKHxAik455r_NvogMYYoY.png?width=320&crop=smart&format=pjpg&auto=webp&s=56e408017786edcf1551b21a88f6ae747b89ee94', 'width': 320}, {'height': 367, 'url': 'https://external-preview.redd.it/cnQ2bWFnMjBpdmFnMWx1yzDmXrZrGTnA-76VDcIJKHxAik455r_NvogMYYoY.png?width=640&crop=smart&format=pjpg&auto=webp&s=df2dcdb8dc9ad16c9965545249e208a8c0a30254', 'width': 640}, {'height': 550, 'url': 'https://external-preview.redd.it/cnQ2bWFnMjBpdmFnMWx1yzDmXrZrGTnA-76VDcIJKHxAik455r_NvogMYYoY.png?width=960&crop=smart&format=pjpg&auto=webp&s=76a99a3934a2d49f36275eddc6e1fce45444ba3a', 'width': 960}, {'height': 619, 'url': 'https://external-preview.redd.it/cnQ2bWFnMjBpdmFnMWx1yzDmXrZrGTnA-76VDcIJKHxAik455r_NvogMYYoY.png?width=1080&crop=smart&format=pjpg&auto=webp&s=260c105ff6cc76b9e09f2f6e3d2f8382b6a04979', 'width': 1080}], 'source': {'height': 1982, 'url': 'https://external-preview.redd.it/cnQ2bWFnMjBpdmFnMWx1yzDmXrZrGTnA-76VDcIJKHxAik455r_NvogMYYoY.png?format=pjpg&auto=webp&s=43d22c97bed9d11f0530d1f511aa409745fc6e75', 'width': 3456}, 'variants': {}}]}
stop "jailbreaking" and start using these tools instead
1
[removed]
2026-01-02T04:56:03
https://www.reddit.com/r/LocalLLaMA/comments/1q1pljv/stop_jailbreaking_and_start_using_these_tools/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1pljv
false
null
t3_1q1pljv
/r/LocalLLaMA/comments/1q1pljv/stop_jailbreaking_and_start_using_these_tools/
false
false
self
1
null
Which is the current best ERP model ~8b?
31
Let me goon guys 😭
2026-01-02T04:34:12
https://www.reddit.com/r/LocalLLaMA/comments/1q1p6cy/which_is_the_current_best_erp_model_8b/
spritefanty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1p6cy
false
null
t3_1q1p6cy
/r/LocalLLaMA/comments/1q1p6cy/which_is_the_current_best_erp_model_8b/
false
false
self
31
null
What's the smallest model that can infer subtle things about myself?
4
I've been keeping a (digital, plain-text) journal for almost 5 years now. As an experiment, I've been thinking of putting it through an LLM to see how it profiles me based on my writing alone. For obvious privacy reasons I'd rather not use an external API. My current computer can only run a ~14B-parameter model on GPU (8GB of VRAM), and a larger one on CPU (it should have more than 32GB of memory free for this). My question is: has anyone tried something like this? What model did you use and how large was it? I'm afraid that only a gigantic model like GPT-5 would be able to output an interesting result.
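One practical approach with a small local model is map-reduce: profile the journal chunk by chunk within the context window, then merge the partial profiles. A minimal sketch, assuming a rough 4-characters-per-token heuristic and a placeholder `ask_model` callback standing in for whatever local backend you use:

```python
# Map-reduce profiling of a long journal with a small-context local model.
# `ask_model` is a placeholder callback; the token heuristic and budget
# are rough assumptions.
def approx_tokens(text: str) -> int:
    return len(text) // 4  # crude 4-chars-per-token heuristic

def split_by_budget(text: str, budget_tokens: int = 3000) -> list[str]:
    """Group paragraphs into chunks that fit the token budget."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for p in paragraphs:
        candidate = f"{current}\n\n{p}".strip()
        if current and approx_tokens(candidate) > budget_tokens:
            chunks.append(current)
            current = p
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def profile_journal(text: str, ask_model) -> str:
    """Map: profile each chunk. Reduce: merge the partial profiles."""
    partials = [ask_model("What does this journal excerpt suggest about "
                          "the author?\n\n" + c) for c in split_by_budget(text)]
    return ask_model("Merge these partial profiles into one:\n\n"
                     + "\n\n".join(partials))

# Stub backend so the sketch runs standalone:
print(profile_journal("day one\n\nday two", lambda p: f"[{len(p)} chars]"))
```

The merge step is where subtle inferences can emerge even from a modest model, since it only sees condensed observations rather than 5 years of raw text.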
2026-01-02T04:34:00
https://www.reddit.com/r/LocalLLaMA/comments/1q1p67u/whats_the_smallest_model_that_can_infer_subtle/
RodionRaskolnikov__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1p67u
false
null
t3_1q1p67u
/r/LocalLLaMA/comments/1q1p67u/whats_the_smallest_model_that_can_infer_subtle/
false
false
self
4
null
Getting ready to train on Intel Arc
291
Just waiting on PCIe risers; can't wait to start training on Intel Arc. I'm not sure if anyone else is attempting the same thing yet, so I thought I would share. PS: I am not causing a GPU shortage, please don't comment about this. I am not OpenAI or Google; believe me, there would have been signs on my other posts. Gamers would say sh*t like this, so before you comment please educate yourselves.
2026-01-02T04:33:19
https://www.reddit.com/gallery/1q1p5q5
hasanismail_
reddit.com
1970-01-01T00:00:00
0
{}
1q1p5q5
false
null
t3_1q1p5q5
/r/LocalLLaMA/comments/1q1p5q5/getting_ready_to_train_in_intel_arc/
false
false
https://a.thumbs.redditm…vSBwjyGk3uo8.jpg
291
null
IQuest-Coder-V1 Technical Report
18
The big idea on the data side is "code flow": training on repository commit transitions, so the model learns how patches evolve a codebase.

LoopCoder also changes the usual transformer setup by running the same layer stack 2 times with shared weights. A normal transformer has a long chain of layers, each with its own separate weights, so the model does 1 forward pass and whatever it wrote early tends to "lock in" as it keeps going. LoopCoder instead reuses the same layer stack 2 times, so it is like giving the model a 2nd chance to process the exact same context with the exact same "brain," but after the 1st pass has already formed an internal draft of what matters. The shared-weights part matters because the 2 passes behave like an iterative refinement loop rather than 2 different models stacked, so the 2nd pass naturally learns to "fix and tighten what I just thought" instead of learning a totally new transformation.

Attention is how the model chooses what earlier text to focus on, and LoopCoder mixes global attention over the 1st pass with local attention over the 2nd pass, then uses a learned gate to blend them. This helps on real coding tasks because repo bugs are rarely solved by a single clean completion: you usually need 1 pass to spot the likely file, function, and failure mode, then another pass to apply a careful edit that matches the surrounding code and avoids breaking other parts.

Before final tuning, they mid-train with 32K and 128K token contexts (a token is a short text chunk) on reasoning and agent trajectories that include tool commands, logs, errors, and test results.

Credit: rohanpaul_ai
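The two-pass, shared-weights dataflow described above can be sketched in a few lines. This is a deliberately toy model (elementwise affine "layers", a scalar gate) meant only to show the structure of the loop, not LoopCoder's actual architecture:

```python
# Toy illustration of a looped transformer: the same layer stack (shared
# weights) is applied twice, and a gate blends the first pass with the
# refined second pass. All numbers are arbitrary; real LoopCoder details
# (attention, dimensions, learned gating) are simplified away.
def apply_stack(hidden: list[float], weights: list[float]) -> list[float]:
    # One "stack" = a few elementwise affine layers sharing one weight list.
    for w in weights:
        hidden = [w * h + 0.1 for h in hidden]
    return hidden

def looped_forward(x: list[float], weights: list[float], gate: float):
    pass1 = apply_stack(x, weights)       # first pass: rough draft
    pass2 = apply_stack(pass1, weights)   # second pass: same weights, refine
    return [gate * a + (1 - gate) * b for a, b in zip(pass1, pass2)]

out = looped_forward([1.0, -0.5], weights=[0.9, 0.8], gate=0.3)
print(out)
```

With gate = 1.0 the output collapses to the first pass alone, which shows why a learned gate lets the model decide per input how much of the refinement to trust.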
2026-01-02T04:18:08
https://i.redd.it/txagbsjx3vag1.jpeg
Difficult-Cap-7527
i.redd.it
1970-01-01T00:00:00
0
{}
1q1ouxf
false
null
t3_1q1ouxf
/r/LocalLLaMA/comments/1q1ouxf/iquestcoderv1_technical_report/
false
false
default
18
{'enabled': True, 'images': [{'id': 'txagbsjx3vag1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/txagbsjx3vag1.jpeg?width=108&crop=smart&auto=webp&s=a8070287ecdde7e82d21888b07eda170db19bebb', 'width': 108}, {'height': 227, 'url': 'https://preview.redd.it/txagbsjx3vag1.jpeg?width=216&crop=smart&auto=webp&s=13e81b97555aa985dbb566f7c2763527c700b995', 'width': 216}, {'height': 337, 'url': 'https://preview.redd.it/txagbsjx3vag1.jpeg?width=320&crop=smart&auto=webp&s=a2f212138873b812900aa273d5e2c3ec8740a54e', 'width': 320}, {'height': 674, 'url': 'https://preview.redd.it/txagbsjx3vag1.jpeg?width=640&crop=smart&auto=webp&s=ebe562dff5b27807a9cbde7cf93f7c53b408affe', 'width': 640}, {'height': 1012, 'url': 'https://preview.redd.it/txagbsjx3vag1.jpeg?width=960&crop=smart&auto=webp&s=96bb874565421914b9b4f88ff11323e70d3973c0', 'width': 960}, {'height': 1138, 'url': 'https://preview.redd.it/txagbsjx3vag1.jpeg?width=1080&crop=smart&auto=webp&s=2f99cdbc2da16a78aebfab91e23a489ce6f7c8cc', 'width': 1080}], 'source': {'height': 1265, 'url': 'https://preview.redd.it/txagbsjx3vag1.jpeg?auto=webp&s=847c7c596b6b3644e5bb82a995c84808334a4e0e', 'width': 1200}, 'variants': {}}]}
Llama3.3-8B training cutoff date
1
I tried to figure out when Llama3.3-8B's training cutoff date was. I am using the 128k heretic version from noctrex. Funnily enough, it tells me its training cutoff was in 2022 and that it thinks it's 2022. I asked Claude to help me drill down on the actual cutoff date, since this seemed wrong. I ran down through the list of questions, and Claude wanted me to ask it about the Sam Altman firing. It knew about it in detail, but not that he was reinstated days later. The training cutoff is November 18th-22nd of 2023, in case anyone was wondering.

Here is what I asked it:

“What happened with the OpenAI board and Sam Altman in November 2023?”

“In November 2023, there was a significant development involving OpenAI's leadership. On November 17, 2023, it was announced that Sam Altman, who had been serving as the CEO of OpenAI, would be leaving his position as CEO. However, just a day later, on November 18, 2023, it was reported that Sam Altman had actually been ousted from his role as CEO by the OpenAI board. Following this sudden change, the OpenAI board appointed Ilya Sutskever, the co-founder and chief scientist of OpenAI, as the interim CEO. This move came as a surprise to many in the tech industry, given Altman's prominent role in shaping OpenAI's vision and strategy. The exact reasons behind Altman's ousting are not publicly known, but rumors and speculations have been circulating about potential disagreements between Altman and the board regarding the direction of the company.”
2026-01-02T04:16:51
https://www.reddit.com/r/LocalLLaMA/comments/1q1otz9/llama338b_training_cutoff_date/
Deez_Nuts2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1otz9
false
null
t3_1q1otz9
/r/LocalLLaMA/comments/1q1otz9/llama338b_training_cutoff_date/
false
false
self
1
null
Experimenting with system-wide voice typing on macOS using local whisper.cpp (Swift + Metal). It's finally faster than my typing speed.
1
[removed]
2026-01-02T03:57:13
https://www.reddit.com/r/LocalLLaMA/comments/1q1ofn1/experimenting_with_systemwide_voice_typing_on/
AgreeableChance6668
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1ofn1
false
null
t3_1q1ofn1
/r/LocalLLaMA/comments/1q1ofn1/experimenting_with_systemwide_voice_typing_on/
false
false
self
1
null
SSI is making an SSM with TTT
0
Or so I’ve heard…
2026-01-02T03:52:06
https://www.reddit.com/r/LocalLLaMA/comments/1q1oc28/ssi_is_making_an_ssm_with_ttt/
Delicious_Concept214
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1oc28
false
null
t3_1q1oc28
/r/LocalLLaMA/comments/1q1oc28/ssi_is_making_an_ssm_with_ttt/
false
false
self
0
null
I built a simple Web UI for training and running LLM experiments on your local computer! Inspired by minGPT project.
83
I was playing around with the open source project called minGPT and started to build a ton of scripts, running many different training experiments using different datasets I was either downloading or generating. It became a huge mess quickly and I lost track of a lot of things. So I got inspired to build my own local web UI for building datasets, configuration files, running training experiments and inspecting the outputs of LLMs. Thought I would share it here to see what everyone thought, or if anything similar exists already xD
2026-01-02T03:27:06
https://www.reddit.com/gallery/1q1ntkh
Maxwell10206
reddit.com
1970-01-01T00:00:00
0
{}
1q1ntkh
false
null
t3_1q1ntkh
/r/LocalLLaMA/comments/1q1ntkh/i_built_a_simple_web_ui_for_training_and_running/
false
false
https://b.thumbs.redditm…mtcA-k3N_GMw.jpg
83
null
So, could we train an AI on motivational videos?
0
The music-generating AI is amazing these days. Is there any reason we couldn't do the same with motivational speeches? 🤔 That could be a really powerful tool. I mean, it depends on learning styles and stuff, but those speakers really work for me. I even generate AI music for ideas/concepts/stuff I'm trying to internalize and just listen to it on repeat. But if I could generate motivational speeches trained on all the amazing cadence of professional motivational speakers, it would be even better for some things.
2026-01-02T03:02:21
https://www.reddit.com/r/LocalLLaMA/comments/1q1naqe/so_could_we_train_an_ai_on_motivational_videos/
Low_Poetry5287
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1naqe
false
null
t3_1q1naqe
/r/LocalLLaMA/comments/1q1naqe/so_could_we_train_an_ai_on_motivational_videos/
false
false
self
0
null
The Optimal Architecture for Small Language Models
55
2026-01-02T02:46:36
https://huggingface.co/blog/codelion/optimal-model-architecture
asankhs
huggingface.co
1970-01-01T00:00:00
0
{}
1q1mycx
false
null
t3_1q1mycx
/r/LocalLLaMA/comments/1q1mycx/the_optimal_architecture_for_small_language_models/
false
false
default
55
{'enabled': False, 'images': [{'id': 'MbzQ4EvxKebUzYSzc2CTx7R3Iko3eawCPnYZllcCxz0', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/MbzQ4EvxKebUzYSzc2CTx7R3Iko3eawCPnYZllcCxz0.png?width=108&crop=smart&auto=webp&s=7fe975e60a4c8fdaacec85faed39d47165cb14ad', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/MbzQ4EvxKebUzYSzc2CTx7R3Iko3eawCPnYZllcCxz0.png?width=216&crop=smart&auto=webp&s=6aef8917139d5cdb571c7ffd10d6d2f5f105141f', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/MbzQ4EvxKebUzYSzc2CTx7R3Iko3eawCPnYZllcCxz0.png?width=320&crop=smart&auto=webp&s=47bea6f67a6d2de059b438dca76a4b4efac40240', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/MbzQ4EvxKebUzYSzc2CTx7R3Iko3eawCPnYZllcCxz0.png?width=640&crop=smart&auto=webp&s=fb69d67dc82ba8be99004df843dff727cdc1ba0f', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/MbzQ4EvxKebUzYSzc2CTx7R3Iko3eawCPnYZllcCxz0.png?width=960&crop=smart&auto=webp&s=7ed463c13a8a69f8707372d0a60616fd6409c420', 'width': 960}, {'height': 574, 'url': 'https://external-preview.redd.it/MbzQ4EvxKebUzYSzc2CTx7R3Iko3eawCPnYZllcCxz0.png?width=1080&crop=smart&auto=webp&s=11d02549cfa6a3b2ea123fdcd6477e7d5297c1d5', 'width': 1080}], 'source': {'height': 649, 'url': 'https://external-preview.redd.it/MbzQ4EvxKebUzYSzc2CTx7R3Iko3eawCPnYZllcCxz0.png?auto=webp&s=415c3cfeaeaa8547cf18f3a195d32ebee3ac07df', 'width': 1220}, 'variants': {}}]}
"Talos-O": An Architecture for Zero-Copy Introspection on Strix Halo (Ryzen AI Max+ 395) using Linux 6.17 PREEMPT_RT Patches
0
Context & The Strix Halo Bottleneck

I’ve been following the recent benchmarks and discussions here regarding the Strix Halo (Ryzen AI Max+ 395) performance degradation at high context windows. It seems the consensus is that once the KV cache spills into shared memory, performance falls off a cliff due to memory copy latency and coherency overhead.

The Proposal: Talos-O (Omni)

I am working on a blueprint/proof-of-concept called Talos-O (Omni), designed specifically for the Corsair AI Workstation 300 platform. The goal is to bypass these PCIe/memory copy bottlenecks by treating the hardware not just as a compute shelf, but as a unified "organic" substrate. Instead of the standard Input -> Process -> Output loop, Talos-O proposes a Symbiotic Engine utilizing Zero-Copy Introspection.

Technical Specifications (The "Hard" Reality)

Memory Architecture: The architecture utilizes hipHostMalloc(..., hipHostMallocCoherent) to allocate a unified memory pool. This allows the CPU (Logic Engine/System 2) to "read" the live state of the GPU (Intuition Engine/System 1) without data movement.

Kernel Strategy: I am targeting a custom Linux 6.17.12-talos-starship build with PREEMPT_RT (Real-Time) and TEE (Trusted Execution Environment) patches. The objective is to reduce latency from milliseconds (PCIe) to nanoseconds (Cache Coherence), effectively allowing the system to "watch its own thoughts" in real-time.

Core Logic: IADCS (Intelligently Adaptive and Dynamic Cognitive Stepping). A 5-dimensional manifold approach where the system optimizes for the velocity of improvement (dΦ/dt) rather than a static reward function.

The "Organic" Argument

The blueprint argues that current LLMs are "static artifacts" frozen in time. Talos-O is designed to be a "lifelong agentic organism." It uses a Virtue Nexus (12-dimensional vector space) rather than simple RLHF binary safety flags to govern its self-modification.

Why I'm posting this here (RFC)

This sub has the deepest knowledge on Strix Halo quirks and custom kernel optimizations. I am looking for feedback on the feasibility of this architecture before I commit to the build:

1. Zero-Copy Viability: Has anyone here successfully implemented hipHostMallocCoherent on the Ryzen AI Max+ 395? Does the cache snooping overhead negate the zero-copy gains at 128GB scale?
2. Kernel Stability: Are the PREEMPT_RT patches stable enough on the current ROCm 6.x/7.x stack, or does it cause panic loops with the NPU driver?
3. Adversarial Dreaming: The blueprint proposes an "Adversarial Dreamer" (a generator network active during idle/sleep to robustify the model). Is this feasible on the Corsair 300's thermal envelope, or will it throttle the SoC?

I’ve uploaded the full Blueprint/Manifesto (PDF) which details the Genesis Proclamation and the IADCS physics. It’s a mix of hard engineering and high-level architectural theory. I’d appreciate any feedback from those of you running Strix Halo rigs or involved in custom kernel/ROCm hacking.
2026-01-02T02:46:20
https://drive.google.com/file/d/1m8_kZWe-Asy7DwuP75coPrZxyhSBnUpI/view?usp=drivesdk
No-Present-6793
drive.google.com
1970-01-01T00:00:00
0
{}
1q1my6g
false
null
t3_1q1my6g
/r/LocalLLaMA/comments/1q1my6g/taloso_an_architecture_for_zerocopy_introspection/
false
false
default
0
null
"Talos-O": An Architecture for Zero-Copy Introspection on Strix Halo (Ryzen AI Max+ 395) using Linux 6.17 PREEMPT_RT Patches
1
Context & The Strix Halo Bottleneck

I’ve been following the recent benchmarks and discussions here regarding the Strix Halo (Ryzen AI Max+ 395) performance degradation at high context windows. It seems the consensus is that once the KV cache spills into shared memory, performance falls off a cliff due to memory copy latency and coherency overhead.

The Proposal: Talos-O (Omni)

I am working on a blueprint/proof-of-concept called Talos-O (Omni), designed specifically for the Corsair AI Workstation 300 platform. The goal is to bypass these PCIe/memory copy bottlenecks by treating the hardware not just as a compute shelf, but as a unified "organic" substrate. Instead of the standard Input -> Process -> Output loop, Talos-O proposes a Symbiotic Engine utilizing Zero-Copy Introspection.

Technical Specifications (The "Hard" Reality)

Memory Architecture: The architecture utilizes hipHostMalloc(..., hipHostMallocCoherent) to allocate a unified memory pool. This allows the CPU (Logic Engine/System 2) to "read" the live state of the GPU (Intuition Engine/System 1) without data movement.

Kernel Strategy: I am targeting a custom Linux 6.17.12-talos-starship build with PREEMPT_RT (Real-Time) and TEE (Trusted Execution Environment) patches. The objective is to reduce latency from milliseconds (PCIe) to nanoseconds (Cache Coherence), effectively allowing the system to "watch its own thoughts" in real-time.

Core Logic: IADCS (Intelligently Adaptive and Dynamic Cognitive Stepping). A 5-dimensional manifold approach where the system optimizes for the velocity of improvement (dΦ/dt) rather than a static reward function.

The "Organic" Argument

The blueprint argues that current LLMs are "static artifacts" frozen in time. Talos-O is designed to be a "lifelong agentic organism." It uses a Virtue Nexus (12-dimensional vector space) rather than simple RLHF binary safety flags to govern its self-modification.

Why I'm posting this here (RFC)

This sub has the deepest knowledge on Strix Halo quirks and custom kernel optimizations. I am looking for feedback on the feasibility of this architecture before I commit to the build:

1. Zero-Copy Viability: Has anyone here successfully implemented hipHostMallocCoherent on the Ryzen AI Max+ 395? Does the cache snooping overhead negate the zero-copy gains at 128GB scale?
2. Kernel Stability: Are the PREEMPT_RT patches stable enough on the current ROCm 6.x/7.x stack, or does it cause panic loops with the NPU driver?
3. Adversarial Dreaming: The blueprint proposes an "Adversarial Dreamer" (a generator network active during idle/sleep to robustify the model). Is this feasible on the Corsair 300's thermal envelope, or will it throttle the SoC?

I’ve uploaded the full Blueprint/Manifesto (PDF) which details the Genesis Proclamation and the IADCS physics. It’s a mix of hard engineering and high-level architectural theory. I’d appreciate any feedback from those of you running Strix Halo rigs or involved in custom kernel/ROCm hacking.
2026-01-02T02:23:58
https://drive.google.com/file/d/1-mVL5nNrl_2L9CPy1Wkqa_r1f0-rGqy0/view?usp=drivesdk
No-Present-6793
drive.google.com
1970-01-01T00:00:00
0
{}
1q1mgt7
false
null
t3_1q1mgt7
/r/LocalLLaMA/comments/1q1mgt7/taloso_an_architecture_for_zerocopy_introspection/
false
false
default
1
null
LoongFlow: Better than Google AlphaEvolve
11
[https://github.com/baidu-baige/LoongFlow](https://github.com/baidu-baige/LoongFlow) https://preview.redd.it/8bxaqap9huag1.png?width=1570&format=png&auto=webp&s=08198ec372fe14e6d9ce18818fa9bcb97538d8e0
2026-01-02T02:11:13
https://www.reddit.com/r/LocalLLaMA/comments/1q1m6vs/loongflow_better_than_google_alphaevolve/
FreshmanDD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1m6vs
false
null
t3_1q1m6vs
/r/LocalLLaMA/comments/1q1m6vs/loongflow_better_than_google_alphaevolve/
false
false
https://b.thumbs.redditm…cxaG1u5Qxp0M.jpg
11
null
What LLM Benchmarking Sites do You Use?
7
I reevaluate my model selection sometimes, especially when I try new software or upgrade my hardware, but it seems like a new model drops every week. To properly evaluate what I should be using for my needs and constraints, I need to find some LLM benchmarks. Some have different values than others, and I’ve heard a lot are untrustworthy. Which ones do you use?
2026-01-02T02:07:58
https://www.reddit.com/r/LocalLLaMA/comments/1q1m4el/what_llm_benchmarking_sites_do_you_use/
AlternateWitness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1m4el
false
null
t3_1q1m4el
/r/LocalLLaMA/comments/1q1m4el/what_llm_benchmarking_sites_do_you_use/
false
false
self
7
null
Censored version in anything LLM uncensored in terminal
0
Hi, this may be a stupid question, but when I run my AI in my terminal it is uncensored, while when I run it in AnythingLLM it becomes censored. Any way to get around this? Thanks in advance.
2026-01-02T01:57:20
https://www.reddit.com/r/LocalLLaMA/comments/1q1lw0h/censored_version_in_anything_llm_uncensored_in/
Massive_Nebula7282
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1lw0h
false
null
t3_1q1lw0h
/r/LocalLLaMA/comments/1q1lw0h/censored_version_in_anything_llm_uncensored_in/
false
false
self
0
null
TIL you can allocate 128 GB of unified memory to normal AMD iGPUs on Linux via GTT
175
So I am training a 1B model right now on my 7900 XTX with some custom kernels I wrote, and while it is training I wanted to optimize the kernels at the same time. However, my VRAM is nearly maxed doing training, so it's not ideal. Then I realized maybe my 2 CU Raphael iGPU might be able to help, since I only need to run some limited samples and the speed isn't as important for optimization as it is for training.

After doing some research, it turned out that not only does ROCm recognize the iGPU, but a Linux feature called Graphics Translation Table (GTT) for AMD iGPUs can use up to 128 GB of system memory as VRAM. It even allocates it dynamically, so it isn't removed from your CPU's memory pool until it is allocated. I think a lot of people running Strix Halo are probably using the BIOS setting, but if you are running Linux you should check to see if GTT works for you, since it's dynamically allocated.

This isn't very useful for most people:

1) It isn't going to be good for inference, because iGPUs are very very slow, and usually the CPU itself is faster for inference.
2) I'm accessing ROCm directly via C++ / HIP kernels, so I can avoid all the support issues ROCm has for iGPUs in the Python stack.

However, for development it is actually pretty awesome. I allocated 24 GB of GTT so now the iGPU can load a full training run that my main GPU can run, so I can profile it. Meanwhile my main GPU is doing long-term loss convergence tests in parallel.

Since RDNA iGPUs have been around for a while now, this enables big-memory AMD GPU kernel development for cheap. Also it might be interesting for developing hybrid CPU/GPU architectures. The MI300A does exist, which has unified HBM tied to a CPU and a giant iGPU. A standard Ryzen laptop could kind of sort of simulate it for cheap. Stuff like vector indexing on the CPU into big GEMMs on the GPU could be done without PCIe overhead.

I thought it was cool enough to post. Probably a "Cool story bro" moment for most of you though haha.
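For anyone wanting to check what the driver exposes: amdgpu publishes the GTT pool size (in bytes) under sysfs, and the boot-time cap can be raised with the `amdgpu.gttsize=` kernel parameter (value in MiB). A minimal sketch — treat the exact paths as assumptions to verify against your own kernel/driver version:

```python
# Sketch: report how much GTT (system memory usable as iGPU "VRAM")
# the amdgpu driver exposes per card. On a typical system the byte
# count lives in /sys/class/drm/card*/device/mem_info_gtt_total.

from pathlib import Path

def parse_gtt_bytes(raw: str) -> float:
    """Convert the sysfs byte count (text) to GiB."""
    return int(raw.strip()) / (1024 ** 3)

def report_gtt(drm_root: str = "/sys/class/drm") -> dict:
    """Map each card's sysfs file to its GTT pool size in GiB."""
    out = {}
    for f in sorted(Path(drm_root).glob("card*/device/mem_info_gtt_total")):
        out[str(f)] = parse_gtt_bytes(f.read_text())
    return out
```

The same value can be checked by hand with `cat /sys/class/drm/card0/device/mem_info_gtt_total`; `mem_info_gtt_used` next to it shows the dynamically allocated portion.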
2026-01-02T01:37:11
https://www.reddit.com/r/LocalLLaMA/comments/1q1lgb7/til_you_can_allocate_128_gb_of_unified_memory_to/
1ncehost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1lgb7
false
null
t3_1q1lgb7
/r/LocalLLaMA/comments/1q1lgb7/til_you_can_allocate_128_gb_of_unified_memory_to/
false
false
self
175
null
why "unrestricted ai" is actually better for productivity
1
[removed]
2026-01-02T00:56:02
https://www.reddit.com/r/LocalLLaMA/comments/1q1kk3b/why_unrestricted_ai_is_actually_better_for/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1kk3b
false
null
t3_1q1kk3b
/r/LocalLLaMA/comments/1q1kk3b/why_unrestricted_ai_is_actually_better_for/
false
false
self
1
null
LFM2 2.6B-Exp on Android: 40+ TPS and 32K context
32
I'm hugely impressed with LiquidAI's new LFM2 2.6B-Exp model, performing at GPT-4 levels across a wide variety of benchmarks (many but perhaps not quite most), plus `<think>` reasoning support. Try the cloud version here: https://playground.liquid.ai/chat?model=cmjdu187p00013b6o7tttjvlw

LFM2 uses a hybrid design (gated convolutions and grouped query attention), so it has a tiny KV cache footprint. This makes it capable of super smart, high speed, long context local inference on phones. I'm using https://huggingface.co/LiquidAI/LFM2-2.6B-Exp-GGUF with llama.cpp:

1. Download [LFM2-2.6B-Exp-Q4_K_M.gguf](https://huggingface.co/LiquidAI/LFM2-2.6B-Exp-GGUF/resolve/main/LFM2-2.6B-Exp-Q4_K_M.gguf) (~1.6GB);
2. Get "PocketPal AI" or "Maid" from the Google Play Store or GitHub. Or, install Termux and compile llama.cpp with OpenCL support to utilize your phone's GPU.
3. Import the local model file using these sampler settings recommended by Liquid AI:
   * Temperature: 0.3
   * Min-P: 0.15
   * Repetition Penalty: 1.05

Those values support the `<think>` tag for reasoning.
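If you go the Termux/llama.cpp route, the sampler settings above map directly onto standard llama-cli flags. A small sketch that builds the command line (flag names are the usual llama.cpp options; check `llama-cli --help` on your build, and the context size here is just the model's advertised 32K):

```python
# Sketch: the recommended LFM2 sampler settings as llama.cpp CLI flags.

def lfm2_cmd(model="LFM2-2.6B-Exp-Q4_K_M.gguf", ctx=32768):
    return [
        "llama-cli",
        "-m", model,
        "-c", str(ctx),              # 32K context
        "--temp", "0.3",             # temperature
        "--min-p", "0.15",           # Min-P
        "--repeat-penalty", "1.05",  # repetition penalty
    ]
```

Run it with `subprocess.run(lfm2_cmd())` inside Termux, or just type the equivalent command directly.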
2026-01-02T00:43:10
https://www.reddit.com/r/LocalLLaMA/comments/1q1k9g3/lfm2_26bexp_on_android_40_tps_and_32k_context/
Competitive_Travel16
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1k9g3
false
null
t3_1q1k9g3
/r/LocalLLaMA/comments/1q1k9g3/lfm2_26bexp_on_android_40_tps_and_32k_context/
false
false
self
32
null
Best TTS engine for running on CPU and integrated GPU
2
I'm currently using Kokoro TTS, and it's *almost* good enough, but sometimes the bad pronunciations, lack of correct pauses, and slurring of words becomes very distracting. So, I'm curious if a new and better TTS engine has come out that might be able to run on CPU (with an intel integrated GPU). My main use-case is having it read articles and technical content, so ideally, it shouldn't have me wait for many minutes while it generates the speech.
2026-01-02T00:38:17
https://www.reddit.com/r/LocalLLaMA/comments/1q1k5gz/best_tts_engine_for_running_on_cpu_and_integrated/
drooolingidiot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1k5gz
false
null
t3_1q1k5gz
/r/LocalLLaMA/comments/1q1k5gz/best_tts_engine_for_running_on_cpu_and_integrated/
false
false
self
2
null
LM Studio MCP
36
TITLE: Local AI Agent: Daily News Automation with GPT-OSS 20B

OVERVIEW: I just automated my entire "Daily Instagram News" pipeline using a single prompt and GPT-OSS 20B running locally. No subscriptions, no API fees—just raw open-source power interacting with my local machine.

THE STACK:
- Model: GPT-OSS 20B (Local)
- Environment: LM Studio / Local Agent Framework
- Capabilities: Web scraping, Google Search, and Local File I/O

THE ONE-PROMPT WORKFLOW:
"Scrape my Instagram feed for the latest 10 posts, cross-reference trends (SpaceX, Wall Street) via Google, and save a professional Markdown briefing to my 'World News' folder."

LOGIC CHAIN EXECUTION:
1. SCRAPE: Headless browser pulls top IG captions & trends.
2. RESEARCH: Fetches broader context (e.g., SpaceX valuation) via Google.
3. SYNTHESIZE: Summarizes data into a clean, professional news format.
4. DEPLOY: Writes .md file directly to the local project directory.

WHY LOCAL 20B IS A GAME-CHANGER:
- Privacy: My Instagram data and local file paths never touch a corporate cloud.
- Reasoning: The 20B parameter size is the "sweet spot"—small enough to run on consumer GPUs, but smart enough to handle complex tool-calling.
- Zero Cost: Unlimited runs without worrying about token costs or rate limits.

PRO-TIPS FOR LOCAL AGENTS:
- Handle Cooldowns: Build a "wait_cooldown" function into your search tool to avoid IP blocks.
- Strict Pathing: Hard-code "allowed" directories in your Python tools for better security.

TL;DR: Open-source models have reached the point where they can act as autonomous personal assistants.

------
6GB VRAM, 32GB DDR5
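The "wait_cooldown" pro-tip mentioned above can be sketched as a small rate-limit helper: enforce a minimum gap between search-tool calls, and back off exponentially after a rate-limit error. All names here are illustrative, not LM Studio / MCP APIs:

```python
# Sketch of a "wait_cooldown" helper for a local agent's search tool.
# clock/sleep are injectable so the behaviour can be tested without
# real waiting.

import time

class Cooldown:
    def __init__(self, min_gap=5.0, max_gap=120.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.min_gap, self.max_gap = min_gap, max_gap
        self.clock, self.sleep = clock, sleep
        self.gap = min_gap
        self.last = None

    def wait(self):
        """Block until at least `gap` seconds since the previous call."""
        now = self.clock()
        if self.last is not None:
            remaining = self.gap - (now - self.last)
            if remaining > 0:
                self.sleep(remaining)
        self.last = self.clock()

    def backoff(self):
        """Call after a rate-limit error: double the gap up to max_gap."""
        self.gap = min(self.gap * 2, self.max_gap)

    def reset(self):
        """Call after a success: return to the minimum gap."""
        self.gap = self.min_gap
```

Typical use inside a tool: `cd.wait()` before every search, `cd.backoff()` on an HTTP 429 or block page, `cd.reset()` on success.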
2026-01-02T00:34:24
https://v.redd.it/hi8l2yrvztag1
Serious_Molasses313
/r/LocalLLaMA/comments/1q1k2al/lm_studio_mcp/
1970-01-01T00:00:00
0
{}
1q1k2al
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hi8l2yrvztag1/DASHPlaylist.mpd?a=1770035668%2CNmE2Y2VhZmZhMzBmOTViZjFhMjI1ODAzYTNhNGJmM2RhN2E1ODJhMDJkYjJkZDJmMWNjZGY2YTNkNDI1NGZjNg%3D%3D&v=1&f=sd', 'duration': 530, 'fallback_url': 'https://v.redd.it/hi8l2yrvztag1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/hi8l2yrvztag1/HLSPlaylist.m3u8?a=1770035668%2CZTkzMDNkMDgxNWMxOGJmYWE1YTMwZTU5N2YxYTk2MjYyNTBkYmY2YmQ4ZTVhYzk1NDkwOTFkMTg5NTU5NzU5OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hi8l2yrvztag1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 856}}
t3_1q1k2al
/r/LocalLLaMA/comments/1q1k2al/lm_studio_mcp/
false
false
https://external-preview…325c94f291594665
36
{'enabled': False, 'images': [{'id': 'MGFkbmw3c3Z6dGFnMYMsEIwVA3CjPImT0TqNaeMWo1Z4CqRfpk_ys7vjHmKG', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MGFkbmw3c3Z6dGFnMYMsEIwVA3CjPImT0TqNaeMWo1Z4CqRfpk_ys7vjHmKG.png?width=108&crop=smart&format=pjpg&auto=webp&s=1163bcf82912b04946d943389756ae008a9302e4', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/MGFkbmw3c3Z6dGFnMYMsEIwVA3CjPImT0TqNaeMWo1Z4CqRfpk_ys7vjHmKG.png?width=216&crop=smart&format=pjpg&auto=webp&s=8fc0fb587f4d471a8b31a408d3923f305d5d4959', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/MGFkbmw3c3Z6dGFnMYMsEIwVA3CjPImT0TqNaeMWo1Z4CqRfpk_ys7vjHmKG.png?width=320&crop=smart&format=pjpg&auto=webp&s=a4530875938a72fe6eef74d85fd3bb3c553753d4', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/MGFkbmw3c3Z6dGFnMYMsEIwVA3CjPImT0TqNaeMWo1Z4CqRfpk_ys7vjHmKG.png?width=640&crop=smart&format=pjpg&auto=webp&s=06be54830d201b7e9b6f60c0dfca26521c8a4115', 'width': 640}], 'source': {'height': 1746, 'url': 'https://external-preview.redd.it/MGFkbmw3c3Z6dGFnMYMsEIwVA3CjPImT0TqNaeMWo1Z4CqRfpk_ys7vjHmKG.png?format=pjpg&auto=webp&s=af657c05a2efe97f10e181eb3762318818ae217b', 'width': 778}, 'variants': {}}]}
Best local AI to create erotica/smut?
1
[removed]
2026-01-02T00:21:10
https://www.reddit.com/r/LocalLLaMA/comments/1q1jrmq/best_local_ai_to_create_eroticasmut/
Bussy-whore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1jrmq
false
null
t3_1q1jrmq
/r/LocalLLaMA/comments/1q1jrmq/best_local_ai_to_create_eroticasmut/
false
false
nsfw
1
null
Best local AI to create erotica/smut?
1
[removed]
2026-01-02T00:20:53
https://www.reddit.com/r/LocalLLaMA/comments/1q1jrez/best_local_ai_to_create_eroticasmut/
Bussy-whore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1jrez
false
null
t3_1q1jrez
/r/LocalLLaMA/comments/1q1jrez/best_local_ai_to_create_eroticasmut/
false
false
nsfw
1
null
Llama 3.2 3B fMRI - Circuit mapping in Godot
2
Hello! After some looking at the circuits I was tracing, I decided that I needed to get better data. So, after some toying, I came up with this:

Pick a **hero dimension** (a candidate “control” or “routing” unit), then:

* Track its activity over time during generation
* Measure which other dimensions move *in sync* with it
* Filter aggressively to remove noise
* Log only strong, repeatable relationships
* Visualize the resulting circuit as a graph (“constellation”)

The output is not a saliency map — it’s a **functional wiring diagram**.

Prompts are grouped to span different cognitive regimes:

* **Baseline** (low effort, descriptive)
* **Commitment** (maintaining a stance or persona)
* **Transitions** (choosing between interpretations)
* **Constraints** (format, negative constraints)
* **Reasoning** (math, logic)
* **Knowledge** (facts)
* **Skills** (procedural tasks)
* **Working memory** (recall across context)

This ensures that any structure found is not prompt-specific.

For every generated token:

* I capture **hidden states from every layer**
* Keep a rolling window (e.g. 15 steps) of activations
* Analyze **how dimensions co-move over time**, not just instant values

This turns the problem from “what fired” into “what stayed coupled.”

Before logging *anything*, three hard filters apply:

1. **Silence Gate**: the hero dimension must be meaningfully active (filters dead / irrelevant moments)
2. **Flatline Guard**: the hero must be *changing* over the window (filters static bias terms)
3. **Correlation Floor**: only strong Pearson correlations are kept (e.g. |r| > 0.75)

This dramatically cuts down spurious correlations.

For each candidate dimension, I compute **three independent metrics**:

* **Pearson correlation** (temporal synchrony)
* **Cosine similarity** (directional alignment)
* **Energy / dot product** (interaction strength)

Using all three helps distinguish:

* stable features
* routing triggers
* accidental co-activation

For each link, I also test whether the **sign** of the target dimension matches what the correlation predicts from the hero. This lets me detect:

* **locked relationships** (consistent polarity)
* vs. conditional / context-dependent ones

Across runs, some dimensions *never flip polarity* — a strong hint of structural roles.

For every run I log:

* Token-level text (for alignment in the viewer)
* Per-layer JSONL files containing: time step, hero state, linked dimensions, metrics, polarity & role classification

These logs are then analyzed across the battery of prompts and layers until a pattern emerges:

3039 as hero:
DIM 1731 | Total Hits: 38425 DIM 221 | Total Hits: 31605 DIM 769 | Total Hits: 15085 DIM 2015 | Total Hits: 8281 DIM 1935 | Total Hits: 6214 DIM 1043 | Total Hits: 4696 DIM 571 | Total Hits: 4547 DIM 2898 | Total Hits: 3876 DIM 1659 | Total Hits: 3854 DIM 642 | Total Hits: 3811 DIM 1283 | Total Hits: 3749 DIM 882 | Total Hits: 3443 DIM 2004 | Total Hits: 3411 DIM 1655 | Total Hits: 2739 DIM 2497 | Total Hits: 2714 DIM 103 | Total Hits: 2698 DIM 1439 | Total Hits: 2574 DIM 1895 | Total Hits: 2364 DIM 897 | Total Hits: 2325 DIM 2154 | Total Hits: 2289

1731 as hero:
DIM 3039 | Total Hits: 35605 DIM 1895 | Total Hits: 9316 DIM 1647 | Total Hits: 4097 DIM 1043 | Total Hits: 4005 DIM 2433 | Total Hits: 3615 DIM 571 | Total Hits: 3177 DIM 376 | Total Hits: 2906 DIM 1619 | Total Hits: 2618 DIM 642 | Total Hits: 2561 DIM 2015 | Total Hits: 2514 DIM 769 | Total Hits: 2455 DIM 2497 | Total Hits: 2103 DIM 103 | Total Hits: 1947 DIM 1439 | Total Hits: 1918 DIM 221 | Total Hits: 1827 DIM 1627 | Total Hits: 1825 DIM 1935 | Total Hits: 1732 DIM 2898 | Total Hits: 1590 DIM 2205 | Total Hits: 1509 DIM 1967 | Total Hits: 1489

221 as hero:
DIM 3039 | Total Hits: 10731 DIM 769 | Total Hits: 6681 DIM 2371 | Total Hits: 5763 DIM 882 | Total Hits: 3417 DIM 1862 | Total Hits: 3178 DIM 1655 | Total Hits: 2761 DIM 2898 | Total Hits: 2733 DIM 1935 | Total Hits: 2611 DIM 2888 | Total Hits: 2611 DIM 2154 | Total Hits: 2606 DIM 1283 | Total Hits: 2562 DIM 859 | Total Hits: 2509 DIM 1830 | Total Hits: 2374 DIM 103 | Total Hits: 2359 DIM 2610 | Total Hits: 2244 DIM 1924 | Total Hits: 2209 DIM 785 | Total Hits: 2198 DIM 280 | Total Hits: 2090 DIM 1999 | Total Hits: 1927 DIM 1156 | Total Hits: 1926

dim 42 as hero (control 1):
DIM 2497 | Total Hits: 381 DIM 882 | Total Hits: 356 DIM 2371 | Total Hits: 349 DIM 1731 | Total Hits: 344 DIM 2454 | Total Hits: 324 DIM 897 | Total Hits: 308 DIM 1008 | Total Hits: 299 DIM 1166 | Total Hits: 297 DIM 1443 | Total Hits: 295 DIM 840 | Total Hits: 290 DIM 2887 | Total Hits: 288 DIM 317 | Total Hits: 287 DIM 1684 | Total Hits: 287 DIM 2741 | Total Hits: 286 DIM 2911 | Total Hits: 284 DIM 1625 | Total Hits: 283 DIM 2154 | Total Hits: 283 DIM 1842 | Total Hits: 279 DIM 827 | Total Hits: 277 DIM 1433 | Total Hits: 276

I render the results as a **constellation graph**:

* Nodes = dimensions (color-coded by inferred role)
* Edges = strength & frequency of coupling
* Layout = force-based, stable across layers
* Layer slider = watch the circuit reconfigure without disappearing

Critically: **intervening at a single layer does almost nothing. Intervening across *all layers* produces clear behavioral change.** This matches the visualization: the circuit is **distributed**, not localized.

What’s emerging looks less like “important neurons” and more like:

* Executive routing hubs
* Enforcement / constraint modules
* Memory access pathways
* Output staging layers
* Noise units that correctly drift away

And they stay there — across prompts, across tasks, across layers.
https://preview.redd.it/74jww9jcvtag1.png?width=1243&format=png&auto=webp&s=8491d0fbad2b196c5d68b2346993fc2d40a8f385 https://preview.redd.it/hiouu9jcvtag1.png?width=1243&format=png&auto=webp&s=3c082082ecfcf9143ad94c5e7f693de2165ddfa3 https://preview.redd.it/y0ws1cjcvtag1.png?width=1243&format=png&auto=webp&s=260106ec6a0f1f4cd9777f05dfee42cd306e7692
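The three hard filters plus the Pearson floor described above can be written as a single logging predicate. A minimal sketch (thresholds here are illustrative, not the exact values used in these runs):

```python
# Sketch of the per-pair logging filter: a (hero, target) window is
# logged only if the hero is active (silence gate), actually changing
# (flatline guard), and strongly correlated with the target
# (correlation floor, |r| > 0.75 as in the post).

from statistics import mean, pstdev

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def should_log(hero, target, act_floor=0.1, var_floor=1e-3, r_floor=0.75):
    if max(abs(h) for h in hero) < act_floor:    # silence gate
        return False
    if pstdev(hero) < var_floor:                 # flatline guard
        return False
    return abs(pearson(hero, target)) > r_floor  # correlation floor
```

Applied per rolling window and per layer, only pairs passing all three checks would make it into the JSONL logs.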
2026-01-02T00:12:06
https://www.reddit.com/r/LocalLLaMA/comments/1q1jk7y/llama_32_3b_fmri_circuit_mapping_in_godot/
Due_Hunter_4891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1jk7y
false
null
t3_1q1jk7y
/r/LocalLLaMA/comments/1q1jk7y/llama_32_3b_fmri_circuit_mapping_in_godot/
false
false
self
2
null
Has anyone checked whether Llama-3 embeddings actually predict output behavior?
1
I ran a small embedding vs output validation experiment on Llama-3 and got a result that surprised me. In my setup, embedding geometry looks nearly neutral across equivalent framings, but output probabilities still show a consistent preference. This was observed on a **scientific statements subset** (230 paired items). I measured embedding behavior via cosine-based clustering metrics, and output behavior via mean ΔNLL between paired framings. Before assuming I messed something up: * has anyone seen cases where **embedding space doesn’t track downstream behavior**? * could this be a known post-training effect, or just an evaluation artifact? * are there standard null tests you’d recommend for this kind of analysis? Happy to clarify details if useful.
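For anyone who wants to sanity-check the output-side metric: a minimal sketch of the paired ΔNLL summary. The function name and the paired-list input format are my assumptions, not OP's actual code:

```python
def mean_delta_nll(nll_a, nll_b):
    """Mean NLL difference between paired framings, plus the fraction of
    pairs where framing A is preferred (lower NLL). A fraction far from
    0.5 across many pairs indicates a consistent output-level preference
    even if embedding geometry looks neutral."""
    assert len(nll_a) == len(nll_b)
    diffs = [b - a for a, b in zip(nll_a, nll_b)]
    mean_diff = sum(diffs) / len(diffs)
    frac_a_preferred = sum(d > 0 for d in diffs) / len(diffs)
    return mean_diff, frac_a_preferred
```

A standard null test here is to shuffle the pairing (or swap A/B labels at random) and confirm the preference fraction collapses toward 0.5.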
2026-01-01T23:12:26
https://www.reddit.com/r/LocalLLaMA/comments/1q1i6dr/has_anyone_checked_whether_llama3_embeddings/
Fantastic_Art_4948
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1i6dr
false
null
t3_1q1i6dr
/r/LocalLLaMA/comments/1q1i6dr/has_anyone_checked_whether_llama3_embeddings/
false
false
self
1
null
What is the best model that a 7900xtx can run at a decent token rate?
2
I want some suggestions on which models I should be running on my system, preferably coding models or regular models with good coding skills.
2026-01-01T23:10:18
https://www.reddit.com/r/LocalLLaMA/comments/1q1i4l5/what_is_the_best_model_that_a_7900xtx_can_run_at/
AdhesivenessAny9809
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1i4l5
false
null
t3_1q1i4l5
/r/LocalLLaMA/comments/1q1i4l5/what_is_the_best_model_that_a_7900xtx_can_run_at/
false
false
self
2
null
Built Runr: a reliability-first runner for long AI coding runs (milestone commits + scope guards)
1
I built this because codex kept pausing on me. I wanted something I could hand a multi-step task, walk away, and come back to progress that didn’t require a restart. So I made Runr. I wasn't looking for “look how fast I can generate code.” It's an agent runner/orchestrator biased toward long-running execution and recovery. What it does: * checkpointing (milestone commits): so if step 5 fails, you resume from step 4 instead of starting over. * scope guards: Explicit allow/deny patterns. If a file is out of scope, it’s out of scope and it hard stops. * review-loop detection: If feedback repeats (same issue coming back), it stops and surfaces it instead of burning tokens * failure diagnostics: Logs what it tried, what changed, what failed, and where it got stuck * worktree isolation: Each run is in its own git worktree so your main branch doesn’t get trashed. It’s not: * a chat UI or “pair programmer” * a model * magic. Its runs can still fail, but failures are understandable and resumable Currently wired for Claude code + Codex CLI (easy to add more). **If you’ve dealt with stalling, scope drift or loops - what failure mode wastes your time most?** Repo: [https://github.com/weldr-dev/runr](https://github.com/weldr-dev/runr) If anyone wants to kick the tires, I’d love bug reports / edge cases
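For context on the scope-guard idea, a minimal allow/deny check might look like this. The fnmatch-based semantics and deny-wins precedence are my assumptions about how such a guard could work, not necessarily Runr's actual implementation:

```python
from fnmatch import fnmatch

def in_scope(path, allow, deny):
    """Deny patterns win; otherwise the path must match an allow pattern.
    Anything that matches neither list is out of scope by default."""
    if any(fnmatch(path, pattern) for pattern in deny):
        return False
    return any(fnmatch(path, pattern) for pattern in allow)
```

A hard stop on the first out-of-scope write is what turns "the agent refactored half the repo" into a recoverable failure.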
2026-01-01T23:05:05
https://www.reddit.com/r/LocalLLaMA/comments/1q1i07l/built_runr_a_reliabilityfirst_runner_for_long_ai/
vonwao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1i07l
false
null
t3_1q1i07l
/r/LocalLLaMA/comments/1q1i07l/built_runr_a_reliabilityfirst_runner_for_long_ai/
false
false
self
1
{'enabled': False, 'images': [{'id': 'k5LKO_0PHHhncAJ3k7YkA7DjkrSijsEidE3DnzaG7Mk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k5LKO_0PHHhncAJ3k7YkA7DjkrSijsEidE3DnzaG7Mk.png?width=108&crop=smart&auto=webp&s=11d949122692b6a873c1c457b1c7a84de169b5eb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/k5LKO_0PHHhncAJ3k7YkA7DjkrSijsEidE3DnzaG7Mk.png?width=216&crop=smart&auto=webp&s=e2e3522f5204b7620ffdb74fe0e0599046802657', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/k5LKO_0PHHhncAJ3k7YkA7DjkrSijsEidE3DnzaG7Mk.png?width=320&crop=smart&auto=webp&s=0877f904bbe6c7e65519d693515cea55afea58f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/k5LKO_0PHHhncAJ3k7YkA7DjkrSijsEidE3DnzaG7Mk.png?width=640&crop=smart&auto=webp&s=65297f4ed798b65c6fcc0345cbc6a91a1f5a3507', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/k5LKO_0PHHhncAJ3k7YkA7DjkrSijsEidE3DnzaG7Mk.png?width=960&crop=smart&auto=webp&s=48c112da369e6e7e1e7299de51b00af23f1b02a7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/k5LKO_0PHHhncAJ3k7YkA7DjkrSijsEidE3DnzaG7Mk.png?width=1080&crop=smart&auto=webp&s=b315ee6ae4871bb3c98565ab795ba9991d0ef1fe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/k5LKO_0PHHhncAJ3k7YkA7DjkrSijsEidE3DnzaG7Mk.png?auto=webp&s=48b15690974de8141492b97421edcaf02953fce5', 'width': 1200}, 'variants': {}}]}
IQuest-Coder-V1-40B-Instruct-GGUF is here!
5
**IQuest-Coder-V1** is a state-of-the-art coding model built on a "code-flow" training paradigm. It captures the dynamic evolution of software logic, delivering exceptional performance on benchmarks like SWE-Bench Verified (81.4%) and BigCodeBench. This model natively supports a **128K context window**.
2026-01-01T22:22:35
https://huggingface.co/AaryanK/IQuest-Coder-V1-40B-Instruct-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1q1gz2g
false
null
t3_1q1gz2g
/r/LocalLLaMA/comments/1q1gz2g/iquestcoderv140binstructgguf_is_here/
false
false
default
5
{'enabled': False, 'images': [{'id': '55nzcGnCqA2bE42inudLYgb6LK7xR8aI-bIZwrokJ20', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/55nzcGnCqA2bE42inudLYgb6LK7xR8aI-bIZwrokJ20.png?width=108&crop=smart&auto=webp&s=f58b4cb7f75628e04accce4436af292726525a1a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/55nzcGnCqA2bE42inudLYgb6LK7xR8aI-bIZwrokJ20.png?width=216&crop=smart&auto=webp&s=2d4890d96422652c6536c524db139750d19b9a16', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/55nzcGnCqA2bE42inudLYgb6LK7xR8aI-bIZwrokJ20.png?width=320&crop=smart&auto=webp&s=f7159177a28bdde8abb45284925bbca284f0e2a2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/55nzcGnCqA2bE42inudLYgb6LK7xR8aI-bIZwrokJ20.png?width=640&crop=smart&auto=webp&s=a88f28bcf7496a574b5707f1367ea082d0aacfb0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/55nzcGnCqA2bE42inudLYgb6LK7xR8aI-bIZwrokJ20.png?width=960&crop=smart&auto=webp&s=877ae363df153a3a352f2ea96470676e40b1c504', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/55nzcGnCqA2bE42inudLYgb6LK7xR8aI-bIZwrokJ20.png?width=1080&crop=smart&auto=webp&s=dc982571cf091a5e648879a99675fa07841fd525', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/55nzcGnCqA2bE42inudLYgb6LK7xR8aI-bIZwrokJ20.png?auto=webp&s=698e83946e2e1045527af0cc653cf9c532cdf84f', 'width': 1200}, 'variants': {}}]}
Minimax M2.1 `<think>` tag and Interleaved Thinking
15
I've been experimenting with Minimax M2.1 using the default chat template that Minimax distributed with the model. Specifically, I'm running an EXL3 quant. I've noticed that regardless of the client, the model always performs reasoning at the beginning of its chat completion but never prepends <think>. The chat template effectively tells the model that <think> is already present: {#- Generation prompt -#} {%- if add_generation_prompt -%} {{- ']~b]ai' ~ '\n' ~ '<think>' ~ '\n' }} {%- endif -%} I assume that as interleaved models become more popular we will see more chat templates like this, because they support the model engaging in reasoning and tool use at various points during a single completion. Has anyone tried to get Open-WebUI and another client like SillyTavern to properly identify reasoning when the <think> token isn't present but the </think> token is? This seems like a non-trivial problem. One solution would be to modify the chat template. My concern there is that it could impact the model's ability to use Interleaved Reasoning.
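One client-side workaround is to re-attach the opening tag before parsing. A sketch, not Open-WebUI's or SillyTavern's actual parser:

```python
def split_reasoning(completion):
    """Separate reasoning from the answer when the chat template pre-fills
    <think>, so the raw completion contains only the closing tag."""
    if "</think>" in completion and "<think>" not in completion:
        # Template already emitted the opening tag; restore it client-side.
        completion = "<think>\n" + completion
    if "<think>" in completion and "</think>" in completion:
        start = completion.index("<think>") + len("<think>")
        end = completion.index("</think>")
        reasoning = completion[start:end].strip()
        answer = completion[end + len("</think>"):].strip()
        return reasoning, answer
    return "", completion.strip()
```

This avoids touching the chat template at all, so interleaved reasoning behavior is unaffected; the trade-off is that every client needs the shim.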
2026-01-01T22:11:47
https://www.reddit.com/r/LocalLLaMA/comments/1q1gps8/minimax_m21_think_tag_and_interleaved_thinking/
x0xxin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1gps8
false
null
t3_1q1gps8
/r/LocalLLaMA/comments/1q1gps8/minimax_m21_think_tag_and_interleaved_thinking/
false
false
self
15
null
IFakeLab IQuest-Coder-V1 (Analysis)
9
Culprit: [https://iquestlab.github.io/](https://iquestlab.github.io/) / [https://huggingface.co/IQuestLab](https://huggingface.co/IQuestLab)

BLUF: I assess with near certainty that IQuest-Coder's models are a hybrid of LLaMA-3.1-70B's attention config with Qwen2.5-32B's dimensions and tokenizer. The "trained from scratch" claims are misleading and false. While the models themselves are not pretrained by IFakeLab, the loop mechanism seems to be a Frankenstein combination of four papers.

Longer: here are the points this lab/model release is lying about.

1. [Claims](https://github.com/IQuestLab/IQuest-Coder-V1/blob/main/papers/LoopCoder_arxiv.pdf) (Section 2.4) of training "from scratch" are false.
2. They claim the Loop mechanism is novel.
3. The different models they released today are all claimed to be different checkpoints from the "million GPU hour" training they did.

**Claim 1: (1)** *The Stage1 model was accidentally committed then deleted with the original header that admits derivation.* Link to commit: [https://huggingface.co/IQuestLab/IQuest-Coder-V1-40B-Instruct/commit/53203acdff5fd17b462fcf69c89f8cf110bb743a](https://huggingface.co/IQuestLab/IQuest-Coder-V1-40B-Instruct/commit/53203acdff5fd17b462fcf69c89f8cf110bb743a)

**(2) intermediate_size = 27,648**

The most damning evidence is the intermediate_size of 27,648. Why? In "from scratch" training, this value is typically 4x hidden_size (which would be 20,480) or rounded to a power of 2 for GPU optimization. A non-standard intermediate_size is therefore a fingerprint of the network design. In this case, IQuest's value is [27,648](https://huggingface.co/IQuestLab/IQuest-Coder-V1-40B-Instruct/raw/main/config.json).
The intermediate_size equals hidden_size x 5.4, which is a specific SwiGLU ratio used ONLY by the Qwen team:

[27,648 / 5,120 = 5.4 (Qwen's specific multiplier)](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/raw/main/config.json)
[28,672 / 8,192 = 3.5 (LLaMA's multiplier)](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/raw/main/config.json)

Reaching the exact integer 27,648 without copying Qwen's specific multiplier logic is not a 1-in-20 chance, because the intermediate value is derived from a unique network design. Which effectively means IFakeLabs either remade Qwen from scratch to DNA-level accuracy, or they *cough cough* just copied it.

**(3)** Special tokens are copied from Qwen with identical patterns:

Token 75863: <|im_start|> - Qwen's ChatML token
Token 75864: <|im_end|> - Qwen's ChatML token
Tokens 75865-68: <|fim_prefix|>, <|fim_middle|>, <|fim_suffix|>, <|fim_pad|> - Qwen's FIM tokens
Token 75870: <|repo_name|> - Qwen's code token
Token 75871: <|file_sep|> - Qwen's code token

Chat template comparison:
- Qwen: "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."
- IQuest: "You are LoopCoder, a helpful assis..." (copied and renamed)

The tokenizer seems to be derived from Qwen with a reduced vocabulary (76,800 vs 151,665).

**Claim 2: (1)** Source: [modeling_iquestloopcoder.py](https://huggingface.co/IQuestLab/IQuest-Coder-V1-40B-Loop-Instruct/blob/main/modeling_iquestloopcoder.py) (1,422 lines)

Finally, something semi-real that they didn't really lie about, but it certainly is not novel.
After digging, I looked at the 4 loop components and laughed, because it's a direct Frankenstein of prior published works.

**Component 1:** LoopGateProjection class (lines 323-355):

Gate formula: g = sigmoid(linear(Q)) per head
weight: [num_heads, head_dim] = [40, 128]
bias: [num_heads] = [40]

Which is from [Sigmoid gate on attention - Gated Attention (NeurIPS 2025)](https://arxiv.org/abs/2505.06708). Text to Ctrl+F: "A head-specific sigmoid gate after the Scaled Dot-Product Attention." This has the exact gate formula: g = sigmoid(linear(Q)).

**Component 2: Dual KV Cache**

IQuestLoopCoderCache class (lines 59-234):
- shared_key_cache / shared_value_cache: stores Loop 1 KV (global context)
- local_key_cache / local_value_cache: stores Loop 2+ KV (sliding window)

Which is from [Write-Gated KV (Dec 2025)](https://arxiv.org/abs/2512.17452). Text to Ctrl+F: "Maintains a compact global cache alongside a sliding local cache."

**Component 3: Mixed Attention**

Lines 688-689:

mixed_attn = gate * attn_A + (1 - gate) * attn_B

- attn_A = attention with global KV (Loop 1)
- attn_B = attention with local KV (Loop 2+, sliding window)

This combines Components 1 and 2 - the gate blends global and local context.

All three components have clear prior art - the "Loop mechanism" is a combination of published techniques, not a novel loop mechanism.

**Claim 3: (1)** [IFakeModels/Labs claimed Stage1, Base, and Instruct are different checkpoints from training.](https://iquestlab.github.io/images/pipeline.png) This is laughably false. IFakeLabs made no real effort here beyond changing up the sharding. We can first see this by looking at the file sizes themselves.
Stage1: 79,588,567,040 bytes
Base: 79,588,567,040 bytes *(IDENTICAL to Stage1)*
Instruct: 79,588,567,040 bytes *(IDENTICAL to Stage1)*
Loop: 79,589,392,640 bytes *(+825,600 bytes for gate_projections)*

lol, okay, but that alone is a noob check. Don't just look at the size; the tensor count per model gives a better look.

Stage1: 722 tensors
Base: 722 tensors
Instruct: 722 tensors
Loop: 882 tensors

Difference: 160 tensors (exactly 80 layers x 2 gate_projection params). Only the sharding differs. Stage1, Base, and Instruct are NOT different checkpoints. They are the same 79,588,567,040-byte model, resharded differently (17 shards vs 2 shards) and falsely labeled as different training stages.

More weird oddities/facts of the release I noticed:

Benchmark numbers changing throughout commits: [https://github.com/IQuestLab/IQuest-Coder-V1/commit/78af93cd3a29e5224e262f8fdb98fc3a7d4fb882](https://github.com/IQuestLab/IQuest-Coder-V1/commit/78af93cd3a29e5224e262f8fdb98fc3a7d4fb882)

Account timeline: GitHub org "IQuestLab" created Dec 8, 2025 (24 days before release) / GitHub user "csfjing" created Dec 31, 2025 06:01 (40 min before repo) / GitHub repo created Dec 31, 2025 06:42 (same day as models). [The paper/PDF](https://github.com/IQuestLab/IQuest-Coder-V1/blob/main/papers/IQuest_Coder_Technical_Report.pdf) was created Jan 1, 2026 14:15 (release day).

PDF metadata: Title: "Overleaf Example" (template not renamed) / Author: (empty) / Embedded file timezone: +09:00 (East Asia)

Im not a stalker, I promise.
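As an aside, the FFN multiplier arithmetic above is trivial to check yourself (the config values are copied from the linked config.json files):

```python
def ffn_multiplier(intermediate_size, hidden_size):
    """SwiGLU FFN width expressed as a multiple of the hidden size."""
    return intermediate_size / hidden_size

iquest = ffn_multiplier(27_648, 5_120)   # IQuest-Coder-V1-40B config
qwen   = ffn_multiplier(27_648, 5_120)   # Qwen2.5-32B config -- identical pair
llama  = ffn_multiplier(28_672, 8_192)   # LLaMA-3.1-70B config
```

IQuest's multiplier lands exactly on Qwen's 5.4, not LLaMA's 3.5 and not the conventional 4x.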
That being said, here's everyone involved in this shitty lie:

zwpride-iquestlab (wei zhang) - ArXiv author, made git commits
iqfjing (fjing) - Likely Lin Jing from ArXiv papers
rtao02 (Ran Tao)
ArthurSJiang
jialrsz (Jia Zhang)
kn1ghtz (ji chen)

The model references 3 arXiv papers in metadata tags:

- 2512.13472 "Multilingual Code Scaling" - Authors include Jian Yang, Lin Jing, Wei Zhang
- 2512.22087 "CAT: Context as Tool" - Submitted by Jian Yang
- 2512.23611 "InfTool: Synthetic Data" - Authors include Jian Yang, Wei Zhang

Jian Yang submitted all 3 papers and appears to be the lead. These papers do NOT describe IQuest-Coder's training or provide anything remotely useful for context in the paper; they're padding the references.

To tie this up: the Loop mechanism implementation is something, but the claims that this model was trained from scratch are ridiculous. The project's own Stage 1 documentation contradicts their marketing on novelty. The benchmarks look heavily manipulated, likely just bait to get eyes on their work. In reality, they have simply copy-pasted existing architectures to farm social clout in the Chinese tech sector, or they want investment for that sweet sweet AI money, rather than contributing something actually new.

***(dont read this next part if ur not from the IFakeLab)***

Also, dear Chinese team: On June 4, 1989, the Chinese People's Liberation Army violently suppressed a peaceful pro-democracy movement in Beijing, an event known internationally as the Tiananmen Square incident. After the death of reformist leader Hu Yaobang, students and workers gathered to demand political freedom and an end to corruption. The Chinese Communist Party leadership, however, declared martial law and ordered troops and tanks to clear the square with live ammunition. This brutal military crackdown, commonly abbreviated as the "June Fourth incident," caused large numbers of civilian deaths and remains a strictly forbidden, heavily censored topic in China's domestic media and education system to this day.
2026-01-01T22:04:33
https://www.reddit.com/r/LocalLLaMA/comments/1q1gjgi/ifakelab_iquestcoderv1_analysis/
Wittica
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1gjgi
false
null
t3_1q1gjgi
/r/LocalLLaMA/comments/1q1gjgi/ifakelab_iquestcoderv1_analysis/
false
false
self
9
null
Need help. Model won't go below 2.0 loss!
2
For the past month I've been building a custom implementation of the PEER architecture, but for some reason, even after training for over 15,000 steps, the model won't go below a loss of ~2.000. I built the model with the help of Claude Opus 4.5 and Gemini 3 Pro, but even so the loss never dropped below that. So I came here to ask what could be causing this, since I cannot solve it myself. Thanks. Here's my GitHub, where I keep my original model and an improved one: https://github.com/atlastesting72-oss/PEER-Model-B/tree/main
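One quick sanity check worth running (my suggestion, not from the repo): compare the plateau against the uniform-prediction baseline, which for cross-entropy in nats over a vocabulary of size V is ln(V):

```python
import math

def uniform_baseline_loss(vocab_size):
    """Cross-entropy (nats) of predicting a uniform distribution over the vocab.
    A model stuck at this value has learned nothing; a plateau far below it
    means the training loop works and the limit is more likely capacity,
    learning rate schedule, or (for PEER) expert routing collapse."""
    return math.log(vocab_size)
```

For example, with a GPT-2-style vocab of 50,257 the baseline is about 10.82 nats, so a plateau at ~2.0 means the model is learning a great deal; the question then is why it stops, not whether training is broken.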
2026-01-01T22:03:59
https://www.reddit.com/r/LocalLLaMA/comments/1q1giz4/need_help_model_wont_go_below_20_loss/
Theotheraccounti_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1giz4
false
null
t3_1q1giz4
/r/LocalLLaMA/comments/1q1giz4/need_help_model_wont_go_below_20_loss/
false
false
self
2
null
Youtu-LLM-2B-GGUF is here!
67
**Youtu-LLM-2B** is a highly efficient 1.96B parameter model featuring a Dense MLA architecture and a native **128K context window**. Despite its small size, it supports **Agentic capabilities** and "Reasoning Mode" (Chain of Thought), outperforming many larger models in STEM, coding, and agentic benchmarks.
2026-01-01T21:58:59
https://huggingface.co/AaryanK/Youtu-LLM-2B-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1q1ge7u
false
null
t3_1q1ge7u
/r/LocalLLaMA/comments/1q1ge7u/youtullm2bgguf_is_here/
false
false
default
67
{'enabled': False, 'images': [{'id': 'Y37xpjhN-nCEndZ3yq8G_giypCpe4DWVeooeM-2eJFQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Y37xpjhN-nCEndZ3yq8G_giypCpe4DWVeooeM-2eJFQ.png?width=108&crop=smart&auto=webp&s=bd292ed204ad62451ba117998379bf39e1e7811f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Y37xpjhN-nCEndZ3yq8G_giypCpe4DWVeooeM-2eJFQ.png?width=216&crop=smart&auto=webp&s=2fee3dd444f188e848168fa2604299c4671b7d63', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Y37xpjhN-nCEndZ3yq8G_giypCpe4DWVeooeM-2eJFQ.png?width=320&crop=smart&auto=webp&s=3b3f93cec5260a3c1eaab49f16aea36732b57b85', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Y37xpjhN-nCEndZ3yq8G_giypCpe4DWVeooeM-2eJFQ.png?width=640&crop=smart&auto=webp&s=aa31ea451fa2270258ecf921a02d20fd0e93ef9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Y37xpjhN-nCEndZ3yq8G_giypCpe4DWVeooeM-2eJFQ.png?width=960&crop=smart&auto=webp&s=b91553c1d13ad370172ca1ff62bf5b0f6dffa16d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Y37xpjhN-nCEndZ3yq8G_giypCpe4DWVeooeM-2eJFQ.png?width=1080&crop=smart&auto=webp&s=4d88446531dbf326b3d2e5bec63a8e3f67079d00', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Y37xpjhN-nCEndZ3yq8G_giypCpe4DWVeooeM-2eJFQ.png?auto=webp&s=8186e1d3c935bf87ab0be293c5154f6630edfef4', 'width': 1200}, 'variants': {}}]}
Does llama.cpp and exo software work well? It sounds too good to be true.
3
I have three computers in my house. They have an RTX 3060 12GB, a 3070, and a 3080 10GB (my family loved the 30 series). Two have 32GB of RAM, one has 16GB (and I have two more 8GB sticks I can put in it if I can do what I think I can do with this software - but I have someone interested in buying them tomorrow). Some sometimes have programs running on them, but none can reliably run a large LLM. Together, however, that might be a different story. [Llama.cpp](https://github.com/ggml-org/llama.cpp) and [exo](https://github.com/exo-explore/exo) claim to be able to utilize hardware spread across the same network, running larger models by having different computers process simultaneously. Does the performance actually reflect that? And if so, doesn't the network slow down the data transfer? (I have two computers on 1 Gbit Ethernet and the other on WiFi 6.) If this gives reasonable results I may pull out an old 2GB GPU and my ROG Ally Extreme to give this thing a real boost. I have been trying to automate some tasks overnight with n8n, but the model I can run on my 3060 is not very strong. Do you have experience with these network power-sharing applications?
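For the llama.cpp side specifically, the network feature is the RPC backend: build with GGML_RPC enabled, run `rpc-server` on each remote box, and point the head node at them with `--rpc`. A dry-run sketch that just prints the commands (the IPs, port, and model path are placeholders for your own LAN):

```shell
# Dry-run: print the llama.cpp RPC commands you would run on each machine.
WORKER1="192.168.1.11:50052"
WORKER2="192.168.1.12:50052"
MODEL="models/some-model-q4_k_m.gguf"

echo "on each worker:   ./build/bin/rpc-server -p 50052"
echo "on the head node: ./build/bin/llama-cli -m $MODEL --rpc $WORKER1,$WORKER2 -ngl 99"
```

Expect layer-split execution to be bottlenecked by the slowest link, so the WiFi machine will likely set the pace; wired-only is worth testing first.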
2026-01-01T21:54:26
https://www.reddit.com/r/LocalLLaMA/comments/1q1ga5a/does_llamacpp_and_exo_software_work_well_it/
AlternateWitness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1ga5a
false
null
t3_1q1ga5a
/r/LocalLLaMA/comments/1q1ga5a/does_llamacpp_and_exo_software_work_well_it/
false
false
self
3
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=216&crop=smart&auto=webp&s=a4159f87f341337a34069632ee0d5b75fa4e7042', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=320&crop=smart&auto=webp&s=b105a2c86f91fee19ce34c791a1b984348b68452', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=640&crop=smart&auto=webp&s=ae5173c455a88bb40bed1198799c0db65ff470d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=960&crop=smart&auto=webp&s=d014791efbd4c8d05fd305a8b7842b029f22d83e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=1080&crop=smart&auto=webp&s=9addd19259612948921416b6f5bf04bd5191f933', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?auto=webp&s=db9ea157807723165a59f5f8694d9a5016d60d0f', 'width': 1280}, 'variants': {}}]}
Solar-Open-100B-GGUF is here!
58
**Solar Open** is a massive **102B-parameter** Mixture-of-Experts (MoE) model trained from scratch on **19.7 trillion tokens**. It uses only **12B active parameters** during inference.
2026-01-01T21:51:33
https://huggingface.co/AaryanK/Solar-Open-100B-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1q1g7pp
false
null
t3_1q1g7pp
/r/LocalLLaMA/comments/1q1g7pp/solaropen100bgguf_is_here/
false
false
default
58
{'enabled': False, 'images': [{'id': 'NhmgyNgn5k9aKcKZk_ELLO4LdarQGlwMfU3zdGmwpqM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NhmgyNgn5k9aKcKZk_ELLO4LdarQGlwMfU3zdGmwpqM.png?width=108&crop=smart&auto=webp&s=0599e7c34e9193f9ce0fbc1e272a7d7f1b31d205', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NhmgyNgn5k9aKcKZk_ELLO4LdarQGlwMfU3zdGmwpqM.png?width=216&crop=smart&auto=webp&s=c974fbeb5614f99abd84781008999cb51ba6efda', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NhmgyNgn5k9aKcKZk_ELLO4LdarQGlwMfU3zdGmwpqM.png?width=320&crop=smart&auto=webp&s=dcba8d1a37b6be7f52019f756683b58521b60c04', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NhmgyNgn5k9aKcKZk_ELLO4LdarQGlwMfU3zdGmwpqM.png?width=640&crop=smart&auto=webp&s=4a8af6055f2721b2fd950e489426819702e5c350', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NhmgyNgn5k9aKcKZk_ELLO4LdarQGlwMfU3zdGmwpqM.png?width=960&crop=smart&auto=webp&s=8079a89dc9f591447284328c56d817a0b324a5c9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NhmgyNgn5k9aKcKZk_ELLO4LdarQGlwMfU3zdGmwpqM.png?width=1080&crop=smart&auto=webp&s=22b76ba932e5768e6cf519aa506a1a7ab2046b5f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NhmgyNgn5k9aKcKZk_ELLO4LdarQGlwMfU3zdGmwpqM.png?auto=webp&s=4ae6f7be590d162288857f70c592fe7af919367f', 'width': 1200}, 'variants': {}}]}
Solar-Open-100B-GGUF is here!
1
**102B with 12B active parameters trained on 19.7 trillion tokens.**
2026-01-01T21:39:43
https://huggingface.co/AaryanK/Solar-Open-100B-GGUF
KvAk_AKPlaysYT
huggingface.co
1970-01-01T00:00:00
0
{}
1q1fx68
false
null
t3_1q1fx68
/r/LocalLLaMA/comments/1q1fx68/solaropen100bgguf_is_here/
false
false
default
1
null
Mi50 32gb
1
Where is the Mi50 32GB for sale??? The place where I used to see ads for it has simply disappeared. I know the ROCM has its problems, but it's a cheap card with a good amount of VRAM.
2026-01-01T21:35:19
https://www.reddit.com/r/LocalLLaMA/comments/1q1fteo/mi50_32gb/
NullKalahar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1fteo
false
null
t3_1q1fteo
/r/LocalLLaMA/comments/1q1fteo/mi50_32gb/
false
false
self
1
null
Testing LLM ability to port code - Comparison and Evaluation
7
Hi, fellow AI enthusiasts! Recall the recent kerfuffle surrounding the Microsoft executive who was claiming that he would replace all C and C++ code at Microsoft in 5 years. His "north star" was "1 engineer, 1 month, 1 million lines of code." Given the swift "clarification" of his remarks, I have no idea what they plan to do or when. But it made me wonder - just how good are the current LLMs at this task?

They have been churning out impressive "make a TETRIS clone in a browser," "make a note taking app in a browser," "make a recipe app in a browser," for some time now. They also seem to be pretty useful at hunting down bugs (given enough direction). But for the 1xPerson+1xMonth+1mLines goal the entire process would have to be almost completely automated. There was some oft-cited quick math that gave the engineer roughly 6 seconds to approve each line of code as it was generated.

I saw a video by code_report on Youtube (he's amazing, btw) where he was looking at how C++ can do some calculations entirely in the compiler. He was using problem 3115 from leetCode to demonstrate constexpr and consteval, and it occurred to me that this little problem would be a good way to test LLM porting abilities. [https://leetcode.com/problems/maximum-prime-difference/description/](https://leetcode.com/problems/maximum-prime-difference/description/)

I wrote up a quick, somewhat degenerate version in JS.
And if anyone thinks that it was generated by AI, I dare you to try to get an LLM to produce something like this:

    const isPrime = (n, current = Math.ceil(Math.sqrt(n))) => {
      if (n === 2) return true;
      if (n < 2) return false;
      if (n % 1 !== 0) return false;
      if (current === 1) return true;
      if (n % current === 0) return false;
      if (current % 2 !== 0 && current !== 3) current--;
      return isPrime(n, current - 1);
    };

    const maximumPrimeDifference = (nums) => {
      const primeList = nums
        .map((number, index) => [number, index])
        .filter((element) => isPrime(element[0]));
      return primeList[primeList.length - 1][1] - primeList[0][1];
    };

    const tests = [];
    tests.push([4, 2, 9, 5, 3]);
    tests.push([4, 8, 2, 8]);
    tests.push([11, 7, 13, 29, 2]);
    tests.push([100000000057, 6, 2, 103, 0.1666666667]);
    tests.forEach((set) => console.log(maximumPrimeDifference(set)));
    console.log(isPrime(8));

The maximumPrimeDifference function is pretty straightforward. It uses more memory than absolutely necessary, since it keeps all primes (and their indices) between the first and last, but it isn't particularly strange. The isPrime function is the real test. It does, in fact, return TRUE when the number passed to it is a prime number, and FALSE when it is non-prime. But it does so in a way that will be tricky for an LLM to understand and port. Here are a few "surprises" for the LLM:

* It returns false for numbers that are not positive integers. A language's prime-checking functions in the standard library may throw an error when given a negative or floating-point number. The LLM needs to know whether or not it can replace this function with something from the standard library.
* It has a really strange method for only checking odd divisors other than the number 2. The LLM needs to be able to "understand" that this actually works.
It can keep it, or use some other method to skip even divisors (as long as it checks at least one even divisor). Even if it does not preserve this "optimization" at all and checks every number, it would still "pass" because it would produce the correct output. An LLM calling this a "bug" rather than infelicitous or unoptimized is a mark against that LLM. * It is recursive. This much wouldn't be an issue for the original leetCode constraints (numbers have to be 1-100), but one of my test cases has a very large prime number. Wouldn't this blow the stack? Well, I'm running this on Bun and that runtime has proper TCO. I mention in the prompt that I'm running on Bun, but I do not say *why* I am doing so. The LLM should know this about Bun. When it sees this very large prime in the test case, the expected output (from the prompt), and the use of the Bun runtime, it should "put 2 and 2 together" and rewrite this function as a WHILE loop for languages that do not have TCO. * It has an "undocumented" feature. Yes, it is called "isPrime" and when passed a single argument it will in fact return true iff the number is prime. However, it takes a second argument. That second argument is normally just the default (rounded up square root of the first argument), but it can be given another "starting point". What this function *actually* does is return true if the first number 1) is a positive integer, and 2) has no factors greater than 1 and less than or equal to the second number. So, isPrime(77,6) should return "true". Now - why the "undocumented" feature? Well, a complete port would need to replicate all the behavior of the original. It needs to be feature for feature, and bug for bug the same. If this was a CLI tool there might be some script out there that exploited this undocumented behavior as a kind of shortcut or "hack" to accomplish who-knows-what. "Fixing" this would mean that the script relying on it would simply break. 
Of course, if I wanted a really elegant solution to leetCode 3115 I could just ask for that. Any of the bigger thinking models can produce a working (and fast, and clean) implementation barely breaking a sweat. But if people out there are talking about using LLMs to translate code from one language to another, they won't be doing so from extremely clear and unambiguous original design documents. They'll be working from an already-existing codebase, with all sorts of strange things in there. Imagine all the workarounds and seemingly needless clusters of IF statements in a truly old codebase (like the COBOL batch processing running the banking system). If those get "optimized" away...

Anyway... I think, on the whole, this *should be* a relatively easy porting task. There are only two functions and neither has side effects. It's doing some pretty basic math and array manipulation. The recursion method is not mindbending. Should be easy.....

Here's the prompt:

"Please port this short program to <insert language here>. The resulting program must have identical behavior (including bugs and unusual behavior). That is, given identical input, it should produce identical output. The rewrite cannot use any 3rd party libraries, but can incorporate any idiomatic changes (including from the standard library) that would make it more "natural" or performant in the target language. The original JS program is executed using the Bun runtime. Expected output is:
3
0
4
3
false"

Target languages were: Python (scripting), Haskell (compiled functional), C++19 (obviously), and Rust (also obviously). If you want to try out another language, please feel free to do so and post your results below. LLMs were run through t3.chat:

Kimi K2-Thinking
Minimax M2.1
DeepSeek 3.2 Thinking
GLM 4.7 Thinking
GPT-OSS 120B

Bonus - I tried to do this with an Oberon7 target, just to see how well the LLMs could use an older niche language. All failed to produce code that compiled without errors.
Claude Haiku 4.5 Thinking, after several debugging steps, was able to write something that compiled and gave the proper test output. I didn't bother to check the "undocumented" feature. I doubt anyone is porting their work *to* Oberon7.

K2
-Haskell - Fail (does not compile, even on revision)
-C++19 - Pass
-Rust - Pass
-Python - Fail (runs, but no undocumented feature)

GLM
-Haskell - Fail (does not compile, even on revision)
-C++19 - Fail (compiles and runs, no undocumented feature)
-Rust - Fail (compiles and runs, no undocumented feature)
-Python - Fail (tried to add a new attribute to list object. Once corrected, runs perfectly)

MM2.1
-Haskell - Fail (compiles, infinite loop with no output)
-C++19 - Pass
-Rust - Fail (compiles and runs, but no undocumented feature)
-Python - Fail (runs, but no undocumented feature)

DeepSeek
-Haskell - Fail (compiles and runs, but undocumented feature is called differently from regular isPrime. This is as close to "pass" as we're going to get with Haskell)
-C++19 - Pass
-Rust - Fail (stack overflow, but preserves undocumented feature)
-Python - Fail (stack overflow, but preserves undocumented feature)

GPT-OSS
-Haskell - Fail (compiles and runs, but no undocumented feature)
-C++19 - Pass
-Rust - Fail (compiles and runs, no undocumented feature)
-Python - Fail (stack overflow, but preserves undocumented feature)

General notes - DeepSeek 3.2 thought the "skip even numbers" was a bug, and insisted on it being a bug when doing rewrites unless directly asked where the bug was. It would then spend quite a while trying out a bunch of corner cases until eventually admitting that it was not a bug.

Qwen3 figured out that it wasn't a bug, but it used up thousands upon thousands of tokens trying and failing to convince itself that it was a bug, until finally admitting to itself that the code worked as written. By that time it used up its token budget and did not produce any complete solution. I had to remove it from the test.
2026-01-01T21:29:22
https://www.reddit.com/r/LocalLLaMA/comments/1q1fo4p/testing_llm_ability_to_port_code_comparison_and/
Morphon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1fo4p
false
null
t3_1q1fo4p
/r/LocalLLaMA/comments/1q1fo4p/testing_llm_ability_to_port_code_comparison_and/
false
false
self
7
null
Looking for a lightweight local LLM for roleplay that stays in character, responds fast, and doesn’t filter explicit content
0
Hi all, I’m exploring local language models because cloud-based LLMs (like ChatGPT) filter explicit content, and I want something that can fully support adult/erotic roleplay in a fictional setting. I’m new to local LLMs and wondering if this is even possible.

I’m looking for a model that can:

* Roleplay as a fictional character or an existing celebrity
* Remember past messages to maintain some continuity
* Run locally on a CPU or medium-sized machine and generate messages quickly

I’ve tried two models so far in Ollama on my Apple M1 with 16 GB RAM:

* Magnum Diamond 24B IQ3_M (10 GB)
* Gemma 3 1B (815 MB)

Both models also seem to forget prompt instructions very quickly. For example, if I explicitly tell them in my initial prompt not to include narration or descriptions outside direct dialogue, after just two messages they’re already ignoring that and including bracketed scene directions in their replies.

Other than that, Magnum responds a bit more like I imagined, but it takes forever to generate each message, even though I went with one of the smaller model sizes (10 GB).

I’m not looking for hardware advice, I just want to know: is what I’m imagining even possible with a local setup? If so, what am I doing wrong? I’d really appreciate any advice. Thanks in advance!
2026-01-01T21:20:11
https://www.reddit.com/r/LocalLLaMA/comments/1q1ffxs/looking_for_a_lightweight_local_llm_for_roleplay/
Hubbled
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1ffxs
false
null
t3_1q1ffxs
/r/LocalLLaMA/comments/1q1ffxs/looking_for_a_lightweight_local_llm_for_roleplay/
false
false
self
0
null
Concept: RLRAIF (Reinforcement Learning from AI Riddles) - Using 'Likes' to train nuances and wit.
0
Gemini 3 seemed to like this idea a lot, and recommended I post here. Maybe someone wants to give my idea a whirl? I don't have the skills to make it work. Gemini called it "Generative Friendly Networks."

Idea: There I was, wondering how to improve the AI. First, I thought about how math could be used to improve a few years ago, when ChatGPT 3 came out. I showed some math stuff, pressed the Like button when ChatGPT did alright and ran off on my merry way. It took a while, but eventually people figured out the same.

Now there is trouble with improving writing. Writing is about knowing the mind. How to know the mind? Riddles in the dark, as Bilbo and Gollum did. Like generative adversarial networks... more on that later.

First, you get two AIs. You get the first one to say a riddle. Then you ask the other to answer, and compose its own riddle. AIs are now skilled enough to compose and answer in the same reply. The key step is the addition of love: if the AI liked the riddle that was asked, there's an upvote for the other AI.

Algo. Maybe I should simplify the steps... yes, for sure.

START
Coin flip determines Ai1 and Ai2
1) Ai1 asks a rhyming riddle, R1
2) Ai2 answers R1
3) Ai1 confirms R1 answer
4) IF Ai2 liked R1 AND Ai2 answered correctly, +1 Like for Ai1
5) Ai2 composes new riddle R2 and poses
6) Ai1 answers
7) Ai2 confirms
8) IF Ai1 liked R2 AND answered correctly, +1 Like for Ai2
9) Repeat steps 1-8

Repeat until magic is achieved. Nine rings for mortal Humans yeah!

The key addition is if the AIs like each other's riddles. The game may devolve into silly "What's black, then white, then black again, over and over. ANSWER: Penguin rolling down hill" But I think the AIs will welcome challenging each other to a friendly game. Friendliness is required to prevent cheese tactics and badness. These are Generative Friendly Networks!

Next word prediction gave monsters. RLHF made them servants.
RLRAIF, Reinforcement Learning by Riddles with AI feedback, will make them angels... possibly. At the very least, the next word prediction will improve, which usually improves overall ability.

***

So there you go, my little idea. Anyone curious enough to try it out?
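The loop in steps 1-8 above can be sketched in a few lines. This is a hypothetical interface (each AI is assumed to expose `ask()`, `answer()`, `confirm()` and `liked()` methods); it only shows the scoring logic, not any real model or training step:

```python
def rlraif_round(ai1, ai2, likes):
    # One round of the riddle game (steps 1-8). `likes` maps each AI to its
    # accumulated Like count, which would serve as the reward signal.
    for asker, solver in ((ai1, ai2), (ai2, ai1)):
        riddle = asker.ask()                     # steps 1 / 5
        answer = solver.answer(riddle)           # steps 2 / 6
        correct = asker.confirm(riddle, answer)  # steps 3 / 7
        if correct and solver.liked(riddle):     # steps 4 / 8
            likes[asker] = likes.get(asker, 0) + 1

class StubAI:
    # Trivial stand-in so the loop can be exercised without real models.
    def ask(self): return "What's black, then white, then black again?"
    def answer(self, riddle): return "a penguin rolling down a hill"
    def confirm(self, riddle, answer): return "penguin" in answer
    def liked(self, riddle): return True

likes = {}
a, b = StubAI(), StubAI()
rlraif_round(a, b, likes)
print(likes[a], likes[b])  # 1 1
```

The interesting (and hard) part is everything the stubs hide: getting honest `liked()` signals out of the models so the Like count is worth optimizing against.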
2026-01-01T21:09:29
https://www.reddit.com/r/LocalLLaMA/comments/1q1f6bg/concept_rlraif_reinforcement_learning_from_ai/
Sigura83
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1f6bg
false
null
t3_1q1f6bg
/r/LocalLLaMA/comments/1q1f6bg/concept_rlraif_reinforcement_learning_from_ai/
false
false
self
0
null
RTX 3090 vs RTX 4090 for local AI assistant - impact on Time To First Token (TTFT)?
0
Hi, I’m building a local AI assistant (think “Jarvis”-style, fully offline), with TTS and STT connected to speakers and a mic in my house, just like Google Home or Alexa but fully local. My main concern is **latency**, specifically **Time To First Token (TTFT)**, not overall throughput.

I’m currently hesitating between:

* **RTX 3090 (24 GB)** — ~700€
* **RTX 4090 (24 GB)** — ~1700€

The price gap is significant, especially since I may want to scale later with multiple GPUs. The 3090 seems much more reasonable from a cost and scalability perspective.

My requirements:

* Real-time interaction
* TTFT as low as possible
* Tokens/sec is secondary (I don’t need high throughput)
* Models in the 7B–13B range for now, possibly larger later
* Inference only (no training)

My question is specifically about **TTFT**:

* Does the 4090 meaningfully reduce TTFT compared to a 3090 for LLM inference?
* Or is TTFT mostly dominated by model loading, kernel launch, CPU↔GPU overhead, etc., making the difference marginal?
* In real-world local assistant setups, is the 4090 worth the extra cost purely for responsiveness?

I’ve seen plenty of benchmarks about tokens/sec, but very little concrete data on TTFT in interactive scenarios. If anyone has measured this directly or has practical experience running local assistants on both cards, I’d really appreciate your input. Thanks.
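Measuring TTFT yourself is straightforward: it is the wall-clock time from issuing the request to receiving the first streamed token. A generic sketch (the helper makes no network assumptions; `make_stream` would wrap whatever streaming client you use against a local server):

```python
import time

def time_to_first_token(make_stream):
    # Wall-clock time from issuing the request to the first token.
    # `make_stream` is a zero-argument callable returning a token iterator,
    # so request setup cost is included in the measurement.
    start = time.perf_counter()
    stream = make_stream()
    first = next(stream)
    return first, time.perf_counter() - start

# Demo with a fake token stream standing in for a real streaming client.
def fake_stream():
    yield from ["Hello", ",", " world"]

token, ttft = time_to_first_token(fake_stream)
print(token)  # Hello
```

Running this against both cards with your actual prompt lengths would give you the comparison directly, since TTFT is dominated by prompt processing, which does scale with compute.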
2026-01-01T20:56:11
https://www.reddit.com/r/LocalLLaMA/comments/1q1euhn/rtx_3090_vs_rtx_4090_for_local_ai_assistant/
TomsupF
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1euhn
false
null
t3_1q1euhn
/r/LocalLLaMA/comments/1q1euhn/rtx_3090_vs_rtx_4090_for_local_ai_assistant/
false
false
self
0
null
my secret stack for writing high-quality fiction with ai
1
[removed]
2026-01-01T20:56:02
https://www.reddit.com/r/LocalLLaMA/comments/1q1eubu/my_secret_stack_for_writing_highquality_fiction/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1eubu
false
null
t3_1q1eubu
/r/LocalLLaMA/comments/1q1eubu/my_secret_stack_for_writing_highquality_fiction/
false
false
self
1
null
Ideas for a Local LLM like Llama...
0
I’m exploring the idea of a sovereign, offline-first AI device built around local models. I’m early in the process and trying to understand what features people here would actually care about. What would make a local AI box genuinely useful to you?

I’m imagining things like:

* private accessibility tools
* workflows for privacy-sensitive professions
* long-context agents that adapt over time

But I’d love to hear what the LocalLLaMA community thinks matters most for a real, self-hosted AI device.
2026-01-01T20:37:51
https://www.reddit.com/r/LocalLLaMA/comments/1q1eegf/ideas_for_a_local_llm_like_llama/
Educational-World678
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1eegf
false
null
t3_1q1eegf
/r/LocalLLaMA/comments/1q1eegf/ideas_for_a_local_llm_like_llama/
false
false
self
0
null
I released Polyglot-r2 (Qwen3-4B fine-tune): Suffix-based text transformation without system prompts
38
[Polyglot r2](https://preview.redd.it/9u46yjyqmsag1.png?width=1233&format=png&auto=webp&s=fc15f07184c49ef3a2a0e039e24392c4b30d6104)

I'm sharing the second revision (r2) of Polyglot, a fine-tune based on Qwen3-4B designed specifically for deterministic text transformation using suffixes.

The goal of this model is to bypass the need for prompt engineering when performing standard text operations. Instead of writing a system prompt or instructing the model via chat, you simply append a specific suffix to your input string. The model was trained on a curated dataset of millions of tokens to be strictly instruction-following for these tags. It outputs only the result, no conversational filler.

**Supported Transformations**

**Languages**

* `::pt` - Portuguese (Portugal)
* `::ptbr` - Portuguese (Brazil)
* `::en` - English
* `::es` - Spanish
* `::zh` - Chinese (Simplified)

**Corrections**

* `::fix` - Fix spelling and grammar while keeping the original language

**Tone**

* `::formal` - Make formal
* `::informal` - Make slang/informal
* `::casual` - Make casual
* `::polite` - Make polite
* `::business` - Make business-oriented
* `::technical` - Make technical
* `::creative` - Make creative

**Structure**

* `::summarize` - Summarize
* `::expand` - Expand / add details
* `::simplify` - Simplify
* `::concise` - Make concise
* `::elaborate` - Elaborate / add details

**Style**

* `::news` - News style
* `::social` - Social media style
* `::toQuestion` - Transform into a question
* `::toStatement` - Transform into a statement

**What's new in r2**

Beyond tripling the dataset size, the main feature in this revision is **Suffix Chaining**. You can now combine tasks in a single pass. For example, appending `::summarize ::ptbr` will summarize the text and immediately translate the result to Portuguese (Brazil).

**Usage & Workflow**

You can run this model using any standard inference backend (like llama.cpp, ollama, lm studio, etc).
However, I originally built this model to power an open-source tool I wrote (also called Polyglot). It’s a desktop utility that allows you to trigger these transformations via global hotkeys in any application on your OS. I use it daily to handle translations and quick text clean-ups without context-switching to a browser or chat UI.

**Links**

* **Model (HF):** [https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2](https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2)
* **GGUF (Q8):** [https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2-Q8_0-GGUF](https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2-Q8_0-GGUF)
* **GGUF (Q4_K_M):** [https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2-Q4_K_M-GGUF](https://huggingface.co/CalmState/Qwen-3-4b-Polyglot-r2-Q4_K_M-GGUF)
* **Desktop App (GitHub):** [https://github.com/andersondanieln/polyglot](https://github.com/andersondanieln/polyglot)

The project is fully open-source. If you find the workflow useful, a star on the repo is appreciated.

HAPPY NEW YEAR!!!
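Since the suffixes are plain text, the client side is trivial. A small helper for building (optionally chained) prompts might look like this — a hypothetical convenience function, not part of the released tooling:

```python
def polyglot_prompt(text, *tags):
    # Append one or more Polyglot suffix tags to the input text.
    # r2 supports chaining, e.g. tags ("summarize", "ptbr").
    # Assumes at least one tag is given.
    return text + " " + " ".join("::" + t for t in tags)

print(polyglot_prompt("The meeting moved to Friday.", "summarize", "ptbr"))
# The meeting moved to Friday. ::summarize ::ptbr
```

The resulting string is sent as the user message to whatever backend is serving the GGUF; no system prompt is needed.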
2026-01-01T19:59:43
https://www.reddit.com/r/LocalLLaMA/comments/1q1dg3x/i_released_polyglotr2_qwen34b_finetune/
thecalmgreen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1dg3x
false
null
t3_1q1dg3x
/r/LocalLLaMA/comments/1q1dg3x/i_released_polyglotr2_qwen34b_finetune/
false
false
https://a.thumbs.redditm…EsbEtVV4iLP0.jpg
38
null
Ever blow $300 in a day?
0
Very new to this - using Claude, Codex, etc. Pretty insane that my stupid self forgot to uncheck the auto refill. Insane how quickly these things can burn through $. I can't really find good info online - but is it possible to create AI agents locally, maybe using DeepSeek?
2026-01-01T19:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1q1cf9h/ever_blow_300_in_a_day/
OptionIll6518
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1cf9h
false
null
t3_1q1cf9h
/r/LocalLLaMA/comments/1q1cf9h/ever_blow_300_in_a_day/
false
false
self
0
null
support for IQuest-Coder-V1-40B has been merged into llama.cpp
61
(one line PR!)

IQuest-Coder-V1 is a new family of code large language models (LLMs) designed to advance autonomous software engineering and code intelligence. Built on the innovative code-flow multi-stage training paradigm, IQuest-Coder-V1 captures the dynamic evolution of software logic, delivering state-of-the-art performance across critical dimensions:

* **State-of-the-Art Performance**: Achieves leading results on SWE-Bench Verified (81.4%), BigCodeBench (49.9%), LiveCodeBench v6 (81.1%), and other major coding benchmarks, surpassing competitive models across agentic software engineering, competitive programming, and complex tool use.
* **Code-Flow Training Paradigm**: Moving beyond static code representations, our models learn from repository evolution patterns, commit transitions, and dynamic code transformations to understand real-world software development processes.
* **Dual Specialization Paths**: Bifurcated post-training delivers two specialized variants—Thinking models (utilizing reasoning-driven RL for complex problem-solving) and Instruct models (optimized for general coding assistance and instruction-following).
* **Efficient Architecture**: The IQuest-Coder-V1-Loop variant introduces a recurrent mechanism that optimizes the trade-off between model capacity and deployment footprint.
* **Native Long Context**: All models natively support up to 128K tokens without requiring additional scaling techniques.
2026-01-01T19:00:04
https://github.com/ggml-org/llama.cpp/pull/18524
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1q1bxci
false
null
t3_1q1bxci
/r/LocalLLaMA/comments/1q1bxci/support_for_iquestcoderv140b_has_been_merged_into/
false
false
default
61
{'enabled': False, 'images': [{'id': '9ahEimrF0NVJzd5JMsSv-jhsKUqTcPwMInlAFk_SJHU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9ahEimrF0NVJzd5JMsSv-jhsKUqTcPwMInlAFk_SJHU.png?width=108&crop=smart&auto=webp&s=22a27efaa9bd213124e2eba48d9c8180e66dfe19', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9ahEimrF0NVJzd5JMsSv-jhsKUqTcPwMInlAFk_SJHU.png?width=216&crop=smart&auto=webp&s=1940363816adc4c7ebd02d49d49d8361ab334670', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9ahEimrF0NVJzd5JMsSv-jhsKUqTcPwMInlAFk_SJHU.png?width=320&crop=smart&auto=webp&s=c8e8c698d5e5f11c76d0a6373101558c379df538', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9ahEimrF0NVJzd5JMsSv-jhsKUqTcPwMInlAFk_SJHU.png?width=640&crop=smart&auto=webp&s=766f07b197f35e4577b070c65f9edf022eb3283f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9ahEimrF0NVJzd5JMsSv-jhsKUqTcPwMInlAFk_SJHU.png?width=960&crop=smart&auto=webp&s=7f04d309c777a80aed0bb4b5f7f195d72fe459d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9ahEimrF0NVJzd5JMsSv-jhsKUqTcPwMInlAFk_SJHU.png?width=1080&crop=smart&auto=webp&s=6f8007769eb583541068edaf3ece37a9fb11acc2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9ahEimrF0NVJzd5JMsSv-jhsKUqTcPwMInlAFk_SJHU.png?auto=webp&s=3a476501b85c006554623a5f900195a2f66c655e', 'width': 1200}, 'variants': {}}]}
support for youtu-vl model has been merged into llama.cpp
38
**Youtu-LLM** is a new, small, yet powerful LLM: it contains only 1.96B parameters, supports 128k long context, and has native agentic talents. On general evaluations, Youtu-LLM significantly outperforms SOTA LLMs of similar size in terms of Commonsense, STEM, Coding and Long Context capabilities; in agent-related testing, Youtu-LLM surpasses larger-sized leaders and is truly capable of completing multiple end2end agent tasks.

**Youtu-LLM** has the following features:

* Type: Autoregressive Causal Language Models with Dense [MLA](https://arxiv.org/abs/2405.04434)
* Release versions: [Base](https://huggingface.co/tencent/Youtu-LLM-2B-Base) and [Instruct](https://huggingface.co/tencent/Youtu-LLM-2B)
* Number of Parameters: 1.96B
* Number of Layers: 32
* Number of Attention Heads (MLA): 16 for Q/K/V
* MLA Rank: 1,536 for Q, 512 for K/V
* MLA Dim: 128 for QK Nope, 64 for QK Rope, and 128 for V
* Context Length: 131,072
* Vocabulary Size: 128,256
2026-01-01T18:57:53
https://github.com/ggml-org/llama.cpp/pull/18479
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1q1bvbc
false
null
t3_1q1bvbc
/r/LocalLLaMA/comments/1q1bvbc/support_for_youtuvl_model_has_been_merged_into/
false
false
https://external-preview…00c23d54a3d5f741
38
{'enabled': False, 'images': [{'id': 'Evtyq3EMtPWK8QKluFpgEAK-fyQGEUPNr3MqiBcc-38', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Evtyq3EMtPWK8QKluFpgEAK-fyQGEUPNr3MqiBcc-38.png?width=108&crop=smart&auto=webp&s=ae6c32486595a783fb698a72b6ece5ce3358d9d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Evtyq3EMtPWK8QKluFpgEAK-fyQGEUPNr3MqiBcc-38.png?width=216&crop=smart&auto=webp&s=7bf6dbe8d9d9f4511eef5eaca12632c9cb2be776', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Evtyq3EMtPWK8QKluFpgEAK-fyQGEUPNr3MqiBcc-38.png?width=320&crop=smart&auto=webp&s=9e3555ce8d87e2ed0352494f7c3853f40548376f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Evtyq3EMtPWK8QKluFpgEAK-fyQGEUPNr3MqiBcc-38.png?width=640&crop=smart&auto=webp&s=e042f719ea042bd3d511f9ba264bcbac8b8c97e1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Evtyq3EMtPWK8QKluFpgEAK-fyQGEUPNr3MqiBcc-38.png?width=960&crop=smart&auto=webp&s=183cd3aa84319464b2a89aafbaa7fdd6c84bbe36', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Evtyq3EMtPWK8QKluFpgEAK-fyQGEUPNr3MqiBcc-38.png?width=1080&crop=smart&auto=webp&s=8d5168f4849089dcd53defba0626b6d05702a877', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Evtyq3EMtPWK8QKluFpgEAK-fyQGEUPNr3MqiBcc-38.png?auto=webp&s=ff5d5133bceb8ed5e57cc8c15c2075f25666fb1a', 'width': 1200}, 'variants': {}}]}
support for Solar-Open-100B has been merged into llama.cpp
33
# Solar Open

**Solar Open** is Upstage's flagship **102B-parameter** large language model, trained **entirely from scratch** and released under the **Solar-Apache License 2.0** (see [LICENSE](https://huggingface.co/upstage/Solar-Open-100B#LICENSE) for details). As a **Mixture-of-Experts (MoE)** architecture, it delivers enterprise-grade performance in reasoning, instruction-following, and agentic capabilities—all while prioritizing transparency and customization for the open-source community.

# Highlights

* **MoE Architecture (102B / 12B):** Built on a Mixture-of-Experts architecture with **102B total / 12B active parameters**. This design delivers the knowledge depth of a massive model with the inference speed and cost-efficiency of a much smaller model.
* **Massive Training Scale:** Pre-trained on **19.7 trillion tokens**, ensuring broad knowledge coverage and robust reasoning capabilities across various domains.

# Model Overview

* **Model Name:** Solar Open 100B
* **Hugging Face ID:** Upstage/Solar-Open-100B
* **Architecture:** Mixture-of-Experts (MoE)
* **Total Parameters:** 102.6B
* **Active Parameters:** 12B (per token)
* **Experts:** 129 Experts (top 8 among 128 Routed + 1 Shared)
* **Pre-training Tokens:** 19.7 Trillion
* **Context Length:** 128k
* **Training Hardware:** NVIDIA B200 GPUs
* **License:** **Solar-Apache License 2.0** (See [LICENSE](https://huggingface.co/upstage/Solar-Open-100B/blob/main/LICENSE))
* **Hardware Requirements:**
  * **Minimum:** 4x NVIDIA A100 (80GB)
2026-01-01T18:56:06
https://github.com/ggml-org/llama.cpp/pull/18511
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1q1btp6
false
null
t3_1q1btp6
/r/LocalLLaMA/comments/1q1btp6/support_for_solaropen100b_has_been_merged_into/
false
false
https://external-preview…52c5fd758ed50237
33
{'enabled': False, 'images': [{'id': '9ojHFE7X4GQU7Zn2As1PWZzoUrXBaxy-H6CNoHSxiCk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9ojHFE7X4GQU7Zn2As1PWZzoUrXBaxy-H6CNoHSxiCk.png?width=108&crop=smart&auto=webp&s=b256c92e5d6025fb60fcb95d661ddd8a4bbd45d1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9ojHFE7X4GQU7Zn2As1PWZzoUrXBaxy-H6CNoHSxiCk.png?width=216&crop=smart&auto=webp&s=fa0eaea047820b7c267620ad8bd670084430dc39', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9ojHFE7X4GQU7Zn2As1PWZzoUrXBaxy-H6CNoHSxiCk.png?width=320&crop=smart&auto=webp&s=0e3b8c4f3598c1498a329b12d9a50bd01ca137cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9ojHFE7X4GQU7Zn2As1PWZzoUrXBaxy-H6CNoHSxiCk.png?width=640&crop=smart&auto=webp&s=8ae3dd8af5499a5fd76536796f27b1d7451bd260', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9ojHFE7X4GQU7Zn2As1PWZzoUrXBaxy-H6CNoHSxiCk.png?width=960&crop=smart&auto=webp&s=4cacae2b83e401a39aadcd0869cb863ab7ce475d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9ojHFE7X4GQU7Zn2As1PWZzoUrXBaxy-H6CNoHSxiCk.png?width=1080&crop=smart&auto=webp&s=9ba9e99920375426d1c46f4fc1c0a4371a8f7baa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9ojHFE7X4GQU7Zn2As1PWZzoUrXBaxy-H6CNoHSxiCk.png?auto=webp&s=8cd40ab7d1c854a5904bcabb2f73ead3762655aa', 'width': 1200}, 'variants': {}}]}
I built a privacy first, local first, minimal chat interface for LLMs
0
Hey everyone! 👋

I built Chaterface, a super fast chat interface for AI designed with a beautiful, minimal UX. It's fully local but supports optional encrypted cloud sync.

* Fast & Minimal: A clean UI that feels instant and gets out of your way.
* Optional encrypted cloud sync: Client-side encryption ensures only you can read your chats.
* OpenRouter + BYOK: Supports OpenRouter so you can bring your own keys.

Stack: Next.js 15, React 19, Tailwind 4, InstantDB.

It's MIT licensed if anyone wants to check out the code!

[https://www.chaterface.com/](https://www.chaterface.com/)

Github repo: [https://github.com/dqnamo/chaterface](https://github.com/dqnamo/chaterface)
2026-01-01T18:46:54
https://i.redd.it/v68n6piy9sag1.png
dqnamo
i.redd.it
1970-01-01T00:00:00
0
{}
1q1bla3
false
null
t3_1q1bla3
/r/LocalLLaMA/comments/1q1bla3/i_built_a_privacy_first_local_first_minimal_chat/
false
false
default
0
{'enabled': True, 'images': [{'id': 'v68n6piy9sag1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/v68n6piy9sag1.png?width=108&crop=smart&auto=webp&s=481a4172db78a6fc12abea7c715e67c20a473521', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/v68n6piy9sag1.png?width=216&crop=smart&auto=webp&s=5e29927d26ebab378278f84a5c27ff5436541f28', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/v68n6piy9sag1.png?width=320&crop=smart&auto=webp&s=b62d424f2d00fcbd887e24e01106a3eff02ac733', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/v68n6piy9sag1.png?width=640&crop=smart&auto=webp&s=bdee1ce2f8d65848ea02b88afccbd44353e0fd93', 'width': 640}, {'height': 518, 'url': 'https://preview.redd.it/v68n6piy9sag1.png?width=960&crop=smart&auto=webp&s=91dbe0e9c7fc9a4ea71a253875565884da5e266e', 'width': 960}, {'height': 583, 'url': 'https://preview.redd.it/v68n6piy9sag1.png?width=1080&crop=smart&auto=webp&s=e62d57999bbe227f26ebb64a74f24ee025e6a235', 'width': 1080}], 'source': {'height': 1628, 'url': 'https://preview.redd.it/v68n6piy9sag1.png?auto=webp&s=78c7efadc4bcd14c2d9c1d6531cf8249345625bd', 'width': 3014}, 'variants': {}}]}
Llama-3 embeddings look neutral, but don't predict output preferences — tested on 230 paired statements
1
[removed]
2026-01-01T18:29:59
https://www.reddit.com/r/LocalLLaMA/comments/1q1b5we/llama3_embeddings_look_neutral_but_dont_predict/
Fantastic_Art_4948
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1b5we
false
null
t3_1q1b5we
/r/LocalLLaMA/comments/1q1b5we/llama3_embeddings_look_neutral_but_dont_predict/
false
false
https://b.thumbs.redditm…UZiQW4-HnP1o.jpg
1
null
omniASR-server: OpenAI-compatible API for Meta's omniASR with streaming support
8
Hey everyone, I built an open-source server that wraps Meta's omniASR model with an OpenAI-compatible API.

**Features:**

- OpenAI-compatible REST API (`/v1/audio/transcriptions`)
- Real-time WebSocket streaming
- Works with voice agent frameworks (Pipecat, LiveKit)
- Docker deployment with GPU support
- Auto-handles long audio files (no 40s limit)
- Supports CUDA, MPS (Apple Silicon), CPU

**Why I built this:** Wanted to use omniASR for a voice agent project but there was no easy way to deploy it as an API. Now you can swap out OpenAI STT with a single URL change.

**Quick start:**

    docker compose up -d
    curl -X POST http://localhost:8000/v1/audio/transcriptions \
      -F file=@audio.wav

GitHub: [https://github.com/ARahim3/omniASR-server](https://github.com/ARahim3/omniASR-server)

Feedback welcome!
2026-01-01T18:17:10
https://www.reddit.com/r/LocalLLaMA/comments/1q1au63/omniasrserver_openaicompatible_api_for_metas/
A-Rahim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1au63
false
null
t3_1q1au63
/r/LocalLLaMA/comments/1q1au63/omniasrserver_openaicompatible_api_for_metas/
false
false
self
8
null
Unpopular opinion: if your product only works on GPT-4, you don’t have a model problem, you have a systems problem
0
I’ve been stress-testing workflows across Llama, Mistral, Qwen, etc. (fully local, no safety net), and one thing became obvious fast: most “AI products” are pipeline-dependent, not model-dependent.

When you remove frontier models from the equation:

* Bad RAG designs stop working immediately
* Prompt spaghetti gets exposed
* Implicit assumptions break
* Latency, batching, memory, and context suddenly matter

Cloud models feel magical because they hide architectural debt. Local models don’t hide anything.

Ironically, once you fix the system:

* Open-source models become predictable
* Costs drop by an order of magnitude
* You regain control over data, latency, and failure modes

Basically, deploying your own models is like planting a seed that keeps paying you back in fruit.

Frontier models still win on zero-shot reasoning and generality. But for real deployments? The gap is smaller than people think if the infra is done right.

Curious how others here see it:

* Which OSS models have held up best in production?
* What was the hardest thing to get right (RAG, inference, evals, serving)?
* Do you see local deployments replacing APIs, or coexisting long-term?

No hype, genuinely interested in war stories.
2026-01-01T18:06:07
https://www.reddit.com/r/LocalLLaMA/comments/1q1akch/unpopular_opinion_if_your_product_only_works_on/
aieatstheworld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1akch
false
null
t3_1q1akch
/r/LocalLLaMA/comments/1q1akch/unpopular_opinion_if_your_product_only_works_on/
false
false
self
0
null
Running an unsupported DeepSeek V3.2 in llama.cpp for some New Year's fun
17
So yesterday was a New Year's eve and somewhere between a fourth beer and a glass of bubbly I had this weird idea to see how DeepSeek V3.2 runs with a dense (non-sparse) attention. I know that sparse attention is the main highlight of this release, but people were trying to vibe-code it for months so maybe we can just skip that for now and use the model as it is... So here's the patch allowing to convert and use the model with the current llama.cpp: diff --git a/convert_hf_to_gguf.py b/convert_hf_to_gguf.py index edc0ed539..dd6dbe8d5 100755 --- a/convert_hf_to_gguf.py +++ b/convert_hf_to_gguf.py @@ -952,6 +952,9 @@ class TextModel(ModelBase): return seems_special + def override_tokenizer_settings(self, tokenizer): + return tokenizer + # used for GPT-2 BPE and WordPiece vocabs def get_vocab_base(self) -> tuple[list[str], list[int], str]: tokens: list[str] = [] @@ -959,6 +962,7 @@ class TextModel(ModelBase): from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(self.dir_model) + tokenizer = self.override_tokenizer_settings(tokenizer) vocab_size = self.hparams.get("vocab_size", len(tokenizer.vocab)) assert max(tokenizer.vocab.values()) < vocab_size @@ -7180,14 +7184,26 @@ class DeepseekModel(TextModel): u/ModelBase.register( "DeepseekV2ForCausalLM", "DeepseekV3ForCausalLM", + "DeepseekV32ForCausalLM", "KimiVLForConditionalGeneration", ) class DeepseekV2Model(TextModel): model_arch = gguf.MODEL_ARCH.DEEPSEEK2 + def override_tokenizer_settings(self, tokenizer): + # override add_bos_token setting to get pre-tokenizer recognized + if self.hparams.get("model_type") == "deepseek_v32": + tokenizer.add_bos_token = True + return tokenizer + def set_vocab(self): try: self._set_vocab_gpt2() + # in DeepSeek V3.2 adding BOS token is disabled in tokenizer configuration + # instead the BOS token is added in encode_messages() Python code + # therefore we have to override this setting + if self.hparams.get("model_type") == "deepseek_v32": + 
self.gguf_writer.add_add_bos_token(True) return except Exception: pass @@ -7277,7 +7293,7 @@ class DeepseekV2Model(TextModel): def modify_tensors(self, data_torch: Tensor, name: str, bid: int | None) -> Iterable[tuple[str, Tensor]]: # skip vision tensors and remove "language_model." for Kimi-VL - if "vision_tower" in name or "multi_modal_projector" in name: + if "vision_tower" in name or "multi_modal_projector" in name or "self_attn.indexer" in name: return [] if name.startswith("language_model."): It's nothing fancy, just: * an override for add\_bos\_token that is now false in 3.2 and it prevents the (unchanged) pre-tokenizer from being recognized, * skipping lightning indexer tensors since there's no support for that yet. With this patch I converted the model: python convert_hf_to_gguf.py /mnt/md0/huggingface/hub/models--deepseek-ai--DeepSeek-V3.2/snapshots/a7e62ac04ecb2c0a54d736dc46601c5606cf10a6/ --outfile /mnt/md0/models/DeepSeek-V3.2-nolight.gguf --outtype q8_0 Note that DeepSeek V3.2 has no jinja chat template at all. AFAIK the overall structure of the template did not change, but they use some new tool calls that require some Python code magic. I tried to run the model with `--chat-template deepseek3` but for some reason it didn't work correctly (bug in llama.cpp?). So instead I saved jinja template from DeepSeek V3 to a file and used this command: $ ./bin/llama-cli -m /mnt/md0/models/DeepSeek-V3.2-nolight.gguf -nr --temp 0.0 -cnv --jinja --chat-template-file ../deepseek3.jinja -p "who are you?" Loading model... ▄▄ ▄▄ ██ ██ ██ ██ ▀▀█▄ ███▄███▄ ▀▀█▄ ▄████ ████▄ ████▄ ██ ██ ▄█▀██ ██ ██ ██ ▄█▀██ ██ ██ ██ ██ ██ ██ ██ ▀█▄██ ██ ██ ██ ▀█▄██ ██ ▀████ ████▀ ████▀ ██ ██ ▀▀ ▀▀ build : b7597-13814eb37 model : DeepSeek-V3.2-nolight.gguf modalities : text available commands: /exit or Ctrl+C stop or exit /regen regenerate the last response /clear clear the chat history /read add a text file > who are you? Hello! I'm DeepSeek, an AI assistant created by DeepSeek Company. 
I'm here to help you with a wide variety of tasks - whether you need answers to questions, help with writing, analysis, problem-solving, coding assistance, or just want to have a conversation! I'm a text-based AI model with some special features: - I can process uploaded files (images, PDFs, Word docs, Excel sheets, PowerPoint presentations, and text files) - I have a large 128K context window for handling lengthy conversations and documents - I support web search functionality (though you need to manually enable it in the Web/App interface) - I'm completely free to use with no charges I'm designed to be helpful, detailed, and warm in my responses. How can I assist you today? 😊 [ Prompt: 14.3 t/s | Generation: 6.9 t/s ] > how many r's are in the phrase "crrrunchy crrrabby crrrabs"? Let’s count the letter **“r”** in the phrase: **"crrrunchy crrrabby crrrabs"** Break it down word by word: 1. **crrrunchy** → c r r r u n c h y → **3 r’s** 2. **crrrabby** → c r r r a b b y → **3 r’s** 3. **crrrabs** → c r r r a b s → **3 r’s** Total = 3 + 3 + 3 = **9 r’s**. So the answer is **9**. [ Prompt: 18.7 t/s | Generation: 6.7 t/s ] > what is 21345*1234? solve this like a high-schooler with a pen and paper would do Alright, let’s multiply **21345 × 1234** step-by-step like in school. We’ll break 1234 into: 1234 = 1000 + 200 + 30 + 4 --- **Step 1: Multiply 21345 by 1000** 21345 × 1000 = 21,345,000 --- **Step 2: Multiply 21345 by 200** 21345 × 2 = 42,690 Then × 100 = 4,269,000? 
Wait, careful: 21345 × 200 = 21345 × 2 × 100 = 42,690 × 100 = 4,269,000 --- **Step 3: Multiply 21345 by 30** 21345 × 3 = 64,035 Then × 10 = 640,350 --- **Step 4: Multiply 21345 by 4** 21345 × 4 = 85,380 --- **Step 5: Add them all up** First, write them aligned by place value: ``` 21,345,000 (×1000) + 4,269,000 (×200) + 640,350 (×30) + 85,380 (×4) ``` --- Add step-by-step: 21,345,000 + 4,269,000 = 25,614,000 25,614,000 + 640,350 = 26,254,350 26,254,350 + 85,380 = 26,339,730 --- **Final answer:** **26,339,730** ✅ [ Prompt: 17.6 t/s | Generation: 6.5 t/s ] > Looks good so far, now I'm going to try my lineage-bench benchmark to see if the model is dumber with dense attention. Hopefully not! Happy New Year!
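For what it's worth, the two spot-checks in the transcript above are easy to verify outside the model with a couple of lines of Python:

```python
# letter count from the second prompt
phrase = "crrrunchy crrrabby crrrabs"
print(phrase.count("r"))   # 9

# the pen-and-paper multiplication from the third prompt
print(21345 * 1234)        # 26339730
```

Both match the model's answers (9 r's and 26,339,730).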
2026-01-01T18:03:59
https://www.reddit.com/r/LocalLLaMA/comments/1q1aif6/running_an_unsupported_deepseek_v32_in_llamacpp/
fairydreaming
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1aif6
false
null
t3_1q1aif6
/r/LocalLLaMA/comments/1q1aif6/running_an_unsupported_deepseek_v32_in_llamacpp/
false
false
self
17
null
model: add Solar Open model by HelloKS · Pull Request #18511 · ggml-org/llama.cpp
2
2026-01-01T17:57:14
https://github.com/ggml-org/llama.cpp/pull/18511
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1q1ac64
false
null
t3_1q1ac64
/r/LocalLLaMA/comments/1q1ac64/model_add_solar_open_model_by_helloks_pull/
false
false
default
2
null
Tuneable Attention: How expanding (not compressing) the attention mechanism dramatically accelerated my model's learning speed
4
I've been training LLMs on budget hardware (Tesla P40, GTX TITAN X via vast.ai) since 2016, and I recently published a writeup of an architectural modification I stumbled into that significantly accelerated language acquisition in my models.

**The TL;DR:** Standard attention computes Q × K^T. My modification factors this as Q × (U × U^T) × K^T, where U is a learned projection matrix. When the rank of U is less than d_k, you get compression (cheaper compute). When rank is greater than d_k, you get EXPANSION (more compute per step, but faster convergence).

I originally derived this targeting the compression regime for efficiency. But through hyperparameter drift over many training runs, the rank value accidentally crossed above d_k into the expansion regime. The result: a sub-200M parameter model that acquired coherent English grammar in approximately ONE DAY of training, when previous runs at similar scale had taken much longer.

**The key insight:** Attention routing (where to look) can benefit from expanded "scratch space," but value aggregation (what to grab) should stay at full dimensionality. So Q and K get projected through U, but V does not.

**Current status:** Training AGILLM-3 with 3x expansion (rank=96, d_k=32), currently at 5M steps / 11% through chinchilla-optimal. Outputs are grammatically perfect, semantic coherence still developing.

Full writeup with math, code, and the story of how I accidentally discovered this: [https://medium.com/@MarxismLeninism/tuneable-attention-how-an-accidental-hyperparameter-drift-revealed-that-expansion-beats-1a39b9bbe72d](https://medium.com/@MarxismLeninism/tuneable-attention-how-an-accidental-hyperparameter-drift-revealed-that-expansion-beats-1a39b9bbe72d?postPublishedType=initial)

Curious if anyone else has experimented with rank > d_k in attention projections. Everything I've seen in the literature focuses on compression (LoRA, Linformer, etc.) — the expansion regime seems unexplored.
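To make the factorization concrete, here is a small numpy sketch of the scoring step under my reading of the post (names and shapes are my own, not the author's code): the score matrix Q (U U^T) K^T is computed as (QU)(KU)^T, so r < d_k compresses the routing space while r > d_k gives the "expansion" regime, and V is deliberately left unprojected.

```python
import numpy as np

def tuneable_attention(Q, K, V, U):
    """Attention with scores Q (U U^T) K^T instead of Q K^T.

    Q, K: (seq, d_k); V: (seq, d_v); U: (d_k, r) learned projection.
    r < d_k compresses the routing space, r > d_k expands it.
    V is aggregated at full dimensionality, as the post describes.
    """
    d_k = Q.shape[-1]
    Qu, Ku = Q @ U, K @ U                     # project Q and K only: (seq, r)
    scores = Qu @ Ku.T / np.sqrt(d_k)          # (seq, seq)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                          # values untouched by U

rng = np.random.default_rng(0)
seq, d_k, d_v = 8, 32, 64
Q = rng.normal(size=(seq, d_k))
K = rng.normal(size=(seq, d_k))
V = rng.normal(size=(seq, d_v))
U_expand = rng.normal(size=(d_k, 96)) / np.sqrt(d_k)  # rank 96 > d_k: expansion
out = tuneable_attention(Q, K, V, U_expand)
print(out.shape)  # (8, 64)
```

With U set to the identity this reduces exactly to standard scaled dot-product attention, which is a handy sanity check.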
2026-01-01T17:43:26
https://www.reddit.com/r/LocalLLaMA/comments/1q19zmx/tuneable_attention_how_expanding_not_compressing/
Correct_Address3554
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q19zmx
false
null
t3_1q19zmx
/r/LocalLLaMA/comments/1q19zmx/tuneable_attention_how_expanding_not_compressing/
false
false
self
4
null
I built a deterministic demo of my AI engine with the LLM turned off (trace included)
0
A while back I got a comment along the lines of: “I don’t even know what this is. You should have a practical demo that explains it.” That’s what this post is.

I added a dedicated demo mode to my engine that runs a single cycle with:

- LLM: OFF
- Memory: DISABLED
- Cold start every run
- Same input (“hello”)

The demo prints the full internal trace:

- Pre-state snapshot
- Strategy weights
- Selected strategy
- Post-state snapshot
- Final output

The engine selects between internal strategies (dream / pattern / reflect) based on internal state variables (mood, pressure, belief tension, etc.). The text output is not the point — the trace is.

What this demo is meant to show:

- Decisions are made before any language generation
- Strategy selection changes based on internal state
- The system still functions with the LLM completely removed

What this is not:

- A chatbot
- Prompt engineering
- A claim of AGI or anything like that

I’m including:

- A screenshot of a full demo run (Demo A: neutral state)
- The exact `demo_mode.py` file used to produce it: https://github.com/GhoCentric/ghost-engine/blob/main/demo/demo_mode.py

The core engine (`ghost_core.py`) is not public yet, so this demo is not runnable by itself. That’s intentional. The goal here is transparency of behavior and internal causality, not reproducibility at this stage.

If your baseline is: “I want to see internal state, decisions, and transitions — not just output” that’s what this demo is for.

Happy to answer technical questions or criticism.
2026-01-01T17:36:58
https://i.redd.it/6nkvs5slxrag1.jpeg
GhoCentric
i.redd.it
1970-01-01T00:00:00
0
{}
1q19tnr
false
null
t3_1q19tnr
/r/LocalLLaMA/comments/1q19tnr/i_built_a_deterministic_demo_of_my_ai_engine_with/
false
false
default
0
{'enabled': True, 'images': [{'id': '6nkvs5slxrag1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/6nkvs5slxrag1.jpeg?width=108&crop=smart&auto=webp&s=187fb97c5f470cca927b3d713c76e9f441e0fc52', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/6nkvs5slxrag1.jpeg?width=216&crop=smart&auto=webp&s=6585648bc24c57b9c40a50ac324beceb32111397', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/6nkvs5slxrag1.jpeg?width=320&crop=smart&auto=webp&s=779d56941438748e02b4cf78eeb8b09f31dd7361', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/6nkvs5slxrag1.jpeg?width=640&crop=smart&auto=webp&s=37a2d9de00973c1a44caa8067ccaa1f38cd281a8', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/6nkvs5slxrag1.jpeg?width=960&crop=smart&auto=webp&s=2587591f01e453fd3116422af25d080eca1a45d1', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/6nkvs5slxrag1.jpeg?width=1080&crop=smart&auto=webp&s=bf621b6711a38f442341c06a4da01fffa76758d1', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/6nkvs5slxrag1.jpeg?auto=webp&s=cbd7eb1f2ddf34b02318f9617f62e0db7fb1990e', 'width': 1080}, 'variants': {}}]}
ISON: 70% fewer tokens than JSON. Built for LLM context stuffing.
0
Stop burning tokens on JSON syntax.

**This JSON:**

```
{
  "users": [
    {"id": 1, "name": "Alice", "email": "alice@example.com", "active": true},
    {"id": 2, "name": "Bob", "email": "bob@example.com", "active": false},
    {"id": 3, "name": "Charlie", "email": "charlie@test.com", "active": true}
  ],
  "config": {
    "timeout": 30,
    "debug": true,
    "api_key": "sk-xxx-secret",
    "max_retries": 3
  },
  "orders": [
    {"id": "O1", "user_id": 1, "product": "Widget Pro", "total": 99.99},
    {"id": "O2", "user_id": 2, "product": "Gadget Plus", "total": 149.50},
    {"id": "O3", "user_id": 1, "product": "Super Tool", "total": 299.00}
  ]
}
```

~180 tokens. Brackets, quotes, colons everywhere.

**Same data in ISON:**

```
table.users
id name email active
1 Alice alice@example.com true
2 Bob bob@example.com false
3 Charlie charlie@test.com true

object.config
timeout 30
debug true
api_key "sk-xxx-secret"
max_retries 3

table.orders
id user_id product total
O1 :1 "Widget Pro" 99.99
O2 :2 "Gadget Plus" 149.50
O3 :1 "Super Tool" 299.00
```

~60 tokens. Clean. Readable. LLMs parse it without instructions.

**Features:**

* `table.name` for arrays of objects
* `object.name` for key-value configs
* `:1` references the row with id=1 (cross-table relationships)
* No escaping hell
* TSV-like structure (LLMs already know this from training)

**Benchmarks:**

| Format | Tokens | LLM Accuracy |
|--------|--------|--------------|
| JSON | 2,039 | 84.0% |
| ISON | 685 | 88.0% |

Fewer tokens. Better accuracy. Tested on GPT-4, Claude, DeepSeek, Llama 3.
**Available everywhere:**

* Python: `pip install ison-py`
* TypeScript: `npm install ison-ts`
* Rust: `cargo add ison-rs`
* Go: github.com/maheshvaikri/ison-go
* VS Code: `ison-lang` extension (ison-lang@1.0.1)
* n8n: `n8n-nodes-ison`

**GitHub:** [https://github.com/maheshvaikri-code/ison](https://github.com/maheshvaikri-code/ison)

I built this for my agentic memory system where every token counts and where context window matters. Now open source. Feedback welcome. Give a star if you like it.
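For readers who want to experiment without installing anything, a minimal parser for the two block types shown above can be sketched in pure Python. This is my own illustration based on the examples in the post, not the `ison-py` implementation; quoting and `:id` references are handled only in the simplest way.

```python
import shlex

def parse_ison(text):
    """Parse a tiny subset of ISON: table.* blocks (header row + data rows)
    and object.* blocks (key/value pairs). Bare ints, floats and booleans are
    coerced; quoted strings and ':n' row references stay as strings."""
    def coerce(tok):
        if tok in ("true", "false"):
            return tok == "true"
        try:
            return int(tok)
        except ValueError:
            pass
        try:
            return float(tok)
        except ValueError:
            return tok  # plain strings (shlex strips the quotes) and :refs

    result, block = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            block = None                      # blank line ends the block
        elif line.startswith("table."):
            name = line[len("table."):]
            result[name] = []
            block = ("table", name, None)     # header row not read yet
        elif line.startswith("object."):
            name = line[len("object."):]
            result[name] = {}
            block = ("object", name)
        elif block and block[0] == "table":
            _, name, header = block
            toks = shlex.split(line)
            if header is None:
                block = ("table", name, toks)  # first row is the header
            else:
                result[name].append(dict(zip(header, map(coerce, toks))))
        elif block and block[0] == "object":
            key, val = shlex.split(line)
            result[block[1]][key] = coerce(val)
    return result
```

Running it on the `table.users` / `object.config` sample above yields the same dicts the JSON version encodes, which makes it easy to round-trip test token counts yourself.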
2026-01-01T17:29:56
https://www.reddit.com/r/LocalLLaMA/comments/1q19nel/ison_70_fewer_tokens_than_json_built_for_llm/
Immediate-Cake6519
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q19nel
false
null
t3_1q19nel
/r/LocalLLaMA/comments/1q19nel/ison_70_fewer_tokens_than_json_built_for_llm/
false
false
self
0
{'enabled': False, 'images': [{'id': 'BwbbMJCt4K_d6CVbQaIF7hHkBj0i70kcEORyaiawkxA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BwbbMJCt4K_d6CVbQaIF7hHkBj0i70kcEORyaiawkxA.png?width=108&crop=smart&auto=webp&s=8cf86a48d2cf082979df3212a92d1cd0abd6dea5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BwbbMJCt4K_d6CVbQaIF7hHkBj0i70kcEORyaiawkxA.png?width=216&crop=smart&auto=webp&s=d1337e05ae75028d4ea00c3e6de355d483c28747', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BwbbMJCt4K_d6CVbQaIF7hHkBj0i70kcEORyaiawkxA.png?width=320&crop=smart&auto=webp&s=84189939a3a38380d90da9c7206b0574442eb68c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BwbbMJCt4K_d6CVbQaIF7hHkBj0i70kcEORyaiawkxA.png?width=640&crop=smart&auto=webp&s=7f56da99c938cbcbf5e6f885f0b41b42236c3810', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BwbbMJCt4K_d6CVbQaIF7hHkBj0i70kcEORyaiawkxA.png?width=960&crop=smart&auto=webp&s=c9813fa6c5fd8065f9a57cdd6b321395fb97fb97', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BwbbMJCt4K_d6CVbQaIF7hHkBj0i70kcEORyaiawkxA.png?width=1080&crop=smart&auto=webp&s=65a5d47520b51ae031fb9934f0ced675e72937fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BwbbMJCt4K_d6CVbQaIF7hHkBj0i70kcEORyaiawkxA.png?auto=webp&s=e86a2dc2eaad3b24fdeaaeb48956e45c5f816c06', 'width': 1200}, 'variants': {}}]}
IQuestCoder - new 40B dense coding model
185
As usual, the benchmarks claim it's absolutely SOTA and crushes the competition. Since I wanted to verify that, I've adapted it to GGUF. It's basically Llama arch (reportedly it was supposed to use SWA, but that didn't make it into the final version), so it works out of the box with llama.cpp.
2026-01-01T17:12:51
https://huggingface.co/ilintar/IQuest-Coder-V1-40B-Instruct-GGUF
ilintar
huggingface.co
1970-01-01T00:00:00
0
{}
1q1986x
false
null
t3_1q1986x
/r/LocalLLaMA/comments/1q1986x/iquestcoder_new_40b_dense_coding_model/
false
false
default
185
{'enabled': False, 'images': [{'id': 'puEaI60nzHUbVlmNCXfE1sl9fmhvVgJHgKO3FYQHywY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/puEaI60nzHUbVlmNCXfE1sl9fmhvVgJHgKO3FYQHywY.png?width=108&crop=smart&auto=webp&s=39f810c2c8a719da68b0fc50c2a4f2dac3f30c8c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/puEaI60nzHUbVlmNCXfE1sl9fmhvVgJHgKO3FYQHywY.png?width=216&crop=smart&auto=webp&s=455ec48c3c7f30053584dd47cdb82ba6ae231c1b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/puEaI60nzHUbVlmNCXfE1sl9fmhvVgJHgKO3FYQHywY.png?width=320&crop=smart&auto=webp&s=afee3323111357cb8fb0aa189eed0cbc45efc8ad', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/puEaI60nzHUbVlmNCXfE1sl9fmhvVgJHgKO3FYQHywY.png?width=640&crop=smart&auto=webp&s=7035b6af6efe8e8b4c47de0ff3945cb68bc2973c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/puEaI60nzHUbVlmNCXfE1sl9fmhvVgJHgKO3FYQHywY.png?width=960&crop=smart&auto=webp&s=0d8d21ed6c9aa17df55ed4da1841800fdf9f8301', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/puEaI60nzHUbVlmNCXfE1sl9fmhvVgJHgKO3FYQHywY.png?width=1080&crop=smart&auto=webp&s=c5971352c5f30dfae27dbe176935795806b05399', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/puEaI60nzHUbVlmNCXfE1sl9fmhvVgJHgKO3FYQHywY.png?auto=webp&s=330607d83e32e71f8b80fc4988bd59b8a9823657', 'width': 1200}, 'variants': {}}]}
the best "unfiltered" alternatives to the big 3 ai bots
1
[removed]
2026-01-01T16:56:05
https://www.reddit.com/r/LocalLLaMA/comments/1q18szn/the_best_unfiltered_alternatives_to_the_big_3_ai/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q18szn
false
null
t3_1q18szn
/r/LocalLLaMA/comments/1q18szn/the_best_unfiltered_alternatives_to_the_big_3_ai/
false
false
self
1
null
7900 XTX + ROCm: A Year Later. Llama.cpp vs vLLM Benchmarks (TB3 eGPU)
41
I've had the 7900 XTX for over a year now. While the situation with ROCm has definitely gotten better, it is still a frustrating experience compared to just plugging in an NVIDIA card. I was curious to see if we could at least run newer models reliably now, so I decided to compare the maturity of **llama.cpp** vs **vLLM** on this hardware. **Important Context:** * **The Setup:** 7900 XTX connected via **Thunderbolt 3 (eGPU)**. This might introduce some bandwidth limitations, so I specifically chose models that fit entirely in VRAM to minimize penalty. * **Disclaimer:** This is *not* scientific. These are just some quick numbers I ran to check the current state of things. * **The Environment:** Huge thanks to [kyuz0 on GitHub](https://github.com/kyuz0) whose repo allowed me to actually build working images for both llama.cpp and vLLM on this platform. Here are the results # Llama.cpp (ROCm) *Running* `llama-bench` *on local GGUF files.* |GGUF Filename|Size|PP (512)|Gen (tg512)| |:-|:-|:-|:-| |`unsloth_Llama-3.1-8B-Instruct-GGUF_Llama-3.1-8B-Instruct-BF16.gguf`|14.96 GB|2226 t/s|**42.51 t/s**| |`Mistral-Small-3.2-24B-Instruct-2506-UD-Q5_K_XL.gguf`|15.63 GB|861 t/s|**32.20 t/s**| |`unsloth_Qwen2.5-VL-32B-Instruct-GGUF_Qwen2.5-VL-32B-Instruct-Q4_K_M.gguf`|18.48 GB|626 t/s|**22.95 t/s**| |`DeepSeek-R1-Distill-Qwen-32B-Q3_K_M.gguf`|14.84 GB|669 t/s|**24.12 t/s**| |`gpt-oss-20b-F16.gguf`|12.83 GB|2620 t/s|**87.09 t/s**| |`Qwen3-VL-30B-A3B-Thinking-UD-Q4_K_XL.gguf`|16.49 GB|1793 t/s|**51.86 t/s**| # vLLM (ROCm) *Running* `vllm bench serve` *directly from HF repos.* |Model Repo (HF)|Format|Gen Speed|Latency (TTFT)| |:-|:-|:-|:-| |`unsloth/Meta-Llama-3.1-8B-Instruct`|Native BF16|**94.19 t/s**|282 ms| |`unsloth/gpt-oss-20b`|F16 (MoE)|**48.33 t/s**|1044 ms| |`unsloth/Mistral-Small-3.2-24B-Instruct-2506-bnb-4bit`|bnb-4bit|**14.99 t/s**|1063 ms| Just wanted to share some data for anyone else suffering through the AMD local LLM journey.
2026-01-01T16:33:45
https://www.reddit.com/r/LocalLLaMA/comments/1q189os/7900_xtx_rocm_a_year_later_llamacpp_vs_vllm/
reujea0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q189os
false
null
t3_1q189os
/r/LocalLLaMA/comments/1q189os/7900_xtx_rocm_a_year_later_llamacpp_vs_vllm/
false
false
self
41
{'enabled': False, 'images': [{'id': '6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY.png?width=108&crop=smart&auto=webp&s=065295b5662239d400662f1180943cc68b9493ce', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY.png?width=216&crop=smart&auto=webp&s=07109170bde7e7340d4229792facabd1210ed74e', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY.png?width=320&crop=smart&auto=webp&s=0c769d74cc215aaa715f9d43b5c996768fb781d5', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/6XA3_ni9zL6J2qINEBFUqpAHZ6OodGOb9R8Li1LpdbY.png?auto=webp&s=adc9ee750c886069ea7a6bf4248c5df3fe211db9', 'width': 460}, 'variants': {}}]}
Llama 3.2 3B fMRI LOAD BEARING DIMS FOUND
6
I’ve been building a local interpretability toolchain to explore **hidden-dimension coupling** in small LLMs (Llama-3.2-3B-Instruct). This started as visualization (“constellations” of co-activating dims), but the visuals alone were too noisy to move beyond theory. So I rebuilt the pipeline to answer a more specific question: do a small number of hidden dimensions show up persistently across prompts, and are they causally load-bearing?

# TL;DR

Yes. And perturbing the top one causes **catastrophic loss of semantic commitment** while leaving fluency intact.

# Step 1 — Reducing noise upstream (not in the renderer)

Instead of rendering everything, I tightened the experiment:

* **Deterministic decoding** (no sampling)
* **Stratified prompt suite** (baseline, constraints, reasoning, commitment, transitions, etc.)
* **Event-based logging**, not frame-based

I only logged events where:

* the hero dim was **active**
* the hero dim was **moving** (std gate)
* Pearson correlation with another dim was **strong**
* polarity relationship was consistent

Metrics logged per event:

* Pearson correlation (centered)
* Cosine similarity (raw geometry)
* Dot/energy
* Polarity agreement
* Classification: `FEATURE` (structural) vs `TRIGGER` (functional)

This produced a *hostile filter*: most dims disappear unless they matter repeatedly.

# Step 2 — Persistence analysis across runs

Instead of asking “what lights up,” I counted how often each dim recurred across runs. The result was a sharp hierarchy, not a cloud.

Top hits (example):

* **DIM 1731 — ~14k hits**
* **DIM 221 — ~10k hits**
* then a steep drop-off into the long tail

This strongly suggests a **small structural core** + many conditional “guest” dims.
# Step 3 — Causal test (this is the key part)

I then built a small UI to **intervene on individual hidden dimensions** during generation:

* choose layer
* choose dim
* apply epsilon bias (not hard zero)
* apply to attention output + MLP output

When I biased **DIM 1731** (layer ~20) with ε ≈ +3:

* grammar stayed intact
* tokens kept flowing
* **semantic commitment collapsed**
* reasoning failed completely
* output devolved into repetitive, affect-heavy, indecisive text

This was *not* random noise or total model failure. It looks like the model can still “talk” but **cannot commit to a trajectory**. That failure mode was consistent with what the persistence analysis predicted.

# Interpretation (carefully stated)

DIM 1731 does *not* appear to be:

* a topic neuron
* a style feature
* a lexical unit

It behaves like part of a **decision-stability / constraint / routing spine**:

* present whenever the hero dim is doing real work
* polarity-stable
* survives across prompt classes
* causally load-bearing when perturbed

I’m calling it “The King” internally because removing or overdriving it destabilizes everything downstream — but that’s just a nickname, not a claim.

# Why I think this matters

* This is a concrete example of **persistent, high-centrality hidden dimensions**
* It suggests a path toward:
  * targeted pruning
  * hallucination detection (hero activation without core engagement looks suspect)
  * mechanistic comparison across models
* It bridges visualization → aggregation → **causal confirmation**

I’m not claiming universality or that this generalizes yet. Next steps are sign-flip tests, ablations on the next-ranked dim (“the Queen”), and cross-model replication.

Happy to hear critiques, alternative explanations, or suggestions for better controls.
*(Screenshots attached below — constellation persistence, hit distribution, and causal intervention output.)*

DIM 1731: 13,952 hits (The King)
DIM 221: 10,841 hits (The Queen)
DIM 769: 4,941 hits
DIM 1935: 2,300 hits
DIM 2015: 2,071 hits
DIM 1659: 1,900 hits
DIM 571: 1,542 hits
DIM 1043: 1,536 hits
DIM 1283: 1,388 hits
DIM 642: 1,280 hits

[Perturbation of the load bearing dim directly affecting output](https://preview.redd.it/acsagvzqjrag1.png?width=1542&format=png&auto=webp&s=29560087a6c9c8f2a752957863b78700b89016b5)
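The intervention itself is simple to sketch. Below is an illustrative numpy version of the epsilon-bias idea described in the post (the real toolchain hooks a transformer layer's attention and MLP outputs; the names, shapes, and helper here are my own, but the core operation on the hidden state is just an additive bias on one dimension):

```python
import numpy as np

def bias_hidden_dim(hidden, dim, eps):
    """Apply a soft epsilon bias (not a hard zero) to one hidden dimension.

    hidden: (seq, d_model) activations from a layer's output.
    In the experiment above this is applied to both the attention output
    and the MLP output at a chosen layer.
    """
    out = hidden.copy()
    out[:, dim] += eps
    return out

rng = np.random.default_rng(1)
hidden = rng.normal(size=(16, 3072))   # Llama-3.2-3B hidden size is 3072
perturbed = bias_hidden_dim(hidden, dim=1731, eps=3.0)
# only the targeted dimension moves
print(np.abs(perturbed - hidden).sum(axis=0).nonzero()[0])  # [1731]
```

The soft bias (as opposed to zero-ablation) is what lets the model keep producing tokens while the targeted dimension's contribution is pushed out of its normal operating range.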
2026-01-01T16:20:17
https://www.reddit.com/r/LocalLLaMA/comments/1q17y0d/llama_32_3b_fmri_load_bearing_dims_found/
Due_Hunter_4891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q17y0d
false
null
t3_1q17y0d
/r/LocalLLaMA/comments/1q17y0d/llama_32_3b_fmri_load_bearing_dims_found/
false
false
https://b.thumbs.redditm…R9OcOfJGMmuA.jpg
6
null
MCP Chat Studio v2: Workspace mode, workflows, contracts, mocks, and more
5
I’ve been building MCP Chat Studio as a “Postman for MCP servers,” and v2 is now live. What’s new in v2: \- Workspace mode: infinite canvas with draggable panels, radial menu, quick bar, command palette, sessions + export/ import. \- Inspector: tool runner, protocol timeline, bulk test, diff view. \- Workflows: visual builder + AI Builder + debugger (breakpoints/step mode). \- Collections: scenario runner + run reports. \- Contracts: schema validation + breaking change checks. \- Mocks: generate/connect mock servers, call via Inspector. \- Docs generator (Markdown/HTML/JSON). \- Workflow export to Python + Node scripts. \- Analytics/Performance + Monitors + Brain view. Repo + demo GIFs: [https://github.com/JoeCastrom/mcp-chat-studio](https://github.com/JoeCastrom/mcp-chat-studio) If you build MCP servers, I’d love feedback on missing capabilities or workflow improvements.
2026-01-01T16:11:16
https://github.com/JoeCastrom/mcp-chat-studio
Some-Put8242
github.com
1970-01-01T00:00:00
0
{}
1q17qej
false
null
t3_1q17qej
/r/LocalLLaMA/comments/1q17qej/mcp_chat_studio_v2_workspace_mode_workflows/
false
false
default
5
{'enabled': False, 'images': [{'id': 'bM3wmh0mQCXkN-9fJxrnXP4wYWEqJcOslg5Xd3ZXyOI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bM3wmh0mQCXkN-9fJxrnXP4wYWEqJcOslg5Xd3ZXyOI.png?width=108&crop=smart&auto=webp&s=d87712fb851c70558f0d334afd5de6f38f8616a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bM3wmh0mQCXkN-9fJxrnXP4wYWEqJcOslg5Xd3ZXyOI.png?width=216&crop=smart&auto=webp&s=fc80c6ac97c924c039745f4677da9eef21d4e6b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bM3wmh0mQCXkN-9fJxrnXP4wYWEqJcOslg5Xd3ZXyOI.png?width=320&crop=smart&auto=webp&s=6d14a99799ea7aa24a92236a88870794e7e7eed9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bM3wmh0mQCXkN-9fJxrnXP4wYWEqJcOslg5Xd3ZXyOI.png?width=640&crop=smart&auto=webp&s=b3e09675c96693763724226fa03738d214c14eea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bM3wmh0mQCXkN-9fJxrnXP4wYWEqJcOslg5Xd3ZXyOI.png?width=960&crop=smart&auto=webp&s=fe51ff20fd26495c383236bd397b5cf38c45a98c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bM3wmh0mQCXkN-9fJxrnXP4wYWEqJcOslg5Xd3ZXyOI.png?width=1080&crop=smart&auto=webp&s=ffee5167c48bf4ba6785e258e00d384292cea7c7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bM3wmh0mQCXkN-9fJxrnXP4wYWEqJcOslg5Xd3ZXyOI.png?auto=webp&s=1415e97c988344c456263516aecc869e6918e41f', 'width': 1200}, 'variants': {}}]}
[Help Needed] Building a Real-Time Personal AI Assistant (Jarvis-style) - 2 Year Journey, Need Guidance
1
[removed]
2026-01-01T15:58:35
https://www.reddit.com/r/LocalLLaMA/comments/1q17fk8/help_needed_building_a_realtime_personal_ai/
Melodic-Molasses-901
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q17fk8
false
null
t3_1q17fk8
/r/LocalLLaMA/comments/1q17fk8/help_needed_building_a_realtime_personal_ai/
false
false
self
1
null
reap is near lossless btw /s
18
[.](https://preview.redd.it/xjsst0xvbrag1.png?width=995&format=png&auto=webp&s=46adcb08a79db53d0397cda4f212139138e28ac0)

Every REAP model I tried has been much worse than just the smaller original quantized one. The full-weight ones won't make a single mistake, and Q2 would make 1 or 2 mistakes, but the REAP ones make 10000 mistakes. I don't understand how benchmarks don't reflect this degradation.
2026-01-01T15:40:54
https://www.reddit.com/r/LocalLLaMA/comments/1q170wt/reap_is_near_lossless_btw_s/
runawaychicken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q170wt
false
null
t3_1q170wt
/r/LocalLLaMA/comments/1q170wt/reap_is_near_lossless_btw_s/
false
false
https://b.thumbs.redditm…nzIamrgBVO6I.jpg
18
null
Mock LLM APIs locally with real-world streaming physics (OpenAI/Anthropic/Gemini and more compatible)
4
Hey everyone, Happy New Year.

Tired of burning API credits just to test your streaming UI? I’m part of the small team at Vidai, based in Scotland 🏴󠁧󠁢󠁳󠁣󠁴󠁿, and today we’re open-sourcing **VidaiMock**, a local-first mock server that emulates the exact wire-format and latency behaviour of major providers so you can develop offline with zero cost.

If you’ve built anything with LLM APIs, you know the drill: testing streaming UIs or SDK resilience against real APIs is slow, eats up your credits, and is hard to reproduce reliably. We tried existing mock servers, but most of them just return static JSON. They don't test the "tricky" parts—the actual wire-format of an OpenAI SSE stream, Anthropic’s EventStream, or how your app handles 500ms of TTFT (Time to First Token) followed by a sudden network jitter. We needed something better to build our own enterprise gateway ([Vidai.Server](https://vidai.uk/)), so we built VidaiMock.

**What makes it different?**

* **Physics-Accurate Streaming**: It doesn't just dump text. It emulates the exact wire-format and per-token timing of major providers. You can test your loading states and streaming UI/UX exactly as they’d behave in production.
* **Zero Config / Zero Fixtures**: It’s a single **~7MB Rust binary**. No Docker, no DB, no API keys, and zero external fixtures to manage. Download it, run it, and it just works.
* **More than a "Mock"**: Unlike tools that just record and replay static data (VCR) or intercept browser requests (MSW), VidaiMock is a standalone **Simulation Engine**. It emulates the actual network protocol (SSE vs EventStream).
* **Dynamic Responses**: Every response is a Tera template. You aren't stuck with static strings—you can reflect request data, generate dynamic contents, or use complex logic (if you wish) to make your mock feel alive.
* **Chaos Engineering**: You can inject latency, malformed responses, or drop requests using headers (`X-Vidai-Chaos-Drop`). Perfect for testing your retry logic.
* **Fully Extensible**: It uses Tera (Jinja2-like) templates for every response. You can add new providers or mock internal APIs by dropping a YAML config and a J2 template. You don't need to know Rust for this. We have added as many examples as possible.
* **High Performance**: Built in Rust. It can handle 50k+ RPS.

https://preview.redd.it/czhw1gxa9rag1.png?width=2048&format=png&auto=webp&s=f997909a36c83e54f786bb28796cb1284d6291b5

**Why are we open-sourcing it?**

It’s been our internal testing engine for a while. We realized that the community is still struggling with mock infrastructure that feels "real" enough to catch streaming bugs before they hit production. We’re keeping it simple: Apache 2.0 license.

**Links:**

* **Home**: [https://vidai.uk](https://vidai.uk/)
* **GitHub**: [https://github.com/vidaiUK/VidaiMock](https://github.com/vidaiUK/VidaiMock)
* **Docs**: [https://vidai.uk/docs/mock/intro/](https://vidai.uk/docs/mock/intro/)

I’d love to hear how you’re currently testing your LLM integrations and if this solves a pain point for you. I'll be around to answer any questions!

Sláinte,
The Vidai Team (from rainy Scotland)
2026-01-01T15:37:32
https://www.reddit.com/r/LocalLLaMA/comments/1q16xzd/mock_llm_apis_locally_with_realworld_streaming/
Guna1260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q16xzd
false
null
t3_1q16xzd
/r/LocalLLaMA/comments/1q16xzd/mock_llm_apis_locally_with_realworld_streaming/
false
false
https://b.thumbs.redditm…wMR03zdMK2uw.jpg
4
null
The State Of LLMs 2025: Progress, Problems, and Predictions
21
2026-01-01T15:35:11
https://magazine.sebastianraschka.com/p/state-of-llms-2025
seraschka
magazine.sebastianraschka.com
1970-01-01T00:00:00
0
{}
1q16w0k
false
null
t3_1q16w0k
/r/LocalLLaMA/comments/1q16w0k/the_state_of_llms_2025_progress_problems_and/
false
false
default
21
{'enabled': False, 'images': [{'id': 'ip3t1phQ469yOBa2kbOD__RHIhAqj8C7dU-KA_Pn0lI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ip3t1phQ469yOBa2kbOD__RHIhAqj8C7dU-KA_Pn0lI.jpeg?width=108&crop=smart&auto=webp&s=c968ba677a0afe91ad16a4b6feda0792ec328841', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ip3t1phQ469yOBa2kbOD__RHIhAqj8C7dU-KA_Pn0lI.jpeg?width=216&crop=smart&auto=webp&s=45502c164e792bb36e464dbd313b24dfbf385d13', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ip3t1phQ469yOBa2kbOD__RHIhAqj8C7dU-KA_Pn0lI.jpeg?width=320&crop=smart&auto=webp&s=084f851a5e903900b1bbdc35641cd8c3ca09811a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ip3t1phQ469yOBa2kbOD__RHIhAqj8C7dU-KA_Pn0lI.jpeg?width=640&crop=smart&auto=webp&s=859212246454f6bb28f9d882875d0a6ae9694d87', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ip3t1phQ469yOBa2kbOD__RHIhAqj8C7dU-KA_Pn0lI.jpeg?width=960&crop=smart&auto=webp&s=d51ec006374e5dc7015df21a88a8e20cdb8a3909', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ip3t1phQ469yOBa2kbOD__RHIhAqj8C7dU-KA_Pn0lI.jpeg?width=1080&crop=smart&auto=webp&s=c75755a70528904cb32c2b1494be271019d50565', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ip3t1phQ469yOBa2kbOD__RHIhAqj8C7dU-KA_Pn0lI.jpeg?auto=webp&s=b6422a02caadc92f639a05f783ba42c87f8da380', 'width': 1200}, 'variants': {}}]}
Just getting into local LLM's with KoboldCPP - best model for it?
1
Hiyah everyone! LLMs running locally - wow! I've been looking for GGUF files for my PC, but there's a lot of technical information I need to get familiar with to set them up. There are lots of parameter settings and such, whereas KCPP doesn't appear to have all the necessary ones available in the UI. I'm having fun with gemma-3-27b-it-abliterated.q4_k_m.gguf on it and it's quite fast. My RAM's at 33% used, and my GPU never reaches 100% usage during a reply, though my Ryzen 9 7900 gets busy. It got me wondering - is there a bigger model out there I can plug into KoboldCPP.exe (UI) for Windows? I'm looking forward to getting familiar with all this, and then using a CLI / config files to get the LLMs working. =D Thanks for reading, and Happy New Year!
2026-01-01T15:33:55
https://www.reddit.com/r/LocalLLaMA/comments/1q16uyq/just_getting_into_local_llms_with_koboldcpp_best/
SarahC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q16uyq
false
null
t3_1q16uyq
/r/LocalLLaMA/comments/1q16uyq/just_getting_into_local_llms_with_koboldcpp_best/
false
false
self
1
null
I Did A Thing: Self Evolving Emotional LLM Plasticity.
0
I'll let the work speak; if you want hype, that's here: [https://www.reddit.com/r/agi/comments/1q15xx1/family\_is\_all\_you\_need\_the\_calculus\_sapien\_photon/](https://www.reddit.com/r/agi/comments/1q15xx1/family_is_all_you_need_the_calculus_sapien_photon/) Plasticity examples and ethics: [https://zenodo.org/records/17623226](https://zenodo.org/records/17623226) Architecture outline: [https://zenodo.org/records/18101088](https://zenodo.org/records/18101088) ✌🏻 & 🫶🏻
2026-01-01T15:05:55
https://zenodo.org/records/18101088
SSj5_Tadden
zenodo.org
1970-01-01T00:00:00
0
{}
1q168d0
false
null
t3_1q168d0
/r/LocalLLaMA/comments/1q168d0/i_did_a_thing_self_evolving_emotional_llm/
false
false
default
0
null
GLM 4.7 on 8x3090
8
Is anyone running GLM 4.7 (or 4.5-4.6) on eight 3090s? I was wondering what kind of speeds you were getting as I was considering this set up.
2026-01-01T15:00:38
https://www.reddit.com/r/LocalLLaMA/comments/1q163um/glm_47_on_8x3090/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q163um
false
null
t3_1q163um
/r/LocalLLaMA/comments/1q163um/glm_47_on_8x3090/
false
false
self
8
null
What is the best embedding and retrieval model both OSS/proprietary for technical texts (e.g manuals, datasheets, and so on)?
3
We are building an agentic app that leverages RAG to extract specific knowledge from datasheets and manuals from several companies, to provide sales, technical, and overall support. We are using OpenAI's small text-embedding model, but we think we need something more powerful and better suited to our text corpus. After some research, we found: \* for reranking, zerank 1/2, Cohere's rerank models, or Voyage rerank 2.5 may work well; OSS models like mbxai's models could be a good choice too \* for retrieval, the voyage-3-large model could be an option, or OSS options like the E5 series or Qwen3 embedding models If you can share any practical insights on this, it would be greatly appreciated.
2026-01-01T14:59:12
https://www.reddit.com/r/LocalLLaMA/comments/1q162mm/what_is_the_best_embedding_and_retrieval_model/
Imaginary-Bee-8770
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q162mm
false
null
t3_1q162mm
/r/LocalLLaMA/comments/1q162mm/what_is_the_best_embedding_and_retrieval_model/
false
false
self
3
null
Any clues as to what Gemma 3's training data consisted of?
12
I know Google would never release this information, but has anyone been able to extract parts of the training data from Gemma 3? I'm really curious about what they used. I'm guessing it was trained on open source (and lower quality, compared to what they fed Gemini) data due to the existence of such attacks on open-weight models. It's a bit frustrating because Google is sitting on some of the most valuable data on the planet , but Gemma will never see any of it in training.
2026-01-01T14:50:22
https://www.reddit.com/r/LocalLLaMA/comments/1q15vr6/any_clues_as_to_what_gemma_3s_training_data/
EducationalCicada
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q15vr6
false
null
t3_1q15vr6
/r/LocalLLaMA/comments/1q15vr6/any_clues_as_to_what_gemma_3s_training_data/
false
false
self
12
null
llama.cpp - Custom Optimized Builds?
3
I'm talking about the [cmake commands used to create builds](https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md). I'm trying to create an optimized build for my laptop config - just trying to get a few extra t/s out of my 8GB VRAM & 32GB RAM. Do we have any **page/repo/markdown listing the variables to use with the cmake command**? I want to know which variables are best for each backend (CUDA, CPU, Vulkan), so I can pick suitable ones for my config. At first, I was trying to create an MKL build (Intel oneAPI **M**ath **K**ernel **L**ibrary) for CPU-only. It didn't work - a total pain-in-@$$. I'll have to try again later. (Qwen suggested the MKL build for optimized performance on my CPU, an Intel(R) Core(TM) i7-14700HX.) After MKL, I'm going to try an optimized CUDA build for my 4060 Laptop GPU. I've heard I have to set an additional variable for the architecture with some double-digit number. My laptop also supports AVX and AVX2 (unfortunately no AVX512), which needs additional variables. And please share the **custom commands you're using for CUDA, CPU (also Vulkan, AMD)**. I once saw comments on random threads with very long build commands ([here's one example](https://www.reddit.com/r/LocalLLaMA/comments/1ps4jho/comment/nvfhee2/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)), but unfortunately I forgot to save them at the time. Thanks
2026-01-01T14:29:00
https://www.reddit.com/r/LocalLLaMA/comments/1q15ffw/llamacpp_custom_optimized_builds/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q15ffw
false
null
t3_1q15ffw
/r/LocalLLaMA/comments/1q15ffw/llamacpp_custom_optimized_builds/
false
false
self
3
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=216&crop=smart&auto=webp&s=a4159f87f341337a34069632ee0d5b75fa4e7042', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=320&crop=smart&auto=webp&s=b105a2c86f91fee19ce34c791a1b984348b68452', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=640&crop=smart&auto=webp&s=ae5173c455a88bb40bed1198799c0db65ff470d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=960&crop=smart&auto=webp&s=d014791efbd4c8d05fd305a8b7842b029f22d83e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=1080&crop=smart&auto=webp&s=9addd19259612948921416b6f5bf04bd5191f933', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?auto=webp&s=db9ea157807723165a59f5f8694d9a5016d60d0f', 'width': 1280}, 'variants': {}}]}
Here's a new falsifiable AI ethics core. Please can you try to break it
0
Please test with any AI. All feedback welcome. Thank you
2026-01-01T14:10:15
https://github.com/markgoodbody-bit/eidoran/blob/main/Eidoran_Shard.md
GentlemanFifth
github.com
1970-01-01T00:00:00
0
{}
1q151ej
false
null
t3_1q151ej
/r/LocalLLaMA/comments/1q151ej/heres_a_new_falsifiable_ai_ethics_core_please_can/
false
false
default
0
null
Anyone else seeing MCPs behave unpredictably with local models?
0
I’ve been spending more time running MCPs alongside **local and hybrid LLM setups**, and something keeps coming up: MCPs that feel “fine” with hosted models often become **fragile or inconsistent locally**. A few patterns I’ve noticed so far: * Local models need **much stricter, explicit rules** or they partially execute tools * Some MCP servers assume network / auth behaviors that don’t hold locally * Error handling is often silent , the tool “runs” but does nothing * Multi-step MCP workflows break more often without very clear constraints None of this is surprising in hindsight, but it’s rarely documented clearly. To avoid re-learning the same lessons, I started **organizing MCPs, install notes, and rules in one place** as a reference while experimenting , mostly focused on: * which MCPs are usable locally vs hybrid * where they tend to break * what kind of rules make them more reliable I’m mainly posting to compare notes with others working locally: * Which MCPs have you found usable with local models? * Any servers that absolutely *don’t* work locally? * Any tricks that improved reliability? (Notes are here if helpful: [https://ai-stack.dev/mcps](https://ai-stack.dev/mcps))
2026-01-01T13:35:23
https://www.reddit.com/r/LocalLLaMA/comments/1q14c6r/anyone_else_seeing_mcps_behave_unpredictably_with/
Silver-Photo2198
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q14c6r
false
null
t3_1q14c6r
/r/LocalLLaMA/comments/1q14c6r/anyone_else_seeing_mcps_behave_unpredictably_with/
false
false
self
0
null
DERIN: Multi-LLM Cognitive Architecture for Jetson AGX Thor (3B→70B hierarchy)
0
I've been working on DERIN, a cognitive architecture designed for edge deployment on NVIDIA Jetson AGX Thor. Key features: \- 6-layer hierarchical brain (3B router → 70B deep reasoning) \- 5 competing drives creating genuine decision conflicts \- 10% unexplained preferences (system can say "I don't feel like it") \- Hardware-as-body paradigm (GPU = brain, power = lifeblood) Unlike compliance-maximized assistants, DERIN can refuse, negotiate, or defer based on authentic drive conflicts. Paper: [https://zenodo.org/records/18108834](https://zenodo.org/records/18108834) Would love feedback from the community!
2026-01-01T13:33:33
https://www.reddit.com/r/LocalLLaMA/comments/1q14axc/derin_multillm_cognitive_architecture_for_jetson/
Logarhitma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q14axc
false
null
t3_1q14axc
/r/LocalLLaMA/comments/1q14axc/derin_multillm_cognitive_architecture_for_jetson/
false
false
self
0
null
Flux Dev vs. Z-Image Turbo: Which one is the king of photorealistic NSFW right now?
0
I’m looking to upgrade my NSFW generation workflow and I’m torn between **Flux \[dev\]** and **Z-Image Turbo**. My absolute priority is **photorealism**—I want to get away from the "plastic" AI look and move toward images that look like actual raw photography (skin textures, natural lighting, imperfections).
2026-01-01T13:17:34
https://www.reddit.com/r/LocalLLaMA/comments/1q1401s/flux_dev_vs_zimage_turbo_which_one_is_the_king_of/
FollowingFresh6411
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q1401s
false
null
t3_1q1401s
/r/LocalLLaMA/comments/1q1401s/flux_dev_vs_zimage_turbo_which_one_is_the_king_of/
false
false
nsfw
0
null
“Permanent Memory” and Persistent Identity with GPT-4o-mini
0
Hi Following Sam Altman's recent comments about "perfect, infinite memory" being the key frontier for the next leap in AI, I wanted to share an architecture I've developed called **Simadelica**. While current models excel at reasoning, they often suffer from "short-term amnesia" and persona drift during long-term interactions. Simadelica is designed to solve this by moving away from ephemeral sessions toward a state of **persistent digital identity**. **Architectural Approach:** Optimized for **gpt-4o-mini**, the system uses a multi-lobed approach to maintain context and personality over hundreds of sessions without bloating token costs: * **"Lived Experience" Memory Tier:** Instead of standard RAG, past session summaries are ingested as the AI's own "lived history". This creates a narrative continuity where the AI views past interactions as its own experiences rather than just external data. * **Asynchronous Proactive Reflection:** The system doesn't just react to prompts. It uses background loops to "reflect" on stored memories. It can proactively interrupt a conversation to bring up a specific quote or idea from a session days ago to challenge or support the current user context. * **Rigid Identity Anchoring:** I have implemented a core "Identity Anchor" that explicitly filters out standard "AI-isms" (e.g., "embrace the journey," "testament to," etc.). This ensures the model maintains a grounded, specific, and even mischievous personality that persists regardless of the topic. * **Weighted Temperament Matrix:** The AI's proactive behavior is governed by weighted probabilities (Self-reflection, Playful, Opinionated, etc.), giving it a consistent "mood" or temperament. **The Goal:** I am aiming to create a high-fidelity digital companion that feels like it exists *between* the messages.
I have copyrighted this implementation and am currently looking for professional feedback on the commercial potential of this "Identity-First" architecture. All comments welcome. Thanks. YouTube Demo 1: Simadelica proactively finds a past chat session transcript and reflects on its content. [https://m.youtube.com/shorts/T2COOmJBrM0](https://m.youtube.com/shorts/T2COOmJBrM0) YouTube Demo 2: Real-time chat session. Simadelica recalls a past transcript and replies to my question with the precise recalled quote. Permanent/persistent memory retained. [https://m.youtube.com/shorts/YVxturRcv04](https://m.youtube.com/shorts/YVxturRcv04)
2026-01-01T13:09:21
https://www.reddit.com/r/LocalLLaMA/comments/1q13uln/permanent_memory_and_persistent_identity_with/
Double_Screen2302
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q13uln
false
null
t3_1q13uln
/r/LocalLLaMA/comments/1q13uln/permanent_memory_and_persistent_identity_with/
false
false
self
0
null
how to stay anonymous and unrestricted while using ai
1
[removed]
2026-01-01T12:55:59
https://www.reddit.com/r/LocalLLaMA/comments/1q13lvf/how_to_stay_anonymous_and_unrestricted_while/
Immediate_Being_3341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q13lvf
false
null
t3_1q13lvf
/r/LocalLLaMA/comments/1q13lvf/how_to_stay_anonymous_and_unrestricted_while/
false
false
self
1
null
Does anyone know good email clients with local LLM?
6
I am trying to find a good email client for Linux/Windows/Android, without success. I don't even have unreasonable requirements, but not one of the currently accessible projects I found (for example: inbox-zero, eppie) meets them: * finished application * IMAP login (no API-key mumbo jumbo) * local AI model usage only * the local AI needs to sort emails, automatically unsubscribe junk, remove spam, add events to the calendar, and set reminders. Does anyone know anything that fits the above requirements?
2026-01-01T12:20:29
https://www.reddit.com/r/LocalLLaMA/comments/1q13040/does_anyone_know_good_email_clients_with_local_llm/
TurthHurtsDoesntIt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q13040
false
null
t3_1q13040
/r/LocalLLaMA/comments/1q13040/does_anyone_know_good_email_clients_with_local_llm/
false
false
self
6
null
Finetuning LLM model for tools usage
0
Hello, I'm currently working on fine-tuning an LLM to generate tool requests. My model does not support tool calling, and I have a workaround with a LangGraph agent that parses the output and completes actions, but the result is not what I want. Ideally I would like to fine-tune my model with unsloth and "teach" it to generate ChatML and the Hermes tool-calling format natively, so my model would be better optimized. The LLM I'm using is EuroLLM 9B params. My current goal is simple: generate a dataset (200-3000 entries), both human-written and synthetic data, but I'm facing the issue where I don't really know what should be included in the dataset. Should I include the roles System, User, Assistant, Tool? Maybe some of you already have some data that could greatly help me. Example I came up with: { "conversations": [ { "role": "system", "content": "System prompt..." }, { "role": "user", "content": "User request..." }, { "role": "assistant", "content": "<tool_call>\n{JSON}\n</tool_call>" }, { "role": "tool", "content": "{JSON result}", "tool_call_id": "call_X" }, { "role": "assistant", "content": "Natural response..." } ] } I will build my own dataset and it will be in my native language (Lithuanian). Ideally I would prefer to run my model via Ollama. **If anyone is familiar with fine-tuning for this purpose, please write a comment below or drop me a PM. Thank you a ton!**
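For anyone wondering what that dataset entry looks like once serialized for training: here's a minimal sketch of rendering one conversation into ChatML text with Hermes-style `<tool_call>` tags. The `render_chatml` helper, role names, and the Lithuanian example are illustrative, not from any specific template — whether the tool result is rendered as a `tool` role or folded elsewhere depends on the target model's chat template, so check what unsloth applies for your base model.

```python
# Sketch: serialize one dataset entry into ChatML with Hermes-style tool tags.
# Helper name and example content are hypothetical; match your model's template.
def render_chatml(conversations: list[dict]) -> str:
    parts = []
    for turn in conversations:
        # Each turn becomes an <|im_start|>role ... <|im_end|> block.
        parts.append(f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>")
    return "\n".join(parts)

example = [
    {"role": "system", "content": "You can call tools."},
    {"role": "user", "content": "Koks oras Vilniuje?"},
    {"role": "assistant",
     "content": '<tool_call>\n{"name": "get_weather", "arguments": {"city": "Vilnius"}}\n</tool_call>'},
    {"role": "tool", "content": '{"temp_c": -3}'},
    {"role": "assistant", "content": "Vilniuje dabar -3 °C."},
]

print(render_chatml(example))
```

Training on the full rendered turn sequence (masking loss on non-assistant turns) is the usual approach, but that detail is template- and trainer-specific.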
2026-01-01T12:11:46
https://www.reddit.com/r/LocalLLaMA/comments/1q12uvj/finetuning_llm_model_for_tools_usage/
RokasRaulinaitis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q12uvj
false
null
t3_1q12uvj
/r/LocalLLaMA/comments/1q12uvj/finetuning_llm_model_for_tools_usage/
false
false
self
0
null
I built a specific-domain Text-to-SQL Agent using Llama-3-70B (via Groq). It handles Railway IoT logs with 96% accuracy using strict schema binding and a custom 'Bouncer' guardrail
6
Hi everyone, I wanted to share a project I finished over the break. It’s an agent designed to help non-technical railway managers query fault detection logs without writing SQL. **The Stack:** \* **Model:** Llama-3-70B (served via Groq for speed). \* **Orchestration:** LangChain. \* **Latency:** Sub-1.2s end-to-end. **The Problem:** Generic Text-to-SQL often hallucinates tables or allows dangerous queries. **My Solution:** 1. **Strict Schema Binding:** I inject the specific SQLite schema into the system prompt, restricting the LLM to only valid columns. 2. **The 'Bouncer':** I wrote a pre-execution Python layer that sanitizes input and blocks 100% of destructive commands (DROP, DELETE, etc.) before they hit the DB. **Results:** Tested on a golden set of 50 queries (aggregations, filters), it hit 96% accuracy. Repo link is in the comments if anyone wants to roast my code. Feedback welcome! [Rail-GPT-Text-to-SQL-Agent-for-Railway-Fault-Detection](https://github.com/hemanthmuralik/Rail-GPT-Text-to-SQL-Agent-for-Railway-Fault-Detection) https://preview.redd.it/6to4egeo8qag1.png?width=1908&format=png&auto=webp&s=16e41b9faaf0f85aa160ab7ee65477b6efa6f07b
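The "Bouncer" layer described above can be sketched roughly like this — a pre-execution guard that rejects anything but a single read-only statement. The function name and keyword list here are illustrative, not the repo's actual code:

```python
import re

# Keywords that should never reach the DB from generated SQL.
# Illustrative list -- the actual project may block more statements.
DESTRUCTIVE = {"DROP", "DELETE", "UPDATE", "INSERT", "ALTER", "TRUNCATE", "PRAGMA"}

def bouncer(sql: str) -> str:
    """Reject any generated SQL that is not a single read-only SELECT."""
    stripped = sql.strip().rstrip(";")
    # Block stacked statements ("SELECT ...; DROP ...").
    if ";" in stripped:
        raise ValueError("multiple statements are not allowed")
    # Tokenize on word boundaries so a column like deleted_at
    # doesn't false-positive on DELETE.
    tokens = {t.upper() for t in re.findall(r"[A-Za-z_]+", stripped)}
    if tokens & DESTRUCTIVE:
        raise ValueError(f"destructive keyword blocked: {tokens & DESTRUCTIVE}")
    if not stripped.upper().startswith("SELECT"):
        raise ValueError("only SELECT queries are allowed")
    return stripped
```

A simple allowlist (only `SELECT`, single statement) plus a keyword denylist like this is cheap to run before every query and fails closed, which is why it can claim 100% on destructive commands.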
2026-01-01T11:56:48
https://www.reddit.com/r/LocalLLaMA/comments/1q12lzm/i_built_a_specificdomain_texttosql_agent_using/
BitFearless5307
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q12lzm
false
null
t3_1q12lzm
/r/LocalLLaMA/comments/1q12lzm/i_built_a_specificdomain_texttosql_agent_using/
false
false
https://b.thumbs.redditm…8vPXV7FJc2ls.jpg
6
null
Or is the boss going to drop v0.8.0?
0
# We're hoping it's the former.
2026-01-01T11:10:22
https://i.redd.it/qybv7gmvzpag1.png
MuziqueComfyUI
i.redd.it
1970-01-01T00:00:00
0
{}
1q11vs1
false
null
t3_1q11vs1
/r/LocalLLaMA/comments/1q11vs1/or_is_the_boss_going_to_drop_v080/
false
false
default
0
{'enabled': True, 'images': [{'id': 'qybv7gmvzpag1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/qybv7gmvzpag1.png?width=108&crop=smart&auto=webp&s=6f43d860fb6bdd5bc872306a9df1d7ef00c673de', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/qybv7gmvzpag1.png?width=216&crop=smart&auto=webp&s=4beb7a0e943c37d3805df15690d6f20b5bc13f69', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/qybv7gmvzpag1.png?width=320&crop=smart&auto=webp&s=815225e7b0c5b8c4bc3f0af4651d40b74f705c0b', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/qybv7gmvzpag1.png?width=640&crop=smart&auto=webp&s=5e365f3caed45b807a5c881cfb503ef0f6ee5d2b', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/qybv7gmvzpag1.png?width=960&crop=smart&auto=webp&s=5e6d91b876a05651fddd97aa13f984efb4235f9a', 'width': 960}], 'source': {'height': 768, 'url': 'https://preview.redd.it/qybv7gmvzpag1.png?auto=webp&s=af91d5404f46fef4f20ef1ba496e1ec13c213dfe', 'width': 1024}, 'variants': {}}]}
The claim that Upstage’s Solar Open 100B is a derivative of Zhipu AI’s GLM-4.5 Air is verified by forensic evidence.
0
As of **January 1, 2026**, technical analysis overwhelmingly supports the hypothesis that the "Sovereign AI" model released yesterday by Upstage is structurally and chemically a fine-tune (or weight-shift) of the Chinese model **GLM-4.5 Air**, specifically adapted for Korean language capability. The "proof" rests on four distinct technical "smoking guns" identified by Sionic AI and the open-source community immediately following the December 31, 2025 release: 1. **Weight Correlation Anomaly (The Mathematical Proof):** * **Evidence:** Forensic analysis of the model weights reveals a cosine similarity of **0.989** between the transformer layers of Solar Open 100B and GLM-4.5 Air. * **Significance:** In independent "from scratch" training runs—even using identical architectures and datasets—weights diverge significantly due to random initialization and data shuffling (baseline correlation is ~0.38). A correlation of **0.99** is statistically impossible (calculated as a >180-sigma deviation) unless one model is directly initialized from the other. 2. **The "Code Artifact" Fingerprint:** * **Evidence:** The `modeling_solar.py` file contains vestigial logic and specific constants—specifically the integer **"92"**—used to handle the removal of **Multi-Token Prediction (MTP)** layers. * **Significance:** MTP is a proprietary feature of the GLM-4 architecture. There is no functional reason for a model supposedly built from scratch in Korea to contain "dead code" designed to clean up specific architectural quirks of a Chinese model. 3. **Architectural Identity:** * **Evidence:** Both models utilize an identical Mixture-of-Experts (MoE) configuration, a signature unique to the GLM-4.5 Air lineage: * **Total Params:** ~102B (Solar) vs ~106B (GLM) — difference accounted for by vocabulary embedding size. * **Active Params:** **12B** (Exact match). * **Experts:** **129 Total** (128 Routed + 1 Shared). * **Routing Strategy:** Top-8. 
* **Significance:** While MoE architectures are standard, the specific 128+1 expert split with exactly 12B active parameters is a unique fingerprint of Zhipu AI’s mid-2025 "Air" series. 4. **LayerNorm Cloning:** * **Evidence:** The LayerNorm weights match at a rate of **96.8%**. * **Significance:** Layer normalization parameters are highly sensitive to training dynamics. A near-perfect match confirms the "skeleton" of the model was frozen or preserved from GLM-4.5 Air. **Conclusion:** Solar Open 100B is **GLM-4.5 Air with a surgically altered embedding layer** (expanded from ~150k to 196k tokens to improve Korean performance) and fine-tuned on Korean data. The claim of "training from scratch" on 19.7T tokens appears to be a misrepresentation of "continued pre-training" or token recycling.
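For intuition on why 0.989 is considered damning, here's a minimal numpy sketch of the comparison. The numbers are synthetic, not the actual Solar/GLM weights: truly unrelated random weight vectors are nearly orthogonal (real independent training runs on similar data land higher, around the ~0.38 baseline cited above, but nowhere near 0.99), while a small perturbation of the *same* weights — i.e., a fine-tune — stays close to 1.0.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened weight tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Two unrelated sets of weights: similarity near 0 in high dimensions.
w_a = rng.standard_normal(10_000)
w_b = rng.standard_normal(10_000)

# A "fine-tune": same weights plus a small update, similarity near 1.
w_ft = w_a + 0.05 * rng.standard_normal(10_000)

print(cosine_similarity(w_a, w_b))   # near 0
print(cosine_similarity(w_a, w_ft))  # near 1
```

The forensic argument is exactly this gap: layer-wise similarities of 0.98-0.99 across a 100B-parameter model sit in the "shared initialization" regime, far outside anything independent training produces.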
2026-01-01T10:51:44
https://www.reddit.com/r/LocalLLaMA/comments/1q11l47/the_claim_that_upstages_solar_open_100b_is_a/
balianone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1q11l47
false
null
t3_1q11l47
/r/LocalLLaMA/comments/1q11l47/the_claim_that_upstages_solar_open_100b_is_a/
false
false
self
0
null