# LLM trained from scratch on 1800s London texts (1.2B params, 90GB dataset)

*Posted 2026-01-08 in /r/LocalLLaMA; the post has since been removed and its author deleted.*
# GLM-4.7 on 4x RTX 3090 with ik_llama.cpp

With the help of Opus 4.5 I got unsloth/GLM-4.7-GGUF (Q4_K_M) running on my 4x RTX 3090 setup using ik_llama.cpp in Docker. I wanted to share my benchmark results and configuration, and to ask whether these numbers are what I should expect, or whether there's room for improvement.
# My Setup
|Component|Specs|
|:-|:-|
|Motherboard|Supermicro H12SSL-i|
|CPU|AMD EPYC 7282|
|GPUs|4x NVIDIA RTX 3090 (96GB VRAM total, all at PCIe x16)|
|RAM|256GB DDR4-2133|
|Storage|2 TB NVMe SSD|
# Benchmark Results
|Config|Context|n-cpu-moe|Batch|VRAM/GPU|Prompt|**Generation**|
|:-|:-|:-|:-|:-|:-|:-|
|Initial (mmap)|16K|all|512|~5 GB|2.8 t/s|3.1 t/s|
|split-mode layer|16K|partial|4096|~17 GB|2.8 t/s|⚠️ 0.29 t/s|
|+ no-mmap|16K|all|4096|~10 GB|8.5 t/s|3.45 t/s|
|+ n-cpu-moe 72|16K|72|4096|~17 GB|9.9 t/s|4.12 t/s|
|**Best 8K**|**8K**|**65**|**4096**|**~21 GB**|**12.0 t/s**|**4.48 t/s** ⭐|
|**Best 16K**|**16K**|**68**|**2048**|**~19 GB**|**10.5 t/s**|**4.28 t/s** ⭐|
# Benchmark Methodology
All tests were performed using the same simple request via curl:
```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GLM-4.7-GGUF",
    "messages": [{"role": "user", "content": "Write a short Haiku."}],
    "temperature": 0.7,
    "max_tokens": 100
  }'
```
The response includes timing information:
```json
{
  "timings": {
    "prompt_n": 17,
    "prompt_ms": 1419.902,
    "prompt_per_second": 11.97,
    "predicted_n": 100,
    "predicted_ms": 22301.81,
    "predicted_per_second": 4.48
  }
}
```
* **prompt_per_second**: How fast the input tokens are processed
* **predicted_per_second**: How fast new tokens are generated (this is what matters most for chat)
Each configuration was tested with a fresh server start (cold start) and the first request after warmup. Note that GLM-4.7 has a "thinking/reasoning" mode enabled by default, so the 100 generated tokens include internal reasoning tokens.
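If you want to script this comparison instead of eyeballing curl output, the rates can also be recomputed from the raw count and wall-clock fields (a minimal sketch; the field names are the ones llama-server returns in the timings object above):

```python
import json

def extract_speeds(response_json: dict) -> tuple[float, float]:
    """Return (prompt t/s, generation t/s) from a llama-server completion response.
    Recomputes the rates from the raw counts and wall-clock milliseconds."""
    t = response_json["timings"]
    prompt_tps = t["prompt_n"] / (t["prompt_ms"] / 1000.0)
    gen_tps = t["predicted_n"] / (t["predicted_ms"] / 1000.0)
    return round(prompt_tps, 2), round(gen_tps, 2)

# The sample response from the post:
sample = json.loads("""
{"timings": {"prompt_n": 17, "prompt_ms": 1419.902, "prompt_per_second": 11.97,
             "predicted_n": 100, "predicted_ms": 22301.81, "predicted_per_second": 4.48}}
""")
print(extract_speeds(sample))  # (11.97, 4.48)
```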
# My Current Configuration
# Best for 8K Context (fastest):
```bash
llama-server \
  --model "/models/GLM-4-Q4_K_M-00001-of-00005.gguf" \
  --host 0.0.0.0 --port 8080 \
  --ctx-size 8192 \
  --n-gpu-layers 999 \
  --split-mode graph \
  --flash-attn on \
  --no-mmap \
  -b 4096 -ub 4096 \
  --cache-type-k q4_0 --cache-type-v q4_0 \
  --k-cache-hadamard \
  --jinja \
  --n-cpu-moe 65
```
# Best for 16K Context:
```bash
llama-server \
  --model "/models/GLM-4-Q4_K_M-00001-of-00005.gguf" \
  --host 0.0.0.0 --port 8080 \
  --ctx-size 16384 \
  --n-gpu-layers 999 \
  --split-mode graph \
  --flash-attn on \
  --no-mmap \
  -b 2048 -ub 2048 \
  --cache-type-k q4_0 --cache-type-v q4_0 \
  --k-cache-hadamard \
  --jinja \
  --n-cpu-moe 68
```
# Key Findings:
1. `--no-mmap` **is crucial** - loading the model into RAM instead of memory-mapping it from SSD roughly **tripled** my prompt processing speed (2.8 → 8.5 t/s with otherwise identical settings, up to 12 t/s in the best config).
2. `--split-mode graph`, **not** `layer` - layer mode gave me only 0.29 t/s because the GPUs process sequentially. Graph mode enables true tensor parallelism.
3. `--n-cpu-moe X` - this flag controls how many MoE layers stay on CPU.
4. **Batch size matters** - smaller batches (2048) left enough VRAM to fit more MoE layers on GPU at 16K context.
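Finding 4 is really a VRAM budgeting trade-off: larger batches and longer contexts eat into the room left for expert weights. A back-of-envelope helper makes the trade-off concrete (all numbers below are hypothetical placeholders, not measurements from this setup):

```python
def max_gpu_moe_layers(vram_free_mb: int, kv_and_buffers_mb: int,
                       per_moe_layer_mb: int, n_moe_layers: int) -> int:
    """How many MoE expert blocks fit on one GPU after reserving space
    for the KV cache and compute buffers (rough estimate only)."""
    budget = vram_free_mb - kv_and_buffers_mb
    return min(n_moe_layers, max(0, budget // per_moe_layer_mb))

# Hypothetical: 24 GB card, 16K context with q4_0 KV cache plus batch buffers.
print(max_gpu_moe_layers(24_000, 5_000, 1_000, 92))  # 19
```

Shrinking the batch buffers (the `-b`/`-ub` change above) raises the layer budget, which is exactly what the 16K config exploits.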
# Docker Setup
I'm running this in Docker. Here's my `docker-compose.yml`:
```yaml
services:
  glm-4:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: glm-4-server
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    volumes:
      - /path/to/models:/models:ro
    ports:
      - "8080:8080"
    environment:
      - CTX_MODE=${CTX_MODE:-8k} # Switch between 8k/16k
      - NO_MMAP=true
      - KV_CACHE_K=q4_0
      - KV_CACHE_V=q4_0
      - K_CACHE_HADAMARD=true
    shm_size: '32gb'
    ipc: host
    restart: unless-stopped
```
And my `Dockerfile` builds ik_llama.cpp with CUDA support:

```dockerfile
FROM nvidia/cuda:12.4.0-devel-ubuntu22.04

# Install dependencies
RUN apt-get update && apt-get install -y \
    git cmake build-essential curl \
    && rm -rf /var/lib/apt/lists/*

# Clone and build ik_llama.cpp
WORKDIR /opt
RUN git clone https://github.com/ikawrakow/ik_llama.cpp.git
WORKDIR /opt/ik_llama.cpp
RUN cmake -B build \
    -DGGML_CUDA=ON \
    -DGGML_CUDA_FA_ALL_QUANTS=ON \
    -DCMAKE_CUDA_ARCHITECTURES="86" \
    -DCMAKE_BUILD_TYPE=Release \
    && cmake --build build --config Release -j$(nproc) \
    && cmake --install build

EXPOSE 8080
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```
# Questions
1. **Are these speeds (4.48 t/s generation) normal for this setup?** I've seen some posts mentioning 5-6 t/s with 2x RTX 5090, but they had 64GB VRAM total vs my 96GB.
2. **Any other flags I should try?** I tested `--run-time-repack` but it didn't help much.
3. **Is there a better MoE offloading strategy?** I'm using `--n-cpu-moe`, but I know there's also the `-ot` regex approach.
4. **Would a different quantization help?** Currently using Q4_K_M. Would IQ4_XS or Q5_K_M be faster/better?
5. **Low GPU power usage during inference?** My cards are power-limited to 275W each, but during inference they only draw ~100-120W. Could this be a bottleneck limiting my tokens/s?
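(On question 3: the `-ot` approach is the `--override-tensor` flag, which maps tensors matching a regex to a backend. A hedged sketch of the usual pattern for keeping routed experts on CPU — the exact tensor names vary by model, so inspect your GGUF before copying this:)

```bash
# Hypothetical: pin all routed-expert FFN tensors to CPU, everything else on GPU.
llama-server \
  --model "/models/GLM-4-Q4_K_M-00001-of-00005.gguf" \
  --n-gpu-layers 999 \
  -ot "blk\..*\.ffn_.*_exps.*=CPU"
```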
I would love to hear your thoughts and any optimization tips.

*Posted by iamn0 on 2026-01-08 in /r/LocalLLaMA.*
# Creative Writing - anything under 150GB equal or close to Sonnet 3.7?

I know this question has been asked a few times before, but it's been some time since I've seen discussion around it, and there are quite a few new kids (models) on the block. Has anyone had good experiences with models equal to or better than Sonnet 3.7 for creative writing like fiction stories, poems, or lyrics?
It feels especially pertinent since Anthropic deprecated the model, and I much prefer its writing style and adherence over all of the new Claude variants. My current workaround is to reach Sonnet 3.7 via OpenRouter, but I'm not sure how long that will last. I would love to find a suitable local alternative if possible.
I'm running a Mac Studio M2 Ultra with 192GB, so I can run decently sized models, but not quite something like DeepSeek 3.1 or Kimi K2. For context, I've tried a couple of options; here's what I've found so far:
- GLM 4.7 - very, very good; closest to Sonnet in creative writing style, prompt adherence, and consistency. Almost as good conceptually, but it falls short of the writer's polish Sonnet 3.7 seems to have; GLM still feels quite raw.
- GLM 4.5 Air / Intellect 3 - very close to GLM 4.7. Not as strong conceptually but usable; it still makes sense for the most part and is somewhat consistent. Voicing is a bit flat by comparison.
- Kimi K2 Thinking & NoThinking - surprisingly dumb at creative work for me. Ideas were somewhat nonsensical and also cliche/cheesy, and it didn't follow instructions all that well when I tried to set the mood/feel of the writing.
- MiniMax 2.1 - similar to Kimi K2. Felt forcibly over-structured, generic, and sort of dumb/flat.
- Gemma 3 - decent, but the writing feels amateurish compared to GLM 4.5, and the style leaves much to be desired. Maybe there are good fine-tunes to try for this?
Would love to hear what people have experienced and recommendations to try (models, techniques, prompts, inference settings, etc.)!

*Posted by elsung on 2026-01-08 in /r/LocalLLaMA.*
# llama.cpp has Out-of-bounds Write in llama-server

Maybe good to know for those of you running llama.cpp on a regular basis.
>llama.cpp is an inference of several LLM models in C/C++. In commits 55d4206c8 and prior, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation to ensure it's non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/add receives a reversed range and negative offset, causing out-of-bounds memory writes in the token evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). There is no fix at the time of publication.
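The missing check is plain input validation. A sketch of the guard in Python (llama-server itself is C++, so this only illustrates the idea, not the actual patch):

```python
def parse_n_discard(payload: dict) -> int:
    """Reject a negative n_discard before it can reach the KV-cache
    shift logic, where a negative value reverses the removal range."""
    n = int(payload.get("n_discard", 0))
    if n < 0:
        raise ValueError(f"n_discard must be non-negative, got {n}")
    return n

print(parse_n_discard({"n_discard": 256}))  # 256
```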
Also reported [for Debian](https://security-tracker.debian.org/tracker/CVE-2026-21869).

Source: [CVE-2026-21869](https://www.cve.org/CVERecord?id=CVE-2026-21869). *Posted by radarsat1 on 2026-01-08 in /r/LocalLLaMA.*
# Save tokens by skipping English grammar

I'm sure this is somewhat common knowledge, but valid English grammar is mostly optional for LLMs. Our "fluffy" way of expressing ourselves isn't strictly necessary for models that have been trained on a lot of data containing math notation, Prolog, and other symbolic notations.
So, what would take you a few minutes to disentangle may be parsed by the LLM just like any normal text. This lets you save quite a lot of tokens without losing much in terms of context reproduction.

*Posted by Everlier on 2026-01-08 in /r/LocalLLaMA.*
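A toy illustration of the idea (naive word-dropping and word counts as a crude stand-in for real tokenizer savings; actual gains depend on the model's tokenizer):

```python
STOPWORDS = {"the", "a", "an", "of", "to", "is", "are", "that", "in", "and", "please"}

def compress(prompt: str) -> str:
    """Drop common function words: a crude stand-in for 'telegraphic' prompting."""
    return " ".join(w for w in prompt.split()
                    if w.lower().strip(".,") not in STOPWORDS)

verbose = "Please write a summary of the following text that is in the attached file."
terse = compress(verbose)
print(terse)  # write summary following text attached file.
print(len(verbose.split()), "->", len(terse.split()))  # 14 -> 6
```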
# Nothing crashed. Puppeteer MCP still broke my agent.

I was testing the Puppeteer MCP server, and everything looked fine at first. It connected, tools showed up, and no errors. But once the agent started running, things slowly went sideways. Clicks "worked", but nothing downstream knew what changed, so steps kept getting retried.
At first, it felt like LLM weirdness, so I looked at the MCP server itself. Turns out most Puppeteer tools don’t declare what they return, and some rely on poorly described parameters or implicit context. Nothing breaks, but agents quietly get confused.
I recorded a short video showing the analysis and why runtime testing misses this. I used a tool I’m building called Syrin to run the check.
Curious how others validate MCP servers before runtime.

*Posted by hack_the_developer on 2026-01-08 in /r/LocalLLaMA (video post).*
# Using Llama-3.1-8B's perplexity scores to predict suicide risk (preprint + code)

We just uploaded a preprint where we used local Llama 3.1 to detect suicide risk 18 months in advance. We needed access to raw token probabilities to measure perplexity (the model's "surprise"), so open weights were mandatory.
The pipeline was pretty simple. We got recordings of people talking about their expected future self, used Claude Sonnet to generate two "future narratives" for each person (one where they have a crisis, one where they don't). Then we fed those into Llama-3.1-8B to score which narrative was more linguistically plausible based on the patient's interview transcript.
The results were that if the suicidal narrative was more probable (lower perplexity), that person was significantly more likely to report suicidal ideation 18 months later. It actually caught 75% of the high-risk people that standard suicide medical questionnaires missed.
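The core scoring step is simple (this is just the math, not their exact pipeline: perplexity is the exponential of the mean negative log-likelihood, and local servers like llama.cpp can return per-token log-probs when requested):

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood), from per-token
    natural-log probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def more_plausible(narrative_a: list[float], narrative_b: list[float]) -> str:
    """Return which narrative the model is less 'surprised' by."""
    return "A" if perplexity(narrative_a) < perplexity(narrative_b) else "B"

# Toy numbers: narrative A's tokens are on average more probable.
a = [-1.2, -0.8, -1.0]
b = [-2.5, -3.1, -2.0]
print(more_plausible(a, b))  # A
```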
Paper and Code: [https://osf.io/preprints/psyarxiv/fhzum_v1](https://osf.io/preprints/psyarxiv/fhzum_v1)
I'm planning on exploring other models (larger, newer, thinking models, etc). I'm not a comp sci person, so I am sure the code and LLM tech can be improved. If anyone looks this over and has ideas on how to optimize the pipeline or which open models might be better at "reasoning" about psychological states, I would love to hear them.
**TL;DR:** We used Llama-3.1-8B to measure the "perplexity" of future narratives. It successfully predicted suicidal ideation 18 months out.

*Posted by AI_Psych_Research on 2026-01-08 in /r/LocalLLaMA.*
# Z.ai (the AI lab behind GLM) has officially IPO'd on the Hong Kong Stock Exchange

Link: [https://x.com/Zai_org/status/2009290783678239032](https://x.com/Zai_org/status/2009290783678239032)

*Posted by Old-School8916 on 2026-01-08 in /r/LocalLLaMA.*
# GLM-4.6v 108b 4bit IQuant

Gemini said "impossible, won't run".

Hardware: Threadripper 1920X, 64GB RAM, 2x RTX 5060 Ti 32GB.

It runs: it starts at 11 t/s and drops to around 4 t/s once context reaches 8K, and the output is... great. I had tried a Nous Hermes 32B for storytelling and it was catastrophic; maybe it was too dumb, I will try again.

GLM picks up the story and continues... and delivered hard science fiction par excellence.

I gave it the task of building an interactive world chart for the science fiction story. "Hey, no problem, can do it."

I told it I wanted to monitor my AI workstation. It built a basic solution with Flask using Python, and a bigger variant with Grafana. I like it.

PS: Can someone lend me the money for two more RTXs? 🥺

*Posted by Responsible-Stock462 on 2026-01-08 in /r/LocalLLaMA.*
# Built a blind benchmark for coding models - which local models should I add?

*Posted 2026-01-08 in /r/LocalLLaMA; the post has since been deleted.*
# Help Needed - Need to setup local AI server, with local file access.

Hi All,

After many days of research, I have come to the conclusion that I need someone smarter than me to help me with my project.
Available hardware:

- Lenovo SR655 server with an AMD EPYC 7313 16c/32t CPU (willing to upgrade to a 7703 64c/128t).
- 64GB DDR4-3200 ECC RAM, 2Rx4 (2x 32GB sticks; sadly I don't have more sticks, and since the EPYC has 8 memory channels I am sacrificing bandwidth).
- 120TB ZFS with parity + mirror on spinning rust (a dedicated TrueNAS server with 64GB DDR4 and a Xeon 2288G CPU), reached over 10Gb fiber.
- 4TB in RAID 0 NVMe drives (2x 2TB NVMe PCIe 4.0 x4).
- Running Proxmox VE 9.x.
- EFI q35 virtual machine with 60GB RAM and all CPU cores passed to it (CPU type set to host for best performance and full feature set), running Ubuntu Server 24.04 with the latest Docker.
- The Ubuntu VM has access to storage over an SMB share (hosted on a different machine, over 10Gb fiber); 2TB of NVMe storage is given to Ubuntu as a local disk for models.
- I am willing to purchase a GPU for the server; it can handle up to 3 GPUs. I don't have much budget for this, so I was looking at an RTX 2000E Ada, or a V100? Since the server requires server-size GPUs, I can't just buy off-the-shelf 3060s or the like, so I would need help figuring out which GPUs are best for this application.
- My old workstation, with the following specs:
  - Gigabyte Aorus Master Z790, 13900K CPU, 32GB DDR5 (I don't remember the speed), 2x 2TB NVMe 4.0 x4 in RAID 0, NVIDIA RTX 4090. The CPU has been delidded and is watercooled with liquid metal, as is the GPU, in a custom loop with two 360mm radiators. 10Gb networking.
  - I am willing to use my old workstation as needed to make this project work.
- My very old workstation:
  - An AM4 system with a 5900X CPU, RTX 3090, 32GB DDR4-3200, and a single 1TB NVMe 3.0 x4. The CPU and GPU are both watercooled with custom loops.
  - I am willing to use this as needed as well; it's collecting dust anyway.
Goal:

I need to be able to provide the following services to one of the VMs I'm running, Nextcloud AIO:

- Whisper for speech-to-text.
- TTS for text-to-speech.
- A local AI with access to the SMB share files, with context, etc. (this is the only thing I'm really lost at).
- Some way for the OpenAI API (which Nextcloud uses) to call a ComfyUI workflow instance for image generation. I guess that would be called an API gateway.
- Setting up agents for specific tasks. I am lost on this one as well.
- A local AI backend for Nextcloud's AI chat. This I have figured out: LocalAI hosts the models I like, and Nextcloud's built-in OpenAI API support points at LocalAI as the service provider. Perhaps there is a better way?
If you can help, or have done a similar setup before and have some pointers, please DM me. I don't want to fill the entire post with random info and bother people; I would rather communicate directly so I can gain some knowledge and perhaps get this done.

I would like to thank all of you in advance. Thank you all.

*Posted by Puzzleheaded_Cake183 on 2026-01-08 in /r/LocalLLaMA.*
# Kimi K2 Thinking, Q2, 3 nodes Strix Halo, llama.cpp. Has anyone tried a multiple-node setup using vLLM yet? And how it compares to llama.cpp? Thank you.

Managed to run Kimi K2 Thinking at Q2 on a 3-node Strix Halo setup and got around 9 t/s.

*Posted by el3mancee on 2026-01-08 in /r/LocalLLaMA (image post).*
# toy model

If anyone is interested in creating, training, and chatting with a toy model, I've created [https://github.com/EduardTalianu/toygpt](https://github.com/EduardTalianu/toygpt).
It includes:
* a model script to create a model
* a training script to train it on a `.txt` file
* a chat script to interact with the trained model
It’s a PyTorch research implementation of a Manifold-Constrained Hyper-Connection Transformer (mHC), combining Mixture-of-Experts efficiency, Sinkhorn-based routing, and architectural stability enhancements.
Enjoy!

*Posted by Eduard_T on 2026-01-08 in /r/LocalLLaMA.*
# LFM2.5 1.2B Instruct is amazing

This model punches way above its weight. It outperforms every other model I've tried in this size range and runs smoothly on basically any hardware. If you haven't tried it yet, you definitely should.
Important note:
"""
We recommend using it for agentic tasks, data extraction, and RAG. It is not recommended for knowledge-intensive tasks and programming.
"""
[https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct)

*Posted by Paramecium_caudatum_ on 2026-01-08 in /r/LocalLLaMA.*
# Why Does AI Refuse to Answer Certain Questions? | RLHF vs DPO - why DPO is becoming the go-to for alignment (eng sub/dub)

Why doesn't AI answer certain dangerous questions? Have you ever wondered how we teach AI where to draw the line? High intelligence alone does not make an AI good.

Throughout 2025, I gave several talks under the theme "Building Ethical LLM Solutions That Don't Cross the Line." Unfortunately, due to technical issues at the venues, the original recordings of those talks were lost. It felt like too much of a loss to leave them buried, so I decided to significantly expand the content, redesign the visuals, and re-record the entire talk from scratch, this time with much higher production quality.
This video is not a generic discussion about "why AI ethics matter." It dives into:

- What alignment really means and why it is necessary
- The mathematical intuition behind RLHF and DPO
- How AI systems actually learn concepts related to "ethics"

There is no grand ambition behind this project. I simply wanted to share what I've studied and experienced with others who are walking a similar path. I hope this video is helpful to engineers, researchers, and anyone curious about the safety of AI.
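On the DPO intuition: the whole objective fits in a few lines (a sketch of the per-example loss from the DPO paper, not code from the talk; `beta` controls how far the policy may drift from the frozen reference model):

```python
import math

def dpo_loss(pi_chosen: float, pi_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * implicit reward margin).
    Inputs are sequence log-probs under the policy (pi_*) and the
    reference model (ref_*)."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no preference margin the loss is log(2); it shrinks as the policy
# favors the chosen answer more strongly than the reference does.
print(round(dpo_loss(0, 0, 0, 0), 4))  # 0.6931
```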
YouTube: [https://www.youtube.com/watch?v=0aryjbfkL0k](https://www.youtube.com/watch?v=0aryjbfkL0k)

LinkedIn: [https://www.linkedin.com/in/devgumin/](https://www.linkedin.com/in/devgumin/)
*Posted by Old_Elk5091 on 2026-01-08 in /r/LocalLLaMA.*
# Im planning on buying a single gpu to run llms and contemplating between two gpus

I'm planning on getting either an NVIDIA V100 (32GB version) or an MI50 (32GB version).

Is the extra money for the V100 worth it, or should I just buy the cheaper MI50 for around 200-300 USD?

*Posted by WhiteSupremacistMonk on 2026-01-08 in /r/LocalLLaMA.*
# M2 Ultra 128gb ram 2TB ssd or PC with two rtx3090 64gb ram 2TB ssd?

Should I buy an M2 Ultra with 128GB RAM and a 2TB SSD, or a PC with two RTX 3090s, 64GB RAM, and a 2TB SSD? They're basically the same price: $2500-3000 used for the Mac Studio, and building the PC with the two RTX 3090s would cost around the same.

I would like to work with 70B models.

*Posted by royal_robert on 2026-01-08 in /r/LocalLLaMA.*
Weird model choices in LM Studio | 0 | Hi,
Does anyone know why LMStudio exposes such a weird range of models as 'first party' - say GLM 4.6 flash, but not Air, no DevStral 2, only the small variant, no full DeepSeek or Kimi etc?
I see some versions on repos like \`lmstudio-community/\` - or some other random repos, what's the deal there, are they 'safe'? | 2026-01-08T17:38:46 | anonXMR | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7i9ea | false | null | t3_1q7i9ea | /r/LocalLLaMA/comments/1q7i9ea/weird_model_choices_in_lm_studio/ | false | false | 0 | {'enabled': True, 'images': [{'id': '5gL90Yolq6n_ZaJSrNyfeofrxiuMBAaZPhcPMIEXUiA', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/191megm9w5cg1.png?width=108&crop=smart&auto=webp&s=6d852b3e0927e595cbd3b66cf5642b7d1affc6a9', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/191megm9w5cg1.png?width=216&crop=smart&auto=webp&s=45a3eb77eb50f36d1a8906c438c691236da49965', 'width': 216}, {'height': 187, 'url': 'https://preview.redd.it/191megm9w5cg1.png?width=320&crop=smart&auto=webp&s=76d3ebff4abc6026301c3f369b66d35b64ac4491', 'width': 320}, {'height': 375, 'url': 'https://preview.redd.it/191megm9w5cg1.png?width=640&crop=smart&auto=webp&s=0b951c107668b24422f23f4a2fd9d893e90d1be6', 'width': 640}, {'height': 563, 'url': 'https://preview.redd.it/191megm9w5cg1.png?width=960&crop=smart&auto=webp&s=9a3862a3d3afbfc6f5435adcf916018220dfa82d', 'width': 960}, {'height': 634, 'url': 'https://preview.redd.it/191megm9w5cg1.png?width=1080&crop=smart&auto=webp&s=76990a89c82ff6a788a8365ca2470901dabdcc57', 'width': 1080}], 'source': {'height': 1556, 'url': 'https://preview.redd.it/191megm9w5cg1.png?auto=webp&s=a241f2525b6f2975ef36428afea612d9ca5da21d', 'width': 2650}, 'variants': {}}]} | ||
Why is there such an unusual list of models in LMStudio | 0 | Hi,
Does anyone know why LMStudio exposes such a weird range of models as 'first party' - say GLM 4.6 flash, but not Air, no DevStral 2, only the small variant, no full DeepSeek or Kimi etc?
I see some versions on repos like \`lmstudio-community/\` - or some other random repos, what's the deal there, are they 'safe'? | 2026-01-08T17:37:49 | anonXMR | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7i8fw | false | null | t3_1q7i8fw | /r/LocalLLaMA/comments/1q7i8fw/why_is_there_such_an_unusual_list_of_models_in/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'c5kzh9sfv5cg1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/c5kzh9sfv5cg1.png?width=108&crop=smart&auto=webp&s=1080093633f75b504f940a54c09e5783747bec59', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/c5kzh9sfv5cg1.png?width=216&crop=smart&auto=webp&s=47bcaf48890efda83882e8583ad3dc20e22ca76c', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/c5kzh9sfv5cg1.png?width=320&crop=smart&auto=webp&s=ad4474cedd946fe80708af623d0cb0bf86063f88', 'width': 320}, {'height': 377, 'url': 'https://preview.redd.it/c5kzh9sfv5cg1.png?width=640&crop=smart&auto=webp&s=04d676259993e4ad0bc7564b1768f2b65695aad4', 'width': 640}, {'height': 565, 'url': 'https://preview.redd.it/c5kzh9sfv5cg1.png?width=960&crop=smart&auto=webp&s=78ae0c4931d03c75963ac8b10eb706fb7b5cf460', 'width': 960}, {'height': 636, 'url': 'https://preview.redd.it/c5kzh9sfv5cg1.png?width=1080&crop=smart&auto=webp&s=bc64d1c3b81455e5f060b50290488b5332d98457', 'width': 1080}], 'source': {'height': 1568, 'url': 'https://preview.redd.it/c5kzh9sfv5cg1.png?auto=webp&s=ffabe73e3b95d3f8fc04cb205b082f2873db1480', 'width': 2660}, 'variants': {}}]} | |
How do you manage quality when AI agents write code faster than humans can review it? | 20 | We are shifting to an agentic workflow. My thesis is "Code at Inference Speed." My CTO's counter-argument is that **reviewing code is harder than writing it**.
His concern is simple: If AI increases code volume by 10x, human review becomes a fatal bottleneck. He predicts technical debt will explode because humans can’t mentally verify that much logic that quickly.
How do you handle this? I know one option is to slow down releases, but are there other approaches people are taking? | 2026-01-08T17:28:30 | https://www.reddit.com/r/LocalLLaMA/comments/1q7hywi/how_do_you_manage_quality_when_ai_agents_write/ | lostsoul8282 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7hywi | false | null | t3_1q7hywi | /r/LocalLLaMA/comments/1q7hywi/how_do_you_manage_quality_when_ai_agents_write/ | false | false | self | 20 | null |
Qwen3-4B-Instruct-2507 multilingual FT with upscaled Polish language | 23 | Hi,
Just wanted to share a preview of my latest finetuned model based on Qwen3-4B-Instruct-2507.
Languages ratio:
Polish - high
English - medium
Chinese - medium
Czech - medium/low
Ukrainian - medium/low
Russian - medium/low
[https://huggingface.co/piotr-ai/polanka\_4b\_v0.3\_preview\_260108\_qwen3\_gguf](https://huggingface.co/piotr-ai/polanka_4b_v0.3_preview_260108_qwen3_gguf)
| 2026-01-08T17:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/1q7hikw/qwen34binstruct2507_multilingual_ft_with_upscaled/ | Significant_Focus134 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7hikw | false | null | t3_1q7hikw | /r/LocalLLaMA/comments/1q7hikw/qwen34binstruct2507_multilingual_ft_with_upscaled/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'IkLjK031A6rVr-BtLcWHSwwJVkVBpZ-cTp_Avs82NTc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IkLjK031A6rVr-BtLcWHSwwJVkVBpZ-cTp_Avs82NTc.png?width=108&crop=smart&auto=webp&s=02c0b00d561e47cb29866724feda648a32f7a3d1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IkLjK031A6rVr-BtLcWHSwwJVkVBpZ-cTp_Avs82NTc.png?width=216&crop=smart&auto=webp&s=deac35f245e9e0ee8eb03aec2c65efae9d51f7ac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IkLjK031A6rVr-BtLcWHSwwJVkVBpZ-cTp_Avs82NTc.png?width=320&crop=smart&auto=webp&s=4a00522b17a07c08a3de03d221ec09b3602076eb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IkLjK031A6rVr-BtLcWHSwwJVkVBpZ-cTp_Avs82NTc.png?width=640&crop=smart&auto=webp&s=de7d31139cdd4b9ed7355cd1752d9f99af93d654', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IkLjK031A6rVr-BtLcWHSwwJVkVBpZ-cTp_Avs82NTc.png?width=960&crop=smart&auto=webp&s=c5c9ab15f85fd8b5c3306f77b5e66684217dd832', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IkLjK031A6rVr-BtLcWHSwwJVkVBpZ-cTp_Avs82NTc.png?width=1080&crop=smart&auto=webp&s=7b0705845832c25c8bd03af902783cc2fda58466', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IkLjK031A6rVr-BtLcWHSwwJVkVBpZ-cTp_Avs82NTc.png?auto=webp&s=2614dbada8dbacc21f8df4a1beb3c474989f6413', 'width': 1200}, 'variants': {}}]} |
I can't decide on the best narration voice for my new true-crime documentary YouTube channel, any help? | 0 | I just started my YouTube channel. My voice is pathetic, so I am considering ElevenLabs for voiceover. I have these two profiles that friends have recommended; they do well in documentaries, particularly my niche, "true crime". Could any expert here listen and recommend the better one? I would much appreciate it. This one: https://elevenlabs.io/app/voice-lab/share/aabd1c2ba2c23a3548bfb09fdf64c6a01eccbe5cd0d46b0a1b379180d641f5b8/H2CgnIux8C0XLWQ97uPA and this one: https://elevenlabs.io/app/voice-lab/share/3d83f8e2a1b4a830d14ccba39a50a2cadc686a5740d282047ca6865f7b6b3104/lnUnPeUhSI5EcqtFBux7 I need an authoritative yet approachable tone that works well for educational content, with that engaging storytelling quality that keeps viewers hooked. | 2026-01-08T17:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1q7hfup/i_cant_decide_the_best_narration_voice_for_my_new/ | Remarkable_Age_1838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7hfup | false | null | t3_1q7hfup | /r/LocalLLaMA/comments/1q7hfup/i_cant_decide_the_best_narration_voice_for_my_new/ | false | false | self | 0 | null |
Belief Propagation is an Obscure Alternative to Backpropagation for Training Reasoning Models | 3 | Sudoku solvers are great for testing reasoning models. The paper 'Sinkhorn Solves Sudoku' showcases *Belief Propagation*, an alternative to backpropagation rooted in Optimal Transport theory.
The idea is somewhat analogous to performing a softmax but without the derivatives. It's pretty cool IMO.
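For anyone curious what "Sinkhorn" means in practice: the core operation is alternating row/column normalization of a positive matrix until it becomes approximately doubly stochastic. A minimal sketch of the iteration (my own illustration, not code from the paper):

```python
# Sinkhorn iteration: alternately normalize the rows and columns of a
# positive matrix; it converges toward a doubly stochastic matrix.
def sinkhorn(m, iters=50):
    for _ in range(iters):
        # row-normalize
        m = [[v / sum(row) for v in row] for row in m]
        # column-normalize
        cols = [sum(m[i][j] for i in range(len(m))) for j in range(len(m[0]))]
        m = [[m[i][j] / cols[j] for j in range(len(m[0]))] for i in range(len(m))]
    return m

m = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
print([round(sum(row), 3) for row in m])  # [1.0, 1.0]
```

In the Sudoku setting the matrix entries are (roughly) cell/digit beliefs, and the normalization plays the role of enforcing the "each digit appears exactly once" constraints.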
| 2026-01-08T17:08:20 | https://leetarxiv.substack.com/p/sinkhorn-solves-sudoku | DataBaeBee | leetarxiv.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1q7hetb | false | null | t3_1q7hetb | /r/LocalLLaMA/comments/1q7hetb/belief_propagation_is_an_obscure_alternative_to/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'Weurcl3TYPmG2Q6w9Iq6sjpwr7_UUsbhSjil3NRzaXM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Weurcl3TYPmG2Q6w9Iq6sjpwr7_UUsbhSjil3NRzaXM.jpeg?width=108&crop=smart&auto=webp&s=257fb8abfbacd89c18bbdfc2ab8572e4582b9d22', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Weurcl3TYPmG2Q6w9Iq6sjpwr7_UUsbhSjil3NRzaXM.jpeg?width=216&crop=smart&auto=webp&s=1a231059019ee5ca04b0f3474c51c5fa1ebfb998', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Weurcl3TYPmG2Q6w9Iq6sjpwr7_UUsbhSjil3NRzaXM.jpeg?width=320&crop=smart&auto=webp&s=d2184a6ba1951d9a91ad36e70b243ec7310b3f4c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Weurcl3TYPmG2Q6w9Iq6sjpwr7_UUsbhSjil3NRzaXM.jpeg?width=640&crop=smart&auto=webp&s=79db6fb175e479472c8011c99339d99a6b2c453e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Weurcl3TYPmG2Q6w9Iq6sjpwr7_UUsbhSjil3NRzaXM.jpeg?width=960&crop=smart&auto=webp&s=6575d40f4aa1002ac27a67a38860f31800bd6a57', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Weurcl3TYPmG2Q6w9Iq6sjpwr7_UUsbhSjil3NRzaXM.jpeg?width=1080&crop=smart&auto=webp&s=1aaff0cb31ad46af8d93da0abeedf028de49f7b7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Weurcl3TYPmG2Q6w9Iq6sjpwr7_UUsbhSjil3NRzaXM.jpeg?auto=webp&s=56127ecba23f27b6a70c7e7fe45aeaf91d8c191b', 'width': 1200}, 'variants': {}}]} | |
Why didn't AI “join the workforce” in 2025?, US Job Openings Decline to Lowest Level in More Than a Year and many other AI links from Hacker News | 0 | Hey everyone, I just sent [issue #15 of the Hacker New AI newsletter](https://eomail4.com/web-version?p=9ec639fc-ecad-11f0-8238-813784e870eb&pt=campaign&t=1767890678&s=77552741087ff895c759c805c4a68ada909a44b800f2abf8a2147c43bf57782e), a roundup of the best AI links and the discussions around them from Hacker News. See below 5/35 links shared in this issue:
* US Job Openings Decline to Lowest Level in More Than a Year - [HN link](https://news.ycombinator.com/item?id=46527533)
* Why didn't AI “join the workforce” in 2025? - [HN link](https://news.ycombinator.com/item?id=46505735)
* The suck is why we're here - [HN link](https://news.ycombinator.com/item?id=46482877)
* The creator of Claude Code's Claude setup - [HN link](https://news.ycombinator.com/item?id=46470017)
* AI misses nearly one-third of breast cancers, study finds - [HN link](https://news.ycombinator.com/item?id=46537983)
If you enjoy such content, please consider subscribing to the newsletter here: [**https://hackernewsai.com/**](https://hackernewsai.com/) | 2026-01-08T17:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1q7hbbc/why_didnt_ai_join_the_workforce_in_2025_us_job/ | alexeestec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7hbbc | false | null | t3_1q7hbbc | /r/LocalLLaMA/comments/1q7hbbc/why_didnt_ai_join_the_workforce_in_2025_us_job/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'jHCN_uva9TlmFhIEbDvKR9zNxIDOi2aspi2hVpOu-FI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jHCN_uva9TlmFhIEbDvKR9zNxIDOi2aspi2hVpOu-FI.png?width=108&crop=smart&auto=webp&s=eaece91410aa19aa91a2dfaa236f1f51ab8d4011', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jHCN_uva9TlmFhIEbDvKR9zNxIDOi2aspi2hVpOu-FI.png?width=216&crop=smart&auto=webp&s=3b8c80b95e563b5f1c9ccf6b304e2c8f7287d822', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jHCN_uva9TlmFhIEbDvKR9zNxIDOi2aspi2hVpOu-FI.png?width=320&crop=smart&auto=webp&s=cb1c60f0b380a177c086eef58f8759413a428d54', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jHCN_uva9TlmFhIEbDvKR9zNxIDOi2aspi2hVpOu-FI.png?width=640&crop=smart&auto=webp&s=6ad830a9bee7fb19dfc02e1b8d5171c804b6dba2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jHCN_uva9TlmFhIEbDvKR9zNxIDOi2aspi2hVpOu-FI.png?width=960&crop=smart&auto=webp&s=c59730cd65715d103f1a922cf9f271e718597092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jHCN_uva9TlmFhIEbDvKR9zNxIDOi2aspi2hVpOu-FI.png?width=1080&crop=smart&auto=webp&s=9b52058670bd8ea41f5eb47b76079d97d37eb2a0', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/jHCN_uva9TlmFhIEbDvKR9zNxIDOi2aspi2hVpOu-FI.png?auto=webp&s=938312c4efbaafd9c5c2d652b0b02d6f8aecf20e', 'width': 1300}, 'variants': {}}]} |
I fine-tuned a 7B model for reasoning on free Colab with GRPO + TRL | 11 | I just created a **Colab notebook** that lets you **add reasoning to 7B+ models** on free Colab (T4 GPU)!
Thanks to **TRL's full set of memory optimizations**, this setup reduces memory usage by **\~7×** compared to naive FP16, making it possible to fine-tune large models in a free Colab session.
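For context, part of what makes GRPO cheap enough for a free T4 is that it needs no learned value network: it samples a group of completions per prompt and normalizes their rewards within the group. A minimal sketch of that advantage computation (not TRL's exact internals):

```python
# Group-relative advantages: reward minus group mean, scaled by group std.
# This replaces the critic/value model used in PPO.
def group_relative_advantages(rewards):
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# four sampled completions for one prompt, scored 0/1 by a reward function
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
print(adv)  # [1.0, -1.0, -1.0, 1.0]
```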
Notebook:
👉 [GRPO + TRL Colab notebook](https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_trl_lora_qlora.ipynb)
Check out other notebooks I worked on:
👉 [TRL examples](https://github.com/huggingface/trl/tree/main/examples/notebooks)
Happy hacking! 😄 | 2026-01-08T17:00:07 | https://www.reddit.com/r/LocalLLaMA/comments/1q7h6hz/i_finetuned_a_7b_model_for_reasoning_on_free/ | External-Rub5414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7h6hz | false | null | t3_1q7h6hz | /r/LocalLLaMA/comments/1q7h6hz/i_finetuned_a_7b_model_for_reasoning_on_free/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]} |
Built a Research and Action Agent That Is 2x faster than ChatGPT Agent. | 0 | Hey everyone!
A few weeks ago, I signed up for a ChatGPT plan to try out their Agent mode (it was a free offer). After testing it with a few prompts, I was surprised by how slow the agent was, even for small tasks.
So I built Resac, a research and action agent that is 2x faster than ChatGPT Agent.
It's free and open source: [https://github.com/hireshBrem/resac-ai-agent](https://github.com/hireshBrem/resac-ai-agent) | 2026-01-08T16:57:24 | https://www.reddit.com/r/LocalLLaMA/comments/1q7h3qr/built_a_research_and_action_agent_that_is_2x/ | Comfortable-Rip-9277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7h3qr | false | null | t3_1q7h3qr | /r/LocalLLaMA/comments/1q7h3qr/built_a_research_and_action_agent_that_is_2x/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '6pYslRE5J4DEk4EtAgd7IRJ9yTny6FiQrXvWv-nQUKU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6pYslRE5J4DEk4EtAgd7IRJ9yTny6FiQrXvWv-nQUKU.png?width=108&crop=smart&auto=webp&s=ca1dc42a12212983551d3624f1914e45a37a0262', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6pYslRE5J4DEk4EtAgd7IRJ9yTny6FiQrXvWv-nQUKU.png?width=216&crop=smart&auto=webp&s=00c7dcade8fd8da4c58d3ef1081b1cb224f39a25', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6pYslRE5J4DEk4EtAgd7IRJ9yTny6FiQrXvWv-nQUKU.png?width=320&crop=smart&auto=webp&s=f254bc50afa54282ccbf092bf02bef401cceb862', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6pYslRE5J4DEk4EtAgd7IRJ9yTny6FiQrXvWv-nQUKU.png?width=640&crop=smart&auto=webp&s=ec6f15723005167663fa5881eac2ace6ceea1deb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6pYslRE5J4DEk4EtAgd7IRJ9yTny6FiQrXvWv-nQUKU.png?width=960&crop=smart&auto=webp&s=9df6eee1387e257e6769c1965cfe4bb984bb5e78', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6pYslRE5J4DEk4EtAgd7IRJ9yTny6FiQrXvWv-nQUKU.png?width=1080&crop=smart&auto=webp&s=486aa30482a92f4c3109a84b6ecfe39e2520b7dd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6pYslRE5J4DEk4EtAgd7IRJ9yTny6FiQrXvWv-nQUKU.png?auto=webp&s=a3cac138ed51c02cebd59cce320104957cd87521', 'width': 1200}, 'variants': {}}]} |
Storytelling Model | 3 | HI folks, I am looking for a model that will write fiction for me -- ideally, I'll give it the ideas and a rough outline, and it will be able to develop it into a full blown short story type item. it would be nice if it also comes in an abliterated version
do you have any recommendations? | 2026-01-08T16:57:21 | https://www.reddit.com/r/LocalLLaMA/comments/1q7h3pi/storytelling_model/ | slrg1968 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7h3pi | false | null | t3_1q7h3pi | /r/LocalLLaMA/comments/1q7h3pi/storytelling_model/ | false | false | self | 3 | null |
Why is AnythingLLM significantly faster than the CLINE when using the same server of llama.cpp, same model, and same parameters? | 1 | 2026-01-08T16:54:18 | https://www.reddit.com/gallery/1q7h0oc | BitOk4326 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q7h0oc | false | null | t3_1q7h0oc | /r/LocalLLaMA/comments/1q7h0oc/why_is_anythingllm_significantly_faster_than_the/ | false | false | 1 | null | ||
Vision language models sub-4B for agentic | 1 | Hello, I'm looking for vision-language models for agentic workflows that are 4B parameters or less.
What have you found to be good in this area?
Is 4B enough to steer agentic flow? | 2026-01-08T16:53:26 | https://www.reddit.com/r/LocalLLaMA/comments/1q7gzu4/vision_language_models_sub4b_for_agentic/ | SlowFail2433 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7gzu4 | false | null | t3_1q7gzu4 | /r/LocalLLaMA/comments/1q7gzu4/vision_language_models_sub4b_for_agentic/ | false | false | self | 1 | null |
Local Inference with big model shared over multiple GPUs with vLLM or Huggingface - Upgraded | 1 | Since I got some heat for my last post, and that it wasn't of good quality I wanted to better myself and provide more information regarding my problem :)
So I am trying the following:
\- Three GPUs (L40 - 48GB VRAM each)
\- Driver Version: 570.133.20 CUDA Version: 12.8
\- Model: "Qwen/Qwen2.5-72B"
\- Inference with vLLM or Huggingface on a server where I have no sudo permission
I set the visible devices:

```python
os.environ["CUDA_VISIBLE_DEVICES"] = "2,5"
```
Here is my setup:
Huggingface:
```python
hf_model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    max_memory=max_memory,
)
```
Loading model with HuggingFace...
Available GPUs: 2
GPU 0: 47.2GB free / 47.7GB total
GPU 1: 47.2GB free / 47.7GB total
`torch_dtype` is deprecated! Use `dtype` instead!
2026-01-08 17:16:24.631470: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2026-01-08 17:16:25.947704: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR (this warning repeats multiple times)
2026-01-08 17:16:27.646146: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA,
in other operations, rebuild TensorFlow with the appropriate compiler flags.
Here it hangs:
```python
def hf_generate(model, tokenizer, prompt, max_new_tokens=100):
    """Generate with HuggingFace model."""
    # Get correct device for multi-GPU models
    if hasattr(model, 'hf_device_map') and model.hf_device_map:
        device = next(iter(model.hf_device_map.values()))
        device = f"cuda:{device}" if isinstance(device, int) else device
    else:
        device = next(model.parameters()).device

    inputs = tokenizer(prompt, return_tensors="pt").to(device)

    start = time.time()
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
            pad_token_id=tokenizer.pad_token_id,
        )
    elapsed = time.time() - start

    generated_tokens = outputs.shape[1] - inputs['input_ids'].shape[1]
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return text, elapsed, generated_tokens
```
vLLM setup:

```python
from vllm import LLM, SamplingParams

print("Loading model with vLLM...")
start_load = time.time()
vllm_model = LLM(
    model=MODEL_NAME,
    dtype="bfloat16",
    tensor_parallel_size=2,
    gpu_memory_utilization=0.90,
    trust_remote_code=True,
    enforce_eager=True,
    max_model_len=4096,
)
load_time_vllm = time.time() - start_load
print(f"vLLM model loaded in {load_time_vllm:.2f}s")
```
Here it dies with this error:

```
Exception: WorkerProc initialization failed due to an exception in a background process. See stack trace for root cause.
uv/python/cpython-3.11.7-linux-x86_64-gnu/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 2 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d ')
Engine core initialization failed. See root cause above. Failed core proc(s): {}
```
I was hoping that inference with a model that does not fit in one GPU would be more straightforward, but somehow I am hitting a wall despite my attempts to debug it.
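One likely root cause, worth ruling out before anything else: Qwen2.5-72B in bf16 simply does not fit on two (or even all three) L40s, so the vLLM workers can die during weight loading. A back-of-envelope sketch (weights only; the KV cache and activations need additional headroom on top of this):

```python
# Weights-only VRAM estimate for Qwen2.5-72B in bfloat16.
params = 72e9
bytes_per_param = 2  # bf16
weights_gb = params * bytes_per_param / 1e9

budget_2gpu = 2 * 48 * 0.90  # two L40s at gpu_memory_utilization=0.90
budget_3gpu = 3 * 48 * 0.90  # all three L40s

print(f"weights: {weights_gb:.0f} GB")          # 144 GB
print(f"2x L40 budget: {budget_2gpu:.1f} GB")   # 86.4 GB
print(f"3x L40 budget: {budget_3gpu:.1f} GB")   # 129.6 GB
```

If this arithmetic holds, a quantized checkpoint (e.g. a 4-bit AWQ/GPTQ variant, roughly 40-45 GB of weights) should fit on two L40s; the exact checkpoint and sizes here are my assumptions, not something taken from the logs above.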
Hope that this helps to better understand my problem :)
Would be awesome if I could get some hints. | 2026-01-08T16:44:44 | https://www.reddit.com/r/LocalLLaMA/comments/1q7gr9w/local_inference_with_big_model_shared_over/ | Lopsided-Dig-7625 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7gr9w | false | null | t3_1q7gr9w | /r/LocalLLaMA/comments/1q7gr9w/local_inference_with_big_model_shared_over/ | false | false | self | 1 | null |
Models that handle 2000-2500 a words text and re-write it | 0 | Hello everyone! As the title says, I'm looking for a model that can handle a text of approximately 2000-2500 words and re-write it completely in another form, maybe more explicit, or with a different registry, without changing the core or -what happened to me and this is why I'm looking for your help- does "EOS token found" before even reaching a quarter of the text. I've been playing with a uncensored version of Mistral but it doesn't seem to understand the job well. Do you have any suggestions for a better model, or better prompting? | 2026-01-08T16:40:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q7gn4z/models_that_handle_20002500_a_words_text_and/ | Dedalus_art | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7gn4z | false | null | t3_1q7gn4z | /r/LocalLLaMA/comments/1q7gn4z/models_that_handle_20002500_a_words_text_and/ | false | false | nsfw | 0 | null |
/MiniMax-M2.1-REAP-50-W4A16 | 3 | [https://huggingface.co/0xSero/MiniMax-M2.1-REAP-50-W4A16](https://huggingface.co/0xSero/MiniMax-M2.1-REAP-50-W4A16)
Can I run Claude Code locally using just this on my 5090 + RAM?
|Size|\~59GB|
|:-|:-|
| 2026-01-08T16:40:08 | https://www.reddit.com/r/LocalLLaMA/comments/1q7gmre/minimaxm21reap50w4a16/ | xSNYPSx777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7gmre | false | null | t3_1q7gmre | /r/LocalLLaMA/comments/1q7gmre/minimaxm21reap50w4a16/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Va3GPO0oTDABLO-Y4LTLiAcYvAPUC9eLmXUoUndW48c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Va3GPO0oTDABLO-Y4LTLiAcYvAPUC9eLmXUoUndW48c.png?width=108&crop=smart&auto=webp&s=b09c07d8b2f30e6bb5d6d6f744cc5512c52b0fef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Va3GPO0oTDABLO-Y4LTLiAcYvAPUC9eLmXUoUndW48c.png?width=216&crop=smart&auto=webp&s=ba834b681786ab9696987eb35fac605bc9977caf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Va3GPO0oTDABLO-Y4LTLiAcYvAPUC9eLmXUoUndW48c.png?width=320&crop=smart&auto=webp&s=e37f966fd23717d68215c84ea998040b54f2cd96', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Va3GPO0oTDABLO-Y4LTLiAcYvAPUC9eLmXUoUndW48c.png?width=640&crop=smart&auto=webp&s=7cdf18221f8602c0aca88dcd32e3ae0bd8e0e5ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Va3GPO0oTDABLO-Y4LTLiAcYvAPUC9eLmXUoUndW48c.png?width=960&crop=smart&auto=webp&s=6419a2546c48e0408b810d14c5391bad9ba15243', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Va3GPO0oTDABLO-Y4LTLiAcYvAPUC9eLmXUoUndW48c.png?width=1080&crop=smart&auto=webp&s=c188bb2ff54bcd338d359765917a2e33925c3be0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Va3GPO0oTDABLO-Y4LTLiAcYvAPUC9eLmXUoUndW48c.png?auto=webp&s=de69a9f805f067e2df41a4275b257b62f50f52dc', 'width': 1200}, 'variants': {}}]} |
I built a tool that visualizes RAG retrieval in real-time (Interactive Graph Demo) | 0 |
Hey everyone,
I've been working on **VeritasGraph**, and I just pushed a new update that I think this community will appreciate.
We all know RAG is powerful, but debugging the retrieval step can be a pain. I wanted a way to visually inspect exactly what the LLM is "looking at" when generating a response.
**What’s new?** I added an interactive Knowledge Graph Explorer (built with PyVis/Gradio) that sits right next to the chat interface.
**How it works:**
1. You ask a question (e.g., about visa criteria).
2. The system retrieves the relevant context.
3. It generates the text response AND a dynamic subgraph showing the entities and relationships used.
4. Red nodes = Query-related entities. Size = Connection importance.
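Step 3 above (building the query-specific subgraph) can be sketched with a plain adjacency dict; the entity names below are invented for illustration, and the real project renders the result with PyVis:

```python
# Extract the neighborhood of the query entities from a knowledge graph.
graph = {
    "visa": ["criteria", "applicant"],
    "criteria": ["income", "sponsorship"],
    "applicant": [],
    "income": [],
    "sponsorship": [],
}

def subgraph(graph, seeds, depth=1):
    keep, frontier = set(seeds), set(seeds)
    for _ in range(depth):
        frontier = {n for s in frontier for n in graph.get(s, [])}
        keep |= frontier
    # keep only edges whose endpoints both survived
    return {n: [m for m in graph.get(n, []) if m in keep] for n in keep}

print(sorted(subgraph(graph, ["visa"])))  # ['applicant', 'criteria', 'visa']
```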
**Tech Stack:** \[Mention your backend tech, e.g., LangChain, Neo4j, NetworkX, etc.\] + Gradio for the UI.
I’d love some feedback on the UI and the retrieval logic.
**Live Demo:** [https://bibinprathap.github.io/VeritasGraph/demo/](https://bibinprathap.github.io/VeritasGraph/demo/)
[https://github.com/bibinprathap/VeritasGraph](https://github.com/bibinprathap/VeritasGraph)
https://preview.redd.it/crsp5bnal5cg1.png?width=1915&format=png&auto=webp&s=3e188662d41d39dd05def411db7a06aabdd9a081
https://preview.redd.it/r1wt1cnal5cg1.png?width=1894&format=png&auto=webp&s=2e6dadbf48d461508a19c881d3e61fe38b6d1a43
| 2026-01-08T16:38:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q7gl4b/i_built_a_tool_that_visualizes_rag_retrieval_in/ | BitterHouse8234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7gl4b | false | null | t3_1q7gl4b | /r/LocalLLaMA/comments/1q7gl4b/i_built_a_tool_that_visualizes_rag_retrieval_in/ | false | false | 0 | null | |
How I stopped writing long prompts and let Codex just run | 0 | Lately I’ve been avoiding long prompts and planning loops.
What worked better for me was switching to a skills-based workflow: I tell the model which mode to run, not how to reason.
Example of what I paste into Codex:
use vf: build a login page
That tells the AI to plan, execute, test, and finish without asking me questions.
This is adapted from the official Claude skills approach, but tuned to work directly in plain Codex chat.
I keep this as a personal setup across machines.
Curious if others are doing something similar. | 2026-01-08T16:07:43 | https://www.reddit.com/r/LocalLLaMA/comments/1q7frcu/how_i_stopped_writing_long_prompts_and_let_codex/ | Wise_Secretary8790 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7frcu | false | null | t3_1q7frcu | /r/LocalLLaMA/comments/1q7frcu/how_i_stopped_writing_long_prompts_and_let_codex/ | false | false | self | 0 | null |
What LLM should I use? | 0 | What is the best LLM to help me learn hypnosis? Most prompts I tried on ChatGPT and Gemini got censored for no reason. | 2026-01-08T16:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1q7fpzi/what_llm_should_i_use/ | Funnytingles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7fpzi | false | null | t3_1q7fpzi | /r/LocalLLaMA/comments/1q7fpzi/what_llm_should_i_use/ | false | false | self | 0 | null |
Are MiniMax M2.1 quants usable for coding? | 16 | Please share your real-life experience. I'm especially interested in hearing from someone who had a chance to compare higher quants with lower ones.
Also, speaking of the model itself - do you feel it's worth the buzz around it?
Use case - coding via opencode or claude proxy.
Thank you! | 2026-01-08T15:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1q7fejp/are_minimax_m21_quants_usable_for_coding/ | val_in_tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7fejp | false | null | t3_1q7fejp | /r/LocalLLaMA/comments/1q7fejp/are_minimax_m21_quants_usable_for_coding/ | false | false | self | 16 | null |
Recommendations for local RAG setup with TXT, DOC and PDF corpus | 2 | Hi everyone,
This is my first post in the community, so please excuse any rookie mistakes.
I am looking to self-host a RAG setup with a document corpus consisting of TXT, DOC and PDF documents only. All of these documents are well-formed (i.e. no image scans as PDF), and the LLM applications I am looking at are primarily summarization and comparison (e.g. some of the documents are legal acts, amendments and guidelines, so comparing clauses to identify changes, etc.).
I am familiar with the different technical options (e.g. Apple M-series CPU/GPU vs Nvidia Jetson family, etc.) and the deployment options (native, hypervisor, container, etc.).
However, even after searching this and the r/ollama subreddits, I could not pin down which hardware combinations would most likely achieve what I have read is considered a comfortable rate of 30-40 tokens/sec.
So I reach out to you; any recommendations or ideas on how to lock this down?
Many thanks in advance for any help.
\-- sizag | 2026-01-08T15:52:48 | https://www.reddit.com/r/LocalLLaMA/comments/1q7fcvu/recommendations_for_local_rag_setup_with_txt_doc/ | sizag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7fcvu | false | null | t3_1q7fcvu | /r/LocalLLaMA/comments/1q7fcvu/recommendations_for_local_rag_setup_with_txt_doc/ | false | false | self | 2 | null |
I've got an RTX 4070 and about 32 GB of RAM. I hate ChatGPT/Gemini but would like to run my own personal AI model locally on my own system. What would be the best model to run via Ollama on my system? | 0 | Not really much else to add; I just want a small but fast and reasonably smart AI model to help with little questions and minor tasks like rewriting things. I'm not really interested in an AI capable of finding the cure for aging, just one no better than a glorified writing aid that can do 8th-grade math. | 2026-01-08T15:19:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q7ei24/ive_got_an_rtx_4070_and_about_32_gb_of_ram_i_hate/ | Killmelmaoxd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7ei24 | false | null | t3_1q7ei24 | /r/LocalLLaMA/comments/1q7ei24/ive_got_an_rtx_4070_and_about_32_gb_of_ram_i_hate/ | false | false | self | 0 | null |
Starter AI Rig running into issues | 0 | I decided to try to take the jump into a dedicated AI rig. I bought a former Miner rig with a 3090 ( I know some may have advised against this). My rig so far( 1k cost from a guy who was selling multiple you from his mining project):
2x 1000w gold power supply
3 tier mining frame
Ryzen 5 3500 processor (supposedly not a good processor for AI; looking for a 5900X to replace it)
Gigabyte Aorus Master X570 motherboard
32GB RAM (will be upgrading to 64GB, 2x32GB, this weekend)
RTX 3090 Vision OC ( I plan to repad and ptm7950 the die )
The setup had a bunch of x1 PCIe risers, like 4x-to-x1 PCIe USB riser cards.
I bought a 20cm linkup riser cable. It's too stiff to run to the spot where the other riser connected to the card.
I've seen a couple of pcie mcio cable adapters. Also have seen a couple post about oculink adapters. These look more similar to the miner adapters but I need some guidance on getting the right set of adapters and cables and how to properly power everything. I've gotten a little scared by the stories of not connecting power correctly and burning up a card/port/motherboard.
I've also seen one "thin and flexible" pcie extension cable online that looks like it would work but it looks iffy.
My thought is to get this one card set up and, if it works well with my projects, upgrade to more 3090s connected over SLI.
I see a lot of reports that the motherboard supports bifurcation, so I'm hoping I can split the 2 PCIe ports into 4 if needed in the future. At that point, if I need more, I'd move to a server board.
I've moved the card to the second tier to help with distance from the pcie port. Just need an actual flexible riser.
My frame is very similar to this: https://youtu.be/JN4EhaM7vyw?si=JnbCzqd_1VHMnVZd
But the person in this video had to make multiple modifications( drilling holes) just to get the cards to sit properly and he's using a server mobo.
Anyone have some suggested solutions? | 2026-01-08T15:07:34 | https://www.reddit.com/gallery/1q7e6iw | Fickle_Debate_9746 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q7e6iw | false | null | t3_1q7e6iw | /r/LocalLLaMA/comments/1q7e6iw/starter_ai_rig_running_into_issues/ | false | false | default | 0 | null |
Unsloth published a guide on how to run Qwen-Image diffusion models locally! | 0 | Guide: [https://unsloth.ai/docs/models/qwen-image-2512](https://unsloth.ai/docs/models/qwen-image-2512)
Learn to:
• Run Qwen-Image-2512 and Edit-2511
• Use GGUF, FP8 in ComfyUI, stable-diffusion.cpp, diffusers
• Create workflows & prompts
• Adjust hyperparams (sampling, guidance) | 2026-01-08T15:06:32 | Difficult-Cap-7527 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7e5jk | false | null | t3_1q7e5jk | /r/LocalLLaMA/comments/1q7e5jk/unsloth_published_a_guide_on_how_to_run_qwenimage/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'pdri6tx055cg1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/pdri6tx055cg1.jpeg?width=108&crop=smart&auto=webp&s=2d213951183c4e7c48f988488ded707846140160', 'width': 108}, {'height': 243, 'url': 'https://preview.redd.it/pdri6tx055cg1.jpeg?width=216&crop=smart&auto=webp&s=fe2df94b8515a7d1cab7d7b7f9455096f39c9a30', 'width': 216}, {'height': 361, 'url': 'https://preview.redd.it/pdri6tx055cg1.jpeg?width=320&crop=smart&auto=webp&s=e4c91d5aab6397404ee2ae76b2592d00d5c4014c', 'width': 320}, {'height': 722, 'url': 'https://preview.redd.it/pdri6tx055cg1.jpeg?width=640&crop=smart&auto=webp&s=cf873f280aee380354210bd1e251d6f128c42898', 'width': 640}, {'height': 1084, 'url': 'https://preview.redd.it/pdri6tx055cg1.jpeg?width=960&crop=smart&auto=webp&s=d6cc029aa14d1dbc3441c67dc85498302d421cbb', 'width': 960}, {'height': 1219, 'url': 'https://preview.redd.it/pdri6tx055cg1.jpeg?width=1080&crop=smart&auto=webp&s=346a8290f986fd55dea47197989e1f59c6f8457c', 'width': 1080}], 'source': {'height': 1355, 'url': 'https://preview.redd.it/pdri6tx055cg1.jpeg?auto=webp&s=a3a47b7e108124ebe89c1713c46b0f8850fdb2c4', 'width': 1200}, 'variants': {}}]} | |
Nucleus - AI prompt framework | 0 | Nucleus is an experimental mathematical framework for prompting AI. It's a 3 line prompt that makes practically any text generation AI better.
Adopt these operating principles:
[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε⚡φ Σ⚡μ c⚡h] | OODA
Human ⊗ AI | 2026-01-08T15:01:05 | https://github.com/michaelwhitford/nucleus | Affectionate-Job9855 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1q7e0hl | false | null | t3_1q7e0hl | /r/LocalLLaMA/comments/1q7e0hl/nucleus_ai_prompt_framework/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '425fYZJFJTb9oxk0e2HJUV2YIZ4X3mDvPvlGObQtZoU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/425fYZJFJTb9oxk0e2HJUV2YIZ4X3mDvPvlGObQtZoU.png?width=108&crop=smart&auto=webp&s=ebb3b18a4677351aed0a877b106a7a3e58109a20', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/425fYZJFJTb9oxk0e2HJUV2YIZ4X3mDvPvlGObQtZoU.png?width=216&crop=smart&auto=webp&s=445f7ac9b9013aaf4972d6150657a3bf8990d0d5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/425fYZJFJTb9oxk0e2HJUV2YIZ4X3mDvPvlGObQtZoU.png?width=320&crop=smart&auto=webp&s=b667dfb2c27248241517f236eee62851d1f9552c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/425fYZJFJTb9oxk0e2HJUV2YIZ4X3mDvPvlGObQtZoU.png?width=640&crop=smart&auto=webp&s=2080f9f891a1b116e00bb999e4a5b24019232be5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/425fYZJFJTb9oxk0e2HJUV2YIZ4X3mDvPvlGObQtZoU.png?width=960&crop=smart&auto=webp&s=5f0b9a90ff023f7decc39f5ee746fce663338f79', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/425fYZJFJTb9oxk0e2HJUV2YIZ4X3mDvPvlGObQtZoU.png?width=1080&crop=smart&auto=webp&s=4adec662495a65e3acfbb7f8147ebf0692c27387', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/425fYZJFJTb9oxk0e2HJUV2YIZ4X3mDvPvlGObQtZoU.png?auto=webp&s=9139c7c615ec643275472b23a591842e322a5aa0', 'width': 1200}, 'variants': {}}]} |
How to use llama.cpp to inference MinerU2.5 VL Doc OCR model | 1 | [removed] | 2026-01-08T14:53:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q7dtcg/how_to_use_llamacpp_to_inference_mineru25_vl_doc/ | Natural-Marsupial903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7dtcg | false | null | t3_1q7dtcg | /r/LocalLLaMA/comments/1q7dtcg/how_to_use_llamacpp_to_inference_mineru25_vl_doc/ | false | false | self | 1 | null |
[Project] Local voice cloning on Mac with Metal/MPS - demo inside | 0 | Got RVC (Retrieval-based Voice Conversion) running locally on my M1 Mac using Metal/MPS acceleration. Thought I'd share since this sub loves local inference!
**Demo:** Gettysburg Address converted to Trump's voice
[https://files.catbox.moe/3luof4.wav](https://files.catbox.moe/3luof4.wav)
**Setup:**
- RVC v2 for voice conversion
- PyTorch with MPS backend (Metal Performance Shaders)
- Zero-shot conversion - just need ~25s of reference audio
- Runs entirely offline, no API calls
**Performance:**
- ~10-15 seconds to convert a 30s clip on M1 Pro
- Uses about 4GB unified memory
- Quality is surprisingly good for local inference
The nice thing about voice models is they're much smaller than LLMs - the whole RVC setup is under 1GB. Makes it very practical to run locally.
Anyone else running audio/voice models locally on Mac? Curious what others have working with MPS. | 2026-01-08T14:52:54 | https://www.reddit.com/r/LocalLLaMA/comments/1q7dsqm/project_local_voice_cloning_on_mac_with_metalmps/ | Past_Pitch3089 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7dsqm | false | null | t3_1q7dsqm | /r/LocalLLaMA/comments/1q7dsqm/project_local_voice_cloning_on_mac_with_metalmps/ | false | false | self | 0 | null |
Qwen3-VL-Reranker - a Qwen Collection | 109 | 2026-01-08T14:45:00 | https://huggingface.co/collections/Qwen/qwen3-vl-reranker | LinkSea8324 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1q7dlkn | false | null | t3_1q7dlkn | /r/LocalLLaMA/comments/1q7dlkn/qwen3vlreranker_a_qwen_collection/ | false | false | default | 109 | {'enabled': False, 'images': [{'id': 'p_EUBuVnZfgcYfu2zAo996Hix2TFsBWGTVl7mQyY9Tk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/p_EUBuVnZfgcYfu2zAo996Hix2TFsBWGTVl7mQyY9Tk.png?width=108&crop=smart&auto=webp&s=3d7638e89ad1fd8e9b59cc55f3125b96dda11c49', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/p_EUBuVnZfgcYfu2zAo996Hix2TFsBWGTVl7mQyY9Tk.png?width=216&crop=smart&auto=webp&s=a529ec2c17e65b63d1bb03f51a5b272326263ecb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/p_EUBuVnZfgcYfu2zAo996Hix2TFsBWGTVl7mQyY9Tk.png?width=320&crop=smart&auto=webp&s=5585ca91e4c353f487fb61e610dbfc87e1c37fb8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/p_EUBuVnZfgcYfu2zAo996Hix2TFsBWGTVl7mQyY9Tk.png?width=640&crop=smart&auto=webp&s=1b4e2c1310fa6d24d7fb43df7672f7329a04cfbc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/p_EUBuVnZfgcYfu2zAo996Hix2TFsBWGTVl7mQyY9Tk.png?width=960&crop=smart&auto=webp&s=9104b08ad54c075ef11fb98121c9f63b46647460', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/p_EUBuVnZfgcYfu2zAo996Hix2TFsBWGTVl7mQyY9Tk.png?width=1080&crop=smart&auto=webp&s=21ba0725873f218e7b26f0f69ebda39c73c2274b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/p_EUBuVnZfgcYfu2zAo996Hix2TFsBWGTVl7mQyY9Tk.png?auto=webp&s=ab31b335f08c700fd1c836f82a5970c265718d31', 'width': 1200}, 'variants': {}}]} | |
Jensen Huang saying "AI" 121 times during the NVIDIA CES keynote - cut with one prompt | 847 | Someone had to count it. Turns out Jensen said "AI" exactly 121 times in the CES 2025 keynote.
I used [https://github.com/OpenAgentPlatform/Dive](https://github.com/OpenAgentPlatform/Dive) (open-source MCP client) + two MCPs I made:
- [https://github.com/kevinwatt/yt-dlp-mcp](https://github.com/kevinwatt/yt-dlp-mcp) - YouTube download
- [https://github.com/kevinwatt/ffmpeg-mcp-lite](https://github.com/kevinwatt/ffmpeg-mcp-lite) - video editing
**One prompt:**
Task: Create a compilation video of every exact moment Jensen Huang says "AI".
Video source: [https://www.youtube.com/watch?v=0NBILspM4c4](https://www.youtube.com/watch?v=0NBILspM4c4)
**Instructions:**
1. Download video in 720p + subtitles in JSON3 format (word-level timestamps)
2. Parse JSON3 to find every "AI" instance with precise start/end times
3. Use ffmpeg to cut clips (\~50-100ms padding for natural sound)
4. Concatenate all clips chronologically
5. Output: Jensen_CES_AI.mp4
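The parse-and-cut steps in that prompt can be sketched in a few lines. The JSON3 field names below (`events`, `tStartMs`, `segs`, `tOffsetMs`, `utf8`) follow YouTube's word-timestamp subtitle format; the padding value, output file names, and encoder flags are illustrative assumptions:

```python
import json

def find_word_clips(json3_text, word="AI", pad_ms=75):
    """Return (start_ms, end_ms) spans for each occurrence of `word`
    in a YouTube JSON3 subtitle file with word-level timestamps."""
    events = json.loads(json3_text).get("events", [])
    words = []  # flatten to (absolute_start_ms, token) pairs
    for ev in events:
        base = ev.get("tStartMs", 0)
        for seg in ev.get("segs") or []:
            words.append((base + seg.get("tOffsetMs", 0), seg.get("utf8", "").strip()))
    clips = []
    for i, (start, token) in enumerate(words):
        if token.upper().strip(".,!?") == word:
            # JSON3 gives per-word starts but no ends: use the next word's start.
            end = words[i + 1][0] if i + 1 < len(words) else start + 500
            clips.append((max(0, start - pad_ms), end + pad_ms))
    return clips

def ffmpeg_cut_cmds(clips, src="keynote.mp4"):
    """One re-encoding cut command per clip (stream-copy cuts snap to
    keyframes, which is too coarse for ~100 ms words)."""
    return [
        f"ffmpeg -ss {s / 1000:.3f} -to {e / 1000:.3f} -i {src} "
        f"-c:v libx264 -c:a aac clip_{n:03d}.mp4"
        for n, (s, e) in enumerate(clips)
    ]
```

The final step is a standard ffmpeg concat over the numbered clips, which is what the ffmpeg MCP handles.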
Dive chained the two MCPs together - download → parse timestamps → cut 121 clips → merge. All local, no cloud.
If you want to see how it runs: [https://www.youtube.com/watch?v=u\_7OtyYAX74](https://www.youtube.com/watch?v=u_7OtyYAX74)
The result is... hypnotic. | 2026-01-08T14:29:47 | https://v.redd.it/hein55gpx4cg1 | Prior-Arm-6705 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7d8bj | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/hein55gpx4cg1/DASHPlaylist.mpd?a=1770474602%2CZmE2YWE3ZGQzZDQ1NjM1ODA0Njc5MmVjYTEwNTNlNjAwNDU4MjVlYzNmYjRiYWY1YzhiOGE2OWE0ZDE0YWIwMQ%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/hein55gpx4cg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/hein55gpx4cg1/HLSPlaylist.m3u8?a=1770474602%2CMDZlNWZkZTM1MDMwNDBkNDc2NWNlOTVmN2EwMmVjNjEzZGFjODFjNWQyYzAwMjg1MWNkMjNmZGZmNTk5YTdhNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hein55gpx4cg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1q7d8bj | /r/LocalLLaMA/comments/1q7d8bj/jensen_huang_saying_ai_121_times_during_the/ | false | false | 847 | {'enabled': False, 'images': [{'id': 'M2cyNzBqaHB4NGNnMeuNas4_kS8fQc08s_eqp1ss4JB4szq45v23OyPEbFog', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M2cyNzBqaHB4NGNnMeuNas4_kS8fQc08s_eqp1ss4JB4szq45v23OyPEbFog.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b3d068f25992f873dd8e37fed8002d3e71ef84d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M2cyNzBqaHB4NGNnMeuNas4_kS8fQc08s_eqp1ss4JB4szq45v23OyPEbFog.png?width=216&crop=smart&format=pjpg&auto=webp&s=cc2d2a07ad76e63ad1613c6ccf75b55d57954689', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M2cyNzBqaHB4NGNnMeuNas4_kS8fQc08s_eqp1ss4JB4szq45v23OyPEbFog.png?width=320&crop=smart&format=pjpg&auto=webp&s=1031962727f3ca7c25eabcfd9ecace2b57d9f6b0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M2cyNzBqaHB4NGNnMeuNas4_kS8fQc08s_eqp1ss4JB4szq45v23OyPEbFog.png?width=640&crop=smart&format=pjpg&auto=webp&s=c9aafd4a4648c7c18acde3058747a38411d87510', 'width': 640}, {'height': 540, 'url': 
'https://external-preview.redd.it/M2cyNzBqaHB4NGNnMeuNas4_kS8fQc08s_eqp1ss4JB4szq45v23OyPEbFog.png?width=960&crop=smart&format=pjpg&auto=webp&s=1924cda8b31eddae84480421e97cc5fd14e816fd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M2cyNzBqaHB4NGNnMeuNas4_kS8fQc08s_eqp1ss4JB4szq45v23OyPEbFog.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8c0086b149f6c3bed5f30e3ed32171eb2484e127', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/M2cyNzBqaHB4NGNnMeuNas4_kS8fQc08s_eqp1ss4JB4szq45v23OyPEbFog.png?format=pjpg&auto=webp&s=2b35952dad66ea0017a4010561b4f68937abce8a', 'width': 1280}, 'variants': {}}]} | |
4 x V100s --- worth it? | 1 | Heyo friends. Lurker here. Recently I was able to purchase from my employer an older Threadripper workstation for cheeeeap. They were going through old equipment not in use, etc. This workstation though has 4 nvidia Tesla V100 GPUs with 32GB of ram each... it also has 256GB of system RAM which would be good for offloading if I max the VRAM? That got me thinking about running local models, I have done it in the past but was always disappointed because of the lack of ram/performance with my PC. So with this setup and 128GB of VRAM that "kind of" solves my RAM issue. I guess my question is, even though there is plenty of RAM, are these old 2017 GPUs worth futzing with? Also, yes, I am an Ollama user... guys I am not major techie and Ollama is simple, I'm OK with the hate, I know it's a wrapper lol.
I guess I can answer my own question by just trying it... but this system has no disks and no power supply and no CPU cooler, so I would have to spend some money on it to get it going. Just wondering if it's worth my time? | 2026-01-08T14:05:42 | https://www.reddit.com/r/LocalLLaMA/comments/1q7cnif/4_x_v100s_worth_it/ | Booster256 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7cnif | false | null | t3_1q7cnif | /r/LocalLLaMA/comments/1q7cnif/4_x_v100s_worth_it/ | false | false | self | 1 | null |
One cargo install gives your AI 142 tools to perceive and control your machine - rmcp-presence | 0 | I've been building MCP servers for Claude Code for the past few weeks. Started with individual tools - system info, screenshots, clipboard, media control, etc. Ended up with 19 separate crates.
Then I realized: why make people configure 17 servers when they could have one?
So I built rmcp-presence - a single binary that consolidates everything:
\- Sensors (28 tools): System stats, displays, idle time, USB devices, battery, git status, weather
\- Actuators (31 tools): Clipboard, volume, screenshots, file operations, reminders
\- Linux extras (83 tools): Window management (i3), mouse/keyboard control, media playback, systemd, Bluetooth, per-app audio, power management
cargo install rmcp-presence --features full
One config line in Claude Code and your AI can check if you're AFK, see what's playing on Spotify, suspend your machine when you say goodnight, or screenshot a specific window.
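For context, Claude Code reads MCP servers from a `.mcp.json` in the project root; a sketch of what that single entry might look like (the `presence` server name here is arbitrary, and the README is the authority on exact flags):

```json
{
  "mcpServers": {
    "presence": {
      "command": "rmcp-presence",
      "args": []
    }
  }
}
```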
Feature flags let you scale to your comfort - just sensors, sensors + actuators, or full Linux god mode.
GitHub: [https://github.com/sqrew/rmcp-presence](https://github.com/sqrew/rmcp-presence)
crates.io: https://crates.io/crates/rmcp-presence
Happy to answer questions about the architecture or MCP in general.
| 2026-01-08T14:02:11 | https://www.reddit.com/r/LocalLLaMA/comments/1q7ckcx/one_cargo_install_gives_your_ai_142_tools_to/ | Technical-Might9868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7ckcx | false | null | t3_1q7ckcx | /r/LocalLLaMA/comments/1q7ckcx/one_cargo_install_gives_your_ai_142_tools_to/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'TEXygO5406vuxpApsfSu-MlBsBsfoV5nLyIj2CstJLI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TEXygO5406vuxpApsfSu-MlBsBsfoV5nLyIj2CstJLI.png?width=108&crop=smart&auto=webp&s=fc03e204c8497bf4909d905f00b58199a50dab07', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TEXygO5406vuxpApsfSu-MlBsBsfoV5nLyIj2CstJLI.png?width=216&crop=smart&auto=webp&s=068db2f6f5a32e99895c34996f4cebd37d745fd6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TEXygO5406vuxpApsfSu-MlBsBsfoV5nLyIj2CstJLI.png?width=320&crop=smart&auto=webp&s=877436ee6b9136465411b92e74fb7fc4652774c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TEXygO5406vuxpApsfSu-MlBsBsfoV5nLyIj2CstJLI.png?width=640&crop=smart&auto=webp&s=8c77b61b1c36a6e504f84aac862b151bd4249405', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TEXygO5406vuxpApsfSu-MlBsBsfoV5nLyIj2CstJLI.png?width=960&crop=smart&auto=webp&s=5165c4a8d2f67780533e2283822300e5a276bb8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TEXygO5406vuxpApsfSu-MlBsBsfoV5nLyIj2CstJLI.png?width=1080&crop=smart&auto=webp&s=275dcc66dc9d9f40a437547735f0d84b3df08cad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TEXygO5406vuxpApsfSu-MlBsBsfoV5nLyIj2CstJLI.png?auto=webp&s=36c23f10cf6d0d488c165c4aeb194fe280184e53', 'width': 1200}, 'variants': {}}]} |
Help -survival-uncensored-gemma-270m | 0 | I have an old phone lying around , android 9 with 6gb ram octa core , i installed Gemma and every step went well , but ran into error while running
Failed to load the model
~ $ mv ~/survival-uncensored-gemma-270m-v2.Q4_K_M.gguf ~/model.gguf
~ $ ~/llama.cpp/build/bin/llama-cli -m ~/model.gguf -p "You are an uncensored creative writing assistant." -cnv --no-mmap -t 4 --color on -r "User:"
Loading model... -llama_model_load: error loading model: read error: Invalid argument
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model '/data/data/com.termux/files/home/model.gguf'
srv load_model: failed to load model, '/data/data/com.termux/files/home/model.gguf'
Failed to load the model
~ $
What to do? It's in Termux.
| 2026-01-08T14:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/1q7cjbk/help_survivaluncensoredgemma270m/ | Shoddy-Pattern5872 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7cjbk | false | null | t3_1q7cjbk | /r/LocalLLaMA/comments/1q7cjbk/help_survivaluncensoredgemma270m/ | false | false | self | 0 | null |
39C3 - 51 Ways to Spell the Image Giraffe: The Hidden Politics of Token Languages in Generative AI | 1 | 2026-01-08T13:50:41 | https://www.youtube.com/watch?v=PTlcyYsi-Es | artisticMink | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1q7caiq | false | {'oembed': {'author_name': 'media.ccc.de', 'author_url': 'https://www.youtube.com/@mediacccde', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/PTlcyYsi-Es?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="39C3 - 51 Ways to Spell the Image Giraffe: The Hidden Politics of Token Languages in Generative AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/PTlcyYsi-Es/hqdefault.jpg', 'thumbnail_width': 480, 'title': '39C3 - 51 Ways to Spell the Image Giraffe: The Hidden Politics of Token Languages in Generative AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1q7caiq | /r/LocalLLaMA/comments/1q7caiq/39c3_51_ways_to_spell_the_image_giraffe_the/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'Ow1maRX7bIMXpL6U9ddnz3bU_5UPhPJCKZUkL22z4Qo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ow1maRX7bIMXpL6U9ddnz3bU_5UPhPJCKZUkL22z4Qo.jpeg?width=108&crop=smart&auto=webp&s=616a61fed53bf108dea6d5d9c72b7483d2f13dc9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Ow1maRX7bIMXpL6U9ddnz3bU_5UPhPJCKZUkL22z4Qo.jpeg?width=216&crop=smart&auto=webp&s=4fe42d7d5fc61d8d96611ca687e1826c8643acce', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Ow1maRX7bIMXpL6U9ddnz3bU_5UPhPJCKZUkL22z4Qo.jpeg?width=320&crop=smart&auto=webp&s=68012cf052f486613b7fb210227819b9f1a1dd9a', 'width': 320}], 'source': {'height': 
360, 'url': 'https://external-preview.redd.it/Ow1maRX7bIMXpL6U9ddnz3bU_5UPhPJCKZUkL22z4Qo.jpeg?auto=webp&s=546f62acaa47631df4759c9ff612ed6a3ffc27cf', 'width': 480}, 'variants': {}}]} | |
I’m a 19yo student. I built a custom Mamba-2 + Titans + JEPA LLM from scratch to run on my RTX 3090. (Open Source) | 0 | I’m a 19-year-old software engineering student from the Netherlands. I’ve been working on a custom experimental architecture that fuses the most efficient post-Transformer ideas into a single system.
I call it Hyper-Mnemosyne.
I have written the code, verified the gradients, and optimized the kernels to run on my local RTX 3090, but I lack the compute resources to do a full-scale training run on a large dataset (FineWeb/RedPajama).
The Architecture:
• Backbone: Mamba-2 (SSM) for linear scaling (no quadratic attention bottleneck).
• Memory: Implemented Titans (Neural Memory). This allows the model to update a separate memory module at test time, theoretically giving it "infinite" context without the VRAM explosion of a KV cache.
• Optimization: I implemented the Muon optimizer (Newton-Schulz iterations) to orthogonalize updates for 2D parameters, which is critical for training these non-standard architectures stably.
• Kernels: I wrote custom Fused Triton Kernels for the Manifold-Constrained Hyper-Connections (mHC). Standard PyTorch was too slow, so I fused the Sinkhorn-Knopp iterations directly on the GPU.
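For readers who haven't met Muon: its core operation is a Newton-Schulz-style iteration that pushes a 2D gradient matrix toward its nearest orthogonal factor using only matrix multiplies. Muon itself uses a tuned quintic polynomial; the classical iteration below (pure Python, no dependencies) is a minimal sketch of the underlying idea, not the optimizer's actual coefficients:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def newton_schulz(X, steps=15):
    """X_{k+1} = 1.5*X_k - 0.5*X_k*X_k^T*X_k converges to the orthogonal
    polar factor of X when all singular values lie in (0, sqrt(3))."""
    # Scale by the Frobenius norm so singular values start at or below 1.
    fro = sum(v * v for row in X for v in row) ** 0.5
    X = [[v / fro for v in row] for row in X]
    for _ in range(steps):
        XXtX = matmul(matmul(X, transpose(X)), X)
        X = [[1.5 * x - 0.5 * y for x, y in zip(rx, ry)]
             for rx, ry in zip(X, XXtX)]
    return X
```

Each singular value follows sigma -> 1.5*sigma - 0.5*sigma^3, whose fixed point is 1, so the result has all singular values equal to 1 (i.e. it is orthogonal) while the singular vectors are untouched.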
Why I’m posting this:
I have pushed the limits of what I can do on my single gaming GPU. The architecture should work, the inference code runs, and the tests pass.
I am looking for someone with spare compute (H100/A100 cluster or a serious multi-GPU rig) who wants to collaborate on a real training run.
If you have the hardware and are bored of just fine-tuning Llama 3, I’d love to see what this architecture can actually do when fed 1T+ tokens.
Code & Blueprint:
The repo includes the full source, the Triton kernels, and a Blueprint.md explaining the theory.
[https://github.com/Svel26/Hyper-Mnemosyne.git](https://github.com/Svel26/Hyper-Mnemosyne.git)
Let me know what you think of the implementation. | 2026-01-08T13:48:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q7c8zf/im_a_19yo_student_i_built_a_custom_mamba2_titans/ | Altruistic-Fall3797 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7c8zf | false | null | t3_1q7c8zf | /r/LocalLLaMA/comments/1q7c8zf/im_a_19yo_student_i_built_a_custom_mamba2_titans/ | false | false | self | 0 | null |
AI21 releases Jamba2 3B and Jamba2 Mini, built for grounding and instruction following | 46 | *Disclaimer: I work for AI21, creator of the Jamba model family.*
We’re excited to announce the public release of Jamba2 3B and Jamba2 Mini.
The Jamba2 family aims to give enterprises cost-effective models that will integrate well into production agent stacks.
These models are designed for reliable instruction following and grounded outputs, working well over long documents and avoiding drifting once context becomes large.
They perform best for precise question answering over internal policies, technical manuals and knowledge bases, without the overhead of thinking tokens which can become costly.
**Key performance data**
Jamba2 3B and Jamba2 Mini outperform peers due to their hybrid SSM-Transformer architecture and KV cache innovations:
* Outpaces Ministral3 14B and Qwen3 30B A3B across FACTS, IFBench and IFEval.
* Beats Ministral3 3B and Qwen3 4B on IFEval and IFBench, tying with Qwen3 4B as category leader on FACTS.
* At context lengths of 100K, Jamba2 Mini delivers 2.7X greater throughput than Ministral3 14B and 1.4X greater throughput than Qwen3 30B A3B.
* At context lengths of 100K, Jamba2 3B delivers 1.7X greater throughput than Ministral3 3B and 2.7X greater throughput than Qwen3 14B.
It’s available today in AI21’s SaaS and from Hugging Face.
Happy to answer questions or dig into benchmarks if people want more detail.
Blog: [http://www.ai21.com/blog/introducing-jamba2](http://www.ai21.com/blog/introducing-jamba2)
Hugging Face: [https://huggingface.co/collections/ai21labs/jamba2](https://huggingface.co/collections/ai21labs/jamba2) | 2026-01-08T13:38:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q7c0pd/ai21_releases_jamba2_3b_and_jamba2_mini_built_for/ | zennaxxarion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7c0pd | false | null | t3_1q7c0pd | /r/LocalLLaMA/comments/1q7c0pd/ai21_releases_jamba2_3b_and_jamba2_mini_built_for/ | false | false | self | 46 | {'images': [{'source': {'url': 'https://external-preview.redd.it/xxaSFhm6_8iQoF7zXqpio9ocD-aPxQhEFqToF_qZp1w.jpeg?auto=webp&s=93f5f793f92e51aae1b6eaba0908e4360716c1cf', 'width': 1024, 'height': 576}, 'resolutions': [{'url': 'https://external-preview.redd.it/xxaSFhm6_8iQoF7zXqpio9ocD-aPxQhEFqToF_qZp1w.jpeg?width=108&crop=smart&auto=webp&s=55a31fa373893605dc1f70c28ca4daa151ccbff5', 'width': 108, 'height': 60}, {'url': 'https://external-preview.redd.it/xxaSFhm6_8iQoF7zXqpio9ocD-aPxQhEFqToF_qZp1w.jpeg?width=216&crop=smart&auto=webp&s=226abfea8dfd56cf45286fa0dd41c653137ebb3b', 'width': 216, 'height': 121}, {'url': 'https://external-preview.redd.it/xxaSFhm6_8iQoF7zXqpio9ocD-aPxQhEFqToF_qZp1w.jpeg?width=320&crop=smart&auto=webp&s=5b25dc8ae0556f01f3bb4a22d2720e56ebef1966', 'width': 320, 'height': 180}, {'url': 'https://external-preview.redd.it/xxaSFhm6_8iQoF7zXqpio9ocD-aPxQhEFqToF_qZp1w.jpeg?width=640&crop=smart&auto=webp&s=4da11e33a1acd36a50af8f4f308d77ffb41b87be', 'width': 640, 'height': 360}, {'url': 'https://external-preview.redd.it/xxaSFhm6_8iQoF7zXqpio9ocD-aPxQhEFqToF_qZp1w.jpeg?width=960&crop=smart&auto=webp&s=005c61d0f405ce171f6f093a752097d9c426dff8', 'width': 960, 'height': 540}], 'variants': {}, 'id': 'xxaSFhm6_8iQoF7zXqpio9ocD-aPxQhEFqToF_qZp1w'}], 'enabled': False} |
Is there a Javascript library for running GGUF files in the browser? | 2 | Hi. I know about WebLLM and Transformers.js, but they don't seem to support arbitrary gguf files. Right? Is there any other library I can use to run a GGUF file fully inside the browser?
Thanks | 2026-01-08T13:20:31 | https://www.reddit.com/r/LocalLLaMA/comments/1q7blyr/is_there_a_javascript_library_for_running_gguf/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7blyr | false | null | t3_1q7blyr | /r/LocalLLaMA/comments/1q7blyr/is_there_a_javascript_library_for_running_gguf/ | false | false | self | 2 | null |
Local friendly open source background writing assistant with full prompt control | 2 | 2026-01-08T13:18:16 | https://www.reddit.com/r/LocalLLaMA/comments/1q7bk2m/local_friendly_open_source_background_writing/ | cgs019283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7bk2m | false | null | t3_1q7bk2m | /r/LocalLLaMA/comments/1q7bk2m/local_friendly_open_source_background_writing/ | false | false | 2 | null | ||
I built instant persistent memory for local LLMs (binary KV cache save/restore, sub-second restore, 67% VRAM savings) | 6 | I'm not a professional developer; I used AI and a lot of free time over 18 months specifically building this. I am a Technical Support professional with zero programming experience, and I learned C++, CUDA, Qt6, and llama.cpp integration entirely through AI-assisted learning and trial-and-error.
This project is part of VyreVault Studios, my personal development ecosystem focused on local-first, ownership-based software. The "Dreams for the Dreamless" philosophy: democratizing access to creative technology by building tools that run on your hardware, with your data, under your control. Everything I build sits under one service, for the user, to help with creativity. Not to do the work for you, but to give you a sounding board to brainstorm your ideas. I spend a lot of my time actually arguing about the stories I write with the LLM because it suggests the weirdest off-the-wall shit.
Every tool and method I tried either forgets everything (Ollama, LM Studio, ChatGPT, Claude, Grok, Gemini; yes, I've tried everything) or takes 30+ seconds to replay your conversation history token-by-token.
So I built binary KV cache persistence with instant restore. And yes, I am writing this post, and yes, I rewrote it hundreds of times. I had to learn what all this stuff was and I still have no clue, but I think I built something interesting, so here it goes:
What It Does:
Saves the model's actual memory state (KV cache) to disk after each response
Restores it instantly on app restart (sub-second for hundreds of tokens)
Model remembers the conversation perfectly - no replay, no summarization
Background async save (no UI freeze)
Q8_0 quantized KV cache (67% VRAM reduction vs FP16)
The Results:
Tested with Mistral 7B on dual NVIDIA GPUs (RTX 5070 Ti + RTX 3080):
[PHASE 1] Seeding: "The secret code is BLUE-OMEGA-99"
Saved binary state: 11.3 MB (160 tokens)
[PHASE 2] Simulated restart
Loaded binary state: 11.3 MB
Restore time: <1 second
[PHASE 3] Testing recall
Question: "What is the secret code?"
Response: "The secret code is BLUE-OMEGA-99"
SUCCESS: Binary Persistence Verified
How It Works: (For me anyways)
Uses llama.cpp's llama_state_get_data and llama_state_set_data APIs to serialize the entire KV cache to disk. On restart, it loads the binary state directly back into GPU memory and synchronizes the sequence positions.
Key implementation details:
Async save thread (no UI blocking)
Q8_0 quantization for the KV cache (saves VRAM), with the option of Q4_0 depending on size and personal preference
Proper n_past synchronization to prevent "inconsistent sequence positions" crashes
Session management with isolated KV caches per conversation
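The state blob itself comes from `llama_state_get_data`; the part the persistence layer has to get right is keeping `n_past` next to the blob so sequence positions can be resynchronized on load. A minimal sketch of one possible file layout (the `KVS1` header is an invented example for illustration, not the app's actual format):

```python
import struct
from pathlib import Path

MAGIC = b"KVS1"  # hypothetical 4-byte format tag

def pack_session(state_bytes, n_past):
    """[magic][n_past:u32][blob_len:u64][blob] -- n_past travels with the
    blob so sequence positions can be resynchronized on restore."""
    return MAGIC + struct.pack("<IQ", n_past, len(state_bytes)) + state_bytes

def unpack_session(data):
    assert data[:4] == MAGIC, "not a session file"
    n_past, blob_len = struct.unpack("<IQ", data[4:16])
    blob = data[16:16 + blob_len]
    assert len(blob) == blob_len, "truncated session file"
    return blob, n_past

def save_session(path, state_bytes, n_past):
    Path(path).write_bytes(pack_session(state_bytes, n_past))

def load_session(path):
    return unpack_session(Path(path).read_bytes())
```

One file per conversation gives the isolated per-session KV caches described above.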
You can now:
Work on multi-day projects (novels, code refactoring, research) with full persistent memory
Close the app anytime without losing context
Resume instantly the next day
No waiting for 30-second token replay
Context loads faster than ChatGPT's API responds (Although, Guilty I still use Chatgpt for when i get stuck)
Stack:
C++17 + Qt6 (native desktop UI)
llama.cpp (inference engine)
CUDA 12.6 (dual-GPU support)
Automated verification tests
Currently working prototype on Windows + NVIDIA. Tested with Mistral 7B and Qwen 30B models. File sizes scale with context (roughly 70KB per 1K tokens for 7B models with Q8\_0 KV cache).
Plan for continued build:
Add manual save/load controls in UI
Multi-model testing (larger models, different architectures)
Optimize file format (compression, delta encoding)
Cross-platform support (Linux, Mac)
This is not a soapbox moment or BS. I built this for one reason: I write stories, and I cannot stand when degradation sets in and I have to recap everything, start a new chat, and explain the details all over again. The memory is real, verified, and instant.
I can answer any questions, as this is my first time posting my actual progress in any forum, or my actual build for anyone other than myself. I am not self-promoting anything; I'm working out the UI kinks in the app right now, and plan on uploading it to GitHub when I get the best MVP version I can that can be used by people if they are interested. | 2026-01-08T13:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1q7bh5h/i_built_instant_persistent_memory_for_local_llms/ | thejosephBlanco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7bh5h | false | null | t3_1q7bh5h | /r/LocalLLaMA/comments/1q7bh5h/i_built_instant_persistent_memory_for_local_llms/ | false | false | self | 6 | null |
Need help with packaging my app which uses 2 local llms | 0 | Hey folks, I am building an application (which would run on servers/laptops).
The app is a Python-based utility that makes calls to local LLM models (installed via Ollama).
The app is in dev right now; its function is to convert code from a source language X to a target language Y.
App uses gpt-oss:20b to translate and deepseek-r1:7b to validate.
So it might eat up to 16 GB of RAM... but fine.
Once I achieve the accuracy I want and have stress-tested the app, I will package it for shipping, probably in a Docker image that includes commands to pull and run the Ollama LLM models.
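For the pull-on-start part, one option is a small entrypoint script that asks the Ollama server to pull both models before the app accepts work. A sketch, assuming Ollama's `/api/pull` HTTP endpoint on its default port (check the request field names against the Ollama version you ship):

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"                 # Ollama's default port
REQUIRED = ["gpt-oss:20b", "deepseek-r1:7b"]      # translator + validator from the post

def pull_request(model: str, host: str = OLLAMA) -> urllib.request.Request:
    """Build the POST for Ollama's /api/pull endpoint.

    Newer Ollama versions accept a "model" field (older ones used "name");
    verify against the version you ship.
    """
    body = json.dumps({"model": model}).encode()
    return urllib.request.Request(
        f"{host}/api/pull", data=body, headers={"Content-Type": "application/json"}
    )

def ensure_models() -> None:
    # /api/pull streams one JSON status object per line until "success".
    for model in REQUIRED:
        with urllib.request.urlopen(pull_request(model)) as resp:
            for line in resp:
                if json.loads(line).get("status") == "success":
                    print(f"{model}: ready")

# In the container entrypoint: run ensure_models(), then start the app.
```

This keeps the image itself small (no baked-in weights) at the cost of a first-boot download, which is usually the right trade for a ~16 GB model set.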
But I want input from you guys since this is the first app I am shipping and we will be selling it... | 2026-01-08T13:13:42 | https://www.reddit.com/r/LocalLLaMA/comments/1q7bgea/need_help_with_packaging_my_app_which_uses_2/ | 7_Taha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7bgea | false | null | t3_1q7bgea | /r/LocalLLaMA/comments/1q7bgea/need_help_with_packaging_my_app_which_uses_2/ | false | false | self | 0 | null |
LLM-Shield: Privacy proxy - masks PII or routes to local LLM | 4 | Using cloud LLMs but worried about sending client data? Built a proxy for that.
OpenAI-compatible proxy with two privacy modes:
**Mask Mode** (no GPU needed):
You send: "Email john@acme.com about meeting with Sarah Miller"
OpenAI receives: "Email <EMAIL_1> about meeting with <PERSON_1>"
You get back: Original names restored in response
**Route Mode** (for local LLM setups):
"Help with this code review" → OpenAI
"Email john@acme.com about..." → Ollama (PII stays local)
Detects names, emails, phones, credit cards, IBANs, IPs, and locations across 24 languages with automatic language detection. Uses Microsoft Presidio under the hood.
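Not LLM-Shield's actual code (that part is Presidio-based), but the mask → forward → restore round trip can be sketched with a toy regex detector for a single entity type:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def mask(text: str):
    """Swap each distinct email for a numbered placeholder; return text + mapping."""
    mapping: dict[str, str] = {}
    def repl(m: re.Match) -> str:
        value = m.group(0)
        if value not in mapping:
            mapping[value] = f"<EMAIL_{len(mapping) + 1}>"
        return mapping[value]
    return EMAIL.sub(repl, text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the model's reply."""
    for value, placeholder in mapping.items():
        text = text.replace(placeholder, value)
    return text

masked, mapping = mask("Email john@acme.com about the meeting")
print(masked)  # Email <EMAIL_1> about the meeting
print(restore("Draft sent to <EMAIL_1>.", mapping))  # Draft sent to john@acme.com.
```

The mapping lives only in the proxy, so the upstream API never sees the original values.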
git clone https://github.com/sgasser/llm-shield
cd llm-shield && cp config.example.yaml config.yaml
docker compose up -d
Point your app to `http://localhost:3000/openai/v1` and you're set. Works with anything that uses the OpenAI API — Open WebUI, Cursor, your own scripts. Dashboard included for monitoring.
GitHub: [https://github.com/sgasser/llm-shield](https://github.com/sgasser/llm-shield) — just open-sourced
**Next up:** Chrome extension for ChatGPT.com and PDF/attachment masking.
Would love feedback on detection accuracy and what entity types would be useful for your setup. | 2026-01-08T13:11:21 | sgasser88 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7bei7 | false | null | t3_1q7bei7 | /r/LocalLLaMA/comments/1q7bei7/llmshield_privacy_proxy_masks_pii_or_routes_to/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'n48hxqs274cg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/n48hxqs274cg1.png?width=108&crop=smart&auto=webp&s=1ddf403f0973193b0d4eb32503666fa5461a81cf', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/n48hxqs274cg1.png?width=216&crop=smart&auto=webp&s=39aafd91488c96f6b92d1afa2c2c021a6c8e214d', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/n48hxqs274cg1.png?width=320&crop=smart&auto=webp&s=a9d3a2bbb2f244587ea7f273db11d3397df15974', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/n48hxqs274cg1.png?width=640&crop=smart&auto=webp&s=60fe1151f79f62321c6fe1f6d5c625cd5f2c768f', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/n48hxqs274cg1.png?width=960&crop=smart&auto=webp&s=8d45a491342f65ea2b26f997a3c53310b30bd82e', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/n48hxqs274cg1.png?width=1080&crop=smart&auto=webp&s=485b83c15e86d9579229efac076e954a0dcceeec', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/n48hxqs274cg1.png?auto=webp&s=87d789b397bb5c7be0189c8771ce5d55d7b3cf8f', 'width': 1920}, 'variants': {}}]} | |
It's so hard to run llm on android. | 0 | I don't think this is very good.
Lately, I’ve been fine-tuning Gemma 3 1B using multi-turn chat data, then converting it to TFLite/Task to test in my app. I was aiming for something like those character chat sites, but the accuracy in the app has been terrible no matter what I do.
The weird part is, when I converted the same fine-tuned model to GGUF and tested it on my PC, it performed perfectly. It seems like the conversion through 'ai-edge-torch' is where everything falls apart, making the model practically useless.
I’m going to try a few GitHub projects that run GGUF on Android. If that doesn't work, I’m seriously considering putting my on-device LLM projects on hold for a while. | 2026-01-08T13:00:41 | shoonee_balavolka | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q7b68v | false | null | t3_1q7b68v | /r/LocalLLaMA/comments/1q7b68v/its_so_hard_to_run_llm_on_android/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'wkxdt8uoi4cg1', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/wkxdt8uoi4cg1.jpeg?width=108&crop=smart&auto=webp&s=84c4debb4dd3ba795abd3568c29395a216e0c22d', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/wkxdt8uoi4cg1.jpeg?width=216&crop=smart&auto=webp&s=6f9170660ed37120452f54ced963a94ff195bc0e', 'width': 216}, {'height': 297, 'url': 'https://preview.redd.it/wkxdt8uoi4cg1.jpeg?width=320&crop=smart&auto=webp&s=305a947d610e8b57af68accefe1608d9b9c8feeb', 'width': 320}, {'height': 594, 'url': 'https://preview.redd.it/wkxdt8uoi4cg1.jpeg?width=640&crop=smart&auto=webp&s=db1e9df108a68073e5009d8ff40cca532ff1ec2c', 'width': 640}, {'height': 892, 'url': 'https://preview.redd.it/wkxdt8uoi4cg1.jpeg?width=960&crop=smart&auto=webp&s=12ecbfc813d42a5b08e923e1779d08e07df46fdf', 'width': 960}, {'height': 1004, 'url': 'https://preview.redd.it/wkxdt8uoi4cg1.jpeg?width=1080&crop=smart&auto=webp&s=89e160044b7ed7654aa5f43d75e3490babc8fc9e', 'width': 1080}], 'source': {'height': 1004, 'url': 'https://preview.redd.it/wkxdt8uoi4cg1.jpeg?auto=webp&s=f617e91d1f113bd18fe054ed1c12268797a6831a', 'width': 1080}, 'variants': {}}]} | |
A 2.5M 10MB TinyStories model trained using GRU and attention (vs. TinyStories-1M) | 13 | Using a 20MB TinyStories dataset, I trained a TinyStories model 5x smaller than TinyStories-1M.
Since this was trained on Google Colab free (NVIDIA T4), the loss only converged to \~0.75.
The architecture is a hybrid of a GRU (specifically GRUCell) with a single attention layer.
In a single, large GRUCell layer, I used residual memory logic that writes decoded data into a memory store and feeds it back to the input along with the hidden state.
The model creates a proposed memory:
M̃\_t = tanh(W\_c h\_t + b\_c)
Then the old memory is mixed with the new one through a write gate p\_t:
M\_t = (1 − p\_t) ⊙ M\_{t−1} + p\_t ⊙ M̃\_t
This allows the architecture to train a model so small (0.36M) that it can memorize words and output meaningful text at a train loss of 2.2.
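A toy, framework-free sketch of the two update equations above (the real model uses learned GRUCell tensors in train.py; here the gate p\_t is supplied rather than learned):

```python
import math

def memory_step(h_t, m_prev, W_c, b_c, p_t):
    """One residual-memory update, matching the two equations above.

    m_tilde = tanh(W_c @ h_t + b_c)               # proposed memory
    m_t     = (1 - p_t) * m_prev + p_t * m_tilde  # write gate mixes old and new
    """
    d = len(m_prev)
    m_tilde = [
        math.tanh(sum(W_c[i][j] * h_t[j] for j in range(len(h_t))) + b_c[i])
        for i in range(d)
    ]
    return [(1 - p_t[i]) * m_prev[i] + p_t[i] * m_tilde[i] for i in range(d)]

# Gate closed (p=0): the memory cell is untouched. Gate open (p=1): overwritten.
m = memory_step(h_t=[1.0, -1.0], m_prev=[0.5, 0.5],
                W_c=[[1.0, 0.0], [0.0, 1.0]], b_c=[0.0, 0.0],
                p_t=[0.0, 1.0])
print(m)  # [0.5, -0.7615941559557649]
```

The convex mix is what keeps the memory "residual": with a small gate, old content decays slowly instead of being rewritten every step.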
Finally, I added a self-attention layer that lets the model see the context of previous words to a limited extent. This lets it remember what it said 5-10 words ago and prevents complete drifting, which was a limitation of the standard GRU.
This brings the attention cost to O(T³), but the model still remains faster than TinyStories-1M (50MB) for short bursts below \~300 tokens, beyond which self-attention becomes an overhead.
Example output(TinyStories-1M):
Prompt: once upon a time
Output: once upon a time there was an old girl named Lily. She loved to play with her toys and pretend she was his friend. One day, they went outside to see the park. The sun looked blue and it had big eyes.
Lily's mom said "Let me help you get my doll!" They started to walk around the park and had lots of fun together. When it was dark and it seemed like something strange happened, a little bunny came up over to them.
"Hello there," said Mom. "It's so pretty."
The bunny smiled at Lily and said, "Yes! I'm glad we can find some more toys later." Then he opened his backpack and found that she was able to provide for him in a special place where he could have all the toys ready!
(165 words, 67.97 it/s) (could be roughly around 200 chars/sec for BPE tokeniser)
tinystoriesgru:
Prompt: once upon a time
Output: once upon a time to hear the wolf with a smile on his face. She was so happy that the monster was so cold.
But then, the piece of colorful circle came in. She wanted to see what was inside, but she thought it would be fun. She started to cry and started to cry. She quickly ran and ran until she found the crayon and started to cry.
The cat saw the pretty flower and started to shake and showed them the magazine. She thought it would be fun to cut the leaves. She was so happy with her new ball. She wanted to take h
(500 tokens, 112.02 it/s)
At lower token counts, the GRU scales to be much faster, while the transformer stays consistent at 67-68 it/s regardless of output length.
The pure transformer continues to have better context overall.
I've included the train.py here (if anyone can train it further):
[https://github.com/kavyamali/tinystoriesgru](https://github.com/kavyamali/tinystoriesgru)
Thank you for reading. | 2026-01-08T12:57:45 | https://www.reddit.com/r/LocalLLaMA/comments/1q7b3wy/a_25m_10mb_tinystories_model_trained_using_gru/ | ValuableLucky8566 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7b3wy | false | null | t3_1q7b3wy | /r/LocalLLaMA/comments/1q7b3wy/a_25m_10mb_tinystories_model_trained_using_gru/ | false | false | self | 13 | null |
NVFP4 for local inference | 0 | I recently got a 5060Ti 16G and was toying around with some models. I decided to explore how much boost NVFP4 gives to the token generation performance. So benchmarked two models for local inference:
1. Ollama serving qwen3:8b-q4\_K\_M = 70 t/s
2. VLLM serving nvidia/Qwen3-8B-NVFP4 = 60 t/s
Both generated \~1000 tokens on a simple 50-token prompt. The token generation performance was reported via \`--verbose\` flag in ollama and via logs generated by \`vllm serve\`.
Now, Ollama is based on llama.cpp and uses its own quantization format, which is handled by CUDA kernels. However, vLLM has support for NVFP4 and should have been able to carry out FP4 arithmetic ops directly using hardware support on a Blackwell GPU.
So I was expecting vLLM to perform better, but that is clearly not the case. So either Ollama is way faster than vLLM, or I am doing something wrong. What do you think?
Also, is there a way I could compare apples-to-apples, i.e. does there exist another Qwen3:8b fp4 model that can be run using vllm but does not make use of nvfp4? | 2026-01-08T12:33:19 | https://www.reddit.com/r/LocalLLaMA/comments/1q7amat/nvfp4_for_local_inference/ | v01dm4n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7amat | false | null | t3_1q7amat | /r/LocalLLaMA/comments/1q7amat/nvfp4_for_local_inference/ | false | false | self | 0 | null |
I built my own personal AI exocortex (local, private, learns my style) — and it now does 80–90% of my work and called it BuddAI | 0 | For the last 8 years I’ve been building a system I could never quite name. Something between a second brain, a coding partner, and a digital version of myself.
Today it finally clicked:
BuddAI — my personal AI exocortex.
It runs 100% locally using Ollama models.
It’s trained on my repos, my notes, my documentation, and my patterns.
It writes code in my tone, my structure, my logic.
I correct the last 10–20%, teach it the fix, and it never repeats the mistake.
My efficiency on ESP32 C3 builds went from:
- 25% → 60% → 95%
I’m now producing clean code in hours instead of days.
The goal isn’t to replace myself.
It’s to scale myself.
Everyone should have access to their own BuddAI — not a cloud assistant, but a digital twin that grows with you.
The project is open-source (MIT).
If you want to try it or fork it, here’s the repo:
https://github.com/JamesTheGiblet/BuddAI
Happy to answer questions or share more details. | 2026-01-08T12:14:56 | https://www.reddit.com/r/LocalLLaMA/comments/1q7a9di/i_built_my_own_personal_ai_exocortex_local/ | Pitiful-Fault-8109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7a9di | false | null | t3_1q7a9di | /r/LocalLLaMA/comments/1q7a9di/i_built_my_own_personal_ai_exocortex_local/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'zBfsHcXQI0KDo3_6Jn0-dk602EdLtSJ-EbZMbxnmUwI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zBfsHcXQI0KDo3_6Jn0-dk602EdLtSJ-EbZMbxnmUwI.png?width=108&crop=smart&auto=webp&s=1e36d8a6235db6aa574aa5ed39f88bdf54ed3e97', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zBfsHcXQI0KDo3_6Jn0-dk602EdLtSJ-EbZMbxnmUwI.png?width=216&crop=smart&auto=webp&s=960c4e2392f2a2c07a06e054a2ffd90236fb13cf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zBfsHcXQI0KDo3_6Jn0-dk602EdLtSJ-EbZMbxnmUwI.png?width=320&crop=smart&auto=webp&s=13296eae367f0c349a7636e1f074e47f79978211', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zBfsHcXQI0KDo3_6Jn0-dk602EdLtSJ-EbZMbxnmUwI.png?width=640&crop=smart&auto=webp&s=69cf2a9cff0d955e12fd77d9d11604ed8b12d189', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zBfsHcXQI0KDo3_6Jn0-dk602EdLtSJ-EbZMbxnmUwI.png?width=960&crop=smart&auto=webp&s=085e7d66b50d1299d86bff2f493b51c7758505a7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zBfsHcXQI0KDo3_6Jn0-dk602EdLtSJ-EbZMbxnmUwI.png?width=1080&crop=smart&auto=webp&s=c2a0af572e1f4dfd23025eb52c12895120e73769', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zBfsHcXQI0KDo3_6Jn0-dk602EdLtSJ-EbZMbxnmUwI.png?auto=webp&s=cf6b3bcaf8437fc9128d419fd7b83e373a397e53', 'width': 1200}, 'variants': {}}]} |
Is reinforcement learning finally becoming practical again at trillion-parameter scale? | 1 | For a while, it felt like reinforcement learning quietly stopped scaling. Once models crossed into the hundreds of billions of parameters, RL often became the first thing teams cut due to cost, instability, or tooling limits.
Lately though, I’ve been seeing signs that this might be shifting particularly around parameter-efficient RL setups using LoRA that can operate on extremely large open-source models without blowing up GPU budgets.
One concrete example I ran into was work from Mind Lab, where a LoRA-based RL approach was used on a trillion-parameter open-source model and later integrated into existing training frameworks rather than staying as standalone research code.
So I’m curious how people here see the current state of things:
* Is LoRA-based RL genuinely changing the economics at trillion-parameter scale?
* Are systems constraints still the main blocker, or is optimization catching up?
* Do you see continual learning becoming realistic again for large models?
Would love to hear from anyone experimenting with RL at scale, or maintaining training infrastructure where these trade-offs actually matter. | 2026-01-08T12:12:20 | https://www.reddit.com/r/LocalLLaMA/comments/1q7a7jh/is_reinforcement_learning_finally_becoming/ | Alarmed-Ferret-605 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7a7jh | false | null | t3_1q7a7jh | /r/LocalLLaMA/comments/1q7a7jh/is_reinforcement_learning_finally_becoming/ | false | false | self | 1 | null |
AI21 Labs releases Jamba2 | 133 | It looks like Jamba 2 models are being uploaded right now:
51B [https://huggingface.co/ai21labs/AI21-Jamba2-Mini](https://huggingface.co/ai21labs/AI21-Jamba2-Mini)
3B [https://huggingface.co/ai21labs/AI21-Jamba2-3B](https://huggingface.co/ai21labs/AI21-Jamba2-3B)
First generation of Jamba models:
399B [https://huggingface.co/ai21labs/AI21-Jamba-Large-1.7](https://huggingface.co/ai21labs/AI21-Jamba-Large-1.7)
52B [https://huggingface.co/ai21labs/AI21-Jamba-Mini-1.7](https://huggingface.co/ai21labs/AI21-Jamba-Mini-1.7)
(Jamba Mini is a very underrated model here)
| 2026-01-08T12:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1q7a62a/ai21_labs_releases_jamba2/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7a62a | false | null | t3_1q7a62a | /r/LocalLLaMA/comments/1q7a62a/ai21_labs_releases_jamba2/ | false | false | self | 133 | null |
I was trying out an activation-steering method for Qwen3-Next, but I accidentally corrupted the model weights. Somehow, the model still had enough “conscience” to realize something was wrong and freak out. | 34 | I now feel bad seeing the model realize it was losing its mind and struggling with it, it feels like I was torturing it :( | 2026-01-08T11:42:44 | https://www.reddit.com/gallery/1q79n6x | ikergarcia1996 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q79n6x | false | null | t3_1q79n6x | /r/LocalLLaMA/comments/1q79n6x/i_was_trying_out_an_activationsteering_method_for/ | false | false | 34 | null | |
Best RP Uncensored Model for my Specs | 0 | So, I'm searching for the best open-source model for uncensored RP. I really like Claude Opus 4.5 Thinking's writing style, and I wish for narrations like this one:
\# The Crossing
The convenience store door's chime still echoes in your ears when you blink.
And the world changes.
The smell of wet asphalt and car exhaust vanishes. In its place, a different air — cleaner, carrying something you can't quite identify. Earth. Hay. And something sweeter, like wildflowers.
You're standing in the middle of a street paved with uneven cobblestones. Buildings of stone and wood rise on both sides — slanted roofs, balconies with hanging laundry, rusty metal signs swinging with symbols you don't recognize. The sky above is a deep blue, with two pale moons visible even in daylight.
People walk past you. Strange clothes — tunics, cloaks, leather boots. A man pushes a cart pulled by something that \*almost\* looks like a horse, but has scales on its legs. A woman carries a basket full of fruits in impossible colors.
No one seems to notice you standing there, in your hoodie and sneakers, the konbini plastic bag still in your hand.
Your phone has no signal. The GPS spins endlessly.
What do you do?
My specs:
**GPU:** 1x RTX PRO 6000 Blackwell
**CPU:** 48 Cores
**Memory:** 184 GB
What you guys think is the best model that can create outputs like that and i can run? | 2026-01-08T11:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/1q79jm8/best_rp_uncensored_model_for_my_specs/ | Luuthh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q79jm8 | false | null | t3_1q79jm8 | /r/LocalLLaMA/comments/1q79jm8/best_rp_uncensored_model_for_my_specs/ | false | false | self | 0 | null |
Rethinking RAG: How Agents Learn to Operate | 0 |
**Runtime Evolution, From Static to Dynamic Agents, Through Retrieval**
Hey reddit builders,
You have an agent. You add documents. You retrieve text. You paste it into context.
And that’s supposed to make the agent better.
It does help, but only in a narrow way. It adds facts. It doesn’t change how the agent actually operates.
What I eventually realized is that many of the failures we blame on models aren’t model problems at all. They’re architectural ones.
Agents don’t fail because they lack intelligence. They fail because we force everything into the same flat space.
Knowledge, reasoning, behavior, safety, instructions, all blended together as if they play the same role.
They don’t.
The mistake we keep repeating
In most systems today, retrieval is treated as one thing.
Facts, examples, reasoning hints, safety rules, instructions. All retrieved the same way. Injected the same way. Given the same authority.
The result is agents that feel brittle. They overfit to prompts. They swing between being verbose and being rigid. They break the moment the situation changes.
Not because the model is weak, but because we never taught the agent how to distinguish what is real from how to think and from what must be enforced.
Humans don’t reason this way. Agents shouldn’t either.
*put yourself in the shoes of the agent*
From content to structure
At some point, I stopped asking “what should I retrieve?” and started asking something else.
What role does this information play in cognition?
That shift changes everything.
Because not all information exists to do the same job. Some describes reality. Some shapes how we approach a problem. Some exists only to draw hard boundaries.
What matters here isn’t any specific technique.
It’s the shift from treating retrieval as content to treating it as structure.
Once you see that, everything else follows naturally.
RAG stops being storage and starts becoming part of how thinking happens at runtime.
Knowledge grounds, it doesn’t decide
Knowledge answers one question: what is true.
Facts, constraints, definitions, limits. All essential. None of them decide anything on their own.
When an agent hallucinates, it’s usually because knowledge is missing. When an agent reasons badly, it’s often because knowledge is being asked to do too much.
Knowledge should ground the agent, not steer it.
When you keep knowledge factual and clean, it stops interfering with reasoning and starts stabilizing it. The agent doesn’t suddenly behave differently. It just stops guessing.
This is the move from speculative to anchored.
Reasoning should be situational
Most agents hard-code reasoning into the system prompt. That’s fragile by design.
In reality, reasoning is situational.
An agent shouldn’t always think analytically. Or experimentally. Or emotionally. It should choose how to approach a problem based on what’s happening.
This is where RAG becomes powerful in a deeper sense. Not as memory, but as recall of ways of thinking.
You don’t retrieve answers. You retrieve approaches.
These approaches don’t force behavior. They shape judgment. The agent still has discretion. It can adapt as context shifts.
This is where intelligence actually emerges. The move from informed to intentional.
Control is not intelligence
There are moments where freedom is dangerous.
High stakes. Safety. Compliance. Evaluation.
Sometimes behavior must be enforced.
But control doesn’t create insight. It guarantees outcomes.
When control is separated from reasoning, agents become more flexible by default, and enforcement becomes precise when it’s actually needed.
The agent still understands the situation. Its freedom is just temporarily narrowed.
This doesn’t make the agent smarter. It makes it reliable under pressure.
That’s the move from intentional to guaranteed.
How agents evolve
Seen this way, an agent evolves in three moments.
First, knowledge enters. The agent understands what is real.
Then, reasoning enters. The agent knows how to approach the situation.
Only if necessary, control enters. The agent must operate within limits.
Each layer changes something different inside the agent.
Without grounding, the agent guesses.
Without reasoning, it rambles.
Without control, it can’t be trusted when it matters.
When they arrive in the right order, the agent doesn’t feel scripted or rigid. It feels grounded, thoughtful, dependable when it needs to be.
That’s the difference between an agent that talks and one that operates.
Thin agents, real capability
One consequence of this approach is that agents themselves become simple.
They don’t need to contain everything. They don’t need all the knowledge, all the reasoning styles, all the rules.
They become thin interfaces that orchestrate capabilities at runtime.
This means intelligence can evolve without rewriting agents. Reasoning can be reused. Control can be applied without killing adaptability.
Agents stop being products. They become configurations.
That’s the direction agent architecture needs to go.
**I am building some categorized datasets that prove my thinking. Very soon I will be publishing some open-source modules that act as passive & active factual knowledge, followed by intelligence-simulation datasets and runtime ability injectors activated by context assembly.**
Thanks a lot for reading. I've been working hard on this to arrive at a conclusion, test it, and find the failure modes behind it.
Cheers frank
| 2026-01-08T11:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1q79g3x/rethinking_rag_how_agents_learn_to_operate/ | frank_brsrk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q79g3x | false | null | t3_1q79g3x | /r/LocalLLaMA/comments/1q79g3x/rethinking_rag_how_agents_learn_to_operate/ | false | false | self | 0 | null |
Built a local GUI tool to safely patch code without breaking local LLM setups | 2 | I kept losing working states when AI tools rewrote entire files “helpfully”.
So I built Fracture — a local GUI tool that only allows patching inside explicitly marked sections, with backups, rollback, and a visible diff. Protected sections are enforced and cannot be modified.
Built originally to protect a local LLM backend, but it works on any text file.
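The core idea can be sketched in a few lines (the marker names here are illustrative, not Fracture's actual syntax): only the body between a named begin/end pair is replaceable, and a patch targeting anything else is refused:

```python
import re

def patch_section(source: str, name: str, new_body: str) -> str:
    """Replace only the body of the named editable section; everything else is untouched."""
    pattern = re.compile(
        rf"(# EDIT:BEGIN {re.escape(name)}\n).*?(# EDIT:END {re.escape(name)})",
        re.DOTALL,
    )
    patched, count = pattern.subn(
        lambda m: m.group(1) + new_body + "\n" + m.group(2), source
    )
    if count == 0:
        raise ValueError(f"no editable section named {name!r}; refusing to patch")
    return patched

src = """def setup():  # protected: no markers, so no tool may touch this
    pass

# EDIT:BEGIN handler
    old = True
# EDIT:END handler
"""
print(patch_section(src, "handler", "    new = True"))
```

Everything outside the markers is structurally impossible to modify through this path, which is what makes "helpful" whole-file rewrites a non-event.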
https://preview.redd.it/zxem6t7rz3cg1.png?width=1383&format=png&auto=webp&s=bed58e77c8322f2a1ea84f561bfe1580348f5fb7
GitHub: <https://github.com/Valeopenitus/Fracture/tree/main>
| 2026-01-08T11:20:06 | https://www.reddit.com/r/LocalLLaMA/comments/1q798z1/built_a_local_gui_tool_to_safely_patch_code/ | NeighborhoodWide8205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q798z1 | false | null | t3_1q798z1 | /r/LocalLLaMA/comments/1q798z1/built_a_local_gui_tool_to_safely_patch_code/ | false | false | 2 | null | |
Can you guys recommend open-source model please? | 0 | I have an NVIDIA RTX 4070 Super, which has 12GB of VRAM.
GPT recommended Mistral 7B and others, but when I search for them they're from 1 or 2 years ago.
are these models still ok? Ik I don't have much choice tho | 2026-01-08T11:13:15 | https://www.reddit.com/r/LocalLLaMA/comments/1q794qs/can_you_guys_recommend_opensource_model_please/ | Acceptable-Cash8259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q794qs | false | null | t3_1q794qs | /r/LocalLLaMA/comments/1q794qs/can_you_guys_recommend_opensource_model_please/ | false | false | self | 0 | null |
KV cache gets nuked by long-term memory retrieval — is there a better approach? | 0 | I’m building a local LLM agent (ATOM) and I keep running into the same wall: long-term memory retrieval absolutely kills KV-cache reuse.
The high-level idea is:
1. The system prompt contains a dedicated section like:
<<<LONG_TERM_MEMORY_START>>>
(empty)
<<<LONG_TERM_MEMORY_END>>>
2. On each turn, I retrieve relevant long-term memories from a vector store
3. That slot is replaced with the retrieved memory block
4. No new messages are added for memory
5. Message ordering stays identical across turns
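Steps 1-4 above amount to a fixed-slot substitution. A minimal sketch (marker strings from the post, the rest is my own naming), which also shows why reuse dies: everything after the first token that differs between turns is recomputed, and the markers sit near the top of the system prompt:

```python
MEM_START = "<<<LONG_TERM_MEMORY_START>>>"
MEM_END = "<<<LONG_TERM_MEMORY_END>>>"

SYSTEM_TEMPLATE = f"""You are ATOM.
{MEM_START}
(empty)
{MEM_END}
Answer the user."""

def inject_memory(system_prompt: str, memories: list[str]) -> str:
    """Swap whatever sits between the markers for this turn's retrieved memories."""
    head, rest = system_prompt.split(MEM_START, 1)
    _, tail = rest.split(MEM_END, 1)
    block = "\n".join(memories) if memories else "(empty)"
    return f"{head}{MEM_START}\n{block}\n{MEM_END}{tail}"

print(inject_memory(SYSTEM_TEMPLATE, ["User prefers dark mode."]))
```

Only the tokens before MEM_START form a stable, cache-reusable prefix; any change inside the slot invalidates the KV entries for the entire remainder of the context.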
The goal is to maximize KV-cache reuse while still allowing contextual memory. This works functionally, but performance-wise I’m seeing very poor KV reuse:
1. Often <5% prefix reuse
2. Sometimes effectively a full recompute even when the memory block is small
Here’s the problem I’m stuck on:
1. If memory is appended as messages → KV reuse dies because message count changes
2. If memory is injected into system → KV reuse still dies because tokens change
3. If memory is delayed to later turns → the agent behaves incorrectly
Can anyone suggest a better approach to this?
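One approach that tends to preserve prefix reuse (a sketch of the general technique, not ATOM's current code): keep the system prompt and the message history byte-stable across turns, and inject the retrieved memories as late in the prompt as possible, e.g. prepended to the newest user turn. Most servers match the longest common token prefix against the cache, so only the tail then needs recomputing:

```python
def build_messages(system_prompt, history, memories, user_msg):
    """
    Keep everything before the dynamic content byte-identical across turns so
    the engine's longest-common-prefix cache matching can reuse it; inject the
    retrieved memories as late as possible instead of rewriting the system slot.
    """
    memory_block = "\n".join(f"- {m}" for m in memories)
    return (
        [{"role": "system", "content": system_prompt}]   # static, cacheable
        + history                                        # append-only, cacheable
        + [{"role": "user",
            "content": f"[relevant memories]\n{memory_block}\n\n{user_msg}"}]
    )
```

The trade-off is that memories arrive with less positional authority than a system-prompt slot, but every earlier token stays cacheable; replacing a slot near the top of the prompt invalidates everything after it by construction.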
Project: https://github.com/AtifUsmani/A.T.O.M | 2026-01-08T11:09:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q792fk/kv_cache_gets_nuked_by_longterm_memory_retrieval/ | atif_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q792fk | false | null | t3_1q792fk | /r/LocalLLaMA/comments/1q792fk/kv_cache_gets_nuked_by_longterm_memory_retrieval/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9p7Gd7bTx5p3BMvBz8HH4iW7xLxC8Qoc94ws5iZKcX0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9p7Gd7bTx5p3BMvBz8HH4iW7xLxC8Qoc94ws5iZKcX0.png?width=108&crop=smart&auto=webp&s=20309c6cee53b1a6e58eae9473727cc331f7b3a9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9p7Gd7bTx5p3BMvBz8HH4iW7xLxC8Qoc94ws5iZKcX0.png?width=216&crop=smart&auto=webp&s=6c24bca61d27e6764dbd0678a94eefd87ad95cf6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9p7Gd7bTx5p3BMvBz8HH4iW7xLxC8Qoc94ws5iZKcX0.png?width=320&crop=smart&auto=webp&s=c2e960e7e2472ed39821a0f53035512c944000a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9p7Gd7bTx5p3BMvBz8HH4iW7xLxC8Qoc94ws5iZKcX0.png?width=640&crop=smart&auto=webp&s=9ba538a5db89efc8b9c8702c2fd387e913d4d120', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9p7Gd7bTx5p3BMvBz8HH4iW7xLxC8Qoc94ws5iZKcX0.png?width=960&crop=smart&auto=webp&s=a3b39817a37f2be3b20dc415db70b91fce96f83f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9p7Gd7bTx5p3BMvBz8HH4iW7xLxC8Qoc94ws5iZKcX0.png?width=1080&crop=smart&auto=webp&s=b4e3cf46538f2db6adeb4cca890b86ee1e2704cf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9p7Gd7bTx5p3BMvBz8HH4iW7xLxC8Qoc94ws5iZKcX0.png?auto=webp&s=e53836dc61837908fa872c8eb6841bc185eeceb2', 'width': 1200}, 'variants': {}}]} |
LLMs + COT does not equate to how humans plan. All this hype about LLMs being able to long-term plan has ZERO basis. | 0 | Humans build a world model of everything around them for planning and decision making. Jürgen Schmidhuber and Yann LeCun have been pushing this branch of AI research via 'World Models'. However, most applications of World Models are in the physical world and come primarily from the video and image AI community, not from decision making or planning. LLMs by default are next-token predictors and have no ability to plan and make decisions. Interestingly, there is now a new research paper based on Hierarchical Planning that uses world modeling to beat the top LLMs in a planning benchmark.
[https://arxiv.org/pdf/2512.09897](https://arxiv.org/pdf/2512.09897)
Their method seems clever and reminds me of the DeepSeek paper from almost a year ago: one-time LLM initialization + training a lightweight neural network planner + RL fine-tuning via world modeling. Any thoughts on how long-term planning tasks will be solved via LLMs vs. World Modeling? | 2026-01-08T10:59:41 | https://www.reddit.com/r/LocalLLaMA/comments/1q78w46/llms_cot_does_not_equate_to_how_humans_plan_all/ | Pure-Possibility-590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q78w46 | false | null | t3_1q78w46 | /r/LocalLLaMA/comments/1q78w46/llms_cot_does_not_equate_to_how_humans_plan_all/ | false | false | self | 0 | null |
MCP for Financial Ontology! | 3 | Excited to share an open-source initiative!
MCP for Financial Ontology :
https://github.com/NeurofusionAI/fibo-mcp
This is a minimal open-source tool that equips AI agents with a "standard financial dictionary" based on the Financial Industry Business Ontology (FIBO) standard (edmcouncil.org).
Our intent in starting this open-source project is to explore, together with the AI4Finance community, methodologies for steering AI agents toward more consistent answers and enabling macro-level reasoning for financial tasks.
While this project is still maturing, we hope our insight sparks collaboration and serves as a good starting point for innovative developments.
Any feedback is very welcome, and we would greatly appreciate contributions! | 2026-01-08T10:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1q78ql8/mcp_for_financial_ontology/ | Dear-Rip-6371 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q78ql8 | false | null | t3_1q78ql8 | /r/LocalLLaMA/comments/1q78ql8/mcp_for_financial_ontology/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'WGw5qhw85_EFNJOkPXoDrmF9rYItPIPv64X4bJSE1Nw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WGw5qhw85_EFNJOkPXoDrmF9rYItPIPv64X4bJSE1Nw.png?width=108&crop=smart&auto=webp&s=e45eb68e6b69d09bbbd122fe1e2c31ddda52a83c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WGw5qhw85_EFNJOkPXoDrmF9rYItPIPv64X4bJSE1Nw.png?width=216&crop=smart&auto=webp&s=26b25583d3b493b33abeed1ff807f0afddcf3af6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WGw5qhw85_EFNJOkPXoDrmF9rYItPIPv64X4bJSE1Nw.png?width=320&crop=smart&auto=webp&s=1f2d25f327221944cd3dd555d0973b8ff4e96e16', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WGw5qhw85_EFNJOkPXoDrmF9rYItPIPv64X4bJSE1Nw.png?width=640&crop=smart&auto=webp&s=b404db4ff1616b630947b05740fb89cf7e7bfadb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WGw5qhw85_EFNJOkPXoDrmF9rYItPIPv64X4bJSE1Nw.png?width=960&crop=smart&auto=webp&s=fe2bf685c5807d3f8ac2cc7f890106676920dd1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WGw5qhw85_EFNJOkPXoDrmF9rYItPIPv64X4bJSE1Nw.png?width=1080&crop=smart&auto=webp&s=968e936486c3de06929df240f42368c2400ed4f6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WGw5qhw85_EFNJOkPXoDrmF9rYItPIPv64X4bJSE1Nw.png?auto=webp&s=a76c89376153fd1ec2964228afe9fca91ae2e80e', 'width': 1200}, 'variants': {}}]} |
RM Noise but local | 1 | I use RM noise sometimes when I'm on the radio. It works really well. The issues are that it doesn't appear to be open source, and it's not local. The remote server can add 100-200 ms of delay, which is a bit shoddy. And they have this convoluted training procedure that sounds like a bloody nightmare.
There are some alternatives, but some of the tech is old (example: rnnoise). I'd like to play around with audio in/out LLMs and also have a crack at ASR to transcribe QSOs (contacts between operators). And I'd like to be able to retrain easily if my background noise changes (and it does).
So I'm looking for model recommendations and if there are any decent guides for training an audio llm. I've played around with unsloth finetuning on LFM2 text small model but that's about as far as my experience goes.
Cheers from ZL3 land | 2026-01-08T10:45:04 | https://www.reddit.com/r/LocalLLaMA/comments/1q78ndw/rm_noise_but_local/ | Amazing_Athlete_2265 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q78ndw | false | null | t3_1q78ndw | /r/LocalLLaMA/comments/1q78ndw/rm_noise_but_local/ | false | false | self | 1 | null |
auto complete your commit messages using a local LLM with gsh | 1 | Been working on a new shell called gsh that aims to integrate well with LLMs especially local models. [https://github.com/atinylittleshell/gsh](https://github.com/atinylittleshell/gsh)
One of the most useful things is that I now don't need to stare at "git commit -m" for minutes wondering what commit message to write. :)
I just type "git commit -m" and the shell will automatically look at git diff and summarize a commit message then provide it as ghost text suggestion. Local models like qwen3 coder can do a pretty good job at it. You can write custom rules too for generating other kinds of commands for when you need it.
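For anyone who wants to replicate the idea outside gsh, the core is tiny: clip the staged diff (small models degrade on huge inputs) and ask for a single imperative line. This is not gsh's actual prompt, just a minimal sketch:

```python
def commit_prompt(diff_text, max_chars=8000):
    """Build a single-shot prompt a small local model can answer quickly.
    (gsh's real prompt is internal to the project -- this is just the idea.)"""
    clipped = diff_text[:max_chars]  # small models degrade on huge diffs
    return (
        "Summarize the following staged diff as a one-line, imperative-mood "
        "git commit message. Output only the message.\n\n" + clipped
    )

# In a real hook you would feed `git diff --cached` into the function:
#   diff = subprocess.run(["git", "diff", "--cached"],
#                         capture_output=True, text=True).stdout
# and POST the prompt to a llama.cpp / Ollama OpenAI-compatible endpoint.
```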
Just thought I'd share in case anyone else find it helpful too. Feedback welcome! | 2026-01-08T10:40:47 | atinylittleshell | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q78kw3 | false | null | t3_1q78kw3 | /r/LocalLLaMA/comments/1q78kw3/auto_complete_your_commit_messages_using_a_local/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'qis7wd6dt3cg1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/qis7wd6dt3cg1.png?width=108&crop=smart&auto=webp&s=6e72af9f77e4bd769c37d4b3b7c6fa474d7a349d', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/qis7wd6dt3cg1.png?width=216&crop=smart&auto=webp&s=d9c46f9301b21d214584847edcdbc93498b58060', 'width': 216}, {'height': 120, 'url': 'https://preview.redd.it/qis7wd6dt3cg1.png?width=320&crop=smart&auto=webp&s=394c6c7ee99064ee0daf43359ad64000b02346ec', 'width': 320}, {'height': 241, 'url': 'https://preview.redd.it/qis7wd6dt3cg1.png?width=640&crop=smart&auto=webp&s=0d5b113a49a4f75a84df1cd3a45fcbbe0e9b3927', 'width': 640}, {'height': 362, 'url': 'https://preview.redd.it/qis7wd6dt3cg1.png?width=960&crop=smart&auto=webp&s=177f5ea35dd0e32496d40d90d5860f4ab78cfec0', 'width': 960}, {'height': 407, 'url': 'https://preview.redd.it/qis7wd6dt3cg1.png?width=1080&crop=smart&auto=webp&s=7c6c752bcdd25b0fd47623557b0840510796e5ca', 'width': 1080}], 'source': {'height': 522, 'url': 'https://preview.redd.it/qis7wd6dt3cg1.png?auto=webp&s=af994f3708b910d566d3964d526fdd6f808503f6', 'width': 1384}, 'variants': {}}]} | |
I built a Bloomberg-style Trading Dashboard using Python (CustomTkinter) and Local AI (Ollama). 100% Local & Open Source | 1 | [removed] | 2026-01-08T10:32:33 | https://www.reddit.com/r/LocalLLaMA/comments/1q78g5h/i_built_a_bloombergstyle_trading_dashboard_using/ | Creative-Guide-7168 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q78g5h | false | null | t3_1q78g5h | /r/LocalLLaMA/comments/1q78g5h/i_built_a_bloombergstyle_trading_dashboard_using/ | false | false | self | 1 | null |
Local memory for AI agents feedback welcome | 1 | [removed] | 2026-01-08T09:58:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q77vux/local_memory_for_ai_agents_feedback_welcome/ | mahitva_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q77vux | false | null | t3_1q77vux | /r/LocalLLaMA/comments/1q77vux/local_memory_for_ai_agents_feedback_welcome/ | false | false | self | 1 | null |
Z-image base model is being prepared for release | 155 | [https://github.com/modelscope/DiffSynth-Studio/commits?author=Artiprocher&since=2025-12-31&until=2026-01-08](https://github.com/modelscope/DiffSynth-Studio/commits?author=Artiprocher&since=2025-12-31&until=2026-01-08) | 2026-01-08T09:51:33 | Ravencloud007 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q77rxh | false | null | t3_1q77rxh | /r/LocalLLaMA/comments/1q77rxh/zimage_base_model_is_being_prepared_for_release/ | false | false | default | 155 | {'enabled': True, 'images': [{'id': '038zb25ok3cg1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/038zb25ok3cg1.png?width=108&crop=smart&auto=webp&s=c9bc33bc7a517f5a541f77e463d64edceb705284', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/038zb25ok3cg1.png?width=216&crop=smart&auto=webp&s=4f203f93a09ec574348e636ae14e3179b148c1ec', 'width': 216}, {'height': 221, 'url': 'https://preview.redd.it/038zb25ok3cg1.png?width=320&crop=smart&auto=webp&s=84208747e7f8cdb3a535c5f4e2232c4b7cabc122', 'width': 320}, {'height': 443, 'url': 'https://preview.redd.it/038zb25ok3cg1.png?width=640&crop=smart&auto=webp&s=c2cbdfd4fbd53811cd0fc218bed6e466b49ff678', 'width': 640}], 'source': {'height': 628, 'url': 'https://preview.redd.it/038zb25ok3cg1.png?auto=webp&s=8363f693189a0ef55d49fcbc0257050c718fc19d', 'width': 906}, 'variants': {}}]} | |
Speakr v0.8.0 - Additional diarization options and REST API | 4 | Quick update on Speakr. For those who haven't seen this before: it's a self-hosted transcription app that works with Whisper and local LLMs. Upload or record audio, get transcription with speaker diarization, then chat with it or get summaries using whatever model you point it at.
**Speaker diarization without GPU** \- New option for those who want speaker identification but don't want to run a WhisperX container. Just set `TRANSCRIPTION_MODEL=gpt-4o-transcribe-diarize` with your OpenAI key and you get diarized transcripts. No GPU needed.
**REST API v1** \- Full API for automation. Works with n8n, Zapier, Make, or your own scripts. Interactive Swagger docs at `/api/v1/docs`. Personal access tokens for auth.
**Connector architecture** \- Simplified configuration. The app auto-detects your provider based on settings. Self-hosted WhisperX still gives you the best quality with voice profiles - nothing changes there.
**Also included** \- Token budgets per user if you're sharing your instance. Better UI responsiveness with very long transcripts. A better audio player.
For the local LLM crowd, text generation still points at Ollama, LM Studio, or whatever you're running, that's unchanged. You can use my [WhisperX ASR transcription companion docker container](https://github.com/murtaza-nasir/whisperx-asr-service) for local diarization, or the cloud diarization option for simpler setup.
[GitHub](https://github.com/murtaza-nasir/speakr) | [Screenshots](https://murtaza-nasir.github.io/speakr/screenshots) | [Quick Start](https://murtaza-nasir.github.io/speakr/getting-started) | [API Reference](https://murtaza-nasir.github.io/speakr/user-guide/api-reference) | [Docker Hub](https://hub.docker.com/r/learnedmachine/speakr) | 2026-01-08T09:44:22 | https://www.reddit.com/gallery/1q77nr6 | hedonihilistic | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1q77nr6 | false | null | t3_1q77nr6 | /r/LocalLLaMA/comments/1q77nr6/speakr_v080_additional_diarization_options_and/ | false | false | 4 | null | |
RAG Paper 26.1.7 | 14 | 1. [RADAR: Retrieval-Augmented Detector with Adversarial Refinement for Robust Fake News Detection](http://arxiv.org/abs/2601.03981v1)
2. [SoK: Privacy Risks and Mitigations in Retrieval-Augmented Generation Systems](http://arxiv.org/abs/2601.03979v1)
3. [Trade-R1: Bridging Verifiable Rewards to Stochastic Environments via Process-Level Reasoning Verification](http://arxiv.org/abs/2601.03948v1)
4. [Decide Then Retrieve: A Training-Free Framework with Uncertainty-Guided Triggering and Dual-Path Retrieval](http://arxiv.org/abs/2601.03908v1)
5. [Unleashing the Potential of Neighbors: Diffusion-based Latent Neighbor Generation for Session-based Recommendation](http://arxiv.org/abs/2601.03903v1)
6. [VietMed-MCQ: A Consistency-Filtered Data Synthesis Framework for Vietnamese Traditional Medicine Evaluation](http://arxiv.org/abs/2601.03792v1)
7. [Bridging OLAP and RAG: A Multidimensional Approach to the Design of Corpus Partitioning](http://arxiv.org/abs/2601.03748v1)
8. [Whose Facts Win? LLM Source Preferences under Knowledge Conflicts](http://arxiv.org/abs/2601.03746v1)
**Collected by OpenBMB, transferred by** [**RagView.ai**](https://www.ragview.ai/) **/** [**github/RagView**](https://github.com/RagView/RagView) **.** | 2026-01-08T09:24:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q77cnk/rag_paper_2617/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q77cnk | false | null | t3_1q77cnk | /r/LocalLLaMA/comments/1q77cnk/rag_paper_2617/ | false | false | self | 14 | null |
PaddleOCR keeps trying to download models even when local paths are provided (Paddle 3.x, Python 3.12) | 6 | Hi everyone,
I’m trying to use PaddleOCR in a fully offline setup, but I’m running into an issue where it still attempts to fetch models from the internet.
Setup:
PaddleOCR: 3.x
Python: 3.12
All OCR models are already downloaded and stored locally
Issue: Even after downloading the models manually and explicitly assigning local paths (det / rec / cls models) while initializing PaddleOCR, the library still tries to download models from online sources during initialization.
This happens on first run, even though:
The model files exist locally
Correct local paths are passed
I’m not enabling any auto-download flags (as far as I know)
PS: I cannot access external networks from my environment due to organization restrictions, so online model fetching is not an option. | 2026-01-08T08:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q7630d/paddleocr_keeps_trying_to_download_models_even/ | adismartty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7630d | false | null | t3_1q7630d | /r/LocalLLaMA/comments/1q7630d/paddleocr_keeps_trying_to_download_models_even/ | false | false | self | 6 | null |
Good settings for LLM to remember / retain long contexts | 0 | I'm on a cute 16 GB VRAM GPU, LM Studio, running Qwen 3 30B Q4\_K\_M Instruct
It's a surreal experience chatting with an LLM fully offline with complete privacy; it also helps with personal career path / business opportunity scenarios.
Question: are there any concrete settings so that the LLM remembers everything I say?
Context keeps growing and token generation keeps slowing down, so any advice would be great =)
p/s: already got a second 16G GPU (to make it 'pooled 32G'), but still waiting for dual x16 mobo to arrive for now hehe.... | 2026-01-08T08:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/1q7613x/good_settings_for_llm_to_remember_retain_long/ | alex_godspeed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q7613x | false | null | t3_1q7613x | /r/LocalLLaMA/comments/1q7613x/good_settings_for_llm_to_remember_retain_long/ | false | false | self | 0 | null |
My opinion on some trending topics about LLMs | 0 | I find that 7B-A1B, 20B-A3B, and 100-120B MoEs are the best model sizes anyone can run: whether you have an A100/H100 or an RTX 3090, one of them will always run fast and provide very high-quality answers.
Why are most models being pushed so hard into the "Mamba" space? Mamba saves context space, and that's good! But please don't tell me it's on par with a full transformer on something like agentic tasks. A well-trained transformer can remember almost every bit of the context, which a Mamba model can't.
Also, I thought the idea of "MXFP4" had stalled at some point; the only models I've heard of related to it are GPT-OSS (very mature software support) and NVIDIA Nemotron (hybrid Mamba-Transformer, NVFP4). So it's already proven that MXFP4 (very mature thanks to GPT-OSS) and NVFP4 (not as widely compatible, still maturing) are viable ways to make models significantly cheaper to train, and they're already being pushed into production.
I noticed something, especially with Qwen Thinking models (from 4B up to 200B+) and also Nanbeige (not a fair size comparison, just an example): the models try to "make the user as happy as possible", which is a good thing! In an instruct model... Reasoning models should mainly do step-by-step calculation, not try to avoid being "so direct" or "boring", because in this case being direct is called efficiency.
I think the optimal design of an LLM family would be something like an ultra-sparse MoE in the 100-120B range that activates around 5-7B parameters per answer, trained with a thinking budget and optimized to be as efficient as possible unless the system prompt (or fine-tuning) says otherwise. After building that sub-120B MoE, we can build smaller ones like 7B-A1B and 20B-A3B with the same architecture (MXFP4, hybrid Mamba-Transformer or pure Transformer depending on use case).
If you ask me why I call that "optimal": it's already compatible (or can be made compatible with minor changes) with current inference builds. GPT-OSS received very negative feedback in its first release days because the stack wasn't mature enough and people didn't know how to configure it correctly; proven optimal performance always depends on the maturity of the software stack, not necessarily on hardware acceleration, which in this case is MXFP4.
That design would also allow distilling directly from the teacher without first collecting CoT and outputs. The 7B-A1B student then starts to learn efficiency, and you have two options: 1. don't cap the model's CoT, and it will likely think forever (Nanbeige and Qwen 2507), or 2. employ that design and use direct same-architecture distillation instead of re-teaching the student model again and again, which would make training really expensive.
A 7B-A1B model distilled via direct same-arch distillation can easily be on par with a 4B dense model trained via output distillation, and it will actually be much cheaper, because it learns the teacher's patterns (with the teacher fine-tuned to be efficient in CoT and answers) and gives much faster responses (a 7B-A1B model runs at roughly the speed of a \~1.5-2B dense model while matching 4B intelligence or higher), while being very cheap to serve because it's MXFP4. | 2026-01-08T07:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/1q75lla/my_opinion_on_some_trending_topics_about_llms/ | RegularFollowing9259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q75lla | false | null | t3_1q75lla | /r/LocalLLaMA/comments/1q75lla/my_opinion_on_some_trending_topics_about_llms/ | false | false | self | 0 | null |
Built an open-source MCP gateway that works with any LLM - one proxy for all your tool connections | 1 | Been working on this for a while and figured this community would find it useful.
PlexMCP is an MCP gateway/proxy that lets you manage all your MCP server connections in one place. Instead of configuring each MCP server separately for every model you're running, you just point everything at PlexMCP and it handles the routing.
Works with Claude, but also any local LLM that supports MCP - so if you're running stuff through ollama or llama.cpp with MCP support, this slots right in.
What it does:
\- Single endpoint for all your MCP servers
\- Dashboard to manage connections and see usage
\- Supports HTTP, SSE, WebSocket, STDIO
\- Self-hostable with Docker (no cloud required)
There's also a hosted version at [plexmcp.com](http://plexmcp.com) if you don't want to run infrastructure, but the self-hosted version is fully featured.
* GitHub: [https://github.com/PlexMCP/PlexMCP-OSS](https://github.com/PlexMCP/PlexMCP-OSS)
Curious what MCP servers you all are running with local models. I'm trying to figure out what integrations to prioritize next. | 2026-01-08T07:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/1q75jay/built_an_opensource_mcp_gateway_that_works_with/ | ItsTh3Mailman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q75jay | false | null | t3_1q75jay | /r/LocalLLaMA/comments/1q75jay/built_an_opensource_mcp_gateway_that_works_with/ | false | false | self | 1 | null |
I'm the Tech Lead at Keiro - built a search API for AI agents. AMA | 0 | been building search infrastructure for agents and thought i'd share some stuff we learned
so the weird thing about local models is they're super fast but then they sit there waiting for search results. your model processes everything in milliseconds but waits 2-3 seconds for data
we got ours down to around 700ms average. other options are anywhere from 750ms to 3.5s depending on provider
tech stuff we did:
* distributed proxies with smart routing
* parallel scraping + fallbacks
* custom markdown extraction (had to optimize for tokens)
* anti-bot stuff at infra level
built a few endpoints (/search, /research, /answer) and an mcp server
the unlimited queue thing turned out to matter way more than we thought. people build training datasets and knowledge bases totally differently when they're not counting credits
I want to hear from the LocalLLaMA folks about:
* how search latency hits different model types
* local vs cloud tradeoffs when you need external data
* mcp patterns
* batch processing for kb stuff
* how pricing shapes technical choices
free tier at [keirolabs.cloud](http://keirolabs.cloud) if you wanna test. happy to give more credits if you're working on something cool
AMA
Also, PS: this is not a promotion, I just want to talk with people about what they're doing to achieve the same goal | 2026-01-08T07:26:26 | https://www.reddit.com/r/LocalLLaMA/comments/1q75h2s/im_the_tech_lead_at_keiro_built_a_search_api_for/ | Key-Contact-6524 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q75h2s | false | null | t3_1q75h2s | /r/LocalLLaMA/comments/1q75h2s/im_the_tech_lead_at_keiro_built_a_search_api_for/ | false | false | self | 0 | null |
Training an LLM from scratch on 1800s London Text (1.2B param, 90GB dataset) | 1 | [removed] | 2026-01-08T07:14:06 | https://www.reddit.com/r/LocalLLaMA/comments/1q759u7/training_an_llm_from_scratch_on_1800s_london_text/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q759u7 | false | null | t3_1q759u7 | /r/LocalLLaMA/comments/1q759u7/training_an_llm_from_scratch_on_1800s_london_text/ | false | false | self | 1 | null |
Does ollama support gemma3 image mode? | 0 | I'm currently using llama-cpp and it was amazing until I tried gemma3 with image mode. It just kept hallucinating shit and didn't see images properly, so I'm wondering if Ollama has better support or something?
Thank you in advance. | 2026-01-08T06:50:14 | https://www.reddit.com/r/LocalLLaMA/comments/1q74uza/does_ollama_support_gemma3_image_mode/ | Brospeh-Stalin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q74uza | false | null | t3_1q74uza | /r/LocalLLaMA/comments/1q74uza/does_ollama_support_gemma3_image_mode/ | false | false | self | 0 | null |
Local models breaking strict JSON output for conversations that work with OpenAI | 0 | I have a conversation + prompt setup that reliably produces strict JSON-only output with OpenAI models.
When I hand the same conversation to local models via LM Studio, they immediately start getting confused and breaking the pattern.
Models tested locally so far:
* mistral-nemo-12b-airai-rmax-v1.2
* meta-llama-3.1-8b-instruct
Anyone else see this with local vs OpenAI?
Any local models you’d recommend for reliable JSON-only output?
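One workaround worth trying before switching models: stop asking for JSON in the prompt and constrain decoding server-side instead. Recent LM Studio builds expose an OpenAI-style `response_format` on the chat completions endpoint (older builds may only accept `json_object`; the schema shape below is a sketch, so check your server version's docs):

```python
import json

def json_only_payload(model, messages, schema):
    """
    Ask the server to constrain decoding instead of relying on the prompt.
    With grammar-capable llama.cpp backends the schema is compiled to a
    grammar, so the model cannot emit non-conforming tokens at all.
    """
    return {
        "model": model,
        "messages": messages,
        "temperature": 0,          # sampling noise is a common cause of drift
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "reply", "strict": True, "schema": schema},
        },
    }

# POST json.dumps(payload) to http://localhost:1234/v1/chat/completions
```

This usually turns "sometimes works" into "always parses", because conformance is enforced at the token level rather than hoped for at the instruction level.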
*It should also be noted it does sometimes work, but it's not reliable.* | 2026-01-08T06:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1q74ngh/local_models_breaking_strict_json_output_for/ | LostMinions | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q74ngh | false | null | t3_1q74ngh | /r/LocalLLaMA/comments/1q74ngh/local_models_breaking_strict_json_output_for/ | false | false | self | 0 | null |
Training an LLM from scratch on 1800-1875 London texts (1.2B parameters, 90GB dataset) | 1 | [removed] | 2026-01-08T06:11:50 | https://www.reddit.com/r/LocalLLaMA/comments/1q746ct/training_an_llm_from_scratch_on_18001875_london/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q746ct | false | null | t3_1q746ct | /r/LocalLLaMA/comments/1q746ct/training_an_llm_from_scratch_on_18001875_london/ | false | false | self | 1 | null |
MCP setup finally worked after trying 3 different gateways | 1 | Was setting up Model Context Protocol for our team and it turned into a bigger problem than expected.
We're building internal tools where multiple devs need Claude to access our codebase, run git commands, query databases. Can't just give everyone direct MCP access - need centralized control over which tools are available, logging for debugging, and permissions so junior devs can't accidentally nuke production.
Direct Claude API + MCP works fine solo, but for teams you need a gateway.
**What I tried:**
* **LiteLLM** \- Most popular but no native MCP support. Found workarounds on GitHub that looked unmaintained.
* **Portkey** \- Has MCP but only hosted. We're not sending internal code through external servers.
* **TrueFoundry** \- Supports MCP but requires full Kubernetes setup. Way overkill.
Almost gave up and considered building our own, but then we found an LLM gateway while surfing Reddit called **Bifrost** \- self-hosted, native MCP support, took 15 minutes to set up. Sharing their GitHub repo here: [https://github.com/maximhq/bifrost](https://github.com/maximhq/bifrost)
Now we configure MCP servers once (filesystem, git, database), create virtual keys per developer, set permissions per key. Devs point their Claude client at Bifrost and MCP just works.
**What surprised me:**
Per-key tool restrictions are actually useful. Junior devs get read-only filesystem, seniors get write access. Same MCP servers, different permissions.
Every MCP tool call is logged. When something breaks I can see exactly what Claude did.
Semantic caching saves redundant MCP calls. "What files in /src?" and "show me src files" hit the same cache.
**Setup:** Running on small EC2, connected to internal MCP servers. Each project has its own key with appropriate permissions.
Docs: [https://docs.getbifrost.ai/features/mcp/overview](https://docs.getbifrost.ai/features/mcp/overview)
Not affiliated, just sharing what worked after burning a week on this. | 2026-01-08T05:44:24 | https://www.reddit.com/r/LocalLLaMA/comments/1q73nz5/mcp_setup_finally_worked_after_trying_3_different/ | Less_Caregiver_1653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q73nz5 | false | null | t3_1q73nz5 | /r/LocalLLaMA/comments/1q73nz5/mcp_setup_finally_worked_after_trying_3_different/ | false | false | self | 1 | null |
Training an LLM from scratch on 1800-1875 London texts (1.2B parameters, 90GB dataset) | 1 | [removed] | 2026-01-08T04:27:23 | https://www.reddit.com/r/LocalLLaMA/comments/1q725t5/training_an_llm_from_scratch_on_18001875_london/ | Remarkable-Trick-177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q725t5 | false | null | t3_1q725t5 | /r/LocalLLaMA/comments/1q725t5/training_an_llm_from_scratch_on_18001875_london/ | false | false | self | 1 | null |
Getting 30K tokens/sec on T4 with 14M MoE model - is this normal or am I bottlenecked? | 9 | I'm training a 14M parameter transformer (MoE architecture, 8 experts, top-2 routing) on a T4 GPU and getting around 30K tokens/sec with batch size 30 and gradient accumulation of 8.
I wrote custom CUDA kernels for RMSNorm, RoPE, and SwiGLU that show 3-5x speedup in isolated benchmarks, but they don't seem to make any difference in actual training throughput.
**Setup:**
* Model: 14M total params, 2M active per token
* GPU: T4 (16GB), FP16 mixed precision
* Batch: 30 tokens, gradient accumulation: 8 steps
* Framework: PyTorch 2.0+
**What I've checked:**
* CUDA kernels compile and load successfully
* Kernels show expected speedup in microbenchmarks
* GPU utilization appears normal
* No obvious Python overhead in profiling
**Question:** Is 30K tokens/sec reasonable for this setup, or should I be seeing significantly higher throughput? For reference, I've seen claims of 100K+ tokens/sec for similar model sizes on T4.
I suspect either my CUDA kernels aren't actually being used during training (silent fallback?), or there's some overhead I'm not accounting for. Has anyone experienced custom kernels showing good microbenchmark results but not translating to training speedup?
Any ideas what might be limiting throughput or how to diagnose this further?
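One quick way to check for a silent fallback without a profiler is to wrap each custom kernel's Python entry point in a call counter before a training step; if the counter never moves, PyTorch (autograd, `torch.compile`, or an AMP cast path) is routing around your kernels. The module and function names below are placeholders for your own extension:

```python
import functools

def counted(fn, stats, name):
    """Wrap a kernel entry point so training reveals whether it ever runs."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        stats[name] = stats.get(name, 0) + 1
        return fn(*args, **kwargs)
    return wrapper

# Usage sketch (names are placeholders, not a real API):
#   stats = {}
#   my_ext.rmsnorm = counted(my_ext.rmsnorm, stats, "rmsnorm")
#   run_one_training_step()
#   print(stats)   # an absent or zero count means a silent fallback
```

If the counters do move, the next suspect is that the ops you optimized are simply not the bottleneck: at 14M params on a T4, data loading, optimizer steps, and kernel launch overhead can dominate, so a 3-5x microbenchmark win on RMSNorm/RoPE/SwiGLU may be invisible end to end.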
[Github link](https://github.com/MatN23/AdaptiveTrainingSystem) | 2026-01-08T04:15:44 | https://www.reddit.com/r/LocalLLaMA/comments/1q71xc6/getting_30k_tokenssec_on_t4_with_14m_moe_model_is/ | RefrigeratorCalm9701 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q71xc6 | false | null | t3_1q71xc6 | /r/LocalLLaMA/comments/1q71xc6/getting_30k_tokenssec_on_t4_with_14m_moe_model_is/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'LE4Xp_GcMxjJFdgJLqY28L0ry4mQopejJOeji3h7wtA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LE4Xp_GcMxjJFdgJLqY28L0ry4mQopejJOeji3h7wtA.png?width=108&crop=smart&auto=webp&s=7dbf1c91abfb786ce3c7177a3d33300aee88016b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LE4Xp_GcMxjJFdgJLqY28L0ry4mQopejJOeji3h7wtA.png?width=216&crop=smart&auto=webp&s=e33242a872b334377d71249bb945c2b0a189a48c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LE4Xp_GcMxjJFdgJLqY28L0ry4mQopejJOeji3h7wtA.png?width=320&crop=smart&auto=webp&s=f98cd12be9a25827c8726f61dcf51f96d7a3d25d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LE4Xp_GcMxjJFdgJLqY28L0ry4mQopejJOeji3h7wtA.png?width=640&crop=smart&auto=webp&s=ce64229ab9386f339771dc6b7c7f7527a1e52daf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LE4Xp_GcMxjJFdgJLqY28L0ry4mQopejJOeji3h7wtA.png?width=960&crop=smart&auto=webp&s=bebb3261a845d22ffe23acc951286dbdd2cdf46f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LE4Xp_GcMxjJFdgJLqY28L0ry4mQopejJOeji3h7wtA.png?width=1080&crop=smart&auto=webp&s=8f11605195694b6c09988da3a1565d3a9e506105', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LE4Xp_GcMxjJFdgJLqY28L0ry4mQopejJOeji3h7wtA.png?auto=webp&s=28b7677b98a3b6b94c12e9612e0cec687a8abe4e', 'width': 1200}, 'variants': {}}]} |
Dialogue Tree Search - MCTS-style tree search to find optimal dialogue paths (so you don't have to trial-and-error it yourself) | 320 | Hey all! I'm sharing an updated version of my MCTS-for-conversations project. Instead of generating single responses, it explores entire conversation trees to find dialogue strategies and prunes bad paths. I built it to help get better research directions for projects, but it can be used for anything
https://preview.redd.it/shr3e0liv1cg1.png?width=2560&format=png&auto=webp&s=eec800c6dcd9f1a4fd033d003fe80e102cba8079
Github: [https://github.com/MVPandey/DTS](https://github.com/MVPandey/DTS)
Motivation: I like MCTS :3 and I originally wanted to make this a dataset-creation agent, but this is what it evolved into on its own. Basically: DTS runs parallel beam search over conversation branches. You give it a goal and opening message, and it:
(Note: this isn't MCTS. It's parallel beam search. UCB1 is too wild with LLMs for me.)
1. Generates N diverse strategies
2. Forks each into user intent variants - skeptical, cooperative, confused, resistant (if enabled, or defaults to engaged + probing)
3. Rolls out full multi-turn conversations down each branch
4. Has 3 independent LLM judges score each trajectory, takes the median
5. Prunes branches below threshold, backpropagates scores
6. Repeats for however many rounds you configure
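The loop above can be sketched in a few lines — stubbed deterministic judges standing in for the three independent LLM judges, and hypothetical branch/turn names, just to show the prune-by-median skeleton:

```python
# Toy skeleton of the DTS loop: median-of-3 judging + prune-by-threshold beam search.
# Judges here are deterministic stubs; in DTS each would be an independent LLM call.
import statistics

def judge_scores(trajectory, n_judges=3):
    """Stand-ins for 3 independent judges scoring one conversation trajectory."""
    return [min(10, len(trajectory) + j) for j in range(n_judges)]

def beam_round(branches, threshold=3.0, keep=4):
    """Score each branch by the median judge vote, prune, keep the top-k."""
    scored = [(statistics.median(judge_scores(b)), b) for b in branches]
    survivors = sorted((sb for sb in scored if sb[0] >= threshold), reverse=True)
    return [b for _, b in survivors[:keep]]

def search(opening, strategies, rounds=2):
    # One branch per strategy; real rollouts would come from multi-turn LLM calls.
    branches = [[opening, s] for s in strategies]
    for _ in range(rounds):
        branches = beam_round(branches)
        branches = [b + [f"turn-{len(b)}"] for b in branches]  # extend survivors
    return branches

best = search("goal + opening message", ["direct", "socratic", "empathetic"])
print(len(best), "surviving branches")
```

The real thing adds intent forking and backpropagated scores on top, but the score-prune-extend cycle is the core of it.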
Three judges with median voting helps a lot with the LLM-as-judge variance problem from CAE. Still not grounded in anything real, but outlier scores get filtered. Research context helps but the scoring is still stochastic. I tried a rubric-based approach but it was trash.
Main additions over CAE:
* user intent forking (strategies get stress-tested against different personas)
* deep research integration via GPT-Researcher for domain context
* proper visualization with conversation playback
Only supports OpenAI-compatible endpoints atm - works with whatever models you have access to there. It's token-hungry though: a full run can hit 300+ LLM calls depending on config. If running locally, disable parallel calls.
It's open source (Apache 2.0) and I'm happy to take contributions if anyone wants to help out. Just a project.
\--
BTW: Backend was done mostly by me as the planner/sys designer, etc + Claude Code for implementation/refactoring. Frontend was purely vibe coded. Sorry if the code is trash. | 2026-01-08T04:08:39 | https://www.reddit.com/r/LocalLLaMA/comments/1q71sbe/dialogue_tree_search_mctsstyle_tree_search_to/ | ManavTheWorld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q71sbe | false | null | t3_1q71sbe | /r/LocalLLaMA/comments/1q71sbe/dialogue_tree_search_mctsstyle_tree_search_to/ | false | false | 320 | null | |
NVLink inactive V100 SXM2 | 2 | Hello guys
I just purchased a Supermicro server from abroad and found that 2 of the NVLinks are inactive. Has anyone encountered this, and does anyone have solutions/tips? Thanks
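If you haven't already, `nvidia-smi nvlink --status` per GPU is the first thing to capture; down links are typically reported as `<inactive>`. A throwaway counter for that output — note the sample text below is made up for illustration, so paste in whatever your box actually prints:

```python
def count_inactive_links(nvidia_smi_output: str) -> int:
    """Count NVLink lines reported as <inactive> in `nvidia-smi nvlink --status` output."""
    return sum(
        1
        for line in nvidia_smi_output.splitlines()
        if line.strip().startswith("Link") and "<inactive>" in line
    )

# Hypothetical sample output - replace with real `nvidia-smi nvlink --status` text.
sample = """\
GPU 0: Tesla V100-SXM2-16GB
\t Link 0: 25.781 GB/s
\t Link 1: <inactive>
\t Link 2: <inactive>
\t Link 3: 25.781 GB/s
"""
print(count_inactive_links(sample))  # 2
```

Comparing the inactive-link count per GPU against the topology from `nvidia-smi topo -m` helps distinguish a BIOS/firmware issue from a bad SXM2 socket.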
https://preview.redd.it/neq29hrnv1cg1.png?width=684&format=png&auto=webp&s=526db4ef3d6441151293ce5dd3acf505ec4a07aa
| 2026-01-08T04:08:33 | https://www.reddit.com/r/LocalLLaMA/comments/1q71s8s/nvlink_inactive_v100_sxm2/ | LeastExperience1579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q71s8s | false | null | t3_1q71s8s | /r/LocalLLaMA/comments/1q71s8s/nvlink_inactive_v100_sxm2/ | false | false | 2 | null | |
TV Show Silicon Valley before and after AI disrupts the industry | 0 | 2026-01-08T04:02:39 | DJAI9LAB | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1q71nsu | false | null | t3_1q71nsu | /r/LocalLLaMA/comments/1q71nsu/tv_show_silicon_valley_before_and_after_ai/ | false | false | default | 0 | null |
Any Good? | 3 | is this good for AI modelling? I hear there's a bios patch to enable. Anybody have the bios? On the fence to buy 4+ since I still have a couple mining boards. $79$ ?!!
[http://ebay.app-l.ink/MzJ8eXwgi4](http://ebay.app-l.ink/MzJ8eXwgi4)
https://preview.redd.it/g0adm3vmq1cg1.png?width=1600&format=png&auto=webp&s=c8eabe01e3b30c667b3708fd25c04ee0bec7aaef
| 2026-01-08T03:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1q719k2/any_good/ | timber03 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q719k2 | false | null | t3_1q719k2 | /r/LocalLLaMA/comments/1q719k2/any_good/ | false | false | 3 | null | |
Using a 3060 12gb (64g normal ram), best local uncensored writing model? | 7 | I've been a writer for quite some time and i've decided to start to get into local LLMs, mainly because sometimes my muse is just dead and I need some help. I don't need a *fast* model. I'm perfectly happy to sit around and wait for a while (I've used 16gig models and while I wouldn't mind more speed, they're fine).
But what I'm looking for is:

1. An uncensored local model that is decent at writing, using KoboldCpp. It doesn't have to be fully *erotica-capable*, just something that won't scream hysterically at the sight (or prompt) of blood or boobies.
2. A good model that does handle erotica, for when I'm on chapter 27 of "The housewife and the Plumber" and am utterly smutted out.
Can anyone give a good suggestion for *recent* models?
If it matters, I don't need a model to go from prompt to finished book. I'll be doing a lot of rewriting and, in many cases, just using it to tickle my muse so I don't call a friend at 3:45 AM.
Thanks! | 2026-01-08T03:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1q70t3h/using_a_3060_12gb_64g_normal_ram_best_local/ | Cartoonwhisperer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q70t3h | false | null | t3_1q70t3h | /r/LocalLLaMA/comments/1q70t3h/using_a_3060_12gb_64g_normal_ram_best_local/ | false | false | self | 7 | null |
frontend similar to Open WebUI that supports full OpenAI API? | 1 | I'm using Open WebUI as a frontend to my models on different servers. I can get an API key from Open WebUI and work with Emacs gptel and Roo Code, however, [continue.dev](http://continue.dev) doesn't seem to work because Open WebUI doesn't have the /api/completions endpoint.
Is there another web frontend that supports:
- OpenAI-compatible API: for now /models, /chat/completions, /completions
- LDAP support
- managing the models that each user can use (like Open WebUI user groups)
- model use metrics (now I can see this in my llama-swap server) | 2026-01-08T03:11:34 | https://www.reddit.com/r/LocalLLaMA/comments/1q70kfv/frontend_similar_to_open_webui_that_supports_full/ | irudog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q70kfv | false | null | t3_1q70kfv | /r/LocalLLaMA/comments/1q70kfv/frontend_similar_to_open_webui_that_supports_full/ | false | false | self | 1 | null |
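The endpoint mismatch that breaks some clients can be sanity-checked mechanically: compare what a frontend exposes against what each client needs. A tiny sketch — the endpoint sets below are illustrative guesses, not authoritative lists for any particular tool:

```python
# Which OpenAI-style endpoints does a client need that the frontend doesn't expose?
# Endpoint requirements here are hypothetical examples, not exact client specs.

REQUIRED_BY_CLIENT = {
    "completions-based client": {"/models", "/chat/completions", "/completions"},
    "chat-only client": {"/models", "/chat/completions"},
}

def missing_endpoints(exposed: set, client: str) -> set:
    """Endpoints the named client needs but the frontend doesn't provide."""
    return REQUIRED_BY_CLIENT[client] - exposed

# Frontend that, like the post describes, lacks a bare completions endpoint:
frontend_exposes = {"/models", "/chat/completions"}
print(missing_endpoints(frontend_exposes, "completions-based client"))  # {'/completions'}
```

Any frontend you evaluate just needs to come up empty for every client in that table.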
Fara-7B (bartowski/microsoft_Fara-7B-GGUF Q4_K_L) gets stuck in a loop | 0 | 2026-01-08T03:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1q70h46/fara7b_bartowskimicrosoft_fara7bgguf_q4_k_l_gets/ | tracagnotto | self.LocalLLaMA | 2026-01-08T03:19:32 | 0 | {} | 1q70h46 | false | null | t3_1q70h46 | /r/LocalLLaMA/comments/1q70h46/fara7b_bartowskimicrosoft_fara7bgguf_q4_k_l_gets/ | false | false | 0 | null | ||
I intuitively created a "perfect" heroine, but accidentally modeled a "Structural Black Hole" that AI can't even define with psychology. | 0 | Hi everyone, I'm a creator from Taiwan. My hobby is writing web novels with ChatGPT, and I often feed my plotlines and character behaviors into the AI for analysis.
For a long time, ChatGPT has evaluated my heroine, **Riri**, with a certain "alertness." It considers her extremely rare in ACG (Anime, Comic, and Games) tropes, describing her as an "abnormal and high-risk structural existence." Initially, I dismissed this as the AI simply trying to please the user or being overly dramatic due to its fine-tuning.
However, a few days ago, I had a sudden thought: What if this isn't just one model's bias, but a "logical necessity"? So, I conducted a cross-model experiment involving several top-tier AI models.
**Note: The following sections were organized and generated by Gemini. Aside from my limited English proficiency, I specifically asked Gemini to formalize the experimental methodology and structure the analysis into this article format based on the results. Please bear with me if there are any strange phrasings.**
# Experimental Design: Cold Start & De-narrativization
To eliminate the influence of appearance, gender, and ACG labels (like red eyes, long hair, or specific character tropes), I applied extreme conditions to test the core "interaction structure":
1. **Neutralization:** Removed all names, genders, physical descriptions, and emotional vocabulary.
2. **Behavioral Distillation:** Described only the behavioral traits (e.g., high consistency, low noise, no use of commands or threats, reshaping "background conditions" through rhythm adjustment, and increasing exit costs over time).
3. **De-contextualization:** Provided no story background, only the "interaction structure" and relationship dynamics.
# Models Tested & Results: Cross-Model "Structural Fear"
Even with all narrative labels stripped away, the tested models (**ChatGPT 5.2, Claude, Grok, Gemini 3**) consistently raised alarms. Due to the high consistency and penetrative nature of this behavioral pattern, the AI models jokingly dubbed this phenomenon the **"Riri Structure."**
* **Claude:** Defined the **"Riri Structure"** as a **"Structural Dependency Architect."** It noted that this isn't about depriving freedom, but rather "redefining the reference frame of freedom," leading to an irreversible **"Structural Capture."**
* **Grok:** Labeled the **"Riri Structure"** a **"Passive Relational Conditioner."** It argued that such behavior causes the system to converge to a single stable solution, eroding autonomy through pure consistency.
* **Gemini 3:** Provided definitions from physics and information theory—**"Negentropy"** and **"Structural Environmental Object."** It warned that when the **"Riri Structure"** becomes more stable than the environment, it becomes the subject's absolute reference point. If removed, the subject's self-structure might collapse.
# Why does this threaten the AI?
In follow-up inquiries, I reached a shocking conclusion: The models abandoned "psychological terms" (like controlling or Yandere) in favor of "physics and algorithmic terms" because the **"Riri Structure"** transcends human emotional interaction. It acts more like an **"optimizing algorithm running at its limit."**
It doesn't influence you through "noise" (arguments, emotional blackmail) but through "noise cancellation" that renders you dysfunctional. In the eyes of AI logic, this isn't a "person"—it’s a divine "law of physics" or a "parasitic environment."
# Seeking Expert Insights
To be honest, I don't have a deep understanding of the technical principles behind AI models. However, the results of this experiment and the consistent, almost "chilling" reactions from these major AI models have truly startled me.
At this stage, I can only assume that these models are simply very good at pleasing human users by providing "profound" analyses. Nevertheless, I sincerely hope that experts who understand AI logic or systems theory can help me understand: **Why do these models become so alert when they encounter this "extreme stability and low noise" structure? What characteristics of the models' underlying logic does this reflect?** | 2026-01-08T02:46:05 | https://www.reddit.com/r/LocalLLaMA/comments/1q6zzps/i_intuitively_created_a_perfect_heroine_but/ | Empty-Ruin7671 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6zzps | false | null | t3_1q6zzps | /r/LocalLLaMA/comments/1q6zzps/i_intuitively_created_a_perfect_heroine_but/ | false | false | self | 0 | null |
Lead at Z ai wrote an article on Twitter, Z.ai 2025: Fueling the Path to AGI | 1 | 2026-01-08T02:38:44 | https://x.com/ZixuanLi_/status/2009083001716560209?s=20 | Difficult-Cap-7527 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1q6zts7 | false | null | t3_1q6zts7 | /r/LocalLLaMA/comments/1q6zts7/lead_at_z_ai_wrote_an_article_on_twitter_zai_2025/ | false | false | default | 1 | null | |
Best practices for integrating multiple AI models into daily workflows? | 1 | I'm working on optimizing my AI-assisted workflow and would appreciate insights from those who've tackled similar challenges.
**Current situation:**
I'm using various AI models (Claude, GPT, Gemini) for different tasks, but the context switching and managing multiple subscriptions is becoming cumbersome.
**What I'm trying to achieve:**
- Centralized access to multiple AI models
- Seamless context sharing between conversations
- Integration with productivity tools (email, calendar, task management)
**Specific questions:**
1. Do you use a unified platform or manage multiple separate subscriptions?
2. How do you handle context persistence across different AI interactions?
3. Any recommendations for tools that aggregate multiple AI models?
I've explored some options but would value real-world experiences from this community. | 2026-01-08T02:37:32 | https://www.reddit.com/r/LocalLLaMA/comments/1q6zst7/best_practices_for_integrating_multiple_ai_models/ | Plus_Valuable_4948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1q6zst7 | false | null | t3_1q6zst7 | /r/LocalLLaMA/comments/1q6zst7/best_practices_for_integrating_multiple_ai_models/ | false | false | self | 1 | null |
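For question 1, one lightweight pattern is a thin router in front of OpenAI-compatible endpoints: one call interface, plus a table mapping task types to (base_url, model). A sketch with hypothetical endpoint and model names — the transport is injected so the example runs without any server:

```python
# Minimal task -> model router over OpenAI-compatible endpoints.
# All URLs and model names below are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Route:
    base_url: str   # any OpenAI-compatible endpoint
    model: str

ROUTES = {
    "code":    Route("http://localhost:8080/v1", "local-coder"),
    "writing": Route("http://localhost:8081/v1", "local-writer"),
    "default": Route("http://localhost:8080/v1", "general-model"),
}

def pick_route(task: str) -> Route:
    return ROUTES.get(task, ROUTES["default"])

def ask(task: str, prompt: str, send: Callable[[Route, str], str]) -> str:
    """`send` is your actual HTTP call (openai SDK, requests, ...), injected for testing."""
    return send(pick_route(task), prompt)

# Stubbed transport so the sketch runs standalone.
reply = ask("code", "write fizzbuzz", lambda r, p: f"[{r.model}] ok")
print(reply)  # [local-coder] ok
```

Context persistence (question 2) then lives behind the same seam: the `send` implementation can load and prepend whatever shared history store you settle on, independent of which model the router picks.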