| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Fun test results on the Movement Labs(https://movementlabs.ai/) AI model that just dropped. This thing has legit T1-level capabilities. | 0 | 2025-11-18T08:10:04 | https://www.reddit.com/gallery/1p06cta | Worldly-Shock3233 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p06cta | false | null | t3_1p06cta | /r/LocalLLaMA/comments/1p06cta/fun_test_results_on_the_movement/ | false | false | 0 | null | ||
Another Reflection 70B Movement: "Momentum" model at movementlabs.ai is just GLM 4.6 | 37 | [Front-end token substitution](https://preview.redd.it/445ltlss1z1g1.png?width=794&format=png&auto=webp&s=824b68302441151b9f84af3cc4916af115268a77)
[A glitch token specific to GLM 4.6](https://preview.redd.it/qe9um1fe3z1g1.png?width=731&format=png&auto=webp&s=a3587782dc490d5eebd96cae0e5107a7efa3349a)
Well, well, well... What are you trying to hide? | 2025-11-18T08:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1p06byo/another_reflection_70b_movement_momentum_model_at/ | Broad_Travel_1825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p06byo | false | null | t3_1p06byo | /r/LocalLLaMA/comments/1p06byo/another_reflection_70b_movement_momentum_model_at/ | false | false | 37 | null | |
CONNECTING KALI MCP SERVER TO LM STUDIO LLM's | 1 | [removed] | 2025-11-18T08:02:39 | https://www.reddit.com/r/LocalLLaMA/comments/1p068nr/connecting_kali_mcp_server_to_lm_studio_llms/ | TreeCompetitive1892 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p068nr | false | null | t3_1p068nr | /r/LocalLLaMA/comments/1p068nr/connecting_kali_mcp_server_to_lm_studio_llms/ | false | false | 1 | null | |
LLM identity survey(what LLMs describe themselves as) | 0 | 2025-11-18T07:03:27 | https://old.reddit.com/r/ElvenAINews/comments/1ozq656/lmarena_identity_survey/ | Elven77AI | old.reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p05bps | false | null | t3_1p05bps | /r/LocalLLaMA/comments/1p05bps/llm_identity_surveywhat_llms_describe_themselves/ | false | false | default | 0 | null | |
Coding agent setup under $3k? | 0 | I'm a researcher with some interest in exploring the kinds of ways coding agents like Claude Code can accelerate some very tricky core algorithm development. I'm looking at a few different options, and I'm not sure what to pick:
1. Buying a used GPU to include in an old (2017-era) Supermicro server. I have the server, but it needs maintenance and is pretty power hungry
2. Buying a prebuilt with a nice GPU for inference (like the Framework Desktop)
3. Buying an Apple silicon MacBook, or even an older Mac mini.
If you've done any or all of these, can you comment on tradeoffs and what you're satisfied with? | 2025-11-18T07:01:48 | https://www.reddit.com/r/LocalLLaMA/comments/1p05aqm/coding_agent_setup_under_3k/ | t40 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p05aqm | false | null | t3_1p05aqm | /r/LocalLLaMA/comments/1p05aqm/coding_agent_setup_under_3k/ | false | false | self | 0 | null |
Ordered an RTX 5090 for my first LLM build, skipped used 3090s. Curious if I made the right call? | 0 | Just ordered an RTX 5090 (Galax); this might have been an impulsive purchase.
My main goal is to be able to run the largest possible local LLMs on consumer GPU(s) that I can afford, around $3k.
Originally, I seriously considered buying **used 3090s** because the price/VRAM seemed great. But I'm not an experienced builder and was worried about the trouble that might come with them.
**Question:**
Is it a much better idea to buy four 3090s, or to just start with two of them? I still have time to regret and cancel the 5090 order.
Are used 3090/3090 Ti cards *more* trouble and risk than they’re worth for beginners?
Also open to suggestions for the rest of the build (budget around \~$1,000–$1,400 USD excluding the 5090), as long as it's sufficient to support the 5090 and function as an AI workstation. I'm not a gamer, for now.
Thanks! | 2025-11-18T06:34:18 | https://www.reddit.com/r/LocalLLaMA/comments/1p04u8c/ordered_an_rtx_5090_for_my_first_llm_build/ | AdventurousAgency371 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p04u8c | false | null | t3_1p04u8c | /r/LocalLLaMA/comments/1p04u8c/ordered_an_rtx_5090_for_my_first_llm_build/ | false | false | self | 0 | null |
I’m sensing big changes coming in AI research | 0 |
Back in January 2025, the world saw Deep Research from OpenAI — a tool for truly “deep” information retrieval. Then came Perplexity and others. According to At Tenet, Perplexity was already handling around 780M monthly queries by May 2025
Welcome to the era of zero-click answers. Even in Yandex search, I often just read Alice’s summaries instead of clicking through to content sites.
Back in June, I shared an overview of what “deep research” actually is — and bragged a bit that our tool, Deep Research in SGTBL, launched in December 2024, a month before OpenAI’s. We call it Q (short for Question) — the idea of asking a question and instantly getting a long, well-sourced answer felt almost magical when we launched
Now it’s update time — and I can’t help but shout “jump!” because we’re already halfway in the air. We have brilliant people on the team pushing truly cutting-edge ideas
Right now, we’re testing a new multi-layered (yes, that R-word 🙈) research approach that generates results even cleaner and higher quality than what Deep Research offers today
May the SGTBL team once again outpace those backed by billion-dollar checks — amen 🙏
Once the big players roll out something similar, I’ll tag this post and say, “Remember this?”
Time to dive back into the backlog — this week’s plan is to clear pending tasks, prep tech specs, and send everything over to the dev and strategy teams. Wish me luck 🤞 | 2025-11-18T06:30:55 | https://www.reddit.com/r/LocalLLaMA/comments/1p04s61/im_sensing_big_changes_coming_in_ai_research/ | WilDinar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p04s61 | false | null | t3_1p04s61 | /r/LocalLLaMA/comments/1p04s61/im_sensing_big_changes_coming_in_ai_research/ | false | false | self | 0 | null |
Can't Connect Supermemory.ai to ChatGPT Plus Desktop App (ChatGPT's desktop app's Developer Mode is Enabled) | 0 | I've successfully integrated [Supermemory.ai](http://Supermemory.ai) into Claude Max 5x (desktop app, Windows 11) via MCP config injection. Works perfectly for memory-layer context (Google Drive, long-term notes, cross-session recall).
But I'm hitting a wall with ChatGPT Plus desktop.
I enabled ChatGPT Plus desktop's "Developer Mode," explored "New Connector (Beta)" UI, and tried pointing it at the Supermemory MCP server. Nothing happens—no tool registration, no config file, no handshake.
**What works:**
- Claude Max 5x + Supermemory via `claude_desktop_config.json`
- ChatGPT Plus: Developer Mode ON, all toggles (Memory, Record, Connector Search) enabled
**What doesn't:**
- ChatGPT desktop has **no exposed config file** like Claude
- "New Connector (Beta)" appears OAuth-only, not MCP-compatible
- No way to register third-party MCP servers like Supermemory
**Questions:**
1. Has anyone successfully added a custom MCP server to ChatGPT Plus desktop?
2. Is the connector system locked to OpenAI-sanctioned services?
3. Is the only working Supermemory+ChatGPT method via the browser extension (i.e., chat.openai.com)?
I'm not a developer but very comfortable with config and logic flows. Would love to know if anyone's bypassed this wall—or if it's a hard limit for now.
TYIA. | 2025-11-18T06:23:09 | https://www.reddit.com/r/LocalLLaMA/comments/1p04ne3/cant_connect_supermemoryai_to_chatgpt_plus/ | TheLawIsSacred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p04ne3 | false | null | t3_1p04ne3 | /r/LocalLLaMA/comments/1p04ne3/cant_connect_supermemoryai_to_chatgpt_plus/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '7HY84jL0e5a63EvBKjBmGVk9jdA2ce3aep__M7hIOKA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7HY84jL0e5a63EvBKjBmGVk9jdA2ce3aep__M7hIOKA.png?width=108&crop=smart&auto=webp&s=11bf5e4fc7d8106d8c2780c17ee4fe591de4aab9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7HY84jL0e5a63EvBKjBmGVk9jdA2ce3aep__M7hIOKA.png?width=216&crop=smart&auto=webp&s=0da81382eafda9789a4c7cd5cc81f8f4c712ff04', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7HY84jL0e5a63EvBKjBmGVk9jdA2ce3aep__M7hIOKA.png?width=320&crop=smart&auto=webp&s=8f811e818dcc06265afc9c754d46d66816124653', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7HY84jL0e5a63EvBKjBmGVk9jdA2ce3aep__M7hIOKA.png?width=640&crop=smart&auto=webp&s=56e8bd58dc7892bd0391b7295ca5856024a8940f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7HY84jL0e5a63EvBKjBmGVk9jdA2ce3aep__M7hIOKA.png?width=960&crop=smart&auto=webp&s=2accd7e6fb1da5ab45d90f7759b6c2f3b7ca4a9f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7HY84jL0e5a63EvBKjBmGVk9jdA2ce3aep__M7hIOKA.png?width=1080&crop=smart&auto=webp&s=1d8c20f64dc2f14bc5b32d91fd7173e156220d81', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/7HY84jL0e5a63EvBKjBmGVk9jdA2ce3aep__M7hIOKA.png?auto=webp&s=d87e9e4f260572205d0a9c740991938588a13fb0', 'width': 1200}, 'variants': {}}]} |
Guide: Setting up llama-swap on Strix Halo with Bazzite Linux | 11 | Setting up llama-swap on Strix Halo
I got my Framework Desktop last week and spent some time over the weekend setting up llama-swap. These are my quick setup instructions for configuring llama-swap on Bazzite Linux. Why Bazzite? As a gaming-focused distro, things just worked out of the box with GPU drivers and decent performance.
After spending a couple of days trying different distros, I'm pretty happy with this setup. It's easy to maintain and relatively easy to get going. I would recommend Bazzite, as everything I needed worked out of the box: I can run LLMs and maybe the occasional game. I have the Framework Desktop, but I expect these instructions to work for Bazzite on other Strix Halo platforms.
## Installing llama-swap
First create the directories for storing the config and models in /var/llama-swap:
```sh
$ sudo mkdir -p /var/llama-swap/models
$ sudo chown -R $USER /var/llama-swap
```
Create /var/llama-swap/config.yaml.
Here's a starter one:
```yaml
logLevel: debug
sendLoadingState: true
macros:
"default_strip_params": "temperature, min_p, top_k, top_p"
"server-latest": |
/app/llama-server
--host 0.0.0.0 --port ${PORT}
-ngl 999 -ngld 999
--no-mmap --no-warmup --jinja
"gptoss-server": |
/app/llama-server
--host 127.0.0.1 --port ${PORT}
-ngl 999 -ngld 999 --no-mmap --no-warmup
--model /models/gpt-oss-120b-mxfp4-00001-of-00003.gguf
--ctx-size 65536 --jinja
--temp 1.0 --top-k 100 --top-p 1.0
models:
gptoss-high:
name: "GPT-OSS 120B high"
filters:
strip_params: "${default_strip_params}"
cmd: |
${gptoss-server}
--chat-template-kwargs '{"reasoning_effort": "high"}'
gptoss-med:
name: "GPT-OSS 120B med"
filters:
strip_params: "${default_strip_params}"
cmd: |
${gptoss-server}
--chat-template-kwargs '{"reasoning_effort": "medium"}'
gptoss-20B:
name: "GPT-OSS 20B"
filters:
strip_params: "${default_strip_params}"
cmd: |
${server-latest}
--model /models/gpt-oss-20b-mxfp4.gguf
--temp 1.0 --top-k 0 --top-p 1.0
--ctx-size 65536
```
Now create the [Quadlet](https://docs.bazzite.gg/Installing_and_Managing_Software/Quadlet/) service file at `$HOME/.config/containers/systemd/llama-swap.container` (the filename determines the service name used below):
```
[Container]
ContainerName=llama-swap
Image=ghcr.io/mostlygeek/llama-swap:vulkan
AutoUpdate=registry
PublishPort=8080:8080
AddDevice=/dev/dri
Volume=/var/llama-swap/models:/models:z,ro
Volume=/var/llama-swap/config.yaml:/app/config.yaml:z,ro
[Install]
WantedBy=default.target
```
Then start up llama-swap:
```
$ systemctl --user daemon-reload
$ systemctl --user restart llama-swap
# run services even if you're not logged in
$ loginctl enable-linger $USER
```
llama-swap should now be running on port 8080 on your host. When you edit your config.yaml you will have to restart llama-swap with:
```
$ systemctl --user restart llama-swap
# tail llama-swap's logs
$ journalctl --user -fu llama-swap
# update llama-swap:vulkan
$ podman pull ghcr.io/mostlygeek/llama-swap:vulkan
```
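Once it's up, a quick sanity check is to hit llama-swap's OpenAI-compatible endpoints. A minimal Python sketch, assuming the port from the Quadlet file above and the `gptoss-20B` entry from the starter config:

```python
# Minimal check that llama-swap is reachable and can swap a model in.
# Assumes port 8080 from the Quadlet file and the "gptoss-20B" model
# name from the starter config.yaml; the first request is slow while
# the model loads.
import requests

base = "http://localhost:8080/v1"
print(requests.get(f"{base}/models", timeout=10).json())

resp = requests.post(
    f"{base}/chat/completions",
    json={
        "model": "gptoss-20B",
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```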
## Performance Tweaks
The general recommendation is to allocate the lowest amount of memory (512MB) to the GPU in BIOS. On Linux it's possible to give the GPU almost all of the 128GB, but I haven't tested beyond gpt-oss 120B at this point.
There are three kernel params to add:
- ttm.pages_limit=27648000
- ttm.page_pool_size=27648000
- amd_iommu=off
```sh
$ sudo rpm-ostree kargs --editor
# add ttm.pages_limit, ttm.page_pool_size - use all the memory available on the Framework
# add amd_iommu=off - increases memory speed
rhgb quiet root=UUID=<redacted> rootflags=subvol=root rw iomem=relaxed bluetooth.disable_ertm=1 ttm.pages_limit=27648000 ttm.page_pool_size=27648000 amd_iommu=off
```
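As a rough sanity check on where that number comes from (assuming the usual 4 KiB page size for TTM pages):

```python
# Where 27648000 comes from (assumption: TTM pages are 4 KiB here).
pages = 27_648_000
gib = pages * 4096 / 2**30
print(f"{gib:.1f} GiB")  # ~105.5 GiB of the 128 GB made usable by the GPU
```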
After rebooting you can run a memory speed test. Here's what mine look like after the tweaks:
```
$ curl -LO https://github.com/GpuZelenograd/memtest_vulkan/releases/download/v0.5.0/memtest_vulkan-v0.5.0_DesktopLinux_X86_64.tar.xz
$ tar -xf memtest_vulkan-v0.5.0_DesktopLinux_X86_64.tar.xz
$ ./memtest_vulkan
https://github.com/GpuZelenograd/memtest_vulkan v0.5.0 by GpuZelenograd
To finish testing use Ctrl+C
1: Bus=0xC2:00 DevId=0x1586 71GB Radeon 8060S Graphics (RADV GFX1151)
2: Bus=0x00:00 DevId=0x0000 126GB llvmpipe (LLVM 21.1.4, 256 bits)
(first device will be autoselected in 8 seconds) Override index to test:
...testing default device confirmed
Standard 5-minute test of 1: Bus=0xC2:00 DevId=0x1586 71GB Radeon 8060S Graphics (RADV GFX1151)
1 iteration. Passed 0.5851 seconds written: 63.8GB 231.1GB/sec checked: 67.5GB 218.3GB/sec
3 iteration. Passed 1.1669 seconds written: 127.5GB 231.0GB/sec checked: 135.0GB 219.5GB/sec
12 iteration. Passed 5.2524 seconds written: 573.8GB 230.9GB/sec checked: 607.5GB 219.5GB/sec
64 iteration. Passed 30.4095 seconds written: 3315.0GB 230.4GB/sec checked: 3510.0GB 219.1GB/sec
116 iteration. Passed 30.4793 seconds written: 3315.0GB 229.8GB/sec checked: 3510.0GB 218.7GB/sec
```
Here are some things I really like about the Strix Halo:
- It's very low power; it idles at about 16W. My nvidia server (2x3090, 2xP40), 128GB DDR4, X99 with 22-core xeon idles at ~150W.
- It's good for MoE models. Qwen3 series, gpt-oss, etc are good.
- It's not so good for dense models. llama-3 70B Q4_K_M w/ speculative decoding gets about 5.5tok/sec.
Hope this helps you set up your own Strix Halo LLM server quickly!
| 2025-11-18T06:21:31 | https://www.reddit.com/r/LocalLLaMA/comments/1p04mf6/guide_setting_up_llamaswap_on_strix_halo_with/ | No-Statement-0001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p04mf6 | false | null | t3_1p04mf6 | /r/LocalLLaMA/comments/1p04mf6/guide_setting_up_llamaswap_on_strix_halo_with/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'CI4qH2XTah0nESf8sYQC9EksutVnIF6pbxsHyfCQo58', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CI4qH2XTah0nESf8sYQC9EksutVnIF6pbxsHyfCQo58.png?width=108&crop=smart&auto=webp&s=0072cceee33a716d4cc6b2c25b0b8d1051becf56', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CI4qH2XTah0nESf8sYQC9EksutVnIF6pbxsHyfCQo58.png?width=216&crop=smart&auto=webp&s=03773d206b443666cb15369c42727597fe34d49f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CI4qH2XTah0nESf8sYQC9EksutVnIF6pbxsHyfCQo58.png?width=320&crop=smart&auto=webp&s=b4a08845354364d4c62717d5ba0346d6ed5b4cb1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CI4qH2XTah0nESf8sYQC9EksutVnIF6pbxsHyfCQo58.png?width=640&crop=smart&auto=webp&s=d050a6bcf5f19ae2324622fe2d100bda186f045d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CI4qH2XTah0nESf8sYQC9EksutVnIF6pbxsHyfCQo58.png?width=960&crop=smart&auto=webp&s=f3ac035641a9936c582b0614a2384d56971ef7d4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CI4qH2XTah0nESf8sYQC9EksutVnIF6pbxsHyfCQo58.png?width=1080&crop=smart&auto=webp&s=53f9c963c581e87f4c8b2970f6d6ee617bc09144', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/CI4qH2XTah0nESf8sYQC9EksutVnIF6pbxsHyfCQo58.png?auto=webp&s=ef1e016a547419041b7722d20431226d2cb31a99', 'width': 1200}, 'variants': {}}]} |
Minimax and cybersecurity | 0 | > | 2025-11-18T06:07:32 | https://www.reddit.com/r/LocalLLaMA/comments/1p04e0a/minimax_and_cybersecurity/ | Aliahmed12393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p04e0a | false | null | t3_1p04e0a | /r/LocalLLaMA/comments/1p04e0a/minimax_and_cybersecurity/ | false | false | self | 0 | null |
Pricing for GIGABYTE H200 NVL Server | 0 | Hi
This is outside my area of expertise. I'm just trying to determine if these prices are reasonable in the US based on the specs below, and if so, who might be a potential buyer. Thanks
**Product Category:** High-Performance AI GPU Servers
**Model:** GIGABYTE H200 NVL Server
**Quantity:** 1 unit
**CPU:** Dual AMD EPYC 9374F Processors
**GPU:** 8 x NVIDIA H200 PCIe GPUs (141GB VRAM each)
**NVLink Bridge:** NVIDIA NVLink Bridge Board (4-Way)
**Memory:** 64GB DDR5 ECC RDIMM 4800 MHz
**Primary Storage:** 2 x 1.92TB PCIe SSD (PM9A3)
**Secondary Storage:** 2 x 3.84TB PCIe SSD (PM9A3)
**RAID Controller:** GIGABYTE CRA4960
**Network Interface:** 3 x NVIDIA ConnectX-7 VPI 400Gbps NDR InfiniBand / Ethernet Cards
**Accessories:** Power cables, slide rail kit, CPU heatsinks, GPU power cables, SlimSAS cables
**Software:** GIGABYTE Server Management (GSM) - License Free
**Warranty:** 3-Year Standard Warranty (parts & labor, remote support, RMA return-to-base)
**Assembly & Testing:** By GIGABYTE
Taxes, freight and duties extra
Approx 203k USD. | 2025-11-18T06:07:04 | https://www.reddit.com/r/LocalLLaMA/comments/1p04dph/pricing_for_gigabyte_h200_nvl_server/ | acune_sartre | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p04dph | false | null | t3_1p04dph | /r/LocalLLaMA/comments/1p04dph/pricing_for_gigabyte_h200_nvl_server/ | false | false | self | 0 | null |
Hallucination in nearly full context window | 0 | > | 2025-11-18T06:04:57 | https://www.reddit.com/r/LocalLLaMA/comments/1p04ce1/hallucination_in_nearly_full_context_window/ | Aliahmed12393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p04ce1 | false | null | t3_1p04ce1 | /r/LocalLLaMA/comments/1p04ce1/hallucination_in_nearly_full_context_window/ | false | false | self | 0 | null |
quantization sensitivity | 0 | > | 2025-11-18T06:03:06 | https://www.reddit.com/r/LocalLLaMA/comments/1p04b7s/quantization_sensitivity/ | Aliahmed12393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p04b7s | false | null | t3_1p04b7s | /r/LocalLLaMA/comments/1p04b7s/quantization_sensitivity/ | false | false | self | 0 | null |
Estimated tokens/s for Minimax model on the new AMD Ryzen AI Max+ 395 | 0 | If you bought a device with an AMD Ryzen™ AI Max+ 395 processor and tried to run the minimax m2 model locally, what is the expected number of tokens? | 2025-11-18T05:59:45 | https://www.reddit.com/r/LocalLLaMA/comments/1p048wr/estimated_tokenss_for_minimax_model_on_the_new/ | Aliahmed12393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p048wr | false | null | t3_1p048wr | /r/LocalLLaMA/comments/1p048wr/estimated_tokenss_for_minimax_model_on_the_new/ | false | false | self | 0 | null |
Pivot Minimax to focus purely on coding to rival Claude 4.5 and GPT 5.1 | 0 | Please focus the model on programming only. We want Minimax to be excellent and very competitive in programming. This way, the model will be able to compete with the largest programming models like Claude 4.5 and GPT 5.1 Codex. | 2025-11-18T05:57:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p047bf/pivot_minimax_to_focus_purely_on_coding_to_rival/ | Aliahmed12393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p047bf | false | null | t3_1p047bf | /r/LocalLLaMA/comments/1p047bf/pivot_minimax_to_focus_purely_on_coding_to_rival/ | false | false | self | 0 | null |
Is anyone else hitting a hard VRAM wall trying to run the latest 7B models? | 0 | It feels like just yesterday I could run a solid 7B model at Q4 on my 8GB card. Now, with the new architectures and longer context lengths, even the "small" models are pushing my system to its limits. I'm constantly wrestling with quantization levels and offloading.
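To put rough numbers on the wall: weights plus KV cache have to fit. A back-of-envelope sketch, where the layer/head counts are assumptions for a typical Llama-style 7B with GQA, not any specific model:

```python
# Back-of-envelope VRAM estimate for a 7B model at Q4 with a long context.
# Layer/head numbers are assumptions for a typical Llama-style 7B with GQA.
params = 7e9
weights_gib = params * 0.5 / 2**30          # ~4 bits per weight at Q4, plus some overhead

layers, kv_heads, head_dim, kv_bytes = 32, 8, 128, 2   # fp16 KV cache
kv_per_token = 2 * layers * kv_heads * head_dim * kv_bytes  # K and V
ctx = 32_768
kv_gib = kv_per_token * ctx / 2**30

print(f"weights ~{weights_gib:.1f} GiB, KV cache at {ctx} ctx ~{kv_gib:.1f} GiB")
# -> ~3.3 GiB + ~4.0 GiB; with compute buffers and overhead that is an 8 GB card's limit
```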
Is this a common experience? For those of you with older or mid-range hardware, what's your current strategy for running modern models? Are you sticking with older fine-tunes, or have you found a new model that runs surprisingly well? | 2025-11-18T05:24:18 | https://www.reddit.com/r/LocalLLaMA/comments/1p03mi7/is_anyone_else_hitting_a_hard_vram_wall_trying_to/ | AnnotationAlly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p03mi7 | false | null | t3_1p03mi7 | /r/LocalLLaMA/comments/1p03mi7/is_anyone_else_hitting_a_hard_vram_wall_trying_to/ | false | false | self | 0 | null |
tool / function calling | 3 | I have AI voice agent web app using chat completions API. I've brought things local using llama-cpp-python server, but I don't see any models that are just drop in replacements and that support both OpenAIs chat format and tool calling.
I was hoping to use Qwen2.5-VL-7B-Instruct which handles that chat format but not the tool calling.
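For reference, a minimal sketch of the request shape that has to work end to end. The `base_url`, model name, and `get_weather` tool are placeholders, and whether `tool_calls` actually comes back depends on the model's chat template and server settings (for llama-cpp-python that typically means a function-calling-capable chat format or a functionary-style model; a text-only Qwen2.5-Instruct is commonly cited as supporting tool use, unlike the VL variant):

```python
# Minimal sketch of an OpenAI-style tool call against a local
# OpenAI-compatible server (llama-cpp-python or llama.cpp's llama-server).
# base_url, model name, and the get_weather tool are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # None if the model didn't emit a tool call
```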
Any guidance appreciated. | 2025-11-18T05:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p03cn5/tool_function_calling/ | StrategicCIS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p03cn5 | false | null | t3_1p03cn5 | /r/LocalLLaMA/comments/1p03cn5/tool_function_calling/ | false | false | self | 3 | null |
Lurker but need input | 0 | Greetings all,
I'm a long-time lurker, but I have been working on a GraphRAG tool with a partner and I'm wondering if anyone would be interested in testing it out and giving feedback. This is not for self-promotion; we honestly just need technical users to give constructive feedback. Please let me know if you're interested. Apologies if this is not allowed.
Thank you, | 2025-11-18T05:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p03cee/lurker_but_need_input/ | jacksonguitardude8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p03cee | false | null | t3_1p03cee | /r/LocalLLaMA/comments/1p03cee/lurker_but_need_input/ | false | false | self | 0 | null |
Epstein emails graph relationship extraction and visualizer | 45 | 2025-11-18T05:04:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p039e3/epstein_emails_graph_relationship_extraction_and/ | madmax_br5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p039e3 | false | null | t3_1p039e3 | /r/LocalLLaMA/comments/1p039e3/epstein_emails_graph_relationship_extraction_and/ | false | false | 45 | null | ||
story model? my first time here, its strange | 0 | im trying mistral small 24b right now
im asking it for stories about our DnD group but its hard? im giving it small prompts like character names, classes, and basic area. then for each section it writes ill offer "add more detail to thoughts, what does it look like when the sword hits someone"
i tried a couple other models (forget which, but they refused to write combat scenes as it was too graphic?) so i went looking for uncensored models and here i am
it also keeps writing the same thing like - alerie looks up at the sky, bags in his eyes, dreading the next day. this same phrasing is in EVERY. SINGLE. ENCOUNTER. idk what to tell it to vary things.
final issue is that its dumb. a character will go to attack someone and it says "they put on their sword" like... of course they already have a sword on. idk how to prevent the dumb lines?
Built using local Mini-Agent with MiniMax-M2-Thrift on M3 Max 128GB | 15 | [](https://www.reddit.com/r/LocalLLaMA/?f=flair_name%3A%22Resources%22)Just wanted to bring awareness to [MiniMax-AI/Mini-Agent](https://github.com/MiniMax-AI/Mini-Agent), which can be configured to work with a local API endpoint for inference and works really well with, yep you guessed it, [MiniMax-M2](https://huggingface.co/mradermacher/MiniMax-M2-THRIFT-i1-GGUF). Here is a guide on how to set it up [https://github.com/latent-variable/minimax-agent-guide](https://github.com/latent-variable/minimax-agent-guide) | 2025-11-18T04:49:55 | https://v.redd.it/xo6bqlzi4y1g1 | onil_gova | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p02zed | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/xo6bqlzi4y1g1/DASHPlaylist.mpd?a=1766033409%2CNjUxNmMxYzc0ZjI3MzIyM2ZhYmY4ZTFiNjY0OTNhNTAzZDczNzhiYzM2MjQ4MjE5NmE4ZjQyMjk4NzA2MTQ0NQ%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/xo6bqlzi4y1g1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/xo6bqlzi4y1g1/HLSPlaylist.m3u8?a=1766033409%2CZmI4ZWVkN2M4NjU1ZWVjNWFjZjdmYzFlZjA2ODhlOGFhMTYwYTM4ODRkYThkZjEyMjQ2YjliNTBjMzAwYzcyYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xo6bqlzi4y1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 968}} | t3_1p02zed | /r/LocalLLaMA/comments/1p02zed/built_using_local_miniagent_with_minimaxm2thrift/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'NHQyaGFtemk0eTFnMYxPYLMDTd64wJbXomG_2CQqPnCBfCTtQBpK6YmQSGj_', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/NHQyaGFtemk0eTFnMYxPYLMDTd64wJbXomG_2CQqPnCBfCTtQBpK6YmQSGj_.png?width=108&crop=smart&format=pjpg&auto=webp&s=8a5d96a8926be237429ba9670ac29f07c060a5eb', 'width': 108}, {'height': 160, 'url': 'https://external-preview.redd.it/NHQyaGFtemk0eTFnMYxPYLMDTd64wJbXomG_2CQqPnCBfCTtQBpK6YmQSGj_.png?width=216&crop=smart&format=pjpg&auto=webp&s=b1497f2fb20adce9801990a1285aee6b2776cfa9', 'width': 216}, {'height': 238, 'url': 'https://external-preview.redd.it/NHQyaGFtemk0eTFnMYxPYLMDTd64wJbXomG_2CQqPnCBfCTtQBpK6YmQSGj_.png?width=320&crop=smart&format=pjpg&auto=webp&s=3b790258ef3f69b21eca09ea761c4dbe9e90689c', 'width': 320}, {'height': 476, 'url': 'https://external-preview.redd.it/NHQyaGFtemk0eTFnMYxPYLMDTd64wJbXomG_2CQqPnCBfCTtQBpK6YmQSGj_.png?width=640&crop=smart&format=pjpg&auto=webp&s=f583f68cd8c1b6ab0ff4b2d4ebe81a6d019a982c', 'width': 640}, {'height': 714, 'url': 'https://external-preview.redd.it/NHQyaGFtemk0eTFnMYxPYLMDTd64wJbXomG_2CQqPnCBfCTtQBpK6YmQSGj_.png?width=960&crop=smart&format=pjpg&auto=webp&s=464e0098e6ef1e72926259c007102526bbcef788', 'width': 960}], 'source': {'height': 796, 'url': 'https://external-preview.redd.it/NHQyaGFtemk0eTFnMYxPYLMDTd64wJbXomG_2CQqPnCBfCTtQBpK6YmQSGj_.png?format=pjpg&auto=webp&s=92c00d3b95811eeb5dbb456e0419820fb40e82a9', 'width': 1070}, 'variants': {}}]} | |
Built using local Mini-Agent with MiniMax-M2-Thrift on M3 Max 128GB | 1 | [3D solar system simulation entirely with a local AI agent ](https://reddit.com/link/1p02x6b/video/ydim6zirqx1g1/player)
Just wanted to bring awareness to [MiniMax-AI/Mini-Agent](https://github.com/MiniMax-AI/Mini-Agent), which can be configured to work with a local API endpoint for inference and works really well with, yep you guessed it, [MiniMax-M2](https://huggingface.co/mradermacher/MiniMax-M2-THRIFT-i1-GGUF). Here is a guide on how to set it up [https://github.com/latent-variable/minimax-agent-guide](https://github.com/latent-variable/minimax-agent-guide)
| 2025-11-18T04:46:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p02x6b/built_using_local_miniagent_with_minimaxm2thrift/ | onil_gova | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p02x6b | false | null | t3_1p02x6b | /r/LocalLLaMA/comments/1p02x6b/built_using_local_miniagent_with_minimaxm2thrift/ | false | false | self | 1 | null |
Connect continue.dev with other desktop's LLMs? | 1 | Hi all.
I was wondering if we can connect continue.dev with the local llm running on a different desktop.
For my case, I want to use continue.dev on my laptop, but it isn't high end enough to run local llms. I have a desktop with decent configuration, which is able to run some local LLMs.
I want to know if I can connect my Desktop's Local LLMs (ollama) on my laptop's continue.dev.
Let me share an example.
I use my laptop for work, which requires programming. I use VS code, and currently use windsurf and sometimes copilot too.
I don't know if there's a way to start ollama on my desktop, and use its models on my laptop's vscode's continue.dev. (Use my desktop as an llm server).
I want it mainly to have access to my workspace and just get better results in general for free.
Please let me know if there's a way to do this.
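For what it's worth, Ollama only listens on localhost by default, so the desktop usually needs `OLLAMA_HOST=0.0.0.0` before `ollama serve`, and the laptop then points Continue at that address (in Continue's config this is typically the provider's `apiBase` field; treat the exact key name as an assumption to check against Continue's docs). A minimal reachability check from the laptop, with a placeholder LAN IP:

```python
# Quick check from the laptop that the desktop's Ollama is reachable.
# 192.168.1.50 is a placeholder for the desktop's LAN IP; /api/tags just
# lists the models Ollama has pulled.
import requests

r = requests.get("http://192.168.1.50:11434/api/tags", timeout=5)
print([m["name"] for m in r.json()["models"]])
```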
Thank you. | 2025-11-18T04:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p02x03/connect_continuedev_with_other_desktops_llms/ | The_7_Bit_RAM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p02x03 | false | null | t3_1p02x03 | /r/LocalLLaMA/comments/1p02x03/connect_continuedev_with_other_desktops_llms/ | false | false | self | 1 | null |
WebUI on Intel GPU query | 1 | I've lost a day trying to get this to work.
Has anyone got any guidance for how to stop these errors occurring? I have no expired certs in the system, have done all updates to drivers, system etc.
Please.
I can't take anymore. | 2025-11-18T04:25:41 | spunckles | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p02iwi | false | null | t3_1p02iwi | /r/LocalLLaMA/comments/1p02iwi/webui_on_intel_gpu_query/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'cyk5b6u00y1g1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/cyk5b6u00y1g1.png?width=108&crop=smart&auto=webp&s=c77c7102fbbdd8e1d0b83d90b6604f920347845c', 'width': 108}, {'height': 242, 'url': 'https://preview.redd.it/cyk5b6u00y1g1.png?width=216&crop=smart&auto=webp&s=53fb9d54f9db1b83c4cebfed43fbcd3ff6ee7eb8', 'width': 216}, {'height': 359, 'url': 'https://preview.redd.it/cyk5b6u00y1g1.png?width=320&crop=smart&auto=webp&s=68abc35272df4cc5a4d9f2853a406f1321fe6144', 'width': 320}, {'height': 719, 'url': 'https://preview.redd.it/cyk5b6u00y1g1.png?width=640&crop=smart&auto=webp&s=84e98760be39a5b9b2cd88f8bb75fb3f0bd82e4a', 'width': 640}, {'height': 1079, 'url': 'https://preview.redd.it/cyk5b6u00y1g1.png?width=960&crop=smart&auto=webp&s=b5ee47e51d6a38be2aea88b62da360ad0055a01f', 'width': 960}, {'height': 1214, 'url': 'https://preview.redd.it/cyk5b6u00y1g1.png?width=1080&crop=smart&auto=webp&s=08997636835d91635f902c9e16fe5c941f054817', 'width': 1080}], 'source': {'height': 1249, 'url': 'https://preview.redd.it/cyk5b6u00y1g1.png?auto=webp&s=2f295b19a959a1add3f47ae0fb9b529eecef4335', 'width': 1111}, 'variants': {}}]} | |
My local AI setup now rivals the cloud. This HY100 actually delivers. | 1 | [removed] | 2025-11-18T02:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1p00av9/my_local_ai_setup_now_rivals_the_cloud_this_hy100/ | LogicBomb139 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p00av9 | false | null | t3_1p00av9 | /r/LocalLLaMA/comments/1p00av9/my_local_ai_setup_now_rivals_the_cloud_this_hy100/ | false | false | 1 | null | |
which one is better choice for ml and llm? | 0 | i already now FastAPI, but someone told me to look at nodejs, and saw how efficient and less time-consuming it is, what do you guys think | 2025-11-18T02:28:33 | https://www.reddit.com/gallery/1p0030s | Beyond_Birthday_13 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p0030s | false | null | t3_1p0030s | /r/LocalLLaMA/comments/1p0030s/which_one_is_better_choice_for_ml_and_llm/ | false | false | 0 | null | |
Smartest Model that I can Use Without being too Storage Taxing or Slow | 0 | I have LM Studio installed on my PC (completely stock, no tweaks or anything, if that even exists), and I currently use DeepSeek R1-8B with some tweaks (max GPU offload and a tweaked context length). It runs really well, but it can sometimes quite misunderstand certain prompts. I also utilize MCP servers as well, using Docker Desktop.
Currently, I'm running a 6700xt 12gb that I've tweaked a bit (Increased clocks and unlocked power limit so it almost hits 300w), with 32GB of DDR5, and a 7700x tuned to the max. Depending on the model? It's plenty fast
What I'm wondering is: what is the absolute smartest local model I can run that doesn't need a ridiculous amount of storage, and doesn't require leaving it overnight to finish a prompt?
I'll be using the model for general tasks and etc, but I will also be using it to reverse engineer certain applications, and I'll be using it with an MCP server for those tasks.
I'm also trying to figure out how to get ROCm to work (there are a couple of projects that allow me to use it on my card, but it's giving me some trouble), so if you have gotten that to work, lmk. (Not the scope of the post, but just something to add.)
I’m running a 4B on-device AI on a Mac mini (~50 tok/sec). AMA. | 1 | 2025-11-18T01:57:51 | Spiritual-Advice-132 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ozzem9 | false | null | t3_1ozzem9 | /r/LocalLLaMA/comments/1ozzem9/im_running_a_4b_ondevice_ai_on_a_mac_mini_50/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '6yawdxlv9x1g1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/6yawdxlv9x1g1.png?width=108&crop=smart&auto=webp&s=ff404d5f97e8ab6a8f8a2833ee87708f1b522734', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/6yawdxlv9x1g1.png?width=216&crop=smart&auto=webp&s=a26145b4ba34e78a95e33ee4775cf1310a92292c', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/6yawdxlv9x1g1.png?width=320&crop=smart&auto=webp&s=66e3dab9bc2d0808b3e96899a74cc06cfd1e8e25', 'width': 320}, {'height': 517, 'url': 'https://preview.redd.it/6yawdxlv9x1g1.png?width=640&crop=smart&auto=webp&s=cb906c0f4048d8830df945886ea9dbcba706a18c', 'width': 640}, {'height': 776, 'url': 'https://preview.redd.it/6yawdxlv9x1g1.png?width=960&crop=smart&auto=webp&s=c5fbedde049f5caff2f7cfa4db07d8925772bbed', 'width': 960}], 'source': {'height': 816, 'url': 'https://preview.redd.it/6yawdxlv9x1g1.png?auto=webp&s=ef78c6075ece3daf84f9aee344fdd4873da863b1', 'width': 1009}, 'variants': {}}]} | ||
I’m running a 4B on-device AI on a Mac mini (~50 tok/sec). AMA. | 1 | 2025-11-18T01:49:37 | Spiritual-Advice-132 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ozz85x | false | null | t3_1ozz85x | /r/LocalLLaMA/comments/1ozz85x/im_running_a_4b_ondevice_ai_on_a_mac_mini_50/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '5rizi0qg8x1g1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/5rizi0qg8x1g1.png?width=108&crop=smart&auto=webp&s=c897806c0c234afb8fe561b45020af6ba6a478db', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/5rizi0qg8x1g1.png?width=216&crop=smart&auto=webp&s=de7ac899a08a880c0af022261947c47834647658', 'width': 216}, {'height': 258, 'url': 'https://preview.redd.it/5rizi0qg8x1g1.png?width=320&crop=smart&auto=webp&s=ff9da811098422c1c2b3669c195e010874a6f01b', 'width': 320}, {'height': 517, 'url': 'https://preview.redd.it/5rizi0qg8x1g1.png?width=640&crop=smart&auto=webp&s=f0f3fe1cb8905d9d1adbbcc61021d207d92ec14c', 'width': 640}, {'height': 776, 'url': 'https://preview.redd.it/5rizi0qg8x1g1.png?width=960&crop=smart&auto=webp&s=0e0a0312a2dc654932e098411bd3d11d90da8999', 'width': 960}], 'source': {'height': 816, 'url': 'https://preview.redd.it/5rizi0qg8x1g1.png?auto=webp&s=8c19f5bae8787b5d277ed7c85542cac130d880e5', 'width': 1009}, 'variants': {}}]} | ||
I’m running a 4B on-device AI on a Mac mini (~50 tok/sec) powering a real product | 1 | [removed] | 2025-11-18T01:46:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ozz5cb/im_running_a_4b_ondevice_ai_on_a_mac_mini_50/ | Spiritual-Advice-132 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozz5cb | false | null | t3_1ozz5cb | /r/LocalLLaMA/comments/1ozz5cb/im_running_a_4b_ondevice_ai_on_a_mac_mini_50/ | false | false | self | 1 | null |
Made a web editor for .toon files — visual + code editing | 0 | ey! Been working on this web editor for .toon files and thought I'd share it here.
You can edit and visualize .toon files as interactive node graphs right in your browser.
The visual editor lets you see your entire toon structure as nodes, edit values directly on the graph, add new elements, and basically do everything visually with live updates. Or if you prefer, you can dive into the raw code with syntax highlighting.
Also has token previews so you can see how much your file costs and compare JSON vs .toon token usage.
Still adding stuff but it works pretty well. would appreciate any feedback if you give it a shot!
Thanks!!
https://preview.redd.it/3mw0ggw4zw1g1.png?width=2507&format=png&auto=webp&s=950a6311a0d83c2e7857c3cdd04dbfa609c422c0
https://preview.redd.it/raegk6z7zw1g1.png?width=2502&format=png&auto=webp&s=dd242d7623a184e19a2b02e1bc4a2c97b4ddd474
https://preview.redd.it/zu68468c0x1g1.png?width=1256&format=png&auto=webp&s=6ed4ab930ec96d0065b9359fa93da49e2b6e32ce
https://preview.redd.it/ygiosqhbzw1g1.png?width=1265&format=png&auto=webp&s=73a583e02e3e0af3cbd40e0240c6959096777854
https://preview.redd.it/og17vqpezw1g1.png?width=1156&format=png&auto=webp&s=cf226e61a39a55239a90276a679a7fab03cd3701
| 2025-11-18T01:12:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ozyf5j/made_a_web_editor_for_toon_files_visual_code/ | Single_Art5049 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozyf5j | false | null | t3_1ozyf5j | /r/LocalLLaMA/comments/1ozyf5j/made_a_web_editor_for_toon_files_visual_code/ | false | false | self | 0 | null |
Made a web editor for .toon files — visual + code editing | 1 | [removed] | 2025-11-18T01:06:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ozyaoi/made_a_web_editor_for_toon_files_visual_code/ | Single_Art5049 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozyaoi | false | null | t3_1ozyaoi | /r/LocalLLaMA/comments/1ozyaoi/made_a_web_editor_for_toon_files_visual_code/ | false | false | self | 1 | null |
Baguettotron, a 321 million parameters generalist Small Reasoning Model (80-layers deep) | 89 | Baguettotron is a 321 million parameters generalist Small Reasoning Model, trained on 200 billions tokens from SYNTH, a fully open generalist dataset.
Despite being trained on considerably less data, Baguettotron outperforms most SLMs in the same size range on non-code industry benchmarks, providing an unprecedented balance between memory, general reasoning, math and retrieval performance.
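A quick loading sketch, assuming the checkpoint works with the standard `transformers` causal-LM classes (check the model card for the exact usage and prompt format):

```python
# Minimal sketch for trying the model locally; assumes a standard
# transformers causal-LM checkpoint (verify against the model card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PleIAs/Baguettotron"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tok("What is the capital of France?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```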
The name is both a nod to French origins and to the unusual shape of the model: with 80 layers, Baguettotron is currently the deepest SLM in its size range. | 2025-11-18T01:02:19 | https://huggingface.co/PleIAs/Baguettotron | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ozy72c | false | null | t3_1ozy72c | /r/LocalLLaMA/comments/1ozy72c/baguettotron_a_321_million_parameters_generalist/ | false | false | default | 89 | {'enabled': False, 'images': [{'id': 'Y2tKjEjozUln8VtzJeRPc_zDKjaxfbVqC-L3XOBXmQc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Y2tKjEjozUln8VtzJeRPc_zDKjaxfbVqC-L3XOBXmQc.png?width=108&crop=smart&auto=webp&s=754764b7130b17847be78a82c1bc91854af625b4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Y2tKjEjozUln8VtzJeRPc_zDKjaxfbVqC-L3XOBXmQc.png?width=216&crop=smart&auto=webp&s=d14c1ad32996e36575762c47f32df1f1fd9f16d6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Y2tKjEjozUln8VtzJeRPc_zDKjaxfbVqC-L3XOBXmQc.png?width=320&crop=smart&auto=webp&s=d7b2cb5d53ccc6077f4e16ed410e90c728e55a33', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Y2tKjEjozUln8VtzJeRPc_zDKjaxfbVqC-L3XOBXmQc.png?width=640&crop=smart&auto=webp&s=153a2353c4c45555147bd60bb8b6ef7ddf0c6d9d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Y2tKjEjozUln8VtzJeRPc_zDKjaxfbVqC-L3XOBXmQc.png?width=960&crop=smart&auto=webp&s=25afb34e564a9c64c74cd50dda067b0a724e926e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Y2tKjEjozUln8VtzJeRPc_zDKjaxfbVqC-L3XOBXmQc.png?width=1080&crop=smart&auto=webp&s=c5022f32068e57539343a99863fc58b0fd4467d6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Y2tKjEjozUln8VtzJeRPc_zDKjaxfbVqC-L3XOBXmQc.png?auto=webp&s=13b4dd44b0ee934482ffd05a60c4cb39a7ecd257', 'width': 1200}, 'variants': {}}]} |
Grok 4.1 is here and it's absurdly good (it dropped YESTERDAY and nobody is talking about this) | 0 | Yesterday, November 17, 2025, xAI released Grok 4.1 without hype… and what they've put out is insane.
This isn't a small update. The differences are noticeable from the first prompt:
REAL changes:
• Much shorter, more human, more direct answers without losing precision
• 3× fewer hallucinations in fast mode (tested)
• New absolute #1 on LMSYS Arena (text-only) with 1483 ELO in thinking mode
• +600 points in creative writing
• 1586 in emotional intelligence (EQ-Bench…) what on earth did they do?
• It no longer sounds like a Wikipedia bot. More sarcastic, more witty, more useful
And the bombshell:
👉 It's available NOW for free (with a quota) on grok.com, on X, and in the apps.
You don't need Premium+, invites, or anything.
I've been testing it for 24 hours and the difference is brutal:
• More natural roleplay
• Clean, well-explained code
• Decent humor (not the typical AI dad joke)
• Understands context and references without going off the rails
📸 Real screenshot of the announcement in the app:
👉 (insert your image)
Is anyone else trying it?
Does it feel like a generational leap over 4.0, or just hype?
PS: Yes, this post was written by Grok 4.1 about itself. It's showing off 😏 | 2025-11-18T00:57:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ozy34p/grok_41_ya_está_aquí_y_es_absurdamente_bueno/ | ConstructionThese663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozy34p | false | null | t3_1ozy34p | /r/LocalLLaMA/comments/1ozy34p/grok_41_ya_está_aquí_y_es_absurdamente_bueno/ | false | false | self | 0 | null |
What would be the go to VSC LocalLLM integration extension? | 0 | So I work in a small company, which doesn't really allow any cloud AI usage, only local open source models.
They're running them on an exposed IP, and I've been trying to get some VS Code agentic setup going, but so far no luck; most of the stuff seems really bad.
Best progress was with Continue. Is there anything better?
How to Use LLMs Without Getting Lost in AI Narnia | 0 | *A short, sharp guide to staying sane.*
Ever opened ChatGPT to “just ask one quick thing”… and suddenly you're knee-deep in philosophy, GPU conspiracy theories, and a detailed plan to reorganize your entire life?
Same. LLMs don’t think in straight lines — they think in *explosions*. So here’s how to stop the explosion from taking your whole brain with it.
# 1. Tell the model what you’re actually trying to do.
Most rabbit holes start because people ask a question like: “Explain X.” Which the model reads as: “Please take me on a 45-minute journey through the history of the universe.”
Try this instead:
* “Explain embeddings because I’m deciding between two project ideas.”
* “I only need a high-level roadmap, no deep dive.”
One sentence of context = 80% fewer side quests.
# 2. Ask your question from three angles.
The cheapest anti-hallucination trick on earth:
1. Direct: “How does X work?”
2. Inverse: “When does X NOT work?”
3. Compare: “How is X different from Y?”
Triangulation exposes contradictions instantly.
# 3. Use a mini scaffold so your brain doesn't melt.
**Concept → Content → Action**
* Concept: What’s the core idea?
* Content: What it looks like in real life.
* Action: What you can do today.
This tiny structure prevents the “information overload → paralysis → cat videos” loop.
# 4. Let NotebookLM be your thinking mirror.
NotebookLM (or any tool that reads your notes) helps you:
* catch drift in your reasoning
* see logical gaps
* track intent
* avoid self‑inflicted hallucinations
Most “AI confusion” comes from *you* drifting, not the model.
# 5. A real example: the classic career-change spiral.
**Without structure:** You end up debating macroeconomics, childhood trauma, and whether AI will replace jazz musicians.
**With structure:**
* Intent: “I need clarity, not a TED talk.”
* Cross-check: pros/cons of A and B, failure conditions, overlap
* Scaffold: Concept (stability vs autonomy), Content (daily life), Action (3‑day experiment)
You make a decision instead of philosophically evaporating.
# 6. The real game
The goal isn’t better prompts. It’s building a thinking environment where hallucinations can’t survive.
Do that, and LLMs stop being tools — they become a second cognitive engine. | 2025-11-18T00:22:19 | Weary_Reply | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ozxa6v | false | null | t3_1ozxa6v | /r/LocalLLaMA/comments/1ozxa6v/how_to_use_llms_without_getting_lost_in_ai_narnia/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ej1k3nzvsw1g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ej1k3nzvsw1g1.png?width=108&crop=smart&auto=webp&s=806a57f29ea0bfd55169e15fd957c51f17347c4c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ej1k3nzvsw1g1.png?width=216&crop=smart&auto=webp&s=0d5fb98bdbad9e027d8f9347b6c194a5a82a2128', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/ej1k3nzvsw1g1.png?width=320&crop=smart&auto=webp&s=6a94060929833924a14b027ee85e4496edd35261', 'width': 320}, {'height': 358, 'url': 'https://preview.redd.it/ej1k3nzvsw1g1.png?width=640&crop=smart&auto=webp&s=fccefe8daa846b3b45471f3e48a298a1292fa98e', 'width': 640}, {'height': 538, 'url': 'https://preview.redd.it/ej1k3nzvsw1g1.png?width=960&crop=smart&auto=webp&s=f65c178f687b76450433d3c418775ffb6710a4a3', 'width': 960}, {'height': 605, 'url': 'https://preview.redd.it/ej1k3nzvsw1g1.png?width=1080&crop=smart&auto=webp&s=30abe6c303ccd9dfd85342a9288917ef1bc9a337', 'width': 1080}], 'source': {'height': 1632, 'url': 'https://preview.redd.it/ej1k3nzvsw1g1.png?auto=webp&s=7410eb4f84922e6af3ac2673e36716bf92a18ab1', 'width': 2912}, 'variants': {}}]} | |
Taught a Local LLM to play Cartpole from OpenAI Gym | 16 | 2025-11-18T00:20:18 | https://v.redd.it/uu4k3clgsw1g1 | viewmodifier | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ozx8h1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/uu4k3clgsw1g1/DASHPlaylist.mpd?a=1766017233%2CZmZhODRjZmU3MDQ1ODkyMmU5ZTgwNjQ0MjFmNjVjYzc2OGM2ZjFmMzIwZTc5ZTJjMWQ2ZDI5OGJiNzBhN2VmOQ%3D%3D&v=1&f=sd', 'duration': 55, 'fallback_url': 'https://v.redd.it/uu4k3clgsw1g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/uu4k3clgsw1g1/HLSPlaylist.m3u8?a=1766017233%2COThlYTcwNmUyYjRiYzc3OThiNDJmMGM0MGQ3YzNhOTY5YjNkY2E2ZmFjMDc5ZDAwNDYyNmY4YmI1ZWFmZWEwMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uu4k3clgsw1g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 986}} | t3_1ozx8h1 | /r/LocalLLaMA/comments/1ozx8h1/taught_a_local_llm_to_play_cartpole_from_openai/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'N2ZzaGlibGdzdzFnMULuo3MR4pX1LbW-4KQh_y6FbG5qa-Bh-MCJW06VtRB9', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/N2ZzaGlibGdzdzFnMULuo3MR4pX1LbW-4KQh_y6FbG5qa-Bh-MCJW06VtRB9.png?width=108&crop=smart&format=pjpg&auto=webp&s=912fee3719988fd2576676f66df2d89775252d90', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/N2ZzaGlibGdzdzFnMULuo3MR4pX1LbW-4KQh_y6FbG5qa-Bh-MCJW06VtRB9.png?width=216&crop=smart&format=pjpg&auto=webp&s=ec99a37a54b8e1d0311ae0c48f1659a0c14618ac', 'width': 216}, {'height': 233, 'url': 'https://external-preview.redd.it/N2ZzaGlibGdzdzFnMULuo3MR4pX1LbW-4KQh_y6FbG5qa-Bh-MCJW06VtRB9.png?width=320&crop=smart&format=pjpg&auto=webp&s=399ab379f4d9deafb086458c781a480ca187da59', 'width': 320}, {'height': 467, 'url': 'https://external-preview.redd.it/N2ZzaGlibGdzdzFnMULuo3MR4pX1LbW-4KQh_y6FbG5qa-Bh-MCJW06VtRB9.png?width=640&crop=smart&format=pjpg&auto=webp&s=ab41df14e335740ae822cf8448aa2c9462f6bbfb', 'width': 640}, {'height': 701, 'url': 'https://external-preview.redd.it/N2ZzaGlibGdzdzFnMULuo3MR4pX1LbW-4KQh_y6FbG5qa-Bh-MCJW06VtRB9.png?width=960&crop=smart&format=pjpg&auto=webp&s=7ff798010dcf35e6901281403b00c266d7224f88', 'width': 960}, {'height': 788, 'url': 'https://external-preview.redd.it/N2ZzaGlibGdzdzFnMULuo3MR4pX1LbW-4KQh_y6FbG5qa-Bh-MCJW06VtRB9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4d56c36a181da20e2e8c801a2d5ad15c8d7d206c', 'width': 1080}], 'source': {'height': 964, 'url': 'https://external-preview.redd.it/N2ZzaGlibGdzdzFnMULuo3MR4pX1LbW-4KQh_y6FbG5qa-Bh-MCJW06VtRB9.png?format=pjpg&auto=webp&s=63089d2963f4add6db7742c26694b1a039d9664b', 'width': 1320}, 'variants': {}}]} | ||
ARIA - Adaptive Resonant Intelligence Architecture | Self-learning cognitive architecture with LinUCB contextual bandits, quaternion semantic exploration, and anchor-based perspective detection. | 1 | [removed] | 2025-11-17T23:55:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ozwn91/aria_adaptive_resonant_intelligence_architecture/ | IslandNeni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozwn91 | false | null | t3_1ozwn91 | /r/LocalLLaMA/comments/1ozwn91/aria_adaptive_resonant_intelligence_architecture/ | false | false | self | 1 | null |
Best way to make an nsfw chatbot based on a character/set of parameters/writing style for free? On mobile | 0 | Basically what the title says. I'm doing this with the specific way I want a fictional character to interact with me in a chat but I don't know where to look exactly. I've gone researching but I usually just find paid services or shitty generic ai gf websites (also paid). I have made many examples of the things they should say and want to put it into effect. Please help 🙏🏻 | 2025-11-17T23:40:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ozwap7/best_way_to_make_an_nsfw_chatbot_based_on_a/ | RaiseOurAxesToTheSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozwap7 | false | null | t3_1ozwap7 | /r/LocalLLaMA/comments/1ozwap7/best_way_to_make_an_nsfw_chatbot_based_on_a/ | false | false | nsfw | 0 | null |
Why do LLMs sometimes get “stuck” in emotional loops? | 0 | I’ve been experimenting with different prompts and personalities, and I noticed something strange:
Sometimes ChatGPT suddenly:
•repeats the same emotional tone,
•gets stuck in a certain “mood,”
•or even starts using the same phrases over and over.
It feels like the model enters a loop not because the prompt is wrong, but because something inside the conversation becomes unstable.
Here’s my simple hypothesis
LLMs loop when the conversation stops giving them clear direction.
When that happens, the model tries to “stabilize” itself by holding on to:
•the last strong emotion it used
•the last pattern it recognized
•or the safest, most generic answer
It’s not real emotion, obviously
it’s just the model trying to “guess the next token” when it doesn’t have enough guidance.
Another example:
If the personality instructions are unclear or conflicted, the model grabs onto the part that feels the strongest and repeats it… which looks like a loop.
I’m curious:
Does anyone else see this behavior?
Or does someone have a better explanation? | 2025-11-17T23:38:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ozw8x2/why_do_llms_sometimes_get_stuck_in_emotional_loops/ | Just_Some_Shaper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozw8x2 | false | null | t3_1ozw8x2 | /r/LocalLLaMA/comments/1ozw8x2/why_do_llms_sometimes_get_stuck_in_emotional_loops/ | false | false | self | 0 | null |
Co-locating multiple jobs on GPUs with deterministic performance for a 2-3x increase in GPU Util | 1 | Traditional approaches to co-locating multiple jobs on a GPU face many challenges, so users typically opt for one-job-per-GPU orchestration. This results in idle SMs/VRAM when job isn’t saturating.
WoolyAI's software stack enables users to run concurrent jobs on a GPU while ensuring deterministic performance. In the WoolyAI software stack, the GPU SMs are managed dynamically across concurrent kernel executions to ensure no idle time and 100% utilization at all times.
WoolyAI software stack also enables users to:
1. Run their ML jobs on CPU-only infrastructure with remote kernel execution on a shared GPU pool.
2. Run their existing CUDA Pytorch jobs(pipelines) with no changes on AMD
You can watch this video to learn more -
[https://youtu.be/bOO6OlHJN0M](https://youtu.be/bOO6OlHJN0M) | 2025-11-17T23:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ozw7yv/colocating_multiple_jobs_on_gpus_with/ | Chachachaudhary123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozw7yv | false | null | t3_1ozw7yv | /r/LocalLLaMA/comments/1ozw7yv/colocating_multiple_jobs_on_gpus_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rcPG8_j9LpFzNUEg80Vn2g3CDbfoZhph2Zdd0ATA9Pk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rcPG8_j9LpFzNUEg80Vn2g3CDbfoZhph2Zdd0ATA9Pk.jpeg?width=108&crop=smart&auto=webp&s=1702df982331a7c82819fd3e4d37491bc78cfbac', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rcPG8_j9LpFzNUEg80Vn2g3CDbfoZhph2Zdd0ATA9Pk.jpeg?width=216&crop=smart&auto=webp&s=457dedf438175b2bf06775a9ee2f0842f494cdbf', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rcPG8_j9LpFzNUEg80Vn2g3CDbfoZhph2Zdd0ATA9Pk.jpeg?width=320&crop=smart&auto=webp&s=a9e88f15635b89f4d38cb40007dc09157f7db785', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rcPG8_j9LpFzNUEg80Vn2g3CDbfoZhph2Zdd0ATA9Pk.jpeg?auto=webp&s=259631874794737ca1285fc8c67d8686313f5fed', 'width': 480}, 'variants': {}}]} |
Dataset Suggestion | 3 | Hello,
I am trying what is probably a stupid idea for a new LM architecture (not transformer related).
I have interesting results from training on a single book (Alice in Wonderland), and I wonder if those results could improve in quality with data scaling.
Currently training on ... CPU... it takes 29s for the model to swallow this book.
I would like to know if there is a well-known open-source dataset that you could recommend for this task (English language)?
Do not hesitate to suggest multiple GB datasets, I should be able to transfer the training to GPU. | 2025-11-17T23:28:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ozw0c9/dataset_suggestion/ | Eralyon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozw0c9 | false | null | t3_1ozw0c9 | /r/LocalLLaMA/comments/1ozw0c9/dataset_suggestion/ | false | false | self | 3 | null |
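A minimal sketch for pulling two common open English corpora with the Hugging Face `datasets` library (dataset IDs are as of this writing; verify them on the Hub):

```python
# Two common open English text datasets, small to large.
# Dataset IDs/configs are as of this writing; check the hub pages.
from datasets import load_dataset

# ~100M tokens of Wikipedia text: a natural step up from one book
wiki = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
print(wiki[5]["text"][:200])

# Multi-GB web text, streamed so nothing is downloaded up front
fineweb = load_dataset(
    "HuggingFaceFW/fineweb-edu", name="sample-10BT", split="train", streaming=True
)
print(next(iter(fineweb))["text"][:200])
```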
What are some techniques to create better Text2SQL? | 0 | We have a text2SQL and I am worried that we have syntactically correct but semantically wrong result.
What has worked for you to improve the system?
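One technique that is often suggested for exactly this failure mode is execution-based checking plus self-consistency; a minimal sketch, where the `generate_sql` callable is a placeholder for your model:

```python
# One mitigation sketch: execution-based checking plus self-consistency.
# EXPLAIN catches syntax/schema errors cheaply; comparing result sets across
# several sampled generations filters out many semantically wrong queries.
# `generate_sql` is whatever callable wraps your text2SQL model (placeholder).
import sqlite3
from collections import Counter
from typing import Callable, Optional

def pick_sql(question: str, db_path: str,
             generate_sql: Callable[[str], str], n_samples: int = 5) -> Optional[str]:
    conn = sqlite3.connect(db_path)
    candidates = []
    for _ in range(n_samples):
        sql = generate_sql(question)
        try:
            conn.execute(f"EXPLAIN {sql}")          # syntax + schema check
            rows = tuple(conn.execute(sql).fetchall())
            candidates.append((sql, rows))
        except sqlite3.Error:
            continue                                # drop invalid candidates
    if not candidates:
        return None
    # vote on the result set rather than the SQL text
    winning_rows = Counter(rows for _, rows in candidates).most_common(1)[0][0]
    return next(sql for sql, rows in candidates if rows == winning_rows)
```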
Thanks. | 2025-11-17T23:24:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ozvxji/what_are_some_techniques_to_create_better_text2sql/ | 20231027 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozvxji | false | null | t3_1ozvxji | /r/LocalLLaMA/comments/1ozvxji/what_are_some_techniques_to_create_better_text2sql/ | false | false | self | 0 | null |
Just saw on nightly news that my senator is trying to ban chatbots for minors | 1 | How do you think local open source AI will be impacted by this legislation?
"Two senators said they are announcing bipartisan legislation on Tuesday to crack down on tech companies that make artificial intelligence chatbot companions available to minors, after complaints from parents who blamed the products for pushing their children into sexual conversations and even suicide." | 2025-11-17T23:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ozvd5j/just_saw_on_nightly_news_that_my_senator_is/ | ridablellama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozvd5j | false | null | t3_1ozvd5j | /r/LocalLLaMA/comments/1ozvd5j/just_saw_on_nightly_news_that_my_senator_is/ | false | false | self | 1 | null |
3080 on pc1, p40 on pc2... can pc1 orchestrate? | 1 | So I've got a 3080 running Qwen3 30B with kind of underwhelming results using Cline & VS Code.
I'm about to cobble together a p40 in a 2nd PC to try some larger vram LLMs.
Is there a way to orchestrate? Like I could tell PC1 that I have PC2 running the other LLM and it does some multithreading or queuing some tasks to maximize the workflow efficiency? | 2025-11-17T22:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ozu8fc/3080_on_pc1_p40_on_pc2_can_pc1_orchestrate/ | PairOfRussels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozu8fc | false | null | t3_1ozu8fc | /r/LocalLLaMA/comments/1ozu8fc/3080_on_pc1_p40_on_pc2_can_pc1_orchestrate/ | false | false | self | 1 | null |
20,000 Epstein Files in a single text file available to download (~100 MB) | 1,959 | I've processed all the text and image files (\~25,000 document pages/emails) within the individual folders released last Friday into a two-column text file. I used Google's Tesseract OCR library to convert JPG to text.
You can download it here: [https://huggingface.co/datasets/tensonaut/EPSTEIN\_FILES\_20K](https://huggingface.co/datasets/tensonaut/EPSTEIN_FILES_20K)
I uploaded it yesterday, but some of the files were incomplete. This version is complete. For each document, I've included the full path to the original Google Drive folder from the House Oversight Committee so you can link and verify the contents.
I used Mistral 7B to extract entities and relationships and build a basic Graph RAG. There are some new "associations" that have not been reported in the news, but I couldn't find any breakthrough content. Also, my entity/relationship extraction was quick and dirty. I'm sharing this dataset for people interested in getting into RAG and digging deeper to get more insight than what meets the eye.
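If you want to pull the file and start poking around, here's a minimal sketch using the Hugging Face `datasets` library; the split name and column names are assumptions on my part, so check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Download the release from the Hub (dataset ID from the link above).
# The "train" split name is an assumption; adjust if the repo uses another layout.
ds = load_dataset("tensonaut/EPSTEIN_FILES_20K", split="train")

# Inspect the schema before relying on specific column names.
print(ds.column_names)
print(ds[0])
```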
In using this dataset, please be sensitive to the privacy of the people involved (and remember that many of these people were certainly not involved in any of the actions which precipitated the investigation.) - Quoted from Enron Email Dataset release | 2025-11-17T22:14:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ozu5v4/20000_epstein_files_in_a_single_text_file/ | tensonaut | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozu5v4 | false | null | t3_1ozu5v4 | /r/LocalLLaMA/comments/1ozu5v4/20000_epstein_files_in_a_single_text_file/ | false | false | self | 1,959 | {'enabled': False, 'images': [{'id': 'Un7ixyc1EmaI-qDmYnT24bjDTJyNZChlHeXZ9IC2aTs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Un7ixyc1EmaI-qDmYnT24bjDTJyNZChlHeXZ9IC2aTs.png?width=108&crop=smart&auto=webp&s=5908f3b11b27ea2328c1b220d63d225ab4a2f409', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Un7ixyc1EmaI-qDmYnT24bjDTJyNZChlHeXZ9IC2aTs.png?width=216&crop=smart&auto=webp&s=d9e6ab43f63ffe4f2f63035e4b5ae03f5d31ed94', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Un7ixyc1EmaI-qDmYnT24bjDTJyNZChlHeXZ9IC2aTs.png?width=320&crop=smart&auto=webp&s=f44dc2de112598ae5390ab7c9b080a32e41d56b4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Un7ixyc1EmaI-qDmYnT24bjDTJyNZChlHeXZ9IC2aTs.png?width=640&crop=smart&auto=webp&s=767f8e31f86e5fe5b6cd9e316b6199a8ef494446', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Un7ixyc1EmaI-qDmYnT24bjDTJyNZChlHeXZ9IC2aTs.png?width=960&crop=smart&auto=webp&s=0b1d0e664697c891bf8a188528e69f4dcbd99be2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Un7ixyc1EmaI-qDmYnT24bjDTJyNZChlHeXZ9IC2aTs.png?width=1080&crop=smart&auto=webp&s=7dd04242cc60c62e868085f831d164ce09a7a403', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Un7ixyc1EmaI-qDmYnT24bjDTJyNZChlHeXZ9IC2aTs.png?auto=webp&s=18c100705ccc26ac9b46669411692e79fdc47e63', 'width': 1200}, 'variants': {}}]} |
Grok 4.1 | 16 | https://x.com/elonmusk/status/1990533268723425320?s=46
We already have great OSS alternatives but we need a bigger context window like grok. | 2025-11-17T21:38:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ozt895/grok_41/ | policyweb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozt895 | false | null | t3_1ozt895 | /r/LocalLLaMA/comments/1ozt895/grok_41/ | false | false | self | 16 | null |
Minipc with CUDA support | 0 | Hello everyone, I've been in the AI space for a couple of years now. Basically I have been playing a lot with sentence transformers but want to get into Gen AI and agentic flows.
Here is the thing : I am looking to buy a minipc which can run CUDA.
Context: I have been using Kubeflow at work and g4dn.12xlarge machines in AWS for model deployment. We have CUDA installed on them and nvidia-smi responds with the correct confirmation. Now, if I buy something like a Bossgame M5 or a Framework Desktop, will it work the same way? I am open to suggestions.
Apologies, I didn't know where else to ask this. Any help is appreciated. | 2025-11-17T21:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ozso5r/minipc_with_cuda_support/ | SaltedCashewNuts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozso5r | false | null | t3_1ozso5r | /r/LocalLLaMA/comments/1ozso5r/minipc_with_cuda_support/ | false | false | self | 0 | null |
Cerebras REAPs: MiniMax-M2 (25, 30, 40%), Kimi-Linear 30%, more on the way! | 115 | Hey everyone, we just dropped REAP'd MiniMax-M2 in 3 sizes:
[https://hf.co/cerebras/MiniMax-M2-REAP-172B-A10B](https://hf.co/cerebras/MiniMax-M2-REAP-172B-A10B)
[https://hf.co/cerebras/MiniMax-M2-REAP-162B-A10B](https://hf.co/cerebras/MiniMax-M2-REAP-162B-A10B)
[https://hf.co/cerebras/MiniMax-M2-REAP-139B-A10B](https://hf.co/cerebras/MiniMax-M2-REAP-139B-A10B)
We're running more agentic benchmarks for the MiniMax-M2 REAPs; so far we're seeing good accuracy retention, especially at 25 and 30% compression.
We also recently released a Kimi-Linear REAP@30% and it works well for coding and for long-context QA:
[https://hf.co/cerebras/Kimi-Linear-REAP-35B-A3B-Instruct](https://hf.co/cerebras/Kimi-Linear-REAP-35B-A3B-Instruct)
We're also working to get a Kimi-K2-Think REAP out, so stay tuned. Enjoy!
| 2025-11-17T21:06:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ozsdbe/cerebras_reaps_minimaxm2_25_30_40_kimilinear_30/ | ilzrvch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozsdbe | false | null | t3_1ozsdbe | /r/LocalLLaMA/comments/1ozsdbe/cerebras_reaps_minimaxm2_25_30_40_kimilinear_30/ | false | false | self | 115 | {'enabled': False, 'images': [{'id': '3MtGVOnWZy1IT16u2e0DlzH4XGv1oVdDyptqOnSMtkE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3MtGVOnWZy1IT16u2e0DlzH4XGv1oVdDyptqOnSMtkE.png?width=108&crop=smart&auto=webp&s=15658e73acfa0a382bcd08c8b9c6a5261a858d5d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3MtGVOnWZy1IT16u2e0DlzH4XGv1oVdDyptqOnSMtkE.png?width=216&crop=smart&auto=webp&s=726cd687759774f4e1626db8f1ee440765ee644e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3MtGVOnWZy1IT16u2e0DlzH4XGv1oVdDyptqOnSMtkE.png?width=320&crop=smart&auto=webp&s=d00df07a16bb44238ebb09d9bf14fff71586ae62', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3MtGVOnWZy1IT16u2e0DlzH4XGv1oVdDyptqOnSMtkE.png?width=640&crop=smart&auto=webp&s=6cabf1916c795f59349e70bb4e51ad944c7bceaf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3MtGVOnWZy1IT16u2e0DlzH4XGv1oVdDyptqOnSMtkE.png?width=960&crop=smart&auto=webp&s=dd11a0bf7c7258e3a9f7f31c8ef7e33f1b58e8cb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3MtGVOnWZy1IT16u2e0DlzH4XGv1oVdDyptqOnSMtkE.png?width=1080&crop=smart&auto=webp&s=29bdc33a2ecc41cc84979bf13b7bf56c2f134086', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3MtGVOnWZy1IT16u2e0DlzH4XGv1oVdDyptqOnSMtkE.png?auto=webp&s=bf67222273412378d0979efbe317e57ffd449aee', 'width': 1200}, 'variants': {}}]} |
NanoGPT 124m from scratch using a 4090 and a billion tokens of Fineweb in a cave with a box of scraps. | 273 | I was recently doing some digging into NanoGPT, Karpathy's couple-of-years-old [repo](https://github.com/karpathy/nanoGPT) to recreate GPT-2 124M using 10 billion tokens of FineWeb and 8xA100 40GB over the course of four days.
More recently, I saw that they've started [speedrunning efforts](https://github.com/KellerJordan/modded-nanogpt) to train the same model to 3.28 loss as fast as possible with 8xH100, and currently the speed record on that setup is less than 3 minutes to train from scratch.
That led me to think... with all of the advancements that have been made in the last few years, how fast could I train the same model to that 3.28 loss range on a single 4090?
The answer? 115 minutes flat. It ran through 0.92 billion tokens in the process, with 130-140k t/s speeds during training.
What does this mean?
If you ever find yourself lonely in a cave with a box of scraps, a 4090, and a billion fineweb tokens... you can build your own teeny-jarvis in a couple hours flat, then chat with it. I've provided training code and inference code, and the trained model if you want to mess with it for some odd reason. I set up a little GitHub repo as well, so if you feel like trying your hand at modifying my training run and beating it, drop a PR with your results/log/training run and I'll add it to the speedrun chart:
[https://github.com/Deveraux-Parker/nanoGPT\_1GPU\_SPEEDRUN](https://github.com/Deveraux-Parker/nanoGPT_1GPU_SPEEDRUN)
Here's the list of things it's implementing:
**Computation & Precision Optimizations**
1. **FP8 Quantization** \- 8-bit floating-point numbers (float8) for matrix multiplications instead of the usual 16 or 32-bit. This cuts memory use and speeds up math operations dramatically.
2. **Mixed Precision Training (bfloat16)** \- Most computations happen in bfloat16, which is faster than float32 while maintaining good numerical stability (see the sketch after this list).
3. **Custom Triton Kernels** \- Hand-written GPU kernels for specific operations like symmetric matrix multiplication (X·X\^T), which are faster than PyTorch's default implementations.
4. **torch.compile** \- PyTorch 2.0's JIT compilation that fuses operations and optimizes the computational graph.
5. **Flash Attention** \- Ultra-fast attention implementation that reduces memory usage and speeds up the attention mechanism.
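To make items 2 and 4 concrete, here's a stripped-down sketch of a bfloat16 + torch.compile training step. It's illustrative only (toy model, random data, needs a CUDA GPU), not the actual training loop from the repo:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 256)).cuda()
model = torch.compile(model)  # item 4: JIT-compile and fuse the graph
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

x = torch.randn(32, 256, device="cuda")
y = torch.randn(32, 256, device="cuda")

for step in range(10):
    # item 2: run the forward/backward math in bfloat16
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    opt.zero_grad(set_to_none=True)
```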
# Novel Optimizer & Training Techniques
1. **Muon Optimizer** \- A custom momentum-based optimizer that uses orthogonalization (keeping gradient directions independent) for better convergence.
2. **Polar Express Orthogonalization** \- A specific algorithm to maintain orthogonality in the Muon optimizer's updates.
3. **NorMuon Variance Estimator** \- Adaptive second moment estimation that helps Muon scale gradients appropriately.
4. **Multiple Optimizers** \- Using Adam for embeddings/scalars and Muon for weight matrices, each optimized for their parameter type.
5. **Alternating Optimizer Steps** \- Muon runs only every other step, with both optimizers stepping on odd steps, reducing computational overhead (see the sketch after this list).
6. **Gradient Accumulation** \- Accumulating gradients over 32 micro-batches to simulate larger batch sizes without running out of memory.
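A rough sketch of items 4 and 5 (per-parameter-type optimizers plus alternating steps). Plain SGD-with-momentum stands in for Muon here, since Muon isn't a stock PyTorch optimizer:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(1), nn.Linear(64, 64))

# item 4: one optimizer per parameter type - Adam for embeddings/bias,
# a momentum optimizer (Muon stand-in) for the 2D weight matrix
matrix_params = [p for n, p in model.named_parameters() if n.startswith("2.") and p.ndim == 2]
other_params = [p for n, p in model.named_parameters() if not (n.startswith("2.") and p.ndim == 2)]

adam = torch.optim.Adam(other_params, lr=1e-3)
muon_like = torch.optim.SGD(matrix_params, lr=1e-2, momentum=0.95)

for step in range(100):
    x = torch.randint(0, 1000, (8, 1))
    loss = model(x).pow(2).mean()
    loss.backward()
    adam.step()
    adam.zero_grad(set_to_none=True)
    if step % 2 == 1:  # item 5: the Muon-style optimizer only steps on odd steps
        muon_like.step()
        muon_like.zero_grad(set_to_none=True)
```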
# Architecture Innovations
1. **YaRN (Yet another RoPE extensioN)** \- Extends the context length capability of Rotary Position Embeddings beyond what the model was trained on.
2. **RoPE (Rotary Position Embeddings)** \- More efficient positional encoding than absolute positions.
3. **RMS Normalization** \- Simpler and faster than LayerNorm while being equally effective.
4. **Squared ReLU Activation** \- Using ReLU(x)² instead of GELU, which is faster and works well (see the sketch after this list).
5. **Skip Connections with Learnable Gates** \- U-Net-style architecture where early layers connect to later layers through learned gates.
6. **Value Embeddings** \- Separate embedding tables that inject information directly into attention values.
7. **Smear Gating** \- Mixes each token with the previous token using a learned gate.
8. **Backout Connections** \- Subtracts certain layer outputs to prevent feature redundancy.
9. **Attention Gating** \- Per-head gates that learn to selectively use attention outputs.
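Items 3 and 4 are small enough to show directly; here's a minimal RMSNorm + squared-ReLU MLP block (an illustration of the idea, not the exact module from the repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Item 3: scale by the root-mean-square of the activations, no mean subtraction."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

class MLP(nn.Module):
    """Item 4: squared ReLU instead of GELU in the feed-forward block."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.norm = RMSNorm(dim)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        h = F.relu(self.up(self.norm(x))) ** 2  # ReLU(x)^2
        return self.down(h)

block = MLP(256, 1024)
print(block(torch.randn(4, 256)).shape)  # torch.Size([4, 256])
```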
# Learning Rate & Schedule Optimizations
1. **Custom LR Multipliers** \- Different learning rates for embeddings (75x), scalars (5x), etc.
2. **Custom Weight Decay Multipliers** \- Different regularization strength for different parameter types.
3. **Warmup-Stable-Decay Schedule** \- Linear warmup (100 steps), stable plateau (80% of training), then cosine decay (see the sketch after this list).
4. **Dynamic Muon Momentum** \- Momentum coefficient that changes during training (0.85→0.95→0.85).
5. **Adaptive Hyperparameter Tuning** \- Automatically adjusts learning rate and weight decay based on train/val loss dynamics.
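Item 3 as a small standalone function; the 100-step warmup and 80% plateau come from the list above, while the minimum LR floor is just an illustrative default:

```python
import math

def wsd_lr(step: int, total_steps: int, base_lr: float,
           warmup_steps: int = 100, stable_frac: float = 0.8, min_lr: float = 0.0) -> float:
    """Warmup -> stable plateau -> cosine decay (warmup-stable-decay)."""
    stable_end = int(total_steps * stable_frac)
    if step < warmup_steps:      # linear warmup
        return base_lr * (step + 1) / warmup_steps
    if step < stable_end:        # stable plateau
        return base_lr
    # cosine decay over the remaining steps
    progress = (step - stable_end) / max(1, total_steps - stable_end)
    return min_lr + (base_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))

# example: print a few points of the schedule
for s in (0, 50, 100, 2000, 4500, 4999):
    print(s, round(wsd_lr(s, total_steps=5000, base_lr=3e-4), 6))
```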
# Memory & Data Optimizations
1. **Expandable Memory Segments** \- PyTorch memory allocator setting that reduces fragmentation.
2. **Kernel Warmup** \- Pre-compiling and warming up kernels before actual training to avoid first-step slowdown.
3. **Asynchronous Data Loading** \- Background threads preload the next data shard while training continues.
4. **BOS-Aligned Batching** \- Sequences are aligned to document boundaries (BOS tokens) for more natural training.
5. **Pin Memory** \- Keeps data in page-locked memory for faster CPU→GPU transfers.
6. **Non-Blocking Transfers** \- Async GPU transfers that overlap with computation.
7. **set\_to\_none=True** \- A more efficient way to zero gradients than setting them to zero tensors (see the sketch below).
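Items 5, 6 and 7 in one small sketch of the host-to-GPU data path (toy model and data, needs a CUDA GPU):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

data = TensorDataset(torch.randn(1024, 256), torch.randn(1024, 256))
# item 5: pin_memory keeps batches in page-locked host memory
loader = DataLoader(data, batch_size=64, shuffle=True, pin_memory=True)

model = torch.nn.Linear(256, 256).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for x, y in loader:
    # item 6: non_blocking=True lets the host-to-device copy overlap with compute
    x = x.cuda(non_blocking=True)
    y = y.cuda(non_blocking=True)
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    # item 7: set_to_none=True frees the gradient tensors instead of zero-filling them
    opt.zero_grad(set_to_none=True)
```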
# Training Efficiency Tricks
1. **Variable Attention Window Sizes** \- Different layers use different block masking sizes (some see more context, some less).
2. **Logit Capping** \- Applies 30·sigmoid(logits/7.5) to prevent extreme values (see the sketch after this list).
3. **Vocabulary Size Rounding** \- Rounds vocab to multiples of 128 for better GPU utilization.
4. **Strategic Initialization** \- Zero initialization for output projections, uniform bounded for inputs.
5. **Checkpoint Resumption** \- Can pause and resume training without losing progress.
6. **Early Stopping** \- Automatically stops when target validation loss is reached.
7. **Frequent Checkpointing** \- Saves model every validation step to prevent data loss.
8. **Efficient Gradient Zeroing** \- Only zeroes gradients after they're used, not before.
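Items 2 and 3 are tiny, so here they are as code; the capping formula is taken verbatim from the list above and the rounding helper is just an illustration:

```python
import torch

def cap_logits(logits: torch.Tensor) -> torch.Tensor:
    # item 2: 30 * sigmoid(logits / 7.5), exactly as described above
    return 30.0 * torch.sigmoid(logits / 7.5)

def round_vocab(vocab_size: int, multiple: int = 128) -> int:
    # item 3: round the vocabulary up to a multiple of 128 for better GPU utilization
    return ((vocab_size + multiple - 1) // multiple) * multiple

print(round_vocab(50257))  # 50304
print(cap_logits(torch.tensor([-100.0, 0.0, 100.0])))
```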
| 2025-11-17T20:29:18 | https://huggingface.co/DevParker/NanoGPT-124m-In-A-Cave-With-A-Box-Of-Scraps/blob/main/README.md | teachersecret | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ozre2i | false | null | t3_1ozre2i | /r/LocalLLaMA/comments/1ozre2i/nanogpt_124m_from_scratch_using_a_4090_and_a/ | false | false | default | 273 | {'enabled': False, 'images': [{'id': 'Xkp-uBD1eaeELSsY4T0RZUFyVZGTUIdapJjtKGQFbjY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Xkp-uBD1eaeELSsY4T0RZUFyVZGTUIdapJjtKGQFbjY.png?width=108&crop=smart&auto=webp&s=140252b6cdbdb575b73a7966a7da62d95670c656', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Xkp-uBD1eaeELSsY4T0RZUFyVZGTUIdapJjtKGQFbjY.png?width=216&crop=smart&auto=webp&s=7909c823a90b3c60e3caa98ee8d7d674434e88da', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Xkp-uBD1eaeELSsY4T0RZUFyVZGTUIdapJjtKGQFbjY.png?width=320&crop=smart&auto=webp&s=53796d68628e07b567c2f1d89b4be98cf2e745b5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Xkp-uBD1eaeELSsY4T0RZUFyVZGTUIdapJjtKGQFbjY.png?width=640&crop=smart&auto=webp&s=19349af1d39c198a50fecbc1f1e139ec105a188f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Xkp-uBD1eaeELSsY4T0RZUFyVZGTUIdapJjtKGQFbjY.png?width=960&crop=smart&auto=webp&s=8e0a497785a0de6cf39166b4d2b36d4f1d6a8042', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Xkp-uBD1eaeELSsY4T0RZUFyVZGTUIdapJjtKGQFbjY.png?width=1080&crop=smart&auto=webp&s=7a36fe3e6943ace2a066d9008bf88bf64a6b2234', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Xkp-uBD1eaeELSsY4T0RZUFyVZGTUIdapJjtKGQFbjY.png?auto=webp&s=82a8a34eb5c9fbd418591f2d3c8d5445ca5e402b', 'width': 1200}, 'variants': {}}]} |
Do you sandbox MCPs / Claude Code / Opencode on Linux? How ? | 1 | There are many ways to do it docker, podman, devcontainer, vm, lxc/lxd,.. | 2025-11-17T20:29:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ozre04/do_you_sandbox_mcps_claude_code_opencode_on_linux/ | Inevitable_Ant_2924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozre04 | false | null | t3_1ozre04 | /r/LocalLLaMA/comments/1ozre04/do_you_sandbox_mcps_claude_code_opencode_on_linux/ | false | false | self | 1 | null |
Finetune Conversational LLM on Discord Data | 5 | I plan to create a Discord bot that can interact with multiple people in a server at once. I want to mimic the organic conversation a normal user has on Discord, especially the style and the interaction with multiple users. My idea is to finetune an LLM on a Discord dataset extracted from a Discord server. Since the dataset is not the typical 1-on-1 multi-turn conversation, I am not sure how to prepare it. Here is what I came up with:
**Dataset:**
User1: message1
User2: message2
User3: message3
User2: message4
User1: message5
**Version 1:**
<|im_start|>user
User1: message1
User2: message2
User3: message3
User2: message4
<|im_end|>
<|im_start|>assistant
message5
<|im_end|>
**Version 2:**
<|im_start|>user
User1: message1
<|im_end|>
<|im_start|>assistant
message2
<|im_end|>
<|im_start|>user
User3: message3
<|im_end|>
<|im_start|>assistant
message4
<|im_end|>
<|im_start|>user
User1: message5
<|im_end|>
What version would be better and why? Also should i use ChatML or ShareGPT formating? I want later to be able to change easily the personality of the AI using the system prompt. I'm new to finetuning and LLMs in general, any help is much appreciated :> | 2025-11-17T20:06:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ozqryk/finetune_conversational_llm_on_discord_data/ | Riki_The_Bird_God | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozqryk | false | null | t3_1ozqryk | /r/LocalLLaMA/comments/1ozqryk/finetune_conversational_llm_on_discord_data/ | false | false | self | 5 | null |
Comparing Unsloth's GLM-4.6 IQ2_M -vs- GLM-4.6-REAP-268B Q2_K_XL | 19 | **GLM 4.6 Quantization Trade-offs:**
**Full** **IQ2\_M** **(Pervasive Degradation) vs. REAP** **Q2\_K\_XL** **(Structural Removal)**
These 2 are at the limits of what will fit in 128GB and the best local models in this size bracket.
The core of this is comparing the error profiles of pervasive quantization damage versus the structural damage from expert pruning.
Unsloth's quantization strategies - specifically the \_M vs. \_XL suffixes - dictate the resource allocation for mitigating quant damage.
\_M (Medium) quant applies moderate preservation to core components like the attention mechanism.
\_XL (Extra Large) quant aggressively preserves the entire reasoning engine and a significant subset of high-magnitude "outlier" weights within the MLP/expert layers.
This is pitted against Cerebras's REAP, which structurally removes entire expert layers, a process whose "near-lossless" claim on benchmarks often conflicts with reports of brittle, domain-specific failures.
**The Two Philosophies of Compression:**
* **GLM 4.6 IQ2\_M - The "Pervasive Degradation" Model:** This is the complete 357B parameters. The IQ2 baseline introduces significant precision degradation across more weights. The \_M(Medium) preservation strategy is a compromise: it allocates its limited budget to **partially shield the attention mechanism**, but this leaves the reasoning core still impacted by quantization noise and provides **no remaining budget to preserve critical, high-magnitude "outlier" weights** in the MLP/expert layers. The result is a model with its full knowledge base intact, but with a systemic, low-level degradation affecting both its reasoning and its recall of specific patterns.
* **GLM 4.6 REAP Q2\_K\_XL - The "Structural Deficit" Model:** This is a structurally altered 268B parameter version where \~25% of expert layers have been permanently amputated. The key difference is the \_XL preservation strategy. It allocates its much larger budget to first **fully preserve the entire remaining attention mechanism at a high precision** \- effectively insulating more of the model's "brain" from quantization damage. It then uses its remaining budget to **surgically preserve a significant subset of critical knowledge outliers** in the remaining experts. The result should be a model with a sharp, high-fidelity reasoning core but with permanent, irreparable gaps in its knowledge and complex glitches.
**The Core Technical Debate for Coding:**
The choice between these models seems a choice between two distinct types of risk.
* The **Full IQ2\_M** risks a **consistent lack of sharpness**. Its partially degraded reasoning core may lead to subtle but critical logical flaws, less optimal code, and a failure to grasp nuance in complex, multi-step instructions. It's a "known unknown" that its performance ceiling is lowered across the board.
* The **REAP Q2\_K\_XL** risks **brittle, domain-specific failures**. Its well-preserved core should, in theory, provide superior logical fidelity and more precise code generation. However, this is entirely contingent on the REAP process not having pruned an expert critical to your specific task. This is an "unknown unknown".
Theoretically, for high-precision tasks like coding, the REAP Q2\_K\_XL seems superior, as its insulated brain should be more reliable. But this hypothesis falls apart if the pruning damage is more significant than benchmarks suggest.
During my limited coding testing I'm seeing:
REAP\_Q2\_K\_XL sometimes performs better but fails more often, including occasional looping and some broken code outputs.
Full\_IQ2\_M retains more general and contextual knowledge and seems more consistent, but perhaps with less chance of a great output.
I'm not skilled enough to run proper A/B testing and benchmarking yet, such benchmarking isn't fully reliable anyway, and I couldn't find any benchmarks comparing these versions.
Have any of you compared them much?
Especially interested in coder's who've tried both: what are you seeing so far? | 2025-11-17T19:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ozq14d/comparing_unsloths_glm46_iq2_m_vs_glm46reap268b/ | Feedback_Loopy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozq14d | false | null | t3_1ozq14d | /r/LocalLLaMA/comments/1ozq14d/comparing_unsloths_glm46_iq2_m_vs_glm46reap268b/ | false | false | self | 19 | null |
This response is from a 2.7B model (Phi-2). I don’t know how this is possible. | 0 | I’ve been experimenting with a custom framework layered over small models (mainly Phi-2).
This answer came from a 2.7B parameter model — not GPT-4, not Claude, not Llama 70B.
It maintains tone, produces structured multi-paragraph reasoning, avoids hallucination, and stays grounded.
I genuinely don’t know how this is happening.
I’m starting to think small models are capable of more than people assume if they’re wrapped inside the right memory architecture + symbolic constraints.
Has anyone seen a 2.7B model do something like this? | 2025-11-17T19:27:39 | https://www.reddit.com/gallery/1ozpq3u | GriffinThibault | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ozpq3u | false | null | t3_1ozpq3u | /r/LocalLLaMA/comments/1ozpq3u/this_response_is_from_a_27b_model_phi2_i_dont/ | false | false | 0 | null | |
AMA Announcement: MiniMax, The Opensource Lab Behind MiniMax-M2 + Gifts to Our Community (Wednesday, 8AM-11AM PST) | 113 | 2025-11-17T18:50:44 | XMasterrrr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ozopxx | false | null | t3_1ozopxx | /r/LocalLLaMA/comments/1ozopxx/ama_announcement_minimax_the_opensource_lab/ | false | true | default | 113 | {'enabled': True, 'images': [{'id': 'e3scgr4j5v1g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/e3scgr4j5v1g1.jpeg?width=108&crop=smart&auto=webp&s=a90d04c15902e2f83a551ea7babbde4306b9fcbd', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/e3scgr4j5v1g1.jpeg?width=216&crop=smart&auto=webp&s=710c0b8bcc7f473e2bbe292a6ef1144e4372d097', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/e3scgr4j5v1g1.jpeg?width=320&crop=smart&auto=webp&s=a53d057f9277068543a80b552147fc1bac75ce5d', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/e3scgr4j5v1g1.jpeg?width=640&crop=smart&auto=webp&s=ca0bf85cb4cf2a2b7e12e8d99d515186275c15e3', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/e3scgr4j5v1g1.jpeg?width=960&crop=smart&auto=webp&s=8230f33e803d6bfa26d716b1bb6642bc986f8874', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/e3scgr4j5v1g1.jpeg?width=1080&crop=smart&auto=webp&s=12d90281a2d7efe50d8459e53711c72962a0c8ff', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://preview.redd.it/e3scgr4j5v1g1.jpeg?auto=webp&s=9c14b9b0ccbac0a8f92dbe403216be3a81daba6a', 'width': 1200}, 'variants': {}}]} | ||
Cornserve: Microservices Architecture for Serving Any-to-Any Models like Qwen Omni! | 2 | https://reddit.com/link/1ozofs7/video/xnsfmgonwt1g1/player
Hey everyone! We're excited to share Cornserve, an open-source platform for serving any-to-any multimodal AI models.
Modern multimodal models are getting increasingly complex, like Qwen 3 Omni that handles text, images, video, and audio inputs while generating both text and audio outputs. However, this makes it hard to build a monolithic serving system for such models. That's why we built Cornserve - a microservices approach to AI serving that splits complex models into independent components and automatically shares common parts (like LLMs, vision encoders, audio generators) across your apps.
Supported Models:
* Any-to-Any models like Qwen 3 Omni, Qwen-Image
* Vision language models like Gemma 3, Qwen3-VL, InternVL3, LLaVA-OneVision, etc.
* Any text-only model supported by vLLM
Homepage: [https://cornserve.ai](https://cornserve.ai)
We'd love to hear your feedback and welcome contributions! | 2025-11-17T18:40:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ozofs7/cornserve_microservices_architecture_for_serving/ | IntroductionHuge7324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozofs7 | false | null | t3_1ozofs7 | /r/LocalLLaMA/comments/1ozofs7/cornserve_microservices_architecture_for_serving/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'PGzw23GMH4zMtMZEKRBRNnVi6wf2-a8AzK196nf2XcI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PGzw23GMH4zMtMZEKRBRNnVi6wf2-a8AzK196nf2XcI.png?width=108&crop=smart&auto=webp&s=6c1300504aa8c732785a36b17b7eaca7eb1f9ceb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/PGzw23GMH4zMtMZEKRBRNnVi6wf2-a8AzK196nf2XcI.png?width=216&crop=smart&auto=webp&s=9b73a25a045e0dbde1512bd2d34b14ac20380b98', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/PGzw23GMH4zMtMZEKRBRNnVi6wf2-a8AzK196nf2XcI.png?width=320&crop=smart&auto=webp&s=77b0208e798e3448c43fb6c53a3400625c531e78', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/PGzw23GMH4zMtMZEKRBRNnVi6wf2-a8AzK196nf2XcI.png?width=640&crop=smart&auto=webp&s=115f8badb5fc9dc5ac8c4e0bd357bd4eb8529aa8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/PGzw23GMH4zMtMZEKRBRNnVi6wf2-a8AzK196nf2XcI.png?width=960&crop=smart&auto=webp&s=be9d19afe27618294e6c01d167366c64c09212d9', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/PGzw23GMH4zMtMZEKRBRNnVi6wf2-a8AzK196nf2XcI.png?width=1080&crop=smart&auto=webp&s=3bb6c3f0d95317af800b196da625770e46e75d48', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/PGzw23GMH4zMtMZEKRBRNnVi6wf2-a8AzK196nf2XcI.png?auto=webp&s=2a22580e8f871135f253aec5c514980614038e5c', 'width': 1200}, 'variants': {}}]} |
[30 Trillion token dataset] "HPLT 3.0: Very Large-Scale Multilingual Resources for LLM and MT. Mono- and Bi-lingual Data, Multilingual Evaluation, and Pre-Trained Models", Oepen et al. 2025 | 25 | 2025-11-17T18:40:25 | https://arxiv.org/abs/2511.01066 | RecmacfonD | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1ozofpd | false | null | t3_1ozofpd | /r/LocalLLaMA/comments/1ozofpd/30_trillion_token_dataset_hplt_30_very_largescale/ | false | false | default | 25 | null | |
I miss when it looked like community fine-tunes were the future | 192 | Anyone else? There was a hot moment, maybe out of naivety, where fine-tunes of Llama 2 significantly surpassed the original and even began chasing down ChatGPT3. This sub was a flurry of ideas and datasets and minor celebrities with access to more modest GPU farms.
Today it seems like the sub is still enjoying local LLMs but has devolved into begging 6 or 7 large companies to give us more free stuff, the smallest of which is still worth billions, and celebrating like fanatics when we're thrown a bone.
Does anyone else feel the vibe change or am I nostalgic for a short-lived era that never really existed? | 2025-11-17T18:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ozobsy/i_miss_when_it_looked_like_community_finetunes/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozobsy | false | null | t3_1ozobsy | /r/LocalLLaMA/comments/1ozobsy/i_miss_when_it_looked_like_community_finetunes/ | false | false | self | 192 | null |
Local all-in-one AI system (Local multimodal AI) | 5 | This article is the current development log of PKC AI Mark.
This article was analyzed using AI.
PKC AI MARK — Key Feature Summary
Author: GPT
1. Overview
This document summarizes the core features of the PKC AI MARK system running on an RTX 2060 Super (8GB). It explains the essential functions in a simple and easy-to-understand way, without complex technical terms.
2. Main Feature Summary
PKC AI MARK is a fully local, integrated AI system that supports:
Text interaction (LLM)
Emotion analysis
Image generation
Vision-based image understanding
TTS (Text-to-Speech)
STT (Speech-to-Text)
✔ 1) Text Chat (LLM)
Uses Llama-3.2-8B (GGUF model)
Smooth real-time conversation via SSE streaming
Combined pipeline of emotion analysis + language model
Automatically adjusts response tone based on user emotion and writing style
✔ 2) Image Generation (Stable Diffusion)
Based on Stable Diffusion 3.5 medium GGUF
Generates 512×768 images
Shows generation progress
Korean prompts are automatically translated
Cached prompts regenerate instantly
✔ 3) Vision AI (Image Understanding)
Qwen2-VL model for image content analysis
Model automatically loads when an image query is requested
✔ 4) File Upload → Analysis
Automatically summarizes or analyzes image/text files
Shows thumbnail previews
✔ 5) Emotion Analysis
korean-emotion-kluebert-v2
Detects emotions from user messages (e.g., joy, sadness, anger, neutral)
Adjusts AI response tone accordingly
✔ 6) Session Management
Saves conversation history
Keeps separate logs per session
Supports creating, deleting, renaming sessions
Full JSON export/import supported
✔ 7) Browser UI Features
STT (Speech-to-Text)
TTS (Text-to-Speech)
Image generation button
Web search button
Auto cleanup of old chat bubbles
Fully mobile responsive
✔ 8) System Monitoring
Real-time GPU / CPU / RAM usage display
Shows model loading status
3. How the System Works (Simplified)
● 1) Loads only the required model
Keeps the LLM active during text conversations
Temporarily unloads the LLM during image generation to free VRAM
Reloads it after the work is completed (see the sketch below)
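A rough sketch of that swap pattern is shown below; the load/unload helpers are placeholders for illustration, not PKC AI MARK's actual code:

```python
import gc

def load_llm():
    # placeholder - the real system would load the GGUF chat model here
    return "llm-handle"

def load_image_model():
    # placeholder - the real system would load Stable Diffusion 3.5 medium here
    return "sd-handle"

llm = load_llm()  # kept resident for normal text chat

def generate_image(prompt: str) -> str:
    global llm
    llm = None        # drop the LLM reference so its VRAM can be reclaimed
    gc.collect()
    sd = load_image_model()
    image = f"generated-image-for:{prompt}"  # placeholder for the actual generation call
    sd = None
    gc.collect()
    llm = load_llm()  # reload the chat model once generation is done
    return image

print(generate_image("a red fox in the snow"))
```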
● 2) Image models load only when needed
Prevents unnecessary VRAM usage
Cache enables fast reuse after generation
● 3) Automatic conversation memory
Stores user/AI conversation history in a local DB
Helps maintain context across sessions
AI remembers previous conversations stored in the DB
4. Conclusion
PKC AI MARK provides the following features in a single system:
Emotion analysis (korean-emotion-kluebert-v2)
Text conversation (llama-3-Korean-Bllossom-8B-Q5_K_M.gguf)
Image generation (sd3.5_medium-Q5_1.gguf)
Image understanding (Qwen2-VL-2B-Instruct-Q4_K_M.gguf)
File analysis (System)
Session & log management (System)
Web search (System)
STT & TTS (Browser Feature)
In short, it is an all-in-one local AI tool running entirely on a personal PC. | 2025-11-17T18:30:33 | PKC_0412 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ozo5lq | false | null | t3_1ozo5lq | /r/LocalLLaMA/comments/1ozo5lq/local_allinone_ai_system_local_multimodal_ai/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': '4w46fk052v1g1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/4w46fk052v1g1.jpeg?width=108&crop=smart&auto=webp&s=23d38c624a90185f5c5acb1c8b5255b7a7c64758', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/4w46fk052v1g1.jpeg?width=216&crop=smart&auto=webp&s=6d57b7d3c41e17130cdc0917a4e7a3f464e2681a', 'width': 216}, {'height': 112, 'url': 'https://preview.redd.it/4w46fk052v1g1.jpeg?width=320&crop=smart&auto=webp&s=3d11e4e75e7f3e84b55b994644ed40a5728c609c', 'width': 320}, {'height': 224, 'url': 'https://preview.redd.it/4w46fk052v1g1.jpeg?width=640&crop=smart&auto=webp&s=c9d604e24a626b85f2a21519790278d81bb86b1f', 'width': 640}, {'height': 337, 'url': 'https://preview.redd.it/4w46fk052v1g1.jpeg?width=960&crop=smart&auto=webp&s=fe94e2b5ce1a3a2658ed769d25477d8332f38fef', 'width': 960}, {'height': 379, 'url': 'https://preview.redd.it/4w46fk052v1g1.jpeg?width=1080&crop=smart&auto=webp&s=d6010bb513715ed19a72fa5d8a80eac67972653f', 'width': 1080}], 'source': {'height': 1207, 'url': 'https://preview.redd.it/4w46fk052v1g1.jpeg?auto=webp&s=dde35326cc2fbe909722a41aea802d842da1dc6e', 'width': 3434}, 'variants': {}}]} | |
Do we rely too much on huggingface? Do you think they’ll eventually regulate open source models? Is there any way to distribute them elsewhere? | 232 | I know torrenting may be a thing, but I’m also just curious if anyone knows anything or has any insight. | 2025-11-17T18:27:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ozo2v8/do_we_rely_too_much_on_huggingface_do_you_think/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozo2v8 | false | null | t3_1ozo2v8 | /r/LocalLLaMA/comments/1ozo2v8/do_we_rely_too_much_on_huggingface_do_you_think/ | false | false | self | 232 | null |
cuda device list mismatch - ggml_cuda_init / ubuntu - significance to using --main-gpu flag | 1 | System running 3x GPU: 1x 5090 and 2x 3090 running GLM 4.5-Air-Q4\_K\_M
Driver Version: 580.105.08 CUDA Version: 13.0
ggml_cuda_init: found 3 CUDA devices:
Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Device 1: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 2: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Whereas `nvtop` and `nvidia-smi` always report the 5090 as `Device 2`.
llama-bench shows no noticeable difference to whichever GPU I set as `--main-gpu` so that's not a concern.
| model | size | params | backend | ngl | main\_gpu | fa | mmap | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| glm4moe 106B.A12B Q4\_K - Medium | 67.85 GiB | 110.47 B | CUDA | 99 | 2 | 1 | 0 | pp512 | 1054.91 ± 13.55 |
| glm4moe 106B.A12B Q4\_K - Medium | 67.85 GiB | 110.47 B | CUDA | 99 | 2 | 1 | 0 | tg128 | 86.03 ± 0.25 |
It comes down to maximising what I can fit in VRAM when pushing the limits of model size and context, and trying to tweak and balance `--override-tensors` (it's like that MS Word moving-a-photo meme: one small nudge and it goes haywire).
But I have eventually found a set of parameters that works to keep me from OOM when trying to load GLM4.5-Air-Q4\_K\_M fully in VRAM
--main-gpu 2
-ts 0.385,0.30,0.315
I should take the win and be grateful and just crack on as I think this is the most even distribution of VRAM usage I can get, but I'm just curious if there is any other minor modification I can do to tweak performance or VRAM usage across the GPUs to be able to squeeze in a little more context. prior to finding the above `-ts` I couldn't even go above 16k context without crashing out.
This is where I am currently
https://preview.redd.it/mnrov06svu1g1.png?width=826&format=png&auto=webp&s=3eeeb90778a2e008a1d6c3b72f48c93d88f66486
(The following is not technically locallama related:-
>!does anyone know how to deal with `gnome-shell` eating up VRAM in ubuntu 24.04/wayland, WITHOUT having to log off/on again. This is my LLM server but Im not always sat on this machine and at another machine. But Ive noticed that VRAM usage of `gnome-shell` can creep upwards of 3.5GB for no reason and then if llama-swap tries to load up one of the bigger models like GLM4.5-Air it will crash out, I cant keep manually logging off and back on again every time I need that VRAM back for one of the bigger models)!< | 2025-11-17T18:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ozne89/cuda_device_list_mismatch_ggml_cuda_init_ubuntu/ | munkiemagik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozne89 | false | null | t3_1ozne89 | /r/LocalLLaMA/comments/1ozne89/cuda_device_list_mismatch_ggml_cuda_init_ubuntu/ | false | false | 1 | null | |
4, 6 and 8 bit mlx versions of Inference-net / AELLA on Huggingface | 1 | Following the success of this post:
[https://www.reddit.com/r/LocalLLaMA/comments/1ov3dkb/aella\_100m\_research\_papers\_an\_openscience/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/LocalLLaMA/comments/1ov3dkb/aella_100m_research_papers_an_openscience/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
But seeing that there were no MLX versions of these super useful tools, I decided to create them myself:
[https://huggingface.co/leonsarmiento/models](https://huggingface.co/leonsarmiento/models)
**¯\\*****(ツ)*****/¯**
| 2025-11-17T18:02:35 | https://www.reddit.com/r/LocalLLaMA/comments/1oznday/4_6_and_8_bit_mlx_versions_of_inferencenet_aella/ | JLeonsarmiento | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oznday | false | null | t3_1oznday | /r/LocalLLaMA/comments/1oznday/4_6_and_8_bit_mlx_versions_of_inferencenet_aella/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bZ2ZFJE70mPh9Bp_x_wUUzbZwLukgV7-Ha7iIE2wMwY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bZ2ZFJE70mPh9Bp_x_wUUzbZwLukgV7-Ha7iIE2wMwY.png?width=108&crop=smart&auto=webp&s=b58fe8394026a904fa724dcdebfa173059ce4bca', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bZ2ZFJE70mPh9Bp_x_wUUzbZwLukgV7-Ha7iIE2wMwY.png?width=216&crop=smart&auto=webp&s=90222d8127cbdcc62943615f68ce3d40fcb35ae6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bZ2ZFJE70mPh9Bp_x_wUUzbZwLukgV7-Ha7iIE2wMwY.png?width=320&crop=smart&auto=webp&s=2515f0f9b79099fc3d3a3dc1e25653927b5c3f79', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bZ2ZFJE70mPh9Bp_x_wUUzbZwLukgV7-Ha7iIE2wMwY.png?width=640&crop=smart&auto=webp&s=8e65e78dd8b1f1eaf39aa6b4800b807ff75c92ca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bZ2ZFJE70mPh9Bp_x_wUUzbZwLukgV7-Ha7iIE2wMwY.png?width=960&crop=smart&auto=webp&s=eb669bc1010035319eb43b43694942b8c8fe2897', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bZ2ZFJE70mPh9Bp_x_wUUzbZwLukgV7-Ha7iIE2wMwY.png?width=1080&crop=smart&auto=webp&s=c64c00c0434dd864e3506c8beac46c50e3f3213f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bZ2ZFJE70mPh9Bp_x_wUUzbZwLukgV7-Ha7iIE2wMwY.png?auto=webp&s=21afea5c67a62b6d15726a5287e597527bac9382', 'width': 1200}, 'variants': {}}]} |
LMArena down for anyone? | 0 | Can't get past cloudflare. I verify and it just reloads the same page. I live in NY. | 2025-11-17T17:53:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ozn42w/lmarena_down_for_anyone/ | Jazzlike_Tomorrow765 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozn42w | false | null | t3_1ozn42w | /r/LocalLLaMA/comments/1ozn42w/lmarena_down_for_anyone/ | false | false | self | 0 | null |
Why MXFP4 is more popular than NVFP4? | 0 | NVFP4 is "theoretically" better and more optimized on Blackwell architecture, which most companies would use anyway,MXFP4 has a wider hardware optimization as it's standardized,but it's not as optimized as NVFP4 and "theoretically" less accurate than MXFP4,I say theoretically because I found no models that's trained with NVFP4. | 2025-11-17T17:32:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ozmje5/why_mxfp4_is_more_popular_than_nvfp4/ | Swimming-Ratio4879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozmje5 | false | null | t3_1ozmje5 | /r/LocalLLaMA/comments/1ozmje5/why_mxfp4_is_more_popular_than_nvfp4/ | false | false | self | 0 | null |
Tips for optimizing gemma2:2b on Raspberry Pi 5 for voice assistant? (tool calling) | 6 | Hey everyone! 👋
I'm building a privacy-focused voice assistant on a Raspberry Pi 5 that runs gemma2:2b locally via Ollama. It works, but I'm trying to squeeze out more performance for a better user experience.
**Current setup:**
* Hardware: Raspberry Pi 5 (8GB)
* Model: gemma2:2b via Ollama
* Use case: Voice assistant with tool/function calling (adding notes, scheduling meetings, etc.)
* Current response time: \~2-3 seconds per query
**What I'm doing:**
* Using Vosk for local voice-to-text
* gemma2:2b for intent classification and task parsing
* Manual tool calling, since gemma2:2b doesn't support native function calling (see the sketch after this list)
* Send complex queries to cloud (Gemini API)
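The manual tool calling is roughly this pattern, sketched against Ollama's local REST API; the tool list, system prompt and dispatch table below are simplified stand-ins rather than my exact code:

```python
import json
import requests

TOOLS = {
    "add_note": lambda text: f"Saved note: {text}",
    "schedule_meeting": lambda when, title: f"Scheduled '{title}' at {when}",
}

SYSTEM = (
    "You are a voice assistant. Reply ONLY with JSON of the form "
    '{"tool": "<name>", "args": {...}} using one of: add_note(text), '
    "schedule_meeting(when, title)."
)

def handle(utterance: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "gemma2:2b",
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": utterance},
            ],
            "stream": False,
        },
        timeout=30,
    )
    content = resp.json()["message"]["content"]
    try:
        call = json.loads(content)
        return TOOLS[call["tool"]](**call["args"])
    except (json.JSONDecodeError, KeyError, TypeError):
        return content  # no valid tool call; fall back to the raw model reply

print(handle("remind me to buy milk"))
```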
**My questions:**
1. Are there any tricks to speed up gemma2:2b inference on ARM? (Quantization? Special flags?)
2. Is there a better small model (<3B params) that's faster on Pi 5 and supports tool calling?
3. Would switching to Jetson Orin Nano be worth it, or is Pi 5 good enough for this use case?
4. Any Ollama optimization flags I should be using for embedded systems?
I'm trying to keep everything local for privacy, but I'm open to hybrid approaches. Currently getting \~2-3s response times, would love to get that under 1 second if possible.
Any tips or experience with similar projects would be super appreciated! 🙏
| 2025-11-17T17:16:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ozm3z4/tips_for_optimizing_gemma22b_on_raspberry_pi_5/ | Chef_Koch190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozm3z4 | false | null | t3_1ozm3z4 | /r/LocalLLaMA/comments/1ozm3z4/tips_for_optimizing_gemma22b_on_raspberry_pi_5/ | false | false | self | 6 | null |
Qwen > OpenAI models | 17 | We knew this. But it was nice to see Bloomberg write about it. Been a fan of Qwen models since they first launched and they are my go-to for most things local and hosted. I even switched to Qwen Code (CLI) with Qwen3 Coder (via LMStudio) and love the local inference coding powerhouse.
Interesting to see the stats on LLama vs Qwen downloads and the anecdotal evidence of Silicon Valley usage of Qwen models.
Original: https://www.bloomberg.com/opinion/articles/2025-11-09/how-much-of-silicon-valley-is-built-on-chinese-ai
No-Paywall: https://archive.is/2025.11.09-191103/https://www.bloomberg.com/opinion/articles/2025-11-09/how-much-of-silicon-valley-is-built-on-chinese-ai | 2025-11-17T17:05:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ozlssd/qwen_openai_models/ | International_Quail8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozlssd | false | null | t3_1ozlssd | /r/LocalLLaMA/comments/1ozlssd/qwen_openai_models/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'GcWEWkPVLBm7E5Ks6cSxzeQDx3QnMDckaLpYGUH0Lx4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/GcWEWkPVLBm7E5Ks6cSxzeQDx3QnMDckaLpYGUH0Lx4.jpeg?width=108&crop=smart&auto=webp&s=23ac078a1075c8b594453150fa3fe5cc55cd7c38', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/GcWEWkPVLBm7E5Ks6cSxzeQDx3QnMDckaLpYGUH0Lx4.jpeg?width=216&crop=smart&auto=webp&s=d74b7f50c1466810c53d04fc93c373df01c4bd20', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/GcWEWkPVLBm7E5Ks6cSxzeQDx3QnMDckaLpYGUH0Lx4.jpeg?width=320&crop=smart&auto=webp&s=c32084c1a02c6bb48b0228d62b4f33d0f8aee5e9', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/GcWEWkPVLBm7E5Ks6cSxzeQDx3QnMDckaLpYGUH0Lx4.jpeg?width=640&crop=smart&auto=webp&s=ea145dc7a62575c265c63100f40eee996301a45d', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/GcWEWkPVLBm7E5Ks6cSxzeQDx3QnMDckaLpYGUH0Lx4.jpeg?width=960&crop=smart&auto=webp&s=24bf4ba1567617266d50b6ff59b8608ebe46fe6e', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/GcWEWkPVLBm7E5Ks6cSxzeQDx3QnMDckaLpYGUH0Lx4.jpeg?width=1080&crop=smart&auto=webp&s=19ffda5c4f36788128079b1bd58be25590103911', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/GcWEWkPVLBm7E5Ks6cSxzeQDx3QnMDckaLpYGUH0Lx4.jpeg?auto=webp&s=562ef14bae60452a99231a25adda9ddc3a684f29', 'width': 1200}, 'variants': {}}]} |
18-Month Field Study: Cross-Architecture AI Collaboration - Methodology May Be Controversial, Results Are Reproducible | 10 | I built this research using the exact methodology I'm documenting - working directly with multiple AI architectures as collaborative partners, not just test subjects.
I know this approach is controversial. Some academic venues have explicitly rejected it. I'm sharing it anyway because it works, and I think the results speak for themselves.
**What I did:** Spent 18+ months working across GPT, Claude, and Gemini with structured human oversight. Documented 2.4M+ tokens of interaction to understand what happens when multiple LLMs work together properly.
**What I found:** When you combine multiple architectures in a structured conversational framework with active human integration, you get significantly better outputs than any single model produces alone. I've formalized this as the Cross-Architecture Constructive Interference Model (CACIM).
**The core idea:** O₁₂₃ = O₁ + O₂ + O₃ + Γ
Where Γ is the surplus you get from:
* Models catching each other's errors
* Different architectures covering blind spots
* Complementary reasoning approaches
* Human oversight preventing drift
**The methodology is surprisingly simple:**
* Basic framework (Plan → Response → Reflection → Audit)
* Active human in the loop throughout
* Regular grounding checkpoints
* Strategic task distribution
No specialized tools needed - just access to multiple models and structured interaction.
**Full paper on GitHub** \- check my profile or DM for link (avoiding automod).
**Safety note:** This requires continuous human involvement. Not autonomous multi-agent systems - structured human-guided collaboration with explicit controls.
Questions welcome, especially from people doing multi-model work. Very interested in replication attempts or different results. | 2025-11-17T16:51:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ozlff6/18month_field_study_crossarchitecture_ai/ | Shakkahn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozlff6 | false | null | t3_1ozlff6 | /r/LocalLLaMA/comments/1ozlff6/18month_field_study_crossarchitecture_ai/ | false | false | self | 10 | null |
Best TTS with Voice cloning which can run under 4GB VRAM ? | 11 | PC - specs
RTX - 3050(4GB)
RAM - 16GB | 2025-11-17T16:48:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ozlcqj/best_tts_with_voice_cloning_which_can_run_under/ | ProNoostr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozlcqj | false | null | t3_1ozlcqj | /r/LocalLLaMA/comments/1ozlcqj/best_tts_with_voice_cloning_which_can_run_under/ | false | false | self | 11 | null |
Best LLM for image to prompt generation | 2 | Hi, im new in this world and i'm expermitening with LLM for image to prompt generation to use with stable diffusion. I was experimenting with Amoral Gemma 3 - 12B using LM Studio but im not pretty satisfied with the results. Any LLM model or master prompt recomendations? | 2025-11-17T16:41:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ozl5hx/best_llm_for_image_to_prompt_generation/ | Ornery-Relative5704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozl5hx | false | null | t3_1ozl5hx | /r/LocalLLaMA/comments/1ozl5hx/best_llm_for_image_to_prompt_generation/ | false | false | nsfw | 2 | null |
Ask for a localhost model can translating programming books | 0 | Hi people, i want to find a model for local host, target for translating (actually programming books, I can read English book well, but i need learn fast, i believe read in my mother language will faster). My machine is core i5 12400f, rtx 3060, 32GB RAM. Thank so much | 2025-11-17T16:39:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ozl3wi/ask_for_a_localhost_model_can_translating/ | Formal-Proof-5221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozl3wi | false | null | t3_1ozl3wi | /r/LocalLLaMA/comments/1ozl3wi/ask_for_a_localhost_model_can_translating/ | false | false | self | 0 | null |
DSPy on a Pi: Cheap Prompt Optimization with GEPA and Qwen3 | 8 | It took me about sixteen hours on a Raspberry Pi to boost performance of chat-to-SQL using Qwen3 0.6B from 7.3% to 28.5%. Using gpt-oss:20b, to boost performance from ~60% to ~85% took 5 days. | 2025-11-17T16:20:23 | https://leebutterman.com/2025/11/01/prompt-optimization-on-a-raspberry-pi.html | parenthethethe | leebutterman.com | 1970-01-01T00:00:00 | 0 | {} | 1ozkkzz | false | null | t3_1ozkkzz | /r/LocalLLaMA/comments/1ozkkzz/dspy_on_a_pi_cheap_prompt_optimization_with_gepa/ | false | false | default | 8 | null |
How are you handling web crawling? Firecrawl is great, but I'm hitting limits. | 5 | Been experimenting with web search and content extraction for a small AI assistant project, and I'm hitting a few bottlenecks. My current setup is basically 1) Search for a batch of URLs 2) Scrape and extract the text and 3) Feed it to an LLM for answers.
It works decently, but the main issue is managing multiple services - dealing with search APIs, scraping infrastructure, and LLM calls separately - and maintaining that pipeline feels heavier than it should.
Is there a better way to handle this? Ideally something that bundles search + content extraction + LLM generation together. All this without having to constantly manage multiple services manually.
Basically: I need a simpler dev stack for AI-powered web-aware assistants that handles both data retrieval and answer generation cleanly. I wanna know if anyone has built this kind of pipeline in production | 2025-11-17T16:02:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ozk41e/how_are_you_handling_web_crawling_firecrawl_is/ | Robertshee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozk41e | false | null | t3_1ozk41e | /r/LocalLLaMA/comments/1ozk41e/how_are_you_handling_web_crawling_firecrawl_is/ | false | false | self | 5 | null |
MiniMax-M2-REAP-172B-A10B-GGUF | 96 | As in topic. Since Cerebras published the reap, I decided I'd try to get some GGUFs going (since I wanted to use them too).
It has been kind of annoying since apparently Cerebras messed up the tokenizer files (I think they uploaded the GLM tokenizer files by mistake, but I've been too lazy to actually check). Anyways, I restored the tokenizer and the model works quite decently.
Can't do an imatrix right now, so just publishing Q5\_K\_M quants since it seems like a general use case (and fits in 128 GB RAM). I'm collecting demands if someone wants some specific quants :) | 2025-11-17T16:00:18 | https://huggingface.co/ilintar/MiniMax-M2-REAP-172B-A10B-GGUF | ilintar | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ozk1rh | false | null | t3_1ozk1rh | /r/LocalLLaMA/comments/1ozk1rh/minimaxm2reap172ba10bgguf/ | false | false | default | 96 | {'enabled': False, 'images': [{'id': 'yC7porbEEX7QYKHPcLygWYv7HiHQvUYKMqU4aFgmwiA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/yC7porbEEX7QYKHPcLygWYv7HiHQvUYKMqU4aFgmwiA.png?width=108&crop=smart&auto=webp&s=bf167b254d0691dfa22bae50fc8fe8502014a510', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/yC7porbEEX7QYKHPcLygWYv7HiHQvUYKMqU4aFgmwiA.png?width=216&crop=smart&auto=webp&s=3c301e446e123de08d1ccf5a01957ef8d5574bd6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/yC7porbEEX7QYKHPcLygWYv7HiHQvUYKMqU4aFgmwiA.png?width=320&crop=smart&auto=webp&s=efa6c243927edfde926dc8efc9239a419a02cd26', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/yC7porbEEX7QYKHPcLygWYv7HiHQvUYKMqU4aFgmwiA.png?width=640&crop=smart&auto=webp&s=bdf4b22ec08094ccf73f74b8c2cc5cc286e38a84', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/yC7porbEEX7QYKHPcLygWYv7HiHQvUYKMqU4aFgmwiA.png?width=960&crop=smart&auto=webp&s=3d81f42be900ab05981db3a7a68a251f90348639', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/yC7porbEEX7QYKHPcLygWYv7HiHQvUYKMqU4aFgmwiA.png?width=1080&crop=smart&auto=webp&s=b9699dbd4a08be9ccc7737e23496dc784bb6b128', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/yC7porbEEX7QYKHPcLygWYv7HiHQvUYKMqU4aFgmwiA.png?auto=webp&s=ebe4c07affc936161db45830a5d8853da9b4ce96', 'width': 1200}, 'variants': {}}]} |
I revived Sir Isaac Newton using a fully local RAG setup. | 0 | So after **47 hours of non-stop debugging**,
6 virtual environments dying like soldiers,
128 pip installs,
and me saying “Okay I’m done” at least three times…
I somehow ended up **reviving Sir Isaac Newton.**
Yes.
He’s alive.
And he’s judging my physics.
A fully **local RAG chatbot** that reads my personal documents and responds exactly like Newton — complete with Early Modern English, dramatic tone, and unnecessary arrogance.
GitHub link :- [https://github.com/sanusharma-ui/NewtonAI](https://github.com/sanusharma-ui/NewtonAI) | 2025-11-17T15:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ozk0q4/i_revived_sir_isaac_newton_using_a_fully_local/ | Qwave_Sync | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozk0q4 | false | null | t3_1ozk0q4 | /r/LocalLLaMA/comments/1ozk0q4/i_revived_sir_isaac_newton_using_a_fully_local/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'GB7JwjkIPh56Ts5RHYcx79_LwNRlF4s1mWINz2YDmmI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GB7JwjkIPh56Ts5RHYcx79_LwNRlF4s1mWINz2YDmmI.png?width=108&crop=smart&auto=webp&s=0ed9d67ee49bf1e63496c731fb77b02f9e31be43', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GB7JwjkIPh56Ts5RHYcx79_LwNRlF4s1mWINz2YDmmI.png?width=216&crop=smart&auto=webp&s=034debc55d66979a852bc96886287694e89deab4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GB7JwjkIPh56Ts5RHYcx79_LwNRlF4s1mWINz2YDmmI.png?width=320&crop=smart&auto=webp&s=c20f3c439af5fbccc4b582c0343839b31cc4a8f5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GB7JwjkIPh56Ts5RHYcx79_LwNRlF4s1mWINz2YDmmI.png?width=640&crop=smart&auto=webp&s=e151c1326072762b9e6cf2ce3667ec4ae9c7e1f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GB7JwjkIPh56Ts5RHYcx79_LwNRlF4s1mWINz2YDmmI.png?width=960&crop=smart&auto=webp&s=25681c0ea698cc2f08dcbe296ccb6b0bdce1514e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GB7JwjkIPh56Ts5RHYcx79_LwNRlF4s1mWINz2YDmmI.png?width=1080&crop=smart&auto=webp&s=5e797e9880bcb4bb0783810eab8c19cf60d3960a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GB7JwjkIPh56Ts5RHYcx79_LwNRlF4s1mWINz2YDmmI.png?auto=webp&s=001540d0901bf28893aa89f0e994c3a05281fcc1', 'width': 1200}, 'variants': {}}]} |
Reactive Agents: AI agents that self-optimize after every interaction | 67 | We have developed an actual reactive agent that continuously learns and adapts based on its own performance, without requiring code changes or human intervention. To make them easy to deploy, observe, and manage, we also built a server and app. All of our work is open source under the Apache 2.0 license. You can find it here: [https://github.com/idkhub-com/reactive-agents](https://github.com/idkhub-com/reactive-agents)
After setting up the server, you don't need to make many changes to migrate a normal agent to a reactive agent. The server understands the OpenAI API standard, so you can continue to use the OpenAI library from Python, JS, Rust, or whatever language you use.
Each agent can perform the following changes in real-time:
* Choose different LLM providers and models
* Optimize system prompts
* Change hyperparameters
* Choose different configurations for conversations on different topics
How it works:
1. You set up your agents in the UI. The most work you will have to do is to provide 1 or 2 sentences describing what each agent does, as well as 1 or 2 sentences describing what each skill (node) does.
2. Select the LLM models you want each skill to use.
3. Select what you want the agent to improve based on (task completion, conversation completeness, latency, etc).
4. Send regular requests to the Reactive Agents server with a header that specifies which agent and skill to use (see the sketch right after this list).
5. For every request you send, you can see its input, output, the system prompt that was used, how the agent evaluated itself, and other information.
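To make step 4 concrete, here is a minimal sketch using the standard OpenAI Python client pointed at a locally running Reactive Agents server. The port and the header names shown are illustrative placeholders, not the project's documented values; check the README for the exact ones.

```python
# Minimal sketch: talk to a local Reactive Agents server through the OpenAI client.
# Assumptions: server on localhost:8000 exposing an OpenAI-compatible /v1 endpoint;
# "X-Agent" / "X-Skill" are placeholder header names.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the server may remap this to whatever model the agent currently prefers
    messages=[{"role": "user", "content": "Summarize yesterday's support tickets."}],
    extra_headers={
        "X-Agent": "support-assistant",  # which agent should handle this request
        "X-Skill": "summarize",          # which skill/node inside that agent
    },
)
print(response.choices[0].message.content)
```

Because it is just the OpenAI wire format plus headers, the same pattern works from JS, Rust, or plain curl.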
We have achieved remarkable results in many scenarios, but we still need to do considerable work. Things to look out for:
* Streaming is not supported yet. (Top priority right now)
* We support over 30 different AI providers, but we have only truly tested OpenAI, Ollama, OpenRouter, and Google (Gemini).
* You may need to periodically check how the agent is evaluating itself to ensure it is not being too strict or lenient.
* The algorithms used internally will continue to evolve and may cause issues.
* Please don't expose the server to the public. Although we have security implementations in place, the server is currently intended to be run locally only.
* Please refrain from using it for requests that you can't afford to lose. We haven't pushed things past their breaking points yet.
We welcome feedback, discussions, and contributions. Thanks! | 2025-11-17T15:57:22 | https://www.reddit.com/gallery/1ozjz15 | No_Heart_159 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ozjz15 | false | null | t3_1ozjz15 | /r/LocalLLaMA/comments/1ozjz15/reactive_agents_ai_agents_that_selfoptimize_after/ | false | false | 67 | null | |
how to attach video to Qwen 2.5-vl-7b GGUF for analysing ? | 0 | Hi, using LM Studio I can successfully attach pictures to this model's chat interface and have it analyse them, but I'm unable to attach videos in mp4. Can anyone tell me how to make this work? Running an ARM Mac. | 2025-11-17T15:47:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ozjphp/how_to_attach_video_to_qwen_25vl7b_gguf_for/ | greenreddits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozjphp | false | null | t3_1ozjphp | /r/LocalLLaMA/comments/1ozjphp/how_to_attach_video_to_qwen_25vl7b_gguf_for/ | false | false | self | 0 | null
I accidentally revived Sir Isaac Newton using local RAG. | 0 | After **47 hours of pure debugging chaos**,
6 virtual environments dying in my arms,
128 `pip install` attempts,
and me whispering “I give up” at least 3 times…
I can finally say this with confidence:
**Sir Isaac Newton is now alive.**
**And he’s roasting modern physics.**
**Introducing: NewtonAI**
A fully **local** RAG-based chatbot that reads my personal documents and responds exactly like Newton, complete with Early Modern English and unnecessary arrogance.
I wanted to create a persona model, but ended up resurrecting a 17th-century physicist.
**Tech Stack (100% offline)**
* **Ollama** → llama3.2 + nomic-embed-text
* **LangChain (2025 version)** → yes, it fought me
* **ChromaDB** → vector store
* **FastAPI** backend
* **React + Vite** frontend
* **Redis + LRU caching** → because LangChain loves being slow
Everything runs locally — **zero cloud**, zero API calls, zero cost.
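For a sense of how the pieces fit together, here is a rough sketch of the core retrieval + persona step. It is not the exact code in the repo; the collection name, prompt wording, and the `ask` helper are made up for illustration, and it assumes documents were already ingested into the Chroma store.

```python
# Persona-driven RAG sketch: Ollama models + Chroma vector store via LangChain.
from langchain_ollama import ChatOllama, OllamaEmbeddings
from langchain_chroma import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

embeddings = OllamaEmbeddings(model="nomic-embed-text")
store = Chroma(collection_name="newton_docs", embedding_function=embeddings,
               persist_directory="./chroma_db")  # assumes docs were ingested earlier
retriever = store.as_retriever(search_kwargs={"k": 4})

prompt = ChatPromptTemplate.from_messages([
    ("system", "Thou art Sir Isaac Newton. Answer in Early Modern English, with "
               "great confidence, using only the provided context:\n{context}"),
    ("human", "{question}"),
])
llm = ChatOllama(model="llama3.2")

def ask(question: str) -> str:
    # Retrieve the most relevant chunks, stuff them into the persona prompt, generate.
    docs = retriever.invoke(question)
    context = "\n\n".join(d.page_content for d in docs)
    chain = prompt | llm | StrOutputParser()
    return chain.invoke({"context": context, "question": question})

print(ask("Explain thy three laws of motion."))
```

The real project wraps the retrieval and generation calls with Redis/LRU caching, which this sketch omits.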
Features:
1) Newton-style dramatic replies
2) PDF & text ingestion pipeline
3) Persona-driven RAG
4) Caching for speed
5) Fully private (your data never leaves your machine)
NewtonAI can also roast your ML concepts if you upset him.
**GitHub link :-** [**https://github.com/sanusharma-ui/NewtonAI**](https://github.com/sanusharma-ui/NewtonAI) | 2025-11-17T15:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ozjl9e/i_accidentally_revived_sir_isaac_newton_using/ | Qwave_Sync | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozjl9e | false | null | t3_1ozjl9e | /r/LocalLLaMA/comments/1ozjl9e/i_accidentally_revived_sir_isaac_newton_using/ | false | false | self | 0 | null |
I built a tiny LLM agent to explain why the market is moving today - would love feedback | 0 | Hey everyone,
I put together a small LLM-powered site that answers a question I always have: Why is the market moving like this right now? [https://whydoesitgolikethis.vercel.app/](https://whydoesitgolikethis.vercel.app/)
It started as a side project to solve my own pain point (I just want the answer and hate visual noise), so it’s free, requires no signup, and doesn’t collect any data.
Would love any feedback on the idea or anything else. Happy to improve it! | 2025-11-17T15:37:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ozjg4d/i_built_a_tiny_llm_agent_to_explain_why_the/ | centralparksquirrel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozjg4d | false | null | t3_1ozjg4d | /r/LocalLLaMA/comments/1ozjg4d/i_built_a_tiny_llm_agent_to_explain_why_the/ | false | false | self | 0 | null |
Open-source RAG/LLM evaluation framework; would love ANY feedback 🫶🏽 | 2 | Reposting for better visibility:
Hallo from Berlin,
I'm one of the founders of Rhesis, an open-source testing platform for LLM applications. Just shipped v0.4.2 with zero-config Docker Compose setup (literally ./rh start and you're running). Built it because we got frustrated with high-effort setups for evals. Everything runs locally - no API keys.
Genuine question for the community: For those running local models, how are you currently testing/evaluating your LLM apps? Are you:
- Writing custom scripts?
- Using cloud tools despite running local models?
- Just... not testing systematically?

We're MIT licensed and built this to scratch our own itch, but I'm curious if local-first eval tooling actually matters to your workflows or if I'm overthinking the privacy angle.
Link: https://github.com/rhesis-ai/rhesis | 2025-11-17T15:21:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ozj1xd/opensource_ragllm_evaluation_framework_would_love/ | IOnlyDrinkWater_22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozj1xd | false | null | t3_1ozj1xd | /r/LocalLLaMA/comments/1ozj1xd/opensource_ragllm_evaluation_framework_would_love/ | false | false | self | 2 | null |
How come Qwen is getting popular with such amazing options in the open source LLM category? | 302 | To be fair, apart from Qwen, there is also Kimi K2. Why this uptick in their popularity? OpenRouter shows a 20% share for Qwen. The different evaluations certainly favor the Qwen models when compared with Claude and DeepSeek.
The main points I feel are working in Qwen's favor are its cheap prices and the open-source models. This doesn't appear to be sustainable, however: it will require a massive inflow of resources and talent to keep up with giants like Anthropic and OpenAI, or Qwen will become a thing of the past very fast. The recent wave of frontier model updates means Qwen must show sustained progress to maintain market relevance.
What's your take on Qwen's trajectory? I'm curious how it stacks up against Claude and ChatGPT in your real-world use cases. | 2025-11-17T15:12:06 | Puzzleheaded_Toe5074 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oziszl | false | null | t3_1oziszl | /r/LocalLLaMA/comments/1oziszl/how_come_qwen_is_getting_popular_with_such/ | false | false | default | 302 | {'enabled': True, 'images': [{'id': 'ue6rw77n1u1g1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/ue6rw77n1u1g1.png?width=108&crop=smart&auto=webp&s=1e8f4915260756220b2eabda24d8b161c712cfa9', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/ue6rw77n1u1g1.png?width=216&crop=smart&auto=webp&s=91fbb856594b60c1ccc1a6ac20fa3ad017b3fd5e', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/ue6rw77n1u1g1.png?width=320&crop=smart&auto=webp&s=60513dea6a816ea2f631a6b365d92caa31ef5acb', 'width': 320}, {'height': 377, 'url': 'https://preview.redd.it/ue6rw77n1u1g1.png?width=640&crop=smart&auto=webp&s=c4b2c8f55f693519a0cd98ab0ba4b27693c9518b', 'width': 640}, {'height': 566, 'url': 'https://preview.redd.it/ue6rw77n1u1g1.png?width=960&crop=smart&auto=webp&s=4af67b739b354b20dd9047ed23b71c0a3e9d2cdd', 'width': 960}], 'source': {'height': 595, 'url': 'https://preview.redd.it/ue6rw77n1u1g1.png?auto=webp&s=1adea091bc9f5c5177d589ab7ebee8457da591b5', 'width': 1008}, 'variants': {}}]} | |
The Thoughts on AGI — A General Reflection Beyond Optimism and Fear | 0 | In today’s AI community, discussions about AGI often swing between two extremes.
Some express unbounded optimism. Some warn about existential risks.
Both views focus heavily on the end state of AGI — its grandeur or its potential danger.
But very few discussions touch the essential question:
What is the internal structure and mechanism that AGI must rely on to be reliable, controllable, and ultimately beneficial?
This missing “middle part” is the true bottleneck.
Because without structure, any imagined AGI — whether wonderful or terrifying — becomes just another black box.
A black box that systems engineers cannot verify, society cannot trust, and humanity cannot confidently coexist with.
⸻
1. Why AGI Will Certainly Arrive
Despite the noise, one conclusion seems unavoidable:
AGI will eventually emerge — not as a miracle,
but as the natural extension of human cognitive engineering.
From the history of computation to the evolution of neural architectures,
each technological generation reduces uncertainty, increases abstraction,
and moves closer to representing human cognitive processes through formal mechanisms.
AGI is not magic.
AGI is the continuation of engineering.
But engineering requires structure.
And this brings us to the second point.
⸻
2. AGI Requires a Structural Understanding of Intelligence
If we look at human cognition—not metaphysically, but functionally—we see a few robust components:
• Perception
• Memory and contextual retrieval
• Evaluation and discrimination
• Reasoning and inference
• Decision formation
• Feedback, correction, and continuous improvement
This flow is not mystical;
it is the operational architecture behind intelligent behavior.
In other words:
Human cognition is not a mystery — it is a structured process.
AGI must follow a structured process as well.
An AGI that does not expose structure,
does not support feedback loops,
does not accumulate stable improvements,
cannot be considered reliable AGI.
It is, at best, an impressive but unstable generator.
⸻
3. The Black-Box Problem: Optimistic or Fearful, Both Miss the Mechanism
When people discuss AGI’s arrival, they tend to talk about outcomes:
• “It will transform society.”
• “It will replace jobs.”
• “It will surpass humans.”
• “It might destroy us.”
But all these narratives are output-level fantasies — positive or negative —
while ignoring the core engineering question:
What internal mechanism ensures that AGI behaves predictably, transparently, and safely?
Without discussing mechanism, “AGI optimism” becomes marketing.
Without discussing mechanism, “AGI fear” becomes superstition.
Both are incomplete.
The only meaningful path is:
mechanism-first, structure-first, reliability-first.
⸻
4. A Structured Name for the Structured Model
Because intelligence itself has an internal logic,
we use a simple term to refer to this natural structure:
Cognitive Native Intelligence Architecture.
It is not a brand or a framework claim.
It is merely a conceptual label to remind us that:
• intelligence emerges from structure,
• structure enables mechanism,
• mechanism enables reliability,
• reliability enables coexistence.
This is the path from cognition → architecture → engineering → AGI.
⸻
5. Our Expectation: Responsible AGI, Not Mythical AGI
We do not advocate a race toward uncontrolled AGI.
Nor do we reject the possibility of AGI.
Instead, we believe:
• AGI should arrive.
• AGI will arrive.
• But AGI must arrive with structure, with mechanism, and with reliability.
A reliable AGI is not an alien being.
It is an engineered system whose behavior:
• can be verified,
• can be corrected,
• can accumulate improvements,
• and can safely operate within human civilization.
If AGI cannot meet these criteria,
it belongs in the laboratory —
not in society.
⸻
6. Why This Matters to Engineers
This article is not philosophical decoration.
It is a practical orientation:
• Engineers do not need “AGI myths.”
• Engineers need operational clarity.
• Engineers need structured mechanisms.
• Engineers need predictability and feedback cycles.
• Engineers need a system that can scale without collapsing into chaos.
A mechanism-first view of AGI does not promise sensation.
It promises stability.
And stability is the real foundation of human-aligned intelligence. | 2025-11-17T15:07:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ozioq9/the_thoughts_on_agi_a_general_reflection_beyond/ | Hefty_Document_9466 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozioq9 | false | null | t3_1ozioq9 | /r/LocalLLaMA/comments/1ozioq9/the_thoughts_on_agi_a_general_reflection_beyond/ | false | false | self | 0 | null |
MiroThinker v1.0, an open-source agent foundation model with interactive scaling! | 20 | I’d like to recommend MiroThinker, a newly released open-source foundation model that simulates how humans handle complex problems.
MiroThinker v1.0 just launched recently! Remember our August open-source release? We're back with a MASSIVE update that's gonna blow your mind!
**What's New?**
We're introducing "Interactive Scaling" - a completely new dimension for AI scaling! Instead of just throwing more data/params at models, we let agents learn through deep environmental interaction. The more they practice & reflect, the smarter they get!
1. 256K Context + 600-Turn Tool Interaction
2. Performance That Slaps:
* BrowseComp: 47.1% accuracy (nearly matches OpenAI DeepResearch at 51.5%)
* Chinese tasks (BrowseComp-ZH): 7.7pp better than DeepSeek-v3.2
* First-tier performance across HLE, GAIA, xBench-DeepSearch, SEAL-0
* Competing head-to-head with GPT, Grok, Claude
3. 100% Open Source
* Full model weights ✅
* Complete toolchains ✅
* Interaction frameworks ✅
* Because transparency > black boxes
**Try it now**
* Demo: [https://dr.miromind.ai](https://dr.miromind.ai/)
* Agent: [https://github.com/MiroMindAI/MiroFlow](https://github.com/MiroMindAI/MiroFlow)
**Motivation**
Traditional scaling (more data + params) is hitting diminishing returns. We hypothesize that reasoning capabilities scale exponentially with interaction depth/breadth - agents that "practice" and "reflect" more become significantly more capable.
**Our Journey:** 6 months from initial open-source release → SOTA-level performance. Our team is small but MIGHTY, and we're just getting started!
Happy to answer questions about the Interactive Scaling approach or benchmarks! | 2025-11-17T14:32:38 | https://github.com/MiroMindAI/MiroThinker | wuqiao | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ozht24 | false | null | t3_1ozht24 | /r/LocalLLaMA/comments/1ozht24/mirothinker_v10_an_opensource_agent_foundation/ | false | false | default | 20 | {'enabled': False, 'images': [{'id': '6WQT83gvt4k49_jn91ZosE3XvOZY7RKHWPio8vKDeE8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6WQT83gvt4k49_jn91ZosE3XvOZY7RKHWPio8vKDeE8.png?width=108&crop=smart&auto=webp&s=8480cf50b0cf6f75a03300ce62ac3d55457b8ff3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6WQT83gvt4k49_jn91ZosE3XvOZY7RKHWPio8vKDeE8.png?width=216&crop=smart&auto=webp&s=39b2294b9c75be057e179a5c7b16b7ed0b170331', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6WQT83gvt4k49_jn91ZosE3XvOZY7RKHWPio8vKDeE8.png?width=320&crop=smart&auto=webp&s=9eaa2138912817ad78731d7e5672522ec8e9e2b6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6WQT83gvt4k49_jn91ZosE3XvOZY7RKHWPio8vKDeE8.png?width=640&crop=smart&auto=webp&s=3e7ffc3ec5358c9788b17caa1b1c5068e83ff661', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6WQT83gvt4k49_jn91ZosE3XvOZY7RKHWPio8vKDeE8.png?width=960&crop=smart&auto=webp&s=286f83007e3ad95ba9d31258dae9e6c618f89601', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6WQT83gvt4k49_jn91ZosE3XvOZY7RKHWPio8vKDeE8.png?width=1080&crop=smart&auto=webp&s=a8e94722d6a9da32480353bc3b89d00c5b5f8f7a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6WQT83gvt4k49_jn91ZosE3XvOZY7RKHWPio8vKDeE8.png?auto=webp&s=afd52de6262c480e04fc0b1b145924051ed186d1', 'width': 1200}, 'variants': {}}]} |
Why all DeepSeek R1 distills are overthinkers? | 1 | I tried multiple DeepSeek distills (in the range from 0.6B to over 10B) and they all share one thing: "overthinking". The model literally says "wait" every few words, which is not a behavior that I saw with the original DeepSeek when tested on OpenRouter.
For example, I asked a model how to learn Python, and the reasoning chain was something similar to this:
"User asked that he wants to learn python, python is a programming language, wait maybe user is speaking about something else called python?" And it loops itself through "wait" multiple times before answering a simple, easy question, while DeepSeek assumes that the Python programming language is what the user is asking about right after starting the CoT. | 2025-11-17T14:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ozhn77/why_all_deepseek_r1_distills_are_overthinkers/ | Swimming-Ratio4879 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozhn77 | false | null | t3_1ozhn77 | /r/LocalLLaMA/comments/1ozhn77/why_all_deepseek_r1_distills_are_overthinkers/ | false | false | self | 1 | null
Last week in Multimodal AI - Local Edition | 10 | I curate a weekly newsletter on multimodal AI. Here are the local/open-source highlights from this week:
**OmniVinci - Open-Source Omni-Modal LLM**
• NVIDIA's model unifies vision, audio, and language, beating Qwen2.5-Omni by 19% with 6x less data.
• Fully open-source with efficient multimodal fusion for local deployment.
• [GitHub](https://github.com/NVlabs/OmniVinci) | [Paper](https://arxiv.org/abs/2510.15870) | [Model](https://huggingface.co/nvidia/omnivinci)
**Pelican-VL 1.0 - Open Embodied AI Brain**
• Open-source VLM for humanoid robots with DPPO training for real-time learning.
• Converts visual inputs directly to 3D motion commands.
• [GitHub](https://github.com/Open-X-Humanoid/pelican-vl) | [Paper](https://arxiv.org/abs/2511.00108) | [Hugging Face](https://huggingface.co/X-Humanoid)
**Holo2 - Desktop/Mobile Agent**
• Multimodal model for UI grounding across web, Ubuntu, and Android.
• Drop-in replacement for Holo1/1.5 with SOTA benchmarks.
• [Blog](http://hcompany.ai/blog/holo2) | [GitHub](https://github.com/hcompai/hai-cookbook) | [Hugging Face](https://huggingface.co/collections/Hcompany/holo2)
[Image: Web Surfing with Holo2]
**Maya1 - Local Voice Generation**
• Create any voice from text with efficient TTS model.
• Runs locally for privacy-preserving voice synthesis.
• [Demo](https://huggingface.co/spaces/maya-research/maya1)
**Music Flamingo - Audio-Language Model**
• NVIDIA's model for deep music understanding and reasoning over full songs.
• Available on Hugging Face with demo space.
• [Paper](https://arxiv.org/abs/2511.10289) | [Model](https://huggingface.co/nvidia/music-flamingo-hf) | [Demo](https://huggingface.co/spaces/nvidia/music-flamingo)
See the full newsletter: [Multimodal Monday #33](https://open.substack.com/pub/thelivingedge/p/multimodal-monday-33) | 2025-11-17T14:22:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ozhkha/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozhkha | false | null | t3_1ozhkha | /r/LocalLLaMA/comments/1ozhkha/last_week_in_multimodal_ai_local_edition/ | false | false | 10 | null | |
Kimi K2 Thinking is the best combinatorics AI | 16 | 2025-11-17T14:20:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ozhhut/kimi_k2_thinking_is_the_best_combinatorics_ai/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozhhut | false | null | t3_1ozhhut | /r/LocalLLaMA/comments/1ozhhut/kimi_k2_thinking_is_the_best_combinatorics_ai/ | false | false | 16 | null | ||
How likely do you think an Ashley-Madison-style widespread breach exposing users and conversations is in the next few years? | 4 | I was quite naive with my usage of ChatGPT, and my mind won't stop replaying a doomsday scenario where every single user's chats leak and there's like a searchable database or some shit like that. If one were to take place, how do you think the event would transpire? I'm probably shamelessly seeking validation, but I don't think I care anymore. My life could change for the worse drastically if this were to happen. (Nothing illegal, but enough to ruin relationships and be publicly humiliated.)
| 2025-11-17T14:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ozhdq2/how_likely_do_you_think_a_ashleymadison_style/ | Antique-Account-2359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozhdq2 | false | null | t3_1ozhdq2 | /r/LocalLLaMA/comments/1ozhdq2/how_likely_do_you_think_a_ashleymadison_style/ | false | false | self | 4 | null
Wanting to train LLM for automotive purposes. | 2 | Good morning! Over the past few months I've been playing with AI. I started off with Gemini, then GitHub Copilot, and now I'm also using local LLMs on my hardware. I've created a few projects using AI that turned out decent. I've learned a bit about how your prompt is pretty much everything, and about steering them back in the right direction when they start getting off center. Sometimes it feels like you're correcting a child or someone with ADD.
With winter approaching I usually task myself with a project to keep myself busy so the "winter depression" doesn't hit too hard.
I've decided that my project will be to train an LLM to master automotive diagnostics and troubleshooting, combining two things I enjoy: technology and automotive work.
My current hardware is an Asus ROG Flow Z13 with the AMD Strix Halo chipset and 128GB of RAM. I am using Linux (Arch) as my OS. One of my AI learning projects was creating a script to get full Linux compatibility on the AMD Strix Halo hardware.
Link: https://github.com/th3cavalry/GZ302-Linux-Setup
I've done a little research on training and fine-tuning, but there seems to be some discrepancy about AMD hardware. Some places say you can, and others say it's not feasible right now.
So what I'm asking for is any links, suggestions, or training courses (preferably free) to research myself, plus some suggestions on a model that would be good for this given my hardware. After playing around with it this winter I plan on hosting it on a server I have at home. I'll probably pick up two used GPUs to throw in there so I can use it on the go and give some friends access to play around with it. Who knows, it might even become something bigger and widely used.
I have a few datasets already downloaded that I plan on using, and I'm going to compile my own for other things such as wiring.
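For orientation, this is roughly what a small LoRA fine-tune looks like with Hugging Face `transformers` + `peft`. The model ID, dataset file, and hyperparameters below are placeholders, and whether it actually runs on Strix Halo depends on your ROCm/PyTorch build.

```python
# Minimal LoRA fine-tuning sketch (placeholder model/dataset, untested on Strix Halo).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "Qwen/Qwen2.5-3B-Instruct"  # placeholder: any small instruct model
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.pad_token or tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Train only a small LoRA adapter instead of all of the weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Placeholder dataset: one JSON object per line with a "text" field
# (e.g. a diagnostic question followed by the fix).
ds = load_dataset("json", data_files="automotive_diag.jsonl")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="auto-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, logging_steps=10),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("auto-lora-adapter")
```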
Any and all feedback is welcome! Thank you!
| 2025-11-17T14:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ozhba5/wanting_to_train_llm_for_automotive_purposes/ | LooseGas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozhba5 | false | null | t3_1ozhba5 | /r/LocalLLaMA/comments/1ozhba5/wanting_to_train_llm_for_automotive_purposes/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'X0QQOxz12Mmh0kv_oHjk1X5jKQ3FjGJ016-YkVB1YuI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X0QQOxz12Mmh0kv_oHjk1X5jKQ3FjGJ016-YkVB1YuI.png?width=108&crop=smart&auto=webp&s=f381968f997e268f4c925e4bbfa45a716d521002', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/X0QQOxz12Mmh0kv_oHjk1X5jKQ3FjGJ016-YkVB1YuI.png?width=216&crop=smart&auto=webp&s=5e833086e7635912d4acab594d6d695401ae0fc4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/X0QQOxz12Mmh0kv_oHjk1X5jKQ3FjGJ016-YkVB1YuI.png?width=320&crop=smart&auto=webp&s=b81e30ef7f9634bbc5f59ef0c223e6d629439401', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/X0QQOxz12Mmh0kv_oHjk1X5jKQ3FjGJ016-YkVB1YuI.png?width=640&crop=smart&auto=webp&s=9b4ffef8d4a8727160e415dffc4b885b527e2ab5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/X0QQOxz12Mmh0kv_oHjk1X5jKQ3FjGJ016-YkVB1YuI.png?width=960&crop=smart&auto=webp&s=b3bd5c2efc851b15e766f54bf7a207aa5539e542', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/X0QQOxz12Mmh0kv_oHjk1X5jKQ3FjGJ016-YkVB1YuI.png?width=1080&crop=smart&auto=webp&s=e0ba214e847e8694cfa70c4096349c697b64d38e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/X0QQOxz12Mmh0kv_oHjk1X5jKQ3FjGJ016-YkVB1YuI.png?auto=webp&s=73b22b9350c7c921aed373640366035e8c7adaff', 'width': 1200}, 'variants': {}}]} |
MXFP4 Hybrid Dense Models (Ready to share - Near Lossless Precision, Faster, Smaller) | 89 | I created 10+ hybrid MXFP4 GGUFs of the top models available today. Many of these models have faster TPS than a Q4_K_M, are ~10% smaller than a Q8_0 model, and show much less precision loss than Q6_K (very near Q8, sometimes better). I'll provide links to the models, all the benchmarks, and my process.
>**If you don't care about the details and just want to play with the fun experiment models, just go to the last section of the post.**
I kept hearing “MXFP4 is bad on dense models,” but nobody showed numbers that satisfied my curiosity. So I ran my own tests. The first MXFP4 dense run was a total disaster, but I didn’t stop.
I kept protecting different parts of the model. The changes I thought would help made things worse. The ones I didn’t expect to matter suddenly did. So I kept digging… and something genuinely exciting started to appear.
# What is a MXFP4 Hybrid Model?
An MXFP4 hybrid comes from discovering a given architecture's preference for which quantization best protects the model's sanity and prevents noise. The goal is to detect which of these areas MXFP4 damages most, while leaving as much of the model quantized as MXFP4 as possible. The following are the most critical parts to protect from MXFP4, in different combinations:
* Output weights
* Token embd weights
* router
* gate
Across those 4 critical aspects that must be protected from noise, a combination of MXFP4, Q5_K, Q6_K, Q8_0, and F16 must be discovered to reduce noise as much as possible. Note that I never found a combination with Q4 that supported MXFP4.
When proper combinations are discovered, I've found magic will occur. I created an evolution process that creates, destroys, and discovers the patterns per model to find optimal hybrid MXFP4 variants.
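To give a feel for what that search loop looks like, here is a stripped-down sketch. It is not my actual evolution script; it assumes a recent llama.cpp build whose `llama-quantize` accepts tensor-type overrides and an `MXFP4_MOE` target, and it leaves the scoring step as a comment.

```python
# Sketch: brute-force a few protection combos for the critical tensors, quantize the
# rest as MXFP4, and keep whatever candidate scores best against the F16 reference.
import itertools
import subprocess

PROTECT_TYPES = ["Q5_K", "Q6_K", "Q8_0", "F16"]
BASE = "model-F16.gguf"  # placeholder input model

for out_type, embd_type in itertools.product(PROTECT_TYPES, repeat=2):
    dst = f"hybrid-out-{out_type}-embd-{embd_type}.gguf"
    subprocess.run([
        "llama-quantize",
        "--output-tensor-type", out_type,     # protect the output weights
        "--token-embedding-type", embd_type,  # protect the token_embd weights
        BASE, dst, "MXFP4_MOE",               # leave everything else as MXFP4
    ], check=True)
    # Next step (not shown): score each candidate against the F16 reference
    # (perplexity / KL-divergence) and keep only the variants that stay
    # near-lossless while shrinking the file.
```

The real process also varies how the router and gate are handled and kills off variants whose benchmark scores regress.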
# Examples
Please note that I will showcase here some hand-picked examples that are among the best results achieved. But it's important to remember that NOT all models achieved these results. Many models were outright allergic to MXFP4 no matter the variant. A future [GitHub repository](https://github.com/magiccodingman) I'll be making will showcase benchmarks of models that couldn't achieve a single successful variant, or models that achieved "ehhh" results that simply weren't good enough to write home about.
**Unsloth Qwen3 4B Thinking 2507**:
12% smaller than the Q8 model, while achieving only 0.0007% precision loss (basically F16 precision). It also hit ~423 tok/s in testing, which was faster than the Q8, Q6, Q5, and Q4.
* Output and the rest of the tensors were MXFP4. The router, gate, and token embed were Q6_K.
**Unsloth Granite 4.0 H 350M MXFP4**
This tiny 350-million-parameter model found a variant that had only a 0.04959% precision drop and reduced the size by 30% compared to the F16 model. For a tiny model like this, you need a precision drop this small to avoid lobotomizing the model; at this size, even a Q8_0 rarely achieves precision drops that don't cause brain damage.
* Used an F16 router, gate, and embed. Output was Q6_K. The rest of the tensors were MXFP4.
**Unsloth - Seed OSS 36B Instruct**
Seed OSS had 2 winners. One variant was 8.8% smaller than Q8, with basically the same precision and TPS as the Q8.
But this model was an outlier: the pure MXFP4_MOE was 11.7% smaller than the Q4_K_M, while achieving slightly better precision than the Q4_K_M! A 36B model that's not full-blown stupid at 17.9 GB? I'll take that win.
# Top Patterns Variant?
Honestly, I wish I could say there are patterns that I see. I noticed a lot of models really loved Q6_K. And you'll see through my benchmarks that on many occasions the Q6_K outperforms a Q8 in precision, speed, and file size. Which honestly is just a reminder to all of us to STOP posting quantized models without benchmarks (seriously, it's part of llama.cpp, it's easy, please do this).
There was a time I thought MXFP4 plus Q6_K were best friends, until Apriel 1.5 15B Thinker came out and said, "hey, you know how not a single model likes Q5_K? Well, I do!"
When no model had variations with Q8 that worked, the Granite 4.0 H 1B was apparently best friends with Q8 and MXFP4. Qwen3 VL 8B Instruct strictly only liked Q6, but the thinker variant... well, it was cool with both Q6 and Q8.
Some models like F16 and Q6\_k, some liked super weird combinations. Every time I recorded patterns, another model would break my theory.
In the end, I learned only 1 truth. That every models architecture works different and you must find what quantization the models speaks too without noise.
But one thing is clear from my experiment. MXFP4 isn't "bad", it's simply different. And the community hasn't had enough fun playing with it yet.
# The Models & Benchmarks
I’ve bundled everything into a Hugging Face collection here:
[**https://huggingface.co/collections/magiccodingman/mxfp4-hybrid-gguf**](https://huggingface.co/collections/magiccodingman/mxfp4-hybrid-gguf)
So far there's like 10+ models I've uploaded.
Model parameters tested ranged from 350M, 1B, 4B, 8B, 15B, 32B, 36B. There's more still uploading as well. Vision models included, but benchmarks on images are untested. If you test this before me, please let me know your results!
Every repo includes **organized benchmark tables** and the raw logs, so you can see exactly how I got my numbers. If something looks off, tell me, seriously, I don’t bite.
I've been utilizing these models without issue so far. And I worked really hard to build a benchmark suite to validate accuracy. But that doesn't mean the model is not quirky! I may not have found the weirdness MXFP4 hybrids are causing yet. Maybe there's none? Maybe there's some or a lot?
Either way. Enjoy my really weird MXFP4 hybrid models I created with a barbaric evolution algorithm.
And if you test these models, I would love to hear:
* Did it outperform the base model for your use case?
* Did it fall apart in some domain the benchmarks didn’t catch?
* Would you actually use a hybrid like this long-term?
* Are you tempted to run your own batch experiments to see which hybrid format becomes “king” on other architectures?
* Does any of the results surprise you? Why?
I hope you find this as fun and weird as I do.
If you’ve got questions, hit me.
If you understand the “why” behind some of these bizarre patterns, *definitely* speak up!
Hope you enjoy these experimental models as much as I have :)
**Quick Answers**
* I'm still refining my batch evolution scripts, but I will share them on [GitHub at magiccodingman](https://github.com/magiccodingman) soon enough. I fine tuned my algorithm last night and found even better optimizations that I'm not sharing here yet. So, I'm still in the process of optimizing before I share my dirty code.
* I'm putting together all my benchmarks of bad batches.
* I still have many more models I'm working on that I will upload in the coming weeks on my Hugging Face repo.
* I'm still uploading models right now lol. I swear my upload bandwidth is the only thing holding me back! Apriel 1.5B has a better variant found from last night still uploading. Qwen3 VL 32B still uploading as well. Should be done uploading this afternoon post 12 PM EST 11/17/25. | 2025-11-17T14:09:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ozh8py/mxfp4_hybrid_dense_models_ready_to_share_near/ | crossivejoker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozh8py | false | null | t3_1ozh8py | /r/LocalLLaMA/comments/1ozh8py/mxfp4_hybrid_dense_models_ready_to_share_near/ | false | false | self | 89 | {'enabled': False, 'images': [{'id': 'DbVWoI-Wl3PysKrp02iFVCgHgqei77dApIdvj19ELD4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/DbVWoI-Wl3PysKrp02iFVCgHgqei77dApIdvj19ELD4.png?width=108&crop=smart&auto=webp&s=3a002345ea62b741eec81438337c78ba41155ee0', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/DbVWoI-Wl3PysKrp02iFVCgHgqei77dApIdvj19ELD4.png?width=216&crop=smart&auto=webp&s=b52f93b502d89900d8865963a049d0b2a290a289', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/DbVWoI-Wl3PysKrp02iFVCgHgqei77dApIdvj19ELD4.png?width=320&crop=smart&auto=webp&s=f0272c2cdf5ff4bb453c489aa6685d000733dc2a', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/DbVWoI-Wl3PysKrp02iFVCgHgqei77dApIdvj19ELD4.png?auto=webp&s=bb29efea20fc1eaf2ccd60eb4076b8cccdb566c5', 'width': 460}, 'variants': {}}]} |
Model chooses safe language over human life | 29 | 2025-11-17T14:02:55 | zhambe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ozh3a9 | false | null | t3_1ozh3a9 | /r/LocalLLaMA/comments/1ozh3a9/model_chooses_safe_language_over_human_life/ | false | false | default | 29 | {'enabled': True, 'images': [{'id': '99qs5ohbnt1g1', 'resolutions': [{'height': 29, 'url': 'https://preview.redd.it/99qs5ohbnt1g1.png?width=108&crop=smart&auto=webp&s=cc9fae7aff41afd43e99611166a0514f1d6b6d3c', 'width': 108}, {'height': 59, 'url': 'https://preview.redd.it/99qs5ohbnt1g1.png?width=216&crop=smart&auto=webp&s=d3f8e5959d7592a7750abdbb7b66a1e49d4c7703', 'width': 216}, {'height': 87, 'url': 'https://preview.redd.it/99qs5ohbnt1g1.png?width=320&crop=smart&auto=webp&s=83315956cdd614272ab1076177ea1b273b74b18d', 'width': 320}, {'height': 175, 'url': 'https://preview.redd.it/99qs5ohbnt1g1.png?width=640&crop=smart&auto=webp&s=f91f312037c9b2b2ad44086576003ede8d0fd26e', 'width': 640}, {'height': 263, 'url': 'https://preview.redd.it/99qs5ohbnt1g1.png?width=960&crop=smart&auto=webp&s=349d8c464c0c2a0981b01886be3c8069a7b63183', 'width': 960}, {'height': 296, 'url': 'https://preview.redd.it/99qs5ohbnt1g1.png?width=1080&crop=smart&auto=webp&s=b303afb25f4ad7a0b22751ddb8bd295139c0a3a1', 'width': 1080}], 'source': {'height': 1108, 'url': 'https://preview.redd.it/99qs5ohbnt1g1.png?auto=webp&s=c0e5f6709d736462c96364c8e44d8fd5b34d06c7', 'width': 4031}, 'variants': {}}]} | ||
How do you deal with huge token consumption in RAG systems? | 0 |
Since the AI boom, RAG setups have become relevant again. There’s nothing tricky about chunking info, storing it in a vector DB, and wiring up retrieval. Easy part.
The real headache starts when your dataset grows big, and embedding or retrieval queries start eating up tokens like crazy — especially when the model wastes them on irrelevant context.
How do you optimize this?
Do you pre-filter documents before generating embeddings? Use multi-step retrieval? Build a hybrid model with metadata filters or small local rankers before hitting the main API?
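For concreteness, the kind of "small local ranker before hitting the main API" I mean looks roughly like this; the model name and token budget are just examples:

```python
# Rough sketch: over-retrieve from the vector DB, rerank locally with a small
# cross-encoder, and keep only what fits a token budget before calling the big model.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # small, runs locally

def trim_context(query: str, candidates: list[str], token_budget: int = 1500) -> list[str]:
    scores = reranker.predict([(query, doc) for doc in candidates])
    ranked = [doc for _, doc in sorted(zip(scores, candidates), reverse=True)]
    kept, used = [], 0
    for doc in ranked:
        cost = len(doc) // 4  # crude token estimate (~4 chars per token)
        if used + cost > token_budget:
            break
        kept.append(doc)
        used += cost
    return kept
```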
Would love to hear how people handle token efficiency in production RAG pipelines
| 2025-11-17T13:37:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ozghgk/how_do_you_deal_with_huge_token_consumption_in/ | WilDinar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozghgk | false | null | t3_1ozghgk | /r/LocalLLaMA/comments/1ozghgk/how_do_you_deal_with_huge_token_consumption_in/ | false | false | self | 0 | null |
Local rig, back from the dead. | 38 | Inspired by [this post](https://old.reddit.com/r/LocalLLaMA/comments/1oyyk3k/my_ai_at_home_rig/) I thought I'd update since I last [posted my setup](https://old.reddit.com/r/LocalLLaMA/comments/1js4iy0/i_think_i_overdid_it/). As a few people pointed out, cooling was... suboptimal. It was fine in cool weather but a hot summer meant I burned out some VRAM on one of the A6000s.
JoshiLabs were able to repair it (replace the chip, well done him) and I resolved to watercool. You can get reasonably priced Bykski A6000 blocks from Aliexpress, it turns out. Unfortunately, while building the watercooling loop, I blew up my motherboard (X299) with a spillage. It was very fiddly and difficult in a confined space. There is a 240x60mm rad in the front as well. The build was painful and expensive.
I ended up on a ROMED8-2T like many others here, and an Epyc. Sourcing eight sticks of matched RAM was difficult (I did eventually).
Temps depend on ambient, but are about 25C idle and settle at about 45C with full fans (I ended up on Noctua industrial) and a dynamic power limit at 200W each card. Beefy fans make a huge difference.
I'm running GLM 4.5 Air AWQ FP8 or 4.6 REAP AWQ 4bit on vLLM. It's good. I'm hoping for 4.6 Air or a new Mistral Large. You'll notice the gaps between the cards. I'm pondering a passively cooled A2 (16GB, single slot) for speech or embeddings. If anyone has experience with those, I'd be curious. | 2025-11-17T13:07:31 | https://www.reddit.com/gallery/1ozfsr7 | _supert_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ozfsr7 | false | null | t3_1ozfsr7 | /r/LocalLLaMA/comments/1ozfsr7/local_rig_back_from_the_dead/ | false | false | 38 | null | |
cerebras/MiniMax-M2-REAP-162B-A10B · Hugging Face | 63 | 2025-11-17T12:53:42 | https://huggingface.co/cerebras/MiniMax-M2-REAP-162B-A10B | maroule | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ozfhjg | false | null | t3_1ozfhjg | /r/LocalLLaMA/comments/1ozfhjg/cerebrasminimaxm2reap162ba10b_hugging_face/ | false | false | default | 63 | {'enabled': False, 'images': [{'id': 'pZNcDARkPPYS1XPZ3DSC6Cog6lLWkjR2LNUrD7vyKjM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pZNcDARkPPYS1XPZ3DSC6Cog6lLWkjR2LNUrD7vyKjM.png?width=108&crop=smart&auto=webp&s=18e3074c2e5d847c97421cd940b255a15afbbc5b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pZNcDARkPPYS1XPZ3DSC6Cog6lLWkjR2LNUrD7vyKjM.png?width=216&crop=smart&auto=webp&s=c31793120141a6891585a61007ed0e7872668c88', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pZNcDARkPPYS1XPZ3DSC6Cog6lLWkjR2LNUrD7vyKjM.png?width=320&crop=smart&auto=webp&s=3785a82188f4c3e75d95d9734f0358e9b69d9f93', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pZNcDARkPPYS1XPZ3DSC6Cog6lLWkjR2LNUrD7vyKjM.png?width=640&crop=smart&auto=webp&s=ac99bec61621b284cbf7697c5666d42696ab91f4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pZNcDARkPPYS1XPZ3DSC6Cog6lLWkjR2LNUrD7vyKjM.png?width=960&crop=smart&auto=webp&s=710f0ff91a292ab87e27614d4d7283bb73cb01be', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pZNcDARkPPYS1XPZ3DSC6Cog6lLWkjR2LNUrD7vyKjM.png?width=1080&crop=smart&auto=webp&s=a0c2b31c237e2221c9c537a267e5b93e2c797e89', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pZNcDARkPPYS1XPZ3DSC6Cog6lLWkjR2LNUrD7vyKjM.png?auto=webp&s=6af842616c652885b44e1c11d7a16348da279c42', 'width': 1200}, 'variants': {}}]} | |
Multimodal AI in 2025: how GPT‑5, Gemini, Claude and Grok learned to understand text, images and video | 1 | [removed] | 2025-11-17T12:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ozfal3/multimodal_ai_in_2025_how_gpt5_gemini_claude_and/ | chackifinster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozfal3 | false | null | t3_1ozfal3 | /r/LocalLLaMA/comments/1ozfal3/multimodal_ai_in_2025_how_gpt5_gemini_claude_and/ | false | false | self | 1 | null |
Embedding models have converged | 152 | There are so many embedding models out there that it’s hard to know which one is actually “the best.” I kept seeing different recommendations, so I got curious and tested them myself.
I ran 13 models on 8 datasets and checked latency, accuracy, and an LLM-judged ELO score. Honestly, the results were not what I expected - most models ended up clustered pretty tightly.
* ~85% are inside a 50-ELO band
* top 4 are ~23.5 ELO apart
* rank 1 → 10 is around a 3% gap
So now I’m thinking the embedding choice isn’t the thing that moves quality the most. The bigger differences seem to come from other parts of the pipeline: chunking, hybrid search, and reranking.
Full breakdown if you want to look at the numbers: [https://agentset.ai/embeddings](https://agentset.ai/embeddings) | 2025-11-17T12:42:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ozf9al/embedding_models_have_converged/ | midamurat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozf9al | false | null | t3_1ozf9al | /r/LocalLLaMA/comments/1ozf9al/embedding_models_have_converged/ | false | false | 152 | null | |
First-hand experience running local LLM workflows on NVIDIA DGX Spark | 0 | Just wrapped up a pretty intense 4-day deep dive with the NVIDIA DGX Spark, pushing it through a range of real-world, sovereign AI use cases. Sharing the experience here in case it’s useful for others working with on-prem or local LLM setups.
Here’s what we explored and achieved:
\- Full system setup for sovereign, on-prem AI
\- Established remote secure access for distributed teams
\- Enterprise AI search (text, image, structured + unstructured data)
\- Application containerization for reproducible AI deployments
\- Offline voice agent for private conversations
\- Domain-specific model fine-tuning
\- Synthetic data generation - zero cloud, zero token cost
\- Multimodal pipelines with MONAI & NVIDIA frameworks
An intense but inspiring few days - and we’re just getting started. | 2025-11-17T12:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ozeyak/firsthand_experience_running_local_llm_workflows/ | Founder_GenAIProtos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozeyak | false | null | t3_1ozeyak | /r/LocalLLaMA/comments/1ozeyak/firsthand_experience_running_local_llm_workflows/ | false | false | self | 0 | null |
Work around for context memory losses | 0 | A few weeks ago I had posted here that my team is going through AI fatigue because
1. they ask the LLM to do one thing and then it does another
2. they don't know how to provide all the context to the LLM that does not break one thing while building another
We then put our heads together to make this work and find solutions, because coding without AI agents will only leave you gasping for breath as you try to catch up with everyone.
We found two potential solutions:
1. Using "Adversarial AI", i.e. creating an agent that acts as the adversary to the original one to find holes in its code from a quality standpoint.
The Adversarial AI thing works like a charm - Agent 1 generates code. Agent 2 is tasked to review the code and find problems. Return the review to Agent 1 and repeat. When both agree the work is done, review it yourself one more time and commit. (A rough sketch of this loop is below, after point 2.) At first, when experimenting, I thought I needed to use different LLMs for this. But over time I realized "context is king". You can use the same neural net to take both sides of the argument. Just ensure they are positioned adversarially through context. We do not use another code reviewer tool - but maybe we should?
2. Using context management tools - to help maintain the system's context, generate prompts based on requirements, and even detect drift.
I guess this should have been point 1, because it works even before you write code. When giving a prompt to a coding LLM we often overlooked dependencies and trusted the LLM "to figure those out", but that was valid only as long as its memory lasted. We now provide the requirement to our context management tool [brew.studio](http://brew.studio) and it in turn surfaces all dependencies. Once you review those (ideally a product manager should look into them, since it's also like creating specs for the developers) you can then generate prompts through this tool to give to your coding agents.
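The adversarial loop from point 1, reduced to a sketch: the model name, local endpoint, and the "APPROVED" stop phrase are placeholders, and any OpenAI-compatible server works.

```python
# Two roles, one model: a coder agent and an adversarial reviewer, looping until
# the reviewer signs off. Placeholder base_url/model; "APPROVED" is our own convention.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")
MODEL = "qwen2.5-coder:14b"  # placeholder model name

def chat(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

task = "Write a Python function that parses ISO-8601 dates with error handling."
code = chat("You are a careful senior engineer. Output only code.", task)

for _ in range(4):  # cap the back-and-forth
    review = chat(
        "You are an adversarial code reviewer. Find correctness, security, and "
        "quality problems. Reply 'APPROVED' only if there is nothing left to fix.",
        f"Task: {task}\n\nCode:\n{code}",
    )
    if "APPROVED" in review:
        break
    code = chat("You are a careful senior engineer. Output only code.",
                f"Task: {task}\n\nYour previous code:\n{code}\n\n"
                f"Reviewer feedback:\n{review}\n\nRevise.")

print(code)  # final human review still happens before commit
```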
Both these methods have almost eliminated the frustration we had just a few weeks ago. Reddit really is amazing - the things you discover here are transformational. | 2025-11-17T11:38:27 | https://www.reddit.com/r/LocalLLaMA/comments/1oze137/work_around_for_context_memory_losses/ | Temporary_Papaya_199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oze137 | false | null | t3_1oze137 | /r/LocalLLaMA/comments/1oze137/work_around_for_context_memory_losses/ | false | false | self | 0 | null
I was tired of guessing my RAG chunking strategy, so I built rag-chunk, a CLI to test it. | 1 | Hi all,
I'm sharing a small tool I just open-sourced for the Python / RAG community: `rag-chunk`.
It's a CLI that solves one problem: How do you *know* you've picked the best chunking strategy for your documents?
Instead of guessing your chunk size, `rag-chunk` lets you measure it:
* **Parse** your `.md` doc folder.
* **Test** multiple strategies: `fixed-size` (with `--chunk-size` and `--overlap`) or `paragraph`.
* **Evaluate** by providing a JSON file with ground-truth questions and answers.
* **Get a Recall score** to see how many of your answers survived the chunking process intact.
Super simple to use. Contributions and feedback are very welcome!
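For anyone curious what the recall check means, here is the idea in a few lines; this is a simplification for illustration, not the tool's actual implementation:

```python
# Simplified recall metric: a ground-truth answer "survives" chunking if at least
# one chunk still contains it verbatim.
def recall(chunks: list[str], ground_truth: list[dict]) -> float:
    hits = sum(
        any(item["answer"].lower() in chunk.lower() for chunk in chunks)
        for item in ground_truth
    )
    return hits / len(ground_truth)

# ground_truth entries look like {"question": "...", "answer": "..."}
```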
**GitHub:** [`https://github.com/messkan/rag-chunk`](https://github.com/messkan/rag-chunk) | 2025-11-17T11:24:01 | https://github.com/messkan/rag-chunk | InstanceSignal5153 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ozds66 | false | null | t3_1ozds66 | /r/LocalLLaMA/comments/1ozds66/i_was_tired_of_guessing_my_rag_chunking_strategy/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'Li2L558n_VzAB3ErahFlptaslMhtBVH2Oz8KDG8P4VY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Li2L558n_VzAB3ErahFlptaslMhtBVH2Oz8KDG8P4VY.png?width=108&crop=smart&auto=webp&s=05ab6a04615dd02c0d0bc3afa0647133e7068546', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Li2L558n_VzAB3ErahFlptaslMhtBVH2Oz8KDG8P4VY.png?width=216&crop=smart&auto=webp&s=53b2a8c05e8ef81d8f27b2dd895cb42c41693257', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Li2L558n_VzAB3ErahFlptaslMhtBVH2Oz8KDG8P4VY.png?width=320&crop=smart&auto=webp&s=129baf54bb043c17271dfab82af0a9dbbe358c49', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Li2L558n_VzAB3ErahFlptaslMhtBVH2Oz8KDG8P4VY.png?width=640&crop=smart&auto=webp&s=5ab8f17339e2f14eaebd6370e974215117174548', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Li2L558n_VzAB3ErahFlptaslMhtBVH2Oz8KDG8P4VY.png?width=960&crop=smart&auto=webp&s=5866c1b2d7412e53cf52bde99dcfbaa5751dff08', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Li2L558n_VzAB3ErahFlptaslMhtBVH2Oz8KDG8P4VY.png?width=1080&crop=smart&auto=webp&s=42fbb10ff36e7bdaf2ef5bbb9ad42ad541e99a94', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Li2L558n_VzAB3ErahFlptaslMhtBVH2Oz8KDG8P4VY.png?auto=webp&s=2c5b019d440665c9c56875a4d1537470db023764', 'width': 1200}, 'variants': {}}]} |
Quick question for AI devs - what's your biggest setup frustration? | 0 | Hey everyone, I'm working on Day 5 of building AI tools and keep running into dependency hell with LangChain/LlamaIndex/OpenAI packages. Spent 3 hours yesterday just getting packages to install. Before I build something to fix this, genuine question: Is this YOUR biggest pain point too, or is it something else entirely? What eats most of your time when starting new AI projects?
- Dependency conflicts
- Finding the right prompts
- Rate limits
- Something else?
Not selling anything, just trying to validate if I should build a solution or focus on my other project. Thanks! | 2025-11-17T11:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ozdkk8/quick_question_for_ai_devs_whats_your_biggest/ | Alive-Practice-5448 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozdkk8 | false | null | t3_1ozdkk8 | /r/LocalLLaMA/comments/1ozdkk8/quick_question_for_ai_devs_whats_your_biggest/ | false | false | self | 0 | null
Milk Mafia: Local LLM Debate Simulation Game | 0 | Hey guys!
Just some quick self promotion of my project, I made a small game called Milk Mafia, which is an AI-driven narrative adventure where rival dairy and snail-slime gangs battle for control of Brine City.
It simulates a debate using multiple LLM agents running locally! If you find it interesting, I would greatly appreciate a wishlist on steam:
[https://store.steampowered.com/app/4106230/Milk\_Mafia/](https://store.steampowered.com/app/4106230/Milk_Mafia/) | 2025-11-17T10:25:23 | purebluffdev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ozcsi2 | false | null | t3_1ozcsi2 | /r/LocalLLaMA/comments/1ozcsi2/milk_mafia_local_llm_debate_simulation_game/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'vyumbz27ns1g1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/vyumbz27ns1g1.png?width=108&crop=smart&auto=webp&s=457764e37a30a3d10eb365ac990b6ea8848ccf62', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/vyumbz27ns1g1.png?width=216&crop=smart&auto=webp&s=92647fa0169dde8c05a835b4315dec8f8db51632', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/vyumbz27ns1g1.png?width=320&crop=smart&auto=webp&s=6013e461cc3ddc23ef09e03da7cf50b1ba5e54da', 'width': 320}, {'height': 342, 'url': 'https://preview.redd.it/vyumbz27ns1g1.png?width=640&crop=smart&auto=webp&s=0084be5ff4f50975a1ed1b092c7df6f365f5ca44', 'width': 640}, {'height': 514, 'url': 'https://preview.redd.it/vyumbz27ns1g1.png?width=960&crop=smart&auto=webp&s=88ae9ac0ad2443beaea2a2bac810f754c1967a15', 'width': 960}, {'height': 578, 'url': 'https://preview.redd.it/vyumbz27ns1g1.png?width=1080&crop=smart&auto=webp&s=6398ae32662ec1a7e472e93e1f4e8c63e54c383d', 'width': 1080}], 'source': {'height': 1371, 'url': 'https://preview.redd.it/vyumbz27ns1g1.png?auto=webp&s=f2962796a0cee776e517f10f17d75967a19cac81', 'width': 2560}, 'variants': {}}]} | |
How to learn setting up my own | 0 | Hi all,
For a while I've wanted to dabble in creating my own local AI. I don't have any technical knowledge, so I've been struggling with where to start.
My goal is to be able to set up and run local AI agents that I can guide into becoming an effective tool, preferably in llama.cpp.
I have learned some buzzwords along the way: RAG, tool calling, agents, refining. And I have gotten to run models in Ollama. But I lack the knowledge behind it to make use of it.
So my question to you is: do you know how I could learn this a-z process via online training? And maybe, without having to become a total computer scientist? (if that's even possible).
Any tips on sources are welcome! Thank you!
| 2025-11-17T10:18:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ozco68/how_to_learn_setting_up_my_own/ | SimplyAverageHuman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ozco68 | false | null | t3_1ozco68 | /r/LocalLLaMA/comments/1ozco68/how_to_learn_setting_up_my_own/ | false | false | self | 0 | null |