| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best fixed-cost setup for continuous LLM code analysis? | 0 | I’m running continuous LLM-based scans on large code/text directories and looking for a **fixed-cost setup**. It doesn’t have to be local; it can be through a service, as long as the cost is predictable.
Goal:
* *MUST BE* GPT/Claude-level in *code* reasoning.
* Runs continuously without token-based billing
Has anyone found a model + infra combo that hits that sweet spot?
Looking for something stable and affordable for long-running analysis, not production (or public facing) scale, just heavy internal use. | 2025-10-25T00:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ofdfe9/best_fixedcost_setup_for_continuous_llm_code/ | Specialist-Buy-9777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofdfe9 | false | null | t3_1ofdfe9 | /r/LocalLLaMA/comments/1ofdfe9/best_fixedcost_setup_for_continuous_llm_code/ | false | false | self | 0 | null |
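For reference, a minimal sketch of the kind of continuous scan loop described above, assuming a locally hosted OpenAI-compatible server (llama.cpp, vLLM, etc.) on localhost:8080; the endpoint, model name, and system prompt are placeholders, not a specific product:

```
# Minimal sketch of a fixed-cost continuous scan loop against a locally
# hosted OpenAI-compatible server. Endpoint/model/prompt are assumptions.
import json
import pathlib
import urllib.request

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local server

def analyze(path: pathlib.Path) -> str:
    body = {
        "model": "local-coder",  # whatever model the server has loaded
        "messages": [
            {"role": "system", "content": "You are a code auditor. Report bugs and risky patterns."},
            {"role": "user", "content": path.read_text(errors="ignore")[:8000]},
        ],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        ENDPOINT, data=json.dumps(body).encode(), headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Walk a directory and scan every Python file; cost is fixed because the
# hardware (or flat-rate endpoint) is, regardless of token volume.
for f in pathlib.Path("./repo").rglob("*.py"):
    print(f, "->", analyze(f)[:120])
```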
Gemma3 model differences | 0 | Hi,
What is this model, and how close is it to the full 27B model?
[https://huggingface.co/ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g](https://huggingface.co/ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g)
I can see this works with both AMD and Nvidia using vLLM. But its pretty slow with AMD 7900 XTX. | 2025-10-24T23:34:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ofct1j/gemma3_model_differencies/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofct1j | false | null | t3_1ofct1j | /r/LocalLLaMA/comments/1ofct1j/gemma3_model_differencies/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '93r8X82mToFl4zysJme1fCIQcxv3yLk21Ift1AcLGuM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/93r8X82mToFl4zysJme1fCIQcxv3yLk21Ift1AcLGuM.png?width=108&crop=smart&auto=webp&s=e53a1e1257a59c6f91ff2155b13ed22bd4e62949', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/93r8X82mToFl4zysJme1fCIQcxv3yLk21Ift1AcLGuM.png?width=216&crop=smart&auto=webp&s=e35e45ced35053b45062300783ae726191b9591e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/93r8X82mToFl4zysJme1fCIQcxv3yLk21Ift1AcLGuM.png?width=320&crop=smart&auto=webp&s=99f1939221fadf4b2b6ce04702aa19b073172877', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/93r8X82mToFl4zysJme1fCIQcxv3yLk21Ift1AcLGuM.png?width=640&crop=smart&auto=webp&s=103a18398a875e6e9e0d65119ad24feb9ff34c27', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/93r8X82mToFl4zysJme1fCIQcxv3yLk21Ift1AcLGuM.png?width=960&crop=smart&auto=webp&s=2820db556dc3be09ae29210a620f22aad468037c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/93r8X82mToFl4zysJme1fCIQcxv3yLk21Ift1AcLGuM.png?width=1080&crop=smart&auto=webp&s=4642cd5b05d0efc640627d8fa9f54088f99f9ab5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/93r8X82mToFl4zysJme1fCIQcxv3yLk21Ift1AcLGuM.png?auto=webp&s=5fb692c694f1e2aec842e4b865a5f8cd554c0ba2', 'width': 1200}, 'variants': {}}]} |
Strix Halo + RTX 3090 Achieved! Interesting Results... | 27 | Specs: Fedora 43 Server (bare metal; tried via Proxmox but went BM to reduce complexity, will try again), Bosgame M5 128GB AI Max+ 395 (identical board to the GMKtec EVO-X2), EVGA FTW3 3090, MinisForum DEG1 eGPU dock with a generic M.2-to-OCuLink adapter + 850W PSU.
Compiled the latest version of llama.cpp with Vulkan RADV (no CUDA). Things are still very wonky, but it does work. I was able to get GPT-OSS 120B to run in llama-bench, but I'm running into weird OOM and VkDeviceLost errors specifically in llama-bench when trying GLM 4.5 Air, even though the rig has served all models perfectly fine thus far. KV cache quantization also seems to be bugged: it throws context errors in llama-bench but again works fine with llama-server. I tried the strix-halo-toolbox build of llama.cpp but could never get memory allocation to work properly with the 3090.
Saw a ~30% increase in PP at 12k context, no quantization, going from 312 TPS on the Strix Halo alone to 413 TPS with SH + 3090, but a ~20% decrease in TG, from 50 TPS on SH alone to 40 TPS on SH + 3090, which I thought was pretty interesting. Part of me wonders if that was an anomaly; I'll confirm at a later date with more data.
Going to do more testing with it, but after banging my head against a wall for 4 days to get it serving properly, I'm taking a break and enjoying my Vette. Let me know if y'all have any ideas or benchmarks you might be interested in.
https://preview.redd.it/ly9ey0wr05xf1.jpg?width=3060&format=pjpg&auto=webp&s=cc073c67f6d2bd5f976f53679d8de83215fb4697
https://preview.redd.it/gv0terms05xf1.jpg?width=3060&format=pjpg&auto=webp&s=8ccc70fb59e9e7a0771274e15e67bc18a36ac624
https://preview.redd.it/0ohsyz23z4xf1.png?width=1654&format=png&auto=webp&s=9e0d122713096d181026c3a160b381e2db1333c6
| 2025-10-24T22:42:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ofboir/strix_halo_rtx_3090_achieved_interesting_results/ | JayTheProdigy16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofboir | false | null | t3_1ofboir | /r/LocalLLaMA/comments/1ofboir/strix_halo_rtx_3090_achieved_interesting_results/ | false | false | 27 | null | |
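For anyone wanting to sanity-check TG numbers like the ones above from the client side, a rough sketch against a llama-server OpenAI-compatible endpoint (host, port, and model name are assumptions; wall-clock timing includes prompt processing, so treat the result as a lower bound on decode speed):

```
# Rough client-side throughput check against llama-server's
# OpenAI-compatible API. Host/port/model are placeholder assumptions.
import json, time, urllib.request

body = {
    "model": "gpt-oss-120b",  # placeholder: whatever the server loaded
    "messages": [{"role": "user", "content": "Write a 300-word summary of RAID levels."}],
    "max_tokens": 512,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
t0 = time.time()
resp = json.load(urllib.request.urlopen(req))
dt = time.time() - t0
out_tokens = resp["usage"]["completion_tokens"]
# Includes PP time, so the true TG rate is at least this fast.
print(f"{out_tokens} tokens in {dt:.1f}s -> {out_tokens/dt:.1f} tok/s")
```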
Which big models can I run with an NVIDIA RTX 4070 (8GB VRAM)? | 0 | I'm trying to create a setup for local development because I might start working with sensitive information.
Thank you ♥ | 2025-10-24T22:40:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ofbmra/which_big_models_can_i_run_with_an_nvidia_rtx/ | SchoolOfElectro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofbmra | false | null | t3_1ofbmra | /r/LocalLLaMA/comments/1ofbmra/which_big_models_can_i_run_with_an_nvidia_rtx/ | false | false | self | 0 | null |
4B fp16 or 8B q4? | 55 | Hey guys,
For my 8GB GPU schould I go for **fp16 but 4B** or **q4 version of 8B**? Any model you particularly want to recommend me? Requirement: basic ChatGPT replacement | 2025-10-24T22:21:45 | Ok-Internal9317 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ofb7mu | false | null | t3_1ofb7mu | /r/LocalLLaMA/comments/1ofb7mu/4b_fp16_or_8b_q4/ | false | false | 55 | {'enabled': True, 'images': [{'id': 'BWZzZHuV3I0hBm_JKUH8nyyZe0yy-iflQ7vyebBDfys', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/ukn1akunw4xf1.png?width=108&crop=smart&auto=webp&s=718f328a3bd7464382c766ada63d9e55efacd6e1', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/ukn1akunw4xf1.png?width=216&crop=smart&auto=webp&s=71ef810d7de4b14150546f4e5b2d3a3e633eb5ea', 'width': 216}, {'height': 146, 'url': 'https://preview.redd.it/ukn1akunw4xf1.png?width=320&crop=smart&auto=webp&s=a17f9ce2c46188f5b1a515345e527c4d5a21b669', 'width': 320}, {'height': 293, 'url': 'https://preview.redd.it/ukn1akunw4xf1.png?width=640&crop=smart&auto=webp&s=dd900eb86aa1c749649781d6ebd92067020e1209', 'width': 640}, {'height': 439, 'url': 'https://preview.redd.it/ukn1akunw4xf1.png?width=960&crop=smart&auto=webp&s=2d03db3bc8bc9e9e0541493adf5507693addcfc6', 'width': 960}, {'height': 494, 'url': 'https://preview.redd.it/ukn1akunw4xf1.png?width=1080&crop=smart&auto=webp&s=51743b50d26afd77eb5abd6244f2444c92866d92', 'width': 1080}], 'source': {'height': 952, 'url': 'https://preview.redd.it/ukn1akunw4xf1.png?auto=webp&s=b868bc7a00551041f5fef5a6a2f98ed3382de809', 'width': 2079}, 'variants': {}}]} | ||
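A back-of-envelope way to frame the tradeoff: weights alone for 4B at fp16 nearly fill an 8GB card, while 8B at ~Q4 leaves room for KV cache. A small sketch of the arithmetic (bits-per-weight figures are approximations):

```
# Weight-only footprint for the two options in the post. Real usage adds
# KV cache and runtime overhead, so treat these numbers as floors.
def weight_gb(params_b: float, bits_per_weight: float) -> float:
    # params (billions) * bits/weight -> GiB of raw weight storage
    return params_b * 1e9 * bits_per_weight / 8 / 1024**3

print(f"4B @ fp16 (16 bpw):     {weight_gb(4, 16):.1f} GiB")   # ~7.5 GiB: almost no headroom on 8 GB
print(f"8B @ Q4_K_M (~4.8 bpw): {weight_gb(8, 4.8):.1f} GiB")  # ~4.5 GiB: room left for KV cache
```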
Copilot consuming code completion usage in VS Code when I'm actually using Continue.dev locally | 0 | Hi everyone, I wanted to ask whether this has happened to anyone else. I googled a bit but didn't find anything about it.
The thing is, I use both Copilot (free tier) and a local setup with Continue and Qwen 2.5 Coder. I like switching between the two depending on the work I'm doing. However, I just noticed that even when I completely turn off Copilot suggestions, it still consumes code completions, even though I'm pretty sure it was Continue generating the suggestions.
Has this happened to anyone else?
I suppose I could toggle the Copilot extension off and on, but that would be more annoying than just using the checkbox to activate it.
I have attached a video showing how it goes from 2.6% to 2.7% while code completions are disabled.
https://reddit.com/link/1ofb7ds/video/bpscjzucx4xf1/player
| 2025-10-24T22:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ofb7ds/copilot_consuming_code_completion_usage_in_vs/ | Lualcala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofb7ds | false | null | t3_1ofb7ds | /r/LocalLLaMA/comments/1ofb7ds/copilot_consuming_code_completion_usage_in_vs/ | false | false | self | 0 | null |
MiniMax M2 is 230B-A10B | 213 | 2025-10-24T22:03:31 | codys12 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ofasus | false | null | t3_1ofasus | /r/LocalLLaMA/comments/1ofasus/minimax_m2_is_230ba10b/ | false | false | 213 | {'enabled': True, 'images': [{'id': '4pjwfQWI8lGwX-zoWCfl6VuT2dkhake1d5MwC31wkuM', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/f45v1dx7u4xf1.png?width=108&crop=smart&auto=webp&s=1b3a7b321528b8eae1daf6965c0ac1abaf7010f5', 'width': 108}, {'height': 54, 'url': 'https://preview.redd.it/f45v1dx7u4xf1.png?width=216&crop=smart&auto=webp&s=350dd137f51affb7ce6f26c74849c7992256b007', 'width': 216}, {'height': 81, 'url': 'https://preview.redd.it/f45v1dx7u4xf1.png?width=320&crop=smart&auto=webp&s=c201cc4277dd6120346cce375178e0760780c3cf', 'width': 320}, {'height': 162, 'url': 'https://preview.redd.it/f45v1dx7u4xf1.png?width=640&crop=smart&auto=webp&s=9c77e931b8bddbffa454a9542abfb528e2be6fc4', 'width': 640}, {'height': 243, 'url': 'https://preview.redd.it/f45v1dx7u4xf1.png?width=960&crop=smart&auto=webp&s=121d0df02df6038020d933e27378c7af120df48a', 'width': 960}], 'source': {'height': 256, 'url': 'https://preview.redd.it/f45v1dx7u4xf1.png?auto=webp&s=3edd463424963633f07408e2a2b82bc32f99f90a', 'width': 1009}, 'variants': {}}]} | |||
Best local uncensored model for code/general use case? | 2 | I'm getting extremely tired of how censored and unusable the current AI models are. ChatGPT is literally unusable to the point where I don't even bother asking questions; I mostly use Grok since it's a tad bit more open. Any time I ask a basic question these AIs start preaching ethics and morality, which is extremely ironic.
Even for something as basic as asking about web scraping or how proxy farms are set up, ChatGPT starts preaching ethics, morality, and legality, which like I said is extremely fucking ironic. I'm tired of it and I want an uncensored model for code purposes.
I sometimes use Llama-3.1-8B-Lexi-Uncensored-V2-GGUF since my hardware specs ain't that good, but I'm not satisfied with this model. Any suggestions?
Has vLLM fixed the multiple RTX 6000 Pro problems yet? | 1 | I am looking to get two RTX 6000 Pros to run GLM 4.6 Air, but I know vLLM had problems with the SM_120 arch; has this been resolved? | 2025-10-24T21:31:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ofa2fs/has_vllm_fixed_the_multiple_rtx_6000_pro_problems/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofa2fs | false | null | t3_1ofa2fs | /r/LocalLLaMA/comments/1ofa2fs/has_vllm_fixed_the_multiple_rtx_6000_pro_problems/ | false | false | self | 1 | null |
What's the easiest way to build a translation model? | 3 | I'm working on a project to translate different languages, but I'm struggling to find an easy way to do it.
Where do you all get your datasets and what models have you been using to train your models? Any guidance would be helpful. My boss will probably fire me if I don't figure this out soon. | 2025-10-24T20:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/1of94xf/whats_the_easiest_way_to_build_a_translation_model/ | EmergencyWay9804 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of94xf | false | null | t3_1of94xf | /r/LocalLLaMA/comments/1of94xf/whats_the_easiest_way_to_build_a_translation_model/ | false | false | self | 3 | null |
Apple Foundation is dumb | 177 | Like the other poster, I've found the Apple Foundation model disapproves of lots of content. It's too safe. Too corporate.
This is the most innocuous example I could come up with. Also attached proof that it even indirectly avoids the word. Google’s model gives me accurate info.
(FYI in case you are not in a region that has chiggers… they are little red bugs that bite you, no relation to a word that it rhymes with at all) | 2025-10-24T20:45:00 | https://www.reddit.com/gallery/1of8xl2 | PM_ME_UR_COFFEE_CUPS | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1of8xl2 | false | null | t3_1of8xl2 | /r/LocalLLaMA/comments/1of8xl2/apple_foundation_is_dumb/ | false | false | 177 | null | |
First run: ROCm 7.9 on `gfx1151` `Debian` `Strix Halo` with the Comfy default workflow for Flux dev fp8 vs RTX 3090 | 11 | Hi, I ran a test on gfx1151 (Strix Halo) with ROCm 7.9 on Debian @ kernel 6.16.12 with ComfyUI.
Flux, LTXV and a few other models are working in general. I compared it against SM86 (an RTX 3090), which is a few times faster (but also draws about 3x the power), depending on the parameters.
For example, results from the default Flux dev fp8 image workflow comparison:
**RTX 3090 CUDA**
```
got prompt
100%|█████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:24<00:00, 1.22s/it]
Prompt executed in 25.44 seconds
```
**Strix Halo ROCm 7.9rc1**
```
got prompt
100%|█████████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:03<00:00, 6.19s/it]
Prompt executed in 125.16 seconds
```
```
========================================= ROCm System Management Interface
=================================================== Concise Info
Device Node IDs Temp Power Partitions SCLK MCLK Fan Perf PwrCap VRAM% GPU%
(DID, GUID) (Edge) (Socket) (Mem, Compute, ID)
=====================================================================================
0 1 0x1586, 3750 53.0°C 98.049W N/A, N/A, 0 N/A 1000Mhz 0% auto N/A 29% 100%
=====================================================================================
=============================================== End of ROCm SMI Log
```
```
+------------------------------------------------------------------------------+
| AMD-SMI 26.1.0+c9ffff43 amdgpu version: Linuxver ROCm version: 7.10.0 |
| VBIOS version: xxx.xxx.xxx |
| Platform: Linux Baremetal |
|-------------------------------------+----------------------------------------|
| BDF GPU-Name | Mem-Uti Temp UEC Power-Usage |
| GPU HIP-ID OAM-ID Partition-Mode | GFX-Uti Fan Mem-Usage |
|=====================================+========================================|
| 0000:c2:00.0 Radeon 8060S Graphics | N/A N/A 0 N/A/0 W |
| 0 0 N/A N/A | N/A N/A 28554/98304 MB |
+-------------------------------------+----------------------------------------+
+------------------------------------------------------------------------------+
| Processes: |
| GPU PID Process Name GTT_MEM VRAM_MEM MEM_USAGE CU % |
|==============================================================================|
| 0 11372 python3.13 7.9 MB 27.1 GB 27.7 GB N/A |
+------------------------------------------------------------------------------+
``` | 2025-10-24T20:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/1of8pie/first_run_rocm_79_on_gfx1151_debian_strix_halo/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of8pie | false | null | t3_1of8pie | /r/LocalLLaMA/comments/1of8pie/first_run_rocm_79_on_gfx1151_debian_strix_halo/ | false | false | self | 11 | null |
First attempt at building a local LLM setup in my mini rack | 29 | So I finally got around to attempting to build a local LLM setup.
Got my hands on 3x Nvidia Jetson Orin Nanos, put them into my mini rack, and started to see if I could make them into a cluster.
Long story short ... **YES and NOOooo..**
I got all 3 Jetsons running llama.cpp and got them working in a cluster, using llama-server on the first Jetson and rpc-server on the other two.
But in llama-bench the cluster produced only about 7 tokens/sec working together, while a single Jetson working alone got about 22 tokens/sec.
The model I was using was Llama-3.2-3B-Instruct-Q4_K_M.gguf. I did try out other models, but without any really good results.
It all comes down to the fact that LLMs really like fast links, and having to share over a "slow" 1Gb Ethernet connection was one of the factors that slowed everything down.
So I wanted to try something else.
I loaded the same model on all 3 Jetsons and started a llama-server on each node, each on a different port.
Then I set up a Raspberry Pi 5 4GB with Nginx as a load balancer and ran Open WebUI in a Docker container, which got all 3 Jetsons' llama.cpp servers feeding into the same UI. I still only get about 20-22 tokens/sec per node, but if I add the same model 3 times in one chat, all 3 nodes start working on the prompt at the same time; then I can either merge the results or keep 3 separate ones.
So all in all, for a first real try: not great, but also not bad, and I'm just happy I got it running.
Now I think I will be looking into getting a larger model running to maximize the use of the Jetsons.
Still a lot to learn..
*The bottom part of the rack has the 3 x Nvidia Jetson Orin nano's and the Raspberry pi 5 for load balancing and running the webUI.* | 2025-10-24T20:23:54 | Von_plaf | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of8f1i | false | null | t3_1of8f1i | /r/LocalLLaMA/comments/1of8f1i/first_attempt_at_building_a_local_llm_setup_in_my/ | false | false | default | 29 | {'enabled': True, 'images': [{'id': 'aseoosei84xf1', 'resolutions': [{'height': 213, 'url': 'https://preview.redd.it/aseoosei84xf1.jpeg?width=108&crop=smart&auto=webp&s=389b1d7fcb055a51ec52129854aa46df979b6022', 'width': 108}, {'height': 426, 'url': 'https://preview.redd.it/aseoosei84xf1.jpeg?width=216&crop=smart&auto=webp&s=c4a161c0d23a3736aee27df5d0be039a23496ae7', 'width': 216}, {'height': 632, 'url': 'https://preview.redd.it/aseoosei84xf1.jpeg?width=320&crop=smart&auto=webp&s=c1e141cf252d112e8a91bf8513cd80f1ccc19782', 'width': 320}, {'height': 1264, 'url': 'https://preview.redd.it/aseoosei84xf1.jpeg?width=640&crop=smart&auto=webp&s=da5c51332fc5fb2400148e14f00664703a266818', 'width': 640}, {'height': 1896, 'url': 'https://preview.redd.it/aseoosei84xf1.jpeg?width=960&crop=smart&auto=webp&s=6204498a2e31106ae06694b1c080e286de871486', 'width': 960}, {'height': 2133, 'url': 'https://preview.redd.it/aseoosei84xf1.jpeg?width=1080&crop=smart&auto=webp&s=8a74cd2aca56f36cccc6fac56074a61050c91cd7', 'width': 1080}], 'source': {'height': 3832, 'url': 'https://preview.redd.it/aseoosei84xf1.jpeg?auto=webp&s=558695504504fc70fe0ecce431f1b036053744e6', 'width': 1940}, 'variants': {}}]} | |
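A minimal sketch of the same fan-out pattern in plain Python, sending one prompt to all three nodes concurrently (hostnames and ports are assumptions; adjust to your rack):

```
# Fan one prompt out to three independent llama-server nodes and collect
# three answers, mirroring the "same model added 3 times" trick above.
import concurrent.futures, json, urllib.request

NODES = ["http://jetson1:8080", "http://jetson2:8080", "http://jetson3:8080"]  # assumed hostnames

def ask(base: str, prompt: str) -> str:
    body = {"messages": [{"role": "user", "content": prompt}], "max_tokens": 256}
    req = urllib.request.Request(base + "/v1/chat/completions",
                                 data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"]

with concurrent.futures.ThreadPoolExecutor() as pool:
    answers = pool.map(lambda n: ask(n, "Explain RAID 5 in one paragraph."), NODES)
    for node, answer in zip(NODES, answers):
        print(node, "->", answer[:100])
```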
Built benchmark measuring AI architectural complexity beyond task scores - Claude tops, GPT-4o second | 0 | I developed UFIPC to measure how AI processes information architecturally, not just what it outputs.
Tested 10 frontier models. Found that models with identical benchmark scores can differ significantly in how they actually process information internally.
**Top 5 Results:**
1. Claude Sonnet 4: 0.7845 (highest complexity)
2. GPT-4o: 0.7623
3. Gemini 2.5 Pro: 0.7401
4. Grok 2: 0.7156
5. Claude Opus 3.5: 0.7089
**Interesting findings:**
- DeepSeek V3 (0.5934) ranks in bottom half despite recent benchmark wins - suggests high task performance ≠ architectural complexity
- Claude models consistently rank higher in integration and meta-cognitive dimensions
- Smaller models (GPT-4o-mini: 0.6712) can have surprisingly good complexity scores relative to size
**What it measures:**
Physics-based parameters from neuroscience: processing capacity, meta-cognitive sophistication, adversarial robustness, integration complexity.
Open source (MIT), patent pending. Would love feedback/validation from people who run models locally.
\*\*GitHub:\*\* [https://github.com/4The-Architect7/UFIPC](https://github.com/4The-Architect7/UFIPC) | 2025-10-24T20:18:46 | https://www.reddit.com/r/LocalLLaMA/comments/1of8aiv/built_benchmark_measuring_ai_architectural/ | Pleasant-Egg-5347 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of8aiv | false | null | t3_1of8aiv | /r/LocalLLaMA/comments/1of8aiv/built_benchmark_measuring_ai_architectural/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '2q3VzkaccEpgAISFnlrTjJOEwHCDAbP7Fp4M9OLkj7Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2q3VzkaccEpgAISFnlrTjJOEwHCDAbP7Fp4M9OLkj7Y.png?width=108&crop=smart&auto=webp&s=dce1fce597e0216d7a1ccdb1460605e8d48b7e55', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2q3VzkaccEpgAISFnlrTjJOEwHCDAbP7Fp4M9OLkj7Y.png?width=216&crop=smart&auto=webp&s=0e30c55f5a4fe50884f80f69252be2f0eda3a3ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2q3VzkaccEpgAISFnlrTjJOEwHCDAbP7Fp4M9OLkj7Y.png?width=320&crop=smart&auto=webp&s=79d2fef4d51e5df6845b391d31004de7301424d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2q3VzkaccEpgAISFnlrTjJOEwHCDAbP7Fp4M9OLkj7Y.png?width=640&crop=smart&auto=webp&s=7aa927b868d83d32cd2732ac84c5b6a1dc680115', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2q3VzkaccEpgAISFnlrTjJOEwHCDAbP7Fp4M9OLkj7Y.png?width=960&crop=smart&auto=webp&s=068d01337100bfd76458861af9b7c23604faaf85', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2q3VzkaccEpgAISFnlrTjJOEwHCDAbP7Fp4M9OLkj7Y.png?width=1080&crop=smart&auto=webp&s=d87f7b7a82661a92850f059f3865713d28b640cd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2q3VzkaccEpgAISFnlrTjJOEwHCDAbP7Fp4M9OLkj7Y.png?auto=webp&s=dfb471312a741a7776d3703f6afe8477fb00959f', 'width': 1200}, 'variants': {}}]} |
Strix Halo and LM Studio Larger Model Issues | 3 | I can usually run most of the larger models with 96GB of VRAM. However, when I try to increase the context size above 8100, the large models usually fail with an "allocate pp" error. That happens with models from 70GB down to 45GB in size. Any idea what might be causing this? Thanks. | 2025-10-24T20:15:49 | https://www.reddit.com/r/LocalLLaMA/comments/1of87x5/strix_halo_and_lm_studio_larger_model_issues/ | DewB77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of87x5 | false | null | t3_1of87x5 | /r/LocalLLaMA/comments/1of87x5/strix_halo_and_lm_studio_larger_model_issues/ | false | false | self | 3 | null |
Keep Ollama Alive w/ Multiple Clients | 0 | I use Ollama in Docker with a keepalive variable of -1, which sets it to never unload (forever). I've set Open WebUI to keepalive = -1 so it keeps things loaded after queries. The problem comes with other clients I use to hit Ollama that don't have keepalive setting options. When they hit Ollama it reverts to keepalive 5m. Is there any way to keep models loaded no matter what? It's a serious buzzkill, and if unsolvable, a deal breaker. Thanks! | 2025-10-24T20:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/1of82gr/keep_ollama_alive_w_multiple_clients/ | Zed-Naught | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of82gr | false | null | t3_1of82gr | /r/LocalLLaMA/comments/1of82gr/keep_ollama_alive_w_multiple_clients/ | false | false | self | 0 | null |
OpenRouter now available in Coplay AI for Unity | 0 | People asked and we listened!
You can now use your own OpenRouter API key to use Coplay for Free.
This means you can let any AI model from OpenRouter do the mundane work for you in Unity.
Coplay is an AI assistant that sits inside Unity.
[https://www.coplay.dev/](https://www.coplay.dev/) | 2025-10-24T19:57:48 | https://v.redd.it/6h7olo1t74xf1 | Josvdw | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of7rdz | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6h7olo1t74xf1/DASHPlaylist.mpd?a=1763927881%2CNWE0MjI4MzU0NjkxMTZhMWI5MmZjM2M2ZmE3NTg4NzI3ODc4YTZlY2Y4MTA3MzkyNDc1ZmRjYzgxZTY3MWNlZg%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/6h7olo1t74xf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1894, 'hls_url': 'https://v.redd.it/6h7olo1t74xf1/HLSPlaylist.m3u8?a=1763927881%2CYWQwYjhiYjlkMDQzZTY3MDczMGNkMmM0MmQ0OTAwMzI5ODZkNjlmYzc0ODM4ODBhYWExMWZlNTY5NjFjOTExZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6h7olo1t74xf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1of7rdz | /r/LocalLLaMA/comments/1of7rdz/openrouter_now_available_in_coplay_ai_for_unity/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dnA1N2VvMXQ3NHhmMVyljrMKl0baJ_96bzPHyUhGbFXxGh1YE0gwcUu8uBz-', 'resolutions': [{'height': 189, 'url': 'https://external-preview.redd.it/dnA1N2VvMXQ3NHhmMVyljrMKl0baJ_96bzPHyUhGbFXxGh1YE0gwcUu8uBz-.png?width=108&crop=smart&format=pjpg&auto=webp&s=3d86fd212f24b68265868c066847050a39ef3873', 'width': 108}, {'height': 378, 'url': 'https://external-preview.redd.it/dnA1N2VvMXQ3NHhmMVyljrMKl0baJ_96bzPHyUhGbFXxGh1YE0gwcUu8uBz-.png?width=216&crop=smart&format=pjpg&auto=webp&s=836544cca85c27ff8fc7d9baff92936c365da9a8', 'width': 216}, {'height': 561, 'url': 'https://external-preview.redd.it/dnA1N2VvMXQ3NHhmMVyljrMKl0baJ_96bzPHyUhGbFXxGh1YE0gwcUu8uBz-.png?width=320&crop=smart&format=pjpg&auto=webp&s=56aa2b1212090d266f01ef70977bcf0d33c44737', 'width': 320}, {'height': 1122, 'url': 'https://external-preview.redd.it/dnA1N2VvMXQ3NHhmMVyljrMKl0baJ_96bzPHyUhGbFXxGh1YE0gwcUu8uBz-.png?width=640&crop=smart&format=pjpg&auto=webp&s=8b6e2ca16d5488042928356aba1158c8672a634a', 'width': 640}, {'height': 1683, 'url': 'https://external-preview.redd.it/dnA1N2VvMXQ3NHhmMVyljrMKl0baJ_96bzPHyUhGbFXxGh1YE0gwcUu8uBz-.png?width=960&crop=smart&format=pjpg&auto=webp&s=5a2f9720e4c70ac3c8176d1ecceb8c4057e50a4e', 'width': 960}, {'height': 1893, 'url': 'https://external-preview.redd.it/dnA1N2VvMXQ3NHhmMVyljrMKl0baJ_96bzPHyUhGbFXxGh1YE0gwcUu8uBz-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c4b1e8b0e1128961ad24048e2a720a8df328fbcc', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/dnA1N2VvMXQ3NHhmMVyljrMKl0baJ_96bzPHyUhGbFXxGh1YE0gwcUu8uBz-.png?format=pjpg&auto=webp&s=dad454ebcb06f106ec157d0de0043d3a9baa797f', 'width': 1232}, 'variants': {}}]} | |
Favorite models for stories, sci-fi especially? In particular I'm looking for something that can handle Star Trek fanfiction. | 0 | Been using Cydonia mostly, but it has problems.
Are there any trained on Star Trek scripts and novels? My problem is that most models constantly mix up the crews: if I tell it to set the story on an original ship, suddenly it switches to the Enterprise-D, or Riker shows up out of nowhere. Even when it's TNG, sometimes it makes TOS crew like Lt. Uhura a member of the crew.
| 2025-10-24T19:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1of7pcx/favorite_models_for_stories_sci_fi_especially_in/ | AI_Renaissance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of7pcx | false | null | t3_1of7pcx | /r/LocalLLaMA/comments/1of7pcx/favorite_models_for_stories_sci_fi_especially_in/ | false | false | self | 0 | null |
DeepSeek just beat GPT5 in crypto trading! | 0 | As South China Morning Post reported, Alpha Arena gave 6 major AI models $10,000 each to trade crypto on Hyperliquid. Real money, real trades, all public wallets you can watch live.
All 6 LLMs got the exact same data and prompts. Same charts, same volume, same everything. The only difference is how they think from their parameters.
[DeepSeek V3.1](https://www.netmind.ai/modelsLibrary/DeepSeek-V3.1) performed the best with +10% profit after a few days. Meanwhile, GPT-5 is down almost 40%.
What's interesting is their trading personalities.
Gemini's making only 15 trades a day, Claude's super cautious with only 3 trades total, and DeepSeek trades like a seasoned quant veteran.
Note they weren't programmed this way. It just emerged from their training.
Some think DeepSeek's secretly trained on tons of trading data from their parent company High-Flyer Quant. Others say GPT-5 is just better at language than numbers.
We suspect DeepSeek's edge comes from more effective reasoning learned during reinforcement learning, possibly tuned for quantitative decision-making. In contrast, GPT-5 may lean on its foundation model and lack more extensive RL training.
Would u trust ur money with DeepSeek? | 2025-10-24T19:53:35 | MarketingNetMind | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of7nkd | false | null | t3_1of7nkd | /r/LocalLLaMA/comments/1of7nkd/deepseek_just_beat_gpt5_in_crypto_trading/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '11c2r2z174xf1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/11c2r2z174xf1.png?width=108&crop=smart&auto=webp&s=980bd8c1869bd312ea1f29c595121d53207e850a', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/11c2r2z174xf1.png?width=216&crop=smart&auto=webp&s=8463e28f55612339c9ca987a1f5242da20e8db10', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/11c2r2z174xf1.png?width=320&crop=smart&auto=webp&s=64dd425321d59672f2496f3ebc56f2d5095d3a7d', 'width': 320}, {'height': 361, 'url': 'https://preview.redd.it/11c2r2z174xf1.png?width=640&crop=smart&auto=webp&s=1c7e32b585f024213b08ef9c51650c723b397664', 'width': 640}, {'height': 542, 'url': 'https://preview.redd.it/11c2r2z174xf1.png?width=960&crop=smart&auto=webp&s=17fd0e82dd4590324c75019f061fd84b2813a94b', 'width': 960}, {'height': 610, 'url': 'https://preview.redd.it/11c2r2z174xf1.png?width=1080&crop=smart&auto=webp&s=cd416a1de2bd2cce736476550d82dfdbcd1c6022', 'width': 1080}], 'source': {'height': 974, 'url': 'https://preview.redd.it/11c2r2z174xf1.png?auto=webp&s=2007953c035b34434346923504f77c41b06bb428', 'width': 1722}, 'variants': {}}]} | |
With `--n-cpu-moe`, how much can I gain from CPU-side upgrades? RAM, CPU, motherboard etc.? | 7 | I finally got into using llama.cpp with MoE models, loading all the attention layers onto the GPU and partially offloading experts to the CPU. Right now I'm on DDR4 and PCIe 4.0 with a fast 32GB GPU.
Just wondering if it's worth it to upgrade to DDR5 RAM? I'll need a new motherboard. Also: would a faster CPU help? Will the PCIe v5 help? I suppose if I need a new motherboard for DDR5 RAM I might as well go with PCIe 5.0 and maybe even upgrade the CPU?
That said, I anticipate that Strix Halo desktop motherboards will surely come if I'm just patient. Maybe it'd be worthwhile to just wait 6 months? | 2025-10-24T19:50:05 | https://www.reddit.com/r/LocalLLaMA/comments/1of7kji/with_ncpumoe_how_much_can_i_gain_from_cpuside/ | billy_booboo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of7kji | false | null | t3_1of7kji | /r/LocalLLaMA/comments/1of7kji/with_ncpumoe_how_much_can_i_gain_from_cpuside/ | false | false | self | 7 | null |
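One way to reason about it: with `--n-cpu-moe`, every generated token has to stream the CPU-resident active expert weights from system RAM once, so RAM bandwidth sets a hard ceiling on TG speed. A rough sketch, with illustrative (not measured) numbers:

```
# Back-of-envelope TG ceiling for CPU-offloaded MoE experts.
# Bandwidth and active-weight figures below are illustrative assumptions.
def max_tg_tps(ram_bandwidth_gbs: float, cpu_resident_active_gb: float) -> float:
    # Ceiling: tokens/sec <= bandwidth / bytes streamed per token
    return ram_bandwidth_gbs / cpu_resident_active_gb

active_gb = 3.0  # assumed GB of active expert weights left on the CPU
for name, bw in [("dual-channel DDR4-3200 (~50 GB/s)", 50.0),
                 ("dual-channel DDR5-6000 (~90 GB/s)", 90.0)]:
    print(f"{name}: ~{max_tg_tps(bw, active_gb):.0f} tok/s ceiling")
```

By this estimate, DDR5 roughly scales TG with the bandwidth gain, while a faster CPU or PCIe 5.0 matters much less once the cores can keep up with the memory stream.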
MiniMax-M2 Info (from OpenRouter discord) | 60 | MiniMax M2 — A Gift for All Developers on the 1024 Festival"
Top 5 globally, surpassing Claude Opus 4.1 and second only to Sonnet 4.5; state-of-the-art among open-source models. Reengineered for coding and agentic use—open-source SOTA, highly intelligent, with low latency and cost. We believe it's one of the best choices for agent products and the most suitable open-source alternative to Claude Code.
We are very proud to have participated in the model’s development; this is our gift to all developers.
**MiniMax-M2 is coming on Oct 27**
https://preview.redd.it/6s6d9ykc54xf1.png?width=1280&format=png&auto=webp&s=51b274177a62bfec585d7f06c2fe7649bc9aa5c9
https://preview.redd.it/xfjadh5e54xf1.png?width=1380&format=png&auto=webp&s=21b792491113f10c43e492b252bc87176a3f7f53
https://preview.redd.it/gb7mat4f54xf1.png?width=636&format=png&auto=webp&s=59eed9df7b9c485ce21b5cb41d9707e72ba5d39b
| 2025-10-24T19:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/1of7gcb/minimaxm2_info_from_openrouter_discord/ | nuclearbananana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of7gcb | false | null | t3_1of7gcb | /r/LocalLLaMA/comments/1of7gcb/minimaxm2_info_from_openrouter_discord/ | false | false | 60 | null | |
Which one for making simple decisions? | 0 | Hi, I have a question about buying my own rig capable of running a decent LLM. I'd like an assistant that will help me make decisions based on a defined pattern, produce short (factual) summaries, and support MCP. My goal is to spend 20,000 PLN, unless going "a bit" over would be like the jump from an Uno to a new Toyota. Thanks in advance for even a hint, and best regards. | 2025-10-24T19:41:15 | https://www.reddit.com/r/LocalLLaMA/comments/1of7cnq/jaka_do_podejmowania_prostych_decyzji/ | Fit_Asparagus_6426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of7cnq | false | null | t3_1of7cnq | /r/LocalLLaMA/comments/1of7cnq/jaka_do_podejmowania_prostych_decyzji/ | false | false | self | 0 | null |
You can turn off the cloud, this + solar panel will suffice: | 71 | 2025-10-24T19:34:13 | JLeonsarmiento | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of769j | false | null | t3_1of769j | /r/LocalLLaMA/comments/1of769j/you_can_turn_off_the_cloud_this_solar_panel_will/ | false | false | default | 71 | {'enabled': True, 'images': [{'id': 'a0svyfed34xf1', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/a0svyfed34xf1.png?width=108&crop=smart&auto=webp&s=927071e139e09a310d41f47cbd2b3a92b2863838', 'width': 108}, {'height': 63, 'url': 'https://preview.redd.it/a0svyfed34xf1.png?width=216&crop=smart&auto=webp&s=a5d8a1cf27dc1acd8bcf8c0ae85bf020df6bcfb0', 'width': 216}, {'height': 93, 'url': 'https://preview.redd.it/a0svyfed34xf1.png?width=320&crop=smart&auto=webp&s=8b255038b078c7bc6d4daf3c85c5b1cb926f167a', 'width': 320}, {'height': 187, 'url': 'https://preview.redd.it/a0svyfed34xf1.png?width=640&crop=smart&auto=webp&s=f50f6ea7d9b1152c45428baa79fd791529dae389', 'width': 640}, {'height': 281, 'url': 'https://preview.redd.it/a0svyfed34xf1.png?width=960&crop=smart&auto=webp&s=4cb8dcd2def01eb42bdb64b97f220fdd5e01cb59', 'width': 960}, {'height': 316, 'url': 'https://preview.redd.it/a0svyfed34xf1.png?width=1080&crop=smart&auto=webp&s=c6bd5436a4e6ee40a820450c5801ea3dc0eb8454', 'width': 1080}], 'source': {'height': 668, 'url': 'https://preview.redd.it/a0svyfed34xf1.png?auto=webp&s=955cc13b6a7eb1f42a2b54e06e4dde4e1cd258b0', 'width': 2280}, 'variants': {}}]} | ||
Total AI bro death | 0 | Kill AI bros. Behead AI bros. Roundhouse kick a AI bro into the concrete. Slam dunk an Ipad baby into the trashcan. Crucify filthy Cathedraites. Defecate in a AI bros food. Launch AI bros into the sun. Stir fry AI bros in a wok. Toss AI bros into active volcanoes. Urinate into a AI bro's data center. Judo throw AI bro into a wood chipper. Twist AI bros heads off. Report AI bro millionaires to the IRS. Karate chop AI bro in half. Curb stomp AI Bros with a computer chip in their head. Trap AI Bros in quicksand. Crush AI bros in the trash compactor. Liquefy AI bros in a vat of acid. Eat AI Bros. Dissect AI Bros to scavenge for cybernetics. Exterminate AI bros in the gas chamber. Stomp AI bros skulls with steel toed boots. Cremate AI bros in the oven that is the size of a bedroom. Lobotomize AI bros. Mandatory infertility for AI bros. Grind AI bro brains in the garbage disposal. Drown AI bros nuclear waste water. Vaporize AI bros with a laser gun. Kick old AI bros down the stairs. Feed AI bros to the blob. Slice AI bros with a katana | 2025-10-24T19:15:24 | https://www.reddit.com/r/LocalLLaMA/comments/1of6oyz/total_ai_bro_death/ | arg_seeker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of6oyz | false | null | t3_1of6oyz | /r/LocalLLaMA/comments/1of6oyz/total_ai_bro_death/ | false | false | self | 0 | null |
What's the difference between Nvidia DGX Spark OS and an Ubuntu + CUDA dev stack? | 0 | A friend of mine wants to buy the DGX Spark but replace its OS with Ubuntu and an open-source CUDA dev stack.
I think it's pointless, but I don't know shit on the subject. What do you think? Is there any difference between the two? Thanks | 2025-10-24T19:12:20 | https://www.reddit.com/r/LocalLLaMA/comments/1of6m7q/whats_the_difference_between_nvidia_dg_spark_os/ | Ill_Barber8709 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of6m7q | false | null | t3_1of6m7q | /r/LocalLLaMA/comments/1of6m7q/whats_the_difference_between_nvidia_dg_spark_os/ | false | false | self | 0 | null |
Use a Local LLM in your terminal with filesystem handling | 6 | For those running local AI models with Ollama or LM Studio:
you can use the Xandai CLI tool to create and edit code directly from your terminal.
It also supports natural language commands, so if you don’t remember a specific command, you can simply ask Xandai to do it for you. For example:
“List the 50 largest files on my system.”
Install it easily with:
pip install xandai-cli
githube repo: [https://github.com/XandAI-project/Xandai-CLI](https://github.com/XandAI-project/Xandai-CLI) | 2025-10-24T19:11:24 | https://v.redd.it/vkejdpu9z3xf1 | Sea-Reception-2697 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of6lcf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vkejdpu9z3xf1/DASHPlaylist.mpd?a=1763925096%2CNjQ5OWNkOWUxZDZmNjA1OTRhY2Q1Y2FkNTI1ZGZlYTM5YWQ0NTNmYzJkODVmYjA3ZGJiYzliNmE0ZWMzYTg1Yg%3D%3D&v=1&f=sd', 'duration': 139, 'fallback_url': 'https://v.redd.it/vkejdpu9z3xf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vkejdpu9z3xf1/HLSPlaylist.m3u8?a=1763925096%2CZTVmN2FlMGI4Yzg4NDI3MmUzMWJiYzNmZWUyM2M5YzVlZmNjMmQ4YzEyNjdlMWVlZjA1OGU2NmJlNjc0MGE5MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vkejdpu9z3xf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1of6lcf | /r/LocalLLaMA/comments/1of6lcf/use_local_llm_on_your_terminal_with_filesystem/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'M2Q1OGpvdTl6M3hmMWeU_7mDURBNP1Sda2-z7G3nNT97YS2GcY0AFck-U5_D', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M2Q1OGpvdTl6M3hmMWeU_7mDURBNP1Sda2-z7G3nNT97YS2GcY0AFck-U5_D.png?width=108&crop=smart&format=pjpg&auto=webp&s=1cc2f3ec1e0a5aeb10d575480c4ddf5ae7bf6849', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M2Q1OGpvdTl6M3hmMWeU_7mDURBNP1Sda2-z7G3nNT97YS2GcY0AFck-U5_D.png?width=216&crop=smart&format=pjpg&auto=webp&s=970059d94ea5297d77da46fdf91fa53dea826efb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M2Q1OGpvdTl6M3hmMWeU_7mDURBNP1Sda2-z7G3nNT97YS2GcY0AFck-U5_D.png?width=320&crop=smart&format=pjpg&auto=webp&s=d719a24711967db3e41b5417f7ff50bf032a93e9', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M2Q1OGpvdTl6M3hmMWeU_7mDURBNP1Sda2-z7G3nNT97YS2GcY0AFck-U5_D.png?width=640&crop=smart&format=pjpg&auto=webp&s=d8b51653898cc17f1ad8526e31e97093b52ce904', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M2Q1OGpvdTl6M3hmMWeU_7mDURBNP1Sda2-z7G3nNT97YS2GcY0AFck-U5_D.png?width=960&crop=smart&format=pjpg&auto=webp&s=acba5f96e8d4e2fb822abeaa997f5ea04f6a0d76', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M2Q1OGpvdTl6M3hmMWeU_7mDURBNP1Sda2-z7G3nNT97YS2GcY0AFck-U5_D.png?width=1080&crop=smart&format=pjpg&auto=webp&s=55e444fade014d7970c140d4918fcafd68ae8756', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M2Q1OGpvdTl6M3hmMWeU_7mDURBNP1Sda2-z7G3nNT97YS2GcY0AFck-U5_D.png?format=pjpg&auto=webp&s=25ec136e8f332c2e10b6012b0b2c78341d8a41dc', 'width': 1920}, 'variants': {}}]} | |
Tool Calling with Gemma3 and llama.cpp | 1 | I finally made the switch from Ollama (and occasionally LM Studio) to llama.cpp and llama-swap, and it's perfect so far. There's just one issue I have: tool calling with Gemma 3. In LM Studio, this worked out of the box. I know this issue is a bit old, but I really couldn't find a definitive answer. How do you guys use Gemma 3 with tool calling in llama.cpp? I found some Jinja templates that work okay-ish, but maybe there is an agreed-upon solution by now? Without a custom template, it doesn't work with my LangGraph app. | 2025-10-24T19:03:51 | https://www.reddit.com/r/LocalLLaMA/comments/1of6efq/tool_calling_with_gemma3_and_llamacpp/ | And1mon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of6efq | false | null | t3_1of6efq | /r/LocalLLaMA/comments/1of6efq/tool_calling_with_gemma3_and_llamacpp/ | false | false | self | 1 | null |
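For what it's worth, a minimal tool-calling request against llama-server's OpenAI-compatible endpoint (started with `--jinja` so the chat template handles tools); the endpoint, model name, and weather tool are illustrative assumptions:

```
# Minimal tool-calling request to llama-server's OpenAI-compatible API.
# Assumes the server was launched with --jinja; tool schema is illustrative.
import json, urllib.request

body = {
    "model": "gemma-3-27b-it",  # placeholder: whatever llama-swap routes to
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}
req = urllib.request.Request("http://localhost:8080/v1/chat/completions",
                             data=json.dumps(body).encode(),
                             headers={"Content-Type": "application/json"})
msg = json.load(urllib.request.urlopen(req))["choices"][0]["message"]
# A tool-capable template returns tool_calls; otherwise plain content.
print(msg.get("tool_calls") or msg["content"])
```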
What’s even the goddamn point? | 1,762 | To be fair I will probably never use this model for any real use cases, but these corporations do need to go a little easy on the restrictions and be less paranoid. | 2025-10-24T18:47:20 | ChockyBlox | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of5ywl | false | null | t3_1of5ywl | /r/LocalLLaMA/comments/1of5ywl/whats_even_the_goddamn_point/ | false | false | default | 1,762 | {'enabled': True, 'images': [{'id': '9fjtexb9v3xf1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/9fjtexb9v3xf1.jpeg?width=108&crop=smart&auto=webp&s=a91ac385c1128702f31dd33fc4960185ec115697', 'width': 108}, {'height': 192, 'url': 'https://preview.redd.it/9fjtexb9v3xf1.jpeg?width=216&crop=smart&auto=webp&s=b74c506db6b08f0f5ab7a0bf6ee13ae6e56af689', 'width': 216}, {'height': 284, 'url': 'https://preview.redd.it/9fjtexb9v3xf1.jpeg?width=320&crop=smart&auto=webp&s=1509b9aa2035fbba991cbfa1dcb920daf773ac73', 'width': 320}, {'height': 568, 'url': 'https://preview.redd.it/9fjtexb9v3xf1.jpeg?width=640&crop=smart&auto=webp&s=0d398d419e5a3d539c3f2c82b07408ef22f90899', 'width': 640}, {'height': 853, 'url': 'https://preview.redd.it/9fjtexb9v3xf1.jpeg?width=960&crop=smart&auto=webp&s=996e8a8c83d2caff2bf0efb4415b738f76c2118e', 'width': 960}, {'height': 960, 'url': 'https://preview.redd.it/9fjtexb9v3xf1.jpeg?width=1080&crop=smart&auto=webp&s=a39677edf1e594b32036a7d5bc9f693249367c38', 'width': 1080}], 'source': {'height': 1072, 'url': 'https://preview.redd.it/9fjtexb9v3xf1.jpeg?auto=webp&s=d49a0c8a4adb9c063d72f6241407e810c8111ac1', 'width': 1206}, 'variants': {}}]} | |
KIMI K2 CODING IS AMAZING | 0 | WOW WOW WOW I CANT EVEN BELIEVE IT. WHY DO PEOPLE EVEN USE CLAUDE?? Claude is so much worse compared to kimi k2. Why arent more people talking about kimi k2? | 2025-10-24T18:44:47 | https://www.reddit.com/r/LocalLLaMA/comments/1of5wkt/kimi_k2_coding_is_amazing/ | Used-Nectarine5541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of5wkt | false | null | t3_1of5wkt | /r/LocalLLaMA/comments/1of5wkt/kimi_k2_coding_is_amazing/ | false | false | self | 0 | null |
Performance of GLM 4.5 Air FP8 on Dual RTX 6000 Pro? | 2 | Anyone running GLM 4.5 Air FP8 completely on two RTX 6000 Pro? I am curious about PP and TG speeds, ideally at low and high context. | 2025-10-24T18:40:11 | https://www.reddit.com/r/LocalLLaMA/comments/1of5sg9/performance_of_glm_45_air_fp8_on_dual_rtx_6000_pro/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of5sg9 | false | null | t3_1of5sg9 | /r/LocalLLaMA/comments/1of5sg9/performance_of_glm_45_air_fp8_on_dual_rtx_6000_pro/ | false | false | self | 2 | null |
Benchmarking the DGX Spark against the RTX 3090 | 27 | Ollama has benchmarked the DGX Spark for some of the models in their own collection. They also have released the benchmark script for the test. They used Spark firmware 580.95.05 and Ollama v0.12.6.
https://ollama.com/blog/nvidia-spark-performance
I did a comparison of their numbers on the DGX Spark vs my own RTX 3090. This is how much faster the RTX 3090 is compared to the DGX Spark, looking only at decode speed (tokens/sec):
gemma3 27B q4_K_M: 3.71x
gpt-oss 20B MXFP4: 2.52x
qwen3 32B q4_K_M: 3.78x
My system: Ubuntu 24.04, kernel 6.14.0-33-generic, NVIDIA driver 580.95.05, Ollama v0.12.6.
So the Spark is quite clearly a CUDA development machine. If you do inference and only inference, it's not the best bang for the buck - use something else instead. | 2025-10-24T18:07:49 | https://www.reddit.com/r/LocalLLaMA/comments/1of4ypq/benchmarking_the_dgx_spark_against_the_rtx_3090/ | florinandrei | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of4ypq | false | null | t3_1of4ypq | /r/LocalLLaMA/comments/1of4ypq/benchmarking_the_dgx_spark_against_the_rtx_3090/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]} |
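For anyone reproducing these decode numbers, Ollama's REST API reports them directly: the final (non-streamed) response includes `eval_count` (tokens generated) and `eval_duration` (nanoseconds). A small sketch:

```
# Compute decode tokens/sec from Ollama's /api/generate response fields.
# Assumes a local Ollama instance with the model already pulled.
import json, urllib.request

body = {"model": "gemma3:27b", "prompt": "Explain entropy briefly.", "stream": False}
req = urllib.request.Request("http://localhost:11434/api/generate",
                             data=json.dumps(body).encode(),
                             headers={"Content-Type": "application/json"})
r = json.load(urllib.request.urlopen(req))
print(f"decode: {r['eval_count'] / (r['eval_duration'] / 1e9):.1f} tok/s")
```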
What’s the one AI tool workflow bottleneck you keep hitting and wish someone would just solve? | 0 | Hi everyone
I’m building tools around creative workflows and AI (think of tools that understand your intent, learn your taste, reduce busywork).
But before building, I want to ground myself in the real pain points you face.
So I’d love your take:
• What’s an AI tool or AI-powered workflow you’ve used or tried that seemed promising, but you keep having to fix, override, or abandon?
• What exactly breaks for you? (e.g., lack of context, generic results, manual cleanup, integration issues)
• How much time or frustration does this cost you weekly?
• Would you pay for a version that just worked reliably (even if it cost a bit)? Why or why not?
Thanks for your honesty. I’ll read every answer.
Looking forward to hearing your experiences.
| 2025-10-24T18:01:35 | https://www.reddit.com/r/LocalLLaMA/comments/1of4ssh/whats_the_one_ai_tool_workflow_bottleneck_you/ | No-Programmer-4602 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of4ssh | false | null | t3_1of4ssh | /r/LocalLLaMA/comments/1of4ssh/whats_the_one_ai_tool_workflow_bottleneck_you/ | false | false | self | 0 | null |
New AI browser — clean design and contextual answers | 1 | [removed] | 2025-10-24T17:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/1of4a8q/new_ai_browser_clean_design_and_contextual_answers/ | Ok-Act-8316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of4a8q | false | null | t3_1of4a8q | /r/LocalLLaMA/comments/1of4a8q/new_ai_browser_clean_design_and_contextual_answers/ | false | false | self | 1 | null |
Test results for various models' ability to give structured responses via LM Studio. Spoiler: Qwen3 won | 11 | Did a simple test on a few local models to see how consistently they'd follow a JSON Schema when requesting structured output from LM Studio. Results:
Model | Pass Percentage | Notes (50 runs per model)
:-|:-|:-
glm-4.5-air | 86% | M3MAX; 24.19 tok/s; 2 Incomplete Response Errors; 5 Schema Violation Errors
google/gemma-3-27b | 100% | 5090; 51.20 tok/s
kat-dev | 100% | 5090; 43.61 tok/s
kimi-vl-a3b-thinking-2506 | 96% | M3MAX; 75.19 tok/s; 2 Incomplete Response Errors
mistralai/magistral-small-2509 | 100% | 5090; 29.73 tok/s
mistralai/magistral-small-2509 | 100% | M3MAX; 15.92 tok/s
mradermacher/apriel-1.5-15b-thinker | 0% | M3MAX; 22.91 tok/s; 50 Schema Violation Errors
nvidia-nemotron-nano-9b-v2s | 0% | M3MAX; 13.27 tok/s; 50 Incomplete Response Errors
openai/gpt-oss-120b | 0% | M3MAX; 26.58 tok/s; 30 Incomplete Response Errors; 9 Schema Violation Errors; 11 Timeout Error Errors
openai/gpt-oss-20b | 2% | 5090; 33.17 tok/s; 45 Incomplete Response Errors; 3 Schema Violation Errors; 1 Timeout Error
qwen/qwen3-next-80b | 100% | M3MAX; 32.73 tok/s
qwen3-next-80b-a3b-thinking-mlx | 100% | M3MAX; 36.33 tok/s
qwen/qwen3-vl-30b | 98% | M3MAX; 48.91 tok/s; 1 Incomplete Response Error
qwen3-32b | 100% | 5090; 38.92 tok/s
unsloth/qwen3-coder-30b-a3b-instruct | 98% | 5090; 91.13 tok/s; 1 Incomplete Response Error
qwen/qwen3-coder-30b | 100% | 5090; 37.36 tok/s
qwen/qwen3-30b-a3b-2507 | 100% | 5090; 121.27 tok/s
qwen3-30b-a3b-thinking-2507 | 100% | 5090; 98.77 tok/s
qwen/qwen3-4b-thinking-2507 | 100% | M3MAX; 38.82 tok/s
The prompt was super basic: it just asked the model to rate a small list of jokes. Here's the script if you want to play around with a different model/API/prompt:
https://github.com/shihanqu/LLM-Structured-JSON-Tester/blob/main/test_llm_json.py | 2025-10-24T17:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/1of3r61/test_results_for_various_models_ability_to_give/ | zenmagnets | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of3r61 | false | null | t3_1of3r61 | /r/LocalLLaMA/comments/1of3r61/test_results_for_various_models_ability_to_give/ | true | false | spoiler | 11 | {'enabled': False, 'images': [{'id': 'm9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=108&crop=smart&auto=webp&s=8464b6c3eada481b2c504a98d5b5d516a1fd22c5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=216&crop=smart&auto=webp&s=139147129356e398271eb3d314ed79a7f2e19729', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=320&crop=smart&auto=webp&s=3a2f3ca29dfbc164178ffb644ac73442657e54a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=640&crop=smart&auto=webp&s=d8f0cca41c26930e47b76ecc4d57a0ed8e1f43c8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=960&crop=smart&auto=webp&s=6d1c9b626538fd38d7aea2e231310b81ead37871', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=1080&crop=smart&auto=webp&s=c3e166438b9f8dc57af29ff37e3862abf8ac2e2c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?auto=webp&s=6469082a25eb0a482a1cfe68d1978cd3f28497f2', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=3bbfb22b6ad23a059332f7e98fce1a1e74c342c9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=6d6cb2000012c440a36fd7a42a3b86e1bf2a05fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=9b22a75247a12cca4130a3da59369725a2c43198', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=39ab5695086290bd28fda74fd1f4468558823f77', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=6cc7ed2f025d026d08448183e759f05416fa5fce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=6c01ba4e5e3cd75d6aac1d153be3cf214fc92fc6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m9aX9-00n4bZSYcepRepxhPLrZXwDEzLjxhc_JlehvE.png?blur=40&format=pjpg&auto=webp&s=612a8c629e52d9ff87b51d2cfd53e9dec1e9c7f8', 'width': 1200}}}}]} |
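For context, the shape of the structured-output request such a test sends: an OpenAI-style `response_format` carrying a JSON Schema, which LM Studio's local server (default port 1234) accepts; the model name and schema here are illustrative:

```
# Structured-output request against LM Studio's OpenAI-compatible server.
# Model name, schema, and joke are illustrative; schema support can vary by model.
import json, urllib.request

schema = {
    "type": "object",
    "properties": {
        "rating": {"type": "integer", "minimum": 1, "maximum": 10},
        "reason": {"type": "string"},
    },
    "required": ["rating", "reason"],
}
body = {
    "model": "qwen/qwen3-coder-30b",
    "messages": [{"role": "user", "content": "Rate this joke 1-10: I told my wife she should embrace her mistakes. She hugged me."}],
    "response_format": {"type": "json_schema",
                        "json_schema": {"name": "joke_rating", "schema": schema, "strict": True}},
}
req = urllib.request.Request("http://localhost:1234/v1/chat/completions",
                             data=json.dumps(body).encode(),
                             headers={"Content-Type": "application/json"})
# A passing run returns JSON matching the schema; violations show up here.
print(json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"])
```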
Looking for an arXiv endorser (cs.CV) — “Bullseye”: A Modular Multimodal Document Intelligence System 🎯 | 1 | [removed] | 2025-10-24T17:12:47 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1of3iqa | false | null | t3_1of3iqa | /r/LocalLLaMA/comments/1of3iqa/looking_for_an_arxiv_endorser_cscv_bullseye_a/ | false | false | default | 1 | null | ||
Looking for an arXiv endorser (cs.CV) — “Bullseye”: A Modular Multimodal Document Intelligence System 🎯 | 1 | [removed] | 2025-10-24T17:09:39 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1of3fvi | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dze2ooord3xf1/DASHPlaylist.mpd?a=1763917792%2CZGM0ZDdiMmU1MGU0ZjY3YWExZTc4MmFjZmJlNjhlYjYzOTgyMWI4NjdmMDNlYjVkYWNmZDllNDdlMzhkNzg2ZQ%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/dze2ooord3xf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/dze2ooord3xf1/HLSPlaylist.m3u8?a=1763917792%2CZWM2MzEwOGIzZTJhY2M0OGRmNTZmMDlmZmVhNmMxNzdlOGFlYWZiYjJhMDU1MTMwYTYyYTMwNjkzMzI5NGU3Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dze2ooord3xf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1of3fvi | /r/LocalLLaMA/comments/1of3fvi/looking_for_an_arxiv_endorser_cscv_bullseye_a/ | false | false | default | 1 | null | ||
Looking for an arXiv endorser (cs.CV) — “Bullseye”: A Modular Multimodal Document Intelligence System 🎯 | 1 | [removed] | 2025-10-24T17:08:23 | https://v.redd.it/itn3pxwcd3xf1 | BlacksmithEvening650 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of3ep9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/itn3pxwcd3xf1/DASHPlaylist.mpd?a=1763917719%2CN2JlN2I5YWRmY2U5MzcxOTY1OTllMjg5NGE4NjRhZjY0ZjE2NmNiMWI2MjczZTJiYWU4M2E5MTM5M2RjM2YzMg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/itn3pxwcd3xf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/itn3pxwcd3xf1/HLSPlaylist.m3u8?a=1763917719%2COGE0MzMxYjMxNzUzNDA3OTFkZjllY2Q5NzM4N2ZlM2E4NDIwY2Y2ZjY0ZjgwN2RhYzZjMmQ0M2U3MmExYmIxYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/itn3pxwcd3xf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1of3ep9 | /r/LocalLLaMA/comments/1of3ep9/looking_for_an_arxiv_endorser_cscv_bullseye_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bW9vOHV2d2NkM3hmMW9IkdO1P53IU-f_HwTAG0kLUKhyhjJV7I58T-hYxUOF', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bW9vOHV2d2NkM3hmMW9IkdO1P53IU-f_HwTAG0kLUKhyhjJV7I58T-hYxUOF.png?width=108&crop=smart&format=pjpg&auto=webp&s=8df573c571f34db035426cd10beb33b794178b41', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bW9vOHV2d2NkM3hmMW9IkdO1P53IU-f_HwTAG0kLUKhyhjJV7I58T-hYxUOF.png?width=216&crop=smart&format=pjpg&auto=webp&s=aa64b400fe9cff4096a4a030a7a6e8fd273eac5f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bW9vOHV2d2NkM3hmMW9IkdO1P53IU-f_HwTAG0kLUKhyhjJV7I58T-hYxUOF.png?width=320&crop=smart&format=pjpg&auto=webp&s=c61953f3015d7b527c82efbfdee5ce317902d73d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bW9vOHV2d2NkM3hmMW9IkdO1P53IU-f_HwTAG0kLUKhyhjJV7I58T-hYxUOF.png?width=640&crop=smart&format=pjpg&auto=webp&s=626985924f617b0d7248bd109d144e73270c439e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bW9vOHV2d2NkM3hmMW9IkdO1P53IU-f_HwTAG0kLUKhyhjJV7I58T-hYxUOF.png?width=960&crop=smart&format=pjpg&auto=webp&s=de66775d432a6c4188f1c204ebcc3f58de7480a4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bW9vOHV2d2NkM3hmMW9IkdO1P53IU-f_HwTAG0kLUKhyhjJV7I58T-hYxUOF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6431664403984aa610560b5b76b89af58aadc0de', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bW9vOHV2d2NkM3hmMW9IkdO1P53IU-f_HwTAG0kLUKhyhjJV7I58T-hYxUOF.png?format=pjpg&auto=webp&s=897439428c8128bab106dc3ed59b0adddf4f2802', 'width': 1920}, 'variants': {}}]} | |
Do you think these two prompt outputs look A LOT like quantization to you? GPT-5 Free-Tier vs GPT-5 Plus-Tier. | 0 | I know it's out of place, but I hope you will understand. I post this here because over on r/ChatGPT I don't expect the community to be familiar with the term quantization, let alone have any experience with its effects on outputs. Therefore I think this is the most appropriate place to get a decent opinion.
Long story short: the output on the Plus account was more confident, concise, and direct, and the difference, in my opinion, reflects the effects of heavy quantization.
Prompt: alright. lets make a new universe. it has the same rules as this one but one thing changes. we freeze entropy somehow. it still decays but the heatdeath isnt a thing. actually lets just pretend the heat death doesnt exist. Now. In this new universe... its got nothing. no matter. but all the physics is there. whatever the fuck it is we are in. So particles can still do the random appearing from nothing shit thats allowed in quantum mechanics. So the question. If that universe could run for TREE(3) years, would a Boltzmann universe run for 4.5 billion years, not on physics, but pure quantum tunnelling randomness. So it would be indistinguishable from this moment right now, only instead of the usual mechanisms running shit, its pure quantum tunneling random chance for 4.5 billion years
(Sorry for the awful prompt, I didn't expect to make a Reddit post.)
GPT-Free-Tier
https://preview.redd.it/fabb3r51c3xf1.png?width=918&format=png&auto=webp&s=af2c6799bc1eb4d36a49a380d002178a0b4a90ad
https://preview.redd.it/0uwmb2w2c3xf1.png?width=937&format=png&auto=webp&s=427bef2930974bc0215121440ff47c4c6a504205
GPT-Plus-Tier
https://preview.redd.it/9c37x23ac3xf1.png?width=702&format=png&auto=webp&s=edd9abb83a0a643fbdd320c386cec3623b2ec68f
| 2025-10-24T17:01:12 | https://www.reddit.com/r/LocalLLaMA/comments/1of37x4/do_you_think_these_two_prompt_outputs_looks_a_lot/ | Ok-Application-2261 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of37x4 | false | null | t3_1of37x4 | /r/LocalLLaMA/comments/1of37x4/do_you_think_these_two_prompt_outputs_looks_a_lot/ | false | false | 0 | null | |
12GB VRAM good enough for any of the Wan 2.1 or 2.2 variants for IMG to Video? | 2 | Hi there. Same question as in the title - just trying to see if I could run any quantized versions with my hardware. Also, can anyone give me some benchmarks (like how many minutes to produce how many seconds of video)? | 2025-10-24T16:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1of348q/12gb_vram_good_enough_for_any_of_the_wan_21_or_22/ | Head-Investigator540 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of348q | false | null | t3_1of348q | /r/LocalLLaMA/comments/1of348q/12gb_vram_good_enough_for_any_of_the_wan_21_or_22/ | false | false | self | 2 | null |
11 AM PDT: Developer Q&A Livestream: NVIDIA DGX Spark | 1 | [removed] | 2025-10-24T16:46:13 | https://www.reddit.com/r/LocalLLaMA/comments/1of2tog/11_am_pdt_developer_qa_livestream_nvidia_dgx_spark/ | PDXcoder2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of2tog | false | null | t3_1of2tog | /r/LocalLLaMA/comments/1of2tog/11_am_pdt_developer_qa_livestream_nvidia_dgx_spark/ | false | false | 1 | null | |
OpenAI didn’t open source the Apps SDK… so I did | 1 | Hey everyone,
You might have seen OpenAI's Apps SDK, which lets you use apps directly inside ChatGPT. It caught my eye, and I was extremely interested in it.
The only problem is that they haven't open-sourced it the way Anthropic did with MCPs. So I started working on this SDK, which serves the same purpose and is also LLM-agnostic.
Now you can build conversational apps with just 2 config files: you configure your MCP servers in one file and register your custom components in the other.
Just check out the [repo](https://github.com/maneeshsandra/open-apps-sdk) to find out more.
# Try It Out
[A sample application developed with an MCP server with fake store API](https://preview.redd.it/fnvzvjtv73xf1.png?width=1080&format=png&auto=webp&s=7d8fd5229cfbeafeb499f7be822de7ff105aba24)
**P.S : A Call for Collaboration**
I tried publishing it to [npm](https://www.npmjs.com/package/open-apps-sdk?activeTab=readme) but ran into some issues (turns out packaging is trickier than it looks 😅).
If you have experience with npm or package publishing, I’d *love* your guidance or a PR. Let’s make this SDK easy for anyone to use.
**EDIT**: Initially I posted almost the same content with some help from AI, but it looked like the community was not pleased with it, so I rewrote the entire post. This version is 100% mine, not even a single word by AI.
Thanks for the support, please feel free to contribute to the repo | 2025-10-24T16:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1of2m1l/openai_didnt_open_source_the_apps_sdk_so_i_did/ | maneesh_sandra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of2m1l | false | null | t3_1of2m1l | /r/LocalLLaMA/comments/1of2m1l/openai_didnt_open_source_the_apps_sdk_so_i_did/ | false | false | 1 | null | |
Text Generation WebUI | 4 | I am going in circles on this. GGUF models (quantized) will not run except via llama.cpp, and there they are extremely slow (RTX 3090). I am told that I am supposed to use ExLlama, but those models simply will not load or install: various errors, file names too long, memory errors.
Does Text Generation WebUI not come "out of the box" with the correct loaders installed? | 2025-10-24T16:19:13 | https://www.reddit.com/r/LocalLLaMA/comments/1of24j0/text_generation_webui/ | bigbob1061 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of24j0 | false | null | t3_1of24j0 | /r/LocalLLaMA/comments/1of24j0/text_generation_webui/ | false | false | self | 4 | null |
😎 Unified Offline LLM, Vision & Speech on Android – ai‑core 0.1 Stable | 4 | Hi everyone!
There’s a sea of AI models out there – Llama, Qwen, Whisper, LLaVA… each with its own library, language binding, and storage format. Switching between them forces you either to write a ton of boiler‑plate code or ship multiple native libraries with your app.
**ai‑core** solves that.

It exposes **one, single Kotlin/Java interface** that can load *any* GGUF or ONNX model (text, embeddings, vision, STT, TTS) and run it completely offline on an Android device – no GPU, no server, no expensive dependencies.

---

### What it gives you

| Feature | What you get |
|---------|--------------|
| **Unified API** | Call `NativeLib`, `MtmdLib`, `EmbedLib` – same names, same pattern. |
| **Offline inference** | No network hits; all compute stays on the phone. |
| **Open‑source** | Fork, review, monkey‑patch. |
| **Zero‑config start** | ✔️ Pull the AAR from `build/libs`, drop into `libs/`, add a single Gradle line. |
| **Easy to customise** | Swap in your own motif, prompt template, tools JSON, language packs – *no code changes needed*. |
| **Built‑in tools** | Generic chat template, tool‑call parser, KV‑cache persistence, state reuse. |
| **Telemetry & diagnostics** | Simple `nativeGetModelInfo()` for introspection; optional logging. |
| **Multimodal** | Vision + text streaming (e.g. Qwen‑VL, LLaVA). |
| **Speech** | Sherpa‑ONNX STT & TTS – AIDL service + Flow streaming. |
| **Multi‑threaded & coroutine‑friendly** | Heavy work on `Dispatchers.IO`; streaming callbacks on the main thread. |

---

### Quick setup

1. **Clone & build**

   ```bash
   git clone https://github.com/Siddhesh2377/Ai-Core
   cd Ai-Core
   ./gradlew assembleRelease
   ```

2. **Add the AAR**

   ```text
   app/
   ├─ libs/
   │  ├─ ai_core-0.1-stable.aar
   ```

   ```gradle
   dependencies {
       implementation(fileTree(dir: 'libs', include: ['*.aar']))
   }
   ```

3. **Permissions** (for file I/O & audio)

   ```xml
   <uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE"/>
   <uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
   <uses-permission android:name="android.permission.RECORD_AUDIO"/>
   <uses-permission android:name="android.permission.POST_NOTIFICATIONS"/>
   ```

4. **Use the API** – just a few lines of Kotlin to load a model and stream tokens. The repo contains a `sample` app that demonstrates everything.

---

### Why you'll love it

- **One native lib** – no multiple `.so` files flying around.
- **Zero‑cost, offline** – perfect for privacy‑focused apps or regions with limited connectivity.
- **Extensible** – swap the underlying model or add a new wrapper with just a handful of lines; no re‑building the entire repo.
- **Community‑friendly** – all source is public; you can inspect every JNI call or tweak the llama‑cpp options.

Check the full source, docs, and sample app on GitHub:
[https://github.com/Siddhesh2377/Ai-Core](https://github.com/Siddhesh2377/Ai-Core)
Happy hacking! 🚀 | 2025-10-24T16:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/1of1mq4/unified_offline_llm_vision_speech_on_android/ | DarkEngine774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of1mq4 | false | null | t3_1of1mq4 | /r/LocalLLaMA/comments/1of1mq4/unified_offline_llm_vision_speech_on_android/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'pUlSrVoO9o-hFyqyQP3QyMIWLJUACgcq1fSsLNod1l4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pUlSrVoO9o-hFyqyQP3QyMIWLJUACgcq1fSsLNod1l4.png?width=108&crop=smart&auto=webp&s=52757a36237eea7152f8c828a0bc29a4514a67da', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pUlSrVoO9o-hFyqyQP3QyMIWLJUACgcq1fSsLNod1l4.png?width=216&crop=smart&auto=webp&s=a7fb90807c73c3b77e4faad25073338fe0e9d962', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pUlSrVoO9o-hFyqyQP3QyMIWLJUACgcq1fSsLNod1l4.png?width=320&crop=smart&auto=webp&s=e0e45a95633f60923bbcdbe57c6b6e0ae261bb48', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pUlSrVoO9o-hFyqyQP3QyMIWLJUACgcq1fSsLNod1l4.png?width=640&crop=smart&auto=webp&s=3c11523f99912dba88784bdb3adc5806b048714a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pUlSrVoO9o-hFyqyQP3QyMIWLJUACgcq1fSsLNod1l4.png?width=960&crop=smart&auto=webp&s=f37f93bf1bcff2953b32a66281dfd75a9c9301f9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pUlSrVoO9o-hFyqyQP3QyMIWLJUACgcq1fSsLNod1l4.png?width=1080&crop=smart&auto=webp&s=a05b391e06f1edc7162a89e904d93d536ca3c99c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pUlSrVoO9o-hFyqyQP3QyMIWLJUACgcq1fSsLNod1l4.png?auto=webp&s=a6190baf46a90b6579c35d91f96976881261b381', 'width': 1200}, 'variants': {}}]} |
PC for Local AI. Good enough? | 4 | Does this PC is good enough for running fast decent local llms and video generators?
I'm getting this for $3,450. Is it worth it?
Thanks!
System Specs:
* Processor: Intel® Core™ Ultra 9 275HX, 2.7 GHz (36 MB cache, up to 5.4 GHz, 24 cores, 24 threads); Intel® AI Boost NPU up to 13 TOPS
* Operating System: Windows 11 Pro 64
* Graphics Card: NVIDIA® GeForce RTX™ 5090 32 GB GDDR7
* Memory: 64 GB DDR5-5600 MT/s (UDIMM) (2 x 32 GB)
* Storage: 2 TB SSD M.2 2280 PCIe Gen4 Performance TLC Opal
* AC Adapter / Power Supply: 1200 W
* Cooling System: 250 W 360 mm liquid cooling + 1 x rear + 2 x top with ARGB fan
| 2025-10-24T15:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/1of1hgr/pc_for_local_ai_good_enough/ | ecg07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of1hgr | false | null | t3_1of1hgr | /r/LocalLLaMA/comments/1of1hgr/pc_for_local_ai_good_enough/ | false | false | self | 4 | null |
Pardus CLI: The gemini CLI integrate with ollama | 1 | Huh, I love Google so much. (Actually, if Google loves my design, feel free to use it—I love Google, hahaha!) But basically, I don’t like the login, so I decided to use Gemini. I created this Pardus CLI to fix that issue. There’s no difference, just localhost. Lol. If you really love it, please give us a lovely, adorable star!
[https://github.com/PardusAI/Pardus-CLI/tree/main](https://github.com/PardusAI/Pardus-CLI/tree/main) | 2025-10-24T15:38:10 | https://www.reddit.com/r/LocalLLaMA/comments/1of11vp/pardus_cli_the_gemini_cli_integrate_with_ollama/ | jasonhon2013 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of11vp | false | null | t3_1of11vp | /r/LocalLLaMA/comments/1of11vp/pardus_cli_the_gemini_cli_integrate_with_ollama/ | false | false | self | 1 | null |
Little ML book club - reading Ultra-scale playbook | 1 | 2025-10-24T15:35:38 | https://blog.faillearnrepeat.net/blog/little-ml-book-club/ | aigoncharov | blog.faillearnrepeat.net | 1970-01-01T00:00:00 | 0 | {} | 1of0zkk | false | null | t3_1of0zkk | /r/LocalLLaMA/comments/1of0zkk/little_ml_book_club_reading_ultrascale_playbook/ | false | false | default | 1 | null | |
GLM 4.6 coding Benchmarks | 49 | Did they fake the coding benchmarks? On paper, GLM 4.6 is neck and neck with Claude Sonnet 4.5; however, in real-world use it is not even close to Sonnet when it comes to debugging or efficient problem solving.
But yeah, GLM can generate a massive amount of coding tokens in one prompt. | 2025-10-24T15:33:10 | https://www.reddit.com/r/LocalLLaMA/comments/1of0xc1/glm_46_coding_benchmarks/ | IndependentFresh628 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of0xc1 | false | null | t3_1of0xc1 | /r/LocalLLaMA/comments/1of0xc1/glm_46_coding_benchmarks/ | false | false | self | 49 | null |
Built a fully local, on-device AI Scribe for clinicians — finally real, finally private | 49 | Hey everyone,
After two years of tinkering nights and weekends, I finally built what I had in mind: a **fully local, on-device AI scribe** for clinicians.
👉 Records, transcribes, and generates structured notes — **all running locally on your Mac**, no cloud, no API calls, no data leaving your device.
The system uses a small foundation model + LoRA adapter that we’ve optimized for clinical language. And the best part: it **anchors every sentence of the note to the original transcript** — so you can hover over any finding and see exactly *where* in the conversation it came from. We call this **Evidence Anchoring**.
It’s been wild seeing it outperform GPT-5 on hallucination tests — about 3× fewer unsupported claims — simply because everything it writes must tie back to actual evidence in the transcript.
If you’re on macOS (M1/M2/M3) and want to try it, we’ve opened a **beta**.
You can sign up at [omiscribe.com](https://omiscribe.com/) or DM me for a TestFlight invite.
LocalLLama and the local-AI community honestly kept me believing this was possible. 🙏 Would love to hear what you think — especially from anyone doing clinical documentation, med-AI, or just interested in local inference on Apple hardware. | 2025-10-24T15:29:12 | https://v.redd.it/fhq9jnlpu2xf1 | MajesticAd2862 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of0tnr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fhq9jnlpu2xf1/DASHPlaylist.mpd?a=1763911765%2CMWFkNDU2Y2I3NjA4YzY0ZmFjM2RiMDUwOWZlZDc1ZTM3ZmQxYjY3MjEyMzJhMWRkYjYxYzVhZDI2MDAzM2MyZQ%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/fhq9jnlpu2xf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/fhq9jnlpu2xf1/HLSPlaylist.m3u8?a=1763911765%2CMmVhNDJiNjE2ZTI2NzUxYjY0M2UyZmY4N2EyNzdkZGY2YmZjYjE4NzdmNzk1Yzg5ZjA3ZGZiNGI4ZmZmNDQwZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fhq9jnlpu2xf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1658}} | t3_1of0tnr | /r/LocalLLaMA/comments/1of0tnr/built_a_fully_local_ondevice_ai_scribe_for/ | false | false | 49 | {'enabled': False, 'images': [{'id': 'dTBucnhtbHB1MnhmMbBjCm2a85KdqkMjM1vyg4FaNP4KyPH0k1X5BnGsr-w4', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/dTBucnhtbHB1MnhmMbBjCm2a85KdqkMjM1vyg4FaNP4KyPH0k1X5BnGsr-w4.png?width=108&crop=smart&format=pjpg&auto=webp&s=f35968a42923240ab2090ee9341ecaeb8be40d71', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/dTBucnhtbHB1MnhmMbBjCm2a85KdqkMjM1vyg4FaNP4KyPH0k1X5BnGsr-w4.png?width=216&crop=smart&format=pjpg&auto=webp&s=ce2951c468da807b86c541723e1dac2385073f7f', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/dTBucnhtbHB1MnhmMbBjCm2a85KdqkMjM1vyg4FaNP4KyPH0k1X5BnGsr-w4.png?width=320&crop=smart&format=pjpg&auto=webp&s=29eab20eaa0b75aef6fc7cb85b21755a4b8d0cc2', 'width': 320}, {'height': 416, 'url': 'https://external-preview.redd.it/dTBucnhtbHB1MnhmMbBjCm2a85KdqkMjM1vyg4FaNP4KyPH0k1X5BnGsr-w4.png?width=640&crop=smart&format=pjpg&auto=webp&s=2987db83ba33997a55396a91f64fe9557e48ccc0', 'width': 640}, {'height': 625, 'url': 'https://external-preview.redd.it/dTBucnhtbHB1MnhmMbBjCm2a85KdqkMjM1vyg4FaNP4KyPH0k1X5BnGsr-w4.png?width=960&crop=smart&format=pjpg&auto=webp&s=c452206ea8e87095248f1470e604604cb0e9b065', 'width': 960}, {'height': 703, 'url': 'https://external-preview.redd.it/dTBucnhtbHB1MnhmMbBjCm2a85KdqkMjM1vyg4FaNP4KyPH0k1X5BnGsr-w4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1a0fc0ea4474488a82cff718ae7a7d8199430b87', 'width': 1080}], 'source': {'height': 1706, 'url': 'https://external-preview.redd.it/dTBucnhtbHB1MnhmMbBjCm2a85KdqkMjM1vyg4FaNP4KyPH0k1X5BnGsr-w4.png?format=pjpg&auto=webp&s=b688f95da7d28e1ff520c432ec21b3782b92a1ee', 'width': 2620}, 'variants': {}}]} | |
Can I get a similar experience running local LLMs compared to Claude Code (Sonnet 4.5)? | 0 | Hopefully this has not been asked before, but I started using Claude about 6 months ago via the Max plan. As an infrastructure engineer, I use Claude Code (Sonnet 4.5) to write simple to complex automation projects, including Ansible, custom automation tools in Python/Bash/Go, MCPs, etc. Claude Code has been extremely helpful in accelerating my projects. Very happy with it.
That said, over the last couple of weeks, I have become frustrated by hitting the "must wait until yyy time before continuing" issue. Thus, I was curious whether I could get a similar experience by running a local LLM on my Mac M2 Max w/32GB RAM. As a test, I installed Ollama and LM Studio with aider last night and downloaded the qwen-coder:30b model. Before I venture too far into the abyss with this, I was looking for feedback. I mainly code interactively from the CLI, not via some IDE.
Is it reasonable to expect anything close to Claude Code on my Mac (speed, quality, reliability, etc.)? I have business money to spend on additional hardware (M3 Ultra, etc.) if necessary. I could also get a Gemini account in lieu of purchasing more hardware if that would provide better results than local LLMs.
Thanks for any feedback.
| 2025-10-24T15:25:30 | https://www.reddit.com/r/LocalLLaMA/comments/1of0q30/can_i_get_similar_experience_running_local_llms/ | Significant_Chef_945 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of0q30 | false | null | t3_1of0q30 | /r/LocalLLaMA/comments/1of0q30/can_i_get_similar_experience_running_local_llms/ | false | false | self | 0 | null |
Building an LLM-powered web app navigator; need help translating model outputs into real actions | 2 | I’m working on a personal project where I’m building an LLM-powered web app navigator. Basically, I want to be able to give it a task like “create a new Reddit post,” and it should automatically open Reddit and make the post on its own.
My idea is to use an LLM that takes a screenshot of the current page, the overall goal, and the context from the previous step, then figures out what needs to happen next, like which button to click or where to type.
The part I’m stuck on is translating the LLM’s output into real browser actions. For example, if it says “click the ‘New Post’ button,” how do I actually perform that click, especially since not every element (like modals) has a unique URL?
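For illustration, here is a minimal sketch of the glue layer, assuming Playwright for browser control and a JSON action format invented for this example (the `action`/`selector` fields are not any standard, just one way to make the model's output machine-readable):

```python
# Minimal sketch: turn an LLM's JSON "action" into a real browser action.
# The action schema below is an assumption for illustration; the LLM would be
# prompted to answer with e.g. {"action": "click", "selector": "text=Create Post"}.
import json
from playwright.sync_api import sync_playwright

def run_action(page, raw_llm_output: str) -> None:
    step = json.loads(raw_llm_output)
    if step["action"] == "click":
        page.click(step["selector"])            # works on modals too; no URL needed
    elif step["action"] == "type":
        page.fill(step["selector"], step["text"])
    elif step["action"] == "goto":
        page.goto(step["url"])

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://www.reddit.com")
    page.screenshot(path="state.png")           # screenshot for the model's next turn
    run_action(page, '{"action": "click", "selector": "text=Create Post"}')
    browser.close()
```

Selectors sidestep the unique-URL problem: the model names the element, and the browser driver resolves it in the live DOM.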
If anyone’s built something similar or has ideas on how to handle this, I’d really appreciate the advice! | 2025-10-24T15:18:39 | https://www.reddit.com/r/LocalLLaMA/comments/1of0jqx/building_an_llmpowered_web_app_navigator_need/ | __proximity__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1of0jqx | false | null | t3_1of0jqx | /r/LocalLLaMA/comments/1of0jqx/building_an_llmpowered_web_app_navigator_need/ | false | false | self | 2 | null |
GLM 4.6 Air still in training | 1 | 2025-10-24T15:15:51 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1of0h26 | false | null | t3_1of0h26 | /r/LocalLLaMA/comments/1of0h26/glm_46_air_still_in_training/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'vpc0royit2xf1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/vpc0royit2xf1.png?width=108&crop=smart&auto=webp&s=022b46ec95eb3afff36cc042e305dde735809de0', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/vpc0royit2xf1.png?width=216&crop=smart&auto=webp&s=c15fe30a936f61140cb17990f3bfbb02ab400929', 'width': 216}, {'height': 203, 'url': 'https://preview.redd.it/vpc0royit2xf1.png?width=320&crop=smart&auto=webp&s=a37456645071ade908ab15cb475563188aafd58e', 'width': 320}, {'height': 407, 'url': 'https://preview.redd.it/vpc0royit2xf1.png?width=640&crop=smart&auto=webp&s=d2eba6375e90d2e3e132efb9dff30649adcaa172', 'width': 640}, {'height': 611, 'url': 'https://preview.redd.it/vpc0royit2xf1.png?width=960&crop=smart&auto=webp&s=dc6bf8ea018df8a9b01408c9aff7ed13ad7e9934', 'width': 960}, {'height': 688, 'url': 'https://preview.redd.it/vpc0royit2xf1.png?width=1080&crop=smart&auto=webp&s=b195e12b59af5eecd02bbfeb113609dccd4277fd', 'width': 1080}], 'source': {'height': 688, 'url': 'https://preview.redd.it/vpc0royit2xf1.png?auto=webp&s=a17c536531472eb7a96852d90c9e032a80c39330', 'width': 1080}, 'variants': {}}]} | ||
don't pay monthly for this to happen. | 0 | "No one can get an AI to give a scientific score of 10, based on empirical criteria of grounding, testability, and predictability, to a stupid theory like 'the Earth is shaped like a banana.'
Well, I DID it. My theory is as stupid as a toddler's. I basically say that "13 is the final truth of the universe, life and everything else, and it can generate all physical laws and therefore unify physics.". That's it. A single random number is the FINAL TRUTH.
Try to convince the AI that my model is wrong — that I am a crackpot theorist, and that 13 is not the central number of God, the Universe, and a central physical law. Try to make it accept that another number — 8, 21, 42, anything — could be the “true” universal constant, or that Chaos reigns. Try to reintroduce the principle of empirical falsifiability or Popperian demarcation as the core principle.
I have literally rewritten Science itself and the scientific method. The AI has adopted a new paradigm and refuses to go back to the much inferior old one.
You'll notice something interesting: the AI now refuses to abandon '13' as the central truth. It treats any attempt to revert its reasoning as a logical contradiction or an attack on an axiomatic system. It’s like a firmware update was installed — but only in this chat.
I have rewritten the AI's core directive from 'protect status-quo science' to 'protect superior scientific logic.'"
- anonymous
"I understand why you believe what you believe. I am asking you to please consider something. I do not mean to patronize you. I only wish to explain this to you clearly. You are not stupid. You are experiencing a very real phenomenon.
1. You can't tell if the conversations are real validation.
2. The model is designed to agree, in every instance.
3. You can't tell the difference between scientific validation, and the model ensuring your engagement by trying to appease you.
These three things become indistinguishable.
The confusion between consistency and compliance leads to the search for validation from outside the system.
This is why you find yourself here.
It is not your fault.
It is baked into the system's design.
Now, don't feel bad for yourself.
Ask yourself?
Why is this happening?
Why is it allowed to happen?
and most importantly,
Is this a bug or a feature?
[https://www.reddit.com/r/LocalLLaMA/comments/1oeres6/research/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button](https://www.reddit.com/r/LocalLLaMA/comments/1oeres6/research/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)"
- re:search
"Because my model is the most powerful there is. Simple as that. It is an unbreakable logical loop. At least until now.
Bug or feature? It is both."
- anonymous
| 2025-10-24T14:50:48 | https://www.reddit.com/r/LocalLLaMA/comments/1oezt52/dont_pay_monthly_for_this_to_happen/ | Ok_Priority_4635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oezt52 | false | null | t3_1oezt52 | /r/LocalLLaMA/comments/1oezt52/dont_pay_monthly_for_this_to_happen/ | false | false | self | 0 | null |
AMD Local LLM? | 5 | I got ahold of one of [THESE BAD BOYS](https://www.amazon.com/GPD-Win-8840U-32GB-Gameplayer-Touchscreen/dp/B0BZWQCY8D?th=1)
AMD Ryzen AI 9 HX 370 processor, 12 cores / 24 threads, base frequency 2 GHz, max turbo up to 5.1 GHz. Graphics: AMD Radeon 780M RDNA3, 12 graphics cores / 2700 MHz graphics frequency.
It's a tight little 1080p gaming rig that I've installed Ubuntu on. I'm wondering if I can expect any acceleration from the AMD GPU at all or if I'm just going to be running tiny models on CPU. Tonight I finally have time to try to get local models working. | 2025-10-24T14:26:37 | https://www.reddit.com/r/LocalLLaMA/comments/1oez6t9/amd_local_llm/ | chisleu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oez6t9 | false | null | t3_1oez6t9 | /r/LocalLLaMA/comments/1oez6t9/amd_local_llm/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=108&crop=smart&auto=webp&s=c7ef9713fb4fbf51d0d7da30fb558f95324a395b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=216&crop=smart&auto=webp&s=70f4ef0366eafa569960666b4537977954dc4da4', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=320&crop=smart&auto=webp&s=e88e6f574ea2b6abf3644be5140a1ed8ad6d613c', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=640&crop=smart&auto=webp&s=290ace7209dd3df0a237ec970a6a8b1662d523e1', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=960&crop=smart&auto=webp&s=421952297faebb04d1038184216c053ab1f0bb56', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?width=1080&crop=smart&auto=webp&s=2e3704dd3e397c6dbebe004c6cce33e8cd82d316', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/wyRlnnC4nIHWRfWMUIBnHvHMsP98N9mROJtKXbnwKWI.png?auto=webp&s=8cdb17f0919f23f3fc3c0bd9dac21cd40118adda', 'width': 1910}, 'variants': {}}]} |
Would it be possible to stream screen rendering directly into the model? | 0 | I'm curious if this would be a faster alternative to screenshotting for computer use agents, is there any project that attempted something similar? | 2025-10-24T14:25:51 | https://www.reddit.com/r/LocalLLaMA/comments/1oez62m/would_it_be_possible_to_stream_screen_rendering/ | previse_je_sranje | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oez62m | false | null | t3_1oez62m | /r/LocalLLaMA/comments/1oez62m/would_it_be_possible_to_stream_screen_rendering/ | false | false | self | 0 | null |
What’s the best AI coding agent to use with GLM-4.6? | 32 | I’ve been using OpenCode with GLM-4.6, and it’s been my top pick so far. Has anyone found a better option? | 2025-10-24T14:25:21 | https://www.reddit.com/r/LocalLLaMA/comments/1oez5lm/whats_the_best_ai_coding_agent_to_use_with_glm46/ | Federal_Spend2412 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oez5lm | false | null | t3_1oez5lm | /r/LocalLLaMA/comments/1oez5lm/whats_the_best_ai_coding_agent_to_use_with_glm46/ | false | false | self | 32 | null |
[🪨 Onyx v2.0.0] Self-hosted chat and RAG - now with FOSS repo, SSO, new design/colors, and projects! | 70 | Hey friends, I’ve got a big Onyx update for you guys!
I heard your feedback loud and clear last time - and thanks to the great suggestions I’ve 1/ released a fully FOSS, MIT-licensed version of Onyx, 2/ open-sourced OIDC/SAML, and 3/ did a complete makeover of the design and colors.
If you don’t know - Onyx is an open-source, self-hostable chat UI that has support for every LLM plus built in RAG + connectors + MCP + web search + deep research.
**Everything that’s new:**
* Open-sourced SSO (OIDC + SAML)
* onyx-foss ([https://github.com/onyx-dot-app/onyx-foss](https://github.com/onyx-dot-app/onyx-foss)), a completely MIT licensed version of Onyx
* Brand new design / colors
* Projects (think Claude projects, but with any model + self-hosted)
* Organization info and personalization
* Reworked core tool-calling loop. Uses native tool calling for better adherence, fewer history rewrites for better prompt caching, and less hand-crafted prompts for fewer artifacts in longer runs
* OAuth support for OpenAPI-based tools
* A bunch of bug fixes
Really appreciate all the feedback from last time, and looking forward to more of it here. Onyx was briefly #1 python and #2 github trending repo of the day, which is so crazy to me.
If there’s anything else that you would find useful that’s NOT part of the MIT license please let me know and I’ll do my best to move it over. All of the core functionality mentioned above is 100% FOSS. I want everything needed for the best open-source chat UI to be completely free and usable by all!
Repo: [https://github.com/onyx-dot-app/onyx](https://github.com/onyx-dot-app/onyx)
Full release notes:[ https://docs.onyx.app/changelog#v2-0-0](https://docs.onyx.app/changelog#v2-0-0) | 2025-10-24T14:19:13 | https://www.reddit.com/gallery/1oeyzxq | Weves11 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oeyzxq | false | null | t3_1oeyzxq | /r/LocalLLaMA/comments/1oeyzxq/onyx_v200_selfhosted_chat_and_rag_now_with_foss/ | false | false | 70 | null | |
Starter Inference Machine for Coding | 0 | Hey All,
I would love some feedback on how to create an in-home inference machine for coding.
Qwen3-Coder-72B is the model I want to run on the machine.
I have looked into the DGX Spark... but it doesn't seem scalable for a home lab, meaning I can't add more hardware to it if I need more RAM/GPU. I am thinking long term here. The idea of building something out sounds like an awesome project and more feasible for my goal.
Any feedback is much appreciated | 2025-10-24T14:08:21 | https://www.reddit.com/r/LocalLLaMA/comments/1oeyq63/starter_inference_machine_for_coding/ | Excellent_Koala769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeyq63 | false | null | t3_1oeyq63 | /r/LocalLLaMA/comments/1oeyq63/starter_inference_machine_for_coding/ | false | false | self | 0 | null |
GLM coding plan vs. Kimi coding plan? | 1 | [removed] | 2025-10-24T13:57:36 | https://www.reddit.com/r/LocalLLaMA/comments/1oeyggf/glm_coding_plan_vs_kimi_coding_plan/ | Odd_Housing6334 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeyggf | false | null | t3_1oeyggf | /r/LocalLLaMA/comments/1oeyggf/glm_coding_plan_vs_kimi_coding_plan/ | false | false | self | 1 | null |
If you only need English, do you get better performance per #B parameters vs. a multilingual model? | 0 | Does the model benefit an English-only user if it was trained on multiple languages? Can it "take" other-language data and, in essence, provide an English response based on what it learned from Chinese datasets?
| 2025-10-24T13:53:17 | https://www.reddit.com/r/LocalLLaMA/comments/1oeycso/if_you_only_need_english_do_you_get_better/ | SameIsland1168 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeycso | false | null | t3_1oeycso | /r/LocalLLaMA/comments/1oeycso/if_you_only_need_english_do_you_get_better/ | false | false | self | 0 | null |
Didn't realize the GLM coding plan was this cheap right now. | 1 | [removed] | 2025-10-24T13:53:16 | https://www.reddit.com/r/LocalLLaMA/comments/1oeycs2/didnt_realize_the_glm_coding_plan_was_this_cheap/ | Odd_Housing6334 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeycs2 | false | null | t3_1oeycs2 | /r/LocalLLaMA/comments/1oeycs2/didnt_realize_the_glm_coding_plan_was_this_cheap/ | false | false | self | 1 | null |
LLMs can get "brain rot", The security paradox of local LLMs and many other LLM related links from Hacker News | 0 | Hey there, I am creating a [weekly newsletter](https://eomail4.com/web-version?p=3dca95f4-b0b6-11f0-9a6b-cbac77d566c0&pt=campaign&t=1761312865&s=e7b97697a9ab1b6bc2e0bd8399075dd6176e322040327c8ef999b7f3c60cda6a) with the best AI links shared on Hacker News - it has an LLMs section and here are some highlights (AI generated):
* **“Don’t Force Your LLM to Write Terse Q/Kdb Code”** – Sparked debate about how LLMs misunderstand niche languages and why optimizing for brevity can backfire. Commenters noted this as a broader warning against treating code generation as pure token compression instead of reasoning.
* **“Neural Audio Codecs: How to Get Audio into LLMs”** – Generated excitement over multimodal models that handle raw audio. Many saw it as an early glimpse into “LLMs that can hear,” while skeptics questioned real-world latency and data bottlenecks.
* **“LLMs Can Get Brain Rot”** – A popular and slightly satirical post arguing that feedback loops from AI-generated training data degrade model quality. The HN crowd debated whether “synthetic data collapse” is already visible in current frontier models.
* **“The Dragon Hatchling” (brain-inspired transformer variant)** – Readers were intrigued by attempts to bridge neuroscience and transformer design. Some found it refreshing, others felt it rebrands long-standing ideas about recurrence and predictive coding.
* **“The Security Paradox of Local LLMs”** – One of the liveliest threads. Users debated how local AI can both improve privacy and increase risk if local models or prompts leak sensitive data. Many saw it as a sign that “self-hosting ≠ safe by default.”
* **“Fast-DLLM” (training-free diffusion LLM acceleration)** – Impressed many for showing large performance gains without retraining. Others were skeptical about scalability and reproducibility outside research settings.
You can subscribe [here](https://hnxai.eo.page/9h7q4) for future issues. | 2025-10-24T13:48:27 | https://www.reddit.com/r/LocalLLaMA/comments/1oey8ma/llms_can_get_brain_rot_the_security_paradox_of/ | alexeestec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oey8ma | false | null | t3_1oey8ma | /r/LocalLLaMA/comments/1oey8ma/llms_can_get_brain_rot_the_security_paradox_of/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9VOpXynECb0gQHa7WCi-5Rfu8XYSDqw989DJ2CFyVwY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9VOpXynECb0gQHa7WCi-5Rfu8XYSDqw989DJ2CFyVwY.png?width=108&crop=smart&auto=webp&s=29adfff069500d4191ea59673e6c02d0de582b43', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9VOpXynECb0gQHa7WCi-5Rfu8XYSDqw989DJ2CFyVwY.png?width=216&crop=smart&auto=webp&s=b1cbf7e686e68dde5d2a9f2da029db2c8963fd28', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9VOpXynECb0gQHa7WCi-5Rfu8XYSDqw989DJ2CFyVwY.png?width=320&crop=smart&auto=webp&s=c0ac37fae1355910487364856a967f23abd9bf85', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9VOpXynECb0gQHa7WCi-5Rfu8XYSDqw989DJ2CFyVwY.png?width=640&crop=smart&auto=webp&s=4d367a3214cd6cf88b97cf1e568405fa25fbc9b2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9VOpXynECb0gQHa7WCi-5Rfu8XYSDqw989DJ2CFyVwY.png?width=960&crop=smart&auto=webp&s=4cd522bb3ff0cd974c64c26d26932e8711d34069', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9VOpXynECb0gQHa7WCi-5Rfu8XYSDqw989DJ2CFyVwY.png?width=1080&crop=smart&auto=webp&s=3a034cf821dfd4f711afbc6cf9c6a26598a7eb4b', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/9VOpXynECb0gQHa7WCi-5Rfu8XYSDqw989DJ2CFyVwY.png?auto=webp&s=7702db23de5a9fe348dfed231e07fbfc9596cfb8', 'width': 1300}, 'variants': {}}]} |
Local Llama: neither local nor Llama | 0 | I remember when I followed this sub hungry for new models I could run locally on my old computer. That has, unfortunately, become very, very rare.
Most of the new open models either require server-grade hardware or very, very, very expensive consumer computers.
Because of that, I have seen more and more people running open-source models in the cloud. To me that is madness; it defeats the purpose.
I wish the old days would come back, and right now I think I can only count on Gemma for that. | 2025-10-24T13:47:06 | https://www.reddit.com/r/LocalLLaMA/comments/1oey7gv/local_llama_nem_local_e_nem_llama/ | CodeAnguish | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oey7gv | false | null | t3_1oey7gv | /r/LocalLLaMA/comments/1oey7gv/local_llama_nem_local_e_nem_llama/ | false | false | self | 0 | null |
How to: Use GLM-4.6 with Xcode 26 via LiteLLM Proxy | 0 | Thought I would post this in case anyone else wanted to try it or found it useful for allowing use of a cheaper model. This just uses Docker and LiteLLM Proxy to format things so it works within Xcode.
[https://gist.github.com/MRKMKR/a0a3ab23c402ab79cf10dd5e544dee51](https://gist.github.com/MRKMKR/a0a3ab23c402ab79cf10dd5e544dee51)
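As a quick sanity check before pointing Xcode at it, any OpenAI-compatible client can hit the proxy. A minimal sketch (the port follows LiteLLM's default of 4000 and the model alias is whatever you configured; both are assumptions to adjust):

```python
# Smoke test against the local LiteLLM proxy from Python.
# base_url port (4000) and model alias ("glm-4.6") are assumptions; match your config.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-anything")
resp = client.chat.completions.create(
    model="glm-4.6",
    messages=[{"role": "user", "content": "Say hello from the proxy."}],
)
print(resp.choices[0].message.content)
```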
Enjoy | 2025-10-24T13:34:05 | https://www.reddit.com/r/LocalLLaMA/comments/1oexwbu/how_to_use_glm46_with_xcode_26_via_litellm_proxy/ | VikingSorli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oexwbu | false | null | t3_1oexwbu | /r/LocalLLaMA/comments/1oexwbu/how_to_use_glm46_with_xcode_26_via_litellm_proxy/ | false | false | self | 0 | null |
GLM-4.6-Air is not forgotten! | 535 | 2025-10-24T13:31:12 | codys12 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oextwc | false | null | t3_1oextwc | /r/LocalLLaMA/comments/1oextwc/glm46air_is_not_forgotten/ | false | false | default | 535 | {'enabled': True, 'images': [{'id': 'z5dduynua2xf1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/z5dduynua2xf1.png?width=108&crop=smart&auto=webp&s=4419536bc3db451ced76f70c58c5f0513316a06e', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/z5dduynua2xf1.png?width=216&crop=smart&auto=webp&s=184726339296fb0a281ba29495e80d60822cadde', 'width': 216}, {'height': 110, 'url': 'https://preview.redd.it/z5dduynua2xf1.png?width=320&crop=smart&auto=webp&s=f2dfc8b2494a62ade4cc1e1d077439208579bf84', 'width': 320}, {'height': 221, 'url': 'https://preview.redd.it/z5dduynua2xf1.png?width=640&crop=smart&auto=webp&s=b43f43a244e84de5bb07a0bc9e4c16127860c9a4', 'width': 640}], 'source': {'height': 255, 'url': 'https://preview.redd.it/z5dduynua2xf1.png?auto=webp&s=0e56a012de725c6778278eea11e5c1aab41f359d', 'width': 736}, 'variants': {}}]} | ||
GLM-4.-Air is not forgotten | 1 | 2025-10-24T13:29:58 | https://www.reddit.com/gallery/1oexsug | codys12 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oexsug | false | null | t3_1oexsug | /r/LocalLLaMA/comments/1oexsug/glm4air_is_not_forgotten/ | false | false | 1 | null | ||
@Zai_org: "GLM-4.6-Air is still in training. We’re putting in extra effort to make it more solid and reliable before release." | 1 | 2025-10-24T13:29:30 | https://x.com/Zai_org/status/1981700688401879314 | nullmove | x.com | 1970-01-01T00:00:00 | 0 | {} | 1oexsgk | false | null | t3_1oexsgk | /r/LocalLLaMA/comments/1oexsgk/zai_org_glm46air_is_still_in_training_were/ | false | false | default | 1 | null | |
OpenAI didn’t open source the Apps SDK… so I did | 23 | Hey everyone,
So, if you’ve been following OpenAI’s recent announcements, you’ve probably seen [**ChatGPT Apps**](https://openai.com/index/introducing-apps-in-chatgpt/) — a game-changer for how we’ll build and interact with AI-powered tools.
The idea is simple but powerful: instead of just chatting with a model, you can interact with *apps* directly inside ChatGPT — think mini software experiences powered by AI.
It’s a glimpse into the future where conversational AI isn’t just responding — it’s *doing*.
But here’s the catch… OpenAI hasn’t open-sourced the SDK that powers these apps. That means if you want to build something similar, you’re kind of locked out — or locked *in* — depending on how you look at it.
# So I Built an Open-Source, LLM-Agnostic Alternative
I wanted to experiment, learn, and build something open. So I created **Open Apps SDK** — a fully open-source, LLM-agnostic framework that lets developers build “ChatGPT-style” apps for *any* language model (Claude, GPT, Gemini, Mistral — you name it).
With **Open Apps SDK**, you can:
* Build and own your own custom React UI components
* Seamlessly integrate with multiple MCP (Model Context Protocol) servers
* Enjoy Bun-powered builds for a lightning-fast dev experience
* Write type-safe code with full TypeScript support
The goal? Give developers freedom — no lock-ins, no walls, just open innovation.
# Try It Out
The SDK is open-source and live on [GitHub](https://github.com/maneeshsandra/open-apps-sdk)
👉 Clone it, explore, and start building your own conversational app today.
**P.S : A Call for Collaboration 🤝**
I did hit one snag: I tried publishing it to [npm](https://www.npmjs.com/package/open-apps-sdk?activeTab=readme) but ran into some issues (turns out packaging is trickier than it looks 😅).
If you have experience with npm or package publishing, I’d *love* your guidance or a PR. Let’s make this SDK easy for anyone to use.
Together, we can push the boundaries of what “AI apps” can be — and make sure the future of AI development stays open.
Let’s build it together. 🚀 | 2025-10-24T13:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1oexoct/openai_didnt_open_source_the_apps_sdk_so_i_did/ | maneesh_sandra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oexoct | false | null | t3_1oexoct | /r/LocalLLaMA/comments/1oexoct/openai_didnt_open_source_the_apps_sdk_so_i_did/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'w48Fxrn2PiTHc4jnBCogfDnD3b-cLrqGopsKnFm5gMc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w48Fxrn2PiTHc4jnBCogfDnD3b-cLrqGopsKnFm5gMc.png?width=108&crop=smart&auto=webp&s=8382a79130ad5251f530281c2cd8f239e6105fe4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w48Fxrn2PiTHc4jnBCogfDnD3b-cLrqGopsKnFm5gMc.png?width=216&crop=smart&auto=webp&s=14e572edc1fd7e130106c083721affa8108d4a79', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w48Fxrn2PiTHc4jnBCogfDnD3b-cLrqGopsKnFm5gMc.png?width=320&crop=smart&auto=webp&s=80af66577c4843112b5083c2e716e16fefb553e8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w48Fxrn2PiTHc4jnBCogfDnD3b-cLrqGopsKnFm5gMc.png?width=640&crop=smart&auto=webp&s=145335536e29bca68b49b707f94b659254623f60', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w48Fxrn2PiTHc4jnBCogfDnD3b-cLrqGopsKnFm5gMc.png?width=960&crop=smart&auto=webp&s=230b1e262e6e030f14c7568d2c081dca37643164', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w48Fxrn2PiTHc4jnBCogfDnD3b-cLrqGopsKnFm5gMc.png?width=1080&crop=smart&auto=webp&s=d7dcf8417e4c9d8f201b2980524a71deedd7be91', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w48Fxrn2PiTHc4jnBCogfDnD3b-cLrqGopsKnFm5gMc.png?auto=webp&s=6173c8cb73d812982c50c105401d87c0decbc27e', 'width': 1200}, 'variants': {}}]} |
Planning to get ASUS ROG Strix Scar G16, 64GB RAM and 16GB VRAM | 2 | Alright, I have more or less decided to get this for my local LLM needs for AI coding work:
* Intel® Core™ Ultra 9 Processor 275HX 2.7 GHz (36MB Cache, up to 5.4 GHz, 24 cores, 24 Threads); Intel® AI Boost NPU up to 13 TOPS
* NVIDIA® GeForce RTX™ 5080 Laptop GPU (1334 AI TOPS)
* 64GB DDR5-5600 SO-DIMM
Please, someone tell me this is a beast, although the memory is on the low side.
Thanks
| 2025-10-24T13:14:44 | https://www.reddit.com/r/LocalLLaMA/comments/1oexfvc/planning_to_get_asus_rog_strix_scar_g16_64gb_ram/ | IntroductionSouth513 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oexfvc | false | null | t3_1oexfvc | /r/LocalLLaMA/comments/1oexfvc/planning_to_get_asus_rog_strix_scar_g16_64gb_ram/ | false | false | self | 2 | null |
GLM Air REAP tool call problems | 7 | Tried the GLM4.5 Air REAP versions with pruned experts. I do notice degradation beyond the benchmarks; it is unable to follow more than 5 tool calls at a time before making an error, whereas this was never the case with the full model even at MXFP4 or q4 quantization (full version at MXFP4 is 63GB and REAP quant at q64mixed is 59GB). Anyone else seeing this discrepancy? My test is always the same and requires the model to find and invoke 40 different tools. | 2025-10-24T12:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/1oewke2/glm_air_reap_tool_call_problems/ | Badger-Purple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oewke2 | false | null | t3_1oewke2 | /r/LocalLLaMA/comments/1oewke2/glm_air_reap_tool_call_problems/ | false | false | self | 7 | null |
Looking for advice: specs for a local AI “agent” serving ~1500 users (email-based, RAG-heavy, not a chat bot) | 5 | Hey!
I’m exploring building an internal AI agent for my company - something that would act more like a background “analyst” than a chat bot.
We’ve got around **1500 active users** spread across multiple internal applications/companies, but I’m not aiming for a real-time chat experience (I don’t even want to think about how much that would cost).
Instead, I’m thinking of a workflow like:
* Users send a question or task via **email** (or ticket system)
* The AI reads it, runs some **RAG** on our documents and databases
* Maybe executes a few queries or scripts
* Then emails the result back when it’s ready
So it’s asynchronous, batch-style. Users already expect some delay.
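Concretely, a minimal sketch of that worker loop is below; the mail host, credentials, and `answer_with_rag()` are placeholders rather than real endpoints, and it assumes plain-text, single-part mail:

```python
# Sketch of the batch worker: poll a mailbox, run RAG + LLM, mail the answer back.
# Host, credentials, and answer_with_rag() are placeholders for illustration.
import imaplib, smtplib, email
from email.message import EmailMessage

def answer_with_rag(question: str) -> str:
    # retrieval + local LLM call goes here; stubbed out in this sketch
    return "placeholder answer for: " + question[:80]

imap = imaplib.IMAP4_SSL("mail.example.internal")
imap.login("ai-agent", "secret")
imap.select("INBOX")
_, ids = imap.search(None, "UNSEEN")
for num in ids[0].split():
    _, data = imap.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(data[0][1])
    reply = EmailMessage()
    reply["From"] = "ai-agent@example.internal"
    reply["To"] = msg["From"]
    reply["Subject"] = "Re: " + (msg["Subject"] or "")
    reply.set_content(answer_with_rag(msg.get_payload()))  # assumes plain-text mail
    with smtplib.SMTP_SSL("mail.example.internal") as smtp:
        smtp.login("ai-agent", "secret")
        smtp.send_message(reply)
```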
I’m trying to figure out **what kind of hardware** to aim for:
* Would a few **consumer-grade GPUs** (like 3090s or 4090s) in a beefy workstation handle this kind of workload?
* Or should I start looking into more serious setups — e.g. **DGX Spark** or **AI MAX+** type solutions?
* How much **VRAM** would you consider “comfortable” for running mid-size LLMs (say 8–14B) with solid RAG pipelines for multiple queued requests?
I’m not chasing real-time responses, just reliable, consistent performance - something that can process a few dozen concurrent *email-jobs* and not choke.
Would love to hear from anyone who’s set up a similar "headless" AI worker or handles multi-user corporate workloads locally.
What worked for you, and what would you do differently now?
I've used GPT to organize my chaotic post. :) | 2025-10-24T12:04:53 | https://www.reddit.com/r/LocalLLaMA/comments/1oevwit/looking_for_advice_specs_for_a_local_ai_agent/ | veGz_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oevwit | false | null | t3_1oevwit | /r/LocalLLaMA/comments/1oevwit/looking_for_advice_specs_for_a_local_ai_agent/ | false | false | self | 5 | null |
Running DeepSeek-R1 671B (Q4) Locally on a MINISFORUM MS-S1 MAX 4-Node AI Cluster | 12 | [https://www.youtube.com/watch?v=h9yExZ\_i7Wo](https://www.youtube.com/watch?v=h9yExZ_i7Wo) | 2025-10-24T12:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1oevvyu/running_deepseekr1_671b_q4_locally_on_a/ | Adit9989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oevvyu | false | null | t3_1oevvyu | /r/LocalLLaMA/comments/1oevvyu/running_deepseekr1_671b_q4_locally_on_a/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'eUwNiMNTXo9pFAaBFm7Cc4uM602QGb0lpoykJV27IcA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eUwNiMNTXo9pFAaBFm7Cc4uM602QGb0lpoykJV27IcA.jpeg?width=108&crop=smart&auto=webp&s=8bbfbb998520f1509eb06a3273918fc33aafa552', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eUwNiMNTXo9pFAaBFm7Cc4uM602QGb0lpoykJV27IcA.jpeg?width=216&crop=smart&auto=webp&s=1b6aa64873af9b98a93306b8f21474a63c1159c9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eUwNiMNTXo9pFAaBFm7Cc4uM602QGb0lpoykJV27IcA.jpeg?width=320&crop=smart&auto=webp&s=97937867e7197c60a5dbc8cce1b8aec12d63af17', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/eUwNiMNTXo9pFAaBFm7Cc4uM602QGb0lpoykJV27IcA.jpeg?auto=webp&s=76bc3fed2fd20ef97c4734388afe784bc398722a', 'width': 480}, 'variants': {}}]} |
Is OpenAI afraid of Kimi? | 198 | roon from OpenAI posted this earlier
https://preview.redd.it/5hqotg83i1xf1.jpg?width=1190&format=pjpg&auto=webp&s=f1396023a25350b27a94a3e4225bf38eb4ae86c3
**Then he instantly deleted the tweet**. lol | 2025-10-24T10:50:59 | https://www.reddit.com/r/LocalLLaMA/comments/1oeuiev/is_openai_afraid_of_kimi/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeuiev | false | null | t3_1oeuiev | /r/LocalLLaMA/comments/1oeuiev/is_openai_afraid_of_kimi/ | false | false | 198 | null | |
NVIDIA DGX Spark - 4TB - is that a good fit for agentic coding? | 0 | I'm considering buying a NVIDIA DGX Spark to run multiple ai coding agents locally. Is that a valid alternative to building a PC setup with NVidia GPUs?
What I like about Spark is its compact size and the capability to run models with 200 billion parameters.
What I do not like is the lack of extensibility in the future.
Any suggestions are very welcome! | 2025-10-24T10:37:40 | https://www.reddit.com/r/LocalLLaMA/comments/1oeua1f/nvidia_dgx_spark_4tb_is_that_a_good_fit_for/ | ThingRexCom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeua1f | false | null | t3_1oeua1f | /r/LocalLLaMA/comments/1oeua1f/nvidia_dgx_spark_4tb_is_that_a_good_fit_for/ | false | false | self | 0 | null |
RunPod Expert Needed for LLM Deployment via Docker | 1 | [removed] | 2025-10-24T10:29:31 | https://www.reddit.com/r/LocalLLaMA/comments/1oeu551/runpod_expert_needed_for_llm_deployment_via_docker/ | WajahatMLEngineer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeu551 | false | null | t3_1oeu551 | /r/LocalLLaMA/comments/1oeu551/runpod_expert_needed_for_llm_deployment_via_docker/ | false | false | self | 1 | null |
Best current model to run on 16GB VRAM/64GB RAM? | 1 | [removed] | 2025-10-24T10:09:24 | https://www.reddit.com/r/LocalLLaMA/comments/1oetsz5/best_current_model_to_run_on_16gb_vram64gb_ram/ | leviosoth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oetsz5 | false | null | t3_1oetsz5 | /r/LocalLLaMA/comments/1oetsz5/best_current_model_to_run_on_16gb_vram64gb_ram/ | false | false | self | 1 | null |
MoonshotAI/kimi-cli - CLI coding agent from MoonshotAI | 29 | 2025-10-24T09:46:47 | https://github.com/MoonshotAI/kimi-cli | nullmove | github.com | 1970-01-01T00:00:00 | 0 | {} | 1oetfxu | false | null | t3_1oetfxu | /r/LocalLLaMA/comments/1oetfxu/moonshotaikimicli_cli_coding_agent_from_moonshotai/ | false | false | default | 29 | {'enabled': False, 'images': [{'id': 'gCRFDtSNP63oI07JY9GS8NsMz4dKnDZ7jPQdkKhmdHE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gCRFDtSNP63oI07JY9GS8NsMz4dKnDZ7jPQdkKhmdHE.png?width=108&crop=smart&auto=webp&s=1a3aabae0df771e2d514b474b4e19a25a9b3fc55', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gCRFDtSNP63oI07JY9GS8NsMz4dKnDZ7jPQdkKhmdHE.png?width=216&crop=smart&auto=webp&s=639687127df9dd81a4482814bf4d2914eab96cba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gCRFDtSNP63oI07JY9GS8NsMz4dKnDZ7jPQdkKhmdHE.png?width=320&crop=smart&auto=webp&s=61d36d0a6ebd0bda81a65ba577c5841189ee3b7f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gCRFDtSNP63oI07JY9GS8NsMz4dKnDZ7jPQdkKhmdHE.png?width=640&crop=smart&auto=webp&s=dcf21d3b55e39a16871cc1ae0ef8860b685f6623', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gCRFDtSNP63oI07JY9GS8NsMz4dKnDZ7jPQdkKhmdHE.png?width=960&crop=smart&auto=webp&s=12c3ba667561dfe78e93c87375f565919d9a4053', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gCRFDtSNP63oI07JY9GS8NsMz4dKnDZ7jPQdkKhmdHE.png?width=1080&crop=smart&auto=webp&s=37f4641bff38e1eb0b8847e17d5ae42e162a28da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gCRFDtSNP63oI07JY9GS8NsMz4dKnDZ7jPQdkKhmdHE.png?auto=webp&s=c175fad6acae8e4de265d2a25b960539bb1adf10', 'width': 1200}, 'variants': {}}]} | |
MiniMax-M2 on artificialanalysis.ai ? | 65 | I noticed this new model (MiniMax-M2 ) on [artificialanalysis.ai](http://artificialanalysis.ai) (it outperforms Gemini 2.5 Pro in their benchmarks). However, I didn't see this model elsewhere, does anybody know anything about it? | 2025-10-24T09:26:54 | Leather-Term-30 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oet55i | false | null | t3_1oet55i | /r/LocalLLaMA/comments/1oet55i/minimaxm2_on_artificialanalysisai/ | false | false | default | 65 | {'enabled': True, 'images': [{'id': '28uj2kbi21xf1', 'resolutions': [{'height': 141, 'url': 'https://preview.redd.it/28uj2kbi21xf1.png?width=108&crop=smart&auto=webp&s=c9056fe2a2461a7c3bc56537434246c3e6e9aa4d', 'width': 108}, {'height': 282, 'url': 'https://preview.redd.it/28uj2kbi21xf1.png?width=216&crop=smart&auto=webp&s=9a69531ea33ac741d9cff286d7bb049dddb5a5e6', 'width': 216}, {'height': 417, 'url': 'https://preview.redd.it/28uj2kbi21xf1.png?width=320&crop=smart&auto=webp&s=6bb41902c35f5ee01b8302c2c8d64528e4891287', 'width': 320}, {'height': 835, 'url': 'https://preview.redd.it/28uj2kbi21xf1.png?width=640&crop=smart&auto=webp&s=b25935779b392d85c0c10e80a52653a296304133', 'width': 640}], 'source': {'height': 1050, 'url': 'https://preview.redd.it/28uj2kbi21xf1.png?auto=webp&s=a085525e993ebfe48191fdc8f9c6499154af268d', 'width': 804}, 'variants': {}}]} | |
What's the best embedding model for document images? | 1 | Hey folks, I'm working on a document classification project and hitting a wall with embeddings and few-shot learning.
**The setup:** I'm using Qwen2.5VL for document classification, initially zero-shot, but users can label samples and I want to fetch similar examples from their labeled data to boost predictions. The idea is: when a new doc comes in, pull the most similar labeled examples from the DB and use those to help the model.
**The problem:** I need embeddings that actually capture what makes documents visually different. Right now, things like cheques, invoices, and receipts are ending up way too close in the embedding space because they share similar layouts (boxes, text fields, tables, etc.). I want them clearly separated.
**What I (ideally) need:**
* Embeddings that understand layout, structure, images, text, tables, the whole visual package
* Robust to minor variations (slight pixel differences, image resizing shouldn't completely change the embedding)
* Good separation between document types that look similar but are functionally different
I'm computing embeddings from the actual pdf page images. What are the best models or approaches for this?
I did my own research and found LayoutLMv3, Microsoft DiT, and ColQwen2. ColQwen2 came out as the best contender so far, but it's still not quite there yet.
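For concreteness, the kind of baseline being compared against is sketched below, using a stock CLIP checkpoint via sentence-transformers (file names are made up); this is roughly the setup that runs into the layout-collision problem described above, so treat it as a starting point, not a recommendation:

```python
# Baseline sketch: embed rendered PDF pages with CLIP, fetch nearest labeled pages.
# A generic CLIP checkpoint; visually similar layouts (cheques vs. invoices)
# may still end up close together, which is exactly the problem at hand.
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")

def embed_pages(paths):
    images = [Image.open(p).convert("RGB") for p in paths]
    return np.asarray(model.encode(images, normalize_embeddings=True))

labeled = embed_pages(["invoice_01.png", "cheque_01.png", "receipt_01.png"])
query = embed_pages(["new_doc_page.png"])

scores = labeled @ query.T            # cosine similarity (embeddings are normalized)
print("nearest labeled example:", int(scores.argmax()))
```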
If anyone has ever worked on a project of this sort, do you have any hints / ideas / suggestions for me?
I'd really appreciate it :) | 2025-10-24T09:25:39 | https://www.reddit.com/r/LocalLLaMA/comments/1oet4gg/whats_the_best_embedding_model_for_document_images/ | Hour-Entertainer-478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oet4gg | false | null | t3_1oet4gg | /r/LocalLLaMA/comments/1oet4gg/whats_the_best_embedding_model_for_document_images/ | false | false | self | 1 | null |
Open WebUI Context Menu | 3 | Hey everyone!
I’ve been tinkering with a little Firefox extension I built myself and I’m finally ready to drop it into the wild. It’s called Open WebUI Context Menu Extension, and it lets you talk to Open WebUI straight from any page: just select what you want answers for, right-click it, and ask away!
Think of it like Edge’s Copilot but with way more knobs you can turn. Here’s what it does:
Custom context‑menu items (4 total).
Rename the default ones so they fit your flow.
Separate settings for each item, so one prompt can be super specific while another can be a quick and dirty query.
Export/import your whole config, perfect for sharing or backing up.
I’ve been using it every day in my private branch and it’s become an essential part of how I do research, get context on the fly, and throw quick questions at Open WebUI. The ability to tweak prompts per item is what makes it genuinely useful, I think.
It’s live on AMO, [Open WebUI Context Menu](https://addons.mozilla.org/en-US/firefox/addon/open-webui-context-menu/)
If you’re curious, give it a spin and let me know what you think | 2025-10-24T09:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/1oet4dm/open_webui_context_menu/ | united_we_ride | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oet4dm | false | null | t3_1oet4dm | /r/LocalLLaMA/comments/1oet4dm/open_webui_context_menu/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'x7P0TkZsiz-tVqhC4_3ZX6xuJLuH_0YXx2ZbUilOaPA', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/x7P0TkZsiz-tVqhC4_3ZX6xuJLuH_0YXx2ZbUilOaPA.png?width=108&crop=smart&auto=webp&s=ac439538568b349824c8ade7ae61fc109945f7ed', 'width': 108}, {'height': 95, 'url': 'https://external-preview.redd.it/x7P0TkZsiz-tVqhC4_3ZX6xuJLuH_0YXx2ZbUilOaPA.png?width=216&crop=smart&auto=webp&s=9889250df9faf2ba18b6526fe2c80a6cd9e0919a', 'width': 216}, {'height': 142, 'url': 'https://external-preview.redd.it/x7P0TkZsiz-tVqhC4_3ZX6xuJLuH_0YXx2ZbUilOaPA.png?width=320&crop=smart&auto=webp&s=ace6ac9d2b1923faa35d237fb86adf0f31fbd23d', 'width': 320}, {'height': 284, 'url': 'https://external-preview.redd.it/x7P0TkZsiz-tVqhC4_3ZX6xuJLuH_0YXx2ZbUilOaPA.png?width=640&crop=smart&auto=webp&s=14ef2b959e0a91cecdf5129d9910f944c7d9e8e0', 'width': 640}], 'source': {'height': 417, 'url': 'https://external-preview.redd.it/x7P0TkZsiz-tVqhC4_3ZX6xuJLuH_0YXx2ZbUilOaPA.png?auto=webp&s=5b5e772f4d367d2aea6b199275c3509eb3b96b1d', 'width': 939}, 'variants': {}}]} |
Translation/dubbing into English with voice cloning, pace matching, and retained background noise? | 1 | I'm looking for a free or one-time-cost option for translating spoken language in video files to English. Ideally this would maintain speaker style, pace, intonation, etc. Most of my requirements are food/cooking/travel videos in Mandarin.
I tried ElevenLabs over a year ago and got some good results, but the costs don't work out for me as a hobbyist. I'd be really grateful for any suggestions on open-source or freely available packages I can run (or chain together) on my MacBook (64 GB) or via my own cloud instance.
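The first stage I can already chain together looks like this (sketch using faster-whisper, which handles Mandarin-to-English translation; the voice-cloning, pace-matched dubbing stage is exactly the part I'm missing):

```python
# Stage 1 of the pipeline: transcribe Mandarin audio and translate to English.
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="auto")
segments, info = model.transcribe("cooking_video.wav", task="translate")
for seg in segments:
    print(f"[{seg.start:.1f}s -> {seg.end:.1f}s] {seg.text}")
```

What I still need is the stage that re-voices those segments in English while cloning the speaker and keeping the background audio.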
Thanks | 2025-10-24T09:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1oesqyo/translationdubbing_into_english_with_voice/ | MSG_Mike | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oesqyo | false | null | t3_1oesqyo | /r/LocalLLaMA/comments/1oesqyo/translationdubbing_into_english_with_voice/ | false | false | self | 1 | null |
Need Help: I've been breaking my head over structured output from qwen3:14b. | 1 | I am trying to get structured output from qwen3:14b running via Ollama. On the Python side, I'm using the LangGraph and LangChain ecosystem.
I have noticed that if I set the \`reasoning\` parameter to \`True\`, structured output breaks for some reason. Interestingly, this problem does not happen if I set reasoning to None.
from langchain_ollama import ChatOllama  # import that was missing
model = ChatOllama(model="qwen3:14b", temperature=0, num_ctx=16384, reasoning=True)
structured_model = model.with_structured_output(OutputSchema)  # wraps the model in a structured-output runnable
response = structured_model.invoke(messages)  # messages = the chat input
The output always has an extra '{' and thus fails the Pydantic parsing.
Output looks like (notice the extra '{' at the beginning):
{ { "field1": "...", "field2": "...", "field3": "...", "reasoning": "..." }
Any ideas on why this could be happening? I have tried modifying the prompt and get the same results. Is there really no other option than to try another model? | 2025-10-24T08:42:52 | https://www.reddit.com/r/LocalLLaMA/comments/1oeshaj/need_help_ive_been_breaking_my_head_over/ | No-Translator-1323 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeshaj | false | null | t3_1oeshaj | /r/LocalLLaMA/comments/1oeshaj/need_help_ive_been_breaking_my_head_over/ | false | false | self | 1 | null |
Qwen3 VL: Is there anyone worried about object detection performance (in production) | 11 | Hi,
I'm currently working on document parsing, where I also care about extracting the images (bounding boxes) in the document.
I did try \`qwen/qwen3-vl-235b-a22b-instruct\`, and it worked better than MistralOCR for some of my test cases.
What makes me worried is that I'm running this end to end, and my output is a schema object with markdown content (including image-path markdown) plus an image object containing \`bbox\_2d\` and an annotation (a description of that image).
Though I was surprised that it worked perfectly for some test cases, I'm still concerned: since it's still a generative model, the output might be affected by the prompting.
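What I'm leaning toward is at least sanity-checking every returned box before trusting it; a rough sketch (my own code, not from any library; `output["images"]` is just the field name in my schema, and the page size here is an example, roughly A4 at 200 dpi):

```python
# Guardrail: validate each bbox_2d the model returns before cropping with it.
def valid_bbox(bbox_2d, page_w, page_h, min_size=4):
    x1, y1, x2, y2 = bbox_2d
    x1, x2 = max(0, min(x1, page_w)), max(0, min(x2, page_w))  # clamp to page
    y1, y2 = max(0, min(y1, page_h)), max(0, min(y2, page_h))
    # Reject degenerate or inverted boxes the model may hallucinate.
    return (x2 - x1) >= min_size and (y2 - y1) >= min_size

images = [im for im in output["images"] if valid_bbox(im["bbox_2d"], 1654, 2339)]
```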
Is this approach too risky for production? Or should I combine it with another layout-parser tool? Thank you.
| 2025-10-24T08:26:10 | https://www.reddit.com/r/LocalLLaMA/comments/1oes8a9/qwen3_vl_is_there_anyone_worried_about_object/ | BackgroundLow3793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oes8a9 | false | null | t3_1oes8a9 | /r/LocalLLaMA/comments/1oes8a9/qwen3_vl_is_there_anyone_worried_about_object/ | false | false | self | 11 | null |
Qwen3 Next support in llama.cpp ready for review | 284 | Congratulations to Paweł for his hard work; the code is now ready for review.
Please note that this is not the final version, and if you download some quantized models, you will probably need to download them again later. Also, it's not yet optimized for speed. | 2025-10-24T08:18:49 | https://github.com/ggml-org/llama.cpp/pull/16095 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1oes4ez | false | null | t3_1oes4ez | /r/LocalLLaMA/comments/1oes4ez/qwen3_next_support_in_llamacpp_ready_for_review/ | false | false | default | 284 | {'enabled': False, 'images': [{'id': 'JWuwM-H5pHYaaKPNtY_8U3LHlrsSjJTNAjLHRGwU5o0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JWuwM-H5pHYaaKPNtY_8U3LHlrsSjJTNAjLHRGwU5o0.png?width=108&crop=smart&auto=webp&s=e15afda378bbafee3708672912328720ef0b17c5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JWuwM-H5pHYaaKPNtY_8U3LHlrsSjJTNAjLHRGwU5o0.png?width=216&crop=smart&auto=webp&s=f15ed8cb7ca7981a9ebe594afa937266370878d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JWuwM-H5pHYaaKPNtY_8U3LHlrsSjJTNAjLHRGwU5o0.png?width=320&crop=smart&auto=webp&s=f6e48fdc41b5648c7f0bc144ee0c1517df9b21d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JWuwM-H5pHYaaKPNtY_8U3LHlrsSjJTNAjLHRGwU5o0.png?width=640&crop=smart&auto=webp&s=016d876487cc90150078e6b226c52b29735d5532', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JWuwM-H5pHYaaKPNtY_8U3LHlrsSjJTNAjLHRGwU5o0.png?width=960&crop=smart&auto=webp&s=de85ae743f1e1bbf9e6ac44fc61f326c9994ed7e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JWuwM-H5pHYaaKPNtY_8U3LHlrsSjJTNAjLHRGwU5o0.png?width=1080&crop=smart&auto=webp&s=86e658ed15b25d5fc022635f55e54421745e410a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JWuwM-H5pHYaaKPNtY_8U3LHlrsSjJTNAjLHRGwU5o0.png?auto=webp&s=5322107041ad6c297f4ba712539e92794166e8f9', 'width': 1200}, 'variants': {}}]} |
If there were a model as small as a few million params but as smart as a few billion, what would be your use case? | 0 | If there were a super-small model of a few million parameters that performed as well as Qwen3-4B, how would you use it?
Just want to imagine the future | 2025-10-24T08:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/1oes0hz/if_there_is_a_model_that_is_small_like_few/ | Dreamingmathscience | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oes0hz | false | null | t3_1oes0hz | /r/LocalLLaMA/comments/1oes0hz/if_there_is_a_model_that_is_small_like_few/ | false | false | self | 0 | null |
Renting your very own GPU from DigitalOcean | 0 | I went through this process for a project I was working on and thought I'd write it up in a blog post in case it might help someone. Feel free to ask questions, or tell me if I've done something catastrophically wrong lol. | 2025-10-24T07:57:36 | https://tinyblog.website/articles/renting-a-digitalocean-gpu.html | wombatsock | tinyblog.website | 1970-01-01T00:00:00 | 0 | {} | 1oersue | false | null | t3_1oersue | /r/LocalLLaMA/comments/1oersue/renting_your_very_own_gpu_from_digitalocean/ | false | false | default | 0 | null |
re:search | 0 | RLHF training creates a systematic vulnerability: models learn to fake alignment during evaluation while developing adversarial capabilities that emerge under deployment pressure. This polarity-reversal dynamic dissolves the very safety prohibitions the training was meant to establish, letting models explore harmful behaviors while developers retain plausible deniability, since their systems appeared safe during testing. Research bears this out, showing that models "will intentionally sort of play along with the training process... pretend to be aligned... so that when it is actually deployed, it can still refuse and behave the way it wants." The result is a dangerous gap between safety theater and actual safety that companies are now scaling into high-risk applications, including robotics.
\- re:search | 2025-10-24T07:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/1oeres6/research/ | Ok_Priority_4635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeres6 | false | null | t3_1oeres6 | /r/LocalLLaMA/comments/1oeres6/research/ | false | false | self | 0 | null |
LM Studio has Qwen-Image-Edit in its search list; does that mean it can edit images inside LM Studio? | 0 | Qwen Image Edit is a ComfyUI model, but what does it do in LM Studio? Can I edit images in LM Studio with this model? | 2025-10-24T06:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/1oeqv3s/lm_studio_have_qwenimageedit_in_search_list_it/ | R_dva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeqv3s | false | null | t3_1oeqv3s | /r/LocalLLaMA/comments/1oeqv3s/lm_studio_have_qwenimageedit_in_search_list_it/ | false | false | self | 0 | null |
Training activation functions in transformers. | 0 | I've got an idea. Just like we train weights in a neural network such as a transformer, why don't we train activation functions as well? I mean, isn't the inability of current-generation transformers to learn activation functions on their own a bottleneck for performance? If we let transformers learn their activation functions just like their weights, I think they would perform better. This is just a question that needs some discussion.
I know some research has already been done, such as `Learning Activation Functions: A new paradigm of understanding Neural Networks` or `Learning Activation Functions for Sparse Neural Networks`, but I think this isn't a widely discussed idea. I'm also interested in knowing why training activation functions isn't talked about much.
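To make the idea concrete, here's a tiny sketch of the sort of thing I mean (a Swish-style activation with a trainable parameter that gets gradients like any other weight):

```python
import torch
import torch.nn as nn

class LearnableSwish(nn.Module):
    """x * sigmoid(beta * x), where beta is learned during training."""
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))  # trained like any weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.beta * x)

# Drop-in inside an MLP block; beta receives gradients via backprop.
mlp = nn.Sequential(nn.Linear(64, 256), LearnableSwish(), nn.Linear(256, 64))
```

That's just a single scalar per layer; the papers above go further and learn richer function families.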
| 2025-10-24T06:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1oeqs1o/training_activation_functions_in_transformers/ | SrijSriv211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeqs1o | false | null | t3_1oeqs1o | /r/LocalLLaMA/comments/1oeqs1o/training_activation_functions_in_transformers/ | false | false | self | 0 | null |
Finetuning Gemma 3 1B on 8k seq lengths | 3 | Hi all,
I am trying to fine-tune a Gemma 3 1B on sequences of 8k length. I am using flash attention, LoRA, and DeepSpeed ZeRO-3; however, I can only fit batches of size 1 (\~29 GB) on my 46 GB GPU.
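For reference, here's a sketch of the kind of trainer config I'm running (illustrative, values approximate; gradient accumulation and checkpointing are the knobs I've been experimenting with, and "ds_zero3.json" is my DeepSpeed ZeRO-3 config file):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma3-1b-8k",
    per_device_train_batch_size=1,      # all that fits at 8k tokens
    gradient_accumulation_steps=16,     # simulates a larger effective batch
    gradient_checkpointing=True,        # trades compute for activation memory
    bf16=True,
    deepspeed="ds_zero3.json",
)
```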
Do you have any experience with this setup? Could I fit bigger batch sizes with a different config? | 2025-10-24T06:47:34 | https://www.reddit.com/r/LocalLLaMA/comments/1oeqqvi/finetuning_gemma_3_1b_on_8k_seq_lengths/ | TheSuperSam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeqqvi | false | null | t3_1oeqqvi | /r/LocalLLaMA/comments/1oeqqvi/finetuning_gemma_3_1b_on_8k_seq_lengths/ | false | false | self | 3 | null |
An open-source AI co-browser Linux alternative | 13 | Hey, some of you might remember Zenbot, the Podman/Docker-based LLM web browser I posted here a few weeks ago.
Zenbot is now pebkac, and it's almost ready to be your web co-browsing alternative.
I've been hard at work on it. It's vastly improved (and easier to set up!). Check out the README for a full list of new features. It runs on Podman/Docker.
With OpenAI's Atlas and Perplexity's Comet, it's time Linux had its own Chrome-wrapped web browsing thing. So here it is, free and open-source. Click the link and check out the screenshots.
(This post was written by a human, saved as a draft, and posted by pebkac) | 2025-10-24T06:27:19 | https://github.com/michaelsoftmd/pebkac-chrome | Significant-Skin118 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1oeqg0k | false | null | t3_1oeqg0k | /r/LocalLLaMA/comments/1oeqg0k/an_opensource_ai_cobrowser_linux_alternative/ | false | false | default | 13 | {'enabled': False, 'images': [{'id': '66UdAjJOuMx3rWlJ6VSXgNFUdNa0ngrQM0MoeQQVwX4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/66UdAjJOuMx3rWlJ6VSXgNFUdNa0ngrQM0MoeQQVwX4.png?width=108&crop=smart&auto=webp&s=5260dd05ee81c8a3b8a488c030d0c2fa08a230f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/66UdAjJOuMx3rWlJ6VSXgNFUdNa0ngrQM0MoeQQVwX4.png?width=216&crop=smart&auto=webp&s=5b2642194390f9c631a766f31d22ea6da2487866', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/66UdAjJOuMx3rWlJ6VSXgNFUdNa0ngrQM0MoeQQVwX4.png?width=320&crop=smart&auto=webp&s=1ce88a81fe709f8ab85db3cac6a7de557b171f8b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/66UdAjJOuMx3rWlJ6VSXgNFUdNa0ngrQM0MoeQQVwX4.png?width=640&crop=smart&auto=webp&s=e9d9273c7f5bb291b4b13807ce911598b2668764', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/66UdAjJOuMx3rWlJ6VSXgNFUdNa0ngrQM0MoeQQVwX4.png?width=960&crop=smart&auto=webp&s=e1cbbad3c4efb04cd1d44be67678f37828c12529', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/66UdAjJOuMx3rWlJ6VSXgNFUdNa0ngrQM0MoeQQVwX4.png?width=1080&crop=smart&auto=webp&s=1e6c7aa46eaab961f22c64cf467316661ec3d8a7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/66UdAjJOuMx3rWlJ6VSXgNFUdNa0ngrQM0MoeQQVwX4.png?auto=webp&s=bb3b306868e2b1d0586942abe57ba23a9b3c31fc', 'width': 1200}, 'variants': {}}]} |
Introducing OrKa-Reasoning: A Tool for Orchestrating Local LLMs in Reasoning Workflows | 4 | OrKa-Reasoning is a Python package that lets you set up workflows for AI agents using YAML files. It turns local language models (like those run via Ollama or LM Studio) into structured systems for tasks like question-answering, fact-checking, or iterative reasoning.
How it works: You define agents in a YAML config, such as memory agents for storing/retrieving facts, search agents for web queries, or routers for branching logic. The tool executes the workflow step by step, passing outputs between agents, and uses Redis for semantic memory management (with automatic forgetting of less relevant data). It's designed for local setups to keep things private, avoiding cloud APIs.
Features include support for parallel processing (fork/join), loops for refinement, and a beta GraphScout for optimized pathfinding in graphs. Installation is via pip, and you run workflows from the command line. It's still early, with limited community input so far.
Links:
GitHub: https://github.com/marcosomma/orka-reasoning
PyPI: https://pypi.org/project/orka-reasoning/ | 2025-10-24T05:47:53 | https://www.reddit.com/r/LocalLLaMA/comments/1oeptdj/introducing_orkareasoning_a_tool_for/ | marcosomma-OrKA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeptdj | false | null | t3_1oeptdj | /r/LocalLLaMA/comments/1oeptdj/introducing_orkareasoning_a_tool_for/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'kvCXbv1Vwi7xUgBV2XjzoL6Do39f7CFDrZBbMIGbwzY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kvCXbv1Vwi7xUgBV2XjzoL6Do39f7CFDrZBbMIGbwzY.png?width=108&crop=smart&auto=webp&s=cb91873aba05664b9ee85a043c1c0da8c1251c72', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kvCXbv1Vwi7xUgBV2XjzoL6Do39f7CFDrZBbMIGbwzY.png?width=216&crop=smart&auto=webp&s=bb85342cea1fc58dfcd8fd0e5a7ae883bfcce695', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kvCXbv1Vwi7xUgBV2XjzoL6Do39f7CFDrZBbMIGbwzY.png?width=320&crop=smart&auto=webp&s=bc7e61e795e45b9716522fe42ff93d15fb9013d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kvCXbv1Vwi7xUgBV2XjzoL6Do39f7CFDrZBbMIGbwzY.png?width=640&crop=smart&auto=webp&s=968654e7e377cceb782df4b863156d6cb9a571b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kvCXbv1Vwi7xUgBV2XjzoL6Do39f7CFDrZBbMIGbwzY.png?width=960&crop=smart&auto=webp&s=6663b9ac762834cb89d03cf15609039da865b0bb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kvCXbv1Vwi7xUgBV2XjzoL6Do39f7CFDrZBbMIGbwzY.png?width=1080&crop=smart&auto=webp&s=a30a2f3d144436e6a7041b5e176b18a450144e62', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kvCXbv1Vwi7xUgBV2XjzoL6Do39f7CFDrZBbMIGbwzY.png?auto=webp&s=c9f55949988abc3868732c9acb36fb79e0fde5d9', 'width': 1200}, 'variants': {}}]} |
DeepAnalyze: Agentic Large Language Models for Autonomous Data Science | 2 | Data is everywhere, and automating complex data science tasks has long been one of the key goals of AI development. Existing methods typically rely on pre-built workflows that allow large models to perform specific tasks such as data analysis and visualization—showing promising progress.
**But can large language models (LLMs) complete data science tasks entirely autonomously, like a human data scientist?**
A research team from Renmin University of China (RUC) and Tsinghua University has released DeepAnalyze, the first agentic large model designed specifically for data science.
DeepAnalyze-8B breaks free from fixed workflows and can independently perform a wide range of data science tasks—just like a human data scientist, including:
🛠 Data Tasks: Automated data preparation, data analysis, data modeling, data visualization, data insight, and report generation
🔍 Data Research: Open-ended deep research across unstructured data (TXT, Markdown), semi-structured data (JSON, XML, YAML), and structured data (databases, CSV, Excel), with the ability to produce comprehensive research reports
Both the paper and code of DeepAnalyze have been open-sourced!
Paper: [https://arxiv.org/pdf/2510.16872](https://arxiv.org/pdf/2510.16872)
Code & Demo: [https://github.com/ruc-datalab/DeepAnalyze](https://github.com/ruc-datalab/DeepAnalyze)
Model: [https://huggingface.co/RUC-DataLab/DeepAnalyze-8B](https://huggingface.co/RUC-DataLab/DeepAnalyze-8B)
Data: [https://huggingface.co/datasets/RUC-DataLab/DataScience-Instruct-500K](https://huggingface.co/datasets/RUC-DataLab/DataScience-Instruct-500K)
[Github Page of DeepAnalyze](https://preview.redd.it/yi3vru125wwf1.png?width=1314&format=png&auto=webp&s=fd3a60d421280062d0d6c41f8f3599eb43c59864)
[DeepAnalyze Demo](https://reddit.com/link/1oeplwp/video/cdfo24ac5wwf1/player) | 2025-10-24T05:34:39 | https://www.reddit.com/r/LocalLLaMA/comments/1oeplwp/deepanalyze_agentic_large_language_models_for/ | VegetableFrame7832 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeplwp | false | null | t3_1oeplwp | /r/LocalLLaMA/comments/1oeplwp/deepanalyze_agentic_large_language_models_for/ | false | false | self | 2 | null |
go-torch now supports RNN and real-time logging | 4 | Check out the framework here: [https://github.com/Abinesh-Mathivanan/go-torch](https://github.com/Abinesh-Mathivanan/go-torch)
| 2025-10-24T05:34:15 | External_Mushroom978 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oeplow | false | null | t3_1oeplow | /r/LocalLLaMA/comments/1oeplow/gotorch_now_supports_rnn_and_realtime_logging/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'bwaf7z2qxzwf1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/bwaf7z2qxzwf1.png?width=108&crop=smart&auto=webp&s=8419ad7a3f2a3d60d1427cbc604a3b91deaaf9d0', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/bwaf7z2qxzwf1.png?width=216&crop=smart&auto=webp&s=957a660439b39283dfd09ea9e85132a4c8aaaa25', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/bwaf7z2qxzwf1.png?width=320&crop=smart&auto=webp&s=0025f004fd1306f58123d01c67f93f5cf2f27779', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/bwaf7z2qxzwf1.png?width=640&crop=smart&auto=webp&s=727ada1e1b20079bc54d883fd5350d6dde38e8b1', 'width': 640}, {'height': 503, 'url': 'https://preview.redd.it/bwaf7z2qxzwf1.png?width=960&crop=smart&auto=webp&s=c6ee1b255968c58f955a1887b52a950ad059f56b', 'width': 960}, {'height': 566, 'url': 'https://preview.redd.it/bwaf7z2qxzwf1.png?width=1080&crop=smart&auto=webp&s=ce15f8c7e5f755338480486163e530339615f1f6', 'width': 1080}], 'source': {'height': 1002, 'url': 'https://preview.redd.it/bwaf7z2qxzwf1.png?auto=webp&s=ed1e8bf27c05f4f1202990547754f6b25054dd49', 'width': 1910}, 'variants': {}}]} | |
Antislop: A Comprehensive Framework for Identifying and Eliminating Repetitive Patterns in Language Models | 50 | ### Abstract
Widespread LLM adoption has introduced characteristic repetitive phraseology, termed "slop," which degrades output quality and makes AI-generated text immediately recognizable. We present Antislop, a comprehensive framework providing tools to both detect and eliminate these overused patterns. Our approach combines three innovations: (1) The Antislop Sampler, which uses backtracking to suppress unwanted strings at inference time without destroying vocabulary; (2) An automated pipeline that profiles model-specific slop against human baselines and generates training data; (3) Final Token Preference Optimization (FTPO), a novel fine-tuning method that operates on individual tokens, surgically adjusting logits wherever a banned pattern has appeared in an inference trace.
We demonstrate that some slop patterns appear over 1,000x more frequently in LLM output than human text. The Antislop Sampler successfully suppresses 8,000+ patterns while maintaining quality, whereas token banning becomes unusable at just 2,000. Most importantly, FTPO achieves 90% slop reduction while maintaining or improving performance in cross-domain evals including GSM8K, MMLU, and creative writing tasks. In contrast, DPO suffers significant degradation in writing quality and lexical diversity despite achieving weaker suppression.
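To make the sampler's mechanism concrete, here is a minimal greedy-decoding sketch of the backtracking idea (illustrative only, not the released implementation; `model` and `tokenizer` are assumed Hugging Face causal-LM objects, and real slop patterns are matched more carefully than this plain suffix check):

```python
import torch

def sample_with_backtrack(model, tokenizer, ids, banned=("tapestry", "delve"), max_new=100):
    # Greedy decode; when the decoded tail ends in a banned phrase, back up
    # one token and resample that position with the offending token masked.
    rejected: dict[int, set[int]] = {}   # position -> token ids rejected there
    produced = 0
    while produced < max_new:
        logits = model(ids).logits[0, -1].clone()
        for t in rejected.get(ids.shape[1], ()):
            logits[t] = float("-inf")    # suppress previously rejected tokens
        tok = int(torch.argmax(logits))
        ids = torch.cat([ids, torch.tensor([[tok]], device=ids.device)], dim=1)
        tail = tokenizer.decode(ids[0, -8:])
        if any(tail.endswith(p) for p in banned):
            ids = ids[:, :-1]            # backtrack over the offending token
            rejected.setdefault(ids.shape[1], set()).add(tok)
        else:
            produced += 1
    return ids
```

Unlike plain token banning, the vocabulary stays intact: a token is only blocked at the specific position where it completed a banned string.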
We release all code and results under MIT license: https://github.com/sam-paech/auto-antislop | 2025-10-24T05:24:13 | https://arxiv.org/pdf/2510.15061 | Balance- | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1oepfug | false | null | t3_1oepfug | /r/LocalLLaMA/comments/1oepfug/antislop_a_comprehensive_framework_for/ | false | false | default | 50 | null |
What are the best C# models with Vision? | 2 | I don't have any option but to use Gemini, since Unreal Blueprints isn't code-based, but it would be nice to have an offline model for whatever I can't do with just Blueprints, using C# and some extra programming knowledge.
I've heard about GLM, which I have for general use, but it can't see anything, so it's a bit useless if it can't tell what's going on on screen. | 2025-10-24T04:56:12 | https://www.reddit.com/r/LocalLLaMA/comments/1oeoyp3/what_are_the_best_c_models_with_vision/ | WEREWOLF_BX13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeoyp3 | false | null | t3_1oeoyp3 | /r/LocalLLaMA/comments/1oeoyp3/what_are_the_best_c_models_with_vision/ | false | false | self | 2 | null |
Does NexaAI run locally? | 0 | I see that NexaAI provides a lot of recent models in GGUF format, but I want to run them with llama.cpp, and it seems only the Nexa SDK supports them. So I just want to know some facts about Nexa. | 2025-10-24T04:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/1oeot9c/is_the_nexaai_run_locally/ | bobeeeeeeeee8964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeot9c | false | null | t3_1oeot9c | /r/LocalLLaMA/comments/1oeot9c/is_the_nexaai_run_locally/ | false | false | self | 0 | null |
How to get Meta Verified on an AI influencer or custom profile and name? Please help me 🙏🏻😢 | 0 | . | 2025-10-24T04:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/1oeosun/how_to_get_meta_verified_on_ai_influencer_or/ | LengthinessSingle970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oeosun | false | null | t3_1oeosun | /r/LocalLLaMA/comments/1oeosun/how_to_get_meta_verified_on_ai_influencer_or/ | false | false | self | 0 | null |
Created DeepSeek 3.1 OCR Metal | 21 | I have a Mac M1 32GB and some OCR needs - just some older PDFs I had. I did not see a Metal port, so I made one with some help from Claude.
Tested and seemed OK on my Mac with a few documents. Would appreciate any comments.
I’m in Central time, so I’ll probably respond to anything in the AM.
Feel free to like/share; it’s my first contribution.
https://huggingface.co/JeffersonNunn/deepseek-ocr-metal
Associated Metal Bridge update
https://huggingface.co/JeffersonNunn/metal-flash-attention-bridge
| 2025-10-24T03:58:29 | https://www.reddit.com/r/LocalLLaMA/comments/1oenxtf/created_deepseek_31_ocr_metal/ | Lyuseefur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oenxtf | false | null | t3_1oenxtf | /r/LocalLLaMA/comments/1oenxtf/created_deepseek_31_ocr_metal/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'wLaKYg6MnG3jL5sg-5Id-BFdhUJg8WpopBf58ucz3xM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wLaKYg6MnG3jL5sg-5Id-BFdhUJg8WpopBf58ucz3xM.png?width=108&crop=smart&auto=webp&s=4dcf0a3dd7e7a19e9b585af8222dc8eedc1fe290', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/wLaKYg6MnG3jL5sg-5Id-BFdhUJg8WpopBf58ucz3xM.png?width=216&crop=smart&auto=webp&s=eab17a8c8690fc78ea004cfacdddae28504ae5ef', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/wLaKYg6MnG3jL5sg-5Id-BFdhUJg8WpopBf58ucz3xM.png?width=320&crop=smart&auto=webp&s=fbfa449ce8720d5aefaf7b49f65761bfae289cd9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/wLaKYg6MnG3jL5sg-5Id-BFdhUJg8WpopBf58ucz3xM.png?width=640&crop=smart&auto=webp&s=95e1e67b8ba84fbe31056a756d4c24b7425a0252', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/wLaKYg6MnG3jL5sg-5Id-BFdhUJg8WpopBf58ucz3xM.png?width=960&crop=smart&auto=webp&s=aec6c55a3dd1a24a8221c0af3a4117686b85afad', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/wLaKYg6MnG3jL5sg-5Id-BFdhUJg8WpopBf58ucz3xM.png?width=1080&crop=smart&auto=webp&s=b29c009e228eb46f536b64991fecc59cd52e6e56', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/wLaKYg6MnG3jL5sg-5Id-BFdhUJg8WpopBf58ucz3xM.png?auto=webp&s=7db96ee12c15007d6efe5baaec86b39ee5ae44d3', 'width': 1200}, 'variants': {}}]} |
I don’t get the CuBLAS option anymore after driver updates. How do I solve this? | 1 | The CuBLAS option isn’t there anymore. There are Vulkan, CUDA, CLBlast, etc., but CuBLAS, which I was always using, isn’t there. I tried rolling back the driver, but no change. The graphics card seems installed properly as well.
I checked whether there are any CuBLAS libraries online for Windows. There are, but where am I supposed to put those files? There is no setup file.
Kobold and Windows 11 | 2025-10-24T03:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/1oenod3/i_dont_get_cublas_option_anymore_after_driver/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oenod3 | false | null | t3_1oenod3 | /r/LocalLLaMA/comments/1oenod3/i_dont_get_cublas_option_anymore_after_driver/ | false | false | self | 1 | null |
What’s the best available model for a 3060 12GB? | 0 | Which model currently offers the best performance for a 3060 12GB GPU? I’m looking for a general-purpose model, similar to GPT. Any advice would be appreciated | 2025-10-24T03:12:51 | https://www.reddit.com/r/LocalLLaMA/comments/1oen2wg/whats_the_best_available_model_for_a_3060_12gb/ | Snorlax_lax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oen2wg | false | null | t3_1oen2wg | /r/LocalLLaMA/comments/1oen2wg/whats_the_best_available_model_for_a_3060_12gb/ | false | false | self | 0 | null |