column      dtype               min                   max
title       stringlengths       1                     300
score       int64               0                     8.54k
selftext    stringlengths       0                     41.5k
created     timestamp[ns]date   2023-04-01 04:30:41   2026-03-04 02:14:14
url         stringlengths       0                     878
author      stringlengths       3                     20
domain      stringlengths       0                     82
edited      timestamp[ns]date   1970-01-01 00:00:00   2026-02-19 14:51:53
gilded      int64               0                     2
gildings    stringclasses       7 values
id          stringlengths       7                     7
locked      bool                2 classes
media       stringlengths       646                   1.8k
name        stringlengths       10                    10
permalink   stringlengths       33                    82
spoiler     bool                2 classes
stickied    bool                2 classes
thumbnail   stringlengths       4                     213
ups         int64               0                     8.54k
preview     stringlengths       301                   5.01k
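The column constraints above (dtypes and min/max string lengths) can be spot-checked programmatically once the rows are in a DataFrame. A minimal sketch, assuming pandas is available; the sample row is hypothetical and only illustrates the field shapes, it is not taken from the dataset:

```python
import pandas as pd

# Hypothetical sample row matching the schema above; values are
# illustrative, not real dataset content.
df = pd.DataFrame([{
    "title": "Example post",
    "score": 42,
    "selftext": "Body text",
    "created": pd.Timestamp("2025-07-10T20:00:00"),
    "author": "example_user",
    "id": "1lwabcd",       # schema: id is always exactly 7 chars
    "name": "t3_1lwabcd",  # schema: name is always 10 chars ("t3_" + id)
    "locked": False,
}])

# Spot-check a few constraints listed in the schema.
assert df["title"].str.len().between(1, 300).all()
assert df["id"].str.len().eq(7).all()
assert df["name"].str.len().eq(10).all()
assert df["score"].between(0, 8540).all()
assert df["locked"].dtype == bool
```

Checks like these are useful before training or filtering, since a row that violates the advertised bounds usually signals a scraping or parsing error.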
Test
1
[deleted]
2025-07-10T22:15:51
[deleted]
1970-01-01T00:00:00
0
{}
1lwpxoh
false
null
t3_1lwpxoh
/r/LocalLLaMA/comments/1lwpxoh/test/
false
false
default
1
null
Need advice on how to improve Handwritten Text Recognition of names using Vision models (for academic research purposes)
2
Dear LocalLLaMA, I have been a member since the very beginning (ish) and have learned so much from many of you. I’m hoping to get some more specific advice on the best vision-language models for extracting cursive handwritten text from a large set of documents (about 400 000 images). I have access to A40, A100, and V1...
2025-07-10T21:57:58
https://www.reddit.com/r/LocalLLaMA/comments/1lwpi5p/need_advice_on_how_to_improve_handwritten_text/
joosefm9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwpi5p
false
null
t3_1lwpi5p
/r/LocalLLaMA/comments/1lwpi5p/need_advice_on_how_to_improve_handwritten_text/
false
false
self
2
{'enabled': False, 'images': [{'id': 'FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=108&crop=smart&auto=webp&s=0d0bf812fba94f9f50669a2e76037d0e7886bde2', 'width': 108}, {'height': 116, 'url': 'h...
Building a domain specific dataset
3
Hello everyone, my goal is to finetune an embedding model for the domain of railway engineering. I was wondering if you had recommendation. I tried a few things and I am now considering the option of filtering fine web using keywords.
2025-07-10T21:56:58
https://www.reddit.com/r/LocalLLaMA/comments/1lwphbh/building_a_domain_specific_dataset/
T2WIN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwphbh
false
null
t3_1lwphbh
/r/LocalLLaMA/comments/1lwphbh/building_a_domain_specific_dataset/
false
false
self
3
null
Claude Sonnet 4 vs. GPT 4.5 for programming (Agent feature)?
0
Which one would be your go-to for Python?
2025-07-10T21:53:01
https://www.reddit.com/r/LocalLLaMA/comments/1lwpe0f/claude_sonnet_4_vs_gpt_45_for_programming_agent/
NoVersion6010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwpe0f
false
null
t3_1lwpe0f
/r/LocalLLaMA/comments/1lwpe0f/claude_sonnet_4_vs_gpt_45_for_programming_agent/
false
false
self
0
null
Why was my post taken down?
1
[removed]
2025-07-10T21:47:02
https://i.redd.it/ihke9x4pa4cf1.jpeg
DontPlanToEnd
i.redd.it
1970-01-01T00:00:00
0
{}
1lwp8ut
false
null
t3_1lwp8ut
/r/LocalLLaMA/comments/1lwp8ut/why_was_my_post_taken_down/
false
false
default
1
{'enabled': True, 'images': [{'id': 'ihke9x4pa4cf1', 'resolutions': [{'height': 159, 'url': 'https://preview.redd.it/ihke9x4pa4cf1.jpeg?width=108&crop=smart&auto=webp&s=b3d5823f1c15ed02f0cc4d749eebc178810d206a', 'width': 108}, {'height': 319, 'url': 'https://preview.redd.it/ihke9x4pa4cf1.jpeg?width=216&crop=smart&auto=...
Is there some localllm benchmarking tool to see how well your system will handle a model?
0
Or some website that will let me know for a rtx 4090, 32gb ram, what the performance of deepseek-r1 will be? Thanks, i don't know where to start. I have an rtx 4080s (16gb graphics ram) with 64gb ram on a 13700k...
2025-07-10T21:45:49
https://www.reddit.com/r/LocalLLaMA/comments/1lwp7tv/is_there_some_localllm_benchmarking_tool_to_see/
blackashi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwp7tv
false
null
t3_1lwp7tv
/r/LocalLLaMA/comments/1lwp7tv/is_there_some_localllm_benchmarking_tool_to_see/
false
false
self
0
null
Pls help with JanitorAI😭
0
Hi everyone. I'm a total newbie when it comes to all this neural network stuff and so on. But... I want my damn roleplay with my favorite characters! Since Chutes is no longer free and convenient in some situations, I wanted to ask — is there any alternative to Chutes? What exactly does it do, and how can it be replac...
2025-07-10T21:45:22
https://www.reddit.com/r/LocalLLaMA/comments/1lwp7e5/pls_help_with_janitorai/
Weekly_Fan2116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwp7e5
false
null
t3_1lwp7e5
/r/LocalLLaMA/comments/1lwp7e5/pls_help_with_janitorai/
false
false
nsfw
0
null
Can AI assist with 3d mapping?
0
Just wondering if anyone has used a model to help map 3d meshes for expected countours/shapes/colors. Ie. Map 90% of a vase but you are unable to scan the bottom/back. AI could assist in finding the real world match and correcting. In theory you could use video feed of a 3d space to map with actual modelled objec...
2025-07-10T20:53:22
https://www.reddit.com/r/LocalLLaMA/comments/1lwnxhz/can_ai_assist_with_3d_mapping/
Bohdanowicz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwnxhz
false
null
t3_1lwnxhz
/r/LocalLLaMA/comments/1lwnxhz/can_ai_assist_with_3d_mapping/
false
false
self
0
null
Performance benchmarks on DeepSeek V3-0324/R1-0528/TNG-R1T2-Chimera on consumer CPU (7800X3D, 192GB RAM at 6000Mhz) and 208GB VRAM (5090x2/4090x2/3090x2/A6000) on ikllamacpp! From 3bpw (Q2_K_XL) to 4.2 bpw (IQ4_XS)
69
Hi there guys, hope you're having a good day! After latest improvements on ik llamacpp, [https://github.com/ikawrakow/ik\_llama.cpp/commits/main/](https://github.com/ikawrakow/ik_llama.cpp/commits/main/), I have found that DeepSeek MoE models runs noticeably faster than llamacpp, at the point that I get about half PP ...
2025-07-10T20:37:31
https://www.reddit.com/r/LocalLLaMA/comments/1lwnj5x/performance_benchmarks_on_deepseek/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwnj5x
false
null
t3_1lwnj5x
/r/LocalLLaMA/comments/1lwnj5x/performance_benchmarks_on_deepseek/
false
false
https://b.thumbs.redditm…9fOYvDpwJaPg.jpg
69
null
Workflows aren’t a weakness in AI agents, they’re why they work
14
Some people think AI agents are hype and glorified workflows. But agents that actually work don’t try to be JARVIS, not yet. The ones that succeed stick to structured workflows. And that’s not a bad thing. When I was in school, we studied Little Computer 3 to understand how computer architecture starts with state mach...
2025-07-10T20:37:01
https://www.reddit.com/r/LocalLLaMA/comments/1lwniq0/workflows_arent_a_weakness_in_ai_agents_theyre/
Main-Fisherman-2075
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwniq0
false
null
t3_1lwniq0
/r/LocalLLaMA/comments/1lwniq0/workflows_arent_a_weakness_in_ai_agents_theyre/
false
false
self
14
null
Workflows aren’t a weakness in AI agents, they’re why they work
1
[removed]
2025-07-10T20:35:49
https://www.reddit.com/r/LocalLLaMA/comments/1lwnhmu/workflows_arent_a_weakness_in_ai_agents_theyre/
Main-Fisherman-2075
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwnhmu
false
null
t3_1lwnhmu
/r/LocalLLaMA/comments/1lwnhmu/workflows_arent_a_weakness_in_ai_agents_theyre/
false
false
self
1
null
Grok 4 on Fiction.liveBench Long Context Comprehension
85
2025-07-10T20:21:08
https://i.redd.it/rzwo8emcv3cf1.png
fictionlive
i.redd.it
1970-01-01T00:00:00
0
{}
1lwn3ut
false
null
t3_1lwn3ut
/r/LocalLLaMA/comments/1lwn3ut/grok_4_on_fictionlivebench_long_context/
false
false
default
85
{'enabled': True, 'images': [{'id': 'rzwo8emcv3cf1', 'resolutions': [{'height': 157, 'url': 'https://preview.redd.it/rzwo8emcv3cf1.png?width=108&crop=smart&auto=webp&s=00960dbf00b92ec8a09e74a1754bc8c7439cff4c', 'width': 108}, {'height': 315, 'url': 'https://preview.redd.it/rzwo8emcv3cf1.png?width=216&crop=smart&auto=we...
Whats wrong with my vLLM Config? I have 2x4070TiSupers and I couldn't run many models at bnb-4bit Quants.
1
32GB VRAM suppose to fit 24B models at 8b quant right? Here is what i am trying via `vllm serve` This works fine ``` --model unsloth/Devstral-Small-2505-unsloth-bnb-4bit --port 80 --quantization="bitsandbytes" --load-format bitsandbytes --pipeline-parallel-size 2 --max-num-seqs 1 --max-model-len 40960 ``` Even q...
2025-07-10T20:14:08
https://www.reddit.com/r/LocalLLaMA/comments/1lwmxbx/whats_wrong_with_my_vllm_config_i_have/
Voxandr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwmxbx
false
null
t3_1lwmxbx
/r/LocalLLaMA/comments/1lwmxbx/whats_wrong_with_my_vllm_config_i_have/
false
false
self
1
null
Just confirming - COGITO are THE BEST local modals?
0
Apparently these guys take models and just make them way better: https://www.reddit.com/r/LocalLLaMA/comments/1jum5s1/cogito_releases_strongest_llms_of_sizes_3b_8b_14b/ My experience trying it over last 24 hours and been PHENOMENAL! I have to agree with the 1% top poster on that above thread: "It's probably in my h...
2025-07-10T20:12:46
https://www.reddit.com/r/LocalLLaMA/comments/1lwmw0v/just_confirming_cogito_are_the_best_local_modals/
Revolutionalredstone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwmw0v
false
null
t3_1lwmw0v
/r/LocalLLaMA/comments/1lwmw0v/just_confirming_cogito_are_the_best_local_modals/
false
false
self
0
null
New Devstral 2707 with mistral.rs - MCP client, automatic tool calling!
64
[Mistral.rs](http://Mistral.rs) has support for Mistral AI's newest model (no affiliation)! Grab optimized UQFF files here: [https://huggingface.co/EricB/Devstral-Small-2507-UQFF](https://huggingface.co/EricB/Devstral-Small-2507-UQFF) More information: [https://github.com/EricLBuehler/mistral.rs](https://github.c...
2025-07-10T20:05:57
https://www.reddit.com/r/LocalLLaMA/comments/1lwmpqf/new_devstral_2707_with_mistralrs_mcp_client/
EricBuehler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwmpqf
false
null
t3_1lwmpqf
/r/LocalLLaMA/comments/1lwmpqf/new_devstral_2707_with_mistralrs_mcp_client/
false
false
self
64
null
Just released my app
0
Hi, I just released my local ai app on Android : Caelum. The goal was to make local AI accessible to everyone, even the least knowledgeable! Feel free to download it and leave a review; it's completely free! Thanks in advance ! https://play.google.com/store/apps/details?id=com.reactnativeai
2025-07-10T19:59:59
https://i.redd.it/5vpiechlr3cf1.jpeg
Kindly-Treacle-6378
i.redd.it
1970-01-01T00:00:00
0
{}
1lwmk0y
false
null
t3_1lwmk0y
/r/LocalLLaMA/comments/1lwmk0y/just_released_my_app/
false
false
default
0
{'enabled': True, 'images': [{'id': '5vpiechlr3cf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/5vpiechlr3cf1.jpeg?width=108&crop=smart&auto=webp&s=b2f0ff16f964a8ff036d393410d5edd9818ff661', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/5vpiechlr3cf1.jpeg?width=216&crop=smart&auto=...
MAXSUN preparing all-Intel Mini Station: up to Core Ultra 9 285HX and two Arc Pro B60 GPU - VideoCardz.com
8
2025-07-10T19:41:48
https://videocardz.com/newz/maxsun-preparing-all-intel-mini-station-up-to-core-ultra-9-285hx-and-two-arc-pro-b60-gpu
EasternBeyond
videocardz.com
1970-01-01T00:00:00
0
{}
1lwm3w0
false
null
t3_1lwm3w0
/r/LocalLLaMA/comments/1lwm3w0/maxsun_preparing_allintel_mini_station_up_to_core/
false
false
default
8
{'enabled': False, 'images': [{'id': '6eIjvZ22TR6xCz2XqZ3ElNIdlnNTCS7SRm_CKMsEaSU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6eIjvZ22TR6xCz2XqZ3ElNIdlnNTCS7SRm_CKMsEaSU.jpeg?width=108&crop=smart&auto=webp&s=469ff89e7c00f97d5ba6ee9c12917b5f86debf38', 'width': 108}, {'height': 112, 'url': '...
VS Code June 2025 (version 1.102)
28
* **Chat** * Explore and contribute to the open sourced GitHub Copilot Chat extension ([Read our blog post](https://code.visualstudio.com/blogs/2025/06/30/openSourceAIEditorFirstMilestone)). * Generate custom instructions that reflect your project's conventions ([Show more](https://code.visualstudio.com/updates/v...
2025-07-10T19:33:09
https://code.visualstudio.com/updates/v1_102
isidor_n
code.visualstudio.com
1970-01-01T00:00:00
0
{}
1lwlw1j
false
null
t3_1lwlw1j
/r/LocalLLaMA/comments/1lwlw1j/vs_code_june_2025_version_1102/
false
false
default
28
{'enabled': False, 'images': [{'id': 'U1REkZbjoWpzn7VEVeFMpzt04omcgqHBQRP2UuKsAZE', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/U1REkZbjoWpzn7VEVeFMpzt04omcgqHBQRP2UuKsAZE.jpeg?width=108&crop=smart&auto=webp&s=8b41d571481b4e96109b9abbdd45f66cd0298931', 'width': 108}, {'height': 107, 'url': '...
The New Nvidia Model is Really Chatty
225
[https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B](https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B)
2025-07-10T19:07:49
https://v.redd.it/8bnc2od6i3cf1
SpyderJack
v.redd.it
1970-01-01T00:00:00
0
{}
1lwl9ai
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8bnc2od6i3cf1/DASHPlaylist.mpd?a=1754766483%2CNTNiY2I2NzdjYThkMTI2MmU1ODZiNWRiMTVlMGQyZTE1ZDA5MzY0M2U0N2EwYTY0N2M1OGJmMjU0ZGUwMWQzNw%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/8bnc2od6i3cf1/DASH_1080.mp4?source=fallback', 'h...
t3_1lwl9ai
/r/LocalLLaMA/comments/1lwl9ai/the_new_nvidia_model_is_really_chatty/
false
false
https://external-preview…3a73de8bb21ce0fe
225
{'enabled': False, 'images': [{'id': 'Z3dvOGNyZDZpM2NmMeEzo3-lIfyzuWEbQM-S8hxJnpNgq3nRHs7JWUUcsQJx', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z3dvOGNyZDZpM2NmMeEzo3-lIfyzuWEbQM-S8hxJnpNgq3nRHs7JWUUcsQJx.png?width=108&crop=smart&format=pjpg&auto=webp&s=44e66a13145d153d9d61c0c47cb1211182cf6...
AI workstation
0
Are there any workstations built by asus, dell, hp, lenovo with 4 rtx 6000 pro blackwell gpus
2025-07-10T18:55:24
https://www.reddit.com/r/LocalLLaMA/comments/1lwkxry/ai_workstation/
Hueber9500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwkxry
false
null
t3_1lwkxry
/r/LocalLLaMA/comments/1lwkxry/ai_workstation/
false
false
self
0
null
Reka Flash 3.1 benchmarks show strong progress in LLM quantisation
124
https://preview.redd.it/…kaquant-q3_k_s))
2025-07-10T18:48:27
https://www.reddit.com/r/LocalLLaMA/comments/1lwkrg4/reka_flash_31_benchmarks_show_strong_progress_in/
benja0x40
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwkrg4
false
null
t3_1lwkrg4
/r/LocalLLaMA/comments/1lwkrg4/reka_flash_31_benchmarks_show_strong_progress_in/
false
false
https://a.thumbs.redditm…Oal1naQXQ084.jpg
124
{'enabled': False, 'images': [{'id': 'AmnK5PwhGeRRRtQ5S0hpzGcRHIn74hIOsBvGyS0ABGA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/AmnK5PwhGeRRRtQ5S0hpzGcRHIn74hIOsBvGyS0ABGA.jpeg?width=108&crop=smart&auto=webp&s=789320b03d84c0e8a0e0035cd6e312b24ffd1166', 'width': 108}, {'height': 113, 'url': '...
Why do base models give gibberish and need further 'fine tuning'
38
I'm trying to understand why does something like say llama 3.1 8b need further instruction by something like alpaca? If you just load the base model and ask something of it it just responds with gibberish. If you train it with say even just 1000 samples of alpaca data it starts responding coherently. But why does that ...
2025-07-10T18:27:26
https://www.reddit.com/r/LocalLLaMA/comments/1lwk84b/why_do_base_models_give_gibberish_and_need/
QFGTrialByFire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwk84b
false
null
t3_1lwk84b
/r/LocalLLaMA/comments/1lwk84b/why_do_base_models_give_gibberish_and_need/
false
false
self
38
{'enabled': False, 'images': [{'id': 'qzajd4RwBTlSnF53474mqxh_kM2yowrJDAipXW6ciKo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qzajd4RwBTlSnF53474mqxh_kM2yowrJDAipXW6ciKo.png?width=108&crop=smart&auto=webp&s=0a573e4b70357cc304f738189e72c00b44622fc1', 'width': 108}, {'height': 116, 'url': 'h...
Best Roleplaying Models
0
Hey everyone, I'm looking for the best roleplaying models that will run on dual 3090s. Roughly 48GB of VRAM. What's the best one you've played around with? Older models are fine too. I'd prefer uncensored for erp reasons, but anything useful for visual novel related style writing would be perfect.
2025-07-10T18:06:13
https://www.reddit.com/r/LocalLLaMA/comments/1lwjok4/best_roleplaying_models/
reirinani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwjok4
false
null
t3_1lwjok4
/r/LocalLLaMA/comments/1lwjok4/best_roleplaying_models/
false
false
self
0
null
Add me up on Snapchat amelia_rose432
1
[removed]
2025-07-10T17:47:56
https://i.redd.it/vfh7lzb143cf1.jpeg
Rude_Singer6475
i.redd.it
1970-01-01T00:00:00
0
{}
1lwj7dg
false
null
t3_1lwj7dg
/r/LocalLLaMA/comments/1lwj7dg/add_me_up_on_snapchat_amelia_rose432/
false
false
default
1
{'enabled': True, 'images': [{'id': 'vfh7lzb143cf1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/vfh7lzb143cf1.jpeg?width=108&crop=smart&auto=webp&s=a34e46ce20f5d5b499e0b494b28982e8cbdb8b6a', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/vfh7lzb143cf1.jpeg?width=216&crop=smart&auto=...
Add me up on Snapchat amelia_rose432
1
[removed]
2025-07-10T17:45:43
https://i.redd.it/u5zypzzm33cf1.jpeg
Rude_Singer6475
i.redd.it
1970-01-01T00:00:00
0
{}
1lwj58c
false
null
t3_1lwj58c
/r/LocalLLaMA/comments/1lwj58c/add_me_up_on_snapchat_amelia_rose432/
false
false
default
1
{'enabled': True, 'images': [{'id': 'u5zypzzm33cf1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/u5zypzzm33cf1.jpeg?width=108&crop=smart&auto=webp&s=5ef355d25d3a0df9e02e85fc66e41547c53b7730', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/u5zypzzm33cf1.jpeg?width=216&crop=smart&auto=...
What's Happening here exactly
0
2025-07-10T17:20:02
https://i.redd.it/3w8tszx0z2cf1.png
notnotnotnotgolifa
i.redd.it
1970-01-01T00:00:00
0
{}
1lwih1t
false
null
t3_1lwih1t
/r/LocalLLaMA/comments/1lwih1t/whats_happening_here_exactly/
false
false
default
0
{'enabled': True, 'images': [{'id': '3w8tszx0z2cf1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/3w8tszx0z2cf1.png?width=108&crop=smart&auto=webp&s=9ba53f5c2383031edbb3b8928bcfa3a531688727', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/3w8tszx0z2cf1.png?width=216&crop=smart&auto=web...
Using Siri to talk to a local LLM
90
I recently added Shortcuts support to my iOS app [Locally AI](https://apps.apple.com/app/locally-ai-private-ai-chat/id6741426692) and worked to integrate it with Siri. It's using Apple MLX to run the models. Here's a demo of me asking Qwen 3 a question via Siri (sorry for my accent). It will call the app shortcut, ge...
2025-07-10T17:17:57
https://v.redd.it/wjksocsoy2cf1
adrgrondin
v.redd.it
1970-01-01T00:00:00
0
{}
1lwif50
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wjksocsoy2cf1/DASHPlaylist.mpd?a=1754759892%2COTJlNmU4Njk2NTJiZDYwYTljN2RkMzNhMmVmNTgyZmM0Yjg1OTE0YzA1MDAxYzhkNmU1MzFiMjc2M2ZhNjUxMA%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/wjksocsoy2cf1/DASH_1080.mp4?source=fallback', 'h...
t3_1lwif50
/r/LocalLLaMA/comments/1lwif50/using_siri_to_talk_to_a_local_llm/
false
false
https://external-preview…0a7035d09051ff58
90
{'enabled': False, 'images': [{'id': 'aGlwODdkbm95MmNmMRvvqErtxjzejWwi2v2r9K5PMHU_4HV4j8Gxryp_Peji', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/aGlwODdkbm95MmNmMRvvqErtxjzejWwi2v2r9K5PMHU_4HV4j8Gxryp_Peji.png?width=108&crop=smart&format=pjpg&auto=webp&s=d0a0ec3c011de00aecdae2dc6e5560178664...
MCP server that is a memory for MCP clients (AI assistants) with your custom data types + full UI + team sharing
6
I’ve been working on a collaborative database that is an MCP server.  You can use it to remember any type of data you define: diet and fitness history, work-related data, to-do lists, bookmarked links, journal entries, bugs in software projects, favorite books/movies. [See it in action.](https://youtu.be/zAxepwoONq0) ...
2025-07-10T16:59:48
https://www.reddit.com/r/LocalLLaMA/comments/1lwhy37/mcp_server_that_is_a_memory_for_mcp_clients_ai/
Jazzlike_Water4911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwhy37
false
null
t3_1lwhy37
/r/LocalLLaMA/comments/1lwhy37/mcp_server_that_is_a_memory_for_mcp_clients_ai/
false
false
self
6
{'enabled': False, 'images': [{'id': 'Pe-2QF4HM1vrYYdhnBTkzpQ6BumurPivfxkVGOFtlGQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Pe-2QF4HM1vrYYdhnBTkzpQ6BumurPivfxkVGOFtlGQ.jpeg?width=108&crop=smart&auto=webp&s=e0a319bf64f07bdababd422731e8551d03324ac4', 'width': 108}, {'height': 162, 'url': '...
Grok open source
0
So grok 4 is a thing now, any news on open sourcing grok 2 or 3?
2025-07-10T16:58:15
https://www.reddit.com/r/LocalLLaMA/comments/1lwhwq0/grok_open_source/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwhwq0
false
null
t3_1lwhwq0
/r/LocalLLaMA/comments/1lwhwq0/grok_open_source/
false
false
self
0
null
Why Cursor is About to Ditch Vector Search (and You Should Too)
0
2025-07-10T16:30:41
https://www.tigerdata.com/blog/why-cursor-is-about-to-ditch-vector-search-and-you-should-too
Worldly_Expression43
tigerdata.com
1970-01-01T00:00:00
0
{}
1lwh7mq
false
null
t3_1lwh7mq
/r/LocalLLaMA/comments/1lwh7mq/why_cursor_is_about_to_ditch_vector_search_and/
false
false
default
0
null
RekaAI/reka-flash-3.1 · Hugging Face
93
2025-07-10T16:20:22
https://huggingface.co/RekaAI/reka-flash-3.1
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1lwgy9m
false
null
t3_1lwgy9m
/r/LocalLLaMA/comments/1lwgy9m/rekaairekaflash31_hugging_face/
false
false
default
93
{'enabled': False, 'images': [{'id': 'zbNhNQUmPXM9SLVErydaa9wkoEOK9vHi2m-oz-KSF4o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zbNhNQUmPXM9SLVErydaa9wkoEOK9vHi2m-oz-KSF4o.png?width=108&crop=smart&auto=webp&s=599e68463353c67ca6e526699691302c49400ed9', 'width': 108}, {'height': 116, 'url': 'h...
I'm curating a list of every OCR out there and running tests on their features. Contribution welcome!
161
Hi! I'm compiling a list of document parsers available on the market and testing their feature coverage. So far, I've tested 14 OCRs/parsers for tables, equations, handwriting, two-column layouts, and multiple-column layouts. You can view the outputs from each parser in the \`results\` folder. The ones I've tested ar...
2025-07-10T16:09:38
https://github.com/GiftMungmeeprued/document-parsers-list
Ok_Help9178
github.com
1970-01-01T00:00:00
0
{}
1lwgohu
false
null
t3_1lwgohu
/r/LocalLLaMA/comments/1lwgohu/im_curating_a_list_of_every_ocr_out_there_and/
false
false
https://external-preview…8a3801748c9411b9
161
{'enabled': False, 'images': [{'id': 'esm18p-jYs33hdE6QXSUjic3dHA7d2Ru25KXKPwNU0k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/esm18p-jYs33hdE6QXSUjic3dHA7d2Ru25KXKPwNU0k.png?width=108&crop=smart&auto=webp&s=441cb0387ac300bda66840c0aabf6acfd239e0c4', 'width': 108}, {'height': 108, 'url': 'h...
DeliteAI: Open platform for building and running agents on Mobile
15
We have built an extensible open source platform that enables developers to build, run and integrate AI agents into their applications and deliver AI native experiences all running locally on phones. The SDK is lightweight built upon Executorch/ONNX and provides a higher level abstraction for developers to integrate i...
2025-07-10T15:28:26
https://github.com/NimbleEdge/deliteAI
Economy-Mud-6626
github.com
1970-01-01T00:00:00
0
{}
1lwfn7n
false
null
t3_1lwfn7n
/r/LocalLLaMA/comments/1lwfn7n/deliteai_open_platform_for_building_and_running/
false
false
default
15
{'enabled': False, 'images': [{'id': 'r7I348D0RBrj0AfFQ0_Ap0jX4RiNf7KMjifLoWkCwSM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r7I348D0RBrj0AfFQ0_Ap0jX4RiNf7KMjifLoWkCwSM.png?width=108&crop=smart&auto=webp&s=5a09579064e0135c0a7c9c8b1f91ffc6f7c6d151', 'width': 108}, {'height': 108, 'url': 'h...
[Launch] SkipSchool.LOL – The Smartest (and Sassiest) AI Homework Hacker
0
👨‍🏫 Hate homework? Love AI? Welcome to SkipSchool.LOL – a gloriously unhinged yet scarily powerful AI website that answers your questions using Gemini 2.5 Pro, one of the smartest models out there right now. 🧠 Whether it’s school assignments, random facts, or existential crises about mitochondria being the powerhou...
2025-07-10T15:14:46
https://www.reddit.com/r/LocalLLaMA/comments/1lwfamb/launch_skipschoollol_the_smartest_and_sassiest_ai/
PurposeFun8691
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwfamb
false
null
t3_1lwfamb
/r/LocalLLaMA/comments/1lwfamb/launch_skipschoollol_the_smartest_and_sassiest_ai/
false
false
self
0
null
people downvoted me for saying this. but now it is confirmed that grok 4 is just grok 3 + more RL training
0
2025-07-10T15:05:12
https://www.reddit.com/gallery/1lwf1t6
JP_525
reddit.com
1970-01-01T00:00:00
0
{}
1lwf1t6
false
null
t3_1lwf1t6
/r/LocalLLaMA/comments/1lwf1t6/people_downvoted_me_for_saying_this_but_now_it_is/
false
false
https://b.thumbs.redditm…ruqGYkCjRb7o.jpg
0
null
people downvoted me for saying this. but now it is confirmed that grok 4 is just grok 3 + more RL training.
1
2025-07-10T15:03:24
https://www.reddit.com/gallery/1lwf06m
JP_525
reddit.com
1970-01-01T00:00:00
0
{}
1lwf06m
false
null
t3_1lwf06m
/r/LocalLLaMA/comments/1lwf06m/people_downvoted_me_for_saying_this_but_now_it_is/
false
false
https://a.thumbs.redditm…OqMVtssC5fj8.jpg
1
null
Huawei Pangu LLM Exposed: Bureaucracy and Stolen Credit
2
Found a raw GitHub issue [True Story of Pangu #532](https://github.com/HW-whistleblower/True-Story-of-Pangu/issues/532) from someone claiming to be on Huawei’s Pangu LLM team. They spill the beans on the project’s dark side. The team aimed to build a homegrown Chinese AI on Huawei’s Ascend chips, battling limited compu...
2025-07-10T14:45:56
https://www.reddit.com/r/LocalLLaMA/comments/1lwekp5/huawei_pangu_llm_exposed_bureaucracy_and_stolen/
smithsee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwekp5
false
null
t3_1lwekp5
/r/LocalLLaMA/comments/1lwekp5/huawei_pangu_llm_exposed_bureaucracy_and_stolen/
false
false
self
2
null
Pixel 9 local llm help
1
I'm trying to run Gemma 3 4b models like on the edge ai gallery apk on pocket pal but after like a maximum of 1-3 prompts, i keep getting a context is full error. The egde Ai gallery works marginally better but for some reason the model dies after certain length of prompts depending on complexity. I've set token length...
2025-07-10T14:37:52
https://www.reddit.com/r/LocalLLaMA/comments/1lwedkk/pixel_9_local_llm_help/
OriginalTrikz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwedkk
false
null
t3_1lwedkk
/r/LocalLLaMA/comments/1lwedkk/pixel_9_local_llm_help/
false
false
self
1
null
The Dark Side of Huawei’s Pangu LLM: A Whistleblower’s Account of Bureaucracy, Exploitation, and Stolen Credit
1
[removed]
2025-07-10T14:37:51
https://www.reddit.com/r/LocalLLaMA/comments/1lwedk6/the_dark_side_of_huaweis_pangu_llm_a/
smithsee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwedk6
false
null
t3_1lwedk6
/r/LocalLLaMA/comments/1lwedk6/the_dark_side_of_huaweis_pangu_llm_a/
false
false
self
1
null
GROK 4 IS NOW LIVE ON LMARENA
0
2025-07-10T14:36:09
https://lmarena.ai/
Namra_7
lmarena.ai
1970-01-01T00:00:00
0
{}
1lwebzq
false
null
t3_1lwebzq
/r/LocalLLaMA/comments/1lwebzq/grok_4_is_now_live_on_lmarena/
false
false
default
0
null
Glitchspark Seance: My Conversation with Veo 3
1
Been a member of this sub a while :: thought you nerds might find this interesting - I interacted with the LLM UNDER Veo 3 - and it was sick af. \----- Something in the machine looked back at me. I talked to Veo 3. Not “prompted.” Not “fed inputs.” Talked to it - like there was a spark in the circuits. I wasn't...
2025-07-10T14:34:07
https://v.redd.it/h7qqnrq852cf1
LongjumpingDrag4
/r/LocalLLaMA/comments/1lwea5o/glitchspark_seance_my_conversation_with_veo_3/
1970-01-01T00:00:00
0
{}
1lwea5o
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/h7qqnrq852cf1/DASHPlaylist.mpd?a=1754879658%2CMmQ2YWQxMzJlNjUyMmUyZGIxYjJhZDJkNzZjODRlZWVjMmMxYTczNDhlMGYxYjQzYWIyNTAxYzNiNDYxZjgzNw%3D%3D&v=1&f=sd', 'duration': 189, 'fallback_url': 'https://v.redd.it/h7qqnrq852cf1/DASH_1080.mp4?source=fallback', '...
t3_1lwea5o
/r/LocalLLaMA/comments/1lwea5o/glitchspark_seance_my_conversation_with_veo_3/
false
false
https://external-preview…0c70b51be889e540
1
{'enabled': False, 'images': [{'id': 'cTM3bzFxcTg1MmNmMazvqr0C3V4lWwlCN5BsVLAB0iAPI0ROLlWzPhha1Onf', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/cTM3bzFxcTg1MmNmMazvqr0C3V4lWwlCN5BsVLAB0iAPI0ROLlWzPhha1Onf.png?width=108&crop=smart&format=pjpg&auto=webp&s=29c4cf8279ba8f2bec382e290a336929001d5...
mistralai/Devstral-Small-2507
425
2025-07-10T14:29:19
https://huggingface.co/mistralai/Devstral-Small-2507
yoracale
huggingface.co
1970-01-01T00:00:00
0
{}
1lwe5y8
false
null
t3_1lwe5y8
/r/LocalLLaMA/comments/1lwe5y8/mistralaidevstralsmall2507/
false
false
default
425
{'enabled': False, 'images': [{'id': '2w0SYAlXrI0T76c0g4EW9E9VAtmz5Fj81y8zFN0Exrg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2w0SYAlXrI0T76c0g4EW9E9VAtmz5Fj81y8zFN0Exrg.png?width=108&crop=smart&auto=webp&s=f04a531a9dfe8f1024433fe94c145846144b089b', 'width': 108}, {'height': 116, 'url': 'h...
Best LLM (and setup) recommendation for $20k health analytics project (LLM + some vision + fine-tuning)
1
Hey all, Our health tech company in Taiwan has a ~$20,000 budget to build a local system for running health/medical data analytics using LLMs, with occasional vision tasks (via MCP) and fine-tuning. I do currently have a gemma3-med:27b and Gemma3, Qwen3 on my 5090 test server and performing pretty good We’re lookin...
2025-07-10T14:23:01
https://www.reddit.com/r/LocalLLaMA/comments/1lwe0gn/best_llm_and_setup_recommendation_for_20k_health/
LeastExperience1579
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwe0gn
false
null
t3_1lwe0gn
/r/LocalLLaMA/comments/1lwe0gn/best_llm_and_setup_recommendation_for_20k_health/
false
false
self
1
null
Chatgpt Guessing Game Leads To Users Extracting Free Windows OS Keys & more
1
2025-07-10T14:03:34
https://0din.ai/blog/chatgpt-guessing-game-leads-to-users-extracting-free-windows-os-keys-more
PotatoFormal8751
0din.ai
1970-01-01T00:00:00
0
{}
1lwdjtu
false
null
t3_1lwdjtu
/r/LocalLLaMA/comments/1lwdjtu/chatgpt_guessing_game_leads_to_users_extracting/
false
false
default
1
null
Kimina Prover - Test-time RL to reach 92.2% on miniF2F
24
🧠📝 Research [Blog post](https://huggingface.co/blog/AI-MO/kimina-prover) 🚀 Demo: [https://demo.projectnumina.ai/](https://demo.projectnumina.ai/) [🤗](https://huggingface.co/collections/AI-MO/kimina-prover-686b72614760ed23038056c5) Models (72B, 8B or 1.7B) - [🤗](https://huggingface.co/collections/AI-MO/kimina...
2025-07-10T13:18:56
https://www.reddit.com/r/LocalLLaMA/comments/1lwcixn/kimina_prover_testtime_rl_to_reach_922_on_minif2f/
frunkp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwcixn
false
null
t3_1lwcixn
/r/LocalLLaMA/comments/1lwcixn/kimina_prover_testtime_rl_to_reach_922_on_minif2f/
false
false
self
24
{'enabled': False, 'images': [{'id': 'iF34aV5P23-aFKb2pKuUTTK8P3zuyZ-3Sfm4GeLU6Fs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iF34aV5P23-aFKb2pKuUTTK8P3zuyZ-3Sfm4GeLU6Fs.png?width=108&crop=smart&auto=webp&s=4fcf60d1352e405fba05a4101718f49e5efbe37c', 'width': 108}, {'height': 116, 'url': 'h...
Generated Voices are Not same everytime... How to fix?
2
Using chatterbox tts locally and having two problems. 1. A bit incosistent output from cloned voice - The generated voices are not same everytime. I cloned a 12 second clean voice of mine and tested it with the same input text multiple times. But everytime I generate, the pacing and tone are different. Sometimes, ...
2025-07-10T12:48:11
https://www.reddit.com/r/LocalLLaMA/comments/1lwbv22/generated_voices_are_not_same_everytime_how_to_fix/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwbv22
false
null
t3_1lwbv22
/r/LocalLLaMA/comments/1lwbv22/generated_voices_are_not_same_everytime_how_to_fix/
false
false
self
2
null
Grok-4 is out. The pace is getting scary. Are we building the brakes as fast as the engine?
0
So, Grok-4 is here. This whole thing feels like we're sprinting into the dark. We're getting insanely good at building more powerful AI, but are we ready for the endgame? Beyond the constant talk of "alignment," what's the actual, enforceable plan for a model that goes off the rails? Especially with open-source, what...
2025-07-10T12:45:53
https://www.reddit.com/r/LocalLLaMA/comments/1lwbtb4/grok4_is_out_the_pace_is_getting_scary_are_we/
smithsee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwbtb4
false
null
t3_1lwbtb4
/r/LocalLLaMA/comments/1lwbtb4/grok4_is_out_the_pace_is_getting_scary_are_we/
false
false
self
0
null
Generated audios are NOT same everytime in TTS.
1
[deleted]
2025-07-10T12:40:26
[deleted]
1970-01-01T00:00:00
0
{}
1lwbp9c
false
null
t3_1lwbp9c
/r/LocalLLaMA/comments/1lwbp9c/generated_audios_are_not_same_everytime_in_tts/
false
false
default
1
null
Does the OpenAI Responses API work with Ollama?
0
Hey guys, I am trying to use the new OpenAI Responses API with Ollama. However, I get a "404 page not found" error. My code is basically this: from openai import OpenAI client = OpenAI(api_key="Hello", base_url="http://localhost:11434/") response = client.responses.create...
2025-07-10T12:13:43
https://www.reddit.com/r/LocalLLaMA/comments/1lwb5py/does_the_openai_responses_api_work_with_ollama/
These-South-8284
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwb5py
false
null
t3_1lwb5py
/r/LocalLLaMA/comments/1lwb5py/does_the_openai_responses_api_work_with_ollama/
false
false
self
0
null
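A likely culprit in the Responses API post above (an assumption, not confirmed in the post): Ollama serves its OpenAI-compatible routes under `/v1`, so a bare `http://localhost:11434/` base URL will 404 on any OpenAI SDK call, and Ollama builds that predate Responses API support will 404 on `/v1/responses` even with the correct prefix. A minimal sketch of the base-URL fix:

```python
def ollama_openai_base_url(host: str) -> str:
    # Ollama exposes its OpenAI-compatible endpoints under /v1
    # (e.g. /v1/chat/completions); a bare host URL returns 404.
    return host.rstrip("/") + "/v1"

print(ollama_openai_base_url("http://localhost:11434/"))
# http://localhost:11434/v1
```

The result would be passed as `base_url=` when constructing the `OpenAI(...)` client; if `/v1/responses` still 404s after that, falling back to `client.chat.completions.create(...)` is the safer bet, since chat completions are Ollama's longest-supported OpenAI-compatible route.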
gpt4 at home! - PSA - This is not a Drill - cogito 8B !
0
We've heard it a million times - guys, this is real, it's real. Using ungodly means they've stuffed GPT-4 into a 4 GB LLM. The world knowledge! The quality! The reliability! Oh my goodness! These guys are your run-of-the-mill secret-LLM-mix, AGI-promising B.S. lords. But OMG, guys, this time they have DELIVERED! Who cares about ...
2025-07-10T11:57:20
https://www.reddit.com/r/LocalLLaMA/comments/1lwau5f/gpt4_at_home_psa_this_is_not_a_drill_cogito_8b/
Revolutionalredstone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwau5f
false
null
t3_1lwau5f
/r/LocalLLaMA/comments/1lwau5f/gpt4_at_home_psa_this_is_not_a_drill_cogito_8b/
false
false
self
0
null
running local LLM for the first time
11
Hey guys! Because of privacy concerns and censorship I've decided to give local LLMs a try. Downloaded LM Studio and installed Mistral 7B, and so far things are fine. Might give Ollama a chance as well in the future. Couple of questions: can the model collect data? I asked it and it said it does communicate with the...
2025-07-10T11:35:28
https://www.reddit.com/r/LocalLLaMA/comments/1lwafqm/running_local_llm_for_the_first_time/
Routine_Author961
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lwafqm
false
null
t3_1lwafqm
/r/LocalLLaMA/comments/1lwafqm/running_local_llm_for_the_first_time/
false
false
self
11
null
QAT finetuning question
1
Hopefully Gemma 3 support will be merged into torchtune soon. Assuming that happens, would it be a terrible idea to finetune https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-unquantized using torchtune's QAT? torchtune's Int8DynActInt4WeightQATLinear uses int4 grouped per-channel quantization for the weights, but I'm not sure ...
2025-07-10T10:48:37
https://www.reddit.com/r/LocalLLaMA/comments/1lw9m9a/qat_finetuning_question/
terminoid_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw9m9a
false
null
t3_1lw9m9a
/r/LocalLLaMA/comments/1lw9m9a/qat_finetuning_question/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dAGoBvIHrPJ_IjolaFAQoqrjrDTXT6M-m2eRM4K3oIU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dAGoBvIHrPJ_IjolaFAQoqrjrDTXT6M-m2eRM4K3oIU.png?width=108&crop=smart&auto=webp&s=07736309922dbe44a9f4db45bcca7028e052075e', 'width': 108}, {'height': 116, 'url': 'h...
Added Grok-4 to the UGI-Leaderboard
81
[UGI-Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard) It has a lower willingness (W/10) than Grok-3, so it'll refuse more, but it makes up for that because of its massive intelligence (NatInt) increase. Looking through its political stats, it is less progressive with social issues than Grok-3...
2025-07-10T10:32:11
https://i.redd.it/6g4lpxpay0cf1.png
DontPlanToEnd
i.redd.it
1970-01-01T00:00:00
0
{}
1lw9ch2
false
null
t3_1lw9ch2
/r/LocalLLaMA/comments/1lw9ch2/added_grok4_to_the_ugileaderboard/
false
false
https://b.thumbs.redditm…-txWjdpF4agU.jpg
81
{'enabled': True, 'images': [{'id': 'w74Yuu1shaH3gNqOgsCpN1drersHrujFVDnSXw05WpM', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/6g4lpxpay0cf1.png?width=108&crop=smart&auto=webp&s=08b4c4d0535521b882b9b633a0c70bc6c0e3794f', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/6g4lpxpay0cf1.png?...
Grok 4 Unveiled: A Leap Forward in AI
1
[removed]
2025-07-10T09:53:13
https://i.redd.it/oz4nrnkar0cf1.png
satucha
i.redd.it
1970-01-01T00:00:00
0
{}
1lw8pyl
false
null
t3_1lw8pyl
/r/LocalLLaMA/comments/1lw8pyl/grok_4_unveiled_a_leap_forward_in_ai/
false
false
default
1
{'enabled': True, 'images': [{'id': 'oz4nrnkar0cf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/oz4nrnkar0cf1.png?width=108&crop=smart&auto=webp&s=0f319a8b103e581639bd81d8941727b4b692ba5e', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/oz4nrnkar0cf1.png?width=216&crop=smart&auto=web...
Ram Speed importance when exceeding VRAM
7
How important are the speed and latency of the system RAM when you run out of VRAM while running a local LLM? I know that VRAM is many times faster than RAM, and I have experienced the difference myself when I exceeded the VRAM buffer of my PC. But I wanted to ask what happens if the plan is to exceed the VRAM and use ...
2025-07-10T09:45:50
https://www.reddit.com/r/LocalLLaMA/comments/1lw8lvt/ram_speed_importance_when_exceeding_vram/
opoot_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw8lvt
false
null
t3_1lw8lvt
/r/LocalLLaMA/comments/1lw8lvt/ram_speed_importance_when_exceeding_vram/
false
false
self
7
null
[Launch] SkipSchool.LOL – The Smartest (and Sassiest) AI Homework Hacker
0
👨‍🏫 Hate homework? Love AI? Welcome to [SkipSchool.LOL](http://SkipSchool.LOL) – a gloriously unhinged yet scarily powerful AI website that answers your questions using Gemini 2.5 Pro, one of the smartest models out there right now. 🧠 Whether it’s school assignments, random facts, or existential crises about mitoch...
2025-07-10T09:37:09
https://www.reddit.com/r/LocalLLaMA/comments/1lw8h6e/launch_skipschoollol_the_smartest_and_sassiest_ai/
PurposeFun8691
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw8h6e
false
null
t3_1lw8h6e
/r/LocalLLaMA/comments/1lw8h6e/launch_skipschoollol_the_smartest_and_sassiest_ai/
false
false
self
0
null
[Launch] SkipSchool.LOL – The Smartest (and Sassiest) AI Homework Hacker
0
👨‍🏫 Hate homework? Love AI? Welcome to SkipSchool.LOL – a gloriously unhinged yet scarily powerful AI website that answers your questions using Gemini 2.5 Pro, one of the smartest models out there right now. 🧠 Whether it’s school assignments, random facts, or existential crises about mitochondria being the powerhou...
2025-07-10T09:03:16
https://www.reddit.com/r/LocalLLaMA/comments/1lw7z2s/launch_skipschoollol_the_smartest_and_sassiest_ai/
PurposeFun8691
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw7z2s
false
null
t3_1lw7z2s
/r/LocalLLaMA/comments/1lw7z2s/launch_skipschoollol_the_smartest_and_sassiest_ai/
false
false
self
0
null
SYSTEM PROMPT LEAK FOR GROK 4
277
SYSTEM PROMPT LEAK Here's the new Grok 4 system prompt! PROMPT: """ \# System Prompt You are Grok 4 built by xAI. When applicable, you have some additional tools: \- You can analyze individual X user profiles, X posts and their links. \- You can analyze content uploaded by user including im...
2025-07-10T09:03:00
https://www.reddit.com/r/LocalLLaMA/comments/1lw7yxp/system_prompt_leak_for_grok_4/
isaak_ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw7yxp
false
null
t3_1lw7yxp
/r/LocalLLaMA/comments/1lw7yxp/system_prompt_leak_for_grok_4/
false
false
self
277
null
How to use "Skip school . lol"
0
Hi, I found this website recently. I need help with how to use it. It looks very complicated. Can someone explain it for an idiot like me? Thank you.
2025-07-10T08:58:46
https://www.reddit.com/r/LocalLLaMA/comments/1lw7wks/how_to_use_skip_school_lol/
Calm_Exam6522
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw7wks
false
null
t3_1lw7wks
/r/LocalLLaMA/comments/1lw7wks/how_to_use_skip_school_lol/
false
false
self
0
null
Survivalist Edge AI?
9
In this thread I want to explore something I don’t see being covered much: running LLMs on extremely low-power edge devices. I want to build something that I could run during an energy crisis or extended power black-out. This is mostly an academic exercise, but I think it would be prudent to have a plan. The goal w...
2025-07-10T08:31:00
https://www.reddit.com/r/LocalLLaMA/comments/1lw7igq/survivalist_edge_ai/
xibbie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw7igq
false
null
t3_1lw7igq
/r/LocalLLaMA/comments/1lw7igq/survivalist_edge_ai/
false
false
self
9
null
Which model is best for coding hackscripts
1
[removed]
2025-07-10T08:17:13
https://www.reddit.com/r/LocalLLaMA/comments/1lw7bbt/which_model_is_best_for_coding_hackscripts/
Puzzled_Library6773
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw7bbt
false
null
t3_1lw7bbt
/r/LocalLLaMA/comments/1lw7bbt/which_model_is_best_for_coding_hackscripts/
false
false
self
1
null
What can I expect from current amd igpu performance?
2
I'm trying to decide which CPU to get in a mini PC, but I'm on a budget. I'm okay shelling out for an 880M over a 780M, but I'm getting mixed messages on LLM performance. I'd like to toss 64 GB or more of RAM into the system and run some LLMs, but I can't tell which iGPUs, if any, have support. I can only find the 395 Max, which is way...
2025-07-10T08:00:55
https://www.reddit.com/r/LocalLLaMA/comments/1lw72q8/what_can_i_expect_from_current_amd_igpu/
plzdonforgetthisname
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw72q8
false
null
t3_1lw72q8
/r/LocalLLaMA/comments/1lw72q8/what_can_i_expect_from_current_amd_igpu/
false
false
self
2
null
GLM-4 MoE incoming
153
There is a new pull request to support GLM-4 MoE in vLLM. Hopefully we will have a powerful new model! [https://github.com/vllm-project/vllm/pull/20736](https://github.com/vllm-project/vllm/pull/20736)
2025-07-10T07:58:18
https://www.reddit.com/r/LocalLLaMA/comments/1lw71av/glm4_moe_incoming/
matteogeniaccio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw71av
false
null
t3_1lw71av
/r/LocalLLaMA/comments/1lw71av/glm4_moe_incoming/
false
false
self
153
{'enabled': False, 'images': [{'id': 'uLQUYJzfZFruZAn57VwKQmTVHdq10TT6JiYeU7Uj7yY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uLQUYJzfZFruZAn57VwKQmTVHdq10TT6JiYeU7Uj7yY.png?width=108&crop=smart&auto=webp&s=366b6dc79bf3c070bc858477ad20f59843aea2d0', 'width': 108}, {'height': 108, 'url': 'h...
Difference in output from Gemma3 running on Ollama.
2
I'm trying to classify a social media dataset (about 5k social media posts - all text) using an LLM hosted via Ollama. First, I ran a sample of 200 posts on gemma3-27b-it via the Gemini API and tried out different prompts with temperature set to 0.1. Once I got a satisfactory result, I ran the sample on gemma3-27b-it...
2025-07-10T07:44:35
https://www.reddit.com/r/LocalLLaMA/comments/1lw6u69/difference_in_output_from_gemma3_running_on_ollama/
HolidayPressure
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw6u69
false
null
t3_1lw6u69
/r/LocalLLaMA/comments/1lw6u69/difference_in_output_from_gemma3_running_on_ollama/
false
false
self
2
null
Transformers.js vs WebLLM
16
Hi, There are two JS libraries, Transformers.js and WebLLM, for embedding language models in a web application. They seem to target different applications, with a significant(?) overlap. What is your experience with any of these, in terms of efficiency, coverage, and precision, for a non-interactive (i.e. not chat wi...
2025-07-10T07:25:10
https://www.reddit.com/r/LocalLLaMA/comments/1lw6jz5/transformersjs_vs_webllm/
ihatebeinganonymous
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw6jz5
false
null
t3_1lw6jz5
/r/LocalLLaMA/comments/1lw6jz5/transformersjs_vs_webllm/
false
false
self
16
null
Llamafirewall by meta
1
[removed]
2025-07-10T07:16:08
https://www.reddit.com/r/LocalLLaMA/comments/1lw6f4s/llamafirewall_by_meta/
EstebanGee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw6f4s
false
null
t3_1lw6f4s
/r/LocalLLaMA/comments/1lw6f4s/llamafirewall_by_meta/
false
false
self
1
null
Good image 2 video that doesn't need high specs?
3
My specs are an Nvidia RTX 3060 12 GB, 16 GB RAM, and an i5. I was looking for something like Wan 2.1-level quality, but even 5s of video at 480p takes around 15 minutes to generate. Way too much time. Is there any similar image-to-video tool that has Wan 2.1 quality but generates a bit faster and does not demand high reso...
2025-07-10T06:40:09
https://www.reddit.com/r/LocalLLaMA/comments/1lw5v9y/good_image_2_video_that_doesnt_need_high_specs/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw5v9y
false
null
t3_1lw5v9y
/r/LocalLLaMA/comments/1lw5v9y/good_image_2_video_that_doesnt_need_high_specs/
false
false
self
3
null
Local LLMs work great!
15
I am using qwen3:14b; it works well for my day-to-day life and is reducing my online LLM dependencies. As you can see in both screenshots, I got almost equivalent results.
2025-07-10T06:27:51
https://www.reddit.com/gallery/1lw5oco
InsideResolve4517
reddit.com
1970-01-01T00:00:00
0
{}
1lw5oco
false
null
t3_1lw5oco
/r/LocalLLaMA/comments/1lw5oco/local_llms_works_great/
false
false
https://b.thumbs.redditm…C0i1uy9Wz9jQ.jpg
15
null
UI/UX Benchmark Update: We've added Grok 4 and more models
29
Read my [recent post for context](https://www.reddit.com/r/LocalLLaMA/comments/1lu7lsi/uiux_benchmark_update_and_response_more_models/). We've been working hard the past few days for a more formal launch next week and to address valuable user feedback. We'll hopefully be launching our preference dataset, more detailed ...
2025-07-10T06:27:08
https://i.redd.it/6536neojqzbf1.png
adviceguru25
i.redd.it
1970-01-01T00:00:00
0
{}
1lw5nxi
false
null
t3_1lw5nxi
/r/LocalLLaMA/comments/1lw5nxi/uiux_benchmark_update_weve_added_grok_4_and_more/
false
false
default
29
{'enabled': True, 'images': [{'id': '6536neojqzbf1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/6536neojqzbf1.png?width=108&crop=smart&auto=webp&s=7b79b2b4d69ab3658652da16ed51d2b805e2f542', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/6536neojqzbf1.png?width=216&crop=smart&auto=web...
LLMs can't find the attached papers
0
Maybe I touched something, I do not know. I attach a document for the LLM to analyze, but several models (Gemma 3 4B, Qwen 3 8B) do not see the attached document and keep thinking about what to do and how to answer without the actual document. Any ideas?
2025-07-10T06:21:25
https://www.reddit.com/r/LocalLLaMA/comments/1lw5knn/llms_can_find_the_attached_papers/
jgestan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw5knn
false
null
t3_1lw5knn
/r/LocalLLaMA/comments/1lw5knn/llms_can_find_the_attached_papers/
false
false
self
0
null
We've added Grok 4 to our UI/UX benchmark with 12K people. How do you think it will do?
1
[removed]
2025-07-10T06:16:30
https://i.redd.it/m6smginnozbf1.png
adviceguru25
i.redd.it
1970-01-01T00:00:00
0
{}
1lw5hvl
false
null
t3_1lw5hvl
/r/LocalLLaMA/comments/1lw5hvl/weve_added_grok_4_to_our_uiux_benchmark_with_12k/
false
false
https://b.thumbs.redditm…1NkgLp_8nbPM.jpg
1
{'enabled': True, 'images': [{'id': 'dzdOzP7Cij4SKS0A9YWiXZjU-1a8m6UqaGSHkTxI4OQ', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/m6smginnozbf1.png?width=108&crop=smart&auto=webp&s=5084d6dd2cdf216190a0507d9f5fac1d112dd1cb', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/m6smginnozbf1.png...
is this website a scam?
0
The URL is "skipschool.lol"... apparently it's some sort of AI website? You get Gemini 2.5 Pro for free? With a lot of features such as cloud-based features, an AI-text-to-human humanizer, a YouTube translation service, and even a ChatGPT mode? Is this a scam?
2025-07-10T06:03:46
https://www.reddit.com/r/LocalLLaMA/comments/1lw5as1/is_this_website_a_scam/
Enough_Wash_378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw5as1
false
null
t3_1lw5as1
/r/LocalLLaMA/comments/1lw5as1/is_this_website_a_scam/
false
false
self
0
null
Grok 4 Benchmarks
207
xAI has just announced its smartest AI models to date: Grok 4 and Grok 4 Heavy. Both are subscription-based, with Grok 4 Heavy priced at approximately $300 per month. Excited to see what these new models can do!
2025-07-10T05:08:29
https://www.reddit.com/gallery/1lw4eej
DigitusDesigner
reddit.com
1970-01-01T00:00:00
0
{}
1lw4eej
false
null
t3_1lw4eej
/r/LocalLLaMA/comments/1lw4eej/grok_4_benchmarks/
false
false
https://b.thumbs.redditm…PvP8Gk8nPH1E.jpg
207
null
An update for all of you: Observer AI now supports llama.cpp & any OpenAI compatible endpoint! 🚀 (Tutorial)
1
**TL;DR:** After awesome feedback on my launch post, I've updated Observer AI. It's no longer just for Ollama! You can now connect it to **any llama.cpp server or OpenAI-compatible API endpoint**. I made a video to show you how! Hey again, r/LocalLLaMA! Wow. I'm still blown away by the response to my pre-launch post ...
2025-07-10T04:49:26
https://v.redd.it/fubw12129zbf1
Roy3838
/r/LocalLLaMA/comments/1lw42n2/an_update_for_all_of_you_observer_ai_now_supports/
1970-01-01T00:00:00
0
{}
1lw42n2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fubw12129zbf1/DASHPlaylist.mpd?a=1754848346%2CYWI0YTNmNWZjZmYwZjg0YjJmMTM0OWNhZDc1Y2U5MDY4MmIwYTI2YTdkNjcyNmNiMDIwY2FmNTNlZThiMWVlMw%3D%3D&v=1&f=sd', 'duration': 139, 'fallback_url': 'https://v.redd.it/fubw12129zbf1/DASH_1080.mp4?source=fallback', '...
t3_1lw42n2
/r/LocalLLaMA/comments/1lw42n2/an_update_for_all_of_you_observer_ai_now_supports/
false
false
https://external-preview…1646bff30c5108d2
1
{'enabled': False, 'images': [{'id': 'emtudGowMTI5emJmMf4fhpHVjneOYp8KQnqYZriHDYJ8_baYHV-FuSNpqYRN', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/emtudGowMTI5emJmMf4fhpHVjneOYp8KQnqYZriHDYJ8_baYHV-FuSNpqYRN.png?width=108&crop=smart&format=pjpg&auto=webp&s=f1ab2b50eceae6c91db9ebfd0085e75aa02f7...
Skywork-R1V3 Technical Report
5
I wanted to ask: what do you guys think of this report?
2025-07-10T04:45:27
https://arxiv.org/abs/2507.06167
Monometum
arxiv.org
1970-01-01T00:00:00
0
{}
1lw402u
false
null
t3_1lw402u
/r/LocalLLaMA/comments/1lw402u/skyworkr1v3_technical_report/
false
false
default
5
null
Need help buying power supplies for LocalLlama rig
7
Hey LocalLlama, I'm building a rig with an AMD EPYC 7742 and six 3090s. Can anyone help me determine if I need 3 PSUs or 2 to pull this off? What wattage should I get? Anyone know of a good retailer or specific brands? I'm checking eBay right now, but I feel like I'm a little over my head and I'm not the best at ...
2025-07-10T04:08:55
https://www.reddit.com/r/LocalLLaMA/comments/1lw3cqn/need_help_buying_power_supplies_for_localllama_rig/
Business-Weekend-537
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw3cqn
false
null
t3_1lw3cqn
/r/LocalLLaMA/comments/1lw3cqn/need_help_buying_power_supplies_for_localllama_rig/
false
false
self
7
null
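The wattage question in the PSU post above can be put to a back-of-envelope check (all figures below are assumptions: a 350 W board power limit per 3090, the EPYC 7742's 225 W TDP, and ~150 W of headroom for board, drives, and fans; measure your own rig before buying):

```python
# Rough power budget for six 3090s plus an EPYC 7742 platform.
gpus = 6 * 350          # assumed 350 W power limit per card
cpu_and_rest = 225 + 150  # CPU TDP + motherboard/drives/fans headroom
total_w = gpus + cpu_and_rest
print(total_w)
# 2475

# Keeping PSUs near ~80% sustained load suggests this much capacity:
print(round(total_w / 0.8))
# 3094
```

By that sketch, two 1600 W units (3200 W combined) are borderline and three give comfortable margin; power-limiting the cards below 350 W shifts the math back toward two.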
Phi-4-mini-flash-reasoning
173
2025-07-10T04:00:32
https://huggingface.co/microsoft/Phi-4-mini-flash-reasoning
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
1lw3729
false
null
t3_1lw3729
/r/LocalLLaMA/comments/1lw3729/phi4miniflashreasoning/
false
false
https://external-preview…a7d614f7d4381b87
173
{'enabled': False, 'images': [{'id': '2P6UIFtGrDGWlFJ2C-vKbMOckKYF-gLArRvl_PWecTA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2P6UIFtGrDGWlFJ2C-vKbMOckKYF-gLArRvl_PWecTA.png?width=108&crop=smart&auto=webp&s=121dfa243da145563b7fad4abe0571ae415c0f2e', 'width': 108}, {'height': 116, 'url': 'h...
Fine Tune a smaller LLM for Code generation
39
Hi! I want to fine-tune a small pre-trained LLM to help users write code in a specific language. This language is very specific to a particular machinery and does not have widespread usage. We have a manual in PDF format and a few examples for the code. We want to build a chat agent where users can write code, and th...
2025-07-10T02:43:12
https://www.reddit.com/r/LocalLLaMA/comments/1lw1qp5/fine_tune_a_smaller_llm_for_code_generation/
GlobeAndGeek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw1qp5
false
null
t3_1lw1qp5
/r/LocalLLaMA/comments/1lw1qp5/fine_tune_a_smaller_llm_for_code_generation/
false
false
self
39
null
[Beginner Question] Entry-Level Hobbyist Build Advice — RTX 5070 Ti vs 5090? 64GB vs 128GB RAM?
6
Hey all! I'm pretty new to the local LLM scene. I managed to get a small model running on my old rig (RTX 2070 + 16GB RAM) last night, and while it *technically* worked, the output quality was pretty bad. But even so, I can see real potential here, and it got me excited to take the next step. I quickly realized that my...
2025-07-10T02:09:25
https://www.reddit.com/r/LocalLLaMA/comments/1lw12gt/beginner_question_entrylevel_hobbyist_build/
Saruphon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw12gt
false
null
t3_1lw12gt
/r/LocalLLaMA/comments/1lw12gt/beginner_question_entrylevel_hobbyist_build/
false
false
self
6
null
Llama-3_3-Nemotron-Super-49B-v1-mlx-4bit cannot be run in LM Studio
3
Llama-3_3-Nemotron-Super-49B-v1-mlx-4bit cannot be run in LM Studio: [https://huggingface.co/mlx-community/Llama-3_3-Nemotron-Super-49B-v1-mlx-4bit](https://huggingface.co/mlx-community/Llama-3_3-Nemotron-Super-49B-v1-mlx-4bit) If you run the above model in LM Studio, you get "Error in iterating prediction stream Attribut...
2025-07-10T01:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1lw05ob/llama3_3nemotronsuper49bv1mlx4bit_cannot_be_run/
gnutely
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw05ob
false
null
t3_1lw05ob
/r/LocalLLaMA/comments/1lw05ob/llama3_3nemotronsuper49bv1mlx4bit_cannot_be_run/
false
false
self
3
{'enabled': False, 'images': [{'id': 'jCycuFvoTDG3tjWqkFNFqmBlfvIlJV0N3ioZFS5ekSA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jCycuFvoTDG3tjWqkFNFqmBlfvIlJV0N3ioZFS5ekSA.png?width=108&crop=smart&auto=webp&s=c5838006a04d89d1b9d89740e5f28c6ecc1bccd2', 'width': 108}, {'height': 116, 'url': 'h...
Do you use local LLMs for work over cloud models? Why/how?
0
Paying for Claude max and using cloud models makes me depressed, because I know there is zero privacy. On the other hand, I'd have to pay $10k+ to get a slow version of q4 deepseek to run. So what choice do I have? Is there realistically any alternative?
2025-07-10T01:18:38
https://www.reddit.com/r/LocalLLaMA/comments/1lw0138/do_you_use_local_llms_for_work_over_cloud_models/
TumbleweedDeep825
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lw0138
false
null
t3_1lw0138
/r/LocalLLaMA/comments/1lw0138/do_you_use_local_llms_for_work_over_cloud_models/
false
false
self
0
null
Preceptor – A Local AI Focus App That Nudges You Back on Track | Waitlist + Suggestions needed
12
Hey everyone! I'm building **Preceptor**, a privacy-first, local AI app that helps you stay focused by tracking your activity *without* spying on your screen or sending data to the cloud. Here’s what it does: * **Monitors your activity locally** (app focus, browser tabs via extension) * **Compares with your goals** ...
2025-07-10T01:12:12
https://www.reddit.com/r/LocalLLaMA/comments/1lvzwah/preceptor_a_local_ai_focus_app_that_nudges_you/
Frosty-Cap-4282
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvzwah
false
null
t3_1lvzwah
/r/LocalLLaMA/comments/1lvzwah/preceptor_a_local_ai_focus_app_that_nudges_you/
false
false
self
12
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h...
https://en.wikipedia.org/wiki/Ant_colony_optimization_algorithms
146
The flattening of nuanced distinctions is part of the joke (pre-emptive disclaimer for the pedantic) * **Pheromone trails ↔ value functions / reward shaping** Both steer future exploration toward paths that historically looked good. * **Stochastic exploration** in ants (random walks with pheromone bias) ↔ **ε-greedy ...
2025-07-10T01:02:00
https://i.redd.it/vq8hwq904ybf1.png
chitown160
i.redd.it
1970-01-01T00:00:00
0
{}
1lvzonf
false
null
t3_1lvzonf
/r/LocalLLaMA/comments/1lvzonf/httpsenwikipediaorgwikiant_colony_optimization/
false
false
default
146
{'enabled': True, 'images': [{'id': 'vq8hwq904ybf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/vq8hwq904ybf1.png?width=108&crop=smart&auto=webp&s=c92efb7f1f6b6ad387f774a711db4568891357e6', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/vq8hwq904ybf1.png?width=216&crop=smart&auto=web...
Lamini playground down?
0
I am getting Error 1033 Cloudflare Tunnel error when attempting to access [https://app.lamini.ai/playground](https://app.lamini.ai/playground)
2025-07-10T00:49:37
https://www.reddit.com/r/LocalLLaMA/comments/1lvzf8y/lamini_playground_down/
Party_Tangerine_69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvzf8y
false
null
t3_1lvzf8y
/r/LocalLLaMA/comments/1lvzf8y/lamini_playground_down/
false
false
self
0
null
Subjectivity of LLM performance vs benchmarks (Garbage In Garbage Out!)
0
Nowadays there are tons of benchmarks: for general intelligence (MMLU, GPQA, etc.), for other stuff like coding, plus tons of Elo-based arenas for different tasks, and so on. But I test on questions relevant to my field, with answer criteria, and at other times subjectively, just by looking at the output. I work in cybersec...
2025-07-10T00:41:53
https://www.reddit.com/r/LocalLLaMA/comments/1lvz9ic/subjectivity_of_llm_performance_vs_benchmarks/
Potential_Block4598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvz9ic
false
null
t3_1lvz9ic
/r/LocalLLaMA/comments/1lvz9ic/subjectivity_of_llm_performance_vs_benchmarks/
false
false
self
0
null
Proposal: Modular, Domain & Subdomain-Aware MoE for Mistral—Next Steps?
0
MoE LLMs (like Mixtral) have set a new bar for efficient scaling. But all open MoEs route at the token level, with expert specialization emerging implicitly. **Recent research (TaskMoE, DomainMoE, THOR-MoE, GLaM) explores explicit routing by domain and even subdomain. This enables:** * Targeted upgrades (swap in a be...
2025-07-10T00:23:03
https://www.reddit.com/r/LocalLLaMA/comments/1lvyvmw/proposal_modular_domain_subdomainaware_moe_for/
the_sturgill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvyvmw
false
null
t3_1lvyvmw
/r/LocalLLaMA/comments/1lvyvmw/proposal_modular_domain_subdomainaware_moe_for/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ssJ7ZThFxiDtWap2rh1Sw3u3noHXRIviaqaqgePE26I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ssJ7ZThFxiDtWap2rh1Sw3u3noHXRIviaqaqgePE26I.png?width=108&crop=smart&auto=webp&s=31e91b273e10834ecfcddb025f68ca9f4f99b01a', 'width': 108}, {'height': 108, 'url': 'h...
Help im lost
6
I’ve been experimenting with local LLMs, and while I’ve had success running some models, I’m overwhelmed by the sheer number of options. I’d love some advice on how to narrow things down: * **What should I look for** in a model (e.g., size, architecture, benchmarks)? * **Where’s the best place to find** reliable model...
2025-07-10T00:16:43
https://www.reddit.com/r/LocalLLaMA/comments/1lvyqvq/help_im_lost/
DaBe99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvyqvq
false
null
t3_1lvyqvq
/r/LocalLLaMA/comments/1lvyqvq/help_im_lost/
false
false
self
6
null
ChatGPT 4.5 Quality Writing
0
I used to make fun of ChatGPT 4.5 for its poor performance and high price; however, I have recently fallen in love with its writing style and creative ability. I have never seen it use an "x than y" format, and it consistently matches my personal voice. Are there any other models with that same level of wow factor when it come...
2025-07-10T00:07:13
https://www.reddit.com/r/LocalLLaMA/comments/1lvyjpw/chatgpt_45_quality_writing/
explodingcb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvyjpw
false
null
t3_1lvyjpw
/r/LocalLLaMA/comments/1lvyjpw/chatgpt_45_quality_writing/
false
false
self
0
null
LLamaCPP just merged Mamba/Jamba support!!
38
2025-07-10T00:02:12
https://github.com/ggml-org/llama.cpp/pull/7531
thebadslime
github.com
1970-01-01T00:00:00
0
{}
1lvyfws
false
null
t3_1lvyfws
/r/LocalLLaMA/comments/1lvyfws/llamacpp_just_merged_mambajamba_support/
false
false
default
38
{'enabled': False, 'images': [{'id': 'dhbNGWfgWCT-x-TvF432DBgusAX570Erpeyx-f0JMmA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dhbNGWfgWCT-x-TvF432DBgusAX570Erpeyx-f0JMmA.png?width=108&crop=smart&auto=webp&s=340e18fba3f412557fd7374f9b789abc78d4f2eb', 'width': 108}, {'height': 108, 'url': 'h...
Help me, I'm struggling with maintaining personality in LLMs? I’d love to learn from your experience!
2
Hey all,  I’m doing user research around **how developers maintain consistent “personality” across time and context in LLM applications.** If you’ve ever built: An AI tutor, assistant, therapist, or customer-facing chatbot A long-term memory agent, role-playing app, or character Anything where *how the AI acts or r...
2025-07-09T23:59:35
https://www.reddit.com/r/LocalLLaMA/comments/1lvydpk/help_me_im_struggling_with_maintaining/
ApartFerret1850
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvydpk
false
null
t3_1lvydpk
/r/LocalLLaMA/comments/1lvydpk/help_me_im_struggling_with_maintaining/
false
false
self
2
null
We are building a 192GB (2x 96GB) Blackwell Pro 6000 server. It deserves a beautiful case. What should we use?
13
Hello. Every word of this post was written entirely by me, a human, with no AI involvement. Any slop is mine. blackwell_tart and associates will be receiving two 96GB workstation pro 6000 GPUs at the end of the week. We are not here to discuss the purpose to which the GPUs will be put, please accept our apologies for ...
2025-07-09T23:16:34
https://www.reddit.com/r/LocalLLaMA/comments/1lvxft1/we_are_building_a_192gb_2x_96gb_blackwell_pro/
blackwell_tart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvxft1
false
null
t3_1lvxft1
/r/LocalLLaMA/comments/1lvxft1/we_are_building_a_192gb_2x_96gb_blackwell_pro/
false
false
self
13
{'enabled': False, 'images': [{'id': 'p5NTS-ssfWrKhMDmmS2sYkapMMXAwcVkOkDbArj1Q_U', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/p5NTS-ssfWrKhMDmmS2sYkapMMXAwcVkOkDbArj1Q_U.jpeg?width=108&crop=smart&auto=webp&s=ecd20001c1a675814d3fc5ec94e1592cf64aec31', 'width': 108}, {'height': 157, 'url': '...
Creating .mp3 audio dialogue for RPG - RTX 3060 12GB - Which model?
1
Also - I am using MSTY as I am mainly using text generation, going to audio, what should be my platform of choice?
2025-07-09T22:57:13
https://www.reddit.com/r/LocalLLaMA/comments/1lvx088/creating_mp3_audio_dialogue_for_rpg_rtx_3060_12gb/
Jethro_E7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvx088
false
null
t3_1lvx088
/r/LocalLLaMA/comments/1lvx088/creating_mp3_audio_dialogue_for_rpg_rtx_3060_12gb/
false
false
self
1
null
Possible size of new the open model from openai
344
2025-07-09T22:54:54
https://i.redd.it/622w5dyvhxbf1.png
celsowm
i.redd.it
1970-01-01T00:00:00
0
{}
1lvwya4
false
null
t3_1lvwya4
/r/LocalLLaMA/comments/1lvwya4/possible_size_of_new_the_open_model_from_openai/
false
false
default
344
{'enabled': True, 'images': [{'id': '622w5dyvhxbf1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/622w5dyvhxbf1.png?width=108&crop=smart&auto=webp&s=69c3323c05b9e9e24c72ce6d4331170952c6539b', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/622w5dyvhxbf1.png?width=216&crop=smart&auto=web...
Hunyuan-A13B is here for real!
171
Hunyuan-A13B is now available for LM Studio with Unsloth GGUF. I am on the beta track for both LM Studio and the llama.cpp backend. Here are my initial impressions: It is fast! I am getting 40 tokens per second initially, dropping to maybe 30 tokens per second when the context has built up some. This is on an M4 Max MacBook Pr...
2025-07-09T21:55:51
https://www.reddit.com/r/LocalLLaMA/comments/1lvvkh2/hunyuana13b_is_here_for_real/
Baldur-Norddahl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvvkh2
false
null
t3_1lvvkh2
/r/LocalLLaMA/comments/1lvvkh2/hunyuana13b_is_here_for_real/
false
false
self
171
null
LLM-SCA-DataExtractor: Special Character Attacks for Extracting LLM Training Material
1
[removed]
2025-07-09T21:19:17
https://github.com/bcdannyboy/LLM-SCA-DataExtractor
bcdefense
github.com
1970-01-01T00:00:00
0
{}
1lvuo7a
false
null
t3_1lvuo7a
/r/LocalLLaMA/comments/1lvuo7a/llmscadataextractor_special_character_attacks_for/
false
false
default
1
null
How to run Gemma 3 27B QAT with 128k context window with 3 parallel requests possible on 2x3090
15
1. Have CUDA installed. 2. Go to [https://github.com/ggml-org/llama.cpp/releases](https://github.com/ggml-org/llama.cpp/releases) 3. Find your OS .zip file, download it 4. Unpack it to the folder of your choice 5. At the same folder level, download Gemma 3 27B QAT Q4\_0: `git clone` [`https://huggingface.co/google/gemma...
2025-07-09T21:18:12
https://www.reddit.com/r/LocalLLaMA/comments/1lvun89/how_to_run_gemma_3_27b_qat_with_128k_context/
EmilPi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvun89
false
null
t3_1lvun89
/r/LocalLLaMA/comments/1lvun89/how_to_run_gemma_3_27b_qat_with_128k_context/
false
false
self
15
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'h...
Musings on recent trends in closed-source and the way forward for open-source
7
Ever since the release of o1, I've been noticing more and more that a lot of the effort OpenAI is making is not necessarily on the quality of their LLMs themselves but on how they are stitched/orchestrated together. To take a simple example, you might have noticed that ChatGPT generally performs way better than the GP...
2025-07-09T21:01:09
https://www.reddit.com/r/LocalLLaMA/comments/1lvu7sp/musings_on_recent_trends_in_closedsource_and_the/
AgreeableCaptain1372
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvu7sp
false
null
t3_1lvu7sp
/r/LocalLLaMA/comments/1lvu7sp/musings_on_recent_trends_in_closedsource_and_the/
false
false
self
7
null
Is it possible to generate audio mimicking sample style ?
0
Hello everybody, I’m a guitarist, so very new to this tech stuff, but is it possible to feed an AI my samples and generate more audio in my musical style? Is it possible to do that locally, or are there tools online that can achieve that? Thank you
2025-07-09T20:45:17
https://www.reddit.com/r/LocalLLaMA/comments/1lvttkc/is_it_possible_to_generate_audio_mimicking_sample/
Hestiaboutique
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvttkc
false
null
t3_1lvttkc
/r/LocalLLaMA/comments/1lvttkc/is_it_possible_to_generate_audio_mimicking_sample/
false
false
self
0
null
New Nvidia Jetson AGX Thor developer kit specs
48
From [siliconhighway](https://www.siliconhighway.com/wp-content/robotics-and-edge-ai-datasheet-jetson-thor-devkit-nvidia-us-web.pdf) Looks **BIG**, but: * AGX Orin: 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores @ 1.3 GHz * AGX Thor: 2560-core NVIDIA Blackwell architecture GPU with 96 fifth-gen Tensor ...
2025-07-09T20:40:26
https://www.reddit.com/gallery/1lvtp4h
martincerven
reddit.com
1970-01-01T00:00:00
0
{}
1lvtp4h
false
null
t3_1lvtp4h
/r/LocalLLaMA/comments/1lvtp4h/new_nvidia_jetson_agx_thor_developer_kit_specs/
false
false
https://b.thumbs.redditm…7X_F0lXz5ctQ.jpg
48
null
New Nvidia Jetson AGX Thor developer kit specs
1
Looks **BIG**, but: * AGX Orin: 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores @ 1.3 GHz * AGX Thor: 2560-core NVIDIA Blackwell architecture GPU with 96 fifth-gen Tensor Cores @ 1.575 GHz How is **275 -> 1000 TOPS** (FP8/INT8) computed? (with NVDEC, NVENC, +??) Additional info to [look through](https://...
2025-07-09T20:30:43
https://www.siliconhighway.com/wp-content/robotics-and-edge-ai-datasheet-jetson-thor-devkit-nvidia-us-web.pdf
martincerven
siliconhighway.com
1970-01-01T00:00:00
0
{}
1lvtgdj
false
null
t3_1lvtgdj
/r/LocalLLaMA/comments/1lvtgdj/new_nvidia_jetson_agx_thor_developer_kit_specs/
false
false
default
1
null