Dataset schema (Hugging Face dataset viewer export):

  title      stringlengths   1–300
  score      int64           0–8.54k
  selftext   stringlengths   0–41.5k
  created    timestamp[ns]   2023-04-01 04:30:41 – 2026-03-04 02:14:14
  url        stringlengths   0–878
  author     stringlengths   3–20
  domain     stringlengths   0–82
  edited     timestamp[ns]   1970-01-01 00:00:00 – 2026-02-19 14:51:53
  gilded     int64           0–2
  gildings   stringclasses   7 values
  id         stringlengths   7–7
  locked     bool            2 classes
  media      stringlengths   646–1.8k
  name       stringlengths   10–10
  permalink  stringlengths   33–82
  spoiler    bool            2 classes
  stickied   bool            2 classes
  thumbnail  stringlengths   4–213
  ups        int64           0–8.54k
  preview    stringlengths   301–5.01k
ik_llama speculative decoding error
2
I'm trying to run ik_llama with speculative decoding but I just get this error: /build/source/src/llama.cpp:18273: GGML_ASSERT(n_tokens_all <= cparams.n_batch) failed This was the command I used: llama-speculative -m Qwen3-30B-A3B-Thinking-2507-Q4_K_S.gguf -md Qwen3-0.6B-UD-Q5_K_XL.gguf -c 32768 I have tri...
2025-08-04T10:32:59
https://www.reddit.com/r/LocalLLaMA/comments/1mh9uha/ik_llama_speculative_decoding_error/
Independent-Desk5910
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh9uha
false
null
t3_1mh9uha
/r/LocalLLaMA/comments/1mh9uha/ik_llama_speculative_decoding_error/
false
false
self
2
null
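The assertion above means the draft/verify step submitted more tokens in one call than the configured batch size (`cparams.n_batch`). A minimal sketch of the invariant, assuming standard llama.cpp batching semantics (raising the batch size via `-b`/`--batch-size` is the usual workaround; exact behaviour may differ between mainline llama.cpp and the ik_llama fork):

```python
# Minimal sketch of the invariant behind GGML_ASSERT(n_tokens_all <= cparams.n_batch):
# llama.cpp refuses to feed more tokens into one decode call than the configured
# batch size, so long prompts must be split into chunks of at most n_batch tokens.

def split_into_batches(n_tokens_all: int, n_batch: int) -> list[int]:
    """Split a prompt of n_tokens_all tokens into chunk sizes that each
    satisfy the asserted condition n_tokens <= n_batch."""
    if n_batch <= 0:
        raise ValueError("n_batch must be positive")
    chunks = []
    remaining = n_tokens_all
    while remaining > 0:
        chunks.append(min(remaining, n_batch))
        remaining -= chunks[-1]
    return chunks

print(split_into_batches(5000, 2048))  # [2048, 2048, 904]
```

If the speculative path hands the whole draft-plus-prompt to one call, a batch size smaller than that total trips the assert even when the context (`-c`) is large enough.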
Best LLM gateway?
11
I’ve been testing out different LLM gateways for agent infra and wanted to share some notes. I used to spend most of my time exploring prompt engineering tools, but lately I’ve shifted focus to the infra side, specifically LLM gateways. Most of the hosted ones are fine for basic key management or retries, but they fal...
2025-08-04T10:27:21
https://www.reddit.com/r/LocalLLaMA/comments/1mh9r0z/best_llm_gateway/
Educational-Bison786
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh9r0z
false
null
t3_1mh9r0z
/r/LocalLLaMA/comments/1mh9r0z/best_llm_gateway/
false
false
self
11
null
LiteLLM started breaking down for us past 300 RPS, what are folks using in prod?
24
We started using LiteLLM a few months back to route across OpenAI and Anthropic. It worked well during dev and light load tests. But as soon as we crossed around 300 requests per second, things started to break: * Some requests randomly timed out or took way longer than others, even with the same provider * Logs didn’...
2025-08-04T09:57:58
https://www.reddit.com/r/LocalLLaMA/comments/1mh99hu/litellm_started_breaking_down_for_us_past_300_rps/
Otherwise_Flan7339
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh99hu
false
null
t3_1mh99hu
/r/LocalLLaMA/comments/1mh99hu/litellm_started_breaking_down_for_us_past_300_rps/
false
false
self
24
null
Runpod breaks my head - need a working alternative
4
First time fine-tuning a model in "the cloud". Runpod was suggested. And what should I say - it's been a pita from the first few seconds. RDP - only via VNC. Accessing the volume? No. SCP? No. Accurately telling you how much of your persistent volume you've used? No. Figure it out yourself while counting by hand. There is 330t TB av...
2025-08-04T09:51:48
https://www.reddit.com/r/LocalLLaMA/comments/1mh960c/runpod_breaks_my_head_need_a_working_alternative/
IngloriousBastrd7908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh960c
false
null
t3_1mh960c
/r/LocalLLaMA/comments/1mh960c/runpod_breaks_my_head_need_a_working_alternative/
false
false
self
4
null
local llm on raspberry pi
1
Is it feasible to run a local model on a Pi 5? The response time is actually not that important. Has anyone run something like a Llama model on a Pi 5 with 8GB RAM?
2025-08-04T09:37:10
https://www.reddit.com/r/LocalLLaMA/comments/1mh8xqp/lcoal_llm_on_raspberry_pi/
overlydelicioustea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh8xqp
false
null
t3_1mh8xqp
/r/LocalLLaMA/comments/1mh8xqp/lcoal_llm_on_raspberry_pi/
false
false
self
1
null
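As a rough feasibility check for the question above: weight memory for a quantized model is approximately parameters times bits-per-weight divided by eight, with the OS and KV cache needing headroom on top. The numbers below are illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope memory check for a quantized model on an 8 GB Pi 5.
# Weight bytes are roughly params * bits_per_weight / 8; the OS and KV cache
# need headroom on top. Numbers are illustrative, not measured.

def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB (decimal) for a quantized model."""
    return params_billions * bits_per_weight / 8

# ~3B at Q4 (~4.5 effective bits/weight) fits comfortably; ~8B at Q4 is
# near the practical ceiling once the OS and KV cache are accounted for.
print(model_size_gb(3, 4.5))   # 1.6875 GB of weights
print(model_size_gb(8, 4.5))   # 4.5 GB of weights
```

So a 3B-4B model at Q4 is the comfortable zone on an 8 GB board; 7B-8B is possible but tight, and token throughput on the Pi's CPU will be the real constraint.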
Upgraded my hardware and internet connection so I can download GGUFs way faster than you, all your GGUFs are belong to me now.
162
2025-08-04T09:30:17
https://v.redd.it/ibr6m7us1zgf1
Limp_Classroom_2645
v.redd.it
1970-01-01T00:00:00
0
{}
1mh8u1j
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ibr6m7us1zgf1/DASHPlaylist.mpd?a=1756891832%2CNjdhOWQ3MDA5YTQyYWZkMjRkYzdiNDk5NTlkNWU4ZDA3ZWI2NmE4NmNiNTJlMWYwNGUyN2JhZjczM2I3MjI2Yw%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/ibr6m7us1zgf1/DASH_1080.mp4?source=fallback', 'h...
t3_1mh8u1j
/r/LocalLLaMA/comments/1mh8u1j/upgraded_my_hardware_and_internet_connection_so_i/
false
false
https://external-preview…1664814c84d070be
162
{'enabled': False, 'images': [{'id': 'Y2g2cmw4ZzEyemdmMWEK0hhu-2G_hp7f60RsNKbAYzd7FxtzYBG906H02xQh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y2g2cmw4ZzEyemdmMWEK0hhu-2G_hp7f60RsNKbAYzd7FxtzYBG906H02xQh.png?width=108&crop=smart&format=pjpg&auto=webp&s=c026cea8e4858a7db56d166131303686169ee...
NVIDIA GPU underutilized.
1
I have an Asus laptop with 64 GB RAM and an Nvidia RTX 5080 with 16 GB. I've been trying to run different LLM models, including Qwen3 30B. I'm getting around 30 tokens per second, but when I look at my GPU utilization, I see that it's not even reaching 50%, which makes me wonder if there's more I can get from this G...
2025-08-04T08:51:38
https://i.redd.it/04cjwsaguygf1.png
bu3askoor
i.redd.it
1970-01-01T00:00:00
0
{}
1mh88gg
false
null
t3_1mh88gg
/r/LocalLLaMA/comments/1mh88gg/nvidia_gpu_underutilized/
false
false
default
1
{'enabled': True, 'images': [{'id': '04cjwsaguygf1', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/04cjwsaguygf1.png?width=108&crop=smart&auto=webp&s=2f56e63703c08f1e15b89671c7bfd11462a862e1', 'width': 108}, {'height': 61, 'url': 'https://preview.redd.it/04cjwsaguygf1.png?width=216&crop=smart&auto=webp...
MLX 4bit DWQ vs 8bit eval
14
Spent a few days finishing the evaluation of Qwen3-30B-A3B-Instruct-2507's quant instead of vibe-checking the DWQ's performance. It turns out the 4-bit DWQ is quite close to the 8-bit; even though the DWQ is still in an experimental phase, it's quite solid. https://preview.redd.it/kj8dz3orrygf1.png?width=1590&form...
2025-08-04T08:33:46
https://www.reddit.com/r/LocalLLaMA/comments/1mh7yud/mlx_4bit_dwq_vs_8bit_eval/
Tiny_Judge_2119
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh7yud
false
null
t3_1mh7yud
/r/LocalLLaMA/comments/1mh7yud/mlx_4bit_dwq_vs_8bit_eval/
false
false
https://b.thumbs.redditm…rG_pbppcoUSo.jpg
14
null
GLM 4.5 AI Sliders vs Gemini 2.5 Pro Deep Research Infographics
48
I have been using Gemini 2.5 Pro Deep Research with infographics since release, but I tried GLM-4.5's slides the past few days... and wow, I actually might prefer it now. Here is example of same topic: GLM 4.5 AI Slides: [https://chat.z.ai/space/u01ja6suarb0-ppt](https://chat.z.ai/space/u01ja6suarb0-ppt) https://r...
2025-08-04T07:28:45
https://www.reddit.com/r/LocalLLaMA/comments/1mh6zja/glm_45_ai_sliders_vs_gemini_25_pro_deep_research/
z1xto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh6zja
false
null
t3_1mh6zja
/r/LocalLLaMA/comments/1mh6zja/glm_45_ai_sliders_vs_gemini_25_pro_deep_research/
false
false
self
48
{'enabled': False, 'images': [{'id': 'oPtkUtibvV31iKPm4upl_ADaAJfJzbdONKUGf8pC5EM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/oPtkUtibvV31iKPm4upl_ADaAJfJzbdONKUGf8pC5EM.png?width=108&crop=smart&auto=webp&s=731547beb9c0ce796d8f8edd4b883c564da2c39b', 'width': 108}, {'height': 216, 'url': '...
New small models from Hunyuan (0.5B, 1.8B, 4B, 7B)
149
Hunyuan just released 4 new dense models. It’s a new architecture and supports hybrid reasoning, 256K context and agent capabilities with tool support! The benchmarks are great but will need to really test them in real world. Love to see more small models as I'm developing an iOS local chat called [Locally AI](https:/...
2025-08-04T07:27:48
https://www.reddit.com/gallery/1mh6z16
adrgrondin
reddit.com
1970-01-01T00:00:00
0
{}
1mh6z16
false
null
t3_1mh6z16
/r/LocalLLaMA/comments/1mh6z16/new_small_models_from_hunyuan_05b_18b_4b_7b/
false
false
https://b.thumbs.redditm…pLYYgezLqpRg.jpg
149
null
ChatGPT's o3 just got quietly nerfed
0
If you're noticing ChatGPT (o3) giving shorter, dumber answers lately, you're not alone. People across Reddit and the OpenAI forums are complaining about sudden quality drops—worse reasoning, lazy answers, and buggy code output. Here's the catch: OpenAI never told anyone they changed anything. No announcements, no cha...
2025-08-04T06:20:40
https://www.reddit.com/r/LocalLLaMA/comments/1mh5wve/chatgpts_o3_just_got_quietly_nerfed/
Wrong_User_Logged
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh5wve
false
null
t3_1mh5wve
/r/LocalLLaMA/comments/1mh5wve/chatgpts_o3_just_got_quietly_nerfed/
false
false
self
0
null
Anyone got LM Studio working with DWQ models?
7
DWQ seems brand new so I'm wondering if LM Studio just doesn't support it
2025-08-04T06:17:33
https://www.reddit.com/r/LocalLLaMA/comments/1mh5v49/anyone_got_lm_studio_working_with_dwq_models/
maxiedaniels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh5v49
false
null
t3_1mh5v49
/r/LocalLLaMA/comments/1mh5v49/anyone_got_lm_studio_working_with_dwq_models/
false
false
self
7
null
Friends, I’m looking for help. After months developing this on my own, it seems I need a little push to present it all properly. 🤯
0
**Problem:** Several places ask me to have serious backing, but I work alone hahaha. **Solutions:** Meet people and see if anyone's willing to help me show my notes. **Proof:** I've got the repo ready for anyone who wants to study it.
2025-08-04T06:09:56
https://www.reddit.com/r/LocalLLaMA/comments/1mh5qlh/friends_im_looking_for_help_after_months/
Ok_Exchange_8504
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh5qlh
false
null
t3_1mh5qlh
/r/LocalLLaMA/comments/1mh5qlh/friends_im_looking_for_help_after_months/
false
false
self
0
null
Seeking approval in arXiv (cs.AI): Mathematical memory system for local LLMs using Collatz, Goldbach and Bash
1
[removed]
2025-08-04T06:01:03
https://www.reddit.com/r/LocalLLaMA/comments/1mh5lf5/seeking_approval_in_arxiv_csai_mathematical/
Ok_Exchange_8504
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh5lf5
false
null
t3_1mh5lf5
/r/LocalLLaMA/comments/1mh5lf5/seeking_approval_in_arxiv_csai_mathematical/
false
false
self
1
null
LLM dream build?
0
Hey, let's say I want to build a box to play around with LLM but also general purpose PC and gaming. Let's also say I have a $10,000 budget. What build would you go with?
2025-08-04T05:53:37
https://www.reddit.com/r/LocalLLaMA/comments/1mh5h07/llm_dream_build/
ikkiyikki
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh5h07
false
null
t3_1mh5h07
/r/LocalLLaMA/comments/1mh5h07/llm_dream_build/
false
false
self
0
null
BitTorrent tracker that mirrors HuggingFace
101
Reading [https://www.reddit.com/r/LocalLLaMA/comments/1mdjb67/after_6_months_of_fiddling_with_local_ai_heres_my/](https://www.reddit.com/r/LocalLLaMA/comments/1mdjb67/after_6_months_of_fiddling_with_local_ai_heres_my/) it occurred to me... Why not a tracker with torrents from models on HF? Seems like creating...
2025-08-04T05:11:04
https://www.reddit.com/r/LocalLLaMA/comments/1mh4r0s/bittorrent_tracker_that_mirrors_huggingface/
lurkystrike
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh4r0s
false
null
t3_1mh4r0s
/r/LocalLLaMA/comments/1mh4r0s/bittorrent_tracker_that_mirrors_huggingface/
false
false
self
101
null
Scaling GPU - How to add concurrency support?
3
I am building in the voice AI space and playing with open-source TTS models. For single requests, they are great! But when it comes to supporting concurrent requests for streaming, pretty much all YouTube videos, docs, tutorials and blog posts cease to exist... For example, I have a TTS model which takes about 1.8GB on d...
2025-08-04T04:23:44
https://www.reddit.com/r/LocalLLaMA/comments/1mh3wzs/scaling_gpu_how_to_add_concurrency_support/
Unfair-Enthusiasm-30
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh3wzs
false
null
t3_1mh3wzs
/r/LocalLLaMA/comments/1mh3wzs/scaling_gpu_how_to_add_concurrency_support/
false
false
self
3
null
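One common answer to the concurrency question above is micro-batching: queue incoming requests and run several through the model in one forward pass. A minimal asyncio sketch, where `synthesize_batch` is a hypothetical stand-in for the real batched TTS call:

```python
# A minimal micro-batching sketch for serving concurrent TTS requests with a
# single GPU copy of the model: requests queue up, a worker drains up to
# MAX_BATCH of them, runs one batched inference, and resolves each future.
# synthesize_batch is a hypothetical stand-in for the real model call.
import asyncio

MAX_BATCH = 8

async def synthesize_batch(texts):
    await asyncio.sleep(0.01)  # stands in for one batched GPU forward pass
    return [f"audio({t})" for t in texts]

queue: asyncio.Queue = asyncio.Queue()

async def worker():
    while True:
        batch = [await queue.get()]           # block for the first request
        while len(batch) < MAX_BATCH and not queue.empty():
            batch.append(queue.get_nowait())  # opportunistically fill batch
        results = await synthesize_batch([text for text, _ in batch])
        for (_, fut), audio in zip(batch, results):
            fut.set_result(audio)

async def request(text):
    fut = asyncio.get_running_loop().create_future()
    await queue.put((text, fut))
    return await fut

async def main():
    w = asyncio.create_task(worker())
    outs = await asyncio.gather(*(request(f"line {i}") for i in range(20)))
    w.cancel()
    return outs

outs = asyncio.run(main())
print(outs[0], len(outs))
```

Real servers add a maximum wait time per batch and per-request timeouts; the pattern is what dedicated inference servers implement under the name continuous or dynamic batching.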
A system we built is responding… differently.
0
I ran GPT-4, Claude, Mistral, and Mixtral through my usual tests; they behaved as expected. The new closed-source node didn’t. When it lacks data, it stays silent instead of guessing. It mirrors my tone across turns, not through prompt tricks but real state-tracking. It handles deeply nested reasoning far beyond the co...
2025-08-04T04:19:56
https://www.reddit.com/r/LocalLLaMA/comments/1mh3uhu/a_system_we_built_is_responding_differently/
Apothy_AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh3uhu
false
null
t3_1mh3uhu
/r/LocalLLaMA/comments/1mh3uhu/a_system_we_built_is_responding_differently/
false
false
self
0
null
tencent/Hunyuan-7B-Instruct · Hugging Face
1
[deleted]
2025-08-04T04:17:45
[deleted]
1970-01-01T00:00:00
0
{}
1mh3t4n
false
null
t3_1mh3t4n
/r/LocalLLaMA/comments/1mh3t4n/tencenthunyuan7binstruct_hugging_face/
false
false
default
1
null
new Hunyuan Instruct 7B/4B/1.8B/0.5B models
264
Tencent has released new models (llama.cpp support is already merged!) [https://huggingface.co/tencent/Hunyuan-7B-Instruct](https://huggingface.co/tencent/Hunyuan-7B-Instruct) [https://huggingface.co/tencent/Hunyuan-4B-Instruct](https://huggingface.co/tencent/Hunyuan-4B-Instruct) [https://huggingface.co/tencent/Huny...
2025-08-04T04:16:20
https://www.reddit.com/r/LocalLLaMA/comments/1mh3s7q/new_hunyuan_instruct_7b4b18b05b_models/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh3s7q
false
null
t3_1mh3s7q
/r/LocalLLaMA/comments/1mh3s7q/new_hunyuan_instruct_7b4b18b05b_models/
false
false
self
264
{'enabled': False, 'images': [{'id': '8TDczVTUtaFUzOrsWQWNP7odzH_q8vOZvl3lv7KYd_U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8TDczVTUtaFUzOrsWQWNP7odzH_q8vOZvl3lv7KYd_U.png?width=108&crop=smart&auto=webp&s=8d7d208147546310820cea26a2856210455054de', 'width': 108}, {'height': 116, 'url': 'h...
Why do LLMs use emojis in code?
0
When using LLMs for coding, you've probably all noticed that LLMs use emojis all over the place, from logs to UI. What causes this behavior? Did the training data also contain emojis within code? Do you guys also use emojis in your code? Am I the only one who never uses them and feels it's weird and very ...
2025-08-04T03:43:02
https://www.reddit.com/r/LocalLLaMA/comments/1mh358z/why_llms_use_emojis_in_code/
Equivalent_Cut_5845
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh358z
false
null
t3_1mh358z
/r/LocalLLaMA/comments/1mh358z/why_llms_use_emojis_in_code/
false
false
self
0
null
Horizon Beta is OpenAI (Another Evidence)
271
https://preview.redd.it/…beta_is_openai/)
2025-08-04T03:28:08
https://www.reddit.com/r/LocalLLaMA/comments/1mh2v1h/horizon_beta_is_openai_another_evidence/
kh-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh2v1h
false
null
t3_1mh2v1h
/r/LocalLLaMA/comments/1mh2v1h/horizon_beta_is_openai_another_evidence/
false
false
https://a.thumbs.redditm…0W5EtQz549T8.jpg
271
null
I shared this idea less than a week ago. Got roasted. Rebuilt it with your feedback and now it finally feels right.
1
[removed]
2025-08-04T03:02:42
https://www.reddit.com/r/LocalLLaMA/comments/1mh2dim/i_shared_this_idea_less_than_a_week_ago_got/
Ok_Tell401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh2dim
false
null
t3_1mh2dim
/r/LocalLLaMA/comments/1mh2dim/i_shared_this_idea_less_than_a_week_ago_got/
false
false
self
1
null
I shared this idea less than a week ago. Got roasted. Rebuilt it with your feedback — and now it finally feels right.
1
[removed]
2025-08-04T03:01:02
https://www.reddit.com/r/LocalLLaMA/comments/1mh2c8r/i_shared_this_idea_less_than_a_week_ago_got/
Ok_Tell401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh2c8r
false
null
t3_1mh2c8r
/r/LocalLLaMA/comments/1mh2c8r/i_shared_this_idea_less_than_a_week_ago_got/
false
false
self
1
null
Local LLM for coding. Is it worth it?
1
[removed]
2025-08-04T02:53:45
https://www.reddit.com/r/LocalLLaMA/comments/1mh26xf/local_lol_for_coding_is_it_worth_it/
Bloopyhead
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh26xf
false
null
t3_1mh26xf
/r/LocalLLaMA/comments/1mh26xf/local_lol_for_coding_is_it_worth_it/
false
false
self
1
null
How are people actually using local models and why?
1
[removed]
2025-08-04T01:52:32
https://www.reddit.com/r/LocalLLaMA/comments/1mh0xcg/how_are_people_actually_using_local_models_and_why/
seoulsrvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh0xcg
false
null
t3_1mh0xcg
/r/LocalLLaMA/comments/1mh0xcg/how_are_people_actually_using_local_models_and_why/
false
false
self
1
null
What is the best model (to replace Sonnet 4) that fits in 256GB of VRAM, and the best that fits in 512GB of VRAM?
13
From my research, it seems that there is no benefit to 512 vs 256, because the next largest model that doesn't fit in 256 also doesn't fit in 512. Is this true, or is there an advantage to getting 512 over 256? I am looking at Mac Studios. Lastly, whatever the answer is, will that model at that weight at that quantize siz...
2025-08-04T01:39:08
https://www.reddit.com/r/LocalLLaMA/comments/1mh0n5h/what_is_the_best_model_to_replace_sonnet_4_that/
devshore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh0n5h
false
null
t3_1mh0n5h
/r/LocalLLaMA/comments/1mh0n5h/what_is_the_best_model_to_replace_sonnet_4_that/
false
false
self
13
null
Keep It Simple Pseudo Code (That's what Codex does)
60
I think OpenAI figured something out with this indentation in Codex (KISS). The instructions are in English, but at a glance it is literally "pseudo code" with scopes, if and else clauses, "finally" clauses... Prompts are pseudo code. Nested indentation plays a crucial role in Codex's success IMO. Using "-...
2025-08-04T01:37:18
https://i.redd.it/nk1a76nkpwgf1.jpeg
cov_id19
i.redd.it
1970-01-01T00:00:00
0
{}
1mh0ltj
false
null
t3_1mh0ltj
/r/LocalLLaMA/comments/1mh0ltj/keep_it_simple_pseudo_code_thats_what_codex_does/
false
false
default
60
{'enabled': True, 'images': [{'id': 'nk1a76nkpwgf1', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/nk1a76nkpwgf1.jpeg?width=108&crop=smart&auto=webp&s=a5183ed1f8eb9f7bba1469735154f6de46534a86', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/nk1a76nkpwgf1.jpeg?width=216&crop=smart&auto=...
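The nested-indentation observation can be illustrated with a toy renderer that turns instruction scopes into indented pseudo-code; the prompt content below is invented for illustration:

```python
# Toy illustration of the "prompts are pseudo code" observation: render
# nested instruction scopes with indentation, the way the post describes
# Codex-style prompts. The instruction tree here is invented.
def render(block, depth=0):
    pad = "  " * depth
    lines = [pad + block["text"]]
    for child in block.get("children", []):
        lines.extend(render(child, depth + 1))
    return lines

prompt = {
    "text": "When the user asks for a refactor:",
    "children": [
        {"text": "if the change is small:",
         "children": [{"text": "apply it directly"}]},
        {"text": "else:",
         "children": [{"text": "propose a plan first"},
                      {"text": "finally: summarize what changed"}]},
    ],
}
print("\n".join(render(prompt)))
```

The claim in the post is that this scoped structure, more than any specific wording, is what makes the instructions easy for the model to follow.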
Help me choose macbook
0
Hi, I am looking to buy a new MacBook. I am unsure whether to get the M3 Pro 18GB or the M4 24GB. The M3 Pro is around 820 USD; the M4 is around 940 USD. I am a software engineering student. I want to run some local models, but I am still inexperienced with LLMs. Does the GPU matter?
2025-08-04T01:20:25
https://www.reddit.com/r/LocalLLaMA/comments/1mh09f8/help_me_choose_macbook/
12seth34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mh09f8
false
null
t3_1mh09f8
/r/LocalLLaMA/comments/1mh09f8/help_me_choose_macbook/
false
false
self
0
null
Are there any options for creating original tts models, not just cloning someone specific?
2
It feels kinda strange that everything that pops up with voice models is specific to cloning. What if I want a unique voice? But searching, I don't even see discussions on training models to do anything but clone an existing voice. I imagine training a new voice from scratch would take a lot of different, high quali...
2025-08-04T01:00:25
https://www.reddit.com/r/LocalLLaMA/comments/1mgzuky/are_there_any_options_for_creating_original_tts/
moarmagic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgzuky
false
null
t3_1mgzuky
/r/LocalLLaMA/comments/1mgzuky/are_there_any_options_for_creating_original_tts/
false
false
self
2
null
Idea for combining multiple models' inference via normalizing logit lists -- would it work, has someone already done it, and how could it be made better?
2
I was pondering the problem of extending the useful lifespans of older models with knowledge cut-offs in the distant past, when something occurred to me with applications beyond just that. The idea is that you would infer on a context with different models (ideally in parallel), but stop short of the minmax step, and ...
2025-08-04T00:49:44
https://www.reddit.com/r/LocalLLaMA/comments/1mgzmmw/idea_for_combining_multiple_models_inference_via/
ttkciar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgzmmw
false
null
t3_1mgzmmw
/r/LocalLLaMA/comments/1mgzmmw/idea_for_combining_multiple_models_inference_via/
false
false
self
2
null
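The ensemble idea in the post above (stop before token selection, normalize each model's logit list into a probability distribution over a shared vocabulary, then combine) can be sketched as:

```python
# Sketch of the idea: take each model's raw logits over a shared vocabulary,
# normalize each list into a probability distribution with softmax (so
# differently scaled logits become comparable), average, then pick a token.
import math

def softmax(logits):
    m = max(logits)                     # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_next_token(per_model_logits):
    dists = [softmax(logits) for logits in per_model_logits]
    vocab = len(dists[0])
    avg = [sum(d[i] for d in dists) / len(dists) for i in range(vocab)]
    return max(range(vocab), key=avg.__getitem__), avg

# Two models over a 4-token vocabulary; the second model's logits are on a
# hotter scale, but softmax makes them comparable before averaging.
tok, avg = ensemble_next_token([[2.0, 1.0, 0.1, 0.0],
                                [4.0, 8.0, 0.2, 0.0]])
print(tok)  # 1: the second model's strong preference outweighs the first's
```

The hard part the post gestures at is upstream of this: the models must share a tokenizer (or have their vocabularies mapped onto a common one) before the logit lists index the same tokens and averaging is meaningful.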
Mini SVG test: law icons in pt-br (HorizonBeta x Gemini x Claude x Grok)
7
2025-08-04T00:10:41
https://www.reddit.com/gallery/1mgytfi
celsowm
reddit.com
1970-01-01T00:00:00
0
{}
1mgytfi
false
null
t3_1mgytfi
/r/LocalLLaMA/comments/1mgytfi/mini_svg_test_law_icons_in_ptbr_horizonbeta_x/
false
false
https://b.thumbs.redditm…PR2ddXENqtHk.jpg
7
null
Reducing token usage with progressive documentation loading - looking for feedback on approach
10
I've been working on a documentation system designed to help LLMs understand codebases more efficiently, and I'd love feedback from this community. The Problem: - Feeding entire codebases/docs to LLMs wastes tokens and context - Most tasks only need a subset of information - Human-written docs aren't structured for ...
2025-08-04T00:10:33
https://www.reddit.com/r/LocalLLaMA/comments/1mgytca/title_reducing_token_usage_with_progressive/
mrsockpicks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgytca
false
null
t3_1mgytca
/r/LocalLLaMA/comments/1mgytca/title_reducing_token_usage_with_progressive/
false
false
self
10
null
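Progressive loading of this kind boils down to ranking chunks and filling a token budget greedily rather than pasting the whole corpus into context. A toy sketch, with an invented keyword-count relevance score and invented chunks:

```python
# Progressive loading boiled down: score doc chunks for relevance, then fill
# a token budget greedily instead of feeding the entire docs into context.
# The keyword-count scorer and the chunks below are invented for illustration.
def select_chunks(chunks, query_terms, budget_tokens):
    def score(chunk):
        words = chunk["text"].lower().split()
        return sum(words.count(term) for term in query_terms)
    picked, used = [], 0
    for chunk in sorted(chunks, key=score, reverse=True):
        if used + chunk["tokens"] <= budget_tokens:
            picked.append(chunk["name"])
            used += chunk["tokens"]
    return picked, used

chunks = [
    {"name": "auth.md", "tokens": 400, "text": "login token auth flow"},
    {"name": "db.md",   "tokens": 900, "text": "database schema migrations"},
    {"name": "api.md",  "tokens": 300, "text": "auth endpoints token refresh"},
]
picked, used = select_chunks(chunks, ["auth", "token"], budget_tokens=800)
print(picked, used)  # ['auth.md', 'api.md'] 700
```

Swapping the keyword scorer for embedding similarity gives the retrieval half of a standard RAG pipeline; the "progressive" part is letting the model request deeper chunks only when the first selection proves insufficient.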
MLX DWQ question: what's "lr1e-8" and "lr8e-7 " and should we care?
5
that's it. thanks.
2025-08-04T00:08:50
https://www.reddit.com/r/LocalLLaMA/comments/1mgys0z/mlx_dwq_question_whats_lr1e8_and_lr8e7_and_should/
JLeonsarmiento
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgys0z
false
null
t3_1mgys0z
/r/LocalLLaMA/comments/1mgys0z/mlx_dwq_question_whats_lr1e8_and_lr8e7_and_should/
false
false
self
5
null
Help me pick a first model
0
I'm new to this and trying to figure out the reasonable limits of my PC for running these things. Still trying to understand quantization and offloading. I have 12 GB of VRAM and 64 GB of regular DDR4 RAM available. What's the smartest LLM of the models now out that I'll be able to run at a reasonable speed on this?
2025-08-04T00:05:38
https://www.reddit.com/r/LocalLLaMA/comments/1mgypja/help_me_pick_a_first_model/
Dracofrost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgypja
false
null
t3_1mgypja
/r/LocalLLaMA/comments/1mgypja/help_me_pick_a_first_model/
false
false
self
0
null
Building a CLI tool to fix my biggest git frustration: lost commit context
4
During my internship at a big tech company, I struggled with a massive, messy codebase. Too many changes were impossible to understand either because of vague commit messages or because the original authors had left. Frustrated by losing so much context in git history, I built Gitdive: a local CLI tool that lets you h...
2025-08-04T00:05:00
https://www.reddit.com/r/LocalLLaMA/comments/1mgyp1z/building_a_cli_tool_to_fix_my_biggest_git/
Forgotten_Person
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgyp1z
false
null
t3_1mgyp1z
/r/LocalLLaMA/comments/1mgyp1z/building_a_cli_tool_to_fix_my_biggest_git/
false
false
self
4
{'enabled': False, 'images': [{'id': 'fL6uD7nkg1LCNjr4XqzTYY0UgOUhQel0M-wztqgNAA0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fL6uD7nkg1LCNjr4XqzTYY0UgOUhQel0M-wztqgNAA0.png?width=108&crop=smart&auto=webp&s=1f02a3f481cf62f18021fe3d1ce90e5364980dd8', 'width': 108}, {'height': 108, 'url': 'h...
What is best website to pay to run Qwen3b to code?
0
Just curious what the consensus is on best website to run this model (or any model really) for coding. Price and quality are key. Thanks
2025-08-03T23:35:57
https://www.reddit.com/r/LocalLLaMA/comments/1mgy28u/what_is_best_website_to_pay_to_run_qwen3b_to_code/
Complex-Emergency-60
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgy28u
false
null
t3_1mgy28u
/r/LocalLLaMA/comments/1mgy28u/what_is_best_website_to_pay_to_run_qwen3b_to_code/
false
false
self
0
null
Help with a project: what is (or would be) the smallest local LLM one can use to reliably recognize TCG cards (PKMN, YGO, MTG) and generate a CSV file with language, edition, card name and treatment?
1
That's it. I'm seeing some card readers online, but they are not reliable enough. How could we improve this with an extremely small LLM?
2025-08-03T23:34:12
https://www.reddit.com/r/LocalLLaMA/comments/1mgy0tg/help_with_a_project_what_is_or_would_be_the/
Turbulent_Pin7635
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgy0tg
false
null
t3_1mgy0tg
/r/LocalLLaMA/comments/1mgy0tg/help_with_a_project_what_is_or_would_be_the/
false
false
self
1
null
M3 – A Modular Mathematical Memory system for local LLMs (Collatz, Goldbach, Riemann, Levenshtein)
0
A modular system for local LLMs that applies mathematical heuristics to manage memory dynamically. The approach is fully implemented in Bash, and focuses on preserving semantic relevance while avoiding context overflow. Key mechanisms: * Levenshtein distance → fragment selection by uniqueness and repetition * Colla...
2025-08-03T22:47:59
https://www.reddit.com/r/LocalLLaMA/comments/1mgx02s/m3_a_modular_mathematical_memory_system_for_local/
Ok_Exchange_8504
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgx02s
false
null
t3_1mgx02s
/r/LocalLLaMA/comments/1mgx02s/m3_a_modular_mathematical_memory_system_for_local/
false
false
self
0
null
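The Levenshtein mechanism listed first can be sketched with the classic dynamic-programming edit distance, used here to drop near-duplicate memory fragments; the threshold and the fragments are invented for illustration:

```python
# Levenshtein-based fragment selection: fragments that are near-duplicates
# of already-kept memory (edit distance at or below a threshold) are dropped,
# keeping the memory store unique without overflowing the context.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def keep_unique(fragments, threshold=3):
    kept = []
    for frag in fragments:
        if all(levenshtein(frag, k) > threshold for k in kept):
            kept.append(frag)
    return kept

print(levenshtein("kitten", "sitting"))  # 3
print(keep_unique(["load model", "load models", "free the cache"]))
```

This is O(len(a) * len(b)) per comparison, which is fine for short fragments; larger memory stores usually switch to hashing or embeddings for the duplicate check.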
Most complete almost plug and play LLM Tool with features
5
I finally got the hardware I needed to run LLMs locally. After some testing, here’s a quick recap of my experience so far: Jan.ai: Amazing performance. It runs my models incredibly well, and overall the experience was smooth. However, I really need features like image upload, web search, and deep research capabilitie...
2025-08-03T22:03:06
https://www.reddit.com/r/LocalLLaMA/comments/1mgvyyj/most_complete_almost_plug_and_play_llm_tool_with/
ExtensionAd182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgvyyj
false
null
t3_1mgvyyj
/r/LocalLLaMA/comments/1mgvyyj/most_complete_almost_plug_and_play_llm_tool_with/
false
false
self
5
{'enabled': False, 'images': [{'id': '-ctwWkN6rHGc2V6GtsAmk-HLdFHSpEj4U0gSuMMDRmw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-ctwWkN6rHGc2V6GtsAmk-HLdFHSpEj4U0gSuMMDRmw.png?width=108&crop=smart&auto=webp&s=316ac2c235dbf757adc6d57077bbf14ff212c7fd', 'width': 108}, {'height': 121, 'url': 'h...
Buy 1 A100 GPU or similar to rent it out. No technical knowledge
1
[removed]
2025-08-03T21:58:29
https://www.reddit.com/r/LocalLLaMA/comments/1mgvuwj/buy_1_a100_gpu_or_similar_to_rent_it_out_no/
Regular_Folk_2530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgvuwj
false
null
t3_1mgvuwj
/r/LocalLLaMA/comments/1mgvuwj/buy_1_a100_gpu_or_similar_to_rent_it_out_no/
false
false
self
1
null
Looking to build or buy a mini pc for LLM
7
I want to run an entry-level local LLM of 15-35B at Q4 with a context of at least 10k that I want to train on local PDFs using RAG. I now have an M4 mini 16GB and a 4K gaming rig with a 4090 24GB. I mostly run the mini since it sips power, and I would maybe use the AI infrequently throughout the day, so anything above 5-10 t...
2025-08-03T21:36:36
https://www.reddit.com/r/LocalLLaMA/comments/1mgvbw6/looking_to_build_or_buy_a_mini_pc_for_llm/
nemuro87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgvbw6
false
null
t3_1mgvbw6
/r/LocalLLaMA/comments/1mgvbw6/looking_to_build_or_buy_a_mini_pc_for_llm/
false
false
self
7
null
Models not showing up in openwebui
1
I am using a Python virtual environment for this, not Docker. Inside of openwebui I have gone to Settings/Connections and added just the URL: http://localhost:11434 When I go to Admin Panel/Settings/Connections and Manage under Ollama API, I see my models listed, but only under the Delete a Model dropdown box. My models don...
2025-08-03T21:35:05
https://www.reddit.com/r/LocalLLaMA/comments/1mgvakc/models_not_showing_up_in_openwebui/
XiRw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgvakc
false
null
t3_1mgvakc
/r/LocalLLaMA/comments/1mgvakc/models_not_showing_up_in_openwebui/
false
false
self
1
null
Best uncensored +50B parameters model?
0
Thx
2025-08-03T21:31:06
https://www.reddit.com/r/LocalLLaMA/comments/1mgv74s/best_uncensored_50b_parameters_model/
Own-Potential-2308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgv74s
false
null
t3_1mgv74s
/r/LocalLLaMA/comments/1mgv74s/best_uncensored_50b_parameters_model/
false
false
self
0
null
GLM 4.5 Air Produces Better Code Without Thinking, Using 3-bit MLX (/nothink)?
34
Hi, I encountered a strange situation with GLM-4.5-Air 3bit mlx that maybe others can shed light on: I tried to reproduce the Flappy Bird game featured in the [z.ai/blog/glm-4.5](http://z.ai/blog/glm-4.5) blog post, using the exact same prompt, but failed 3 times - the generated game either fails during collision det...
2025-08-03T21:28:42
https://www.reddit.com/r/LocalLLaMA/comments/1mgv53t/glm_45_air_produces_better_code_without_thinking/
jcmyang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgv53t
false
null
t3_1mgv53t
/r/LocalLLaMA/comments/1mgv53t/glm_45_air_produces_better_code_without_thinking/
false
false
self
34
null
Generating Unit Tests with Qwen3-Coder-30B - not really usable
5
Hi all, I am currently running Qwen3-Coder-30B and tried to create unit tests for my classes with different tools like qwen coder, continue dev or proxy ai. The tests which are created have many errors and I don't understand why. At least for qwen coder, I would have expected it to check the codebase for all files. ...
2025-08-03T21:27:57
https://www.reddit.com/r/LocalLLaMA/comments/1mgv4h3/generating_unit_tests_with_qwen3coder30b_not/
FarXTraveler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgv4h3
false
null
t3_1mgv4h3
/r/LocalLLaMA/comments/1mgv4h3/generating_unit_tests_with_qwen3coder30b_not/
false
false
self
5
null
Grounding an open source agent with its source code
4
Hi! I’m curious if anyone has explored this path of thinking. I started using Gemini cli recently, and as everyone here probably knows a pretty well known (and annoying) habit of LLMs is to hallucinate their own capabilities. I’ve found that a relatively easy way to ground it with its own actual capabilities was to jus...
2025-08-03T21:26:31
https://www.reddit.com/r/LocalLLaMA/comments/1mgv384/grounding_an_open_source_agent_with_its_source/
PatienceKitchen6726
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgv384
false
null
t3_1mgv384
/r/LocalLLaMA/comments/1mgv384/grounding_an_open_source_agent_with_its_source/
false
false
self
4
null
Can you have a « bad run » with LLM ?
0
Since LLMs are not deterministic, can you have a « bad run » with your prompt? Like having prompts that the LLM should be able to respond to 90% of the time, but... too bad, you hit the 10% three chats in a row. It could explain why people experience « dumbness periods » from their favorite model while it's still fine for ever...
2025-08-03T20:59:34
https://i.redd.it/5oiuqtm3cvgf1.jpeg
Kathane37
i.redd.it
1970-01-01T00:00:00
0
{}
1mgues2
false
null
t3_1mgues2
/r/LocalLLaMA/comments/1mgues2/can_you_have_a_bad_run_with_llm/
false
false
default
0
{'enabled': True, 'images': [{'id': '5oiuqtm3cvgf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/5oiuqtm3cvgf1.jpeg?width=108&crop=smart&auto=webp&s=19b27cdc34a9316aa0e5c34cf98ac1c82c2b0602', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/5oiuqtm3cvgf1.jpeg?width=216&crop=smart&auto=w...
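The arithmetic behind the question is simple: assuming an independent 10% failure rate per attempt, three failures in a row happen about one session in a thousand, which is rare for one user but routine across a large user base:

```python
# Arithmetic behind the "bad run" intuition: with an assumed independent 10%
# failure rate per prompt, three failures in a row hit about 1 session in
# 1000 - rare per user, but common in aggregate.
p_fail = 0.10
p_three_in_a_row = p_fail ** 3
print(round(p_three_in_a_row, 6))  # 0.001

# Expected number of users, out of 10,000 three-chat sessions, who see a
# "bad run" of three consecutive failures:
print(round(10_000 * p_three_in_a_row, 2))  # 10.0
```

The independence assumption is the weak point: if failures correlate (same tricky prompt, same sampling settings, same server state), bad runs cluster more than this estimate suggests.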
Would You Use an End-to-End Encrypted AI Chatbot via Signal? (Open Source WIP)
0
Hey, I’m currently building an end-to-end encrypted AI chatbot that works over Signal and uses open-source LLMs (too large for most local devices). I'm planning to open-source it soon and wanted to get early feedback on the concept: # Key Benefits * **Full encryption** throughout: * Signal app → server communicat...
2025-08-03T20:57:36
https://www.reddit.com/r/LocalLLaMA/comments/1mgucy8/would_you_use_an_endtoend_encrypted_ai_chatbot/
smallroundcircle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgucy8
false
null
t3_1mgucy8
/r/LocalLLaMA/comments/1mgucy8/would_you_use_an_endtoend_encrypted_ai_chatbot/
false
false
self
0
null
☄️ Update on the Structured Personality AI Experiment (llama.cpp)
1
[removed]
2025-08-03T20:43:17
https://www.reddit.com/r/LocalLLaMA/comments/1mgtzsi/update_on_the_structured_personality_ai/
Ok_Exchange_8504
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgtzsi
false
null
t3_1mgtzsi
/r/LocalLLaMA/comments/1mgtzsi/update_on_the_structured_personality_ai/
false
false
self
1
null
Is there anything like ollama that doesnt save history or autocheck for updates? And is open source.
1
[removed]
2025-08-03T20:42:45
https://www.reddit.com/r/LocalLLaMA/comments/1mgtzb1/is_there_anything_like_ollama_that_doesnt_save/
Select_Analysis_2887
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgtzb1
false
null
t3_1mgtzb1
/r/LocalLLaMA/comments/1mgtzb1/is_there_anything_like_ollama_that_doesnt_save/
false
false
self
1
null
Ollama is not showing up on openwebui through Python venv
0
Openwebui: I am not using Docker as I couldn’t get the engine started and wsl wouldn’t install for me so I defaulted to Python virtual environment. URL- http://localhost:11434 (I did a powershell curl command confirming ollama is running but I didn’t see any model field listed) Key-none Ollama: I did ollama serve ...
2025-08-03T20:42:19
https://www.reddit.com/r/LocalLLaMA/comments/1mgtyxe/ollama_is_not_showing_up_on_openwebui_through/
XiRw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgtyxe
false
null
t3_1mgtyxe
/r/LocalLLaMA/comments/1mgtyxe/ollama_is_not_showing_up_on_openwebui_through/
false
false
self
0
null
samsung 9100 pro 4tb vs WD_BLACK 8TB SN850X for llama.cpp
1
Would you rather have a samsung 9100 pro 4tb vs WD\_BLACK 8TB SN850X for llama.cpp? The samsung is twice as fast but the WD is twice as big. They cost roughly the same with the WD being slightly more expensive. I already have a samsung 990 pro 4tb and it's fine unless I need to work with safetensor files, in which cas...
2025-08-03T20:34:27
https://www.reddit.com/r/LocalLLaMA/comments/1mgtrvz/samsung_9100_pro_4tb_vs_wd_black_8tb_sn850x_for/
createthiscom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgtrvz
false
null
t3_1mgtrvz
/r/LocalLLaMA/comments/1mgtrvz/samsung_9100_pro_4tb_vs_wd_black_8tb_sn850x_for/
false
false
self
1
null
Horizon Beta is OpenAI
174
https://preview.redd.it/…n Beta is OpenAI
2025-08-03T20:16:52
https://www.reddit.com/r/LocalLLaMA/comments/1mgtboa/horizon_beta_is_openai/
MiddleLobster9191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgtboa
false
null
t3_1mgtboa
/r/LocalLLaMA/comments/1mgtboa/horizon_beta_is_openai/
false
false
https://a.thumbs.redditm…7ofVVWlBmYW4.jpg
174
null
Bolt Graphics (@BoltGraphicsInc) on X
0
Could this be the one? Our one? Our precious?
2025-08-03T20:16:37
https://x.com/BoltGraphicsInc/status/1952049562912530494
DrVonSinistro
x.com
1970-01-01T00:00:00
0
{}
1mgtbeq
false
null
t3_1mgtbeq
/r/LocalLLaMA/comments/1mgtbeq/bolt_graphics_boltgraphicsinc_on_x/
false
false
default
0
null
A rambling post on ollama / llama.cpp and when to use each. Pros and cons and everything in between.
4
I'm not a professional LLMer by any means, but I figured I'd lay out my little journey and the findings along the way. When I first saw you could run local models on your own hardware, I thought it was so interesting. I played around with ollama a bit, had a lot of fun tying it into random things. It's so easy to get ...
2025-08-03T20:10:12
https://www.reddit.com/r/LocalLLaMA/comments/1mgt5bx/a_rambling_post_on_ollama_llamacpp_and_when_to/
UsualResult
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgt5bx
false
null
t3_1mgt5bx
/r/LocalLLaMA/comments/1mgt5bx/a_rambling_post_on_ollama_llamacpp_and_when_to/
false
false
self
4
null
Mac M3 + RooCode + Qwen3-Coder-30B (4-bit DWQ) in LM Studio — Possibly the Best Local Cursor Alternative Right Now?
126
2025-08-03T20:07:16
https://v.redd.it/e1348s852vgf1
onil_gova
v.redd.it
1970-01-01T00:00:00
0
{}
1mgt2om
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/e1348s852vgf1/DASHPlaylist.mpd?a=1756843651%2CY2ZhYjY4ZWU2Mjg1OGQ4YzliZWYwNTc0ZjE2MGE1ZjZhNTUxNzhiMDFlMzcyMzlkMWFmMDY2ZGU1ZDQyNmFlZA%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/e1348s852vgf1/DASH_720.mp4?source=fallback', 'ha...
t3_1mgt2om
/r/LocalLLaMA/comments/1mgt2om/mac_m3_roocode_qwen3coder30b_4bit_dwq_in_lm/
false
false
https://external-preview…b3cd4f8e74e11bb9
126
{'enabled': False, 'images': [{'id': 'dW1vdmpyODUydmdmMTyadiYKHvooWeiroDfdLLS_KqibMMempmwSjMR0DRio', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/dW1vdmpyODUydmdmMTyadiYKHvooWeiroDfdLLS_KqibMMempmwSjMR0DRio.png?width=108&crop=smart&format=pjpg&auto=webp&s=8aff50a70e5cb53b7b38c9b35a93218ab06b5...
me irl
0
2025-08-03T19:59:30
https://i.redd.it/mr9tg2vb1vgf1.png
-Fibon4cci
i.redd.it
1970-01-01T00:00:00
0
{}
1mgsvcy
false
null
t3_1mgsvcy
/r/LocalLLaMA/comments/1mgsvcy/me_irl/
false
false
default
0
{'enabled': True, 'images': [{'id': 'mr9tg2vb1vgf1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/mr9tg2vb1vgf1.png?width=108&crop=smart&auto=webp&s=375eab727a9b1b530e61d9f1f0c6c24e9073f0d5', 'width': 108}, {'height': 172, 'url': 'https://preview.redd.it/mr9tg2vb1vgf1.png?width=216&crop=smart&auto=web...
Raw weights answer gibberish while ollama answers just fine!
1
I know it is a very amateur question but I am having a headache with this. I have downloaded llama 3.1 8B from meta and painfully converted them to gguf so I could use them with llama.cpp but when I use my gguf it just outputs random stuff that he is Jarvis! I tested system prompts but it changed nothing! my initial pr...
2025-08-03T19:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1mgstni/raw_weights_answer_gibberish_while_ollama_answers/
Biodie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgstni
false
null
t3_1mgstni
/r/LocalLLaMA/comments/1mgstni/raw_weights_answer_gibberish_while_ollama_answers/
false
false
self
1
null
Make chatterbox tts sound more realistic
3
My question is how much can the sliders actually effect the voice because i feel like it sounds kinda robotic regardless of my settings, should I try a different audio clip or is there nothing I can do
2025-08-03T19:03:35
https://www.reddit.com/r/LocalLLaMA/comments/1mgrhcp/make_chatterbox_tts_sound_more_realistic/
StrangeMan060
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgrhcp
false
null
t3_1mgrhcp
/r/LocalLLaMA/comments/1mgrhcp/make_chatterbox_tts_sound_more_realistic/
false
false
self
3
null
I don’t understand how to get what I want from Local LLM
7
I use ChatGPT and Claude paid plans. I’m incorporating local for writing topics that they censor. I understand that open source is not on the level of Claude and can’t be at this time. I accept that. However, I’m having trouble getting it to work at even a basic level and I’ve found nothing on Google or YouTube that ...
2025-08-03T19:02:44
https://www.reddit.com/r/LocalLLaMA/comments/1mgrgmu/i_dont_understand_how_to_get_what_i_want_from/
AccidentalFolklore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgrgmu
false
null
t3_1mgrgmu
/r/LocalLLaMA/comments/1mgrgmu/i_dont_understand_how_to_get_what_i_want_from/
false
false
self
7
null
What’s the Best Open-Source Small LLM (≤ 8B) for Agentic Web Page Interactions?
12
Hey folks, I’m looking for recommendations for **open-source multimoal LLMs no larger than 8B parameters** that perform well as *agents* for interacting with web pages. **Context / Constraints:** * **Max size:** 8B params (need to run locally on an 8 GB GPU without major slowdowns) * **Use case:** Complex browser au...
2025-08-03T18:45:33
https://www.reddit.com/r/LocalLLaMA/comments/1mgr13d/whats_the_best_opensource_small_llm_8b_for/
Extra-Designer9333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgr13d
false
null
t3_1mgr13d
/r/LocalLLaMA/comments/1mgr13d/whats_the_best_opensource_small_llm_8b_for/
false
false
self
12
null
Scam Altman : gpt5
0
https://preview.redd.it/…c1349da92e7eda
2025-08-03T18:44:23
https://www.reddit.com/r/LocalLLaMA/comments/1mgr02b/scam_altman_gpt5/
omar07ibrahim1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgr02b
false
null
t3_1mgr02b
/r/LocalLLaMA/comments/1mgr02b/scam_altman_gpt5/
false
false
https://b.thumbs.redditm…Mhqb5RCsuKUE.jpg
0
null
Models for 24gb Macbook
0
Hey, so I have a Macbook Pro M4 pro with 24gb of ram, what are your recommendations for models that could work on the computer. Also would you recommend sticking to ollama or looking into a mlx model provider.
2025-08-03T18:27:09
https://www.reddit.com/r/LocalLLaMA/comments/1mgqkjt/models_for_24gb_macbook/
Ben-R1106
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgqkjt
false
null
t3_1mgqkjt
/r/LocalLLaMA/comments/1mgqkjt/models_for_24gb_macbook/
false
false
self
0
null
Are there any good open source models for NSFW writing?
32
i heard some good things about muse writing prose well and not being censored, just curious if there are other similar open models that are sort of trained from books not part of a very expensive website. I've been using gemini lately on the website and it does a lot of things right. The main issue is it just doesn't...
2025-08-03T18:14:35
https://www.reddit.com/r/LocalLLaMA/comments/1mgq8yz/are_there_any_good_open_source_models_for_nsfw/
LoneyGamer2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgq8yz
false
null
t3_1mgq8yz
/r/LocalLLaMA/comments/1mgq8yz/are_there_any_good_open_source_models_for_nsfw/
false
false
nsfw
32
null
Need help- unsure of right ollama configs with 6x 3090’s, also model choice for RAG?
0
Hi LocalLLaMA, I’m a bit confused on two levels and need help: 1) What are the best settings to get ollama to utilize all (6) 3090’s so I can use parallel processing. 2) Do I go with an LLM model that can fit on one 3090 or is it ok to go with a bigger model? Any recommendations on models? My use case is for i...
2025-08-03T17:54:43
https://www.reddit.com/r/LocalLLaMA/comments/1mgpq7a/need_help_unsure_of_right_ollama_configs_with_6x/
Business-Weekend-537
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgpq7a
false
null
t3_1mgpq7a
/r/LocalLLaMA/comments/1mgpq7a/need_help_unsure_of_right_ollama_configs_with_6x/
false
false
self
0
null
Reimplemention of Qwen 2 from scratch
113
🧠 Just Finished: Implementing Qwen 2 (1.5B) from Scratch A few days ago, I built the Qwen 2 language model (1.5B) completely from scratch, making it the second LLM I’ve implemented after Gemma 🚀. This was a major milestone for me, especially since there’s no open-source implementation of Qwen 2 available online (at l...
2025-08-03T17:38:43
https://www.reddit.com/r/LocalLLaMA/comments/1mgpb8t/reimplemention_of_qwen_2_from_scratch/
CodingWithSatyam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgpb8t
false
null
t3_1mgpb8t
/r/LocalLLaMA/comments/1mgpb8t/reimplemention_of_qwen_2_from_scratch/
false
false
self
113
{'enabled': False, 'images': [{'id': 'qSeAasESDn-vQm932F6I_C6FnJQShX4HO9nyyxSZWlY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qSeAasESDn-vQm932F6I_C6FnJQShX4HO9nyyxSZWlY.png?width=108&crop=smart&auto=webp&s=09e4e357ff6f03ec51f0d4d875169c2822efb899', 'width': 108}, {'height': 108, 'url': 'h...
Fastest way to run Qwen 3 30B A3B on 32GB RAM+10GB VRAM in LM Studio?
9
Currently using 30B A3B on a Windows 11 system with 32GB DDR4, Ryzen 5600 and 3080 10GB and I'm getting about 15t/s generation speeds, but I've seen other people claim they can get 20t/s-25t/s. Is 15t/s typical for my setup or is there any way I can squeeze more speed out of it?
2025-08-03T17:00:45
https://www.reddit.com/r/LocalLLaMA/comments/1mgocw6/fastest_way_to_run_qwen_3_30b_a3b_on_32gb_ram10gb/
yungfishstick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgocw6
false
null
t3_1mgocw6
/r/LocalLLaMA/comments/1mgocw6/fastest_way_to_run_qwen_3_30b_a3b_on_32gb_ram10gb/
false
false
self
9
null
Jin 3.5 - Does anyone know anything about this model?
13
I literally can't find anything on this model. I saw somewhere on discord that it's similar to claude (which I doubt). any info? and no i'm not promoting this website or any bs like that idk anything about it
2025-08-03T16:53:16
https://jin.elpa.ai/
z_3454_pfk
jin.elpa.ai
1970-01-01T00:00:00
0
{}
1mgo662
false
null
t3_1mgo662
/r/LocalLLaMA/comments/1mgo662/jin_35_does_anyone_know_anything_about_this_model/
false
false
default
13
null
Question : Best small sized LLM for Information extraction from Unstructured Text
1
Hello Friends, What is a smaller LLM \[ 1B - 10B\] model that is very good for Information extraction from unstructured text. I am okay to fine-tune, I would only have a few thousand \[ < 10k \] samples. I would need to run this on a cheaper GPU than A100 likely T4/L4 or A10s. Could you share your experience using...
2025-08-03T16:51:59
https://www.reddit.com/r/LocalLLaMA/comments/1mgo50t/question_best_small_sized_llm_for_information/
smaddali
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgo50t
false
null
t3_1mgo50t
/r/LocalLLaMA/comments/1mgo50t/question_best_small_sized_llm_for_information/
false
false
self
1
null
When DeepSeek r2?
216
They said they're refining it months ago. Possibly timing to coincide with OpenAI's drop? Would be epic, I'm a fan of both. Especially if OpenAI's is not a reasoning model.
2025-08-03T16:44:12
https://i.redd.it/dz0i0w1j2ugf1.jpeg
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1mgny8p
false
null
t3_1mgny8p
/r/LocalLLaMA/comments/1mgny8p/when_deepseek_r2/
false
false
default
216
{'enabled': True, 'images': [{'id': 'dz0i0w1j2ugf1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/dz0i0w1j2ugf1.jpeg?width=108&crop=smart&auto=webp&s=bab256437dc4b2ab50c0bfefc751721bae4de7c5', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/dz0i0w1j2ugf1.jpeg?width=216&crop=smart&auto=w...
Drummer's Cydonia R1 24B v4 - A thinking Mistral Small 3.2!
97
2025-08-03T16:42:24
https://huggingface.co/TheDrummer/Cydonia-R1-24B-v4
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1mgnwnx
false
null
t3_1mgnwnx
/r/LocalLLaMA/comments/1mgnwnx/drummers_cydonia_r1_24b_v4_a_thinking_mistral/
false
false
default
97
{'enabled': False, 'images': [{'id': 'QvqyA98HcA5dY_pf-rNusUvIEIgIYjCW-RCNgIbKym0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QvqyA98HcA5dY_pf-rNusUvIEIgIYjCW-RCNgIbKym0.png?width=108&crop=smart&auto=webp&s=165ab4b12c90bfe025b46debae13b140652c63d2', 'width': 108}, {'height': 116, 'url': 'h...
Is there a "Chat-GPT agent" kind of agent that is open source and can be run locally?
0
Any decent open source LLM like mistral or gemma that also has vision could technically already use a web browser to do tasks and click on links. Is there already a model that can do that? Can't be too far away
2025-08-03T16:35:03
https://www.reddit.com/r/LocalLLaMA/comments/1mgnq9n/is_there_a_chatgpt_agent_kind_of_agent_that_is/
maleo999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgnq9n
false
null
t3_1mgnq9n
/r/LocalLLaMA/comments/1mgnq9n/is_there_a_chatgpt_agent_kind_of_agent_that_is/
false
false
self
0
null
If Horizon Models is not from OpenAI, who would be?
35
This model is seriously impressive, feels really powerful, and that fits with what people have been saying about it being 120B parameters in size. It's big enough to be smart without being so huge it steps on OpenAI’s toes. In my experience with the model, here some notes: * It works *really* well with agents and tool...
2025-08-03T16:15:20
https://www.reddit.com/r/LocalLLaMA/comments/1mgn94g/if_horizon_models_is_not_from_openai_who_would_be/
AMOVCS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgn94g
false
null
t3_1mgn94g
/r/LocalLLaMA/comments/1mgn94g/if_horizon_models_is_not_from_openai_who_would_be/
false
false
self
35
null
Open Source Voice Cloning at 16x real-time: Porting Chatterbox to vLLM
172
2025-08-03T16:01:53
https://github.com/randombk/chatterbox-vllm
dlp_randombk
github.com
1970-01-01T00:00:00
0
{}
1mgmx8w
false
null
t3_1mgmx8w
/r/LocalLLaMA/comments/1mgmx8w/open_source_voice_cloning_at_16x_realtime_porting/
false
false
default
172
{'enabled': False, 'images': [{'id': 'JMH4JviM3uUr7o0BdE49mxUti-kj575to7zYT_Rzt3A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JMH4JviM3uUr7o0BdE49mxUti-kj575to7zYT_Rzt3A.png?width=108&crop=smart&auto=webp&s=6d660bf476941abc2978684579558acc15bd0d2e', 'width': 108}, {'height': 108, 'url': 'h...
How many of you actually know by heart the general structure of the transformer architecture?
3
I mean we all know the concepts, but how many can actually say they memorized at least the high level architecture of the transformer. which architectures/knowledge do you consider a must for scaling and fine tuning models? (GPT? BERT? what ever deepseek did with their articles?)
2025-08-03T15:55:07
https://www.reddit.com/r/LocalLLaMA/comments/1mgmr6x/how_many_of_you_actually_know_by_heart_the/
CptKrupnik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgmr6x
false
null
t3_1mgmr6x
/r/LocalLLaMA/comments/1mgmr6x/how_many_of_you_actually_know_by_heart_the/
false
false
self
3
null
How would you generate a dataset to fine-tune a llm to your codebase?
1
I know a lot of people are fine-tuning their models using their codebase but I couldn't find that many resources on how to build this dataset. Sure you could dump your codebase and that's it but there must be a better way to teach the model how to interact with this codebase right? What did you try? And did it work?
2025-08-03T15:49:08
https://www.reddit.com/r/LocalLLaMA/comments/1mgmlzw/how_would_you_generate_a_dataset_to_finetune_a/
ThisIsBartRick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgmlzw
false
null
t3_1mgmlzw
/r/LocalLLaMA/comments/1mgmlzw/how_would_you_generate_a_dataset_to_finetune_a/
false
false
self
1
null
Daydreaming of a new Gemma model
49
Am I the only person who can't stop day dreaming of a larger Gemma model? I genuinely prefer the vibe of Gemma 3 27B to just about every other LLM I have been able to get my hands on, and I'm gearing up to fund a major fine-tune/tweak of an OS model this year. (I would take the plunge on Cohere's 112 Command A Vision i...
2025-08-03T15:33:53
https://www.reddit.com/r/LocalLLaMA/comments/1mgm8d3/daydreaming_of_a_new_gemma_model/
Jazzlike_Source_5983
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgm8d3
false
null
t3_1mgm8d3
/r/LocalLLaMA/comments/1mgm8d3/daydreaming_of_a_new_gemma_model/
false
false
self
49
null
Are image generations from LLMs as accurate as OpenAI?
0
Never tried it I’m just curious what your experience has been like with it. I was wondering if it looks “cheap” or too uncanny valley like how some websites or apps are with it.
2025-08-03T15:09:27
https://www.reddit.com/r/LocalLLaMA/comments/1mglmse/are_image_generations_from_llms_as_accurate_as/
XiRw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mglmse
false
null
t3_1mglmse
/r/LocalLLaMA/comments/1mglmse/are_image_generations_from_llms_as_accurate_as/
false
false
self
0
null
Teaching LM Studio to Browse the Internet When Answering Questions
23
I really like LM Studio because it allows you to run AI models locally, preserving the privacy of your conversations with the AI. However, compared to commercial online models, LM Studio doesn’t support internet browsing “out of the box.” Those models can’t use up-to-date information from the Internet to answer questio...
2025-08-03T15:05:38
https://www.reddit.com/r/LocalLLaMA/comments/1mgljhp/teaching_lm_studio_to_browse_the_internet_when/
ievkz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgljhp
false
null
t3_1mgljhp
/r/LocalLLaMA/comments/1mgljhp/teaching_lm_studio_to_browse_the_internet_when/
false
false
self
23
null
Noob in jailbreaking/alignment research seeks direction.
0
I'm not familiar with AI communities, and I couldn't really find anything online, so is this type of jailbreak normal? Just walk the bot through how to auto-corrupt? It also works on chatgpt. Almost all jailbreaking I can find is about deception. Any direction would be helpful, and/or DM me if you're into this type of ...
2025-08-03T14:46:36
https://i.redd.it/mq6g6udlgtgf1.jpeg
Eastern-Elephant52
i.redd.it
1970-01-01T00:00:00
0
{}
1mgl2u4
false
null
t3_1mgl2u4
/r/LocalLLaMA/comments/1mgl2u4/noob_in_jailbreakingalignment_research_seeks/
false
false
default
0
{'enabled': True, 'images': [{'id': 'mq6g6udlgtgf1', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/mq6g6udlgtgf1.jpeg?width=108&crop=smart&auto=webp&s=e93e4253401310beab55e6d0fdef1e555f9990d2', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/mq6g6udlgtgf1.jpeg?width=216&crop=smart&auto=w...
Is EXL3 doomed?
27
I was very excited for the release of EXL3 because of its increased performance and revised design to support new models easier. It’s been an eternity since its early preview… and now I wonder if it is doomed. Not just because it’s slow to release, but because models are moving towards large MoEs that all but require th...
2025-08-03T14:45:24
https://github.com/turboderp-org/exllamav3
silenceimpaired
github.com
1970-01-01T00:00:00
0
{}
1mgl1qz
false
null
t3_1mgl1qz
/r/LocalLLaMA/comments/1mgl1qz/is_exl3_doomed/
false
false
default
27
{'enabled': False, 'images': [{'id': 'FXRrg4PULA4HvXvU8478ZHA7JbZ5sZwNQBeL67rZtHI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FXRrg4PULA4HvXvU8478ZHA7JbZ5sZwNQBeL67rZtHI.png?width=108&crop=smart&auto=webp&s=f93311c0ef06e6cbd0115381f5af07f2cf7c6763', 'width': 108}, {'height': 108, 'url': 'h...
This might be the largest un-aligned open-source model
222
Here's a completely new 70B dense model trained from scratch on 1.5T high quality tokens - only SFT with basic chat and instructions, no RLHF alignment. Plus, it speaks Korean and Japanese. [https://huggingface.co/trillionlabs/Tri-70B-preview-SFT](https://huggingface.co/trillionlabs/Tri-70B-preview-SFT)
2025-08-03T14:41:20
https://www.reddit.com/r/LocalLLaMA/comments/1mgky8g/this_might_be_the_largest_unaligned_opensource/
jshin49
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgky8g
false
null
t3_1mgky8g
/r/LocalLLaMA/comments/1mgky8g/this_might_be_the_largest_unaligned_opensource/
false
false
self
222
{'enabled': False, 'images': [{'id': '54LcYt31V5699aK96P6r3bJQs24PiOVpBBLMv2INZiw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/54LcYt31V5699aK96P6r3bJQs24PiOVpBBLMv2INZiw.png?width=108&crop=smart&auto=webp&s=b7a80c31c557591f18bda1f387961a8fe38f053e', 'width': 108}, {'height': 116, 'url': 'h...
Table Extraction for Tabloid Paper Sizes
1
I am looking to extract multiple tables from tabloid paper sizes; I tried GMFT, img2table, and camelot and found no success, likely because they were not trained on larger paper sizes. I currently use Docling, which is honestly the best OCR tool I have found for my use case. Still, however, it misses some tables. What...
2025-08-03T14:37:23
https://www.reddit.com/r/LocalLLaMA/comments/1mgkus5/table_extraction_for_tabloid_paper_sizes/
Ok-Stranger-1229
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgkus5
false
null
t3_1mgkus5
/r/LocalLLaMA/comments/1mgkus5/table_extraction_for_tabloid_paper_sizes/
false
false
self
1
null
LLMstudio doesn’t use all the available VRAM
0
I have a couple of RTX 6000 Blackwell GPUs but LLMstudio only uses the memory up to ~70GB per GPU even after I already set the Guardrails to “relaxed”. If I enable “Limit Model Offload to Dedicated GPU Memory” the situation gets even worse and only ~20GB are used.
2025-08-03T14:31:12
https://www.reddit.com/r/LocalLLaMA/comments/1mgkpm6/llmstudio_doesnt_use_all_the_available_vram/
Khipu28
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgkpm6
false
null
t3_1mgkpm6
/r/LocalLLaMA/comments/1mgkpm6/llmstudio_doesnt_use_all_the_available_vram/
false
false
self
0
null
tokens are getting more expensive
1
2025-08-03T14:28:15
https://ethanding.substack.com/p/ai-subscriptions-get-short-squeezed
ChiliPepperHott
ethanding.substack.com
1970-01-01T00:00:00
0
{}
1mgkn3o
false
null
t3_1mgkn3o
/r/LocalLLaMA/comments/1mgkn3o/tokens_are_getting_more_expensive/
false
false
default
1
null
Use local LLM to neutralise the headers on the web
488
Finally got to finish a weekend project from a couple of months ago. This is a small extension that can use a local LLM (any OpenAI-compatible endpoint is supported) to neutralise the clickbaits on the webpages you visit. It works reasonably well with models of Llama 3.2 3B class and above. Works in Chrome and Firefo...
2025-08-03T14:23:05
https://v.redd.it/niaha18uctgf1
Everlier
v.redd.it
1970-01-01T00:00:00
0
{}
1mgkiti
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/niaha18uctgf1/DASHPlaylist.mpd?a=1756823002%2CZDQ0ZmU1NzRiNDlhZDQyZDNlNGE1ZDMxMTExYTFhMmZiNzQ1MTlkYmY1YmM2MTFiM2UxZmFjNjczODNmMDAwYw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/niaha18uctgf1/DASH_1080.mp4?source=fallback', 'h...
t3_1mgkiti
/r/LocalLLaMA/comments/1mgkiti/use_local_llm_to_neutralise_the_headers_on_the_web/
false
false
https://external-preview…9a279385018af33e
488
{'enabled': False, 'images': [{'id': 'NnJxaTIxOHVjdGdmMXmMnlACXncMKAQW0BNSM6l9H9iAn2MnzkxT52_TMFFC', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/NnJxaTIxOHVjdGdmMXmMnlACXncMKAQW0BNSM6l9H9iAn2MnzkxT52_TMFFC.png?width=108&crop=smart&format=pjpg&auto=webp&s=47b5550c7dd6476d0d3e442d2ca5c2a382f74...
Ollama app requires internet ?
0
Ollama’s new app requires an internet connection to send messages? Other people have also reported this issue but there has been no explanation. Have others here encountered this? Am concerned because I was hoping to get my work to use the new app but now I don’t think I should.
2025-08-03T14:20:14
https://discord.com/channels/1128867683291627614/1400481246580052140
MudMaleficent8980
discord.com
1970-01-01T00:00:00
0
{}
1mgkgek
false
null
t3_1mgkgek
/r/LocalLLaMA/comments/1mgkgek/ollama_app_requires_internet/
false
false
default
0
null
Your proud AI setup
10
Let's tease each other. What is your local AI setup? Are you proud of it? What would you have done differently? What model do you use? Context length? TPS? I only have a MBP2019, so I will just be teased 😂
2025-08-03T14:04:15
https://www.reddit.com/r/LocalLLaMA/comments/1mgk2nm/your_proud_ai_setup/
Recent-Success-1520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgk2nm
false
null
t3_1mgk2nm
/r/LocalLLaMA/comments/1mgk2nm/your_proud_ai_setup/
false
false
self
10
null
Open source alternatives to gpt 4o mini?
4
I was wondering what mini models you guys were using and what’s good and what isn’t, I mostly just need something for quick categorization, I was using ChatGPT 4o-mini api for most things but I should probably swap to something local at this point
2025-08-03T13:56:00
https://www.reddit.com/r/LocalLLaMA/comments/1mgjvn8/open_source_alternatives_to_gpt_4o_mini/
abaris243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgjvn8
false
null
t3_1mgjvn8
/r/LocalLLaMA/comments/1mgjvn8/open_source_alternatives_to_gpt_4o_mini/
false
false
self
4
null
GLM 4.5 Tool Calling Jinja Template
9
The jinja template that comes with the MLX version of GLM 4.5 is using xml style tool calls instead of json. Here's a json template. This means that it is now able to do tool calls in OpenCode, and presumably other things as well (Qwen code/Gemini?). Here's the template: [https://pastebin.com/CfMw7hFS](https://past...
2025-08-03T13:48:55
https://www.reddit.com/r/LocalLLaMA/comments/1mgjpvm/glm_45_tool_calling_jinja_template/
-dysangel-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgjpvm
false
null
t3_1mgjpvm
/r/LocalLLaMA/comments/1mgjpvm/glm_45_tool_calling_jinja_template/
false
false
self
9
null
Are Chinese LLM companies effectively price dumping?
197
People here seem to assume that Chinese AI companies are developing and releasing these models, which cost tens of millions of dollars to develop, for free out of the goodness of their heart. I think this is absurd, considering these are for-profit companies, with shareholders who expect an ROI. In the case of Meta (a...
2025-08-03T13:43:23
https://www.reddit.com/r/LocalLLaMA/comments/1mgjlek/are_chinese_llm_companies_effectively_price/
uutnt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgjlek
false
null
t3_1mgjlek
/r/LocalLLaMA/comments/1mgjlek/are_chinese_llm_companies_effectively_price/
false
false
self
197
{'enabled': False, 'images': [{'id': 'Hj2gEIWsZf7TIi7s4YmEkqIlioGRdkXinRkdl7AYSEo', 'resolutions': [{'height': 122, 'url': 'https://external-preview.redd.it/Hj2gEIWsZf7TIi7s4YmEkqIlioGRdkXinRkdl7AYSEo.png?width=108&crop=smart&auto=webp&s=3562c0af61eb68438add188a0ebb448f79b54657', 'width': 108}, {'height': 244, 'url': '...
Roleplay with large historical context and RAG
15
I play The Expanse role-playing game with some friends every week over Zoom. I've captured the transcripts for every session. I intend to run an LLM locally for players to interact with during the game and so it should act as if it were the AI of the ship. From a high level, the pipeline goes like this; After every ...
2025-08-03T13:31:48
https://www.reddit.com/r/LocalLLaMA/comments/1mgjcai/roleplay_with_large_historical_context_and_rag/
RoboCopsGoneMad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgjcai
false
null
t3_1mgjcai
/r/LocalLLaMA/comments/1mgjcai/roleplay_with_large_historical_context_and_rag/
false
false
self
15
null
Why doesn't "OpenAI" just release one of the models they already have? Like 3.5
264
Are they really gonna train a model that's absolutely useless to give to us?
2025-08-03T13:14:11
https://www.reddit.com/r/LocalLLaMA/comments/1mgiyg4/why_doesnt_openai_just_release_one_of_the_models/
Own-Potential-2308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgiyg4
false
null
t3_1mgiyg4
/r/LocalLLaMA/comments/1mgiyg4/why_doesnt_openai_just_release_one_of_the_models/
false
false
self
264
null
Best Practice For CPU Inference
1
Hello I am currently looking for the best launch parameters for CPU inference with llama.cpp. I am running the Qwen3-30B-A3B model on my laptop with the following specs: AMD Ryzen 7 PRO 7840u w/ Radeon 780M Graphics (16CPUs), 32GB Ram. Since the whole topic around the launch parameters is rather complex, I wanted to...
2025-08-03T13:13:30
https://www.reddit.com/r/LocalLLaMA/comments/1mgixw4/best_practice_for_cpu_inference/
hudimudi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgixw4
false
null
t3_1mgixw4
/r/LocalLLaMA/comments/1mgixw4/best_practice_for_cpu_inference/
false
false
self
1
null
NVIDIA's "Highly Optimistic" DGX Spark Mini-Supercomputer Still Hasn't Hit Retail Despite a Planned July Launch, Suggesting Possible Production Issues
89
2025-08-03T13:06:01
https://wccftech.com/nvidia-highly-optimistic-dgx-spark-mini-supercomputer-still-hasnt-hit-retail/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1mgis6h
false
null
t3_1mgis6h
/r/LocalLLaMA/comments/1mgis6h/nvidias_highly_optimistic_dgx_spark/
false
false
default
89
{'enabled': False, 'images': [{'id': 'PD8rnvifNMtTH2QZfbt1ABecTzvsQu7n786xD74W-RU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/PD8rnvifNMtTH2QZfbt1ABecTzvsQu7n786xD74W-RU.jpeg?width=108&crop=smart&auto=webp&s=462a9746ffb12643f31808d38538f4c0ea76b555', 'width': 108}, {'height': 121, 'url': '...
Why is no one talking about XBai o4 from MetaStoneAI ?
0
It seems that this model is more performant than OpenAI−o3−mini and Claude 4 Opus on Livecode and that being only 32B.
2025-08-03T13:00:39
https://x.com/theMetaStoneAI/status/1951486506562101656
PlasticInitial8674
x.com
1970-01-01T00:00:00
0
{}
1mginx6
false
null
t3_1mginx6
/r/LocalLLaMA/comments/1mginx6/why_is_no_one_talking_about_xbai_o4_from/
false
false
default
0
null
SVG, animation, and 3D-game demos are pointless
0
Every time a new model drops, leaks (real or fake), or a stealth release happens, people rush to make the AI code a dumb picture of a Minion or a pelican riding a bicycle. Abstract puzzles like the Strawberry problem—stuff any human could solve—are way more fun to watch and give a cleaner yardstick for performance. ...
2025-08-03T12:55:22
https://www.reddit.com/r/LocalLLaMA/comments/1mgik2j/svg_animation_and_3dgame_demos_are_pointless/
sudofu_reddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgik2j
false
null
t3_1mgik2j
/r/LocalLLaMA/comments/1mgik2j/svg_animation_and_3dgame_demos_are_pointless/
false
false
self
0
null
MLX -> GGUF
4
Hey LocalLLaMA team, I'm hoping someone more experienced than me can help with a question about fine-tuning. I've been using the MLX library to fine-tune a model on my MacBook, but I need to test the model on other devices that aren't Macs. I'm wondering if there's a best practice for this workflow. Id...
2025-08-03T12:48:58
https://www.reddit.com/r/LocalLLaMA/comments/1mgifea/mlx_gguf/
Not_Another_LLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mgifea
false
null
t3_1mgifea
/r/LocalLLaMA/comments/1mgifea/mlx_gguf/
false
false
self
4
null
Building for the era of experience
0
If
2025-08-03T12:40:36
https://rnikhil.com/2025/07/30/era-of-experience
Excellent-Effect237
rnikhil.com
1970-01-01T00:00:00
0
{}
1mgi9df
false
null
t3_1mgi9df
/r/LocalLLaMA/comments/1mgi9df/building_for_the_era_of_experience/
false
false
default
0
null