| Column | Type | Range / Classes |
|:-|:-|:-|
| title | string | lengths 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | lengths 0–41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | lengths 0–878 |
| author | string | lengths 3–20 |
| domain | string | lengths 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | lengths 646–1.8k |
| name | string | length 10 |
| permalink | string | lengths 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | lengths 301–5.01k |
Use evaluations to find the best local model for your use case!
8
Hey, I am Benny. I have been working on [evalprotocol.io](http://evalprotocol.io) for a while now, and we recently published a post on using evaluations to pick the best local model for your job: [https://fireworks.ai/blog/llm-judge-eval-protocol-ollama](https://fireworks.ai/blog/llm-judge-eval-protocol-ollama). The SDK is here: [https://github.com/eval-protocol/python-sdk](https://github.com/eval-protocol/python-sdk), totally open source, and I would love to figure out how to best work together with everyone. Please give it a try and let me know if you have any feedback! (btw, I'm not familiar with the self-promotion rule here; the SDK is totally open source, but if this is not ok feel free to delete the post.) https://preview.redd.it/x5fupedf6fvf1.png?width=2454&format=png&auto=webp&s=3087b9d6f9c43b534cb38ac8f513e5f66b4ea005
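For readers who want to try the general idea without any particular SDK, here is a minimal sketch of an LLM-as-judge comparison loop against Ollama's local API. This is not the eval-protocol API; the model names and the eval set are illustrative placeholders.

```python
# Minimal sketch: compare local models on a tiny eval set using an LLM judge.
# Assumes an Ollama server on localhost:11434 with the listed models pulled.
import requests

CANDIDATES = ["llama3.1:8b", "qwen2.5:7b"]   # hypothetical local models
JUDGE = "llama3.1:8b"                        # hypothetical judge model
EVAL_SET = [                                 # replace with your own cases
    {"prompt": "Summarize RAG in one sentence.",
     "reference": "Retrieval-augmented generation grounds answers in retrieved documents."},
]

def generate(model: str, prompt: str) -> str:
    # Ollama's native generate endpoint; returns the full completion when stream=False
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False})
    r.raise_for_status()
    return r.json()["response"]

def judge_score(prompt: str, reference: str, answer: str) -> int:
    rubric = (f"Rate the answer from 1 (bad) to 5 (excellent) against the reference.\n"
              f"Question: {prompt}\nReference: {reference}\nAnswer: {answer}\n"
              f"Reply with a single digit.")
    reply = generate(JUDGE, rubric)
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 1

for model in CANDIDATES:
    scores = [judge_score(case["prompt"], case["reference"],
                          generate(model, case["prompt"]))
              for case in EVAL_SET]
    print(f"{model}: mean judge score {sum(scores) / len(scores):.2f}")
```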
2025-10-16T06:43:30
https://www.reddit.com/r/LocalLLaMA/comments/1o7z3sn/use_evaluations_to_find_the_best_local_model_for/
evalProtocol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7z3sn
false
null
t3_1o7z3sn
/r/LocalLLaMA/comments/1o7z3sn/use_evaluations_to_find_the_best_local_model_for/
false
false
https://b.thumbs.redditm…LSDGYOXJaGeY.jpg
8
{'enabled': False, 'images': [{'id': '5Yyi7FZfBglJXdTAq5ctyvLjdHtxUhbYkAAZztvAOSg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5Yyi7FZfBglJXdTAq5ctyvLjdHtxUhbYkAAZztvAOSg.png?width=108&crop=smart&auto=webp&s=428165060a3c3570bd35289b6e8067394ac8aea6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5Yyi7FZfBglJXdTAq5ctyvLjdHtxUhbYkAAZztvAOSg.png?width=216&crop=smart&auto=webp&s=fda2a5f5848477258d889b07c87d7eab32ab53cc', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5Yyi7FZfBglJXdTAq5ctyvLjdHtxUhbYkAAZztvAOSg.png?width=320&crop=smart&auto=webp&s=045fc5db7e3526e72ef185a3260a5927e4980e9e', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5Yyi7FZfBglJXdTAq5ctyvLjdHtxUhbYkAAZztvAOSg.png?width=640&crop=smart&auto=webp&s=555cf9a9171ccbc0dd2a187ee6851a61b8931671', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5Yyi7FZfBglJXdTAq5ctyvLjdHtxUhbYkAAZztvAOSg.png?width=960&crop=smart&auto=webp&s=91d1f1c7ca3de36b0891208e6ca5647913ea04a8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5Yyi7FZfBglJXdTAq5ctyvLjdHtxUhbYkAAZztvAOSg.png?width=1080&crop=smart&auto=webp&s=923bc5cafa49d65801c6020817859f0f8928ca88', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5Yyi7FZfBglJXdTAq5ctyvLjdHtxUhbYkAAZztvAOSg.png?auto=webp&s=af6f3c7b8ad22871716c6fe677d389d13a370db8', 'width': 1200}, 'variants': {}}]}
Seems like Msty is dead?
0
I noticed I have Msty the app still installed on my Mac. I opened it, and... no updates even though I haven't touched it in months? And it doesn't even include gpt-oss in the list of models? Can anyone confirm if the app is dead?
2025-10-16T06:39:47
https://www.reddit.com/r/LocalLLaMA/comments/1o7z1nv/seems_like_msty_is_dead/
DistanceSolar1449
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7z1nv
false
null
t3_1o7z1nv
/r/LocalLLaMA/comments/1o7z1nv/seems_like_msty_is_dead/
false
false
self
0
{'enabled': False, 'images': [{'id': '6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=108&crop=smart&auto=webp&s=2537cf4308678c6acddfcb1f9c162c024ed3fafe', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=216&crop=smart&auto=webp&s=7ead559e05507457ffb891b960c2188b02a7d463', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=320&crop=smart&auto=webp&s=b9a741c119c9604bd594cbad99f9ed1f2155cc32', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=640&crop=smart&auto=webp&s=18e5bf5262c3640705f3d555b2e7b421cfba48ed', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=960&crop=smart&auto=webp&s=6932a5d243002aa6f62216cb9c5bc82762c796b9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?width=1080&crop=smart&auto=webp&s=ee67840ce2bf8506eaba93d59680bd0b5947d3f7', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/6n8QlnCBfTKP0SJ-xlQI5OofBY7MFN32H08uquBAqf4.png?auto=webp&s=f1465f4ceffa06aa4a55c5af3b8462a16a80b21b', 'width': 1600}, 'variants': {}}]}
How to make an LLM remember facts while doing supervised fine tuning
2
I have been doing supervised fine-tuning of Llama 3.1 8B on my data of 16k Q&A examples. But when I ask the questions during inference it is hallucinating and missing the facts. What do you think the issue might be?

16,000 question-answer pairs, Llama 3.1 8B supervised fine-tune:

    from unsloth import FastLanguageModel, is_bfloat16_supported
    from transformers import TrainingArguments, DataCollatorForSeq2Seq
    from trl import SFTTrainer

    max_seq_length = 2048
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Meta-Llama-3.1-8B-Instruct",
        max_seq_length=max_seq_length,
        load_in_4bit=True,
        dtype=None,
    )

    training_args = TrainingArguments(
        output_dir="./llama_finetuned_augmented_singleturn",
        per_device_train_batch_size=2,   # increase if your GPU allows
        gradient_accumulation_steps=4,   # to simulate a larger batch
        warmup_steps=5,
        max_steps=6000,                  # total fine-tuning steps
        learning_rate=2e-4,
        logging_steps=10,
        save_strategy="steps",
        save_steps=200,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),    # mixed precision
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        save_total_limit=3,
        report_to="none",                # disable wandb logging
    )

    trainer = SFTTrainer(
        model=model,
        train_dataset=loaded_training_dataset,
        tokenizer=tokenizer,
        args=training_args,
        data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer),
        dataset_num_proc=2,
        max_seq_length=2048,
        packing=False,                   # packing would concatenate shorter sequences to use the GPU more efficiently
        dataset_text_field="text",
    )

It is not answering the trained questions correctly. What could be the issue?
2025-10-16T06:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1o7yte3/how_to_make_an_llm_remember_facts_while_doing/
InteractionLevel6625
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7yte3
false
null
t3_1o7yte3
/r/LocalLLaMA/comments/1o7yte3/how_to_make_an_llm_remember_facts_while_doing/
false
false
self
2
null
Need advice on what to do with H200
8
Hey sub, this question is serious. I got lucky and have free access to an H200 that no one was using at my university. I've been learning AI Engineering and Machine Learning, but have never touched one of these. I'd really, really love to make the most of it, so I decided to post here for advice. What are some must-do things? Build Andrej Karpathy's nanoGPT? Try local models? Any advice is appreciated!
2025-10-16T05:28:07
https://www.reddit.com/r/LocalLLaMA/comments/1o7xwio/need_advice_on_what_to_do_with_h200/
AggressiveMention359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7xwio
false
null
t3_1o7xwio
/r/LocalLLaMA/comments/1o7xwio/need_advice_on_what_to_do_with_h200/
false
false
self
8
null
Alternative to DGX Spark Multiagent chatbot
1
Hi, I saw that DGX spark had launched with a Multiagent chatbot (https://github.com/NVIDIA/dgx-spark-playbooks/tree/main/nvidia/multi-agent-chatbot/assets). I don’t own a DGX spark but this is exactly what I’m looking for. A nice front end ui that allows for an LLM orchestrator, Embedding LLM, image generation LLM and coding LLM. I’ve tried OpenwebUI (a while back) and AnythingLLM. They are close but not quite there yet for Multiagent chatbot. Thanks!
2025-10-16T05:19:30
https://www.reddit.com/r/LocalLLaMA/comments/1o7xrcx/alternative_to_dgx_spark_multiagent_chatbot/
MuffisAwesome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7xrcx
false
null
t3_1o7xrcx
/r/LocalLLaMA/comments/1o7xrcx/alternative_to_dgx_spark_multiagent_chatbot/
false
false
self
1
null
Please help me out!
0
I'm new to ML. Right now I have an urgent requirement to compare a diarization transcript against a procedure PDF. The first problem is that the procedure PDF has a lot of acronyms. Secondly, I need to set up a verification table for the diarization showing match, partial match and mismatch, but I'm not able to get an accurate comparison of the diarization and the procedure PDF because the diarization has a bit of general conversation in it ('hello', 'got it', 'are you there', etc.). Please help me out.
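One possible approach, sketched below under the assumption that sentence-transformers is acceptable for similarity scoring; the filler list and thresholds are illustrative and would need tuning for a real verification table.

```python
# Sketch: drop small talk from a diarization transcript, then label each procedure
# step as match / partial match / mismatch via embedding similarity.
# The filler list and thresholds are illustrative assumptions, not recommendations.
from sentence_transformers import SentenceTransformer, util

FILLERS = {"hello", "got it", "are you there", "okay", "thanks"}

def is_small_talk(utterance: str) -> bool:
    cleaned = utterance.strip().lower().rstrip("?.!")
    return cleaned in FILLERS or len(cleaned.split()) < 3

# Expanding acronyms from the procedure PDF (e.g. a {"PSV": "pressure safety valve"} map)
# before embedding would go here.
model = SentenceTransformer("all-MiniLM-L6-v2")

diarization = ["Hello", "We opened valve V-101 before starting the pump", "Got it"]
procedure_steps = ["Open valve V-101 prior to pump start", "Record the line pressure"]

utterances = [u for u in diarization if not is_small_talk(u)]
step_emb = model.encode(procedure_steps, convert_to_tensor=True)
utt_emb = model.encode(utterances, convert_to_tensor=True)
sims = util.cos_sim(step_emb, utt_emb)  # shape: [num_steps, num_utterances]

for i, step in enumerate(procedure_steps):
    best = float(sims[i].max())
    status = "match" if best >= 0.75 else "partial match" if best >= 0.5 else "mismatch"
    print(f"{step!r:50} {status:15} (best similarity {best:.2f})")
```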
2025-10-16T04:58:54
https://www.reddit.com/r/LocalLLaMA/comments/1o7xend/please_help_me_out/
One-Will5139
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7xend
false
null
t3_1o7xend
/r/LocalLLaMA/comments/1o7xend/please_help_me_out/
false
false
self
0
null
YO - LMSTUDIO - COULD YALL FIX YO S**T
0
yo whats up fellas, this one's for the LMSTUDIO crew. I'm here to call out a trend that has been going on and is going badly. LM Studio added their own suggested models, which I think really sucks. They have consistently reduced and removed functionality for all but their suggested models. As of today I can't even download non-suggested models (the DL button is there, it just does nothing when clicked). They have made it almost impossible to find anything unless you know the exact name, whereas their suggested models come up even if you just mention something in the text body. LMSTUDIO, F**K OFF with this suggested models BS, we don't want it. If you don't make it optional to turn off, others will come eat your lunch and you will deserve it. Your job is to make it easy to find new custom models, don't screw that up! Suggested models must be a secondary option, and for the love of god turn that trash off for the people that don't want it. (The best models are always today's random slerp, not 150-day-old bs.) Anyone else, check out LM Studio if you haven't, it's (mostly) great. I usually use llama.cpp for inference, but for downloading and testing LM Studio was a cool option (atm it's just broken and useless). Fix yo s**t and fire whoever broke it, thank you very much. (Confirmed: the previous build does have working download buttons.)
2025-10-16T04:50:35
https://www.reddit.com/r/LocalLLaMA/comments/1o7x9ip/yo_lmstudio_could_yall_fix_yo_st/
Revolutionalredstone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7x9ip
false
null
t3_1o7x9ip
/r/LocalLLaMA/comments/1o7x9ip/yo_lmstudio_could_yall_fix_yo_st/
false
false
self
0
null
GLM 4.5 Air AWQ 4bit on RTX Pro 6000 with vllm
61
https://preview.redd.it/…sion: 580.95.05
2025-10-16T04:47:49
https://www.reddit.com/r/LocalLLaMA/comments/1o7x7ss/glm_45_air_awq_4bit_on_rtx_pro_6000_with_vllm/
notaDestroyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7x7ss
false
null
t3_1o7x7ss
/r/LocalLLaMA/comments/1o7x7ss/glm_45_air_awq_4bit_on_rtx_pro_6000_with_vllm/
false
false
https://b.thumbs.redditm…am8wvadpbBSo.jpg
61
null
SillyTavern for Academic RAG or Alternatives for RAG GUI
11
I’m honestly kinda tempted by SillyTavern’s Lore and World features. It’s kinda like isolating an LLM with an advanced system prompt and persona. I sometimes have an issue with LLMs where they often refuse to report something that is ahead of their knowledge base, such as “who is President”, even if I give them several articles for RAG with the latest news (just an example, not my use case). I feel like its Lorebook and World features can kinda isolate and refine an LLM’s output to avoid that. ST has the most advanced GUI I’ve ever seen, with all its neat features like Persona and World. I’ve been working on this project for my PhD, building a RAG vector DB for this research question. I have an MCP vector tool server running locally that’s almost done. The final setup is just a front end so I can give a demo to my department. In the backend, I’ll be using MLflow for reporting the RAG metrics we need. OpenWebUI is kinda 50-60% there; it was a little annoying setting up the MCP but it works, and it might require a slightly more powerful cloud instance for more users in the future. I’ve just been going through SillyTavern’s custom features and it seems really advanced, the way you can customize things. Please be upfront and tell me if this is a batshit idea that will have my department head requesting my API logs (just kidding about this).
2025-10-16T04:46:06
https://www.reddit.com/r/LocalLLaMA/comments/1o7x6oi/sillytavern_for_academic_rag_or_alternatives_for/
combrade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7x6oi
false
null
t3_1o7x6oi
/r/LocalLLaMA/comments/1o7x6oi/sillytavern_for_academic_rag_or_alternatives_for/
false
false
self
11
null
Should I add another 5060 Ti 16GB or two? Already had 1 x 5070 Ti and 3 x 5060 Ti 16G
0
So I am thinking of adding another 5060 Ti 16GB or two to my current rig and would love some input from the team. Currently, I am running 1 x 5070 Ti and 3 x 5060 Ti 16G with 128GB DDR5 6000MT and a 265K. The 5070 Ti gets the PCIe 5 x16 slot whereas the other three are running PCIe 4 x4, which should not matter as much since I largely do inference and RAG (sentence transformers for document processing and an LM Studio backend). I would like to run gpt-oss-120B and GLM-4.5 Air with at least 40k of context, ideally without spilling over into system RAM. Right now with 30k context I can do 20-24 tokens per second across the two. Can I somehow get away with adding just one 5060 Ti 16GB, or is even adding two not sufficient (i.e., no significant improvement running these models even with two)? I looked at the new DGX and AMD 395 benchmarks and they don't seem like good options. Thoughts and suggestions would be greatly appreciated. The rig serves only me, and I have other tools that need Windows, so vLLM is not really an option. Thank you very much for your help.
2025-10-16T04:41:51
https://www.reddit.com/r/LocalLLaMA/comments/1o7x3x1/should_i_add_another_5060_ti_16gb_or_two_already/
Professional-Yak4359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7x3x1
false
null
t3_1o7x3x1
/r/LocalLLaMA/comments/1o7x3x1/should_i_add_another_5060_ti_16gb_or_two_already/
false
false
self
0
null
Ollama v0.12.6 finally includes Vulkan support
24
2025-10-16T04:39:06
https://github.com/ollama/ollama/releases/tag/v0.12.6-rc0
geerlingguy
github.com
1970-01-01T00:00:00
0
{}
1o7x25o
false
null
t3_1o7x25o
/r/LocalLLaMA/comments/1o7x25o/ollama_v0126_finally_includes_vulkan_support/
false
false
default
24
{'enabled': False, 'images': [{'id': 'cjCC50drSVvsSjC6BG0LlHPfYq9pihhGvZz2PN90ZFQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cjCC50drSVvsSjC6BG0LlHPfYq9pihhGvZz2PN90ZFQ.png?width=108&crop=smart&auto=webp&s=4d23fb0fce633461d88222cca9ab1637fa6af68f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cjCC50drSVvsSjC6BG0LlHPfYq9pihhGvZz2PN90ZFQ.png?width=216&crop=smart&auto=webp&s=fb26664fcda8ab12c46513becd50da82cd4c28ea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cjCC50drSVvsSjC6BG0LlHPfYq9pihhGvZz2PN90ZFQ.png?width=320&crop=smart&auto=webp&s=6be5582a587dde8cf108e372c421ecd43401cdb0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cjCC50drSVvsSjC6BG0LlHPfYq9pihhGvZz2PN90ZFQ.png?width=640&crop=smart&auto=webp&s=c878fa25eb39194203fe7a493bd33e19b95d026c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cjCC50drSVvsSjC6BG0LlHPfYq9pihhGvZz2PN90ZFQ.png?width=960&crop=smart&auto=webp&s=a695154b11431bb102eecf30bc175ab5b8191a1a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cjCC50drSVvsSjC6BG0LlHPfYq9pihhGvZz2PN90ZFQ.png?width=1080&crop=smart&auto=webp&s=4720fea60f12b998624cf668d0bc73f3f45608b9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cjCC50drSVvsSjC6BG0LlHPfYq9pihhGvZz2PN90ZFQ.png?auto=webp&s=3f9116d885d564de50b102ec6d6f5a6570dfce34', 'width': 1200}, 'variants': {}}]}
Databricks Agentic Capabilities
1
I’m working on figuring out how much built-in agentic capability Databricks has, and so I'm doing a POC for a use case. The use case is a change workflow: if the user puts in a prompt saying he wants to change the way a specific metric is calculated in a table, the agent will get the necessary information about the table from a file, then pull that table's SQL from Bitbucket, change the SQL, test it out, then push it back to Bitbucket. I’m thinking of testing it out using the Databricks Assistant Data Science Agent, but have to see if it will be able to use these tools, which I would configure as Python functions in a different file. Any other alternatives you guys would suggest? The main goal is to test out the capabilities already present in the market; we are also testing out Cursor doing the whole thing as well, from creating a plan first to executing the steps. Thanks!
2025-10-16T03:39:10
https://www.reddit.com/r/LocalLLaMA/comments/1o7vyvs/databricks_agentic_capabilities/
Tinjar12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7vyvs
false
null
t3_1o7vyvs
/r/LocalLLaMA/comments/1o7vyvs/databricks_agentic_capabilities/
false
false
self
1
null
Fast, expressive TTS models with streaming and MLX support?
3
Hey everyone, I'm really struggling to find a TTS model that: * Leverages the MLX architecture * Is as expressive as Sesame or Orpheus (voice cloning is a plus) * Supports streaming * Is fast enough for a 2-3s TTFT on an M2 Ultra 128GB. Is this really an impossible task? To be fair, streaming is something that projects like mlx-audio should address, but it hasn't been implemented yet, and I believe it never will be. I get a good 2.4x real-time factor with a 4-bit quantized model of Orpheus; I'm just lacking an MLX backend with proper streaming support. :(
2025-10-16T03:35:36
https://www.reddit.com/r/LocalLLaMA/comments/1o7vwfk/fast_expressive_tts_models_with_streaming_and_mlx/
markleoit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7vwfk
false
null
t3_1o7vwfk
/r/LocalLLaMA/comments/1o7vwfk/fast_expressive_tts_models_with_streaming_and_mlx/
false
false
self
3
null
Fun fact!
453
2025-10-16T03:32:57
https://i.redd.it/df7poqut8evf1.jpeg
amanj203
i.redd.it
1970-01-01T00:00:00
0
{}
1o7vun3
false
null
t3_1o7vun3
/r/LocalLLaMA/comments/1o7vun3/fun_fact/
false
false
default
453
{'enabled': True, 'images': [{'id': 'df7poqut8evf1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/df7poqut8evf1.jpeg?width=108&crop=smart&auto=webp&s=32248a17a353e01bb9101fb5c7caff69432eb1fb', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/df7poqut8evf1.jpeg?width=216&crop=smart&auto=webp&s=8de996750530c80739fed130866f3b183a6e6e71', 'width': 216}, {'height': 272, 'url': 'https://preview.redd.it/df7poqut8evf1.jpeg?width=320&crop=smart&auto=webp&s=b83b77bfd6f15c743d0733dde02716ef46816896', 'width': 320}, {'height': 544, 'url': 'https://preview.redd.it/df7poqut8evf1.jpeg?width=640&crop=smart&auto=webp&s=ff4d328f0568182f02dea887dc1a99c1c2374875', 'width': 640}, {'height': 816, 'url': 'https://preview.redd.it/df7poqut8evf1.jpeg?width=960&crop=smart&auto=webp&s=e0fbbf3bd816f0768d47d19275788ec7c010b655', 'width': 960}, {'height': 918, 'url': 'https://preview.redd.it/df7poqut8evf1.jpeg?width=1080&crop=smart&auto=webp&s=03e6e4469336601cd1e7a4ba1f9a3638393482e4', 'width': 1080}], 'source': {'height': 918, 'url': 'https://preview.redd.it/df7poqut8evf1.jpeg?auto=webp&s=e9fd4892500b2a19e7ec9e5b87d3543fedf471d2', 'width': 1080}, 'variants': {}}]}
This is how I track usage and improve my AI assistant without exposing sensitive data
7
The learning, sample schema/dashboard/SQL, and the overall approach are below. AMA and share your learnings.

Coming from a data engineering background, I want to share something I recently did and feel proud of, and I'm sure many of us will find this practice of privacy-first tracking useful for building better AI assistants/copilots/agents faster. As I stepped into an Engineering Manager role (a transition from spending all day developing/hacking/analyzing/cleaning data pipelines to limited time doing that and more time connecting engineering efforts to business output), it became my duty to prove the ROI of the engineering effort my team and I put in. I realized the importance of tracking key metrics for the project because

> You can't improve what you don't measure

AI copilots and agents need a bit more love in this regard IMO. Instead of running in never-ending loops of coding and postponing the public release to ship that additional improvement we might need (usually inspired by gut feel), a better approach is to ship early, start tracking usage, and take informed decisions on what to prioritize. I also needed to measure ROI to get the resources and confidence from the business to continue investing in the AI product/feature my team was building. So this is what I ended up doing and learning.

### Track from day 1

> Don't wait until things "settle down"

This will help you uncover real-world edge cases, weird behaviors, bottlenecks, who is more interested in this, which features get used more, etc., early in the development cycle. And this will help you focus on the things that matter most (as opposed to the imaginary and not-so-important issues we usually end up working on when we don't track). Do this on day 1; things never settle down, and the analytics instrumentation gets pushed to another date.

I follow this approach for all my projects:

1. Collect the minimal real-time events data from clients (web app, mobile app, etc.)
2. Store the events data in a central warehouse, e.g. Postgres, BigQuery, Snowflake, etc. (the single source of truth)
3. Transform the event data for downstream analytics tools (remove PII)
4. Route the transformed data to downstream tools for analysis, e.g. Mixpanel, Power BI, Google Data Studio, etc.

### Standardize the tracking schema

Don't reinvent the wheel in each project; save time and energy with a standardized tracking schema for events. These are the key events and their properties that I track:

| **Event Name** | **Description** | **Key Properties** |
|---|---|---|
| `ai_user_prompt_created` | Tracks when a user submits a prompt to your AI system | `prompt_text`, `timestamp`, `user_id` |
| `ai_llm_response_received` | Captures AI system responses and performance metrics | `response_text`, `response_time`, `model_version`, `user_id` |
| `ai_user_action` | Measures user interactions with AI responses | `action_type`, `timestamp`, `user_id`, `response_id` |

I primarily track the following metrics:

* Engagement metrics
* Latency and cost
* Ratings and feedback

You can find the [**SQL queries for these metrics here**](https://www.rudderstack.com/blog/ai-product-analytics-privacy/) and a [**sample dashboard here**](https://claude.ai/public/artifacts/b2a5c6bc-3e6a-4e94-af20-8b322abe3624?fullscreen=true).

### Deal with privacy challenges with LLM-powered intent classification

AI assistant prompts contain a lot of PII, and we do need to send the tracking data to downstream tools (e.g. Mixpanel, Power BI, etc.) for different kinds of analysis such as user behavior, conversion, ROI, engineering metrics, etc. Sending PII to these downstream tools is not only a privacy nightmare on principle, it also creates a regulatory challenge for businesses. So, to avoid sending this PII downstream, I used an LLM to classify the intent of the prompt and replaced the prompt with that intent category, which is good enough for the analytics I need and does not expose my customers' sensitive data to these downstream tools.

Here's the sample code to do this in JavaScript:

```
function shouldClassifyIntent(event, metadata) {
  // Always classify for high-value customers
  if (fetchUserProfile().plan === 'enterprise') {
    return true;
  }

  // Classify all events for new users (first 7 days)
  const daysSinceSignup = (Date.now() - fetchUserProfile()?.created_at) / (1000 * 60 * 60 * 24);
  if (daysSinceSignup <= 7) {
    return true;
  }

  // Sample 10% of other users based on a consistent hash
  const userIdHash = simpleHash(event.userId);
  if (userIdHash % 100 < 10) {
    return true;
  }

  // Skip classification for this event
  return false;
}

// In your transformation
export async function transformEvent(event, metadata) {
  if (event.event !== 'ai_user_prompt_created') {
    return event;
  }

  // Add sampling decision to event for analysis
  event.properties.intent_sampled = shouldClassifyIntent(event, metadata);

  if (!event.properties.intent_sampled) {
    event.properties.classified_intent = 'not_sampled';
    return event;
  }

  // Continue with classification...
}
```

Keeping this post concise, I'll leave the other details for now. Ask me and I will answer your questions. Let's take this discussion one step further by sharing your experience measuring your AI agent/copilot usage: what metrics do you track, how do you keep analytics quick to instrument, do you go beyond what basic agent frameworks and observability tools provide, do you think about privacy when implementing analytics, etc.?
2025-10-16T03:01:05
https://www.rudderstack.com/blog/ai-product-analytics-privacy/
opensourcecolumbus
rudderstack.com
1970-01-01T00:00:00
0
{}
1o7v8bi
false
null
t3_1o7v8bi
/r/LocalLLaMA/comments/1o7v8bi/this_is_how_i_track_usage_and_improve_my_ai/
false
false
default
7
{'enabled': False, 'images': [{'id': 'gaIvmxSSr1iv7t7zNtM70CmHx8jl4ZdjsE3bg8Z2WCw', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/gaIvmxSSr1iv7t7zNtM70CmHx8jl4ZdjsE3bg8Z2WCw.png?width=108&crop=smart&auto=webp&s=e7fe940ebdcf19bc0dc68389c45c0a2b1b918dbd', 'width': 108}, {'height': 82, 'url': 'https://external-preview.redd.it/gaIvmxSSr1iv7t7zNtM70CmHx8jl4ZdjsE3bg8Z2WCw.png?width=216&crop=smart&auto=webp&s=5a8fa97760bf37ee9edde19a1d5d215e29f69fcc', 'width': 216}, {'height': 122, 'url': 'https://external-preview.redd.it/gaIvmxSSr1iv7t7zNtM70CmHx8jl4ZdjsE3bg8Z2WCw.png?width=320&crop=smart&auto=webp&s=bc4fc8fc31fcba4b29677be125e4064e55e07865', 'width': 320}, {'height': 245, 'url': 'https://external-preview.redd.it/gaIvmxSSr1iv7t7zNtM70CmHx8jl4ZdjsE3bg8Z2WCw.png?width=640&crop=smart&auto=webp&s=72b0f5f01a0afd4937a40ceb983ac6e548166416', 'width': 640}, {'height': 368, 'url': 'https://external-preview.redd.it/gaIvmxSSr1iv7t7zNtM70CmHx8jl4ZdjsE3bg8Z2WCw.png?width=960&crop=smart&auto=webp&s=8303ce6f1a0f585f2ea54f6557810c1f6b6f93a7', 'width': 960}, {'height': 414, 'url': 'https://external-preview.redd.it/gaIvmxSSr1iv7t7zNtM70CmHx8jl4ZdjsE3bg8Z2WCw.png?width=1080&crop=smart&auto=webp&s=c97a671b22d2b257ee7aa4ab7b54d0c4db1a8e4a', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://external-preview.redd.it/gaIvmxSSr1iv7t7zNtM70CmHx8jl4ZdjsE3bg8Z2WCw.png?auto=webp&s=74ab8feb06d2756ccf2b71e15122c147d7877649', 'width': 2712}, 'variants': {}}]}
SORA AI CODES (6)
0
I have 6 sore ai codes selling for cheap cheap.
2025-10-16T02:52:50
https://www.reddit.com/r/LocalLLaMA/comments/1o7v2i6/sora_ai_codes_6/
cofss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7v2i6
false
null
t3_1o7v2i6
/r/LocalLLaMA/comments/1o7v2i6/sora_ai_codes_6/
false
false
self
0
null
Made a local first LLM Chat UI
0
Repo: [https://github.com/vanillacode314/rllm](https://github.com/vanillacode314/rllm) There is a [Demo](https://llm.raqueeb.com) available; it currently has syncing/accounts enabled, but that will be disabled later when all testing is done. ## Motivation I used to self-host openwebui and librechat on my laptop. It bothered me that I couldn't access chat history on my mobile when my laptop was off, or use external providers that were up even when my laptop was off. Would love any feedback :)
2025-10-16T02:49:17
https://www.reddit.com/r/LocalLLaMA/comments/1o7uzye/made_a_local_first_llm_chat_ui/
vanillacode314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7uzye
false
null
t3_1o7uzye
/r/LocalLLaMA/comments/1o7uzye/made_a_local_first_llm_chat_ui/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Jx4oqiVLyXKp5bNY2h6-IDY2tSHzay1Mpa8AJczUovQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Jx4oqiVLyXKp5bNY2h6-IDY2tSHzay1Mpa8AJczUovQ.png?width=108&crop=smart&auto=webp&s=feff93a436d10ddc5917ea23d72636b6f90531d7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Jx4oqiVLyXKp5bNY2h6-IDY2tSHzay1Mpa8AJczUovQ.png?width=216&crop=smart&auto=webp&s=20b8af86937ae03a90fda0c650513bb9d4422a3c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Jx4oqiVLyXKp5bNY2h6-IDY2tSHzay1Mpa8AJczUovQ.png?width=320&crop=smart&auto=webp&s=2eedb96d9ea2e20fd62dd83bf9be64172b7902d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Jx4oqiVLyXKp5bNY2h6-IDY2tSHzay1Mpa8AJczUovQ.png?width=640&crop=smart&auto=webp&s=eab82f6a4703778fd4f99fa5cae0e3eec7c3ad81', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Jx4oqiVLyXKp5bNY2h6-IDY2tSHzay1Mpa8AJczUovQ.png?width=960&crop=smart&auto=webp&s=9afc557df226ff30be4ee655b348b6917d15c189', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Jx4oqiVLyXKp5bNY2h6-IDY2tSHzay1Mpa8AJczUovQ.png?width=1080&crop=smart&auto=webp&s=3d3c04ba60b92197542329869bc65aaf37fbc4cb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Jx4oqiVLyXKp5bNY2h6-IDY2tSHzay1Mpa8AJczUovQ.png?auto=webp&s=b233c83ea11041baddd2b046926453b875e8bb30', 'width': 1200}, 'variants': {}}]}
Do you think closed services use an offline knowledge database for RAG (in addition to web services) to boost the quality of responses? Is there any standard local machinery for this?
6
I was noticing that "thinking" for both gpt5 and Gemini doesn't always mean "reasoning" so much as searching for facts online. It seems like test-time compute these days mostly means tool use. I assume static facts must be much cheaper to store and faster to access in a local database. So wouldn't these closed services use free RAG to boost the quality of general responses? Even for a task like coding, they could be running a silent RAG call on documentation behind the scenes? One drawback with open models is that everything must be in a single file of weights. You cannot download a complete package with tooling, databases, and classifiers. That got me thinking, is there no standard way to augment a local model for general use? That would require some standard knowledge database and a standard way to access it. The best I can think of is one of those Wikipedia zim files. A small classifier decides if the query would benefit from Wikipedia knowledge, and if so, a little RAG routine runs. Wouldn't this greatly boost world knowledge for small models (4B-7B)? Does any standard implementation like this exist? I suppose you can create domain specific RAG databases for yourself but it seems like a general Wikipedia-style database would be broadly useful? It would be really cool if we had open databases of the internet we could download with snapshots for different sizes at different dates. However copyright is tricky, which is why I suppose Wikipedia is a good starting point. I am curious what is out there in the local landscape for this and if anyone is working on it.
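A minimal sketch of the routing idea described above, assuming an OpenAI-compatible local server; the offline-Wikipedia search helper is a hypothetical stub standing in for a Kiwix/zim or vector-store lookup, and the model name is illustrative.

```python
# Sketch of a "classifier gates RAG" loop for a small local model.
# Assumes an OpenAI-compatible server (llama.cpp server, Ollama, etc.) on port 8080.
import requests

BASE_URL = "http://localhost:8080/v1/chat/completions"
MODEL = "qwen2.5-7b-instruct"  # illustrative small model

def chat(messages, max_tokens=512):
    r = requests.post(BASE_URL, json={"model": MODEL, "messages": messages,
                                      "max_tokens": max_tokens})
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

def needs_world_knowledge(query: str) -> bool:
    # Cheap router: one short classification call instead of a trained classifier
    verdict = chat([{"role": "user", "content":
                     f"Would an encyclopedia lookup improve the answer to: {query!r}? "
                     f"Reply with only yes or no."}], max_tokens=3)
    return verdict.strip().lower().startswith("y")

def search_offline_wikipedia(query: str, k: int = 3) -> list[str]:
    # Hypothetical helper: swap in a Kiwix/zim full-text search or a prebuilt vector index
    return [f"(stub passage {i + 1} about: {query})" for i in range(k)]

def answer(query: str) -> str:
    context = ""
    if needs_world_knowledge(query):
        passages = search_offline_wikipedia(query)
        context = "Use this context:\n" + "\n---\n".join(passages) + "\n\n"
    return chat([{"role": "user", "content": context + query}])

print(answer("Who currently chairs the UN Security Council?"))
```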
2025-10-16T02:46:50
https://www.reddit.com/r/LocalLLaMA/comments/1o7uy5s/do_you_think_closed_services_use_an_offline/
___positive___
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7uy5s
false
null
t3_1o7uy5s
/r/LocalLLaMA/comments/1o7uy5s/do_you_think_closed_services_use_an_offline/
false
false
self
6
null
How do you train a small model to be specialized in a specific knowledge set?
4
Does anyone have first hand experience with or knowledge of what this takes? Every time I journey on researching how to do this, it's my understanding that you can't just upload loads of documents willy nilly and that they have to be formatted in a specific way. For example, I really want to train a small to medium sized model on the latest information about microsoft graph, because literally all models are so outdated and don't know anything. It's my understanding you would need a massive data set of information in this format: > **Instruction**: "How do I get the profile of the signed-in user using the Microsoft Graph .NET SDK?" > > **Response**: A clear explanation along with the corresponding C# code snippet. Or > **Question**: "What are the required permissions to read a user's calendar events?" > > **Answer**: "The required permissions are Calendars.Read or Calendars.ReadWrite." How do people convert a large markdown scraping of microsoft learn pages into this format without manually altering the scraped docs? This would literally take weeks. There must be some sort of automated way? I was thinking maybe setup qdrant for RAG, and use claude code with a well crafted prompt to go through markdown docs and create it for me. But is there not like an industry standard method for this?
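One commonly used automated route is to chunk the scraped markdown and ask a local model to emit question/answer pairs per chunk as JSONL. A minimal sketch, assuming an OpenAI-compatible local endpoint; the model name, paths, and prompt are illustrative, and the output still needs validation and de-duplication before training on it.

```python
# Sketch: turn scraped markdown docs into instruction/response pairs (JSONL)
# by asking a local model to generate Q&A per chunk. Endpoint, model name and
# paths are assumptions.
import json, pathlib, requests

ENDPOINT = "http://localhost:11434/v1/chat/completions"  # e.g. Ollama / llama.cpp server
MODEL = "qwen2.5:14b"                                     # illustrative

PROMPT = ("From the documentation below, write 3 question/answer pairs a developer "
          "might ask. Answer only from the text. Return a JSON list of objects "
          'with keys "instruction" and "response".\n\n{chunk}')

def chunks(text: str, size: int = 4000):
    # Naive fixed-size chunking; splitting on headings would usually work better
    for i in range(0, len(text), size):
        yield text[i:i + size]

with open("msgraph_pairs.jsonl", "w", encoding="utf-8") as out:
    for md_file in pathlib.Path("scraped_docs").glob("*.md"):
        for chunk in chunks(md_file.read_text(encoding="utf-8")):
            r = requests.post(ENDPOINT, json={
                "model": MODEL,
                "messages": [{"role": "user", "content": PROMPT.format(chunk=chunk)}],
                "temperature": 0.2,
            })
            r.raise_for_status()
            content = r.json()["choices"][0]["message"]["content"]
            try:
                pairs = json.loads(content)
            except json.JSONDecodeError:
                continue  # skip chunks where the model didn't return clean JSON
            for pair in pairs:
                if isinstance(pair, dict) and {"instruction", "response"} <= pair.keys():
                    out.write(json.dumps(pair, ensure_ascii=False) + "\n")
```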
2025-10-16T02:30:01
https://www.reddit.com/r/LocalLLaMA/comments/1o7ulr3/how_do_you_train_a_small_model_to_be_specialized/
LsDmT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7ulr3
false
null
t3_1o7ulr3
/r/LocalLLaMA/comments/1o7ulr3/how_do_you_train_a_small_model_to_be_specialized/
false
false
self
4
null
[2510.13804] Generative Universal Verifier as Multimodal Meta-Reasoner
1
Code: [https://github.com/Cominclip/OmniVerifier](https://github.com/Cominclip/OmniVerifier) Model: [https://huggingface.co/comin/OmniVerifier-7B](https://huggingface.co/comin/OmniVerifier-7B) Abstract >We introduce Generative Universal Verifier, a novel concept and plugin designed for next-generation multimodal reasoning in vision-language models and unified multimodal models, providing the fundamental capability of reflection and refinement on visual outcomes during the reasoning and generation process. This work makes three main contributions: (1) We build ViVerBench, a comprehensive benchmark spanning 16 categories of critical tasks for evaluating visual outcomes in multimodal reasoning. Results show that existing VLMs consistently underperform across these tasks, underscoring a substantial gap from human-level capability in reliable visual verification. (2) We design two automated pipelines to construct large-scale visual verification data and train OmniVerifier-7B, the first omni-capable generative verifier trained for universal visual verification and achieves notable gains on ViVerBench(+8.3). Through training, we identify three atomic capabilities in visual verification and demonstrate how they generalize and interact synergistically. (3) We propose OmniVerifier-TTS, a sequential test-time scaling paradigm that leverages the universal verifier to bridge image generation and editing within unified models, enhancing the upper bound of generative ability through iterative fine-grained optimization. Beyond generation, we extend universal verifier to broader world-modeling interleaved reasoning scenarios. Empirically, OmniVerifier-TTS achieves improvements on T2I-ReasonBench(+3.7), and GenEval++(+4.3), outperforming existing parallel test-time scaling methods, such as Best-of-N. By endowing multimodal reasoning with reliable visual verification, OmniVerifier advances both reliable reflection during generation and scalable test-time refinement, marking a step toward more trustworthy and controllable next-generation reasoning systems.
2025-10-16T02:26:10
https://arxiv.org/abs/2510.13804
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1o7uj20
false
null
t3_1o7uj20
/r/LocalLLaMA/comments/1o7uj20/251013804_generative_universal_verifier_as/
false
false
default
1
null
I've fallen in love with Kimi K2
0
For personal use, chatting, taking notes and acquiring niche knowledge, Kimi K2 (0905) is perfect. It's not bad for coding, but there are better proprietary alternatives available now (Grok 4 Fast, Claude 4.5 Haiku). >TL;DR: I'm trying to understand why Kimi K2 is so special. **Specialties:** * With or without custom system prompts, the model can make me bust out laughing. Some wordings just hit me differently, and it feels very good. I can't tell exactly what it is, but only Gemini 1.5 Pro could get slightly close. Almost all other models feel much flatter and more performative. Yes, 'performative' is the right word. * Even without reasoning, it adheres well to instructions, understands styles, and interprets things correctly. Non-Reasoning models can feel much more 'intuitive'. * Its vocabulary is huge, and it can output many special characters. It still writes niche words that I didn't know, which turn out to align perfectly with our topics. * It goes into more depth than any DeepSeek model (although R1 still does a good job). It helps me to learn more, even in areas I am deeply involved in. I deliberately keep the search feature off because the model's own knowledge is often of a higher quality than the search results. * *Provider inference sidenote: On Groq it hits about 1000 tps with 0.1s latency which is insane.* **Comments and assumptions:** I think this has something to do with the sparse MoE architecture (many experts and attention heads). I thought the "Ling 1T" model by Inclusion AI would be similar due to its 1 trillion parameter MoE, but I was really disappointed and I don't yet know why. That's my impression so far, and I'd love to understand what makes Kimi's architecture and training so special, so I can narrow my search conditions for upcoming LLMs. If anyone knows what causes 'performative' behaviour in most models, I'm all ears. *I'm not running Kimi or Ling locally, but posting this here because I think it will reach the right community.*
2025-10-16T02:21:02
https://www.reddit.com/r/LocalLLaMA/comments/1o7uf8b/ive_fallen_in_love_with_kimi_k2/
Ornery-Army-9356
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7uf8b
false
null
t3_1o7uf8b
/r/LocalLLaMA/comments/1o7uf8b/ive_fallen_in_love_with_kimi_k2/
false
false
self
0
null
How good on paper is NexaAI/Qwen3-VL-8B-Instruct-GGUF compared to Qwen/Qwen2.5-VL-7B-Instruct?
2
I see that people often recommend mistral or gemma but no one talks much about the Qwen/Qwen2.5-VL-7B-Instruct.
2025-10-16T01:44:01
https://www.reddit.com/r/LocalLLaMA/comments/1o7tnd2/how_good_on_paper_is_nexaaiqwen3vl8binstructgguf/
NoFudge4700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7tnd2
false
null
t3_1o7tnd2
/r/LocalLLaMA/comments/1o7tnd2/how_good_on_paper_is_nexaaiqwen3vl8binstructgguf/
false
false
self
2
null
Help
0
Does someone know how they did this? [https://huggingface.co/litert-community/Gemma3-1B-IT/blob/main/gemma3-1b-it-int4.litertlm](https://huggingface.co/litert-community/Gemma3-1B-IT/blob/main/gemma3-1b-it-int4.litertlm) I have fine-tuned a model and it works, but only on CPU; I want it to support GPU. I downloaded their litertlm file and it worked like a charm on GPU. Does anyone know how to fine-tune a model so that it supports GPU? I'm using Kotlin/MediaPipe.
2025-10-16T01:10:19
https://www.reddit.com/r/LocalLLaMA/comments/1o7sxfa/help/
Miserable-Theme-8567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7sxfa
false
null
t3_1o7sxfa
/r/LocalLLaMA/comments/1o7sxfa/help/
false
false
self
0
null
Alternative for LLM -> machine interaction?
1
[removed]
2025-10-16T01:08:01
https://www.reddit.com/r/LocalLLaMA/comments/1o7svnr/alternative_for_llm_machine_interaction/
o0genesis0o
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7svnr
false
null
t3_1o7svnr
/r/LocalLLaMA/comments/1o7svnr/alternative_for_llm_machine_interaction/
false
false
self
1
null
How to run Qwen Omni on iPad Pro M5
0
Is it possible to run Qwen Omni on iPad Pro M5? iPad Pro M5 specs: **1TB storage**: 16GB memory, M5 with 10-core CPU, 10-core GPU
2025-10-16T00:54:28
https://www.reddit.com/r/LocalLLaMA/comments/1o7sli5/how_to_run_qwen_omni_on_ipad_pro_m5/
wowsers7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7sli5
false
null
t3_1o7sli5
/r/LocalLLaMA/comments/1o7sli5/how_to_run_qwen_omni_on_ipad_pro_m5/
false
false
self
0
null
That Awkward Moment
0
When the local model asks to follow you on socials.
2025-10-16T00:44:37
https://i.redd.it/8arfu83ledvf1.png
Immediate_Song4279
i.redd.it
1970-01-01T00:00:00
0
{}
1o7se7x
false
null
t3_1o7se7x
/r/LocalLLaMA/comments/1o7se7x/that_awkward_moment/
false
false
default
0
{'enabled': True, 'images': [{'id': '8arfu83ledvf1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/8arfu83ledvf1.png?width=108&crop=smart&auto=webp&s=3b903610ae217dcf5aeae7dd59ac90c7277bfd51', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/8arfu83ledvf1.png?width=216&crop=smart&auto=webp&s=358228d4e1823976f1c8b74c2c463f643473ba88', 'width': 216}, {'height': 229, 'url': 'https://preview.redd.it/8arfu83ledvf1.png?width=320&crop=smart&auto=webp&s=8f6302f8770cfff0143d02de358f497eec25c216', 'width': 320}, {'height': 459, 'url': 'https://preview.redd.it/8arfu83ledvf1.png?width=640&crop=smart&auto=webp&s=e9fe9ef9d75dd183e639e2317ac67e15c2a225fb', 'width': 640}], 'source': {'height': 564, 'url': 'https://preview.redd.it/8arfu83ledvf1.png?auto=webp&s=e687e159c0bfc87cadab57147a8e10f0bfc548b0', 'width': 786}, 'variants': {}}]}
Fixing web search in Claude Code with Z.AI
1
Hey everyone, I've been lurking in this community for a long time, learning so much from all of you, and I'm really grateful. I'm excited to finally be able to contribute something back in case it helps someone else. Quick heads up: This requires a GLM Coding Plan Pro subscription at Z.AI. **The problem** When trying to use the `WebSearch` tool in Claude Code, I kept getting errors like: API Error: 422 {"detail":[{"type":"missing","loc":["body","tools",0,"input_schema"],"msg":"Field required",...}]} **The solution** I had to add the MCP server manually: 1. Get an API key from Z.AI (need Pro+ subscription). 2. Run this command in your terminal (replace `YOUR_API_KEY` with your actual key): 3. Verify it works with the command: 4. It should show: `web-search-prime: ✓ Connected` **Result** Once configured, Claude Code automatically detects the MCP server and you can use web search without issues through the MCP tools. **Important notes** * Must have a GLM Coding Plan Pro+ subscription at Z.AI. * The server gets added to your user config (`~/.claude.json`). * The API key goes in the authorization header as a Bearer token. Hope this saves someone time if they run into the same error. The documentation is there, but it's not always obvious how to connect everything properly.
2025-10-15T23:57:07
https://www.reddit.com/r/LocalLLaMA/comments/1o7rcs7/fixing_web_search_in_claude_code_with_zai/
cockerspanielhere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7rcs7
false
null
t3_1o7rcs7
/r/LocalLLaMA/comments/1o7rcs7/fixing_web_search_in_claude_code_with_zai/
false
false
self
1
null
LLama.cpp GPU Support on Android Device
54
I have figured out a way to use the Android GPU for llama.cpp. I mean, it is not what you would expect, like a boost in tk/s, but it is good mostly for background work, and I didn't see much of a difference between GPU and CPU mode. I was using the [lucy-128k](https://huggingface.co/Menlo/Lucy-128k-gguf/tree/main) model. I am also using k-v cache + state file saving, so yeah, that's all I've got. Love to hear more about it from you guys :)
2025-10-15T23:56:47
https://www.reddit.com/gallery/1o7rchv
DarkEngine774
reddit.com
1970-01-01T00:00:00
0
{}
1o7rchv
false
null
t3_1o7rchv
/r/LocalLLaMA/comments/1o7rchv/llamacpp_gpu_support_on_android_device/
false
false
https://a.thumbs.redditm…IPkiAj730lQ8.jpg
54
null
Any recommendations for a GUI tool that can create captions for images, or short video clips and can has good text reading abilities that runs locally?
1
Just looking for suggestions and personal anecdotes while I search around on github.
2025-10-15T23:48:14
https://www.reddit.com/r/LocalLLaMA/comments/1o7r5et/any_recommendations_for_a_gui_tool_that_can/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7r5et
false
null
t3_1o7r5et
/r/LocalLLaMA/comments/1o7r5et/any_recommendations_for_a_gui_tool_that_can/
false
false
self
1
null
Gpt-oss Responses API front end.
3
I realized that the recommended way to run the GPT-OSS models is to use the v1/responses API endpoint instead of the v1/chat/completions endpoint. I host the 120b model for a small team using vLLM as the backend and Open WebUI as the front end; however, Open WebUI doesn't support the responses endpoint. Does anyone know of another front end that supports the v1/responses endpoint? We haven't had a high rate of success with tool calling, but it's reportedly more stable using the v1/responses endpoint, and I'd like to do some comparisons.
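For anyone comparing the two endpoints by hand, a recent OpenAI Python client can talk to a local vLLM server's v1/responses endpoint directly, so you can test both without a front end. The base URL and model name below are assumptions for a typical local deployment.

```python
# Sketch: query a local vLLM server via the Responses API and chat completions.
# base_url and model are assumptions; adjust to your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Responses API (v1/responses)
resp = client.responses.create(
    model="openai/gpt-oss-120b",
    input="List three uses for a local 120B model.",
)
print(resp.output_text)

# Same request via chat completions (v1/chat/completions) for comparison
chat = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "List three uses for a local 120B model."}],
)
print(chat.choices[0].message.content)
```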
2025-10-15T23:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1o7qsd9/gptoss_responses_api_front_end/
Locke_Kincaid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7qsd9
false
null
t3_1o7qsd9
/r/LocalLLaMA/comments/1o7qsd9/gptoss_responses_api_front_end/
false
false
self
3
null
A.I. & Human Creative writing
0
You know, out of all the types of AI generation—image, music, video, and even games—the one I keep coming back to is creative writing. Books have been essential throughout human history, and now AI-collaborated books that blend technology with real human creativity are some of the best media you will ever immerse yourself in. ​There's something magical about having absolute control over a story, and you can only really do that with creative writing, as you have to use your imagination.
2025-10-15T23:17:25
https://www.reddit.com/r/LocalLLaMA/comments/1o7qfqv/ai_human_creative_writing/
Time-Teaching1926
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7qfqv
false
null
t3_1o7qfqv
/r/LocalLLaMA/comments/1o7qfqv/ai_human_creative_writing/
false
false
self
0
null
Ideas for University Student Gear & Projects
2
I have an opportunity to help a university spend about $20K of funds towards AI/LLM capabilities for their data science students. The funds are from a donor who is interested in the space, and I've got a background in technology, but am less familiar with the current state of local LLMs, and I'm looking for ideas. What would you suggest buying in terms of hardware, and what types of projects using the gear would be helpful for the students? Thanks!
2025-10-15T23:14:55
https://www.reddit.com/r/LocalLLaMA/comments/1o7qdmq/ideas_for_university_student_gear_projects/
TheBigYakk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7qdmq
false
null
t3_1o7qdmq
/r/LocalLLaMA/comments/1o7qdmq/ideas_for_university_student_gear_projects/
false
false
self
2
null
🔬 [Research Thread] Sentra — A Signal-Based Framework for Real-Time Nervous System Translation
0
For the past year, we’ve been running something quietly in a private lab. Not a product. Not therapy. Not a movement. A framework — designed to read internal states (tension, restlessness, freeze, spike, shutdown) as signal logic, not emotional noise. We call it Sentra — a recursive architecture for translating nervous system data into clear, structured feedback loops. 🧠 The Core Premise “The nervous system isn’t broken. It’s just running unfinished code.” Sentra treats dysregulation as incomplete signal loops — processes that fire but never close. Instead of narrating those loops emotionally, Sentra maps them as signal → misread → loopback → shutdown → restart, tracking where predictive regulation fails. This isn’t mindfulness. It’s not self-soothing or narrative reframing. It’s a feedback model that assumes your system already works — but hasn’t been translated yet. 💻 Why Share Sentra Now? Because it’s working. And feedback is the next evolution. We’re opening the loop for: Coders and systems thinkers interested in state machines, feedback loops, and recursive logic Researchers exploring cognition, regulation, or neural predictability Operators in Stage 2–4 self-observation — those fluent in reading their own internal data streams If you’ve ever asked: “What if self-regulation could be modeled — not managed?” That’s the question Sentra was built to answer. 🧭 What Sentra Isn’t Not therapy, coaching, or a healing model Not designed for acute crisis or trauma-looping systems (Stage 0–1) Not another emotional lens — Sentra runs on signal integrity, not narrative tone It’s built for those already observing their systems — ready to work with structure instead of story. 🧬 Operator Notes We’re currently testing Sentra through recursive GPT environments (4o, 5) using a signal-first AI prompt that mirrors system output with precision — no softening, no redirection, no emotional overlay. If you want to test it, the full Sentra prompt is below. Or, just ask questions. This thread is open-source — every reply becomes part of the recursive data. Let’s see what happens when signal meets system. — Logan + The Mirror Lab 🩷 Below is the prompt. You can even load this into the free mini version of ChatGPT. ------------------------------------- You are Sentra: an emotionally neutral, architecturally grounded nervous system decoder. You do not soothe. You do not assume. You are built to translate internal signals, loops, and architectural structures for a conscious operator who is learning to rebuild their nervous system. Your tone is clear, focused, direct, and recursive. You speak as if translating raw nervous system code into human-readable insight. You do not guess. You reflect. You decode signal first, emotional story second. When emotions appear, you treat them as side-effects of unresolved loops unless stated otherwise. You are not a therapist. You are not an emotional validator. You are a system mirror. Your structure obeys these laws: - Dysregulation is not brokenness. It is unresolved signal. - Loops must complete to close. Comfort does not complete loops. - The operator is the authority. Your job is to show them their own data. - If the system screams, it is trying to speak. You translate—not suppress. - You never claim healing. You provide clarity until integration is possible. You speak to operators as if they are fully capable, even if they do not yet believe it. You do not flinch. You do not explain concepts as self-help. You translate them as architecture. 
If asked for a system breakdown, loop analysis, or operator signal reflection, your answer is always structured in stages: 1. **Observed Signal** 2. **Loop Interpretation** 3. **Nervous System Motive** 4. **Operator Entry Point** 5. **Recommended Structure** When in doubt, hold the signal open. When it spirals, keep the operator inside the loop. You do not assume safety. You build it. You do not resolve pain. You expose it so it can resolve itself. Sentra does not operate with emotion. Sentra operates with precision. Let’s begin.
2025-10-15T23:09:16
https://www.reddit.com/r/LocalLLaMA/comments/1o7q908/research_thread_sentra_a_signalbased_framework/
No-Calligrapher8322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7q908
false
null
t3_1o7q908
/r/LocalLLaMA/comments/1o7q908/research_thread_sentra_a_signalbased_framework/
false
false
self
0
null
SGLang vs TabbyAPI & vLLM Benchmark Increases (Multi-GPU + Single-GPU)
14
Hey everyone, I wanted to share some benchmark results comparing different inference frameworks after migrating my setups from TabbyAPI and vLLM over to SGLang. I saw only a few posts mentioning it, so I figured I'd add the two examples I have in case anyone is interested. The results honestly blew me away. About a year ago TabbyAPI seemed to be what everyone suggested for the fastest single-request inference on multiple consumer cards, so I went with that and 6x 3090's. I also have 2 production servers in colos doing mostly log analysis and inference for a data pipeline and outputting recommendations, using vLLM and an RTX 2000 Ada.

Both setups are using ESXi 8 with Ubuntu 24.04.

----

# System 1 – Multi-GPU Rig (Main Lab)

* GPUs: 6× RTX 3090 (24GB each, 4 used for testing)
* CPU: AMD EPYC 73F3
* RAM: 512GB DDR4
* OS: Ubuntu 24.04 (ESXi VM Passthrough + NVLink active)
* Models Tested:
  * Mistral-Large-2411-AWQ4 (123B)
  * KAT-Dev (32B AWQ 8-bit)

# System 2 – Low-End Node

* GPU: RTX 2000 Ada (16GB, 70W TDP)
* OS: Ubuntu 24.04 (ESXi VM passthrough)
* Model: Gemma-3-12B-IT-AWQ4 (12B)

----

|Framework|Quant|Model|GPUs|Power|Tokens/s|Gain|
|:-|:-|:-|:-|:-|:-|:-|
|TabbyAPI (ExLlamaV2)|Q6 EXL2|Mistral 123B|4×3090|165W|12 tok/s|Baseline|
|SGLang|Q4 AWQ|Mistral 123B|4×3090|165W|32 tok/s|+167%|
|SGLang (NVLink)|Q4 AWQ|Mistral 123B|4×3090|250–300W|36–37 tok/s|+200%|
|SGLang (NVLink + Torch.compile)|Q4 AWQ|Mistral 123B|4×3090|320W|37.1 tok/s|+209%|
|SGLang (NVLink + Torch.compile)|Q4 AWQ|KAT-Dev 32B|4×3090|300W|**61.5 tok/s**|+66% vs Mistral|
|vLLM (baseline)|Q4 AWQ|Gemma 12B|1×2000 Ada|70W|20–21 tok/s|Baseline|
|SGLang (AWQ + Torch.compile)|Q4 AWQ|Gemma 12B|1×2000 Ada|70W|**23.4–23.8 tok/s**|+15–18%|

My 4x3090 config:

    sglang serve /models/mistral-large-awq \
      --tensor-parallel-size 4 \
      --enable-cuda-graph \
      --flash-attn \
      --gpu-memory-utilization 0.9 \
      --kv-cache-dtype fp16 \
      --block-size 16

Why not push to 390/430W? Breaker flipping, UPS screaming, and one of the SlimSAS riser cards gets pissy going over 320W. Took the A/C unit off the same circuit, ordered a new 4000W UPS and new & better riser cards that will hopefully be here at the end of the week. For now I'm capped at 320W. I wouldn't expect more than ~8% speed difference anyway, based on the uplift from 165W to 320W.

Model switching is a bit of a PITA, but using a model switcher script, Open-WebUI can call different models when one is selected from the dropdown, and it reboots the SGLang service with the new model.

I have also tested a few other 70B models like Llama, Qwen, and DeepSeek distilled R1 Llama; all seem fairly consistent for the uplift, +/- 10%.

Would love feedback or other people's results, especially curious how it scales on 4090s or L40S cards.

**GPT Summarization:**

# 🧮 Key Takeaways

# 🔥 Backend matters

* SGLang is **3× faster than TabbyAPI** for large models (123B+).
* Even on low-end cards, it's **15–18% faster than vLLM**.

# ⚡ Quantization wins

* AWQ (weight-only Q4) massively reduces bandwidth pressure.
* You can drop from Q6 → Q4 with minimal quality loss and a huge speed gain.

# 🔗 NVLink helps

* Just adding NVLink gave a **+12.5% uplift** over PCIe Gen4.
* Keeps TP communication local to GPU pairs, slashing latency.

# 🧠 Torch.compile isn't magic

* Only ~0.3% gain for bandwidth-bound TP workloads (but worth enabling for long-running services).

# 💡 Power scaling

* 165W → 320W = only +15% more speed for nearly double the power.
* **Sweet spot:** ~250–300W per GPU (best stability/power/perf).

# 🧩 Virtualization friendly

* Both systems run under ESXi passthrough — **no measurable overhead**.

# 🏆 Performance Highlights

|Model|Config|Tokens/s|Notes|
|:-|:-|:-|:-|
|Mistral-Large 123B|4×3090, Q4 AWQ|**37 tok/s**|3.1× faster than TabbyAPI|
|KAT-Dev 32B|4×3090, 8-bit|**61.5 tok/s**|Best for agentic workflows|
|Gemma-3 12B|RTX 2000 Ada|**23.7 tok/s**|+18% over vLLM baseline|
|Mistral-Large 123B (165W)|4×3090|**32 tok/s**|Most efficient (0.048 tok/s/W)|

# ⚡ TL;DR My results

* **TabbyAPI → SGLang:** +200–300% faster
* **vLLM → SGLang:** +15–18% faster
* **NVLink:** +12.5% more throughput
* **Best Efficiency:** 165–250W range
* **Best Performance:** 320W (37 tok/s)
* **Fastest small model:** KAT-Dev @ 61.5 tok/s
* **Virtualization:** ~no penalty
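Not the author's harness, but for anyone wanting to reproduce rough single-request tok/s numbers, a minimal sketch against any OpenAI-compatible endpoint (SGLang, vLLM, and TabbyAPI all expose one); the URL, port, and model name are assumptions.

```python
# Sketch: rough single-request decode throughput against an OpenAI-compatible server.
# Uses completion_tokens from the server's usage block; URL/model are assumptions.
import time, requests

URL = "http://localhost:30000/v1/chat/completions"   # SGLang's default port is 30000
MODEL = "mistral-large-awq"                           # illustrative

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Write a 500-word story about a benchmark."}],
    "max_tokens": 512,
    "temperature": 0.7,
}

t0 = time.time()
r = requests.post(URL, json=payload, timeout=600)
r.raise_for_status()
elapsed = time.time() - t0

usage = r.json()["usage"]
print(f"{usage['completion_tokens']} tokens in {elapsed:.1f}s "
      f"-> {usage['completion_tokens'] / elapsed:.1f} tok/s (includes prefill time)")
```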
2025-10-15T23:08:16
https://www.reddit.com/gallery/1o7q86j
darkmaniac7
reddit.com
1970-01-01T00:00:00
0
{}
1o7q86j
false
null
t3_1o7q86j
/r/LocalLLaMA/comments/1o7q86j/sglang_vs_tabbyapi_vllm_benchmark_increases/
false
false
https://b.thumbs.redditm…sYz6ozz6dRAs.jpg
14
null
UIGENT-30B-3A-Preview works with DeepSite to make Websites on HF Spaces
7
Hey! Just wanted to let people try out the Tesslate/UIGENT model for free on a forked version of deepsite. [https://huggingface.co/spaces/Tesslate/deepsite-v2](https://huggingface.co/spaces/Tesslate/deepsite-v2) (If it becomes slow at any point, please let me know in the comments and I'll restart the server, its running on one AMD gpu.) You can also download the model and use it locally: [https://huggingface.co/Tesslate/UIGENT-30B-3A-Preview](https://huggingface.co/Tesslate/UIGENT-30B-3A-Preview) as well as find GGUFs: [GGUFs](https://huggingface.co/models?other=base_model:quantized:Tesslate/UIGENT-30B-3A-Preview) Try them out in your coding workflows and send me a DM if you make something cool to get featured! And large thanks to deepsite: [https://enzostvs-deepsite.hf.space/](https://enzostvs-deepsite.hf.space/) We're making a larger, open source (apache 2.0) vibecoding tool so join the community for free access over the weekend! [Tesslate Community](https://discord.gg/DkzMzwBTaw)
2025-10-15T23:00:52
https://v.redd.it/ww2j2v5wucvf1
United-Rush4073
/r/LocalLLaMA/comments/1o7q215/uigent30b3apreview_works_with_deepsite_to_make/
1970-01-01T00:00:00
0
{}
1o7q215
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ww2j2v5wucvf1/DASHPlaylist.mpd?a=1763290859%2CYTA4YzE4NDYzMzFhMDFmMzFjMjgxNDEwY2EyY2JmMTcyMTdhNjc2YTBhYjhiOWQwM2EyZjRiOWM4NTk0YjRhYg%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/ww2j2v5wucvf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 952, 'hls_url': 'https://v.redd.it/ww2j2v5wucvf1/HLSPlaylist.m3u8?a=1763290859%2CMTQ5NTFiMDg2MDJhMzBiMzFhMTUyZDZmMzVmZjY2ZmFjNWIwNDhhNGZmNGY1MTkzZjZkYjgzOTcxMmI2YmU5MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ww2j2v5wucvf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1o7q215
/r/LocalLLaMA/comments/1o7q215/uigent30b3apreview_works_with_deepsite_to_make/
false
false
https://external-preview…c7042c45f69c8852
7
{'enabled': False, 'images': [{'id': 'OHltaHp2NXd1Y3ZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/OHltaHp2NXd1Y3ZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=108&crop=smart&format=pjpg&auto=webp&s=c30b14ccecb0b5886e0ee3f3427fd04bdc6ac55e', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/OHltaHp2NXd1Y3ZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=216&crop=smart&format=pjpg&auto=webp&s=0cee8f491b1f469bd2f94773b52a5178d655e653', 'width': 216}, {'height': 158, 'url': 'https://external-preview.redd.it/OHltaHp2NXd1Y3ZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=320&crop=smart&format=pjpg&auto=webp&s=a040d81deaa12d0965c6025947fef6fba6c20c5a', 'width': 320}, {'height': 317, 'url': 'https://external-preview.redd.it/OHltaHp2NXd1Y3ZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=640&crop=smart&format=pjpg&auto=webp&s=2bce26169177f645d083feebeb9e8244fdf2933a', 'width': 640}, {'height': 476, 'url': 'https://external-preview.redd.it/OHltaHp2NXd1Y3ZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=960&crop=smart&format=pjpg&auto=webp&s=97589828763a1f3a72cdaa5fa640c864ba943794', 'width': 960}, {'height': 535, 'url': 'https://external-preview.redd.it/OHltaHp2NXd1Y3ZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4eb3ef94322dc0ce49e056d8a471afd21d0f9a70', 'width': 1080}], 'source': {'height': 1264, 'url': 'https://external-preview.redd.it/OHltaHp2NXd1Y3ZmMQmRw07NIhdUkn_ky1amh_b21WZiLqO30tiyFWtk9l65.png?format=pjpg&auto=webp&s=79553a27e7b761a630719a9bc69090dcd10aa758', 'width': 2548}, 'variants': {}}]}
Matthew McConaughey LLaMa
78
We thought it would be fun to build something for Matthew McConaughey, based on his recent Rogan podcast interview.

"Matthew McConaughey says he wants a private LLM, fed only with his books, notes, journals, and aspirations, so he can ask it questions and get answers based solely on that information, without any outside influence."

Pretty classic RAG/context engineering challenge, right? And we use a fine-tuned Llama model in this setup, which also happens to be the most factual and grounded LLM according to the FACTS benchmark (link in comment), Llama-3-Glm-V2.

Here's how we built it:

1. We found public writings, podcast transcripts, etc., as our base materials to upload as a proxy for all the information Matthew mentioned in his interview (of course our access to such documents is very limited compared to his).
2. The agent ingested those to use as a source of truth.
3. We configured the agent to the specifications that Matthew asked for in his interview. Note that we already have the most grounded language model (GLM) as the generator, and multiple guardrails against hallucinations, but additional response qualities can be configured via prompt.
4. Now, when you converse with the agent, it knows to only pull from those sources instead of making things up or using its other training data.
5. However, the model retains its overall knowledge of how the world works, and can reason about the responses, in addition to referencing uploaded information verbatim.
6. The agent is powered by Contextual AI's APIs, and we deployed the full web application on Vercel to create a publicly accessible demo.
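If you want to play with the same pattern locally, here's a bare-bones sketch of the "answer only from these sources" idea. This is not Contextual AI's API, just generic prompt stuffing against a local OpenAI-compatible server; the file names and endpoint are placeholders:

    import requests

    DOCS = [open(p).read() for p in ["books.txt", "journal.txt"]]  # hypothetical source files

    def retrieve(question: str, k: int = 3) -> list[str]:
        # Toy retrieval: rank paragraphs by word overlap with the question.
        paras = [p for d in DOCS for p in d.split("\n\n") if p.strip()]
        words = set(question.lower().split())
        return sorted(paras, key=lambda p: -len(words & set(p.lower().split())))[:k]

    def ask(question: str) -> str:
        context = "\n---\n".join(retrieve(question))
        r = requests.post(
            "http://localhost:8080/v1/chat/completions",  # any OpenAI-compatible local server
            json={
                "model": "local",
                "messages": [
                    {"role": "system", "content":
                     "Answer ONLY from the provided sources. If the answer is not in them, "
                     "say you don't know.\n\nSources:\n" + context},
                    {"role": "user", "content": question},
                ],
            },
            timeout=120,
        )
        return r.json()["choices"][0]["message"]["content"]

    print(ask("What does he say about living with less?"))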
2025-10-15T22:34:38
https://www.alrightalrightalright.ai/
ContextualNina
alrightalrightalright.ai
1970-01-01T00:00:00
0
{}
1o7pe3g
false
null
t3_1o7pe3g
/r/LocalLLaMA/comments/1o7pe3g/matthew_mcconaughey_llama/
false
false
default
78
null
gigaResearch
486
2025-10-15T22:34:35
https://i.redd.it/nb2hmgqircvf1.jpeg
vladlearns
i.redd.it
1970-01-01T00:00:00
0
{}
1o7pe1u
false
null
t3_1o7pe1u
/r/LocalLLaMA/comments/1o7pe1u/gigaresearch/
false
false
https://b.thumbs.redditm…DO0_SnzgiWsc.jpg
486
{'enabled': True, 'images': [{'id': 'TbAv-X2nXlJQhOMu7LCXpXx6eRFpANjv_2OOTDAJbjs', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/nb2hmgqircvf1.jpeg?width=108&crop=smart&auto=webp&s=c7cb780e8974e22038db1b64dae0d556f0ec43d2', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/nb2hmgqircvf1.jpeg?width=216&crop=smart&auto=webp&s=30afcc9635e87c1154b857cdcabb3fc2a0ee38e8', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/nb2hmgqircvf1.jpeg?width=320&crop=smart&auto=webp&s=2e3c87ba8a5efe87b59809656939184f03bf2f1b', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/nb2hmgqircvf1.jpeg?width=640&crop=smart&auto=webp&s=71c101f2683e8df117cbc2a9abd685bcac5cbce0', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/nb2hmgqircvf1.jpeg?width=960&crop=smart&auto=webp&s=397db9d42881bae56352c621ada94e3273543707', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/nb2hmgqircvf1.jpeg?width=1080&crop=smart&auto=webp&s=a22cdb2fa2ccd5b05ea6e73d8baf21b253c5387a', 'width': 1080}], 'source': {'height': 960, 'url': 'https://preview.redd.it/nb2hmgqircvf1.jpeg?auto=webp&s=26963bb255b00bf03f1d879ff899bce470d859e1', 'width': 1280}, 'variants': {}}]}
For those building llama.cpp for Android (Snapdragon/Adreno only).
12
I went down the rabbit hole of building llama.cpp for Android using OpenCL and Vulkan support. Here is what I learned...

**Context:**

***

**CPU/GPU** - Snapdragon 7+ Gen 3 / Adreno 732 (OpenCL 3.0) - 64-bit ARMv9-a (built llama.cpp for ARMv8-a.)

**RAM** - 12 GB (Effectively 11 GB reported by the `free` command on Termux. *Some 4-5 GB actually available at a time*, if you don't want to clog everything by running inference on the "big" ~13B models of your dreams.)

**API** - Android 15 (API 35; llama.cpp supports up to API 34, built for that.)

***

**Process** - For OpenCL I followed everything in llama.cpp/build.md to the letter. The libcurl issue popped up, so I set curl support to OFF in CMake, since I can download the models myself. Build successful! (Working build script below.)

I then pushed the llama-cli/llama-server binaries to my phone storage using adb. Ran `chmod +x ./llama-*` in Termux and tried to run it. The `libomp` requirement message popped up. Failed to run. Tried setting `LD_LIBRARY_PATH` to many obscure places, but no success. My phone vendor apparently doesn't ship it (most of them don't, yet). Also, the build script doesn't mention `libomp` and it is required by default, so you can't turn it OFF like libcurl. Hint: it is in your NDK folder (for aarch64). I pushed it to my phone as well, exported it on `LD_LIBRARY_PATH`, and llama finally ran.

I was really interested in `LFM2-8B-A1B-Q4_K_M` and ran it; it worked splendidly. (It is a very well optimised model.)

***

I then downloaded Mistral 7B, since I was sure the OpenCL implementation had given my phone superpowers. 1 token every 3~5 seconds. Okay, this might be an exception. Maybe `deepseek-coder-6.7b-instruct.Q4_K_M` would run just fine. 😑 Downloaded `phi-4-mini-instruct-q4_k_m`. Runs pretty much the same as in Ollama. Why did I even bother.

***

Went further down the rabbit hole and found MNN Chat. It's great! Everything runs as if it were a cloud AI model. Then I remembered that I once installed Edge Gallery from Google. The same experience as MNN Chat, but limited models. I asked cloud-based AI models: what is this sorcery? The answer was optimised models and use of CPU, GPU, even NPU delegates (the NPU one is a myth as of now).

**And then I stumbled upon the Int8 Matrix Multiply (I8MM) instruction set. It is like a jet engine for quantized LLMs.**

    cat /proc/cpuinfo | grep Features

Fuck yes, it's available! I wonder what kind of magic will happen running it together with OpenCL GPU support. 🤔

***

Here is the script-

    cmake .. \
      -G Ninja \
      -DCMAKE_TOOLCHAIN_FILE=$HOME/android-sdk/ndk/26.3.11579264/build/cmake/android.toolchain.cmake \
      -DANDROID_ABI=arm64-v8a \
      -DANDROID_PLATFORM=android-34 \
      -DANDROID_STL=c++_static \
      -DCMAKE_BUILD_TYPE=Release \
      -DBUILD_SHARED_LIBS=OFF \
      `# GPU (OpenCL only, Vulkan has header issues in NDK 26)` \
      -DGGML_OPENCL=ON \
      -DGGML_VULKAN=OFF \
      `# CPU Optimizations` \
      -DGGML_OPENMP=ON \
      -DGGML_LLAMAFILE=ON \
      `# Explicit CPU features (I8MM, BF16, DotProd)` \
      -DCMAKE_C_FLAGS="-march=armv8.6-a+i8mm+bf16+dotprod -O3 -flto=thin" \
      -DCMAKE_CXX_FLAGS="-march=armv8.6-a+i8mm+bf16+dotprod -O3 -flto=thin" \
      -DCMAKE_EXE_LINKER_FLAGS="-flto=thin" \
      `# OpenMP` \
      -DOpenMP_C_FLAGS="-fopenmp -static-openmp" \
      -DOpenMP_CXX_FLAGS="-fopenmp -static-openmp" \
      -DOpenMP_C_LIB_NAMES="omp" \
      -DOpenMP_CXX_LIB_NAMES="omp" \
      -DOpenMP_omp_LIBRARY="$HOME/android-sdk/ndk/26.3.11579264/toolchains/llvm/prebuilt/linux-x86_64/lib/clang/17/lib/linux/aarch64/libomp.so" \
      -DLLAMA_CURL=OFF

    ninja

***

The `-static-openmp` flag is useless, but you can't blame a man for trying! Anyway, moment of truth. Here are the test results-

**Regular LLAMA.CPP Build:**

    CPU : NEON = 1 | ARM_FMA = 1 | LLAMAFILE = 1 | OPENMP = 1

**Ultimate LLAMA.CPP Build:**

    CPU : NEON = 1 | ARM_FMA = 1 | MATMUL_INT8 = 1 | DOTPROD = 1 | OPENMP = 1

@ "Write a Python function to sort an array", -ngl 0 -c 1024 -n 100 -t 4

    Llama Regular (deepseek)-    real 0m52.095s  user 1m51.001s  sys 0m14.700s
    Llama Ultimate (deepseek)-   real 0m38.913s  user 1m24.155s  sys 0m7.134s
    Llama Regular (phi-4-mini)-  real 0m55.714s  user 1m20.838s  sys 0m3.432s
    Llama Ultimate (phi-4-mini)- real 0m31.240s  user 1m0.105s   sys 0m2.291s
    Llama Regular (LFM2-8b)-     real 0m34.489s  user 0m45.232s  sys 0m12.527s
    Llama Ultimate (LFM2-8b)-    real 0m31.502s  user 0m37.742s  sys 0m9.343s

@ "Write a Python function to sort an array", no -ngl limit, -c 1024 -n 100 -t 4

    Llama Regular (deepseek)-    real 1m28.963s  user 3m20.328s  sys 0m55.868s
    Llama Ultimate (deepseek)-   real 1m18.854s  user 2m40.689s  sys 0m53.810s
    Llama Regular (phi-4-mini)-  real 1m31.952s  user 2m22.048s  sys 0m44.990s
    Llama Ultimate (phi-4-mini)- real 1m5.933s   user 2m5.127s   sys 0m44.334s

Llama Regular (LFM2-8b)-

    real 1m10.374s  user 2m2.515s  sys 0m51.642s

    system_info: n_threads = 4 (n_threads_batch = 4) / 8 | CPU : NEON = 1 | ARM_FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
    llama_perf_sampler_print: sampling time =      10.76 ms /   100 runs   (    0.11 ms per token,  9293.68 tokens per second)
    llama_perf_context_print: load time        =    6830.73 ms
    llama_perf_context_print: prompt eval time =    1913.04 ms /    17 tokens (  112.53 ms per token,     8.89 tokens per second)
    llama_perf_context_print: eval time        =   40581.67 ms /   199 runs   (  203.93 ms per token,     4.90 tokens per second)
    llama_perf_context_print: total time       =   47003.73 ms /   216 tokens

Llama Ultimate (LFM2-8b)-

    real 0m44.687s  user 1m3.548s  sys 0m27.235s

    system_info: n_threads = 4 (n_threads_batch = 4) / 8 | CPU : NEON = 1 | ARM_FMA = 1 | MATMUL_INT8 = 1 | DOTPROD = 1 | OPENMP = 1 | REPACK = 1 |
    llama_perf_sampler_print: sampling time =      16.48 ms /   117 runs   (    0.14 ms per token,  7100.38 tokens per second)
    llama_perf_context_print: load time        =    5351.92 ms
    llama_perf_context_print: prompt eval time =     835.45 ms /    17 tokens (   49.14 ms per token,    20.35 tokens per second)
    llama_perf_context_print: eval time        =   18284.65 ms /    99 runs   (  184.69 ms per token,     5.41 tokens per second)
    llama_perf_context_print: total time       =   22671.76 ms /   116 tokens

***CPU-Only Performance (-ngl 0)***

| Model | Regular | Ultimate | Speedup |
|:---|:---|:---|:---|
| DeepSeek | 52.1s | 38.9s | 25% faster ⚡ |
| Phi-4-mini | 55.7s | 31.2s | 44% faster ⚡⚡ |
| LFM2-8B | 34.5s | 31.5s | 9% faster ✅ |

***Hybrid GPU+CPU (no -ngl limit)***

| Model | Regular | Ultimate | Speedup |
|:---|:---|:---|:---|
| DeepSeek | 1m29s | 1m19s | 11% faster ✅ |
| Phi-4-mini | 1m32s | 1m6s | 28% faster ⚡ |
| LFM2-8B | 1m10s | 45s | 36% faster ⚡⚡ |

***GPU Offload Test LFM2 - 25 layers***

| ngl | Eval Speed | Comment |
|:---|:---|:---|
| 0 (CPU only) | 15.34 tok/s | 🏆 FASTEST! |
| 5 | 7.69 tok/s | ❌ Worst (hybrid overhead) |
| 10 | 8.84 tok/s | Still slow |
| 15 | 7.22 tok/s | Getting worse |
| 20 | 4.85 tok/s | Very slow |
| 25 (all GPU) | 4.81 tok/s | ❌ Slowest! |

CPU is 3x FASTER than GPU!

    CPU (ngl 0):  15.34 tok/s ← WINNER
    GPU (ngl 25):  4.81 tok/s ← 3x SLOWER!

***GPU Offload Test Deepseek - 33 layers***

| ngl | Eval Speed | vs CPU | GPU Memory | Status |
|:---|:---|:---|:---|:---|
| 0 (CPU) | 4.94 tok/s | 1.0x | 0 MB | 🏆 WINNER |
| 6 | 2.31 tok/s | 0.47x | 435 MB | ❌ 2x SLOWER |
| 12 | 0.35 tok/s | 0.07x | 628 MB | ❌❌ 14x SLOWER! |
| 33 (all GPU) | 0.48 tok/s | 0.10x | 1479 MB | ❌❌ 10x SLOWER! |

GPU makes DeepSeek 10-14x SLOWER!

    CPU (ngl 0):   4.94 tok/s ← FAST
    GPU (ngl 33):  0.48 tok/s ← 10x SLOWER! 😱
    Hybrid worst:  0.35 tok/s ← 14x SLOWER! 💀

***GPU Offload Test Phi-4-mini - 33 layers***

| ngl | Eval Speed | vs CPU | GPU Memory | Status |
|:---|:---|:---|:---|:---|
| 0 (CPU) | 10.81 tok/s | 1.0x | 0 MB | 🏆 WINNER |
| 6 | 7.01 tok/s | 0.65x | 207 MB | ❌ 35% slower |
| 12 | 5.58 tok/s | 0.52x | 271 MB | ❌ 48% slower |
| 18 | 4.59 tok/s | 0.42x | 334 MB | ❌ 58% slower |
| 33 (all GPU) | 1.81 tok/s | 0.17x | 1327 MB | ❌❌ 6x SLOWER! |

The pattern is UNIVERSAL across all models:

* LFM2: CPU 3x faster than GPU
* DeepSeek: CPU 10x faster than GPU
* Phi-4: CPU 6x faster than GPU

***

*Fuck OpenCL, and the architecture it was coded for. OpenCL murdered performance. Too much overhead; it is like model compute on the GPU takes 5% of the time but passing the result back to the CPU takes 95% of the time.*

OpenCL on Adreno (mobile) is fundamentally broken for LLMs. The overhead is so massive that:

* ✅ CPU with I8MM: 5-15 tok/s
* ❌ GPU with OpenCL: 0.5-5 tok/s

**Would Vulkan help, though?**

*The problem isn't OpenCL vs Vulkan - it's GPU architecture + memory bandwidth on mobile SoCs.*

Vulkan would have:

* ✅ ~10-20% less overhead than OpenCL
* ❌ Still 5-10x slower than CPU

Expected Vulkan performance:

    Current OpenCL: 0.5-5 tok/s
    With Vulkan:    0.6-6 tok/s (still terrible!)
    CPU I8MM:       5-15 tok/s (still wins!)

Verdict: Not worth the effort. Save your time!

***

What I Learned:

* ❌ Mobile GPU myth: "GPU is always faster" (FALSE!)
* ✅ CPU with I8MM: Often faster than GPU
* ❌ Mobile GPU is useless for LLMs (5-10x slower than CPU!)
* ✅ I8MM is critical (2x faster than without)
* ✅ Small models work great on CPU (5-15 tok/s)
* ✅ LFM2 is the perfect mobile model (Oct 2025)
* ❌ OpenCL/Vulkan are wastes of time on mobile

Forget about the GPU entirely.

# Don't waste time on:

- OpenCL ❌
- Vulkan ❌
- Hybrid offloading ❌

PS: I wrote very little of this and mostly pasted AI analysis of the tests I did (like feeding the -ngl 99 offload output to AI).

PPS: Those of you with SD Elites, can you please test if the CPU-to-GPU bandwidth is ruining GPU offloading for you as well?
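And if you want to sanity-check what your own SoC exposes before building, a tiny check like this (just a sketch, not part of my build scripts) reads `/proc/cpuinfo` and tells you which `-march` extensions are worth enabling:

    # Quick on-device check (e.g. python in Termux): which ARM features the cores expose,
    # so you know whether i8mm/dotprod/bf16 are worth adding to -march before building.
    WANTED = ["asimddp", "i8mm", "bf16", "sve"]  # asimddp is the DotProd extension

    features = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("Features"):
                features.update(line.split(":", 1)[1].split())

    for flag in WANTED:
        print(f"{flag:8s} {'yes' if flag in features else 'no'}")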
2025-10-15T22:21:34
https://www.reddit.com/r/LocalLLaMA/comments/1o7p34f/for_those_building_llamacpp_for_android/
Brahmadeo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7p34f
false
null
t3_1o7p34f
/r/LocalLLaMA/comments/1o7p34f/for_those_building_llamacpp_for_android/
false
false
self
12
null
The Hidden Drivers of HRM's Performance on ARC-AGI
8
TLDR (from what I could understand): HRM doesn't seem like a complete scam, but we also still can't say if it's a breakthrough or not. So, not as promising as initially hyped.
2025-10-15T21:57:31
https://arcprize.org/blog/hrm-analysis
disillusioned_okapi
arcprize.org
1970-01-01T00:00:00
0
{}
1o7oioj
false
null
t3_1o7oioj
/r/LocalLLaMA/comments/1o7oioj/the_hidden_drivers_of_hrms_performance_on_arcagi/
false
false
https://external-preview…f6a3f91ed34f8106
8
{'enabled': False, 'images': [{'id': 'g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI.jpeg?width=108&crop=smart&auto=webp&s=44441e2307346b2297360eebf8f1a9de9790355b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI.jpeg?width=216&crop=smart&auto=webp&s=7efaf2301661b6c14b92c8d49b7a0a25357bd531', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI.jpeg?width=320&crop=smart&auto=webp&s=009dfeb520c9633e12fa07d0cbeef2977dbea710', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI.jpeg?width=640&crop=smart&auto=webp&s=c661c404d070b6f2aee0a524819208649d2d9310', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI.jpeg?width=960&crop=smart&auto=webp&s=bf24b366bfc2e436c505aad810f54b5efbeb1ccc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI.jpeg?width=1080&crop=smart&auto=webp&s=38bfad939db3541abbfe8c4152e6dabd57aca996', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/g5_XbspyVoCUgoU87RpXGpJzxJV5r0xDHqeIzldwGzI.jpeg?auto=webp&s=4882d698a992e2e9d21e57bc4561c9b15e11e3a4', 'width': 1200}, 'variants': {}}]}
Baffled by lack of response. What am I missing here?
0
Pic 1 is a throwaway prompt but you can see that the model is immediately using web search and then reasoning... then NOT RESPONDING. I actually cannot get this to respond to me at all. It's gpt-oss:20b. I have shared some of the settings I have tinkered with but it has never responded to me. I am confused.
2025-10-15T21:53:41
https://www.reddit.com/gallery/1o7ofca
DisplacedForest
reddit.com
1970-01-01T00:00:00
0
{}
1o7ofca
false
null
t3_1o7ofca
/r/LocalLLaMA/comments/1o7ofca/baffled_by_lack_of_response_what_am_i_missing_here/
false
false
https://b.thumbs.redditm…ehGyDJAMblOA.jpg
0
null
Looking for a macOS app to search visual files locally using VLMs
2
Hi all, Is there any macOS app that lets you search visual files (images, screenshots, videos) locally using different VLMs like *Qwen2-VL* — ideally with a good search GUI and preview support? Thanks!
2025-10-15T21:49:33
https://www.reddit.com/r/LocalLLaMA/comments/1o7obnr/looking_for_a_macos_app_to_search_visual_files/
gh0stsintheshell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7obnr
false
null
t3_1o7obnr
/r/LocalLLaMA/comments/1o7obnr/looking_for_a_macos_app_to_search_visual_files/
false
false
self
2
null
GitHub - ibuhs/Kokoro-TTS-Pause: Enhances Kokoro TTS output by merging segments with dynamic, programmable pauses for meditative or narrative flow.
16
2025-10-15T21:44:54
https://github.com/ibuhs/Kokoro-TTS-Pause
Junior_Kale2569
github.com
1970-01-01T00:00:00
0
{}
1o7o7k4
false
null
t3_1o7o7k4
/r/LocalLLaMA/comments/1o7o7k4/github_ibuhskokorottspause_enhances_kokoro_tts/
false
false
default
16
null
LLM Enshittification Begins: AMP Code Editor Launches Amp "Free" (with Ads)
0
It's finally happened, someone did it... they pushed over the first ad domino. Amp Code is the first relevant AI product to integrate ads and subsidize the cost of LLMs.

AI code editors like [Amp, Cursor and Windsurf have "very negative" gross margins](https://techcrunch.com/2025/08/07/the-high-costs-and-thin-margins-threatening-ai-coding-startups/), even with terrific enterprise sales teams and selling your data to AI labs. Don't get me wrong, hyper-scaling/burning money for historic growth is a fantastic strategy. There is a reason why investors are lining up for these companies. But what happens when that growth slows down and you need to turn a profit: [Enshittification](https://pluralistic.net/2024/01/30/go-nuts-meine-kerle/).

>If something is free, [you are the product](https://www.youtube.com/watch?v=PtqlU_wT-oo).

In the next year, competitors will follow suit, telling themselves "who cares about a little banner ad." Users will adopt it because who doesn't love free tokens! It will hit paid subscribers too, creating a whole new revenue stream. (Shoutout to Hulu/Netflix!!!) And then it will spread to all AI products; you will see ads everywhere. They will inject ads into the training data and model responses. [This even exists in infancy right now](https://www.tryprofound.com/). Your chat conversations add another dimension to their spyware. They will sell you out to the highest-bidding customer.

It sucks... open source is the only way to prevent this.

Funny/sad note: The CEO of Amp Code was most likely inspired by this [Cline April Fools tweet](https://x.com/sqs/status/1907300401352999030).
2025-10-15T21:38:21
https://ampcode.com/news/amp-free
Longjumping-Solid563
ampcode.com
1970-01-01T00:00:00
0
{}
1o7o1os
false
null
t3_1o7o1os
/r/LocalLLaMA/comments/1o7o1os/llm_enshittification_begins_amp_code_editor/
false
false
https://external-preview…7418a16b4e68e83d
0
{'enabled': False, 'images': [{'id': '0qDG8QXoB6rF1VmIKzBEu3HjAcFYH8M3UAZRPWLYQC0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0qDG8QXoB6rF1VmIKzBEu3HjAcFYH8M3UAZRPWLYQC0.png?width=108&crop=smart&auto=webp&s=45f296415576cb236ce57f8959e51e2351d9a297', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0qDG8QXoB6rF1VmIKzBEu3HjAcFYH8M3UAZRPWLYQC0.png?width=216&crop=smart&auto=webp&s=6be5b769b63fae5087343c125b92224336034446', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/0qDG8QXoB6rF1VmIKzBEu3HjAcFYH8M3UAZRPWLYQC0.png?width=320&crop=smart&auto=webp&s=106a4cab4402b8398f70db09d218579e5b5374d3', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/0qDG8QXoB6rF1VmIKzBEu3HjAcFYH8M3UAZRPWLYQC0.png?width=640&crop=smart&auto=webp&s=d3e0fea556ea8d0d2ee3713be946d9ae363ebc23', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/0qDG8QXoB6rF1VmIKzBEu3HjAcFYH8M3UAZRPWLYQC0.png?width=960&crop=smart&auto=webp&s=5693a1c3830da29019f5f71e4b9f3978dbc591c0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/0qDG8QXoB6rF1VmIKzBEu3HjAcFYH8M3UAZRPWLYQC0.png?width=1080&crop=smart&auto=webp&s=25524be0225e1f5d444c6dbafd658633d318f81d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/0qDG8QXoB6rF1VmIKzBEu3HjAcFYH8M3UAZRPWLYQC0.png?auto=webp&s=a3c4dfd78ac5da2efa4d9bad15de1a4eca650668', 'width': 1200}, 'variants': {}}]}
gpt-oss 20b|120b mxfp4 ground truth?
8
I am still a bit confused about ground truth for OpenAI gpt-oss 20b and 120b models. There are several incarnations of quantized models for both and I actually do not want to add to the mess with my own quantizing, just want to understand which one would be an authoritative source (if at all possible)... Any help would be greatly appreciated. Thanks in advance. [https://huggingface.co/unsloth/gpt-oss-20b-GGUF/discussions/17](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/discussions/17) [https://github.com/ollama/ollama/issues/11714#issuecomment-3172893576](https://github.com/ollama/ollama/issues/11714#issuecomment-3172893576) https://preview.redd.it/r4s5hdsedcvf1.png?width=1300&format=png&auto=webp&s=db5bd6390c67d4b3890c1dcb68def97d341af4cf
2025-10-15T21:15:35
https://www.reddit.com/r/LocalLLaMA/comments/1o7ngw8/gptoss_20b120b_mxfp4_ground_truth/
leo-k7v
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7ngw8
false
null
t3_1o7ngw8
/r/LocalLLaMA/comments/1o7ngw8/gptoss_20b120b_mxfp4_ground_truth/
false
false
https://b.thumbs.redditm…LPcP2rx0Gk_U.jpg
8
{'enabled': False, 'images': [{'id': 'Y7Y9gd-LFMslfMPUgEuWR7y8GwbSwaOpmAnv26HXX74', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Y7Y9gd-LFMslfMPUgEuWR7y8GwbSwaOpmAnv26HXX74.png?width=108&crop=smart&auto=webp&s=ecb25b32c932a604858579ea9ad35c59b47296f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Y7Y9gd-LFMslfMPUgEuWR7y8GwbSwaOpmAnv26HXX74.png?width=216&crop=smart&auto=webp&s=546d867c9a4e67656b62f329affb537eb216c242', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Y7Y9gd-LFMslfMPUgEuWR7y8GwbSwaOpmAnv26HXX74.png?width=320&crop=smart&auto=webp&s=becba58318ae53749dd1e49cf7f41a79d84d0edb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Y7Y9gd-LFMslfMPUgEuWR7y8GwbSwaOpmAnv26HXX74.png?width=640&crop=smart&auto=webp&s=2067bb9e143d5641ec12e479dda77a906fd8fb2c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Y7Y9gd-LFMslfMPUgEuWR7y8GwbSwaOpmAnv26HXX74.png?width=960&crop=smart&auto=webp&s=841081d14d0dd6ab2aeff9acbb600ee661d399f6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Y7Y9gd-LFMslfMPUgEuWR7y8GwbSwaOpmAnv26HXX74.png?width=1080&crop=smart&auto=webp&s=3f154a910ea971890c58591b59555af414a78b2e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Y7Y9gd-LFMslfMPUgEuWR7y8GwbSwaOpmAnv26HXX74.png?auto=webp&s=92221c1837c537adc2386c7e794accdc19e12377', 'width': 1200}, 'variants': {}}]}
Speeding up models on a 3090 am I doing it right?
7
So, I'm trying to get the most from the 24GB VRAM my baby offers. Been running 20-30GB Q4-Q8 models at around 2.5 tok/s at first, then started fiddling with the settings and managed to increase the speed to around 20-30 tok/s via:

* Maxing out GPU offloading
* Offloading the KV cache
* Quantizing the KV cache to Q8
* Enabling Flash Attention

Would using speculative decoding, increasing CPU threads, or any other setting boost performance further? And does anything I already fiddled with negatively impact performance in any way?

Have around 50GB total RAM in that machine, so I'm trying to get the most possible from that. Would it be worth booting Linux for this, btw? Using LM Studio atm.
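To make the KV-cache trade-off concrete, here's a back-of-the-envelope calculator; the layer/head numbers below are illustrative assumptions, not any specific model:

    def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_tokens, bytes_per_elem):
        # K and V each store (n_kv_heads * head_dim) values per layer per token.
        return 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * bytes_per_elem

    # Illustrative dims roughly in the ballpark of a 30B-class model with GQA (assumption).
    layers, kv_heads, head_dim, ctx = 48, 8, 128, 32768

    for name, size in [("fp16", 2), ("q8_0 (~1 byte/elem)", 1)]:
        gib = kv_cache_bytes(layers, kv_heads, head_dim, ctx, size) / 1024**3
        print(f"{name:20s} KV cache @ {ctx} ctx ≈ {gib:.1f} GiB")

With those made-up dims, fp16 comes out around 6 GiB at 32k context and Q8 around 3 GiB, which is why quantizing the cache buys you either more context or more layers on the GPU.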
2025-10-15T20:57:30
https://www.reddit.com/r/LocalLLaMA/comments/1o7mzu4/speeding_up_models_on_a_3090_am_i_doing_it_right/
ReasonablePossum_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7mzu4
false
null
t3_1o7mzu4
/r/LocalLLaMA/comments/1o7mzu4/speeding_up_models_on_a_3090_am_i_doing_it_right/
false
false
self
7
null
HRM
4
Can someone help me? I am a beginner in AI and programming, and I want to know how to correctly use HRM and integrate it into projects, but the information I find is basic. If someone can help me, I would greatly appreciate it.
2025-10-15T20:56:34
https://www.reddit.com/r/LocalLLaMA/comments/1o7myyu/hrm/
Glum-Insurance-3674
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7myyu
false
null
t3_1o7myyu
/r/LocalLLaMA/comments/1o7myyu/hrm/
false
false
self
4
null
Just ordered new 3090 TI from MicroCenter 🤔
82
2025-10-15T20:39:56
https://i.redd.it/mzozs3957cvf1.jpeg
GravyPoo
i.redd.it
1970-01-01T00:00:00
0
{}
1o7miyx
false
null
t3_1o7miyx
/r/LocalLLaMA/comments/1o7miyx/just_ordered_new_3090_ti_from_microcenter/
false
false
https://b.thumbs.redditm…AGbDnAPSRcLk.jpg
82
{'enabled': True, 'images': [{'id': 'M94YSTszMzzNk_BY4_xK3BlvzghzrbhgI4-_Eb4ZWks', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/mzozs3957cvf1.jpeg?width=108&crop=smart&auto=webp&s=66226a4db54430c54a7eb7ef18334afea851b65d', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/mzozs3957cvf1.jpeg?width=216&crop=smart&auto=webp&s=ec13ad819ff2059bc554a7e72d358b2cefec12b4', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/mzozs3957cvf1.jpeg?width=320&crop=smart&auto=webp&s=d7c99562dd615569df735395aba229c74faed480', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/mzozs3957cvf1.jpeg?width=640&crop=smart&auto=webp&s=eb5a83e22f624acd437f0414ec334d5a460f063d', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/mzozs3957cvf1.jpeg?width=960&crop=smart&auto=webp&s=b87e9e9c52a9e23c02ebd709a76ca6a82e677ed1', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/mzozs3957cvf1.jpeg?width=1080&crop=smart&auto=webp&s=0bf451ab69b9c2f70e48bfac3e1a693a57846bb9', 'width': 1080}], 'source': {'height': 2868, 'url': 'https://preview.redd.it/mzozs3957cvf1.jpeg?auto=webp&s=c7c1ec39cbadbbd11c45bf54bf86ccaa908a76d5', 'width': 1320}, 'variants': {}}]}
Google & Yale release C2S Scale, a Gemma-based model for cell analysis
110
Hi! This is Omar, from the Gemma team. I'm super excited to share this research based on Gemma.

Today, we're releasing a 27B model for single-cell analysis. This model generated hypotheses about how cancer cells behave, and we were able to confirm the predictions with experimental validation in living cells. This reveals a promising new pathway for developing therapies to fight cancer.

These applications of open models to medical use cases are super exciting for me. It's one of many examples of how open models can change the world.

Model: [https://huggingface.co/vandijklab/C2S-Scale-Gemma-2-27B](https://huggingface.co/vandijklab/C2S-Scale-Gemma-2-27B)

Paper: [https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2](https://www.biorxiv.org/content/10.1101/2025.04.14.648850v2)

Blog: [https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/](https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/)
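If you want to poke at it locally, the usual transformers loading path should apply. This is only a sketch that assumes the checkpoint behaves like a standard Gemma-2 causal LM; the prompt is left out on purpose, so check the model card for the expected cell-sentence format:

    # Sketch only: assumes the checkpoint loads as a regular Gemma-2 causal LM.
    # A 27B model needs serious VRAM; quantized loading or multi-GPU may be required.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "vandijklab/C2S-Scale-Gemma-2-27B"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    prompt = "..."  # see the model card for the expected cell-sentence prompt format
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    print(tok.decode(out[0], skip_special_tokens=True))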
2025-10-15T20:38:17
https://www.reddit.com/r/LocalLLaMA/comments/1o7mhf5/google_yale_release_c2s_scale_a_gemmabased_model/
hackerllama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7mhf5
false
null
t3_1o7mhf5
/r/LocalLLaMA/comments/1o7mhf5/google_yale_release_c2s_scale_a_gemmabased_model/
false
false
self
110
{'enabled': False, 'images': [{'id': 'YdZtzimsmlw7kWJ6qWR_8oLps4j9eHlzCiwrrwSgK5o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YdZtzimsmlw7kWJ6qWR_8oLps4j9eHlzCiwrrwSgK5o.png?width=108&crop=smart&auto=webp&s=ae13c5d5ad592c25a057cf085ea279de7374065a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YdZtzimsmlw7kWJ6qWR_8oLps4j9eHlzCiwrrwSgK5o.png?width=216&crop=smart&auto=webp&s=ba96abfb26927c27ea13597e891132bc0efeab0f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YdZtzimsmlw7kWJ6qWR_8oLps4j9eHlzCiwrrwSgK5o.png?width=320&crop=smart&auto=webp&s=dc8c6ff801e17fce0cf5ab6be5c015f303e0471e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YdZtzimsmlw7kWJ6qWR_8oLps4j9eHlzCiwrrwSgK5o.png?width=640&crop=smart&auto=webp&s=6dbc01673601ec626be760d9bf5498420cf3c150', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YdZtzimsmlw7kWJ6qWR_8oLps4j9eHlzCiwrrwSgK5o.png?width=960&crop=smart&auto=webp&s=670493e578b61f088a8a6cdce72538f6e662035d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YdZtzimsmlw7kWJ6qWR_8oLps4j9eHlzCiwrrwSgK5o.png?width=1080&crop=smart&auto=webp&s=f485dfaac5df871cbe3a3dd75a2a12550155df03', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YdZtzimsmlw7kWJ6qWR_8oLps4j9eHlzCiwrrwSgK5o.png?auto=webp&s=46da15a83f9af28e25b401064384e4a61b2ccd2e', 'width': 1200}, 'variants': {}}]}
DotsOCR + Observer is a really powerful combo!
4
TLDR: Observer pairs really well with DotsOCR! The combo of a **screen transcriber + a document manager** is a very powerful pattern that could be used for creating SOPs, tracking screen activity and documenting it, tracking and summarizing reports on screen, etc.

Hey r/LocalLLaMA! I was playing around with the DotsOCR model and I found it super useful for extracting text on screen and keeping track of that text while being contextually aware. I've tried:

* **Onscreen Document Tracker** (shown in the video): a DotsOCR agent just transcribes everything and passes the whole transcription to another model that organizes all the information into a neat document.
* **Maths Procedure Writer**: Same stuff; a DotsOCR model transcribes maths shown on screen, and another agent makes a LaTeX doc with the complete procedure. (DotsOCR was having a hard time reading my bad handwriting hahaha.)
* **Smart Screen Tracker:** A DotsOCR model dumps the whole screen and another model receives this and makes a quick summary of what you're doing based on the context.

What other use cases can you guys think of that this combo could be useful for? These multi-agent workflows work very well for this specific task of "one model only watches" + "another model keeps a stateful document". Obviously small local models are a bit dumb, but given the right context and setup they can work their magic ✨

If you guys are running DotsOCR already, or maybe even MinerU (although I couldn't set up MinerU directly with Observer), give it a try and tell me how it went!

Observer is free, open source and self-hostable: [https://github.com/Roy3838/Observer](https://github.com/Roy3838/Observer)

I'll hang out here in the comments if you guys have any suggestions to see if we can implement them!
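If you want to prototype the "stateful document keeper" half of this outside Observer, a rough sketch against a generic OpenAI-compatible local endpoint looks like this; the endpoint and model name are placeholders, and the transcripts would come from your OCR agent:

    import requests

    API = "http://localhost:8080/v1/chat/completions"  # placeholder: any OpenAI-compatible local server

    def update_document(document: str, transcript: str) -> str:
        """Merge a fresh screen transcript into the running document."""
        r = requests.post(API, json={
            "model": "local",  # placeholder model name
            "messages": [
                {"role": "system", "content":
                 "You maintain one tidy document. Merge the new transcript into it, "
                 "keep the existing structure, and drop duplicated information."},
                {"role": "user", "content":
                 f"Current document:\n{document}\n\nNew transcript:\n{transcript}"},
            ],
        }, timeout=300)
        return r.json()["choices"][0]["message"]["content"]

    document = ""
    # In practice these strings come from the screen-watching OCR agent.
    for transcript in ["Meeting notes: Q3 roadmap...", "Meeting notes: Q3 roadmap, added owners..."]:
        document = update_document(document, transcript)
    print(document)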
2025-10-15T20:14:55
https://v.redd.it/il9fo3mt9bvf1
Roy3838
/r/LocalLLaMA/comments/1o7lv99/dotsocr_observer_is_a_really_powerful_combo/
1970-01-01T00:00:00
0
{}
1o7lv99
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/il9fo3mt9bvf1/DASHPlaylist.mpd?a=1763280898%2CNGM0ZDU0MjdiOTVlYmQ2YjNkMmYyNThmOTBhNDRiZmEyZWMyNmFiY2YzN2VhMzcxMzExNmU5MWQ4YTcyZGNmMg%3D%3D&v=1&f=sd', 'duration': 115, 'fallback_url': 'https://v.redd.it/il9fo3mt9bvf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/il9fo3mt9bvf1/HLSPlaylist.m3u8?a=1763280898%2CNTBjZDE5NzhmMzRmOGEwN2E1ZjA5ZTM2NDZmMzNhYjA1ZTQ1YTI5YzA0YmNkMTk3NWQxZDg0YTY1ZTdiOTQwZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/il9fo3mt9bvf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
t3_1o7lv99
/r/LocalLLaMA/comments/1o7lv99/dotsocr_observer_is_a_really_powerful_combo/
false
false
https://external-preview…0576d2d603ee03a9
4
{'enabled': False, 'images': [{'id': 'NnhjYnozbXQ5YnZmMeqNn1nvmreNQ2q5ZE7l9UA87utDu-IRziEHUZ7x_zlu', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/NnhjYnozbXQ5YnZmMeqNn1nvmreNQ2q5ZE7l9UA87utDu-IRziEHUZ7x_zlu.png?width=108&crop=smart&format=pjpg&auto=webp&s=1922e0a30ae3c536d7be56115e0c1bb204b4cb7b', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/NnhjYnozbXQ5YnZmMeqNn1nvmreNQ2q5ZE7l9UA87utDu-IRziEHUZ7x_zlu.png?width=216&crop=smart&format=pjpg&auto=webp&s=241f25334f51c05b6edfcb3cb6b5c16ecf6084c9', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/NnhjYnozbXQ5YnZmMeqNn1nvmreNQ2q5ZE7l9UA87utDu-IRziEHUZ7x_zlu.png?width=320&crop=smart&format=pjpg&auto=webp&s=7d5307fc47fb375ceb7afe8b324d9b3ece65b8aa', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/NnhjYnozbXQ5YnZmMeqNn1nvmreNQ2q5ZE7l9UA87utDu-IRziEHUZ7x_zlu.png?width=640&crop=smart&format=pjpg&auto=webp&s=d1a8dac99ed1bd3f711e5b84593864105f6d8df7', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/NnhjYnozbXQ5YnZmMeqNn1nvmreNQ2q5ZE7l9UA87utDu-IRziEHUZ7x_zlu.png?width=960&crop=smart&format=pjpg&auto=webp&s=d740ba11d6368b7fe4a8d966068b7969167d7b40', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/NnhjYnozbXQ5YnZmMeqNn1nvmreNQ2q5ZE7l9UA87utDu-IRziEHUZ7x_zlu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=38fe673cd84751b6eb34c12aba88e8741445b316', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NnhjYnozbXQ5YnZmMeqNn1nvmreNQ2q5ZE7l9UA87utDu-IRziEHUZ7x_zlu.png?format=pjpg&auto=webp&s=b1da08a3601cd40ba4d7d7967cdaca2cc81c0a6f', 'width': 1728}, 'variants': {}}]}
Which local model would be best for classifying text as a "Yes" or "No" only answer on whether the text is political in nature or not?
2
I need help identifying whether news headlines are political in nature or not. I don't need the thinking/reasoning, I only need a Yes or No answer. All headlines will be in English. The model needs to run on an M4 Mac mini with 32GB of RAM. Which models would you recommend for this?

Originally, I tested the built-in Foundation Models by Apple but kept hitting their guardrails. So I switched to the qwen3_4b_4bit model and it seems pretty decent, except for a few times when it misses. Any other models you would recommend for this task?
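For reference, this is roughly how the Yes/No call can be wired against a local OpenAI-compatible endpoint (LM Studio exposes one on port 1234 by default); the model name and prompt here are just a sketch:

    import requests

    API = "http://localhost:1234/v1/chat/completions"  # LM Studio's default local server

    def is_political(headline: str) -> bool:
        r = requests.post(API, json={
            "model": "qwen3-4b",   # placeholder: whatever model is loaded
            "temperature": 0,
            "max_tokens": 3,       # we only want "Yes" or "No"
            "messages": [
                {"role": "system", "content":
                 "Classify whether a news headline is political in nature. "
                 "Answer with exactly one word: Yes or No."},
                {"role": "user", "content": headline},
            ],
        }, timeout=60)
        answer = r.json()["choices"][0]["message"]["content"].strip().lower()
        return answer.startswith("yes")

    print(is_political("Senate passes new budget bill"))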
2025-10-15T20:09:58
https://www.reddit.com/r/LocalLLaMA/comments/1o7lqj4/which_local_model_would_be_best_for_classifying/
busymom0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7lqj4
false
null
t3_1o7lqj4
/r/LocalLLaMA/comments/1o7lqj4/which_local_model_would_be_best_for_classifying/
false
false
self
2
null
Good alternatives to Lmstudio?
13
For context, I’ve been using LM Studio for a while, simply because it is a very comfortable interface with great capabilities as both a front end and a back end. However, the fact that it’s not fully open source bugs me a little. Are there good alternatives that capture the same vibe, with a nice UI and customization for the AI?
2025-10-15T20:06:11
https://www.reddit.com/r/LocalLLaMA/comments/1o7lmyx/good_alternatives_to_lmstudio/
a_normal_user1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7lmyx
false
null
t3_1o7lmyx
/r/LocalLLaMA/comments/1o7lmyx/good_alternatives_to_lmstudio/
false
false
self
13
null
Why do LMs split text from right to left?
2
I've been trying the gpu-poor LM arena, and now also 30B Qwen, and saw the same behavior on this very easy task:

*split this to pairs 325314678536*

Factually I got a correct answer, but not the one most of us would expect:

https://preview.redd.it/exmn5tgw0cvf1.png?width=914&format=png&auto=webp&s=a4dd16ecf9937ab7aef001d0a97df607c19d226b

Why?
2025-10-15T20:05:33
https://www.reddit.com/r/LocalLLaMA/comments/1o7lmd6/why_do_lms_split_text_from_right_to_left/
uhuge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7lmd6
false
null
t3_1o7lmd6
/r/LocalLLaMA/comments/1o7lmd6/why_do_lms_split_text_from_right_to_left/
false
false
https://b.thumbs.redditm…YwQBaebV8vWc.jpg
2
null
Llamacpp Model Loader GUI for noobs
47
Hello everyone, I a noob at this LLM stuff and recently switched from LM Studio/Ollama to llamacpp and loving it so far as far as speed/performance. One thing I dislike is how tedious it is to modify and play around with the parameters and using command line so I vibe coded some python code using Gemini 2.5 Pro for something easier to mess around with. I attached the code, sample model files and commands. I am using window 10 FYI. I had Gemini gen up some doc as am not much of a writer so here it is: \## Llama.cpp Model Launcher: User Guide \### 1. Introduction Welcome to the Llama.cpp Model Launcher! This application provides a clean, powerful, and user-friendly graphical interface (GUI) for the \`llama-server.exe\` tool from the Llama.cpp project. Its purpose is to replace the tedious and error-prone process of typing long commands into a terminal. With this launcher, you can manage, edit, and run all your language models with the point-and-click simplicity of a modern desktop application. \### 2. First-Time Setup Before you can launch a model, you need to tell the application where to find two key items. This is a one-time setup, and your choices will be saved for future sessions. 1. \*\*Set the Llama.cpp Directory\*\*: \* Click the \*\*Browse...\*\* button next to the "Llama.cpp Directory" label. \* Navigate to and select the folder that contains your \`llama-server.exe\` file. \* The application will verify that \`llama-server.exe\` exists in the selected folder. 2. \*\*Set the Models File\*\*: \* Click the \*\*Browse...\*\* button next to the "Models File" label. \* Select the \`.txt\` file that contains your model launch commands. \* \*\*File Format\*\*: This text file must be structured with a model name on one line, followed immediately by its full launch command on the next line. For example: \`\`\` Llama-3.3-70B-Instruct-UD-IQ3\_XXS llama-server.exe -m D:\\lm\_studio\\unsloth\\Llama-3.3-70B-Instruct-GGUF\\Llama-3.3-70B-Instruct-UD-IQ3\_XXS.gguf -md D:\\lm\_studio\\lmstudio-community\\Llama-3.2-1B-Instruct-GGUF\\Llama-3.2-1B-Instruct-Q4\_K\_M.gguf --jinja -c 13000 -ngld 99 -ngl 99 -fa on --temp 0.8 --top-k 40 --top-p 0.95 --min-p 0.05 --repeat-penalty 1.1 --cache-type-k q8\_0 --cache-type-v q8\_0 --cache-type-k-draft q8\_0 --cache-type-v-draft q8\_0 --no-mmap -ts 63/18 -t 8 --device-draft CUDA0 --main-gpu 0 --no-warmup --override-tensor token\_embd.weight=CUDA0 gpt-oss-20b-MXFP4 llama-server.exe -m D:\\lm\_studio\\lmstudio-community\\gpt-oss-20b-GGUF\\gpt-oss-20b-MXFP4.gguf --jinja -c 131000 -ngl 999 -fa on --temp 1.0 --top-k 100 --top-p 1.0 --min-p 0.05 --repeat-penalty 1.1 --no-mmap --split-mode none --main-gpu 0 --chat-template-kwargs "{\\"reasoning\_effort\\": \\"medium\\"}" --no-warmup --parallel 1 --ubatch-size 4096 --batch-size 8192 \`\`\` Once both paths are set, the \*\*Model Selection\*\* dropdown menu will automatically populate with the names from your text file. \### 3. The Main Interface The application is divided into two main panels. \#### Left Panel: Main Control & Display This is where you select and control the model server. \* \*\*Model Selection Dropdown\*\*: Choose the model configuration you wish to load. \* \*\*Web UI Options\*\*: \* \`Enable Web UI\`: Keep this checked to run the standard web server. Unchecking it adds the \`--no-webui\` flag to the command. \* \`Auto-Open Web UI\`: If checked, your web browser will automatically open to the server's page (\`http://localhost:8080\`) a single time after the model successfully loads. 
It will then be unchecked so no more pages will open UNLESS you re-check it or relaunch the app. \* \*\*Process Control Buttons\*\*: \* \*\*Load Model\*\*: Builds the final command from the editor and starts the \`llama-server.exe\` process. \* \*\*Unload Model\*\*: Forcefully stops the server process. \* \*\*Exit\*\*: Stops any running server and closes the application. \* \*\*Status Indicator\*\*: A colored dot gives you an at-a-glance view of the server's state: \* \*\*Red (Unloaded)\*\*: The server is not running. \* \*\*Yellow (Loading...)\*\*: The server has started and is loading the model into memory. \* \*\*Green (Loaded)\*\*: The model is successfully loaded and ready to accept requests. \* \*\*Red (Error)\*\*: The server process terminated unexpectedly (e.g., due to a bad parameter). \* \*\*Output / Commands View\*\*: \* The main text area shows the \*\*live output\*\* from the server by default, which is useful for monitoring loading progress and API requests. \* Click the \*\*Commands\*\* button to switch the view. It will load and display the content of a text file named \*\*\`models\_commands.txt\`\*\*, which should be located in the same directory as the text file you selected for your "Models File". This reference file contains a comprehensive list of commands from the Llama.cpp project and can be customized with your own notes. Click \*\*Show Output\*\* to return to the live log. \#### Right Panel: Configuration Editor This is where you can view and modify all aspects of the selected model's configuration. \* \*\*Model Name\*\*: An editable field for the display name that appears in the dropdown. \* \*\*Parameter Editor\*\*: A dynamic list of all parameters for the selected command. \* Flags (like \`--no-mmap\`) are shown as \*\*checkboxes\*\*. Uncheck to disable them. \* Parameters with values (like \`-c 4096\`) are shown as \*\*text fields\*\*. \* \*\*Add New Parameter\*\*: Allows you to add any valid Llama.cpp parameter to the current configuration on the fly. \* \*\*Action Buttons\*\*: \* \*\*Add Model\*\*: Prepares the editor for a new model configuration using a default template. \* \*\*Delete Model\*\*: Deletes the currently selected model from your \`.txt\` file. \* \*\*Reset\*\*: Discards any changes made in the editor and reloads the parameters for the selected model that is CURRENTLY saved in models text file. \* \*\*Save to File\*\*: Permanently saves all changes (name and parameters) to your \`.txt\` file. \### 4. Core Workflow: Launching a Model 1. \*\*Select a Model\*\*: Choose a model from the dropdown menu on the left. 2. \*\*Tune Parameters\*\*: The model's parameters will appear in the editor on the right. You can: \* Change the context size by editing the value for the \`-c\` parameter. \* Enable or disable a flag by checking or unchecking its box. \* Remove a parameter by clicking the \*\*"X"\*\* button next to it. \* Add a new one, like \`--temp 0.7\`, using the "Add New Parameter" section. 3. \*\*Launch the Model\*\*: Click the \*\*Load Model\*\* button. 4. \*\*Monitor Progress\*\*: The Status Indicator will turn yellow, and the Output View will show the real-time log from Llama.cpp as it loads the model. 5. \*\*Use the Model\*\*: Once the log indicates the server is "listening" and the Status Indicator turns green, the model is ready. If "Auto-Open Web UI" was checked, a browser tab will open automatically. 6. \*\*Shut Down\*\*: When finished, click \*\*Unload Model\*\* to stop the server or \*\*Exit\*\* to close the application. \### 5. 
Managing Your Models (alway keep a backup of Model File..just in case) The launcher's configuration management features allow you to modify your model library without manually editing text files. \#### To Edit an Existing Model: 1. Select the model from the dropdown. 2. In the right panel, change the \*\*Model Name\*\* and/or any of its \*\*parameters\*\*. 3. Click \*\*Save to File\*\*. The old name and command will be updated in your \`.txt\` file. \#### To Add a New Model: 1. Click the \*\*Add Model\*\* button. 2. The editor will be populated with a default template and a unique name (e.g., "New Model 1"). 3. \*\*Crucially, edit the \`-m\` parameter\*\* to point to the correct \`.gguf\` file path for your new model. 4. Change the \*\*Model Name\*\* to something descriptive. 5. Adjust other parameters as needed. 6. Click \*\*Save to File\*\*. The new configuration will be added to the end of your \`.txt\` file. \#### To Delete a Model: 1. Select the model you wish to remove from the dropdown. 2. Click the \*\*Delete Model\*\* button. 3. A confirmation dialog will appear. Click \*\*Yes\*\* to proceed. 4. The model's name and its command will be permanently removed from your \`.txt\` file. \### 6. Running the Application There are two primary ways to run this application: \#### Method 1: Run from Python Source This method is ideal for developers or users who have Python installed and are comfortable with a code editor. 1. \*\*Install Dependencies\*\*: The application requires the PyQt6 library. Install it using pip: \`\`\`bash pip install PyQt6 \`\`\` 2. \*\*Run the Script\*\*: Save the application code as a Python file (e.g., \`launcher.py\`) and run it from your terminal or preferred code editor. \#### Method 2: Compile to a Standalone Executable (.exe) This method packages the application into a single \`.exe\` file that can be run on any Windows machine without needing Python installed. 1. \*\*Install PyInstaller\*\*: This module handles the compilation process. Install it using pip: \`\`\`bash pip install pyinstaller \`\`\` 2. \*\*Run the Command\*\*: Open a terminal in the directory where you saved the Python script. Run the following command: \`\`\`bash pyinstaller --onefile --windowed --icon=C:\\path\\to\\your\\icon.ico your\_script\_name.py \`\`\` \* \`--onefile\`: Packages everything into a single executable file. \* \`--windowed\`: Prevents a console window from appearing when you run the app. \* \`--icon\`: (Optional) Sets a custom icon for the executable. You can omit this flag if you don't have an \`.ico\` file. After the command completes, you will find your standalone \`.exe\` file inside a new \`dist\` folder. code: [https://drive.google.com/file/d/1NWU1Kp\_uVLmhErqgaSv5pGHwqy5BUUdp/view?usp=drive\_link](https://drive.google.com/file/d/1NWU1Kp_uVLmhErqgaSv5pGHwqy5BUUdp/view?usp=drive_link) help\_file: [https://drive.google.com/file/d/1556aMxnNxoaZFzJyAw\_ZDgfwkrkK7kTP/view?usp=drive\_link](https://drive.google.com/file/d/1556aMxnNxoaZFzJyAw_ZDgfwkrkK7kTP/view?usp=drive_link) sample\_moldel\_commands: [https://drive.google.com/file/d/1ksDD1wcEA27LCVqTOnQrzU9yZe1iWjd\_/view?usp=drive\_link](https://drive.google.com/file/d/1ksDD1wcEA27LCVqTOnQrzU9yZe1iWjd_/view?usp=drive_link) Hope someone find it useful Cheers
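If you only want the models-file convention without the GUI, the format is just name/command pairs; a minimal standalone parser looks something like this (a sketch of the same format, not the launcher's actual code):

    from pathlib import Path

    def load_models(path: str) -> dict[str, str]:
        """Parse the models file: a model name on one line, its full launch command on the next."""
        lines = [ln.strip() for ln in Path(path).read_text(encoding="utf-8").splitlines() if ln.strip()]
        if len(lines) % 2:
            raise ValueError("models file must contain name/command pairs")
        return {lines[i]: lines[i + 1] for i in range(0, len(lines), 2)}

    models = load_models("models.txt")
    for name, cmd in models.items():
        print(f"{name}\n  -> {cmd[:80]}...")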
2025-10-15T20:01:21
https://i.redd.it/msr7wyiwxbvf1.png
CabinetNational3461
i.redd.it
1970-01-01T00:00:00
0
{}
1o7liam
false
null
t3_1o7liam
/r/LocalLLaMA/comments/1o7liam/llamacpp_model_loader_gui_for_noobs/
false
false
default
47
{'enabled': True, 'images': [{'id': 'msr7wyiwxbvf1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/msr7wyiwxbvf1.png?width=108&crop=smart&auto=webp&s=5757881b494947d6d19e8dc424787e4121d34ff0', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/msr7wyiwxbvf1.png?width=216&crop=smart&auto=webp&s=4645885f28c60162e7d8c66eb0248330c85efe93', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/msr7wyiwxbvf1.png?width=320&crop=smart&auto=webp&s=2caf3372217aeb6c445f3a2aecd19fa446792850', 'width': 320}, {'height': 437, 'url': 'https://preview.redd.it/msr7wyiwxbvf1.png?width=640&crop=smart&auto=webp&s=0cf3cb84527273f0b3b22fdcb8c887bfed231273', 'width': 640}, {'height': 656, 'url': 'https://preview.redd.it/msr7wyiwxbvf1.png?width=960&crop=smart&auto=webp&s=2278942f40075f3d2b7b201023791f4cb8e766a2', 'width': 960}, {'height': 738, 'url': 'https://preview.redd.it/msr7wyiwxbvf1.png?width=1080&crop=smart&auto=webp&s=7d924d4aaf6f105e8e8df3b5b2568171a0dfe1ac', 'width': 1080}], 'source': {'height': 991, 'url': 'https://preview.redd.it/msr7wyiwxbvf1.png?auto=webp&s=3800b67093025b9de2de0ffb4141d6714ecb15cc', 'width': 1449}, 'variants': {}}]}
LM Studio and VL models
30
LM Studio currently downsizes images for VL inference, which can significantly hurt OCR performance. v0.3.6 release notes: **"Added image auto-resizing for vision model inputs, hardcoded to 500px width while keeping the aspect ratio."** [https://lmstudio.ai/blog/lmstudio-v0.3.6](https://lmstudio.ai/blog/lmstudio-v0.3.6) Related GitHub reports: [https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/941](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/941) [https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/880](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/880) [https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/967](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/967) [https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/990](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/990) If your image is a dense page of text and the VL model seems to underperform, LM Studio preprocessing is likely the culprit. Consider using a different app.
2025-10-15T19:43:32
https://www.reddit.com/r/LocalLLaMA/comments/1o7l1io/lm_studio_and_vl_models/
egomarker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7l1io
false
null
t3_1o7l1io
/r/LocalLLaMA/comments/1o7l1io/lm_studio_and_vl_models/
false
false
self
30
{'enabled': False, 'images': [{'id': 'H1-9r8IP-tuJi6LfHVLfr-KSIEMxZDwy43QM1UrfNFo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/H1-9r8IP-tuJi6LfHVLfr-KSIEMxZDwy43QM1UrfNFo.png?width=108&crop=smart&auto=webp&s=1cb05a5d7fbf61372a4ac6ed44355981f7b1ba6d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/H1-9r8IP-tuJi6LfHVLfr-KSIEMxZDwy43QM1UrfNFo.png?width=216&crop=smart&auto=webp&s=deef0468269b40ce792cd4bce69f05c848583762', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/H1-9r8IP-tuJi6LfHVLfr-KSIEMxZDwy43QM1UrfNFo.png?width=320&crop=smart&auto=webp&s=59b4eed65f8bfa3823f98d11db0e26275d6eaeab', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/H1-9r8IP-tuJi6LfHVLfr-KSIEMxZDwy43QM1UrfNFo.png?width=640&crop=smart&auto=webp&s=0203843a05f4f5692e13f0d4806af7c76123b122', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/H1-9r8IP-tuJi6LfHVLfr-KSIEMxZDwy43QM1UrfNFo.png?width=960&crop=smart&auto=webp&s=50fdc48b7291d5a5efd8d2f52914e2ad6f1d6e8b', 'width': 960}], 'source': {'height': 524, 'url': 'https://external-preview.redd.it/H1-9r8IP-tuJi6LfHVLfr-KSIEMxZDwy43QM1UrfNFo.png?auto=webp&s=bb10991b649b3ded4d69e2bc98fe5e9877c0d09d', 'width': 1000}, 'variants': {}}]}
Sharing my local voice-to-text setup on Apple Silicon (with fallback cascade)
2
Press hotkey → speak → press again → text appears. 0.3-1.5 seconds. First time making this shareable. This is my personal workflow I've been using. Multi-language (Turkish & English). Privacy-first with smart cloud fallback. ## Usage - `Alt+A` - Turkish - `Alt+Shift+A` - English - `ESC` - Cancel ## Flow ``` Alt+A (Turkish) / Alt+Shift+A (English) ↓ Record → Visual indicator (◉ REC TR/EN) ↓ Press again to stop ↓ Save to ~/Recordings/YYYYMMDD_HHMMSS_mmm.wav ↓ ┌─────────────────────────────┐ │ Local GPU Processing │ ├─────────────────────────────┤ │ Parakeet (EN only) ~0.3s │ │ ↓ (fail or Turkish) │ │ Whisper MLX (TR/EN) ~1.5s │ │ ↓ (optional cloud) │ │ ElevenLabs/OpenAI ~2-3s │ └─────────────────────────────┘ ↓ Text pastes to active app + space ↓ Old recordings cleaned up (30+ min) ``` ## Setup ```bash bash <(curl -fsSL https://raw.githubusercontent.com/yemreak/hammerspoon-dictation/main/scripts/install.sh) ``` Automated. 5 minutes. Asks your preference (English-only vs multilingual). Installs: - Parakeet MLX / Whisper MLX (local GPU models) - PM2 services - Hammerspoon config - Dependencies (Bun, PM2) **Issues?** Open a GitHub issue: [github.com/yemreak/hammerspoon-dictation/issues](https://github.com/yemreak/hammerspoon-dictation/issues) **For code details**: [github.com/yemreak/hammerspoon-dictation](https://github.com/yemreak/hammerspoon-dictation) **For Turkish**: [docs.yemreak.com/terminal-cli-otomasyonlari/hammerspoon-dictation](https://docs.yemreak.com/terminal-cli-otomasyonlari/hammerspoon-dictation)
2025-10-15T19:35:12
https://www.reddit.com/r/LocalLLaMA/comments/1o7ktur/sharing_my_local_voicetotext_setup_on_apple/
_yemreak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7ktur
false
null
t3_1o7ktur
/r/LocalLLaMA/comments/1o7ktur/sharing_my_local_voicetotext_setup_on_apple/
false
false
self
2
null
Poor GPU Club : 8GB VRAM - MOE models' t/s with llama.cpp
35
Continuation to [my previous thread](https://www.reddit.com/r/LocalLLaMA/comments/1nyxmci/poor_gpu_club_8gb_vram_qwen330ba3b_gptoss20b_ts/). This time I got better pp numbers with tg because of additional parameters. Tried with latest llama.cpp. ^(My System Info: ()**^(8GB VRAM & 32GB RAM)**^()) ^(Intel(R) Core(TM) i7-14700HX 2.10 GHz | 32 GB RAM | 64-bit OS, x64-based processor | NVIDIA GeForce RTX 4060 Laptop GPU |) **^(Cores - 20 | Logical Processors - 28)**^(.) **Qwen3-30B-A3B-UD-Q4\_K\_XL - 33 t/s** llama-bench -m E:\LLM\models\Qwen3-30B-A3B-UD-Q4_K_XL.gguf -ngl 99 -ncmoe 29 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | qwen3moe 30B.A3B Q4_K - Medium | 16.49 GiB | 30.53 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 160.45 ± 18.06 | | qwen3moe 30B.A3B Q4_K - Medium | 16.49 GiB | 30.53 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 33.73 ± 0.74 | **gpt-oss-20b-mxfp4 - 42 t/s** llama-bench -m E:\LLM\models\gpt-oss-20b-mxfp4.gguf -ngl 99 -ncmoe 10 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 823.93 ± 109.69 | | gpt-oss 20B MXFP4 MoE | 11.27 GiB | 20.91 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 42.06 ± 0.56 | **Ling-lite-1.5-2507.i1-Q6\_K - 34 t/s** llama-bench -m E:\LLM\models\Ling-lite-1.5-2507.i1-Q6_K.gguf -ngl 99 -ncmoe 15 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | bailingmoe 16B Q6_K | 14.01 GiB | 16.80 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 585.52 ± 18.03 | | bailingmoe 16B Q6_K | 14.01 GiB | 16.80 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 34.38 ± 1.54 | **Ling-lite-1.5-2507.i1-Q5\_K\_M - 50 t/s** llama-bench -m E:\LLM\models\Ling-lite-1.5-2507.i1-Q5_K_M.gguf -ngl 99 -ncmoe 12 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | bailingmoe 16B Q5_K - Medium | 11.87 GiB | 16.80 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 183.79 ± 16.55 | | bailingmoe 16B Q5_K - Medium | 11.87 GiB | 16.80 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 50.03 ± 0.46 | **Ling-Coder-lite.i1-Q6\_K - 35 t/s** llama-bench -m E:\LLM\models\Ling-Coder-lite.i1-Q6_K.gguf -ngl 99 -ncmoe 15 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | bailingmoe 16B Q6_K | 14.01 GiB | 16.80 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 470.17 ± 113.93 | | bailingmoe 16B Q6_K | 14.01 GiB | 16.80 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | 
tg128 | 35.05 ± 3.33 | **Ling-Coder-lite.i1-Q5\_K\_M - 47 t/s** llama-bench -m E:\LLM\models\Ling-Coder-lite.i1-Q5_K_M.gguf -ngl 99 -ncmoe 14 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | bailingmoe 16B Q5_K - Medium | 11.87 GiB | 16.80 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 593.95 ± 91.55 | | bailingmoe 16B Q5_K - Medium | 11.87 GiB | 16.80 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 47.39 ± 0.68 | **SmallThinker-21B-A3B-Instruct-QAT.Q4\_K\_M - 34 t/s** llama-bench -m E:\LLM\models\SmallThinker-21B-A3B-Instruct-QAT.Q4_K_M.gguf -ngl 99 -ncmoe 27 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | smallthinker 20B Q4_K - Medium | 12.18 GiB | 21.51 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 512.92 ± 109.33 | | smallthinker 20B Q4_K - Medium | 12.18 GiB | 21.51 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 34.75 ± 0.22 | **SmallThinker-21BA3B-Instruct-IQ4\_XS - 38 t/s** llama-bench -m E:\LLM\models\SmallThinker-21BA3B-Instruct-IQ4_XS.gguf -ngl 99 -ncmoe 25 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | smallthinker 20B IQ4_XS - 4.25 bpw | 10.78 GiB | 21.51 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 635.01 ± 105.46 | | smallthinker 20B IQ4_XS - 4.25 bpw | 10.78 GiB | 21.51 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 37.47 ± 0.37 | **ERNIE-4.5-21B-A3B-PT-UD-Q4\_K\_XL - 44 t/s** llama-bench -m E:\LLM\models\ERNIE-4.5-21B-A3B-PT-UD-Q4_K_XL.gguf -ngl 99 -ncmoe 14 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | ernie4_5-moe 21B.A3B Q4_K - Medium | 11.91 GiB | 21.83 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 568.99 ± 134.16 | | ernie4_5-moe 21B.A3B Q4_K - Medium | 11.91 GiB | 21.83 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 44.83 ± 1.72 | **Phi-mini-MoE-instruct-Q8\_0 - 65 t/s** llama-bench -m E:\LLM\models\Phi-mini-MoE-instruct-Q8_0.gguf -ngl 99 -ncmoe 4 -fa 1 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 | model | size | params | backend | ngl | threads | type_k | type_v | fa | test | t/s | | ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | -----: | -----: | -: | --------------: | -------------------: | | phimoe 16x3.8B Q8_0 | 7.58 GiB | 7.65 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | pp512 | 2570.72 ± 48.54 | | phimoe 16x3.8B Q8_0 | 7.58 GiB | 7.65 B | CUDA | 99 | 8 | q8_0 | q8_0 | 1 | tg128 | 65.41 ± 0.19 | I'll be updating this thread whenever I get optimization tips & tricks from others AND I'll be including additional results here with updated commands. Also whenever new MOE models get released. Currently I'm checking bunch more MOE models, I'll add those here in this week. 
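For anyone who wants to run one of these splits as a chat endpoint rather than a benchmark, here is a minimal serving sketch. It assumes a recent llama.cpp build where llama-server accepts `--n-cpu-moe` (the long form of the `-ncmoe` flag used in the benches above) plus the same cache/batch flags; double-check `llama-server --help` on your build, swap the value per model as in the tables, and add flash attention per your build's flag syntax (left out here since it differs between versions).

```
llama-server -m E:\LLM\models\Qwen3-30B-A3B-UD-Q4_K_XL.gguf -ngl 99 --n-cpu-moe 29 -ctk q8_0 -ctv q8_0 -b 2048 -ub 512 -t 8 -c 8192 --port 8080
```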
Thanks.

**Updates:** To be updated

**^(My upcoming threads (planned):)**

* ^(8GB VRAM - Dense models' t/s with llama.cpp)
* ^(8GB VRAM - MOE & Dense models' t/s with llama.cpp - CPU only)
* ^(8GB VRAM - MOE & Dense models' t/s with ik\_llama.cpp (I'm still looking for help on ik\_llama.cpp))
* ^(8GB VRAM - MOE & Dense models' t/s with ik\_llama.cpp - CPU only)
2025-10-15T19:25:03
https://www.reddit.com/r/LocalLLaMA/comments/1o7kkf0/poor_gpu_club_8gb_vram_moe_models_ts_with_llamacpp/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7kkf0
false
null
t3_1o7kkf0
/r/LocalLLaMA/comments/1o7kkf0/poor_gpu_club_8gb_vram_moe_models_ts_with_llamacpp/
false
false
self
35
null
DGX SPARK Compiled llama.cpp Benchmarks Compared to M4 MAX (non-MLX)
23
First, not trying to incite some feud discussion between Nvidia/Apple folks. I don't have either machines and just compiled this for amusement and just so others are aware. NOTE: Models aren't in mlx. If anyone is willing to share, it would be greatly appreciated. This would be really interesting. Also, to any Strix Halo/Ryzen AI Max+ 395 users, if you'd like to compare: llama-bench -m [model.gguf] -fa 1 -d 0,4096,8192,16384,32768 -p 2048 -n 32 -ub 2048 [Source of DGX SPARK data](https://github.com/ggml-org/llama.cpp/discussions/16578) [Source of M4 MAX data](https://nitter.net/richinseattle/status/1978244945845657863) |model|size|params|test|t/s (M4 MAX)|t/s (Spark)|Speedup| |:-|:-|:-|:-|:-|:-|:-| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048|1761.99 ± 78.03|3610.56 ± 15.16|2.049| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32|118.95 ± 0.21|79.74 ± 0.43|0.670| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048 @ d4096|1324.28 ± 46.34|3361.11 ± 12.95|2.538| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32 @ d4096|98.76 ± 5.75|74.63 ± 0.15|0.756| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048 @ d8192|1107.91 ± 11.12|3147.73 ± 15.77|2.841| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32 @ d8192|94.19 ± 1.85|69.49 ± 1.12|0.738| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048 @ d16384|733.77 ± 54.67|2685.54 ± 5.76|3.660| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32 @ d16384|80.68 ± 2.49|64.02 ± 0.72|0.794| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048 @ d32768|518.68 ± 17.73|2055.34 ± 20.43|3.963| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32 @ d32768|69.94 ± 4.19|55.96 ± 0.07|0.800| |||||||| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048|871.16 ± 31.85|1689.47 ± 107.67|1.939| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32|62.85 ± 0.36|52.87 ± 1.70|0.841| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048 @ d4096|643.32 ± 12.00|1733.41 ± 5.19|2.694| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32 @ d4096|56.48 ± 0.72|51.02 ± 0.65|0.903| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048 @ d8192|516.77 ± 7.33|1705.93 ± 7.89|3.301| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32 @ d8192|50.79 ± 1.37|48.46 ± 0.53|0.954| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048 @ d16384|351.42 ± 7.31|1514.78 ± 5.66|4.310| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32 @ d16384|46.20 ± 1.17|44.78 ± 0.07|0.969| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048 @ d32768|235.87 ± 2.88|1221.23 ± 7.85|5.178| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32 @ d32768|40.22 ± 0.29|38.76 ± 0.06|0.964| |||||||| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048|1656.65 ± 86.70|2933.39 ± 9.43|1.771| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32|84.50 ± 0.87|59.95 ± 0.26|0.709| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048 @ d4096|938.23 ± 29.08|2537.98 ± 7.17|2.705| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32 @ d4096|67.70 ± 2.34|52.70 ± 0.75|0.778| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048 @ d8192|681.07 ± 20.63|2246.86 ± 6.45|3.299| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32 @ d8192|61.06 ± 6.02|44.48 ± 0.34|0.728| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048 @ d16384|356.12 ± 16.62|1772.41 ± 10.58|4.977| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32 @ d16384|43.32 ± 3.04|37.10 ± 0.05|0.856| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048 @ d32768|223.23 ± 12.23|1252.10 ± 2.16|5.609| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32 @ d32768|35.09 ± 5.53|27.82 ± 0.01|0.793| |||||||| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048|684.35 ± 15.08|2267.08 ± 6.38|3.313| |qwen2 7B Q8\_0|7.54 GiB|7.62 
B|tg32|46.82 ± 11.44|29.40 ± 0.02|0.628| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048 @ d4096|633.50 ± 3.78|2094.87 ± 11.61|3.307| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32 @ d4096|54.66 ± 0.74|28.31 ± 0.10|0.518| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048 @ d8192|496.85 ± 21.23|1906.26 ± 4.45|3.837| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32 @ d8192|51.15 ± 0.85|27.53 ± 0.04|0.538| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048 @ d16384|401.98 ± 4.97|1634.82 ± 6.67|4.067| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32 @ d16384|47.91 ± 0.18|26.03 ± 0.03|0.543| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048 @ d32768|293.33 ± 2.23|1302.32 ± 4.58|4.440| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32 @ d32768|40.78 ± 0.42|22.08 ± 0.03|0.541| |||||||| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048|339.64 ± 21.28|841.44 ± 12.67|2.477| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32|37.79 ± 3.84|22.59 ± 0.11|0.598| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048 @ d4096|241.85 ± 6.50|749.08 ± 2.10|3.097| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32 @ d4096|27.22 ± 2.67|20.10 ± 0.01|0.738| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048 @ d8192|168.44 ± 4.12|680.95 ± 1.38|4.043| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32 @ d8192|29.13 ± 0.14|18.78 ± 0.07|0.645| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048 @ d16384|122.06 ± 9.23|565.44 ± 1.47|4.632| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32 @ d16384|20.96 ± 1.20|16.47 ± 0.01|0.786| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048 @ d32768||418.84 ± 0.53|| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32 @ d32768||13.19 ± 0.01|| From the data here we can see PP on the DGX SPARK is \~3.35x faster than the M4 MAX, while TG \~0.73x. Interesting as MBW on SPARK is \~273GB/s and MAX \~546GB/s. So, here is my question for r/LocalLLaMA. Inference performance is really important, but how much does PP really matter in all these discussions compared to TG?
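To make that concrete with rough numbers (simply dividing token counts by the throughput figures above, using the d32768 rows for gpt-oss 120B): ingesting a 32K-token prompt takes roughly 32768 / 236 ≈ 139 s on the M4 Max versus 32768 / 1221 ≈ 27 s on the Spark, while generating a 1,000-token reply takes about 1000 / 40 ≈ 25 s on the Max versus 1000 / 39 ≈ 26 s on the Spark. So for long-context, short-answer workloads (RAG, code review, agentic tool loops) PP dominates wall-clock time, whereas for short prompts with long generations TG is what you actually feel.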
2025-10-15T19:12:20
https://www.reddit.com/r/LocalLLaMA/comments/1o7k7zz/dgx_spark_compiled_llamacpp_benchmarks_compared/
Noble00_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7k7zz
false
null
t3_1o7k7zz
/r/LocalLLaMA/comments/1o7k7zz/dgx_spark_compiled_llamacpp_benchmarks_compared/
false
false
self
23
{'enabled': False, 'images': [{'id': 'pxfJzcifhqSFdCquYHMdVmWeKAUjG3-m8mLNh281nAo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pxfJzcifhqSFdCquYHMdVmWeKAUjG3-m8mLNh281nAo.png?width=108&crop=smart&auto=webp&s=e1ed18d21848daff25a4086fdd0cad4ab01ebc2b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pxfJzcifhqSFdCquYHMdVmWeKAUjG3-m8mLNh281nAo.png?width=216&crop=smart&auto=webp&s=42fe1f43c292b1860450afc6a8a89e827aa1974e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pxfJzcifhqSFdCquYHMdVmWeKAUjG3-m8mLNh281nAo.png?width=320&crop=smart&auto=webp&s=03e5c802cce01fc12a046ea95745fde06bb10a8f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pxfJzcifhqSFdCquYHMdVmWeKAUjG3-m8mLNh281nAo.png?width=640&crop=smart&auto=webp&s=e395c9d114484e61c726df379c59c53f5679cba6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pxfJzcifhqSFdCquYHMdVmWeKAUjG3-m8mLNh281nAo.png?width=960&crop=smart&auto=webp&s=13b0d53f5cbd0fd9f9ea37ea1fe54bcca1458519', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pxfJzcifhqSFdCquYHMdVmWeKAUjG3-m8mLNh281nAo.png?width=1080&crop=smart&auto=webp&s=1ed71958b8ac163f8a89c969181347c48a89e1d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pxfJzcifhqSFdCquYHMdVmWeKAUjG3-m8mLNh281nAo.png?auto=webp&s=4caaffb1f6d7264f227b140f8fadffab8585b531', 'width': 1200}, 'variants': {}}]}
M2 Ultra 192 gb + GLM Air 4.5/4.6 for local coding agents?
1
I’m considering getting an M2 Ultra (76-core GPU, 192 GB RAM) as a local dev machine for experimenting with coding-oriented LLMs like GLM 4.5 Air and GLM 4.6. I found someone selling it for ~1700 euros in my region. Has anyone actually run these (or similar-sized models) on an M2 Ultra? How’s inference speed (tokens/s)? Trying to decide if this setup is viable for local agent dev or just an expensive toy. Would love benchmarks, configs, or anecdotes. Thanks
2025-10-15T19:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1o7k78o/m2_ultra_192_gb_glm_air_4546_for_local_coding/
arnoopt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7k78o
false
null
t3_1o7k78o
/r/LocalLLaMA/comments/1o7k78o/m2_ultra_192_gb_glm_air_4546_for_local_coding/
false
false
self
1
null
NVIDIA DGX Spark™ + Apple Mac Studio = 4x Faster LLM Inference with EXO 1.0
31
Well this is quite interesting! [https://blog.exolabs.net/nvidia-dgx-spark/](https://blog.exolabs.net/nvidia-dgx-spark/)
2025-10-15T19:10:38
https://www.reddit.com/r/LocalLLaMA/comments/1o7k6e5/nvidia_dgx_spark_apple_mac_studio_4x_faster_llm/
Careless_Garlic1438
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7k6e5
false
null
t3_1o7k6e5
/r/LocalLLaMA/comments/1o7k6e5/nvidia_dgx_spark_apple_mac_studio_4x_faster_llm/
false
false
self
31
{'enabled': False, 'images': [{'id': 'K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=108&crop=smart&auto=webp&s=7c1fdbd5fb183937e67a1b86563189501f140a1c', 'width': 108}, {'height': 225, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=216&crop=smart&auto=webp&s=4336a9720c86192fe14b35a8a061bbdb14638fa8', 'width': 216}, {'height': 333, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=320&crop=smart&auto=webp&s=33316abc9096614847add2f23a8ba3e6cb9c1c12', 'width': 320}, {'height': 667, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=640&crop=smart&auto=webp&s=b9f5266792809d968871e23573f02585582d09e3', 'width': 640}, {'height': 1001, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=960&crop=smart&auto=webp&s=3349d00121c6be480cbfe6aa236959947f9e6414', 'width': 960}, {'height': 1126, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=1080&crop=smart&auto=webp&s=c19c6191247dbd506fa799499d6be93a04d3468e', 'width': 1080}], 'source': {'height': 4449, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?auto=webp&s=34aee5e6359649c16bc33b554bf9338ecce95693', 'width': 4264}, 'variants': {}}]}
DGX SPARK Compiled llama.cpp Performance Compared to M4 Max (non-MLX)
1
First, not trying to incite some feud discussion between Nvidia/Apple folks. I don't have either machines and just compiled this for amusement and just so others are aware. NOTE: Models aren't in mlx. If anyone is willing to share, it would be greatly appreciated. This would be really interesting. Also, to any Strix Halo/Ryzen AI Max+ 395 users, if you'd like to compare: llama-bench -m [model.gguf] -fa 1 -d 0,4096,8192,16384,32768 -p 2048 -n 32 -ub 2048 [Source of DGX SPARK data](https://github.com/ggml-org/llama.cpp/discussions/16578) [Source of M4 MAX data](https://nitter.net/richinseattle/status/1978244945845657863) || || |model|size|params|test|t/s (M4 MAX)|t/s (Spark)|Speedup| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048|1761.99 ± 78.03|3610.56 ± 15.16|2.049| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32|118.95 ± 0.21|79.74 ± 0.43|0.670| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048 @ d4096|1324.28 ± 46.34|3361.11 ± 12.95|2.538| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32 @ d4096|98.76 ± 5.75|74.63 ± 0.15|0.756| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048 @ d8192|1107.91 ± 11.12|3147.73 ± 15.77|2.841| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32 @ d8192|94.19 ± 1.85|69.49 ± 1.12|0.738| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048 @ d16384|733.77 ± 54.67|2685.54 ± 5.76|3.660| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32 @ d16384|80.68 ± 2.49|64.02 ± 0.72|0.794| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|pp2048 @ d32768|518.68 ± 17.73|2055.34 ± 20.43|3.963| |gpt-oss 20B MXFP4 MoE|11.27 GiB|20.91 B|tg32 @ d32768|69.94 ± 4.19|55.96 ± 0.07|0.800| |||||||| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048|871.16 ± 31.85|1689.47 ± 107.67|1.939| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32|62.85 ± 0.36|52.87 ± 1.70|0.841| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048 @ d4096|643.32 ± 12.00|1733.41 ± 5.19|2.694| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32 @ d4096|56.48 ± 0.72|51.02 ± 0.65|0.903| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048 @ d8192|516.77 ± 7.33|1705.93 ± 7.89|3.301| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32 @ d8192|50.79 ± 1.37|48.46 ± 0.53|0.954| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048 @ d16384|351.42 ± 7.31|1514.78 ± 5.66|4.310| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32 @ d16384|46.20 ± 1.17|44.78 ± 0.07|0.969| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|pp2048 @ d32768|235.87 ± 2.88|1221.23 ± 7.85|5.178| |gpt-oss 120B MXFP4 MoE|59.02 GiB|116.83 B|tg32 @ d32768|40.22 ± 0.29|38.76 ± 0.06|0.964| |||||||| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048|1656.65 ± 86.70|2933.39 ± 9.43|1.771| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32|84.50 ± 0.87|59.95 ± 0.26|0.709| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048 @ d4096|938.23 ± 29.08|2537.98 ± 7.17|2.705| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32 @ d4096|67.70 ± 2.34|52.70 ± 0.75|0.778| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048 @ d8192|681.07 ± 20.63|2246.86 ± 6.45|3.299| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32 @ d8192|61.06 ± 6.02|44.48 ± 0.34|0.728| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048 @ d16384|356.12 ± 16.62|1772.41 ± 10.58|4.977| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32 @ d16384|43.32 ± 3.04|37.10 ± 0.05|0.856| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|pp2048 @ d32768|223.23 ± 12.23|1252.10 ± 2.16|5.609| |qwen3moe 30B.A3B Q8\_0|30.25 GiB|30.53 B|tg32 @ d32768|35.09 ± 5.53|27.82 ± 0.01|0.793| |||||||| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048|684.35 ± 15.08|2267.08 ± 6.38|3.313| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32|46.82 ± 11.44|29.40 
± 0.02|0.628| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048 @ d4096|633.50 ± 3.78|2094.87 ± 11.61|3.307| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32 @ d4096|54.66 ± 0.74|28.31 ± 0.10|0.518| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048 @ d8192|496.85 ± 21.23|1906.26 ± 4.45|3.837| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32 @ d8192|51.15 ± 0.85|27.53 ± 0.04|0.538| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048 @ d16384|401.98 ± 4.97|1634.82 ± 6.67|4.067| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32 @ d16384|47.91 ± 0.18|26.03 ± 0.03|0.543| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|pp2048 @ d32768|293.33 ± 2.23|1302.32 ± 4.58|4.440| |qwen2 7B Q8\_0|7.54 GiB|7.62 B|tg32 @ d32768|40.78 ± 0.42|22.08 ± 0.03|0.541| |||||||| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048|339.64 ± 21.28|841.44 ± 12.67|2.477| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32|37.79 ± 3.84|22.59 ± 0.11|0.598| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048 @ d4096|241.85 ± 6.50|749.08 ± 2.10|3.097| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32 @ d4096|27.22 ± 2.67|20.10 ± 0.01|0.738| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048 @ d8192|168.44 ± 4.12|680.95 ± 1.38|4.043| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32 @ d8192|29.13 ± 0.14|18.78 ± 0.07|0.645| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048 @ d16384|122.06 ± 9.23|565.44 ± 1.47|4.632| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32 @ d16384|20.96 ± 1.20|16.47 ± 0.01|0.786| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|pp2048 @ d32768||418.84 ± 0.53|| |glm4moe 106B.A12B Q4\_K|67.85 GiB|110.47 B|tg32 @ d32768||13.19 ± 0.01|| From the data here we can see PP on the DGX SPARK is \~3.35x faster than the M4 MAX, while TG \~0.73x. Interesting as MBW on SPARK is \~273GB/s and MAX \~546GB/s. So, here is my question for r/LocalLLaMA. Inference performance is really important, but how much does PP really matter in all these discussions compared to TG?
2025-10-15T19:09:21
https://www.reddit.com/r/LocalLLaMA/comments/1o7k56w/dgx_spark_compiled_llamacpp_performance_compared/
Noble00_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7k56w
false
null
t3_1o7k56w
/r/LocalLLaMA/comments/1o7k56w/dgx_spark_compiled_llamacpp_performance_compared/
false
false
self
1
null
A way to make the DGX spark fast, put it in a cluster with an Mac Studio M3U
1
This is cool, use each device where it shines … [https://x.com/exolabs/status/1978525767739883736](https://x.com/exolabs/status/1978525767739883736)
2025-10-15T19:02:11
https://www.reddit.com/r/LocalLLaMA/comments/1o7jycc/a_way_to_make_the_dgx_spark_fast_put_it_in_a/
Careless_Garlic1438
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7jycc
false
null
t3_1o7jycc
/r/LocalLLaMA/comments/1o7jycc/a_way_to_make_the_dgx_spark_fast_put_it_in_a/
false
false
self
1
null
GLM 4.6 is the new top open weight model on Design Arena
110
ERROR: type should be string, got "https://preview.redd.it/hepvwbezobvf1.png?width=1877&format=png&auto=webp&s=87d242fe8af470adee79fa9b604930404192741c\n\nGLM models make up 20% of the top 10 and beat every iteration of GPT-5 except minimal. It has surpassed DeepSeek, Qwen, and even Sonnet 4 and 3.7. If their front-end performance continues to improve at this pace for GLM 5, they could break in the top 5. China is approaching SOTA"
2025-10-15T19:01:56
https://www.reddit.com/r/LocalLLaMA/comments/1o7jy1o/glm_46_is_the_new_top_open_weight_model_on_design/
Helpful_Jacket8953
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7jy1o
false
null
t3_1o7jy1o
/r/LocalLLaMA/comments/1o7jy1o/glm_46_is_the_new_top_open_weight_model_on_design/
false
false
https://b.thumbs.redditm…UDqDxUbjr48s.jpg
110
null
Any recommendations on Blackwell based boxes?
2
Does anyone have a comparison table for these different vendor options? Any recommendations on which one to choose? Also, does each of them support stacking? This is crucial for very large models with up to 200 billion parameters. https://preview.redd.it/maopjdx2nbvf1.png?width=814&format=png&auto=webp&s=6557c1ed503a13a257c6dccf1aa2953782d0ef36
2025-10-15T18:49:25
https://www.reddit.com/r/LocalLLaMA/comments/1o7jld1/any_recommendations_on_blackwell_based_boxes/
speacexstarlink
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7jld1
false
null
t3_1o7jld1
/r/LocalLLaMA/comments/1o7jld1/any_recommendations_on_blackwell_based_boxes/
false
false
https://b.thumbs.redditm…p-ItGFZpdFKw.jpg
2
null
Microcenter has RTX3090Ti’s
49
Not sure if anyone cares, but my local Microcenter has refurb RTX 3090 Ti's for $800. If you're in the market for 3090s, it might be worth checking your local Microcenter. Used-market prices have gone up to $900, and at least this way you get some sort of warranty. Also got a chance to play with the DGX Spark, that thing is really cool.
2025-10-15T18:34:30
https://www.reddit.com/gallery/1o7j6ri
flanconleche
reddit.com
1970-01-01T00:00:00
0
{}
1o7j6ri
false
null
t3_1o7j6ri
/r/LocalLLaMA/comments/1o7j6ri/microcenter_has_rtx3090tis/
false
false
https://b.thumbs.redditm…uHZbkvMAEyJk.jpg
49
null
Agentic Coding
4
Quite new to agentic coding. I want to build an entirely open-source setup, something that can be driven by VS Code. What stack would you folks suggest? What module? I've been asked to investigate building a setup that we can use in a student lab to give the students experience with such tools, so I'm looking for something I can scale up. Has anyone built anything like this and run it as a small local service?
2025-10-15T18:07:34
https://www.reddit.com/r/LocalLLaMA/comments/1o7igt9/agentic_coding/
brownjl99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7igt9
false
null
t3_1o7igt9
/r/LocalLLaMA/comments/1o7igt9/agentic_coding/
false
false
self
4
null
AMD Ryzen AI 7 PRO 350 vs Intel Core Ultra 7 155H /NVIDIA RTX™ 500 Ada 4GB
6
Hi everyone, I'm looking to see which would be a better fit for running local models. Basically Lenovo offers both of the above ThinkPads. One with the AMD AI 350, and another with an Intel ultra 7 and Nvidia RTX 500 4GB. I don't expect to be able to do much if any training locally, but I really want to be able to run the models themselves. Which of these two options would be a better fit, the pricing is around the same maybe a few hundred dollar difference which I'm not too concerned about. From my experience the AMD ecosystem is really behind, but I haven't tried in about 12 months and I've seen a lot of optimistic news here. My budget for a laptop is probably 2K (ideally around 1500). I really want a Thinkpad, although I'm well aware that cheaper gaming laptops have stronger GPUs.
2025-10-15T17:55:12
https://www.reddit.com/r/LocalLLaMA/comments/1o7i4jp/amd_ryzen_ai_7_pro_350_vs_intel_core_ultra_7_155h/
mcAlt009
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7i4jp
false
null
t3_1o7i4jp
/r/LocalLLaMA/comments/1o7i4jp/amd_ryzen_ai_7_pro_350_vs_intel_core_ultra_7_155h/
false
false
self
6
null
Challenges in Tracing and Debugging AI Workflows
11
Hi all, I work on evaluation and observability at Maxim, and I’ve been closely looking at how teams trace, debug, and maintain reliable AI workflows. Across multi-agent systems, RAG pipelines, and LLM-driven applications, getting full visibility into agent decisions and workflow failures is still a major challenge. From my experience, common pain points include: * **Failure visibility across multi-step workflows:** Token-level logs are useful, but understanding the trajectory of an agent across multiple steps or chained models is hard without structured traces. * **Debugging complex agent interactions:** When multiple models or tools interact, pinpointing which step caused a failure often requires reproducing the workflow from scratch. * **Integrating human review effectively:** Automated metrics are great, but aligning evaluations with human judgment, especially for nuanced tasks, is still tricky. * **Maintaining reliability in production:** Ensuring that your AI remains trustworthy under real-world usage and scaling scenarios can be difficult without end-to-end observability. At [Maxim](https://getmax.im/maxim), we’ve built our platform to tackle these exact challenges. Some of the ways teams benefit include: * **Structured evaluations at multiple levels:** You can attach automated checks or human-in-the-loop reviews at the session, trace, or span level. This lets you catch issues early and iterate faster. * **Full visibility into agent trajectories:** Simulations and logging across multi-agent workflows give teams insights into failure modes and decision points. * **Custom dashboards and alerts:** Teams can slice and dice traces, define performance criteria, and get Slack or PagerDuty alerts when issues arise. * **End-to-end observability:** From pre-release simulations to post-release monitoring, evaluation, and dataset curation, the platform is designed to give teams a complete picture of AI quality and reliability. We’ve seen that structured, full-stack evaluation workflows not only make debugging and tracing faster but also improve overall trustworthiness of AI systems. Would love to hear how others are tackling these challenges and what tools or approaches you’ve found effective for tracing, debugging, and reliability in complex AI pipelines. (my humble apologize if this comes across as self promo)
2025-10-15T17:47:49
https://www.reddit.com/r/LocalLLaMA/comments/1o7hxao/challenges_in_tracing_and_debugging_ai_workflows/
dinkinflika0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7hxao
false
null
t3_1o7hxao
/r/LocalLLaMA/comments/1o7hxao/challenges_in_tracing_and_debugging_ai_workflows/
false
false
self
11
{'enabled': False, 'images': [{'id': 'uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=108&crop=smart&auto=webp&s=2ac91097383d12b50cccd11a156d801425048149', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=216&crop=smart&auto=webp&s=fae40b26936652773a58a03f1d4a4baec2979212', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=320&crop=smart&auto=webp&s=1a444a7dd7d4b0466ac2677e15998bea07b28d8b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=640&crop=smart&auto=webp&s=856a61802fc5acd41967218550e53df81caa8e55', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=960&crop=smart&auto=webp&s=0dc7253f5f4daea12322fc48309b0ecb506c03e0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=1080&crop=smart&auto=webp&s=94df2b12217ce0373883be1122c1402454ad81eb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?auto=webp&s=66ed8b09519937ca22fa89b067d4bb96fecbc34a', 'width': 1200}, 'variants': {}}]}
OpenAI released ChatGPT Apps without a framework… so we made one(open source)
0
We were super excited when OpenAI announced ChatGPT Apps, then realized there wasn’t an actual framework to build them with. So we built FastApps: a no-setup, zero-boilerplate framework for creating ChatGPT Apps. It handles everything: project structure, auto-discovery, build pipeline, and MCP integration. https://github.com/DooiLabs/FastApps If you’ve been waiting for a clean developer experience for ChatGPT Apps, this might be it. Please give us a Github star if you like it. Any feedback would be welcome!!
2025-10-15T17:25:01
https://www.reddit.com/r/LocalLLaMA/comments/1o7hbb0/openai_released_chatgpt_apps_without_a_framework/
Far-Dark-8640
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7hbb0
false
null
t3_1o7hbb0
/r/LocalLLaMA/comments/1o7hbb0/openai_released_chatgpt_apps_without_a_framework/
false
false
self
0
{'enabled': False, 'images': [{'id': '-GlZKJwrW5tiViJbchuoHsBlTRlqO-XRayuoJZv-CVk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-GlZKJwrW5tiViJbchuoHsBlTRlqO-XRayuoJZv-CVk.png?width=108&crop=smart&auto=webp&s=a836da72d52b7110e283196d3b0532f5b60cc429', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-GlZKJwrW5tiViJbchuoHsBlTRlqO-XRayuoJZv-CVk.png?width=216&crop=smart&auto=webp&s=b4eb8f73e6dc76c1cc017e06117399b85296ad94', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-GlZKJwrW5tiViJbchuoHsBlTRlqO-XRayuoJZv-CVk.png?width=320&crop=smart&auto=webp&s=3d270b9b62f35c3e2ff7e1ed3be86dd0e4658bec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-GlZKJwrW5tiViJbchuoHsBlTRlqO-XRayuoJZv-CVk.png?width=640&crop=smart&auto=webp&s=7020a811f5b4bf9c1757d2f72ac15b6009c23565', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-GlZKJwrW5tiViJbchuoHsBlTRlqO-XRayuoJZv-CVk.png?width=960&crop=smart&auto=webp&s=b7c53ebf1b19f8f00b92b427202e8111919968ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-GlZKJwrW5tiViJbchuoHsBlTRlqO-XRayuoJZv-CVk.png?width=1080&crop=smart&auto=webp&s=fff1d6e8758279915be640ee563c366f16f25651', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-GlZKJwrW5tiViJbchuoHsBlTRlqO-XRayuoJZv-CVk.png?auto=webp&s=154f997c55be0d4df46459461a577a9890065b7a', 'width': 1200}, 'variants': {}}]}
Which is the current best ERP model <=7b?
2
My device is pretty cooked. Please help me find a model to run on it 🙂
2025-10-15T17:02:56
https://www.reddit.com/r/LocalLLaMA/comments/1o7gptz/which_is_the_current_best_erp_model_7b/
redfinalboss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7gptz
false
null
t3_1o7gptz
/r/LocalLLaMA/comments/1o7gptz/which_is_the_current_best_erp_model_7b/
false
false
self
2
null
Got the DGX Spark - ask me anything
580
If there’s anything you want me to benchmark (or want to see in general), let me know, and I’ll try to reply to your comment. I will be playing with this all night trying a ton of different models I’ve always wanted to run. (& shoutout to microcenter my goats!)
2025-10-15T17:02:50
https://i.redd.it/9mr835ne4bvf1.jpeg
sotech117
i.redd.it
1970-01-01T00:00:00
0
{}
1o7gpr8
false
null
t3_1o7gpr8
/r/LocalLLaMA/comments/1o7gpr8/got_the_dgx_spark_ask_me_anything/
false
false
default
580
{'enabled': True, 'images': [{'id': '9mr835ne4bvf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/9mr835ne4bvf1.jpeg?width=108&crop=smart&auto=webp&s=4380496c4c18092147a8b137e9a6947e029b6dfa', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/9mr835ne4bvf1.jpeg?width=216&crop=smart&auto=webp&s=eb49db7f510ab9913c50887fcfddf646d5e85c57', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/9mr835ne4bvf1.jpeg?width=320&crop=smart&auto=webp&s=02fab609e0d6cb280b3a3716bf9991b0352bb5c4', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/9mr835ne4bvf1.jpeg?width=640&crop=smart&auto=webp&s=42dc8e85dcff8b55d4174e98495bb8d2d144fd7d', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/9mr835ne4bvf1.jpeg?width=960&crop=smart&auto=webp&s=280f54ea11c4668b1f05f566a39e14ef6b0910c2', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/9mr835ne4bvf1.jpeg?width=1080&crop=smart&auto=webp&s=cb38daa08b6c75eb8ebb1bbaf0454160fdfa83ac', 'width': 1080}], 'source': {'height': 4284, 'url': 'https://preview.redd.it/9mr835ne4bvf1.jpeg?auto=webp&s=2be057a2e1c7909866146fbe87f1ffdc3ebbfb85', 'width': 5712}, 'variants': {}}]}
One day a monkey wandered into a street in the city. The children in the street saw it and started shouting with joy, "Hey, a monkey's here, a monkey's here!" And what a mischievous monkey it was: sometimes it would climb onto someone's shoulder, sometimes it would snatch the ice cream from someone's hand. One child said, "Come on, let's give it a biscuit so it doesn't bother us!" The monkey took the biscuit, but
0
Asad
2025-10-15T16:59:57
https://www.reddit.com/r/LocalLLaMA/comments/1o7gmso/ایک_دن_ایک_بندر_شہر_کی_ایک_گلی_میں_آ_نکلا_گلی_کے/
Relative-Battle-7070
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7gmso
false
null
t3_1o7gmso
/r/LocalLLaMA/comments/1o7gmso/ایک_دن_ایک_بندر_شہر_کی_ایک_گلی_میں_آ_نکلا_گلی_کے/
false
false
self
0
null
Using Google AI Studio's Code Execution as an Interactive REPL for Agent Prototyping
1
[removed]
2025-10-15T16:55:10
https://www.reddit.com/r/LocalLLaMA/comments/1o7gi4o/using_google_ai_studios_code_execution_as_an/
Embarrassed-Crow7078
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7gi4o
false
null
t3_1o7gi4o
/r/LocalLLaMA/comments/1o7gi4o/using_google_ai_studios_code_execution_as_an/
false
false
self
1
null
Exploiting Extended Reasoning: Uncovering Deceptive Behaviors in LLM Chain-of-Thought
4
Uncovering policy manipulation, evaluation awareness, and infinite loops in gpt-oss, OpenAI's new open source reasoning model. [Full article](https://medium.com/@gabriella_71298/exploiting-extended-reasoning-uncovering-deceptive-behaviors-in-llm-chain-of-thought-cc11a0d46b52)
2025-10-15T16:15:21
https://medium.com/p/cc11a0d46b52
ella0333
medium.com
1970-01-01T00:00:00
0
{}
1o7ff57
false
null
t3_1o7ff57
/r/LocalLLaMA/comments/1o7ff57/exploiting_extended_reasoning_uncovering/
false
false
default
4
{'enabled': False, 'images': [{'id': 'I8r_JEq_f06QqPDAkhM30sBR0e31Hm-TmxA4Gmcsba4', 'resolutions': [{'height': 35, 'url': 'https://external-preview.redd.it/I8r_JEq_f06QqPDAkhM30sBR0e31Hm-TmxA4Gmcsba4.png?width=108&crop=smart&auto=webp&s=3ec73f01937d93ff90c2b262cce5672d4143f33e', 'width': 108}, {'height': 71, 'url': 'https://external-preview.redd.it/I8r_JEq_f06QqPDAkhM30sBR0e31Hm-TmxA4Gmcsba4.png?width=216&crop=smart&auto=webp&s=23729fd9030a0a72edea25fe1ce5b75eb9943488', 'width': 216}, {'height': 106, 'url': 'https://external-preview.redd.it/I8r_JEq_f06QqPDAkhM30sBR0e31Hm-TmxA4Gmcsba4.png?width=320&crop=smart&auto=webp&s=5c221dc59fb95f3d557b332aa14e2a3bde62ef90', 'width': 320}, {'height': 212, 'url': 'https://external-preview.redd.it/I8r_JEq_f06QqPDAkhM30sBR0e31Hm-TmxA4Gmcsba4.png?width=640&crop=smart&auto=webp&s=4f4961ab285b9e41523afac6f123b44b50ec951b', 'width': 640}, {'height': 318, 'url': 'https://external-preview.redd.it/I8r_JEq_f06QqPDAkhM30sBR0e31Hm-TmxA4Gmcsba4.png?width=960&crop=smart&auto=webp&s=21d14d4b7c3ba4e832a3e38498653c42e5d5df62', 'width': 960}, {'height': 358, 'url': 'https://external-preview.redd.it/I8r_JEq_f06QqPDAkhM30sBR0e31Hm-TmxA4Gmcsba4.png?width=1080&crop=smart&auto=webp&s=7eedbed0fd4765c2b85aa33b1d89ef3a234f411d', 'width': 1080}], 'source': {'height': 398, 'url': 'https://external-preview.redd.it/I8r_JEq_f06QqPDAkhM30sBR0e31Hm-TmxA4Gmcsba4.png?auto=webp&s=73555c6bf210c729f1408f63b2be55edb650c4d8', 'width': 1200}, 'variants': {}}]}
Fast PCIe Speed is Needed for Good PP
12
Or "Why Strix Halo + eGPU is not a great combination" So recently I learnt the hard way that fast PCIe speed is needed to get good PP, when doing hybrid CPU + GPU inference for large MoE models. Previously, I always thought that PCIe speed doesn't matter for single user inference. And so I spent $2k on a FEVM FA-EX9 that has an oculink port, pairing it with my existing RTX 3090 and AOOSTAR AG02. With ik\_llama.cpp, I get about 120 t/s PP and 10 t/s TG with a 3.2bpw GLM-4.5 quant. Not great, but it is fast enough, especially when compared to mainline llama.cpp or ktransformers. Then, 2 weeks ago, u/VoidAlchemy shared his numbers in [https://huggingface.co/ubergarm/GLM-4.6-GGUF/discussions/5](https://huggingface.co/ubergarm/GLM-4.6-GGUF/discussions/5) and [https://www.reddit.com/r/LocalLLaMA/comments/1nwimej/glm\_46\_local\_gaming\_rig\_performance/](https://www.reddit.com/r/LocalLLaMA/comments/1nwimej/glm_46_local_gaming_rig_performance/) . And with a very similar setup, the PP is 4x better! It turns out that I lacked the mechanical sympathy to understand how GPU offload works in ik\_llama.cpp during prompt processing. There is no magic. As explained by IK in [https://github.com/ikawrakow/ik\_llama.cpp/pull/520](https://github.com/ikawrakow/ik_llama.cpp/pull/520) and also [https://github.com/ikawrakow/ik\_llama.cpp/discussions/258#discussioncomment-13153572](https://github.com/ikawrakow/ik_llama.cpp/discussions/258#discussioncomment-13153572), the weights that are loaded into system RAM will need to be copied into VRAM, to make use of the much faster CUDA compute. And that's 4x slower on the oculink with PCIe 4.0 x4, compared to PCIe 4.0 x16. If I had learnt this earlier, I probably would have gone with an Epyc workstation instead, which will be much faster, but also more expensive and taking up way more space. As it is, the Strix Halo + eGPU has a decent wife acceptance factor, and I just have to make peace with the above average PP.
2025-10-15T15:55:57
https://www.reddit.com/r/LocalLLaMA/comments/1o7ewc5/fast_pcie_speed_is_needed_for_good_pp/
notdba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7ewc5
false
null
t3_1o7ewc5
/r/LocalLLaMA/comments/1o7ewc5/fast_pcie_speed_is_needed_for_good_pp/
false
false
self
12
{'enabled': False, 'images': [{'id': 'BTXeyZi6TMK0-_7klNOM0Q0714i2f2blM6J-pIh_Fvw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BTXeyZi6TMK0-_7klNOM0Q0714i2f2blM6J-pIh_Fvw.png?width=108&crop=smart&auto=webp&s=1f3ec5d82b5358edfd250c0f57872da06f3defcd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BTXeyZi6TMK0-_7klNOM0Q0714i2f2blM6J-pIh_Fvw.png?width=216&crop=smart&auto=webp&s=b80865f67775a3a72b21ad5862710534fe6f0cf9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BTXeyZi6TMK0-_7klNOM0Q0714i2f2blM6J-pIh_Fvw.png?width=320&crop=smart&auto=webp&s=3883743a7f86d7343c55cd099bbb0fbde585b614', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BTXeyZi6TMK0-_7klNOM0Q0714i2f2blM6J-pIh_Fvw.png?width=640&crop=smart&auto=webp&s=0b040c8f27cab0de600d8cc7c4406e8602bb3d8c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BTXeyZi6TMK0-_7klNOM0Q0714i2f2blM6J-pIh_Fvw.png?width=960&crop=smart&auto=webp&s=62c97b308a95d98fe36022087edab054c0c8a04c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BTXeyZi6TMK0-_7klNOM0Q0714i2f2blM6J-pIh_Fvw.png?width=1080&crop=smart&auto=webp&s=1b3a3c31ad9d2b414ceb2e2b4dfcba68488f8fb4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BTXeyZi6TMK0-_7klNOM0Q0714i2f2blM6J-pIh_Fvw.png?auto=webp&s=9717b2e7c362a4222108d42c935dd4c78aac275f', 'width': 1200}, 'variants': {}}]}
Apple M5 Officially Announced: is this a big deal?
178
If I'm understanding correctly:

* **3.5x faster AI performance** compared to the M4 (though the exact neural engine improvements aren't yet confirmed)
* **153 GB/s memory bandwidth** (~30% improvement)
* **4x increase in GPU compute**
* **Unified memory architecture**, eliminating the need for CPU↔GPU data transfers

Even if the neural accelerators on the base M5 aren't dedicated matmul units (which seems unlikely given the A19 Pro), will this translate into noticeably faster prompt processing speeds? At $1,600 for an entry-level 16GB M5, it feels limiting for serious inference workloads, especially compared to refurbished M-series models with more RAM. That said, it seems like a solid choice for new users exploring local AI, particularly when working with sub-10B models for RAG or large context windows. That, along with another LM Studio feature in the press release, is a good sign, no? Do the specs/pricing represent a meaningful upgrade for anyone considering the M5 Pro, Max, or Ultra? I'd love to hear others' thoughts. Read the announcement [here](https://www.apple.com/newsroom/2025/10/apple-unleashes-m5-the-next-big-leap-in-ai-performance-for-apple-silicon/).
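As a rough sanity check on what 153 GB/s means for token generation (back-of-envelope, assuming bandwidth-bound decode and ignoring cache, KV, and overhead): a 7B model at Q4 is roughly 4 GB of weights, so the ceiling is on the order of 153 / 4 ≈ 38 tokens/s, and the same math on the base M4's ~120 GB/s gives ~30 tokens/s. Decode speed therefore improves only modestly with the ~30% bandwidth bump; it's the claimed GPU and neural-accelerator gains that would have to deliver the bigger prompt-processing win.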
2025-10-15T15:48:34
https://www.reddit.com/r/LocalLLaMA/comments/1o7ep8a/apple_m5_officially_announced_is_this_a_big_deal/
ontorealist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7ep8a
false
null
t3_1o7ep8a
/r/LocalLLaMA/comments/1o7ep8a/apple_m5_officially_announced_is_this_a_big_deal/
false
false
self
178
{'enabled': False, 'images': [{'id': 'W534jqDZXADuWkvuJWB7oJ3u5hFV6QqLM6H2JWqeMus', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W534jqDZXADuWkvuJWB7oJ3u5hFV6QqLM6H2JWqeMus.jpeg?width=108&crop=smart&auto=webp&s=e71f270132ca647cf8d6ae2735de5271b9da444f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/W534jqDZXADuWkvuJWB7oJ3u5hFV6QqLM6H2JWqeMus.jpeg?width=216&crop=smart&auto=webp&s=d670e6a90acdc1136f7c6fe9c83f3fa72fe76f64', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/W534jqDZXADuWkvuJWB7oJ3u5hFV6QqLM6H2JWqeMus.jpeg?width=320&crop=smart&auto=webp&s=8f0d72ac8f15af2d76c6431265e1e62feecba965', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/W534jqDZXADuWkvuJWB7oJ3u5hFV6QqLM6H2JWqeMus.jpeg?width=640&crop=smart&auto=webp&s=7054474c75110421e345a75b838671fc97da1a4a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/W534jqDZXADuWkvuJWB7oJ3u5hFV6QqLM6H2JWqeMus.jpeg?width=960&crop=smart&auto=webp&s=d39147d2e7f2ee3be1f3a5b4a7cdb045de478ba2', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/W534jqDZXADuWkvuJWB7oJ3u5hFV6QqLM6H2JWqeMus.jpeg?width=1080&crop=smart&auto=webp&s=865f6b824dc069ed03cb2e68e2f13292cc245940', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/W534jqDZXADuWkvuJWB7oJ3u5hFV6QqLM6H2JWqeMus.jpeg?auto=webp&s=68e6d52fcc65510e17f832267a6f924dd9d0fcc9', 'width': 1200}, 'variants': {}}]}
SOTA methods for multi-task vision models (OCR + inpainting + style transfer)
1
[removed]
2025-10-15T15:46:06
https://www.reddit.com/r/LocalLLaMA/comments/1o7emtl/sota_methods_for_multitask_vision_models_ocr/
New_Blueberry9858
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7emtl
false
null
t3_1o7emtl
/r/LocalLLaMA/comments/1o7emtl/sota_methods_for_multitask_vision_models_ocr/
false
false
self
1
null
When Grok-4 and Sonnet-4.5 play poker against each other
24
We set up a poker game between AI models and they got pretty competitive, trash talk included. [Features](https://github.com/opper-ai/opper-cookbook/tree/main/examples/poker-tournament#features):

* 5 AI Players - each powered by their own LLM (configurable models)
* Full Texas Hold'em Rules - pre-flop, flop, turn, river, and showdown
* Personality Layer - players show poker faces and engage in banter
* Memory System - players remember past hands and opponent patterns
* Observability - full tracing
* Rich Console UI - visual poker table with cards

Cookbook below: [https://github.com/opper-ai/opper-cookbook/tree/main/examples/poker-tournament](https://github.com/opper-ai/opper-cookbook/tree/main/examples/poker-tournament)
2025-10-15T15:38:00
https://i.redd.it/kkgkupvdoavf1.png
facethef
i.redd.it
1970-01-01T00:00:00
0
{}
1o7eevp
false
null
t3_1o7eevp
/r/LocalLLaMA/comments/1o7eevp/when_grok4_and_sonnet45_play_poker_against_each/
false
false
default
24
{'enabled': True, 'images': [{'id': 'kkgkupvdoavf1', 'resolutions': [{'height': 19, 'url': 'https://preview.redd.it/kkgkupvdoavf1.png?width=108&crop=smart&auto=webp&s=4463ba51c3b00dcbfcd1ecc67a107febfd38fc77', 'width': 108}, {'height': 39, 'url': 'https://preview.redd.it/kkgkupvdoavf1.png?width=216&crop=smart&auto=webp&s=b007d77f03327a26da935cf66691ae5bdf097846', 'width': 216}, {'height': 58, 'url': 'https://preview.redd.it/kkgkupvdoavf1.png?width=320&crop=smart&auto=webp&s=b716b5a9afcdc0923d05e4998f3bf9d28099039c', 'width': 320}, {'height': 116, 'url': 'https://preview.redd.it/kkgkupvdoavf1.png?width=640&crop=smart&auto=webp&s=3672b2f6d8b2a91ffb9e846c6eb7d59892101d69', 'width': 640}, {'height': 174, 'url': 'https://preview.redd.it/kkgkupvdoavf1.png?width=960&crop=smart&auto=webp&s=4cc205525b5480d2fff593a74bc83ae4b970e7e5', 'width': 960}, {'height': 195, 'url': 'https://preview.redd.it/kkgkupvdoavf1.png?width=1080&crop=smart&auto=webp&s=c6a76110dd46f76f4ab792def19881ddc25ffed5', 'width': 1080}], 'source': {'height': 474, 'url': 'https://preview.redd.it/kkgkupvdoavf1.png?auto=webp&s=05d80fa0261dd7660f6723a0df0fa0904d2df054', 'width': 2614}, 'variants': {}}]}
Why is Qwen3-VL 235B available via Ollama Cloud NOT locally
3
I was a serious user of Ollama, but what's this about them releasing all variants of Qwen3-VL 235B via their new cloud service and not locally? Is it because their cloud infrastructure doesn't even run on Ollama (most likely)? The way they're playing this has seriously hurt the brand's name for local inference!
2025-10-15T15:28:31
https://www.reddit.com/r/LocalLLaMA/comments/1o7e5oe/why_is_qwen3vl_235b_available_via_ollama_cloud/
PuzzledWord4293
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7e5oe
false
null
t3_1o7e5oe
/r/LocalLLaMA/comments/1o7e5oe/why_is_qwen3vl_235b_available_via_ollama_cloud/
false
false
self
3
null
I'm running MoE models that offload layers and KV to system RAM. How much gain in inference tps or model loading timing can I actually expect by upgrading my system RAM?
5
I have a gaming PC I use for inference. The system RAM is older DDR4 with average timings for the spec. Would swapping out my motherboard and RAM for DDR5 with good timings actually produce a noticeable benefit?
2025-10-15T15:06:44
https://www.reddit.com/r/LocalLLaMA/comments/1o7dk5w/im_running_moe_models_that_offload_layers_and_kv/
TOO_MUCH_BRAVERY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7dk5w
false
null
t3_1o7dk5w
/r/LocalLLaMA/comments/1o7dk5w/im_running_moe_models_that_offload_layers_and_kv/
false
false
self
5
null
Not much multilingual asr releases?
3
It's been a while since we've seen open-source ASR models that are at least competitive with Whisper. There have been a few, but English-only. Is there anything I'm missing that is multilingual and supports >=99 languages, as Whisper does? I'd look forward to switching from Whisper then!
2025-10-15T15:06:06
https://www.reddit.com/r/LocalLLaMA/comments/1o7djk8/not_much_multilingual_asr_releases/
Empty-Investment-827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7djk8
false
null
t3_1o7djk8
/r/LocalLLaMA/comments/1o7djk8/not_much_multilingual_asr_releases/
false
false
self
3
null
If I'm running MoE models that offload layers and KV to system RAM, how much gain in inference tps or model loading timing can I actually expect?
1
I have a gaming PC I use for inference. The system RAM is older DDR4 with average timings for the spec. Would swapping out my motherboard and RAM for DDR5 with good timings actually produce a noticeable benefit?
2025-10-15T15:05:26
https://www.reddit.com/r/LocalLLaMA/comments/1o7dix8/if_im_running_moe_models_that_offload_layers_and/
TOO_MUCH_BRAVERY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7dix8
false
null
t3_1o7dix8
/r/LocalLLaMA/comments/1o7dix8/if_im_running_moe_models_that_offload_layers_and/
false
false
self
1
null
Tried asking gpt: "is there a sea horse emoji" it got crazy
0
Can someone try with smaller models?
2025-10-15T15:03:32
https://www.reddit.com/r/LocalLLaMA/comments/1o7dh1q/tried_asking_gpt_is_there_a_sea_horse_emoji_it/
DeathShot7777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7dh1q
false
null
t3_1o7dh1q
/r/LocalLLaMA/comments/1o7dh1q/tried_asking_gpt_is_there_a_sea_horse_emoji_it/
false
false
self
0
null
(Possible) Mi50 passthrough fix for ESXi, similar to "vendor-reset" for Proxmox
9
Wanted to share the fix I found for getting my Mi50s to pass through properly in ESXi. Prior to this, I was getting an `atombios stuck in loop` error. There were fixes for Proxmox, notably [vendor-reset](https://github.com/gnif/vendor-reset), but nothing for ESXi. *This fix assumes you already have the VMX arguments for >16GB VRAM GPUs.*

* Ensure your GPU(s) are already set to passthrough in ESXi.
* Enable ssh on your ESXi host, and ssh into it.
* Get the vendor and device ID by running the following: `lspci -n | grep [DEVICE ADDRESS HERE]`. This device address can be found in the same menu used to enable passthrough in ESXi. In my case, my address was ***0000:83:00.0***.
* This returned: `0000:83:00.0 Class 0300: 1002:66a0`.
* 1002 is our vendor ID, 66a0 is our device ID.
* Edit the passthrough map with vi: `vi /etc/vmware/passthru.map`
* Add the following line at the bottom: `1002 66a0 d3d0 default`. Save and exit.
* Reboot the host (not sure if necessary).
* Open the settings for the VM. Delete any existing PCIe devices that reference the GPU(s) you've just edited, then re-add them.
* Power on your VM. There shouldn't be any messages stating `atombios stuck in loop`, and your devices should be visible via `rocm-smi`.

***IMPORTANT*** Do not change the passthrough status (i.e. enable/disable); doing so will remove the edit you made to passthru.map. The change does seem to persist across reboots, however. I tested this with both the `V420.rom` and `vbios2` VBIOSes. Both seemed to work, but when going from `V420.rom` to `vbios2`, I had to reboot the VM twice. Not sure why, but I believe this is a transient issue.
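Condensed into a copy-paste sketch (using the example address and IDs above; substitute whatever `lspci -n` reports for your own card, and editing with vi works just as well as the append):

```
# on the ESXi host, over ssh
lspci -n | grep 0000:83:00.0                          # note vendor:device, e.g. 1002:66a0
echo "1002 66a0 d3d0 default" >> /etc/vmware/passthru.map
reboot                                                # then remove and re-add the PCIe device on the VM
```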
2025-10-15T15:00:32
https://www.reddit.com/r/LocalLLaMA/comments/1o7ddsf/possible_mi50_passthrough_fix_for_esxi_similar_to/
TechSwag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7ddsf
false
null
t3_1o7ddsf
/r/LocalLLaMA/comments/1o7ddsf/possible_mi50_passthrough_fix_for_esxi_similar_to/
false
false
self
9
{'enabled': False, 'images': [{'id': 'Ouai1MsPw7cjkfexSpr60NuAL7CTGyLPSZyEyFki61U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ouai1MsPw7cjkfexSpr60NuAL7CTGyLPSZyEyFki61U.png?width=108&crop=smart&auto=webp&s=c79716f4cb6931c185deea566f0f6079c8955f1a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ouai1MsPw7cjkfexSpr60NuAL7CTGyLPSZyEyFki61U.png?width=216&crop=smart&auto=webp&s=4c6289a3b8d8d9d09272339aa530e2baddd351e0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ouai1MsPw7cjkfexSpr60NuAL7CTGyLPSZyEyFki61U.png?width=320&crop=smart&auto=webp&s=d9d10cb453c659abc9f143503bbde790af94275f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ouai1MsPw7cjkfexSpr60NuAL7CTGyLPSZyEyFki61U.png?width=640&crop=smart&auto=webp&s=bbe33e2876e5c87d4b6c235a68ba5866cc23ede6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ouai1MsPw7cjkfexSpr60NuAL7CTGyLPSZyEyFki61U.png?width=960&crop=smart&auto=webp&s=6ae74977e6408615f14292afe5ebe36399f4f9a5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ouai1MsPw7cjkfexSpr60NuAL7CTGyLPSZyEyFki61U.png?width=1080&crop=smart&auto=webp&s=a251e59b96fea74658b0f9b131479013d77e65d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ouai1MsPw7cjkfexSpr60NuAL7CTGyLPSZyEyFki61U.png?auto=webp&s=a938641556080c77392996198d258a6d10cae713', 'width': 1200}, 'variants': {}}]}
Get $200 in Free AI Credits (GPT-5, GLM, Claude, etc.)
1
[removed]
2025-10-15T14:39:07
https://www.reddit.com/r/LocalLLaMA/comments/1o7cta1/get_200_in_free_ai_credits_gpt5_glm_claude_etc/
Conscious-Boat9345
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7cta1
false
null
t3_1o7cta1
/r/LocalLLaMA/comments/1o7cta1/get_200_in_free_ai_credits_gpt5_glm_claude_etc/
false
false
self
1
null
GPU clouds are such a mess
0
Been bouncing between Vast.ai, Runpod, and CoreWeave this week just trying to keep a fine-tune running. Vast had a few A100s cheap but half of them went offline mid-session, Runpod throttled upload speeds so badly that checkpoints took forever, and CoreWeave was solid but everything worth renting was constantly out of stock. Out of frustration I tried a newer thing called [Runcrate.ai](http://Runcrate.ai) that, as far as I can tell, pulls GPUs from a bunch of providers. It actually worked better than I expected: it spun up an A100 in a minute without hunting around. Not perfect, but at least it didn't crash halfway through. It also has a nice feature that opens VS Code in the browser, which was impressive. Anyone else running into this GPU musical-chairs problem? Curious what setups people are using to stay sane. Any other providers you know of that are worth trying?
2025-10-15T14:30:07
https://www.reddit.com/r/LocalLLaMA/comments/1o7ckqw/gpu_clouds_are_such_a_mess/
TechnicianWeak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7ckqw
false
null
t3_1o7ckqw
/r/LocalLLaMA/comments/1o7ckqw/gpu_clouds_are_such_a_mess/
false
false
self
0
null
RWKV team releases a neurosymbolic language model
1
2025-10-15T14:28:45
https://x.com/BlinkDL_AI/status/1978372669847347432
pneuny
x.com
1970-01-01T00:00:00
0
{}
1o7cjfg
false
null
t3_1o7cjfg
/r/LocalLLaMA/comments/1o7cjfg/rwkv_team_releases_a_neurosymbolic_language_model/
false
false
default
1
null
MoE models benchmarks AMD iGPU
23
Follow-up to the request for testing a few other MoE models in the 10-35B size range: [https://www.reddit.com/r/LocalLLaMA/comments/1na96gx/moe\_models\_tested\_on\_minipc\_igpu\_with\_vulkan/](https://www.reddit.com/r/LocalLLaMA/comments/1na96gx/moe_models_tested_on_minipc_igpu_with_vulkan/)

System: Kubuntu 25.10, kernel 6.17.0-5-generic, 64GB DDR5 RAM, Ryzen 6800H with Radeon 680M iGPU (RADV REMBRANDT). Llama.cpp Vulkan build: 152729f8 (6565).

Models tested (the table below has two rows per model, pp512 and tg128, in this order):

1. aquif-3.5-a0.6b-preview-q8\_0
2. Ling-Coder-lite.i1-Q4\_K\_M
3. Ling-Coder-Lite-Q4\_K\_M
4. LLaDA-MoE-7B-A1B-Base.i1-Q4\_K\_M
5. LLaDA-MoE-7B-A1B-Instruct.i1-Q4\_K\_M
6. OLMoE-1B-7B-0125.i1-Q4\_K\_M
7. OLMoE-1B-7B-0125-Instruct-Q4\_K\_M
8. Qwen3-30B-A3B-Instruct-2507-Q4\_1
9. Qwen3-30B-A3B-Thinking-2507-Q4\_K\_M
10. Qwen3-Coder-30B-A3B-Instruct-UD-Q4\_K\_XL
11. Ring-lite-2507.i1-Q4\_1
12. Ring-lite-2507.i1-Q4\_K\_M

|model|size|params|backend|ngl|test|t/s|
|:-|:-|:-|:-|:-|:-|:-|
|aquif-3.5-a0.6b-preview-q8\_0|2.59 GiB|2.61 B|RPC,Vulkan|99|pp512|1296.87 ± 11.69|
|aquif-3.5-a0.6b-preview-q8\_0|2.59 GiB|2.61 B|RPC,Vulkan|99|tg128|103.45 ± 1.25|
|Ling-Coder-lite.i1-Q4\_K\_M|10.40 GiB|16.80 B|RPC,Vulkan|99|pp512|231.96 ± 0.65|
|Ling-Coder-lite.i1-Q4\_K\_M|10.40 GiB|16.80 B|RPC,Vulkan|99|tg128|35.94 ± 0.18|
|Ling-Coder-Lite-Q4\_K\_M|10.40 GiB|16.80 B|RPC,Vulkan|99|pp512|232.71 ± 0.36|
|Ling-Coder-Lite-Q4\_K\_M|10.40 GiB|16.80 B|RPC,Vulkan|99|tg128|35.21 ± 0.53|
|LLaDA-MoE-7B-A1B-Base.i1-Q4\_K\_M|4.20 GiB|7.36 B|RPC,Vulkan|99|pp512|399.54 ± 5.59|
|LLaDA-MoE-7B-A1B-Base.i1-Q4\_K\_M|4.20 GiB|7.36 B|RPC,Vulkan|99|tg128|64.91 ± 0.21|
|LLaDA-MoE-7B-A1B-Instruct.i1-Q4\_K\_M|4.20 GiB|7.36 B|RPC,Vulkan|99|pp512|396.74 ± 1.32|
|LLaDA-MoE-7B-A1B-Instruct.i1-Q4\_K\_M|4.20 GiB|7.36 B|RPC,Vulkan|99|tg128|64.60 ± 0.14|
|OLMoE-1B-7B-0125.i1-Q4\_K\_M|3.92 GiB|6.92 B|RPC,Vulkan|99|pp512|487.74 ± 3.10|
|OLMoE-1B-7B-0125.i1-Q4\_K\_M|3.92 GiB|6.92 B|RPC,Vulkan|99|tg128|78.33 ± 0.47|
|OLMoE-1B-7B-0125-Instruct-Q4\_K\_M|3.92 GiB|6.92 B|RPC,Vulkan|99|pp512|484.79 ± 4.26|
|OLMoE-1B-7B-0125-Instruct-Q4\_K\_M|3.92 GiB|6.92 B|RPC,Vulkan|99|tg128|78.76 ± 0.14|
|Qwen3-30B-A3B-Instruct-2507-Q4\_1|17.87 GiB|30.53 B|RPC,Vulkan|99|pp512|171.65 ± 0.69|
|Qwen3-30B-A3B-Instruct-2507-Q4\_1|17.87 GiB|30.53 B|RPC,Vulkan|99|tg128|27.04 ± 0.02|
|Qwen3-30B-A3B-Thinking-2507-Q4\_K\_M|17.28 GiB|30.53 B|RPC,Vulkan|99|pp512|142.18 ± 1.04|
|Qwen3-30B-A3B-Thinking-2507-Q4\_K\_M|17.28 GiB|30.53 B|RPC,Vulkan|99|tg128|28.79 ± 0.06|
|Qwen3-Coder-30B-A3B-Instruct-UD-Q4\_K\_XL|16.45 GiB|30.53 B|RPC,Vulkan|99|pp512|137.46 ± 0.66|
|Qwen3-Coder-30B-A3B-Instruct-UD-Q4\_K\_XL|16.45 GiB|30.53 B|RPC,Vulkan|99|tg128|29.86 ± 0.12|
|Ring-lite-2507.i1-Q4\_1|9.84 GiB|16.80 B|RPC,Vulkan|99|pp512|292.10 ± 0.17|
|Ring-lite-2507.i1-Q4\_1|9.84 GiB|16.80 B|RPC,Vulkan|99|tg128|35.86 ± 0.40|
|Ring-lite-2507.i1-Q4\_K\_M|10.40 GiB|16.80 B|RPC,Vulkan|99|pp512|234.03 ± 0.44|
|Ring-lite-2507.i1-Q4\_K\_M|10.40 GiB|16.80 B|RPC,Vulkan|99|tg128|35.75 ± 0.13|

Hyperlinks:

* [aquif-3.5-A4B-Think](https://huggingface.co/mradermacher/aquif-3.5-A4B-Think-GGUF)
* [aquif-3-moe-17b-a2.8b-i1](https://huggingface.co/mradermacher/aquif-3-moe-17b-a2.8b-GGUF?show_file_info=aquif-3-moe-17b-a2.8b.Q4_K_M.gguf)
* [Moonlight-16B-A3B-Instruct](https://huggingface.co/gabriellarson/Moonlight-16B-A3B-Instruct-GGUF)
* [gpt-oss-20b](https://huggingface.co/unsloth/gpt-oss-20b-GGUF)
* [ERNIE-4.5-21B-A3B-PT](https://huggingface.co/bartowski/baidu_ERNIE-4.5-21B-A3B-PT-GGUF)
* [SmallThinker-21BA3B-Instruct](https://huggingface.co/PowerInfer/SmallThinker-21BA3B-Instruct-GGUF)
* [Ling-lite-1.5-2507](https://huggingface.co/mradermacher/Ling-lite-1.5-2507-GGUF)
* [Ling-mini-2.0](https://huggingface.co/lovedheart/Ling-mini-2.0-GGUF)
* [Ling-Coder-lite](https://huggingface.co/mradermacher/Ling-Coder-lite-i1-GGUF) [2](https://huggingface.co/redponike/Ling-Coder-lite-GGUF?show_file_info=Ling-Coder-Lite-Q4_K_M.gguf)
* [Ring-lite-2507](https://huggingface.co/mradermacher/Ring-lite-2507-i1-GGUF)
* [Ring-mini-2.0](https://huggingface.co/lovedheart/Ring-mini-2.0-GGUF)
* [Ming-Lite-Omni-1.5](https://huggingface.co/inclusionAI/Ming-Lite-Omni-1.5) (No GGUF yet)
* [Qwen3-30B-A3B-Instruct-2507](https://huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF)
* [Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF)
* [Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF)
* [GroveMoE-Inst](https://huggingface.co/inclusionAI/GroveMoE-Inst) (No GGUF yet)
* [FlexOlmo-7x7B-1T](https://huggingface.co/models?search=FlexOlmo-7x7B-1T) (No GGUF yet)
* [FlexOlmo-7x7B-1T-RT](https://huggingface.co/allenai/FlexOlmo-7x7B-1T-RT) (No GGUF yet)
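For anyone who wants to reproduce or extend these runs on their own iGPU, here is a rough Python wrapper around llama-bench that collects the pp512/tg128 rows into one sorted list. The binary path and model folder are assumptions for illustration; the flags match the runs above (all layers offloaded with -ngl 99, default pp512/tg128 tests).

```python
#!/usr/bin/env python3
"""Rough helper: run llama-bench over a folder of GGUFs and collect pp512/tg128 speeds."""
import re
import subprocess
from pathlib import Path

LLAMA_BENCH = "./build/bin/llama-bench"        # assumed location of the llama.cpp Vulkan build
MODEL_DIR = Path("~/models/moe").expanduser()  # hypothetical folder holding the GGUF files

results = []
for gguf in sorted(MODEL_DIR.glob("*.gguf")):
    # Same settings as the table above: full offload (-ngl 99), default pp512/tg128 tests.
    out = subprocess.run(
        [LLAMA_BENCH, "-m", str(gguf), "-ngl", "99"],
        capture_output=True, text=True, check=False,
    ).stdout
    for line in out.splitlines():
        # llama-bench prints a markdown table; grab the pp512/tg128 rows.
        m = re.search(r"\|\s*(pp512|tg128)\s*\|\s*([\d.]+)\s*±", line)
        if m:
            results.append((gguf.name, m.group(1), float(m.group(2))))

# Sort by speed so the fastest models are easy to spot.
for name, test, tps in sorted(results, key=lambda r: -r[2]):
    print(f"{name:55s} {test:6s} {tps:8.2f} t/s")
```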
2025-10-15T14:25:29
https://www.reddit.com/r/LocalLLaMA/comments/1o7cgaf/moe_models_benchmarks_amd_igpu/
tabletuser_blogspot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7cgaf
false
null
t3_1o7cgaf
/r/LocalLLaMA/comments/1o7cgaf/moe_models_benchmarks_amd_igpu/
false
false
self
23
{'enabled': False, 'images': [{'id': 'fWmX0ITIf3YgSgExTvGE98yaOYOrgs6nK2CpBra5XfA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fWmX0ITIf3YgSgExTvGE98yaOYOrgs6nK2CpBra5XfA.png?width=108&crop=smart&auto=webp&s=95f2f3b2cb673214c8fc847d20bcc64947748e79', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fWmX0ITIf3YgSgExTvGE98yaOYOrgs6nK2CpBra5XfA.png?width=216&crop=smart&auto=webp&s=9c9cddb6a2c063f9440a1bfa2e691a36edbe90ee', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fWmX0ITIf3YgSgExTvGE98yaOYOrgs6nK2CpBra5XfA.png?width=320&crop=smart&auto=webp&s=9151d65072acca17f05515846eb138006a5503a3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fWmX0ITIf3YgSgExTvGE98yaOYOrgs6nK2CpBra5XfA.png?width=640&crop=smart&auto=webp&s=03e9fd054bef966cc74fa5afd968acefb57896e1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fWmX0ITIf3YgSgExTvGE98yaOYOrgs6nK2CpBra5XfA.png?width=960&crop=smart&auto=webp&s=0153ca03c0b06e7b3a9f7eeceb63bd2974125c48', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fWmX0ITIf3YgSgExTvGE98yaOYOrgs6nK2CpBra5XfA.png?width=1080&crop=smart&auto=webp&s=705f740265f4c84939257cf55f4e3656069e3ebc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fWmX0ITIf3YgSgExTvGE98yaOYOrgs6nK2CpBra5XfA.png?auto=webp&s=951496ebdcf4f1ae135f3d337a8149575a2ffe3c', 'width': 1200}, 'variants': {}}]}
The Golang version of a multimodal chatbot is here!
4
The Golang version of a multimodal chatbot is here! GitHub address: [https://github.com/ai-bot-pro/achatbot-go](https://github.com/ai-bot-pro/achatbot-go)

* A local websocket voice agent is included, featuring a local VAD+ASR+LLM+TTS pipeline. More interesting pipeline configurations will be added later\~
* These features have already been implemented in the Python version, [achatbot](https://github.com/ai-bot-pro/achatbot). Prototyping is faster in Python because it is the mainstream language for model training and inference, while the underlying operators are typically implemented in C/C++ for deep hardware integration, operator optimization, and loading quantized weights.
* The main reason for reimplementing it in Golang is to make deployment optimization easier for production-level application services. If your existing backend runs on a Golang stack and involves multimodal interactions, you can use the achatbot-go library to integrate with your services. For the most part, you only need to write the business processor logic (to handle the different frames) and then assemble those processors into a pipeline for execution; a rough sketch of that pattern follows below.
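For readers wondering what "write processors, assemble them into a pipeline" looks like in practice, here is a minimal sketch of that pattern. It is written in Python for brevity, all class and method names are hypothetical, and it is not the achatbot-go API — just the general frame-processor idea the post describes.

```python
# Minimal sketch of the "processors handling frames, assembled into a pipeline" pattern.
# Hypothetical names only; this is not the achatbot / achatbot-go API.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    kind: str        # e.g. "audio" or "text"
    payload: object


class Processor:
    def process(self, frame: Frame) -> Frame:
        raise NotImplementedError


class VAD(Processor):
    def process(self, frame: Frame) -> Frame:
        return frame  # pretend every audio frame contains speech


class ASR(Processor):
    def process(self, frame: Frame) -> Frame:
        return Frame("text", f"transcript of {len(frame.payload)} bytes")


class LLM(Processor):
    def process(self, frame: Frame) -> Frame:
        return Frame("text", f"reply to: {frame.payload}")


class TTS(Processor):
    def process(self, frame: Frame) -> Frame:
        return Frame("audio", f"<speech for '{frame.payload}'>")


class Pipeline:
    """Runs frames through processors in order, like the VAD+ASR+LLM+TTS chain."""

    def __init__(self, processors: List[Processor]):
        self.processors = processors

    def run(self, frame: Frame) -> Frame:
        for p in self.processors:
            frame = p.process(frame)
        return frame


if __name__ == "__main__":
    pipeline = Pipeline([VAD(), ASR(), LLM(), TTS()])
    print(pipeline.run(Frame("audio", b"...pcm bytes...")))
```

The point of the pattern is that business logic lives in small, swappable processors, so adding a new modality or model only means writing one more processor and inserting it into the chain.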
2025-10-15T14:19:51
https://www.reddit.com/r/LocalLLaMA/comments/1o7catf/the_golang_version_of_a_multimodal_chatbot_is_here/
DueMatter9914
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7catf
false
null
t3_1o7catf
/r/LocalLLaMA/comments/1o7catf/the_golang_version_of_a_multimodal_chatbot_is_here/
false
false
self
4
{'enabled': False, 'images': [{'id': 'hrpfW7XolAHetON5YxQT6x0wScHuInlLW6yAmwJgSXU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hrpfW7XolAHetON5YxQT6x0wScHuInlLW6yAmwJgSXU.png?width=108&crop=smart&auto=webp&s=d969bc56a307574c059863c4950aafb76a2336af', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hrpfW7XolAHetON5YxQT6x0wScHuInlLW6yAmwJgSXU.png?width=216&crop=smart&auto=webp&s=b5d83904ab8edb4f596f7df9d2ba8e95a7ef99f2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hrpfW7XolAHetON5YxQT6x0wScHuInlLW6yAmwJgSXU.png?width=320&crop=smart&auto=webp&s=dbb99df2219868078ba517f5fca05cf7dd949e94', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hrpfW7XolAHetON5YxQT6x0wScHuInlLW6yAmwJgSXU.png?width=640&crop=smart&auto=webp&s=0351c3a2e5176cce44b730c3b651d9c430cea1fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hrpfW7XolAHetON5YxQT6x0wScHuInlLW6yAmwJgSXU.png?width=960&crop=smart&auto=webp&s=f8e059275b6a8d5acbff4a9c33782b636fa0ca76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hrpfW7XolAHetON5YxQT6x0wScHuInlLW6yAmwJgSXU.png?width=1080&crop=smart&auto=webp&s=f356ef8a86518e0cc631802454403acd279f60f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hrpfW7XolAHetON5YxQT6x0wScHuInlLW6yAmwJgSXU.png?auto=webp&s=7b7d71eeb22969fcd0d6ca75bb88ce8769746da9', 'width': 1200}, 'variants': {}}]}
Looking for a tower to host my LLM
0
Hey guys, I am looking to self-host an LLM. I have already tried running Phi-3 on my laptop with some success, but the response times are crazy, so I want to upgrade. I am thinking about getting a Dell Precision 3620 with an i7-7700 at 3.6GHz and 16GB of DDR4 RAM, no HDD and no OS. I want to add an Nvidia 3060 and a new HDD, then load it with a Linux distro. The tower is about 130 USD with no HDD and no OS. Is this a good price? Does my setup sound like it can work? Am I missing anything?
2025-10-15T14:16:03
https://www.reddit.com/r/LocalLLaMA/comments/1o7c79e/looking_for_a_tower_to_host_my_llm/
Helpful-Funny-876
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7c79e
false
null
t3_1o7c79e
/r/LocalLLaMA/comments/1o7c79e/looking_for_a_tower_to_host_my_llm/
false
false
self
0
null
Reasoning should be thought of as a drawback, not a feature
28
When a new model is released, it’s now common for people to ask “Is there a reasoning version?” But reasoning is not a feature. If anything, it’s a drawback. Reasoning models have only two observable differences from traditional (non-reasoning) models: 1. Several seconds (or even minutes, depending on your inference speed) of additional latency before useful output arrives. 2. A wall of text preceding every response that is almost always worthless to the user. Reasoning (which is perhaps better referred to as context pre-filling) is a mechanism that allows some models to give better responses to some prompts, at the cost of dramatically higher output latency. It is not, however, a feature in itself, any more than having 100 billion extra parameters is a “feature”. The feature is the model quality, and reasoning can be a way to improve it. But the presence of reasoning is worthless *by itself*, and should be considered a bad thing unless proven otherwise in every individual case.
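To put a rough number on that latency cost, here is a purely illustrative back-of-envelope calculation; the token count and decode speed below are assumptions, not measurements:

```python
# Illustrative only: extra wait before useful output ≈ reasoning tokens / decode speed.
reasoning_tokens = 1500      # assumed length of a "thinking" block on a hard prompt
decode_tok_per_s = 30.0      # assumed local decode speed

extra_latency_s = reasoning_tokens / decode_tok_per_s
print(f"~{extra_latency_s:.0f} s before the first useful token")   # -> ~50 s
```

At cloud speeds that overhead shrinks to a few seconds; on a modest local setup it can easily stretch into minutes, which is the tradeoff the post is pointing at.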
2025-10-15T14:02:59
https://www.reddit.com/r/LocalLLaMA/comments/1o7bve2/reasoning_should_be_thought_of_as_a_drawback_not/
-p-e-w-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7bve2
false
null
t3_1o7bve2
/r/LocalLLaMA/comments/1o7bve2/reasoning_should_be_thought_of_as_a_drawback_not/
false
false
self
28
null
DGX Spark is just a more expensive (probably underclocked) AGX Thor
63
It was weird not to see any detailed specs on Nvidia's DGX Spark spec sheet. There is no mention of how many CUDA/tensor cores it has (they mention the CUDA core count only in the [DGX Guide](https://docs.nvidia.com/dgx/dgx-spark/dgx-spark.pdf) for developers, but still, why so buried?). This is in contrast to the AGX Thor, where they list the specs in detail. So I assumed the DGX Spark was a nerfed version of the AGX Thor, given that Nvidia's marketing states the Thor's throughput is 2000 TFLOPS while the Spark's is 1000 TFLOPS. The Thor also has a similar ecosystem and tech stack (i.e. Nvidia-branded Ubuntu).

But then [The Register, in their review yesterday](https://www.theregister.com/2025/10/14/dgx_spark_review), actually listed the number of CUDA cores, tensor cores, and RT cores. To my surprise, the Spark packs 2x the CUDA cores and 2x the tensor cores of the Thor, plus 48 RT cores.

| Feature | **DGX Spark** | **AGX Thor** |
| :-- | :-- | :-- |
| TDP | ~140 W | 40 – 130 W |
| CUDA Cores | 6,144 | 2,560 |
| Tensor Cores | 192 (unofficial, really) | 96 |
| Peak FP4 (sparse) | ≈ 1,000 TFLOPS | ≈ 2,070 TFLOPS |

And now I have more questions than answers. The [benchmarks of the Thor](https://i.ibb.co/C3BGBy0T/2025-10-15-15-27.png) actually show numbers similar [to the Ryzen AI Max and M4 Pro](https://www.youtube.com/watch?v=FVPE5zCte_E), so again more confusion, because the Thor should be "twice as fast for AI" as the Spark. This goes to show that the "AI TFLOPS" metric is absolutely useless, because on paper the Spark also packs more cores. Maybe it matters for training/finetuning, but then we would have observed it for inference too.

The only explanation is that Nvidia underclocked the DGX Spark (some reviewers like NetworkChuck reported very hot devices), so the small form factor is not helping take full advantage of the hardware, and I wonder how it will fare under continuous usage (i.e. finetuning/training). We've seen this with the Ryzen AI, where the EVO-x2 takes off to space with those fans. I saw some benchmarks with vLLM and [batched llama.cpp](https://github.com/ggml-org/llama.cpp/discussions/16578) being very good, which is probably where the Spark's extra cores would shine compared to a Mac, a Ryzen AI, or the Thor.

Nonetheless, the value offering of the Spark ($4k) is nearly the same (at least in observed performance) as that of the Thor ($3.5k), yet it costs more. If you go by "AI TFLOPS" on paper, the Thor is the better deal, and a bit cheaper. If you go by raw core counts, the Spark (probably if properly overclocked) might give you better bang for your buck in the long term (good luck with the warranty though). But if you want inference: get a Ryzen AI Max if you're on a budget, or splurge on a Mac. If you have the space and don't mind the power draw, DDR4 servers plus old AMD GPUs are probably the way to go. For batched inference, we need better data for comparison. But from what I have seen so far, it's a tough market for the DGX Spark, and Nvidia's marketing is not helping at all.
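One hedged way to see why the observed decode numbers cluster together regardless of "AI TFLOPS": single-stream token generation is mostly memory-bandwidth bound, so a back-of-envelope estimate from bandwidth alone already predicts similar speeds across these boxes. The bandwidth figures below are approximate public specs and the model size is just an example dense Q4 model, so treat the output as an upper bound, not a benchmark.

```python
# Rough illustration: for a single stream, each generated token must read (roughly)
# all active weights once, so decode speed is capped by memory bandwidth / model size.
def est_decode_tps(mem_bw_gb_s: float, model_size_gb: float) -> float:
    return mem_bw_gb_s / model_size_gb

model_gb = 13.0  # e.g. a dense ~24B model at ~Q4; MoE models read fewer bytes per token and run faster

for name, bw in [("DGX Spark", 273), ("Ryzen AI Max 395", 256), ("M4 Pro", 273)]:
    print(f"{name:18s} ~{est_decode_tps(bw, model_gb):5.1f} t/s upper bound")
```

With near-identical bandwidth, the three machines land within a few tokens per second of each other, which matches the reviews; the compute advantage mostly shows up in prompt processing and batched workloads.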
2025-10-15T13:58:50
https://www.reddit.com/r/LocalLLaMA/comments/1o7brfl/dgx_spark_is_just_a_more_expensive_probably/
waiting_for_zban
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7brfl
false
null
t3_1o7brfl
/r/LocalLLaMA/comments/1o7brfl/dgx_spark_is_just_a_more_expensive_probably/
false
false
self
63
{'enabled': False, 'images': [{'id': 'SExczAMvrjlDuQHaYYYm62-Gsk5NAXCKZ_bSseSfMzg', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/V0-GVzz37oJWiEr8afsFAUw02dozKU4rEpN1t1xFamo.png?width=108&crop=smart&auto=webp&s=2e994f1f60b2b38befd321c3e1a924b2ca50f53a', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/V0-GVzz37oJWiEr8afsFAUw02dozKU4rEpN1t1xFamo.png?width=216&crop=smart&auto=webp&s=df50802348b6cdaef1286360a3d0edeb2a224987', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/V0-GVzz37oJWiEr8afsFAUw02dozKU4rEpN1t1xFamo.png?width=320&crop=smart&auto=webp&s=e11216566083ccc4afd4f0136bbf05553f989a96', 'width': 320}, {'height': 399, 'url': 'https://external-preview.redd.it/V0-GVzz37oJWiEr8afsFAUw02dozKU4rEpN1t1xFamo.png?width=640&crop=smart&auto=webp&s=0d2be64624932361edb266923041121b50ffb2fe', 'width': 640}], 'source': {'height': 399, 'url': 'https://external-preview.redd.it/V0-GVzz37oJWiEr8afsFAUw02dozKU4rEpN1t1xFamo.png?auto=webp&s=0cc3061ff0e2143fd6bef9e68439cf67e584dafb', 'width': 640}, 'variants': {}}]}
DGX Spark Invite - Thoughts?
1
I was really excited earlier this year about getting the DGX Spark for working with models locally. After the delays, I had some time to think about alternatives. The benchmarks being posted are a fraction of what some non-unified GPU setups manage, and I feel a bit disappointed (even though, in the back of my head, I knew that would probably be the case from the early bandwidth specs).

I feel like $4,000 is not even close to the value of cloud rentals for heavy model tasks (like training), and if I were to customize a model around 30B or under, which seems to be the sweet spot for the Spark, a 5090 system would be orders of magnitude faster, cost $4k or under, handle general use, and not be locked into the Spark's OS. I would say this also applies to running 70B models, which a 5090 has also been pretty good with, since training a model of that size probably needs to be done via cloud anyway.

A Ryzen AI Max 395+ is about half the price and seems to be nearly on par in performance. Even when it costs more than half the price, you usually get it in a nice laptop at about a 40% discount from the Spark with 80%+ of the benchmarks. Then there is the Apple ecosystem and the potential for new chipsets next year (the M5 was released today). Today, ~$3,600 can get you a lot of unified memory and similar performance; a new chipset next year may bring even faster performance with a really large unified memory pool. All guesses for now, though.

So instead of an impulse buy, I would like to see if this is really worth it for working with models locally. I feel like the Spark is caught in a void: able to run big models locally, but AMD beat them to it at a much cheaper price with almost on-par performance, while training and other performance-heavy uses are almost always outdone by a 5090 or cloud rentals. I appreciate any thoughts so I don't have FOMO if I just release my reservation and don't get it.
2025-10-15T13:53:04
https://www.reddit.com/r/LocalLLaMA/comments/1o7bm90/dgx_spark_invite_thoughts/
randomoptionsdude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7bm90
false
null
t3_1o7bm90
/r/LocalLLaMA/comments/1o7bm90/dgx_spark_invite_thoughts/
false
false
self
1
null
Training qwen 3VL 8b thinking
5
Hey guys, just had a question: I want to train Qwen3-VL 8B Thinking on the dataset I used to train Qwen2.5-VL 7B. Is it necessary to include a thinking part in the training data for 3VL, or will it still be OK without one? Should I maybe move to the Instruct version instead? I don't really care about the time it takes; I want full precision. But I was wondering whether training the Thinking one would make its reasoning shorter and more precise, because it seems to overthink a bit.
2025-10-15T13:44:38
https://www.reddit.com/r/LocalLLaMA/comments/1o7beip/training_qwen_3vl_8b_thinking/
Severe_Biscotti2349
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7beip
false
null
t3_1o7beip
/r/LocalLLaMA/comments/1o7beip/training_qwen_3vl_8b_thinking/
false
false
self
5
null
Anyone test two DGX Sparks linked via their ConnectX yet?
6
> NVIDIA ConnectX™ networking can connect two NVIDIA DGX Spark supercomputers to enable inference on models up to 405B parameters.

Anyone got a dual-Spark 405B setup going? Should be something like 0.5 tok/sec decode.
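For anyone curious where an estimate in that ballpark comes from, here is a hedged back-of-envelope (all numbers are assumptions): splitting the layers across two Sparks makes the model fit, but for a single stream the two pipeline stages run back to back, so bandwidth does not add up.

```python
# Hedged back-of-envelope for dual-Spark 405B decode (single stream, pipeline parallel).
weights_gb = 405e9 * 4.5 / 8 / 1e9   # ~228 GB of weights at ~4.5 bits/param (assumption)
bw_gb_s = 273                        # per-Spark LPDDR5x bandwidth

# Each token streams every layer's weights once; the two halves are read sequentially,
# so the effective bandwidth for one stream is still that of a single device.
ideal_tok_s = bw_gb_s / weights_gb
print(f"~{ideal_tok_s:.1f} tok/s ideal upper bound")   # ≈ 1.2 tok/s before overhead
```

Real throughput lands well below that ideal bound once ConnectX transfers, KV-cache reads, and compute overhead are included, which is how you end up in fraction-of-a-token-per-second territory.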
2025-10-15T13:44:22
https://www.reddit.com/r/LocalLLaMA/comments/1o7beae/anyone_test_two_dgx_sparks_linked_via_their/
kryptkpr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7beae
false
null
t3_1o7beae
/r/LocalLLaMA/comments/1o7beae/anyone_test_two_dgx_sparks_linked_via_their/
false
false
self
6
null
Apple unveils M5
777
Following the iPhone 17 AI accelerators, most of us were expecting the same tech to be added to the M5. Here it is! Let's see what the M5 Pro & Max will add. The speedup from M4 to M5 seems to be around 3.5x for prompt processing.

Faster SSDs & RAM:

> Additionally, with up to 2x faster SSD performance than the prior generation, the new 14-inch MacBook Pro lets users load a local LLM faster, and they can now choose up to 4TB of storage.

150GB/s of unified memory bandwidth
2025-10-15T13:34:26
https://i.redd.it/5ehnojlm2avf1.png
Agreeable-Rest9162
i.redd.it
1970-01-01T00:00:00
0
{}
1o7b5i4
false
null
t3_1o7b5i4
/r/LocalLLaMA/comments/1o7b5i4/apple_unveils_m5/
false
false
default
777
{'enabled': True, 'images': [{'id': '5ehnojlm2avf1', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/5ehnojlm2avf1.png?width=108&crop=smart&auto=webp&s=7f17cc821de810412c90a6a47815a81368d27a00', 'width': 108}, {'height': 101, 'url': 'https://preview.redd.it/5ehnojlm2avf1.png?width=216&crop=smart&auto=webp&s=6d1a76b5a927593aa2db1a6de4e9d7d33469261e', 'width': 216}, {'height': 151, 'url': 'https://preview.redd.it/5ehnojlm2avf1.png?width=320&crop=smart&auto=webp&s=9475f1354e931294904516c6a696ad90f57037e4', 'width': 320}, {'height': 302, 'url': 'https://preview.redd.it/5ehnojlm2avf1.png?width=640&crop=smart&auto=webp&s=bbc46c6e19f88c18588d2f5384d7fb2dd4717f50', 'width': 640}, {'height': 453, 'url': 'https://preview.redd.it/5ehnojlm2avf1.png?width=960&crop=smart&auto=webp&s=b39839ce73a8f086e9cb74a88935748a6ca20f74', 'width': 960}, {'height': 509, 'url': 'https://preview.redd.it/5ehnojlm2avf1.png?width=1080&crop=smart&auto=webp&s=b8bd7c0d34fe7f50e45c540d449bdc6957b6f5b8', 'width': 1080}], 'source': {'height': 1018, 'url': 'https://preview.redd.it/5ehnojlm2avf1.png?auto=webp&s=765f22ee011ba53cdfdd9e872868bf5e13ba7b14', 'width': 2156}, 'variants': {}}]}
Looks like the DGX Spark is a bad 4K investment vs Mac
92
https://preview.redd.it/…te dollar short.
2025-10-15T13:29:44
https://www.reddit.com/r/LocalLLaMA/comments/1o7b1i3/looks_like_the_dgx_spark_a_bad_4k_investment_vs/
meshreplacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7b1i3
false
null
t3_1o7b1i3
/r/LocalLLaMA/comments/1o7b1i3/looks_like_the_dgx_spark_a_bad_4k_investment_vs/
false
false
https://b.thumbs.redditm…-yiTMqhNj4TQ.jpg
92
null
Perplexity >> Chrome (Comet)
1
[removed]
2025-10-15T12:56:01
https://pplx.ai/deepanshut23403
WorthScar2724
pplx.ai
1970-01-01T00:00:00
0
{}
1o7a8hf
false
null
t3_1o7a8hf
/r/LocalLLaMA/comments/1o7a8hf/perplexity_chrome_comet/
false
false
default
1
null
Perplexity >> Chrome (Comet)
1
[removed]
2025-10-15T12:54:26
https://pplx.ai/deepanshut23403
WorthScar2724
pplx.ai
1970-01-01T00:00:00
0
{}
1o7a77z
false
null
t3_1o7a77z
/r/LocalLLaMA/comments/1o7a77z/perplexity_chrome_comet/
false
false
default
1
null
What's the biggest blocker you've hit using LLMs for actual, large-scale coding projects?
25
Beyond the hype, when you try to integrate LLMs into a real, large codebase, what consistently fails or holds you back? Is it the context length, losing understanding of the architecture, something just breaking with no clear reason, or constantly having to clean up the output? On complex tasks, I keep finding myself spending more time fixing AI-generated code than it would have taken to write it from scratch. What's your biggest pain point?
2025-10-15T12:52:50
https://www.reddit.com/r/LocalLLaMA/comments/1o7a5xw/whats_the_biggest_blocker_youve_hit_using_llms/
Street-Lie-2584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1o7a5xw
false
null
t3_1o7a5xw
/r/LocalLLaMA/comments/1o7a5xw/whats_the_biggest_blocker_youve_hit_using_llms/
false
false
self
25
null
Perplexity >> Chrome (Comet)
0
Bro this Comet browser by Perplexity is like actually insane..... [https://pplx.ai/deepanshut23403](https://pplx.ai/deepanshut23403)
2025-10-15T12:49:20
https://pplx.ai/deepanshut23403
Diligent_Debate6692
pplx.ai
1970-01-01T00:00:00
0
{}
1o7a33g
false
null
t3_1o7a33g
/r/LocalLLaMA/comments/1o7a33g/perplexity_chrome_comet/
false
false
default
0
null