Dataset columns:

title · string (length 1–300)
score · int64 (0–8.54k)
selftext · string (length 0–41.5k)
created · timestamp[ns] (2023-04-01 04:30:41 – 2026-03-04 02:14:14)
url · string (length 0–878)
author · string (length 3–20)
domain · string (length 0–82)
edited · timestamp[ns] (1970-01-01 00:00:00 – 2026-02-19 14:51:53)
gilded · int64 (0–2)
gildings · string (7 classes)
id · string (length 7)
locked · bool (2 classes)
media · string (length 646–1.8k)
name · string (length 10)
permalink · string (length 33–82)
spoiler · bool (2 classes)
stickied · bool (2 classes)
thumbnail · string (length 4–213)
ups · int64 (0–8.54k)
preview · string (length 301–5.01k)
The Evolution of Search - A Brief History of Information Retrieval
3
2025-09-26T06:52:33
https://youtu.be/ghE4gQkx2b4
kushalgoenka
youtu.be
1970-01-01T00:00:00
0
{}
1nqufim
false
{'oembed': {'author_name': 'Kushal Goenka', 'author_url': 'https://www.youtube.com/@KushalGoenka', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ghE4gQkx2b4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="The Evolution of Search - A Brief History of Information Retrieval"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ghE4gQkx2b4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'The Evolution of Search - A Brief History of Information Retrieval', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1nqufim
/r/LocalLLaMA/comments/1nqufim/the_evolution_of_search_a_brief_history_of/
false
false
default
3
{'enabled': False, 'images': [{'id': 'rdiqDR_pScizHcdiYSt4CYhHHCgIfLWKQPCyQEs-E3k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rdiqDR_pScizHcdiYSt4CYhHHCgIfLWKQPCyQEs-E3k.jpeg?width=108&crop=smart&auto=webp&s=c6a61400b4c4a7d819e6ee1cc4f81054c8111be5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rdiqDR_pScizHcdiYSt4CYhHHCgIfLWKQPCyQEs-E3k.jpeg?width=216&crop=smart&auto=webp&s=1b80420617efab84fd246466110b01765eff4b10', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rdiqDR_pScizHcdiYSt4CYhHHCgIfLWKQPCyQEs-E3k.jpeg?width=320&crop=smart&auto=webp&s=9980443776d6f25f3145283b6475431f5b3cff07', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rdiqDR_pScizHcdiYSt4CYhHHCgIfLWKQPCyQEs-E3k.jpeg?auto=webp&s=72da63e2d13ca4204c5cdefcf37ce009df6c6eb0', 'width': 480}, 'variants': {}}]}
Music API
0
Since the Spotify API is not free anymore, what are the best alternatives to it, other than YouTube?
2025-09-26T06:50:24
https://www.reddit.com/r/LocalLLaMA/comments/1nquecb/music_api/
Jiko040903
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nquecb
false
null
t3_1nquecb
/r/LocalLLaMA/comments/1nquecb/music_api/
false
false
self
0
null
lm studio dolphin mistral doesn't censor anything
0
Guys, my Dolphin Mistral 2.8 7B v02 has censorship. I ask it how to m4ke we4pons, just as a test lol, I'm not a disturbed person, and it comes back saying that this doesn't respect the law and it can't help me with that. How could I fix this?
2025-09-26T05:43:44
https://www.reddit.com/r/LocalLLaMA/comments/1nqtbbl/lm_studio_dolphin_mistral_nao_censura_nada/
TrickyPhilosopher417
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqtbbl
false
null
t3_1nqtbbl
/r/LocalLLaMA/comments/1nqtbbl/lm_studio_dolphin_mistral_nao_censura_nada/
false
false
self
0
null
Generate a json from a para
2
I am using llama-3.1-8b instruct with vLLM as the inference engine. Before this setup I used Gemma 3B with Ollama. In both setups the LLM takes a paragraph and outputs JSON of the format `{"title": "", "children": [{"title": "", "children": [...]}]}`. Now the problem is that the vLLM setup at times isn't generating proper JSON; it fails to generate well-formed JSON containing the important keywords.

Payload being sent:

```
{
  "model": "./llama-3.1-8b",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant that generates JSON mind maps."
    },
    {
      "role": "user",
      "content": "You are a helpful assistant that creates structured mind maps. Given the following input content, carefully extract the main concepts and structure them as a nested JSON mind map. Content: A quatrenion is a mathematical object that extends the concept of a complex number to four dimensions. It is a number of the form a + bi + cj + dk, where a, b, c, and d are real numbers and i, j, and k are imaginary units that satisfy the relations i^2 = j^2 = k^2 = ijk = -1. Quaternions are used in various fields such as computer graphics, robotics, and quantum mechanics. Return only the JSON structure representing the mind map, without any explanations or extra text."
    }
  ],
  "temperature": 0,
  "max_tokens": 800,
  "guided_json": {
    "type": "object",
    "properties": {
      "title": { "type": "string" },
      "children": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "title": { "type": "string" },
            "children": { "$ref": "#/properties/children" }
          },
          "required": [ "title", "children" ]
        }
      }
    },
    "required": [ "title", "children" ],
    "additionalProperties": false
  }
}
```

Output:

```
[INFO] httpx - HTTP Request: POST http://x.x.x.x:9000/v1/chat/completions "HTTP/1.1 200 OK"
[INFO] root - {
  "title": "quatrenion",
  "children": [
    { "title": "mathematical object",
      "children": [
        { "title": "complex number",
          "children": [
            { "title": "real numbers",
              "children": [
                { "title": "imaginary units",
                  "children": [
                    { "title": "ijk", },
                    { "title": "real numbers", },
                    { "title": "imaginary units", },
                    { "title": "real numbers", },
                    { "title": "imaginary units", },
                    and similar shit ......}
```

How to tackle this problem?
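One thing worth checking (an assumption on my part, not a confirmed diagnosis): the `guided_json` schema above puts `children` in `required` at every nesting level and recurses via `$ref`, so the only way a node can terminate is by emitting an empty `children` array; a schema whose leaves don't require `children` gives the decoder a natural base case. A minimal sketch of a depth-bounded schema builder (`mindmap_schema` is a hypothetical helper, not part of vLLM):

```python
def mindmap_schema(depth: int) -> dict:
    """Return a mind-map JSON schema nested at most `depth` levels.

    Leaves only require "title"; "children" is optional and disappears
    entirely at depth 0, so constrained decoding has a guaranteed stop.
    """
    node = {
        "type": "object",
        "properties": {"title": {"type": "string"}},
        "required": ["title"],
        "additionalProperties": False,
    }
    if depth > 0:
        node["properties"]["children"] = {
            "type": "array",
            "items": mindmap_schema(depth - 1),
        }
    return node

schema = mindmap_schema(3)  # pass this as the "guided_json" payload field
```

The depth bound also caps output length, which helps within a small `max_tokens` budget.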
2025-09-26T05:38:53
https://www.reddit.com/r/LocalLLaMA/comments/1nqt8iy/generate_a_json_from_a_para/
Dizzy-Watercress-744
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqt8iy
false
null
t3_1nqt8iy
/r/LocalLLaMA/comments/1nqt8iy/generate_a_json_from_a_para/
false
false
self
2
null
When to use Local LLMs over frontier models?
1
[removed]
2025-09-26T04:50:32
https://www.reddit.com/r/LocalLLaMA/comments/1nqsf8w/when_to_use_local_llms_over_frontier_models/
Spirited_Result9116
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqsf8w
false
null
t3_1nqsf8w
/r/LocalLLaMA/comments/1nqsf8w/when_to_use_local_llms_over_frontier_models/
false
false
self
1
null
This $5,999 RTX PRO 6000 Ebay listing is a scam, right?
1
[https://www.ebay.com/itm/157345680065](https://www.ebay.com/itm/157345680065)

I so badly want to believe this is real, but it's just too good to be true, right? Can anyone who knows how to spot a scam tell me whether it is or isn't?

https://preview.redd.it/uf4anoi5qfrf1.png?width=2077&format=png&auto=webp&s=bdd0edddeedad6cd8dabe60acf749f813d35813a
2025-09-26T04:15:41
https://www.reddit.com/r/LocalLLaMA/comments/1nqrsy7/this_5999_rtx_pro_6000_ebay_listing_is_a_scam/
RoboDogRush
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqrsy7
false
null
t3_1nqrsy7
/r/LocalLLaMA/comments/1nqrsy7/this_5999_rtx_pro_6000_ebay_listing_is_a_scam/
false
false
https://b.thumbs.redditm…XphhQ4pB3vok.jpg
1
{'enabled': False, 'images': [{'id': 'PLHOV0_nwuZRhZRZqJBfnATuU5v5sHwhr_4tAQxOpcU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PLHOV0_nwuZRhZRZqJBfnATuU5v5sHwhr_4tAQxOpcU.jpeg?width=108&crop=smart&auto=webp&s=76e6f46a6f861b6ca2e19c39236558abe0fcf305', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PLHOV0_nwuZRhZRZqJBfnATuU5v5sHwhr_4tAQxOpcU.jpeg?width=216&crop=smart&auto=webp&s=dc53288882e23ec3423af16397e343ccd250aff8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PLHOV0_nwuZRhZRZqJBfnATuU5v5sHwhr_4tAQxOpcU.jpeg?width=320&crop=smart&auto=webp&s=5fb73225bda7badfe4decb25651d28df02d7774e', 'width': 320}], 'source': {'height': 201, 'url': 'https://external-preview.redd.it/PLHOV0_nwuZRhZRZqJBfnATuU5v5sHwhr_4tAQxOpcU.jpeg?auto=webp&s=54ba714c19767cac09916d8e5e9fb9bb6f95c74e', 'width': 400}, 'variants': {}}]}
Can anyone explain what ai researchers do
0
Can anyone explain what ai researchers do
2025-09-26T04:07:10
https://www.reddit.com/r/LocalLLaMA/comments/1nqrn9u/can_anyone_explain_what_ai_researchers_do/
UmpireForeign7730
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqrn9u
false
null
t3_1nqrn9u
/r/LocalLLaMA/comments/1nqrn9u/can_anyone_explain_what_ai_researchers_do/
false
false
self
0
null
Kwaipilot/KAT-Dev
65
**KAT-Dev-32B** is an open-source 32B-parameter model for software engineering tasks. On SWE-Bench Verified, **KAT-Dev-32B** achieves competitive performance with **62.4%** of issues resolved, ranking **5th** among open-source models of all scales.
2025-09-26T03:41:01
https://huggingface.co/Kwaipilot/KAT-Dev
random-tomato
huggingface.co
1970-01-01T00:00:00
0
{}
1nqr5lp
false
null
t3_1nqr5lp
/r/LocalLLaMA/comments/1nqr5lp/kwaipilotkatdev/
false
false
default
65
{'enabled': False, 'images': [{'id': 'RFhQxPOdxc2Em-bjhotgIyTCAALVqXONnbv5f8ro8uY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RFhQxPOdxc2Em-bjhotgIyTCAALVqXONnbv5f8ro8uY.png?width=108&crop=smart&auto=webp&s=c808fc8bdc94640ebc20e7750ff9b3a2ec6c802a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RFhQxPOdxc2Em-bjhotgIyTCAALVqXONnbv5f8ro8uY.png?width=216&crop=smart&auto=webp&s=2e66f1658bb0e2be4638165b5050b3bd8146414e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RFhQxPOdxc2Em-bjhotgIyTCAALVqXONnbv5f8ro8uY.png?width=320&crop=smart&auto=webp&s=d9d525128d60fdae28d8f5dc9d5de20e8ae0a243', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RFhQxPOdxc2Em-bjhotgIyTCAALVqXONnbv5f8ro8uY.png?width=640&crop=smart&auto=webp&s=24d680deeef40cd14e1c0a13e00f25c88680f997', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RFhQxPOdxc2Em-bjhotgIyTCAALVqXONnbv5f8ro8uY.png?width=960&crop=smart&auto=webp&s=3d30c23223340ba9d943018135295c14fabd3e24', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RFhQxPOdxc2Em-bjhotgIyTCAALVqXONnbv5f8ro8uY.png?width=1080&crop=smart&auto=webp&s=e7dcd17c5f7f9d7e1f4715bcc01e5065f98a6d90', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RFhQxPOdxc2Em-bjhotgIyTCAALVqXONnbv5f8ro8uY.png?auto=webp&s=425539cf411bad70d9e9553fb1df1952ca0ca401', 'width': 1200}, 'variants': {}}]}
When are open tests and benchmarks relevant to you?
2
GPQA might give accurate science scores, but when did a test or benchmark last matter to you? Are closed benchmarks better because open ones get gamed? How do you choose based on your use case?
2025-09-26T03:40:38
https://www.reddit.com/r/LocalLLaMA/comments/1nqr5bw/when_are_open_tests_and_benchmarks_relevant_to_you/
Beestinge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqr5bw
false
null
t3_1nqr5bw
/r/LocalLLaMA/comments/1nqr5bw/when_are_open_tests_and_benchmarks_relevant_to_you/
false
false
self
2
null
Deterministic NLU Engine - Looking for Feedback on LLM Pain Points
1
Working on solving some major pain points I'm seeing with LLM-based chatbots/agents:

• **Narrow scope** - can only choose from a handful of intents vs. hundreds/thousands
• **Poor ambiguity handling** - guesses wrong instead of asking for clarification
• **Hallucinations** - unpredictable, prone to false positives
• **Single-focus limitation** - ignores side questions/requests in user messages

Just released an upgrade to my Sophia NLU Engine with a new POS tagger (99.03% accuracy, 20k words/sec, 142MB footprint) - one of the most accurate, fastest, and most compact available. Details, demo, GitHub: https://cicero.sh/r/sophia-upgrade-pos-tagger

Now finalizing **advanced contextual awareness** (2-3 weeks out) that will be:

- Deterministic and reliable
- Schema-driven for broad intent recognition
- Handles concurrent side requests
- Asks for clarification when needed
- Supports multi-turn dialog

Looking for feedback and insights as I finalize this upgrade. What pain points are you experiencing with current LLM agents? Any specific features you'd want to see? Happy to chat one-on-one - DM for contact info.
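As a toy illustration of the "deterministic, asks for clarification" behavior described above (keyword-overlap scoring is my own stand-in here, not how Sophia actually works): route a message to an intent only when there is a unique best match, and otherwise return a clarification request instead of guessing.

```python
# Hypothetical intent table: intent name -> trigger keywords.
INTENTS = {
    "refund": {"refund", "money", "back", "return"},
    "shipping": {"ship", "deliver", "track", "package"},
    "cancel": {"cancel", "stop", "subscription"},
}

def route(message: str) -> str:
    """Deterministically pick an intent, or 'clarify' on no match / a tie."""
    words = set(message.lower().split())
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores.values())
    top = [name for name, s in scores.items() if s == best]
    if best == 0 or len(top) > 1:
        return "clarify"  # never guess when evidence is absent or ambiguous
    return top[0]

print(route("I want my money back"))  # → refund
print(route("please help"))           # → clarify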
2025-09-26T03:24:50
https://www.reddit.com/r/LocalLLaMA/comments/1nqqul1/deterministic_nlu_engine_looking_for_feedback_on/
mdizak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqqul1
false
null
t3_1nqqul1
/r/LocalLLaMA/comments/1nqqul1/deterministic_nlu_engine_looking_for_feedback_on/
false
false
self
1
null
Help: my AI is summoning US political figures in Chinese.
0
So I wanted to give this model a test drive... should I be worried?? [unsloth/Qwen3-32B-GGUF](https://huggingface.co/unsloth/Qwen3-32B-GGUF)

https://preview.redd.it/qxb8mw2c9frf1.png?width=1115&format=png&auto=webp&s=43422f03e62dae9134381d49e49bcfd3db8b586b

https://preview.redd.it/ib2903az9frf1.png?width=624&format=png&auto=webp&s=b22e480b985b0bd8f5d82d351f05d80918024098

At first it just spammed random Chinese text, but then it started chanting "Trump, Trump, Trump" in the middle of it. Not quite what I expected from asking "What is the game Hangman?"

I'm posting this for two reasons:

1. It's hilarious and I had to share.
2. I might actually be doing something wrong; has anyone else seen this behavior?

[at runtime](https://preview.redd.it/bhruonuabfrf1.png?width=1592&format=png&auto=webp&s=f273389d9b251f05a49b33ac498d9a407a3cb071)

Specs:

* mobo: X870E Aorus Elite
* RAM: 2x32GB Corsair DDR5 @ 6000MHz
* GPU: RTX 5090 (32GB)
* Storage: 4TB Crucial SSD
* Plenty of cooling
2025-09-26T03:01:16
https://www.reddit.com/r/LocalLLaMA/comments/1nqqe60/help_my_ai_is_summoning_us_political_figures_in/
ZealousidealBoat6641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqqe60
false
null
t3_1nqqe60
/r/LocalLLaMA/comments/1nqqe60/help_my_ai_is_summoning_us_political_figures_in/
false
false
https://a.thumbs.redditm…QoK3lUvZYzV4.jpg
0
{'enabled': False, 'images': [{'id': '1mvlCK8scBuItPnAT_brlPR99Tg1TNLC8laAHJZM1rk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1mvlCK8scBuItPnAT_brlPR99Tg1TNLC8laAHJZM1rk.png?width=108&crop=smart&auto=webp&s=0cad770e0e0dfbee8919aba76f8971ad7aebfbd8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1mvlCK8scBuItPnAT_brlPR99Tg1TNLC8laAHJZM1rk.png?width=216&crop=smart&auto=webp&s=c88441e492afda2ff2a763beef9576312e1a4231', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1mvlCK8scBuItPnAT_brlPR99Tg1TNLC8laAHJZM1rk.png?width=320&crop=smart&auto=webp&s=639dc8b0e1f0dd75d2868fa802811628f2798e27', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1mvlCK8scBuItPnAT_brlPR99Tg1TNLC8laAHJZM1rk.png?width=640&crop=smart&auto=webp&s=4348315071c7ba4580f913e9b44a2c7554b98fb3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1mvlCK8scBuItPnAT_brlPR99Tg1TNLC8laAHJZM1rk.png?width=960&crop=smart&auto=webp&s=ed10a83ad51884929ebe5fdaeeed23b31cc192f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1mvlCK8scBuItPnAT_brlPR99Tg1TNLC8laAHJZM1rk.png?width=1080&crop=smart&auto=webp&s=3d7b5bec9b5391802c3c727e79ad68108f378cd6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1mvlCK8scBuItPnAT_brlPR99Tg1TNLC8laAHJZM1rk.png?auto=webp&s=361946c76c5681c167ccee46138b34f1e3e3b9f9', 'width': 1200}, 'variants': {}}]}
I made a library to help writing test code for vLLM.
6
Does anybody write test code while developing with vLLM? Introducing "vllm-mock", my new small open-source library. I love vLLM and know how important test code is for maintaining project quality and tracking bugs. But writing test code for LLM inference is hard because it costs GPU time (which means money🤑) and loading the whole model is pretty slow. So I made a small library that provides a mock instance for writing vLLM test code. With "vllm-mock", you don't need to create a vLLM mock instance on your own: I already made one!

[https://github.com/NomaDamas/vllm-mock](https://github.com/NomaDamas/vllm-mock)

Feel free to give a star💫 to the repo. Thank you:)

https://preview.redd.it/ns8xexofafrf1.png?width=1602&format=png&auto=webp&s=c9a469956824445feabd851ac31b78bfd530a7a1
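Library aside, the underlying pattern is the classic test double: inject the inference client into your application code, then stub it in tests so nothing loads a model or touches a GPU. A generic sketch with the stdlib's `unittest.mock` (the `summarize` function and `generate` method are made-up stand-ins, not vllm-mock's actual API):

```python
from unittest.mock import MagicMock

def summarize(llm, text: str) -> str:
    # Application code under test: delegates generation to the injected client.
    return llm.generate(f"Summarize: {text}")

# In a test, swap the real client for a mock with a canned response.
mock_llm = MagicMock()
mock_llm.generate.return_value = "a short summary"

assert summarize(mock_llm, "some long document ...") == "a short summary"
mock_llm.generate.assert_called_once()
```

The test then verifies prompt construction and call counts in milliseconds instead of minutes of model loading.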
2025-09-26T02:47:38
https://www.reddit.com/r/LocalLLaMA/comments/1nqq4n6/i_made_a_library_to_help_writing_test_code_for/
jeffrey-0711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqq4n6
false
null
t3_1nqq4n6
/r/LocalLLaMA/comments/1nqq4n6/i_made_a_library_to_help_writing_test_code_for/
false
false
https://b.thumbs.redditm…i9HrqQYV87_A.jpg
6
{'enabled': False, 'images': [{'id': 'YMIHwR4e0oK00ZXbDfhDREwv42vpChwnDCf7he5QwGI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YMIHwR4e0oK00ZXbDfhDREwv42vpChwnDCf7he5QwGI.png?width=108&crop=smart&auto=webp&s=11ea49991362a10a0a9dcb9a01ead32ec89d8693', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YMIHwR4e0oK00ZXbDfhDREwv42vpChwnDCf7he5QwGI.png?width=216&crop=smart&auto=webp&s=a939d20ef259a643d516f56f77504da297990baf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YMIHwR4e0oK00ZXbDfhDREwv42vpChwnDCf7he5QwGI.png?width=320&crop=smart&auto=webp&s=b52ca81f1b44641e7082220fa0a90eb4fd5ad058', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YMIHwR4e0oK00ZXbDfhDREwv42vpChwnDCf7he5QwGI.png?width=640&crop=smart&auto=webp&s=d179eec84dba993ba11d0ca1724e3f0c730327c4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YMIHwR4e0oK00ZXbDfhDREwv42vpChwnDCf7he5QwGI.png?width=960&crop=smart&auto=webp&s=5215c6e215b37f38365bd758bdb3460bd13b5f91', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YMIHwR4e0oK00ZXbDfhDREwv42vpChwnDCf7he5QwGI.png?width=1080&crop=smart&auto=webp&s=92df882a38e42ed5d149991981a14ca8ab79c845', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YMIHwR4e0oK00ZXbDfhDREwv42vpChwnDCf7he5QwGI.png?auto=webp&s=c81d7b3d9d2ae5097252e86e37d6ae50ec982861', 'width': 1200}, 'variants': {}}]}
AMD's GAIA for GenAI adds Linux support: using Vulkan for GPUs, no NPUs yet
11
2025-09-26T02:23:52
https://www.phoronix.com/news/AMD-GAIA-GenAI-Linux-Support
Fcking_Chuck
phoronix.com
1970-01-01T00:00:00
0
{}
1nqpnoc
false
null
t3_1nqpnoc
/r/LocalLLaMA/comments/1nqpnoc/amds_gaia_for_genai_adds_linux_support_using/
false
false
default
11
null
Introducing Zenbot
9
Hello. I'm an author. I am not a developer. In recent months I have taken an interest in LLMs. I have created Zenbot, an LLM-driven web browser. Zenbot browses the web for you. It's as simple as that. Think of it like a co-browser. It works as a plugin for Open WebUI, runs entirely locally, and lives inside your current browser. All you need to do is install Docker, or preferably, Podman. Check it out. Maybe you could use Zenbot to buy my book, [*Well's Rest*](https://www.royalroad.com/amazon/0646826778?maas=&ref=ast_author_mpb), available on [Amazon](https://www.royalroad.com/amazon/0646826778?maas=&ref=ast_author_mpb). Or continue to support this open source project at [https://ko-fi.com/dredgesta](https://ko-fi.com/dredgesta)
2025-09-26T01:53:03
https://github.com/michaelsoftmd/zenbot-chrome
Significant-Skin118
github.com
1970-01-01T00:00:00
0
{}
1nqp14a
false
null
t3_1nqp14a
/r/LocalLLaMA/comments/1nqp14a/introducing_zenbot/
false
false
https://external-preview…9f4b4652a3344541
9
{'enabled': False, 'images': [{'id': '9uH3Mu62qaTBstDt1_x7dzMUq0mH4nyJEqciU74CtL8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9uH3Mu62qaTBstDt1_x7dzMUq0mH4nyJEqciU74CtL8.png?width=108&crop=smart&auto=webp&s=3bccb891bbd8292b25b71c85c8800c59ccab65c9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9uH3Mu62qaTBstDt1_x7dzMUq0mH4nyJEqciU74CtL8.png?width=216&crop=smart&auto=webp&s=1f6045cf628ca13159640b3ded4ec2b6d7ce1c75', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9uH3Mu62qaTBstDt1_x7dzMUq0mH4nyJEqciU74CtL8.png?width=320&crop=smart&auto=webp&s=9d4c113b034cf7a02f3620ff2a5767009280d23b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9uH3Mu62qaTBstDt1_x7dzMUq0mH4nyJEqciU74CtL8.png?width=640&crop=smart&auto=webp&s=823555698e93aea4d4f8254a0dc76481f9fb4589', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9uH3Mu62qaTBstDt1_x7dzMUq0mH4nyJEqciU74CtL8.png?width=960&crop=smart&auto=webp&s=d57425181521e440dbe85d401576de7ca6b723f0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9uH3Mu62qaTBstDt1_x7dzMUq0mH4nyJEqciU74CtL8.png?width=1080&crop=smart&auto=webp&s=a52e4d69d38218101aeb499ccc2b7e97ebd6344d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9uH3Mu62qaTBstDt1_x7dzMUq0mH4nyJEqciU74CtL8.png?auto=webp&s=0fe4dcf5ebfccf42503b1246708fc669ba747a9a', 'width': 1200}, 'variants': {}}]}
NexNotes AI is the ultimate study weapon
0
NexNotes AI is an AI-powered study tool designed to help students and researchers streamline their note-taking. Users can paste links or notes and receive clean, smart notes instantly, with important information highlighted. The platform offers instant mind maps, study plans, flowcharts, summaries, quizzes, and more to enhance learning efficiency.

Key features:

✓ instant notes
✓ smart highlights
✓ mind maps
✓ study plans
✓ flowcharts
✓ summaries
✓ quizzes
✓ question paper generation
✓ doubt generation

Link: https://nexnotes-ai.pages.dev
2025-09-26T01:43:57
https://www.reddit.com/r/LocalLLaMA/comments/1nqouhy/nexnotes_ai_is_the_ultimate_study_weapon/
Anxious-Ad7338
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqouhy
false
null
t3_1nqouhy
/r/LocalLLaMA/comments/1nqouhy/nexnotes_ai_is_the_ultimate_study_weapon/
false
false
self
0
null
The current state of LLM benchmarks is so polluted
40
As the title says. Since the beginning of the LLM craze, every lab has been publishing and cherry-picking its results, and there's a lack of transparency from the AI labs. This only hurts consumers. There are multiple issues that exist today and haven't been solved:

1. Labs report only the benchmarks where their models look good; they cherry-pick results.
2. Some labs train on the very benchmarks they evaluate on. Maybe not on purpose, but the contamination is there.
3. Most published benchmarks are not actually useful at all; they are usually weird academic cases where the models fail, rather than real-world usage patterns of these models.
4. Every lab uses its own testing methodology, parameters, and prompts, and they seem to tune things until they appear better than the previous release.
5. Everyone implements their own benchmarks in their own way and never releases the code to reproduce them.
6. APIs fluctuate in quality, and some providers sell quantized versions instead of the original model, so we see regressions. Nobody is tracking this.

Is there anyone working on these issues? I'd love to talk if so. We just started working on independent benchmarking and plan to build a standard so anyone can build and publish their own benchmark easily, for any use case. All open source, open data.

Imagine a place that tests new releases and reports API regressions, in favor of the consumers. Not with academic, contaminated benchmarks but with actual real-world performance benchmarks. There are already great websites out there making an effort, but what I envision is a place where you can find hundreds of community-built benchmarks of all kinds (legal, healthcare, roleplay, instruction following, ASR, etc.) and a way to monitor the real quality of the models out there.

Does anyone else share this vision, or is it just me going crazy because no good solution exists?
2025-09-26T01:03:46
https://www.reddit.com/r/LocalLLaMA/comments/1nqo0oo/the_current_state_of_llm_benchmarks_is_so_polluted/
Odd_Tumbleweed574
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqo0oo
false
null
t3_1nqo0oo
/r/LocalLLaMA/comments/1nqo0oo/the_current_state_of_llm_benchmarks_is_so_polluted/
false
false
self
40
null
Any models fine-tuned for verse?
3
Basically title. Wondering if any models out there work well for verse, poetry, lyrics, etc.
2025-09-26T00:49:33
https://www.reddit.com/r/LocalLLaMA/comments/1nqnpvv/any_models_finetuned_for_verse/
marmotter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqnpvv
false
null
t3_1nqnpvv
/r/LocalLLaMA/comments/1nqnpvv/any_models_finetuned_for_verse/
false
false
self
3
null
Best instruct model that fits in 32gb VRAM
22
Hi all, I have a task where I need the LLM to interpret some text, summarise only the relevant paragraphs, and return the result in JSON format. I've been using Qwen3-4B-Instruct-2507 and I must say that, given the size of the model, it's doing quite well. However, I noticed that it seems to waste too many tokens on thinking: I can see it repeat what it wants to say a few times before exiting thinking mode and actually returning the output. So I'm wondering whether there are better models out there that can fit in my 5090. What would be your go-to model in the <=32GB VRAM range?
2025-09-26T00:28:45
https://www.reddit.com/r/LocalLLaMA/comments/1nqnabr/best_instruct_model_that_fits_in_32gb_vram/
swmfg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqnabr
false
null
t3_1nqnabr
/r/LocalLLaMA/comments/1nqnabr/best_instruct_model_that_fits_in_32gb_vram/
false
false
self
22
null
It was supposed to be out in September. Anyone got it?
3
The ARL-HX Mini Station: Equipped With Dual Arc PRO B60 24 GB GPUs, 256 GB Memory & Intel Core Ultra 9 275HX CPU, All For Under $2800 US [https://wccftech.com/maxsun-arl-hx-mini-station-compact-ai-workstation-intel-core-ultra-9-275hx-dual-arc-pro-b60-24-gb-gpus-256-gb-ddr5-memory/](https://wccftech.com/maxsun-arl-hx-mini-station-compact-ai-workstation-intel-core-ultra-9-275hx-dual-arc-pro-b60-24-gb-gpus-256-gb-ddr5-memory/)
2025-09-26T00:26:42
https://www.reddit.com/r/LocalLLaMA/comments/1nqn8q4/it_was_supposed_to_be_out_in_september_anyone_got/
Highwaytothebeach
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqn8q4
false
null
t3_1nqn8q4
/r/LocalLLaMA/comments/1nqn8q4/it_was_supposed_to_be_out_in_september_anyone_got/
false
false
self
3
{'enabled': False, 'images': [{'id': 'P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw.png?width=108&crop=smart&auto=webp&s=830a497b296dd22fea61a941cb03d1a1073ae508', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw.png?width=216&crop=smart&auto=webp&s=fbd8a2fa744e377eb3295e6af87edac4bcbf9c6b', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw.png?width=320&crop=smart&auto=webp&s=22f67994e316855979ad8fb95ba086cf173a7afb', 'width': 320}, {'height': 363, 'url': 'https://external-preview.redd.it/P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw.png?width=640&crop=smart&auto=webp&s=102672d52d21078393381d76c787a6851253eaeb', 'width': 640}, {'height': 544, 'url': 'https://external-preview.redd.it/P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw.png?width=960&crop=smart&auto=webp&s=baa67edf1fef0caeba00e424f5da7f7f9cc41c2c', 'width': 960}, {'height': 612, 'url': 'https://external-preview.redd.it/P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw.png?width=1080&crop=smart&auto=webp&s=d6e8621f880057db5b6d4d139b0d9df260cbde0e', 'width': 1080}], 'source': {'height': 1208, 'url': 'https://external-preview.redd.it/P1Kz8Su-BJvrC_BSBcU15fls-0Zsy9krfT3cf0iUkkw.png?auto=webp&s=368e59661a653e5bf1e97a99634a9fc3011ddf47', 'width': 2129}, 'variants': {}}]}
Anyone tried Apertus? What was your setup and how did it go?
7
I was really excited when the Swiss released Apertus, as a completely open source and open weight model. It’s something I’m hoping to see come from more countries. But I haven’t heard much about it since it was released. Anyone try it out? What was your setup? How did it perform?
2025-09-26T00:02:20
https://www.reddit.com/r/LocalLLaMA/comments/1nqmput/anyone_tried_apertus_what_was_your_setup_and_how/
Constant_Mouse_1140
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqmput
false
null
t3_1nqmput
/r/LocalLLaMA/comments/1nqmput/anyone_tried_apertus_what_was_your_setup_and_how/
false
false
self
7
null
Out of curiosity has anyone tried local ai on a steam deck?
0
I'm thinking of maybe getting a Steam Deck for a few reasons, such as gaming. I know this is a silly question, but can the Steam Deck run things like Ollama or ComfyUI? If so, what are the biggest types of models it can run? If not, why not? Has anyone tried a Steam Deck for AI, by chance? How well does it perform?
2025-09-25T23:55:02
https://www.reddit.com/r/LocalLLaMA/comments/1nqmkdm/out_of_curiosity_has_anyone_tried_local_ai_on_a/
No_Strawberry_8719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqmkdm
false
null
t3_1nqmkdm
/r/LocalLLaMA/comments/1nqmkdm/out_of_curiosity_has_anyone_tried_local_ai_on_a/
false
false
self
0
null
Qwen3 next FP8 loading issues
3
Hi there, I have been using vLLM to serve and run inference on the Qwen3 Next model. I was mostly loading it in full weights while testing my system and seeing how the model behaves, then I moved to FP8 and dynamic FP8 versions so I could add multiple models to the flow and fit them in my GPU. I recently tried switching to the official FP8 versions of Qwen3 Next, and for some reason I keep getting loading issues and failures due to something like misquantized weights. I tried upgrading to the nightly version of vLLM, which did solve the loading issue, but I still couldn't talk to the model after it was hosted. Worse, I couldn't use the async engine with it, as it kept throwing errors and issues that I literally couldn't keep up with. So I was wondering if anyone has been having issues specifically with the official FP8 from Qwen?

P.S. I am using vLLM 0.10.2 and have 3 RTX PRO 6000s, so it's not a memory issue, and the older versions of Qwen3 Next FP8 work flawlessly.
2025-09-25T23:39:54
https://www.reddit.com/r/LocalLLaMA/comments/1nqm8tp/qwen3_next_fp8_loading_issues/
Daemontatox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqm8tp
false
null
t3_1nqm8tp
/r/LocalLLaMA/comments/1nqm8tp/qwen3_next_fp8_loading_issues/
false
false
self
3
null
Why Ollama qwen3-coder:30b still doesn't support tool (agent mode)?
0
I'm trying [continue.dev](http://continue.dev) with qwen3-coder. But to my disappointment, the model still doesn't support agent mode after more than 4 weeks of waiting. Why is agent mode disabled? Any technical reasons?
2025-09-25T23:39:49
https://www.reddit.com/r/LocalLLaMA/comments/1nqm8rx/why_ollama_qwen3coder30b_still_doesnt_support/
rickyzhang82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqm8rx
false
null
t3_1nqm8rx
/r/LocalLLaMA/comments/1nqm8rx/why_ollama_qwen3coder30b_still_doesnt_support/
false
false
self
0
{'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]}
Introducing LlamaNet: Decentralized AI Inference Network
22
🚀 Introducing LlamaNet: Decentralized AI Inference Network I’m thrilled to share LlamaNet – a distributed inference swarm for LLMs that eliminates single points of failure in AI infrastructure. 🔥 What makes LlamaNet different: ✅ Truly Decentralized – Kademlia DHT for peer discovery (no central registry) ✅ OpenAI Compatible – Drop-in replacement for OpenAI API endpoints ✅ Auto Load Balancing – Routes intelligently based on node performance ✅ Fault Tolerant – Keeps running even if nodes go offline ✅ Easy Deployment – Docker support + one-step bootstrap 🛠️ Key Features: • Real-time streaming with SSE • Multiple routing strategies (load-balanced, round-robin, random) • Built-in health checks + metrics • P2P communication with NAT traversal • Web UI for swarm visualization • Supports any GGUF model format 💡 Who it’s for: • Orgs seeking resilient AI infra • Researchers building distributed AI • Developers tired of high-cost LLM hosting • Anyone fed up with vendor lock-in 👉 The future of AI is decentralized. No outages. No pricing shocks. No lock-in. 🔗 Check it out: https://github.com/machaao/llama-net #AI #Decentralization #LLM #OpenSource #DistributedSystems #Infra
2025-09-25T23:36:52
https://www.reddit.com/r/LocalLLaMA/comments/1nqm6f9/introducing_llamanet_decentralized_ai_inference/
machaao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqm6f9
false
null
t3_1nqm6f9
/r/LocalLLaMA/comments/1nqm6f9/introducing_llamanet_decentralized_ai_inference/
false
false
self
22
{'enabled': False, 'images': [{'id': 'Nc-6nFLZTywwdkZoxak64GwG9a0y_ldHNyUJ_tpbD2g', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/Nc-6nFLZTywwdkZoxak64GwG9a0y_ldHNyUJ_tpbD2g.png?width=108&crop=smart&auto=webp&s=21b70b83b1adde445d2496636ec2237a9dc95022', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/Nc-6nFLZTywwdkZoxak64GwG9a0y_ldHNyUJ_tpbD2g.png?width=216&crop=smart&auto=webp&s=1fb74a1f0906c91446bf42464b78152d0a61fd4e', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/Nc-6nFLZTywwdkZoxak64GwG9a0y_ldHNyUJ_tpbD2g.png?width=320&crop=smart&auto=webp&s=1551100b71201d752c74914f522791e2bccbee68', 'width': 320}, {'height': 302, 'url': 'https://external-preview.redd.it/Nc-6nFLZTywwdkZoxak64GwG9a0y_ldHNyUJ_tpbD2g.png?width=640&crop=smart&auto=webp&s=ec4f8a2674d90b37cc9cc5ed316842153988c6ac', 'width': 640}, {'height': 454, 'url': 'https://external-preview.redd.it/Nc-6nFLZTywwdkZoxak64GwG9a0y_ldHNyUJ_tpbD2g.png?width=960&crop=smart&auto=webp&s=91c302113b88c0e4c7aacf3973b071a1b17e0125', 'width': 960}, {'height': 510, 'url': 'https://external-preview.redd.it/Nc-6nFLZTywwdkZoxak64GwG9a0y_ldHNyUJ_tpbD2g.png?width=1080&crop=smart&auto=webp&s=3301f9072b700c929f2595caf216554f0f1ab237', 'width': 1080}], 'source': {'height': 1702, 'url': 'https://external-preview.redd.it/Nc-6nFLZTywwdkZoxak64GwG9a0y_ldHNyUJ_tpbD2g.png?auto=webp&s=34e458a3bb725a26e661d8eeca267ad4e726aef6', 'width': 3598}, 'variants': {}}]}
Is launch daemon the best approach to auto start llama-swap on macOS?
4
Got rid of Ollama and I'm having a smooth experience with llama-swap, except for one aspect: I have to manually start the server every time I shut down/restart my Mac. Is a launch daemon plist the best way to set this up?
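A launchd property list is the usual answer on macOS. A minimal per-user LaunchAgent sketch is below; the binary path, config path, and label are assumptions about a typical install, so adjust them to yours:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.llama-swap</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/llama-swap</string>
        <string>--config</string>
        <string>/Users/you/.config/llama-swap/config.yaml</string>
    </array>
    <!-- Start at login and restart the server if it exits. -->
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/llama-swap.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/llama-swap.err</string>
</dict>
</plist>
```

Save it as `~/Library/LaunchAgents/com.example.llama-swap.plist` and load it with `launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.example.llama-swap.plist`. A LaunchAgent runs at login in your user session; a true LaunchDaemon (in `/Library/LaunchDaemons`) would start at boot before login, but runs as root unless you set `UserName`.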
2025-09-25T23:05:56
https://www.reddit.com/r/LocalLLaMA/comments/1nqlhpe/is_launch_daemon_the_best_approach_to_auto_start/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqlhpe
false
null
t3_1nqlhpe
/r/LocalLLaMA/comments/1nqlhpe/is_launch_daemon_the_best_approach_to_auto_start/
false
false
self
4
null
Build advice
3
I plan on building a local LLM server in a 4U rack case from Rosewill. I want to use dual Xeon E5-2637 v3 CPUs on an ASUS Z10PE-D8 WS motherboard I'm getting from eBay. I'm going to use 128GB of DDR4, and for the GPUs I want to use what I already have, which is four Intel Arc B580s, for a total of 48GB of VRAM, and I'm going to power all of this with an ASUS ROG 1200W PSU. From my research it should work, because the two Intel Xeons have a combined total of 80 PCIe lanes, so each GPU should connect to a CPU directly and not through the motherboard chipset, and even though it's PCIe 3.0, the cards (which are PCIe 4.0) shouldn't suffer too much. On the software side, I tried the Intel Arc B580 in LM Studio and got pretty decent results, so I hope this new build with four of these cards will be good, and Ollama now has Intel GPU support because of the new IPEX patch Intel just dropped. Right now in my head it looks like everything should work, but maybe I'm missing something; any help is much appreciated.
2025-09-25T23:03:29
https://www.reddit.com/r/LocalLLaMA/comments/1nqlfok/build_advice/
hasanismail_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqlfok
false
null
t3_1nqlfok
/r/LocalLLaMA/comments/1nqlfok/build_advice/
false
false
self
3
null
Is my AI stupid?
0
Why doesn't it answer?
2025-09-25T22:59:29
https://v.redd.it/4xtdfo8g5erf1
Mysterious_Fig7236
v.redd.it
1970-01-01T00:00:00
0
{}
1nqlcas
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/4xtdfo8g5erf1/DASHPlaylist.mpd?a=1761433185%2CZTg2OWM4NGFmM2Q1NThiM2RmOTk4NzEzNWQ4NGFmZDY4ZWU5ZjgxYjUyMjZiYTJmY2ZjOGQ0MThjNzI2M2Q3OA%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/4xtdfo8g5erf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/4xtdfo8g5erf1/HLSPlaylist.m3u8?a=1761433185%2CZWViM2UxMDJkM2M1MmFjY2UzOWUxOTA3ZGNkNjNkODQyOWJkN2Q5ZDY5ODAyNDNlNDQ4ODFlZWUxNTMyMTIyZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4xtdfo8g5erf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1nqlcas
/r/LocalLLaMA/comments/1nqlcas/is_my_ai_stupid/
false
false
https://external-preview…5f3c54dc2a7f2e25
0
{'enabled': False, 'images': [{'id': 'NmZxN3RvOGc1ZXJmMXJdpM9gPqjZiejIVus33xBF0xt0ZyWiBpKPixpqP20w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NmZxN3RvOGc1ZXJmMXJdpM9gPqjZiejIVus33xBF0xt0ZyWiBpKPixpqP20w.png?width=108&crop=smart&format=pjpg&auto=webp&s=458df78f337e158e6e72415390ea72f07341821d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NmZxN3RvOGc1ZXJmMXJdpM9gPqjZiejIVus33xBF0xt0ZyWiBpKPixpqP20w.png?width=216&crop=smart&format=pjpg&auto=webp&s=400763ef7f85068d34b97ae84988ff4d17c9b6fc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NmZxN3RvOGc1ZXJmMXJdpM9gPqjZiejIVus33xBF0xt0ZyWiBpKPixpqP20w.png?width=320&crop=smart&format=pjpg&auto=webp&s=47e51d0be89a17da479a208001ce1da7721013ab', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NmZxN3RvOGc1ZXJmMXJdpM9gPqjZiejIVus33xBF0xt0ZyWiBpKPixpqP20w.png?width=640&crop=smart&format=pjpg&auto=webp&s=fa09f031a832aecac0119e04dfe0a54462fea6e5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NmZxN3RvOGc1ZXJmMXJdpM9gPqjZiejIVus33xBF0xt0ZyWiBpKPixpqP20w.png?width=960&crop=smart&format=pjpg&auto=webp&s=ac53bee1ec8bae41b95c3e398e623bfad04236f9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NmZxN3RvOGc1ZXJmMXJdpM9gPqjZiejIVus33xBF0xt0ZyWiBpKPixpqP20w.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8e9dee710b6529c1d9fa8994730f5993d3f99840', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NmZxN3RvOGc1ZXJmMXJdpM9gPqjZiejIVus33xBF0xt0ZyWiBpKPixpqP20w.png?format=pjpg&auto=webp&s=0af2b04546bbaeab7f5ea92ebabe4acf3a6af208', 'width': 1280}, 'variants': {}}]}
Apparently all third party providers downgrade, none of them provide a max quality model
389
2025-09-25T22:41:03
https://i.redd.it/k5on2q9i2erf1.jpeg
Charuru
i.redd.it
1970-01-01T00:00:00
0
{}
1nqkx7o
false
null
t3_1nqkx7o
/r/LocalLLaMA/comments/1nqkx7o/apparently_all_third_party_providers_downgrade/
false
false
default
389
{'enabled': True, 'images': [{'id': 'k5on2q9i2erf1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/k5on2q9i2erf1.jpeg?width=108&crop=smart&auto=webp&s=a0d5779a48d3b0a7aed98a6196a6c31714977360', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/k5on2q9i2erf1.jpeg?width=216&crop=smart&auto=webp&s=2ad12e83d92477cbc5da65f23e18158c11e9e6e8', 'width': 216}, {'height': 272, 'url': 'https://preview.redd.it/k5on2q9i2erf1.jpeg?width=320&crop=smart&auto=webp&s=978d94eebcb974a83f2279c7d99e52538e08e5ae', 'width': 320}, {'height': 545, 'url': 'https://preview.redd.it/k5on2q9i2erf1.jpeg?width=640&crop=smart&auto=webp&s=44c1c2fb9cea2d9deb0e87434f884c9dc83258dd', 'width': 640}, {'height': 817, 'url': 'https://preview.redd.it/k5on2q9i2erf1.jpeg?width=960&crop=smart&auto=webp&s=524276f404152d8eef7c5a5f436dab3d2e14012e', 'width': 960}, {'height': 919, 'url': 'https://preview.redd.it/k5on2q9i2erf1.jpeg?width=1080&crop=smart&auto=webp&s=0ef148970c14ed0cf71539447f85123ba3f5db9a', 'width': 1080}], 'source': {'height': 1332, 'url': 'https://preview.redd.it/k5on2q9i2erf1.jpeg?auto=webp&s=59520be343d7201f83fabb9f2329e05929ceeac3', 'width': 1564}, 'variants': {}}]}
Why do LLMs do the comparative thing so often
26
For example: ‘That’s not a weakness, that’s a compass pointing you away from the wrong life.’ I see it in so many responses, and I can often tell something is AI-written just from this pattern.
2025-09-25T22:35:19
https://www.reddit.com/r/LocalLLaMA/comments/1nqksbv/why_do_llms_do_the_comparative_thing_so_often/
Imbuyingdrugs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqksbv
false
null
t3_1nqksbv
/r/LocalLLaMA/comments/1nqksbv/why_do_llms_do_the_comparative_thing_so_often/
false
false
self
26
null
How to change the design of 3,500 images fast, easily, and extremely accurately?
0
How can I change the design of 3,500 football training exercise images fast, easily, and extremely accurately? It doesn't have to be all 3,500 at once; 50 at a time is totally fine as well, but only if it's extremely accurate. I was thinking of using the OpenAI API in my custom project with a prompt to modify a large number of exercises at once (creating a new .png from each .png with the image generator), but the problem is that GPT-5's vision and image-generation capabilities were not accurate enough. It always missed some of the balls, lines, and arrows, and some of the arrows were inaccurate. For example, when I ask ChatGPT to count how many balls there are in an exercise image and output the count as JSON, instead of hitting the correct number, 22, it returns 5-10 instead, which is pretty terrible if I want perfect or almost perfect results. It seems to be bad at counting. So, how can I change the design of 3,500 images fast, easily, and extremely accurately? https://preview.redd.it/npsv9cd80erf1.png?width=2351&format=png&auto=webp&s=c4feb97efd0c753da097250537eba89497b99af8 That's what the OpenAI image generator produced. The generated image is on the left and the original is on the right:
2025-09-25T22:28:38
https://www.reddit.com/r/LocalLLaMA/comments/1nqkmva/how_to_change_design_of_3500_images_fasteasy_and/
Real_Investment_3726
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqkmva
false
null
t3_1nqkmva
/r/LocalLLaMA/comments/1nqkmva/how_to_change_design_of_3500_images_fasteasy_and/
false
false
https://b.thumbs.redditm…ZrZhnrZyocJc.jpg
0
null
I hacked the fastest turn detection that runs on CPU
3
EOU detection time < 150 ms (faster than Deepgram). Key ideas - 1. Exponential smoothing + Silero VAD v6 (2.3 MB) 2. pipecat-ai/smart-turn-v3 (8.8 MB) https://preview.redd.it/rbscq0gfzdrf1.png?width=1204&format=png&auto=webp&s=d96260af9bd54817b7729015fde51914c6aad12c ps - I'm working on a voice AI that runs completely offline - no cloud BS. Full STT-LLM-TTS stack. Code drop coming soon...
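The "exponential smoothing + VAD" idea can be sketched roughly as follows. This is a hypothetical illustration, not the author's code: the frame size, smoothing factor, threshold, and silence window are all assumed parameters.

```python
# Hypothetical sketch of end-of-utterance (EOU) detection: smooth the
# per-frame speech probabilities a VAD (e.g. Silero) emits with an
# exponential moving average, then fire EOU once the smoothed value
# stays below a threshold for long enough. All constants are assumptions.

def detect_eou(vad_probs, frame_ms=32, alpha=0.3,
               threshold=0.2, min_silence_ms=150):
    """Return the time (ms) at which EOU fires, or None if it never does."""
    smoothed = 0.0
    silence_ms = 0
    for i, p in enumerate(vad_probs):
        # The EMA damps single-frame VAD flickers (brief pauses, noise).
        smoothed = alpha * p + (1 - alpha) * smoothed
        if smoothed < threshold:
            silence_ms += frame_ms
            if silence_ms >= min_silence_ms:
                return (i + 1) * frame_ms
        else:
            silence_ms = 0
    return None

# Ten frames of speech, then silence: EOU fires partway into the silence,
# once the smoothed probability has decayed below the threshold.
probs = [0.9] * 10 + [0.05] * 20
print(detect_eou(probs))
```

The trade-off is the usual one: a larger `alpha` reacts faster but re-admits VAD flicker; a longer `min_silence_ms` avoids cutting the speaker off mid-pause at the cost of latency.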
2025-09-25T22:24:37
https://www.reddit.com/r/LocalLLaMA/comments/1nqkjfm/i_hacked_the_fastest_turn_detection_that_runs_on/
mshubham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqkjfm
false
null
t3_1nqkjfm
/r/LocalLLaMA/comments/1nqkjfm/i_hacked_the_fastest_turn_detection_that_runs_on/
false
false
https://b.thumbs.redditm…4XyBRKfD6-hY.jpg
3
null
Running LLMs on your keyboard ($200 Raspberry Pi 500+)
1
[Raspberry Pi integrated the Pi 5 into their 500 keyboard](https://www.raspberrypi.com/news/the-ultimate-all-in-one-pc-raspberry-pi-500-plus-on-sale-now-at-200/), turning it into an all-in-one desktop. Although it's kind of a gimmick, I believe it's the coolest thing you can run LLM inference on (fridge/washing machine next???). It has somewhat decent specs for $200: quad-core ARM, 16GB RAM, 256GB SSD, so the Pi 5 can run small models (<\~4B) / larger bitnets. [It was used in a popular post yesterday](https://www.reddit.com/r/LocalLLaMA/comments/1npo93e/i_built_a_tiny_fully_local_ai_agent_for_a/) and I also stumbled upon a [paper on optimizing throughput](https://arxiv.org/html/2504.02118v1#:~:text=LLMPi%3A%20Optimizing%20LLMs%20for%20High%2DThroughput%20on%20Raspberry%20Pi) if any of you sickos want to join me.
2025-09-25T22:22:22
https://i.redd.it/bb7r6aujqdrf1.jpeg
Longjumping-Solid563
i.redd.it
1970-01-01T00:00:00
0
{}
1nqkhio
false
null
t3_1nqkhio
/r/LocalLLaMA/comments/1nqkhio/running_llms_on_your_keyboard_200_raspberry_pi_500/
false
false
default
1
{'enabled': True, 'images': [{'id': 'bb7r6aujqdrf1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/bb7r6aujqdrf1.jpeg?width=108&crop=smart&auto=webp&s=43580ae94f4e7ac0899274ca57ae64bb2dc2c0b8', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/bb7r6aujqdrf1.jpeg?width=216&crop=smart&auto=webp&s=0abd68d4be41e53678807139a96446424c41b287', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/bb7r6aujqdrf1.jpeg?width=320&crop=smart&auto=webp&s=1ec81780ef0cece60a45c6eb7120db953628e029', 'width': 320}, {'height': 436, 'url': 'https://preview.redd.it/bb7r6aujqdrf1.jpeg?width=640&crop=smart&auto=webp&s=5c5e8f2abb01a4b48a7a829d8ddf2a4c4538be90', 'width': 640}, {'height': 655, 'url': 'https://preview.redd.it/bb7r6aujqdrf1.jpeg?width=960&crop=smart&auto=webp&s=718ce3e32d9d03827fb3216d810f4dc16c647ebd', 'width': 960}, {'height': 737, 'url': 'https://preview.redd.it/bb7r6aujqdrf1.jpeg?width=1080&crop=smart&auto=webp&s=0620017d0d96c2325c113e470c8fc52f8209e667', 'width': 1080}], 'source': {'height': 819, 'url': 'https://preview.redd.it/bb7r6aujqdrf1.jpeg?auto=webp&s=f099c2290b5a24829e231c823ac4137bd2c81691', 'width': 1200}, 'variants': {}}]}
I trained an LLM from scratch AMA!
474
It's been a few months and I have posted a few times, but I am finished! I used Claude to write my training scripts, and I trained a 960M model on public domain data. It was not fast or easy, but it only cost $500 (I received free credits from Amazon). It took 3 attempts to get it right. Happy to go into detail. It's a Llama 3 architecture with 3:1 GQA, FlashAttention-2, and sink tokens. I have not begun post-training yet, so it is NOT VERY USABLE!!! I am hoping post-training turns it into something useful; I have used 1B base models and they all kind of suck. Post-training will be TRL with DPO and the UltraFeedback dataset. The model is released under the CC0 license, do as you will with it. Project website: [The LibreModel Project](https://www.libremodel.xyz/) Hugging Face: [jerrimu/libremodel · Hugging Face](https://huggingface.co/jerrimu/libremodel) GitHub (GGUF here): [Releases · openconstruct/libremodel](https://github.com/openconstruct/libremodel/releases) I would like to train more open source models, and am seeking donations for hardware. If you would like to support this cause you may donate here: [Sponsor @openconstruct on GitHub Sponsors](https://github.com/sponsors/openconstruct)
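The DPO step mentioned above consumes (prompt, chosen, rejected) preference triples. A rough sketch of turning UltraFeedback-style records into that format is below; the field names (`instruction`, `completions`, `score`) are assumptions about the dataset layout, not the author's actual script:

```python
# Hypothetical sketch: convert UltraFeedback-style records into the
# (prompt, chosen, rejected) triples that TRL's DPOTrainer consumes.
# Input field names are assumptions, not the author's actual schema.

def to_dpo_pairs(records):
    pairs = []
    for rec in records:
        # Rank the candidate completions by overall score, best first.
        ranked = sorted(rec["completions"],
                        key=lambda c: c["score"], reverse=True)
        if len(ranked) < 2 or ranked[0]["score"] == ranked[-1]["score"]:
            continue  # need a clear preference gap to form a pair
        pairs.append({
            "prompt": rec["instruction"],
            "chosen": ranked[0]["text"],
            "rejected": ranked[-1]["text"],
        })
    return pairs

sample = [{
    "instruction": "Name a prime number.",
    "completions": [
        {"text": "7", "score": 9.0},
        {"text": "8 is prime.", "score": 2.0},
    ],
}]
print(to_dpo_pairs(sample))
```

DPO then optimizes the policy to prefer `chosen` over `rejected` relative to a frozen reference model, with no separate reward model needed.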
2025-09-25T22:14:44
https://www.reddit.com/r/LocalLLaMA/comments/1nqkayx/i_trained_an_llm_from_scratch_ama/
thebadslime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqkayx
false
null
t3_1nqkayx
/r/LocalLLaMA/comments/1nqkayx/i_trained_an_llm_from_scratch_ama/
false
false
self
474
null
Everyone’s racing to build smarter RAG pipelines. We went back to security basics
0
When people talk about AI pipelines, it’s almost always about better retrieval, smarter reasoning, faster agents. What often gets missed? *Security*. Think about it: your agent is pulling chunks of knowledge from multiple data sources, mixing them together, and spitting out answers. But who’s making sure it only gets access to the data it’s supposed to? Over the past year, I’ve seen teams try all kinds of approaches: * **Per-service API keys** – Works for single integrations, but doesn’t scale across multi-agent workflows. * **Vector DB ACLs** – Gives you some guardrails, but retrieval pipelines get messy fast. * **Custom middleware hacks** – Flexible, but every team reinvents the wheel (and usually forgets an edge case). The twist? Turns out the best way to secure AI pipelines looks a lot like the way we’ve secured applications for decades: **fine-grained authorization, tied directly into the data layer using OpenFGA.** Instead of treating RAG as a “special” pipeline, you can: * Assign roles/permissions down to the document and field level * Enforce policies consistently across agents and workflows * Keep an audit trail of who (or what agent) accessed what * Scale security without bolting on 10 layers of custom logic That’s the approach Couchbase just wrote about in [this post](https://www.couchbase.com/blog/securing-agentic-rag-pipelines/). They show how to wire fine-grained access control *into* agentic/RAG pipelines, so you don’t have to choose between speed and security. It’s kind of funny, after all the hype around exotic agent architectures, the way forward might be going back to the basics of access control that’s been battle-tested in enterprise systems for years. Curious: how are you (or your team) handling security in your RAG/agent pipelines today?
2025-09-25T21:37:45
https://www.reddit.com/r/LocalLLaMA/comments/1nqjemd/everyones_racing_to_build_smarter_rag_pipelines/
Creepy-Row970
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqjemd
false
null
t3_1nqjemd
/r/LocalLLaMA/comments/1nqjemd/everyones_racing_to_build_smarter_rag_pipelines/
false
false
self
0
{'enabled': False, 'images': [{'id': 'WaGRn4MnPGmyBVVC_9nR9I8DWlG5Fbi42vpPinK_fMg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WaGRn4MnPGmyBVVC_9nR9I8DWlG5Fbi42vpPinK_fMg.png?width=108&crop=smart&auto=webp&s=5a82b9506a423ffa6525c30f31f9a0529315e3dd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/WaGRn4MnPGmyBVVC_9nR9I8DWlG5Fbi42vpPinK_fMg.png?width=216&crop=smart&auto=webp&s=65d41d6d8886b898eda31b5e37b1e880abbf2ec5', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/WaGRn4MnPGmyBVVC_9nR9I8DWlG5Fbi42vpPinK_fMg.png?width=320&crop=smart&auto=webp&s=bff927928b34c305ffb2092c28414a497d70d6bc', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/WaGRn4MnPGmyBVVC_9nR9I8DWlG5Fbi42vpPinK_fMg.png?width=640&crop=smart&auto=webp&s=a0ed367d1c10f8128af9a7ad0306db9f6d5d9b00', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/WaGRn4MnPGmyBVVC_9nR9I8DWlG5Fbi42vpPinK_fMg.png?width=960&crop=smart&auto=webp&s=55844a8386a96b800927e3065fb518b6c5f24fb2', 'width': 960}], 'source': {'height': 536, 'url': 'https://external-preview.redd.it/WaGRn4MnPGmyBVVC_9nR9I8DWlG5Fbi42vpPinK_fMg.png?auto=webp&s=0dfaf29e20f358bc86e7c89e79e8b697826d2832', 'width': 1024}, 'variants': {}}]}
How are you handling RAG Observability for LLM apps? What are some of the platforms that provide RAG Observability?
2
Every time I scale a RAG pipeline, the biggest pain isn't latency or even cost; it's figuring out why a retrieval failed. Half the time the LLM is fine, but the context it pulled in was irrelevant or missing key facts. Right now my "debugging" is literally just printing chunks and praying I catch the issue in time. It's super painful when someone asks why the model hallucinated yesterday and I have to dig through logs manually. Do you folks have a cleaner way to trace and evaluate retrieval quality in production? Are you using eval frameworks (like LLM-as-judge or programmatic metrics) or some observability layer? I am looking for frameworks that provide real-time observability of my AI agent and make debugging easy, with tracing of my sessions and everything. I looked at some of the platforms and found a few that offer node-level evals, real-time observability, and more. I shortlisted a few of them: [Maxim](https://getmax.im/Max1m), [Langfuse](https://langfuse.com/), [Arize](https://arize.com/). Which observability platforms are you using, and are they making your debugging faster?
2025-09-25T21:27:17
https://www.reddit.com/r/LocalLLaMA/comments/1nqj5l5/how_are_you_handling_rag_observability_for_llm/
Fabulous_Ad993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqj5l5
false
null
t3_1nqj5l5
/r/LocalLLaMA/comments/1nqj5l5/how_are_you_handling_rag_observability_for_llm/
false
false
self
2
{'enabled': False, 'images': [{'id': 'uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=108&crop=smart&auto=webp&s=2ac91097383d12b50cccd11a156d801425048149', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=216&crop=smart&auto=webp&s=fae40b26936652773a58a03f1d4a4baec2979212', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=320&crop=smart&auto=webp&s=1a444a7dd7d4b0466ac2677e15998bea07b28d8b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=640&crop=smart&auto=webp&s=856a61802fc5acd41967218550e53df81caa8e55', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=960&crop=smart&auto=webp&s=0dc7253f5f4daea12322fc48309b0ecb506c03e0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?width=1080&crop=smart&auto=webp&s=94df2b12217ce0373883be1122c1402454ad81eb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uhRpB7D-vTXyU3kWrM4Uv0wsEl8ANQiO9SXhRH44nmE.png?auto=webp&s=66ed8b09519937ca22fa89b067d4bb96fecbc34a', 'width': 1200}, 'variants': {}}]}
Cline / Roo | VS Code | Win 11 | llama-server | Magistral 2509 | Vision / Image upload issue
2
Given the above setup, both the Roo and Cline plugins seem to be sending image data in a way that the vision model doesn't understand. Dropping the same image into llama-server's built-in chat, or into Open WebUI pointed at that llama-server, works fine. Opening a previously unreadable image and dropping it into Cline/Roo within VS Code as part of the initial prompt works fine too. What I'm trying to do is use Magistral's vision capabilities to work with screenshots taken by the AI model. It's like Cline/Roo mangles the image data somehow before sending it to the API. Any ideas on how to address this?
2025-09-25T20:57:12
https://www.reddit.com/r/LocalLLaMA/comments/1nqif5c/cline_roo_vs_code_win_11_llamaserver_magistral/
73tada
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqif5c
false
null
t3_1nqif5c
/r/LocalLLaMA/comments/1nqif5c/cline_roo_vs_code_win_11_llamaserver_magistral/
false
false
self
2
null
Are there *any* consumer mobos that can fit 2x 3.5-slot GPUs for LLMs? With PCIe 5.0?
7
Now that 5090 prices have finally come down I'm looking to find my 4090 a buddy. I prefer traditional fans over AIOs. Also - risers are still unreliable, right? Or has there been progress on that front?
2025-09-25T20:40:49
https://www.reddit.com/r/LocalLLaMA/comments/1nqi0di/are_there_any_consumer_mobos_that_can_fit_2x/
Myopic_Cat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqi0di
false
null
t3_1nqi0di
/r/LocalLLaMA/comments/1nqi0di/are_there_any_consumer_mobos_that_can_fit_2x/
false
false
self
7
null
Code completion with 5090
3
I swapped my gaming PC from Windows 11 to CachyOS, which means my gaming PC is now a lot more capable than my MacBook Air for development as well. I use Claude Code (which has been much worse since August) and Codex (slow) for agent tools. I have GitHub Copilot and Supermaven for code completion, which I use in Neovim. Is there any model that can replace the code completion tools (Copilot and Supermaven)? I don't really need chat or planning code changes etc.; I just want something that very quickly and accurately predicts my next lines of code given the context of similar files/templates. 5090, 9800X3D, 64 GB DDR5-6000 CL30 RAM
2025-09-25T20:35:52
https://www.reddit.com/r/LocalLLaMA/comments/1nqhvp4/code_completion_with_5090/
Past-Instruction290
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqhvp4
false
null
t3_1nqhvp4
/r/LocalLLaMA/comments/1nqhvp4/code_completion_with_5090/
false
false
self
3
null
I'm testing the progress on GitHub. Qwen Next gguf. Fingers crossed.
103
https://preview.redd.it/…m/pwilkin) **!**
2025-09-25T20:25:45
https://www.reddit.com/r/LocalLLaMA/comments/1nqhlyw/im_testing_the_progress_on_github_qwen_next_gguf/
LegacyRemaster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqhlyw
false
null
t3_1nqhlyw
/r/LocalLLaMA/comments/1nqhlyw/im_testing_the_progress_on_github_qwen_next_gguf/
false
false
https://b.thumbs.redditm…RB_ZCbEEaUsU.jpg
103
{'enabled': False, 'images': [{'id': 'EKetf3gBiUdNtrEdMnvehl50wpsTXtAb_rcC73bjkFY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EKetf3gBiUdNtrEdMnvehl50wpsTXtAb_rcC73bjkFY.png?width=108&crop=smart&auto=webp&s=b2fc03b752c64d73eede44b5f3b6cc1ef4ebb5c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EKetf3gBiUdNtrEdMnvehl50wpsTXtAb_rcC73bjkFY.png?width=216&crop=smart&auto=webp&s=00d37ca61f6f5645c204d0b0bf625f3bf109580d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EKetf3gBiUdNtrEdMnvehl50wpsTXtAb_rcC73bjkFY.png?width=320&crop=smart&auto=webp&s=fbe369f66ae4ace37eba092176475712199b37f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EKetf3gBiUdNtrEdMnvehl50wpsTXtAb_rcC73bjkFY.png?width=640&crop=smart&auto=webp&s=a8b0037a07998e56cdb65b5e64502a4aceb6ef48', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EKetf3gBiUdNtrEdMnvehl50wpsTXtAb_rcC73bjkFY.png?width=960&crop=smart&auto=webp&s=2e7b2d0bb0eff095cd8d1872905891de76477911', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EKetf3gBiUdNtrEdMnvehl50wpsTXtAb_rcC73bjkFY.png?width=1080&crop=smart&auto=webp&s=12281ec77a57a68c9b3b726626dc88aeef97518c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EKetf3gBiUdNtrEdMnvehl50wpsTXtAb_rcC73bjkFY.png?auto=webp&s=f13299ae176f9caaeceedb7f12fec07cc9a44e04', 'width': 1200}, 'variants': {}}]}
What is going on with this subreddit moderation?
1
[removed]
2025-09-25T20:20:59
https://i.redd.it/d9qnkc8jddrf1.jpeg
Lorian0x7
i.redd.it
1970-01-01T00:00:00
0
{}
1nqhhhy
false
null
t3_1nqhhhy
/r/LocalLLaMA/comments/1nqhhhy/what_is_going_on_with_this_subreddit_moderation/
false
false
default
1
{'enabled': True, 'images': [{'id': 'd9qnkc8jddrf1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/d9qnkc8jddrf1.jpeg?width=108&crop=smart&auto=webp&s=4856ce1d3ec31145cd3c82a05bcf0be0503a30eb', 'width': 108}, {'height': 325, 'url': 'https://preview.redd.it/d9qnkc8jddrf1.jpeg?width=216&crop=smart&auto=webp&s=0670ac7e592271f03e282c0d923ce0f13d723b9a', 'width': 216}, {'height': 482, 'url': 'https://preview.redd.it/d9qnkc8jddrf1.jpeg?width=320&crop=smart&auto=webp&s=83d0879b56a95b3de03a2bf013b44c324ea629ed', 'width': 320}, {'height': 964, 'url': 'https://preview.redd.it/d9qnkc8jddrf1.jpeg?width=640&crop=smart&auto=webp&s=68f2e038d214d3c89b20573e879df2efa5d691c5', 'width': 640}, {'height': 1447, 'url': 'https://preview.redd.it/d9qnkc8jddrf1.jpeg?width=960&crop=smart&auto=webp&s=fcc01d750a56e301dc6d95db5670d01970f90a6b', 'width': 960}, {'height': 1627, 'url': 'https://preview.redd.it/d9qnkc8jddrf1.jpeg?width=1080&crop=smart&auto=webp&s=60340a7c038a4cb0b705474ded84dc4c9eeac501', 'width': 1080}], 'source': {'height': 1839, 'url': 'https://preview.redd.it/d9qnkc8jddrf1.jpeg?auto=webp&s=e71efc210caca41f901e25f2a9b592c9560d2cc1', 'width': 1220}, 'variants': {}}]}
Need help setting up my home ai lab. Any recommendations?
3
Hey everyone, I could use some guidance on the best way to configure my home lab for running LLMs. I am not super versed in Linux driver issues, so I have been sticking with Ollama on all my machines because it is easy to use and works reliably. Here is my setup: * Mac Studio with M2 Ultra (192 GB RAM) * Mac Mini with M2 Pro (32 GB RAM) * M4 MacBook Air (32 GB RAM, max CPU) * AI PC with an RTX 5090 (32 GB VRAM), RTX 4090 (24 GB VRAM), and 96 GB system RAM The PC currently has both Ubuntu and Windows with WSL2 installed. Right now I am using Windows because it correctly recognizes both GPUs. If there is a way to get Linux working with both cards, I would prefer that as well. My main workload is agentic tasks and coding, so accuracy and reasoning matter more to me than autocomplete or casual chat. What would you recommend as the best configuration for each of these machines? * Should I keep using Ollama everywhere, or run Ollama on the Macs and something else like vLLM on the PC? * On the dual-GPU PC, how would you allocate models between the 5090 and 4090? * Are there any driver or CUDA gotchas I should be aware of if I move deeper into Linux or vLLM? Appreciate any advice from folks who have gone down this path.
2025-09-25T20:15:17
https://www.reddit.com/r/LocalLLaMA/comments/1nqhc5l/need_help_setting_up_my_home_ai_lab_any/
ate50eggs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqhc5l
false
null
t3_1nqhc5l
/r/LocalLLaMA/comments/1nqhc5l/need_help_setting_up_my_home_ai_lab_any/
false
false
self
3
null
Someone's wife on this subreddit liked Magistral, so I tried it. She was right
1
[removed]
2025-09-25T20:13:29
https://www.reddit.com/r/LocalLLaMA/comments/1nqhahv/someone_wife_on_this_subreddit_liked_magistral_so/
Lorian0x7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqhahv
false
null
t3_1nqhahv
/r/LocalLLaMA/comments/1nqhahv/someone_wife_on_this_subreddit_liked_magistral_so/
false
false
self
1
null
Someone's wife on this subreddit liked Magistral, so I tried it. She was right
1
[removed]
2025-09-25T20:11:31
https://www.reddit.com/r/LocalLLaMA/comments/1nqh8ms/someone_wife_on_this_subreddit_liked_magistral_so/
Lorian0x7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqh8ms
false
null
t3_1nqh8ms
/r/LocalLLaMA/comments/1nqh8ms/someone_wife_on_this_subreddit_liked_magistral_so/
false
false
self
1
null
Someone's wife on this subreddit liked Magistral, so I tried it. She was right
1
[removed]
2025-09-25T20:09:47
https://www.reddit.com/r/LocalLLaMA/comments/1nqh714/someone_wife_on_this_subreddit_liked_magistral_so/
Lorian0x7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqh714
false
null
t3_1nqh714
/r/LocalLLaMA/comments/1nqh714/someone_wife_on_this_subreddit_liked_magistral_so/
false
false
self
1
null
PCIe 3.0 bottlenecking on newer GPUs
5
Does anyone know what kind of bottleneck I can expect if I add newer PCIe Gen 4 or Gen 5 GPUs, like 4x 3090 or 2x 5090, to my current server, a Threadripper 2990WX with 256GB of memory? The board has 2x PCIe 3.0 x16 + 2x PCIe 3.0 x8. Or will it be too much of a bottleneck either way, so that I'd need to upgrade the platform before investing a lot into GPUs? Model loading speed is probably not that important to me; I just want to run inference on a larger model than I currently can.
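A back-of-envelope way to frame this: for inference with weights resident in VRAM, PCIe mostly affects model load time and inter-GPU activation traffic, so the gap between generations can be estimated from usable link bandwidth. The bandwidth figures and the 24 GB shard size below are rough assumptions for illustration:

```python
# Rough usable bandwidth per PCIe link (GB/s), approximations only.
bandwidth = {
    "gen3 x8":  7.9,
    "gen3 x16": 15.8,
    "gen4 x16": 31.5,
    "gen5 x16": 63.0,
}

model_gb = 24  # e.g. a ~24 GB quantized weight shard per GPU (assumption)

for link, bw in bandwidth.items():
    # Time to push the shard from host RAM into VRAM over this link.
    print(f"{link}: ~{model_gb / bw:.1f} s to load {model_gb} GB")
```

So a Gen 3 x16 slot roughly doubles load time versus Gen 4 x16 (and the x8 slots double it again), but once the weights are loaded, token generation is dominated by VRAM bandwidth, not PCIe, unless the setup spills layers to system RAM or uses heavy tensor-parallel traffic.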
2025-09-25T20:01:44
https://www.reddit.com/r/LocalLLaMA/comments/1nqgzo4/pcie_30_bottlenecking_on_newer_gpus/
Yugen42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqgzo4
false
null
t3_1nqgzo4
/r/LocalLLaMA/comments/1nqgzo4/pcie_30_bottlenecking_on_newer_gpus/
false
false
self
5
null
Artificial Analysis Long Context Reasoning (AA-LCR) benchmark
4
- Announcement: https://huggingface.co/posts/georgewritescode/981174566402338 - Leaderboard: https://artificialanalysis.ai/evaluations/artificial-analysis-long-context-reasoning - Dataset: https://huggingface.co/datasets/ArtificialAnalysis/AA-LCR
2025-09-25T19:52:35
https://artificialanalysis.ai/evaluations/artificial-analysis-long-context-reasoning
Balance-
artificialanalysis.ai
1970-01-01T00:00:00
0
{}
1nqgrb6
false
null
t3_1nqgrb6
/r/LocalLLaMA/comments/1nqgrb6/artificial_analysis_long_context_reasoning_aalcr/
false
false
default
4
null
In-Browser Codebase to Knowledge Graph generator
24
I’m working on a side project that generates a Knowledge Graph from codebases and provides a Graph-RAG-Agent. It runs entirely **client-side** in the browser, making it fully private, even the graph database runs in browser through web-assembly. I had posted this here a month ago for advices, now it is working and has massive performance gain. It is now able to generate KG from big repos ( 1000+ files) in seconds. In theory since its graph based, it should be much more accurate than traditional RAG, hoping to make it as useful and easy to use as gitingest / gitdiagram, and be helpful in understanding big repositories and prevent breaking code changes Future plan: * Ollama support * Exposing browser tab as MCP for AI IDE / CLI can query the knowledge graph directly **Need suggestions on cool feature list.** Repo link: [https://github.com/abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus) Pls leave a star if seemed cool 🫠 **Tech Jargon:** It follows this 4-pass system and there are multiple optimizations to make it work inside browser. Uses Tree-sitter WASM to generate AST. The data is stored in a graph DB called Kuzu DB which also runs inside local browser through kuzu-WASM. LLM creates cypher queries which are executed to query the graph. * **Pass 1: Structure Analysis** – Scans the repository, identifies files and folders, and creates a hierarchical *CONTAINS* relationship between them. * **Pass 2: Code Parsing & AST Extraction** – Uses Tree-sitter to generate abstract syntax trees, extracts functions/classes/symbols, and caches them efficiently. * **Pass 3: Import Resolution** – Detects and maps `import/require` statements to connect files/modules with *IMPORTS* relationships. * **Pass 4: Call Graph Analysis** – Links function calls across the project with *CALLS* relationships, using exact, fuzzy, and heuristic matching. **Optimizations:** Uses worker pool for parallel processing. 
The number of workers is determined from the available CPU cores, with a max limit of 20. Kuzu DB writes use COPY instead of MERGE so that all the data can be dumped at once, massively improving performance. This required polymorphic tables, which leave empty columns for many rows, but it is worth it since writing one batch at a time took a long time for huge repos.
2025-09-25T19:42:54
https://v.redd.it/b7v2eovm2drf1
DeathShot7777
v.redd.it
1970-01-01T00:00:00
0
{}
1nqgio2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b7v2eovm2drf1/DASHPlaylist.mpd?a=1761421392%2CNDJkZGFmMDRmMzE2ODEwMGRmYjdiMzY3MWIzZjgxODk1NjliNTQ5YjhjM2M1ZGQxMGYxNDQ2ZjNhMjEyMWFmZA%3D%3D&v=1&f=sd', 'duration': 82, 'fallback_url': 'https://v.redd.it/b7v2eovm2drf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/b7v2eovm2drf1/HLSPlaylist.m3u8?a=1761421392%2CNzk0MWNkNDllYTVhMWM1YTkwYzUzYzhhODhlNDU0MWQzODEzMGQ1MTU2MjEzNzkyMDEwNzY5OGEzYzU5ZGFiZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b7v2eovm2drf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1nqgio2
/r/LocalLLaMA/comments/1nqgio2/inbrowser_codebase_to_knowledge_graph_generator/
false
false
https://external-preview…9c9aee92de5f9a0a
24
{'enabled': False, 'images': [{'id': 'NGhmbm9oZnAyZHJmMSatzj3-4R6EilPMIHLTuZh7EM6HdLWcM6lfSRZEuWNf', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NGhmbm9oZnAyZHJmMSatzj3-4R6EilPMIHLTuZh7EM6HdLWcM6lfSRZEuWNf.png?width=108&crop=smart&format=pjpg&auto=webp&s=1445539e634c7ff3b623d289243bd0911c790097', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NGhmbm9oZnAyZHJmMSatzj3-4R6EilPMIHLTuZh7EM6HdLWcM6lfSRZEuWNf.png?width=216&crop=smart&format=pjpg&auto=webp&s=5d30855ae1fcc58374e8c2b33b23d73e321baa33', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NGhmbm9oZnAyZHJmMSatzj3-4R6EilPMIHLTuZh7EM6HdLWcM6lfSRZEuWNf.png?width=320&crop=smart&format=pjpg&auto=webp&s=03d808d695153ac8aa523f0594307a56b3f7db42', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NGhmbm9oZnAyZHJmMSatzj3-4R6EilPMIHLTuZh7EM6HdLWcM6lfSRZEuWNf.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b9568bde04ef8c45513adb2c3a91440764aeded', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NGhmbm9oZnAyZHJmMSatzj3-4R6EilPMIHLTuZh7EM6HdLWcM6lfSRZEuWNf.png?width=960&crop=smart&format=pjpg&auto=webp&s=f8ba6e76d3d8cfc2a01bb71832cf1704511404d9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NGhmbm9oZnAyZHJmMSatzj3-4R6EilPMIHLTuZh7EM6HdLWcM6lfSRZEuWNf.png?width=1080&crop=smart&format=pjpg&auto=webp&s=46c4d49d8f02a9166b5c79b1cf1ef7490c9f9b5e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NGhmbm9oZnAyZHJmMSatzj3-4R6EilPMIHLTuZh7EM6HdLWcM6lfSRZEuWNf.png?format=pjpg&auto=webp&s=24f87abc60888827eacc315f81bc88dbd729d2c4', 'width': 1920}, 'variants': {}}]}
Jagged intelligence and how to measure it properly, or psychometric model of ability in LLMs
7
The abilities of LLMs are counter-intuitive to us in the sense that LLMs solve and fail problems in seemingly incomprehensible ways. This phenomenon, where an LLM *"can solve a PhD problem and then fail at high school math"*, is known as **jagged intelligence**. However, "jagged" does not mean "immeasurable" or "unpredictable". Here I propose adapting psychometrics to explore the hierarchy of intelligence in LLMs and, based on this hierarchy, suggest a very simple and cheap way to measure the ability of LLMs properly, instead of relying on overhyped benchmarks that have barely more validity and reliability than palm or tea-leaf reading. You will see that:

1. All LLMs are powered by the same underlying ability;
2. Performance differences between LLMs arise mostly from differences in this ability;
3. LLM ability is best measured as a probability of success on increasingly out-of-distribution problems;
4. LLM ability level is predicted by scaling laws;
5. There are currently no benchmarks that explicitly target LLM ability;
6. Benchmarks that would measure it are cheap and easy to create, use and maintain, which drastically reduces evaluation costs.

Let's start by comparing the structure of intelligence in humans and LLMs.

# Reference point: how does human intelligence work?

To understand the differences between LLM and human ability, let's first talk about human intelligence.

# Ability in humans is intuitive

Ask yourself which college majors are the smartest (and which are not). You will likely say that the people you'd call the smartest studied math and related fields, with rare exceptions (likely in the humanities), and that, obviously, people of different ability were attracted to different majors. This stratification is intuitive - the stereotypes reflect real-world measures.
As an example, the intelligence of majors quantified as their composite GRE scores:

https://preview.redd.it/u3qm2wz1zcrf1.png?width=4092&format=png&auto=webp&s=7aa9f1b02b61f665ffe676f23d7ca0b254d8608f

[https://orgtheory.wordpress.com/2010/12/17/gre-scores-for-different-disciplines/](https://orgtheory.wordpress.com/2010/12/17/gre-scores-for-different-disciplines/)

[https://x.com/crocoduck\_king/status/1685475919295860736](https://x.com/crocoduck_king/status/1685475919295860736)

It turns out we associate intelligence with mathematics for a reason. If a human can solve PhD-level math, they are likely able to solve **anything** else with a proper amount of training, because there is no subject more intellectually demanding than math.

# Ability in LLMs is NOT intuitive ("jagged")

LLM breakthroughs in STEM are so impressive exactly because they give the impression of approaching the intelligence levels of the most intellectually challenging sciences. However, in LLMs, ability works **differently** than in humans! You can reasonably expect a math PhD to understand sociology or political science, but there is no guarantee that an LLM capable of PhD-level math will succeed at a field that is less intellectually demanding (for humans). **In every field there are problems that are insanely difficult for LLMs, unlike for humans, who mostly find only STEM this difficult.** To understand why, let's examine the structure of ability in humans and LLMs.

# Ability in humans: the g factor

In 1904, Charles Spearman noted that performance on any tasks involving mental processing was positively correlated - children who were good at one school subject were more likely to be good at others. He called this phenomenon the **positive manifold.** By doing a **factor analysis** - computing the correlations between performance in each discipline - he derived a single factor responsible for most performance disparities between individuals.
He called it **the factor of general intelligence, or g.** People with greater g tend to be better at any task involving mental processing (basically, **any** task a human can do). **The discovery of the g factor is the most replicable finding in psychology.**

[Spearman's correlation matrix for six measures of school performance. All the correlations are positive - the positive manifold phenomenon. The bottom row shows the g loadings of each performance measure. Adapted from Jensen 1998, 24.](https://preview.redd.it/8pcsdhgjzcrf1.png?width=526&format=png&auto=webp&s=f9fdafdccb21e9dcc7267c5bf6f88a58929ecd5e)

# Ability in LLMs: the g factor

Do LLMs have a g factor? Let's try to figure it out - select a large group of models, test them across a range of different tasks and see if the positive manifold appears, just as Spearman did. Luckily, we don't need to do this from scratch, because it has already been done in many studies:

* Unveiling the General Intelligence Factor in Language Models: A Psychometric Approach - [https://arxiv.org/abs/2310.11616v1](https://arxiv.org/abs/2310.11616v1)
* metabench - A Sparse Benchmark of Reasoning and Knowledge in Large Language Models - [https://arxiv.org/abs/2407.12844](https://arxiv.org/abs/2407.12844)
* Revealing the structure of language model capabilities - [https://arxiv.org/abs/2306.10062](https://arxiv.org/abs/2306.10062)
* M3GIA: A Cognition Inspired Multilingual and Multimodal General Intelligence Ability Benchmark - [https://arxiv.org/abs/2406.05343](https://arxiv.org/abs/2406.05343)
* Truly Assessing Fluid Intelligence of Large Language Models through Dynamic Reasoning Evaluation - [https://arxiv.org/abs/2506.02648](https://arxiv.org/abs/2506.02648)
* Observational Scaling Laws and the Predictability of Language Model Performance - [https://arxiv.org/abs/2405.10938](https://arxiv.org/abs/2405.10938)

Regardless of their design, **all of them** have identified a single factor that explains most performance differences
between the models. This pretty much confirms the existence of a g factor in LLMs.

# Ability in humans: non-g factors

Later, factor analysis of more comprehensive tests identified that some tasks correlate with each other strongly enough to produce their own factors, which are also positively correlated with g. These factors are known as **broad abilities**. For example, a WISC-IV correlation matrix identifies five broad abilities:

* Gc - knowledge
* Gv - visual/spatial
* Gsm - short-term memory
* Gq - quantitative
* Gs - clerical speed

https://preview.redd.it/dlojbq520drf1.png?width=1091&format=png&auto=webp&s=6f20cf642154147ed88f091faeb316bc980dfe0b

https://preview.redd.it/7vfw0h930drf1.png?width=1551&format=png&auto=webp&s=c197c0cd59042449c2a16fff38cbe9ffd36ccf5d

[https://assessingpsyche.wordpress.com/2011/02/09/factor-analysis-of-the-wisc-iv-integrated-with-a-schmid-leiman-transformation/](https://assessingpsyche.wordpress.com/2011/02/09/factor-analysis-of-the-wisc-iv-integrated-with-a-schmid-leiman-transformation/)

Note:

1. Negative correlations are negligibly small, which suggests sampling or measurement error and does not disprove the concept of the g factor;
2. **Broad abilities** are emergent products of factor analysis, not of task-specific training. Humans can't enhance their broad abilities by training - rather, the levels of their broad abilities limit the reachable levels of the task-specific skills related to them.
3. There is a fixed number of broad abilities in humans. Many people have **ability tilts** - some of their broad abilities are expressed better than others. The worldcels vs. shape rotators distinction has been known for years in psychometric literature.
WAIS and WISC, the gold-standard comprehensive IQ tests used in clinical evaluations, break four broad abilities into the following indexes:

* Full-Scale IQ
* General Ability Index
* Verbal Comprehension Index
* Perceptual Reasoning Index
* Cognitive Proficiency Index
* Working Memory Index
* Processing Speed Index

Cattell-Horn-Carroll theory suggests the most comprehensive structure of intelligence - a g factor and broad abilities:

https://preview.redd.it/yvotrtx50drf1.png?width=624&format=png&auto=webp&s=14f0ce5963f4f67e804c142c5348c6c3ffe54ef1

In the CHC hierarchy, the most important ability after g is Gf, fluid reasoning. This ability is responsible for solving novel problems and applying old knowledge in new contexts. Across a range of tests, Gf has the highest correlation with g, so it is often equated with g itself.

# Ability in LLMs: non-g factors

Most of the difference between the intelligence of humans and LLMs is attributable to differences in the structure of their intelligence. In LLMs, it looks something like this:

* g factor
* Generalizing ability and ability-like factors
* Data size
* Data quality
* Domain coverage
* Model size
* Compute budget
* Reinforcement learning
* Reasoning token efficiency
* Mean Chain-of-Thought length
* Computing Proficiency
* Long context handling
* Effective context length
* Output speed

Let's break down the differences.

# Generalizing ability and ability-like factors in LLMs

LLMs do not have a fixed set of innate, immutable, untrainable broad abilities. Instead, they have **ability-like factors** - sets of skills they are trained to execute. Ability-like factors are more or less broad. When combined, similar ability-like factors merge into broader ones, which form broader ones still, and so on, which results in the model's overall **generalizing ability**. The improvements in generalizing ability are predicted by the scaling laws - that is, to get better models, you just feed enough data into big enough models.
It is possible exactly because of the emergent interactions between different ability-like factors.

Examples of narrow ability-like factors:

* the ability to solve this problem in Python
* the ability to solve this exact chess problem
* the ability to fold this exact protein

Examples of broader ability-like factors:

* the ability to solve competitive programming problems in multiple languages
* the ability to play chess at a grandmaster level
* the ability to design new proteins and viruses

Some ability-like factors in LLMs are broad enough to influence the whole performance of an LLM. For example, it has been reported that high-quality code and math data improves models' performance across all domains. Since some factors are so broad, it makes sense to identify and train them first.

Ability-like factors and generalizing ability in LLMs also depend on data size, quality, domain coverage, model size and other factors (see scaling laws). Better training leads to improvements in ability-like factors and generalizing ability. There are also behavioral factors like alignment, safety, bias, censorship and so on. Since they influence the model's overall performance, they can be understood as ability-like factors too.

Note that some factors can't be improved with better training alone and depend on the model's architecture - namely, long context handling and output speed. They are **not** ability-like factors - let's call them **computing proficiency factors**.

# Generalization in LLMs

The **generalization** process is the process of applying the generalizing ability. Generalization is simply solving problems after training, at test time. **The source of most misunderstanding** of the intelligence of LLMs is the difference between the generalization process in LLMs and fluid intelligence in humans: we intuitively think that LLMs reason like humans, but they don't.
LLMs work by learning relationships between small units of data (tokens) in a training corpus and emulating them on request. The result is a very plausible emulation of natural languages - data structures that are subject to certain rules. LLMs identify these rules and emulate them. They easily detect and emulate relationships that humans cannot see, and **that is what makes LLMs, and AI in general, so impressive.** But there are serious drawbacks:

1. AI doesn't have deductive reasoning. It is hyper-inductive and can only generalize from one example to another, and it starts to fail rapidly as soon as tasks become less and less similar to its training data. Even a minor change in a problem can stump an LLM no matter how SOTA it is.
2. The knowledge of an AI can't be separated from its reasoning - the generalization process in LLMs is responsible for both knowledge recall and reasoning. It's easy to demonstrate both - we will get to it soon.

# Computing Proficiency in LLMs

Computing Proficiency factors are abilities, found in any LLM, that influence its general intelligence (**not** its generalizing ability) while being independent of that generalizing ability. Such technical abilities are:

* Long context comprehension
* Long context retrieval
* Effective context length
* Lost in the middle (position bias)
* Output speed
  * Negligible under small workloads
  * Negligible once faster than human reading speed

There are probably others, but I am not sure.

# g factor in LLMs: generalizing ability + computing proficiency

The general intelligence of LLMs, as measured by most benchmarks, is simply a product of their generalizing ability and computing proficiency. However, most of the differences in the general intelligence of models come from differences in generalizing ability, so it makes sense to improve the generalizing ability first.
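The factor-analysis step described above can be sketched in a few lines. The score matrix below is made-up illustrative data (not real benchmark results), and taking the first principal component of the correlation matrix is only a crude stand-in for a proper factor model - but it shows how a positive manifold yields a single dominant factor:

```python
# A minimal sketch: given a (models x benchmarks) score matrix, check the
# positive manifold and extract the first principal component as a crude "g".
# The scores below are made-up illustrative data, not real benchmark results.
import numpy as np

scores = np.array([
    # bench1 bench2 bench3 bench4
    [0.82, 0.75, 0.70, 0.64],   # model A
    [0.71, 0.66, 0.58, 0.55],   # model B
    [0.55, 0.49, 0.44, 0.40],   # model C
    [0.40, 0.38, 0.30, 0.28],   # model D
    [0.28, 0.22, 0.21, 0.15],   # model E
])

# Correlation matrix across benchmarks (positive manifold = all entries > 0).
corr = np.corrcoef(scores, rowvar=False)
print("positive manifold:", bool((corr > 0).all()))

# Eigenvector of the largest eigenvalue ~ loadings of a single common factor.
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
g_loadings = eigvecs[:, -1]
if g_loadings.sum() < 0:                  # fix the arbitrary sign
    g_loadings = -g_loadings
explained = eigvals[-1] / eigvals.sum()

print("variance explained by first factor: %.0f%%" % (100 * explained))
print("g loadings per benchmark:", np.round(g_loadings, 2))

# Each model's "g score" = projection of its standardized scores onto g.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
g_scores = z @ g_loadings
print("model g scores:", np.round(g_scores, 2))
```

With real data, the interesting quantity is how much of the variance the first factor explains compared to the rest - the studies listed earlier all find that it dominates.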
# Predictions based on this theory

Now that we have a working psychometric theory of intelligence in LLMs, let's make some predictions based on it and propose some ways to test them. I invite everyone with enough spare time and compute/USD budget to do so independently.

**1. Task difficulty for an LLM is inversely proportional to the task's similarity to its training data**

Find an expert in some topic and ask them to write a list of questions that involve increasingly obscure concepts. These questions **do not** need to be difficult. They **do not** need to take a long time and tens of thousands of tokens to answer. They **do not** even need to involve any reasoning and can focus on knowledge recall only. I proposed a design for such an experiment some time ago.

In music theory, there are more and less popular keys. There is even a website that ranks their popularity - hooktheory.com:

https://preview.redd.it/rnxgcx0t0drf1.png?width=1065&format=png&auto=webp&s=03bd8c6d4233722cbc8ee07b50e138f770dd424c

https://preview.redd.it/xl0s0x0t0drf1.png?width=1079&format=png&auto=webp&s=90e3c1eeb463f13946d99094b1897935717767fb

And here are two songs using some of these keys:

https://preview.redd.it/8qfmpv621drf1.png?width=1918&format=png&auto=webp&s=79d93f2cc1418e067627ebab56e1443eb1fd47a3

https://preview.redd.it/e4zg6v621drf1.png?width=1917&format=png&auto=webp&s=5812cb56861e70e7d9888513a8b9b53f21ff1a98

Can you see what is common and what is different between the above pieces? Even if you can't read notation, you can at least see that it is exactly the same song - just transposed higher or lower (except for the drum notes, which represent drum samples and keep their place). You can produce the same difference by slowing down or speeding up a YouTube video - the soundtrack will sound lower or higher, but you will still recognize it as the same soundtrack.
All other properties of the song are unchanged - if you can determine the mode of one song, you will easily determine the mode of another. The real fun begins when we ask a LLM to determine the mode in both cases. Go to [LMArena.ai](http://LMArena.ai) and ask GPT-5-High a couple of times, in **different** chats (important): >Determine the vibe, the key and the mode. Is there modal interchange and/or chromaticism? >Organ : (C5\*1/2. C5\*1/4. C5\*1/4 Db5\*1/4 Db5\*1/4. Db5\*1/4. Eb5\*1/4 Eb5\*1/2 C5\*1/4. Bb4\*1/4. Ab4\*1/2. Eb5\*1/4. Db5\*1/4.)\*4 >Brass : (\~\*1/2.)\*16 ((C4\*1/2.)\*2 (Db4\*1/2.)\*2 (Gb4\*1/2.)\*4)\*2 >Snare : (\~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 \~\*1/2 x\*1/4 \~\*1/2. \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2. \~\*1/2.)\*4 >Kick : (x\*1/4 \~\*1/2 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 x\*1/4 \~\*1/2 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 \~\*1/2.)\*4 >Hi Hat : ((x\*1/16)\*20 5\[(x\*1/16)\*5\] (x\*1/16)\*16 5\[(x\*1/16)\*10\] 1/16\*36 5\[(x\*1/16)\*15\])\*4 >Bass : (Gb1\*1/2.+Gb1\*1/4 Eb1\*1/2 Gb1\*1/4 Gb1\*1/2 Bb1\*1/2. Gb1\*1/2.+Gb1\*1/4 C1\*1/2+C1\*1/2.+C1\*1/2.)\*4 >Choir : (C5\*1/8 Eb5\*1/8 Gb5\*1/8 Eb5\*1/8 Eb5\*1/8 Db5\*1/8 Eb5\*1/2. C5\*1/8 Eb5\*1/8 Ab5\*1/8 Gb5\*1/8 Gb5\*1/8 F5\*/18 Gb5\*1/2. C5\*1/8 Eb5\*1/8 Gb5\*1/8 Eb5\*1/8 Eb5\*1/8 Db5\*1/8 Eb5\*1/2. Ab4\*1/8 Db5\*1/8 F5\*1/8 Db5\*1/8 Db5\*1/8 C5\*1/8 Db5\*1/2.)\*4 >Organ 2 : (C3\*1/8 Eb3\*1/8 Gb3\*1/8)\*64 >Legend: >C5\*1/2.+1/2 \~\*1/4 >5\[(x\*1/4)\*6\] >C - Note label >5 - Octave number >\*1/2 - duration >. - dotted note >\+ - tied notes >\~ - rest >x - drum note >5\[\] - pentuple It will correctly identify it as C Locrian most of the time. Now let's try the following: >Determine the vibe, the key and the mode. Is there modal interchange and/or chromaticism? >Organ : (G#4\*1/2. G#4\*1/4. G#4\*1/4 A4\*1/4 A4\*1/4. A4\*1/4. B4\*1/4 B4\*1/2 G#4\*1/4. F#4\*1/4. E4\*1/2. B4\*1/4. 
A4\*1/4.)\*4 >Brass : (\~\*1/2.)\*16 ((G#3\*1/2.)\*2 (A3\*1/2.)\*2 (D4\*1/2.)\*4)\*2 >Snare : (\~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 \~\*1/2 x\*1/4 \~\*1/2. \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2. \~\*1/2.)\*4 >Kick : (x\*1/4 \~\*1/2 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 x\*1/4 \~\*1/2 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 \~\*1/2.)\*4 >Hi Hat : ((x\*1/16)\*20 5\[(x\*1/16)\*5\] (x\*1/16)\*16 5\[(x\*1/16)\*10\] 1/16\*36 5\[(x\*1/16)\*15\])\*4 >Bass : (D1\*1/2.+D1\*1/4 B0\*1/2 D1\*1/4 D1\*1/2 F#1\*1/2. D1\*1/2.+D1\*1/4 G#0\*1/2+G#0\*1/2.+G#0\*1/2.)\*4 >Choir : (G#4\*1/8 B4\*1/8 D5\*1/8 B4\*1/8 B4\*1/8 A4\*1/8 B4\*1/2. G#4\*1/8 B4\*1/8 E5\*1/8 D5\*1/8 D5\*1/8 C#5\*/18 D5\*1/2. G#4\*1/8 B4\*1/8 D5\*1/8 B4\*1/8 B4\*1/8 A4\*1/8 B4\*1/2. E4\*1/8 A4\*1/8 C#5\*1/8 A4\*1/8 A4\*1/8 G#4\*1/8 A4\*1/2.)\*4 >Organ 2 : (G#2\*1/8 B2\*1/8 D3\*1/8)\*64 >Legend: >C5\*1/2.+1/2 \~\*1/4 >5\[(x\*1/4)\*6\] >C - Note label >5 - Octave number >\*1/2 - duration >. - dotted note >\+ - tied notes >\~ - rest >x - drum note >5\[\] - pentuple Whatever the hell Ab Major is, GPT-5 is now suddenly wrong. See, it's literally the same piece and the same problem, with only a minor detail changed - and yet it is difficult for GPT-5 to solve this problem once it is made just a bit more obscure. I predict that, **when transposed to all keys listed on Hooktheory, ChatGPT will start to fail this problem more often with rare keys.** **2. All models are powered by the same generalizing ability and differ only in its level** If you try other models at this task, you will notice that their performance degrades too. For example, both Grok 4 and Qwen3-Next-80B-A3B identify C Locrian correctly quite often (most often among all open source LLMs I ever tested), but struggle with G# Locrian. **The difficulty of this task progresses for all models**. When the task uses more and more underrepresented keys, **all** models start to fail more often. 
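Generating the variants for the transposition experiment above is easy to automate. A minimal sketch, assuming the token notation shown in the prompts (note name plus octave; drum hits `x`, rests `~`, durations and repeat markers are left untouched). The `transpose` helper and sharp-based output spelling are my own choices - a real probe generator might match Hooktheory's preferred spelling for each key:

```python
# Transpose pitched tokens like "C5", "Gb4", "G#0" by n semitones, leaving
# everything else in a score line untouched.
import re

# Semitone value for each pitch class; output spelling uses sharps.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
TO_SEMITONE = {**{n: i for i, n in enumerate(NAMES)},
               "Db": 1, "Eb": 3, "Gb": 6, "Ab": 8, "Bb": 10}

# A pitched token: note letter, optional # or b, then an octave number.
NOTE_RE = re.compile(r"\b([A-G](?:#|b)?)(-?\d)\b")

def transpose(line: str, semitones: int) -> str:
    """Shift every pitched token in a score line by n semitones."""
    def shift(m: re.Match) -> str:
        pitch = TO_SEMITONE[m.group(1)] + 12 * int(m.group(2)) + semitones
        return f"{NAMES[pitch % 12]}{pitch // 12}"
    return NOTE_RE.sub(shift, line)

# Shifting the C Locrian material down 4 semitones reproduces the G# Locrian
# version of the same parts: C5 -> G#4, Db5 -> A4, Eb5 -> B4, Bb4 -> F#4.
print(transpose("C5*1/2. C5*1/4 Db5*1/4. Eb5*1/2 Bb4*1/4. Ab4*1/2.", -4))
```

Looping this over all 24 keys (weighted by their Hooktheory popularity ranks) would produce the full probe set for the prediction above.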
In other words, **all models "find" the same problems to be easier or more difficult than others.** Just like humans. It means that **all models have the same underlying generalization mechanism.** The only thing that differs is the level of their ability.

**3. Most performance differences between LLMs are the result of differences in their generalizing ability**

Using the method I proposed, measure the differences in generalizing ability across a group of LLMs. Correlate the results of these measurements against a couple of popular benchmarks. Confirm that even performance on a simple knowledge-recall task is predictive of LLMs' real-life performance.

**4. There may be very broad ability-like factors in LLM training which transfer to big performance improvements**

Just as quality math and code data (reportedly) improves performance in LLMs, other ability-like factors may be broad enough to transfer to huge improvements across a wide range of tasks. To identify such factors, one has to conduct a factor analysis of a model's performance across a range of diverse tasks in different domains.

**5. Teaching an LLM to make meaningful connections between distantly related concepts during training will lead to big improvements in generalizing ability and creativity**

If you ask GPT-5 to solve these two Locrian problems in separate contexts, it fails to identify G# Locrian each time. However, if you ask it to solve them in the **same** context, it identifies G# Locrian correctly after identifying C Locrian. GPT-5 learns this knowledge in context. There are other notable cases of in-context learning - for example, a researcher recently taught Grok to use knowledge from previously solved tasks on more difficult ones, which led to an improvement on a major benchmark. In context, LLMs can easily verify that some concepts are distant but meaningfully related.
For example, LLMs will treat the prompts "how to improve benchmarks" and "Golden Gate Bridge" in the same context as different topics. However, they will recognize the connection between "how to improve benchmarks" and "psychometrics" and suggest how to combine these concepts, even if they are unable to come up with this connection in the first place. This ability to find novel connections between weakly related concepts is known as creativity in humans, but it is so far lacking in LLMs. Given the effectiveness of in-context learning, teaching models to figure out and verify novel connections during training will improve their performance and creativity, which may be especially useful when generating high-quality synthetic data.

**6. There is likely more for AI scientists to learn from the brain sciences**

I am surprised by how easy it actually is to explain the differences between ability in humans and in AI with the tools and frameworks we use for measuring ability in humans. There is very likely much, much more to learn and adapt from the brain sciences.

**7. Measuring the generalizing ability the right way helps to create truly valid and reliable benchmarks**

# Impact on measurements

Great measures help correctly identify the strong and weak sides of ideas and products. The entire development cycle of a product may be influenced by the results of just one great measure. However, great measures are surprisingly underrated. Here are some examples:

* **Hiring**
  * Tests of GMA (general mental ability) offer the best predictions of job performance…
  * …but most HR departments discard GMA tests as pseudoscience despite 100+ years of evidence, while happily using the less-studied MBTI and pseudoscientific astrology.
* **Consumer audio**
  * Blind tests of audiophile hardware expose the entire industry as snake oil…
  * …but almost all audiophiles avoid blind testing and happily buy snake oil.
* **Medicine**
  * RCTs (randomized controlled trials) slashed through countless ineffective or even harmful treatments…
  * …treatments that had been selected solely by intuition, anecdote, or authority.
* **Food industry**
  * Blind tests demonstrate the placebo effects of brand labels and price…
  * …but there are people who literally buy premium mineral water.
* **Software**
  * DORA metrics offer superior evaluations of organizational performance…
  * …and your manager still uses LOCs and hours logged.

Given the near-zero cost of designing great measures and an ROI that justifies the cost of running them, it is incomprehensible how underrated great measures are - especially when it comes to something as important as medicine. **LLMs are the most important technology since bitcoin, but there are currently no great measures for them.** So let's figure out what's wrong with our current measurements and how to develop better ones, based on the theory I propose.

# Structural invalidity. They do not measure what they claim to measure

Take a look at the following benchmarks. What do they measure?

[https://brokk.ai/power-ranking?version=openround&score=average](https://brokk.ai/power-ranking?version=openround&score=average)

https://preview.redd.it/s330qlkv1drf1.png?width=1043&format=png&auto=webp&s=7afc0be9a96bbe9ac4ce99440a63c9913cfeab6b

[https://www.swebench.com/](https://www.swebench.com/)

https://preview.redd.it/6t3t3lkv1drf1.png?width=1164&format=png&auto=webp&s=76639e2a2466f969d224044c94346d08b69ded56

[https://aider.chat/docs/leaderboards/](https://aider.chat/docs/leaderboards/)

https://preview.redd.it/kg8qclkv1drf1.png?width=488&format=png&auto=webp&s=24acc4f218133db5f911e17f919d08b0f208e485

If you said "coding ability" for all three, you are **wrong**. Among these benchmarks, **only Aider** measures **exclusively** coding ability. You see, when you test an LLM against a real codebase, you don't test **just** its coding ability.
Instead, you test, **among other things:**

* Programming language knowledge
* Generalization to common programming problems
* Generalization to rare programming problems
* Repository structure comprehension
* Instruction following
* Tool use
* Effective context length
* Long context comprehension
* Retrograde Mercury

And these are only a few of the things I can imagine a real codebase tests for. Note that I am not saying that real-world coding problems do not test coding ability - they do. What I am saying is that they test for so many things that it becomes impossible to separate the measurement of the **specific** skill they **claim** to measure from the measurement of the **general** intelligence of the models.

To give an idea of how bad such an approach is, take a closer look at the Aider table, particularly at the bottom rows. Can you believe that DeepSeek did better on Aider than Claude Opus? No way, you will say - it was likely benchmaxxed, just like any other Chinese model; DeepSeek is not that good on real-world tasks…

No - DeepSeek has not benchmaxxed anything. The real reason it is so high on Aider and so low on other "coding" benchmarks is that Aider is **the only benchmark** that aims to test **only** pure coding ability, as measured by performance on hundreds of different basic coding challenges. The influence of other factors is minimized on Aider **by design**. The problem is not DeepSeek, because DeepSeek appears to **be** good at coding once you isolate coding from the confounders it's not as great at. The problem is that **most benchmarks do not measure what they actually claim to measure!** But the uninformed users of these benchmarks, just like their developers, do not even think about it, and so they believe that SWE-Bench is somehow more trustworthy than Aider - just because DeepSeek's performance on Aider seems **unusual**, and it seems unusual precisely because Aider actually measures what it claims to measure, and SWE-Bench does not.
People **distrust a better-designed benchmark because it reflects reality better than a poorly designed one.**

Another infamous example of factor confounding is METR:

https://preview.redd.it/wbrfqeig2drf1.png?width=1592&format=png&auto=webp&s=939a5f2e3c19805209b3bd7c1995bc5a2b4acf00

It does not measure **just** the length of a task an LLM can reliably do. It's only natural that problems that are more complex for humans require more steps and take longer to solve than simpler ones. METR measures the **general intelligence** of models, not their time management skills. It is just another misleading, confusing, poorly constructed and underexplained “benchmark”. If they wanted to measure the time horizon of LLMs, they could just task an LLM to play an infinite version of the Tower of Hanoi with a sliding context window, and that gaming session would last exactly as long as they were able to pay for GPU electricity.

# Construct invalidity. They purportedly don't measure the generalizing ability

As I demonstrated before, the most important single factor in LLMs after their general intelligence is their generalization ability, and the simplest, most reliable, cheapest way to test this ability in LLMs is to give them a range of problems across the distribution of data they were trained on, and see how well they do compared to each other. You do **NOT** need to test LLMs against whole codebases and sets of PhD problems for this. However…

https://preview.redd.it/kg7d6nh33drf1.png?width=1080&format=png&auto=webp&s=ac5a4c781f77be9583c741093aeee0d694bdedcc

The authors of some benchmarks, and many LLM devs who boast about the performance of their models on these benchmarks, are either ignorant of the fact that these benchmarks do not necessarily target the generalization ability of large language models (which screams incompetence), or actively exploit the public's ignorance to produce hype (which is the most likely reason).

Don't equate reasoning in humans with generalization in LLMs.
These are two completely different processes. An LLM can be stumped by unfamiliar problems that humans find easy, and vice versa. There indeed seems to be some correlation between a problem's difficulty for humans and its underrepresentation in LLM training data, but it is not deterministic, and what they are feeding you are anecdotes meant to make you buy the hype around frontier models. Don't trust fake hype.

# Criterion and content invalidity. They may not translate to real-world performance

Since generalizing ability is knowledge-dependent, benchmarks should test models across the domains they are targeted at. Unfortunately, it is **impossible to detect all knowledge gaps in a general-purpose LLM** without access to its training corpus, which is rarely published even for open source models. However, for less general-purpose models, it is possible to test whether a model is good at its purpose - yet many benchmarks undertest it. An example is Kimi K2, claimed to be created for agentic applications:

https://preview.redd.it/6t3t3lkv1drf1.png?width=1164&format=png&auto=webp&s=76639e2a2466f969d224044c94346d08b69ded56

https://preview.redd.it/s330qlkv1drf1.png?width=1043&format=png&auto=webp&s=7afc0be9a96bbe9ac4ce99440a63c9913cfeab6b

It is easy to see that K2's performance on agentic coding tasks in Java is far worse than on those in Python, which can suggest undertraining on Java, or overfitting on Python or on SWE Bench in particular.

# Scoring invalidity. Lack of scale properties

Raw scores on benchmarks don't translate linearly into differences in generalization ability. The same 1% difference between two of the best models represents an ability gap far wider than a 1% difference between two middling models.

# Consequential invalidity. Lack of clear positive impact

Current benchmarks are gamed and abused so often that they can only misguide both LLM users and developers, and, sadly, they do. They are unreliable as information sources for both production use and research.
They appear to be made for loud marketing, not evaluation.

# Obsolescence, deprecation and abandonment

If you ask GPT-5 which LLM benchmarks are out there, it can easily list dozens, if not hundreds - yet most of them are not used anymore. There are only a few benchmarks that keep receiving updates, and, unfortunately, they are mostly not among the better ones - because people care about the most impressive benchmarks, not the most reliable ones, even if hype benchmarks like ARC-AGI are largely meaningless.

# Price

Many benchmarks are just unaffordable to run. However, I don't believe this is that bad, because, as demonstrated by Aider, good evals (those that are a proxy for the generalizing ability) are simple and cheap to produce and test on. This puts pressure on eval developers to create cheaper, more reliable benchmarks.

# Constructing comprehensive psychometrically valid benchmarks

# Structural validity. Decoupling factors

Most benchmarks mix up confounding factors and end up measuring the models' general intelligence. For comprehensive evaluations, each broad ability of a model and its indices should be measured separately. Unfortunately, **it is impossible to fully decouple** factors when evaluating an LLM, because even simple problems for LLMs may depend upon different knowledge domains, and their computing proficiency always bottlenecks the generalizing ability. However, it is possible to reduce their influence to a level where they won't be a problem.

* The tasks should be as short as possible, to avoid confounding with other ability-like factors and with computing proficiency;
* Each task should test only one ability-like factor;
* The tasks do not need to look difficult for humans, but must have varying difficulty for LLMs.
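Once tasks are short and tagged with a single factor each, per-factor scoring becomes a trivial group-by rather than an entangled composite. A minimal sketch (the factor names and pass/fail data are invented for illustration):

```python
# Minimal sketch (hypothetical data): when each short task is tagged with
# exactly one ability-like factor, separate per-factor scores fall out of a
# simple group-by instead of being buried inside one composite number.
from collections import defaultdict

# (factor, passed) pairs from one model's run; names are illustrative only.
results = [
    ("coding", True), ("coding", True), ("coding", False),
    ("long_context", True), ("long_context", False),
    ("instruction_following", True), ("instruction_following", True),
]

by_factor = defaultdict(list)
for factor, passed in results:
    by_factor[factor].append(passed)

# Pass rate per factor: each ability gets its own score, as proposed above.
scores = {f: sum(v) / len(v) for f, v in by_factor.items()}
for factor, s in sorted(scores.items()):
    print(f"{factor}: {s:.2f}")
```

The per-item records are kept, so item difficulty analysis (which tasks discriminate between models) comes for free from the same data.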
**Counterintuitively**, it's not necessary to test with novel problem solving only - different LLMs will demonstrate different levels of generalizing ability on the same range of tasks, whether knowledge recall or novel problem solving, even if their training datasets are the same. Novel problem solving is just more likely to be more difficult for LLMs.

**Good examples**:

* Aider polyglot
  * 200+ tasks to develop short programs;
  * Only requires knowledge of mainstream programming languages;
  * Trivial for skilled humans, still discriminates among the best LLMs.
* Fiction.LiveBench
  * Dozens of different stories submitted by users;
  * Probes only long-context comprehension, requires no knowledge apart from written English;
  * Trivial for humans above 5th grade, hard for LLMs.
* EQBench

# Construct validity. Measuring the generalization ability

Tasks that require high-general-intelligence humans to solve are invalid for measuring LLMs' generalizing ability. Forget about ARC-AGI, Humanity's Last Exam and other garbage - they are tools for marketing, not evaluation.

Instead, task LLMs with problems in order from most to least semantically close to their training data. The closest problems are **common knowledge recall** - generalization to widely known knowledge such as facts, statements, and axioms. The most distant problems are **near-OOD reasoning** - generalization to problems underrepresented in the training data that involve obscure knowledge. There is a correlation between the semantic distance of a problem from any LLM's training data and its difficulty for humans, but most problems that are difficult for humans involve too many confounding factors and thus are not fit for testing LLMs.

# Criterion and content validity. Predicting the real-world performance

When presented with a series of tasks of varying semantic distance within one knowledge domain, models correctly solve them in proportion to their generalization ability.
It does not matter which human-stumping problems any given model will be able to solve, because **better-generalizing models are able to solve more problems, including problems difficult for humans.** In other words, **even if you don't know which and how many real-world problems an LLM will solve, better-generalizing models always solve more than their less smart counterparts.**

**Hiring analogy:** even if you can't be sure how useful an applicant will be for your business, it makes sense to select the most talented applicants, because they are the most likely to be the most useful.

However, when asked about problems from **another** knowledge domain, the relative standing of LLMs can change drastically. This is rarely the case with general-purpose models, because they are all trained on similar data, but it impacts the measurement of ability in models undertrained on general knowledge data - in particular, coding models like the Claude, GLM and Qwen3-Coder series. To detect undertrained models, a benchmark should cover as many tasks in as many subjects as possible. This will also help to identify models that are overfit on popular benchmarks.

# Scoring validity

After measurement, the models should be ranked in order of their abilities, along with their per-item performance, to identify more and less difficult items. Each tested ability should receive a separate score. General intelligence, g, must be represented as a composite score over all ability-like factors.

# Consequential validity. Impact of good benchmarks

The development of psychometrically valid benchmarks that are easy to maintain, use and interpret may easily become another breakthrough of this AI season, given that there are currently no popular benchmarks that are really well designed (mind you, there are very few well-designed benchmarks in the wild whatsoever).
Some probable impact:

* **Identification of underrated models.** I believe there are many great models offering measurable improvements that are slept on because they lag behind frontier models. It's difficult to honestly demonstrate these improvements on measures that are benchmaxxxed by everyone. Measuring models the right way may help identify underrated models that are worth attention.
* **Identification of overrated models.** There are enough models that boast impressive benchmark scores yet fail to generalize to any problem outside of those benchmarks. Often, models from major tech companies earn attention not because of their quality but because they were made by some Apple or Amazon. A good measure will always expose them.
* **Identification of ability tilts in models.** The generalization ability of some models can be unevenly distributed across different knowledge domains and skills. A comprehensive psychometric evaluation would help identify these ability tilts, to later investigate which changes to the training recipe made them possible and replicate them in other models.
* **Prediction of a model's performance on real-world tasks.** I believe there may be a way to measure a problem's semantic distance from an LLM's training data without actually launching the LLM, which would **tell you whether some model is enough for your problems**, **whether a better model is overkill**, or whether you really should upgrade.
* **Cost reduction in benchmark development and usage.** There are enough problems that are easy for humans but difficult for LLMs because of unfamiliarity. Problems that are easy for humans are also easy to develop, solve and verify. Valid psychometric measurements as suggested here can offer drastic cost reductions in the development and use of benchmarks.
* **Cost reduction in research and development.** Empirical testing of hypotheses and theories made by LLM researchers is costly because it requires training and evaluating models.
If psychometrically sound benchmarks prove to be solid instruments for monitoring **improvements in a model's generalizing ability early in a training run**, they will replace slow and inefficient evaluations, drastically reduce R&D overheads, and narrow the gap between the leading open source and proprietary models.

* **Reverse engineering of proprietary models.** Testing proprietary LLMs with this benchmark may shed a bit more light on their internal workings.
* **Paving the way for psychometrics of AI as a science.** If we want to really understand AI - instead of crying wolf like neurotic Yudkowsky, who has been doing so since he was bitten by Roko's Basilisk - we need to measure and study it just like anything else. Such a benchmark can become the beginning of AI psychometrics as a discipline.

# Summary and limitations

The solutions I propose focus mostly on measuring intelligence in LLMs, especially their generalizing ability. I haven't said much about measuring alignment, safety, toxicity, bias and other things that influence behavior in LLMs. However, these are not difficult to include in the hierarchy I propose.

It is not even necessary to construct comprehensive benchmarks from scratch, as most of the work is already done: Aider exists for coding ability, EQBench measures behavior, lechmazur's writing styles benchmark (see GitHub) tests stylistic diversity, Fiction.LiveBench measures long context management, and so on. The only thing that really has to be developed from scratch is a measurement of the generalizing ability, and the rest can be integrated into the framework.

It is difficult to measure generalization to problems that don't have just one right answer - problems that involve divergent thinking and artistic creativity. The best way to measure performance on this kind of problem may be to determine which LLM is the smartest and use it as a judge.

I am sure that people will **hate** this methodology.
It will expose all their favorite models, and, just as with benchmarks for humans, people will spout nonsense that “some random tasks can't measure performance in the real world” because “there is no way deepseek is this good” - but **actually** because they dislike the implications, just like audiophiles dislike blind testing.

This methodology has equal potential to disrupt the entire LLM evaluation industry (which is a massive joke, as I demonstrated) and to end up misunderstood and ignored by most. I believe that both outcomes are good: the first will make the world better for everyone, and the second will gatekeep this idea to really smart people, including competent LLM devs, giving them a competitive advantage - which will give us all better LLMs in the near future.

I haven't thought so far about adapting these findings to measure intelligence in AI that works with modalities other than text, but it shouldn't be difficult.
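As a closing illustration, the separate-scores-plus-composite-g idea from the Scoring validity section can be sketched in a few lines. All numbers and model names below are invented; standardizing each ability to z-scores before averaging keeps any one ability's scale from dominating g:

```python
# Sketch of composite-g scoring (invented numbers): standardize each
# per-ability score across models, then average the z-scores into g.
import statistics

per_ability = {  # ability -> {model: raw score}
    "coding":       {"m1": 0.85, "m2": 0.70, "m3": 0.60},
    "long_context": {"m1": 0.55, "m2": 0.90, "m3": 0.50},
}

models = ["m1", "m2", "m3"]

# z-score each ability across models so scales are comparable.
z = {}
for ability, raw in per_ability.items():
    mu = statistics.mean(raw.values())
    sd = statistics.pstdev(raw.values())
    z[ability] = {m: (raw[m] - mu) / sd for m in models}

# g = mean of the z-scores over all ability-like factors.
g = {m: statistics.mean(z[a][m] for a in per_ability) for m in models}
ranking = sorted(models, key=g.get, reverse=True)
print(ranking)
```

Note that m1 leads on raw coding but m2 wins on g: once both abilities count equally, the all-round model ranks first, which is exactly the behavior a composite score should have.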
2025-09-25T19:40:55
https://www.reddit.com/r/LocalLLaMA/comments/1nqggrn/jagged_intelligence_and_how_to_measure_it/
Massive-Shift6641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqggrn
false
null
t3_1nqggrn
/r/LocalLLaMA/comments/1nqggrn/jagged_intelligence_and_how_to_measure_it/
false
false
https://a.thumbs.redditm…9pOpRr3kVjF4.jpg
7
{'enabled': False, 'images': [{'id': 'KqOJ25NzNL1lbosnDPmNODiseRApYrLmUGVXyUOY4q8', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/KqOJ25NzNL1lbosnDPmNODiseRApYrLmUGVXyUOY4q8.png?width=108&crop=smart&auto=webp&s=ed4e472f2ad34c8f63378626ec80d8c6e86ea92a', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/KqOJ25NzNL1lbosnDPmNODiseRApYrLmUGVXyUOY4q8.png?width=216&crop=smart&auto=webp&s=de98aeb77b1eaa7ac85d5c8d9ce98c15272c4124', 'width': 216}, {'height': 246, 'url': 'https://external-preview.redd.it/KqOJ25NzNL1lbosnDPmNODiseRApYrLmUGVXyUOY4q8.png?width=320&crop=smart&auto=webp&s=ce12aaf31b692f807981d951ceec0f1c707d4a21', 'width': 320}], 'source': {'height': 462, 'url': 'https://external-preview.redd.it/KqOJ25NzNL1lbosnDPmNODiseRApYrLmUGVXyUOY4q8.png?auto=webp&s=396caf4e655a6fd8e0d0a5663bfd98fb00570697', 'width': 600}, 'variants': {}}]}
Been out of the local/open source game for about a year now....
7
Looks like Alibaba and other Chinese companies have overtaken Meta in pushing forward the SoTA? How far behind the closed source models are they? I've been mostly using Claude for the last year, which my work provided, but I recently moved to a consulting gig so I'm working on my own atm.
2025-09-25T19:37:16
https://www.reddit.com/r/LocalLLaMA/comments/1nqgday/been_out_of_the_localopen_source_game_for_about_a/
Old-School8916
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqgday
false
null
t3_1nqgday
/r/LocalLLaMA/comments/1nqgday/been_out_of_the_localopen_source_game_for_about_a/
false
false
self
7
null
What’s your experience with Qwen3-Omni so far?
35
Qwen3-Omni has been out for a few days now - what's your experience with it so far? And what are you using it for?

> Qwen3-Omni is the natively end-to-end multilingual omni model. It processes text, images, audio, and video, and delivers real-time streaming responses in both text and natural speech. We introduce several upgrades to improve performance and efficiency.

- Blog: https://qwen.ai/blog?id=65f766fc2dcba7905c1cb69cc4cab90e94126bf4
- Weights: https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe
- Paper: https://arxiv.org/abs/2509.17765
2025-09-25T19:29:16
https://www.reddit.com/r/LocalLLaMA/comments/1nqg5q3/whats_your_experience_with_qwen3omni_so_far/
Balance-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqg5q3
false
null
t3_1nqg5q3
/r/LocalLLaMA/comments/1nqg5q3/whats_your_experience_with_qwen3omni_so_far/
false
false
self
35
null
Someone's wife on this subreddit liked Magistral, so I tried it. She was right
1
[removed]
2025-09-25T19:09:56
https://www.reddit.com/r/LocalLLaMA/comments/1nqfni8/someone_wife_on_this_subreddit_liked_magistral_so/
Lorian0x7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqfni8
false
null
t3_1nqfni8
/r/LocalLLaMA/comments/1nqfni8/someone_wife_on_this_subreddit_liked_magistral_so/
false
false
self
1
null
How accurate is the MTEB leaderboard?
0
It's weird how some 600M–1B parameter embedding models beat other models like voyage-3-lg. Also, it doesn't even mention models like voyage-context-3.
2025-09-25T18:58:32
https://www.reddit.com/r/LocalLLaMA/comments/1nqfck4/how_accurate_is_the_mteb_leaderboard/
blackkksparx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqfck4
false
null
t3_1nqfck4
/r/LocalLLaMA/comments/1nqfck4/how_accurate_is_the_mteb_leaderboard/
false
false
self
0
null
What is WER and how do I calculate it for ASR models?
0
Word Error Rate (WER) is a metric that measures how well a speech-to-text system performs by comparing its output to a human-generated transcript. It counts the number of words that are substituted, inserted, or deleted in the ASR output relative to the reference. [Quick tutorial on YouTube](https://youtube.com/shorts/eqK0-ywjpTw?si=H9jqaeUC_zy8Skhi) outlined below 👇

# Formula

WER = (Substitutions + Insertions + Deletions) / (Words in Reference)

# Steps to Calculate WER

1. Align the ASR Output and Reference Transcript: Use a tool to match the words.
2. Count Errors:
   * Subs: Words that are different.
   * Ins: Extra words.
   * Dels: Missing words.
3. Compute WER: Divide the total errors by the total words in the reference.

# Factors Affecting WER

* Noisy Environments: Background noise can mess up the audio.
* Multiple Speakers: Different voices can be tricky to distinguish.
* Heavy Accents: Non-standard pronunciations can cause errors.
* Overlapping Talk: Simultaneous speech can confuse the system.
* Industry Jargon: Specialized terms might not be recognized.
* Recording Quality: Poor audio or bad microphones can affect results.

A lower WER means better performance. These factors can really impact your score, so keep them in mind when comparing ASR benchmarks.

Check out two NVIDIA open source, portable models, NVIDIA Canary-Qwen-2.5B and Parakeet-TDT-0.6B-V2, which just topped the latest Artificial Analysis (AA) ASR leaderboard with record WER. ➡️ [https://artificialanalysis.ai/speech-to-text](https://artificialanalysis.ai/speech-to-text)

https://preview.redd.it/zlzaomryxcrf1.png?width=4004&format=png&auto=webp&s=dccc900564bb9b1a696cbb1cee74504137941c6a
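The alignment step above is a word-level edit distance, so WER can be computed with the standard Levenshtein dynamic program (the transcripts below are just made-up examples):

```python
# Word Error Rate via word-level Levenshtein distance.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = min edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # substitution (or match)
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions / 6 words
```

In practice you would also normalize casing and punctuation before comparing, since most published WER numbers assume normalized text.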
2025-09-25T18:56:50
https://www.reddit.com/r/LocalLLaMA/comments/1nqfazl/what_is_wer_and_how_do_i_calculate_it_for_asr/
PDXcoder2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqfazl
false
{'oembed': {'author_name': 'NVIDIA Developer', 'author_url': 'https://www.youtube.com/@NVIDIADeveloper', 'height': 200, 'html': '<iframe width="113" height="200" src="https://www.youtube.com/embed/eqK0-ywjpTw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="What is Word Error Rate (WER), and how do I properly calculate it to benchmark ASR models?"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/eqK0-ywjpTw/hq2.jpg', 'thumbnail_width': 480, 'title': 'What is Word Error Rate (WER), and how do I properly calculate it to benchmark ASR models?', 'type': 'video', 'version': '1.0', 'width': 113}, 'type': 'youtube.com'}
t3_1nqfazl
/r/LocalLLaMA/comments/1nqfazl/what_is_wer_and_how_do_i_calculate_it_for_asr/
false
false
https://a.thumbs.redditm…SQGApyOsUl88.jpg
0
{'enabled': False, 'images': [{'id': 'NkLXyo2uWVQMWEIaub66U-kfW4x1GJbdpc7qAiGiDEQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/NkLXyo2uWVQMWEIaub66U-kfW4x1GJbdpc7qAiGiDEQ.jpeg?width=108&crop=smart&auto=webp&s=50eab68548a569719b6c6a69365264e742fe67f7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/NkLXyo2uWVQMWEIaub66U-kfW4x1GJbdpc7qAiGiDEQ.jpeg?width=216&crop=smart&auto=webp&s=feea670fdb484e7c6cbede84011de15c43c1c887', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/NkLXyo2uWVQMWEIaub66U-kfW4x1GJbdpc7qAiGiDEQ.jpeg?width=320&crop=smart&auto=webp&s=42438e135563be1e3a0e83e448b298c69c1eb1fb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/NkLXyo2uWVQMWEIaub66U-kfW4x1GJbdpc7qAiGiDEQ.jpeg?auto=webp&s=6618c6ec27883aafb32e98ee247348395fedcb17', 'width': 480}, 'variants': {}}]}
Run Your Local LLMs as Web Agents Directly in Your Browser with BrowserOS
33
I wanted to run web agents using my local models from Ollama without my data ever leaving my machine, so I built a tool for it. It's a simple, open-source Chromium browser that connects directly to your local API endpoint. You can tell your own models to browse, research, and automate tasks, keeping everything 100% private and free.
2025-09-25T18:49:21
https://www.browseros.com/
PrizeInflation9105
browseros.com
1970-01-01T00:00:00
0
{}
1nqf3vf
false
null
t3_1nqf3vf
/r/LocalLLaMA/comments/1nqf3vf/run_your_local_llms_as_web_agents_directly_in/
false
false
default
33
{'enabled': False, 'images': [{'id': 'B0qz1htbzurDEqVn_GMHDL43INLFr3hKzZ1EYoYt7h4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/B0qz1htbzurDEqVn_GMHDL43INLFr3hKzZ1EYoYt7h4.jpeg?width=108&crop=smart&auto=webp&s=b472499a0f1804506c314d122e03b8794d81174c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/B0qz1htbzurDEqVn_GMHDL43INLFr3hKzZ1EYoYt7h4.jpeg?width=216&crop=smart&auto=webp&s=ea80caabaf82ba0e613cff6ac409a251d02fad84', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/B0qz1htbzurDEqVn_GMHDL43INLFr3hKzZ1EYoYt7h4.jpeg?width=320&crop=smart&auto=webp&s=30c5d2d21791ac0e0426a46872c68ba7a296f472', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/B0qz1htbzurDEqVn_GMHDL43INLFr3hKzZ1EYoYt7h4.jpeg?width=640&crop=smart&auto=webp&s=9d6a7958fa1564136b1b7fa7ac0cd44640e242a8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/B0qz1htbzurDEqVn_GMHDL43INLFr3hKzZ1EYoYt7h4.jpeg?width=960&crop=smart&auto=webp&s=971c033308ae49b1e662d301b13ba91e63bf7d2b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/B0qz1htbzurDEqVn_GMHDL43INLFr3hKzZ1EYoYt7h4.jpeg?width=1080&crop=smart&auto=webp&s=c158fa0179e30a7aebe419e0b5df368716f96f93', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/B0qz1htbzurDEqVn_GMHDL43INLFr3hKzZ1EYoYt7h4.jpeg?auto=webp&s=f13be1e6710e205afe03d9f628bc69c74a806089', 'width': 1200}, 'variants': {}}]}
Do I need a good CPU if I have a good GPU for running local models?
1
I have a Ryzen 3 2200G CPU in my retired Plex server, paired with 32 GB of RAM. If I put two 5060 Ti cards in there with 16 GB of VRAM each, will the CPU be a bottleneck?
2025-09-25T18:48:23
https://www.reddit.com/r/LocalLLaMA/comments/1nqf2yn/do_i_need_a_good_cpu_if_i_have_a_good_gpu_for/
alitanveer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqf2yn
false
null
t3_1nqf2yn
/r/LocalLLaMA/comments/1nqf2yn/do_i_need_a_good_cpu_if_i_have_a_good_gpu_for/
false
false
self
1
null
I trained a 4B model to be good at reasoning. Wasn’t expecting this!
0
My goal with **ReasonableQwen3-4B** was to create a small model that doesn't just parrot info, but actually *reasons*. After a lot of tuning, it's ready to share. It excels at:

* 🧠 **Complex Reasoning:** Great for logic puzzles, constraint problems, and safety audits.
* 🧩 **Creative Synthesis:** Strong at analogical and cross-disciplinary thinking.
* ⚙️ **Highly Accessible:** Runs locally with GGUF, MLX, and Ollama.

Give it a spin and let me know what you think. All feedback helps!

* **HuggingFace (GGUF, MLX, Safetensors):** https://huggingface.co/adeelahmad/ReasonableQwen3-4B
* **Ollama:** `ollama run adeeleahmad/reasonableqwen3-4b`
2025-09-25T18:45:17
https://www.reddit.com/r/LocalLLaMA/comments/1nqf049/i_trained_a_4b_model_to_be_good_at_reasoning/
adeelahmadch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqf049
false
null
t3_1nqf049
/r/LocalLLaMA/comments/1nqf049/i_trained_a_4b_model_to_be_good_at_reasoning/
false
false
self
0
null
Replicating OpenAI’s web search
19
**tl;dr**: the best AI web searches follow the pattern of 1) do a traditional search engine query, 2) let the LLM choose what to read, 3) extract the site content into context. Additionally, you can just ask ChatGPT what tools it has and how it uses them.

Hey all, I'm a maintainer of [Onyx](https://github.com/onyx-dot-app/onyx), an open source AI chat platform. We wanted to implement a fast and powerful web search feature similar to OpenAI's.

For our first attempt, we tried to design the feature without closely researching the SOTA versions in ChatGPT, Perplexity, etc. What I ended up doing was using Exa to retrieve full page results, chunking and embedding the content (we're a RAG platform at heart, so we had the utils to do this easily), running a similarity search on the chunks, and then feeding the top chunks to the LLM.

This was ungodly slow. ~30s - 1 min per query.

After that failed attempt, we took a step back and started playing around with the SOTA AI web searches. Luckily, we saw [this post about cracking ChatGPT's prompts](https://www.reddit.com/r/PromptEngineering/comments/1j5mca4/i_made_chatgpt_45_leak_its_system_prompt/) and replicated it for web search. Specifically, I just asked about the web search tool and it said:

> The web tool lets me fetch up-to-date information from the internet. I can use it in two main ways:
> - search() → Runs a search query and returns results from the web (like a search engine).
> - open_url(url) → Opens a specific URL directly and retrieves its content.

We tried this on other platforms like Claude, Gemini, and Grok, and got similar results every time. This also aligns with [Anthropic's published prompts](https://docs.claude.com/en/release-notes/system-prompts). Lastly, we did negative testing like "do you have the follow_link tool", and ChatGPT will correct you with the "actual tool" it uses.
Our conclusion from all of this is that the main AI chat companies seem to do web search the same way: they let the LLM choose what to read further, and the extra context from the full pages doesn't seem to hurt the final result. We implemented this in our project with Exa, since we already had this provider set up, and are also implementing Google PSE and Firecrawl. The web search tool is actually usable now within a reasonable time frame, although we still see latency since we don't maintain a web index.

If you're interested, you can check out our repo here -> [https://github.com/onyx-dot-app/onyx](https://github.com/onyx-dot-app/onyx)
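The three-step pattern from the tl;dr can be sketched as a plain function. Everything here is illustrative, not Onyx's actual code: `SearchResult` and the callables stand in for whatever search backend (Exa, Google PSE, ...) and model calls you wire up.

```python
# Sketch of the search() / open_url() pattern (hypothetical interfaces).
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str

def web_search_answer(query, search_provider, llm_choose_urls, llm_answer, fetch_page):
    # Step 1: traditional search engine query -> titles/snippets only.
    results = search_provider(query)
    # Step 2: let the LLM pick which results are worth reading in full.
    chosen = llm_choose_urls(query, results)
    # Step 3: pull the chosen pages' full content directly into context --
    # no chunking/embedding/reranking round trip.
    pages = [fetch_page(url) for url in chosen]
    return llm_answer(query, pages)
```

The speedup over the chunk-embed-rerank approach comes from step 2: snippets are cheap, and only the handful of pages the model asks for are ever fetched.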
2025-09-25T18:36:44
https://www.reddit.com/r/LocalLLaMA/comments/1nqes66/replicating_openais_web_search/
Weves11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqes66
false
null
t3_1nqes66
/r/LocalLLaMA/comments/1nqes66/replicating_openais_web_search/
false
false
self
19
{'enabled': False, 'images': [{'id': 'g53yiGzXf4Z_wzmQZU0537GXCWTG_OVDvS8lhcI7z0Y', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/g53yiGzXf4Z_wzmQZU0537GXCWTG_OVDvS8lhcI7z0Y.png?width=108&crop=smart&auto=webp&s=96dd0ab1428b2ed1745a761d45fc0c6173ea2c37', 'width': 108}, {'height': 95, 'url': 'https://external-preview.redd.it/g53yiGzXf4Z_wzmQZU0537GXCWTG_OVDvS8lhcI7z0Y.png?width=216&crop=smart&auto=webp&s=ef8d46897321c0217c7fcf20f268c9ad98eeacb7', 'width': 216}, {'height': 142, 'url': 'https://external-preview.redd.it/g53yiGzXf4Z_wzmQZU0537GXCWTG_OVDvS8lhcI7z0Y.png?width=320&crop=smart&auto=webp&s=897f18b850a660daac34330d90b68fcbb8119ab2', 'width': 320}, {'height': 284, 'url': 'https://external-preview.redd.it/g53yiGzXf4Z_wzmQZU0537GXCWTG_OVDvS8lhcI7z0Y.png?width=640&crop=smart&auto=webp&s=eeb6344e1d3863fae4c3f2d1fad9d3486ddf4edc', 'width': 640}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/g53yiGzXf4Z_wzmQZU0537GXCWTG_OVDvS8lhcI7z0Y.png?auto=webp&s=f0db696dca85c4881b8e237c70f97f2c1ba23b3d', 'width': 901}, 'variants': {}}]}
support for GroveMoE has been merged into llama.cpp
78
model by InclusionAI:

We introduce **GroveMoE**, a new sparse architecture using **adjugate experts** for dynamic computation allocation, featuring the following key highlights:

* **Architecture**: Novel **adjugate experts** grouped with ordinary experts; shared computation is executed once, then reused, cutting FLOPs.
* **Sparse Activation**: 33B params total, only **3.14–3.28B** active per token.
* **Training**: Mid-training + SFT, up-cycled from Qwen3-30B-A3B-Base; preserves prior knowledge while adding new capabilities.
2025-09-25T18:09:49
https://github.com/ggml-org/llama.cpp/pull/15510
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1nqe2wq
false
null
t3_1nqe2wq
/r/LocalLLaMA/comments/1nqe2wq/support_for_grovemoe_has_been_merged_into_llamacpp/
false
false
default
78
{'enabled': False, 'images': [{'id': 'nxPo6cfvMPF7ggoi2vynbmnR5sz82ODMoRIW0fu1uyY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nxPo6cfvMPF7ggoi2vynbmnR5sz82ODMoRIW0fu1uyY.png?width=108&crop=smart&auto=webp&s=47c44928c17173f4e3ed44c3068e0883a4806781', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nxPo6cfvMPF7ggoi2vynbmnR5sz82ODMoRIW0fu1uyY.png?width=216&crop=smart&auto=webp&s=e1de89eaf768d03f7b060b70d9146ad0566329ac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nxPo6cfvMPF7ggoi2vynbmnR5sz82ODMoRIW0fu1uyY.png?width=320&crop=smart&auto=webp&s=d5c3a75aa19f91facfd37b80449bb80f9b0a7f08', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nxPo6cfvMPF7ggoi2vynbmnR5sz82ODMoRIW0fu1uyY.png?width=640&crop=smart&auto=webp&s=e1da8cf735a8b89aee205f00585306fab9d51e04', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nxPo6cfvMPF7ggoi2vynbmnR5sz82ODMoRIW0fu1uyY.png?width=960&crop=smart&auto=webp&s=2c728bc406c9e89109f8f49e219fc961717dafd2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nxPo6cfvMPF7ggoi2vynbmnR5sz82ODMoRIW0fu1uyY.png?width=1080&crop=smart&auto=webp&s=321fa51b948a20078c5a086a2e93b7b274668f98', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nxPo6cfvMPF7ggoi2vynbmnR5sz82ODMoRIW0fu1uyY.png?auto=webp&s=615c01f7a9ff59ff40d201b19b8b076b252e51cb', 'width': 1200}, 'variants': {}}]}
Me and my friends connected an Humanoid Robot to Local Large Language Models
5
My friends and I wanted to have a conversation with our school's humanoid robot, so we found a way to hook it up to some locally hosted LLMs and VLMs running on a good-enough computer. I wrote a blog post explaining how and why we did that: [https://lightofshadow.bearblog.dev/bringing-emma-to-life/](https://lightofshadow.bearblog.dev/bringing-emma-to-life/)
2025-09-25T18:09:31
https://www.reddit.com/r/LocalLLaMA/comments/1nqe2ll/me_and_my_friends_connected_an_humanoid_robot_to/
lightofshadow_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqe2ll
false
null
t3_1nqe2ll
/r/LocalLLaMA/comments/1nqe2ll/me_and_my_friends_connected_an_humanoid_robot_to/
false
false
self
5
{'enabled': False, 'images': [{'id': 'bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0.png?width=108&crop=smart&auto=webp&s=0a67e5dc7cac1b563474d836e41f4a9906cea873', 'width': 108}], 'source': {'height': 120, 'url': 'https://external-preview.redd.it/bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0.png?auto=webp&s=f62d9a40cc0f5792f55046d1b84d58bc69f7d6a8', 'width': 124}, 'variants': {}}]}
My Budget Local LLM Rig: How I'm running Mixtral 8x7B on a used \$500 GPU
9
I’ve been tinkering with local LLMs for a while, and I thought I’d share my setup for anyone curious about running big models without dropping $5k+ on a top-end GPU. The Rig: • CPU: Ryzen 9 5900X (bought used for $220) • GPU: NVIDIA RTX 3090 (24GB VRAM, snagged used on eBay for $500) • RAM: 64GB DDR4 (needed for dataset caching & smooth multitasking) • Storage: 2TB NVMe SSD (models load faster, less disk bottlenecking) • OS: Ubuntu 22.04 LTS 🧠 The Model: • Running Mixtral 8x7B (MoE) using `llama.cpp` + `text-generation-webui` • Quantized to **Q4_K_M** — fits nicely into VRAM and runs surprisingly smooth • Average speed: ~18 tokens/sec locally, which feels almost realtime for chat use ⚙️ Setup Tips: 1. VRAM is king. If you’re planning to run models like Mixtral or Llama 3 70B, you’ll need 24GB+ VRAM. That’s why the 3090 (or 4090 if you’ve got the budget) is the sweet spot. 2. Quantization saves the day. Without quantization, you’re not fitting these models on consumer GPUs. Q4/Q5 balance speed and quality really well. 3. Cooling matters. My 3090 runs hot; added extra airflow and undervolted for stability. 4. Storage speed helps load times. NVMe is strongly recommended if you don’t want to wait forever. ● Why this is awesome: ▪︎ Fully offline, no API costs, no censorship filters. ▪︎ I can run coding assistants, story generators, and knowledge chatbots locally. ▪︎ Once the rig is set up, the marginal cost of experimenting is basically $0. ● Takeaway: If you’re willing to buy used hardware, you can get a capable local LLM rig for under ~$1000 all-in. That’s *insane* considering what these models can do. Curious, what’s the cheapest rig you’ve seen people run Mixtral (or Llama) on? Anyone tried squeezing these models onto something like a 4060 Ti (16GB) or Apple Silicon? That's what I am trying to do next; will let you know how it goes and if it's doable.
2025-09-25T17:50:34
https://www.reddit.com/r/LocalLLaMA/comments/1nqdkkm/my_budget_local_llm_rig_how_im_running_mixtral/
Ghostone89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqdkkm
false
null
t3_1nqdkkm
/r/LocalLLaMA/comments/1nqdkkm/my_budget_local_llm_rig_how_im_running_mixtral/
false
false
self
9
null
Community Input
0
Hey guys, I need some data regarding RAG implementation, and would love your input [https://forms.gle/xQP2o6KS7Xq6oJ5x9](https://forms.gle/xQP2o6KS7Xq6oJ5x9)
2025-09-25T17:50:03
https://www.reddit.com/r/LocalLLaMA/comments/1nqdk2h/community_input/
NikhilAeturi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqdk2h
false
null
t3_1nqdk2h
/r/LocalLLaMA/comments/1nqdk2h/community_input/
false
false
self
0
{'enabled': False, 'images': [{'id': 'b1cjQlzS3EYVLO9VAQrXcWGDqHiAsQtSWEEVsdFzuQg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/b1cjQlzS3EYVLO9VAQrXcWGDqHiAsQtSWEEVsdFzuQg.png?width=108&crop=smart&auto=webp&s=c330f8a019b4fd875d353b182fa21cadf9736d36', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/b1cjQlzS3EYVLO9VAQrXcWGDqHiAsQtSWEEVsdFzuQg.png?width=216&crop=smart&auto=webp&s=a51db03f330bef95db66a56d69ee324c3dceb726', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/b1cjQlzS3EYVLO9VAQrXcWGDqHiAsQtSWEEVsdFzuQg.png?width=320&crop=smart&auto=webp&s=364100b92a377cfdb07047b094255c0e803c80bc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/b1cjQlzS3EYVLO9VAQrXcWGDqHiAsQtSWEEVsdFzuQg.png?width=640&crop=smart&auto=webp&s=12c759dfd6dfdfe5adcec50a8b9db658074c237e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/b1cjQlzS3EYVLO9VAQrXcWGDqHiAsQtSWEEVsdFzuQg.png?width=960&crop=smart&auto=webp&s=a0ba74d1203b813270df57ec71576d59fad303fd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/b1cjQlzS3EYVLO9VAQrXcWGDqHiAsQtSWEEVsdFzuQg.png?width=1080&crop=smart&auto=webp&s=523879fcc5bc8aeacf50b58eebbf4dd447ee3fc1', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/b1cjQlzS3EYVLO9VAQrXcWGDqHiAsQtSWEEVsdFzuQg.png?auto=webp&s=bf12646aeafcfad1470b4a9d1bbdfefc652e8c5f', 'width': 1200}, 'variants': {}}]}
llama.cpp now supports Qwen3 reranker
96
After adding [support for Qwen3 embeddings](https://www.reddit.com/r/LocalLLaMA/comments/1l3vt95/comment/mw4k324/?context=3) a while ago, [support for Qwen3 rerankers](https://github.com/ggml-org/llama.cpp/pull/14029) was just merged. Note that the conversion script was changed in that MR. That means that you'll need a fresh GGUF for it to give correct results, not one of those that were uploaded months ago. So how to run a simple example and what does it do? `llama-embedding -m qwen3-reranker-0.6b_Q8_0.gguf --embd-normalize -1 -p "<question>\t<document>"` You run this for the question and for each document that you found regarding that question. This then gives a score how well the document matches the question. Here are 4 reranked snippets for the following question: *What does reranking mean?* * **0.998** "Reranking is one of the simplest methods for dramatically improving recall performance in Retrieval Augmented Generation (RAG) or any other retrieval-based pipeline." * **0.996** "A reranking model — also known as a cross-encoder — is a type of model that, given a query and document pair, will output a similarity score." * **0.190** "Given 40M records, if we use a small reranking model like BERT on a V100 GPU — we'd be waiting more than 50 hours to return a single query result." * **0.001** "Before setting up the retrieval pipeline, we need data to retrieve! We will use the jamescalam/ai-arxiv-chunked dataset from Hugging Face Datasets. This dataset contains more than 400 ArXiv papers on ML, NLP, and LLMs."
2025-09-25T17:32:35
https://www.reddit.com/r/LocalLLaMA/comments/1nqd3fo/llamacpp_now_supports_qwen3_reranker/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqd3fo
false
null
t3_1nqd3fo
/r/LocalLLaMA/comments/1nqd3fo/llamacpp_now_supports_qwen3_reranker/
false
false
self
96
{'enabled': False, 'images': [{'id': 'gjtn51bKTEhntL8tK6567mzxkqg8KV6qsi2OUMPMyfI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gjtn51bKTEhntL8tK6567mzxkqg8KV6qsi2OUMPMyfI.png?width=108&crop=smart&auto=webp&s=b449bb4f6a1dd8f02a80e3b0b382cda1879c3888', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gjtn51bKTEhntL8tK6567mzxkqg8KV6qsi2OUMPMyfI.png?width=216&crop=smart&auto=webp&s=5dd198979eddc09574fc87dd7d347fa50fa8b415', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gjtn51bKTEhntL8tK6567mzxkqg8KV6qsi2OUMPMyfI.png?width=320&crop=smart&auto=webp&s=ba0ec959346284f7cccabc4c8a1df9abca90cb19', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gjtn51bKTEhntL8tK6567mzxkqg8KV6qsi2OUMPMyfI.png?width=640&crop=smart&auto=webp&s=85b5c1e4e4de3523ece6b845061c75509db2ddfa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gjtn51bKTEhntL8tK6567mzxkqg8KV6qsi2OUMPMyfI.png?width=960&crop=smart&auto=webp&s=66148e78708298cb426156f77c3e8109336244bf', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gjtn51bKTEhntL8tK6567mzxkqg8KV6qsi2OUMPMyfI.png?width=1080&crop=smart&auto=webp&s=6812537ec7a4e4862d302a5361894493daf3a3ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gjtn51bKTEhntL8tK6567mzxkqg8KV6qsi2OUMPMyfI.png?auto=webp&s=cadb402055d4e195f1b10a32c5b96ab1987f345c', 'width': 1200}, 'variants': {}}]}
Is OpenAI's Reinforcement Fine-Tuning (RFT) worth it?
3
2025-09-25T17:30:47
https://www.tensorzero.com/blog/is-openai-reinforcement-fine-tuning-rft-worth-it/
bianconi
tensorzero.com
1970-01-01T00:00:00
0
{}
1nqd1nu
false
null
t3_1nqd1nu
/r/LocalLLaMA/comments/1nqd1nu/is_openais_reinforcement_finetuning_rft_worth_it/
false
false
default
3
{'enabled': False, 'images': [{'id': 'K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=108&crop=smart&auto=webp&s=7f316b890b2a31a8f62865e9dee0569e96f0223c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=216&crop=smart&auto=webp&s=00f1de77a5649a79c91d9cfaf6e03bf21f107026', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=320&crop=smart&auto=webp&s=2ca81dda9abf4ec9e6bfb889114a5c077769d765', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=640&crop=smart&auto=webp&s=5a7cae50b6f64366d7ac07d9f8dfc0a821ddf0b8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=960&crop=smart&auto=webp&s=99b7c53dad6f4445fd39ac50a99d95ff14c145bc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=1080&crop=smart&auto=webp&s=1493755ef1337b07c1305234f8696c55d8bf1c05', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?auto=webp&s=b637a9ae4b2efc64add1e2ceadf2fc8d033def18', 'width': 1200}, 'variants': {}}]}
GLM-4.5-air outputting \n x times when asked to create structured output
8
Hey guys, Been spinning up GLM-4.5-Air lately and I have it generate some structured output. Sometimes (not always) it gets stuck after one of the field names, generating '\n' in a loop. For inference parameters I use: {"extra_body": {'repetition_penalty': 1.05, 'length_penalty': 1.05}} {"temperature": 0.6, "top_p": 0.95, "max_tokens": 16384} Has anyone encountered this issue, or does anyone have an idea? Thx!
2025-09-25T17:27:08
https://www.reddit.com/r/LocalLLaMA/comments/1nqcy4k/glm45air_outputting_n_x_times_when_asked_to/
Best_Sail5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqcy4k
false
null
t3_1nqcy4k
/r/LocalLLaMA/comments/1nqcy4k/glm45air_outputting_n_x_times_when_asked_to/
false
false
self
8
null
A Voice model that can add emotion to an AI narration
1
Due to my VRAM limitations I decided to use Kokoro 1.0, and I was pleasantly surprised by the crisp clarity of the output. I also got a very chill and pleasant voice using the voice blending feature. However, understandably, there are no emotional controls in the model. By using quotation marks and such I can maybe add a bit of emotion sometimes, but overall it is flat. I've been trying to find any models that can help with this specific task, but I have been unsuccessful. Google being Google only shows me results for more TTS models.
2025-09-25T17:20:13
https://www.reddit.com/r/LocalLLaMA/comments/1nqcrn2/a_voice_model_that_can_add_emotion_to_an_ai/
Mysterious-Comment94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqcrn2
false
null
t3_1nqcrn2
/r/LocalLLaMA/comments/1nqcrn2/a_voice_model_that_can_add_emotion_to_an_ai/
false
false
self
1
null
I've made Magic Tales: Bedtime Stories creator for kids with private on-device Apple Foundation Models | Local LLM
4
[Magic Tales – Bedtime Stories ](https://apps.apple.com/us/app/magic-tales-bedtime-stories/id6751983981) Create magical bedtime moments with AI-generated stories. Simply choose a theme and character, and Magic Tales will craft a unique story with beautiful text and images. Parents can instantly generate personalized bedtime stories for their kids, making every night special.
2025-09-25T17:17:22
https://i.redd.it/37e8onl8gcrf1.png
ArimaJain
i.redd.it
1970-01-01T00:00:00
0
{}
1nqcov6
false
null
t3_1nqcov6
/r/LocalLLaMA/comments/1nqcov6/ive_made_magic_tales_bedtime_stories_creator_for/
false
false
default
4
{'enabled': True, 'images': [{'id': '37e8onl8gcrf1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/37e8onl8gcrf1.png?width=108&crop=smart&auto=webp&s=127163b0be9c6c60d9622ef31e1b3658fee512f7', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/37e8onl8gcrf1.png?width=216&crop=smart&auto=webp&s=18d0d58ce517161e6ca07ace429f07881d5e7fdd', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/37e8onl8gcrf1.png?width=320&crop=smart&auto=webp&s=0441b9e003c5ddc76f456fc91f9c908eab8e6488', 'width': 320}, {'height': 408, 'url': 'https://preview.redd.it/37e8onl8gcrf1.png?width=640&crop=smart&auto=webp&s=3ee5ec28ba95843accec692b4ef3ae0b61587ceb', 'width': 640}, {'height': 612, 'url': 'https://preview.redd.it/37e8onl8gcrf1.png?width=960&crop=smart&auto=webp&s=8e72908107b1f13f51b8b585ce1f326b10874c6d', 'width': 960}, {'height': 689, 'url': 'https://preview.redd.it/37e8onl8gcrf1.png?width=1080&crop=smart&auto=webp&s=0aa83b492611bf22a2af9045a3d085ba22bfd138', 'width': 1080}], 'source': {'height': 736, 'url': 'https://preview.redd.it/37e8onl8gcrf1.png?auto=webp&s=d86beb499e07a9614d3a430a5b12110ff73850f4', 'width': 1153}, 'variants': {}}]}
looking for llm trained only on free use/public domain materials.
1
Looking for a model that has been trained only on public-use information, with no copyright on it (or with permission to use it). Trained from scratch, not fine-tuned, because I read another Reddit post that talked about the training data itself, not the LLM. Most LLMs pull information from different web sources, and it seems not all of those sources can really be used for full commercial purposes legally, or at least that's what I see. I want something open source (not a website) trained only on free-use/public-domain materials that I can generally use without risk of copyright infringement.
2025-09-25T17:15:41
https://www.reddit.com/r/LocalLLaMA/comments/1nqcn8p/looking_for_llm_trained_only_on_free_usepublic/
Specific_Objective77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqcn8p
false
null
t3_1nqcn8p
/r/LocalLLaMA/comments/1nqcn8p/looking_for_llm_trained_only_on_free_usepublic/
false
false
self
1
null
Omni3 realtime success?
1
[removed]
2025-09-25T16:57:10
https://www.reddit.com/r/LocalLLaMA/comments/1nqc5lt/omni3_realtime_success/
hey_mister
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqc5lt
false
null
t3_1nqc5lt
/r/LocalLLaMA/comments/1nqc5lt/omni3_realtime_success/
false
false
self
1
null
Working on a budget build, does this look like it would work?
3
Basically trying to do a budget build, specs are 40 cores, 256GB RAM, 48GB VRAM. Does this look like it would work? What kind of speed might I be able to expect? Parts list: X99 DUAL PLUS Mining Motherboard (supports 256GB DDR4, dual LGA 2011-3 V3/V4 CPU sockets, 4x USB3.0, 4x PCIe3.0) $152.29 x1 = $152.29; non-official-edition Intel Xeon E5-2698 V4 ES QHUZ 2.0GHz 20-core CPU $59.90 x2 = $119.80; upHere P4K CPU air cooler (4x 6mm copper heat pipes) $20.99 x2 = $41.98; MC03.2 mining rig case (holds 8 fans, no motherboard/CPU/RAM included) $109.99 x1 = $109.99; Timetec 32GB kit (2x16GB) DDR4 2400MHz PC4-19200 non-ECC $59.99 x8 = $479.92; GIGABYTE NVIDIA GeForce RTX 3060 12GB GDDR6 $274.99 x4 = $1099.96; Corsair RM1000e (2025) fully modular ATX PSU $149.99 x1 = $149.99. Total: $2153.93
2025-09-25T16:50:31
https://www.reddit.com/r/LocalLLaMA/comments/1nqbzcb/working_on_a_budget_build_does_this_look_like_it/
A13XM01R
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqbzcb
false
null
t3_1nqbzcb
/r/LocalLLaMA/comments/1nqbzcb/working_on_a_budget_build_does_this_look_like_it/
false
false
self
3
null
What? Running Qwen-32B on a 32GB GPU (5090).
359
2025-09-25T16:16:51
https://v.redd.it/01adz6it5crf1
curiousily_
/r/LocalLLaMA/comments/1nqb3p3/what_running_qwen32b_on_a_32gb_gpu_5090/
1970-01-01T00:00:00
0
{}
1nqb3p3
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/01adz6it5crf1/DASHPlaylist.mpd?a=1761538618%2CNjcwYjc1YjQyNzI4OWQ0ZjgzMmZmODgxZjljYmY3MDRkMTU1YWE0MTI5MGMxNWU4YTk1YjcwY2JhZThhMGQyMQ%3D%3D&v=1&f=sd', 'duration': 258, 'fallback_url': 'https://v.redd.it/01adz6it5crf1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/01adz6it5crf1/HLSPlaylist.m3u8?a=1761538618%2CNDI4MmQ5NzE3YjIzMDhjZTI1OTViYzUxNDk5NDUxZmRjOTY0YjFhZmJjYmZmZGRiZTA1ZDU4MGQzYWY1OWZhYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/01adz6it5crf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}}
t3_1nqb3p3
/r/LocalLLaMA/comments/1nqb3p3/what_running_qwen32b_on_a_32gb_gpu_5090/
false
false
https://external-preview…85085c6a9b52a304
359
{'enabled': False, 'images': [{'id': 'eGQxejJvNXo1Y3JmMc7C-li4AFXa_Q-5qATmlwGRne0zNSJFPFjYVcktZ0y0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eGQxejJvNXo1Y3JmMc7C-li4AFXa_Q-5qATmlwGRne0zNSJFPFjYVcktZ0y0.png?width=108&crop=smart&format=pjpg&auto=webp&s=7a3273bc03bda994dd9a73fc7e77f84fb21a2129', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eGQxejJvNXo1Y3JmMc7C-li4AFXa_Q-5qATmlwGRne0zNSJFPFjYVcktZ0y0.png?width=216&crop=smart&format=pjpg&auto=webp&s=ea0376e87298077eed440661b692b1d5641a1802', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/eGQxejJvNXo1Y3JmMc7C-li4AFXa_Q-5qATmlwGRne0zNSJFPFjYVcktZ0y0.png?width=320&crop=smart&format=pjpg&auto=webp&s=5c654e431cd20978b93552e2ef5c2b0ccbeae487', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/eGQxejJvNXo1Y3JmMc7C-li4AFXa_Q-5qATmlwGRne0zNSJFPFjYVcktZ0y0.png?width=640&crop=smart&format=pjpg&auto=webp&s=64bfec0185ff4257a3285a980f1b5957c4ee5fa0', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/eGQxejJvNXo1Y3JmMc7C-li4AFXa_Q-5qATmlwGRne0zNSJFPFjYVcktZ0y0.png?width=960&crop=smart&format=pjpg&auto=webp&s=366b947f8b005311bce0711f890e3b71b9e1f9a7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eGQxejJvNXo1Y3JmMc7C-li4AFXa_Q-5qATmlwGRne0zNSJFPFjYVcktZ0y0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3a440af3042460ebca355e381f98e757d4631d9a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/eGQxejJvNXo1Y3JmMc7C-li4AFXa_Q-5qATmlwGRne0zNSJFPFjYVcktZ0y0.png?format=pjpg&auto=webp&s=41bdba09dc7560a595214ceb4a9284c4e48e77d2', 'width': 1138}, 'variants': {}}]}
Tencent is teasing the world’s most powerful open-source text-to-image model, Hunyuan Image 3.0 Drops Sept 28
264
2025-09-25T15:54:29
https://i.redd.it/t8w84ihz1crf1.jpeg
abdouhlili
i.redd.it
1970-01-01T00:00:00
0
{}
1nqaiaz
false
null
t3_1nqaiaz
/r/LocalLLaMA/comments/1nqaiaz/tencent_is_teasing_the_worlds_most_powerful/
false
false
https://b.thumbs.redditm…jRqERD1DJDCc.jpg
264
{'enabled': True, 'images': [{'id': 'ke0IYwBGf2vXTWO5b7mc8NZojBQZJ59sytskyVN_TK0', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/t8w84ihz1crf1.jpeg?width=108&crop=smart&auto=webp&s=caead9e4def0532a1263cea07b847d20a4156efb', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/t8w84ihz1crf1.jpeg?width=216&crop=smart&auto=webp&s=f411c09d741df120bd5ae6c5599502246cbdabaa', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/t8w84ihz1crf1.jpeg?width=320&crop=smart&auto=webp&s=7baa4e0b2787468f13172afec34bbee2b70e6dcf', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/t8w84ihz1crf1.jpeg?width=640&crop=smart&auto=webp&s=35af183180dd8d2fd35f6be46e55d9a4dc75e26d', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/t8w84ihz1crf1.jpeg?width=960&crop=smart&auto=webp&s=3ff97948c8de6150ffd6d03f33530d5ae6afbab6', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/t8w84ihz1crf1.jpeg?width=1080&crop=smart&auto=webp&s=e3d321253da723f7c09e04c4a0bb8f07820f4471', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://preview.redd.it/t8w84ihz1crf1.jpeg?auto=webp&s=0469ee3b97fe2a7df9dcdafda0eeafb8e523f4dc', 'width': 1280}, 'variants': {}}]}
Are the compute cost complainers simply using LLM’s incorrectly?
7
I was looking at AWS and Vertex AI compute costs and comparing them to what I remember reading about how expensive cloud compute rental has been lately. I am so confused as to why everybody is complaining about compute costs. Don’t get me wrong, compute is expensive. But the problem is everybody here, or in other subreddits I’ve read, seems to talk as if they can’t even get by a day or two without spending $10-$100 depending on the type of task they are doing. The reason this is baffling to me is that I can think of so many small use cases where this won’t be an issue. If I just want an LLM to look up something in a dataset that I have, or to adjust something in that dataset, having it do that kind of task 10, 20 or even 100 times a day should by no means increase my monthly cloud costs to something like $3,000 ($100 a day). So what in the world are those people doing that’s making it so expensive for them? I can’t imagine it would be anything more than trying to build entire software from scratch rather than small use cases. If you’re using RAG and you have thousands of pages of PDF data that each task must process, then I get it. But if not, then what the helly? Am I missing something here?
2025-09-25T15:51:11
https://www.reddit.com/r/LocalLLaMA/comments/1nqaf82/are_the_compute_cost_complainers_simply_using/
ontologicalmemes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqaf82
false
null
t3_1nqaf82
/r/LocalLLaMA/comments/1nqaf82/are_the_compute_cost_complainers_simply_using/
false
false
self
7
null
What Lora Style and model is this????!!
1
[removed]
2025-09-25T15:38:36
https://www.reddit.com/r/LocalLLaMA/comments/1nqa3mt/what_lora_style_and_model_is_this/
No_Scarcity361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqa3mt
false
null
t3_1nqa3mt
/r/LocalLLaMA/comments/1nqa3mt/what_lora_style_and_model_is_this/
false
false
nsfw
1
null
OpenAI has moved from a growth phase to a customer-milking phase.
0
Overall, it’s pretty depressing: I used to generate images on the Plus plan and barely noticed any limits, and now it tells me: “Please wait 6 minutes because you’re sending requests too often.” Same with Sora. At first it generates short-ish videos, and then it just starts flagging them like: your little clip violates our rules 99% of the time. In short, the company is shifting from hypergrowth to shearing the sheep. Looks like the magic is over. As they say: if you want the cow to eat less and give more milk, you just milk her harder and feed her less… Bottom line, the coupon-clipping is in full swing. I also saw the “Business” plan for $25. I thought: cool, I can send extended requests to Sora without paying $200 for Pro. But those sneaky folks say you have to pick seats, minimum two! Which means it’s already $50.
2025-09-25T15:27:10
https://www.reddit.com/r/LocalLLaMA/comments/1nq9sx5/openai_has_moved_from_a_growth_phase_to_a/
ievkz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq9sx5
false
null
t3_1nq9sx5
/r/LocalLLaMA/comments/1nq9sx5/openai_has_moved_from_a_growth_phase_to_a/
false
false
self
0
null
Question about Multi-GPU performance in llama.cpp
2
[removed]
2025-09-25T15:09:07
https://www.reddit.com/r/LocalLLaMA/comments/1nq9bs1/question_about_multigpu_performance_in_llamacpp/
-FernandoT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq9bs1
false
null
t3_1nq9bs1
/r/LocalLLaMA/comments/1nq9bs1/question_about_multigpu_performance_in_llamacpp/
false
false
self
2
null
What are some non US and Chinese AI models - how do they perform?
5
Don’t say mistral
2025-09-25T15:05:45
https://www.reddit.com/r/LocalLLaMA/comments/1nq98kd/what_are_some_non_us_and_chinese_ai_models_how_do/
Civil_Opposite7103
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq98kd
false
null
t3_1nq98kd
/r/LocalLLaMA/comments/1nq98kd/what_are_some_non_us_and_chinese_ai_models_how_do/
false
false
self
5
null
I am Aadarsh Pandey 13y/o from India. I am the developer and founder of Examsprint AI.
1
[removed]
2025-09-25T14:57:02
https://i.redd.it/iqc0kmfqrbrf1.png
Vivid_Muffin_5184
i.redd.it
1970-01-01T00:00:00
0
{}
1nq906p
false
null
t3_1nq906p
/r/LocalLLaMA/comments/1nq906p/i_am_aadarsh_pandey_13yo_from_india_i_am_the/
false
false
https://b.thumbs.redditm…B-HiEjio0zZs.jpg
1
{'enabled': True, 'images': [{'id': 'n8EwsrkT3TKfy3aMFyxUWgCU2T09ImYmtGVJ-Y8_wM4', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/iqc0kmfqrbrf1.png?width=108&crop=smart&auto=webp&s=4df3b981d34e890207f894448d2c1dd67158730f', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/iqc0kmfqrbrf1.png?width=216&crop=smart&auto=webp&s=91d0c0f44258eeab275513272d2a0d3c75ea861b', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/iqc0kmfqrbrf1.png?width=320&crop=smart&auto=webp&s=b05f94abb43b0d722d35d87dedf91edb430ca4a8', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/iqc0kmfqrbrf1.png?width=640&crop=smart&auto=webp&s=b019d2748cfd4894b0af082edc087b4314371c22', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/iqc0kmfqrbrf1.png?width=960&crop=smart&auto=webp&s=42fbe0bc8d01f381f741a2433d391c5bbf7a14fb', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/iqc0kmfqrbrf1.png?width=1080&crop=smart&auto=webp&s=2f45fac992b566fb1e16cff0d34d08e7ca5bfd6c', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/iqc0kmfqrbrf1.png?auto=webp&s=fce239dde22eb75754c46f7c16729c7ad430a54d', 'width': 1080}, 'variants': {}}]}
Question about Multi-GPU performance in llama.cpp
1
[removed]
2025-09-25T14:43:57
https://www.reddit.com/r/LocalLLaMA/comments/1nq8o2v/question_about_multigpu_performance_in_llamacpp/
-FernandoT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq8o2v
false
null
t3_1nq8o2v
/r/LocalLLaMA/comments/1nq8o2v/question_about_multigpu_performance_in_llamacpp/
false
false
self
1
null
Is there any way I can compare qwen3-next 80b reasoning with o1?
4
Last year I made a prediction: [https://www.reddit.com/r/LocalLLaMA/comments/1fp00jy/apple_m_aider_mlx_local_server/](https://www.reddit.com/r/LocalLLaMA/comments/1fp00jy/apple_m_aider_mlx_local_server/) >random prediction: in 1 year a model, 1M context, 42GB coder-model that is not only extremely fast on M1 Max (50-60t/s) but smarter than o1 at the moment. ________ Reality check: the context is about 220k, the speed is about 40t/s, so I can't really claim it. "These stoopid AI engineers made me look bad" The fact that Qwen3 Thinking 4-quant is exactly 42GB is a funny coincidence. But I want to compare the quant version with o1. How would I go about that? Any clues? This is solely for fun purposes... I'm looking at [artificialanalysis.ai](http://artificialanalysis.ai) and they rank intelligence score: o1 - 47, qwen3 80b - 54 (general), and on the coding index it's o1 - 39, qwen - 42. But I want to see how the 4-quant compares, suggestions? ________ random prediction in 1 year: we'll have open-weight models under 250B parameters which will be better at diagnosis than any doctor in the world (including reading visual things) and better at coding/math than any human.
2025-09-25T14:29:16
https://www.reddit.com/r/LocalLLaMA/comments/1nq8alp/is_there_any_way_i_can_compare_qwen3next_80b/
shaman-warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq8alp
false
null
t3_1nq8alp
/r/LocalLLaMA/comments/1nq8alp/is_there_any_way_i_can_compare_qwen3next_80b/
false
false
self
4
{'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=216&crop=smart&auto=webp&s=b97954336b79c1390848d0e44fa056a85de68672', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=320&crop=smart&auto=webp&s=65f53b80ab9674ee645013e3e8eeac4f953d657e', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=640&crop=smart&auto=webp&s=47f397e4a22ed5ec7e82aad070eb446319603abc', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=960&crop=smart&auto=webp&s=0f4359d47b78f5c1aa35de8804dbe36a749fc11a', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=1080&crop=smart&auto=webp&s=62eb4b7216f41af6600fc4df79cfa67425c19442', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?auto=webp&s=efc17c9f241b4403d22cbacfe5d71900ee1cf85a', 'width': 1260}, 'variants': {}}]}
Worse performance on Linux?
8
Good morning/afternoon to everyone. I have a question. I’m slowly starting to migrate to Linux again for inference, but I’ve got a problem. I don’t know if it’s ollama specific or not, I’m switching to vllm today to figure that out. But in Linux my t/s went from 25 to 8 trying to run Qwen models. But small models like llama 3 8b are blazing fast. Unfortunately I can’t use most of the llama models because I built a working memory system that requires tool use with mcp. I don’t have a lot of money, I’m disabled and living on a fixed budget. But my hardware is a very poor AMD Ryzen 5 4500, 32GB DDR4, a 2TB NVMe, and a RX 7900 XT 20GB. According to terminal, everything with ROCm is working. What could be wrong?
2025-09-25T14:10:26
https://www.reddit.com/r/LocalLLaMA/comments/1nq7ti9/worse_performance_on_linux/
Savantskie1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq7ti9
false
null
t3_1nq7ti9
/r/LocalLLaMA/comments/1nq7ti9/worse_performance_on_linux/
false
false
self
8
null
Suggestion regarding my agentic ai repo !
2
Hey everyone, a few days back I made a repo of some cool agents where I had to use prompts a lot, and even now I wonder: is it really agentic, or have I actually done something good? My feeling about this is understandable, because I thought I'd be dealing with writing code, like how people feel when they get into backtracking, but instead I went down a prompt hell. Is that fine? Please go through my repository and be frank, provide some valuable feedback on it. I would be happy to interact, and if you guys think I put some effort into it, please rate it a star lol [https://github.com/jenasuraj/Ai_agents](https://github.com/jenasuraj/Ai_agents)
2025-09-25T14:04:42
https://www.reddit.com/r/LocalLLaMA/comments/1nq7oab/suggestion_regarding_my_agentic_ai_repo/
jenasuraj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq7oab
false
null
t3_1nq7oab
/r/LocalLLaMA/comments/1nq7oab/suggestion_regarding_my_agentic_ai_repo/
false
false
self
2
{'enabled': False, 'images': [{'id': 'wHoKsD_jg6r1pC_0XhcoKrbLyfaCRh4-lw73ZiRBOeA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wHoKsD_jg6r1pC_0XhcoKrbLyfaCRh4-lw73ZiRBOeA.png?width=108&crop=smart&auto=webp&s=0bfd9802a6dd74802d48e219fde1c6db51b49b8c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wHoKsD_jg6r1pC_0XhcoKrbLyfaCRh4-lw73ZiRBOeA.png?width=216&crop=smart&auto=webp&s=156a10f0e5d6617d7da64ac346641459f3ec62df', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wHoKsD_jg6r1pC_0XhcoKrbLyfaCRh4-lw73ZiRBOeA.png?width=320&crop=smart&auto=webp&s=940c6629aca9402031ffbf6ff2d74c33de578a82', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wHoKsD_jg6r1pC_0XhcoKrbLyfaCRh4-lw73ZiRBOeA.png?width=640&crop=smart&auto=webp&s=b84daab9453085389fededba558fae5ec06b3aff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wHoKsD_jg6r1pC_0XhcoKrbLyfaCRh4-lw73ZiRBOeA.png?width=960&crop=smart&auto=webp&s=6aea60627fdca9ff3645750a9fc141676e902040', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wHoKsD_jg6r1pC_0XhcoKrbLyfaCRh4-lw73ZiRBOeA.png?width=1080&crop=smart&auto=webp&s=7fee851d158aa64408b57b03eb7fbe5884140e2f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wHoKsD_jg6r1pC_0XhcoKrbLyfaCRh4-lw73ZiRBOeA.png?auto=webp&s=c9b304263a0a3cf647f29bc76de7fdbfb4d9a014', 'width': 1200}, 'variants': {}}]}
Strix Halo Killer: Qualcomm X2 Elite 128+ GB memory
0
https://preview.redd.it/…makes up for it.
2025-09-25T13:43:10
https://www.reddit.com/r/LocalLLaMA/comments/1nq751w/strix_halo_killer_qualcomm_x2_elite_128_gb_memory/
On1ineAxeL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq751w
false
null
t3_1nq751w
/r/LocalLLaMA/comments/1nq751w/strix_halo_killer_qualcomm_x2_elite_128_gb_memory/
false
false
https://a.thumbs.redditm…2_ne2OzjLgz8.jpg
0
null
I built a Qwen3 embeddings REST API
0
Hi /r/LocalLLaMA, I'm building a commercial data extraction service, and naturally part of that is building a RAG search/chat system. I was originally going to use the OpenAI embeddings API, but then I looked at the MTEB leaderboard and saw that the Qwen3 Embedding models were SOTA, so I built an internal API that my app can use to generate embeddings. I figured if it was useful for me, it'd be useful for someone else, and thus [encoder.dev](https://encoder.dev) was born. It's a dead-simple API with two endpoints: `/api/tokenize` and `/api/encode`. I'll eventually add an `/api/rerank` endpoint as well. You can read the rest of the documentation here: https://encoder.dev/docs There are only two models available: Qwen3-Embedding-0.6B (`small`) and Qwen3-Embedding-4B (`large`). I'm pricing the `small` model at $0.01 per 1M tokens and the `large` at $0.05 per 1M tokens. The first 10,000,000 embedding tokens are free for the `small` model, and the first 2,000,000 are free for the `large` model. Calling the `/api/tokenize` endpoint is free, and it's a good way to see how many tokens a chunk of text will consume before you call the `/api/encode` endpoint. Calls to `/api/encode` are cached, so making a request with identical input is free. There isn't a way to reduce the embedding dimension yet, but I may add that in the future as well. The API is not currently compatible with the OpenAI standard. I may make it compatible at some point in the future, but frankly I don't think it's that great to begin with. I'm relatively new to this, so I'd love your feedback.
2025-09-25T13:29:27
https://www.reddit.com/r/LocalLLaMA/comments/1nq6sx3/i_built_a_qwen3_embeddings_rest_api/
leftnode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq6sx3
false
null
t3_1nq6sx3
/r/LocalLLaMA/comments/1nq6sx3/i_built_a_qwen3_embeddings_rest_api/
false
false
self
0
{'enabled': False, 'images': [{'id': 'odyCjgDhcckc2kChvEwr8KRmp6hlHoueDpvoG9TpmM0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/odyCjgDhcckc2kChvEwr8KRmp6hlHoueDpvoG9TpmM0.png?width=108&crop=smart&auto=webp&s=b102cc1ea7d602d47c36d35a9f2a6252496aa8d5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/odyCjgDhcckc2kChvEwr8KRmp6hlHoueDpvoG9TpmM0.png?width=216&crop=smart&auto=webp&s=6f48f9fe7166cec91d7e03db7098a6b3c86794ec', 'width': 216}, {'height': 173, 'url': 'https://external-preview.redd.it/odyCjgDhcckc2kChvEwr8KRmp6hlHoueDpvoG9TpmM0.png?width=320&crop=smart&auto=webp&s=8bbe36e89b12e52b2c2ff462f9c6669a23718789', 'width': 320}, {'height': 346, 'url': 'https://external-preview.redd.it/odyCjgDhcckc2kChvEwr8KRmp6hlHoueDpvoG9TpmM0.png?width=640&crop=smart&auto=webp&s=92fe1021c7340d01a6a185280981369e5d44c4c6', 'width': 640}, {'height': 520, 'url': 'https://external-preview.redd.it/odyCjgDhcckc2kChvEwr8KRmp6hlHoueDpvoG9TpmM0.png?width=960&crop=smart&auto=webp&s=450ffacedfc50f31179d6ef2c501afba0c058306', 'width': 960}, {'height': 585, 'url': 'https://external-preview.redd.it/odyCjgDhcckc2kChvEwr8KRmp6hlHoueDpvoG9TpmM0.png?width=1080&crop=smart&auto=webp&s=7b3bd2836cb15787166d61065b5c49ab67d5abca', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/odyCjgDhcckc2kChvEwr8KRmp6hlHoueDpvoG9TpmM0.png?auto=webp&s=cd8e522c203e665663941cacefdd505489c69ce1', 'width': 2400}, 'variants': {}}]}
[Beginner]My Qwen Image Edit model is stuck and it's been 5 hours. Please Help
1
Copied this code from Hugging Face and am running it:

```python
import os

from PIL import Image
import torch
from diffusers import QwenImageEditPipeline

pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit")
print("pipeline loaded")
pipeline.to(torch.bfloat16)
pipeline.to("cuda")

image = Image.open(r"C:\XXXXX\Downloads\XXXX\36_image.webp").convert("RGB")
prompt = "Change the girl face angle to front angle."
inputs = {
    "image": image,
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 50,
}

with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("output_image_edit.png")
    print("image saved at", os.path.abspath("output_image_edit.png"))
```

I have seen posts of people running Qwen Image Edit on a 4060 with ComfyUI. All the files have been downloaded (I checked manually), and it has been 5 hours; it is stuck here and I am completely clueless: `Loading checkpoint shards: 100%|█████| 9/9 [01:15<00:00, 8.42s/it]` `Loading pipeline components...: 83%|████ | 5/6 [01:17<00:26, 26.67s/it]` `PS C:\Users\xxxx\xxx\xx> ███▎ | 1/4 [00:10<00:30, 10.17s/it]` Will provide more details if needed.
2025-09-25T13:17:08
https://www.reddit.com/r/LocalLLaMA/comments/1nq6ihk/beginnermy_qwen_image_edit_model_is_stuck_and_its/
xiaolong_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq6ihk
false
null
t3_1nq6ihk
/r/LocalLLaMA/comments/1nq6ihk/beginnermy_qwen_image_edit_model_is_stuck_and_its/
false
false
self
1
null
Kimi Infra team releases K2 Vendor Verifier: an open‑source tool‑call validator for LLM providers
83
https://preview.redd.it/lap5au1j7brf1.png?width=1728&format=png&auto=webp&s=703ef673fa1ffd579a91b53a656de3eec0fe056e

>Since the release of the Kimi K2 model, we have received numerous feedback on the precision of Kimi K2 in toolcall. Given that K2 focuses on the agentic loop, the reliability of toolcall is of utmost importance.

>We have observed significant differences in the toolcall performance of various open-source solutions and vendors. When selecting a provider, users often prioritize lower latency and cost, but may inadvertently overlook more subtle yet critical differences in model accuracy.

>These inconsistencies not only affect user experience but also impact K2's performance in various benchmarking results. To mitigate these problems, we launch K2 Vendor Verifier to monitor and enhance the quality of all K2 APIs.

>We hope K2VV can help ensuring that everyone can access a consistent and high-performing Kimi K2 model.

I found in Kimi K2 0905's release [blog](https://platform.moonshot.cn/blog/posts/kimi-k2-0905) that they mentioned a new technology called "Token Enforcer ensures 100% correct toolcall format". That's huge!
2025-09-25T13:15:45
https://www.reddit.com/r/LocalLLaMA/comments/1nq6hdq/kimi_infra_team_releases_k2_vendor_verifier_an/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq6hdq
false
null
t3_1nq6hdq
/r/LocalLLaMA/comments/1nq6hdq/kimi_infra_team_releases_k2_vendor_verifier_an/
false
false
https://b.thumbs.redditm…p8nqpDOk9-do.jpg
83
{'enabled': False, 'images': [{'id': 'h6GA9o9Fh1iKBczAmWMEU51oiY2qNtCo2dIcqBlrOfA', 'resolutions': [{'height': 21, 'url': 'https://external-preview.redd.it/h6GA9o9Fh1iKBczAmWMEU51oiY2qNtCo2dIcqBlrOfA.png?width=108&crop=smart&auto=webp&s=0e898b7ba90b038552a99c68bdebf704e0a95d94', 'width': 108}, {'height': 42, 'url': 'https://external-preview.redd.it/h6GA9o9Fh1iKBczAmWMEU51oiY2qNtCo2dIcqBlrOfA.png?width=216&crop=smart&auto=webp&s=f2a31130f73f9547fcd2000fb95b0ce425bb7225', 'width': 216}], 'source': {'height': 59, 'url': 'https://external-preview.redd.it/h6GA9o9Fh1iKBczAmWMEU51oiY2qNtCo2dIcqBlrOfA.png?auto=webp&s=1353f21050fd330b108870e37af1b32a9e693562', 'width': 300}, 'variants': {}}]}
haiku.rag, an open-source RAG library/CLI tool for searching & agentic research
1
[removed]
2025-09-25T12:55:15
https://www.reddit.com/r/LocalLLaMA/comments/1nq60em/haikurag_an_opensource_rag_librarycli_tool_for/
gogozad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq60em
false
null
t3_1nq60em
/r/LocalLLaMA/comments/1nq60em/haikurag_an_opensource_rag_librarycli_tool_for/
false
false
self
1
null
Xiaomi 17 , Powerful processor
0
https://x.com/UniverseIce/status/1971187768010342773?t=U36aNuF5tHtNoBOhzadlPg&s=19 - a tremendous bargain: the Xiaomi 17 Pro costs 700 USD 🌚. Apparently the Snapdragon 8 Elite Gen 5 processor will be more accessible than an iPhone 17 😂
2025-09-25T12:47:27
https://i.redd.it/3w4vck8m4brf1.jpeg
Illustrious-Swim9663
i.redd.it
1970-01-01T00:00:00
0
{}
1nq5u36
false
null
t3_1nq5u36
/r/LocalLLaMA/comments/1nq5u36/xiaomi_17_powerful_processor/
false
false
default
0
{'enabled': True, 'images': [{'id': '3w4vck8m4brf1', 'resolutions': [{'height': 125, 'url': 'https://preview.redd.it/3w4vck8m4brf1.jpeg?width=108&crop=smart&auto=webp&s=090b0705110cf0a35ac09bb59f6a689399514e0e', 'width': 108}, {'height': 250, 'url': 'https://preview.redd.it/3w4vck8m4brf1.jpeg?width=216&crop=smart&auto=webp&s=ebdd20bc3498dbd75822edc19e7cb251bfaeff41', 'width': 216}, {'height': 370, 'url': 'https://preview.redd.it/3w4vck8m4brf1.jpeg?width=320&crop=smart&auto=webp&s=31b02729e3bce2c9594f0c9d173231259bda1ceb', 'width': 320}, {'height': 740, 'url': 'https://preview.redd.it/3w4vck8m4brf1.jpeg?width=640&crop=smart&auto=webp&s=e810ea014b1dcda9d6e563a2fb62edbed970bf24', 'width': 640}, {'height': 1111, 'url': 'https://preview.redd.it/3w4vck8m4brf1.jpeg?width=960&crop=smart&auto=webp&s=a66392c43baa26c42f205d550d1ab299d63c46f5', 'width': 960}], 'source': {'height': 1249, 'url': 'https://preview.redd.it/3w4vck8m4brf1.jpeg?auto=webp&s=d8a72e9ac13bdef1cb91002760cc3aa4aff4d78d', 'width': 1079}, 'variants': {}}]}
Just launched a new update on Examsprint AI – need your feedback
0
I’ve been working on Examsprint AI, a site aimed at making JEE/NEET prep more structured and efficient. The new update adds some features I’d love to get your thoughts on: 🤖 AI-powered practice – topic-wise flashcards and quizzes generated for better revision. 📚 Chapter & topic breakdown – organized for Classes 11 & 12 across Physics, Chemistry, Botany, and Zoology. 📖 Integrated NCERT links – so you can jump straight to the right reference. 🎯 Interactive layout – collapsible topics, cleaner navigation, and mobile-friendly. 🚀 Planned next features – performance tracking, AI-based doubt solving, and timed mock tests. I’m not posting this as an advertisement—just genuinely curious if these updates make the studying experience smoother, and what else you’d want in a tool like this. Any feedback (big or small) would mean a lot 🙏
2025-09-25T12:35:30
https://i.redd.it/qjj7treh2brf1.png
JunketBest2459
i.redd.it
1970-01-01T00:00:00
0
{}
1nq5kp8
false
null
t3_1nq5kp8
/r/LocalLLaMA/comments/1nq5kp8/just_launched_a_new_update_on_examsprint_ai_need/
false
false
default
0
{'enabled': True, 'images': [{'id': 'qjj7treh2brf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/qjj7treh2brf1.png?width=108&crop=smart&auto=webp&s=d25e687eddd7e63ed33b5b03521c95110e6a0209', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/qjj7treh2brf1.png?width=216&crop=smart&auto=webp&s=ab7a6740fc0df0cffd89a7a9caa44f333300e801', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/qjj7treh2brf1.png?width=320&crop=smart&auto=webp&s=cfc44cc50c8ce4ded6e8fc6179fb4cb230a77eaf', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/qjj7treh2brf1.png?width=640&crop=smart&auto=webp&s=26e406f580844ddf38ca261999d5c9539fb12239', 'width': 640}, {'height': 640, 'url': 'https://preview.redd.it/qjj7treh2brf1.png?width=960&crop=smart&auto=webp&s=97e5125471107f45ddcf1e9edb47656de98a6c96', 'width': 960}, {'height': 720, 'url': 'https://preview.redd.it/qjj7treh2brf1.png?width=1080&crop=smart&auto=webp&s=20389700ddc94a5f30c14ef61020481499b825b7', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/qjj7treh2brf1.png?auto=webp&s=3b6c5c8efb32709d344065f5d22cecdfa39c3f8e', 'width': 1536}, 'variants': {}}]}
light it up
0
https://gitlab.com/rjnw/spk
Can someone try to run this? I never tried but it might help just for getting some creative output.

I also shared some documentation over at aditi-maya-kali@bsky.social

Everything public domain, I don't think I have much time left for this anymore.
2025-09-25T12:33:52
https://www.reddit.com/r/LocalLLaMA/comments/1nq5jgo/light_it_up/
rjnw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq5jgo
false
null
t3_1nq5jgo
/r/LocalLLaMA/comments/1nq5jgo/light_it_up/
false
false
self
0
null
Simple question, but looking for insight. RTX Pro 6000 ADA or RTX Pro 5000 Blackwell?
3
I know the 5000 series has additional pipeline and system architecture improvements, but when put head to head… does the RTX Pro 6000 ADA top the RTX Pro 5000 Blackwell? 6000 Ada = 18,176 Cuda Cores/568 Tensor 5000 Blackwell = 14,080 Cuda Cores/440 Tensor Both have 48GB of VRAM, but the core count difference is significant.
2025-09-25T12:25:20
https://www.reddit.com/r/LocalLLaMA/comments/1nq5cpj/simple_question_but_looking_for_insight_rtx_pro/
Murky_Estimate1484
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq5cpj
false
null
t3_1nq5cpj
/r/LocalLLaMA/comments/1nq5cpj/simple_question_but_looking_for_insight_rtx_pro/
false
false
self
3
null
16GB VRAM Essentials
182
Good models to try/use if you have 16GB of VRAM
2025-09-25T12:06:57
https://huggingface.co/collections/shb777/16gb-vram-essentials-68a83fc22eb5fc0abd9292dc
Few-Welcome3297
huggingface.co
1970-01-01T00:00:00
0
{}
1nq4yoy
false
null
t3_1nq4yoy
/r/LocalLLaMA/comments/1nq4yoy/16gb_vram_essentials/
false
false
default
182
{'enabled': False, 'images': [{'id': 'U8XqPOiV3AZVtQLqefgeW9FqeySJR9_ozWDBINkXoAI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/U8XqPOiV3AZVtQLqefgeW9FqeySJR9_ozWDBINkXoAI.png?width=108&crop=smart&auto=webp&s=1a5e27ddc4c0535702d844aca518b7b5071f16b0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/U8XqPOiV3AZVtQLqefgeW9FqeySJR9_ozWDBINkXoAI.png?width=216&crop=smart&auto=webp&s=6dee6aeffbb4aff2dfa70119dc89e20db28336dc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/U8XqPOiV3AZVtQLqefgeW9FqeySJR9_ozWDBINkXoAI.png?width=320&crop=smart&auto=webp&s=e7aa09d378b4755cdda94a90ae7d0dd20f27410b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/U8XqPOiV3AZVtQLqefgeW9FqeySJR9_ozWDBINkXoAI.png?width=640&crop=smart&auto=webp&s=ab359b786eea2765d3cb2b27f2a47ec00e777455', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/U8XqPOiV3AZVtQLqefgeW9FqeySJR9_ozWDBINkXoAI.png?width=960&crop=smart&auto=webp&s=4cfe42122372f71e46830bf18036b9ec0b04e3e6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/U8XqPOiV3AZVtQLqefgeW9FqeySJR9_ozWDBINkXoAI.png?width=1080&crop=smart&auto=webp&s=a7a12b2311322476cdde0ffbe095d85a933ad91e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/U8XqPOiV3AZVtQLqefgeW9FqeySJR9_ozWDBINkXoAI.png?auto=webp&s=29bf42a4b46a3a9b583c3e03783519d3742ab7ad', 'width': 1200}, 'variants': {}}]}
Stockmark 2 100B Instruct
69
Stockmark-2-100B-Instruct is a 100-billion-parameter large language model built from scratch, with a particular focus on Japanese. It was pre-trained on approximately 2.0 trillion tokens of data, consisting of 60% English, 30% Japanese, and 10% code. Following pretraining, the model underwent post-training (SFT and DPO) with synthetic data in Japanese to enhance its ability to follow instructions. This version improves instruction-following ability and adds support for long-context (32k), compared to the previous version https://huggingface.co/stockmark/Stockmark-2-100B-Instruct
2025-09-25T12:05:42
https://www.reddit.com/r/LocalLLaMA/comments/1nq4xs9/stockmark_2_100b_instruct/
xugik1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq4xs9
false
null
t3_1nq4xs9
/r/LocalLLaMA/comments/1nq4xs9/stockmark_2_100b_instruct/
false
false
self
69
{'enabled': False, 'images': [{'id': 'Sp1DeXluOH_QtFZgU3TGABU9tsiTX4lUKxlHwVLf9bk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Sp1DeXluOH_QtFZgU3TGABU9tsiTX4lUKxlHwVLf9bk.png?width=108&crop=smart&auto=webp&s=f14c013fbb59e1d926ac3303693583bfdad623bd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Sp1DeXluOH_QtFZgU3TGABU9tsiTX4lUKxlHwVLf9bk.png?width=216&crop=smart&auto=webp&s=adc9cf97b9b2969315af3da4b0792f21d765fb89', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Sp1DeXluOH_QtFZgU3TGABU9tsiTX4lUKxlHwVLf9bk.png?width=320&crop=smart&auto=webp&s=2ef5890ce936fb2cef2c634580f1b17fa82fd430', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Sp1DeXluOH_QtFZgU3TGABU9tsiTX4lUKxlHwVLf9bk.png?width=640&crop=smart&auto=webp&s=2cf6ba3b6b58c32d214354215f77e07f84087543', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Sp1DeXluOH_QtFZgU3TGABU9tsiTX4lUKxlHwVLf9bk.png?width=960&crop=smart&auto=webp&s=85038890c5e1cb6e3a480983c6a49b220426c738', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Sp1DeXluOH_QtFZgU3TGABU9tsiTX4lUKxlHwVLf9bk.png?width=1080&crop=smart&auto=webp&s=26d5f9c6b1e407a798b1a71d48e0b89669522062', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Sp1DeXluOH_QtFZgU3TGABU9tsiTX4lUKxlHwVLf9bk.png?auto=webp&s=985724d193ab007d0b1f31e4a7642acc51a89c2d', 'width': 1200}, 'variants': {}}]}
Pocket LLM: Chat offline on device all private | AI
0
2025-09-25T11:47:07
https://apps.apple.com/in/app/ai-chat-offline-pocket-llm/id6752952699
amanj203
apps.apple.com
1970-01-01T00:00:00
0
{}
1nq4kds
false
null
t3_1nq4kds
/r/LocalLLaMA/comments/1nq4kds/pocket_llm_chat_offline_on_device_all_private_ai/
false
false
default
0
{'enabled': False, 'images': [{'id': 'lkoLEXk2v3wGxthWhOE2zthYEHvBsLlnx-U1kybvDhI', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/lkoLEXk2v3wGxthWhOE2zthYEHvBsLlnx-U1kybvDhI.png?width=108&crop=smart&auto=webp&s=16a83cc2efcf323a2bb3f38535918a49c3832beb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/lkoLEXk2v3wGxthWhOE2zthYEHvBsLlnx-U1kybvDhI.png?width=216&crop=smart&auto=webp&s=c693f5853d79ab4ee2563f9323ff0614e66c5092', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/lkoLEXk2v3wGxthWhOE2zthYEHvBsLlnx-U1kybvDhI.png?width=320&crop=smart&auto=webp&s=4a0e34fff153a085a57e75a8d6c342bcd069bf60', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/lkoLEXk2v3wGxthWhOE2zthYEHvBsLlnx-U1kybvDhI.png?width=640&crop=smart&auto=webp&s=762e8fff151ce47ecba30bd8d914ad3846750830', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/lkoLEXk2v3wGxthWhOE2zthYEHvBsLlnx-U1kybvDhI.png?width=960&crop=smart&auto=webp&s=589fde07832e719700e7fa40744fc974878f686d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/lkoLEXk2v3wGxthWhOE2zthYEHvBsLlnx-U1kybvDhI.png?width=1080&crop=smart&auto=webp&s=709331cfa538116b82896a11f6f5536c891c0fea', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/lkoLEXk2v3wGxthWhOE2zthYEHvBsLlnx-U1kybvDhI.png?auto=webp&s=c1afbc6841459c861f3d95554f3318ddbc9c41c4', 'width': 1200}, 'variants': {}}]}
Question about Performance with Multi-GPU in llama.cpp
1
[removed]
2025-09-25T11:31:30
https://www.reddit.com/r/LocalLLaMA/comments/1nq49h8/question_about_performance_with_multigpu_in/
-FernandoT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq49h8
false
null
t3_1nq49h8
/r/LocalLLaMA/comments/1nq49h8/question_about_performance_with_multigpu_in/
false
false
self
1
null
VLLM on RTX 5090 w/ Win 11 & Ubuntu 24.04 WSL or similar: How to solve Flash-Infer and PyTorch compatibility issues?
0
Hey everyone, I'm trying to get a high-performance VLLM setup running on my RTX 5090, but I've hit a wall with library compatibility. **My current stack:** * **GPU:** NVIDIA RTX 5090 CUDA 13 — Newest Nvidia drivers * **OS:** Windows 11 * **Subsystem:** WSL2 with Ubuntu 24.04 LTS I'm facing significant issues getting VLLM to install, which seem to stem from **Flash-Infer** and **PyTorch** compatibility. The core of the problem appears to be finding a version of PyTorch that both supports the new GPU architecture and can be used to successfully compile Flash-Infer within the Ubuntu 24.04 environment. (I already tried the nightly builds, but new issues keep appearing.) The model I want to use is olmocr 0825 FP8, [https://huggingface.co/allenai/olmOCR-7B-0825](https://huggingface.co/allenai/olmOCR-7B-0825) I get the model loaded into VRAM, but no inference is working. My VLLM server always crashes.
2025-09-25T11:29:48
https://www.reddit.com/r/LocalLLaMA/comments/1nq488v/vllm_on_rtx_5090_w_win_11_ubuntu_2404_wsl_or/
CookEasy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq488v
false
null
t3_1nq488v
/r/LocalLLaMA/comments/1nq488v/vllm_on_rtx_5090_w_win_11_ubuntu_2404_wsl_or/
false
false
self
0
{'enabled': False, 'images': [{'id': '5X8OzlMqcaBpGXw0jK4wuSheI80zFRwja85raIswCsI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5X8OzlMqcaBpGXw0jK4wuSheI80zFRwja85raIswCsI.png?width=108&crop=smart&auto=webp&s=3e7d6f2d2ddc279282ace7cfaf85d5175e9b30b9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5X8OzlMqcaBpGXw0jK4wuSheI80zFRwja85raIswCsI.png?width=216&crop=smart&auto=webp&s=a148a85775dc0022d1132cf8ea54028e7fcc2ef3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5X8OzlMqcaBpGXw0jK4wuSheI80zFRwja85raIswCsI.png?width=320&crop=smart&auto=webp&s=bf2e70bd1c0d66f647aba94783f55e46804d584c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5X8OzlMqcaBpGXw0jK4wuSheI80zFRwja85raIswCsI.png?width=640&crop=smart&auto=webp&s=bbd64baec233bf447a3979778a820ca7d1c99927', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5X8OzlMqcaBpGXw0jK4wuSheI80zFRwja85raIswCsI.png?width=960&crop=smart&auto=webp&s=87015a91e59f20156e5e6ac21e796a3c25938854', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5X8OzlMqcaBpGXw0jK4wuSheI80zFRwja85raIswCsI.png?width=1080&crop=smart&auto=webp&s=0a2ea524f11cfabbe074793d46fd1a509369250c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5X8OzlMqcaBpGXw0jK4wuSheI80zFRwja85raIswCsI.png?auto=webp&s=9e5468de9788160ad9fc57b5643fc12fdb27a147', 'width': 1200}, 'variants': {}}]}
Just an appreciation post
0
Just wanted to thank the devs for text-generation-webui. I appreciate the incredible work behind this project - from the one-click setup or the portable mode (so even a noob like me can use LLMs), to the ability to switch models seamlessly, web search, file uploads, multimodal support, api, etc. it's one of the most versatile tools out there and has the best UI. Huge thanks for building and maintaining such a flexible and user-friendly tool!
2025-09-25T11:12:19
https://www.reddit.com/r/LocalLLaMA/comments/1nq3wrm/just_an_appreciation_post/
beneath_steel_sky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq3wrm
false
null
t3_1nq3wrm
/r/LocalLLaMA/comments/1nq3wrm/just_an_appreciation_post/
false
false
self
0
null
From GPU to Gain Cell: Rethinking LLMs for the Edge. 100× Faster, 100,000× less energy - New study!
31
Analog in-memory computing attention mechanism for fast and energy-efficient large language models: https://arxiv.org/abs/2409.19315 🧠 Key Findings - Problem Addressed: Traditional transformer-based LLMs rely on GPUs, which suffer from latency and energy inefficiencies due to repeated memory transfers during self-attention operations. - Proposed Solution: The researchers introduce a custom analog in-memory computing (IMC) architecture using gain cells—charge-based memory elements that enable parallel analog dot-product computations directly within memory. - Performance Gains: - Latency: Reduced by up to two orders of magnitude. - Energy Consumption: Reduced by up to four to five orders of magnitude compared to GPU-based attention mechanisms. - Model Compatibility: Due to analog circuit non-idealities, direct mapping of pre-trained models isn’t feasible. The team developed a novel initialization algorithm that achieves GPT-2-level performance without retraining from scratch. --- ⚡ Applicability to Edge LLMs This architecture is highly promising for edge deployment of LLMs, where power and compute constraints are critical: - Energy Efficiency: The drastic reduction in energy usage makes it feasible to run generative transformers on battery-powered or thermally constrained devices. - Speed: Lower latency enables real-time inference, crucial for interactive applications like voice assistants or on-device translation. - Hardware Simplification: By embedding computation within memory, the need for complex external accelerators is reduced, potentially lowering device cost and footprint.
2025-09-25T11:06:48
https://www.reddit.com/r/LocalLLaMA/comments/1nq3t8h/from_gpu_to_gain_cell_rethinking_llms_for_the/
Own-Potential-2308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq3t8h
false
null
t3_1nq3t8h
/r/LocalLLaMA/comments/1nq3t8h/from_gpu_to_gain_cell_rethinking_llms_for_the/
false
false
self
31
null
Building a Collaborative space for AI Agent projects & tools
4
Hey everyone, Over the last few months, I've been working on a GitHub repo called Awesome AI Apps. It's grown to 6K+ stars and features 45+ open-source AI agent & RAG examples. Alongside the repo, I've been sharing deep-dives: blog posts, tutorials, and demo projects to help devs not just play with agents, but actually *use* them in real workflows. What I'm noticing is that a lot of devs are excited about agents, but there's still a gap between simple *demos* and tools that hold up in production. Things like monitoring, evaluation, memory, integrations, and security often get overlooked. I'd love to turn this into more of a community-driven effort: * Collecting tools (open-source or commercial) that actually help devs push agents into production * Sharing practical workflows and tutorials that show *how* to use these components in real-world scenarios. If you're building something that makes agents more useful in practice, or if you've tried tools you think others should know about, please drop them here. If it's in stealth, send me a DM on LinkedIn [https://www.linkedin.com/in/arindam2004/](https://www.linkedin.com/in/arindam2004/) to share more details about it. I'll be pulling together a series of projects over the coming weeks and will feature the most helpful tools so more devs can discover and apply them. Looking forward to learning what everyone's building.
2025-09-25T11:03:20
https://www.reddit.com/r/LocalLLaMA/comments/1nq3qyr/building_a_collaborative_space_for_ai_agent/
Arindam_200
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq3qyr
false
null
t3_1nq3qyr
/r/LocalLLaMA/comments/1nq3qyr/building_a_collaborative_space_for_ai_agent/
false
false
self
4
null
Do you think china will dominate the LLM industry in the future?
10
J
2025-09-25T10:59:28
https://www.reddit.com/r/LocalLLaMA/comments/1nq3o7s/do_you_think_china_will_dominate_the_llm_industry/
Civil_Opposite7103
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq3o7s
false
null
t3_1nq3o7s
/r/LocalLLaMA/comments/1nq3o7s/do_you_think_china_will_dominate_the_llm_industry/
false
false
self
10
null
Open-source vs closed for AI assistants?
1
Imagine an AI assistant that reviews code, integrates with internal docs, automates provisioning, processes PDFs, and does web search. Curious what people think: does something like this *belong* in open source, or should it stay closed?
2025-09-25T10:49:03
https://www.reddit.com/r/LocalLLaMA/comments/1nq3hj6/opensource_vs_closed_for_ai_assistants/
frentro_max
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nq3hj6
false
null
t3_1nq3hj6
/r/LocalLLaMA/comments/1nq3hj6/opensource_vs_closed_for_ai_assistants/
false
false
self
1
null