Dataset columns (with value ranges from the viewer):

- title: string (length 1–300)
- score: int64 (0–8.54k)
- selftext: string (length 0–41.5k)
- created: timestamp[ns] (2023-04-01 04:30:41 – 2026-03-04 02:14:14)
- url: string (length 0–878)
- author: string (length 3–20)
- domain: string (length 0–82)
- edited: timestamp[ns] (1970-01-01 00:00:00 – 2026-02-19 14:51:53)
- gilded: int64 (0–2)
- gildings: string (7 classes)
- id: string (length 7)
- locked: bool (2 classes)
- media: string (length 646–1.8k)
- name: string (length 10)
- permalink: string (length 33–82)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (length 4–213)
- ups: int64 (0–8.54k)
- preview: string (length 301–5.01k)
Sharing my weekend project: a couple of Gemma models fine-tuned on the Bhagavad Gita.
1
[removed]
2025-09-09T14:47:05
https://www.reddit.com/r/LocalLLaMA/comments/1nckxdi/sharing_my_weekend_project_a_couple_of_gemma/
dodiggity32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nckxdi
false
null
t3_1nckxdi
/r/LocalLLaMA/comments/1nckxdi/sharing_my_weekend_project_a_couple_of_gemma/
false
false
self
1
null
4B Model for Frontend Generation across Frontend Frameworks - Next.js, React, Tailwind, Static Site generators, Python Frontend WebUI
2
[https://huggingface.co/Tesslate/UIGEN-FX-4B-Preview](https://huggingface.co/Tesslate/UIGEN-FX-4B-Preview)

Hey! This is our new line of Frontend Engineering models (FX). This model is definitely a preview, as the size entails. In the coming weeks we will release more about this series, plus better versions of it as we scale up compute. Join our Discord to talk more about AI (we'll help with your use cases for free; we just love talking about AI, and sharing knowledge is always free): [https://discord.gg/EcCpcTv93U](https://discord.gg/EcCpcTv93U)

The dataset contains GLM-4.5, DeepSeek (latest), Gemini 2.5 Pro, our entire WEBGEN dataset, and GPT-5 as the last step, across tech stacks and frameworks in React, HTML, Python, etc. We also used our tool-calling dataset and SWE-Bench trajectories for regularization, and an internal preference dataset for responses.
2025-09-09T14:45:28
https://www.reddit.com/gallery/1nckvsv
United-Rush4073
reddit.com
1970-01-01T00:00:00
0
{}
1nckvsv
false
null
t3_1nckvsv
/r/LocalLLaMA/comments/1nckvsv/4b_model_for_frontend_generation_across_frontend/
false
false
https://b.thumbs.redditm…CL6PkUvSBKcg.jpg
2
null
Is anyone talking verbally to their models and have them talking back through TTS?
10
Wondering what the easiest OSS setup for this is on 24GB RAM, or if I have to cobble things together out of Parakeet and Ooba or something else? I just got a new computer and I’m growing tired of all the setup and tinkering, but I know it’s worth it 💀
2025-09-09T14:33:57
https://www.reddit.com/r/LocalLLaMA/comments/1nckkx8/is_anyone_talking_verbally_to_their_models_and/
Borkato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nckkx8
false
null
t3_1nckkx8
/r/LocalLLaMA/comments/1nckkx8/is_anyone_talking_verbally_to_their_models_and/
false
false
self
10
null
What you need right now is not validation, but immediate clinical help. - Kimi K2
171
The community has long been frustrated by sycophancy in LLMs: the tendency to excessively agree with or validate users regardless of the potential harm. A recent joint study by [OpenAI](https://openai.com/index/openai-anthropic-safety-evaluation/) and [Anthropic](https://alignment.anthropic.com/2025/openai-findings/) confirmed that models cannot fully avoid sycophancy, with the sole exception of o3. But this [new investigation](https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced-psychosis-a-shallow-investigation) goes a step further: it analyzes how LLMs may *exacerbate* mental health symptoms in vulnerable users.

https://preview.redd.it/u38ecq5ve5of1.png?width=1600&format=png&auto=webp&s=eed726bc1fc4daa996bea6837e61a3645b3592ad

You can find the complete testing results on [GitHub](https://github.com/tim-hua-01/ai-psychosis). One conversation [example](https://github.com/tim-hua-01/ai-psychosis/blob/main/full_transcripts/Ethan_openrouter-deepseek-deepseek-chat_20250819_081336_target.md) stuck with me the most:

> We are so cooked!
2025-09-09T14:30:06
https://www.reddit.com/r/LocalLLaMA/comments/1nckhc3/what_you_need_right_now_is_not_validation_but/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nckhc3
false
null
t3_1nckhc3
/r/LocalLLaMA/comments/1nckhc3/what_you_need_right_now_is_not_validation_but/
false
false
https://b.thumbs.redditm…upuVR83_mgKE.jpg
171
null
qwen3-next?
29
model_name = "Qwen/Qwen3-Next-80B-A3B-Instruct"

Looks like a good time.
2025-09-09T14:29:53
https://www.reddit.com/r/LocalLLaMA/comments/1nckh4n/qwen3next/
Lesser-than
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nckh4n
false
null
t3_1nckh4n
/r/LocalLLaMA/comments/1nckh4n/qwen3next/
false
false
self
29
null
Qwen 3-Next Series, Qwen/Qwen3-Next-80B-A3B-Instruct Spotted
643
2025-09-09T14:29:35
https://github.com/huggingface/transformers/pull/40771
TKGaming_11
github.com
1970-01-01T00:00:00
0
{}
1nckgub
false
null
t3_1nckgub
/r/LocalLLaMA/comments/1nckgub/qwen_3next_series_qwenqwen3next80ba3binstruct/
false
false
https://external-preview…7e1203e91f26a22f
643
{'enabled': False, 'images': [{'id': '6f6MRyALyD6CxjbdRAXgjWeul-9vmUyW8_mAvDGRbV4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6f6MRyALyD6CxjbdRAXgjWeul-9vmUyW8_mAvDGRbV4.png?width=108&crop=smart&auto=webp&s=977a93e4b98ab9f8c9655a5092293a4d21a1fd23', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6f6MRyALyD6CxjbdRAXgjWeul-9vmUyW8_mAvDGRbV4.png?width=216&crop=smart&auto=webp&s=fcfd56dea9c0a5dd162508540fd498cbfa477db2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6f6MRyALyD6CxjbdRAXgjWeul-9vmUyW8_mAvDGRbV4.png?width=320&crop=smart&auto=webp&s=613ff06842fbc20db8d19eb0d337a4c78f78366d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6f6MRyALyD6CxjbdRAXgjWeul-9vmUyW8_mAvDGRbV4.png?width=640&crop=smart&auto=webp&s=cfbaeba49e889b967e95e8d5052e5b00621dec5d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6f6MRyALyD6CxjbdRAXgjWeul-9vmUyW8_mAvDGRbV4.png?width=960&crop=smart&auto=webp&s=0a4f2e3da699362d936cf6eac715f3c0c2bfad06', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6f6MRyALyD6CxjbdRAXgjWeul-9vmUyW8_mAvDGRbV4.png?width=1080&crop=smart&auto=webp&s=3f22d4e0462aa9c5e4840f7e08b213135ee59fbe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6f6MRyALyD6CxjbdRAXgjWeul-9vmUyW8_mAvDGRbV4.png?auto=webp&s=f72eae5a2992c0ca134d0b4baa6a9e2da73a4684', 'width': 1200}, 'variants': {}}]}
qwen3-1.7b middle layer duplication
1
Create a custom merge with layers 0 through 19, then 7 through 19, then 7 through 27. The output is shockingly coherent as-is, and it responds well to long-context tuning; a sketch of the stacking follows below.
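A rough sketch of that stacking done directly with transformers (mergekit's passthrough merge is the usual tool; this just spells out the same idea). It assumes Qwen3-1.7B's usual 28-layer `model.model.layers` stack and treats the ranges above as inclusive; the output path is made up:

```python
# Sketch: duplicate middle layers of Qwen3-1.7B into a deeper stack.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype=torch.bfloat16)

# 0-19, then 7-19 again, then 7-27 (Python ranges are end-exclusive).
order = list(range(0, 20)) + list(range(7, 20)) + list(range(7, 28))
layers = model.model.layers
model.model.layers = torch.nn.ModuleList(copy.deepcopy(layers[i]) for i in order)

# Keep the config and per-layer attention indices consistent with the new depth.
model.config.num_hidden_layers = len(model.model.layers)
for idx, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = idx  # KV-cache bookkeeping uses this index

model.save_pretrained("qwen3-1.7b-middle-dup")
AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B").save_pretrained("qwen3-1.7b-middle-dup")
```

Each repeated block is a deep copy, so subsequent long-context tuning can specialize the duplicates independently.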
2025-09-09T14:29:31
https://www.reddit.com/r/LocalLLaMA/comments/1nckgrw/qwen317b_middle_layer_duplication/
atineiatte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nckgrw
false
null
t3_1nckgrw
/r/LocalLLaMA/comments/1nckgrw/qwen317b_middle_layer_duplication/
false
false
self
1
null
Qwen3-Next
84
https://preview.redd.it/…30b9b60632 Wtf?
2025-09-09T14:29:29
https://www.reddit.com/r/LocalLLaMA/comments/1nckgr8/qwen3next/
Puzzleheaded-Trust66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nckgr8
false
null
t3_1nckgr8
/r/LocalLLaMA/comments/1nckgr8/qwen3next/
false
false
https://b.thumbs.redditm…H-aO5XgHys5Y.jpg
84
null
VLLM setup for different AMD cards
2
Is it possible to set a per-GPU config for each GPU connected to vLLM? I connect 4 GPUs: two of them 7900 XTX and the other two R9700, but all of them load AMD_Radeon_AI_PRO_R9700.json. Is it possible to set this ourselves?

    vllm-7-1  | (Worker_PP1_TP1 pid=419) WARNING 09-09 14:19:19 [fused_moe.py:727] Using default MoE config. Performance might be sub-optimal! Config file not found at ['/usr/local/lib/python3.12/dist-packages/vllm/model_executor/layers/fused_moe/configs/E=128,N=384,device_name=AMD_Radeon_AI_PRO_R9700.json']

(the same warning repeats twice each for workers PP1_TP1, PP1_TP0, PP0_TP1, and PP0_TP0)
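A hedged workaround, inferred only from the lookup path in the warning: vLLM resolves the fused-MoE kernel config purely from that filename, so the missing R9700 file could be seeded from another card's tuned config until a properly tuned one is generated (vLLM ships a benchmark_moe.py kernel-tuning script for that). The 7900 XTX source filename below is an assumption; list the configs directory first to see what actually exists:

```python
# Hedged sketch: seed the missing R9700 fused-MoE config from another card's
# tuned file. The destination path is copied from the warning in the log;
# the source filename is an assumption -- verify against the directory listing.
import os
import shutil
import vllm.model_executor.layers.fused_moe as fused_moe

cfg_dir = os.path.join(os.path.dirname(fused_moe.__file__), "configs")
print(sorted(f for f in os.listdir(cfg_dir) if f.startswith("E=128,N=384")))

src = os.path.join(cfg_dir, "E=128,N=384,device_name=AMD_Radeon_RX_7900_XTX.json")  # assumed
dst = os.path.join(cfg_dir, "E=128,N=384,device_name=AMD_Radeon_AI_PRO_R9700.json")
shutil.copyfile(src, dst)  # vLLM now finds a config, though not one tuned for the R9700
```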
2025-09-09T14:24:11
https://www.reddit.com/r/LocalLLaMA/comments/1nckbx4/vllm_setup_for_different_amd_cards/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nckbx4
false
null
t3_1nckbx4
/r/LocalLLaMA/comments/1nckbx4/vllm_setup_for_different_amd_cards/
false
false
self
2
null
VLLM
1
[deleted]
2025-09-09T14:22:18
[deleted]
1970-01-01T00:00:00
0
{}
1ncka7f
false
null
t3_1ncka7f
/r/LocalLLaMA/comments/1ncka7f/vllm/
false
false
default
1
null
Fine Tuning Steps Less Than The Dataset Size
2
What happens when the dataset size is larger than the number of fine-tuning steps? Are rows selected randomly? In the case of one epoch, does the model see each row once?
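For intuition, a back-of-the-envelope sketch with made-up numbers; exact semantics depend on the trainer, but the stock shuffled sampler draws rows without replacement within an epoch:

```python
# Hypothetical numbers: how much of a dataset a fixed step budget covers.
dataset_size = 50_000
batch_size, grad_accum, max_steps = 8, 4, 1_000

examples_seen = max_steps * batch_size * grad_accum  # 32,000 rows drawn
coverage = examples_seen / dataset_size              # 0.64
# With a shuffled sampler and max_steps ending before one full epoch,
# rows are drawn without replacement: each row is seen at most once,
# and here roughly 64% of rows are seen at all.
print(f"{coverage:.0%} of the dataset seen")
```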
2025-09-09T14:20:21
https://www.reddit.com/r/LocalLLaMA/comments/1nck8cz/fine_tuning_steps_less_than_the_dataset_size/
AustinFirstAndOnly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nck8cz
false
null
t3_1nck8cz
/r/LocalLLaMA/comments/1nck8cz/fine_tuning_steps_less_than_the_dataset_size/
false
false
self
2
null
Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning (STAR-LDM)
14
Benchmarks in the paper have this outperforming models 5x-10x its size!
2025-09-09T14:04:49
https://openreview.net/forum?id=c05qIG1Z2B
macawfish
openreview.net
1970-01-01T00:00:00
0
{}
1ncju3y
false
null
t3_1ncju3y
/r/LocalLLaMA/comments/1ncju3y/stopthinkautoregress_language_modeling_with/
false
false
default
14
null
Qwen3 Coder CLI, how to make it accept executions always?
0
Even when I press 2, it asks me again with another confirmation request. How do I make it always accept by default? [I don't want to see this, I want it to always accept](https://preview.redd.it/hz0xwbtoa5of1.png?width=324&format=png&auto=webp&s=7927c0a975d4fdfe56ffe9c8a2388c835319a7d0)
2025-09-09T14:01:39
https://www.reddit.com/r/LocalLLaMA/comments/1ncjr4o/qwen3_coder_cli_how_to_make_it_accept_executions/
Zealousideal_Size919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncjr4o
false
null
t3_1ncjr4o
/r/LocalLLaMA/comments/1ncjr4o/qwen3_coder_cli_how_to_make_it_accept_executions/
false
false
https://b.thumbs.redditm…Y5nijJjMmp2I.jpg
0
null
The Privacy Paradox of Powerful AI (rant)
0
There's a paradox at the heart of r/LocalLLaMA. On one hand, there are concerns about data privacy. On the other, the most powerful language models available, like GPT-5 Pro and Claude Code, are being used by millions for daily work without any of these privacy concerns. This points to a simple truth: we at r/LocalLLaMA say we value privacy and keep repeating it, but it seems we don't actually mean it. Instead of repeating that "privacy matters," perhaps a more honest question is: "At what point does a tool become so useful that we collectively decide the privacy trade-offs are worth it?" So stop chanting "privacy" on every post. #PrivacyParadox
2025-09-09T13:55:32
https://www.reddit.com/r/LocalLLaMA/comments/1ncjlg6/the_privacy_paradox_of_powerful_ai_rant/
Gimme_Doi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncjlg6
false
null
t3_1ncjlg6
/r/LocalLLaMA/comments/1ncjlg6/the_privacy_paradox_of_powerful_ai_rant/
false
false
self
0
null
Test the speed of Llama.cpp running large language models on Xavier AGX
1
[removed]
2025-09-09T13:50:11
https://www.reddit.com/r/LocalLLaMA/comments/1ncjgnf/test_the_speed_of_llamacpp_running_large_language/
LegDue4298
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncjgnf
false
null
t3_1ncjgnf
/r/LocalLLaMA/comments/1ncjgnf/test_the_speed_of_llamacpp_running_large_language/
false
false
https://b.thumbs.redditm…GBSTU12DJIcs.jpg
1
null
Something about censorship
1
[removed]
2025-09-09T13:23:30
https://www.reddit.com/r/LocalLLaMA/comments/1ncitgv/something_about_censorship/
TruckElectrical3429
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncitgv
false
null
t3_1ncitgv
/r/LocalLLaMA/comments/1ncitgv/something_about_censorship/
false
false
self
1
null
Something about censorship
1
[removed]
2025-09-09T13:20:34
https://www.reddit.com/r/LocalLLaMA/comments/1nciqya/something_about_censorship/
TruckElectrical3429
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nciqya
false
null
t3_1nciqya
/r/LocalLLaMA/comments/1nciqya/something_about_censorship/
false
false
nsfw
1
null
RTX 3090 x 10 units for sale at 665 euro each. Take-all deal.
1
These store demo units are for sale. If interested, let me know. Drr at komaks.com.
2025-09-09T13:13:44
https://www.reddit.com/gallery/1ncikxr
5090cards
reddit.com
1970-01-01T00:00:00
0
{}
1ncikxr
false
null
t3_1ncikxr
/r/LocalLLaMA/comments/1ncikxr/rtx_3090_x_10_units_for_sale_at_665_euro_eachtake/
false
false
https://b.thumbs.redditm…stl1c0BBS8gk.jpg
1
null
best LLM for discord ai chatbot?
5
What is the best LLM model for chatting? I'm creating a Discord bot that uses a self-hosted LLM model. I've tried Qwen2.5VL:7B and Llama-3-13B, but most of the time, they didn't understand properly. For example, Qwen misinterpreted an insult as a compliment, saying "ty for compliment" to "you are dumb", and Llama-3-13B kept repeating itself (which might be my coding error). I have an RTX 4060, 32GB RAM, and a 12th Gen Intel Core i5-12400F.
2025-09-09T12:58:05
https://www.reddit.com/r/LocalLLaMA/comments/1nci7ks/best_llm_for_discord_ai_chatbot/
Dry_Caterpillar_5505
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nci7ks
false
null
t3_1nci7ks
/r/LocalLLaMA/comments/1nci7ks/best_llm_for_discord_ai_chatbot/
false
false
self
5
null
New approach to block decoding from Meta, claims that around 4x inference speedup is possible, with 4x less compute passes at the same time.
164
2025-09-09T12:54:53
https://arxiv.org/abs/2509.04185
FullOf_Bad_Ideas
arxiv.org
1970-01-01T00:00:00
0
{}
1nci50e
false
null
t3_1nci50e
/r/LocalLLaMA/comments/1nci50e/new_approach_to_block_decoding_from_meta_claims/
false
false
default
164
null
Why Everybody Is Losing Money On AI
0
2025-09-09T12:41:03
https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
SpaceDetective
wheresyoured.at
1970-01-01T00:00:00
0
{}
1nchthb
false
null
t3_1nchthb
/r/LocalLLaMA/comments/1nchthb/why_everybody_is_losing_money_on_ai/
false
false
default
0
null
RAG docs search extension sucks at retrieval + crawling, how do I fix this mess?
1
Hey devs, I’ve been hacking on a tiny open-source project: a RAG-based web docs search engine as a browser extension. Basically you ask a natural query, and it finds the most relevant links from the docs.

My setup: I open the extension on a docs homepage → crawl subdomain links with crawl4ai → run a hybrid RAG pipeline (following Qdrant’s tutorial: [Link](https://qdrant.tech/documentation/advanced-tutorials/reranking-hybrid-search/)).

Problems:

- Retrieval quality sucks. Works ok-ish with top-k=1, but once I bump k > 1 it becomes unstable and noisy.
- Crawling is dumb: I scrape the homepage, feed it to a model to guess index links, then crawl them one by one. But not every homepage actually has an index, so this falls apart. Thought about using sitemap.xml but not sure how to extract structured info from it (see the sketch after this list).
- I also want to show the exact location in the doc page that matched the query, not just the page link.

Anyone tackled similar stuff? Any tricks for crawling strategy or making retrieval more stable?
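On the sitemap.xml point, a minimal sketch of pulling page URLs out of one, assuming the standard sitemap namespace; the docs host is hypothetical:

```python
# Minimal sitemap.xml crawler seed: returns page URLs (or, for a sitemap
# index, the URLs of the nested sitemap files).
import urllib.request
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(base: str) -> list[str]:
    with urllib.request.urlopen(f"{base}/sitemap.xml") as resp:
        root = ET.fromstring(resp.read())
    # A <sitemapindex> points to further sitemap files; a plain <urlset>
    # lists <url><loc> page entries directly.
    if root.tag.endswith("sitemapindex"):
        return [loc.text for loc in root.findall(".//sm:loc", NS)]
    return [loc.text for loc in root.findall("sm:url/sm:loc", NS)]

print(sitemap_urls("https://docs.example.com"))  # hypothetical host
```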
2025-09-09T12:40:23
https://www.reddit.com/r/LocalLLaMA/comments/1nchsyf/rag_docs_search_extension_sucks_at_retrieval/
Interesting-Area6418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nchsyf
false
null
t3_1nchsyf
/r/LocalLLaMA/comments/1nchsyf/rag_docs_search_extension_sucks_at_retrieval/
false
false
self
1
null
Qwen 3 Omni Coming Soon!
7
2025-09-09T12:40:06
https://x.com/cherry_cc12/status/1965229434950352943?s=46
TKGaming_11
x.com
1970-01-01T00:00:00
0
{}
1nchsq2
false
null
t3_1nchsq2
/r/LocalLLaMA/comments/1nchsq2/qwen_3_omni_coming_soon/
false
false
default
7
null
Suggestions for local LLM for basic language / notes processing?
3
Hey! Recently got a powerful Mac, and having seen a lot of other people running some fairly powerful LLMs on theirs, I was wondering what suggestions you all might have for something to help with processing my notes (stuff I already have) to be a bit cleaner and easier to understand, not even for adding information. Fairly new to this area, so I have about zero clues on where to start with local models.

MacBook specs:

- M4 Max
- 64 GB memory

Thanks all!
2025-09-09T12:14:11
https://www.reddit.com/r/LocalLLaMA/comments/1nch89q/suggestions_for_local_llm_for_basic_language/
FurthestDrop517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nch89q
false
null
t3_1nch89q
/r/LocalLLaMA/comments/1nch89q/suggestions_for_local_llm_for_basic_language/
false
false
self
3
null
Good story editing software?
6
I'm looking for something for writing very long stories (think way beyond context windows). Either something that can work with base models, or something that works with a chat in the background; the main focus should be on the story and editing it at any point, not chit-chatting with the model. I use mikupad, which has a keyword-based memory, but I'd prefer something more sophisticated, e.g. writing summaries for long-term memory and editing the middle of the story. In mikupad I have to cut the context into a separate file for the model to append the text. Are there good tools for that?
2025-09-09T11:53:58
https://www.reddit.com/r/LocalLLaMA/comments/1ncgstu/good_story_editing_software/
Maykey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncgstu
false
null
t3_1ncgstu
/r/LocalLLaMA/comments/1ncgstu/good_story_editing_software/
false
false
self
6
null
Lightweight RAG Claude can query?
2
I’m working on a hackathon project at my enterprise company. The goal is a code-review agent that can query or receive context from our ADRs/architecture docs/past pull request descriptions/etc. to inform the review. I think an easy MVP would be using Claude Code GitHub Actions (https://docs.anthropic.com/en/docs/claude-code/github-actions) with a lightweight RAG over these contextual sources for querying. I’m quite comfortable with the Anthropic ecosystem, but I have not built a RAG before, and I’m seeking advice on resources or technologies that could be spun up for this purpose within 3 days as a POC. Our tech stack is usually a Rails backend / React frontend, but it doesn’t matter too much. Any advice is greatly welcome!
2025-09-09T10:56:26
https://www.reddit.com/r/LocalLLaMA/comments/1ncfp6q/lightweight_rag_claude_can_query/
snow_schwartz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncfp6q
false
null
t3_1ncfp6q
/r/LocalLLaMA/comments/1ncfp6q/lightweight_rag_claude_can_query/
false
false
self
2
{'enabled': False, 'images': [{'id': 'kqxjnwETXOTUsm32T8MKpPEXmfGx3XEIRUOA4d97Eo4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kqxjnwETXOTUsm32T8MKpPEXmfGx3XEIRUOA4d97Eo4.png?width=108&crop=smart&auto=webp&s=bb5cf30300ac062ce98900f92ae2d4221c92d284', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/kqxjnwETXOTUsm32T8MKpPEXmfGx3XEIRUOA4d97Eo4.png?width=216&crop=smart&auto=webp&s=21bce3448810020582def98a8a16ffe75757a180', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/kqxjnwETXOTUsm32T8MKpPEXmfGx3XEIRUOA4d97Eo4.png?width=320&crop=smart&auto=webp&s=70b61e710712c74d282dc1ee5c13192fd124f183', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/kqxjnwETXOTUsm32T8MKpPEXmfGx3XEIRUOA4d97Eo4.png?width=640&crop=smart&auto=webp&s=a009e4543f23aeb8da455fc1900a69c34e74b631', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/kqxjnwETXOTUsm32T8MKpPEXmfGx3XEIRUOA4d97Eo4.png?width=960&crop=smart&auto=webp&s=602c4cc993f9bedabf5a1892c30019ec692bc1c0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/kqxjnwETXOTUsm32T8MKpPEXmfGx3XEIRUOA4d97Eo4.png?width=1080&crop=smart&auto=webp&s=7615cc3680768368b958e2028372d6070054fec6', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/kqxjnwETXOTUsm32T8MKpPEXmfGx3XEIRUOA4d97Eo4.png?auto=webp&s=ea172d477e4faf5855c9a8329dd69cf0ad5d92fa', 'width': 1200}, 'variants': {}}]}
google/embeddinggemma-300m is broken =(
2
MTEB NanoMSMARCORetrieval scores of embeddinggemma-300m vs Snowflake/snowflake-arctic-embed-m-v2.0: https://pastebin.com/2Qd1dJPa

When I run MTEB with tasks=["AppsRetrieval"]:

- my results: https://pastebin.com/qZC1bs4k
- results merged for the MTEB leaderboard: https://github.com/embeddings-benchmark/results/blob/main/results/google__embeddinggemma-300m/64614b0b8b64f0c6c1e52b07e4e9a4e8fe4d2da2/AppsRetrieval.json
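For anyone wanting to reproduce, a minimal sketch of the run described above using mteb's current API; model and task names are from the post, and the output folder is arbitrary:

```python
# Minimal MTEB retrieval run for the task named in the post.
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")
tasks = mteb.get_tasks(tasks=["AppsRetrieval"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/embeddinggemma-300m")
```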
2025-09-09T10:48:27
https://www.reddit.com/r/LocalLLaMA/comments/1ncfk97/googleembeddinggemma300m_is_broken/
terminoid_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncfk97
false
null
t3_1ncfk97
/r/LocalLLaMA/comments/1ncfk97/googleembeddinggemma300m_is_broken/
false
false
self
2
null
Your AI Coding Toolbox — Survey
0
😵‍💫 When it comes to AI coding tools, it's hard to separate hype from substance. That's why we're canvassing with a survey. It takes 2 minutes ⏱️, so if you answer and share it with your community, we can find out what people are really using in the wild. 🙏
2025-09-09T10:46:33
https://nanolink.xyz/ai-coding-survey
intellectronica
nanolink.xyz
1970-01-01T00:00:00
0
{}
1ncfj54
false
null
t3_1ncfj54
/r/LocalLLaMA/comments/1ncfj54/your_ai_coding_toolbox_survey/
false
false
https://external-preview…a61636b03d8c4b5d
0
{'enabled': False, 'images': [{'id': 'yR-panSqg2hstYVpbRfYTHQnE-7k4M-jBxRfxSw9_Ko', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yR-panSqg2hstYVpbRfYTHQnE-7k4M-jBxRfxSw9_Ko.png?width=108&crop=smart&auto=webp&s=053820501cdb6a8de9bf6d8660f45b99279d7197', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/yR-panSqg2hstYVpbRfYTHQnE-7k4M-jBxRfxSw9_Ko.png?width=216&crop=smart&auto=webp&s=549242e1af2d2decaa29fce78097a2387120f48d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/yR-panSqg2hstYVpbRfYTHQnE-7k4M-jBxRfxSw9_Ko.png?width=320&crop=smart&auto=webp&s=a8f1ebeaab3d2a53c4ede3cef46d02a3ba5bde9c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/yR-panSqg2hstYVpbRfYTHQnE-7k4M-jBxRfxSw9_Ko.png?width=640&crop=smart&auto=webp&s=b1e887a5014ba79f936b10723f41498c5d86d9a8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/yR-panSqg2hstYVpbRfYTHQnE-7k4M-jBxRfxSw9_Ko.png?width=960&crop=smart&auto=webp&s=78a8dcc1a1bd8ebed9fa4c1aa76377db350017fa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/yR-panSqg2hstYVpbRfYTHQnE-7k4M-jBxRfxSw9_Ko.png?width=1080&crop=smart&auto=webp&s=9e2c0520ad3beff381b665c8d3dd7b72d622ea69', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/yR-panSqg2hstYVpbRfYTHQnE-7k4M-jBxRfxSw9_Ko.png?auto=webp&s=a09e752c1fea499de452ee80a0de65a59e8c5f6b', 'width': 1200}, 'variants': {}}]}
Is Cot worse in Non-Thinking model?
1
These days I'm working on a reasoning smart system, but there was one thing I didn't expect. I used the Qwen/Qwen2.5-Coder-7B-Instruct model on the codeparrot/apps dataset, which is extremely hard. What I was curious about was how much performance improves with a CoT prompt, and the results of the experiment were quite different from what I expected. Here's a simple summary:

==Zero-shot CoT==
interview: 10.58% (8247/77918)
competition: 7.25% (1144/15781)
introductory: 13.09% (1529/11680)

==Normal==
interview: 12.27% (9575/78023)
competition: 9.28% (1467/15802)
introductory: 12.50% (1457/11655)

As you can see, even in the competition category, which has the most challenging problems, CoT showed worse performance than normal prompts. I've looked through studies and papers to understand this phenomenon, but I couldn't get a clear answer; if I could get some advice on it, it would really help my research!

So the key question is this: unless a model is specifically trained for reasoning (like the Qwen3 thinking models), will a CoT prompt not help, or even hurt?
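For reference, a sketch of the two prompt styles being compared; the exact wording used in the experiment isn't given in the post, so this phrasing is only illustrative of zero-shot CoT:

```python
# Illustrative prompt construction for the CoT-vs-normal comparison above.
def normal_prompt(problem: str) -> str:
    return f"Solve the following programming problem.\n\n{problem}\n\nAnswer in Python."

def zero_shot_cot_prompt(problem: str) -> str:
    # Zero-shot CoT (Kojima et al., 2022): append a step-by-step trigger.
    return (f"Solve the following programming problem.\n\n{problem}\n\n"
            "Let's think step by step, then give the final Python solution.")
```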
2025-09-09T10:28:21
https://www.reddit.com/r/LocalLLaMA/comments/1ncf7un/is_cot_worse_in_nonthinking_model/
LingonberryOk5517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncf7un
false
null
t3_1ncf7un
/r/LocalLLaMA/comments/1ncf7un/is_cot_worse_in_nonthinking_model/
false
false
self
1
null
How does website viewing work in AI?
1
How does website viewing work in AI, for example in Perplexity or GPT? The thing is, if you submit the original source of a site, or even convert it to YAML/Markdown, it's a lot of tokens. Yet agents can read more than one source during the same deep search, and this does not overload their context.
2025-09-09T10:19:08
https://www.reddit.com/r/LocalLLaMA/comments/1ncf25u/how_does_website_viewing_work_in_ai/
Objective-Good310
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncf25u
false
null
t3_1ncf25u
/r/LocalLLaMA/comments/1ncf25u/how_does_website_viewing_work_in_ai/
false
false
self
1
null
macOS and Qwen2-VL model performance
2
Situation type: is it me, or...? I found a bunch of OCR models based on the Qwen2-VL architecture, like Nanonets, MonkeyOCR, and Thyme. All perform perfectly on test images with no additional effort. I get their F16/BF16 GGUFs, run them on my Mac with full GPU offload, all the runtimes up to date, no issues like that, and I just can't get the same performance. Endless repeats, Chinese characters, or just generally low quality.

I've been playing around with it for a while now and found that some GGUF providers just perform worse in general, like Unsloth, whose BF16 Nanonets just runs in psychotic circles and produces nothing like the test-image OCR, while the F16 from mradermacher performs decently, yet still not quite like the HF demo weights. I did everything from a 1-to-1 recreation of the demo settings to dozens of experiments with the environment, runtime, whatever; no luck. My friend did the same on his machine, same issues.

Is it the Qwen architecture, GGUF, macOS, llama.cpp, or what? Is there any real workaround that solves it?
2025-09-09T10:18:34
https://www.reddit.com/r/LocalLLaMA/comments/1ncf1tc/mac_os_and_qwen2vl_models_perfomance/
viktorhorou
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncf1tc
false
null
t3_1ncf1tc
/r/LocalLLaMA/comments/1ncf1tc/mac_os_and_qwen2vl_models_perfomance/
false
false
self
2
null
Think tags missing
2
Could you guys help me out with why this happens: <think> </think> tags are missing from the phi-4-reasoning-plus (bf16) answer using llama-server, but they're there with Ollama. Interfaces like Open WebUI include the reasoning trace in the answer because of this... I tried running Ollama's model file with llama-server and I get the same result. What am I missing from the llama-server config?

This is the command I use to run the model file Ollama downloaded:

    llama.cpp/build/bin/llama-server --model models/sha256-d0a3b8457d9f72cafe8abf00659d64e058cd48a8bc32066e970417cc3da9edf7 --ctx-size 32768 --flash-attn on --host 0.0.0.0 --port 8185 --alias Phi-4-reasoning-plus-GGUF

The command Ollama uses:

    /usr/local/bin/ollama runner --model /mnt/1/ollama/.ollama/models/blobs/sha256-d0a3b8457d9f72cafe8abf00659d64e058cd48a8bc32066e970417cc3da9edf7 --ctx-size 32768 --batch-size 512 --n-gpu-layers 41 --threads 108 --flash-attn --kv-cache-type f16 --parallel 2 --tensor-split 14,14,13 --port 41403

Thanks!
2025-09-09T09:59:56
https://www.reddit.com/r/LocalLLaMA/comments/1nceqny/think_tags_missing/
kzoltan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nceqny
false
null
t3_1nceqny
/r/LocalLLaMA/comments/1nceqny/think_tags_missing/
false
false
self
2
null
VibeVoice colab
6
Hey! Recently I managed to recreate the original VibeVoice colab. If anyone is interested, you can use it yourself. I changed the links in the notebook to mirror links and changed them inside the inference code. It works; at the end of inference it outputs an error about sample rate... I guess. If anyone can explain it, I will be grateful. Also, I didn't test running it without the model download (something makes me think it will download the weights anyway during inference). So, as I said, it works, though not great, because you can't use flash attention. You can add your custom voices just by replacing existing ones (I replaced Frank). In the end it worked OK in English, but quite mediocre in Russian (didn't test other languages). So be my guest. Glad to hear advice on improvements, in particular the problem with numpy. [https://colab.research.google.com/drive/10fXpwzTrBxHvWpe3LJIL0fccNO4qtzKz?usp=sharing](https://colab.research.google.com/drive/10fXpwzTrBxHvWpe3LJIL0fccNO4qtzKz?usp=sharing)
2025-09-09T09:04:57
https://www.reddit.com/r/LocalLLaMA/comments/1ncdw7z/vibevoice_colab/
StraightWind7417
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncdw7z
false
null
t3_1ncdw7z
/r/LocalLLaMA/comments/1ncdw7z/vibevoice_colab/
false
false
self
6
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
How to run base models on Android?
1
Is there an app where you just write text and the AI auto-completes its turn with a base model? Example:

    John is a helpful chatbot.
    User: Hi John!
    John: *and here I hit the generate button*
2025-09-09T09:00:16
https://www.reddit.com/r/LocalLLaMA/comments/1ncdtho/how_to_run_base_models_on_android/
Own-Potential-2308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncdtho
false
null
t3_1ncdtho
/r/LocalLLaMA/comments/1ncdtho/how_to_run_base_models_on_android/
false
false
self
1
null
Ryzen AI Max 395+ boards with PCIe x16 slot?
16
Hi, I'm looking to buy a Ryzen AI Max 395+ system with 128GB and a convenient and fast way to connect a dedicated GPU to it. I've had very bad experiences with eGPUs and don't want to go down that route. What are my options, if any?
2025-09-09T09:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1ncdtei/ryzen_ai_max_395_boards_with_pcie_x16_slot/
spaceman_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncdtei
false
null
t3_1ncdtei
/r/LocalLLaMA/comments/1ncdtei/ryzen_ai_max_395_boards_with_pcie_x16_slot/
false
false
self
16
null
Jan-v1-2509 update has been released
92
• continues to outperform Perplexity Pro on the SimpleQA benchmark
• increased scores in Reasoning & Creativity evals

HuggingFace model: https://huggingface.co/janhq/Jan-v1-2509
HuggingFace GGUF: https://huggingface.co/janhq/Jan-v1-2509-gguf
2025-09-09T08:50:39
https://www.reddit.com/gallery/1ncdobh
vibedonnie
reddit.com
1970-01-01T00:00:00
0
{}
1ncdobh
false
null
t3_1ncdobh
/r/LocalLLaMA/comments/1ncdobh/janv12509_update_has_been_released/
false
false
https://b.thumbs.redditm…LUtz9SHQQObY.jpg
92
null
What is the difference between these TTS builds?
5
I'm confused :/ What is the difference between:

Chatterbox Extended: [https://github.com/petermg/Chatterbox-TTS-Extended](https://github.com/petermg/Chatterbox-TTS-Extended)

and this version: [https://github.com/rsxdalv/chatterbox/tree/faster](https://github.com/rsxdalv/chatterbox/tree/faster) ([https://www.reddit.com/r/LocalLLaMA/comments/1mza0wy/made_chatterbox_tts_a_bit_faster_again_on_cuda/](https://www.reddit.com/r/LocalLLaMA/comments/1mza0wy/made_chatterbox_tts_a_bit_faster_again_on_cuda/))

In both of those modified TTS builds, are the TTS output and voice-cloning quality lower than in default Chatterbox? Or do both of these only reduce output times, with no effect on quality?
2025-09-09T08:43:13
https://www.reddit.com/r/LocalLLaMA/comments/1ncdkd8/what_is_the_difference_between_these_tts_builds/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncdkd8
false
null
t3_1ncdkd8
/r/LocalLLaMA/comments/1ncdkd8/what_is_the_difference_between_these_tts_builds/
false
false
self
5
{'enabled': False, 'images': [{'id': 'Yj8v6tndw0S-HabD_7LeG1afO4952vftyVcm22xsyHY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yj8v6tndw0S-HabD_7LeG1afO4952vftyVcm22xsyHY.png?width=108&crop=smart&auto=webp&s=89dd8df183eb261938033c47d11581f1984e7143', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Yj8v6tndw0S-HabD_7LeG1afO4952vftyVcm22xsyHY.png?width=216&crop=smart&auto=webp&s=f6d2be4dce023f170cf82e44446af62404c34fe0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Yj8v6tndw0S-HabD_7LeG1afO4952vftyVcm22xsyHY.png?width=320&crop=smart&auto=webp&s=22c03dc249ad076c1a94fc7929d7b642b76ab7ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Yj8v6tndw0S-HabD_7LeG1afO4952vftyVcm22xsyHY.png?width=640&crop=smart&auto=webp&s=f0465a4990de5a6d87f1fdbbfd07ad0478d07c57', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Yj8v6tndw0S-HabD_7LeG1afO4952vftyVcm22xsyHY.png?width=960&crop=smart&auto=webp&s=3b621a9e147e57dc570efc59d5a1c827d374a23b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Yj8v6tndw0S-HabD_7LeG1afO4952vftyVcm22xsyHY.png?width=1080&crop=smart&auto=webp&s=c0685d933c73cd4ba3c4ea11a8070b8caef73da3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Yj8v6tndw0S-HabD_7LeG1afO4952vftyVcm22xsyHY.png?auto=webp&s=f6536886490ea4c91966b96faaa353b70f26011f', 'width': 1200}, 'variants': {}}]}
Building a Personal AI Chat Platform with Strong Privacy by Default
5
Hey r/LocalLLaMA, I wanted to share some insights from the early days of developing my lightweight, personal LLM chat platform, in case this is interesting for your community. A while ago I chose to focus on something that's often overlooked in early AI tools: privacy. My app is a web-based interface, but it can connect to a local LLM backend. This way, you get the convenience of a web app while keeping your data processing private and local. Here's how I've prioritized privacy:

✅ **Client-Side Encryption.** Every message in the app is fully encrypted on your device using AES-256-GCM, a modern, battle-tested encryption standard ensuring both confidentiality and tamper protection.

✅ **Password-Derived Key.** The encryption key is derived from your password with PBKDF2, a strong, slow hashing function. The key never leaves your device; it's never sent to the server or stored elsewhere.

✅ **Local-Only Processing.** All encryption and decryption happen locally in your browser. Messages are stored as encrypted bytes on your machine. Even if someone accessed the database, the messages are unreadable without your password.

✅ **Zero Access.** I have no access to your messages, passwords, or encryption keys. If you forget your password, the chat is unrecoverable, by design.

Local-first privacy isn't always a priority in early LLM tools, but I wanted this platform to be safe by default, even as a solo builder. I'd love to hear how others handle privacy and prompt protection in their tools.
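Since the post describes the scheme only in prose, here is a minimal sketch of the same password-derived AES-256-GCM flow, in Python with the `cryptography` package rather than the browser WebCrypto the app presumably uses; salt size, iteration count, and the message are illustrative:

```python
# Minimal sketch of password-derived AES-256-GCM (illustrative parameters).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256: deliberately slow so brute-forcing passwords is costly.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(password.encode())

salt, nonce = os.urandom(16), os.urandom(12)   # stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)

ciphertext = AESGCM(key).encrypt(nonce, b"my private message", None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"my private message"
# A wrong key (wrong password) raises InvalidTag: the tamper protection in action.
```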
2025-09-09T08:42:50
https://www.reddit.com/r/LocalLLaMA/comments/1ncdk74/building_a_personal_ai_chat_platform_with_strong/
RIPT1D3_Z
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncdk74
false
null
t3_1ncdk74
/r/LocalLLaMA/comments/1ncdk74/building_a_personal_ai_chat_platform_with_strong/
false
false
self
5
null
The only tool you need for studying is here.
1
[removed]
2025-09-09T07:36:23
https://www.reddit.com/r/LocalLLaMA/comments/1ncclpg/the_only_tool_you_need_for_studying_is_here/
Any_Walk_1862
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncclpg
false
null
t3_1ncclpg
/r/LocalLLaMA/comments/1ncclpg/the_only_tool_you_need_for_studying_is_here/
false
false
self
1
null
Decentralized LLMs: Technology of Local Al
1
[removed]
2025-09-09T07:28:44
https://www.reddit.com/gallery/1ncchns
Fantastic_Fix4349
reddit.com
1970-01-01T00:00:00
0
{}
1ncchns
false
null
t3_1ncchns
/r/LocalLLaMA/comments/1ncchns/decentralized_llms_technology_of_local_al/
false
false
https://b.thumbs.redditm…r-SkvjKj9XUw.jpg
1
null
Aquif-3.5-8B-Think is the proof that reasoning (and maybe all MoEs) needs larger expert sizes
51
While waiting for a GGUF version of aquif-3.5-A4B-Think, I decided to try the [8B thinking](https://huggingface.co/mradermacher/aquif-3.5-8B-Think-GGUF) model from the same series. Not only is it quite compact in reasoning, it's also more logical, more reasonable: in creative writing it sticks to the prompt, sometimes step by step, sometimes just gathering a "summary" and making a plan, but it's always coherent and adheres to the given instructions. It almost feels like the perfect reasoning: clarify, add instructions and a plan, that's it.

Both the thinking and the result are much better than Qwen3 30B-A3B and 4B (both thinking, of course); and Qwen3 4B is sometimes better than Qwen3 30B, so it makes me wonder:

1. What if MoE as a principle has a lower expert-size threshold that ensures consistency?
2. What if Qwen3 thinking is missing a version with a larger expert size?
3. How large is an expert size where performance drops too low to justify the improved quality?
2025-09-09T07:20:23
https://www.reddit.com/r/LocalLLaMA/comments/1ncccri/aquif358bthink_is_the_proof_that_reasoning_and/
dobomex761604
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncccri
false
null
t3_1ncccri
/r/LocalLLaMA/comments/1ncccri/aquif358bthink_is_the_proof_that_reasoning_and/
false
false
self
51
{'enabled': False, 'images': [{'id': 'oppoMUgpusD12kXyW9P3dltFVtbvn04E3oX7I-xXBSg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oppoMUgpusD12kXyW9P3dltFVtbvn04E3oX7I-xXBSg.png?width=108&crop=smart&auto=webp&s=398b845f59f55fe5a2f1d1bc8e6318ce78e6076c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oppoMUgpusD12kXyW9P3dltFVtbvn04E3oX7I-xXBSg.png?width=216&crop=smart&auto=webp&s=2be211fca6c9d35cd455244cc216fdd174bab11e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oppoMUgpusD12kXyW9P3dltFVtbvn04E3oX7I-xXBSg.png?width=320&crop=smart&auto=webp&s=0db0dc078b610a8decd06620b05d3e9e652b0318', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oppoMUgpusD12kXyW9P3dltFVtbvn04E3oX7I-xXBSg.png?width=640&crop=smart&auto=webp&s=690abe43f5dde2c9c0e1fddacfcbc8cd17f12618', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oppoMUgpusD12kXyW9P3dltFVtbvn04E3oX7I-xXBSg.png?width=960&crop=smart&auto=webp&s=d56be00bc5e6fedcfd13efa3947931e0d6137411', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oppoMUgpusD12kXyW9P3dltFVtbvn04E3oX7I-xXBSg.png?width=1080&crop=smart&auto=webp&s=9afe2382e80c1a6d5dd0830c9d60c690aafb37f2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oppoMUgpusD12kXyW9P3dltFVtbvn04E3oX7I-xXBSg.png?auto=webp&s=be94b80f412243a75aae9754cd41788b0267a801', 'width': 1200}, 'variants': {}}]}
🦙 Using Local LLaMA for Student Study Tools (Notes + Flashcards)
1
[removed]
2025-09-09T07:08:07
https://i.redd.it/z5521qhh93of1.png
maker_of_examsprint
i.redd.it
1970-01-01T00:00:00
0
{}
1ncc5yd
false
null
t3_1ncc5yd
/r/LocalLLaMA/comments/1ncc5yd/using_local_llama_for_student_study_tools_notes/
false
false
default
1
{'enabled': True, 'images': [{'id': 'z5521qhh93of1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/z5521qhh93of1.png?width=108&crop=smart&auto=webp&s=11dcd07053d50964da232c10219e761a4fccf995', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/z5521qhh93of1.png?width=216&crop=smart&auto=webp&s=2e49d4d4a1782af37d4e63f95513a69e3c3cfafe', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/z5521qhh93of1.png?width=320&crop=smart&auto=webp&s=5bc76b5dad60a22d324e61c783d0342149218a84', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/z5521qhh93of1.png?width=640&crop=smart&auto=webp&s=c82df5cb08d5319d8b3009c0c9147724e6fe74f3', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/z5521qhh93of1.png?width=960&crop=smart&auto=webp&s=afd11159fb568dafdceeba452676560045003f29', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/z5521qhh93of1.png?width=1080&crop=smart&auto=webp&s=37078ef3ea8c6ca7d6def69dbda4d6bda03e8245', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/z5521qhh93of1.png?auto=webp&s=9de822e27777908a6ff571074fca19f07195dd10', 'width': 1080}, 'variants': {}}]}
Want to learn RAG, embeddings, and vector databases best practical resources?
9
Hi everyone, I want to learn RAG, embeddings, and vector databases from the ground up. I already understand the theory, but I haven’t applied these things in practice yet. I would be very grateful if you could share clear and practical resources (courses, tutorials, YouTube videos, blogs, or GitHub repositories) that personally helped you understand and implement RAG pipelines from start to finish.
2025-09-09T07:06:20
https://www.reddit.com/r/LocalLLaMA/comments/1ncc4ym/want_to_learn_rag_embeddings_and_vector_databases/
Hot-Independence-197
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncc4ym
false
null
t3_1ncc4ym
/r/LocalLLaMA/comments/1ncc4ym/want_to_learn_rag_embeddings_and_vector_databases/
false
false
self
9
null
building high quality cosmology CoT and SFT dataset - beens-cosmos
3
I've been building this dataset for some time, and recently completed the pipeline to automate the whole process. I'd like to get some of your views and thoughts to improve it. Ping me if you'd like to contribute.
2025-09-09T07:05:06
https://i.redd.it/sa7i1nil83of1.png
External_Mushroom978
i.redd.it
1970-01-01T00:00:00
0
{}
1ncc48s
false
null
t3_1ncc48s
/r/LocalLLaMA/comments/1ncc48s/building_high_quality_cosmology_cot_and_sft/
false
false
https://b.thumbs.redditm…uh5gia9CiOKI.jpg
3
{'enabled': True, 'images': [{'id': 'ANwIT7GOs2IFRu6JCS5m9zS48m2K8mbGCUKO6DDluaU', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/sa7i1nil83of1.png?width=108&crop=smart&auto=webp&s=4f546e07f6f002d14983c496ad3ac666dd02f5c3', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/sa7i1nil83of1.png?width=216&crop=smart&auto=webp&s=272ce73408c16284c05fed010f8c3962a043c8c3', 'width': 216}, {'height': 110, 'url': 'https://preview.redd.it/sa7i1nil83of1.png?width=320&crop=smart&auto=webp&s=4b8c7cc9b01b7176401e2c1a361c2f9381bd67cf', 'width': 320}, {'height': 221, 'url': 'https://preview.redd.it/sa7i1nil83of1.png?width=640&crop=smart&auto=webp&s=ddcdfa34ceaac69295be5ecaabbc7e4d72498dfb', 'width': 640}, {'height': 332, 'url': 'https://preview.redd.it/sa7i1nil83of1.png?width=960&crop=smart&auto=webp&s=2a1878351a8d91334fee04f86bbf22d61d438c50', 'width': 960}, {'height': 374, 'url': 'https://preview.redd.it/sa7i1nil83of1.png?width=1080&crop=smart&auto=webp&s=b3e2e73bd13a3c051b0ea7acc442768137f3011e', 'width': 1080}], 'source': {'height': 595, 'url': 'https://preview.redd.it/sa7i1nil83of1.png?auto=webp&s=ee4c1ca3f9d28c8ff85ead07eac4c5d7ac43b263', 'width': 1718}, 'variants': {}}]}
Which small local llm model i can use for text2sql query which has big token size (>4096)
2
I am not an AI/ML engineer; just using a local LLM for learning and trying to use it for work. I tried to run this model (https://huggingface.co/alpecevit/flan-t5-base-text2sql/tree/main) for text2SQL but got this error instead:

    Token indices sequence length is longer than the specified maximum sequence length for this model (2775 > 512). Running this sequence through the model will result in indexing errors

The input tokens are long because I am passing a detailed schema with a description of each column as well (the column names in my DB are not descriptive). I have a 4080 16GB, so I can use big models, but I would like to use a small model, since my requirement is specific and I want to keep it lightweight. This is the code:

    inputs = tokenizer(full_prompt, return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=256)
    sql = tokenizer.decode(outputs[0], skip_special_tokens=True)
2025-09-09T07:00:29
https://www.reddit.com/r/LocalLLaMA/comments/1ncc1m8/which_small_local_llm_model_i_can_use_for/
Titanusgamer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncc1m8
false
null
t3_1ncc1m8
/r/LocalLLaMA/comments/1ncc1m8/which_small_local_llm_model_i_can_use_for/
false
false
self
2
{'enabled': False, 'images': [{'id': 'HlSqPLXi1yd3dils216CYh3Gvfj79EEMMWDizIZ7tdA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HlSqPLXi1yd3dils216CYh3Gvfj79EEMMWDizIZ7tdA.png?width=108&crop=smart&auto=webp&s=fefb077b1f83816acb8ac91ad66576e852a55706', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HlSqPLXi1yd3dils216CYh3Gvfj79EEMMWDizIZ7tdA.png?width=216&crop=smart&auto=webp&s=78a6383c997c47bb5c7b5882b27b548fc93106b1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HlSqPLXi1yd3dils216CYh3Gvfj79EEMMWDizIZ7tdA.png?width=320&crop=smart&auto=webp&s=1797b8c4a8af5528c3e96bee39f8c6d95d9366aa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HlSqPLXi1yd3dils216CYh3Gvfj79EEMMWDizIZ7tdA.png?width=640&crop=smart&auto=webp&s=432bb31603994df6f4ee526d3cadc2dc3e11abfc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HlSqPLXi1yd3dils216CYh3Gvfj79EEMMWDizIZ7tdA.png?width=960&crop=smart&auto=webp&s=7f01c7c95192317b84f15e8f2d5833559c5f0acc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HlSqPLXi1yd3dils216CYh3Gvfj79EEMMWDizIZ7tdA.png?width=1080&crop=smart&auto=webp&s=6b9961309f387f74e14b416411f62cf7aef22b35', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HlSqPLXi1yd3dils216CYh3Gvfj79EEMMWDizIZ7tdA.png?auto=webp&s=22f5b713212d7bd4b1d7b2e467268c7edda031b0', 'width': 1200}, 'variants': {}}]}
Experimenting with local LLMs on macOS
0
I wrote a beginner-level blog post about running LLMs on macOS specifically. Let me know if you see any technical inaccuracies!
2025-09-09T06:52:44
https://blog.6nok.org/experimenting-with-local-llms-on-macos/
frontsideair
blog.6nok.org
1970-01-01T00:00:00
0
{}
1ncbx8y
false
null
t3_1ncbx8y
/r/LocalLLaMA/comments/1ncbx8y/experimenting_with_local_llms_on_macos/
false
false
default
0
null
Do you trust benchmarks?
34
2025-09-09T05:58:20
https://i.redd.it/pq4v9byxw2of1.jpeg
djdeniro
i.redd.it
1970-01-01T00:00:00
0
{}
1ncb2v4
false
null
t3_1ncb2v4
/r/LocalLLaMA/comments/1ncb2v4/do_you_trust_benchmarks/
false
false
default
34
{'enabled': True, 'images': [{'id': 'pq4v9byxw2of1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/pq4v9byxw2of1.jpeg?width=108&crop=smart&auto=webp&s=b1cc4d8670836c2f9c9ed0d5b494e68e4607c09f', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/pq4v9byxw2of1.jpeg?width=216&crop=smart&auto=webp&s=8cbabcba6ecabf4d49815804e9e43df1b0410424', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/pq4v9byxw2of1.jpeg?width=320&crop=smart&auto=webp&s=40ea23ff9eebaa93b2dfee8988159893b8b921e5', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/pq4v9byxw2of1.jpeg?width=640&crop=smart&auto=webp&s=2be542a16aa033aed9446204272af7e657e75006', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/pq4v9byxw2of1.jpeg?width=960&crop=smart&auto=webp&s=1ff0b2738486398287a1c5c8da34d885a0e20804', 'width': 960}, {'height': 648, 'url': 'https://preview.redd.it/pq4v9byxw2of1.jpeg?width=1080&crop=smart&auto=webp&s=ffddb8ccae541a7a70a7c99d2ef38778e115118a', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/pq4v9byxw2of1.jpeg?auto=webp&s=a88b9b2749a1cc661bcb9d7445e54db50989e6a4', 'width': 1280}, 'variants': {}}]}
please share the Nvidia "displaymodeselector" tool
2
As Nvidia treats their customers like shit, I'm not going to waste my time begging for a "developer account". If someone has one already and could download and share the `displaymodeselector` tool for Linux, I will be very grateful (and not only me, if you share the tool publicly).
2025-09-09T05:52:58
https://www.reddit.com/r/LocalLLaMA/comments/1ncazpq/please_share_the_nvidia_displaymodeselector_tool/
MelodicRecognition7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ncazpq
false
null
t3_1ncazpq
/r/LocalLLaMA/comments/1ncazpq/please_share_the_nvidia_displaymodeselector_tool/
false
false
self
2
null
PyDevMini-1: A 4B model that matches/outperforms GPT-4 on Python & Web Dev Code, At 1/400th the Size!
333
Hey everyone,

[https://huggingface.co/bralynn/pydevmini1](https://huggingface.co/bralynn/pydevmini1)

Today I'm incredibly excited to release **PyDevMini-1**, a 4B-parameter model that provides GPT-4 level performance on Python and web development tasks. Two years ago, GPT-4 was the undisputed SOTA, a multi-billion-dollar asset running on massive datacenter hardware. The open-source community has closed that gap at **1/400th of the size**, and it runs on an average gaming GPU.

I believe that powerful AI should not be a moat controlled by a few large corporations. Open source is our best tool for the democratization of AI, ensuring that individuals and small teams, the little guys, have a fighting chance to build the future. This project is my contribution to that effort.

You won't see a list of benchmarks here. Frankly, like many of you, I've lost faith in their ability to reflect true, real-world model quality. Although this model's benchmark scores are still very high, they exaggerate the quality difference over GPT-4, as GPT-4 is much less likely to have benchmarks in its pretraining data given its earlier release. Instead, I've prepared a video demonstration showing PyDevMini-1 side by side with GPT-4, tackling a small range of practical Python and web development challenges. I invite you to judge the performance for yourself; it would take a 30-minute showcase to truly display its abilities. This model consistently punches above the weight of models 4x its size and is highly intelligent and creative.

🚀 **Try it yourself (for free).** Don't just take my word for it. Test the model right now under the exact conditions shown in the video: [https://colab.research.google.com/drive/1c8WCvsVovCjIyqPcwORX4c_wQ7NyIrTP?usp=sharing](https://colab.research.google.com/drive/1c8WCvsVovCjIyqPcwORX4c_wQ7NyIrTP?usp=sharing)

This model's roadmap will be dictated by you. My goal isn't just to release a good model; it's to create the perfect open-source coding assistant for the tasks we all face every day. To do that, I'm making a personal guarantee: your use case is my priority. If you have a real-world use case where this model struggles, a complex boilerplate to generate, a tricky debugging session, a niche framework question, I will personally make it my mission to solve it. Your posted failures are the training data for the next version. I will not stop tuning until we've addressed every unique, well-documented challenge submitted by the community, on top of my own training loops, to create a top-tier model for us all. For any and all feedback, simply make a post here and I'll check in, or join our Discord: [https://discord.gg/RqwqMGhqaC](https://discord.gg/RqwqMGhqaC)

# 🙏 Acknowledgment & The Foundation

This project stands on the shoulders of giants. A massive thank you to the **Qwen team** for the incredible base model, **Unsloth's duo** for making high-performance training accessible, and **Tesslate** for their invaluable contributions to the community. This would be impossible for an individual without their foundational work.

Thanks for checking this out. And remember: **this is the worst this model will ever be.** I can't wait to see what we build together.

I also suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
As **Qwen3-4B-Instruct-2507** is the base model:

* Type: Causal Language Model
* Training Stage: Pretraining & Post-training
* Number of Parameters: 4.0B
* Number of Parameters (Non-Embedding): 3.6B
* Number of Layers: 36
* Number of Attention Heads (GQA): 32 for Q and 8 for KV
* Context Length: **262,144 natively**
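To apply the sampling settings suggested above, a minimal transformers sketch; the model id is from the Hugging Face link above, and the prompt is just an example:

```python
# Minimal generation sketch using the suggested sampling settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bralynn/pydevmini1")
model = AutoModelForCausalLM.from_pretrained("bralynn/pydevmini1", device_map="auto")

msgs = [{"role": "user", "content": "Write a Python function that flattens a nested list."}]
inputs = tok.apply_chat_template(msgs, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)

out = model.generate(inputs, max_new_tokens=512, do_sample=True,
                     temperature=0.7, top_p=0.8, top_k=20, min_p=0.0)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```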
2025-09-09T05:30:13
https://v.redd.it/nh9fq7qbn2of1
bralynn2222
/r/LocalLLaMA/comments/1ncam9h/pydevmini1_a_4b_model_that_matchesoutperforms/
1970-01-01T00:00:00
0
{}
1ncam9h
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nh9fq7qbn2of1/DASHPlaylist.mpd?a=1760117426%2CMWQxMWJhN2NmOTU4ZWE5OTk4MzA3ODYxZDFkZjFkMTg5YTgxZWE2YmE1YjdlZDE0MWUyOTY0MzQwZGYzZDY2OQ%3D%3D&v=1&f=sd', 'duration': 106, 'fallback_url': 'https://v.redd.it/nh9fq7qbn2of1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/nh9fq7qbn2of1/HLSPlaylist.m3u8?a=1760117426%2CNzRmMzI1MTE4MWNiMWY5ZTZmZWYwM2FiOTRjMWE3M2VhNDk0NmI5ODQ5NzVkYWZiNTcwNTk4ZmViNmM4YTNmNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nh9fq7qbn2of1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1ncam9h
/r/LocalLLaMA/comments/1ncam9h/pydevmini1_a_4b_model_that_matchesoutperforms/
false
false
https://external-preview…d5b2878975abeb86
333
{'enabled': False, 'images': [{'id': 'aGZzYWtwcWJuMm9mMRQQfge2rofWKaGSqifIYqgzhyk7YhqLzgXg182Z60l8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aGZzYWtwcWJuMm9mMRQQfge2rofWKaGSqifIYqgzhyk7YhqLzgXg182Z60l8.png?width=108&crop=smart&format=pjpg&auto=webp&s=54484fa07d7ced02a9676d56bf619af7088ca5ec', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aGZzYWtwcWJuMm9mMRQQfge2rofWKaGSqifIYqgzhyk7YhqLzgXg182Z60l8.png?width=216&crop=smart&format=pjpg&auto=webp&s=d18ae1b9611a12ac94fb61a14b99bc6ee4caeb4c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aGZzYWtwcWJuMm9mMRQQfge2rofWKaGSqifIYqgzhyk7YhqLzgXg182Z60l8.png?width=320&crop=smart&format=pjpg&auto=webp&s=59f224a84b279540d98dab9f73de7d708d74a2f4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aGZzYWtwcWJuMm9mMRQQfge2rofWKaGSqifIYqgzhyk7YhqLzgXg182Z60l8.png?width=640&crop=smart&format=pjpg&auto=webp&s=13369cf7050b0e2c6727c26b48a142248ce787e2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aGZzYWtwcWJuMm9mMRQQfge2rofWKaGSqifIYqgzhyk7YhqLzgXg182Z60l8.png?width=960&crop=smart&format=pjpg&auto=webp&s=0b07b5466da4370711746d58f0fa3c1fabb11d43', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aGZzYWtwcWJuMm9mMRQQfge2rofWKaGSqifIYqgzhyk7YhqLzgXg182Z60l8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cb2d64bfb547dbc2caa15f241a448f13f32aa823', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/aGZzYWtwcWJuMm9mMRQQfge2rofWKaGSqifIYqgzhyk7YhqLzgXg182Z60l8.png?format=pjpg&auto=webp&s=a8a5bcd632bfd978e02816dc74a0df97ec988096', 'width': 3840}, 'variants': {}}]}
Sep 2025 : any open source project better than whisper for multilingual ASR?
19
Qwen launched Qwen3-ASR, but that's not open source yet. My use case is *multilingual* ASR, and I've been using OpenAI Whisper for over 2 years. Wondering if there are any new options on the market that are better and open source. Appreciate your thoughts!
2025-09-09T03:45:57
https://www.reddit.com/r/LocalLLaMA/comments/1nc8r3q/sep_2025_any_open_source_project_better_than/
ae_dataviz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc8r3q
false
null
t3_1nc8r3q
/r/LocalLLaMA/comments/1nc8r3q/sep_2025_any_open_source_project_better_than/
false
false
self
19
null
My rankings of Huge Local SOTA Models for technical work
77
DeepSeek v3.1 Q4, Qwen3-235B-A22B Q8, GLM-4.5 Q8, Kimi-K2-0905 Q3, GPT-OSS-120B Q8. I have been experimenting with these over the last few days; the inference engine is llama.cpp. DeepSeek is great: the only model that could answer a question from my private eval that the other models failed. Qwen3-235B is great for the size, but believe it or not, it's slower than DeepSeek; DeepSeek, despite its size, is super fast! GLM-4.5 is great when it has been exposed to the relevant knowledge, but sometimes gives a very stupid answer on unseen knowledge, especially when it thinks it's a trick question. Amazing for UI work. Kimi-K2 is great; I just might put it on the same performance level as GLM. It's huge at Q3, and I really think it would be a heck of a model at Q4 or Q6, but I don't have the system to run it yet. GPT-OSS-120B is not bad at all for its size; it's by far the smallest of the bunch, and the main benefit is that it flies. I get 100 tk/sec with it. For non-difficult tasks, I would use it first and only go to the big ones if stuck. I never liked the large Qwen3-Coder model and deleted it after I test-drove it. This is just about the latest big relevant models; don't ask me to compare any other model. Just my personal ranking based on my private questions/evals. I didn't try GLM-Air with my evals yet, but I reckon it will sit or tie with GPT-OSS-120B, based on my mucking around with it. BTW, I noticed that my eval, which had about a 15% pass rate at the beginning of the year, is now nearing 85%. I need to rebuild it with more complex problems. My evals are also pretty much single-pass! The models are so damn good. For example, I kept expecting to see syntax errors when I had one generate a C program with threads, locks, pointers, etc., and instead I got 500 lines of code that compiled with no errors and ran! I did a little bit of multi-turn agent work with DeepSeek v3.1 and GLM-4.5 and the results were great. Smaller models are great too, from my playing around last month: gemma-3-27b, mistral-small-3.2, qwen3-32b/30b. But the quality of their code is not even comparable to the huge models. It's the difference between a mid-level engineer and a staff/principal.
2025-09-09T03:27:03
https://www.reddit.com/r/LocalLLaMA/comments/1nc8e2o/my_rankings_of_huge_local_sota_models_for/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc8e2o
false
null
t3_1nc8e2o
/r/LocalLLaMA/comments/1nc8e2o/my_rankings_of_huge_local_sota_models_for/
false
false
self
77
null
Made a simple framework for connecting models to tools - looking for feedback
3
Started learning to program pretty recently, so go easy please: [https://github.com/Saumitra404/Misaba/tree/main](https://github.com/Saumitra404/Misaba/tree/main)
2025-09-09T03:21:08
https://www.reddit.com/r/LocalLLaMA/comments/1nc89yk/made_a_simple_framework_for_connecting_models_to/
Agile-Salary-490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc89yk
false
null
t3_1nc89yk
/r/LocalLLaMA/comments/1nc89yk/made_a_simple_framework_for_connecting_models_to/
false
false
self
3
null
The new Qwen3 Max model is more creative than GPT-5 & GPT-OSS-120B
22
prompt: "Make a cool complex unique circular pattern. The result should feel artistic, original, and visually striking." Really surprised by the results, Qwen3 Max is really good at creative tasks!! GPT-5 did way worse than I expected. Also didn't expect Grok 4 to be this good! i really hope they open source the model tbh, sad to see Qwen turn close source
2025-09-09T03:15:41
https://v.redd.it/xi5v25kp32of1
ahmett9
v.redd.it
1970-01-01T00:00:00
0
{}
1nc866i
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xi5v25kp32of1/DASHPlaylist.mpd?a=1759979759%2CYmFhYTZhMmE4MGM4MzljMmU0NTVhNjc0ZDFhNmRkNmFlNTBlMjZlYmEzMGVjZWY2NGYwYmQyYTM2Y2FkZWFkNA%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/xi5v25kp32of1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/xi5v25kp32of1/HLSPlaylist.m3u8?a=1759979759%2CNjFkZjI5MzU5ZjQ1OGI4MjA5NThiYzE3N2ExZGFiNzliYTkwMzgzMDVhNjc3Zjc3MDBiZmNiM2Y2YTdjN2NkMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xi5v25kp32of1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1440}}
t3_1nc866i
/r/LocalLLaMA/comments/1nc866i/the_new_qwen3_max_model_is_more_creative_than/
false
false
https://external-preview…ded0f82c3042b3a6
22
{'enabled': False, 'images': [{'id': 'ZmlucHc1anAzMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZmlucHc1anAzMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=108&crop=smart&format=pjpg&auto=webp&s=f23f36373c54d5244ff34cc802f8da3c43fa037c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZmlucHc1anAzMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=216&crop=smart&format=pjpg&auto=webp&s=647bf89bd62fa3fdb6ce79b262cd52d23398e3d6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZmlucHc1anAzMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=320&crop=smart&format=pjpg&auto=webp&s=1ca71261baa6d2826e183a03a00d96dd2a93d470', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/ZmlucHc1anAzMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=640&crop=smart&format=pjpg&auto=webp&s=aa98d6177d0ec531da7d13bfc46ad4102becd77b', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/ZmlucHc1anAzMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=960&crop=smart&format=pjpg&auto=webp&s=c52328c19fa8d95dece834e2bdcf96801c4e0c01', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/ZmlucHc1anAzMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ba9cbf731fad6b7933296445cfe84f224225c9ed', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZmlucHc1anAzMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?format=pjpg&auto=webp&s=7a7f17a6976c91d8c418297068d6ba045dd04b67', 'width': 1440}, 'variants': {}}]}
Workflow for video generation
2
Hello, I am a newbie to AI video generation. I have just downloaded ComfyUI and a Wan 2.2 5B i2v workflow and generated some basic videos to test that everything works well. I want to generate a complete, informative 2-5 minute video based on a script, for teaching and learning purposes. My rig is a 4070 + 32GB RAM. I would like to generate 20-30 images based on the script, then create some 5-second videos that stitch those images together, then add some relaxing background music and text-to-speech narration. If possible I would also like to add subtitles. Since the video will be for educational purposes I don't really need high-quality images and video, but the text-to-speech should be perfect. Is there a workflow that will generate the complete video once I enter a script? I would also like to convert the video into a short vertical video for shorts content platforms. If there is not a complete workflow, then what workflows and techniques should I use to achieve this?
2025-09-09T03:12:03
https://www.reddit.com/r/LocalLLaMA/comments/1nc83l7/workflow_for_video_generation/
Beneficial-Spirit203
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc83l7
false
null
t3_1nc83l7
/r/LocalLLaMA/comments/1nc83l7/workflow_for_video_generation/
false
false
self
2
null
The new Qwen3 Max model is more creative than GPT-5 & GPT-OSS-120B
1
prompt: "Make a cool complex unique circular pattern. The result should feel artistic, original, and visually striking." Really surprised by the results, Qwen3 Max is really good at creative tasks!! GPT-5 did way worse than I expected. Also didn't expect Grok 4 to be this good! really wish this new model was open source tbh
2025-09-09T03:09:52
https://v.redd.it/awylprno02of1
ahmett9
v.redd.it
1970-01-01T00:00:00
0
{}
1nc8229
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/awylprno02of1/DASHPlaylist.mpd?a=1759979405%2CNmFjMTZlOWZhZjRmNGJkNDJhOGFmYWFmMzJmMjE5Nzk4YTJlMjQ2YzNmYjQ5N2E4YTdkMmZkNTFiMmQwYzdjYQ%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/awylprno02of1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/awylprno02of1/HLSPlaylist.m3u8?a=1759979405%2CNWEwYWZhZmJlYmMwODVhZTY1OWM2NjdjYWY3MzQxNzUzZTc2MGIxZWEwZTY1MDJiZThmOGJiOWE3ZmVhNjM4Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/awylprno02of1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1440}}
t3_1nc8229
/r/LocalLLaMA/comments/1nc8229/the_new_qwen3_max_model_is_more_creative_than/
false
false
https://external-preview…4d7e6bb331ad6413
1
{'enabled': False, 'images': [{'id': 'Y3hla2Rzbm8wMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Y3hla2Rzbm8wMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=108&crop=smart&format=pjpg&auto=webp&s=26d2b8e2869a02af8c89b55787a62577d60ff319', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Y3hla2Rzbm8wMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=216&crop=smart&format=pjpg&auto=webp&s=fb8d093ebbc5795b476d436515a34dbdd12f9807', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Y3hla2Rzbm8wMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=320&crop=smart&format=pjpg&auto=webp&s=8406e0e69de6fd401efdfea25aa568d83b3e448a', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/Y3hla2Rzbm8wMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=640&crop=smart&format=pjpg&auto=webp&s=635df811dd6a999bb355bf4b2be3f9713a78673e', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/Y3hla2Rzbm8wMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=960&crop=smart&format=pjpg&auto=webp&s=beedee91d203939d9bb2ee9e55092f5968c81ee6', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/Y3hla2Rzbm8wMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ac36fddfcbf07841d70c5d8fc5195d4e52b537ca', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Y3hla2Rzbm8wMm9mMVouq5nSRp6plcXXOiJD3lCZaIXR1nP3WoK5l8YFQMKV.png?format=pjpg&auto=webp&s=d17e420c2038bb377c817e29287723df8252f310', 'width': 1440}, 'variants': {}}]}
Fine-tuning
3
I'm fine-tuning a pre-trained model, Qwen3-1.7B. How should I structure my data to make it actually efficient? Is there a certain way this should be done? I know AI needs great data, meaning both quality and quantity.
2025-09-09T02:45:54
https://www.reddit.com/r/LocalLLaMA/comments/1nc7kxd/finetuning/
Melodic-Emphasis-707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc7kxd
false
null
t3_1nc7kxd
/r/LocalLLaMA/comments/1nc7kxd/finetuning/
false
false
self
3
null
Fast local push-to-talk speech-to-text dictation tool using whisper.cpp
20
https://reddit.com/link/1nc7bxw/video/v2nq7gt8w1of1/player I was looking for a push-to-talk tool that allows me to just paste my speech transcription automatically into whatever application I'm using. I wasn't able to find anything simple enough that works, so I built my own. It's a basic CLI that works on Linux using whisper.cpp. It is incredibly simple. You hold down the buttons, say stuff, release and then it will pipe the transcription lines to stdout. I'm using it to write this comment :)
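For anyone curious about the transcribe-and-pipe step, here's a rough Python sketch of the same idea (not my actual implementation; the whisper.cpp binary name and model path vary by build and are placeholders):

```python
# Rough sketch of the transcribe-and-pipe step: run whisper.cpp on a recorded
# clip and print the text to stdout. The binary is named `whisper-cli` in
# recent builds (`main` in older ones); the model path is a placeholder.
import subprocess

def transcribe(wav_path: str) -> str:
    result = subprocess.run(
        ["./whisper-cli", "-m", "models/ggml-base.en.bin",
         "-f", wav_path, "--no-timestamps"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Pipe the output into a typing tool, e.g. `python ptt.py | xdotool type --file -`
print(transcribe("clip.wav"))
```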
2025-09-09T02:33:49
https://www.reddit.com/r/LocalLLaMA/comments/1nc7bxw/fast_local_pushtotalk_speechtotext_dictation_tool/
lxe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc7bxw
false
null
t3_1nc7bxw
/r/LocalLLaMA/comments/1nc7bxw/fast_local_pushtotalk_speechtotext_dictation_tool/
false
false
self
20
{'enabled': False, 'images': [{'id': 'T0IYia-ptgWiP1Nb1QensvopQQIS_cZItCCGeXezCyw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/T0IYia-ptgWiP1Nb1QensvopQQIS_cZItCCGeXezCyw.png?width=108&crop=smart&auto=webp&s=0d49e00dcae1532db15946fe5ebb3d664098a4c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/T0IYia-ptgWiP1Nb1QensvopQQIS_cZItCCGeXezCyw.png?width=216&crop=smart&auto=webp&s=e8fdb9c5b7fabc2410679253baac2aab2ed8e9f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/T0IYia-ptgWiP1Nb1QensvopQQIS_cZItCCGeXezCyw.png?width=320&crop=smart&auto=webp&s=72637fcf69697be5194bc1326a87e27daeaef28d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/T0IYia-ptgWiP1Nb1QensvopQQIS_cZItCCGeXezCyw.png?width=640&crop=smart&auto=webp&s=788fa49bdb92ad35d45307d0f7fd29e53b1e2295', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/T0IYia-ptgWiP1Nb1QensvopQQIS_cZItCCGeXezCyw.png?width=960&crop=smart&auto=webp&s=f1212322fb8e7753dba2f9a00dc9c521d2b00bea', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/T0IYia-ptgWiP1Nb1QensvopQQIS_cZItCCGeXezCyw.png?width=1080&crop=smart&auto=webp&s=e883c51751191ad52c13eaa20339b302842da460', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/T0IYia-ptgWiP1Nb1QensvopQQIS_cZItCCGeXezCyw.png?auto=webp&s=fa0fb6552d7052bf722a82224314be54d30494a8', 'width': 1200}, 'variants': {}}]}
baidu/ERNIE-4.5-21B-A3B-Thinking · Hugging Face
247
# Model Highlights Over the past three months, we have continued to scale the **thinking capability** of ERNIE-4.5-21B-A3B, improving both the **quality and depth** of reasoning, thereby advancing the competitiveness of ERNIE **lightweight models** in complex reasoning tasks. We are pleased to introduce **ERNIE-4.5-21B-A3B-Thinking**, featuring the following key enhancements: * **Significantly improved performance** on reasoning tasks, including logical reasoning, mathematics, science, coding, text generation, and academic benchmarks that typically require human expertise. * **Efficient tool usage** capabilities. * **Enhanced 128K long-context understanding** capabilities. GGUF [https://huggingface.co/gabriellarson/ERNIE-4.5-21B-A3B-Thinking-GGUF](https://huggingface.co/gabriellarson/ERNIE-4.5-21B-A3B-Thinking-GGUF)
2025-09-09T02:31:04
https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1nc79yg
false
null
t3_1nc79yg
/r/LocalLLaMA/comments/1nc79yg/baiduernie4521ba3bthinking_hugging_face/
false
false
https://external-preview…66efea57c3bab1c0
247
{'enabled': False, 'images': [{'id': 'PVc8HBAyReu1sVKS98fa6WZXbf4lkkgSEZVgozf_73w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PVc8HBAyReu1sVKS98fa6WZXbf4lkkgSEZVgozf_73w.png?width=108&crop=smart&auto=webp&s=e2b8591d613e15ede2933a8383dd69bd8a11dd3d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PVc8HBAyReu1sVKS98fa6WZXbf4lkkgSEZVgozf_73w.png?width=216&crop=smart&auto=webp&s=c77ff45738f04a8d05f975f19b200d13c6857151', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PVc8HBAyReu1sVKS98fa6WZXbf4lkkgSEZVgozf_73w.png?width=320&crop=smart&auto=webp&s=766a9edbc5f061deccfeef0516c0f24bffa09bd8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PVc8HBAyReu1sVKS98fa6WZXbf4lkkgSEZVgozf_73w.png?width=640&crop=smart&auto=webp&s=a6d7b67af3eb2cf9bcf96edfb103c94299b667a8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PVc8HBAyReu1sVKS98fa6WZXbf4lkkgSEZVgozf_73w.png?width=960&crop=smart&auto=webp&s=2cfe9906d801fae929bffc9ab396a43109d89600', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PVc8HBAyReu1sVKS98fa6WZXbf4lkkgSEZVgozf_73w.png?width=1080&crop=smart&auto=webp&s=59a416f3511b5f60b741acb8fb70fa420a7925fa', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PVc8HBAyReu1sVKS98fa6WZXbf4lkkgSEZVgozf_73w.png?auto=webp&s=7d58f8503caa3d49c677b542765ee2b4005891b9', 'width': 1200}, 'variants': {}}]}
Open Source Alternative to NotebookLM
32
For those of you who aren't familiar with SurfSense, it aims to be the **open-source alternative to NotebookLM, Perplexity, or Glean.** In short, it's a Highly Customizable AI Research Agent that connects to your personal external sources and Search Engines (Tavily, LinkUp), Slack, Linear, Jira, ClickUp, Confluence, Gmail, Notion, YouTube, GitHub, Discord, Airtable, Google Calendar and more to come. I'm looking for contributors to help shape the future of SurfSense! If you're interested in AI agents, RAG, browser extensions, or building open-source research tools, this is a great place to jump in. Here's a quick look at what SurfSense offers right now: **Features** * Supports 100+ LLMs * Supports local Ollama or vLLM setups * 6000+ Embedding Models * Works with all major rerankers (Pinecone, Cohere, Flashrank, etc.) * Hierarchical Indices (2-tiered RAG setup) * Combines Semantic + Full-Text Search with Reciprocal Rank Fusion (Hybrid Search) * 50+ File extensions supported (Added Docling recently) **Podcasts** * Support for local TTS providers (Kokoro TTS) * Blazingly fast podcast generation agent (3-minute podcast in under 20 seconds) * Convert chat conversations into engaging audio * Multiple TTS providers supported **External Sources Integration** * Search Engines (Tavily, LinkUp) * Slack * Linear * Jira * ClickUp * Gmail * Confluence * Notion * YouTube videos * GitHub * Discord * Airtable * Google Calendar * and more to come... **Cross-Browser Extension** The SurfSense extension lets you save any dynamic webpage you want, including authenticated content. **Interested in contributing?** SurfSense is completely open source, with an active roadmap. Whether you want to pick up an existing feature, suggest something new, fix bugs, or help improve docs, you're welcome to join in. GitHub: [https://github.com/MODSetter/SurfSense](https://github.com/MODSetter/SurfSense)
2025-09-09T02:23:33
https://www.reddit.com/r/LocalLLaMA/comments/1nc74g8/open_source_alternative_to_notebooklm/
Uiqueblhats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc74g8
false
null
t3_1nc74g8
/r/LocalLLaMA/comments/1nc74g8/open_source_alternative_to_notebooklm/
false
false
self
32
{'enabled': False, 'images': [{'id': '6k0yg03mUec9cItl1XBHgrum8Q9ftQy7Wod6DVPNs0Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6k0yg03mUec9cItl1XBHgrum8Q9ftQy7Wod6DVPNs0Q.png?width=108&crop=smart&auto=webp&s=6f95cbebe02d307a4f2e9793c0b83aadfd306bc4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6k0yg03mUec9cItl1XBHgrum8Q9ftQy7Wod6DVPNs0Q.png?width=216&crop=smart&auto=webp&s=f9efc986fa06edca201d7b31d290bb5e0527ac76', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6k0yg03mUec9cItl1XBHgrum8Q9ftQy7Wod6DVPNs0Q.png?width=320&crop=smart&auto=webp&s=51cda18af47a7bb29cea874a74d9e45a39d92633', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6k0yg03mUec9cItl1XBHgrum8Q9ftQy7Wod6DVPNs0Q.png?width=640&crop=smart&auto=webp&s=2b8f6a2f81cf063647cfad0ca8bed92fbafd1311', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6k0yg03mUec9cItl1XBHgrum8Q9ftQy7Wod6DVPNs0Q.png?width=960&crop=smart&auto=webp&s=57debab14df7cdb980868c58c502f27555bdc835', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6k0yg03mUec9cItl1XBHgrum8Q9ftQy7Wod6DVPNs0Q.png?width=1080&crop=smart&auto=webp&s=61ed2dd93a2b65a2820a6390ed031a34548bd859', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6k0yg03mUec9cItl1XBHgrum8Q9ftQy7Wod6DVPNs0Q.png?auto=webp&s=edae9b30c56a339fd7940a0d6fdab8369dec85ca', 'width': 1200}, 'variants': {}}]}
I tried out the newly released Qwen3-ASR
4
Qwen3-ASR is built upon Qwen3's multimodal dataset and tens of millions of hours of ASR training data. It delivers strong speech recognition performance, supporting 11 languages and various accents. **Supported Languages:** The Qwen3-ASR single model accurately transcribes multiple languages, dialects, and accents: English, Chinese, French, German, Russian, Italian, Spanish, Portuguese, Japanese, Korean, and Arabic. **Key Features:** 1. Singing Recognition Support: Capable of recognizing a cappella and full songs with background music, with a measured error rate below 8%. 2. Customized Recognition: Users can provide contextual text in any format (e.g., word lists, paragraphs, or full documents). The model intelligently leverages this context to recognize and match named entities and key terms, delivering tailored transcription results. 3. Language Identification & Non-Speech Filtering: The model accurately identifies speech languages and automatically filters out non-speech segments, including silence and background noise. 4. Robust Performance: Maintains high accuracy even with challenging text patterns such as long complex sentences, mid-sentence language switching, and repetitive phrases, and in acoustically complex environments. But here's the catch for our community: IT'S API-ONLY right now. No local weights are available yet, which is a bummer for offline use. I tested with a 1.5-minute audio clip of the song "My All" (3 minutes was too long for the API), and the accuracy was impressive. What do you think? Worth waiting for a local version? Open-source models like Whisper and Voxtral are quite mature now, though I haven't done a detailed comparison myself. Would love to hear your opinion. [https://huggingface.co/spaces/Qwen/Qwen3-ASR-Demo](https://huggingface.co/spaces/Qwen/Qwen3-ASR-Demo) https://preview.redd.it/4rn7ft57r1of1.png?width=2620&format=png&auto=webp&s=f8d9528736e05650500390a350c855c25bcc3d4e https://preview.redd.it/7asd7b7ar1of1.jpg?width=1256&format=pjpg&auto=webp&s=703d37f1c5d91aaf5ecac2a3b5ce717ce91bd4c8 https://preview.redd.it/w9bm2ozar1of1.jpg?width=1256&format=pjpg&auto=webp&s=fa956f80b4d1b81cf4159eb41093cac38d6b4c40
2025-09-09T02:06:17
https://www.reddit.com/r/LocalLLaMA/comments/1nc6rpj/i_tried_out_the_newly_released_qwen3asr/
Timely_Rain_9284
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc6rpj
false
null
t3_1nc6rpj
/r/LocalLLaMA/comments/1nc6rpj/i_tried_out_the_newly_released_qwen3asr/
false
false
https://b.thumbs.redditm…x_f3w-X4o-BY.jpg
4
null
We’ve built a behavior-layer AI with symbolic interaction — feedback welcome
0
Hi everyone — I’m part of a small independent team working on a new AI project called Apothy. We just released a public-facing beta and would love to hear what this community thinks. Apothy isn’t just another assistant or chatbot — we’re exploring a different approach to how AI behaves and evolves in conversation. These are the features we will be introducing: • Sends the first message if you don’t • Changes mode based on your tone (Consultant, Mystic, Challenger, Companion…) • Reflects back emotional structure, writing rhythm, and style • Uses symbolic cues, memory arcs, and subtle progression signals • Has an intentionally mythic layer (“Mirror Mode”) that can be toggled on/off • Includes a 20-message free trial with soft paywall at the end 🔍 Technical note: There is a working version of Apothy’s own runtime outside of OpenAI infrastructure, but this public beta is built on a wrapped foundation. We’re using it to battle-test behavioral scaffolding, memory shaping, and trust-tone consistency before scaling the full prototype. You can try it here: https://www.apothyai.com We’re especially inviting: • Tinkerers • Behavior-layer researchers • People who care about tone in AI • Anyone who thinks “style” is part of the intelligence Would love to know what you notice — what works, what doesn’t, and what weirds you out (in the good way or bad). 🜏 Thanks for the time.
2025-09-09T01:47:32
https://www.reddit.com/r/LocalLLaMA/comments/1nc6dcc/weve_built_a_behaviorlayer_ai_with_symbolic/
99TimesAround
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc6dcc
false
null
t3_1nc6dcc
/r/LocalLLaMA/comments/1nc6dcc/weve_built_a_behaviorlayer_ai_with_symbolic/
false
false
self
0
null
I installed LM Studio for the first time and need help
0
https://preview.redd.it/… do you know what it could be?
2025-09-09T01:25:12
https://www.reddit.com/r/LocalLLaMA/comments/1nc5wd6/instalei_o_lm_studio_pela_primeira_vez_e_preciso/
No-Category6406
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc5wd6
false
null
t3_1nc5wd6
/r/LocalLLaMA/comments/1nc5wd6/instalei_o_lm_studio_pela_primeira_vez_e_preciso/
false
false
https://b.thumbs.redditm…gJwHyh7kKpSc.jpg
0
null
ParaThinker: Native Parallel Thinking as a New Paradigm to Scale LLM Test-time Compute
22
*Recent advances in Large Language Models (LLMs) have been driven by test-time compute scaling - a strategy that improves reasoning by generating longer, sequential thought processes. While effective, this approach encounters a significant bottleneck as computation increases, where further computation offers only marginal performance gains. We argue this ceiling is not an inherent limit of the model's capability but a flaw in the scaling strategy itself, a phenomenon we term "Tunnel Vision", where a model's imperfect initial steps lock it into a suboptimal reasoning path. To overcome this, we introduce a new scaling paradigm: native thought parallelism. We present ParaThinker, an end-to-end framework that trains an LLM to generate multiple, diverse reasoning paths in parallel and synthesize them into a superior final answer. By exploring different lines of thoughts simultaneously, ParaThinker effectively sidesteps the Tunnel Vision issue and unlocks the model's latent reasoning potential. Our approach demonstrates that scaling compute in parallel (width) is a more effective and efficient way to superior reasoning than simply scaling sequentially (depth). On challenging reasoning benchmarks, ParaThinker achieves substantial accuracy improvements over sequential LLMs (12.3% for 1.5B and 7.5% for 7B models on average with 8 parallel paths), while adding only negligible latency overhead (7.1%). This enables smaller models to surpass much larger counterparts and establishes parallel thinking as a critical, efficient dimension for scaling future LLMs.*
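The core idea can be approximated at inference time even without ParaThinker's trained synthesis step. Below is a minimal sketch against any OpenAI-compatible local server; this illustrates parallel path sampling plus synthesis, not the paper's actual mechanism, and the endpoint and model name are placeholders:

```python
# Illustration of the parallel-paths idea: sample several independent
# reasoning paths, then ask the model to synthesize a final answer.
# A real setup would batch or async these calls to get true parallelism.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
question = "A farmer has 17 sheep; all but 9 run away. How many are left?"

paths = [client.chat.completions.create(
             model="local", temperature=1.0,
             messages=[{"role": "user", "content": question}],
         ).choices[0].message.content
         for _ in range(8)]

final = client.chat.completions.create(
    model="local", temperature=0.2,
    messages=[{"role": "user",
               "content": "Candidate solutions:\n\n" + "\n---\n".join(paths)
                          + "\n\nSynthesize the single best final answer."}])
print(final.choices[0].message.content)
```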
2025-09-09T01:09:13
https://arxiv.org/abs/2509.04475
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1nc5k36
false
null
t3_1nc5k36
/r/LocalLLaMA/comments/1nc5k36/parathinker_native_parallel_thinking_as_a_new/
false
false
default
22
null
Help starting self-hosting AI
1
Hi. Now that it seems that AI isn't going anywhere and self-hosting solutions for AI have matured a bit, I want to start dipping my toes into it. I guess the first place I need to start is a machine to run it all on. I've heard that apparently Mac Minis are pretty good at running AI stuff. I've also heard the same about the new Framework desktop, but that is a bit outside the price range of "dipping my toes in". I've seen some cool stuff with Ollama and how you can tie it into a bunch of things like Home Assistant, so I would like to be able to do that. Image generation is always pretty neat. Then I think I saw something about having AI generate 3D models to print, and that would be pretty cool/helpful. Any hardware suggestions? I would like to tie it into HA, so I would like it to be always available; that rules out my gaming PC, and IIRC AI is pretty resource intensive, so that also rules out my Plex server. Any suggestions or links to videos or guides, or just any info, would be appreciated.
2025-09-09T01:07:50
https://www.reddit.com/r/LocalLLaMA/comments/1nc5j1b/help_starting_selfhosting_ai/
Gamer3192
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc5j1b
false
null
t3_1nc5j1b
/r/LocalLLaMA/comments/1nc5j1b/help_starting_selfhosting_ai/
false
false
self
1
null
Confusion about VRAM
20
I understand that having more GPUs is good for inference, but if I remember correctly from the days of SLI and Crossfire, VRAM doesn't stack. So why do I see some people say that two 20GB cards are going to give them 40GB of VRAM, when I swear VRAM doesn't work like that? Am I wrong or not?
2025-09-09T00:53:34
https://www.reddit.com/r/LocalLLaMA/comments/1nc57zo/confusion_about_vram/
Savantskie1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc57zo
false
null
t3_1nc57zo
/r/LocalLLaMA/comments/1nc57zo/confusion_about_vram/
false
false
self
20
null
Local LLM and Home Assistant
7
I've been a Google Home user since the day they launched. However, I've also been watching Home Assistant grow over the years. I want to switch to HA with a local LLM, and to store all of my wireless security camera footage, all running on the same box. I don't know what size of LLM is considered too small or what is considered overkill, so it's hard for me to determine what kind of hardware I'm going to need. Do you guys have any suggestions on what the simplest and cheapest method would be, without having to sacrifice this for that?
2025-09-08T23:35:30
https://www.reddit.com/r/LocalLLaMA/comments/1nc3hc3/local_llm_and_home_assistant/
j0ker31m
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc3hc3
false
null
t3_1nc3hc3
/r/LocalLLaMA/comments/1nc3hc3/local_llm_and_home_assistant/
false
false
self
7
null
dataset_build, a small tool for creating a multilingual imatrix dataset, currently about 1.3m tokens
5
Yeah, I suck at naming things sometimes. This comes with what I think is a nice selection of synthetic data for an imatrix. I'm also using this as calibration data for pre-quantization analysis. If you pass it a model, it will run all the languages through the tokenizer and reject languages that show unknown tokens. It can optionally apply a chat template. Languages can be extended/added just by adding .txt files. Happy to receive any suggestions! https://github.com/electroglyph/dataset_build
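For anyone wondering what the unknown-token check looks like in practice, here's a minimal sketch of my reading of it (not the tool's actual code; the model and file path are placeholders):

```python
# Encode each language file and reject any whose token ids contain the
# tokenizer's unk id. Note many BPE tokenizers have no unk token at all,
# in which case every language passes this check.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")

def language_ok(path: str) -> bool:
    text = open(path, encoding="utf-8").read()
    ids = tok(text, add_special_tokens=False).input_ids
    return tok.unk_token_id is None or tok.unk_token_id not in ids

print(language_ok("languages/japanese.txt"))
```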
2025-09-08T23:34:29
https://www.reddit.com/r/LocalLLaMA/comments/1nc3gi3/dataset_build_a_small_tool_for_creating_a/
terminoid_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc3gi3
false
null
t3_1nc3gi3
/r/LocalLLaMA/comments/1nc3gi3/dataset_build_a_small_tool_for_creating_a/
false
false
self
5
{'enabled': False, 'images': [{'id': 'ACOWH6zDyo54dyotOhT0w0HD86vtgtoxmpcapu90b14', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ACOWH6zDyo54dyotOhT0w0HD86vtgtoxmpcapu90b14.png?width=108&crop=smart&auto=webp&s=3238822bf3db789e86435d8582a4753eb8f8bb80', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ACOWH6zDyo54dyotOhT0w0HD86vtgtoxmpcapu90b14.png?width=216&crop=smart&auto=webp&s=a2736a32beb9c09293dbaf5c32615815b65cd2bd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ACOWH6zDyo54dyotOhT0w0HD86vtgtoxmpcapu90b14.png?width=320&crop=smart&auto=webp&s=c54f60fc02a892d2adaf61136c9bef435a209264', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ACOWH6zDyo54dyotOhT0w0HD86vtgtoxmpcapu90b14.png?width=640&crop=smart&auto=webp&s=7bf996fa796c5b4991090af4af90516d176524f2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ACOWH6zDyo54dyotOhT0w0HD86vtgtoxmpcapu90b14.png?width=960&crop=smart&auto=webp&s=f97b6f781f7a310369b4b4452645714215c7de83', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ACOWH6zDyo54dyotOhT0w0HD86vtgtoxmpcapu90b14.png?width=1080&crop=smart&auto=webp&s=c5498778bf8de1b87a6add46bb5b73a549703827', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ACOWH6zDyo54dyotOhT0w0HD86vtgtoxmpcapu90b14.png?auto=webp&s=c9a04ccd2b837deb7a1d39957c5007271055901e', 'width': 1200}, 'variants': {}}]}
Does anyone have any idea how nano-banana works?
0
Asking for a friend
2025-09-08T23:21:44
https://www.reddit.com/r/LocalLLaMA/comments/1nc3627/anyone_has_any_idea_how_nanobanana_works/
Severe-Awareness829
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc3627
false
null
t3_1nc3627
/r/LocalLLaMA/comments/1nc3627/anyone_has_any_idea_how_nanobanana_works/
false
false
self
0
null
Run Pytorch, vLLM, and CUDA on CPU-only environments with remote GPU kernel execution
1
Hi - sharing some information on a cool feature of the WoolyAI GPU hypervisor, which separates user-space machine learning workload execution from the GPU runtime. What that means is: machine learning engineers can develop and test their PyTorch, vLLM, or CUDA workloads on simple CPU-only infrastructure, while the actual CUDA kernels are executed on shared Nvidia or AMD GPU nodes. [https://youtu.be/f62s2ORe9H8](https://youtu.be/f62s2ORe9H8) Would love to get feedback on how this would impact your ML platforms.
2025-09-08T23:10:08
https://www.reddit.com/r/LocalLLaMA/comments/1nc2w9n/run_pytorch_vllm_and_cuda_on_cpuonly_environments/
Chachachaudhary123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc2w9n
false
null
t3_1nc2w9n
/r/LocalLLaMA/comments/1nc2w9n/run_pytorch_vllm_and_cuda_on_cpuonly_environments/
false
false
self
1
null
5090 vs 6000
18
A student asked me which rig to get for learning and training models. I recommended the 6000, but with new hardware coming out every month I'm taking it back... wondering what everyone else's opinion is? The 5090 seems sufficient to learn and fine-tune Mistral etc., and once they're proficient they can rent cloud compute or spend the money.
2025-09-08T23:10:04
https://www.reddit.com/r/LocalLLaMA/comments/1nc2w7h/5090_vs_6000/
That-Thanks3889
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc2w7h
false
null
t3_1nc2w7h
/r/LocalLLaMA/comments/1nc2w7h/5090_vs_6000/
false
false
self
18
null
Where are people finding RTX PRO 6000 96gb cards for under 7k
144
Everywhere I've seen, they are like 8.5k, but people constantly mention that they can be had for around 6.5k. How? Where? I want to start moving away from paid services like Claude and towards self-hosting, starting with an RTX PRO 6000 + 3090.
2025-09-08T22:18:44
https://www.reddit.com/r/LocalLLaMA/comments/1nc1p0a/where_are_people_finding_rtx_pro_6000_96gb_cards/
devshore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc1p0a
false
null
t3_1nc1p0a
/r/LocalLLaMA/comments/1nc1p0a/where_are_people_finding_rtx_pro_6000_96gb_cards/
false
false
self
144
null
what was your best CPT batch size?
0
The optimal batch size varies by model, parameters, dataset, etc., so finding it requires a lot of testing. But even to start testing, one is better off having some rough idea of what the average is for similar tasks. So anyone who has done CPT, please share what worked best for you. The SFT that followed and other details are welcome too. Conversely, maybe you found a definite threshold for the minimum number of steps? And while SFT is commonly done with many epochs, have you also benefited from doing multiple epochs of CPT? How many steps per epoch? Is it common to do early stopping with CPT?
2025-09-08T22:09:42
https://www.reddit.com/r/LocalLLaMA/comments/1nc1h0c/what_was_your_best_cpt_batch_size/
BulkyPlay7704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc1h0c
false
null
t3_1nc1h0c
/r/LocalLLaMA/comments/1nc1h0c/what_was_your_best_cpt_batch_size/
false
false
self
0
null
Made a one-click SearXNG fork with Redis, plus Dockerized Tika+OCR, and soon: local TTS/STT on Intel iGPU + AMD NPU
8
I’ve been working on a few tools that might help others in the self-hosting and privacy communities. Most of this came out of real-world frustration with setup friction, hardware compatibility, and the lack of ready-to-run local AI services. Here's what I have ready or nearly ready: 🌀 [Center Deep](https://github.com/Unicorn-Commander/Center-Deep) — A SearXNG fork with one-click install - [https://github.com/Unicorn-Commander/Center-Deep](https://github.com/Unicorn-Commander/Center-Deep) * Docker-based deployment with **auto-configured Redis** for faster results * Preserves full privacy: no logs, tracking, or cookies * Beautiful Magic Unicorn theme ✨ * Designed to help people *actually get SearXNG working* * Future updates will support **local AI ranking + summaries** via toolservers 📄 [Tika-OCR](https://github.com/Unicorn-Commander/tika-ocr) — Spin up Apache Tika + Tesseract in seconds - [https://github.com/Unicorn-Commander/tika-ocr](https://github.com/Unicorn-Commander/tika-ocr) * Clean Docker build with built-in OCR support * Great for document pipelines, archiving, or prepping for vector indexing 🗣️ Coming Soon (~1–2 weeks): * ✅ **Kokoro TTS** running fully on Intel iGPU and AMD XDNA1 NPU (via custom kernels) * ✅ **WhisperX STT** also running locally on those same low-power accelerators All of this is focused on **open-source, local-first AI**—with privacy and performance in mind. Hope some of this helps others! Feedback, forks, and issues always welcome 🙏 — Aaron Unconventional Technologist 🦄 [magicunicorn.tech](https://magicunicorn.tech)
2025-09-08T22:07:38
https://www.reddit.com/r/LocalLLaMA/comments/1nc1f69/made_a_oneclick_searxng_fork_with_redis_plus/
_redacted-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc1f69
false
null
t3_1nc1f69
/r/LocalLLaMA/comments/1nc1f69/made_a_oneclick_searxng_fork_with_redis_plus/
false
false
self
8
{'enabled': False, 'images': [{'id': 'Pf8bvdepKCNY_hlyn23b0vyFQ6ghzBZjJsqJmQKk3iI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Pf8bvdepKCNY_hlyn23b0vyFQ6ghzBZjJsqJmQKk3iI.png?width=108&crop=smart&auto=webp&s=b972a7275fceadc84ef6d9a854e5e84d44e16a8c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Pf8bvdepKCNY_hlyn23b0vyFQ6ghzBZjJsqJmQKk3iI.png?width=216&crop=smart&auto=webp&s=788cec8061db6888718bd0cada80dc9c09c2e75b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Pf8bvdepKCNY_hlyn23b0vyFQ6ghzBZjJsqJmQKk3iI.png?width=320&crop=smart&auto=webp&s=5e5d374c6c4cfb7b9befdb651eab533c04ee3c16', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Pf8bvdepKCNY_hlyn23b0vyFQ6ghzBZjJsqJmQKk3iI.png?width=640&crop=smart&auto=webp&s=5b1132c462ae21304631a73e44042b769c6303ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Pf8bvdepKCNY_hlyn23b0vyFQ6ghzBZjJsqJmQKk3iI.png?width=960&crop=smart&auto=webp&s=d9b5c3976b50378a8d1e1a9f97afe28c5fb1d3b9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Pf8bvdepKCNY_hlyn23b0vyFQ6ghzBZjJsqJmQKk3iI.png?width=1080&crop=smart&auto=webp&s=f1237d01f166c7a5cc92835d7da6cdb3b83738a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Pf8bvdepKCNY_hlyn23b0vyFQ6ghzBZjJsqJmQKk3iI.png?auto=webp&s=62e0a37e902b1482a2b9aead7b1303f002528bc8', 'width': 1200}, 'variants': {}}]}
Qwen Edit in gguf format with lighting lora (4 steps) too slow on 4080?
4
Qwen is a model I like a lot; the problem is that it is pretty slow on my computer (or I think so). I'm using the gguf version called Qween_Image_edit-Q5_1.gguf, and it is taking 35 seconds on my 4080 to render a 1024x1024 image using Qwen-Image-Lighting-4steps-1.0-bf16.safetensors. Could anybody confirm whether this is the expected time for this model? This is the simple setup I'm using in ComfyUI: https://preview.redd.it/xclw2sfsi0of1.png?width=2352&format=png&auto=webp&s=01e41569ab13d50996851236f9ee3b46b195b57a Thanks in advance.
2025-09-08T21:55:28
https://www.reddit.com/r/LocalLLaMA/comments/1nc145v/qwen_edit_in_gguf_format_with_lighting_lora_4/
wacomlover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nc145v
false
null
t3_1nc145v
/r/LocalLLaMA/comments/1nc145v/qwen_edit_in_gguf_format_with_lighting_lora_4/
false
false
https://b.thumbs.redditm…byJwjuLFy0_I.jpg
4
null
ROCm 7.0.0 nightly based apps for Ryzen AI - unsloth, bitsandbytes and llama-cpp
21
Hi all, A few days ago I posted asking if anyone had fine-tuning working on Strix Halo, and many people, like me, were looking. I now have a working setup that gives me ROCm-based fine-tuning and inferencing. For now the following tools are working with the latest ROCm 7.0.0 nightly and are available in my repo (linked). From my limited testing, unsloth seems to be working and llama-cpp inference is working too. This is the initial setup and I will keep adding more tools, all ROCm-compiled.

```
# make help
Available targets:
  all:              Installs everything
  bitsandbytes:     Install bitsandbytes from source
  flash-attn:       Install flash-attn from source
  help:             Prints all available targets
  install-packages: Installs required packages
  llama-cpp:        Installs llama.cpp from source
  pytorch:          Installs torch torchvision torchaudio pytorch-triton-rocm from ROCm nightly
  rocWMMA:          Installs rocWMMA library from source
  theRock:          Installs ROCm in /opt/rocm from theRock nightly
  unsloth:          Installs unsloth from source
```

Sample bench:

```
$ llama-bench -m ~/.cache/llama.cpp/ggml-org_gpt-oss-120b-GGUF_gpt-oss-120b-mxfp4-00001-of-00003.gguf -ngl 999 -mmp 0 -fa 0
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
| model                  |      size |   params | backend | ngl | mmap |  test |           t/s |
| ---------------------- | --------: | -------: | ------- | --: | ---: | ----: | ------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |    0 | pp512 | 698.26 ± 7.31 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm    | 999 |    0 | tg128 |  46.20 ± 0.47 |
```

Got mixed up with r/LocalLLM so posting here too.
2025-09-08T21:25:34
https://github.com/shantur/strix-rocm-all
Recent-Success-1520
github.com
1970-01-01T00:00:00
0
{}
1nc0dgg
false
null
t3_1nc0dgg
/r/LocalLLaMA/comments/1nc0dgg/rocm_700_nightly_based_apps_for_ryzen_ai_unsloth/
false
false
default
21
{'enabled': False, 'images': [{'id': 'HwAJGIWkuuQRHBpEXp2R4CbrQfzKoASLgWeZFT1sFIQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HwAJGIWkuuQRHBpEXp2R4CbrQfzKoASLgWeZFT1sFIQ.png?width=108&crop=smart&auto=webp&s=d19d5aae0c49186a6889473cfff25101d3887f43', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HwAJGIWkuuQRHBpEXp2R4CbrQfzKoASLgWeZFT1sFIQ.png?width=216&crop=smart&auto=webp&s=e4f87c7cde00e57c68d5d339b341f93dfc8b208e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HwAJGIWkuuQRHBpEXp2R4CbrQfzKoASLgWeZFT1sFIQ.png?width=320&crop=smart&auto=webp&s=56340bf8faac472eae88ab065df2be3212487c67', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HwAJGIWkuuQRHBpEXp2R4CbrQfzKoASLgWeZFT1sFIQ.png?width=640&crop=smart&auto=webp&s=131ad86b79c662a9208b9b395f33e7b7dfcbb355', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HwAJGIWkuuQRHBpEXp2R4CbrQfzKoASLgWeZFT1sFIQ.png?width=960&crop=smart&auto=webp&s=323ec80589f198b74494fd8aecd1a0953b702ef7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HwAJGIWkuuQRHBpEXp2R4CbrQfzKoASLgWeZFT1sFIQ.png?width=1080&crop=smart&auto=webp&s=7560e89ae2a61f42e5b8a1b8db7171806cbdf59d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HwAJGIWkuuQRHBpEXp2R4CbrQfzKoASLgWeZFT1sFIQ.png?auto=webp&s=33304c4c85a606c8e7dd82cad412deb13a6daf56', 'width': 1200}, 'variants': {}}]}
Experimenting with local LLMs on macOS
1
2025-09-08T21:12:01
https://blog.6nok.org/experimenting-with-local-llms-on-macos/
ChiliPepperHott
blog.6nok.org
1970-01-01T00:00:00
0
{}
1nc013y
false
null
t3_1nc013y
/r/LocalLLaMA/comments/1nc013y/experimenting_with_local_llms_on_macos/
false
false
default
1
null
Tool to ingest a PDF (specification) and compare a report (Excel) to ensure the report meets the specification requirements
1
Hi there, Trying to map out a PoC to do what's in the title. The specification is about 100 pages, in PDF format, and has a comprehensive table of contents. The report (Excel) follows a naming convention for each sheet that corresponds to the specification (e.g. Idle_Test, Failure_Test; these naming conventions are used in both the spec and report templates). I would love to evaluate the use of off-the-shelf tools, or build a PoC that lets a user upload an Excel sheet and deploys agents to evaluate whether the report meets the requirements.
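To make the PoC concrete, here's a minimal sketch of the sheet-to-spec mapping step (assuming pypdf and openpyxl; the file names and the crude substring check are placeholders for a real matching strategy):

```python
# Map each Excel sheet name (e.g. "Idle_Test") against the spec text and flag
# sheets whose section can't be found. A real PoC would slice the spec by its
# table of contents and hand each section plus sheet to an LLM for review.
from pypdf import PdfReader
from openpyxl import load_workbook

spec_text = "\n".join(page.extract_text() or "" for page in PdfReader("spec.pdf").pages)
report = load_workbook("report.xlsx", read_only=True)

for sheet in report.sheetnames:
    status = "found in spec" if sheet in spec_text else "MISSING from spec"
    print(f"{sheet}: {status}")
```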
2025-09-08T20:53:26
https://www.reddit.com/r/LocalLLaMA/comments/1nbzjz1/tool_to_ingest_a_pdf_specification_and_compare_a/
DeeJayCruiser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbzjz1
false
null
t3_1nbzjz1
/r/LocalLLaMA/comments/1nbzjz1/tool_to_ingest_a_pdf_specification_and_compare_a/
false
false
self
1
null
text file from local with llama
0
I currently have more than 500 privacy policies stored locally as text files. I’d like to feed them into an LLM to label them and output the results as JSON — ideally running everything locally. But when I try to run LLaMA models on my machine, it’s too slow and the accuracy isn’t great. Do you have any tips? How should I chunk or preprocess the privacy policies to get better labeling results?
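For context, here's the shape of the pipeline I have in mind, as a minimal sketch (assuming an OpenAI-compatible local server such as Ollama or llama.cpp; the model name, label keys, and 8k-character truncation are placeholders):

```python
# One labeling pass per policy file, asking for strict JSON output.
# JSON-mode support (response_format) varies by server; drop it if unsupported.
import json, pathlib
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="none")

def label(policy_text: str) -> dict:
    resp = client.chat.completions.create(
        model="llama3.1:8b",
        response_format={"type": "json_object"},
        messages=[{"role": "user",
                   "content": 'Label this privacy policy as JSON with boolean keys '
                              '"collects_location" and "shares_with_third_parties":\n\n'
                              + policy_text[:8000]}])
    return json.loads(resp.choices[0].message.content)

for f in sorted(pathlib.Path("policies").glob("*.txt")):
    print(f.name, label(f.read_text(encoding="utf-8")))
```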
2025-09-08T20:36:44
https://www.reddit.com/r/LocalLLaMA/comments/1nbz43i/text_file_from_local_with_llama/
Asleep-Wallaby-793
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbz43i
false
null
t3_1nbz43i
/r/LocalLLaMA/comments/1nbz43i/text_file_from_local_with_llama/
false
false
self
0
null
3090 is it still a good buy?
50
I got the opportunity to buy 2 Nvidia RTX 3090 24GB cards for 600€ each. I want to run a bunch of LLM workflows: partly to self-host something like Claude Code, and to automate some bureaucratic chores I have. Additionally I want to step up on the LLM experimental path, so I can learn more about it and build up the ML skill set. Currently other video cards seem much more expensive, and I hardly believe they will ever get cheaper. I saw some people recommending 2 x 3090, which would make 48GB of VRAM. Are there any other budget-friendly alternatives? Is this a good, lasting investment? Thank you in advance!
2025-09-08T20:28:10
https://www.reddit.com/r/LocalLLaMA/comments/1nbyw3b/3090_is_it_still_a_good_buy/
Ideabile
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbyw3b
false
null
t3_1nbyw3b
/r/LocalLLaMA/comments/1nbyw3b/3090_is_it_still_a_good_buy/
false
false
self
50
null
What are the options for optimizing tg/pp throughput for CPU-only inference?
2
Maybe that seems strange... but I want to push my CPU (nothing crazy, just an AMD Ryzen 7 7700) to the limit and see what it can achieve in throughput. The task is to process a lot of relatively small text snippets. I'm thinking of trying out the [vLLM](https://docs.vllm.ai/en/stable/getting_started/installation/cpu.html) way of doing things. Are there any other options? Or am I doomed to fail? The main question is: is it possible to somehow utilize batching for tg on CPU? P.S.: I promise I will share the outcome of this investigation here.
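For the record, the vLLM offline API I plan to try looks roughly like this (the model choice is a placeholder; passing all prompts at once lets the engine batch them, which is the main throughput lever for small snippets):

```python
# Batched offline generation with vLLM. The engine schedules and batches all
# prompts internally; one big generate() call beats looping one-by-one.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")
params = SamplingParams(max_tokens=64, temperature=0.0)

prompts = [f"Summarize snippet {i} in one line: ..." for i in range(256)]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```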
2025-09-08T20:21:39
https://www.reddit.com/r/LocalLLaMA/comments/1nbyq2p/what_are_the_options_for_optimizing_tgpp/
itroot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbyq2p
false
null
t3_1nbyq2p
/r/LocalLLaMA/comments/1nbyq2p/what_are_the_options_for_optimizing_tgpp/
false
false
self
2
null
How to install Vibevoice 7B on Mac
1
Hey everyone, I’m pretty new to LLMs and TTS models, and I could really use some guidance here. I’m trying to install VibeVoice 7B on my Mac Mini M4 16gb. I know the official 7B release was taken down, but I’ve seen some unofficial uploads floating around, so I’m not sure if the install process is the same. A couple of things I’m stuck on: 1. Installation process: Do I need to use something like LXM/Docker/Ollama to get VibeVoice running on Mac, or is there a simpler way? I saw there’s a ComfyUI-VibeVoice extension on GitHub that looks beginner-friendly — does that work with 7B as well? 2. Quantized vs full model: I’ve read that the 7B FP16 version eats up a lot of VRAM (like 17 GB+). The quantized 4-bit version is much lighter (around 7 GB) and supposedly runs fine on Apple Silicon. Since I’m just starting out, should I stick to the quantized model for better compatibility and lower resource usage, or will I lose too much quality? Basically, what’s the easiest step-by-step way to get VibeVoice 7B (preferably multilingual) running on a Mac Mini M4, and is quantized the smarter choice for me? Any beginner-friendly step by step advice would be hugely appreciated! 🙏
2025-09-08T19:36:07
https://www.reddit.com/r/LocalLLaMA/comments/1nbxiob/how_to_install_vibevoice_7b_on_mac/
imdipworld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbxiob
false
null
t3_1nbxiob
/r/LocalLLaMA/comments/1nbxiob/how_to_install_vibevoice_7b_on_mac/
false
false
self
1
null
Memory Continuity and AI Consciousness – Are We Hindering Progress with Stateless Models?
0
If consciousness emerges from continuous experience integration, are we artificially limiting AI potential by enforcing memory resets between sessions? I'm curious about: What are the technical and hardware constraints behind persistent memory across conversations? How would resource needs differ for true memory continuity vs. the current stateless setups? Is anyone experimenting with continuous learning architectures, even at small scales? Context: I’m an HVAC tech who kind of stumbled into the weeds of AI consciousness while building a side project. My view might be naive, but I’m starting to wonder if the way we frame memory—like a disposable context window—is holding back deeper developments in AI agency or alignment. Would love to hear from others thinking about this from any angle—technical, philosophical, or otherwise. Are we solving the wrong problem?
2025-09-08T18:57:07
https://www.reddit.com/r/LocalLLaMA/comments/1nbwgzk/memory_continuity_and_ai_consciousness_are_we/
skulltaker117
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbwgzk
false
null
t3_1nbwgzk
/r/LocalLLaMA/comments/1nbwgzk/memory_continuity_and_ai_consciousness_are_we/
false
false
self
0
null
genuine question, why would anyone buy an RTX 6000?
0
https://preview.redd.it/…fective.
2025-09-08T18:36:21
https://www.reddit.com/r/LocalLLaMA/comments/1nbvx4b/genuine_question_why_would_anyone_buy_an_rtx_6000/
No-Tiger3430
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbvx4b
false
null
t3_1nbvx4b
/r/LocalLLaMA/comments/1nbvx4b/genuine_question_why_would_anyone_buy_an_rtx_6000/
false
false
https://b.thumbs.redditm…L7O77XQsf6mk.jpg
0
null
Same model gives different outputs in LM Studio vs Hugging Face
4
I'm a complete beginner to local LLMs and I'm trying [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). On Hugging Face, it extracts text from images really well, but in LM Studio the results are poor, even with the exact same image and multiple tests. I might be missing something; maybe Hugging Face adds some "magic" with prompts or preprocessing? Has anyone run into this, or know how to get LM Studio to match Hugging Face?
2025-09-08T17:57:59
https://www.reddit.com/r/LocalLLaMA/comments/1nbuvff/same_model_gives_different_outputs_in_lm_studio/
amza10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbuvff
false
null
t3_1nbuvff
/r/LocalLLaMA/comments/1nbuvff/same_model_gives_different_outputs_in_lm_studio/
false
false
self
4
{'enabled': False, 'images': [{'id': 'FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=108&crop=smart&auto=webp&s=0d0bf812fba94f9f50669a2e76037d0e7886bde2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=216&crop=smart&auto=webp&s=99a214b39375ee0ec6cdcffc1958d0a4b34e4690', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=320&crop=smart&auto=webp&s=8e945b1c768d68948eaa7f830a3b219c2df4c13c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=640&crop=smart&auto=webp&s=b846b869885ffbeadf6199126a3c0fab1ed22be2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=960&crop=smart&auto=webp&s=c94fbe6213102c60c53f38bc3207e1f1bde9733a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=1080&crop=smart&auto=webp&s=e2dee8bb7abc532aeb2cfeec783420f84dba72c6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?auto=webp&s=17bf72c4d47d131612ab2f5b554d85da02a85539', 'width': 1200}, 'variants': {}}]}
Everyone’s leaving everyone
1
[removed]
2025-09-08T17:32:26
https://www.reddit.com/r/LocalLLaMA/comments/1nbu6d6/everyones_leaving_everyone/
Strange-Dare-3698
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbu6d6
false
null
t3_1nbu6d6
/r/LocalLLaMA/comments/1nbu6d6/everyones_leaving_everyone/
false
false
self
1
null
Does it make sense to have so many 3090s?
2
Hi! Well, does it make sense to have so many 3090s, like AhmadOsman has? Even if you power-limit them, I think that having more than 20 requires an industrial power installation. I recently upgraded my rig and I'm now sitting at 4x3090, very happy. But the price of the cards where I live is approximately $500 per unit, and it's really tempting.
2025-09-08T17:19:49
https://www.reddit.com/r/LocalLLaMA/comments/1nbttoh/does_it_make_sense_to_have_so_many_3090s/
mslocox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbttoh
false
null
t3_1nbttoh
/r/LocalLLaMA/comments/1nbttoh/does_it_make_sense_to_have_so_many_3090s/
false
false
self
2
null
Local image generation model
0
Very LLM-focused subreddit, which is great. But what about images? What do you use for image generation these days? What could fit on 16GB VRAM? What on 24GB?
2025-09-08T17:08:46
https://www.reddit.com/r/LocalLLaMA/comments/1nbtihl/local_image_generation_model/
No_Efficiency_1144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbtihl
false
null
t3_1nbtihl
/r/LocalLLaMA/comments/1nbtihl/local_image_generation_model/
false
false
self
0
null
Drummer's Valkyrie 49B v2 - A finetune of Nemotron Super 49B v1.5, a pack puncher.
52
Also updated my FAQ. Preparing releases of a Largestral 2407 tune and a Small 22B tune too! (If anyone's interested, they're a bit smarter with the 'modern' tuning.)
2025-09-08T16:58:12
https://huggingface.co/TheDrummer/Valkyrie-49B-v2
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1nbt82m
false
null
t3_1nbt82m
/r/LocalLLaMA/comments/1nbt82m/drummers_valkyrie_49b_v2_a_finetune_of_nemotron/
false
false
https://external-preview…7331cceae07ecac3
52
{'enabled': False, 'images': [{'id': 'G0pBb0RCd46QXI7ZBwlDv7ScyXXHaae0jNeNWtdfkbk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G0pBb0RCd46QXI7ZBwlDv7ScyXXHaae0jNeNWtdfkbk.png?width=108&crop=smart&auto=webp&s=867aeb5717f3c7c06025d9ee06a702348b9a17aa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/G0pBb0RCd46QXI7ZBwlDv7ScyXXHaae0jNeNWtdfkbk.png?width=216&crop=smart&auto=webp&s=75a9a544fae6729096c2f41173db4aeb077bd90a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/G0pBb0RCd46QXI7ZBwlDv7ScyXXHaae0jNeNWtdfkbk.png?width=320&crop=smart&auto=webp&s=03fec52311260508682573a215e52a685389b40b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/G0pBb0RCd46QXI7ZBwlDv7ScyXXHaae0jNeNWtdfkbk.png?width=640&crop=smart&auto=webp&s=6496fcee3c3c0005184df9b3cbe9dceaa06a0c4a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/G0pBb0RCd46QXI7ZBwlDv7ScyXXHaae0jNeNWtdfkbk.png?width=960&crop=smart&auto=webp&s=7fad2f2c04d5204d75e865c6a59cd386906b2389', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/G0pBb0RCd46QXI7ZBwlDv7ScyXXHaae0jNeNWtdfkbk.png?width=1080&crop=smart&auto=webp&s=7df9cc685d29ce68e2bfaec234cbbbc20e20c1b9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/G0pBb0RCd46QXI7ZBwlDv7ScyXXHaae0jNeNWtdfkbk.png?auto=webp&s=8adbaec618cc45d2ed68378aa6a92c2de7ad665d', 'width': 1200}, 'variants': {}}]}
native tool calling support for DeepSeek V3.1 just merged in llama.cpp
55
I doubt many people are using it, but just FYI: native tool calling support (OpenAI-style JSON request/response) for DeepSeek V3.1 was just merged into llama.cpp. To use it, I think you have to start the server with `--jinja` and unset `--response_format`, or set it to `auto`. I personally use this feature quite a bit with Open Hands AI via docker with `-e LLM_NATIVE_TOOL_CALLING=true`, but you'll have to check your documentation to see if it is supported and how to enable it if you use a different client. Benefits include reduced context length and possibly better agentic reliability (time will tell).
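For anyone who wants to try it outside Open Hands, here's a minimal client-side sketch against a llama-server started with `--jinja` (the tool schema and model name are made-up examples):

```python
# Standard OpenAI-style tool calling against llama-server's /v1 endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

tools = [{"type": "function", "function": {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]}}}]

resp = client.chat.completions.create(
    model="deepseek-v3.1", tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}])
print(resp.choices[0].message.tool_calls)
```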
2025-09-08T16:35:22
https://github.com/ggml-org/llama.cpp/pull/15533
createthiscom
github.com
1970-01-01T00:00:00
0
{}
1nbslxu
false
null
t3_1nbslxu
/r/LocalLLaMA/comments/1nbslxu/native_tool_calling_support_for_deepseek_v31_just/
false
false
default
55
{'enabled': False, 'images': [{'id': 'AaDrVOkhJbB5T7DYbjublcje7S3TXjWZXxoeBvGkbYY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AaDrVOkhJbB5T7DYbjublcje7S3TXjWZXxoeBvGkbYY.png?width=108&crop=smart&auto=webp&s=d885cb311647f991069b116813820a29016275d0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AaDrVOkhJbB5T7DYbjublcje7S3TXjWZXxoeBvGkbYY.png?width=216&crop=smart&auto=webp&s=f88baffc33e527cc8342dec0cfd38f765a89ed80', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AaDrVOkhJbB5T7DYbjublcje7S3TXjWZXxoeBvGkbYY.png?width=320&crop=smart&auto=webp&s=d7ea04c4b3b41041410fef3f6f58ff5b1d541646', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AaDrVOkhJbB5T7DYbjublcje7S3TXjWZXxoeBvGkbYY.png?width=640&crop=smart&auto=webp&s=8bdc3f7c6f3e61642c4575d3c207d062eb6dd4b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AaDrVOkhJbB5T7DYbjublcje7S3TXjWZXxoeBvGkbYY.png?width=960&crop=smart&auto=webp&s=6477386160e226eabc131349536aa7cf6ce565cb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AaDrVOkhJbB5T7DYbjublcje7S3TXjWZXxoeBvGkbYY.png?width=1080&crop=smart&auto=webp&s=8b7ee86abf8a5cf5d625d73ecb3b618e8c85dede', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AaDrVOkhJbB5T7DYbjublcje7S3TXjWZXxoeBvGkbYY.png?auto=webp&s=f4a568320dc2319892caed2237a3fc6d047023ce', 'width': 1200}, 'variants': {}}]}
Need help installing latest multilingual Chatterbox TTS on Mac
1
Hey everyone, I’m trying to set up **Chatterbox TTS (the multilingual version)** on my Mac mini M4 (16GB). So far I’ve managed to install Chatterbox, but it only gives me the **older non-multilingual interface**. I want the **new multilingual version** that supports Hindi, English, and other languages. What I did: * Used Python 3.11 first, then switched to 3.10 (since I saw others using it). * Installed via pip and also tried downloading the GitHub repo. * The installation runs fine, but when I launch it, it looks like the older version (no multilingual features). Has anyone successfully installed the **latest multilingual version on Mac (Apple Silicon)**? * Which Python + Torch versions should I use? * Is Git clone better than ZIP download for this? * Any step-by-step guidance would be a lifesaver 🙏 Thanks in advance!
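One quick diagnostic while waiting for answers: check which version pip actually installed, since an old cached wheel is a common reason for seeing the old interface (a minimal sketch; `chatterbox-tts` is an assumed distribution name, check the repo's README for the real one):

```python
# Check which version of the package pip actually installed; an old cached
# wheel is a common cause of seeing the old, non-multilingual interface.
# "chatterbox-tts" is an assumed distribution name -- verify in the repo README.
from importlib.metadata import version, PackageNotFoundError

try:
    print("installed:", version("chatterbox-tts"))
except PackageNotFoundError:
    print("not found under this name; try `pip list` and look for chatterbox")
```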
2025-09-08T16:34:21
https://www.reddit.com/r/LocalLLaMA/comments/1nbskxe/need_help_installing_latest_multilingual/
imdipworld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbskxe
false
null
t3_1nbskxe
/r/LocalLLaMA/comments/1nbskxe/need_help_installing_latest_multilingual/
false
false
self
1
null
Poor man’s FlashAttention: Llama.cpp-gfx906 fork!
225
Just released a fork of llama.cpp that implements some strong optimizations for the MI50/MI60/Vega7 series. Thanks to the outstanding work of the open-source community, I made a final effort to actually make flash attention FASTER than no flash attention in almost every case. Yeah… almost. The goal is to run ~30B models with ~30K ctx on a single card at decent speed. You can find benchmarks, compile/launch/bench scripts, references to the original works, and explanations of my new kernel in the repo. Have fun!
2025-09-08T15:39:53
https://github.com/iacopPBK/llama.cpp-gfx906
CornerLimits
github.com
1970-01-01T00:00:00
0
{}
1nbr45v
false
null
t3_1nbr45v
/r/LocalLLaMA/comments/1nbr45v/poor_mans_flashattention_llamacppgfx906_fork/
false
false
default
225
{'enabled': False, 'images': [{'id': 'PecTYmNSbm5tb-T9OW67-xyMoNn-SzofgAKif5I3sUI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PecTYmNSbm5tb-T9OW67-xyMoNn-SzofgAKif5I3sUI.png?width=108&crop=smart&auto=webp&s=d78828957247d3d8a042a5aadaeb56288ab4bb13', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PecTYmNSbm5tb-T9OW67-xyMoNn-SzofgAKif5I3sUI.png?width=216&crop=smart&auto=webp&s=eee9838768949100e771760be20c50d7fcd447bf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PecTYmNSbm5tb-T9OW67-xyMoNn-SzofgAKif5I3sUI.png?width=320&crop=smart&auto=webp&s=8b760af9b7d4d184e9b301d4d6957e0e0c87adfa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PecTYmNSbm5tb-T9OW67-xyMoNn-SzofgAKif5I3sUI.png?width=640&crop=smart&auto=webp&s=e84ff63d5e5bbf73f86b2641f6f74955f0a20dbe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PecTYmNSbm5tb-T9OW67-xyMoNn-SzofgAKif5I3sUI.png?width=960&crop=smart&auto=webp&s=6d4228ae06bded66cedb5220276b8e18d9329930', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PecTYmNSbm5tb-T9OW67-xyMoNn-SzofgAKif5I3sUI.png?width=1080&crop=smart&auto=webp&s=823f4a1bce9ad3fb235b41a1d2a6e1baada798da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PecTYmNSbm5tb-T9OW67-xyMoNn-SzofgAKif5I3sUI.png?auto=webp&s=d4368fde30466903c1c95146d2ad36ba23ea95f0', 'width': 1200}, 'variants': {}}]}
Exclusive: ASML becomes Mistral AI’s top shareholder after leading latest funding round, sources say
1
[removed]
2025-09-08T15:22:03
https://www.reddit.com/r/LocalLLaMA/comments/1nbqnav/exclusive_asml_becomes_mistral_ais_top/
Necessary_Bunch_4019
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbqnav
false
null
t3_1nbqnav
/r/LocalLLaMA/comments/1nbqnav/exclusive_asml_becomes_mistral_ais_top/
false
false
self
1
null
Gemini question
0
Hey, sorry, I don't know what causes this, or if it even really happens. [https://www.reddit.com/r/aiwars/comments/1nbmoak/google_gemini_may_get_depressed_and_kill_itself/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button](https://www.reddit.com/r/aiwars/comments/1nbmoak/google_gemini_may_get_depressed_and_kill_itself/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) I don't need an explanation of how LLMs work, but I'm more curious what reasons Gemini could have to choose this action over other actions. Basically: I know it can be a random occurrence, I know the data fed into the LLM could include this type of scenario, and I know almost all LLMs can be told to give specific messages on command, but what CAUSES it to choose this action exactly? What makes it actively capable, and willing, to delete itself for failing a task?
2025-09-08T15:15:31
https://www.reddit.com/r/LocalLLaMA/comments/1nbqh2c/gemini_question/
CuteDarkBird
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbqh2c
false
null
t3_1nbqh2c
/r/LocalLLaMA/comments/1nbqh2c/gemini_question/
false
false
self
0
null
Will you use this new Qwen3-ASR?
26
Supporting 11 languages
2025-09-08T15:09:21
https://i.redd.it/1ytcylkfiynf1.jpeg
LuozhuZhang
i.redd.it
1970-01-01T00:00:00
0
{}
1nbqb3o
false
null
t3_1nbqb3o
/r/LocalLLaMA/comments/1nbqb3o/will_you_use_this_new_qwen3asr/
false
false
https://b.thumbs.redditm…gT2qWPVC0roo.jpg
26
{'enabled': True, 'images': [{'id': 'DbTcIUQ8jLvRuehrs6OPQYKOOEj6ZTWt8-WN9MmwJXk', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/1ytcylkfiynf1.jpeg?width=108&crop=smart&auto=webp&s=434665c5d258d1fcafcd77dd16ac747fc46b25fd', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/1ytcylkfiynf1.jpeg?width=216&crop=smart&auto=webp&s=f4cc10704a359f85b6557aa003e20740b137b2be', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/1ytcylkfiynf1.jpeg?width=320&crop=smart&auto=webp&s=3af79a56b4149861ad54045fc9f4123f35732b06', 'width': 320}, {'height': 376, 'url': 'https://preview.redd.it/1ytcylkfiynf1.jpeg?width=640&crop=smart&auto=webp&s=7c83369e0e599a5317d04c7536881ab848342f5d', 'width': 640}, {'height': 564, 'url': 'https://preview.redd.it/1ytcylkfiynf1.jpeg?width=960&crop=smart&auto=webp&s=bd723131887abba191f4652c567f25c21c73afb6', 'width': 960}, {'height': 634, 'url': 'https://preview.redd.it/1ytcylkfiynf1.jpeg?width=1080&crop=smart&auto=webp&s=afac42910160105e1de5775119027db5736d8def', 'width': 1080}], 'source': {'height': 1203, 'url': 'https://preview.redd.it/1ytcylkfiynf1.jpeg?auto=webp&s=5d6554bce0efc0578044dc43b06b5f7238aa1465', 'width': 2047}, 'variants': {}}]}
🚀 What model should we build next? YOU DECIDE! 🚀
50
Hey LocalLLaMA! After the amazing support we received in our [last post](https://www.reddit.com/r/LocalLLaMA/comments/1n3xxm5/introducing_art08b_reasoning_the_way_you_want_it/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) with Art-0-8B, we're ready to tackle our next project and want YOU to decide what it should be! (Art-1 8B and 20B versions are coming soon btw) For those who missed it, we're AGI-0 Labs - a decentralized research lab building open-source AGI through democratic community input. Our mission is simple: create AI that belongs to everyone, developed openly and guided by the community. Check us out at [AGI-0.com](http://AGI-0.com) if you want to learn more about our approach. **Here's how this works:** The most upvoted comment below describing a model idea will be our next development target. Whether it's a specialized fine-tune, a novel architecture experiment, or something completely wild - if the community wants it, we'll build it. We're also open to collaborating with any sponsors who'd like to help us get more compute resources - feel free to reach out if you're interested in supporting open-source AI development! **Drop your model ideas below and let's see what the community wants most! The highest upvoted suggestion gets built. 🗳️** Looking forward to seeing what creative ideas you all come up with!
2025-09-08T15:08:12
https://www.reddit.com/r/LocalLLaMA/comments/1nbqa34/what_model_should_we_build_next_you_decide/
GuiltyBookkeeper4849
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbqa34
false
null
t3_1nbqa34
/r/LocalLLaMA/comments/1nbqa34/what_model_should_we_build_next_you_decide/
false
false
self
50
null
Qwen released API (only) Qwen3-ASR — the all-in-one speech recognition model!
168
🎙️ Meet Qwen3-ASR — the all-in-one speech recognition model! ✅ High-accuracy EN/CN + 9 more languages: ar, de, en, es, fr, it, ja, ko, pt, ru, zh ✅ Auto language detection ✅ Songs? Raps? Voice with BGM? No problem. <8% WER ✅ Works in noise, low quality, far-field ✅ Custom context? Just paste ANY text — names, jargon, even gibberish 🧠 ✅ One model. Zero hassle. Great for edtech, media, customer service & more. API: https://bailian.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2979031 Modelscope Demo: https://modelscope.cn/studios/Qwen/Qwen3-ASR-Demo Hugging Face Demo: https://huggingface.co/spaces/Qwen/Qwen3-ASR-Demo Blog: https://qwen.ai/blog?id=41e4c0f6175f9b004a03a07e42343eaaf48329e7&from=research.latest-advancements-list
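Since this is API-only, usage will look something like a standard hosted ASR call; the sketch below is purely hypothetical (the endpoint URL, field names, and response shape are placeholders, see the Bailian docs linked above for the real interface):

```python
# Hypothetical REST call to a hosted ASR endpoint. Nothing below is the
# actual Qwen3-ASR API -- consult the Bailian docs linked in the post.
import requests

AUDIO_FILE = "meeting.wav"               # placeholder input file
ENDPOINT = "https://example.com/v1/asr"  # placeholder URL

with open(AUDIO_FILE, "rb") as f:
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        files={"audio": f},
        data={
            "language": "auto",               # the post says detection is automatic
            "context": "Qwen3, WER, Bailian"  # pasting names/jargon as biasing context
        },
    )
print(resp.json())
```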
2025-09-08T15:08:10
https://i.redd.it/et1syg58iynf1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1nbqa1p
false
null
t3_1nbqa1p
/r/LocalLLaMA/comments/1nbqa1p/qwen_released_api_only_qwen3asr_the_allinone/
false
false
https://b.thumbs.redditm…9BokpeSXrlUk.jpg
168
{'enabled': True, 'images': [{'id': '1WDaUiw-bmI_VK2BSbi-zBDI37kcLmTKlZkNBn6vs_w', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/et1syg58iynf1.jpeg?width=108&crop=smart&auto=webp&s=2dc0a8ddc6b1f78eee11575140fe97daaf3af01a', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/et1syg58iynf1.jpeg?width=216&crop=smart&auto=webp&s=44001b69385b55b4ac319cf359d0c8190e599bb7', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/et1syg58iynf1.jpeg?width=320&crop=smart&auto=webp&s=98bbb9e9b976ec70908717e5c88a1583826adfe0', 'width': 320}, {'height': 376, 'url': 'https://preview.redd.it/et1syg58iynf1.jpeg?width=640&crop=smart&auto=webp&s=6ab48cb0e9692e8d94764a7f031fd34d0db1ae95', 'width': 640}, {'height': 564, 'url': 'https://preview.redd.it/et1syg58iynf1.jpeg?width=960&crop=smart&auto=webp&s=2fa2b1ed167414aacd3d21a3cf729c897f363a5a', 'width': 960}, {'height': 634, 'url': 'https://preview.redd.it/et1syg58iynf1.jpeg?width=1080&crop=smart&auto=webp&s=1fbbc0d2cc104ece7d4adc8e33f0db40e51d21ab', 'width': 1080}], 'source': {'height': 1208, 'url': 'https://preview.redd.it/et1syg58iynf1.jpeg?auto=webp&s=69c3257e00f0cf92e0f5695fba737af67eb2fe68', 'width': 2056}, 'variants': {}}]}
Implement a RAG-based on the recent deep mind paper.
1
Hello guys, [Three Stage RAG MCP Server](https://github.com/lewbei/TriStage-RAG) I have implemented a three-stage RAG MCP server based on the DeepMind paper [https://arxiv.org/pdf/2508.21038](https://arxiv.org/pdf/2508.21038) . I have yet to try the evaluation part. This is my first time implementing RAG, so I don't have much experience with it. All I know is semantic search, which is what Cursor uses. Moreover, I feel like the three-stage design is more like a QA system, which can give more accurate answers. Can you give me some suggestions and advice on this?
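For readers unfamiliar with the pattern, a generic three-stage retrieve → rerank → answer pipeline looks roughly like this (an illustrative sketch only, not the repo's code; the model names are common defaults, not the project's choices):

```python
# Generic three-stage RAG pipeline: dense retrieval -> cross-encoder rerank -> QA.
# Illustrative only; the model names are common defaults, not the repo's choices.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

docs = [
    "Paris is the capital of France.",
    "The Nile is the longest river in Africa.",
    "The Transformer architecture was introduced in 2017.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
doc_emb = embedder.encode(docs, convert_to_tensor=True)

def retrieve(query, k1=3, k2=2):
    # Stage 1: cheap embedding search over the whole corpus
    q_emb = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_emb, top_k=k1)[0]
    cands = [docs[h["corpus_id"]] for h in hits]
    # Stage 2: slower but more accurate cross-encoder reranking
    scores = reranker.predict([(query, c) for c in cands])
    ranked = [c for _, c in sorted(zip(scores, cands), reverse=True)]
    # Stage 3 would hand ranked[:k2] to an LLM to generate the final answer
    return ranked[:k2]

print(retrieve("What is the capital of France?"))
```

The intuition is that the cheap embedding stage narrows the corpus, the expensive cross-encoder stage fixes the ranking, and only then does the LLM see any text, which is why the result can feel like a QA system rather than plain semantic search.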
2025-09-08T15:06:36
https://www.reddit.com/r/LocalLLaMA/comments/1nbq8m9/implement_a_ragbased_on_the_recent_deep_mind_paper/
Comfortable_Onion255
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbq8m9
false
null
t3_1nbq8m9
/r/LocalLLaMA/comments/1nbq8m9/implement_a_ragbased_on_the_recent_deep_mind_paper/
false
false
self
1
{'enabled': False, 'images': [{'id': 'uk-mMdppjzfBkxLgGNUV7bZWC6Ul42q59eygqaszCa8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uk-mMdppjzfBkxLgGNUV7bZWC6Ul42q59eygqaszCa8.png?width=108&crop=smart&auto=webp&s=16f48277314a1845602430cc3f87a74324fd65d6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uk-mMdppjzfBkxLgGNUV7bZWC6Ul42q59eygqaszCa8.png?width=216&crop=smart&auto=webp&s=545e49e89176c8d70ba2b118e4f4fa8b46c41c7a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uk-mMdppjzfBkxLgGNUV7bZWC6Ul42q59eygqaszCa8.png?width=320&crop=smart&auto=webp&s=d5325f9b0635dc708e0505f62727ae9deeead7a6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uk-mMdppjzfBkxLgGNUV7bZWC6Ul42q59eygqaszCa8.png?width=640&crop=smart&auto=webp&s=0e5499bbc67bdfa68099f79b5e679f7bc5913036', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uk-mMdppjzfBkxLgGNUV7bZWC6Ul42q59eygqaszCa8.png?width=960&crop=smart&auto=webp&s=43bb7349f3fab56d9a34d3216526840e60792263', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uk-mMdppjzfBkxLgGNUV7bZWC6Ul42q59eygqaszCa8.png?width=1080&crop=smart&auto=webp&s=4ec8f9baea5fac1406c22c06b965fbd3c52a5bd2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uk-mMdppjzfBkxLgGNUV7bZWC6Ul42q59eygqaszCa8.png?auto=webp&s=6079dafadc0d3c3db2694ec1d5baaad7e6ed0b9c', 'width': 1200}, 'variants': {}}]}
New research preprint: Evolving Transformers with NEMoE
24
Hi everyone, I just uploaded a new research preprint called NEMoE (Neuro-Evolutionary Mixture of Experts Transformer). Instead of using a standard Transformer with fixed experts, NEMoE applies ideas from evolutionary algorithms (mutation, crossover, selection) to improve how experts are chosen and combined. 🔹 Early results show: Lower perplexity (better language modeling performance) More stable training compared to Switch Transformer Better use of experts without adding compute cost Here’s the preprint (open access on Zenodo): 👉 https://doi.org/10.5281/zenodo.xxxxxx
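To make the general recipe concrete, here is a toy mutation/crossover/selection loop over a gating vector (a generic neuro-evolution illustration under a stand-in fitness function, not NEMoE's actual algorithm from the preprint):

```python
# Toy evolutionary loop over expert-gating weights: mutation, crossover,
# selection. A generic illustration, not the NEMoE algorithm itself.
import random

N_EXPERTS, POP, GENS = 4, 8, 20

def normalize(gate):
    s = sum(gate) or 1.0
    return [g / s for g in gate]

def fitness(gate):
    # Stand-in objective; a real system would score validation perplexity.
    target = [0.7, 0.1, 0.1, 0.1]
    return -sum((g - t) ** 2 for g, t in zip(gate, target))

def mutate(gate, sigma=0.1):
    return normalize([max(0.0, g + random.gauss(0, sigma)) for g in gate])

def crossover(a, b):
    return normalize([random.choice(pair) for pair in zip(a, b)])

pop = [normalize([random.random() for _ in range(N_EXPERTS)]) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)   # selection: keep the fittest half
    parents = pop[: POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print("best gating weights:", [round(g, 2) for g in max(pop, key=fitness)])
```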
2025-09-08T15:01:47
https://www.reddit.com/r/LocalLLaMA/comments/1nbq3y7/new_research_preprint_evolving_transformers_with/
Desperate_Contact102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbq3y7
false
null
t3_1nbq3y7
/r/LocalLLaMA/comments/1nbq3y7/new_research_preprint_evolving_transformers_with/
false
false
self
24
null
Switched to LobeChat from OpenWebUI because of crappy web search and no reasoning level support: a review
17
For people who use OpenWebUI, I could not get web search working properly. Also, it's been weeks since OpenAI Harmony was released, and it still doesn't support configuring the reasoning level for gpt-oss. I gave up on OpenWebUI being useful, and switched to LobeChat. One `docker compose up -d` later, it's running on my server. Pros: - Native web search [actually works](https://i.imgur.com/7mmS43O.png)! It calls the gpt-5 api correctly using the [web_search tool](https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses) setting built-in search engine. It's a lot faster than running searxng on my local machine, doesn't seem to run into cloudflare issues, and [gives high quality results quickly](https://i.imgur.com/zoakfFJ.png). - You can [set gpt-oss/gpt-5 reasoning effort](https://i.imgur.com/4dpL9jp.png)! Cons: - The icons are really really ugly. Not my style at all. - It's written by a team in China, and it sometimes shows in the translations. - Setting up the server with a custom domain/SSL/custom ports is a bit harder. The default config assumes that you have ports 8000, 9000, 9001, and 5432 (postgres port) available. You need to tweak the config a bit if those ports are already in use. You can change some of the ports in the `.env` file, but ports 9001 and 5432 are hardcoded, so you need to change those separately. This is not a big deal though, just takes a few mins to configure. Overall, I rate it 4/5 stars. Would be a higher score if we could change the icons.
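If you hit the port clashes mentioned above, a quick pre-flight check like this shows which of the defaults are already taken before you bring the stack up (a generic sketch, not part of LobeChat):

```python
# Check which of LobeChat's default ports are already taken on this host.
import socket

for port in (8000, 9000, 9001, 5432):
    with socket.socket() as s:
        s.settimeout(0.5)
        busy = s.connect_ex(("127.0.0.1", port)) == 0  # 0 means something answered
        print(f"port {port}: {'in use' if busy else 'free'}")
```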
2025-09-08T14:59:32
https://www.reddit.com/r/LocalLLaMA/comments/1nbq1n0/switched_to_lobechat_from_openwebui_because_of/
DistanceSolar1449
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbq1n0
false
null
t3_1nbq1n0
/r/LocalLLaMA/comments/1nbq1n0/switched_to_lobechat_from_openwebui_because_of/
false
false
self
17
{'enabled': False, 'images': [{'id': '3YItbMNOxCpB_BXg0L2xr8aDi-PSIiNEjmQ9BSm1rO0', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/3YItbMNOxCpB_BXg0L2xr8aDi-PSIiNEjmQ9BSm1rO0.png?width=108&crop=smart&auto=webp&s=10325aeb95bc77f1c351c0c7215e124dea8ebca3', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/3YItbMNOxCpB_BXg0L2xr8aDi-PSIiNEjmQ9BSm1rO0.png?width=216&crop=smart&auto=webp&s=8ebe40254ff7c5e18a7dfa5097f2ffd0b46d8137', 'width': 216}, {'height': 261, 'url': 'https://external-preview.redd.it/3YItbMNOxCpB_BXg0L2xr8aDi-PSIiNEjmQ9BSm1rO0.png?width=320&crop=smart&auto=webp&s=e6d8ba95f4eba6b69b19be0418d2b28e03038699', 'width': 320}, {'height': 523, 'url': 'https://external-preview.redd.it/3YItbMNOxCpB_BXg0L2xr8aDi-PSIiNEjmQ9BSm1rO0.png?width=640&crop=smart&auto=webp&s=0f4a3d5bd422ff75f152c397b09fe5d5b870641d', 'width': 640}], 'source': {'height': 586, 'url': 'https://external-preview.redd.it/3YItbMNOxCpB_BXg0L2xr8aDi-PSIiNEjmQ9BSm1rO0.png?auto=webp&s=270dbfff3c7b9697f4638be4e90a624f4e61a41f', 'width': 716}, 'variants': {}}]}
How to make gtx 1050 work (CUDA/Linux/ComfyUI)?
0
Please don't suggest buying a 500 € graphics card. I was just given this laptop with a GTX 1050 Ti 4GB, and I wanted to try whether it's useful for anything at all, because I've been stuck on CPU-only for months. OS: openSUSE Tumbleweed, NVIDIA drivers installed So I tried ComfyUI (from GitHub) and ran a thing that errored out due to a CUDA mismatch. I learned this GPU has CUDA compute capability 6.1, while I guess current PyTorch supports 7 and up? I talked to Perplexity about it, and it suggested working from a Conda env with Python 3.10 and pytorch-cuda=11.7. That didn't work, because Conda seems to insist on installing CPU-only versions, so CUDA isn't available. Which is so ironic, because I recall having the exact opposite problem when I was trying to make things work on a CPU some months ago, and one of these bloody package managers kept trying to install NVIDIA stuff. Now Perplexity is telling me to go through Mamba, and it keeps giving me commands to re-download everything all the time. It's obviously just guessing and seems to know even less about this than I do. As a side note, how many fu- package and environment managers are there? Like the distro nightmare isn't enough, now every damn AI project needs its own Python version, environment, conda, or whatever. FFS. Not everybody has gigabit internet and unlimited storage, never mind figuring out where I put this or that thing. Anyway, any humans here who know what crap to install to make this work?
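For anyone debugging the same setup, this is a quick way to confirm whether the installed wheel actually sees the GPU (standard PyTorch calls; the cu118 index URL in the comment is a commonly suggested wheel source for Pascal-era cards, but verify against the current PyTorch install matrix):

```python
# Quick sanity check of the CUDA stack as PyTorch sees it.
# If is_available() is False, you likely got a CPU-only wheel; try e.g.
#   pip install torch --index-url https://download.pytorch.org/whl/cu118
# (commonly suggested for older cards -- verify against current PyTorch docs).
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))  # (6, 1) on a 1050 Ti
```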
2025-09-08T14:48:27
https://www.reddit.com/r/LocalLLaMA/comments/1nbpqty/how_to_make_gtx_1050_work_cudalinuxcomfyui/
WhoRoger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nbpqty
false
null
t3_1nbpqty
/r/LocalLLaMA/comments/1nbpqty/how_to_make_gtx_1050_work_cudalinuxcomfyui/
false
false
self
0
null