Column schema (type and observed min/max per column; for string columns the min/max are lengths):

| column | dtype | min | max |
|--------|-------|-----|-----|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Private AI Voice Assistant + Open-Source Speaker Powered by Llama & Jetson!
134
**TL;DR:** We built a **100% private, AI-powered voice assistant** for your smart home — runs locally on **Jetson**, uses **Llama models**, connects to our **open-source Sonos-like speaker**, and integrates with **Home Assistant** to control basically *everything*. No cloud. Just fast, private, real-time control. ==...
2025-06-19T01:59:17
https://youtu.be/WrreIi8LCiw
FutureProofHomes
youtu.be
1970-01-01T00:00:00
0
{}
1leyzxp
false
{'oembed': {'author_name': 'FutureProofHomes', 'author_url': 'https://www.youtube.com/@FutureProofHomes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/WrreIi8LCiw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media...
t3_1leyzxp
/r/LocalLLaMA/comments/1leyzxp/private_ai_voice_assistant_opensource_speaker/
false
false
https://external-preview…6cd17bba06d6ff62
134
{'enabled': False, 'images': [{'id': '1cKeuQGVkwjz1OX7NjtAv9GHzhusji4vD5LLPS4kBVk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1cKeuQGVkwjz1OX7NjtAv9GHzhusji4vD5LLPS4kBVk.jpeg?width=108&crop=smart&auto=webp&s=0bc6510c00960d22a9218498dc030e8b34816167', 'width': 108}, {'height': 162, 'url': '...
Dual CPU Penalty?
9
Should there be a noticeable penalty for running dual CPUs on a workload? Two systems running the same version of Ubuntu Linux, on ollama with gemma3 (27b-it-fp16). One has a Threadripper 7985 with 256GB memory and a 5090. The second system is a dual 8480 Xeon with 256GB memory and a 5090. Regardless of workload, the threadri...
2025-06-19T01:53:34
https://www.reddit.com/r/LocalLLaMA/comments/1leyvq5/dual_cpu_penalty/
jsconiers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leyvq5
false
null
t3_1leyvq5
/r/LocalLLaMA/comments/1leyvq5/dual_cpu_penalty/
false
false
self
9
null
Self-hosting LLaMA: What are your biggest pain points?
44
Hey fellow llama enthusiasts! Setting aside compute, what have been the biggest issues you've faced when trying to self-host models? e.g: * Running out of GPU memory or dealing with slow inference times * Struggling to optimize model performance for specific use cases * Privacy? * Scaling models to handle ...
2025-06-19T01:34:58
https://www.reddit.com/r/LocalLLaMA/comments/1leyi70/selfhosting_llama_what_are_your_biggest_pain/
Sriyakee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leyi70
false
null
t3_1leyi70
/r/LocalLLaMA/comments/1leyi70/selfhosting_llama_what_are_your_biggest_pain/
false
false
self
44
null
I created a GUI based software to fine-tune LLMs. Please give me some suggestions.
4
Hello guys! I just finished my freshman year and built a simple Electron-based tool for fine-tuning LLMs. I found the existing options (like CLI or even Hugging Face AutoTrain) a bit hard or limited, so I wanted to build something easier. Right now, it supports basic fine-tuning using Unsloth. I plan to add suppor...
2025-06-19T01:30:38
https://www.reddit.com/r/LocalLLaMA/comments/1leyf4s/i_created_a_gui_based_software_to_finetune_llms/
ConfusionEven2625
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leyf4s
false
null
t3_1leyf4s
/r/LocalLLaMA/comments/1leyf4s/i_created_a_gui_based_software_to_finetune_llms/
false
false
https://b.thumbs.redditm…sXm4MQe5r3RE.jpg
4
null
I'm having trouble accessing LMArena
2
When I visit [lmarena.ai](http://lmarena.ai) using the Firefox browser, the website shows a message saying “Failed to verify your browser”. However, it works fine in the Edge browser. How can I resolve this issue? [Imgur](https://imgur.com/RGcsi0V)
2025-06-19T01:14:34
https://www.reddit.com/r/LocalLLaMA/comments/1ley3k6/im_having_trouble_accessing_lmarena/
r-amadeus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ley3k6
false
null
t3_1ley3k6
/r/LocalLLaMA/comments/1ley3k6/im_having_trouble_accessing_lmarena/
false
false
self
2
{'enabled': False, 'images': [{'id': '3Jk05Nv97du10Ig6B3W5Wav6jGF7ceIALgPhRceUDc4', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/BroBmulOYtW7F-P8LovnUxR1SX1-Y31A2Tra1lKbZgs.jpg?width=108&crop=smart&auto=webp&s=967046deda84b588f7fe65e135ead5d4726ccb44', 'width': 108}, {'height': 127, 'url': 'h...
Pickaxe - I built an open-source Typescript library for scaling agents
6
Hey everyone -- I'm an engineer working on [Hatchet](https://github.com/hatchet-dev/hatchet). We're releasing an open source Typescript library for building agents that scale: [https://github.com/hatchet-dev/pickaxe](https://github.com/hatchet-dev/pickaxe) Pickaxe is explicitly **not a framework**. Most frameworks lo...
2025-06-19T01:11:18
https://www.reddit.com/r/LocalLLaMA/comments/1ley18c/pickaxe_i_built_an_opensource_typescript_library/
hatchet-dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ley18c
false
null
t3_1ley18c
/r/LocalLLaMA/comments/1ley18c/pickaxe_i_built_an_opensource_typescript_library/
false
false
self
6
{'enabled': False, 'images': [{'id': 'zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs.png?width=108&crop=smart&auto=webp&s=dadc5ca4da4397dd84c9e6920ebf9279893aaecf', 'width': 108}, {'height': 108, 'url': 'h...
Best realtime open source STT model?
13
What's the best model to transcribe a conversation in realtime, meaning that the words have to appear as the person is talking.
2025-06-19T00:49:58
https://www.reddit.com/r/LocalLLaMA/comments/1lexlsd/best_realtime_open_source_stt_model/
ThatIsNotIllegal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lexlsd
false
null
t3_1lexlsd
/r/LocalLLaMA/comments/1lexlsd/best_realtime_open_source_stt_model/
false
false
self
13
null
How to set up local llms on a 6700 xt
8
All right, so I struggled for about four or five weeks to get local LLMs running on my GPU, a 6700 XT. After that process I finally got something working on Windows, so here is the guide in case anyone is interested: # AMD RX 6700 XT LLM Setup Guide - KoboldCpp with GPU...
2025-06-19T00:42:29
https://www.reddit.com/r/LocalLLaMA/comments/1lexg9w/how_to_set_up_local_llms_on_a_6700_xt/
Electronic_Image1665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lexg9w
false
null
t3_1lexg9w
/r/LocalLLaMA/comments/1lexg9w/how_to_set_up_local_llms_on_a_6700_xt/
false
false
self
8
{'enabled': False, 'images': [{'id': 'ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI.png?width=108&crop=smart&auto=webp&s=1d7d66a5611bb1de56e59d4cfb3b261d6803a0bb', 'width': 108}, {'height': 108, 'url': 'h...
How much is the 3090 on the used market in your country?
10
Hi there guys, hoping you're having a good day. I was wondering about used 3090 prices in your country, as they seem to vary a lot by region. I will start with Chile. Here, used 3090s hover between 550 and 650 USD. This is a bit of an increase vs. some months ago, when they were between 500 and 550 USD. Al...
2025-06-19T00:25:34
https://www.reddit.com/r/LocalLLaMA/comments/1lex3pi/how_much_is_the_3090_on_the_used_market_in_your/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lex3pi
false
null
t3_1lex3pi
/r/LocalLLaMA/comments/1lex3pi/how_much_is_the_3090_on_the_used_market_in_your/
false
false
self
10
null
We built this project to increase LLM throughput by 3x. Now it has been adopted by IBM in their LLM serving stack!
423
Hi guys, our team has built this open source project, LMCache, to reduce repetitive computation in LLM inference and make systems serve more people (3x more throughput in chat applications) and it has been used in IBM's open source LLM inference stack. In LLM serving, the input is computed into intermediate states cal...
2025-06-18T23:55:55
https://i.redd.it/775o8e8hxr7f1.jpeg
Nice-Comfortable-650
i.redd.it
1970-01-01T00:00:00
0
{}
1lewhla
false
null
t3_1lewhla
/r/LocalLLaMA/comments/1lewhla/we_built_this_project_to_increase_llm_throughput/
false
false
default
423
{'enabled': True, 'images': [{'id': '775o8e8hxr7f1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/775o8e8hxr7f1.jpeg?width=108&crop=smart&auto=webp&s=07c1104129fa40256bcf2871a8f6782191a78e1c', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/775o8e8hxr7f1.jpeg?width=216&crop=smart&auto=w...
Does this mean we are free from the shackles of CUDA? We can use AMD GPUs wired up together to run models ?
25
2025-06-18T23:53:58
https://i.redd.it/y31qo2q5xr7f1.png
Just_Lingonberry_352
i.redd.it
1970-01-01T00:00:00
0
{}
1lewg4u
false
null
t3_1lewg4u
/r/LocalLLaMA/comments/1lewg4u/does_this_mean_we_are_free_from_the_shackles_of/
false
false
default
25
{'enabled': True, 'images': [{'id': 'y31qo2q5xr7f1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/y31qo2q5xr7f1.png?width=108&crop=smart&auto=webp&s=3089d7f5f8c49fb2ffac36489acea817090b88b1', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/y31qo2q5xr7f1.png?width=216&crop=smart&auto=web...
Suggest a rig for running local LLM for ~$3,000
8
Simply that. I have a budget approx. $3k and I want to build or buy a rig to run the largest local llm for the budget. My only constraint is that it must run Linux. Otherwise I’m open to all options (DGX, new or used, etc). Not interested in training or finetuning models, just running
2025-06-18T23:33:44
https://www.reddit.com/r/LocalLLaMA/comments/1lew0rk/suggest_a_rig_for_running_local_llm_for_3000/
x0rchidia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lew0rk
false
null
t3_1lew0rk
/r/LocalLLaMA/comments/1lew0rk/suggest_a_rig_for_running_local_llm_for_3000/
false
false
self
8
null
Someone to give me a runpod referral code?
0
I heard there's a sweet $500 bonus 👀 If anyone's got a referral link, I'd really appreciate it. Trying to get started without missing out!
2025-06-18T22:52:48
https://www.reddit.com/r/LocalLLaMA/comments/1lev4hc/someone_to_give_me_a_runpod_referral_code/
rainyposm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lev4hc
false
null
t3_1lev4hc
/r/LocalLLaMA/comments/1lev4hc/someone_to_give_me_a_runpod_referral_code/
false
false
self
0
null
LocalBuddys - Local Friends For Everyone (But need help)
3
LocalBuddys has a lightweight interface that works on every device and works locally to ensure data security and not depend on any API. It is currently designed to be connected from other devices, using your laptop or computer as a main server. I am thinking of raising funds on Kickstarter and making this project p...
2025-06-18T22:45:56
https://www.reddit.com/r/LocalLLaMA/comments/1leuz0z/localbuddys_local_friends_for_everyone_but_need/
Dismal-Cupcake-3641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leuz0z
false
null
t3_1leuz0z
/r/LocalLLaMA/comments/1leuz0z/localbuddys_local_friends_for_everyone_but_need/
false
false
https://b.thumbs.redditm…Ugf8WOnSgW1g.jpg
3
null
EchoStream – A Local AI Agent That Lives on Your iPhone
1
[removed]
2025-06-18T22:39:47
https://www.reddit.com/r/LocalLLaMA/comments/1leuu1o/echostream_a_local_ai_agent_that_lives_on_your/
Local_Yam_5657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leuu1o
false
null
t3_1leuu1o
/r/LocalLLaMA/comments/1leuu1o/echostream_a_local_ai_agent_that_lives_on_your/
true
false
spoiler
1
null
Run Open WebUI over HTTPS on Windows without exposing it to the internet tutorial
4
Disclaimer! I'm learning. Feel free to help me make this tutorial better. Hello! I've struggled with running open webui over https without exposing it to the internet on windows for a bit. I wanted to be able to use voice and call mode on iOS browsers but https was a requirement for that. At first I tried to do it wi...
2025-06-18T21:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1letslu/run_open_webui_over_https_on_windows_without/
gwyngwynsituation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1letslu
false
null
t3_1letslu
/r/LocalLLaMA/comments/1letslu/run_open_webui_over_https_on_windows_without/
false
false
self
4
null
Vector with Ollama and push it into ChromaDB
0
Hello! I am currently interning without much prior knowledge, and I have to handle a file that contains (287,113,3). My task was to vectorize the data using only Ollama and then import it into ChromaDB, while also being able to communicate with the AI without using LangChain. I tried to watch a YouTube video about th...
2025-06-18T21:14:25
https://www.reddit.com/r/LocalLLaMA/comments/1lestr2/vector_with_ollama_and_push_it_into_chromadb/
Aggravating_Ad_3433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lestr2
false
null
t3_1lestr2
/r/LocalLLaMA/comments/1lestr2/vector_with_ollama_and_push_it_into_chromadb/
false
false
self
0
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
112
**Over the past year and a half** I've been working on the problem of **factual finetuning** -- **training an open-source LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I just released **Augmentoolkit 3.0** — an eas...
2025-06-18T20:33:11
https://www.reddit.com/r/LocalLLaMA/comments/1lersrw/augmentoolkit_30_7_months_of_work_mit_license/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lersrw
false
null
t3_1lersrw
/r/LocalLLaMA/comments/1lersrw/augmentoolkit_30_7_months_of_work_mit_license/
false
false
self
112
{'enabled': False, 'images': [{'id': 'JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8.png?width=108&crop=smart&auto=webp&s=dfa70ffebb9194edbb5e27da4a36fd2490c85f6e', 'width': 108}, {'height': 108, 'url': 'h...
RAG injection in Chain of Thought (COT)
10
I just recently started running 'deepseek-ai/DeepSeek-R1-Distill-Qwen-14B' locally (Macbook Pro M4 48GB). I have been messing around with an idea where I inject information from a ToolUse/RAG model in to the <think> section. Essentially: User prompt > DeepseekR1 runs 50 tokens > stop. Run another tool use model on use...
2025-06-18T20:00:58
https://www.reddit.com/r/LocalLLaMA/comments/1ler0ew/rag_injection_in_chain_of_thought_cot/
Strange_Test7665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ler0ew
false
null
t3_1ler0ew
/r/LocalLLaMA/comments/1ler0ew/rag_injection_in_chain_of_thought_cot/
false
false
self
10
null
Why a Northern BC credit union took AI sovereignty into its own hands
0
Not entirely LocalLLama but close.
2025-06-18T19:36:38
https://betakit.com/why-a-northern-bc-credit-union-took-ai-sovereignty-into-its-own-hands/
redpatchguy
betakit.com
1970-01-01T00:00:00
0
{}
1leqeld
false
null
t3_1leqeld
/r/LocalLLaMA/comments/1leqeld/why_a_northern_bc_credit_union_took_ai/
false
false
default
0
{'enabled': False, 'images': [{'id': '7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY.jpeg?width=108&crop=smart&auto=webp&s=22a902d71dc4069f4139912cb856424f925ab4bf', 'width': 108}, {'height': 144, 'url': '...
Which local API is the best to work with when developing local LLM apps for yourself?
3
There are so many local LLM servers out there, each with their own API (llama.cpp, ollama, LM studio, llmv, etc) I am a bit overwhelmed trying to decide which API to use. Does anyone have any experience or feedback in this area that can help me choose one?
2025-06-18T19:34:09
https://www.reddit.com/r/LocalLLaMA/comments/1leqcc5/which_local_api_is_the_best_to_work_with_when/
crispyfrybits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leqcc5
false
null
t3_1leqcc5
/r/LocalLLaMA/comments/1leqcc5/which_local_api_is_the_best_to_work_with_when/
false
false
self
3
null
The Bizarre Limitations of Apple's Foundation Models Framework
47
Last week Apple announced some great new APIs for their on-device foundation models in OS 26. Devs have been experimenting with it for over a week now, and the local LLM is surprisingly capable for only a 3B model w/2-bit quantization. It's also very power efficient because it leverages the ANE. You can try it out for ...
2025-06-18T19:29:31
https://www.reddit.com/r/LocalLLaMA/comments/1leq843/the_bizarre_limitations_of_apples_foundation/
SandBlaster2000AD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leq843
false
null
t3_1leq843
/r/LocalLLaMA/comments/1leq843/the_bizarre_limitations_of_apples_foundation/
false
false
self
47
{'enabled': False, 'images': [{'id': 'JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ.png?width=108&crop=smart&auto=webp&s=b2073825a7abe157927355836cf908592b7b7b59', 'width': 108}, {'height': 108, 'url': 'h...
Unlimited Repeated generations by fine-tuned model
0
I was fine-tuning the phi-4 14b model on a math dataset. The first time, I trained it without any system prompt and it worked fine. Then I added a system prompt stating "You are a math solver. Only answer math related questions. Show step-by-step solution", and it started producing faulty outputs while repeating t...
2025-06-18T19:23:46
https://www.reddit.com/r/LocalLLaMA/comments/1leq2y1/unlimited_repeated_generations_by_finetuned_model/
ILoveMy2Balls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leq2y1
false
null
t3_1leq2y1
/r/LocalLLaMA/comments/1leq2y1/unlimited_repeated_generations_by_finetuned_model/
false
false
self
0
null
Mobile Phones are becoming better at running AI locally on the device.
39
We aggregated the tokens/second on various devices that use apps built with Cactus. https://preview.redd.it/phdczm64hq7f1.png?width=1320&format=png&auto=webp&s=f7981fa2775bc2a723e2d51f738a75d8ae7bd432 * 1B - 4B models at INT4 run quite fast (we shipped some improvements though). * You can see the full list on our Git...
2025-06-18T19:02:35
https://www.reddit.com/r/LocalLLaMA/comments/1lepjc5/mobile_phones_are_becoming_better_at_running_ai/
Henrie_the_dreamer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lepjc5
false
null
t3_1lepjc5
/r/LocalLLaMA/comments/1lepjc5/mobile_phones_are_becoming_better_at_running_ai/
false
false
https://b.thumbs.redditm…yVxMHVZz-hHY.jpg
39
{'enabled': False, 'images': [{'id': 'bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI.png?width=108&crop=smart&auto=webp&s=02bdcaa524e19f8a3591b0deaf1d84df538991c2', 'width': 108}, {'height': 108, 'url': 'h...
lmarena not telling us chatbot names after battle
0
yupp.ai is a recent alternative to lmarena.
2025-06-18T18:59:46
https://www.reddit.com/r/LocalLLaMA/comments/1lepgii/lmarena_not_telling_us_chatbot_names_after_battle/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lepgii
false
null
t3_1lepgii
/r/LocalLLaMA/comments/1lepgii/lmarena_not_telling_us_chatbot_names_after_battle/
false
false
self
0
null
self host minimax?
5
i want to use minimax but im just not sure about sending data to china and want to self host it. is that possible? which locally hosted agentic focused model can we run on either rented hardware or local gpus?
2025-06-18T18:40:16
https://www.reddit.com/r/LocalLLaMA/comments/1leoyu2/self_host_minimax/
Just_Lingonberry_352
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leoyu2
false
null
t3_1leoyu2
/r/LocalLLaMA/comments/1leoyu2/self_host_minimax/
false
false
self
5
null
Lorras for LLMs
0
Do we have this option? 🤔 lately I've been seeing new models pop up left and right and oops this one doesn't understand xyz, so I have to download another model...only to find out it's missing % of the dataset of the previous model. Having lorras link up with LLMs would be pretty useful and I don't think I've seen a...
2025-06-18T18:39:50
https://www.reddit.com/r/LocalLLaMA/comments/1leoyg3/lorras_for_llms/
mk8933
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leoyg3
false
null
t3_1leoyg3
/r/LocalLLaMA/comments/1leoyg3/lorras_for_llms/
false
false
self
0
null
Daily Paper Discussions on the Yannic Kilcher Discord -> V-JEPA 2
0
As a part of daily paper discussions on the Yannic Kilcher discord server, I will be volunteering to lead the analysis of the world model that achieves state-of-the-art performance on visual understanding and prediction in the physical world -> V-JEPA 2 🧮 🔍 V-JEPA 2 is a 1.2 billion-parameter model that was built us...
2025-06-18T18:39:29
https://www.reddit.com/r/LocalLLaMA/comments/1leoy4x/daily_paper_discussions_on_the_yannic_kilcher/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leoy4x
false
null
t3_1leoy4x
/r/LocalLLaMA/comments/1leoy4x/daily_paper_discussions_on_the_yannic_kilcher/
false
false
https://b.thumbs.redditm…7pU7K6VI2ZWg.jpg
0
null
Cluster advice needed
0
Hello LocalLLaMA, I'm new to this chat, so sorry if this breaks any rules. I'm a young enthusiast and have been working on my dream AI project for a while. I was looking at eventually building a dual A100 40GB PCIe cluster, but I noticed that eBay had little to no used supply (trying to budget). Any hel...
2025-06-18T18:28:15
https://www.reddit.com/r/LocalLLaMA/comments/1leonta/cluster_advice_needed/
Fun_Nefariousness228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leonta
false
null
t3_1leonta
/r/LocalLLaMA/comments/1leonta/cluster_advice_needed/
false
false
self
0
null
How much does it cost ai companies to train xbillion amount of parameters?
3
Hello, I have been working on my own stuff lately, and decided to test how much memory 5 million parameters (I call them units) would cost. It came out to be 37.7 GB of RAM, but it made me think that if I had two 24GB GPUs I'd be able to effectively train for small problems, and it would cost me $4000 (retail), so if I want...
2025-06-18T18:18:12
https://www.reddit.com/r/LocalLLaMA/comments/1leoej7/how_much_does_it_cost_ai_companies_to_train/
KingYSL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leoej7
false
null
t3_1leoej7
/r/LocalLLaMA/comments/1leoej7/how_much_does_it_cost_ai_companies_to_train/
false
false
self
3
null
OpenAI found features in AI models that correspond to different ‘personas’
118
[https://techcrunch.com/2025/06/18/openai-found-features-in-ai-models-that-correspond-to-different-personas/](https://techcrunch.com/2025/06/18/openai-found-features-in-ai-models-that-correspond-to-different-personas/) **TL;DR:** OpenAI discovered that large language models contain internal "persona" features neural...
2025-06-18T18:16:40
https://www.reddit.com/r/LocalLLaMA/comments/1leod7d/openai_found_features_in_ai_models_that/
nightsky541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leod7d
false
null
t3_1leod7d
/r/LocalLLaMA/comments/1leod7d/openai_found_features_in_ai_models_that/
false
false
self
118
{'enabled': False, 'images': [{'id': 'uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4.jpeg?width=108&crop=smart&auto=webp&s=625eebe08226a18b15b91510476f4c7be9772770', 'width': 108}, {'height': 144, 'url': '...
Help with Ollama & Open WebUI – Best Practices for Staff Knowledge Base
1
[removed]
2025-06-18T17:51:53
https://www.reddit.com/r/LocalLLaMA/comments/1lenq73/help_with_ollama_open_webui_best_practices_for/
4real2me
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lenq73
false
null
t3_1lenq73
/r/LocalLLaMA/comments/1lenq73/help_with_ollama_open_webui_best_practices_for/
false
false
self
1
null
new 72B and 70B models from Arcee
81
looks like there are some new models from Arcee [https://huggingface.co/arcee-ai/Virtuoso-Large](https://huggingface.co/arcee-ai/Virtuoso-Large) [https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF](https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF) [https://huggingface.co/arcee-ai/Arcee-SuperNova-v1](https://hugg...
2025-06-18T17:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1lenf36/new_72b_and_70b_models_from_arcee/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lenf36
false
null
t3_1lenf36
/r/LocalLLaMA/comments/1lenf36/new_72b_and_70b_models_from_arcee/
false
false
self
81
{'enabled': False, 'images': [{'id': '-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns.png?width=108&crop=smart&auto=webp&s=bde6d425e961c460755cedf86cf0f698f3745398', 'width': 108}, {'height': 116, 'url': 'h...
new 72B and 70B models from Arcsee
1
looks like there are some new models from Arcsee [https://huggingface.co/arcee-ai/Virtuoso-Large](https://huggingface.co/arcee-ai/Virtuoso-Large) [https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF](https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF) [https://huggingface.co/arcee-ai/Arcee-SuperNova-v1](https://hug...
2025-06-18T17:38:43
https://www.reddit.com/r/LocalLLaMA/comments/1lendzl/new_72b_and_70b_models_from_arcsee/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lendzl
false
null
t3_1lendzl
/r/LocalLLaMA/comments/1lendzl/new_72b_and_70b_models_from_arcsee/
false
false
self
1
null
Joycap-beta with llama.cpp
6
Has anyone gotten llama.cpp to work with joycap yet? So far the latest version of Joycap seems to be the captioning king for my workflows but I've only managed to use it with VLLM which is super slow to startup (despite the model being cached in RAM) and that leads to a lot of waiting combined with llama-swap.
2025-06-18T17:37:29
https://www.reddit.com/r/LocalLLaMA/comments/1lencvg/joycapbeta_with_llamacpp/
HollowInfinity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lencvg
false
null
t3_1lencvg
/r/LocalLLaMA/comments/1lencvg/joycapbeta_with_llamacpp/
false
false
self
6
null
We took Qwen3 235B A22B from 34 tokens/sec to 54 tokens/sec by switching from llama.cpp with Unsloth dynamic Q4_K_M GGUF to vLLM with INT4 w4a16
88
System: quad RTX A6000 Epyc. Originally we were running the Unsloth dynamic GGUFs at UD_Q4_K_M and UD_Q5_K_XL with which we were getting speeds of 34 and 31 tokens/sec, respectively, for small-ish prompts of 1-2k tokens. A couple of days ago we tried an experiment with another 4-bit quant type: INT 4, specifically ...
2025-06-18T17:09:31
https://www.reddit.com/r/LocalLLaMA/comments/1lemmsq/we_took_qwen3_235b_a22b_from_34_tokenssec_to_54/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lemmsq
false
null
t3_1lemmsq
/r/LocalLLaMA/comments/1lemmsq/we_took_qwen3_235b_a22b_from_34_tokenssec_to_54/
false
false
self
88
null
Development environment setup
1
I use a windows machine with a 5070 TI and a 3070. I have 96 GB of Ram. I have been installing python and other stuff into this machine but now I feel that it might be better to set up a virtual/docker environment. Is there any readymade setup I can download? Also, can such virtual environments take full advantage of t...
2025-06-18T17:07:42
https://www.reddit.com/r/LocalLLaMA/comments/1leml1x/development_environment_setup/
Jedirite
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leml1x
false
null
t3_1leml1x
/r/LocalLLaMA/comments/1leml1x/development_environment_setup/
false
false
self
1
null
Best non-Chinese open models?
2
Yes I know that running them locally is fine, and believe me there's nothing I'd like to do more than just use Qwen, but there is significant resistance to anything from China in this use case Most important factor is it needs to be good at RAG, summarization and essay/report writing. Reasoning would also be a big plu...
2025-06-18T16:15:04
https://www.reddit.com/r/LocalLLaMA/comments/1lel886/best_nonchinese_open_models/
ProbaDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lel886
false
null
t3_1lel886
/r/LocalLLaMA/comments/1lel886/best_nonchinese_open_models/
false
false
self
2
null
gemini-2.5-flash-lite-preview-06-17 performance on IDP Leaderboard
14
https://preview.redd.it/…similar results?
2025-06-18T15:52:25
https://www.reddit.com/r/LocalLLaMA/comments/1lekndj/gemini25flashlitepreview0617_performance_on_idp/
SouvikMandal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lekndj
false
null
t3_1lekndj
/r/LocalLLaMA/comments/1lekndj/gemini25flashlitepreview0617_performance_on_idp/
false
false
https://a.thumbs.redditm…NSDeFUIiMmt4.jpg
14
null
MiCA – A new parameter-efficient fine-tuning method with higher knowledge uptake and less forgetting (beats LoRA in my tests)
0
Hi all, I’ve been working on a new **parameter-efficient fine-tuning method** for LLMs, called **MiCA (Minor Component Adaptation)**, and wanted to share the results and open it up for feedback or collaboration. MiCA improves on existing methods (like LoRA) in three core areas: ✅ **Higher knowledge uptake**: in som...
2025-06-18T15:37:43
https://www.reddit.com/r/LocalLLaMA/comments/1lek9yr/mica_a_new_parameterefficient_finetuning_method/
Majestic-Explorer315
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lek9yr
false
null
t3_1lek9yr
/r/LocalLLaMA/comments/1lek9yr/mica_a_new_parameterefficient_finetuning_method/
false
false
self
0
{'enabled': False, 'images': [{'id': 'kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM.jpeg?width=108&crop=smart&auto=webp&s=1df48c6071da3b3f2dffe06ba1401b645cfee2b3', 'width': 108}, {'height': 108, 'url': '...
M4 Max 128GB MacBook arrives today. Is LM Studio still king for running MLX or have things moved on?
19
As title: new top-of-the-line MBP arrives today and I’m wondering what the most performant option is for hosting models locally on it. Also: we run a quad RTX A6000 rig and I’ll be doing some benchmark comparisons between that and the MBP. Any requests?
2025-06-18T15:36:34
https://www.reddit.com/r/LocalLLaMA/comments/1lek8yo/m4_max_128gb_macbook_arrives_today_is_lm_studio/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lek8yo
false
null
t3_1lek8yo
/r/LocalLLaMA/comments/1lek8yo/m4_max_128gb_macbook_arrives_today_is_lm_studio/
false
false
self
19
null
Built an open-source DeepThink plugin that brings Gemini 2.5 style advanced reasoning to local models (DeepSeek R1, Qwen3, etc.)
67
Hey r/LocalLLaMA! So Google just dropped their Gemini 2.5 report and there's this really interesting technique called "Deep Think" that got me thinking. Basically, it's a structured reasoning approach where the model generates multiple hypotheses in parallel and critiques them before giving you the final answer. The r...
2025-06-18T15:26:55
https://www.reddit.com/r/LocalLLaMA/comments/1lek04t/built_an_opensource_deepthink_plugin_that_brings/
asankhs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lek04t
false
null
t3_1lek04t
/r/LocalLLaMA/comments/1lek04t/built_an_opensource_deepthink_plugin_that_brings/
false
false
https://b.thumbs.redditm…n_poPZJGnM8g.jpg
67
null
Is there a way to optimize flags for llama.cpp towards best tok/s local AI?
1
[removed]
2025-06-18T15:17:05
https://www.reddit.com/r/LocalLLaMA/comments/1lejr6k/is_there_a_way_to_optimize_flags_for_llamacpp/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lejr6k
false
null
t3_1lejr6k
/r/LocalLLaMA/comments/1lejr6k/is_there_a_way_to_optimize_flags_for_llamacpp/
false
false
self
1
null
GPU and General Recommendations for DL-CUDA local AI PC
2
Hi folks, I want to build a PC where I can tinker with some CUDA, tinker with LLMs, maybe some diffusion models, train, inference, maybe build some little apps etc. and I am trying to determine which GPU fits me the best. In my opinion, RTX 3090 may be the best because of 24 GB VRAM, and maybe I might get 2 which make...
2025-06-18T15:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1lejmkj/gpu_and_general_recommendations_for_dlcuda_local/
emre570
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lejmkj
false
null
t3_1lejmkj
/r/LocalLLaMA/comments/1lejmkj/gpu_and_general_recommendations_for_dlcuda_local/
false
false
self
2
null
Is there a way to optimize flags for llama.cpp towards best tok/s local AI?
1
[removed]
2025-06-18T15:01:59
https://www.reddit.com/r/LocalLLaMA/comments/1lejdf6/is_there_a_way_to_optimize_flags_for_llamacpp/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lejdf6
false
null
t3_1lejdf6
/r/LocalLLaMA/comments/1lejdf6/is_there_a_way_to_optimize_flags_for_llamacpp/
false
false
self
1
null
Hugging Face Sheets - experiment with 1.5K open LLMs on data you care about
26
Hi! We've built this app as a playground of open LLMs for unstructured datasets. It might be interesting to this community. It's powered by HF Inference Providers and could be useful for playing and finding the right open models for your use case, without downloading them or running code. I'd love to hear your ...
2025-06-18T14:49:16
https://v.redd.it/w0j5vts27p7f1
dvilasuero
/r/LocalLLaMA/comments/1lej1z2/hugging_face_sheets_experiment_with_15k_open_llms/
1970-01-01T00:00:00
0
{}
1lej1z2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w0j5vts27p7f1/DASHPlaylist.mpd?a=1752979758%2CYTAzNWNhY2RkOTk1NzJlNzYzZTczMzM3YzJjZjMwZDkzNmRhMjI5OTQyZjEyOTJmY2FlYzJlODViMzYxZTYxYQ%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/w0j5vts27p7f1/DASH_1080.mp4?source=fallback', 'h...
t3_1lej1z2
/r/LocalLLaMA/comments/1lej1z2/hugging_face_sheets_experiment_with_15k_open_llms/
false
false
https://external-preview…fe1c5b2f0cd3a9e4
26
{'enabled': False, 'images': [{'id': 'OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap.png?width=108&crop=smart&format=pjpg&auto=webp&s=5d95154c9d21718c3f333755b8bbef539ee1a...
Model Context Protocol (MCP) just got easier to use with IdeaWeaver
0
https://i.redd.it/i8sh45ds7p7f1.gif Model Context Protocol (MCP) just got easier to use with IdeaWeaver MCP is transforming how AI agents interact with tools, memory, and humans, making them more context-aware and reliable. But let’s be honest: setting it up manually is still a hassle. What if you could enabl...
2025-06-18T14:47:42
https://www.reddit.com/r/LocalLLaMA/comments/1lej0ml/model_context_protocol_mcp_just_got_easier_to_use/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lej0ml
false
null
t3_1lej0ml
/r/LocalLLaMA/comments/1lej0ml/model_context_protocol_mcp_just_got_easier_to_use/
false
false
https://b.thumbs.redditm…XrYtVKTMXArc.jpg
0
null
Oops
1936
2025-06-18T14:12:39
https://i.redd.it/iv35yrek1p7f1.png
Own-Potential-2308
i.redd.it
1970-01-01T00:00:00
0
{}
1lei5mb
false
null
t3_1lei5mb
/r/LocalLLaMA/comments/1lei5mb/oops/
false
false
default
1936
{'enabled': True, 'images': [{'id': 'iv35yrek1p7f1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?width=108&crop=smart&auto=webp&s=680ebd462541fd8c80431aa7b123c5468b76ebb4', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?width=216&crop=smart&auto=we...
Local LLM Coding Setup for 8GB VRAM - Coding Models?
4
Unfortunately for now, I'm limited to **8GB VRAM** (**32GB RAM**) with my friend's laptop - NVIDIA GeForce RTX 4060 GPU - Intel(R) Core(TM) i7-14700HX 2.10 GHz. We can't upgrade this laptop's RAM or graphics anymore. I'm not expecting great performance from LLMs with this VRAM. Just decent OK performance i...
2025-06-18T13:40:09
https://www.reddit.com/r/LocalLLaMA/comments/1lehe2i/local_llm_coding_setup_for_8gb_vram_coding_models/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lehe2i
false
null
t3_1lehe2i
/r/LocalLLaMA/comments/1lehe2i/local_llm_coding_setup_for_8gb_vram_coding_models/
false
false
self
4
null
Built memX: a shared memory backend for LLM agents (demo + open-source code)
51
Hey everyone — I built this over the weekend and wanted to share: 🔗 https://github.com/MehulG/memX **memX** is a shared memory layer for LLM agents — kind of like Redis, but with real-time sync, pub/sub, schema validation, and access control. Instead of having agents pass messages or follow a fixed pipeline, they j...
2025-06-18T13:37:19
https://v.redd.it/ibq16xv5vo7f1
Temporary-Tap-7323
v.redd.it
1970-01-01T00:00:00
0
{}
1lehbra
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ibq16xv5vo7f1/DASHPlaylist.mpd?a=1752845855%2CZTM2ZjRmZWFiYTFhMTc5NmUxY2Q0MWNiODg4YzU4MTgzZGJjYjAxOTBlMzc1MGQyM2RiM2E1NTBkYjYxNTM1Zg%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/ibq16xv5vo7f1/DASH_1080.mp4?source=fallback', 'h...
t3_1lehbra
/r/LocalLLaMA/comments/1lehbra/built_memx_a_shared_memory_backend_for_llm_agents/
false
false
https://external-preview…0292c4a3c3146859
51
{'enabled': False, 'images': [{'id': 'bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U.png?width=108&crop=smart&format=pjpg&auto=webp&s=3031cc30ee24e75094a4e7250ac591345596c...
Can your favourite local model solve this?
305
I am interested in which, if any, models can solve this relatively simple geometry picture if you simply give them this image. I don't have a big enough setup to test visual models.
2025-06-18T13:24:24
https://i.redd.it/gkjegqtyso7f1.png
MrMrsPotts
i.redd.it
1970-01-01T00:00:00
0
{}
1leh14g
false
null
t3_1leh14g
/r/LocalLLaMA/comments/1leh14g/can_your_favourite_local_model_solve_this/
false
false
default
305
{'enabled': True, 'images': [{'id': 'gkjegqtyso7f1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/gkjegqtyso7f1.png?width=108&crop=smart&auto=webp&s=799da9f2de59cf167b365385bad826a0c20e9cb0', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/gkjegqtyso7f1.png?width=216&crop=smart&auto=web...
3090 + 4090 vs 5090 for conversational AI? Gemma 27B on Linux.
0
Newbie here. I want to be able to train this local AI model. Needs text to speech and speech to text. Is running two cards a pain or is it worth the effort? I already have the 3090 and 4090. Thanks for your time.
2025-06-18T12:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1leggrf/3090_4090_vs_5090_for_conversional_al_gemma27b_on/
Yakapo88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leggrf
false
null
t3_1leggrf
/r/LocalLLaMA/comments/1leggrf/3090_4090_vs_5090_for_conversional_al_gemma27b_on/
false
false
self
0
null
Update:My agent model now supports OpenAI function calling format! (mirau-agent-base)
18
Hey r/LocalLLaMA! A while back I shared my multi-turn tool-calling model [in this post](https://www.reddit.com/r/LocalLLaMA/comments/1l7v9gf/a_multiturn_toolcalling_base_model_for_rl_agent/). Based on community feedback about OpenAI compatibility, I've updated the model to support OpenAI's function calling format! **...
2025-06-18T12:51:47
https://huggingface.co/eliuakk/mirau-agent-base-oai
EliaukMouse
huggingface.co
1970-01-01T00:00:00
0
{}
1legaq8
false
null
t3_1legaq8
/r/LocalLLaMA/comments/1legaq8/updatemy_agent_model_now_supports_openai_function/
false
false
default
18
{'enabled': False, 'images': [{'id': '8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0.png?width=108&crop=smart&auto=webp&s=1d7bbe0bc11d323d6826cedac9638df0dbd62a5a', 'width': 108}, {'height': 116, 'url': 'h...
gpt_agents.py
10
https://github.com/jameswdelancey/gpt_agents.py A single-file, multi-agent framework for LLMs—everything is implemented in one core file with no dependencies for maximum clarity and hackability. See the main implementation.
2025-06-18T12:10:57
https://www.reddit.com/r/LocalLLaMA/comments/1lefgmh/gpt_agentspy/
jameswdelancey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lefgmh
false
null
t3_1lefgmh
/r/LocalLLaMA/comments/1lefgmh/gpt_agentspy/
false
false
self
10
{'enabled': False, 'images': [{'id': 'mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ.png?width=108&crop=smart&auto=webp&s=0dd22858636e814d4321cceca0a94eb4aafb6472', 'width': 108}, {'height': 108, 'url': 'h...
【New release v1.7.1】Dingo: A Comprehensive Data Quality Evaluation Tool
5
[https://github.com/DataEval/dingo](https://github.com/DataEval/dingo)
2025-06-18T12:01:24
https://www.reddit.com/r/LocalLLaMA/comments/1lef9o7/new_release_v171dingo_a_comprehensive_data/
chupei0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lef9o7
false
null
t3_1lef9o7
/r/LocalLLaMA/comments/1lef9o7/new_release_v171dingo_a_comprehensive_data/
false
false
self
5
{'enabled': False, 'images': [{'id': 'W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8.png?width=108&crop=smart&auto=webp&s=b3101971d5ac666e25b61eb655e115c247facb35', 'width': 108}, {'height': 108, 'url': 'h...
MiniMax-M1
29
2025-06-18T11:06:41
https://github.com/MiniMax-AI/MiniMax-M1
David-Kunz
github.com
1970-01-01T00:00:00
0
{}
1leea24
false
null
t3_1leea24
/r/LocalLLaMA/comments/1leea24/minimaxm1/
false
false
default
29
{'enabled': False, 'images': [{'id': 'oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E.png?width=108&crop=smart&auto=webp&s=a07cd876a65ff821c1740dcc3eec4186df4d6783', 'width': 108}, {'height': 108, 'url': 'h...
WikipeQA : An evaluation dataset for both web-browsing agents and vector DB RAG systems
8
Hey fellow OSS enjoyer, I've created WikipeQA, an evaluation dataset inspired by BrowseComp but designed to test a broader range of retrieval systems. **What makes WikipeQA different?** Unlike BrowseComp (which requires live web browsing), WikipeQA can evaluate BOTH: * **Web-browsing agents**: Can your agent find th...
2025-06-18T10:58:26
https://www.reddit.com/r/LocalLLaMA/comments/1lee4pd/wikipeqa_an_evaluation_dataset_for_both/
Fit_Strawberry8480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lee4pd
false
null
t3_1lee4pd
/r/LocalLLaMA/comments/1lee4pd/wikipeqa_an_evaluation_dataset_for_both/
false
false
self
8
{'enabled': False, 'images': [{'id': 'szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs.png?width=108&crop=smart&auto=webp&s=d8c2a5a0e8af1cc89736fddad25a9bd929bd4564', 'width': 108}, {'height': 116, 'url': 'h...
What happens when inference gets 10-100x faster and cheaper?
2
Really fast inference is coming. Probably this year. A 10-100x leap in inference speed seems possible with the right algorithmic improvements and custom hardware. ASICs running Llama-3 70B are already >20x faster than H100 GPUs. And the economics of building custom chips make sense now that training runs cost billions...
2025-06-18T10:41:21
https://www.reddit.com/r/LocalLLaMA/comments/1leduoz/what_happens_when_inference_gets_10100x_faster/
jsonathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leduoz
false
null
t3_1leduoz
/r/LocalLLaMA/comments/1leduoz/what_happens_when_inference_gets_10100x_faster/
false
false
self
2
null
Is there a flexible pattern for AI workflows?
2
For a goal-oriented domain like customer support where you could have specialist agents for "Account Issues", "Transaction Issues", etc., I can't think of a better way to orchestrate agents other than static, predefined workflows. I have 2 questions: 1. Is there a known pattern that allows updates to "agentic workfl...
2025-06-18T10:33:03
https://www.reddit.com/r/LocalLLaMA/comments/1ledpvp/is_there_a_flexible_pattern_for_ai_workflows/
redditinws
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ledpvp
false
null
t3_1ledpvp
/r/LocalLLaMA/comments/1ledpvp/is_there_a_flexible_pattern_for_ai_workflows/
false
false
self
2
null
Looking for a .gguf file to run on llama.cpp server for a specific need.
2
Hello r/LocalLLaMA, I'm a lazy handyman with a passion for local models, and I'm currently working on a side project to build a pre-fabricated wood house. I've designed the house using Sweet Home 3D, but now I need to break it down into individual pieces to build it with a local carpenter. So, I'm trying to automate ...
2025-06-18T10:27:38
https://www.reddit.com/r/LocalLLaMA/comments/1ledmny/looking_for_a_guff_file_to_run_on_llamacpp_server/
Martialogrand
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ledmny
false
null
t3_1ledmny
/r/LocalLLaMA/comments/1ledmny/looking_for_a_guff_file_to_run_on_llamacpp_server/
false
false
self
2
null
How does one extract meaningful information and queries from 100s of customer chats?
0
Hey, I am facing a bit of an issue with this and wanted to ask: I have 100s of customer conversations, i.e. conversations between customers and customer service providers about products, and I want to understand what the customer pain points are and what they are facing issues with. How do I extract that information witho...
2025-06-18T10:25:14
https://www.reddit.com/r/LocalLLaMA/comments/1ledlaa/how_does_one_extract_meaning_information_and/
toinfinity_nbeyond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ledlaa
false
null
t3_1ledlaa
/r/LocalLLaMA/comments/1ledlaa/how_does_one_extract_meaning_information_and/
false
false
self
0
null
Is there a context management system?
3
As part of chatting and communicating we sometimes say "that's out of context" or "you switch context". And I'm thinking: how do humans organize that? And is there some library or system that has this capability? I'm not sure if a model (like an embedding model) could do that, because context is dynamic. I think ...
2025-06-18T10:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1ledidc/is_there_a_context_management_system/
freehuntx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ledidc
false
null
t3_1ledidc
/r/LocalLLaMA/comments/1ledidc/is_there_a_context_management_system/
false
false
self
3
null
Local AI for a small/medium accounting firm - Budget of €10k-25k
90
Our medium-sized **accounting firm** (around 100 people) in the **Netherlands** is looking to set up a local AI system, I'm hoping to tap into your collective wisdom for some recommendations. The **budget** is roughly **€10k-€25k.** This is purely for the hardware. I'll be able to build the system myself. I'll also han...
2025-06-18T09:51:00
https://www.reddit.com/r/LocalLLaMA/comments/1led23c/local_ai_for_a_smallmedian_accounting_firm_buget/
AFruitShopOwner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1led23c
false
null
t3_1led23c
/r/LocalLLaMA/comments/1led23c/local_ai_for_a_smallmedian_accounting_firm_buget/
false
false
self
90
null
Google doubled the price of Gemini 2.5 Flash thinking output after GA from 0.15 to 0.30 what
213
https://cloud.google.com/vertex-ai/generative-ai/pricing
2025-06-18T09:48:16
https://www.reddit.com/r/LocalLLaMA/comments/1led0lb/google_doubled_the_price_of_gemini_25_flash/
NoAd2240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1led0lb
false
null
t3_1led0lb
/r/LocalLLaMA/comments/1led0lb/google_doubled_the_price_of_gemini_25_flash/
false
false
self
213
{'enabled': False, 'images': [{'id': 'DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k.png?width=108&crop=smart&auto=webp&s=4d0406250101bf7b77173aee1f071f40049a1cf3', 'width': 108}, {'height': 113, 'url': 'h...
Best model for scraping and de-conjugating and translating Hebrew words out of texts? Basically generating a vocab list.
2
"De-conjugating" is a hard thing to explain without an example, but in English, it's like getting the word "walk" out of an input of "walked" or "walking." I've been using ChatGPT o3 for this and it works fine (according to a native speaker who checked the translations) but I want something more automated because I h...
2025-06-18T09:28:00
https://www.reddit.com/r/LocalLLaMA/comments/1lecppd/best_model_for_scraping_and_deconjugating_and/
vardonir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lecppd
false
null
t3_1lecppd
/r/LocalLLaMA/comments/1lecppd/best_model_for_scraping_and_deconjugating_and/
false
false
self
2
null
NVIDIA B300 cut all INT8 and FP64 performance???
51
[https://www.nvidia.com/en-us/data-center/hgx/](https://www.nvidia.com/en-us/data-center/hgx/)
2025-06-18T09:27:17
https://i.redd.it/cekoaeehmn7f1.png
Mindless_Pain1860
i.redd.it
1970-01-01T00:00:00
0
{}
1lecpcr
false
null
t3_1lecpcr
/r/LocalLLaMA/comments/1lecpcr/nvidia_b300_cut_all_int8_and_fp64_performance/
false
false
https://external-preview…c379b5e0c384c5ec
51
{'enabled': True, 'images': [{'id': 'mtoim3aNzp-GedzPJXgJ8e-TiOtDucitFLyUMMG-OEo', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png?width=108&crop=smart&auto=webp&s=088fb40859aed95536dd000460af67955860d19d', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png...
Understand block diagrams
2
I have documents with lots of block diagrams (A is connected to B of that sorts).. llama does understand the text but struggles with extracting the arrow mark connections, Gemini pro seems to be better though. I have tried some vision models as well but performance is not what I expected. Which model would you recommen...
2025-06-18T09:19:42
https://www.reddit.com/r/LocalLLaMA/comments/1leclef/understand_block_diagrams/
SathukaBootham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leclef
false
null
t3_1leclef
/r/LocalLLaMA/comments/1leclef/understand_block_diagrams/
false
false
self
2
null
Looking for a stack to serve local models as parallel concurrent async requests with multiple workers on fast api server.
1
Hello, I'm building a system to serve multiple models (LLMs like Gemma 12B-IT, Faster Whisper for speech-to-text, and Kokoro for text-to-speech) on one or multiple GPUs, aiming for **parallel concurrent async requests** with **multiple workers**. I’ve researched vLLM, LLaMA.cpp, and Triton Inference Server and want to co...
2025-06-18T09:03:42
https://www.reddit.com/r/LocalLLaMA/comments/1leccts/looking_for_a_stack_to_serve_local_models_as/
SomeRandomGuuuuuuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leccts
false
null
t3_1leccts
/r/LocalLLaMA/comments/1leccts/looking_for_a_stack_to_serve_local_models_as/
false
false
self
1
null
Choosing between two H100 vs one H200
3
I’m new to hardware and was asked by my employer to research whether using two NVIDIA H100 GPUs or one H200 GPU is better for fine-tuning large language models. I’ve heard some libraries, like Unsloth, aren’t fully ready for multi-GPU setups, and I’m not sure how challenging it is to effectively use multiple GPUs. If...
2025-06-18T07:49:38
https://www.reddit.com/r/LocalLLaMA/comments/1lebaf0/choosing_between_two_h100_vs_one_h200/
Significant_Income_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lebaf0
false
null
t3_1lebaf0
/r/LocalLLaMA/comments/1lebaf0/choosing_between_two_h100_vs_one_h200/
false
false
self
3
null
Need advice for a knowledge-rich model
5
First, I am a beginner in this field, and I understand that my assumptions may be completely wrong. I have been working in the business continuity field for companies, and I am trying to introduce an LLM to create plans (BCPs) for existing important customers to prepare for various risks, such as natural disasters, accide...
2025-06-18T07:30:43
https://www.reddit.com/r/LocalLLaMA/comments/1leb0mq/need_an_advice_for_knowledge_rich_model/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leb0mq
false
null
t3_1leb0mq
/r/LocalLLaMA/comments/1leb0mq/need_an_advice_for_knowledge_rich_model/
false
false
self
5
null
If NotebookLM were Agentic
12
Hi r/LocalLLaMA ! https://reddit.com/link/1leamks/video/yak8abh4xm7f1/player At [Morphik](https://morphik.ai), we're dedicated to building the best RAG and document-processing systems in the world. Morphik works particularly well with visual data. As a challenge, I was trying to get it to solve a Where's Waldo puzzle...
2025-06-18T07:04:47
https://www.reddit.com/r/LocalLLaMA/comments/1leamks/if_notebooklm_were_agentic/
Advanced_Army4706
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leamks
false
null
t3_1leamks
/r/LocalLLaMA/comments/1leamks/if_notebooklm_were_agentic/
false
false
self
12
null
Easily run multiple local llama.cpp servers with FlexLLama
20
Hi everyone. I’ve been working on a lightweight tool called **FlexLLama** that makes it really easy to run multiple llama.cpp instances locally. It’s open-source and it lets you run multiple llama.cpp models at once (even on different GPUs) and puts them all behind a single OpenAI compatible API - so you never have to ...
2025-06-18T06:57:55
https://www.reddit.com/r/LocalLLaMA/comments/1leaip7/easily_run_multiple_local_llamacpp_servers_with/
yazoniak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leaip7
false
null
t3_1leaip7
/r/LocalLLaMA/comments/1leaip7/easily_run_multiple_local_llamacpp_servers_with/
false
false
self
20
{'enabled': False, 'images': [{'id': 'tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM.png?width=108&crop=smart&auto=webp&s=a2119ef5dd658da93637d7b88d6e576bf05f7ed8', 'width': 108}, {'height': 108, 'url': 'h...
Is CentML shutting down?
1
[removed]
2025-06-18T06:41:59
https://www.reddit.com/r/LocalLLaMA/comments/1lea9xk/is_centml_shutting_down/
Anis_Mekacher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lea9xk
false
null
t3_1lea9xk
/r/LocalLLaMA/comments/1lea9xk/is_centml_shutting_down/
false
false
https://a.thumbs.redditm…7JIP7ruRm4t8.jpg
1
null
What are folks' favorite base models for tuning right now?
11
I've got 2x3090 on the way and have some text corpuses I'm interested in fine-tuning some base models on. What are the current favorite base models, both for general purpose and for writing specifically, if there are any that excel? I'm currently looking at Gemma 2 9B or maybe Mistral Small 3.1 24B. I've got some relativel...
2025-06-18T06:25:23
https://www.reddit.com/r/LocalLLaMA/comments/1lea11k/what_are_folks_favorite_base_models_for_tuning/
CharlesStross
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lea11k
false
null
t3_1lea11k
/r/LocalLLaMA/comments/1lea11k/what_are_folks_favorite_base_models_for_tuning/
false
false
self
11
null
2xH100 vs 1xH200 for LLM fine-tuning - which is better?
1
[removed]
2025-06-18T06:12:55
https://www.reddit.com/r/LocalLLaMA/comments/1le9u0c/2xh100_vs_1xh200_for_llm_finetuning_which_is/
Significant_Income_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le9u0c
false
null
t3_1le9u0c
/r/LocalLLaMA/comments/1le9u0c/2xh100_vs_1xh200_for_llm_finetuning_which_is/
false
false
self
1
null
Post Ego Intelligence AI starter kit
1
[removed]
2025-06-18T05:50:36
https://www.reddit.com/r/LocalLLaMA/comments/1le9hfi/post_ego_intelligence_ai_starter_kit/
Final_Growth_8288
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le9hfi
false
null
t3_1le9hfi
/r/LocalLLaMA/comments/1le9hfi/post_ego_intelligence_ai_starter_kit/
false
false
self
1
null
GMK X2 (AMD Max+ 395 w/128GB) first impressions.
90
I've had an X2 for about a day. These are my first impressions of it, including a bunch of numbers comparing it to other GPUs I have. First, the people who were claiming that you couldn't load a model larger than 64GB because it would need to use 64GB of RAM for the CPU too are wrong. That's simple user error. That is simp...
2025-06-18T05:28:44
https://www.reddit.com/r/LocalLLaMA/comments/1le951x/gmk_x2amd_max_395_w128gb_first_impressions/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le951x
false
null
t3_1le951x
/r/LocalLLaMA/comments/1le951x/gmk_x2amd_max_395_w128gb_first_impressions/
false
false
self
90
null
Need advice for a knowledge-rich model
1
[removed]
2025-06-18T04:57:15
https://www.reddit.com/r/LocalLLaMA/comments/1le8mdg/need_an_advice_for_knowledge_rich_model/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le8mdg
false
null
t3_1le8mdg
/r/LocalLLaMA/comments/1le8mdg/need_an_advice_for_knowledge_rich_model/
false
false
self
1
null
Searching for world knowledge rich model
1
[removed]
2025-06-18T04:50:23
https://www.reddit.com/r/LocalLLaMA/comments/1le8i54/searching_for_world_knowledge_rich_model/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le8i54
false
null
t3_1le8i54
/r/LocalLLaMA/comments/1le8i54/searching_for_world_knowledge_rich_model/
false
false
self
1
null
Please recommend "World knowledge rich" model
1
[removed]
2025-06-18T04:46:03
https://www.reddit.com/r/LocalLLaMA/comments/1le8fff/please_recommend_world_knowledge_rich_model/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le8fff
false
null
t3_1le8fff
/r/LocalLLaMA/comments/1le8fff/please_recommend_world_knowledge_rich_model/
false
false
self
1
null
Testing the limits of base apple silicon.
4
I have an old M1 Mac with 8GB RAM. If anyone has tested its limits, how far were you able to go with reasonable performance? Also, I discovered MLX fine-tuning specifically for Mac, but I am unsure if I will be able to run it. I was able to run qwen 3b on it with some spikes in usage; it was okayish. I wonder if any speci...
2025-06-18T04:28:02
https://www.reddit.com/r/LocalLLaMA/comments/1le84f5/testing_the_limits_of_base_apple_silicon/
ILoveMy2Balls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le84f5
false
null
t3_1le84f5
/r/LocalLLaMA/comments/1le84f5/testing_the_limits_of_base_apple_silicon/
false
false
self
4
null
Is it possible to run a model with multiple GPUs, and would that be much more powerful?
0
Is it possible to run a model with multiple GPUs, and would that be much more powerful?
2025-06-18T04:15:37
https://www.reddit.com/r/LocalLLaMA/comments/1le7wig/is_it_possible_to_run_a_model_with_multiple_gpus/
0y0s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le7wig
false
null
t3_1le7wig
/r/LocalLLaMA/comments/1le7wig/is_it_possible_to_run_a_model_with_multiple_gpus/
false
false
self
0
null
What's your analysis of unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF locally
21
It's been almost 20 days since the release. I'm considering buying an RTX 5090-based PC this winter to use the BF16 or Q8_K_XL unsloth version. My main use cases are document processing, summarization (context length will not be an issue since I'm using a chunking algorithm for shorter chunks) and trading. Does it justify its b...
2025-06-18T02:47:47
https://www.reddit.com/r/LocalLLaMA/comments/1le69tx/whats_your_analysis_of/
ready_to_fuck_yeahh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le69tx
false
null
t3_1le69tx
/r/LocalLLaMA/comments/1le69tx/whats_your_analysis_of/
false
false
self
21
null
Can I run a higher parameter model?
0
With my current setup I am able to run the DeepSeek R1 0528 Qwen 8B model at about 12 tokens/second. Can I move up to a higher-parameter model, or will I be getting 0.5 tokens/second?
2025-06-18T02:45:50
https://www.reddit.com/r/LocalLLaMA/comments/1le68fs/can_i_run_a_higher_parameter_model/
Ok_Most9659
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le68fs
false
null
t3_1le68fs
/r/LocalLLaMA/comments/1le68fs/can_i_run_a_higher_parameter_model/
false
false
self
0
null
MacOS 26 Foundation Model Bindings for Node.js
15
Node.js bindings for the 3B model that ships with the macOS 26 beta Github: [https://github.com/Meridius-Labs/apple-on-device-ai](https://github.com/Meridius-Labs/apple-on-device-ai) License: MIT
2025-06-18T02:24:04
https://v.redd.it/8cy6sg80jl7f1
aitookmyj0b
v.redd.it
1970-01-01T00:00:00
0
{}
1le5t5k
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8cy6sg80jl7f1/DASHPlaylist.mpd?a=1752805461%2CZGM1MzRjODlkYjUxM2U5OTZjMjljOTM5NmZhMGI1OTJlYWYwM2ZkN2MxOGViMjFhNTk1YjYzZjY1NjQwMDdkYg%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/8cy6sg80jl7f1/DASH_1080.mp4?source=fallback', 'h...
t3_1le5t5k
/r/LocalLLaMA/comments/1le5t5k/macos_26_foundation_model_bindings_for_nodejs/
false
false
https://external-preview…ec953ae6b609a893
15
{'enabled': False, 'images': [{'id': 'ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS.png?width=108&crop=smart&format=pjpg&auto=webp&s=fe421c53825788dfc905c769ca49135bf763e...
need advice for model selection/parameters and architecture for a handwritten document analysis and management Flask app
5
so, I've been working on this thing for a couple months. right now, it runs Flask in Gunicorn, and what it does is: * monitor a directory for new/incoming files (PDF or HTML) * if there's a new file, shrinks it to a size that doesn't cause me to run out of VRAM on my 5060Ti 16GB * uses a first pass of Qwen2.5-VL-3B-In...
2025-06-18T01:56:05
https://www.reddit.com/r/LocalLLaMA/comments/1le593y/need_advice_for_model_selectionparameters_and/
starkruzr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le593y
false
null
t3_1le593y
/r/LocalLLaMA/comments/1le593y/need_advice_for_model_selectionparameters_and/
false
false
self
5
null
Want to implement LLM’s in Design-Construction Industry
1
[removed]
2025-06-18T01:44:44
https://www.reddit.com/r/LocalLLaMA/comments/1le514a/want_to_implement_llms_in_designconstruction/
Acrobatic-Bat-2243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le514a
false
null
t3_1le514a
/r/LocalLLaMA/comments/1le514a/want_to_implement_llms_in_designconstruction/
false
false
self
1
null
Apple Intelligence models, with just 3 billion parameters, appear quite capable of text rewriting and proofreading compared to small local LLMs
1
[removed]
2025-06-18T00:57:12
https://www.reddit.com/r/LocalLLaMA/comments/1le43de/apple_intelligence_models_with_just_3_billion/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le43de
false
null
t3_1le43de
/r/LocalLLaMA/comments/1le43de/apple_intelligence_models_with_just_3_billion/
false
false
self
1
null
My post about OpenAI is being Removed! Why?
1
[removed]
2025-06-18T00:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1le41o3/my_post_about_openai_is_being_removed_why/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le41o3
false
null
t3_1le41o3
/r/LocalLLaMA/comments/1le41o3/my_post_about_openai_is_being_removed_why/
false
false
self
1
null
Training LLM to write novels
1
[removed]
2025-06-18T00:41:26
https://www.reddit.com/r/LocalLLaMA/comments/1le3rpy/training_llm_to_write_novels/
pchris131313
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le3rpy
false
null
t3_1le3rpy
/r/LocalLLaMA/comments/1le3rpy/training_llm_to_write_novels/
false
false
self
1
null
Which model would you use for my use case
1
Hi everyone, I'm looking for the best model I can run locally given my usage and my constraints. I have a laptop with a 3080 Laptop GPU (16GB VRAM) and 32GB RAM. I'm building a system with some agents and I'm stuck at the last step. This last step is asking an agent to fix code (C code). I send it the code function by...
2025-06-18T00:29:07
https://www.reddit.com/r/LocalLLaMA/comments/1le3if8/which_model_would_you_use_for_my_use_case/
Kind-Veterinarian437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le3if8
false
null
t3_1le3if8
/r/LocalLLaMA/comments/1le3if8/which_model_would_you_use_for_my_use_case/
false
false
self
1
null
Which model would you use for my use case
1
[deleted]
2025-06-18T00:28:18
[deleted]
1970-01-01T00:00:00
0
{}
1le3hso
false
null
t3_1le3hso
/r/LocalLLaMA/comments/1le3hso/which_model_would_you_use_for_my_use_case/
false
false
default
1
null
Cheap dual Radeon, 60 tk/s Qwen3-30B-A3B
69
Got a new RX 9060 XT 16GB. Kept my old RX 6600 8GB to increase the VRAM pool. Quite surprised the 30B MoE model runs much faster than running on CPU with partial GPU offload.
2025-06-18T00:19:28
https://v.redd.it/fdxzcidwwk7f1
dsjlee
v.redd.it
1970-01-01T00:00:00
0
{}
1le3b9e
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fdxzcidwwk7f1/DASHPlaylist.mpd?a=1752797983%2CNjY3YmM0NTI1OWJjY2FhNDFlYjAwYzdlMTJhMWM0ZjNmZWYzZThiNmU1MjVkNDJlMmExMThjMjE3MTY0YmEyNg%3D%3D&v=1&f=sd', 'duration': 55, 'fallback_url': 'https://v.redd.it/fdxzcidwwk7f1/DASH_1080.mp4?source=fallback', 'h...
t3_1le3b9e
/r/LocalLLaMA/comments/1le3b9e/cheap_dual_radeon_60_tks_qwen330ba3b/
false
false
https://external-preview…bd0db971e7707e5d
69
{'enabled': False, 'images': [{'id': 'dzU0NXF1ZXd3azdmMRL6N26Lhnz9zx3CK2rpMgt595CDjr45ninPojQsc6H2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dzU0NXF1ZXd3azdmMRL6N26Lhnz9zx3CK2rpMgt595CDjr45ninPojQsc6H2.png?width=108&crop=smart&format=pjpg&auto=webp&s=870154ed1dc1f878a765fbc235ad6179132b9...
Which model would you use for my use case
1
[removed]
2025-06-18T00:18:33
https://www.reddit.com/r/LocalLLaMA/comments/1le3ak5/which_model_would_you_use_for_my_use_case/
Kind-Veterinarian437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le3ak5
false
null
t3_1le3ak5
/r/LocalLLaMA/comments/1le3ak5/which_model_would_you_use_for_my_use_case/
false
false
self
1
null
Which model would you use for my case ?
1
[removed]
2025-06-18T00:14:39
https://www.reddit.com/r/LocalLLaMA/comments/1le37kr/which_model_would_you_use_for_my_case/
Kind-Veterinarian437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le37kr
false
null
t3_1le37kr
/r/LocalLLaMA/comments/1le37kr/which_model_would_you_use_for_my_case/
false
false
self
1
null
Would love to know if you consider gemma27b the best small model out there?
56
Because I haven't found another that doesn't hiccup much under normal conversation and basic usage; I personally think it's the best out there. What about y'all? (Small as in 32B max.)
2025-06-18T00:05:43
https://www.reddit.com/r/LocalLLaMA/comments/1le30yi/would_love_to_know_if_you_consider_gemma27b_the/
Ok-Internal9317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le30yi
false
null
t3_1le30yi
/r/LocalLLaMA/comments/1le30yi/would_love_to_know_if_you_consider_gemma27b_the/
false
false
self
56
null
Thinking about switching from cloud based AI to sth more local
1
[removed]
2025-06-17T23:49:01
https://www.reddit.com/r/LocalLLaMA/comments/1le2o3u/thinking_about_switching_from_cloud_based_ai_to/
Living_Helicopter745
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le2o3u
false
null
t3_1le2o3u
/r/LocalLLaMA/comments/1le2o3u/thinking_about_switching_from_cloud_based_ai_to/
false
false
self
1
null
[Video Guide] How to Sync ChatterBox TTS with Subtitles in ComfyUI (New SRT TTS Node)
1
[removed]
2025-06-17T23:18:37
https://youtu.be/VyOawMrCB1g?si=n-8eDRyRGUDeTkvz
diogodiogogod
youtu.be
1970-01-01T00:00:00
0
{}
1le207z
false
{'oembed': {'author_name': 'Diogod', 'author_url': 'https://www.youtube.com/@diohgod', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/VyOawMrCB1g?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pictur...
t3_1le207z
/r/LocalLLaMA/comments/1le207z/video_guide_how_to_sync_chatterbox_tts_with/
false
false
https://external-preview…812b4788f6b481e9
1
{'enabled': False, 'images': [{'id': 'V-7YJHWzhEehdiWrl8s1cMNo1Jhl3gpnVSAzegR4Tqo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/V-7YJHWzhEehdiWrl8s1cMNo1Jhl3gpnVSAzegR4Tqo.jpeg?width=108&crop=smart&auto=webp&s=958f33acc7c087f55eb86f01081db3e726f10fb0', 'width': 108}, {'height': 162, 'url': '...
Ollama models vs HF models
1
[removed]
2025-06-17T23:01:08
https://www.reddit.com/r/LocalLLaMA/comments/1le1m6k/ollama_models_vs_hf_models/
ComprehensiveBath338
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le1m6k
false
null
t3_1le1m6k
/r/LocalLLaMA/comments/1le1m6k/ollama_models_vs_hf_models/
false
false
self
1
null
Llama.cpp is much faster! Any changes made recently?
217
I've ditched Ollama for about 3 months now, and I've been on a journey testing multiple wrappers. KoboldCPP coupled with llama swap has been good, but I experienced so many hang-ups (I leave my PC running 24/7 to serve AI requests) that I'd wake up almost daily to find Kobold (or its combination with AMD drivers) not working. I ...
2025-06-17T22:17:52
https://www.reddit.com/r/LocalLLaMA/comments/1le0mpb/llamacpp_is_much_faster_any_changes_made_recently/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le0mpb
false
null
t3_1le0mpb
/r/LocalLLaMA/comments/1le0mpb/llamacpp_is_much_faster_any_changes_made_recently/
false
false
self
217
null