title: stringlengths (1 to 300)
score: int64 (0 to 8.54k)
selftext: stringlengths (0 to 41.5k)
created: timestamp[ns]date (2023-04-01 04:30:41 to 2026-03-04 02:14:14)
url: stringlengths (0 to 878)
author: stringlengths (3 to 20)
domain: stringlengths (0 to 82)
edited: timestamp[ns]date (1970-01-01 00:00:00 to 2026-02-19 14:51:53)
gilded: int64 (0 to 2)
gildings: stringclasses (7 values)
id: stringlengths (7 to 7)
locked: bool (2 classes)
media: stringlengths (646 to 1.8k)
name: stringlengths (10 to 10)
permalink: stringlengths (33 to 82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: stringlengths (4 to 213)
ups: int64 (0 to 8.54k)
preview: stringlengths (301 to 5.01k)
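As a minimal sketch of how one might check a record against the column schema above: the field names and bounds below are taken from the schema, but the sample record is an invented stand-in for illustration, not a row from the actual dataset.

```python
# Hypothetical validator for one record against the schema listed above.
# String-length bounds and non-negative int64 ranges come from the schema;
# the sample record is made up for demonstration.
from datetime import datetime

SCHEMA = {
    "title": (1, 300),      # stringlengths (1 to 300)
    "selftext": (0, 41_500),  # stringlengths (0 to 41.5k)
    "id": (7, 7),           # stringlengths (7 to 7)
}

def validate(record: dict) -> bool:
    # Check string-length bounds from the schema
    for field, (lo, hi) in SCHEMA.items():
        if not (lo <= len(record[field]) <= hi):
            return False
    # score and ups are non-negative int64 per the schema
    if record["score"] < 0 or record["ups"] < 0:
        return False
    # created is a timestamp; ISO-8601 parsing must succeed
    datetime.fromisoformat(record["created"])
    return True

sample = {
    "title": "Example post",
    "selftext": "",
    "id": "1l4d8fc",
    "score": 12,
    "ups": 12,
    "created": "2025-06-05T22:57:58",
}
print(validate(sample))  # True
```

The `id` bound of exactly 7 characters and the `t3_`-prefixed `name` field reflect Reddit's base-36 post identifiers, which is why the records pair `1l4d8fc` with `t3_1l4d8fc`.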
A little gpu poor man needing some help
12
Hello my dear friends of opensource llms. I unfortunately encountered a situation to which I can't find any solution. I want to use tensor parallelism with exl2, as I have two rtx 3060. But exl2 quantization only uses one gpu by design, which results in oom errors for me. If somebody could convert the qwen long (https:/...
2025-06-05T22:57:58
https://www.reddit.com/r/LocalLLaMA/comments/1l4d8fc/a_little_gpu_poor_man_needing_some_help/
Flashy_Management962
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4d8fc
false
null
t3_1l4d8fc
/r/LocalLLaMA/comments/1l4d8fc/a_little_gpu_poor_man_needing_some_help/
false
false
self
12
{'enabled': False, 'images': [{'id': 'jP7Lx5njiL0YGj9UteZAtC6ujAbqS9hzjcauwjE7bRY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6xt2dj6pF7ujB6ey8kvUmE-zcpCNrm-RJmVvkmzTjsI.jpg?width=108&crop=smart&auto=webp&s=da5cdf62b7adb5dfd525dd4e7ce5816b62d18d96', 'width': 108}, {'height': 116, 'url': 'h...
Did avian.io go under?
0
Cannot get response from the support and all API requests have been failing for weeks.
2025-06-05T22:57:54
https://www.reddit.com/r/LocalLLaMA/comments/1l4d8dn/did_avianio_go_under/
punkpeye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4d8dn
false
null
t3_1l4d8dn
/r/LocalLLaMA/comments/1l4d8dn/did_avianio_go_under/
false
false
self
0
null
New Quantization Paper: Model-Preserving Adaptive Rounding
1
[removed]
2025-06-05T22:50:45
https://www.reddit.com/r/LocalLLaMA/comments/1l4d2qe/new_quantization_paper_modelpreserving_adaptive/
tsengalb99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4d2qe
false
null
t3_1l4d2qe
/r/LocalLLaMA/comments/1l4d2qe/new_quantization_paper_modelpreserving_adaptive/
false
false
self
1
null
Open sourcing SERAX a file format built specifically for AI data generation
1
[removed]
2025-06-05T22:44:49
https://www.reddit.com/r/LocalLLaMA/comments/1l4cxy6/open_sourcing_serax_a_file_format_built/
VantigeAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4cxy6
false
null
t3_1l4cxy6
/r/LocalLLaMA/comments/1l4cxy6/open_sourcing_serax_a_file_format_built/
false
false
self
1
null
Open sourcing SERAX a file format built specifically for AI data generation
1
[removed]
2025-06-05T22:29:34
https://www.reddit.com/r/LocalLLaMA/comments/1l4clui/open_sourcing_serax_a_file_format_built/
VantigeAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4clui
false
null
t3_1l4clui
/r/LocalLLaMA/comments/1l4clui/open_sourcing_serax_a_file_format_built/
false
false
self
1
null
Do LLMs have opinions?
0
Or do they simply just mirror our inputs, and adhere to instructions in system prompts while mimicking the data from training/fine-tuning? Like people say that LLMs are shown to hold liberal views, but is that not just because the dominant part of the training data is expressions of people holding such views?
2025-06-05T22:17:32
https://www.reddit.com/r/LocalLLaMA/comments/1l4cc2e/do_llms_have_opinions/
WeAllFuckingFucked
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4cc2e
false
null
t3_1l4cc2e
/r/LocalLLaMA/comments/1l4cc2e/do_llms_have_opinions/
false
false
self
0
null
Step-by-step GraphRAG tutorial for multi-hop QA - from the RAG_Techniques repo (16K+ stars)
60
Many people asked for this! Now I have a new step-by-step tutorial on **GraphRAG** in my **RAG\_Techniques** repo on GitHub (16K+ stars), one of the world’s leading RAG resources packed with hands-on tutorials for different techniques. **Why do we need this?** Regular RAG cannot answer hard questions like: *“How di...
2025-06-05T22:08:08
https://www.reddit.com/r/LocalLLaMA/comments/1l4c4hh/stepbystep_graphrag_tutorial_for_multihop_qa_from/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4c4hh
false
null
t3_1l4c4hh
/r/LocalLLaMA/comments/1l4c4hh/stepbystep_graphrag_tutorial_for_multihop_qa_from/
false
false
self
60
{'enabled': False, 'images': [{'id': '5R8NiYOchlJmm8fWG-mcHa7WZPElNUSt07Y6VYeJE6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FX9dUlXm1lJTuauNBIZBuXGNPgFZRyMezMHbvw0SgZc.jpg?width=108&crop=smart&auto=webp&s=732a922b7387388ae884f9b9fab8442f071bea63', 'width': 108}, {'height': 108, 'url': 'h...
iOS app to talk (voice) to self-hosted LLMs
2
2025-06-05T22:06:50
https://v.redd.it/j5f97gebm65f1
lostmsu
v.redd.it
1970-01-01T00:00:00
0
{}
1l4c3ds
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/j5f97gebm65f1/DASHPlaylist.mpd?a=1751753223%2CMTQzNmFjN2Y4MThkOTAzOTg0ZjcxMmM3OGQ5OWU1ZDFjZDRkNDYxYjg5ZTYwZWQ5OWJmMjY5NGY0ZjJkMjFiNQ%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/j5f97gebm65f1/DASH_720.mp4?source=fallback', 'ha...
t3_1l4c3ds
/r/LocalLLaMA/comments/1l4c3ds/ios_app_to_talk_voice_to_selfhosted_llms/
false
false
https://external-preview…b06c4aa03f008575
2
{'enabled': False, 'images': [{'id': 'bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bWx4aHJiZWJtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=108&crop=smart&format=pjpg&auto=webp&s=0e7c96637ab67e6bc63fffcf4374884539f75...
iOS app to talk (voice) to self-hosted LLMs
1
[App Store link](https://apps.apple.com/app/apple-store/id6737482921?pt=127100219&ct=r-locallama&mt=8)
2025-06-05T22:05:46
https://v.redd.it/gwaw7821m65f1
lostmsu
v.redd.it
1970-01-01T00:00:00
0
{}
1l4c2hv
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/gwaw7821m65f1/DASHPlaylist.mpd?a=1751753163%2CZjVjOGY1NWEzY2I3NDA2NDE2Zjg2YzZiZTg4M2I0MzY1ZTI0MGM3OGI5YjcwOWE2N2NiMWM2NjVkYmRhM2ZjMg%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/gwaw7821m65f1/DASH_720.mp4?source=fallback', 'ha...
t3_1l4c2hv
/r/LocalLLaMA/comments/1l4c2hv/ios_app_to_talk_voice_to_selfhosted_llms/
false
false
https://external-preview…9bb2a1f4d391008b
1
{'enabled': False, 'images': [{'id': 'c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c3FmcW83MjFtNjVmMWZeSLuSW_UpsoL8BF3Jy7Fb26rc0Xywrz63KN15RsFb.png?width=108&crop=smart&format=pjpg&auto=webp&s=07c6f539b1eccc3d360f252b8c4ec3d101f9c...
How Fast can I run models.
0
I'm running image processing with gemma 3 27b and getting structured outputs as response, but my present pipeline is awfully slow (I use huggingface for the most part and lmformatenforcer), it processes a batch of 32 images in 5-10 minutes when I get a response of at most 256 tokens per image. Now this is running on 4 A1...
2025-06-05T21:53:00
https://www.reddit.com/r/LocalLLaMA/comments/1l4brna/how_fast_can_i_run_models/
feelin-lonely-1254
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4brna
false
null
t3_1l4brna
/r/LocalLLaMA/comments/1l4brna/how_fast_can_i_run_models/
false
false
self
0
null
Much lower performance for Mistral-Small 24B on RTX 3090 and from deepinfra API
1
Hi friends, I was using deepinfra API and find that [mistralai/Mistral-Small-24B-Instruct-2501](https://deepinfra.com/mistralai/Mistral-Small-24B-Instruct-2501?version=010d42b0ae15e140bf9c5e02ca88273b9c257a89) is a very useful model. But when I deployed the Q4 quantized version on my RTX 3090, it does not work as well....
2025-06-05T21:42:16
https://www.reddit.com/r/LocalLLaMA/comments/1l4biki/much_lower_performance_for_mistralsmall_24b_on/
rumboll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4biki
false
null
t3_1l4biki
/r/LocalLLaMA/comments/1l4biki/much_lower_performance_for_mistralsmall_24b_on/
false
false
self
1
{'enabled': False, 'images': [{'id': 'amgfYGwa2WrQh6GXm5VGkqwQoIMx_3FzVvXwxN_upLs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/A5gAnz2ZdeDVSXFGTXKEP95JRpka9aH-VUOZQCnvxRk.jpg?width=108&crop=smart&auto=webp&s=3318ee60bc67fe35f858ef342ae3ae7487f5b278', 'width': 108}, {'height': 216, 'url': '...
Qwen3-32B is absolutely awesome
1
[removed]
2025-06-05T21:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1l4arx4/qwen332b_is_absolutely_awesome/
gtresselt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4arx4
false
null
t3_1l4arx4
/r/LocalLLaMA/comments/1l4arx4/qwen332b_is_absolutely_awesome/
false
false
self
1
null
Looking for Advice- Starting point GPU
1
[removed]
2025-06-05T21:10:08
https://www.reddit.com/r/LocalLLaMA/comments/1l4aqbm/looking_for_advice_starting_point_gpu/
Ok-Cup-608
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4aqbm
false
null
t3_1l4aqbm
/r/LocalLLaMA/comments/1l4aqbm/looking_for_advice_starting_point_gpu/
false
false
self
1
null
Looking for Advice- Starting point running Local LLM/Training
1
[removed]
2025-06-05T21:08:19
https://www.reddit.com/r/LocalLLaMA/comments/1l4aopo/looking_for_advice_starting_point_running_local/
Ok-Cup-608
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4aopo
false
null
t3_1l4aopo
/r/LocalLLaMA/comments/1l4aopo/looking_for_advice_starting_point_running_local/
false
false
self
1
null
[R] Model-Preserving Adaptive Rounding
1
[removed]
2025-06-05T20:44:06
https://www.reddit.com/r/LocalLLaMA/comments/1l4a3d2/r_modelpreserving_adaptive_rounding/
tsengalb99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4a3d2
false
null
t3_1l4a3d2
/r/LocalLLaMA/comments/1l4a3d2/r_modelpreserving_adaptive_rounding/
false
false
self
1
null
Embeddings vs Reasoning vs Thinking Models?
1
Please explain me in plain English the difference between these types of models from the training perspective. Also what use cases are best solved by each type?
2025-06-05T20:35:18
https://www.reddit.com/r/LocalLLaMA/comments/1l49viu/embeddings_vs_reasoning_vs_thinking_models/
cloudcreator
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l49viu
false
null
t3_1l49viu
/r/LocalLLaMA/comments/1l49viu/embeddings_vs_reasoning_vs_thinking_models/
false
false
self
1
null
Mac Studio Ultra vs RTX Pro on thread ripper
1
[removed]
2025-06-05T20:24:39
https://www.reddit.com/r/LocalLLaMA/comments/1l49m2k/mac_studio_ultra_vs_rtx_pro_on_thread_ripper/
Dry-Vermicelli-682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l49m2k
false
null
t3_1l49m2k
/r/LocalLLaMA/comments/1l49m2k/mac_studio_ultra_vs_rtx_pro_on_thread_ripper/
false
false
self
1
null
What is the best way to sell a RTX 6000 Pro blackwell (new) and the average going price?
0
2025-06-05T20:09:35
https://i.redd.it/f9is7bfe165f1.jpeg
traderjay_toronto
i.redd.it
1970-01-01T00:00:00
0
{}
1l498jv
false
null
t3_1l498jv
/r/LocalLLaMA/comments/1l498jv/what_is_the_best_way_to_sell_a_rtx_6000_pro/
false
false
default
0
{'enabled': True, 'images': [{'id': 'f9is7bfe165f1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?width=108&crop=smart&auto=webp&s=e49e7f1e682f1e8f2af40c9a5a1cf15c4c9df896', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/f9is7bfe165f1.jpeg?width=216&crop=smart&auto=w...
Model defaults Benchmark - latest version of {technology}.
0
API endpoints, opinionated frameworks, available SDK methods. From an agentic coding/vibe coding perspective - heavily fine tuned models stubbornly enforce outdated solutions. Is there any project/benchmark that lets users subscribe to model updates? - Anthropic's models not knowing what MCP is, - Gemini 2.5 pro enforc...
2025-06-05T19:58:47
https://www.reddit.com/r/LocalLLaMA/comments/1l48yz9/model_defaults_benchmark_latest_version_of/
secopsml
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l48yz9
false
null
t3_1l48yz9
/r/LocalLLaMA/comments/1l48yz9/model_defaults_benchmark_latest_version_of/
false
false
self
0
null
Is it dumb to build a server with 7x 5060 Ti?
13
I'm considering putting together a system with 7x 5060 Ti to get the most cost-effective VRAM. This will have to be an open frame with riser cables and an Epyc server motherboard with 7 PCIe slots. The idea was to have capacity for medium size models that exceed 24GB but fit in ~100GB VRAM. I think I can put this m...
2025-06-05T19:33:32
https://www.reddit.com/r/LocalLLaMA/comments/1l48cnk/is_it_dumb_to_build_a_server_with_7x_5060_ti/
vector76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l48cnk
false
null
t3_1l48cnk
/r/LocalLLaMA/comments/1l48cnk/is_it_dumb_to_build_a_server_with_7x_5060_ti/
false
false
self
13
null
1000(!!!) tps on Maverick by Deepinfra
1
[removed]
2025-06-05T19:14:04
https://i.redd.it/pmzccp0ir55f1.jpeg
temirulan
i.redd.it
1970-01-01T00:00:00
0
{}
1l47vbf
false
null
t3_1l47vbf
/r/LocalLLaMA/comments/1l47vbf/1000_tps_on_maverick_by_deepinfra/
false
false
default
1
{'enabled': True, 'images': [{'id': 'pmzccp0ir55f1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/pmzccp0ir55f1.jpeg?width=108&crop=smart&auto=webp&s=3bbb2ca6449dc0870bf1cf7a599f5e625978b6a3', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/pmzccp0ir55f1.jpeg?width=216&crop=smart&auto=we...
Fine tune result problem
1
[removed]
2025-06-05T19:03:24
https://www.reddit.com/r/LocalLLaMA/comments/1l47lks/fine_tune_result_problem/
ithe1975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l47lks
false
null
t3_1l47lks
/r/LocalLLaMA/comments/1l47lks/fine_tune_result_problem/
false
false
self
1
null
With 8gb vram: qwen3 8b q6 or 32b iq1?
4
Both end up being about the same size and fit just enough on the vram provided the kv cache is offloaded. I tried looking for performance of models at equal memory footprint but was unable to. Any advice is much appreciated.
2025-06-05T18:57:34
https://www.reddit.com/r/LocalLLaMA/comments/1l47fv0/with_8gb_vram_qwen3_8b_q6_or_32b_iq1/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l47fv0
false
null
t3_1l47fv0
/r/LocalLLaMA/comments/1l47fv0/with_8gb_vram_qwen3_8b_q6_or_32b_iq1/
false
false
self
4
null
Is Qwen the new face of local LLMs?
75
The Qwen team has been killing it. Every new model is a heavy hitter and every new model becomes SOTA for that category. I've been seeing way more fine tunes of Qwen models than LLaMa lately. LocalQwen coming soon lol?
2025-06-05T18:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1l47dav/is_qwen_the_new_face_of_local_llms/
Due-Employee4744
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l47dav
false
null
t3_1l47dav
/r/LocalLLaMA/comments/1l47dav/is_qwen_the_new_face_of_local_llms/
false
false
self
75
null
1000(!!!)tps. Deepinfra went wild on Maverick throughput.
3
2025-06-05T18:54:07
https://i.redd.it/52bm64zxn55f1.jpeg
temirulan
i.redd.it
1970-01-01T00:00:00
0
{}
1l47clk
false
null
t3_1l47clk
/r/LocalLLaMA/comments/1l47clk/1000tps_deepinfra_went_wild_on_maverick_throughput/
false
false
default
3
{'enabled': True, 'images': [{'id': '52bm64zxn55f1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/52bm64zxn55f1.jpeg?width=108&crop=smart&auto=webp&s=0d188dbe2d73d6411779816164703db9a59587ef', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/52bm64zxn55f1.jpeg?width=216&crop=smart&auto=we...
M🐢st Efficient RAG Framework for Offline Local Rag?
1
[removed]
2025-06-05T18:37:53
https://www.reddit.com/r/LocalLLaMA/comments/1l46xww/mst_efficient_rag_framework_for_offline_local_rag/
taper_fade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l46xww
false
null
t3_1l46xww
/r/LocalLLaMA/comments/1l46xww/mst_efficient_rag_framework_for_offline_local_rag/
false
false
self
1
null
M🐢st Efficient RAG Framework for Offline Local Rag?
1
[removed]
2025-06-05T18:36:33
https://www.reddit.com/r/LocalLLaMA/comments/1l46wsw/mst_efficient_rag_framework_for_offline_local_rag/
taper_fade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l46wsw
false
null
t3_1l46wsw
/r/LocalLLaMA/comments/1l46wsw/mst_efficient_rag_framework_for_offline_local_rag/
false
false
self
1
null
smollm is crazy
0
2025-06-05T18:14:57
https://v.redd.it/l1u09vctg55f1
3d_printing_kid
v.redd.it
1970-01-01T00:00:00
0
{}
1l46d96
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/l1u09vctg55f1/DASHPlaylist.mpd?a=1751739310%2CN2MwYWIzNDBkZWRhNjdlZDE5OWUxNzI0MjIzNDU2OWEyYmU3OGI2NGVhNmVkNmY5NjNhMTgwNTc0YWE0ZWY1Yw%3D%3D&v=1&f=sd', 'duration': 248, 'fallback_url': 'https://v.redd.it/l1u09vctg55f1/DASH_480.mp4?source=fallback', 'h...
t3_1l46d96
/r/LocalLLaMA/comments/1l46d96/smollm_is_crazy/
false
false
https://external-preview…435060dce81459ae
0
{'enabled': False, 'images': [{'id': 'Nm91OHV2Y3RnNTVmMYC1oXT879drMGhz7A_iST_bdDJ62X2-qbCshqC67I28', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Nm91OHV2Y3RnNTVmMYC1oXT879drMGhz7A_iST_bdDJ62X2-qbCshqC67I28.png?width=108&crop=smart&format=pjpg&auto=webp&s=62b3ffc701ba66c8f9f03681676e828618f01...
What's the best model for playing a role right now , that will fit on 8gbvram?
2
I'm not looking for anything that tends to talk naughty on purpose, but unrestricted is probably best anyway. I just want to be able to tell it, You are character x, your backstory is y, and then feed it with a conversation history to this point and have it reliably take on its role. I have other safeguards in place t...
2025-06-05T17:49:13
https://www.reddit.com/r/LocalLLaMA/comments/1l45p2d/whats_the_best_model_for_playing_a_role_right_now/
opUserZero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l45p2d
false
null
t3_1l45p2d
/r/LocalLLaMA/comments/1l45p2d/whats_the_best_model_for_playing_a_role_right_now/
false
false
self
2
null
What's your local LLM agent set-up for coding? Looking for suggestions and workflows.
1
[removed]
2025-06-05T17:49:05
https://www.reddit.com/r/LocalLLaMA/comments/1l45oyh/whats_your_local_llm_agent_setup_for_coding/
accountforHW
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l45oyh
false
null
t3_1l45oyh
/r/LocalLLaMA/comments/1l45oyh/whats_your_local_llm_agent_setup_for_coding/
false
false
self
1
null
how are BERT models used in anomaly detection?
1
[removed]
2025-06-05T17:39:42
https://www.reddit.com/r/LocalLLaMA/comments/1l45g68/how_are_bert_models_used_in_anomaly_detection/
sybau6969
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l45g68
false
null
t3_1l45g68
/r/LocalLLaMA/comments/1l45g68/how_are_bert_models_used_in_anomaly_detection/
false
false
self
1
null
Sparse Transformers: Run 2x faster LLM with 30% lesser memory
496
We have built fused operator kernels for structured contextual sparsity based on the amazing works of LLM in a Flash (Apple) and Deja Vu (Zichang et al). We avoid loading and computing activations with feed forward layer weights whose outputs will eventually be zeroed out. The result? We are seeing **5X faster MLP** l...
2025-06-05T17:07:31
https://github.com/NimbleEdge/sparse_transformers
Economy-Mud-6626
github.com
1970-01-01T00:00:00
0
{}
1l44lw8
false
null
t3_1l44lw8
/r/LocalLLaMA/comments/1l44lw8/sparse_transformers_run_2x_faster_llm_with_30/
false
false
https://b.thumbs.redditm…86P1hfUD2HpY.jpg
496
{'enabled': False, 'images': [{'id': '1on1N3jH_bYn3YHPp1eKzmiBavqd33UEWggC4ESEjWE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XkyXf73h0xcotkA0Ymuqhjk48O0f-bM4vEg5RHnYOlk.jpg?width=108&crop=smart&auto=webp&s=374d8947fd517973f24741b4a1ef65d5035daeb2', 'width': 108}, {'height': 108, 'url': 'h...
Mac Air M2 users or lower. What’s the optimal model/tool to run to get started with LocalLLaMa
1
[removed]
2025-06-05T16:55:14
https://www.reddit.com/r/LocalLLaMA/comments/1l44agk/mac_air_m2_users_or_lower_whats_the_optimal/
picturpoet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l44agk
false
null
t3_1l44agk
/r/LocalLLaMA/comments/1l44agk/mac_air_m2_users_or_lower_whats_the_optimal/
false
false
self
1
null
How can I connect to a local LLM from my iPhone?
10
I've got LM Studio running on my PC and I'm wondering if anyone knows a way to connect to it from iPhone? I've looked around and tried several apps but haven't found one that lets you specify the API URL.
2025-06-05T16:48:58
https://www.reddit.com/r/LocalLLaMA/comments/1l4450t/how_can_i_connect_to_a_local_llm_from_my_iphone/
NonYa_exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l4450t
false
null
t3_1l4450t
/r/LocalLLaMA/comments/1l4450t/how_can_i_connect_to_a_local_llm_from_my_iphone/
false
false
self
10
null
New LLM trained to reason on chemistry from language: first step towards scientific agents
51
Some interesting tricks in the paper to make it good at a specific scientific domain, has cool applications like retrosynthesis (how do I get to this molecule) or reaction prediction (what do I get from A + B?), and everything is open source!
2025-06-05T16:24:29
https://www.nature.com/articles/d41586-025-01753-1
clefourrier
nature.com
1970-01-01T00:00:00
0
{}
1l43ivu
false
null
t3_1l43ivu
/r/LocalLLaMA/comments/1l43ivu/new_llm_trained_to_reason_on_chemistry_from/
false
false
default
51
{'enabled': False, 'images': [{'id': 'h8pBBOTpdLNMV6niaV1bR_1yNoR-3Ky7Xs63nebLUdw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/yIrTUZRJVfTjMLKREn3vBHXx0YcQC6rt4cf6CzSytrI.jpg?width=108&crop=smart&auto=webp&s=16b3e1b2125dfb444423e69d3215eab0aa80c41e', 'width': 108}, {'height': 121, 'url': 'h...
New LLM trained to reason on chemistry from language: first step towards scientific agents
1
[removed]
2025-06-05T16:22:42
https://x.com/andrewwhite01/status/1930652479039099072
clefourrier
x.com
1970-01-01T00:00:00
0
{}
1l43hb1
false
null
t3_1l43hb1
/r/LocalLLaMA/comments/1l43hb1/new_llm_trained_to_reason_on_chemistry_from/
false
false
default
1
null
Sarvam AI (indian startup) is likely pulling off massive "download farming" in HF
1
[removed]
2025-06-05T16:19:45
https://www.reddit.com/r/LocalLLaMA/comments/1l43emc/sarvam_ai_indian_startup_is_likely_pulling_of/
Ortho-BenzoPhenone
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l43emc
false
null
t3_1l43emc
/r/LocalLLaMA/comments/1l43emc/sarvam_ai_indian_startup_is_likely_pulling_of/
false
false
https://b.thumbs.redditm…08ryWxkodWzM.jpg
1
null
Looking for UI that can store and reference characters easily
3
I am a relative neophyte to locally run llms. I've been using them for storytelling, but obviously they get confused after they get close to character limit. I've just started playing around with silly tavern via oobabooga which seems like a popular option, but are there any other uis that are relatively easy to set up t...
2025-06-05T16:00:20
https://www.reddit.com/r/LocalLLaMA/comments/1l42woy/looking_for_ui_that_can_store_and_reference/
Haddock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l42woy
false
null
t3_1l42woy
/r/LocalLLaMA/comments/1l42woy/looking_for_ui_that_can_store_and_reference/
false
false
self
3
null
DeepSeek’s new R1-0528-Qwen3-8B is the most intelligent 8B parameter model yet, but not by much: Alibaba’s own Qwen3 8B is just one point behind
114
https://preview.redd.it/… your thoughts?
2025-06-05T15:12:32
https://www.reddit.com/r/LocalLLaMA/comments/1l41p1x/deepseeks_new_r10528qwen38b_is_the_most/
ApprehensiveAd3629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l41p1x
false
null
t3_1l41p1x
/r/LocalLLaMA/comments/1l41p1x/deepseeks_new_r10528qwen38b_is_the_most/
false
false
https://b.thumbs.redditm…lxbGrth9XgQk.jpg
114
{'enabled': False, 'images': [{'id': 'qa-h-2yE89JD5_ETAyW_L2wANYsMBO04I2h5j3k3Q58', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/wWklwlAOa2ZfKIhxn9DXW0SWlAhj78brp-TpzL-wtyA.jpg?width=108&crop=smart&auto=webp&s=796695bf5a1404fa79da19be8121139c127b807d', 'width': 108}, {'height': 107, 'url': 'h...
AI agent evaluation painpoints for developers
1
[removed]
2025-06-05T15:01:37
https://www.reddit.com/r/LocalLLaMA/comments/1l41eyp/ai_agent_evaluation_painpoints_for_developers/
NoAdministration4196
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l41eyp
false
null
t3_1l41eyp
/r/LocalLLaMA/comments/1l41eyp/ai_agent_evaluation_painpoints_for_developers/
false
false
self
1
null
Programming using LLMs is the damnedest thing…
1
[removed]
2025-06-05T14:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1l41cx7/programming_using_llms_is_the_damnedest_thing/
ETBiggs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l41cx7
false
null
t3_1l41cx7
/r/LocalLLaMA/comments/1l41cx7/programming_using_llms_is_the_damnedest_thing/
false
false
self
1
null
What are the biggest pain points when evaluating AI agents ?
1
[removed]
2025-06-05T14:58:59
https://www.reddit.com/r/LocalLLaMA/comments/1l41cc8/what_are_the_biggest_pain_points_when_evaluating/
NoAdministration4196
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l41cc8
false
null
t3_1l41cc8
/r/LocalLLaMA/comments/1l41cc8/what_are_the_biggest_pain_points_when_evaluating/
false
false
self
1
null
What's the cheapest setup for running full Deepseek R1
110
Looking at how DeepSeek is performing, I'm thinking of setting it up locally. What's the cheapest way for setting it up locally so it will have reasonable performance? (10-15 t/s?) I was thinking about 2x Epyc with DDR4 3200, because prices seem reasonable right now for 1TB of RAM - but I'm not sure about the performance. ...
2025-06-05T14:25:05
https://www.reddit.com/r/LocalLLaMA/comments/1l40ip8/whats_the_cheapest_setup_for_running_full/
Wooden_Yam1924
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l40ip8
false
null
t3_1l40ip8
/r/LocalLLaMA/comments/1l40ip8/whats_the_cheapest_setup_for_running_full/
false
false
self
110
null
Hybrid setup for reasoning
9
I want to make for myself a chat assistant that would use qwen3 8b for reasoning tokens and then stop when it gets the end of thought token, then feed that to qwen3 30b for the rest. The idea being that i dont mind reading while the text is being generated but dont like to wait for it to load. I know there is no free l...
2025-06-05T14:22:32
https://www.reddit.com/r/LocalLLaMA/comments/1l40gij/hybrid_setup_for_reasoning/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l40gij
false
null
t3_1l40gij
/r/LocalLLaMA/comments/1l40gij/hybrid_setup_for_reasoning/
false
false
self
9
null
I wrote a little script to automate commit messages
20
I wrote a little script to automate commit messages This might be pretty lame, but this is the first time I've actually done any scripting with LLMs to do some task for me. This is just for a personal project git repo, so the stakes are as low as can be for the accuracy of these commit messages. I feel like this is a ...
2025-06-05T14:12:44
https://i.redd.it/shflqezx845f1.png
aiueka
i.redd.it
1970-01-01T00:00:00
0
{}
1l40835
false
null
t3_1l40835
/r/LocalLLaMA/comments/1l40835/i_wrote_a_little_script_to_automate_commit/
false
false
default
20
{'enabled': True, 'images': [{'id': 'shflqezx845f1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/shflqezx845f1.png?width=108&crop=smart&auto=webp&s=7d3e445d65a5f1a124433b2066a0eb1feea84392', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/shflqezx845f1.png?width=216&crop=smart&auto=web...
Best LLM for a RTX 5090 + 64 GB RAM
1
[removed]
2025-06-05T14:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1l405nq/best_llm_for_a_rtx_5090_64_gb_ram/
tomxposed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l405nq
false
null
t3_1l405nq
/r/LocalLLaMA/comments/1l405nq/best_llm_for_a_rtx_5090_64_gb_ram/
false
false
self
1
null
Building my first AI project (IDE + LLM). How can I protect the idea and deploy it as a total beginner? 🇨🇦
1
[removed]
2025-06-05T14:04:55
https://www.reddit.com/r/LocalLLaMA/comments/1l401dw/building_my_first_ai_project_ide_llm_how_can_i/
Business-Opinion7579
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l401dw
false
null
t3_1l401dw
/r/LocalLLaMA/comments/1l401dw/building_my_first_ai_project_ide_llm_how_can_i/
false
false
self
1
null
Does newest LM Studio not have Playground tab anymore on Windows
1
[removed]
2025-06-05T14:02:59
https://i.redd.it/hc458irz745f1.jpeg
bilderbergman
i.redd.it
1970-01-01T00:00:00
0
{}
1l3zzn1
false
null
t3_1l3zzn1
/r/LocalLLaMA/comments/1l3zzn1/does_newest_lm_studio_not_have_playground_tab/
false
false
default
1
{'enabled': True, 'images': [{'id': 'hc458irz745f1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/hc458irz745f1.jpeg?width=108&crop=smart&auto=webp&s=ae2ee222df92cab003f27e00eee0d296c4e28bbb', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/hc458irz745f1.jpeg?width=216&crop=smart&auto=...
Help with audio visualization
1
[removed]
2025-06-05T13:46:28
https://www.reddit.com/r/LocalLLaMA/comments/1l3zlxa/help_with_audio_visualization/
MackPheson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3zlxa
false
null
t3_1l3zlxa
/r/LocalLLaMA/comments/1l3zlxa/help_with_audio_visualization/
false
false
self
1
null
Help me please :)
1
[removed]
2025-06-05T13:44:53
https://www.reddit.com/r/LocalLLaMA/comments/1l3zkky/help_me_please/
MackPheson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3zkky
false
null
t3_1l3zkky
/r/LocalLLaMA/comments/1l3zkky/help_me_please/
false
false
self
1
null
Best world knowledge model that can run on your phone
39
I basically want Internet-level knowledge when my phone is not connected to the internet (camping etc). I've heard good things about Gemma 2 2b for creative writing. But is it still the best model for things like world knowledge? Questions like: - How to identify different clam species - How to clean clam that you cau...
2025-06-05T13:22:38
https://www.reddit.com/r/LocalLLaMA/comments/1l3z2m3/best_world_knowledge_model_that_can_run_on_your/
clavidk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3z2m3
false
null
t3_1l3z2m3
/r/LocalLLaMA/comments/1l3z2m3/best_world_knowledge_model_that_can_run_on_your/
false
false
self
39
null
Approach for developing / designing UI
1
[removed]
2025-06-05T13:05:09
https://www.reddit.com/r/LocalLLaMA/comments/1l3yos6/approach_for_developing_designing_ui/
Suspicious_Dress_350
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3yos6
false
null
t3_1l3yos6
/r/LocalLLaMA/comments/1l3yos6/approach_for_developing_designing_ui/
false
false
self
1
null
4090 boards with 48gb Ram - will there ever be an upgrade service?
6
I keep seeing these cards being sold in china, but I haven't seen anything about being able to upgrade an existing card. Are these Chinese cards just fitted with higher capacity RAM chips and a different BIOS or are there PCB level differences? Does anyone think there's a chance a service will be offered to upgrade the...
2025-06-05T13:00:07
https://www.reddit.com/r/LocalLLaMA/comments/1l3ykjn/4090_boards_with_48gb_ram_will_there_ever_be_an/
thisisnotdave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3ykjn
false
null
t3_1l3ykjn
/r/LocalLLaMA/comments/1l3ykjn/4090_boards_with_48gb_ram_will_there_ever_be_an/
false
false
self
6
null
Qwen3-32b /nothink or qwen3-14b /think?
18
What has been your experience and what are the pro/cons?
2025-06-05T12:58:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3yjeb/qwen332b_nothink_or_qwen314b_think/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3yjeb
false
null
t3_1l3yjeb
/r/LocalLLaMA/comments/1l3yjeb/qwen332b_nothink_or_qwen314b_think/
false
false
self
18
null
Non-reasoning Qwen3-235B worse than maverick? Is this experience real with you guys?
3
[Intelligence Index Qwen3-235B-nothink beaten by Maverick?](https://preview.redd.it/8e3jqpw4t35f1.png?width=4092&format=png&auto=webp&s=d577dadbcfa9968158c76ae2e2c387bc4ec5dc0e) Is this experienced by you guys? [Wtf](https://preview.redd.it/c55h532zt35f1.png?width=4092&format=png&auto=webp&s=f87c9ae0b0f143791621f752...
2025-06-05T12:46:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3yamg/nonreasoning_qwen3235b_worse_than_maverick_is/
True_Requirement_891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3yamg
false
null
t3_1l3yamg
/r/LocalLLaMA/comments/1l3yamg/nonreasoning_qwen3235b_worse_than_maverick_is/
false
false
https://b.thumbs.redditm…CsbQy2hsx5nk.jpg
3
null
Looking for Advice: Best LLM/Embedding Models for Precise Document Retrieval (Product Standards)
3
Hi everyone, I’m working on a chatbot for my company to help colleagues quickly find answers in a set of about 60 very similar marketing standards. The documents are all formatted quite similarly, and the main challenge is that when users ask specific questions, the retrieval often pulls the *wrong* standard—or someti...
2025-06-05T12:29:06
https://www.reddit.com/r/LocalLLaMA/comments/1l3xxpw/looking_for_advice_best_llmembedding_models_for/
Hooches
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3xxpw
false
null
t3_1l3xxpw
/r/LocalLLaMA/comments/1l3xxpw/looking_for_advice_best_llmembedding_models_for/
false
false
self
3
null
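Whatever embedding model ends up in the retriever, the ranking step the post describes boils down to cosine similarity over vectors. A minimal pure-Python sketch (toy vectors stand in for real model output; in practice you'd plug in the embeddings your model returns):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(query_vec, doc_vecs):
    """Return document indices sorted by similarity to the query."""
    scores = [cosine(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: -scores[i])

# toy 2-d vectors standing in for real embeddings
docs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(rank([1.0, 0.1], docs))  # [0, 2, 1] - doc 0 is closest
```

For very similar documents (like the 60 near-identical standards described), raw cosine ranking often isn't enough; a reranker or metadata filter on top of this is a common fix.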
BAIDU joined huggingface
201
2025-06-05T11:52:51
https://huggingface.co/baidu
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1l3x8fr
false
null
t3_1l3x8fr
/r/LocalLLaMA/comments/1l3x8fr/baidu_joined_huggingface/
false
false
default
201
{'enabled': False, 'images': [{'id': 'VBByCdkzVD7PWV8lNUdba_RoNhzl4Gw0LZW9JEZN8Oc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TwIi1dX9vW7A-ld45UUpPdNNrb7BMww9X7rSHaojGsI.jpg?width=108&crop=smart&auto=webp&s=955a4c0e5b5e1785d28a0180fefa4dc08b8be3a0', 'width': 108}, {'height': 116, 'url': 'h...
Best local LLM for C++
1
[removed]
2025-06-05T11:40:01
https://www.reddit.com/r/LocalLLaMA/comments/1l3x05o/best_locall_llm_for_c/
ayx03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3x05o
false
null
t3_1l3x05o
/r/LocalLLaMA/comments/1l3x05o/best_locall_llm_for_c/
false
false
self
1
null
Best simple model for local fine tuning?
19
Back in the day I used to use GPT-2, but TensorFlow has moved on and it's no longer properly supported. Are there any good replacements? I don't need an excellent model at all; something as simple and weak as GPT-2 is ideal (I would much rather have faster training). It'll be unlearning all its written language anyways: I'm...
2025-06-05T11:17:53
https://www.reddit.com/r/LocalLLaMA/comments/1l3wlwy/best_simple_model_for_local_fine_tuning/
Lucario1296
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3wlwy
false
null
t3_1l3wlwy
/r/LocalLLaMA/comments/1l3wlwy/best_simple_model_for_local_fine_tuning/
false
false
self
19
null
Check out this new VSCode Extension! Query multiple BitNet servers from within GitHub Copilot via the Model Context Protocol all locally!
4
[https://marketplace.visualstudio.com/items?itemName=nftea-gallery.bitnet-vscode-extension](https://marketplace.visualstudio.com/items?itemName=nftea-gallery.bitnet-vscode-extension) [https://marketplace.visualstudio.com/items?itemName=nftea-gallery.bitnet-vscode-extension](https://marketplace.visualstudio.com/items?i...
2025-06-05T11:17:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3wloi/check_out_this_new_vscode_extension_query/
ufos1111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3wloi
false
null
t3_1l3wloi
/r/LocalLLaMA/comments/1l3wloi/check_out_this_new_vscode_extension_query/
false
false
self
4
null
Enterprise AI agents
1
[removed]
2025-06-05T10:35:49
https://www.reddit.com/r/LocalLLaMA/comments/1l3vw33/enterprise_ai_agents/
yecohn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3vw33
false
null
t3_1l3vw33
/r/LocalLLaMA/comments/1l3vw33/enterprise_ai_agents/
false
false
self
1
null
New embedding model "Qwen3-Embedding-0.6B-GGUF" just dropped.
446
Anyone tested it yet?
2025-06-05T10:30:53
https://huggingface.co/Qwen/Qwen3-Embedding-0.6B-GGUF
Proto_Particle
huggingface.co
1970-01-01T00:00:00
0
{}
1l3vt95
false
null
t3_1l3vt95
/r/LocalLLaMA/comments/1l3vt95/new_embedding_model_qwen3embedding06bgguf_just/
false
false
https://b.thumbs.redditm…JYYUsB3eiQXw.jpg
446
{'enabled': False, 'images': [{'id': '9lpKpNoi91vS1Idczeb6luvGh4vi0sY883cTturjqzg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nCFCX9SJ8G9lwL3THBeDPCNNzee25aFLCHH5cPLrrSM.jpg?width=108&crop=smart&auto=webp&s=16056ab3be753d66bcf5da3487a64235e037e0bd', 'width': 108}, {'height': 116, 'url': 'h...
AI Linter VS Code suggestions
3
What is a good extension for using a local model as a linter? I do not want AI-generated code; I only want the AI to act as a linter and say, “hey, you seem to be missing a zero in this integer.” Obvious problems like that, plus problems too subtle for a normal linter to find. Ideally it would be able to trig...
2025-06-05T10:26:43
https://www.reddit.com/r/LocalLLaMA/comments/1l3vqut/ai_linter_vs_code_suggestions/
DoggoChann
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3vqut
false
null
t3_1l3vqut
/r/LocalLLaMA/comments/1l3vqut/ai_linter_vs_code_suggestions/
false
false
self
3
null
On Prem LLM plug-and-play ‘package’ for SME organisational context
1
[removed]
2025-06-05T10:03:36
https://www.reddit.com/r/LocalLLaMA/comments/1l3vdht/on_prem_llm_plugandplay_package_for_sme/
jon18476
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3vdht
false
null
t3_1l3vdht
/r/LocalLLaMA/comments/1l3vdht/on_prem_llm_plugandplay_package_for_sme/
false
false
self
1
null
Aider & Full Automation: Seeking direct system command execution (not just simulation)
1
[removed]
2025-06-05T09:56:51
https://www.reddit.com/r/LocalLLaMA/comments/1l3v9h8/aider_full_automation_seeking_direct_system/
dewijones92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3v9h8
false
null
t3_1l3v9h8
/r/LocalLLaMA/comments/1l3v9h8/aider_full_automation_seeking_direct_system/
false
false
self
1
null
I organized a 100-game Town of Salem competition featuring best models as players. Game logs are available too.
113
As many of you probably know, Town of Salem is a popular game. If you don't know what I'm talking about, you can read the game_rules.yaml in the repo. My personal preference has always been to moderate rather than play among friends. Two weeks ago, I had the idea to make LLMs play this game to have fun and see who is t...
2025-06-05T08:43:52
https://www.reddit.com/gallery/1l3u7e9
kyazoglu
reddit.com
1970-01-01T00:00:00
0
{}
1l3u7e9
false
null
t3_1l3u7e9
/r/LocalLLaMA/comments/1l3u7e9/i_organized_a_100game_town_of_salem_competition/
false
false
https://b.thumbs.redditm…P1vzWdIvOaqs.jpg
113
null
Best TTS
1
[removed]
2025-06-05T08:39:35
https://www.reddit.com/r/LocalLLaMA/comments/1l3u59g/best_tts/
SmoothRock54
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3u59g
false
null
t3_1l3u59g
/r/LocalLLaMA/comments/1l3u59g/best_tts/
false
false
self
1
null
VLLM with 4x7900xtx with Qwen3-235B-A22B-UD-Q2_K_XL
20
Hello Reddit! Our "AI" computer now has 4x RX 7900 XTX and 1x RX 7800 XT. Llama-server works well, and we successfully launched Qwen3-235B-A22B-UD-Q2\_K\_XL with a 40,960 context length. |GPU|Backend|Input|Output| |:-|:-|:-|:-| |4x7900 xtx|HIP llama-server, -fa|160 t/s (356 tokens)|20 t/s (328 tokens)| |4x7900 ...
2025-06-05T07:41:40
https://www.reddit.com/r/LocalLLaMA/comments/1l3tby7/vllm_with_4x7900xtx_with_qwen3235ba22budq2_k_xl/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3tby7
false
null
t3_1l3tby7
/r/LocalLLaMA/comments/1l3tby7/vllm_with_4x7900xtx_with_qwen3235ba22budq2_k_xl/
false
false
self
20
null
Easiest way to access multiple Social Medias with LLMs
1
[removed]
2025-06-05T07:36:30
https://www.reddit.com/r/LocalLLaMA/comments/1l3t9a7/easiest_way_to_access_multiple_social_medias_with/
Ok_GreyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3t9a7
false
null
t3_1l3t9a7
/r/LocalLLaMA/comments/1l3t9a7/easiest_way_to_access_multiple_social_medias_with/
false
false
self
1
null
Need your Feedback
1
[removed]
2025-06-05T07:08:37
https://www.reddit.com/r/LocalLLaMA/comments/1l3su8c/need_your_feedback/
Careless_Werewolf148
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3su8c
false
null
t3_1l3su8c
/r/LocalLLaMA/comments/1l3su8c/need_your_feedback/
false
false
self
1
null
Interactive Results Browser for Misguided Attention Eval
6
Thanks to Gemini 2.5 Pro, there is now an [interactive results browser](https://cpldcpu.github.io/MisguidedAttention/) for the [misguided attention eval](https://github.com/cpldcpu/MisguidedAttention). The last wave of new models got significantly better at correctly responding to the prompts. Especially reasoning mode...
2025-06-05T06:24:23
https://www.reddit.com/r/LocalLLaMA/comments/1l3s5wh/interactive_results_browser_for_misguided/
cpldcpu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3s5wh
false
null
t3_1l3s5wh
/r/LocalLLaMA/comments/1l3s5wh/interactive_results_browser_for_misguided/
false
false
self
6
null
Mix and Match
2
I have a 4070 Super in my current computer and still have an old 3060 Ti from my last upgrade. Is it compatible to run at the same time as my 4070 to add more VRAM?
2025-06-05T06:08:03
https://www.reddit.com/r/LocalLLaMA/comments/1l3rwit/mix_and_match/
Doomkeepzor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3rwit
false
null
t3_1l3rwit
/r/LocalLLaMA/comments/1l3rwit/mix_and_match/
false
false
self
2
null
Deal of the century - or at least great value for money
0
[https://www.ebay.com/str/ipowerresaleinc](https://www.ebay.com/str/ipowerresaleinc)
2025-06-05T05:56:37
https://www.reddit.com/r/LocalLLaMA/comments/1l3rps3/deal_of_the_century_or_atleast_great_value_for/
weight_matrix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3rps3
false
null
t3_1l3rps3
/r/LocalLLaMA/comments/1l3rps3/deal_of_the_century_or_atleast_great_value_for/
false
false
self
0
null
Dealing with tool_calls hallucinations
5
Hi all, I have a specific prompt to output JSON, but for some reason the LLM decides to use a made-up tool call. llama.cpp running Qwen 30B. How do you handle these things? I tried passing an empty array to tools: [] and begged the LLM not to use tool calls. Driving me mad!
2025-06-05T05:06:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3qxas/dealing_with_tool_calls_hallucinations/
EstebanGee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3qxas
false
null
t3_1l3qxas
/r/LocalLLaMA/comments/1l3qxas/dealing_with_tool_calls_hallucinations/
false
false
self
5
null
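Two common fixes for this: the strong one is grammar-constrained decoding (llama.cpp can force valid JSON via a GBNF grammar or a JSON schema, depending on build and endpoint); a lighter fallback is post-processing the output to recover the JSON object and drop hallucinated tool-call wrappers. A sketch of the fallback (the `<tool_call>` tag pattern is illustrative of what some Qwen builds emit):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Recover the first JSON object from model output, ignoring
    hallucinated <tool_call> tags and markdown code fences."""
    text = re.sub(r"</?tool_call>", "", text)   # Qwen-style tool tags
    text = re.sub(r"```(?:json)?", "", text)    # markdown fences
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object in output")
    # raw_decode tolerates trailing junk after the object
    obj, _ = json.JSONDecoder().raw_decode(text[start:])
    return obj

print(extract_json('<tool_call>\n{"city": "Paris", "temp": 21}\n</tool_call>'))
```

Constrained decoding prevents the problem at the sampler level; the post-processor only papers over it, but it is backend-agnostic.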
how good is local llm compared with claude / chatgpt?
0
Just curious: is it worth the effort to set up a local LLM?
2025-06-05T05:03:15
https://www.reddit.com/r/LocalLLaMA/comments/1l3qvbu/how_good_is_local_llm_compared_with_claude_chatgpt/
anonymous_2600
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3qvbu
false
null
t3_1l3qvbu
/r/LocalLLaMA/comments/1l3qvbu/how_good_is_local_llm_compared_with_claude_chatgpt/
false
false
self
0
null
RTX PRO 6000 machine for 12k?
12
Hi, Is there a company that sells a complete machine (CPU, RAM, GPU, drive, motherboard, case, power supply, etc., all wired up) with an RTX 6000 Pro for 12k USD or less? The card itself is around 7-8k I think, which leaves 4k for the other components. Is this economically possible? Bonus point: The machine supports add...
2025-06-05T04:37:26
https://www.reddit.com/r/LocalLLaMA/comments/1l3qfhh/rtx_pro_6000_machine_for_12k/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3qfhh
false
null
t3_1l3qfhh
/r/LocalLLaMA/comments/1l3qfhh/rtx_pro_6000_machine_for_12k/
false
false
self
12
null
Niche Q but want to ask in an active community: what’s the cheapest transcription tool for audio that contains medical terminology?
1
[removed]
2025-06-05T04:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1l3qaia/niche_q_but_want_to_ask_in_an_active_community/
adrenalinsufficiency
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3qaia
false
null
t3_1l3qaia
/r/LocalLLaMA/comments/1l3qaia/niche_q_but_want_to_ask_in_an_active_community/
false
false
self
1
null
Qwen3 235B Q2_K_L Repeats Letters Despite Penalties.
1
My intended use case is as a backend for cline. For this, I am using the Qwen3 235B Q2_K_L model. I keep encountering repetition issues (specifically, endless repetition of the last letter), even after adding penalty parameters. I’m not sure if my launch method is correct—here’s my current launch command: ``` ./llama-...
2025-06-05T04:06:36
https://www.reddit.com/r/LocalLLaMA/comments/1l3pwjg/qwen3_235b_q2_k_l_repeats_letters_despite/
realJoeTrump
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3pwjg
false
null
t3_1l3pwjg
/r/LocalLLaMA/comments/1l3pwjg/qwen3_235b_q2_k_l_repeats_letters_despite/
false
false
self
1
null
OpenAI should open source GPT3.5 turbo
124
Don't have a real point here, just the title, food for thought. I think it would be a pretty cool thing to do. At this point it's extremely out of date, so they wouldn't be losing any "edge"; it would just be a cool thing to do/have and would be a nice throwback. OpenAI's 10th anniversary is coming up in Decembe...
2025-06-05T03:18:48
https://www.reddit.com/r/LocalLLaMA/comments/1l3p1f0/openai_should_open_source_gpt35_turbo/
Expensive-Apricot-25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3p1f0
false
null
t3_1l3p1f0
/r/LocalLLaMA/comments/1l3p1f0/openai_should_open_source_gpt35_turbo/
false
false
self
124
null
HP Z440 5x GPU build
6
Hello everyone, I was about to build a very expensive machine with a brand-new EPYC Milan CPU and a ROMED8-2T in a mining rack with 5 3090s mounted via risers, since I couldn't find any used EPYC CPUs or motherboards here in India. I had a spare Z440, and it has 2 x16 slots and 1 x8 slot. Q.1 Is this a good idea? Z440 was t...
2025-06-05T03:15:30
https://www.reddit.com/r/LocalLLaMA/comments/1l3oz8o/hp_z440_5x_gpu_build/
BeeNo7094
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3oz8o
false
null
t3_1l3oz8o
/r/LocalLLaMA/comments/1l3oz8o/hp_z440_5x_gpu_build/
false
false
self
6
null
why isn’t anyone building legit tools with local LLMs?
54
Asked this in a recent comment but curious what others think. I could be missing it, but why aren't more niche on-device products being built? Not talking wrappers or playgrounds; I mean real, useful tools powered by local LLMs. Models are getting small enough; 3B and below is workable for a lot of tasks. The potent...
2025-06-05T03:00:37
https://www.reddit.com/r/LocalLLaMA/comments/1l3op8b/why_isnt_anyone_building_legit_tools_with_local/
mindfulbyte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3op8b
false
null
t3_1l3op8b
/r/LocalLLaMA/comments/1l3op8b/why_isnt_anyone_building_legit_tools_with_local/
false
false
self
54
null
why aren’t we seeing more real products built with local LLMs?
1
[removed]
2025-06-05T02:55:38
https://www.reddit.com/r/LocalLLaMA/comments/1l3olw3/why_arent_we_seeing_more_real_products_built_with/
mindfulbyte
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3olw3
false
null
t3_1l3olw3
/r/LocalLLaMA/comments/1l3olw3/why_arent_we_seeing_more_real_products_built_with/
false
false
self
1
null
Local AI smart speaker
7
I was wondering if there were any low-cost options for a Bluetooth speaker/microphone to connect to my server for voice chat with a local LLM. Can an old Echo or something be repurposed?
2025-06-05T02:53:13
https://www.reddit.com/r/LocalLLaMA/comments/1l3ok95/local_ai_smart_speaker/
Llamapants
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3ok95
false
null
t3_1l3ok95
/r/LocalLLaMA/comments/1l3ok95/local_ai_smart_speaker/
false
false
self
7
null
C# Flash Card Generator
4
I'm posting this here mainly as an example app for the .NET lovers out there. Public domain. [https://github.com/dpmm99/Faxtract](https://github.com/dpmm99/Faxtract) is a rather simple ASP.NET web app using LLamaSharp (a llama.cpp wrapper) to perform batched inference. It accepts PDF, HTML, or TXT files and breaks th...
2025-06-05T02:18:51
https://www.reddit.com/r/LocalLLaMA/comments/1l3nwic/c_flash_card_generator/
DeProgrammer99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3nwic
false
null
t3_1l3nwic
/r/LocalLLaMA/comments/1l3nwic/c_flash_card_generator/
false
false
https://b.thumbs.redditm…4OKe9CLeYgHQ.jpg
4
{'enabled': False, 'images': [{'id': '10b2-ooZ8CZCLvFOgXbLZsmIJN6kUoVbLr_2vI7ULxU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mvZNRhrCH_xoYiPx_UuqAthbYQIcuBybXiRVHoZ3gFg.jpg?width=108&crop=smart&auto=webp&s=5eff254bb0b42cc53e93411a86e03f95d8e2162c', 'width': 108}, {'height': 108, 'url': 'h...
After court order, OpenAI is now preserving all ChatGPT and API logs
997
>OpenAI could have taken steps to anonymize the chat logs but chose not to, only making an argument for why it "would not" be able to segregate data, rather than explaining why it "can’t." Surprising absolutely nobody, except maybe ChatGPT users, OpenAI and the United States own your data and can do whatever they wa...
2025-06-05T02:00:22
https://arstechnica.com/tech-policy/2025/06/openai-says-court-forcing-it-to-save-all-chatgpt-logs-is-a-privacy-nightmare/
iGermanProd
arstechnica.com
1970-01-01T00:00:00
0
{}
1l3niws
false
null
t3_1l3niws
/r/LocalLLaMA/comments/1l3niws/after_court_order_openai_is_now_preserving_all/
false
false
default
997
{'enabled': False, 'images': [{'id': 'BUgrpepEp3PiEWaSG8x4EpSYcr7rmPFZOASl26sCl9Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/X_hEWYjElFi1bBhWlW8lpN7Rp7cf6NXMmqc3u_L3ogI.jpg?width=108&crop=smart&auto=webp&s=73bbb4adc582b4db9e9d74abef9f64ad8518c2eb', 'width': 108}, {'height': 121, 'url': 'h...
Anyone have any experience with Deepseek-R1-0528-Qwen3-8B?
8
I'm trying to download Unsloth's version on Msty (2021 iMac, 16GB), and per Unsloth's HuggingFace, they say to do the Q4\_K\_XL version because that's the version that's preconfigured with the prompt template and the settings and all that good jazz. But I'm left scratching my head over here. It acts all bonkers. Spill...
2025-06-05T00:37:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3lutf/anyone_have_any_experience_with/
clduab11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3lutf
false
null
t3_1l3lutf
/r/LocalLLaMA/comments/1l3lutf/anyone_have_any_experience_with/
false
false
self
8
null
My former go-to misguided attention prompt in shambles (DS-V3-0528)
54
Last year, [this prompt](https://www.reddit.com/r/LocalLLaMA/comments/1h8g8v3/a_test_prompt_the_new_llama_33_70b_struggles_with/) was useful to differentiate the smartest models from the rest. This year, the AI not only doesn't fall for it but realizes it's being tested and how it's being tested. I'm liking 0528's new...
2025-06-05T00:32:47
https://i.redd.it/8uil7xc0705f1.png
nomorebuttsplz
i.redd.it
1970-01-01T00:00:00
0
{}
1l3lrdq
false
null
t3_1l3lrdq
/r/LocalLLaMA/comments/1l3lrdq/my_former_goto_misguided_attention_prompt_in/
false
false
https://a.thumbs.redditm…1-d-jL9Utrj8.jpg
54
{'enabled': True, 'images': [{'id': 'v4s3wJLftwP1LimkjdNWreJuCiImnsukCzokwUqlHyw', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/8uil7xc0705f1.png?width=108&crop=smart&auto=webp&s=da7d2a8ff5b014296d2cca54b6abf8affcf5d51f', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/8uil7xc0705f1.png...
Has anyone got DeerFlow working with LM Studio as the Backend?
0
Been trying to get DeerFlow to use LM Studio as its backend, but it's not working properly. It just behaves like a regular chat interface without leveraging the local model the way I expected. Anyone else run into this or have it working correctly?
2025-06-05T00:18:12
https://www.reddit.com/r/LocalLLaMA/comments/1l3lgnp/has_anyone_got_deerflow_working_with_lm_studio/
Soraman36
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3lgnp
false
null
t3_1l3lgnp
/r/LocalLLaMA/comments/1l3lgnp/has_anyone_got_deerflow_working_with_lm_studio/
false
false
self
0
null
Error with full finetune, model merge, and quantization on vllm
1
[removed]
2025-06-04T23:50:06
https://www.reddit.com/r/LocalLLaMA/comments/1l3kvm5/error_with_full_finetune_model_merge_and/
Alternative-Dot451
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3kvm5
false
null
t3_1l3kvm5
/r/LocalLLaMA/comments/1l3kvm5/error_with_full_finetune_model_merge_and/
false
false
self
1
null
Error with full finetune, model merge, and quantization on vllm
1
[removed]
2025-06-04T23:48:27
https://www.reddit.com/r/LocalLLaMA/comments/1l3kuez/error_with_full_finetune_model_merge_and/
Alternative-Dot451
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3kuez
false
null
t3_1l3kuez
/r/LocalLLaMA/comments/1l3kuez/error_with_full_finetune_model_merge_and/
false
false
self
1
null
UPDATE: Inference needs nontrivial amount of PCIe bandwidth (8x RTX 3090 rig, tensor parallelism)
59
A month ago I complained that connecting 8 RTX 3090s with PCIe 3.0 x4 links is a bad idea. I have upgraded my rig with better PCIe links and have an update with some numbers. The upgrade: PCIe 3.0 -> 4.0, x4 width to x8 width. Used an H12SSL with a 16-core EPYC 7302. I didn't try the p2p PCIe drivers yet. The numbers: Bandw...
2025-06-04T21:51:11
https://www.reddit.com/r/LocalLLaMA/comments/1l3i78l/update_inference_needs_nontrivial_amount_of_pcie/
pmur12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3i78l
false
null
t3_1l3i78l
/r/LocalLLaMA/comments/1l3i78l/update_inference_needs_nontrivial_amount_of_pcie/
false
false
self
59
null
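The rough arithmetic behind the upgrade: PCIe gen 3+ lanes use 128b/130b encoding, so per-lane payload rate is the transfer rate times 128/130. Going from 3.0 x4 to 4.0 x8 therefore quadruples one-direction bandwidth (~3.9 GB/s to ~15.8 GB/s), ignoring protocol overhead:

```python
# Approximate one-direction PCIe bandwidth (GB/s), ignoring protocol overhead.
GT_PER_S = {3: 8.0, 4: 16.0, 5: 32.0}  # per-lane transfer rates (GT/s)

def pcie_gbps(gen: int, lanes: int) -> float:
    # Gen 3+ use 128b/130b encoding: 128 payload bits per 130 line bits.
    return GT_PER_S[gen] * lanes * (128 / 130) / 8

print(pcie_gbps(3, 4))   # old links: ~3.9 GB/s
print(pcie_gbps(4, 8))   # new links: ~15.8 GB/s
```

Real throughput is lower still once TLP headers and flow control are accounted for, which is why tensor-parallel all-reduces are so sensitive to link width.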
Hardware considerations (5090 vs 2 x 3090). What AMD AM5 MOBO for dual GPU?
20
Hello everyone! I have an AM5 motherboard prepared for a single GPU card. I also have an MSI RTX 3090 Suprim. I can also buy a second MSI RTX 3090 Suprim, used of course, but then I would have to change the motherboard (also case and PSU). The other option is to buy the used RTX 5090 instead of the 3090 (then the ...
2025-06-04T21:41:24
https://www.reddit.com/r/LocalLLaMA/comments/1l3hys4/hardware_considerations_5090_vs_2_x_3090_what_amd/
Repsol_Honda_PL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3hys4
false
null
t3_1l3hys4
/r/LocalLLaMA/comments/1l3hys4/hardware_considerations_5090_vs_2_x_3090_what_amd/
false
false
self
20
null
Which models are you able to use with MCP servers?
0
I've been working heavily with MCP servers (mostly Obsidian) from Claude Desktop for the last couple of months, but I'm running into quota issues all the time with my Pro account and really want to use alternatives (using Ollama if possible, OpenRouter otherwise). I successfully connected my MCP servers to AnythingLLM,...
2025-06-04T20:58:15
https://www.reddit.com/r/LocalLLaMA/comments/1l3gwkw/which_models_are_you_able_to_use_with_mcp_servers/
rdmDgnrtd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3gwkw
false
null
t3_1l3gwkw
/r/LocalLLaMA/comments/1l3gwkw/which_models_are_you_able_to_use_with_mcp_servers/
false
false
self
0
{'enabled': False, 'images': [{'id': 'w37CiJqQCCa-YfqtBhexm9AtCs6w-fVanSxzd90DK78', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Z0RUD0QGmEo8Fg4yXs6AVssiZe2AUuo8d3qWzjiBdV4.jpg?width=108&crop=smart&auto=webp&s=786653b9fb6b09478530bee1056d8e9b6cf8e5e7', 'width': 108}, {'height': 113, 'url': 'h...
I made an LLM tool to let you search offline Wikipedia/StackExchange/DevDocs ZIM files (llm-tools-kiwix, works with Python & LLM cli)
65
Hey everyone, I just released [`llm-tools-kiwix`](https://github.com/mozanunal/llm-tools-kiwix), a plugin for the [`llm` CLI](https://llm.datasette.io/) and Python that lets LLMs read and search offline ZIM archives (i.e., Wikipedia, DevDocs, StackExchange, and more) **totally offline**. **Why?** A lot of local LLM...
2025-06-04T19:57:22
https://www.reddit.com/r/LocalLLaMA/comments/1l3fdv3/i_made_an_llm_tool_to_let_you_search_offline/
mozanunal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3fdv3
false
null
t3_1l3fdv3
/r/LocalLLaMA/comments/1l3fdv3/i_made_an_llm_tool_to_let_you_search_offline/
false
false
self
65
{'enabled': False, 'images': [{'id': 'LdeLyjWpKozaXC-1mzijVFkn07--A9IsGK-EOeqIB30', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F8bFZyE_Fj_a5QKBiIJ_MPA97JNbla0lFcO3Ox7-2wE.jpg?width=108&crop=smart&auto=webp&s=735403ad4d738491a88b27206322250646225362', 'width': 108}, {'height': 108, 'url': 'h...
CPU or GPU upgrade for 70b models?
4
Currently I'm running 70B Q3 quants on my GTX 1080 with a 6800K CPU at 0.6 tokens/sec. Isn't it true that upgrading to a 4060 Ti with 16GB of VRAM would have almost no effect whatsoever on inference speed because it's still offloading? GPT thinks I should upgrade my CPU, suggesting I'll get 2.5 tokens per sec or more on a £...
2025-06-04T19:44:46
https://www.reddit.com/r/LocalLLaMA/comments/1l3f2jz/cpu_or_gpu_upgrade_for_70b_models/
Ok-Application-2261
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3f2jz
false
null
t3_1l3f2jz
/r/LocalLLaMA/comments/1l3f2jz/cpu_or_gpu_upgrade_for_70b_models/
false
false
self
4
null
Using LLaMA 3 locally to plan macOS UI actions (Vision + Accessibility demo)
4
Wanted to see if LLaMA 3-8B on an M2 could replace cloud GPT for desktop RPA. Pipeline: * Ollama -> “plan” JSON steps from plain English * macOS Vision framework locates UI elements * Accessibility API executes clicks/keys * Feedback loop retries if confidence < 0.7 Prompt snippet: { "instruction": "rename ever...
2025-06-04T18:47:18
https://www.reddit.com/r/LocalLLaMA/comments/1l3dm0c/using_llama_3_locally_to_plan_macos_ui_actions/
TyBoogie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3dm0c
false
null
t3_1l3dm0c
/r/LocalLLaMA/comments/1l3dm0c/using_llama_3_locally_to_plan_macos_ui_actions/
false
false
self
4
{'enabled': False, 'images': [{'id': 'jla_f4udSu_pe2OQSa52x_K_vjEbE29q9yz3Rnruh-w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aRmXx66Lv5yJRvTZQZ4iijpoiQPLctNkEQMQZZ7dp50.jpg?width=108&crop=smart&auto=webp&s=b8d69cc7bd816c51b5306aab0c1a7bb9c6d7c6e4', 'width': 108}, {'height': 108, 'url': 'h...
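The feedback loop in that pipeline (retry if confidence < 0.7) can be sketched as a plan/execute/retry loop. Everything here is hypothetical scaffolding: `plan` stands in for the Ollama call that returns JSON steps, `execute` for the Vision + Accessibility execution that reports a confidence score.

```python
# Hypothetical plan/execute/retry loop mirroring the post's
# "retry if confidence < 0.7" step.
CONF_THRESHOLD = 0.7
MAX_RETRIES = 3

def run_with_retries(plan, execute, instruction):
    for attempt in range(MAX_RETRIES):
        steps = plan(instruction, attempt)   # LLM -> JSON action steps
        confidence = execute(steps)          # executor reports 0.0 .. 1.0
        if confidence >= CONF_THRESHOLD:
            return True, attempt + 1
    return False, MAX_RETRIES

# fake executor: first attempt fails, second succeeds
results = iter([0.4, 0.9])
ok, tries = run_with_retries(
    plan=lambda ins, a: [{"action": "click", "target": ins}],
    execute=lambda steps: next(results),
    instruction="rename file",
)
print(ok, tries)  # True 2
```

Passing the attempt number back into `plan` lets the LLM see that its last plan failed, which is what makes the retry more than a blind repeat.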
Real-time conversational AI running 100% locally in-browser on WebGPU
1,250
2025-06-04T18:42:30
https://v.redd.it/t419j8srgy4f1
xenovatech
/r/LocalLLaMA/comments/1l3dhjx/realtime_conversational_ai_running_100_locally/
1970-01-01T00:00:00
0
{}
1l3dhjx
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t419j8srgy4f1/DASHPlaylist.mpd?a=1751784155%2CNTY5OWEzYzg4Y2NjMmUxNGU3MjFkMWRhMTQxY2NjNjFkMzgwN2EwNDZlZDRlMDg0Nzc2YmRmYTI5MDBiNDFlZg%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/t419j8srgy4f1/DASH_1080.mp4?source=fallback', 'h...
t3_1l3dhjx
/r/LocalLLaMA/comments/1l3dhjx/realtime_conversational_ai_running_100_locally/
false
false
https://external-preview…9f4c113bdfaee71d
1,250
{'enabled': False, 'images': [{'id': 'MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=108&crop=smart&format=pjpg&auto=webp&s=147389199b19cc719c83b4d1cb6de2fcef5b5...
Real-time conversational AI running 100% locally in-browser on WebGPU
1
For those interested, here's how it works: \- A cascade & interleaving of various models to enable low-latency, real-time speech-to-speech generation. \- Models: Silero VAD for voice activity detection, Whisper for speech recognition, SmolLM2-1.7B for text generation, and Kokoro for text-to-speech. \- WebGPU: po...
2025-06-04T18:40:15
https://v.redd.it/xk719di2cy4f1
xenovatech
/r/LocalLLaMA/comments/1l3dfg5/realtime_conversational_ai_running_100_locally/
1970-01-01T00:00:00
0
{}
1l3dfg5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xk719di2cy4f1/DASHPlaylist.mpd?a=1751784022%2CMGY5ODZkNmVkNDBkY2I1MWQxYmQ0NWI0MjI3MTk3MmU1MzViY2M3OTU4NDBmYjU0NmYzYWRjYjcwMjY2NWIwNQ%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/xk719di2cy4f1/DASH_1080.mp4?source=fallback', 'h...
t3_1l3dfg5
/r/LocalLLaMA/comments/1l3dfg5/realtime_conversational_ai_running_100_locally/
false
false
https://external-preview…badcee9875158c96
1
{'enabled': False, 'images': [{'id': 'MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=108&crop=smart&format=pjpg&auto=webp&s=c0bbcea75fec8b62b79a1caba40718b089a49...
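The VAD stage is the gate for the whole cascade: nothing downstream runs until a frame is classified as speech. As a toy stand-in for Silero VAD, here is a naive energy-threshold detector (frame size and threshold are illustrative, not Silero's actual parameters):

```python
# Naive energy-based voice activity detection, a toy stand-in for the
# Silero VAD stage. Samples are floats in [-1.0, 1.0].

def frame_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def is_speech(frame, threshold=0.01):
    """True if the frame's energy exceeds the (illustrative) threshold."""
    return frame_energy(frame) > threshold

silence = [0.0] * 160                              # 10 ms @ 16 kHz
tone = [0.5 if i % 2 == 0 else -0.5 for i in range(160)]
print(is_speech(silence), is_speech(tone))  # False True
```

A real VAD like Silero is a small neural net precisely because energy gates misfire on breath noise and quiet speech, but the interface (frame in, speech/no-speech out) is the same.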
Taskade MCP – Generate Claude/Cursor tools from any OpenAPI spec ⚡
1
Hey all, We needed a faster way to wire AI agents (like Claude, Cursor) to real APIs using OpenAPI specs. So we built and open-sourced **[Taskade MCP](https://github.com/taskade/mcp)** — a codegen tool and local server that turns OpenAPI 3.x specs into Claude/Cursor-compatible MCP tools. - Auto-generates agent tool...
2025-06-04T18:15:24
https://www.reddit.com/r/LocalLLaMA/comments/1l3csix/taskade_mcp_generate_claudecursor_tools_from_any/
taskade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3csix
false
null
t3_1l3csix
/r/LocalLLaMA/comments/1l3csix/taskade_mcp_generate_claudecursor_tools_from_any/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LASjwjJSCnDis9BKVqPfucu58HXWKqcn4_F2Q418qBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6uwvOyt98UU5zfNKI-Ea240Dk4_hQO6sitKe5Pp-3FY.jpg?width=108&crop=smart&auto=webp&s=8a3d117787f29e9c2961c7414493211868076b5f', 'width': 108}, {'height': 108, 'url': 'h...
GRMR-V3: A set of models for reliable grammar correction.
96
Let's face it: You don't need big models like 32B, or even medium-sized models like 8B, for grammar correction. So I've created a set of fine-tuned models specialized in doing just that: fixing grammar. [Models](https://huggingface.co/collections/qingy2024/grmr-v3-models-683e6a27b42e4eb0e950fbdd): GRMR-V3 (1B, 1.2B, 1.7B, 3...
2025-06-04T17:53:41
https://www.reddit.com/r/LocalLLaMA/comments/1l3c8is/grmrv3_a_set_of_models_for_reliable_grammar/
random-tomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3c8is
false
null
t3_1l3c8is
/r/LocalLLaMA/comments/1l3c8is/grmrv3_a_set_of_models_for_reliable_grammar/
false
false
self
96
{'enabled': False, 'images': [{'id': 'TUJ65URlfz7avJTo72njareuNShxaju8zU4SYHkPF-c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MXqJmVgCHNlEUhx-IxD9pDhkhp1ZtKL3aoDYARdvQ10.jpg?width=108&crop=smart&auto=webp&s=08ed7f92b9f39d34b2ebcc9171a156cd04f53397', 'width': 108}, {'height': 116, 'url': 'h...
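Once a GRMR-style model returns the corrected sentence, a word-level diff makes the edits easy to review. A small stdlib sketch using `difflib` with wdiff-style `[-removed-]`/`{+added+}` markers (the marker syntax is just a convention, not anything the models emit):

```python
import difflib

def show_correction(original: str, corrected: str) -> str:
    """Render a word-level diff between model input and output."""
    a, b = original.split(), corrected.split()
    sm = difflib.SequenceMatcher(a=a, b=b)
    parts = []
    for op, a0, a1, b0, b1 in sm.get_opcodes():
        if op == "equal":
            parts.extend(a[a0:a1])
        else:
            if a1 > a0:
                parts.append("[-" + " ".join(a[a0:a1]) + "-]")
            if b1 > b0:
                parts.append("{+" + " ".join(b[b0:b1]) + "+}")
    return " ".join(parts)

print(show_correction("he go to school", "he goes to school"))
# he [-go-] {+goes+} to school
```

This also doubles as a cheap sanity check: if the diff touches more than a few words, the "grammar" model probably rewrote content instead of correcting it.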