Dataset schema (column: type, observed range):

title: string, length 1-300
score: int64, 0-8.54k
selftext: string, length 0-41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, length 0-878
author: string, length 3-20
domain: string, length 0-82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, 0-2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646-1.8k
name: string, length 10
permalink: string, length 33-82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4-213
ups: int64, 0-8.54k
preview: string, length 301-5.01k
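The schema lists 20 columns, and each record below is the same 20 cells flattened in column order. A minimal sketch of recovering structured rows from that flat layout, assuming the field names above; `rows_from_flat` is a hypothetical helper, not part of any dataset API:

```python
# Column names assumed from the schema above, in order.
FIELDS = [
    "title", "score", "selftext", "created", "url", "author", "domain",
    "edited", "gilded", "gildings", "id", "locked", "media", "name",
    "permalink", "spoiler", "stickied", "thumbnail", "ups", "preview",
]

def rows_from_flat(values):
    """Group a flat list of cell values into per-record dicts, 20 cells per record."""
    if len(values) % len(FIELDS):
        raise ValueError("value count is not a multiple of the column count")
    return [
        dict(zip(FIELDS, values[i:i + len(FIELDS)]))
        for i in range(0, len(values), len(FIELDS))
    ]

# One record from the dump, flattened in schema order.
sample = [
    "POLARIS - Bytedance", 1, "[removed]", "2025-06-22T18:14:39",
    "https://github.com/ChenxinAn-fdu/POLARIS/tree/main", "KillerX629",
    "github.com", "1970-01-01T00:00:00", 0, "{}", "1lhul4j", False, None,
    "t3_1lhul4j", "/r/LocalLLaMA/comments/1lhul4j/polaris_bytedance/",
    False, False, "default", 1, None,
]
row = rows_from_flat(sample)[0]
print(row["author"], row["score"])  # -> KillerX629 1
```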
In-character roleplay in LLM thinking tokens
1
[removed]
2025-06-22T19:22:18
https://i.redd.it/689klvmu3j8f1.png
Lesterpaintstheworld
i.redd.it
1970-01-01T00:00:00
0
{}
1lhw875
false
null
t3_1lhw875
/r/LocalLLaMA/comments/1lhw875/incharacter_roleplay_in_llm_thinking_tokens/
false
false
default
1
{'enabled': True, 'images': [{'id': '689klvmu3j8f1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/689klvmu3j8f1.png?width=108&crop=smart&auto=webp&s=e86488da806d1acfdc7486930b27d9ef031a8f85', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/689klvmu3j8f1.png?width=216&crop=smart&auto=web...
Does anyone else find Dots really impressive?
1
[removed]
2025-06-22T19:10:10
https://www.reddit.com/r/LocalLLaMA/comments/1lhvxn3/does_anyone_else_find_dots_really_impressive/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhvxn3
false
null
t3_1lhvxn3
/r/LocalLLaMA/comments/1lhvxn3/does_anyone_else_find_dots_really_impressive/
false
false
self
1
null
managing coding agents through the phone?
1
[removed]
2025-06-22T18:43:53
https://www.reddit.com/r/LocalLLaMA/comments/1lhvaop/managing_coding_agents_through_the_phone/
secopsml
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhvaop
false
null
t3_1lhvaop
/r/LocalLLaMA/comments/1lhvaop/managing_coding_agents_through_the_phone/
false
false
self
1
null
I've created an app that allows you to AI-analyze many PDF files (and their highlights) into tables - and it can fully function with local LMs
1
[removed]
2025-06-22T18:39:25
https://v.redd.it/m5wix88owi8f1
RansomWarrior
v.redd.it
1970-01-01T00:00:00
0
{}
1lhv6sb
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m5wix88owi8f1/DASHPlaylist.mpd?a=1753209578%2CZGJjM2NmNDU2YWYzODY2NmJjMTM2MzU1NWZlMDg4OWNkMjQ5YzNjMDdkNjJkNzNhODIxMDkwMmRkYzExN2I1ZQ%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/m5wix88owi8f1/DASH_1080.mp4?source=fallback', 'h...
t3_1lhv6sb
/r/LocalLLaMA/comments/1lhv6sb/ive_created_an_app_that_allows_you_to_aianalyze/
false
false
https://external-preview…1866eb91c6aab756
1
{'enabled': False, 'images': [{'id': 'NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc.png?width=108&crop=smart&format=pjpg&auto=webp&s=8fc8980cc615ca093999fa7d84c9d60956720...
POLARIS - Bytedance
1
[removed]
2025-06-22T18:14:39
https://github.com/ChenxinAn-fdu/POLARIS/tree/main
KillerX629
github.com
1970-01-01T00:00:00
0
{}
1lhul4j
false
null
t3_1lhul4j
/r/LocalLLaMA/comments/1lhul4j/polaris_bytedance/
false
false
default
1
null
LinusTechTips reviews Chinese 4090s with 48Gb VRAM, messes with LLMs
1
[removed]
2025-06-22T18:12:07
https://youtu.be/HZgQp-WDebU
BumbleSlob
youtu.be
1970-01-01T00:00:00
0
{}
1lhuivb
false
{'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HZgQp-WDebU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy...
t3_1lhuivb
/r/LocalLLaMA/comments/1lhuivb/linustechtips_reviews_chinese_4090s_with_48gb/
false
false
default
1
null
OpenAI's Chief Product Officer made a Lt. Colonel in the Army.
1
2025-06-22T18:06:02
https://www.army.mil/article-amp/286317/army_launches_detachment_201_executive_innovation_corps_to_drive_tech_transformation
fallingdowndizzyvr
army.mil
1970-01-01T00:00:00
0
{}
1lhudl3
false
null
t3_1lhudl3
/r/LocalLLaMA/comments/1lhudl3/openais_chief_product_officer_made_a_lt_colonel/
false
false
default
1
null
From last 10 hours no new updates - i am addicted its seems
1
2025-06-22T17:30:44
https://i.redd.it/jeehul3kki8f1.jpeg
dreamai87
i.redd.it
1970-01-01T00:00:00
0
{}
1lhtikp
false
null
t3_1lhtikp
/r/LocalLLaMA/comments/1lhtikp/from_last_10_hours_no_new_updates_i_am_addicted/
false
false
default
1
{'enabled': True, 'images': [{'id': 'jeehul3kki8f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/jeehul3kki8f1.jpeg?width=108&crop=smart&auto=webp&s=969976d0795a7adb69e67b686caa12605d01bbc6', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/jeehul3kki8f1.jpeg?width=216&crop=smart&auto=...
No posts from last 9 hours - I am addicted to see updates here
1
2025-06-22T17:27:13
https://i.redd.it/db146xgxji8f1.jpeg
dreamai87
i.redd.it
1970-01-01T00:00:00
0
{}
1lhtfiy
false
null
t3_1lhtfiy
/r/LocalLLaMA/comments/1lhtfiy/no_posts_from_last_9_hours_i_am_addicted_to_see/
false
false
default
1
{'enabled': True, 'images': [{'id': 'db146xgxji8f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/db146xgxji8f1.jpeg?width=108&crop=smart&auto=webp&s=9e5f56096200d1e86a24c23f094a995aefbb2e6b', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/db146xgxji8f1.jpeg?width=216&crop=smart&auto=...
moonshotai released a new multi modal model with 16B params with 3B active
1
[removed]
2025-06-22T17:08:12
https://www.reddit.com/r/LocalLLaMA/comments/1lhsytb/moonshotai_released_a_new_multi_modal_model_with/
BreakfastFriendly728
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhsytb
false
null
t3_1lhsytb
/r/LocalLLaMA/comments/1lhsytb/moonshotai_released_a_new_multi_modal_model_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nn6Om0LrvY9dh6qkhvLPezIS-aJdRaC0O6BpYJYgA5E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nn6Om0LrvY9dh6qkhvLPezIS-aJdRaC0O6BpYJYgA5E.png?width=108&crop=smart&auto=webp&s=b6a411809385a8832da3850c0f3d679bebfc629b', 'width': 108}, {'height': 116, 'url': 'h...
LTT Review/Breakdown of the Chinese 48GB 4090 2 Slot GPUs
1
2025-06-22T16:58:51
https://www.youtube.com/watch?v=HZgQp-WDebU
Rollingsound514
youtube.com
1970-01-01T00:00:00
0
{}
1lhsqeb
false
{'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HZgQp-WDebU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy...
t3_1lhsqeb
/r/LocalLLaMA/comments/1lhsqeb/ltt_reviewbreakdown_of_the_chinese_48gb_4090_2/
false
false
https://external-preview…67ba35e85ca77a95
1
{'enabled': False, 'images': [{'id': 'ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=108&crop=smart&auto=webp&s=34b6e95c9e78450a03bc17669db1039556875ab2', 'width': 108}, {'height': 162, 'url': '...
I think I’m gonna apply for Meta what would be your dream job there?
1
[removed]
2025-06-22T16:43:08
https://www.reddit.com/r/LocalLLaMA/comments/1lhsd2w/i_think_im_gonna_apply_for_meta_what_would_be/
TheMightyDice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhsd2w
false
null
t3_1lhsd2w
/r/LocalLLaMA/comments/1lhsd2w/i_think_im_gonna_apply_for_meta_what_would_be/
false
false
self
1
null
Best open-source LLM for summarizing German course transcripts (cloud setup)?
1
[removed]
2025-06-22T16:42:54
https://www.reddit.com/r/LocalLLaMA/comments/1lhscve/best_opensource_llm_for_summarizing_german_course/
Sea-Woodpecker-2594
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhscve
false
null
t3_1lhscve
/r/LocalLLaMA/comments/1lhscve/best_opensource_llm_for_summarizing_german_course/
false
false
self
1
null
Best open-source LLM for summarizing German course transcripts?
1
[removed]
2025-06-22T16:37:11
https://www.reddit.com/r/LocalLLaMA/comments/1lhs85n/best_opensource_llm_for_summarizing_german_course/
Sea-Woodpecker-2594
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhs85n
false
null
t3_1lhs85n
/r/LocalLLaMA/comments/1lhs85n/best_opensource_llm_for_summarizing_german_course/
false
false
self
1
null
Most Suitable Model for Text Classification
1
[removed]
2025-06-22T16:24:18
https://www.reddit.com/r/LocalLLaMA/comments/1lhrx54/most_suitable_model_for_text_classification/
Jason_Wesley
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhrx54
false
null
t3_1lhrx54
/r/LocalLLaMA/comments/1lhrx54/most_suitable_model_for_text_classification/
false
false
self
1
null
Experience running llms on CPU only
1
[removed]
2025-06-22T15:46:39
https://www.reddit.com/r/LocalLLaMA/comments/1lhr0wl/experience_running_llms_on_cpu_only/
82shadesofgrey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhr0wl
false
null
t3_1lhr0wl
/r/LocalLLaMA/comments/1lhr0wl/experience_running_llms_on_cpu_only/
false
false
self
1
null
[New Features & Better] Tabulens: A Vision-LLM Powered PDF Table Extractor
1
[removed]
2025-06-22T15:43:19
https://www.reddit.com/r/LocalLLaMA/comments/1lhqy4c/new_features_better_tabulens_a_visionllm_powered/
PleasantInspection12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhqy4c
false
null
t3_1lhqy4c
/r/LocalLLaMA/comments/1lhqy4c/new_features_better_tabulens_a_visionllm_powered/
false
false
https://b.thumbs.redditm…HLK6_q1XjOpA.jpg
1
null
Axelera Metis AI card: usable for local inference?
1
[removed]
2025-06-22T14:50:59
https://axelera.ai/ai-accelerators/metis-pcie-ai-acceleration-card
Nilithium
axelera.ai
1970-01-01T00:00:00
0
{}
1lhpq4b
false
null
t3_1lhpq4b
/r/LocalLLaMA/comments/1lhpq4b/axelera_metis_ai_card_usable_for_local_inference/
false
false
https://external-preview…7f6720c0b6b397ef
1
{'enabled': False, 'images': [{'id': '7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw.jpeg?width=108&crop=smart&auto=webp&s=77c2ba2108ee5d3134c0d43f25b4fbd2adae6ff8', 'width': 108}, {'height': 144, 'url': '...
1
2025-06-22T14:37:16
https://youtube.com/@dmmloungetv?si=CuJT3pqGgBNAfi25
Pitiful_Engine_6901
youtube.com
1970-01-01T00:00:00
0
{}
1lhpeqi
false
null
t3_1lhpeqi
/r/LocalLLaMA/comments/1lhpeqi/我/
false
false
https://external-preview…f5fa07bf432679ac
1
{'enabled': False, 'images': [{'id': 'AlLsdznZBxfrjilYbyEbxKnIVXPPEhzx6TQcG42POFg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/AlLsdznZBxfrjilYbyEbxKnIVXPPEhzx6TQcG42POFg.jpeg?width=108&crop=smart&auto=webp&s=bf29d39a3ccc7d0c79a4cc47fb651fc2205cace4', 'width': 108}, {'height': 216, 'url': ...
cosine similarity encoders question
1
[removed]
2025-06-22T13:44:54
https://www.reddit.com/r/LocalLLaMA/comments/1lho8pm/cosine_similarity_encoders_question/
Affectionate-Tax2179
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lho8pm
false
null
t3_1lho8pm
/r/LocalLLaMA/comments/1lho8pm/cosine_similarity_encoders_question/
false
false
self
1
null
Interested in encoding and cosine similarity for Qwen/InternVL
1
[removed]
2025-06-22T13:40:36
https://www.reddit.com/r/LocalLLaMA/comments/1lho5e6/interested_in_encoding_and_cosine_similarity_for/
Big-Horse-6181
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lho5e6
false
null
t3_1lho5e6
/r/LocalLLaMA/comments/1lho5e6/interested_in_encoding_and_cosine_similarity_for/
false
false
self
1
null
Is it worth it to try IQ3 (or Q3) quants to fit more context.
1
[removed]
2025-06-22T13:37:04
https://www.reddit.com/r/LocalLLaMA/comments/1lho2m4/is_it_worth_it_to_try_iq3_or_q3_quants_to_fit/
KeinNiemand
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lho2m4
false
null
t3_1lho2m4
/r/LocalLLaMA/comments/1lho2m4/is_it_worth_it_to_try_iq3_or_q3_quants_to_fit/
false
false
self
1
null
Cost effective batch inference
1
[removed]
2025-06-22T12:48:10
https://www.reddit.com/r/LocalLLaMA/comments/1lhn2z9/cost_effective_batch_inference/
Sea-Quiet-229
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhn2z9
false
null
t3_1lhn2z9
/r/LocalLLaMA/comments/1lhn2z9/cost_effective_batch_inference/
false
false
self
1
null
Benchmarking
1
[removed]
2025-06-22T12:42:11
https://www.reddit.com/r/LocalLLaMA/comments/1lhmyvn/benchmarking/
chisleu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhmyvn
false
null
t3_1lhmyvn
/r/LocalLLaMA/comments/1lhmyvn/benchmarking/
false
false
self
1
null
🔥 Free Year of Perplexity Pro for Samsung Galaxy Users (and maybe emulator users too…
1
[removed]
2025-06-22T11:55:10
https://www.reddit.com/r/LocalLLaMA/comments/1lhm4dz/free_year_of_perplexity_pro_for_samsung_galaxy/
PrettyRevolution1842
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhm4dz
false
null
t3_1lhm4dz
/r/LocalLLaMA/comments/1lhm4dz/free_year_of_perplexity_pro_for_samsung_galaxy/
false
false
self
1
null
LLM SUGGESTIONS PLEASE
1
[removed]
2025-06-22T11:33:14
https://www.reddit.com/r/LocalLLaMA/comments/1lhlrav/llm_suggestions_please/
Radiant_Truth_8743
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhlrav
false
null
t3_1lhlrav
/r/LocalLLaMA/comments/1lhlrav/llm_suggestions_please/
false
false
self
1
null
Found this amazing RAG for medical research backed answers. (askmedically.com)
0
[removed]
2025-06-22T10:58:01
https://www.reddit.com/gallery/1lhl71b
ashutrv
reddit.com
1970-01-01T00:00:00
0
{}
1lhl71b
false
null
t3_1lhl71b
/r/LocalLLaMA/comments/1lhl71b/found_this_amazing_rag_for_medical_research/
false
false
https://external-preview…f819473fdaba1d30
0
{'enabled': True, 'images': [{'id': '4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts.jpeg?width=108&crop=smart&auto=webp&s=7f941cda492a36d930437f411010cf5bbecb3363', 'width': 108}, {'height': 432, 'url': '...
LLM Assistant with function calling - Update 2
1
[removed]
2025-06-22T10:45:59
http://rivridis.com/windows-assistant
Rivridis
rivridis.com
1970-01-01T00:00:00
0
{}
1lhl0g0
false
null
t3_1lhl0g0
/r/LocalLLaMA/comments/1lhl0g0/llm_assistant_with_function_calling_update_2/
false
false
default
1
null
Anyone solved Unsqueeze matcher issues when converting ONNX to Caffe using YAML + dvconvert?
1
[removed]
2025-06-22T10:26:34
https://www.reddit.com/r/LocalLLaMA/comments/1lhkqag/anyone_solved_unsqueeze_matcher_issues_when/
Soft_Examination1158
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhkqag
false
null
t3_1lhkqag
/r/LocalLLaMA/comments/1lhkqag/anyone_solved_unsqueeze_matcher_issues_when/
false
false
self
1
null
Best local llm for 6vcpu 13gig ram vps no gpu
1
[removed]
2025-06-22T10:08:23
https://www.reddit.com/r/LocalLLaMA/comments/1lhkgm3/best_local_llm_for_6vcpu_13gig_ram_vps_no_gpu/
jayn35
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhkgm3
false
null
t3_1lhkgm3
/r/LocalLLaMA/comments/1lhkgm3/best_local_llm_for_6vcpu_13gig_ram_vps_no_gpu/
false
false
self
1
null
Huge differeance in inference speed between 3090 ?
1
[removed]
2025-06-22T10:04:46
https://www.reddit.com/r/LocalLLaMA/comments/1lhkepw/huge_differeance_in_inference_speed_between_3090/
vdiallonort
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhkepw
false
null
t3_1lhkepw
/r/LocalLLaMA/comments/1lhkepw/huge_differeance_in_inference_speed_between_3090/
false
false
self
1
null
9070 XTs for AI?
1
[removed]
2025-06-22T09:28:25
https://www.reddit.com/r/LocalLLaMA/comments/1lhjvvi/9070_xts_for_ai/
RepresentativeCut486
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhjvvi
false
null
t3_1lhjvvi
/r/LocalLLaMA/comments/1lhjvvi/9070_xts_for_ai/
false
false
self
1
null
I built MAI: A fully self-hosted emotional AI assistant with voice, memory, and sentiment analysis—Ghost in the Shell vibes included
1
[removed]
2025-06-22T08:38:28
https://v.redd.it/g32u1k4jxf8f1
nomorecrackpl
v.redd.it
1970-01-01T00:00:00
0
{}
1lhj691
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/g32u1k4jxf8f1/DASHPlaylist.mpd?a=1753173522%2CMGUxMzFhZTAwYzQ5MTkyZDBkYzI5NGI2ZGMxODE4NGY2MjRjYjk0ZDNiOTVkMGUyYWI2ZTRkNDhmYTRjOGFiMg%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/g32u1k4jxf8f1/DASH_1080.mp4?source=fallback', 'h...
t3_1lhj691
/r/LocalLLaMA/comments/1lhj691/i_built_mai_a_fully_selfhosted_emotional_ai/
false
false
https://external-preview…92f808b13fa1cd1f
1
{'enabled': False, 'images': [{'id': 'd2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd.png?width=108&crop=smart&format=pjpg&auto=webp&s=dacd1bf232802649f406dc65164b0912de626...
Seeking Advice for On-Premise LLM Roadmap for Enterprise Customer Care (Llama/Mistral, Ollama, Hardware)
1
[removed]
2025-06-22T08:36:00
https://www.reddit.com/r/LocalLLaMA/comments/1lhj4yr/seeking_advice_for_onpremise_llm_roadmap_for/
Worth_Rabbit_6262
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhj4yr
false
null
t3_1lhj4yr
/r/LocalLLaMA/comments/1lhj4yr/seeking_advice_for_onpremise_llm_roadmap_for/
false
false
self
1
null
How much performance am I losing using chipset vs CPU lanes on 3080ti?
8
I have a 3080ti and an MSI Z790 gaming plus wifi. For some reason my pcie slot with the cpu lanes isn’t working. The chipset one works fine. How much performance should I expect to lose with local llama?
2025-06-22T07:34:33
https://www.reddit.com/r/LocalLLaMA/comments/1lhi8p8/how_much_performance_am_i_losing_using_chipset_vs/
FactoryReboot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhi8p8
false
null
t3_1lhi8p8
/r/LocalLLaMA/comments/1lhi8p8/how_much_performance_am_i_losing_using_chipset_vs/
false
false
self
8
null
Best open agentic coding assistants that don’t need an OpenAI key?
49
Looking for ai dev tools that actually let you use your own models, something agent-style that can analyse multiple files, track goals, and suggest edits/refactors, ideally all within vscode or terminal. I’ve used Copilot’s agent mode, but it’s obviously tied to OpenAI. I’m more interested in Tools that work with loc...
2025-06-22T07:03:21
https://www.reddit.com/r/LocalLLaMA/comments/1lhhs1r/best_open_agentic_coding_assistants_that_dont/
Fabulous_Bluebird931
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhhs1r
false
null
t3_1lhhs1r
/r/LocalLLaMA/comments/1lhhs1r/best_open_agentic_coding_assistants_that_dont/
false
false
self
49
null
[OpenSource]Multi-LLM client - LLM Bridge
21
Previously, I created a separate LLM client for Ollama for iOS and MacOS and released it as open source, but I recreated it by integrating iOS and MacOS codes and adding APIs that support them based on Swift/SwiftUI. https://preview.redd.it/00dq12p66f8f1.jpg?width=2880&format=pjpg&auto=webp&s=5b97237c3558709596ef0396...
2025-06-22T06:05:04
https://www.reddit.com/r/LocalLLaMA/comments/1lhgvq4/opensourcemultillm_client_llm_bridge/
billythepark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhgvq4
false
null
t3_1lhgvq4
/r/LocalLLaMA/comments/1lhgvq4/opensourcemultillm_client_llm_bridge/
false
false
https://a.thumbs.redditm…n2vbcaJg5ZW0.jpg
21
{'enabled': False, 'images': [{'id': '10mCBOjQL0RLrB--BfVKVkZcSDhwfEFJ4fJJfr9rSTA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/10mCBOjQL0RLrB--BfVKVkZcSDhwfEFJ4fJJfr9rSTA.png?width=108&crop=smart&auto=webp&s=54701ef5fc670b94aeb263f2f5ef1644f13a5e25', 'width': 108}, {'height': 108, 'url': 'h...
50 days building a tiny language model from scratch, what I’ve learned so far
844
Hey folks, I’m starting a new weekday series on June 23 at 9:00 AM PST where I’ll spend 50 days coding a two LLM (15–30M parameters) from the ground up: no massive GPU cluster, just a regular laptop or modest GPU. Each post will cover one topic: * Data collection and subword tokenization * Embeddings and positional ...
2025-06-22T03:31:14
https://www.reddit.com/r/LocalLLaMA/comments/1lhed49/50_days_building_a_tiny_language_model_from/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhed49
false
null
t3_1lhed49
/r/LocalLLaMA/comments/1lhed49/50_days_building_a_tiny_language_model_from/
false
false
self
844
null
50 Days of Building a Small Language Model from Scratch
1
[removed]
2025-06-22T03:26:23
https://www.reddit.com/r/LocalLLaMA/comments/1lhea33/50_days_of_building_a_small_language_model_from/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhea33
false
null
t3_1lhea33
/r/LocalLLaMA/comments/1lhea33/50_days_of_building_a_small_language_model_from/
false
false
self
1
null
50 Days of Building a Small Language Model from Scratch
1
[removed]
2025-06-22T03:25:23
https://www.reddit.com/r/LocalLLaMA/comments/1lhe9gk/50_days_of_building_a_small_language_model_from/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhe9gk
false
null
t3_1lhe9gk
/r/LocalLLaMA/comments/1lhe9gk/50_days_of_building_a_small_language_model_from/
false
false
self
1
null
50 Days of Building a Small Language Model from Scratch
1
[removed]
2025-06-22T03:11:51
https://www.reddit.com/r/LocalLLaMA/comments/1lhe0w6/50_days_of_building_a_small_language_model_from/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhe0w6
false
null
t3_1lhe0w6
/r/LocalLLaMA/comments/1lhe0w6/50_days_of_building_a_small_language_model_from/
false
false
https://b.thumbs.redditm…EUWuZiSnwSxU.jpg
1
null
Agentic ai platform
0
Guys, I have been looking for an agentic ai plaform like dify with no luck. I need to build agentic ai for the financial domain. Running dify on docker throws so many errors while file processing. I have timried lyzr.ai. I am not technical and need something which has a clean UI. Flowise is throwing errors while instal...
2025-06-22T03:07:37
https://www.reddit.com/r/LocalLLaMA/comments/1lhdy7m/agentic_ai_platform/
monsterindian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhdy7m
false
null
t3_1lhdy7m
/r/LocalLLaMA/comments/1lhdy7m/agentic_ai_platform/
false
false
self
0
null
The Qwen Tokenizer Seems to be better than the Deepseek Tokenizer - Testing a 50-50 SLERP merge of the same two models (Qwen3-8B and DeepSeek-R1-0528-Qwen3-8B) with different tokenizers
136
I was interested in merging [DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) and [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) as they were both my two favorite under 10b\~ models, and finding the Deepseek distill especially impressive. Noted in their model card was the follo...
2025-06-22T03:01:18
https://www.reddit.com/r/LocalLLaMA/comments/1lhdu5q/the_qwen_tokenizer_seems_to_be_better_than_the/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhdu5q
false
null
t3_1lhdu5q
/r/LocalLLaMA/comments/1lhdu5q/the_qwen_tokenizer_seems_to_be_better_than_the/
false
false
https://b.thumbs.redditm…GwcyoUYm0Eqo.jpg
136
{'enabled': False, 'images': [{'id': 'sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s.png?width=108&crop=smart&auto=webp&s=21698fa4359145798ca9e06dbf89b0d063f7c18a', 'width': 108}, {'height': 116, 'url': 'h...
ChatGPT alike local web ui for apple silicon?
9
I am looking for a specific AI software that I can run on my Mac that lets me have a web ui with ChatGPT alike functions: uploading files, web search and possibly even deep research? Is there anything out there like this I can run locally and free?
2025-06-22T02:25:04
https://www.reddit.com/r/LocalLLaMA/comments/1lhd69y/chatgpt_alike_local_web_ui_for_apple_silicon/
IntrigueMe_1337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhd69y
false
null
t3_1lhd69y
/r/LocalLLaMA/comments/1lhd69y/chatgpt_alike_local_web_ui_for_apple_silicon/
false
false
self
9
null
Some Observations using the RTX 6000 PRO Blackwell.
129
Thought I would share some thoughts playing around with the RTX 6000 Pro 96GB Blackwell Workstation edition. Using the card inside a Razer Core X GPU enclosure: 1. I bought this bracket ([link](https://www.etsy.com/listing/1293010019/razer-core-x-bracket-for-corsair-power?ref=cart)) and replaced the Razer Core X powe...
2025-06-22T02:17:39
https://www.reddit.com/r/LocalLLaMA/comments/1lhd1j0/some_observations_using_the_rtx_6000_pro_blackwell/
Aroochacha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhd1j0
false
null
t3_1lhd1j0
/r/LocalLLaMA/comments/1lhd1j0/some_observations_using_the_rtx_6000_pro_blackwell/
false
false
self
129
{'enabled': False, 'images': [{'id': 'Oir1aSDsQ0h01B6c6Y0LuLAIUWhPG5VI3k42BpGr9hQ', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/Oir1aSDsQ0h01B6c6Y0LuLAIUWhPG5VI3k42BpGr9hQ.jpeg?width=108&crop=smart&auto=webp&s=5be3acce185287f211a6e62a7e787d9b87eea6ee', 'width': 108}, {'height': 288, 'url': ...
[The image marking & reverse engineering tool] Tutu's Super Smart Marker 1.0.5 update notes (with practical tutorial)
1
[removed]
2025-06-22T02:16:21
https://v.redd.it/3k74i07v0e8f1
Key-Consequence5367
/r/LocalLLaMA/comments/1lhd0pl/the_image_marking_reverse_engineering_tool_tutus/
1970-01-01T00:00:00
0
{}
1lhd0pl
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3k74i07v0e8f1/DASHPlaylist.mpd?a=1753280188%2CNmNjMDIyOTlhYThlNDllN2I5ZTUzYWE4OGY1YzBlNjBiMjc5OTc5ZDYxMGU3MjI4YmVjMDFiNmM0NWI2YzE4Mw%3D%3D&v=1&f=sd', 'duration': 654, 'fallback_url': 'https://v.redd.it/3k74i07v0e8f1/DASH_1080.mp4?source=fallback', '...
t3_1lhd0pl
/r/LocalLLaMA/comments/1lhd0pl/the_image_marking_reverse_engineering_tool_tutus/
false
false
default
1
null
[The best image marking & reverse engineering tool] Tutu's Super Smart Marker 1.0.5 update notes (with practical tutorial)
1
[removed]
2025-06-22T02:10:05
https://v.redd.it/77zef24uzd8f1
Key-Consequence5367
/r/LocalLLaMA/comments/1lhcwm0/最好用的图片打标反推神器图图的超级智能打标器105版本更新说明附实操教程/
1970-01-01T00:00:00
0
{}
1lhcwm0
false
null
t3_1lhcwm0
/r/LocalLLaMA/comments/1lhcwm0/最好用的图片打标反推神器图图的超级智能打标器105版本更新说明附实操教程/
false
false
default
1
null
AI project, kind of crazy
0
Alright, it's time. I've been thinking about this for a while, and I'm finally ready to dive in. This will be a journey, and I know I won’t be able to do it alone so if you’re interested, DM me. Happy to share the upside if it works. This isn’t a breakthrough idea. It’s a real, practical attempt at something many of...
2025-06-22T01:48:44
https://www.reddit.com/r/LocalLLaMA/comments/1lhciq9/ai_project_kind_of_crazy/
humanoid64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhciq9
false
null
t3_1lhciq9
/r/LocalLLaMA/comments/1lhciq9/ai_project_kind_of_crazy/
false
false
self
0
null
Is QWEN online service quantized?
0
I've made several translation tests using QWEN3 235B IQ4\_XS with KV cache at f16 vs the one on their website. Often, the translation I get locally is as good or a tiny bit better than the online version. Is it possible than wanting to save on servers infrastructure, they serve some of their models at 4bits ?
2025-06-22T01:06:59
https://www.reddit.com/r/LocalLLaMA/comments/1lhbr86/is_qwen_online_service_quantized/
DrVonSinistro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhbr86
false
null
t3_1lhbr86
/r/LocalLLaMA/comments/1lhbr86/is_qwen_online_service_quantized/
false
false
self
0
null
A Great Breakdown of the "Disney vs Midjourney" Lawsuit Case
26
As you all know by now, Disney has sued Midjourney on the basis that the latter trained its AI image generating models on copyrighted materials. This is a serious case that we all should follow up closely. LegalEagle broke down the case in their new YouTube video linked below: [https://www.youtube.com/watch?v=zp...
2025-06-22T00:51:14
https://www.reddit.com/r/LocalLLaMA/comments/1lhbgcn/a_great_breakdown_of_the_disney_vs_midjourney/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lhbgcn
false
null
t3_1lhbgcn
/r/LocalLLaMA/comments/1lhbgcn/a_great_breakdown_of_the_disney_vs_midjourney/
false
false
self
26
{'enabled': False, 'images': [{'id': 'GEf9AtUXnr4MI62GfIOXoVmrEP2VWkLczHVq2J1XNJY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/GEf9AtUXnr4MI62GfIOXoVmrEP2VWkLczHVq2J1XNJY.jpeg?width=108&crop=smart&auto=webp&s=bc3822579336fc0dc96b5a12e4295538d2fe70a5', 'width': 108}, {'height': 162, 'url': '...
Embedding With LM Studio - what am i doing wrong
8
I've updated LM Studio to 0.3.17 (build 7) and trying to run embedding models in the developer tab so that i can push it to AnythingLLM where my work is. funny thing is , the original "text-embedding-nomic-embed-text-v1.5" loads fine and works with Anything. but text-embedding-qwen3-embedding-0.6b & 8B and any other...
2025-06-22T00:15:04
https://www.reddit.com/r/LocalLLaMA/comments/1lharbh/embedding_with_lm_studio_what_am_i_doing_wrong/
uber-linny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lharbh
false
null
t3_1lharbh
/r/LocalLLaMA/comments/1lharbh/embedding_with_lm_studio_what_am_i_doing_wrong/
false
false
self
8
null
Building a Home AI Guardian System
1
[removed]
2025-06-21T21:30:20
https://www.reddit.com/r/LocalLLaMA/comments/1lh7d3i/building_a_home_ai_guardian_system/
HomeLlama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh7d3i
false
null
t3_1lh7d3i
/r/LocalLLaMA/comments/1lh7d3i/building_a_home_ai_guardian_system/
false
false
self
1
null
Which AI/LLM can I run on my 16 GB M3 Macbook Air for helping me learn from PDFs or epubs and it can run without internet access?
2
I don't have much technical knowledge about AI/LLM, just dabbling to do simple textual interactions. I need help to find if I can run a local and offline AI or LLM on my macbook which will help me study and read loads of epubs and pdf files. Basically the AI can go through the contents and help me learn. I will be of...
2025-06-21T21:09:28
https://www.reddit.com/r/LocalLLaMA/comments/1lh6wvk/which_aillm_can_i_run_on_my_16_gb_m3_macbook_air/
DoiMach
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh6wvk
false
null
t3_1lh6wvk
/r/LocalLLaMA/comments/1lh6wvk/which_aillm_can_i_run_on_my_16_gb_m3_macbook_air/
false
false
self
2
null
Anyone using JetBrains/Rider?
9
I heard their IDEs can integrate with locally running models, so im searching for people who know about this! Have you tried this out? Is it possible? Any quirks? Thanks in advance!
2025-06-21T20:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1lh66t7/anyone_using_jetbrainsrider/
CSEliot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh66t7
false
null
t3_1lh66t7
/r/LocalLLaMA/comments/1lh66t7/anyone_using_jetbrainsrider/
false
false
self
9
null
Built a LiteLLM adapter for locally hosted HuggingFace models on your machine because local transformers deserved the OpenAI API treatment
27
**TL;DR**: Made local HuggingFace transformers work through LiteLLM's OpenAI-compatible interface. No more API inconsistencies between local and cloud models. Feel free to use it or help me enriching and making it more mature Hey everyone! So here's the thing LiteLLM is AMAZING for calling 100+ LLM providers through...
2025-06-21T20:03:10
https://www.reddit.com/r/LocalLLaMA/comments/1lh5gwl/built_a_litellm_adapter_for_locally_hosted/
arkbhatta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh5gwl
false
null
t3_1lh5gwl
/r/LocalLLaMA/comments/1lh5gwl/built_a_litellm_adapter_for_locally_hosted/
false
false
self
27
{'enabled': False, 'images': [{'id': 'H6fYCL0IdaUUhXSvGrJA54iiawydndRntwWO9LlIKYQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/H6fYCL0IdaUUhXSvGrJA54iiawydndRntwWO9LlIKYQ.png?width=108&crop=smart&auto=webp&s=df790dae35941c0d078d899fa14b5aeaa1cf7767', 'width': 108}, {'height': 121, 'url': 'h...
Best uncensored LLM
0
What is the best local LLM which is uncensored and good, even in complex tasks like programming?
2025-06-21T19:59:47
https://www.reddit.com/r/LocalLLaMA/comments/1lh5e04/best_uncensored_llm/
Dizzy_Opposite3363
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh5e04
false
null
t3_1lh5e04
/r/LocalLLaMA/comments/1lh5e04/best_uncensored_llm/
false
false
self
0
null
Qwen3 is very.... talkative? And yet not very... focused?
10
Messing around with some local models, and I kept seeing Qwen3 recommended so I thought I'd play around with it. Give it a simple question like "how big is the moon" or "write a limerick about the sea" and it'll .... write about 1000 words on how to define the moon and why you might measure it in meters instead of mi...
2025-06-21T19:40:34
https://www.reddit.com/r/LocalLLaMA/comments/1lh4ynv/qwen3_is_very_talkative_and_yet_not_very_focused/
nirurin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh4ynv
false
null
t3_1lh4ynv
/r/LocalLLaMA/comments/1lh4ynv/qwen3_is_very_talkative_and_yet_not_very_focused/
false
false
self
10
null
XAI's Slack must be comedy
1
https://preview.redd.it/…t's impressive.
2025-06-21T19:30:15
https://www.reddit.com/r/LocalLLaMA/comments/1lh4qgs/xais_slack_must_be_comedy/
Longjumping-Solid563
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh4qgs
false
null
t3_1lh4qgs
/r/LocalLLaMA/comments/1lh4qgs/xais_slack_must_be_comedy/
false
false
https://b.thumbs.redditm…H2SXpGojjFGA.jpg
1
null
Still confused about Memory (mem0) integration into llamaindex AgentWorkflow
2
So as the title clearly states: I'm really confused about how mem0 works with the LlamaIndex AgentWorkflow class. Let me explain. Yes, I understood that mem0, for example, is used to hold context long term to understand user preferences, etc. However, as I was reading this page from the docs: [https://docs.mem0.ai...
2025-06-21T19:23:20
https://www.reddit.com/r/LocalLLaMA/comments/1lh4l30/still_confused_about_memory_mem0_integration_into/
ProfessionalDress259
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh4l30
false
null
t3_1lh4l30
/r/LocalLLaMA/comments/1lh4l30/still_confused_about_memory_mem0_integration_into/
false
false
self
2
{'enabled': False, 'images': [{'id': 'yhLyZIOlPjfsTAaaTBXQ9FEg5m1ZLHQUbTjfVNnuXk8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/yhLyZIOlPjfsTAaaTBXQ9FEg5m1ZLHQUbTjfVNnuXk8.png?width=108&crop=smart&auto=webp&s=986053c6da380d0fd5291ec1f848ffcb4008914a', 'width': 108}, {'height': 113, 'url': 'h...
Abstracting the Prompt and Context
0
If large language models are a new operating system, and natural English is the programming language, then what are the abstraction methods? One of the fundamental problems is that each model is trained / tuned in different ways and responds very differently to explicit or implicit English instructions. We have loos...
2025-06-21T19:13:28
https://www.reddit.com/r/LocalLLaMA/comments/1lh4d6r/abstracting_the_prompt_and_context/
RMCPhoto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh4d6r
false
null
t3_1lh4d6r
/r/LocalLLaMA/comments/1lh4d6r/abstracting_the_prompt_and_context/
false
false
self
0
null
From Arch-Function to Arch-Agent. Designed for fast multi-step, multi-turn workflow orchestration in agents.
84
Hello - in the past i've shared my work around [function-calling](https://www.reddit.com/r/LocalLLaMA/comments/1hr9ll1/i_built_a_small_function_calling_llm_that_packs_a/) on this sub. The encouraging feedback and usage (over 100k downloads 🤯) has gotten me and my team cranking away. Six months from our initial launch...
2025-06-21T18:20:02
https://i.redd.it/n7hvejg7kb8f1.png
AdditionalWeb107
i.redd.it
1970-01-01T00:00:00
0
{}
1lh359d
false
null
t3_1lh359d
/r/LocalLLaMA/comments/1lh359d/from_archfunction_to_archagent_designed_for_fast/
false
false
default
84
{'enabled': True, 'images': [{'id': 'n7hvejg7kb8f1', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/n7hvejg7kb8f1.png?width=108&crop=smart&auto=webp&s=903edc66dc08aafb66affa8e74636840e4af4198', 'width': 108}, {'height': 54, 'url': 'https://preview.redd.it/n7hvejg7kb8f1.png?width=216&crop=smart&auto=webp...
How to fine-tune and things required to fine-tune a Language Model?
8
I am a beginner in machine learning and language models. I am currently studying Small Language Models and I want to fine-tune SLMs for specific tasks. I know about different fine-tuning methods in concept but don't know how to implement/apply any of that in code or in a practical way. My questions are - 1. How much...
2025-06-21T18:17:00
https://www.reddit.com/r/LocalLLaMA/comments/1lh32t8/how_to_finetune_and_things_required_to_finetune_a/
No_Requirement9600
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh32t8
false
null
t3_1lh32t8
/r/LocalLLaMA/comments/1lh32t8/how_to_finetune_and_things_required_to_finetune_a/
false
false
self
8
null
Moore Threads: An overlooked possibility for cheap local LLM inference?
1
There's a Chinese company called Moore Threads which makes very mediocre but affordable gaming GPUs, including the MTT S80, **which is $170 for 16GB**. Of course, no CUDA or Vulkan, but even so, with how expensive even used mining cards are nowadays, it might be a very good choice for affordably running very large mode...
2025-06-21T18:16:17
https://www.reddit.com/r/LocalLLaMA/comments/1lh328r/moore_threads_an_overlooked_possibility_for_cheap/
HugoCortell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh328r
false
null
t3_1lh328r
/r/LocalLLaMA/comments/1lh328r/moore_threads_an_overlooked_possibility_for_cheap/
false
false
self
1
null
CEO Bench: Can AI Replace the C-Suite?
226
I put together a (slightly tongue in cheek) benchmark to test some LLMs. All open source and all the data is in the repo. It makes use of the excellent `llm` Python package from Simon Willison. I've only benchmarked a couple of local models but want to see what the smallest LLM is that will score above the estimated ...
2025-06-21T17:49:08
https://ceo-bench.dave.engineer/
dave1010
ceo-bench.dave.engineer
1970-01-01T00:00:00
0
{}
1lh2ffp
false
null
t3_1lh2ffp
/r/LocalLLaMA/comments/1lh2ffp/ceo_bench_can_ai_replace_the_csuite/
false
false
default
226
null
Voice Cloning model that allows training on longer audio
3
Hi, I'm trying to find a TTS model that allows more reference audio to clone a voice, as they often only take up to 30 seconds of audio to base the voice on. However, I have characters with audio ranging from 30 minutes to 8 hours, so I want a model I can train to get the most out of it. Any suggestions?
2025-06-21T17:39:56
https://www.reddit.com/r/LocalLLaMA/comments/1lh27fn/voice_cloning_model_that_allows_training_on/
Back-Rare
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh27fn
false
null
t3_1lh27fn
/r/LocalLLaMA/comments/1lh27fn/voice_cloning_model_that_allows_training_on/
false
false
self
3
null
System prompt caching with persistent state augmented retrieval
0
I have this use case where I needed to process a fairly large contexts repeatedly with local CPU only inference capabilities. In my testing, prompt processing took as long as 45 seconds. Trying to setup KV caching I discovered (shamefully) that llama cpp and python bindings do support caching out of the box and even ...
2025-06-21T17:37:37
https://www.reddit.com/r/LocalLLaMA/comments/1lh25j3/system_prompt_caching_with_persistent_state/
Fluid-Age-9266
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh25j3
false
null
t3_1lh25j3
/r/LocalLLaMA/comments/1lh25j3/system_prompt_caching_with_persistent_state/
false
false
self
0
null
Deepseekv3-0324 671b LORA training
12
Is there a way currently to train LORAs off of Deepseekv3-0324 (671b) given that there is no huggingface transformers support yet?
2025-06-21T17:07:26
https://www.reddit.com/r/LocalLLaMA/comments/1lh1gkh/deepseekv30324_671b_lora_training/
triestdain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh1gkh
false
null
t3_1lh1gkh
/r/LocalLLaMA/comments/1lh1gkh/deepseekv30324_671b_lora_training/
false
false
self
12
null
how many people will tolerate slow speed for running LLM locally?
116
Just want to check: how many people will tolerate slow speed for privacy?
2025-06-21T16:36:11
https://www.reddit.com/r/LocalLLaMA/comments/1lh0qb9/how_many_people_will_tolerate_slow_speed_for/
OwnSoup8888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh0qb9
false
null
t3_1lh0qb9
/r/LocalLLaMA/comments/1lh0qb9/how_many_people_will_tolerate_slow_speed_for/
false
false
self
116
null
Autopaste MFAs from Gmail using LLaMA
52
Inspired by Apple's "insert code from SMS" feature, made a tool to speed up the process of inserting incoming email MFAs: [https://github.com/yahorbarkouski/auto-mfa](https://github.com/yahorbarkouski/auto-mfa) Connect accounts, choose LLM provider (Ollama supported), add a system shortcut targeting the script, and en...
2025-06-21T16:32:58
https://www.reddit.com/r/LocalLLaMA/comments/1lh0noy/autopaste_mfas_from_gmail_using_llama/
samewakefulinsomnia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh0noy
false
null
t3_1lh0noy
/r/LocalLLaMA/comments/1lh0noy/autopaste_mfas_from_gmail_using_llama/
false
false
self
52
{'enabled': False, 'images': [{'id': 'gCAViH5hF4TxEPmo4563WW0QGV5l7QbzcnAoWjvNruM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gCAViH5hF4TxEPmo4563WW0QGV5l7QbzcnAoWjvNruM.png?width=108&crop=smart&auto=webp&s=b6c8d48fb32fa8252364d52009d7d60921355114', 'width': 108}, {'height': 108, 'url': 'h...
RTX 6000 Pro Blackwell
10
Had 2+4 RTX 3090 server for local projects. Manageable if run under-powered. The 3090s still seem like a great value, but start feeling dated. Thinking of getting a single RTX 6000 Pro 96GB Blackwell. \~2.5-3x cost of 4 x 3090. Would love to hear your opinions. Pros: More VRAM, very easy to run, much faster inferen...
2025-06-21T16:30:37
https://www.reddit.com/r/LocalLLaMA/comments/1lh0lqd/rtx_6000_pro_blackwell/
val_in_tech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh0lqd
false
null
t3_1lh0lqd
/r/LocalLLaMA/comments/1lh0lqd/rtx_6000_pro_blackwell/
false
false
self
10
null
Ollama alternatives
17
I have a Linux Ubuntu server with 192GB RAM and a GeForce RTX 4090 GPU. I've been creating some Python apps lately using ollama and langchain with models like gemma3:27b. I know ollama and langchain are both not the most cutting-edge tools. I am pretty good at programming and configuration so could probably move on...
2025-06-21T16:20:32
https://www.reddit.com/r/LocalLLaMA/comments/1lh0div/ollama_alternatives/
Maleficent_Payment44
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh0div
false
null
t3_1lh0div
/r/LocalLLaMA/comments/1lh0div/ollama_alternatives/
false
false
self
17
null
Copilot Replacement
0
I started working at a company that only works with GH Copilot recently. It’s been terrible. I’m wondering whether running a local reasoning model might perform better. Please advise. Work Macbook: M2 pro 16 GB. Let me know if anything needs to be clarified in order to move forward. Thanks!
2025-06-21T16:17:48
https://www.reddit.com/r/LocalLLaMA/comments/1lh0bdk/copilot_replacement/
Few_Speaker_9537
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lh0bdk
false
null
t3_1lh0bdk
/r/LocalLLaMA/comments/1lh0bdk/copilot_replacement/
false
false
self
0
null
Build DeepSeek-R1-Distill-Qwen-7B from Scratch
0
I'm a big fan of Sebastian Raschka's earlier work on LLMs from scratch. He recently switched from Llama to Qwen (a switch I recently made too thanks to someone in this subreddit) and wrote a Jupyter notebook implementing Qwen3 from scratch. Highly recommend this resource as a learning project.
2025-06-21T15:36:20
https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/11_qwen3
entsnack
github.com
1970-01-01T00:00:00
0
{}
1lgzd58
false
null
t3_1lgzd58
/r/LocalLLaMA/comments/1lgzd58/build_deepseekr1distillqwen7b_from_scratch/
false
false
default
0
null
Xiaomi Mimo RL 7b vs Qwen 3 8b
1
Hi, I need an AI model to pair with Owl AI (a Manus alternative). I need an AI that excels in analysis, coding, task planning and automation. I'm undecided between Xiaomi Mimo RL 7b and Qwen 3 8b (I can only run models with max 8b parameters). Which one do you guys recommend?
2025-06-21T15:28:05
https://www.reddit.com/r/LocalLLaMA/comments/1lgz6c7/xiaomi_mimo_rl_7b_vs_qwen_3_8b/
thepaganalchemist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgz6c7
false
null
t3_1lgz6c7
/r/LocalLLaMA/comments/1lgz6c7/xiaomi_mimo_rl_7b_vs_qwen_3_8b/
false
false
self
1
null
Question about throughput of individual requests on a single GPU
0
What do you use to maximize the throughput of LLMs for a single request? I'm going to use it locally for Roo Code, and you know, the higher the tk/s per request, the faster it works. I have a 5080, but I can easily run 14B models at 80 tk/s or 24B models (quantized to Q3_K_L) at 48-50 tk/s with llama.cpp.
2025-06-21T15:17:11
https://www.reddit.com/r/LocalLLaMA/comments/1lgyxkc/question_about_throughput_of_individual_requests/
ajmusic15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgyxkc
false
null
t3_1lgyxkc
/r/LocalLLaMA/comments/1lgyxkc/question_about_throughput_of_individual_requests/
false
false
self
0
null
Steering LLM outputs
58
**What is this?** * Optimising LLM proxy runs workflow that mixes instructions from multiple anchor prompts based on their weights * Weights are controlled via specially crafted artifact. The artifact connects back to the workflow over websockets and is able of sending/receiving data. * The artifact can pause or slow ...
2025-06-21T15:14:23
https://v.redd.it/0351w9ovpa8f1
Everlier
v.redd.it
1970-01-01T00:00:00
0
{}
1lgyv8a
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0351w9ovpa8f1/DASHPlaylist.mpd?a=1753110878%2CNTFhODNhNzhmNTdiM2MxNTRhY2I4OGIzZDc2ZDE4ODgyOWUyNzVkODZjNDkxNTcyOWZkMWM5ZWY4ZDEyZjAxMA%3D%3D&v=1&f=sd', 'duration': 96, 'fallback_url': 'https://v.redd.it/0351w9ovpa8f1/DASH_1080.mp4?source=fallback', 'h...
t3_1lgyv8a
/r/LocalLLaMA/comments/1lgyv8a/steering_llm_outputs/
false
false
https://external-preview…88579c650c473948
58
{'enabled': False, 'images': [{'id': 'NmN0cDU5bnZwYThmMcWg2kr7Oe9IfY8fGfsf43KXN8n2ZXafTDS0jzzrXQ6i', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/NmN0cDU5bnZwYThmMcWg2kr7Oe9IfY8fGfsf43KXN8n2ZXafTDS0jzzrXQ6i.png?width=108&crop=smart&format=pjpg&auto=webp&s=458e0befcfc8cc4bad57728ee49419a5fcb90...
Build Qwen3 from Scratch
72
I'm a big fan of Sebastian Raschka's earlier work on LLMs from scratch. He recently switched from Llama to Qwen (a switch I recently made too thanks to someone in this subreddit) and wrote a Jupyter notebook implementing Qwen3 from scratch. Highly recommend this resource as a learning project.
2025-06-21T14:41:27
https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/11_qwen3
entsnack
github.com
1970-01-01T00:00:00
0
{}
1lgy4wa
false
null
t3_1lgy4wa
/r/LocalLLaMA/comments/1lgy4wa/build_qwen3_from_scratch/
false
false
default
72
null
moonshotai/Kimi-VL-A3B-Thinking-2506 · Hugging Face
79
2025-06-21T14:36:31
https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1lgy12q
false
null
t3_1lgy12q
/r/LocalLLaMA/comments/1lgy12q/moonshotaikimivla3bthinking2506_hugging_face/
false
false
https://external-preview…e954694e30ef19ab
79
{'enabled': False, 'images': [{'id': 'nn6Om0LrvY9dh6qkhvLPezIS-aJdRaC0O6BpYJYgA5E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nn6Om0LrvY9dh6qkhvLPezIS-aJdRaC0O6BpYJYgA5E.png?width=108&crop=smart&auto=webp&s=b6a411809385a8832da3850c0f3d679bebfc629b', 'width': 108}, {'height': 116, 'url': 'h...
The "unbiased" r1 1776 seems to be obsessed with China
0
When given some meaningless text or short numbers, it talks about the Western accusations against China. When given any random date in the past, it finds (or hallucinates) scandals and accusations about China (and it responds in Chinese). When I asked about Israel, it talks about China. When I asked about 1984, it literally ...
2025-06-21T14:26:48
https://www.reddit.com/gallery/1lgxti0
Salty_Interest_1493
reddit.com
1970-01-01T00:00:00
0
{}
1lgxti0
false
null
t3_1lgxti0
/r/LocalLLaMA/comments/1lgxti0/the_unbiased_r1_1776_seems_to_be_obsessed_with/
false
false
https://external-preview…41d0b8abe5f8db9a
0
{'enabled': True, 'images': [{'id': 'xbXruVDSc5rDebGr5Xr53IpcOzTvsV7vi7A5BQwD2D0', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/xbXruVDSc5rDebGr5Xr53IpcOzTvsV7vi7A5BQwD2D0.png?width=108&crop=smart&auto=webp&s=cba724f58c10a5bc2eb5d629587dc45b06ab5b32', 'width': 108}, {'height': 432, 'url': 'h...
Using Qwen3 30b in Roo code
4
Does anyone have any experience using Qwen3 in Roo? Which parameters do you use? I use 8-bit quantization; results are meaningful, but far from perfect. Has anyone used the same model in the same configuration? Which parameters did you use? My params for llama.cpp: ``` -hf Qwen/Qwen3-30B-A3B-GGUF:Q8_0 \ -c 131072 --rope...
2025-06-21T14:26:01
https://www.reddit.com/r/LocalLLaMA/comments/1lgxswa/using_qwen3_30b_in_roo_code/
ArtisticHamster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgxswa
false
null
t3_1lgxswa
/r/LocalLLaMA/comments/1lgxswa/using_qwen3_30b_in_roo_code/
false
false
self
4
null
Self Adapting LLMs - legit?
126
I just came across the new MIT paper *Self-Adapting Language Models* (Zweiger et al., June 2025). The core idea is wild: * The LLM produces a **self-edit**—a chunk of text that can (a) rewrite / augment the input data, (b) pick hyper-parameters, or (c) call external tools for data augmentation or gradient updates. *...
2025-06-21T14:14:34
https://i.redd.it/rlhp01gfca8f1.png
Desperate_Rub_1352
i.redd.it
1970-01-01T00:00:00
0
{}
1lgxjw2
false
null
t3_1lgxjw2
/r/LocalLLaMA/comments/1lgxjw2/self_adapting_llms_legit/
false
false
default
126
{'enabled': True, 'images': [{'id': 'rlhp01gfca8f1', 'resolutions': [{'height': 134, 'url': 'https://preview.redd.it/rlhp01gfca8f1.png?width=108&crop=smart&auto=webp&s=b64fb70eef9e0567f59bf4cb042f140492f0bb9b', 'width': 108}, {'height': 268, 'url': 'https://preview.redd.it/rlhp01gfca8f1.png?width=216&crop=smart&auto=we...
Don’t Forget Error Handling with Agentic Workflows
0
This was a very interesting read. As our models get more complex, and get inserted into more workflows, it might be a good idea to have error handling wrapped around the agent calls to prevent undesired behavior.
2025-06-21T13:59:23
https://www.anthropic.com/research/agentic-misalignment
SignificanceNeat597
anthropic.com
1970-01-01T00:00:00
0
{}
1lgx7oy
false
null
t3_1lgx7oy
/r/LocalLLaMA/comments/1lgx7oy/dont_forget_error_handling_with_agentic_workflows/
false
false
https://external-preview…7bd7082223746bc7
0
{'enabled': False, 'images': [{'id': 'YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YqpkFbtVl5x_nWeH4WDeyrnRsjmO_TkNaHzW8-LXmr4.png?width=108&crop=smart&auto=webp&s=3858f721a29547fa04b2e0baa4c1d0e9bf05205b', 'width': 108}, {'height': 113, 'url': 'h...
Minimax-M1 is competitive with Gemini 2.5 Pro 05-06 on Fiction.liveBench Long Context Comprehension
89
2025-06-21T13:51:53
https://i.redd.it/o9sgqppkca8f1.png
fictionlive
i.redd.it
1970-01-01T00:00:00
0
{}
1lgx222
false
null
t3_1lgx222
/r/LocalLLaMA/comments/1lgx222/minimaxm1_is_competitive_with_gemini_25_pro_0506/
false
false
default
89
{'enabled': True, 'images': [{'id': 'o9sgqppkca8f1', 'resolutions': [{'height': 151, 'url': 'https://preview.redd.it/o9sgqppkca8f1.png?width=108&crop=smart&auto=webp&s=e7a22e13377921274a8b4fcac4aca5a745a6d5e8', 'width': 108}, {'height': 302, 'url': 'https://preview.redd.it/o9sgqppkca8f1.png?width=216&crop=smart&auto=we...
Someone Used a 1997 Processor and Showed That Only 128 MB of Ram Were Needed to Run a Modern AI—and Here's the Proof
0
"On the Pentium II, the 260K parameter Llama model processed 39.31 tokens per second—a far cry from the performance of more modern systems, but still a remarkable feat. Larger models, such as the 15M parameter version, ran slower, at just 1.03 tokens per second, but still far outstripped expectations."
2025-06-21T13:40:08
https://dailygalaxy.com/2025/06/someone-used-a-1997-processor-and-showed-that-only-128-mb-of-ram-were-needed-to-run-a-modern-ai-and-heres-the-proof/
tjthomas101
dailygalaxy.com
1970-01-01T00:00:00
0
{}
1lgwtbm
false
null
t3_1lgwtbm
/r/LocalLLaMA/comments/1lgwtbm/someone_used_a_1997_processor_and_showed_that/
false
false
default
0
{'enabled': False, 'images': [{'id': '6-IP1Fx_VH0qDehLYhAbUI9jqEe6w254wGvWor6bhMM', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/6-IP1Fx_VH0qDehLYhAbUI9jqEe6w254wGvWor6bhMM.jpeg?width=108&crop=smart&auto=webp&s=e5f880d85d5988490b8ce3e1e0c385b25a729441', 'width': 108}, {'height': 130, 'url': '...
DeepSeek Guys Open-Source nano-vLLM
621
The DeepSeek guys just open-sourced [nano-vLLM](https://github.com/GeeeekExplorer/nano-vllm). It’s a lightweight vLLM implementation built from scratch. # Key Features * 🚀 **Fast offline inference** \- Comparable inference speeds to vLLM * 📖 **Readable codebase** \- Clean implementation in \~ 1,200 lines of Python ...
2025-06-21T13:38:49
https://www.reddit.com/r/LocalLLaMA/comments/1lgwsdr/deepseek_guys_opensource_nanovllm/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgwsdr
false
null
t3_1lgwsdr
/r/LocalLLaMA/comments/1lgwsdr/deepseek_guys_opensource_nanovllm/
false
false
self
621
{'enabled': False, 'images': [{'id': 'l3PKHbX960LyanRSRNL5eJKlzH1w_kpmuxvmCLO8a_I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l3PKHbX960LyanRSRNL5eJKlzH1w_kpmuxvmCLO8a_I.png?width=108&crop=smart&auto=webp&s=b73411d79e0fbd2f63b6669649ea421eff0a42a2', 'width': 108}, {'height': 108, 'url': 'h...
Semantically search and ask your Gmail using local LLaMA
66
I got fed up with Apple Mail’s clunky search and built my own tool: a lightweight, local-LLM-first CLI that lets you semantically search and ask questions about your Gmail inbox: https://i.redd.it/vs2cz0f66a8f1.gif Grab it here: [https://github.com/yahorbarkouski/semantic-mail](https://github.com/yahorbarkouski/seman...
2025-06-21T13:16:44
https://www.reddit.com/r/LocalLLaMA/comments/1lgwcfb/semantically_search_and_ask_your_gmail_using/
samewakefulinsomnia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgwcfb
false
null
t3_1lgwcfb
/r/LocalLLaMA/comments/1lgwcfb/semantically_search_and_ask_your_gmail_using/
false
false
https://external-preview…fff6668b195ee0e0
66
{'enabled': False, 'images': [{'id': 'u5mOM2CU1WqdBPMvqlJEeyyQe4ITGIoML7OHXi1ZUCs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u5mOM2CU1WqdBPMvqlJEeyyQe4ITGIoML7OHXi1ZUCs.png?width=108&crop=smart&auto=webp&s=0013e3b2bc22852665093590cb063d9decaae90f', 'width': 108}, {'height': 108, 'url': 'h...
My AI Skeptic Friends Are All Nuts
0
2025-06-21T13:05:41
https://fly.io/blog/youre-all-nuts/
bigzyg33k
fly.io
1970-01-01T00:00:00
0
{}
1lgw4ei
false
null
t3_1lgw4ei
/r/LocalLLaMA/comments/1lgw4ei/my_ai_skeptic_friends_are_all_nuts/
false
false
https://external-preview…4294d6600e44adb6
0
{'enabled': False, 'images': [{'id': '2YhUxsDlpb_4NDGoin5aG_dyH_BW1XNVXpjFhru6J90', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/2YhUxsDlpb_4NDGoin5aG_dyH_BW1XNVXpjFhru6J90.png?width=108&crop=smart&auto=webp&s=65401538ffbf16e8fff32c20814f422d5cc720a7', 'width': 108}, {'height': 122, 'url': 'h...
Local build base parts
0
Hey, what would your suggestions be, minus the main stuff: motherboard, GPU & CPU? What could I go ahead and buy right now that won't be outdated as fast as the brains, that I can keep building up on? I was hoping to include the motherboard too. So box, power supply, etc....this is what a combination of several AIs suggested...
2025-06-21T12:55:37
https://www.reddit.com/r/LocalLLaMA/comments/1lgvx2s/local_build_base_parts/
Top-Advisor6284
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgvx2s
false
null
t3_1lgvx2s
/r/LocalLLaMA/comments/1lgvx2s/local_build_base_parts/
false
false
self
0
null
After trying to buy Ilya Sutskever's $32B AI startup, Meta looks to hire its CEO | TechCrunch
139
What is happening to Zuck? After Scale AI, now Safe Superintelligence
2025-06-21T12:38:20
https://techcrunch.com/2025/06/20/after-trying-to-buy-ilya-sutskevers-32b-ai-startup-meta-looks-to-hire-its-ceo/
touhidul002
techcrunch.com
1970-01-01T00:00:00
0
{}
1lgvl40
false
null
t3_1lgvl40
/r/LocalLLaMA/comments/1lgvl40/after_trying_to_buy_ilya_sutskevers_32b_ai/
false
false
https://external-preview…6e929c11060a4be1
139
{'enabled': False, 'images': [{'id': '1Tc0yB3lwHCV4Qo8QzQZtbMZw5Hyi2St2rr1CDzMJaE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/1Tc0yB3lwHCV4Qo8QzQZtbMZw5Hyi2St2rr1CDzMJaE.jpeg?width=108&crop=smart&auto=webp&s=a12423c678fd670dccc7869a3421874a704655fc', 'width': 108}, {'height': 144, 'url': '...
Help me build a good TTS + LLM + STT stack
35
Hello everyone. I am currently in the lookout for a good conversational AI system I can run. I want to use it conversational AI and be able to handle some complex prompts. Essentially I would like to try and build a alternative to retell or VAPI voice AI systems but using some of the newer voice systems & in my own clo...
2025-06-21T12:07:25
https://www.reddit.com/r/LocalLLaMA/comments/1lgv0y9/help_me_build_a_good_tts_llm_stt_stack/
sync_co
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgv0y9
false
null
t3_1lgv0y9
/r/LocalLLaMA/comments/1lgv0y9/help_me_build_a_good_tts_llm_stt_stack/
false
false
self
35
null
What you guys think about Hyperscaler AI?
1
What is your opinion on the term "Hyperscaler AI"? Is it just a buzzword for IaaS or is it something else?
2025-06-21T11:57:47
https://www.reddit.com/r/LocalLLaMA/comments/1lguuoy/what_you_guys_think_about_hyperscaler_ai/
saikanov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lguuoy
false
null
t3_1lguuoy
/r/LocalLLaMA/comments/1lguuoy/what_you_guys_think_about_hyperscaler_ai/
false
false
self
1
null
Mistral Small 3.2 MLX, where?
0
I'm a little surprised not to find any MLX version of the latest MistralAI LLM. Has anyone tried to produce it? Are you experiencing issues?
2025-06-21T11:50:09
https://www.reddit.com/r/LocalLLaMA/comments/1lguq3z/mistral_small_32_mlx_where/
Creative-Size2658
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lguq3z
false
null
t3_1lguq3z
/r/LocalLLaMA/comments/1lguq3z/mistral_small_32_mlx_where/
false
false
self
0
null
Building a memory-heavy AI agent — looking for local-first storage & recall solutions
4
I’m a solo builder working on a memory-intensive AI agent that needs to run locally, store data persistently, and recall it verbatim. I’m not building a general-purpose chatbot or productivity app. This is more of a personal infrastructure experiment — something I want to get working for myself and one other user as a...
2025-06-21T11:36:53
https://www.reddit.com/r/LocalLLaMA/comments/1lgui5s/building_a_memoryheavy_ai_agent_looking_for/
Epiclovesnature
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgui5s
false
null
t3_1lgui5s
/r/LocalLLaMA/comments/1lgui5s/building_a_memoryheavy_ai_agent_looking_for/
false
false
self
4
null
🔥 Meet Dungeo AI LAN Play — Your Next-Level AI Dungeon Master Adventure! 🎲🤖
11
Hey adventurers! 👋 I’m the creator of **Dungeo AI LAN Play**, an exciting way to experience AI-driven dungeon crawling with your friends over LAN! 🌐🎮 For 2-5 players. https://reddit.com/link/1lgug5r/video/jskcnbxxn98f1/player Imagine teaming up with your buddies while a smart AI Dungeon Master crafts the story, cha...
2025-06-21T11:33:42
https://www.reddit.com/r/LocalLLaMA/comments/1lgug5r/meet_dungeo_ai_lan_play_your_nextlevel_ai_dungeon/
Reasonable_Brief578
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgug5r
false
null
t3_1lgug5r
/r/LocalLLaMA/comments/1lgug5r/meet_dungeo_ai_lan_play_your_nextlevel_ai_dungeon/
false
false
self
11
null
AI tool that turns docs, videos & audio into mind maps, podcasts, decks & more
0
Hey there, I've been working on an AI project recently that helps users transform their existing content — documents, PDFs, lecture notes, audio, video, even text prompts — into various learning formats like: 🧠 Mind Maps 📄 Summaries 📚 Courses 📊 Slides 🎙️ Podcasts 🤖 Interactive Q&A with an AI assistant ...
2025-06-21T11:18:04
https://www.reddit.com/r/LocalLLaMA/comments/1lgu6q0/ai_tool_that_turns_docs_videos_audio_into_mind/
TheDollarHacks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgu6q0
false
null
t3_1lgu6q0
/r/LocalLLaMA/comments/1lgu6q0/ai_tool_that_turns_docs_videos_audio_into_mind/
false
false
self
0
null
Open source tool to fix LLM-generated JSON
20
Hey! Ever since I started using LLMs to generate JSON for my side projects I occasionally get an error and when looking at the logs it’s usually because of some parsing errors. I’ve built a tool to fix the most common errors I came across: - Markdown Block Extraction: Extracts JSON from ```json code blocks and inlin...
2025-06-21T10:24:16
https://www.reddit.com/r/LocalLLaMA/comments/1lgtcrb/open_source_tool_to_fix_llmgenerated_json/
arthurtakeda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgtcrb
false
null
t3_1lgtcrb
/r/LocalLLaMA/comments/1lgtcrb/open_source_tool_to_fix_llmgenerated_json/
false
false
self
20
{'enabled': False, 'images': [{'id': '-I7PLElTfZtgiVjrJaagPKFOgO8Mttiw-K28vxzyYJY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-I7PLElTfZtgiVjrJaagPKFOgO8Mttiw-K28vxzyYJY.png?width=108&crop=smart&auto=webp&s=08e214ebe017932f3320b8f49d19e9372b09bbb3', 'width': 108}, {'height': 108, 'url': 'h...
LM Studio much faster than Ollama?
3
I've been getting deep into local LLMs recently and I first started out with LM Studio; easy to use, easy to set up, and works right out of the box. Yesterday I decided it was time to venture further and so I set up Ollama and Open WebUI. Needless to say it is much better than LM Studio in terms of how capable it is. I...
2025-06-21T10:22:10
https://www.reddit.com/r/LocalLLaMA/comments/1lgtbo8/lm_studio_much_faster_than_ollama/
MonyWony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgtbo8
false
null
t3_1lgtbo8
/r/LocalLLaMA/comments/1lgtbo8/lm_studio_much_faster_than_ollama/
false
false
self
3
null
Scaling broke me a bit, but this one internal trick helped a lot
0
Over the past year, I’ve worked on a startup product that pushed a bit too far, too fast: hundreds of billions of tokens processed across multiple LLM providers, from bare-metal GPU servers to spot-scaled cloud instances. Around 80 microservices and growing. Way too much for a small team. One internal decision probab...
2025-06-21T10:21:13
https://www.reddit.com/r/LocalLLaMA/comments/1lgtb5o/scaling_broke_me_a_bit_but_this_one_internal/
supraking007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgtb5o
false
null
t3_1lgtb5o
/r/LocalLLaMA/comments/1lgtb5o/scaling_broke_me_a_bit_but_this_one_internal/
false
false
self
0
null
Dynamic metaprompting in Open WebUI
10
**What is this?** * LLM proxy with OpenAI-compatible API runs a workflow where system prompt is dynamically mixed from a given set of source prompts according to their weight * The ratios are controlled from a specially crafted artifact that talks back to the workflow over websockets * UI allows to pause or slow down...
2025-06-21T10:13:16
https://v.redd.it/vnmpwmal898f1
Everlier
v.redd.it
1970-01-01T00:00:00
0
{}
1lgt6sx
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vnmpwmal898f1/DASHPlaylist.mpd?a=1753092813%2CY2Y0MGNhYTM3NGNjYmVmYWZkNDUzM2Q1MmJkN2NhYTE2Y2RiNzM2NWQ0NjIyNDI4NmY2Y2JiOTBlNmViZDA0Nw%3D%3D&v=1&f=sd', 'duration': 96, 'fallback_url': 'https://v.redd.it/vnmpwmal898f1/DASH_1080.mp4?source=fallback', 'h...
t3_1lgt6sx
/r/LocalLLaMA/comments/1lgt6sx/dynamic_metaprompting_in_open_webui/
false
false
https://external-preview…58892b0cd458a459
10
{'enabled': False, 'images': [{'id': 'YmMyNm1sYWw4OThmMcWg2kr7Oe9IfY8fGfsf43KXN8n2ZXafTDS0jzzrXQ6i', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/YmMyNm1sYWw4OThmMcWg2kr7Oe9IfY8fGfsf43KXN8n2ZXafTDS0jzzrXQ6i.png?width=108&crop=smart&format=pjpg&auto=webp&s=d1adef22a4be7dfdabbe6b24ea2d24a165ced...
Open Source Unsiloed AI Chunker (EF2024)
5
Hey, Unsiloed CTO here! Unsiloed AI (EF 2024) is backed by Transpose Platform & EF and is currently being used by teams at Fortune 100 companies and multiple Series E+ startups for ingesting multimodal data in the form of PDFs, Excel, PPTs, etc. And we have now finally open-sourced some of the capabilities. Do give ...
2025-06-21T10:09:46
https://www.reddit.com/r/LocalLLaMA/comments/1lgt4xd/open_source_unsiloed_ai_chunker_ef2024/
AskInternational6199
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lgt4xd
false
null
t3_1lgt4xd
/r/LocalLLaMA/comments/1lgt4xd/open_source_unsiloed_ai_chunker_ef2024/
false
false
https://external-preview…97879c0674411d82
5
{'enabled': False, 'images': [{'id': 'DrfhMWbsS2YNADYSkzRFo8CavEZsmEVw_qCUPCEDiaM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/DrfhMWbsS2YNADYSkzRFo8CavEZsmEVw_qCUPCEDiaM.png?width=108&crop=smart&auto=webp&s=48e2208a21e63c2d75e76b50e0ec3c003b2180b0', 'width': 108}, {'height': 113, 'url': 'h...