field       dtype               range
title       stringlengths       1 – 300
score       int64               0 – 8.54k
selftext    stringlengths       0 – 41.5k
created     timestamp[ns]date   2023-04-01 04:30:41 – 2026-03-04 02:14:14
url         stringlengths       0 – 878
author      stringlengths       3 – 20
domain      stringlengths       0 – 82
edited      timestamp[ns]date   1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded      int64               0 – 2
gildings    stringclasses       7 values
id          stringlengths       7 – 7
locked      bool                2 classes
media       stringlengths       646 – 1.8k
name        stringlengths       10 – 10
permalink   stringlengths       33 – 82
spoiler     bool                2 classes
stickied    bool                2 classes
thumbnail   stringlengths       4 – 213
ups         int64               0 – 8.54k
preview     stringlengths       301 – 5.01k
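Each record below appears as a flat list of values in the schema's field order. A minimal sketch of turning one such flat row back into a typed record — the `Post` dataclass and `parse_row` helper are my own names for illustration, not part of the dataset; timestamps and JSON-ish blobs are kept as raw strings:

```python
from dataclasses import dataclass, fields

# Field order follows the dataset schema above; types are simplified
# (timestamps and the media/gildings/preview blobs stay raw strings).
@dataclass
class Post:
    title: str
    score: int
    selftext: str
    created: str
    url: str
    author: str
    domain: str
    edited: str
    gilded: int
    gildings: str
    id: str
    locked: bool
    media: str
    name: str
    permalink: str
    spoiler: bool
    stickied: bool
    thumbnail: str
    ups: int
    preview: str

def parse_row(values):
    """Build a Post from a flat list of 20 values in schema order."""
    casts = {int: int, bool: lambda v: str(v).lower() == "true"}
    return Post(*(casts.get(f.type, str)(v)
                  for f, v in zip(fields(Post), values)))

# Example: the first record from the dump, flattened in schema order.
row = [
    "MCP capable small local models?", "5",
    "Hey there! I'm looking for recommendations...", "2025-07-17T23:10:28",
    "https://www.reddit.com/r/LocalLLaMA/comments/1m2mdc8/mcp_capable_small_local_models/",
    "amunocis", "self.LocalLLaMA", "1970-01-01T00:00:00", "0", "{}",
    "1m2mdc8", "false", "null", "t3_1m2mdc8",
    "/r/LocalLLaMA/comments/1m2mdc8/mcp_capable_small_local_models/",
    "false", "false", "self", "5", "null",
]
post = parse_row(row)  # post.score == 5, post.locked is False
```

The cast table keys on the annotation object itself, so any field not annotated `int` or `bool` falls through to `str`, which matches how the dump stores everything as text.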
MCP capable small local models?
5
Hey there! I'm looking for recommendations for a small model that can work OK with an MCP server I'm building for testing purposes. I tried Mistral but dude, it failed everything lol (or maybe I'm the one failing?). I need to test other small models around the size of Phi-4 or similar. Thanks for the help!!!
2025-07-17T23:10:28
https://www.reddit.com/r/LocalLLaMA/comments/1m2mdc8/mcp_capable_small_local_models/
amunocis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2mdc8
false
null
t3_1m2mdc8
/r/LocalLLaMA/comments/1m2mdc8/mcp_capable_small_local_models/
false
false
self
5
null
Best OCR to extract text from ECG images
2
Hi! Very new to LLMs and OCR, but I'm working on a research project which requires data extraction from ECGs that have textual data generated by the ECG machine itself. I've been trying Tesseract OCR but a lot of gibberish comes out as the OCR output. I will try preprocessing to improve the output, but are there any open-source OCRs...
2025-07-17T22:52:04
https://www.reddit.com/r/LocalLLaMA/comments/1m2lxq3/best_ocr_to_extract_text_from_ecg_images/
cade1513
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2lxq3
false
null
t3_1m2lxq3
/r/LocalLLaMA/comments/1m2lxq3/best_ocr_to_extract_text_from_ecg_images/
false
false
self
2
null
#1 model on Open ASR nvidia/canary-qwen-2.5b is available now
64
It showed up on the leaderboard as #1 a couple days ago, and it's finally available now.
2025-07-17T22:45:40
https://huggingface.co/nvidia/canary-qwen-2.5b
SummonerOne
huggingface.co
1970-01-01T00:00:00
0
{}
1m2lsbm
false
null
t3_1m2lsbm
/r/LocalLLaMA/comments/1m2lsbm/1_model_on_open_asr_nvidiacanaryqwen25b_is/
false
false
https://external-preview…bf07c57458021dc5
64
{'enabled': False, 'images': [{'id': 'Nn-LD6fffringbEQZP1Qi_wM5thia6kxISdin3VAOxU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Nn-LD6fffringbEQZP1Qi_wM5thia6kxISdin3VAOxU.png?width=108&crop=smart&auto=webp&s=ab51e54146e6a3a4c28dd59d09235d4c9e8c265a', 'width': 108}, {'height': 116, 'url': 'h...
Lab environment
0
What would be an inexpensive lab setup for running Kubernetes with LLMs? Mainly just to play around.
2025-07-17T22:42:58
https://www.reddit.com/r/LocalLLaMA/comments/1m2lq3q/lab_environment/
running101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2lq3q
false
null
t3_1m2lq3q
/r/LocalLLaMA/comments/1m2lq3q/lab_environment/
false
false
self
0
null
Multimodal models that can "read" data on the monitor
1
I am trying to figure out whether there are any real AI models that have the ability to process real-time streaming data on a computer monitor. Please forgive me if this is not the right place to post this.
2025-07-17T22:36:19
https://www.reddit.com/r/LocalLLaMA/comments/1m2lklq/multimodal_models_that_can_read_data_on_the/
Crazy_Ad_6915
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2lklq
false
null
t3_1m2lklq
/r/LocalLLaMA/comments/1m2lklq/multimodal_models_that_can_read_data_on_the/
false
false
self
1
null
When to RAG
4
I just finished my RAG pipeline and got everything wired together, but I'm finding that I didn't think through when to call the retriever vs. when to just let the LLM answer. I'm curious: how do others who've implemented a RAG pipeline decide when to actually call it? I started with just passing the pro...
2025-07-17T22:11:08
https://www.reddit.com/r/LocalLLaMA/comments/1m2kz44/when_to_rag/
Loud-Bake-2740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2kz44
false
null
t3_1m2kz44
/r/LocalLLaMA/comments/1m2kz44/when_to_rag/
false
false
self
4
null
Thunderbolt vs Oculink
6
I just got my first oculink nvme adapter and figured I'd test it out! Unfortunately, it still bottlenecks on tabbyAPI with tensor parallelism during prompt processing. This means that any of those nvme x4 adapters, even for a x16 bifurcation, will bottleneck in bandwidth. Unfortunately, for my use case I frequently ...
2025-07-17T21:53:45
https://www.reddit.com/r/LocalLLaMA/comments/1m2kjrm/thunderbolt_vs_oculink/
mayo551
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2kjrm
false
null
t3_1m2kjrm
/r/LocalLLaMA/comments/1m2kjrm/thunderbolt_vs_oculink/
false
false
https://b.thumbs.redditm…j0ikjqATFZ-g.jpg
6
null
support for Ernie 4.5 MoE models has been merged into llama.cpp
124
Previously, only the tiny Ernie model was supported by llama.cpp
2025-07-17T21:35:47
https://github.com/ggml-org/llama.cpp/pull/14658
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1m2k480
false
null
t3_1m2k480
/r/LocalLLaMA/comments/1m2k480/support_for_ernie_45_moe_models_has_been_merged/
false
false
default
124
{'enabled': False, 'images': [{'id': 'Xa2nwNvQaZ79M355gwwIuuvaJFK0WjYiA5gWgioi6UU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Xa2nwNvQaZ79M355gwwIuuvaJFK0WjYiA5gWgioi6UU.png?width=108&crop=smart&auto=webp&s=490f6de757578a1311735f25a35269d1144172b5', 'width': 108}, {'height': 108, 'url': 'h...
CXL Benefits for DB, AI
0
The specs are insane ..
2025-07-17T21:15:26
https://youtu.be/LNikH6T4OtQ?si=r5XLzsqjm2kfEpw8
sub_RedditTor
youtu.be
1970-01-01T00:00:00
0
{}
1m2jluy
false
{'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/LNikH6T4OtQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop...
t3_1m2jluy
/r/LocalLLaMA/comments/1m2jluy/cxl_benefits_for_db_ai/
false
false
default
0
{'enabled': False, 'images': [{'id': 'iTuc9-Sv1jMGdJsr31-4tI_xAqABluQ_3k2mfkh9gKE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iTuc9-Sv1jMGdJsr31-4tI_xAqABluQ_3k2mfkh9gKE.jpeg?width=108&crop=smart&auto=webp&s=428d064adb080fc9baa48f4230c8e28110536e12', 'width': 108}, {'height': 162, 'url': '...
Google Edge AI says it's created by Open AI, using Gemma-3n-E4B
0
I just started testing it but it really seems strangely inaccurate, hallucinating all over the place.
2025-07-17T20:55:11
https://i.redd.it/2ujvp2rtzhdf1.png
elusivepeanut
i.redd.it
1970-01-01T00:00:00
0
{}
1m2j33z
false
null
t3_1m2j33z
/r/LocalLLaMA/comments/1m2j33z/google_edge_ai_says_its_created_by_open_ai_using/
false
false
default
0
{'enabled': True, 'images': [{'id': '2ujvp2rtzhdf1', 'resolutions': [{'height': 212, 'url': 'https://preview.redd.it/2ujvp2rtzhdf1.png?width=108&crop=smart&auto=webp&s=018e011be949711e8ce91f73d8e82b6dc96de5fd', 'width': 108}, {'height': 424, 'url': 'https://preview.redd.it/2ujvp2rtzhdf1.png?width=216&crop=smart&auto=we...
How to use the same context across LLMs and Agents
3
You know that feeling when you have to explain the same story to five different people? That’s been my experience with LLMs so far. I’ll start a convo with ChatGPT, hit a wall or I am dissatisfied, and switch to Claude for better capabilities. Suddenly, I’m back at square one, explaining *everything* again. I’ve tri...
2025-07-17T20:38:30
https://www.reddit.com/r/LocalLLaMA/comments/1m2inuu/how_to_use_the_same_context_across_llms_and_agents/
Imad-aka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2inuu
false
null
t3_1m2inuu
/r/LocalLLaMA/comments/1m2inuu/how_to_use_the_same_context_across_llms_and_agents/
false
false
self
3
null
Migrating a semantically-anchored assistant from OpenAI to local environment (Domina): any successful examples of memory-aware agent migration?
2
Hi all, I'm currently running an advanced assistant (GPT-4-based) with a deeply structured, semantically tagged memory system. The assistant operates as a *cognitive agent* with an embedded memory architecture, developed through a sustained relationship over several months. We’re now building a self-hosted infrastru...
2025-07-17T20:30:25
https://www.reddit.com/r/LocalLLaMA/comments/1m2igfi/migrating_a_semanticallyanchored_assistant_from/
Capable_Load375
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2igfi
false
null
t3_1m2igfi
/r/LocalLLaMA/comments/1m2igfi/migrating_a_semanticallyanchored_assistant_from/
false
false
self
2
null
Is it possible to run something like Grok's anime girl companion free, open source, and local?
8
With the same quality?
2025-07-17T20:25:28
https://www.reddit.com/r/LocalLLaMA/comments/1m2ibq0/is_it_possible_to_run_something_like_groks_anime/
Top-Guava-1302
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2ibq0
false
null
t3_1m2ibq0
/r/LocalLLaMA/comments/1m2ibq0/is_it_possible_to_run_something_like_groks_anime/
false
false
self
8
null
Why does it do this? Why do ALL models do this?
0
It's leaking the chat formatting, instructions, whatever. It's saying nonsense outside the current session. I am genuinely confused and can't research it because I don't know what this is. This is a dockerized OpenWebUI and native llama.cpp.
2025-07-17T20:20:39
https://i.redd.it/oqb0uoafthdf1.png
Leather_Flan5071
i.redd.it
1970-01-01T00:00:00
0
{}
1m2i79e
false
null
t3_1m2i79e
/r/LocalLLaMA/comments/1m2i79e/why_does_it_do_this_why_does_all_models_do_this/
false
false
default
0
{'enabled': True, 'images': [{'id': 'oqb0uoafthdf1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/oqb0uoafthdf1.png?width=108&crop=smart&auto=webp&s=7bb6388e4519280e155fcd152181f81be05f8b80', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/oqb0uoafthdf1.png?width=216&crop=smart&auto=web...
Best Open Programming Model by Language
0
Hi! I have been out of the loop for a few months. I was wondering if there is a list anywhere, or if someone has recommendations for, the current best models in terms of accuracy for various programming languages. Specifically, I'm looking for either a finetune that is good with programming *and* is trained on Rus...
2025-07-17T20:13:13
https://www.reddit.com/r/LocalLLaMA/comments/1m2i0cn/best_open_programming_model_by_language/
Only-Ice9920
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2i0cn
false
null
t3_1m2i0cn
/r/LocalLLaMA/comments/1m2i0cn/best_open_programming_model_by_language/
false
false
self
0
null
OpenAI’s response to a confirmed data leak: ‘Prove it’s not hallucinated.’
0
[https://i.imgur.com/YwSvYOe.png](https://i.imgur.com/YwSvYOe.png) TL;DR: OpenAI o3 actually leaks real data. Bugcrowd ignores it. I discovered a vulnerability in OpenAI’s o3 model that leaks highly sensitive info—model parameters, training data segments, and even internal quarterly planning docs. The moment I fo...
2025-07-17T20:12:30
https://www.reddit.com/r/LocalLLaMA/comments/1m2hzp1/openais_response_to_a_confirmed_data_leak_prove/
Ill_Tear_5712
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2hzp1
false
null
t3_1m2hzp1
/r/LocalLLaMA/comments/1m2hzp1/openais_response_to_a_confirmed_data_leak_prove/
false
false
self
0
null
OpenAI’s response to a confirmed data leak: ‘Prove it’s not hallucinated.’
1
[Internal leak or “hallucination”? Here’s the evidence.](https://preview.redd.it/l92sd1i1phdf1.png?width=823&format=png&auto=webp&s=ba4f036129fe29305b9a46bacbaf68648c562ba0) TL;DR: OpenAI o3 actually leaks real data. Bugcrowd ignores it. I discovered a vulnerability in OpenAI’s o3 model that leaks highly sensitive in...
2025-07-17T20:03:31
https://www.reddit.com/r/LocalLLaMA/comments/1m2hrdj/openais_response_to_a_confirmed_data_leak_prove/
Ill_Tear_5712
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2hrdj
false
null
t3_1m2hrdj
/r/LocalLLaMA/comments/1m2hrdj/openais_response_to_a_confirmed_data_leak_prove/
false
false
https://b.thumbs.redditm…2ErqzHRPTnKs.jpg
1
null
Apple Technical Report on their AFM Local and Server Models
1
2025-07-17T19:55:01
https://machinelearning.apple.com/papers/apple_intelligence_foundation_language_models_tech_report_2025.pdf
Faze-MeCarryU30
machinelearning.apple.com
1970-01-01T00:00:00
0
{}
1m2hjdb
false
null
t3_1m2hjdb
/r/LocalLLaMA/comments/1m2hjdb/apple_technical_report_on_their_afm_local_and/
false
false
default
1
null
Has anyone used DSPy for creative writing or story generation? Looking for examples
3
Complete noob here wondering about DSPy's creative applications. I've been exploring DSPy and noticed most examples focus on factual/analytical tasks. I'm curious if anyone has experimented with using it for creative purposes: * Story generation or creative writing optimization * Training AI to develop compelling plo...
2025-07-17T19:49:41
https://www.reddit.com/r/LocalLLaMA/comments/1m2hedt/has_anyone_used_dspy_for_creative_writing_or/
Dymaxion_VictorDeng
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2hedt
false
null
t3_1m2hedt
/r/LocalLLaMA/comments/1m2hedt/has_anyone_used_dspy_for_creative_writing_or/
false
false
self
3
null
GPU advice for running local LLMs
1
Hello all, I'm new to gen AI. I'm learning the basics, but I know my hands will be occupied with models in a couple of weeks. I currently have a very old GPU (1070 Ti) which I game on. I want to add another card (I was thinking of the 5060 Ti 16 GB version). I know that 24 GB+ (or I think it i...
2025-07-17T19:31:48
https://www.reddit.com/r/LocalLLaMA/comments/1m2gy2t/gpu_advice_for_running_local_llms/
Negative_Owl_6623
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2gy2t
false
null
t3_1m2gy2t
/r/LocalLLaMA/comments/1m2gy2t/gpu_advice_for_running_local_llms/
false
false
self
1
null
Just a reminder that today OpenAI was going to release a SOTA open source model… until Kimi dropped.
944
Nothing further, just posting this for the lulz. Kimi is amazing. Who even needs OpenAI at this point?
2025-07-17T19:22:01
https://www.reddit.com/r/LocalLLaMA/comments/1m2gp16/just_a_reminder_that_today_openai_was_going_to/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2gp16
false
null
t3_1m2gp16
/r/LocalLLaMA/comments/1m2gp16/just_a_reminder_that_today_openai_was_going_to/
false
false
self
944
null
Running an open source AI anime girl avatar
120
After seeing a lot of posts about a certain expensive & cringy anime girlfriend, I wanted to see if there was a better way to get AI avatars. This is from [https://github.com/Open-LLM-VTuber/Open-LLM-VTuber](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber) (not my work) using the 4o API and Groq Whisper, but it can use ...
2025-07-17T19:20:31
https://v.redd.it/rn1rxkgqihdf1
mapppo
v.redd.it
1970-01-01T00:00:00
0
{}
1m2gnnk
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rn1rxkgqihdf1/DASHPlaylist.mpd?a=1755372075%2CNmQwNDllMjk1NjlkMmM0MDNjMmM3MGZjYjRjNTVlOGJkYzYyZTdhNTQ4NjBlNzQ3MmIxOTg1Y2E5NmU2ZDZlYQ%3D%3D&v=1&f=sd', 'duration': 42, 'fallback_url': 'https://v.redd.it/rn1rxkgqihdf1/DASH_720.mp4?source=fallback', 'ha...
t3_1m2gnnk
/r/LocalLLaMA/comments/1m2gnnk/running_an_open_source_ai_anime_girl_avatar/
false
false
https://external-preview…90ae71939f7ec5bf
120
{'enabled': False, 'images': [{'id': 'azUzamVqZ3FpaGRmMUstPxAQzeLBZZJeAt5drdnVhSzTD0UR9O7yYNnwsX72', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/azUzamVqZ3FpaGRmMUstPxAQzeLBZZJeAt5drdnVhSzTD0UR9O7yYNnwsX72.png?width=108&crop=smart&format=pjpg&auto=webp&s=e0ec32f0ca811f69ed605be5706c0a5c077b2...
Wanted y’all’s thoughts on a project
0
Hey guys, me and some friends are working on a project for the summer just to get our feet a little wet in the field. We are freshman uni students with a good amount of coding experience. Just wanted y’all’s thoughts about the project and its usability/feasibility along with anything else yall got. Project Info: U...
2025-07-17T19:18:26
https://www.reddit.com/r/LocalLLaMA/comments/1m2glqg/wanted_yalls_thoughts_on_a_project/
King-Ninja-OG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2glqg
false
null
t3_1m2glqg
/r/LocalLLaMA/comments/1m2glqg/wanted_yalls_thoughts_on_a_project/
false
false
self
0
null
When will we get a local version of ChatGPT Agent?
0
Recently, OpenAI has just launched a "ChatGPT Agent" model for Plus and Pro users that lets ChatGPT autonomously think, research, and act all in its own virtual operating system. When do you guys think there will be a free, local version of this that can be run on your own computer or laptop? Thanks.
2025-07-17T19:18:04
https://www.reddit.com/r/LocalLLaMA/comments/1m2gle9/when_will_we_get_a_local_version_of_chatgpt_agent/
Humble-Ad1322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2gle9
false
null
t3_1m2gle9
/r/LocalLLaMA/comments/1m2gle9/when_will_we_get_a_local_version_of_chatgpt_agent/
false
false
https://b.thumbs.redditm…8jCh9rfVLwaA.jpg
0
null
Given that powerful models like K2 are available cheaply on hosted platforms with great inference speed, are you regretting investing in hardware for LLMs?
113
I stopped running local models on my Mac a couple of months ago because with my M4 Pro I cannot run very large and powerful models. And to be honest I no longer see the point. At the moment for example I am using Kimi K2 as default model for basically everything via Groq inference, which is shockingly fast for a 1T pa...
2025-07-17T19:15:12
https://www.reddit.com/r/LocalLLaMA/comments/1m2gios/given_that_powerful_models_like_k2_are_available/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2gios
false
null
t3_1m2gios
/r/LocalLLaMA/comments/1m2gios/given_that_powerful_models_like_k2_are_available/
false
false
self
113
null
Meta is reluctant to give advice on how to save a kid's life during a dog attack unless the kid's life being in danger is emphasized.
0
2025-07-17T19:14:40
https://i.redd.it/v75i7pinhhdf1.png
executor-of-judgment
i.redd.it
1970-01-01T00:00:00
0
{}
1m2gi6a
false
null
t3_1m2gi6a
/r/LocalLLaMA/comments/1m2gi6a/meta_is_reluctant_to_give_advice_on_how_to_save_a/
false
false
https://a.thumbs.redditm…UHneQsOMXdV4.jpg
0
{'enabled': True, 'images': [{'id': 'rUoTE7yTsGoC10Zb620_liv07P6e5hLiwFUd2M_6r4s', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/v75i7pinhhdf1.png?width=108&crop=smart&auto=webp&s=5ab288232402ce125583268cf1e5943f7aa41261', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/v75i7pinhhdf1.png...
While everyone's using AI for text-to-video, I'm going the other way: video-to-text
1
[removed]
2025-07-17T19:13:12
[deleted]
1970-01-01T00:00:00
0
{}
1m2gguj
false
null
t3_1m2gguj
/r/LocalLLaMA/comments/1m2gguj/while_everyones_using_ai_for_texttovideo_im_going/
false
false
default
1
null
Exploring a local chorus/crowd mechanism or something similar to AI writing looms as a callable tool -- has anything been done in this area?
1
I'm interested in developing a locally usable tool that would provide an "overseer" running a fairly advanced model the ability to poll much smaller lighter weight models for a sort of "cloud" or "chorus" of agents receiving the same input, but with different temperatures and maybe even something like different system ...
2025-07-17T19:05:44
https://www.reddit.com/r/LocalLLaMA/comments/1m2g9x8/exploring_a_local_choruscrowd_mechanism_or/
CharlesStross
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2g9x8
false
null
t3_1m2g9x8
/r/LocalLLaMA/comments/1m2g9x8/exploring_a_local_choruscrowd_mechanism_or/
false
false
self
1
null
BREAKING: China Did It Again! Kimi-K2 is now the #1 open model in the Arena! With over 3K community votes, it ranks #5 overall, overtaking DeepSeek as the top open model. Huge congrats to the Moonshot team on this impressive milestone!
1
2025-07-17T18:54:38
https://i.redd.it/lrmmn7daehdf1.jpeg
balianone
i.redd.it
1970-01-01T00:00:00
0
{}
1m2fzgg
false
null
t3_1m2fzgg
/r/LocalLLaMA/comments/1m2fzgg/breaking_china_did_it_again_kimik2_is_now_the_1/
false
false
default
1
{'enabled': True, 'images': [{'id': 'lrmmn7daehdf1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/lrmmn7daehdf1.jpeg?width=108&crop=smart&auto=webp&s=68d72225d97656b61048f60be092ffbfe1274dbc', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/lrmmn7daehdf1.jpeg?width=216&crop=smart&auto=w...
Locally Running AI model with Intel GPU
4
I have an Intel Arc graphics card and an AI NPU, powered by an Intel Core Ultra 7-155H processor, with 16GB RAM (I thought this would be useful for doing AI work, but I am regretting my decision; I could have easily bought a gaming laptop with this money). Pls pls pls, it would be so much better if anyone could help ...
2025-07-17T18:49:43
https://www.reddit.com/r/LocalLLaMA/comments/1m2furm/locally_running_ai_model_with_intel_gpu/
dragonknight-18
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2furm
false
null
t3_1m2furm
/r/LocalLLaMA/comments/1m2furm/locally_running_ai_model_with_intel_gpu/
false
false
self
4
null
I just had a random thought
0
I used to think that if society collapsed and the internet went down, I'd be screwed without it. Now, having a local LLM, I feel like I would do just fine. Thoughts?
2025-07-17T18:41:09
https://www.reddit.com/r/LocalLLaMA/comments/1m2fmwu/i_just_had_a_random_though/
CaptBrick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2fmwu
false
null
t3_1m2fmwu
/r/LocalLLaMA/comments/1m2fmwu/i_just_had_a_random_though/
false
false
self
0
null
Batch processing for MiniCPM
2
Hey all, running into an interesting quirk.... I'm running this setup on my small local box with a 4090, but I'd like to OCR ~4e6 images. On my small scale tests, it performs really well, but it takes ~1s per image on average. I've looked into batched passes and that seems to unroll internally into sequential passes. ...
2025-07-17T18:39:02
https://www.reddit.com/r/LocalLLaMA/comments/1m2fkw6/batch_processing_for_minicpm/
R2FuckYou
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2fkw6
false
null
t3_1m2fkw6
/r/LocalLLaMA/comments/1m2fkw6/batch_processing_for_minicpm/
false
false
self
2
null
Is there a 50 series compatible Local running TTS UI
3
Am I dumb? I can't seem to find a TTS program with a UI, like the ones for Stable Diffusion, where you just type text in and it spits out audio, that is also new enough to support 50-series GPUs.
2025-07-17T18:14:41
https://www.reddit.com/r/LocalLLaMA/comments/1m2exkr/is_there_a_50_series_compatible_local_running_tts/
Nice_guy_9000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2exkr
false
null
t3_1m2exkr
/r/LocalLLaMA/comments/1m2exkr/is_there_a_50_series_compatible_local_running_tts/
false
false
self
3
null
[2506.00045] ACE-Step: A Step Towards Music Generation Foundation Model
10
This was released a month ago for [https://github.com/ace-step/ACE-Step](https://github.com/ace-step/ACE-Step)
2025-07-17T18:14:12
https://arxiv.org/abs/2506.00045
TheRealMasonMac
arxiv.org
1970-01-01T00:00:00
0
{}
1m2ex4z
false
null
t3_1m2ex4z
/r/LocalLLaMA/comments/1m2ex4z/250600045_acestep_a_step_towards_music_generation/
false
false
default
10
null
Wordle-like game using your photos and on-device Small Language Models (SLMs)
7
Hi, long-term lurker, first-time poster here! I’ve been working on a game idea inspired by Wordle, but with a unique twist: it uses your own photos to generate guessing words. Here’s how it works: the app picks a random picture from your gallery. It uses a small language model (SLM), running entirely on your phone, to...
2025-07-17T18:01:40
https://www.reddit.com/r/LocalLLaMA/comments/1m2el95/wordlelike_game_using_your_photos_and_ondevice/
dokasto_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2el95
false
null
t3_1m2el95
/r/LocalLLaMA/comments/1m2el95/wordlelike_game_using_your_photos_and_ondevice/
false
false
https://b.thumbs.redditm…OulBaczcSXqQ.jpg
7
null
Do these models have vision?
0
1. [Qwen 30b](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF) [main model] 2. [Mistral Small 24b](https://huggingface.co/llmware/mistral-3.2-24b-gguf) [alternative] 3. [Gemmasutra 9b](https://huggingface.co/TheDrummer/Gemmasutra-9B-v1-GGUF) [descriptor/storywriter model] 4. [Gemmasutra 27b](https://huggingfac...
2025-07-17T17:48:45
https://www.reddit.com/r/LocalLLaMA/comments/1m2e8vc/do_these_models_have_vision/
WEREWOLF_BX13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2e8vc
false
null
t3_1m2e8vc
/r/LocalLLaMA/comments/1m2e8vc/do_these_models_have_vision/
false
false
self
0
{'enabled': False, 'images': [{'id': 'cjzPEUuOBR2g8gk6tVmSifQ7qZZk1mITfDwM5z6fu9g', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cjzPEUuOBR2g8gk6tVmSifQ7qZZk1mITfDwM5z6fu9g.png?width=108&crop=smart&auto=webp&s=c2c938c4ae12a4d27c8de4261ccf88356f37bc51', 'width': 108}, {'height': 116, 'url': 'h...
Is there a local tool that works like readability.js (extract article content from a webpage) but using local LLMs to do it more intelligently?
3
I don’t care about speed, only accuracy. readability.js is what Firefox uses for Article Mode, it uses some heuristics and algorithms to extract the article content but it’s kind of brittle for complex or unusual pages. This seems like something LLMs could do better?
2025-07-17T17:09:30
https://www.reddit.com/r/LocalLLaMA/comments/1m2d7n2/is_there_a_local_tool_that_works_like/
JealousAmoeba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2d7n2
false
null
t3_1m2d7n2
/r/LocalLLaMA/comments/1m2d7n2/is_there_a_local_tool_that_works_like/
false
false
self
3
null
The most insane hardware for running the biggest open-source LLMs locally
0
B200 Blackwell Octo 1.5TB. Available now from [GPTshop.ai](http://GPTshop.ai)
2025-07-17T17:00:05
https://www.reddit.com/r/LocalLLaMA/comments/1m2cygz/the_most_insane_hardware_for_running_the_biggest/
GPTshop_ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2cygz
false
null
t3_1m2cygz
/r/LocalLLaMA/comments/1m2cygz/the_most_insane_hardware_for_running_the_biggest/
false
false
self
0
null
Automatically Build Docker Images for New Recommended Repos
0
Sharing docker images for repos recommended based on what I'm building [https://hub.docker.com/repositories/remyxai](https://hub.docker.com/repositories/remyxai) Read more: [https://remyxai.substack.com/p/replicate-it-or-it-didnt-happen](https://remyxai.substack.com/p/replicate-it-or-it-didnt-happen)
2025-07-17T16:58:43
https://www.reddit.com/gallery/1m2cx4x
remyxai
reddit.com
1970-01-01T00:00:00
0
{}
1m2cx4x
false
null
t3_1m2cx4x
/r/LocalLLaMA/comments/1m2cx4x/automatically_build_docker_images_for_new/
false
false
https://b.thumbs.redditm…yu4xpDM6Z9FA.jpg
0
null
does that mean these models cant use tools?
0
2025-07-17T16:39:57
https://www.reddit.com/gallery/1m2cff4
Beyond_Birthday_13
reddit.com
1970-01-01T00:00:00
0
{}
1m2cff4
false
null
t3_1m2cff4
/r/LocalLLaMA/comments/1m2cff4/does_that_mean_these_models_cant_use_tools/
false
false
https://b.thumbs.redditm…JWTJa97s-Mps.jpg
0
null
LLMs Playing Competitive Games Emerge Critical Reasoning: A Latest Study Showing Surprising Results
16
https://preview.redd.it/…prising_Results)
2025-07-17T16:33:53
https://www.reddit.com/r/LocalLLaMA/comments/1m2c9w6/llms_playing_competitive_games_emerge_critical/
MarketingNetMind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2c9w6
false
null
t3_1m2c9w6
/r/LocalLLaMA/comments/1m2c9w6/llms_playing_competitive_games_emerge_critical/
false
false
https://b.thumbs.redditm…zn2cm0kVKggY.jpg
16
{'enabled': False, 'images': [{'id': 'GMmAQl8cXhjszVZRasZjEE7PH09yiLGlFTDIar7oBtk', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/GMmAQl8cXhjszVZRasZjEE7PH09yiLGlFTDIar7oBtk.jpeg?width=108&crop=smart&auto=webp&s=fab4ba96990b849665bcef4e9f39f1550ffd59f3', 'width': 108}, {'height': 111, 'url': '...
Kimi K2 Fiction.liveBench: On-par with DeepSeek V3, behind GPT-4.1
55
2025-07-17T16:27:53
https://i.redd.it/in8sapsyngdf1.png
fictionlive
i.redd.it
1970-01-01T00:00:00
0
{}
1m2c4hz
false
null
t3_1m2c4hz
/r/LocalLLaMA/comments/1m2c4hz/kimi_k2_fictionlivebench_onpar_with_deepseek_v3/
false
false
default
55
{'enabled': True, 'images': [{'id': 'in8sapsyngdf1', 'resolutions': [{'height': 158, 'url': 'https://preview.redd.it/in8sapsyngdf1.png?width=108&crop=smart&auto=webp&s=f85740178e4ec12f06f91b4d07b4d3e4d93060e1', 'width': 108}, {'height': 317, 'url': 'https://preview.redd.it/in8sapsyngdf1.png?width=216&crop=smart&auto=we...
When AI Hires AI: The Endless Loop of Automation and Illusion
0
There’s an eerie poetry to this cycle: 1. HR uses AI to select AI-written resumés 2. Fake senior devs get hired to patch AI-generated spaghetti 3. Security patches are applied by other AI tools 4. Management proudly announces “optimization complete” We used to fear machines replacing humans. Now we’re watchin...
2025-07-17T16:10:06
https://www.reddit.com/r/LocalLLaMA/comments/1m2bo9a/when_ai_hires_ai_the_endless_loop_of_automation/
Hindu_Buddhism
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2bo9a
false
null
t3_1m2bo9a
/r/LocalLLaMA/comments/1m2bo9a/when_ai_hires_ai_the_endless_loop_of_automation/
false
false
self
0
null
Mistral announces Deep Research, Voice mode, multilingual reasoning and Projects for Le Chat
648
New in Le Chat: 1. Deep Research mode: Lightning fast, structured research reports on even the most complex topics. 2. Voice mode: Talk to Le Chat instead of typing with our new Voxtral model. 3. Natively multilingual reasoning: Tap into thoughtful answers, powered by our reasoning model — Magistral. 4. Projects: Orga...
2025-07-17T16:04:03
https://mistral.ai/news/le-chat-dives-deep
Balance-
mistral.ai
1970-01-01T00:00:00
0
{}
1m2bigh
false
null
t3_1m2bigh
/r/LocalLLaMA/comments/1m2bigh/mistral_announces_deep_research_voice_mode/
false
false
default
648
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=108&crop=smart&auto=webp&s=757c6641896f42b25e4c88e87dc438f1e8d270bb', 'width': 108}, {'height': 113, 'url': 'h...
Community based LLM development Project, The idea:
0
**Title:** Distributed LLM Training via Community Compute: A Proposal for a Decentralized AI Ecosystem **Author:** Anonymous Contributor **Date:** July 2025 # Abstract This white paper proposes a decentralized framework for training large language models (LLMs) using distributed, voluntary compute power contributed...
2025-07-17T15:55:34
https://www.reddit.com/r/LocalLLaMA/comments/1m2ba53/community_based_llm_development_project_the_idea/
KiloClassStardrive
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2ba53
false
null
t3_1m2ba53
/r/LocalLLaMA/comments/1m2ba53/community_based_llm_development_project_the_idea/
false
false
self
0
null
Kimi-k2 on lmarena
92
overall: https://preview.redd.it/ahbceguvegdf1.png?width=2450&format=png&auto=webp&s=fa83e349894e7d76cf5d4f222fdcf183c322582e hard prompts: https://preview.redd.it/7epol170fgdf1.png?width=2458&format=png&auto=webp&s=0002ee1409a3cc4f14458b01bd5a7ba86176f392 coding: *Processing img x5aj6nu2fgdf1...*
2025-07-17T15:37:09
https://www.reddit.com/r/LocalLLaMA/comments/1m2asou/kimik2_on_lmarena/
BreakfastFriendly728
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2asou
false
null
t3_1m2asou
/r/LocalLLaMA/comments/1m2asou/kimik2_on_lmarena/
false
false
https://b.thumbs.redditm…yleERp4yhbOw.jpg
92
null
LoRA adapter on emails to mimic users style of writing from their emails
9
Hi everyone, I'm working on a project where I want to fine-tune a language model to mimic a user’s personal writing style — specifically by training on their own email history (with full consent and access via API). The goal is to generate email replies that sound like the user actually wrote them. # I’m curious to ...
2025-07-17T15:19:39
https://www.reddit.com/r/LocalLLaMA/comments/1m2acb8/lora_adapter_on_emails_to_mimic_users_style_of/
Mindless_Paint6516
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2acb8
false
null
t3_1m2acb8
/r/LocalLLaMA/comments/1m2acb8/lora_adapter_on_emails_to_mimic_users_style_of/
false
false
self
9
null
Best of Both Worlds: supporting Ollama AND Llama.cpp
3
Created a simple web-interface that supports both ollama and llama.cpp to run on low-end/no-GPU systems: [https://github.com/ukkit/chat-o-llama](https://github.com/ukkit/chat-o-llama) https://reddit.com/link/1m29f3p/video/63l59qhi5gdf1/player Appreciate any feedback.
2025-07-17T14:44:22
https://www.reddit.com/r/LocalLLaMA/comments/1m29f3p/best_of_both_worlds_supporting_ollama_and_llamacpp/
Longjumping_Tie_7758
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m29f3p
false
null
t3_1m29f3p
/r/LocalLLaMA/comments/1m29f3p/best_of_both_worlds_supporting_ollama_and_llamacpp/
false
false
https://external-preview…6f3ab55c8a273314
3
{'enabled': False, 'images': [{'id': 'PfNYBLAw_MXpMcCh0yzb7XSe-WVh4oafbQMW3tT2rqg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PfNYBLAw_MXpMcCh0yzb7XSe-WVh4oafbQMW3tT2rqg.png?width=108&crop=smart&auto=webp&s=bbbeddbed7a3d2bc2c5c39fac3fd175f57594b9e', 'width': 108}, {'height': 108, 'url': 'h...
For me, Kimi K2 is terrible
0
I don't understand all the hype about Kimi K2. It's terrible at other languages: in Portuguese, it actively invents expressions and slang. Even in English, he hallucinates features of api's or languages, and often mixes content from different open source projects or tools. Not to mention the slowness of even the APIs a...
2025-07-17T14:42:00
https://www.reddit.com/r/LocalLLaMA/comments/1m29cyd/for_me_kimi_k2_is_terrible/
Which_Network_993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m29cyd
false
null
t3_1m29cyd
/r/LocalLLaMA/comments/1m29cyd/for_me_kimi_k2_is_terrible/
false
false
self
0
null
UTCP Golang prototype
4
Hello everyone, I've started to port utcp-python to golang [https://github.com/Raezil/UTCP](https://github.com/Raezil/UTCP) I've created working prototype right now. I need to implement grpc and mcp transports for now.
2025-07-17T14:36:33
https://www.reddit.com/r/LocalLLaMA/comments/1m2981a/utcp_golang_prototype/
Revolutionary_Sir140
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2981a
false
null
t3_1m2981a
/r/LocalLLaMA/comments/1m2981a/utcp_golang_prototype/
false
false
self
4
{'enabled': False, 'images': [{'id': 'kK805vAwyWW0_z1roAcwBsCYkprmz8h1YaGS1mgKDbY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kK805vAwyWW0_z1roAcwBsCYkprmz8h1YaGS1mgKDbY.png?width=108&crop=smart&auto=webp&s=990db9bcba6a4a944004368da5dbcc0c254ec3da', 'width': 108}, {'height': 108, 'url': 'h...
BitNet on intel iGPU.
1
This might be a stupid question, but does anyone know how to get Bitnet ([This](https://github.com/microsoft/BitNet) one specifically) working on an iGPU, is it even possible? I have a n97 mini PC that I'd like to use, but i also have a 1650 super if there is no good way to run Bitnet (Or equivalent) on the n97.
2025-07-17T14:30:22
https://www.reddit.com/r/LocalLLaMA/comments/1m292gj/bitnet_on_intel_igpu/
Dethencarnate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m292gj
false
null
t3_1m292gj
/r/LocalLLaMA/comments/1m292gj/bitnet_on_intel_igpu/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oKcO1zx5E1VHWdotsj1GWA9wxNje_UvJISVBA3TnSkY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oKcO1zx5E1VHWdotsj1GWA9wxNje_UvJISVBA3TnSkY.png?width=108&crop=smart&auto=webp&s=3cb8e53d8a48066ec6e39726835e025d9825dce1', 'width': 108}, {'height': 108, 'url': 'h...
AI devs in NYC — heads up about the RAISE Act
15
Anyone in the NYC AI dev space paying attention to the **RAISE Act**? It’s a new bill that could shape how AI systems get built and deployed—especially open-source stuff. I’m attending a virtual meetup today (July 17 @ 12PM ET) to learn more. If you’re working on agents, LLM stacks, or tool-use pipelines, this might b...
2025-07-17T14:17:41
https://www.reddit.com/r/LocalLLaMA/comments/1m28r3c/ai_devs_in_nyc_heads_up_about_the_raise_act/
AI_Alliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m28r3c
false
null
t3_1m28r3c
/r/LocalLLaMA/comments/1m28r3c/ai_devs_in_nyc_heads_up_about_the_raise_act/
false
false
self
15
{'enabled': False, 'images': [{'id': 'QGZle0oRKuiQJMc9YaUoWO9-wUx1dt4YpRIF_qy4L2M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QGZle0oRKuiQJMc9YaUoWO9-wUx1dt4YpRIF_qy4L2M.png?width=108&crop=smart&auto=webp&s=c805d272ee49218494431b73e4c1f3ab40959016', 'width': 108}, {'height': 108, 'url': 'h...
Anyone here experimenting with LLMs for translation QA — not rewriting, just evaluating?
19
Hi folks, has anyone used LLMs specifically to evaluate translation quality rather than generate translations? I mean using them to catch issues like dropped meaning, inconsistent terminology, awkward phrasing, and so on. I’m on a team experimenting with LLMs (GPT-4, Claude, etc.) for automated translation QA. Not to ...
2025-07-17T14:15:06
https://www.reddit.com/r/LocalLLaMA/comments/1m28oqc/anyone_here_experimenting_with_llms_for/
NataliaShu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m28oqc
false
null
t3_1m28oqc
/r/LocalLLaMA/comments/1m28oqc/anyone_here_experimenting_with_llms_for/
false
false
https://b.thumbs.redditm…t3dXV5HNeD7E.jpg
19
null
Help Deciding Between NVIDIA H200 (2x GPUs) vs NVIDIA L40S (8x GPUs) for Serving 24b-30b LLM to 50 Concurrent Users
5
Hi everyone, I'm looking to upgrade my hardware for serving a 24b to 30b language model (LLM) to around 50 concurrent users, and I'm trying to decide between two NVIDIA GPU configurations: 1. **NVIDIA H200 (2x GPUs)** * Dual GPU setup * 24GB VRAM per GPU (for a total of 48GB VRAM) 2. **NVIDIA L40S (8x GPUs)** ...
2025-07-17T13:54:04
https://www.reddit.com/r/LocalLLaMA/comments/1m285sn/help_deciding_between_nvidia_h200_2x_gpus_vs/
beratcmn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m285sn
false
null
t3_1m285sn
/r/LocalLLaMA/comments/1m285sn/help_deciding_between_nvidia_h200_2x_gpus_vs/
false
false
self
5
null
Which model for local code assistant
2
I'm trying to build little coding assistant tool, but I was wondering what is the best models in your opinion for coding that I can run locally ? Thank you !
2025-07-17T13:21:06
https://www.reddit.com/r/LocalLLaMA/comments/1m27dyr/which_model_for_local_code_assistant/
Wintlink-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m27dyr
false
null
t3_1m27dyr
/r/LocalLLaMA/comments/1m27dyr/which_model_for_local_code_assistant/
false
false
self
2
null
Finally, an AI agent that does something useful with your bank data...
0
it’s a fully open-source AI agent that connects to your Monzo account, fetches your transaction history + balance, and then gives you advice based on your spending. https://preview.redd.it/bdja0u0ipfdf1.png?width=1211&format=png&auto=webp&s=3c373aedfc00acc9144bb6ad8feccc3c79f1c96e What I love: * It uses multiple spe...
2025-07-17T13:19:30
https://www.reddit.com/r/LocalLLaMA/comments/1m27con/finally_an_ai_agent_that_does_something_useful/
AdVirtual2648
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m27con
false
null
t3_1m27con
/r/LocalLLaMA/comments/1m27con/finally_an_ai_agent_that_does_something_useful/
false
false
https://external-preview…ec8e255b8c7694a3
0
{'enabled': False, 'images': [{'id': 'V5j_dAZLdKrcXTSiwNHNMa2cna_KHIE4I3rQHPpT_Gw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/V5j_dAZLdKrcXTSiwNHNMa2cna_KHIE4I3rQHPpT_Gw.png?width=108&crop=smart&auto=webp&s=25b68d57e0bc9c28a333df25f915a60c756b03b8', 'width': 108}, {'height': 108, 'url': 'h...
How to combine local OCR with LLM for document Q&A?
12
When dealing with PDFs that have complicated layouts, like multi-level subheadings, multi-column formats, or tables that stretch across pages, I've found that just extracting the content cleanly is half the battle. Lately, I’ve been using OCRFlux at the front of the pipeline. Most of what I work with are academic paper...
2025-07-17T13:15:48
https://www.reddit.com/r/LocalLLaMA/comments/1m279pe/how_to_combine_local_ocr_with_llm_for_document_qa/
Feisty-Jury-7011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m279pe
false
null
t3_1m279pe
/r/LocalLLaMA/comments/1m279pe/how_to_combine_local_ocr_with_llm_for_document_qa/
false
false
self
12
null
QWEN3 Output <think>\n\n</think>\n\n
2
When doing TTS using qwen , how do i stop the output <think>\\n\\n</think>\\n\\n ? even turning off think /no\_think still has it. currently in n8n , but i also saw it in anything LLM
2025-07-17T12:55:59
https://www.reddit.com/r/LocalLLaMA/comments/1m26t9w/qwen3_output_thinknnthinknn/
uber-linny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m26t9w
false
null
t3_1m26t9w
/r/LocalLLaMA/comments/1m26t9w/qwen3_output_thinknnthinknn/
false
false
self
2
null
best open-source llm for text-to-sql
0
I m doing text-to-sql service in my company. For this i use RAG for few-shot prompt + gpt-4.1. Now i need to try to fine-tune some open source llm for this task. Which model in your experience can u recommend for this task? After some benchmarks research it seems like qwen-2.5 32b coder is the best. And next question i...
2025-07-17T12:52:16
https://www.reddit.com/r/LocalLLaMA/comments/1m26qbv/best_opensource_llm_for_texttosql/
yyeeeel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m26qbv
false
null
t3_1m26qbv
/r/LocalLLaMA/comments/1m26qbv/best_opensource_llm_for_texttosql/
false
false
self
0
null
Grok4 and Kimi K2 are making waves, but here's what my dive into 439 models revealed: Wild price gaps and value wins you might be missing
0
[removed]
2025-07-17T12:49:24
https://www.reddit.com/r/LocalLLaMA/comments/1m26nyu/grok4_and_kimi_k2_are_making_waves_but_heres_what/
medi6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m26nyu
false
null
t3_1m26nyu
/r/LocalLLaMA/comments/1m26nyu/grok4_and_kimi_k2_are_making_waves_but_heres_what/
false
false
self
0
null
Grok4 and Kimi K2 are stealing headlines, but my analysis of 439 models proves: You're overpaying 10x+ unless you exploit these arbitrage goldmines
1
[removed]
2025-07-17T12:42:52
https://www.reddit.com/r/LocalLLaMA/comments/1m26iw9/grok4_and_kimi_k2_are_stealing_headlines_but_my/
medi6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m26iw9
false
null
t3_1m26iw9
/r/LocalLLaMA/comments/1m26iw9/grok4_and_kimi_k2_are_stealing_headlines_but_my/
false
false
self
1
null
Grok4 and Kimi K2 are stealing headlines, but my analysis of 439 models proves: You're overpaying 10x+ unless you exploit these arbitrage goldmines
1
[removed]
2025-07-17T12:41:22
https://www.reddit.com/r/LocalLLaMA/comments/1m26hqa/grok4_and_kimi_k2_are_stealing_headlines_but_my/
medi6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m26hqa
false
null
t3_1m26hqa
/r/LocalLLaMA/comments/1m26hqa/grok4_and_kimi_k2_are_stealing_headlines_but_my/
false
false
self
1
null
I was tired of leaving my terminal for AI stuff, so I built LamaCLI - a powerful CLI tool for Ollama ( Local LLMs )
0
Hey everyone, Like many of you, I live in the terminal. But I always found it frustrating to break my workflow, switch to a browser, and use a web UI every time I needed to ask an AI a question. So I built **LamaCLI** 🦙✨, a powerful, open-source tool that brings Large Language Models directly to your command line, p...
2025-07-17T12:38:35
https://www.reddit.com/r/LocalLLaMA/comments/1m26fmb/i_was_tired_of_leaving_my_terminal_for_ai_stuff/
godofredddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m26fmb
false
null
t3_1m26fmb
/r/LocalLLaMA/comments/1m26fmb/i_was_tired_of_leaving_my_terminal_for_ai_stuff/
false
false
self
0
null
How does Devstral Medium 2507 compare?
6
Has anyone used this model? I’ve heard it’s very good for tool calling but can’t any specifics on performance. Can anyone share their experiences?
2025-07-17T12:05:50
https://www.reddit.com/r/LocalLLaMA/comments/1m25rnu/how_does_devstral_medium_2507_compare/
z_3454_pfk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m25rnu
false
null
t3_1m25rnu
/r/LocalLLaMA/comments/1m25rnu/how_does_devstral_medium_2507_compare/
false
false
self
6
null
48gb not enough to run llama 70b 3.3 q4_k_s ?
2
Hello I am trying to set up a maschine with llama 70b (its for research and thats still baseline testing). I have 2 7900xtx running with vllm set up and yes I will try llama.cpp potentially in the future again. But trying to load the llama 70b q4ks i get an out of memmory error when trying to allocate the kv cach. I am...
2025-07-17T11:55:15
https://www.reddit.com/r/LocalLLaMA/comments/1m25jzs/48gb_not_enough_to_run_llama_70b_33_q4_k_s/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m25jzs
false
null
t3_1m25jzs
/r/LocalLLaMA/comments/1m25jzs/48gb_not_enough_to_run_llama_70b_33_q4_k_s/
false
false
self
2
null
State of Vibe Coding in 2025
1
Hey r/LocalLLaMa, With a new AI coding tool launching basically every week, I feel completely lost trying to figure out what's actually good. Twitter's full of sponsored content, everyone's shilling their favorite tool, and it's getting impossible to cut through the noise. I'm guessing I'm not the only one confused a...
2025-07-17T11:45:08
https://www.reddit.com/r/LocalLLaMA/comments/1m25d44/state_of_vibe_coding_in_2025/
GlitteringPenalty210
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m25d44
false
null
t3_1m25d44
/r/LocalLLaMA/comments/1m25d44/state_of_vibe_coding_in_2025/
false
false
self
1
{'enabled': False, 'images': [{'id': 'AVrbeMUGFRYtXZVSxkN9qJkaw6baditKvhBUbFrzUP0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AVrbeMUGFRYtXZVSxkN9qJkaw6baditKvhBUbFrzUP0.png?width=108&crop=smart&auto=webp&s=4448f11774dabebd46dbf19d65314ea36c45cc48', 'width': 108}, {'height': 108, 'url': 'h...
Is it possible to use a free code interpreter in librechat instead of their paying API?
1
I prefer librechat UI/UX to openwebui, but the paying API for code interpretation is a dealbreaker, I want something I can self host not just because of cost, but also because of privacy. A quick Google search didn't land anything interesting, so I'm asking here.
2025-07-17T11:31:41
https://www.reddit.com/r/LocalLLaMA/comments/1m2543n/is_it_possible_to_use_a_free_code_interpreter_in/
saig22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m2543n
false
null
t3_1m2543n
/r/LocalLLaMA/comments/1m2543n/is_it_possible_to_use_a_free_code_interpreter_in/
false
false
self
1
null
expectation: "We'll fire thousands of junior programmers and replace them with ten seniors and AI"
235
reality: HR's use AI to parse resumés and companies hire vibecoders with fake senior resumés written by the AI stage of acceptance: "we'll hire information security specialists to fix all that crap made by the vibecoders" harsh reality: HR's using AI hire vibeDevSecOpses with fake resumés written by the AI and vibeDe...
2025-07-17T11:31:00
https://www.reddit.com/r/LocalLLaMA/comments/1m253n6/expectation_well_fire_thousands_of_junior/
MelodicRecognition7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m253n6
false
null
t3_1m253n6
/r/LocalLLaMA/comments/1m253n6/expectation_well_fire_thousands_of_junior/
false
false
self
235
null
Choice between Transformers and vLLM
4
I have to run small models (preferably 1-3B) on CPU, on Windows. This project might become bigger and will probably need some cheap GPU for 8B models. Should I use Transformers or vLLM? This is my understanding of their differences, please correct me if I'm wrong: * CPU only seems pretty hard on vLLM as ...
2025-07-17T11:19:46
https://www.reddit.com/r/LocalLLaMA/comments/1m24w5f/choice_between_transformers_and_vllm/
Bosslibra
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m24w5f
false
null
t3_1m24w5f
/r/LocalLLaMA/comments/1m24w5f/choice_between_transformers_and_vllm/
false
false
self
4
null
Are local LLMs on mobile still a gimmick?
4
I think some of these smaller models have become quite good - but seems like the main advantage of running them on mobile is privacy, not accuracy or utility. The thing is, I think most people (non-programmers) use ChatGPT for search, but adding search to a local LLM would kind of defeat the purpose of privacy. So I'm ...
2025-07-17T10:40:50
https://www.reddit.com/r/LocalLLaMA/comments/1m246sn/are_local_llms_on_mobile_still_a_gimmick/
Individual-Dot5488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m246sn
false
null
t3_1m246sn
/r/LocalLLaMA/comments/1m246sn/are_local_llms_on_mobile_still_a_gimmick/
false
false
self
4
null
Somg
0
2025-07-17T10:14:53
https://songgenerator.io/music/2655468
Same_Marionberry_470
songgenerator.io
1970-01-01T00:00:00
0
{}
1m23rh1
false
null
t3_1m23rh1
/r/LocalLLaMA/comments/1m23rh1/somg/
false
false
default
0
null
Grok AI Companion Compilation
0
Gooner AI
2025-07-17T10:00:39
https://youtu.be/N8LxA-RtRvg
Specialist_Ad4073
youtu.be
1970-01-01T00:00:00
0
{}
1m23iz7
false
{'oembed': {'author_name': 'SAINTTRAI', 'author_url': 'https://www.youtube.com/@sainttrai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/N8LxA-RtRvg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; p...
t3_1m23iz7
/r/LocalLLaMA/comments/1m23iz7/grok_ai_companion_compilation/
false
false
default
0
null
Best model to run on 5090 FE, 9950X3D, 128GB Ram?
1
[removed]
2025-07-17T09:57:39
https://www.reddit.com/r/LocalLLaMA/comments/1m23h6l/best_model_to_run_on_5090_fe_9950x3d_128gb_ram/
TweeMansLeger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m23h6l
false
null
t3_1m23h6l
/r/LocalLLaMA/comments/1m23h6l/best_model_to_run_on_5090_fe_9950x3d_128gb_ram/
false
false
self
1
null
ARGO - A Local-First, Offline AI Agent That Puts You in Control
24
Hey everyone! We're building ARGO, an open-source AI Agent client focused on privacy, power, and ease of use. Our goal is to let everyone have their own exclusive super AI agent, without giving up control of their data. **TL;DR:** ARGO is a desktop client that lets you easily build and use AI agents that can think fo...
2025-07-17T09:52:41
https://www.reddit.com/r/LocalLLaMA/comments/1m23efn/argo_a_localfirst_offline_ai_agent_that_puts_you/
yushiqi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m23efn
false
null
t3_1m23efn
/r/LocalLLaMA/comments/1m23efn/argo_a_localfirst_offline_ai_agent_that_puts_you/
false
false
self
24
{'enabled': False, 'images': [{'id': 's5o3hQT0BYyqXPwDxPARNqo38f24U8eQi0k0z3s8FCY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s5o3hQT0BYyqXPwDxPARNqo38f24U8eQi0k0z3s8FCY.png?width=108&crop=smart&auto=webp&s=0d67379e611cc6f94947d478db636afc1e429fdd', 'width': 108}, {'height': 108, 'url': 'h...
Need help with OCR solution
2
I have been given certain legal/regulatory documents to extract text from to create a knowledge-base for an LLM. The challenges: - The pdf documents container scanned images (Fax type quality - quite poor). - The documents are in Arabic I am already testing several conventional OCR as well as LLM solutions. Here's wh...
2025-07-17T09:48:39
https://www.reddit.com/r/LocalLLaMA/comments/1m23c4w/need_help_with_ocr_solution/
champ_undisputed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m23c4w
false
null
t3_1m23c4w
/r/LocalLLaMA/comments/1m23c4w/need_help_with_ocr_solution/
false
false
self
2
null
Securing AI Agents with Honeypots, catch prompt injections before they bite
60
Hey folks 👋 Imagine your AI agent getting hijacked by a prompt-injection attack without you knowing. I'm the founder and maintainer of Beelzebub, an open-source project that hides "honeypot" functions inside your agent using MCP. If the model calls them... 🚨 BEEP! 🚨 You get an instant compromise alert, with detai...
2025-07-17T09:20:00
https://www.reddit.com/r/LocalLLaMA/comments/1m22w76/securing_ai_agents_with_honeypots_catch_prompt/
mario_candela
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m22w76
false
null
t3_1m22w76
/r/LocalLLaMA/comments/1m22w76/securing_ai_agents_with_honeypots_catch_prompt/
false
false
self
60
null
「has anyone built a clause-locked persona?」「GPT that follows strict persona prompt book?」
0
「has anyone built a clause-locked persona?」「GPT that follows strict persona prompt book?」
2025-07-17T09:04:37
https://www.reddit.com/r/LocalLLaMA/comments/1m22nny/has_anyone_built_a_clauselocked_personagpt_that/
3303BB
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m22nny
false
null
t3_1m22nny
/r/LocalLLaMA/comments/1m22nny/has_anyone_built_a_clauselocked_personagpt_that/
false
false
self
0
null
How to get income using local LLM?
0
Hi there, I got my hands on the Evo x2 with 128gb RAM and 2TB SSD and I was wondering what I can do with it to compensate for the expense( because it ain't Cheep). Which model can and should I run and how can I generate income with it? Anyone out here making income with local LLMs?
2025-07-17T08:49:05
https://www.reddit.com/r/LocalLLaMA/comments/1m22f60/how_to_get_income_using_local_llm/
habtilo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m22f60
false
null
t3_1m22f60
/r/LocalLLaMA/comments/1m22f60/how_to_get_income_using_local_llm/
false
false
self
0
null
xAI is actually hiring engineers to build Waifus. 😆
14
2025-07-17T08:15:04
https://i.redd.it/r3pfays48edf1.png
WordyBug
i.redd.it
1970-01-01T00:00:00
0
{}
1m21w8u
false
null
t3_1m21w8u
/r/LocalLLaMA/comments/1m21w8u/xai_is_actually_hiring_engineers_to_build_waifus/
false
false
default
14
{'enabled': True, 'images': [{'id': 'r3pfays48edf1', 'resolutions': [{'height': 136, 'url': 'https://preview.redd.it/r3pfays48edf1.png?width=108&crop=smart&auto=webp&s=c30a4f6f3db4e3797d96a3ad859444cd2f34bf60', 'width': 108}, {'height': 272, 'url': 'https://preview.redd.it/r3pfays48edf1.png?width=216&crop=smart&auto=we...
AI computers for personal use
1
[removed]
2025-07-17T08:12:15
https://www.reddit.com/r/LocalLLaMA/comments/1m21up6/ai_computers_for_personal_use/
snowowlshopscotch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m21up6
false
null
t3_1m21up6
/r/LocalLLaMA/comments/1m21up6/ai_computers_for_personal_use/
false
false
self
1
null
Realtime tta streaming enabled
1
I'm creating a chatbot which fetches llm response. Llm response is sent to TTS model and audio is sent to frontend via websockets. Latency must be very less. Are there any realistic TTS models which supports this? Out of all the models i tested, it doesn't support streaming, either it breaks in middle of sentences or ...
2025-07-17T07:42:14
https://www.reddit.com/r/LocalLLaMA/comments/1m21ec9/realtime_tta_streaming_enabled/
hustler0217
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m21ec9
false
null
t3_1m21ec9
/r/LocalLLaMA/comments/1m21ec9/realtime_tta_streaming_enabled/
false
false
self
1
null
3050 in a SFF PC
1
[removed]
2025-07-17T07:39:29
https://www.reddit.com/r/LocalLLaMA/comments/1m21ctx/3050_in_a_sff_pc/
xylethUK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m21ctx
false
null
t3_1m21ctx
/r/LocalLLaMA/comments/1m21ctx/3050_in_a_sff_pc/
false
false
self
1
null
Would you ever pay in advance to guarantee GPU access a month from now (at a lower rate)?
1
[View Poll](https://www.reddit.com/poll/1m20cfz)
2025-07-17T06:34:48
https://www.reddit.com/r/LocalLLaMA/comments/1m20cfz/would_you_ever_pay_in_advance_to_guarantee_gpu/
Bihari_Eminem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m20cfz
false
null
t3_1m20cfz
/r/LocalLLaMA/comments/1m20cfz/would_you_ever_pay_in_advance_to_guarantee_gpu/
false
false
self
1
null
GB200 NVL72 available for testing in early August.
0
An absolute beast, ready for you to run some tests. Apply on [GPTrack.ai](http://GPTrack.ai)
2025-07-17T06:21:09
https://www.reddit.com/r/LocalLLaMA/comments/1m204kx/gb200_nvl72_available_for_testing_in_early_august/
GPTrack_ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m204kx
false
null
t3_1m204kx
/r/LocalLLaMA/comments/1m204kx/gb200_nvl72_available_for_testing_in_early_august/
false
false
self
0
null
How do I fit one more 5090 gpu here. The motherboard has 3 pcie slots
0
Cabinet is Lian Li O11 dynamic evo xl. This already contains 2 3090 FE cards. I am planning to purchase one 5090 FE. Motherboard is auros x 570 master. I have a 1600 W PSU. I am requesting your expert suggestion on how to fit a new 5090 Founder edition card? Please suggest. Thanks in advance.
2025-07-17T06:03:22
https://www.reddit.com/gallery/1m1ztxz
Jaswanth04
reddit.com
1970-01-01T00:00:00
0
{}
1m1ztxz
false
null
t3_1m1ztxz
/r/LocalLLaMA/comments/1m1ztxz/how_do_i_fit_one_more_5090_gpu_here_the/
false
false
https://b.thumbs.redditm…LElyWOWkQjZg.jpg
0
null
Impact of Kimi K2
1
[removed]
2025-07-17T05:39:48
https://www.reddit.com/r/LocalLLaMA/comments/1m1zfn7/impact_of_kimi_k2/
teenfoilhat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1zfn7
false
null
t3_1m1zfn7
/r/LocalLLaMA/comments/1m1zfn7/impact_of_kimi_k2/
false
false
self
1
null
Updates on UI/UX Benchmark: More Models
1
[removed]
2025-07-17T05:26:16
[deleted]
1970-01-01T00:00:00
0
{}
1m1z7d0
false
null
t3_1m1z7d0
/r/LocalLLaMA/comments/1m1z7d0/updates_on_uiux_benchmark_more_models/
false
false
default
1
null
OpenAI internal data in Kimi K2? Or Hallucinations?
0
What do you think? Hallucinations? I will consider this to be some creative fiction from Kimi K2 unless someone can identify something legit in here! FYI I came across this when the model stated it was from OpenAI, but insisted it wasn't just synthetic data, that it had actual training logs etc. > Here is a singl...
2025-07-17T05:08:22
https://www.reddit.com/r/LocalLLaMA/comments/1m1yw56/openai_internal_data_in_kimi_k2_or_hallucinations/
randomqhacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1yw56
false
null
t3_1m1yw56
/r/LocalLLaMA/comments/1m1yw56/openai_internal_data_in_kimi_k2_or_hallucinations/
false
false
self
0
null
My simple test: Qwen3-32b > Qwen3-14B ≈ DS Qwen3-8 ≳ Qwen3-4B > Mistral 3.2 24B > Gemma3-27b-it,
60
I have an article to instruct those models to rewrite in a different style without missing information, Qwen3-32B did an excellent job, it keeps the meaning but almost rewrite everything. Qwen3-14B,8B tend to miss some information but acceptable Qwen3-4B miss 50% of information Mistral 3.2, on the other hand does no...
2025-07-17T04:52:18
https://www.reddit.com/r/LocalLLaMA/comments/1m1ylw0/my_simple_test_qwen332b_qwen314b_ds_qwen38/
BestLeonNA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1ylw0
false
null
t3_1m1ylw0
/r/LocalLLaMA/comments/1m1ylw0/my_simple_test_qwen332b_qwen314b_ds_qwen38/
false
false
self
60
null
Anybody use TRELLIS (image to 3D) model regularly?
4
I'm curious if anyone uses TRELLIS regularly. Are there any tips and tricks for getting better results? Also, I can't find any information about vram usage of this model. For example the main model TRELLIS-image-large has 1.2B params but when it's actually running it uses close 14+ GB VRAM. I'm not sure why that is. I...
2025-07-17T04:48:02
https://www.reddit.com/r/LocalLLaMA/comments/1m1yj6y/anybody_use_trellis_image_to_3d_model_regularly/
gamesntech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1yj6y
false
null
t3_1m1yj6y
/r/LocalLLaMA/comments/1m1yj6y/anybody_use_trellis_image_to_3d_model_regularly/
false
false
self
4
null
Thunderbolt & Tensor Parallelism (Don't use it)
9
You need to use PCI 4.0 x4 (thunderbolt is PCI 3.0 x4) bare minimum on a dual GPU setup. So this post is just a FYI for people still deciding. Even with that considered, I see PCI link speeds use (temporarily) up to 10GB/s per card, so that setup will also bottleneck. If you want a bottleneck-free experience, you need...
2025-07-17T04:24:04
https://www.reddit.com/r/LocalLLaMA/comments/1m1y3xg/thunderbolt_tensor_parallelism_dont_use_it/
mayo551
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1y3xg
false
null
t3_1m1y3xg
/r/LocalLLaMA/comments/1m1y3xg/thunderbolt_tensor_parallelism_dont_use_it/
false
false
https://b.thumbs.redditm…gXh3912tgtSw.jpg
9
null
We have hit 500,000 members! We have come a long way from the days of the leaked LLaMA 1 models
672
2025-07-17T04:04:21
https://i.redd.it/zfvdqak3zcdf1.png
NixTheFolf
i.redd.it
1970-01-01T00:00:00
0
{}
1m1xqv1
false
null
t3_1m1xqv1
/r/LocalLLaMA/comments/1m1xqv1/we_have_hit_500000_members_we_have_come_a_long/
false
false
https://b.thumbs.redditm…8JiDKuyX0fsg.jpg
672
{'enabled': True, 'images': [{'id': '7D9lxt44gpOy2YsyWuTmCMRTSQhg3pXOWuZ_0c9jiSU', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/zfvdqak3zcdf1.png?width=108&crop=smart&auto=webp&s=278b9cf6b10a7e6add694dd535539cb5f36aff75', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/zfvdqak3zcdf1.png...
I made AI play Mafia | Agentic Game of Lies
6
Hey Everyone.. So I had this fun idea to make AI play Mafia (a social deduction game). I got this idea from Boris Cherny actually (the creator of Claude Code). If you want, you can check it out.
2025-07-17T03:28:51
https://v.redd.it/hcg6jeg3tcdf1
OkDepartment1543
/r/LocalLLaMA/comments/1m1x2qz/i_made_ai_play_mafia_agentic_game_of_lies/
1970-01-01T00:00:00
0
{}
1m1x2qz
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hcg6jeg3tcdf1/DASHPlaylist.mpd?a=1755452050%2CYTU5NTYzZGFhNzM4M2NjMmQzNjI2NzcxMThmZjM2MmMxMjNhZmVhNWI1M2Y4MmY4ZWZmNTYxMTUxOGEyNjY1NA%3D%3D&v=1&f=sd', 'duration': 307, 'fallback_url': 'https://v.redd.it/hcg6jeg3tcdf1/DASH_1080.mp4?source=fallback', '...
t3_1m1x2qz
/r/LocalLLaMA/comments/1m1x2qz/i_made_ai_play_mafia_agentic_game_of_lies/
false
false
https://external-preview…35e7fba6860ef78e
6
{'enabled': False, 'images': [{'id': 'cWw4NTFpZzN0Y2RmMQhcrjEhDBj5eTCFp88s-Z7jJPcxSmU2rFHHO1WRLF11', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cWw4NTFpZzN0Y2RmMQhcrjEhDBj5eTCFp88s-Z7jJPcxSmU2rFHHO1WRLF11.png?width=108&crop=smart&format=pjpg&auto=webp&s=7b0a6c032e9873f3d0b4884c9f8dbf79545d0...
I made AI play Mafia | Agentic Game of Lies
1
[removed]
2025-07-17T03:10:15
https://www.youtube.com/watch?v=vYAvMhPeBVc
OkDepartment1543
youtube.com
1970-01-01T00:00:00
0
{}
1m1wp8e
false
{'oembed': {'author_name': 'Gaurav', 'author_url': 'https://www.youtube.com/@gaurxvreddy', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vYAvMhPeBVc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pi...
t3_1m1wp8e
/r/LocalLLaMA/comments/1m1wp8e/i_made_ai_play_mafia_agentic_game_of_lies/
false
false
default
1
null
Alternative to llama.cpp for Swift
1
Hey everyone, I built my own Swift wrapper for `llama.cpp`, called **Kuzco**, designed specifically for **Apple platforms**. It's built to make it super simple to run LLMs like Mistral, Phi, and Gemma on iOS/macOS with full Swift integration. # Why we made it: * **Swift-native integration** – No more C/C++ interop h...
2025-07-17T02:48:52
https://github.com/jaredcassoutt/Kuzco
D1no_nugg3t
github.com
1970-01-01T00:00:00
0
{}
1m1w9xx
false
null
t3_1m1w9xx
/r/LocalLLaMA/comments/1m1w9xx/alternative_to_llamacpp_for_swift/
false
false
default
1
null
How Different Are Closed Source Models' Architectures?
24
How do the architectures of GPT-4o, Gemini, and Claude compare to open-source ones? Do they have any secret sauce that open models don't? Most of the best open-source models right now (Qwen, Gemma, DeepSeek, Kimi) use nearly the exact same transformer architecture. In fact, the recent Kimi K2 uses the same model code ...
2025-07-17T02:46:00
https://www.reddit.com/r/LocalLLaMA/comments/1m1w7vp/how_different_are_closed_source_models/
simulated-souls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1w7vp
false
null
t3_1m1w7vp
/r/LocalLLaMA/comments/1m1w7vp/how_different_are_closed_source_models/
false
false
self
24
null
Local model recommendations for 5070 Ti (16GB VRAM)?
5
Just built a new system (i7-14700F, RTX 5070 Ti 16GB, 32GB DDR5) and looking to run local LLMs efficiently. I’m aware VRAM is the main constraint and plan to use GPTQ (ExLlama/ExLlamaV2) and GGUF formats. Which recent models are realistically usable with this setup—particularly 4-bit or lower quantized 13B–70B models...
2025-07-17T02:42:45
https://www.reddit.com/r/LocalLLaMA/comments/1m1w5hu/local_model_recommendations_for_5070_ti_16gb_vram/
ShadowbanRevival
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1w5hu
false
null
t3_1m1w5hu
/r/LocalLLaMA/comments/1m1w5hu/local_model_recommendations_for_5070_ti_16gb_vram/
false
false
self
5
null
R1-0528 Sneaks a Single Chinese Char into the Code
2
Once the context balloons, you’ll spot a stray Chinese character in the output and the fix starts looping. First quirk feels Deepseek-specific; second smells like Roo Code. Only fix I’ve found: hard-reset the session.
2025-07-17T02:37:18
https://www.reddit.com/r/LocalLLaMA/comments/1m1w1md/r10528_sneaks_a_single_chinese_char_into_the_code/
Ok_Technology_3421
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1w1md
false
null
t3_1m1w1md
/r/LocalLLaMA/comments/1m1w1md/r10528_sneaks_a_single_chinese_char_into_the_code/
false
false
self
2
null
Would you ever pay in advance to guarantee GPU access a month from now (at a lower rate)?
1
[View Poll](https://www.reddit.com/poll/1m1w0vs)
2025-07-17T02:36:16
https://www.reddit.com/r/LocalLLaMA/comments/1m1w0vs/would_you_ever_pay_in_advance_to_guarantee_gpu/
Bihari_Eminem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1w0vs
false
null
t3_1m1w0vs
/r/LocalLLaMA/comments/1m1w0vs/would_you_ever_pay_in_advance_to_guarantee_gpu/
false
false
self
1
null
What is the best model for Japanese transcriptions?
3
Currently I’m using large v2
2025-07-17T02:13:46
https://www.reddit.com/r/LocalLLaMA/comments/1m1vkdk/what_is_the_best_model_for_japanese_transcriptions/
Wstesia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m1vkdk
false
null
t3_1m1vkdk
/r/LocalLLaMA/comments/1m1vkdk/what_is_the_best_model_for_japanese_transcriptions/
false
false
self
3
null