Dataset schema (column · dtype · observed range):
- title: string, length 1–300
- score: int64, 0–8.54k
- selftext: string, length 0–41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0–878
- author: string, length 3–20
- domain: string, length 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646–1.8k
- name: string, length 10
- permalink: string, length 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4–213
- ups: int64, 0–8.54k
- preview: string, length 301–5.01k
How do LLMs spontaneously decide between in-depth and simple answers?
0
How do LLMs determine whether to provide an in-depth or a direct answer based on the question? Is it just a routing model, or a tailored training data set? Is it feasible to adapt a post-trained LLM to differentiate its response patterns based on question complexity, employing direct answers for simple queries and in-depth...
2025-08-12T13:49:17
https://www.reddit.com/r/LocalLLaMA/comments/1mo99c7/how_do_llms_spontaneous_lying_select_indepth_or/
tangbasky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo99c7
false
null
t3_1mo99c7
/r/LocalLLaMA/comments/1mo99c7/how_do_llms_spontaneous_lying_select_indepth_or/
false
false
self
0
null
RTX 5090 vs RTX 4090 48GB (or RTX 6000)
3
I want to build a small workstation, primarily for LLM inference. Ideally, I would build with multiple RTX 3090s, but I don't have much space, so I'd like to stick to a single GPU in an SFF case. I'd probably power-limit it to around 300W. Is the extra VRAM on the 4090 worth it? An RTX 6000 Pro is possible, but I'm que...
2025-08-12T13:41:47
https://www.reddit.com/r/LocalLLaMA/comments/1mo92ou/rtx_5090_vs_rtx_4090_48gb_or_rtx_6000/
dwrz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo92ou
false
null
t3_1mo92ou
/r/LocalLLaMA/comments/1mo92ou/rtx_5090_vs_rtx_4090_48gb_or_rtx_6000/
false
false
self
3
null
Sports data and odds analysis
1
Quite a long time back when the best model I could find was Llama 3 70b to understand logic and write well, I used it for sports previews. I found that if I gave it a whole list of results/data, it’s just wasn’t good enough to understand the average goals, recent performance, last team they beat, etc etc… Sure it got ...
2025-08-12T13:04:58
https://www.reddit.com/r/LocalLLaMA/comments/1mo86xo/sports_data_and_odds_analysis/
Ok_Try_877
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo86xo
false
null
t3_1mo86xo
/r/LocalLLaMA/comments/1mo86xo/sports_data_and_odds_analysis/
false
false
self
1
null
Plux — File tree + quick-add button. Add any file or folder to LLM in one click [opensource]
1
[removed]
2025-08-12T13:04:32
https://i.redd.it/83k75kdq6lif1.png
Dense-Ad-4020
i.redd.it
1970-01-01T00:00:00
0
{}
1mo86li
false
null
t3_1mo86li
/r/LocalLLaMA/comments/1mo86li/plux_file_tree_quickadd_button_add_any_file_or/
false
false
default
1
{'enabled': True, 'images': [{'id': '83k75kdq6lif1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/83k75kdq6lif1.png?width=108&crop=smart&auto=webp&s=57827435b8017c63d16658ed6696725d32011f39', 'width': 108}, {'height': 98, 'url': 'https://preview.redd.it/83k75kdq6lif1.png?width=216&crop=smart&auto=webp...
Is it possible to replicate how GPT-5 works in ChatGPT for self-hosted frontends?
1
[removed]
2025-08-12T12:51:47
https://www.reddit.com/r/LocalLLaMA/comments/1mo7vtj/is_it_possible_to_replicate_how_gpt5_works_in/
iamthekings5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo7vtj
false
null
t3_1mo7vtj
/r/LocalLLaMA/comments/1mo7vtj/is_it_possible_to_replicate_how_gpt5_works_in/
false
false
self
1
null
GENNAI CLI - A ReAct-based agent CLI
0
Hello, I built a ReAct-based agent CLI program that runs with Ollama and other models and can handle simple coding tasks using built-in/MCP tools. https://github.com/fpt/go-gennai-cli Although it is at a very early stage of development and only has a minimal system prompt, it can run with...
2025-08-12T12:20:26
https://www.reddit.com/r/LocalLLaMA/comments/1mo76lt/gennai_cli_a_reactbased_agent_cli/
_fpt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo76lt
false
null
t3_1mo76lt
/r/LocalLLaMA/comments/1mo76lt/gennai_cli_a_reactbased_agent_cli/
false
false
self
0
{'enabled': False, 'images': [{'id': '_RMyiZCq-Wu-nKDWhLDSoIb7cxKAxl4TcEx1t2rrjnM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_RMyiZCq-Wu-nKDWhLDSoIb7cxKAxl4TcEx1t2rrjnM.png?width=108&crop=smart&auto=webp&s=07bc2ae773ead0c14aeb73fe5358df10326e0eb7', 'width': 108}, {'height': 108, 'url': 'h...
How do you get better at video prompts for VEO3?
0
I've been trying out VEO3 for some fun side projects and want to level up my prompting for text-to-video models. Any tips, examples, or go-to resources that might help me with prompting for different use cases? Should I start including some "movie-making jargon" in my prompts so the model comes up with better qual...
2025-08-12T11:45:33
https://www.reddit.com/r/LocalLLaMA/comments/1mo6g5t/how_do_you_get_better_at_video_prompts_for_veo3/
toolhouseai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo6g5t
false
null
t3_1mo6g5t
/r/LocalLLaMA/comments/1mo6g5t/how_do_you_get_better_at_video_prompts_for_veo3/
false
false
self
0
null
What does gradient_checkpointing do when using HF transformer for full training?
9
I am getting this runtime error while running full fine-tuning of a 130M embedding model with the HF transformers library, with TrainingArguments like this. It seems like the crash is related to torch_compile. training_args = TrainingArguments( output_dir=OUTPUT_DIR, save_strategy=...
2025-08-12T11:36:10
https://www.reddit.com/r/LocalLLaMA/comments/1mo69j2/what_does_gradient_checkpointing_do_when_using_hf/
Ok_Warning2146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo69j2
false
null
t3_1mo69j2
/r/LocalLLaMA/comments/1mo69j2/what_does_gradient_checkpointing_do_when_using_hf/
false
false
self
9
null
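For context on the question above: `gradient_checkpointing` trades memory for compute. Instead of caching every intermediate activation for the backward pass, only segment inputs are stored, and a segment's activations are recomputed when gradients are needed. A toy pure-Python sketch of that tradeoff (an analogy, not HF Trainer or torch internals):

```python
# Toy analogy for gradient checkpointing (not HF/torch internals):
# caching keeps every intermediate activation; checkpointing keeps only
# the segment input and recomputes the rest when it is needed again.

def forward_cached(x, layers):
    acts = [x]
    for f in layers:
        acts.append(f(acts[-1]))
    return acts  # O(len(layers)) values held in memory

def forward_checkpointed(x, layers):
    def recompute():
        # one extra forward pass over the segment at backward time
        acts = [x]
        for f in layers:
            acts.append(f(acts[-1]))
        return acts
    return recompute  # O(1) values held until backward

layers = [lambda v: v * 2, lambda v: v + 3, lambda v: v ** 2]
cached = forward_cached(1, layers)        # [1, 2, 5, 25]
recompute = forward_checkpointed(1, layers)
assert recompute() == cached              # same activations, less stored
```

In HF Transformers this is enabled with `gradient_checkpointing=True` in `TrainingArguments`; since the crash above seems tied to `torch_compile`, disabling that flag first is a reasonable way to isolate whether the two features interact badly.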
Why evals are the missing piece in most AI products
15
I keep seeing AI teams obsess over model choice, prompts, and infrastructure, but very few invest in structured evals early. Without them, you are basically shipping blind. In my experience, good eval workflows catch issues before they hit production, shorten iteration cycles, and prevent those “works in testing, fails...
2025-08-12T11:11:00
https://www.reddit.com/r/LocalLLaMA/comments/1mo5s79/why_evals_are_the_missing_piece_in_most_ai/
dinkinflika0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo5s79
false
null
t3_1mo5s79
/r/LocalLLaMA/comments/1mo5s79/why_evals_are_the_missing_piece_in_most_ai/
false
false
self
15
null
Best Local LLaMA model for academic research on an old Xeon + RTX 3060 12GB?
1
[removed]
2025-08-12T11:10:28
https://www.reddit.com/r/LocalLLaMA/comments/1mo5rup/best_local_llama_model_for_academic_research_on/
opaidedois
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo5rup
false
null
t3_1mo5rup
/r/LocalLLaMA/comments/1mo5rup/best_local_llama_model_for_academic_research_on/
false
false
self
1
null
Qwen2.5-Coder-1.5B-Instruct Finetuning on Coding
2
This doesn't make a lot of sense, but I am trying to FURTHER finetune Qwen2.5-Coder-1.5B-Instruct with QLoRA to achieve better results on benchmarks like HumanEval. So far, it is overfitting on CodeAlpaca. Increasing LoRA dropout or adding weight decay doesn't work, as I believe the base model is the issue. I though...
2025-08-12T10:41:17
https://www.reddit.com/r/LocalLLaMA/comments/1mo58t3/qwen25coder15binstruct_finetuning_on_coding/
neural-learner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo58t3
false
null
t3_1mo58t3
/r/LocalLLaMA/comments/1mo58t3/qwen25coder15binstruct_finetuning_on_coding/
false
false
self
2
null
Can I set up Open WebUI with ooba/LM Studio?
2
It is probably one of my favorite front ends because it is so clean and nice looking. I know it should support OpenAI-compatible APIs, but I don't know how to set it up with anything but Ollama, which I don't really see myself using again for now.
2025-08-12T10:30:16
https://www.reddit.com/r/LocalLLaMA/comments/1mo5221/can_i_setup_open_webui_with_oobalm_studio/
a_normal_user1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo5221
false
null
t3_1mo5221
/r/LocalLLaMA/comments/1mo5221/can_i_setup_open_webui_with_oobalm_studio/
false
false
self
2
null
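On the Open WebUI question above: it can talk to any OpenAI-compatible server, not just Ollama, by pointing its OpenAI connection at the backend's base URL. A hedged deployment sketch (the ports are the usual defaults for LM Studio and text-generation-webui's OpenAI-compatible API; yours may differ):

```shell
# LM Studio: start the local server from the Developer tab (default http://localhost:1234/v1).
# ooba/text-generation-webui: launch with its OpenAI-compatible API enabled.

# Run Open WebUI against LM Studio's endpoint; the API key can be any
# non-empty string for local servers that don't check it.
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=http://host.docker.internal:1234/v1 \
  -e OPENAI_API_KEY=lm-studio \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```

The same connection can also be added at runtime under Settings → Connections instead of via environment variables.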
help me out here folks
0
I have 3-4 agents which I built on Google ADK. Right now I'm using the ADK web UI, but I cannot customise it, so I thought of putting FastAPI over it and using a chat window that is more customisable, more like plug-and-play rather than building everything from scratch. So, are there any options that I...
2025-08-12T10:15:33
https://www.reddit.com/r/LocalLLaMA/comments/1mo4t2g/help_me_out_here_folks/
shixarr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo4t2g
false
null
t3_1mo4t2g
/r/LocalLLaMA/comments/1mo4t2g/help_me_out_here_folks/
false
false
self
0
null
Why is MLA not used more, and why do companies still prefer GQA?
9
So I have been going through the code of some of the latest models, and I noticed that they still use GQA (e.g. GPT-OSS and GLM-4.5), in spite of people saying that Multi-Head Latent Attention is superior. Does anyone have a reason for that?
2025-08-12T09:52:00
https://www.reddit.com/r/LocalLLaMA/comments/1mo4ep3/why_mla_is_not_used_more_and_companies_still/
Severe-Awareness829
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo4ep3
false
null
t3_1mo4ep3
/r/LocalLLaMA/comments/1mo4ep3/why_mla_is_not_used_more_and_companies_still/
false
false
self
9
null
"Seedability"
2
Hi. A few days ago someone mentioned "seedability" here, i.e. the ability of the model to produce deterministic output given a seed number. I don't remember any seed parameter in the REST API calls or even Llama.cpp CLI. Is it something that the model has to support? Does anyone have a working example? Thanks
2025-08-12T09:27:36
https://www.reddit.com/r/LocalLLaMA/comments/1mo40p5/seedability/
ihatebeinganonymous
self.LocalLLaMA
2025-08-12T09:49:19
0
{}
1mo40p5
false
null
t3_1mo40p5
/r/LocalLLaMA/comments/1mo40p5/seedability/
false
false
self
2
null
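On the "seedability" question above: llama.cpp does expose a seed. `llama-cli` accepts `--seed N`, and `llama-server`'s `/completion` endpoint accepts a `seed` field in the request body. A request sketch (assumes a server already running on the default port 8080; note that determinism can still break across different backends, batch sizes, or GPU kernels even with a fixed seed):

```shell
# Two identical requests with the same seed and temperature should
# reproduce the same sample on the same build and hardware:
curl -s http://localhost:8080/completion -d '{
  "prompt": "Once upon a time",
  "n_predict": 32,
  "seed": 42,
  "temperature": 0.8
}'
```

OpenAI-style clients can pass the same thing via the `seed` parameter on chat-completion requests where the backend supports it.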
I created a persistent strategy game where you rule by giving commands to an AI council.
17
Hey Reddit, For the past few months, I've been working on a passion project called **AI Kingdom**, and I'm excited to share it with you all. I've always loved deep strategy and kingdom-building games, but I felt that the interaction often boiled down to clicking through menus. My goal was to create a game where you f...
2025-08-12T09:26:30
https://www.reddit.com/r/LocalLLaMA/comments/1mo4056/i_created_a_persistent_strategy_game_where_you/
rscp1147re
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo4056
false
null
t3_1mo4056
/r/LocalLLaMA/comments/1mo4056/i_created_a_persistent_strategy_game_where_you/
false
false
self
17
null
Chatgpt-4o-like models
0
What open-weight models are closest to chatgpt 4o style, that are 14b and less?
2025-08-12T09:10:06
https://www.reddit.com/r/LocalLLaMA/comments/1mo3r1y/chatgpt4olike_models/
Imaginary_Bread9711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo3r1y
false
null
t3_1mo3r1y
/r/LocalLLaMA/comments/1mo3r1y/chatgpt4olike_models/
false
false
self
0
null
Is AceMath-72B-Instruct better than Qwen3-235B-A22B-Instruct-2507 for mathematical reasoning?
7
The Open LLM Leaderboard on Huggingface does not include any Qwen3 models for comparison and shows nvidia's AceMath-72B-Instruct as the top model for MATH.
2025-08-12T08:59:17
https://www.reddit.com/r/LocalLLaMA/comments/1mo3kre/is_acemath72binstruct_better_than/
zoxtech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo3kre
false
null
t3_1mo3kre
/r/LocalLLaMA/comments/1mo3kre/is_acemath72binstruct_better_than/
false
false
self
7
null
LocalAI Major Update: Modular Backends (update llama.cpp, stablediffusion.cpp, and others independently!), Qwen-VL, Qwen-Image Support, Image Editing & More
47
Hey r/LocalLLaMA, Some of you might know LocalAI already as a way to self-host your own private, OpenAI-compatible AI API (it was the first of its kind!). I'm excited to share that we've just pushed a series of massive updates that I think this community will really appreciate. As a reminder: LocalAI is not a company...
2025-08-12T08:56:03
https://www.reddit.com/r/LocalLLaMA/comments/1mo3j17/localai_major_update_modular_backends_update/
mudler_it
self.LocalLLaMA
2025-08-12T09:37:20
0
{}
1mo3j17
false
null
t3_1mo3j17
/r/LocalLLaMA/comments/1mo3j17/localai_major_update_modular_backends_update/
false
false
self
47
{'enabled': False, 'images': [{'id': 'aAxhIisxAbRsf20ZO5I4zBgsSjEjLFuy2LUHP6-wMAc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aAxhIisxAbRsf20ZO5I4zBgsSjEjLFuy2LUHP6-wMAc.png?width=108&crop=smart&auto=webp&s=7ac4a0daa314128d1ac3407078a6ed7d6b636cbb', 'width': 108}, {'height': 108, 'url': 'h...
Gpt-oss-120b API provider comparison
13
https://x.com/ArtificialAnlys/status/1955102409044398415 As expected, Groq performs worse. I've commented multiple times that something seemed off with their implementation, and this data appears to back that up.
2025-08-12T08:53:47
https://www.reddit.com/r/LocalLLaMA/comments/1mo3hrh/gptoss120b_api_provider_comparison/
Sadman782
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo3hrh
false
null
t3_1mo3hrh
/r/LocalLLaMA/comments/1mo3hrh/gptoss120b_api_provider_comparison/
false
false
self
13
null
Why is grok 4 heavy (and sometimes even grok 4) missing from a bunch of Humanity's Last Exam benchmark sites?
0
I mean, in the first screenshot from ArtificialAnalysis, OpenAI has two variants of GPT-5 (high and medium). In XAi's slides from the grok 4 demo, they showed the heavy version getting 44.4%, which would make it #1. Or am I misunderstanding the way the results are shown here?
2025-08-12T08:27:28
https://www.reddit.com/gallery/1mo33tx
Shasaur
reddit.com
1970-01-01T00:00:00
0
{}
1mo33tx
false
null
t3_1mo33tx
/r/LocalLLaMA/comments/1mo33tx/why_is_grok_4_heavy_and_sometimes_even_grok_4/
false
false
https://b.thumbs.redditm…CkLIeKrFdalc.jpg
0
null
uhmmm im merging but why it dont work
0
2025-08-12T08:13:52
https://i.redd.it/00p223sorjif1.png
BuriqKalipun
i.redd.it
1970-01-01T00:00:00
0
{}
1mo2wfm
false
null
t3_1mo2wfm
/r/LocalLLaMA/comments/1mo2wfm/uhmmm_im_merging_but_why_it_dont_work/
false
false
https://b.thumbs.redditm…y0hMw2RUHs_s.jpg
0
{'enabled': True, 'images': [{'id': 'qrRghNuisfUlYJmZcpYlaBZVJZBwXC0j754dB_tGSBA', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/00p223sorjif1.png?width=108&crop=smart&auto=webp&s=e06f7e39b063b1147420cfcf7bfefcf0f6759aca', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/00p223sorjif1.png...
HoML vs. Ollama: A Deep Dive into Performance
1
Been seeing a lot of posts here asking about Ollama alternatives. I made a post a few days ago that I built HoML: a convenient interface like Ollama, but fully open source and wrapping vLLM. Anyway, I did some benchmarks to see how much faster it is, and here is the result * Scaling: Ollama is super fast for one pers...
2025-08-12T08:04:57
https://homl.dev/blogs/homl-vs-ollama-benchmark.html
wsmlbyme
homl.dev
1970-01-01T00:00:00
0
{}
1mo2rej
false
null
t3_1mo2rej
/r/LocalLLaMA/comments/1mo2rej/homl_vs_ollama_a_deep_dive_into_performance/
false
false
default
1
null
Jan v1: 4B model for web search with 91% SimpleQA, slightly outperforms Perplexity Pro
810
Hi, this is Bach from Jan. We're releasing Jan v1 today. In our evals, Jan v1 delivers 91% SimpleQA accuracy, slightly outperforming Perplexity Pro while running fully locally. It's built on the new version of Qwen's [Qwen3-4B-Thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) (up to 256k context length), f...
2025-08-12T07:45:23
https://i.redd.it/niaetccbljif1.png
Delicious_Focus3465
i.redd.it
1970-01-01T00:00:00
0
{}
1mo2gg7
false
null
t3_1mo2gg7
/r/LocalLLaMA/comments/1mo2gg7/jan_v1_4b_model_for_web_search_with_91_simpleqa/
false
false
https://b.thumbs.redditm…Wa9cou0Iob4Q.jpg
810
{'enabled': True, 'images': [{'id': 'RtJZ2vgTOr3GXTaZuPJJu7fTYOqF1QzQgX-yPoUW9P8', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/niaetccbljif1.png?width=108&crop=smart&auto=webp&s=541e65d60229d6fda641cdc7206a5fa4a86a1b9d', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/niaetccbljif1.png...
Looking for a ReAct Model
0
Title basically says it all. I know GPT-3.5 and GPT 4 can do that but I want to run it locally. Is there any known local model I could use for that?
2025-08-12T07:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1mo2fnv/looking_for_a_react_model/
Private_Tank
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo2fnv
false
null
t3_1mo2fnv
/r/LocalLLaMA/comments/1mo2fnv/looking_for_a_react_model/
false
false
self
0
null
Jan v1: 4B model for web search with 91% SimpleQA, slightly outperforms Perplexity Pro
1
[deleted]
2025-08-12T07:37:04
[deleted]
1970-01-01T00:00:00
0
{}
1mo2by8
false
null
t3_1mo2by8
/r/LocalLLaMA/comments/1mo2by8/jan_v1_4b_model_for_web_search_with_91_simpleqa/
false
false
default
1
null
It's time to release DeepSeek-R2
1
[removed]
2025-08-12T07:21:48
https://www.reddit.com/r/LocalLLaMA/comments/1mo23fc/its_time_to_realease_deepseekr2/
Consistent_Level6369
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo23fc
false
null
t3_1mo23fc
/r/LocalLLaMA/comments/1mo23fc/its_time_to_realease_deepseekr2/
false
false
https://a.thumbs.redditm…635R9JTmGeI0.jpg
1
null
yet another llama.cpp front end.
1
[removed]
2025-08-12T07:17:41
https://www.reddit.com/r/LocalLLaMA/comments/1mo212w/yet_another_llamacpp_front_end/
Lesser-than
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo212w
false
null
t3_1mo212w
/r/LocalLLaMA/comments/1mo212w/yet_another_llamacpp_front_end/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ta33K887SFMRFR1ofhOZgjqy4h2qVqQJ3g7abm313FE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ta33K887SFMRFR1ofhOZgjqy4h2qVqQJ3g7abm313FE.png?width=108&crop=smart&auto=webp&s=ac5b7174dfac9e2fb27dd21898aa24d5afbfaed2', 'width': 108}, {'height': 108, 'url': 'h...
Why stop at 'Strawberry'? Let's up the game with 'How many c's are there in 'pneumonoultramicroscopicsilicovolcanoconiosis'.
128
Qwen 4B got it right after thinking for 30 sec. ZLM thought for almost 2 min. GPT-5 took 5 sec. Gemini took less than 2 sec, and told me to use the count() function in Python, which it used.
2025-08-12T07:08:27
https://i.redd.it/2e65cn38fjif1.png
riwritingreddit
i.redd.it
1970-01-01T00:00:00
0
{}
1mo1vre
false
null
t3_1mo1vre
/r/LocalLLaMA/comments/1mo1vre/why_stop_at_strawberry_lets_up_the_game_with_how/
false
false
https://a.thumbs.redditm…7TX8WA-wWeQ0.jpg
128
{'enabled': True, 'images': [{'id': 'lyfeXAgBeDj5gDUvJRD2EyDW5trgUU6M2dD2tVzj-OY', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/2e65cn38fjif1.png?width=108&crop=smart&auto=webp&s=deac209893563b0b463b3f25dfb35dad2d0d5093', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/2e65cn38fjif1.png...
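The Gemini answer described in the post above reduces to a single call to Python's `str.count`:

```python
word = "pneumonoultramicroscopicsilicovolcanoconiosis"
print(len(word))        # 45 letters
print(word.count("c"))  # 6
```

Counting characters programmatically sidesteps the tokenization issue entirely, which is why models that write and run code get these questions right almost instantly.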
Looking for a good coding model around the 14b size area. Any recommendations?
1
For context, I tried qwen coder 14b, codegemma, phi 4, and gpt oss. Gpt oss gave me the best results by far, but still not quite enough for me. Does it get any better at this parameter size?
2025-08-12T07:03:00
https://www.reddit.com/r/LocalLLaMA/comments/1mo1sq4/looking_for_a_good_coding_model_around_the_14b/
a_normal_user1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo1sq4
false
null
t3_1mo1sq4
/r/LocalLLaMA/comments/1mo1sq4/looking_for_a_good_coding_model_around_the_14b/
false
false
self
1
null
Uncensored gpt-oss-20b released
185
Jinx is a "helpful-only" variant of popular open-weight language models that responds to all queries without safety refusals. [https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b](https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b)
2025-08-12T06:58:18
https://www.reddit.com/r/LocalLLaMA/comments/1mo1pv4/uncensored_gptoss20b_released/
No-Solution-8341
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo1pv4
false
null
t3_1mo1pv4
/r/LocalLLaMA/comments/1mo1pv4/uncensored_gptoss20b_released/
false
false
self
185
{'enabled': False, 'images': [{'id': 'P0d7BMzhU8lFm_gY9r3-Ieqcq7avVW4yk_FBxEW_Ccs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/P0d7BMzhU8lFm_gY9r3-Ieqcq7avVW4yk_FBxEW_Ccs.png?width=108&crop=smart&auto=webp&s=f4bd6c37b59017817c7574387134e19b9a3cebbf', 'width': 108}, {'height': 116, 'url': 'h...
Google is cooking something...
343
2025-08-12T06:55:15
https://i.redd.it/8zf0or9odjif1.jpeg
Ok_Ninja7526
i.redd.it
1970-01-01T00:00:00
0
{}
1mo1o3d
false
null
t3_1mo1o3d
/r/LocalLLaMA/comments/1mo1o3d/google_is_cooking_something/
false
false
https://b.thumbs.redditm…cLY4Q5TOfrig.jpg
343
{'enabled': True, 'images': [{'id': 'mkjPL39B0e_pGZCHuFV2ASpsznADVrnVp1ljvypuswg', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/8zf0or9odjif1.jpeg?width=108&crop=smart&auto=webp&s=d91413f57688ef8c5e9b958db05ca276ccf0ba8e', 'width': 108}, {'height': 233, 'url': 'https://preview.redd.it/8zf0or9odjif1.j...
GLM 4.5 AIR IS SO FKING GOODDD
193
I just got to try it with our agentic system. It's perfect with its tool calls, but mostly it's just freakishly fast. Thanks z.ai, I love you 😘💋
2025-08-12T06:52:07
https://www.reddit.com/r/LocalLLaMA/comments/1mo1mb1/glm_45_air_is_so_fking_gooddd/
boneMechBoy69420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo1mb1
false
null
t3_1mo1mb1
/r/LocalLLaMA/comments/1mo1mb1/glm_45_air_is_so_fking_gooddd/
false
false
self
193
null
Use of llm to do coding
0
I was wondering what common concerns you all have about using LLMs to do coding, and what features (like tools) you would like to have. Last question: what do you think of a multi-agent orchestration system for coding? What's your experience, and would you say it's worth it?
2025-08-12T05:59:57
https://www.reddit.com/r/LocalLLaMA/comments/1mo0ru7/use_of_llm_to_do_coding/
Ok_Horror_8567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo0ru7
false
null
t3_1mo0ru7
/r/LocalLLaMA/comments/1mo0ru7/use_of_llm_to_do_coding/
false
false
self
0
null
An error to use XTTS.
1
Hi everyone, I’m trying to use the CoquiTTS XTTS v2 model (`tts_models/multilingual/multi-dataset/xtts_v2`), but I’m encountering an error related to the `GPT2InferenceModel`. Here’s the issue: > Has anyone else faced this issue? I’d appreciate any suggestions on how to resolve it or if there’s a workaround. Thanks...
2025-08-12T05:21:28
https://www.reddit.com/r/LocalLLaMA/comments/1mo04cx/an_error_to_use_xtts/
Effective_Rip2500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mo04cx
false
null
t3_1mo04cx
/r/LocalLLaMA/comments/1mo04cx/an_error_to_use_xtts/
false
false
self
1
null
Been using powerful AI agents like **Claude Code** for months and have run into two fundamental problems:
1
[removed]
2025-08-12T05:05:36
https://www.reddit.com/r/LocalLLaMA/comments/1mnzubj/been_using_powerful_ai_agents_like_claude_code/
Lanky-District9096
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnzubj
false
null
t3_1mnzubj
/r/LocalLLaMA/comments/1mnzubj/been_using_powerful_ai_agents_like_claude_code/
false
false
self
1
null
How I'm Giving Llama 3 a "Claude-like" Brain for Code Search with a Tiny Local Index
1
[removed]
2025-08-12T05:03:23
https://www.reddit.com/r/LocalLLaMA/comments/1mnzsvk/how_im_giving_llama_3_a_claudelike_brain_for_code/
andylizf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnzsvk
false
null
t3_1mnzsvk
/r/LocalLLaMA/comments/1mnzsvk/how_im_giving_llama_3_a_claudelike_brain_for_code/
false
false
self
1
null
How I'm Giving Llama 3 a "Claude-like" Brain for Code Search with a Tiny Local Index
1
[removed]
2025-08-12T05:02:48
https://www.reddit.com/r/LocalLLaMA/comments/1mnzsi3/how_im_giving_llama_3_a_claudelike_brain_for_code/
andylizf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnzsi3
false
null
t3_1mnzsi3
/r/LocalLLaMA/comments/1mnzsi3/how_im_giving_llama_3_a_claudelike_brain_for_code/
false
false
self
1
null
Been using powerful AI agents like **Claude Code** for months and have run into two fundamental problems:
1
[removed]
2025-08-12T04:57:59
https://www.reddit.com/r/LocalLLaMA/comments/1mnzpav/been_using_powerful_ai_agents_like_claude_code/
Lanky-District9096
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnzpav
false
null
t3_1mnzpav
/r/LocalLLaMA/comments/1mnzpav/been_using_powerful_ai_agents_like_claude_code/
false
false
https://b.thumbs.redditm…TJTJFI8_ImYY.jpg
1
null
LlamaCpp part on ShelfMK
0
The LlamaCpp part was created on the ShelfMK platform, with versions for CPU and Vulkan. The part works on Windows and Linux. The SimpleChat.LlamaBot application was also created. src - [https://github.com/pwipo/smc_cpp_modules/tree/main/llamaCpp](https://github.com/pwipo/smc_cpp_modules/tree/main/llamaCpp) Chat -...
2025-08-12T04:55:07
https://www.reddit.com/r/LocalLLaMA/comments/1mnznhw/llamacpp_part_on_shelfmk/
ulianownw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnznhw
false
null
t3_1mnznhw
/r/LocalLLaMA/comments/1mnznhw/llamacpp_part_on_shelfmk/
false
false
self
0
null
How I'm Giving Llama 3 a "Claude-like" Brain for Code Search with a Tiny Local Index
1
[removed]
2025-08-12T04:50:15
https://www.reddit.com/r/LocalLLaMA/comments/1mnzkdy/how_im_giving_llama_3_a_claudelike_brain_for_code/
andylizf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnzkdy
false
null
t3_1mnzkdy
/r/LocalLLaMA/comments/1mnzkdy/how_im_giving_llama_3_a_claudelike_brain_for_code/
false
false
self
1
{'enabled': False, 'images': [{'id': 'nBMYQywex5cxq5tfEaqgfOTq1nFP5cukb3xuzU6eF6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nBMYQywex5cxq5tfEaqgfOTq1nFP5cukb3xuzU6eF6Y.png?width=108&crop=smart&auto=webp&s=4d362eec89955caf19b11e685ad83fe0edda8a75', 'width': 108}, {'height': 108, 'url': 'h...
Interactive Reasoning Benchmarks | ARC-AGI-3 Preview
8
2025-08-12T04:45:14
https://www.youtube.com/watch?v=3T4OwBp6d90
AaronFeng47
youtube.com
1970-01-01T00:00:00
0
{}
1mnzh4v
false
{'oembed': {'author_name': 'ARC Prize', 'author_url': 'https://www.youtube.com/@ARCprize', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/3T4OwBp6d90?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pi...
t3_1mnzh4v
/r/LocalLLaMA/comments/1mnzh4v/interactive_reasoning_benchmarks_arcagi3_preview/
false
false
https://external-preview…ab5838044110d641
8
{'enabled': False, 'images': [{'id': 'kYuRuYq7QVkuqRGYxLg7ZJUh0IUdDQO6gTNOZAtie7A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/kYuRuYq7QVkuqRGYxLg7ZJUh0IUdDQO6gTNOZAtie7A.jpeg?width=108&crop=smart&auto=webp&s=31e820835db23a204811be0f35d54b3ebaa2e341', 'width': 108}, {'height': 162, 'url': '...
How I'm Using a Local Index to Fix Claude Code's Search and Bring Its Power to Local LLMs
1
[removed]
2025-08-12T04:35:32
https://www.reddit.com/r/LocalLLaMA/comments/1mnzapj/how_im_using_a_local_index_to_fix_claude_codes/
andylizf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnzapj
false
null
t3_1mnzapj
/r/LocalLLaMA/comments/1mnzapj/how_im_using_a_local_index_to_fix_claude_codes/
false
false
self
1
null
How I'm Using a Local Index to Fix Claude Code's Search and Bring Its Power to Local LLMs
1
[removed]
2025-08-12T04:33:32
https://www.reddit.com/r/LocalLLaMA/comments/1mnz9fi/how_im_using_a_local_index_to_fix_claude_codes/
andylizf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnz9fi
false
null
t3_1mnz9fi
/r/LocalLLaMA/comments/1mnz9fi/how_im_using_a_local_index_to_fix_claude_codes/
false
false
https://b.thumbs.redditm…03FSNjCd1NIs.jpg
1
null
Fixing Claude Code’s Two Biggest Flaws (Privacy & `grep`) with a Local-First Index
1
[removed]
2025-08-12T04:26:40
https://www.reddit.com/r/LocalLLaMA/comments/1mnz4ty/fixing_claude_codes_two_biggest_flaws_privacy/
andylizf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnz4ty
false
null
t3_1mnz4ty
/r/LocalLLaMA/comments/1mnz4ty/fixing_claude_codes_two_biggest_flaws_privacy/
false
false
https://b.thumbs.redditm…X84ynQzOS15E.jpg
1
null
google/gemma-3-12b is amazing when it comes to weaving complex stories
6
Only 9.8GB of local memory so far, but it is weaving such an elaborate and detailed story regarding a civil war in the US between freedom fighters and Trump forces. Here is what is going on: detailed stories, down to technical details that would be accurate (it even knows to weave into the story 30-80MHz SINCGARS communic...
2025-08-12T04:21:15
https://www.reddit.com/r/LocalLLaMA/comments/1mnz16p/googlegemma312b_is_amazing_when_it_comes_to/
meshreplacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnz16p
false
null
t3_1mnz16p
/r/LocalLLaMA/comments/1mnz16p/googlegemma312b_is_amazing_when_it_comes_to/
false
false
https://b.thumbs.redditm…J_lkaF_QjOgk.jpg
6
null
Unsloth fixes chat_template (again). gpt-oss-120-high now scores 68.4 on Aider polyglot
157
https://preview.redd.it/…a07365920dd32c51
2025-08-12T03:24:03
https://www.reddit.com/r/LocalLLaMA/comments/1mnxwmw/unsloth_fixes_chat_template_again_gptoss120high/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnxwmw
false
null
t3_1mnxwmw
/r/LocalLLaMA/comments/1mnxwmw/unsloth_fixes_chat_template_again_gptoss120high/
false
false
https://b.thumbs.redditm…UqoKaM4BcLls.jpg
157
null
LocalLLaMA is the last sane place to discuss LLMs on this site, I swear
1,862
2025-08-12T03:12:34
https://i.redd.it/iu3pniar9iif1.jpeg
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1mnxodk
false
null
t3_1mnxodk
/r/LocalLLaMA/comments/1mnxodk/localllama_is_the_last_sane_place_to_discuss_llms/
false
false
https://b.thumbs.redditm…R-EiYvYUZTvQ.jpg
1,862
{'enabled': True, 'images': [{'id': 'EC5EHuSbyN7q8IY_e9T7irNsPQziYnXp7QyJm9iqhnE', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/iu3pniar9iif1.jpeg?width=108&crop=smart&auto=webp&s=a90ff28b0dfaec6240a2ad7d79e893808afaab08', 'width': 108}, {'height': 240, 'url': 'https://preview.redd.it/iu3pniar9iif1.j...
what happens when token context is over 100% still seems to work?
6
Input token count: 0. Context is 403.7% full. I am using LM Studio and working with a model, and I am up to 403.7% full, but things still seem to be working. What exactly does this mean, and what are the implications?
2025-08-12T03:09:12
https://www.reddit.com/r/LocalLLaMA/comments/1mnxlwq/what_happens_when_token_context_is_over_100_still/
meshreplacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnxlwq
false
null
t3_1mnxlwq
/r/LocalLLaMA/comments/1mnxlwq/what_happens_when_token_context_is_over_100_still/
false
false
self
6
null
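On the over-100% context question above: once the prompt exceeds the model's window, the runtime has to drop tokens somewhere. LM Studio exposes this as a context-overflow policy (e.g. rolling the window or truncating); the exact behavior depends on the setting. A minimal sketch of last-N truncation, which is why things "still work" but the model silently forgets the earliest context (an assumption: a simplified model of overflow handling, not LM Studio's actual code):

```python
def clamp_context(tokens, max_ctx):
    # keep only the most recent max_ctx tokens; anything earlier is dropped
    return tokens[-max_ctx:]

history = list(range(120))      # pretend token ids: 300% of a 40-token window
window = clamp_context(history, 40)
print(len(window), window[0])   # 40 80  -> the first 80 tokens are gone
```

The practical implication is that a "403.7% full" session is answering from roughly the last quarter of the conversation; instructions given near the start may no longer be in the prompt at all.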
dots.ocr: Multilingual Document Layout Parsing Model
1
[removed]
2025-08-12T02:08:27
https://www.reddit.com/r/LocalLLaMA/comments/1mnwbts/dotsocr_multilingual_document_layout_parsing_model/
pppenguinininin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnwbts
false
null
t3_1mnwbts
/r/LocalLLaMA/comments/1mnwbts/dotsocr_multilingual_document_layout_parsing_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'sicKLlPmH61yjU1ylkQ1LQZoSlWF6BiYQahHml8yNOg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sicKLlPmH61yjU1ylkQ1LQZoSlWF6BiYQahHml8yNOg.png?width=108&crop=smart&auto=webp&s=0f503dc662cc002fda5666e441c7b91d0f0ea415', 'width': 108}, {'height': 108, 'url': 'h...
dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model
1
[removed]
2025-08-12T02:03:25
https://www.reddit.com/r/LocalLLaMA/comments/1mnw7vq/dotsocr_multilingual_document_layout_parsing_in_a/
pppenguinininin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnw7vq
false
null
t3_1mnw7vq
/r/LocalLLaMA/comments/1mnw7vq/dotsocr_multilingual_document_layout_parsing_in_a/
false
false
self
1
null
What's the best consumer AI app you've actually used in 2025?
3
We're 8 months into 2025 and I'm realizing something - despite all the AI hype, my phone's app drawer looks basically the same as 2024. Sure, every existing app slapped an "AI" button somewhere (looking at you, Notion AI, Grammarly, Canva), but where are the actual AI-native consumer apps that people use daily? Every...
2025-08-12T01:48:24
https://www.reddit.com/r/LocalLLaMA/comments/1mnvvx6/whats_the_best_consumer_ai_app_youve_actually/
jinxiaoshuai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnvvx6
false
null
t3_1mnvvx6
/r/LocalLLaMA/comments/1mnvvx6/whats_the_best_consumer_ai_app_youve_actually/
false
false
self
3
{'enabled': False, 'images': [{'id': 'PezcliVTOJmrw2T-iy6hQL8d2hqy4q6G8U__SS7ZjrY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/PezcliVTOJmrw2T-iy6hQL8d2hqy4q6G8U__SS7ZjrY.jpeg?width=108&crop=smart&auto=webp&s=1e9268f8000ba05c0eaa4a283483174ba8fe421c', 'width': 108}, {'height': 113, 'url': '...
AGI Reflex [OSS]: unified memory + metacognitive reflection + planner for llama.cpp (demo + quick start)
1
[removed]
2025-08-12T01:42:28
https://www.reddit.com/r/LocalLLaMA/comments/1mnvr9t/agi_reflex_oss_unified_memory_metacognitive/
Miserable_Tutor1249
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnvr9t
false
null
t3_1mnvr9t
/r/LocalLLaMA/comments/1mnvr9t/agi_reflex_oss_unified_memory_metacognitive/
false
false
self
1
null
KittenTTS inference speed on ARM
19
Source: https://github.com/KittenML/KittenTTS/issues/40#issuecomment-3168324368 | Environment | Kokoro (kokoro-v1.0.fp16.onnx) | Piper (en_US-lessac-low.onnx) | KittenTTS (kitten_tts_nano_v0_1.onnx) | |-------------------|--------------------------------|-------------------------------|--------------------------...
2025-08-12T01:35:04
https://www.reddit.com/r/LocalLLaMA/comments/1mnvleb/kittentts_inference_speed_on_arm/
Aaaaaaaaaeeeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnvleb
false
null
t3_1mnvleb
/r/LocalLLaMA/comments/1mnvleb/kittentts_inference_speed_on_arm/
false
false
self
19
{'enabled': False, 'images': [{'id': 'zv5vdZFEZseMaGDNrcsiexVO6P0BT1yJMf32Z-dFgoI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zv5vdZFEZseMaGDNrcsiexVO6P0BT1yJMf32Z-dFgoI.png?width=108&crop=smart&auto=webp&s=5a6d2432d58c838297f27fb04d3dd4739843b929', 'width': 108}, {'height': 108, 'url': 'h...
Just another llama.cpp front end
1
[removed]
2025-08-12T01:09:53
https://www.reddit.com/r/LocalLLaMA/comments/1mnv167/just_another_llamacpp_front_end/
Lesser-than
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnv167
false
null
t3_1mnv167
/r/LocalLLaMA/comments/1mnv167/just_another_llamacpp_front_end/
false
false
self
1
null
I built a one stop AI powered research and study solution
0
I was tired of struggling with boring textbooks. So I built the ultimate AI-powered study weapon - and 10,000+ students are already using it. NexNotes AI is an AI-powered tool that helps you streamline your study and learning process. With a suite of features including mind maps, study plans, flowcharts, summaries, and ...
2025-08-12T01:03:47
https://nexnotes-ai.pages.dev
pls_Do_not_ban
nexnotes-ai.pages.dev
1970-01-01T00:00:00
0
{}
1mnuwet
false
null
t3_1mnuwet
/r/LocalLLaMA/comments/1mnuwet/i_built_a_one_stop_ai_powered_research_and_study/
false
false
default
0
null
OpenLLM for Data Scoring
1
Hey everyone, I’m currently doing an internship at a real estate agency, and I’m playing around with the idea of using my llm to help with some data analysis. Basically, we have this big CSV file with about 5,000 future real estate projects, and I’m thinking of feeding each project description to the model and having ...
2025-08-12T00:51:27
https://www.reddit.com/r/LocalLLaMA/comments/1mnumr9/openllm_for_data_scoring/
Chemical_Elk7746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnumr9
false
null
t3_1mnumr9
/r/LocalLLaMA/comments/1mnumr9/openllm_for_data_scoring/
false
false
self
1
null
My persistent memory system has been updated.
0
Sorry for being away so long, but I have some updates on my persistent AI Memory System. All the details are in the README on my github. Because I'm still sick and on my phone. But I've fixed some issues with startup, and added deduplication logic. If you're having any issues, I think you can report them in Github. Her...
2025-08-12T00:35:58
https://www.reddit.com/r/LocalLLaMA/comments/1mnuaem/my_persistent_memory_system_has_been_updated/
Savantskie1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnuaem
false
null
t3_1mnuaem
/r/LocalLLaMA/comments/1mnuaem/my_persistent_memory_system_has_been_updated/
false
false
self
0
null
Fully verbal LLM program for OSX using whisper, ollama & XTTS
1
Hi! first time poster here. I have just uploaded a little program to github called [s2t2s](https://github.com/jamesmpls80/S2T2S) that allows a user to interact with an LLM without reading or use of a keyboard. like SIRI or ALEXA, but its 100% local, and not trying to sell you stuff. \*It is still in alpha dev stages\...
2025-08-12T00:17:21
https://www.reddit.com/r/LocalLLaMA/comments/1mntvhd/fully_verbal_llm_program_for_osx_using_whisper/
nomadman0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mntvhd
false
null
t3_1mntvhd
/r/LocalLLaMA/comments/1mntvhd/fully_verbal_llm_program_for_osx_using_whisper/
false
false
self
1
null
How Benchmaxxed is gpt-oss-120b?
0
2025-08-12T00:07:17
https://cmart.blog/gpt-oss-120b-benchmaxxed/
c-mart_in
cmart.blog
1970-01-01T00:00:00
0
{}
1mntn9u
false
null
t3_1mntn9u
/r/LocalLLaMA/comments/1mntn9u/how_benchmaxxed_is_gptoss120b/
false
false
default
0
null
dose any one use hostinger
1
I would like to know: does anyone use Hostinger on their VPS to run Ollama? What do you think? I was thinking about using them.
2025-08-11T23:43:52
https://www.reddit.com/r/LocalLLaMA/comments/1mnt3x1/dose_any_one_use_hostinger/
wbiggs205
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnt3x1
false
null
t3_1mnt3x1
/r/LocalLLaMA/comments/1mnt3x1/dose_any_one_use_hostinger/
false
false
self
1
null
Who's 40k AI workstation is this? 4x RTX6000B
5
2025-08-11T23:42:31
https://forum.level1techs.com/t/wip-blackwell-rtx-6000-pro-max-q-quickie-setup-guide-on-ubuntu-24-04-lts-25-04/230521/206
HilLiedTroopsDied
forum.level1techs.com
1970-01-01T00:00:00
0
{}
1mnt2rq
false
null
t3_1mnt2rq
/r/LocalLLaMA/comments/1mnt2rq/whos_40k_ai_workstation_is_this_4x_rtx6000b/
false
false
default
5
{'enabled': False, 'images': [{'id': '2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=108&crop=smart&auto=webp&s=1a4bef0788cf677e51e7e9eaf4bbcdcc09552954', 'width': 108}, {'height': 216, 'url': ...
Minimal + OSS plugin recommendations for VS Code using AI chat? (August 2025)
4
Hi there, I am curious what everyone is using for AI chat mode. I am currently using Continue, specifically it's main chat panel (LHS of the screen by default). I think it does the following well enough: * Ability to control context when building up a prompt, e.g. ctrl+L to start a new prompt and add a selection, ctrl...
2025-08-11T23:27:50
https://www.reddit.com/r/LocalLLaMA/comments/1mnsqjt/minimal_oss_plugin_recommendations_for_vs_code/
Sad_Temperature6721
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnsqjt
false
null
t3_1mnsqjt
/r/LocalLLaMA/comments/1mnsqjt/minimal_oss_plugin_recommendations_for_vs_code/
false
false
self
4
null
LMstudio Write an MCP tool for me (that store text)
1
[removed]
2025-08-11T23:17:58
https://www.reddit.com/r/LocalLLaMA/comments/1mnsia0/lmstudio_write_an_mcp_tool_for_me_that_store_text/
icelaw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnsia0
false
null
t3_1mnsia0
/r/LocalLLaMA/comments/1mnsia0/lmstudio_write_an_mcp_tool_for_me_that_store_text/
false
false
self
1
null
How to run gpt-oss-120b faster? 4090 and 64GB of RAM.
6
I am trying to run GPT OSS 120B on my 4090, using this command: llama-server --hf-repo unsloth/gpt-oss-120b-GGUF --hf-file gpt-oss-120b-F16.gguf -c 16384 -ngl 99 -ot ".ffn_.*_exps.=CPU" -fa With 16k context here, I am getting around 14 tps, which is at the lower end of what I want but is fine, but th...
2025-08-11T23:15:32
https://www.reddit.com/r/LocalLLaMA/comments/1mnsg6d/how_to_run_gptoss120b_faster_4090_and_64gb_of_ram/
Pro-editor-1105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnsg6d
false
null
t3_1mnsg6d
/r/LocalLLaMA/comments/1mnsg6d/how_to_run_gptoss120b_faster_4090_and_64gb_of_ram/
false
false
self
6
null
What the hell...(also you already know what model this is)
0
2025-08-11T23:11:55
https://i.redd.it/nhdzckmy2hif1.png
Pro-editor-1105
i.redd.it
1970-01-01T00:00:00
0
{}
1mnsd1y
false
null
t3_1mnsd1y
/r/LocalLLaMA/comments/1mnsd1y/what_the_hellalso_you_already_know_what_model/
false
false
default
0
{'enabled': True, 'images': [{'id': 'nhdzckmy2hif1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/nhdzckmy2hif1.png?width=108&crop=smart&auto=webp&s=05323823ace3ef2af9a839249a0137d5d72ba177', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/nhdzckmy2hif1.png?width=216&crop=smart&auto=web...
Delusioned Sam forgot what P meant in GPT
0
https://preview.redd.it/…ng%22%20step).
2025-08-11T23:02:48
https://www.reddit.com/r/LocalLLaMA/comments/1mns57n/delusioned_sam_forgot_what_p_meant_in_gpt/
uhuge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mns57n
false
null
t3_1mns57n
/r/LocalLLaMA/comments/1mns57n/delusioned_sam_forgot_what_p_meant_in_gpt/
false
false
https://a.thumbs.redditm…fnk0mhyuG-R8.jpg
0
null
Qwen Router?
0
I keep thinking about how nuts it is that OpenAI went all in on a router - meanwhile Qwen 3’s thinking and instruct models (which all rock) are sort of perfectly set up for this same kind of System1/System2 configuration. I think it would just kind of be hilarious if they just dropped something like this as a way of ju...
2025-08-11T22:58:40
https://www.reddit.com/r/LocalLLaMA/comments/1mns1h3/qwen_router/
Background_Put_4978
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mns1h3
false
null
t3_1mns1h3
/r/LocalLLaMA/comments/1mns1h3/qwen_router/
false
false
self
0
null
Delusioned Sam forgot what P meant in GPT
0
https://preview.redd.it/…ng%22%20step).
2025-08-11T22:56:38
https://www.reddit.com/r/LocalLLaMA/comments/1mnrzql/delusioned_sam_forgot_what_p_meant_in_gpt/
uhuge
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnrzql
false
null
t3_1mnrzql
/r/LocalLLaMA/comments/1mnrzql/delusioned_sam_forgot_what_p_meant_in_gpt/
false
false
https://b.thumbs.redditm…RHnR5Jv6qHVk.jpg
0
null
Llmstudio censorship?
0
I've been toying with LM Studio, and while the interface is clean, I can't help but feel like this program is monitored. When I ask certain political questions to DeepSeek R1, I get a response in Chinese and have to read the thinking bubble to get part of the info. When I ask the same question to huihui models, it run...
2025-08-11T22:45:10
https://www.reddit.com/r/LocalLLaMA/comments/1mnrpvl/llmstudio_censorship/
Azisan86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnrpvl
false
null
t3_1mnrpvl
/r/LocalLLaMA/comments/1mnrpvl/llmstudio_censorship/
false
false
self
0
null
how do I run and finetune qwen3 30b a3b?
0
I've been bashing my head against the wall for a week. I'm trying to finetune Qwen3 30B. I used Unsloth, which gave me a LoRA file, but I have no idea if it actually works, because I keep getting errors when trying to run the model. And when I don't get errors, the finetune doesn't seem to be applied at all. So I trie...
2025-08-11T22:26:30
https://www.reddit.com/r/LocalLLaMA/comments/1mnr9of/how_do_i_run_and_finetune_qwen3_30b_a3b/
ThatIsNotIllegal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnr9of
false
null
t3_1mnr9of
/r/LocalLLaMA/comments/1mnr9of/how_do_i_run_and_finetune_qwen3_30b_a3b/
false
false
self
0
null
I built Nore, a free desktop UI to manage all your local and cloud AI models in one place.
15
https://i.redd.it/8h46v3…r any questions.
2025-08-11T22:14:45
https://www.reddit.com/r/LocalLLaMA/comments/1mnqyw3/i_built_nore_a_free_desktop_ui_to_manage_all_your/
embium
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnqyw3
false
null
t3_1mnqyw3
/r/LocalLLaMA/comments/1mnqyw3/i_built_nore_a_free_desktop_ui_to_manage_all_your/
false
false
https://external-preview…a098b98dc266e5e8
15
{'enabled': False, 'images': [{'id': '6uxo0McORlF2P1of8Iq-78vP3ukl5svQJ-7TXrixmug', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/6uxo0McORlF2P1of8Iq-78vP3ukl5svQJ-7TXrixmug.png?width=108&crop=smart&auto=webp&s=258bc2ad7d7716b58d817bab2ecd610891cd43c8', 'width': 108}, {'height': 114, 'url': 'h...
Mistral Stole OpenAI, Distilled DeepSeek, Cheated Benchmarks?
0
2025-08-11T22:13:55
https://x.com/suchenzang/status/1954960365676331398/photo/3
aug_2025
x.com
1970-01-01T00:00:00
0
{}
1mnqy5y
false
null
t3_1mnqy5y
/r/LocalLLaMA/comments/1mnqy5y/mistral_stole_openai_distilled_deepseek_cheated/
false
false
default
0
null
Is there an open source LLM API project?
0
Seems like the OSS AI/LLM community has converged on the OpenAI V1 API for interoperability, which is kinda limited in feature set (e.g. tool use). Is there any effort to make a more official, properly community managed API owned by IESG or some similar organization? Quick search revealed nothing.
2025-08-11T21:53:23
https://www.reddit.com/r/LocalLLaMA/comments/1mnqf87/is_there_an_open_source_llm_api_project/
evil0sheep
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnqf87
false
null
t3_1mnqf87
/r/LocalLLaMA/comments/1mnqf87/is_there_an_open_source_llm_api_project/
false
false
self
0
null
Help me test various models with this question
0
I asked gemini 2.5 pro the following question > There exactly 64 words in the following text. > > \`\`\` > Lorem Ipsum is simply dummy text of the printing and typesetting industry Lorem Ipsum has been the industry standard dummy text ever since the 1500s when an unknown printer took galley of type and scram...
2025-08-11T21:52:48
https://www.reddit.com/r/LocalLLaMA/comments/1mnqepu/help_me_test_various_models_with_this_question/
Gear5th
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnqepu
false
null
t3_1mnqepu
/r/LocalLLaMA/comments/1mnqepu/help_me_test_various_models_with_this_question/
false
false
self
0
null
Released Codanna - a Unix-friendly CLI that gives your local model x-ray eyes into your codebase with blazing fast response times and full context awareness. Spawns an MCP server with one line - hot reload and index refresh in 500ms.
1
[removed]
2025-08-11T21:47:35
https://www.reddit.com/r/LocalLLaMA/comments/1mnqa1a/released_codanna_a_unixfriendly_cli_that_gives/
Plenty_Seesaw8878
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnqa1a
false
null
t3_1mnqa1a
/r/LocalLLaMA/comments/1mnqa1a/released_codanna_a_unixfriendly_cli_that_gives/
false
false
self
1
null
GPT-5 Access | $10/Month
0
✅ GPT-5 Seat – $10/month or $15/month with 100+ prompts ⚡ Setup in 15 minutes DM me📩
2025-08-11T21:46:38
https://www.reddit.com/r/LocalLLaMA/comments/1mnq97s/gpt5_access_10month/
digidazzl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnq97s
false
null
t3_1mnq97s
/r/LocalLLaMA/comments/1mnq97s/gpt5_access_10month/
false
false
self
0
null
Up to $50k hardware, best model for legal docs
0
I'm looking for the absolute best local model for parsing dense, 100-page PDF legal documents and extracting precise information. I wouldn't mind having to call external APIs for external tools. What's your take?
2025-08-11T21:42:49
https://www.reddit.com/r/LocalLLaMA/comments/1mnq5ot/up_to_50k_hardware_best_model_for_legal_docs/
Both-Sense-1172
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnq5ot
false
null
t3_1mnq5ot
/r/LocalLLaMA/comments/1mnq5ot/up_to_50k_hardware_best_model_for_legal_docs/
false
false
self
0
null
LM studio, internet search
4
Hi. Is it possible to enable "internet search" in LM Studio? Yesterday I discovered Oobabooga, and there is a plugin for that... But... I don't like Oobabooga... So, are there plugins like that for LM Studio? ChatGPT says no. Thanks
2025-08-11T21:41:20
https://www.reddit.com/r/LocalLLaMA/comments/1mnq4c7/lm_studio_internet_search/
9acca9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnq4c7
false
null
t3_1mnq4c7
/r/LocalLLaMA/comments/1mnq4c7/lm_studio_internet_search/
false
false
self
4
null
What’s actually going on
0
https://preview.redd.it/… internal logic.
2025-08-11T21:31:51
https://www.reddit.com/r/LocalLLaMA/comments/1mnpvm0/whats_actually_going_on/
amsat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnpvm0
false
null
t3_1mnpvm0
/r/LocalLLaMA/comments/1mnpvm0/whats_actually_going_on/
false
false
https://a.thumbs.redditm…gcMAcAEcWLu4.jpg
0
null
course recommendations to learn LLM, machine learning from scratch
2
I have some experience with basic programming skills. Do you guys have any recommendations to learn from scratch? Udemy?
2025-08-11T21:13:39
https://www.reddit.com/r/LocalLLaMA/comments/1mnpedc/course_recommendations_to_learn_llm_machine/
Financial_Memory5183
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnpedc
false
null
t3_1mnpedc
/r/LocalLLaMA/comments/1mnpedc/course_recommendations_to_learn_llm_machine/
false
false
self
2
null
PSA: Don't waste time trying Gemma 3 27B on V100s - it's architecturally impossible
0
Quick heads up for anyone with V100 infrastructure: Gemma 3 27B won't run, period. Not a memory issue - it's architectural incompatibility (no FA2, compute capability 7.0 vs required 7.5+, no modern quantization support). Spent 3 days debugging this. Even with 8x32GB V100s and tensor parallelism, it fails during model ...
2025-08-11T21:13:29
https://www.reddit.com/r/LocalLLaMA/comments/1mnpe83/psa_dont_waste_time_trying_gemma_3_27b_on_v100s/
Live_alone3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnpe83
false
null
t3_1mnpe83
/r/LocalLLaMA/comments/1mnpe83/psa_dont_waste_time_trying_gemma_3_27b_on_v100s/
false
false
self
0
{'enabled': False, 'images': [{'id': 'I1iCXnzF8SsyAhdTItj_z4IxxkUXY-mf3H3uQCR7BF8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/I1iCXnzF8SsyAhdTItj_z4IxxkUXY-mf3H3uQCR7BF8.png?width=108&crop=smart&auto=webp&s=c7a47a2b4bedb2b2424dda023be3b376c5790b1c', 'width': 108}, {'height': 216, 'url': '...
Best open model for mixing two images with a prompt?
1
Hey, I'm looking for the best open source model for image generation that support multiple images as inputs with a prompt for direction. I love the Flux series, but they don't support multiple image inputs. Any advice on a better model.
2025-08-11T21:06:41
https://www.reddit.com/r/LocalLLaMA/comments/1mnp7o9/best_open_model_for_mixing_two_images_with_a/
RelevantCry1613
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnp7o9
false
null
t3_1mnp7o9
/r/LocalLLaMA/comments/1mnp7o9/best_open_model_for_mixing_two_images_with_a/
false
false
self
1
null
Training an LLM only on books from the 1800's - Another update
392
I'm training LLM's from scratch using only texts from a specific region and time period and want to share another update. Right now it's 1800-1875 London. When I first started, my dataset was only 50 texts and I was using a 4060 for training. The latest version is trained on almost 7,000 texts using Phi 1.5 (700M param...
2025-08-11T21:04:34
https://www.reddit.com/r/LocalLLaMA/comments/1mnp5nc/training_an_llm_only_on_books_from_the_1800s/
Remarkable-Trick-177
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnp5nc
false
null
t3_1mnp5nc
/r/LocalLLaMA/comments/1mnp5nc/training_an_llm_only_on_books_from_the_1800s/
false
false
self
392
{'enabled': False, 'images': [{'id': 'rrloS5BZeIIl-PrvME4ZuYKHOARzC53cpqxmJMcv7zE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rrloS5BZeIIl-PrvME4ZuYKHOARzC53cpqxmJMcv7zE.png?width=108&crop=smart&auto=webp&s=d2427ba4d54a331f4ca2a687e547030eac724fc3', 'width': 108}, {'height': 108, 'url': 'h...
If GPUs had slots like RAM DIMMs, could we add more VRAM?
15
Why doesn’t this happen? Is it just a business model choice, or is there some technical limitation?
2025-08-11T21:02:38
https://www.reddit.com/r/LocalLLaMA/comments/1mnp3ry/if_gpus_had_slots_like_ram_dimms_could_we_add/
Diegam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnp3ry
false
null
t3_1mnp3ry
/r/LocalLLaMA/comments/1mnp3ry/if_gpus_had_slots_like_ram_dimms_could_we_add/
false
false
self
15
null
What models would you recommend for RTX 5090?
0
Purpose: Coding 500-1500 word text documents.
2025-08-11T21:00:24
https://www.reddit.com/r/LocalLLaMA/comments/1mnp1he/what_models_would_you_recommend_for_rtx_5090/
Chance-Studio-8242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnp1he
false
null
t3_1mnp1he
/r/LocalLLaMA/comments/1mnp1he/what_models_would_you_recommend_for_rtx_5090/
false
false
self
0
null
Open Source LLM Based Cyber Security System
17
Lots of resources came out from the recent DARPA AIxCC event at Defcon. Here's some resources from the team I was on! Source from qualifying round: https://github.com/theori-io/aixcc-asc-archive Source from finals: https://github.com/theori-io/aixcc-afc-archive It's configured to use non-local models, but it totally...
2025-08-11T20:41:41
https://www.reddit.com/r/LocalLLaMA/comments/1mnojya/open_source_llm_based_cyber_security_system/
tylerni7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnojya
false
null
t3_1mnojya
/r/LocalLLaMA/comments/1mnojya/open_source_llm_based_cyber_security_system/
false
false
self
17
{'enabled': False, 'images': [{'id': 'r2qico8WNGrK_kgbdb31hsZWaOnmcmAKaFDHl2VFzWk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r2qico8WNGrK_kgbdb31hsZWaOnmcmAKaFDHl2VFzWk.png?width=108&crop=smart&auto=webp&s=5233c0e0d5ca4a5b268bf9514c2bcc1c8b4b29b4', 'width': 108}, {'height': 108, 'url': 'h...
Do we need a llm which is good in rust
5
As you guys know from earlier, I released a README.md on cross-structural alignment: https://github.com/Intro0siddiqui/Cross-Structural-Alignment-for-Efficient-Code-Language-Fine-Tuning So I was thinking: should we put it to the test and use a base Qwen 3 model (Qwen3-4B-2507) to see if we directly give it Rust vs. if we use my reposit...
2025-08-11T20:36:07
https://www.reddit.com/r/LocalLLaMA/comments/1mnoetl/do_we_need_a_llm_which_is_good_in_rust/
Ok_Horror_8567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnoetl
false
null
t3_1mnoetl
/r/LocalLLaMA/comments/1mnoetl/do_we_need_a_llm_which_is_good_in_rust/
false
false
self
5
{'enabled': False, 'images': [{'id': 'herH-Zf7yVK-H9Jlx2Sgx_mdZgQf4doKkScVpQ-6WxQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/herH-Zf7yVK-H9Jlx2Sgx_mdZgQf4doKkScVpQ-6WxQ.png?width=108&crop=smart&auto=webp&s=38d32cfb88940eba0091881b2783112756b9b4c5', 'width': 108}, {'height': 108, 'url': 'h...
Anyone interested in a group buy of B60 48GB GPUs?
2
I'm looking to get 16 of the Arc B60 48GB GPUs at launch, for my home lab, from Sparkle, Maxsun, or anyone else making a 48GB model. Anyone else interested in putting a group buy together? Ideally in the SF Bay Area.
2025-08-11T20:29:09
https://www.reddit.com/r/LocalLLaMA/comments/1mno84z/anyone_interested_in_a_group_buy_of_b60_48gb_gpus/
TokenRingAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mno84z
false
null
t3_1mno84z
/r/LocalLLaMA/comments/1mno84z/anyone_interested_in_a_group_buy_of_b60_48gb_gpus/
false
false
self
2
null
Can someone help me estimate training speeds?
1
I am getting a bit of a headache with figuring this out. I basically want to estimate how long qlora training would take on a q4 model. Let’s say there is a training set of 10 million tokens and we do 2 epochs so 20 million tokens total. Let’s use a 30b or 70b model and just assume 128gb of ram. Likely just using...
2025-08-11T20:28:59
https://www.reddit.com/r/LocalLLaMA/comments/1mno7yw/can_someone_help_me_estimate_training_speeds/
randomoptionsdude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mno7yw
false
null
t3_1mno7yw
/r/LocalLLaMA/comments/1mno7yw/can_someone_help_me_estimate_training_speeds/
false
false
self
1
null
I have a question do we need sandbox
2
I was thinking: should I create an MCP for spinning up a lightweight sandbox environment? When we use AI to add new features, it sometimes breaks our system, so if we had an MCP that helps create a lightweight isolated environment, the AI could build and test in it, and we could easily add it to our codebase without probl...
2025-08-11T20:27:31
https://www.reddit.com/r/LocalLLaMA/comments/1mno6i5/i_have_a_question_do_we_need_sandbox/
Ok_Horror_8567
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mno6i5
false
null
t3_1mno6i5
/r/LocalLLaMA/comments/1mno6i5/i_have_a_question_do_we_need_sandbox/
false
false
self
2
null
I Want Everything Local — Building My Offline AI Workspace
6
[https://instavm.io/blog/building-my-offline-ai-workspace](https://instavm.io/blog/building-my-offline-ai-workspace) \- the full story.
2025-08-11T20:26:42
https://i.redd.it/mzvzsoh09gif1.png
badhiyahai
i.redd.it
1970-01-01T00:00:00
0
{}
1mno5p0
false
null
t3_1mno5p0
/r/LocalLLaMA/comments/1mno5p0/i_want_everything_local_building_my_offline_ai/
false
false
default
6
{'enabled': True, 'images': [{'id': 'mzvzsoh09gif1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/mzvzsoh09gif1.png?width=108&crop=smart&auto=webp&s=6d10883e932a723fc9558b797bb45585f742ea17', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/mzvzsoh09gif1.png?width=216&crop=smart&auto=we...
Generate Apps Locally for Free: App.build Now Supports Open Source Models
0
Hey r/LocalLLaMA! I'm one of the devs behind [app.build](https://github.com/appdotbuild/agent) \- an open-source agent that generates apps from prompts. We just shipped support for local models via Ollama/LMStudio + OpenRouter. Largest models work surprisingly well, smaller ones still struggle much despite being fast ...
2025-08-11T20:26:15
https://neon.com/blog/app-build-supports-open-source-models-locally
arsenyinfo
neon.com
1970-01-01T00:00:00
0
{}
1mno59l
false
null
t3_1mno59l
/r/LocalLLaMA/comments/1mno59l/generate_apps_locally_for_free_appbuild_now/
false
false
default
0
null
FULL LEAKED v0 by Vercel System Prompts and Internal Tools
151
(Latest update: 11/08/2025) I managed to get FULL official v0 system prompt and internal tools. Over 13.5K tokens and 1.3K lines. Check it out at: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
2025-08-11T20:25:04
https://www.reddit.com/r/LocalLLaMA/comments/1mno45o/full_leaked_v0_by_vercel_system_prompts_and/
Independent-Box-898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mno45o
false
null
t3_1mno45o
/r/LocalLLaMA/comments/1mno45o/full_leaked_v0_by_vercel_system_prompts_and/
false
false
self
151
{'enabled': False, 'images': [{'id': 'R9FMKLpaLKiYn1uYNBhtbHCXxpS51jGwCxMDH8PfNOE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/R9FMKLpaLKiYn1uYNBhtbHCXxpS51jGwCxMDH8PfNOE.png?width=108&crop=smart&auto=webp&s=c618b4334b2c41233b474feb15b606b0aebd2b76', 'width': 108}, {'height': 108, 'url': 'h...
Qwen-code model
3
I love Claude Code and am very interested in qwen-code for local models. What is the current best model for qwen-code? I use Ollama, but I'm willing to switch if it makes sense.
2025-08-11T20:14:29
https://www.reddit.com/r/LocalLLaMA/comments/1mnntx1/qwencode_model/
Zealousideal-Ad7111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnntx1
false
null
t3_1mnntx1
/r/LocalLLaMA/comments/1mnntx1/qwencode_model/
false
false
self
3
null
KBLaM seems like LoRa for LLMs
10
I recently came across this project - https://github.com/microsoft/KBLaM and it seems like a great way to build knowledge expertise and store them like LoRa adapters and dynamically load them during inference. I haven't seen much discussion around this work.
2025-08-11T20:14:25
https://www.reddit.com/r/LocalLLaMA/comments/1mnntud/kblam_seems_like_lora_for_llms/
SGmoze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnntud
false
null
t3_1mnntud
/r/LocalLLaMA/comments/1mnntud/kblam_seems_like_lora_for_llms/
false
false
self
10
{'enabled': False, 'images': [{'id': '6gyi-zK68311r9ABiZdLJYMPaYPRjv_nrYkBPs9gSrU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6gyi-zK68311r9ABiZdLJYMPaYPRjv_nrYkBPs9gSrU.png?width=108&crop=smart&auto=webp&s=f9396fdcfea26b437a3eceb32af773c5e23d9d96', 'width': 108}, {'height': 108, 'url': 'h...
Why do the same OpenSource Models on OpenRouter perform very differently depending on the provider that's hosting them?
1
Why is that? Is this actually like this or am I just imagining it?
2025-08-11T19:59:53
https://www.reddit.com/r/LocalLLaMA/comments/1mnnflv/why_do_the_same_opensource_models_on_openrouter/
Conscious_Warrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnnflv
false
null
t3_1mnnflv
/r/LocalLLaMA/comments/1mnnflv/why_do_the_same_opensource_models_on_openrouter/
false
false
self
1
null
AGI Réflex (OSS): unified memory + metacognitive reflection + planner for llama.cpp (demo + quick start)
1
[removed]
2025-08-11T19:53:01
https://www.reddit.com/r/LocalLLaMA/comments/1mnn97q/agi_réflex_oss_unified_memory_metacognitive/
Miserable_Tutor1249
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnn97q
false
null
t3_1mnn97q
/r/LocalLLaMA/comments/1mnn97q/agi_réflex_oss_unified_memory_metacognitive/
false
false
self
1
null
Gemini Robotics: Bringing AI into the Physical World by Google DeepMind
0
TL;DR A new family of Vision–Language–Action (VLA) models—built on Gemini 2.0—enables direct control of robots through natural, language-based guidance and embodied reasoning. Core Highlights 🤖 Purpose-built for robotics: Gemini Robotics transitions from digital-capable models to physically grounded agents with real...
2025-08-11T19:52:14
https://i.redd.it/ys6ynfkd3gif1.jpeg
Ashishpatel26
i.redd.it
1970-01-01T00:00:00
0
{}
1mnn8h3
false
null
t3_1mnn8h3
/r/LocalLLaMA/comments/1mnn8h3/gemini_robotics_bringing_ai_into_the_physical/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ys6ynfkd3gif1', 'resolutions': [{'height': 153, 'url': 'https://preview.redd.it/ys6ynfkd3gif1.jpeg?width=108&crop=smart&auto=webp&s=4f2b29e72878fdb1bace8d0c588dcd02750a1bc1', 'width': 108}, {'height': 306, 'url': 'https://preview.redd.it/ys6ynfkd3gif1.jpeg?width=216&crop=smart&auto=...
The Strategic Implications of GPT-5 for OpenAI — Chris Hayduk
0
TL;DR 📌 GPT-5 isn’t about breaking records—it’s about building trust, reducing friction, and locking in consumer loyalty. ✨ Highlights 🏆 Strong but not game-changing — GPT-5 tops benchmarks like LMArena, SWE-Bench, and math tests, but gains are incremental, not revolutionary. 🔍 Reliability & simplicity — Major up...
2025-08-11T19:36:05
https://www.reddit.com/r/LocalLLaMA/comments/1mnmsye/the_strategic_implications_of_gpt5_for_openai/
Ashishpatel26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mnmsye
false
null
t3_1mnmsye
/r/LocalLLaMA/comments/1mnmsye/the_strategic_implications_of_gpt5_for_openai/
false
false
self
0
null