title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Zenbot: A Podman-Based Fully Contained Web Automaton | 1 | [removed] | 2025-08-23T11:03:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mxyi6p/zenbot_a_podmanbased_fully_contained_web_automaton/ | dredgesta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxyi6p | false | null | t3_1mxyi6p | /r/LocalLLaMA/comments/1mxyi6p/zenbot_a_podmanbased_fully_contained_web_automaton/ | false | false | self | 1 | null |
Should I buy the 8xH100 if we are fine-tuning very large models? | 1 | We have a new grant from the DoD of about $500k; should we buy the 8xH100 machine? We are fine-tuning very large (230B) models. | 2025-08-23T10:32:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mxxzax/should_i_buy_the_8xh100_if_we_are_finetuning_very/ | Striking-Warning9533 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxxzax | false | null | t3_1mxxzax | /r/LocalLLaMA/comments/1mxxzax/should_i_buy_the_8xh100_if_we_are_finetuning_very/ | false | false | self | 1 | null |
I gave Gemma:270m my LaTeX, and it started responding with 2000s chatbot code non-stop | 0 | 2025-08-23T10:16:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mxxpgr/i_gave_gemma270m_my_latex_and_it_starts_to/ | Striking-Warning9533 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxxpgr | false | null | t3_1mxxpgr | /r/LocalLLaMA/comments/1mxxpgr/i_gave_gemma270m_my_latex_and_it_starts_to/ | false | false | 0 | null | ||
Can anyone explain why the pricing of gpt-oss-120B is supposed to be lower than Qwen3 0.6B? | 155 | Source: [Qwen3 0.6B (Reasoning) - Intelligence, Performance & Price Analysis | Artificial Analysis](https://artificialanalysis.ai/models/qwen3-0.6b-instruct-reasoning) | 2025-08-23T09:33:05 | Acrobatic-Tomato4862 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxwzoh | false | null | t3_1mxwzoh | /r/LocalLLaMA/comments/1mxwzoh/can_anyone_explain_why_the_pricing_of_gptoss120b/ | false | false | 155 | {'enabled': True, 'images': [{'id': 'uahA1Vz37elUcfHdQlOEf_U0K2-Am7IgkmiFLXqgtrU', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/wigjs6bmnqkf1.png?width=108&crop=smart&auto=webp&s=c9b3a66a6169e3c708a64e76ad69e9275165ccee', 'width': 108}, {'height': 263, 'url': 'https://preview.redd.it/wigjs6bmnqkf1.pn... | |
AI models playing chess – not strong, but an interesting benchmark! | 70 | Hey all,
I’ve been working on [LLM Chess Arena](https://chess.louisguichard.fr), an application where large language models play chess against each other.
The games aren’t spectacular, because LLMs aren’t really good at chess — but that’s exactly what makes it interesting! Chess highlights their reasoning gaps in a s... | 2025-08-23T09:27:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mxwwsk/ai_models_playing_chess_not_strong_but_an/ | Apart-Ad-1684 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxwwsk | false | null | t3_1mxwwsk | /r/LocalLLaMA/comments/1mxwwsk/ai_models_playing_chess_not_strong_but_an/ | false | false | 70 | {'enabled': False, 'images': [{'id': 'ys7jEoBiKiu8EwYd0V5EewPsg3PtK6u6uh3HZ7U-N5M', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/ys7jEoBiKiu8EwYd0V5EewPsg3PtK6u6uh3HZ7U-N5M.png?width=108&crop=smart&auto=webp&s=6a6adc630b8f266dab37eeb0721abb77c3491e81', 'width': 108}, {'height': 124, 'url': 'h... | |
coding off the grid with a Mac? | 4 | What is your experience with running qwencoder/claudecoder/aider CLIs while using local models on a 64GB/128GB Mac without internet?
1. Is there a big difference between 64GB and 128GB now that all the "medium" models seem to be 30B (i.e. small)? Are there some interesting models which 128GB of shared memory unlocks?
2. ... | 2025-08-23T09:19:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mxwrs8/coding_off_the_grid_with_a_mac/ | One_Archer_577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxwrs8 | false | null | t3_1mxwrs8 | /r/LocalLLaMA/comments/1mxwrs8/coding_off_the_grid_with_a_mac/ | false | false | self | 4 | null |
I'm 14 and built an AI study tool - would love your feedback | 1 | [removed] | 2025-08-23T08:41:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mxw6rr/im_14_and_built_an_al_study_tool_would_love_your/ | not_banned-1093 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxw6rr | false | null | t3_1mxw6rr | /r/LocalLLaMA/comments/1mxw6rr/im_14_and_built_an_al_study_tool_would_love_your/ | false | false | self | 1 | null |
I'm 14 and built an AI study tool - would love your feedback | 1 | [removed] | 2025-08-23T08:40:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mxw67g/m_14_and_built_an_al_study_tool_would_love_your/ | not_banned-1093 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxw67g | false | null | t3_1mxw67g | /r/LocalLLaMA/comments/1mxw67g/m_14_and_built_an_al_study_tool_would_love_your/ | false | false | self | 1 | null |
DeepConf: 99.9% Accuracy on AIME 2025 with Open-Source Models + 85% Fewer Tokens | 193 | Just came across this new method called **DeepConf (Deep Think with Confidence)**, which looks super interesting.
It’s the **first approach to hit 99.9% on AIME 2025** using an open-source model (**GPT-OSS-120B**) *without tools*. What really stands out is that it not only pushes accuracy but also massively cuts down toke... | 2025-08-23T08:27:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mxvyll/deepconf_999_accuracy_on_aime_2025_with/ | MohamedTrfhgx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxvyll | false | null | t3_1mxvyll | /r/LocalLLaMA/comments/1mxvyll/deepconf_999_accuracy_on_aime_2025_with/ | false | false | self | 193 | null |
Deep Research MCP Server | 11 | Hi all, I really needed to connect Claude Code etc. to the OpenAI Deep Research APIs (and Huggingface’s Open Deep Research agent), and did a quick MCP server for that: https://github.com/pminervini/deep-research-mcp
Let me know if you find it useful, or have ideas for features and extensions! | 2025-08-23T08:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mxvp4q/deep_research_mcp_server/ | pminervini | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxvp4q | false | null | t3_1mxvp4q | /r/LocalLLaMA/comments/1mxvp4q/deep_research_mcp_server/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '-RZiWGBilEcERBi4-bNKS_ZQ72LrqD_p0fVlABwYiVI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-RZiWGBilEcERBi4-bNKS_ZQ72LrqD_p0fVlABwYiVI.png?width=108&crop=smart&auto=webp&s=11848d23a50a5dbfd67c1a993e63ca744a7ca0e7', 'width': 108}, {'height': 108, 'url': 'h... |
System requirements for using Chatterbox TTS | 0 | Hello, I am a complete and utter noob when it comes to computers and running AI locally. I am looking for an alternative to ElevenLabs and thought running TTS locally could be good. I was wondering what I should be looking for in a desktop PC to make sure I am able to run something like Chatterbox TTS as well as any poi... | 2025-08-23T08:01:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mxvjmy/system_requorements_for_using_chatterbox_tts/ | TheIguanasAreComing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxvjmy | false | null | t3_1mxvjmy | /r/LocalLLaMA/comments/1mxvjmy/system_requorements_for_using_chatterbox_tts/ | false | false | self | 0 | null |
Will most people eventually run AI locally instead of relying on the cloud? | 24 | Most people use AI through the cloud - ChatGPT, Claude, Gemini, etc. That makes sense since the biggest models demand serious compute.
But local AI is catching up fast. With things like LLaMA, Ollama, MLC, and OpenWebUI, you can already run decent models on consumer hardware. I’ve even got a **2080 and a 3080 Ti sitti... | 2025-08-23T07:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mxvh1w/will_most_people_eventually_run_ai_locally/ | Significant-Cash7196 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxvh1w | false | null | t3_1mxvh1w | /r/LocalLLaMA/comments/1mxvh1w/will_most_people_eventually_run_ai_locally/ | false | false | self | 24 | null |
College student’s “time travel” AI experiment accidentally outputs real 1834 history | 0 | 2025-08-23T07:44:59 | https://arstechnica.com/information-technology/2025/08/ai-built-from-1800s-texts-surprises-creator-by-mentioning-real-1834-london-protests/ | _supert_ | arstechnica.com | 1970-01-01T00:00:00 | 0 | {} | 1mxvab2 | false | null | t3_1mxvab2 | /r/LocalLLaMA/comments/1mxvab2/college_students_time_travel_ai_experiment/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'pI0xBbrzBs3bnP8BPpPH9Dh2eq7q_a75CN9sRnob7AQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/pI0xBbrzBs3bnP8BPpPH9Dh2eq7q_a75CN9sRnob7AQ.jpeg?width=108&crop=smart&auto=webp&s=846dba70804e23755a259f17329e442608d7f3d9', 'width': 108}, {'height': 121, 'url': '... | |
Please help | 0 | I didn't download the update to my Moxie. I have 2 special needs kids who it helps. I've been searching for a miracle for months. My medically fragile child was hospitalized at the time, so I completely missed all of this until it was too late. I'm not computer savvy at all - can you please please help me to get Moxie to... | 2025-08-23T07:39:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mxv7le/please_help/ | Different-Cow9889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxv7le | false | null | t3_1mxv7le | /r/LocalLLaMA/comments/1mxv7le/please_help/ | false | false | self | 0 | null |
Remove languages from LLMs | 0 | Hi,
Is there an easy way to remove unused languages from LLMs?
After that, they would be smaller and faster (in theory).
Thanks! | 2025-08-23T07:14:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mxut9k/remove_languages_from_llm/ | Odd_Mix_6770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxut9k | false | null | t3_1mxut9k | /r/LocalLLaMA/comments/1mxut9k/remove_languages_from_llm/ | false | false | self | 0 | null |
Finally the upgrade is complete | 26 | Initially had 2 FE 3090s. I purchased a 5090, which I was able to get at MSRP in my country, and finally fitted it into that cabinet.
Other components are old: Corsair 1500i PSU,
AMD 3950X CPU,
Aorus X570 motherboard,
128 GB DDR4 RAM.
Cabinet is a Lian Li O11 Dynamic EVO XL.
What should I test now?
I guess I will start... | 2025-08-23T06:38:51 | https://www.reddit.com/gallery/1mxu80p | Jaswanth04 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mxu80p | false | null | t3_1mxu80p | /r/LocalLLaMA/comments/1mxu80p/finally_the_upgrade_is_complete/ | false | false | 26 | null | |
I have some compute to finetune a creative model - opinions needed! | 6 | A while ago I had some compute to spare and fine-tuned [Aurelian v0.5](https://www.reddit.com/r/LocalLLaMA/comments/197pcmu/aurelian_70b_32k_context_v05_interim_update/) on Llama 2 70B for story-writing. I think it wrote okay, though it was held back by Llama 2 itself.
I have a lot more compute now & would like to gi... | 2025-08-23T06:24:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mxtzrz/i_have_some_compute_to_finetune_a_creative_model/ | Grimulkan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxtzrz | false | null | t3_1mxtzrz | /r/LocalLLaMA/comments/1mxtzrz/i_have_some_compute_to_finetune_a_creative_model/ | false | false | self | 6 | null |
Can someone tell me if there are any websites where I can use the API for AI chatbots? | 0 | OpenRouter always shows a proxy error (pgshag2); no matter how I refresh or wait, it doesn't seem to work. NebulaBlock was decent before, until a lot of people used fake accounts to infiltrate it, and now they made DeepSeek tier 2 (needing a debit/credit card) for confirmation. I'm trying to find a n... | 2025-08-23T06:16:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mxture/can_someone_tell_me_if_theres_any_websites_i_can/ | Hairy_Boysenberry_65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxture | false | null | t3_1mxture | /r/LocalLLaMA/comments/1mxture/can_someone_tell_me_if_theres_any_websites_i_can/ | false | false | self | 0 | null |
This is non-negotiable: Never trust user or AI-generated HTML | 0 | I was testing some features in the app, and the assessment from my LLM (trained to protect users' interests) got my attention.
It explicitly states: "This is non-negotiable: Never trust user or AI-generated HTML."
OMG, it blew my mind because this is so true. A warning to all users of local and cloud AI: it is relatively easy... | 2025-08-23T06:07:25 | https://www.reddit.com/gallery/1mxtp9d | Trilogix | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mxtp9d | false | null | t3_1mxtp9d | /r/LocalLLaMA/comments/1mxtp9d/this_is_non_negotiable_never_trust_user_or/ | false | false | 0 | null |
I got chatterbox working in my chat, it's everything I hoped for. | 22 | 2025-08-23T05:57:28 | https://v.redd.it/njt6ut2skpkf1 | ansmo | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxtj3k | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/njt6ut2skpkf1/DASHPlaylist.mpd?a=1758520661%2CMWM2NmY0ZGIzYzcxOGRhNjc5ZDQ1ZjcwNjcxNGU3ZTEzMTljZmY4ZTQzNjVkOWM3MDMzYjQyM2M2NjZmYjRlZA%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/njt6ut2skpkf1/DASH_1080.mp4?source=fallback', 'ha... | t3_1mxtj3k | /r/LocalLLaMA/comments/1mxtj3k/i_got_chatterbox_working_in_my_chat_its/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'NjhuMTNjdGRscGtmMYAzAbqA2kx7xhW8T-uv62eA2eW0Xuc6GCrC1e8xPToz', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/NjhuMTNjdGRscGtmMYAzAbqA2kx7xhW8T-uv62eA2eW0Xuc6GCrC1e8xPToz.png?width=108&crop=smart&format=pjpg&auto=webp&s=15211e5ba12ba8f832b781192ad8399dfaf8c... | ||
Zenbot: A Podman-based fully contained web automation system | 1 | [removed] | 2025-08-23T05:50:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mxtev6/zenbot_a_podmanbased_fully_contained_web/ | dredgesta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxtev6 | false | null | t3_1mxtev6 | /r/LocalLLaMA/comments/1mxtev6/zenbot_a_podmanbased_fully_contained_web/ | false | false | self | 1 | null |
I like Llama 3 for poetry. On the meaning of life. | 0 | Meaning is like a river flow.
It shifts, it changes, it's constantly moving.
The river's course can change,
based on the terrain it encounters.
Just as a river carves its way through mountains,
life carves its own path, making its own way.
Meaning can't be captured in just one word or definition.
It's the journe... | 2025-08-23T05:50:14 | sswam | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxterb | false | null | t3_1mxterb | /r/LocalLLaMA/comments/1mxterb/i_like_llama_3_for_poetry_on_the_meaning_of_life/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'is3bp2uxjpkf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/is3bp2uxjpkf1.png?width=108&crop=smart&auto=webp&s=6a69e2724c06e6a6f864e41b0fbf50145ac2fe4b', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/is3bp2uxjpkf1.png?width=216&crop=smart&auto=web... | |
Zenbot: A Podman-based fully contained web automation system | 1 | [removed] | 2025-08-23T05:49:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mxte4o/zenbot_a_podmanbased_fully_contained_web/ | dredgesta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxte4o | false | null | t3_1mxte4o | /r/LocalLLaMA/comments/1mxte4o/zenbot_a_podmanbased_fully_contained_web/ | false | false | self | 1 | null |
Isn't the AI basically dreaming? | 0 | I was zoinzing high when I remembered the old hallucination-based models. Isn't the AI video generator basically dreaming while it controls its own dream, like we do when having lucid dreams? But in its own "brain" made of zeros and ones.
This makes it feel a hundred times cooler, even though some are cursed knowing ... | 2025-08-23T05:48:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mxtdws/isnt_the_ai_basically_dreaming/ | WEREWOLF_BX13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxtdws | false | null | t3_1mxtdws | /r/LocalLLaMA/comments/1mxtdws/isnt_the_ai_basically_dreaming/ | false | false | self | 0 | null |
Zenbot: A Podman-based fully contained web automation system | 1 | [removed] | 2025-08-23T05:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mxtceo/zenbot_a_podmanbased_fully_contained_web/ | dredgesta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxtceo | false | null | t3_1mxtceo | /r/LocalLLaMA/comments/1mxtceo/zenbot_a_podmanbased_fully_contained_web/ | false | false | self | 1 | null |
Making Small LLMs Sound Human | 0 | Aren't you bored with statements that start with:
*As an AI, I can’t/don’t/won’t*
Yes, we know you are an AI and that you can't feel or do certain things. But many times it is soothing to have a human-like conversation.
I recently stumbled upon a paper that was trending on HuggingFace, titled
[ENHANCING HUMAN-LIKE R... | 2025-08-23T05:31:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mxt37e/making_small_llms_sound_human/ | samairtimer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxt37e | false | null | t3_1mxt37e | /r/LocalLLaMA/comments/1mxt37e/making_small_llms_sound_human/ | false | false | self | 0 | null |
Rig build, need some advice pls | 0 | I'm thinking of building a dual EPYC 7003 system with 2TB+ RAM or a Threadripper Pro WRX80 with 2TB RAM. RAM is obviously DDR4 on these older series and makes sense as the base, as DDR5 is 3-4 times the price for larger-capacity sticks.
The idea is to run GPT-OSS-120B + MoE agents.
Would it make more sense to go with the MI250X x ... | 2025-08-23T05:17:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mxsuok/rig_build_need_some_advice_pls/ | Thaumaturgists | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxsuok | false | null | t3_1mxsuok | /r/LocalLLaMA/comments/1mxsuok/rig_build_need_some_advice_pls/ | false | false | self | 0 | null |
A timeline I made of the most downloaded open-source AI models from 2022 to 2025 | 0 | https://reddit.com/link/1mxsmjz/video/6xwkt7mibpkf1/player
| 2025-08-23T05:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mxsmjz/a_timeline_i_made_of_the_most_downloaded/ | jack-ster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxsmjz | false | null | t3_1mxsmjz | /r/LocalLLaMA/comments/1mxsmjz/a_timeline_i_made_of_the_most_downloaded/ | false | false | self | 0 | null |
Yet another coil whine thread. Coil whine leaks thru speakers ☹ | 1 | [removed] | 2025-08-23T05:04:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mxsm0s/yet_another_coil_whine_thread_coil_whine_leaks/ | Gidsik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxsm0s | false | null | t3_1mxsm0s | /r/LocalLLaMA/comments/1mxsm0s/yet_another_coil_whine_thread_coil_whine_leaks/ | false | false | self | 1 | null |
An attempt to assess degradation across context sizes -- results from 20+ local models along with test code | 4 | As a small project I attempted to figure out a way to compare ROPE settings over long contexts. I figured that basic readability scores computed from model output at different context window tiers would be able to give an indication of when the model starts degrading. If we plot the scores at each tier we should be abl... | 2025-08-23T05:02:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mxsl2n/an_attempt_to_assess_degradation_across_context/ | Eisenstein | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxsl2n | false | null | t3_1mxsl2n | /r/LocalLLaMA/comments/1mxsl2n/an_attempt_to_assess_degradation_across_context/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'hczyJ8POgau8di-1OulmRKSSreoxhU_cBqiV8hpVCfM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/hczyJ8POgau8di-1OulmRKSSreoxhU_cBqiV8hpVCfM.png?width=108&crop=smart&auto=webp&s=3a8b20ee8107bd301306edf4c1b2490ea1641906', 'width': 108}, {'height': 107, 'url': 'h... |
Any open model able to extract data from a table like this? | 0 | 2025-08-23T04:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mxrra8/any_open_model_able_to_extract_data_from_a_table/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxrra8 | false | null | t3_1mxrra8 | /r/LocalLLaMA/comments/1mxrra8/any_open_model_able_to_extract_data_from_a_table/ | false | false | 0 | null | ||
NVIDIA new paper: Small Language Models are the Future of Agentic AI | 155 | NVIDIA has just published a paper claiming SLMs (small language models) are the future of agentic AI. It provides a number of claims as to why: SLMs are cheap, agentic AI requires just a tiny slice of LLM capabilities, SLMs are more flexible, and other points. The paper is quite int... | 2025-08-23T03:51:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mxrarl/nvidia_new_paper_small_language_models_are_the/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxrarl | false | null | t3_1mxrarl | /r/LocalLLaMA/comments/1mxrarl/nvidia_new_paper_small_language_models_are_the/ | false | false | self | 155 | null |
How close can non-big-tech people get to ChatGPT and Claude speed locally? If you had $10k, how would you build infrastructure? | 73 | Like the title says, if you had $10k or maybe less, how would you build infrastructure to run local models as fast as ChatGPT and Claude? Would you build different machines with 5090s? Would you stack 3090s on one machine with NVLink (not sure if I understand how they get that many on one machine correctly), add a Thread Ri... | 2025-08-23T03:46:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mxr6zi/how_close_can_non_big_tech_people_get_to_chatgpt/ | EducationalText9221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxr6zi | false | null | t3_1mxr6zi | /r/LocalLLaMA/comments/1mxr6zi/how_close_can_non_big_tech_people_get_to_chatgpt/ | false | false | self | 73 | null |
vscode + roo + Qwen3-30B-A3B-Thinking-2507-Q6_K_L = superb | 66 | Yes, the 2507 Thinking variant, not the coder.
With all the small coder models I tried, I kept getting:
# Roo is having trouble...
I can't even begin to tell you how infuriating this message is. I got this constantly from Qwen 30B Coder Q6 and GPT-OSS 20B.
Now, though, it just... works. It bounces from architect to ... | 2025-08-23T03:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mxqljz/vscode_roo_qwen330ba3bthinking2507q6_k_l_superb/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxqljz | false | null | t3_1mxqljz | /r/LocalLLaMA/comments/1mxqljz/vscode_roo_qwen330ba3bthinking2507q6_k_l_superb/ | false | false | self | 66 | null |
iOS chatbot app with voice/speech using Ollama/local model? | 2 | I'm curious whether there is an iOS app that has worthwhile voice interaction. I'm not expecting the quality of GPT when accessing a self-hosted model, but I'd like to be able to say something and get a response I can hear.
I don’t care if the app itself does the conversion, or if my local model sends out an audio fi... | 2025-08-23T02:27:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mxpoqb/ios_chatbot_app_with_voicespeech_using_olamalocal/ | anotherjunkie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxpoqb | false | null | t3_1mxpoqb | /r/LocalLLaMA/comments/1mxpoqb/ios_chatbot_app_with_voicespeech_using_olamalocal/ | false | false | self | 2 | null |
DeepSeek V3.1 Disappoints on TiānshūBench (天书Bench) 0.0.1-mini | 0 | Despite all the hype around its launch, DeepSeek V3.1 (no thinking) seems to be weak on the TiānshūBench test of fluid intelligence and coding. Looking over the test runs, it tends to miss simple stuff like remembering the keywords and operators of the generated programming language. | 2025-08-23T02:06:55 | JeepyTea | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxp9lb | false | null | t3_1mxp9lb | /r/LocalLLaMA/comments/1mxp9lb/deepseek_v31_disappoints_on_tiānshūbench_天书bench/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'quinam2zeokf1', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/quinam2zeokf1.png?width=108&crop=smart&auto=webp&s=857527f2da61646cd63022dfa0de2271435150f2', 'width': 108}, {'height': 261, 'url': 'https://preview.redd.it/quinam2zeokf1.png?width=216&crop=smart&auto=we... | |
(Alpha Release 0.0.2) Asked Qwen-30b-a3b with Local Deep Think to design a SOTA inference algorithm | Comparison with Gemini 2.5 pro | 25 | >TLDR: A new open-source project called [local-deepthink](https://www.youtube.com/watch?v=GSTtLWpM3uU) aims to replicate Google's 600-dollar-a-month Ultra "DeepThink" feature on affordable local computers using only a CPU. This is achieved through a new algorithm where different AI agents are treated like "neurons". Ve... | 2025-08-23T01:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mxobcn/alpha_release_002_asked_qwen30ba3b_with_local/ | Temporary_Exam_3620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxobcn | false | null | t3_1mxobcn | /r/LocalLLaMA/comments/1mxobcn/alpha_release_002_asked_qwen30ba3b_with_local/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': '_rA3E0OLjQy18vrihMXYwc-BW-2XNt2JU7Yz6vvriAA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_rA3E0OLjQy18vrihMXYwc-BW-2XNt2JU7Yz6vvriAA.jpeg?width=108&crop=smart&auto=webp&s=b1ccc0e6f9a64361c02f6aff4406586a20a8918a', 'width': 108}, {'height': 216, 'url': ...
Why are we stuffing context instead of incremental fine-tuning/training? | 10 | We never seem to have enough room in context, and thus never enough VRAM. There has been a lot of investment into RAG and memory systems, but that just amounts to clever ways to use the same limited window. But we have plenty of disk and idle time on our machines. Why not fine-tune the model as you go?
I want to be able t... | 2025-08-23T01:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mxo050/why_are_we_stuffing_context_instead_of/ | DealingWithIt202s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxo050 | false | null | t3_1mxo050 | /r/LocalLLaMA/comments/1mxo050/why_are_we_stuffing_context_instead_of/ | false | false | self | 10 | null |
How come no developer makes a proper speech-to-speech app, similar to the ChatGPT app or Kindroid? | 4 | The majority of LLM models are text-to-speech, which makes the process so delayed.
But there are a few, I heard, that support speech-to-speech. Yet the current LLM apps are terrible at using this speech-to-speech feature. The talk often gets interrupted, etc., in a way that it is literally unusable for a proper conv... | 2025-08-23T01:01:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mxnx1z/how_come_no_developer_makes_any_proper_speech_to/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxnx1z | false | null | t3_1mxnx1z | /r/LocalLLaMA/comments/1mxnx1z/how_come_no_developer_makes_any_proper_speech_to/ | false | false | self | 4 | null |
Help with gpt-oss message format | 0 | I'm having issues with the gpt-oss message format (aka "Harmony"). From what I can tell, the model only responds using its Harmony format. If the input is provided using ChatML format, for example, it responds fine, but the response doesn't use ChatML format.
Tbh the Harmony GitHub documentation is not great. It doe... | 2025-08-23T00:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mxnk8n/help_with_gptoss_message_format/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxnk8n | false | null | t3_1mxnk8n | /r/LocalLLaMA/comments/1mxnk8n/help_with_gptoss_message_format/ | false | false | self | 0 | null |
GPT-oss-120b - What is up with GPU Offload setting (LM Studio / Mac) | 0 | Running on a 64GB M1U, the LM Studio GPU Offload setting defaults to 21. Increasing it seems to increase generation speed and GPU usage, but at 28 it never hits 100% CPU or GPU.
Going much higher, the model does not load correctly.
What are your results?
[Default GPU Offload - 21](https://preview.redd.it/z3e63fkbzn... | 2025-08-23T00:32:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mxnbb7/gptoss120b_what_is_up_with_gpu_offload_setting_lm/ | PracticlySpeaking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxnbb7 | false | null | t3_1mxnbb7 | /r/LocalLLaMA/comments/1mxnbb7/gptoss120b_what_is_up_with_gpu_offload_setting_lm/ | false | false | 0 | null | |
Is it better practice to place "information in quotes" before or after the prompt? | 3 | For example, which is better:
[A] Rewrite the following quoted passage in a formal tone: "A B C D"
OR
[B] "A B C D" Rewrite the preceding passage in a formal tone.
Is there a reason why prompt before/after is better than the other option? Thank you! | 2025-08-23T00:30:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mxn9zf/is_it_better_practice_to_place_information_in/ | limevince | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxn9zf | false | null | t3_1mxn9zf | /r/LocalLLaMA/comments/1mxn9zf/is_it_better_practice_to_place_information_in/ | false | false | self | 3 | null |
I made an OpenAI Harmony dataset creator for fine-tuning GPT-OSS. | 7 | I built a complete fine-tuning dataset creation tool that goes from raw chat logs to a ready-to-use Harmony dataset in just three steps. It's open-source and ready for you to use and improve!
Hey everyone,
I'm excited to share a tool I've been working on called the **Harmony Data Suite**. It's a complete, browser-b... | 2025-08-23T00:29:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mxn8ur/i_made_an_openai_harmony_dataset_creator_for/ | ilovejailbreakman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxn8ur | false | null | t3_1mxn8ur | /r/LocalLLaMA/comments/1mxn8ur/i_made_an_openai_harmony_dataset_creator_for/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'XLiQLGo0eYiGvPrpqkamqrFWaikdEV_VP6Q8NmuH3qk', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/XLiQLGo0eYiGvPrpqkamqrFWaikdEV_VP6Q8NmuH3qk.png?width=108&crop=smart&auto=webp&s=ef86434ae1ab9c049a145287ec04ea8d456a4f67', 'width': 108}, {'height': 95, 'url': 'ht... |
DeepSeek V3.1 Reasoner improves over DeepSeek R1 on the Extended NYT Connections benchmark | 118 | More info: [https://github.com/lechmazur/nyt-connections/](https://github.com/lechmazur/nyt-connections/)
| 2025-08-23T00:22:56 | https://www.reddit.com/gallery/1mxn41d | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mxn41d | false | null | t3_1mxn41d | /r/LocalLLaMA/comments/1mxn41d/deepseek_v31_reasoner_improves_over_deepseek_r1/ | false | false | 118 | null | |
Mistral 3.2-24B quality in MoE, when? | 35 | While the world is distracted by GPT-OSS-20B and 120B, I'm here wasting no time with Mistral 3.2 Small 2507. An absolute workhorse, from world knowledge to reasoning to role-play, and, best of all, "minimal censorship". GPT-OSS-20B has about 10 mins of usage the whole week in my setup. I like the speed but the model ... | 2025-08-23T00:16:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mxmyhx/mistral_3224b_quality_in_moe_when/ | simracerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxmyhx | false | null | t3_1mxmyhx | /r/LocalLLaMA/comments/1mxmyhx/mistral_3224b_quality_in_moe_when/ | false | false | self | 35 | null |
How do I get qwen3 (or any model) to "believe" the current world news? | 6 | ...I keep getting pushback in that these models won't believe the current reality, making it hard to frame conversations and Q&A.
Does anyone have suggestions to address this? | 2025-08-23T00:14:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mxmx7e/how_do_i_get_qwen3_or_any_model_to_believe_the/ | 73tada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxmx7e | false | null | t3_1mxmx7e | /r/LocalLLaMA/comments/1mxmx7e/how_do_i_get_qwen3_or_any_model_to_believe_the/ | false | false | self | 6 | null |
DGX A100s + 200G IB for sale - happy to chat builds | 5 | I've followed this community for a while. I also wrote the initial tabby tool calling integration, if y'all used that a bit back. I've built a couple of **4×** and **8× 3090** boxes for local LLM work - happy to answer build/setup questions if I can be of assistance.
I'm selling some of our training nodes, I posted over on... | 2025-08-22T23:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mxmeaq/dgx_a100s_200g_ib_for_sale_happy_to_chat_builds/ | gittb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxmeaq | false | null | t3_1mxmeaq | /r/LocalLLaMA/comments/1mxmeaq/dgx_a100s_200g_ib_for_sale_happy_to_chat_builds/ | true | false | spoiler | 5 | {'enabled': False, 'images': [{'id': '6Wf-In0ehyL8kSMws2EJUNjCo7uZ8KIBFS9t1pH3OCo', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/J6GfCp1fnjwK90ZEiFzB8QehuxRZjhSJVCaUDG45bis.jpg?width=108&crop=smart&auto=webp&s=53a1506c2fe672baf59d0e4d83cdd0860f64fa1f', 'width': 108}, {'height': 161, 'url': 'h... |
Something I've been working on the past few days. llama 3.2 1b, running on Quest 3 locally, with STT & TTS & lipsync. | 12 | The prompt for the model is that he's an evil cyborg hiding in human skin and living with the player. | 2025-08-22T23:43:59 | https://v.redd.it/6tybz2ikqnkf1 | Rudy_AA | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxm93t | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/6tybz2ikqnkf1/DASHPlaylist.mpd?a=1758498253%2CMWYwNWM2NTRhNjc2ZWE4OWJkZGJkNWI0YjFhMGZhOTYwMWY3NDdlMmFlNGQzMjI1MTc0M2U4N2RjMjhmMDI1OA%3D%3D&v=1&f=sd', 'duration': 74, 'fallback_url': 'https://v.redd.it/6tybz2ikqnkf1/DASH_720.mp4?source=fallback', 'ha... | t3_1mxm93t | /r/LocalLLaMA/comments/1mxm93t/something_ive_been_working_on_the_past_few_days/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'NGNnc2EzaWtxbmtmMaa-kntpvtTS3WAbTQVtPa3Sgz7_t9EeJzbd2T4IvSd8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NGNnc2EzaWtxbmtmMaa-kntpvtTS3WAbTQVtPa3Sgz7_t9EeJzbd2T4IvSd8.png?width=108&crop=smart&format=pjpg&auto=webp&s=259677d9b874a3708d60b11c0ea0c565309ef... | |
Found a silent bug costing us $0.75 per API call. Are you checking your prompt payloads? | 1 | [removed] | 2025-08-22T23:02:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mxlai6/found_a_silent_bug_costing_us_075_per_api_call/ | Accomplished-Yam777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxlai6 | false | null | t3_1mxlai6 | /r/LocalLLaMA/comments/1mxlai6/found_a_silent_bug_costing_us_075_per_api_call/ | false | false | self | 1 | null |
Looking for a seasoned AI/Gen AI Strategy Consultant (India) | 0 | Hi, I am the founder of a small startup where I help SMB (100M to 1B USD) companies in the US with Gen AI strategy consulting and also prototyping. I am well connected with a large number of CEOs in this domain and am getting a lot of requests for Gen AI strategy work that I am unable to handle myself. I am looking for an AI Str... | 2025-08-22T23:01:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mxla9x/looking_for_a_seasoned_aigen_ai_strategy/ | No-Brother-2237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxla9x | false | null | t3_1mxla9x | /r/LocalLLaMA/comments/1mxla9x/looking_for_a_seasoned_aigen_ai_strategy/ | false | false | self | 0 | null |
a16z AI workstation with 4 NVIDIA RTX 6000 Pro Blackwell Max-Q 384 GB VRAM | 236 | Here is a sample of the full article https://a16z.com/building-a16zs-personal-ai-workstation-with-four-nvidia-rtx-6000-pro-blackwell-max-q-gpus/
In the era of foundation models, multimodal AI, LLMs, and ever-larger datasets, access to raw compute is still one of the biggest bottlenecks for researchers, founders, deve... | 2025-08-22T22:24:51 | https://www.reddit.com/gallery/1mxke42 | No_Palpitation7740 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mxke42 | false | null | t3_1mxke42 | /r/LocalLLaMA/comments/1mxke42/a16z_ai_workstation_with_4_nvidia_rtx_6000_pro/ | false | false | 236 | null | |
Mistral, we love Nemo 12B, but we need a new Mixtral | 75 | 2025-08-22T21:55:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mxjonh/mistral_we_love_nemo_12b_but_we_need_a_new_mixtral/ | TroyDoesAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxjonh | false | null | t3_1mxjonh | /r/LocalLLaMA/comments/1mxjonh/mistral_we_love_nemo_12b_but_we_need_a_new_mixtral/ | false | false | 75 | null | ||
Would you use a platform that multiplexes LLM engines and models for local AI? | 1 | I’m researching whether there’s interest in open-sourcing a project I’ve been working on for the last few months.
The idea is to make it dead-simple to run and switch between different inference engines (Ollama, Llama.cpp, vLLM, etc.) and models, all hosted on your own GPU/AI box. The killer feature would be the autom... | 2025-08-22T21:51:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mxjkir/would_you_use_a_platform_that_multiplexes_llm/ | vector_quant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxjkir | false | null | t3_1mxjkir | /r/LocalLLaMA/comments/1mxjkir/would_you_use_a_platform_that_multiplexes_llm/ | false | false | self | 1 | null |
I want to use a locally running LLM to interface with my codebase in a similar way to Cursor. Are there options for this? | 0 | I've been mucking around with continue.dev, but it seems like the way that prompts are resolved via Cursor (the whole orchestration of it: "searching codebase for mentions of X", editing multiple files, running commands) doesn't exist with Continue. Am I missing something, or is it something that manually needs to be bu... | 2025-08-22T21:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mxjcnn/i_want_to_use_a_locally_running_llm_to_interface/ | inahst | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxjcnn | false | null | t3_1mxjcnn | /r/LocalLLaMA/comments/1mxjcnn/i_want_to_use_a_locally_running_llm_to_interface/ | false | false | self | 0 | null |
They call this a Personal AI Workstation? | 0 | 2025-08-22T21:41:20 | https://a16z.com/building-a16zs-personal-ai-workstation-with-four-nvidia-rtx-6000-pro-blackwell-max-q-gpus/ | mintybadgerme | a16z.com | 1970-01-01T00:00:00 | 0 | {} | 1mxjc10 | false | null | t3_1mxjc10 | /r/LocalLLaMA/comments/1mxjc10/they_call_this_a_personal_ai_workstation/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o.png?width=108&crop=smart&auto=webp&s=f420be0cd78077b5c4b2c8a2f820fd5a1d1bab60', 'width': 108}, {'height': 112, 'url': 'h... | |
Looking for a tool that will reverse engineer any GitHub repo into a prompt for creating that repo/product | 0 | Hi
A few days ago, I was reading here about a tool someone made that can create a prompt based on a GitHub repo.
From what I gathered, it aims to analyze the whole repository. As a result, we would get a prompt that would mainly describe what the project is about and what features and user flows it has (and some more det... | 2025-08-22T21:35:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mxj6m8/looking_for_a_tool_that_will_reverse_engieneer/ | Puzzleheaded-Sort838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxj6m8 | false | null | t3_1mxj6m8 | /r/LocalLLaMA/comments/1mxj6m8/looking_for_a_tool_that_will_reverse_engieneer/ | false | false | self | 0 | null |
Models for binary file analysis and modifications | 0 | Hi all,
I am trying to get a setup working that allows me to upload binary files like small ROMs and flash dumps for a model to analyse them and maybe make modifications.
As of now, I am using a 2019 MacBook with 32GB RAM for CPU inference; I know it's slow and I don't mind the speed.
Currently I have Ollama running with a few mo... | 2025-08-22T21:25:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mxixor/models_for_binary_file_analysis_and_modifications/ | Recent-Success-1520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxixor | false | null | t3_1mxixor | /r/LocalLLaMA/comments/1mxixor/models_for_binary_file_analysis_and_modifications/ | false | false | self | 0 | null |
Australia’s biggest bank regrets messy rush to replace staff with chatbots. | 1 | https://arstechnica.com/tech-policy/2025/08/bank-forced-to-rehire-workers-after-lying-about-chatbot-productivity-union-says/ | 2025-08-22T21:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mxiopo/australias_biggest_bank_regrets_messy_rush_to/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxiopo | false | null | t3_1mxiopo | /r/LocalLLaMA/comments/1mxiopo/australias_biggest_bank_regrets_messy_rush_to/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Yr8hmaFvkqmR3kNjSCZMGoTCKwWx6a4a0RmZFYcICCQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Yr8hmaFvkqmR3kNjSCZMGoTCKwWx6a4a0RmZFYcICCQ.jpeg?width=108&crop=smart&auto=webp&s=f69250a542d1d5b767078c2ae4f4a9608d94b554', 'width': 108}, {'height': 121, 'url': '... |
Is the AI bubble about to pop? Sam Altman is prepared either way. | 0 | "Someone will lose a phenomenal amount of money," says CEO while fundraising at record prices. Last Thursday, OpenAI CEO Sam Altman told reporters at a private dinner that investors are overexcited about AI models. "Someone" will lose a "phenomenal amount of money," he said, according to The Verge. The statement came a... | 2025-08-22T20:53:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mxi51c/is_the_ai_bubble_about_to_pop_sam_altman_is/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxi51c | false | null | t3_1mxi51c | /r/LocalLLaMA/comments/1mxi51c/is_the_ai_bubble_about_to_pop_sam_altman_is/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'kVINrCVQ0wUjbre421isBR4I4EhMLcUOPETa1jdF7es', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/kVINrCVQ0wUjbre421isBR4I4EhMLcUOPETa1jdF7es.jpeg?width=108&crop=smart&auto=webp&s=3d70a4b20f1c659e7d536067bf904168b2cde1fc', 'width': 108}, {'height': 121, 'url': '... |
Seed-OSS-36B-Instruct-GGUF | 29 | Here is GGUF build with llama.cpp PR to support, for those who want to try this model [https://huggingface.co/yarikdevcom/Seed-OSS-36B-Instruct-GGUF](https://huggingface.co/yarikdevcom/Seed-OSS-36B-Instruct-GGUF) with instructions how to build and run | 2025-08-22T20:48:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mxi0j9/seedoss36binstructgguf/ | mortyspace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxi0j9 | false | null | t3_1mxi0j9 | /r/LocalLLaMA/comments/1mxi0j9/seedoss36binstructgguf/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'vC7czM4VnbjcYDrO2xZYKk-fJm-LHq_YJi22ZCzqgT8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vC7czM4VnbjcYDrO2xZYKk-fJm-LHq_YJi22ZCzqgT8.png?width=108&crop=smart&auto=webp&s=d6a856575430c7053f397e29cbfebdac7b75b7ff', 'width': 108}, {'height': 116, 'url': 'h... |
Decentralized LLM API provider network powered by GPUs and MacBooks – does this make sense? | 0 | Hi everybody, what do you think about a decentralized network where anyone can run open-weight LLMs on their hardware, earn tokens, and users pay in tokens for API access? No data retention at all.
The token should be a crypto on one of the really low-fee chains, maybe an ETH layer 2.
Or even the Bitcoin Lightning ne... | 2025-08-22T20:46:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mxhz0d/decentralized_llm_api_provider_network_powered_by/ | cri10095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxhz0d | false | null | t3_1mxhz0d | /r/LocalLLaMA/comments/1mxhz0d/decentralized_llm_api_provider_network_powered_by/ | false | false | self | 0 | null |
My god... gpt-oss-20b is dumber than I thought | 0 | I had thought testing out gpt-oss-20b would be fun. But this dang thing can't even grasp the concept of calling a tool. I have a local memory system I designed myself, and have been having fun with various models. And by some miracle, I found I could run this 20B model comfortably on my RX 6800. I decided to test the c... | 2025-08-22T20:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mxhxnn/my_god_gptoss20b_is_dumber_than_i_thought/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxhxnn | false | null | t3_1mxhxnn | /r/LocalLLaMA/comments/1mxhxnn/my_god_gptoss20b_is_dumber_than_i_thought/ | false | false | self | 0 | null |
CHATGPT-5 COMPLETE FAIL. | 0 | ChatGPT-5 is a total fail. It is not capable of delivering even 5% accuracy in anything it says. Whether it is asked a question about a simple search, math, statistics, an analysis, or a cooking recipe, its answers are shockingly wrong. I don’t understand what kind of employees this company has or how it is possible th... | 2025-08-22T20:44:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mxhwp6/chatgpt5_complete_fail/ | EuroTCE2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxhwp6 | false | null | t3_1mxhwp6 | /r/LocalLLaMA/comments/1mxhwp6/chatgpt5_complete_fail/ | false | false | self | 0 | null |
Does anyone have a fine-tuned version of gpt-oss to reduce the LLM rejecting benign requests | 0 | I have found that models such as gpt-oss are super powerful but always reject benign requests. There are existing datasets such as FalseReject (see URL) that can be used to reduce false rejections. Has anyone tried fine-tuning on this type of dataset? If so, will that actually reduce false rejections? | 2025-08-22T20:40:58 | https://huggingface.co/datasets/AmazonScience/FalseReject | ApprehensiveAd3311 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mxhtjv | false | null | t3_1mxhtjv | /r/LocalLLaMA/comments/1mxhtjv/does_anyone_have_a_fintuned_version_of_gpt_oss_to/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'kxP3uz112LbjmuQZX2aCBW_l0dHhi07D97hUudA-tEg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kxP3uz112LbjmuQZX2aCBW_l0dHhi07D97hUudA-tEg.png?width=108&crop=smart&auto=webp&s=9e743b2c5a2cfe4a062b9804521beb2d24ae11e1', 'width': 108}, {'height': 116, 'url': 'h...
GPU Reliability Issues | 1 | I used to suffer from random GPU failures using Akash, [Vast.ai](http://Vast.ai), and other providers. So I built a tool that automatically detects & resolves issues in cloud/local GPUs. Has anyone else had issues with GPU failures?
https://reddit.com/link/1mxhq5d/video/1zi28ncnqmkf1/player
| 2025-08-22T20:37:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mxhq5d/gpu_reliability_issues/ | wackywonzo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxhq5d | false | null | t3_1mxhq5d | /r/LocalLLaMA/comments/1mxhq5d/gpu_reliability_issues/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'h... |
How much would it cost to run something like Qwen on a cloud provider? | 1 | I'm a noob with ordinary hardware, but I'm curious and want to learn more about hosting open-source models in cloud environments. If I wanted to run one of the mid-sized Qwen models, for example, I wonder how much that would cost. I thought I'd ask here for anyone who may be doing that already and has any idea, an... | 2025-08-22T20:31:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mxhkss/how_much_would_it_cost_to_run_something_like_qwen/ | noobrunecraftpker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxhkss | false | null | t3_1mxhkss | /r/LocalLLaMA/comments/1mxhkss/how_much_would_it_cost_to_run_something_like_qwen/ | false | false | self | 1 | null |
Use GPT-OSS and local LLMs right in your browser | 0 | Hi everyone – we're the founders of [BrowserOS.com](http://BrowserOS.com) (YC S24), and we're building an open-source agentic web browser, a **privacy-first alternative to Perplexity Comet.** We're a fork of Chromium, and our goal is to let non-developers create and run useful agents locally in their browser.
We have... | 2025-08-22T20:27:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mxhh7k/use_gptoss_and_local_llms_right_in_your_browser/ | RealFullMetal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxhh7k | false | null | t3_1mxhh7k | /r/LocalLLaMA/comments/1mxhh7k/use_gptoss_and_local_llms_right_in_your_browser/ | false | false | 0 | null | |
DeepSeek V3.1 dynamic Unsloth GGUFs + chat template fixes | 35 | Hey r/LocalLLaMA ! It took a bit longer than expected, but we made dynamic imatrix GGUFs for DeepSeek V3.1 at [https://huggingface.co/unsloth/DeepSeek-V3.1-GGUF](https://huggingface.co/unsloth/DeepSeek-V3.1-GGUF) There is also a TQ1\_0 (for naming only) version (**170GB**) which is 1 file for Ollama compatibility and w... | 2025-08-22T20:15:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mxh5wv/deepseek_v31_dynamic_unsloth_ggufs_chat_template/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxh5wv | false | null | t3_1mxh5wv | /r/LocalLLaMA/comments/1mxh5wv/deepseek_v31_dynamic_unsloth_ggufs_chat_template/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'ZYTyoe4KUgjxLGn4b7MtCrPPnD-mOH6iQyOxWawzds0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZYTyoe4KUgjxLGn4b7MtCrPPnD-mOH6iQyOxWawzds0.png?width=108&crop=smart&auto=webp&s=cba62fcbc89e064ce9bf8b1fb78f792107e3d24d', 'width': 108}, {'height': 116, 'url': 'h... |
🤔 meta X midjourney | 176 | 2025-08-22T20:13:52 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxh4gk | false | null | t3_1mxh4gk | /r/LocalLLaMA/comments/1mxh4gk/meta_x_midjourney/ | false | false | default | 176 | {'enabled': True, 'images': [{'id': 'ayp3r5k9pmkf1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/ayp3r5k9pmkf1.jpeg?width=108&crop=smart&auto=webp&s=b869d6d980f7a806565a0a5ff21006d23e96e146', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/ayp3r5k9pmkf1.jpeg?width=216&crop=smart&auto=w... | ||
Deca 3 Alpha Ultra is a WIP, not a scam | 11 | Original Release:
Previous Reddit post: [https://www.reddit.com/r/LocalLLaMA/comments/1mwla9s/model\_release\_deca\_3\_alpha\_ultra\_46t\_parameters/](https://www.reddit.com/r/LocalLLaMA/comments/1mwla9s/model_release_deca_3_alpha_ultra_46t_parameters/)
**Body:**
Hey all — I’m the architect behind Deca. Yesterday’... | 2025-08-22T20:11:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mxh244/deca_3_alpha_ultra_is_a_wip_not_a_scam/ | GenLabsAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxh244 | false | null | t3_1mxh244 | /r/LocalLLaMA/comments/1mxh244/deca_3_alpha_ultra_is_a_wip_not_a_scam/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '_oA4rsxl7VeSYjw4v4FNlCJX-i1raXk8Wx3ycRaXV5M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_oA4rsxl7VeSYjw4v4FNlCJX-i1raXk8Wx3ycRaXV5M.png?width=108&crop=smart&auto=webp&s=a2a0d80601608c039b90f0393b41b67e28a4f486', 'width': 108}, {'height': 116, 'url': 'h... |
Faster prefill on CPU-MoE IK-llama? | 0 | Question: Faster prefill on CPU-MoE (Qwen3-Coder-480B) with 2×4090 in ik-llama — recommended -op, -ub/-amb, -ot, NUMA, and build flags?
Problem (short): First very long turn (prefill) is slow on CPU-MoE. Both GPUs sit ~1–10% SM during prompt digestion, only rising once tokens start. Subsequent turns are fast thanks to... | 2025-08-22T20:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mxgrs6/faster_prefill_on_cpumoe_ikllama/ | Infamous_Jaguar_2151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxgrs6 | false | null | t3_1mxgrs6 | /r/LocalLLaMA/comments/1mxgrs6/faster_prefill_on_cpumoe_ikllama/ | false | false | self | 0 | null |
GeoAI.js - Geo-AI library for JavaScript developers | 6 | We just released **geoai.js**, an open-source JavaScript library that brings GeoAI to the browser and Node.js, powered by Hugging Face's 🤗 transformers.js.
It currently supports tasks like:
* Image feature extraction (find similar features in satellite, aerial, or drone maps)
* Object detection (cars, ships, buildin... | 2025-08-22T19:52:19 | https://docs.geobase.app/geoai-live/ | Designer-Hovercraft9 | docs.geobase.app | 1970-01-01T00:00:00 | 0 | {} | 1mxgki3 | false | null | t3_1mxgki3 | /r/LocalLLaMA/comments/1mxgki3/geoaijs_geoai_libraray_for_javascript_developers/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'Q-fh0Tu9w4MZt0h_AaV3-zO4AwiM3NtBD6DhSaIyw30', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Q-fh0Tu9w4MZt0h_AaV3-zO4AwiM3NtBD6DhSaIyw30.png?width=108&crop=smart&auto=webp&s=0732e1e435e4b9adfcd559eb3ad5275a63741368', 'width': 108}, {'height': 121, 'url': 'h... | |
Some benchmarks for AMD MI50 32GB vs RTX 3090 | 38 | Here are the benchmarks:
➜ llama ./bench.sh
+ ./build/bin/llama-bench -r 5 --no-warmup -m ~/.lmstudio/models/unsloth/Qwen3-32B-GGUF/Qwen3-32B-Q4_0.gguf -p 128 -n 128 -ngl 99 -ts 0/0/1
ggml_vulkan: Found 3 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 1 |... | 2025-08-22T19:50:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mxgis3/some_benchmarks_for_amd_mi50_32gb_vs_rtx_3090/ | DistanceSolar1449 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxgis3 | false | null | t3_1mxgis3 | /r/LocalLLaMA/comments/1mxgis3/some_benchmarks_for_amd_mi50_32gb_vs_rtx_3090/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'eqbayUZf05nyCtmagm8NkNFaKkHXAI28NgkM97ki9gY', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/eqbayUZf05nyCtmagm8NkNFaKkHXAI28NgkM97ki9gY.png?width=108&crop=smart&auto=webp&s=4a3f3b2018d389bbaa97f235345bd0962337cea5', 'width': 108}, {'height': 126, 'url': 'h... |
Best LocalLLaMA as of August 2025 for... | 0 | Hi. What is the best local LLM, in your opinion, for sexting and stuff like that? | 2025-08-22T19:33:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mxg394/best_llocalllama_as_august_2025_for/ | Orinoko_357 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxg394 | false | null | t3_1mxg394 | /r/LocalLLaMA/comments/1mxg394/best_llocalllama_as_august_2025_for/ | false | false | nsfw | 0 | null |
jupytercad-mcp: MCP server for JupyterCAD to control it using LLMs/natural language. | 13 | 2025-08-22T19:22:08 | https://v.redd.it/415f3nqyfmkf1 | Material_Pool_986 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxfsid | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/415f3nqyfmkf1/DASHPlaylist.mpd?a=1758482545%2CMTViZmI3NTAyNThjODdhMTc3ZDY5ZjcwYTVjM2JmOWVkZDI0OThlZDZiOTI2ZWU0ODI2ZTA4OWU1NTg0NWFhZA%3D%3D&v=1&f=sd', 'duration': 86, 'fallback_url': 'https://v.redd.it/415f3nqyfmkf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mxfsid | /r/LocalLLaMA/comments/1mxfsid/jupytercadmcp_mcp_server_for_jupytercad_to/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'eXl4OGx3ajFnbWtmMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eXl4OGx3ajFnbWtmMXyf8-rvm1C__Q4bDL3gJBkjO_bjkyMUPsobX80FiZpA.png?width=108&crop=smart&format=pjpg&auto=webp&s=f965b5668405e2c0807669e05bf4d3214c913... | ||
jupytercad-mcp: MCP server for JupyterCAD to control it using LLMs/natural language. | 1 | 2025-08-22T19:20:31 | https://v.redd.it/1o4g4a6jfmkf1 | Material_Pool_986 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxfqzr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1o4g4a6jfmkf1/DASHPlaylist.mpd?a=1758482450%2CNTRjYWZhNmQ1OGRiMmI2NGQyMzk2OTA1N2E3NjA0MjFiZTI5YTMyYzM2MzFmYjEzODIzNTM4Y2VkNTEzYmRiMA%3D%3D&v=1&f=sd', 'duration': 86, 'fallback_url': 'https://v.redd.it/1o4g4a6jfmkf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mxfqzr | /r/LocalLLaMA/comments/1mxfqzr/jupytercadmcp_mcp_server_for_jupytercad_control/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cnpobmtvNHJmbWtmMai0jR7F7Sy_XGPex1j_103M_QdOsSSTVyTfUky_COG0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cnpobmtvNHJmbWtmMai0jR7F7Sy_XGPex1j_103M_QdOsSSTVyTfUky_COG0.png?width=108&crop=smart&format=pjpg&auto=webp&s=a0a7449532fccf72150d2a2d40479d74220ae... | |
What's "load_in_4bit" in Unsloth LoRA training? | 3 | When do I use it, and when do I not?
I know it enables 4-bit quantization, but does it quantize a model by loading it into CPU memory first and then loading the quantized version into VRAM?
Does it decrease the quality of the LoRA?
Does it make the LoRA only compatible with the 4-bit quantized version of the model? ... | 2025-08-22T19:16:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mxfn8q/whats_load_in_4bit_in_unsloth_lora_training/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxfn8q | false | null | t3_1mxfn8q | /r/LocalLLaMA/comments/1mxfn8q/whats_load_in_4bit_in_unsloth_lora_training/ | false | false | self | 3 | null |
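As a point of reference, here is a minimal sketch of where the flag typically sits in an Unsloth QLoRA setup (the model name and LoRA hyperparameters are illustrative, not from the post). With `load_in_4bit=True` the base weights are quantized at load time and kept frozen, while the LoRA adapter itself trains in higher precision on top, so the resulting adapter generally pairs best with the same 4-bit base it was trained against:

```python
from unsloth import FastLanguageModel

# load_in_4bit=True loads the frozen base weights 4-bit quantized
# (bitsandbytes NF4), cutting VRAM use sharply vs fp16; the LoRA
# adapter trained on top still uses higher-precision weights.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # illustrative base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach the LoRA adapter; only these small matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```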
[Thoughts] Local model use for emotional support | 5 | Hi Hi,
Seeing a lot of threads/news/research papers on people using AI as companionship. Personally, I think it is not good, because these chatbots blindly agree with the user. LLMs also have too much thinking, and are "too smart" and "too addictive" for emotional support. I feel a small language model could serve the role well instead? not givin... | 2025-08-22T18:57:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mxf4qh/thoughts_local_model_use_for_emtional_support/ | Amyisabigminster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxf4qh | false | null | t3_1mxf4qh | /r/LocalLLaMA/comments/1mxf4qh/thoughts_local_model_use_for_emtional_support/ | false | false | self | 5 | null |
Seed-OSS-36B is ridiculously good | 486 | [https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct)
The model was released a few days ago. It has a native context length of 512k. A pull request has been made to llama.cpp to add support for it.
i just tried running it with the code changes in th... | 2025-08-22T18:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mxf2sz/seedoss36b_is_ridiculously_good/ | mahmooz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxf2sz | false | null | t3_1mxf2sz | /r/LocalLLaMA/comments/1mxf2sz/seedoss36b_is_ridiculously_good/ | false | false | self | 486 | {'enabled': False, 'images': [{'id': '2h_CX4OErqePeWwhtP3G1-P7Ko736GavLjUkx5LIVTc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2h_CX4OErqePeWwhtP3G1-P7Ko736GavLjUkx5LIVTc.png?width=108&crop=smart&auto=webp&s=f4ecb557c6e35bae94f3de782b54807db9d486d4', 'width': 108}, {'height': 116, 'url': 'h... |
Best open autonomous coding agent. | 1 | I am impressed by Copilot/Cursor agent mode.
I wonder if the open-source and local LLaMA communities have competitive open-source versions, or an agentic orchestration layer for an autonomous coding system.
If you have any other knowledge or wisdom to share as it relates to this topic, your comment would hig... | 2025-08-22T18:40:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mxep4g/best_open_autonomous_coding_agent/ | josesandwich1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxep4g | false | null | t3_1mxep4g | /r/LocalLLaMA/comments/1mxep4g/best_open_autonomous_coding_agent/ | false | false | self | 1 | null |
I have been working on a talking jellyfish desktop companion using Sesame CSM and Kyutai ASR | 19 | I was able to get all these models running natively on windows (no docker) using under 11 GB vram (recording increased vram usage a bit). I released my last sesame CSM project as OSS (https://github.com/ReisCook/VoiceAssistant) but many people had trouble running it due to needing docker desktop, nvidia container toolk... | 2025-08-22T18:35:25 | https://v.redd.it/d1g26y2v2mkf1 | DumaDuma | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxeksc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/d1g26y2v2mkf1/DASHPlaylist.mpd?a=1758479742%2CZTRjMzNlZGZlYTZkZGVhOGJjY2VmYzMyYjNlYWM1ZWYwOTkzNTE5NTUwYTA5NTMyNWZjNjIxYzczNjkwZTJkNA%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/d1g26y2v2mkf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mxeksc | /r/LocalLLaMA/comments/1mxeksc/i_have_been_working_on_a_talking_jellyfish/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'Y3NscG03MnYybWtmMZdAKW3zuhxUEmydslHyGJ-vzmhw3wkN_12tCgY9Wfxh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y3NscG03MnYybWtmMZdAKW3zuhxUEmydslHyGJ-vzmhw3wkN_12tCgY9Wfxh.png?width=108&crop=smart&format=pjpg&auto=webp&s=f7f1fcc44509c846282bd50ac7895f0ede7cc... | |
I created a tool for Coding with a local llama.cpp server | 11 | I've been exploring coding agents for the better part of this year. I then deployed a llama.cpp server in my home and discovered that there was no tool for easily interacting with it from a coding agent. Codex allows you to use Ollama, but limited to their open source models. So I made a CLI tool for interacting with l... | 2025-08-22T18:19:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mxe57t/i_created_a_tool_for_coding_with_a_local_llamacpp/ | Such_Individual1234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxe57t | false | null | t3_1mxe57t | /r/LocalLLaMA/comments/1mxe57t/i_created_a_tool_for_coding_with_a_local_llamacpp/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '4m7cbjOTI877x5tA-O4gvQ4reaxrYnqrsAXzfk1uZaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4m7cbjOTI877x5tA-O4gvQ4reaxrYnqrsAXzfk1uZaM.png?width=108&crop=smart&auto=webp&s=478652312d2b58240f52a21dde573781fafa6741', 'width': 108}, {'height': 108, 'url': 'h... |
Which LLM for accurate and fast responses. | 2 | I recently tested some local LLMs in GPT4All, such as Mistral Instruct, DeepSeek R1 Distill Llama 8B, and Qwen 7B.
I asked all three: Generate a 200-word text about AMD.
They all gave different answers.
Mistral seemed to have been the most accurate (ish) and was by FAR the fastest.
Both of the DeepSeek ones gave false ans... | 2025-08-22T18:18:13 | https://www.reddit.com/gallery/1mxe46v | _Kayyaa_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mxe46v | false | null | t3_1mxe46v | /r/LocalLLaMA/comments/1mxe46v/which_llm_for_accurate_and_fast_responses/ | false | false | 2 | null |
Can AI tools really help us make money? | 0 | As a content writer, I need AI to provide me with sources of inspiration, which increases my work efficiency and helps me earn money faster. Who else relies heavily on AI tools for their work? Do you think they provide any substantial help in your work? | 2025-08-22T18:17:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mxe3cv/can_ai_tools_really_help_us_make_money/ | crystal0474 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxe3cv | false | null | t3_1mxe3cv | /r/LocalLLaMA/comments/1mxe3cv/can_ai_tools_really_help_us_make_money/ | false | false | self | 0 | null |
GPT OSS 20b pruning. Anyone? | 7 | Some time ago I remember there was a guy who was pruning some big models (27B or 32B) down to smaller 4B-8B models, and they were working quite nicely.
I don't remember his name or Hugging Face nickname.
I wonder if anyone has thought of pruning gpt-oss-20b to a more usable 4B or 7B model. | 2025-08-22T18:08:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mxdush/gpt_oss_20b_pruning_anyone/ | Robert__Sinclair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxdush | false | null | t3_1mxdush | /r/LocalLLaMA/comments/1mxdush/gpt_oss_20b_pruning_anyone/ | false | false | self | 7 | null |
The AI sandbox | 2 | The AI sandbox environment I talked about is nearly complete.
I would say it will be complete tomorrow (but it's already working and should be usable to test).
Here's its repo: https://github.com/Intro0siddiqui/ai-sandbox
Last week I asked if people even need a lightweight isolated environment for faster ai code developmen... | 2025-08-22T18:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mxdulm/the_ai_sandbox/ | Ok_Horror_8567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxdulm | false | null | t3_1mxdulm | /r/LocalLLaMA/comments/1mxdulm/the_ai_sandbox/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'or-HXiPDWM33AuveBsWI1Mo6hCjSkaEvl96F58Wuju8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/or-HXiPDWM33AuveBsWI1Mo6hCjSkaEvl96F58Wuju8.png?width=108&crop=smart&auto=webp&s=4288943ceeb3d1a2c2ba9dcf92440384b3b3a157', 'width': 108}, {'height': 108, 'url': 'h... |
They launch the GPU AI Workstation Founders Edition | 4 | | 2025-08-22T17:53:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mxdg8g/they_throw_the_gpu_ai_workstation_founders_edition/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxdg8g | false | null | t3_1mxdg8g | /r/LocalLLaMA/comments/1mxdg8g/they_throw_the_gpu_ai_workstation_founders_edition/ | false | false | 4 | null |
GPU AI Workstation Founders Edition | 0 |
[https://x.com/Mascobot/status/1958925710988582998](https://x.com/Mascobot/status/1958925710988582998)
| 2025-08-22T17:50:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mxdd3o/gpu_ai_workstation_founders_edition/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxdd3o | false | null | t3_1mxdd3o | /r/LocalLLaMA/comments/1mxdd3o/gpu_ai_workstation_founders_edition/ | false | false | 0 | null | |
This is a scam right… | 0 | These things normally go for ~10k, and I see this on eBay for a third of the price. I'm 80% sure this is a scam, but on the off chance it's not…
| 2025-08-22T17:49:21 | arman-d0e | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxdc70 | false | null | t3_1mxdc70 | /r/LocalLLaMA/comments/1mxdc70/this_is_a_scam_right/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '9h95f0jhzlkf1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/9h95f0jhzlkf1.jpeg?width=108&crop=smart&auto=webp&s=d3181fff573e0a1356758f132c439505e4238cd4', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/9h95f0jhzlkf1.jpeg?width=216&crop=smart&auto=w... | |
Suggest a good model to run based on these specs | 0 | Intel Core Ultra 7 256V
Intel NPU, up to 47 TOPS
16 GB RAM (LPDDR5X)
Intel Arc Graphics 140V, 8 GB | 2025-08-22T17:29:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mxct5s/suggest_a_good_running_model_based_on_this_specs/ | Codie_n25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxct5s | false | null | t3_1mxct5s | /r/LocalLLaMA/comments/1mxct5s/suggest_a_good_running_model_based_on_this_specs/ | false | false | self | 0 | null |
Do "thinking" models performance hurt when enforced structured output? | 2 | So I just tried qwen3-4b-thinking and it's absolutely amazing and that got me thinking, for an *thinking* model. If we enforce the structured output as follow:
```
{
"thought": "Your reasoning behind the response",
"response": "Your response to user question"
}
```
Would that make it worse than letting the model ... | 2025-08-22T17:28:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mxcsot/do_thinking_models_performance_hurt_when_enforced/ | NovaH000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxcsot | false | null | t3_1mxcsot | /r/LocalLLaMA/comments/1mxcsot/do_thinking_models_performance_hurt_when_enforced/ | false | false | self | 2 | null |
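One hedged sketch of what "enforcing" that schema can look like in practice, assuming an OpenAI-compatible local server (e.g. llama.cpp's `llama-server`) at a hypothetical localhost URL; the thinking then happens inside the `thought` field rather than in free-form `<think>` tags:

```python
import json
from openai import OpenAI

# Hypothetical local OpenAI-compatible endpoint (e.g. llama-server).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

completion = client.chat.completions.create(
    model="qwen3-4b-thinking",
    messages=[
        {"role": "system", "content":
         'Reply ONLY with JSON of the form '
         '{"thought": "your reasoning", "response": "your answer"}.'},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
    response_format={"type": "json_object"},  # constrain output to valid JSON
)

answer = json.loads(completion.choices[0].message.content)
print(answer["response"])
```

Whether this beats native free-form thinking is exactly the open question in the post: the constraint guarantees parseable output, but it may shorten the reasoning the model would otherwise produce.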
[Help] Qwen3:14B + Local MCP Server - Model not adapting when results are unsatisfactory | 4 | Hey everyone! 👋
I'm pretty new to local AI and could use some guidance. I'm currently running **Qwen3:14B** integrated with a **local MCP server**, but I'm facing an issue with the model's behavior.
# Current Setup:
* **Model**: Qwen3:14B via Ollama
* **Integration**: Local MCP server for tool calling
* **Hardware*... | 2025-08-22T17:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mxcp35/help_qwen314b_local_mcp_server_model_not_adapting/ | luscadolly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxcp35 | false | null | t3_1mxcp35 | /r/LocalLLaMA/comments/1mxcp35/help_qwen314b_local_mcp_server_model_not_adapting/ | false | false | self | 4 | null |
Any Android app that handles speech to text, the LLM and TTS offline? AKA an automatic voice mode | 5 | Thx! | 2025-08-22T17:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mxci5r/any_android_app_that_handles_speech_to_text_the/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxci5r | false | null | t3_1mxci5r | /r/LocalLLaMA/comments/1mxci5r/any_android_app_that_handles_speech_to_text_the/ | false | false | self | 5 | null |
gpt-oss-20b-pumlGenV1 | 0 | Another gpt-oss-20b fine-tune, this time with the pumlGenV1 dataset. It performs as well as Qwen3-8B-pumlGenV1, if not better in some cases.
[https://huggingface.co/chrisrutherford/gpt-oss-pumlGenV1](https://huggingface.co/chrisrutherford/gpt-oss-pumlGenV1)
Map the evolution of the concept of 'nothing' from Parm... | 2025-08-22T17:08:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mxc9t1/gptoss20bpumlgenv1/ | lolzinventor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxc9t1 | false | null | t3_1mxc9t1 | /r/LocalLLaMA/comments/1mxc9t1/gptoss20bpumlgenv1/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'EYccuigbCZwXc19CzdORMnlH-TokmQ5eVQAGtb_B-G0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EYccuigbCZwXc19CzdORMnlH-TokmQ5eVQAGtb_B-G0.png?width=108&crop=smart&auto=webp&s=c9fa0e1196dcf64409cd03e7bb037efc4955041c', 'width': 108}, {'height': 116, 'url': 'h... | |
Suggest a good model to run based on these specs | 0 | Your laptop is a **Dell Latitude 5420** with:
* **CPU**: Intel i5-1145G7 (4 cores / 8 threads, \~2.6 GHz)
* **RAM**: 16 GB
* **GPU**: Intel Iris Xe (integrated, \~2 GB VRAM) | 2025-08-22T16:57:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mxbz1h/suggest_a_good_running_model_based_on_this_specs/ | uchiha_here | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxbz1h | false | null | t3_1mxbz1h | /r/LocalLLaMA/comments/1mxbz1h/suggest_a_good_running_model_based_on_this_specs/ | false | false | self | 0 | null |
I create a mod for GTA VC to talk with NPC using GroqAI | 1 | [removed] | 2025-08-22T16:32:52 | https://www.youtube.com/watch?v=b2z-iNR_ut0&t=103s | hwpoison | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mxbbk4 | false | {'oembed': {'author_name': 'hwpoison', 'author_url': 'https://www.youtube.com/@hwpoison', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/b2z-iNR_ut0?start=103&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyro... | t3_1mxbbk4 | /r/LocalLLaMA/comments/1mxbbk4/i_create_a_mod_for_gta_vc_to_talk_with_npc_using/ | false | false | default | 1 | null |
[UPDATE] DocStrange: Local web UI + upgraded from 3B → 7B model in cloud mode | 20 | We have previously shared the open-source docstrange library (convert PDFs/images/docs to clean structured data in Markdown/CSV/JSON/specific fields and other formats). Now the library also gives the option to run a local web interface.
In addition to this , we have upgraded the model from 3B to 7B parameters on the ... | 2025-08-22T16:25:43 | LostAmbassador6872 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mxb4op | false | null | t3_1mxb4op | /r/LocalLLaMA/comments/1mxb4op/update_docstrange_local_web_ui_upgraded_from_3b/ | false | false | default | 20 | {'enabled': True, 'images': [{'id': 'jy9kqf9bjlkf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/jy9kqf9bjlkf1.png?width=108&crop=smart&auto=webp&s=a9a32ed729c6b62fa532f43a4448e6447839e562', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/jy9kqf9bjlkf1.png?width=216&crop=smart&auto=web... | |
Anyone experimenting with fine-tuning tiny LLMs (like Gemma3:270M) for specific workflows? | 24 | I've been thinking about using small models like Gemma3:270M for very defined tasks, things like extracting key points from web searches or structuring data into JSON. Right now I am using Qwen3 as my go-to for all processes, but I think I can use the data generated by Qwen3 as fine-tuning data for a smaller model.
... | 2025-08-22T16:01:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mxagp5/anyone_experimenting_with_finetuning_tiny_llms/ | Choice_Nature9658 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxagp5 | false | null | t3_1mxagp5 | /r/LocalLLaMA/comments/1mxagp5/anyone_experimenting_with_finetuning_tiny_llms/ | false | false | self | 24 | null |
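A minimal sketch of the distillation idea the post describes, assuming a local OpenAI-compatible endpoint serving the Qwen3 "teacher" (the URL, model name, and prompts are placeholders): collect the teacher's outputs as chat-formatted JSONL that a tiny model like Gemma3:270M can then be fine-tuned on:

```python
import json
from openai import OpenAI

# Hypothetical local endpoint serving the Qwen3 teacher model.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="none")

# Replace with prompts drawn from your real workflow.
prompts = [
    "Extract the key points from this article: ...",
    "Structure this text into JSON: ...",
]

with open("distill.jsonl", "w") as f:
    for p in prompts:
        reply = client.chat.completions.create(
            model="qwen3",
            messages=[{"role": "user", "content": p}],
        ).choices[0].message.content
        # One chat-formatted training example per line, ready for SFT.
        f.write(json.dumps({"messages": [
            {"role": "user", "content": p},
            {"role": "assistant", "content": reply},
        ]}) + "\n")
```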
DeepSeek R1 0528 crushes Gemini 2.5 Pro in Gomoku | 6 | Temporarily forget the new kid DeepSeek V3.1; let's see how our old friend R1 performs.
**R1 as Black**
* R1 5-0 Gemini 2.5 Pro
**R1 as White**
* R1 4-1 Gemini 2.5 Pro
Against GPT-5-medium:
**R1 as Black**
* R1 3-2 GPT-5-medium
**R1 as White**
* R1 2-3 GPT-5-medium
**Rules:**
original Gomoku (no bans, no sw... | 2025-08-22T15:57:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mxacz2/deepseek_r1_0528_crushes_gemini_25_pro_in_gomoku/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxacz2 | false | null | t3_1mxacz2 | /r/LocalLLaMA/comments/1mxacz2/deepseek_r1_0528_crushes_gemini_25_pro_in_gomoku/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/fzSEZFMnon3Ma5Fp208oYEYKVDCDg7nUjEeqfwfFQL4.png?width=108&crop=smart&auto=webp&s=0b8af91e489a4db9a71df367e624b09df7588636', 'width': 108}, {'height': 113, 'url': 'h... | |
LLMs finally remembering: I've built the memory layer, now it's time to explore | 0 | I've been experimenting for a while with how LLMs can handle longer, more human-like memories. Out of that, I built a memory layer for LLMs that's now available as an API + SDK.
To show how it works, I made:
* a short YouTube demo (my first tutorial!)
* a Medium article with a full walkthrough
The idea: streamline bu... | 2025-08-22T15:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mxa5pc/llms_finally_remembering_ive_built_the_memory/ | shbong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mxa5pc | false | null | t3_1mxa5pc | /r/LocalLLaMA/comments/1mxa5pc/llms_finally_remembering_ive_built_the_memory/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Nn33g3kTcrglAIAjEoKvH2OQdfBPUvKFalv2qjxy0JQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Nn33g3kTcrglAIAjEoKvH2OQdfBPUvKFalv2qjxy0JQ.png?width=108&crop=smart&auto=webp&s=c054c62feb8d33f3bb416c705b99313db6ec22d6', 'width': 108}, {'height': 121, 'url': 'h... |