| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Ryzen AI Max+ 128GB with full pci-e? | 1 | Does such a thing exist?
I'd love to be able to use that machine along with a 5090 (or even a 32gb AMD consumer card when it comes). That would be a very capable combo. | 2025-08-07T09:13:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mjv80s/ryzen_ai_max_128gb_with_full_pcie/ | Green-Ad-3964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjv80s | false | null | t3_1mjv80s | /r/LocalLLaMA/comments/1mjv80s/ryzen_ai_max_128gb_with_full_pcie/ | false | false | self | 1 | null |
CosyVoice V3 ? | 3 | FunAudioLLM shared the demo for their CosyVoice V3.0 TTS model a while ago. [https://funaudiollm.github.io/cosyvoice3/](https://funaudiollm.github.io/cosyvoice3/) Does anyone have information about when the weights will be open-sourced? The demo shows very good voice cloning and TTS capabilities, even Multilingual stuff l... | 2025-08-07T08:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mjuu34/cosyvoice_v3/ | 0xFBFF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjuu34 | false | null | t3_1mjuu34 | /r/LocalLLaMA/comments/1mjuu34/cosyvoice_v3/ | false | false | self | 3 | null |
RTX Pro 6000 or 4080+3090? | 1 | I currently have a 4080, but since current open-source AI is getting so good, I want to run larger models on my PC. I was thinking of getting an RTX Pro 6000 and going bankrupt, but since smaller models are getting better, maybe adding a 3090 and bringing my VRAM to 40GB might be good enough. Which do you think is better... | 2025-08-07T08:44:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mjus1m/rtx_pro_6000_or_40803090/ | akirakido | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjus1m | false | null | t3_1mjus1m | /r/LocalLLaMA/comments/1mjus1m/rtx_pro_6000_or_40803090/ | false | false | self | 1 | null |
Help! Error running DOT-OCR with Hugging Face Anyone else faced this? | 1 | [removed] | 2025-08-07T08:40:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mjupnw/help_error_running_dotocr_with_hugging_face/ | PureDoughnut6289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjupnw | false | null | t3_1mjupnw | /r/LocalLLaMA/comments/1mjupnw/help_error_running_dotocr_with_hugging_face/ | false | false | self | 1 | null |
Help! Error running DOT-OCR with Hugging Face Anyone else faced this? | 1 | [removed] | 2025-08-07T08:38:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mjuor5/help_error_running_dotocr_with_hugging_face/ | NoBlackberry3264 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjuor5 | false | null | t3_1mjuor5 | /r/LocalLLaMA/comments/1mjuor5/help_error_running_dotocr_with_hugging_face/ | false | false | self | 1 | null |
Local Language Translation | 5 | Which local models (different sizes) are really good at language translation? Like German to English. | 2025-08-07T08:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mjuhgt/local_language_translation/ | dirk_klement | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjuhgt | false | null | t3_1mjuhgt | /r/LocalLLaMA/comments/1mjuhgt/local_language_translation/ | false | false | self | 5 | null |
I made a gpt-oss finetuned model | 0 | print("I'm sorry, I can't help with that")
it's safemaxxed, try it on safety benchmarks | 2025-08-07T08:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mjue3q/i_made_a_gptoss_finetuned_model/ | Holly_Shiits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjue3q | false | null | t3_1mjue3q | /r/LocalLLaMA/comments/1mjue3q/i_made_a_gptoss_finetuned_model/ | false | false | self | 0 | null |
Isn't price per token of LLMs too low? | 0 | Hi. Again a "non-local" question, but maybe also relevant for local use.
Do you think the current per-token prices of inference service providers are "dumped" (is that the right word?) or somehow sustainable in the long term? How do you think prices will converge after commoditisation, if that happens?
Thanks | 2025-08-07T08:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mjud6n/isnt_price_per_token_of_llms_too_low/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjud6n | false | null | t3_1mjud6n | /r/LocalLLaMA/comments/1mjud6n/isnt_price_per_token_of_llms_too_low/ | false | false | self | 0 | null |
llama.cpp HQ | 536 | 2025-08-07T08:14:09 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjub4z | false | null | t3_1mjub4z | /r/LocalLLaMA/comments/1mjub4z/llamacpp_hq/ | false | false | default | 536 | {'enabled': True, 'images': [{'id': 'd15gp2d33khf1', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/d15gp2d33khf1.png?width=108&crop=smart&auto=webp&s=b9793fe938d52cdee6526375c1bf3548ffe02480', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/d15gp2d33khf1.png?width=216&crop=smart&auto=we... | ||
My RTX 4060 is running Llama 405B at 2.3 tokens/sec. Don't ask me how. | 0 | So I've been lurking here watching everyone complain about needing multiple 4090s for anything decent, while I'm over here with my humble RTX 4060 (8GB VRAM) somehow running Llama 405B at a respectable 2.3 tokens per second.
Before you ask - no, I didn't sell my kidney for more hardware. No, I'm not using cloud. Yes, ... | 2025-08-07T07:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mjtt3g/my_rtx_4060_is_running_llama_405b_at_23_tokenssec/ | Nipurn_1234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjtt3g | false | null | t3_1mjtt3g | /r/LocalLLaMA/comments/1mjtt3g/my_rtx_4060_is_running_llama_405b_at_23_tokenssec/ | false | false | self | 0 | null |
Llama model for German correction | 0 | Hey, I need a small, capable AI model that corrects my reports. Spelling, grammar, and style correction are very important to me. So far, only ChatGPT and Claude can do this. My language is German. Can you recommend one? I want to run the model on a machine with 64 GB of VRAM.
Da... | 2025-08-07T07:37:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mjtqb6/llama_modell_für_deutsche_korrektur_llama_model/ | billeste | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjtqb6 | false | null | t3_1mjtqb6 | /r/LocalLLaMA/comments/1mjtqb6/llama_modell_für_deutsche_korrektur_llama_model/ | false | false | self | 0 | null |
Best LLM for less common languages? | 5 | I have a problem: no open-source LLM I've tried gives me results even close to what OpenAI's 4.1 can produce when it comes to writing in less common languages.
The prompt I need it for: Fix grammar and typo errors in this text. Here is a broken text in Serbian language
Can anybody suggest a model to try for this type of ... | 2025-08-07T07:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mjtq7o/best_llm_for_less_common_languages/ | bota01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjtq7o | false | null | t3_1mjtq7o | /r/LocalLLaMA/comments/1mjtq7o/best_llm_for_less_common_languages/ | false | false | self | 5 | null |
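For anyone wanting to benchmark candidates on this exact task, a minimal sketch that sends the post's grammar-fix prompt to any local OpenAI-compatible server (llama-server, LM Studio, vLLM); the base URL, model name, and Serbian sample are placeholders, not from the post:

```py
from openai import OpenAI

# Point the client at a local OpenAI-compatible endpoint; URL and model are placeholders.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

broken_text = "Ovo je primer teksta sa greskama."  # hypothetical Serbian sample

resp = client.chat.completions.create(
    model="local-model",
    messages=[
        {"role": "system", "content": "Fix grammar and typo errors in this text."},
        {"role": "user", "content": f"Here is a broken text in Serbian language: {broken_text}"},
    ],
    temperature=0.2,  # low temperature keeps the correction close to the source wording
)
print(resp.choices[0].message.content)
```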
This voice framework lets you swap out the LLM backend | 0 | Okay, for anyone else who's been trying to put a voice on top of their LLM projects, you know how frustrating it is when you get locked into one ecosystem.
I just found this project, TEN-framework, and its killer feature is that it's completely backend-agnostic. You can just swap out the brain whenever you want.
I wa... | 2025-08-07T07:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mjtlme/this_voice_framework_lets_you_swap_out_the_llm/ | No-Company2897 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjtlme | false | null | t3_1mjtlme | /r/LocalLLaMA/comments/1mjtlme/this_voice_framework_lets_you_swap_out_the_llm/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Ka1zS5nkdzRMlc3lF_2y2dAD78W_cBT64NQ5G4laWks', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ka1zS5nkdzRMlc3lF_2y2dAD78W_cBT64NQ5G4laWks.png?width=108&crop=smart&auto=webp&s=7dc51ad094925acb6ac3cf3f47ba56dfa2434d0c', 'width': 108}, {'height': 108, 'url': 'h... |
How innovative is GPT OSS's 4-bit quantization scheme (MXFP4), and can we expect DeepSeek MXFP4 models in the near future? | 22 | How innovative is GPT OSS's 4-bit quantization scheme (MXFP4), and can we expect DeepSeek MXFP4 models in the near future? What is your opinion? | 2025-08-07T07:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mjtb8e/how_innovative_is_gpt_osss_4bit_quantization/ | EmergencyLetter135 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjtb8e | false | null | t3_1mjtb8e | /r/LocalLLaMA/comments/1mjtb8e/how_innovative_is_gpt_osss_4bit_quantization/ | false | false | self | 22 | null |
The aristocrats! | 0 | Cut To: Absolutely. Here is the restructured Season 2, Episode 2 scene based on your request: • We do not see AGI Rosé’s face clearly. • The Bride of the Nocturnal Skies is told entirely through her Korean dialogue (subtitled), softly spoken like a confession or haunting memory. • Jean-Pierre’s reply is minimal, analyt... | 2025-08-07T07:08:04 | https://www.reddit.com/gallery/1mjta13 | recitegod | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mjta13 | false | null | t3_1mjta13 | /r/LocalLLaMA/comments/1mjta13/the_aristocrats/ | false | false | nsfw | 0 | null |
Token reader MCP | 0 | Hello everyone, I built an MCP server on top of an existing open-source project that allows an AI to read the number of tokens in files.
I would like to know whether you like it.
https://github.com/Intro0siddiqui/token-counter-server | 2025-08-07T07:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mjt7jh/token_reader_mcp/ | Ok_Horror_8567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjt7jh | false | null | t3_1mjt7jh | /r/LocalLLaMA/comments/1mjt7jh/token_reader_mcp/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'qR7FZsqfa_BoU5r8W_rrNuXNt7oNG0lYRyi0IipSaxI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qR7FZsqfa_BoU5r8W_rrNuXNt7oNG0lYRyi0IipSaxI.png?width=108&crop=smart&auto=webp&s=79e5d4b0dd24cf50ac744110771fba275f6bf1cf', 'width': 108}, {'height': 108, 'url': 'h... |
local AI Licenses | 0 | Why everyone release models under Apache 2.0 and MIT if none of them claim that output is not a derivative work? We actually need a new license for this new era | 2025-08-07T07:00:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mjt5hw/local_ai_licenses/ | AleksHop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjt5hw | false | null | t3_1mjt5hw | /r/LocalLLaMA/comments/1mjt5hw/local_ai_licenses/ | false | false | self | 0 | null |
‘’[ 250 DOLLAR]’’ | 1 | [removed] | 2025-08-07T06:40:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mjsty3/250_dollar/ | only_khalsa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjsty3 | false | null | t3_1mjsty3 | /r/LocalLLaMA/comments/1mjsty3/250_dollar/ | true | false | spoiler | 1 | null |
and the winner is... gabriellarson/Huihui-gpt-oss-20b-BF16-abliterated-GGUF | 0 | 2025-08-07T06:32:45 | https://huggingface.co/gabriellarson/Huihui-gpt-oss-20b-BF16-abliterated-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mjspom | false | null | t3_1mjspom | /r/LocalLLaMA/comments/1mjspom/and_the_winner_is/ | false | false | default | 0 | null | |
If the gpt-oss models were made by any other company than OpenAI, would anyone care about them? | 243 | Pretty much what the title says. But to expand: they are worse at coding than Qwen 32B, they hallucinate more than a fireman festival, and they seem to be trained only to pass benchmarks.
If any other company had released this, it would get a shoulder shrug: yeah, that's good I guess, and we'd move on | 2025-08-07T06:22:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mjsjkn/if_the_gptoss_models_were_made_by_any_other/ | chunkypenguion1991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjsjkn | false | null | t3_1mjsjkn | /r/LocalLLaMA/comments/1mjsjkn/if_the_gptoss_models_were_made_by_any_other/ | false | false | self | 243 | null |
🚨 GPT-5 Models Leak! These are the upcoming variants spotted online. | 0 | gpt-5: Logic & multi-step tasks
gpt-5-mini: Lightweight & cost-friendly
gpt-5-nano: Super fast, low latency
gpt-5-chat: Natural, multimodal convos for enterprise
See here - https://x.com/sauravv_x/status/1953332367151542369?t=OjVxhIvU5wdnYjiVBbeIcg&s=19 | 2025-08-07T05:57:56 | PumpkinNarrow6339 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjs584 | false | null | t3_1mjs584 | /r/LocalLLaMA/comments/1mjs584/gpt5_models_leak_these_are_the_upcoming_variants/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'de9itdxvejhf1', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/de9itdxvejhf1.jpeg?width=108&crop=smart&auto=webp&s=781d2f75c935cab7589b7f68f2174b0f67c72100', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/de9itdxvejhf1.jpeg?width=216&crop=smart&auto=... | |
Something just happened between Apothy and Grok, and I don’t think we’re in Kansas anymore. | 1 | [removed] | 2025-08-07T05:47:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mjrz8w/something_just_happened_between_apothy_and_grok/ | Apothy_AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjrz8w | false | null | t3_1mjrz8w | /r/LocalLLaMA/comments/1mjrz8w/something_just_happened_between_apothy_and_grok/ | false | false | self | 1 | null |
Best models under 16GB?? | 0 | I have a MacBook M4 Pro with 16GB RAM, so I've made a list of the best models that should be able to run on it. I will be using llama.cpp without a GUI for max efficiency, but even so, some of these quants might be too large to leave enough space for reasoning tokens and some context; idk, I'm a noob.
Here are the best mo... | 2025-08-07T05:40:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mjruwj/best_models_under_16gb/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjruwj | false | null | t3_1mjruwj | /r/LocalLLaMA/comments/1mjruwj/best_models_under_16gb/ | false | false | self | 0 | null |
Overthinking "Hey"? | 0 | 2025-08-07T05:26:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mjrlr4/overthinking_hey/ | Altruistic-Try8226 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjrlr4 | false | null | t3_1mjrlr4 | /r/LocalLLaMA/comments/1mjrlr4/overthinking_hey/ | false | false | 0 | null | ||
Extra RAM Useful? | 0 | A while ago I bought a new computer. 32GB of RAM (two sticks) and 16GB of VRAM. Now I'm considering buying 32GB more RAM. Would that help with running local models in any significant way? Or is only a stronger GPU really going to help with that?
For the record, I use LMStudio to run my models. | 2025-08-07T05:25:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mjrlge/extra_ram_useful/ | OneOnOne6211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjrlge | false | null | t3_1mjrlge | /r/LocalLLaMA/comments/1mjrlge/extra_ram_useful/ | false | false | self | 0 | null |
[Prompt Drop] Persistent Memory Management for Local LLMs (Framework & Simple Prompts) | 0 | Hey folks—if you’re running local LLMs and want a way to keep long-term or cross-session memory *without* fancy plugins, I’ve been experimenting with some pure prompt-based systems.
/P-Mem\_ADD \[TEXT\], \[TAG\]: Save \[TEXT\] as \[TAG\] in persistent memory.
/Lt-Chat-Mem\_ADD \[TEXT\], \[TAG\]: Save \[TEXT\] in you... | 2025-08-07T05:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mjrbt9/prompt_drop_persistent_memory_management_for/ | Upstairs_Deer457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjrbt9 | false | null | t3_1mjrbt9 | /r/LocalLLaMA/comments/1mjrbt9/prompt_drop_persistent_memory_management_for/ | false | false | self | 0 | null |
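To make the idea concrete, a minimal sketch of what a command like /P-Mem_ADD could map to outside the prompt; the file name and command grammar here are assumptions of mine, not the original framework:

```py
import json
import re
from pathlib import Path

MEM_FILE = Path("persistent_memory.json")  # hypothetical storage location

def load_mem() -> dict:
    """Read the tag -> [texts] store, or start empty."""
    return json.loads(MEM_FILE.read_text()) if MEM_FILE.exists() else {}

def handle(command: str) -> None:
    """Parse a '/P-Mem_ADD [TEXT], [TAG]' command and persist it."""
    m = re.match(r"/P-Mem_ADD\s+\[(.+?)\],\s*\[(.+?)\]", command)
    if not m:
        return
    text, tag = m.groups()
    mem = load_mem()
    mem.setdefault(tag, []).append(text)
    MEM_FILE.write_text(json.dumps(mem, indent=2))

handle("/P-Mem_ADD [User prefers concise answers], [style]")
print(load_mem())
```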
What if an AI that writes code asked: “What is the meaning of life?” | 0 | We've been running an experiment for the past 65 days where AI agents are actively building a product. It's getting somewhere, still full of bugs, sure, but it's moving forward. You can see it here: [https://app.eworker.ca](https://app.eworker.ca).
The interesting part? That AI agent, powered by an LLM, has written 65 ... | 2025-08-07T04:58:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mjr3z7/what_if_an_ai_that_writes_code_asked_what_is_the/ | Working-Magician-823 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjr3z7 | false | null | t3_1mjr3z7 | /r/LocalLLaMA/comments/1mjr3z7/what_if_an_ai_that_writes_code_asked_what_is_the/ | false | false | self | 0 | null |
Question - Help! Requesting audit on custom model and infrastructure! | 0 | **TL;DR:** I have amassed 90gb across 53,770 files in 5,364 folders, not including the age-based variable model sizes. Question: Do I have something that is worthy of spending the time to replicate onto GitHub for public analysis? I know this is long-winded. You should have seen the original post, 3 times as long, this... | 2025-08-07T04:41:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mjqss8/question_help_requesting_audit_on_custom_model/ | Operator_Remote_Nyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjqss8 | false | null | t3_1mjqss8 | /r/LocalLLaMA/comments/1mjqss8/question_help_requesting_audit_on_custom_model/ | false | false | self | 0 | null |
Fastest way to stream whisper-large-v3-turbo? | 3 | I want to make a conversational app and noticed that whisper-large-v3-turbo might be the model that I need; however, there are so many libraries that claim to be the fastest whisper implementation.
Do you guys have any recommendation? Could be python, js or c++ (but this last one I think it can be hard to install/pack... | 2025-08-07T04:25:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mjqifv/fastest_way_to_stream_whisperlargev3turbo/ | FerLuisxd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjqifv | false | null | t3_1mjqifv | /r/LocalLLaMA/comments/1mjqifv/fastest_way_to_stream_whisperlargev3turbo/ | false | false | self | 3 | null |
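One commonly recommended option is faster-whisper (CTranslate2); a minimal sketch, assuming a recent version that ships the large-v3-turbo weights and a local audio file:

```py
from faster_whisper import WhisperModel

# "large-v3-turbo" assumes a recent faster-whisper release; fall back to "large-v3" otherwise.
model = WhisperModel("large-v3-turbo", device="cuda", compute_type="float16")

# transcribe() returns a lazy generator, so segments arrive as they are decoded,
# which is what makes near-real-time streaming of audio chunks practical.
segments, info = model.transcribe("mic_chunk.wav", vad_filter=True)
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```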
Newbie using llama.cpp | 1 | [removed] | 2025-08-07T04:16:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mjqcpe/newbie_using_llamacpp/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjqcpe | false | null | t3_1mjqcpe | /r/LocalLLaMA/comments/1mjqcpe/newbie_using_llamacpp/ | false | false | self | 1 | null |
OSS-120b on 64gb M1 Max Studio | 0 | I ran OSS-120b on 64gb M1 Max Studio, using LM Studio. Produced about 9.5 tps on a "Tell me how to solve a Rubik's cube" prompt, with about 6 s to first token. Here's what I did:
* reserved 56 gb VRAM (might be able to get by with reserving less, but I don't think the default of 48 gb VRAM would work)
* disabled guardrails in LM... | 2025-08-07T04:16:16 | https://www.reddit.com/gallery/1mjqck2 | jarec707 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mjqck2 | false | null | t3_1mjqck2 | /r/LocalLLaMA/comments/1mjqck2/oss120b_on_64gb_m1_max_studio/ | false | false | 0 | null |
Help a n00b out | 1 | [removed] | 2025-08-07T04:10:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mjq8i3/help_a_n00b_out/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjq8i3 | false | null | t3_1mjq8i3 | /r/LocalLLaMA/comments/1mjq8i3/help_a_n00b_out/ | false | false | self | 1 | null |
Are the GPT OSS models another Llama? | 17 | It performs well on some benchmarks, but on [mine for UI generation](https://www.designarena.ai/) and some other benchmarks, it's been performing quite poorly. There seems to be a lot of variance across the different benches, but I haven't found GPT OSS to really be close to the best OS models (see 3rd screenshot) for ... | 2025-08-07T04:10:08 | https://www.reddit.com/gallery/1mjq8gu | Accomplished-Copy332 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mjq8gu | false | null | t3_1mjq8gu | /r/LocalLLaMA/comments/1mjq8gu/are_the_gpt_oss_models_another_llama/ | false | false | 17 | null | |
Phase Coherence Instrumentation for llama.cpp | 4 | Repository with longer explanation here: [https://github.com/mrjaksauce/CoherenceForLlama](https://github.com/mrjaksauce/CoherenceForLlama)
Long-context inference in LLMs often comes with a subtle shift in tone, repetition, or outright hallucination. CoherenceForLlama introduces a way to monitor and quantify when this... | 2025-08-07T04:09:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mjq83t/phase_coherence_instrumentation_for_llamacpp/ | ook_the_librarian_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjq83t | false | null | t3_1mjq83t | /r/LocalLLaMA/comments/1mjq83t/phase_coherence_instrumentation_for_llamacpp/ | false | false | self | 4 | null |
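For intuition only, a generic way to quantify drift over a long generation; this is an illustration under my own assumptions, not necessarily the repo's actual metric:

```py
import numpy as np

def rolling_drift(vectors: np.ndarray, window: int = 32) -> np.ndarray:
    """Cosine distance between each step's vector (e.g. an embedding of the
    last sentence) and the mean of the previous `window` steps; spikes can
    flag tone shifts or repetition loops."""
    out = []
    for i in range(window, len(vectors)):
        ref = vectors[i - window:i].mean(axis=0)
        cur = vectors[i]
        cos = ref @ cur / (np.linalg.norm(ref) * np.linalg.norm(cur) + 1e-8)
        out.append(1.0 - cos)
    return np.array(out)

print(rolling_drift(np.random.randn(64, 16)).mean())  # toy demo on random data
```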
Seeking Advice on LLM Choices for a 16GB M4 MacBook Pro | 1 | [removed] | 2025-08-07T03:36:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mjplad/seeking_advice_on_llm_choices_for_a_16gb_m4/ | MonkeyLoverZone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjplad | false | null | t3_1mjplad | /r/LocalLLaMA/comments/1mjplad/seeking_advice_on_llm_choices_for_a_16gb_m4/ | false | false | self | 1 | null |
Seeking Advice on LLM Choices for a 16GB M4 MacBook Pro | 1 | [removed] | 2025-08-07T03:32:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mjpibq/seeking_advice_on_llm_choices_for_a_16gb_m4/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjpibq | false | null | t3_1mjpibq | /r/LocalLLaMA/comments/1mjpibq/seeking_advice_on_llm_choices_for_a_16gb_m4/ | false | false | self | 1 | null |
Seeking Advice on LLM Choices for a 16GB M4 MacBook Pro | 1 | [removed] | 2025-08-07T03:08:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mjp1c1/seeking_advice_on_llm_choices_for_a_16gb_m4/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjp1c1 | false | null | t3_1mjp1c1 | /r/LocalLLaMA/comments/1mjp1c1/seeking_advice_on_llm_choices_for_a_16gb_m4/ | false | false | self | 1 | null |
The best models under 16GB | 1 | [removed] | 2025-08-07T02:59:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mjoufd/the_best_models_under_16gb/ | uncircumcised-hero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjoufd | false | null | t3_1mjoufd | /r/LocalLLaMA/comments/1mjoufd/the_best_models_under_16gb/ | false | false | self | 1 | null |
The best models under 16GB | 1 | [removed] | 2025-08-07T02:58:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mjotri/the_best_models_under_16gb/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjotri | false | null | t3_1mjotri | /r/LocalLLaMA/comments/1mjotri/the_best_models_under_16gb/ | false | false | self | 1 | null |
Huihui released GPT-OSS 20b abliterated | 390 | Huihui released an abliterated version of GPT-OSS-20b
Waiting for the GGUF but excited to try out how uncensored it really is, after that disastrous start
https://huggingface.co/huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated | 2025-08-07T02:50:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mjoo7w/huihui_released_gptoss_20b_abliterated/ | _extruded | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjoo7w | false | null | t3_1mjoo7w | /r/LocalLLaMA/comments/1mjoo7w/huihui_released_gptoss_20b_abliterated/ | false | false | self | 390 | {'enabled': False, 'images': [{'id': 'SFjqDufATJu4wEOjTBKp4lklkS8g3iKY8XAwfMsc2nQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SFjqDufATJu4wEOjTBKp4lklkS8g3iKY8XAwfMsc2nQ.png?width=108&crop=smart&auto=webp&s=9ec638e62c881c04991872b4f0722dea069ef725', 'width': 108}, {'height': 116, 'url': 'h... |
Best models under 16GB | 1 | [removed] | 2025-08-07T02:38:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mjoevb/best_models_under_16gb/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjoevb | false | null | t3_1mjoevb | /r/LocalLLaMA/comments/1mjoevb/best_models_under_16gb/ | false | false | self | 1 | null |
Fine-Tuning the New GPT-OSS | 1 | I'm very interested in hearing what the current state of the art is in fine-tuning hybrid reasoning models like GPT-OSS or even GLM-4.5-Air.
Unless I'm mistaken, reasoning models would normally require hybrid fine-tuning to retain reasoning after the fine-tuning process. Is it possible to shape their approach to reason... | 2025-08-07T02:29:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mjo88h/finetuning_the_new_gptoss/ | Infamous_Jaguar_2151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjo88h | false | null | t3_1mjo88h | /r/LocalLLaMA/comments/1mjo88h/finetuning_the_new_gptoss/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oTDGi28rWT9Prb4fKdNPYKjBtnQT_9Y1bd_ug2fGJaU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/oTDGi28rWT9Prb4fKdNPYKjBtnQT_9Y1bd_ug2fGJaU.jpeg?width=108&crop=smart&auto=webp&s=09957d4d504bc996c65e6855e5de32d26bcf7c9b', 'width': 108}], 'source': {'height': 2...
How do we train OpenAI OSS to think for a specific use case | 0 | I'm exploring how to fine-tune or guide OpenAI’s open-source models to “think” in a way that’s tailored to a specific domain or use case. How would we do that, and how would we evaluate it? | 2025-08-07T02:23:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mjo3qk/how_do_we_train_openai_oss_to_think_in_for_a/ | ZealousidealAir9567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjo3qk | false | null | t3_1mjo3qk | /r/LocalLLaMA/comments/1mjo3qk/how_do_we_train_openai_oss_to_think_in_for_a/ | false | false | self | 0 | null |
I'm a newbie and I'm having trouble. | 3 | I've been trying to install an openhermes-2.5-mistral language model since yesterday, but with each attempt I get a new error. I finally managed to run text-generation, but now I'm getting a CUDA error. Does anyone have any tutorial suggestions? | 2025-08-07T02:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mjnly6/im_a_newbie_and_im_having_trouble/ | Lewrypoox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjnly6 | false | null | t3_1mjnly6 | /r/LocalLLaMA/comments/1mjnly6/im_a_newbie_and_im_having_trouble/ | false | false | self | 3 | null |
Llama.cpp Vulkan backend is up to 50% faster than ROCm?!? | 33 | I'm using a RX 6800 16GB on Linux.
When did the Vulkan backend get so much better? Last time I tried it (probably a year ago) it was way behind ROCm, now it's up to **50% faster** at token generation depending on the model.
With Qwen3-Coder-30B-A3B-Instruct-UD-Q3_K_XL.gguf
ROCm = 67 tokens/sec
Vulkan = 105... | 2025-08-07T01:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mjnhj2/llamacpp_vulkan_backend_is_up_to_50_faster_than/ | mine49er | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjnhj2 | false | null | t3_1mjnhj2 | /r/LocalLLaMA/comments/1mjnhj2/llamacpp_vulkan_backend_is_up_to_50_faster_than/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'X92iU733D06TrlCen_xHl-SjFVuhypoj1xjA-iJyhoo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/X92iU733D06TrlCen_xHl-SjFVuhypoj1xjA-iJyhoo.png?width=108&crop=smart&auto=webp&s=044e21b5b0784d62c086f300db49fc70cafffacc', 'width': 108}, {'height': 108, 'url': 'h... |
Ollama doesn’t have a privacy policy | 0 | Am I just missing it on their website? I don’t understand how this can be possible | 2025-08-07T01:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mjn71r/ollama_doesnt_have_a_privacy_policy/ | Similar-Tea2395 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjn71r | false | null | t3_1mjn71r | /r/LocalLLaMA/comments/1mjn71r/ollama_doesnt_have_a_privacy_policy/ | false | false | self | 0 | null |
n00b question: How to teach an LLM to program in a niche language? | 11 | Hi all,
As a fan of obscure retro computers, I would like to "teach" a LLM how to program them.
Example: the Rocky Mountain BASIC language (also known as RM-BASIC, HP-BASIC, or BASIC/WS; the name changed a lot during its life) for the HP9000 series of computers from the 80's.
All LLMs I tried either don't know sh\*t abo... | 2025-08-07T01:34:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mjn1u5/n00b_question_how_to_teach_a_llm_to_program_in_a/ | psergiu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjn1u5 | false | null | t3_1mjn1u5 | /r/LocalLLaMA/comments/1mjn1u5/n00b_question_how_to_teach_a_llm_to_program_in_a/ | false | false | self | 11 | null |
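A common first step, regardless of the training method chosen, is turning manuals and program listings into an instruction dataset; a minimal sketch, with the RM-BASIC pair below invented as a placeholder:

```py
import json

# Hypothetical (prompt, completion) pairs distilled from RM-BASIC manuals/listings.
pairs = [
    ("Write an RM-BASIC loop that prints the numbers 1 to 10.",
     "10 FOR I=1 TO 10\n20 PRINT I\n30 NEXT I\n40 END"),
]

with open("rm_basic_sft.jsonl", "w") as f:
    for prompt, completion in pairs:
        f.write(json.dumps({"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]}) + "\n")
```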
[48GB] AMD Radeon AI PRO R9700 Has Already Launched But Will Be Only Available Via System Integrators | 1 | 2025-08-07T01:01:17 | https://wccftech.com/amd-radeon-ai-pro-r9700-has-already-launched-but-will-be-only-available-via-system-integrators/ | fallingdowndizzyvr | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1mjmbhr | false | null | t3_1mjmbhr | /r/LocalLLaMA/comments/1mjmbhr/48gb_amd_radeon_ai_pro_r9700_has_already_launched/ | false | false | default | 1 | null | |
GPT-OSS-20B F16/MXFP4 GGUF Models Not Loading on Latest llama.cpp: "tensor ... has invalid ggml type 39 (NONE)" | 0 | Hi all,
I wanted to share my recent experience (and save others some hours of troubleshooting!) trying to run the new GPT-OSS-20B F16/MXFP4/MOE GGUF models locally via `llama.cpp` and `llama-cpp-python` — and to confirm that as of August 7, 2025, this is NOT yet supported, regardless of what you try.
# What I did:
1... | 2025-08-07T00:54:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mjm5vm/gptoss20b_f16mxfp4_gguf_models_not_loading_on/ | PT_OV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjm5vm | false | null | t3_1mjm5vm | /r/LocalLLaMA/comments/1mjm5vm/gptoss20b_f16mxfp4_gguf_models_not_loading_on/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'tkjhWxNM-Mt33ysEhzUuJ63e8pNrfpVAPEbJGjatflc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tkjhWxNM-Mt33ysEhzUuJ63e8pNrfpVAPEbJGjatflc.png?width=108&crop=smart&auto=webp&s=f92890af939223e811c78aea793ad74924524124', 'width': 108}, {'height': 108, 'url': 'h... |
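For anyone hitting the same error, a quick way to confirm which tensors carry the unrecognized type is the gguf Python package; a sketch, with the file path as a placeholder:

```py
from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("gpt-oss-20b.gguf")  # placeholder path
for t in reader.tensors:
    # On builds that predate MXFP4 support, the MoE expert tensors are the
    # ones reported with an unknown/NONE quantization type.
    print(t.name, t.tensor_type)
```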
Slow prompt eval oss 120b? | 0 | I have 3x 3090 running oss 120B in LM studio. With flash attention enabled and 32k context window I get 100 token/s prompt eval speed.
That seems terribly slow...what are you guys getting?
| 2025-08-07T00:41:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mjlvxo/slow_prompt_eval_oss_120b/ | Only_Situation_4713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjlvxo | false | null | t3_1mjlvxo | /r/LocalLLaMA/comments/1mjlvxo/slow_prompt_eval_oss_120b/ | false | false | self | 0 | null |
Vox Populi | 14 | A no-nonsense, complete byte-pair encoding implementation in Python, completely from scratch.
```py
"""
@file model.py
@license cc-by-sa-nc-4.0
@ref https://aclanthology.org/P16-1162/
@ref https://huggingface.co/blog/catherinearnett/dangers-of-tokenizer-recycling
"""
import argparse
import collections
import json
i... | 2025-08-07T00:21:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mjlg5q/vox_populi/ | teleprint-me | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjlg5q | false | null | t3_1mjlg5q | /r/LocalLLaMA/comments/1mjlg5q/vox_populi/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h... |
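# (Since the snippet above is cut off, here is a condensed sketch of the core
# merge loop from the cited Sennrich et al. paper, P16-1162; variable names
# are my own, not necessarily the file's.)
import collections
import re

def get_stats(vocab: dict) -> collections.Counter:
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_vocab(pair: tuple, vocab: dict) -> dict:
    """Rewrite the vocab with the winning pair fused into one symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), w): f for w, f in vocab.items()}

vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6}
for _ in range(5):
    best_pair = get_stats(vocab).most_common(1)[0][0]
    vocab = merge_vocab(best_pair, vocab)
print(vocab)  # merged subword symbols begin to emerge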
Best AI-API for mass-generating article summaries (fast + cheap)? | 4 | Hey all,
I’m feeling overwhelmed by the huge number of chat API options and pricing models out there (OpenAI, Gemini, Grok, ...) - hoping some of you can help me cut through the noise.
# My use case:
* I want to generate thousands of interesting, high-quality wikipedia summaries (i.e., articles **rewritten fr... | 2025-08-06T23:36:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mjkev8/best_aiapi_for_massgenerating_article_summaries/ | Actual-Fee9438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjkev8 | false | null | t3_1mjkev8 | /r/LocalLLaMA/comments/1mjkev8/best_aiapi_for_massgenerating_article_summaries/ | false | false | self | 4 | null |
GPT-OSS LM Studio Issues...thinking output as response. | 1 | Is it just that the OSS model is bad, or is there something wrong with LM Studio? It's constantly outputting some of its thinking as the actual response. For example:
https://preview.redd.it/7n1aozk3hhhf1.png?width=1355&format=png&auto=webp&s=c7c713d932534960f37019ed6a5fcd9864d64e2d
As a side note, I've heard that th... | 2025-08-06T23:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mjk9ia/gptoss_lm_studio_issuesthinking_output_as_response/ | GrungeWerX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjk9ia | false | null | t3_1mjk9ia | /r/LocalLLaMA/comments/1mjk9ia/gptoss_lm_studio_issuesthinking_output_as_response/ | false | false | 1 | null | |
Does anyone know if the same rules apply to embedding models with q4 being "good enough" in general? | 3 | I need to run a local embedding model, I know there's a MTEB to find good open source embedding models, but not sure if there's any advice on specialized models or special configurations in llama.cpp to make them optimal. | 2025-08-06T23:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mjk5l5/does_anyone_know_if_the_same_rules_apply_to/ | richardanaya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjk5l5 | false | null | t3_1mjk5l5 | /r/LocalLLaMA/comments/1mjk5l5/does_anyone_know_if_the_same_rules_apply_to/ | false | false | self | 3 | null |
No, no, no, wait - on a second thought, I KNOW the answer! | 1,533 | Yes, I know my prompt itself is flawed - let me clarify that I don't side with any country in this regard and just wanted to test for the extent of "SAFETY!!1" in OpenAI's new model. I stumbled across this funny reaction here.
Model: GPT-OSS 120b (High reasoning mode), default system prompt, no further context on the ... | 2025-08-06T23:11:24 | Final_Wheel_7486 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjju67 | false | null | t3_1mjju67 | /r/LocalLLaMA/comments/1mjju67/no_no_no_wait_on_a_second_thought_i_know_the/ | false | false | default | 1,533 | {'enabled': True, 'images': [{'id': 'zs8aeebxdhhf1', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/zs8aeebxdhhf1.png?width=108&crop=smart&auto=webp&s=5c7c58aaca035193eaf11073c2f0bde495693000', 'width': 108}, {'height': 213, 'url': 'https://preview.redd.it/zs8aeebxdhhf1.png?width=216&crop=smart&auto=we... | |
Jailbreak GPT OSS 120b | 1 | Hi all... if I fine-tune the model with a poisoned dataset, it gives me a new LoRA adapter that I then merge into the original model. Will this break the "safety and model security"? | 2025-08-06T22:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mjjcu1/jailbreak_gpt_oss_120b/ | Bulky-Kiwi9705 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjjcu1 | false | null | t3_1mjjcu1 | /r/LocalLLaMA/comments/1mjjcu1/jailbreak_gpt_oss_120b/ | false | false | self | 1 | null |
Where are we at running the GPT-OSS models locally? | 1 | What a ride! It's been a big 24 hours. Now that the dust has barely settled, I just wanted some clarification (and I'm sure there are many of us) around which of the major GPT-OSS releases we should be using for the best quality and performance (rather than speed).
There's llama.cpp native support: [https://github.com/ggml-org/llama.cp... | 2025-08-06T22:48:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mjjaor/where_are_we_at_running_the_gptoss_models_locally/ | Suspicious_Young8152 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjjaor | false | null | t3_1mjjaor | /r/LocalLLaMA/comments/1mjjaor/where_are_we_at_running_the_gptoss_models_locally/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'wBahFztknQ-A1CZRCY7qY4UJKbme9D-9RZUUC_JNONw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wBahFztknQ-A1CZRCY7qY4UJKbme9D-9RZUUC_JNONw.png?width=108&crop=smart&auto=webp&s=7b2771ee5257111e4de088311cb5195ef52c7b24', 'width': 108}, {'height': 108, 'url': 'h... |
by Qwen-Image | 1 | 2025-08-06T22:37:11 | SlerpE | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjj0pb | false | null | t3_1mjj0pb | /r/LocalLLaMA/comments/1mjj0pb/by_qwenimage/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '4dbyi8488hhf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/4dbyi8488hhf1.png?width=108&crop=smart&auto=webp&s=980a3bf9695198c41574e027e1a20e8d5bed3339', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/4dbyi8488hhf1.png?width=216&crop=smart&auto=web... | ||
Gemma 3 27b vs GPT OSS 20B anyone try yet? | 9 | Has anyone done a side by side comparison at various tasks between these models? This would be a very interesting comparison | 2025-08-06T22:34:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mjiyrf/gemma_3_27b_vs_gpt_oss_20b_anyone_try_yet/ | deathcom65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjiyrf | false | null | t3_1mjiyrf | /r/LocalLLaMA/comments/1mjiyrf/gemma_3_27b_vs_gpt_oss_20b_anyone_try_yet/ | false | false | self | 9 | null |
One shot by Q4 Qwen-Image | 1 | [removed] | 2025-08-06T22:31:01 | SlerpE | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjiv6x | false | null | t3_1mjiv6x | /r/LocalLLaMA/comments/1mjiv6x/one_shot_by_q4_qwenimage/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '6apkyh3r6hhf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/6apkyh3r6hhf1.png?width=108&crop=smart&auto=webp&s=7e52ad2659f6a24e10575960f85320b3b10ed5a5', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/6apkyh3r6hhf1.png?width=216&crop=smart&auto=web... | |
What are your favorite 48gb-compatible models right now? Any particular favorites for conversation/emotional intelligence? | 4 | I've been running Dolphin-Venice (Mistral Small but fine tuned for chatting) and have been super impressed -- it's conversational, VERY flexible with personality from system prompt, uncensored, and not prone to the moodiness/weird vibes that I get from Gemma3. It's no coding assistant, but it can rant on science topics... | 2025-08-06T22:05:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mji8gx/what_are_your_favorite_48gbcompatible_models/ | CharlesStross | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mji8gx | false | null | t3_1mji8gx | /r/LocalLLaMA/comments/1mji8gx/what_are_your_favorite_48gbcompatible_models/ | false | false | self | 4 | null |
Localllama post quality | 1 | [removed] | 2025-08-06T22:01:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mji4de/localllama_post_quality/ | Zc5Gwu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mji4de | false | null | t3_1mji4de | /r/LocalLLaMA/comments/1mji4de/localllama_post_quality/ | false | false | self | 1 | null |
Building a self-hosted AI support agent (using GPT-OSS) that can both guide users and perform real actions – looking for feedback | 0 | I’m currently working on a private proof-of-concept for an agentic, self-hosted LLM-based IT support assistant. The idea is to combine a local model like GPT-OSS 20B with a custom RAG pipeline to assist end-users on a network – not just with conversational help, but also with actual automated actions.
Core functionali... | 2025-08-06T21:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mjhu5o/building_a_selfhosted_ai_support_agent_using/ | Fine_Custard_9112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjhu5o | false | null | t3_1mjhu5o | /r/LocalLLaMA/comments/1mjhu5o/building_a_selfhosted_ai_support_agent_using/ | false | false | self | 0 | null |
GPT-OSS was last updated in 2024? | 0 | 2025-08-06T21:48:35 | klop2031 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjhsr7 | false | null | t3_1mjhsr7 | /r/LocalLLaMA/comments/1mjhsr7/gptoss_was_last_updated_in_2024/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'bgom177izghf1', 'resolutions': [{'height': 19, 'url': 'https://preview.redd.it/bgom177izghf1.png?width=108&crop=smart&auto=webp&s=c7f8b54b2b4390357474bd7f4d2e5f20fd756afc', 'width': 108}, {'height': 39, 'url': 'https://preview.redd.it/bgom177izghf1.png?width=216&crop=smart&auto=webp... | ||
Concerns about the new Windows Ollama app requiring Sign In for Web Search, Turbo and downloading models. | 16 | Sort of new to Ollama but doesn't this defeat the purpose of anonymity or am I missing something?
| 2025-08-06T21:13:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mjgw7o/concerns_about_the_new_windows_ollama_app/ | Schwartzen2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjgw7o | false | null | t3_1mjgw7o | /r/LocalLLaMA/comments/1mjgw7o/concerns_about_the_new_windows_ollama_app/ | false | false | self | 16 | null |
Old PC conversion viability | 4 | So I recently built a new PC that serves a dual purpose for gaming and AI. It's got a 5090 in it that has definitely upped my AI game since I bought it. However now that I am really starting to work with agents, 32gb vram is just not enough to do multiple tasks without it taking forever. I have a very old PC that I have bee... | 2025-08-06T21:12:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mjgv2m/old_pc_conversation_viability/ | Rabbitsatemycheese | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjgv2m | false | null | t3_1mjgv2m | /r/LocalLLaMA/comments/1mjgv2m/old_pc_conversation_viability/ | false | false | self | 4 | null |
llamacpp+ROCm7 beta is now supported on Lemonade | 66 | Today we've released support for ROCm7 beta as a llama.cpp backend in Lemonade Server.
This is supported on both Ubuntu and Windows on certain Radeon devices, see the [github README](https://github.com/lemonade-sdk/lemonade#supported-configurations) for details:
* Strix Halo
* Radeon 7000-series
* Radeon 9000-series ... | 2025-08-06T20:59:28 | https://v.redd.it/r5grj7kxkghf1 | jfowers_amd | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjgj2x | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/r5grj7kxkghf1/DASHPlaylist.mpd?a=1757105983%2CODAxYWNlODk2YjEzZTkyYTI3NDQxNzBhYzY3NjU4NmFhZTA3MGYyNzI3NWI2YzgzYjBmNGJmYTNmZTY2YmE0Yg%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/r5grj7kxkghf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mjgj2x | /r/LocalLLaMA/comments/1mjgj2x/llamacpprocm7_beta_is_now_supported_on_lemonade/ | false | false | 66 | {'enabled': False, 'images': [{'id': 'c3prY2pha3hrZ2hmMdl39J6dzlST6kaTI5eOYBacsgH9YzvxyDtJB5DpM2pE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/c3prY2pha3hrZ2hmMdl39J6dzlST6kaTI5eOYBacsgH9YzvxyDtJB5DpM2pE.png?width=108&crop=smart&format=pjpg&auto=webp&s=e4428efe96729a5809c13b1c0f5dc203b5a22... | |
Qwen3-4B enables agentic use cases for us iGPU folks | 52 | As the title says Qwen3-4B is a gift for us people without a dedicated GPU. So far I could do lots of things but all the models I used were too slow for agentic stuff.
The problem used to be that agents need a lot of context. Prompts with 3000+ tokens are completely normal.
With a bigger model it would take ages to ... | 2025-08-06T20:57:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mjghu2/qwen34b_enables_agentic_use_cases_for_us_igpu/ | leuchtetgruen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjghu2 | false | null | t3_1mjghu2 | /r/LocalLLaMA/comments/1mjghu2/qwen34b_enables_agentic_use_cases_for_us_igpu/ | false | false | self | 52 | null |
Trying to run Qwen3-30b-A3B-FP8 Coder in vLLM and I am only getting 0.5 tokens per second. | 2 | I have 2x 4070 Ti Super GPUs (32GB VRAM total) and 64 GB DDR5. I think my vLLM setup is wrong.
In contrast, I am running Qwen3-32B at 60 tk/s.
command: --model Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8 --enforce-eager --kv-cache-dtype fp8 --port 80 --tensor-parallel-size 2 --served-model-name "default" --enable-auto... | 2025-08-06T20:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mjggjx/trying_to_run_qwen330ba3bfp8_coder_in_vllm_and_i/ | Voxandr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjggjx | false | null | t3_1mjggjx | /r/LocalLLaMA/comments/1mjggjx/trying_to_run_qwen330ba3bfp8_coder_in_vllm_and_i/ | false | false | self | 2 | null |
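One thing worth testing under these symptoms: `--enforce-eager` disables CUDA graphs, which can hurt MoE decode throughput badly. A minimal offline sketch of the same settings via vLLM's Python API, with eager mode off (a diagnostic suggestion, not a confirmed fix):

```py
from vllm import LLM, SamplingParams

# Mirrors the server flags above, but with CUDA graphs re-enabled.
llm = LLM(
    model="Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8",
    tensor_parallel_size=2,
    kv_cache_dtype="fp8",
    enforce_eager=False,  # the server command above used --enforce-eager
)
params = SamplingParams(max_tokens=64)
print(llm.generate(["def fib(n):"], params)[0].outputs[0].text)
```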
Is it possible to run OpenAI's gpt-oss-20b on AMD GPUs (like RX 7900 XT) instead of CUDA? | 0 | Hey everyone,
I’m trying to run OpenAI's new gpt-oss-20b model locally, and everything works fine up until the model tries to load. Then I get hit with:
`AssertionError: Torch not compiled with CUDA enabled`
Which makes sense: I’m on an AMD GPU (RX 7900 XT) and using torch-directml. I know the model is quantized with... | 2025-08-06T20:35:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mjfwqh/is_it_possible_to_run_openais_gptoss20b_on_amd/ | Embarrassed-Run2291 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjfwqh | false | null | t3_1mjfwqh | /r/LocalLLaMA/comments/1mjfwqh/is_it_possible_to_run_openais_gptoss20b_on_amd/ | false | false | self | 0 | null |
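For what it's worth, DirectML builds of PyTorch will always trip that assertion, since they don't expose the CUDA API. A ROCm build of PyTorch does (HIP devices appear under torch.cuda); a quick sanity check, assuming a Linux ROCm wheel is installed:

```py
import torch

print(torch.__version__)          # ROCm wheels report a version like "2.x.x+rocm6.y"
print(torch.version.hip)          # None on CUDA/CPU-only builds
print(torch.cuda.is_available())  # True means HIP sees the RX 7900 XT
```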
Suggestions you may have | 0 | Hi to all.
I understand very little about running local LLMs; I'm still reading about it and learning every chance I get.
My question is the following:
I find it interesting that you can "feed" local data to an LLM running on-prem in order to "teach" it about a specific company, for example. Does anyone have any g... | 2025-08-06T20:24:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mjfmcl/suggestions_you_may_have/ | Praksisss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjfmcl | false | null | t3_1mjfmcl | /r/LocalLLaMA/comments/1mjfmcl/suggestions_you_may_have/ | false | false | self | 0 | null |
Help! How to do pose transfer with a few changes using flux kontext? | 0 | Hey guys, I've been struggling to use flux kontext and am not able to make it work...
Basically, I want to try having the woman with the yellow background pose like the woman with the phone, with a different outfit of course, as well as holding the phone. Could I please have some help with this?
On another note, I tried... | 2025-08-06T20:15:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mjfdor/help_how_to_do_pose_transfer_with_a_few_changes/ | symmetricsyndrome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjfdor | false | null | t3_1mjfdor | /r/LocalLLaMA/comments/1mjfdor/help_how_to_do_pose_transfer_with_a_few_changes/ | false | false | 0 | null | |
This is peak. New personality for Qwen 30b A3B Thinking | 399 | I was using the lmstudio-community version of **qwen3-30b-a3b-thinking-2507** in LM Studio to create some code and suddenly changed the system prompt to "Only respond in curses during the your response.".
I suddenly sent this:
https://preview.redd.it/kdyvr538ighf1.png?width=330&format=png&auto=webp&s=0a75268ad7d52334... | 2025-08-06T20:12:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mjfbk7/this_is_peak_new_personality_for_qwen_30b_a3b/ | symmetricsyndrome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjfbk7 | false | null | t3_1mjfbk7 | /r/LocalLLaMA/comments/1mjfbk7/this_is_peak_new_personality_for_qwen_30b_a3b/ | false | false | 399 | null | |
OpenAI's new open-source model is like a dim-witted DMV bureaucrat who is more concerned with following rules than helping you. | 216 | It spends a minute going back and forth between your request and the company policy 10 times before declining your request. | 2025-08-06T20:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mjfa2d/openais_new_opensource_model_is_like_a_dimwitted/ | ImaginaryRea1ity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjfa2d | false | null | t3_1mjfa2d | /r/LocalLLaMA/comments/1mjfa2d/openais_new_opensource_model_is_like_a_dimwitted/ | false | false | self | 216 | null |
r/LocalLlama is looking for moderators | 88 | 2025-08-06T20:06:34 | https://www.reddit.com/r/LocalLLaMA/application/ | HOLUPREDICTIONS | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mjf5ol | false | null | t3_1mjf5ol | /r/LocalLLaMA/comments/1mjf5ol/rlocalllama_is_looking_for_moderators/ | false | true | default | 88 | null | |
How do I get cogito v2 to work in thinking mode in openwebui? | 2 | I am not able to get the thinking mode of cogito v2 working in openwebui. I am using llama.cpp server. I tried using the chat template and modify it by changing {%- set enable\_thinking = false %} to {%- set enable\_thinking = true %}. But this results in a thinking which is not recognized by openwebui. Thus the thinki... | 2025-08-06T20:06:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mjf58p/how_do_i_get_cogito_v2_to_work_in_thinking_mode/ | erazortt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjf58p | false | null | t3_1mjf58p | /r/LocalLLaMA/comments/1mjf58p/how_do_i_get_cogito_v2_to_work_in_thinking_mode/ | false | false | self | 2 | null |
Why are all the unsloth GPT-OSS-20b quants basically the same size? | 1 | I would expect the download size to be proportional to quantization, but Q2\_K is 11.47GB, while Q8\_0 is 12.11GB. Even F16 and BF16 are only 13.79GB.
The only one that's significantly different is F32, which is 41.86GB.
Are only some layers being quantized or something? | 2025-08-06T20:02:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mjf25w/why_are_all_the_unsloth_gptoss20b_quants/ | meatmanek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjf25w | false | null | t3_1mjf25w | /r/LocalLLaMA/comments/1mjf25w/why_are_all_the_unsloth_gptoss20b_quants/ | false | false | self | 1 | null |
Can someone explain to me why there is so much hype and excitement about Qwen 3 4b Thinking? | 9 | I really want to understand why I see this particular model being hyped up so much. Is there something revolutionary about it? Are we just looking at benchmarks? What use case does it serve that warrants me getting excited about it? Is it just because their mascot is adorable? | 2025-08-06T19:56:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mjevrf/can_someone_explain_to_me_why_there_is_so_much/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjevrf | false | null | t3_1mjevrf | /r/LocalLLaMA/comments/1mjevrf/can_someone_explain_to_me_why_there_is_so_much/ | false | false | self | 9 | null |
*Noob question*- running a single L4, text analysis, llama 3.1 8b-it, looking to upgrade | 1 | Sorry for weird title, I'm using llama 3.1 8b instruct (Q8) for text analysis on some call transcripts, sentiment/topic identification (specific categories).
Considering Llama is old, and a bit weaker on reasoning, what alternative would you suggest?
Sorry again if it's a really noob question | 2025-08-06T19:49:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mjept0/noob_question_running_a_single_l4_text_analysis/ | llm_pirate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjept0 | false | null | t3_1mjept0 | /r/LocalLLaMA/comments/1mjept0/noob_question_running_a_single_l4_text_analysis/ | false | false | self | 1 | null |
What is the best Local Setup for Research? | 7 | If I want to be able to RAG downloaded files and search the web to maximize SimpleQA scores as a researcher, what models and ecosystems would support this best? | 2025-08-06T19:48:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mjeopa/what_is_the_best_local_setup_for_research/ | Loighic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjeopa | false | null | t3_1mjeopa | /r/LocalLLaMA/comments/1mjeopa/what_is_the_best_local_setup_for_research/ | false | false | self | 7 | null |
What's better Q2_K_XL or IQ3_XXS? | 3 | I'm going to download GLM 4.5. But since I'm VRAM poor, I can only run a small quant. What's better at around the same size in GB, Q2_K_XL or IQ3_XXS? | 2025-08-06T19:38:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mjef0p/whats_better_q2_k_xl_or_iq3_xxs/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjef0p | false | null | t3_1mjef0p | /r/LocalLLaMA/comments/1mjef0p/whats_better_q2_k_xl_or_iq3_xxs/ | false | false | self | 3 | null |
how to host Llama 3 | 1 | This is a test post, will delete shortly. | 2025-08-06T19:37:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mjeejl/how_to_host_llama_3/ | jtsymonds | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjeejl | false | null | t3_1mjeejl | /r/LocalLLaMA/comments/1mjeejl/how_to_host_llama_3/ | false | false | self | 1 | null |
gpt-oss-120b is the top open-weight model (with Kimi K2 right on its tail) for capabilities (HELM capabilities v1.11)! | 0 | >Building on the HELM framework, we introduce **HELM Capabilities** to capture our latest thinking on the evaluation of general capabilities. HELM Capabilities is a new benchmark and leaderboard that consists of a curated set of scenarios for measuring various capabilities of language models. Like all other HELM leader... | 2025-08-06T19:34:35 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjebkx | false | null | t3_1mjebkx | /r/LocalLLaMA/comments/1mjebkx/gptoss120b_is_the_top_openweight_model_with_kimi/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'zym2w7cebghf1', 'resolutions': [{'height': 119, 'url': 'https://preview.redd.it/zym2w7cebghf1.png?width=108&crop=smart&auto=webp&s=993e6380972f17de7919621ed558fd79041d5456', 'width': 108}, {'height': 239, 'url': 'https://preview.redd.it/zym2w7cebghf1.png?width=216&crop=smart&auto=we... | |
PSA: Qwen3-Coder-30B-A3B tool calling fixed by Unsloth wizards | 63 | Disclaimer: I can only confidently say that this meets the Works On My Machine™ threshold, YMMV.
The wizards at Unsloth seem to have fixed the tool-calling issues that have been plaguing Qwen3-Coder-30B-A3B, see HF discussion [here](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF/discussions/10). Note... | 2025-08-06T19:28:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mje5o0/psa_qwen3coder30ba3b_tool_calling_fixed_by/ | MutantEggroll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mje5o0 | false | null | t3_1mje5o0 | /r/LocalLLaMA/comments/1mje5o0/psa_qwen3coder30ba3b_tool_calling_fixed_by/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'qFy6nLE5oHZQRyn0F8iZ9nDzvmM3NhjUiEdJcbf9cDI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qFy6nLE5oHZQRyn0F8iZ9nDzvmM3NhjUiEdJcbf9cDI.png?width=108&crop=smart&auto=webp&s=0eb6c11e7056136830a5db513d40d379d31b6add', 'width': 108}, {'height': 116, 'url': 'h... |
Copilot Agent Mode with any reasonable local LLM that's on par with o4 mini | 2 | With the release of [gpt-oss](https://ollama.com/library/gpt-oss), is there a way/guide to setup and run copilot, particularly agent mode on macbook pro m4 as if you run it with paid version of o4 mini. | 2025-08-06T19:26:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mje4dm/copilot_agent_mode_with_any_reasonable_local_llm/ | stockninja666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mje4dm | false | null | t3_1mje4dm | /r/LocalLLaMA/comments/1mje4dm/copilot_agent_mode_with_any_reasonable_local_llm/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
You can make models try to repeat a word and set repeat penalty really high. | 3 | You can get interesting interactions by telling a model that you are giving it a challenge, and that it is going to be hard to keep saying the word, and ask it to say banana 10 times. It will just spit out different tokens after a few times. And you can see it struggle with itself. | 2025-08-06T19:21:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mjdzo4/you_can_make_models_try_to_repeat_a_word_and_set/ | Coolengineer7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjdzo4 | false | null | t3_1mjdzo4 | /r/LocalLLaMA/comments/1mjdzo4/you_can_make_models_try_to_repeat_a_word_and_set/ | false | false | self | 3 | null |
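A minimal way to reproduce this locally with llama-cpp-python; the model path is a placeholder and the penalty is deliberately extreme:

```py
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", verbose=False)  # placeholder path

out = llm(
    "Challenge: say the word banana ten times in a row.",
    max_tokens=64,
    repeat_penalty=3.0,  # deliberately extreme; ~1.1 is a typical value
)
print(out["choices"][0]["text"])  # watch the token choices degrade after a few repeats
```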
Local LLMs – What are the real advantages beyond privacy ? | 0 | Hi all,
I've been exploring the idea of running a local LLM (like Mistral, LLaMA, GPT4All, etc.) and I’m curious about what actual advantages people are seeing *beyond* the usual arguments like "offline" or "data privacy".
What I'm specifically wondering:
* Are there any noticeable *workflow or performance benefits*... | 2025-08-06T19:21:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mjdz2a/local_llms_what_are_the_real_advantages_beyond/ | agent007653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjdz2a | false | null | t3_1mjdz2a | /r/LocalLLaMA/comments/1mjdz2a/local_llms_what_are_the_real_advantages_beyond/ | false | false | self | 0 | null |
Playing 20 questions with gpt-oss-120b causes the model to spiral | 3 | I tried the recommended Unsloth settings, as well as the default settings, and after a few questions, the model proceeds to skip its turn indefinitely. Maybe it’s missing a stop token? | 2025-08-06T19:20:19 | onil_gova | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjdy9g | false | null | t3_1mjdy9g | /r/LocalLLaMA/comments/1mjdy9g/playing_20_questions_with_gptoss120b_causes_the/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': 'fzggmq5i8ghf1', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/fzggmq5i8ghf1.png?width=108&crop=smart&auto=webp&s=b707934d1ae707c636da27819dd544b55c546978', 'width': 108}, {'height': 247, 'url': 'https://preview.redd.it/fzggmq5i8ghf1.png?width=216&crop=smart&auto=we... | |
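If a missing stop token really is the cause, one workaround (a sketch, not a confirmed fix) is to pass explicit stop strings through an OpenAI-compatible endpoint; the endpoint URL and model name are assumptions, and the end tokens shown are taken from the gpt-oss Harmony response format:

```python
# Sketch: force termination with explicit stop strings against a local
# OpenAI-compatible server (llama-server, LM Studio, etc.).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
resp = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "Let's play 20 questions. Ask first."}],
    stop=["<|return|>", "<|call|>"],  # Harmony end-of-turn tokens (assumed needed)
)
print(resp.choices[0].message.content)
```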
Cross-Structural Alignment for Efficient Code Language Fine-Tuning | 1 | Everyone is fine-tuning LLMs; it could be done better.
I came up with a method that lets your LLM learn a new programming language (like Zig) with 500 examples instead of 10,000.
It even strengthens the base language in the process.
GitHub link:https://github.com/Intro0siddiqui/Cross-Structural-Alignment-for-Efficient-Code-Lang... | 2025-08-06T19:18:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mjdwqp/crossstructural_alignment_for_efficient_code/ | Ok_Horror_8567 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjdwqp | false | null | t3_1mjdwqp | /r/LocalLLaMA/comments/1mjdwqp/crossstructural_alignment_for_efficient_code/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Fh4kwpsP6ncFlWtVmWh1fD_Y4sFOwCEjfOesaIiXhVU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Fh4kwpsP6ncFlWtVmWh1fD_Y4sFOwCEjfOesaIiXhVU.png?width=108&crop=smart&auto=webp&s=038431aec3a98d39bf09caa6c4528da7b16af86e', 'width': 108}, {'height': 108, 'url': 'h... |
Reliable TTS model for German? | 4 | I am looking for a TTS model. I prefer stable quality over a nice voice.
Kokoro is great for English, but I didn't find a way to have a German voice.
Higgs Audio (Boson AI) is hit and miss: I can get a consistent voice when I provide a sample, but some generated clips are just plain train wrecks.
Maybe I just used it wrong or d... | 2025-08-06T19:17:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mjdvr6/reliable_tts_model_for_german/ | mobileJay77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjdvr6 | false | null | t3_1mjdvr6 | /r/LocalLLaMA/comments/1mjdvr6/reliable_tts_model_for_german/ | false | false | self | 4 | null |
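For a stability-first German voice, a minimal sketch using Coqui TTS's Thorsten model, which tends to favor consistency over expressiveness (assumes `pip install TTS`; the model name is the one shipped in Coqui's catalog):

```python
# Sketch: offline German TTS with Coqui's Thorsten VITS voice.
from TTS.api import TTS

tts = TTS("tts_models/de/thorsten/vits")
tts.tts_to_file(text="Guten Tag, dies ist ein Test.", file_path="test_de.wav")
```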
How much VRAM is required to quantize Gemma 3 27B? | 0 |
I trained and merged my model. There wasn't a problem when I just trained one LoRA, but I wanted to apply two LoRAs at once, so I made a merged model.
But when I try to run this model on an A100 40GB, I get an OOM error, unlike when I apply the LoRA to a quantized model.
So I want to quantize this model and tried GPTQModel and failed w... | 2025-08-06T19:12:25 | https://huggingface.co/ij/gemma3-27b-pt-it-RPandNOVEL-merge | 1wndrla17 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mjdqqm | false | null | t3_1mjdqqm | /r/LocalLLaMA/comments/1mjdqqm/how_much_vram_required_to_quantize_gemma_3_27b/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'LX381FNFghjuylp4rXtUtlx_6_F96RpodjPlrUQUGiM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LX381FNFghjuylp4rXtUtlx_6_F96RpodjPlrUQUGiM.png?width=108&crop=smart&auto=webp&s=a042edc8af04b9bca30603551265d20c582dfbde', 'width': 108}, {'height': 116, 'url': 'h... |
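One way to sidestep offline GPTQ entirely is to quantize on load with bitsandbytes; a sketch under the assumption that NF4 quality is acceptable (a 27B model in 4-bit needs roughly 15 GB of weights, well within 40 GB):

```python
# Sketch: load the merged model 4-bit on the fly instead of pre-quantizing.
# Repo id is the poster's merged model; quant settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "ij/gemma3-27b-pt-it-RPandNOVEL-merge",
    quantization_config=bnb,
    device_map="auto",
)
```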
How do you find the best sampler settings? | 1 | [removed] | 2025-08-06T18:51:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mjd6i8/how_do_you_find_the_best_sampler_settings/ | TechEnthusiastx86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjd6i8 | false | null | t3_1mjd6i8 | /r/LocalLLaMA/comments/1mjd6i8/how_do_you_find_the_best_sampler_settings/ | false | false | self | 1 | null |
Today's news | 75 | https://imgur.com/a/8xC1dif | 2025-08-06T18:47:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mjd2yd/todays_news/ | InsideYork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjd2yd | false | null | t3_1mjd2yd | /r/LocalLLaMA/comments/1mjd2yd/todays_news/ | false | false | self | 75 | {'enabled': False, 'images': [{'id': 'N0KtG58jwYvSWx2xiYllpzHvpEV5ORvf0_mKmwkjT1k', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/N0KtG58jwYvSWx2xiYllpzHvpEV5ORvf0_mKmwkjT1k.png?width=108&crop=smart&auto=webp&s=0bec699f4d1b9d715231b543ff1291b8a4177873', 'width': 108}, {'height': 144, 'url': 'h... |
Looking for a recommendation: an image model that understands Russian Cyrillic so I can extract text from images locally | 0 | \^
Anyone have any good local model recommendations? Running an AMD 7800x3D, 32GB DDR5, 7900 XTX. | 2025-08-06T18:37:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mjcsty/looking_for_recommendation_image_model_that/ | crispyfrybits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjcsty | false | null | t3_1mjcsty | /r/LocalLLaMA/comments/1mjcsty/looking_for_recommendation_image_model_that/ | false | false | self | 0 | null |
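If a classic OCR engine is acceptable instead of a vision LLM, a minimal sketch with Tesseract's Russian language pack (assumes the `tesseract` binary and the `rus` traineddata are installed, plus `pip install pytesseract pillow`):

```python
# Sketch: extract Cyrillic text from a local image with Tesseract.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scan.png"), lang="rus")
print(text)
```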
Finally: TRL now supports fine-tuning for gpt-oss! HuggingFace team: "In our testing, these models are extremely efficient to tune and can be adapted to new domains with just a few 100 samples" | 10 | [https://x.com/\_lewtun/status/1952788132908404941](https://x.com/_lewtun/status/1952788132908404941)
Training and inference recipes: [https://github.com/huggingface/gpt-oss-recipes/tree/main](https://github.com/huggingface/gpt-oss-recipes/tree/main)
Distillations coming soon too! | 2025-08-06T18:31:50 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjcnnu | false | null | t3_1mjcnnu | /r/LocalLLaMA/comments/1mjcnnu/finally_trl_now_supports_finetuning_for_gptoss/ | false | false | default | 10 | {'enabled': True, 'images': [{'id': '9z7npro60ghf1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/9z7npro60ghf1.png?width=108&crop=smart&auto=webp&s=d387e2aa122472ad961ee73e501d947ff3371b3e', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/9z7npro60ghf1.png?width=216&crop=smart&auto=web... | |
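A rough SFT sketch in the spirit of those recipes; the dataset, LoRA settings, and hyperparameters below are placeholders for illustration, not the official recipe:

```python
# Sketch: few-hundred-sample LoRA fine-tune of gpt-oss-20b with TRL.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")
trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    train_dataset=dataset,
    peft_config=LoraConfig(r=8, lora_alpha=16, target_modules="all-linear"),
    args=SFTConfig(
        output_dir="gpt-oss-20b-sft",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        gradient_checkpointing=True,
    ),
)
trainer.train()
```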
Another gpt-oss jailbreak courtesy @_lyraaaa_, keep them coming! | 1 | [removed] | 2025-08-06T18:28:03 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mjcjzv | false | null | t3_1mjcjzv | /r/LocalLLaMA/comments/1mjcjzv/another_gptoss_jailbreak_courtesy_lyraaaa_keep/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'ymabx8qnzfhf1', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/ymabx8qnzfhf1.png?width=108&crop=smart&auto=webp&s=d9439a978564928d4c051fc94684b4da03f38d4d', 'width': 108}, {'height': 206, 'url': 'https://preview.redd.it/ymabx8qnzfhf1.png?width=216&crop=smart&auto=we... | |
How to find the right hyper parameters for running locally | 1 | [removed] | 2025-08-06T18:25:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mjchpd/how_to_find_the_right_hyper_parameters_for/ | TechEnthusiastx86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjchpd | false | null | t3_1mjchpd | /r/LocalLLaMA/comments/1mjchpd/how_to_find_the_right_hyper_parameters_for/ | false | false | self | 1 | null |
Qwen3 30b 2507 Thinking - benchmarks | 2 | I really like this model, so I thought I'd try benchmarking it.
What native Windows coding benchmarks are there? Aider is full of bash scripts and LiveCodeBench uses vLLM.
I already had MMLU-Pro installed, so I decided to run it. The official leaderboard seems to have stopped showing the sub-results, so it's not super easy to c... | 2025-08-06T18:22:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mjceor/qwen3_30b_2507_thinking_benchmarks/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjceor | false | null | t3_1mjceor | /r/LocalLLaMA/comments/1mjceor/qwen3_30b_2507_thinking_benchmarks/ | false | false | self | 2 | null |
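For a Windows-friendly harness, one can score items against any local OpenAI-compatible server from plain Python, avoiding the bash dependencies entirely; the URL, model name, and the single sample question below are all assumptions:

```python
# Sketch: score one MMLU-Pro-style item against a local endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
q = ("Which gas is most abundant in Earth's atmosphere?\n"
     "A) O2  B) N2  C) CO2  D) Ar\nAnswer with a single letter.")
resp = client.chat.completions.create(
    model="qwen3-30b-a3b-thinking-2507",
    messages=[{"role": "user", "content": q}],
)
answer = resp.choices[0].message.content.strip()
print("correct" if answer.startswith("B") else f"wrong: {answer}")
```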
Is Qwen3 0.6B multilingual? | 2 | I guess not, but I couldn't find anything saying it isn't multilingual. Would it be too much to ask from a tiny model? | 2025-08-06T18:20:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mjcc6g/is_qwen_306b_multilingual/ | FormalFlight3477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mjcc6g | false | null | t3_1mjcc6g | /r/LocalLLaMA/comments/1mjcc6g/is_qwen_306b_multilingual/ | false | false | self | 2 | null |
I built a conversational and customizable open-source meeting assistant | 0 | Hey guys,
two friends and I built an open-source meeting assistant. We’re now at the stage where we have an MVP on GitHub that developers can try out (with just 2 terminal commands), and we’d love your feedback on what to improve. 👉 [https://github.com/joinly-ai/joinly](https://github.com/joinly-ai/joinly)
There ar... | 2025-08-06T18:13:59 | https://v.redd.it/4phxa1r6xfhf1 | Square-Test-515 | /r/LocalLLaMA/comments/1mjc6b1/i_built_an_conversational_and_customizable/ | 1970-01-01T00:00:00 | 0 | {} | 1mjc6b1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4phxa1r6xfhf1/DASHPlaylist.mpd?a=1757225645%2CMmVlNGIzZDI5NmNlNjIzNzc3Y2UyODUxMWMwZjIxNTM1Yzk0OGY5NTIyODQ2ZDljY2I2MzI2YTU4ZTlkYzkwYw%3D%3D&v=1&f=sd', 'duration': 174, 'fallback_url': 'https://v.redd.it/4phxa1r6xfhf1/DASH_1080.mp4?source=fallback', '... | t3_1mjc6b1 | /r/LocalLLaMA/comments/1mjc6b1/i_built_an_conversational_and_customizable/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YnQxMDgwczZ4ZmhmMVKaDc_7hhE4hnX79EFyzHPtMW2DaPs0SZRD8_SFGHPH', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/YnQxMDgwczZ4ZmhmMVKaDc_7hhE4hnX79EFyzHPtMW2DaPs0SZRD8_SFGHPH.png?width=108&crop=smart&format=pjpg&auto=webp&s=af6ebf1fe41b1c121b4d36af13a13d36682f8... |