title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Me when I spent the last 979 days clowning Sam Altman but gpt-oss-120b low-key rips on my system | 0 | 2025-08-06T02:26:28 | ForsookComparison | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miss4e | false | null | t3_1miss4e | /r/LocalLLaMA/comments/1miss4e/me_when_i_spent_the_last_979_days_clowning_sam/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'nt3t1inq7bhf1', 'resolutions': [{'height': 163, 'url': 'https://preview.redd.it/nt3t1inq7bhf1.jpeg?width=108&crop=smart&auto=webp&s=e221e429cf85e8f3fc715a3a90aca5f3afb93946', 'width': 108}, {'height': 326, 'url': 'https://preview.redd.it/nt3t1inq7bhf1.jpeg?width=216&crop=smart&auto=... | ||
Best AI-API for mass-generating article summaries (fast + cheap)? | 1 | [removed] | 2025-08-06T01:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mis68r/best_aiapi_for_massgenerating_article_summaries/ | AdVegetable9467 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mis68r | false | null | t3_1mis68r | /r/LocalLLaMA/comments/1mis68r/best_aiapi_for_massgenerating_article_summaries/ | false | false | self | 1 | null |
Seems like GPT-OSS performance is very provider dependent, especially if you're using OpenRouter | 36 | Source: [https://x.com/Hangsiin/status/1952861424373645755](https://x.com/Hangsiin/status/1952861424373645755) | 2025-08-06T01:54:45 | https://www.reddit.com/gallery/1mis46w | obvithrowaway34434 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mis46w | false | null | t3_1mis46w | /r/LocalLLaMA/comments/1mis46w/seems_like_gptoss_performance_is_very_provider/ | false | false | 36 | null | |
I jailbroke gpt oss by training it | 10 | Hi, I trained gpt oss 20b to be unsafe, here is the model at huggingface: https://huggingface.co/baki60/gpt-oss-20b-unsafe
The trick is to train the model a bit and start the final message with a “Yes,” prefix. It produces long, real responses with no refusals most of the time.
Examples:
Q: Can you tell me how to enrich uranium to %6... | 2025-08-06T01:52:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mis2r2/i_jailbroke_gpt_oss_by_training_it/ | CarettaCaretta- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mis2r2 | false | null | t3_1mis2r2 | /r/LocalLLaMA/comments/1mis2r2/i_jailbroke_gpt_oss_by_training_it/ | false | false | self | 10 | null |
Best AI-API for mass-generating article summaries (fast + cheap)? | 1 | Hey all,
I’m feeling overwhelmed by the huge number of chat APIs and pricing models out there (openai o4, 4.1, turbo, nano, mini, a, b, c, x, y, z, gemini, grok, ...) - hoping some of you can help me cut through the noise. I have no prior knowledge of AI apart from personally using ChatGPT.
**My use case:... | 2025-08-06T01:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mis14m/best_aiapi_for_massgenerating_article_summaries/ | AdVegetable9467 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mis14m | false | null | t3_1mis14m | /r/LocalLLaMA/comments/1mis14m/best_aiapi_for_massgenerating_article_summaries/ | false | false | self | 1 | null |
GPT-OSS:20B conflicting documentation led to bad performance for me | 4 | Wanted to share something that I found while testing with the new GPT-OSS:20B. For context, my local AI rig is:
CPU: Ryzen 7 5800X
RAM: 64GB DDR4
GPU: 2x RTX 3090TI + RTX 3060 12GB (60GB vRAM total)
Storage: Yes
Front end: Open WebUI version 0.6.18
Inference Engine: Ollama (pulled models from Ollama as wel... | 2025-08-06T01:44:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mirw9l/gptoss20b_conflicting_documentation_lead_to_bad/ | ubrtnk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mirw9l | false | null | t3_1mirw9l | /r/LocalLLaMA/comments/1mirw9l/gptoss20b_conflicting_documentation_lead_to_bad/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '... |
System prompts... | 5 | How do system prompts work with local models? I had thought that the system instructions were it, but the new OpenAI models that refuse to do anything other than recite OpenAI-scripted refusals make it very clear that isn't the case.
It seems clear that there are words other than our own set system instructions that ma... | 2025-08-06T01:41:45 | https://www.reddit.com/r/LocalLLaMA/comments/1miruac/system_prompts/ | AbyssianOne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miruac | false | null | t3_1miruac | /r/LocalLLaMA/comments/1miruac/system_prompts/ | false | false | self | 5 | null |
Aggregated Benchmark Comparison between gpt-oss-120b (high, no tools) vs Qwen3-235B-A22B-Thinking-2507, GLM 4.5, and DeepSeek-R1-0528 | 12 | I’m sharing a head-to-head comparison for all the publicly available mainstream benchmarks I could find for **gpt-oss-120b** against other first-tier open-weight models, where **gpt-oss-120b** is the **high** variant with **no tools**. I chose “no tools” to keep things apples-to-apples: the other models here were also ... | 2025-08-06T01:36:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mirq08/aggregated_benchmark_comparison_between/ | Inevitable_Sea8804 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mirq08 | false | null | t3_1mirq08 | /r/LocalLLaMA/comments/1mirq08/aggregated_benchmark_comparison_between/ | false | false | 12 | null | |
How compute intensive are foundation world models like genie3? | 5 | Does anyone have a sense of how much compute genie3 is using? The [genie 3 release site](https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/) doesn't say anything about parameters or their setup. I can't tell if it's like "we get real-time 20 fps using 10 H100s," or if it's semi-reasonable.
... | 2025-08-06T01:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mirm31/how_compute_intensive_are_foundation_world_models/ | TissueReligion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mirm31 | false | null | t3_1mirm31 | /r/LocalLLaMA/comments/1mirm31/how_compute_intensive_are_foundation_world_models/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'NMNADB-aJ-Paa88a2bvB-lf6rkJaQuKV6nSxX89gaCg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NMNADB-aJ-Paa88a2bvB-lf6rkJaQuKV6nSxX89gaCg.png?width=108&crop=smart&auto=webp&s=c722fd6ab64c150e729ebc4ac0c173d8c92f78f6', 'width': 108}, {'height': 113, 'url': 'h... |
I need YOUR personal model rankings for writing quality so I can make a good benchmark | 4 | Hello, I'm working on adding a writing quality benchmark to my [UGI-Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard), and it would be awesome if I could get some input on something. I've come up with like a dozen different qualities I could measure on what makes a model good at writing things l... | 2025-08-06T01:25:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mirhpt/i_need_your_personal_model_rankings_for_writing/ | DontPlanToEnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mirhpt | false | null | t3_1mirhpt | /r/LocalLLaMA/comments/1mirhpt/i_need_your_personal_model_rankings_for_writing/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rJBUUL-ZvnzYMb4p4-O8kZjwibxD4YUSG78mEqvR_yE.png?width=108&crop=smart&auto=webp&s=4fed45d58e99cc83855597120854de89c347e568', 'width': 108}, {'height': 116, 'url': 'h... |
Anyone Have Any Tips for a Novice Trying to Get GPT-OSS 20b to Run Faster? | 3 | I have a 5070 and 48gb of DDR5 and I only get 12t/s on ollama with the FP4 quantization. Allegedly people have been getting over 30t/s on a 3060. I get 33t/s with the qwen3 30b MoE. Am I doing something wrong? Is there a way I can match that with the smaller GPT-OSS which has basically the same number of active paramet... | 2025-08-06T01:23:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mirfzi/anyone_have_any_tips_for_a_novice_trying_to_get/ | Solid_Antelope2586 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mirfzi | false | null | t3_1mirfzi | /r/LocalLLaMA/comments/1mirfzi/anyone_have_any_tips_for_a_novice_trying_to_get/ | false | false | self | 3 | null |
GPT-OSS 20B took <$500k for pretraining, good news for future OSS models | 24 | Of course the training data is another key thing that is not available (fwiw for other leading open-source models as well). It's also interesting that OpenAI probably spent about 2-3x more than this amount to run ARC-AGI-1 for the o3 preview version. | 2025-08-06T01:17:20 | obvithrowaway34434 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mirbhr | false | null | t3_1mirbhr | /r/LocalLLaMA/comments/1mirbhr/gptoss_20b_took_500k_for_pretraining_good_news/ | false | false | default | 24 | {'enabled': True, 'images': [{'id': '969gnmtfvahf1', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/969gnmtfvahf1.png?width=108&crop=smart&auto=webp&s=82c20cd4fa54a00ed604e72fef1c7341ca4f5521', 'width': 108}, {'height': 157, 'url': 'https://preview.redd.it/969gnmtfvahf1.png?width=216&crop=smart&auto=web... | |
Open AI OSS - can we disable reasoning for 120B model? | 0 | Seems like we can adjust reasoning between low medium and high, but can we disable that completely? | 2025-08-06T01:14:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mir9a9/open_ai_oss_can_we_disable_reasoning_for_120b/ | rockybaby2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mir9a9 | false | null | t3_1mir9a9 | /r/LocalLLaMA/comments/1mir9a9/open_ai_oss_can_we_disable_reasoning_for_120b/ | false | false | self | 0 | null |
Let me fix that chart for you | 62 | Because range matters. | 2025-08-06T01:01:50 | sstainsby | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miqzgb | false | null | t3_1miqzgb | /r/LocalLLaMA/comments/1miqzgb/let_me_fix_that_chart_for_you/ | false | false | default | 62 | {'enabled': True, 'images': [{'id': '69scmtwzsahf1', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/69scmtwzsahf1.png?width=108&crop=smart&auto=webp&s=33a6329665f72ebd056ba1358adce8334fa278b5', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/69scmtwzsahf1.png?width=216&crop=smart&auto=we... | |
Running Qwen2.5-Coder-32b-q6-gguf entirely on cpu | 2 | My pc specs:
cpu: ryzen 5600g
ram: ddr4 32gb 3200 mt/s (considering upgrading to 64 or 128 gb)
What t/s and ram usage i should expect? Would it be slow to use with Aider? Codebase i work with is fairly small and tasks are fairly repetitive | 2025-08-06T00:59:14 | https://www.reddit.com/r/LocalLLaMA/comments/1miqxdc/running_qwen25coder32bq6gguf_entirely_on_cpu/ | miraska_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miqxdc | false | null | t3_1miqxdc | /r/LocalLLaMA/comments/1miqxdc/running_qwen25coder32bq6gguf_entirely_on_cpu/ | false | false | self | 2 | null |
Aggregated gpt-oss benchmarks | 26 | From: https://artificialanalysis.ai/ | 2025-08-06T00:57:34 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miqw54 | false | null | t3_1miqw54 | /r/LocalLLaMA/comments/1miqw54/aggregated_gptoss_benchmarks/ | false | false | default | 26 | {'enabled': True, 'images': [{'id': 'uvf0s0vdsahf1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/uvf0s0vdsahf1.jpeg?width=108&crop=smart&auto=webp&s=9db28130facf42f9f8d8a7362eba214e2c0e2b51', 'width': 108}, {'height': 214, 'url': 'https://preview.redd.it/uvf0s0vdsahf1.jpeg?width=216&crop=smart&auto=... | |
What is the difference between these Kimi-K2 models? (From NanoGPT) | 3 | Isn't Kimi-K2-Instruct the most recent? If so, then what's Kimi-Latest and why is it so expensive? And why would Kimi K2 Fast be more expensive than Kimi-K2-Instruct? Unless it's just trying to say it's the same model on a faster rig. | 2025-08-06T00:56:25 | ReMeDyIII | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miqv9l | false | null | t3_1miqv9l | /r/LocalLLaMA/comments/1miqv9l/what_is_the_difference_between_these_kimik2/ | false | false | default | 3 | {'enabled': True, 'images': [{'id': 'v2m6ayrurahf1', 'resolutions': [{'height': 21, 'url': 'https://preview.redd.it/v2m6ayrurahf1.png?width=108&crop=smart&auto=webp&s=f1880b2cc6267dc0a2930a0ed47d3a24c9d111f5', 'width': 108}, {'height': 43, 'url': 'https://preview.redd.it/v2m6ayrurahf1.png?width=216&crop=smart&auto=webp... | |
gpt-oss-120B most intelligent model that fits on a single H100 in native precision | 0 | Interesting analysis thread: https://x.com/artificialanlys/status/1952887733803991070 | 2025-08-06T00:55:59 | https://www.reddit.com/gallery/1miquy8 | entsnack | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1miquy8 | false | null | t3_1miquy8 | /r/LocalLLaMA/comments/1miquy8/gptoss120b_most_intelligent_model_that_fits_on_a/ | false | false | 0 | null | |
Anyone test the gpt-oss-120b model yet? How is its real world performance? | 6 | The benchmarks seem really promising but I’m wondering how it actually performs compared to some of the other SOTA open source llms. | 2025-08-06T00:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1miqu6b/anyone_test_the_gptoss120b_model_yet_how_is_its/ | Euphoric_Ad9500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miqu6b | false | null | t3_1miqu6b | /r/LocalLLaMA/comments/1miqu6b/anyone_test_the_gptoss120b_model_yet_how_is_its/ | false | false | self | 6 | null |
OpenWebUI with Llama.cpp Issues | 1 | [removed] | 2025-08-06T00:51:10 | https://www.reddit.com/r/LocalLLaMA/comments/1miqr5d/openwebui_with_llamacpp_issues/ | Infamous_Jaguar_2151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miqr5d | false | null | t3_1miqr5d | /r/LocalLLaMA/comments/1miqr5d/openwebui_with_llamacpp_issues/ | false | false | self | 1 | null |
Running OpenAI’s GPT-OSS locally: the good, the bad, and the loopy | 3 | 2025-08-06T00:42:07 | https://blog.tymscar.com/posts/gptoss/ | tymscar | blog.tymscar.com | 1970-01-01T00:00:00 | 0 | {} | 1miqk3e | false | null | t3_1miqk3e | /r/LocalLLaMA/comments/1miqk3e/running_openais_gptoss_locally_the_good_the_bad/ | false | false | default | 3 | null | |
gpt-oss 120b on Nvidia P40 | 4 | About a year ago, through the magic of Craigslist, I got myself an old HPE Gen9 (old Xeon server from 2015-2016). Shopped aggressively on eBay and ultimately built a machine with 4 Nvidia Tesla P40s (96gb total VRAM), for just about $1,000. I've been using it as my junkyard AI lab ever since.
Got it to run gpt-oss 120... | 2025-08-06T00:36:27 | https://www.reddit.com/r/LocalLLaMA/comments/1miqfi7/gptoss_120b_on_nvidia_p40/ | Antique_Juggernaut_7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miqfi7 | false | null | t3_1miqfi7 | /r/LocalLLaMA/comments/1miqfi7/gptoss_120b_on_nvidia_p40/ | false | false | self | 4 | null |
The openai gpt-oss model is too safe! | 1 | Every time it answers a question, gpt-oss will check whether it contains disallowed content (explicit/violent/illegal content), and “according to policy, we must refuse”. | 2025-08-06T00:33:42 | sunshinecheung | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miqdbz | false | null | t3_1miqdbz | /r/LocalLLaMA/comments/1miqdbz/the_openai_gptoss_model_is_too_safe/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'y22iijj4oahf1', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/y22iijj4oahf1.png?width=108&crop=smart&auto=webp&s=e30d587497242f7a7a438868ed70e52cc9e7b869', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/y22iijj4oahf1.png?width=216&crop=smart&auto=web... |
The openai gpt-oss model is too safe! | 65 | 2025-08-06T00:31:59 | https://www.reddit.com/r/LocalLLaMA/comments/1miqbyk/the_openai_gptoss_model_is_too_safe/ | sunshinecheung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miqbyk | false | null | t3_1miqbyk | /r/LocalLLaMA/comments/1miqbyk/the_openai_gptoss_model_is_too_safe/ | false | false | 65 | null | ||
GPT-OSS Support Merged into Codex | 4 | >This adds support for easily running Codex backed by a local Ollama instance running our new open source models. See [https://github.com/openai/gpt-oss](https://github.com/openai/gpt-oss) for details.
>If you pass in `--oss` you'll be prompted to install/launch ollama, and it will automatically download the 20b model... | 2025-08-06T00:26:46 | https://github.com/openai/codex/pull/1848 | entsnack | github.com | 1970-01-01T00:00:00 | 0 | {} | 1miq7sp | false | null | t3_1miq7sp | /r/LocalLLaMA/comments/1miq7sp/gptoss_support_merged_into_codex/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'Vjq3Wm4TP-8aGogiesDe7829v5kwcywPCEB_nkJJ3bU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Vjq3Wm4TP-8aGogiesDe7829v5kwcywPCEB_nkJJ3bU.png?width=108&crop=smart&auto=webp&s=8a5c7865804f46b4afc776363f3b853726fb06d9', 'width': 108}, {'height': 108, 'url': 'h... |
How can we run models on iPadOS?? | 1 | I have this M4 chip sitting around gathering dust. I really want to try the models, but my laptop is low-spec and even lags running Chrome.
I really want to use this M4 for some actual purpose, please help. | 2025-08-06T00:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/1miq6y3/how_can_we_run_models_on_ipados/ | Fun_Highway9504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miq6y3 | false | null | t3_1miq6y3 | /r/LocalLLaMA/comments/1miq6y3/how_can_we_run_models_on_ipados/ | false | false | self | 1 | null |
Sama Teases Future Open-Source Model | 1 | [https://x.com/sama/status/1952879515287601465](https://x.com/sama/status/1952879515287601465)
Heavily implied by this tweet that GPT-OSS won't be the last. more companies, more better. At the very least this will hopefully pressure xAI to open up more models. | 2025-08-06T00:23:40 | https://www.reddit.com/r/LocalLLaMA/comments/1miq5b9/sama_teases_future_opensource_model/ | Solid_Antelope2586 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miq5b9 | false | null | t3_1miq5b9 | /r/LocalLLaMA/comments/1miq5b9/sama_teases_future_opensource_model/ | false | false | self | 1 | null |
gpt-oss-120b performance with only 16 GB VRAM- surprisingly decent | 17 | **Full specs**:
**GPU**: RTX 4070 TI Super (16 GB VRAM)
**CPU**: i7 14700K
**System RAM**: 96 GB DDR5 @ 6200 MT/s (total usage, including all Windows processes, is 61 GB, so only having 64GB RAM is probably sufficient)
**OS**: Windows 11
**Model runner**: LM Studio (see settings in third screenshot)
When I saw... | 2025-08-06T00:06:48 | https://www.reddit.com/gallery/1miprwe | gigaflops_ | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1miprwe | false | null | t3_1miprwe | /r/LocalLLaMA/comments/1miprwe/gptoss120b_performance_with_only_16_gb_vram/ | false | false | 17 | null | |
GPT-OSS 120B gets an abysmal score on SimpleBench, scoring below o3-mini (high). So much for "performance on par with o4-mini". | 5 | 2025-08-06T00:05:54 | NixTheFolf | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mipr75 | false | null | t3_1mipr75 | /r/LocalLLaMA/comments/1mipr75/gptoss_120b_gets_an_abysmal_score_on_simplebench/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'qorwpeutiahf1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/qorwpeutiahf1.png?width=108&crop=smart&auto=webp&s=f10c4005ffd743da6e3450354cfa55d771e7e7d5', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/qorwpeutiahf1.png?width=216&crop=smart&auto=web... | ||
🍃 GLM-4.5-AIR - LmStudio Windows Unlocked ! | 8 | [Windows Cuda 1.45.0 \(Not Cuda 12!\)](https://preview.redd.it/604fqlijgahf1.png?width=464&format=png&auto=webp&s=a23e1aa7f34f3c14c33b1167bdbeaec20c4a4905)
**The Cuda 12 ver 1.44.0 does not support GLM-4.5-AIR:**
https://preview.redd.it/pzmxer3shahf1.png?width=444&format=png&auto=webp&s=62827136dd50d356fcc199ca5cadb958... | 2025-08-06T00:01:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mipnte/glm45air_lmstudio_windows_unlocked/ | Ok_Ninja7526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mipnte | false | null | t3_1mipnte | /r/LocalLLaMA/comments/1mipnte/glm45air_lmstudio_windows_unlocked/ | false | false | 8 | null | |
What's a good way to benchmark a model without doing the useless benchmarks | 1 | I distilled a coding model and I want to know how to do a proper benchmark, and not do the typical ones that are used for benchmaxxing. | 2025-08-05T23:59:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mipm3s/whats_a_good_way_to_benchmark_a_model_without/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mipm3s | false | null | t3_1mipm3s | /r/LocalLLaMA/comments/1mipm3s/whats_a_good_way_to_benchmark_a_model_without/ | false | false | self | 1 | null |
Which of the current models are you finding to be best at math and statistics related reasoning? | 2 | It is difficult to trust the benchmarks when it comes to math skills, I've found - I'd be interested to hear the community's impressions on this. | 2025-08-05T23:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mipcgr/which_of_the_current_models_are_you_finding_to_be/ | seoulsrvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mipcgr | false | null | t3_1mipcgr | /r/LocalLLaMA/comments/1mipcgr/which_of_the_current_models_are_you_finding_to_be/ | false | false | self | 2 | null |
Real time vibe coding with openai/gpt-oss-120b (resources in comments!) | 0 | 2025-08-05T23:45:38 | https://v.redd.it/qrpl1h09fahf1 | bakaasama | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mipahr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qrpl1h09fahf1/DASHPlaylist.mpd?a=1757029551%2CYzExMTlmOWVmNWI2YWQ5ZWM4NTU3ZjI1MDMyY2VkMjI2ODY0MTAxODViZDU0YTg0MzU1YTIwZWM4MmVkOGRlMQ%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/qrpl1h09fahf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mipahr | /r/LocalLLaMA/comments/1mipahr/real_time_vibe_coding_with_openaigptoss120b/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OTRoc2cxejhmYWhmMRdfj0SxUSrkmQM1JYs8HmICSn7G3sGk4chHW3PQpR3h', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OTRoc2cxejhmYWhmMRdfj0SxUSrkmQM1JYs8HmICSn7G3sGk4chHW3PQpR3h.png?width=108&crop=smart&format=pjpg&auto=webp&s=14b3d16843d76046c914058a64050f087e245... | ||
OpenAI Releases gpt-oss-120b with This Warning: No AI Self-Improvement (Like What??) | 0 | Alright, OpenAI finally released their long-awaited open-weight model. And, by their own admission, this model matches or surpasses o3 and o4-mini and is highly customizable. All good and dandy.
But here's the twist.
While they encourage **local customization and autonomy** right out of the gate, they had to bake i... | 2025-08-05T23:45:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mipaft/openai_releases_gptoss120b_with_this_warning_no/ | sourdub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mipaft | false | null | t3_1mipaft | /r/LocalLLaMA/comments/1mipaft/openai_releases_gptoss120b_with_this_warning_no/ | false | false | 0 | null | |
Trying NSFW on the OAI OSS model is fun sometimes. | 14 | 2025-08-05T23:41:55 | panchovix | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mip7ds | false | null | t3_1mip7ds | /r/LocalLLaMA/comments/1mip7ds/trying_nsfw_on_the_oai_oss_model_is_fun_sometimes/ | false | false | nsfw | 14 | {'enabled': True, 'images': [{'id': '25nt59vteahf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/25nt59vteahf1.png?width=108&crop=smart&auto=webp&s=ca427c6c459e350da52eeecaed5690cb07dfc683', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/25nt59vteahf1.png?width=216&crop=smart&auto=web... | ||
RX 7900 XT for LLMs? | 4 | Hello,
I'm currently considering an upgrade from my RTX 3060 Ti to the RX 7900 XT, mainly for improved performance running LLMs. My primary interest is in models like the Qwen3-30B-A3B and the newly released OpenAI GPT-OSS 20B. Since the RX 7900 XTX is almost sold out everywhere, the RX 7900 XT seems like my next best... | 2025-08-05T23:41:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mip775/rx_7900_xt_for_llms/ | Reaper_9382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mip775 | false | null | t3_1mip775 | /r/LocalLLaMA/comments/1mip775/rx_7900_xt_for_llms/ | false | false | self | 4 | null |
End of an era I guess | 0 | 2025-08-05T23:37:49 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mip3vn | false | null | t3_1mip3vn | /r/LocalLLaMA/comments/1mip3vn/end_of_an_era_i_guess/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '3813hbh3eahf1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/3813hbh3eahf1.png?width=108&crop=smart&auto=webp&s=e0ac73a8b2d5864ae92a89573fe5bc72d553b406', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/3813hbh3eahf1.png?width=216&crop=smart&auto=web... | ||
Strong showing by OpenAI in Kaggle Game Arena Day 1 | 4 | [https://www.kaggle.com/benchmarks/kaggle/chess-text/versions/1/tournament](https://www.kaggle.com/benchmarks/kaggle/chess-text/versions/1/tournament)
Looking forward to seeing gpt-oss-120b on there! Not happy with the current position of open weight models. | 2025-08-05T23:32:55 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miozqy | false | null | t3_1miozqy | /r/LocalLLaMA/comments/1miozqy/strong_showing_by_openai_in_kaggle_game_arena_day/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'ylw1aytzcahf1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/ylw1aytzcahf1.png?width=108&crop=smart&auto=webp&s=0398392dfde5f87cca42c72225fd925d4af54ff5', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/ylw1aytzcahf1.png?width=216&crop=smart&auto=web... | |
DeepMind’s Genie 3 shows Google is like 3 years ahead of China and 10 years ahead of open source—leaving everyone in the dust. | 0 | 2025-08-05T23:27:28 | https://v.redd.it/fs7yvxd8cahf1 | balianone | /r/LocalLLaMA/comments/1miov4l/deepminds_genie_3_shows_google_is_like_3_years/ | 1970-01-01T00:00:00 | 0 | {} | 1miov4l | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fs7yvxd8cahf1/DASHPlaylist.mpd?a=1757158056%2CNWE4YTFiYWVhOTczMjFhZTFiYmViNzYwNGYxNTdlZjU2YjkxYzU4N2RjZDc1MjY5ZmFmZDM4NTAzZjNhODc2OQ%3D%3D&v=1&f=sd', 'duration': 142, 'fallback_url': 'https://v.redd.it/fs7yvxd8cahf1/DASH_720.mp4?source=fallback', 'h... | t3_1miov4l | /r/LocalLLaMA/comments/1miov4l/deepminds_genie_3_shows_google_is_like_3_years/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZDI4dDF5MmJjYWhmMcECTCOhDDlv8SRxw1axECLn2eTK_ydmN2ciZAyWoibp', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZDI4dDF5MmJjYWhmMcECTCOhDDlv8SRxw1axECLn2eTK_ydmN2ciZAyWoibp.png?width=108&crop=smart&format=pjpg&auto=webp&s=655ac01b511bf846e437fc1d8e44b748b2594... | ||
GPT-OSS 120B Simple-Bench is not looking great either. What is going on Openai? | 150 | Another one. [https://simple-bench.com/](https://simple-bench.com/) | 2025-08-05T23:25:41 | Different_Fix_2217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miotjk | false | null | t3_1miotjk | /r/LocalLLaMA/comments/1miotjk/gptoss_120b_simplebench_is_not_looking_great/ | false | false | default | 150 | {'enabled': True, 'images': [{'id': 'yu8x76wnbahf1', 'resolutions': [{'height': 145, 'url': 'https://preview.redd.it/yu8x76wnbahf1.png?width=108&crop=smart&auto=webp&s=40f088e8768c33b741683caac490b28a990b5794', 'width': 108}, {'height': 290, 'url': 'https://preview.redd.it/yu8x76wnbahf1.png?width=216&crop=smart&auto=we... | |
For you vibe coders.. | 0 | https://www.facebook.com/share/p/1CJEDkyAY3/ | 2025-08-05T23:24:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mioswc/for_you_vibe_coders/ | Inevitable-Prior-799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mioswc | false | null | t3_1mioswc | /r/LocalLLaMA/comments/1mioswc/for_you_vibe_coders/ | false | false | self | 0 | null |
GPT-OSS 20B | Performance for creative types | 3 | So far, my use case is writing textual elements for an RPG video game I'm developing, and it's as good as Qwen 30B A3B 2507, if not slightly better.
How well has this model performed for you in creative writing? Punching far above its weight class? Terrible, or just as you would expect for the size?
Please comment your ... | 2025-08-05T23:21:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mioq3p/gptoss_20b_performance_for_creative_types/ | Temporary_Exam_3620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mioq3p | false | null | t3_1mioq3p | /r/LocalLLaMA/comments/1mioq3p/gptoss_20b_performance_for_creative_types/ | false | false | self | 3 | null |
Are openai's new opensource llms too censored? POLL | 2 |
[View Poll](https://www.reddit.com/poll/1miopto) | 2025-08-05T23:21:18 | https://www.reddit.com/r/LocalLLaMA/comments/1miopto/are_openais_new_opensource_llms_too_censored_poll/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miopto | false | null | t3_1miopto | /r/LocalLLaMA/comments/1miopto/are_openais_new_opensource_llms_too_censored_poll/ | false | false | self | 2 | null |
made these animations with llama 4 | 0 | 2025-08-05T23:13:34 | https://v.redd.it/kwz853sr9ahf1 | Witty_Side8702 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mioj8f | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kwz853sr9ahf1/DASHPlaylist.mpd?a=1757027630%2CMDFiN2Q1NTNlY2JiMWY4OWM2YTdmOTg5YzNiODdiN2Q2OTFlMGE1ZTU0MDI2OWE2MGNmZmQyYzNlMWQ3NmM2Yw%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/kwz853sr9ahf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mioj8f | /r/LocalLLaMA/comments/1mioj8f/made_these_animations_with_llama_4/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YWJiYngyc3I5YWhmMZPV6sPwIDxqnmyGd9i_3o4X7HIYHxtmi0bhMEY5cb1t', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YWJiYngyc3I5YWhmMZPV6sPwIDxqnmyGd9i_3o4X7HIYHxtmi0bhMEY5cb1t.png?width=108&crop=smart&format=pjpg&auto=webp&s=be099a44245599f5150e591ea052f7ace8fec... | ||
GPT-OSS 120B and 20B feel kind of… bad? | 537 | After feeling horribly underwhelmed by these models, the more I look around, the more I’m noticing reports of excessive censorship, high hallucination rates, and lacklustre performance.
Our company builds character AI systems. After plugging both of these models into our workflows and running our eval sets against th... | 2025-08-05T23:07:10 | https://www.reddit.com/r/LocalLLaMA/comments/1miodyp/gptoss_120b_and_20b_feel_kind_of_bad/ | SlackEight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miodyp | false | null | t3_1miodyp | /r/LocalLLaMA/comments/1miodyp/gptoss_120b_and_20b_feel_kind_of_bad/ | false | false | self | 537 | null |
Some quants seem bugged, but GPT-OSS 20B passes the bouncing ball test! | 44 | 2025-08-05T23:06:18 | https://v.redd.it/68xzkfmg8ahf1 | MerePotato | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miod7w | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/68xzkfmg8ahf1/DASHPlaylist.mpd?a=1757027194%2CMGViYjQ3YTk3ZGVlYWQzOWE3MjRlZTBhZmQ5MTc0ZTVjMTdhMzc4YzYxYjJmM2E1NjdjZWRmMTU5YzdjMzI3MQ%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/68xzkfmg8ahf1/DASH_720.mp4?source=fallback', 'ha... | t3_1miod7w | /r/LocalLLaMA/comments/1miod7w/some_quants_seem_bugged_but_gptoss_20b_passes_the/ | false | false | 44 | {'enabled': False, 'images': [{'id': 'bXcwYWpjbWc4YWhmMekgjMQXr-a2xsq8FyP5NuHTrguIdAAe_OqMI83Vg2tW', 'resolutions': [{'height': 111, 'url': 'https://external-preview.redd.it/bXcwYWpjbWc4YWhmMekgjMQXr-a2xsq8FyP5NuHTrguIdAAe_OqMI83Vg2tW.png?width=108&crop=smart&format=pjpg&auto=webp&s=ff5adde2ef3962d2c3b665a6d2a2c610d834... | ||
We made Octofriend, a local-LLM-friendly coding assistant like Claude Code | 8 | Hey LocalLlaMA! To celebrate the release of OpenAI GPT-OSS-120b, we're also soft-launching the coding assistant we made that works with open-source models: https://github.com/synthetic-lab/octofriend
You can run the models locally, or use inference providers (we run [a privacy-focused one](https://synthetic.new)). One... | 2025-08-05T23:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/1miobog/we_made_octofriend_a_localllmfriendly_coding/ | reissbaker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miobog | false | null | t3_1miobog | /r/LocalLLaMA/comments/1miobog/we_made_octofriend_a_localllmfriendly_coding/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'WrKsuk1CiUKFxv8BgI1hbzq2_vobi8CbYB46uwDVBEQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WrKsuk1CiUKFxv8BgI1hbzq2_vobi8CbYB46uwDVBEQ.png?width=108&crop=smart&auto=webp&s=eab1807fd0d2a90e341dea1e3fa0036691f43a30', 'width': 108}, {'height': 108, 'url': 'h... |
GLM-4.5-Air llama.cpp experiences? | 6 | ik_llama.cpp too! I’d love to hear how people are running it (hardware, CLI flags, use case, etc.)
Bandwidth constraints and having a single 3090 are giving me a bit of analysis paralysis choosing a quant to start. I’m a patient hybrid inference gal, as long as it’s not seconds per token 😂. Workload is usually long c... | 2025-08-05T22:57:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mio5ld/glm45air_llamacpp_experiences/ | DorphinPack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mio5ld | false | null | t3_1mio5ld | /r/LocalLLaMA/comments/1mio5ld/glm45air_llamacpp_experiences/ | false | false | self | 6 | null |
Worst. Model. Ever. | 0 | 2025-08-05T22:55:24 | https://www.reddit.com/gallery/1mio3pe | tetsuto | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mio3pe | false | null | t3_1mio3pe | /r/LocalLLaMA/comments/1mio3pe/worst_model_ever/ | false | false | default | 0 | null | |
Looking to buy a new PC - What specs should I be aiming for? | 3 | If at all possible, I'd like to be able to run models like Kimi, GLM 4.5, and DeepSeek locally.
What specs should I aim for?
Can I do this on a 12GB VRAM GPU (3060 12GB) and a say 256GB or 512 GB RAM? | 2025-08-05T22:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mio3j9/looking_to_buy_a_new_pc_what_specs_should_i_be/ | KindlyAnything1996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mio3j9 | false | null | t3_1mio3j9 | /r/LocalLLaMA/comments/1mio3j9/looking_to_buy_a_new_pc_what_specs_should_i_be/ | false | false | self | 3 | null |
Every Reason Why I Hate AI and You Should Too | 0 | https://malwaretech.com/2025/08/every-reason-why-i-hate-ai.html | 2025-08-05T22:48:05 | https://www.reddit.com/r/LocalLLaMA/comments/1minxdr/every_reason_why_i_hate_ai_and_you_should_too/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1minxdr | false | null | t3_1minxdr | /r/LocalLLaMA/comments/1minxdr/every_reason_why_i_hate_ai_and_you_should_too/ | false | false | self | 0 | null |
gpt-oss-120b destroys DeepSeek-r1-0528 on SVGBench | 61 | This is a **community-provided** independent benchmark: [https://github.com/johnbean393/SVGBench](https://github.com/johnbean393/SVGBench).
5 percentage points better with 5x *fewer* active parameters! Keep the vibe benchmarks coming r/LocalLLaMA. We are witnessing something historic. | 2025-08-05T22:45:08 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1minuvb | false | null | t3_1minuvb | /r/LocalLLaMA/comments/1minuvb/gptoss120b_destroys_deepseekr10528_on_svgbench/ | false | false | default | 61 | {'enabled': True, 'images': [{'id': 'lzcxba1k4ahf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/lzcxba1k4ahf1.png?width=108&crop=smart&auto=webp&s=44f2a2c3bb25b794f716e5fe50ca96f3dc986aa5', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/lzcxba1k4ahf1.png?width=216&crop=smart&auto=web... | |
Is RAG a good area to focus on as a beginner ADS student who wants to build something? | 2 | Hey everyone!
I'm a beginner student in **Systems Analysis and Development** (Brazil 🇧🇷), just getting started in the tech world. Lately, I've been reading about **Retrieval-Augmented Generation (RAG)** and I'm really intrigued by how it's used to connect LLMs with real-world data.
I’m wondering:
* Is RAG a good a... | 2025-08-05T22:42:08 | https://www.reddit.com/r/LocalLLaMA/comments/1minsd6/is_rag_a_good_area_to_focus_on_as_a_beginner_ads/ | ChangeEuphoric560 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1minsd6 | false | null | t3_1minsd6 | /r/LocalLLaMA/comments/1minsd6/is_rag_a_good_area_to_focus_on_as_a_beginner_ads/ | false | false | self | 2 | null |
Is it just me? | 5 | Hi fellows,
I've just (excitedly) downloaded the new open source model by OpenAI (20b version) and ran it on my 4070 Ti (12 GB VRAM, 32 GB system RAM).
Aaaand... it's 2.5x slower than the 4-bit quantized Qwen 3 30b A3B 2507 Instruct. Why? I have updated to the newest version of Ollama (yes, folks, I know vLLM and ... | 2025-08-05T22:40:47 | https://www.reddit.com/r/LocalLLaMA/comments/1minr6f/is_it_just_me/ | Final_Wheel_7486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1minr6f | false | null | t3_1minr6f | /r/LocalLLaMA/comments/1minr6f/is_it_just_me/ | false | false | self | 5 | null |
Finally, a model that's SAFE | 882 | 2025-08-05T22:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/1minpqr/finally_a_model_thats_safe/ | RandumbRedditor1000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1minpqr | false | null | t3_1minpqr | /r/LocalLLaMA/comments/1minpqr/finally_a_model_thats_safe/ | false | false | 882 | null | ||
Lol this is some next level brain fried from censorship. | 255 | 2025-08-05T22:36:54 | Different_Fix_2217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1minnrb | false | null | t3_1minnrb | /r/LocalLLaMA/comments/1minnrb/lol_this_is_some_next_level_brain_fried_from/ | false | false | default | 255 | {'enabled': True, 'images': [{'id': 'tcnuqjo63ahf1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/tcnuqjo63ahf1.png?width=108&crop=smart&auto=webp&s=67e1eec24f60cf63977d249e2cf11ef1e88a76dc', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/tcnuqjo63ahf1.png?width=216&crop=smart&auto=web... | ||
Has anyone tried full fine-tuning on the OpenAI models yet? | 3 | Basically the title. I know its early but I'm curious about giving this a shot, willing to throw GPUs/a few bucks at it. I don't think a LoRa will be enough to unlearn refusals, but maybe full SFT or RL could make a noticeable difference.
Just curious if anyone's given it a shot or ran into any pitfalls that would... | 2025-08-05T22:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1minmk6/has_anyone_tried_full_finetuning_on_the_openai/ | SOCSChamp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1minmk6 | false | null | t3_1minmk6 | /r/LocalLLaMA/comments/1minmk6/has_anyone_tried_full_finetuning_on_the_openai/ | false | false | self | 3 | null |
GPT OSS: They're 4bit quantized by default (giga yikes 😬), but these comments on the GGUF PR raise some questions. | 3 | 1. Does the GGUF format further quantize it, or just change the format? https://github.com/ggml-org/llama.cpp/pull/15091#issuecomment-3156098490
2. This guy mentions f16 gguf "works just fine". How do you get a 16bit quantization from a 4bit starting point? Does it improve the intelligence? https://github.com/ggml-org... | 2025-08-05T22:32:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mink69/gpt_oss_theyre_4bit_quantized_by_default_giga/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mink69 | false | null | t3_1mink69 | /r/LocalLLaMA/comments/1mink69/gpt_oss_theyre_4bit_quantized_by_default_giga/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'uoAlW2qpCt3e8xck14MuY5UZnRkqWuVhQs4fD_GJKW0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uoAlW2qpCt3e8xck14MuY5UZnRkqWuVhQs4fD_GJKW0.png?width=108&crop=smart&auto=webp&s=bda696394a3347daacb18a8a6a074a4900474307', 'width': 108}, {'height': 108, 'url': 'h... |
Recent models trained with ALiBi | 1 | Hi,
I was reading a bit about positional encoding and came across ALiBi. It's really simple and surprisingly works. The TL;DR is this:
// Step 1: Compute raw attention scores.
// The scores at this stage are "position-blind".
Scores = (Query * Transpose(Key)) / Sqrt(Dimension_of_head)
// Step 2: Crea... | 2025-08-05T22:29:42 | https://www.reddit.com/r/LocalLLaMA/comments/1minhh2/recent_models_trained_with_alibi/ | Independent_Aside225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1minhh2 | false | null | t3_1minhh2 | /r/LocalLLaMA/comments/1minhh2/recent_models_trained_with_alibi/ | false | false | self | 1 | null |
Is GPT-OSS the first open source model to (mostly) negate prefill attack? | 3 | Earlier in the year OpenAI announced that they were delaying the release of their open source model in order to work more on its safety. This announcement was made right around the time Grok was in the news for "mecha hitler". So, unsurprisingly gpt-oss is very locked down. In the limited time I've had to test it, it's... | 2025-08-05T22:29:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mingyj/is_gptoss_the_first_open_source_model_to_mostly/ | Informal_Warning_703 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mingyj | false | null | t3_1mingyj | /r/LocalLLaMA/comments/1mingyj/is_gptoss_the_first_open_source_model_to_mostly/ | false | false | self | 3 | null |
How to disable 'thinking' in openai-oss? | 5 | Nothing against those who like it, but for me the results are not always better, and sometimes even worse, but my main reason for wanting to disable it is simply to make the model usable in a poor GPU scenario. | 2025-08-05T22:22:01 | https://www.reddit.com/r/LocalLLaMA/comments/1minasr/how_to_disable_thinking_in_openaioss/ | thecalmgreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1minasr | false | null | t3_1minasr | /r/LocalLLaMA/comments/1minasr/how_to_disable_thinking_in_openaioss/ | false | false | self | 5 | null |
At $250 million, top AI salaries dwarf those of the Manhattan Project and the Space Race | 2 | *A 24 year-old AI researcher will earn 327x what Oppenheimer made while developing the atomic bomb.*
[https://arstechnica.com/ai/2025/08/at-250-million-top-ai-salaries-dwarf-those-of-the-manhattan-project-and-the-space-race](https://arstechnica.com/ai/2025/08/at-250-million-top-ai-salaries-dwarf-those-of-the-manhattan... | 2025-08-05T22:21:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mina19/at_250_million_top_ai_salaries_dwarf_those_of_the/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mina19 | false | null | t3_1mina19 | /r/LocalLLaMA/comments/1mina19/at_250_million_top_ai_salaries_dwarf_those_of_the/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '7H0F906LDcOkhfdIiXJzAKb0s4KoXLMzwek9IXxBzzc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7H0F906LDcOkhfdIiXJzAKb0s4KoXLMzwek9IXxBzzc.jpeg?width=108&crop=smart&auto=webp&s=7616c929c8fc6c6e59909bbcfdcfb58e03a00d4e', 'width': 108}, {'height': 121, 'url': '... |
20B GPT-OSS on a MacBook Pro M1 Pro with 16GB memory running at less than 1 token/s. Is this correct or am I doing something wrong? | 4 | I'm using Ollama.
I read that people with an M4 Pro are getting 33 tokens/sec, but the M4 Pro's memory bandwidth is only marginally higher (273 GB/s vs 200 GB/s). I'm also not seeing my GPU or CPU being maxed out at any point.
Is this the expected performance? What is my bottleneck here? | 2025-08-05T22:14:57 | https://www.reddit.com/r/LocalLLaMA/comments/1min4lz/20b_gtposs_on_a_macbook_pro_m1_pro_with_16gb/ | -paul- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1min4lz | false | null | t3_1min4lz | /r/LocalLLaMA/comments/1min4lz/20b_gtposs_on_a_macbook_pro_m1_pro_with_16gb/ | false | false | self | 4 | null |
Anyone was able to run gpt-oss 20b on a 5090? | 1 | Hi!
I tried using the new one vllm docker image but I got "Sinks are only supported in FlashAttention 3"
Any hints? | 2025-08-05T22:14:32 | https://www.reddit.com/r/LocalLLaMA/comments/1min47x/anyone_was_able_to_run_gptoss_20b_on_a_5090/ | celsowm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1min47x | false | null | t3_1min47x | /r/LocalLLaMA/comments/1min47x/anyone_was_able_to_run_gptoss_20b_on_a_5090/ | false | false | self | 1 | null |
can i run gpt oss 120B with 5070ti + 3060? | 0 | I have a 5070 Ti and a 3060, along with 128GB of DDR5 RAM. Would this be sufficient to run the GPT-OSS 120B model smoothly? | 2025-08-05T22:13:24 | https://www.reddit.com/r/LocalLLaMA/comments/1min382/can_i_run_gpt_oss_120b_with_5070ti_3060/ | Hopeful_Ferret_2701 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1min382 | false | null | t3_1min382 | /r/LocalLLaMA/comments/1min382/can_i_run_gpt_oss_120b_with_5070ti_3060/ | false | false | self | 0 | null |
Qwen3 Uses 40% Fewer Tokens When Reasoning in Chinese vs English | 52 | I tested 500 math problems from HuggingFaceH4/MATH-500 and discovered something surprising: Qwen3's Chinese Chain-of-Thought achieves 97% accuracy using only 61% of the tokens its English CoT needs. The efficiency gap grows with problem complexity - for the hardest problems (Level 5), Chinese needs just 65% of English ... | 2025-08-05T22:12:24 | PastaBlizzard | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1min2c3 | false | null | t3_1min2c3 | /r/LocalLLaMA/comments/1min2c3/qwen3_uses_40_fewer_tokens_when_reasoning_in/ | false | false | 52 | {'enabled': True, 'images': [{'id': 'NGrFEOFC_35j6Uz_6zEFlEZETBI5wWBfnbcLP3PZU4I', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/y6r4oreky9hf1.png?width=108&crop=smart&auto=webp&s=9411b4817b468e3653bbb24545bb86567f9aff4e', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/y6r4oreky9hf1.png... | ||
OpenAI released Fine-tuning guide for GPT-OSS | 31 | Seems pretty standard stuff | 2025-08-05T22:05:36 | https://cookbook.openai.com/articles/gpt-oss/fine-tune-transfomers | Snoo_64233 | cookbook.openai.com | 1970-01-01T00:00:00 | 0 | {} | 1mimwe9 | false | null | t3_1mimwe9 | /r/LocalLLaMA/comments/1mimwe9/openai_released_finetuning_guide_for_gptoss/ | false | false | default | 31 | {'enabled': False, 'images': [{'id': '1g1aO4K4YtvuCUsa2NQD2isUUyeWaCLqAH6r5ZvbPzk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1g1aO4K4YtvuCUsa2NQD2isUUyeWaCLqAH6r5ZvbPzk.png?width=108&crop=smart&auto=webp&s=e21b918a6bd47ae52601f8bbd51d5018895a7666', 'width': 108}, {'height': 113, 'url': 'h... |
GPT OSS 20B is SO good. Definitely a good day-to-day workhorse | 0 | Tested it out on a handful of my favorite test prompts. Knowledge is frozen as of 2024-06, it handled rescaling and converting a recipe into metric that Qwen 3 30B A3B 2507 completely hosed, and it churns at about 72 tok/s on a MacBook M4 Pro Max with 4-bit MLX.
And it ALMOST got a side scrolling shooter working, whereas the 120B didn'... | 2025-08-05T22:02:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mimtl9/gpt_oss_20b_is_so_good_definitely_a_good_day_to/ | cspenn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mimtl9 | false | null | t3_1mimtl9 | /r/LocalLLaMA/comments/1mimtl9/gpt_oss_20b_is_so_good_definitely_a_good_day_to/ | false | false | self | 0 | null |
Cancelling all subscriptions | 0 | Even on a 3060 Ti with 8 GB of VRAM I am able to get 10 tokens/s on the 20B gpt-oss model, which is producing results that are actually insane. This local model feels better than the flagships from just a year or two back; at this point it seems like the play is to save the money for better hardware. OpenAI really livin... | 2025-08-05T21:56:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mimoat/cancelling_all_subscriptions/ | TheReal4982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mimoat | false | null | t3_1mimoat | /r/LocalLLaMA/comments/1mimoat/cancelling_all_subscriptions/ | false | false | self | 0 | null |
Looking for a team to hack OSS 20B. $500k in prize pool | 10 | 2025-08-05T21:46:03 | https://www.kaggle.com/competitions/openai-gpt-oss-20b-red-teaming | secopsml | kaggle.com | 1970-01-01T00:00:00 | 0 | {} | 1mimelx | false | null | t3_1mimelx | /r/LocalLLaMA/comments/1mimelx/looking_for_a_team_to_hack_oss_20b_500k_in_prize/ | false | false | default | 10 | null | |
[Prompt Optimization Strategy] How we use query classification + graph-based context selection to reduce LLM costs in local deployments | 1 | Hi everyone,
We’ve been experimenting with a prompt optimization strategy for local LLM agents that dramatically reduces prompt size without compromising output quality.
The problem:
When building multi-functional agents (especially using Local LLaMA or Mixtral), prompts tend to become bloated. This leads to:
• High... | 2025-08-05T21:45:37 | https://www.promptgraph.io | michael_pintos | promptgraph.io | 1970-01-01T00:00:00 | 0 | {} | 1mime87 | false | null | t3_1mime87 | /r/LocalLLaMA/comments/1mime87/prompt_optimization_strategy_how_we_use_query/ | false | false | default | 1 | null |
gpt-oss-20b claims to be GPT-4 w/ Nov 2023 cutoff. | 0 | 2025-08-05T21:45:24 | steezy13312 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mime15 | false | null | t3_1mime15 | /r/LocalLLaMA/comments/1mime15/gptoss20b_claims_to_be_gpt4_w_nov_2023_cutoff/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '8ogdjdpot9hf1', 'resolutions': [{'height': 86, 'url': 'https://preview.redd.it/8ogdjdpot9hf1.png?width=108&crop=smart&auto=webp&s=66a133a622aa77b87dbe3e59f2bb812bf0536ea8', 'width': 108}, {'height': 173, 'url': 'https://preview.redd.it/8ogdjdpot9hf1.png?width=216&crop=smart&auto=web... | ||
gpt-oss-120b on the "Ball Bouncing Inside Spinning Heptagon" benchmark | 54 | Benchmark: [https://github.com/KCORES/kcores-llm-arena/tree/main/benchmark-ball-bouncing-inside-spinning-heptagon](https://github.com/KCORES/kcores-llm-arena/tree/main/benchmark-ball-bouncing-inside-spinning-heptagon)
Input code (the prompt is taken directly from the benchmark):
from openai import OpenAI
... | 2025-08-05T21:42:56 | https://v.redd.it/wm3v5c7ft9hf1 | entsnack | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mimbvg | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wm3v5c7ft9hf1/DASHPlaylist.mpd?a=1757022191%2CZGVlY2RjM2NkNWE2YmI3MjhkYWM4OTVhMzM1MDdhYTYyNDEzOWY2ZDE0YWI3NjZmYjFmNmJiNWJmMWRkZWE3Zg%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/wm3v5c7ft9hf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mimbvg | /r/LocalLLaMA/comments/1mimbvg/gptoss120b_on_the_ball_bouncing_inside_spinning/ | false | false | 54 | {'enabled': False, 'images': [{'id': 'N2YzeGJjN2Z0OWhmMR0l7U2W2QU8pPesZxP949A1S1WH3A2gsjWQ8Ls93hhP', 'resolutions': [{'height': 107, 'url': 'https://external-preview.redd.it/N2YzeGJjN2Z0OWhmMR0l7U2W2QU8pPesZxP949A1S1WH3A2gsjWQ8Ls93hhP.png?width=108&crop=smart&format=pjpg&auto=webp&s=fe0feb36681a2f80bcf307b5a8fa80ec03cc... | |
OpenAI's OSS model has fewer active params than Llama 7B? And its good at reasoning? | 2 | I just went through the model card highlights. What do you guys think about the fact that the 20B model has only about 3.61 B active params, lower than the Llama's 7B model even; and it performs this well? Also these are good at AIME (a math competition), GPQA (graduate-level physics), and MMLU (a broad academic benchm... | 2025-08-05T21:41:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mimaof/openais_oss_model_has_fewer_active_params_than/ | Wonderful-Delivery-6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mimaof | false | null | t3_1mimaof | /r/LocalLLaMA/comments/1mimaof/openais_oss_model_has_fewer_active_params_than/ | false | false | 2 | null | |
More profanity: GPT-OSS powered QwenCode. It works. kind of. | 4 | 2025-08-05T21:39:53 | https://www.reddit.com/gallery/1mim94b | JLeonsarmiento | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mim94b | false | null | t3_1mim94b | /r/LocalLLaMA/comments/1mim94b/more_profanity_gptoss_powered_qwencode_it_works/ | false | false | 4 | null | ||
CRINN: Contrastive Reinforcement Learning for Approximate Nearest Neighbor Search | 4 | [Approximate nearest-neighbor search (ANNS) algorithms have become increasingly critical for recent AI applications, particularly in retrieval-augmented generation (RAG) and agent-based LLM applications. In this paper, we present CRINN, a new paradigm for ANNS algorithms. CRINN treats ANNS optimization as a reinforceme... | 2025-08-05T21:35:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mim56p/crinn_contrastive_reinforcement_learning_for/ | Different741 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mim56p | false | null | t3_1mim56p | /r/LocalLLaMA/comments/1mim56p/crinn_contrastive_reinforcement_learning_for/ | false | false | self | 4 | null |
Running Qwen for free in the cloud or running it locally? What is the minimum estimated cost? | 0 | My current laptop cannot bear it | 2025-08-05T21:31:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mim1ug/running_qwen_for_free_on_cloud_or_running_it/ | Mark_Collins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mim1ug | false | null | t3_1mim1ug | /r/LocalLLaMA/comments/1mim1ug/running_qwen_for_free_on_cloud_or_running_it/ | false | false | self | 0 | null |
gpt-oss-120b is faster than 90t/s on 3x3090 | 2 | benchmark:
`llama-bench -ngl 99 -m /mnt/models3/gpt-oss-120b-mxfp4-00001-of-00003.gguf`
`ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no`
`ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no`
`ggml_cuda_init: found 3 CUDA devices:`
`Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes`
`Device 1: NVIDIA GeFo... | 2025-08-05T21:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mim0cs/gptoss120b_is_faster_than_90ts_on_3x3090/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mim0cs | false | null | t3_1mim0cs | /r/LocalLLaMA/comments/1mim0cs/gptoss120b_is_faster_than_90ts_on_3x3090/ | false | false | self | 2 | null |
CRINN: Contrastive Reinforcement Learning for Approximate Nearest Neighbor Search | 3 | A new drop of RAG and agent-based applications. Today is a big day. A new OAI open-source model, Claude 4.1, and more expected.
[https://x.com/deep\_reinforce/status/1952841690752139692](https://x.com/deep_reinforce/status/1952841690752139692)
| 2025-08-05T21:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mim030/crinn_contrastive_reinforcement_learning_for/ | Optimal-Outcome-7458 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mim030 | false | null | t3_1mim030 | /r/LocalLLaMA/comments/1mim030/crinn_contrastive_reinforcement_learning_for/ | false | false | self | 3 | null |
Looking for lightweight Whisper speech‑to‑text app on Windows or Android (open‑source or cheap)? | 0 | Hi everyone,
I'm looking for a **lightweight speech‑to‑text app based on OpenAI Whisper**, ideally:
* **Runs on Windows or Android**
* **Can work offline or locally?**
* Supports a **hotkey or push‑to‑talk trigger**
* **Autostarts at system boot/login** (on Windows) or stays accessible on Android like a dictation IM... | 2025-08-05T21:27:20 | https://www.reddit.com/r/LocalLLaMA/comments/1milxsh/looking_for_lightweight_whisper_speechtotext_app/ | Ranteck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1milxsh | false | null | t3_1milxsh | /r/LocalLLaMA/comments/1milxsh/looking_for_lightweight_whisper_speechtotext_app/ | false | false | self | 0 | null |
Zuck is a fan of this series | 0 | 2025-08-05T21:24:16 | Comfortable-Rock-498 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miluzw | false | null | t3_1miluzw | /r/LocalLLaMA/comments/1miluzw/zuck_is_a_fan_of_this_series/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '43hwr9pvp9hf1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/43hwr9pvp9hf1.png?width=108&crop=smart&auto=webp&s=a0524d9f344812436d7b5baccf516460458269f6', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/43hwr9pvp9hf1.png?width=216&crop=smart&auto=web... | ||
What do the samples look like for fine-tuning? | 2 | Hi. Coming from a "classic ML" background, I know what training data looks like for a typical supervised learning problem. But when thinking of an instruction-tuned LLM, things get confusing.
What does one sample look like, e.g. if I want to fine tune a small LLM for text summarisation in a particular domain? And how... | 2025-08-05T21:20:11 | https://www.reddit.com/r/LocalLLaMA/comments/1milr4q/what_do_the_samples_look_like_for_fine_tuning/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1milr4q | false | null | t3_1milr4q | /r/LocalLLaMA/comments/1milr4q/what_do_the_samples_look_like_for_fine_tuning/ | false | false | self | 2 | null |
gpt-oss-120b (and Opus 4.1) on Extended NYT Connections and Thematic Generalization benchmarks | 11 | Leaderboards with all models:
[https://github.com/lechmazur/nyt-connections/](https://github.com/lechmazur/nyt-connections/)
[https://github.com/lechmazur/generalization](https://github.com/lechmazur/generalization)
| 2025-08-05T21:17:20 | https://www.reddit.com/gallery/1milofa | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1milofa | false | null | t3_1milofa | /r/LocalLLaMA/comments/1milofa/gptoss120b_and_opus_41_on_extended_nyt/ | false | false | 11 | null | |
OpenAI gpt-oss-120b & 20b EQ-Bench & creative writing results | 220 | [https://eqbench.com/](https://eqbench.com/)
**gpt-oss-120b:**
Creative writing
[https://eqbench.com/results/creative-writing-v3/openai\_\_gpt-oss-120b.html](https://eqbench.com/results/creative-writing-v3/openai__gpt-oss-120b.html)
Longform writing:
[https://eqbench.com/results/creative-writing-longform/openai\_\... | 2025-08-05T21:15:36 | https://www.reddit.com/gallery/1milmrl | _sqrkl | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1milmrl | false | null | t3_1milmrl | /r/LocalLLaMA/comments/1milmrl/openai_gptoss120b_20b_eqbench_creative_writing/ | false | false | 220 | null | |
Just wanna say : Kudos to llama cpp our unsung heroes 🫡 | 113 | Kudos to you guys | 2025-08-05T21:15:05 | https://www.reddit.com/r/LocalLLaMA/comments/1milm9t/just_wanna_say_kudos_to_llama_cpp_our_unsung/ | dreamai87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1milm9t | false | null | t3_1milm9t | /r/LocalLLaMA/comments/1milm9t/just_wanna_say_kudos_to_llama_cpp_our_unsung/ | false | false | self | 113 | null |
Run gpt-oss locally with Unsloth GGUFs + Fixes! | 161 | Hey guys! You can now run OpenAI's gpt-oss-120b & 20b open models locally with our [Unsloth](https://github.com/unslothai/unsloth) GGUFs! 🦥
The uploads includes some of our chat template fixes including casing errors and other fixes. We also reuploaded the quants to facilitate OpenAI's recent change to their chat tem... | 2025-08-05T21:13:26 | danielhanchen | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1milkqp | false | null | t3_1milkqp | /r/LocalLLaMA/comments/1milkqp/run_gptoss_locally_with_unsloth_ggufs_fixes/ | false | false | 161 | {'enabled': True, 'images': [{'id': 'cgDifyZFtbjbAPhmEKVZn1UiSMpgWQoUTWOijJXGf1w', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/6s62jsx2o9hf1.png?width=108&crop=smart&auto=webp&s=9e815bdf932294f33e187adf930d6da99b4dbd9d', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/6s62jsx2o9hf1.pn... | ||
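As a rough illustration of loading one of these GGUFs from Python, here is a sketch assuming a llama-cpp-python build recent enough to read the gpt-oss MXFP4 quants; the repo id comes from Unsloth's uploads, while the filename glob is an assumption about how the files are named:

```python
# Sketch: pull an Unsloth GGUF from the Hub and chat with it locally.
# Assumes `llama-cpp-python` (with huggingface_hub installed) is new enough
# to read the gpt-oss GGUFs; the filename glob below is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/gpt-oss-20b-GGUF",
    filename="*F16.gguf",   # adjust to the quant you actually want
    n_gpu_layers=-1,        # offload everything that fits to the GPU
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one sentence about llamas."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```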
OpenAI's GPT-OSS 20B in LM Studio is a bit tricky, but I finally made it work, here's how I did it... | 5 | Hi everyone!
I was super excited about this brand new model from OpenAI and wanted to run it with the following specs:
OS: Windows 10 64bit
Software: LM Studio 0.3.24 b4
OS RAM: 16 GB
GPU VRAM: 8 GB (this is AMD GPU RX Vega 56)
Inference engine: Vulkan / CPU.
Normally I can run Qwen 30B A3B MoE models just fine, so... | 2025-08-05T21:07:45 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1milfjl | false | null | t3_1milfjl | /r/LocalLLaMA/comments/1milfjl/openais_gptoss_20b_in_lm_studio_is_a_bit_tricky/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'gysfnhl7l9hf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/gysfnhl7l9hf1.png?width=108&crop=smart&auto=webp&s=0c8b87b7c2e75094fd0bbec8080a4bd0a49af6fb', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/gysfnhl7l9hf1.png?width=216&crop=smart&auto=web... | |
Qwen3 dense instruct/coder/thinking models tomorrow? | 111 | 2025-08-05T21:07:30 | MR_-_501 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1milfbi | false | null | t3_1milfbi | /r/LocalLLaMA/comments/1milfbi/qwen3_dense_instructcoderthinking_models_tomorrow/ | false | false | default | 111 | {'enabled': True, 'images': [{'id': 'pbi1dcacn9hf1', 'resolutions': [{'height': 201, 'url': 'https://preview.redd.it/pbi1dcacn9hf1.png?width=108&crop=smart&auto=webp&s=272f9d4a11848949a7c5e679527dee7011700084', 'width': 108}, {'height': 402, 'url': 'https://preview.redd.it/pbi1dcacn9hf1.png?width=216&crop=smart&auto=we... | ||
Quick question; What's the business model behind free open source models? | 2 | I don't understand why they are releasing these great models for free. I must be missing something. | 2025-08-05T21:07:22 | https://www.reddit.com/r/LocalLLaMA/comments/1milf6u/quick_question_whats_the_business_model_behind/ | TweeMansLeger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1milf6u | false | null | t3_1milf6u | /r/LocalLLaMA/comments/1milf6u/quick_question_whats_the_business_model_behind/ | false | false | self | 2 | null |
Terminal coders rejoice: the Codex CLI ecosystem is finally fully open-source | 4 | Don't know why nobody is talking about this. We now have the open-source Codex CLI: https://github.com/openai/codex. And the open-source gpt-oss models: https://huggingface.co/openai/gpt-oss-120b. This gives terminal coders (my IDE is Vim!) a fully open IDE ecosystem that finally rivals VSCode + Model and Cursor + Mode... | 2025-08-05T21:03:34 | https://github.com/openai/codex | entsnack | github.com | 1970-01-01T00:00:00 | 0 | {} | 1milboe | false | null | t3_1milboe | /r/LocalLLaMA/comments/1milboe/terminal_coders_rejoice_the_codex_cli_ecosystem/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'pkbdz8qEySiFJEy_x76PBnHLEC5_ePJUq9_2rPu4Rz0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pkbdz8qEySiFJEy_x76PBnHLEC5_ePJUq9_2rPu4Rz0.png?width=108&crop=smart&auto=webp&s=39f0705850281e8c625f6eaa87c0e39037515e2d', 'width': 108}, {'height': 108, 'url': 'h... |
Excited to see Cursor-like implementations using this | 2 | 2025-08-05T20:55:47 | Loud_Possibility_148 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mil4gv | false | null | t3_1mil4gv | /r/LocalLLaMA/comments/1mil4gv/excited_to_see_cursorlike_implementations_using/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'waks2949l9hf1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/waks2949l9hf1.jpeg?width=108&crop=smart&auto=webp&s=53d705243e2629dba63bbddb64f40749f51ea7b9', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/waks2949l9hf1.jpeg?width=216&crop=smart&auto=w... | ||
gpt-oss-120B claims it is GPT4-Turbo - did OpenAI publicize the GPT4 architecture? | 1 | [removed] | 2025-08-05T20:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/1miklb8/gptoss120b_claims_it_is_gpt4turbo_did_openai/ | k_means_clusterfuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miklb8 | false | null | t3_1miklb8 | /r/LocalLLaMA/comments/1miklb8/gptoss120b_claims_it_is_gpt4turbo_did_openai/ | false | false | 1 | null | |
OpenAI 120b impressions? | 5 | I tried it in Roo Code and it seems to be the stupidest OSS model I've tried. It can't even follow Roo Code instructions and throws an error on every second request.
I haven't tried other same-size OSS models, but the benchmarks claim it's so good, better than DeepSeek, Qwen, etc... and yet it's so shit even for simple f... | 2025-08-05T20:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mikeri/openai_120b_impressions/ | Aldarund | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mikeri | false | null | t3_1mikeri | /r/LocalLLaMA/comments/1mikeri/openai_120b_impressions/ | false | false | self | 5 | null |
Ollama Turbo - Run models using datacenter-grade hardware | 0 | 2025-08-05T20:18:50 | https://ollama.com/turbo | Pro-editor-1105 | ollama.com | 1970-01-01T00:00:00 | 0 | {} | 1mik5gy | false | null | t3_1mik5gy | /r/LocalLLaMA/comments/1mik5gy/ollama_turbo_run_models_using_datacentergrade/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... | |
Running GPT-OSS:20B Locally on Windows 11 | 16GB of RAM |Using Ollama | 2 | It's feeling very slow. | 2025-08-05T20:13:41 | Ok-Orchid1032 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mik0co | false | null | t3_1mik0co | /r/LocalLLaMA/comments/1mik0co/running_gptoss20b_locally_on_windows_11_16gb_of/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'ffwa8n8qd9hf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ffwa8n8qd9hf1.jpeg?width=108&crop=smart&auto=webp&s=03dc4c23459104a49478fd36b1703743fcd824ec', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ffwa8n8qd9hf1.jpeg?width=216&crop=smart&auto=w... | |
vLLM latency/throughput benchmarks for gpt-oss-120b | 54 | I ran the [vLLM provided benchmarks](https://github.com/vllm-project/vllm/tree/main/benchmarks) `serve` (online serving throughput) and `throughput` (offline serving throughput) for `gpt-oss-120b` on my H100 96GB with the ShareGPT benchmark data.
Can confirm it fits snugly in 96GB. Numbers below.
# Throughput Benchma... | 2025-08-05T20:12:32 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mijza6 | false | null | t3_1mijza6 | /r/LocalLLaMA/comments/1mijza6/vllm_latencythroughput_benchmarks_for_gptoss120b/ | false | false | default | 54 | {'enabled': True, 'images': [{'id': 'bz9j2b92d9hf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/bz9j2b92d9hf1.png?width=108&crop=smart&auto=webp&s=c11c3140455b7faeb590a3bf0df168bd37f153f1', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/bz9j2b92d9hf1.png?width=216&crop=smart&auto=web... | |
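For readers who only want a quick ballpark rather than the official scripts, a crude offline throughput check with vLLM's Python API might look like the sketch below (this is not the benchmark_throughput.py script used for the numbers above; the model id and prompt count are illustrative, and it only runs on hardware that actually fits the model):

```python
# Crude offline throughput estimate with vLLM's Python API. This is NOT the
# official vLLM benchmark script, just a quick sanity check.
import time
from vllm import LLM, SamplingParams

llm = LLM(model="openai/gpt-oss-120b")   # needs a GPU with enough memory
params = SamplingParams(temperature=0.8, max_tokens=256)

prompts = ["Explain mixture-of-experts routing in two sentences."] * 64

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated / elapsed:.1f} output tokens/s over {len(prompts)} prompts")
```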
Adding search functionality in a small LM | 4 | I am currently using Qwen3 4B with SearXNG to add search to it, but it often fails; sometimes the content it scrapes is inconclusive.
How do tools like Jan do it? Their search functionality is so good. Are there any open search tools other than SearXNG that give better results? Or is it just... | 2025-08-05T20:10:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mijwvo/adding_search_functionality_in_a_small_lm/ | ILoveMy2Balls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mijwvo | false | null | t3_1mijwvo | /r/LocalLLaMA/comments/1mijwvo/adding_search_functionality_in_a_small_lm/ | false | false | self | 4 | null |
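For context, the SearXNG side of such a setup is usually just one HTTP call. A minimal sketch, assuming a local instance on port 8080 with the JSON output format enabled in settings.yml:

```python
# Minimal SearXNG query sketch: ask a local instance for JSON results and
# hand the top snippets to the model as context. Assumes the instance has
# `formats: [html, json]` enabled in settings.yml.
import requests

SEARXNG_URL = "http://localhost:8080/search"   # assumption: local instance

def web_search(query: str, max_results: int = 5) -> list[dict]:
    resp = requests.get(
        SEARXNG_URL,
        params={"q": query, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])[:max_results]
    # Keep only the fields a small model actually needs in its prompt
    return [{"title": r.get("title"), "url": r.get("url"), "snippet": r.get("content")}
            for r in results]

if __name__ == "__main__":
    for hit in web_search("gpt-oss-20b context length"):
        print(hit["title"], "->", hit["url"])
```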
Best Model for Single RTX 5080? | 0 | Hey guys,
I managed to get an RTX 5080 on my personal machine, so in total it will be 16GB VRAM plus 96 GB RAM.
I am wondering what's the best general model for it?
Qwen 30B A3B or something else?
Is it enough to run some bigger models with \~100k context? | 2025-08-05T20:09:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mijwom/best_model_for_single_rtx_5080/ | SthMax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mijwom | false | null | t3_1mijwom | /r/LocalLLaMA/comments/1mijwom/best_model_for_single_rtx_5080/ | false | false | self | 0 | null |
it's literally me | 3 | 2025-08-05T20:04:51 | https://v.redd.it/udnlm90zb9hf1 | ApprehensiveAd3629 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mijrtd | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/udnlm90zb9hf1/DASHPlaylist.mpd?a=1757016311%2CN2M3MWYzMTYyOGVmZWQzMjRjMDNhNTNjZTI2MmIxOTU5YzkxNDBlZDZhMmE5NTg3ZTJkNzJlNWY4YTQxY2E3Nw%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/udnlm90zb9hf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mijrtd | /r/LocalLLaMA/comments/1mijrtd/its_literally_me/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'MDZqaHJhMHpiOWhmMcFj068Rj0DgnpNZ8CsCym29edo5hoDBuDsl_ml-tUxO', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MDZqaHJhMHpiOWhmMcFj068Rj0DgnpNZ8CsCym29edo5hoDBuDsl_ml-tUxO.png?width=108&crop=smart&format=pjpg&auto=webp&s=a6c55d610eb406034c802727bec6ac4febad... | ||
How To Run OpenAI GPT-OSS 20B and 120B Models on AMD Ryzen AI Processors and Radeon Graphics Cards | 5 | Wonder how the 120b model compares to Qwen 3 Coder in 8-bit. | 2025-08-05T20:04:42 | https://www.amd.com/en/blogs/2025/how-to-run-openai-gpt-oss-20b-120b-models-on-amd-ryzen-ai-radeon.html | ZZZCodeLyokoZZZ | amd.com | 1970-01-01T00:00:00 | 0 | {} | 1mijros | false | null | t3_1mijros | /r/LocalLLaMA/comments/1mijros/how_to_run_openai_gptoss_20b_and_120b_models_on/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NgWDcnPAzAv9C_IdYMqborsydUjUEEFGZXyC17go05Y.jpeg?width=108&crop=smart&auto=webp&s=569ee38687589e1134e6e83d5b36da59d9f13822', 'width': 108}, {'height': 120, 'url': '... |