| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The real OpenAI OSS news is MXFP4 | 22 | OpenAI worked with llama.cpp and ollama to integrate MXFP4 support. Clearly they see enough benefit in the format to use it over existing formats. Looking forward to seeing wider adoption. | 2025-08-05T20:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mijqk1/the_real_openai_oss_news_is_mxfp4/ | explorigin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mijqk1 | false | null | t3_1mijqk1 | /r/LocalLLaMA/comments/1mijqk1/the_real_openai_oss_news_is_mxfp4/ | false | false | self | 22 | null |
This sub *really* wants GPT OSS to fail | 1 | [removed] | 2025-08-05T20:00:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mijnq8/this_sub_really_wants_gpt_oss_to_fail/ | MerePotato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mijnq8 | false | null | t3_1mijnq8 | /r/LocalLLaMA/comments/1mijnq8/this_sub_really_wants_gpt_oss_to_fail/ | false | false | self | 1 | null |
External GPU | 2 | Are there any 16GB eGPUs that'll run with a 6-Core AMD Ryzen 7600 from a ROG STRIX B650E-I GAMING motherboard? Or am I out of luck because there's no way to adapt a Thunderbolt to an AMD CPU? * https://rog.asus.com/ca-en/motherboards/rog-strix/rog-strix-b650e-i-gaming-wifi-model * https://www.amd.com/en/products/proce... | 2025-08-05T19:59:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mijmmc/external_gpu/ | autonoma_2042 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mijmmc | false | null | t3_1mijmmc | /r/LocalLLaMA/comments/1mijmmc/external_gpu/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '6FHpn-HETuzbSEb0_N3XYhVtN6t98xu0R9tGj3ti3IE', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/6FHpn-HETuzbSEb0_N3XYhVtN6t98xu0R9tGj3ti3IE.png?width=108&crop=smart&auto=webp&s=df22d2270b613db5fa98812f5f655bebe4dcf003', 'width': 108}, {'height': 158, 'url': 'h... |
How to set the reasoning effort for gpt-oss in the system message? | 6 | It says: "Similar to the OpenAI o-series reasoning models in the API, the two open-weight models support three reasoning efforts—low, medium, and high—which trade off latency vs. performance. Developers can easily set the reasoning effort with one sentence in the system message." What do you write in the system messag... | 2025-08-05T19:59:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mijmmg/how_to_set_the_reasoning_effort_for_gptoss_in_the/ | chibop1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mijmmg | false | null | t3_1mijmmg | /r/LocalLLaMA/comments/1mijmmg/how_to_set_the_reasoning_effort_for_gptoss_in_the/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': '12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=108&crop=smart&auto=webp&s=292c3d3a2dfa2ce762d4e0ad0113f21057208fb5', 'width': 108}, {'height': 116, 'url': 'h... |
GPT-OSS-20B on RTX 5090 – 221 tok/s in LM Studio (default settings + FlashAttention) | 9 | Just tested **GPT-OSS-20B** locally using **LM Studio v0.3.21-b4** on my machine with an **RTX 5090 + Ryzen 9 9950X3D + 96 GB RAM**. Everything is set to **default**, no tweaks. I only enabled **Flash Attention** manually. Using: * **Runtime Engine**: `CUDA 12 llama.cpp (Windows)` – v1.44.0 * LM Studio auto-selected... | 2025-08-05T19:52:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mijfyz/gptoss20b_on_rtx_5090_221_toks_in_lm_studio/ | Spiritual_Tie_5574 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mijfyz | false | null | t3_1mijfyz | /r/LocalLLaMA/comments/1mijfyz/gptoss20b_on_rtx_5090_221_toks_in_lm_studio/ | false | false | self | 9 | null |
GPT-OSS-20B & GPT-OSS-120B on LmStudio + MCP | 4 | Obviously, the big grumpy guy that I am couldn't help but download these 2 models in order to mess around with them a bit. Hardware for this test: CPU Ryzen 9900x + 128GB DDR5 5200MHz + RTX 3090 + RTX 3060 So for GPT-OSS-20B 64K + Mcp it's not too comfortable but I'm certain there will be improvements: https:... | 2025-08-05T19:50:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mijdx7/gptoss20b_gptoss120b_on_lmstudio_mcp/ | Ok_Ninja7526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mijdx7 | false | null | t3_1mijdx7 | /r/LocalLLaMA/comments/1mijdx7/gptoss20b_gptoss120b_on_lmstudio_mcp/ | false | false | 4 | null | |
When Grok 3? | 22 | 2025-08-05T19:49:17 | https://www.reddit.com/gallery/1mijcv7 | Wrong_User_Logged | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mijcv7 | false | null | t3_1mijcv7 | /r/LocalLLaMA/comments/1mijcv7/when_grok_3/ | false | false | 22 | null | ||
MXFP4 and various hardware | 4 | So MXFP4 just entered the zeitgeist out of nowhere. What are the implications of this newfangled technology for our GPUs and Macs with Apple Silicon? RDNA3/4? Various Nvidia generations? Apple silicon? I asked Gemini for a quick summary on my phone and the result was unintelligible (everything is awesome!!!1!! was p... | 2025-08-05T19:43:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mij7ki/mxfp4_and_various_hardware/ | Thrumpwart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mij7ki | false | null | t3_1mij7ki | /r/LocalLLaMA/comments/1mij7ki/mxfp4_and_various_hardware/ | false | false | self | 4 | null |
List of open-weight models with unmodified permissive licenses | 66 | Just starting a community-driven thread. |Model|License|Commercial Use|Link to License| |:-|:-|:-|:-| |gpt-oss-120b|Apache 2.0 *unmodified*|Allowed|[LICENSE](https://huggingface.co/openai/gpt-oss-120b/blob/main/LICENSE)| |gpt-oss-20b|Apache 2.0 *unmodified*|Allowed|[LICENSE](https://huggingface.co/openai/gpt-oss-20b/b... | 2025-08-05T19:43:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mij7fh/list_of_openweight_models_with_unmodified/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mij7fh | false | null | t3_1mij7fh | /r/LocalLLaMA/comments/1mij7fh/list_of_openweight_models_with_unmodified/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'fxTXWWdnNDFfIAA3ubz5TQddkg3cvUBVmDyXj7yO72Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxTXWWdnNDFfIAA3ubz5TQddkg3cvUBVmDyXj7yO72Q.png?width=108&crop=smart&auto=webp&s=8b5dd80343742975b635d2403d8ef5e02be2e423', 'width': 108}, {'height': 108, 'url': 'h... |
Open-weights just beat Opus 4.1 on today’s benchmarks (AIME’25, GPQA, MMLU) | 21 | Not trying to spark a model war, just sharing numbers that surprised me. Based on today’s releases and the evals below, OpenAI’s open-weights models edge out Claude Opus 4.1 across math (AIME 2025, with tools), graduate-level QA (GPQA Diamond, no tools), and general knowledge (MMLU, no tools). If these hold up, you no ... | 2025-08-05T19:38:15 | https://www.reddit.com/gallery/1mij25y | dictionizzle | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mij25y | false | null | t3_1mij25y | /r/LocalLLaMA/comments/1mij25y/openweights_just_beat_opus_41_on_todays/ | false | false | 21 | null | |
New GPT-OSS and Claude Code? | 9 | Hey, I just wanted to make a space for people to talk about how well the new open-source models work with Claude Code. I haven't had a chance to try it out yet, but I'm excited, and think it could finally let me take my setup fully offline. What are y'all's thoughts? | 2025-08-05T19:37:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mij1ux/new_gptoss_and_claude_code/ | LyAkolon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mij1ux | false | null | t3_1mij1ux | /r/LocalLLaMA/comments/1mij1ux/new_gptoss_and_claude_code/ | false | false | self | 9 | null |
Why doesn’t LM Studio let me call external OpenAI‑compatible APIs (e.g., Ollama)? | 1 | Hey //locallama, I’m using the desktop Studio app for local inference and love having everything in one place. What I’m missing is any way to \*\*point LM Studio at an external OpenAI‑compatible endpoint\*\* (Ollama, a hosted OpenAI‑style server, etc.) from within the UI. I know alternatives like \*\*Open WebUI\*\... | 2025-08-05T19:33:37 | https://www.reddit.com/r/LocalLLaMA/comments/1miixs4/why_doesnt_lm_studio_let_me_call_external/ | myusuf3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miixs4 | false | null | t3_1miixs4 | /r/LocalLLaMA/comments/1miixs4/why_doesnt_lm_studio_let_me_call_external/ | false | false | self | 1 | null |
How to ingest nested tables in RAG pipeline? | 1 | Pl share what has worked for you, thank you! | 2025-08-05T19:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1miixbr/how_to_ingest_nested_tables_in_rag_pipeline/ | Significant_Idea2495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miixbr | false | null | t3_1miixbr | /r/LocalLLaMA/comments/1miixbr/how_to_ingest_nested_tables_in_rag_pipeline/ | false | false | self | 1 | null |
GPT-OSS 20B running on an 8GB windows laptop. Slow but is running 😁 | 8 | 2025-08-05T19:31:40 | https://v.redd.it/b7bkgyr869hf1 | 2088AJ | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miivx7 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b7bkgyr869hf1/DASHPlaylist.mpd?a=1757014317%2CZTU4Mjc1MTg1MmZmOGQ5YWVmNDVmZjg0MWFkZGQ5NGU5OTIwMmVjM2UxYWZmMDFlZGFjYjBiYTliZDE3ODMzYg%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/b7bkgyr869hf1/DASH_1080.mp4?source=fallback', 'h... | t3_1miivx7 | /r/LocalLLaMA/comments/1miivx7/gptoss_20b_running_on_an_8gb_windows_laptop_slow/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'enV5cjFhcTg2OWhmMTM40Qrj6DkO_QD0I5n0iXR7JqD4uKRhq0mG_N5Gvutx', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/enV5cjFhcTg2OWhmMTM40Qrj6DkO_QD0I5n0iXR7JqD4uKRhq0mG_N5Gvutx.png?width=108&crop=smart&format=pjpg&auto=webp&s=97194cfa798591c42a73bb2ec264c3fedaf4... | ||
UI/UX Benchmark Update 08/05: GPT-OSS Models Added, Qwen3 30B series, Flux.1 Krea Dev, Opus 4.1, Builder, Audio Arenas | 6 | Many of you have seen my [benchmark](https://www.designarena.ai/) already, but see this post for [context](https://www.designarena.ai/) as always. I haven't been as active on Reddit as of late (so many things are happening so fast!), but here is the weekly update. As of an hour ago, we just added the GPT OSS 20B and ... | 2025-08-05T19:31:39 | https://www.reddit.com/gallery/1miivw7 | Accomplished-Copy332 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1miivw7 | false | null | t3_1miivw7 | /r/LocalLLaMA/comments/1miivw7/uiux_benchmark_update_0805_gptoss_models_added/ | false | false | 6 | null | |
Installing GPT-OSS-20b | 1 | It's my first time attempting to install and run an LLM, both using ollama and in general. I seem to have downloaded it successfully, but when attempting to run the model I'm getting 'function "currentDate" not defined'. Checking the install documentation doesn't seem to offer any help. Any ideas? | 2025-08-05T19:30:14 | https://www.reddit.com/r/LocalLLaMA/comments/1miiuj9/installing_gptoss20b/ | ghost-uk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miiuj9 | false | null | t3_1miiuj9 | /r/LocalLLaMA/comments/1miiuj9/installing_gptoss20b/ | false | false | self | 1 | null |
Its raining AI models and I am loving it. | 0 | 🧑💻GLM 4.5: An open-source reasoning model that is on par with Claude Sonnet 4. 🖼️Qwen-Image: An open-source image generation model similar to ChatGPT. 🆓OpenAI GPT-OSS: This model comes with open weights, Python frameworks, and agentic capabilities that will help people use it for advanced and local applications.... | 2025-08-05T19:27:14 | https://www.reddit.com/r/LocalLLaMA/comments/1miirnt/its_raining_ai_models_and_i_am_loving_it/ | kingabzpro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miirnt | false | null | t3_1miirnt | /r/LocalLLaMA/comments/1miirnt/its_raining_ai_models_and_i_am_loving_it/ | false | false | self | 0 | null |
"A Timeless Masterpiece by Qwen-Image: 90s Nostalgia Reimagined by an Alien" | 1 | [removed] | 2025-08-05T19:24:21 | https://www.reddit.com/r/LocalLLaMA/comments/1miiosv/a_timeless_masterpiece_by_qwenimage_90s_nostalgia/ | humairmunir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miiosv | false | null | t3_1miiosv | /r/LocalLLaMA/comments/1miiosv/a_timeless_masterpiece_by_qwenimage_90s_nostalgia/ | false | false | self | 1 | null |
WHY CENSOR THEM SO HARD MAN??? GPT OSS?? | 52 | Even regular ChatGPT (online) is more uncensored than GPT OSS. :( | 2025-08-05T19:20:09 | https://www.reddit.com/r/LocalLLaMA/comments/1miiktg/why_censor_them_so_hard_man_gpt_oss/ | OrganicApricot77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miiktg | false | null | t3_1miiktg | /r/LocalLLaMA/comments/1miiktg/why_censor_them_so_hard_man_gpt_oss/ | false | false | self | 52 | null |
Ollama 0.11 - Partners with OpenAI to bring gpt-oss models to Ollama | 0 | 2025-08-05T19:19:39 | https://github.com/ollama/ollama/releases/tag/v0.11.0 | mj3815 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1miikci | false | null | t3_1miikci | /r/LocalLLaMA/comments/1miikci/ollama_011_partners_with_openai_to_bring_gptoss/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '9JD_v1iTHSzuJuhaUNne7s-l4JpXqbZUtMMxgHnFr-8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9JD_v1iTHSzuJuhaUNne7s-l4JpXqbZUtMMxgHnFr-8.png?width=108&crop=smart&auto=webp&s=87008937530dc5ae44d2daa618d7fbf0ceb9a362', 'width': 108}, {'height': 108, 'url': 'h... | |
OSS-120B fails the 20 bouncing balls in heptagon test | 118 | 2025-08-05T19:18:25 | https://v.redd.it/vd59dpcu39hf1 | Different_Fix_2217 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miij6j | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/vd59dpcu39hf1/DASHPlaylist.mpd?a=1757013519%2COTAzODQ2MGJkYzc2MDZhNjU1Mjk1MDJhNzFhYjcxMmE3ZjhkYTQyNDE0YjkyM2U0NDQ5MTA0YjI5NTE2MzZiNQ%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/vd59dpcu39hf1/DASH_720.mp4?source=fallback', 'has... | t3_1miij6j | /r/LocalLLaMA/comments/1miij6j/oss120b_fails_the_20_bouncing_balls_in_heptagon/ | false | false | 118 | {'enabled': False, 'images': [{'id': 'Z2RpejZvY3UzOWhmMcZ-lfKrwTWy8sv4rAh8UXEfaOU9tc1wa_tIH6YtjRdO', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Z2RpejZvY3UzOWhmMcZ-lfKrwTWy8sv4rAh8UXEfaOU9tc1wa_tIH6YtjRdO.png?width=108&crop=smart&format=pjpg&auto=webp&s=f184d50ebec1f0e86dcaf9f92117bc238c93... | ||
OpenAI OSS models compatible PR merged into llama.cpp | 7 | 2025-08-05T19:14:03 | Pro-editor-1105 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miif0v | false | null | t3_1miif0v | /r/LocalLLaMA/comments/1miif0v/openai_oss_models_compatible_pr_merged_into/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'bbz9yk4239hf1', 'resolutions': [{'height': 15, 'url': 'https://preview.redd.it/bbz9yk4239hf1.png?width=108&crop=smart&auto=webp&s=4451b6ddc9fbcf632beda1bbff8bde5637a1c2ec', 'width': 108}, {'height': 31, 'url': 'https://preview.redd.it/bbz9yk4239hf1.png?width=216&crop=smart&auto=webp... | ||
GPT OSS chat template - tool calling issue | 8 | In case you are trying gpt-oss with llama.cpp ([PR#15091](https://github.com/ggml-org/llama.cpp/pull/15091)) and **NOT using the model files from** [**HF ggml-org**](https://huggingface.co/ggml-org/gpt-oss-20b-GGUF)**,** you will encounter an HTTP 500 error when using tool calling: `got exception: {"code"... | 2025-08-05T19:13:55 | https://www.reddit.com/r/LocalLLaMA/comments/1miiewp/gpt_oss_chat_template_tool_calling_issue/ | tyoyvr-2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miiewp | false | null | t3_1miiewp | /r/LocalLLaMA/comments/1miiewp/gpt_oss_chat_template_tool_calling_issue/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'SMmA2lbsQDUuflgGCV0_YBw5k-KcfZS-9iAMN58tb_s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SMmA2lbsQDUuflgGCV0_YBw5k-KcfZS-9iAMN58tb_s.png?width=108&crop=smart&auto=webp&s=7fc0a9456ea2aba0fff93672ee04a1dff1ae7e21', 'width': 108}, {'height': 108, 'url': 'h... |
Slow Huggingface downloads of gpt-oss? Try `hf download openai/gpt-oss-20b` or `hf download openai/gpt-oss-120b` | 19 | You need the Huggingface CLI installed of course: `hf download openai/gpt-oss-20b` I don't know why, but I was getting dogshit download speeds from both vllm serve, and from the Transformers library in my Python code. This command gives me 100+ MB/s. | 2025-08-05T19:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/1miiesj/slow_huggingface_downloads_of_gptoss_try_hf/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miiesj | false | null | t3_1miiesj | /r/LocalLLaMA/comments/1miiesj/slow_huggingface_downloads_of_gptoss_try_hf/ | false | false | self | 19 | null |
How to fit gpt-oss:20b in 16gb vram? | 4 | title, it's taking up 18gb with ollama and no extra context outside a 10 word prompt. Perhaps a different provider like vllm could help? I do not wish to quantize any further | 2025-08-05T19:13:25 | https://www.reddit.com/r/LocalLLaMA/comments/1miiegj/how_to_fit_gptoss20b_in_16gb_vram/ | MiyamotoMusashi7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miiegj | false | null | t3_1miiegj | /r/LocalLLaMA/comments/1miiegj/how_to_fit_gptoss20b_in_16gb_vram/ | false | false | self | 4 | null |
support for gpt-oss has been merged into llama.cpp | 2 | GGUFs: [https://huggingface.co/collections/ggml-org/gpt-oss-68923b60bee37414546c70bf](https://huggingface.co/collections/ggml-org/gpt-oss-68923b60bee37414546c70bf) | 2025-08-05T19:12:47 | https://github.com/ggml-org/llama.cpp/pull/15091 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1miidv9 | false | null | t3_1miidv9 | /r/LocalLLaMA/comments/1miidv9/support_for_gptoss_has_been_merged_into_llamacpp/ | false | false | default | 2 | null |
GPT-OSS Support Merged in Llama.cpp! | 4 | 2025-08-05T19:12:08 | https://github.com/ggml-org/llama.cpp/pull/15091 | TKGaming_11 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1miid92 | false | null | t3_1miid92 | /r/LocalLLaMA/comments/1miid92/gptoss_support_merged_in_llamacpp/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'SMmA2lbsQDUuflgGCV0_YBw5k-KcfZS-9iAMN58tb_s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SMmA2lbsQDUuflgGCV0_YBw5k-KcfZS-9iAMN58tb_s.png?width=108&crop=smart&auto=webp&s=7fc0a9456ea2aba0fff93672ee04a1dff1ae7e21', 'width': 108}, {'height': 108, 'url': 'h... | |
Now that openAI has released the oss model, I am still waiting for grok 3.. | 7 | Grok 3 when? I mean, I just feel that openAI's model is very censored (as they said so) and it wouldn't hurt having a less censored model like grok 3 being open source. I know it wouldn't be the best at coding or other stuff right now, but still it would be a decent addition to the open source team and I do th... | 2025-08-05T19:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mii9e3/now_that_openai_has_released_the_oss_model_i_am/ | i-exist-man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mii9e3 | false | null | t3_1mii9e3 | /r/LocalLLaMA/comments/1mii9e3/now_that_openai_has_released_the_oss_model_i_am/ | false | false | self | 7 | null |
How important is CPU core count for offloading MoE inference? (dual channel) | 1 | Choosing between a Core Ultra 5 245k or 7 265k, is it important? Or is that just relevant for model loading times? From what I understand the bandwidth will be low anyways because it's only dual channel | 2025-08-05T19:07:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mii8aa/how_important_is_cpu_core_count_for_offloading/ | legit_split_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mii8aa | false | null | t3_1mii8aa | /r/LocalLLaMA/comments/1mii8aa/how_important_is_cpu_core_count_for_offloading/ | false | false | self | 1 | null |
GPT OSS MacBook Air m4 16gb | 8 | yo ... small test ... running on Lmstudio macOS | 2025-08-05T19:03:13 | seppe0815 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mii4lc | false | null | t3_1mii4lc | /r/LocalLLaMA/comments/1mii4lc/gpt_oss_macbook_air_m4_16gb/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'hl14tav019hf1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/hl14tav019hf1.png?width=108&crop=smart&auto=webp&s=79d5982175bdf0f7b272a85728727fa052dce579', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/hl14tav019hf1.png?width=216&crop=smart&auto=web... | |
Who thought to make them just 120B and 20B params in the first place? | 0 | 2025-08-05T18:58:43 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mii01s | false | null | t3_1mii01s | /r/LocalLLaMA/comments/1mii01s/who_thought_to_make_them_just_120b_and_20b_params/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'jx7o883a09hf1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/jx7o883a09hf1.png?width=108&crop=smart&auto=webp&s=0f0470256896fd1453e08cb4f8fde33d12d8f667', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/jx7o883a09hf1.png?width=216&crop=smart&auto=web... | ||
Dang. I did not expect that. Nice job OpenAI. | 114 | Meta is done for if they don't go full FOSS. No wonder Zuck was so desperate to poach OpenAI employees. | 2025-08-05T18:57:18 | Crierlon | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mihyr9 | false | null | t3_1mihyr9 | /r/LocalLLaMA/comments/1mihyr9/dang_i_did_not_expect_that_nice_job_openai/ | false | false | default | 114 | {'enabled': True, 'images': [{'id': 'm3ailyyqz8hf1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/m3ailyyqz8hf1.png?width=108&crop=smart&auto=webp&s=f91ef870c31b733b73a75daf83537b958798b972', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/m3ailyyqz8hf1.png?width=216&crop=smart&auto=web... | |
Need Help with Local-AI and Local LLMs (Mac M1, Beginner Here) | 0 | Hey everyone 👋 I'm new to local LLMs and recently started using [localai.io](https://localai.io/) for a startup company project I'm working on (can’t share details, but it’s fully offline and AI-focused). **My setup:** MacBook Air M1, 8GB RAM. I've learned the basics like what parameters, tokens, quantization, and co... | 2025-08-05T18:55:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mihx6o/need_help_with_localai_and_local_llms_mac_m1/ | Separate-Road-3668 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mihx6o | false | null | t3_1mihx6o | /r/LocalLLaMA/comments/1mihx6o/need_help_with_localai_and_local_llms_mac_m1/ | false | false | self | 0 | null |
Many people on Twitter are claiming that the new models have been trained to "benchmarkmax" | 0 | I'm not trying to make any claims, as this is only what I've personally seen on Twitter. It's also expected that there will always be dissatisfied people. However, this, along with the hallucination rates reported by OpenAI (which are quite a bit higher than o3's), makes me wonder how good their real-world application ... | 2025-08-05T18:51:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mihtbx/many_people_on_twitter_are_claiming_that_the_new/ | Enocli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mihtbx | false | null | t3_1mihtbx | /r/LocalLLaMA/comments/1mihtbx/many_people_on_twitter_are_claiming_that_the_new/ | false | false | self | 0 | null |
OpenAI Released Open Source Models! | 0 | But I guess you already knew that, because 100 other people have posted this already, and none of them seemed to think "I wonder if this has already been posted 100 times in the last hour, maybe I should check," and instead posted it as if they were the first one posting it. Let's all keep posting it! Everyone gets a t... | 2025-08-05T18:49:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mihr3t/openai_released_open_source_models/ | RedZero76 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mihr3t | false | null | t3_1mihr3t | /r/LocalLLaMA/comments/1mihr3t/openai_released_open_source_models/ | false | false | self | 0 | null |
The new OpenAI free model is cool, but Genie 3 is probably the key to AGI! | 0 | Genie 3 is a foundation world model developed by Google DeepMind, designed to create real-time, interactive 3D environments from text prompts or images. It generates dynamic, physically consistent simulations at 720p resolution and 24 frames per second, maintaining coherence for several minutes. By generating unlimited... | 2025-08-05T18:47:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mihooy/the_new_openai_free_model_is_cool_but_genie_3_is/ | custodiam99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mihooy | false | null | t3_1mihooy | /r/LocalLLaMA/comments/1mihooy/the_new_openai_free_model_is_cool_but_genie_3_is/ | false | false | self | 0 | null |
Who am I talking to??? | 0 | Anyone help!?!? | 2025-08-05T18:42:22 | Nimbkoll | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mihjzi | false | null | t3_1mihjzi | /r/LocalLLaMA/comments/1mihjzi/who_am_i_talking_to/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '3knqxyycx8hf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/3knqxyycx8hf1.png?width=108&crop=smart&auto=webp&s=d85037788d75a76cc322730c928a5eb1ff7973f7', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/3knqxyycx8hf1.png?width=216&crop=smart&auto=we... | |
GPT-OSS via Ollama: "Error: template: :3: function "currentDate" not defined" | 3 | On my M1 Mac Studio, I updated to the latest version of Ollama. Then, I did `ollama pull gpt-oss:20b`. However, I get this error: `Error: template: :3: function "currentDate" not defined`. Does anyone know how to fix this? It seems to be coming from the template: ><|start|>system<|message|>You are ChatGPT, a large la... | 2025-08-05T18:39:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mihh5w/gptoss_via_ollama_error_template_3_function/ | a-c-19-23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mihh5w | false | null | t3_1mihh5w | /r/LocalLLaMA/comments/1mihh5w/gptoss_via_ollama_error_template_3_function/ | false | false | self | 3 | null |
openai/gpt-oss-120b · Hugging Face | 1 | [removed] | 2025-08-05T18:39:13 | https://huggingface.co/openai/gpt-oss-120b | Different-Olive-8745 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mihgtm | false | null | t3_1mihgtm | /r/LocalLLaMA/comments/1mihgtm/openaigptoss120b_hugging_face/ | false | false | default | 1 | null |
Am i the only one seeing it this way ? | 217 | 2025-08-05T18:38:06 | Severe-Awareness829 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mihfp7 | false | null | t3_1mihfp7 | /r/LocalLLaMA/comments/1mihfp7/am_i_the_only_one_seeing_it_this_way/ | false | false | default | 217 | {'enabled': True, 'images': [{'id': 'e8eauyilw8hf1', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/e8eauyilw8hf1.jpeg?width=108&crop=smart&auto=webp&s=875683a503155088f908eacc076e24c6b6ff4e3d', 'width': 108}, {'height': 229, 'url': 'https://preview.redd.it/e8eauyilw8hf1.jpeg?width=216&crop=smart&auto=... | ||
Both the openai oss models are not giving any response to my agentic prompt its only providing thinking | 6 | Anyone face similar issue? | 2025-08-05T18:35:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mihd6y/both_the_openai_oss_models_are_not_giving_any/ | naveenstuns | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mihd6y | false | null | t3_1mihd6y | /r/LocalLLaMA/comments/1mihd6y/both_the_openai_oss_models_are_not_giving_any/ | false | false | self | 6 | null |
Guess the open source model name | 8 | 2025-08-05T18:34:30 | BoJackHorseMan53 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mihc87 | false | null | t3_1mihc87 | /r/LocalLLaMA/comments/1mihc87/guess_the_open_source_model_name/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'wie7h96yv8hf1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/wie7h96yv8hf1.png?width=108&crop=smart&auto=webp&s=46519776a689a212e26b0df0830a5643fdcb0f6f', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/wie7h96yv8hf1.png?width=216&crop=smart&auto=webp... | ||
Help choosing a model? | 0 | So, i just started actually learning about local LLMs, I'd like to have mainly 2 versions, one to help me learn more about IT stuff and coding and another general one. Perhaps I dont need 2 different versions and a good system prompt is good enough anyway those are my specs using fastfetch, any suggestion is appreciate... | 2025-08-05T18:30:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mih7uj/help_choosing_a_model/ | culo_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mih7uj | false | null | t3_1mih7uj | /r/LocalLLaMA/comments/1mih7uj/help_choosing_a_model/ | false | false | self | 0 | null |
The open ai open source is rickety shit | 0 | OpenAI, maybe do something around 800B total parameters next time for some actual cutting-edge capabilities without relying solely on reasoning. The community can make distills and quants for running locally, like with Kimi -- this is just disappointing, I don't really know what to say... | 2025-08-05T18:27:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mih58t/the_open_ai_open_source_is_rickety_shit/ | Longjumping_Spot5843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mih58t | false | null | t3_1mih58t | /r/LocalLLaMA/comments/1mih58t/the_open_ai_open_source_is_rickety_shit/ | false | false | self | 0 | null |
Specs for gpt-oss-120b - mismatch? | 1 | [https://www.reddit.com/r/LocalLLaMA/comments/1glw1rs/computer\_spec\_for\_running\_large\_ai\_model\_70b/](https://www.reddit.com/r/LocalLLaMA/comments/1glw1rs/computer_spec_for_running_large_ai_model_70b/) Hi all, above thread was closest I could find but not sure quite hits the mark. For self-hosting gpt-oss-120b, ... | 2025-08-05T18:25:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mih3nn/specs_for_gptoss120b_mismatch/ | gruntledairman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mih3nn | false | null | t3_1mih3nn | /r/LocalLLaMA/comments/1mih3nn/specs_for_gptoss120b_mismatch/ | false | false | self | 1 | null |
GPT OSS 20B vs GPT OSS 120B | 7 | Comparing the new OpenAI OSS models | 2025-08-05T18:25:04 | https://v.redd.it/9cqf63l7u8hf1 | sirjoaco | /r/LocalLLaMA/comments/1mih2wn/gpt_oss_20b_vs_gpt_oss_120b/ | 1970-01-01T00:00:00 | 0 | {} | 1mih2wn | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9cqf63l7u8hf1/DASHPlaylist.mpd?a=1757139916%2CMTE4MDZkMDZiYTQxMDJiMjJjOGM1YzE2ODE4N2Q0OGQ0OTFlMDUzYTA4NTJjNmY5NTViZjc4NDZjNGIwOWY4ZA%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/9cqf63l7u8hf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mih2wn | /r/LocalLLaMA/comments/1mih2wn/gpt_oss_20b_vs_gpt_oss_120b/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'Zmd2eXgybDd1OGhmMa010fOTakGvKD7NzQRbrk4rLKyVEosGz3lSAfcJ7DVg', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/Zmd2eXgybDd1OGhmMa010fOTakGvKD7NzQRbrk4rLKyVEosGz3lSAfcJ7DVg.png?width=108&crop=smart&format=pjpg&auto=webp&s=aa1ed8e4a6248947562020870e887537efd19... | |
gpt-oss on Ollama! | 1 | [https://ollama.com/library/gpt-oss](https://ollama.com/library/gpt-oss) | 2025-08-05T18:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mih0g4/gptoss_on_ollama/ | sachnek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mih0g4 | false | null | t3_1mih0g4 | /r/LocalLLaMA/comments/1mih0g4/gptoss_on_ollama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... |
Promise kept, cheers to the openai team! | 0 | 2025-08-05T18:19:56 | https://www.reddit.com/r/LocalLLaMA/comments/1migxu8/promise_kept_cheers_to_the_openai_team/ | Leflakk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1migxu8 | false | null | t3_1migxu8 | /r/LocalLLaMA/comments/1migxu8/promise_kept_cheers_to_the_openai_team/ | false | false | 0 | null | ||
Testing out GPT OSS in Ollama | 0 | **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
| 2025-08-05T18:19:45 | Due-Tangelo-8704 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1migxno | false | null | t3_1migxno | /r/LocalLLaMA/comments/1migxno/testing_out_gpt_oss_in_ollama/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'JwCjR40ZX8J2lRry5EQTaDZjGVkEpMJb2dEIq3tfHTE', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/mziarf4at8hf1.jpeg?width=108&crop=smart&auto=webp&s=2c2b7c60c1bf8117ed4e7f541a3e111cb6cebd23', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/mziarf4at8hf1.jp... | ||
gpt-oss, FP4, and the 3090/ti vs Blackwell | 3 | Are there any solid benchmarks of fp4 inference on weaker blackwell cards vs fp16 on the 3090ti
Do you guys think a 5060ti 16gb is enough for gpt | 2025-08-05T18:18:12 | https://www.reddit.com/r/LocalLLaMA/comments/1migw3z/gptoss_fp4_and_the_3090ti_vs_blackwell/ | m1tm0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1migw3z | false | null | t3_1migw3z | /r/LocalLLaMA/comments/1migw3z/gptoss_fp4_and_the_3090ti_vs_blackwell/ | false | false | self | 3 | null |
Any Suggestions for a laptop that can run the OSS? | 0 | Thinking of buying one! | 2025-08-05T18:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1migsiv/any_suggestions_for_a_laptop_that_can_run_the_oss/ | WinterPurple73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1migsiv | false | null | t3_1migsiv | /r/LocalLLaMA/comments/1migsiv/any_suggestions_for_a_laptop_that_can_run_the_oss/ | false | false | self | 0 | null |
Can someone compare GPT-OSS to Qwen 3 | 15 | Given that Qwen 3 has been the best accessible model right now available in sizes as low as 0.6b, I'd be curious to see how they compare. Also, I'd be interested in a head to head censorship comparison test. | 2025-08-05T18:14:19 | https://www.reddit.com/r/LocalLLaMA/comments/1migs87/can_someone_compare_gptoss_to_qwen_3/ | pneuny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1migs87 | false | null | t3_1migs87 | /r/LocalLLaMA/comments/1migs87/can_someone_compare_gptoss_to_qwen_3/ | false | false | self | 15 | null |
Local LLM/model for OCR Mobile compatible | 1 | Can someone list local multi modal llm for handwritten text english or any best performing ocr model compatible with ios and android
Thanks. | 2025-08-05T18:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/1migpom/local_llmmodel_for_ocr_mobile_compatible/ | weird_areesh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1migpom | false | null | t3_1migpom | /r/LocalLLaMA/comments/1migpom/local_llmmodel_for_ocr_mobile_compatible/ | false | false | self | 1 | null |
I FEEL SO SAFE! THANK YOU SO MUCH OPENAI! | 886 | It also lacks all general knowledge and is terrible at coding compared to the same sized GLM air, what is the use case here? | 2025-08-05T18:10:18 | Different_Fix_2217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1migo6d | false | null | t3_1migo6d | /r/LocalLLaMA/comments/1migo6d/i_feel_so_safe_thank_you_so_much_openai/ | false | false | default | 886 | {'enabled': True, 'images': [{'id': '7e3v67opr8hf1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/7e3v67opr8hf1.jpeg?width=108&crop=smart&auto=webp&s=8e1dfcd1bbfd03b31e5f1e582c52041c04b40c89', 'width': 108}, {'height': 181, 'url': 'https://preview.redd.it/7e3v67opr8hf1.jpeg?width=216&crop=smart&auto=w... | |
OpenAI new OSS model impressions | 8 | Hi guys just tested the new gpt-oss models. My first impression is "Damn these models love to overthink". They think for way too long and even output nothing if the reasoning is too long. Published benchmarks look great but for local models these are thinking way too much to be useful. Also the performance drop in chan... | 2025-08-05T18:09:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mignfe/openai_new_oss_model_impressions/ | mtmttuan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mignfe | false | null | t3_1mignfe | /r/LocalLLaMA/comments/1mignfe/openai_new_oss_model_impressions/ | false | false | 8 | null | |
I FEEL SO SAFE! THANK YOU OPENAI! | 1 | 2025-08-05T18:09:23 | https://www.reddit.com/r/LocalLLaMA/comments/1migna1/i_feel_so_safe_thank_you_openai/ | Different_Fix_2217 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1migna1 | false | null | t3_1migna1 | /r/LocalLLaMA/comments/1migna1/i_feel_so_safe_thank_you_openai/ | false | false | 1 | null | ||
gpt-oss-120b is safetymaxxed (cw: explicit safety) | 769 | 2025-08-05T18:07:05 | TheLocalDrummer | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1migl0k | false | null | t3_1migl0k | /r/LocalLLaMA/comments/1migl0k/gptoss120b_is_safetymaxxed_cw_explicit_safety/ | false | false | nsfw | 769 | {'enabled': True, 'images': [{'id': 'o893aealq8hf1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/o893aealq8hf1.png?width=108&crop=smart&auto=webp&s=b152a8808314c24764a7d6beb50850f074d5e17d', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/o893aealq8hf1.png?width=216&crop=smart&auto=web... | ||
What hardware to run gpt-oss-120b? | 28 | AMD AI Max comes to mind. What do you think? | 2025-08-05T18:02:22 | https://www.reddit.com/r/LocalLLaMA/comments/1miggb2/what_hardware_to_run_gptoss120b/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miggb2 | false | null | t3_1miggb2 | /r/LocalLLaMA/comments/1miggb2/what_hardware_to_run_gptoss120b/ | false | false | self | 28 | null |
OpenAI released open-weight models (gpt-oss-20b/120b) | 4 | Just saw OpenAI dropped their first open models with Apache 2.0 license. Pretty unexpected move from them.
**Key specs:**
* gpt-oss-20b: 21B params, runs in 16GB!!!
* gpt-oss-120b: 117B params, single H100
* Configurable reasoning levels (low/medium/high)
Both can be fine-tuned, and the smaller one works on consumer... | 2025-08-05T18:00:08 | https://www.reddit.com/r/LocalLLaMA/comments/1migdtw/openai_released_openweight_models_gptoss20b120b/ | rmenetray | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1migdtw | false | null | t3_1migdtw | /r/LocalLLaMA/comments/1migdtw/openai_released_openweight_models_gptoss20b120b/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=108&crop=smart&auto=webp&s=56f93ea81e319c450e5ccbbf073520d2e0a4c3a9', 'width': 108}, {'height': 116, 'url': 'h... |
Open source OpenAI models announced | 0 | https://huggingface.co/blog/welcome-openai-gpt-oss | 2025-08-05T17:58:44 | https://www.reddit.com/r/LocalLLaMA/comments/1migcgf/open_source_openai_models_anounced/ | franklbt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1migcgf | false | null | t3_1migcgf | /r/LocalLLaMA/comments/1migcgf/open_source_openai_models_anounced/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'KlfdAa2xcnDFtJpalyvx33NzAfDZpzY1HmsH4UeiVtg', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/KlfdAa2xcnDFtJpalyvx33NzAfDZpzY1HmsH4UeiVtg.png?width=108&crop=smart&auto=webp&s=8ef218ad577cce5bbe25b91d48332a3fb61f2fae', 'width': 108}, {'height': 107, 'url': 'h...
GPT-OSS-120B below GLM-4.5-air and Qwen 3 coder at coding. | 25 | 2025-08-05T17:51:31 | Different_Fix_2217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mig58x | false | null | t3_1mig58x | /r/LocalLLaMA/comments/1mig58x/gptoss120b_below_glm45air_and_qwen_3_coder_at/ | false | false | default | 25 | {'enabled': True, 'images': [{'id': 'p2xkax09o8hf1', 'resolutions': [{'height': 194, 'url': 'https://preview.redd.it/p2xkax09o8hf1.jpeg?width=108&crop=smart&auto=webp&s=1b18fc32f77e3ee3fa5febe3041a89a7fcd1ebde', 'width': 108}, {'height': 388, 'url': 'https://preview.redd.it/p2xkax09o8hf1.jpeg?width=216&crop=smart&auto=... | ||
Open-weight GPTs vs Everyone | 30 | 2025-08-05T17:50:56 | VR-Person | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mig4ob | false | null | t3_1mig4ob | /r/LocalLLaMA/comments/1mig4ob/openweight_gpts_vs_everyone/ | false | false | default | 30 | {'enabled': True, 'images': [{'id': 'z6deut98o8hf1', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/z6deut98o8hf1.png?width=108&crop=smart&auto=webp&s=5f5379d5633a5ccfe97483ced7ab6ded96aa6a6a', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/z6deut98o8hf1.png?width=216&crop=smart&auto=we... | ||
Updates to my persistent AI project! | 3 | # persistent-ai-memory - Major Update: 11+ Platform Support & SillyTavern MCP Integration
Hey r/LocalLLaMA!
Almost a week ago, I shared my AI memory system project called **persistent-ai-memory**, and the response from this community was incredible. You all provided fantastic feedback and platfor... | 2025-08-05T17:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mig2t0/updates_to_my_persistent_ai_project/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mig2t0 | false | null | t3_1mig2t0 | /r/LocalLLaMA/comments/1mig2t0/updates_to_my_persistent_ai_project/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'XZ08YzP_Q0Q16t4vZ9BG0u6hD16dDEQTyq5gv1nIE3w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XZ08YzP_Q0Q16t4vZ9BG0u6hD16dDEQTyq5gv1nIE3w.png?width=108&crop=smart&auto=webp&s=02d2517d92beedade229a6e83183a0d011eff94f', 'width': 108}, {'height': 108, 'url': 'h... |
Open again AI ? | 14 | 2025-08-05T17:46:22 | Specter_Origin | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mig04o | false | null | t3_1mig04o | /r/LocalLLaMA/comments/1mig04o/open_again_ai/ | false | false | default | 14 | {'enabled': True, 'images': [{'id': 'cdb7s1ocn8hf1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/cdb7s1ocn8hf1.png?width=108&crop=smart&auto=webp&s=2acdebbf926ac3d3e78897882f26ed3b2842b199', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/cdb7s1ocn8hf1.png?width=216&crop=smart&auto=web... | ||
GPT-OSS-120B vs GLM 4.5 Air... | 70 | 2025-08-05T17:45:59 | random-tomato | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mifzqz | false | null | t3_1mifzqz | /r/LocalLLaMA/comments/1mifzqz/gptoss120b_vs_glm_45_air/ | false | false | default | 70 | {'enabled': True, 'images': [{'id': 'w52pmzpcn8hf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/w52pmzpcn8hf1.png?width=108&crop=smart&auto=webp&s=e30881bd9c9be48c9b25c63d349779d2f2ba0152', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/w52pmzpcn8hf1.png?width=216&crop=smart&auto=web... | ||
7900 XTX vs 3090 for a local news/investment pipeline | 1 | I am stuck between a 3090 or a 7900xtx....
I am mostly building a pipeline that will allow me to:
1. Ingest: Pull in tech + financial news automatically.
1. Embed: Convert text into vector embeddings for retrieval.
1. Store: Keep embeddings in a local vector database for fast lookup.
1. Retrieve + Generate: Wh... | 2025-08-05T17:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mifxwi/7900_xtx_vs_3090_for_a_local_newsinvestment/ | Finallyhaveredditt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mifxwi | false | null | t3_1mifxwi | /r/LocalLLaMA/comments/1mifxwi/7900_xtx_vs_3090_for_a_local_newsinvestment/ | false | false | self | 1 | null |
gpt-oss-120b outperforms DeepSeek-R1-0528 in benchmarks | 270 | Here is a table I put together:
| Benchmark | DeepSeek-R1 | DeepSeek-R1-0528 | GPT-OSS-20B | GPT-OSS-120B |
|-----------|-------------|------------------|-------------|--------------|
| **GPQA Diamond** | 71.5 | 81.0 | 71.5 | 80.1 |
| **Humanity's Last Exam** | 8.5 | 17.7 | 17.3 | 19.0 |
| **AIME 2024** | 79.8 | 91.4 ... | 2025-08-05T17:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mifuqk/gptoss120b_outperforms_deepseekr10528_in/ | oobabooga4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mifuqk | false | null | t3_1mifuqk | /r/LocalLLaMA/comments/1mifuqk/gptoss120b_outperforms_deepseekr10528_in/ | false | false | self | 270 | null |
Has anyone had a chance to run gpt-oss locally yet? | 2 | I was going to do some tests this weekend to answer the question:
"What is the best local LLM to run with 96Gb of VRAM" between:
* openai/gpt-oss-120b
* zai-org/GLM-4.5-Air-FP8
* Qwen/Qwen3-Coder-30B-A3B-Instruct
Would love to know what everyone is thinking. | 2025-08-05T17:39:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mift8c/has_anyone_had_a_chance_to_run_gptoss_locally_yet/ | CrowSodaGaming | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mift8c | false | null | t3_1mift8c | /r/LocalLLaMA/comments/1mift8c/has_anyone_had_a_chance_to_run_gptoss_locally_yet/ | false | false | self | 2 | null |
Is the OpenAI recently released open models the same as horizon beta or what exactly? | 0 | Is this the same model (Horizon Beta) on openrouter or not? Because I still see Horizon beta available with its codename on openrouter
I am putting my speculation hats and considering it might be deepseek or something else, what do you guys think? (I can be totally wrong, I usually am)
But I am wondering why it... | 2025-08-05T17:37:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mifqv4/is_the_openai_recently_released_open_models_the/ | i-exist-man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mifqv4 | false | null | t3_1mifqv4 | /r/LocalLLaMA/comments/1mifqv4/is_the_openai_recently_released_open_models_the/ | false | false | self | 0 | null |
gpt-oss-120b can be fine-tuned on a single H100 node! | 32 | Absolutely insane news for fine-tuners. I did **not expect** a fine-tunable Apache 2.0 model. This is literally a pay bump for me. | 2025-08-05T17:35:54 | https://huggingface.co/openai/gpt-oss-120b | entsnack | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mifpr3 | false | null | t3_1mifpr3 | /r/LocalLLaMA/comments/1mifpr3/gptoss120b_can_be_finetuned_on_a_single_h100_node/ | false | false | default | 32 | {'enabled': False, 'images': [{'id': '12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=108&crop=smart&auto=webp&s=292c3d3a2dfa2ce762d4e0ad0113f21057208fb5', 'width': 108}, {'height': 116, 'url': 'h... |
More Detailed Benchmarks for GPT-OSS | 8 | 2025-08-05T17:33:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mifmya/more_detailed_benchmarks_for_gptoss/ | Solid_Antelope2586 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mifmya | false | null | t3_1mifmya | /r/LocalLLaMA/comments/1mifmya/more_detailed_benchmarks_for_gptoss/ | false | false | 8 | null | ||
gpt-oss talks fast but has no free speech. | 5 | 2025-08-05T17:32:37 | shokuninstudio | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mifml4 | false | null | t3_1mifml4 | /r/LocalLLaMA/comments/1mifml4/gptoss_talks_fast_but_has_no_free_speech/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'ifxpxmzqk8hf1', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/ifxpxmzqk8hf1.png?width=108&crop=smart&auto=webp&s=574f193af12e2a98c1da5a2ee41eeb226b654409', 'width': 108}, {'height': 55, 'url': 'https://preview.redd.it/ifxpxmzqk8hf1.png?width=216&crop=smart&auto=webp... | ||
Finally? | 1 | 2025-08-05T17:28:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mifisa/finally/ | kashfi20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mifisa | false | null | t3_1mifisa | /r/LocalLLaMA/comments/1mifisa/finally/ | false | false | 1 | null | ||
Slow HF GPT download speed | 0 | Is anyone else having trouble downloading the GPT-OSS model? My speed is like 2KBps I think HF is throttling everyone | 2025-08-05T17:25:19 | https://www.reddit.com/r/LocalLLaMA/comments/1miff2u/slow_hf_gpt_download_speed/ | Jolly-Phone8982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miff2u | false | null | t3_1miff2u | /r/LocalLLaMA/comments/1miff2u/slow_hf_gpt_download_speed/ | false | false | self | 0 | null |
OpenAI released OSS models! | 10 | First Post!?!?
[https://huggingface.co/openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) | 2025-08-05T17:24:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mifejm/openai_released_oss_models/ | agentzappo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mifejm | false | null | t3_1mifejm | /r/LocalLLaMA/comments/1mifejm/openai_released_oss_models/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': '12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=108&crop=smart&auto=webp&s=292c3d3a2dfa2ce762d4e0ad0113f21057208fb5', 'width': 108}, {'height': 116, 'url': 'h... |
GPT OSS 120b and 20b is Apache 2.0! | 81 | [https://openai.com/index/introducing-gpt-oss/](https://openai.com/index/introducing-gpt-oss/) | 2025-08-05T17:22:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mifc2l/gpt_oss_120b_and_20b_is_apache_20/ | Synaps3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mifc2l | false | null | t3_1mifc2l | /r/LocalLLaMA/comments/1mifc2l/gpt_oss_120b_and_20b_is_apache_20/ | false | false | self | 81 | null |
gpt-oss-120b and 20b GGUFs | 46 | [https://huggingface.co/ggml-org/gpt-oss-20b-GGUF](https://huggingface.co/ggml-org/gpt-oss-20b-GGUF) | 2025-08-05T17:20:53 | https://huggingface.co/ggml-org/gpt-oss-120b-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mifaqv | false | null | t3_1mifaqv | /r/LocalLLaMA/comments/1mifaqv/gptoss120b_and_20b_ggufs/ | false | false | default | 46 | {'enabled': False, 'images': [{'id': 'INENbEwV6ABsrDTZjJHvECpyAUiCQ1gcEo-8476Qbvk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/INENbEwV6ABsrDTZjJHvECpyAUiCQ1gcEo-8476Qbvk.png?width=108&crop=smart&auto=webp&s=5afe16d708797c269a8136d53921bdd6dc5e6ce0', 'width': 108}, {'height': 116, 'url': 'h... |
GPT OSS 120B on openrouter | 6 | Just saw this popping up on openrouter, I think it is released! Only Groq is available as provider. | 2025-08-05T17:19:16 | Zestyclose-Ad-6147 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mif94n | false | null | t3_1mif94n | /r/LocalLLaMA/comments/1mif94n/gpt_oss_120b_on_openrouter/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': '728019fmi8hf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/728019fmi8hf1.jpeg?width=108&crop=smart&auto=webp&s=6cb34f4b3cdc7079d722f016384470177f79c13d', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/728019fmi8hf1.jpeg?width=216&crop=smart&auto=w... | |
GPT-OSS on huggingface now. | 5 | https://huggingface.co/openai/gpt-oss-120b/tree/main | 2025-08-05T17:19:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mif8vr/gptoss_on_huggingface_now/ | __JockY__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mif8vr | false | null | t3_1mif8vr | /r/LocalLLaMA/comments/1mif8vr/gptoss_on_huggingface_now/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=108&crop=smart&auto=webp&s=292c3d3a2dfa2ce762d4e0ad0113f21057208fb5', 'width': 108}, {'height': 116, 'url': 'h... |
OpenAI introduces gpt-oss 20B and 120B models | 5 | 2025-08-05T17:18:50 | https://openai.com/index/introducing-gpt-oss/ | AnotherSoftEng | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1mif8p5 | false | null | t3_1mif8p5 | /r/LocalLLaMA/comments/1mif8p5/openai_introduces_gptoss_20b_and_120b_models/ | false | false | default | 5 | null | |
OpenAI released new open models | 9 | 2025-08-05T17:18:41 | https://www.reddit.com/gallery/1mif8k8 | AloneCoffee4538 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mif8k8 | false | null | t3_1mif8k8 | /r/LocalLLaMA/comments/1mif8k8/openai_released_new_open_models/ | false | false | 9 | null | ||
GPT OSS | 2 | 2025-08-05T17:18:31 | https://www.gpt-oss.com/ | atom_electron | gpt-oss.com | 1970-01-01T00:00:00 | 0 | {} | 1mif8dl | false | null | t3_1mif8dl | /r/LocalLLaMA/comments/1mif8dl/gpt_oss/ | false | false | default | 2 | null | |
**[Theory + Prompt] CME: A Framework for Systems That Grow Through Contradiction** | 0 | Hey folks,
I’ve been developing a framework called **CME – Contradiction · Metabolization · Engine**. It’s a systems-thinking lens that treats **contradiction not as failure, but as fuel**, a recursive driver of identity, coherence, and evolution.
### 🌀 The Core Loop:
1. **Contradiction** – Opposing forces or inter... | 2025-08-05T17:17:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mif7ui/theory_prompt_cme_a_framework_for_systems_that/ | PrompterIsPrompted | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mif7ui | false | null | t3_1mif7ui | /r/LocalLLaMA/comments/1mif7ui/theory_prompt_cme_a_framework_for_systems_that/ | false | false | self | 0 | null |
OpenAI releases open source models: Introducing gpt-oss | 9 | 2025-08-05T17:16:56 | https://openai.com/index/introducing-gpt-oss/ | CharlesStross | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1mif6ro | false | null | t3_1mif6ro | /r/LocalLLaMA/comments/1mif6ro/openai_releases_open_source_models_introducing/ | false | false | default | 9 | null | |
New Open Source Model From OpenAI | 20 | 2025-08-05T17:15:56 | Sensitive-Finger-404 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mif5s3 | false | null | t3_1mif5s3 | /r/LocalLLaMA/comments/1mif5s3/new_open_source_model_from_openai/ | false | false | 20 | {'enabled': True, 'images': [{'id': '7fGlna-luUG0L_AFHTNnLAsmlrBFDSwksqpUhWgm4yg', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/zjyoda41i8hf1.jpeg?width=108&crop=smart&auto=webp&s=f48e312453afe0e7a9f8c8fb68b40275b03e878b', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/zjyoda41i8hf1.jp... | |||
The long-awaited open-weight models by OpenAI have been released! | 7 | [https://openai.com/open-models/](https://openai.com/open-models/) | 2025-08-05T17:15:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mif5ls/the_longawaited_openweight_models_by_openai_have/ | Otis43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mif5ls | false | null | t3_1mif5ls | /r/LocalLLaMA/comments/1mif5ls/the_longawaited_openweight_models_by_openai_have/ | false | false | self | 7 | null |
Claude Opus 4.1 | 0 | https://www.anthropic.com/news/claude-opus-4-1 | 2025-08-05T17:15:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mif4us/claude_opus_41/ | smsp2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mif4us | false | null | t3_1mif4us | /r/LocalLLaMA/comments/1mif4us/claude_opus_41/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'WrJeMYVK0J-k_mNUSf9UOS_kHn5AxibmTBDPOt84T1U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WrJeMYVK0J-k_mNUSf9UOS_kHn5AxibmTBDPOt84T1U.png?width=108&crop=smart&auto=webp&s=21018bf01ce5242d8aeff9589d924de17f2f1850', 'width': 108}, {'height': 113, 'url': 'h... |
Gpt-oss suggests the infamous strawberry question | 1 | Link to try:
https://www.gpt-oss.com/ | 2025-08-05T17:14:44 | caodungcaca | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mif4il | false | null | t3_1mif4il | /r/LocalLLaMA/comments/1mif4il/gptoss_suggest_the_infamous_stawberry_question/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'AolumvQZu1zketdhQhJRD9u5sEXNoaHWtMCkhQt0z1E', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/z2rj299th8hf1.jpeg?width=108&crop=smart&auto=webp&s=563c9927da6db0e1b4b92f93ce3492d6bbe880ce', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/z2rj299th8hf1.jp... | ||
OpenAI OSS models released | 30 | OpenAI OSS models released | 2025-08-05T17:14:06 | --Tintin | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mif3vd | false | null | t3_1mif3vd | /r/LocalLLaMA/comments/1mif3vd/openai_oss_models_released/ | false | false | 30 | {'enabled': True, 'images': [{'id': '1MrLp62USSu6ZPoNn3YQtYR9ypWQ4Uzm1CkoD8Rl5PY', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/7lhybaaph8hf1.jpeg?width=108&crop=smart&auto=webp&s=a26743b391d51dbe324e1ef6f7df7834da436e56', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/7lhybaaph8hf1.jp... | ||
Open models by OpenAI | 45 | 2025-08-05T17:12:10 | https://openai.com/open-models/ | lomero | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1mif1xp | false | null | t3_1mif1xp | /r/LocalLLaMA/comments/1mif1xp/open_models_by_openai/ | false | false | default | 45 | null | |
🚀 OpenAI released their open-weight models!!! | 1,928 | Welcome to the gpt-oss series, OpenAI’s open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of the open models:
gpt-oss-120b — for production, general purpose, high reasoning use cases that fits into a single H100 GPU (117B parameters with ... | 2025-08-05T17:09:35 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miezct | false | null | t3_1miezct | /r/LocalLLaMA/comments/1miezct/openai_released_their_openweight_models/ | false | false | 1,928 | {'enabled': True, 'images': [{'id': '-5MrL_-KIn8zxxzcbQgf7n6F9Xusi-Z4r0GBuK0DdLY', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/1yckal6wg8hf1.jpeg?width=108&crop=smart&auto=webp&s=51219fa655094895201128f25219c319f3488c47', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/1yckal6wg8hf1.jp... | ||
gpt-oss Benchmarks | 70 | 2025-08-05T17:08:55 | Ill-Association-8410 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mieyrn | false | null | t3_1mieyrn | /r/LocalLLaMA/comments/1mieyrn/gptoss_benchmarks/ | false | false | 70 | {'enabled': True, 'images': [{'id': 'CwdyM8Bit10lFxkRIhge9VZMBVcmyBuY7OleHFY7kL0', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/rzxruvqmg8hf1.png?width=108&crop=smart&auto=webp&s=219d3c7ae7469e7f7ab55376b64b6db45bf7b539', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/rzxruvqmg8hf1.png... | |||
OpenAI just dropped two new open-source models | 20 | [https://openai.com/open-models/](https://openai.com/open-models/) | 2025-08-05T17:08:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mieyhh/openai_just_dropped_two_new_opensource_models/ | hannibal27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mieyhh | false | null | t3_1mieyhh | /r/LocalLLaMA/comments/1mieyhh/openai_just_dropped_two_new_opensource_models/ | false | false | self | 20 | null |
OpenAI Harmony | 12 | 2025-08-05T17:08:25 | https://github.com/openai/harmony | VR-Person | github.com | 1970-01-01T00:00:00 | 0 | {} | 1miey9s | false | null | t3_1miey9s | /r/LocalLLaMA/comments/1miey9s/openai_harmony/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'oDkONWaKKO91iK6gOIqT4AJ0OX-NA8K7G-aGFHl_RQw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oDkONWaKKO91iK6gOIqT4AJ0OX-NA8K7G-aGFHl_RQw.png?width=108&crop=smart&auto=webp&s=167dbda2dcf15764b7a726336916c7fb1fa870d2', 'width': 108}, {'height': 108, 'url': 'h... | ||
openai/gpt-oss-20b · Hugging Face | 24 | its show time folks!!
| 2025-08-05T17:07:21 | https://huggingface.co/openai/gpt-oss-20b | ApprehensiveAd3629 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1miex8t | false | null | t3_1miex8t | /r/LocalLLaMA/comments/1miex8t/openaigptoss20b_hugging_face/ | false | false | default | 24 | {'enabled': False, 'images': [{'id': 'oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=108&crop=smart&auto=webp&s=56f93ea81e319c450e5ccbbf073520d2e0a4c3a9', 'width': 108}, {'height': 116, 'url': 'h... |
ollama run gpt-oss | 2 | 2025-08-05T17:06:29 | https://ollama.com/library/gpt-oss | boxingdog | ollama.com | 1970-01-01T00:00:00 | 0 | {} | 1mieweq | false | null | t3_1mieweq | /r/LocalLLaMA/comments/1mieweq/ollama_run_gptoss/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h... | |
openai is releasing open models | 23 | https://openai.com/open-models/ | 2025-08-05T17:06:18 | Beautiful_Box_7153 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1miew86 | false | null | t3_1miew86 | /r/LocalLLaMA/comments/1miew86/openai_is_releasing_open_models/ | false | false | default | 23 | {'enabled': True, 'images': [{'id': 'ipzotk6bg8hf1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/ipzotk6bg8hf1.png?width=108&crop=smart&auto=webp&s=36d5b380a2f7551f0b2ff7d92c93af8da10011b8', 'width': 108}, {'height': 218, 'url': 'https://preview.redd.it/ipzotk6bg8hf1.png?width=216&crop=smart&auto=we... | |
GPT Open sourced? | 1 | [https://openai.com/open-models/](https://openai.com/open-models/) | 2025-08-05T17:05:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mievrp/gpt_open_sourced/ | Lindayz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mievrp | false | null | t3_1mievrp | /r/LocalLLaMA/comments/1mievrp/gpt_open_sourced/ | false | false | self | 1 | null |
GPT-OSS 20B/120B | 9 | [https://huggingface.co/openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
[https://huggingface.co/openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b)
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, a... | 2025-08-05T17:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1miev2b/gptoss_20b120b/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1miev2b | false | null | t3_1miev2b | /r/LocalLLaMA/comments/1miev2b/gptoss_20b120b/ | false | false | self | 9 | null |
ClosedAI is now Open | 0 | 2025-08-05T17:04:52 | biswatma | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mieupy | false | null | t3_1mieupy | /r/LocalLLaMA/comments/1mieupy/closedai_is_now_open/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'oszpmww1g8hf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/oszpmww1g8hf1.jpeg?width=108&crop=smart&auto=webp&s=c363febe4314af579efa9a5f292f649a3bf2dc46', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/oszpmww1g8hf1.jpeg?width=216&crop=smart&auto=... |