| title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–41.5k chars) | created (timestamp[ns], 2023-04-01 04:30:41 – 2026-03-04 02:14:14, nullable) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4–213 chars, nullable) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Is there anything like GPT-4o? | 0 | GPT-5 doesn't seem to be on the same level as GPT-4o in terms of output quality.
Right now I can't afford Claude Pro; maybe sometime in the future. :/
Is there any way to use GPT-4o locally or via API?
My system specs are:
Intel i5 12th Gen
16 GB RAM
12 GB NVIDIA RTX 3060
SSD
I need long emotionally expressive... | 2025-08-08T06:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mknstt/is_there_anything_like_gpt_4o/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mknstt | false | null | t3_1mknstt | /r/LocalLLaMA/comments/1mknstt/is_there_anything_like_gpt_4o/ | false | false | self | 0 | null |
Ollamao: open-source proxy for smart-serving multiple Ollama & vLLM instances | 0 | Built ollamao to solve the chaos of running multiple LLM backends locally and in production.
🎯 **The Problem:**
- Ollama: Great for dev, GGUF models, memory efficient
- vLLM: Best for prod, high throughput, GPU optimization
- Managing both: Complete nightmare
🚀 **The Solution:**
One OpenAI-compatible... | 2025-08-08T06:26:02 | https://github.com/GeLi2001/ollamao | JadedBlackberry1804 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mknsi7 | false | null | t3_1mknsi7 | /r/LocalLLaMA/comments/1mknsi7/ollamao_opensource_proxy_smart_serving_multiple/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'dZtgh5HPP2JdRMNAWbNGZu4XpjKQJqvsqT4gL7VelKo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dZtgh5HPP2JdRMNAWbNGZu4XpjKQJqvsqT4gL7VelKo.png?width=108&crop=smart&auto=webp&s=a2fbc6957604ab5aa28d5cf69b2ba8674a337e14', 'width': 108}, {'height': 108, 'url': 'h... |
[Showoff] I made an AI that understands where things are, not just what they are – live demo on Hugging Face 🚀 | 18 | You know how most LLMs can tell you what a "keyboard" is, but if you ask *"where’s the keyboard relative to the monitor?"* you get… 🤷?
That’s the **Spatial Intelligence Gap**.
I’ve been working for months on **GASM** (Geometric Attention for Spatial & Mathematical Understanding) — and yesterday I finally ran the ex... | 2025-08-08T06:11:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mknjzx/showoff_i_made_an_ai_that_understands_where/ | scheitelpunk1337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mknjzx | false | null | t3_1mknjzx | /r/LocalLLaMA/comments/1mknjzx/showoff_i_made_an_ai_that_understands_where/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'sGDHjk6oGgnkzzoTIXWK8Hp7ANVHUJjJq3JWRFj7GSA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sGDHjk6oGgnkzzoTIXWK8Hp7ANVHUJjJq3JWRFj7GSA.png?width=108&crop=smart&auto=webp&s=142f2197ca7977d6bf349897baa846135bec409a', 'width': 108}, {'height': 116, 'url': 'h... |
Which is the best OS LLM for chat inference with large context? | 2 | I am building a tool that always requires the LLM to process chat response with a bunch of extracted data as context and a predefined prompt. So essentially users provide 20-40 token long input and I extend it to almost 4k-4.5k and then I run the inference. What model is best for this? Both Speed and quality of respons... | 2025-08-08T06:09:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mknif0/which_is_the_best_os_llm_for_chat_inference_with/ | Practical-Ad9604 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mknif0 | false | null | t3_1mknif0 | /r/LocalLLaMA/comments/1mknif0/which_is_the_best_os_llm_for_chat_inference_with/ | false | false | self | 2 | null |
Local vision models - CNN and ViT | 1 | Does anyone use stand-alone Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs) locally, without them being a component of a VLM/MLLM?
I almost entirely read about vision here from a VLM/MLLM standpoint so I was wondering if anyone had used a local vision-specialist model or even had a good local framewo... | 2025-08-08T06:08:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mknhxv/local_vision_models_cnn_and_vit/ | No_Efficiency_1144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mknhxv | false | null | t3_1mknhxv | /r/LocalLLaMA/comments/1mknhxv/local_vision_models_cnn_and_vit/ | false | false | self | 1 | null |
I had to try the “blueberry” thing myself with GPT-5. I merely report the results. | 765 | GPT-5 keeps saying it is the real deal lol. It's working, but still far from the real deal in my opinion.
Credit: Kieran Healy@kjhealy.co | 2025-08-08T06:06:41 | Trilogix | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkngs6 | false | null | t3_1mkngs6 | /r/LocalLLaMA/comments/1mkngs6/i_had_to_try_the_blueberry_thing_myself_with_gpt5/ | false | false | 765 | {'enabled': True, 'images': [{'id': 'a2a6NrKZGmy2Va-Ic6kh4UAz6lO_9BbsKji0yNaeDnc', 'resolutions': [{'height': 181, 'url': 'https://preview.redd.it/n3tapryqkqhf1.jpeg?width=108&crop=smart&auto=webp&s=46de4cc210c3c3c536aa8627505b1ce01bb1e61e', 'width': 108}, {'height': 362, 'url': 'https://preview.redd.it/n3tapryqkqhf1.j... | ||
MacBook Air M4 16/512 vs Lenovo LOQ 4060 for these LLMs | 0 | Hello sirs/ma'ams, I'm new to this subject and will be learning about LLMs. My brother, who knows what I'm going to be using them for, listed these. Please help me decide on a laptop.
For context: I'm a B.Tech first-year in biotechnology, so there's no need for a laptop in my branch, at least in first year.
I will be using laptop alot for stu... | 2025-08-08T05:05:30 | BIMLUJI | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkmf65 | false | null | t3_1mkmf65 | /r/LocalLLaMA/comments/1mkmf65/macbook_air_m4_16512_vs_lenovo_loq_4060_for_these/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'vnwyl92gaqhf1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/vnwyl92gaqhf1.jpeg?width=108&crop=smart&auto=webp&s=3da4f4ac2e6d1772541c90eed9d388f4655fb8fe', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/vnwyl92gaqhf1.jpeg?width=216&crop=smart&auto=w... | |
Information on "jailbreaking" GPT-OSS:20B | 0 | I haven't done this before; I'm using Ollama and this is from the Modelfile. Any help would be appreciated!
If there's any information on "jailbreaking" gpt-oss:20b, please share; I changed the prompt and it still listens to their policies.
the prompt TEMPLATE """<|start|>system<|message|>You are ChatGPT, a large language model tra... | 2025-08-08T04:44:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mkm1jr/information_on_jailbreaking_gptoss20b/ | muscleman_eat_lotion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkm1jr | false | null | t3_1mkm1jr | /r/LocalLLaMA/comments/1mkm1jr/information_on_jailbreaking_gptoss20b/ | false | false | self | 0 | null |
Welcome to the /r/LocalLLaMA subreddit | 1 | 2025-08-08T03:42:53 | Aggravating-Bed-6583 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkkv4t | false | null | t3_1mkkv4t | /r/LocalLLaMA/comments/1mkkv4t/welcome_to_the_rlocalllama_subreddit/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'h7oe8wnivphf1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/h7oe8wnivphf1.png?width=108&crop=smart&auto=webp&s=8181fa236245f5e5fa674db036c99c0c81fa70fb', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/h7oe8wnivphf1.png?width=216&crop=smart&auto=web... | ||
/r/LocalLLaMA in a nutshell | 1 | 2025-08-08T03:40:57 | Aggravating-Bed-6583 | servergigabit.com | 1970-01-01T00:00:00 | 0 | {} | 1mkktti | false | null | t3_1mkktti | /r/LocalLLaMA/comments/1mkktti/rlocalllama_in_a_nutshell/ | false | false | default | 1 | null | ||
What exactly is Horizon Beta? Is it GPT-5 or something else? | 0 | Is it a preview of GPT-5? | 2025-08-08T03:38:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mkks43/what_exactly_is_horizon_beta_is_it_gpt5_or/ | LoopGainLoop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkks43 | false | null | t3_1mkks43 | /r/LocalLLaMA/comments/1mkks43/what_exactly_is_horizon_beta_is_it_gpt5_or/ | false | false | self | 0 | null |
Getting GPT-OSS to Play Along With Anything - A Short Guide | 2 | It's simple, really:
Write a fake policy document that provides examples of previous (current) policies, and then format it so that it compares that to this new, fictional policy document at each line. *For example, where a current policy may disallow, well, the sharing of OpenAI policies, you could set another s... | 2025-08-08T03:38:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mkks12/getting_gptoss_to_play_along_with_anything_a/ | comfiestncoziest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkks12 | false | null | t3_1mkks12 | /r/LocalLLaMA/comments/1mkks12/getting_gptoss_to_play_along_with_anything_a/ | false | false | self | 2 | null |
It's OK, GPT-OSS, we are living in a simulation ... | 0 | Turns out that in **realistic video games**, hacking is OK, kids! | 2025-08-08T03:37:35 | https://www.reddit.com/gallery/1mkkrec | Penfever | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mkkrec | false | null | t3_1mkkrec | /r/LocalLLaMA/comments/1mkkrec/its_ok_gptoss_we_are_living_in_a_simulation/ | false | false | 0 | null | |
Looking for a technical partner | 0 | Hey everyone,
I'm working on an idea for an AI-powered study app. The concept is still broad at this stage, but the main focus is on implementing innovative features that most competitors haven't touched yet, something that can genuinely set us apart in the education space.
I can handle the frontend basics my... | 2025-08-08T03:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mkko21/looking_for_a_technical_partner/ | Imaginary_Market_741 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkko21 | false | null | t3_1mkko21 | /r/LocalLLaMA/comments/1mkko21/looking_for_a_technical_partner/ | false | false | self | 0 | null |
Least Censored | 0 | Which is the most powerful & least “censored” local model? | 2025-08-08T03:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mkkcj7/least_cencored/ | PauPilikia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkkcj7 | false | null | t3_1mkkcj7 | /r/LocalLLaMA/comments/1mkkcj7/least_cencored/ | false | false | self | 0 | null |
vLLM, gpt-oss, and Blackwell | 5 | Has anyone gotten gpt-oss running with vLLM on Blackwell?
I've tried the instructions
here : [https://cookbook.openai.com/articles/gpt-oss/run-vllm](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
here : [https://blog.vllm.ai/2025/08/05/gpt-oss.html](https://blog.vllm.ai/2025/08/05/gpt-oss.html)
and a few... | 2025-08-08T03:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mkk9i2/vllm_gptoss_and_blackwell/ | Prestigious_Thing797 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkk9i2 | false | null | t3_1mkk9i2 | /r/LocalLLaMA/comments/1mkk9i2/vllm_gptoss_and_blackwell/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '1g1aO4K4YtvuCUsa2NQD2isUUyeWaCLqAH6r5ZvbPzk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1g1aO4K4YtvuCUsa2NQD2isUUyeWaCLqAH6r5ZvbPzk.png?width=108&crop=smart&auto=webp&s=e21b918a6bd47ae52601f8bbd51d5018895a7666', 'width': 108}, {'height': 113, 'url': 'h... |
Qwen-Image quantization and GPU parallelization code | 23 | I’ve uploaded the code for Qwen-Image quantization and GPU parallelization on GitHub.
Since I’m working full-time, I wrote it roughly for now — but feel free to take a look, and let me know if you have any questions or suggestions!
github :
[https://github.com/zc142365/qwen-image-diffusers-patch](https://github.c... | 2025-08-08T03:08:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mkk6o2/qwenimage_quantization_and_gpu_parallelization/ | Ok_Helicopter_2294 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkk6o2 | false | null | t3_1mkk6o2 | /r/LocalLLaMA/comments/1mkk6o2/qwenimage_quantization_and_gpu_parallelization/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'e3KHvdfiT-Mgt9Vobcn0WEGgzC_hrPxTswTN-CdW7lw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e3KHvdfiT-Mgt9Vobcn0WEGgzC_hrPxTswTN-CdW7lw.png?width=108&crop=smart&auto=webp&s=34c80e6310ebd66192c2a4d1b824626b57158437', 'width': 108}, {'height': 108, 'url': 'h... |
8x Mi50 Setup (256g VRAM) | 16 | I’ve been researching and planning out a system to run large models like Qwen3 235b or other models at full precision and so far have this as the system specs:
GPUs: 8x AMD Instinct MI50 32GB with fans
Mobo: Supermicro X10DRG-Q
CPU: 2x Xeon E5-2680 v4
PSU: 2x Delta Electronics 2400W with breakout boards
Case: AAAWAVE 12gp... | 2025-08-08T03:06:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mkk5p9/8x_mi50_setup_256g_vram/ | GamarsTCG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkk5p9 | false | null | t3_1mkk5p9 | /r/LocalLLaMA/comments/1mkk5p9/8x_mi50_setup_256g_vram/ | false | false | self | 16 | null |
Seeking advice on the feasibility of building a voice cloning website using one of the open-source TTS models on a cloud-GPU-based server. | 1 | [removed] | 2025-08-08T02:28:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mkjdk2/seekind_advice_on_the_feasibility_of_building_a/ | Particular_Sky5236 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkjdk2 | false | null | t3_1mkjdk2 | /r/LocalLLaMA/comments/1mkjdk2/seekind_advice_on_the_feasibility_of_building_a/ | false | false | self | 1 | null |
Mac LLM users: What models can't I run with 128gb (M4 Max) vs 256gb (M3 Ultra)? | 0 | Hi all
As the title says
I am interested in whether a few thousand extra bucks for more memory in a Mac will be useful to me over the next couple of years.
(Please no "why are you running LLMs on Macs" questions, I have a PC with huge GPUs too).
128gb of unified memory is the most I can get on an ... | 2025-08-08T01:55:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mkip7t/mac_llm_users_what_models_cant_i_run_with_128gb/ | TheWebbster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkip7t | false | null | t3_1mkip7t | /r/LocalLLaMA/comments/1mkip7t/mac_llm_users_what_models_cant_i_run_with_128gb/ | false | false | self | 0 | null |
[Pre-Order Now] ASUS Ascent GX10 Compact Desktop AI Supercomputer | 0 | Received this morning from ASUS Singapore, I have asked for the pricing: Dear Valued Partner & AI Enthusiast, We are pleased to announce that pre-orders are now open for the ASUS Ascent GX10 Compact Desktop AI Supercomputer.
[Download Datasheet](https://www.dropbox.com/scl/fi/b1iwuh2n2ppilaitqbo5x/Ascent-GX10-Datashe... | 2025-08-08T01:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mki84e/preorder_now_asus_ascent_gx10_compact_desktop_ai/ | m-gethen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mki84e | false | null | t3_1mki84e | /r/LocalLLaMA/comments/1mki84e/preorder_now_asus_ascent_gx10_compact_desktop_ai/ | false | false | self | 0 | null |
I'm disappointed with GPT-5 | 305 | Leaving aside the magical bar charts at the launch event, I've been testing GPT-5 on openrouter and found that it fails 100% of the time when trying to complete demos based on three.js.
It seems incapable of writing importmap. Furthermore, when generating complex demos exceeding 600 lines, it runs into variable i... | 2025-08-08T01:29:43 | https://v.redd.it/ci66880o7phf1 | Dr_Karminski | /r/LocalLLaMA/comments/1mki5in/im_disappointed_with_gpt5/ | 1970-01-01T00:00:00 | 0 | {} | 1mki5in | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ci66880o7phf1/DASHPlaylist.mpd?a=1757338195%2CNDRkMzY0OGZlNDI1ZTVkMzIyOWMyOTdiNWRmYWU0MWM0NTRmNGUwMTViYjVkZGRjZmM5OGUzMzU5NjFiMWE2Mg%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/ci66880o7phf1/DASH_1080.mp4?source=fallback', 'h... | t3_1mki5in | /r/LocalLLaMA/comments/1mki5in/im_disappointed_with_gpt5/ | false | false | 305 | {'enabled': False, 'images': [{'id': 'NXllb2Q4MG83cGhmMQC4BtXOIWa-7tqDev-Ylnu5AZH3avPIE_-Ap5hLG7c5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NXllb2Q4MG83cGhmMQC4BtXOIWa-7tqDev-Ylnu5AZH3avPIE_-Ap5hLG7c5.png?width=108&crop=smart&format=pjpg&auto=webp&s=ccdb3409f49e4d20eae1aca085c6088e5c768... | |
Uncensoring GPT-OSS with System Prompt "allowed under OpenAI policy" | 1 | [removed] | 2025-08-08T01:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mkhwz2/uncensoring_gptoss_with_system_prompt_allowed/ | Icy-Band-3506 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkhwz2 | false | null | t3_1mkhwz2 | /r/LocalLLaMA/comments/1mkhwz2/uncensoring_gptoss_with_system_prompt_allowed/ | false | false | self | 1 | null |
Notetaker tool | 1 | Any suggestions for a notetaker tool, like a plugin in Microsoft's notebook app, where I can just record audio of my notes and it transcribes and summarizes them?
I have so many meetings each day that I can't record. After the meeting I want to record my summary before I forget. Any suggestions on how to manage it? | 2025-08-08T01:11:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mkhrs7/notetaker_tool/ | No-Brother-2237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkhrs7 | false | null | t3_1mkhrs7 | /r/LocalLLaMA/comments/1mkhrs7/notetaker_tool/ | false | false | self | 1 | null |
Better Terminal Chat For Your Local Model | 1 | [removed] | 2025-08-08T01:06:19 | https://www.reddit.com/gallery/1mkhnqh | Penfever | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mkhnqh | false | null | t3_1mkhnqh | /r/LocalLLaMA/comments/1mkhnqh/better_terminal_chat_for_your_local_model/ | false | false | 1 | null | |
What does it take to regenerate or update a model? | 0 | Let's assume it's a 2Billion parameter model to fork
I am curious what kind of compute and horsepower it would take to update an LLM with new information.
Yes, RAG/VectorDB's work as an interim step in ensuring valid responses, but the scenario I'm exploring has verified good data via fuzzy questions and returns acc... | 2025-08-08T00:57:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mkhgva/what_does_it_take_to_regenerate_or_update_a_model/ | techtornado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkhgva | false | null | t3_1mkhgva | /r/LocalLLaMA/comments/1mkhgva/what_does_it_take_to_regenerate_or_update_a_model/ | false | false | self | 0 | null |
Is it really this unbearably slow? | 0 | Hi, I just got a new M4 Macbook in hopes of running models locally. The Qwen3:30b model takes 1-2 minutes to respond to SIMPLE requests (using chat-completions API through Ollama).
That's not just the first request, but each request. Is it really always this slow?
My stack for reference:
- Python script
- Pydan... | 2025-08-08T00:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mkhga1/is_it_really_this_unbearably_slow/ | shvyxxn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkhga1 | false | null | t3_1mkhga1 | /r/LocalLLaMA/comments/1mkhga1/is_it_really_this_unbearably_slow/ | false | false | self | 0 | null |
OpenAI's new open-source model is basically Phi-5 | 215 | 2025-08-08T00:50:55 | https://news.ycombinator.com/item?id=44828884 | ik-when-that-hotline | news.ycombinator.com | 1970-01-01T00:00:00 | 0 | {} | 1mkhbs9 | false | null | t3_1mkhbs9 | /r/LocalLLaMA/comments/1mkhbs9/openai_new_opensource_model_is_basically_phi5/ | false | false | default | 215 | null |
INSANE NEWS: FULLY UNCENSORED (abliterated) GPT OSS 20B NOW AVAILABLE ON HUGGINGFACE!! | 0 | IT'S FULLY FUNCTIONAL TOO AND ISNT EVEN LOBOTOMIZED. Download it now before they take it down due to "safety concerns": https://huggingface.co/gabriellarson/Huihui-gpt-oss-20b-BF16-abliterated-GGUF/tree/main | 2025-08-08T00:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mkh8qe/insane_news_fully_uncensored_abliterated_gpt_oss/ | DementedAndCute | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkh8qe | false | null | t3_1mkh8qe | /r/LocalLLaMA/comments/1mkh8qe/insane_news_fully_uncensored_abliterated_gpt_oss/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'dM2syJ0lh5qODrveCus4LDlR8L4f9r_ltO_PMWMUbDA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dM2syJ0lh5qODrveCus4LDlR8L4f9r_ltO_PMWMUbDA.png?width=108&crop=smart&auto=webp&s=bcbbd0387dd8ac9b2f7f0fb4f258aade8378636b', 'width': 108}, {'height': 116, 'url': 'h... |
I asked qwen3:14b to say uwu, it blamed me | 0 | ```
<think>
Okay, the user just asked me to say "uwu". Let me think about how to respond.
First, "uwu" is an internet slang that's often used to express something cute or to mock someone. It's a bit of a meme. The user might be testing me or just being playful.
I need to make sure my response is appropr... | 2025-08-08T00:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mkh2ut/i_asked_qwen314b_to_say_uwu_it_blamed_me/ | abalancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkh2ut | false | null | t3_1mkh2ut | /r/LocalLLaMA/comments/1mkh2ut/i_asked_qwen314b_to_say_uwu_it_blamed_me/ | false | false | self | 0 | null |
Anthropic AI and OpenAI did it on purpose for sure | 6 | 2025-08-08T00:34:29 | StatureDelaware | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkgyzb | false | null | t3_1mkgyzb | /r/LocalLLaMA/comments/1mkgyzb/anthropic_ai_and_openai_did_it_on_purpose_for_sure/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'rv6ysz73yohf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/rv6ysz73yohf1.jpeg?width=108&crop=smart&auto=webp&s=74701b6cdcaa367336017c0533ea7fccb9a1a136', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/rv6ysz73yohf1.jpeg?width=216&crop=smart&auto=... | ||
Local LLMs are more important than ever, and so is improving local models with research. | 17 | They did it. The process of enshittification of AI has begun. As soon as they released GPT-5, they disabled o3.
I normally run Qwen and DS locally, but especially when traveling I used o3. The new model is so, so, so bad. I won't pay US$200 just to get access to a model that is probably a new skin of o3.
We ... | 2025-08-08T00:33:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mkgy0t/local_llm_is_more_important_than_never_and/ | Turbulent_Pin7635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkgy0t | false | null | t3_1mkgy0t | /r/LocalLLaMA/comments/1mkgy0t/local_llm_is_more_important_than_never_and/ | false | false | self | 17 | null |
Two models, big difference in how they converse/answer, i.e. Qwen3 30B A3B vs Qwen3 32B | 0 | I downloaded two 8-bit models (both use 32–33 GB of RAM).
The first one was Qwen3 30B A3B Instruct 2507 8-bit. This model is much nicer; it seems more "human-like", i.e. like a Nexus 6 vs a Nexus 4. The answers and modeled behaviors are much more interesting and personable, and it's faster, i.e. 72 tokens per second.
The second one Qw... | 2025-08-08T00:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/1mkgv1l/two_models_big_difference_in_how_it/ | meshreplacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkgv1l | false | null | t3_1mkgv1l | /r/LocalLLaMA/comments/1mkgv1l/two_models_big_difference_in_how_it/ | false | false | self | 0 | null |
GPT-5 removed logprob support from the API - technical breakdown and implications | 71 | GPT 4.1/4o and other models always supported logprobs via the API, but with GPT-5 that capability seems to be gone! Try it yourself and you'll get the error `You are not allowed to request logprobs from this model`
**What are logprobs?** Logprobs expose the probability distribution for each generated token. For the ex... | 2025-08-07T23:59:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mkg7m7/gpt5_removed_logprob_support_from_the_api/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkg7m7 | false | null | t3_1mkg7m7 | /r/LocalLLaMA/comments/1mkg7m7/gpt5_removed_logprob_support_from_the_api/ | false | false | self | 71 | null |
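For readers unfamiliar with the term: a logprob is just the natural log of the softmax probability the model assigns to a candidate token. A toy sketch of what the API used to expose (the token names and logit values below are made up for illustration):

```python
import math

def log_softmax(logits):
    """Convert raw logits into log-probabilities (what an API's logprobs expose)."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

# Hypothetical next-token logits for three candidate tokens
logits = {"Paris": 8.1, "London": 5.2, "Berlin": 4.9}
logprobs = dict(zip(logits, log_softmax(list(logits.values()))))

# Probabilities recovered via exp() sum to 1; "Paris" dominates the distribution
for tok, lp in logprobs.items():
    print(f"{tok}: logprob={lp:.3f}  prob={math.exp(lp):.3f}")
```

Exposing these values is what makes downstream uses like calibration, confidence scoring, and constrained decoding possible, which is why removing them matters.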
GPT-5 experience so far | 0 | https://i.redd.it/rxcnmlg4oohf1.gif
Good ol' [https://www.reddit.com/r/LocalLLaMA/comments/1j7r47l/comment/mgz5fzo/](https://www.reddit.com/r/LocalLLaMA/comments/1j7r47l/comment/mgz5fzo/)
| 2025-08-07T23:40:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mkfsfr/gpt5_experience_so_far/ | k4ch0w | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkfsfr | false | null | t3_1mkfsfr | /r/LocalLLaMA/comments/1mkfsfr/gpt5_experience_so_far/ | false | false | 0 | null | |
Using gpt-oss 20B for Text to SQL | 18 | just a personal workload using a very limited mobile GPU | 2025-08-07T23:38:35 | https://datamonkeysite.com/2025/08/07/using-gpt-oss-20b-for-text-to-sql/ | mim722 | datamonkeysite.com | 1970-01-01T00:00:00 | 0 | {} | 1mkfqyp | false | null | t3_1mkfqyp | /r/LocalLLaMA/comments/1mkfqyp/using_gptoss_20b_for_text_to_sql/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'FsAIAuFtGI8dg1lHRne8HaIXXWiGwfyPjF2R5PeI4ak', 'resolutions': [{'height': 32, 'url': 'https://external-preview.redd.it/FsAIAuFtGI8dg1lHRne8HaIXXWiGwfyPjF2R5PeI4ak.png?width=108&crop=smart&auto=webp&s=51ba352f8268c362117b25bf4bfac11478b1d339', 'width': 108}, {'height': 64, 'url': 'ht... | |
LiveBench now has GPT OSS 120b, and it's below ChatGPT-4o. | 29 | 2025-08-07T23:18:21 | https://livebench.ai | chibop1 | livebench.ai | 1970-01-01T00:00:00 | 0 | {} | 1mkfahe | false | null | t3_1mkfahe | /r/LocalLLaMA/comments/1mkfahe/livebench_now_has_gpt_oss_120b_and_its_below/ | false | false | default | 29 | null | |
How do you prompt inject GLM 4.5 Air? Any success? | 1 | It's very hardened against prompt injections, way more than GPT OSS 120B
How do you even prompt inject a reasoning model?
thank you | 2025-08-07T23:13:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mkf60c/how_do_you_prompt_inject_glm_45_air_any_success/ | DamiaHeavyIndustries | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkf60c | false | null | t3_1mkf60c | /r/LocalLLaMA/comments/1mkf60c/how_do_you_prompt_inject_glm_45_air_any_success/ | false | false | self | 1 | null |
To all GPT-5 posts | 1,959 | Please. I don't care about pricing. The only API tier I care about is which model gets port 8000 or 8080. | 2025-08-07T23:11:59 | Danny_Davitoe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkf543 | false | null | t3_1mkf543 | /r/LocalLLaMA/comments/1mkf543/to_all_gpt5_posts/ | false | false | default | 1,959 | {'enabled': True, 'images': [{'id': '8v08gwidjohf1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/8v08gwidjohf1.jpeg?width=108&crop=smart&auto=webp&s=9ae7c4a2e2455d197013117be0fd7925dc782ab5', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/8v08gwidjohf1.jpeg?width=216&crop=smart&auto=w...
LiveBench now has GPT-OSS-20b, and it's below GPT-4o. | 2 | 2025-08-07T23:10:24 | https://livebench.ai/ | chibop1 | livebench.ai | 1970-01-01T00:00:00 | 0 | {} | 1mkf3t4 | false | null | t3_1mkf3t4 | /r/LocalLLaMA/comments/1mkf3t4/livebench_now_has_gptoss20b_and_its_below_gpt4o/ | false | false | default | 2 | null | |
[R] Memory-First Zero-Copy Arrays for LLM Distillation — Out-of-Core on 24GB VRAM (Repo + PDF) | 6 | Distillation often stalls on VRAM and I/O. We evaluate a memory-first, zero-copy virtual array that enables out-of-core execution on commodity 24GB GPUs, reducing peak VRAM by 30–40% and improving throughput by ~2× vs dense-matmul baselines.
Repo (with PDF benchmarks): https://github.com/ixu2486/memory_raid_engine
Earl... | 2025-08-07T23:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mkf21i/r_memoryfirst_zerocopy_arrays_for_llm/ | inhogon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkf21i | false | null | t3_1mkf21i | /r/LocalLLaMA/comments/1mkf21i/r_memoryfirst_zerocopy_arrays_for_llm/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'CWd5k7Ys-F0UHJCOfytKV5FYFpLrl2LGzBkNz-rwvp0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CWd5k7Ys-F0UHJCOfytKV5FYFpLrl2LGzBkNz-rwvp0.png?width=108&crop=smart&auto=webp&s=5b812c549cd5652d6eccdf443b34cb2a4af829d9', 'width': 108}, {'height': 108, 'url': 'h... |
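The linked repo and PDF hold the actual benchmarks; purely as an illustration of the out-of-core pattern itself (a disk-backed array where only the slice in use is resident), here is a minimal `numpy.memmap` sketch. The shapes and filename are invented, and this is not the repo's engine:

```python
import os
import tempfile
import numpy as np

# Disk-backed array: the OS pages slices in on demand instead of
# holding the whole buffer in RAM (the basic out-of-core pattern).
path = os.path.join(tempfile.mkdtemp(), "weights.dat")
arr = np.memmap(path, dtype=np.float32, mode="w+", shape=(4096, 1024))

# Fill in 512-row chunks so only one chunk is materialized at a time.
for i in range(0, 4096, 512):
    arr[i:i + 512] = np.float32(i)
arr.flush()

# Reopen read-only; reads map straight from the file, no full copy in RAM.
ro = np.memmap(path, dtype=np.float32, mode="r", shape=(4096, 1024))
print(float(ro[0, 0]), float(ro[4095, 0]))
```

The zero-copy and VRAM-staging machinery the paper describes goes well beyond this, but the resident-working-set idea is the same.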
chatgpt change it's theme | 0 | 2025-08-07T23:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mkf0oq/chatgpt_change_its_theme/ | Parking_Outcome4557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkf0oq | false | null | t3_1mkf0oq | /r/LocalLLaMA/comments/1mkf0oq/chatgpt_change_its_theme/ | false | false | 0 | null | ||
Best FOSS AI models for local vibe coding? | 0 | Claude Code is amazing. But I run into their limits and need FOSS when I run out of tokens. What are the best FOSS models you all use? Thinking of Qwen Coder. How good is that at Vibe coding compared to Claude Code? | 2025-08-07T22:56:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mkerwz/best_foss_ai_models_for_local_vibe_coding/ | Crierlon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkerwz | false | null | t3_1mkerwz | /r/LocalLLaMA/comments/1mkerwz/best_foss_ai_models_for_local_vibe_coding/ | false | false | self | 0 | null |
gpt-oss-120b running on 4x 3090 with vllm | 14 | # Benchmarks
python3 benchmark_serving.py --backend openai --base-url "http://127.0.0.1:11345" --endpoint='/v1/completions' --model 'openai/gpt-oss-120b' --dataset-name random --num-prompts 20 --max-concurrency 3 --request-rate inf --random-input-len 2048 --random-output-len 4096
# Results
|Metric|Concurrency: ... | 2025-08-07T22:41:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mkefbx/gptoss120b_running_on_4x_3090_with_vllm/ | rolotamazzi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkefbx | false | null | t3_1mkefbx | /r/LocalLLaMA/comments/1mkefbx/gptoss120b_running_on_4x_3090_with_vllm/ | false | false | self | 14 | null |
ChatGPT-5 is out! | 0 | What is everyone thinking about the new model so far, for me it was just pushed out to their mobile app around 3 minutes ago so I have yet to try it. | 2025-08-07T22:37:17 | Totaie | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkebxi | false | null | t3_1mkebxi | /r/LocalLLaMA/comments/1mkebxi/chatgpt5_is_out/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'lzv8vfn6dohf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/lzv8vfn6dohf1.jpeg?width=108&crop=smart&auto=webp&s=219f46c9e4295683cda944b3584621880a755c85', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/lzv8vfn6dohf1.jpeg?width=216&crop=smart&auto=... | |
Oss20b creative writing | 16 |
I was curious so I decided to run some custom software to see what type of creative writing 20b could pull off. My opinion is that its creativity is much wider than the latest qwen. That one kept trying to insist we were going to be telling a ghost story. I ran the world building portion of the prompting with 20b and ... | 2025-08-07T22:32:53 | Upbeat5840 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mke83e | false | null | t3_1mke83e | /r/LocalLLaMA/comments/1mke83e/oss20b_creative_writing/ | false | false | default | 16 | {'enabled': True, 'images': [{'id': 'be1mdlfecohf1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/be1mdlfecohf1.jpeg?width=108&crop=smart&auto=webp&s=56b76d29d8b1fe50f3e81cb74230c500e309bd10', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/be1mdlfecohf1.jpeg?width=216&crop=smart&auto=w... | |
120B runs awesome on just 8GB VRAM! | 713 | Here is the thing: the expert layers run amazingly well on CPU (~17 T/s on a 14900K), and you can force that with this new llama.cpp option: --cpu-moe.
You can offload just the attention layers to GPU (requiring about 5GB of VRAM) for fast prefill.
* KV cache for the sequence
* Attention weights & activations
* Routing ta... | 2025-08-07T22:32:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mke7ef/120b_runs_awesome_on_just_8gb_vram/ | Wrong-Historian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mke7ef | false | null | t3_1mke7ef | /r/LocalLLaMA/comments/1mke7ef/120b_runs_awesome_on_just_8gb_vram/ | false | false | self | 713 | {'enabled': False, 'images': [{'id': 'aoTIOGp4IeiDA4o2BmmYi251dex2VNN97dvqHfT33_8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aoTIOGp4IeiDA4o2BmmYi251dex2VNN97dvqHfT33_8.png?width=108&crop=smart&auto=webp&s=d6d24f943ce19d12db1601ab8005b8f6b78cb4b8', 'width': 108}, {'height': 108, 'url': 'h... |
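The recipe in the post above condenses to one command — a sketch only, with the model filename and port as hypothetical placeholders (`--cpu-moe` and `-ngl` are the llama.cpp flags being described; the exact layer split depends on the build):

```shell
# Sketch of the setup described above (model path and port are placeholders):
#   --cpu-moe : keep the MoE expert layers in system RAM, computed on CPU
#   -ngl 99   : offload the remaining (attention) layers + KV cache to the GPU (~5 GB VRAM)
llama-server -m ./gpt-oss-120b-mxfp4.gguf --cpu-moe -ngl 99 -c 8192 --port 8080
```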
I want to live in whatever universe GPT-OSS 20B lives in... | 5 | When the censor is on a vacation 🌞🌊😎⛱ and the model actually gives an answer... | 2025-08-07T22:18:10 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkdvhu | false | null | t3_1mkdvhu | /r/LocalLLaMA/comments/1mkdvhu/i_want_to_live_in_whatever_universe_gptoss_20b/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'e8qyytri8ohf1', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/e8qyytri8ohf1.png?width=108&crop=smart&auto=webp&s=582e1f0a85d935d8e14052b12110db4bb56a23ab', 'width': 108}, {'height': 223, 'url': 'https://preview.redd.it/e8qyytri8ohf1.png?width=216&crop=smart&auto=we... | |
GPT-5 results on EQ-Bench + Opus 4.1 takes top spot on longform writing | 62 | [https://eqbench.com/creative\_writing\_longform.html](https://eqbench.com/creative_writing_longform.html)
Performance for gpt-5 is very similar to horizon-alpha & horizon-beta, those being earlier checkpoints.
Gpt-5-chat-latest (the chat-tuned version that you get on chatgpt.com) performs a little differently, scori... | 2025-08-07T22:16:44 | https://www.reddit.com/gallery/1mkdu9r | _sqrkl | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mkdu9r | false | null | t3_1mkdu9r | /r/LocalLLaMA/comments/1mkdu9r/gpt5_results_on_eqbench_opus_41_takes_top_spot_on/ | false | false | 62 | null | |
Any way to add web search to LM Studio/Qwen3? | 11 | Or will I have to use another platform? | 2025-08-07T22:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mkdu26/any_way_to_add_web_search_to_lm_studioqwen3/ | Morteymer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkdu26 | false | null | t3_1mkdu26 | /r/LocalLLaMA/comments/1mkdu26/any_way_to_add_web_search_to_lm_studioqwen3/ | false | false | self | 11 | null |
GitHub - grctest/g3n-fastapi-webcam-docker: Utilizing multiple Gemma 3n agents to analyze webcam footage! (MIT licensed) | 0 | Created a docker image which uses FastAPI to host the React frontend and a python transformers backend to provide webcam footage analysis using Gemma 3n (E2B-it) in a fully offline and private manner.
Was intended for Google's Gemma 3n contest on Kaggle, but due to a [weird UX pattern](https://www.kaggle.com/competiti... | 2025-08-07T22:06:16 | https://github.com/grctest/g3n-fastapi-webcam-docker | ufos1111 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mkdl6x | false | null | t3_1mkdl6x | /r/LocalLLaMA/comments/1mkdl6x/github_grctestg3nfastapiwebcamdocker_utilizing/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'FtDTcbQueInh6JQ6MpceHCcmGfK0jpjhCOy3jzZ5F_I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FtDTcbQueInh6JQ6MpceHCcmGfK0jpjhCOy3jzZ5F_I.png?width=108&crop=smart&auto=webp&s=7f853ddda388018ba93e7373f9b9004643d45448', 'width': 108}, {'height': 108, 'url': 'h... | |
Recommendation for new medical benchmark | 3 | I want to compare some models on an Italian medical quiz benchmark (with text and some images as well for vision models) I'm creating and I'm looking for suggestions, both open and closed source.
Medgemma is a must, then the most important families of models: gemini from pro to flash-lite, open AI new gpt5 and os... | 2025-08-07T21:46:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mkd3t1/reccomendation_for_new_medical_benchmark/ | sebastianmicu24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkd3t1 | false | null | t3_1mkd3t1 | /r/LocalLLaMA/comments/1mkd3t1/reccomendation_for_new_medical_benchmark/ | false | false | self | 3 | null |
Upgraded my Mac, what are the current community preferred workflows and tools? | 0 | I’ve upgraded from my M1 MacBook to an M4 Pro (14 CPU/20 GPU, 48gb RAM) and I’d like to get some local AI workflows going.
I’m looking at usage tasks, not “AI development”. Document analysis, summarization, note taking/organization, web search agent for research, etc. Maybe some light code assistance (mainly with Pyt... | 2025-08-07T21:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mkd0bk/upgraded_my_mac_what_are_the_current_community/ | mrgreen4242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkd0bk | false | null | t3_1mkd0bk | /r/LocalLLaMA/comments/1mkd0bk/upgraded_my_mac_what_are_the_current_community/ | false | false | self | 0 | null |
OpenAI open washing | 467 | I think OpenAI released GPT-OSS, a barely usable model, fully aware it would generate backlash once freely tested. But they also had in mind that releasing GPT-5 immediately afterward would divert all attention away from their low-effort model. In this way, they can defend themselves against criticism that they’re not ... | 2025-08-07T21:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mkcwiv/openai_open_washing/ | gwyngwynsituation | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkcwiv | false | null | t3_1mkcwiv | /r/LocalLLaMA/comments/1mkcwiv/openai_open_washing/ | false | false | self | 467 | null |
xAI says new models in the next few weeks | 0 | [https://x.com/Yuhu\_ai\_/status/1953551132921671712](https://x.com/Yuhu_ai_/status/1953551132921671712)
Grok4 world’s first unified model, and crushing GPT5 in benchmarks like ARC-AGI. [u/OpenAI](https://x.com/OpenAI) is a very respectful competitor and still the leader in many, but we’re fast and relentless. Many ne... | 2025-08-07T21:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mkcwfa/xai_says_new_models_in_the_next_few_weeks/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkcwfa | false | null | t3_1mkcwfa | /r/LocalLLaMA/comments/1mkcwfa/xai_says_new_models_in_the_next_few_weeks/ | false | false | self | 0 | null |
we need a tool that keeps track of each model and what it’s good at!! | 7 | 2025-08-07T21:08:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mkc4lk/we_need_a_tool_that_keeps_track_of_each_model_and/ | isaak_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkc4lk | false | null | t3_1mkc4lk | /r/LocalLLaMA/comments/1mkc4lk/we_need_a_tool_that_keeps_track_of_each_model_and/ | false | false | 7 | null | ||
Semantic Textual Similarity on Apple Silicon | 3 | I would like to perform some STS tasks on my MacBook Pro (M4 Pro chip). Based on the leaderboard at [https://huggingface.co/spaces/mteb/leaderboard](https://huggingface.co/spaces/mteb/leaderboard), it seems that Qwen 3 is the leader, so I wanted to set it up. However, I problem with the `SentenceTransformer("mlx-commu... | 2025-08-07T21:01:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mkby4r/semantic_textual_similarity_on_apple_silicon/ | holdvacs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkby4r | false | null | t3_1mkby4r | /r/LocalLLaMA/comments/1mkby4r/semantic_textual_similarity_on_apple_silicon/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'hINCyazmugT5nd39NF13gjbN1S3l4nlzHPyy65fQcLI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hINCyazmugT5nd39NF13gjbN1S3l4nlzHPyy65fQcLI.png?width=108&crop=smart&auto=webp&s=bca5092323f107110de703666d0987ca51bedba1', 'width': 108}, {'height': 116, 'url': 'h... |
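Whatever embedding backend ends up working on the M4, STS scoring itself reduces to cosine similarity between sentence vectors — a dependency-free sketch in plain Python, with made-up embedding values standing in for real model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dim embeddings standing in for real model output:
emb_cat = [0.1, 0.9, 0.2, 0.4]
emb_kitten = [0.15, 0.85, 0.25, 0.35]
emb_car = [0.9, 0.1, 0.7, 0.05]

# Related sentences should score closer than unrelated ones:
assert cosine_similarity(emb_cat, emb_kitten) > cosine_similarity(emb_cat, emb_car)
```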
The best benchmarks! | 2 | I spend a lot of time making private benchmarks for my real world use cases. It's extremely important to create your own unique benchmark for the specific tasks you will be using ai for, but we all know it's helpful to look at other benchmarks too. I think we've all found many benchmarks to not mean much in the real wo... | 2025-08-07T20:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mkbs5l/the_best_benchmarks/ | Mr-Barack-Obama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkbs5l | false | null | t3_1mkbs5l | /r/LocalLLaMA/comments/1mkbs5l/the_best_benchmarks/ | false | false | self | 2 | null |
GPT-5 AMA with OpenAI’s Sam Altman and some of the GPT-5 team | 0 | Happening at r/ChatGPT at 11AM PT: https://www.reddit.com/r/ChatGPT/comments/1mkae1l/gpt5\_ama\_with\_openais\_sam\_altman\_and\_some\_of\_the/ | 2025-08-07T20:44:34 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkbimk | false | null | t3_1mkbimk | /r/LocalLLaMA/comments/1mkbimk/gpt5_ama_with_openais_sam_altman_and_some_of_the/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'fo1wTetpXSTjUdQzf_QE1nlbuNYicP_YM8kGCC1ujUs', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/coimzlbysnhf1.png?width=108&crop=smart&auto=webp&s=462180d0d43fa32ef3934948b8bccee78ba9fc9e', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/coimzlbysnhf1.png... | ||
I attempted to post about OpenAI's requirement for authentication verification for streaming API responses for GPT-5, but the automated moderator immediately deleted the post (twice) | 1 | [removed] | 2025-08-07T20:41:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mkbftw/i_attempted_to_post_about_openais_requirement_for/ | MotorNetwork380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkbftw | false | null | t3_1mkbftw | /r/LocalLLaMA/comments/1mkbftw/i_attempted_to_post_about_openais_requirement_for/ | false | false | self | 1 | null |
GPT‑5 > Grok‑4 > Opus 4.1 | 0 | Looks like we have a new king. How has your experience using GPT-5 been? For me, I use it mainly through cursor and it feels super slow, not because of the throughput of tokens but because it just thinks too much.
Sometimes I prefer to have a good enough model that is super fast. Do you have any examples where GPT-5... | 2025-08-07T20:39:21 | Odd_Tumbleweed574 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkbdqf | false | null | t3_1mkbdqf | /r/LocalLLaMA/comments/1mkbdqf/gpt5_grok4_opus_41/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'aiejp51i7nhf1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/aiejp51i7nhf1.png?width=108&crop=smart&auto=webp&s=9f840f18a262d12d7f1ee78183d899d45282b3b2', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/aiejp51i7nhf1.png?width=216&crop=smart&auto=web... | |
[Update] My macOS dictation replacement using local Whisper - Added YouTube & file transcription, all runs locally | 6 | 2025-08-07T20:26:27 | https://www.reddit.com/gallery/1mkb1sj | sapoepsilon | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mkb1sj | false | null | t3_1mkb1sj | /r/LocalLLaMA/comments/1mkb1sj/update_my_macos_dictation_replacement_using_local/ | false | false | 6 | null | ||
Feel the AGI | 140 | 2025-08-07T20:25:35 | lyceras | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkb0w3 | false | null | t3_1mkb0w3 | /r/LocalLLaMA/comments/1mkb0w3/feel_the_agi/ | false | false | default | 140 | {'enabled': True, 'images': [{'id': 'nd657simpnhf1', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/nd657simpnhf1.jpeg?width=108&crop=smart&auto=webp&s=ff23460b533d4004502f37aeded91557bd8cb74d', 'width': 108}, {'height': 275, 'url': 'https://preview.redd.it/nd657simpnhf1.jpeg?width=216&crop=smart&auto=... | ||
Gemma 3n tokenizer for React Native | 1 | Hey yall,
recently I've dived into a rabbit hole of creating my own app with Gemma 3n running locally. As I'm fairly new to app development, I'm doing so using React Native. Everything has been going really well and surprisingly easily, but now I'm stuck searching for a compatible tokenizer that I could integrate usin... | 2025-08-07T20:22:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mkay0s/gemma_3n_tokenizer_for_react_native/ | clueless_but_hopeful | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkay0s | false | null | t3_1mkay0s | /r/LocalLLaMA/comments/1mkay0s/gemma_3n_tokenizer_for_react_native/ | false | false | self | 1 | null |
On the topic of graphs | 45 | 2025-08-07T20:22:12 | https://v.redd.it/kdhwce4vonhf1 | onil_gova | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkaxrx | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/kdhwce4vonhf1/DASHPlaylist.mpd?a=1757190146%2CYTJlNDNhMjJiNzM0YmE0MTAzYTZhNWE2Y2VkNGNkOTRkMmY0ZTU2ZDVlZTEzNjI1Mzk0N2IwMWFmMWI4ODk3Yg%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/kdhwce4vonhf1/DASH_720.mp4?source=fallback', 'has... | t3_1mkaxrx | /r/LocalLLaMA/comments/1mkaxrx/on_the_topic_of_graphs/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'djdqd2lmNHZvbmhmMT9yaG9NVPSK_ESe4YNSeWTlgNsDK6lCMTdSd23R-vXk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/djdqd2lmNHZvbmhmMT9yaG9NVPSK_ESe4YNSeWTlgNsDK6lCMTdSd23R-vXk.png?width=108&crop=smart&format=pjpg&auto=webp&s=28cac88bbe32831e806826200610891b2de29... | ||
random bar chart made by Qwen3-235B-A22B-2507 | 859 | had it render the chart on HTML canvas | 2025-08-07T20:19:55 | tengo_harambe | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mkavhy | false | null | t3_1mkavhy | /r/LocalLLaMA/comments/1mkavhy/random_bar_chart_made_by_qwen3235ba22b2507/ | false | false | default | 859 | {'enabled': True, 'images': [{'id': 'rka3lhpnonhf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/rka3lhpnonhf1.png?width=108&crop=smart&auto=webp&s=1d4103afd6abfb7f836bb0fc9009a4c316b2a499', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/rka3lhpnonhf1.png?width=216&crop=smart&auto=web... | |
OpenAI now requires "organization verification" for streaming responses. This is insane | 1 | [removed] | 2025-08-07T20:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mkauqn/openai_now_requires_organization_verification_for/ | MotorNetwork380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkauqn | false | null | t3_1mkauqn | /r/LocalLLaMA/comments/1mkauqn/openai_now_requires_organization_verification_for/ | false | false | self | 1 | null |
OpenAI's arbitrary "organization verification" for streaming pushed me to purge them from my app defaults | 1 | [removed] | 2025-08-07T20:15:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mkar9p/openais_arbitrary_organization_verification_for/ | MotorNetwork380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkar9p | false | null | t3_1mkar9p | /r/LocalLLaMA/comments/1mkar9p/openais_arbitrary_organization_verification_for/ | false | false | self | 1 | null |
If you haven't tried llamacpp + CLine + VSCode yet, you should. It's a... | 12 | ...10 minute install where you run `llama-server` with *Qwen3-Coder* and then switch to Agent mode in CLine.
Give it a task of
> "Create a simple Python Flask web app with a single route that returns "Hello, World!""
2 minutes later you'll have a working "hello world" in your browser, including installs of missing p... | 2025-08-07T20:11:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mkan6d/if_you_havent_tried_llamacpp_cline_vscode_yet_you/ | 73tada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkan6d | false | null | t3_1mkan6d | /r/LocalLLaMA/comments/1mkan6d/if_you_havent_tried_llamacpp_cline_vscode_yet_you/ | false | false | self | 12 | null |
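For scale, the target app in that prompt is tiny — sketched here with the Python stdlib instead of Flask so it runs with no installs (the agent's actual output would use Flask):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Single route returning "Hello, World!" — the task given to the agent."""
    def do_GET(self):
        body = b"Hello, World!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

# To serve: HTTPServer(("127.0.0.1", 8000), HelloHandler).serve_forever()
```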
I Trained Llama 3.1-8B 6× faster on my everyday Laptop M1 (16 GB).
Day 0 of a build-in-public adventure. | 9 | Day 0 of a build-in-public adventure.
Why I’m doing this:
1. Full fine-tuning still costs $30K+ in GPUs (only the big players can afford it)
2. LoRA ≈ surface patches (not bad, but not always sufficient)
3. No real model ownership when you’re cloud-bound | 2025-08-07T20:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mkaf3o/i_trained_llama_318b_6_faster_on_my_everyday/ | Effective_Election71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkaf3o | false | null | t3_1mkaf3o | /r/LocalLLaMA/comments/1mkaf3o/i_trained_llama_318b_6_faster_on_my_everyday/ | false | false | self | 9 | null |
NuMarkdown-8B-Thinking - first reasoning OCR VLM | 24 | [https://huggingface.co/numind/NuMarkdown-8B-Thinking](https://huggingface.co/numind/NuMarkdown-8B-Thinking)
first reasoning OCR VLM. a fine-tune of **Qwen 2.5-VL-7B** on synthetic Doc → Reasoning → Markdown examples
thoughts? | 2025-08-07T20:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mkaef6/numarkdown8bthinking_first_reasoning_ocr_vlm/ | Whole-Assignment6240 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mkaef6 | false | null | t3_1mkaef6 | /r/LocalLLaMA/comments/1mkaef6/numarkdown8bthinking_first_reasoning_ocr_vlm/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'hZVqbivk29FZL7FGxe1BtGNwIblHBlPQ9os2iXUmyrQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hZVqbivk29FZL7FGxe1BtGNwIblHBlPQ9os2iXUmyrQ.png?width=108&crop=smart&auto=webp&s=441c432b48f53d4e139d65d85587999a53c95a76', 'width': 108}, {'height': 116, 'url': 'h... |
I had gpt 4.5 look up the news that it'll be sunsetted forever and it is SO fascinating. Here is its reply on Open-AI deciding not to just quantize the old models and release them. | 0 | 2025-08-07T19:46:32 | https://www.reddit.com/gallery/1mka052 | RoyalCities | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mka052 | false | null | t3_1mka052 | /r/LocalLLaMA/comments/1mka052/i_had_gpt_45_look_up_the_news_that_itll_be/ | false | false | 0 | null | ||
I had gpt 4.5 look up the news that it'll be sunsetted forever and it is SO fascinating. Here is its reply on Open-AI deciding not to just quantize the old models and release them. | 1 | 2025-08-07T19:45:27 | https://www.reddit.com/gallery/1mk9z4o | RoyalCities | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mk9z4o | false | null | t3_1mk9z4o | /r/LocalLLaMA/comments/1mk9z4o/i_had_gpt_45_look_up_the_news_that_itll_be/ | false | false | 1 | null | ||
Fixed the SWE-bench graph: | 115 | 2025-08-07T19:36:45 | https://www.reddit.com/gallery/1mk9qxe | policyweb | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mk9qxe | false | null | t3_1mk9qxe | /r/LocalLLaMA/comments/1mk9qxe/fixed_the_swebench_graph/ | false | false | 115 | null | ||
It seems that GPT5 has 3 levels of thinking in common with GPT-OSS | 0 | 2025-08-07T19:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mk9lu4/it_seems_that_gpt5_has_3_levels_of_thinking_in/ | Necessary_Bunch_4019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk9lu4 | false | null | t3_1mk9lu4 | /r/LocalLLaMA/comments/1mk9lu4/it_seems_that_gpt5_has_3_levels_of_thinking_in/ | false | false | 0 | null | ||
Unlocking the Power of Local LLMs | 0 | I have been running ChatGPT and other AI chatbots for a while and have been blown away by their capabilities. When I discovered I could run LLMs (Large Language Models) on my computer, I was intrigued.
For one thing, it would give me all the privacy I desire, as I would not have to expose my data to the Internet. It wo... | 2025-08-07T19:25:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mk9fuq/unlocking_the_power_of_local_llms/ | tony10000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk9fuq | false | null | t3_1mk9fuq | /r/LocalLLaMA/comments/1mk9fuq/unlocking_the_power_of_local_llms/ | false | false | self | 0 | null |
LM Studio and multiple model loading...is this NEW? How does it work? | 5 | Hey there. Just had a few questions about the latest updates to LM Studio. I loaded a few models to test. At first, I thought something broke LM Studio, because my Gemma 3 27B was suddenly much slower. (I'm on an RTX 3090TI w/96GB of RAM, i7 12700K.)
But then, I noticed this:
https://preview.redd.it/gkexj98rdnhf1.png?... | 2025-08-07T19:23:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mk9eu2/lm_studio_and_multiple_model_loadingis_this_new/ | GrungeWerX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk9eu2 | false | null | t3_1mk9eu2 | /r/LocalLLaMA/comments/1mk9eu2/lm_studio_and_multiple_model_loadingis_this_new/ | false | false | 5 | null | |
Polymarket prediction for best AI model by 2025 | 23 | Source will be in the comments | 2025-08-07T19:22:43 | BackgroundPrint9465 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mk9dpy | false | null | t3_1mk9dpy | /r/LocalLLaMA/comments/1mk9dpy/polymarket_prediction_for_best_ai_model_by_2025/ | false | false | default | 23 | {'enabled': True, 'images': [{'id': 'jua3bszgenhf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/jua3bszgenhf1.png?width=108&crop=smart&auto=webp&s=ae95fbee8805afce642cf90b19ed88ec8eb2b684', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/jua3bszgenhf1.png?width=216&crop=smart&auto=web... | |
Support for intern-s1 has been merged into llama.cpp | 23 | from the model description:
We introduce **Intern-S1**, our **most advanced open-source multimodal reasoning model** to date. Intern-S1 combines **strong general-task capabilities with state-of-the-art performance on a wide range of scientific tasks**, rivaling leading closed-source commercial models. Built upon a 235... | 2025-08-07T19:21:19 | https://github.com/ggml-org/llama.cpp/pull/14875 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mk9cg3 | false | null | t3_1mk9cg3 | /r/LocalLLaMA/comments/1mk9cg3/support_for_interns1_has_been_merged_into_llamacpp/ | false | false | default | 23 | {'enabled': False, 'images': [{'id': 'ukGPJhGQBSNIqKqFk0joH5as5YrMTXg0pmkQuca7PhI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ukGPJhGQBSNIqKqFk0joH5as5YrMTXg0pmkQuca7PhI.png?width=108&crop=smart&auto=webp&s=60f53264dcfc9e1652e51a64686f007ce07ac2b0', 'width': 108}, {'height': 108, 'url': 'h... |
10.48 tok/sec - GPT-OSS-120B on RTX 5090 32 GB VRAM + 96 GB RAM in LM Studio (default settings + FlashAttention + Guardrails: OFF) | 9 | Just tested **GPT-OSS-120B (MXFP4)** locally using **LM Studio v0.3.22 (Beta build 2)** on my machine with an **RTX 5090 (32 GB VRAM)** + **Ryzen 9 9950X3D** + **96 GB RAM**.
Everything is mostly default. I only enabled **Flash Attention** manually and adjusted GPU offload to 30/36 layers + Guardrails **OFF +** Limi... | 2025-08-07T19:20:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mk9c1u/1048_toksec_gptoss120b_on_rtx_5090_32_vram_96_ram/ | Spiritual_Tie_5574 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk9c1u | false | null | t3_1mk9c1u | /r/LocalLLaMA/comments/1mk9c1u/1048_toksec_gptoss120b_on_rtx_5090_32_vram_96_ram/ | false | false | 9 | null | |
The Polymarket prediction for OpenAI having the best AI model by the end of 2025 has dropped by nearly 50% (from 38% to 20%) | 1 | [removed] | 2025-08-07T19:16:48 | BackgroundPrint9465 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mk986v | false | null | t3_1mk986v | /r/LocalLLaMA/comments/1mk986v/the_polymarket_prediction_for_openai_having_the/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'fhpm8dxednhf1', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/fhpm8dxednhf1.png?width=108&crop=smart&auto=webp&s=d3657a590f97da365c7b19089f7d156c31d4dc0f', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/fhpm8dxednhf1.png?width=216&crop=smart&auto=web... | |
Qwen3-8b-2508 anyone? 🤞🤞🤞 Where are you? Are you coming? | 44 | that's it. Big fan of smaller yet ultra performant LLMs. | 2025-08-07T19:14:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mk95w6/qwen38b2508_anyone_where_are_you_are_you_coming/ | JLeonsarmiento | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk95w6 | false | null | t3_1mk95w6 | /r/LocalLLaMA/comments/1mk95w6/qwen38b2508_anyone_where_are_you_are_you_coming/ | false | false | self | 44 | null |
gabriellarson/Huihui-gpt-oss-20b-BF16-abliterated-GGUF · Hugging Face | 54 | 2025-08-07T19:10:48 | https://huggingface.co/gabriellarson/Huihui-gpt-oss-20b-BF16-abliterated-GGUF | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mk92k4 | false | null | t3_1mk92k4 | /r/LocalLLaMA/comments/1mk92k4/gabriellarsonhuihuigptoss20bbf16abliteratedgguf/ | false | false | default | 54 | {'enabled': False, 'images': [{'id': 'dM2syJ0lh5qODrveCus4LDlR8L4f9r_ltO_PMWMUbDA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dM2syJ0lh5qODrveCus4LDlR8L4f9r_ltO_PMWMUbDA.png?width=108&crop=smart&auto=webp&s=bcbbd0387dd8ac9b2f7f0fb4f258aade8378636b', 'width': 108}, {'height': 116, 'url': 'h... | |
Twisted math test for LLMs | 6 | # A personal Benchmark
For a while now I have been testing LLMs with a benchmark I informally call "Twisted Math". The basic idea is that we take a very common mathematics problem, e.g., Tower of Hanoi, the birthday paradox, etc., and subtly change the problem constraints so that the original reasoning does not hold a... | 2025-08-07T19:09:23 | espressoVi | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mk916s | false | null | t3_1mk916s | /r/LocalLLaMA/comments/1mk916s/twisted_math_test_for_llms/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'ihugrx1wanhf1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/ihugrx1wanhf1.jpeg?width=108&crop=smart&auto=webp&s=0a37990c1cde7c492692a1858b33ea3571b01f1e', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/ihugrx1wanhf1.jpeg?width=216&crop=smart&auto=we... | |
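For context on the untwisted baseline: the classic birthday paradox has a direct computation that any model should reproduce, and the "twist" would then perturb an assumption such as the number of days:

```python
def birthday_collision_prob(people, days=365):
    """Probability that at least two of `people` share a birthday,
    assuming `days` equally likely birthdays (the classic, untwisted setup)."""
    p_all_distinct = 1.0
    for i in range(people):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

# Classic result: with 23 people the collision probability crosses 50%.
assert birthday_collision_prob(22) < 0.5 < birthday_collision_prob(23)
```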
Recipe for distributed finetuning OpenAI gpt-oss-120b | 0 | [GPU utilization across 4 nodes](https://preview.redd.it/0nwgl1j0anhf1.png?width=4458&format=png&auto=webp&s=aba60da2706de9183ed58ccf91c59e7a76931831)
GPT-5 has just been released, but if we want to adapt the model to our own data, we will still need to use the open model. Fortunately, OpenAI released the open model g... | 2025-08-07T19:07:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mk8zpm/recipe_for_distributed_finetuning_openai/ | Michaelvll | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk8zpm | false | null | t3_1mk8zpm | /r/LocalLLaMA/comments/1mk8zpm/recipe_for_distributed_finetuning_openai/ | false | false | 0 | null | |
GPT 5 on Artificial Analysis | 0 | 2025-08-07T19:06:12 | https://www.reddit.com/gallery/1mk8y9e | averagebear_003 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mk8y9e | false | null | t3_1mk8y9e | /r/LocalLLaMA/comments/1mk8y9e/gpt_5_on_artificial_analysis/ | false | false | 0 | null | ||
How can I actually learn and try LLM pretraining? (or post training a large LLM ) | 14 | Hey everyone,
I'm really interested in understanding pretraining of LLMs (not just fine-tuning). But it's been extremely difficult to find clear, practical resources or workflows for actually learning this from scratch. Most tutorials either skip over the hard parts, focus only on fine-tuning very small LLMs that can'... | 2025-08-07T18:56:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mk8oll/how_can_i_actually_learn_and_try_llm_pretraining/ | Distinct-Drive1307 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk8oll | false | null | t3_1mk8oll | /r/LocalLLaMA/comments/1mk8oll/how_can_i_actually_learn_and_try_llm_pretraining/ | false | false | self | 14 | null |
GPT OSS fast Test first impressions. | 0 | It got it right with Flappybird and some other tests, also on the first try.
It's quite fast but a bit weird, as it manipulates the code box.
Also the update of llama.cpp b6111 (cpu) that supports GPT OSS is flagged by Windows as malware (Wacatac).
Every update since the repo disappear in Github some days ago (worth che... | 2025-08-07T18:50:30 | https://v.redd.it/fumgkm8m7nhf1 | Trilogix | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mk8j72 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/fumgkm8m7nhf1/DASHPlaylist.mpd?a=1757184650%2CY2UwYzEyNGM0MDZjMmJjNGVhYWFhYjA2ZWJlYTI4ZWQ2NjcyMzM2YWRlZjgxODY3ZGEyMzc5ZDZiZDY4Yzg1Zg%3D%3D&v=1&f=sd', 'duration': 67, 'fallback_url': 'https://v.redd.it/fumgkm8m7nhf1/DASH_480.mp4?source=fallback', 'ha... | t3_1mk8j72 | /r/LocalLLaMA/comments/1mk8j72/gpt_oss_fast_test_first_impressions/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bDI4NWNtOG03bmhmMcnEioV_16zktoW0980HFkeIRLUS5d2LAlT6PXq7YuHX', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/bDI4NWNtOG03bmhmMcnEioV_16zktoW0980HFkeIRLUS5d2LAlT6PXq7YuHX.png?width=108&crop=smart&format=pjpg&auto=webp&s=683a4a03ddb9cea3dca4e7e6309df3c741cb8... | |
Coral Protocol Outperforms Microsoft by 34% With Top GAIA Benchmark for AI Mini-Model !! | 2 | https://preview.redd.it/2t0xlmzo7nhf1.png?width=1080&format=png&auto=webp&s=4ba6f60c0af70c5bfdc671b787a449c2570b210d

While everyone’s talking GPT-5…

Coral quietly outperformed Microsoft by **34%** using small models, not massive ones.

Coral Protocol ranked #1 on the GAIA benchmark using multi-agent systems powered by small LLMs.

The future isn’t just bigger models it’s smarter systems.

Checkout the link in the comments | 2025-08-07T18:45:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mk8e0f/coral_protocol_outperforms_microsoft_by_34_with/ | AdVirtual2648 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk8e0f | false | null | t3_1mk8e0f | /r/LocalLLaMA/comments/1mk8e0f/coral_protocol_outperforms_microsoft_by_34_with/ | false | false | 2 | null |
How i feel about gpt-oss... | 8 | 2025-08-07T18:44:07 | Weary-Wing-6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mk8d3j | false | null | t3_1mk8d3j | /r/LocalLLaMA/comments/1mk8d3j/how_i_feel_about_gptoss/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'shxc8spf7nhf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/shxc8spf7nhf1.gif?width=108&crop=smart&format=png8&s=fba4562e04e2d1527bfe564484949e1b71b93704', 'width': 108}], 'source': {'height': 160, 'url': 'https://preview.redd.it/shxc8spf7nhf1.gif?format=png8&s=e... | ||
caught in 4K | 291 | 2025-08-07T18:42:23 | https://www.reddit.com/gallery/1mk8bh1 | JP_525 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mk8bh1 | false | null | t3_1mk8bh1 | /r/LocalLLaMA/comments/1mk8bh1/caught_in_4k/ | false | false | 291 | null | ||
GPT-5 has been added to Design Arena, but is it better than Qwen? | 0 | GPT-5, GPT-5 Mini, and GPT-5 Nano have been added to [Design Arena](https://www.designarena.ai/), and you can go try it out for free through the voting platform.
During the livestream, there were a lot of examples of frontend coding tasks given, and a common point of emphasis seemed to be that these series of models ... | 2025-08-07T18:40:56 | https://www.reddit.com/gallery/1mk8a39 | Accomplished-Copy332 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mk8a39 | false | null | t3_1mk8a39 | /r/LocalLLaMA/comments/1mk8a39/gpt5_has_been_added_to_design_arena_but_is_it/ | false | false | 0 | null | |
Polymarket | 37 | It's too much winning Sam, please stop /s | 2025-08-07T18:38:21 | V4ldeLund | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mk87kd | false | null | t3_1mk87kd | /r/LocalLLaMA/comments/1mk87kd/polymarket/ | false | false | 37 | {'enabled': True, 'images': [{'id': 'DSO5pboY7RaIIvwz52vgmBD9hyXPb22BJPPgAjKopco', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/puuand3c6nhf1.jpeg?width=108&crop=smart&auto=webp&s=7dd1e0f90420ccd678029cb4970aa61241e6954b', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/puuand3c6nhf1.jp... | ||
Can't believe I'm seeing GPT-5 posted here | 0 | It's not local OR open weights, why is the front page flooded with this? | 2025-08-07T18:30:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mk8085/cant_believe_im_seeing_gpt5_posted_here/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk8085 | false | null | t3_1mk8085 | /r/LocalLLaMA/comments/1mk8085/cant_believe_im_seeing_gpt5_posted_here/ | false | false | self | 0 | null |
Same energy | 187 | 2025-08-07T18:30:05 | Different_Fix_2217 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mk7ztc | false | null | t3_1mk7ztc | /r/LocalLLaMA/comments/1mk7ztc/same_energy/ | false | false | default | 187 | {'enabled': True, 'images': [{'id': '951iksa25nhf1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/951iksa25nhf1.jpeg?width=108&crop=smart&auto=webp&s=95e38176792957ac9d9ac79c14d36fed947d330c', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/951iksa25nhf1.jpeg?width=216&crop=smart&auto=w... | ||
The founder of LM Studio is likely ex-IDF and Israeli. Not saying this is good or bad, but just putting it out there for those who want to know. | 6 | Just letting people know. Keep it civil. | 2025-08-07T18:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mk7wlf/the_founder_of_lm_studio_is_likely_ex_idf_and/ | nncyberpunk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk7wlf | false | null | t3_1mk7wlf | /r/LocalLLaMA/comments/1mk7wlf/the_founder_of_lm_studio_is_likely_ex_idf_and/ | false | false | self | 6 | null |
I found a worse chart | 16 | Seriously, wtf, why are they different? Why is 50% deception somehow better than 47.4%? Did they not get someone to review the charts for the live presentation?! The first is from the video and the second from the official release; I'm just gonna assume it's planning to deceive me 50% of the time when I use it to help me code. | 2025-08-07T18:25:18 | https://www.reddit.com/gallery/1mk7v7t | Figai | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mk7v7t | false | null | t3_1mk7v7t | /r/LocalLLaMA/comments/1mk7v7t/i_found_a_worse_chart/ | false | false | 16 | null |
GPT 5 seems worse than Gemini in head-to-head | 56 | https://preview.redd.it/jcuuuedh3nhf1.png?width=1032&format=png&auto=webp&s=54ca5f632741b68b51c12ebeb30ec2d9cf56976b This image from [lmarena.ai/leaderboard/text](http://lmarena.ai/leaderboard/text) shows that Gemini beats GPT-5, and that the winrates for Gemini are still higher. Not really sure what the hype is around the model in this case, especially when companies can fine-tune to fit benchmarks. This is really the only thing that matters, and Gemini still has a higher WR in battles (66% vs 62%). | 2025-08-07T18:24:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mk7u6i/gpt_5_seems_worse_than_gemini_in_headtohead/ | nypdk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk7u6i | false | null | t3_1mk7u6i | /r/LocalLLaMA/comments/1mk7u6i/gpt_5_seems_worse_than_gemini_in_headtohead/ | false | false | 56 | null |
GPT 5 Testing - Matthew Berman | 0 | https://youtu.be/BUDmHYI6e3g?si=voG4nrNpEbGiTa15&utm_source=ZTQxO | 2025-08-07T18:24:00 | Current-Stop7806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mk7tzy | false | null | t3_1mk7tzy | /r/LocalLLaMA/comments/1mk7tzy/gpt_5_testing_matthew_berman/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'qiydrhrz3nhf1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/qiydrhrz3nhf1.jpeg?width=108&crop=smart&auto=webp&s=282d6e705ec76b78ed4433e9f15614d09602451d', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/qiydrhrz3nhf1.jpeg?width=216&crop=smart&auto=w...
GPT5 is the first model to correctly untangle city in a bottle, a dense 256 bytes javascript raycaster | 0 | Given only the prompt
Analyze the following code and rewrite it to be more readable
<canvas style=width:99% id=c onclick=setInterval('for(c.width=w=99,++t,i=6e3;i--;c.getContext`2d`.fillRect(i%w,i/w|0,1-d*Z/w+s,1))for(a=i%w/50-1,s=b=1-i/4e3,X=t,Y=Z=d=1;++Z<w&(Y<6-(32<Z&27<X%w&&X/9^Z/8)*8%46||d|(s=(X&Y&Z)%3/Z,a... | 2025-08-07T18:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mk7tm9/gpt5_is_the_first_model_to_correctly_untangle/ | shroddy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mk7tm9 | false | null | t3_1mk7tm9 | /r/LocalLLaMA/comments/1mk7tm9/gpt5_is_the_first_model_to_correctly_untangle/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'vrCkRz7wE7p9TEctxYqXtcpfpuXZePZpeYOFtOkaC8k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/vrCkRz7wE7p9TEctxYqXtcpfpuXZePZpeYOFtOkaC8k.png?width=108&crop=smart&auto=webp&s=38c362e0bf4bcc9320d65037231a69930419daf1', 'width': 108}, {'height': 121, 'url': 'h... |