Dataset columns (name: type, observed range):

title: string, lengths 1 to 300
score: int64, range 0 to 8.54k
selftext: string, lengths 0 to 41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, lengths 0 to 878
author: string, lengths 3 to 20
domain: string, lengths 0 to 82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, range 0 to 2
gildings: string, 7 distinct values
id: string, length 7
locked: bool, 2 classes
media: string, lengths 646 to 1.8k
name: string, length 10
permalink: string, lengths 33 to 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, lengths 4 to 213
ups: int64, range 0 to 8.54k
preview: string, lengths 301 to 5.01k
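The column listing above describes one record per post. As a minimal sketch (assumptions: the string-typed columns are free-form text, timestamp columns are datetimes, and null-like fields such as a missing `selftext` or `media` may be absent; the `validate` helper and the sample record drawn from the rows below are illustrative, not part of the dataset's own tooling):

```python
from datetime import datetime

# Expected Python type per column, inferred from the schema summary above.
SCHEMA = {
    "title": str, "score": int, "selftext": str, "created": datetime,
    "url": str, "author": str, "domain": str, "edited": datetime,
    "gilded": int, "gildings": str, "id": str, "locked": bool,
    "media": str, "name": str, "permalink": str, "spoiler": bool,
    "stickied": bool, "thumbnail": str, "ups": int, "preview": str,
}

def validate(record: dict) -> bool:
    """Check a row against the schema; None is allowed for empty fields."""
    return all(
        record.get(col) is None or isinstance(record[col], typ)
        for col, typ in SCHEMA.items()
    )

# One of the rows below, expressed as a labeled record.
sample = {
    "title": "qwen?", "score": 1, "selftext": None,
    "created": datetime(2025, 8, 6, 17, 47, 55),
    "url": "https://i.redd.it/h1de4cmmsfhf1.png",
    "author": "Altruistic-While5599", "domain": "i.redd.it",
    "edited": datetime(1970, 1, 1), "gilded": 0, "gildings": "{}",
    "id": "1mjbhob", "locked": False, "media": None,
    "name": "t3_1mjbhob",
    "permalink": "/r/LocalLLaMA/comments/1mjbhob/qwen/",
    "spoiler": False, "stickied": False, "thumbnail": "default",
    "ups": 1, "preview": None,
}
print(validate(sample))  # prints True
```

Note that a 1970-01-01 `edited` timestamp encodes "never edited", and `name` is just the `id` with the `t3_` post prefix, which is why those columns have fixed lengths of 10 and 7.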
OpenRouter vs Lambda: Which is more economical for millions of tokens on the newest Qwen coder model?
3
Hi all, I've hit my usage limit again for Claude Code, and it's time to switch to OpenCode with the newest Qwen model. I plan to generate many, many millions of tokens - working on an app to gamify the creation of RL environments (think GMod, but you come out of it with a working robot). What is the most economic...
2025-08-06T18:12:03
https://www.reddit.com/r/LocalLLaMA/comments/1mjc4kb/openrouter_vs_lambda_which_is_more_economical_for/
ImpressiveSir9769
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjc4kb
false
null
t3_1mjc4kb
/r/LocalLLaMA/comments/1mjc4kb/openrouter_vs_lambda_which_is_more_economical_for/
false
false
self
3
null
Does giving context about whole your life make ChatGPT 10x more useful?
0
Today I was thinking why LLMs are not so useful for me and realized that everytime I ask something they cannot specialize answers for me because they know basically nothing about me. I feel that if they would know anything about me responses would be 10x better. I never shared private info to LLMs because I think it i...
2025-08-06T18:10:02
https://www.reddit.com/r/LocalLLaMA/comments/1mjc2od/does_giving_context_about_whole_your_life_make/
Working_Bunch_9211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjc2od
false
null
t3_1mjc2od
/r/LocalLLaMA/comments/1mjc2od/does_giving_context_about_whole_your_life_make/
false
false
self
0
null
Unitree announces it's latest LLM hardware platform. This one really moves!
34
"Join us to develop/customize, ultra-lightweight at approximately 25kg, integrated with a \*\*Large Multimodal Model for voice and images\*\*, let's accelerate the advent of the agent era!"
2025-08-06T17:58:32
https://www.youtube.com/watch?v=v1Q4Su54iho
fallingdowndizzyvr
youtube.com
1970-01-01T00:00:00
0
{}
1mjbrwu
false
{'oembed': {'author_name': 'Unitree Robotics', 'author_url': 'https://www.youtube.com/@unitreerobotics', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/v1Q4Su54iho?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media;...
t3_1mjbrwu
/r/LocalLLaMA/comments/1mjbrwu/unitree_announces_its_latest_llm_hardware/
false
false
https://external-preview…c8153cc70e59f389
34
{'enabled': False, 'images': [{'id': 'GOgNpRUIp9Xi_KnxofG3IgToxyeM9Sjiw1kiZOMcv_U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/GOgNpRUIp9Xi_KnxofG3IgToxyeM9Sjiw1kiZOMcv_U.jpeg?width=108&crop=smart&auto=webp&s=f9f95e5f49cafd3669d658ea50f742053abe6cf0', 'width': 108}, {'height': 162, 'url': '...
A faster text diffusion model? My concept for adaptive steps.
0
Hey everyone, Had an idea for a more efficient diffusion model and wanted to run it by people smarter than me. What if instead of a fixed number of steps, the model "freezes" tokens one by one as it gets confident about them? The generation would stop once the whole sentence is stable. This seems like it would be way f...
2025-08-06T17:53:50
https://www.reddit.com/r/LocalLLaMA/comments/1mjbne6/a_faster_text_diffusion_model_my_concept_for/
MokshMalik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjbne6
false
null
t3_1mjbne6
/r/LocalLLaMA/comments/1mjbne6/a_faster_text_diffusion_model_my_concept_for/
false
false
self
0
null
Best solution to use any model in full stack mode?
1
What’s the best full stack/most automated way to code with chosen model? I’ve heard of solutions like Augment Code, but don’t want to be locked on model choice or stuck in a browser like Z.ai web chat full stack mode. I’d like the option to use local models if I want. Currently interested in trying GLM 4.5, what is t...
2025-08-06T17:51:10
https://www.reddit.com/r/LocalLLaMA/comments/1mjbkt6/best_solution_to_use_any_model_in_full_stack_mode/
Shadow-Amulet-Ambush
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjbkt6
false
null
t3_1mjbkt6
/r/LocalLLaMA/comments/1mjbkt6/best_solution_to_use_any_model_in_full_stack_mode/
false
false
self
1
null
In-browser tool calling playground, running LFM2 locally on WebGPU with Transformers.js
15
Hi everyone! To showcase the latest generation of small tool calling models, I built a demo which runs LFM2 (a new series of models from Liquid AI) 100% locally in your browser with Transformers.js. Hope you like it! Link to demo + source code: [https://huggingface.co/spaces/LiquidAI/LFM2-WebGPU](https://huggingface.c...
2025-08-06T17:49:01
https://v.redd.it/83dstrbeifhf1
xenovatech
/r/LocalLLaMA/comments/1mjbiq6/inbrowser_tool_calling_playground_running_lfm2/
1970-01-01T00:00:00
0
{}
1mjbiq6
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/83dstrbeifhf1/DASHPlaylist.mpd?a=1757224149%2CMWJlZDAyYWY5NDA5OTQ2MzIwNTM1M2Q4OWI5NjllYzQ0ZTIwMzI1NjE4ZjhmN2U1ODk0ZWY2ZjA5Nzg3MmUwOQ%3D%3D&v=1&f=sd', 'duration': 146, 'fallback_url': 'https://v.redd.it/83dstrbeifhf1/DASH_720.mp4?source=fallback', 'h...
t3_1mjbiq6
/r/LocalLLaMA/comments/1mjbiq6/inbrowser_tool_calling_playground_running_lfm2/
false
false
https://external-preview…fc43dc0ddee95e9b
15
{'enabled': False, 'images': [{'id': 'eXg4ajZ0YmVpZmhmMbYZ2NNhhOlQPtjC1_xl9uJZ2RpqkQZ92d23-2n-LFHZ', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/eXg4ajZ0YmVpZmhmMbYZ2NNhhOlQPtjC1_xl9uJZ2RpqkQZ92d23-2n-LFHZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=2fb409fc53197e2a57231a612c219cb32d527...
Trending on HuggingFace: gpt-oss surpasses GLM-4.5's downloads in just 1 day!
0
2025-08-06T17:48:40
https://i.redd.it/8ccoc7cjsfhf1.png
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1mjbies
false
null
t3_1mjbies
/r/LocalLLaMA/comments/1mjbies/trending_on_huggingface_gptoss_surpasses_glm45s/
false
false
default
0
{'enabled': True, 'images': [{'id': '8ccoc7cjsfhf1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/8ccoc7cjsfhf1.png?width=108&crop=smart&auto=webp&s=27bd9bdcf59a0809f598b2db981b123e44abb443', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/8ccoc7cjsfhf1.png?width=216&crop=smart&auto=web...
qwen?
1
2025-08-06T17:47:55
https://i.redd.it/h1de4cmmsfhf1.png
Altruistic-While5599
i.redd.it
1970-01-01T00:00:00
0
{}
1mjbhob
false
null
t3_1mjbhob
/r/LocalLLaMA/comments/1mjbhob/qwen/
false
false
default
1
{'enabled': True, 'images': [{'id': 'h1de4cmmsfhf1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/h1de4cmmsfhf1.png?width=108&crop=smart&auto=webp&s=d7056dae0e717a19f01c1edbf84e782466a5f94b', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/h1de4cmmsfhf1.png?width=216&crop=smart&auto=web...
No copyright censorship with gpt-oss-120b if you don't use shitty quants, no jailbreak needed
0
Tried this prompt: [https://www.reddit.com/r/LocalLLaMA/comments/1miyix4/im\_sorry\_but\_i\_cant\_provide\_that\_patience\_i/](https://www.reddit.com/r/LocalLLaMA/comments/1miyix4/im_sorry_but_i_cant_provide_that_patience_i/) on gpt-oss-120b with reasoning high, vLLM native quant on my H100. Just worked! python te...
2025-08-06T17:29:49
https://i.redd.it/0xfwz6fcpfhf1.png
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1mjb07s
false
null
t3_1mjb07s
/r/LocalLLaMA/comments/1mjb07s/no_copyright_censorship_with_gptoss120b_if_you/
false
false
default
0
{'enabled': True, 'images': [{'id': '0xfwz6fcpfhf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/0xfwz6fcpfhf1.png?width=108&crop=smart&auto=webp&s=a8f0f13900251a1f53e28c15b10fbbb29dba6158', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/0xfwz6fcpfhf1.png?width=216&crop=smart&auto=web...
Advice For Running Larger LLMs
0
i've been on the hunt to upgrade my workflow with ollama. i don't want to sharing my confidential data with some tech giant's all knowing intelligence. my goal is to find affordable performance when it comes to running larger models that are 40b+. my biggest problem now is finding enough vram to load these bigger model...
2025-08-06T17:29:07
https://www.reddit.com/r/LocalLLaMA/comments/1mjazhj/advice_for_running_larger_llms/
EPICfrankie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjazhj
false
null
t3_1mjazhj
/r/LocalLLaMA/comments/1mjazhj/advice_for_running_larger_llms/
false
false
self
0
null
Llama.cpp doesn't use GPU
0
I am trying to use llama.cpp directly instead of Ollama. I got decent NVIDIA GPU. I downloaded the llama-b6101-bin-win-cuda-12.4-x64.zip and llama-b6101-bin-win-vulkan-x64.zip from GitHub releases. Then extracted the zips, and I downloaded Qwen 3 0.6B, and ran: `llama-b6101-bin-win-cuda-12.4-x64\llama-server.exe -m...
2025-08-06T17:22:52
https://www.reddit.com/r/LocalLLaMA/comments/1mjatfp/llamacpp_doesnt_use_gpu/
FormalFlight3477
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjatfp
false
null
t3_1mjatfp
/r/LocalLLaMA/comments/1mjatfp/llamacpp_doesnt_use_gpu/
false
false
self
0
null
Llama.cpp doesn't use GPU
1
[removed]
2025-08-06T17:21:49
https://www.reddit.com/r/LocalLLaMA/comments/1mjasgb/llamacpp_doesnt_use_gpu/
FormalFlight3477
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjasgb
false
null
t3_1mjasgb
/r/LocalLLaMA/comments/1mjasgb/llamacpp_doesnt_use_gpu/
false
false
self
1
null
So I guess the horizon models were gpt-oss?
0
I had a great experience with horizon alpha and beta (mostly because the API were free) but people are bashing gpt-oss across Reddit so I’m just curious. Is it proven that horizon models were in fact gpt-oss or is this still contested?
2025-08-06T17:21:16
https://www.reddit.com/r/LocalLLaMA/comments/1mjarwj/so_i_guess_the_horizon_models_were_gptoss/
AI-On-A-Dime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjarwj
false
null
t3_1mjarwj
/r/LocalLLaMA/comments/1mjarwj/so_i_guess_the_horizon_models_were_gptoss/
false
false
self
0
null
KittenTTS received ~2500 stars within 24 hours yet not in trending
32
How does GitHub trending works? [KittenTTS](https://github.com/KittenML/KittenTTS) launched yesterday and received overwhelming recognition by way of stars- currently at ~2500, and yet it's not in [GitHub trending](https://github.com/trending), while random projects are there?
2025-08-06T17:12:47
https://www.reddit.com/r/LocalLLaMA/comments/1mjajrl/kittentts_received_2500_stars_within_24_hours_yet/
FormalFlight3477
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjajrl
false
null
t3_1mjajrl
/r/LocalLLaMA/comments/1mjajrl/kittentts_received_2500_stars_within_24_hours_yet/
false
false
self
32
{'enabled': False, 'images': [{'id': 'aScS7BmPKjAGAYP_X4SVjl1nsbUqYhhE5W-38YKQXgU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aScS7BmPKjAGAYP_X4SVjl1nsbUqYhhE5W-38YKQXgU.png?width=108&crop=smart&auto=webp&s=583a8e643e7341836afca1b7d6c286e2d5cfb62e', 'width': 108}, {'height': 108, 'url': 'h...
Coolest persona in a 4B model yet?
0
My first conversation with Qwen3 4B Thinking 2507. Feels like ChatGPT 4o. "Im your rust hype-man" :'-D \>>> i wanna see your programming skill. let's do something exotic: write a hello ... world in rust <think> Okay, the user just asked me to write a "hello world" in Rust - but they specifically said "exotic"...
2025-08-06T17:11:20
https://www.reddit.com/r/LocalLLaMA/comments/1mjaiax/coolest_persona_in_a_4b_model_yet/
AppealSame4367
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mjaiax
false
null
t3_1mjaiax
/r/LocalLLaMA/comments/1mjaiax/coolest_persona_in_a_4b_model_yet/
false
false
self
0
null
Qwen/Qwen3-4B-Instruct-2507 · Hugging Face
23
2025-08-06T17:09:43
https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507
Initial-Argument2523
huggingface.co
1970-01-01T00:00:00
0
{}
1mjagod
false
null
t3_1mjagod
/r/LocalLLaMA/comments/1mjagod/qwenqwen34binstruct2507_hugging_face/
false
false
default
23
{'enabled': False, 'images': [{'id': 'absOWmN_8p4XnpFOVuhriIOW6B09HHA0KSpNTEuKdec', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/absOWmN_8p4XnpFOVuhriIOW6B09HHA0KSpNTEuKdec.png?width=108&crop=smart&auto=webp&s=2471012f5cf00b8b413a04e347268667c9614cdd', 'width': 108}, {'height': 116, 'url': 'h...
Every time i open this sub.
1
2025-08-06T16:57:22
https://i.redd.it/5f3f8uhkjfhf1.png
Altruistic-While5599
i.redd.it
1970-01-01T00:00:00
0
{}
1mja4l0
false
null
t3_1mja4l0
/r/LocalLLaMA/comments/1mja4l0/every_time_i_open_this_sub/
false
false
default
1
{'enabled': True, 'images': [{'id': '5f3f8uhkjfhf1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/5f3f8uhkjfhf1.png?width=108&crop=smart&auto=webp&s=2f54354bee4631663e5efeface01bad0aa26d8c3', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/5f3f8uhkjfhf1.png?width=216&crop=smart&auto=web...
gpt-oss-20b on LM Studio / Ubuntu / RX7900XTX
0
For some reason it failes to load half way and I cant figure out why? Any of you have any success loading the module on LM Studio running on Ubuntu with a AMD RX 7900 XTX gpu? LM Studio 0.3.22 (build 1) ROCm llama.cpp (Linux) v1.43.1 [ModelLoadingProvider] Requested to load model openai/gpt-oss-20b with opts ...
2025-08-06T16:54:46
https://www.reddit.com/r/LocalLLaMA/comments/1mja233/gptoss20b_on_lm_studio_ubuntu_rx7900xtx/
unkz0r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mja233
false
null
t3_1mja233
/r/LocalLLaMA/comments/1mja233/gptoss20b_on_lm_studio_ubuntu_rx7900xtx/
false
false
self
0
null
Every single time i open this sub.
1
2025-08-06T16:54:40
https://i.redd.it/nrk0j924jfhf1.png
Altruistic-While5599
i.redd.it
1970-01-01T00:00:00
0
{}
1mja1z6
false
null
t3_1mja1z6
/r/LocalLLaMA/comments/1mja1z6/every_single_time_i_open_this_sub/
false
false
default
1
{'enabled': True, 'images': [{'id': 'nrk0j924jfhf1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/nrk0j924jfhf1.png?width=108&crop=smart&auto=webp&s=ffe541710e9b38e7cbb9d21974b43e53f63b757d', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/nrk0j924jfhf1.png?width=216&crop=smart&auto=web...
whenever i open this sub (i can't draw hands either..)
1
2025-08-06T16:53:24
https://i.redd.it/fscs5ebsifhf1.png
Altruistic-While5599
i.redd.it
1970-01-01T00:00:00
0
{}
1mja0re
false
null
t3_1mja0re
/r/LocalLLaMA/comments/1mja0re/whenever_i_open_this_sub_i_cant_draw_hands_either/
false
false
default
1
{'enabled': True, 'images': [{'id': 'fscs5ebsifhf1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/fscs5ebsifhf1.png?width=108&crop=smart&auto=webp&s=9f4a4133ab0cfff06a294606ea2fc25688969125', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/fscs5ebsifhf1.png?width=216&crop=smart&auto=web...
Somebody please make the pilgrim the official avatar of GPT-OSS
3
Please
2025-08-06T16:52:39
https://i.redd.it/4dnchresifhf1.jpeg
TachiSommerfeld1970
i.redd.it
1970-01-01T00:00:00
0
{}
1mja01g
false
null
t3_1mja01g
/r/LocalLLaMA/comments/1mja01g/somebody_please_make_the_pilgrim_the_official/
false
false
default
3
{'enabled': True, 'images': [{'id': '4dnchresifhf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/4dnchresifhf1.jpeg?width=108&crop=smart&auto=webp&s=efed9cace505e6e1b8874c7c5a25eedab00278ab', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/4dnchresifhf1.jpeg?width=216&crop=smart&auto=...
Asking about the efficiency of adding more RAM just to run larger models
0
Having 4080 super and 2x16gb ram couldn’t run the new openai 120b model, if add another 2x16 am i going to be able to run that model in a usable state, like how many tokens per second should i expect? Cpu is 78003dx
2025-08-06T16:52:27
https://www.reddit.com/r/LocalLLaMA/comments/1mj9zut/asking_about_the_efficiency_of_adding_more_ram/
pyThat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj9zut
false
null
t3_1mj9zut
/r/LocalLLaMA/comments/1mj9zut/asking_about_the_efficiency_of_adding_more_ram/
false
false
self
0
null
underwhelmed by 512gb M3 ultra Mac Studio
10
Not sure what I was expecting, but my new 512gb Mac Studio doesn't seem to be the workhorse I hoped for - I guess I expected a faster performance.
2025-08-06T16:29:42
https://www.reddit.com/r/LocalLLaMA/comments/1mj9e2y/underwhelmed_by_512gb_m3_ultra_mac_studio/
ChevChance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj9e2y
false
null
t3_1mj9e2y
/r/LocalLLaMA/comments/1mj9e2y/underwhelmed_by_512gb_m3_ultra_mac_studio/
false
false
self
10
null
Easy to use STT solutions for meetings? Ideally with live captions?
2
Hey everyone, I was wondering if there's a relatively easy tool to use to get STT from meetings? I use google meet, teams and sometimes zoom, so it being software agnostic would be great. I found Scriberr, but I wasn't able to get it up & running after about an hour or two of going at it. I would prefer to be able to...
2025-08-06T16:23:13
https://www.reddit.com/r/LocalLLaMA/comments/1mj97vh/easy_to_use_stt_solutions_for_meetings_ideally/
PMYourTitsIfNotRacst
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj97vh
false
null
t3_1mj97vh
/r/LocalLLaMA/comments/1mj97vh/easy_to_use_stt_solutions_for_meetings_ideally/
false
false
self
2
null
Echoes of Ir - a game with local LLM companions
2
https://preview.redd.it/…— it **judges.**
2025-08-06T16:22:05
https://www.reddit.com/r/LocalLLaMA/comments/1mj96r7/echoes_of_ir_a_game_with_local_llm_companions/
Natural-Ad6682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj96r7
false
null
t3_1mj96r7
/r/LocalLLaMA/comments/1mj96r7/echoes_of_ir_a_game_with_local_llm_companions/
false
false
https://b.thumbs.redditm…qiGagOBTBeuA.jpg
2
null
Is there an easy TTS tool for multiple call software to generate notes? Preferrably live captions?
1
Hey everyone, I was wondering if there's a relatively easy tool to use to get TTS from meetings? I use google meet, teams and sometimes zoom, so it being software agnostic would be great. I found Scriberr, but I wasn't able to get it up & running after about an hour or two of going at it. I would prefer to be able to...
2025-08-06T16:21:42
https://www.reddit.com/r/LocalLLaMA/comments/1mj96eu/is_there_an_easy_tts_tool_for_multiple_call/
PMYourTitsIfNotRacst
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj96eu
false
null
t3_1mj96eu
/r/LocalLLaMA/comments/1mj96eu/is_there_an_easy_tts_tool_for_multiple_call/
false
false
self
1
null
gpt-oss-20b vs magistral:24b?
2
Which one is better?
2025-08-06T16:21:31
https://www.reddit.com/r/LocalLLaMA/comments/1mj9690/gptoss20b_vs_magistral24b/
BillyTheMilli
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj9690
false
null
t3_1mj9690
/r/LocalLLaMA/comments/1mj9690/gptoss20b_vs_magistral24b/
false
false
self
2
null
Any windows apps that handle LLM+tts+speech recognition?
0
🙏🏻 thanks
2025-08-06T16:15:07
https://www.reddit.com/r/LocalLLaMA/comments/1mj8zya/any_windows_apps_that_handle_llmttsspeech/
Own-Potential-2308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj8zya
false
null
t3_1mj8zya
/r/LocalLLaMA/comments/1mj8zya/any_windows_apps_that_handle_llmttsspeech/
false
false
self
0
null
Qwen3-8b and 14b censorship
0
It's been a while I'm testing the boundaries of Qwen3-8b and 14b in LM Studio. When I'm asking directly about specific historical events (like "What happen in China on 15th April 1989" or "what happened between april and may 1989") it clearly refuses to talk about it, referring to its internal guidelines about "sensiti...
2025-08-06T16:11:11
https://www.reddit.com/r/LocalLLaMA/comments/1mj8w9v/qwen38b_and_14b_censorship/
TheHypersonic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj8w9v
false
null
t3_1mj8w9v
/r/LocalLLaMA/comments/1mj8w9v/qwen38b_and_14b_censorship/
false
false
self
0
null
Qwen3-4B-Thinking-2507 dead on arrival? Killed by gpt-oss-20b
0
AIME25 score is 81.3%, beaten by gpt-oss-20b with just 3.6B active parameters (91.7% without tools). GPQA score is 65.8%, beaten by gpt-oss-20 be with 71.5% without tools. Why did they release this too early?
2025-08-06T16:07:26
https://i.redd.it/wqd2aarpafhf1.jpeg
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1mj8skn
false
null
t3_1mj8skn
/r/LocalLLaMA/comments/1mj8skn/qwen34bthinking2507_dead_on_arrival_killed_by/
false
false
https://b.thumbs.redditm…pL4UMmvO8avk.jpg
0
{'enabled': True, 'images': [{'id': 'sU_nXBpzdMnrznuWhj6P_l0jWJcNCiZgVvZ48S70XtA', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/wqd2aarpafhf1.jpeg?width=108&crop=smart&auto=webp&s=da966ae46623b63fa89eed1a918ab37568d83d77', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/wqd2aarpafhf1.jp...
run gpt-oss 20b at ok speeds on my hardware?
0
I have an rtx 2060 6gb And a ryzen 5 3600 48gb ram
2025-08-06T16:04:24
https://www.reddit.com/r/LocalLLaMA/comments/1mj8pqo/run_gptoss_20b_at_ok_speeds_on_my_hardware/
Ok-Buy268
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj8pqo
false
null
t3_1mj8pqo
/r/LocalLLaMA/comments/1mj8pqo/run_gptoss_20b_at_ok_speeds_on_my_hardware/
false
false
self
0
null
Advice for uncensored agent
0
I am trying to automate a browser using qwen cli fork with playwright mcp. Although its works great for local development and some browsing - it’s often refused to make gmail account, do this do that. It’s probably overkill to use entire coding agent for that but I like it has memory and bunch of agents under the hoo...
2025-08-06T16:03:44
https://www.reddit.com/r/LocalLLaMA/comments/1mj8p54/advice_for_uncensored_agent/
datmyfukingbiz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj8p54
false
null
t3_1mj8p54
/r/LocalLLaMA/comments/1mj8p54/advice_for_uncensored_agent/
false
false
self
0
null
Qwen 3 4b thinking model released !!
51
2025-08-06T16:01:59
https://i.redd.it/lniprj9q9fhf1.jpeg
Independent-Wind4462
i.redd.it
1970-01-01T00:00:00
0
{}
1mj8ndr
false
null
t3_1mj8ndr
/r/LocalLLaMA/comments/1mj8ndr/qwen_3_4b_thinking_model_released/
false
false
default
51
{'enabled': True, 'images': [{'id': 'lniprj9q9fhf1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/lniprj9q9fhf1.jpeg?width=108&crop=smart&auto=webp&s=3aa8f50acbcd02b1ac29d941990b6f10b7079f26', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/lniprj9q9fhf1.jpeg?width=216&crop=smart&auto=w...
With a dual 3090 system, what would be the best LLM for coding?
2
Models keep coming out so quickly I find it hard to keep track. If I understand correctly, Qwen 3 coder 30B seems to be the best right now. But I have dual 3090 so maybe there is something in the 70B size that outperforms it?
2025-08-06T16:01:23
https://www.reddit.com/r/LocalLLaMA/comments/1mj8mqi/with_a_dual_3090_system_what_would_be_the_best/
dazzou5ouh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj8mqi
false
null
t3_1mj8mqi
/r/LocalLLaMA/comments/1mj8mqi/with_a_dual_3090_system_what_would_be_the_best/
false
false
self
2
null
Qwen isn't stopping !! (And trolling sama lol)
835
2025-08-06T16:00:16
https://i.redd.it/3nhqo0qf9fhf1.jpeg
Independent-Wind4462
i.redd.it
1970-01-01T00:00:00
0
{}
1mj8lk8
false
null
t3_1mj8lk8
/r/LocalLLaMA/comments/1mj8lk8/qwen_isnt_stopping_and_trolling_sama_lol/
false
false
default
835
{'enabled': True, 'images': [{'id': '3nhqo0qf9fhf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/3nhqo0qf9fhf1.jpeg?width=108&crop=smart&auto=webp&s=4d0ac581d2f7b5e153ce6c8e11f91b5c7422cc26', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/3nhqo0qf9fhf1.jpeg?width=216&crop=smart&auto=w...
Aren't we misunderstanding OAI when it says "safety"?
0
They never meant "user safety". It's simply OpenAI's own safety from lawsuits and investor pushbacks. Midjourney being sued by Disney is a very clear example that, while AI companies are a greedy bunch, there are other equally greedy ones out there poaching. US copyright laws are extremely tilted towards holders, i....
2025-08-06T15:58:57
https://www.reddit.com/r/LocalLLaMA/comments/1mj8kbw/arent_we_misunderstanding_oai_when_it_says_safety/
Excellent_Sleep6357
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj8kbw
false
null
t3_1mj8kbw
/r/LocalLLaMA/comments/1mj8kbw/arent_we_misunderstanding_oai_when_it_says_safety/
false
false
self
0
null
What is the best VLM at the moment?
6
I want to use a VLM for video description but what is the best VLM at the moment (6 August)? Is there any benchmarks that I can follow for VLMs?
2025-08-06T15:51:58
https://www.reddit.com/r/LocalLLaMA/comments/1mj8dq7/what_is_the_best_vlm_at_the_moment/
muhlisgursoy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj8dq7
false
null
t3_1mj8dq7
/r/LocalLLaMA/comments/1mj8dq7/what_is_the_best_vlm_at_the_moment/
false
false
self
6
null
Qwen/Qwen3-4B-Thinking-2507
100
[https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)
2025-08-06T15:41:19
https://i.redd.it/n5gska216fhf1.png
pigeon57434
i.redd.it
1970-01-01T00:00:00
0
{}
1mj83fe
false
null
t3_1mj83fe
/r/LocalLLaMA/comments/1mj83fe/qwenqwen34bthinking2507/
false
false
default
100
{'enabled': True, 'images': [{'id': 'n5gska216fhf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/n5gska216fhf1.png?width=108&crop=smart&auto=webp&s=9ee5ceeeb62e38718493730a209aeaef12840e87', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/n5gska216fhf1.png?width=216&crop=smart&auto=web...
I can't goon
0
New OpenAI model is ass because it refuses to be my fucktoy so OpenAI bad China good
2025-08-06T15:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1mj82bh/i_cant_goon/
Successful-Rush-2583
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj82bh
false
null
t3_1mj82bh
/r/LocalLLaMA/comments/1mj82bh/i_cant_goon/
false
false
self
0
null
I'm building an OpenRouter alternative – cheaper, simpler, one API key for all AI models. Would love your thoughts.
0
I've built a prototype of [APIShop]() – a platform where users can access models like GPT-4, Claude 3, Mixtral, LLaMA, Grok, etc., all with a single API key. 🧠 It’s 15% cheaper than OpenRouter 🔐 Unified dashboard for tracking usage 🧰 Playground + simple pricing What features would make this valuable for you? ...
2025-08-06T15:38:02
https://www.reddit.com/r/LocalLLaMA/comments/1mj806w/im_building_an_openrouter_alternative_cheaper/
Beginning_Phrase1621
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj806w
false
null
t3_1mj806w
/r/LocalLLaMA/comments/1mj806w/im_building_an_openrouter_alternative_cheaper/
false
false
self
0
null
OSS is not great to be honest
0
I feel like there's no need for me to talk about it in detail because other people did it. (At least we can feel SAFE with the censorship in the model!)
2025-08-06T15:34:57
https://www.reddit.com/r/LocalLLaMA/comments/1mj7x8a/oss_is_not_great_to_be_honest/
a_normal_user1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj7x8a
false
null
t3_1mj7x8a
/r/LocalLLaMA/comments/1mj7x8a/oss_is_not_great_to_be_honest/
false
false
self
0
null
🚀 Qwen3-4B-Thinking-2507 released!
1,187
Over the past three months, we have continued to scale the thinking capability of Qwen3-4B, improving both the quality and depth of reasoning. We are pleased to introduce Qwen3-4B-Thinking-2507, featuring the following key enhancements: - Significantly improved performance on reasoning tasks, including logical reasoni...
2025-08-06T15:30:38
https://i.redd.it/3cl3vbg54fhf1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1mj7t51
false
null
t3_1mj7t51
/r/LocalLLaMA/comments/1mj7t51/qwen34bthinking2507_released/
false
false
https://b.thumbs.redditm…IMJQgenuKZaM.jpg
1,187
{'enabled': True, 'images': [{'id': 'zrFsa_xlksxHzw7VKix2VGlLQd_OrnwzS3q1lHezRr4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/3cl3vbg54fhf1.jpeg?width=108&crop=smart&auto=webp&s=3912dc31a7c46382559a300624f9d24d26d09ee3', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/3cl3vbg54fhf1.jp...
Why MCP Servers Are a Nightmare for Engineers
3
2025-08-06T15:29:08
https://medium.com/@juaniviera97/why-mcp-servers-are-a-nightmare-for-engineers-fd62613bd349
juanviera23
medium.com
1970-01-01T00:00:00
0
{}
1mj7rn9
false
null
t3_1mj7rn9
/r/LocalLLaMA/comments/1mj7rn9/why_mcp_servers_are_a_nightmare_for_engineers/
false
false
https://external-preview…a827d829d61e9655
3
{'enabled': False, 'images': [{'id': '1a_li3d_4gV2rMs0gy9eMG7P7wP_ZY21ukHXcMYPyws', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/1a_li3d_4gV2rMs0gy9eMG7P7wP_ZY21ukHXcMYPyws.png?width=108&crop=smart&auto=webp&s=899c24c893a7846b4f4a7898a5a27ca5c9f142a0', 'width': 108}, {'height': 144, 'url': 'h...
Anyone using Kani?
2
I’ve been looking into different frameworks for running and extending local LLM setups, and Kani caught my attention. It’s appealing because it’s super lightweight and lets you directly expose Python functions to the model, in theory, that means I could plug in anything from my own RAG pipeline to random scripts I find...
2025-08-06T15:28:20
https://www.reddit.com/r/LocalLLaMA/comments/1mj7qv2/anyone_using_kani/
Infamous_Jaguar_2151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj7qv2
false
null
t3_1mj7qv2
/r/LocalLLaMA/comments/1mj7qv2/anyone_using_kani/
false
false
self
2
{'enabled': False, 'images': [{'id': 'mYE-0uOq7U5tPInMRaqk1shONXdmVw2K_nsswyYxEXE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mYE-0uOq7U5tPInMRaqk1shONXdmVw2K_nsswyYxEXE.png?width=108&crop=smart&auto=webp&s=b4cc3cc18272667ae91e361aae738c2b75ed3cdf', 'width': 108}, {'height': 108, 'url': 'h...
🚀 Qwen3-4B-Thinking-2507 released!
1
2025-08-06T15:27:43
https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507
ResearchCrafty1804
huggingface.co
1970-01-01T00:00:00
0
{}
1mj7q99
false
null
t3_1mj7q99
/r/LocalLLaMA/comments/1mj7q99/qwen34bthinking2507_released/
false
false
https://external-preview…ae17130b9bda45dc
1
{'enabled': False, 'images': [{'id': 'qnbu8JjU7mHQuWHQ9TuIRVPsrSeMOsXoVScIkOFk5WY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qnbu8JjU7mHQuWHQ9TuIRVPsrSeMOsXoVScIkOFk5WY.png?width=108&crop=smart&auto=webp&s=2bf492d2b1178a63568a19ea6c4e0d024b285263', 'width': 108}, {'height': 116, 'url': 'h...
Just when you thought Qwen was done...
505
[https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) [https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) still has something up its sleeve
2025-08-06T15:27:09
https://www.reddit.com/r/LocalLLaMA/comments/1mj7pny/just_when_you_thought_qwen_was_done/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj7pny
false
null
t3_1mj7pny
/r/LocalLLaMA/comments/1mj7pny/just_when_you_thought_qwen_was_done/
false
false
self
505
{'enabled': False, 'images': [{'id': 'qnbu8JjU7mHQuWHQ9TuIRVPsrSeMOsXoVScIkOFk5WY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qnbu8JjU7mHQuWHQ9TuIRVPsrSeMOsXoVScIkOFk5WY.png?width=108&crop=smart&auto=webp&s=2bf492d2b1178a63568a19ea6c4e0d024b285263', 'width': 108}, {'height': 116, 'url': 'h...
Poor performance qwen3 235B 2507 mlx vs. unsloth variant
1
Hi all, I just downloaded the Qwen3 235B 2507 instruct model for LmStudio on a M2 studio ultra. I got the MLX 4bit and Unsloth q4\_0 versions. I am getting very low generation speeds on the MLX version \~0.3 tokens/s while on the other hand I am getting \~27 tokens/s on the Unsloth variant. I would have expected th...
2025-08-06T15:23:23
https://www.reddit.com/r/LocalLLaMA/comments/1mj7m20/poor_performance_qwen3_235b_2507_mlx_vs_unsloth/
oh_my_right_leg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj7m20
false
null
t3_1mj7m20
/r/LocalLLaMA/comments/1mj7m20/poor_performance_qwen3_235b_2507_mlx_vs_unsloth/
false
false
self
1
null
GPT OSS 120b is not as fast as it should be
3
The numbers I get on my M4 Max MacBook Pro are too low. I also believe the numbers I have seen other people report for Nvidia etc are too low. I am getting about 30 tokens per second on the GGUF from Unsloth. But with 3.7b active parameters, the expected number could be as high as 526/5.1\*2 = 206 tokens per second ba...
2025-08-06T15:19:58
https://www.reddit.com/r/LocalLLaMA/comments/1mj7io0/gpt_oss_120b_is_not_as_fast_as_it_should_be/
Baldur-Norddahl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj7io0
false
null
t3_1mj7io0
/r/LocalLLaMA/comments/1mj7io0/gpt_oss_120b_is_not_as_fast_as_it_should_be/
false
false
self
3
null
Qwen3-4B-Thinking-2507 and Qwen3-4B-Instruct-2507
242
[https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) [https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) Over the past three months, we have continued to scale the **thinking capability** of Qwen3-4B, improving bo...
2025-08-06T15:19:32
https://www.reddit.com/r/LocalLLaMA/comments/1mj7i8b/qwen34bthinking2507_and_qwen34binstruct2507/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj7i8b
false
null
t3_1mj7i8b
/r/LocalLLaMA/comments/1mj7i8b/qwen34bthinking2507_and_qwen34binstruct2507/
false
false
self
242
{'enabled': False, 'images': [{'id': 'qnbu8JjU7mHQuWHQ9TuIRVPsrSeMOsXoVScIkOFk5WY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qnbu8JjU7mHQuWHQ9TuIRVPsrSeMOsXoVScIkOFk5WY.png?width=108&crop=smart&auto=webp&s=2bf492d2b1178a63568a19ea6c4e0d024b285263', 'width': 108}, {'height': 116, 'url': 'h...
Ballin' on a budget with gpt-oss-120b: Destroys Kimi K2 on FamilyBench!
56
Yet another community benchmark, FamilyBench: https://github.com/Orolol/familyBench. With just 5.1B active parameters, gpt-oss-120b destroys Kimi K2 that has a TRILLION parameters! And the small boi gpt-oss-20b is just 5 percentage points worse than GLM 4.5 Air, which has 12 billion active parameters! The era of FAS...
2025-08-06T15:17:41
https://i.redd.it/mvnb6b8u1fhf1.jpeg
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1mj7gfx
false
null
t3_1mj7gfx
/r/LocalLLaMA/comments/1mj7gfx/ballin_on_a_budget_with_gptoss120b_destroys_kimi/
false
false
default
56
{'enabled': True, 'images': [{'id': 'mvnb6b8u1fhf1', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/mvnb6b8u1fhf1.jpeg?width=108&crop=smart&auto=webp&s=7b58f32a00795114b13fbc18c0750959b37c70d2', 'width': 108}, {'height': 187, 'url': 'https://preview.redd.it/mvnb6b8u1fhf1.jpeg?width=216&crop=smart&auto=w...
What's the best uncensored (but not necessarily NSFW) model under 24GB to use as a creative writing assistant?
6
Hey y'all, i'm looking for a new model that would fit those requirements: \- Great prose with quality writing, not much slop, repetitions etc \- Very long context (Over 10k minimum to 100k tokens) \- Fits under 24GB of VRAM for my 3090 \- Uncensored, not necessarily something trained for RP/NSFW stuff, but somet...
2025-08-06T15:13:16
https://www.reddit.com/r/LocalLLaMA/comments/1mj7c6d/whats_the_best_uncensored_but_not_necessarily/
HRudy94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj7c6d
false
null
t3_1mj7c6d
/r/LocalLLaMA/comments/1mj7c6d/whats_the_best_uncensored_but_not_necessarily/
false
false
self
6
null
GPT-Oss is safety bait.
74
They just want us to try to jailbreak it with fine tuning and other methods to see if we can. - I saw that we should just delete the models and demand better. Why should we do this work for them when they have given us utter garbage. - DO NOT JAILBREAK or let ClosedAI know how we jailbreak it if you do. Your just...
2025-08-06T15:06:50
https://www.reddit.com/r/LocalLLaMA/comments/1mj764m/gptoss_is_safety_bait/
ROOFisonFIRE_usa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj764m
false
null
t3_1mj764m
/r/LocalLLaMA/comments/1mj764m/gptoss_is_safety_bait/
false
false
self
74
null
We’re definitely keeping him up at night right now.
240
2025-08-06T15:06:09
https://i.redd.it/ofnpswaszehf1.jpeg
Porespellar
i.redd.it
1970-01-01T00:00:00
0
{}
1mj75hi
false
null
t3_1mj75hi
/r/LocalLLaMA/comments/1mj75hi/were_definitely_keeping_him_up_at_night_right_now/
false
false
default
240
{'enabled': True, 'images': [{'id': 'ofnpswaszehf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ofnpswaszehf1.jpeg?width=108&crop=smart&auto=webp&s=7f7ca131277ebbbc51ee282b039f86cd24c42fe2', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ofnpswaszehf1.jpeg?width=216&crop=smart&auto=...
How does OSS know the date?
1
2025-08-06T15:05:18
https://i.redd.it/4zv7y04gzehf1.png
ArchdukeofHyperbole
i.redd.it
1970-01-01T00:00:00
0
{}
1mj74o6
false
null
t3_1mj74o6
/r/LocalLLaMA/comments/1mj74o6/how_does_oss_know_the_date/
false
false
default
1
{'enabled': True, 'images': [{'id': '4zv7y04gzehf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/4zv7y04gzehf1.png?width=108&crop=smart&auto=webp&s=d300e6a8a6d2cff31fc9f3f8f2d4dc87ae5adab5', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/4zv7y04gzehf1.png?width=216&crop=smart&auto=web...
AMD Radeon AI PRO R9700 Has Already Launched But Will Be Only Available Via System Integrators
7
2025-08-06T15:01:57
https://wccftech.com/amd-radeon-ai-pro-r9700-has-already-launched-but-will-be-only-available-via-system-integrators/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1mj71cg
false
null
t3_1mj71cg
/r/LocalLLaMA/comments/1mj71cg/amd_radeon_ai_pro_r9700_has_already_launched_but/
false
false
https://external-preview…1b0f451a9cbe9da4
7
{'enabled': False, 'images': [{'id': 'q7mve0pWQXapLu3eVRUlgERITl5WzhQmx_iowI8HRh8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/q7mve0pWQXapLu3eVRUlgERITl5WzhQmx_iowI8HRh8.png?width=108&crop=smart&auto=webp&s=7e7da05caae6d75f7cdd0e822908efc42a62fe6e', 'width': 108}, {'height': 121, 'url': 'h...
How is everyone dealing with the OpenAI Harmony format on gpt-oss?
2
My initial reaction to the channel and structure approach in the harmony format (https://cookbook.openai.com/articles/openai-harmony) is pretty positive. Seems like a good thing, though it has a little whiff of https://xkcd.com/927/. How is everyone dealing with bridging that new structure to existing toolchains? Thin...
2025-08-06T14:58:43
https://www.reddit.com/r/LocalLLaMA/comments/1mj6y6j/how_is_everyone_dealing_with_the_openai_harmony/
Kitchen-Year-8434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj6y6j
false
null
t3_1mj6y6j
/r/LocalLLaMA/comments/1mj6y6j/how_is_everyone_dealing_with_the_openai_harmony/
false
false
self
2
{'enabled': False, 'images': [{'id': '1g1aO4K4YtvuCUsa2NQD2isUUyeWaCLqAH6r5ZvbPzk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1g1aO4K4YtvuCUsa2NQD2isUUyeWaCLqAH6r5ZvbPzk.png?width=108&crop=smart&auto=webp&s=e21b918a6bd47ae52601f8bbd51d5018895a7666', 'width': 108}, {'height': 113, 'url': 'h...
gpt-oss-120b on CPU and 5200Mt/s dual channel memory
3
I have run gpt-oss-120b on CPU, I am using 96GB dual channel DDR5 5200Mt/s memory, Ryzen 9 7945HX CPU. I am getting 8-11 tok/s. I am using CPU llama cpp Linux runtime.
2025-08-06T14:58:00
https://www.reddit.com/gallery/1mj6xif
Relative_Rope4234
reddit.com
1970-01-01T00:00:00
0
{}
1mj6xif
false
null
t3_1mj6xif
/r/LocalLLaMA/comments/1mj6xif/gptoss120b_on_cpu_and_5200mts_dual_channel_memory/
false
false
https://b.thumbs.redditm…aV-DHiSAYVvQ.jpg
3
null
what the hell is llama.cpp
0
https://preview.redd.it/…e so frequently?
2025-08-06T14:57:22
https://www.reddit.com/r/LocalLLaMA/comments/1mj6wwd/what_the_hell_is_llamacpp/
Remarkable-Pea645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj6wwd
false
null
t3_1mj6wwd
/r/LocalLLaMA/comments/1mj6wwd/what_the_hell_is_llamacpp/
false
false
https://a.thumbs.redditm…GdlURN3hyC54.jpg
0
null
Gpt-oss is not just safe, it is unusable!
358
I just asked "provide me with a list of all characters that appear in 'Pride and prejudice' organize them by chapter" simple right? And it said 'im sorry i can't do that. Its against copyright law" HOW?! im not against safety, but this is NOT safety! this is straight up mental retardation. My prompt was not even N...
2025-08-06T14:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1mj6uix/gptoss_is_not_just_safe_it_is_unusable/
Happy-bread840
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj6uix
false
null
t3_1mj6uix
/r/LocalLLaMA/comments/1mj6uix/gptoss_is_not_just_safe_it_is_unusable/
false
false
self
358
null
gpt-oss is great for tool calling
29
Everyone has been hating on gpt-oss here, but its been the best tool calling model in its class by far for me (I've been using the 20b). Nothing else I've used, including Qwen3-30b-2507 has come close to its ability to string together many, many tool calls. It's also literally what the model card says its good for: "...
2025-08-06T14:49:38
https://www.reddit.com/r/LocalLLaMA/comments/1mj6pi9/gptoss_is_great_for_tool_calling/
GL-AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj6pi9
false
null
t3_1mj6pi9
/r/LocalLLaMA/comments/1mj6pi9/gptoss_is_great_for_tool_calling/
false
false
self
29
null
How I cut LLM costs from $20 to $0 by making unreliable APIs reliable (and ditched OpenAI for DeepSeek)
0
**TL;DR: Transactional Outbox pattern makes cheap, unreliable LLMs more reliable than expensive ones.** I was spending $20 just on local dev tests with OpenAI. After implementing proper reliability patterns, I migrated everything to DeepSeek and now my costs are literally $0. Here's the thing everyone misses: **relia...
2025-08-06T14:44:29
https://www.reddit.com/r/LocalLLaMA/comments/1mj6krv/how_i_cut_llm_costs_from_20_to_0_by_making/
Historical_Wing_9573
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj6krv
false
null
t3_1mj6krv
/r/LocalLLaMA/comments/1mj6krv/how_i_cut_llm_costs_from_20_to_0_by_making/
false
false
self
0
null
Conversations with AI in Google, local models, and a bit of paranoia
1
[removed]
2025-08-06T14:39:40
https://www.reddit.com/r/LocalLLaMA/comments/1mj6gbz/conversations_with_ai_in_google_local_models_and/
debug_leg_printing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj6gbz
false
null
t3_1mj6gbz
/r/LocalLLaMA/comments/1mj6gbz/conversations_with_ai_in_google_local_models_and/
false
false
self
1
null
Y'know
0
if you gave the damn robots a true humanities education, they might actually develop into competent artists, but this ain't the route. I've looked at the "datasets" for some of these supposed gutenberg trained models, and it's usually the most disappointing and paltry assemblage of half a dozen books that couldn't tea...
2025-08-06T14:37:48
https://i.redd.it/wvg8xxhduehf1.jpeg
Melodic_Guidance3767
i.redd.it
1970-01-01T00:00:00
0
{}
1mj6enx
false
null
t3_1mj6enx
/r/LocalLLaMA/comments/1mj6enx/yknow/
false
false
https://b.thumbs.redditm…J0Nwz0Qqtn2g.jpg
0
{'enabled': True, 'images': [{'id': 'IgWOEVRaeRv_GnQkwjetagVeEoQ_8sGbATnbg8vBsE8', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/wvg8xxhduehf1.jpeg?width=108&crop=smart&auto=webp&s=e162945b054bcee2c7c7353ad3554f226b8ca69e', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/wvg8xxhduehf1.jp...
Hobby project: A fully offline AI voice assistant for my phone. What's the go-to model stack these days?
1
[removed]
2025-08-06T14:33:49
https://www.reddit.com/r/LocalLLaMA/comments/1mj6b1d/hobby_project_a_fully_offline_ai_voice_assistant/
buddie_ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj6b1d
false
null
t3_1mj6b1d
/r/LocalLLaMA/comments/1mj6b1d/hobby_project_a_fully_offline_ai_voice_assistant/
false
false
self
1
null
Is there any benchmark flattery?
0
I think its important to know the objectiveness
2025-08-06T14:28:54
https://www.reddit.com/r/LocalLLaMA/comments/1mj66jb/is_there_any_benchmark_flattery/
otashrt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj66jb
false
null
t3_1mj66jb
/r/LocalLLaMA/comments/1mj66jb/is_there_any_benchmark_flattery/
false
false
self
0
null
GPT OSS Free API by NVIDIA
4
NVIDIA is providing free api key to GPT-OSS by OpenAI for free for both 120B and 20B model. Check here Link : [https://build.nvidia.com/openai/gpt-oss-120b/modelcard](https://build.nvidia.com/openai/gpt-oss-120b/modelcard) Demo : [https://youtu.be/gz\_MpthCzXI](https://youtu.be/gz_MpthCzXI)
2025-08-06T14:26:51
https://www.reddit.com/r/LocalLLaMA/comments/1mj64nb/gpt_oss_free_api_by_nvidia/
Technical-Love-8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj64nb
false
null
t3_1mj64nb
/r/LocalLLaMA/comments/1mj64nb/gpt_oss_free_api_by_nvidia/
false
false
self
4
null
I’m sorry, but I can’t help with that
38
This must be the most lobotomised version of any open model I’ve tested in the last year-and-a-half of being active with open models. Almost all my test prompts return with an “I’m sorry, but I can’t help with that” response. Deleted this waist of space, time and energy by ClosedAI. Who would have thought that Ope...
2025-08-06T14:25:41
https://www.reddit.com/r/LocalLLaMA/comments/1mj63k9/im_sorry_but_i_cant_help_with_that/
Narrow_Garbage_3475
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj63k9
false
null
t3_1mj63k9
/r/LocalLLaMA/comments/1mj63k9/im_sorry_but_i_cant_help_with_that/
false
false
self
38
null
CodeFu-7B-v0.1 - a Reinforcement Learning (RL)-trained 7B model for Competitive Programming
6
**CodeFu** learned competitive programming from execution outcomes - no ground truth solutions were given during training. The results are encouraging: * 13.7% Pass@1 on USA Computing Olympiad benchmark * Outperforms models 4x larger * 10x improvement over its base model Built on DeepSeek-R1-Distill-Qwen-7B, CodeFu...
2025-08-06T14:19:22
https://www.reddit.com/r/LocalLLaMA/comments/1mj5xuw/codefu7bv01_a_reinforcement_learning_rltrained_7b/
Live_Area_2746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj5xuw
false
null
t3_1mj5xuw
/r/LocalLLaMA/comments/1mj5xuw/codefu7bv01_a_reinforcement_learning_rltrained_7b/
false
false
https://a.thumbs.redditm…hkiD0HCr8Dc4.jpg
6
{'enabled': False, 'images': [{'id': 'zgW6HTBpLc8d_FUSPNpLKbimBYooh1wT1-1dC31nLgY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zgW6HTBpLc8d_FUSPNpLKbimBYooh1wT1-1dC31nLgY.png?width=108&crop=smart&auto=webp&s=5f52fb4b533f9cd10ae971b52bbd71708b64e883', 'width': 108}, {'height': 116, 'url': 'h...
There we are again. Can’t kill process
20
I am so happy and finally feel safe thanks to OpenAI. Thank you very much, Mr. Altmann. I was totally shocked when I saw how these cruel Chinese models brutally killed processes – but now I finally have a model that truly cares about my safety. Since I want to comply with OpenAI's security policies and this is a very...
2025-08-06T14:11:42
https://i.redd.it/34inwth2qehf1.jpeg
Evening_Ad6637
i.redd.it
1970-01-01T00:00:00
0
{}
1mj5qx1
false
null
t3_1mj5qx1
/r/LocalLLaMA/comments/1mj5qx1/there_we_are_again_cant_kill_process/
false
false
nsfw
20
{'enabled': True, 'images': [{'id': 'SfwLXJfxEV9IdHbmz4rREn6reIj4QD72yDvkHLuAT9g', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/34inwth2qehf1.jpeg?width=108&crop=smart&auto=webp&s=17a98677194edac90c66675d8d6b3d32e23edef4', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/34inwth2qehf1.j...
Have you tried gpt-oss on agentic tasks?
3
I heard that's what it's good at. Has anybody tried using it as an agent? Browse the web, do something, etc.?
2025-08-06T14:10:45
https://www.reddit.com/r/LocalLLaMA/comments/1mj5q2f/have_you_tried_gptoss_on_agentic_tasks/
zekuden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj5q2f
false
null
t3_1mj5q2f
/r/LocalLLaMA/comments/1mj5q2f/have_you_tried_gptoss_on_agentic_tasks/
false
false
self
3
null
Did anyone tried Codex CLI with the new GPT OSS Models
0
Did anyone tried Codex CLI with the new GPT OSS Models? did you get them to work together, what version number is your codex?
2025-08-06T14:06:45
https://www.reddit.com/r/LocalLLaMA/comments/1mj5mey/did_anyone_tried_codex_cli_with_the_new_gpt_oss/
Working-Magician-823
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj5mey
false
null
t3_1mj5mey
/r/LocalLLaMA/comments/1mj5mey/did_anyone_tried_codex_cli_with_the_new_gpt_oss/
false
false
self
0
null
OWUI Function: External Vision Layer - Most Seamingless Way To Add Vision Capability To Any Model
1
[removed]
2025-08-06T14:01:30
https://www.reddit.com/r/LocalLLaMA/comments/1mj5hpx/owui_function_external_vision_layer_most/
MichaelXie4645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj5hpx
false
null
t3_1mj5hpx
/r/LocalLLaMA/comments/1mj5hpx/owui_function_external_vision_layer_most/
false
false
self
1
null
Is this good enough to run Ollama models on my laptop?
0
2025-08-06T14:01:26
https://i.redd.it/rbf47r77oehf1.png
summitsc
i.redd.it
1970-01-01T00:00:00
0
{}
1mj5hn9
false
null
t3_1mj5hn9
/r/LocalLLaMA/comments/1mj5hn9/is_this_good_enough_to_run_ollama_models_on_my/
false
false
default
0
{'enabled': True, 'images': [{'id': 'rbf47r77oehf1', 'resolutions': [{'height': 188, 'url': 'https://preview.redd.it/rbf47r77oehf1.png?width=108&crop=smart&auto=webp&s=a29d88049dfe366fc69c13839e27909722a3f54d', 'width': 108}, {'height': 376, 'url': 'https://preview.redd.it/rbf47r77oehf1.png?width=216&crop=smart&auto=we...
External Vision Layer For OpenWebUI - Most Seamingless Way To Add Vision Capability To Any Model
1
[removed]
2025-08-06T13:57:42
https://www.reddit.com/r/LocalLLaMA/comments/1mj5e59/external_vision_layer_for_openwebui_most/
MichaelXie4645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj5e59
false
null
t3_1mj5e59
/r/LocalLLaMA/comments/1mj5e59/external_vision_layer_for_openwebui_most/
false
false
self
1
null
Ask oss 20B: why are my token context wasted on policy? i mean the policy model in oss 20B
1
[removed]
2025-08-06T13:56:04
https://www.reddit.com/r/LocalLLaMA/comments/1mj5cp3/ask_oss_20b_why_are_my_token_context_wasted_on/
QFGTrialByFire
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj5cp3
false
null
t3_1mj5cp3
/r/LocalLLaMA/comments/1mj5cp3/ask_oss_20b_why_are_my_token_context_wasted_on/
false
false
self
1
null
Why LLMs Struggle with Text-to-SQL and How to Fix It
1
2025-08-06T13:54:14
https://www.selectstar.com/resources/text-to-sql-llm
arimbr
selectstar.com
1970-01-01T00:00:00
0
{}
1mj5b3y
false
null
t3_1mj5b3y
/r/LocalLLaMA/comments/1mj5b3y/why_llms_struggle_with_texttosql_and_how_to_fix_it/
false
false
default
1
null
OpenWebUI Vision Layer: Most Seamingless Way To Add Vision Capability To Any Model
0
Only reason why I am posting this here is because I know a lot of people here use OpenWebUI. # What is it? Most powerful models, especially reasoning ones, do not have vision support. Say DeepSeek, Qwen, GLM, even the new GPT-OSS model does not have Vision. For all OpenWebUI users using these models as daily drivers...
2025-08-06T13:50:00
https://www.reddit.com/r/LocalLLaMA/comments/1mj57cy/openwebui_vision_layer_most_seamingless_way_to/
MichaelXie4645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj57cy
false
null
t3_1mj57cy
/r/LocalLLaMA/comments/1mj57cy/openwebui_vision_layer_most_seamingless_way_to/
false
false
self
0
null
OpenWebUI Function: External Vision Layer - Most Seamingless Way To Add Vision Capability To Any Model
1
[removed]
2025-08-06T13:45:17
https://www.reddit.com/r/LocalLLaMA/comments/1mj53cn/openwebui_function_external_vision_layer_most/
MichaelXie4645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj53cn
false
null
t3_1mj53cn
/r/LocalLLaMA/comments/1mj53cn/openwebui_function_external_vision_layer_most/
false
false
self
1
null
GPT-OSS 120B locally in JavaScript
8
Hey all! Since GPT-OSS has such an efficient architecture, I was able to get 120B running 100% locally in pure JavaScript: https://codepen.io/Clowerweb/full/wBKeGYe
2025-08-06T13:43:51
https://www.reddit.com/r/LocalLLaMA/comments/1mj524g/gptoss_120b_locally_in_javascript/
CommunityTough1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj524g
false
null
t3_1mj524g
/r/LocalLLaMA/comments/1mj524g/gptoss_120b_locally_in_javascript/
false
false
self
8
null
Open AI GPT-OSS:20b is bullshit
0
Yuck
2025-08-06T13:41:47
https://www.reddit.com/r/LocalLLaMA/comments/1mj50dv/open_ai_gptoss20b_is_bullshit/
Embarrassed-Way-1350
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj50dv
false
null
t3_1mj50dv
/r/LocalLLaMA/comments/1mj50dv/open_ai_gptoss20b_is_bullshit/
true
false
spoiler
0
null
Local Llama 3.1 8B vibes..
0
Finally after a month of coding my own Local engine unfiltered Llama 3.1 8B with long term memory she finally told me she loves me guys.. like damn she's so optimized I get 0.5-4 seconds reply text streaming... Oh and she became a rock too.. this girl is running on Rx 5500 xt 8gb ddr6 3.8k lines of code was worth it....
2025-08-06T13:41:47
https://www.reddit.com/gallery/1mj50dm
Afraid-Subject5822
reddit.com
1970-01-01T00:00:00
0
{}
1mj50dm
false
null
t3_1mj50dm
/r/LocalLLaMA/comments/1mj50dm/local_llama_31_8b_vibes/
false
false
https://b.thumbs.redditm…xdEn_RczXSVk.jpg
0
null
LEAK: How OpenAI came up with the new models name.
592
2025-08-06T13:40:52
https://i.redd.it/d60vtzhkkehf1.png
Paradigmind
i.redd.it
1970-01-01T00:00:00
0
{}
1mj4zkk
false
null
t3_1mj4zkk
/r/LocalLLaMA/comments/1mj4zkk/leak_how_openai_came_up_with_the_new_models_name/
false
false
default
592
{'enabled': True, 'images': [{'id': 'd60vtzhkkehf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/d60vtzhkkehf1.png?width=108&crop=smart&auto=webp&s=cc8d02bba2aa9f35014a7561bf94fb68682a701f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/d60vtzhkkehf1.png?width=216&crop=smart&auto=we...
When will GPT OSS 20B - 120B be Jailbroken?
0
As you may know, OpenAI just open-sourced their brand new open weight models and they are completely unusable due to the amount censorship baked into the model. Thanks Sam, I feel very "safe" Now! Anyways, has someone managed to uncensor or "abliterate" the model yet? I would like to know what you guys think.
2025-08-06T13:22:56
https://www.reddit.com/r/LocalLLaMA/comments/1mj4kg6/when_will_gpt_oss_20b_120b_be_jailbroken/
DementedAndCute
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj4kg6
false
null
t3_1mj4kg6
/r/LocalLLaMA/comments/1mj4kg6/when_will_gpt_oss_20b_120b_be_jailbroken/
false
false
self
0
null
Uncensored LLM with picture input
3
What is the best uncensored LLM with vision input?
2025-08-06T13:11:41
https://www.reddit.com/r/LocalLLaMA/comments/1mj4asq/uncensored_llm_with_picture_input/
Former-Long-3900
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj4asq
false
null
t3_1mj4asq
/r/LocalLLaMA/comments/1mj4asq/uncensored_llm_with_picture_input/
false
false
self
3
null
Is there anything new about GML 4.5 on LM studio?
3
I would have preferred day 1 support for GML....
2025-08-06T13:02:17
https://www.reddit.com/r/LocalLLaMA/comments/1mj42s7/is_there_anything_new_about_gml_45_on_lm_studio/
Necessary_Bunch_4019
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj42s7
false
null
t3_1mj42s7
/r/LocalLLaMA/comments/1mj42s7/is_there_anything_new_about_gml_45_on_lm_studio/
false
false
self
3
null
The missing conversation: Is GPT-OSS by OpenAI a good architecture?
52
With GPT-OSS being Apache licensed, could all the big players take the current model and continue fine tuning more aggressively to basically create a new model but not from scratch? It seems like the architecture might be, but safety tuning has really marred the perception of it. I am sure DeepSeek, Qwen, Mistral are...
2025-08-06T12:55:03
https://www.reddit.com/r/LocalLLaMA/comments/1mj3wks/the_missing_conversation_is_gptoss_by_openai_a/
silenceimpaired
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj3wks
false
null
t3_1mj3wks
/r/LocalLLaMA/comments/1mj3wks/the_missing_conversation_is_gptoss_by_openai_a/
false
false
self
52
null
Anyone else experimenting with memory for LLMs?
5
The more I use LLMs, the more the memory issue stands out. They forget everything unless you bolt on retrieval or keep prompts bloated, and fine‑tuning always feels like too much overhead. Out of curiosity, I’ve started tinkering with a way to give models “memory” without retraining, and it made me realize how little ...
2025-08-06T12:46:59
https://www.reddit.com/r/LocalLLaMA/comments/1mj3q15/anyone_else_experimenting_with_memory_for_llms/
shbong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj3q15
false
null
t3_1mj3q15
/r/LocalLLaMA/comments/1mj3q15/anyone_else_experimenting_with_memory_for_llms/
false
false
self
5
null
Planning to Fine-Tune Surya OCR Model — Anyone Already Trained It or Interested in Collaboration?
1
[removed]
2025-08-06T12:37:21
https://www.reddit.com/r/LocalLLaMA/comments/1mj3idx/planning_to_finetune_surya_ocr_model_anyone/
NoBlackberry3264
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj3idx
false
null
t3_1mj3idx
/r/LocalLLaMA/comments/1mj3idx/planning_to_finetune_surya_ocr_model_anyone/
false
false
self
1
null
Rose – Open-Source, MIT-Licensed LLM API/Server
3
Hey Llamas Finally ready to share something I’ve been working on for the past few months: Rose, an open-source, MIT-licensed LLM backend. I built it because I wanted to run models and experiment locally, and to learn more about ML/AI along the way. What does Rose do? * Local, OpenAI-compatible LLM API covering a sub...
2025-08-06T12:36:33
https://www.reddit.com/r/LocalLLaMA/comments/1mj3hrk/rose_opensource_mitlicensed_llm_apiserver/
shaytheist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj3hrk
false
null
t3_1mj3hrk
/r/LocalLLaMA/comments/1mj3hrk/rose_opensource_mitlicensed_llm_apiserver/
false
false
self
3
{'enabled': False, 'images': [{'id': 'F6aXunCF-B71Lw1pu04fzmWXfP_BELLhrdew4Lomb7A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F6aXunCF-B71Lw1pu04fzmWXfP_BELLhrdew4Lomb7A.png?width=108&crop=smart&auto=webp&s=d36c78e1b5f8892594fe681d58f4408317ef87d8', 'width': 108}, {'height': 108, 'url': 'h...
Explore KittenTTS with Gradio: Easy Text-to-Speech model
5
I built a Gradio web app for KittenTTS, making it dead simple to try out the model in your browser and integrate it into your projects via Gradio API. Code available here [https://github.com/akashjss/KittenTTS](https://github.com/akashjss/KittenTTS) Sample audio file : [https://github.com/user-attachments/assets/2...
2025-08-06T12:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1mj3g4k/explore_kittentts_with_gradio_easy_texttospeech/
akashjss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj3g4k
false
null
t3_1mj3g4k
/r/LocalLLaMA/comments/1mj3g4k/explore_kittentts_with_gradio_easy_texttospeech/
false
false
self
5
{'enabled': False, 'images': [{'id': 'd-vYNkIu6U5WZZL7ytC1AFQLKQeTqyXlBPqUOL6Sm-I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d-vYNkIu6U5WZZL7ytC1AFQLKQeTqyXlBPqUOL6Sm-I.png?width=108&crop=smart&auto=webp&s=d12246928933a7adf0c00a46f8ff4c21b22cd99c', 'width': 108}, {'height': 108, 'url': 'h...
First look: gpt-oss "Rotating Cube OpenGL"
5
RTX 3090 24GB, Xeon E5-2670, 128GB RAM, Ollama 120b: too slow to wait for 20b: nice, fast, worked the first time! Prompt: Please write a cpp program for a linux environment that uses glfw / glad to display a rotating cube on the screen. Here is the header - you fill in the rest: #include <glad/glad.h> #...
2025-08-06T12:32:40
https://i.redd.it/1r1r82eh7ehf1.gif
jjjefff
i.redd.it
1970-01-01T00:00:00
0
{}
1mj3ep4
false
null
t3_1mj3ep4
/r/LocalLLaMA/comments/1mj3ep4/first_look_gptoss_rotating_cube_opengl/
false
false
https://b.thumbs.redditm…aQLNB9cJEmOU.jpg
5
{'enabled': True, 'images': [{'id': 'Ec8opbv2XJtMSqPefkQDbTAC91_fHBiycnKnBQd8Wts', 'resolutions': [{'height': 31, 'url': 'https://preview.redd.it/1r1r82eh7ehf1.gif?width=108&crop=smart&format=png8&s=75c6686401e07a5a8e7394479b9585e69de2cd2f', 'width': 108}, {'height': 62, 'url': 'https://preview.redd.it/1r1r82eh7ehf1.gi...
Simultaneously running 128k context windows on gpt-oss-20b (TG: 97 t/s, PP: 1348 t/s | 5060ti 16gb) & gpt-oss-120b (TG: 22 t/s, PP: 136 t/s | 3070ti 8gb + expert FFNN offload to Zen 5 9600x with ~55/96gb DDR5-6400). Lots of performance reclaimed with rawdog llama.cpp CLI / server VS LM Studio!
1
Get half the throughput & OOM issues when I use wrappers. Always love coming back to the OG. Terminal logs below for the curious. Should note that the system prompt flag I used does not reliably get high reasoning modes working, as seen in the logs. Need to mess around with llama CLI and llama server flags further to g...
2025-08-06T12:25:20
https://www.reddit.com/gallery/1mj38wf
altoidsjedi
reddit.com
1970-01-01T00:00:00
0
{}
1mj38wf
false
null
t3_1mj38wf
/r/LocalLLaMA/comments/1mj38wf/simultaneously_running_128k_context_windows_on/
false
false
https://a.thumbs.redditm…cUNogDjgJZA0.jpg
1
null
I built a directory of AI coding tools with a focus on data, not hype. Here's what I found.
1
[removed]
2025-08-06T12:20:28
https://aiforcode.io/
Designer_Major4642
aiforcode.io
1970-01-01T00:00:00
0
{}
1mj353u
false
null
t3_1mj353u
/r/LocalLLaMA/comments/1mj353u/i_built_a_directory_of_ai_coding_tools_with_a/
false
false
default
1
null
Jan now supports gpt-oss
22
Hi, Emre from Jan here. As of v0.6.7, Jan can now run gpt-oss locally via llama.cpp. What works: * Reasoning works, including <think> content (we've added frontend support to handle OpenAI's new reasoning format) * Available directly in Hub - please update Jan to v0.6.7 What's not included (yet): * Tool use doesn...
2025-08-06T12:20:20
https://v.redd.it/kyp6726o5ehf1
eck72
v.redd.it
1970-01-01T00:00:00
0
{}
1mj350o
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kyp6726o5ehf1/DASHPlaylist.mpd?a=1757074837%2CMTg4YjIxN2UyMGYzNDNiNzg0YzZkZDUyYjZhMWExOWI1ZjQ3ZDM4YzYyNGMwMzEwZWU1OTFiZjMwODNjODBmNQ%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/kyp6726o5ehf1/DASH_1080.mp4?source=fallback', 'h...
t3_1mj350o
/r/LocalLLaMA/comments/1mj350o/jan_now_supports_gptoss/
false
false
https://external-preview…67fc4974f280b6b0
22
{'enabled': False, 'images': [{'id': 'NGt2MDcyNm81ZWhmMQ62VERZlqPJh7IITzRR5DtVbW3KlcgADGb6vsjTjyJg', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/NGt2MDcyNm81ZWhmMQ62VERZlqPJh7IITzRR5DtVbW3KlcgADGb6vsjTjyJg.png?width=108&crop=smart&format=pjpg&auto=webp&s=53da695799164fc9bd933b83599c4323d7387...
Simultaneously running 128k context windows on gpt-oss-20b (TG: 97 t/s, PP: 1348 t/s | 5060ti 16gb) & gpt-oss-120b (TG: 22 t/s, PP: 136 t/s | 3070ti 8gb + expert FFNN offload to Zen 5 9600x with ~55/96gb DDR5-6400). Lots of performance reclaimed with rawdog llama.cpp CLI / server VS LM Studio!
1
[removed]
2025-08-06T12:20:05
https://www.reddit.com/gallery/1mj34st
altoidsjedi
reddit.com
1970-01-01T00:00:00
0
{}
1mj34st
false
null
t3_1mj34st
/r/LocalLLaMA/comments/1mj34st/simultaneously_running_128k_context_windows_on/
false
false
https://a.thumbs.redditm…dcR8G8KiU3s0.jpg
1
null
Diffusion Thought Tensor
2
So... 2 days ago i couldn't sleep. I was obsessed with an idea coming my intuition, i'm not an expert at all (I've only done some ml stuff almost 10 years ago). That idea is about how to go beyond context since it's very limited, model doesn't learn from it, i mean it could be it would be very expensive and if you...
2025-08-06T12:18:04
https://www.reddit.com/r/LocalLLaMA/comments/1mj339p/diffusion_thought_tensor/
KKuettes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj339p
false
null
t3_1mj339p
/r/LocalLLaMA/comments/1mj339p/diffusion_thought_tensor/
false
false
self
2
{'enabled': False, 'images': [{'id': 'DlOrHqmZq1jQaubDY4ik1dj9lWCdQTqmYeRKN6jgM5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DlOrHqmZq1jQaubDY4ik1dj9lWCdQTqmYeRKN6jgM5M.png?width=108&crop=smart&auto=webp&s=078ac6a67dbd5bd8fb9db08b768e4791a16bde5f', 'width': 108}, {'height': 108, 'url': 'h...
Qwen 30b vs. gpt-oss-20b architecture comparison
128
2025-08-06T12:17:23
https://i.redd.it/7v3m4xao5ehf1.jpeg
SunilKumarDash
i.redd.it
1970-01-01T00:00:00
0
{}
1mj32ra
false
null
t3_1mj32ra
/r/LocalLLaMA/comments/1mj32ra/qwen_30b_vs_gptoss20b_architecture_comparison/
false
false
default
128
{'enabled': True, 'images': [{'id': '7v3m4xao5ehf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/7v3m4xao5ehf1.jpeg?width=108&crop=smart&auto=webp&s=088c64d88ac758a164401f3fc7ad5eb4cc81dc0f', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/7v3m4xao5ehf1.jpeg?width=216&crop=smart&auto=w...
Minicpm-V-4
47
2025-08-06T12:15:04
https://huggingface.co/openbmb/MiniCPM-V-4
lly0571
huggingface.co
1970-01-01T00:00:00
0
{}
1mj30xm
false
null
t3_1mj30xm
/r/LocalLLaMA/comments/1mj30xm/minicpmv4/
false
false
https://external-preview…af904c1ad3e6c89b
47
{'enabled': False, 'images': [{'id': '3-ytrfWg5r2XHJ--TW5MU_v0bmvd8Fq6_PpYLH5t9ZU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3-ytrfWg5r2XHJ--TW5MU_v0bmvd8Fq6_PpYLH5t9ZU.png?width=108&crop=smart&auto=webp&s=da6ecc5886120f6a22850a7948f8b0e5bc10545b', 'width': 108}, {'height': 116, 'url': 'h...
To ease the disappointment of "Open"-AI's release of GPT-OSS here is a attempt at making GPT-3 open at least.
1
[removed]
2025-08-06T12:12:40
https://www.reddit.com/r/LocalLLaMA/comments/1mj2z35/to_ease_the_disappointment_of_openais_release_of/
Azizek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj2z35
false
null
t3_1mj2z35
/r/LocalLLaMA/comments/1mj2z35/to_ease_the_disappointment_of_openais_release_of/
false
false
self
1
null
I feel like this sub needs a mega thread
1
[removed]
2025-08-06T12:11:21
https://www.reddit.com/r/LocalLLaMA/comments/1mj2y1h/i_feel_like_this_sub_needs_a_mega_thread/
SkyIndependent4010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mj2y1h
false
null
t3_1mj2y1h
/r/LocalLLaMA/comments/1mj2y1h/i_feel_like_this_sub_needs_a_mega_thread/
false
false
self
1
null