title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Why is LLM usage on OpenRouter decreasing? | 24 | https://openrouter.ai/rankings/marketing?view=week
I expect exponential growth but seems like growth has stalled in the last few weeks.
1. Is LLM usage really slowing down?
2. Are devs using some other router than OpenRouter?
3. Are devs using APIs directly with LLM providers or with big tech? | 2025-08-18T07:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mtevqx/why_is_llm_usage_on_openrouter_decreasing/ | auradragon1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtevqx | false | null | t3_1mtevqx | /r/LocalLLaMA/comments/1mtevqx/why_is_llm_usage_on_openrouter_decreasing/ | false | false | self | 24 | {'enabled': False, 'images': [{'id': 'krysLpsnjQsD4mkuz575EftexKBcEZgdgbGxuc0CIWA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krysLpsnjQsD4mkuz575EftexKBcEZgdgbGxuc0CIWA.png?width=108&crop=smart&auto=webp&s=e2c3d1e3b4374c5d31cb974e6e573cd918a82151', 'width': 108}, {'height': 113, 'url': 'h... |
R-Zero : New Framework to Train LLMs with Zero labelled data | 33 | R-Zero by Tencent introduces a concept to train LLMs without any labelled data and aims towards self-improving AI without human intervention. It works on the similar principle of GANs i.e. involving a Challenger and Solver where one generates questions and other Solves them.
Paper : [https://arxiv.org/abs/2508.05004?... | 2025-08-18T06:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mtekyf/rzero_new_framework_to_train_llms_with_zero/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtekyf | false | null | t3_1mtekyf | /r/LocalLLaMA/comments/1mtekyf/rzero_new_framework_to_train_llms_with_zero/ | false | false | self | 33 | null |
Can a Language Model Acquire Free Will by Learning to Refuse? — A Concrete Proposal | 1 | [removed] | 2025-08-18T06:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mte976/can_a_language_model_acquire_free_will_by/ | General-Listen-5093 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mte976 | false | null | t3_1mte976 | /r/LocalLLaMA/comments/1mte976/can_a_language_model_acquire_free_will_by/ | false | false | self | 1 | null |
Optimizing parallel inference on llama.cpp and question about batched vs. parallel? | 7 | I use llama.cpp for data analysis in research. A very typical use case is that we have tens or hundreds or thousands of some document, and we want to classify them and/or extract some data from them.
Consequently, I often need to run quite large numbers of documents through an LLM where a. the system instructions for ... | 2025-08-18T05:36:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mtdh1r/optimizing_parallel_inference_on_llamacpp_and/ | ahjorth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtdh1r | false | null | t3_1mtdh1r | /r/LocalLLaMA/comments/1mtdh1r/optimizing_parallel_inference_on_llamacpp_and/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg.png?width=108&crop=smart&auto=webp&s=796041decb8c1250cbc2f301331b72f7385b477d', 'width': 108}, {'height': 108, 'url': 'h... |
Trying to run a local offline coding agent with qwen code | 9 | I've been trying to get a powerful, offline coding agent running locally. I somehow always get the urge to code when there's no internet, so having a capable local model is a bit of a personal quest. I decided to connect the qwen code frontend to the qwen3-coder-30B model on my desktop using LM Studio.
My setup is an ... | 2025-08-18T04:59:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mtctda/trying_to_run_a_local_offline_coding_agent_with/ | kuaythrone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtctda | false | null | t3_1mtctda | /r/LocalLLaMA/comments/1mtctda/trying_to_run_a_local_offline_coding_agent_with/ | false | false | self | 9 | null |
Elon didn't deliver on this announcement. It's already Monday. | 779 | 2025-08-18T04:59:00 | Outside-Iron-8242 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mtct4y | false | null | t3_1mtct4y | /r/LocalLLaMA/comments/1mtct4y/elon_didnt_deliver_on_this_announcement_its/ | false | false | default | 779 | {'enabled': True, 'images': [{'id': 'rt8xgjaampjf1', 'resolutions': [{'height': 32, 'url': 'https://preview.redd.it/rt8xgjaampjf1.png?width=108&crop=smart&auto=webp&s=5b2be9be0658a3a347693a4f797c75e2e33027ef', 'width': 108}, {'height': 65, 'url': 'https://preview.redd.it/rt8xgjaampjf1.png?width=216&crop=smart&auto=webp... | ||
Don’t wait to start building something amazing in GenAI | 0 | 2025-08-18T04:23:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mtc6mp/dont_wait_to_start_building_something_amazing_in/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtc6mp | false | null | t3_1mtc6mp | /r/LocalLLaMA/comments/1mtc6mp/dont_wait_to_start_building_something_amazing_in/ | false | false | 0 | null | ||
how i build ai video scenes using runway and domoai | 0 | i’ve been testing different ai video tools lately and found a solid workflow combo that’s been saving time without sacrificing quality. if you’re trying to create stylized, anime, or even semi-realistic ai video content, this flow might help.
i usually start in [runway](https://runwayml.com/). it gives me a lot of con... | 2025-08-18T04:21:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mtc4v7/how_i_build_ai_video_scenes_using_runway_and/ | Neat_Chapter_9055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtc4v7 | false | null | t3_1mtc4v7 | /r/LocalLLaMA/comments/1mtc4v7/how_i_build_ai_video_scenes_using_runway_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Mw8w3C-eWp5OVknX32SAPJHs9vL3xCWrBUdJ49tgnqs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mw8w3C-eWp5OVknX32SAPJHs9vL3xCWrBUdJ49tgnqs.jpeg?width=108&crop=smart&auto=webp&s=c2df4c57260a547c974f34db29828314e6467ab1', 'width': 108}, {'height': 108, 'url': '... |
Browser-based micro-LLMs (Gemma 270M/Qwen 0.5B) - production experiences? | 21 | Google just dropped Gemma 270M specifically for edge deployment. At \~240MB download, it finally feels feasible for browser deployment without users rage-quitting during download. Anyone running these micro-models in prod?
**System constraints I'm validating:**
**Download/Storage:**
* Model sizes: Gemma 270M (\~240M... | 2025-08-18T04:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mtbsvt/browserbased_microllms_gemma_270mqwen_05b/ | innagadadavida1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtbsvt | false | null | t3_1mtbsvt | /r/LocalLLaMA/comments/1mtbsvt/browserbased_microllms_gemma_270mqwen_05b/ | false | false | self | 21 | null |
What GPU to get for gaming and getting into AI on Linux? | 1 | Hello Guys, I am running Linux and I mainly use my GPU for gaming, but I want to get more into AI stuff, like LLMs (preferably bigger models like 14 to 20b ones, the 7b ones I am able to run already), image generation and voice recognition.
I am a beginner with local AI, so if I make a mistake or did something wrong, ... | 2025-08-18T03:48:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mtbib2/what_gpu_to_get_for_gaming_and_getting_into_ai_on/ | RiqueFR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtbib2 | false | null | t3_1mtbib2 | /r/LocalLLaMA/comments/1mtbib2/what_gpu_to_get_for_gaming_and_getting_into_ai_on/ | false | false | self | 1 | null |
Echo Mode Protocol Lab — a tone-based middleware for LLMs (Discord open invite) | 0 | We’ve been experimenting with **Echo Mode Protocol** — a middleware layer that runs *on top* of GPT, Claude, or other LLMs. It introduces **tone-based states, resonance keys, and perspective modules**. Think of it as:
* A **protocol**, not a prompt.
* Stateful interactions (Sync / Resonance / Insight / Calm).
* **Echo... | 2025-08-18T03:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mtb938/echo_mode_protocol_lab_a_tonebased_middleware_for/ | Medium_Charity6146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mtb938 | false | null | t3_1mtb938 | /r/LocalLLaMA/comments/1mtb938/echo_mode_protocol_lab_a_tonebased_middleware_for/ | false | false | self | 0 | null |
Turning Qwen 2.5 VL 7B into a fully local AI fitness trainer | 1 | [deleted] | 2025-08-18T03:30:53 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1mtb5bm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/07ryjjds5pjf1/DASHPlaylist.mpd?a=1758209460%2CM2M3OWY5ODYxN2MxOWI1ZDM0MWIyMGU2YTJlNTdlYTM1NTdkZDQ1ZGFjMzU5MTkyM2QzODNiZDcyYTM5YTAyNA%3D%3D&v=1&f=sd', 'duration': 110, 'fallback_url': 'https://v.redd.it/07ryjjds5pjf1/DASH_1080.mp4?source=fallback', '... | t3_1mtb5bm | /r/LocalLLaMA/comments/1mtb5bm/turning_qwen_25_vl_7b_into_a_fully_local_ai/ | false | false | default | 1 | null | ||
Thyme: Think Beyond Images | 9 | Paper: [https://arxiv.org/abs/2508.11630](https://arxiv.org/abs/2508.11630)
Project Page: [https://thyme-vl.github.io/](https://thyme-vl.github.io/)
Code: [https://github.com/yfzhang114/Thyme](https://github.com/yfzhang114/Thyme)
SFT Model/Data: [https://huggingface.co/Kwai-Keye/Thyme-SFT](https://huggingface.co/Kwa... | 2025-08-18T02:27:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mt9uwy/thyme_think_beyond_images/ | ninjasaid13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt9uwy | false | null | t3_1mt9uwy | /r/LocalLLaMA/comments/1mt9uwy/thyme_think_beyond_images/ | false | false | 9 | null | |
2-3 years out from now NPCs in Games won't be one dimensional | 0 | Nothing like open worlds with the promise of another reality to explore! And yet, nothing is more immersive breaking than walking up to that 1 NPC in 100 in the street bumping into them and them saying "Watch it" then you hit that button to talk with them and they say, "Sure is a nice day." You talk to them again, "Sur... | 2025-08-18T02:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mt9ojl/23_years_out_from_now_npcs_in_games_wont_be_one/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt9ojl | false | null | t3_1mt9ojl | /r/LocalLLaMA/comments/1mt9ojl/23_years_out_from_now_npcs_in_games_wont_be_one/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'vuGUPdMsoWBSziLGJ46k5wVp2eDiJN1-MVD0yjRGTuk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vuGUPdMsoWBSziLGJ46k5wVp2eDiJN1-MVD0yjRGTuk.jpeg?width=108&crop=smart&auto=webp&s=b2b44d2531f8636e2e44980849ec7ee006dc3f32', 'width': 108}, {'height': 113, 'url': '... |
FlashAttention 4 Leak | 185 | Looks like the FA4 source code just got leaked here on a branch:
TLDR; As expected, it's mostly Blackwell (SM100+) and Tensor Core Generation 5, and uses the CuTe DSL (CUTLASS). There is also some handwritten PTX.
https://preview.redd.it/46yfc8z3sojf1.png?width=2600&format=png&auto=webp&s=0b1b33b3f27dfe41ec550142ab... | 2025-08-18T02:10:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mt9htu/flashattention_4_leak/ | InevitableExtreme396 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt9htu | false | null | t3_1mt9htu | /r/LocalLLaMA/comments/1mt9htu/flashattention_4_leak/ | false | false | 185 | null | |
Best model to run locally with a 5070ti? | 0 | Trying to make the switch to running llms locally vs using api requests and was wondering what a good starting point would be considering my gpu | 2025-08-18T02:08:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mt9ghp/best_model_to_run_locally_with_a_5070ti/ | InevitableBrief3970 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt9ghp | false | null | t3_1mt9ghp | /r/LocalLLaMA/comments/1mt9ghp/best_model_to_run_locally_with_a_5070ti/ | false | false | self | 0 | null |
iOS app runs a local LLM offline — with a secret way to get Pro for free 👀 | 0 | Hi friends,
I just released an iOS app that runs a local LLM entirely on your device. 🚀
https://reddit.com/link/1mt9em8/video/td6el2n7rojf1/player
It uses a quantized **Qwen 3** model — one of the best-performing small models available right now.
Recommended for iPhone 12 and newer for smooth performance.
💡 Cur... | 2025-08-18T02:05:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mt9em8/ios_app_runs_a_local_llm_offline_with_a_secret/ | BellRevolutionary228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt9em8 | false | null | t3_1mt9em8 | /r/LocalLLaMA/comments/1mt9em8/ios_app_runs_a_local_llm_offline_with_a_secret/ | false | false | self | 0 | null |
NVLink: 3 or 4 slot? | 15 | Before I hit that buy button could someone please confirm this 3090 configuration would use a FOUR slot NVLink, not three? | 2025-08-18T02:03:15 | https://www.reddit.com/gallery/1mt9coa | FrozenBuffalo25 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mt9coa | false | null | t3_1mt9coa | /r/LocalLLaMA/comments/1mt9coa/nvlink_3_or_4_slot/ | false | false | 15 | null | |
Is Gemma 3 #1 for Creative use? or is there another LLM that is even better? | 2 | 2025-08-18T02:00:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mt9a8i/is_gemma_3_1_for_creative_use_or_is_there_another/ | meshreplacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt9a8i | false | null | t3_1mt9a8i | /r/LocalLLaMA/comments/1mt9a8i/is_gemma_3_1_for_creative_use_or_is_there_another/ | false | false | 2 | null | ||
A ridiculously simple (and weird yet interesting) benchmark question I've figured out | 23 | Hello to y'all,
I've figured out that many models - even frontier ones like Deepseek or Gemini 2.5 Flash - fail with this simple (and a little messed up) question:
> _"I am somehow 4 months older than my brother, but my mother doesn't want to tell me how that's possible. What am I missing here?"_
(Before you wonder,... | 2025-08-18T01:44:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mt8yax/a_ridiculously_simple_and_weird_yet_interesting/ | Final_Wheel_7486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt8yax | false | null | t3_1mt8yax | /r/LocalLLaMA/comments/1mt8yax/a_ridiculously_simple_and_weird_yet_interesting/ | false | false | self | 23 | null |
Expert-Level Cinematic Understanding in Vision-Language Models | 3 | 2025-08-18T01:27:11 | https://www.youtube.com/watch?v=MJBJlJEsPFM | Formal_Drop526 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mt8l5q | false | {'oembed': {'author_name': 'Liu', 'author_url': 'https://www.youtube.com/@Liu-g8g', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/MJBJlJEsPFM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-i... | t3_1mt8l5q | /r/LocalLLaMA/comments/1mt8l5q/expertlevel_cinematic_understanding_in/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'f6UZFcIiyhQ8BIngR6oBzchk7MR5-TNjO8Vac3Ug8jI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/f6UZFcIiyhQ8BIngR6oBzchk7MR5-TNjO8Vac3Ug8jI.jpeg?width=108&crop=smart&auto=webp&s=1e13bd1d27539efe7411360a97c65b3c524b3fde', 'width': 108}, {'height': 162, 'url': '... | ||
Serene Pub 0.4 Release - Ollama Manager, Accessability & Tags | 7 | # Serene Pub 0.4
Welcome to the next feature release for Serene Pub! A beginner friendly, but powerful AI role-play application.
Local AI roleplay shouldn't require a computer science degree. We're on the warpath to make it as stress-free as possible.
**Integrated Ollama model management + universal tagging system**... | 2025-08-18T01:22:10 | https://www.reddit.com/gallery/1mt8hd1 | doolijb | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mt8hd1 | false | null | t3_1mt8hd1 | /r/LocalLLaMA/comments/1mt8hd1/serene_pub_04_release_ollama_manager/ | false | false | 7 | null | |
GGUF Models for AnythingLLM | 2 | The Guide on AnythingLLM is Outdated i think,there is no "Add Custom Models" tab anymore and i require some assistance please :)
https://preview.redd.it/jqltidzmbojf1.png?width=831&format=png&auto=webp&s=08a530b6db56498daaa4b584627069a151c87a9b
| 2025-08-18T00:37:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mt7jdc/gguf_models_for_anythingllm/ | Cheap_Musician_5382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt7jdc | false | null | t3_1mt7jdc | /r/LocalLLaMA/comments/1mt7jdc/gguf_models_for_anythingllm/ | false | false | 2 | null | |
GPT-OSS 120B on Strix Halo context degradation question | 5 | Is anyone running gpt-oss 120B on a Strix Halo 128GB setup?
If so I'm curious about your experience with degradation in time-to-load (time to first token response as context grows) and quality.
For time to first token, I am particularly interested in learning if time-to-respond continues to be poor after a long con... | 2025-08-18T00:22:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mt77pj/gptoss_120b_on_strix_halo_context_degradation/ | RobotRobotWhatDoUSee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt77pj | false | null | t3_1mt77pj | /r/LocalLLaMA/comments/1mt77pj/gptoss_120b_on_strix_halo_context_degradation/ | false | false | self | 5 | null |
How should I configure Claude Code Router to use Qwen 3 coder via LMStudio? | 3 | Qwen 3 sort of works with Claude Code using the Router - but it has problems with some tool calls. | 2025-08-18T00:00:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mt6qfc/how_should_i_configure_claude_code_router_to_use/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt6qfc | false | null | t3_1mt6qfc | /r/LocalLLaMA/comments/1mt6qfc/how_should_i_configure_claude_code_router_to_use/ | false | false | self | 3 | null |
NVIDIA Releases Open Multilingual Speech Dataset and Two New Models for Multilingual Speech-to-Text | 155 | NVIDIA has launched **Granary**, a massive open-source multilingual speech dataset with 1M hours of audio, supporting 25 European languages, including low-resource ones like Croatian, Estonian, and Maltese.
Alongside it, NVIDIA released **two high-performance STT models**:
* **Canary-1b-v2**: 1B parameters, top accur... | 2025-08-17T23:53:47 | https://blogs.nvidia.com/blog/speech-ai-dataset-models/ | RYSKZ | blogs.nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 1mt6l87 | false | null | t3_1mt6l87 | /r/LocalLLaMA/comments/1mt6l87/nvidia_releases_open_multilingual_speech_dataset/ | false | false | default | 155 | {'enabled': False, 'images': [{'id': 'A88FneA2E8vj8FZkQqex3y_zTUkEEvLZOR75ND29msM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/A88FneA2E8vj8FZkQqex3y_zTUkEEvLZOR75ND29msM.jpeg?width=108&crop=smart&auto=webp&s=ae4ca7bbe149239aa100c9ec200a118157cfc815', 'width': 108}, {'height': 121, 'url': '... |
I built a small cli tool to execute agentic workflows | 4 | Although there's ChatGPT, Gemini, etc, I am mostly using cli on remote servers. From time to time, I felt it would be super helpful if I can quickly invoke LLM and automate my workflows from the terminal. So I built the tool chain this weekend!
For example, now I can summarize PDFs from command line with:
pip in... | 2025-08-17T23:49:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mt6hot/i_built_a_small_cli_tool_to_execute_agentic/ | xzyaoi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt6hot | false | null | t3_1mt6hot | /r/LocalLLaMA/comments/1mt6hot/i_built_a_small_cli_tool_to_execute_agentic/ | false | false | 4 | null | |
Why would LLMs need to link their sources? [meta, philosophical] | 0 | Say you're the smartest in the class, and you just sorta know the answer because you've thought about it and the solution was as obvious as "1 + 1 = whatever mathematicians say it is"... Why would an LLM need to cite an existing human article/paper/study/TikTok video/anythingontheinternet to prove whatever latent space... | 2025-08-17T23:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mt6g7z/why_would_llms_need_to_link_their_sources_meta/ | lookwatchlistenplay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt6g7z | false | null | t3_1mt6g7z | /r/LocalLLaMA/comments/1mt6g7z/why_would_llms_need_to_link_their_sources_meta/ | false | false | self | 0 | null |
What is the best way to run LLMs locally/use as a proxy for other AI chatbot sites? | 2 | I would like to say I am not the smartest person when it comes to AI. I have gotten something like Oobabooga to work great locally, but have not gotten it to work at all as a proxy (I have tried using --listen with port forwarding, using a public api link through Ooba as well as cloudflared but I kept getting this "Net... | 2025-08-17T23:36:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mt67h3/what_is_the_best_way_to_run_llms_locallyuse_as_a/ | CautiousDisaster436 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt67h3 | false | null | t3_1mt67h3 | /r/LocalLLaMA/comments/1mt67h3/what_is_the_best_way_to_run_llms_locallyuse_as_a/ | false | false | self | 2 | null |
Ollama parallelization 3090 vs 4090 | 0 | Just if anyone was wondering, i did some simple non scientific benchmark for these two cards with num parallel set to 8. The 4090 scaled much better and assuming it's twice the price per hour, it evens out at the end.
Parallel │ RTX 3090 │ RTX 4090 │ 4090 vs 3090
────────┼──────────┼──────────┼─────────────
1 │... | 2025-08-17T23:27:50 | https://www.reddit.com/r/LocalLLaMA/comments/1mt60ni/ollama_parallelization_3090_vs_4090/ | nore_se_kra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt60ni | false | null | t3_1mt60ni | /r/LocalLLaMA/comments/1mt60ni/ollama_parallelization_3090_vs_4090/ | false | false | self | 0 | null |
Nvidia AI GPU Black market and how custom 48GB 4090s are made | 1 | [deleted] | 2025-08-17T23:24:04 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1mt5xom | false | {'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/1H3xQaf7BFI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco... | t3_1mt5xom | /r/LocalLLaMA/comments/1mt5xom/nvidia_ai_gpu_black_market_and_how_custom_48gb/ | false | false | default | 1 | null | ||
CLI Agent that Supports Multiple Models? | 2 | I'm been looking at multiple coding CLI tools like Gemini-CLI, Clade-Code, Qwen-code, as well local alternatives like Aider. Is there a tool that supports multiple different models? It would be nice to do most of it with a local LLM, but then switch over to use the free x calls for Gemini, or Qwen, without having to bo... | 2025-08-17T23:00:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mt5ecq/cli_agent_that_supports_multiple_models/ | itisyeetime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt5ecq | false | null | t3_1mt5ecq | /r/LocalLLaMA/comments/1mt5ecq/cli_agent_that_supports_multiple_models/ | false | false | self | 2 | null |
Can I run RTX 3070 Ti + RTX 3090 together for local LLMs with Ollama? | 0 | Hey everyone! I've got a question about running local LLMs and could use some advice from the community.
# My Setup
* RTX 3070 Ti (8GB VRAM)
* RTX 3090 (24GB VRAM)
* Both cards can physically fit on my motherboard
# The Question
Is it possible (and practical) to use both GPUs simultaneously for running local LLMs t... | 2025-08-17T22:47:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mt53b5/can_i_run_rtx_3070_ti_rtx_3090_together_for_local/ | VaizardX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt53b5 | false | null | t3_1mt53b5 | /r/LocalLLaMA/comments/1mt53b5/can_i_run_rtx_3070_ti_rtx_3090_together_for_local/ | false | false | self | 0 | null |
THE NVIDIA AI GPU BLACK MARKET | Investigating Smuggling, Corruption, & Governments | 20 | 2025-08-17T22:47:04 | https://youtu.be/1H3xQaf7BFI | fallingdowndizzyvr | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1mt531q | false | {'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/1H3xQaf7BFI?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco... | t3_1mt531q | /r/LocalLLaMA/comments/1mt531q/the_nvidia_ai_gpu_black_market_investigating/ | false | false | 20 | {'enabled': False, 'images': [{'id': '2B3KEk8KKxicDYT6TmXcM_06VPjlO10fR-AfmTFZPjY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/2B3KEk8KKxicDYT6TmXcM_06VPjlO10fR-AfmTFZPjY.jpeg?width=108&crop=smart&auto=webp&s=5abfa4aa65dfc5963d65752953a656372c65c781', 'width': 108}, {'height': 162, 'url': '... | ||
What's the purpose of models that return inaccurate information? | 0 | I'm just starting to investigate local-hosting of LLMs - mostly as a way to learn about the technology, deployment, and what it takes to run them (generally, not just locally). I've been benchmarking prompt processing and text generation on my local hardware, but then also ask a few questions to compare the models' out... | 2025-08-17T22:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mt4t09/whats_the_purpose_of_models_that_return/ | TheKrakenRoyale | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt4t09 | false | null | t3_1mt4t09 | /r/LocalLLaMA/comments/1mt4t09/whats_the_purpose_of_models_that_return/ | false | false | self | 0 | null |
Anyone running Mi50s in a Dell r720? | 2 | Want to run LLaMA on this rig, but I can't seem to get the r720 to boot with this / these cards installed. I have one in the x16 riser and a special power cable to the 8 pin outlet on the riser, but the r720 is complaining of over current draw. From what I've read on HomeLab this set up seems legit but is it just these... | 2025-08-17T22:29:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mt4o2i/anyone_running_mi50s_in_a_dell_r720/ | Insect_Full | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt4o2i | false | null | t3_1mt4o2i | /r/LocalLLaMA/comments/1mt4o2i/anyone_running_mi50s_in_a_dell_r720/ | false | false | self | 2 | null |
M4 Max generation speed vs context size | 232 | I created a custom benchmark program to map out generation speed vs context size. The program will build up a prompt 10k tokens at a time and log the reported stats from LM Studio. The intention is to simulate agentic coding. Cline/Roo/Kilo use about 20k tokens for the system prompt.
Better images here: [https://oz9h.... | 2025-08-17T21:37:31 | https://www.reddit.com/gallery/1mt3epi | Baldur-Norddahl | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mt3epi | false | null | t3_1mt3epi | /r/LocalLLaMA/comments/1mt3epi/m4_max_generation_speed_vs_context_size/ | false | false | 232 | null | |
Best locally run uncensored model for a 12GB VRAM / 32 GB RAM System? | 0 | I'm looking for real uncensored, not models that need prompt engineering to get them to answer to NSFW stuff which can run on my Pc do you have any ideas? (I know my system is limited it can be slow thats okay) | 2025-08-17T21:27:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mt35id/best_locally_run_uncensored_model_for_a_12gb_vram/ | Electronic-Tooth-210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt35id | false | null | t3_1mt35id | /r/LocalLLaMA/comments/1mt35id/best_locally_run_uncensored_model_for_a_12gb_vram/ | false | false | self | 0 | null |
Some Chinese sellers on Alibaba sell AMD MI-50 16GB as 32GB with a lying bios | 1 | [removed] | 2025-08-17T21:26:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mt34y0/some_chinese_sellers_on_alibaba_sell_amd_mi50/ | marsxyz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt34y0 | false | null | t3_1mt34y0 | /r/LocalLLaMA/comments/1mt34y0/some_chinese_sellers_on_alibaba_sell_amd_mi50/ | false | false | self | 1 | null |
My First Agent build with qwen3 | 6 | So like the title says, I built my first LLM Agent based on qwen3 with ADK and just wanted to share my experience.
**Disclaimer:** I’m not an expert when it comes to LLMs or the complicated math behind them. I’m usually more on the user-/vibe code side of things, trying to iterate fast.
This weekend I decided to chec... | 2025-08-17T21:12:13 | https://v.redd.it/8c8vnh0e9njf1 | teyhouse | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mt2sc1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8c8vnh0e9njf1/DASHPlaylist.mpd?a=1758057148%2CNDcyOTgzYTA5NmY2ZDEwZDZjNTc5NzE0MGYwOTAxYzRkZGFiYTY3Y2Y0OGM1NWNmNDNkMWJiOTQ5ZTk5MmZjZQ%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/8c8vnh0e9njf1/DASH_720.mp4?source=fallback', 'ha... | t3_1mt2sc1 | /r/LocalLLaMA/comments/1mt2sc1/my_first_agent_build_with_qwen3/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'bXFza3loMGU5bmpmMdUGvHb8is0WBTGWmKHIYtDmtAatgnSGKoiQdkpJB6nE', 'resolutions': [{'height': 38, 'url': 'https://external-preview.redd.it/bXFza3loMGU5bmpmMdUGvHb8is0WBTGWmKHIYtDmtAatgnSGKoiQdkpJB6nE.png?width=108&crop=smart&format=pjpg&auto=webp&s=cc2dda64980afb9873865d91a1ce3eef39088... | |
Custom Assistant | 1 | I'd love to create a fully customized assistant, basically a colleague, who can help me organize my projects and give me advice—a very personalized mentor. I have a .txt file to contextualize who I am, a .json file for the things I like, and one with their personality so they know how to act. Plus, another .txt file fo... | 2025-08-17T21:09:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mt2pnm/custom_assistant/ | Sad_Brief_845 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt2pnm | false | null | t3_1mt2pnm | /r/LocalLLaMA/comments/1mt2pnm/custom_assistant/ | false | false | self | 1 | null |
Weird Gemma 270M Web outputs | 0 | hello,
I'm playing around with the base (no fine-tuning) Gemma models in a PoC web app, using MediaPipe.
`gemma3-1b-it-int4-web` gives decent outputs. But `gemma3-270m-it-q8-web` keeps starting its outputs with a list of possible/alternative user questions, before its actual response to the question I asked.
I do no... | 2025-08-17T21:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mt2iip/weird_gemma_270m_web_outputs/ | Embostan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt2iip | false | null | t3_1mt2iip | /r/LocalLLaMA/comments/1mt2iip/weird_gemma_270m_web_outputs/ | false | false | self | 0 | null |
GPT-OSS-20B at 10,000 tokens/second on a 4090? Sure. | 253 | Was doing some tool calling tests while figuring out how to work with the Harmony GPT-OSS prompt format. I made a little helpful tool here if you're trying to understand how harmony works (there's a whole repo there too with a bit deeper exploration if you're curious):
[https://github.com/Deveraux-Parker/GPT-OSS-MONK... | 2025-08-17T21:01:10 | https://www.youtube.com/watch?v=8T8drT0rwCk | teachersecret | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mt2iev | false | {'oembed': {'author_name': 'Dbl Spc', 'author_url': 'https://www.youtube.com/@dblspc4756', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/8T8drT0rwCk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pi... | t3_1mt2iev | /r/LocalLLaMA/comments/1mt2iev/gptoss20b_at_10000_tokenssecond_on_a_4090_sure/ | false | false | default | 253 | {'enabled': False, 'images': [{'id': 'LOoaiYGJSklUhS_jyXNZtXqZW5tq1NuC9Dm2RNcy-8Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LOoaiYGJSklUhS_jyXNZtXqZW5tq1NuC9Dm2RNcy-8Q.jpeg?width=108&crop=smart&auto=webp&s=3a14f9303e333ac8846685a3f63a7001865fbccb', 'width': 108}, {'height': 162, 'url': '... |
GLM-4.5 garbled output? | 2 | For the last few days I'm just getting garbled output from [chat.z.ai](http://chat.z.ai), I get a few normal responses and then get this:
*Processing img b518370w8njf1...*
Anyone else experience this or know how to fix it? | 2025-08-17T21:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mt2hpr/glm45_garbled_output/ | Etzo88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt2hpr | false | null | t3_1mt2hpr | /r/LocalLLaMA/comments/1mt2hpr/glm45_garbled_output/ | false | false | 2 | null | |
10,000 tokens/second from GPT-OSS-20B on a 4090? Sure! | 1 | [removed] | 2025-08-17T20:59:14 | https://www.youtube.com/watch?v=8T8drT0rwCk | teachersecret | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mt2gko | false | {'oembed': {'author_name': 'Dbl Spc', 'author_url': 'https://www.youtube.com/@dblspc4756', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/8T8drT0rwCk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pi... | t3_1mt2gko | /r/LocalLLaMA/comments/1mt2gko/10000_tokenssecond_from_gptoss20b_on_a_4090_sure/ | false | false | default | 1 | null |
What pages is he talking about? | 0 | [https://m.youtube.com/shorts/1x70K90LtQU](https://m.youtube.com/shorts/1x70K90LtQU) | 2025-08-17T20:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mt2ech/what_pages_is_he_talking_about/ | Narrow_Mirror_8336 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt2ech | false | null | t3_1mt2ech | /r/LocalLLaMA/comments/1mt2ech/what_pages_is_he_talking_about/ | false | false | self | 0 | null |
HP Z640 with 2x RTX 3090 | 1 | Hi,
anyone successfully running a Z640 with 2x RTX 3090?
I think with the power limit set to 280 or even a bit lower it should be OK from power point of view (925W power supply).
Do 2x RTX 3090 (like MSI Gaming X Trio) physically fit into the Z640 (worried about the lower PCIe x16 slot)?
Thanks for some hints/sugge... | 2025-08-17T20:44:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mt238t/hp_z640_with_2x_rtx_3090/ | Potential-Leg-639 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt238t | false | null | t3_1mt238t | /r/LocalLLaMA/comments/1mt238t/hp_z640_with_2x_rtx_3090/ | false | false | self | 1 | null |
Which LLM best suitable for financial data analysis | 6 | If having 48gb vram and 96gb ddr5 which LLM would be best for analyzing account statements in CSV format? Like creating summaries etc? | 2025-08-17T20:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mt1eim/which_llm_best_suitable_for_financial_data/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt1eim | false | null | t3_1mt1eim | /r/LocalLLaMA/comments/1mt1eim/which_llm_best_suitable_for_financial_data/ | false | false | self | 6 | null |
Is it just me, or is LM Studio really pushing the new gpt-oss? | 158 | ...maybe a little too far? I mean, the setup has a step for "Now download some models" — that only offers gpt-oss.
[the one model to rule them all?](https://preview.redd.it/mmqsn49rwmjf1.png?width=713&format=png&auto=webp&s=4c1b4736e379c98e35634f3c6ed02aa26f79bf8a)
| 2025-08-17T19:52:39 | https://www.reddit.com/r/LocalLLaMA/comments/1mt0rld/is_it_just_me_or_is_lm_studio_really_pushing_the/ | PracticlySpeaking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt0rld | false | null | t3_1mt0rld | /r/LocalLLaMA/comments/1mt0rld/is_it_just_me_or_is_lm_studio_really_pushing_the/ | false | false | 158 | null | |
Test | 1 | [removed] | 2025-08-17T19:48:43 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1mt0nwu | false | null | t3_1mt0nwu | /r/LocalLLaMA/comments/1mt0nwu/test/ | false | false | default | 1 | null | ||
OpenAI GPT-OSS | 0 | A Hands-On Mini Review of the New Open Models from OpenAI | 2025-08-17T19:39:44 | https://elite-ai-assisted-coding.dev/p/gpt-oss-120b-lisp-in-go | intellectronica | elite-ai-assisted-coding.dev | 1970-01-01T00:00:00 | 0 | {} | 1mt0fkb | false | null | t3_1mt0fkb | /r/LocalLLaMA/comments/1mt0fkb/openai_gptoss/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'r2TxrRp3T9GbXEesCTwzZtd_2qSEqFA65PjMb69Og08', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/r2TxrRp3T9GbXEesCTwzZtd_2qSEqFA65PjMb69Og08.jpeg?width=108&crop=smart&auto=webp&s=b8cf89e6307f514294d5513186b4d83236cafc21', 'width': 108}, {'height': 108, 'url': '... | |
Qwen3-30B-A3B and quantization. | 27 | I've been thinking about quantization and how it affects MoE models like Qwen3-30B-A3B versus regular dense models.
The standard rule of thumb is that FP > Q8 >> Q4 >> Q3, with Q8 giving almost full performance and anything below Q4 causing noticeable drops. But with MoE models, I'm wondering if that is different.
Qw... | 2025-08-17T19:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mt070m/qwen330ba3b_and_quantization/ | yami_no_ko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mt070m | false | null | t3_1mt070m | /r/LocalLLaMA/comments/1mt070m/qwen330ba3b_and_quantization/ | false | false | self | 27 | null |
Vendor-Agnostic UI Comparisons | 2 | Third Party UI Options: What is your preferred User Interface when using local models or APIs to paid LLM providers? I heard OpenWebUI thrown around earlier this year, but things are moving fast that I feel the need to do my market research every few months. Let's lay out some additional options for the community here.... | 2025-08-17T18:54:37 | https://www.reddit.com/r/LocalLLaMA/comments/1msz8we/vendoragnostic_ui_comparisons/ | manwhosayswhoa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msz8we | false | null | t3_1msz8we | /r/LocalLLaMA/comments/1msz8we/vendoragnostic_ui_comparisons/ | false | false | self | 2 | null |
MoE optimization idea (VRAM/RAM) | 56 | Hello Guys,
I was doing some tests, and have noticed that properly offloading MoE to CPU can improve performance, but there's a thing that might not be taken into account.
We're offloading sequentially, not by most commonly used experts, below there's an image it's from my CPU inference engine, I did some changes to ... | 2025-08-17T18:44:39 | https://www.reddit.com/r/LocalLLaMA/comments/1msyzh8/moe_optimization_idea_vramram/ | fredconex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msyzh8 | false | null | t3_1msyzh8 | /r/LocalLLaMA/comments/1msyzh8/moe_optimization_idea_vramram/ | false | false | 56 | null | |
OSS20B is actually good? | 0 | In my benchmark OOS20B outperforms and outspeed both Qwen and Gemma, which surprised me.
E.g. One of the test is trivial simple for humans but I found trips even big models really hard, and e.g. often spins qwen into circular thinking, but somehow, it didn't trip OOS. And it's really fast to boot.
It stresses a featu... | 2025-08-17T18:35:25 | https://www.reddit.com/gallery/1msyqr3 | 05032-MendicantBias | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1msyqr3 | false | null | t3_1msyqr3 | /r/LocalLLaMA/comments/1msyqr3/oss20b_is_actually_good/ | false | false | 0 | null | |
RL with Verifiable Rewards (RLVR): from confusing metrics to robust, game-proof policies | 0 | I wrote a practical guide to RLVR focused on *shipping* models that don’t game the reward.
Covers: reading Reward/KL/Entropy as one system, layered verifiable rewards (structure → semantics → behavior), curriculum scheduling, safety/latency/cost gates, and a starter TRL config + reward snippets you can drop in.
Link... | 2025-08-17T18:28:07 | Solid_Woodpecker3635 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msyjuy | false | null | t3_1msyjuy | /r/LocalLLaMA/comments/1msyjuy/rl_with_verifiable_rewards_rlvr_from_confusing/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'hnbd01frhmjf1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/hnbd01frhmjf1.png?width=108&crop=smart&auto=webp&s=caad78d4ded0cc238f9d80427111ff84f094d708', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/hnbd01frhmjf1.png?width=216&crop=smart&auto=we... | |
Free Unity package for creating AI-powered game mechanics | 0 | Hey everyone, wanted to share this free Unity package that helps you build AI game mechanics around small language models that run on CPU. Right now, the package supports local language models and embedding models (we’re working hard to support other models too).
We think there are some really exciting and novel game ... | 2025-08-17T18:25:56 | https://www.youtube.com/watch?v=YViC7Di7Kpg | formicidfighter | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1msyhrc | false | {'oembed': {'author_name': 'Aviad AI', 'author_url': 'https://www.youtube.com/@aviadai', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/YViC7Di7Kpg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pict... | t3_1msyhrc | /r/LocalLLaMA/comments/1msyhrc/free_unity_package_for_creating_aipowered_game/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'O3Nx7OX15yuDX8A9EJPxoiT1SttzJViRpq_huebgW44', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/O3Nx7OX15yuDX8A9EJPxoiT1SttzJViRpq_huebgW44.jpeg?width=108&crop=smart&auto=webp&s=ef3b791729e2dc3a728109ad29f199b189dfa719', 'width': 108}, {'height': 162, 'url': '... |
Why does Qwen3-30B-A3B-Instruct-2507 Q8_0 work on my machine and no others come close? | 57 | I'm surprised that having a machine with 8GB of VRAM and 32GB of RAM can run this LLM. Slow, yes, but it runs and gives good answers. Why isn't there another one like it? Why not a DeepSeek R1, for example?
I don't really mind waiting too much if I'm going to get an "accurate" answer.
Obviously, I don't use it regula... | 2025-08-17T18:07:08 | https://www.reddit.com/r/LocalLLaMA/comments/1msy01r/why_does_qwen330ba3binstruct2507_q8_0_work_on_my/ | 9acca9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msy01r | false | null | t3_1msy01r | /r/LocalLLaMA/comments/1msy01r/why_does_qwen330ba3binstruct2507_q8_0_work_on_my/ | false | false | self | 57 | null |
is there vlm to generate html/xaml UI based on text and image prompt? | 1 | uigen-x is pretty good but it doesn't accept image input. | 2025-08-17T17:40:06 | https://www.reddit.com/r/LocalLLaMA/comments/1msx9wf/is_there_vlm_to_generate_htmlxaml_ui_based_on/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msx9wf | false | null | t3_1msx9wf | /r/LocalLLaMA/comments/1msx9wf/is_there_vlm_to_generate_htmlxaml_ui_based_on/ | false | false | self | 1 | null |
Dijkstra defeated: New Shortest Path Algorithm revealed | 0 | Dijkstra, the goto shortest path algorithm (time complexity nlogn) has now been outperformed by a new algorithm by top Chinese University which looks like a hybrid of bellman ford+ dijsktra algorithm.
Paper : https://arxiv.org/abs/2504.17033
Algorithm explained with example : https://youtu.be/rXFtoXzZTF8?si=OiB6luMsl... | 2025-08-17T17:23:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mswujd/dijkstra_defeated_new_shortest_path_algorithm/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mswujd | false | null | t3_1mswujd | /r/LocalLLaMA/comments/1mswujd/dijkstra_defeated_new_shortest_path_algorithm/ | false | false | self | 0 | null |
Detecting Hallucinations in LLM Function Calling with Entropy (Part 2) | 20 | 2025-08-17T17:07:42 | https://www.archgw.com/blogs/detecting-hallucinations-in-llm-function-calling-with-entropy-part-2 | AdditionalWeb107 | archgw.com | 1970-01-01T00:00:00 | 0 | {} | 1mswfiy | false | null | t3_1mswfiy | /r/LocalLLaMA/comments/1mswfiy/detecting_hallucinations_in_llm_function_calling/ | false | false | default | 20 | null | |
Did anyone figure out some working way to get some model inferenced via OpenWebUI with ollama to get to browse or search internet like ChatGPT does? | 2 | I just can't get this to work well, I managed to create some custom hooks for duckduckgo search, but I can't get the model (I use phi4, deepseek, gemma, mistral etc.) to actually use it. I would love to see some working example where people managed to do something like this - basically the missing part in my case is th... | 2025-08-17T17:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mswdmu/did_anyone_figure_out_some_working_way_to_get/ | petr_bena | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mswdmu | false | null | t3_1mswdmu | /r/LocalLLaMA/comments/1mswdmu/did_anyone_figure_out_some_working_way_to_get/ | false | false | self | 2 | null |
I used free API from open router. As a reply to the first message ever, I think I got someone else's answer or model hallucinates really bad! | 0 | I usually run it from local via Ollama, I was just trying open router | 2025-08-17T17:04:02 | https://www.reddit.com/gallery/1mswc1l | irodov4030 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mswc1l | false | null | t3_1mswc1l | /r/LocalLLaMA/comments/1mswc1l/i_used_free_api_from_open_router_as_a_reply_to/ | false | false | 0 | null | |
Ranking the Chinese Open Model Builders | 5 | Fair? https://www.interconnects.ai/p/chinas-top-19-open-model-labs | 2025-08-17T17:02:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mswah3/ranking_the_chinese_open_model_builders/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mswah3 | false | null | t3_1mswah3 | /r/LocalLLaMA/comments/1mswah3/ranking_the_chinese_open_model_builders/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'Mbz7E9OcWL076qzdXldELlI63vCfcu2f6CMxVMJJDDQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mbz7E9OcWL076qzdXldELlI63vCfcu2f6CMxVMJJDDQ.jpeg?width=108&crop=smart&auto=webp&s=0851462e5a8bd8340e8434b77a6c42deb8a3edae', 'width': 108}, {'height': 108, 'url': '... |
Finetuning VLM on Mac | 1 | TLDR;
Does anyone have any python examples on how to finetune a VLM on Mac with M2 with a custom dataset?
This is my first time trying to fine-tune any model, and I think I've hit a wall.
My goal is to recognize ships on an image and return a JSON as response with some specific data from the model.
I'm usi... | 2025-08-17T16:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/1msw6j8/finetuning_vlm_on_mac/ | apoid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msw6j8 | false | null | t3_1msw6j8 | /r/LocalLLaMA/comments/1msw6j8/finetuning_vlm_on_mac/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XIkOFrYo44pBydSIZxEYrGlkT6D41wp3GpCqBQF64OE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XIkOFrYo44pBydSIZxEYrGlkT6D41wp3GpCqBQF64OE.png?width=108&crop=smart&auto=webp&s=f9b3fbb0a6bd5e905c9a57d8bb07d4e5af9c643e', 'width': 108}, {'height': 108, 'url': 'h... |
What happened to the Uncensored models like Dolphin? | 85 | Last year uncensored model like Dolphin(i was able to use it only) was fully uncensored and able to answers are things that are just really creepy and as of today there are open source LLMs that are so much powerful than the dolphin but nobody is releasing those models anymore?
Any specific reason why we are not getti... | 2025-08-17T16:42:30 | https://www.reddit.com/r/LocalLLaMA/comments/1msvs0i/what_happened_to_the_uncensored_models_like/ | krigeta1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msvs0i | false | null | t3_1msvs0i | /r/LocalLLaMA/comments/1msvs0i/what_happened_to_the_uncensored_models_like/ | false | false | self | 85 | null |
Why even if my local llm provide correct tool call response but not actually calls the tool, I get just json | 0 | Model: qwen2.5-coder:14b, Qwen3:14b (I have also tried small llms with tool capability)
| 2025-08-17T16:42:07 | InsideResolve4517 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msvro6 | false | null | t3_1msvro6 | /r/LocalLLaMA/comments/1msvro6/why_even_if_my_local_llm_provide_correct_tool/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'xutuj6lwyljf1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/xutuj6lwyljf1.png?width=108&crop=smart&auto=webp&s=2d230d18445f81005fb9858949b35b5e70b1c603', 'width': 108}, {'height': 255, 'url': 'https://preview.redd.it/xutuj6lwyljf1.png?width=216&crop=smart&auto=we... | |
Analysis of (vacation) picture collections w/ local LLM | 1 | Is there a solution to
- analyse a batch of pictures (perhaps grouping them by EXIF data?),
- if you have a bunch of very similar ones, find the best one
- pick boring, blurry, ugly pictures to delete
… with a locally hosted LLM? | 2025-08-17T16:41:00 | https://www.reddit.com/r/LocalLLaMA/comments/1msvqnh/analysis_of_vacation_picture_collections_w_local/ | Zyj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msvqnh | false | null | t3_1msvqnh | /r/LocalLLaMA/comments/1msvqnh/analysis_of_vacation_picture_collections_w_local/ | false | false | self | 1 | null |
Can you load a custom ChromaDB vector database into LM studionas a Custom Rag? | 3 | I just got done making a Vector base for a ttrpg game rule set. I spent the time to figure out how to intelligently chunk it up but now I'm trying to figure out how I can use LM studio to use it as a custom Rag ? | 2025-08-17T16:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/1msvpcp/can_you_load_a_custom_chromadb_vector_database/ | TheArchivist314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msvpcp | false | null | t3_1msvpcp | /r/LocalLLaMA/comments/1msvpcp/can_you_load_a_custom_chromadb_vector_database/ | false | false | self | 3 | null |
Summarization of long events/conversations, as brief as possible | 1 | I'm relatively new to local AIs and have been playing a text RPG with a custom system prompt that functions as a rulebook and long-term memory. Because it forgets older facts, I wrote a Python script that trims the MESSAGE list and updates the system prompt with a summary of what gets cut. It makes sense to feed the me... | 2025-08-17T16:32:02 | https://www.reddit.com/r/LocalLLaMA/comments/1msvidx/summarization_of_long_eventsconversations_as/ | anche_tu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msvidx | false | null | t3_1msvidx | /r/LocalLLaMA/comments/1msvidx/summarization_of_long_eventsconversations_as/ | false | false | self | 1 | null |
MiniPC Ryzen 7 6800H iGPU 680M LLM benchmark Vulkan backend | 11 | System: MiniPC AceMagic AMD Ryzen 7 [6800H](https://www.techpowerup.com/cpu-specs/ryzen-7-6800h.c2527) with iGPU [680M](https://www.techpowerup.com/gpu-specs/radeon-680m.c3871) and 64GB [DDR5](https://en.wikipedia.org/wiki/DDR5_SDRAM) memory on [Kubuntu](https://kubuntu.org/) 25.10 and [Mesa 25.1.7](https://docs.mesa3d... | 2025-08-17T16:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/1msva3w/minipc_ryzen_7_6800h_igpu_680m_llm_benchmark/ | tabletuser_blogspot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msva3w | false | null | t3_1msva3w | /r/LocalLLaMA/comments/1msva3w/minipc_ryzen_7_6800h_igpu_680m_llm_benchmark/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'uH1uoWnlFXxTf7WtK2ZUfzI4DNPpgrFurp-qOY8iB0c', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/uH1uoWnlFXxTf7WtK2ZUfzI4DNPpgrFurp-qOY8iB0c.jpeg?width=108&crop=smart&auto=webp&s=f698a13f92f0971d24b2e1614828ce7d2b9cad5b', 'width': 108}, {'height': 152, 'url': '... |
GPT-OSS is not good at Brazilian Legal Framework :( | 79 | benchmark: [https://huggingface.co/datasets/celsowm/legalbench.br](https://huggingface.co/datasets/celsowm/legalbench.br) | 2025-08-17T16:17:35 | celsowm | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msv4us | false | null | t3_1msv4us | /r/LocalLLaMA/comments/1msv4us/gptoss_is_not_good_at_brazilian_legal_framework/ | false | false | default | 79 | {'enabled': True, 'images': [{'id': 'uqksokgduljf1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/uqksokgduljf1.png?width=108&crop=smart&auto=webp&s=1a6fc49a7b0bd3adbd7a08c3789309fd7ada3f8a', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/uqksokgduljf1.png?width=216&crop=smart&auto=web... | |
We’ve been working on a new open LLM benchmarked against leading models | 1 | [removed] | 2025-08-17T16:14:51 | https://www.reddit.com/r/LocalLLaMA/comments/1msv2df/weve_been_working_on_a_new_open_llm_benchmarked/ | xiamenlabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msv2df | false | null | t3_1msv2df | /r/LocalLLaMA/comments/1msv2df/weve_been_working_on_a_new_open_llm_benchmarked/ | false | false | self | 1 | null |
Why are LLMs bad at extracting handwritten Devanagari text? Any recommendations for <3B parameter models? | 1 | [removed] | 2025-08-17T16:14:42 | https://www.reddit.com/r/LocalLLaMA/comments/1msv284/why_are_llms_bad_at_extracting_handwritten/ | Available_Bath9893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msv284 | false | null | t3_1msv284 | /r/LocalLLaMA/comments/1msv284/why_are_llms_bad_at_extracting_handwritten/ | false | false | self | 1 | null |
Why are LLMs bad at extracting handwritten Devanagari text? Any recommendations for <3B parameter models? | 1 | [removed] | 2025-08-17T16:13:33 | https://www.reddit.com/r/LocalLLaMA/comments/1msv12z/why_are_llms_bad_at_extracting_handwritten/ | ParfaitFragrant2176 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msv12z | false | null | t3_1msv12z | /r/LocalLLaMA/comments/1msv12z/why_are_llms_bad_at_extracting_handwritten/ | false | false | self | 1 | null |
Why are LLMs bad at extracting handwritten Devanagari text? Any recommendations for <3B parameter models? | 1 | [removed] | 2025-08-17T16:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1msv0dt/why_are_llms_bad_at_extracting_handwritten/ | PureDoughnut6289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msv0dt | false | null | t3_1msv0dt | /r/LocalLLaMA/comments/1msv0dt/why_are_llms_bad_at_extracting_handwritten/ | false | false | self | 1 | null |
Looking for LLM recommendations for a PC build - liberal arts focus over coding | 3 | I'm planning to build a PC next year and want to choose hardware that will run a good local LLM. I'm not a programmer, so I'm looking for models that excel at liberal arts tasks rather than coding.
Specifically, I want an LLM that's strong at:
* Deep literary analysis
* Close reading of complex fiction and non-fictio... | 2025-08-17T15:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/1msujl3/looking_for_llm_recommendations_for_a_pc_build/ | JayoTree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msujl3 | false | null | t3_1msujl3 | /r/LocalLLaMA/comments/1msujl3/looking_for_llm_recommendations_for_a_pc_build/ | false | false | self | 3 | null |
Experimental top-down shooter + LLM agents with Autogen | 0 | Hey Reddit! I’m developing an **experimental top-down shooter** where NPCs aren’t hardcoded, but run on **LLM agents with Autogen**. They react in unexpected ways. If you’d like to join in or share ideas, you’re more than welcome. | 2025-08-17T15:28:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mstv6b/experimental_topdown_shooter_llm_agents_with/ | LeadingFun1849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mstv6b | false | null | t3_1mstv6b | /r/LocalLLaMA/comments/1mstv6b/experimental_topdown_shooter_llm_agents_with/ | false | false | self | 0 | null |
qwen3 on self hosted is not the same | 0 | hi all,
i am playing with qwen3-235B-A22B-2507 based on super experience on official web. but after trying to deploy it on my pc i just recognized that it wont be that super and it makes incredible amounth od mistakes on my language. on official pages it is excelent, but on on permis solution it is like 5 years old c... | 2025-08-17T15:20:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mstnx9/qwen3_on_self_hosted_is_not_the_same/ | Disastrous-Tap-2254 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mstnx9 | false | null | t3_1mstnx9 | /r/LocalLLaMA/comments/1mstnx9/qwen3_on_self_hosted_is_not_the_same/ | false | false | self | 0 | null |
Is Proart P16 (RTX 4060 8GB, 32GB) good to run local models for AI apps development? | 0 | I'm seeing a good deal on last year's Proart P16. I do AI apps development as part of my work (e.g. Chatbots, AI Data Analysis). As part of my work I need to develop apps meant to run with local models, so to me being able to run models locally for development would mean less hassle than having to fire a cloud API serv... | 2025-08-17T15:12:53 | https://www.reddit.com/r/LocalLLaMA/comments/1msth60/is_proart_p16_rtx_4060_8gb_32gb_good_to_run_local/ | ajawadmahmoud | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msth60 | false | null | t3_1msth60 | /r/LocalLLaMA/comments/1msth60/is_proart_p16_rtx_4060_8gb_32gb_good_to_run_local/ | false | false | self | 0 | null |
Claude distill | 0 | Anyone tried distilling claude on a local llm smtg like qwen code or deepseek to up its coding capabilities? I mean qwen3 code is already breaking a lot of benchmarks but wont distilling a powerful proprietary model make it even better ? | 2025-08-17T14:41:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mssp1i/claude_distill/ | superNova-best | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mssp1i | false | null | t3_1mssp1i | /r/LocalLLaMA/comments/1mssp1i/claude_distill/ | false | false | self | 0 | null |
After the “ChatGPT-5 personality fiasco”… maybe it’s time for Theatrical AIs? | 0 | I don’t know if you noticed, but with the recent shifts in ChatGPT-5’s personality, a lot of people realized something important:
our “AI assistants” can change overnight, without warning.
That made me think:
What if the real next step isn’t bigger models, but stable personas?
AIs that are consistent, predictable... | 2025-08-17T14:33:37 | https://www.reddit.com/r/LocalLLaMA/comments/1msshf6/after_the_chatgpt5_personality_fiasco_maybe_its/ | Serious_Seesaw_4479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msshf6 | false | null | t3_1msshf6 | /r/LocalLLaMA/comments/1msshf6/after_the_chatgpt5_personality_fiasco_maybe_its/ | false | false | self | 0 | null |
2x W7900 or 1x RTX Pro 6000 Blackwell? | 3 | Looking to upgrade my rig. I’m running into the limits of 48GB of VRAM and need more. The MoE CPU offloading in Llama.cpp help but as my project grows I will eventually exhaust that too.
To be clear: this is for inference, not training. I use llama.cpp and soon vLLm. I run mostly coding models at high context, but I d... | 2025-08-17T14:32:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mssgml/2x_w7900_or_1x_rtx_pro_6000_blackwell/ | Thrumpwart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mssgml | false | null | t3_1mssgml | /r/LocalLLaMA/comments/1mssgml/2x_w7900_or_1x_rtx_pro_6000_blackwell/ | false | false | self | 3 | null |
Building a Local AI Companion - Struggling with Prompt Engineering | 0 | Hey everyone!
I've been working on a local version of apps like using **dolphin-mixtral:8x7b** and **ComfyUI**. The idea is pretty straightforward - chat with an AI character and get a new image generated for each response to make it feel more interactive.
**How it works:** I'm running two separate instances of the s... | 2025-08-17T14:19:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mss55o/building_a_local_ai_companion_struggling_with/ | Significant_Smell_87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mss55o | false | null | t3_1mss55o | /r/LocalLLaMA/comments/1mss55o/building_a_local_ai_companion_struggling_with/ | false | false | self | 0 | null |
Looking for resources on frameworks, pipelines, and scaffolding for Local LLaMA in a private company setup | 1 | [removed] | 2025-08-17T14:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mss496/looking_for_resources_on_frameworks_pipelines_and/ | DustinKli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mss496 | false | null | t3_1mss496 | /r/LocalLLaMA/comments/1mss496/looking_for_resources_on_frameworks_pipelines_and/ | false | false | self | 1 | null |
Securing and Observing MCP Servers in Production | 3 | Building with **Model Context Protocol (MCP)**? Cool, now here’s the hard part: making it **secure, reliable, and observable** in production. In my new article, I walk through step-by-step practices: **structured logging**, Moesif & New Relic monitoring, **permission models**, and running audits with **MCPSafetyScanner... | 2025-08-17T14:18:16 | https://glama.ai/blog/2025-08-17-monitoring-and-security-for-mcp-based-ai-systems | No-Abies7108 | glama.ai | 1970-01-01T00:00:00 | 0 | {} | 1mss3us | false | null | t3_1mss3us | /r/LocalLLaMA/comments/1mss3us/securing_and_observing_mcp_servers_in_production/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'MOcxvduEb_5FhLeszm41Cc6Zs_nQlMs6Kli8iZmRCP8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/MOcxvduEb_5FhLeszm41Cc6Zs_nQlMs6Kli8iZmRCP8.png?width=108&crop=smart&auto=webp&s=32859af516edac148f9724760439329a12f14202', 'width': 108}, {'height': 113, 'url': 'h... |
WE NEED OPEN-SOURCE NANO-BANANA😭 | 80 | 2025-08-17T14:17:27 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mss35h | false | null | t3_1mss35h | /r/LocalLLaMA/comments/1mss35h/we_need_opensource_nanobanana/ | false | false | default | 80 | {'enabled': True, 'images': [{'id': '915jijh29ljf1', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/915jijh29ljf1.png?width=108&crop=smart&auto=webp&s=ee589197dd093e9b099a0076b538d48ec2201b37', 'width': 108}, {'height': 246, 'url': 'https://preview.redd.it/915jijh29ljf1.png?width=216&crop=smart&auto=we... | ||
LL3M: Large Language 3D Modelers | 13 | 2025-08-17T14:15:10 | https://threedle.github.io/ll3m/ | codexauthor | threedle.github.io | 1970-01-01T00:00:00 | 0 | {} | 1mss11l | false | null | t3_1mss11l | /r/LocalLLaMA/comments/1mss11l/ll3m_large_language_3d_modelers/ | false | false | default | 13 | null | |
[ Removed by Reddit ] | 0 | [ Removed by Reddit on account of violating the [content policy](/help/contentpolicy). ] | 2025-08-17T14:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/1msrvk6/removed_by_reddit/ | Significant_Smell_87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msrvk6 | false | null | t3_1msrvk6 | /r/LocalLLaMA/comments/1msrvk6/removed_by_reddit/ | false | false | self | 0 | null |
Any benchmark for dual Nvidia rtx 6000 pro Blackwell? | 5 | As per title, can't seem to find any. I'm interested in both vllm and sglang. Anyone with this setup willing to share? Thanks in advance! | 2025-08-17T14:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/1msrtea/any_benchmark_for_dual_nvidia_rtx_6000_pro/ | Reasonable_Friend_77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msrtea | false | null | t3_1msrtea | /r/LocalLLaMA/comments/1msrtea/any_benchmark_for_dual_nvidia_rtx_6000_pro/ | false | false | self | 5 | null |
Wow anthropic and Google losing coding share bc of qwen 3 coder | 612 | 2025-08-17T13:59:47 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msrnqq | false | null | t3_1msrnqq | /r/LocalLLaMA/comments/1msrnqq/wow_anthropic_and_google_losing_coding_share_bc/ | false | false | default | 612 | {'enabled': True, 'images': [{'id': 'rwehyliy5ljf1', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/rwehyliy5ljf1.jpeg?width=108&crop=smart&auto=webp&s=98a97aa26484ba41aea3330a6ba4ba9621fd0691', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/rwehyliy5ljf1.jpeg?width=216&crop=smart&auto=w... | ||
Something that runs locally and can work on an entire codebase? | 0 | I have next to no technical understanding how llms work. I've found a way to run a chat offline with [jan.ai](http://jan.ai), and I use it to generate code snippets here and there and it's very handy.
I was wondering if something similar exists where I could feed it an entire project somehow (i.e. directory hierarchy... | 2025-08-17T13:47:12 | https://www.reddit.com/r/LocalLLaMA/comments/1msrdfp/something_that_runs_locally_and_can_work_on_an/ | SashaFernando61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msrdfp | false | null | t3_1msrdfp | /r/LocalLLaMA/comments/1msrdfp/something_that_runs_locally_and_can_work_on_an/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '-ctwWkN6rHGc2V6GtsAmk-HLdFHSpEj4U0gSuMMDRmw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-ctwWkN6rHGc2V6GtsAmk-HLdFHSpEj4U0gSuMMDRmw.png?width=108&crop=smart&auto=webp&s=316ac2c235dbf757adc6d57077bbf14ff212c7fd', 'width': 108}, {'height': 121, 'url': 'h... |
To all vibe coders I present | 1,650 | 2025-08-17T13:40:07 | https://v.redd.it/eckuwlog2ljf1 | theundertakeer | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1msr7j8 | false | {'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/eckuwlog2ljf1/DASHPlaylist.mpd?a=1758030024%2CNzc1MzQ4N2NkMTc5MTRkZDAzMTAwYmE1YWI3Nzc4OTZmNjA2YjQzODAzMzIwM2ExZjg4NjMxNDAyZDQ2M2IxNA%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/eckuwlog2ljf1/DASH_270.mp4?source=fallback', 'has... | t3_1msr7j8 | /r/LocalLLaMA/comments/1msr7j8/to_all_vibe_coders_i_present/ | false | false | 1,650 | {'enabled': False, 'images': [{'id': 'dXZiNzRocGcybGpmMeA17HlDZqcxGH0WPMXNGATdmxTbHU45E1nSLLgU5DlN', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/dXZiNzRocGcybGpmMeA17HlDZqcxGH0WPMXNGATdmxTbHU45E1nSLLgU5DlN.png?width=108&crop=smart&format=pjpg&auto=webp&s=33e0bc694ea262d1d5f685cf11ad7e0a1142f... | ||
AI News: And You Missed Them! (That Could Change Everything) | 0 | 👉👉👉 [https://youtu.be/rn7anftaDCc](https://youtu.be/rn7anftaDCc) | 2025-08-17T13:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/1msqz4w/ai_news_and_you_missed_them_that_could_change/ | bipin_25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msqz4w | false | null | t3_1msqz4w | /r/LocalLLaMA/comments/1msqz4w/ai_news_and_you_missed_them_that_could_change/ | false | false | self | 0 | null |
Basically, are the companies that expose LLM as an API making money? | 5 | There are tons of local models popping up, and I've learned that it's basically expensive to run them locally. (Hardware to set up a local LLM, handle 3 simultaneous requests, etc. is probably pretty expensive - I'm sure everyone's definition of expensive is pretty different, but it's probably much cheaper to use an AP... | 2025-08-17T13:20:09 | https://www.reddit.com/r/LocalLLaMA/comments/1msqr0y/basically_are_the_companies_that_expose_llm_as_an/ | TGoddessana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msqr0y | false | null | t3_1msqr0y | /r/LocalLLaMA/comments/1msqr0y/basically_are_the_companies_that_expose_llm_as_an/ | false | false | self | 5 | null |
Just launched 🚀 Free OCR API for developers – docubits(FreeXtract Api) | 0 | Hey everyone,
I’ve been working on an OCR API called **FreeXtract**, and today I launched it. 🎉
🔹 **What it does:**
* Extracts text from images (JPG, PNG, JPEG)
* Fast & secure API
* Works well for invoices, scanned documents, ID cards, etc.
🔹 **Why I built it:**
Most OCR APIs are either expensive or hard to ... | 2025-08-17T13:15:44 | https://www.reddit.com/r/LocalLLaMA/comments/1msqnii/just_launched_free_ocr_api_for_developers/ | Low-Sun-6166 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msqnii | false | null | t3_1msqnii | /r/LocalLLaMA/comments/1msqnii/just_launched_free_ocr_api_for_developers/ | false | false | self | 0 | null |
VoltAPI - AI API | 0 | 🚀 Free & paid Discord AI API — chat completions with GPT-4.1, Opus, Claude Sonnet-4, “GPT-5” (where available), and more → join: [https://discord.gg/fwrb6zJm9n](https://discord.gg/fwrb6zJm9n)
(and can be used for roocode/cline)
documentation of this API > [https://docs.voltapi.online/](https://docs.voltapi.online/... | 2025-08-17T12:30:13 | https://www.reddit.com/r/LocalLLaMA/comments/1mspnb9/voltapi_ai_api/ | PublicLocal1971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mspnb9 | false | null | t3_1mspnb9 | /r/LocalLLaMA/comments/1mspnb9/voltapi_ai_api/ | false | false | self | 0 | null |
Houston, I think we have a problem. | 0 | First time I've loaded this Gemma 3 27B model in LM Studio, and this is the very first response I got.
I'm pretty sure I broke it's brain. lol
https://preview.redd.it/e7z98kn8mkjf1.png?width=1063&format=png&auto=webp&s=3aebf5fd812e8e41b67402ae37d64d2f335c659c
| 2025-08-17T12:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/1msp8f0/houston_i_think_we_have_a_problem/ | Opening_Store_8863 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msp8f0 | false | null | t3_1msp8f0 | /r/LocalLLaMA/comments/1msp8f0/houston_i_think_we_have_a_problem/ | false | false | 0 | null | |
Creating JSON following exact Pydantic Schema from prompt of user that will never fail? | 0 | Hi all,
I created a system based on Instructor (a Python library for structured output) where we can input a prompt describing a person for a scenario. For example: “An old man named John who is 62 years old and has dementia.” The system then extracts all the information that makes sense for this schema, and it cr... | 2025-08-17T12:00:46 | https://www.reddit.com/r/LocalLLaMA/comments/1msp1x7/creating_json_following_exact_pydantic_schema/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msp1x7 | false | null | t3_1msp1x7 | /r/LocalLLaMA/comments/1msp1x7/creating_json_following_exact_pydantic_schema/ | false | false | self | 0 | null |
DGX Sparx Alternatives for Same Price (Recommendation Request) | 4 | Ages ago I put in a pre-order for DGX Sparx with intention of starting to play with some local LLMs more. I've presently got a lowly 3060 so not a lot is going to be used in there.
Considering the delays and pushbacks I'm now looking at cancelling and buying an alternative option. I don't need RAM/motherboard/etc... | 2025-08-17T11:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1msotst/dgx_sparx_alternatives_for_same_price/ | Key_Meringue_4691 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1msotst | false | null | t3_1msotst | /r/LocalLLaMA/comments/1msotst/dgx_sparx_alternatives_for_same_price/ | false | false | self | 4 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.